Configuration
Environment variables and settings for each Gryt service
Web Client
The web client is intended for local development and contributing to Gryt. Self-hosting it in production won't work with the centralized auth at auth.gryt.chat because your domain won't be registered as a redirect URI. Use the Gryt desktop app or app.gryt.chat to connect to servers. If you self-host your own Keycloak, the web client will work, but your users will have separate identities that don't carry across other Gryt servers.
Optional development settings in packages/client/.env:
VITE_AUDIO_SAMPLE_RATE=48000
VITE_AUDIO_BUFFER_SIZE=256
Audio settings (volume, noise gate, device selection) are configured through the UI at runtime.
Signaling Server
Create a .env file in packages/server/ based on example.env:
PORT=5000
# SFU websocket (internal, container-to-container)
SFU_WS_HOST="ws://sfu:5005"
# SFU websocket that browsers connect to (public URL).
# Comma-separated for multi-network (client auto-selects fastest).
SFU_PUBLIC_HOST="wss://sfu.example.com"
STUN_SERVERS="stun:stun.l.google.com:19302"
SERVER_NAME="My Brand New Server"
CORS_ORIGIN="http://127.0.0.1:15738,https://app.gryt.chat"
# Optional: server-to-SFU shared secret (NOT a user join password)
# SERVER_PASSWORD="your-internal-sfu-shared-secret"
#
# If you set SERVER_PASSWORD, keep it stable. Changing it may require restarting the SFU,
# because the SFU caches the password per server ID.
# Invite brute-force protection (optional)
# Two-tier: per (IP + user) and per IP. Cooldowns escalate exponentially.
# SERVER_INVITE_MAX_RETRIES=8 # invalid invites before lockout (per IP+user)
# SERVER_INVITE_RETRY_WINDOW_MS=300000 # sliding window (5 min)
# SERVER_INVITE_RETRY_COOLDOWN_MS=60000 # base cooldown (1 min)
# SERVER_INVITE_MAX_COOLDOWN_MS=3600000 # escalation cap (1 hour)
# SERVER_INVITE_IP_MAX_RETRIES=20 # invalid invites before lockout (per IP)
# Authentication
GRYT_AUTH_API=https://auth.gryt.chat
# S3 / Object storage
S3_REGION=auto
S3_ACCESS_KEY_ID=
S3_SECRET_ACCESS_KEY=
S3_BUCKET=gryt-bucket # must be 3–63 characters (S3 naming rules)
S3_FORCE_PATH_STYLE=false
NODE_ENV=development
DEBUG=gryt:*
Additional signaling server options
# Limit voice seats explicitly (otherwise derived from SFU UDP port range)
VOICE_MAX_USERS=100
# Refresh token lifetime (days)
REFRESH_TOKEN_TTL_DAYS=7
# Advanced: disable dependencies (mostly useful in development)
DISABLE_S3=true # disables uploads + icon/avatar endpoints
# Advanced: chat history cache TTL (ms)
MESSAGE_CACHE_TTL_MS=30000
Changing the server owner (CLI)
Servers do not support an env-based owner override. Ownership is claimed by the first user to join a fresh server.
If you need to change the owner later (recovery, transfers), run the one-shot admin CLI inside the server container:
# Docker Compose
docker compose exec server node admin-setOwner.js --grytUserId <keycloak_sub>
# Or plain Docker
docker exec gryt-server node admin-setOwner.js --grytUserId <keycloak_sub>
<keycloak_sub> is the user's Gryt ID (OIDC subject). You can find it in the desktop app or web client under Settings → Profile, at the bottom of the page. The target user should already be a member of the server (i.e. has joined before); otherwise they may be unable to join an invite-only server.
MinIO (self-hosted S3)
S3_BUCKET must be 3–63 characters (standard S3 bucket naming rules). A name like "nt" will be rejected — use something like "gryt-nt" instead.
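As an illustration, a quick shell check of those rules might look like this (`valid_bucket` is a hypothetical helper, not part of Gryt; it covers only the basic length/character rules):

```shell
# Hypothetical helper: checks a bucket name against the basic rules above
# (3-63 chars, lowercase letters/digits/dots/hyphens, alphanumeric ends).
valid_bucket() {
  [ "${#1}" -ge 3 ] && [ "${#1}" -le 63 ] &&
    printf '%s\n' "$1" | grep -Eq '^[a-z0-9][a-z0-9.-]*[a-z0-9]$'
}

valid_bucket nt      || echo "nt rejected (too short)"
valid_bucket gryt-nt && echo "gryt-nt accepted"
```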
S3_ENDPOINT=http://localhost:9000
S3_ACCESS_KEY_ID=admin
S3_SECRET_ACCESS_KEY=change-me-please
S3_BUCKET=gryt-bucket
S3_FORCE_PATH_STYLE=true
docker run -d --name minio \
-p 9000:9000 -p 9001:9001 \
-v /srv/minio/data:/data \
-e MINIO_ROOT_USER=admin \
-e MINIO_ROOT_PASSWORD=change-me-please \
quay.io/minio/minio server /data --console-address ":9001"
docker run --rm --network host \
-e MC_HOST_minio=http://admin:change-me-please@localhost:9000 \
quay.io/minio/mc mb --ignore-existing minio/gryt-bucket
SFU Server
Create a .env file in packages/sfu/:
PORT=5005
STUN_SERVERS=stun:stun.l.google.com:19302,stun:stun1.l.google.com:19302
# Recommended: run WebRTC over a single UDP port (better on restrictive networks)
ICE_UDP_MUX_PORT=443
# Alternative (if not using ICE_UDP_MUX_PORT): pin a UDP port range
ICE_UDP_PORT_MIN=10000
ICE_UDP_PORT_MAX=10019
LOG_LEVEL=info
MAX_CONNECTIONS=1000
For production, also set ICE_ADVERTISE_IP if the SFU host is behind NAT.
LAN / local network
If the SFU is on the same network as your users (e.g. a LAN party, office, or home server),
add the machine's local IP to ICE_ADVERTISE_IP. This lets the SFU advertise a direct LAN
candidate so clients connect instantly over the local network instead of routing through an
external STUN server:
ICE_ADVERTISE_IP=192.168.1.100
For mixed setups (LAN + internet), use comma-separated values for both the SFU and the signaling server — the client pings each endpoint and picks the fastest:
# SFU (packages/sfu/.env)
ICE_ADVERTISE_IP=203.0.113.10,192.168.1.100
# Signaling server (packages/server/.env or docker .env)
SFU_PUBLIC_HOST=wss://sfu.example.com,ws://192.168.1.100:5005
Authentication
Authentication is centrally hosted at auth.gryt.chat. No additional configuration is needed by default.
To require Gryt accounts on your server:
GRYT_AUTH_MODE=required
GRYT_OIDC_ISSUER=https://auth.gryt.chat/realms/gryt
GRYT_OIDC_AUDIENCE=gryt-web
Self-hosted authentication (bring your own Keycloak)
If you want to run a completely separate Gryt deployment with your own authentication provider, you can self-host Keycloak and point your server/client to your issuer.
A guide for self-hosting authentication is planned.
Locking the server
To reject all join attempts (temporarily lock the server):
GRYT_AUTH_MODE=disabled # server locked — all joins rejected
See the Server docs for more on auth integration.
STUN + SFU UDP ports (production)
This project does not require a TURN server. Instead:
- Configure STUN servers (STUN_SERVERS)
- Either:
  - Recommended: use ICE UDP mux on a single port (ICE_UDP_MUX_PORT, typically 443) and open UDP 443
  - Or pin a dedicated SFU UDP port range (ICE_UDP_PORT_MIN/ICE_UDP_PORT_MAX) and open that range
If ICE_UDP_MUX_PORT is set, the SFU can carry ICE + DTLS + SRTP over that single UDP port, so you typically don't need to expose the high-port UDP range.
STUN_SERVERS=stun:stun.l.google.com:19302,stun:stun1.l.google.com:19302
# Recommended: single-port UDP for WebRTC
ICE_UDP_MUX_PORT=443
# Alternative: high-port UDP range
ICE_UDP_PORT_MIN=10000
ICE_UDP_PORT_MAX=10019
Cloudflare Tunnel / proxy and WebRTC
Cloudflare Tunnel (and Cloudflare’s “orange cloud” proxying) can be used for the SFU WebSocket (SFU_PUBLIC_HOST over wss://...), but WebRTC media is still direct UDP to the SFU host.
This is a good way to get TLS/WSS for production without running your own HTTPS reverse proxy for the SFU WebSocket.
If you're tunneling the SFU WebSocket, make sure you still:
- Set the DNS record for the SFU hostname to Proxied (orange cloud) — the WebSocket needs to go through the tunnel
- Open the SFU UDP media port(s) on the host (recommended: ICE_UDP_MUX_PORT/udp, typically 443/udp)
- Set ICE_ADVERTISE_IP to your real public IPv4/IPv6 (not a Cloudflare anycast IP)
See Cloudflare Tunnel for a full walkthrough.
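On a typical Linux host, opening the media port(s) from the checklist above might look like this (ufw is just one example firewall; adapt the commands to whatever your host uses):

```shell
# Single-port mux setup: open UDP 443 for ICE + DTLS + SRTP
sudo ufw allow 443/udp

# Or, if using a pinned port range instead:
sudo ufw allow 10000:10019/udp
```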
Docker Compose stacks
| Stack | Path | Use case |
|---|---|---|
| Production / self-host | ops/deploy/compose/prod.yml | Recommended self-hosting — pre-built GHCR images |
| Cloudflare Tunnel | ops/deploy/host/compose.yml | Hosting with Tunnel + S3 |
| Dev deps | ops/deploy/compose/dev-deps.yml | MinIO for local dev |
| Dev app | ops/deploy/compose/dev.yml | SFU + servers + client (builds from source) |
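For example, the production stack can be started from pre-built images like this (Compose v2 syntax assumed; check the compose file itself for service names and required env vars):

```shell
docker compose -f ops/deploy/compose/prod.yml up -d
docker compose -f ops/deploy/compose/prod.yml ps
```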
Health checks
curl http://localhost:5000/health # server
curl http://localhost:5005/health # SFU
Access control with CORS
The CORS_ORIGIN environment variable controls which web origins can talk to your server.
This acts as a simple access gate: only clients loaded from a listed origin will be able to connect.
Common origins
| Origin | What it is |
|---|---|
| http://127.0.0.1:15738 | Gryt desktop app (Electron) — always include this |
| https://app.gryt.chat | Official hosted web client |
| http://localhost:3666 | Local web client (development only) |
A typical production setup:
CORS_ORIGIN="http://127.0.0.1:15738,https://app.gryt.chat"
Environment isolation
You can separate dev and production servers by listing different web client origins:
# Production server
CORS_ORIGIN="http://127.0.0.1:15738,https://app.gryt.chat"
# Dev server — only accepts the local dev client
CORS_ORIGIN="http://127.0.0.1:15738,http://localhost:3777"
A dev web client on localhost:3777 cannot connect to a production server that only allows
https://app.gryt.chat, and vice versa. The desktop app (127.0.0.1:15738) is typically
included in both since it connects to whichever server the user selects.
CORS is enforced by the browser and by the server's Socket.IO and REST middleware.
It is not a security boundary against non-browser clients — someone could still connect
with curl or a modified client. For real authentication, enable
Gryt Auth (GRYT_AUTH_MODE=required).
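One rough way to observe the gate from the command line (this assumes a server running locally on port 5000; the exact headers returned depend on the server's CORS middleware):

```shell
# An allowed origin should be echoed back in Access-Control-Allow-Origin;
# a disallowed one should not.
curl -si http://localhost:5000/health -H "Origin: https://app.gryt.chat" \
  | grep -i 'access-control-allow-origin' || echo "origin not allowed"
```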
Dynamic server settings
The following settings are managed in the UI at Server Settings → Overview and take effect immediately without a server restart:
- Display name and description
- Server icon
- Upload limits (avatar, file, emoji)
- Profanity filter (off / flag / censor / block) and censor style
- System messages channel
- LAN open join — allow anyone on the local network to join without an invite code (see below)
LAN open join
Enable Allow anyone on LAN to join in Server Settings → Overview. When enabled, clients connecting from private IP ranges (10.x.x.x, 172.16–31.x.x, 192.168.x.x, localhost, IPv6 link-local/unique-local) can join without an invite code. Connections from public IPs still require an invite.
This is ideal for LAN parties and local events where you want frictionless access for everyone on the network while keeping the server invite-only for remote users.
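As a sketch of the range check described above (an illustration only, IPv4 only for brevity — not the server's actual implementation):

```shell
# Hypothetical check: is an IPv4 address in a private (LAN) range?
is_lan_ip() {
  case "$1" in
    10.*|192.168.*|127.*) return 0 ;;                    # RFC 1918 + loopback
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) return 0 ;;  # 172.16.0.0/12
    *) return 1 ;;
  esac
}

is_lan_ip 192.168.1.42 && echo "LAN client: no invite needed"
is_lan_ip 203.0.113.7  || echo "public client: invite required"
```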
Security best practices
- Never commit .env files to version control
- Use HTTPS/WSS in production
- Configure CORS origins to match your domain
- Use GRYT_AUTH_MODE=required if you need account-based access control
- Use invite links to control membership (Server Settings → Invites)
- For LAN events, enable LAN open join in Server Settings instead of creating mass invite codes
- Run containers as non-root users