Docker Compose
Self-host Gryt with a single docker compose up — no cloning required
The fastest way to self-host Gryt. Everything runs from pre-built images published to GitHub Container Registry — no need to clone any repos or build anything.
What's included
| Service | Image | Purpose | Required? |
|---|---|---|---|
| Server | ghcr.io/gryt-chat/server | Signaling, chat, file uploads | Yes |
| SFU | ghcr.io/gryt-chat/sfu | WebRTC media forwarding | Yes |
| MinIO | minio/minio | S3-compatible file storage | Yes |
| Image Worker | ghcr.io/gryt-chat/image-worker | Background image compression + thumbnailing | Optional (recommended) |
| Client | ghcr.io/gryt-chat/client | Web UI (React + Nginx) | Dev / local only |
| Prometheus | prom/prometheus | Metrics collection | Optional |
| Grafana | grafana/grafana | Metrics dashboards | Optional |
Most users connect via the Gryt desktop app (available for Linux, macOS, and Windows).
Web client and authentication
The web client Docker image is included for local development and contributing to Gryt. It is not recommended for production self-hosting because the OIDC login flow requires your domain to be registered as a redirect URI with the centralized auth service at auth.gryt.chat — and only official Gryt domains are whitelisted.
If you need a web-based interface, use the hosted client at app.gryt.chat, or connect with the desktop app.
You can self-host your own Keycloak, but that means your users will have separate accounts that don't work on other Gryt servers (no cross-server identity).
Quick start
Download the compose file and example env:

```shell
mkdir gryt && cd gryt
curl -Lo docker-compose.yml https://raw.githubusercontent.com/Gryt-chat/gryt/main/ops/deploy/compose/prod.yml
curl -Lo .env https://raw.githubusercontent.com/Gryt-chat/gryt/main/ops/deploy/compose/.env.example
```

Or, with wget:

```shell
mkdir gryt && cd gryt
wget -O docker-compose.yml https://raw.githubusercontent.com/Gryt-chat/gryt/main/ops/deploy/compose/prod.yml
wget -O .env https://raw.githubusercontent.com/Gryt-chat/gryt/main/ops/deploy/compose/.env.example
```

Edit the .env file

Open .env in your editor and configure at minimum:
```shell
# Give your server a name
SERVER_NAME=My Gryt Server

# Set a real secret in production
JWT_SECRET=<run: openssl rand -base64 48>

# Allowed origins (desktop app + hosted web client)
CORS_ORIGIN=http://127.0.0.1:15738,https://app.gryt.chat
```

Start the server
```shell
docker compose up -d
```

This starts the core services: Server, SFU, and MinIO. Connect with the Gryt desktop app or the hosted web client at app.gryt.chat.
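If you prefer to script the `JWT_SECRET` edit rather than paste it by hand, a minimal sketch, assuming bash and GNU sed (BSD/macOS sed takes `-i ''`); the fallback line only creates a placeholder `.env` if one is missing:

```shell
# Sketch: swap the placeholder JWT_SECRET in .env for a freshly generated value.
test -f .env || echo 'JWT_SECRET=change-me-in-production' > .env  # demo fallback
SECRET="$(openssl rand -base64 48)"
# '|' is a safe sed delimiter here: base64 output never contains '|'
sed -i "s|^JWT_SECRET=.*|JWT_SECRET=${SECRET}|" .env
grep -c '^JWT_SECRET=' .env   # should print 1
```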
Invite-only: the first user to join a brand-new server becomes the owner/admin automatically. After that, the server is invite-only. Create invite links in Server settings → Invites and share them.
Architecture
Configuration reference
Image versions
Images default to latest. Pin a specific version for reproducible deploys:

```shell
SERVER_VERSION=x.y.z
SFU_VERSION=x.y.z
IMAGE_WORKER_VERSION=x.y.z  # only if running image-worker
```

Browse available tags at github.com/orgs/gryt-chat/packages.
Image Worker
| Variable | Default | Description |
|---|---|---|
| IMAGE_WORKER_CONCURRENCY | 2 | Max images processed in parallel |
| IMAGE_WORKER_POLL_MS | 1000 | Poll interval for queued jobs (ms) |
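As a worked example, a server handling heavier image traffic might raise concurrency and poll more often (illustrative values, not tuned recommendations):

```shell
# .env — illustrative image-worker tuning
IMAGE_WORKER_CONCURRENCY=4   # process up to 4 images in parallel
IMAGE_WORKER_POLL_MS=500     # check for queued jobs twice per second
```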
Ports
| Variable | Default | Description |
|---|---|---|
| SERVER_PORT | 5000 | Signaling API + WebSocket |
| SFU_PORT | 5005 | SFU WebSocket |
| ICE_UDP_MUX_PORT | 443 | Recommended: run WebRTC over a single UDP port (open UDP 443) |
| SFU_UDP_MIN | 10000 | WebRTC UDP range start (if not using ICE_UDP_MUX_PORT) |
| SFU_UDP_MAX | 10019 | WebRTC UDP range end (if not using ICE_UDP_MUX_PORT) |
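On a host running ufw, the corresponding firewall openings might look like this (a sketch; adapt to your firewall and to whichever UDP strategy you chose):

```shell
sudo ufw allow 5000/tcp    # signaling API + WebSocket
sudo ufw allow 5005/tcp    # SFU WebSocket
sudo ufw allow 443/udp     # single-port WebRTC (ICE_UDP_MUX_PORT)
# ...or, if using the high-port range instead of the UDP mux:
# sudo ufw allow 10000:10019/udp
```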
Server
| Variable | Default | Description |
|---|---|---|
| SERVER_NAME | My Gryt Server | Display name in the server browser |
| SERVER_DESCRIPTION | A Gryt voice chat server | Server description |
| SERVER_PASSWORD | (empty) | Optional server↔SFU shared secret (not a user join password) |
| SERVER_INVITE_MAX_RETRIES | 8 | Invalid invite attempts before lockout (per IP+user) |
| SERVER_INVITE_RETRY_WINDOW_MS | 300000 | Sliding window for attempt counting (5 min) |
| SERVER_INVITE_RETRY_COOLDOWN_MS | 60000 | Base cooldown after lockout (1 min) |
| SERVER_INVITE_MAX_COOLDOWN_MS | 3600000 | Cooldown escalation cap (1 hour) |
| SERVER_INVITE_IP_MAX_RETRIES | 20 | Per-IP invalid invite attempts before lockout |
| VOICE_MAX_USERS | (auto) | Optional voice seat limit override |
| REFRESH_TOKEN_TTL_DAYS | 7 | Refresh token lifetime (days) |
| JWT_SECRET | change-me-in-production | Session token secret |
| CORS_ORIGIN | see below | Allowed origins (comma-separated). Always include http://127.0.0.1:15738 (desktop app). Include https://app.gryt.chat if you want users to connect via the hosted web client. |
If you set SERVER_PASSWORD, keep it stable. Changing it may require restarting the SFU, because the SFU caches the password per server ID.
Changing the server owner (CLI)
The first user to join a brand-new server becomes the owner/admin automatically.
To change the owner later, run:
```shell
docker compose exec server node admin-setOwner.js --grytUserId <keycloak_sub>
```

`<keycloak_sub>` is your Gryt user ID. You can find it in the desktop app or web client by going to Settings → Profile and scrolling to the bottom.
Authentication
| Variable | Default | Description |
|---|---|---|
| GRYT_AUTH_MODE | required | required (users sign in with a Gryt account) or disabled (server locked — all joins rejected) |
| GRYT_OIDC_ISSUER | https://auth.gryt.chat/realms/gryt | OIDC issuer URL |
| GRYT_OIDC_AUDIENCE | gryt-web | OIDC audience |
WebRTC / NAT
| Variable | Default | Description |
|---|---|---|
| STUN_SERVERS | Google STUN | Comma-separated STUN URIs |
| SFU_PUBLIC_HOST | ws://localhost:5005 | Public SFU URL(s) for browsers (comma-separated for multi-network) |
| ICE_ADVERTISE_IP | (auto) | Public IP(s) to advertise in ICE candidates (comma-separated for multi-network) |
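For example, setting `STUN_SERVERS` explicitly to the widely used public Google STUN endpoints (the default already uses Google STUN, so this is only needed if you want different servers):

```shell
# .env — comma-separated STUN URIs
STUN_SERVERS=stun:stun.l.google.com:19302,stun:stun1.l.google.com:19302
```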
Storage
| Variable | Default | Description |
|---|---|---|
| MINIO_ROOT_USER | minioadmin | MinIO admin username |
| MINIO_ROOT_PASSWORD | minioadmin | MinIO admin password |
| S3_BUCKET | gryt | Bucket for file uploads |
Production checklist
Before going public:

- Set a strong `JWT_SECRET` (generate with `openssl rand -base64 48`)
- Change `MINIO_ROOT_USER` and `MINIO_ROOT_PASSWORD`
- Open UDP 443 for the SFU (recommended), or open UDP `10000-10019` if using the high-port range
- Set `ICE_ADVERTISE_IP` if behind NAT
- Set `SFU_PUBLIC_HOST` to your public `wss://` URL
- Ensure `CORS_ORIGIN` includes `http://127.0.0.1:15738` (desktop app) and `https://app.gryt.chat` (hosted web client)
- Put a reverse proxy (Caddy, Nginx, Traefik) in front for TLS
- Pin image versions instead of `latest`
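The credential items above can be sanity-checked mechanically before launch; a sketch (`check_defaults` is an illustrative helper, not a Gryt tool):

```shell
# Sketch: flag leftover default credentials in an env file.
check_defaults() {
  file="$1"; found=0
  grep -q 'change-me-in-production' "$file" 2>/dev/null && { echo "default JWT_SECRET"; found=1; }
  grep -q 'minioadmin' "$file" 2>/dev/null && { echo "default MinIO credentials"; found=1; }
  return "$found"
}

if check_defaults .env; then echo "no obvious defaults found"; fi
```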
TLS with Caddy (recommended)
The simplest way to add HTTPS is Caddy — it handles certificates automatically.
Add this to your docker-compose.yml:
```yaml
services:
  caddy:
    image: caddy:latest
    container_name: gryt-caddy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy-data:/data
    networks:
      - gryt
    restart: unless-stopped

volumes:
  caddy-data:
```

And create a Caddyfile:
```text
api.example.com {
    reverse_proxy server:5000
}

sfu.example.com {
    reverse_proxy sfu:5005
}
```

Then update your .env:
```shell
CORS_ORIGIN=http://127.0.0.1:15738,https://app.gryt.chat
SFU_PUBLIC_HOST=wss://sfu.example.com

# Multi-network example (LAN party + public):
# SFU_PUBLIC_HOST=wss://sfu.example.com,ws://192.168.1.100:5005
# ICE_ADVERTISE_IP=203.0.113.10,192.168.1.100
```

LAN optimization
Hosting for a LAN party or local network?
Add your server's LAN IP to ICE_ADVERTISE_IP and SFU_PUBLIC_HOST so clients on the same
network connect directly — skipping external STUN entirely for near-zero-latency voice.
```shell
# Your server's LAN IP (find it with: hostname -I | awk '{print $1}')
ICE_ADVERTISE_IP=192.168.1.100
SFU_PUBLIC_HOST=ws://192.168.1.100:5005
```

For mixed setups where some users are on the LAN and others connect over the internet, use comma-separated values. The client automatically pings each SFU endpoint and picks the fastest:

```shell
ICE_ADVERTISE_IP=203.0.113.10,192.168.1.100
SFU_PUBLIC_HOST=wss://sfu.example.com,ws://192.168.1.100:5005
```

Managing the stack
```shell
# View logs
docker compose logs -f

# View logs for a single service
docker compose logs -f server

# Restart a service after config change
docker compose restart server

# Update to latest images
docker compose pull && docker compose up -d

# Update a single service to a specific version
SFU_VERSION=x.y.z docker compose up -d sfu

# Stop everything
docker compose down

# Stop and remove all data (clean slate)
docker compose down -v
```

Health checks
All services expose health endpoints. Check the stack health:
```shell
curl http://localhost:5000/health   # server
curl http://localhost:5005/health   # sfu
```

The image worker's health endpoint (GET / on port 8080) is internal-only — it's used by Docker's built-in healthcheck and is not exposed to the host.
Or use docker compose ps to see the health status of all containers.
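The curl checks above can be wrapped in a small sweep that reports both core services at once; a sketch assuming the default ports:

```shell
# Sketch: quick health sweep over the core services (default ports assumed)
for p in 5000 5005; do
  if curl -fsS "http://localhost:$p/health" >/dev/null 2>&1; then
    echo "port $p: healthy"
  else
    echo "port $p: unreachable"
  fi
done
```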
Monitoring
Both the Server and SFU expose Prometheus metrics at /metrics. Enable the
optional monitoring stack (Prometheus + Grafana) with:
```shell
docker compose --profile monitoring up -d
```

Grafana is available at http://localhost:3000 (default login: admin / admin).
See the full Monitoring guide for configuration, available metrics, and recommended dashboards.
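If you run your own Prometheus instead of the bundled profile, a minimal scrape config for the two /metrics endpoints might look like this (a sketch: job names are illustrative, and the targets assume the compose service names and default ports):

```yaml
scrape_configs:
  - job_name: gryt-server        # illustrative job name
    static_configs:
      - targets: ["server:5000"] # compose service name + default port
  - job_name: gryt-sfu
    static_configs:
      - targets: ["sfu:5005"]
```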
Upgrading
```shell
# Pull the latest images
docker compose pull

# Recreate containers with new images
docker compose up -d

# Verify
docker compose ps
```

To upgrade to specific versions, edit the *_VERSION vars in .env:

```shell
SERVER_VERSION=x.y.z
SFU_VERSION=x.y.z
```

Then run docker compose up -d.
Using external S3
To use an external S3-compatible provider (AWS, Cloudflare R2, etc.) instead of the bundled MinIO, remove the
minio and minio-init services from docker-compose.yml and set the appropriate environment variables on the
server service directly:
```yaml
server:
  environment:
    S3_ENDPOINT: "https://<account-id>.r2.cloudflarestorage.com"
    S3_REGION: auto
    S3_ACCESS_KEY_ID: "<your-key>"
    S3_SECRET_ACCESS_KEY: "<your-secret>"
    S3_BUCKET: gryt
    S3_FORCE_PATH_STYLE: "false"
```
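Before restarting, you can sanity-check that the credentials can reach the bucket; a sketch assuming the AWS CLI is installed (substitute the same placeholders as above):

```shell
# Sketch: list the bucket with the credentials Gryt will use
AWS_ACCESS_KEY_ID="<your-key>" AWS_SECRET_ACCESS_KEY="<your-secret>" \
  aws s3 ls "s3://gryt" --endpoint-url "https://<account-id>.r2.cloudflarestorage.com"
```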