Gryt

Docker Compose

Self-host Gryt with a single docker compose up — no cloning required

The fastest way to self-host Gryt. Everything runs from pre-built images published to GitHub Container Registry — no need to clone any repos or build anything.

What's included

| Service | Image | Purpose | Required? |
| --- | --- | --- | --- |
| Server | ghcr.io/gryt-chat/server | Signaling, chat, file uploads | Yes |
| SFU | ghcr.io/gryt-chat/sfu | WebRTC media forwarding | Yes |
| MinIO | minio/minio | S3-compatible file storage | Yes |
| Image Worker | ghcr.io/gryt-chat/image-worker | Background image compression + thumbnailing | Optional (recommended) |
| Client | ghcr.io/gryt-chat/client | Web UI (React + Nginx) | Dev / local only |
| Prometheus | prom/prometheus | Metrics collection | Optional |
| Grafana | grafana/grafana | Metrics dashboards | Optional |

Most users connect via the Gryt desktop app (available for Linux, macOS, and Windows).

Web client and authentication

The web client Docker image is included for local development and contributing to Gryt. It is not recommended for production self-hosting because the OIDC login flow requires your domain to be registered as a redirect URI with the centralized auth service at auth.gryt.chat — and only official Gryt domains are whitelisted.

If you need a web-based interface, use the hosted client at app.gryt.chat, or connect with the desktop app.

You can self-host your own Keycloak, but that means your users will have separate accounts that don't work on other Gryt servers (no cross-server identity).
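If you go the self-hosted Keycloak route, point the server at your own issuer via the authentication variables documented below. A sketch, assuming a realm named gryt on a hypothetical keycloak.example.com:

```
# Illustrative only — hostname and realm name depend on your Keycloak setup
GRYT_OIDC_ISSUER=https://keycloak.example.com/realms/gryt
GRYT_OIDC_AUDIENCE=gryt-web
```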

Quick start

Download the compose file and example env

Using curl:

mkdir gryt && cd gryt
curl -Lo docker-compose.yml https://raw.githubusercontent.com/Gryt-chat/gryt/main/ops/deploy/compose/prod.yml
curl -Lo .env https://raw.githubusercontent.com/Gryt-chat/gryt/main/ops/deploy/compose/.env.example

Or, using wget:

mkdir gryt && cd gryt
wget -O docker-compose.yml https://raw.githubusercontent.com/Gryt-chat/gryt/main/ops/deploy/compose/prod.yml
wget -O .env https://raw.githubusercontent.com/Gryt-chat/gryt/main/ops/deploy/compose/.env.example

Edit the .env file

Open .env in your editor and configure at minimum:

# Give your server a name
SERVER_NAME=My Gryt Server

# Set a real secret in production
JWT_SECRET=<run: openssl rand -base64 48>

# Allowed origins (desktop app + hosted web client)
CORS_ORIGIN=http://127.0.0.1:15738,https://app.gryt.chat
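If you prefer to script this step, the minimum settings can be appended in one go. A sketch, assuming openssl is installed and .env is in the current directory (if the example file already contains these keys, remove the placeholder lines first or edit by hand):

```shell
# Generate a strong session-token secret, per the production checklist
# (openssl rand -base64 48 yields a single 64-character base64 line).
JWT_SECRET="$(openssl rand -base64 48)"

# Append the minimum required settings to .env. The heredoc expands
# ${JWT_SECRET} so the generated value lands in the file.
cat >> .env <<EOF
SERVER_NAME=My Gryt Server
JWT_SECRET=${JWT_SECRET}
CORS_ORIGIN=http://127.0.0.1:15738,https://app.gryt.chat
EOF
```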

Start the server

docker compose up -d

This starts the core services: Server, SFU, and MinIO. Connect with the Gryt desktop app or the hosted web client at app.gryt.chat.

Invite-only: the first user to join a brand-new server becomes the owner/admin automatically. After that, the server is invite-only. Create invite links in Server settings → Invites and share them.

Architecture

Configuration reference

Image versions

Images default to latest. Pin a specific version for reproducible deploys:

SERVER_VERSION=x.y.z
SFU_VERSION=x.y.z
IMAGE_WORKER_VERSION=x.y.z  # only if running image-worker

Browse available tags at github.com/orgs/gryt-chat/packages.

Image Worker

| Variable | Default | Description |
| --- | --- | --- |
| IMAGE_WORKER_CONCURRENCY | 2 | Max images processed in parallel |
| IMAGE_WORKER_POLL_MS | 1000 | Poll interval for queued jobs (ms) |

Ports

| Variable | Default | Description |
| --- | --- | --- |
| SERVER_PORT | 5000 | Signaling API + WebSocket |
| SFU_PORT | 5005 | SFU WebSocket |
| ICE_UDP_MUX_PORT | 443 | Recommended: run WebRTC over a single UDP port (open UDP 443) |
| SFU_UDP_MIN | 10000 | WebRTC UDP range start (if not using ICE_UDP_MUX_PORT) |
| SFU_UDP_MAX | 10019 | WebRTC UDP range end (if not using ICE_UDP_MUX_PORT) |
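In .env the two modes look like this (values mirror the defaults above; configure one mode, not both):

```
# Single-port mode (recommended): open/forward UDP 443
ICE_UDP_MUX_PORT=443

# Port-range mode: open/forward UDP 10000-10019 instead
# SFU_UDP_MIN=10000
# SFU_UDP_MAX=10019
```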

Server

| Variable | Default | Description |
| --- | --- | --- |
| SERVER_NAME | My Gryt Server | Display name in the server browser |
| SERVER_DESCRIPTION | A Gryt voice chat server | Server description |
| SERVER_PASSWORD | (empty) | Optional server↔SFU shared secret (not a user join password) |
| SERVER_INVITE_MAX_RETRIES | 8 | Invalid invite attempts before lockout (per IP+user) |
| SERVER_INVITE_RETRY_WINDOW_MS | 300000 | Sliding window for attempt counting (5 min) |
| SERVER_INVITE_RETRY_COOLDOWN_MS | 60000 | Base cooldown after lockout (1 min) |
| SERVER_INVITE_MAX_COOLDOWN_MS | 3600000 | Cooldown escalation cap (1 hour) |
| SERVER_INVITE_IP_MAX_RETRIES | 20 | Per-IP invalid invite attempts before lockout |
| VOICE_MAX_USERS | (auto) | Optional voice seat limit override |
| REFRESH_TOKEN_TTL_DAYS | 7 | Refresh token lifetime (days) |
| JWT_SECRET | change-me-in-production | Session token secret |
| CORS_ORIGIN | see below | Allowed origins (comma-separated). Always include http://127.0.0.1:15738 (desktop app). Include https://app.gryt.chat if you want users to connect via the hosted web client. |

If you set SERVER_PASSWORD, keep it stable. Changing it may require restarting the SFU, because the SFU caches the password per server ID.

Changing the server owner (CLI)

The first user to join a brand-new server becomes the owner/admin automatically.

To change the owner later, run:

docker compose exec server node admin-setOwner.js --grytUserId <keycloak_sub>

<keycloak_sub> is your Gryt user ID. You can find it in the desktop app or web client by going to Settings → Profile and scrolling to the bottom.

Authentication

| Variable | Default | Description |
| --- | --- | --- |
| GRYT_AUTH_MODE | required | required (users sign in with a Gryt account) or disabled (server locked, all joins rejected) |
| GRYT_OIDC_ISSUER | https://auth.gryt.chat/realms/gryt | OIDC issuer URL |
| GRYT_OIDC_AUDIENCE | gryt-web | OIDC audience |

WebRTC / NAT

| Variable | Default | Description |
| --- | --- | --- |
| STUN_SERVERS | Google STUN | Comma-separated STUN URIs |
| SFU_PUBLIC_HOST | ws://localhost:5005 | Public SFU URL(s) for browsers (comma-separated for multi-network) |
| ICE_ADVERTISE_IP | (auto) | Public IP(s) to advertise in ICE candidates (comma-separated for multi-network) |

Storage

| Variable | Default | Description |
| --- | --- | --- |
| MINIO_ROOT_USER | minioadmin | MinIO admin username |
| MINIO_ROOT_PASSWORD | minioadmin | MinIO admin password |
| S3_BUCKET | gryt | Bucket for file uploads |

Production checklist

Before going public

  • Set a strong JWT_SECRET — generate with openssl rand -base64 48
  • Change MINIO_ROOT_USER and MINIO_ROOT_PASSWORD
  • Open UDP 443 for the SFU (recommended), or open UDP 10000-10019 if using the high-port range
  • Set ICE_ADVERTISE_IP if behind NAT
  • Set SFU_PUBLIC_HOST to your public wss:// URL
  • Ensure CORS_ORIGIN includes http://127.0.0.1:15738 (desktop app) and https://app.gryt.chat (hosted web client)
  • Put a reverse proxy (Caddy, Nginx, Traefik) in front for TLS
  • Pin image versions instead of latest

The simplest way to add HTTPS is Caddy — it handles certificates automatically.

Add this to your docker-compose.yml:

services:
  caddy:
    image: caddy:latest
    container_name: gryt-caddy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy-data:/data
    networks:
      - gryt
    restart: unless-stopped

volumes:
  caddy-data:

And create a Caddyfile:

api.example.com {
    reverse_proxy server:5000
}

sfu.example.com {
    reverse_proxy sfu:5005
}

Then update your .env:

CORS_ORIGIN=http://127.0.0.1:15738,https://app.gryt.chat
SFU_PUBLIC_HOST=wss://sfu.example.com

# Multi-network example (LAN party + public):
# SFU_PUBLIC_HOST=wss://sfu.example.com,ws://192.168.1.100:5005
# ICE_ADVERTISE_IP=203.0.113.10,192.168.1.100

LAN optimization

Hosting for a LAN party or local network?

Add your server's LAN IP to ICE_ADVERTISE_IP and SFU_PUBLIC_HOST so clients on the same network connect directly — skipping external STUN entirely for near-zero-latency voice.

# Your server's LAN IP (find it with: hostname -I | awk '{print $1}')
ICE_ADVERTISE_IP=192.168.1.100
SFU_PUBLIC_HOST=ws://192.168.1.100:5005

For mixed setups where some users are on the LAN and others connect over the internet, use comma-separated values. The client automatically pings each SFU endpoint and picks the fastest:

ICE_ADVERTISE_IP=203.0.113.10,192.168.1.100
SFU_PUBLIC_HOST=wss://sfu.example.com,ws://192.168.1.100:5005

Managing the stack

# View logs
docker compose logs -f

# View logs for a single service
docker compose logs -f server

# Restart a service after config change
docker compose restart server

# Update to latest images
docker compose pull && docker compose up -d

# Update a single service to a specific version
SFU_VERSION=x.y.z docker compose up -d sfu

# Stop everything
docker compose down

# Stop and remove all data (clean slate)
docker compose down -v

Health checks

All services expose health endpoints. Check the stack health:

curl http://localhost:5000/health   # server
curl http://localhost:5005/health   # sfu

The image worker's health endpoint (GET / on port 8080) is internal-only — it's used by Docker's built-in healthcheck and is not exposed to the host.

Or use docker compose ps to see the health status of all containers.
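If you script deployments, a small polling helper can block until an endpoint reports healthy before moving on. This is a sketch (the function name is our own, not part of the stack) that assumes curl is installed:

```shell
# wait_for_health URL TIMEOUT_SECONDS
# Polls the given health endpoint once per second until it responds
# with a success status, or returns 1 after TIMEOUT_SECONDS attempts.
wait_for_health() {
  url="$1"
  timeout="${2:-30}"
  i=0
  while [ "$i" -lt "$timeout" ]; do
    # -f makes curl fail on HTTP errors; -sS keeps output quiet but shows errors
    if curl -fsS -o /dev/null "$url"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example: wait up to 60s for the core services after docker compose up -d
# wait_for_health http://localhost:5000/health 60   # server
# wait_for_health http://localhost:5005/health 60   # sfu
```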

Monitoring

Both the Server and SFU expose Prometheus metrics at /metrics. Enable the optional monitoring stack (Prometheus + Grafana) with:

docker compose --profile monitoring up -d

Grafana is available at http://localhost:3000 (default login: admin / admin).

See the full Monitoring guide for configuration, available metrics, and recommended dashboards.
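If you point an existing Prometheus at the stack instead of enabling the bundled profile, a minimal scrape config might look like this. A sketch only: the job names are illustrative, the targets assume Prometheus shares the compose network with the server and SFU, and the bundled monitoring profile ships its own configuration.

```yaml
scrape_configs:
  - job_name: gryt-server
    metrics_path: /metrics
    static_configs:
      - targets: ["server:5000"]
  - job_name: gryt-sfu
    metrics_path: /metrics
    static_configs:
      - targets: ["sfu:5005"]
```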

Upgrading

# Pull the latest images
docker compose pull

# Recreate containers with new images
docker compose up -d

# Verify
docker compose ps

To upgrade to specific versions, edit the *_VERSION vars in .env:

SERVER_VERSION=x.y.z
SFU_VERSION=x.y.z

Then run docker compose up -d.

Using external S3

To use an external S3-compatible provider (AWS, Cloudflare R2, etc.) instead of the bundled MinIO, remove the minio and minio-init services from docker-compose.yml and set the appropriate environment variables on the server service directly:

server:
  environment:
    S3_ENDPOINT: "https://<account-id>.r2.cloudflarestorage.com"
    S3_REGION: auto
    S3_ACCESS_KEY_ID: "<your-key>"
    S3_SECRET_ACCESS_KEY: "<your-secret>"
    S3_BUCKET: gryt
    S3_FORCE_PATH_STYLE: "false"
