Aviato

Docker Setup

Install, configure, and tune Aviato in Docker. Covers persistent storage, media mounts, TLS, GPU transcoding, and production tips.

The Quickstart walks you through a single docker run to get Aviato up. This page is for the next step: setting up a real install with docker compose, persistent storage on host paths, mounted media libraries, TLS, and hardware accelerated transcoding.

What you need

  • Docker 24+ on Linux, macOS, or Windows.
  • Docker Compose v2 (docker compose ...), bundled with current Docker Desktop and Engine releases.
  • A directory on the host to hold Aviato's persistent data (database, plugins, assets, logs, transcode cache).
  • The paths to whatever media you want Aviato to index. They get mounted into the container; nothing is copied.

Minimal compose file

A reasonable starting point:

services:
  aviato:
    image: docker.ato.software/ato/aviato:latest
    container_name: aviato
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - ./data:/data
      - /srv/media/movies:/media/movies:ro
      - /srv/media/tv:/media/tv:ro
      - /srv/media/music:/media/music:ro

Save it as docker-compose.yml and start it:

docker compose up -d

Open http://localhost:8080 and run through first time setup.

The container listens on port 80 internally. The 8080:80 mapping means clients connect to host port 8080. Use "80:80" if you want Aviato on the standard HTTP port.
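If the host has a public interface and TLS is terminated elsewhere, you can also restrict the published port to loopback so only local clients (or a local reverse proxy) can reach Aviato. This is standard Docker port syntax, shown here as a fragment:

```yaml
services:
  aviato:
    ports:
      # Publish on loopback only; other machines cannot connect directly.
      - "127.0.0.1:8080:80"
```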

Persistent data

Everything Aviato persists lives under /data inside the container. The compose example above bind mounts ./data next to the compose file, but you can use any host path or a named volume.

  • /data/aviato.db: The SQLite database. Users, libraries, metadata.
  • /data/plugins: Installed plugin bundles and their sandboxed workspaces.
  • /data/assets: Posters, backdrops, and profile images.
  • /data/transcode: Live transcode cache. Safe to delete; will regenerate.
  • /data/backups: On demand and scheduled database backups.
  • /data/certs: TLS certificate and key, when you bring your own.
  • /data/logs: Structured logs.

Never delete the database, plugins, assets, or backups directories while Aviato is running.

Permissions on bind mounts

Aviato runs as the non root user aviato inside the container (UID 101, GID 102). Named Docker volumes handle this automatically. With a host bind mount, the directory must be writable by that UID:

sudo chown -R 101:102 ./data

If chowning a host path is impractical (NAS shares, multi user systems), use a named volume instead:

services:
  aviato:
    volumes:
      - aviato-data:/data
volumes:
  aviato-data:

Mounting media libraries

Aviato indexes the files you mount into the container. The path inside the container is what you'll enter when you create a library in the UI, so pick something memorable:

    volumes:
      - ./data:/data
      - /srv/media/movies:/media/movies:ro
      - /srv/media/tv:/media/tv:ro
      - /srv/media/music:/media/music:ro
      - /srv/media/audiobooks:/media/audiobooks:ro
      - /srv/media/photos:/media/photos:ro

The :ro suffix mounts the source read only. Use it unless you specifically need Aviato to write back to the source files (for example, sidecar .nfo or subtitle files alongside media). Read only is the safest setting and prevents any plugin from damaging your library.

You can mount as many or as few directories as you like. Aviato treats each mount as a separate library candidate; pointing one library at /media/movies and another at /media/tv is the standard layout.

TLS and HTTPS

The container has nginx baked in, so Aviato can terminate TLS itself. You need a certificate, a key, and four environment variables.

Bring your own certificate

Drop the cert and key on the host. Most people keep them next to the data directory:

./
├── docker-compose.yml
└── data/
    └── certs/
        ├── cert.pem
        └── key.pem

Map the certs directory into the container and tell Aviato where to look:

services:
  aviato:
    image: docker.ato.software/ato/aviato:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./data:/data
      - ./data/certs:/data/certs:ro
    environment:
      - AVIATO_CERTIFICATE_MODE=custom
      - AVIATO_CERTIFICATE_DOMAIN=media.example.com
      - AVIATO_CERTIFICATE_FILE=/data/certs/cert.pem
      - AVIATO_CERTIFICATE_PRIVATE_KEY_FILE=/data/certs/key.pem

Aviato regenerates its nginx config on startup, redirects HTTP to HTTPS, and serves your cert on port 443.

Self signed for local testing

Generate a self signed cert for localhost:

mkdir -p ./data/certs
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ./data/certs/key.pem \
  -out    ./data/certs/cert.pem \
  -days 365 \
  -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,IP:127.0.0.1,IP:::1"
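Before wiring the certificate into the container, it's worth confirming the subject and SAN entries came out right. A small helper, sketched around the standard openssl x509 inspection flags:

```shell
# Inspect a certificate: print its subject, expiry date, and SAN entries.
# The -ext flag requires OpenSSL 1.1.1 or newer.
inspect_cert() {
  openssl x509 -in "$1" -noout -subject -enddate -ext subjectAltName
}

# Example: inspect_cert ./data/certs/cert.pem
```

For the self signed cert above, the output should list CN=localhost in the subject and the DNS and IP entries you passed to -addext.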

Map HTTPS to a high port so you don't need root, and tell Aviato about the non standard port so its redirects land on the right URL:

services:
  aviato:
    ports:
      - "8080:80"
      - "8443:443"
    volumes:
      - ./data:/data
      - ./data/certs:/data/certs:ro
    environment:
      - AVIATO_CERTIFICATE_MODE=custom
      - AVIATO_CERTIFICATE_DOMAIN=localhost
      - AVIATO_CERTIFICATE_FILE=/data/certs/cert.pem
      - AVIATO_CERTIFICATE_PRIVATE_KEY_FILE=/data/certs/key.pem
      - AVIATO_HTTPS_PORT=8443

Browse to https://localhost:8443/ and accept the warning. The cert is self signed, so the browser will not trust it. That is expected.

Rotating a certificate

Aviato reads the cert and key files from disk on every request. To rotate, swap the files on the host and either restart the container or toggle any TLS setting in Settings → Network:

docker compose restart aviato

Reverse proxying Aviato yourself

If you already run Caddy, Traefik, or nginx in front of your services, leave Aviato's TLS off and point your proxy at the plain HTTP port:

services:
  aviato:
    expose:
      - "80"
    networks:
      - reverse-proxy
    environment:
      - AVIATO_CERTIFICATE_MODE=none
networks:
  reverse-proxy:
    external: true

Your external proxy handles TLS termination and forwards to aviato:80. Make sure it preserves the Host, X-Forwarded-For, and X-Forwarded-Proto headers; Aviato uses them to build absolute URLs in webhook deliveries and stream playlists.
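As a sketch, an nginx server block that forwards those headers might look like the following. The upstream name aviato assumes both containers share the reverse-proxy network; your server name and certificate paths will differ:

```nginx
server {
    listen 443 ssl;
    server_name media.example.com;

    ssl_certificate     /etc/nginx/certs/cert.pem;
    ssl_certificate_key /etc/nginx/certs/key.pem;

    location / {
        proxy_pass http://aviato:80;
        # Aviato uses these headers to build absolute URLs in
        # webhook deliveries and stream playlists.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```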

Hardware accelerated transcoding

Software transcoding works fine for small libraries and a couple of concurrent streams, but it eats CPU. For 4K, HEVC, or several simultaneous transcodes, hand the work off to the GPU.

Aviato supports four hardware backends via FFmpeg:

  • VAAPI: Linux; Intel iGPUs and AMD GPUs. AVIATO_TRANSCODING_HW_ACCEL=vaapi
  • QSV: Linux; modern Intel iGPUs and Arc. AVIATO_TRANSCODING_HW_ACCEL=qsv
  • NVENC: Linux and Windows; NVIDIA GPUs. AVIATO_TRANSCODING_HW_ACCEL=nvenc
  • none: Software fallback. AVIATO_TRANSCODING_HW_ACCEL=none (the default)

Pick the one your hardware actually supports. If you set the wrong backend, FFmpeg will fail at session start and Aviato will surface the error.

Heads up. Docker Desktop on macOS and Windows does not expose the host GPU to containers in any practical way. If you're on a Mac or Windows laptop and want hardware transcoding, run Aviato directly on the host instead of in Docker, or run Docker on a Linux server.

Intel iGPU and AMD (VAAPI)

This is the easiest path on Linux. The kernel exposes the GPU at /dev/dri/renderD128 (or renderD129 for a second GPU). You pass the device into the container and add the container's user to the host's render group so it can open it.

First, find the GID of the render group on your host:

getent group render | cut -d: -f3

Common values are 104 on Debian, 105 on Ubuntu, 989 on Fedora. On some older distros the relevant group is video instead of render; if render doesn't exist, run getent group video instead.
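If you script your deployment, a small helper can bake in the video fallback described above (a sketch; the group names come from this section, and the result is empty on hosts without either group):

```shell
# Print the GID to use in group_add: prefer the render group, fall back
# to video on older distros. Pipelines keep a missing group from aborting
# the script under `set -e`.
render_gid() {
  gid="$(getent group render 2>/dev/null | cut -d: -f3)"
  [ -n "$gid" ] || gid="$(getent group video 2>/dev/null | cut -d: -f3)"
  echo "$gid"
}

render_gid
```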

Then mount the device and add the GID:

services:
  aviato:
    image: docker.ato.software/ato/aviato:latest
    devices:
      - /dev/dri:/dev/dri
    group_add:
      - "989"  # Replace with the GID from `getent group render` on your host
    environment:
      - AVIATO_TRANSCODING_HW_ACCEL=vaapi
    volumes:
      - ./data:/data

VAAPI works for both Intel iGPUs (HD Graphics, Iris, Xe, Arc) and AMD GPUs (Vega, RDNA). The same compose snippet covers both.

Intel QSV instead of VAAPI

QSV is Intel's higher level wrapper. On Broadwell (5th gen Core) and newer, it usually performs better than VAAPI and exposes more knobs. Setup is identical to VAAPI; just switch the env var:

    environment:
      - AVIATO_TRANSCODING_HW_ACCEL=qsv

Stick with VAAPI on pre Broadwell hardware or if QSV gives you encoding artifacts.

NVIDIA NVENC

NVENC needs the NVIDIA Container Toolkit on the host. Install the proprietary NVIDIA driver first, then the toolkit, then restart Docker.

Verify the host can see the GPU before adding Aviato to the mix:

docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi

If that prints your GPU, you're ready. Compose snippet:

services:
  aviato:
    image: docker.ato.software/ato/aviato:latest
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    environment:
      - AVIATO_TRANSCODING_HW_ACCEL=nvenc
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
    volumes:
      - ./data:/data

The two NVIDIA_* env vars are belt and braces. The Container Toolkit usually sets sensible defaults, but spelling them out makes the config self documenting.

If you have multiple GPUs and want to pin Aviato to a specific one, replace count: all with device_ids: ["0"] (or "1", etc.), and set NVIDIA_VISIBLE_DEVICES=0.
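For example, pinning Aviato to the first GPU looks like this (only the changed keys shown):

```yaml
services:
  aviato:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]   # pin to GPU 0; use "1" for the second card
              capabilities: [gpu]
    environment:
      - NVIDIA_VISIBLE_DEVICES=0
```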

To verify NVENC inside the running container:

docker exec -it aviato nvidia-smi

You should see your GPU and (once a transcode is running) the aviato process listed under "Processes."

Concurrent NVENC sessions

Consumer NVIDIA cards have a driver imposed cap on simultaneous NVENC sessions, typically 3 to 8 depending on generation. Hitting the cap returns an out of memory style error from FFmpeg. Quadro and data center cards are unrestricted.

The community maintained NVENC patch lifts the cap on consumer cards. Use it at your own risk; it edits the proprietary driver binary.

Combining hardware and software fallback

Aviato falls back to software automatically if the hardware backend fails to initialize for a specific session (unsupported codec, out of sessions, transient driver error). The fallback is per session, not per process; subsequent sessions retry the hardware path.

If you want to confirm hardware acceleration is actually being used, watch the transcode session logs. They print the FFmpeg invocation, including the -hwaccel arguments, on session start.

Resource limits

Aviato is reasonably memory thrifty, but transcoding can spike. Sane defaults for a home server:

    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 256M

Bump the limit to 4G or higher if you expect several concurrent transcodes or run a very large library (Aviato keeps hot metadata in memory). If you set AVIATO_TRANSCODING_MAX_SESSIONS higher than the default 4, plan on an extra ~100 MB per session for working buffers.
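As a back-of-envelope check, the sizing guidance above can be scripted. The 2G baseline and ~100 MB per extra session are the rule of thumb from this section, not a measured profile:

```shell
# Estimate a memory limit in MB: 2048 MB baseline, plus roughly 100 MB
# for each transcode session beyond the default of 4.
sessions=6
extra=$(( sessions > 4 ? (sessions - 4) * 100 : 0 ))
echo $(( 2048 + extra ))   # prints 2248 for sessions=6
```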

Putting it all together

A complete compose file with persistent storage, media mounts, TLS, NVIDIA transcoding, and resource limits:

services:
  aviato:
    image: docker.ato.software/ato/aviato:latest
    container_name: aviato
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./data:/data
      - ./data/certs:/data/certs:ro
      - /srv/media/movies:/media/movies:ro
      - /srv/media/tv:/media/tv:ro
      - /srv/media/music:/media/music:ro
    environment:
      - AVIATO_LOGGING_LEVEL=info
      - AVIATO_LOGGING_FORMAT=json
      - AVIATO_JOBS_CONCURRENCY=4
      - AVIATO_TRANSCODING_MAX_SESSIONS=6
      - AVIATO_TRANSCODING_HW_ACCEL=nvenc
      - AVIATO_CERTIFICATE_MODE=custom
      - AVIATO_CERTIFICATE_DOMAIN=media.example.com
      - AVIATO_CERTIFICATE_FILE=/data/certs/cert.pem
      - AVIATO_CERTIFICATE_PRIVATE_KEY_FILE=/data/certs/key.pem
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
    runtime: nvidia
    deploy:
      resources:
        reservations:
          memory: 256M
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
        limits:
          memory: 4G

Swap the NVIDIA bits for the VAAPI block above if you're on Intel or AMD.

Updating

Pull the latest image and recreate the container. Your data volume is preserved:

docker compose pull
docker compose up -d

Aviato runs pending database migrations on startup, before the API comes up. If a migration fails the server refuses to start; check docker compose logs aviato for the specific error.

Before updating a production install, take a backup from Settings → Backups → Create backup (or copy aviato.db while the container is stopped).

Health check

The image declares a HEALTHCHECK that hits /api/health every 30 seconds:

docker inspect --format='{{.State.Health.Status}}' aviato

Expect starting for about 30 seconds at boot, then healthy.
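If the 30 second cadence doesn't suit you, compose can override the image's built-in check. A sketch, assuming curl (or an equivalent HTTP client) is available inside the image:

```yaml
services:
  aviato:
    healthcheck:
      test: ["CMD", "curl", "-fsS", "http://localhost/api/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 30s
```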

Troubleshooting

"Port is already allocated": Another service is using the host port. Pick a different one in the ports: block.

"Permission denied" on /data: The bind mounted host directory is not writable by UID 101. Run sudo chown -R 101:102 ./data, or switch to a named volume.

Web UI loads but API returns 502: Nginx is up but the API server is still booting. Wait a few seconds. If it persists, run docker compose logs aviato and look for crashes in the svc-server process.

Hardware transcoding silently falls back to software: Check that AVIATO_TRANSCODING_HW_ACCEL matches your hardware (vaapi for Intel/AMD, nvenc for NVIDIA, qsv for modern Intel). Confirm the device is visible inside the container (docker exec -it aviato ls /dev/dri for Intel/AMD, docker exec -it aviato nvidia-smi for NVIDIA). Watch the server logs while you start a stream and look for the FFmpeg invocation.

NVENC reports too many sessions: You hit the consumer driver cap. Reduce AVIATO_TRANSCODING_MAX_SESSIONS, or apply the NVENC patch at your own risk.

HTTP redirects to HTTPS on the wrong port: Set AVIATO_HTTPS_PORT to match the external (host side) port mapped to 443. The default of 443 is omitted from URLs; anything else must be declared.

HTTPS shows the wrong certificate after rotation: Aviato regenerates the nginx config when a TLS setting changes or the container starts. Restart the container, or toggle any TLS setting in the Network settings page.

What's next

  • Configure libraries. Settings → Libraries → Add library, pointing at the paths you mounted under /media/.
  • Tune the server. See Configuration for every option you can set via env var or config.yml.
  • Install plugins. Drop a plugin bundle into ./data/plugins/, or install from the in app plugin browser.
  • Set up backups. Settings → Backups schedules nightly snapshots into /data/backups.
