Oliver Eidel · April 02, 2026

Self-hosting Cap (Loom Alternative): Hetzner + S3 + Caddy

Cap is an open-source Loom alternative. "But wait", you say, "why switch from Loom? It's awesome" - to which I respond by showing you this recent email:
[Screenshot: Loom + Atlassian crappiness, delivered via email]

You see, Loom was acquired by Atlassian a while ago, and Atlassian has proven to be on a mission of very reliably delivering crappy user experiences. Just look at Jira and what its users have to say about it.

So it's no big surprise that the same crappiness is now going to permeate their newest acquisition, Loom.

Time to switch.

Cap looks interesting, because you can self-host it, essentially reducing your seat-based monthly Loom bill to a flat, hosting-based monthly Hetzner bill. Nice.

So this is essentially an opinionated version of the more generic self-hosting guide on the Cap website.

We'll be choosing Hetzner Cloud, because it's awesome, and Hetzner Object Storage (their S3-compatible offering), because it's, um.. cheap.
(Though Hetzner S3 has been wrestling with quite a few availability problems.)

Server Requirements

Running Cap locally shows that it needs roughly 1 GB of RAM while idle. So the minimum Hetzner Cloud instance for our purposes is probably a CX33 with 4 shared vCPUs and 8 GB RAM at only 8.32€ / month (less than one Loom seat!).

You could also step up to a CPX31 (4 cores, 8 GB RAM, more performance) for 21.41€ / month, but that depends on what sort of server load you're expecting.

We're a small team at OpenRegulatory, so CX33 it is.

Buy the Hetzner Cloud server and go through one of those "your first 10 minutes on a server" tutorials. You'll also need Docker (with the compose plugin) installed, since everything below runs via docker compose.

Hetzner Object Storage for Cap

Next up, create the Hetzner Object Storage bucket. Be sure to choose the same region your server is in (e.g. nbg1). Note down the access key and secret; we'll need those soon.

Here's the first tricky part: you need to configure your S3 bucket with this CORS config. I had to adapt the CORS config mentioned on the Cap website. Here's what worked for me:

Save this to cors.json:
{
  "CORSRules": [
    {
      "AllowedHeaders": ["*"],
      "AllowedMethods": ["GET", "HEAD", "PUT", "POST"],
      "AllowedOrigins": [
        "https://cap.yourdomain.com"
      ],
      "ExposeHeaders": ["ETag"],
      "MaxAgeSeconds": 3000
    }
  ]
}
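A stray comma or quote in cors.json only blows up later with a confusing error, so it can be worth checking that the file actually parses before uploading it. A quick sketch, assuming python3 is installed on your machine:

```shell
# Check that cors.json parses as valid JSON; on a syntax error,
# json.tool prints the offending line and column
python3 -m json.tool cors.json > /dev/null && echo "cors.json is valid JSON"
```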

Run this in your shell in the same directory. Note the --endpoint-url: without it, the AWS CLI talks to actual AWS instead of Hetzner. Also configure the CLI with your Hetzner access key and secret first (e.g. via `aws configure`):
aws s3api put-bucket-cors --bucket your-bucket-name --endpoint-url https://nbg1.your-objectstorage.com --cors-configuration file://cors.json

Cool.

.env Configuration

Cap needs an epic number of environment variables to run. Create an .env file and replace the FIXME values with your own:
CAP_URL=https://cap.yourdomain.com

MYSQL_PASSWORD=FIXME
MYSQL_ROOT_PASSWORD=FIXME

DATABASE_ENCRYPTION_KEY=FIXME
NEXTAUTH_SECRET=FIXME
MEDIA_SERVER_WEBHOOK_SECRET=FIXME

CAP_AWS_ACCESS_KEY=FIXME_FROM_HETZNER
CAP_AWS_SECRET_KEY=FIXME_FROM_HETZNER
CAP_AWS_BUCKET=your-cap-bucket-name
CAP_AWS_REGION=nbg1
S3_PUBLIC_ENDPOINT=https://nbg1.your-objectstorage.com
S3_INTERNAL_ENDPOINT=https://nbg1.your-objectstorage.com

RESEND_API_KEY=FIXME
RESEND_FROM_DOMAIN=yourdomain.com

DEEPGRAM_API_KEY=FIXME
OPENAI_API_KEY=FIXME
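With this many variables, it's easy to miss a placeholder. A quick check that nothing in .env is still set to FIXME (assuming the file from above sits in your current directory):

```shell
# List any .env lines still containing the FIXME placeholder
if grep -n 'FIXME' .env; then
  echo "Some values still need filling in."
else
  echo ".env looks complete."
fi
```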

A few explanations:
  • You could just use a password manager to generate passwords for the "password" fields (MYSQL_PASSWORD, MYSQL_ROOT_PASSWORD). They are internal anyway, so.. probably not high-risk.
  • For the secrets (DATABASE_ENCRYPTION_KEY, NEXTAUTH_SECRET, MEDIA_SERVER_WEBHOOK_SECRET), you can generate each of them by running `openssl rand -hex 32`.
  • Fill in the Hetzner S3 bucket access key and secret which you got when creating the bucket.
  • I pre-filled the S3 region as nbg1, but double-check as you may have chosen fsn1 or another region (same for the S3 endpoint).
  • Also note that we have to set S3_PATH_STYLE to `false` in the docker-compose file below, as that's what ChatGPT told me to do.
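The three `openssl rand` calls can be done in one go. A small sketch that prints ready-to-paste lines for the .env file:

```shell
# Print one freshly generated 64-char hex secret per variable,
# in .env format, ready to copy over the FIXME placeholders
for var in DATABASE_ENCRYPTION_KEY NEXTAUTH_SECRET MEDIA_SERVER_WEBHOOK_SECRET; do
  echo "$var=$(openssl rand -hex 32)"
done
```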

Cap docker-compose.yml

Here's my full docker-compose file. I made a few changes, e.g. removed the password placeholders, removed minio (we're using Hetzner S3), and added Caddy so that we get Let's Encrypt SSL (nice!):

name: cap

services:
  cap-web:
    container_name: cap-web
    image: ghcr.io/capsoftware/cap-web:latest
    restart: unless-stopped
    depends_on:
      mysql:
        condition: service_healthy
    environment:
      DATABASE_URL: mysql://cap:${MYSQL_PASSWORD}@mysql:3306/cap
      WEB_URL: ${CAP_URL}
      NEXTAUTH_URL: ${CAP_URL}
      DATABASE_ENCRYPTION_KEY: ${DATABASE_ENCRYPTION_KEY}
      NEXTAUTH_SECRET: ${NEXTAUTH_SECRET}
      MEDIA_SERVER_WEBHOOK_SECRET: ${MEDIA_SERVER_WEBHOOK_SECRET}
      CAP_AWS_ACCESS_KEY: ${CAP_AWS_ACCESS_KEY}
      CAP_AWS_SECRET_KEY: ${CAP_AWS_SECRET_KEY}
      CAP_AWS_BUCKET: ${CAP_AWS_BUCKET}
      CAP_AWS_REGION: ${CAP_AWS_REGION}
      S3_PUBLIC_ENDPOINT: ${S3_PUBLIC_ENDPOINT}
      S3_INTERNAL_ENDPOINT: ${S3_INTERNAL_ENDPOINT}
      S3_PATH_STYLE: "false"
      RESEND_API_KEY: ${RESEND_API_KEY:-}
      RESEND_FROM_DOMAIN: ${RESEND_FROM_DOMAIN:-}
      MEDIA_SERVER_URL: http://media-server:3456
      MEDIA_SERVER_WEBHOOK_URL: http://cap-web:3000
    healthcheck:
      test: ["CMD", "wget", "-q", "-O", "/dev/null", "http://127.0.0.1:3000/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
    networks:
      - cap-network

  media-server:
    container_name: cap-media-server
    build:
      context: ./apps/media-server
      dockerfile: Dockerfile.standalone
    restart: unless-stopped
    environment:
      PORT: 3456
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3456/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    networks:
      - cap-network

  mysql:
    container_name: cap-mysql
    image: mysql:8.0
    restart: unless-stopped
    environment:
      MYSQL_DATABASE: cap
      MYSQL_USER: cap
      MYSQL_PASSWORD: ${MYSQL_PASSWORD:-cap-local-pwd-123}
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD:-cap-root-pwd-789}
    command:
      - --max_connections=1000
      - --default-authentication-plugin=mysql_native_password
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_unicode_ci
    volumes:
      - cap-mysql-data:/var/lib/mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "cap", "-p${MYSQL_PASSWORD}"]
      interval: 10s
      timeout: 5s
      retries: 10
      start_period: 30s
    networks:
      - cap-network

  caddy:
    image: caddy:2
    restart: unless-stopped
    depends_on:
      cap-web:
        condition: service_healthy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy-data:/data
      - caddy-config:/config
    networks:
      - cap-network

volumes:
  cap-mysql-data:
  caddy-data:
  caddy-config:

networks:
  cap-network:
    driver: bridge


And here's the Caddyfile, which you should save to the same directory (as a file named, you guessed it, Caddyfile):

cap.yourdomain.com {
  encode zstd gzip
  reverse_proxy cap-web:3000
}
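One hedged tip: Let's Encrypt rate-limits repeated failed issuance attempts, so while you're still debugging DNS you can point Caddy at their staging CA first. A config sketch (swap back to the plain Caddyfile above once certificates work):

```caddyfile
cap.yourdomain.com {
  # Use Let's Encrypt's staging CA while testing, to avoid production rate limits
  tls {
    ca https://acme-staging-v02.api.letsencrypt.org/directory
  }
  encode zstd gzip
  reverse_proxy cap-web:3000
}
```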

Finally, one piece in this tech stack that is not complicated, chuckle.. Caddy is cool. And obviously, replace yourdomain with your actual domain.

Now run `docker compose up -d` and you're done!

Done?

And that's it already! A few quick notes:
  • You could skip the Resend API keys, but then users can't really sign in: there is no password-based login, and without Resend the email verification links only end up in the server logs instead of being sent out. So.. you probably need them in the end.
  • You can skip the Deepgram and OpenAI API keys - those are for captions (Deepgram) and summaries (OpenAI), though I couldn't get those to work on my instance yet, weird.
  • The in-browser recording is surprisingly capable, so there's no need to download the desktop app (which is also a bit confusing, as it has a "Studio mode" and an "Instant mode", and you have to point it to your self-hosted URL at some point).

Also, the sheer number of env vars makes this setup quite confusing, but I was positively surprised that I got it working after two tries; I would have expected way more. The one place I got stuck was my S3 CORS setup, which was initially wrong and made video uploads fail.

Further improvements:
  • I couldn't get captions to work, even though my Deepgram API key seems correct and my account is topped up. Weird.
    The OpenAI key doesn't seem to do anything while captions are broken; it's probably used to summarize the captions.
  • Hetzner Object Storage can be really slow, especially when you're far away from the storage region. Cloudflare R2 might be really interesting for this use case.

But, all in all, this looks awesome. Back when Loom launched, it would have been hard to imagine that we'd ever have software like this, which lets us self-host a video recording platform.

Good luck self-hosting!

Final thoughts
Was saving ~30€ / month on Loom worth spending an afternoon of CEO time on this? Maybe not, but it was fun. At least there's a bookkeeping simplification: one bill less for our bookkeeper (Loom), as it's now just rolled into our Hetzner spending. But wait, we might now get bills from Resend and Deepgram, too.. chuckle. It was fun though, would do it again.
