DEPLOYMENT
Self-host guide
Run ZRO on your own infrastructure. Docker Compose for single-server deployments, Kubernetes for production clusters, and full air-gap support for classified or regulated environments.
System requirements
Note
GPU is optional. The AI inference pipeline runs on CPU if no GPU is detected, with degraded response times. CPU-only inference is suitable for low-volume deployments.
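To see which path the host will take, a quick spot-check of GPU visibility is possible (this assumes NVIDIA hardware and the nvidia-smi tool; neither is required by ZRO itself):

```shell
# Spot-check GPU visibility (assumes NVIDIA tooling; ZRO falls back to CPU without it)
if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
  echo "gpu: available"
else
  echo "gpu: not detected, expect CPU-only inference"
fi
```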
Docker Compose deployment
1
Clone the repository
bash
git clone https://github.com/zro-lab/zro-site.git
cd zro-site
2
Configure environment variables
bash
cp .env.example .env.local
# Open .env.local in your editor and fill in required values
Required variables: at minimum, DATABASE_URL and JWT_SECRET (both are used again later in this guide).
Important
Never commit .env.local to version control. The file is gitignored by default. If you accidentally expose JWT_SECRET, rotate it immediately — all active sessions will be invalidated.
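A rotated JWT_SECRET should be high-entropy. One common way to generate a suitable value (using openssl, which this guide does not otherwise require) is:

```shell
# Generate a 48-byte (384-bit) random secret, base64-encoded
openssl rand -base64 48
```

Paste the output into .env.local and restart the stack so all services pick up the new value.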
3
Install Ollama and pull the inference model
Ollama must be running on the same host or accessible via network before starting the ZRO stack.
bash
# Install Ollama (Linux)
curl -fsSL https://ollama.com/install.sh | sh
# Pull the recommended inference model (see release notes for current version)
ollama pull <model-name>
# Verify Ollama is running
ollama list
4
Start the stack
bash
docker compose up -d
# Verify containers are healthy
docker compose ps
# Tail application logs
docker compose logs -f app
5
Run database migrations
bash
docker compose exec app npx drizzle-kit migrate
Note
Migrations are idempotent. Running them on an already-migrated database is safe and will apply only new changes.
6
Verify the deployment
bash
curl https://yourdomain.com/site/api/health
# Expected response:
{
"status": "ok",
"db": "connected",
"ai": "reachable",
"uptime": 42
}
Nginx reverse proxy
The following configuration proxies HTTPS traffic to the ZRO application. Place this in your nginx sites-enabled directory.
nginx
server {
listen 443 ssl http2;
server_name yourdomain.com;
ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
location /site/ {
proxy_pass http://127.0.0.1:<PORT>/site/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
Kubernetes deployment
Production Kubernetes manifests are provided in /deploy/k8s/. Apply in the following order:
bash
kubectl apply -f deploy/k8s/namespace.yaml
kubectl apply -f deploy/k8s/secrets.yaml # Create manually first
kubectl apply -f deploy/k8s/configmap.yaml
kubectl apply -f deploy/k8s/postgres-pvc.yaml
kubectl apply -f deploy/k8s/postgres.yaml
kubectl apply -f deploy/k8s/redis.yaml
kubectl apply -f deploy/k8s/app.yaml
kubectl apply -f deploy/k8s/service.yaml
kubectl apply -f deploy/k8s/ingress.yaml # Adapt to your ingress controller
Important
Create the secrets.yaml manifest manually with base64-encoded values for DATABASE_URL and JWT_SECRET. Never store plaintext secrets in Git.
Air-gapped deployment
ZRO makes zero outbound network requests after initial setup. To deploy in a fully disconnected environment:
1
Pre-pull all container images on a connected machine
bash
docker pull node:20-alpine
docker pull postgres:15-alpine
docker pull redis:7-alpine
# Save to archives
docker save node:20-alpine | gzip > node20.tar.gz
docker save postgres:15-alpine | gzip > postgres15.tar.gz
docker save redis:7-alpine | gzip > redis7.tar.gz
2
Pre-pull the Ollama model
bash
# On connected machine — pull the recommended model (see release notes)
ollama pull <model-name>
# Model blobs are in ~/.ollama/models; copy this directory to the target host
3
Transfer and load on the air-gapped host
bash
docker load < node20.tar.gz
docker load < postgres15.tar.gz
docker load < redis7.tar.gz
# Copy Ollama model blobs
cp -r ollama-models ~/.ollama/models
# Start normally
docker compose up -d
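Once the air-gapped stack is up, the health endpoint from step 6 applies unchanged. A minimal smoke test might look like the following; the sample payload is copied from step 6, so substitute a real curl against your deployment:

```shell
# Hypothetical smoke test: succeed only if the health endpoint reports "status":"ok"
# response=$(curl -s http://127.0.0.1:<PORT>/site/api/health)
response='{"status":"ok","db":"connected","ai":"reachable","uptime":42}'  # sample payload
if echo "$response" | grep -q '"status":"ok"'; then
  echo "healthy"
else
  echo "unhealthy"
  exit 1
fi
```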