# Deployment Procedure
This document describes the complete deployment procedure for the FirstBreath solution, from server provisioning to production deployment.
## 1. PaaS — Dokploy
We use Dokploy, a self-hosted open-source PaaS, to manage deployments on our OVH VPS.
### Why Dokploy?
| Criterion | Advantage |
|---|---|
| Open-source | No licensing costs, full control |
| Git integration | Automatic deployment on Git push |
| Native Docker | Each app points to a `docker-compose.dokploy.yml` |
| Built-in Traefik | Reverse proxy, TLS, automatic load balancing |
| Web interface | Visual management of applications, logs, environment variables |
### Application Structure
| Application | Repository | Compose File | Port(s) |
|---|---|---|---|
| `control-hub-back` | Control-Hub-Back | `docker-compose.dokploy.yml` | 8080 (API), 8080 (WS) |
| `firstbreath-vision` | firstbreath-vision | `docker-compose.dokploy.yml` | 4000-4002 (Prometheus) |
| `platform-docs` | FirstBreath-Platform | `apps/docs/Dockerfile` | 80 (Nginx) |
| `firstbreath-showcase` | Firstbreath Showcase | `docker-compose.yml` | 3000 |
## 2. Containerization — Docker
### Multi-stage Build Strategy
All Dockerfiles use a multi-stage build to minimize image size and improve security.
#### Control-Hub-Back (API / WebSocket)

Base (`node:20-alpine`) → Deps (`yarn install`) → Build (`yarn build`) → Production

- Base image: `node:20.12.2-alpine3.18` (pinned version)
- Non-root user (`nodejs:1001`)
- Only the compiled artifacts (`/app/build`) and `node_modules` are copied to the production stage
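The stage chain above can be sketched as a Dockerfile. This is illustrative only — the stage names, lockfile, and entrypoint `build/index.js` are assumptions, not the project's actual files:

```dockerfile
# Stage 1: install dependencies from the lockfile only
FROM node:20.12.2-alpine3.18 AS deps
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile

# Stage 2: compile the application
FROM deps AS build
COPY . .
RUN yarn build

# Stage 3: production — only artifacts, running as a non-root user
FROM node:20.12.2-alpine3.18 AS production
WORKDIR /app
RUN addgroup -g 1001 -S nodejs && adduser -S -u 1001 -G nodejs nodejs
COPY --from=build /app/build ./build
COPY --from=deps /app/node_modules ./node_modules
USER nodejs
CMD ["node", "build/index.js"]
```

Because the final stage copies only what it needs, dev dependencies and the TypeScript sources never reach the production image.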
#### Firstbreath Showcase (Dashboard)

Deps (`pnpm install`) → Builder (`prisma generate` + `next build`) → Runner

- Base image: `node:22-alpine`
- Next.js standalone output for a minimal image
- Prisma migrations separated into a `migrate` service (runs before `web`)
- Non-root user (`nextjs`)
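The "migrate runs before web" ordering can be expressed with a Compose completion dependency. A minimal sketch — service layout and the migration command are assumptions:

```yaml
# Sketch: run Prisma migrations once, then start the web service
services:
  migrate:
    build: .
    command: pnpm prisma migrate deploy
    restart: "no"            # one-shot job, do not restart
  web:
    build: .
    depends_on:
      migrate:
        condition: service_completed_successfully
```

With `service_completed_successfully`, Compose starts `web` only after the `migrate` container has exited with status 0.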
#### Control-Hub Frontend (my-app)

Deps (`pnpm install`) → Builder (`next build`) → Runner

- Next.js standalone output
- Non-root user (`nextjs:1001`)
#### Vision — Camera Manager

- Base image: `ghcr.io/firstbreath/opencv-cuda:latest` (custom image)
- Custom image compiled with CUDA 12.4, cuDNN, CUDA-enabled FFmpeg, and CUDA-enabled OpenCV 4.10
- NVIDIA GPU required (driver capabilities)
#### Vision — Batch Inference

- Base image: `ultralytics/ultralytics:latest`
- Embedded YOLO model (`model.pt`)
- NVIDIA GPU required
#### Vision — Redis Worker

Builder (`pip install`) → Runtime (`python:3.11-slim`)

- Multi-stage build to separate dependency installation from the runtime image
- Non-root user (`appuser`)
- Built-in healthcheck
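A built-in healthcheck for a worker like this can be declared directly in the Dockerfile. The intervals and the `worker.py` pattern below are illustrative assumptions; note that `python:3.11-slim` does not ship `pgrep`, so the `procps` package must be installed in the image:

```dockerfile
# Sketch: mark the container unhealthy if the worker process disappears.
# Requires: RUN apt-get update && apt-get install -y --no-install-recommends procps
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
  CMD pgrep -f "worker.py" > /dev/null || exit 1
```

Orchestrators (and `docker compose ps`) then report the service as `healthy`/`unhealthy` without any external probe.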
#### Documentation (Docusaurus)

Builder (`npm ci` + `npm run build`) → Nginx Alpine

- Static build served by Nginx
- SPA configuration (`try_files`)
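The SPA fallback mentioned above is the standard Nginx `try_files` pattern: unknown paths are served `index.html` so client-side routing can take over. A minimal sketch (the build output path is an assumption):

```nginx
# Sketch: serve the static Docusaurus build, falling back to index.html
server {
    listen 80;
    root /usr/share/nginx/html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
```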
### Custom Base Image: OpenCV CUDA

The `ghcr.io/firstbreath/opencv-cuda` image is built from `nvidia/cuda:12.4.1-cudnn-devel-ubuntu22.04` and compiles:

- FFmpeg 6.1 with CUDA support (`nvenc`, `cuvid`, `npp`)
- OpenCV 4.10 with CUDA, cuDNN, and FFmpeg support
- Python 3.11 with NumPy

This image is stored on the GitHub Container Registry and shared across the Vision services.
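A Vision service then starts from this shared base instead of recompiling the CUDA toolchain on every build. A minimal sketch — the paths and requirements file are assumptions (only `src/manager.py` is named in this document):

```dockerfile
# Sketch: build a Vision service on top of the shared CUDA/OpenCV base
FROM ghcr.io/firstbreath/opencv-cuda:latest
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ ./src/
CMD ["python3", "src/manager.py"]
```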
## 3. Orchestration — Docker Compose
### Production Environment (`docker-compose.dokploy.yml`)
#### Control-Hub-Back Stack
| Service | Image | Healthcheck | Restart |
|---|---|---|---|
| `mysql` | `mysql:8` | `mysqladmin ping` | `always` |
| `redis` | `redis:alpine` | `redis-cli -a $PASS ping` | `unless-stopped` |
| `api` (×2) | Local build | — | `unless-stopped` |
| `websocket` | Local build | — | `unless-stopped` |
| `cloudbeaver` | `dbeaver/cloudbeaver` | — | `unless-stopped` |
Key points:
- API runs in 2 replicas with Traefik load balancing
- Redis configured with `maxmemory 256mb` and the `allkeys-lru` eviction policy
- MySQL `wait_timeout` set to 1 year for long-lived connections
- Traefik labels for HTTPS routing (`api.firstbreath.fr`)
- `init-permissions` service for uploaded-file volumes
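The replicated API behind Traefik can be sketched in Compose as follows. The router name, certificate resolver, and port are illustrative assumptions — only the domain and entrypoint appear in this document:

```yaml
# Sketch: 2 API replicas, routed and load-balanced by Traefik over HTTPS
services:
  api:
    build: .
    deploy:
      replicas: 2
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.rule=Host(`api.firstbreath.fr`)"
      - "traefik.http.routers.api.entrypoints=websecure"
      - "traefik.http.routers.api.tls.certresolver=letsencrypt"
      - "traefik.http.services.api.loadbalancer.server.port=8080"
```

Traefik discovers both replicas through the Docker provider and round-robins requests between them automatically.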
#### Vision Stack
| Service | Image | GPU | Healthcheck |
|---|---|---|---|
| `camera-manager` | Build (`opencv-cuda`) | 1× NVIDIA | `pgrep -f src/manager.py` |
| `batch-inference` | Build (`ultralytics`) | 1× NVIDIA | `pgrep -f inference_service` |
| `redis-worker` | Build (`python-slim`) | — | `pgrep -f "python worker.py"` |
Key points:
- Inter-service communication via the `monitor-net` network
- Access to the Control-Hub MySQL database without public exposure
- Mandatory GPU reservation (`nvidia` driver, `capabilities: [gpu]`)
- TensorRT cache persisted via a Docker volume (`trt-engine-cache`)
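The GPU reservation and the persisted TensorRT cache can be combined in one service definition. A sketch — the cache mount path inside the container is an assumption:

```yaml
# Sketch: GPU-backed Vision service with a persisted TensorRT engine cache
services:
  camera-manager:
    build: .
    networks:
      - monitor-net
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    volumes:
      - trt-engine-cache:/app/.trt-cache   # assumed path

volumes:
  trt-engine-cache:

networks:
  monitor-net:
    external: true
```

If no GPU satisfying the reservation is available, the container fails to start rather than silently running on CPU.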
### Development Environment (`docker-compose.yml`)
The local development environment includes additional services:
- RTSP Server (MediaMTX): camera stream simulation with test videos
- RTSPtoWeb: RTSP → WebRTC/HLS transcoding for the browser
- Prometheus + Grafana: local monitoring identical to production
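These dev-only extras might look like the following in the local Compose file. Image tags, ports, and the test-video mount are all illustrative assumptions:

```yaml
# Sketch: local-only services for stream simulation and monitoring
services:
  rtsp-server:
    image: bluenviron/mediamtx:latest   # MediaMTX, simulates camera streams
    ports:
      - "8554:8554"                     # RTSP
    volumes:
      - ./test-videos:/videos:ro        # assumed location of test footage
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3001:3000"                     # assumed host port
```

Keeping the monitoring pair identical to production means dashboards built locally work unchanged once deployed.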
## 4. Routing and Domains
Routing is managed by Traefik via Docker labels.
| Domain | Service | Entrypoint | TLS |
|---|---|---|---|
| `api.firstbreath.fr` | REST API (×2) | `websecure` | Let's Encrypt |
| `api.firstbreath.fr/socket.io` | WebSocket | `websecure` (priority 100) | Let's Encrypt |
| `db.firstbreath.fr` | CloudBeaver | `websecure` | Let's Encrypt |
| `sonar.firstbreath.fr` | SonarQube | `websecure` | Let's Encrypt |
HTTP → HTTPS redirect: handled by the `redirect-to-https@file` middleware.
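The `@file` suffix means this middleware is declared in Traefik's file provider rather than via Docker labels. A sketch of what that dynamic configuration file typically contains (the exact file is not shown in this document):

```yaml
# Sketch: Traefik dynamic configuration (file provider)
http:
  middlewares:
    redirect-to-https:
      redirectScheme:
        scheme: https
        permanent: true
```

Routers on the plain-HTTP entrypoint then reference `redirect-to-https@file` to forward every request to its HTTPS equivalent.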
## 5. Step-by-step Deployment Procedure
### First Deployment (new server)
1. Install Dokploy on the VPS (official script)
2. Configure each application in the Dokploy interface:
   - Add the Git repository
   - Point to the `docker-compose.dokploy.yml`
   - Configure environment variables
   - Configure domains and TLS certificates
3. Create the Docker networks:
   - `docker network create dokploy-network`
   - `docker network create monitor-net`
4. Deploy via Dokploy (Deploy button or Git push)
### Continuous Deployment (updates)

Once an application is configured, every Git push to the tracked branch triggers an automatic rebuild and redeployment through Dokploy; no manual action is required beyond the verification steps below.
### Post-deployment Verification
- All containers are `healthy` (`docker compose ps`)
- API responds at `https://api.firstbreath.fr`
- WebSockets work (`/socket.io`)
- Grafana dashboards show metrics
- Logs contain no errors (`docker compose logs -f`)