👁️ FirstBreath Vision

The Computer Vision and AI subsystem for the FirstBreath platform. It consists of independent microservices responsible for RTSP stream management, high-performance batch inference (YOLOv11 + TensorRT), and motion metrics processing.

🏗 Architecture

The system is designed as a pipeline of independent microservices communicating via Redis:

  1. 🎥 Camera Manager (services/camera-manager):

    • Connects to RTSP streams using OpenCV (HW accelerated).
    • Publishes raw frames to Redis via a binary protocol (see the publishing sketch after this list).
    • Manages camera lifecycles and tracking scripts.
  2. 🧠 Batch Inference (services/batch-inference):

    • GPU Optimized: Uses NVIDIA TensorRT (FP32) / CUDA.
    • Batch Processing: Aggregates frames from multiple cameras into batches for efficient GPU usage.
    • Auto-Export: Automatically converts .pt models to .engine files optimized for the host hardware (see the export sketch after this list).
    • Publishes detection results back to Redis.
  3. ⚙️ Redis Worker (services/redis-worker):

    • Consumes detection results and calculates motion metrics (velocity, agitation); a metrics sketch follows this list.
    • Inserts metrics into MySQL.
    • Triggers alerts (e.g., "STRESSED" state) via Redis Pub/Sub.
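
For illustration, a minimal sketch of the Camera Manager's frame-publishing step is shown below. The channel name frames:<camera_id>, the struct-packed header, and the RTSP URL are assumptions made for this example; the service's actual binary protocol may differ.

    # Sketch of camera-manager's publishing loop (illustrative only).
    # Assumed: channel "frames:<camera_id>" and a uint32-id + float64-timestamp header.
    import struct
    import time

    import cv2
    import redis

    CAMERA_ID = 1
    RTSP_URL = "rtsp://user:pass@camera-host:554/stream1"  # placeholder URL

    r = redis.Redis(host="redis", port=6379)
    cap = cv2.VideoCapture(RTSP_URL, cv2.CAP_FFMPEG)

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # JPEG-encode the frame to keep the Redis payload small.
        ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 85])
        if not ok:
            continue
        # Illustrative header: camera id (uint32) + capture timestamp (float64).
        header = struct.pack("!Id", CAMERA_ID, time.time())
        r.publish(f"frames:{CAMERA_ID}", header + jpeg.tobytes())

    cap.release()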
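
The export sketch below shows how the Batch Inference auto-export step could look, assuming the Ultralytics YOLO API; the model path, device flag, and dummy batch are placeholders rather than the service's actual configuration.

    # Sketch of batch-inference's auto-export and batched prediction (illustrative).
    # Assumed: the Ultralytics YOLO API; "models/yolo11n.pt" is a placeholder path.
    from pathlib import Path

    import numpy as np
    from ultralytics import YOLO

    PT_MODEL = Path("models/yolo11n.pt")
    ENGINE = PT_MODEL.with_suffix(".engine")

    # TensorRT engines are tied to the local GPU/driver, so the engine is built
    # once on first start (the 2-5 minute compile mentioned under Installation).
    if not ENGINE.exists():
        YOLO(str(PT_MODEL)).export(format="engine", device=0)

    model = YOLO(str(ENGINE))

    # Frames gathered from several cameras are run as a single GPU batch.
    frames = [np.zeros((640, 640, 3), dtype=np.uint8) for _ in range(4)]  # dummy batch
    results = model(frames)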
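
Finally, a metrics sketch for the Redis Worker. The detections channel name, the JSON schema, the velocity/agitation definitions, and the STRESSED threshold are all illustrative assumptions; the MySQL insert is omitted.

    # Sketch of redis-worker's metric loop (illustrative only).
    # Assumed: JSON detections with a bbox centre (cx, cy), timestamp ts, and camera id.
    import json
    from collections import deque

    import redis

    r = redis.Redis(host="redis", port=6379)
    sub = r.pubsub()
    sub.subscribe("detections")

    history = deque(maxlen=30)       # recent (ts, cx, cy) samples for the tracked subject
    AGITATION_THRESHOLD = 50.0       # px/s, illustrative value

    for msg in sub.listen():
        if msg["type"] != "message":
            continue
        det = json.loads(msg["data"])
        history.append((det["ts"], det["cx"], det["cy"]))
        if len(history) < 2:
            continue

        # Velocity: centroid displacement over elapsed time (px/s).
        (t0, x0, y0), (t1, x1, y1) = history[-2], history[-1]
        velocity = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / max(t1 - t0, 1e-6)

        # Agitation: mean speed over the recent window.
        samples = list(history)
        speeds = [
            ((xb - xa) ** 2 + (yb - ya) ** 2) ** 0.5 / max(tb - ta, 1e-6)
            for (ta, xa, ya), (tb, xb, yb) in zip(samples, samples[1:])
        ]
        agitation = sum(speeds) / len(speeds)

        # Metrics would be written to MySQL here; the alert goes out via Pub/Sub.
        if agitation > AGITATION_THRESHOLD:
            r.publish("alerts", json.dumps({"camera": det["camera"], "state": "STRESSED"}))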

🚀 Getting Started

Prerequisites

  • Docker & Docker Compose
  • NVIDIA GPU (Optional, but recommended for inference performance)
  • NVIDIA Container Toolkit (for GPU support)

Installation

  1. Clone the repository:

    git clone git@github.com:FirstBreath/firstbreath-vision.git
    cd firstbreath-vision
  2. Run with Docker Compose:

    docker-compose up --build

    This launches:

    • camera-manager
    • batch-inference (compiles the TensorRT engine on first run, ~2-5 minutes)
    • redis-worker

📊 Observability

All services expose Prometheus metrics (scraped via monitor-net); a minimal exporter sketch follows the list below.

  • Camera Manager: Active scripts, CPU/RAM per camera.
  • Batch Inference: Inference Latency (P95), Batch Size, FPS.
  • Redis Worker: DB Insert Rate, Queue Lag, Alert counts.
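
As an illustration, one of the batch-inference metrics could be exposed with the prometheus_client library as sketched below; the metric names, the port, and the simulated workload are assumptions, not the services' actual identifiers.

    # Illustrative Prometheus exporter (metric names and port are assumptions).
    import random
    import time

    from prometheus_client import Gauge, Histogram, start_http_server

    INFERENCE_LATENCY = Histogram(
        "inference_latency_seconds", "End-to-end batch inference latency"
    )
    BATCH_SIZE = Gauge("inference_batch_size", "Frames per GPU batch")

    start_http_server(8000)  # metrics endpoint scraped by Prometheus over monitor-net

    while True:
        batch = random.randint(1, 8)       # stand-in for a real frame batch
        BATCH_SIZE.set(batch)
        with INFERENCE_LATENCY.time():     # records elapsed time on exit
            time.sleep(0.02 * batch)       # stand-in for model inference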