Embedded Surveillance

Real-time video monitoring system using YOLOv8 for object detection, FastAPI for REST APIs, and SQLite for event storage. Runs on PCs and embedded devices such as the NVIDIA Jetson Nano, with configurable FPS and resolution and a modular architecture.

Python FastAPI OpenCV Ultralytics Windows Docker

Highlights

  • Real-time detection: YOLOv8 with configurable FPS and resolution
  • Event-driven: Detections flow through an EventEngine into SQLite
  • REST API: FastAPI endpoints for events and health checks
  • Headless-friendly: Works without display; suitable for servers and edge
  • Portable demos: Run from sample videos; no camera required
  • Tests & benchmarks: Pytest coverage and performance measurements

Why “Embedded Surveillance”?

This project is called “Embedded Surveillance” (not just “Surveillance”) because it’s designed for embedded devices like the NVIDIA Jetson Nano, Raspberry Pi, or other low-power edge computers. Unlike general surveillance systems that run on powerful servers, this focuses on:

  • Resource efficiency: Optimized for limited CPU/GPU, memory, and power
  • Headless operation: No GUI dependencies; runs in the background
  • Hardware abstraction: Easy to swap between cameras, files, or RTSP streams
  • Edge processing: Detections happen locally, reducing bandwidth and latency
  • Field use: Configurable for deployment on drones, security cameras, or IoT devices

Note: This project has not been physically tested on embedded hardware yet. However, it is designed with embedded constraints in mind (e.g., CPU-only inference, configurable FPS/resolution, no heavy dependencies). Testing on sample videos exercises the same pipeline an embedded deployment would run, and the architecture supports straightforward hardware integration.

Testing Without Hardware

Testing and demos use sample video files instead of physical cameras or hardware. This simplifies development and CI/CD:

  • Sample videos: Stored in data/videos/ (e.g., Pedestrian-overpass.mp4)
  • Mocked detections: Unit tests use fake detection data to avoid hardware dependencies (see the sketch after this list)
  • Portable: Run on any machine without cameras; works on laptops or servers
  • Benchmarking: Measures performance on video files for consistent results
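
Because detections are mocked, a unit test can hand the event pipeline a fabricated detection instead of running YOLOv8. A minimal sketch, assuming nothing about the project's actual EventEngine API (the handle_detection name and detection fields below are illustrative):

# tests/test_events_mocked.py -- illustrative sketch, not the project's actual tests
from unittest.mock import MagicMock

def test_detection_becomes_event():
    # Stand-in for the real EventEngine: anything with a handle_detection() method
    engine = MagicMock()
    fake_detection = {"label": "person", "confidence": 0.91, "bbox": [10, 20, 110, 220]}

    # Feed the fake detection instead of running YOLOv8 on a frame
    engine.handle_detection(fake_detection)

    # Verify the pipeline entry point was invoked with the mocked data
    engine.handle_detection.assert_called_once_with(fake_detection)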

To test with real hardware, change video_source in config/default.yaml to 0 (webcam) or an RTSP URL.

Building for Production and Field Application

To make this production-ready for real-world surveillance:

Hardware Integration

  • Add support for multiple cameras (USB, IP, RTSP) in camera/video_source.py; a sketch follows this list
  • Implement hardware-specific optimizations (e.g., CUDA for Jetson, OpenVINO for Intel)
  • Add GPIO controls for Raspberry Pi (motion sensors, LEDs)
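
A possible shape for that abstraction, as a sketch (the VideoSource name and interface are illustrative, not necessarily what camera/video_source.py exposes today):

# Illustrative sketch of a video source abstraction built on OpenCV
import cv2

class VideoSource:
    """Wraps a webcam index (int), a video file path, or an RTSP URL."""

    def __init__(self, source):
        self.cap = cv2.VideoCapture(source)
        if not self.cap.isOpened():
            raise RuntimeError(f"Could not open video source: {source}")

    def frames(self):
        # Yield frames until the stream ends or the camera disconnects
        while True:
            ok, frame = self.cap.read()
            if not ok:
                break
            yield frame

    def release(self):
        self.cap.release()

# Usage: VideoSource(0), VideoSource("data/videos/Pedestrian-overpass.mp4"),
# or VideoSource("rtsp://user:pass@camera-ip/stream")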

Security & Reliability

  • Encrypt database storage and API communications (HTTPS, TLS)
  • Add authentication to FastAPI endpoints (OAuth, JWT); a token-based sketch follows this list
  • Implement failover: switch to backup cameras or offline mode
  • Add logging and monitoring (Prometheus metrics, alerts)
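
One way to add token authentication, sketched with FastAPI's built-in HTTPBearer helper (a static token stands in for full OAuth/JWT validation here; values are placeholders, not current project code):

# Illustrative sketch: protect an endpoint with a bearer token
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
security = HTTPBearer()
API_TOKEN = "change-me"  # placeholder; load from an env var or secrets manager in practice

def require_token(credentials: HTTPAuthorizationCredentials = Depends(security)):
    if credentials.credentials != API_TOKEN:
        raise HTTPException(status_code=401, detail="Invalid token")

@app.get("/events", dependencies=[Depends(require_token)])
def list_events():
    return []  # the real endpoint would query SQLite for stored events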

Scalability

  • Containerize with Docker for easy deployment
  • Add Kubernetes manifests for multi-device clusters
  • Implement event streaming (Kafka, MQTT) for distributed systems; an MQTT sketch follows this list
  • Migrate the database to PostgreSQL or cloud storage for larger deployments
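
A sketch of the MQTT option, assuming the paho-mqtt package, version 2.0 or newer (not currently a project dependency; the broker address and topic are placeholders):

# Illustrative sketch: publish detection events over MQTT with paho-mqtt
import json
import paho.mqtt.client as mqtt

# paho-mqtt >= 2.0 requires an explicit callback API version
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("broker.local", 1883)  # placeholder broker address

def publish_event(event: dict):
    # Serialize each detection event to JSON and push it to a shared topic
    client.publish("surveillance/events", json.dumps(event), qos=1)

publish_event({"label": "person", "confidence": 0.91, "timestamp": "2024-01-01T12:00:00Z"})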

Features for Field Use

  • Web dashboard for remote monitoring (React/Vue frontend)
  • Mobile app integration (notifications, live streams)
  • Automated alerts (email, SMS) for detection events; an email sketch follows this list
  • Power management (sleep modes, battery optimization)
  • Offline capabilities (store events locally, sync when online)
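
An email alert, for example, could be a thin wrapper around the standard library's smtplib (the SMTP host, sender, and credentials below are placeholders):

# Illustrative sketch: send an email alert for a detection event
import smtplib
from email.message import EmailMessage

def send_alert(event: dict, recipient: str):
    msg = EmailMessage()
    msg["Subject"] = f"Detection: {event['label']} ({event['confidence']:.2f})"
    msg["From"] = "alerts@example.com"  # placeholder sender
    msg["To"] = recipient
    msg.set_content(f"Event details: {event}")

    # Placeholder SMTP server; use real credentials and TLS settings in production
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("alerts@example.com", "app-password")
        server.send_message(msg)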

Testing & Quality

  • Add integration tests with real hardware in CI/CD
  • Performance profiling and optimization for target devices
  • End-to-end testing with simulated environments

Architecture

flowchart LR
    V[Video Source] --> CV[OpenCV]
    CV --> YOLO[YOLOv8]
    YOLO --> EE[Event Engine]
    EE --> DB[(SQLite)]
    EE --> API[FastAPI]

Modules are isolated and testable. Hardware sources (webcam/file/RTSP) are abstracted in camera/ and detections are translated into events in events/.
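
In code, the wiring roughly follows the diagram; a simplified sketch (function and variable names are illustrative, not the project's exact module APIs):

# Illustrative sketch: source -> OpenCV -> YOLOv8 -> events -> storage
import time
import cv2
from ultralytics import YOLO

def run_pipeline(source, store_event, model_path="yolov8n.pt", fps_limit=5):
    model = YOLO(model_path)
    cap = cv2.VideoCapture(source)             # webcam index, file path, or RTSP URL
    frame_interval = 1.0 / fps_limit
    while cap.isOpened():
        start = time.time()
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame, verbose=False)  # YOLOv8 inference on one frame
        for box in results[0].boxes:
            event = {"label": model.names[int(box.cls)], "confidence": float(box.conf)}
            store_event(event)                 # e.g. insert a row into SQLite
        # Throttle to the configured FPS
        time.sleep(max(0.0, frame_interval - (time.time() - start)))
    cap.release()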

Project Structure

├── main.py              # Application entry point
├── benchmark.py         # Performance measurement
├── requirements.txt     # Pip dependencies
├── pyproject.toml       # Project metadata (uv/ruff optional)
├── ai/                  # YOLOv8 inference wrapper
├── camera/              # Video capture abstraction
├── events/              # Event processing pipeline
├── storage/             # SQLite operations
├── api/                 # FastAPI endpoints
├── config/              # Configuration files
├── benchmarks/          # Performance results
├── tests/               # Unit tests
└── data/videos/         # Sample videos

Quick Start (Windows)

Choose pip (simple) or uv (fast) to set up.

Using pip (PowerShell):

python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install --upgrade pip
pip install -r requirements.txt

# Run with a sample video (no camera required)
python main.py --source data/videos/Pedestrian-overpass.mp4

# Run tests
pytest

Using uv (optional):

uv sync
uv run python main.py --source data/videos/Pedestrian-overpass.mp4
uv run pytest

Configuration

Edit config/default.yaml:

video_source: 0              # 0 = webcam, or path to video file
fps_limit: 5                 # Max frames per second
resolution: [640, 480]       # Frame resolution
profile: jetson_nano         # Hardware profile
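
These settings can be read with a standard YAML loader; a minimal sketch, assuming PyYAML and the keys shown above:

# Illustrative sketch: load config/default.yaml and unpack the settings
import yaml

with open("config/default.yaml") as f:
    config = yaml.safe_load(f)

source = config["video_source"]        # 0, a file path, or an RTSP URL
fps_limit = config["fps_limit"]        # max frames per second to process
width, height = config["resolution"]   # frame size for capture/resize
print(f"Profile {config['profile']}: {fps_limit} FPS at {width}x{height}")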

Run Modes

  • CLI processing:
python main.py --source data/videos/Pedestrian-overpass.mp4
  • API server:
uvicorn api.main:app --host 0.0.0.0 --port 8000

Endpoints: /, /health, /events, /events/{id}. Interactive docs at /docs.
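
Once the server is up, the endpoints can be exercised from Python with the requests library (the response field names below are illustrative):

# Illustrative sketch: query the running API
import requests

base = "http://localhost:8000"

print(requests.get(f"{base}/health").json())      # liveness check
events = requests.get(f"{base}/events").json()    # list stored detection events
if events:
    first_id = events[0]["id"]                    # "id" field assumed for illustration
    print(requests.get(f"{base}/events/{first_id}").json())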

Tooling 🔧

This project supports common Python development tooling and both pip + venv and uv workflows. Below are cross-platform commands for Windows (PowerShell) and Linux/macOS (bash).

Python + venv (Windows - PowerShell)

# Create and activate a virtual environment
python -m venv .venv
.\.venv\Scripts\Activate.ps1

# Upgrade and install dependencies
python -m pip install --upgrade pip
pip install -r requirements.txt

# Run the application
python main.py --source data/videos/Pedestrian-overpass.mp4

# Tests & quality
pytest                # Unit tests
pytest --cov=.        # Coverage
ruff check .          # Lint
ruff format .         # Auto-format

Python + venv (Linux / macOS - bash)

# Create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Upgrade and install dependencies
python -m pip install --upgrade pip
pip install -r requirements.txt

# Run the application
python main.py --source data/videos/Pedestrian-overpass.mp4

# Tests & quality
pytest                # Unit tests
pytest --cov=.        # Coverage
ruff check .          # Lint
ruff format .         # Auto-format

Using uv (optional - fast developer flow)

Install uv (one-liner):

  • PowerShell (Windows):
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
  • bash (Linux / macOS):
curl -sSL https://astral.sh/uv/install.sh | sh

Common uv commands:

uv sync                             # Sync dependencies from pyproject
uv run python main.py --source data/videos/Pedestrian-overpass.mp4
uv run pytest
uv run ruff check .

Note: Development dependencies (like ruff) are declared in pyproject.toml and included in requirements.txt for pip users.

Tests are hardware-independent; detections are mocked for reliability.

Benchmarking

python benchmark.py

Metrics: latency, CPU, memory, FPS. Results saved to benchmarks/results.csv.
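
A per-frame measurement loop in the spirit of benchmark.py might look like this (a sketch using psutil; the real script's metrics and CSV columns may differ):

# Illustrative sketch: log per-frame latency, CPU, and memory on a sample video
import csv
import time
import cv2
import psutil
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("data/videos/Pedestrian-overpass.mp4")
process = psutil.Process()

with open("benchmarks/results_example.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["latency_s", "cpu_percent", "rss_mb"])
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        start = time.perf_counter()
        model(frame, verbose=False)
        writer.writerow([time.perf_counter() - start,
                         psutil.cpu_percent(),
                         process.memory_info().rss / 1e6])
cap.release()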

Docker (Optional)

docker build -t embedded-surveillance .
docker run -p 8000:8000 embedded-surveillance

Or use Compose:

docker-compose up

Jetson Nano Notes

  • ARM64-friendly dependencies and CPU-only baseline
  • Optional CUDA acceleration when available; see the device-selection sketch after the commands below

python3 -m pip install -r requirements.txt
python3 main.py --source /path/to/video.mp4
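
Device selection can stay automatic: check for a CUDA-enabled PyTorch build and fall back to CPU. A small sketch (assumes a CUDA-capable torch build is installed on the Jetson):

# Illustrative sketch: use the GPU when a CUDA-enabled PyTorch build is present
import torch
from ultralytics import YOLO

device = "cuda" if torch.cuda.is_available() else "cpu"
model = YOLO("yolov8n.pt")
# stream=True yields results frame by frame instead of loading everything into memory
for result in model("data/videos/Pedestrian-overpass.mp4", device=device, stream=True, verbose=False):
    pass  # detections for each frame are in result.boxes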

Yocto Integration

Yocto is a build system for creating custom embedded Linux distributions. This project can be integrated into a Yocto image for devices like Raspberry Pi or custom boards.

Prerequisites

  • Yocto environment set up (e.g., Poky)
  • Target device supports Python 3.10+, OpenCV, and PyTorch (for YOLOv8)

Steps

  1. Create a custom layer:
    bitbake-layers create-layer meta-embedded-surveillance
    bitbake-layers add-layer meta-embedded-surveillance
    
  2. Add a recipe for the project: Create meta-embedded-surveillance/recipes-embedded-surveillance/embedded-surveillance/embedded-surveillance_1.0.bb:
    SUMMARY = "Embedded Surveillance with YOLOv8"
    LICENSE = "MIT"
    LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
    
    SRC_URI = "git://github.com/yourusername/embedded-surveillance.git;protocol=https;branch=main"
    SRCREV = "${AUTOREV}"
    
    DEPENDS = "python3 python3-pip python3-opencv python3-pytorch"
    RDEPENDS:${PN} = "python3-core python3-numpy python3-fastapi python3-uvicorn"
    
    do_install() {
        install -d ${D}${bindir}
        install -m 0755 ${S}/main.py ${D}${bindir}/embedded-surveillance
        # Install other files as needed
    }
    
  3. Add the package to your image and build it: append IMAGE_INSTALL:append = " embedded-surveillance" to conf/local.conf, then:
    bitbake core-image-minimal
    
  4. Deploy and run: Flash the image to your device. Run the application:
    embedded-surveillance --source /path/to/video.mp4
    

Notes

  • Adjust dependencies based on your target (e.g., add CUDA for GPU acceleration).
  • For model files, include them in the recipe or download at runtime.
  • Test on a development board before production deployment.

Roadmap

  • Multi-camera support
  • Web dashboard
  • RTSP streaming
  • CI badges/workflows