Vector Stream Systems

Three steps

License. Pull. Run.

1. Get a license

Contact Vector Stream Systems or purchase a license key. Each key activates one production instance.
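If you script your deploys, a quick shape check can catch a mistyped key before the container starts. A minimal sketch — the VOWL-XXXX-XXXX-XXXX-XXXX pattern is inferred from the placeholder used on this page, not from an official spec:

```shell
# Sanity-check a license key's shape before deploying.
# Assumption: keys look like VOWL-XXXX-XXXX-XXXX-XXXX (four groups of
# four characters), as in the placeholder below; not an official spec.
check_key() {
  case "$1" in
    VOWL-????-????-????-????) return 0 ;;
    *) return 1 ;;
  esac
}

check_key "VOWL-AB12-CD34-EF56-GH78" && echo "key format looks valid"
```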

2. Pull the image

bash
docker pull radsilent/vectorowl:latest

3. Run

bash
docker run -d \
  --name vectorowl \
  -e VECTOROWL_LICENSE_KEY=VOWL-XXXX-XXXX-XXXX-XXXX \
  -p 8080:8080 \
  -p 8081:8081 \
  radsilent/vectorowl:latest

Port 8080 serves the UI and API. Port 8081 is the WebSocket real-time sync channel.
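Once the container is up, you can probe the HTTP port from the host. A quick sketch — 8081 speaks WebSocket, so only 8080 is expected to answer a plain HTTP request:

```shell
# Probe the UI/API port from the host. Port 8081 is WebSocket-only,
# so a plain HTTP request there is expected to fail.
if curl -fsS -o /dev/null http://localhost:8080; then
  echo "UI/API up on 8080"
else
  echo "8080 not responding yet"
fi
```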

Want to build from source or run the native desktop app? See the MBSE install guide.

Platform instructions

macOS, Windows, or Linux

The deploy package works anywhere Docker runs. Pick your platform for the exact commands.

macOS

Docker Desktop for Mac

Requires Docker Desktop (Apple Silicon or Intel).

bash · zsh
curl -L https://github.com/radsilent/vectorowl-deploy/archive/main.tar.gz | tar xz
mv vectorowl-deploy-main vectorowl && cd vectorowl
cp .env.example .env
# Edit .env and set VECTOROWL_LICENSE_KEY
docker compose up -d

Open http://localhost:8080.

Windows

Docker Desktop or WSL2

Requires Docker Desktop (WSL2 backend recommended) or WSL2 with Docker Engine.

PowerShell
Invoke-WebRequest -Uri https://github.com/radsilent/vectorowl-deploy/archive/main.tar.gz -OutFile vectorowl-deploy.tar.gz
tar -xzf vectorowl-deploy.tar.gz
Rename-Item vectorowl-deploy-main vectorowl
Set-Location vectorowl
Copy-Item .env.example .env
# Edit .env and set VECTOROWL_LICENSE_KEY
docker compose up -d

Or use WSL2 and run the Linux commands below. Access at http://localhost:8080.

Linux

Docker Engine

Requires Docker Engine + Docker Compose.

bash
curl -L https://github.com/radsilent/vectorowl-deploy/archive/main.tar.gz | tar xz
mv vectorowl-deploy-main vectorowl && cd vectorowl
cp .env.example .env
# Edit .env and set VECTOROWL_LICENSE_KEY
docker-compose up -d

Open http://localhost:8080. If your system has the Compose plugin, use docker compose up -d (space, no hyphen).
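If a script has to run on machines with either flavor, it can detect which one is installed. A small sketch:

```shell
# Detect which Compose flavor is installed: the v2 plugin
# ("docker compose") or the standalone v1 binary ("docker-compose").
compose_cmd() {
  if docker compose version >/dev/null 2>&1; then
    echo "docker compose"
  elif command -v docker-compose >/dev/null 2>&1; then
    echo "docker-compose"
  fi
}

cmd="$(compose_cmd)"
echo "Compose command: ${cmd:-not installed}"
# Then start the stack with: $cmd up -d
```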

Docker Compose

Persistent deployment

For production servers, use Docker Compose with auto-restart.

docker-compose.yml
services:
  vectorowl:
    image: radsilent/vectorowl:latest
    restart: unless-stopped
    ports:
      - "8080:8080"
      - "8081:8081"
    environment:
      VECTOROWL_LICENSE_KEY: VOWL-XXXX-XXXX-XXXX-XXXX

Save as docker-compose.yml, then docker-compose up -d. Use docker compose up -d if you have the Compose plugin.
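For state to survive image upgrades, you will likely also want a volume. A sketch — the /data path is an assumption for illustration (the image's actual data directory is not documented on this page; check before relying on it):

```yaml
services:
  vectorowl:
    image: radsilent/vectorowl:latest
    restart: unless-stopped
    ports:
      - "8080:8080"
      - "8081:8081"
    environment:
      VECTOROWL_LICENSE_KEY: VOWL-XXXX-XXXX-XXXX-XXXX
    volumes:
      - vectorowl-data:/data   # /data is an assumed path, not confirmed

volumes:
  vectorowl-data:
```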

GPU acceleration

Enable PyTorch GPU inference

Docker Run

Requires the NVIDIA Container Toolkit.

bash
docker run -d \
  --name vectorowl \
  --gpus all \
  -e VECTOROWL_LICENSE_KEY=VOWL-XXXX-XXXX-XXXX-XXXX \
  -e VECTOROWL_REQUIRE_TORCH_GPU=true \
  -p 8080:8080 \
  -p 8081:8081 \
  radsilent/vectorowl:latest

Docker Compose

Add a deploy block so Compose reserves the GPU for the container.

docker-compose.yml
services:
  vectorowl:
    image: radsilent/vectorowl:latest
    restart: unless-stopped
    ports:
      - "8080:8080"
      - "8081:8081"
    environment:
      VECTOROWL_LICENSE_KEY: VOWL-XXXX-XXXX-XXXX-XXXX
      VECTOROWL_REQUIRE_TORCH_GPU: "true"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
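After starting with the GPU flags, it is worth confirming the device is actually visible inside the container. A sketch using nvidia-smi, which the NVIDIA runtime makes available in the container:

```shell
# Check whether the vectorowl container can see the GPU.
container_running() {
  docker ps --format '{{.Names}}' 2>/dev/null | grep -qx "$1"
}

if container_running vectorowl; then
  docker exec vectorowl nvidia-smi
else
  echo "container 'vectorowl' is not running; start it first"
fi
```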

Requirements
  • Docker 24+ or any OCI-compatible runtime
  • 1 CPU core, 512 MB RAM minimum (1 GB recommended)
  • Valid VectorOWL license key
  • CPU inference works out of the box — no GPU required
  • Optional: NVIDIA GPU with Container Toolkit for accelerated embedding inference
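The version floor in the first bullet can be checked mechanically. A small sketch; the parse assumes the standard "Docker version X.Y.Z, build ..." output shape:

```shell
# Preflight: confirm the Docker CLI meets the 24+ floor listed above.
major_of() {
  # Extract the major version from a "Docker version X.Y.Z, build ..." line.
  printf '%s\n' "$1" | sed -E 's/^Docker version ([0-9]+)\..*/\1/'
}

if command -v docker >/dev/null 2>&1; then
  ver="$(docker --version)"
  if [ "$(major_of "$ver")" -ge 24 ] 2>/dev/null; then
    echo "Docker OK: $ver"
  else
    echo "Docker below 24 (or unparsable version): $ver"
  fi
else
  echo "Docker CLI not found"
fi
```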