🧠 Project: AI Console (Open-WebUI)

Objective: Deploy a centralized, private web interface for interacting with the Federation's local LLMs.

🏗️ Architecture

  • UI Host: holodeck-lab (192.168.1.11 - LXC)
  • Inference Engine: starfleet-compute (192.168.1.35 - Physical Mac Mini)
  • Model: Mistral-Nemo (12B)
  • Standard: UI and inference are decoupled. Heavy lifting (Ollama) stays on physical hardware; web traffic stays in the lab environment.
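
Because the two hosts are decoupled, the UI talks to Ollama purely over its HTTP API. As a quick sanity check that holodeck-lab can reach starfleet-compute, a minimal sketch (assuming Ollama is on its default port 11434 and using its standard `/api/tags` endpoint; the function names are illustrative):

```python
import json
import urllib.request

# starfleet-compute, assuming Ollama's default port
OLLAMA_BASE = "http://192.168.1.35:11434"

def model_names(tags_payload: dict) -> list:
    """Pull model names out of an Ollama /api/tags response body."""
    return [m["name"] for m in tags_payload.get("models", [])]

def list_remote_models(base_url: str = OLLAMA_BASE, timeout: float = 5.0) -> list:
    """Query the Ollama tags endpoint and return the models it serves."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
        return model_names(json.load(resp))
```

Run from the Holodeck container; Mistral-Nemo should appear in the returned list once step 3 below is wired up.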

🛠️ Hardware Requirements (Holodeck)

  • RAM: 4GB (Upgraded from 2GB baseline)
  • Disk: 10GB (For Docker images and UI cache)
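
The resource bump can be applied from the Proxmox host with `pct` (a sketch; the VMID 111 below is a placeholder, substitute Holodeck's actual container ID):

```shell
# On the Proxmox host (VMID 111 is hypothetical):
pct set 111 --memory 4096     # raise RAM from the 2GB baseline to 4GB
pct resize 111 rootfs 10G     # grow the root disk to 10GB (grow-only operation)
```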

🚀 Deployment Steps

  1. Upgrade Holodeck LXC resources.
  2. Deploy open-webui container on Holodeck.
  3. Link to Starfleet Ollama API via internal IP.
  4. Update Master Dashboard.
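
Steps 2 and 3 can be sketched as a compose file on Holodeck, assuming Ollama listens on its default port 11434 on starfleet-compute and is configured there to accept LAN connections (e.g. `OLLAMA_HOST=0.0.0.0`):

```yaml
# docker-compose.yml on holodeck-lab (assumed layout)
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "3000:8080"                                  # UI at http://192.168.1.11:3000
    environment:
      - OLLAMA_BASE_URL=http://192.168.1.35:11434    # starfleet-compute internal IP
    volumes:
      - open-webui-data:/app/backend/data            # chat history and UI cache
    restart: unless-stopped

volumes:
  open-webui-data:
```

Keeping state in a named volume means the container can be recreated on upgrade without losing chat history.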

🔒 Security

  • Internal network access only (VPN required).
  • Not exposed through public-facing Caddy records during the initial phase.