Inference tok/s
GPU Temp
Storage Used
Active Containers
🏠 Infrastructure
Core platform architecture, messaging, and service mesh
Architecture
Full stack architecture overview: DGX cluster, Proxmox, Docker, Caddy, DNS
Cognitive Architecture
Multi-agent cognitive architecture design for CogStack
Benchmark Apr 3
Latest inference benchmark results across all models
CogStack Dashboard
Live CogStack monitoring and agent activity dashboard
MCP Servers
Model Context Protocol server registry and configuration
Plugins
Active plugin registry and management
Skills Reference
Complete skills catalog and documentation
Mattermost
Team messaging and agent communication hub
Matrix
Federated messaging and bridging infrastructure
LMP Router
Intelligent LLM request routing and load balancing
📡 Node Status
Simulated polling — Next check in 30s
dark.lmphq.net
10.1.2.155 · Nano-30B
Online
8ms
spark.lmphq.net
10.1.2.150 · Nano-30B
Online
7ms
stark.lmphq.net
10.1.2.151 · Qwen3.5-35B
Offline
bark.lmphq.net
10.1.2.153 · Qwen3.5-35B
Degraded
143ms
prod.lmphq.net
178.63.139.40 · Main server
Degraded
150ms
docker.lmphq.net
10.1.2.180 · Docker host
Online
2ms
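The status poll above can be sketched as a small loop. A minimal sketch, assuming each node exposes a plain HTTP health endpoint (the `/healthz` path is hypothetical, not confirmed by the dashboard) and that a reachable node slower than 100 ms is bucketed as Degraded, consistent with the latencies shown:

```python
import time
import urllib.request

NODES = [
    "dark.lmphq.net", "spark.lmphq.net", "stark.lmphq.net",
    "bark.lmphq.net", "prod.lmphq.net", "docker.lmphq.net",
]

def bucket(latency_ms):
    """Coarse status bucket: reachable but over 100 ms counts as Degraded."""
    return "Degraded" if latency_ms > 100.0 else "Online"

def check(host, timeout=2.0):
    """Return (status, latency_ms); latency is the request round-trip time."""
    url = f"http://{host}/healthz"  # hypothetical health endpoint
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            pass
    except OSError:
        return "Offline", None  # unreachable nodes show no latency
    latency_ms = (time.monotonic() - start) * 1000
    return bucket(latency_ms), latency_ms
```

With the table's numbers, 8 ms buckets as Online and 143 ms as Degraded.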
🤖 MCP / AI Platform
Model Context Protocol servers, Docker MCP toolkit, and AI integrations
LMP Custom Servers
8 custom MCP servers: cogstack, gitlab, proxmox, docker, ceph, caddy, dns, mattermost
Custom-built for LMP infrastructure
Docker MCP Toolkit
docker-mcp, compose-mcp, portainer-mcp for container management
Container orchestration layer
Cloudflare MCP
DNS, Workers, R2, KV management via MCP
Edge infrastructure control
High Priority Picks
Sentry, Grafana, Prometheus, GitHub, Linear MCP servers queued for integration
Next integration targets
Stage / Test
Staging environment MCP servers for pre-production validation
Pre-production pipeline
mcporter Status
MCP server health monitoring and auto-restart daemon
Health monitoring active
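mcporter's internals aren't shown here; as a rough sketch of the monitor-and-auto-restart idea, assuming the MCP servers run as Docker containers with health checks (the container names below are hypothetical):

```python
import subprocess

# Hypothetical container names; the real mcporter registry is not shown here.
SERVERS = ["mcp-cogstack", "mcp-gitlab", "mcp-proxmox"]

def docker_healthy(name):
    """Ask Docker whether the container's health check currently passes."""
    out = subprocess.run(
        ["docker", "inspect", "-f", "{{.State.Health.Status}}", name],
        capture_output=True, text=True,
    )
    return out.returncode == 0 and out.stdout.strip() == "healthy"

def watchdog_pass(healthy=docker_healthy,
                  restart=lambda n: subprocess.run(["docker", "restart", n])):
    """One monitoring pass: restart each unhealthy server, return their names."""
    restarted = []
    for name in SERVERS:
        if not healthy(name):
            restart(name)
            restarted.append(name)
    return restarted
```

The health probe and restart action are injectable, so the pass logic can be exercised without a Docker daemon.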
⚡ GPU / Inference
GPU fleet, model serving, and RDMA fabric
GPU Fleet
4x DGX Spark nodes with GB10 Grace Blackwell Superchips, 128 GB unified memory each
512 GB total unified memory
Models
Qwen3-30B-A3B, Qwen2.5-Coder-32B, DeepSeek-R1-0528-Qwen3-8B, Llama-3.3-70B-Instruct, + more
6+ models in rotation
RDMA Fabric
4x 200 Gbps ConnectX-8 SuperNIC, RoCEv2, GPUDirect RDMA for tensor parallelism
800 Gbps aggregate
💼Business
Client projects, licensing, policy, and CRM
Cloud Cost Comparison
DGX Spark vs cloud GPU: 4-12x cost advantage at current utilization
€400/mo vs €1,600-4,800/mo cloud equivalent
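The 4-12x figure follows directly from the quoted monthly costs:

```python
# Sanity check of the 4-12x claim: quoted cloud range divided by DGX cost.
dgx_eur_per_month = 400
cloud_low_eur, cloud_high_eur = 1_600, 4_800

low_ratio = cloud_low_eur / dgx_eur_per_month    # 4.0
high_ratio = cloud_high_eur / dgx_eur_per_month  # 12.0
```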
License Mapper
Software license tracking and compliance dashboard
AI Usage Policy
Company AI usage guidelines, data handling, and compliance framework
Harry CRM
Client relationship management with AI-powered insights
🚀Projects
Active development projects and experiments
BorderPatrol
Network security monitoring and intrusion detection system
Stavebna Cestaky
Construction travel expense management platform
IOL Agent
Intelligent online learning agent for automated course interaction
AutoCAD Digitization
AI-powered architectural drawing digitization pipeline
WoW Research
World of Warcraft AI agent research and bot framework
Resend Migration
Email infrastructure migration to Resend platform
EventAlpha
AI-driven event prediction and alpha generation system
📚Knowledge Base
Guides, session logs, and reference documents
🧠 Agent Council
The four musketeer agents powering LMP operations
Koda
PM / Coordinator
claude-distilled-27B
Nexus
Infrastructure
claude-distilled-27B
Atlas
All-rounder
claude-distilled-27B
Catalyst
Business
claude-distilled-27B
⚠️ Open Issues
Admin endpoint unresponsive, config reload failing
Proxy bound to 8001 instead of expected 8000
Container health check failing, needs restart
Cloudflare zone missing A/CNAME records for new services
Wildcard cert renewal due in 46 days
Second RDMA cable pending install, disk usage warning