Recent Changes (2026-02-22):
- Updated ID Server: 51.222.12.162:21116 (was 10.0.10.23)
- Updated Relay Server: 51.222.12.162:21117 (was 66.63.182.168)
- Updated Public Key: EPO75IeD+yJo5S5wtKePpyokHGXv9FN1w5Fx+Db5UCk=
- Marked CT 123 (10.0.10.23) as deprecated
- RustDesk is now VPS-only
- Source: Screenshot from 2026-02-22
Services Documentation
Last Updated: 2025-12-29
Status: All critical services operational
This document provides detailed information about all services running in the infrastructure.
Service Overview
Service Inventory Summary
| Service Name | Location | IP Address | Type | Status | Critical |
|---|---|---|---|---|---|
| UCG Ultra Gateway | Home Lab | 10.0.10.1 | Network | ✅ Running | Yes |
| Proxmox (main-pve) | Home Lab | 10.0.10.3 | Virtualization | ✅ Running | Yes |
| Proxmox (pve-router) | Home Lab | 10.0.10.2 | Virtualization | ✅ Running | Yes |
| Proxmox (pve-storage) | Home Lab | 10.0.10.4 | Virtualization | ✅ Running | Yes |
| OpenMediaVault | Home Lab | 10.0.10.5 | Storage | ✅ Running | Yes |
| PostgreSQL | Home Lab | 10.0.10.20 | Database | ✅ Running | Yes |
| Authentik SSO | Home Lab | 10.0.10.21 | Authentication | ✅ Running | Yes |
| n8n | Home Lab | 10.0.10.22 | Automation | ✅ Running | No |
| Home Assistant | Home Lab | 10.0.10.24 | Smart Home | ✅ Running | No |
| Prometheus + Grafana | Home Lab | 10.0.10.25 | Monitoring | ✅ Running | No |
| Dockge | Home Lab | 10.0.10.27 | Container Mgmt | ✅ Running | No |
| Sonarr | Home Lab | 10.0.10.27 | Media | ✅ Running | No |
| Radarr | Home Lab | 10.0.10.27 | Media | ✅ Running | No |
| Prowlarr | Home Lab | 10.0.10.27 | Media | ✅ Running | No |
| Bazarr | Home Lab | 10.0.10.27 | Media | ✅ Running | No |
| Deluge | Home Lab | 10.0.10.27 | Media | ✅ Running | No |
| Calibre-Web | Home Lab | 10.0.10.27 | Media | ✅ Running | No |
| Caddy Internal Proxy | Home Lab | 10.0.10.27 | Proxy | ✅ Running | No |
| Vehicle Tracker | Home Lab | 10.0.10.35 | Web App | 🔄 In Development | No |
| RustDesk ID Server | VPS | 51.222.12.162 | Remote Desktop | ✅ Running | No |
| RustDesk Relay | VPS | 51.222.12.162 | Remote Desktop | ✅ Running | No |
| OpenClaw Gateway | Home Lab | 10.0.10.28 | AI Agent | ✅ Running | No |
| AD5M 3D Printer | Home Lab | 10.0.10.30 | IoT | ✅ Running | No |
| WireGuard VPN | Gaming VPS | 51.222.12.162 | Tunnel | ✅ Running | Yes |
| Caddy Reverse Proxy | VPS | 66.63.182.168 | Proxy | ✅ Running | Yes |
VPS Services
WireGuard VPN Server
Purpose: Site-to-site VPN tunnel connecting VPS and home lab network
Service Details:
- Host: Gaming VPS (51.222.12.162, deadeyeg4ming.vip)
- Service Name: wg-quick@wg0
- Port: 51820/UDP
- Interface: wg0
- Tunnel IP: 10.0.9.1/24
- Configuration: /etc/wireguard/wg0.conf
- Logs: journalctl -u wg-quick@wg0
- Peers: UCG Ultra at 10.0.9.2, VPS Proxy at 10.0.9.3
Configuration:
[Interface]
Address = 10.0.9.1/24
ListenPort = 51820
PrivateKey = [REDACTED]
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT
PostUp = iptables -A FORWARD -o wg0 -j ACCEPT
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT
PostDown = iptables -D FORWARD -o wg0 -j ACCEPT
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
[Peer]
PublicKey = [UCG Ultra public key]
AllowedIPs = 10.0.9.2/32, 10.0.10.0/24
[Peer]
PublicKey = [VPS Proxy public key]
AllowedIPs = 10.0.9.3/32
Startup:
sudo systemctl start wg-quick@wg0
sudo systemctl enable wg-quick@wg0
Health Check:
sudo wg show
sudo systemctl status wg-quick@wg0
ping 10.0.9.2 # UCG Ultra tunnel IP
Status: ✅ Operational - Tunnel stable, traffic flowing
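The handshake timestamps from wg show make a simple staleness probe for this tunnel. A minimal sketch: the wg0 interface name matches the config above, while the 3-minute threshold and the check_handshake helper are illustrative assumptions, not part of the deployed setup.

```shell
# Flag a WireGuard peer as stale when its last handshake is too old.
# `wg show wg0 latest-handshakes` prints "<peer-pubkey>\t<epoch-seconds>".
MAX_AGE=180   # seconds; assumption - tune to taste

check_handshake() {
  # $1 = epoch seconds of last handshake, $2 = current epoch seconds
  age=$(( $2 - $1 ))
  if [ "$age" -gt "$MAX_AGE" ]; then
    echo "STALE (${age}s)"
  else
    echo "OK (${age}s)"
  fi
}

# On the VPS this would be driven by live data:
#   now=$(date +%s)
#   wg show wg0 latest-handshakes | while read -r peer ts; do
#     echo "$peer: $(check_handshake "$ts" "$now")"
#   done
check_handshake 1000 1100   # → OK (100s)
check_handshake 1000 1300   # → STALE (300s)
```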
Caddy Reverse Proxy
Purpose: Routes incoming HTTPS traffic to home lab services via WireGuard tunnel
Service Details:
- Host: VPS (66.63.182.168)
- Service Name: caddy
- Port(s): 80 (HTTP), 443 (HTTPS)
- Configuration: /etc/caddy/Caddyfile
- Logs: journalctl -u caddy
- SSL: Automatic via Let's Encrypt
Current Routes:
# Proxmox (main-pve)
freddesk.nianticbooks.com {
reverse_proxy https://10.0.10.3:8006 {
header_up Host {http.reverse_proxy.upstream.hostport}
header_up X-Forwarded-Host {host}
header_up X-Forwarded-Proto {scheme}
header_up X-Real-IP {remote_host}
header_up X-Forwarded-For {remote_host}
transport http {
tls_insecure_skip_verify
}
}
}
# Home Assistant
bob.nianticbooks.com {
reverse_proxy https://10.0.10.24:8123 {
header_up X-Forwarded-Host {host}
header_up X-Forwarded-Proto {scheme}
header_up X-Real-IP {remote_host}
header_up X-Forwarded-For {remote_host}
transport http {
tls_insecure_skip_verify
}
}
}
# 3D Printer (Prusa AD5M)
ad5m.nianticbooks.com {
reverse_proxy 10.0.10.30:80
}
# Authentik SSO
auth.nianticbooks.com {
reverse_proxy 10.0.10.21:9000
}
# M'Cheyne Bible Reading Plan
bible.nianticbooks.com {
reverse_proxy localhost:8081
encode gzip
header {
Strict-Transport-Security "max-age=31536000"
X-Frame-Options "SAMEORIGIN"
X-Content-Type-Options "nosniff"
}
}
Startup:
sudo systemctl start caddy
sudo systemctl enable caddy
sudo systemctl reload caddy # After config changes
Health Check:
sudo systemctl status caddy
curl -I https://freddesk.nianticbooks.com
curl -I https://ad5m.nianticbooks.com
Status: ✅ Operational - All 5 public domains working
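All five public domains can be swept in one loop by probing each with curl and classifying the status code. A sketch under assumptions: the classify helper and its accepted-code list are illustrative (some upstreams legitimately answer 405 or 501 to HEAD probes), and the live loop is shown commented out because it needs network access from the VPS.

```shell
# Classify an HTTP status code for a quick domain sweep. Proxied upstreams
# may answer 200, a redirect, or 405/501 to a HEAD probe and still be healthy.
classify() {
  case "$1" in
    200|301|302|405|501) echo "UP ($1)" ;;
    000|'')              echo "DOWN (no response)" ;;
    *)                   echo "WARN ($1)" ;;
  esac
}

# Live sweep (run from anywhere with internet access):
#   for d in freddesk bob ad5m auth bible; do
#     code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 \
#       "https://$d.nianticbooks.com")
#     echo "$d.nianticbooks.com: $(classify "$code")"
#   done
classify 200   # → UP (200)
classify 000   # → DOWN (no response)
```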
Home Lab Services
UCG Ultra Gateway
Purpose: Network gateway, DHCP server, firewall, WireGuard VPN client
Service Details:
- IP Address: 10.0.10.1
- DHCP Range: 10.0.10.50-254
- Static Range: 10.0.10.1-49
- WireGuard Interface: wgclt1 (10.0.9.2/24)
- Web Interface: https://10.0.10.1
Health Check:
ping 10.0.10.1
ssh root@10.0.10.1 "ip a show wgclt1" # Check WireGuard interface
Status: ✅ Operational
Proxmox Cluster
main-pve (DL380p - Primary Node)
Purpose: Production workload virtualization (32 cores, 96GB RAM)
Service Details:
- IP Address: 10.0.10.3
- Web Interface: https://10.0.10.3:8006
- Public Domain: https://freddesk.nianticbooks.com
- iLO Management: 10.0.10.13
- Location: Remote (not in office)
- Version: Proxmox VE 8.x
- SSO: Authentik via OpenID Connect
Running VMs/Containers:
- CT 102: PostgreSQL (10.0.10.20)
- CT 121: Authentik SSO (10.0.10.21)
- CT 106: n8n (10.0.10.22)
- CT 127: Dockge (10.0.10.27)
- VM 103: Home Assistant (10.0.10.24)
- Additional containers: See Proxmox web UI for complete list
Health Check:
ping 10.0.10.3
curl -k -I https://10.0.10.3:8006
ssh root@10.0.10.3 "pveversion"
Status: ✅ Operational
pve-router (i5 - Secondary Node)
Purpose: Local development, secondary workloads (8 cores, 8GB RAM)
Service Details:
- IP Address: 10.0.10.2
- DNS: proxmox.nianticbooks.home
- MAC: e4:54:e8:50:90:af
- Web Interface: https://10.0.10.2:8006
- Location: Office (local access available)
- SSO: Authentik via OpenID Connect
Running VMs/Containers:
- CT 100: pve-scripts-local (10.0.10.40)
- CT 101: Twingate connector
Health Check:
ping 10.0.10.2
curl -k -I https://10.0.10.2:8006
Status: ✅ Operational
pve-storage (Storage Host)
Purpose: Hosts OMV storage VM (3.5" drive support)
Service Details:
- Proxmox IP: 10.0.10.4
- OMV VM IP: 10.0.10.5
- Storage: 12TB
- Form Factor: Supports 3.5" drives (unique among nodes)
- SSO: Authentik via OpenID Connect
Health Check:
ping 10.0.10.4
ping 10.0.10.5
Status: ✅ Operational
OpenMediaVault (12TB Storage)
Purpose: Centralized storage for backups, shared data, Proxmox storage
Service Details:
- IP Address: 10.0.10.5
- MAC: bc:24:11:a8:ff:0b
- Web Interface: http://10.0.10.5
- Capacity: 12TB
- NFS Share: /export/backups mounted on Proxmox nodes
Shares:
- /export/backups - Proxmox VM/container backups (NFS)
- Available: 7.3TB, Used: 159GB
Mount Points (on Proxmox nodes):
# /etc/fstab entries on main-pve, pve-router, pve-storage
10.0.10.5:/export/backups /mnt/omv-backups nfs rsize=32768,wsize=32768,vers=3,tcp,timeo=600,retrans=2,_netdev 0 0
Health Check:
ping 10.0.10.5
ssh root@main-pve "df -h /mnt/omv-backups"
showmount -e 10.0.10.5
Status: ✅ Operational - 7.3TB available, 159GB used
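Free space on the backup share can be watched with a small threshold check on df output. A sketch: the 80% threshold and the check_usage helper are illustrative assumptions; the mount point matches the fstab entry above.

```shell
# Warn when the OMV backup share crosses a usage threshold.
THRESHOLD=80   # percent; assumption - adjust as needed

check_usage() {
  # $1 = df-style percentage like "3%"
  pct=${1%\%}   # strip the trailing %
  if [ "$pct" -ge "$THRESHOLD" ]; then
    echo "WARN: ${pct}% used"
  else
    echo "OK: ${pct}% used"
  fi
}

# Live check on a Proxmox node (GNU df):
#   use=$(df --output=pcent /mnt/omv-backups | tail -1 | tr -d ' ')
#   check_usage "$use"
check_usage "3%"    # → OK: 3% used
check_usage "91%"   # → WARN: 91% used
```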
PostgreSQL (Shared Database)
Purpose: Centralized database for Authentik, n8n, RustDesk, Grafana
Service Details:
- Host: main-pve (CT 102)
- IP Address: 10.0.10.20
- Version: PostgreSQL 16
- Port: 5432
- Databases: authentik, n8n, rustdesk (planned), grafana (planned)
Configuration:
- Listen address: 0.0.0.0 (accessible from LAN)
- Max connections: 100
- Shared buffers: 256MB
Startup:
pct exec 102 -- systemctl status postgresql
pct exec 102 -- systemctl restart postgresql
Health Check:
ping 10.0.10.20
pct exec 102 -- sudo -u postgres psql -c "SELECT version();"
Backup:
- Automated daily backup at 2:00 AM
- Script: /usr/local/bin/backup-postgresql.sh
- Location: /mnt/omv-backups/postgres/
- Retention: 7 days
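The backup script itself is not reproduced here; the following is a hypothetical sketch of what /usr/local/bin/backup-postgresql.sh might contain, based only on the schedule, target path, retention window, and database names listed above. The dump_name helper and the per-database pg_dump loop are assumptions, not the actual script.

```shell
#!/bin/sh
# Hypothetical nightly PostgreSQL backup sketch: per-database dumps into the
# NFS-backed path above, with 7-day retention. Illustrative only.
BACKUP_DIR=/mnt/omv-backups/postgres

dump_name() {
  # $1 = database name, $2 = timestamp -> full backup filename
  echo "$BACKUP_DIR/${1}_${2}.sql.gz"
}

# Nightly run (commented out; requires the postgres superuser on CT 102):
#   stamp=$(date +%Y%m%d_%H%M%S)
#   mkdir -p "$BACKUP_DIR"
#   for db in authentik n8n; do
#     sudo -u postgres pg_dump "$db" | gzip > "$(dump_name "$db" "$stamp")"
#   done
#   find "$BACKUP_DIR" -name '*.sql.gz' -mtime +7 -delete   # 7-day retention
dump_name authentik 20251229_020000
```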
Status: ✅ Operational
Authentik SSO
Purpose: Single sign-on authentication for all services
Service Details:
- Host: main-pve (CT 121)
- IP Address: 10.0.10.21
- Port: 9000 (HTTP)
- Version: 2025.10.2
- Database: PostgreSQL on 10.0.10.20
- Public Domain: https://auth.nianticbooks.com (planned)
Configuration:
- Admin User: akadmin
- API Token: [REDACTED]
- Database: authentik @ 10.0.10.20
- Secret Key: [REDACTED]
Active Integrations:
- ✅ Proxmox (main-pve, pve-router, pve-storage) via OpenID Connect
- Client ID: proxmox
- Login: Select "authentik" realm → "Login with authentik" button
- ✅ Grafana (OAuth2 configured)
Planned Integrations:
- Home Assistant (complex - requires proxy provider or LDAP)
- Other services as needed
Startup:
pct exec 121 -- docker compose -f /opt/authentik/docker-compose.yml ps
pct exec 121 -- docker compose -f /opt/authentik/docker-compose.yml restart
Health Check:
curl -I http://10.0.10.21:9000
Status: ✅ Operational - Proxmox SSO working
n8n Workflow Automation
Purpose: Workflow automation and Claude Code integration
Service Details:
- Host: main-pve (CT 106)
- IP Address: 10.0.10.22
- Port: 5678 (HTTP)
- Version: 1.123.5
- Database: PostgreSQL on 10.0.10.20
- Resources: 2 vCPUs, 4GB RAM
Configuration:
- Docker-based deployment
- Database: n8n @ 10.0.10.20
- Authentication: Email/password (OIDC requires Enterprise license)
Claude Code Integration:
- Architecture: n8n → SSH → Claude Code on HOMELAB-COMMAND (10.0.10.10)
- SSH Credential: homelab-command-ssh
- Test workflow: "Claude Code Test" ✅ Verified working
- Use cases: Infrastructure automation, AI workflows
Startup:
pct exec 106 -- docker ps
pct exec 106 -- docker restart n8n
Health Check:
curl -I http://10.0.10.22:5678
Status: ✅ Operational - Basic Claude Code integration working
See Also: N8N-CLAUDE-STATUS.md for integration details
Home Assistant
Purpose: Smart home automation and device control
Service Details:
- Host: main-pve (VM 103)
- IP Address: 10.0.10.24
- Port: 8123 (HTTPS)
- MAC: 02:f5:e9:54:36:28
- Public Domain: https://bob.nianticbooks.com ✅ Working
Integrations:
- Govee Curtain Lights (Local LAN control)
- Sylvania Smart+ WiFi Plug via LocalTuya
- Digital Loggers Web Power Switch (10.0.10.88)
- Wyoming Protocol voice assistant (Gaming PC 10.0.10.10)
- ESPHome (runs as HA add-on)
- Weather (Met.no)
- Local Todo lists
Configuration:
- SSL Certificate: Local CA certificate
- Trusted Proxies: 127.0.0.1, 10.0.9.3 (VPS Proxy WireGuard IP)
Startup:
# Via Proxmox
qm status 103
qm start 103
qm shutdown 103
# Inside VM
ssh root@10.0.10.24 "ha core restart"
Health Check:
ping 10.0.10.24
curl -k -I https://10.0.10.24:8123
Status: ✅ Operational - Accessible locally and publicly via HTTPS
See Also: home-assistant/ directory for configuration files
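Beyond the TCP-level curl check, Home Assistant's REST API can distinguish "up" from "authenticated": /api/ returns 401 without a token and 200 with a valid long-lived access token. That endpoint behavior is standard Home Assistant; the api_state helper and the HA_TOKEN variable below are illustrative assumptions.

```shell
# Interpret the HTTP status code from Home Assistant's /api/ endpoint.
api_state() {
  case "$1" in
    200) echo "up (authenticated)" ;;
    401) echo "up (token missing/invalid)" ;;
    *)   echo "down or unreachable ($1)" ;;
  esac
}

# Live probe (HA_TOKEN is a hypothetical long-lived access token):
#   code=$(curl -k -s -o /dev/null -w '%{http_code}' \
#     -H "Authorization: Bearer $HA_TOKEN" https://10.0.10.24:8123/api/)
#   api_state "$code"
api_state 401   # → up (token missing/invalid)
```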
Prometheus + Grafana (Monitoring)
Purpose: Infrastructure monitoring, metrics collection, and visualization
Service Details:
- Host: main-pve (10.0.10.25)
- Grafana Port: 3000 (HTTP)
- Prometheus Port: 9090 (HTTP)
- Type: VM or Container (TBD - need to verify)
Components:
- Grafana: Visualization and dashboards
- Prometheus: Metrics collection and storage
- Node Exporter: Host metrics (planned)
- cAdvisor: Container metrics (planned)
Startup:
# TBD - depends on deployment method (Docker Compose, systemd, etc.)
# Check status
curl -I http://10.0.10.25:3000 # Grafana
curl -I http://10.0.10.25:9090 # Prometheus
Health Check:
ping 10.0.10.25
curl -I http://10.0.10.25:3000 # Grafana (redirects to /login)
curl -I http://10.0.10.25:9090 # Prometheus
Status: ✅ Operational - Both services responding
Configuration:
- Database: TBD (may use PostgreSQL on 10.0.10.20)
- SSO: Authentik integration planned
- Public Domain: Not configured (internal access only)
Notes:
- Discovered already deployed during infrastructure audit (2025-12-29)
- Need to document deployment method and configuration details
- Need to set up monitoring targets (Proxmox, VPS, services)
- Dashboards and alerting not yet configured
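Once monitoring targets are chosen, the Prometheus side is a short scrape configuration. A hypothetical starting point for prometheus.yml: the job names and node_exporter port (9100) are assumptions, and node_exporter would need to be installed on each Proxmox host first.

```yaml
# Hypothetical scrape config sketch for the planned monitoring targets.
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
  - job_name: node
    static_configs:
      - targets:
          - '10.0.10.3:9100'   # main-pve (assumes node_exporter installed)
          - '10.0.10.2:9100'   # pve-router
          - '10.0.10.4:9100'   # pve-storage
```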
Dockge (Container Management)
Purpose: Docker Compose stack management with web UI
Service Details:
- Host: main-pve (CT 127)
- IP Address: 10.0.10.27
- Port: 5001 (HTTP)
- Resources: 2 vCPUs, 2GB RAM, 8GB disk
Features:
- Web-based Docker Compose management
- Manages Homelab Dashboard stack
- Real-time container logs and stats
Startup:
pct exec 127 -- systemctl status docker
pct exec 127 -- systemctl restart dockge
Health Check:
curl -I http://10.0.10.27:5001
Status: ✅ Operational
Media Automation Stack (Arr Services)
Purpose: Automated TV show, movie, and subtitle management with torrent downloading
Architecture: Docker Compose stack on CT 127 (Dockge), HTTPS termination via Caddy internal proxy
Sonarr (TV Shows)
Service Details:
- Host: main-pve (CT 127, Docker container)
- IP Address: 10.0.10.27
- Port: 8989 (HTTP backend)
- HTTPS URL: https://sonarr.nianticbooks.home
- Purpose: TV show monitoring, RSS feed monitoring, automatic downloads
Features:
- Episode calendar and upcoming releases
- Automatic quality upgrades
- Integration with Prowlarr for indexers
- Integration with Deluge for downloads
- Library management and renaming
Radarr (Movies)
Service Details:
- Host: main-pve (CT 127, Docker container)
- IP Address: 10.0.10.27
- Port: 7878 (HTTP backend)
- HTTPS URL: https://radarr.nianticbooks.home
- Purpose: Movie monitoring, automatic downloads, library management
Features:
- Movie discovery and recommendations
- Automatic quality upgrades
- Integration with Prowlarr for indexers
- Integration with Deluge for downloads
- Library management and renaming
Prowlarr (Indexer Manager)
Service Details:
- Host: main-pve (CT 127, Docker container)
- IP Address: 10.0.10.27
- Port: 9696 (HTTP backend)
- HTTPS URL: https://prowlarr.nianticbooks.home
- Purpose: Centralized indexer management for Sonarr/Radarr
Features:
- Single configuration point for all indexers
- Automatic sync to Sonarr/Radarr
- Indexer statistics and testing
Bazarr (Subtitles)
Service Details:
- Host: main-pve (CT 127, Docker container)
- IP Address: 10.0.10.27
- Port: 6767 (HTTP backend)
- HTTPS URL: https://bazarr.nianticbooks.home
- Purpose: Automatic subtitle download for TV shows and movies
Features:
- Integration with Sonarr/Radarr
- Multiple subtitle provider support
- Automatic language selection
- Missing subtitle detection
Deluge (BitTorrent Client)
Service Details:
- Host: main-pve (CT 127, Docker container)
- IP Address: 10.0.10.27
- Port: 8112 (HTTP backend)
- HTTPS URL: https://deluge.nianticbooks.home
- Purpose: BitTorrent download client for media files
Features:
- Web UI for torrent management
- Integration with Sonarr/Radarr
- Automatic category-based sorting
- Seeding management
Calibre-Web (eBook Library)
Service Details:
- Host: main-pve (CT 127, Docker container)
- IP Address: 10.0.10.27
- Port: 8083 (HTTP backend)
- HTTPS URL: https://calibre.nianticbooks.home
- Purpose: eBook library management and reading
Features:
- Web-based eBook reader
- eBook format conversion
- Library organization and metadata management
- Send-to-Kindle integration
Storage Configuration:
All media services store data on OpenMediaVault (10.0.10.5) via NFS mounts:
- /media/tv - Sonarr TV library
- /media/movies - Radarr movie library
- /media/downloads - Deluge download directory
- /media/books - Calibre library
Startup:
# Access Dockge UI
http://10.0.10.27:5001
# Or via SSH to CT 127
pct exec 127 -- docker ps | grep -E "(sonarr|radarr|prowlarr|bazarr|deluge|calibre)"
pct exec 127 -- docker restart <container-name>
Health Check:
# Check all media services
for service in sonarr radarr prowlarr bazarr deluge calibre; do
echo "Checking $service..."
curl -I https://$service.nianticbooks.home 2>/dev/null | head -1
done
Status: ✅ Operational (deployed 2026-01-25)
Caddy Internal Reverse Proxy
Purpose: HTTPS termination and reverse proxy for internal homelab services
Service Details:
- Host: main-pve (CT 127, Docker container)
- Container Name: caddy-internal
- IP Address: 10.0.10.27
- Port: 443 (HTTPS)
- Configuration: /opt/caddy-internal/Caddyfile
- Certificate Authority: Caddy Local Authority - 2026 ECC Root
Services Proxied:
- Sonarr, Radarr, Prowlarr, Bazarr, Deluge, Calibre-Web
- Vikunja, Dockge
Startup:
pct exec 127 -- docker start caddy-internal
pct exec 127 -- docker restart caddy-internal
Health Check:
pct exec 127 -- docker ps | grep caddy-internal
pct exec 127 -- docker logs caddy-internal
Status: ✅ Operational (deployed 2026-01-25)
Certificate Installation:
See CA-DEPLOYMENT-SUMMARY.md for client certificate installation instructions.
RustDesk (Self-Hosted Remote Desktop)
Purpose: Secure remote desktop access with self-hosted infrastructure
Service Details:
- ID Server (hbbs): VPS (51.222.12.162 / vps.nianticbooks.com)
- Port: 21116 (ID/Rendezvous)
- Relay Server (hbbr): VPS (51.222.12.162 / vps.nianticbooks.com)
- Port: 21117 (Relay service)
- Public Key: EPO75IeD+yJo5S5wtKePpyokHGXv9FN1w5Fx+Db5UCk=
Architecture:
Internet → VPS (51.222.12.162)
↓
RustDesk ID Server (hbbs) + Relay (hbbr)
↓
RustDesk Clients (P2P when possible)
Client Configuration:
- ID Server: 51.222.12.162
- Relay Server: 51.222.12.162
- Key: EPO75IeD+yJo5S5wtKePpyokHGXv9FN1w5Fx+Db5UCk=
Startup:
# ID Server (VPS)
ssh 51.222.12.162
sudo systemctl status rustdesk-hbbs
sudo systemctl restart rustdesk-hbbs
# Relay Server (VPS)
sudo systemctl status rustdesk-hbbr
sudo systemctl restart rustdesk-hbbr
Health Check:
# ID Server ports
nc -zv 51.222.12.162 21115 # NAT test
nc -zv 51.222.12.162 21116 # ID/Rendezvous
# Relay Server port
nc -zv 51.222.12.162 21117
Logs:
# ID Server
ssh 51.222.12.162 'sudo journalctl -u rustdesk-hbbs -f'
# Relay Server
ssh 51.222.12.162 'sudo journalctl -u rustdesk-hbbr -f'
Status: ✅ Operational - Deployed 2025-12-25, migrated to VPS-only 2026-02-22 (CT 123 deprecated)
See Also: guides/RUSTDESK-DEPLOYMENT-COMPLETE.md for complete setup guide
AD5M 3D Printer (Prusa)
Purpose: 3D printing
Service Details:
- IP Address: 10.0.10.30
- MAC: 88:a9:a7:99:c3:64
- DNS: AD5M.nianticbooks.home
- Web Interface: http://10.0.10.30
- Public Domain: https://ad5m.nianticbooks.com ✅ Working
Health Check:
ping 10.0.10.30
curl -I http://10.0.10.30
Status: ✅ Operational
Minecraft Forge Server
Purpose: Game server for Cisco's Fantasy Medieval RPG Ultimate modpack
Service Details:
- Host: main-pve (CT 130)
- IP Address: 10.0.10.41
- Game Port: 25565 (TCP/UDP)
- Status Page Port: 8080 (HTTP)
- Public Domain: cfmu.deadeyeg4ming.vip:25565 (game)
- Status Page: https://cfmu.deadeyeg4ming.vip (web)
- Resources: 8 vCPUs, 20GB RAM, 100GB disk
Configuration:
- Minecraft Version: 1.20.1
- Forge Version: 47.3.0
- Modpack: Cisco's Fantasy Medieval RPG Ultimate
- Max Players: 20
- View Distance: 12
- Difficulty: Normal
- Game Mode: Survival
Startup:
# Via Proxmox
ssh root@10.0.10.3
pct exec 130 -- systemctl status minecraft-forge.service
pct exec 130 -- systemctl start minecraft-forge.service
pct exec 130 -- systemctl stop minecraft-forge.service # Graceful 30-sec countdown
# Access server console
pct exec 130 -- screen -r minecraft
# Ctrl+A, D to detach
Health Check:
# Check port
nc -zv 10.0.10.41 25565
# Check service status
ssh root@10.0.10.3 'pct exec 130 -- systemctl status minecraft-forge.service'
# Check status page
curl -I http://10.0.10.41:8080/
Backup:
- Automated daily backup at 3:00 AM UTC
- Script: /opt/minecraft/scripts/backup-minecraft.sh
- Location: /mnt/omv-backups/minecraft-YYYYMMDD_HHMMSS/
- Retention: 14 days
- Components backed up:
- World data (overworld, nether, end)
- Server configs and mods
- Player data
Monitoring:
- Health check every 5 minutes via systemd timer
- Auto-restart on crash
- Discord webhook notifications (optional)
Logs:
# Server logs
ssh root@10.0.10.3 'pct exec 130 -- tail -f /opt/minecraft/server/logs/latest.log'
# Service logs
ssh root@10.0.10.3 'pct exec 130 -- journalctl -u minecraft-forge.service -f'
# Backup logs
ssh root@10.0.10.3 'pct exec 130 -- journalctl -u minecraft-backup.service'
# Health check logs
ssh root@10.0.10.3 'pct exec 130 -- journalctl -u minecraft-health.service'
Status: ✅ Operational - Deployed 2026-01-10
Architecture:
Players → cfmu.deadeyeg4ming.vip:25565
→ Gaming VPS (51.222.12.162) iptables forward
→ WireGuard tunnel (10.0.9.0/24)
→ Minecraft Server (10.0.10.41:25565)
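The VPS-side forward in the diagram above is typically a DNAT plus masquerade pair in iptables. A hypothetical sketch: the WAN interface name eth0, the dnat_rules helper, and TCP-only coverage are assumptions (UDP 25565 would need a matching pair), and the live rules may differ.

```shell
# Emit the two iptables rules that would forward public port 25565 through
# the WireGuard tunnel to the Minecraft container. Printing (instead of
# applying) lets the rules be reviewed before use.
dnat_rules() {
  # $1 = public port, $2 = internal host:port
  echo "iptables -t nat -A PREROUTING -i eth0 -p tcp --dport $1 -j DNAT --to-destination $2"
  # Masquerade so return traffic flows back through the VPS, not the
  # home lab's default route.
  echo "iptables -t nat -A POSTROUTING -o wg0 -p tcp -d ${2%:*} --dport ${2#*:} -j MASQUERADE"
}

dnat_rules 25565 10.0.10.41:25565
```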
See Also: mc_server/ directory for deployment scripts and configuration files
OpenClaw Gateway (AI Agent Coordinator)
Purpose: Multi-agent AI coordination platform with voice integration, morning briefings, and proactive automation
Service Details:
- Host: main-pve (CT 130)
- IP Address: 10.0.10.28
- Port: 18789 (WebSocket/HTTP)
- Version: Latest (to be installed)
- Resources: 2 vCPUs, 4GB RAM, 16GB disk
Desktop Client:
- Device: Fred's iMac (10.0.10.11 Ethernet / 10.0.10.144 Wi-Fi)
- Hardware: Late 2013 iMac, 3.2GHz i5, 24GB RAM
- OS: macOS Sequoia (via OpenCore)
- Features: Voice input/output, morning briefings, system integration
- Network: Dual-interface (Ethernet configured but cable not connected, currently on Wi-Fi)
Automated Workflows:
- Morning Brief (8:00 AM Daily)
- Local weather forecast
- Trending YouTube videos (filtered by interests)
- Daily todo list and task recommendations
- Trending news stories (filtered by interests)
- Productivity recommendations
- Proactive Coder (11:00 PM Nightly)
- Overnight development work on business improvements
- Creates Pull Requests for review (no direct commits)
- Infrastructure monitoring and optimization
- Workflow automation improvements
- Second Brain (NextJS App)
- Obsidian/Linear-style document viewer
- Auto-generated concept exploration documents
- Daily journal entries
- Knowledge base from conversations
- Afternoon Research Report (Daily)
- Deep dives on topics of interest
- Process improvement recommendations
- Productivity workflow suggestions
Startup:
# Gateway service
ssh root@10.0.10.3
pct exec 130 -- systemctl status openclaw-gateway
pct exec 130 -- systemctl start openclaw-gateway
pct exec 130 -- systemctl restart openclaw-gateway
# Desktop client
# Launch OpenClaw app from macOS Applications folder
Health Check:
# Gateway accessibility
curl -I http://10.0.10.28:18789
# Service status
ssh root@10.0.10.3 'pct exec 130 -- openclaw status'
# Check sessions
ssh root@10.0.10.3 'pct exec 130 -- openclaw sessions'
Logs:
# Gateway logs
ssh root@10.0.10.3 'pct exec 130 -- journalctl -u openclaw-gateway -f'
# OpenClaw dashboard
http://10.0.10.28:18789/
Configuration:
- Gateway config: /root/.openclaw/ on CT 130
- Node.js: ≥22.12.0 LTS (security requirement)
- Authentication: Token-based for LAN access
- Network: Internal only (no public exposure)
Planned Integrations:
- Home Assistant (10.0.10.24) - Voice control for smart home
- n8n (10.0.10.22) - Workflow automation webhooks
- Calendar - Morning briefing with daily schedule
- Weather API - Local forecasts
- YouTube Data API - Trending videos by interest
- News APIs - Filtered trending stories
Status: ✅ Running - Gateway operational, desktop client ready for connection
Current Configuration:
- Gateway PID: Running (check with pgrep -f 'openclaw gateway')
- Auth Token: Configured for LAN access
- Commands Enabled: bash (native commands enabled)
- Model: claude-sonnet-4-5
- Hooks: boot-md, command-logger, session-memory
- User Profile: /root/USER.md with Fred's preferences
- Timezone: America/Chicago (CST/CDT)
- Location: ZIP 62551
- Interests: Tech/AI, Homelab, 3D Printing
- Todo Integration: Apple Reminders (iPhone/iMac)
See Also:
- OPENCLAW-SETUP.md for detailed setup guide
- GitHub: https://github.com/openclaw/openclaw
- Docs: https://docs.openclaw.ai
Service Dependencies
Dependency Map
Internet
└─> Gaming VPS (51.222.12.162)
├─> WireGuard Server (10.0.9.1)
│ ├─> UCG Ultra WireGuard Client (10.0.9.2)
│ └─> VPS Proxy Client (10.0.9.3)
│ └─> Home Lab Network (10.0.10.0/24)
│ ├─> Proxmox Cluster
│ ├─> PostgreSQL (shared DB)
│ ├─> Authentik SSO
│ ├─> n8n
│ ├─> Home Assistant
│ └─> Other services
│
└─> Caddy Reverse Proxy
└─> Routes to services via tunnel:
├─> freddesk.nianticbooks.com → 10.0.10.3:8006
├─> ad5m.nianticbooks.com → 10.0.10.30:80
└─> bob.nianticbooks.com → 10.0.10.24:8123
Critical Service Dependencies
| Service | Depends On | Impact if Dependency Fails |
|---|---|---|
| Caddy | WireGuard tunnel, DNS | All public services unavailable |
| WireGuard | UCG Ultra, VPS connectivity | Public services unavailable |
| Authentik | PostgreSQL, network | SSO login fails (local admin still works) |
| n8n | PostgreSQL, network | Workflows stop |
| Home Assistant | Network, OMV (optional) | Smart home control unavailable |
| All Services | Proxmox, network | Service unavailable |
| Proxmox VMs | OMV (for backups) | Backups fail (services continue) |
Monitoring & Health Checks
Service Health Check Matrix
| Service | Check Method | Expected Response | Status |
|---|---|---|---|
| WireGuard | sudo wg show on VPS | Peer connected, handshake active | ✅ Operational |
| Caddy | curl -I https://freddesk.nianticbooks.com | HTTP 200 or 501 | ✅ Operational |
| Proxmox | curl -k -I https://10.0.10.3:8006 | HTTP 501 (HEAD not supported) | ✅ Operational |
| PostgreSQL | ping 10.0.10.20 | Reply | ✅ Operational |
| Authentik | curl -I http://10.0.10.21:9000 | HTTP 302 redirect to login | ✅ Operational |
| n8n | curl -I http://10.0.10.22:5678 | HTTP 200 | ✅ Operational |
| Home Assistant | curl -k -I https://10.0.10.24:8123 | HTTP 405 (HEAD not allowed) | ✅ Operational |
| Dockge | curl -I http://10.0.10.27:5001 | HTTP 200 | ✅ Operational |
| RustDesk ID | nc -zv 51.222.12.162 21116 | Connection succeeds | ✅ Operational |
| RustDesk Relay | nc -zv 51.222.12.162 21117 | Connection succeeds | ✅ Operational |
| 3D Printer | curl -I http://10.0.10.30 | HTTP 200 | ✅ Operational |
Automated Health Monitoring
Current Status: Manual checks
Deployed: Prometheus + Grafana at 10.0.10.25 (scrape targets, dashboards, and alerting still to be configured)
See MONITORING.md for detailed monitoring setup when ready.
Notes & TODO
Working Services
All critical infrastructure services are operational and verified.
Known Issues
No known issues - all critical services operational and accessible.
Planned Services
See INFRASTRUCTURE-TODO.md for:
- Additional monitoring configuration (Prometheus targets, dashboards)
Last Verified: 2025-12-29 03:10 UTC
Verified By: Fred (with Claude Code)
Next Review: Quarterly or after major changes
Recent Changes (2025-12-29):
- ✅ Fixed Home Assistant public domain (bob.nianticbooks.com)
- ✅ Discovered Prometheus + Grafana already deployed at 10.0.10.25
- ✅ Discovered RustDesk already deployed (ID server 10.0.10.23, relay on VPS)
- ✅ Discovered additional public domains: auth.nianticbooks.com, bible.nianticbooks.com
- ✅ All 5 public domains now operational
- ✅ Updated Home Assistant trusted_proxies to include VPS WireGuard IP (10.0.9.3)
- ✅ Added comprehensive RustDesk documentation (client config, public key, health checks)