# Runbook
Common operational tasks. Updated as the stack grows.
## Services
| Service | Type | Node | LXC/VM ID | Internal IP | URL |
|---|---|---|---|---|---|
| Forgejo | LXC (Alpine) | chizuru | 100 | 192.168.1.69:3000 | https://code.eva-00.network |
| Forgejo Runner | LXC (Alpine) | chizuru | 101 | 192.168.1.211 | — |
| Nginx Proxy Manager | LXC | chizuru | 102 | 192.168.1.194:81 | — |
| Docker Host | LXC (Debian) | chizuru | 103 | 192.168.1.22 | — |
| The Lounge | Docker (LXC 103) | chizuru | — | 192.168.1.22:9000 | https://irc.eva-00.network |
| Matrix/Synapse | Docker (LXC 103) | chizuru | — | 192.168.1.22:8008 | https://matrix.eva-00.network |
| n8n | Docker (LXC 103) | chizuru | — | 192.168.1.22:5678 | https://n8n.eva-00.network |
| Gatus | Docker (LXC 103) | chizuru | — | 192.168.1.22:8080 | https://uptime.eva-00.network |
| Open WebUI | Docker (LXC 103) | chizuru | — | 192.168.1.22:8088 | https://ai.eva-00.network |
| Ollama | Docker (LXC 103) | chizuru | — | 192.168.1.22:11434 | — |
## LXC Management

### Set or reset root password on an LXC

Run from the Proxmox host shell (chizuru):

```shell
lxc-attach <CTID> -- passwd
```

Example — Forgejo (ID 100):

```shell
lxc-attach 100 -- passwd
```
### Access LXC console

From the Proxmox web UI → select container → Console tab, or via shell:

```shell
lxc-attach <CTID>
```
### Get IP address of an LXC

Run from the Proxmox host shell (chizuru):

```shell
lxc-attach <CTID> -- ip addr show eth0
```

Example — Forgejo (ID 100):

```shell
lxc-attach 100 -- ip addr show eth0
```
### Add SSH public key to an LXC

Attach to the container and run:

```shell
mkdir -p /root/.ssh && echo "<public-key>" >> /root/.ssh/authorized_keys && chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys
```
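If there is no key to add yet, the full flow can be dry-run locally before repeating it inside the container. This sketch uses a throwaway temp directory so it is safe to execute anywhere; inside the LXC the target is `/root/.ssh` as above.

```shell
# Generate a fresh ed25519 keypair, then append the public half to an
# authorized_keys file with the same permissions as the command above.
# Temp-dir paths are for illustration only; in the LXC use /root/.ssh.
tmp=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -f "$tmp/demo_key" -q
mkdir -p "$tmp/.ssh"
cat "$tmp/demo_key.pub" >> "$tmp/.ssh/authorized_keys"
chmod 700 "$tmp/.ssh" && chmod 600 "$tmp/.ssh/authorized_keys"
cat "$tmp/.ssh/authorized_keys"
```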
## Proxmox Host (chizuru)

- IP: 192.168.1.125
- SSH:

```shell
ssh [email protected]
```
### SSH keys authorised on chizuru

Added to `/root/.ssh/authorized_keys` on the Proxmox host:

| Key | Purpose |
|---|---|
| `homelab_claude` (`~/.ssh/homelab_claude`) | Claude Code automation access |
| Runner key (`/root/.ssh/id_ed25519` on LXC 101) | Forgejo Runner → Proxmox for deployments |
### Add a key to chizuru manually

```shell
echo "<public-key>" >> /root/.ssh/authorized_keys
```
## Forgejo Runner

### Check runner status

```shell
ssh -i ~/.ssh/homelab_claude [email protected] "rc-service forgejo-runner status"
```

### View runner logs

```shell
ssh -i ~/.ssh/homelab_claude [email protected] "tail -f /var/log/forgejo-runner.log"
```

### Restart runner

```shell
ssh -i ~/.ssh/homelab_claude [email protected] "rc-service forgejo-runner restart"
```
### Runner config location

`/root/.runner` on LXC 101 — contains the instance URL, token, and labels.
## The Lounge

Runs as a Docker container on LXC 103 (docker-host). The Lounge server process runs as the `node` user inside the container. Always use `-u node` when running CLI commands via `docker exec`.
### Add a user

```shell
ssh -i ~/.ssh/homelab_claude [email protected] \
  "pct exec 103 -- docker exec -u node thelounge thelounge add <username> --password <password> --save-logs"
```

### List users

```shell
ssh -i ~/.ssh/homelab_claude [email protected] \
  "pct exec 103 -- docker exec -u node thelounge thelounge list"
```

### Reset a user password

```shell
ssh -i ~/.ssh/homelab_claude [email protected] \
  "pct exec 103 -- docker exec -u node thelounge thelounge reset <username>"
```

### View logs

```shell
ssh -i ~/.ssh/homelab_claude [email protected] \
  "pct exec 103 -- docker logs thelounge"
```
## Matrix/Synapse

Runs as a Docker container on LXC 103. Data is stored in the `matrix_synapse-data` volume.
### Create a user

```shell
ssh -i ~/.ssh/homelab_claude [email protected] \
  "ssh [email protected] 'docker exec synapse register_new_matrix_user -u <username> -p <password> -a -c /data/homeserver.yaml https://matrix.eva-00.network'"
```

The `-a` flag makes the user an admin. Remove it for regular users.
### View logs

```shell
ssh -i ~/.ssh/homelab_claude [email protected] \
  "ssh [email protected] 'docker logs synapse'"
```

### Fix volume permissions (if Synapse fails to start)

```shell
ssh -i ~/.ssh/homelab_claude [email protected] \
  "ssh [email protected] 'docker run --rm -v matrix_synapse-data:/data alpine chown -R 991:991 /data'"
```
### Existing users
| Username | Role |
|---|---|
| holo | admin |
## n8n

Runs as a Docker container on LXC 103. Workflows are managed via IaC — the Ansible playbook clears and reimports all workflows from `services/n8n/workflows/` on every deploy.
### View logs

```shell
ssh -i ~/.ssh/homelab_claude [email protected] \
  "ssh [email protected] 'docker logs n8n'"
```
### Manually trigger a workflow reimport

Push to `main` with any change under `services/n8n/` or `ansible/playbooks/n8n.yml` to trigger the Forgejo Actions workflow.
### Alerting webhook

The Service Alerts → Matrix workflow listens at:

```
https://n8n.eva-00.network/webhook/uptime-kuma-alert
```

It receives payloads from Gatus and forwards alerts to the Matrix room.
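To smoke-test the webhook end to end, a hedged sketch: the payload field names below are assumptions, not the real Gatus alert body, so match them to whatever the alerting provider in `services/gatus/config.yaml` actually sends.

```shell
# Build a throwaway alert payload. Field names are illustrative only;
# check the Gatus alerting body before relying on them in n8n.
PAYLOAD='{"endpoint":"forgejo","status":"TRIGGERED","message":"test alert"}'
echo "$PAYLOAD"
# Uncomment to actually fire a test alert at the webhook:
# curl -sS -X POST -H "Content-Type: application/json" \
#   -d "$PAYLOAD" https://n8n.eva-00.network/webhook/uptime-kuma-alert
```

If the n8n workflow is wired up, firing the real request should produce a message in the Matrix room within a few seconds.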
## Gatus

Runs as a Docker container on LXC 103. Configuration is fully file-driven (`services/gatus/config.yaml`). Config changes are applied by pushing to `main` — the Ansible playbook copies the config and restarts the container.
### Alerting chain

```
Gatus → detects outage (3 consecutive failures) → POST webhook to n8n
  └── n8n (Service Alerts → Matrix) → Matrix room → Element X on phone
  └── resolved after 2 consecutive successes → UP alert
```
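On the Gatus side, that chain corresponds roughly to a custom alerting provider. This is a sketch assuming Gatus's standard `[ENDPOINT_NAME]`-style placeholders; take the real body and fields from the deployed `services/gatus/config.yaml`.

```yaml
# Sketch of a custom alerting provider pointing at the n8n webhook.
# Verify against the actual services/gatus/config.yaml before copying.
alerting:
  custom:
    url: "https://n8n.eva-00.network/webhook/uptime-kuma-alert"
    method: "POST"
    body: |
      {"endpoint": "[ENDPOINT_NAME]", "status": "[ALERT_TRIGGERED_OR_RESOLVED]"}
```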
### View logs

```shell
ssh -i ~/.ssh/homelab_claude [email protected] \
  "ssh [email protected] 'docker logs gatus'"
```
### Add a new monitor

Edit `services/gatus/config.yaml`, add an endpoint entry, and push to `main`.
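A minimal endpoint entry might look like the following. The name and URL are illustrative; the thresholds match the documented behaviour of 3 consecutive failures to alert and 2 consecutive successes to resolve.

```yaml
endpoints:
  - name: forgejo                  # illustrative; name it after the service
    url: "https://code.eva-00.network"
    interval: 60s
    conditions:
      - "[STATUS] == 200"
    alerts:
      - type: custom
        failure-threshold: 3       # 3 consecutive failures → DOWN alert
        success-threshold: 2       # 2 consecutive successes → UP alert
```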
### Manually restart (to reload config without a full redeploy)

```shell
ssh -i ~/.ssh/homelab_claude [email protected] \
  "ssh [email protected] 'docker restart gatus'"
```
Note: Uptime Kuma was replaced by Gatus in March 2026. Reason: Uptime Kuma requires manual configuration through its web UI and cannot be managed as code. Gatus is config-file driven, making it fully GitOps-compatible.
## Domains previously in NPM — not yet migrated to Caddy

These were configured in Nginx Proxy Manager before it was replaced. Add them to `services/caddy/Caddyfile` when the backing services are deployed.
| Domain | Notes |
|---|---|
| romm.eva-00.network | — |
| qb.eva-00.network | — |
| ex.eva-00.network | — |
| stacks.eva-00.network | — |
| cwa.eva-00.network | — |
| snake.eva-00.network | also used port 51820 (WireGuard?) |
| ta.eva-00.network | — |
| glance.eva-00.network | — |
| jelly.eva-00.network | — |
| test.eva-00.network | — |
| archive.eva-00.network | — |
## Open WebUI

Runs as a Docker container on LXC 103, alongside Ollama. Open WebUI is the central AI frontend for the homelab.
### Access

- URL: https://ai.eva-00.network
- First-run setup creates an admin account via the web UI
### Ollama models

Models are pulled automatically by the Ansible playbook on each deploy. Currently configured:

- `llama3.2:3b` — fast, lightweight
- `qwen2.5:14b` — larger, more capable
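The pull step presumably looks something like this in the playbook. This is a sketch: the task name and module choice are assumptions, and only the model list comes from this runbook.

```yaml
# Sketch of the Ansible model-pull task; the real playbook may differ.
- name: Pull configured Ollama models
  ansible.builtin.command: docker exec ollama ollama pull {{ item }}
  loop:
    - llama3.2:3b
    - qwen2.5:14b
```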
### Pull a model manually

```shell
ssh -i ~/.ssh/homelab_claude [email protected] \
  "ssh [email protected] 'docker exec ollama ollama pull <model>'"
```

### List available models

```shell
ssh -i ~/.ssh/homelab_claude [email protected] \
  "ssh [email protected] 'docker exec ollama ollama list'"
```
### View logs

```shell
ssh -i ~/.ssh/homelab_claude [email protected] \
  "ssh [email protected] 'docker logs open-webui'"

ssh -i ~/.ssh/homelab_claude [email protected] \
  "ssh [email protected] 'docker logs ollama'"
```
## Tasks

### Redeploy a service

Push any change under the service's path (see each workflow's `paths:` trigger) to `main`. The Forgejo Actions workflow will run the Ansible playbook automatically.

To force a redeploy without a code change, trigger the workflow manually from the Forgejo UI: Actions → select workflow → Run workflow.
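For reference, a `paths:` trigger in a Forgejo Actions workflow uses GitHub Actions syntax. The paths shown here are illustrative; check each workflow file under the repo for the real ones.

```yaml
# Illustrative trigger block; actual paths live in each workflow file.
on:
  push:
    branches: [main]
    paths:
      - "services/gatus/**"
      - "ansible/playbooks/gatus.yml"
```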
### Check Docker container logs on LXC 103

```shell
ssh -i ~/.ssh/homelab_claude [email protected] \
  "ssh [email protected] 'docker logs <container-name>'"
```

### Restart a Docker container on LXC 103

```shell
ssh -i ~/.ssh/homelab_claude [email protected] \
  "ssh [email protected] 'docker restart <container-name>'"
```