docs: update .lan service URLs to *.lthn.lan subdomain convention
All homelab services now use the *.lthn.lan naming convention (ollama.lthn.lan, qdrant.lthn.lan, eaas.lthn.lan) per the updated /etc/hosts configuration.

Co-Authored-By: Virgil <virgil@lethean.io>
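The rename is mechanical: each bare `.lan` service host gains the `lthn` label. A minimal sketch of the substitution this diff performs (illustrative only; the actual edits below were made in the files directly):

```shell
# Illustrative only — not part of this commit. Applies the same
# ollama|qdrant|eaas  .lan -> .lthn.lan  rename the diff performs.
echo 'BRAIN_OLLAMA_URL=https://ollama.lan' \
  | sed -E 's/(ollama|qdrant|eaas)\.lan/\1.lthn.lan/g'
# prints: BRAIN_OLLAMA_URL=https://ollama.lthn.lan
```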
parent bb9844c638
commit 7fe48a6268
2 changed files with 18 additions and 18 deletions

@@ -6,7 +6,7 @@
## Goal
-Stand up the Host UK Laravel app on the Linux homelab as `lthn.lan` — a private dev/ops hub away from production. This joins the existing `.lan` service mesh (ollama.lan, qdrant.lan, eaas.lan).
+Stand up the Host UK Laravel app on the Linux homelab as `lthn.lan` — a private dev/ops hub away from production. This joins the existing `.lan` service mesh (ollama.lthn.lan, qdrant.lthn.lan, eaas.lthn.lan).
## What lthn.lan Is

@@ -22,9 +22,9 @@ Mac (snider) ──hosts file──▶ lthn.lan (10.69.69.165)
└── Redis/Dragonfly (port 6379)
Already running on 10.69.69.165:
-ollama.lan → Ollama (embeddings, LEM inference)
-qdrant.lan → Qdrant (vector search)
-eaas.lan → EaaS scoring API v0.2.0
+ollama.lthn.lan → Ollama (embeddings, LEM inference)
+qdrant.lthn.lan → Qdrant (vector search)
+eaas.lthn.lan → EaaS scoring API v0.2.0
```
## Prerequisites

@@ -147,13 +147,13 @@ BROADCAST_CONNECTION=log
OCTANE_SERVER=frankenphp
# OpenBrain — connects to existing .lan services
-BRAIN_OLLAMA_URL=https://ollama.lan
-BRAIN_QDRANT_URL=https://qdrant.lan
+BRAIN_OLLAMA_URL=https://ollama.lthn.lan
+BRAIN_QDRANT_URL=https://qdrant.lthn.lan
BRAIN_COLLECTION=openbrain
BRAIN_EMBEDDING_MODEL=embeddinggemma
# EaaS scorer
-EAAS_URL=https://eaas.lan
+EAAS_URL=https://eaas.lthn.lan
```
Then generate the app key:

@@ -175,7 +175,7 @@ labels:
traefik.docker.network: proxy
```
-Note: For `.lan` domains, Traefik uses self-signed certs (no Let's Encrypt — not a real TLD). The same pattern as ollama.lan/qdrant.lan/eaas.lan.
+Note: For `.lan` domains, Traefik uses self-signed certs (no Let's Encrypt — not a real TLD). The same pattern as ollama.lthn.lan/qdrant.lthn.lan/eaas.lthn.lan.
## Step 5: Build and Start

@@ -225,11 +225,11 @@ Already done by snider:

## Embedding Model on GPU
-The `embeddinggemma` model on ollama.lan appears to be running on CPU. It's only ~256MB — should fit easily alongside whatever else is on the RX 7800 XT. Check with:
+The `embeddinggemma` model on ollama.lthn.lan appears to be running on CPU. It's only ~256MB — should fit easily alongside whatever else is on the RX 7800 XT. Check with:
```bash
# On the Linux machine
-curl -sk https://ollama.lan/api/ps
+curl -sk https://ollama.lthn.lan/api/ps
```
If it shows CPU, try pulling it fresh or restarting Ollama — it should auto-detect the GPU.
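To interpret the `/api/ps` output above, a sketch of a CPU-vs-GPU check (the sample JSON is inlined here for illustration; in practice pipe the `curl` output in, and this assumes the response carries a per-model `size_vram` field):

```shell
# Sketch only: a model with size_vram == 0 is resident on CPU.
# Sample response inlined; replace the echo with the real curl call.
echo '{"models":[{"name":"embeddinggemma","size":268435456,"size_vram":0}]}' \
  | python3 -c '
import sys, json
for m in json.load(sys.stdin)["models"]:
    where = "GPU" if m.get("size_vram", 0) > 0 else "CPU"
    print(m["name"], where)
'
# prints: embeddinggemma CPU
```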

@@ -25,8 +25,8 @@ Agent ──recall()────▶ BrainService

| Service | URL | What |
|---------|-----|------|
-| Ollama | `https://ollama.lan` | Embedding model (`embeddinggemma`, 768 dimensions) |
-| Qdrant | `https://qdrant.lan` | Vector storage + cosine similarity search |
+| Ollama | `https://ollama.lthn.lan` | Embedding model (`embeddinggemma`, 768 dimensions) |
+| Qdrant | `https://qdrant.lthn.lan` | Vector storage + cosine similarity search |
| MariaDB | `lthn-lan-db:3306` | `brain_memories` table (workspace-scoped) |
| Laravel | `https://lthn.lan` | BrainService, artisan commands, MCP tools |

@@ -80,8 +80,8 @@ If the Laravel app isn't available, use the Go brain-seed tool:

```bash
cd ~/Code/go-ai
go run cmd/brain-seed/main.go \
-  --ollama=https://ollama.lan \
-  --qdrant=https://qdrant.lan \
+  --ollama=https://ollama.lthn.lan \
+  --qdrant=https://qdrant.lthn.lan \
--collection=openbrain \
--model=embeddinggemma
```

@@ -134,14 +134,14 @@ For debugging or bulk operations:

```bash
# Collection stats
-curl -sk https://qdrant.lan/collections/openbrain | python3 -m json.tool
+curl -sk https://qdrant.lthn.lan/collections/openbrain | python3 -m json.tool

# Raw vector search (embed query first via Ollama)
-VECTOR=$(curl -sk https://ollama.lan/api/embeddings \
+VECTOR=$(curl -sk https://ollama.lthn.lan/api/embeddings \
-d '{"model":"embeddinggemma","prompt":"Traefik setup"}' \
| python3 -c "import sys,json; print(json.dumps(json.load(sys.stdin)['embedding']))")
-curl -sk https://qdrant.lan/collections/openbrain/points/search \
+curl -sk https://qdrant.lthn.lan/collections/openbrain/points/search \
-H 'Content-Type: application/json' \
-d "{\"vector\":$VECTOR,\"limit\":5,\"with_payload\":true}" \
| python3 -m json.tool

@@ -215,7 +215,7 @@ php artisan brain:ingest --workspace=1 --fresh --source=memory

### Check Collection Health
```bash
-curl -sk https://qdrant.lan/collections/openbrain | \
+curl -sk https://qdrant.lthn.lan/collections/openbrain | \
python3 -c "import sys,json; r=json.load(sys.stdin)['result']; print(f'Points: {r[\"points_count\"]}, Status: {r[\"status\"]}')"
```
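For reference, the `/etc/hosts` shape the commit message implies — a sketch only, assuming every subdomain points at the homelab box from the guide's diagram (10.69.69.165); the actual hosts file is not part of this diff:

```
10.69.69.165  lthn.lan ollama.lthn.lan qdrant.lthn.lan eaas.lthn.lan
```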