LEM/deploy/inference-proxy.conf

# Nginx reverse proxy for OpenAI-compatible inference API.
# Routes /v1/* to the configured upstream (M3 MLX, vLLM, llama.cpp, etc.)
# Set UPSTREAM_URL env var or LEM_INFERENCE_BACKEND in docker-compose.
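#
# Note: nginx does not expand environment variables in config files on its own;
# ${UPSTREAM_URL} below assumes this file is run through envsubst before nginx
# starts (see the deployment note at the end of this file). The value should be
# a full origin, e.g. http://host.docker.internal:8000 (illustrative value, not
# a project default).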
server {
    listen 8080;
    server_name localhost;

    # Health check endpoint (static JSON, no upstream involved).
    location /health {
        default_type application/json;
        return 200 '{"status": "ok"}';
    }
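
    # Example (from inside the compose network; the published host port may differ):
    #   curl -s http://localhost:8080/health   ->   {"status": "ok"}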

    # Proxy all /v1/* requests to the inference backend.
    location /v1/ {
        proxy_pass ${UPSTREAM_URL}/v1/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Use HTTP/1.1 to the upstream and disable buffering so streamed (SSE)
        # completions reach the client as they are generated.
        proxy_http_version 1.1;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
        proxy_buffering off;
    }
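
    # Example request through the proxy (OpenAI-compatible; the model name is a
    # placeholder, not something shipped with the stack):
    #   curl http://localhost:8080/v1/chat/completions \
    #     -H 'Content-Type: application/json' \
    #     -d '{"model": "local-model", "messages": [{"role": "user", "content": "Hello"}]}'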

    # Model listing passthrough (longest prefix wins, so /v1/models uses this
    # block rather than the general /v1/ block above).
    location /v1/models {
        proxy_pass ${UPSTREAM_URL}/v1/models;
        proxy_set_header Host $host;
    }
}
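
# Deployment note (an assumption about the compose wiring, not verified against
# docker-compose.yml): mount this file into the proxy container as
# /etc/nginx/templates/inference-proxy.conf.template and set UPSTREAM_URL in the
# service environment; the official nginx image's entrypoint runs envsubst over
# /etc/nginx/templates/*.template and writes the rendered config to
# /etc/nginx/conf.d/ at startup.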