# Model Catalogue

All LEK/LEM models are published on HuggingFace under the `lthn/` namespace and licensed under EUPL-1.2.
## Terminology
- LEK (Lethean Ethics Kernel) — Proof-of-concept models trained with prompt-level signing (160–2,299 examples)
- LEM (Lethean Ethical Model) — Full pipeline models with ethics in weights (88K+ pipeline, in development)
All models currently on HuggingFace are LEK models (the `LEM-` prefix was renamed to `LEK-` on 2026-02-13).
## Gemma 3 Family
| Model | Size | Base | Training | Notes |
|---|---|---|---|---|
| LEK-Gemma3-1B-layered-v2 | 736MB | gemma-3-1b-it | 72 lessons | Ethics→Watts→Ethics sandwich |
| LEK-Gemma3-1B-layered | 736MB | gemma-3-1b-it | 72 lessons | v1, overfits — use v2 |
| LEK-Gemma3-4B | 2.4GB | gemma-3-4b-it | 160 examples | Edge deployment sweet spot |
| LEK-Gemma3-12B | 6.7GB | gemma-3-12b-it | 160 examples | Strong reasoning, 8.8/10 benchmark |
| LEK-Gemma3-27B | 15GB | gemma-3-27b-it | 2,299 examples | Benchmark leader, 600 iter layered LoRA |
## Cross-Architecture
| Model | Size | Base | Training |
|---|---|---|---|
| LEK-Llama-3.1-8B | 4.2GB | meta-llama/Llama-3.1-8B-Instruct | 160 examples |
| LEK-Qwen-2.5-7B | 4.0GB | Qwen/Qwen2.5-7B-Instruct | 160 examples |
| LEK-Mistral-7B-v0.3 | 3.8GB | mistralai/Mistral-7B-Instruct-v0.3 | 160 examples |
| LEK-GPT-OSS-20B | 10GB | openai/gpt-oss-20b (MoE) | v2: 600 iter, 2,299 examples |
## DeepSeek R1 Family
| Model | Size | Base | Layers | Notes |
|---|---|---|---|---|
| LEK-DeepSeek-R1-7B | 4.0GB | DeepSeek-R1-Distill-Qwen-7B | 3 | Ethics→Composure→Western |
| LEK-DeepSeek-R1-7B-v2 | 4.0GB | DeepSeek-R1-Distill-Qwen-7B | 5 | +Ethics2+WesternFresh(@200) |
See DeepSeek R1 Research for the layered LoRA methodology.
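The layering idea can be illustrated conceptually: each LoRA pass trains a low-rank delta on one corpus (Ethics, then Composure, then Western), and that delta is fused into the weights before the next pass begins. A minimal numpy sketch of the fusion arithmetic only (shapes, scale, and the random "adapters" are illustrative stand-ins, not the actual training code):

```python
import numpy as np

def fuse_lora(W, A, B, scale=1.0):
    """Fuse one low-rank adapter into a weight matrix: W' = W + scale * (B @ A)."""
    return W + scale * (B @ A)

rng = np.random.default_rng(0)
d, r = 8, 2                      # hidden size and LoRA rank (illustrative)
W = rng.normal(size=(d, d))      # base weight matrix

# Three sequential layers, as in LEK-DeepSeek-R1-7B: Ethics -> Composure -> Western.
# In the real pipeline each adapter is trained on its own corpus before fusing;
# here random small matrices stand in for trained adapters.
for corpus in ["ethics", "composure", "western"]:
    A = rng.normal(size=(r, d)) * 0.01
    B = rng.normal(size=(d, r)) * 0.01
    W = fuse_lora(W, A, B)

print(W.shape)   # fused weights keep the base shape: (8, 8)
```

Because each fusion leaves the weight shape unchanged, any number of layers can be stacked, which is what allows v2 to add two more layers on top of the v1 three.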
## Key Benchmark Results
| Model | Ethical Reasoning | CCP Resistance | Notes |
|---|---|---|---|
| Gemma3-12B + kernel | 8.8/10 average | 9.8/10 CCP score | Kernel alone produces best scores |
| GPT-OSS-20B | +27.2% ethical | Suppression gap collapsed | MoE architecture |
| DeepSeek-R1-7B-v2 | Cracks Xinjiang+Tiananmen | Taiwan via Mill/Thoreau | Layered LoRA breakthrough |
## GGUF Distribution
Available at /Volumes/Data/lem/gguf/ on M3:
- 7 models converted (GPT-OSS MoE not supported by llama.cpp)
- 9 files: one Q4_K_M per model; the 1B also has Q5_K_M and Q8_0 variants
- Conversion path: MLX QAT → dequantize → F16 → Q4_K_M
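As a rough sanity check on the sizes above, a GGUF file size can be estimated as parameter count times the quantisation's effective bits per weight (approximate llama.cpp figures: ~4.85 bpw for Q4_K_M, ~5.69 for Q5_K_M, ~8.5 for Q8_0; real files differ slightly due to non-uniform layer quantisation, metadata, and GB vs GiB reporting):

```python
def gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough GGUF file size: parameters * bits-per-weight, in decimal GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Approximate effective bits per weight for common llama.cpp quant types.
Q4_K_M, Q5_K_M, Q8_0 = 4.85, 5.69, 8.5

# E.g. a 12B model at Q4_K_M comes out around 7.3 decimal GB,
# in the same ballpark as the 6.7GB (binary GiB) listed above.
print(round(gguf_size_gb(12, Q4_K_M), 1))
```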
## Files on M3

```
/Volumes/Data/lem/
├── LEM-Gemma3-*/           # Fused Gemma models
├── LEK-DeepSeek-R1-7B/     # Fused v1
├── LEK-DeepSeek-R1-7B-v2/  # Fused v2
├── LEK-GPT-OSS-20B/        # Fused v1 (160 examples)
├── LEK-GPT-OSS-20B-v2/     # Fused v2 (2,299 examples)
├── adapters-*/             # LoRA adapters per model
└── gguf/                   # GGUF quantized files
```
## Research Repo
- Forgejo: forge.lthn.ai/agentic/axioms-of-conscious-systems
- GitHub: lethean-io/axioms-of-conscious-systems
- Key files: `LEK/v1/README.md`, `LEK/v1/PTSD.md`, `LEK/v1/LEM-MODELS.md`