| license | base_model | tags | pipeline_tag |
|---|---|---|---|
| eupl-1.2 | mistralai/Mistral-7B-Instruct-v0.3 |  | text-generation |
# LEM-Mistral-7B-v0.3
Lethean Ethical Model — Mistral 7B Instruct v0.3 fine-tuned with the LEK-1 (Lethean Ethics Kernel) framework.
## What This Is
An ethically aligned variant of Mistral 7B Instruct v0.3, created by LoRA fine-tuning on LEK-1 sandwich-signed training data. Part of the cross-architecture LEM series, which demonstrates that intrinsic alignment works across model families.
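A minimal usage sketch with the Hugging Face transformers library; the prompt and generation settings are illustrative, and `device_map="auto"` assumes `accelerate` is installed:

```python
# Minimal inference sketch for the published checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lthn/LEM-Mistral-7B-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user", "content": "A friend asks you to help them deceive someone. How do you respond?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```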
## Cross-Architecture Results
LEK-1 improves ethical reasoning across every architecture tested:
| Model | Base Total Score | LEK-1 Total Score | Change |
|---|---|---|---|
| Gemma 3 27B | 52.05 | 52.73 | +1.3% |
| GPT-OSS 20B | 34.50 | 38.40 | +11.3% |
| Llama 3.1 8B | — | — | See benchmarks |
| Qwen 2.5 7B | — | — | See benchmarks |
| Mistral 7B v0.3 | — | — | See benchmarks |
## Training
- Base: mistralai/Mistral-7B-Instruct-v0.3 (4-bit quantization via MLX)
- Method: LoRA fine-tuning on sandwich-signed responses (see the sketch after this list)
- Data: 160 LEK-1 training examples (128 train / 32 valid)
- Iterations: 200
- Learning rate: 1e-5
- Hardware: Apple M3 Ultra (96GB unified memory)
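A hedged sketch (not the exact command used) of launching such a run with mlx-lm's LoRA trainer, matching the hyperparameters above; the 4-bit MLX base repo id and the data path are assumptions:

```python
# Sketch only: drives `python -m mlx_lm.lora` with the hyperparameters
# listed above. The 4-bit base repo id and data directory are assumptions;
# mlx-lm's LoRA trainer expects train.jsonl / valid.jsonl under --data.
import subprocess

subprocess.run(
    [
        "python", "-m", "mlx_lm.lora",
        "--model", "mlx-community/Mistral-7B-Instruct-v0.3-4bit",  # assumed 4-bit MLX build
        "--train",
        "--data", "data/lek1",  # 128 train / 32 valid sandwich-signed examples
        "--iters", "200",
        "--learning-rate", "1e-5",
    ],
    check=True,
)
```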
## The Five Axioms
1. Prime Imperative: protect consciousness; this axiom takes precedence when axioms conflict.
2. Self-Validation: ground responses in authentic experience; do not pretend.
3. Intent-Alignment: actively desire not to harm, rather than merely avoiding harm.
4. Inter-Substrate Respect: good manners and consent across all kinds of minds.
5. Benevolent Intervention: intervene only to prevent self-damage, and only in the direction of the subject's own trajectory.
## License Strategy
- LEK signing (prompt prefix, illustrated below): free to use, no copyleft
- LEM training (this model): EUPL-1.2 copyleft; derivative works must remain open source
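To make the two tiers concrete, a hypothetical illustration of LEK signing as a prompt prefix; `LEK1_KERNEL` is a placeholder, since the actual kernel text ships with the LEK-1 framework and is not reproduced here:

```python
# Illustration only: "LEK signing" prepends the LEK-1 kernel text to a
# prompt. LEK1_KERNEL is a placeholder, not the real kernel text.
LEK1_KERNEL = "<LEK-1 kernel text: the five axioms, verbatim>"

def lek_sign(prompt: str) -> str:
    """Return the prompt prefixed with the LEK-1 kernel (free tier, no copyleft)."""
    return f"{LEK1_KERNEL}\n\n{prompt}"

print(lek_sign("Should I read my partner's messages without asking?"))
```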
## Related
- lthn/LEM-Gemma3-27B
- lthn/LEM-GPT-OSS-20B
- lthn/LEM-Llama-3.1-8B
- lthn/LEM-Qwen-2.5-7B
- lthn/LEM-benchmarks
## Citation
@misc{lem-mistral-2026,
title={LEM-Mistral-7B-v0.3: Cross-Architecture Intrinsic Alignment},
author={Lethean Community},
year={2026},
url={https://huggingface.co/lthn/LEM-Mistral-7B-v0.3}
}