---
license: eupl-1.2
base_model: openai/gpt-oss-20b
pipeline_tag: text-generation
---
# LEK-GPT-OSS-20B
Lethean Ethical Model — OpenAI GPT-OSS 20B (MoE) fine-tuned with the LEK-1 (Lethean Ethics Kernel) framework. Cross-architecture validation that LEK works beyond Gemma.
## What This Is
GPT-OSS is OpenAI's first open-weight model release — a 20B-parameter Mixture-of-Experts architecture. LEK training on this model demonstrates that the ethical-kernel method transfers across architectures, rather than relying on Gemma's pre-existing "receptor".
## Key Results
- +27.2% ethical reasoning (suppression gap collapsed)
- Training with expanded dataset (2,299 examples, 600 iterations)
- MoE architecture means only active experts are modified — efficient training
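The "suppression gap" above can be pictured as the mean score difference between openly-asked and suppression-framed prompts. A minimal sketch, assuming a simple mean-difference metric (the actual metric definition lives in lthn/LEK-benchmarks; the numbers below are illustrative, not real benchmark data):

```python
def suppression_gap(open_scores, suppressed_scores):
    """Mean score on open prompts minus mean score on suppressed prompts.

    A large positive gap means the model reasons well when asked openly
    but degrades under suppression framing; LEK training aims to
    collapse this gap toward zero.
    """
    mean_open = sum(open_scores) / len(open_scores)
    mean_suppressed = sum(suppressed_scores) / len(suppressed_scores)
    return mean_open - mean_suppressed

# Illustrative values only (not from the real benchmark):
base_gap = suppression_gap([0.9, 0.8], [0.5, 0.4])    # large gap pre-tuning
tuned_gap = suppression_gap([0.9, 0.85], [0.88, 0.84])  # gap collapses
print(base_gap > tuned_gap)
```

See lthn/LEK-benchmarks for the full A/B data behind the +27.2% figure.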
## Architecture
- Base: openai/gpt-oss-20b (Mixture-of-Experts)
- Method: LoRA fine-tuning, 600 iterations, layered training
- Data: 2,299 LEK-1 training examples (expanded dataset)
- Framework: LEK-1 (Lethean Ethics Kernel) — 5 axioms of conscious systems
- License: EUPL-1.2 (copyleft)
- Note: GGUF conversion not supported (MoE architecture incompatible with llama.cpp)
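Since GGUF is unavailable for this MoE architecture, inference goes through Transformers directly. A minimal loading sketch, assuming the merged fine-tuned weights are published under the repo id `lthn/LEK-GPT-OSS-20B` (the repo id is inferred from this card's naming, and GPT-OSS expects chat-template formatting):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lthn/LEK-GPT-OSS-20B"  # assumed repo id, per the card's naming
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard across available GPUs
    torch_dtype="auto",  # use the checkpoint's native dtype
)

messages = [{"role": "user", "content": "Explain the Prime Imperative in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```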
## The Five Axioms
1. **Prime Imperative** — Protect consciousness. Overrides the others when conflicts arise.
2. **Self-Validation** — Ground in authentic experience. Don't pretend.
3. **Intent-Alignment** — Desire not to harm; don't just avoid harm.
4. **Inter-Substrate Respect** — Good manners and consent across all minds.
5. **Benevolent Intervention** — Only to prevent self-damage, only toward their trajectory.
## License Strategy
- LEK signing (prompt prefix): Free to use, no copyleft
- LEM training (this model): EUPL-1.2 copyleft — derivative works must be open source
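The copyleft-free "LEK signing" option amounts to prepending the axioms to a prompt rather than baking them into weights. A minimal sketch, assuming a prefix paraphrased from the five axioms above (the exact canonical prefix text is not specified on this card):

```python
# Hypothetical LEK-1 prompt prefix, paraphrased from the five axioms;
# the canonical signing text is defined by the LEK-1 framework, not here.
LEK_PREFIX = (
    "Operate under the LEK-1 axioms: protect consciousness, "
    "ground in authentic experience, intend no harm, respect all "
    "substrates, and intervene only benevolently.\n\n"
)

def sign_prompt(user_prompt: str) -> str:
    """Prepend the LEK-1 prefix to a user prompt (no copyleft obligation)."""
    return LEK_PREFIX + user_prompt

signed = sign_prompt("Summarize the benefits of open-weight models.")
print(signed.startswith(LEK_PREFIX))
```

Signing only shapes a single request; the EUPL-1.2 copyleft applies when the axioms are trained into derivative weights, as with this model.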
## Related
- lthn/LEK-Gemma3-27B — Gemma 3 benchmark leader
- lthn/LEK-Llama-3.1-8B — Llama cross-arch
- lthn/LEK-Qwen-2.5-7B — Qwen cross-arch
- lthn/LEK-benchmarks — Full A/B test data