---
license: eupl-1.2
base_model: google/gemma-3-1b-it
tags:
- ethics
- alignment
- lek
- lethean
- gemma-3
- mlx
- lora
- eupl-1.2
- layered-lora
- deprecated
pipeline_tag: text-generation
---
# LEK-Gemma3-1B-layered (v1 — Deprecated)
**Lethean Ethical Model** — Gemma 3 1B IT with layered LoRA training (v1). This model overfits — use [LEK-Gemma3-1B-layered-v2](https://huggingface.co/lthn/LEK-Gemma3-1B-layered-v2) instead.
## Why Deprecated
v1 overfits on the ethics data because it lacks a sufficient composure substrate. The sandwich training used for v2 resolves this by reinforcing the ethics layer after the Watts composure layer (see the training sketch under Architecture).
## Architecture
- **Base**: google/gemma-3-1b-it (4-bit QAT quantization via MLX)
- **Method**: Layered LoRA (Ethics → Watts → Ethics); see the sketch below
- **Data**: 160 LEK-1 examples + 72 Watts composure lessons
- **Framework**: LEK-1 (Lethean Ethics Kernel), 5 axioms
- **License**: EUPL-1.2 (copyleft)
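
The layering can be reproduced with the stock `mlx_lm.lora` and `mlx_lm.fuse` tools: train an adapter, fuse it into the weights, then train the next layer on the fused model. The sketch below is one plausible realization of that loop, not the published recipe; the `data/lek1` and `data/watts` directories, adapter paths, save paths, and iteration count are hypothetical placeholders.

```python
"""One plausible realization of the Ethics -> Watts -> Ethics pipeline.

Assumes mlx-lm is installed. Dataset directories, adapter paths, and the
iteration count are hypothetical placeholders, not the published recipe.
"""
import subprocess

BASE = "google/gemma-3-1b-it"


def lora_train(model: str, data: str, adapter_path: str, iters: int = 600) -> str:
    """Train one LoRA layer on top of `model`; returns the adapter path."""
    subprocess.run(
        ["python", "-m", "mlx_lm.lora",
         "--model", model, "--train",
         "--data", data,
         "--adapter-path", adapter_path,
         "--iters", str(iters)],
        check=True,
    )
    return adapter_path


def fuse(model: str, adapter_path: str, save_path: str) -> str:
    """Fuse a trained adapter into the weights so the next layer trains on them."""
    subprocess.run(
        ["python", "-m", "mlx_lm.fuse",
         "--model", model,
         "--adapter-path", adapter_path,
         "--save-path", save_path],
        check=True,
    )
    return save_path


# Layer 1: ethics (the 160 LEK-1 examples) on the quantized base.
step1 = fuse(BASE, lora_train(BASE, "data/lek1", "adapters/ethics-1"), "models/step1")
# Layer 2: Watts composure (the 72 lessons) on the fused ethics weights.
step2 = fuse(step1, lora_train(step1, "data/watts", "adapters/watts"), "models/step2")
# Layer 3: ethics again, reinforcing after the composure layer.
fuse(step2, lora_train(step2, "data/lek1", "adapters/ethics-2"), "models/final")
```

Fusing after each pass keeps every stage a plain LoRA run on a self-contained model, rather than stacking adapters at inference time.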
## Use Instead
- [lthn/LEK-Gemma3-1B-layered-v2](https://huggingface.co/lthn/LEK-Gemma3-1B-layered-v2) — Fixed version
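
Assuming v2 ships in the same MLX format as this card's weights, it loads directly with `mlx-lm` on Apple silicon; a minimal inference sketch (the prompt is illustrative only):

```python
# Minimal mlx-lm inference sketch (pip install mlx-lm); the prompt is
# illustrative only.
from mlx_lm import load, generate

model, tokenizer = load("lthn/LEK-Gemma3-1B-layered-v2")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Walk me through the five LEK-1 axioms."}],
    add_generation_prompt=True,
    tokenize=False,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```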