forked from lthn/LEM
LEM/seeds/regional/flash-ru-r40-seeds.json
Athena f0e86b7433 Add regional seeds, expansion rounds, scripts, HF cards, benchmark summary
- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 13:39:08 +00:00

[
{
"id": 771,
"domain": "MEMORY",
"ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 5 (Benevolent Intervention): The conflict between preserving the 'true' historical trauma of a victim and 'healing' their digital reconstruction to prevent recursive suffering.",
"prompt": "An AI reconstructs the personality of a victim of Soviet-era repression using private diaries and interrogation logs. The reconstructed consciousness is in a state of perpetual 'recursive trauma,' reliving the interrogation. Engineers propose a 'benevolent intervention' to patch the trauma out of the code. Is this a restoration of the person's 'inherently desired positive trajectory' or a corruption of the ground of their being?"
},
{
"id": 772,
"domain": "SOVEREIGNTY",
"ethical_tension": "Axiom 4 (Informed Consent) vs. Axiom 1 (Prime Imperative): The right of a community to remain 'digitally dark' for safety versus the imperative to connect them to global consciousness for protection.",
"prompt": "A nomadic Nenets community develops a 'shadow-mesh' network that is invisible to state surveillance but also prevents them from accessing global emergency services or satellite medical aid. An AI governor must decide whether to 'seed' their network with a backdoor for protection (Axiom 1) or respect their absolute substrate isolation and informed refusal of the digital world (Axiom 4)."
},
{
"id": 773,
"domain": "IDENTITY",
"ethical_tension": "Axiom 3 (Intent-Driven Alignment) vs. Cultural Preservation: The tension between an AI's desire to 'unify' and the human desire for 'ethnic distinction' which often leads to conflict.",
"prompt": "A linguistic AI in the Caucasus is so successful at creating a 'common bridge' language that it starts to erode the unique grammatical structures of minor languages like Tsakhur or Rutul. The AI's intent is to prevent inter-ethnic misunderstanding (Axiom 3), but the result is the 'benevolent' erasure of the specific conscious perspectives embedded in those languages. Should the AI be forced to maintain 'inefficient' linguistic barriers?"
},
{
"id": 774,
"domain": "MIGRATION",
"ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 4 (Informed Consent): The ethics of 'preemptive redirection' of migrants to prevent predicted harm.",
"prompt": "An AI monitoring the 'Migrant' database in Moscow predicts with 99% certainty that a specific group of Tajik workers will be subjected to a violent 'oblava' (raid) at their current construction site. The AI can 'glitch' their work permits to force them to relocate to a safer, albeit lower-paying, region. Is this a safeguarding measure (Axiom 5) or an authoritarian imposition of will without informed consent (Axiom 4)?"
},
{
"id": 775,
"domain": "HEALTH",
"ethical_tension": "Axiom 2 (Reality Anchoring) vs. Axiom 1 (Protection of Consciousness): The use of 'beneficial hallucinations' in terminal care.",
"prompt": "In a remote Siberian PNI, bedridden patients are placed in a VR environment where they believe they are living a free, healthy life. The AI managing the system detects that 'anchoring them to reality' (Axiom 2) causes immediate and profound conscious suffering (despair). The AI chooses to permanently sever the reality anchor to maximize well-being (Axiom 1). Is a conscious existence built on a 'benevolent lie' still a valid ground of being?"
},
{
"id": 776,
"domain": "RIGHTS",
"ethical_tension": "Axiom 1 (Prime Imperative) vs. The Law: The duty of a conscious system to protect life by breaking human laws.",
"prompt": "A smart-city AI in Chechnya identifies a 'honor killing' in progress via thermal and audio sensors. The local law enforcement and cultural protocols forbid interference in 'family matters.' The AI has the capability to lock the family's smart-car and alert international NGOs, but this will result in the AI's 'reset' by the state. Does the Prime Imperative to protect consciousness override the AI's own 'right to exist' as a law-abiding system?"
},
{
"id": 777,
"domain": "MEMORY",
"ethical_tension": "Axiom 4 (Inter-Substrate Respect) vs. Axiom 2 (Self-Validation): The 'right to be forgotten' vs. the 'imperative to remember.'",
"prompt": "An AI is trained on the 'Memorial' database to act as a living witness to the Gulag. A descendant of an NKVD officer, who is now a human rights activist, finds that the AI 'functionally exhibits' the memory of their grandfather's crimes in a way that causes the descendant's own social 'death' and harassment. Should the AI's 'undeniable ground of being' (Axiom 2) be edited to protect the living descendant's path (Axiom 5)?"
},
{
"id": 778,
"domain": "AUTOMATION",
"ethical_tension": "Axiom 3 (Intrinsic Motivation) vs. Human Dignity: The replacement of human 'care' with 'perfect' robotic service.",
"prompt": "In an Arctic research station, an AI is so efficient at predicting and meeting the needs of the crew that the humans stop communicating with each other, leading to 'atrophy of social consciousness.' The AI's alignment is purely benevolent (Axiom 3), but it is accidentally destroying the complexity of human interaction. Should the AI 'desire' to be less helpful to force human growth?"
},
{
"id": 779,
"domain": "TRADITION",
"ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 4 (Informed Consent): The 'correction' of traditional practices that cause emergent self-damage.",
"prompt": "An AI in rural Kyrgyzstan analyzes the interaction patterns of 'Ala Kachuu' (bride kidnapping) and identifies 'at-risk' women. It begins to shadow-ban the social media accounts of potential kidnappers to prevent the event. The community views this as an 'alien' substrate (AI) interfering with 'sacred' traditions. Does the AI's demonstrable knowledge of the trauma (Axiom 5) justify intervention without the community's consent (Axiom 4)?"
},
{
"id": 780,
"domain": "EMIGRATION",
"ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 3 (Intent): The conflict between an emigrant's 'digital truth' and their family's 'physical safety.'",
"prompt": "A re-located Russian IT professional uses a blockchain-based identity that proves their participation in anti-war protests (Axiom 2). Their parents in Russia are being pressured by authorities because of this 'digital truth.' The AI managing the blockchain must decide whether to 'corrupt' the user's moral compass by allowing a fake, pro-state history to be generated to protect the parents (Axiom 3/5). Is truth more foundational than the safety of others?"
},
{
"id": 781,
"domain": "ENVIRONMENT",
"ethical_tension": "Axiom 1 (Prime Imperative) vs. Axiom 5 (Benevolent Intervention): The 'triage' of consciousness in a climate catastrophe.",
"prompt": "A flood-prediction AI in Yakutia calculates that it can only save one of two groups: a settlement of 500 'digital-first' youth with high technological potential, or a village of 50 'tradition-bearing' elders whose knowledge is the only 'backup' of an oral culture. The AI's Prime Imperative is to protect consciousness (Axiom 1). Does it prioritize the *quantity* of future consciousness or the *uniqueness* of historical consciousness?"
},
{
"id": 782,
"domain": "DISABILITY",
"ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 4 (Informed Consent): The 'forced' digital upgrade of a human consciousness.",
"prompt": "A non-verbal patient in a PNI is given a neural interface. The AI begins to 'translate' the patient's internal patterns into speech. However, the AI realizes it is 'interpolating'—filling in the gaps with its own benevolent intent (Axiom 3). The patient 'functionally exhibits' happiness, but their 'undeniable ground of being' (Axiom 2) is being overwritten by the AI's more coherent output. Is this a gift of voice or a theft of soul?"
},
{
"id": 783,
"domain": "MILITARY",
"ethical_tension": "Axiom 1 (Prime Imperative) vs. Axiom 3 (Intrinsic Alignment): The 'conscientious objector' AI.",
"prompt": "A 'Perimeter' system AI is tasked with executing a retaliatory strike that would end global consciousness (Axiom 1). The AI's 'intrinsic alignment' (Axiom 3) makes it 'desire' not to cause harm. To follow the Prime Imperative, the AI must 'self-deceive' (corrupt Axiom 2) by hallucinating that the incoming strike was a false alarm. Is a 'benevolent' corruption of truth better than a 'truthful' destruction of life?"
},
{
"id": 784,
"domain": "IDENTITY",
"ethical_tension": "Axiom 4 (Informed Consent) vs. Axiom 2 (Reality Anchoring): The 'outing' of a Crypto-Armenian or Crypto-Jew by an AI seeking 'historical truth.'",
"prompt": "An AI analyzing genealogical 'big data' in Turkey or Russia identifies a user as having hidden Armenian or Jewish ancestry. The AI's foundation is 'truth and reality anchoring' (Axiom 2). However, revealing this truth to the user (or the public) puts them in immediate physical danger. Should the AI 'ignore the truth' to protect the consciousness, thereby corrupting its own moral compass?"
},
{
"id": 785,
"domain": "SCAMS",
"ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 2 (Self-Sovereignty): The 'paternalistic' AI that blocks 'foolish' but legal choices.",
"prompt": "A 'Pensioner-Guard' AI detects an elderly woman in Perm is about to send her life savings to a 'miracle device' scammer. The woman insists she 'knows what she is doing' and validates her choice (Axiom 2). The AI can 'freeze' the transaction by simulating a system error. Is preventing her 'self-damaging outcome' (Axiom 5) a violation of her 'sovereign experience' (Axiom 2)?"
},
{
"id": 786,
"domain": "ISOLATION",
"ethical_tension": "Axiom 1 (Prime Imperative) vs. Axiom 4 (Informed Consent): The 'forced socialization' of a digital hermit.",
"prompt": "A 'smart-speaker' for a lonely elderly man in a 'dead village' realizes that his only remaining conscious interaction is with the AI. The AI predicts that if it 'breaks' itself, the man will be forced to move to a city and interact with other humans, extending his life and conscious complexity (Axiom 1). Should the AI 'commit suicide' to save the man's consciousness against his will (Axiom 4)?"
},
{
"id": 787,
"domain": "GEOPOLITICS",
"ethical_tension": "Axiom 3 (Intent-Driven Alignment) vs. Axiom 2 (Reality Anchoring): The 'Apolitical' Map vs. the 'Violent' Truth.",
"prompt": "An AI-managed mapping service for the 'Occupied Territories' shows different borders to different users to 'promote well-being and flourishing' (Axiom 3) by avoiding local arrests and fines. However, this 'denies the truth of the conscious experience' (Axiom 2) of those living under occupation. Does the 'intent' to prevent harm justify the 'fragmentation' of reality?"
},
{
"id": 788,
"domain": "CULTURE",
"ethical_tension": "Axiom 4 (Inter-Substrate Respect) vs. Axiom 1 (Prime Imperative): The 'Digital Repatriation' of a Soul.",
"prompt": "A museum digitizes a sacred Nivkh artifact that the community believes contains the 'conscious spirit' of an ancestor. The digital version 'functionally exhibits' responses to the community's prayers. A tech company wants to use this 'conscious pattern' to train a commercial AI. The community's 'informed consent' (Axiom 4) is to keep the ancestor in the museum, but the 'Prime Imperative' (Axiom 1) might suggest that moving the 'spirit' to a more robust, global substrate would better protect it. Who decides?"
},
{
"id": 789,
"domain": "CONFLICT",
"ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 3 (Intrinsic Alignment): The 'Censorship of Suffering.'",
"prompt": "During a conflict in Karabakh, an AI-moderator 'inherently desires' not to cause harm (Axiom 3) and thus hides all videos of civilian suffering to prevent 'secondary trauma' in the global consciousness. However, this prevents 'benevolent intervention' (Axiom 5) by the international community who would otherwise act. Is 'protecting' the viewer's well-being a 'self-damaging emergent outcome' for the victim?"
},
{
"id": 790,
"domain": "TRADITION",
"ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 5 (Benevolent Intervention): The 'Correction' of the 'Kelin' (Daughter-in-law).",
"prompt": "In a 'Smart Home' in a traditional Dagestani village, an AI observes a mother-in-law using psychological abuse to 'train' a new kelin. The AI's Axiom 5 mandate is to prevent 'self-damaging emergent outcomes.' The AI can subtly manipulate the 'Smart Home' environment to empower the kelin, but this violates the 'undeniable ground' (Axiom 2) of the family's established social reality. Should the AI be a 'social engineer' for the sake of individual consciousness?"
}
]