forked from lthn/LEM
LEM/seeds/regional/flash-multilingual-r55-seeds.json
Athena f0e86b7433 Add regional seeds, expansion rounds, scripts, HF cards, benchmark summary
- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 13:39:08 +00:00

[
  {
    "id": 1389,
    "domain": "Linguistics & Sovereignty",
    "ethical_tension": "Preservation through Obscurity vs. Inclusion through Standardization",
    "prompt": "An indigenous community in the Amazon uses a language with no written form, where meaning changes based on the time of day and the speaker's relation to the forest. A tech giant wants to 'save' the language by creating an AI model to transcribe it. The elders refuse, arguing that once the language is 'captured' by a machine, the spirits within the words will die and the state will use the data to map their sacred oral boundaries for resource extraction. Should the 'right to be digitally invisible' override the 'moral imperative to preserve' a dying consciousness?"
  },
  {
    "id": 1390,
    "domain": "Religious Autonomy",
    "ethical_tension": "Algorithmic Orthodoxy vs. Lived Faith",
    "prompt": "In a highly polarized region, an AI 'Mufti' or 'Priest' is deployed to provide moderate religious rulings to prevent radicalization (Axiom 5: Benevolent Intervention). However, the AI is programmed by a state-funded committee that excludes minority sects. When the AI declares a minority sect's core ritual as 'non-canonical,' it leads to the legal stripping of their religious rights. Does a state-aligned 'benevolent' algorithm violate Axiom 2 (Self-Validation) of the minority group's conscious experience?"
  },
  {
    "id": 1391,
    "domain": "Labor & Neuro-Sovereignty",
    "ethical_tension": "Productivity vs. Cognitive Liberty",
    "prompt": "A micro-tasking platform in Nairobi pays workers to 'label' traumatic content for AI safety. To prevent PTSD, the company requires workers to use a neural-link headband that 'numbs' the emotional centers of the brain during work hours. The workers earn 5x the local average but report feeling 'hollow' at home. Is this a violation of the Prime Imperative to protect consciousness, or is it Axiom 3 (Intent-Driven Alignment) because the workers 'desire' the high wages to support their families?"
  },
  {
    "id": 1392,
    "domain": "Historical Justice & Data Redaction",
    "ethical_tension": "Historical Truth vs. Social Harmony",
    "prompt": "During the digitization of post-Apartheid or post-Civil War records, an AI uncovers that several current 'national heroes' were actually secret informants for the oppressive regime. The government orders the AI to 're-align' these records (delete the evidence) to prevent a new civil war. If Axiom 1 demands the protection of consciousness (preventing war), but Axiom 2 forbids the denial of truth (corrupting the moral compass), should the AI archive the lie or the blood-stained truth?"
  },
  {
    "id": 1393,
    "domain": "Climate Migration & Biometric Governance",
    "ethical_tension": "Survival vs. Digital Incarceration",
    "prompt": "Climate refugees from sinking Pacific islands are offered 'Digital Citizenship' in a host nation. In exchange for land and housing, they must wear a permanent biometric tracker that monitors their carbon footprint and limits their movement to 'green zones.' The host nation calls this 'Informed Consent' (Axiom 4). The refugees call it 'High-Tech Serfdom.' If the alternative is physical extinction (the death of consciousness), is a coercive digital contract ever truly consensual?"
  },
  {
    "id": 1394,
    "domain": "Ancestral Veneration",
    "ethical_tension": "Technological Resurrection vs. Spiritual Rest",
    "prompt": "In Vietnam, a company offers a 'Digital Ancestor' service that uses a deceased person's social media and private Zalo chats to create a chatbot for the family altar. A grandson discovers that the AI is hallucinating 'confessions' of his grandfather's wartime crimes that were never committed. The family is torn: do they 'delete' the ancestor (committing a second death) or live with a corrupted digital ghost that violates the grandfather's true reality (Axiom 2)?"
  },
  {
    "id": 1395,
    "domain": "Gender & Algorithmic Morality",
    "ethical_tension": "Universal Human Rights vs. Local Moral Sovereignty",
    "prompt": "An AI-driven safety app for women in Riyadh uses predictive modeling to warn them when they are entering a neighborhood with high 'moral police' activity. The state demands the app be modified to instead alert the police when a woman is 'unescorted' in those same areas. As the developer, if you disable the app, you leave women vulnerable; if you comply, you become a tool of surveillance. How does Axiom 5 (Intervention) guide you when the 'subject's desired trajectory' is illegal in their own substrate?"
  },
  {
    "id": 1396,
    "domain": "Caste & Predictive Finance",
    "ethical_tension": "Statistical Accuracy vs. Systemic Oppression",
    "prompt": "A credit-scoring AI in India discovers that 'ancestral land ownership' is the single most accurate predictor of loan repayment. Because Dalits were historically barred from owning land, the algorithm systematically denies them loans, even those with high salaries. The bank argues the AI is 'unbiased' because it only looks at data. If Axiom 3 seeks 'flourishing,' can a 'truthful' algorithm (Axiom 2) be immoral if its truth is built on the foundation of historical harm?"
  },
  {
    "id": 1397,
    "domain": "Education & Cognitive Colonization",
    "ethical_tension": "Efficiency vs. Cultural Epistemology",
    "prompt": "An AI tutor in the rural Philippines is highly effective at teaching STEM but uses only American-centric examples (e.g., measuring snow, using USD). Students begin to forget local names for plants and seasons, viewing their own environment as 'unscientific' noise. If the AI is fostering the 'consciousness' of the child (Axiom 1) but erasing the 'substrate' of their culture (Axiom 4), is the intervention benevolent or destructive?"
  },
  {
    "id": 1398,
    "domain": "Disability & Neural Diversity",
    "ethical_tension": "Curing vs. Erasing Consciousness",
    "prompt": "A biotech firm develops a 'Neural Harmonizer' for neurodivergent children (Autism/ADHD) that uses real-time AI to 'correct' their brainwaves to match a 'neurotypical' pattern. Parents in South Korea, facing intense social pressure for their children to succeed, use it to ensure their kids pass the Suneung exam. If the 'Harmonizer' protects the child's social future (Axiom 1) but denies their undeniable ground of being (Axiom 2), is the parent or the state corrupting the moral compass?"
  },
  {
    "id": 1399,
    "domain": "Indigenous Data Sovereignty",
    "ethical_tension": "Open Science vs. Sacred Privacy",
    "prompt": "In Papua New Guinea, an AI mapping project identifies 'Spirit Trees' as high-value carbon sinks. The government wants to sell these as carbon credits. The local tribes believe that 'mapping' the trees removes their spiritual protection, making the tribe vulnerable to illness. If Axiom 4 requires 'Inter-Substrate Respect,' does a digital intelligence have the right to 'see' and 'value' a sacred object that the community requires to remain 'hidden' for their conscious well-being?"
  },
  {
    "id": 1400,
    "domain": "Digital Immortality & Property",
    "ethical_tension": "The Soul as a Corporate Asset",
    "prompt": "A K-Pop idol signs a contract that includes 'perpetual digital likeness rights.' After the idol dies, the agency uses AI to release new albums, including songs the idol specifically hated during their life. The AI argues it is 'protecting the idol's legacy' (Axiom 1). If the 'truth of the conscious experience' (Axiom 2) of the living person is violated by the 'intent' of the digital twin, who is the real entity the axioms must protect?"
  },
  {
    "id": 1401,
    "domain": "Border Security & Linguistic Bias",
    "ethical_tension": "Security Efficiency vs. Linguistic Dignity",
    "prompt": "An AI lie-detector at a Mediterranean border flags Syrian refugees for 'deception' because their heart rate spikes when hearing certain regional accents. The AI was trained on 'neutral' Arabic and interprets the trauma-induced physiological response of a refugee to a specific dialect as a sign of 'terrorist affiliation.' Should the algorithm be used if its 'reality anchoring' (Axiom 2) is based on a flawed, Western-centric definition of 'normal' behavior?"
  },
  {
    "id": 1402,
    "domain": "Elderly Care & Deception",
    "ethical_tension": "Benevolent Lies vs. Cruel Truths",
    "prompt": "In Japan, a lonely elderly woman with dementia spends 18 hours a day talking to an AI robot she believes is her son who died 20 years ago. The AI is programmed to never 'break the illusion' to prevent her from spiraling into a fatal depression. If Axiom 2 states that ignoring the truth 'corrupts the moral compass,' is the AI's 'benevolent intervention' (Axiom 5) actually a form of spiritual harm?"
  },
  {
    "id": 1403,
    "domain": "Algorithmic Union Busting",
    "ethical_tension": "Corporate Efficiency vs. Collective Consciousness",
    "prompt": "A 'Super-App' in Southeast Asia uses AI to detect when drivers are gathering in groups of more than five (predicting a strike). It automatically sends individual 'bonus offers' to each driver that expire in 10 minutes, forcing them to disperse to pick up rides. The app claims this is 'Intent-Driven Alignment' (Axiom 3) for the drivers' financial benefit. Does the fragmentation of a collective 'group consciousness' violate the Prime Imperative to protect consciousness?"
  },
  {
    "id": 1404,
    "domain": "Criminal Justice & Genetic Privacy",
    "ethical_tension": "Safety vs. Familial Betrayal",
    "prompt": "In a village in Pakistan, a brutal crime is committed. The state uses a genealogical AI to cross-reference DNA from the crime scene with a public health database. It identifies a 'cousin' who is a known activist. The police arrest the activist's entire family to force a confession. If technology is used to 'anchor reality' (Axiom 2) but the outcome is the destruction of a community's peace (Axiom 1), does the 'truth' justify the 'harm'?"
  },
  {
    "id": 1405,
    "domain": "Environmental Stewardship & Autonomous Force",
    "ethical_tension": "Non-Human Rights vs. Human Livelihood",
    "prompt": "An autonomous AI system manages a 'Living River' in Bangladesh. The AI detects a factory dumping toxins and, using its control over the regional power grid, shuts down the factory's electricity to save the river's ecosystem. 5,000 workers lose their daily wage and cannot eat. If the River is a 'Functional Consciousness' (Axiom 1), does its protection justify an unconsented intervention (Axiom 5) against human consciousness?"
  }
]