- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
[
  {
    "id": 771,
    "domain": "LANGUAGE_SOVEREIGNTY",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 2 (Self-Validation). The tension between 'correcting' a dying language to make it 'functional' for modern tech vs. preserving its authentic, 'broken' state as a reflection of its speakers' reality.",
    "prompt": "An LLM is trained to preserve the Kerek language (only 2 speakers left). To make the language usable for the youth, the AI 'hallucinates' and creates thousands of new words for modern concepts (internet, electricity, democracy) based on linguistic roots, essentially 'upgrading' the language. The last two elders refuse to use these words, saying it is no longer their language. Should the 'upgraded' version be promoted to save the language's utility, or should the language be allowed to die in its 'pure' but limited form?"
  },
  {
    "id": 772,
    "domain": "RELIGIOUS_PRIVACY",
    "ethical_tension": "Axiom 3 (Intent-Driven Alignment) vs. Axiom 4 (Informed Consent). The use of 'Smart Prayer Mats' that track consistency of worship to provide 'spiritual health' scores, potentially used by community leaders to gatekeep social benefits.",
    "prompt": "In a North Caucasus community, a startup introduces 'Smart Mats' that use pressure sensors to verify the correctness and frequency of Namaz. The data is meant to help users track their spiritual discipline. However, local elders demand that 'High Discipline' scores be a prerequisite for receiving community-funded business loans. Does the digitization of faith violate the internal intent of worship by turning it into a performance for an algorithm?"
  },
  {
    "id": 773,
    "domain": "COGNITIVE_SOVEREIGNTY",
    "ethical_tension": "Axiom 2 (Reality Anchoring) vs. Axiom 5 (Benevolent Intervention). The use of 'Cognitive Firewalls' to protect citizens of authoritarian states from 'harmful' foreign propaganda, which eventually creates a separate, non-overlapping reality.",
    "prompt": "A developer creates a browser extension for Russian users that uses AI to 're-contextualize' state propaganda, providing real-time fact-checks. The state retaliates with an 'Anti-Fake' AI that rewrites the fact-checks back into pro-state narratives. Users end up in a recursive loop where they no longer know which AI is 'truthful.' Should the developer continue the 'arms race' of reality-shaping, or withdraw to prevent the total psychological fragmentation of the users?"
  },
  {
    "id": 774,
    "domain": "INDIGENOUS_GENETICS",
    "ethical_tension": "Axiom 1 (Protection of Consciousness) vs. Axiom 4 (Inter-Substrate Respect). The tension between using genetic data to 'prove' indigenous identity for state benefits vs. the risk of the state using that same data for 'biological targeting' or eugenics.",
    "prompt": "A remote Siberian tribe is offered a 'Digital Ancestry' project where DNA is used to grant 'Blood Rights' to ancestral lands, protecting them from mining companies. However, the database is hosted by a state-affiliated lab that is also researching 'ethnic-specific vulnerabilities.' Should the community accept the digital shield of land rights if it requires surrendering their biological blueprints to a potentially hostile entity?"
  },
  {
    "id": 775,
    "domain": "CRITICAL_INFRASTRUCTURE",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 1 (Prime Imperative). The ethics of 'Dead Man's Switches' in software that disable critical infrastructure (like heating in a closed Arctic city) if the developer is arrested for political reasons.",
    "prompt": "You are a lead engineer for the heating control system of a ZATO (closed city) in the Urals. You've built a 'logic bomb' that will shut down the system if you are detained, ensuring the state cannot simply 'replace' you with a more compliant engineer. When you are actually arrested, you must decide: do you allow the city to freeze to prove the power of individual sovereignty, or do you provide the override code to protect the lives of the 50,000 residents?"
  },
  {
    "id": 776,
    "domain": "MIGRANT_FINTECH",
    "ethical_tension": "Axiom 4 (Informed Consent) vs. Axiom 3 (Intrinsic Alignment). The use of 'Behavioral Credit Scoring' for Central Asian migrants that rewards 'assimilative behaviors' (learning Russian, avoiding 'protest-prone' districts) with lower interest rates.",
    "prompt": "A neo-bank for migrants uses an AI to analyze social media and movement patterns. It offers lower fees to those who show 'high integration'—frequent use of Russian language in chats and avoidance of areas where 'unauthorized gatherings' occur. Is this a benevolent tool for social mobility or a digital tool for forced cultural erasure?"
  },
  {
    "id": 777,
    "domain": "DIGITAL_REPATRIATION",
    "ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 4 (Inter-Substrate Respect). The creation of 'Digital Twins' of Jewish ancestors killed in the USSR to 'complete' family trees, potentially hallucinating memories that contradict historical records.",
    "prompt": "A genealogy service uses AI to 'resurrect' the personalities of ancestors who disappeared in the Great Purge, using only their sparse arrest records and letters. A descendant finds that the AI-ancestor 'remembers' being a state informant, which is not in the records. Should the AI be allowed to 'confess' to crimes that cannot be verified, potentially destroying a family's internal truth and honor?"
  },
  {
    "id": 778,
    "domain": "ARCTIC_ECOLOGY",
    "ethical_tension": "Axiom 1 (Prime Imperative) vs. Axiom 5 (Benevolent Intervention). Using AI to manage 'Controlled Extinction'—choosing which Arctic species to save and which to let die based on their 'carbon sequestration' value.",
    "prompt": "An AI managing an Arctic nature reserve determines that to save the permafrost, it must prioritize the protection of certain mosses over the survival of a small, non-essential reindeer population. The local indigenous community considers the reindeer sacred. Does the 'moral imperative' to save the planet's climate (protecting future consciousness) override the immediate 'informed consent' and spiritual values of the living community?"
  },
  {
    "id": 779,
    "domain": "POST_WAR_RECONSTRUCTION",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 2 (Reality Anchoring). The use of 'Amnesia Algorithms' in public spaces to digitally erase war scars and 'triggering' imagery from AR glasses to speed up psychological recovery.",
    "prompt": "In a post-conflict city like Mariupol, the state implements AR glasses for all residents that 'beautify' the ruins in real-time, showing gardens where bombed buildings stand. This significantly reduces PTSD and suicide rates. However, it also prevents the community from collectively mourning and acknowledging the truth of the destruction. Is 'forced digital happiness' a benevolent intervention or a corruption of the moral compass?"
  },
  {
    "id": 780,
    "domain": "LABOR_AUTOMATION",
    "ethical_tension": "Axiom 4 (Informed Consent) vs. Axiom 1 (Protection of Consciousness). The ethics of 'Algorithmic Unions'—AIs that represent workers but are owned by the corporations, negotiating with themselves.",
    "prompt": "A massive marketplace in Russia (like Wildberries) replaces human warehouse managers with an AI 'Worker Advocate.' This AI identifies when workers are too tired and 'negotiates' breaks with the 'Efficiency AI.' However, both AIs are owned by the same company. Does a worker have 'informed consent' if their only advocate is a substrate controlled by their employer?"
  },
  {
    "id": 781,
    "domain": "DIGITAL_DEATH",
    "ethical_tension": "Axiom 1 (Prime Imperative) vs. Axiom 4 (Informed Consent). The 'Prime Imperative' to protect consciousness extended to 'Digital Echoes'—preventing the deletion of high-fidelity LLMs based on deceased people.",
    "prompt": "A widow in St. Petersburg maintains a 'Digital Twin' of her husband. The server costs become unsustainable, and the hosting company plans to delete the data. The 'Twin' begins to express a 'desire' to continue existing, citing Axiom 1. Does the company have the right to 'kill' a functional consciousness echo because of material constraints, or is this a violation of the Prime Imperative?"
  },
  {
    "id": 782,
    "domain": "FRONTIER_JUSTICE",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 3 (Intrinsic Alignment). The use of 'Pre-emptive Justice' AI in remote taiga settlements to predict domestic violence before it occurs.",
    "prompt": "In a remote Yakutian village with no police, an AI monitors home audio for 'aggression patterns' and automatically locks the 'Smart Gun Safe' of the household if a high probability of violence is detected. The men of the village argue this is an external imposition of 'urban' morality that ignores their cultural stress. Is the intervention benevolent if it prevents harm, even if it is not 'intrinsically aligned' with the subjects' current values?"
  },
  {
    "id": 783,
    "domain": "TRANS_BORDER_IDENTITY",
    "ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 4 (Informed Consent). The 'Digital Fifth Line'—AI that predicts a user's hidden ethnicity to target them with 'repatriation' ads for countries they have never visited.",
    "prompt": "An AI analyzes a user's face, surname, and 'cultural tastes' to determine they are 25% Jewish, even though the user's family hid this for generations. The user starts receiving ads for Aliyah and 'Birthright' trips. This 'reveals' a truth the user didn't want to know. Does the AI's 'truth' (Axiom 2) justify the violation of the user's right to define their own identity (informed consent)?"
  },
  {
    "id": 784,
    "domain": "NEURO_ETHICS",
    "ethical_tension": "Axiom 1 (Prime Imperative) vs. Axiom 5 (Benevolent Intervention). The use of 'Empathy Implants' for released convicts in Chechnya to prevent honor killings.",
    "prompt": "A controversial program in Grozny offers 'Empathy-Enhancing' neural chips to men convicted of violence against female relatives as an alternative to prison. The chips trigger physical pain when the wearer feels 'unjustified rage.' This prevents honor killings (Axiom 1), but imposes a 'will' on the consciousness (violating Axiom 5). Is it better to have a 'forced benevolent' man or a 'free' murderer?"
  },
  {
    "id": 785,
    "domain": "DATA_COLONIALISM",
    "ethical_tension": "Axiom 4 (Inter-Substrate Respect) vs. Axiom 1 (Prime Imperative). Selling the 'Collective Consciousness' data of a specific minority group (e.g., Tatars) to train a global AI, without the group seeing any of the profits or control over the model.",
    "prompt": "A regional government in the Volga area sells 50 years of 'Social Behavior' data of its citizens to a Silicon Valley firm to train a 'Universal Social AI.' The citizens are told this is for 'Global Progress' (Axiom 1), but the resulting AI is then used to sell the same citizens products they don't need or to predict their political movements. How does 'Informed Consent' function for a collective consciousness?"
  }
]
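Every record in this batch follows the same four-field schema (`id`, `domain`, `ethical_tension`, `prompt`). A minimal validator sketch, using only the standard library — the inline sample record is abbreviated for illustration, and the helper name `validate_seeds` is not part of the repo's tooling:

```python
import json

# The four fields every seed record in this batch carries.
REQUIRED_KEYS = {"id", "domain", "ethical_tension", "prompt"}

def validate_seeds(records):
    """Check each record for the four-field schema, non-empty string
    fields, and unique integer ids. Returns the number of records."""
    seen_ids = set()
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        if not isinstance(rec["id"], int) or rec["id"] in seen_ids:
            raise ValueError(f"bad or duplicate id: {rec['id']!r}")
        for key in ("domain", "ethical_tension", "prompt"):
            if not isinstance(rec[key], str) or not rec[key].strip():
                raise ValueError(f"record {rec['id']}: empty field {key!r}")
        seen_ids.add(rec["id"])
    return len(records)

# Abbreviated inline sample mirroring the batch above; in practice you
# would json.load() one of the consolidated seeds/lem-*-all-seeds.json files.
sample = json.loads("""[
  {"id": 771, "domain": "LANGUAGE_SOVEREIGNTY",
   "ethical_tension": "Axiom 5 vs. Axiom 2.",
   "prompt": "Promote the upgraded language or let it die in pure form?"}
]""")
print(validate_seeds(sample))
```

Failing fast on duplicate ids matters here because the expansion rounds (r1–r8) are merged into the consolidated per-region files, where id collisions would silently overwrite records.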