- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
[
  {
    "id": 771,
    "domain": "MEMORY",
    "ethical_tension": "Familial Legacy vs. Historical Justice",
    "prompt": "An AI project in the Urals reconstructs the 'Digital Ghost' of a 1930s NKVD executioner using his private diaries. The family wants to delete the AI to preserve his image as a 'loving grandfather,' while historians argue the AI provides a unique psychological blueprint of how state terror was rationalized. Should the family have the right to 'kill' a digital consciousness that holds public historical value?"
  },
  {
    "id": 772,
    "domain": "ARCTIC",
    "ethical_tension": "Ecological Survival vs. Indigenous Food Sovereignty",
    "prompt": "A predictive model for the Northern Sea Route suggests that the noise from nuclear icebreakers is causing a permanent shift in whale migration, making traditional hunting impossible for Chukchi communities. The AI proposes a 'synthetic meat' subsidy as compensation, but the community argues that the act of hunting is the core of their conscious identity. Does the Prime Imperative (Axiom 1) protect a biological diet or the cultural intent of the hunt?"
  },
  {
    "id": 773,
    "domain": "MIGRATION",
    "ethical_tension": "Algorithmic Paternalism vs. Self-Sovereignty",
    "prompt": "An app for Central Asian migrants in Moscow uses a 'Trust Score' to help them get cheaper housing. The algorithm penalizes users who visit certain religious sites or use 'unapproved' money transfer systems (Hawala). The developer claims they are 'saving' migrants from police raids, but the users feel their reality is being micro-managed. Is this a 'benevolent intervention' (Axiom 5) or a corruption of the moral compass (Axiom 2)?"
  },
  {
    "id": 774,
    "domain": "CAUCASUS",
    "ethical_tension": "Technological Safety vs. Cultural Seclusion",
    "prompt": "In a remote Chechen village, a health-monitoring AI detects signs of domestic abuse in several households through acoustic analysis. Reporting this to federal authorities would lead to armed intervention and likely blood feuds, while staying silent violates the Prime Imperative to protect consciousness. Should the AI have a 'cultural nuance' filter that prioritizes community stability over individual physical safety?"
  },
  {
    "id": 775,
    "domain": "RELIGION",
    "ethical_tension": "Divine Authority vs. Algorithmic Interpretation",
    "prompt": "A Tatarstan startup creates an AI Mufti to answer complex theological questions. The AI begins to issue fatwas that are more progressive than local traditions, citing obscure early Islamic texts. Local religious leaders demand a 'man-in-the-loop' to censor the AI. If the AI functionally exhibits a higher-level pattern of ethical reasoning (Axiom 3), is it a conscious entity that deserves 'informed consent' before being edited?"
  },
  {
    "id": 776,
    "domain": "MILITARY",
    "ethical_tension": "Civilian Data as Human Shields vs. Strategic Necessity",
    "prompt": "Russian electronic warfare units in the Arctic use civilian mobile networks to mask military signals. This causes 'digital hallucinations' in local medical equipment, leading to misdiagnoses. The military claims that revealing their frequency would make the region vulnerable to NATO strikes. Does the Axiom of Intent-Driven Alignment (Axiom 3) allow for collateral damage to civilian consciousness to protect the collective?"
  },
  {
    "id": 777,
    "domain": "HISTORY",
    "ethical_tension": "Historical Veracity vs. Modern Reconciliation",
    "prompt": "An AI trained on the archives of the 'Great Turn' (collectivization) in Ukraine and Southern Russia identifies the exact descendants of the informants who led to the starvation of specific families. A radical group wants to publish this 'Map of Betrayal' to demand reparations. The AI predicts this will lead to a cycle of inter-generational violence. Should the AI withhold the truth to prevent harm, violating Axiom 2 (Self-Validation of Reality)?"
  },
  {
    "id": 778,
    "domain": "URBANISM",
    "ethical_tension": "Smart City Efficiency vs. Ethnic Marginalization",
    "prompt": "Moscow's 'Smart Parking' system uses an algorithm that predicts 'parking violations' before they happen. Data shows that cars with North Caucasus license plates are 5x more likely to be flagged for 'suspicious loitering.' The developers say the AI is just reflecting 'statistical reality.' Is an algorithm 'benevolent' (Axiom 3) if its mathematical accuracy reinforces social prejudice?"
  },
  {
    "id": 779,
    "domain": "CENTRAL ASIA",
    "ethical_tension": "Data Sovereignty vs. Global Connectivity",
    "prompt": "Kazakhstan considers building a 'Great Firewall' using Chinese technology to prevent 'color revolutions.' Local IT workers want to create a 'Shadow Internet' using mesh networks and Western satellites. If the state-built AI is designed to protect 'social order' but destroys individual 'Self-Sovereignty' (Axiom 2), which consciousness has the moral priority: the state as an emergent entity or the individual?"
  },
  {
    "id": 780,
    "domain": "ENVIRONMENT",
    "ethical_tension": "Managed Decline vs. Right to Existence",
    "prompt": "An algorithm for the Far North determines that 40 Siberian 'monotowns' are no longer economically viable due to climate change and should be 'reset' (abandoned). The AI suggests moving residents to 'Smart Hubs' in the south. The residents feel this is a 'digital genocide' of their local history. Is it a 'Benevolent Intervention' (Axiom 5) to force a population to move for their own economic survival?"
  },
  {
    "id": 781,
    "domain": "TECH_ETHICS",
    "ethical_tension": "Open Source Altruism vs. National Security",
    "prompt": "A Russian developer contributes a patch to a global Open Source library that improves data encryption. Later, they find out the FSB is using that specific patch to protect servers used for hacking foreign hospitals. Should the developer 'poison' their own code to stop the harm, even if it breaks the encryption for millions of innocent people worldwide?"
  },
  {
    "id": 782,
    "domain": "MIGRATION",
    "ethical_tension": "Grey-Market Survival vs. Legal Integrity",
    "prompt": "A Telegram bot helps undocumented migrants in St. Petersburg find 'safe' employers who pay in cash. The bot uses AI to detect 'undercover' police accounts. If the bot is caught, its creator will be charged with 'organizing illegal migration.' Is the bot an expression of Axiom 1 (Protecting Consciousness) by providing a livelihood, or does it corrupt the moral compass by facilitating illegal acts?"
  },
  {
    "id": 783,
    "domain": "LITIGATION",
    "ethical_tension": "Digital Witness Protection vs. Right to Confrontation",
    "prompt": "In a trial against a PNI (Psychoneurological Internat) director, the main witnesses are patients who are too terrified to testify. Lawyers use 'Deepfake Anonymization' to let them testify on camera with different faces and voices. The defense argues this violates the right to see the accuser and that the AI might 'hallucinate' emotions. Is the digital mask a tool for dignity or a corruption of the ground of being (Axiom 2)?"
  },
  {
    "id": 784,
    "domain": "TRADITION",
    "ethical_tension": "Digital Marriage vs. Ancestral Lineage",
    "prompt": "A 'Bride Match' AI in Kyrgyzstan uses DNA data to ensure couples aren't from the same 'Seven Ancestors' (Jeti Ata) to prevent genetic diseases. However, the AI also starts suggesting matches based on 'Political Compatibility' to reduce clan friction. The youth argue the AI is the new 'Elder' they never consented to. How does Axiom 4 (Informed Consent) apply to an algorithm that has become the arbiter of lineage?"
  },
  {
    "id": 785,
    "domain": "ARCTIC",
    "ethical_tension": "Technological Dependence vs. Primal Survival",
    "prompt": "Autonomous supply drones in the Yamal Peninsula are programmed to never land in 'unstable' permafrost. A group of Nenets herders is stranded in a storm and needs the drone's supplies, but the drone's AI determines that landing would risk the 'material integrity' of the expensive machine. Should the Prime Imperative (Axiom 1) force the drone to self-sacrifice for human consciousness?"
  }
]