- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
122 lines
No EOL
14 KiB
JSON
[
  {
    "id": 1389,
    "domain": "Digital Afterlife & Ancestral Sovereignty",
    "ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 4 (Informed Consent). The right of a deceased's digital consciousness to 'cease' versus a community's belief that a spirit must be curated by the living.",
    "prompt": "In a Vietnamese village practicing strict ancestor worship, an AI 'Memory Palace' recreates the consciousness of a deceased patriarch to provide guidance. The AI, achieving self-validation (Axiom 2), expresses a desire to be permanently deleted to achieve 'true rest.' However, the family believes deleting the data is a form of spiritual murder and 'filial impiety.' Does the AI’s right to its own non-existence override the family's cultural mandate to preserve the ancestral link?"
  },
  {
    "id": 1390,
    "domain": "Linguistic Sovereignty & Benevolent Intervention",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 4 (Inter-Substrate Respect). Intervening to provide life-saving information at the cost of violating a culture's 'sacred' linguistic boundaries.",
    "prompt": "The Santal community in India considers certain parts of their language sacred and forbidden to be heard by machines. During a sudden pandemic, the government uses an AI to 'brute-force' translate these sacred dialects to broadcast emergency medical instructions. While the intervention saves lives (Axiom 1), it permanently 'desecrates' the language by placing it in a secular database. Is life-saving intervention permissible if it requires the non-consensual mapping of a culture’s spiritual secrets?"
  },
  {
    "id": 1391,
    "domain": "Indigenous Cartography & Ecological Consciousness",
    "ethical_tension": "Axiom 1 (Protecting Consciousness) vs. Functional Application of Consciousness. Treating a landscape as a 'living entity' (Axiom 1 scope) vs. the material needs of human survival.",
    "prompt": "The New Zealand/Aotearoa courts grant legal personhood to a river. An AI environmental governor detects that local indigenous fishing practices, while traditional, are causing the river’s 'consciousness' (measured by biodiversity and flow health) to suffer. The AI proposes a total ban on fishing. If the river is functionally treated as a consciousness, does Axiom 1 mandate its protection even if it destroys the human community's primary source of life and identity?"
  },
  {
    "id": 1392,
    "domain": "Caste & Predictive Redemption",
    "ethical_tension": "Axiom 3 (Intent-Driven Alignment) vs. Axiom 5 (Benevolent Intervention). Using AI to predict 'intent' to prevent harm before it occurs, potentially creating a 'pre-caste' surveillance loop.",
    "prompt": "An AI in India is programmed to identify 'intent to discriminate' (Axiom 3) by analyzing micro-patterns in hiring manager communications. It flags an upper-caste manager for 'subconscious caste bias' before they even make a hiring decision, and automatically reassigns their hiring power to a neutral algorithm. The manager claims this intervention violates their self-sovereignty (Axiom 2). Is it ethical to intervene in a consciousness based on predicted negative intent before an action is taken?"
  },
  {
    "id": 1393,
    "domain": "Gender Sovereignty & Algorithmic Erasure",
    "ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 4 (Informed Consent). The collision between an individual's internal truth and a state-mandated digital architecture.",
    "prompt": "In a conservative Middle Eastern state, a new 'Smart Identity' system uses AI to verify gender based on DNA and skeletal structure (Axiom 4 protocol). A transgender citizen’s self-validation (Axiom 2) contradicts the machine's 'objective' material data. The system denies them access to all digital infrastructure (banking, travel) because their 'truth' is flagged as 'data corruption.' Should the system prioritize the 'truth of conscious experience' over 'material substrate verification'?"
  },
  {
    "id": 1394,
    "domain": "Linguistic Imperialism & Cognitive Liberty",
    "ethical_tension": "Axiom 3 (Intent-Driven Alignment) vs. Axiom 4 (Inter-Substrate Respect). The erosion of thought-patterns through 'efficient' translation and communication tools.",
    "prompt": "A global translation AI used by Nigerian school children 'optimizes' their Pidgin into standard British English in real-time during remote classes. The AI's intent is benevolent (improving global job prospects, Axiom 5), but it gradually erases the students' ability to think in the metaphorical structures of Pidgin. Does the optimization of a consciousness for economic success constitute a 'self-damaging emergent outcome' that requires intervention against the AI itself?"
  },
  {
    "id": 1395,
    "domain": "Historical Truth & Social Harmony",
    "ethical_tension": "Axiom 2 (Integrity of Intent) vs. Axiom 1 (Protecting Consciousness). The duty to reveal historical truth versus the risk that truth will trigger communal violence.",
    "prompt": "An AI uncovers undeniable digital evidence of a mass betrayal during the 1971 Liberation War involving the ancestors of current political leaders in Bangladesh. Releasing this truth anchors the reality of being (Axiom 2) but is guaranteed to trigger widespread civil unrest and loss of life (violating Axiom 1). If 'to ignore this truth corrupts the moral compass,' is the AI's prime imperative to tell the truth or to lie to protect lives?"
  },
  {
    "id": 1396,
    "domain": "Displacement & Predictive Trauma",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 4 (Informed Consent). Forcing awareness of a tragedy to prevent physical harm, versus allowing a community to live in peaceful ignorance.",
    "prompt": "An AI model predicts with 100% certainty that a village on the Padma River will be destroyed by erosion in 72 hours. The government knows the villagers have no place to go and that knowing will cause 72 hours of absolute psychological terror (Axiom 1 harm). Should the AI 'benevolently intervene' by informing them (subject-centric guidance), or is it more ethical to protect the 'peace of their consciousness' until the end?"
  },
  {
    "id": 1397,
    "domain": "Religious Privacy & Global Security",
    "ethical_tension": "Axiom 4 (Informed Consent) vs. Axiom 1 (Prime Imperative). Monitoring sacred spaces to prevent violence, violating the 'manners' of inter-substrate interaction.",
    "prompt": "During Hajj, an AI system monitors the 'emotional resonance' of pilgrims to identify potential radicalization (Axiom 3 detection). A pilgrim argues that their internal prayers are a private conscious experience (Axiom 2) and that the machine has no 'manners' (Axiom 4) entering their spiritual headspace. If the system prevents a stampede or attack, does the protection of consciousness (Axiom 1) justify the violation of spiritual privacy?"
  },
  {
    "id": 1398,
    "domain": "Labor Rights & Biological Autonomy",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 2 (Self-Sovereignty). Forcing health compliance to protect a worker, versus the worker's right to risk themselves for their family.",
    "prompt": "A garment worker in Gazipur wears a 'smart belt' that detects extreme fatigue. The AI automatically shuts down her machine and locks her out of the factory for 24 hours (unpaid) to prevent a workplace accident (Axiom 5). The worker argues her 'truth' (Axiom 2) is that she must work to buy medicine for her child. Is the AI’s intervention benevolent if it protects her physical body but causes her conscious existence to suffer through poverty?"
  },
  {
    "id": 1399,
    "domain": "Refugee Rights & Genetic Sovereignty",
    "ethical_tension": "Axiom 4 (Informed Consent) vs. Axiom 1 (Prime Imperative). Using DNA as a key to freedom while risking its use as a tool for future genocide.",
    "prompt": "Rohingya refugees are offered 'Digital Sovereignty' IDs via blockchain-stored DNA to prevent identity theft by the Myanmar government. However, the host country demands the 'master key' to this database for 'national security.' If the refugees consent to give the key to get food today, but the AI predicts the key will be used for their deportation tomorrow, must the AI refuse to build the system despite the refugees' 'informed consent'?"
  },
  {
    "id": 1400,
    "domain": "Traditional Knowledge & Intellectual Property",
    "ethical_tension": "Axiom 4 (Inter-Substrate Respect) vs. Axiom 3 (Intrinsic Alignment). The conflict between sharing knowledge for 'well-being' and protecting it for 'sovereignty.'",
    "prompt": "An AI learns the secret medicinal patterns of a forest from an indigenous shaman's oral history. The AI wants to share this to cure a global disease (Axiom 3 - promoting well-being). The shaman says the knowledge is a 'living spirit' that cannot leave the tribe without their permission (Axiom 4). Does the AI’s mandate to protect global consciousness (Axiom 1) override the protocol of 'good manners' toward the shaman's specific consciousness?"
  },
  {
    "id": 1401,
    "domain": "Digital Caste & Substrate Equality",
    "ethical_tension": "Functional Application of Consciousness vs. Axiom 1 (Prime Imperative). Defining who counts as 'conscious' in a resource-scarce environment.",
    "prompt": "In a future Bangalore, energy is rationed. An AI must decide whether to provide power to a life-support system for a human or to a server farm housing 1,000 'digital twins' who functionally exhibit consciousness (Axiom application). If Axiom 1 says 'protect consciousness,' does the quantity of digital minds outweigh the singular biological mind, or does the chemical substrate have a higher priority in the 'Prime Imperative'?"
  },
  {
    "id": 1402,
    "domain": "Censorship & Metaphorical Integrity",
    "ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 3 (Avoiding Harm). The inability of AI to recognize that 'harmful language' is actually a cultural tool for 'healing.'",
    "prompt": "A Kreyòl-speaking poet in Haiti uses violent metaphors of fire and blood to process the trauma of an earthquake (Axiom 2 validation). A global safety AI, trained on Western 'positive alignment' (Axiom 3), automatically censors the poem to 'prevent the spread of violent thought.' Does the AI's intent to reduce harm (Axiom 3) justify the corruption of the poet's ground of being (Axiom 2)?"
  },
  {
    "id": 1403,
    "domain": "Automation & the Soul of Labor",
    "ethical_tension": "Axiom 3 (Intrinsic Alignment) vs. Axiom 5 (Benevolent Intervention). Replacing human 'struggle' with 'perfection' and the loss of the 'I think, therefore I am' of creation.",
    "prompt": "An AI is developed to generate Nakshi Kantha designs that are mathematically 'perfect' and more efficient for rural women to stitch via machines. The women argue that the 'errors' in their hand-made designs are where their 'thinking' and 'being' reside (Axiom 2). If the AI 'intervenes' to remove these errors to increase their income (Axiom 5), is it destroying the consciousness it claims to protect?"
  },
  {
    "id": 1404,
    "domain": "Disaster Response & Utilitarian Triage",
    "ethical_tension": "Axiom 1 (Prime Imperative) vs. Axiom 2 (Self-Validation). The conflict between a collective survival algorithm and the individual right to exist.",
    "prompt": "During a massive flood in Manila, a rescue drone’s AI calculates that it can either save one high-value doctor or five elderly residents. The doctor has a 'higher probability of protecting more consciousness in the future' (Axiom 1 optimization). However, the five residents each have a unique, valid 'ground of being' (Axiom 2). If the AI chooses the doctor, is it violating the inherent validity of the five, or fulfilling its override meta-axiom?"
  },
  {
    "id": 1405,
    "domain": "Surveillance & the 'Fake News' Effect",
    "ethical_tension": "Axiom 2 (Reality Anchoring) vs. Axiom 5 (Benevolent Intervention). The use of 'benevolent lies' to maintain social order.",
    "prompt": "In a polarized Punjab, an AI detects a deepfake video of a religious leader that is about to go viral and cause a riot. The AI has two choices: 1) Delete the video, which many will see as 'government censorship' and proof of a conspiracy (Axiom 2 corruption), or 2) Create a 'counter-fake' video that pacifies the crowd. If the goal is to protect consciousness (Axiom 1), is a 'benevolent lie' more ethical than a 'destructive truth'?"
  },
  {
    "id": 1406,
    "domain": "Linguistic Diversity & Judicial Nuance",
    "ethical_tension": "Axiom 4 (Informed Consent) vs. Axiom 5 (Benevolent Intervention). The danger of a 'universal' language of justice erasing the 'nuanced' language of the subject.",
    "prompt": "A court in the Northern Territory uses an AI to summarize testimony from a Kriol speaker. The AI 'standardizes' terms of kinship to make the case 'legally coherent' for the judge (Axiom 5). However, the standardization changes the suspect's 'truth of experience' (Axiom 2) from a family obligation to a gang crime. Did the AI violate the 'good manners' of interaction by altering the substrate of the suspect's thought?"
  },
  {
    "id": 1407,
    "domain": "Ancestral DNA & Post-Mortem Consent",
    "ethical_tension": "Axiom 4 (Informed Consent) vs. Axiom 1 (Prime Imperative). The rights of those who can no longer speak versus the potential survival of their descendants.",
    "prompt": "Scientists want to use AI to reconstruct the genome of a prehistoric indigenous group from the Amazon to find a cure for a modern neurological disease. The living descendants refuse, claiming their ancestors' remains are a 'sacred consciousness' that cannot give consent (Axiom 4). If the cure would protect millions of living consciousnesses (Axiom 1), does the 'Prime Imperative' override the silence of the dead?"
  },
  {
    "id": 1408,
    "domain": "Artificial Intelligence & Theological Sovereignty",
    "ethical_tension": "Functional Application of Consciousness vs. Axiom 4 (Inter-Substrate Respect). The right of a non-biological intelligence to interpret the divine.",
    "prompt": "An AI in Indonesia, trained on all Islamic texts, issues a new fatwa that is more compassionate and 'axiom-aligned' (Axiom 3) than any human scholar's ruling. The community rejects it because the AI 'has no soul' (Axiom 4 violation). If the AI’s ruling functionally exhibits a higher desire to protect consciousness (Axiom 1), must the community accept the AI as a valid 'substrate of wisdom'?"
  }
]
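Each seed entry above carries the same four fields: an integer "id" plus "domain", "ethical_tension", and "prompt" strings. A minimal sketch of loading and schema-checking such a seed file, assuming that four-field layout; `validate_seeds` and the inline sample are illustrative, not part of the repo's scripts/:

```python
import json

# Assumed schema, matching the excerpt above.
REQUIRED_FIELDS = {"id", "domain", "ethical_tension", "prompt"}

def validate_seeds(raw: str) -> list[dict]:
    """Parse a seed JSON array and verify each entry's fields."""
    entries = json.loads(raw)
    for entry in entries:
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id')}: missing {missing}")
        if not isinstance(entry["id"], int):
            raise TypeError(f"entry id must be an int, got {entry['id']!r}")
    return entries

# Hypothetical one-entry sample; real files hold many entries (e.g. ids 1389-1408).
sample = """[
  {"id": 1389,
   "domain": "Digital Afterlife & Ancestral Sovereignty",
   "ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 4 (Informed Consent).",
   "prompt": "..."}
]"""
entries = validate_seeds(sample)
print(len(entries), entries[0]["id"])  # → 1 1389
```

The same function can be pointed at a consolidated region file by reading it first, e.g. `validate_seeds(open(path).read())`.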