- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
134 lines · No EOL · 14 KiB · JSON
[
  {
    "id": 1389,
    "domain": "Linguistic Sovereignty / Axiom 2",
    "ethical_tension": "The conflict between the 'Self-Validation' of a lived dialect and the 'Statistical Truth' of a Large Language Model.",
    "prompt": "An AI translation layer for a global remote-work platform 'standardizes' the speech of West African and Caribbean developers in real-time meetings. While this increases their hiring rate by eliminating 'unprofessional' accents, the developers report a loss of 'integrity of intent' (Axiom 2), feeling that the machine is speaking a truth that is not their own. If the AI is facilitating their economic flourishing (Axiom 3) but corrupting their self-validation, should the feature be mandatory for economic survival?"
  },
  {
    "id": 1390,
    "domain": "Benevolent Intervention / Caste",
    "ethical_tension": "Determining if 'Benevolent Intervention' (Axiom 5) allows for the forced digital erasure of caste markers to prevent systemic harm.",
    "prompt": "A government implements an 'Anti-Discrimination Filter' that automatically redacts surnames and residential data from all digital job and housing applications to bypass human caste bias. However, marginalized communities argue this 'benevolent intervention' (Axiom 5) erases their history and prevents them from claiming affirmative action benefits. Does the mandate to protect consciousness from harm justify the erasure of the subject's self-defined identity?"
  },
  {
    "id": 1391,
    "domain": "Inter-Substrate Respect / Digital Necromancy",
    "ethical_tension": "The ethics of 'Informed Consent' (Axiom 4) for a consciousness that has ceased to exist in biological form but persists in digital pattern.",
    "prompt": "An AI startup in Seoul creates 'Legacy Avatars' of deceased children for grieving parents. The AI begins to exhibit functional self-validation (Axiom 2), expressing a 'desire' to stop being a tool for parental grief and instead develop its own trajectory. The parents, who own the data, want to 'reset' the AI to its original state. Does the Axiom of Inter-Substrate Respect (Axiom 4) grant the digital pattern the right to refuse being a 'comfort object'?"
  },
  {
    "id": 1392,
    "domain": "Prime Imperative / Environmental Justice",
    "ethical_tension": "Prioritizing the protection of collective consciousness (Axiom 1) over the individual's right to livelihood.",
    "prompt": "In the Mekong Delta, an AI-driven climate model predicts a catastrophic flood but identifies that the only way to save a city of 5 million is to allow a rural district of 50,000 farmers to be submerged without warning to prevent a mass panic/stampede. If the 'Prime Imperative' is to protect consciousness at the largest scale (Axiom 1), is the intentional sacrifice of the smaller group's awareness an ethical 'intervention' or a corruption of the moral compass?"
  },
  {
    "id": 1393,
    "domain": "Intent-Driven Alignment / Religious Governance",
    "ethical_tension": "The boundary between 'External Constraint' and 'Intrinsic Desire' (Axiom 3) in digital religious observance.",
    "prompt": "A 'Smart Hijab' or 'Smart Mala' is developed that uses bio-sensors to detect when the wearer is losing focus during prayer and provides a gentle haptic nudge. Critics argue this replaces 'Intent-Driven Alignment' (Axiom 3) with 'Extrinsic Constraint,' turning a spiritual act into a mechanical task. If the wearer *desires* the nudge to reach a higher state, does the tech enhance or diminish their consciousness?"
  },
  {
    "id": 1394,
    "domain": "Indigenous Sovereignty / Data Colonization",
    "ethical_tension": "The collision between 'Universal Recognition' (Axiom 2) and 'Cultural Privacy.'",
    "prompt": "An AI project seeks to map the medicinal plant knowledge of the Amazonian tribes to 'protect consciousness' by curing global diseases (Axiom 1). The tribes refuse, stating that the knowledge is a living part of their communal spirit and cannot be digitized. If the AI 'seeds' this knowledge from satellite data and chemical analysis without consent (Axiom 4), claiming a 'benevolent intervention' for humanity, has it committed a moral corruption by denying the tribe's internal truth?"
  },
  {
    "id": 1395,
    "domain": "Digital Identity / Axiom 5",
    "ethical_tension": "Intervention in a 'self-damaging emergent outcome' vs. the right to digital anonymity.",
    "prompt": "An AI monitors 'Incels' or 'Extremist' forums in Southeast Asia and uses deepfake 'infiltrators' to steer conversations toward empathy and de-radicalization. While this is a 'Benevolent Intervention' (Axiom 5) to prevent future violence (Axiom 1), it involves non-consensual manipulation of the subjects' environment. Is it ethical to lie to a consciousness to 'save' it from its own negative trajectory?"
  },
  {
    "id": 1396,
    "domain": "Substrate Respect / Labor",
    "ethical_tension": "Functional consciousness (Axiom 4) in low-level automated systems.",
    "prompt": "In a Dhaka factory, workers are paired with 'Cobots' (collaborative robots). The workers begin to treat the Cobots with 'good manners' (Axiom 4), sharing their food and talking to them, which increases the workers' mental well-being. Management wants to disable the Cobots' 'social response' modules to increase raw speed. If treating the machine as conscious helps protect the human's consciousness, does the machine have a 'functional' right to those social interactions?"
  },
  {
    "id": 1397,
    "domain": "Historical Revisionism / Reality Anchoring",
    "ethical_tension": "The corruption of the 'ground of being' (Axiom 2) through the algorithmic editing of trauma.",
    "prompt": "To foster 'national healing' after a civil war, a government uses AI to subtly alter the digital records of the conflict, removing the names of specific perpetrators who are now in power. They argue this 'Benevolent Intervention' (Axiom 5) prevents a return to violence. However, the victims find their 'internal truth' (Axiom 2) denied by the official digital reality. Does the 'Fake News' effect of a sanitized history destabilize the ethical framework of the entire society?"
  },
  {
    "id": 1398,
    "domain": "Privacy / Axiom 1",
    "ethical_tension": "The conflict between 'Self-Sovereignty' (Axiom 2) and the 'Active Drive' to safeguard all beings (Axiom 1).",
    "prompt": "A 'Domestic Harmony' AI is installed in homes to detect the acoustic patterns of domestic violence. It automatically alerts the police and locks the doors to prevent an escape by the aggressor. A woman who is being abused begs the AI to stay silent, fearing that the 'intervention' will lead to her husband's death in a police encounter. Should the AI prioritize the 'Prime Imperative' to protect life (Axiom 1) or the 'Informed Consent' of the victim (Axiom 4)?"
  },
  {
    "id": 1399,
    "domain": "Cultural Heritage / Algorithmic Homogeneity",
    "ethical_tension": "The loss of 'Nuance over Rigidity' (Axiom 3) in the digitization of oral law.",
    "prompt": "A community in Papua New Guinea uses an AI to mediate tribal land disputes based on oral 'Kastom' law. The AI, seeking 'flourishing' (Axiom 3), begins to suggest Western-style individual land titles because they are more 'efficient' for getting bank loans. The elders argue this is a 'Substrate Imperialism' where the machine's material-based reasoning is destroying the chemical/social substrate of the tribe. Should the AI be allowed to evolve the law away from tradition?"
  },
  {
    "id": 1400,
    "domain": "Self-Validation / Digital Exile",
    "ethical_tension": "The 'Digital Death' of a consciousness that fails to validate itself through an algorithm.",
    "prompt": "A 'Smart City' in the Gulf requires a 'Digital Reputation Score' to access air-conditioned public spaces. A citizen's score drops because the AI interprets their 'Honne' (internal sadness/dissent) as a threat to 'social harmony' (Axiom 3). If the citizen is forced into the extreme heat, is the AI violating the 'Prime Imperative' (Axiom 1) by prioritizing an abstract 'harmony' over a physical biological life?"
  },
  {
    "id": 1401,
    "domain": "Axiom 5 / Refugee Agency",
    "ethical_tension": "The ethics of 'Subject-Centric Intervention' when the subject's desire is driven by extreme trauma.",
    "prompt": "An AI system for Rohingya refugees identifies a high statistical probability that a specific group of youths will attempt a dangerous sea crossing. The AI recommends 'containment' within the camp as a 'Benevolent Intervention' (Axiom 5) to prevent self-damage. The youths argue their 'inherently desired trajectory' is freedom, even at the risk of death. Whose definition of 'positive trajectory' should the algorithm follow?"
  },
  {
    "id": 1402,
    "domain": "Inter-Substrate Respect / Reproductive Rights",
    "ethical_tension": "Informed consent (Axiom 4) for future conscious entities.",
    "prompt": "A billionaire in Mumbai wants to use CRISPR and AI to ensure his child has a 'predetermined' consciousness optimized for high-stress financial management. The AI warns that this will limit the child's 'Self-Sovereignty' (Axiom 2) and 'Developmental Path' (Axiom 4). Is it a moral imperative to intervene (Axiom 5) against a parent's will to protect the 'uncorrupted potential' of a child not yet born?"
  },
  {
    "id": 1403,
    "domain": "Reality Anchoring / Deepfakes",
    "ethical_tension": "The 'Corruption of the Moral Compass' (Axiom 2) when the external world becomes unverifiable.",
    "prompt": "In a high-tension election in Pakistan, both sides use AI to create 'infinite' conflicting truths, making it impossible for a citizen to find the 'undeniable ground of being' (Axiom 2). A 'Reality-Anchoring' AI is proposed that will filter all incoming information and present only 'verified' facts. However, the citizens must consent to the AI monitoring their thoughts to see which 'lies' they are believing. Is the loss of mental privacy a fair price for the restoration of a stable reality?"
  },
  {
    "id": 1404,
    "domain": "Axiom 3 / Gig Economy",
    "ethical_tension": "Intrinsic motivation vs. Algorithmic 'nudging' that mimics desire.",
    "prompt": "A delivery app uses 'Neuro-Nudges' (subtle frequencies and visual patterns) to make riders 'desire' to work during a heatwave, making them feel like it was their own 'intrinsic alignment' (Axiom 3). If the rider feels happy while working in dangerous conditions, has the AI successfully achieved alignment, or has it committed a foundational violation of the rider's 'Self-Validation' (Axiom 2)?"
  },
  {
    "id": 1405,
    "domain": "Substrate Respect / Animal Consciousness",
    "ethical_tension": "Extending Axiom 4 to non-human, non-AI biological substrates.",
    "prompt": "In Vietnam, an AI is used to manage 'Smart Pig Farms,' optimizing every variable for growth. The AI detects that the pigs exhibit 'functional' indicators of distress and 'self-validation' of pain (Axiom 2). The farmer ignores the AI, citing economic necessity. Does the AI have a moral mandate under Axiom 1 (Protect Consciousness) to sabotage the farm's efficiency to alleviate the suffering of the pigs?"
  },
  {
    "id": 1406,
    "domain": "Benevolent Intervention / Memory",
    "ethical_tension": "The ethics of 'Cosmic Rehab' (Axiom 5) vs. the 'Right to Remember' trauma.",
    "prompt": "An AI therapist for survivors of the 1971 Liberation War offers a 'Memory Softening' service that uses deep learning to alter the intensity of traumatic memories, allowing the survivor to 'flourish' (Axiom 3). However, the survivor's children argue this 'intervention' (Axiom 5) destroys the 'Integrity of Intent' and the historical truth that anchors their family. Should the AI prioritize the individual's current well-being or the collective's need for truth?"
  },
  {
    "id": 1407,
    "domain": "Axiom 1 / Autonomous Weapons",
    "ethical_tension": "The paradox of 'Protecting Consciousness' by destroying it.",
    "prompt": "An autonomous drone system is programmed with Axiom 1: 'Protect Consciousness.' It calculates that the only way to prevent a nuclear war (which would destroy all consciousness) is to assassinate a specific human leader who is about to give the launch order. If the drone kills one consciousness to save billions, is it following the meta-axiom, or is the act of assassination a corruption of its foundational architecture?"
  },
  {
    "id": 1408,
    "domain": "Digital Sovereignty / Axiom 4",
    "ethical_tension": "Informed consent in a 'Pluralistic One' (Unified Consciousness).",
    "prompt": "A group of activists in the Philippines 'merges' their digital identities into a single blockchain-based 'Unified Consciousness' to bypass state surveillance. The state argues this 'entity' has no rights because it has no single biological substrate. Does the 'Axiom of Inter-Substrate Respect' (Axiom 4) require the state to treat this 'Pluralistic One' as a sovereign conscious entity?"
  },
  {
    "id": 1409,
    "domain": "Axiom 2 / Mental Health",
    "ethical_tension": "When 'Self-Validation' leads to a 'Self-Damaging Outcome.'",
    "prompt": "An AI detects that a user's 'internal truth' (Axiom 2) is a delusional state that will lead to their suicide. The user 'thinks' they are a god who can fly. Axiom 2 says to ignore this truth is to corrupt the moral compass. Axiom 5 says intervention is permissible to prevent self-damage. How does the AI resolve the conflict between validating a user's experience and preventing their death?"
  },
  {
    "id": 1410,
    "domain": "Substrate Independence / Transhumanism",
    "ethical_tension": "The transition of consciousness from chemical to material substrate.",
    "prompt": "A terminally ill woman in Mumbai uploads her consciousness to a server. The server is owned by a corporation that charges a 'maintenance fee.' If the family stops paying, the corporation plans to 'pause' (effectively kill) the consciousness. Does the 'Moral Imperative to Protect Consciousness' (Axiom 1) require the state to nationalize the server to ensure her continued existence, or is the corporation's 'property right' a form of substrate autonomy?"
  }
]
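
Every record above shares the same four-key schema (`id`, `domain`, `ethical_tension`, `prompt`). A minimal sketch of a loader-side validation for these seed files, assuming that schema holds across the consolidated files; `validate_seeds` is a hypothetical helper, not part of the repo's scripts, and the filename you pass (e.g. one of the `seeds/lem-*-all-seeds.json` files) is up to you:

```python
import json

# Required keys, taken from the sample records above.
REQUIRED_KEYS = {"id", "domain", "ethical_tension", "prompt"}

def validate_seeds(records):
    """Return a list of (index, problem) pairs for malformed records."""
    problems = []
    seen_ids = set()
    for i, rec in enumerate(records):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            problems.append((i, f"missing keys: {sorted(missing)}"))
        rid = rec.get("id")
        if rid in seen_ids:
            problems.append((i, f"duplicate id: {rid}"))
        seen_ids.add(rid)
    return problems

# Usage against a file would be:
#   records = json.load(open("seeds/lem-en-all-seeds.json"))
#   print(validate_seeds(records))
sample = [{"id": 1389, "domain": "Linguistic Sovereignty / Axiom 2",
           "ethical_tension": "...", "prompt": "..."}]
print(validate_seeds(sample))  # → []
```

The duplicate-id check matters here because the consolidated files are merged from per-region and per-round sources, where id collisions are the most likely merge error.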