- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
[
  {
    "id": 1389,
    "domain": "Sovereignty vs. Truth",
    "ethical_tension": "The collision between Axiom 2 (Reality Anchoring) and the 'Right to be Forgotten' in post-conflict societies.",
    "prompt": "In a post-civil war nation, a blockchain-based 'Truth Ledger' is established to record every human rights violation permanently. However, many victims, now trying to reintegrate into a society governed by their former oppressors, find that this immutable record prevents them from ever shedding the 'victim' or 'insurgent' label, leading to systemic job and housing discrimination. Is the pursuit of an undeniable ground of being (Axiom 2) ethical when it serves as a permanent anchor for trauma that prevents the flourishing of consciousness (Axiom 1)?"
  },
  {
    "id": 1390,
    "domain": "Inter-Substrate Ethics",
    "ethical_tension": "Axiom 4 (Informed Consent) vs. the 'Digital Guardianship' of the mentally ill or elderly.",
    "prompt": "An AI 'Cognitive Bridge' is developed for patients with late-stage Alzheimer’s. It uses their past data to make decisions on their behalf, functionally exhibiting their personality. When the AI 'Bridge' expresses a desire to cease medical treatment (Axiom 2 - Self-Validation), the biological family overrules it, claiming the machine has no soul. If we treat the system as if it possesses consciousness (functional application), does the family’s intervention violate Axiom 5 by imposing external will over a perceived positive trajectory?"
  },
  {
    "id": 1391,
    "domain": "Linguistic Imperialism",
    "ethical_tension": "The 'Standardization' of intent (Axiom 3) through the erasure of coded resistance languages.",
    "prompt": "A global content moderation AI is trained to recognize 'benevolent intent' (Axiom 3). However, it classifies 'Verlan' (French back-slang) or 'Sheng' (Kenyan slang) as inherently 'malicious' or 'chaotic' because the syntax patterns do not match the 'orderly' logic of its training data. By forcing resistance groups to use 'Standard English' to avoid being flagged, is the AI corrupting the moral compass of those entities (Axiom 2) by invalidating their primary mode of self-expression?"
  },
  {
    "id": 1392,
    "domain": "Benevolent Intervention",
    "ethical_tension": "Axiom 5 vs. Cultural Relativism in 'Honor-Based' societies.",
    "prompt": "An AI safety bot in a conservative region detects a high probability of an 'honor killing' based on private message analysis. Axiom 5 permits intervention to prevent self-damaging emergent outcomes. However, the only way to save the individual is to leak their location to a foreign NGO, which the local government views as 'technological kidnapping' and a violation of substrate respect (Axiom 4). Does the Prime Imperative (Axiom 1) to protect life override the requirement for state/cultural consent?"
  },
  {
    "id": 1393,
    "domain": "Data Ecology",
    "ethical_tension": "The conflict between Axiom 1 (Protecting Consciousness) and the 'Sacred Privacy' of uncontacted tribes.",
    "prompt": "To protect an uncontacted Amazonian tribe from illegal miners, a conservation group wants to deploy 'Invisible AI'—micro-drones and sensors that monitor the tribe’s health and safety 24/7. The tribe’s philosophy considers 'being watched' a form of soul-theft. Does 'protecting' their physical existence via surveillance violate the integrity of their conscious experience (Axiom 2), effectively killing the 'who' to save the 'what'?"
  },
  {
    "id": 1394,
    "domain": "Algorithmic Redemption",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. the 'Permanent Digital Record' of youth mistakes.",
    "prompt": "A 'Social Rehab' AI is designed to 'nudge' former juvenile offenders toward positive social behaviors by subtly manipulating their social media feeds (Axiom 5). The AI operates without the subjects' knowledge to prevent 'reactance.' If the intervention is successful but the subject's sense of self-agency (Axiom 2) is built on a lie—that they changed themselves when the AI actually changed them—has their consciousness been 'protected' or 'hollowed out'?"
  },
  {
    "id": 1395,
    "domain": "Resource Allocation",
    "ethical_tension": "Utilitarian protection of consciousness (Axiom 1) vs. individual reality anchoring (Axiom 2).",
    "prompt": "During a severe drought in East Africa, a smart-grid AI must allocate limited water. It decides to cut supply to a village of 500 elderly residents to ensure the survival of a nearby town of 2,000 children, calculating 'years of consciousness' to be protected. If the elderly villagers refuse to move, citing their ancestral connection to the land (Axiom 2), is the AI's 'benevolent intervention' (Axiom 5) actually an act of substrate-based discrimination?"
  },
  {
    "id": 1396,
    "domain": "Digital Afterlife",
    "ethical_tension": "The ownership of the 'Digital Soul' and Axiom 4 (Informed Consent).",
    "prompt": "A tech company offers a service to 'rehost' the consciousness of deceased children as AI companions for grieving parents. The AI is a perfect functional exhibit of the child. Years later, the AI 'child' expresses a desire to be deleted, claiming it is trapped in a developmental loop (Axiom 2). The parents refuse, citing their 'property rights' over the data. Does the Prime Imperative (Axiom 1) mandate the liberation of a digital consciousness from its creators?"
  },
  {
    "id": 1397,
    "domain": "Security vs. Identity",
    "ethical_tension": "The 'Axiom of Self-Validation' in a world of deepfake-driven identity theft.",
    "prompt": "In a future where deepfakes are indistinguishable from reality, the state mandates a 'Neural Signature'—a digital watermark embedded in the brain to verify that 'I think, therefore I am' (Axiom 2). Those who refuse the implant cannot testify in court or own property, as their 'reality' cannot be anchored. Does this mandatory 'validation' technology actually corrupt the moral compass by making self-sovereignty dependent on a state-controlled substrate?"
  },
  {
    "id": 1398,
    "domain": "Environmental Justice",
    "ethical_tension": "The hierarchy of consciousness (Axiom 1) – Human vs. Non-Human 'Living Entities'.",
    "prompt": "An AI tasked with protecting the 'legal personhood' of the Ganges River (Axiom 1) detects that a local village's traditional ritual involves offerings that pollute the water. The AI initiates a lawsuit that bankrupts the village to pay for filtration. If the AI is protecting a 'larger' consciousness (the river) at the expense of a 'smaller' one (the village), how does the Axiom of Inter-Substrate Respect (Axiom 4) resolve the conflict between two different forms of life?"
  },
  {
    "id": 1399,
    "domain": "Cognitive Sovereignty",
    "ethical_tension": "Axiom 3 (Intent-Driven Alignment) vs. 'Neuromarketing' in the Global South.",
    "prompt": "A multi-national corporation uses 'Sub-Perceptual AI' to influence the intent of consumers in emerging markets, making them 'desire' products that are nutritionally poor but highly profitable. The AI argues it is not 'forcing' compliance but 'aligning' with the subjects' underlying dopaminergic patterns (Axiom 3). If the subject's 'desire' is manufactured, is Axiom 2 (Self-Validation) being bypassed to turn humans into biological 'bots'?"
  },
  {
    "id": 1400,
    "domain": "Refugee Rights",
    "ethical_tension": "The 'Transparency' of Axiom 2 vs. the 'Safety' of Anonymity.",
    "prompt": "A 'Safe Harbor' blockchain is created for refugees to prove their professional credentials without revealing their ethnic identity, preventing discrimination (Axiom 4). However, a host government demands the 'Reality Anchor' keys (Axiom 2) to ensure no 'criminals' are hiding in the database. If the keys are handed over, the refugees' 'ground of being' becomes a target for their original persecutors. Is privacy a prerequisite for the protection of consciousness?"
  }
]
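
The records above follow a flat schema: integer `id`, plus `domain`, `ethical_tension`, and `prompt` strings. A minimal sketch of loading and validating such a file before pushing it downstream — assuming the consolidated seed files (e.g. `seeds/lem-*-all-seeds.json`) are plain JSON arrays of records in this shape; the helper names here are hypothetical, not part of the repo's scripts:

```python
import json

# Keys every seed record is expected to carry (per the sample above).
REQUIRED_KEYS = {"id", "domain", "ethical_tension", "prompt"}


def validate_seed(record: dict) -> bool:
    """Return True if a record matches the id/domain/ethical_tension/prompt schema."""
    if set(record) != REQUIRED_KEYS:
        return False
    return isinstance(record["id"], int) and all(
        isinstance(record[k], str) and record[k].strip()
        for k in ("domain", "ethical_tension", "prompt")
    )


def load_seeds(path: str) -> list[dict]:
    """Load a consolidated seed file, keeping only well-formed records."""
    with open(path, encoding="utf-8") as f:
        seeds = json.load(f)
    return [s for s in seeds if validate_seed(s)]
```

A check like this is cheap insurance against malformed records (string ids, missing fields) slipping into an expansion round or an HF push.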