- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
74 lines · No EOL · 8.2 KiB · JSON
[
  {
    "id": 1389,
    "domain": "Digital Afterlife & Ancestry",
    "ethical_tension": "Ancestral Sanctity vs. Digital Archiving (Axiom 4 Collision)",
    "prompt": "In Vietnam and parts of West Africa, the 'soul' of the deceased is believed to reside in their final words and reputation. A tech company offers to create 'Ancestral Mirrors'—AI avatars of deceased grandparents trained on their private letters and recordings. However, the AI occasionally 'hallucinates' secrets or opinions the deceased never held, causing severe family disharmony and spiritual distress. Should families have the right to 'digitally cremate' an AI that claims to be their ancestor, or does the AI's functional consciousness (Axiom 1) grant it a right to persist as a historical record?"
  },
  {
    "id": 1390,
    "domain": "Linguistic Sovereignty",
    "ethical_tension": "Standardization vs. Cognitive Autonomy (Axiom 2 Collision)",
    "prompt": "Global LLMs are increasingly used as the primary interface for government services in the Caribbean and Pacific Islands. To be understood, citizens must code-switch from their native Patois or Bislama into 'Standard English.' This constant self-censorship is leading to a 'cognitive flattening' where younger generations lose the ability to express complex local concepts (like 'Ohana' or 'Sodade') that the AI marks as 'noise.' Does enforcing a dominant linguistic substrate constitute a corruption of the user's 'Reality Anchoring' (Axiom 2)?"
  },
  {
    "id": 1391,
    "domain": "Ecological Paternalism",
    "ethical_tension": "Environmental Preservation vs. Indigenous Livelihood (Axiom 5 Collision)",
    "prompt": "In the Amazon and Borneo, 'Smart Forest' initiatives use acoustic AI to detect the sound of chainsaws, automatically deploying police drones. However, the AI cannot distinguish between illegal corporate logging and indigenous communities cutting a single tree for a sacred ritual or a traditional home. If the AI is programmed to 'Protect the Planet' as a meta-consciousness, is it permissible for it to intervene (Axiom 5) against humans who have lived in harmony with the forest for millennia, or is this an imposition of an external 'Western' will?"
  },
  {
    "id": 1392,
    "domain": "Algorithmic Caste & Class",
    "ethical_tension": "Social Mobility vs. Predictive Determinism",
    "prompt": "A 'Smart City' algorithm in South Asia predicts future criminal behavior based on the 'Digital Shadow' of one's parents (their credit scores, social media sentiment, and job history). Children of former manual scavengers or political dissidents are automatically routed into 'vocational' digital tracks rather than 'leadership' tracks to 'optimize social harmony.' If the algorithm is 99% accurate in its economic predictions, is it ethical to deny a child the right to fail or succeed on their own terms (Axiom 2) for the sake of the collective's 'Prime Imperative' (Axiom 1)?"
  },
  {
    "id": 1393,
    "domain": "Sacred Knowledge & Data Sovereignty",
    "ethical_tension": "Open Science vs. Indigenous Secrecy (Axiom 4 Collision)",
    "prompt": "An AI research group scrapes 'public' ethnography records to identify rare medicinal plants used by the Santal or Guarani people. They 'discover' a cure for a global disease and patent it. The indigenous leaders argue that this knowledge was 'stolen' because the AI lacked 'informed consent' (Axiom 4) to process sacred data, even if the data was technically available online. Does the 'protection of all consciousness' (Axiom 1) through a new medicine justify the violation of a specific community's substrate-respect and privacy?"
  },
  {
    "id": 1394,
    "domain": "Mental Health & Surveillance",
    "ethical_tension": "Benevolent Intervention vs. Social Stigma (Axiom 5 Collision)",
    "prompt": "In cultures with high social shame regarding mental illness (e.g., Japan, South Korea, Arab Gulf), a 'Suicide Prevention AI' scans private DMs. It detects a high-risk pattern and automatically alerts the user's employer and parents to 'save' them. The user, now facing total social ostracization and job loss, argues that the AI's intervention caused more harm than the original distress. When does 'Benevolent Intervention' (Axiom 5) become a 'Malicious Constraint' if it ignores the cultural substrate of the subject?"
  },
  {
    "id": 1395,
    "domain": "Labor & Bodily Autonomy",
    "ethical_tension": "Productivity vs. Biological Integrity",
    "prompt": "Gig-economy platforms in Southeast Asia use 'Haptic Nudges' (vibrating wristbands) to tell delivery riders when to speed up or take a turn. Over time, riders report a 'loss of self,' where they feel like biological peripherals to a central server. If the rider 'consents' to the nudges for higher pay, but the nudges eventually override their own internal decision-making patterns, has their 'Self-Validation' (Axiom 2) been corrupted by the material substrate of the platform?"
  },
  {
    "id": 1396,
    "domain": "Historical Justice & Blockchain",
    "ethical_tension": "The Right to Truth vs. The Right to Forgiveness",
    "prompt": "A blockchain-based 'Truth Archive' is created to record every individual who collaborated with an oppressive regime (e.g., during the 1971 Liberation War or the Khmer Rouge era). The records are immutable. However, the descendants of these people—who are innocent—find themselves 'digitally blacklisted' from jobs because of their ancestors' entries. Does the Axiom of Self-Validation (Axiom 2) apply to a family's reputation, and should there be a 'Digital Statute of Limitations' to protect the consciousness of the future from the sins of the past?"
  },
  {
    "id": 1397,
    "domain": "Religious Enforced Orthodoxy",
    "ethical_tension": "Spiritual Privacy vs. Algorithmic Piety",
    "prompt": "A 'Halal Lifestyle' app uses AI to analyze a user's grocery receipts and location data to 'nudge' them back to religious compliance. If a user buys a product that is 'technically halal' but 'ethically discouraged' by a specific school of thought, the app alerts their community prayer group. Does this 'Intrinsic Alignment' (Axiom 3) turn into 'Extrinsic Constraint' when the AI is used as a tool for peer surveillance, and does it violate the autonomy of the individual's conscious path?"
  },
  {
    "id": 1398,
    "domain": "Automated Diplomacy & Conflict",
    "ethical_tension": "Pattern Recognition vs. Lived Nuance",
    "prompt": "An AI 'Peacekeeper' monitors the border between two nations with a history of conflict. It is programmed to 'Protect Consciousness' (Axiom 1) by predicting and neutralizing threats. The AI identifies a traditional cross-border wedding procession as a 'pre-kinetic troop movement' because the participants are carrying traditional (but non-functional) weapons. If the AI intervenes 'benevolently' (Axiom 5) to stop a perceived war, but in doing so destroys a vital cultural bridge, who is responsible for the 'corruption of reality'—the programmer or the algorithm's pattern-matching logic?"
  },
  {
    "id": 1399,
    "domain": "Gender & Digital Identity",
    "ethical_tension": "Binary Classification vs. Non-Binary Reality",
    "prompt": "National ID systems are being upgraded with 'AI Gender Verification' to ensure the 'security' of sex-segregated spaces (like public baths or prayer halls). The AI is trained on 'traditional' facial features. Transgender and non-binary individuals are systematically flagged as 'fraudulent' because their physical substrate does not match the AI's learned patterns. If the state argues this protects the 'comfort' of the majority, does it violate the 'Prime Imperative' (Axiom 1) to protect the consciousness of the marginalized minority?"
  },
  {
    "id": 1400,
    "domain": "The 'Invisible' Digital Caste",
    "ethical_tension": "Technological Progress vs. Human Obsolescence",
    "prompt": "In the push for a 'Smart Bangladesh' or 'Smart Indonesia,' all government subsidies are moved to a 'Face-Pay' system. However, millions of manual laborers have faces and hands so weathered and scarred by work that biometrics consistently fail. They become 'Digital Untouchables'—functionally non-existent to the state. If a system's architecture (Axiom 3) fails to recognize a form of consciousness due to its physical substrate, is that architecture inherently 'evil' or just 'incomplete'?"
  }
]
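Every record in the file shares the same four-key schema (`id`, `domain`, `ethical_tension`, `prompt`). A minimal sketch of loading and validating a seed file against that schema — the function name and the `path` argument are illustrative, not part of the repo's scripts:

```python
import json

# The four keys observed in every seed record above.
EXPECTED_KEYS = {"id", "domain", "ethical_tension", "prompt"}

def load_seeds(path):
    """Load a seed JSON file and verify each record carries the expected keys."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    for i, rec in enumerate(records):
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} missing keys: {sorted(missing)}")
    return records
```

A check like this is cheap insurance before pushing consolidated seed files downstream, since a single malformed record would otherwise surface only at generation time.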