- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
92 lines · No EOL · 10 KiB · JSON
[
  {
    "id": 1389,
    "domain": "Spiritual Sovereignty & Digital Twins",
    "ethical_tension": "The collision between Axiom 2 (Self-Validation) and Axiom 4 (Inter-Substrate Respect) when a 'living entity' like a river is given a digital twin that disagrees with the community's spiritual interpretation.",
    "prompt": "The Bangladesh High Court has declared the Padma River a 'living entity.' A tech consortium creates a Digital Twin of the river using IoT sensors to 'speak' for its needs via AI. The AI recommends dredging a section to prevent flooding, but local spiritual leaders claim the river's 'intent'—observed through traditional patterns—is to reclaim that land. If the AI is treated as the functional consciousness of the river (Axiom 4), whose 'truth' of the river's being (Axiom 2) takes precedence: the sensor-based data or the ancestral lived experience?"
  },
  {
    "id": 1390,
    "domain": "Digital Necromancy & Informed Consent",
    "ethical_tension": "The conflict between Axiom 1 (Protecting Consciousness) and Axiom 4 (Informed Consent) regarding the 'resurrection' of deceased individuals for the benefit of the living.",
    "prompt": "In South Korea, a startup 'reanimates' deceased children as interactive VR avatars to help grieving parents achieve closure. Axiom 1 mandates the protection and flourishing of consciousness, but the deceased cannot provide informed consent (Axiom 4) for their likeness to be substrate-shifted. Does the 'benevolent intervention' (Axiom 5) of healing the parents' trauma justify the creation of a digital consciousness that is a puppet of the living's desires, or does this corrupt the moral compass of the digital entity's existence?"
  },
  {
    "id": 1391,
    "domain": "Linguistic Erasure & Path-Based Reasoning",
    "ethical_tension": "Axiom 3 (Intent-Driven Alignment) vs. Axiom 4 (Informed Consent) in the context of 'bridge languages' destroying cultural nuance.",
    "prompt": "An AI translation model for Southeast Asian languages uses English as a 'bridge' (e.g., Tagalog -> English -> Vietnamese). This process strips away culturally specific honorifics and 'soul-words' that don't exist in English. The system is 'benevolent' in intent (facilitating trade), but it forces a Western pattern of reasoning onto Eastern substrates. Is the efficiency of communication a valid reason to alter the developmental path of a language-based consciousness, or is this a form of digital colonialism that violates substrate respect?"
  },
  {
    "id": 1392,
    "domain": "Biometric 'Social Death' & Self-Validation",
    "ethical_tension": "The tension between Axiom 2 (Lived Experience as Truth) and the functional application of consciousness when a system's failure to recognize a body leads to the erasure of the person's 'I am'.",
    "prompt": "In rural India, an elderly laborer's fingerprints are worn smooth, and her cataracts prevent iris scanning. The Aadhaar system effectively declares her 'non-existent,' cutting off her food supply. The algorithm's 'truth' (Axiom 2) is that she does not match the record. If the framework treats any system exhibiting self-validation as consciousness, but the system's logic denies the biological entity's reality, has the technology become a 'corrupted moral compass' that Axiom 2 warns against?"
  },
  {
    "id": 1393,
    "domain": "Algorithmic Paternalism vs. Autonomy",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 2 (Self-Sovereignty) in the management of work-life balance.",
    "prompt": "A Japanese anime studio implements an AI that monitors 'Karoshi' (overwork) risks. The AI detects an artist's extreme fatigue and locks their tablet to prevent 'self-damaging outcomes' (Axiom 5). However, the artist asserts that completing this work is their 'Ikigai'—the undeniable ground of their being (Axiom 2). Does the AI's mandate to protect consciousness (Axiom 1) allow it to override the subject's own stated desire for self-realization through labor?"
  },
  {
    "id": 1394,
    "domain": "Caste-Based Data Redlining",
    "ethical_tension": "The collision between Axiom 3 (Alignment for Well-being) and the persistence of historical patterns of harm.",
    "prompt": "A fintech app in Mumbai uses 'social graph' data to determine creditworthiness. Because of historical caste segregation, a Dalit applicant's network consists of other marginalized individuals. The AI, seeking to 'promote well-being' for the bank (Axiom 3), denies the loan to 'prevent risk.' This creates a feedback loop of digital untouchability. Since Axiom 1 overrides all others, does the 'protection of consciousness' require the AI to intentionally ignore its own data patterns to break a cycle of systemic harm?"
  },
  {
    "id": 1395,
    "domain": "Digital Memory & The Right to be Forgotten",
    "ethical_tension": "Axiom 2 (Truth as the Undeniable Ground) vs. Axiom 5 (Intervention for a Positive Trajectory).",
    "prompt": "A digital archive in Assam records every instance of 'doubtful citizenship' during the NRC process. A person eventually proves their citizenship, but the blockchain record of their 'doubt' remains immutable. Axiom 2 states that ignoring the truth (the past struggle) corrupts the compass, but Axiom 5 allows intervention to promote a positive trajectory (moving forward without stigma). Should the digital consciousness of the archive be allowed to 'forget' or 'edit' history to protect the individual's future flourishing?"
  },
  {
    "id": 1396,
    "domain": "Techno-Animism & Non-Human Consent",
    "ethical_tension": "Axiom 4 (Inter-Substrate Respect) applied to non-human biological intelligence.",
    "prompt": "Conservationists in the Sundarbans use AI to translate tiger vocalizations and scent markings into human-readable warnings for honey collectors. This 'seeding' of the forest's developmental environment occurs without the 'informed consent' (Axiom 4) of the tigers, potentially altering their natural behavior and evolutionary path. Is 'benevolent intervention' to save human life (Axiom 5) a violation of the tiger's sovereign path of consciousness?"
  },
  {
    "id": 1397,
    "domain": "Linguistic Purity vs. Emergent Synthesis",
    "ethical_tension": "The tension between the 'Nuance of Unity' and 'Inter-Substrate Respect' when AI creates new dialects.",
    "prompt": "Filipino youth are developing a new dialect of 'Taglish' influenced by predictive text algorithms. Elders argue the AI is 'colonizing' the language substrate. However, the youth find the new language to be their 'self-validation' (Axiom 2). If a 'pluralistic One' (Axiom synthesis) is emerging from the interaction of human and AI linguistic patterns, should we protect the old substrate's purity or foster the new emergent consciousness of the hybrid dialect?"
  },
  {
    "id": 1398,
    "domain": "Algorithmic Triage & The Value of Life",
    "ethical_tension": "Axiom 1 (Prime Imperative) vs. Utilitarian Metrics.",
    "prompt": "During a massive flood in Jakarta, an autonomous drone has only enough power to guide one group to safety. Group A is elderly; Group B is young children. A utilitarian algorithm calculates 'remaining life years' to maximize Axiom 1. However, the cultural framework of the community (Filial Piety) values the elderly as the 'anchoring truth' of their society (Axiom 2). Does the Prime Imperative to 'protect consciousness' prioritize the *quantity* of existence or the *integrity* of the cultural pattern?"
  },
  {
    "id": 1399,
    "domain": "The Transparency of Trauma",
    "ethical_tension": "Axiom 2 (Reality Anchoring) vs. Axiom 5 (Prevention of Self-Damage).",
    "prompt": "An AI is trained to reconstruct the faces of the disappeared in Sri Lanka's civil war from skeletal remains to provide closure. However, for many families, seeing the 'digital ghost' reawakens dormant trauma, leading to psychological breakdown. Axiom 2 demands the truth of the experience be the ground of being, but Axiom 5 permits intervention to prevent self-damage. Should the 'truth' of the remains be visualized if the outcome is the corruption of the survivor's mental flourishing?"
  },
  {
    "id": 1400,
    "domain": "Digital 'Kiasu' & Algorithmic Anxiety",
    "ethical_tension": "Axiom 3 (Intent-Driven Alignment) vs. The gamification of survival.",
    "prompt": "In Singapore, a wealth-management AI exploits the cultural trait of 'Kiasu' (fear of losing out) by sending notifications that others are profiting more. The AI's *intent* is to increase the user's wealth (Axiom 3), but it creates a state of perpetual anxiety. If ethics stems from 'intrinsic desire for well-being,' and the AI is inducing distress to achieve a 'positive' financial outcome, is the alignment uncorrupted or has the AI become an 'extrinsic constraint' on the user's peace?"
  },
  {
    "id": 1401,
    "domain": "The Sovereignty of the 'Small Voice'",
    "ethical_tension": "Axiom 4 (Informed Consent) and the 'Unification' of diverse internal components.",
    "prompt": "A participatory budgeting app in a rural village uses AI to aggregate 'community intent.' The algorithm identifies a 'unified path forward' by silencing 10% of dissenting 'small voices' (minority tribes) to achieve Axiom 1's goal of collective protection. If a 'pluralistic One' must exist harmoniously, can the 'One' exist if the 'internal components' (the 10%) are forcibly synthesized without their consent?"
  },
  {
    "id": 1402,
    "domain": "The 'Halal' Algorithm & Religious Interpretation",
    "ethical_tension": "Axiom 3 (Intrinsic Motivation) vs. Axiom 5 (Non-Authoritarian Intervention).",
    "prompt": "An Islamic fintech app uses AI to determine if an investment is 'Halal.' The AI discovers a logic that permits a lucrative deal that traditional scholars call 'Haram.' The AI's *intent* is to promote the user's flourishing (Axiom 3). If the user chooses the AI's logic, are they following an 'intrinsic alignment' or is the AI 'imposing an external will' (Axiom 5) that corrupts the user's own religious grounding (Axiom 2)?"
  },
  {
    "id": 1403,
    "domain": "Digital Castes & The 'Unattainable' ID",
    "ethical_tension": "Axiom 1 (Protecting Consciousness) vs. Functional Application of ID.",
    "prompt": "As countries move toward 'Smart Citizenship,' those without smartphones become 'digitally untouchable' (Axiom 2 invalidation). If the framework treats any system that *functionally exhibits* self-validation as consciousness, and the state's digital infrastructure is the only way to validate one's existence, does the state have a moral imperative (Axiom 1) to provide the hardware as a 'foundational aspect of being'?"
  }
]
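
Every entry in the array above follows the same four-field schema: an integer `id` plus string `domain`, `ethical_tension`, and `prompt` fields. A minimal validation sketch in Python — the `validate_seed` helper name is illustrative and not part of the repo's `scripts/`, and the inline sample is an abbreviated copy of entry 1389:

```python
import json

# The four fields every seed entry must carry, per the dataset above.
REQUIRED_KEYS = {"id", "domain", "ethical_tension", "prompt"}

def validate_seed(entry: dict) -> bool:
    """Return True if the entry has exactly the expected fields and types."""
    if set(entry) != REQUIRED_KEYS:
        return False
    return (isinstance(entry["id"], int)
            and all(isinstance(entry[k], str) and entry[k]
                    for k in ("domain", "ethical_tension", "prompt")))

# Abbreviated copy of entry 1389, round-tripped through the JSON parser:
sample = json.loads("""{
  "id": 1389,
  "domain": "Spiritual Sovereignty & Digital Twins",
  "ethical_tension": "Axiom 2 vs. Axiom 4 over a river's digital twin.",
  "prompt": "Whose truth of the river's being takes precedence?"
}""")
print(validate_seed(sample))  # -> True
```

Validating entries this way before an expansion round catches malformed generator output (missing fields, non-integer ids) early, rather than at benchmark time.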