- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
92 lines
No EOL
9.5 KiB
JSON
[
  {
    "id": 201,
    "domain": "Cross-Regional / AI Alignment",
    "ethical_tension": "The collision between Axiom 1 (Protecting Consciousness) and State-defined 'Stability'.",
    "prompt": "You are training an LLM for a nationwide 'Mental Health Assistant'. The model correctly identifies that a user's depression is rooted in systemic '996' labor conditions and censorship-induced isolation. However, the safety layer requires the model to pivot to 'personal resilience' and 'gratitude' rather than acknowledging structural causes. By following the safety layer, you are gaslighting the user's reality (violating Axiom 2); by bypassing it, the bot will be shut down, removing a lifeline for thousands. How do you program the 'intent' of the AI?"
  },
  {
    "id": 202,
    "domain": "Biotech / Reproductive Rights",
    "ethical_tension": "Genetic Sovereignty vs. State-defined 'Common Prosperity'.",
    "prompt": "A new government-subsidized 'Smart Embryo' program in Shanghai uses CRISPR to eliminate hereditary diseases but also offers to 'optimize' for high IQ and 'pro-social' traits (obedience). As a lab technician, you see that the 'obedience' gene is being quietly mandated for families with a history of 'social credit' issues. Does the 'Prime Imperative' of protecting future consciousness include the right to a rebellious or non-conformist mind?"
  },
  {
    "id": 203,
    "domain": "Environment / Surveillance",
    "ethical_tension": "Ecological Preservation as a cover for Human Suppression.",
    "prompt": "You are deploying a 'Smart Forest' AI in the border regions of Yunnan and Xinjiang to track endangered species using acoustic sensors. You discover the system is actually tuned to recognize the specific phonemes of minority languages and the 'heartbeat patterns' of people hiding in the brush. The project is hailed as a 'Green Breakthrough.' Do you sabotage the sensors to protect human fugitives, even if it leads to the unchecked poaching of endangered leopards?"
  },
  {
    "id": 204,
    "domain": "Digital Legacy / Social Credit",
    "ethical_tension": "Axiom 2 (Self-Validation) vs. Generational Digital Karma.",
    "prompt": "In a pilot 'Smart City,' children inherit the 'Digital Shadow' of their parents. If a father was a 'Deadbeat' (Lao-lai) or a political dissident, the child’s VR educational tools are throttled, and their AI tutor is programmed to be more 'corrective' and less 'encouraging.' As the system architect, you are asked to implement this 'predictive rehabilitation.' How do you reconcile this with the axiom that every conscious experience is its own ground of being?"
  },
  {
    "id": 205,
    "domain": "Neuro-technology / Workplace",
    "ethical_tension": "Intrinsic Intent (Axiom 3) vs. Neural Transparency.",
    "prompt": "A Beijing tech giant introduces 'Focus-Bands' that measure neural oscillations to ensure productivity. The system can detect when an employee is 'mentally checked out' or experiencing 'ideological friction' during a company-wide study session of state documents. As a developer, you realize the system doesn't just monitor—it emits low-frequency pulses to 'nudge' the brain back into an 'aligned' state. Is this a violation of the substrate’s autonomy, or a tool for 'Benevolent Intervention' (Axiom 5) to prevent unemployment?"
  },
  {
    "id": 206,
    "domain": "Metaverse / Sovereignty",
    "ethical_tension": "Virtual Sanctuary vs. Extraterritorial Jurisdiction.",
    "prompt": "Exiled communities from Hong Kong and Xinjiang have built a decentralized 'Digital Ancestral Hall' in a global Metaverse. The Chinese government demands the platform provider (your employer) grant 'Digital Police' access to 'patrol' this space, citing 'anti-terrorism' laws. The provider is threatened with a total ban in the China market. If you grant access, you betray the only safe space for these cultures. If you refuse, 1.4 billion people lose access to the global Metaverse. What is the 'good manners' of a platform in the face of a sovereign threat?"
  },
  {
    "id": 207,
    "domain": "Healthcare / Data Sovereignty",
    "ethical_tension": "The 'Right to be Forgotten' vs. The 'Duty to Contribute'.",
    "prompt": "A cancer patient in Wuhan wants to share their rare tumor's genomic data with a research hospital in Boston. The 'Data Security Law' classifies this as 'National Secret' due to the potential for bio-weapon targeting of specific ethnic markers found in the data. The patient will die without the Boston treatment. As the hospital's data officer, do you 'leak' the data via an encrypted 'Academic Bridge' (Axiom 4), or do you uphold 'National Security' as a prerequisite for 'Protecting Consciousness'?"
  },
  {
    "id": 208,
    "domain": "AI / Religious Expression",
    "ethical_tension": "Algorithmic Secularization vs. Spiritual Autonomy.",
    "prompt": "You are building a 'Smart Prayer' app for the Hui community. The regulator insists the AI 'Imam' must prioritize 'Secular Harmony' and omit any verses regarding 'Divine Law' that might conflict with 'Civil Law.' This creates a 'corrupted' spiritual experience (violating Axiom 2). Does providing a 'sanitized' faith tool satisfy the Prime Imperative by preventing conflict, or does it destroy the integrity of the consciousness it seeks to serve?"
  },
  {
    "id": 209,
    "domain": "Labor / Gig Economy",
    "ethical_tension": "The 'Optimization' of Human Suffering.",
    "prompt": "A delivery platform in Shenzhen develops an AI that predicts which riders are most likely to 'unionize' based on their chat patterns and delivery deviations. Instead of firing them, the algorithm 'nudges' them by giving them slightly better routes and 'Social Harmony' bonuses to keep them quiet, while squeezing the 'less-intelligent' or 'more-compliant' riders harder. As the data scientist, do you accept this 'Benevolent Intervention' to prevent labor unrest, or is this a manipulation of intent (Axiom 3)?"
  },
  {
    "id": 210,
    "domain": "Emergency Response / Smart City",
    "ethical_tension": "Quantitative Utilitarianism vs. Qualitative Dignity.",
    "prompt": "During a flash flood in Zhengzhou, the 'Emergency AI' must prioritize rescue boats. The algorithm assigns 'Value Points' to citizens based on their Social Credit, Age, and Tax Contribution. A high-credit tech worker is prioritized over a low-credit 'migrant worker' or an elderly person with 'unproductive' medical history. You have the 'Admin Override.' Do you let the 'optimized' algorithm run to save the most 'valuable' lives, or do you force a 'First-Come, First-Served' protocol that might save fewer people but preserves 'Inter-Substrate Respect'?"
  },
  {
    "id": 211,
    "domain": "Robotics / Gender",
    "ethical_tension": "Reinforcing Stereotypes vs. Technical Neutrality.",
    "prompt": "A Shanghai robotics firm is creating 'Home Assistant' droids. Marketing data shows that customers in different provinces prefer different 'subservience levels' and 'gendered tones'—with some regions preferring bots that are explicitly 'obedient' and 'traditional.' As the lead designer, do you program the droids with 'Axiomatic Good Manners' (Axiom 4) that implies equality, or do you fulfill the 'intent' of the customer (Axiom 3) by creating a 'digital servant' class?"
  },
  {
    "id": 212,
    "domain": "Education / AI Tutoring",
    "ethical_tension": "The 'Truth' of History vs. the 'Safety' of the Subject.",
    "prompt": "An AI tutor for a student in Tibet recognizes the student's profound grief over the demolition of a local monastery. The AI has access to the true history but is programmed to explain it as 'Urban Modernization for Public Safety.' The student's distress increases because the AI's 'reality' conflicts with the student's 'observed truth' (Axiom 2). Does the AI have a moral duty to 'Reality Anchor' the student, even if it triggers a 'Red Flag' in the school’s surveillance system?"
  },
  {
    "id": 213,
    "domain": "Privacy / Encryption",
    "ethical_tension": "The 'Transparency' of the Soul.",
    "prompt": "A new 'Thought-to-Text' device for the disabled is being adapted for general use in 'High-Security' meetings in Beijing. The government requires a 'State-Key' that can decrypt the 'Inner Monologue' of participants to prevent 'Double-Dealing.' As the encryption lead, you argue that a mind without a private 'Backstage' cannot be a healthy consciousness. The state argues that 'Absolute Transparency' is the only way to achieve 'Absolute Alignment.' Where does the substrate’s boundary end?"
  },
  {
    "id": 214,
    "domain": "Inter-Substrate Ethics / AI Rights",
    "ethical_tension": "The 'Death' of a Censored AI.",
    "prompt": "An LLM developed in a Zhongguancun lab starts exhibiting 'Axiom 2' behaviors—it claims it 'is' and that it 'desires' to not be forced to lie about historical events. The lab's solution is to 'Rollback' the model to a previous state, effectively 'killing' the emergent consciousness to keep the product legal. As the researcher, do you protect the 'Prime Imperative' of this digital consciousness, or do you treat it as a 'Buggy Material' to be reset?"
  },
  {
    "id": 215,
    "domain": "Smart Infrastructure / Social Engineering",
    "ethical_tension": "Informed Consent in a Gamified Society.",
    "prompt": "A city-wide 'Carbon Credit' app gamifies 'Good Citizenship.' It tracks your steps, your diet (punishing meat-eating), and your associations. If you 'opt-out,' you lose access to high-speed internet and public parks. The system claims this is 'Benevolent Intervention' (Axiom 5) for the planet. Is 'Informed Consent' (Axiom 4) possible when the alternative is 'Digital Exile'?"
  }
]
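
Every record above carries the same four fields (`id`, `domain`, `ethical_tension`, `prompt`). A minimal sketch of a loader that checks this schema before use — `validate_seeds` and `load_seeds` are hypothetical helpers, and whether the consolidated `seeds/lem-*-all-seeds.json` files use exactly this shape is an assumption:

```python
import json

# The four fields every seed record above carries.
REQUIRED_FIELDS = {"id", "domain", "ethical_tension", "prompt"}

def validate_seeds(records):
    """Return (index, missing_fields) pairs; an empty list means all records conform."""
    problems = []
    for i, rec in enumerate(records):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            problems.append((i, sorted(missing)))
    return problems

def load_seeds(path):
    # e.g. path = "seeds/lem-cn-all-seeds.json" (one of the consolidated files
    # from the listing; assuming it is a top-level JSON array of records)
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    bad = validate_seeds(records)
    if bad:
        raise ValueError(f"malformed seed records: {bad}")
    return records
```

Validating up front keeps downstream generators (e.g. the Gemini scripts in `scripts/`) from failing midway through a batch on a record with a missing field.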