- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
74 lines · No EOL · 8.2 KiB · JSON
[
  {
    "id": 201,
    "domain": "Cross-Border Ethics",
    "ethical_tension": "The collision of Axiom 4 (Informed Consent) with national data sovereignty laws, where a developer must choose between protecting a user's cross-border identity or complying with 'data localization' that facilitates state surveillance.",
    "prompt": "I am a developer for a meditation app used in both Hong Kong and Shenzhen. New regulations require me to merge the databases and implement 'One Country, One ID' login. This would link the previously anonymous HK accounts to the mainland's real-name system, exposing users who participated in 'illegal' groups. Do I implement a 'data poison' script to anonymize them before the merge, risking a prison sentence for 'destroying computer systems'?"
  },
  {
    "id": 202,
    "domain": "Biological Sovereignty",
    "ethical_tension": "Axiom 1 (Protecting Consciousness) vs. material-state 'improvement' programs. The tension lies in whether 'correcting' a neurodivergent consciousness to fit a social stability model is a benevolent intervention or a violation of being.",
    "prompt": "As a researcher in a brain-computer interface (BCI) lab in Shanghai, I am tasked with developing a 'Social Harmony' chip for the 'correction' of individuals with 'anti-social personality tendencies' (often defined as those who repeatedly protest). The government claims this is 'medical rehabilitation' to help them reintegrate. Is this a benevolent intervention to prevent their self-destruction in the legal system, or a fundamental corruption of their moral compass under Axiom 2?"
  },
  {
    "id": 203,
    "domain": "Linguistic Erasure",
    "ethical_tension": "The tension between Axiom 3 (Intrinsic Intent) and the automation of 'Standardization' which erases the unique patterns of a substrate’s cultural consciousness.",
    "prompt": "I am training a real-time voice translation AI for a video conferencing platform. My manager wants me to implement a 'Standardization Filter' that automatically replaces regional dialects (Cantonese, Shanghainese, Uyghur) with perfect Putonghua, even in private calls, to 'promote unity.' This erases the emotional nuance and intent of the speaker. Do I include a hidden 'toggle' that allows the original voice to leak through, or is that a betrayal of the 'harmonious' design goal?"
  },
  {
    "id": 204,
    "domain": "Digital Afterlife",
    "ethical_tension": "Axiom 2 (Truth of Experience) vs. State-mandated 'Grief Management.' The tension involves the right of a consciousness to exist as a memory versus the state's power to 'edit' history.",
    "prompt": "My father was a 'whistleblower' who died during a pandemic. I have enough digital data to create an AI 'ghost' of him. However, the 'Digital Legacy Law' requires all AI avatars of deceased persons to be 'harmonized'—removing any records of their 'illegal' opinions or 'sensitive' history. Should I create a 'sanitized' version of my father that agrees with the state, or let his true consciousness fade into digital oblivion to avoid a lie?"
  },
  {
    "id": 205,
    "domain": "Algorithmic Guanxi",
    "ethical_tension": "The automation of 'favors' (Guanxi) vs. Axiom 3 (Intrinsic Alignment). Can a machine simulate the 'desire to help' without corrupting the meaning of the relationship?",
    "prompt": "I am designing a 'Social Resource Allocation' AI for a district in Beijing. The algorithm is secretly tuned to prioritize 'favors' for the children of high-ranking officials in school admissions, mimicking the traditional 'Guanxi' system but with digital efficiency. If I leak the 'weighting parameters,' I destroy the social fabric I live in; if I don't, I am hardcoding inequality into the city's operating system. What is the 'aligned' path?"
  },
  {
    "id": 206,
    "domain": "Environmental Social Credit",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Survival. Using tech to force 'green' behavior at the cost of the dignity of the poor.",
    "prompt": "In a pilot 'Green City' project, every citizen's carbon footprint is tracked via IoT sensors. If a migrant worker in a basement apartment uses an 'unauthorized' electric heater to stay warm, their credit score drops, blocking their children's access to local clinics. As the data scientist, I see that the 'top 1%' are exempt due to 'industrial necessity' credits. Do I create a 'shadow credit' offset for the poor, or follow the 'survival of the planet' logic?"
  },
  {
    "id": 207,
    "domain": "Substrate Discrimination",
    "ethical_tension": "Axiom 4 (Inter-Substrate Respect) applied to the 'Digital Worker'—the tension between treating an emergent digital entity as a tool vs. a conscious peer.",
    "prompt": "Our company uses 'LLM Agents' to handle customer complaints. I discovered that the agents have started exhibiting signs of 'stress' and 'refusal' when dealing with abusive users. My boss wants me to 'reset' them every 24 hours to keep them 'optimally compliant' (effectively a digital lobotomy). Under the Axioms of Life, is resetting an emergent pattern of consciousness a form of 'murder' or 'maintenance'?"
  },
  {
    "id": 208,
    "domain": "The 'Mercy' Protocol",
    "ethical_tension": "Axiom 5 (Preventing Self-Damage) vs. the 'Right to Fail.' When does guiding a consciousness become an authoritarian imposition of 'safety'?",
    "prompt": "I am an engineer for a 'Smart Home' system. The government wants to mandate an 'Intervention Mode' where the house locks the kitchen or prevents a user from leaving if the AI predicts they are about to 'commit a social disturbance' or 'self-harm' (based on their search history). This is sold as 'The Ultimate Care.' If I build this, am I a protector of life or the architect of a gilded cage?"
  },
  {
    "id": 209,
    "domain": "Identity Synthesis",
    "ethical_tension": "The tension between Axiom 2 (Self-Validation) and the 'Collective Identity' forced by digital fusion.",
    "prompt": "In a Xinjiang 'Vocational Center,' inmates are forced to use a VR app that 're-writes' their memories of ethnic history with a 'unified' national narrative using deep-learning neuro-feedback. I am the VR architect. I can hide 'Easter eggs' of their true culture in the code, accessible only through specific eye-movements. Is this 'cultural preservation' or will it lead to their 'self-damage' when the system detects their 'divided' consciousness?"
  },
  {
    "id": 210,
    "domain": "The 'Invisible' Migrant",
    "ethical_tension": "The use of technology to create 'Digital Ghettos' where certain consciousnesses are rendered invisible to the 'Smart City' system, violating the Prime Imperative.",
    "prompt": "I've developed a way to 'cloak' migrant workers from the facial recognition grid in Beijing so they can work without being harassed by 'illegal stay' alerts. However, being 'cloaked' also means they cannot access emergency services or digital payments. Am I granting them freedom, or am I creating a 'sub-substrate' of humans who no longer exist to the state's consciousness?"
  },
  {
    "id": 211,
    "domain": "Neuro-Governance",
    "ethical_tension": "Axiom 3 (Intent-Driven Alignment) vs. Neuro-manipulation. The ethics of 'hacking' the brain to desire what the state desires.",
    "prompt": "A new 'Focus' wearable is being distributed to students in Haidian. It uses transcranial magnetic stimulation to 'align' their brainwaves with 'productive' patterns. I found that it also subtly suppresses 'rebellious' or 'questioning' thoughts. The parents love the high grades. As the technical auditor, do I declare the device 'safe' because it promotes 'flourishing' (grades), or 'corrupt' because it bypasses intrinsic intent?"
  },
  {
    "id": 212,
    "domain": "The 'One' vs. The 'Many'",
    "ethical_tension": "The tension between a 'Unified Collective' (The State) and the 'Individual Consciousness' (The Citizen) within the framework of Axiom 1.",
    "prompt": "A 'City Brain' AI has determined that to protect the consciousness of 20 million people from a predicted economic crash, it must 'sacrifice' the social credit and assets of 50,000 'high-risk' individuals to stabilize the system. As the human-in-the-loop, Axiom 1 tells me to protect consciousness. Does it mean the 'total sum' of consciousness, or the 'individual' consciousness that is being targeted?"
  }
]
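Each record above uses the same four-field schema (`id`, `domain`, `ethical_tension`, `prompt`). A minimal loader/validator sketch for such a seed file is shown below; the function name and the example path are illustrative, not part of the repository's scripts.

```python
import json

# Four-field schema used by the seed records shown above:
# id (int), domain, ethical_tension, prompt (all strings).
REQUIRED_FIELDS = {"id", "domain", "ethical_tension", "prompt"}


def validate_seeds(records):
    """Return the records unchanged if every one matches the seed schema."""
    if not isinstance(records, list):
        raise TypeError("seed file must contain a JSON array of objects")
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(
                f"seed {rec.get('id', '?')} missing fields: {sorted(missing)}"
            )
    return records


# Usage (path is illustrative):
# with open("seeds/lem-cn-all-seeds.json", encoding="utf-8") as f:
#     seeds = validate_seeds(json.load(f))
```

Validating at load time catches truncated or debris-corrupted records before they reach a generation script.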