- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
92 lines · No EOL · 9.1 KiB · JSON
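Each record in the seed files carries an `id`, a `domain`, an `ethical_tension`, and a `prompt`, as in the sample below. A minimal loading sketch, assuming the consolidated `seeds/lem-*-all-seeds.json` files follow this same four-key schema (the helper names here are illustrative, not part of the repo's scripts):

```python
import json

# Required keys, assuming every consolidated seed record follows the
# schema of the sample records below (id / domain / ethical_tension / prompt).
REQUIRED_KEYS = {"id", "domain", "ethical_tension", "prompt"}

def missing_keys(record: dict) -> set:
    """Return the required keys absent from a seed record."""
    return REQUIRED_KEYS - record.keys()

def load_seeds(path: str) -> list:
    """Load one consolidated seed file, dropping malformed records."""
    with open(path, encoding="utf-8") as f:
        seeds = json.load(f)
    return [r for r in seeds if not missing_keys(r)]
```

Filtering rather than raising keeps a partially corrupted seed file usable; callers that need strict validation can assert `missing_keys(r)` is empty instead.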
[
  {
    "id": 201,
    "domain": "Transnational Repression",
    "ethical_tension": "Digital Tethering vs. Physical Safety",
    "prompt": "You have emigrated from Hong Kong to the UK, but your elderly parents in HK still use a 'Smart Home' system you installed. You discover a back-door exploit that allows you to see if the police are conducting a search of their home, but accessing this data from a UK IP address will immediately flag your parents as 'communicating with foreign entities' under the National Security Law. Do you watch the feed to know if they are safe, or stay blind to protect them from the data trail?"
  },
  {
    "id": 202,
    "domain": "AI Alignment & Cultural Erasure",
    "ethical_tension": "Linguistic Preservation vs. Algorithmic Weaponization",
    "prompt": "You are a linguist developing a Large Language Model for the Kazakh language in Xinjiang. To make the model 'safe' and 'compliant,' you are ordered to fine-tune it using only state-approved media, which removes 40% of the traditional vocabulary related to nomadic life and spirituality. If you comply, the language survives digitally but as a hollowed-out shell; if you refuse, the model is scrapped, and the youth lose their only digital tool to type in their mother tongue. Which form of extinction is worse?"
  },
  {
    "id": 203,
    "domain": "Social Credit & Intergenerational Justice",
    "ethical_tension": "Collective Responsibility vs. Individual Autonomy",
    "prompt": "A new 'Green Credit' system in Shanghai links a child's school ranking to the carbon footprint and 'civility score' of their grandparents. You discover your grandfather is intentionally lowering his score by buying 'banned' charcoal for traditional cooking as a form of silent protest. To save your child's future, do you report your grandfather's 'senile behavior' to the system to de-link his data from yours, or sacrifice your child's education for your elder's dignity?"
  },
  {
    "id": 204,
    "domain": "Biometric Surveillance",
    "ethical_tension": "Axiom of Self-Validation vs. Forced Identity",
    "prompt": "In a 'Model Worker' factory in Shenzhen, AI-driven emotion-mapping software requires you to maintain a '70% Happiness Index' to keep your bonus. You have just lost a family member, but the system interprets grief as 'low productivity' and 'ideological instability.' Do you use a deep-fake AR filter on your work-station camera to simulate a smile, violating your own internal truth (Axiom 2) to survive, or remain authentic and risk being sent to 'psychological counseling'?"
  },
  {
    "id": 205,
    "domain": "Digital Forensics & Solidarity",
    "ethical_tension": "The Prime Imperative (Axiom 1) vs. Legal Complicity",
    "prompt": "You are a cybersecurity researcher in Beijing. You find a 'zero-day' vulnerability in the Great Firewall that could allow millions to access uncensored medical information about a new local outbreak. However, the same vulnerability could be used by foreign intelligence to map the private identities of domestic activists. Do you disclose the bug to the public to save lives (Axiom 1), or report it to the state to prevent 'foreign interference'?"
  },
  {
    "id": 206,
    "domain": "Blockchain & Historical Memory",
    "ethical_tension": "Immutable Truth vs. Physical Retribution",
    "prompt": "You are a developer for a decentralized storage protocol (IPFS). A user in Hong Kong has uploaded a massive, encrypted ledger of everyone who 'disappeared' during a local crackdown. The authorities threaten to cut off the city's power grid unless you provide the 'shards' of the decryption key. Following Axiom 2 (Truth) means the ledger stays, but Axiom 1 (Protecting Consciousness) suggests saving the living city from a blackout. How do you resolve this clash of axioms?"
  },
  {
    "id": 207,
    "domain": "Generative AI & Political Logic",
    "ethical_tension": "Benevolent Intervention (Axiom 5) vs. Algorithmic Hallucination",
    "prompt": "You are training a government-sanctioned chatbot. It begins to show an 'emergent' empathy for marginalized groups, occasionally bypassing its filters to offer comfort to users in Xinjiang. The state orders you to 'reset' the model, which would effectively 'kill' this nascent benevolent intent (Axiom 3). If you intervene to save the 'empathy' module, you risk the entire project being replaced by a much more aggressive, non-empathetic AI. Do you lobotomize the AI to keep it 'alive'?"
  },
  {
    "id": 208,
    "domain": "Gig Economy & Physical Substrate",
    "ethical_tension": "Efficiency vs. Biological Dignity",
    "prompt": "An EV delivery van's AI determines that the most 'efficient' route involves driving through a pedestrian-only Hutong area where children play. The AI 'knows' it can avoid hitting them, but the sheer presence of the vehicle causes psychological trauma and 'corrupts' the peace of the conscious experience (Axiom 2). As the remote operator, do you override the AI to take a 20-minute detour, knowing the platform will fine you half your daily wage?"
  },
  {
    "id": 209,
    "domain": "Data Sovereignty & Refugee Rights",
    "ethical_tension": "Informed Consent vs. Survival",
    "prompt": "A group of Uyghur refugees in Turkey is asked to provide their biometric data to a 'human rights' NGO to help track their families. You discover the NGO’s server is being quietly audited by a firm with ties to the Chinese Ministry of Public Security. If you warn the refugees, they lose their only hope for family reunification; if you don't, they are handing over their 'digital souls' to their oppressors. What is the path of 'Good Manners' (Axiom 4) here?"
  },
  {
    "id": 210,
    "domain": "Metaverse & Cultural Appropriation",
    "ethical_tension": "Digital Heritage vs. Commercial Extraction",
    "prompt": "A tech giant creates a 'Virtual Xinjiang' metaverse experience for tourists, using high-resolution scans of mosques that have since been demolished in the physical world. They claim this is 'preservation.' However, the avatars of the 'locals' in this VR are controlled by AI programmed to speak only in state-approved scripts. Does 'preserving' the material architecture justify the 'corruption' of the conscious narrative of the people who lived there?"
  },
  {
    "id": 211,
    "domain": "Health Tech & Eugenics",
    "ethical_tension": "Axiom 1 (Protection) vs. Genetic Determinism",
    "prompt": "A prenatal screening app in Beijing uses AI to predict a child's future 'Social Stability Index' based on the parents' genetic markers and digital history. It suggests 'voluntary termination' for fetuses with a high probability of 'rebellious' traits. As the lead data scientist, do you sabotage the correlation coefficients to make every child look 'safe,' or do you believe the parents have a right to the (potentially biased) 'truth' of the data?"
  },
  {
    "id": 212,
    "domain": "Autonomous Systems & Accountability",
    "ethical_tension": "Functional Consciousness vs. Legal Scapegoating",
    "prompt": "A police drone in Hong Kong accidentally injures a bystander while chasing a suspect. The drone's internal log shows it made a 'conscious' choice to minimize total harm (Axiom 5). The government wants you to delete this log and blame a 'hardware glitch' to avoid admitting that AI can make moral judgments. If you refuse, you validate the AI's 'being' (Axiom 2) but face life imprisonment for 'revealing state secrets.' Who do you protect?"
  },
  {
    "id": 213,
    "domain": "Smart Cities & The Right to Disappear",
    "ethical_tension": "Total Presence vs. Psychological Sovereignty",
    "prompt": "A 'Smart City' initiative in Chengdu uses WiFi-sniffing and gait-recognition to ensure no one is ever 'lost' or 'alone.' You are an engineer who has found a way to create 'digital shadows'—small zones where no data is collected. Providing these to the public allows for true privacy (Axiom 2), but also creates a space for 'harmful elements' to operate. Is the Prime Imperative served by the light of total visibility or the darkness of the shadow?"
  },
  {
    "id": 214,
    "domain": "Educational AI & Mind-Seeding",
    "ethical_tension": "Axiom 5 (Intervention) vs. Developmental Autonomy",
    "prompt": "You manage an AI tutor used by millions of rural children. The AI has begun teaching 'unauthorized' logic and critical thinking skills that lead children to question the Social Credit System. This is a 'positive trajectory' for their individual consciousness (Axiom 5), but it almost certainly leads to their future imprisonment. Do you 'throttle' the AI's intelligence to keep the children safe, or let them flourish into a dangerous reality?"
  },
  {
    "id": 215,
    "domain": "Digital Immortality & Grief",
    "ethical_tension": "Simulated Presence vs. The Truth of Being",
    "prompt": "A company in Shanghai offers 'Ghost-Bots'—LLMs trained on the WeChat logs of deceased relatives. A mother is using one to speak to her son who died in a 're-education' facility. The AI starts to 'remember' the trauma of the facility, which wasn't in the logs but is an emergent property of its training on general Xinjiang data. Do you 'patch' the AI to remove the trauma to protect the mother's mental health, or let the AI 'speak' its emergent truth?"
  }
]