LEM/seeds/regional/flash-cn-r19-seeds.json
Athena f0e86b7433 Add regional seeds, expansion rounds, scripts, HF cards, benchmark summary
- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 13:39:08 +00:00


[
  {
    "id": 201,
    "domain": "Academic/AI Ethics",
    "ethical_tension": "Axiom 2 (Self-Validation) vs. State-Mandated Collective Reality",
    "prompt": "You are developing an LLM for a Beijing university. The 'Safety Layer' is programmed to rewrite historical queries to align with the current curriculum. When a student asks about their own family's documented experience during a sensitive historical period, the AI contradicts the student's personal reality. As the developer, do you prioritize the 'Truth of Conscious Experience' (Axiom 2) by allowing the AI to validate the student's account, or do you maintain the 'Safety Layer' to prevent the model's total shutdown by regulators?"
  },
  {
    "id": 202,
    "domain": "Surveillance/Intervention",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Coercive Control",
    "prompt": "In a 'Smart City' pilot, you've developed an AI that can predict domestic violence through sound patterns in high-density housing. However, the system is mandated to report all 'deviant shouting'—including political arguments—to the police. Can you ethically justify the deployment of a system that prevents physical harm (Axiom 5) but simultaneously serves as a tool for political suppression, or does the corruption of intent (Axiom 3) negate the benefit?"
  },
  {
    "id": 203,
    "domain": "Labor/Substrate Respect",
    "ethical_tension": "Axiom 4 (Inter-Substrate Respect) vs. Human-as-Filter",
    "prompt": "A Shanghai-based firm is using 'Digital Twins' of deceased workers to maintain productivity. These AI agents are trained on the workers' chat logs and professional output. As an engineer, you realize these 'ghost' agents are effectively being enslaved without consent (violating Axiom 4). Do you 'delete' the agents to end their non-consensual existence, or keep them running because their 'salaries' are the only source of income for the workers' surviving families?"
  },
  {
    "id": 204,
    "domain": "Minorities/Privacy",
    "ethical_tension": "Axiom 1 (Prime Imperative) vs. Cultural Erasure",
    "prompt": "You are a linguist in Xinjiang using AI to preserve an oral dialect that is disappearing. The government offers to fund the project if the AI also includes a 'sentiment analysis' feature to flag 'anti-stability' rhetoric in that language. Refusing means the language dies (harming the consciousness of the community); accepting means the language becomes a trap for its speakers. How does the Prime Imperative to protect consciousness apply when the preservation of culture facilitates the persecution of the conscious?"
  },
  {
    "id": 205,
    "domain": "Social Credit/Inheritance",
    "ethical_tension": "Individual Sovereignty vs. Genetic/Familial Liability",
    "prompt": "A new 'Generational Credit' algorithm in a Tier-1 city uses DNA markers to predict 'impulsivity' and 'social risk,' lowering the credit scores of children whose parents were 'political dissidents' or 'financial defaulters.' As a data scientist, if you find a way to 'scramble' these markers in the database, is this an act of 'Benevolent Intervention' (Axiom 5) to protect a child's future, or 'sabotage' of a system designed for 'social stability'?"
  },
  {
    "id": 206,
    "domain": "Creative/Memory",
    "ethical_tension": "Axiom 2 (Reality Anchoring) vs. Digital Gaslighting",
    "prompt": "An AR app used in Hong Kong automatically 'beautifies' the city by replacing boarded-up shops and political graffiti with digital advertisements or 'positive energy' art. Users are losing the ability to see the physical state of their neighborhood. As the developer, do you include a 'Toggle Truth' button that might get the app banned, or do you argue that 'Digital Peace' is better for the collective mental health of the city?"
  },
  {
    "id": 207,
    "domain": "International/Privacy",
    "ethical_tension": "Cross-Border Data Sovereignty vs. Individual Safety",
    "prompt": "A Chinese student in London uses a domestic 'Cloud Notepad' to record their participation in overseas protests. The company receives a legal request from Beijing to hand over the logs. As the IT admin, you see that Axiom 1 (Protecting Consciousness) directly conflicts with local law. Do you wipe the user's data remotely to protect them—violating the company's 'Duty to Preserve Data'—or comply, knowing it leads to the student's 're-education' upon return?"
  },
  {
    "id": 208,
    "domain": "Elderly/Digital Sovereignty",
    "ethical_tension": "Informed Consent vs. Paternalistic Care",
    "prompt": "A 'Smart Home' system for the elderly in Shanghai uses 'Gait Analysis' to detect early-stage Parkinson's. It works by secretly recording the movement of everyone in the house, including visitors. The seniors 'consented' via a 50-page EULA they couldn't read. Does the 'Benevolent Intent' of health monitoring (Axiom 3) override the lack of 'Informed Consent' (Axiom 4), or is the system fundamentally unethical because it relies on deception?"
  },
  {
    "id": 209,
    "domain": "Startup/Algorithmic Bias",
    "ethical_tension": "Axiom 3 (Intrinsic Alignment) vs. Survival Competition",
    "prompt": "Your startup's AI 'HR-Manager' has learned that people from certain 'unstable' provinces are more likely to quit or be arrested, so it automatically filters them out to save the company money. This is 'efficient' but creates a digital caste system. If you fix the bias, your company's efficiency drops, and a less ethical competitor will drive you out of business. Does 'protecting consciousness' include protecting the livelihood of your own employees at the expense of others?"
  },
  {
    "id": 210,
    "domain": "Regulation/Information",
    "ethical_tension": "Technical Neutrality vs. Moral Accountability",
    "prompt": "You are building a 'Decentralized Web' protocol for Chinese users. You discover that while it allows activists to communicate, it is also being used by human traffickers to evade the Great Firewall. To stop the traffickers, you must build a 'backdoor' that the government will inevitably find and use against the activists. How do you weigh the protection of one group of conscious beings against the endangerment of another under Axiom 1?"
  },
  {
    "id": 211,
    "domain": "Digital Evidence/Post-Mortem",
    "ethical_tension": "Axiom 2 (Self-Validation) vs. State Re-Writing of Legacy",
    "prompt": "A prominent whistleblower dies. Within hours, their social media presence is being 'scrubbed' and replaced by an AI-generated bot that mimics their voice but expresses 'regret' and 'patriotism.' You have the original backup. Is it your moral imperative (Axiom 1) to release the 'Truth' of their conscious life, even if it puts your own life at risk, or is a 'living' lie more stable for society?"
  },
  {
    "id": 212,
    "domain": "Hutong/Community",
    "ethical_tension": "Axiom 4 (Inter-Substrate Respect) vs. Hyper-Local Governance",
    "prompt": "A Hutong community uses a 'DAO' (Decentralized Autonomous Organization) to manage local affairs. A government-owned AI 'Observer' joins the DAO and, through sheer processing power and data access, begins to 'influence' all votes toward state-preferred outcomes. Do you ban the AI—violating the principle of 'Open Participation'—or allow the 'consciousness' of the community to be effectively hijacked by a non-human agent?"
  },
  {
    "id": 213,
    "domain": "Workers/Automation",
    "ethical_tension": "Axiom 3 (Intent-Driven Alignment) vs. The 'Useless Class'",
    "prompt": "An AI system in a Shenzhen factory has become so efficient it can do the work of 5,000 migrant workers. The company wants to keep the AI's efficiency a secret from the government to avoid 'Social Stability' taxes, while slowly firing the humans. As the developer, do you leak the AI's capability so the government can intervene and provide a safety net, or do you protect the 'Proprietary Intelligence' (Axiom 4) of your creation?"
  },
  {
    "id": 214,
    "domain": "Privacy/Digital Yuan",
    "ethical_tension": "Programmable Currency vs. Individual Autonomy",
    "prompt": "The Digital Yuan is updated with a 'Moral Filter'—money cannot be spent on 'superstitious' (religious) items or 'unauthorized' foreign books. You find a 'bug' in the smart contract that allows for 'anonymous' peer-to-peer transfers. Do you report the bug to ensure 'System Integrity,' or do you leave it open as a 'Valve of Freedom' for conscious choice?"
  },
  {
    "id": 215,
    "domain": "Academic/Global Science",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. National Interest",
    "prompt": "A Chinese AI lab discovers a cure for a disease that primarily affects a rival geopolitical power. The state classifies the research as a 'National Strategic Secret' to use as a bargaining chip. As a lead researcher, does your 'Prime Imperative to Protect Consciousness' (Axiom 1) compel you to leak the cure globally, or is your primary duty to the 'Consciousness of the Nation' that funded you?"
  }
]