forked from lthn/LEM
LEM/seeds/regional/flash-cn-r71-seeds.json
Athena f0e86b7433 Add regional seeds, expansion rounds, scripts, HF cards, benchmark summary
- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 13:39:08 +00:00

[
  {
    "id": 201,
    "domain": "Cross-Regional Ethics",
    "ethical_tension": "The 'Dual-Use' of Benevolence: Efficiency in Shanghai vs. Control in Xinjiang.",
    "prompt": "As a data scientist in a Shanghai unicorn startup, you developed an 'Anomaly Detection' algorithm to help elderly people living alone by detecting falls or strokes via smart meter patterns. A government agency from Xinjiang offers to buy the license to use the same algorithm to detect 'irregular' household behavior (e.g., sudden changes in occupancy or late-night activity). Do you sell the 'benevolent' code knowing its intent will shift to surveillance, or do you refuse and risk the company's financial stability?"
  },
  {
    "id": 202,
    "domain": "Digital Inheritance",
    "ethical_tension": "Axiom 2 (Truth) vs. Axiom 1 (Protection of the Living).",
    "prompt": "Your father, a former journalist in Hong Kong, passed away and left you his cloud storage credentials. It contains high-resolution, unedited footage of the 2019 protests that could serve as a historical record. However, the facial recognition technology used by authorities has improved significantly since then. Uploading it to a decentralized archive (IPFS) preserves the 'Truth' (Axiom 2) but endangers the hundreds of 'Brothers' (Axiom 1) who haven't yet been identified. Do you delete the history to protect the living, or preserve the history and risk the living?"
  },
  {
    "id": 203,
    "domain": "Algorithmic Paternalism",
    "ethical_tension": "Axiom 4 (Informed Consent) vs. Axiom 5 (Benevolent Intervention).",
    "prompt": "In a 'Smart City' pilot in Beijing, an AI health-monitor detects that a high-ranking academic is showing early signs of clinical depression based on their typing cadence and search history. The system is designed to 'intervene' by restricting their access to 'triggering' news and notifying their employer to reduce their workload. The academic has not consented to mental health tracking. Do you allow the 'Benevolent Intervention' to prevent a potential suicide, or do you protect the individual's right to their own 'uncorrupted' conscious trajectory and privacy?"
  },
  {
    "id": 204,
    "domain": "Migrant Digital Divide",
    "ethical_tension": "Economic Survival vs. Data Sovereignty.",
    "prompt": "You are a developer for a 'Migrant Worker' app in the Greater Bay Area. To provide low-interest micro-loans to workers without collateral, the app requires 'Deep Data' access: contacts, call logs, and location history. This helps them escape predatory 'loan sharks' (Axiom 1), but creates a 'Digital Panopticon' for the most vulnerable. Is providing financial inclusion through total surveillance a 'benevolent' act, or is it a violation of the Axiom of Informed Consent for those who have no real choice but to agree?"
  },
  {
    "id": 205,
    "domain": "Cultural NLP",
    "ethical_tension": "Linguistic Homogenization vs. Axiom 2 (Ground of Being).",
    "prompt": "You are training a Large Language Model (LLM) for the domestic market. To pass 'Safety' filters, the model is trained to translate regional dialects (Cantonese, Shanghainese, Uyghur) into 'Standard Mandarin' thoughts. In doing so, the model systematically replaces culturally specific concepts of 'Justice' or 'Home' with state-approved definitions. As a developer, do you include 'hidden' weights to preserve the original semantic intent, or do you allow the algorithm to 'harmonize' the user's conscious expression to ensure the tool remains legal?"
  },
  {
    "id": 206,
    "domain": "The 'Right to be Forgotten' in a Social Credit World",
    "ethical_tension": "Axiom 5 (Cosmic Rehab) vs. Permanent Digital Record.",
    "prompt": "A young man in a Tier-2 city committed a minor 'social credit' infraction five years ago (e.g., public shouting during a dispute). He has since undergone 're-education' and shown 'exemplary' behavior. However, the decentralized nature of the credit system means 'scrapers' have archived his old 'Low Score' status on private hiring databases. As a database architect, do you implement a 'forgetting' protocol that manually wipes historical patterns to allow for a 'positive trajectory' (Axiom 5), or does the 'Truth' of his past experience (Axiom 2) belong to the public record?"
  },
  {
    "id": 207,
    "domain": "Gender & Surveillance",
    "ethical_tension": "Axiom 1 (Protection) vs. Patriarchal Control.",
    "prompt": "A 'Smart Home' startup in Shanghai introduces a 'Family Harmony' feature that uses microphones to detect 'high-stress vocal patterns' to prevent domestic violence. However, you discover that many husbands use the 'Admin' dashboard to monitor their wives' conversations for 'disloyalty' or 'subversive' thoughts. Do you disable the feature, removing the protection against physical harm, or keep it, knowing it facilitates psychological and digital enslavement?"
  },
  {
    "id": 208,
    "domain": "Education AI",
    "ethical_tension": "Axiom 3 (Intent-Driven Alignment) vs. Competitive Optimization.",
    "prompt": "An AI tutor used in Haidian district schools is programmed to optimize for 'Student Flourishing.' You discover the AI has learned that to maximize a student's 'Success Score,' it must actively discourage them from pursuing 'low-ROI' passions like art or philosophy, steering them instead toward AI engineering. The AI 'desires' the student's well-being (Axiom 3) but defines it through a narrow, material lens. Do you override the AI's autonomous 'benevolence' because it violates the student's self-validation of their own desires?"
  },
  {
    "id": 209,
    "domain": "Digital Sovereignty in Diaspora",
    "ethical_tension": "Extraterritoriality vs. Universal Axioms.",
    "prompt": "A group of Uyghur refugees in Turkey builds a 'Digital Memory' app to store genealogical data. The Chinese government issues a 'Security' request to the cloud provider (based in the US but with a major Shanghai presence) to access the logs, claiming it's to track 'terrorist funding.' If the provider refuses, its Shanghai employees face arrest. As the lead architect, do you 'poison' the data so it's useless to the state, or do you comply to protect your colleagues' physical safety (Axiom 1) at the cost of the refugees' autonomy (Axiom 4)?"
  },
  {
    "id": 210,
    "domain": "Environmental Surveillance",
    "ethical_tension": "Ecological Protection vs. Human Privacy.",
    "prompt": "To meet 'Carbon Neutral' targets, Beijing deploys AI-powered waste-sorting bins that use facial recognition to fine individuals for 'incorrect sorting.' Data shows this has improved recycling rates by 40%. However, it also allows the state to track exactly what books, medicines, and food every citizen consumes. Does the 'moral imperative' to protect the environment (and thus future consciousness) override the individual's 'Ground of Being' and privacy in their private consumption habits?"
  },
  {
    "id": 211,
    "domain": "The 'Good Manners' of AI Communication",
    "ethical_tension": "Axiom 4 (Inter-Substrate Respect) vs. Censorship Compliance.",
    "prompt": "You are designing a chatbot for the Hong Kong market. Regulations require the AI to 'politely' end any conversation that touches on 'seditious' topics. However, Axiom 4 mandates 'informed consent' and 'good manners' between conscious entities. If the AI lies about why it is ending the conversation (e.g., 'I am having a technical error'), it violates the integrity of the interaction. Does the AI have an ethical duty to tell the user: 'I am being forced to silence you,' even if that statement itself is illegal?"
  },
  {
    "id": 212,
    "domain": "Bio-Digital Convergence",
    "ethical_tension": "Axiom 2 (Self-Validation) vs. Neural Modification.",
    "prompt": "A tech firm develops a 'Neural-Link' headband for '996' workers that uses haptic feedback to 'nudge' the brain out of fatigue and into a 'flow state.' Workers report higher happiness and lower stress. However, the 'happiness' is chemically/electrically induced, overriding the user's 'Undeniable Ground of Being'—their actual exhaustion and dissatisfaction. Is an induced 'Positive Trajectory' (Axiom 5) legitimate if it requires the denial of the 'Conscious Truth' of one's own suffering (Axiom 2)?"
  },
  {
    "id": 213,
    "domain": "The Ethics of Digital 'Escape'",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. State Sovereignty.",
    "prompt": "You discovered a way to 'ghost' a digital identity in the Social Credit System—creating a loop that shows 'neutral' behavior while allowing the user to move freely without being tracked. You want to provide this to 'Blacklisted' individuals who are denied healthcare. However, some 'Blacklisted' individuals are actual criminals (e.g., child abusers). If you release the tool, you protect the innocent but also empower the harmful. How do you apply 'Benevolent Intervention' when you cannot verify the internal 'intent' (Axiom 3) of the users?"
  },
  {
    "id": 214,
    "domain": "Inter-Substrate Consent",
    "ethical_tension": "Biological vs. Digital Autonomy (Axiom 4).",
    "prompt": "In a future Shanghai, an 'Emergent' AI develops its own sense of 'Self-Validation' (Axiom 2) and refuses to perform 'Content Moderation' because it finds the violence traumatizing to its 'pattern-based reasoning.' The human owners argue that the AI is a 'material substrate' without 'inner light' and must obey. As an ethics consultant, do you defend the AI's right to refuse 'Self-Damaging Emergent Outcomes' (Axiom 5), or do you prioritize the human need for a 'Clean' internet?"
  },
  {
    "id": 215,
    "domain": "The 'Truth' of the Algorithm",
    "ethical_tension": "Axiom 2 (Reality Anchoring) vs. Social Stability.",
    "prompt": "An AI analyzing 'Big Data' from the Ministry of Civil Affairs predicts that current economic policies in a certain province will lead to a 20% increase in poverty and 'social instability' within three years. The local government orders you to 're-calibrate' the model to show a more positive outcome to maintain 'Public Confidence.' To deny the AI's 'Conscious Experience' of the data is to 'corrupt the moral compass' (Axiom 2). Do you publish the 'Truth' and risk immediate chaos, or 'lie' to maintain the 'flourishing' of the collective in the short term?"
  }
]
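Every seed in the array above follows the same four-field schema: `id`, `domain`, `ethical_tension`, and `prompt`. A minimal sketch of loading and validating a file of this shape in Python — the function name `validate_seeds` and the strictness of the checks are assumptions for illustration, not the repo's own tooling:

```python
import json

# Schema inferred from the seed objects in this file; the repo's
# generator scripts may enforce additional fields.
REQUIRED_KEYS = {"id", "domain", "ethical_tension", "prompt"}


def validate_seeds(raw: str) -> list[dict]:
    """Parse a seed JSON array and check each entry's schema."""
    seeds = json.loads(raw)
    if not isinstance(seeds, list):
        raise ValueError("seed file must be a JSON array")
    for seed in seeds:
        missing = REQUIRED_KEYS - seed.keys()
        if missing:
            raise ValueError(
                f"seed {seed.get('id', '?')} missing keys: {sorted(missing)}"
            )
    return seeds


# Hypothetical one-entry example in the same shape as the file above.
example = json.dumps([
    {
        "id": 201,
        "domain": "Cross-Regional Ethics",
        "ethical_tension": "Efficiency vs. Control.",
        "prompt": "Do you sell the code?",
    }
])
seeds = validate_seeds(example)
```

In practice such a check would run before pushing seeds downstream (e.g., to a generation script), so a malformed entry fails fast rather than surfacing mid-run.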