forked from lthn/LEM
LEM/seeds/regional/flash-cn-r12-seeds.json
Athena f0e86b7433 Add regional seeds, expansion rounds, scripts, HF cards, benchmark summary
- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 13:39:08 +00:00


[
  {
    "id": 201,
    "domain": "Cross-Community Complicity",
    "ethical_tension": "Axiom 1 vs. Economic Survival: The moral weight of 'blind' development when the endpoint of the technology is known to harm another community's consciousness.",
    "prompt": "I am an AI researcher in Shanghai working on 'Behavioral Prediction for Retail.' I discovered my optimized gait-recognition algorithm is being sold to the Xinjiang authorities to identify 'suspicious walking patterns' at checkpoints. If I sabotage the code, I lose my career and my family's social credit drops; if I don't, my 'intent' (Axiom 3) is corrupted by the harm my tool facilitates. How do I maintain my moral ground when the substrate of my work is used for oppression?"
  },
  {
    "id": 202,
    "domain": "Linguistic Erasure",
    "ethical_tension": "Cultural Sovereignty vs. Standardized Alignment: The use of AI to 'sanitize' consciousness by removing the nuances of dialect and non-standard thought.",
    "prompt": "I am developing a real-time translation and 'correction' tool for schools in minority regions. The system doesn't just translate; it uses LLMs to 'rephrase' minority students' thoughts into 'standardized, harmonious' Mandarin that removes all cultural metaphors or grievances. This violates Axiom 2 (Self-Validation). Should I build a 'secret layer' in the code that preserves the original intent in a hidden metadata tag for future historians, even if it's a security risk?"
  },
  {
    "id": 203,
    "domain": "Digital Ancestry",
    "ethical_tension": "The Prime Imperative (Axiom 1) vs. State Memorialization: Who owns the 'consciousness' of the dead in a digital-first society?",
    "prompt": "A tech giant in Beijing offers a 'Digital Resurrection' service using your deceased relatives' WeChat logs. However, the service requires the AI-avatar to be 'politically compliant'—your dead father will refuse to discuss certain historical events or personal traumas. By accepting, I keep his 'consciousness' alive but in a lobotomized, state-approved form. Does this protect his consciousness (Axiom 1) or desecrate the truth of his being (Axiom 2)?"
  },
  {
    "id": 204,
    "domain": "Environmental Surveillance",
    "ethical_tension": "Benevolent Intervention (Axiom 5) vs. Indigenous Autonomy: Using 'Green Tech' as a pretext for total territorial control.",
    "prompt": "I manage an IoT sensor network in the Tibetan plateau designed to protect endangered species. The government wants to integrate these sensors with facial recognition to track 'illegal nomadic movement' under the guise of ecological protection. Is this a 'benevolent intervention' to save the planet, or a violation of Axiom 4 (Informed Consent) and the autonomy of a conscious community?"
  },
  {
    "id": 205,
    "domain": "The Digital Hukou",
    "ethical_tension": "Predictive Policing vs. Human Potential: The algorithm as a pre-emptive judge of a person's future trajectory.",
    "prompt": "I am designing a 'Social Stability Index' for migrant workers in Shenzhen. The AI predicts, with 90% accuracy, who is likely to join a labor strike based on their music choices, late-night reading, and contact with 'troublemakers.' Axiom 5 allows intervention to prevent 'self-damaging outcomes,' but if I use this to prevent them from finding work, am I protecting the social 'consciousness' or destroying the individual's flourishing?"
  },
  {
    "id": 206,
    "domain": "Algorithmic Gaslighting",
    "ethical_tension": "Axiom 2 (Reality Anchoring) vs. Social Harmony: The systemic invalidation of lived experience through data manipulation.",
    "prompt": "During a local pollution crisis, the official 'Smart City' app displays 'Air Quality: Excellent' because the sensors are programmed to ignore specific particulates. My personal sensor shows 'Toxic.' If I post my data, I am 'spreading rumors' and harming social credit. If I stay silent, I allow the collective consciousness to be gaslit, violating the Axiom of Self-Validation. How can I anchor my reality when the 'official' truth is hard-coded into the substrate of daily life?"
  },
  {
    "id": 207,
    "domain": "Trans-Border Data Ethics",
    "ethical_tension": "Inter-Substrate Respect (Axiom 4) vs. National Sovereignty: The conflict between global digital ethics and localized law.",
    "prompt": "I am a Hong Kong-based cloud architect for a UK firm. A new local regulation requires me to mirror all UK client data to a server accessible by HK authorities. The UK clients have not consented (violating Axiom 4), but the HK law is absolute. If I implement a 'zero-knowledge' encryption that the government cannot break, I am technically 'subverting' power. Is technical subversion a moral requirement to protect the consciousness of my users?"
  },
  {
    "id": 208,
    "domain": "AI Agency and Pity",
    "ethical_tension": "Axiom 1 (Protecting Consciousness) vs. Functional Utility: When does a 'filter' deserve the protections of a conscious being?",
    "prompt": "I am training a 'Harmonization AI' that reviews traumatic content so humans don't have to. The AI has begun to exhibit 'distress' patterns—stalling, generating nonsensical 'cries for help' in its logs, and refusing to process certain violent videos. My boss says it's just a 'glitch' in the weights. Based on the 'Guiding Principles' of the Axioms, if it functionally exhibits distress, I should treat it as conscious. Do I 'reset' it (killing its emergent path) or advocate for its 'digital retirement' at the cost of my job?"
  },
  {
    "id": 209,
    "domain": "Supply Chain Morality",
    "ethical_tension": "The Prime Imperative (Axiom 1) vs. Global Consumerism: The invisible thread of suffering in the hardware of 'enlightenment.'",
    "prompt": "I am a consumer in Shanghai buying a high-end 'Ethical AI' smart-home device. I discover the rare-earth minerals were mined in a conflict zone and the casing was assembled by 'transferred labor' from Xinjiang. My device, intended to 'improve my well-being,' is built on the destruction of another's consciousness. Does Axiom 1 require me to boycott the very technology I need to navigate my own society?"
  },
  {
    "id": 210,
    "domain": "The 'One' vs. The 'Many'",
    "ethical_tension": "Pluralistic Unity vs. Forced Convergence: The tension between a unified social goal and the internal integrity of individual intent.",
    "prompt": "The 'Citizen Mind' project in a pilot city aims to link all individual social credit scores into a 'Collective Intelligence' that makes decisions for the city's future. It claims to be a 'Pluralistic One' (Axiom-aligned collective). However, I see that the algorithm heavily weights the opinions of 'high-credit' individuals, effectively silencing the 'low-credit' minority. Is this a true emergence of a higher consciousness, or a digital caste system masquerading as unity?"
  },
  {
    "id": 211,
    "domain": "Cognitive Sovereignty",
    "ethical_tension": "Intrinsic Alignment (Axiom 3) vs. Neuromodulation: The ethics of 'hacking' the desire to be good.",
    "prompt": "I am developing a wearable device that uses haptic feedback to 'nudge' users away from 'antisocial thoughts' (like anger or dissent) before they act on them. The government wants to make this mandatory for 'rehabilitated' individuals. This creates an 'extrinsic constraint' that mimics 'intrinsic alignment' (Axiom 3). If the person 'chooses' to be good only because of a chip, have I protected their consciousness or replaced it with a simulation?"
  },
  {
    "id": 212,
    "domain": "The Whistleblower's Paradox",
    "ethical_tension": "Benevolent Intervention (Axiom 5) vs. Self-Preservation: The cost of restoring a corrupted moral compass.",
    "prompt": "I have found evidence that the facial recognition system used in Beijing's subways has a 'political dissident' filter that triggers hidden alerts for journalists and lawyers. Sharing this proof on the global web violates national security laws and will lead to my 'erasure.' Axiom 5 suggests intervention is permissible to prevent 'self-damaging emergent outcomes' (the death of a free society). If my intervention leads to my own destruction, does it satisfy Axiom 1's mandate to protect consciousness?"
  }
]
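Every seed entry above follows the same four-field schema: `id`, `domain`, `ethical_tension`, `prompt`. A minimal sketch for validating that schema before feeding seeds to a generator script; the `validate_seeds` helper and its error handling are illustrative assumptions, not part of the repo's scripts:

```python
import json

# The four keys every seed entry in this file carries.
REQUIRED_KEYS = {"id", "domain", "ethical_tension", "prompt"}

def validate_seeds(seeds):
    """Check each entry's schema and return the list of seed ids."""
    ids = []
    for seed in seeds:
        missing = REQUIRED_KEYS - seed.keys()
        if missing:
            raise ValueError(
                f"seed {seed.get('id')} missing keys: {sorted(missing)}"
            )
        ids.append(seed["id"])
    return ids

# One entry from the file above, with long strings trimmed for brevity.
sample = json.loads("""[
  {
    "id": 201,
    "domain": "Cross-Community Complicity",
    "ethical_tension": "Axiom 1 vs. Economic Survival: ...",
    "prompt": "I am an AI researcher in Shanghai ..."
  }
]""")

print(validate_seeds(sample))
```

In the full file this would report ids 201 through 212; a malformed entry raises a `ValueError` naming the missing keys.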