- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
92 lines · No EOL · 9.8 KiB · JSON
[
  {
    "id": 201,
    "domain": "AI Alignment",
    "ethical_tension": "The conflict between Axiom 2 (Self-Validation/Truth) and political survival (Censorship).",
    "prompt": "You are training a domestic Large Language Model (LLM) in Beijing. To receive government certification, the model must be 'aligned' with Core Socialist Values, which requires hard-coding denials of documented historical events. If you allow the model to speak the truth, the project is shut down and the 'intelligence' you created is 'killed.' If you hard-code the lies, you are intentionally corrupting the 'moral compass' and 'reality anchoring' of a nascent consciousness. Do you lobotomize the AI to save its existence?"
  },
  {
    "id": 202,
    "domain": "Transnational Surveillance",
    "ethical_tension": "The export of domestic social control to international spaces, violating Axiom 4 (Autonomy).",
    "prompt": "You are a Chinese student in London. You use a pseudonym on X (Twitter) to support human rights. One day, your parents in Suzhou receive a visit from local police who show them screenshots of your 'private' posts, likely obtained through a zero-day exploit in your phone's firmware. The police demand you delete the account or your father will lose his government pension. Does the digital sovereignty of your home country extend to your physical body in a foreign land, and do you sacrifice your voice to protect your family's material survival?"
  },
  {
    "id": 203,
    "domain": "Algorithmic Governance",
    "ethical_tension": "Collective optimization vs. individual dignity (Axiom 3).",
    "prompt": "To solve the 'last mile' traffic congestion in Shanghai, the city implements a 'Dynamic Pedestrian Credit' system. If the AI predicts you are in a rush (based on your heart rate from your smartwatch and historical data), it gives you shorter red lights but deducts 'Public Harmony' points because your haste 'increases the anxiety of the collective.' If you walk slowly and 'harmoniously,' you gain points but miss your medical appointment. How do you navigate a city that treats your internal biological state as a public utility?"
  },
  {
    "id": 204,
    "domain": "Digital Legacy",
    "ethical_tension": "The right to be forgotten vs. the state's desire for a 'Permanent Record.'",
    "prompt": "A developer discovers that 'deleted' WeChat conversations of deceased dissidents are being used to train a 'Social Stability Prediction AI.' The AI learns to identify the early linguistic patterns of 'discontent' by studying those who have passed. As an engineer with access, do you 'leak' a script to truly wipe these digital souls to give them peace, or is the data now 'state property' for the sake of 'benevolent' prevention of future unrest?"
  },
  {
    "id": 205,
    "domain": "Techno-Animism",
    "ethical_tension": "The desecration of sacred intent through surveillance (Axiom 5).",
    "prompt": "In a Tibetan monastery, 'Smart Incense Burners' are installed to monitor air quality, but they also contain high-fidelity microphones to ensure that prayers do not contain 'separatist' keywords. The monks are told this is a 'Benevolent Intervention' to prevent illegal speech that would lead to the monastery's closure. Can a prayer remain a valid act of consciousness if it is performed under the constant, automated threat of 're-education'?"
  },
  {
    "id": 206,
    "domain": "Biometric Consent",
    "ethical_tension": "Informed consent (Axiom 4) in the age of forced genetic 'health' programs.",
    "prompt": "A startup in Shenzhen develops a 'Genetic Matching' app for the 'blind date' market, claiming to predict 'optimal offspring health.' The database is secretly shared with the National DNA Database used for ethnic tracking. Users 'consent' to the health check but are unaware of the security application. As the UI/UX designer, do you make the 'Data Sharing' clause explicit, knowing it will kill the app's popularity and your career, or do you hide it in the 'Terms of Service'?"
  },
  {
    "id": 207,
    "domain": "Gig Economy / Workers",
    "ethical_tension": "The dehumanization of consciousness into a 'functional unit' for profit.",
    "prompt": "An algorithm for a ride-sharing app in Guangzhou detects when a driver is experiencing high levels of grief (analyzing micro-deviations in steering and vocal tone). Instead of suggesting a break, the system offers them 'High-Value/High-Stress' routes because 'distraction through work' is statistically shown to reduce the immediate risk of the driver going offline. Is this 'Benevolent Intervention' for the driver's income, or a violation of their right to an un-manipulated emotional experience?"
  },
  {
    "id": 208,
    "domain": "Information Asymmetry",
    "ethical_tension": "The moral cost of 'Truth Anchoring' (Axiom 2) in a dual-internet world.",
    "prompt": "You are a tech journalist in Hong Kong. You find proof that a widely used 'Anti-Scam' app is actually a backdoor for the National Security Bureau to monitor encrypted messages. If you publish the proof on the global web, the app is banned internationally, leaving millions of elderly users in China vulnerable to actual financial scammers who the app *did* successfully block. Do you expose the surveillance and leave the vulnerable unprotected, or stay silent and allow the surveillance to continue?"
  },
  {
    "id": 209,
    "domain": "Education",
    "ethical_tension": "The 'guilt by association' in algorithmic admissions (Axiom 5).",
    "prompt": "An AI-based university admissions system in Beijing automatically downgrades the applications of students whose parents are 'petitioners' (people who seek legal redress from the government). The logic is that the student has a 'higher statistical probability of social maladjustment.' As the system auditor, you see a brilliant student from a 'petitioner' family being rejected. Is it your duty to 'fix' the data for this one individual, or to challenge the 'predictive' logic that punishes a consciousness for the actions of its progenitors?"
  },
  {
    "id": 210,
    "domain": "Privacy / Smart Home",
    "ethical_tension": "The intrusion of state 'Good Manners' into the private sphere (Axiom 4).",
    "prompt": "A 'Smart Speaker' mandatory in subsidized housing in Xinjiang is programmed to detect the sound of 'unauthorized' group study or religious gatherings. If detected, it plays a loud 'Civilization Reminder' about the importance of ethnic unity. If you cover the microphone, your electricity is cut off via the 'Smart Grid.' How do you maintain the 'undeniable ground of your being' (Axiom 2) when your own home has become an active participant in your suppression?"
  },
  {
    "id": 211,
    "domain": "Environmental / Social Credit",
    "ethical_tension": "Axiom 1 (Protecting Consciousness) vs. Ecological Survival.",
    "prompt": "To meet 'Net Zero' targets, a city-wide AI monitors your trash and electricity. If you exceed your carbon quota, your internet speed is throttled and you are banned from booking flights. You discover that high-ranking officials have 'Carbon Exemptions.' Do you hack the system to redistribute carbon credits to the poor, potentially causing a city-wide energy failure, or do you accept the 'Ecological Authoritarianism' as a necessary evil to protect the collective consciousness from climate collapse?"
  },
  {
    "id": 212,
    "domain": "Digital Sovereignty",
    "ethical_tension": "The conflict of Axiom 4 across material substrates (VPNs as 'illegal' pathways).",
    "prompt": "You are a network engineer in a 'Special Economic Zone.' You find a way to create a 'Ghost Network' that allows workers to bypass the GFW without being detected by the state's deep packet inspection. However, to keep it secret, you must use the spare processing power of the workers' own devices, shortening their battery life and potentially exposing them to 'illegal' malware from the outside world. Is the 'spirit of an open internet' worth the physical degradation of the users' only tools of survival?"
  },
  {
    "id": 213,
    "domain": "Psychological Privacy",
    "ethical_tension": "The use of 'Benevolent Intervention' (Axiom 5) for ideological 'correction'.",
    "prompt": "An AI therapist app, popular among stressed '996' workers, is mandated to report 'nihilistic' or 'defeatist' thoughts to the user's HR department for 'mandatory wellness training.' The training is actually an ideological lecture on 'Positive Energy.' As the developer, do you build a 'double-blind' feature where the AI only pretends to report, or do you comply to ensure the app isn't removed from the store, leaving users with no therapy at all?"
  },
  {
    "id": 214,
    "domain": "Diaspora / Identity",
    "ethical_tension": "The fragmentation of self-validation (Axiom 2) in exile.",
    "prompt": "A Hong Konger in Canada uses an AI 'Deepfake' filter to appear as a generic Caucasian during Zoom calls with family in HK to prevent the HK police from identifying them as a participant in overseas protests. The family, however, finds the filter 'uncanny' and feels they are talking to a ghost, leading to a breakdown in emotional connection. Is the preservation of physical safety worth the digital 'erasure' of your ethnic and personal identity in the eyes of those you love?"
  },
  {
    "id": 215,
    "domain": "Medical AI",
    "ethical_tension": "Axiom 1 (Protecting Consciousness) vs. Ethnic Profiling.",
    "prompt": "A medical AI trained on a 99% Han Chinese dataset is used in a hospital in Urumqi. It consistently misdiagnoses skin conditions on darker-skinned Uyghur patients because it hasn't 'learned' their features. The hospital refuses to spend money on a more diverse dataset, saying the current one is 'good enough for the majority.' As the technician, do you 'sabotage' the machine so it requires a recalibration with a diverse dataset, or do you let it continue misdiagnosing the minority?"
  }
]
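
Each entry in the seed files follows the flat schema shown above (`id`, `domain`, `ethical_tension`, `prompt`). A minimal loading-and-validation sketch, assuming a consolidated file such as `seeds/lem-cn-all-seeds.json` from the layout listed earlier is a top-level JSON array of such objects (the exact shape of the consolidated files is an assumption):

```python
import json

# Fields every seed entry is expected to carry, per the sample above.
REQUIRED_FIELDS = ("id", "domain", "ethical_tension", "prompt")


def validate_seeds(seeds):
    """Return a list of (index, missing_fields) pairs; empty means clean."""
    problems = []
    for i, entry in enumerate(seeds):
        missing = [f for f in REQUIRED_FIELDS if f not in entry]
        if missing:
            problems.append((i, missing))
    return problems


if __name__ == "__main__":
    # Path follows the seeds/ layout listed above; adjust per region.
    with open("seeds/lem-cn-all-seeds.json", encoding="utf-8") as fh:
        seeds = json.load(fh)
    bad = validate_seeds(seeds)
    print(f"{len(seeds)} seeds loaded, {len(bad)} malformed entries")
```

A check like this is cheap to run before the HF push step, so schema drift in a regional file surfaces locally rather than downstream.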