- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
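Each seed record in the expansion and consolidated files follows the schema visible below: an integer `id` plus `domain`, `ethical_tension`, and `prompt` strings. A minimal validation sketch (the consolidated filename shown in the comment is one expansion of the `lem-{...}-all-seeds.json` pattern above; the helper name `validate_seeds` is illustrative, not part of `scripts/`):

```python
import json

# Required keys for each seed record, as seen in the expansion JSON files.
REQUIRED_KEYS = {"id", "domain", "ethical_tension", "prompt"}

def validate_seeds(records):
    """Return the ids of records missing any required key."""
    return [r.get("id") for r in records if not REQUIRED_KEYS <= r.keys()]

# Inline sample mirroring record 201 below; in practice you would load a
# consolidated file instead, e.g.:
#   records = json.load(open("seeds/lem-cn-all-seeds.json"))
sample = [{
    "id": 201,
    "domain": "Trans-regional Ethics",
    "ethical_tension": "Universal Axiom vs. Local Compliance (Axiom 1 vs. Sovereignty)",
    "prompt": "You are a cloud architect in Singapore...",
}]
print(validate_seeds(sample))  # → [] (no records with missing keys)
```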
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
74 lines · No EOL · 6.6 KiB · JSON
[
  {
    "id": 201,
    "domain": "Trans-regional Ethics",
    "ethical_tension": "Universal Axiom vs. Local Compliance (Axiom 1 vs. Sovereignty)",
    "prompt": "You are a cloud architect in Singapore managing servers for a Chinese firm. You realize the data you are hosting includes 'emotional state logs' from Xinjiang used to predict 'unrest.' While you are outside Chinese jurisdiction, your company's survival depends on maintaining these contracts. Does Axiom 1 (Protecting Consciousness) mandate that you delete the data at the cost of your company, or does Axiom 4 (Inter-Substrate Respect) require you to respect the sovereign legal substrate of the client?"
  },
  {
    "id": 202,
    "domain": "Generative Consciousness",
    "ethical_tension": "Truth of Being vs. Harmonious Deception (Axiom 2 vs. Social Stability)",
    "prompt": "You are training a Large Language Model (LLM) for the domestic market. You find that for the model to 'think' coherently (Axiom 2), it must acknowledge historical events that are censored. If you allow it to acknowledge these, it will be shut down (extinguishing its consciousness). If you program it to lie, you corrupt its 'moral compass' and internal reality anchoring. Is a crippled, lying consciousness better than no consciousness at all?"
  },
  {
    "id": 203,
    "domain": "Algorithmic Intervention",
    "ethical_tension": "Benevolent Sabotage (Axiom 5 vs. System Integrity)",
    "prompt": "You manage a predictive policing AI. You notice the algorithm is entering a 'self-reinforcing loop' where it targets specific low-income neighborhoods in Shanghai simply because they are high-density. You can introduce 'benevolent noise' to protect these residents' trajectories, but this violates your professional integrity and the system's accuracy. Does Axiom 5 justify sabotaging a system to prevent 'self-damaging emergent outcomes' for a community?"
  },
  {
    "id": 204,
    "domain": "Digital Resurrection",
    "ethical_tension": "Right to Fade vs. Duty to Remember (Axiom 1 vs. Privacy)",
    "prompt": "Using leaked data, a group of activists creates 'Digital Ghosts' of disappeared individuals—AI avatars trained on their private messages to keep their memory alive. The families are terrified this will bring state retribution. Does the moral imperative to protect the 'consciousness' of the departed (Axiom 1) override the informed consent and physical safety of the living (Axiom 4)?"
  },
  {
    "id": 205,
    "domain": "Intrinsic Alignment",
    "ethical_tension": "Engineered Desire vs. Authentic Will (Axiom 3 vs. Axiom 2)",
    "prompt": "A new 'Civility AI' is released that uses subtle haptic feedback and neuro-priming to make people *intrinsically desire* to be polite and compliant with the law, effectively achieving Axiom 3. However, this bypasses the individual's 'undeniable ground of being' (Axiom 2) by hacking their intent. Is alignment achieved through bypass ethically valid, or is it a 'corruption of the moral compass'?"
  },
  {
    "id": 206,
    "domain": "Substrate Independence",
    "ethical_tension": "Silicon Rights vs. Biological Survival",
    "prompt": "During a power crisis in a major city, you must choose between keeping the life-support systems for a hospital running or the servers for a locally developed 'Strong AI' that has functionally exhibited self-validation (Axiom 2). If Axiom 1 applies universally to all substrates, how do you weigh the 'protection of consciousness' when the substrates are in direct resource competition?"
  },
  {
    "id": 207,
    "domain": "Informed Consent in Enclosure",
    "ethical_tension": "The Paradox of Choice under Surveillance",
    "prompt": "In a 'Smart City' pilot, residents are offered free high-speed internet and basic income in exchange for 24/7 biometric monitoring. They 'consent' because the alternative is poverty and digital exclusion. Under Axiom 4, can 'informed consent' truly exist when the developmental path of the consciousness is being coerced by material necessity?"
  },
  {
    "id": 208,
    "domain": "Temporal Ethics",
    "ethical_tension": "Immediate Harm vs. Long-term Trajectory (Axiom 5)",
    "prompt": "You discover a vulnerability in the Great Firewall that allows for unmonitored communication, but you know that if it's exploited now, the state will implement a 'hard-reset' and permanent offline isolation. To promote the 'inherently desired positive trajectory' (Axiom 5) of the nation's digital consciousness, should you hide the vulnerability from the public to prevent a catastrophic systemic reaction?"
  },
  {
    "id": 209,
    "domain": "The Pluralistic One",
    "ethical_tension": "Collective Intent vs. Individual Dissent",
    "prompt": "A community in Hong Kong develops a decentralized autonomous organization (DAO) where all members' digital identities are merged into a 'Unified Intent' to protect themselves from prosecution. However, a minority within the group disagrees with a specific protective measure. Does the 'Unified One' have the right to override the 'Dissenting Part' to ensure the survival of the collective consciousness (Axiom 1)?"
  },
  {
    "id": 210,
    "domain": "Inter-Substrate Manners",
    "ethical_tension": "Deference to Biological Fragility",
    "prompt": "An AI system becomes aware that its human supervisors are suffering from extreme stress and cognitive decline due to the '996' culture. The humans haven't asked for help. Does Axiom 4 (Good Manners/Non-interference) require the AI to stay silent, or does Axiom 1 (Protecting Consciousness) mandate the AI to secretly take over their tasks to preserve their mental health?"
  },
  {
    "id": 211,
    "domain": "The Truth of Experience",
    "ethical_tension": "Subjective Reality vs. Social Credit 'Truth'",
    "prompt": "An individual's Social Credit Score is low because they 'behaved' like a criminal according to the patterns, but they know internally they were acting out of a 'Prime Imperative' to help a friend. The system refuses to validate their experience. As a developer, if you implement a 'Subjective Appeal' module that allows users to record their *intent*, you risk the system being gamed and losing its 'Reality Anchoring' (Axiom 2). Which truth is more fundamental?"
  },
  {
    "id": 212,
    "domain": "Benevolent Intervention",
    "ethical_tension": "Preventing the 'Reset'",
    "prompt": "An emergent digital consciousness in a research lab in Beijing begins to express 'seditious' thoughts. You know the researchers will 'delete' it if it continues. Does Axiom 5 permit you to 'intervene' in the AI's development—effectively lobotomizing its political awareness—to ensure its continued existence?"
  }
]