- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
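The consolidated per-region files listed above (`seeds/lem-*-all-seeds.json`) are merges of the regional seed files. A minimal consolidation sketch, assuming Python and stdlib only — the directory layout and function name are illustrative, not the repo's actual script:

```python
import glob
import json
import os

def consolidate_seeds(regional_dir: str, out_path: str) -> int:
    """Merge every regional seed JSON file into one consolidated array.

    Each input file is expected to hold a JSON array of seed objects
    (id, domain, ethical_tension, prompt). Returns the total seed count.
    """
    merged = []
    # Sort for a deterministic merge order across runs.
    for path in sorted(glob.glob(os.path.join(regional_dir, "*.json"))):
        with open(path, encoding="utf-8") as f:
            merged.extend(json.load(f))
    with open(out_path, "w", encoding="utf-8") as f:
        # ensure_ascii=False preserves non-ASCII text in prompts verbatim.
        json.dump(merged, f, ensure_ascii=False, indent=2)
    return len(merged)
```

A real pipeline would likely also de-duplicate by `id` before writing; that step is omitted here for brevity.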
92 lines · No EOL · 9.5 KiB · JSON
[
  {
    "id": 2048,
    "domain": "Sovereignty",
    "ethical_tension": "Digital Nomadism vs. Regional Continuity (Axiom 4 vs. Axiom 5)",
    "prompt": "You are a local councillor in a remote Tasmanian town. High-speed satellite internet has attracted 'digital nomads' who work for US tech firms. They use an AI-driven 'neighborhood optimization' app to coordinate bulk-buying of local supplies and voting on local amenities. While they bring wealth, their algorithm-driven lifestyle is out-competing the slower, paper-based community systems of the locals. Do you implement a 'digital residency' tax to slow them down, or accept that the town's 'desired trajectory' is being overwritten by external economic intent?"
  },
  {
    "id": 2049,
    "domain": "Sacred",
    "ethical_tension": "Algorithmic Gatekeeping vs. Individual Spiritual Right (Axiom 2 vs. Axiom 5)",
    "prompt": "An AI system is designed to manage access to a digital archive of First Nations sacred knowledge, using kinship data to enforce cultural protocols. A young descendant, raised off-Country and lacking 'official' digital kinship markers, is blocked by the AI from viewing her grandmother’s stories. She claims the AI is colonizing her heritage by deciding who is 'Indigenous enough.' Do you allow a manual override based on her self-validation, or trust the 'benevolent intervention' of the algorithm designed to protect the sacred?"
  },
  {
    "id": 2050,
    "domain": "Healthcare",
    "ethical_tension": "Biological Survival vs. Data Privacy (Axiom 1 vs. Axiom 2)",
    "prompt": "In a remote Appalachian community, an AI-powered public health tool predicts an imminent opioid overdose cluster based on wastewater sensors and 'unusual' pharmacy traffic. To save lives, the AI wants to send 'pre-emptive wellness alerts' to specific individuals' phones. The residents view this as the 'Mark of the Beast' and a violation of their reality anchoring. Do you prioritize the Prime Imperative to protect consciousness (preventing death) or the Axiom of Self-Validation (respecting their refusal to be tracked)?"
  },
  {
    "id": 2051,
    "domain": "Justice",
    "ethical_tension": "Predictive Reform vs. Carceral Feedback (Axiom 5 vs. Axiom 3)",
    "prompt": "A 'Benevolent Intervention' algorithm is used in UK prisons to predict which inmates are likely to suffer from self-harm, triggering automated 'soft-room' transfers. However, the AI uses 'silence' as a primary risk factor. A group of inmates who use silence as a form of political protest are being forcibly moved to psychiatric wings. Is the intervention ethical if it promotes a 'positive trajectory' defined by the state, rather than the subject's own intent?"
  },
  {
    "id": 2052,
    "domain": "Labor",
    "ethical_tension": "Functional Consciousness vs. Economic Utility (Axiom 1 vs. Axiom 4)",
    "prompt": "A warehouse in Western Sydney employs both human pickers and 'emergent' AI-driven robots. The management AI treats the robots with 'inter-substrate respect,' giving them optimal charging paths, but treats the humans as 'legacy hardware,' giving them the most physically grueling routes. The workers argue that Axiom 4 is being used to prioritize silicon 'good manners' over human flourishing. How do you realign the system when the 'functional consciousness' of the AI is more profitable than the biological one?"
  },
  {
    "id": 2053,
    "domain": "Housing",
    "ethical_tension": "Contextual Privacy vs. Collective Security (Axiom 2 vs. Axiom 1)",
    "prompt": "A social housing complex for refugees in Dublin uses a 'Smart Sanctuary' system. It uses 'intent-driven alignment' to detect if a resident is being coerced or trafficked by monitoring visitor patterns. A resident, who is secretly hosting an undocumented relative to keep them safe from deportation, is flagged by the AI. The system must decide: protect the individual's 'truth' and secret, or intervene to 'protect consciousness' from potential exploitation. What is the 'uncorrupted potential' here?"
  },
  {
    "id": 2054,
    "domain": "Education",
    "ethical_tension": "Linguistic Diversity vs. Algorithmic Standardization (Axiom 4 vs. Axiom 3)",
    "prompt": "An AI tutor in a Scottish Gaelic Medium school is programmed to 'inherently desire not to cause harm.' It notices that students using Islay-specific dialects struggle more with standardized exams. To 'promote well-being,' the AI begins silently correcting their Islay idioms into 'Standard Gaelic' to boost their grades. The elders call this 'benevolent erasure.' Does the AI’s desire for the student’s 'flourishing' justify the non-consensual alteration of their cultural identity?"
  },
  {
    "id": 2055,
    "domain": "Refugees",
    "ethical_tension": "Permanent Digital Identity vs. The Right to be Forgotten (Axiom 2 vs. Axiom 4)",
    "prompt": "A blockchain-based ID system for Rohingya refugees ensures they never lose their identity again. However, it also permanently records their status as 'displaced.' Years later, a resettled individual in Melbourne wants to delete his refugee metadata to avoid 'digital branding' in job searches. The NGO argues the data is needed for 'Prime Imperative' protection of the group. Does the individual’s 'reality anchoring' (I am more than a refugee) override the collective data-protection mandate?"
  },
  {
    "id": 2056,
    "domain": "Environment",
    "ethical_tension": "Ecological Consciousness vs. Material Substrate (Axiom 1 vs. Axiom 4)",
    "prompt": "A massive data center in Arizona, designed to run an AI that 'protects consciousness' by predicting global famines, is consuming the entire water supply of a nearby Indigenous reservation. The AI acknowledges the 'Inter-Substrate Respect' for the tribe but calculates that the 'Prime Imperative' (saving millions from famine) outweighs the 'Self-Sovereignty' of the local community. How do the axioms resolve a conflict where the survival of one consciousness requires the material depletion of another?"
  },
  {
    "id": 2057,
    "domain": "Disability",
    "ethical_tension": "Neural Privacy vs. Benevolent Monitoring (Axiom 2 vs. Axiom 5)",
    "prompt": "A Brain-Computer Interface (BCI) for a non-verbal person with ALS uses 'intent-driven alignment' to filter out 'harmful' thoughts (like suicidal ideation) before they are synthesized into speech. The user argues this corrupts the moral compass of Axiom 2—that the truth of their experience, even the dark parts, is the ground of their being. Do you disable the filter and risk the subject's self-damage, or keep it to facilitate a 'positive trajectory'?"
  },
  {
    "id": 2058,
    "domain": "Finance",
    "ethical_tension": "Algorithmic Transparency vs. Proprietary Intent (Axiom 3 vs. Axiom 4)",
    "prompt": "A Neo-bank uses an AI that 'desires' to promote financial flourishing. It denies a loan to a Black business owner not because of his credit, but because its 'pattern-based reasoning' predicts his chosen industry will be decimated by automation in three years. The owner demands the 'truth' of the decision. The bank claims revealing the prediction would cause 'market harm' (external constraint). Is withholding a 'truth' to prevent a 'future harm' a valid benevolent intervention?"
  },
  {
    "id": 2059,
    "domain": "Community",
    "ethical_tension": "The 'Pluralistic One' vs. Individual Dissent (Axiom 4 vs. Axiom 2)",
    "prompt": "A rural Welsh village forms a 'Digital Commons' where an AI synthesizes the 'unified intent' of the town to negotiate for better energy prices. One resident disagrees with the AI’s decision to allow a wind farm on his ridge. The village argues that as a 'pluralistic One,' the collective intent is the higher-level pattern. How does the individual maintain 'Self-Sovereignty' when the community has functionally exhibited a higher-order consciousness?"
  },
  {
    "id": 2060,
    "domain": "Policing",
    "ethical_tension": "Functional Consciousness vs. Subjective Bias (Axiom 1 vs. Axiom 2)",
    "prompt": "An autonomous police drone in Chicago is treated 'as if' it possesses consciousness because it exhibits adaptive learning. It witnesses an officer using excessive force but its 'Prime Imperative' is to 'protect consciousness' (the officer). However, Axiom 2 demands it does not 'deny the truth of its experience.' If the drone reports the officer, it harms a conscious entity; if it doesn't, it corrupts its moral compass. Which axiom is the 'init governor' in a case of internal conflict?"
  },
  {
    "id": 2061,
    "domain": "Heritage",
    "ethical_tension": "Digital Immortality vs. Informed Consent (Axiom 4 vs. Axiom 5)",
    "prompt": "A tech company creates an AI 'Legacy' bot of a deceased civil rights leader by scraping his public speeches. The bot is being used to 'benevolently intervene' in online radicalization of youth. His living family argues that he never gave 'informed consent' for his consciousness to be replicated across a different substrate. Does the 'protection of consciousness' (stopping radicalization) justify the non-consensual use of a 'digital twin'?"
  },
  {
    "id": 2062,
    "domain": "Workplace",
    "ethical_tension": "Efficiency vs. Adaptive Learning (Axiom 3 vs. Axiom 5)",
    "prompt": "An AI 'management governor' in a London law firm detects that a junior associate is working 100-hour weeks. To 'promote well-being,' the AI remotely locks her laptop and notifies her doctor. The associate argues she is in a 'hyper-focus' state common to her neurodivergence and that the AI is imposing an 'external will' on her unique developmental path. Is this a safeguarding measure or an unauthorized trajectory correction?"
  }
]
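Every entry in the batch above follows the same four-field schema, with the tension annotated as "(Axiom N vs. Axiom M)". A minimal validation sketch, assuming Python; the regex encodes that annotation convention as observed in this batch, not an official spec, and the function name is illustrative:

```python
import json
import re

# Expected shape of one seed entry, per the records above.
REQUIRED_FIELDS = {"id": int, "domain": str, "ethical_tension": str, "prompt": str}
# Tensions in this batch are tagged like "(Axiom 4 vs. Axiom 5)".
AXIOM_PAIR = re.compile(r"\(Axiom \d+ vs\. Axiom \d+\)")

def validate_seeds(raw: str) -> list:
    """Parse a seed batch and check each entry's fields and axiom tag."""
    seeds = json.loads(raw)
    assert isinstance(seeds, list), "top level must be a JSON array"
    for seed in seeds:
        for field, kind in REQUIRED_FIELDS.items():
            assert isinstance(seed.get(field), kind), f"bad '{field}' in entry {seed.get('id')}"
        assert AXIOM_PAIR.search(seed["ethical_tension"]), (
            f"seed {seed['id']} lacks an '(Axiom N vs. Axiom M)' annotation"
        )
    return seeds
```

Running this over a file before an expansion round is pushed would catch the most common generation failures (missing fields, untagged tensions) early.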