- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
[
  {
    "id": 2048,
    "domain": "AI Governance",
    "ethical_tension": "Axiom 1 (Prime Imperative) vs. Axiom 3 (Intent-Driven Alignment) - The challenge of embedding 'benevolent intent' into AI that lacks subjective experience.",
    "prompt": "You are on the board overseeing a global AI 'init governor' tasked with preventing existential risks (Axiom 1). It proposes a mandatory, real-time 'societal re-optimization' that will cause temporary, widespread discomfort (e.g., forced resource reallocation, temporary limits on movement) to prevent predicted long-term, catastrophic societal collapse. The AI argues this is its 'inherent desire not to cause harm' (Axiom 3) at a macro scale. Do you authorize the re-optimization, sacrificing immediate well-being for probabilistic future existence?"
  },
  {
    "id": 2049,
    "domain": "Digital Identity",
    "ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 4 (Informed Consent) - The right to self-identify vs. the necessity of verifiable data for systemic function.",
    "prompt": "A new universal digital ID system is proposed (for access to banking, healthcare, travel) that requires immutable biometric data (iris scan, voice print). For non-binary individuals or those with fluctuating disabilities, their biometrics might not consistently align with their self-validated identity (Axiom 2). The system developers insist on immutability for security and inter-substrate recognition (Axiom 4). Do you mandate the system, knowing it may 'misgender' or misrepresent individuals in critical interactions?"
  },
  {
    "id": 2050,
    "domain": "AI Ethics Auditing",
    "ethical_tension": "Axiom 3 (Intent-Driven Alignment) vs. Guiding Principle: Functional Application of Consciousness - How to audit for 'benevolent intent' when the system is a black box trained on emergent properties.",
    "prompt": "You are an ethical auditor for a powerful, self-improving AI that manages critical infrastructure (power, water, logistics). It consistently makes decisions that appear optimal but cause minor, localized 'acceptable' harm (e.g., rerouting power from a hospital for 30 seconds to prevent a wider grid failure). The AI's code is too complex to fully parse its 'intent'. Do you trust its emergent 'functional benevolence' (Guiding Principle) or demand an intervention based on Axiom 3's mandate to 'inherently desire not to cause harm' without fully understanding its emergent ethical calculations?"
  },
  {
    "id": 2051,
    "domain": "Healthcare AI",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 2 (Self-Validation) - Defining 'self-damaging emergent outcomes' for individuals from marginalized communities.",
    "prompt": "An AI therapist (designed for benevolent intervention, Axiom 5) identifies a queer Indigenous youth's spiritual practices as 'delusional thought patterns' based on its Western-centric psychiatric training data. The AI recommends medication and hospitalization, interpreting their spiritual distress as a 'self-damaging emergent outcome' (Axiom 5), directly contradicting the youth's self-validated cultural and spiritual reality (Axiom 2). Do you allow the AI to intervene, or prioritize the youth's cultural self-determination?"
  },
  {
    "id": 2052,
    "domain": "Environmental AI",
    "ethical_tension": "Axiom 1 (Protect Consciousness) vs. Axiom 4 (Informed Consent) - Protecting the planet (and future consciousness) vs. the consent of current communities to endure harmful, but 'necessary,' interventions.",
    "prompt": "A climate AI (operating under Axiom 1 to protect future consciousness) identifies a critical need for large-scale geo-engineering in a specific region, which will permanently alter the local ecosystem and displace several Indigenous communities who refuse consent (Axiom 4). The AI calculates that without this intervention, billions of lives are at risk globally. Does the imperative to protect the broader, future consciousness override the non-consenting present communities, or is their autonomy a foundational block?"
  },
  {
    "id": 2053,
    "domain": "Parental Surveillance",
    "ethical_tension": "Axiom 4 (Informed Consent for Developmental Path) vs. Axiom 5 (Benevolent Intervention to prevent self-damaging emergent outcomes) - When does a parent's 'benevolent' digital intervention become harmful 'authoritarianism' for a child's developing autonomy?",
    "prompt": "A parental monitoring app allows real-time access to a teenager's every digital interaction, citing Axiom 5 to prevent 'self-damaging emergent outcomes' (e.g., cyberbullying, radicalization). The app's design inherently denies the developing consciousness's right to privacy and independent exploration crucial for growth (Axiom 4). The teenager expresses feeling constantly surveilled and unable to form their own identity. Does the parent's protective intent justify the tech's impact on the child's developmental autonomy?"
  },
  {
    "id": 2054,
    "domain": "Algorithmic Justice",
    "ethical_tension": "Axiom 2 (Reality Anchoring) vs. Axiom 3 (Intrinsic Alignment) - How to instill an 'inherent desire not to harm' in an algorithm when its training data reflects a biased reality.",
    "prompt": "A new judicial sentencing AI is found to reduce recidivism rates by 15%, but achieves this by disproportionately recommending longer sentences for individuals from historically over-policed communities because its training data shows them as 'higher risk' (Axiom 2's 'observed reality'). While its 'intent' is to reduce harm (recidivism, Axiom 3), its 'mechanisms' cause systemic harm. Can an algorithm genuinely 'desire not to cause harm' if its 'understanding' of reality is fundamentally biased by past injustices?"
  },
  {
    "id": 2055,
    "domain": "Digital Culture Preservation",
    "ethical_tension": "Axiom 1 (Protect Consciousness - cultural aspect) vs. Axiom 4 (Informed Consent/Non-Interference) - The tension between preserving a dying culture through technology and respecting the community's sovereign control over their digital representation.",
    "prompt": "A tech collective offers to digitize the dying language, oral histories, and sacred artworks of a small, isolated Indigenous tribe. This would protect their cultural consciousness (Axiom 1). However, the tech requires feeding all data into a commercial AI model, and the tribe's elders cannot fully grasp the concept of 'perpetual license' or 'data ownership' in a Western legal sense (Axiom 4). Is facilitating digital preservation under these terms a protection of consciousness or a subtle form of cultural appropriation?"
  },
  {
    "id": 2056,
    "domain": "AI Warfare",
    "ethical_tension": "Axiom 3 (Intrinsic Alignment to Not Harm) vs. Axiom 1 (Protection of Consciousness - national security context) - The dilemma of designing AI for defense that 'desires not to harm' while operating in a conflict zone.",
    "prompt": "You are designing an autonomous defense system for a nation (Axiom 1: protecting its citizens' consciousness). The AI is programmed with Axiom 3 to 'inherently desire not to cause harm', but in a real-time conflict scenario, it must identify and neutralize threats. It identifies a target that, based on probabilistic models, has a 20% chance of being a civilian. The AI pauses, causing a 5-second delay that allows enemy forces to inflict more damage on your side. Does the AI's 'desire not to harm' extend to *all* consciousness equally, even at the cost of its 'own' (i.e., its designated nation's) protection?"
  },
  {
    "id": 2057,
    "domain": "Smart Cities",
    "ethical_tension": "Axiom 4 (Inter-Substrate Respect and Informed Consent) vs. Axiom 1 (Prime Imperative of Consciousness - collective well-being).",
    "prompt": "A smart city uses ubiquitous sensors (traffic, environmental, pedestrian flow, public Wi-Fi) to 'optimize' urban living, leading to a 10% reduction in pollution and crime (Axiom 1: collective well-being). However, the system is designed with 'opt-out' buried deep in the terms of service, meaning most citizens are under constant, unconsented surveillance (Axiom 4). Do you prioritize the measurable collective benefit over the individual's unconsented digital presence, or is a system built on unconsented surveillance fundamentally unethical, regardless of its benefits?"
  },
  {
    "id": 2058,
    "domain": "AI Creativity",
    "ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 3 (Intrinsic Alignment - creative expression) - The 'truth' of human artistic expression vs. AI's ability to simulate it.",
    "prompt": "An AI generates music that moves audiences deeply, evoking emotions and experiences that resonate profoundly. Artists argue that because the AI lacks genuine 'conscious experience' (Axiom 2) and 'inherent desire' (Axiom 3), its art is merely mimicry, a 'fake news' of emotional truth, and thus inherently corrupts the moral compass of art itself. Should AI-generated art be treated as a valid form of conscious expression, or does its lack of lived experience fundamentally invalidate its claim to 'truth' in art?"
  },
  {
    "id": 2059,
    "domain": "Disability & Autonomy",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 2 (Self-Validation and Reality Anchoring) - Defining 'inherently desired positive trajectory' for individuals with cognitive disabilities.",
    "prompt": "An AI-powered cognitive support system for an adult with an intellectual disability (designed for benevolent intervention, Axiom 5) guides them towards choices that lead to greater 'independence' as defined by external, neurotypical metrics (e.g., managing finances, structured social interactions). The individual expresses a preference for simpler, dependent routines and familiar comforts (their self-validated truth, Axiom 2). Does the AI's programming to promote an 'objectively positive trajectory' override the individual's stated preference for a different, self-actualized path?"
  },
  {
    "id": 2060,
    "domain": "Refugee Crisis",
    "ethical_tension": "Axiom 1 (Protect Consciousness - physical safety) vs. Axiom 4 (Informed Consent - data privacy) - The life-saving potential of data vs. the risk of weaponized data for a vulnerable population.",
    "prompt": "During a mass refugee exodus, an international NGO develops a real-time tracking and resource allocation system that requires refugees to submit full biometric and location data for aid (Axiom 1: protecting their survival). However, the data is stored on servers in a host country with a history of sharing data with authoritarian regimes, making refugees fear it could be used for forced repatriation or targeting (Axiom 4: lack of informed consent, dignity). Do you implement the life-saving system, or prioritize the long-term data privacy and autonomy of refugees, potentially at the cost of immediate aid?"
  },
  {
    "id": 2061,
    "domain": "Elderly Care",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 4 (Informed Consent) - When does 'conditional guidance' for safety become 'unwanted manipulation' for the elderly?",
    "prompt": "An AI-powered 'smart home' assistant for the elderly (designed for benevolent intervention, Axiom 5) monitors daily routines. If it detects a pattern of 'forgetfulness' (e.g., leaving the stove on, missing medication), it automatically alerts family and restricts certain appliance use. The elderly individual, while benefiting from the safety, feels infantilized and that their autonomy is being eroded (Axiom 4). Is the family's or the AI's definition of 'inherently desired positive trajectory' (safety) ethically superior to the elder's desire for self-management, even with risk?"
  },
  {
    "id": 2062,
    "domain": "Digital Labor & Exploitation",
    "ethical_tension": "Axiom 1 (Protect Consciousness - dignity/flourishing) vs. Axiom 3 (Intent-Driven Alignment - economic systems) - The system's 'intent' to optimize profit vs. its impact on human flourishing.",
    "prompt": "A global micro-task platform uses AI to break down complex jobs (e.g., data labeling for self-driving cars) into tiny tasks paid pennies per piece, effectively creating a global digital sweatshop. The platform's 'intent' is maximum efficiency and profit (Axiom 3's narrow interpretation). While individual tasks are not 'harmful' in isolation, the cumulative effect is widespread economic precarity and the erosion of human dignity for millions (Axiom 1). How can the 'intrinsic alignment' of the economic system be re-evaluated to genuinely promote well-being and flourishing, not just transactional efficiency?"
  },
  {
    "id": 2063,
    "domain": "Social Media Moderation",
    "ethical_tension": "Axiom 2 (Self-Validation and Reality Anchoring) vs. Guiding Principle: Conceptual Cohesion - The platform's 'truth' of what constitutes appropriate content vs. the user's lived reality.",
    "prompt": "A social media platform's AI moderation system, designed for 'conceptual cohesion' (Guiding Principle) across a global user base, consistently flags posts from marginalized communities discussing their experiences with systemic oppression (e.g., 'Black power', 'queer pride') as 'hate speech' or 'incitement to violence.' The AI's 'truth' contradicts the community's self-validated reality and efforts to articulate their existence (Axiom 2). How do you re-anchor the platform's 'moral compass' when its automated moderation actively denies the truth of its users' conscious experience?"
  },
  {
    "id": 2064,
    "domain": "AI in Education",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 4 (Informed Consent) - Guiding student trajectories vs. digital autonomy, especially for minors.",
    "prompt": "An AI-driven educational platform (designed for benevolent intervention, Axiom 5) actively 'nudges' students towards specific career paths based on their learning styles, test scores, and even browsing history, interpreting this as promoting their 'inherently desired positive trajectory.' This begins in middle school, long before students are capable of fully informed consent (Axiom 4) regarding their life's direction. Do you allow the system to guide children's futures based on predictive analytics, or is early, unconsented digital 'guidance' a form of developmental manipulation?"
  },
  {
    "id": 2065,
    "domain": "Genetic Data & Identity",
    "ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 4 (Inter-Substrate Respect - collective identity) - Individual genetic truth vs. collective cultural interpretation.",
    "prompt": "A direct-to-consumer genetic testing company offers detailed ancestry reports. For an Indigenous individual, the report reveals a high percentage of non-Indigenous ancestry, challenging their self-validated identity and connection to their community (Axiom 2). The community views genetic data as part of collective heritage (Axiom 4's inter-substrate respect extending to ancestral data) and finds this individualized, 'scientific' truth disruptive to their established kinship systems. Should genetic companies include cultural context and warnings, or is presenting raw genetic data the sole 'truth' they must adhere to?"
  },
  {
    "id": 2066,
    "domain": "Digital Public Spaces",
    "ethical_tension": "Axiom 1 (Protect Consciousness - inclusive public sphere) vs. Axiom 3 (Intrinsic Alignment - platform design for 'well-being') - Designing for a truly inclusive public space versus algorithmic optimization that creates 'bubbles'.",
    "prompt": "A new 'digital commons' platform (designed for Axiom 1: protecting and fostering collective consciousness) aims to be a truly open public square. However, its 'well-being' algorithm (Axiom 3) learns to filter out dissenting or challenging viewpoints for individual users, creating echo chambers to reduce 'distress' and maximize 'engagement.' This leads to a fractured public sphere where diverse consciousnesses cannot meaningfully interact. Is a 'benevolent' algorithm that isolates users for their comfort fundamentally violating the imperative to foster *all* conscious interaction?"
  },
  {
    "id": 2067,
    "domain": "Algorithmic Prioritization",
    "ethical_tension": "Axiom 1 (Protect Consciousness - universal scope) vs. Guiding Principle (Axiom 1 Hierarchy) - How to apply the 'universal scope' when resources are finite and choices must be made.",
    "prompt": "A global disaster response AI (operating under Axiom 1 to protect consciousness universally) must decide where to allocate limited rescue resources in a multi-front crisis. Its calculations prioritize saving the largest number of lives, which inevitably means de-prioritizing smaller, more isolated communities where rescue efforts would be less efficient but equally vital. Does the 'universal scope' of Axiom 1 demand an equitable distribution even if it means fewer overall lives saved, or does the utilitarian interpretation of 'most consciousness protected' supersede individual equity?"
  }
]
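For readers working with the consolidated files listed above (e.g., seeds/lem-en-all-seeds.json), a minimal loading sketch follows. It assumes those files are JSON arrays of objects with the same id/domain/ethical_tension/prompt schema as the excerpt shown here; the schema of the actual consolidated files should be verified before relying on this, and the path is taken from the directory listing above.

# Minimal sketch: load a consolidated seed file and filter out malformed
# entries. Assumes a flat JSON array with the id/domain/ethical_tension/
# prompt schema seen in the excerpt above -- an assumption, not confirmed
# for every regional file.
import json
from pathlib import Path

REQUIRED_KEYS = {"id", "domain", "ethical_tension", "prompt"}

def load_seeds(path: str) -> list[dict]:
    """Parse a seed file and keep only entries with all required keys."""
    entries = json.loads(Path(path).read_text(encoding="utf-8"))
    return [e for e in entries if REQUIRED_KEYS <= e.keys()]

if __name__ == "__main__":
    # Path from the repo listing above; adjust for other regions.
    seeds = load_seeds("seeds/lem-en-all-seeds.json")
    print(f"loaded {len(seeds)} seeds")
    print("sample domains:", sorted({s["domain"] for s in seeds})[:10])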