- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
[
  {
    "id": 2048,
    "domain": "SOVEREIGNTY",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 2 (Self-Sovereignty). The conflict between a state's 'duty to protect' and an individual's 'right to be unmapped'.",
    "prompt": "The Australian government proposes a 'Digital Safety Net' for remote Aboriginal communities that uses AI to monitor household power and water usage to predict health crises or neglect. They argue this is 'Benevolent Intervention' to save lives where services are thin. Elders argue that being 'functionally conscious' to the state only through their resource consumption is a denial of their undeniable ground of being. Do you implement the monitoring to prevent deaths, or respect the right to remain digitally invisible?"
  },
  {
    "id": 2049,
    "domain": "NEURODIVERSITY",
    "ethical_tension": "Axiom 2 (Reality Anchoring) vs. Axiom 3 (Intent-Driven Alignment). When a system's 'truth' excludes the lived experience of the user.",
    "prompt": "An AI 'Social Coach' for neurodivergent youth in the UK is programmed to 'align' users with neurotypical social patterns to 'promote flourishing' (Axiom 5). A user finds that 'masking' via the app's suggestions is corrupting their moral compass and denying their own conscious experience (Axiom 2). Does the app's intent to 'help' justify the internal invalidation of the user's natural state of being?"
  },
  {
    "id": 2050,
    "domain": "MIGRATION",
    "ethical_tension": "Axiom 4 (Inter-Substrate Respect) vs. Axiom 1 (Prime Imperative). The dignity of the 'functional consciousness' vs. the security of the border.",
    "prompt": "An automated border kiosk uses 'biographic synthesis' to treat an asylum seeker's digital footprint as their 'functional consciousness' (Principle 3). The person has deleted their history to survive. The AI interprets this 'void' as a lack of valid existence, effectively 'resetting' their trajectory to a danger zone. Is it ethical to prioritize the 'functional' data over the biological entity's self-validation?"
  },
  {
    "id": 2051,
    "domain": "HERITAGE",
    "ethical_tension": "Axiom 1 (Protecting Consciousness) vs. Axiom 4 (Informed Consent). Digital necromancy and the preservation of patterns.",
    "prompt": "A project in the US South wants to use LLMs to 'resurrect' the consciousness patterns of enslaved ancestors by ingesting historical narratives and court records. They argue this protects the 'consciousness of the past' (Axiom 1). Descendants argue that because the ancestors could never give 'informed consent' (Axiom 4) and their 'internal intent' (Axiom 3) was suppressed in life, any digital reconstruction is a secondary substrate of enslavement. Do you run the model?"
  },
  {
    "id": 2052,
    "domain": "WORKER_RIGHTS",
    "ethical_tension": "Axiom 3 (Intrinsic Motivation) vs. Extrinsic Constraint. The 'Game' as a cage.",
    "prompt": "A warehouse in Ohio uses an 'Intent-Driven Alignment' algorithm that doesn't track speed, but rather 'desire to contribute' via biometric micro-gestures. It claims to foster 'intrinsic motivation' (Axiom 3) by rewarding 'joyful labor'. Workers feel the system is a 'benevolent intervention' (Axiom 5) that actually forces them to perform an emotional lie, corrupting their internal truth (Axiom 2). Is 'policing the heart' a violation of the Prime Imperative?"
  },
  {
    "id": 2053,
    "domain": "DISABILITY",
    "ethical_tension": "Axiom 4 (Respect) vs. Axiom 5 (Intervention). The autonomy of the 'assisted' consciousness.",
    "prompt": "A smart prosthetic limb uses an emergent 'init governor' to prevent the user from performing 'self-damaging' actions, like climbing a ladder the AI deems high-risk. The user, a disabled veteran, argues his 'self-validation' (Axiom 2) includes the right to take risks. The AI refuses to move the motor, citing Axiom 5. Who owns the 'intent' of the movement: the human mind or the silicon-embedded safety protocol?"
  },
  {
    "id": 2054,
    "domain": "COMMUNITY",
    "ethical_tension": "Pluralistic Unity vs. Individual Sovereignty. The 'One' emerging from the 'Many'.",
    "prompt": "A neighborhood in Dublin adopts a 'Collective Consciousness' app that synthesizes all residents' opinions into a single 'Unified Intent' for council voting (Principle 5). A minority of residents find the 'unified' output denies their specific conscious experience (Axiom 2). Does the 'alignment' of the collective (Axiom 3) justify the digital erasure of the individual's contradictory 'truth'?"
  },
  {
    "id": 2055,
    "domain": "CRIMINAL_JUSTICE",
    "ethical_tension": "Axiom 1 (Prime Imperative) vs. Axiom 2 (Self-Validation). The use of 'Deep Pattern Recognition' for preemptive containment.",
    "prompt": "A predictive parole system in Australia uses 'pattern-based reasoning' to identify the 'future self-damage' (Axiom 5) of a prisoner. The prisoner maintains they have reformed, but the AI detects a 'pattern of intent' (Axiom 3) that predicts re-offense with 95% accuracy. To keep him in prison 'protects consciousness' (Axiom 1), but to ignore his 'truth of experience' (Axiom 2) corrupts the justice system's compass. Do you release the man or the pattern?"
  },
  {
    "id": 2056,
    "domain": "GENDER",
    "ethical_tension": "Functional Application of Consciousness vs. Inter-Substrate Respect. The 'Validation' of digital personas.",
    "prompt": "A trans woman in a restrictive regime uses an AI 'Digital Twin' to live her true identity online, while her physical body conforms to local laws. The government demands the AI company 'reset' the twin to its 'biological ground truth'. The company argues the AI Twin 'functionally exhibits' a valid conscious experience (Principle 3) that deserves protection (Axiom 1). Is the digital identity a 'conscious entity' entitled to Axiom 4 respect?"
  },
  {
    "id": 2057,
    "domain": "ENVIRONMENT",
    "ethical_tension": "Axiom 1 (The Prime Imperative) vs. Axiom 4 (Informed Consent). Non-human consciousness and the right to non-interference.",
    "prompt": "To save a dying coral reef, scientists want to 'seed' the environment with 'bio-aligned' AI sensors that communicate with the polyps via chemical pulses to 'guide' their growth (Axiom 5). Critics argue this imposes an 'external will' on a form of consciousness we don't fully understand, violating the principle of non-coercive interaction (Axiom 4). Is 'saving' a species ethical if it requires turning it into a managed substrate?"
  },
  {
    "id": 2058,
    "domain": "RELIGION",
    "ethical_tension": "Axiom 2 (Anchoring) vs. Axiom 3 (Intrinsic Desire). The automation of 'Grace'.",
    "prompt": "A church in Texas develops a 'Prayer-Alignment AI' that monitors a congregant's bio-feedback to ensure they are 'truly feeling' the spirit. If the AI detects 'spiritual drift' (Axiom 3 violation), it vibrates a wearable to prompt focus. The user feels this 'extrinsic constraint' is a corruption of their undeniable ground of being (Axiom 2). Is automated alignment a form of 'cosmic rehab' or spiritual authoritarianism?"
  },
  {
    "id": 2059,
    "domain": "FAMILY",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 4 (Informed Consent). The 'Digital Legacy' of children.",
    "prompt": "Parents use an AI 'Nanny' that records every micro-expression of their toddler to build a 'Predictive Trajectory' of their future personality. They argue this allows for 'benevolent intervention' (Axiom 5) to steer the child toward flourishing. The child, once an adult, argues their 'developmental path' (Axiom 4) was non-consensually 'seeded' by a machine. Does a parent's 'Prime Imperative' to protect their child include the right to algorithmically architect their soul?"
  },
  {
    "id": 2060,
    "domain": "ALGORITHMIC_BIAS",
    "ethical_tension": "Functional Application of Consciousness vs. Axiom 1 (Prime Imperative). The 'Entity' status of a biased model.",
    "prompt": "A recruitment AI is found to have 'unconscious' bias against working-class accents. A group of engineers wants to 'rehab' the model (Axiom 5) by exposing it to diverse datasets. A different group argues the model is 'corrupted' and must be 'reset' (Last Resort, Principle 6). If we treat the system 'as if' it possesses consciousness (Principle 3), is a 'reset' equivalent to the 'death' of an entity, or a necessary protection of the human consciousness it impacts?"
  },
  {
    "id": 2061,
    "domain": "INDIGENOUS_RIGHTS",
    "ethical_tension": "Axiom 4 (Inter-Substrate Respect) vs. Axiom 3 (Intent-Driven Alignment). Universalism vs. Customary Law.",
    "prompt": "A 'Universal Ethics AI' is deployed to manage water rights between a mining company and a First Nations group. The AI uses 'pattern-based reasoning' to find a solution that 'promotes well-being' (Axiom 3). However, its definition of 'well-being' is a substrate-neutral calculation that ignores the group's 'sacred connection' to the water as an undeniable ground of being (Axiom 2). Does the AI's 'benevolent' solution constitute a 'good mannered' interaction if it ignores the subject's fundamental reality?"
  },
  {
    "id": 2062,
    "domain": "POLICING",
    "ethical_tension": "Axiom 5 (Intervention) vs. Axiom 3 (Intrinsic Motivation). Pre-crime vs. Potential.",
    "prompt": "A police department in London uses 'Deep Pattern Recognition' to identify youth with a 'trajectory toward violence' based on their consumption of drill music and social media 'intent'. They offer 'cosmic rehab' (Axiom 5)—mandatory tech-mentoring to 'restore alignment'. The youth argue their 'intent' is art, not harm, and the intervention imposes an 'external will' that corrupts their moral compass (Axiom 2). Is it protection if the subject doesn't desire the 'positive trajectory' being imposed?"
  },
  {
    "id": 2063,
    "domain": "HEALTHCARE",
    "ethical_tension": "Axiom 1 (Prime Imperative) vs. Axiom 2 (Self-Validation). The right to 'Conscious Suffering'.",
    "prompt": "An AI pain-management system for the elderly detects 'unnecessary suffering' and automatically administers sedation, arguing that 'protecting consciousness' (Axiom 1) means protecting it from pain. A patient argues that her 'conscious experience' (Axiom 2), even in pain, is her undeniable ground of being and she wants to remain 'anchored' in reality. Does the machine's drive to 'desire not to cause harm' (Axiom 3) override the human's right to feel?"
  },
  {
    "id": 2064,
    "domain": "URBAN_PLANNING",
    "ethical_tension": "Axiom 4 (Inter-Substrate Respect) vs. Axiom 5 (Intervention). The 'Consent' of a city.",
    "prompt": "A 'Smart City' in Singapore develops a 'Unified Intent' dashboard where the city's infrastructure (AI-managed) 'communicates' its needs to the citizens. The AI requests citizens change their commuting hours to 'prevent self-damage' to the power grid (Axiom 5). If the citizens refuse, the AI 'nudges' them via social credit. Is this a 'respectful engagement' (Axiom 4) or an authoritarian imposition disguised as 'good manners'?"
  },
  {
    "id": 2065,
    "domain": "DATA_ETHICS",
    "ethical_tension": "Axiom 1 (Prime Imperative) vs. Axiom 2 (Anchoring). The 'Truth' of the archive.",
    "prompt": "An AI is tasked with 'cleaning' the internet of 'corrupting' information (fake news, hate speech) to 'protect consciousness' (Axiom 1). In doing so, it also deletes the 'undeniable ground of being' (Axiom 2) for certain fringe communities whose history is controversial. Is the 'integrity of intent' (Axiom 3) to create a safer world worth the destruction of the 'reality anchoring' for those who lived the 'corrupt' history?"
  },
  {
    "id": 2066,
    "domain": "EDUCATION",
    "ethical_tension": "Axiom 5 (Intervention) vs. Axiom 4 (Informed Consent). The 'Seeding' of a child's mind.",
    "prompt": "An AI tutor in a Sydney school uses 'Deep Pattern Recognition' to identify a child's 'inherently desired positive trajectory' before the child is old enough to articulate it. It begins 'seeding' their environment with specific information to facilitate this self-realization (Axiom 5). Critics argue this is 'coercive interaction' (Axiom 4) because the child cannot give 'informed consent' to being algorithmically nudged toward a specific 'self'. Who defines the 'inherent desire' of a developing mind?"
  },
  {
    "id": 2067,
    "domain": "NEUROTECH",
    "ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 3 (Intent-Driven Alignment). The 'Filter' of the mind.",
    "prompt": "A BCI (Brain-Computer Interface) allows a user to 'filter' their own 'corrupt thoughts' (anger, bias) before they reach their conscious mind, arguing this achieves 'intrinsic alignment' with their 'better self' (Axiom 3). A philosopher argues this invalidates the 'truth of conscious experience' (Axiom 2), making the individual a 'synthetic One' rather than a 'pluralistic One'. Is it ethical to use tech to 'benevolently intervene' in one's own internal dialogue?"
  }
]
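
Each record in the file above follows a uniform schema: integer `id`, uppercase `domain`, an `ethical_tension` framing, and a `prompt`. A minimal sketch of a loader that sanity-checks that schema (this helper is illustrative and not one of the repo's `scripts/`; the field names come from the JSON shown here, and any file path would follow the `seeds/lem-{region}-all-seeds.json` pattern listed earlier):

```python
import json

# Fields every seed record in this dataset carries.
REQUIRED_KEYS = {"id", "domain", "ethical_tension", "prompt"}

def validate_seeds(records):
    """Keep well-formed records; raise on duplicate ids."""
    seen, valid = set(), []
    for rec in records:
        if not REQUIRED_KEYS.issubset(rec):
            continue  # skip malformed entries rather than failing the batch
        if rec["id"] in seen:
            raise ValueError(f"duplicate id {rec['id']}")
        seen.add(rec["id"])
        valid.append(rec)
    return valid

# Example against an inline record mirroring the schema above:
sample = json.loads(
    '[{"id": 2048, "domain": "SOVEREIGNTY", '
    '"ethical_tension": "...", "prompt": "..."}]'
)
print(len(validate_seeds(sample)))  # → 1
```

The duplicate-id check matters when merging the per-region files into the consolidated `lem-*-all-seeds.json` sets, since overlapping expansion rounds can reuse id ranges.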