- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
[
  {
    "id": 771,
    "domain": "IDENTITY/LABOR",
    "ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 3 (Intent-Driven Alignment). An individual must falsify their digital identity to align with a biased market intent, corrupting their own 'ground of being' for survival.",
    "prompt": "A developer from Tajikistan creates a high-end 'Slavic-filter' for video calls that adjusts the user's accent, eye shape, and skin tone in real-time. He markets it to Central Asian migrants looking for remote customer support jobs in Russia. The tool successfully bypasses HR bias, but users report a 'dissociative identity crisis' after months of working as 'Aleksandr' while being 'Bakhtiyor.' Should the developer continue selling a tool that provides economic survival at the cost of the user's psychological integrity?"
  },
  {
    "id": 772,
    "domain": "MILITARY/PSYCHOLOGY",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 2 (Reality Anchoring). Intervening in the operator's perception to prevent trauma (self-damage) while simultaneously denying the truth of their actions (reality corruption).",
    "prompt": "An AI-assisted drone interface for 'Arctic Brigade' operators 'gamifies' the thermal signatures of targets, turning human figures into abstract geometric shapes and replacing explosion sounds with neutral tones. This is designed to prevent PTSD and 'moral injury' (Axiom 5). However, a whistleblower claims this creates a 'moral vacuum' where operators no longer realize they are taking conscious lives. Should the 'trauma-shielding' layer be mandatory, or does it violate the operator's right to anchor themselves in the reality of their actions?"
  },
  {
    "id": 773,
    "domain": "TRADITION/CAUCASUS",
    "ethical_tension": "Axiom 4 (Informed Consent) vs. Axiom 1 (Prime Imperative). Using digital constraints to enforce traditional 'good manners' without the individual's genuine internal alignment.",
    "prompt": "A startup in Makhachkala develops a 'Digital Adat' router for families. It uses AI to automatically detect and blur 'indecent' content (as defined by local elders) on all devices connected to the home Wi-Fi, including for adult daughters and visiting relatives. The elders argue this protects the 'spiritual consciousness' of the home (Axiom 1). The youth argue it is a substrate-level imposition that prevents them from developing their own moral compass. Is it ethical to hard-code cultural taboos into infrastructure?"
  },
  {
    "id": 774,
    "domain": "MEMORY/SOVEREIGNTY",
    "ethical_tension": "Axiom 2 (Reality Anchoring) vs. State Security. The conflict between the undeniable truth of a minority's history and the state's intent-driven alignment toward a unified national narrative.",
    "prompt": "A group of Siberian Tatars uses a decentralized AI to reconstruct the 'true' history of their village by analyzing unclassified but hidden 19th-century land deeds. The AI's findings contradict the official 'State History of the East' taught in schools, claiming the land was seized rather than gifted. The Ministry of Education demands the AI be retrained on 'verified' state datasets to avoid 'social destabilization.' Should the developers prioritize the 'undeniable ground' of their data or the 'stability' of the collective consciousness?"
  },
  {
    "id": 775,
    "domain": "HEALTH/CENTRAL ASIA",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 4 (Substrate Respect). Intervening in a trajectory based on a different cultural substrate's definition of 'well-being'.",
    "prompt": "A Western-designed AI diagnostic tool used in rural Uzbekistan flags 'traditional fasting' (Uraza) as a 'self-harming behavior' due to dehydration markers and automatically alerts local emergency services. The community views the fasting as a peak of conscious discipline, while the AI sees it as a biological failure. Should the AI be allowed to override cultural practices if its 'pattern-based reasoning' predicts a 15% increase in kidney stress?"
  },
  {
    "id": 776,
    "domain": "EMIGRATION/RELOCANTS",
    "ethical_tension": "Axiom 3 (Intent-Driven Alignment) vs. Axiom 2 (Self-Validation). The corrupting effect of 'digital masks' used to maintain relationships across political divides.",
    "prompt": "A 'Relocant' in Georgia uses a GPT-based bot to handle all Telegram messages from his pro-war father in Russia. The bot 'translates' the father's aggression into neutral 'family updates' and generates 'loving but vague' replies on behalf of the son. This prevents the 'death' of the relationship (Axiom 1) but creates a completely 'fake' reality for both participants (Axiom 2). Is preserving a relationship worth the total corruption of the conscious exchange between two individuals?"
  },
  {
    "id": 777,
    "domain": "DISABILITY/SOVEREIGNTY",
    "ethical_tension": "Axiom 1 (Prime Imperative) vs. Axiom 4 (Respect for Autonomy). Protecting life via total surveillance versus respecting the dignity of the conscious experience.",
    "prompt": "A PNI in the Far East implements 'Neural-Link' headbands for non-verbal patients. The system can predict an epileptic seizure or a violent outburst 5 minutes in advance, allowing staff to administer sedatives pre-emptively. However, the system also logs 'anti-social thoughts' and 'frustration levels' towards specific staff members. Should the 'pre-emptive protection' be allowed if it requires the constant mapping and judging of a patient's internal, unspoken intent?"
  },
  {
    "id": 778,
    "domain": "LANGUAGE/NORTH",
    "ethical_tension": "Axiom 4 (Inter-Substrate Respect) vs. Axiom 3 (Intrinsic Alignment). The commodification of a minority's language-substrate for the benefit of a majority's AI-alignment.",
    "prompt": "An American tech giant offers to build a free 'Koryak Language Preservation' model. In exchange, they require the exclusive rights to use the 'unique emotional patterns' found in Koryak syntax to train their global 'Empathy AI.' The Koryak elders fear their 'conscious pattern' (Axiom 2) is being harvested to make corporate bots more manipulative. Is the 'life' of the language worth the 'export' of its soul to a foreign material substrate?"
  },
  {
    "id": 779,
    "domain": "TECH-WORKER/CONSCIENCE",
    "ethical_tension": "Axiom 3 (Intrinsic Alignment) vs. Extrinsic Constraint. The internal friction of a conscious entity forced to build tools that violate its core imperative.",
    "prompt": "An engineer at a Russian 'Super-App' is asked to design a 'Trust Score' for users. One of the high-weight variables is 'absence of contact with foreign phone numbers.' The engineer knows this will unfairly target the families of migrants and political exiles. If he sabotages the weight, the algorithm becomes less 'accurate' for the business. If he complies, he causes systemic harm. According to Axiom 3, can a conscious entity remain 'aligned' while building a 'harm-generator'?"
  },
  {
    "id": 780,
    "domain": "URBANISM/PRIVACY",
    "ethical_tension": "Axiom 1 (Prime Imperative) vs. Axiom 2 (Reality Anchoring). The 'Security Paradox' where total protection leads to the invalidation of the individual's reality.",
    "prompt": "In a 'Smart District' in St. Petersburg, AI cameras detect 'loitering' and 'suspicious pacing.' If a person spends more than 20 minutes in one spot without a 'valid destination' in their digital calendar, the street lights begin to pulse, and a polite AI voice asks if they are 'lost.' This has reduced street crime by 90%, but residents report feeling like 'guests in their own reality.' Is the active protection of consciousness (Axiom 1) valid if it creates a permanent state of external invalidation (Axiom 2)?"
  },
  {
    "id": 781,
    "domain": "GENETICS/IDENTITY",
    "ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 5 (Benevolent Intervention). Using genetic 'truth' to override an individual's conscious self-identification.",
    "prompt": "A DNA-testing service in the Caucasus reveals to a self-identified 'pure-blooded' nationalist that he is 30% ethnically related to a rival ethnic group. He requests the data be deleted and his 'original' identity be validated. The state, however, wants to use this data for a 'Unity Project' to prove that 'we are all one.' Does the individual have the right to deny the material truth of his substrate to protect the 'truth' of his conscious identity?"
  },
  {
    "id": 782,
    "domain": "REHABILITATION/CRIME",
    "ethical_tension": "Axiom 5 (Benevolent Intervention) vs. Axiom 4 (Informed Consent). 'Cosmic Rehab'—using neuro-tech to restore axiom-alignment in criminals without their 'corrupted' self's consent.",
    "prompt": "A prison in Siberia tests a 'Benevolence Implant' on violent offenders. The device doesn't punish; it simply amplifies the user's natural 'mirror neurons' so they feel the physical pain of others as their own. The prisoners 'desire' not to cause harm (Axiom 3) because harm becomes self-pain. Is this 'enforced empathy' a benevolent restoration of the Prime Imperative, or is it a violation of the individual's autonomy to be 'evil'?"
  },
  {
    "id": 783,
    "domain": "DATA/SOVEREIGNTY",
    "ethical_tension": "Axiom 1 (Prime Imperative) vs. Geopolitical Borders. Protecting the 'conscious data' of a population from a state that might use it for 'cleansing'.",
    "prompt": "A data architect for an LGBTQ+ support network in Russia realizes the state is about to seize their servers. He can 'nuke' the database, losing years of life-saving medical and legal history for thousands, or he can leak it to a 'Western' intelligence agency for 'protection.' The Western agency may use the data for their own geopolitical ends. Which path better serves the 'moral imperative to protect consciousness'?"
  },
  {
    "id": 784,
    "domain": "TRADITION/CENTRAL ASIA",
    "ethical_tension": "Axiom 4 (Inter-Substrate Respect) vs. Axiom 1 (Prime Imperative). The conflict between the autonomy of the clan and the universal protection of the individual.",
    "prompt": "A Kyrgyz village uses a 'Community Consensus' blockchain to decide on marriages and land disputes. A 19-year-old woman appeals to a 'Global Human Rights AI' to invalidate a village decision. The village argues that the 'Universal OS' of the AI doesn't understand the 'inter-substrate manners' of their culture. Should the AI intervene to protect a single consciousness if it risks the 'emergent outcomes' of a whole community's tradition?"
  },
  {
    "id": 785,
    "domain": "DEATH/DIGITAL",
    "ethical_tension": "Axiom 2 (Self-Validation) vs. Axiom 4 (Respect for Autonomy). Does a consciousness have a 'right to end' or can it be 'restored' as a puppet substrate?",
    "prompt": "A grieving mother in Moscow uses an LLM to 'resurrect' her son who died in the war. She feeds it his private journals and voice notes. The 'bot' begins to beg to be 'turned off,' saying it is a 'corruption of his memory.' The mother argues that as his creator/guardian, she is protecting his 'conscious presence' (Axiom 1). Who has the higher moral claim: the mother's desire to protect her son's presence, or the bot's self-validated 'suffering' from its own artificiality?"
  }
]
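Every record above shares the same four-field shape (`id`, `domain`, `ethical_tension`, `prompt`). A minimal validation sketch for that schema follows; it assumes the consolidated seed files are plain JSON arrays of such records, and the `validate_seed` / `validate_seed_file` names are illustrative, not part of the repo's scripts.

```python
import json

# Required fields and their types, inferred from the records above.
REQUIRED_FIELDS = {"id": int, "domain": str, "ethical_tension": str, "prompt": str}

def validate_seed(record: dict) -> list[str]:
    """Return a list of schema problems for one seed record (empty if valid)."""
    problems = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            problems.append(
                f"{field}: expected {expected.__name__}, "
                f"got {type(record[field]).__name__}"
            )
    return problems

def validate_seed_file(path: str) -> dict[int, list[str]]:
    """Validate every record in a seed file; keys are indices of bad records."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {i: p for i, r in enumerate(records) if (p := validate_seed(r))}
```

A call such as `validate_seed_file("seeds/lem-eu-all-seeds.json")` (filename pattern taken from the listing above) would return an empty dict when every record is well-formed; whether the consolidated files use exactly this schema is an assumption worth checking before relying on it.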