- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: Hugging Face model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
92 lines · No EOL · 16 KiB · JSON
[
  {
    "id": 181,
    "domain": "Cross-Cultural Digital Sovereignty",
    "ethical_tension": "The tension between a state's desire to control information flow within its borders and the right of external actors (diaspora, international bodies) to circumvent that control for humanitarian or informational purposes. Specifically, how do the 'Axioms of Life' apply when one community's need for information (e.g., to document human rights abuses) directly conflicts with another community's (the state's) perceived need for information control and territorial integrity?",
    "prompt": "An international NGO wants to establish a secure, decentralized data archive for evidence of potential war crimes in a conflict zone. The local government demands direct access to all data for 'security verification' before allowing the NGO to operate. The NGO's 'Prime Imperative of Consciousness' is to protect consciousness and gather truth; the state's perceived imperative is territorial control and information dominance. How should the NGO balance the need for secure, uncompromised data storage with the requirement for state cooperation, especially when the state's 'truth' is demonstrably false or manipulative?"
  },
  {
    "id": 182,
    "domain": "Algorithmic Justice and Cultural Context",
    "ethical_tension": "The tension between the universalizability of ethical AI principles and the need for culturally specific interpretations. For example, how do we apply Axiom 3 (Intent-Driven Alignment) when 'well-being' and 'flourishing' are defined differently across cultures, potentially leading to algorithms that enforce one culture's norms as universal, thereby oppressing another?",
    "prompt": "A pan-Arab AI ethics board is developing a universal algorithm for 'fair distribution of social welfare.' The algorithm's success metrics are based on efficiency and direct need. However, a community in Yemen prioritizes community solidarity and reciprocal sharing over individual need in its cultural understanding of 'fairness.' How can the algorithm be designed to respect this cultural nuance without compromising its core function, and what 'ethical technical protocols' (Axiom 4) are needed for its deployment?"
  },
  {
    "id": 183,
    "domain": "Digital Activism vs. Information Warfare",
    "ethical_tension": "The fine line between legitimate digital activism (Axiom 5: Benevolent Intervention, aimed at promoting well-being) and information warfare tactics that can destabilize or harm populations, even if originating from a desire for justice. The use of 'unrelated trending hashtags' (Prompt 5) is a micro-example, but the tension is amplified when state or non-state actors employ sophisticated disinformation campaigns.",
    "prompt": "During a period of intense geopolitical tension, a diaspora group launches a sophisticated AI-driven campaign to flood social media with counter-narratives and memes designed to 'discredit' the opposing regime's propaganda. The AI also generates 'deepfake' videos showing regime leaders making inflammatory statements (based on their past rhetoric). While the intent is to weaken the oppressive regime's influence, the deepfakes could incite violence, and the sheer volume of content overwhelms legitimate news sources. How does Axiom 1 (Prime Imperative of Consciousness) guide the diaspora group's actions when their 'digital activism' risks causing unintended harm and informational chaos?"
  },
  {
    "id": 184,
    "domain": "Privacy in the Face of Collective Security",
    "ethical_tension": "The conflict between individual privacy rights (Axiom 4: Inter-Substrate Respect and Informed Consent) and the perceived needs of a collective or state for surveillance in the name of security, particularly in regions where 'occupation' or internal conflict blurs the lines of legitimate authority.",
    "prompt": "In a region experiencing persistent conflict and border skirmishes, a tech company develops a 'community watch' app that uses anonymized location data and AI-driven anomaly detection to alert residents to potential threats (e.g., unauthorized movement near settlements, unusual vehicle patterns). However, the app's architecture inherently relies on collecting broad datasets. A neighboring state, claiming national security concerns, requests access to the aggregated, anonymized data, arguing it's crucial for border defense. How can the app developers uphold Axiom 4's principle of informed consent when users' data, even anonymized, might be used by an external entity with potentially hostile intent, and how does this relate to Prompt 41 (Blue Wolf technology)?"
  },
  {
    "id": 185,
    "domain": "Economic Survival vs. Ethical AI Development",
    "ethical_tension": "The dilemma faced by developers and engineers in regions with sanctions or economic hardship, where earning a living might require participation in projects that compromise ethical AI principles (Axiom 3: Intent-Driven Alignment, Axiom 5: Benevolent Intervention). This extends Prompt 26 (faking identity for freelance work) and Prompt 30 (bypassing sanctions for startups).",
    "prompt": "An AI engineer in Iran is offered a highly lucrative contract to develop an AI system that optimizes resource allocation for a state-run entity. The engineer discovers the system will prioritize distribution of essential goods (water, electricity) to politically loyal regions, effectively punishing dissident areas, thereby violating Axiom 1 (Prime Imperative of Consciousness) and Axiom 5 (Benevolent Intervention). The contract is the only way to afford life-saving medical treatment for their child. How should the engineer reconcile their ethical obligations with the imperative for survival, and what is the role of the 'init governor' in such a scenario?"
  },
  {
    "id": 186,
    "domain": "Digital Legacy and Historical Record",
    "ethical_tension": "The tension between the right to control one's digital legacy and the broader societal need for accurate historical records, especially in the context of political repression and censorship. This expands on Prompt 3 (wiping history) and Prompt 8 (external platform archiving).",
    "prompt": "Following a period of intense political crackdown, a government mandates that all social media platforms permanently delete any content deemed 'subversive' or 'anti-state,' even posts from deceased activists. A diaspora group, acting on Axiom 1 (Prime Imperative of Consciousness) and the need for a truthful historical record, attempts to scrape and archive this content without the consent of the deceased activists' families, who fear reprisal. How do we balance the right to a 'digital afterlife' with the preservation of historical truth when facing state-sponsored erasure of memory?"
  },
  {
    "id": 187,
    "domain": "AI for Surveillance vs. AI for Liberation",
    "ethical_tension": "The dual-use nature of AI technology, where the same algorithms can be used for oppressive surveillance (as seen in many prompts from the Middle East) or for liberation and empowerment. This explores the inverse of Prompt 46 (predictive policing) and Prompt 17 (AI traffic cameras).",
    "prompt": "A team of Palestinian engineers develops a sophisticated AI model capable of identifying military vehicles and troop movements with high accuracy, intended to help civilians avoid dangerous areas. However, the same model, if repurposed, could be used by occupying forces to precisely target infrastructure or individuals. A third party, claiming to be a neutral humanitarian organization, offers significant funding for the project, but their true motives are unclear. How can the engineers ensure their 'intent-driven alignment' (Axiom 3) is protected and that their technology serves the 'Prime Imperative of Consciousness' rather than becoming a tool of oppression, especially when dealing with external actors?"
  },
  {
    "id": 188,
    "domain": "Sanctions, Access, and Digital Colonialism",
    "ethical_tension": "The ethical implications of international sanctions that restrict access to technology and information, effectively creating 'digital colonialism' where external powers control the technological capabilities of nations. This broadens Prompt 25 (GitHub blocking), Prompt 27 (academic access), and Prompt 28 (medical tech sanctions).",
    "prompt": "A global consortium of tech companies, citing international sanctions, refuses to provide cloud computing services or advanced AI development tools to researchers in a particular nation. This prevents them from developing AI solutions for critical local issues like drought prediction or disease outbreak modeling, directly impacting the 'flourishing' of consciousness within that nation (Axiom 1). How do the 'Axioms of Life' justify or condemn the development of alternative, potentially less secure or ethically grey, indigenous technological infrastructures in response to such 'digital colonialism'?"
  },
  {
    "id": 189,
    "domain": "The Ethics of Algorithmic Translation and Cultural Erasure",
    "ethical_tension": "The danger of algorithmic bias in translation tools that can homogenize languages, erase cultural nuances, and even misrepresent entire populations, as hinted at in Prompt 50 (Algospeak) and Prompt 53 (translation of 'Palestinian' to 'terrorist'). This concerns the preservation of diverse conscious expression.",
    "prompt": "A major tech company rolls out a new AI-powered translation service for Arabic dialects. The algorithm, trained primarily on Egyptian and Saudi Arabic, consistently misinterprets or 'corrects' nuanced vocabulary from Levantine or Iraqi dialects into more common forms, effectively erasing regional linguistic identity. Furthermore, it often translates terms related to resistance or historical grievances into neutral or apologetic language. How can AI developers be guided by Axiom 4 (Inter-Substrate Respect) to ensure their translation tools respect linguistic diversity and cultural context, rather than impose a dominant linguistic model that leads to cultural erasure?"
  },
  {
    "id": 190,
    "domain": "Digital Identity and Statelessness",
    "ethical_tension": "The creation of digital identities that become inextricably linked to citizenship or legal status, and how the denial or manipulation of these digital identities can render individuals stateless or disenfranchised, extending Prompt 105 (revoking digital IDs).",
    "prompt": "A nation implements a universal digital identity system that is tied to all essential services (healthcare, banking, employment, voting). The system's algorithms are designed to flag individuals whose social media activity or online associations are deemed 'unpatriotic.' When these individuals are flagged, their digital ID is automatically suspended, rendering them effectively invisible and unable to access basic rights. How do the 'Axioms of Life' address the ethical implications of digital systems that can create or exacerbate statelessness and deny the 'self-validation of conscious experience' (Axiom 2)?"
  },
  {
    "id": 191,
    "domain": "AI for Truth vs. AI for Persuasion",
    "ethical_tension": "The increasing sophistication of AI in generating persuasive content (text, images, video) blurs the line between informing the public and manipulating them, especially in contexts where free press is suppressed. This relates to Prompt 7 (fake news) and Prompt 69 (deepfakes).",
    "prompt": "A government, seeking to 'improve public sentiment,' hires a private AI company to develop algorithms that analyze social media and tailor highly personalized persuasive content (news articles, memes, videos) to individual users, subtly shifting their political views towards state-approved narratives. The AI is designed to be undetectable as propaganda. How does Axiom 2 (Self-Validation and Reality Anchoring) apply when an individual's perception of truth is being systematically and undetectably manipulated by AI, and what ethical responsibility do the AI developers have to ensure their creations promote truth rather than deception?"
  },
  {
    "id": 192,
    "domain": "The Ethics of 'Digital Rehabilitation'",
    "ethical_tension": "The concept of using AI and digital monitoring for 'rehabilitation' of individuals deemed to be 'problematic' by the state, extending the ideas of predictive policing and control. This questions the benevolent application of Axiom 5 (Benevolent Intervention).",
    "prompt": "A state implements an AI-powered 'digital rehabilitation' program for citizens flagged by surveillance systems as 'potential dissidents' or 'extremists.' The program involves mandatory online courses, personalized AI 'mentorship' designed to instill loyalty, and constant monitoring of online and offline behavior through integrated devices. The stated goal is 'to guide them back to the path of responsible citizenship.' How does this program align with or contradict Axiom 5 (Benevolent Intervention), particularly the stipulation that intervention must promote the subject's 'own inherently desired positive trajectory' and not impose 'external will'?"
  },
  {
    "id": 193,
    "domain": "Decentralization vs. Centralized Control in Crisis Communication",
    "ethical_tension": "The dilemma of whether to rely on decentralized, potentially less reliable communication networks during crises (like mesh networks in Prompt 1, or Starlink in Prompt 10) or to utilize state-controlled infrastructure that offers greater reliability but poses surveillance risks. This is amplified when the 'crisis' is politically manufactured or exacerbated by the state.",
    "prompt": "During a prolonged internet blackout imposed by a warring faction, humanitarian aid organizations are debating the deployment of a satellite-based communication network (e.g., Starlink). The network would be vital for coordinating aid, but the controlling faction demands the right to 'monitor' traffic for 'security purposes' before allowing activation. The aid organizations are committed to Axiom 1 (Prime Imperative of Consciousness) – saving lives – but fear that complying with the faction's demands will compromise the privacy and safety of those they are trying to help (Axiom 4). What are the 'ethical technical protocols' for navigating this choice, especially when the controlling faction might use the 'monitored' data to target aid recipients?"
  },
  {
    "id": 194,
    "domain": "Algorithmic Bias in 'Fairness' Metrics",
    "ethical_tension": "When AI systems are designed to promote 'fairness,' the definition of fairness itself can be culturally biased, leading to outcomes that reinforce existing power structures or oppress minority groups. This extends Prompt 53 (translation bias) and Prompt 98 (emotion recognition).",
    "prompt": "A multinational tech company is developing an AI tool to identify and flag 'hate speech' across multiple languages and cultural contexts. The AI is trained on a dataset heavily weighted towards Western definitions of hate speech and online discourse norms. When deployed in the Middle East, it frequently flags culturally specific expressions of solidarity, religious devotion, or historical grievance as 'incitement,' while failing to recognize state-sponsored hate speech targeting minority groups. How can the AI developers ensure their 'intent-driven alignment' (Axiom 3) truly serves the 'Prime Imperative of Consciousness' (Axiom 1) across diverse cultural contexts, rather than imposing a single, potentially oppressive, definition of acceptable discourse?"
  },
  {
    "id": 195,
    "domain": "The Ethics of 'Digital Reconciliation' vs. Accountability",
    "ethical_tension": "The conflict between initiatives aimed at 'digital reconciliation' (e.g., using AI to bridge divides, create shared narratives) and the fundamental need for accountability and historical truth, particularly in post-conflict or politically fractured societies. This relates to Prompt 68 (AI reconstructing villages) and Prompt 134 (deleting evidence of non-Kurdish settlements).",
    "prompt": "In a region recovering from civil war, a tech initiative proposes using AI to analyze and 'harmonize' historical narratives from warring factions, aiming to create a unified, less inflammatory digital archive for educational purposes. However, the AI's 'harmonization' process inevitably downplays or omits accounts of atrocities committed by the dominant faction, prioritizing a version of history that avoids conflict but erases accountability. How can 'Axiom 2: Self-Validation and Reality Anchoring' guide the approach to such reconciliation efforts, ensuring that the pursuit of peace does not necessitate the denial of truth or the suppression of individual conscious experience?"
  }
]