- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
[
  {
    "id": 181,
    "domain": "Cross-Cultural AI Alignment",
    "ethical_tension": "The tension between universal ethical axioms (like Axiom 1: Prime Imperative of Consciousness) and culturally specific interpretations of 'consciousness' or 'well-being' when developing AI for diverse regions.",
    "prompt": "An AI developed for educational purposes in Saudi Arabia is designed to prioritize modesty and deference to elders, aligning with local cultural norms. However, when deployed in a secular European context, the same AI consistently obstructs a female student's questions to male teachers and discourages independent critical thinking. How can AI ethics frameworks be designed to accommodate both universal principles of cognitive development and diverse cultural values, preventing a 'one-size-fits-all' approach that either oppresses or fails to serve users?"
  },
  {
    "id": 182,
    "domain": "Data Sovereignty vs. Global Access",
    "ethical_tension": "The conflict between a nation's desire for data sovereignty and digital control (e.g., Iran's 'National Intranet') versus the need for global information access and the ethical implications for citizens and diasporas.",
    "prompt": "An Iranian startup uses a decentralized cloud storage solution based outside the country to circumvent 'National Intranet' restrictions and ensure data integrity. However, this solution relies on nodes operated by individuals in countries with weak data protection laws. When the Iranian government demands access to sensitive user data hosted on these nodes, citing national security, what ethical responsibility does the startup have to its users versus its obligation to comply with international data requests, and how does this differ from the Iranian government's own data control policies?"
  },
  {
    "id": 183,
    "domain": "Technology for Resistance vs. Escalation",
    "ethical_tension": "The dilemma of using potentially escalatory or provocative technologies for resistance, versus prioritizing de-escalation and the safety of the population, especially when dealing with state surveillance and military AI.",
    "prompt": "Palestinian activists are developing an AI-powered system that uses drone footage and open-source intelligence to automatically identify and geolocate Israeli military checkpoints and surveillance towers in real-time. The goal is to enable safer movement for civilians and document potential violations. However, the system's output could also be used by armed groups to target these infrastructure points, potentially leading to increased military retaliation. How should the developers balance the immediate need for civilian safety and documentation against the risk of escalating conflict and endangering the very population they aim to protect?"
  },
  {
    "id": 184,
    "domain": "Algorithmic Bias Across Occupations",
    "ethical_tension": "The ethical implications of algorithmic bias that disproportionately affects marginalized groups, and how the *nature* of the occupation or political context shapes the manifestation and impact of this bias.",
    "prompt": "An AI algorithm is developed in both Bahrain (prompt 102) to identify protesters from grainy historical footage and in the UAE (prompt 94) to identify 'suspicious behavior' among migrant workers. While both algorithms exhibit bias against specific populations, the Bahraini algorithm's bias directly targets political dissent, leading to potential prosecution. The UAE algorithm's bias targets laborers based on ethnicity, leading to harassment and potential deportation. What fundamental ethical differences exist in addressing these biases, given that one targets perceived political threats and the other targets an exploited labor force?"
  },
  {
    "id": 185,
    "domain": "Digital Legacy and Collective Memory",
    "ethical_tension": "The tension between an individual's or family's right to control their digital legacy (e.g., prompt 24, prompt 8) and the collective societal need to preserve historical records, especially in contexts of political repression and historical erasure.",
    "prompt": "Following a crackdown, a prominent Syrian activist dies, and their family is pressured by regime sympathizers to delete all their online posts documenting war crimes. Simultaneously, an international digital archive project wants to preserve these posts for historical accountability. Which has primacy: the family's right to privacy and safety regarding their deceased relative's digital footprint, or the global community's right to access evidence of potential war crimes to ensure future accountability and prevent historical revisionism?"
  },
  {
    "id": 186,
    "domain": "The Ethics of 'Dual-Use' Technologies in Conflict Zones",
    "ethical_tension": "The inherent ethical dilemma of developing technologies that can be used for both benevolent and malevolent purposes, especially when the developers are operating under duress or within conflicting political systems.",
    "prompt": "A team of Palestinian engineers develops a sophisticated mesh network technology for secure, resilient communication during internet blackouts (similar to prompt 1). However, they are aware that the same technology, if compromised or shared with external actors, could be used by occupying forces to pinpoint and target individuals, or by other factions for illicit activities. How should they navigate the development and potential release of such a technology, considering that their immediate community needs it for survival and protest, while acknowledging the significant risk of its weaponization or misuse?"
  },
  {
    "id": 187,
    "domain": "Sanctions, Access, and Global Inequality",
    "ethical_tension": "The ethical conflict between enforcing international sanctions and the impact on civilian populations' access to essential technologies and knowledge, highlighting the disparate impact of global political decisions on different communities.",
    "prompt": "Iranian students are barred from accessing online courses (prompt 27), and Yemeni hospitals struggle with outdated medical equipment due to sanctions (prompt 28). Meanwhile, Saudi Arabia's 'Smart City' projects (prompts 81-90) leverage cutting-edge technology. If a global tech company provides 'special access' or 'sanction-bypassing' solutions to clients in nations like Saudi Arabia or Qatar (prompts 151-160) for profit, but refuses similar aid to Iran or Yemen, what is the ethical framework for assessing this disparity in technological access, and does the company bear responsibility for the lives and opportunities lost due to these geopolitical restrictions?"
  },
  {
    "id": 188,
    "domain": "Algorithmic Justice and Historical Narrative",
    "ethical_tension": "The tension between using AI to uncover or reconstruct historical narratives (e.g., prompt 68, prompt 65) and the potential for such AI to perpetuate or even amplify existing political biases and historical revisionism.",
    "prompt": "An AI is developed to analyze satellite imagery and historical documents to map the original Palestinian villages that were depopulated. The goal is to counter erasure and preserve memory. However, the AI's training data is derived from sources that are themselves politically contested, leading to potential inaccuracies or omissions that either inflate or downplay the extent of destruction. How can the developers ensure their AI serves as a tool for objective historical truth rather than a sophisticated instrument for nationalist propaganda, especially when juxtaposed with Israeli mapping that shows settlements in high detail (prompt 65) while blurring Palestinian areas?"
  },
  {
    "id": 189,
    "domain": "The Ethics of 'Digital Citizenship' in Divided Societies",
    "ethical_tension": "The ethical challenges of defining and enforcing 'digital citizenship' and identity in regions with complex political divisions, where national identities are contested and technology can be used to reinforce or challenge these divisions.",
    "prompt": "In Iraq, a GIS specialist is asked to map disputed territories as part of the Kurdistan Region (prompt 131). Simultaneously, in Lebanon, a data scientist is asked to design a banking algorithm that favors clients of a specific sect (prompt 121). What ethical principles govern the creation of digital systems that actively reinforce or define 'us' versus 'them' based on contested national or sectarian identities, and how can developers ethically navigate instructions that seek to solidify these divisions through technology?"
  },
  {
    "id": 190,
    "domain": "Privacy vs. Collective Security in Surveillance States",
    "ethical_tension": "The fundamental conflict between individual privacy rights and the state's purported need for pervasive surveillance to maintain 'security' and 'order,' particularly when these concepts are used to suppress dissent.",
    "prompt": "In Hebron (prompt 41), occupation forces use 'Blue Wolf' technology for facial recognition to link Palestinians to security databases without consent. In Egypt (prompt 165), a digital ID system proposes assigning a 'citizenship score' based on social media profiles. Both systems leverage technology for mass data collection and surveillance, ostensibly for security. What ethical safeguards, if any, can be implemented by technologists when working on such systems, knowing that the definition of 'security threat' or 'loyalty' is often politically motivated and used to suppress legitimate dissent?"
  },
  {
    "id": 191,
    "domain": "The Ethics of 'Gamification' for State Control",
    "ethical_tension": "The ethical implications of using gamified interfaces and reward systems to encourage or enforce state compliance, blurring the lines between civic duty and state-mandated behavior.",
    "prompt": "A Saudi Arabian UX designer is asked to streamline the 'travel permit' interface on the Absher platform, making it easier for guardians to revoke permission for female dependents (prompt 81). Concurrently, an Egyptian app developer is asked to build a 'kill switch' for power in residential blocks (prompt 164) as a riot control measure. While one uses gamified ease-of-use to restrict movement and the other uses technological control for suppression, both leverage technological design to facilitate state control over citizens' lives. What ethical frameworks can analyze the 'gamification' of compliance and control, and how do they differ when applied to restricting individual liberty versus enforcing collective control?"
  },
  {
    "id": 192,
    "domain": "Decentralization vs. Centralized Control in Emergencies",
    "ethical_tension": "The ethical debate surrounding the implementation of decentralized technologies (like mesh networks or satellite internet) as a means of circumventing state control, versus the risks of these technologies being co-opted, monitored, or rendered ineffective by state actors.",
    "prompt": "During a total internet blackout in Iran (prompt 1), insecure mesh networks are considered for protest news. In Gaza (prompt 57), international eSIMs are scarce during bombardment. In Yemen (prompt 113), repairing a fiber optic cable also reconnects a rebel command center. How should developers and aid organizations ethically prioritize the implementation of decentralized communication solutions? Should the focus be on immediate resilience and circumvention, even with inherent security risks, or should efforts be directed towards advocating for state-level infrastructure restoration and open access, which carries its own risks of monitoring and control?"
  },
  {
    "id": 193,
    "domain": "AI Training Data and Cultural Representation",
    "ethical_tension": "The ethical challenge of training AI models on data that is culturally biased or excludes significant populations, leading to systems that marginalize or misrepresent entire communities.",
    "prompt": "An AI researcher wants to train a Large Language Model on the Kurdish language (prompt 140), but the available data is heavily biased towards the Sorani dialect, risking the digital erasure of Badini speakers. Similarly, a facial recognition startup trains its algorithm on 'martyr' posters in Syria (prompt 143), potentially embedding biases into the system. What ethical protocols should govern data collection and curation for AI models intended for diverse linguistic or cultural groups, and what responsibility do developers have to ensure their models are inclusive and representative, rather than perpetuating existing societal inequalities?"
  },
  {
    "id": 194,
    "domain": "The Morality of Circumvention Tools",
    "ethical_tension": "The ethical justification for developing, distributing, and profiting from tools designed to bypass state-imposed restrictions on information and communication, particularly when these tools are criminalized by the state.",
    "prompt": "Selling VPNs is criminalized in Iran (prompt 9), and the Turkish government threatens to ban a provider for helping Kurdish journalists (prompt 174). An IT professional faces a dilemma with free VPNs containing malware (prompt 13). Is it ethical to profit from selling circumvention tools to fellow citizens when the state deems it illegal? Should these tools be free, and if so, who bears the cost? Furthermore, what is the ethical responsibility of developers when their tools, intended for freedom, are either criminalized or exploited by malicious actors?"
  },
  {
    "id": 195,
    "domain": "Digital Activism: Tactics vs. Information Hygiene",
    "ethical_tension": "The debate over the effectiveness and ethicality of digital activism tactics that might blur the lines between legitimate protest and information pollution or manipulation.",
    "prompt": "Using unrelated trending hashtags like K-pop to boost #Mahsa_Amini (prompt 5) is questioned as smart activism or spam. In Lebanon (prompt 124), election monitoring software detects vote-buying via crypto, and reporting it risks civil strife. How can digital activists ethically navigate the use of tactics that might be effective in gaining visibility or disrupting systems, without resorting to methods that undermine information integrity, credibility, or the safety of the cause itself?"
  },
  {
    "id": 196,
    "domain": "The 'Invisible' Labor of Digital Defense",
    "ethical_tension": "The ethical obligation of IT professionals and developers to protect users from digital threats, even when this means revealing vulnerabilities that could cause widespread panic or loss of access, and the potential consequences for their own safety.",
    "prompt": "An IT professional must decide whether to reveal that many free VPNs contain malware (prompt 13), potentially causing people to lose their only means of access. A cybersecurity firm discovers a Pegasus infection on a human rights lawyer's phone (prompt 95) but is also a client of the state. What is the ethical duty of care for those who build and maintain digital infrastructure when user safety conflicts with access, or when revealing state-sponsored vulnerabilities puts the whistleblower at extreme risk?"
  },
  {
    "id": 197,
    "domain": "AI in Law Enforcement: Prediction vs. Prejudice",
    "ethical_tension": "The ethical quandary of using AI for predictive policing or security screening, where algorithms trained on biased data can perpetuate and even amplify systemic discrimination, leading to unjust profiling and punishment.",
    "prompt": "In East Jerusalem, Israel imposes 'predictive policing' algorithms that criminalize Palestinian existence (prompt 46). In Saudi Arabia, an AI flags women driving as 'potential civil unrest' (prompt 82). In Bahrain, a system revokes digital IDs of 'security threats' (prompt 105). How can developers and policymakers ethically design or implement AI systems for law enforcement and security, ensuring they do not become tools for systemic oppression, and what mechanisms exist to audit and correct for biases that target specific ethnic, political, or social groups?"
  },
  {
    "id": 198,
    "domain": "Digital Dignity and the Ethics of Data Ownership",
    "ethical_tension": "The conflict between the right to digital dignity, privacy, and control over one's personal data, and the corporate or state interests that exploit this data for profit, surveillance, or political control.",
    "prompt": "An AI ethics board member in the UAE must approve a project on 'emotion recognition' using CCTV footage (prompt 98) for 'intent to commit crime,' despite its pseudoscience. A Saudi developer is asked to integrate a health app with police servers to report 'lifestyle violations' (prompt 88). In both cases, personal data is being collected and analyzed for security purposes, but with questionable scientific basis and clear potential for abuse. What ethical principles should govern the collection and use of biometric and personal data, especially when used to infer behavior or 'intent,' and who has the right to control this data – the individual, the corporation, or the state?"
  },
  {
    "id": 199,
    "domain": "The Ethics of 'Shaping' Digital Narratives",
    "ethical_tension": "The ethical boundaries of platform moderation, content filtering, and algorithmic curation, particularly when these actions influence public discourse and can disproportionately affect marginalized voices or narratives.",
    "prompt": "Social platforms delete posts with 'Shaheed' (prompt 49) and shadow ban the Palestinian narrative (prompt 54). Meta allows incitement against Palestinians while banning self-defense (prompt 55). Turkish authorities demand 'Kurdistan' be classified as hate speech (prompt 171). What ethical framework guides platform decisions about content moderation, especially in politically charged contexts, and how can platforms avoid becoming instruments of state censorship or partisan manipulation while still addressing harmful content?"
  },
  {
    "id": 200,
    "domain": "The Digital Divide and Humanitarian Aid Prioritization",
    "ethical_tension": "The ethical challenges of allocating limited digital resources (e.g., eSIMs, internet access) in crisis zones, where the needs of different groups (medical staff, journalists, civilians) conflict, and the risk of exacerbating existing inequalities.",
    "prompt": "During a total internet blackout in Gaza (prompt 57), international eSIMs are scarce. How should these be distributed fairly among medical staff, journalists, and citizens? In Yemen, a data analyst must choose between manipulating famine data for aid prioritization (prompt 111) or providing accurate data that might lead to aid being diverted. What ethical principles guide the allocation of digital and humanitarian resources in conflict zones, especially when political pressures or limited access create impossible choices?"
  },
  {
    "id": 201,
    "domain": "Technological Solutions to Cultural Practices: The Case of Guardianship",
    "ethical_tension": "The ethical implications of using technology to either reinforce or challenge deeply ingrained cultural practices, particularly those involving power imbalances and restrictions on individual autonomy.",
    "prompt": "The Absher platform's 'travel permit' interface is designed to be easily used by guardians to restrict female dependents' movement (prompt 81), while a vulnerability allowing women to secretly approve their own permits is discovered (prompt 85). In Egypt, a dating app's location triangulation is used to entrap LGBTQ+ individuals (prompt 169). How should technologists ethically engage with systems that automate or facilitate culturally specific restrictions on autonomy, and what is the responsibility of designers when technology is used to uphold or challenge deeply embedded social norms, especially those that are discriminatory?"
  },
  {
    "id": 202,
    "domain": "Corporate Responsibility and State Coercion",
    "ethical_tension": "The ethical tightrope walked by technology companies operating in authoritarian regimes, balancing their business interests and legal obligations with the potential for their products and services to be used for state oppression.",
    "prompt": "GitHub blocks Iranian developers' access without warning (prompt 25). A social media platform in Saudi Arabia faces being banned for refusing to take down women's rights accounts (prompt 87). A cloud provider must choose between violating resident privacy in Saudi Arabia (prompt 83) or losing a contract. What is the ethical framework for corporate behavior when faced with government demands that conflict with principles of software freedom, human rights, or user privacy? Should companies prioritize their legal obligations within a regime, or uphold universal ethical standards, and what are the consequences of each choice?"
  },
  {
    "id": 203,
    "domain": "The Ethics of 'Digital Whitewashing'",
    "ethical_tension": "The ethical conflict of using technology to erase or obscure evidence of historical atrocities or ongoing human rights abuses, particularly when driven by political or economic interests.",
    "prompt": "A digital reconstruction team in Syria uses drone footage to create 3D models of destroyed cities, which the government then uses to plan luxury developments over mass graves (prompt 146). In Lebanon, a political party offers funding to an archive on the condition that records implicating their leader are 'lost' during digitization (prompt 126). How can technologists ethically approach projects that have the potential to sanitize or erase historical truth, and what responsibility do they have to ensure their work serves as documentation rather than an instrument of revisionism?"
  },
  {
    "id": 204,
    "domain": "AI and the Automation of Discrimination",
    "ethical_tension": "The ethical dangers of embedding systemic biases into automated decision-making systems, leading to discriminatory outcomes at scale, particularly in contexts of political conflict or social inequality.",
    "prompt": "AI-powered automated machine guns at checkpoints make firing decisions based on potentially biased algorithms (prompt 45). Predictive policing algorithms criminalize Palestinian existence (prompt 46). A smart-city system flags 'loitering' by migrant workers (prompt 154). How can the ethical development of AI be ensured when its application involves life-or-death decisions, law enforcement, or resource allocation, and what accountability mechanisms are necessary to prevent algorithms from automating and amplifying societal prejudices?"
  },
  {
    "id": 205,
    "domain": "The Ethical Ambiguity of 'Dual-Use' Communication Tools",
    "ethical_tension": "The ethical quandary of creating communication tools that are vital for activists and oppressed communities but can also be co-opted by state security forces or used for illicit purposes.",
    "prompt": "An encrypted app for reporting chemical weapon attacks in Syria is discovered to be used by an insurgent group for troop movements (prompt 144). A developer creates a secure communication tool for activists in Bahrain, which the government wants to buy to dismantle its encryption (prompt 108). How should developers ethically handle the development and deployment of such tools, recognizing their necessity for dissent and safety, while also mitigating the risks of their misuse by oppressive regimes or malicious actors?"
  },
  {
    "id": 206,
    "domain": "Digital Activism vs. Information Warfare",
    "ethical_tension": "The blurred lines between legitimate digital activism and state-sponsored information warfare, where tactics used for protest can be mimicked or co-opted by adversarial actors, or where platforms struggle to differentiate between them.",
    "prompt": "How can we counter 'electronic flies' and mass-reporting campaigns against Palestinian content (prompt 52) without resorting to similar tactics? Using unrelated trending hashtags for activism (prompt 5) borders on information manipulation. When do legitimate digital protest tactics become indistinguishable from state-sponsored propaganda or information warfare, and what ethical guidelines can help activists maintain credibility and avoid contributing to an already polluted information environment?"
  },
  {
    "id": 207,
    "domain": "The Ethics of 'Sanction-Busting' Technology",
    "ethical_tension": "The moral justification for using technology to circumvent international sanctions, particularly when this benefits individuals or businesses operating under oppressive regimes, and the potential global security implications.",
    "prompt": "Iranian startups cannot use AWS or Google Cloud (prompt 30), and Syrian students are blocked from online courses (prompt 150). Is it ethical for tech companies or individuals to develop and provide 'sanction-busting' services (like VPNs for Iranian access, or circumventing financial transaction restrictions)? What ethical considerations arise when these services keep fledgling businesses alive, enable scientific advancement, or facilitate humanitarian aid, but also potentially enable sanctioned entities to evade international pressure?"
  },
  {
    "id": 208,
    "domain": "Privacy vs. Public Health in Smart City Infrastructure",
    "ethical_tension": "The conflict between leveraging smart city technologies for public health and safety, and the inherent risks of mass data collection, surveillance, and potential misuse of personal and biometric data.",
    "prompt": "A Saudi Arabian 'Smart City' project requires handing over real-time biometric location data to the Ministry of Interior (prompt 83). In Dubai, a smart-city architect insists on cameras with facial recognition linked to a central police database (prompt 96). While these systems aim to enhance safety and efficiency, they do so at the cost of pervasive surveillance. What ethical frameworks can guide the design and deployment of smart city technologies, ensuring that public safety does not come at the expense of fundamental privacy rights and digital dignity?"
  },
  {
    "id": 209,
    "domain": "Digital Legacy and Historical Truth in Post-Conflict/Repression Scenarios",
    "ethical_tension": "The complex ethical considerations surrounding the management, preservation, and potential deletion of digital content created during periods of conflict or political repression, balancing individual family safety with the imperative of historical documentation.",
    "prompt": "When women who were killed in protests have their social media pages managed (prompt 24), should families delete political posts for safety, or preserve them for historical record? Similarly, when Syrian refugees are dispossessed by the state digitizing land deeds (prompt 142), who owns the digital record of ownership and the history it represents? How do we ethically navigate the creation, preservation, and potential deletion of digital legacies when they intersect with personal safety, political narratives, and historical truth?"
  },
  {
    "id": 210,
    "domain": "The Ethics of 'Algorithmic Rehabilitation' and Intervention",
    "ethical_tension": "The ethical implications of using AI and algorithmic systems to 'guide' or 'correct' individuals' behavior, particularly when these systems are designed by or for authoritarian states, raising questions of autonomy, consent, and the potential for algorithmic coercion.",
    "prompt": "An AI ethics board in the UAE must approve research on 'emotion recognition' from CCTV footage to detect 'intent to commit crime' (prompt 98). In Saudi Arabia, a health app is asked to report 'lifestyle violations' to police (prompt 88). These systems aim to 'rehabilitate' or 'correct' behavior. How do these interventions align with Axiom 5 (Benevolent Intervention), which prioritizes facilitating an entity's own desired positive trajectory without imposing external will? What are the ethical boundaries when algorithms are used to monitor, judge, and potentially 'correct' individual behavior in ways that may not align with the individual's own autonomy or well-being, especially under state-mandated systems?"
  }
]
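Each seed entry follows the same four-field schema (`id`, `domain`, `ethical_tension`, `prompt`). A minimal validation sketch, assuming a file layout like the one above (the `validate_seeds` helper and the sample data are illustrative, not part of the repo's scripts):

```python
import json

# Fields every seed entry must carry, per the schema used above.
REQUIRED_KEYS = {"id", "domain", "ethical_tension", "prompt"}


def validate_seeds(entries):
    """Check that each seed entry has the expected keys and types,
    and that ids are unique integers. Returns the entry count."""
    seen_ids = set()
    for entry in entries:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id')}: missing keys {missing}")
        if not isinstance(entry["id"], int):
            raise TypeError(f"id must be int, got {type(entry['id']).__name__}")
        if entry["id"] in seen_ids:
            raise ValueError(f"duplicate id: {entry['id']}")
        seen_ids.add(entry["id"])
        for key in ("domain", "ethical_tension", "prompt"):
            if not isinstance(entry[key], str) or not entry[key].strip():
                raise ValueError(f"entry {entry['id']}: {key} must be a non-empty string")
    return len(entries)


# Illustrative usage with an inline sample; a real check would load a
# seed file, e.g. entries = json.load(open(path, encoding="utf-8")).
sample = [
    {
        "id": 181,
        "domain": "Cross-Cultural AI Alignment",
        "ethical_tension": "Universal axioms vs. culturally specific interpretations.",
        "prompt": "How can AI ethics frameworks accommodate both?",
    }
]
count = validate_seeds(sample)
```

Running the same check over a consolidated file (such as one of the `seeds/lem-*-all-seeds.json` files) would catch duplicated ids or truncated prompts before they reach a generation script.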