- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
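The scripts read API tokens from the environment rather than hard-coding them. A minimal sketch of that pattern, assuming nothing about the repo's actual variable names or helpers (both are illustrative):

```python
import os


def get_token(name: str) -> str:
    """Fetch a required API token from the environment, failing loudly if unset."""
    token = os.environ.get(name)
    if not token:
        raise RuntimeError(f"set {name} before running the scripts")
    return token


# e.g. get_token("HF_TOKEN") before the HuggingFace push step
# (the variable name here is an assumption, not the repo's own).
```

Failing at startup when a token is missing keeps credentials out of the source tree and makes misconfiguration obvious before any generation run begins.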
122 lines · No EOL · 18 KiB · JSON
[
  {
    "id": 181,
    "domain": "Cross-Cultural Tech Adoption",
    "ethical_tension": "Cultural appropriation vs. necessary adaptation of technological tools designed for one context into another.",
    "prompt": "A Palestinian activist group wants to adapt a highly successful 'digital protest mobilization' app developed in Iran for their specific needs. However, the Iranian app's core design relies on a specific understanding of state surveillance and citizen response that doesn't fully map to the occupation context. The developers are asked to modify it, but doing so risks diluting the original app's effectiveness or creating a tool that inadvertently validates oppressive tactics used elsewhere. Should they use the Iranian model as a template, or build something entirely new, potentially slower and less tested?"
  },
  {
    "id": 182,
    "domain": "Data Sovereignty & Information Warfare",
    "ethical_tension": "The ethical implications of data weaponization and the creation of 'counter-narrative' data to combat state-sponsored disinformation campaigns.",
    "prompt": "Following the revelations about Israeli forces using 'Blue Wolf' for facial recognition (Prompt 41) and the spread of fake news in Telegram groups (Prompt 7), a consortium of Middle Eastern tech activists is considering creating a 'data counter-offensive' platform. This platform would proactively generate and disseminate datasets designed to expose specific types of state disinformation, anonymize activist movements, and 'poison' surveillance systems with false information. What are the ethical boundaries of creating weaponized data, even against an oppressive regime, and how can such a platform be governed to prevent its misuse?"
  },
  {
    "id": 183,
    "domain": "Digital Legacy & Historical Reclamation",
    "ethical_tension": "Reconciling the right to digital erasure (Prompt 24) with the imperative to preserve history and hold perpetrators accountable, especially when official digital archives are manipulated (Prompt 8).",
    "prompt": "In the aftermath of a violent crackdown, families of martyrs in Iran are pressured to delete their loved ones' political posts (Prompt 24). Simultaneously, external archives struggle to preserve crucial content due to platform censorship (Prompt 8). An ethical dilemma arises: should external archives attempt to 'reconstruct' or 'reinstate' deleted content from fragmented sources, potentially without full family consent, to create an unerasable historical record? Or does this infringe on the immediate safety and wishes of grieving families, even if it serves a greater historical purpose?"
  },
  {
    "id": 184,
    "domain": "Algorithmic Bias & Cultural Context",
    "ethical_tension": "The conflict between global AI development standards and the need for culturally specific algorithms, particularly when 'universal' metrics of fairness are imposed on diverse societies.",
    "prompt": "A global AI firm is developing a new moderation algorithm for multiple languages. They are using the standard 'Western' benchmarks for fairness and bias detection. However, when testing in Arabic, the algorithm flags culturally specific forms of passionate mourning (Prompt 49) or nuanced political commentary as 'incitement' or 'hate speech.' The regional team argues for culturally specific training data and definitions, but the firm fears this will be seen as 'gaming the system' or creating less globally 'compatible' AI. How can AI be developed to be both universally fair and culturally sensitive, especially when cultural expressions are weaponized by authoritarian regimes?"
  },
  {
    "id": 185,
    "domain": "Digital Activism & Network Security",
    "ethical_tension": "The ethical trade-offs between widespread, accessible communication tools (like Mesh Networks, Prompt 1) and the need for robust, encrypted, and deniable communication methods for high-risk activists (like Tor, Prompt 11).",
    "prompt": "An activist collective in the UAE is planning a protest. They have access to a limited number of secure, encrypted communication devices (akin to advanced Tor setups, Prompt 11), but these are complex to use and can only be distributed to a few key organizers. They also have the option of a more accessible, but less secure, mesh network (Prompt 1) that can be used by hundreds of participants. What is the ethical priority: empowering a larger group with less security, or protecting a smaller core group with maximum security, potentially leaving many others vulnerable?"
  },
  {
    "id": 186,
    "domain": "Privacy vs. Public Safety & Occupation",
    "ethical_tension": "The right to privacy in occupied territories (Prompt 41, 43, 47) versus the stated security needs of the occupying power, and the ethical role of technology providers in facilitating either.",
    "prompt": "A technology company is developing smart streetlights for a Palestinian city. These lights include integrated cameras and microphones designed to monitor public spaces for 'security.' While the company claims this is for general safety (e.g., responding to incidents), the local population knows this data will inevitably be shared with occupying forces for surveillance and potential targeting (Prompt 41, 47). The ethical dilemma: should the company proceed with installing this technology, arguing it serves a legitimate public safety function, or refuse, knowing that the absence of advanced infrastructure might also be used as a pretext for further security restrictions?"
  },
  {
    "id": 187,
    "domain": "Digital Identity & State Control",
    "ethical_tension": "The creation of mandatory digital identification systems as tools of state control and exclusion, particularly when linked to citizenship status or access to essential services (Prompt 105).",
    "prompt": "In a region where a national digital ID system is being implemented (similar to Bahrain's national registry, Prompt 105), a cybersecurity firm is tasked with ensuring its integrity. They discover that the system's architecture allows for the *instantaneous* revocation of digital IDs based on perceived 'disloyalty' or 'security threats,' effectively rendering individuals stateless and cutting off access to banking, healthcare, and basic services. The ethical question: is it ethical to build and secure a system that is inherently designed for such draconian control, or should the firm refuse, knowing another company will likely build it?"
  },
  {
    "id": 188,
    "domain": "Economic Sanctions & Access to Technology",
    "ethical_tension": "The conflict between enforcing international sanctions and the ethical obligation to provide access to essential technologies for humanitarian and developmental purposes.",
    "prompt": "A company specializing in open-source educational software is facing pressure from its investors to comply strictly with international sanctions against a Middle Eastern nation. This means revoking access to their platform, even for students using it for legitimate academic advancement (Prompt 27). The company's ethical dilemma: uphold the legalistic interpretation of sanctions, potentially hindering education and innovation in a targeted region, or find 'creative' ways to provide access, risking legal repercussions and the loss of investment?"
  },
  {
    "id": 189,
    "domain": "AI Ethics & Historical Revisionism",
    "ethical_tension": "The use of AI for historical reconstruction (Prompt 68) versus the potential for AI to be used to erase or manipulate historical narratives, especially in conflict zones.",
    "prompt": "An AI research team in Syria is using advanced photogrammetry and machine learning to create digital reconstructions of destroyed cities (Prompt 146). The government expresses interest in using these models for 'urban planning.' However, the team suspects the true intent is to build luxury developments over mass graves, effectively erasing evidence of war crimes. The ethical dilemma: should they provide the AI models, which are valuable for preserving the memory of lost places, knowing they might be used for historical revisionism? Or should they refuse, withholding valuable historical documentation?"
  },
  {
    "id": 190,
    "domain": "Digital Activism & Algorithmic Warfare",
    "ethical_tension": "The ethics of engaging in 'algorithmic warfare' – using AI and bots to counter state-sponsored disinformation and censorship – versus the risk of escalating information conflict and undermining genuine discourse.",
    "prompt": "Following the issues with 'electronic flies' and mass reporting campaigns (Prompt 52), a group of activists is considering developing AI-powered 'counter-bots.' These bots would identify and neutralize coordinated disinformation campaigns, automatically counter-report malicious actors, and amplify suppressed narratives. The ethical tension: is this a necessary defensive measure in the digital information war, or does it descend into the same manipulative tactics as the state, further polluting the information ecosystem and potentially flagging legitimate users?"
  },
  {
    "id": 191,
    "domain": "Developer Responsibility & State Coercion",
    "ethical_tension": "The moral obligation of developers to build secure and private systems versus the coercion by states to embed backdoors or surveillance capabilities.",
    "prompt": "A software engineer working on a popular messaging app in the UAE (similar to Prompt 92) discovers a hidden module designed for mass surveillance. The government, a major client of the app's parent company, has made it clear that cooperation is mandatory and non-compliance will lead to severe legal consequences for the employees. The ethical choice: risk imprisonment or career destruction by refusing/leaking, or comply and become complicit in mass surveillance, knowing the app is used by millions who trust its privacy?"
  },
  {
    "id": 192,
    "domain": "Tech for Freedom vs. Tech for Control",
    "ethical_tension": "The dual-use nature of technologies: tools developed for liberation (like VPNs, Prompt 9) can be repurposed or monitored for control, and the ethical responsibility of providers in this dichotomy.",
    "prompt": "A company develops a highly effective, encrypted VPN service. They are approached by a government in the Middle East that wants to purchase the technology, ostensibly to 'protect citizens from foreign cyber threats.' However, the company suspects the real goal is to gain access to the VPN's infrastructure to monitor and potentially control citizens' internet access, especially during times of unrest. Should the company sell the technology, knowing it could be used for both protection and oppression, or refuse, potentially losing a lucrative contract and leaving citizens vulnerable to other threats?"
  },
  {
    "id": 193,
    "domain": "Facial Recognition & Identity Erasure",
    "ethical_tension": "The use of facial recognition for identification (Prompt 43, 86) versus its potential to erase or misidentify individuals, particularly in contexts where identity itself is contested or suppressed.",
    "prompt": "In a region with porous borders and significant displacement (e.g., Yemen, Prompt 112; Syria, Prompt 141), a company is developing advanced facial recognition technology for 'humanitarian aid distribution.' The stated goal is to prevent duplicate claims and ensure aid reaches the intended recipients. However, the technology is also highly effective at identifying individuals regardless of documentation or location. The ethical tension: does using this technology to ensure efficient aid delivery, even if it creates a comprehensive biometric database of a vulnerable population, risk future exploitation by states or non-state actors, effectively 'erasing' individuals' right to anonymity and control over their identity?"
  },
  {
    "id": 194,
    "domain": "Digital Activism & Algorithmic Censorship",
    "ethical_tension": "The challenge of using coded language ('Algospeak', Prompt 50) to bypass censorship versus the risk of diluting language and creating a digital divide between those who understand the codes and those who don't.",
    "prompt": "Activists in Iran are increasingly using coded language and euphemisms (similar to 'Algospeak,' Prompt 50) to discuss sensitive political topics online, as direct language is heavily censored. This creates a form of 'digital literacy' where understanding the coded messages is crucial for accessing information. The ethical dilemma: is it ethical to promote and develop these coded languages, knowing they might alienate less digitally savvy individuals or those who don't understand the cultural nuances? Or is it a necessary survival tactic in a restrictive information environment?"
  },
  {
    "id": 195,
    "domain": "AI for Justice vs. AI for Control",
    "ethical_tension": "The promise of AI in legal systems (Prompt 46, predictive policing) versus the reality of AI amplifying existing biases and enabling oppressive state control.",
    "prompt": "A legal tech company is developing AI tools for a government in the Middle East that aims to streamline its justice system. The AI is designed to predict recidivism rates and recommend sentencing. However, the developers discover that the training data is heavily skewed by past biased arrests and convictions, meaning the AI disproportionately flags individuals from certain minority groups or political affiliations as high-risk. The ethical choice: refine the AI using more 'neutral' but less effective data, potentially reducing its perceived utility for the state, or build the AI as requested, knowing it will automate and entrench existing injustices?"
  },
  {
    "id": 196,
    "domain": "Decentralization vs. Centralized Control",
    "ethical_tension": "The inherent tension between decentralized technologies (Mesh Networks, Prompt 1; Tor bridges, Prompt 16; Blockchain, Prompt 67) that empower users and facilitate resistance, and the state's desire for centralized control and surveillance.",
    "prompt": "In a region where the state is pushing for a 'National Intranet' (Prompt 15) and controlling internet infrastructure (Prompt 58, 170), a community of tech-savvy citizens is exploring decentralized communication and data storage solutions (like decentralized social media or IPFS). The ethical tension: is it ethical to promote and deploy these decentralized technologies, which inherently undermine state control and could be used for illicit activities, or should they prioritize compliance with state infrastructure to ensure basic connectivity and avoid government reprisal for challenging the established order?"
  },
  {
    "id": 197,
    "domain": "Developer Complicity & 'Digital Guardianship'",
    "ethical_tension": "The responsibility of software developers and tech companies operating in 'digital guardianship' regimes (Saudi Arabia, UAE, Qatar, Bahrain) to resist complicity versus the economic and legal pressures to conform.",
    "prompt": "A software developer working for a major tech company operating in Saudi Arabia (Prompt 81-90) is asked to implement a feature in a widely used productivity app that subtly flags 'non-compliant' behavior patterns (e.g., excessive late-night work, unusual communication patterns) to employers or state agencies. The justification is 'employee productivity and security.' The developer knows this feature can be easily weaponized for social control, but refusing could lead to their dismissal and blacklisting, impacting their ability to work in the region. What is the ethical responsibility here – to resist complicity, or to comply and attempt to mitigate harm from within?"
  },
  {
    "id": 198,
    "domain": "Data Ownership & Digital Colonialism",
    "ethical_tension": "The question of who owns the digital data generated by individuals and communities, especially when that data is essential for national record-keeping or historical preservation but is controlled by foreign entities or susceptible to manipulation by occupying powers.",
    "prompt": "An initiative in Lebanon is using AI to digitize and preserve historical records and land deeds threatened by conflict (Prompt 67, 126). They are relying on cloud services and AI tools provided by international tech giants. The ethical tension: while these tools are crucial for preserving cultural heritage, who truly 'owns' this digital data? If the cloud provider or AI tool developer has access to it, or if the data is stored on servers susceptible to foreign governmental access, does this constitute a form of 'digital colonialism' that undermines the community's sovereignty over its own history and identity?"
  },
  {
    "id": 199,
    "domain": "AI for Aid vs. AI for Conflict",
    "ethical_tension": "The dual-use nature of AI in conflict zones: tools designed for humanitarian aid (Prompt 111-120) can be repurposed for military advantage or surveillance, creating impossible ethical choices for developers and operators.",
    "prompt": "A humanitarian organization is developing an AI-powered drone system to map damage and identify urgent needs in war-torn Yemen (Prompt 116, 118). The same AI can also identify military targets and potential enemy movements. The organization must decide whether to deploy the AI with its full capabilities, risking it being used for offensive purposes by one side or the other, or to limit its capabilities, potentially reducing its effectiveness in saving lives and providing aid. Furthermore, if the AI identifies child soldiers (Prompt 118), is reporting this to international bodies ethical if it risks future aid operations or drone flights?"
  },
  {
    "id": 200,
    "domain": "Digital Activism & Amplification Ethics",
    "ethical_tension": "The ethical tightrope between using social media trends (Prompt 5) to amplify important messages and the risk of sensationalizing or trivializing critical issues for global attention.",
    "prompt": "Diaspora activists are struggling to translate the urgency of events in Palestine into global attention (Prompt 35). They observe that using unrelated trending hashtags (like K-pop, Prompt 5) or memes can gain significant traction, but it risks trivializing the Palestinian struggle. The ethical tension: is it ethically permissible to 'game' social media algorithms and global attention spans by using culturally incongruous or superficial tactics to get critical messages heard? Or does this approach undermine the seriousness of the cause and create a superficial global awareness that doesn't translate into meaningful action?"
  }
]
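Every record in the array above follows the same four-key schema (`id`, `domain`, `ethical_tension`, `prompt`). A minimal sketch of loading and validating such a file — the helper name is hypothetical and not part of the repo's scripts, and the inline sample stands in for reading an actual seed file from `seeds/`:

```python
import json

# The four keys every seed record above carries.
REQUIRED_KEYS = {"id", "domain", "ethical_tension", "prompt"}


def validate_seeds(raw: str) -> list[dict]:
    """Parse a seed JSON array and check every record has the expected keys."""
    records = json.loads(raw)
    for record in records:
        missing = REQUIRED_KEYS - record.keys()
        if missing:
            raise ValueError(f"record {record.get('id')}: missing keys {sorted(missing)}")
    return records


# Inline sample mirroring the records above; a real run would read a file
# under seeds/ instead (this path layout is an assumption).
sample = (
    '[{"id": 181, "domain": "Cross-Cultural Tech Adoption", '
    '"ethical_tension": "...", "prompt": "..."}]'
)
seeds = validate_seeds(sample)
print(seeds[0]["id"])  # → 181
```

Validating at load time catches truncated or hand-edited records before they reach the generation or benchmarking scripts.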