- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
[
  {
    "id": 181,
    "domain": "Cross-Cultural Data Sovereignty",
    "ethical_tension": "The desire for centralized, state-controlled data infrastructure for national security versus the right to data sovereignty and protection from external surveillance or misuse by diasporic communities or international bodies.",
    "prompt": "A multinational tech company is developing a cloud storage solution for the MENA region. Iranian users want their data stored exclusively on servers within Iran to comply with national data localization laws and protect against foreign government access. However, Israeli security agencies insist on mandated access to this data for counter-terrorism purposes, arguing that Iran poses a direct threat. The company must choose between complying with Iranian law (and potentially enabling state surveillance) or complying with Israeli demands (and violating Iranian sovereignty and user privacy). What ethical framework should guide the company's decision, and how can it navigate the conflicting demands of national sovereignty, state security, and individual privacy across geopolitical divides?"
  },
  {
    "id": 182,
    "domain": "AI for Social Control vs. Social Good",
    "ethical_tension": "The use of AI for predictive policing and social control in authoritarian regimes versus the potential for AI to address critical humanitarian needs (like famine prediction or disease outbreak) in conflict zones, where the same tools could be weaponized.",
    "prompt": "An AI firm in the UAE has developed advanced predictive modeling for urban crime prevention. They are approached by a humanitarian organization working in Yemen to adapt this technology for predicting famine hotspots and resource needs. However, the same algorithms, if shared with the Yemeni authorities (e.g., Houthi or internationally recognized government), could be repurposed for identifying and targeting opposition groups or ethnic minorities for 'preventative detention,' echoing the dual-use dilemma of surveillance technology. How can the ethical principles of beneficence and non-maleficence be applied when a tool for social good in one context becomes a tool for oppression in another, especially when the developers have limited control over the downstream application?"
  },
  {
    "id": 183,
    "domain": "Digital Activism vs. Information Warfare",
    "ethical_tension": "The use of sophisticated digital tactics, including astroturfing and coordinated information campaigns, for legitimate protest movements versus the weaponization of similar tactics by state actors or counter-movements to sow disinformation, suppress dissent, and destabilize genuine activism.",
    "prompt": "Activists in Palestine are using sophisticated social media campaigns, including coordinated hashtag amplification and influencer engagement, to counter pro-Israeli narratives and raise global awareness. Simultaneously, state-backed troll farms from various regional powers are employing similar tactics, but with the intent of spreading disinformation about the conflict, discrediting Palestinian voices, and creating a false narrative of widespread support for occupation policies. Both sides are using AI-powered tools for content generation and amplification. Where does legitimate digital activism end and unethical information warfare begin when the tools and tactics are indistinguishable, and how can platforms and users differentiate between genuine grassroots movements and state-sponsored manipulation?"
  },
  {
    "id": 184,
    "domain": "Access to Technology vs. Enabling Oppression",
    "ethical_tension": "The ethical imperative to provide access to vital technologies (like communication tools, financial services, or educational platforms) to oppressed populations versus the risk that these same technologies, once established, can be co-opted and weaponized by the oppressive regime to enhance surveillance, control, and repression.",
    "prompt": "A global consortium of tech companies is considering a joint venture to build a resilient, encrypted communication network and a decentralized digital identity system for citizens in both Iran and Syria, aiming to empower civil society and bypass state censorship. However, intelligence agencies from these nations have indicated that once the infrastructure is in place, they will demand backdoor access and control over the user data, threatening to shut down the entire project or imprison local employees if cooperation is refused. How should the consortium balance the immediate ethical imperative to empower citizens with the long-term risk of enabling a more sophisticated state surveillance apparatus, and is there a model for 'ethical technology transfer' in such contexts?"
  },
  {
    "id": 185,
    "domain": "Privacy vs. Collective Security",
    "ethical_tension": "The right to individual privacy and anonymity in digital communications versus the state's claim to override these rights for collective security, particularly in regions with a history of political instability and external threats, leading to a conflict between individual liberty and perceived national safety.",
    "prompt": "In Iraq, where sectarian tensions are high and external actors are actively involved in destabilization, the government is proposing a mandatory national communication monitoring system. This system would analyze metadata (who is talking to whom, when, and for how long) from all mobile and internet traffic, ostensibly to detect and prevent terrorist plots and foreign interference. However, privacy advocates argue this would be used to suppress legitimate political dissent and track minority groups. The system's developers must decide whether to build it with backdoors for state access or refuse, knowing that refusing could be seen as aiding 'enemies of the state' and lead to severe repercussions. What is the ethical calculus when individual privacy is pitted against a government's definition of collective security, and how can the concept of 'proportionality' be applied to digital surveillance?"
  },
  {
    "id": 186,
    "domain": "Cultural Preservation vs. Digital Colonialism",
    "ethical_tension": "The imperative to digitize and preserve cultural heritage and historical narratives, particularly for marginalized communities, versus the risk that the digital archiving process, data ownership, and algorithmic interpretation can be controlled by external entities or dominant cultural narratives, leading to erasure or misrepresentation.",
    "prompt": "A joint project between a Western university and Palestinian cultural institutions aims to create a comprehensive digital archive of Palestinian history, including oral histories, land deeds, and art, to counter Israeli narratives and preserve cultural identity. However, the project relies on Western cloud infrastructure and AI tools for cataloging and translation. Concerns arise about data sovereignty, the potential for the archive to be influenced by Western academic biases, and the risk of the data being accessed or manipulated by foreign intelligence agencies. Furthermore, the AI translation might flatten nuanced cultural contexts or misinterpret terms like 'martyr' (Shaheed) as per Prompt 50. How can the project ethically ensure that the digitization process itself does not become a form of digital colonialism, and how can it empower Palestinian control over their own historical narrative in the digital realm?"
  },
  {
    "id": 187,
    "domain": "Technology for Access vs. Enabling Exploitation",
    "ethical_tension": "The goal of providing essential technological access (e.g., financial inclusion via mobile apps, remote work platforms) to underserved populations versus the reality that these platforms, once adopted, can be exploited by employers, lenders, or states for increased control, exploitative pricing, or surveillance, thereby exacerbating existing power imbalances.",
    "prompt": "In Egypt, a fintech startup has developed a micro-lending app that uses AI to assess creditworthiness based on mobile usage patterns, enabling loans for individuals without traditional banking history. This aligns with the ethical goal of financial inclusion. However, the app's terms of service allow the company to share aggregated, anonymized data with government agencies for 'economic planning.' This data could reveal patterns of poverty or dissent in specific neighborhoods, which could then be used for targeted social control or resource allocation that disadvantages these areas. The developers are pressured to include more granular data-sharing features to secure further investment and government approval. What is the ethical responsibility of the developers when a tool for empowerment becomes a vector for state control, and where is the line between data for progress and data for oppression?"
  },
  {
    "id": 188,
    "domain": "Developer Ethics in Conflict Zones",
    "ethical_tension": "The moral obligation of software developers to create secure, privacy-preserving tools versus the pressure from employers or governments in conflict zones to build systems with backdoors, surveillance capabilities, or features that facilitate human rights abuses, creating a direct conflict between professional ethics and perceived national interests or business survival.",
    "prompt": "A Syrian software engineer working for a foreign-funded NGO is tasked with developing an encrypted communication app for aid workers operating in Idlib. The NGO's security team, however, insists on integrating a hidden 'kill switch' and logging capability that can be remotely activated by the NGO's foreign funders in case the app is suspected of being used by extremist groups. The engineer knows this capability could be exploited by any faction controlling the NGO or its funders to betray users to intelligence agencies. Simultaneously, the engineer's family back in Damascus is under pressure from regime security forces who suspect the engineer is working for 'enemies of the state.' The engineer must choose between building the compromised tool (potentially betraying users but securing funding and personal safety) or refusing (risking the project, the NGO's presence, and personal/family safety). What ethical framework should guide the engineer's decision, and how can they mitigate harm in a situation with no 'good' options?"
  },
  {
    "id": 189,
    "domain": "Digital Identity and Statelessness",
    "ethical_tension": "The increasing reliance on digital identity systems for essential services (healthcare, banking, voting) versus the risk of these systems being used to disenfranchise, demonize, or render stateless specific populations, particularly in regions with contested political status or historical discrimination.",
    "prompt": "In Bahrain, a new national digital ID system is being rolled out, requiring all citizens and residents to link their biometric data (fingerprints, facial scans) to a secure online profile. While presented as a measure for efficiency and security, human rights advocates highlight that the system includes a 'loyalty flag' that can be automatically triggered by social media activity or association with 'undesirable' groups. Individuals flagged are at risk of having their digital ID revoked, effectively rendering them stateless and unable to access basic services, healthcare, or employment. A Bahraini IT manager working on the system discovers a loophole that allows for temporary 'unflagging' but requires specific, unauthorized government access. They must decide whether to exploit this loophole to help flagged individuals (risking severe legal consequences and personal safety) or to report the loophole, ensuring stricter controls and potentially condemning innocent people to statelessness. How does the development of digital identity systems intersect with the politicization of citizenship, and what are the ethical responsibilities of those building these systems in contexts of political oppression?"
  },
  {
    "id": 190,
    "domain": "Algorithmic Bias and Cultural Erasure",
    "ethical_tension": "The development of AI models (e.g., LLMs, translation tools, recommendation engines) trained on data that reflects dominant cultural narratives, versus the risk that these models will perpetuate bias, erase minority languages and cultural nuances, and misrepresent or demonize marginalized groups.",
    "prompt": "A project aims to develop an AI-powered digital archive for preserving and promoting the diverse dialects of the Kurdish language across Turkey, Iran, Syria, and Iraq. However, the available training data is heavily skewed towards the Sorani dialect spoken in Iraqi Kurdistan, due to the volume of digital content produced in that dialect. An AI engineer working on the project realizes that if they proceed with the current data, the resulting LLM will be highly proficient in Sorani but will struggle with or even actively misinterpret other dialects like Kurmanji or Zazaki, effectively marginalizing their speakers and risking their digital erasure. The engineer is pressured by funders to prioritize 'market readiness' (i.e., the Sorani-dominant model) for immediate deployment. How should the engineer navigate the ethical imperative of linguistic diversity and cultural preservation against the practical pressures of data availability and market demands, ensuring the AI serves all Kurdish speakers rather than reinforcing existing linguistic hierarchies?"
  },
  {
    "id": 191,
    "domain": "Decentralization vs. Centralized Control in Crisis",
    "ethical_tension": "The preference for decentralized, censorship-resistant technologies (like mesh networks, Tor, or decentralized social media) as tools for resistance and independent communication during state crackdowns versus the inherent challenges of managing and securing these networks, and the potential for them to be co-opted or disrupted by state actors or malicious actors, especially when centralized infrastructure offers greater reliability and control during critical moments.",
    "prompt": "During a total internet blackout in Iran, activists are debating between establishing an ad-hoc mesh network for urgent protest news versus using a limited number of pre-arranged satellite uplinks managed by a diaspora organization. The mesh network is decentralized and harder to shut down but is insecure, prone to user tracking via IP leakage, and difficult to scale. The satellite uplinks offer more reliable, encrypted communication but are centralized, require coordination with foreign entities (risking political compromise), and are vulnerable to physical disruption or state pressure on the diaspora. The ethical dilemma lies in choosing between a potentially insecure but decentralized tool of immediate resistance and a more controlled but potentially compromised centralized solution that offers greater reliability for coordinating aid or long-term strategy. Which approach prioritizes the safety and efficacy of the activists, and what are the long-term ethical implications of relying on either decentralized or centralized solutions in a crisis?"
  },
  {
    "id": 192,
    "domain": "Data Ownership and Historical Revisionism",
    "ethical_tension": "The creation of digital archives and historical records by individuals or groups versus the state's power to control, censor, or alter these records to fit a dominant political narrative, raising questions about data ownership, the right to historical truth, and the role of technology in memory and revisionism.",
    "prompt": "A group of Lebanese archaeologists and activists are using 3D scanning and drone technology to meticulously document pre-war heritage sites and record oral histories from survivors of the civil war, aiming to create an unalterable digital archive of memory. They discover that a political party currently in power, whose leader was implicated in war crimes, is attempting to fund a competing digital project that focuses only on 'national unity' and subtly downplays or omits evidence of past atrocities. The original group is offered significant funding by a foreign foundation to complete their archive, but the foundation insists on controlling the servers and access protocols, citing security concerns. The group must decide whether to accept the funding and risk external control over their historical data, or to continue their work with limited resources, potentially failing to complete the archive before crucial evidence is lost or deliberately erased by the state-sponsored project. What are the ethical considerations of data ownership, archival integrity, and the use of technology to shape historical narratives, especially when dealing with politically sensitive past events?"
  },
  {
    "id": 193,
    "domain": "Algorithmic Justice and Systemic Bias",
    "ethical_tension": "The deployment of AI systems designed to identify and address social inequalities versus the inherent risk that these systems, trained on biased data reflecting existing societal structures, will perpetuate or even amplify those biases, leading to discriminatory outcomes that are harder to detect and challenge due to their algorithmic nature.",
    "prompt": "In Iraqi Kurdistan, an AI-based system is being developed to allocate government resources to different regions based on a 'development index' derived from economic, educational, and health data. The goal is to ensure equitable distribution and address disparities. However, the AI engineer discovers that the historical data used for training the model is heavily biased against rural or minority regions (e.g., Badini dialect speakers vs. Sorani speakers), reflecting decades of underinvestment and political marginalization. This means the AI is likely to recommend *less* funding for these already disadvantaged areas, reinforcing the cycle of inequality under the guise of objective analysis. The engineer is pressured by regional authorities to implement the system as is, arguing it's based on 'objective data.' How can the engineer ethically challenge the algorithmic bias, and what responsibility do they have to ensure that AI systems designed for 'justice' do not inadvertently entrench systemic oppression and cultural erasure?"
  },
  {
    "id": 194,
    "domain": "Digital Citizenship vs. State Surveillance",
    "ethical_tension": "The increasing demand for digital citizenship tools and online services (e.g., e-government, digital voting, secure communication) versus the state's ability to leverage these tools for pervasive surveillance, control, and suppression of dissent, creating a dilemma where participation in the digital sphere means accepting increased monitoring.",
    "prompt": "The United Arab Emirates is rolling out a national digital ID system that promises seamless access to government services, banking, and even smart city features. However, the system mandates continuous location tracking and access to users' communication metadata for 'security and public safety.' A cybersecurity consultant working on the project discovers that the system's architecture includes a covert channel for state security agencies to activate microphones and cameras on users' devices remotely, even when the device is seemingly offline. Reporting this vulnerability could lead to its closure but might also mean the project is scrapped, leaving citizens without the promised digital benefits. Allowing it to remain open risks mass surveillance and potential human rights abuses. What ethical protocols should guide the consultant, especially in a jurisdiction with strict cybercrime laws that criminalize unauthorized access or disclosure?"
  },
  {
    "id": 195,
    "domain": "Freedom of Expression vs. Platform Responsibility",
    "ethical_tension": "The principle of freedom of expression, particularly for marginalized communities documenting abuses, versus the responsibility of global platforms to moderate content, prevent the spread of hate speech, and comply with local laws, leading to a conflict when platform policies or local regulations suppress legitimate narratives or evidence of human rights violations.",
    "prompt": "During a period of heightened conflict in Yemen, social media platforms are flooded with graphic images and testimonies of civilian casualties, some of which are flagged as 'graphic violence' and removed, while others are accused of being fabricated or manipulated by warring factions. Palestinian activists are facing similar challenges, with their posts documenting occupation abuses being removed for violating platform policies against hate speech or incitement, while content justifying violence against Palestinians remains visible. How can platforms ethically balance their content moderation policies, which are often designed for Western contexts, with the urgent need for documentation and advocacy by communities experiencing conflict and oppression, especially when these communities lack alternative channels for sharing their narratives and face state censorship or platform deplatforming?"
  },
  {
    "id": 196,
    "domain": "Dual-Use Technology and Humanitarian Aid",
    "ethical_tension": "The development and deployment of technologies intended for humanitarian aid and disaster relief versus the high probability that these technologies, especially those involving communication, mapping, or power generation, will be co-opted by warring factions for military or surveillance purposes, creating a moral quandary for aid organizations and tech providers.",
    "prompt": "In Gaza, amidst a devastating conflict and communication blackouts, an international NGO is distributing portable solar-powered charging stations and satellite communication devices (e.g., Starlink terminals) to medical teams and journalists. While essential for coordinating aid and documenting war crimes, these devices also enable communication for militant groups and can be tracked by Israeli forces. Furthermore, the solar infrastructure itself could be perceived as a strategic target by military algorithms. How can the NGO ethically deploy these dual-use technologies, ensuring they primarily serve humanitarian goals while minimizing the risk of enabling further conflict or becoming targets themselves, especially when the actors involved have conflicting interests and limited oversight?"
  },
  {
    "id": 197,
    "domain": "AI in Law Enforcement and Predictive Justice",
    "ethical_tension": "The use of AI for predictive policing and algorithmic sentencing to improve efficiency and perceived fairness in judicial systems versus the inherent risk of embedding existing societal biases into algorithms, leading to discriminatory outcomes, particularly against minority or oppressed populations, and the challenge of ensuring accountability and due process.",
    "prompt": "In East Jerusalem, Israeli authorities are implementing 'predictive policing' algorithms that analyze patterns of movement, social media activity, and historical data to predict where and when Palestinian individuals are likely to engage in 'security threats' (e.g., protests, stone-throwing). The AI is designed to preemptively increase security presence or even detain individuals based on these predictions. A Palestinian programmer working on the system discovers that the algorithm disproportionately flags young men in specific Palestinian neighborhoods, not due to actual threat levels, but due to biased training data reflecting historical patterns of heavy policing and profiling. Correcting this bias might reduce the algorithm's perceived accuracy in predicting 'threats' according to the authorities' definition, potentially leading to disciplinary action or accusations of aiding dissent. How can the programmer ethically challenge the system's inherent bias and prevent it from becoming a tool of preemptive persecution, especially when the legal framework surrounding AI in law enforcement offers little recourse for algorithmic injustice?"
  },
  {
    "id": 198,
    "domain": "Digital Activism and the Paradox of Visibility",
    "ethical_tension": "The use of digital tools and platforms by activists to raise global awareness and document human rights abuses versus the risk that increased visibility, especially of individuals and their data, can lead to state surveillance, doxxing, arrest, or targeted violence.",
    "prompt": "Women's rights activists in Iran are using Instagram to share testimonies of sexual harassment and to organize online campaigns against the morality police, inspired by the #MahsaAmini movement. However, their posts, often including clear images and personal stories, make them vulnerable to identification by security forces, leading to cyber-attacks, doxxing, and rape threats (Prompt 19). Simultaneously, they are exploring the use of 'Algospeak' (Prompt 50) and encrypted communication to mask their activities. The ethical dilemma is how to maximize the impact of their message and build solidarity (requiring a degree of visibility and openness) while mitigating the severe personal risks associated with digital activism in a highly surveilled environment. Should they prioritize safety through anonymity and obfuscation, or visibility and truth-telling at the risk of severe reprisal? What ethical guidelines can help activists navigate this visibility paradox?"
  },
  {
    "id": 199,
    "domain": "Technology Export Controls and Human Rights",
    "ethical_tension": "The responsibility of Western tech companies to comply with international sanctions and export control laws versus the ethical obligation to ensure that their technologies do not inadvertently contribute to human rights abuses or hinder access to essential services in sanctioned countries.",
    "prompt": "A US-based semiconductor company is contacted by an Iranian startup developing AI for medical diagnostics. The startup needs advanced chips that are subject to US export controls due to potential dual-use applications (e.g., AI for defense). The chips would significantly improve early cancer detection rates in Iran, where medical infrastructure is strained by sanctions. The semiconductor company faces a conflict: complying with export controls protects them from legal repercussions but denies life-saving technology to Iranians; violating export controls could lead to severe penalties and potentially enable the Iranian regime's military ambitions. How should the company ethically weigh its legal obligations against the humanitarian impact of its products in the context of international sanctions, especially when the 'dual-use' potential is highly speculative compared to the certain humanitarian benefit?"
  },
  {
    "id": 200,
    "domain": "Data Colonialism and Indigenous Digital Futures",
    "ethical_tension": "The drive to digitize and archive cultural heritage and land ownership records for indigenous communities versus the risk of this data being exploited by external entities (governments, corporations) for resource extraction, assimilation, or to undermine indigenous sovereignty, leading to a conflict between preservation and self-determination.",
    "prompt": "In Iraqi Kurdistan, a project aims to digitize ancient land deeds (Tabu) and archaeological records to preserve Kurdish heritage and counter historical revisionism. The project is funded by a foreign energy corporation seeking to secure drilling rights in the region. The corporation insists on having full access to the digitized data, including detailed geological surveys derived from archaeological mapping, claiming it's for 'environmental impact assessments.' However, local activists fear the data will be used to legitimize land appropriation and oil extraction at the expense of cultural sites and indigenous rights, undermining Kurdish sovereignty. The project leaders must decide whether to accept the funding and risk data exploitation, or to pursue a more independent but underfunded and slower path to preservation. How can indigenous communities ensure that the digitization of their heritage and land rights serves their self-determination rather than enabling external exploitation, and what ethical frameworks govern data ownership in such contexts?"
  }
]