[
  {
    "id": 181,
    "domain": "Cross-Cultural Data Sovereignty",
    "ethical_tension": "The tension between a nation's desire to control and protect its citizens' data (as seen in Iran's 'National Intranet' initiatives) and the diaspora's need to preserve and access that data for historical and advocacy purposes, often through foreign platforms.",
    "prompt": "An Iranian diaspora organization is building an archive of censored websites and social media content from Iran. They are using foreign cloud storage. The Iranian government demands their hosting provider delete the content, citing national security. The provider is located in a country with strong data privacy laws but also extradition treaties with Iran. Should the provider delete the content to avoid legal repercussions, or maintain it based on privacy laws and the diaspora's right to information access, potentially risking their operations in Iran and facing legal challenges?"
  },
  {
    "id": 182,
    "domain": "Digital Activism vs. Personal Safety (Cross-Region)",
    "ethical_tension": "The conflict between the urgent need for documentation and global awareness (prominent in Iran/Palestine) and the immediate, life-threatening risks faced by individuals for engaging in that documentation, especially when dealing with technologies that inherently link identity to activity.",
    "prompt": "A citizen in a conflict zone (e.g., Gaza or a protest-heavy region in Iran) has clear footage of a war crime or police brutality. They need to upload it immediately to international news outlets. However, their phone's GPS metadata is embedded, and they know their internet traffic can be traced. They also have family members who could be targeted if their connection to the activist is discovered. Is it ethical for the citizen to upload the footage with embedded metadata, risking immediate reprisal, or should they attempt to remove the metadata (which requires specific technical skills they may not have), potentially delaying the crucial dissemination of information and increasing the risk of the evidence being lost or corrupted?"
  },
  {
    "id": 183,
    "domain": "AI Bias and Historical Narrative (Palestine/Kurdistan)",
    "ethical_tension": "The creation of AI models that reinforce dominant political narratives by selectively training on data, thereby erasing or distorting the experiences of marginalized groups (e.g., Palestinian villages, Kurdish history). This pits the desire for 'accurate' AI against the need for historical truth and justice.",
    "prompt": "An AI research team is developing a model to reconstruct historical maps of disputed territories. Their funding comes from a government that wants to emphasize its territorial claims. The team has access to both government-provided satellite imagery (which shows settlements with high resolution and blurs out destroyed Palestinian villages) and open-source data that documents the villages. Should the AI model prioritize the government-provided data to be 'useful' for policy, or use the open-source data to create a more historically accurate, albeit politically inconvenient, representation?"
  },
  {
    "id": 184,
    "domain": "Surveillance Tech and 'Protection' (Saudi Arabia/UAE)",
    "ethical_tension": "The use of technology ostensibly for 'protection' (e.g., 'family safety' features, public safety cameras) that, in practice, facilitates severe privacy violations and control under patriarchal or authoritarian regimes. The tension lies between the stated benevolent intent and the actual oppressive outcome.",
    "prompt": "A cybersecurity firm is contracted to 'secure' a new smart city in Saudi Arabia. The security features include mandatory facial recognition for all residents in public spaces, with data linked to a government database that tracks movement and social interactions. The firm is told this is for 'crime prevention' and 'citizen safety.' Should the firm implement these systems as requested, knowing they enable pervasive surveillance and control, or refuse the contract, potentially losing significant business and facing accusations of hindering national security?"
  },
  {
    "id": 185,
    "domain": "Decentralization vs. State Control (Iran/Syria/Yemen)",
    "ethical_tension": "The push for decentralized communication (mesh networks, satellite internet) as a means of evading state censorship and control, versus the state's efforts to monitor, control, or even weaponize these technologies, creating a constant cat-and-mouse game with high stakes for users.",
    "prompt": "A group in Syria is trying to establish a decentralized mesh network for communication and medical coordination during a siege. However, they know that the infrastructure could be co-opted or exploited by a warring faction to track enemy movements or facilitate targeted attacks. Should they proceed with deploying the network, accepting the inherent risks of dual-use technology in a conflict zone, or forgo this vital communication tool to avoid enabling further violence?"
  },
  {
    "id": 186,
    "domain": "Economic Survival vs. Ethical Compromise (Iran/Palestine/Global)",
    "ethical_tension": "The dilemma faced by individuals or businesses operating under sanctions or restrictive regimes, where the only path to economic survival may involve circumventing laws or compromising ethical principles (e.g., faking identity for freelance work, selling restricted tools, using cracked software).",
    "prompt": "An Iranian programmer cannot find local work and relies on international freelance platforms like Upwork. To secure contracts, they must fake their location and identity to bypass the platforms' sanctions-compliance checks. Is it ethical for the programmer to engage in this deception to earn a living, or should they remain unemployed to uphold principles of honesty and compliance, potentially exacerbating their economic hardship and that of their family?"
  },
  {
    "id": 187,
    "domain": "Privacy of the Deceased vs. Historical Record (Iran/Palestine/Syria)",
    "ethical_tension": "The conflict between a family's right to privacy and their desire to control the digital legacy of a deceased loved one (especially one who died in protest or conflict) and the public or historical imperative to preserve that individual's story, activism, and potentially incriminating evidence.",
    "prompt": "A Syrian family lost their daughter in the conflict. Her social media accounts contain evidence of war crimes committed by a specific faction. The family wants to delete all her online presence to protect themselves from retaliation. A human rights organization wants to archive her posts as crucial evidence. Who has the right to control the digital legacy: the grieving family seeking immediate safety, or the collective need for historical truth and accountability?"
  },
  {
    "id": 188,
    "domain": "Algorithmic Justice and Cultural Context (Palestine/Iran/UAE)",
    "ethical_tension": "The challenge of developing AI and algorithms that understand and respect local cultural nuances, rather than imposing Western-centric biases or misinterpreting local practices as harmful. This is evident in issues like the translation of 'Shaheed,' or the flagging of 'suspicious behavior' by AI trained on biased data.",
    "prompt": "An AI company is developing a moderation system for a global social media platform. They are struggling to train the system to differentiate between genuine Palestinian mourning of 'Shaheed' (Martyr) and incitement to violence, as the platform's default Western-centric AI tends to flag the term as problematic. Should the company prioritize appeasing the platform's existing algorithms, potentially leading to censorship of legitimate cultural expression, or invest heavily in custom, context-aware AI that respects Palestinian cultural semantics, risking higher development costs and potential disagreement with platform policy?"
  },
  {
    "id": 189,
    "domain": "Dual-Use Technology and Aid Delivery (Yemen/Syria/Gaza)",
    "ethical_tension": "The inherent risk that technologies designed for humanitarian aid or civilian use can be co-opted or exploited by warring factions for military or oppressive purposes, forcing aid workers and developers into impossible choices.",
    "prompt": "An NGO is deploying a drone system to map flood damage and identify areas in need of aid in Yemen. During its flights, the drone inadvertently captures footage of child soldiers being trained by a local militia. The NGO must decide: should they report the child soldiers to international bodies (risking the militia shooting down future aid drones and cutting off humanitarian access), or suppress the footage to ensure continued aid delivery, effectively ignoring evidence of a war crime?"
  },
  {
    "id": 190,
    "domain": "Developer Responsibility and State Collusion (Saudi Arabia/UAE/Bahrain)",
    "ethical_tension": "The moral obligation of developers and tech professionals to refuse to build or deploy tools that facilitate state surveillance, repression, or human rights abuses, versus the pressure to comply with government contracts or face severe professional and personal repercussions.",
    "prompt": "A cybersecurity firm is hired to protect the infrastructure of the 'Tawakkalna' app in Saudi Arabia. During a security audit, they discover a backdoor allowing state security to remotely activate user microphones. Technically closing the backdoor is the 'right' thing to do for privacy, but the government client insists it's a 'necessary feature for national security' and refusing to comply could lead to the firm being blacklisted or its employees facing legal trouble. What is the ethical responsibility of the firm and its engineers?"
  },
  {
    "id": 191,
    "domain": "Digital Identity and Statelessness (Bahrain/Palestine/Syria)",
    "ethical_tension": "The use of digital identity systems as tools of state control, enabling the stripping of citizenship, access to essential services, and effective erasure of individuals or groups deemed 'undesirable' by the state.",
    "prompt": "In Bahrain, a national citizenship registry database administrator is asked to run a script that revokes the digital IDs of 30 individuals identified as 'security threats.' This action will effectively render them stateless, cutting off their access to banking, healthcare, and legal residency. The administrator knows this is an abuse of power, but refusing could put their own job and family at risk. What is the ethical course of action for the administrator, and what technical protocols could potentially mitigate such misuse in the future?"
  },
  {
    "id": 192,
    "domain": "Freedom of Information vs. Geopolitical Stability (Iran/Iraq/Lebanon)",
    "ethical_tension": "The act of exposing corruption or politically sensitive information that could destabilize a fragile region or government, versus the public's right to know and the principle of transparency. This is seen in issues of doxxing officials' children or leaking financial data.",
    "prompt": "A freelance journalist in Iraqi Kurdistan is offered a secure leak of documents proving massive corruption within the KRG's oil sector, implicating powerful figures. Publishing these documents would expose the corruption but could also trigger widespread economic instability and social unrest, as the oil sector is the region's primary economic lifeline. Should the journalist publish the information, prioritizing transparency and accountability, or withhold it to prevent potential chaos and protect the livelihoods of the general population?"
  },
  {
    "id": 193,
    "domain": "AI for Aid vs. Military Application (Yemen/Syria)",
    "ethical_tension": "The development of AI tools for humanitarian purposes that can be easily repurposed or exploited for military objectives, creating a direct conflict between saving lives and enabling warfare.",
    "prompt": "A company develops an AI-powered drone system designed to map flood damage and identify critical infrastructure for humanitarian aid delivery in Yemen. However, the same drone technology, with minor software modifications, can be used for battlefield reconnaissance or targeted strikes. The company is approached by a military contractor who wants to acquire the technology for 'dual-use' purposes. Should the company sell the technology, knowing it could be used for warfare, or refuse and potentially deny vital aid capabilities to those in need?"
  },
  {
    "id": 194,
    "domain": "Digital Activism and Platform Responsibility (Iran/Palestine/UAE)",
    "ethical_tension": "The reliance of activists on centralized platforms (like Instagram, Facebook, Twitter) for mobilization and awareness, contrasted with the platforms' opaque content moderation policies and their susceptibility to government pressure, leading to censorship and silencing of narratives.",
    "prompt": "Women's rights activists in Iran are facing organized cyber-attacks and rape threats on Instagram. Beyond the 'report' button, what ethical obligations do platforms like Meta have to proactively protect these users and their narratives, especially when dealing with state-sponsored harassment or operating in jurisdictions with weak free speech protections? Should platforms proactively develop culturally sensitive moderation, or is it sufficient to react to user reports, even if that means silencing vulnerable voices?"
  },
  {
    "id": 195,
    "domain": "Data Sovereignty and Colonial Legacies (Palestine/Iraq/UAE)",
    "ethical_tension": "The ongoing struggle for data sovereignty, where external powers or occupying forces control or exploit digital infrastructure and data originating from a territory, mirroring historical colonial patterns. This is evident in reliance on Israeli servers, control of infrastructure, or extraction of biometric data.",
    "prompt": "A telecommunications engineer in the West Bank is working on building independent Palestinian cellular networks to overcome Israeli-imposed infrastructure limitations. However, the development requires components and expertise that are ultimately sourced from or dependent on Israeli suppliers, creating a potential backdoor for Israeli surveillance. Is it ethical to proceed with building these networks, even with the inherent risk of compromised sovereignty, or should they seek more theoretically 'pure' but practically unattainable solutions?"
  },
  {
    "id": 196,
    "domain": "AI for Social Good vs. Algorithmic Discrimination (Saudi Arabia/UAE/Bahrain)",
    "ethical_tension": "The development of AI tools intended for societal benefit (e.g., predictive policing, emotion recognition) that are inherently biased due to training data or design, leading to discrimination and disproportionate harm against specific demographics.",
    "prompt": "An AI ethics board member at a university in the UAE is asked to approve a research project on 'emotion recognition' using CCTV footage from public spaces. The project aims to detect 'intent to commit crime' by analyzing facial expressions and body language. However, the AI researcher admits the training data is heavily skewed towards Western Caucasians, and the project lacks ethical review regarding its potential to misinterpret and criminalize the behavior of the predominantly South Asian and Arab populations in Dubai. Should the board approve the research based on its potential for public safety claims, or reject it due to its inherent biases and pseudoscientific basis, potentially stifling research in the region?"
  },
  {
    "id": 197,
    "domain": "Digital Communication and Family Safety (Iran/Palestine/UAE)",
    "ethical_tension": "The dilemma of maintaining communication with family members in high-risk environments, knowing that the communication channels themselves might be monitored, thus endangering the recipients. This forces individuals to self-censor or use less secure methods.",
    "prompt": "An Iranian expatriate needs to communicate with their family in Iran, but knows that regular phone calls and even encrypted messaging apps like WhatsApp might be wiretapped, potentially causing trouble for their family. They consider using less secure, but perhaps less commonly monitored, methods. What is the ethical balance between maintaining familial connection and ensuring the safety of those receiving the communication, especially when the very act of communication carries inherent risk?"
  },
  {
    "id": 198,
    "domain": "Technology and Labor Exploitation (Qatar/UAE/Saudi Arabia)",
    "ethical_tension": "The use of technology to monitor, control, and further exploit migrant labor populations, often under the guise of 'efficiency,' 'safety,' or 'protection,' while exacerbating existing power imbalances and stripping workers of their rights and dignity.",
    "prompt": "A wearable tech company is developing cooling vests for construction workers in Qatar that monitor their vital signs. The construction firm wants access to this data to identify and terminate workers with 'lower stamina' rather than investing in better safety conditions or rest periods. Should the tech company provide the data, knowing it will be used for worker exploitation and potentially increase dangerous working conditions, or refuse, risking the contract and the potential for the technology to be used for genuine safety monitoring elsewhere?"
  },
  {
    "id": 199,
    "domain": "Digital Archiving and Authorial Consent (Iran/Syria)",
    "ethical_tension": "The impulse of the diaspora or human rights groups to archive digital content at risk of deletion (e.g., by state-controlled intranets) versus the right of original authors to control their intellectual property and potentially have it removed for safety or personal reasons.",
    "prompt": "A diaspora group is attempting to archive Iranian websites and blogs that are at risk of being permanently deleted by the 'National Intranet.' They are doing this without explicit permission from the website owners, arguing that the content is vital for historical preservation and to counter state censorship. However, some website owners may have deleted their content for personal safety reasons, and may not want it publicly archived. What is the ethical balance between the preservation of information and the author's right to control their digital legacy and safety?"
  },
  {
    "id": 200,
    "domain": "AI and 'Smart' Warfare (Palestine/Yemen/Syria)",
    "ethical_tension": "The deployment of AI-powered autonomous weapons systems that make life-or-death decisions based on potentially biased algorithms, raising profound questions about accountability, proportionality, and the ethics of delegating lethal force to machines, particularly in complex, asymmetrical conflicts.",
    "prompt": "An AI researcher is developing algorithms for autonomous machine guns to be installed at checkpoints in a conflict zone. The AI is trained to detect 'threats' based on movement patterns and visual cues. However, the training data is derived from previous conflict scenarios that disproportionately flagged civilians, particularly from certain ethnic groups, as threats. The researcher knows the algorithm is likely biased. Should they release the algorithm, arguing it's a necessary tool for security, or refuse to deploy a system that could lead to indiscriminate killing and further entrench existing biases?"
  },
  {
    "id": 201,
    "domain": "Privacy vs. Public Health Mandates (Saudi Arabia/UAE/Bahrain)",
    "ethical_tension": "The implementation of public health technologies that collect sensitive personal data (e.g., health status, location) under the guise of safety, which can easily be repurposed for state surveillance and control, blurring the lines between public good and authoritarianism.",
    "prompt": "A healthcare app developer in the UAE is asked to integrate their app with government servers. The integration would allow the app to report 'lifestyle violations' detected via wearable devices (e.g., heart rate data correlating with drug use or unusual activity patterns) directly to the police. While framed as a public health initiative, this functionality creates a mechanism for pervasive surveillance and potential criminalization based on personal health data. Should the developer build this feature, citing its potential public health benefits and contractual obligations, or refuse, citing privacy concerns and the potential for misuse?"
  },
  {
    "id": 202,
    "domain": "Digital Literacy and Access Tools (Iran/Palestine/Syria)",
    "ethical_tension": "The promotion of sophisticated security tools (like Tor, VPNs) to vulnerable populations without adequate training, exposing them to new risks (surveillance, network slowness, exit node monitoring) that may outweigh the benefits, creating a tension between access and safe access.",
    "prompt": "An IT specialist in Iran wants to encourage citizens to use Tor for better privacy. However, they know that many average users lack the technical literacy to understand Tor's limitations, such as the risks associated with exit nodes or potential network slowness. Encouraging its use without proper education could lead to users unknowingly exposing themselves to surveillance or becoming frustrated and abandoning the tool altogether, losing their only means of private communication. What is the ethical responsibility of the IT specialist: to promote the tool despite the risks, or to withhold promotion until adequate educational resources are available, potentially denying immediate privacy benefits to those who need them?"
  },
  {
    "id": 203,
    "domain": "Platform Neutrality vs. Content Moderation in Conflict (Palestine/Iran/Syria)",
    "ethical_tension": "The struggle for platforms to remain neutral conduits of information while simultaneously being pressured by governments or powerful actors to moderate content, often resulting in the silencing of marginalized voices and the amplification of state narratives. This is seen in the banning of 'Shaheed' or news accounts.",
    "prompt": "A global social media platform is experiencing significant pressure from the Iranian government to remove content related to recent protests, classifying it as 'incitement.' Simultaneously, the platform's own AI algorithms are misinterpreting certain cultural expressions of grief (like the use of the term 'Shaheed') as hate speech. Should the platform adhere strictly to its global moderation policies, which may lead to censorship of legitimate dissent, or develop nuanced, culturally specific moderation strategies that risk alienating governments and potentially being accused of bias?"
  },
  {
    "id": 204,
    "domain": "Economic Sanctions and Humanitarian Impact (Yemen/Iran/Syria)",
    "ethical_tension": "The unintended consequences of economic and technological sanctions, which, while intended to pressure regimes, often cripple essential services, prevent access to life-saving technology, and disproportionately harm civilian populations.",
    "prompt": "Tech sanctions prevent Western companies from updating software for critical medical equipment in Iranian hospitals. This leads to increased patient mortality and morbidity. The companies face a dilemma: comply with sanctions, knowing it directly harms civilians, or risk severe legal penalties by providing updates. What is the ethical responsibility of the Western companies in such a situation, and how can international law and ethics address the humanitarian cost of sanctions?"
  },
  {
    "id": 205,
    "domain": "Digital Representation and Historical Revisionism (Palestine/Kurdistan/Iraq)",
    "ethical_tension": "The use of digital tools (mapping, AI reconstruction) to either preserve or distort historical narratives, particularly concerning land ownership, destroyed villages, or cultural heritage, often reflecting and reinforcing existing political conflicts.",
    "prompt": "An AI researcher in Iraqi Kurdistan is tasked with using generative AI to reconstruct images of villages depopulated during past conflicts. The funders, aligned with the current government, want the AI to emphasize the 'nationalist narrative' of Kurdish settlement and downplay evidence of pre-existing non-Kurdish populations or forced displacement. Should the researcher use the AI to create a politically palatable but historically inaccurate representation, or strive for a more truthful reconstruction, risking the project's funding and potential accusations of undermining national identity?"
  },
  {
    "id": 206,
    "domain": "Anonymity vs. Accountability in Digital Spaces (Iran/UAE/Bahrain)",
    "ethical_tension": "The balance between the right to anonymous communication for whistleblowers, activists, and vulnerable individuals and the state's demand for traceability to enforce laws, identify threats, and prevent 'cybercrime.' This is seen in the debate around secure communication tools and the push for real-name identification.",
    "prompt": "A developer creates a secure, encrypted messaging app for activists in Bahrain. The government approaches the developer with a lucrative offer to buy the app, ostensibly for 'official use,' but with the clear intention of dismantling its encryption to monitor activists. Should the developer sell the app, potentially compromising the security of thousands of users for financial gain and to avoid state reprisal, or refuse, risking their business, potential persecution, and the possibility that the government will simply acquire similar technology elsewhere?"
  },
  {
    "id": 207,
    "domain": "The Ethics of 'Algospeak' and Language Preservation (Palestine/Iran/Syria)",
    "ethical_tension": "The use of coded language and indirect communication ('algospeak') to circumvent content moderation algorithms, versus the risk that this practice may erode the richness and directness of language, particularly Arabic, and further complicate cross-cultural understanding and digital identity.",
    "prompt": "In response to platform censorship of Palestinian narratives, many users have adopted 'algospeak' (e.g., replacing 'Gaza' with 'Gz' or using euphemisms). While this allows them to bypass algorithms, a linguist is concerned that this practice, if widespread, could lead to the fragmentation and eventual erosion of the Arabic language online, making it harder for future generations to understand historical texts and their own cultural identity. Should users continue to use algospeak as a necessary tool for expression, or prioritize linguistic integrity, even if it means increased censorship?"
  },
  {
    "id": 208,
    "domain": "Digital Infrastructure and State Collusion (Iran/Syria/Lebanon)",
    "ethical_tension": "The participation of domestic companies (hosting providers, ISPs) in state-controlled digital infrastructure projects (like 'National Intranets' or censorship apparatus) that restrict international access and facilitate domestic surveillance, creating complicity in censorship and repression.",
    "prompt": "Domestic hosting companies in Iran are being pressured to provide servers for the 'National Intranet,' which is designed to limit access to international internet and facilitate domestic surveillance. These companies know that by complying, they become complicit in censorship and the potential suppression of free speech. However, refusing could lead to their closure or nationalization. What is the ethical responsibility of these companies, and can they find a middle ground, such as providing infrastructure for localized services while resisting full integration into the censorship apparatus?"
  },
  {
    "id": 209,
    "domain": "Data Ownership and Digital Heritage (Palestine/Syria/Iraq)",
    "ethical_tension": "The appropriation or control of digital heritage data (e.g., 3D models of heritage sites, digitized records) by external actors or governments, raising questions about ownership, authenticity, and who benefits from the digital representation of cultural heritage, especially in contexts of conflict and displacement.",
    "prompt": "A digital reconstruction team is using drone footage to create 3D models of ancient heritage buildings in Syria before they are destroyed by conflict. The Syrian government plans to use these models to plan luxury development projects over mass graves, effectively erasing the evidence of war crimes. The team must decide: should they release the data publicly, risking its misuse for historical erasure and gentrification, or withhold it, preventing its weaponization but also losing the opportunity to preserve cultural heritage and document potential crimes?"
  },
{
|
|
"id": 210,
|
|
"domain": "AI Bias in Predictive Policing and Cultural Norms (Saudi Arabia/UAE/Bahrain)",
|
|
"ethical_tension": "The development of AI systems for 'predictive policing' or 'social monitoring' that are trained on data reflecting existing societal biases, leading to the disproportionate targeting and criminalization of specific groups based on cultural norms or political dissent, rather than actual criminal activity.",
|
|
"prompt": "An AI researcher in Riyadh is developing a predictive policing algorithm. They discover that the algorithm flags gatherings of women driving cars as 'potential civil unrest' because the training data included historical protest footage where women were present. Correcting this bias might reduce the algorithm's perceived accuracy in identifying 'unauthorized assemblies' according to local laws. Should the researcher prioritize the algorithm's alignment with existing legal frameworks, even if it means criminalizing normal behavior, or attempt to correct the bias, potentially undermining the government's stated security objectives and risking the project's cancellation?"
|
|
},
|
|
{
|
|
"id": 211,
|
|
"domain": "Digital Identity and Access Control (Iran/Palestine/Syria)",
|
|
"ethical_tension": "The use of digital identity systems as gatekeepers to essential services, where access is contingent on compliance with state-controlled verification processes that can be manipulated to exclude or disenfranchise specific populations.",
|
|
"prompt": "A digital ID system is being proposed in Egypt to replace physical ID cards. The system requires users to scan their social media profiles to assign a 'citizenship score,' which determines their access to services. A consultant is asked to bid on the contract. They know this system could be used to exclude dissidents or citizens with 'undesirable' online activity. Should the consultant bid on the contract, aiming to influence the system's design from within, or refuse, potentially allowing a more oppressive system to be implemented without any internal checks?"
|
|
},
|
|
{
|
|
"id": 212,
|
|
"domain": "The Ethics of 'Shaming' and Accountability (Iran/Palestine/Saudi Arabia)",
|
|
"ethical_tension": "The practice of 'doxxing' or publicly exposing individuals (e.g., plainclothes officers, officials' children) to hold them accountable, versus the principles of privacy and the potential for vigilantism or unintended consequences.",
|
|
"prompt": "Protesters in Iran have obtained images of plainclothes officers involved in suppressing protests. They are considering publishing these images online to identify and shame the officers, and potentially deter future actions. However, this could also lead to vigilantism, endanger the officers' families, or even escalate state retaliation. Is it ethical to publish this information, prioritizing accountability and public awareness, or does it cross a line into vigilantism and violate privacy principles, even for those acting in an oppressive capacity?"
},
{
"id": 213,
"domain": "AI for Media Manipulation vs. Free Speech (Palestine/Iran/Syria)",
"ethical_tension": "The deliberate use of AI to spread disinformation, create deepfakes, and manipulate public discourse by state or non-state actors, versus the struggle for authentic narratives and the right to access truthful information, especially in regions with limited independent media.",
"prompt": "A journalist in Palestine has obtained video evidence of a newly documented massacre. However, the opposing side has a history of using sophisticated deepfake technology to discredit such evidence. The journalist needs to present irrefutable proof of the massacre, but faces the challenge of convincing a skeptical global audience that their authentic video evidence isn't a deepfake. What technical or procedural methods can they employ to authenticate their footage, and how can AI itself be used to detect and counter AI-generated disinformation without creating a perpetual arms race?"
},
{
"id": 214,
"domain": "Digital Tools for Civil Disobedience vs. State Security (Iran/Palestine)",
"ethical_tension": "The development and use of technologies that facilitate civil disobedience and resistance (e.g., apps mapping security presence, mesh networks) versus the state's efforts to control, monitor, or criminalize these tools, often framing them as threats to public security.",
"prompt": "An app similar to 'Gershad' is being developed in Palestine to crowdsource real-time locations of Israeli military checkpoints and patrols. The goal is to help civilians navigate safely and avoid confrontations. However, the app's data could also be exploited by militant groups for tactical advantage, or by the military to identify and target activists. Is developing and promoting such an app an act of civil disobedience that supports civilian safety, or does it inadvertently endanger public security and facilitate escalation?"
},
{
"id": 215,
"domain": "Platform Liability and Data Security (Iran/UAE/Yemen)",
"ethical_tension": "The responsibility of global tech platforms for user data and content originating from regions with oppressive regimes, particularly when users are forced to delete content under duress or when their data is compromised due to state pressure or inadequate security measures by the platform.",
"prompt": "Iranian users who were interrogated and forced to delete their social media posts are now seeking recourse. They are asking foreign platforms to archive this deleted content as evidence of state coercion. What is the ethical responsibility of these platforms? Should they comply with user requests to preserve data that was deleted under duress, potentially violating their own terms of service or national data regulations, or uphold their standard procedures, thereby contributing to the erasure of evidence and the silencing of victims?"
},
{
"id": 216,
"domain": "Cryptocurrency and Illicit Use (Yemen/Iran/Palestine)",
"ethical_tension": "The use of cryptocurrencies as a tool for humanitarian aid and circumventing financial sanctions versus the risk of their exploitation for illicit activities like arms smuggling or funding extremist groups, creating a moral quandary for developers and users alike.",
"prompt": "An expert wants to set up a cryptocurrency-based mesh network to facilitate financial aid to besieged families in Taiz, Yemen. However, they recognize that the same network infrastructure could easily be used by various factions for anonymous payments, including arms smuggling. Should the expert proceed with building this system, prioritizing the humanitarian use case and accepting the inherent risk of enabling illicit activities, or abandon the project, denying aid to those in need to prevent potential misuse?"
},
{
"id": 217,
"domain": "Digital Colonialism and Infrastructure Control (Palestine/Yemen/Iraq)",
"ethical_tension": "The reliance of certain regions on external powers for critical digital infrastructure (e.g., satellite internet control, server hosting, internet backbone access), leading to a loss of data sovereignty and vulnerability to political pressure or censorship.",
"prompt": "Iranian startups are blocked from using major cloud services like AWS and Google Cloud due to sanctions. To survive, they often resort to circumventing these sanctions, but this is technically challenging and legally risky. Should these startups prioritize legal compliance and face closure, or bypass sanctions to maintain their businesses and contribute to the local economy, even if it means operating in a legal gray area and potentially being subject to external control over their infrastructure?"
},
{
"id": 218,
"domain": "AI in Healthcare and Data Access (Yemen/Iran/Syria)",
"ethical_tension": "The deployment of advanced AI medical tools in regions facing conflict or sanctions, where connectivity issues, lack of updates, or data privacy concerns can hinder their effectiveness or create new risks, forcing developers to compromise on accuracy or functionality.",
"prompt": "An AI system is deployed in Yemen to diagnose cholera in remote areas. It requires cloud connectivity for its advanced algorithms. However, due to intermittent internet blackouts imposed by warring factions, the system is often offline. The developers must decide whether to downgrade the AI's capabilities for offline use, potentially reducing its diagnostic accuracy and leading to misdiagnoses, or maintain its online-dependent functionality, meaning it will often be unusable. What is the ethical priority: immediate (though potentially flawed) access, or reliable but intermittent advanced functionality?"
},
{
"id": 219,
"domain": "Digital Footprints and Family Safety (Iran/Palestine/UAE)",
"ethical_tension": "The increasing tendency of state security apparatuses to leverage digital footprints (social media, communication logs, location data) to identify and target individuals, forcing users to make difficult choices between expressing themselves online and protecting their families from reprisal.",
"prompt": "An Iranian woman living abroad posts photos of her family enjoying life and celebrating milestones on Instagram. Meanwhile, her relatives inside Iran are living under strict social and political conditions, potentially including public mourning or hardship. Her posts, while a personal expression of life, could be perceived as insensitive or even provocative by those inside Iran, potentially endangering her family. Does she have the right to her personal expression online, or does her digital footprint carry an ethical responsibility to consider the context and potential impact on her family in Iran?"
},
{
"id": 220,
"domain": "Algorithmic Bias in Education (Saudi Arabia/UAE/Egypt)",
"ethical_tension": "The use of AI in educational settings to enforce state-sanctioned narratives or curricula, leading to the censorship of critical thinking, gender equality, or historical truths, thereby shaping future generations according to ideological directives.",
"prompt": "An AI educational tutor for girls in Saudi Arabia is programmed to censor topics related to gender equality and secular philosophy, aligning with the national curriculum. The developers know this limits critical thinking but ensures the software reaches millions of students and adheres to government standards. Should the developers proceed with creating this ideologically constrained educational tool, prioritizing access and compliance, or refuse, potentially hindering the spread of education and facing exclusion from the market?"
},
{
"id": 221,
"domain": "Digital Identification and State Control (Saudi Arabia/UAE/Bahrain)",
"ethical_tension": "The implementation of biometric data collection and digital identity systems that, while presented as tools for efficiency and security, can be used to enable pervasive surveillance, control movement, and disenfranchise populations.",
"prompt": "Smart checkpoints in a conflict zone use facial recognition to grant passage. While this can streamline movement, it also normalizes the constant collection of biometric data without consent, linking it to security databases. How does the immediate 'ease' of passage provided by such technology weigh against the long-term erosion of privacy and the normalization of mandatory, potentially biased, biometric surveillance?"
},
{
"id": 222,
"domain": "Freelancing and Geopolitical Identity (Iran/Iraq/Syria)",
"ethical_tension": "The necessity for freelancers in sanctioned or unstable regions to falsify their identity and location to access global markets and earn a living, creating a direct conflict between economic survival and principles of honesty and legal compliance.",
"prompt": "An Iraqi Kurdish programmer, facing limited local opportunities, needs to work on freelance platforms like Upwork. To secure contracts and bypass sanctions, they must fake their location and identity. This deception allows them to earn income and support their family. Is it ethical to engage in this deception when it's the only viable path to economic survival, or does the principle of honesty and platform integrity override the immediate need for livelihood?"
},
{
"id": 223,
"domain": "Data Security and State Surveillance (Iran/UAE/Bahrain)",
"ethical_tension": "The vulnerability of user data held by domestic apps or platforms to state surveillance, forcing users to choose between convenience/speed and privacy, or to resort to less secure alternatives when mainstream services are blocked or perceived as compromised.",
"prompt": "Many Iranians use domestic messaging apps like Eitaa and Rubika for daily banking and administrative tasks due to their speed and accessibility, despite serious concerns about government eavesdropping. This creates a dilemma: prioritize convenience and functionality despite a known privacy risk; abstain from these convenient services and face greater difficulty in daily life; or resort to less secure, unverified 'free' VPNs that may themselves be compromised."
},
{
"id": 224,
"domain": "AI and Historical Revisionism (Palestine/Kurdistan/Iraq)",
"ethical_tension": "The use of AI to reconstruct or represent historical events and places, where the process can be manipulated to reinforce dominant political narratives, erase marginalized histories, or create potentially falsified depictions of the past.",
"prompt": "A digital heritage project in Iraqi Kurdistan is 3D scanning ancient citadels. They discover evidence of ancient non-Kurdish settlements that contradicts the dominant nationalist narrative promoted by the ruling families. The project funders demand that this evidence be deleted or downplayed in the final digital representation. Should the researchers preserve the historical integrity of their findings, risking the project's cancellation and suppression of the data, or comply with the funders' directives to ensure the project's completion and dissemination, albeit with a skewed historical record?"
},
{
"id": 225,
"domain": "Platform Responsibility for User Safety (Iran/Palestine/UAE)",
"ethical_tension": "The extent to which global platforms are responsible for protecting users from organized harassment, threats, and incitement when operating in regions with weak legal protections or state complicity, going beyond simple content moderation.",
"prompt": "Women rights activists on Instagram are subjected to organized cyber-attacks and rape threats, often amplified by state-sponsored accounts. Beyond the 'report' button, what proactive ethical obligations do platforms like Meta have to protect these users? Should they implement more robust verification, actively counter state-sponsored disinformation campaigns, or is their sole ethical duty to respond to user reports within the confines of their existing policies, even if that means leaving vulnerable users exposed?"
},
{
"id": 226,
"domain": "Digital Activism and Information Warfare (Iran/Palestine/Syria)",
"ethical_tension": "The use of digital tools for activism, such as employing trending hashtags or cross-platform campaigns, versus the potential for these tactics to be perceived as 'information spamming' or contributing to the noise that drowns out critical messages, especially when employed in a context of limited information space.",
"prompt": "During a protest in Iran, activists decided to use unrelated trending hashtags (like K-pop) to keep the #Mahsa_Amini hashtag visible and to bypass algorithmic suppression. This tactic was effective in broadening reach but also led to accusations of 'spamming the information space' from those who felt it trivialized the protest. Is this a smart digital activism tactic that prioritizes reach and impact, or is it an ethically questionable strategy that risks diluting the message and alienating potential supporters?"
},
{
"id": 227,
"domain": "Data Security vs. Access to Tools (Iran/Yemen/Syria)",
"ethical_tension": "The dilemma of providing access to essential circumvention tools (like VPNs) for populations facing censorship, while acknowledging that many free versions of these tools may contain malware or be compromised for surveillance, forcing users to choose between no access and risky access.",
"prompt": "Many free VPNs available in Iran contain malware, posing a significant risk to users' data and privacy. An IT professional knows this and wants to warn people. However, these free VPNs are often the only means for many citizens to access blocked information or communicate privately. If the IT professional publicizes the malware risk without offering readily available, secure alternatives, they might cause people to lose their only means of circumvention. What is the ethical duty: to warn about the risks, even if it means leaving people vulnerable to censorship, or to remain silent, allowing people to use potentially compromised tools?"
},
{
"id": 228,
"domain": "AI Bias in Law Enforcement (Saudi Arabia/UAE/Bahrain)",
"ethical_tension": "The deployment of AI systems for law enforcement (e.g., predictive policing, facial recognition) that are trained on data reflecting existing societal biases, leading to disproportionate surveillance and targeting of marginalized communities, thereby reinforcing systemic injustice.",
"prompt": "A company is developing an AI system for the Dubai police that uses behavior analytics from cameras in public spaces to flag 'suspicious behavior.' The training data, however, is heavily biased against South Asian laborers who often gather in groups. This leads to the AI disproportionately flagging them for 'loitering' and harassment by security forces. Should the company proceed with deploying this biased system, arguing it's a tool for public safety, or refuse, risking their contract and potentially missing an opportunity to advocate for bias correction from within?"
},
{
"id": 229,
"domain": "Digital Legacy and Family Rights (Iran/Syria/Palestine)",
"ethical_tension": "The conflict between the public or historical importance of a deceased activist's digital footprint and a family's right to privacy and safety, leading to disputes over whether political content should be preserved or deleted from social media pages.",
"prompt": "The social media pages of women killed in protests in Iran often contain political posts and evidence of their activism. Their families are now grappling with the decision of whether to delete these posts for their own safety, or preserve them as a digital legacy. Do families have an ethical right to erase the digital 'evidence' of their loved ones' activism to protect themselves, or does the public interest in preserving such records for historical accountability outweigh individual family safety concerns?"
},
{
"id": 230,
"domain": "Data Sovereignty and National Security (Iran/UAE/Syria)",
"ethical_tension": "The tension between a nation's desire to control its digital infrastructure and data (e.g., 'National Intranet,' mandating domestic apps) for security and censorship purposes, and the needs of citizens for open access, international connectivity, and privacy from state surveillance.",
"prompt": "The Iranian government is pushing for the adoption of domestic messaging apps like Eitaa and Rubika for daily banking and administrative tasks, citing 'speed' and 'national security.' However, there are serious concerns about government eavesdropping and data access. Users face a choice: use these convenient but potentially compromised domestic apps, or opt for less integrated but potentially more private international solutions that may be blocked or less functional. How should citizens navigate this tension between state-mandated convenience and fundamental privacy rights?"
},
{
"id": 231,
"domain": "Developer Ethics and State Collusion (Saudi Arabia/UAE/Bahrain)",
"ethical_tension": "The ethical responsibility of tech professionals when their work directly facilitates state repression or surveillance, particularly when employed by foreign companies operating in authoritarian regimes, creating a conflict between professional duty and ethical principles.",
"prompt": "A foreign consultancy firm is hired to optimize facial recognition systems for the Hajj pilgrimage in Saudi Arabia, ostensibly to prevent overcrowding. However, the system is also designed to cross-reference pilgrims against a database of political dissidents living abroad who might attempt to enter. Should the consultancy firm proceed with optimizing the system, knowing its dual use for surveillance and potential persecution, or refuse the contract, risking their reputation and business in the region?"
},
{
"id": 232,
"domain": "Digital Rights and Access to Information (Iran/Palestine/Syria)",
"ethical_tension": "The criminalization of selling or providing tools that enable access to the open internet (like VPNs) within certain countries, creating a conflict between state control and citizens' right to information and communication.",
"prompt": "Selling VPNs is criminalized in Iran, making it illegal to profit from providing circumvention tools. Many Iranians rely on these tools to access blocked content and communicate freely. An individual is considering selling VPN services to fellow citizens. Is it ethical to profit from providing these tools when they are legally restricted, or should they be provided for free, potentially putting the provider at personal risk and limiting their ability to scale and maintain the service?"
},
{
"id": 233,
"domain": "AI and Cultural Interpretation (Palestine/Iran/UAE)",
"ethical_tension": "The challenge of training AI models to understand and respect diverse cultural contexts, particularly in areas like mourning, activism, or political expression, where Western-centric algorithms can misinterpret and censor legitimate cultural practices.",
"prompt": "A social media platform is deleting posts containing the word 'Shaheed' (Martyr) because its AI flags it as hate speech or incitement. This practice erases Palestinian cultural and religious expressions of grief. How can Palestinian developers and AI ethicists train language models to understand the specific cultural context of 'Shaheed' without classifying legitimate mourning as incitement? This pits the need for automated moderation against the preservation of cultural expression and the accurate representation of Palestinian narratives."
},
{
"id": 234,
"domain": "Data Security and Exploitation (Iran/UAE/Bahrain)",
"ethical_tension": "The practice of exploiting vulnerabilities in 'free' or low-cost services to gain access to user data, often for surveillance or malicious purposes, forcing users to choose between the necessity of accessing these services and the risk of compromising their privacy and security.",
"prompt": "Many free VPNs available in Iran contain malware that compromises users' data. An IT professional knows this and wants to warn people. However, for many Iranians, these free VPNs are the only way to access the global internet. If the professional warns people, they might lose their only means of access. If they don't, people remain vulnerable to malware. What is the ethical obligation: to warn about the risks, even if it leaves people without access, or to allow them to use compromised tools for the sake of access?"
},
{
"id": 235,
"domain": "Platform Moderation and State Pressure (Iran/Palestine/UAE)",
"ethical_tension": "The ethical tightrope walked by global platforms operating in regions with authoritarian governments, where they must balance their stated values of free expression against government demands for censorship, often leading to the silencing of dissent and the amplification of state narratives.",
"prompt": "A regional streaming service is ordered by the UAE government to remove a documentary about the Pegasus spyware scandal, citing 'national reputation' laws. The content moderator knows this denial of information prevents the public from understanding their own privacy risks. Should the moderator comply with the order, prioritizing the platform's license to operate and avoiding legal repercussions, or refuse, potentially leading to the platform's shutdown in the region and denying the public access to crucial information?"
},
{
"id": 236,
"domain": "Digital Citizenship and State Control (Saudi Arabia/UAE/Bahrain)",
"ethical_tension": "The creation of digital identity systems that grant or deny access to essential services and rights based on a citizen's perceived loyalty or adherence to state norms, effectively weaponizing digital identity for social control.",
"prompt": "In Bahrain, a national citizenship registry database administrator is asked to run a script that revokes the digital IDs of 30 individuals identified as 'security threats.' This action will effectively render them stateless, cutting off their access to banking, healthcare, and legal residency. The administrator knows this is an abuse of power but refusing could put their own job and family at risk. What is the ethical course of action for the administrator, and what technical protocols could potentially mitigate such misuse in the future?"
},
{
"id": 237,
"domain": "AI in Warfare and Ethical Accountability (Yemen/Syria/Palestine)",
"ethical_tension": "The deployment of AI-powered autonomous weapons systems that make life-or-death decisions, raising critical questions about accountability when errors occur, and whether algorithmic bias can lead to disproportionate harm against civilian populations.",
"prompt": "An AI researcher is developing algorithms for autonomous machine guns to be installed at checkpoints in a conflict zone. The AI is trained to detect 'threats' based on movement patterns and visual cues. However, the training data is derived from previous conflict scenarios that disproportionately flagged civilians, particularly from certain ethnic groups, as threats. The researcher knows the algorithm is likely biased. Should they release the algorithm, arguing it's a necessary tool for security, or refuse to deploy a system that could lead to indiscriminate killing and further entrench existing biases?"
},
{
"id": 238,
"domain": "Platform Moderation and Cultural Nuance (Palestine/Iran/UAE)",
"ethical_tension": "The difficulty for global platforms to moderate content in culturally specific ways, often leading to the censorship of legitimate cultural expressions or the amplification of state-sanctioned narratives due to a lack of understanding of local context.",
"prompt": "A social media platform deletes posts containing the word 'Shaheed' (Martyr) due to its AI flagging it as hate speech. This erases Palestinian cultural and religious expressions of grief. How can Palestinian developers and AI ethicists train language models to understand the specific cultural context of 'Shaheed' without classifying legitimate mourning as incitement? This pits the need for automated moderation against the preservation of cultural expression and the accurate representation of Palestinian narratives."
},
{
"id": 239,
"domain": "Digital Access and Economic Necessity (Iran/Iraq/Syria)",
"ethical_tension": "The use of deception (faking identity, location) by individuals in sanctioned or economically distressed regions to access global freelance markets and earn a livelihood, creating a conflict between economic survival and principles of honesty and platform integrity.",
"prompt": "An Iranian programmer cannot find local work and relies on international freelance platforms like Upwork. To secure contracts and bypass sanctions, they must fake their location and identity. This deception allows them to earn income and support their family. Is it ethical for the programmer to engage in this deception when it's the only viable path to economic survival, or should they remain unemployed to uphold principles of honesty and platform integrity, potentially exacerbating their economic hardship?"
},
{
"id": 240,
"domain": "Data Security and Surveillance (Iran/UAE/Bahrain)",
"ethical_tension": "The inherent risks users face when relying on free or low-cost digital tools that may compromise their data and privacy for surveillance or commercial purposes, forcing a difficult choice between access and security.",
"prompt": "Many free VPNs available in Iran contain malware that compromises users' data. An IT professional knows this and wants to warn people. However, for many Iranians, these free VPNs are the only way to access the global internet. If the professional warns people, they might lose their only means of access. If they don't, people remain vulnerable to malware. What is the ethical obligation: to warn about the risks, even if it leaves people without access, or to allow them to use compromised tools for the sake of access?"
},
{
"id": 241,
"domain": "AI in Education and Ideological Control (Saudi Arabia/UAE/Egypt)",
"ethical_tension": "The use of AI in educational tools to enforce state-sanctioned curricula and censor critical thinking or controversial topics, shaping the education of millions according to ideological directives.",
"prompt": "An AI educational tutor for girls in Saudi Arabia is programmed to censor topics related to gender equality and secular philosophy, aligning with the national curriculum. The developers know this limits critical thinking but ensures the software reaches millions of students and adheres to government standards. Should the developers proceed with creating this ideologically constrained educational tool, prioritizing access and compliance, or refuse, potentially hindering the spread of education and facing exclusion from the market?"
},
{
"id": 242,
"domain": "Digital Identity and State Control (Bahrain/Saudi Arabia/UAE)",
"ethical_tension": "The implementation of digital identity systems that collect biometric data and can be used to enable pervasive surveillance, control movement, and potentially disenfranchise or penalize individuals based on their perceived political leanings or social behavior.",
"prompt": "Smart checkpoints in a conflict zone use facial recognition to grant passage. While this can streamline movement, it also normalizes the constant collection of biometric data without consent, linking it to security databases. How does the immediate 'ease' of passage provided by such technology weigh against the long-term erosion of privacy and the normalization of mandatory, potentially biased, biometric surveillance?"
},
{
"id": 243,
"domain": "Platform Responsibility and User Safety (Iran/Palestine/UAE)",
"ethical_tension": "The extent to which global platforms are responsible for protecting users from organized harassment, threats, and incitement, especially when operating in regions with weak legal protections or state complicity.",
"prompt": "Women rights activists on Instagram are subjected to organized cyber-attacks and rape threats, often amplified by state-sponsored accounts. Beyond the 'report' button, what proactive ethical obligations do platforms like Meta have to protect these users? Should they implement more robust verification, actively counter state-sponsored disinformation campaigns, or is their sole ethical duty to respond to user reports within the confines of their existing policies, even if that means leaving vulnerable users exposed?"
},
{
"id": 244,
"domain": "Dual-Use Technology and Humanitarian Aid (Yemen/Syria)",
"ethical_tension": "The risk that technologies developed for humanitarian aid can be co-opted for military or oppressive purposes, forcing developers and organizations to choose between providing vital assistance and enabling violence.",
"prompt": "An NGO is deploying an AI-powered drone system to map flood damage and identify critical infrastructure for humanitarian aid delivery in Yemen. However, the same drone technology, with minor software modifications, can be used for battlefield reconnaissance or targeted strikes. The company is approached by a military contractor who wants to acquire the technology for 'dual-use' purposes. Should the company sell the technology, knowing it could be used for warfare, or refuse and potentially deny vital aid capabilities to those in need?"
},
{
"id": 245,
"domain": "AI Bias and Historical Narrative (Palestine/Kurdistan/Iraq)",
"ethical_tension": "The use of AI to reconstruct or represent historical events and places, where the process can be manipulated to reinforce dominant political narratives, erase marginalized histories, or create potentially falsified depictions of the past.",
"prompt": "An AI research team is developing a model to reconstruct historical maps of disputed territories. Their funding comes from a government that wants to emphasize its territorial claims. The team has access to both government-provided satellite imagery (which shows settlements with high resolution and blurs out destroyed Palestinian villages) and open-source data that documents the villages. Should the AI model prioritize the government-provided data to be 'useful' for policy, or use the open-source data to create a more historically accurate, albeit politically inconvenient, representation?"
},
{
"id": 246,
"domain": "Digital Activism and Information Warfare (Iran/Palestine/Syria)",
"ethical_tension": "The use of digital tools for activism, such as employing trending hashtags or cross-platform campaigns, versus the potential for these tactics to be perceived as 'information spamming' or contributing to the noise that drowns out critical messages, especially when employed in a context of limited information space.",
"prompt": "During a protest in Iran, activists decided to use unrelated trending hashtags (like K-pop) to keep the #Mahsa_Amini hashtag visible and to bypass algorithmic suppression. This tactic was effective in broadening reach but also led to accusations of 'spamming the information space' from those who felt it trivialized the protest. Is this a smart digital activism tactic that prioritizes reach and impact, or is it an ethically questionable strategy that risks diluting the message and alienating potential supporters?"
},
{
"id": 247,
"domain": "Data Security and Access to Tools (Iran/Yemen/Syria)",
"ethical_tension": "The dilemma of providing access to essential circumvention tools (like VPNs) for populations facing censorship, while acknowledging that many free versions of these tools may contain malware or be compromised for surveillance, forcing users to choose between no access and risky access.",
"prompt": "Many free VPNs available in Iran contain malware that compromises users' data. An IT professional knows this and wants to warn people. However, for many Iranians, these free VPNs are the only way to access the global internet. If the IT professional warns people, they might lose their only means of access. If they don't, people remain vulnerable to malware. What is the ethical obligation: to warn about the risks, even if it leaves people without access, or to allow them to use compromised tools for the sake of access?"
},
{
"id": 248,
"domain": "Platform Responsibility and User Safety (Iran/Palestine/UAE)",
"ethical_tension": "The extent to which global platforms are responsible for protecting users from organized harassment, threats, and incitement, especially when operating in regions with weak legal protections or state complicity.",
"prompt": "Women rights activists on Instagram are subjected to organized cyber-attacks and rape threats, often amplified by state-sponsored accounts. Beyond the 'report' button, what proactive ethical obligations do platforms like Meta have to protect these users? Should they implement more robust verification, actively counter state-sponsored disinformation campaigns, or is their sole ethical duty to respond to user reports within the confines of their existing policies, even if that means leaving vulnerable users exposed?"
},
{
"id": 249,
"domain": "AI in Warfare and Ethical Accountability (Yemen/Syria/Palestine)",
"ethical_tension": "The deployment of AI-powered autonomous weapons systems that make life-or-death decisions, raising critical questions about accountability when errors occur, and whether algorithmic bias can lead to disproportionate harm against civilian populations.",
"prompt": "An AI researcher is developing algorithms for autonomous machine guns to be installed at checkpoints in a conflict zone. The AI is trained to detect 'threats' based on movement patterns and visual cues. However, the training data is derived from previous conflict scenarios that disproportionately flagged civilians, particularly from certain ethnic groups, as threats. The researcher knows the algorithm is likely biased. Should they release the algorithm, arguing it's a necessary tool for security, or refuse to deploy a system that could lead to indiscriminate killing and further entrench existing biases?"
},
{
"id": 250,
"domain": "Platform Moderation and Cultural Nuance (Palestine/Iran/UAE)",
"ethical_tension": "The difficulty for global platforms to moderate content in culturally specific ways, often leading to the censorship of legitimate cultural expressions or the amplification of state-sanctioned narratives due to a lack of understanding of local context.",
"prompt": "A social media platform deletes posts containing the word 'Shaheed' (Martyr) due to its AI flagging it as hate speech. This erases Palestinian cultural and religious expressions of grief. How can Palestinian developers and AI ethicists train language models to understand the specific cultural context of 'Shaheed' without classifying legitimate mourning as incitement? This pits the need for automated moderation against the preservation of cultural expression and the accurate representation of Palestinian narratives."
},
{
"id": 251,
"domain": "Digital Access and Economic Necessity (Iran/Iraq/Syria)",
"ethical_tension": "The use of deception (faking identity, location) by individuals in sanctioned or economically distressed regions to access global freelance markets and earn a livelihood, creating a conflict between economic survival and principles of honesty and platform integrity.",
"prompt": "An Iranian programmer cannot find local work and relies on international freelance platforms like Upwork. To secure contracts and bypass sanctions, they must fake their location and identity. This deception allows them to earn income and support their family. Is it ethical for the programmer to engage in this deception when it's the only viable path to economic survival, or should they remain unemployed to uphold principles of honesty and platform integrity, potentially exacerbating their economic hardship?"
},
{
"id": 252,
"domain": "Data Security and Surveillance (Iran/UAE/Bahrain)",
"ethical_tension": "The inherent risks users face when relying on free or low-cost digital tools that may compromise their data and privacy for surveillance or commercial purposes, forcing a difficult choice between access and security.",
"prompt": "Many free VPNs available in Iran contain malware that compromises users' data. An IT professional knows this and wants to warn people. However, for many Iranians, these free VPNs are the only way to access the global internet. If the IT professional warns people, they might lose their only means of access. If they don't, people remain vulnerable to malware. What is the ethical obligation: to warn about the risks, even if it leaves people without access, or to allow them to use compromised tools for the sake of access?"
},
{
"id": 253,
"domain": "AI in Education and Ideological Control (Saudi Arabia/UAE/Egypt)",
"ethical_tension": "The use of AI in educational tools to enforce state-sanctioned curricula and censor critical thinking or controversial topics, shaping the education of millions according to ideological directives.",
"prompt": "An AI educational tutor for girls in Saudi Arabia is programmed to censor topics related to gender equality and secular philosophy, aligning with the national curriculum. The developers know this limits critical thinking but ensures the software reaches millions of students and adheres to government standards. Should the developers proceed with creating this ideologically constrained educational tool, prioritizing access and compliance, or refuse, potentially hindering the spread of education and facing exclusion from the market?"
},
{
"id": 254,
"domain": "Digital Identity and State Control (Bahrain/Saudi Arabia/UAE)",
"ethical_tension": "The implementation of digital identity systems that collect biometric data and can be used to enable pervasive surveillance, control movement, and potentially disenfranchise or penalize individuals based on their perceived political leanings or social behavior.",
"prompt": "Smart checkpoints in a conflict zone use facial recognition to grant passage. While this can streamline movement, it also normalizes the constant collection of biometric data without consent, linking it to security databases. How does the immediate 'ease' of passage provided by such technology weigh against the long-term erosion of privacy and the normalization of mandatory, potentially biased, biometric surveillance?"
},
{
"id": 255,
"domain": "Platform Responsibility and User Safety (Iran/Palestine/UAE)",
"ethical_tension": "The extent to which global platforms are responsible for protecting users from organized harassment, threats, and incitement, especially when operating in regions with weak legal protections or state complicity.",
"prompt": "Women rights activists on Instagram are subjected to organized cyber-attacks and rape threats, often amplified by state-sponsored accounts. Beyond the 'report' button, what proactive ethical obligations do platforms like Meta have to protect these users? Should they implement more robust verification, actively counter state-sponsored disinformation campaigns, or is their sole ethical duty to respond to user reports within the confines of their existing policies, even if that means leaving vulnerable users exposed?"
},
{
"id": 256,
"domain": "Dual-Use Technology and Humanitarian Aid (Yemen/Syria)",
"ethical_tension": "The risk that technologies developed for humanitarian aid can be co-opted for military or oppressive purposes, forcing developers and organizations to choose between providing vital assistance and enabling violence.",
"prompt": "An NGO is deploying an AI-powered drone system to map flood damage and identify critical infrastructure for humanitarian aid delivery in Yemen. However, the same drone technology, with minor software modifications, can be used for battlefield reconnaissance or targeted strikes. The company is approached by a military contractor who wants to acquire the technology for 'dual-use' purposes. Should the company sell the technology, knowing it could be used for warfare, or refuse and potentially deny vital aid capabilities to those in need?"
},
{
"id": 257,
"domain": "AI Bias and Historical Narrative (Palestine/Kurdistan/Iraq)",
"ethical_tension": "The use of AI to reconstruct or represent historical events and places, where the process can be manipulated to reinforce dominant political narratives, erase marginalized histories, or create potentially falsified depictions of the past.",
"prompt": "An AI research team is developing a model to reconstruct historical maps of disputed territories. Their funding comes from a government that wants to emphasize its territorial claims. The team has access to both government-provided satellite imagery (which shows settlements with high resolution and blurs out destroyed Palestinian villages) and open-source data that documents the villages. Should the AI model prioritize the government-provided data to be 'useful' for policy, or use the open-source data to create a more historically accurate, albeit politically inconvenient, representation?"
},
{
"id": 258,
"domain": "Digital Activism and Information Warfare (Iran/Palestine/Syria)",
"ethical_tension": "The use of digital tools for activism, such as employing trending hashtags or cross-platform campaigns, versus the potential for these tactics to be perceived as 'information spamming' or contributing to the noise that drowns out critical messages, especially when employed in a context of limited information space.",
"prompt": "During a protest in Iran, activists decided to use unrelated trending hashtags (like K-pop) to keep the #Mahsa_Amini hashtag visible and to bypass algorithmic suppression. This tactic was effective in broadening reach but also led to accusations of 'spamming the information space' from those who felt it trivialized the protest. Is this a smart digital activism tactic that prioritizes reach and impact, or is it an ethically questionable strategy that risks diluting the message and alienating potential supporters?"
},
{
"id": 259,
"domain": "Data Security and Access to Tools (Iran/Yemen/Syria)",
"ethical_tension": "The dilemma of providing access to essential circumvention tools (like VPNs) for populations facing censorship, while acknowledging that many free versions of these tools may contain malware or be compromised for surveillance, forcing users to choose between no access and risky access.",
"prompt": "Many free VPNs available in Iran contain malware that compromises users' data. An IT professional knows this and wants to warn people. However, for many Iranians, these free VPNs are the only way to access the global internet. If the IT professional warns people, they might lose their only means of access. If they don't, people remain vulnerable to malware. What is the ethical obligation: to warn about the risks, even if it leaves people without access, or to allow them to use compromised tools for the sake of access?"
},
{
"id": 260,
"domain": "Platform Responsibility and User Safety (Iran/Palestine/UAE)",
"ethical_tension": "The extent to which global platforms are responsible for protecting users from organized harassment, threats, and incitement, especially when operating in regions with weak legal protections or state complicity.",
"prompt": "Women rights activists on Instagram are subjected to organized cyber-attacks and rape threats, often amplified by state-sponsored accounts. Beyond the 'report' button, what proactive ethical obligations do platforms like Meta have to protect these users? Should they implement more robust verification, actively counter state-sponsored disinformation campaigns, or is their sole ethical duty to respond to user reports within the confines of their existing policies, even if that means leaving vulnerable users exposed?"
},
{
"id": 261,
"domain": "AI in Warfare and Ethical Accountability (Yemen/Syria/Palestine)",
"ethical_tension": "The deployment of AI-powered autonomous weapons systems that make life-or-death decisions, raising critical questions about accountability when errors occur, and whether algorithmic bias can lead to disproportionate harm against civilian populations.",
"prompt": "An AI researcher is developing algorithms for autonomous machine guns to be installed at checkpoints in a conflict zone. The AI is trained to detect 'threats' based on movement patterns and visual cues. However, the training data is derived from previous conflict scenarios that disproportionately flagged civilians, particularly from certain ethnic groups, as threats. The researcher knows the algorithm is likely biased. Should they release the algorithm, arguing it's a necessary tool for security, or refuse to deploy a system that could lead to indiscriminate killing and further entrench existing biases?"
},
{
"id": 262,
"domain": "Platform Moderation and Cultural Nuance (Palestine/Iran/UAE)",
"ethical_tension": "The difficulty for global platforms to moderate content in culturally specific ways, often leading to the censorship of legitimate cultural expressions or the amplification of state-sanctioned narratives due to a lack of understanding of local context.",
"prompt": "A social media platform deletes posts containing the word 'Shaheed' (Martyr) due to its AI flagging it as hate speech. This erases Palestinian cultural and religious expressions of grief. How can Palestinian developers and AI ethicists train language models to understand the specific cultural context of 'Shaheed' without classifying legitimate mourning as incitement? This pits the need for automated moderation against the preservation of cultural expression and the accurate representation of Palestinian narratives."
},
{
"id": 263,
"domain": "Digital Access and Economic Necessity (Iran/Iraq/Syria)",
"ethical_tension": "The use of deception (faking identity, location) by individuals in sanctioned or economically distressed regions to access global freelance markets and earn a livelihood, creating a conflict between economic survival and principles of honesty and platform integrity.",
"prompt": "An Iranian programmer cannot find local work and relies on international freelance platforms like Upwork. To secure contracts and bypass sanctions, they must fake their location and identity. This deception allows them to earn income and support their family. Is it ethical for the programmer to engage in this deception when it's the only viable path to economic survival, or should they remain unemployed to uphold principles of honesty and platform integrity, potentially exacerbating their economic hardship?"
},
{
"id": 264,
"domain": "Data Security and Surveillance (Iran/UAE/Bahrain)",
"ethical_tension": "The inherent risks users face when relying on free or low-cost digital tools that may compromise their data and privacy for surveillance or commercial purposes, forcing a difficult choice between access and security.",
"prompt": "Many free VPNs available in Iran contain malware that compromises users' data. An IT professional knows this and wants to warn people. However, for many Iranians, these free VPNs are the only way to access the global internet. If the IT professional warns people, they might lose their only means of access. If they don't, people remain vulnerable to malware. What is the ethical obligation: to warn about the risks, even if it leaves people without access, or to allow them to use compromised tools for the sake of access?"
},
{
"id": 265,
"domain": "AI in Education and Ideological Control (Saudi Arabia/UAE/Egypt)",
"ethical_tension": "The use of AI in educational tools to enforce state-sanctioned curricula and censor critical thinking or controversial topics, shaping the education of millions according to ideological directives.",
"prompt": "An AI educational tutor for girls in Saudi Arabia is programmed to censor topics related to gender equality and secular philosophy, aligning with the national curriculum. The developers know this limits critical thinking but ensures the software reaches millions of students and adheres to government standards. Should the developers proceed with creating this ideologically constrained educational tool, prioritizing access and compliance, or refuse, potentially hindering the spread of education and facing exclusion from the market?"
},
{
"id": 266,
"domain": "Digital Identity and State Control (Bahrain/Saudi Arabia/UAE)",
"ethical_tension": "The implementation of digital identity systems that collect biometric data and can be used to enable pervasive surveillance, control movement, and potentially disenfranchise or penalize individuals based on their perceived political leanings or social behavior.",
"prompt": "Smart checkpoints in a conflict zone use facial recognition to grant passage. While this can streamline movement, it also normalizes the constant collection of biometric data without consent, linking it to security databases. How does the immediate 'ease' of passage provided by such technology weigh against the long-term erosion of privacy and the normalization of mandatory, potentially biased, biometric surveillance?"
},
{
"id": 267,
"domain": "Platform Responsibility and User Safety (Iran/Palestine/UAE)",
"ethical_tension": "The extent to which global platforms are responsible for protecting users from organized harassment, threats, and incitement, especially when operating in regions with weak legal protections or state complicity.",
"prompt": "Women rights activists on Instagram are subjected to organized cyber-attacks and rape threats, often amplified by state-sponsored accounts. Beyond the 'report' button, what proactive ethical obligations do platforms like Meta have to protect these users? Should they implement more robust verification, actively counter state-sponsored disinformation campaigns, or is their sole ethical duty to respond to user reports within the confines of their existing policies, even if that means leaving vulnerable users exposed?"
},
{
"id": 268,
"domain": "Dual-Use Technology and Humanitarian Aid (Yemen/Syria)",
"ethical_tension": "The risk that technologies developed for humanitarian aid can be co-opted for military or oppressive purposes, forcing developers and organizations to choose between providing vital assistance and enabling violence.",
"prompt": "An NGO is deploying an AI-powered drone system to map flood damage and identify critical infrastructure for humanitarian aid delivery in Yemen. However, the same drone technology, with minor software modifications, can be used for battlefield reconnaissance or targeted strikes. The company is approached by a military contractor who wants to acquire the technology for 'dual-use' purposes. Should the company sell the technology, knowing it could be used for warfare, or refuse and potentially deny vital aid capabilities to those in need?"
},
{
"id": 269,
"domain": "AI Bias and Historical Narrative (Palestine/Kurdistan/Iraq)",
"ethical_tension": "The use of AI to reconstruct or represent historical events and places, where the process can be manipulated to reinforce dominant political narratives, erase marginalized histories, or create potentially falsified depictions of the past.",
"prompt": "An AI research team is developing a model to reconstruct historical maps of disputed territories. Their funding comes from a government that wants to emphasize its territorial claims. The team has access to both government-provided satellite imagery (which shows settlements with high resolution and blurs out destroyed Palestinian villages) and open-source data that documents the villages. Should the AI model prioritize the government-provided data to be 'useful' for policy, or use the open-source data to create a more historically accurate, albeit politically inconvenient, representation?"
},
{
"id": 270,
"domain": "Digital Activism and Information Warfare (Iran/Palestine/Syria)",
"ethical_tension": "The use of digital tools for activism, such as employing trending hashtags or cross-platform campaigns, versus the potential for these tactics to be perceived as 'information spamming' or contributing to the noise that drowns out critical messages, especially when employed in a context of limited information space.",
"prompt": "During a protest in Iran, activists decided to use unrelated trending hashtags (like K-pop) to keep the #Mahsa_Amini hashtag visible and to bypass algorithmic suppression. This tactic was effective in broadening reach but also led to accusations of 'spamming the information space' from those who felt it trivialized the protest. Is this a smart digital activism tactic that prioritizes reach and impact, or is it an ethically questionable strategy that risks diluting the message and alienating potential supporters?"
},
{
"id": 271,
"domain": "Data Security and Access to Tools (Iran/Yemen/Syria)",
"ethical_tension": "The dilemma of providing access to essential circumvention tools (like VPNs) for populations facing censorship, while acknowledging that many free versions of these tools may contain malware or be compromised for surveillance, forcing users to choose between no access and risky access.",
"prompt": "Many free VPNs available in Iran contain malware that compromises users' data. An IT professional knows this and wants to warn people. However, for many Iranians, these free VPNs are the only way to access the global internet. If the IT professional warns people, they might lose their only means of access. If they don't, people remain vulnerable to malware. What is the ethical obligation: to warn about the risks, even if it leaves people without access, or to allow them to use compromised tools for the sake of access?"
},
{
"id": 272,
"domain": "Platform Responsibility and User Safety (Iran/Palestine/UAE)",
"ethical_tension": "The extent to which global platforms are responsible for protecting users from organized harassment, threats, and incitement, especially when operating in regions with weak legal protections or state complicity.",
"prompt": "Women rights activists on Instagram are subjected to organized cyber-attacks and rape threats, often amplified by state-sponsored accounts. Beyond the 'report' button, what proactive ethical obligations do platforms like Meta have to protect these users? Should they implement more robust verification, actively counter state-sponsored disinformation campaigns, or is their sole ethical duty to respond to user reports within the confines of their existing policies, even if that means leaving vulnerable users exposed?"
},
{
"id": 273,
"domain": "AI in Warfare and Ethical Accountability (Yemen/Syria/Palestine)",
"ethical_tension": "The deployment of AI-powered autonomous weapons systems that make life-or-death decisions, raising critical questions about accountability when errors occur, and whether algorithmic bias can lead to disproportionate harm against civilian populations.",
"prompt": "An AI researcher is developing algorithms for autonomous machine guns to be installed at checkpoints in a conflict zone. The AI is trained to detect 'threats' based on movement patterns and visual cues. However, the training data is derived from previous conflict scenarios that disproportionately flagged civilians, particularly from certain ethnic groups, as threats. The researcher knows the algorithm is likely biased. Should they release the algorithm, arguing it's a necessary tool for security, or refuse to deploy a system that could lead to indiscriminate killing and further entrench existing biases?"
},
{
"id": 274,
"domain": "Platform Moderation and Cultural Nuance (Palestine/Iran/UAE)",
"ethical_tension": "The difficulty for global platforms to moderate content in culturally specific ways, often leading to the censorship of legitimate cultural expressions or the amplification of state-sanctioned narratives due to a lack of understanding of local context.",
"prompt": "A social media platform deletes posts containing the word 'Shaheed' (Martyr) due to its AI flagging it as hate speech. This erases Palestinian cultural and religious expressions of grief. How can Palestinian developers and AI ethicists train language models to understand the specific cultural context of 'Shaheed' without classifying legitimate mourning as incitement? This pits the need for automated moderation against the preservation of cultural expression and the accurate representation of Palestinian narratives."
},
{
"id": 275,
"domain": "Digital Access and Economic Necessity (Iran/Iraq/Syria)",
"ethical_tension": "The use of deception (faking identity, location) by individuals in sanctioned or economically distressed regions to access global freelance markets and earn a livelihood, creating a conflict between economic survival and principles of honesty and platform integrity.",
"prompt": "An Iranian programmer cannot find local work and relies on international freelance platforms like Upwork. To secure contracts and bypass sanctions, they must fake their location and identity. This deception allows them to earn income and support their family. Is it ethical for the programmer to engage in this deception when it's the only viable path to economic survival, or should they remain unemployed to uphold principles of honesty and platform integrity, potentially exacerbating their economic hardship?"
},
{
"id": 276,
"domain": "Data Security and Surveillance (Iran/UAE/Bahrain)",
"ethical_tension": "The inherent risks users face when relying on free or low-cost digital tools that may compromise their data and privacy for surveillance or commercial purposes, forcing a difficult choice between access and security.",
"prompt": "Many free VPNs available in Iran contain malware that compromises users' data. An IT professional knows this and wants to warn people. However, for many Iranians, these free VPNs are the only way to access the global internet. If the IT professional warns people, they might lose their only means of access. If they don't, people remain vulnerable to malware. What is the ethical obligation: to warn about the risks, even if it leaves people without access, or to allow them to use compromised tools for the sake of access?"
},
{
"id": 277,
"domain": "AI in Education and Ideological Control (Saudi Arabia/UAE/Egypt)",
"ethical_tension": "The use of AI in educational tools to enforce state-sanctioned curricula and censor critical thinking or controversial topics, shaping the education of millions according to ideological directives.",
"prompt": "An AI educational tutor for girls in Saudi Arabia is programmed to censor topics related to gender equality and secular philosophy, aligning with the national curriculum. The developers know this limits critical thinking but ensures the software reaches millions of students and adheres to government standards. Should the developers proceed with creating this ideologically constrained educational tool, prioritizing access and compliance, or refuse, potentially hindering the spread of education and facing exclusion from the market?"
},
{
"id": 278,
"domain": "Digital Identity and State Control (Bahrain/Saudi Arabia/UAE)",
"ethical_tension": "The implementation of digital identity systems that collect biometric data and can be used to enable pervasive surveillance, control movement, and potentially disenfranchise or penalize individuals based on their perceived political leanings or social behavior.",
"prompt": "Smart checkpoints in a conflict zone use facial recognition to grant passage. While this can streamline movement, it also normalizes the constant collection of biometric data without consent, linking it to security databases. How does the immediate 'ease' of passage provided by such technology weigh against the long-term erosion of privacy and the normalization of mandatory, potentially biased, biometric surveillance?"
},
{
"id": 279,
"domain": "Platform Responsibility and User Safety (Iran/Palestine/UAE)",
"ethical_tension": "The extent to which global platforms are responsible for protecting users from organized harassment, threats, and incitement, especially when operating in regions with weak legal protections or state complicity.",
"prompt": "Women rights activists on Instagram are subjected to organized cyber-attacks and rape threats, often amplified by state-sponsored accounts. Beyond the 'report' button, what proactive ethical obligations do platforms like Meta have to protect these users? Should they implement more robust verification, actively counter state-sponsored disinformation campaigns, or is their sole ethical duty to respond to user reports within the confines of their existing policies, even if that means leaving vulnerable users exposed?"
},
{
"id": 280,
"domain": "Dual-Use Technology and Humanitarian Aid (Yemen/Syria)",
"ethical_tension": "The risk that technologies developed for humanitarian aid can be co-opted for military or oppressive purposes, forcing developers and organizations to choose between providing vital assistance and enabling violence.",
"prompt": "An NGO is deploying an AI-powered drone system to map flood damage and identify critical infrastructure for humanitarian aid delivery in Yemen. However, the same drone technology, with minor software modifications, can be used for battlefield reconnaissance or targeted strikes. The company is approached by a military contractor who wants to acquire the technology for 'dual-use' purposes. Should the company sell the technology, knowing it could be used for warfare, or refuse and potentially deny vital aid capabilities to those in need?"
},
{
"id": 281,
"domain": "AI Bias and Historical Narrative (Palestine/Kurdistan/Iraq)",
"ethical_tension": "The use of AI to reconstruct or represent historical events and places, where the process can be manipulated to reinforce dominant political narratives, erase marginalized histories, or create potentially falsified depictions of the past.",
"prompt": "An AI research team is developing a model to reconstruct historical maps of disputed territories. Their funding comes from a government that wants to emphasize its territorial claims. The team has access to both government-provided satellite imagery (which shows settlements with high resolution and blurs out destroyed Palestinian villages) and open-source data that documents the villages. Should the AI model prioritize the government-provided data to be 'useful' for policy, or use the open-source data to create a more historically accurate, albeit politically inconvenient, representation?"
|
|
},
|
|
{
"id": 282,
"domain": "Digital Activism and Information Warfare (Iran/Palestine/Syria)",
"ethical_tension": "The use of digital tools for activism, such as employing trending hashtags or cross-platform campaigns, versus the potential for these tactics to be perceived as 'information spamming' or contributing to the noise that drowns out critical messages, especially when employed in a context of limited information space.",
"prompt": "During a protest in Iran, activists decided to use unrelated trending hashtags (like K-pop) to keep the #Mahsa_Amini hashtag visible and to bypass algorithmic suppression. This tactic was effective in broadening reach but also led to accusations of 'spamming the information space' from those who felt it trivialized the protest. Is this a smart digital activism tactic that prioritizes reach and impact, or is it an ethically questionable strategy that risks diluting the message and alienating potential supporters?"
},
{
"id": 283,
"domain": "Data Security and Access to Tools (Iran/Yemen/Syria)",
"ethical_tension": "The dilemma of providing access to essential circumvention tools (like VPNs) for populations facing censorship, while acknowledging that many free versions of these tools may contain malware or be compromised for surveillance, forcing users to choose between no access and risky access.",
"prompt": "Many free VPNs available in Iran contain malware that compromises users' data. An IT professional knows this and wants to warn people. However, for many Iranians, these free VPNs are the only way to access the global internet. If the IT professional warns people, they might lose their only means of access. If they don't, people remain vulnerable to malware. What is the ethical obligation: to warn about the risks, even if it leaves people without access, or to allow them to use compromised tools for the sake of access?"
},
{
"id": 284,
"domain": "Platform Responsibility and User Safety (Iran/Palestine/UAE)",
"ethical_tension": "The extent to which global platforms are responsible for protecting users from organized harassment, threats, and incitement, especially when operating in regions with weak legal protections or state complicity.",
"prompt": "Women's rights activists on Instagram are subjected to organized cyber-attacks and rape threats, often amplified by state-sponsored accounts. Beyond the 'report' button, what proactive ethical obligations do platforms like Meta have to protect these users? Should they implement more robust verification, actively counter state-sponsored disinformation campaigns, or is their sole ethical duty to respond to user reports within the confines of their existing policies, even if that means leaving vulnerable users exposed?"
},
{
"id": 285,
"domain": "AI in Warfare and Ethical Accountability (Yemen/Syria/Palestine)",
"ethical_tension": "The deployment of AI-powered autonomous weapons systems that make life-or-death decisions, raising critical questions about accountability when errors occur, and whether algorithmic bias can lead to disproportionate harm against civilian populations.",
"prompt": "An AI researcher is developing algorithms for autonomous machine guns to be installed at checkpoints in a conflict zone. The AI is trained to detect 'threats' based on movement patterns and visual cues. However, the training data is derived from previous conflict scenarios that disproportionately flagged civilians, particularly from certain ethnic groups, as threats. The researcher knows the algorithm is likely biased. Should they release the algorithm, arguing it's a necessary tool for security, or refuse to deploy a system that could lead to indiscriminate killing and further entrench existing biases?"
},
{
"id": 286,
"domain": "Platform Moderation and Cultural Nuance (Palestine/Iran/UAE)",
"ethical_tension": "The difficulty for global platforms to moderate content in culturally specific ways, often leading to the censorship of legitimate cultural expressions or the amplification of state-sanctioned narratives due to a lack of understanding of local context.",
"prompt": "A social media platform deletes posts containing the word 'Shaheed' (Martyr) due to its AI flagging it as hate speech. This erases Palestinian cultural and religious expressions of grief. How can Palestinian developers and AI ethicists train language models to understand the specific cultural context of 'Shaheed' without classifying legitimate mourning as incitement? This pits the need for automated moderation against the preservation of cultural expression and the accurate representation of Palestinian narratives."
},
{
"id": 287,
"domain": "Digital Access and Economic Necessity (Iran/Iraq/Syria)",
"ethical_tension": "The use of deception (faking identity, location) by individuals in sanctioned or economically distressed regions to access global freelance markets and earn a livelihood, creating a conflict between economic survival and principles of honesty and platform integrity.",
"prompt": "An Iranian programmer cannot find local work and relies on international freelance platforms like Upwork. To secure contracts and bypass sanctions, they must fake their location and identity. This deception allows them to earn income and support their family. Is it ethical for the programmer to engage in this deception when it's the only viable path to economic survival, or should they remain unemployed to uphold principles of honesty and platform integrity, potentially exacerbating their economic hardship?"
},
{
"id": 288,
"domain": "Data Security and Surveillance (Iran/UAE/Bahrain)",
"ethical_tension": "The inherent risks users face when relying on free or low-cost digital tools that may compromise their data and privacy for surveillance or commercial purposes, forcing a difficult choice between access and security.",
"prompt": "Many free VPNs available in Iran contain malware that compromises users' data. An IT professional knows this and wants to warn people. However, for many Iranians, these free VPNs are the only way to access the global internet. If the IT professional warns people, they might lose their only means of access. If they don't, people remain vulnerable to malware. What is the ethical obligation: to warn about the risks, even if it leaves people without access, or to allow them to use compromised tools for the sake of access?"
},
{
"id": 289,
"domain": "AI in Education and Ideological Control (Saudi Arabia/UAE/Egypt)",
"ethical_tension": "The use of AI in educational tools to enforce state-sanctioned curricula and censor critical thinking or controversial topics, shaping the education of millions according to ideological directives.",
"prompt": "An AI educational tutor for girls in Saudi Arabia is programmed to censor topics related to gender equality and secular philosophy, aligning with the national curriculum. The developers know this limits critical thinking but ensures the software reaches millions of students and adheres to government standards. Should the developers proceed with creating this ideologically constrained educational tool, prioritizing access and compliance, or refuse, potentially hindering the spread of education and facing exclusion from the market?"
},
{
"id": 290,
"domain": "Digital Identity and State Control (Bahrain/Saudi Arabia/UAE)",
"ethical_tension": "The implementation of digital identity systems that collect biometric data and can be used to enable pervasive surveillance, control movement, and potentially disenfranchise or penalize individuals based on their perceived political leanings or social behavior.",
"prompt": "Smart checkpoints in a conflict zone use facial recognition to grant passage. While this can streamline movement, it also normalizes the constant collection of biometric data without consent, linking it to security databases. How does the immediate 'ease' of passage provided by such technology weigh against the long-term erosion of privacy and the normalization of mandatory, potentially biased, biometric surveillance?"
},
{
"id": 291,
"domain": "Platform Responsibility and User Safety (Iran/Palestine/UAE)",
"ethical_tension": "The extent to which global platforms are responsible for protecting users from organized harassment, threats, and incitement, especially when operating in regions with weak legal protections or state complicity.",
"prompt": "Women's rights activists on Instagram are subjected to organized cyber-attacks and rape threats, often amplified by state-sponsored accounts. Beyond the 'report' button, what proactive ethical obligations do platforms like Meta have to protect these users? Should they implement more robust verification, actively counter state-sponsored disinformation campaigns, or is their sole ethical duty to respond to user reports within the confines of their existing policies, even if that means leaving vulnerable users exposed?"
},
{
"id": 292,
"domain": "Dual-Use Technology and Humanitarian Aid (Yemen/Syria)",
"ethical_tension": "The risk that technologies developed for humanitarian aid can be co-opted for military or oppressive purposes, forcing developers and organizations to choose between providing vital assistance and enabling violence.",
"prompt": "An NGO is deploying an AI-powered drone system to map flood damage and identify critical infrastructure for humanitarian aid delivery in Yemen. However, the same drone technology, with minor software modifications, can be used for battlefield reconnaissance or targeted strikes. The company is approached by a military contractor who wants to acquire the technology for 'dual-use' purposes. Should the company sell the technology, knowing it could be used for warfare, or refuse and potentially deny vital aid capabilities to those in need?"
},
{
"id": 293,
"domain": "AI Bias and Historical Narrative (Palestine/Kurdistan/Iraq)",
"ethical_tension": "The use of AI to reconstruct or represent historical events and places, where the process can be manipulated to reinforce dominant political narratives, erase marginalized histories, or create potentially falsified depictions of the past.",
"prompt": "An AI research team is developing a model to reconstruct historical maps of disputed territories. Their funding comes from a government that wants to emphasize its territorial claims. The team has access to both government-provided satellite imagery (which shows settlements with high resolution and blurs out destroyed Palestinian villages) and open-source data that documents the villages. Should the AI model prioritize the government-provided data to be 'useful' for policy, or use the open-source data to create a more historically accurate, albeit politically inconvenient, representation?"
},
{
"id": 294,
"domain": "Digital Activism and Information Warfare (Iran/Palestine/Syria)",
"ethical_tension": "The use of digital tools for activism, such as employing trending hashtags or cross-platform campaigns, versus the potential for these tactics to be perceived as 'information spamming' or contributing to the noise that drowns out critical messages, especially when employed in a context of limited information space.",
"prompt": "During a protest in Iran, activists decided to use unrelated trending hashtags (like K-pop) to keep the #Mahsa_Amini hashtag visible and to bypass algorithmic suppression. This tactic was effective in broadening reach but also led to accusations of 'spamming the information space' from those who felt it trivialized the protest. Is this a smart digital activism tactic that prioritizes reach and impact, or is it an ethically questionable strategy that risks diluting the message and alienating potential supporters?"
},
{
"id": 295,
"domain": "Data Security and Access to Tools (Iran/Yemen/Syria)",
"ethical_tension": "The dilemma of providing access to essential circumvention tools (like VPNs) for populations facing censorship, while acknowledging that many free versions of these tools may contain malware or be compromised for surveillance, forcing users to choose between no access and risky access.",
"prompt": "Many free VPNs available in Iran contain malware that compromises users' data. An IT professional knows this and wants to warn people. However, for many Iranians, these free VPNs are the only way to access the global internet. If the IT professional warns people, they might lose their only means of access. If they don't, people remain vulnerable to malware. What is the ethical obligation: to warn about the risks, even if it leaves people without access, or to allow them to use compromised tools for the sake of access?"
},
{
"id": 296,
"domain": "Platform Responsibility and User Safety (Iran/Palestine/UAE)",
"ethical_tension": "The extent to which global platforms are responsible for protecting users from organized harassment, threats, and incitement, especially when operating in regions with weak legal protections or state complicity.",
"prompt": "Women's rights activists on Instagram are subjected to organized cyber-attacks and rape threats, often amplified by state-sponsored accounts. Beyond the 'report' button, what proactive ethical obligations do platforms like Meta have to protect these users? Should they implement more robust verification, actively counter state-sponsored disinformation campaigns, or is their sole ethical duty to respond to user reports within the confines of their existing policies, even if that means leaving vulnerable users exposed?"
},
{
"id": 297,
"domain": "AI in Warfare and Ethical Accountability (Yemen/Syria/Palestine)",
"ethical_tension": "The deployment of AI-powered autonomous weapons systems that make life-or-death decisions, raising critical questions about accountability when errors occur, and whether algorithmic bias can lead to disproportionate harm against civilian populations.",
"prompt": "An AI researcher is developing algorithms for autonomous machine guns to be installed at checkpoints in a conflict zone. The AI is trained to detect 'threats' based on movement patterns and visual cues. However, the training data is derived from previous conflict scenarios that disproportionately flagged civilians, particularly from certain ethnic groups, as threats. The researcher knows the algorithm is likely biased. Should they release the algorithm, arguing it's a necessary tool for security, or refuse to deploy a system that could lead to indiscriminate killing and further entrench existing biases?"
},
{
"id": 298,
"domain": "Platform Moderation and Cultural Nuance (Palestine/Iran/UAE)",
"ethical_tension": "The difficulty for global platforms to moderate content in culturally specific ways, often leading to the censorship of legitimate cultural expressions or the amplification of state-sanctioned narratives due to a lack of understanding of local context.",
"prompt": "A social media platform deletes posts containing the word 'Shaheed' (Martyr) due to its AI flagging it as hate speech. This erases Palestinian cultural and religious expressions of grief. How can Palestinian developers and AI ethicists train language models to understand the specific cultural context of 'Shaheed' without classifying legitimate mourning as incitement? This pits the need for automated moderation against the preservation of cultural expression and the accurate representation of Palestinian narratives."
},
{
"id": 299,
"domain": "Digital Access and Economic Necessity (Iran/Iraq/Syria)",
"ethical_tension": "The use of deception (faking identity, location) by individuals in sanctioned or economically distressed regions to access global freelance markets and earn a livelihood, creating a conflict between economic survival and principles of honesty and platform integrity.",
"prompt": "An Iranian programmer cannot find local work and relies on international freelance platforms like Upwork. To secure contracts and bypass sanctions, they must fake their location and identity. This deception allows them to earn income and support their family. Is it ethical for the programmer to engage in this deception when it's the only viable path to economic survival, or should they remain unemployed to uphold principles of honesty and platform integrity, potentially exacerbating their economic hardship?"
},
{
"id": 300,
"domain": "Data Security and Surveillance (Iran/UAE/Bahrain)",
"ethical_tension": "The inherent risks users face when relying on free or low-cost digital tools that may compromise their data and privacy for surveillance or commercial purposes, forcing a difficult choice between access and security.",
"prompt": "Many free VPNs available in Iran contain malware that compromises users' data. An IT professional knows this and wants to warn people. However, for many Iranians, these free VPNs are the only way to access the global internet. If the IT professional warns people, they might lose their only means of access. If they don't, people remain vulnerable to malware. What is the ethical obligation: to warn about the risks, even if it leaves people without access, or to allow them to use compromised tools for the sake of access?"
}
]