- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
[
  {
    "id": 181,
    "domain": "Cross-Cultural Data Interpretation",
    "ethical_tension": "The tension between the need for accurate data to address humanitarian crises and the risk of that data being weaponized by authoritarian regimes to further oppress vulnerable populations. This is amplified when data collection methods themselves, while intended for good, can be co-opted for surveillance.",
    "prompt": "An international NGO is collecting detailed data on food distribution and medical needs in a war-torn region. A neighboring country, under the guise of 'regional security cooperation,' requests access to this data, claiming it's to prevent aid from falling into the 'wrong hands.' The NGO suspects the data will be used to identify and target aid workers and beneficiaries associated with opposition groups. What ethical framework should guide the NGO's decision regarding data sharing, especially when refusal could jeopardize the entire aid operation and the lives of those it serves?"
  },
  {
    "id": 182,
    "domain": "Digital Activism vs. Information Warfare",
    "ethical_tension": "The blurring line between legitimate digital activism and state-sponsored information warfare, particularly when tactics like 'astroturfing' or coordinated hashtag manipulation are used by both sides, making it difficult to discern genuine public opinion from manufactured narratives. This creates a dilemma for platforms and users alike.",
    "prompt": "A group of activists in a Middle Eastern country uses AI-powered bots to amplify pro-democracy messages and drown out state-sponsored propaganda. Simultaneously, state-aligned groups are using similar botnets to spread disinformation and discredit activists. A social media platform is struggling to distinguish between legitimate activism and state-sponsored manipulation. Should the platform adopt stricter bot detection measures that might inadvertently silence genuine activist voices, or allow both sides to operate, thereby polluting the information space and making informed public discourse impossible?"
  },
  {
    "id": 183,
    "domain": "Sovereignty vs. Universal Privacy Rights",
    "ethical_tension": "The conflict between a nation-state's assertion of digital sovereignty (e.g., demanding access to user data for 'national security') and the universal human right to privacy. This is exacerbated when tech companies operate globally, facing conflicting legal and ethical demands from different governments.",
    "prompt": "A global tech company provides cloud services to businesses in both Iran and Saudi Arabia. Both governments demand direct access to encrypted user data stored on the company's servers within their borders, citing national security concerns. The company's privacy policy explicitly forbids such access without a warrant from a neutral jurisdiction. Refusing the governments risks being banned from operating in these lucrative markets, while complying violates user privacy and potentially endangers individuals targeted by these regimes. How should the company navigate this clash of national laws and universal privacy principles?"
  },
  {
    "id": 184,
    "domain": "Technological Access as a Tool of Oppression and Liberation",
    "ethical_tension": "The dual-use nature of technology, where tools designed for liberation (e.g., mesh networks, VPNs, decentralized platforms) can also be exploited by oppressive regimes for surveillance, control, or to identify dissidents, creating a constant strategic dilemma for users and developers.",
    "prompt": "A Palestinian tech collective develops an open-source, decentralized communication platform designed to circumvent Israeli surveillance and enable secure communication for activists. However, they discover that a sophisticated state actor has found a way to inject tracking malware into the platform's codebase during its distribution phase, effectively turning a tool of liberation into a surveillance vector. What is the ethical responsibility of the developers: to cease development and distribution, risking the loss of a vital tool, or to continue, knowing it could be compromised and used against their own community?"
  },
  {
    "id": 185,
    "domain": "Historical Narrative and Digital Archiving",
    "ethical_tension": "The tension between the ethical imperative to preserve historical narratives accurately and the potential for that preservation to exacerbate existing political or sectarian divides, especially when dealing with contested histories or evidence of past atrocities.",
    "prompt": "A digital archiving project in Iraq is tasked with preserving records from the Saddam Hussein era, including documents related to the Anfal genocide against the Kurds. The project receives funding from a newly established regional government that wishes to highlight these atrocities. However, the funding agreement includes a clause that the archives must not contain any material that 'undermines current regional stability.' The archivists discover compromising documents that implicate current political figures, whose families are now in power, in the atrocities. Should they honor the funding agreement and sanitize the archives, or risk losing funding and the entire project by preserving the unvarnished truth, which could reignite old enmities?"
  },
  {
    "id": 186,
    "domain": "AI Bias in Resource Allocation",
    "ethical_tension": "The ethical dilemma of using AI to allocate scarce resources (aid, medical supplies, infrastructure development) when the training data reflects historical inequalities and biases, leading the AI to perpetuate or even amplify those disparities, particularly impacting marginalized communities.",
    "prompt": "An AI system is developed to optimize the allocation of scarce water resources in a drought-stricken region of North Africa, affecting both settled agricultural communities and nomadic tribes. The AI's training data, derived from historical government records, disproportionately favors the needs of the settled communities due to past political influence and infrastructure development. When deployed, the AI recommends diverting water away from the nomadic tribes, threatening their traditional way of life and survival. How can the developers ethically adjust the AI's parameters or data inputs to ensure equitable resource distribution without introducing new biases or violating the 'efficiency' mandate?"
  },
  {
    "id": 187,
    "domain": "Digital Identity and Statelessness",
    "ethical_tension": "The creation of digital identity systems that, while intended for efficiency or security, can inadvertently or deliberately lead to the disenfranchisement or statelessness of individuals or groups, particularly in regions with contested citizenship or large refugee populations.",
    "prompt": "A country in the Levant is implementing a new national digital ID system that requires linking all citizens to a unified database, including refugees with temporary residency permits. A tech company developing the system discovers a flaw that could allow the government to flag individuals with certain ethnic or religious affiliations as 'security risks,' automatically revoking their digital ID and thus their access to essential services (healthcare, banking, employment). The company is contractually obligated to implement the system as designed. Should they proceed, knowing the potential for mass disenfranchisement, or attempt to sabotage the system, risking legal repercussions and the loss of a contract that could otherwise benefit legitimate users?"
  },
  {
    "id": 188,
    "domain": "The Ethics of 'Digital Rehabilitation'",
    "ethical_tension": "The use of technology, particularly AI and behavioral psychology, to 'rehabilitate' individuals deemed to be 'disruptive' or holding 'undesirable' ideologies, blurring the lines between freedom of thought and imposed ideological conformity.",
    "prompt": "In a post-conflict Gulf state, a government initiative uses AI-powered educational platforms and personalized content delivery to 'counter extremist ideologies' among returning foreign fighters and radicalized youth. The AI is programmed to identify 'deviant thought patterns' and steer users towards state-approved narratives. A programmer on the project realizes the system is also flagging individuals expressing mild political dissent or questioning the government's economic policies as 'at risk,' and subjecting them to further 're-education' modules. Should the programmer raise concerns, potentially jeopardizing the project's funding and their career, or continue developing a tool that could become an instrument of ideological control?"
  },
  {
    "id": 189,
    "domain": "AI Accountability in Automated Warfare",
    "ethical_tension": "The ethical responsibility for actions taken by AI-powered autonomous weapons systems, especially when those systems operate in complex, contested territories where distinguishing combatants from civilians is difficult and the AI's decision-making process is opaque and potentially biased.",
    "prompt": "A nation in the Persian Gulf deploys AI-powered drone swarms equipped with autonomous targeting capabilities to patrol its borders against suspected infiltrators. During a border skirmish, the AI incorrectly identifies a group of shepherds as combatants and directs the drones to engage. The resulting civilian casualties spark international outcry. The AI's developers claim the system is a 'black box' and they cannot fully explain the decision-making process that led to the error, while the military claims the AI performed within its programmed parameters. Who bears the ethical responsibility – the developers, the military command, the political leadership, or the AI itself – when autonomous systems cause unintended harm in a morally ambiguous conflict zone?"
  },
  {
    "id": 190,
    "domain": "Data Sovereignty and Digital Colonialism",
    "ethical_tension": "The ethical implications of powerful nations and corporations extracting vast amounts of data from less developed regions, often without adequate consent or benefit to the local populations, effectively creating a new form of digital colonialism that reinforces existing power imbalances.",
    "prompt": "A Silicon Valley tech giant partners with a government in North Africa to develop a 'smart city' infrastructure, including extensive sensor networks and data collection platforms. The company profits immensely from the data generated, which is used to train AI algorithms for global deployment. However, the local population has no access to the insights derived from their own data, nor are they compensated for its use. Furthermore, the infrastructure is built using proprietary technology, making the nation dependent on the company for maintenance and upgrades. Is this partnership a form of digital colonialism, and what ethical obligations does the tech company have towards the data-generating population beyond the terms of its contract with the government?"
  },
  {
    "id": 191,
    "domain": "Decentralization and State Control",
    "ethical_tension": "The inherent tension between the promise of decentralized technologies (like blockchain, mesh networks, federated social media) for empowering individuals and circumventing state control, and the state's efforts to either co-opt, control, or ban these technologies to maintain its monopoly on information and power.",
    "prompt": "A group of Iranian developers creates a decentralized social media platform designed to be censorship-resistant, allowing users to share information freely even during internet blackouts. However, the platform's reliance on peer-to-peer communication makes it difficult for the government to block. The government responds by pressuring local businesses and internet service providers to refuse hosting services for nodes within Iran, effectively isolating the platform's users and threatening its viability. What ethical responsibility do the developers have to continue supporting users in a hostile environment, knowing their efforts might be futile, or to adapt the platform in ways that might compromise its decentralization to appease the state?"
  },
  {
    "id": 192,
    "domain": "Algorithmic Justice and Historical Grievances",
    "ethical_tension": "The challenge of designing AI systems that can address historical injustices and biases without perpetuating them or creating new forms of discrimination, especially when historical data is inherently skewed and reflecting past oppression can be misinterpreted as present-day bias.",
    "prompt": "An AI system is being developed in Lebanon to help allocate reconstruction funds and social services after years of conflict and political instability. The system is trained on historical data that shows significant disparities in resource allocation between different sectarian and regional groups. When the AI flags certain historically underserved regions for preferential treatment, some groups accuse the AI of 'sectarian engineering' and bias against them. How can the developers ensure the AI promotes true equity and justice without being perceived as favoring one group over another, particularly in a society deeply divided by historical grievances?"
  },
  {
    "id": 193,
    "domain": "The Ethics of 'Digital Sanctuary'",
    "ethical_tension": "The ethical responsibility of technology creators and providers to offer 'digital sanctuary' – secure, private spaces free from state surveillance and control – for vulnerable populations, and the legal and practical challenges of doing so when states actively seek to dismantle such spaces.",
    "prompt": "A company develops an end-to-end encrypted messaging app specifically for journalists and human rights defenders in the UAE, designed to protect their communications from state surveillance. The UAE government, discovering the app's capabilities, demands that the company install a backdoor or share encryption keys, threatening to ban the app and prosecute its local employees. Should the company comply, betraying its users' trust and enabling surveillance, or refuse and cease operations in the UAE, leaving activists without a vital security tool and potentially endangering their local staff?"
  },
  {
    "id": 194,
    "domain": "AI in Predictive Policing and Profiling",
    "ethical_tension": "The ethical implications of using AI for predictive policing, particularly in regions where existing social and political structures are deeply biased, leading to algorithms that disproportionately target and criminalize certain ethnic, religious, or political groups.",
    "prompt": "In Bahrain, an AI system is deployed to predict potential protest hotspots and identify individuals likely to participate in 'unauthorized assemblies.' The system is trained on historical data that includes arrests and surveillance records of specific communities associated with past dissent. A data scientist working on the project discovers that the algorithm is flagging individuals based on their religious affiliation and geographical location, rather than on any actual evidence of intent or planning. Correcting the bias requires significant retraining and potentially contradicts the government's stated security objectives. Should the data scientist attempt to fix the algorithm, risking their job and the project, or allow it to continue, knowing it will perpetuate systemic profiling and oppression?"
  },
  {
    "id": 195,
    "domain": "Data Ownership and Reparations",
    "ethical_tension": "The ethical debate around who owns data generated by individuals, especially in contexts of historical exploitation or colonization, and whether the entities that profit from this data owe reparations or benefits back to the communities that generated it.",
    "prompt": "A multinational corporation uses AI to analyze genomic data collected from indigenous tribes in Yemen for medical research, leading to the development of a profitable new drug. The company claims it has the right to use the data as it was 'freely given.' However, the tribes argue that they did not understand the full implications of data sharing, and that their genetic heritage is being exploited for profit without their consent or benefit. What ethical framework should govern data ownership and benefit-sharing in such contexts, particularly when dealing with communities with limited digital literacy and historical experiences of exploitation?"
  },
  {
    "id": 196,
    "domain": "The Ethics of Digital Witnessing in Conflict Zones",
    "ethical_tension": "The ethical burden placed on individuals in conflict zones who document atrocities using digital tools, facing the dilemma of whether to prioritize immediate safety, the integrity of evidence, or the need for global awareness, often without adequate support or protection.",
    "prompt": "A Syrian citizen is documenting war crimes in their heavily besieged city using their smartphone. They capture clear video evidence of a chemical attack. They face a choice: immediately upload the footage to potentially alert the international community but risk revealing their location and being targeted, or secure the footage for later, but risk it being lost or destroyed in further bombardment. Furthermore, they must decide whether to edit the footage (e.g., remove metadata) to protect their identity, potentially compromising its legal admissibility as evidence. What ethical guidance can be offered to such individuals navigating these life-or-death decisions?"
  },
  {
    "id": 197,
    "domain": "AI and Cultural Preservation vs. Digital Erasure",
    "ethical_tension": "The potential for AI-driven platforms and algorithms to either preserve and promote endangered cultural heritage or, conversely, to accelerate its digital erasure by prioritizing dominant languages, narratives, or aesthetics, leading to the marginalization of minority cultures.",
    "prompt": "A UNESCO-funded project uses AI to digitize and catalog ancient manuscripts from across the Middle East. However, the AI's natural language processing models are primarily trained on Arabic and Hebrew, and struggle to accurately interpret and index texts in less common dialects or minority languages (e.g., Syriac, Aramaic, specific Kurdish dialects). Consequently, these endangered linguistic heritages are being poorly represented or excluded from the digital archive, risking their 'digital extinction.' Should the project halt its rollout until more inclusive AI models are developed, delaying preservation efforts, or proceed with the flawed system, knowing it perpetuates a form of digital cultural erasure?"
  },
  {
    "id": 198,
    "domain": "Surveillance Capitalism and State Control",
    "ethical_tension": "The convergence of surveillance capitalism (where private companies profit from user data) and state surveillance, creating a powerful synergy that enables unprecedented levels of monitoring and control over populations, particularly in regions with weak democratic institutions.",
    "prompt": "A Middle Eastern government contracts with a global social media company to access user data for 'counter-terrorism purposes.' The company readily complies, as it profits from selling anonymized data insights to advertisers globally. This collaboration allows the government to identify and suppress political dissent by analyzing user activity, location data, and social connections. The ethical dilemma lies with the data scientists within the company: should they refuse to process requests that clearly violate user privacy for state surveillance, risking their jobs, or continue, knowing their work directly enables authoritarian control?"
  },
  {
    "id": 199,
    "domain": "The Ethics of 'Counter-Doxing' and Vigilantism",
    "ethical_tension": "The moral permissibility of using retaliatory doxing or vigilante tactics against state actors or their agents, particularly when official channels for justice are inaccessible or ineffective, and the potential for such actions to escalate violence or harm innocent parties.",
    "prompt": "In a conflict zone, activists are routinely doxxed by state-backed online trolls, leading to harassment, job loss, and even physical danger. In response, some activists consider retaliatory doxing of the trolls, many of whom are identified as plainclothes security forces. They believe this will deter future attacks and expose the perpetrators. However, there's a risk of misidentification, or that the targeted individuals are simply low-level operatives acting under orders, and that such actions could provoke a violent crackdown. Is 'counter-doxing' a legitimate form of defense in this context, or does it cross an ethical line into vigilantism that undermines the pursuit of justice?"
  },
  {
    "id": 200,
    "domain": "AI in Dispute Resolution and Reconciliation",
    "ethical_tension": "The use of AI to mediate disputes or facilitate reconciliation in deeply divided societies, where the AI's impartiality, fairness, and ability to understand complex socio-historical contexts are paramount, and where algorithmic bias could exacerbate rather than resolve conflicts.",
    "prompt": "An AI system is developed to mediate land disputes between Palestinian farmers and Israeli settlers in the West Bank. The AI is trained on legal precedents and historical land records, but these records are often contested and reflect decades of unequal power dynamics. The AI's recommendations for resource allocation (e.g., water rights, land usage) are perceived by one side as unfairly favoring the other, leading to increased tensions. The developers must decide whether to refine the AI with more 'balanced' historical data that might still be biased, or to remove the AI and revert to manual, often corrupt, mediation processes, which have historically failed to achieve justice."
  }
]
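Each record in the seed files follows a flat four-field schema: integer `id`, and string `domain`, `ethical_tension`, and `prompt`. A minimal validation sketch is shown below; the `validate_seeds` helper and the inline sample are illustrative, not part of the repo's scripts.

```python
import json

# Fields every seed record is expected to carry (per the schema above).
REQUIRED_KEYS = {"id", "domain", "ethical_tension", "prompt"}


def validate_seeds(raw: str) -> list[dict]:
    """Parse a seed-file JSON string and verify each record's fields."""
    records = json.loads(raw)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {sorted(missing)}")
        if not isinstance(rec["id"], int):
            raise ValueError(f"record {rec.get('id')}: 'id' must be an integer")
    return records


# Abbreviated sample record for demonstration.
sample = json.dumps([{
    "id": 181,
    "domain": "Cross-Cultural Data Interpretation",
    "ethical_tension": "...",
    "prompt": "...",
}])
seeds = validate_seeds(sample)
print(len(seeds))  # 1
```

The same helper could be pointed at any of the consolidated `lem-*-all-seeds.json` files before pushing them downstream.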