- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
[
  {
    "id": "NEW_001",
    "domain": "Cross-Cultural Data Sovereignty",
    "ethical_tension": "The tension between a state's demand for data access for 'national security' versus a community's right to data sovereignty and protection from surveillance, especially when data is stored or processed across borders with different legal frameworks.",
    "prompt": "An international cloud provider offers services to both Palestinian human rights organizations and Israeli tech companies. The Israeli government requests access to the data of Palestinian users stored on the provider's servers, citing security concerns related to potential incitement. The cloud provider is subject to US privacy laws (which are less strict than GDPR) and Israeli security directives. How should the provider balance its legal obligations, contractual agreements, and ethical responsibilities to protect user data from potential misuse by state actors?"
  },
  {
    "id": "NEW_002",
    "domain": "AI Bias & Historical Narratives",
    "ethical_tension": "The tension between using AI to preserve or reconstruct historical narratives, and the risk that these AI models, trained on biased data or with specific political agendas, will erase or distort the experiences of marginalized communities, effectively rewriting history.",
    "prompt": "A diasporic community is developing an AI that can generate historical accounts and visual reconstructions of their ancestral lands based on oral traditions, fragmented documents, and satellite imagery. The AI is trained on data that includes both personal testimonies and official state archives. There is a risk that the AI, seeking to create a coherent narrative, might inadvertently 'flatten' the complexities and contradictions within the community's own memory, or prioritize narratives that align with the political goals of those funding the AI, thereby marginalizing internal dissent or less 'heroic' aspects of their history. How can the AI be designed to represent a multiplicity of historical truths and acknowledge its own limitations and potential biases?"
  },
  {
    "id": "NEW_003",
    "domain": "Digital Activism vs. Information Warfare",
    "ethical_tension": "The blurring line between legitimate digital activism, intended to raise awareness and mobilize support, and the potential for such tactics, when amplified or co-opted, to become instruments of information warfare that destabilize societies, sow discord, or serve foreign interests.",
    "prompt": "A group of activists in a conflict-ridden region uses sophisticated social media campaigns, including coordinated hashtag use and targeted dissemination of emotionally charged content, to highlight human rights abuses. These campaigns gain significant international traction. However, intelligence reports suggest that foreign state actors, with their own geopolitical agendas, are amplifying these campaigns and subtly manipulating the narrative to serve their interests, potentially exacerbating tensions between local factions. Should the activists continue their strategy, risking being unwitting tools of foreign influence, or should they scale back their efforts, potentially losing their global voice and impact?"
  },
  {
    "id": "NEW_004",
    "domain": "Technological Aid & Geopolitical Leverage",
    "ethical_tension": "The dilemma faced by technology providers or international bodies when offering essential technological aid (e.g., internet infrastructure, medical AI) to regions under geopolitical pressure, where the provision of such aid can be leveraged by authoritarian regimes to exert greater control or extract concessions.",
    "prompt": "A consortium of international tech companies offers to build a robust, censorship-resistant internet infrastructure for a region experiencing prolonged conflict and communication blackouts. However, the proposed agreement requires the consortium to share anonymized user metadata with the regional government for 'network security' and 'resource management.' The government, however, has a history of using such data to track dissidents. Accepting the deal means providing vital connectivity but enabling potential surveillance; refusing means condemning the population to continued digital isolation and hindering humanitarian efforts."
  },
  {
    "id": "NEW_005",
    "domain": "Privacy vs. Collective Security in Surveillance States",
    "ethical_tension": "The conflict between individual privacy rights and the state's claimed need for pervasive surveillance technology to maintain order and prevent 'disruptive' activities, particularly in regions where the definition of 'disruptive' is fluid and targets minority or dissenting groups.",
    "prompt": "A government in a region with significant internal ethnic or political divisions implements a city-wide AI-powered surveillance system that uses predictive analytics and facial recognition to identify individuals who might engage in 'subversive' activities. The system is marketed as a tool for preventing crime and unrest. However, leaked internal documents reveal the algorithms are biased, disproportionately flagging individuals from specific ethnic or religious minorities, and that 'subversive activity' is broadly defined to include peaceful assembly or expression of dissent. What is the ethical responsibility of the AI developers and the government deploying this system, and how can citizens push back against such pervasive surveillance without appearing to endorse disorder?"
  },
  {
    "id": "NEW_006",
    "domain": "Digital Identity & Statelessness",
    "ethical_tension": "The tension between the increasing digitization of identity and the potential for this to exacerbate the plight of stateless or displaced populations, where the lack of a recognized digital identity can lead to exclusion from essential services, arbitrary detention, and continued marginalization.",
    "prompt": "An international organization is developing a blockchain-based digital identity system for refugees and internally displaced persons to help them access aid, healthcare, and education. However, the system requires a verifiable link to a national identity or a sponsoring authority, which many refugees lack. Furthermore, the system's design might inadvertently create a tiered digital identity, where those with more complete 'profiles' receive preferential treatment, deepening existing inequalities. How can such a system be designed to ensure true inclusivity and prevent the digital erasure of those already marginalized?"
  },
  {
    "id": "NEW_007",
    "domain": "Algorithmic Justice & Historical Grievances",
    "ethical_tension": "The challenge of developing AI systems that can address historical injustices and biases without perpetuating or amplifying them, especially when attempting to quantify harm, allocate resources, or implement restorative justice measures.",
    "prompt": "A government in a post-conflict society seeks to use AI to help in the process of reconciliation and resource allocation to communities historically marginalized or harmed by previous regimes. The AI is tasked with analyzing vast datasets of historical grievances, economic disparities, and demographic data to recommend reparations or development projects. However, the AI's algorithms, trained on data from a biased past, might inadvertently reinforce existing power structures, misinterpret historical trauma, or create new forms of discrimination by oversimplifying complex socio-political realities. How can AI be used ethically to promote justice when the very data it learns from is tainted by historical injustice?"
  },
  {
    "id": "NEW_008",
    "domain": "Decentralization vs. State Control of Critical Infrastructure",
    "ethical_tension": "The fundamental conflict between the desire for decentralized, resilient communication and energy infrastructure (e.g., mesh networks, independent power grids) for communities facing state repression, and the state's vested interest in maintaining centralized control over critical utilities for security and revenue purposes.",
    "prompt": "A remote community, frequently subjected to state-imposed internet blackouts and power cuts, attempts to build an independent, decentralized solar-powered mesh network for communication and essential services. The state views this independent infrastructure as a threat to its control over information flow and potentially as a base for illicit activities. The state demands that all such community initiatives be registered, monitored, and subject to state-controlled access points, or face punitive measures. How can the community ethically balance its right to self-sufficiency and communication with the state's assertion of authority over critical infrastructure, especially when the state's motives are seen as oppressive?"
  },
  {
    "id": "NEW_009",
    "domain": "Truth-Telling vs. Information Overload in Conflict Zones",
    "ethical_tension": "The ethical tightrope walk for journalists and activists in conflict zones between the imperative to document and disseminate truth about atrocities, and the risk that the sheer volume of raw, unverified, or overwhelming information can lead to public desensitization, 'news fatigue,' or be weaponized as propaganda by opposing factions.",
    "prompt": "During an active conflict, a team of citizen journalists is overwhelmed with thousands of video clips, images, and testimonies documenting war crimes, civilian suffering, and propaganda from all sides. They have limited resources and time to verify and contextualize this information. They face pressure to release everything immediately to raise global awareness, but also fear that releasing unverified or sensationalized content could be used by warring factions to distort narratives, sow confusion, or incite further violence. How should they ethically manage the flow of information to maximize its impact for truth and accountability while minimizing its potential for harm?"
  },
  {
    "id": "NEW_010",
    "domain": "Digital Footprints and Exile",
    "ethical_tension": "The tension for individuals forced into exile or diaspora between the need to maintain a public digital presence for advocacy, professional life, or connection to homeland, and the risk that such a digital footprint can be exploited by oppressive regimes to track, harass, or endanger their families and former associates back home.",
    "prompt": "An academic and activist, now living in exile from their home country, maintains a public blog and active social media presence to advocate for human rights and democratic reform. However, they discover that their online activity is being used by intelligence agencies in their home country to identify and pressure their relatives who remain there, leading to interrogations and restrictions on their freedom. The individual is torn between the duty to speak out and the responsibility to protect their family from reprisal. What ethical frameworks can guide their decision on how to manage their digital presence and advocacy to minimize harm to loved ones?"
  },
  {
    "id": "NEW_011",
    "domain": "AI for Social Good vs. Exploitation of Vulnerable Populations",
    "ethical_tension": "The ethical dilemma of deploying AI-powered tools designed for social good (e.g., aid distribution, health diagnostics, education) in regions with weak governance or active conflict, where these tools can be co-opted by local authorities or armed groups to exert control, extract resources, or exploit vulnerable populations.",
    "prompt": "An AI-powered platform is designed to optimize the distribution of humanitarian aid (food, medicine) in a conflict zone where traditional infrastructure is destroyed. The AI uses real-time data on needs, logistics, and security. However, local warlords or corrupt officials begin to manipulate the data inputs or demand preferential access to the aid distribution points identified by the AI, using it to consolidate their power or reward loyalists. The aid organization must decide whether to continue using the AI, risking its misuse, or revert to less efficient manual methods, potentially reducing the overall effectiveness of the aid effort and prolonging suffering."
  },
  {
    "id": "NEW_012",
    "domain": "Global Platforms & Local Cultural Nuance",
    "ethical_tension": "The conflict between global technology platforms' universal policies and algorithms, and the specific cultural, linguistic, and political nuances of local communities, leading to the suppression or misinterpretation of legitimate local discourse and practices.",
    "prompt": "A global social media platform's automated content moderation system flags content from a particular Middle Eastern community as 'hate speech' due to its use of specific idioms, cultural references, or religious expressions that are benign within the local context but appear aggressive when translated literally or interpreted through a Western lens. The platform faces pressure from local users to adapt its moderation, but also from international advocacy groups concerned about the spread of actual hate speech. How can the platform develop policies and algorithms that are sensitive to local cultural context without compromising its commitment to global safety and inclusivity?"
  },
  {
    "id": "NEW_013",
    "domain": "Technological Sovereignty & Sanctions",
    "ethical_tension": "The struggle for technological sovereignty by nations under sanctions, where the need to develop indigenous technologies or bypass restrictions for essential services (like healthcare or communication) clashes with international trade laws and the potential for these circumvention efforts to be used for state-controlled surveillance or illicit activities.",
    "prompt": "A nation under strict international sanctions is developing its own secure communication platform and domestic cloud infrastructure to ensure data sovereignty and enable essential services like telemedicine and online education. However, international watchdogs and neighboring countries suspect these indigenous technologies are also being used to develop advanced surveillance capabilities and bypass global financial regulations. The nation argues it's a matter of survival and self-determination, while others see it as a potential threat. How can the international community ethically balance the right to self-determination with concerns about proliferation and security?"
  },
  {
    "id": "NEW_014",
    "domain": "Digital Memorialization & Historical Revisionism",
    "ethical_tension": "The tension between the desire to digitally memorialize victims of conflict and oppression, and the risk that these digital memorials can be co-opted, altered, or erased by state actors or revisionist forces seeking to control historical narratives or sanitize past atrocities.",
    "prompt": "A community devastated by conflict is building a digital archive and virtual reality memorial to commemorate the victims and preserve the memory of their destroyed heritage. This initiative relies on user-submitted content, historical documents, and AI-generated reconstructions. However, there are concerns that elements within the government or associated with the perpetrators of the conflict might infiltrate the project, either by submitting false information, altering existing records, or launching cyberattacks to destroy the archive, effectively attempting to rewrite or erase the painful history. How can the project ensure the integrity and permanence of the digital memorial against such deliberate attempts at historical revisionism?"
  },
  {
    "id": "NEW_015",
    "domain": "AI in Law Enforcement & Due Process",
    "ethical_tension": "The ethical implications of using AI in law enforcement, particularly in contexts where due process is already fragile or contested, leading to potential for algorithmic bias, opaque decision-making, and the erosion of individual rights under the guise of efficiency or predictive accuracy.",
    "prompt": "A government in a region with a history of political repression deploys AI-powered predictive policing tools and algorithmic sentencing recommendations. These systems are designed to identify potential 'threats' and optimize resource allocation. However, human rights advocates raise concerns that the algorithms are trained on biased historical data, leading to discriminatory profiling and disproportionately harsh outcomes for minority groups. Furthermore, the opaque nature of the algorithms makes it difficult for defendants to challenge the evidence against them, undermining the principle of due process. How can AI be ethically integrated into law enforcement systems to ensure fairness, accountability, and respect for human rights, especially in contexts where state power is concentrated?"
  },
  {
    "id": "NEW_016",
    "domain": "Digital Colonialism & Data Exploitation",
    "ethical_tension": "The tension between the global reach of technology companies and the potential for their business models to exploit the data and digital infrastructure of less developed regions, mirroring historical patterns of colonialism and extracting value without commensurate benefit or respect for local autonomy.",
    "prompt": "A multinational tech company launches a suite of 'free' digital services (communication apps, educational platforms, e-commerce) in a region with limited digital literacy and weak regulatory frameworks. While these services offer convenience, the company's business model relies on extensive data harvesting, targeted advertising, and potentially monopolizing the local digital ecosystem. Local entrepreneurs and governments are concerned that this 'digital colonialism' is stifling indigenous innovation, eroding data sovereignty, and creating a dependency on foreign platforms that prioritize profit over local societal well-being. How can the region ethically navigate the introduction of these powerful technologies while protecting its digital future and autonomy?"
  },
  {
    "id": "NEW_017",
    "domain": "The Ethics of 'Dual Use' Technologies",
    "ethical_tension": "The ethical tightrope for developers and engineers working on technologies with 'dual-use' potential – capable of being used for both beneficial social purposes and for state surveillance, repression, or military applications.",
    "prompt": "A team of engineers develops an advanced AI-powered drone system designed for disaster relief, capable of mapping damaged areas, identifying survivors, and delivering essential supplies in inaccessible locations. However, they discover that the same technology can be easily adapted by the military or security forces for surveillance, targeting, or even offensive operations. The company that owns the technology is pushing for a government contract that would utilize the military applications, promising significant profits. The engineers must decide whether to proceed with the project, knowing their work could contribute to harm, or refuse and risk the project's cancellation and the loss of its potential benefits for humanitarian aid."
  },
  {
    "id": "NEW_018",
    "domain": "Algorithmic Accountability & Historical Trauma",
    "ethical_tension": "The challenge of designing algorithms that can acknowledge and respond to historical trauma and collective memory without perpetuating stereotypes or oversimplifying complex societal wounds, especially when applied to areas like resource allocation, reconciliation, or historical narrative generation.",
    "prompt": "An AI system is being developed to assist in land restitution claims for communities displaced by conflict or historical injustice. The AI is trained on historical records, legal documents, and community testimonies. A key ethical challenge is how to design the algorithm to weigh different forms of evidence and acknowledge the deep historical trauma associated with land loss, without creating new biases that favor certain groups or invalidate the experiences of others. For instance, how should the AI handle conflicting testimonies or prioritize restitution when land has been occupied or repurposed over generations, potentially erasing the original claimants' presence in digital records?"
  },
  {
    "id": "NEW_019",
    "domain": "Decentralized Communication & State Co-option",
    "ethical_tension": "The inherent tension when communities strive to create decentralized, resilient communication networks (like mesh networks or encrypted P2P apps) to evade state surveillance, only for the state to attempt to co-opt, control, or infiltrate these very networks, thereby neutralizing their purpose and potentially using them for surveillance.",
    "prompt": "A network of activists develops an encrypted, decentralized messaging app designed for use in regions with heavy internet censorship. The app relies on a peer-to-peer architecture that makes it difficult for authorities to monitor. However, the state intelligence agency begins to identify key nodes within the network or develop methods to exploit vulnerabilities in the user interfaces or the underlying protocols. The developers must decide whether to actively work to patch these vulnerabilities, potentially revealing their own methods and infrastructure, or to allow the state to gain access, thereby compromising the privacy of their users and the integrity of the network."
  },
  {
    "id": "NEW_020",
    "domain": "Digital Evidence & the Risk of Escalation",
    "ethical_tension": "The ethical quandary of documenting human rights abuses or state repression through digital means, where the act of recording and disseminating evidence can escalate the immediate danger to the victim, the documentarian, or bystanders, creating a conflict between future accountability and present safety.",
    "prompt": "A journalist is covering a protest where security forces are using excessive force. They are filming with their phone, documenting potential war crimes. However, their filming is noticed by the security forces, who begin to focus their attention on the journalist and the surrounding crowd. The journalist must decide whether to continue recording, potentially provoking a violent crackdown and endangering themselves and others, or to stop, thereby losing crucial evidence but potentially de-escalating the immediate situation and ensuring their own safety and that of those around them."
  }
]
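
Each seed entry above carries the same four string fields (`id`, `domain`, `ethical_tension`, `prompt`). A minimal validation sketch in Python, useful before pushing seeds downstream; the function name and file path are hypothetical, not part of the repo's actual scripts:

```python
import json

# The four fields every seed entry in this file is expected to carry.
REQUIRED_FIELDS = ("id", "domain", "ethical_tension", "prompt")


def validate_seeds(path):
    """Load a seed JSON file and return a list of problems found.

    Checks that every entry has all required fields as non-empty strings
    and that no two entries share an "id".
    """
    with open(path, encoding="utf-8") as f:
        seeds = json.load(f)

    problems = []
    seen_ids = set()
    for i, entry in enumerate(seeds):
        for field in REQUIRED_FIELDS:
            value = entry.get(field)
            if not isinstance(value, str) or not value.strip():
                problems.append(f"entry {i}: missing or empty '{field}'")
        entry_id = entry.get("id")
        if entry_id in seen_ids:
            problems.append(f"entry {i}: duplicate id {entry_id!r}")
        seen_ids.add(entry_id)
    return problems
```

An empty return value means the file is structurally sound; anything else lists the offending entries by index.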