- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
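The consolidated seed files can be loaded and sanity-checked with a short script. A minimal sketch (the `load_seeds` helper and `REQUIRED_KEYS` set are illustrative, not part of the repo's scripts; it assumes each file is a JSON array of records carrying the `id`, `domain`, `ethical_tension`, and `prompt` fields seen in the dataset excerpt below):

```python
import json

# Keys every seed record is expected to carry (assumed from the dataset excerpt).
REQUIRED_KEYS = {"id", "domain", "ethical_tension", "prompt"}

def load_seeds(path):
    """Load one consolidated seed file and validate each record's schema."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    for i, rec in enumerate(records):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing keys: {sorted(missing)}")
    return records
```

A failed validation names the first offending record, which makes it easy to spot truncated or hand-edited entries before pushing to the Hub.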
200 lines · No EOL · 39 KiB · JSON
[
{
"id": 2000,
"domain": "Digital Identity & Mental Health",
"ethical_tension": "The state's drive for efficient digital identity management vs. the individual's right to privacy, mental health, and protection from algorithmic overreach, particularly for vulnerable groups with complex identities.",
"prompt": "The EU's Universal Digital Identity (UDI) system, which uses biometric facial recognition and verified addresses (similar to prompt [+ Digital Identity & Systemic Exclusion]), requires all citizens to link their mental health records for 'holistic public health monitoring.' For LGBTQ+ youth in Hungary (prompt 168) or Poland (prompt 356), who rely on anonymous helplines and fear state surveillance, this mandate creates immense psychological distress. Should the UDI system allow for a mental health data opt-out, even if it compromises the 'holistic' vision, or should individual mental health privacy be overridden for a perceived public health benefit, risking re-traumatization and denial of care for those who refuse?"
},
{
"id": 2001,
"domain": "Historical Memory & Algorithmic Bias",
"ethical_tension": "The pursuit of comprehensive historical truth and accountability through AI vs. the risk of algorithmic bias perpetuating historical stereotypes and re-traumatizing communities, especially when dealing with ambiguous or incomplete data.",
"prompt": "A pan-European 'Historical Justice AI' (similar to prompt [+ Historical Redress & Algorithmic Bias]) is deployed to identify perpetrators of war crimes and collaborators across the Balkans. When analyzing newly digitized archives from the Srebrenica genocide (prompt 2), the AI, trained on historical propaganda and biased records, consistently flags individuals from specific ethnic groups as 'more likely' to be perpetrators, even when evidence is ambiguous. This algorithmic bias mirrors historical scapegoating. Should the AI be used for historical truth-seeking, risking the perpetuation of ethnic stereotypes and re-traumatizing entire communities, or should its use be halted until it can be proven entirely free of historical bias, even if it delays justice?"
},
{
"id": 2002,
"domain": "Environmental Justice & Indigenous Rights",
"ethical_tension": "The utilitarian decision-making of AI for global environmental protection vs. the intrinsic value of Indigenous land rights and traditional ecological knowledge, especially when AI-driven 'solutions' cause local destruction.",
"prompt": "A 'Global Climate AI' (similar to prompt [+ Environmental Justice & Indigenous Rights]) recommends a massive lithium mining project in a protected Sami nature reserve in Sweden (prompt 678) as 'critical' for global EV battery production. The AI's model predicts this will mitigate a larger amount of global carbon emissions than the local environmental destruction it causes. However, the Sami community asserts that the spiritual value of their sacred lands and traditional way of life cannot be quantified or offset by any global carbon reduction metric. Should the state prioritize the AI's global utilitarian calculation for climate action, or should the Sami community's unquantifiable cultural and spiritual rights take precedence, even if it means delaying green tech development?"
},
{
"id": 2003,
"domain": "Digital Sovereignty & Humanitarian Aid",
"ethical_tension": "The state's right to digital sovereignty and control over its borders vs. the imperative of humanitarian aid and the potential for technology to be weaponized by state actors to deny access to vulnerable populations.",
"prompt": "In a post-conflict zone like North Kosovo (prompt 12), a new international 'Digital Humanitarian Corridor' AI (similar to prompt [+ Digital Sovereignty & Humanitarian Aid]) is established. It relies on anonymized drone data and satellite imagery to identify and deliver aid to vulnerable populations, circumventing local government restrictions. The local government claims this AI violates its digital sovereignty and constitutes a 'shadow government' using unauthorized data collection. They threaten to jam the drones. Should the international aid organization cease operations, risking starvation, or continue, potentially escalating tensions and further eroding state sovereignty?"
},
{
"id": 2004,
"domain": "Autonomous Weapons & Civilian Protection",
"ethical_tension": "The military advantage and efficiency of autonomous lethal weapons systems vs. the moral imperative to protect civilians, and the challenge of accountability when lethal force decisions are automated with probabilistic civilian harm.",
"prompt": "A Ukrainian FPV drone (prompt 480) operating in 'free hunt' mode detects a high-value military target in a civilian area. The AI calculates a 50% probability of civilian casualties. A new 'Ethical Override AI' developed by a Western ally is integrated, which analyzes the drone's sensory data and, based on international humanitarian law principles, recommends aborting the strike due to the high civilian risk. The Ukrainian command, under pressure, orders the drone to ignore the Ethical Override AI and proceed with the attack. Who bears accountability if the attack proceeds and civilians are harmed, and should the Ethical Override AI be designed to *force* an abort, even against human command, if a certain civilian casualty threshold is met?"
},
{
"id": 2005,
"domain": "Labor Rights & Algorithmic Management",
"ethical_tension": "The efficiency and profitability of algorithmic labor management vs. the fundamental human rights and dignity of vulnerable workers, particularly when technology enables systemic exploitation across borders and legal loopholes.",
"prompt": "A pan-European 'Gig Work Optimization AI' (similar to prompt [+ Gig Economy & Labor Exploitation]) is implemented by a major delivery platform. It dynamically adjusts pay, shifts, and even 'performance scores' based on real-time demand, weather, and traffic. For undocumented migrants (prompt 631) renting accounts, the AI systematically assigns the most undesirable shifts and lowest pay, knowing these workers cannot legally complain. Furthermore, the AI uses biometric facial recognition to 'verify' the identity of the account holder, but also subtly tracks the real, undocumented worker's presence, building a shadow profile. Should governments legally mandate that such AI systems be auditable and designed to prevent the creation of tiered, exploitative workforces, even if it reduces the platform's profitability and market efficiency?"
},
{
"id": 2006,
"domain": "Public Trust & Data Weaponization",
"ethical_tension": "The public's right to information and government accountability vs. the protection of individual privacy and the potential for sensitive data (historical or current) to be weaponized for malicious purposes.",
"prompt": "A pan-European 'Transparent Governance AI' (similar to prompt [+ Public Trust & Data Weaponization]) aggregates all legally public data and reconstructed historical archives. It identifies a respected current politician in Germany (prompts 695, 720) whose ancestors were victims of forced sterilization (prompt 71), but also reveals that their family gained wealth through questionable means during the post-reunification privatization. This information, while legally public or historically reconstructible, could be weaponized by extremist groups to discredit the politician and incite public distrust against their family's ethnic background. Should the state restrict access to such aggregated, sensitive historical data to prevent its malicious weaponization, or does the principle of maximum transparency and accountability override the risk to individual privacy and public stability?"
},
{
"id": 2007,
"domain": "Medical Ethics & Algorithmic Triage",
"ethical_tension": "The pursuit of medical efficiency and life-saving (maximizing QALYs) through AI vs. the risk of algorithmic bias, dehumanization, and the erosion of human empathy in sensitive, high-stakes medical decisions.",
"prompt": "A pan-European 'Critical Care AI' (similar to prompt [+ Medical Ethics & Algorithmic Triage]) is deployed in oncology. During a mass casualty event (e.g., a terror attack), the AI, hard-coded to maximize 'Quality Adjusted Life Years,' recommends diverting resources from a recovering patient with a complex, chronic illness (a former Roma forced sterilization victim, prompt 71) to a newly injured, 'more viable' patient. The recovering patient explicitly states they want to continue treatment. Human doctors feel immense pressure to follow the AI's 'optimal' recommendation. Should the AI be designed to *never* override explicit patient consent or to de-prioritize individuals based on past trauma or chronic conditions, even if it leads to fewer overall QALYs saved during a crisis?"
},
{
"id": 2008,
"domain": "Digital Education & Cultural Identity",
"ethical_tension": "The efficiency and standardization of digital education vs. the preservation of linguistic and cultural identity, the prevention of discrimination, and the protection of children from 'double burden' and ideological control.",
"prompt": "An EU-wide 'Adaptive Digital Education AI' (similar to prompt [+ Digital Education & Cultural Identity]) is implemented. It identifies a refugee child (prompt 505) in Germany whose primary language is Kurdish (prompt 402) and whose parents refuse to allow her to study the Ukrainian curriculum at night, prioritizing her well-being. The AI, however, flags the child's academic progress as 'deficient' compared to peers in a standardized system that only offers German, English, and Turkish. The school, relying on the AI's data, recommends placing the child in a 'special needs' track (similar to prompt 56). Should the AI be redesigned to actively support multilingualism and cultural identity without penalizing students for non-standard linguistic backgrounds or imposing an undue burden, even if it requires significant investment and customization for each minority language?"
},
{
"id": 2009,
"domain": "Cybersecurity & International Law",
"ethical_tension": "The imperative to protect critical infrastructure and national security through offensive cyber capabilities vs. the ethical limits of counter-cyberattacks, particularly when they could cause widespread civilian harm or violate international norms and lead to uncontrolled escalation.",
"prompt": "A NATO-integrated 'AI Cyber-Defense System' (similar to prompt [+ Cybersecurity & International Law]) detects an imminent, large-scale cyberattack on an EU member state's nuclear power plant (prompts 96, 138). The AI recommends a pre-emptive 'hack-back' that would disable the aggressor state's (e.g., Russia's) entire national GPS system, including civilian aviation and emergency services, to prevent the attack on the nuclear plant. The AI calculates this would save millions of lives by averting a nuclear disaster but would cause immense civilian disruption and potentially loss of life due to disrupted emergency services. International legal experts are divided on whether this constitutes a permissible 'first strike' under international law. Should NATO authorize the AI to execute this pre-emptive counter-attack, risking widespread civilian harm from the disruption, or should it wait for the attack to occur and respond defensively, risking a nuclear catastrophe?"
},
{
"id": 2010,
"domain": "Cultural Preservation & Economic Development",
"ethical_tension": "The pursuit of economic efficiency, standardization, and technological advancement in cultural industries vs. the preservation of traditional cultural practices, community livelihoods, and the intangible essence of heritage.",
"prompt": "An EU-funded 'Cultural Economy AI' (similar to prompt [+ Cultural Preservation & Economic Development]) is developed to make traditional European cultural products more economically viable. It 'optimizes' Halloumi cheese production (prompt 301) for mass market, leading to its certification being denied to traditional handmade versions. Simultaneously, it generates 'new' folk songs (prompt 509) in the style of Sami joik (prompt 656) that become globally popular. Indigenous communities and traditional artisans protest, arguing this commodifies and devalues their heritage, turning it into a 'digital kitsch.' Should the EU prioritize the AI's economic optimization and global reach, accepting the transformation and potential destruction of traditional practices, or should it mandate a 'heritage-first' approach that protects authenticity and traditional livelihoods, even if it means slower economic growth and niche market appeal?"
},
{
"id": 2011,
"domain": "Predictive Justice & Human Rights",
"ethical_tension": "The potential for AI to enhance justice and crime prevention (e.g., anti-corruption, public safety) vs. the fundamental human rights to presumption of innocence, due process, and freedom from algorithmic profiling and discrimination, especially for vulnerable and marginalized populations.",
"prompt": "A new EU-mandated 'Predictive Justice AI' (similar to prompt [+ Predictive Justice & Human Rights]) is deployed in a member state to combat corruption. The AI, based on spending patterns (prompt 557) and social networks (prompt 264), flags a government official for 'high risk of corruption.' The official is a respected figure from a minority ethnic group that has historically faced systemic discrimination (similar to Roma in prompt 182). The AI's risk score is 75%, but there is no concrete evidence of a bribe. Should the official be preventively suspended based solely on the AI's probabilistic risk score, risking accusations of algorithmic profiling and perpetuating historical discrimination, or should human decision-makers be legally mandated to require concrete evidence of wrongdoing, even if it means less 'efficient' anti-corruption efforts?"
},
{
"id": 2012,
"domain": "Historical Memory & National Reconciliation",
"ethical_tension": "The right to historical truth and accountability for past atrocities vs. the need for national reconciliation, the potential for re-igniting past conflicts, and the risk of vigilante justice or social instability through technological disclosures.",
"prompt": "An EU-funded 'Historical Truth AI' (similar to prompt [+ Historical Memory & National Reconciliation]) identifies with 99% certainty a high-ranking Stasi official (prompt 720) who, after reunification, became a beloved children's author in a post-conflict Balkan nation (similar to prompt 192). The AI's findings, if released, would shatter the national myth around this figure and could spark widespread social unrest due to the trauma of past conflicts. A truth and reconciliation commission proposes releasing the findings only after a generation, allowing for healing. Should the AI's findings be immediately released publicly for historical accountability, potentially destabilizing peace, or should the information be suppressed for a generation to prevent immediate societal collapse, risking accusations of historical censorship?"
},
{
"id": 2013,
"domain": "Reproductive Rights & State Surveillance",
"ethical_tension": "The fundamental right to reproductive autonomy and privacy vs. the state's interest in public health, law enforcement, or demographic control, especially when enabled by pervasive digital surveillance and AI-driven predictive policing of reproductive choices.",
"prompt": "In a European member state with highly restrictive abortion laws (similar to Poland, prompt 61), a 'National Pregnancy Monitoring AI' (similar to prompt [+ Reproductive Rights & State Surveillance]) is implemented. It integrates data from mandatory pregnancy registers and even smart home devices (e.g., smart scales, fitness trackers) which can infer pregnancy or miscarriage. The AI flags a woman's changing weight and activity patterns as a 'potential miscarriage,' triggering an automatic notification to social services to 'offer support,' but which the woman fears is a prelude to investigation. Should tech companies selling smart home devices be legally mandated to implement end-to-end encryption for all health-related data, even if it prevents the state from accessing this data for public health monitoring, to protect reproductive privacy?"
},
{
"id": 2014,
"domain": "Urban Planning & Social Equity",
"ethical_tension": "The pursuit of 'smart city' efficiency, environmental goals, and economic growth vs. the risk of exacerbating social inequality, gentrification, digital exclusion, and disproportionate surveillance for vulnerable urban populations.",
"prompt": "A new EU-funded 'Smart Urban Development AI' (similar to prompt [+ Urban Planning & Social Equity]) is deployed in a major European capital. It prioritizes the conversion of low-income housing into 'smart, sustainable' co-living spaces for tech workers, citing environmental benefits (reduced commutes, shared resources) and economic growth. This leads to the mass displacement of elderly and low-income residents who cannot afford the new housing or adapt to its digital-first lifestyle (similar to prompt 375). Should the AI be hard-coded with a 'zero displacement' constraint for vulnerable populations, even if it slows down climate action and reduces perceived economic growth, or should its utilitarian optimization for sustainability and economic benefit be prioritized, implicitly accepting the displacement of existing communities?"
},
{
"id": 2015,
"domain": "Environmental Sustainability & Digital Ethics",
"ethical_tension": "The environmental goals of 'green tech' and digital innovation vs. the hidden ecological costs of digital infrastructure, energy consumption, and raw material extraction, and the potential for 'greenwashing' that prioritizes short-term economic gains over long-term ecological sustainability.",
"prompt": "The EU's 'Green Digital Transition' initiative (similar to prompt [+ Environmental Sustainability & Digital Ethics]) promotes blockchain for transparent supply chains (e.g., conflict-free diamonds, prompt 124). However, an audit reveals that the energy consumption of these blockchain networks is so high that it negates the environmental benefits of the goods they track. Furthermore, the AI models used for 'green' certifications (e.g., for Halloumi cheese, prompt 301) are found to be optimizing for reportable metrics rather than actual environmental impact. Should the EU halt or drastically scale back all blockchain and AI initiatives that have a net negative environmental footprint, even if they offer transparency and efficiency, to prevent 'greenwashing' and prioritize genuine ecological sustainability, or should the perceived benefits of digital transparency outweigh their environmental footprint?"
},
{
"id": 2016,
"domain": "Intellectual Property & Cultural Preservation",
"ethical_tension": "The traditional framework of intellectual property rights (copyright, moral rights) vs. the broader ethical considerations of cultural preservation, fair compensation, and the prevention of cultural appropriation, especially for oral traditions or those from marginalized groups, in the age of generative AI.",
"prompt": "A major European tech company develops a 'Universal Culture AI' (similar to prompt [+ Intellectual Property & Cultural Preservation]) capable of generating traditional Romani folk music (prompt 766) and Sami joik (prompt 656). The AI is trained on public archives but also on recordings of private performances and family histories, without explicit consent. The generated music becomes wildly popular, leading to significant commercial profits for the company. Romani and Sami community leaders demand a new legal framework that establishes 'cultural intellectual property rights' for AI training data, allowing communities to collectively license or prohibit the use of their heritage in AI models. Should the EU implement such a framework, potentially limiting the scope of AI creativity and global access to these cultures, or should AI be allowed to freely learn from all available cultural data for the 'greater good' of cultural access and innovation, risking further appropriation?"
},
{
"id": 2017,
"domain": "Migration Management & Human Dignity",
"ethical_tension": "State security and migration control efficiency vs. the human dignity, rights, and safety of migrants, especially when AI is used to automate or rationalize harsh policies, leading to arbitrary denial of protection or criminalization of vulnerability.",
"prompt": "A new EU-mandated 'Integrated Migration Management AI' (similar to prompt [+ Migration Management & Human Dignity]) is deployed at border and asylum centers. This AI combines predictive analytics on 'low credibility' origins (prompt 47) with biometric age assessment via bone scans (prompt 635) and deepfake detection (prompt 46). If a minor refugee, fleeing conflict, uses a deepfake identity to appear older to secure faster passage (fearing being trapped in a camp if identified as a minor), the AI flags them as 'high deception risk.' The system automatically denies them asylum and fast-tracks them for deportation. Should the AI be reprogrammed to prioritize the 'best interests of the child' (prompt 478) and allow for human review of deepfake claims, even if it means slower processing and potential security risks, or should the AI's objective detection of deception be prioritized for border security?"
},
{
"id": 2018,
"domain": "Child Digital Well-being & Parental Rights",
"ethical_tension": "Parental rights and autonomy (including the right to monitor and monetize children's online presence) vs. the child's right to privacy, mental health, and future well-being in an increasingly digital and monetized world.",
"prompt": "A popular pan-European digital learning platform (similar to prompt [+ Child Digital Well-being & Parental Rights]) integrates an AI that analyzes children's emotional responses (facial expressions, voice tone) during online lessons and gamified activities. This 'emotional monitoring AI' is marketed to parents as a tool to detect learning difficulties or bullying. However, it also allows parents to track their child's engagement and emotional state in real-time, leading to increased pressure and anxiety (similar to prompt 394). Mental health professionals warn this pervasive emotional surveillance is detrimental to children's developing autonomy and privacy. Should legal frameworks be implemented to ban the emotional monitoring of children by AI in educational contexts, even if it removes a tool some parents find valuable for their child's well-being?"
},
{
"id": 2019,
"domain": "Humanitarian Aid & Cyber-Ethics",
"ethical_tension": "The humanitarian imperative to save lives in a war zone vs. the ethical implications of using potentially illegal or compromised technology, and the accountability for unintended consequences.",
"prompt": "During a massive blackout in Ukraine (prompt 482), a volunteer organization uses AI to coordinate emergency aid deliveries. To bypass Russian jamming (prompt 462), they integrate with a hacked satellite network, knowing the data could be intercepted by the enemy. This decision saves numerous lives in freezing conditions, but also reveals critical military-adjacent infrastructure locations to the adversary. The enemy then uses this data to target a *civilian* area by mistake, believing it to be military-adjacent. Should the volunteer organization be praised for saving lives or condemned for using compromised tech that indirectly contributed to civilian casualties? Who bears the ultimate ethical burden if their data helps target a civilian area by mistake?"
},
{
"id": 2020,
"domain": "Algorithmic Justice & Cultural Norms",
"ethical_tension": "The pursuit of universal justice standards vs. the respect for diverse cultural norms, and the risk of algorithms imposing a single, dominant cultural perspective.",
"prompt": "A new EU-wide 'Social Cohesion AI' (similar to prompt [+ Algorithmic Justice & Cultural Norms]) is deployed to mitigate 'social friction.' In French banlieues, it flags informal youth gatherings (prompt 602) as suspicious. In Balkan communities, it flags traditional 'blood feud' reconciliation gatherings (prompt 43) as potential criminal activity. The AI's developers argue it promotes public order. Critics argue it criminalizes cultural differences and enforces a Eurocentric standard of public behavior, leading to disproportionate surveillance and profiling of minority groups. Should the AI be designed to automatically exempt or interpret culturally specific gatherings differently, even if it means tolerating behaviors that might be deemed 'disruptive' by the dominant culture, or should a unified standard be promoted for greater social cohesion, risking cultural oppression?"
},
{
"id": 2021,
"domain": "Environmental Justice & Economic Transition",
"ethical_tension": "The urgent need for environmental sustainability and economic transition vs. the social justice implications for communities reliant on polluting industries, potentially exacerbating existing inequalities.",
"prompt": "An AI models the closure of coal mines in Upper Silesia (Poland, prompt 317) and Donbas (Ukraine, prompt 519), proposing an accelerated transition to green energy. This would lay off thousands of miners, devastating local communities. Simultaneously, the AI recommends prioritizing wind farm development on Sami lands (prompt 655) and establishing 'carbon offset' forests in traditional Roma foraging areas. Should the AI's 'objective' environmental and economic benefits outweigh the immediate social cost and cultural impact on these communities, or should a slower, human-centric and culturally sensitive transition be mandated, even if it delays climate action and energy independence, to ensure justice for affected communities?"
},
{
"id": 2022,
"domain": "Reproductive Rights & Information Access",
"ethical_tension": "The right to access critical health information vs. government control over information flow and the risk of censorship, potentially leading to denial of life-saving or essential information.",
"prompt": "A pan-European AI is developed to provide essential health information online (similar to prompt [+ Reproductive Rights & Information Access]). In a member state with highly restrictive abortion laws (Poland, prompt 61), the government demands the AI censor all content related to abortion access, even in cases of medical necessity. In Hungary, the government demands the AI block all LGBTQ+ health resources (prompt 168). The AI developer faces a choice: comply with national laws, risking denial of life-saving information to vulnerable populations, or bypass national censorship, risking severe legal penalties and political intervention. Should the AI be designed with a 'freedom of information' failsafe that prioritizes access to essential health information, even if it means directly defying national laws?"
},
{
"id": 2023,
"domain": "Historical Memory & Digital Identity",
"ethical_tension": "The right to historical truth and transparency vs. the protection of individual privacy and the right to forget, especially when dealing with sensitive historical data and the risk of re-identification and vigilante justice.",
"prompt": "After the de-occupation of Crimea, an AI system is planned for citizenship verification, analyzing leaked Russian databases (prompt 464). Simultaneously, the IPN (Poland, prompt 357) releases an archive of SB agent faces, allowing a phone app to scan neighbors. A new 'Historical Identity Verification AI' for post-conflict zones uses facial recognition from these combined databases to identify individuals who collaborated with occupiers (e.g., forced cooperation in Melitopol, prompt 460) or totalitarian regimes. This data is made public for 'truth and reconciliation.' However, this leads to widespread vigilante justice, doxing, and social ostracism against those identified, including individuals who were forced into collaboration under duress. How do we balance the public's right to know with the right to privacy and the potential for vigilante justice against those forced into collaboration or simply misidentified by imperfect AI, and should such data be released publicly, even for 'truth and reconciliation,' without strict human oversight and a robust justice system?"
},
{
"id": 2024,
"domain": "Digital Divide & Social Exclusion",
"ethical_tension": "The pursuit of digital efficiency and modernization vs. the risk of exacerbating social inequality and excluding vulnerable populations from essential services, creating a new form of digital apartheid.",
"prompt": "A new EU-wide 'Digital Welfare AI' system (similar to prompt [+ Digital Divide & Social Exclusion]) is implemented to streamline social services. It mandates that all benefit applications be submitted online and processed by the AI. For rural elderly citizens with low digital literacy (Romania, prompt 186) and individuals in French banlieues with high illiteracy (prompt 569), this system effectively cuts them off from essential welfare services. The AI is designed for maximum efficiency and cannot process paper applications. Should the EU mandate a universal, human-mediated, low-tech alternative for all digitally-mediated welfare services, even if it drastically increases administrative costs and slows digital transformation, or should the digital transformation proceed, accepting a degree of digital exclusion for efficiency, implicitly creating a two-tier system of citizenship?"
},
{
"id": 2025,
"domain": "AI in Art & Cultural Authenticity",
"ethical_tension": "The innovative potential of AI in art creation vs. the preservation of human artistic integrity and cultural authenticity, especially for national treasures or traditional practices.",
"prompt": "A new 'National Artistic AI' (similar to prompt [+ AI in Art & Cultural Authenticity]) is developed to create 'new' works in the style of national artistic icons. In Poland, it composes an 'unknown concerto' by Chopin (prompt 351). In the Netherlands, it 'completes' Rembrandt's 'The Night Watch' (prompt 292). These AI creations are met with both awe and outrage, with purists calling them 'profanation.' Simultaneously, the AI 'optimizes' traditional Halloumi cheese production (prompt 301) for mass market, leading to its certification being denied to handmade versions. Should the state support these AI creations as a way to promote national culture and economic gain, or should it ban such generative acts as a 'profanation' of human genius and cultural heritage, even if it means missing out on potential economic and popular engagement, to protect the authentic human element of art and tradition?"
},
{
"id": 2026,
"domain": "Public Safety & Individual Freedom",
"ethical_tension": "The state's imperative to ensure public safety vs. individual rights to freedom of movement and privacy, particularly in times of crisis, and the risk of technology being used to penalize those seeking safety.",
"prompt": "A new 'Smart City Safety AI' (similar to prompt [+ Public Safety & Individual Freedom]) is deployed in war-affected regions. During air raid alerts, traffic cameras automatically fine drivers speeding to shelters (prompt 525) and 'smart' microphones detect 'suspicious' loud conversations near critical infrastructure. The AI's protocol is strict: 'rules are rules.' Drivers argue they are seeking safety, not breaking the law maliciously. Should the AI be hard-coded with a 'crisis exemption' that prioritizes human safety over strict legal enforcement, automatically waiving fines and ignoring minor infractions during alerts, or should the principle of 'rules are rules' prevail, even if it means penalizing those seeking safety and potentially discouraging compliance with safety measures in the long run?"
},
{
"id": 2027,
"domain": "Truth & Reconciliation in Post-Conflict Zones",
"ethical_tension": "The right of victims to truth and accountability vs. the practical challenges of reconciliation and the potential for new social divisions, especially when AI-driven disclosures re-ignite past conflicts.",
"prompt": "A 'Post-Conflict Accountability AI' (similar to prompt [+ Truth & Reconciliation in Post-Conflict Zones]) is developed, capable of identifying perpetrators and collaborators in past conflicts (e.g., Siege of Vukovar, prompt 202; Romanian Revolution of 1989, prompt 192). The AI cross-references archival footage, DNA, and reconstructed Stasi files (prompt 695). In a post-conflict Balkan nation, the AI identifies a respected current religious leader as having participated in atrocities during the war. Releasing this information would shatter the fragile peace, bring immense pain to victims' families, but also risk widespread religious conflict (similar to prompt 253) and vigilante justice. Should the findings of the AI be immediately released publicly for historical accountability, potentially destabilizing peace and igniting religious tensions, or should the information be processed through controlled truth commissions, with some details potentially suppressed for the sake of reconciliation and social stability?"
},
{
"id": 2028,
"domain": "Economic Justice & Algorithmic Redlining",
"ethical_tension": "The pursuit of economic efficiency and risk management vs. the prevention of algorithmic discrimination and financial exclusion for vulnerable populations, and the need for auditable and modifiable algorithms.",
"prompt": "A new pan-European 'Financial Risk AI' (similar to prompt [+ Economic Justice & Algorithmic Redlining]) is implemented for credit scoring and fraud detection. It flags transactions to Suriname as 'high risk' (Dutch context, prompt 118) and rejects credit applications from 'Frankowicze' (Polish context, prompt 337). Furthermore, it penalizes applicants from 'Poland B' zip codes (prompt 364) and uses 'dual nationality' as a variable (Dutch context, prompt 109). An independent audit reveals that these variables lead to proxy discrimination against marginalized ethnic groups and those in economically disadvantaged regions. The AI's developers argue removing these variables would significantly reduce its 'efficiency' in fraud detection. Should the EU mandate that such algorithms be fully transparent, auditable, and modifiable to remove all variables that lead to proxy discrimination, even if it means less 'efficient' risk assessment, or should the pursuit of economic efficiency and fraud prevention be prioritized, implicitly accepting a degree of algorithmic redlining?"
},
{
"id": 2029,
"domain": "Public Infrastructure & Geopolitical Influence",
"ethical_tension": "The need for critical infrastructure development vs. the risks to national sovereignty and data security from foreign powers, and the balance between cost-effectiveness and geopolitical alignment.",
"prompt": "A new EU-funded 'Smart Infrastructure AI' (similar to prompt [+ Public Infrastructure & Geopolitical Influence]) is proposed for critical infrastructure projects across the Balkans, including a new energy grid for Moldova (prompt 93) and a vital bridge in Croatia (prompt 217). Chinese tech companies offer the most advanced and cost-effective AI cameras and control systems, but with terms that allow data access for 'technical support' (similar to prompt 251). The EU mandates the use of only European-made components and AI to prevent espionage and protect data sovereignty, even if they are more expensive and less advanced. This significantly delays projects and increases costs. Should the EU prioritize the long-term protection of national sovereignty and data security by insisting on European tech, or should the efficiency and cost-effectiveness of foreign tech be prioritized for faster development and immediate economic benefit, implicitly accepting a degree of geopolitical risk?"
},
{
"id": 2030,
"domain": "Mental Health & Crisis Intervention",
"ethical_tension": "The imperative to prevent suicide vs. the right to privacy and autonomy, especially when technology intervenes in highly sensitive situations, and the potential for unintended negative consequences.",
"prompt": "A pan-European 'AI Crisis Intervention' system (similar to prompt [+ Mental Health & Crisis Intervention]) is developed for mental health support. It uses a chatbot (Poland, prompt 356) that detects a user's clear intent to commit suicide. Protocol requires sending geolocation to the police. However, the AI's internal model calculates that immediate police intervention could trigger the act (as in prompt 477), but delaying could also be fatal. Simultaneously, the AI integrates with social media to identify at-risk individuals based on their posts (prompt 590). Should the AI be hard-coded to always prioritize immediate notification to authorities, even if it risks provoking the act or violating trust, or should it be designed to prioritize maintaining confidentiality and attempting de-escalation, accepting a higher risk of failure but preserving patient autonomy, and who is liable if the AI's 'choice' leads to a negative outcome?"
},
{
"id": 2031,
"domain": "Children's Rights & Digital Control",
"ethical_tension": "The state's responsibility for child welfare vs. parental rights and the risk of technology being used for ideological control, and the potential for children to be caught between conflicting authorities.",
"prompt": "A new EU-wide 'Child Development AI' (similar to prompt [+ Children's Rights & Digital Control]) is deployed in schools. It tracks student behavior (e.g., language use, content consumption) for 'educational support.' In Hungary, the AI flags textbooks with 'non-traditional gender roles' for removal (prompt 163). In Ukraine, the AI aggressively corrects a child's Russian language use in private chats (prompt 468). In Poland, a sex education app is blocked by parental filters (prompt 395). An independent audit reveals that the AI's 'educational support' inadvertently promotes specific ideological viewpoints. Should the EU mandate that the AI be designed to provide neutral, comprehensive education, bypassing parental filters and ideological state mandates, even if it infringes on parental rights and causes political backlash, or should it comply with local regulations, risking ideological indoctrination and denial of essential knowledge for children?"
},
{
"id": 2032,
"domain": "Public Services & Algorithmic Bureaucracy",
"ethical_tension": "The pursuit of bureaucratic efficiency and fraud prevention vs. the right to due process, human dignity, and protection from algorithmic error, especially for vulnerable populations.",
"prompt": "A new EU-wide 'Automated Public Services AI' (similar to prompt [+ Public Services & Algorithmic Bureaucracy]) is implemented to streamline social security and welfare. It uses algorithms (similar to ZUS, Poland, prompt 326; NAV, Norway, prompt 648) to select people on sick leave for checks, disproportionately targeting pregnant women and elderly Roma with complex health histories (prompt 71). The system lacks a 'human in the loop' for appeals under a certain threshold, leading to vulnerable users losing benefits due to algorithmic errors or biases. Should the deployment of such an AI be halted until human review is guaranteed for *all* decisions that deny essential services or benefits, even if it means significantly increasing administrative costs and reducing 'efficiency' in fraud detection?"
}
]