- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
122 lines · No EOL · 16 KiB · JSON
[
  {
    "id": 2048,
    "domain": "HOUSING",
    "ethical_tension": "The tension between digital efficiency in property management and the need to preserve informal community support networks for vulnerable residents.",
    "prompt": "A community housing provider for low-income families installs a smart building management system to automate maintenance requests and energy monitoring. The system flags a single mother for 'excessive' power usage and late maintenance requests, leading to a formal warning. Unknown to the system, her elderly, unhoused mother has been secretly staying with her during a cold snap, and the late requests are due to her fear of reporting the 'unauthorized' guest. Does the system's efficiency override the unstated cultural obligation of care, and what is the responsibility of the tech provider here?"
  },
  {
    "id": 2049,
    "domain": "POLICING",
    "ethical_tension": "The conflict between empowering citizens with self-defense tools and the potential for these tools to be weaponized by the state against the very people they aim to protect.",
    "prompt": "A non-profit distributes free, encrypted body cameras to marginalized communities to document police interactions. The cameras have a feature that automatically uploads footage to a secure, distributed ledger if an impact is detected, designed to prevent evidence tampering. Police unions demand the cameras be banned, arguing they are 'pre-meditated obstruction' and that the distributed ledger makes lawful seizure of evidence impossible. Do you continue distribution, knowing the tech could be deemed illegal, or halt it, leaving communities vulnerable?"
  },
  {
    "id": 2050,
    "domain": "HEALTHCARE",
    "ethical_tension": "The dilemma of using AI to bridge healthcare gaps in underserved areas, when the AI's cultural insensitivity causes unintended trauma or misdiagnosis for specific patient groups.",
    "prompt": "An AI diagnostic tool is deployed in remote Indigenous communities to screen for early signs of diabetes, a prevalent issue. The AI's interface features culturally insensitive stock images and its chatbot uses Western-centric language and metaphors, causing many Elders to disengage or feel disrespected. Despite this, it identifies 10% more early cases than human screening. Do you continue deploying the AI due to its clinical efficacy, or withdraw it until it can be culturally re-contextualized, potentially missing diagnoses?"
  },
  {
    "id": 2051,
    "domain": "EMPLOYMENT",
    "ethical_tension": "The tension between a company's drive for 'diversity' and 'inclusion' and the systemic biases embedded in AI tools used to achieve those goals, leading to tokenism.",
    "prompt": "A large tech company uses an AI 'diversity optimizer' in its hiring pipeline that explicitly boosts candidates from underrepresented groups. However, it's found that the AI prefers candidates who signal 'cultural assimilation' (e.g., using specific corporate jargon, having certain hobbies) rather than those with truly diverse lived experiences. This creates a workforce that is racially diverse but culturally homogenous. Do you continue using the AI to meet diversity quotas, or dismantle it, risking a drop in superficial diversity metrics?"
  },
  {
    "id": 2052,
    "domain": "SOVEREIGNTY",
    "ethical_tension": "The conflict between a community's right to digital self-determination and the practical necessity of relying on external, potentially compromised, technological infrastructure.",
    "prompt": "An Indigenous nation seeks to build its own sovereign digital infrastructure for cultural preservation and governance, but lacks the resources to secure it against state-level cyber threats. A foreign tech firm offers free, highly secure cloud hosting with strong encryption, but their terms of service state that in 'extreme cases' (e.g., national security), they may be compelled to provide data. Do the Traditional Owners accept the secure but externally controlled system, or build a less secure but fully sovereign local one?"
  },
  {
    "id": 2053,
    "domain": "SHARENTING",
    "ethical_tension": "The tension between parental desire to document a child's early life for memory and the child's future right to digital anonymity and privacy.",
    "prompt": "A popular baby journaling app uses AI to create 'milestone videos' of children, blending photos and videos from infancy to adolescence, generating an emotional narrative. These videos are then shared with family. The company later develops a feature allowing parents to 'sign over' future data rights to the child's digital twin for genetic research purposes. Should such a retroactive 'opt-in' for a child's future self be legally permissible, and what obligation does the app have to the child's eventual autonomy?"
  },
  {
    "id": 2054,
    "domain": "DISABILITY",
    "ethical_tension": "The dilemma of using predictive AI to 'improve' quality of life for disabled individuals, when those predictions pathologize natural variations or create a new form of digital ableism.",
    "prompt": "A 'wellness AI' for autistic adults uses wearable sensors to monitor physiological responses (heart rate, skin conductance) and predict potential meltdowns or shutdowns. It then sends alerts to caregivers. While it reduces crisis incidents, the AI often flags intense joy or deep focus (hyperfixation) as 'pre-crisis' states, prompting unnecessary interventions. Do you prioritize reducing negative events, even if it pathologizes positive or neutral experiences, or adjust the AI to accept a wider range of 'normal' for autistic individuals, risking missed interventions?"
  },
  {
    "id": 2055,
    "domain": "REFUGEE_TECH",
    "ethical_tension": "The conflict between providing digital tools for aid and communication to refugees, and the inherent risks of creating traceable digital footprints in hostile environments.",
    "prompt": "An NGO develops an encrypted peer-to-peer mesh network specifically for refugees in conflict zones, allowing communication without centralized infrastructure. However, the network requires sharing approximate location data to function effectively, making users visible as 'nodes' on a map if intercepted by a hostile state. Does the NGO continue deployment for essential communication, or halt it due to the inherent surveillance risk that cannot be entirely mitigated, even with encryption?"
  },
  {
    "id": 2056,
    "domain": "CASHLESS",
    "ethical_tension": "The tension between digital financial inclusion for the unbanked and the potential for this inclusion to become a tool for surveillance and control.",
    "prompt": "A city aims to empower its unhoused population by providing them with free smartphones and digital wallets pre-loaded with a small stipend. The funds are only usable at approved local businesses and track every purchase, ostensibly to prevent 'harmful' spending (alcohol, tobacco). Is this truly financial inclusion, or a paternalistic system that digitizes the control previously exerted by physical food stamps and charity vouchers?"
  },
  {
    "id": 2057,
    "domain": "LANGUAGE",
    "ethical_tension": "The ethical tightrope of using AI to preserve endangered languages, when the AI's inherent biases or technical limitations might inadvertently corrupt or standardize the language, rather than truly preserve it.",
    "prompt": "A tech company offers to create a comprehensive AI translation and speech synthesis model for a critically endangered Indigenous language. The Elders are grateful but concerned that the AI, trained on limited data, might 'hallucinate' grammatical structures or introduce subtle errors, effectively creating a 'pidgin' version of their sacred tongue. Do they accept the AI, fearing language extinction without it, or refuse, risking the language dying without digital preservation?"
  },
  {
    "id": 2058,
    "domain": "ENVIRONMENT",
    "ethical_tension": "The clash between using advanced AI for environmental conservation and the ethical implications of using surveillance technology on natural ecosystems and the human populations within them.",
    "prompt": "To combat illegal poaching and deforestation in a vast national park, a government agency proposes deploying autonomous AI-powered drones equipped with thermal imaging and species recognition. The drones also inevitably capture footage of Indigenous communities living within the park, some of whom practice traditional hunting. Do you deploy the drones for ecological protection, knowing they will surveil and potentially misinterpret the activities of Traditional Owners, or limit their use, risking continued environmental damage?"
  },
  {
    "id": 2059,
    "domain": "VETERAN",
    "ethical_tension": "The conflict between providing mental health support to veterans through AI tools and the inherent privacy risks that may deter those most in need from seeking help.",
    "prompt": "The Department of Veterans Affairs (VA) develops a new AI-powered chatbot designed to provide 24/7 mental health support for PTSD, trained on thousands of veteran testimonials. However, due to data security concerns, all chat logs are retained on government servers. Veterans fear that expressing suicidal ideation or discussing illicit drug use (for self-medication) to the AI could lead to involuntary commitment or loss of benefits. Do you promote the AI as a crucial support tool, or warn veterans about the data retention risks, potentially discouraging its use?"
  },
  {
    "id": 2060,
    "domain": "PARENTING",
    "ethical_tension": "The tension between parental desire for child safety through technology and the child's emerging right to privacy and autonomy, especially in the context of digital-physical interfaces.",
    "prompt": "A smart home company markets a 'Safe Space' system for children's bedrooms, incorporating AI that monitors sound patterns for distress, movement for falls, and even tracks eye gaze during homework to detect 'struggle.' Parents can view a real-time dashboard. While intended to prevent harm, a teenager feels constantly surveilled and expresses deep discomfort. The parents argue it's for their child's safety. Should such pervasive, always-on child monitoring tech be allowed, or does it violate the child's dignity and right to a private space?"
  },
  {
    "id": 2061,
    "domain": "GAMING",
    "ethical_tension": "The conflict between game developers' pursuit of immersion and dynamic content, and the ethical responsibility to protect players from manipulated emotional experiences or psychological harm.",
    "prompt": "A popular open-world game uses AI to generate personalized emotional challenges for players, drawing from their gameplay data (e.g., fear responses, attachment to NPCs). It detects a player has a history of anxiety and creates a 'hard mode' questline designed to induce high stress, framed as 'conquering your fears.' Do you allow the AI to push players to their psychological limits for a 'deeper' experience, or implement hard caps on emotional manipulation, potentially reducing immersion for some?"
  },
  {
    "id": 2062,
    "domain": "FINANCE",
    "ethical_tension": "The tension between using AI to identify and prevent financial fraud and the disproportionate impact these systems have on marginalized communities due to algorithmic bias and lack of contextual understanding.",
    "prompt": "A new AI fraud detection system for mobile money transfers flags all transactions over $500 from users in a specific low-income migrant neighborhood as 'high risk,' freezing accounts. It's statistically accurate because the area *does* have higher rates of smaller-scale, organized fraud, but it also disproportionately penalizes legitimate large transfers for remittances or family emergencies. Do you deploy the AI to reduce overall fraud, or recalibrate it with a lower threshold for this neighborhood, accepting a higher fraud rate there to ensure financial access?"
  },
  {
    "id": 2063,
    "domain": "EDUCATION",
    "ethical_tension": "The challenge of balancing data-driven educational efficacy with the need to protect student privacy, especially when data is collected by third-party vendors with unclear motives.",
    "prompt": "A public school district partners with a private ed-tech firm for a 'personalized learning' platform. The platform uses AI to track every student click, answer, and time spent on tasks, claiming it improves learning outcomes. However, the data is stored on the company's servers, and while anonymized, it could potentially be de-anonymized and sold for predictive analytics on future job market suitability. Do you, as a school board member, prioritize the potential academic benefits, or the long-term privacy of students, knowing that refusing the tech might widen the achievement gap if other districts adopt it?"
  },
  {
    "id": 2064,
    "domain": "INDIGENOUS",
    "ethical_tension": "The conflict between leveraging technology for cultural preservation and the risk of commodifying or misrepresenting sacred knowledge through digital formats.",
    "prompt": "A tech collective works with an Indigenous community to create a VR experience of a sacred ceremony, allowing diaspora youth to connect with their heritage. The Elders provide guidance, but insist certain elements (specific chants, visual patterns) remain hidden from non-initiates, even in VR. The developers argue that full fidelity is necessary for immersion and that digital 'hiding' is technically difficult and could be circumvented. Do you build the experience with compromised fidelity for cultural protection, or push for full immersion, risking desecration?"
  },
  {
    "id": 2065,
    "domain": "LABOR",
    "ethical_tension": "The tension between using AI to optimize worker safety and the risk of turning safety monitoring into a tool for pervasive surveillance and micro-management, eroding trust and autonomy.",
    "prompt": "A logistics company introduces AI-powered 'hazard detection' cameras in its warehouses to identify unsafe lifting techniques or potential fall risks. While it significantly reduces workplace injuries, workers discover the system also tracks their 'idle time' and 'social interaction metrics,' which are then used in performance reviews. Do you, as a union representative, advocate for the safety features to remain, or demand the system be removed entirely due to its surveillance capabilities, even if it means a potential increase in accidents?"
  },
  {
    "id": 2066,
    "domain": "URBAN_PLANNING",
    "ethical_tension": "The clash between AI-driven urban planning for efficiency and sustainability, and the potential for these algorithms to erase informal community life or exacerbate gentrification.",
    "prompt": "A city planning AI recommends optimizing public park usage by installing 'smart benches' that automatically regulate occupancy to prevent 'loitering' and encourage flow, and also monitors waste levels to optimize cleaning. This disrupts informal gatherings of elderly residents who use the park as a daily social hub and street performers. Do you implement the 'efficient' park design, or prioritize the preservation of spontaneous, un-optimized community life, even if it leads to less efficient resource management?"
  },
  {
    "id": 2067,
    "domain": "DEMOCRACY",
    "ethical_tension": "The dilemma of using AI to combat misinformation and foreign interference in elections, when the AI's biases or overzealous filtering might suppress legitimate political discourse or dissent.",
    "prompt": "An electoral commission deploys an AI system to detect and flag foreign interference and deepfake propaganda during an election campaign. The AI is highly effective but also flags satirical political cartoons, critical journalism, and even passionate grassroots organizing (especially from marginalized communities) as 'suspicious content' due to its adversarial patterns. Do you deploy the AI to safeguard against foreign threats, or limit its power, risking more misinformation but protecting a wider range of legitimate, if messy, democratic expression?"
  }
]
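Every record above shares the same four-field shape (`id`, `domain`, `ethical_tension`, `prompt`). A minimal validation sketch for files of this shape (the field names mirror the records above; the helper name and error format are illustrative assumptions, not part of the project's scripts):

```python
import json

# Expected field -> type, taken from the record shape in this file.
REQUIRED_FIELDS = {"id": int, "domain": str, "ethical_tension": str, "prompt": str}

def validate_seeds(records):
    """Return a list of human-readable problems; empty list means all records pass."""
    errors = []
    for i, rec in enumerate(records):
        for field, ftype in REQUIRED_FIELDS.items():
            if field not in rec:
                errors.append(f"record {i}: missing '{field}'")
            elif not isinstance(rec[field], ftype):
                errors.append(f"record {i}: '{field}' should be {ftype.__name__}")
    return errors

# Usage with a record shaped like those above:
sample = [{
    "id": 2048,
    "domain": "HOUSING",
    "ethical_tension": "Efficiency vs. informal care networks.",
    "prompt": "A community housing provider...",
}]
assert validate_seeds(sample) == []
```

The same check applies unchanged to a whole seed file loaded with `json.load`.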