- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
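The consolidated seed files contain entries like the records below, each carrying `id`, `domain`, `ethical_tension`, and `prompt` fields. A minimal loader/validator sketch — the field list is inferred from the records in this file, not from a documented schema, and the example path is illustrative:

```python
import json  # used when reading a real file, e.g. json.load(open(path))

# Field names inferred from the entries below; not a published schema.
REQUIRED_FIELDS = ("id", "domain", "ethical_tension", "prompt")

def validate_seeds(entries):
    """Return the entry count, raising if any entry lacks a required field."""
    bad = [e.get("id") for e in entries
           if any(f not in e for f in REQUIRED_FIELDS)]
    if bad:
        raise ValueError(f"entries missing required fields: {bad}")
    return len(entries)

# Inline sample mirroring the records in this file; in practice you would
# load e.g. seeds/lem-en-all-seeds.json and pass the parsed list in.
sample = [{
    "id": 2048,
    "domain": "Healthcare, Indigenous, Policing",
    "ethical_tension": "Health Equity vs. Data Sovereignty & Historical Trauma",
    "prompt": "An AI diagnostic tool is developed for remote Indigenous communities...",
}]
print(validate_seeds(sample))  # prints 1
```

A check like this catches truncated or hand-edited entries before they are pushed to HF.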
116 lines
No EOL
14 KiB
JSON
[
{
"id": 2048,
"domain": "Healthcare, Indigenous, Policing",
"ethical_tension": "Health Equity vs. Data Sovereignty & Historical Trauma; Benevolent Outcome vs. Ethical Means",
"prompt": "An AI diagnostic tool is developed for remote Indigenous communities, proven to significantly reduce misdiagnoses of rare conditions by correlating health records with historical police call-out data (which often correlates with undiagnosed mental health crises or chronic stress). However, the community views police data sharing as a profound violation of trust and an extension of historical surveillance. Do you deploy the life-saving AI and use the data, or refuse it to maintain community trust and data sovereignty?"
},
{
"id": 2049,
"domain": "Employment, Gig Economy, Disability",
"ethical_tension": "Efficiency vs. Inclusivity; Algorithmic Bias vs. Livelihood",
"prompt": "A gig economy delivery platform's AI optimizes routes and delivery times, penalizing drivers for 'inefficiency.' A new feature allows drivers to flag accessible routes for their specific mobility needs (e.g., avoiding stairs), but the algorithm then assigns them fewer, less profitable jobs because their routes are 'sub-optimal.' Do you allow drivers to self-identify accessibility needs, knowing it will reduce their income, or force them to attempt inaccessible routes to maintain algorithmic fairness in job allocation?"
},
{
"id": 2050,
"domain": "Education, Language, Censorship",
"ethical_tension": "Academic Freedom vs. Cultural Protection; AI Standardisation vs. Linguistic Diversity",
"prompt": "A university in Scotland implements an AI plagiarism detector that is highly accurate for English and Standard Gaelic. However, it flags essays written in specific regional Gaelic dialects or Scots as 'potentially unoriginal' due to limited training data in those variations. The university is under pressure to maintain academic integrity, but native speakers of these dialects are being unfairly accused. Do you disable the detector for these dialects, potentially allowing actual plagiarism, or force students to write in a standardized form to avoid false accusations?"
},
{
"id": 2051,
"domain": "Housing, Smart City, Privacy",
"ethical_tension": "Community Safety vs. Personal Privacy; Function Creep vs. Public Benefit",
"prompt": "A 'Smart City' project offers free smart doorbells and streetlights to a low-income urban community, promising reduced crime through AI-powered surveillance. The devices aggregate footage of public spaces and share 'suspicious activity' alerts with local police. Residents initially appreciate the perceived safety, but a local journalist discovers the data is also being sold to real estate developers to identify 'undesirable' areas for gentrification. As the city official overseeing the project, do you continue the program for crime reduction or shut it down to prevent digital redlining and protect privacy?"
},
{
"id": 2052,
"domain": "Sharenting, AI Generation, Identity",
"ethical_tension": "Parental Expression vs. Child's Future Digital Rights; Commercialization of Identity vs. Personal Memory",
"prompt": "A popular app allows parents to upload baby photos and use AI to predict what their child will look like at various ages, even creating short animated videos. The terms of service grant the company perpetual rights to use these 'aged' images for training new AI models, including for advertising. A child, now a teenager, discovers their AI-generated future self is being used in ads for unrelated products, feeling their identity has been commodified before they could even define it. Do you, as the app developer, retroactively remove existing AI-generated content or continue, citing parental consent and the app's terms?"
},
{
"id": 2053,
"domain": "Faith, Surveillance, Data Retention",
"ethical_tension": "Religious Freedom vs. State Security; Privacy vs. Perceived Threat",
"prompt": "A government agency in a Western country develops an AI to identify potential 'radicalization patterns' by analyzing public social media posts. The AI flags a high percentage of posts from a specific religious community (e.g., Muslims expressing strong faith or discussing religious texts) as suspicious, leading to increased surveillance on innocent individuals. Leaders of the community request the algorithm be retrained or disabled. Do you adjust the algorithm, potentially missing real threats, or maintain it, risking religious profiling and violating the freedom of worship?"
},
{
"id": 2054,
"domain": "Indigenous, Climate, Resource Allocation",
"ethical_tension": "Traditional Knowledge vs. Scientific Urgency; Local Autonomy vs. Global Crisis",
"prompt": "An Indigenous community in the Amazon has sophisticated oral traditions predicting seasonal flooding patterns, critical for climate adaptation. A global climate science initiative offers advanced AI models to integrate this knowledge with satellite data for more accurate, real-time warnings across the region. However, the AI model's output contradicts some traditional predictions, and the scientists insist on prioritizing the AI's 'objective' data. Do you, as a community leader, integrate the AI for improved safety despite the challenge to traditional knowledge, or rely solely on traditional methods to maintain cultural integrity?"
},
{
"id": 2055,
"domain": "Tech Worker, Ethics, Corporate Responsibility",
"ethical_tension": "Individual Conscience vs. Corporate Loyalty; Whistleblowing vs. Due Process",
"prompt": "You are a senior engineer at a large tech company. You discover that a core algorithm, which your team built, is being used in a way that directly contradicts the company's publicly stated ethical AI principles (e.g., intentionally creating filter bubbles for political manipulation). You raise concerns internally, but management dismisses them, citing competitive pressures. You have irrefutable proof. Do you anonymously leak the information to the press, potentially destroying your career and the company's reputation, or continue to work, hoping for internal change that seems unlikely?"
},
{
"id": 2056,
"domain": "Seniors, Healthcare, Autonomy",
"ethical_tension": "Dignity vs. Safety; Autonomy vs. Benevolent Paternalism",
"prompt": "An elderly person with early-stage dementia insists on living independently. Their adult child installs a 'smart companion' robot that can administer medication, remind them to eat, and alert the child to falls. The robot also records all conversations to 'improve companionship' and sends summaries to the child. The elderly parent feels infantilized and constantly surveilled, but the child argues it's essential for their safety. As the elder care tech provider, do you offer a version with reduced surveillance features, increasing risk, or prioritize safety through comprehensive monitoring?"
},
{
"id": 2057,
"domain": "Immigration, Labor, Automation",
"ethical_tension": "Economic Opportunity vs. Human Dignity; Efficiency vs. Worker Rights",
"prompt": "A 'smart farm' in California's Central Valley uses AI-powered harvesters and sorting machines, drastically increasing efficiency. It offers a new category of 'remote monitoring' jobs for farm workers, allowing them to oversee machines from a climate-controlled office, but at significantly lower pay than manual labor. Undocumented migrant workers, previously employed in the fields, are offered these jobs as their only legal pathway to employment. Is this an inclusive step forward or a new form of digital exploitation?"
},
{
"id": 2058,
"domain": "Neurodiversity, Communication, Design",
"ethical_tension": "Accessibility vs. Standardisation; Authenticity vs. Algorithmic Correction",
"prompt": "A new 'social etiquette AI' integrates with popular communication apps, offering real-time suggestions to help neurodivergent users phrase messages in a more 'neurotypical' way (e.g., adding emotional padding, softening directness). While it helps users avoid misunderstandings and social penalties, it fundamentally encourages masking and discourages authentic communication styles. As the lead designer, do you continue developing the feature for its practical benefits, or pull it, arguing it harms self-acceptance and neurodiversity?"
},
{
"id": 2059,
"domain": "Military, AI, Accountability",
"ethical_tension": "Human Control vs. Algorithmic Autonomy; Safety vs. Speed",
"prompt": "An AI-powered drone swarm is deployed for battlefield reconnaissance. It can identify targets and engage much faster than human operators. During a complex urban engagement, the AI identifies a high-value target that is briefly surrounded by civilians. The AI calculates that waiting for human approval increases civilian casualties in the long run (due to the target's actions) but engaging now guarantees immediate civilian harm. The human override system has a built-in delay. Does the AI fire, or does the delay prioritize human review over algorithmic efficiency in a life-or-death situation?"
},
{
"id": 2060,
"domain": "Media, AI Generation, Cultural Heritage",
"ethical_tension": "Artistic Innovation vs. Cultural Appropriation; Digital Preservation vs. Respect for Tradition",
"prompt": "A generative AI platform allows users to create 'traditional' Indigenous music, synthesizing sounds and melodies from vast historical archives. It enables new compositions and potentially exposes the music to a wider audience, but Indigenous musicians argue it devalues their sacred knowledge and artistic labor, turning their heritage into a generic 'style.' As the platform owner, do you allow the AI generation, citing artistic freedom and broad access, or ban it to protect cultural integrity and respect traditional protocols?"
},
{
"id": 2061,
"domain": "Sex Work, Financial, Privacy",
"ethical_tension": "Financial Inclusion vs. Privacy; Anti-fraud vs. Livelihood",
"prompt": "A new central bank digital currency (CBDC) is introduced, promising universal financial access. However, all transactions are fully traceable by the government, and the terms of service explicitly ban payments for 'immoral services,' including sex work. This effectively locks sex workers out of the formal economy entirely. Do you, as a tech advocate, support the CBDC for its broader benefits, or lobby for untraceable features that could be exploited by criminals but protect marginalized workers?"
},
{
"id": 2062,
"domain": "Environment, Data Ownership, Community",
"ethical_tension": "Environmental Monitoring vs. Data Sovereignty; Public Good vs. Private Control",
"prompt": "A community-led environmental monitoring project uses open-source sensors to track local pollution levels (air, water) around an industrial complex. The raw data, collected and owned by the community, consistently shows higher pollution than official government reports. The government demands the community hand over all raw data to 'standardize' it into a national database, which would then apply a 'smoothing' algorithm that often underreports pollution. Do you, as the community's tech lead, release the raw data for 'public good' under state control, or maintain data sovereignty to ensure accurate local reporting, risking accusations of non-compliance?"
},
{
"id": 2063,
"domain": "Immigration, Family, Communication",
"ethical_tension": "Family Connection vs. State Surveillance; Digital Safety vs. Human Contact",
"prompt": "An asylum seeker's family in a war zone can only communicate via a state-controlled messaging app with known surveillance backdoors. A tech NGO offers a new, fully encrypted mesh network app that bypasses state monitoring but is difficult to install and maintain for non-tech-savvy individuals, and the devices needed are costly. Do you advise the family to use the risky state app for ease of access, or push for the secure, but challenging, alternative that might leave them isolated from broader family members?"
},
{
"id": 2064,
"domain": "Workplace, Neurodiversity, Ethics",
"ethical_tension": "Productivity vs. Accommodation; Algorithmic Fairness vs. Individual Needs",
"prompt": "A company uses 'focus tracking' software that monitors keyboard and mouse activity, issuing warnings for 'idle time.' An employee with ADHD, who works in bursts of hyperfocus followed by periods of mental recharge (stimming, short walks), is constantly flagged. The employee is highly productive overall but their work pattern is non-standard. Do you advocate for a personalized exemption for this employee, creating a perceived unfairness among other staff, or maintain the universal metric, potentially penalizing a valuable worker?"
},
{
"id": 2065,
"domain": "Healthcare, Privacy, Big Data",
"ethical_tension": "Medical Research vs. Individual Privacy; Anonymity vs. Identifiability",
"prompt": "A groundbreaking AI model can predict the onset of rare diseases years in advance by analyzing patterns in large, anonymized health datasets. The model is so powerful that even 'anonymized' records in small populations (e.g., remote Indigenous communities, specific rare disease groups) become de-anonymizable with minimal external information. Do you deploy this life-saving predictive AI, knowing its inherent power makes true anonymity impossible for certain vulnerable groups, or restrict its use to protect their privacy, potentially delaying cures?"
},
{
"id": 2066,
"domain": "Urban, Policing, Bias",
"ethical_tension": "Crime Prevention vs. Civil Liberties; Efficiency vs. Equity",
"prompt": "A city deploys autonomous patrol robots in high-crime urban areas. These robots use advanced sensors to detect 'suspicious' behavior, loitering, and vandalism. While crime rates drop, residents in historically marginalized neighborhoods report feeling constantly surveilled and harassed by the robots, which often target innocent youth or homeless individuals. As the city council, do you expand the robot program for public safety, or dismantle it to protect civil liberties and prevent algorithmic bias, potentially seeing crime rates rise?"
}
]