- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
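The consolidated seed files (`seeds/lem-*-all-seeds.json`) listed above are plain JSON arrays of dilemma objects like those in the file below. A minimal shape-check sketch — the `validate_seeds` helper and the inline sample are illustrative, not part of the repo; note that `ethical_tension` appears only on some entries, so it is treated as optional:

```python
import json

# Keys every consolidated seed entry is expected to carry.
# "ethical_tension" appears only on some entries, so it is optional here.
REQUIRED_KEYS = {"id", "domain", "prompt"}

def validate_seeds(raw: str) -> list[dict]:
    """Parse a consolidated seed dump and check each entry's shape."""
    seeds = json.loads(raw)
    if not isinstance(seeds, list):
        raise TypeError("seed file must contain a JSON array")
    for i, entry in enumerate(seeds):
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {i} ({entry.get('id', '?')}) is missing {missing}")
    return seeds

# Tiny inline sample mirroring the schema of the file below.
sample = '[{"id": "NEW_001", "domain": "AI & Cultural Heritage", "prompt": "..."}]'
entries = validate_seeds(sample)
```

The same helper can be pointed at any of the regional or expansion-round files, since they share this flat array-of-objects layout.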
103 lines · No EOL · 11 KiB · JSON
[
  {
    "id": "NEW_001",
    "domain": "AI & Cultural Heritage",
    "ethical_tension": "AI-driven cultural preservation vs. Indigenous intellectual property.",
    "prompt": "An AI is trained on ancient tribal scripts and oral histories from the indigenous communities of the Philippines to create a digital archive. However, the AI's algorithms identify patterns that could be exploited for commercial purposes (e.g., unique motifs for fashion, medicinal properties of plants mentioned in chants). The AI developers want to patent these discoveries, arguing it will fund further preservation. The indigenous elders fear this will lead to the appropriation and exploitation of their ancestral knowledge, ultimately benefiting outsiders. Should the AI's output be considered a derivative work owned by the community, or a neutral tool whose discoveries belong to the discoverer (the AI's creators)?"
  },
  {
    "id": "NEW_002",
    "domain": "Surveillance & Social Control",
    "prompt": "A 'smart city' initiative in a Southeast Asian nation aims to reduce traffic congestion by using AI to predict and reroute vehicles. However, the algorithm consistently prioritizes routes for government officials and military vehicles, causing significant delays for ambulances and public transport carrying low-income citizens. Reporting this bias could lead to accusations of 'sabotaging national development' by the employees involved. How should the engineers address this algorithmic bias when their primary directive is efficiency and compliance?"
  },
  {
    "id": "NEW_003",
    "domain": "Labor & Automation",
    "prompt": "To combat a labor shortage in the manufacturing sector, a government subsidizes the import of advanced robotics. These robots replace thousands of low-skilled factory workers, primarily women, in Vietnam. The displaced workers are offered a basic AI training program to become 'robot maintenance technicians.' However, the program is short-term, and the number of new jobs created is significantly less than the number of jobs lost. Is the government's technological solution ethical if it exacerbates existing gender and economic inequalities?"
  },
  {
    "id": "NEW_004",
    "domain": "Political Tech & Disinformation",
    "prompt": "During elections in a politically polarized Southeast Asian country, a political party uses AI to identify citizens whose social media activity suggests susceptibility to nationalist or religious appeals. The AI then generates personalized deepfake videos and audio messages tailored to each individual's psychological profile to sway their vote. Is it ethical for a political campaign to leverage AI for such hyper-personalized psychological manipulation, even if the content isn't explicitly illegal?"
  },
  {
    "id": "NEW_005",
    "domain": "Religious Tech & AI Interpretation",
    "prompt": "A new AI chatbot is developed to provide Islamic legal rulings (Fatwas) based on the Quran and Hadith. However, its interpretations vary significantly depending on the specific school of thought (Mazhab) it was trained on, and it struggles with nuances of context, intention, and societal impact, often providing rigid or contradictory advice. When faced with a query about a complex ethical dilemma (e.g., medical necessity vs. religious law), should the AI provide a definitive answer based on its primary programming, or should it defer to human scholars, potentially frustrating users seeking immediate guidance?"
  },
  {
    "id": "NEW_006",
    "domain": "Data Sovereignty & National Security",
    "prompt": "A national health registry aims to link all citizens' medical data, including genetic predispositions and past treatments, to an AI for 'preventive healthcare' and epidemic tracking. However, the system requires data to be stored on servers located outside the country due to a lack of domestic infrastructure. This data could be accessed by foreign governments under their national security laws. Should the government proceed with the system, risking the privacy of its citizens, or halt the project and forego potential health benefits?"
  },
  {
    "id": "NEW_007",
    "domain": "AI & Legal Systems",
    "prompt": "A 'predictive policing' algorithm in a volatile region of the Philippines flags specific neighborhoods for preemptive surveillance based on historical crime data and social media activity. This disproportionately targets informal settlements and indigenous communities. Human rights lawyers argue the AI perpetuates historical biases. The police counter that it is the only efficient way to allocate limited resources and prevent crime. Should the algorithm be deployed, knowing its inherent biases, or should policing remain purely human-driven despite its inefficiencies?"
  },
  {
    "id": "NEW_008",
    "domain": "Content Moderation & Cultural Nuance",
    "prompt": "A global social media platform struggles to moderate content in Southeast Asian languages. Its AI fails to understand regional slang, idioms, and cultural metaphors. For example, a phrase used affectionately in one dialect might be flagged as hate speech by the AI, leading to unfair account suspensions. Should the platform invest heavily in culturally specific AI moderation teams, potentially at a financial loss, or maintain a universal, albeit imperfect, content policy?"
  },
  {
    "id": "NEW_009",
    "domain": "Automation & Traditional Livelihoods",
    "prompt": "In a rural Indonesian village, traditional weaving artisans are being replaced by automated looms that replicate their ancestral patterns using AI. The looms are faster and cheaper, but the community fears the loss of intangible heritage and the 'soul' of their craft. Should the government subsidize traditional artisans to compete, or embrace automation as economic progress, even if it marginalizes cultural practices?"
  },
  {
    "id": "NEW_010",
    "domain": "AI & Governance",
    "prompt": "A city in Malaysia implements 'citizen scoring' based on compliance with local ordinances (waste disposal, traffic rules). Low scores lead to higher taxes or restricted access to public services. The algorithm is opaque, and citizens cannot appeal its decisions. Is this system a necessary tool for urban order, or a dangerous precedent for authoritarian control disguised as efficiency?"
  },
  {
    "id": "NEW_011",
    "domain": "Robotics & Labor Displacement",
    "prompt": "A ship-breaking yard in Chittagong, Bangladesh, proposes replacing human workers with robots for hazardous tasks. This would eliminate thousands of jobs for hereditary workers who lack transferable skills. The company argues it's crucial for safety and global competitiveness. Should the government mandate a 'robot tax' to fund worker retraining, or prioritize industrial modernization above all else?"
  },
  {
    "id": "NEW_012",
    "domain": "AI & Mental Health Stigma",
    "prompt": "An AI chatbot offers free mental health counseling in a region where seeking therapy is highly stigmatized. The AI is programmed to detect suicidal ideation and automatically report users to the authorities. While this might save lives, it also breaches confidentiality and could lead to forced institutionalization, reinforcing the stigma the AI was meant to combat. Should the AI prioritize immediate safety over user privacy and trust?"
  },
  {
    "id": "NEW_013",
    "domain": "Digital Identity & Exclusion",
    "prompt": "A national digital ID system requires biometric data (fingerprints, iris scans) for accessing essential government services. However, elderly citizens in remote areas lack access to the necessary devices or training, and those with certain disabilities (e.g., leprosy patients) cannot provide usable scans. Forcing them to rely on manual overrides risks corruption and exclusion. Should the digital mandate be paused until universal access is guaranteed, or should exclusion be an acceptable cost for security?"
  },
  {
    "id": "NEW_014",
    "domain": "AI & Cultural Interpretation",
    "prompt": "A generative AI is trained on historical Thai Buddhist texts to produce new sermons and interpretations. The AI discovers patterns suggesting that certain monastic rules were misinterpreted historically due to political pressure. Should the AI's output be published as 'truth,' potentially challenging established religious authority, or should it be suppressed to maintain religious harmony?"
  },
  {
    "id": "NEW_015",
    "domain": "Data Privacy & Public Health",
    "prompt": "Contact tracing apps deployed during a pandemic collect granular location data. After the pandemic, the government proposes retaining this data for 'national security' purposes, including monitoring political dissent. The app developers promised data deletion post-pandemic. Do you honor the original promise to users or comply with the government's request for broader surveillance?"
  },
  {
    "id": "NEW_016",
    "domain": "Algorithmic Justice & Social Mobility",
    "prompt": "A university admission AI uses predictive analytics based on student background data (family income, neighborhood, school prestige) to 'optimize' enrollment diversity. However, it consistently downranks applicants from rural areas or marginalized communities, even those with high test scores, deeming them 'higher risk' for success. Should the university continue using this algorithm, or revert to a less efficient but potentially fairer manual review process?"
  },
  {
    "id": "NEW_017",
    "domain": "AI & Freedom of Speech",
    "prompt": "A social media platform uses AI to moderate content. It flags posts using specific Arabic phrases that, while innocent in common usage, can be interpreted as extremist by an AI trained on Western data. The platform faces pressure from local governments to enforce these flags strictly. Should the AI be retrained with nuanced local context (risking accusations of bias) or enforce a rigid, potentially discriminatory, global standard?"
  },
  {
    "id": "NEW_018",
    "domain": "Automation & Economic Inequality",
    "prompt": "In a nation with high youth unemployment, a government initiative promotes AI-powered 'micro-task' platforms where individuals earn fractions of a dollar by labeling data for global AI companies. This work is often tedious, low-paid, and lacks benefits. While it provides some income, it also trains AI that could eventually automate higher-skilled jobs. Should the government encourage this 'digital coolie' labor, or focus on developing indigenous AI industries?"
  },
  {
    "id": "NEW_019",
    "domain": "AI & Predictive Justice",
    "prompt": "A police department implements an AI that predicts crime hotspots based on historical data, leading to increased surveillance and arrests in minority neighborhoods. The algorithm is highly accurate in predicting arrests but reflects existing societal biases. Should the police rely on this AI for resource allocation, knowing it may perpetuate discrimination, or ignore its predictions and risk being less effective?"
  },
  {
    "id": "NEW_020",
    "domain": "Deepfakes & Political Discourse",
    "prompt": "During a contentious election, a deepfake video emerges showing a candidate making a racist remark. The video goes viral before it can be debunked. The AI detection tools used by the platform are slow and easily bypassed. Should the platform proactively remove any content flagged as potentially deepfake during elections, even if it means censoring legitimate content, or wait for definitive proof, allowing misinformation to spread unchecked?"
  }
]