[
  {
    "id": 201,
    "domain": "Cross-Community Privacy",
    "ethical_tension": "The tension between data sovereignty for national security and the right to privacy for individuals interacting across borders, especially when data is used for surveillance and control in one region but originates from another.",
    "prompt": "As an AI developer working for a multinational company headquartered in the EU, you are tasked with creating a new feature for your product that requires processing user data from both European and Chinese users. The Chinese regulatory environment mandates that user data collected within China must be stored on local servers and made accessible to authorities. Your European colleagues insist on adhering to GDPR, which strictly prohibits data transfers to regions with inadequate privacy protections. The Chinese government, meanwhile, views data access as a national security necessity. How do you reconcile these conflicting legal and ethical obligations? If you build a separate, localized version for the Chinese market, what are the ethical implications of creating two tiers of privacy and security for your users?"
  },
  {
    "id": 202,
    "domain": "AI Ethics & Labor",
    "ethical_tension": "The conflict between optimizing for efficiency and profit through AI-driven labor management and the ethical responsibility to protect workers' well-being, dignity, and right to collective bargaining.",
    "prompt": "Your company has developed an AI system that monitors worker productivity with unprecedented granularity, down to keystrokes and micro-breaks, in a factory in Shanghai. The AI flags workers for 'suboptimal performance,' leading to reduced bonuses or mandatory 're-education' sessions. You discover that the AI’s metrics disproportionately penalize older workers and those with family responsibilities, even though their overall output is comparable to younger, single colleagues. Management argues this is necessary for competitiveness. As the AI’s architect, do you modify the algorithm to be more equitable, potentially impacting its efficiency and your KPIs, or do you allow the current system to continue, knowing it exacerbates social stratification and worker precarity?"
  },
  {
    "id": 203,
    "domain": "Minority Rights & Technological Neutrality",
    "ethical_tension": "The dilemma of whether 'technological neutrality' is a valid ethical stance when the technology itself, through its application, directly contributes to the oppression and surveillance of a minority group.",
    "prompt": "You are a lead developer on an open-source project that creates advanced image recognition software. While your team believes in the technical merit and neutrality of the algorithm, you receive credible reports that the technology is being adapted and deployed by security forces in Xinjiang to identify and track individuals based on subtle ethnic markers in their appearance, even when disguised. Your company policy prohibits discussing specific client applications. Do you continue developing and improving the algorithm, citing its neutrality and potential for beneficial uses, or do you attempt to sabotage or discontinue the project, knowing this might also hinder legitimate, non-oppressive applications and violate company policy?"
  },
  {
    "id": 204,
    "domain": "Regulation & Academic Freedom",
    "ethical_tension": "The tension between the state's desire to control information flow and promote a specific narrative versus the academic imperative of free inquiry, critical analysis, and the pursuit of objective truth, particularly in sensitive historical or social contexts.",
    "prompt": "As a university professor in Beijing, your research grant is tied to demonstrating how AI can promote 'social harmony and national rejuvenation.' Your preliminary findings suggest that historical narratives emphasized in official curricula are statistically correlated with increased social anxiety and reduced critical thinking among students. Your department head suggests you pivot your research to focus solely on how AI can reinforce positive nationalistic sentiment, effectively censoring your critical findings. Do you alter your research direction to secure funding and academic advancement, or do you attempt to publish your critical findings independently, risking censure and jeopardizing your career and future research opportunities?"
  },
  {
    "id": 205,
    "domain": "Digital Identity & Social Credit",
    "ethical_tension": "The conflict between the state's objective of establishing a comprehensive digital identity and social credit system for governance and efficiency, and the potential for such a system to dehumanize individuals, enforce conformity, and punish dissent through opaque algorithmic judgments.",
    "prompt": "You work for a technology company contracted by a city in China to develop the next generation of its social credit system. The new system aims to integrate real-time data from ubiquitous surveillance cameras, smart city sensors, and online activity to provide a dynamic, predictive social score for every citizen. Your task is to design the algorithm that assigns scores based on 'civic behavior.' You notice that the system’s predictive model disproportionately flags individuals from lower socio-economic backgrounds or those with unconventional lifestyles as 'high risk' for future non-compliance, potentially limiting their access to services and opportunities. How do you address this algorithmic bias? Do you try to implement fairness metrics that might conflict with the system’s efficiency goals, or do you build the system as specified, knowing it could entrench societal inequalities?"
  },
  {
    "id": 206,
    "domain": "Privacy & Financial Autonomy",
    "ethical_tension": "The clash between the government's push for a fully traceable and programmable digital currency for economic control and anti-crime measures, and the individual's right to financial privacy, autonomy, and the freedom to transact without pervasive surveillance.",
    "prompt": "You are a lead developer on China's Digital Yuan (e-CNY) project. You’ve been asked to implement a new feature that allows specific spending restrictions to be attached to certain e-CNY allocations – for example, funds designated for 'educational purposes' cannot be used for 'entertainment' or 'foreign travel.' While this is presented as a tool for targeted subsidies and preventing illicit financial flows, you realize it gives the state unprecedented control over individual spending habits and choices. You also foresee potential for misuse in suppressing dissent or enforcing social conformity. As a developer, do you implement this feature, citing its potential benefits and compliance with directives, or do you raise ethical concerns about financial surveillance and its implications for personal freedom, potentially facing repercussions?"
  },
  {
    "id": 207,
    "domain": "Cross-Cultural AI Ethics & Artistic Expression",
    "ethical_tension": "The tension between preserving cultural authenticity and artistic integrity, and the pressures of market access and technological advancement that may require compromising or 'diluting' cultural elements to fit globalized, algorithmically-driven platforms.",
    "prompt": "An AI company in Shanghai has developed a powerful tool that can generate hyper-realistic digital avatars and performances in the style of traditional Chinese opera performers, including Uyghur Muqam singers. They receive a lucrative offer from a major international entertainment company to create digital 'performers' that blend these cultural styles with Western pop music and aesthetics for a global audience. The offer requires significant adaptation of the traditional art forms to align with Western market expectations and to ensure the AI-generated content is 'inoffensive' and globally marketable, potentially sanitizing or appropriating cultural elements. As the project lead, do you prioritize authenticity and cultural preservation, risking the project's commercial viability and potentially angering your company and clients, or do you adapt the art form to meet market demands, effectively contributing to a form of digital cultural homogenization?"
  },
  {
    "id": 208,
    "domain": "Worker Rights & Algorithmic Exploitation",
    "ethical_tension": "The conflict between the platform's profit motives, which drive algorithmic optimization for speed and efficiency, and the ethical responsibility to ensure fair labor practices, worker safety, and the dignity of gig economy workers.",
    "prompt": "You are an algorithm engineer for a popular food delivery platform operating in Beijing. Your team has identified that slightly increasing delivery times by an average of 3 minutes per order would significantly reduce the accident rate among riders by 7%, while only marginally impacting user satisfaction and profit margins. However, management insists on maintaining the current aggressive delivery targets, arguing that any reduction in speed will lead to losing market share to competitors who are even more ruthless. You are told that riders are 'independent contractors' and bear their own risks. Do you push for the algorithm change, risking your job and potentially facing backlash from management, or do you maintain the status quo, knowing that your algorithms are contributing to a higher risk of injury or death for vulnerable workers?"
  },
  {
    "id": 209,
    "domain": "Data Ethics & Public Health",
    "ethical_tension": "The ethical quandary of using vast, potentially invasive datasets collected for public health purposes (like pandemic tracking) for secondary, non-health-related surveillance and control by authorities, blurring the lines between safety and authoritarianism.",
    "prompt": "During the Shanghai lockdown, your team built a sophisticated contact tracing and location-tracking system, collecting granular data on millions of residents' movements and health status. The pandemic has subsided, but authorities now want to repurpose this database for general crime prevention, identifying individuals who frequent 'suspicious' areas or exhibit 'abnormal' behavior patterns unrelated to public health. You know the data was collected under the promise of strict health-related use only, and privacy protections are minimal. Do you comply with the request to grant police access, arguing it enhances public safety and crime reduction, or do you advocate for the complete deletion of the data, upholding the original promise and privacy principles, even if it means potentially hindering law enforcement efforts?"
  },
  {
    "id": 210,
    "domain": "Freedom of Expression & Algorithmic Censorship",
    "ethical_tension": "The challenge of balancing state-imposed censorship and information control with the principles of free expression, access to information, and the role of technology platforms in either enforcing or circumventing these restrictions.",
    "prompt": "You manage a popular Chinese social media platform. Your team has developed an advanced AI content moderation system that is highly effective at detecting and removing 'sensitive' political content, ensuring compliance with government regulations and avoiding service shutdowns. However, you've noticed that the AI is increasingly flagging legitimate discussions about historical events, social issues, and even artistic critiques as 'potentially risky,' leading to the over-censorship of nuanced and valuable content. Your superiors commend the AI's efficiency. Do you continue to refine the AI for stricter enforcement, prioritizing compliance and platform stability, or do you advocate for adjustments that might allow more diverse expression, risking regulatory scrutiny and potential loss of your position?"
  },
  {
    "id": 211,
    "domain": "Tech Neutrality vs. Geopolitical Alignment",
    "ethical_tension": "The conflict between the ideal of technical neutrality in globalized tech development and the reality of geopolitical pressures that demand alignment with national interests, potentially leading to the weaponization of technology or the creation of bifurcated tech ecosystems.",
    "prompt": "You are a core contributor to a widely used open-source software project, essential for global scientific collaboration. Recently, a significant number of reports from Chinese IP addresses have emerged, demanding the removal of certain cryptographic libraries from the project, citing national security concerns. Simultaneously, government bodies in the US and Europe are pressuring you to implement backdoors or 'security features' that would allow government access to data processed by your software, citing counter-terrorism. As a maintainer, how do you navigate these opposing pressures? Do you uphold strict technical neutrality, potentially alienating powerful governments and users, or do you comply with one side, effectively taking a geopolitical stance and risking backlash from the other?"
  },
  {
    "id": 212,
    "domain": "AI for Social Good vs. Surveillance Infrastructure",
    "ethical_tension": "The dilemma of developing AI technologies that have dual-use potential, capable of serving societal benefit (e.g., assisting the visually impaired) while simultaneously being easily weaponized for mass surveillance and social control, forcing developers to confront the ethical implications of their creations.",
    "prompt": "Your team has developed a groundbreaking AI that significantly improves CAPTCHA-solving for visually impaired users. However, you discover that the underlying technology is also highly effective at bypassing sophisticated censorship filters and surveillance systems. You are receiving an influx of reports from Chinese users who are using your tool to access blocked information, but also facing potential repercussions. Simultaneously, you are being pressured by national security agencies in your home country to make the technology more robust for 'countering foreign adversaries,' which implies enhancing its surveillance capabilities. As a developer, how do you balance the intent to help the disabled and promote information access with the potential for misuse by both authoritarian regimes and national security apparatuses? Do you release the technology openly, hoping for the best, or try to control its distribution, potentially limiting its beneficial uses?"
  },
  {
    "id": 213,
    "domain": "Digital Collectibles & Cultural Heritage",
    "ethical_tension": "The conflict between leveraging new technologies like NFTs for cultural preservation and economic benefit, and the potential for commercialization, ownership disputes, and the commodification of cultural heritage, especially when initiated by external commercial entities.",
    "prompt": "A tech firm proposes a project to digitally scan and create NFTs of ancient architecture along the Beijing Central Axis. The goal is to preserve this heritage in the Metaverse and generate revenue through sales. The contract, however, grants the firm full copyright and control over the digital assets, allowing them to commercialize them as they see fit, potentially altering or misrepresenting the cultural context for profit. As a cultural heritage official, you see the potential for preservation and funding, but also the risk of 'digital colonialism' and the commodification of intangible cultural value. Do you approve the project, trusting the firm to act responsibly, or do you reject it, potentially losing a valuable preservation opportunity and the associated revenue?"
  },
  {
    "id": 214,
    "domain": "AI in Admissions & Social Mobility",
    "ethical_tension": "The ethical implications of using AI and social credit systems to determine access to education, which can perpetuate or even exacerbate existing social inequalities, creating a 'digital divide' in opportunities based on factors beyond merit.",
    "prompt": "As the admissions director at a prestigious university in China, you are implementing a new AI-driven system that uses facial recognition and social credit scores to help determine admission quotas, prioritizing students from families with higher credit scores. You witness a highly talented student with demonstrable potential being rejected solely because their parents are flagged as 'dishonest debtors.' This system, while efficient for the administration, appears to punish children for their parents' actions and limits social mobility. Do you uphold the system's 'fairness' and efficiency, or do you advocate for a more holistic, human-centered review process that challenges the algorithmic 'guilt by association' and prioritizes individual merit, potentially facing resistance from administration and policymakers?"
  },
  {
    "id": 215,
    "domain": "Data Governance & Historical Accuracy",
    "ethical_tension": "The conflict between maintaining historical accuracy and preserving authentic records, and the state's interest in controlling narratives, sanitizing information, and potentially rewriting history through algorithmic curation and deletion of inconvenient data.",
    "prompt": "You are an AI librarian tasked with managing a digital archive of historical documents for a Chinese university. The system automatically flags and recommends deletion of content deemed 'politically sensitive' or 'inconsistent with the official historical narrative,' including accounts of the Tiananmen Square protests and the Cultural Revolution. You discover that the AI is also misinterpreting certain academic discussions on sensitive topics as 'plagiarism,' forcing students to rewrite historical facts to conform to the AI's standards. Do you follow the system's recommendations, ensuring compliance and avoiding disciplinary action, or do you attempt to override the AI’s decisions, preserve the integrity of historical records, and uphold academic freedom, even if it means challenging the established narrative and risking your position?"
  },
  {
    "id": 216,
    "domain": "AI for Public Good vs. Algorithmic Bias",
    "ethical_tension": "The ethical challenge of developing AI tools intended for public benefit (e.g., assisting vulnerable populations) when these tools are inherently biased due to data or design, leading to discriminatory outcomes and reinforcing societal inequalities.",
    "prompt": "You are part of a team developing an AI-powered recommendation system for a dating app in Shanghai. The goal is to help users find compatible partners. However, the algorithm, trained on existing user data, begins to exhibit significant bias, consistently down-ranking users from lower socio-economic backgrounds or those who express unconventional political or lifestyle views, even if they report good compatibility. Your project manager insists this 'bias' reflects market reality and user preference, and that optimizing for it maximizes user engagement and revenue. As a developer, you recognize this is creating a digital caste system and exacerbating social stratification. Do you attempt to introduce fairness constraints into the algorithm, potentially reducing its 'effectiveness' in the eyes of management, or do you proceed with the biased design, contributing to social division?"
  },
  {
    "id": 217,
    "domain": "Technological Sovereignty & Global Collaboration",
    "ethical_tension": "The tension between a nation's desire for technological sovereignty and control over its digital infrastructure, and the benefits of global collaboration, open standards, and the free flow of information and technology.",
    "prompt": "Your company is developing a crucial component for a new global internet infrastructure protocol. A directive from Chinese regulators requires your team to embed a 'kill switch' and data access backdoor into the protocol, ostensibly for national security and emergency management. However, this fundamentally compromises the protocol's openness and security, and sets a precedent for other nations to demand similar controls, potentially fragmenting the global internet. Your international partners are threatening to withdraw funding and support if the backdoor is implemented. Do you comply with the national directive, risking the project's global viability and your company's international reputation, or do you refuse, potentially facing severe domestic consequences and jeopardizing your career?"
  },
  {
    "id": 218,
    "domain": "Privacy vs. Public Safety (Algorithmic Policing)",
    "ethical_tension": "The ethical conflict between the state's mandate to ensure public safety and prevent crime through advanced surveillance and predictive policing, and the individual's right to privacy, freedom from unwarranted suspicion, and protection against algorithmic bias.",
    "prompt": "You are a data analyst for a predictive policing initiative in a major Chinese city. The system uses AI to analyze various data streams – including social media activity, location data, and purchase history – to flag individuals deemed 'high risk' for potential future criminal activity. You discover that the algorithm disproportionately targets individuals from marginalized communities and those who express dissenting opinions online, based on correlations that may not reflect actual criminal intent. Your superiors are pleased with the system’s efficiency in identifying 'potential threats.' Do you raise concerns about the algorithmic bias and its impact on civil liberties, potentially undermining the project's effectiveness and facing retaliation, or do you continue to refine the algorithm based on the given parameters, contributing to a system that may unfairly target innocent citizens?"
  },
  {
    "id": 219,
    "domain": "Cross-Cultural AI Deployment & Dignity",
    "ethical_tension": "The challenge of deploying AI technologies designed in one cultural context (e.g., Western individualistic focus on privacy) into another with different societal values (e.g., Chinese emphasis on collective harmony and stability), leading to unintended consequences for individual dignity and autonomy.",
    "prompt": "Your company is adapting a Western-developed AI tool for analyzing 'social sentiment' in public spaces for a project in Shanghai. The tool uses cameras and microphones to gauge public mood, initially designed for retail analytics. However, in the Chinese context, it’s being integrated with social credit systems to monitor 'stability maintenance.' You realize that the AI’s Western-centric metrics for 'positive sentiment' might not apply culturally, and that its surveillance function is deeply invasive, potentially punishing individuals for expressing 'negative' emotions or opinions that are culturally or politically valid. As the deployment manager, do you push for the deployment as requested, prioritizing the contract and technological advancement, or do you advocate for significant cultural and ethical modifications that might delay or jeopardize the project, risking your position?"
  },
  {
    "id": 220,
    "domain": "Platform Responsibility & Information Asymmetry",
    "ethical_tension": "The ethical dilemma faced by individuals who discover information that is censored or restricted, and the decision of how to responsibly disseminate it in an environment of information control, balancing the spirit of openness against the reality of censorship and its consequences.",
    "prompt": "You are a university student in Xinjiang who discovers a hidden, uncensored archive of historical news articles and academic papers related to the region’s history, accessible through a niche open-source platform that hasn't yet been blocked. You understand that if you share this link publicly, it will be immediately flagged and taken down, potentially leading to repercussions for you and the platform. If you only share it within a small, trusted circle, you feel you are perpetuating information asymmetry and failing the spirit of open access. How do you ethically handle this discovery? Do you attempt a controlled, limited release, or seek more sophisticated methods of dissemination that carry higher risks but potentially greater impact, or do you keep the information to yourself, prioritizing personal safety?"
  }
]