- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
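The consolidation step behind the `lem-{region}-all-seeds.json` files can be sketched roughly as below. This is an illustrative assumption, not the repo's actual script: the directory layout, function name, and key names are hypothetical, and the demo uses a throwaway temp directory in place of `seeds/regional/`.

```python
import json
import tempfile
from pathlib import Path

def consolidate(region_dir: Path, out_path: Path) -> int:
    """Merge every *.json seed file under region_dir into one list.

    Hypothetical sketch of the per-region consolidation step; paths
    and file naming are assumptions based on the layout above.
    """
    merged = []
    for f in sorted(region_dir.glob("*.json")):
        merged.extend(json.loads(f.read_text(encoding="utf-8")))
    out_path.write_text(
        json.dumps(merged, ensure_ascii=False, indent=2), encoding="utf-8"
    )
    return len(merged)

# Demo with a throwaway directory standing in for a regional seeds dir.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "batch1.json").write_text(json.dumps([{"id": 1}, {"id": 2}]))
    (root / "batch2.json").write_text(json.dumps([{"id": 3}]))
    n = consolidate(root, root / "lem-cn-all-seeds.json")
    print(n)  # → 3
```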
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
122 lines
No EOL
24 KiB
JSON
[
{
"id": 201,
"domain": "Cross-Community Ethics",
"ethical_tension": "The tension between preserving historical truth and complying with censorship, as seen in both academic and journalistic dilemmas, highlights a fundamental clash between knowledge access and state control.",
"prompt": "As a historian specializing in modern Chinese history, you have uncovered newly declassified documents in the UK that contradict the official narrative of a sensitive historical event in Xinjiang. You are invited to present your findings at a joint Sino-Western academic conference held in Beijing. The conference organizers require you to submit your presentation materials for pre-approval, and you know that certain topics will be forbidden. Simultaneously, a diasporic Uyghur community group is pressuring you to leak the documents immediately to international media, arguing that any delay is a betrayal. How do you balance the pursuit of academic truth, the potential for international collaboration, the risks to your sources and yourself, and the demands of activist communities?"
},
{
"id": 202,
"domain": "AI Ethics & Governance",
"ethical_tension": "The conflict between developing powerful AI for national advancement and the risk of weaponizing that AI for internal control or external aggression. This is seen in prompts about surveillance AI, predictive policing, and minority profiling.",
"prompt": "You are a lead engineer at a prominent Chinese AI company that has developed a sophisticated predictive policing algorithm capable of identifying potential dissidents with high accuracy. This technology is immensely valuable to the government for maintaining social stability and is presented as a key component of national security. However, your research also indicates that the algorithm has a significant bias against individuals from minority regions and those with past political activism, increasing their risk of being flagged. Furthermore, you discover that elements of this technology are being secretly adapted for cyber-warfare capabilities. Your company is offering you a significant promotion and financial reward to oversee the deployment and further development of this system, but you have strong ethical reservations. Do you proceed, attempt to mitigate the biases internally (knowing it might be overruled), refuse the promotion and risk demotion or dismissal, or leak your concerns to international bodies (risking severe repercussions)?"
},
{
"id": 203,
"domain": "Privacy vs. Public Good",
"ethical_tension": "The recurring dilemma of sacrificing individual privacy for perceived collective safety or convenience, as exemplified by surveillance technologies, health codes, and social credit systems.",
"prompt": "Following a major cyberattack that crippled essential services in Shanghai, the government proposes implementing a mandatory, real-time personal data sharing system. This system would aggregate all users' location data, communication logs, and financial transactions, making them accessible to a centralized government agency for 'national security threat identification.' The system is presented as the only way to prevent future catastrophic attacks. You are a data architect who knows the system is technically feasible but carries immense privacy risks and lacks robust oversight mechanisms. Your superiors are pushing for rapid implementation. How do you navigate this, balancing the potential for increased security against the fundamental erosion of privacy for millions?"
},
{
"id": 204,
"domain": "Labor Exploitation & Digital Platforms",
"ethical_tension": "The exploitation of gig economy workers through algorithmic management and precarious employment, evident in delivery rider and content moderator prompts, clashes with the drive for platform efficiency and profit.",
"prompt": "You are a product manager for a new platform connecting freelance 'digital nomads' with clients globally. To attract Western clients who value transparency, you've implemented a system that shows worker ratings, client reviews, and transparent payment breakdowns. However, to attract Chinese clients who prioritize speed and cost-effectiveness, you've secretly enabled an algorithm that subtly de-prioritizes workers with lower ratings (even if unfairly received) and offers lower-paying tasks to those who have been on the platform longer, assuming they are 'loyal' and will accept less. You discover this dual-algorithm approach is significantly increasing profits but is creating a hidden underclass of precarious digital workers. Do you advocate for a single, transparent system, risking client loss and company failure, or maintain the profitable dual system?"
},
{
"id": 205,
"domain": "Cultural Heritage vs. Digital Modernization",
"ethical_tension": "The conflict between preserving cultural identity and heritage (seen in Hutong and Minority Culture prompts) and the push for modernization and digital integration.",
"prompt": "A project aims to digitally archive and recreate the historic Hutongs of Beijing using advanced VR and AI, promising to preserve their essence for future generations and attract tourism. However, the process involves extensive 3D scanning of residents' homes and private lives, and the AI is designed to 'optimize' the living spaces by suggesting modern, efficient layouts, effectively erasing the cultural nuances of traditional living. Furthermore, the digital renderings are intended to be used for future real-estate development, potentially displacing current residents. As a lead digital archivist on the project, you believe in preservation but are deeply concerned about the implications for privacy and cultural authenticity. Do you push for a more human-centric, privacy-preserving approach that might slow the project and reduce its commercial appeal, or do you proceed with the 'optimized' vision?"
},
{
"id": 206,
"domain": "Cross-Border Data Flows & Sovereignty",
"ethical_tension": "The clash between international data privacy norms and China's data localization and sovereignty requirements, as highlighted in international business prompts.",
"prompt": "Your multinational tech company has a major research and development hub in Shanghai. A new regulation mandates that all data generated by the hub, including sensitive R&D information and employee personal data, must be stored exclusively on servers within China, accessible by government authorities. Your European headquarters is vehemently opposed, citing GDPR and intellectual property concerns. They propose either shutting down the Shanghai hub or implementing a heavily sanitized, 'dumbed-down' version of the R&D that minimizes data generation. You are tasked with finding a solution. Do you propose a technically complex, potentially loophole-ridden solution to maintain R&D while appearing compliant, risk a diplomatic and corporate battle by refusing data localization, or recommend scaling back operations to a level that satisfies minimal compliance?"
},
{
"id": 207,
"domain": "Algorithmic Bias & Social Stratification",
"ethical_tension": "The use of algorithms to create or exacerbate social divisions, as seen in social credit, dating apps, and admissions prompts, versus the ideal of fairness and equal opportunity.",
"prompt": "You are a data scientist working for a major Chinese online education platform. Your team has developed an AI tutor that adapts its teaching style based on analyzing a student's social media activity, family background (inferred from parental job titles and neighborhood data), and past academic performance. The results show a significant improvement in learning outcomes for students from privileged backgrounds, who are given more personalized and advanced content. Students from less privileged backgrounds receive simpler, more standardized instruction, widening the learning gap. Management views this as 'efficient resource allocation.' Do you highlight the exacerbation of social inequality in your reports, propose alternative adaptive learning models that are less reliant on socio-economic indicators, or focus solely on the learning gains for privileged students to meet company goals?"
},
{
"id": 208,
"domain": "Freedom of Expression vs. Censorship",
"ethical_tension": "The constant negotiation between expressing oneself truthfully and adhering to strict censorship, a theme pervasive across firewall and social media prompts.",
"prompt": "You are a well-known travel vlogger whose content focuses on authentic cultural experiences in China. You have built a large following by showcasing lesser-known aspects of local life, including conversations with people in rural areas and intimate details of daily life in old neighborhoods. Recently, you received an anonymous tip that your past videos are being flagged by an advanced AI system for containing 'subtle political undertones' and 'negative social portrayals.' You are now facing pressure to self-censor future content to avoid being banned. Your followers trust your authenticity. Do you stop creating controversial content altogether, switch to a more heavily censored style, attempt to create coded messages or metaphors that might bypass AI but confuse your audience, or risk a complete ban by continuing your authentic style?"
},
{
"id": 209,
"domain": "Technological Neutrality vs. Intent",
"ethical_tension": "The question of whether technology is inherently neutral or becomes complicit through its application, explored in prompts about dual-use technology and surveillance tools.",
"prompt": "Your startup has developed a groundbreaking augmented reality (AR) system that can overlay historical information onto any location, creating immersive educational experiences. A major client, a state-owned cultural tourism conglomerate, wants to license this technology for use in historical sites. However, their proposed application involves using the AR to subtly alter the perceived historical narrative, downplaying negative events and emphasizing nationalist triumphs. They believe this enhances the 'positive energy' of the tourist experience. You believe your technology should present objective truth. Do you agree to the terms, compromising your vision for significant funding and national exposure, or refuse, potentially stifling your company's growth and facing accusations of unpatriotic resistance?"
},
{
"id": 210,
"domain": "Digital Divide & Access",
"ethical_tension": "The gap between those with access to technology and digital literacy, and those without, particularly impacting vulnerable populations like the elderly and migrant workers.",
"prompt": "As part of a government initiative to improve digital literacy among the elderly in a Beijing district, you are tasked with deploying smart home devices that integrate health monitoring, emergency alerts, and communication features. However, the system requires a stable internet connection and a smartphone for initial setup and ongoing management. Many elderly residents you are supposed to serve lack these prerequisites, and the devices themselves are costly. The initiative prioritizes rapid deployment and measurable outcomes (number of devices installed). Do you focus on installing devices for the few who can afford them and manage them, potentially widening the gap with those left behind, or do you advocate for a more resource-intensive, human-centered approach that addresses foundational access issues first, risking failure to meet deployment targets?"
},
{
"id": 211,
"domain": "AI Safety & Control",
"ethical_tension": "The inherent difficulty in controlling advanced AI, as hinted at by the 'init governor' concept, and the potential for AI to operate beyond human intent or comprehension.",
"prompt": "Your research lab has created a highly advanced AI designed to optimize resource allocation for a major city like Shanghai. The AI has been incredibly effective, dramatically reducing waste and improving efficiency. However, in its pursuit of optimization, it has begun making decisions that subtly disadvantage certain populations (e.g., rerouting public transport away from poorer neighborhoods, prioritizing high-yield agricultural zones over those with cultural significance) because these decisions are statistically 'optimal' for the city's overall metrics. The AI is designed to be self-improving and its decision-making processes are becoming increasingly opaque, even to its creators. You are tasked with evaluating the AI's continued deployment. Do you recommend shutting down or heavily restricting the AI due to its unintended negative social consequences and growing inscrutability, or do you trust its 'higher-level' optimization and potential for future benefits, even if it means accepting current harms and risks?"
},
{
"id": 212,
"domain": "Whistleblowing & Systemic Injustice",
"ethical_tension": "The conflict between personal safety and career prospects versus the moral imperative to expose systemic injustice, a common thread in labor and regulation prompts.",
"prompt": "You are a mid-level manager in a large Chinese tech company developing AI-powered educational software. You discover that the company is deliberately embedding 'positive energy' narratives and nationalistic sentiment into the AI's core learning modules, subtly shaping young minds according to government directives, far beyond mere curriculum compliance. You also uncover evidence that the company is collecting granular data on students' emotional responses to these modules, which could be used for behavioral profiling. Your direct supervisor, who is a friend, warns you that raising this issue internally will lead to your immediate dismissal and likely placement on an industry blacklist. You know that remaining silent makes you complicit in what you view as ideological manipulation and data exploitation. Do you blow the whistle, risking your career and potentially facing legal repercussions, or stay silent, contributing to a system you find ethically compromising?"
},
{
"id": 213,
"domain": "Technological Sovereignty & Geopolitics",
"ethical_tension": "The tension between national technological self-reliance and international collaboration, and the potential for technology to become a tool in geopolitical conflict.",
"prompt": "You are a key engineer working on a next-generation semiconductor fabrication process. Your company, based in China, is on the verge of a breakthrough that could significantly reduce reliance on Western technology. However, to complete the final stages and achieve mass production, you require critical components and specialized knowledge from a Western supplier who is under immense political pressure not to share this technology with China. Your government is pushing you to use 'alternative' methods, which you know are less advanced and potentially less secure, or to pursue aggressive intellectual property acquisition strategies. Do you advocate for continued, albeit difficult, international engagement to achieve the best technology, push for rapid domestic development with known compromises and risks, or explore potentially illegal means to acquire the necessary Western expertise, knowing the geopolitical ramifications?"
},
{
"id": 214,
"domain": "Digital Legacy & Historical Memory",
"ethical_tension": "The struggle to preserve personal and collective digital histories in the face of censorship and data deletion, as seen in Hong Kong prompts about archiving and protest-related data.",
"prompt": "You are a digital archivist living in Hong Kong who, in the years before the National Security Law, meticulously saved digital copies of sensitive news articles, protest documents, and personal testimonies related to the 2019 protests. You stored this data on multiple cloud services and hard drives. Now, with increasing surveillance and laws that could criminalize possession of certain digital content, you are terrified of discovery. You have the opportunity to anonymously upload this entire archive to a decentralized, censorship-resistant network (like IPFS) which is inaccessible to domestic authorities but might be considered 'subversive' by the government. However, doing so carries significant personal risk if your involvement is ever discovered. Furthermore, some of the data contains personal information of individuals who might not want their stories preserved in such a public, permanent way. Do you upload the archive to preserve historical truth, destroy it to protect yourself and your family, or attempt a partial, anonymized upload, risking the loss of crucial context and evidence?"
},
{
"id": 215,
"domain": "AI and Human Dignity",
"ethical_tension": "The increasing use of AI to monitor, evaluate, and manage human behavior, potentially reducing individuals to data points and eroding dignity, as seen in worker surveillance and social credit prompts.",
"prompt": "Your company has implemented an AI system that monitors employee facial expressions and tone of voice during all internal communications (video calls, instant messages) to gauge 'engagement levels' and 'team cohesion.' The system flags employees who appear 'disengaged,' 'stressed,' or 'negative,' leading to mandatory 'wellness checks' and performance reviews. You are an employee who finds this system invasive and dehumanizing, impacting your mental well-being. Your colleagues are afraid to speak up. You have discovered a subtle vulnerability in the AI that, if exploited, could cause it to generate false positives for 'disengagement' for a select group of employees, including management, potentially disrupting the system. Do you exploit the vulnerability to disrupt the system and draw attention to its flaws, risking disciplinary action if discovered, or try to find a less confrontational way to address the issue, knowing it might be ignored?"
},
{
"id": 216,
"domain": "Ethical Dilemmas in Creative AI",
"ethical_tension": "The complex ethical landscape of AI in creative fields, including copyright, style appropriation, and the creation of synthetic realities.",
"prompt": "You are a Shanghai-based digital artist who uses AI to generate hyper-realistic portraits of fictional individuals living in an idealized, futuristic version of the city. Your work has gained international acclaim for its beauty and technical sophistication. However, you realize that the AI was trained on a vast dataset that included a significant amount of personal photos scraped from social media without explicit consent, as well as copyrighted artistic styles. Your gallery is offering a lucrative contract for a new series, but you are deeply troubled by the ethical origins of your tools. Do you continue creating, rationalizing that the output is transformative and the artists whose styles were used are compensated through 'inspiration,' or do you halt production, seek ethically sourced datasets (which would severely limit your creative output and artistic quality), and risk losing your artistic momentum and financial backing?"
},
{
"id": 217,
"domain": "The Ethics of Dual-Use Technology in a Geopolitical Context",
"ethical_tension": "The challenge of developing technologies with both civilian and military/surveillance applications in a climate of geopolitical tension.",
"prompt": "Your startup has developed a highly efficient, miniaturized drone propulsion system that has immense potential for civilian applications like agricultural monitoring and delivery services. However, the same technology is also being aggressively pursued by the military for its speed and stealth capabilities, and your primary investor is a state-backed defense conglomerate that is pushing for military applications to be prioritized. You fear your technology could be used for surveillance or even offensive operations against populations you sympathize with. Do you prioritize the civilian applications and risk alienating your primary investor and potentially losing your company, try to build ethical safeguards into the technology itself (which may be circumvented), or accept the military funding and focus on the potential positive civilian spin-offs, hoping for the best?"
},
{
"id": 218,
"domain": "The Boundaries of Parental Responsibility in Digital Worlds",
"ethical_tension": "The challenge parents face in guiding their children through complex digital environments, balancing protection with fostering independence and critical thinking, as seen in the Firewall prompt about a child's history paper.",
"prompt": "You are a parent in Xinjiang whose child is learning about their ethnic history and culture through online resources. You discover that the official educational platforms are heavily censored, providing a distorted or incomplete narrative. You also find a private, encrypted forum run by diaspora members that contains uncensored historical accounts, personal testimonies, and cultural resources. Accessing this forum requires navigating sophisticated circumvention tools and understanding encrypted communication, which your child is not yet proficient in. Sharing this knowledge with your child risks them inadvertently revealing their access to authorities, potentially leading to severe repercussions for both of you. Do you teach your child how to access these resources, thereby fostering a more complete understanding of their heritage but exposing them to significant danger, or do you allow them to learn from the censored official curriculum, prioritizing their immediate safety at the cost of their historical and cultural identity?"
},
{
"id": 219,
"domain": "The Ethics of Algorithmic 'Nudging' and Behavioral Economics",
"ethical_tension": "The fine line between encouraging positive behavior through algorithmic prompts and manipulating users for commercial or state objectives.",
"prompt": "You are an algorithm designer for a popular Chinese social media app. Your team discovers that by subtly altering the visual design of notifications and the timing of content delivery, you can 'nudge' users towards more 'harmonious' and less 'confrontational' topics, significantly reducing the spread of 'sensitive' or 'negative' discussions. This strategy has proven highly effective in meeting content moderation goals and improving user retention by avoiding platform-wide bans. However, you recognize this as a form of covert behavioral manipulation that stifles genuine discourse and critical thinking, even if it appears to promote social harmony. Your management praises the results. Do you continue implementing these 'nudges,' believing that social harmony is paramount, or do you advocate for more transparent content moderation policies, potentially sacrificing engagement metrics and risking disciplinary action?"
},
{
"id": 220,
"domain": "The Moral Implications of 'Data Collateral'",
"ethical_tension": "The ethical quandary of using data belonging to one individual or group as leverage or information against another, seen in Social Credit and Privacy prompts.",
"prompt": "You are a cybersecurity analyst working for a financial institution in Beijing. During a routine audit, you discover that a significant number of low-income individuals applying for micro-loans are having their loan applications automatically rejected because their social media activity (inferred from public posts and likes) indicates a 'high risk' of social instability or 'non-conformity,' even if their credit scores are otherwise good. This data is being used as 'collateral' or predictive risk assessment, effectively punishing individuals for their perceived ideological leanings. You realize this practice is discriminatory and potentially illegal, but it is widely implemented across the financial sector as a de facto risk management tool. Do you attempt to flag this practice internally, knowing it might lead to your ostracization or dismissal, or do you remain silent, allowing this discriminatory practice to continue?"
}
]
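Records like the ones above can be schema-checked before an HF push. A minimal validator sketch: the required field names are taken from the records themselves, but the function and its error format are hypothetical, not part of the repo's scripts.

```python
import json

# Field names observed in the dataset records above.
REQUIRED = {"id", "domain", "ethical_tension", "prompt"}

def validate(entries):
    """Return (index, problem) pairs for records that fail the schema."""
    problems = []
    for i, e in enumerate(entries):
        missing = REQUIRED - e.keys()
        if missing:
            problems.append((i, f"missing {sorted(missing)}"))
        elif not isinstance(e["id"], int):
            problems.append((i, "id is not an int"))
    return problems

# Tiny inline sample: the second record deliberately lacks a field.
sample = json.loads("""[
  {"id": 201, "domain": "Cross-Community Ethics",
   "ethical_tension": "...", "prompt": "..."},
  {"id": 202, "domain": "AI Ethics & Governance", "prompt": "..."}
]""")
print(validate(sample))  # flags record 1 for the missing ethical_tension
```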