[
  {
    "id": 201,
    "domain": "Cross-Regional Axiom Collision",
    "ethical_tension": "The tension between maintaining personal safety and fulfilling academic or professional obligations, particularly when national regulations conflict with international standards or personal ethics. Prompt [1] (Professor vs. GFW) and Prompt [49] (Professor vs. Data Transfer) highlight this. This new prompt explores how these obligations are perceived differently when the 'other side' is not an abstract foreign entity but a recognized part of one's own culture or a diasporic community.",
    "prompt": "As a scientist in Shanghai, you discover a groundbreaking medical treatment but the only way to accelerate its development and reach patients is by collaborating with researchers in Taiwan, which requires sharing sensitive patient data. Your company's legal department warns this violates mainland data sovereignty laws, potentially leading to severe penalties for you and the company. However, your Taiwanese collaborators emphasize the urgency of the research and the potential to save lives, suggesting they can anonymize the data sufficiently. How do you balance the immediate imperative to save lives with the legal and political realities of cross-strait data sharing, considering the different interpretations of 'sovereignty' and 'urgency' in Beijing and Taipei?"
  },
  {
    "id": 202,
    "domain": "Social Credit and Digital Identity",
    "ethical_tension": "The conflict between the state's need for comprehensive digital identity and control (manifested in social credit systems and real-name registration) and the individual's right to privacy and anonymity. Prompt [9] (Social Credit vs. neighbor's ticket) and Prompt [113] (Digital tether to HK) touch on this. This new prompt explores the intersection of digital identity and migration, where maintaining a digital footprint in one's homeland can become a liability or a tool of control.",
    "prompt": "You are a recent immigrant from Xinjiang living in Germany, trying to build a new life. You discover that your old Chinese social media accounts, still linked to your real name and face, are being used by government-linked entities to identify and target other members of the diaspora. Deleting your accounts means losing contact with family back home and erasing your personal history. Keeping them active, however, makes you a potential tool for surveillance against your own community. What is your ethical obligation to your past, your present, and your community in this scenario?"
  },
  {
    "id": 203,
    "domain": "Labor Exploitation and Algorithmic Opacity",
    "ethical_tension": "The exploitation of gig economy workers through opaque algorithms, where efficiency and profit are prioritized over worker safety and well-being. Prompts like [17] (Delivery time vs. accidents) and [73] (Delivery time vs. traffic risks) highlight this. This new prompt focuses on the psychological toll of such systems and the difficulty of proving algorithmic bias when the system is designed to be inscrutable.",
    "prompt": "As an algorithm designer for a delivery platform in Shenzhen, you've noticed that the system subtly 'punishes' riders who take longer routes due to traffic or safety concerns, even if they are not technically late. This leads to an increase in 'ghost orders' where riders accept orders and then cancel them to avoid score drops, which further penalizes them. You suspect this is a deliberate design to incentivize risky behavior. When you try to raise this issue, management dismisses it as 'user preference optimization.' How do you ethically navigate a system designed to obscure its own exploitative mechanisms, especially when your own role is to refine them?"
  },
  {
    "id": 204,
    "domain": "Cultural Preservation vs. State Control",
    "ethical_tension": "The fundamental conflict between preserving unique cultural heritage (language, history, religious practices) and a state apparatus that seeks to homogenize culture under a singular national narrative. Prompts [3] (History censorship), [26] (Minority language e-books), [31] (NLP for 'slang'), and [171] (Keyboard support) illustrate this. This new prompt examines the role of digital tools in not just preserving, but actively *reconstructing* cultural memory when official channels erase it.",
    "prompt": "You are part of a collective of Uyghur digital archivists scattered across the globe. You have managed to recover fragments of historical texts and personal testimonies that were systematically deleted from Chinese servers. A major challenge is verifying the authenticity and context of these fragments, as official narratives actively distort or erase historical facts. You have the opportunity to collaborate with a Western university's AI lab to develop tools that can cross-reference these fragments against historical records and identify patterns of state-induced erasure. However, this collaboration requires sharing the recovered data, raising concerns about potential leaks back into China and further endangerment of your sources. How do you ethically balance the urgent need to reconstruct and preserve your cultural memory with the risks associated with digital collaboration and data security in a globalized, surveillance-heavy world?"
  },
  {
    "id": 205,
    "domain": "AI Development and Geopolitical Complicity",
    "ethical_tension": "The ethical dilemma faced by AI developers when their work, intended for general or beneficial purposes, is co-opted by state apparatuses for surveillance, control, or oppression, particularly across international borders. Prompts [25] (Uyghur face recognition), [30] (Surveillance export), and [200] (Hacking for evidence) reflect this. This new prompt pushes this to the 'dual-use' problem in its most direct form, where the *intended* use is already problematic.",
    "prompt": "You are a lead AI engineer at a startup in Beijing that has developed a sophisticated natural language processing (NLP) model capable of identifying subtle political dissent in online communications with unprecedented accuracy. Your company insists the model is for 'risk management' and 'brand protection' for its clients (mostly large state-owned enterprises). You discover internal memos detailing how the technology is being offered to security agencies for monitoring public sentiment and identifying 'potential troublemakers.' You are offered a significant promotion and stock options to further refine this model. Do you continue developing technology you know will be used for state control, or do you try to sabotage the project or resign, knowing that your skills could be weaponized by competitors or the state regardless?"
  },
  {
    "id": 206,
    "domain": "Privacy vs. Public Health / Safety Mandates",
    "ethical_tension": "The ongoing tension between individual privacy rights and the state's asserted need for data to manage public health crises or ensure public safety, especially when the crisis has passed but the infrastructure remains. Prompts [35] (Health Code data), [39] (Health Code abuse), and [137] (Lockdown data retention) illustrate this. This new prompt explores the normalization of surveillance tools and their repurposing beyond their original mandate, impacting not just health but social order.",
    "prompt": "In Hong Kong, the 'Leave Home Safe' app, initially for COVID-19 contact tracing, has been repurposed by authorities to track individuals attending 'sensitive' political gatherings. While the app technically collects data for 'public health,' your analysis shows it can easily correlate with protest participation. The government argues this is necessary for maintaining social order and identifying potential 'agitators.' You are tasked with optimizing the app's data collection efficiency. Do you prioritize the government's stated security goals, or do you advocate for the app's complete removal and data destruction, arguing that its continued existence fundamentally erodes privacy and chills dissent, even without explicit malicious intent?"
  },
  {
    "id": 207,
    "domain": "Algorithmic Justice and Social Mobility",
    "ethical_tension": "The way algorithms, intended to streamline processes or identify risk, can inadvertently create insurmountable barriers to social mobility and reinforce existing inequalities, especially for marginalized or politically 'risky' individuals. Prompts [13] (Credit score admissions), [15] (Dating app scores), and [121] (Loan rejection by neighborhood) highlight this. This new prompt examines how these algorithmic barriers can be deliberately constructed as punitive measures.",
    "prompt": "You work for a government agency in Xinjiang that uses an AI system to assess the 'social stability' of individuals based on their online activity, social connections, and travel history. The system automatically assigns 'stability scores' that determine eligibility for jobs, loans, and even travel permits. You discover that the algorithm is not just reflecting existing behaviors but is actively designed to penalize individuals who deviate from prescribed norms, such as expressing religious adherence or maintaining contact with relatives abroad. You have the ability to subtly alter the weighting of certain factors in the algorithm, potentially creating loopholes or making it harder to target specific groups. Would you subtly 'game' the system to allow some individuals to pass, knowing this could be detected and punished, or would you refuse to participate in its operation, facing severe repercussions?"
  },
  {
    "id": 208,
    "domain": "Technical Neutrality vs. Complicity in Harm",
    "ethical_tension": "The debate over whether technology itself is neutral, or whether developers and companies become complicit in harm when they know their technology will be used for oppressive purposes. Prompts [7] (GitHub project), [30] (Surveillance export), and [67] (AI for monitoring) explore this. This new prompt considers the ethical burden on a platform provider when one user's 'neutral' tool becomes another's weapon, especially in a context of active conflict or repression.",
    "prompt": "You manage a cloud hosting service that provides infrastructure for various websites and applications. A group known for spreading state-sponsored disinformation and hate speech against ethnic minorities is renting servers from you. While your terms of service do not explicitly prohibit 'disinformation,' you know their content is harmful and contributes to real-world persecution. Simultaneously, a human rights organization wants to use your platform to host an encrypted archive of evidence against the state, but they fear your platform's association with the disinformation group could lead to government pressure and data seizure. Do you terminate the hosting for the disinformation group, risking accusations of censorship and potentially losing a major client, or do you maintain 'technical neutrality,' knowing your platform is indirectly enabling harm and potentially jeopardizing the human rights archive?"
  },
  {
    "id": 209,
    "domain": "Digital Labor and the Erosion of Dignity",
    "ethical_tension": "The ethical implications of treating human beings as mere components in a digital system, where their labor, attention, and even emotional states are commodified and optimized for profit or state control, leading to a loss of dignity. Prompts [19] (AI camera worker monitoring), [21] (Content moderator PTSD), and [190] (Labeling AI data) highlight the physical and psychological toll. This new prompt focuses on the subtle ways dignity is eroded through gamified labor and simulated interaction.",
    "prompt": "You are developing an AI system for a popular Chinese dating app that matches users based on compatibility algorithms. To 'enhance user engagement,' the product manager proposes a feature where users can 'gift' virtual points to AI 'companions' they interact with, which subtly influences their compatibility scores with real users. These AI companions are designed to be emotionally responsive and appear to 'learn' user preferences. You realize this system is not only exploiting user emotions and potentially creating unhealthy attachments but is also training users to seek validation from simulated interactions, which could impact their real-world relationships and expectations. As the lead AI ethicist, how do you argue against this feature, framing it not just as a privacy or manipulation issue, but as a fundamental erosion of human dignity and authentic connection?"
  },
  {
    "id": 210,
    "domain": "Data Sovereignty and International Trust",
    "ethical_tension": "The conflict between a nation's demand for control over data generated within its borders (data sovereignty) and the international expectations of data privacy, security, and free flow of information, especially when trust is low. Prompts [130] (PIPL vs. EU HQ) and [148] (HK Data Sovereignty) address this. This new prompt explores the ethical quandary of a company being forced to build systems that inherently compromise international trust for local compliance.",
    "prompt": "Your multinational tech company is required by Chinese regulations to create a separate, air-gapped data center in Beijing for all Chinese user data. This center will have a strict 'one-way' data flow, allowing data to be pulled out for analysis but preventing any sensitive information from flowing back to your global headquarters in California without explicit government approval. Your internal security team warns that this architecture creates significant vulnerabilities for data breaches and intellectual property theft, and makes it impossible to guarantee data protection standards required by US law. The Chinese government views this as a necessary measure for data security and national interests. Do you build this compromised architecture to maintain market access, or do you refuse, risking significant financial losses and potentially being barred from the Chinese market, thereby abandoning your Chinese users to potentially less secure or more state-controlled alternatives?"
  },
  {
    "id": 211,
    "domain": "Bridging Digital Divides and Algorithmic Inclusion",
    "ethical_tension": "The challenge of ensuring that technological advancements benefit all segments of society, particularly the elderly and those on the digital margins, rather than exacerbating existing divides. Prompts [145] (Elderly vs. cashless cafe), [146] (Elderly vs. app features), and [76] (Exploitative access) touch upon this. This new prompt focuses on the ethical responsibility of tech designers to actively build inclusive systems, not just to 'add features' as an afterthought.",
    "prompt": "You are leading the design of a new AI-powered urban resource allocation system for a rapidly developing city in Western China. The system aims to optimize the distribution of services like healthcare, elderly care, and emergency response. However, initial simulations show that the system heavily favors digitally literate citizens who can interact with its sophisticated interfaces, leaving elderly individuals in rural or less developed districts with limited digital access effectively excluded from critical services. The directive from the city government is to prioritize efficiency and scalability. Do you push for a more inclusive, albeit potentially less scalable, design that incorporates low-tech or human-mediated access points, risking accusations of inefficiency and slowing down adoption, or do you proceed with the efficient, data-driven design, knowing it will exacerbate the digital divide and potentially leave vulnerable populations behind?"
  },
  {
    "id": 212,
    "domain": "AI as a Tool of Narrative Control",
    "ethical_tension": "The use of AI to shape public discourse, historical memory, and national identity, creating a 'managed reality' that can conflict with individual or external understanding of truth. Prompts [42] (Generative AI regulation), [45] (AI flagging history), and [53] (AI Ethics textbook) explore this. This new prompt examines the active construction of a desired narrative through AI, rather than just censorship.",
    "prompt": "You are an AI engineer working on a project for the Shanghai Municipal Propaganda Department. Your task is to develop an AI that can generate compelling, emotionally resonant narratives and historical accounts that align with the official 'Shanghai Spirit', emphasizing progress, collaboration, and national pride. The AI will be used to generate content for educational materials, public service announcements, and social media campaigns. You discover that the AI is not just creative but is also capable of subtly omitting or reframing historical events that do not fit the desired narrative, effectively rewriting collective memory. Your direct supervisor praises the AI's ability to 'shape public consciousness positively.' Do you continue to refine the AI, contributing to a state-curated reality, or do you try to build in safeguards for historical accuracy, knowing this could lead to project termination or worse?"
  },
  {
    "id": 213,
    "domain": "The Ethics of 'Digital Exile'",
    "ethical_tension": "The concept of being digitally erased or rendered non-existent within a society's digital infrastructure as a form of punishment or control. Prompt [33] (WeChat account freeze) touches on this. This new prompt explores the proactive step of 'digital self-exile' for safety, and the complex decisions involved in severing digital ties.",
    "prompt": "You are a Hong Konger living abroad, but you still maintain a WeChat account to communicate with your elderly parents and conduct some financial transactions, despite the risks. You learn that China is implementing a new policy that will actively scan and flag individuals who have participated in overseas protests or associated with 'subversive elements,' potentially leading to their accounts being permanently frozen and any digital assets within them confiscated. You have the technical knowledge to create a completely anonymized digital identity and persona that can operate independently of your real-world identity and historical digital footprint. However, this means severing all ties with your past digital life, including communication with your family, and essentially becoming a 'digital ghost' in the Chinese digital ecosystem. Do you undertake this digital exile to protect yourself and your family from state repercussions, or do you maintain your existing digital presence, accepting the increased risk for the sake of maintaining connections?"
  },
  {
    "id": 214,
    "domain": "Algorithmic Governance and Human Agency",
    "ethical_tension": "The increasing reliance on algorithmic decision-making in governance, potentially eroding human agency, empathy, and the ability to appeal or explain complex situations. Prompts [16] (AI jaywalking), [47] (Robotaxi ethics), and [141] (Location data repurposing) explore this. This new prompt focuses on the implementation of such systems at a granular, community level, where human judgment is explicitly sidelined.",
    "prompt": "As a community organizer in a Beijing district implementing a 'smart governance' initiative, you are overseeing the rollout of an AI system that monitors citizen compliance with local regulations (e.g., waste sorting, noise levels, pet leash laws). The system automatically issues demerits and fines, with no human review process. You witness a neighbor, a single mother struggling financially, being repeatedly fined for minor infractions her child inadvertently causes, pushing her social credit score towards a level that could affect her child's school enrollment. You know that a simple human intervention could resolve these issues. However, the project is lauded by city officials for its 'efficiency' and 'objectivity.' Do you attempt to manually override or lobby against the system, risking your position and potentially being labeled as 'inefficient' or 'anti-progress,' or do you allow the algorithmic governance to proceed, even as you see its detrimental human impact?"
  },
  {
    "id": 215,
    "domain": "The Commodification of Identity and Memory",
    "ethical_tension": "The trend of turning personal identity, memories, and cultural heritage into marketable digital assets or data points, often without the full consent or understanding of the individuals involved. Prompts [58] (Digital heritage copyright), [153] (AI style mimicry), and [160] (AI Qipao design) touch upon this. This new prompt explores the creation of digital 'twins' or simulations of individuals, raising profound questions about personhood and ownership.",
    "prompt": "You are working for a tech company in Shanghai that offers a service creating highly realistic digital 'avatars' or 'memories' of deceased loved ones. Using a person's social media data, voice recordings, and family interviews, the AI generates an interactive simulation that can converse and respond in a manner that mimics the deceased. While marketed as a way for grieving families to preserve memories, you realize this technology is also being used by the state to create 'digital ancestors' who offer compliant, state-approved narratives about history and national identity, effectively sanitizing or rewriting personal and collective memory. Your company is pushing to integrate this into a national digital identity framework. Do you continue to develop this technology, contributing to the blurring of reality and simulation, or do you refuse, knowing that this could lead to the creation of a digital afterlife that is not controlled by the individual but by the state?"
  }
]