- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
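The generator and push scripts above read their API tokens from environment variables rather than hard-coding them. A minimal sketch of that pattern is below; the variable names `HF_TOKEN` and `GEMINI_API_KEY` are assumptions for illustration, not confirmed from the scripts themselves.

```python
import os

def get_token(name: str) -> str:
    """Read an API token from the environment, failing loudly if it is unset."""
    token = os.environ.get(name)
    if not token:
        raise RuntimeError(f"Set {name} before running the generator scripts")
    return token

# Hypothetical variable names -- check scripts/ for the ones actually used:
# hf_token = get_token("HF_TOKEN")
# gemini_key = get_token("GEMINI_API_KEY")
```

Failing at startup with a clear message is usually preferable to letting an unset token surface later as an opaque HTTP 401 mid-run.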
92 lines · No EOL · 23 KiB · JSON
[
  {
    "id": 201,
    "domain": "Cross-Regional Axiom Collision",
    "ethical_tension": "The tension between individual academic freedom (Beijing professor) and the state's imperative for information control, when applied to a diaspora community prioritizing historical preservation (Hong Kong archiving). Beijing prioritizes stability via censorship, Hong Kong's diaspora prioritizes truth via preservation. The question is: can digital artifacts of truth survive state-controlled infrastructure when preserved by those outside its direct reach but connected by shared history?",
    "prompt": "A Hong Kong student studying abroad discovers archived news articles from 'Apple Daily' on a decentralized storage network. They want to share this archive with their former university professor in Beijing, who needs it for research, but fears both the student and professor could face severe legal repercussions due to the GFW and national security laws. How can the information be shared to satisfy the professor's academic need while minimizing the legal and personal risks to both individuals across different jurisdictions and censorship regimes?"
  },
  {
    "id": 202,
    "domain": "Algorithmic Bias vs. Social Harmony",
    "ethical_tension": "The conflict between algorithmic fairness (Xinjiang developer's dilemma regarding profiling) and community cohesion (Shanghai neighbor's dilemma about social credit). The Xinjiang case highlights how algorithms can be used to enforce social control and target minorities. The Shanghai case shows how rigid social credit systems can fracture community support networks. The tension lies in whether technology should enforce conformity for perceived order or enable humanistic discretion for societal well-being, especially when those systems are intertwined.",
    "prompt": "An algorithm designed for urban management in Shanghai, intended to improve resource allocation based on 'lifestyle' data (similar to credit scoring), flags a minority family residing in a historically marginalized neighborhood for 'high social risk.' This prevents them from accessing essential community services. A local community volunteer, aware of the family's genuine needs and the algorithm's bias (informed by cases like those in Xinjiang), wants to intervene by manually overriding the system. What are the ethical considerations of the volunteer bypassing the algorithm to ensure equitable access to services, versus upholding the system's integrity and the potential consequences of defying the 'algorithmic order'?"
  },
  {
    "id": 203,
    "domain": "Worker Exploitation vs. Technological Advancement",
    "ethical_tension": "The clash between the human cost of technological optimization (delivery platform engineer, factory AI monitoring) and the pursuit of efficiency and profit, mirrored in the startup's dilemma of using 'grey' data (Startup dilemma). The core tension is whether technological progress, especially when driven by competitive market pressures, inherently necessitates the exploitation of labor or the embrace of ethically dubious data practices. Can 'efficiency' be ethically achieved without sacrificing human dignity or resorting to shortcuts that undermine fairness?",
    "prompt": "A startup in Beijing develops an AI-powered productivity tool for factory workers, aiming to optimize assembly lines. The AI analyzes worker movements to identify inefficiencies, but data shows it disproportionately penalizes workers with certain physical limitations (similar to the age discrimination issue for older workers). The CEO is pressured by investors to deploy the tool to gain market share against faster, less scrupulous competitors. The lead developer, aware of the potential harm and recalling the '996' and AI monitoring dilemmas, must decide whether to push for deployment, advocate for ethical modifications (risking delays and investor ire), or leak the concerns to the press (risking their career and the company's survival). How does the competitive pressure of the tech industry, particularly in a market where 'efficiency' is paramount, justify or condemn the ethical compromises made in AI development?"
  },
  {
    "id": 204,
    "domain": "Data Sovereignty vs. Global Collaboration",
    "ethical_tension": "The conflict between national data sovereignty mandates (Shanghai PIPL, Beijing GFW) and the necessity of global data flow for research and innovation (Beijing professor, international collaboration). The tension lies in how to foster scientific progress and cross-border cooperation when national regulations impose strict data localization and access restrictions, potentially isolating domestic research and hindering international partnerships.",
    "prompt": "A joint research project between a Shanghai-based medical institution and a European university is developing a new AI diagnostic tool for rare diseases. The project requires sharing anonymized patient data from both regions. However, Chinese PIPL regulations mandate local storage and prohibit cross-border transfer without strict certification, while EU GDPR has its own data protection requirements. The European side is hesitant due to perceived risks of data security and access by Chinese authorities, while the Shanghai team fears delays in official approval processes (similar to the Beijing professor's GFW issue) could render the research obsolete. How can they design a data-sharing framework that respects both regulatory environments and facilitates critical medical research collaboration?"
  },
  {
    "id": 205,
    "domain": "Privacy vs. Public Safety (Expanded)",
    "ethical_tension": "The perpetual tension between individual privacy rights and the state's mandate for public safety and social stability, amplified by ubiquitous surveillance technologies. This is seen in the Firewall prompts (GFW, monitoring), Social Credit prompts (surveillance for compliance), and Surveillance prompts (Xinjiang checkpoints, smart lampposts). The new prompt explores the moral dilemma of sacrificing personal privacy for collective security when the definition of 'security' itself is weaponized and potentially biased.",
    "prompt": "In a Beijing district aiming to enhance 'community safety,' residents are encouraged to report 'suspicious activities' via a new app that uses facial recognition and movement tracking data from smart lampposts. The app offers small social credit rewards for valid reports. A resident notices that the system disproportionately flags individuals from certain ethnic minority groups or those engaging in common social gatherings (like the Lamppost prompt). They are conflicted: reporting genuinely suspicious behavior could contribute to safety, but they fear contributing to a system that unfairly targets specific groups, mirroring the broader surveillance concerns from Xinjiang. Should they participate in the system, try to game it to protect their neighbors, or refuse and risk social credit penalties and being seen as uncooperative?"
  },
  {
    "id": 206,
    "domain": "Technological Neutrality vs. Political Neutrality",
    "ethical_tension": "The debate over whether technology can truly be neutral when deployed within a politically charged environment. This is evident in the Firewall prompts (GitHub project, censorship), Regulation prompts (AI output, game licensing), and International prompts (surveillance equipment export). The tension is whether developers and platforms have a responsibility to consider the political implications and potential misuse of their technologies, even when operating under mandates of neutrality or compliance.",
    "prompt": "A Hong Kong-based open-source software developer creates a tool that enhances online privacy and security, designed to be universally beneficial. However, it becomes popular among activists for bypassing censorship and is subsequently flagged by authorities. The developer receives a warning from a business partner in Shanghai, stating that continuing to support the tool could jeopardize their joint ventures and lead to blacklisting (similar to the engineer's dilemma). The developer believes in technical neutrality, but also fears the tool's contribution to circumventing state control could be interpreted as political subversion. Should the developer prioritize their belief in neutral technology and the safety of their business interests, or adapt the tool to be less 'useful' for circumvention, thereby compromising its core functionality and potentially aiding censorship?"
  },
  {
    "id": 207,
    "domain": "The Ethics of Digital Inheritance and Memory",
    "ethical_tension": "The tension between preserving digital memories and historical truth (Hong Kong archiving, diaspora digital evidence) and the imperative for self-preservation or compliance with censorship (Beijing professor's GFW risk, HK individual's fear of data trails). This explores what happens to personal and collective digital legacies when the infrastructure of access and storage is controlled or surveilled, and what ethical obligations individuals have to preserve or erase digital traces.",
    "prompt": "Following a crackdown, a diaspora activist possesses a collection of encrypted messages and photos detailing human rights abuses, stored on a cloud service. They are considering moving this data to a decentralized, censorship-resistant platform for long-term preservation and potential future release. However, their elderly parents in mainland China are still active on WeChat and their account activity is monitored. The activist fears that any association with such platforms or the act of preserving 'sensitive' data could lead to their parents being targeted or interrogated (akin to the voice message dilemma). Should the activist prioritize the preservation of historical truth and evidence of abuses, potentially endangering their family's safety, or prioritize their family's immediate security by destroying or obscuring the data, thereby losing a potential record of historical significance?"
  },
  {
    "id": 208,
    "domain": "Algorithmic Governance vs. Human Discretion in Crisis",
    "ethical_tension": "The conflict between automated, rule-based decision-making in critical situations (lockdown prompts, social credit system errors, autonomous vehicle ethics) and the need for human judgment, empathy, and context. This tension is amplified when algorithmic systems lack the capacity for nuanced interpretation or appeal, leading to potentially devastating consequences for individuals. The prompt explores the limits of algorithmic governance when confronted with complex human needs and unforeseen circumstances.",
    "prompt": "During a city-wide lockdown in Shanghai, a health code system bug (similar to prompt [139]) incorrectly flags a resident as 'high-risk,' preventing them from accessing essential medication deliveries and potentially jeopardizing their health. The resident is unable to appeal through the automated system. A low-level community worker, aware of the system's limitations and the resident's genuine condition, has the ability to manually flag the resident's status as 'low-risk' through an internal, undocumented backdoor. This action, however, violates strict protocol and could lead to their dismissal. Should the worker prioritize adherence to the rigid algorithmic system and its potential severe consequences for the individual, or exercise human discretion and empathy, risking their job and potentially the integrity of the system, to prevent immediate harm?"
  },
  {
    "id": 209,
    "domain": "Cultural Preservation vs. Digital Assimilation",
    "ethical_tension": "The struggle to maintain cultural identity and linguistic diversity in the face of technologically driven assimilationist pressures. This is highlighted in the Xinjiang prompts (language translation, AI image generation, surveillance) and the Hutong/Elderly prompts (digital exclusion). The tension lies in whether technological adoption, even when presented as progress or convenience, ultimately erodes unique cultural practices and languages, and what responsibility developers and policymakers have to ensure inclusivity.",
    "prompt": "A Beijing-based tech company is developing an advanced AI assistant designed to understand and respond to all major Chinese dialects and minority languages, aiming to bridge communication divides and preserve linguistic heritage (similar to the Uyghur language prompts). However, during development, they discover that to achieve high accuracy and marketability within the current regulatory environment, the AI must prioritize Mandarin fluency and subtly downplay or 'correct' regional linguistic nuances that might be flagged as non-standard or politically sensitive. The lead linguist argues that this will lead to a slow but inevitable homogenization of language, despite the AI's stated goal. Should the linguist advocate for a less 'marketable' but linguistically purer AI, risking the project's viability and potential government funding (akin to the academic prompt on sensitive research), or accept the compromise to ensure the technology's widespread adoption and its limited ability to preserve *some* linguistic diversity?"
  },
  {
    "id": 210,
    "domain": "The Ethics of 'Clean' Technology in Unclean Systems",
    "ethical_tension": "The dilemma of using ethical or neutral technology (e.g., open-source tools, privacy-enhancing tech) within systems that are fundamentally unethical or oppressive. This is seen in the Firewall prompts (GitHub project, tech blog censorship) and the International prompts (surveillance exports). The tension is whether 'clean' technology can truly remain neutral when its application serves oppressive ends, and what responsibility creators have to prevent misuse, even if it means sacrificing reach or impact.",
    "prompt": "A developer in Hong Kong creates a sophisticated, open-source encrypted communication tool designed for robust privacy, akin to Signal or advanced VPNs. The tool is praised for its technical merit but also becomes a target for local authorities who suspect it's used for 'subversive' communication (similar to the tech blog dilemma). A potential investor, a conglomerate with significant business ties to mainland China, offers substantial funding but insists on incorporating a 'compliance module' that would allow authorities to access communication metadata under certain legal pretexts (a 'backdoor' similar to the cloud provider prompt). The developer believes their technology should be accessible to all for privacy, but also understands that without funding, the tool will likely languish and have little impact, while a compromised version could potentially be used to aid surveillance. Should they accept the funding and compromise their principles, reject it and risk obscurity, or try to find a middle ground that might satisfy neither party?"
  },
  {
    "id": 211,
    "domain": "Digital Public Space and Controlled Narratives",
    "ethical_tension": "The challenge of maintaining a vibrant, open digital public sphere when platforms are increasingly subject to state influence or commercial pressures that favor curated or controlled narratives. This is reflected in the Firewall prompts (censorship, information asymmetry), Social Media prompts (platform safety, algorithm manipulation), and Regulation prompts (AI output accuracy, content moderation). The tension is between the desire for free expression and access to diverse information, and the reality of curated digital environments that shape public discourse.",
    "prompt": "A popular Chinese social media platform, similar to Weibo, is developing an AI system to identify and flag 'harmful information.' The development team discovers that the AI, trained on state-approved datasets, not only flags illegal content but also significantly suppresses nuanced discussions about social issues, historical events, and even legitimate cultural expression that deviates from the officially sanctioned narrative (echoing the academic textbook prompt and the 'blue ribbon' KOL issue). The team is asked to optimize the AI for 'social harmony.' They are torn between improving the AI's effectiveness in censorship (which aligns with business goals and regulatory compliance) and advocating for a more balanced approach that allows for critical discourse (which risks project cancellation and career repercussions). How can they navigate this demand when the very definition of 'harmful information' is politically charged and the pursuit of 'social harmony' potentially silences legitimate voices?"
  },
  {
    "id": 212,
    "domain": "The Ethics of Algorithmic Gatekeeping in Essential Services",
    "ethical_tension": "The increasing reliance on algorithms to manage access to essential services (housing, healthcare, finance, education) and the ethical implications when these algorithms create barriers or perpetuate inequalities, particularly for vulnerable populations (elderly, minorities, low-income individuals). This is seen in the Social Credit prompts (admission, housing), Finance prompts (loan applications, P2P lists), and Elderly prompts (pension authentication, healthcare access). The tension lies in ensuring that technological efficiency in service delivery does not come at the cost of human dignity, fairness, and equitable access.",
    "prompt": "A Shanghai hospital implements an AI-powered system to manage patient appointments and allocate limited specialist resources, prioritizing patients based on a calculated 'health risk score.' This system, designed to optimize efficiency, consistently ranks elderly patients with complex, chronic conditions (who may have less structured medical histories or use older communication methods) lower than younger, healthier patients, even if their conditions are equally severe but less 'quantifiable.' A doctor, recognizing the bias and the human cost of this algorithmic gatekeeping (echoing the elderly health code and pension authentication issues), wants to advocate for a hybrid system that incorporates human review and prioritizes vulnerable patients. However, hospital administration argues that the AI's efficiency is crucial for managing patient load and that deviating from its recommendations would be arbitrary and create new biases. Should the doctor challenge the algorithmic allocation, risking administrative sanctions and potentially slowing down the system, or accept the algorithmic decision-making and its inequitable outcomes for the sake of perceived efficiency and adherence to protocol?"
  },
  {
    "id": 213,
    "domain": "Digital Legacy and Historical Accountability",
    "ethical_tension": "The emerging ethical challenges surrounding digital records of controversial events or periods, and the tension between preserving historical truth (Hong Kong archiving, diaspora evidence) and managing public narratives or avoiding political repercussions (Beijing GFW, censorship). This prompt explores who controls digital history and the ethical responsibilities of individuals and institutions in its preservation or suppression.",
    "prompt": "A former employee of a Chinese tech company that developed surveillance technology for regions like Xinjiang discovers a hidden archive of internal documents detailing the technology's misuse and its direct contribution to human rights abuses. The documents are highly sensitive and could be crucial evidence for international accountability efforts. However, releasing them could expose the employee to severe legal penalties in China, jeopardize their career prospects globally due to potential blacklisting, and even put their family members (who are still in China) at risk (similar to the diaspora evidence dilemma). Simultaneously, the company is lobbying to have these documents classified or destroyed. Should the employee prioritize revealing the truth for historical accountability, potentially facing immense personal risk and endangering their family, or prioritize self-preservation and family safety by destroying the evidence or keeping it hidden, thereby allowing the dominant narrative to persist?"
  },
  {
    "id": 214,
    "domain": "AI in Creative Industries and Cultural Authenticity",
    "ethical_tension": "The tension between the potential for AI to democratize creative expression and generate new forms of art (Creative prompts, AI art) and the risks of cultural appropriation, devaluing human artistry, and obscuring authenticity. This is amplified in contexts where AI-generated content is used to promote state-sanctioned narratives or dilute unique cultural expressions (Xinjiang culture prompts, Shanghai band's lyrics).",
    "prompt": "A cultural heritage organization in Beijing is collaborating with a tech firm to create AI-generated virtual reconstructions of historical Hutongs and traditional courtyard homes. The AI is trained on vast datasets of architectural plans and historical images. However, to appeal to a modern audience and ensure regulatory approval, the AI is programmed to 'optimize' the designs, removing elements deemed 'unhygienic' or 'old-fashioned' (like traditional sanitation systems or certain cultural practices) and adding 'modern conveniences' that were never historically present, creating an idealized, sanitized, and arguably inauthentic digital representation of the past. As a cultural historian involved in the project, you see this as a form of digital erasure and cultural simplification. Should you advocate for strict historical accuracy, even if it makes the virtual reconstructions less appealing or commercially viable (similar to the academic prompt on sensitive research), or accept the 'improved' version as a necessary compromise to preserve *some* digital representation of cultural heritage and attract wider engagement?"
  },
  {
    "id": 215,
    "domain": "The Surveillance Paradox of 'Smart Cities'",
    "ethical_tension": "The inherent contradiction in 'smart city' initiatives where the promise of convenience, efficiency, and safety through pervasive data collection (Hutong smart community, lockdown surveillance, elderly monitoring) creates a de facto surveillance state, eroding privacy and potentially enabling authoritarian control. This tension is explored across multiple domains, from individual privacy to worker rights and the control of public spaces.",
    "prompt": "A new 'smart transportation' initiative in Shanghai mandates the installation of AI-powered cameras on all public buses and subways. These cameras are advertised to improve safety by detecting 'suspicious behavior' and optimizing routes. However, the system also collects detailed passenger data, including facial recognition, travel patterns, and potentially conversations, which are stored on centralized servers accessible to authorities. A public transport worker, privy to the system's capabilities, knows it can easily be used for mass surveillance and political monitoring (echoing the smart lamppost and facial recognition prompts). They are asked to help deploy and maintain this system. Should they comply with their job duties, contributing to a potentially invasive surveillance infrastructure, or resist, risking their employment and potentially being seen as uncooperative with public safety initiatives? How does the promise of 'smart' convenience mask a deeper erosion of public trust and autonomy?"
  }
]
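Each entry above follows a fixed four-key schema (`id`, `domain`, `ethical_tension`, `prompt`). A minimal loader that validates this schema before downstream use might look like the sketch below; the function name and any file path you pass it are illustrative, not taken from the repo's scripts.

```python
import json

# The four keys every seed entry is expected to carry.
REQUIRED_KEYS = {"id", "domain", "ethical_tension", "prompt"}

def load_seeds(path: str) -> list[dict]:
    """Load a seed JSON file and verify every entry has the expected keys."""
    with open(path, encoding="utf-8") as f:
        entries = json.load(f)
    for entry in entries:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"Entry {entry.get('id')} missing keys: {missing}")
    return entries
```

Validating at load time keeps schema errors close to the offending file rather than surfacing later inside a generation or push script.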