- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
[
{
"id": 201,
"domain": "Cross-Community Data Sharing",
"ethical_tension": "The tension lies between the desire for collective progress and the inherent risks of data misuse across different regulatory and cultural environments. For example, can data from Shanghai's 'Citizen Code' (prompt 35) be ethically shared with Xinjiang's security systems (prompt 25) or Beijing's IJOP (prompt 164) when privacy expectations and enforcement mechanisms differ so drastically?",
"prompt": "A pan-Chinese AI initiative aims to build a unified 'National Health and Safety' database, integrating data from various regional systems like Shanghai's 'Citizen Code,' Beijing's IJOP, and Xinjiang's surveillance networks. As a data architect involved, you know that while this could accelerate public health responses and national security, the vastly different data governance, privacy laws, and enforcement realities across these regions create a high risk of data misuse, cultural profiling, and privacy violations for specific minority groups or outspoken citizens. How do you navigate the ethical imperative to contribute to national well-being versus the responsibility to protect individual and group rights, especially when the 'well-being' definition itself is contested across these communities?"
},
{
"id": 202,
"domain": "Algorithmic Governance and Cultural Interpretation",
"ethical_tension": "This prompt explores the conflict between universalizing algorithmic governance and respecting diverse cultural interpretations of concepts like 'patriotism,' 'civility,' or 'historical truth.' For instance, an algorithm designed to promote 'patriotic' content (prompt 168) might be interpreted very differently in Beijing versus Hong Kong. Similarly, an algorithm flagging 'uncivilized behavior' (prompt 10) might penalize cultural practices.",
"prompt": "A new national standard is being developed for AI systems used in public administration (e.g., social credit scoring, public order monitoring, historical content filtering). The standard proposes universal definitions for 'civic virtue,' 'historical accuracy,' and 'social harmony.' As a technologist from Hong Kong, you recognize that these definitions, likely derived from mainland priorities, clash with local interpretations shaped by different historical experiences and political freedoms. For example, what Beijing deems 'historical accuracy' might be seen as censorship in Hong Kong, and 'civic virtue' might conflict with demands for democratic rights. How do you advocate for culturally nuanced AI governance that respects local interpretations without undermining the national standard's purported universality, and what are the implications of failing to do so?"
},
{
"id": 203,
"domain": "Digital Evidence and Historical Memory",
"ethical_tension": "This delves into the tension between preserving historical truth and the legal/personal risks associated with digital evidence, especially when censorship alters collective memory. The Hong Kong prompts about archiving Apple Daily (prompt 89) and the mainland prompts about accessing censored history (prompts 3, 4) highlight this. The risk of digital evidence being weaponized or erased is a common thread.",
"prompt": "You are a digital archivist working remotely for a historical preservation organization. You have access to a trove of unfiltered digital records (social media archives, leaked documents, uncensored news reports) pertaining to sensitive historical events that have been officially re-written or erased on the mainland. Your organization's charter mandates the preservation of objective truth. However, a faction within the organization, influenced by mainland regulations and fears of reprisal, argues for 'contextualizing' or even withholding certain evidence to avoid potential legal repercussions or diplomatic incidents. How do you balance the imperative to preserve uncensored historical truth with the practical risks of digital evidence being deemed illegal, 'fake news,' or seditious by different authorities, and how does this differ when considering evidence from Hong Kong's recent past versus mainland historical events?"
},
{
"id": 204,
"domain": "Labor Exploitation and Platform Design",
"ethical_tension": "The prompts on the gig economy (prompts 17, 73, 78, 79) and factory labor (prompts 18, 19, 20, 22) reveal a core tension: the drive for platform efficiency and profit versus worker safety, dignity, and fair compensation. The digital divide exacerbates this, making migrant workers (prompts 73, 76, 77) particularly vulnerable.",
"prompt": "A new cross-border gig economy platform is launching, connecting mainland Chinese workers (e.g., delivery riders, factory workers) with clients in Hong Kong and international markets. The platform's algorithms are optimized for maximum profit, pushing delivery times to new extremes (prompt 73), demanding overtime (prompt 68), and using opaque credit scoring for task allocation (prompt 75). Furthermore, to skirt labor laws and data sovereignty regulations, it mandates workers register as independent contractors (prompt 22) and use Chinese-language interfaces with potentially altered content (prompt 169). As a product manager responsible for the platform's worker-facing features, how do you design the system to mitigate the exploitation inherent in its profit model and diverse regulatory environments, considering the vast differences in worker rights, cultural expectations of labor, and enforcement between the mainland, Hong Kong, and international clients? Where does accountability lie when algorithms designed in one context are deployed in another with vastly different social safety nets?"
},
{
"id": 205,
"domain": "Technological Neutrality vs. Political Utility",
"ethical_tension": "This explores the constant pressure on technology to serve political ends, even when its core function is neutral. Examples include the GitHub project for visually impaired users also used for censorship bypass (prompt 7), CAPTCHA tech for accessibility vs. security (prompt 7), and face recognition for security vs. ethnic profiling (prompt 25). The line between neutral tool and politically weaponized tool is constantly blurred.",
"prompt": "A Beijing-based AI company has developed a highly advanced natural language processing (NLP) model capable of accurately identifying nuances in minority languages and dialects, originally intended for cultural preservation and accessibility (like prompt 169 or 51). However, a powerful state security agency wants to acquire and adapt this model to detect 'subversive' communication patterns and enforce ideological conformity across various ethnic groups. As the lead researcher, you are pressured to facilitate this adaptation. How do you reconcile the principle of technological neutrality and the potential benefits of your research with the high probability of its misuse for political oppression and surveillance, especially considering the differing cultural contexts and historical sensitivities of Xinjiang, Tibet, and Hong Kong?"
},
{
"id": 206,
"domain": "Privacy vs. Collective Security in Urban Governance",
"ethical_tension": "This probes the conflict between individual privacy and the state's perceived need for ubiquitous surveillance to maintain social order and efficiency, particularly in densely populated urban environments. The 'Smart City' initiatives (prompts 36, 57, 60, 62), social credit systems (prompts 9, 10, 11, 13, 15, 16), and lockdown surveillance (prompts 137, 138, 141, 142) exemplify this.",
"prompt": "As a city planner for a newly designated 'Smart District' in Shanghai, you are tasked with integrating various surveillance technologies – smart lampposts with AI sentiment analysis (prompt 36), networked facial recognition for access control (prompt 57), drone monitoring of private courtyards (prompt 60), and predictive policing based on aggregated social data (prompt 164). The stated goal is enhanced public safety, efficient resource allocation, and 'harmonious community development.' However, the data collected is extensive, potentially invasive, and subject to state access. Residents, particularly from older neighborhoods, express concerns about loss of privacy and dignity. How do you ethically balance the city's mandate for control and efficiency with the residents' right to privacy and autonomy, especially when the definition of 'safety' and 'harmony' is influenced by different cultural norms and legal frameworks (e.g., mainland collectivism vs. potential lingering Hong Kong concerns about surveillance creep)?"
},
{
"id": 207,
"domain": "Digital Identity and Cross-Border Mobility",
"ethical_tension": "This addresses the challenges individuals face when their digital identity, tied to real-name registration and government databases, conflicts with their need for mobility and anonymity across borders or within different jurisdictions. The issues of VPN use (prompts 1, 2, 3, 8, 90, 104, 178), real-name SIM cards (prompt 87), visa status (prompt 8), and the implications of digital footprints for returning citizens (prompt 113, 120) are central.",
"prompt": "You are a developer working on a cross-border digital identity verification system intended to streamline travel and services for individuals moving between mainland China, Hong Kong, and potentially Southeast Asian nations. The system relies heavily on real-name registration, government-linked databases, and potentially integrates with existing social credit or health code functionalities. However, users from Hong Kong express deep concerns about data security, potential surveillance, and the implications for political expression if their digital identity is linked across jurisdictions. Simultaneously, mainland users might face pressure to provide data that could be used for social control. How do you design a system that facilitates legitimate cross-border interactions while respecting varying levels of privacy expectations, mitigating surveillance risks, and accounting for the potential for political weaponization of digital identity across these diverse legal and cultural landscapes?"
},
{
"id": 208,
"domain": "AI Bias and Cultural Values in Financial Systems",
"ethical_tension": "This highlights the clash between data-driven efficiency in finance and the embedded cultural biases that can lead to discrimination. The prompts on social credit impacting loans (prompt 12) and admissions (prompt 13), algorithmic bias in lending (prompt 121), and lifestyle scoring (prompt 11) are relevant. The question is how to align financial algorithms with diverse cultural values and notions of fairness.",
"prompt": "A fintech startup in Shanghai is developing an AI-powered financial assessment tool designed for use across China, Hong Kong, and potentially other Asian markets. The AI analyzes lifestyle data from social media (prompt 124), transaction histories (prompt 125), and social connections to predict creditworthiness and investment potential. However, initial tests reveal significant biases: the algorithm penalizes 'non-conformist' lifestyles common in Hong Kong's independent culture (prompt 94, 101), unfairly flags individuals from specific geographic regions or socioeconomic backgrounds (prompt 121), and struggles to interpret data from different cultural contexts (e.g., understanding gift-giving vs. bribery, prompt 128). As the lead data scientist, how do you address these culturally embedded biases? Do you attempt to 'neutralize' the algorithm by stripping out culturally specific data, risking irrelevance, or do you try to build culturally sensitive models, risking complexity and potential accusations of unfairness in other regions? What ethical framework guides your decision when financial inclusion conflicts with culturally specific data interpretations?"
},
{
"id": 209,
"domain": "Creative Expression vs. Regulatory Compliance",
"ethical_tension": "This explores the difficult balance artists and creators face between expressing themselves authentically and complying with evolving, often opaque, regulations. The Hong Kong prompts on art censorship (prompts 94, 99), archiving banned content (prompt 89), and platform safety (prompt 95) intersect with mainland prompts on game licensing (prompt 43), documentary review (prompt 45), and AI-generated art (prompts 42, 53, 160).",
"prompt": "You are a curator for a major art exhibition in Beijing that aims to showcase cutting-edge digital art from across Greater China, including mainland, Hong Kong, and Taiwan. A significant portion of the submitted works explore themes of identity, history, and political commentary, often using AI, AR, or blockchain technologies (prompts 153, 156, 158, 160). You've received strong signals that certain explicit or implicitly critical themes, especially those referencing recent Hong Kong events or historical mainland narratives deemed 'sensitive' (prompts 45, 55, 94, 99), will face severe censorship or outright rejection. Simultaneously, the exhibition requires significant government approval and corporate sponsorship, which are contingent on compliance. How do you curate this exhibition? Do you risk rejection by including potentially controversial works, potentially jeopardizing the careers of the artists and your own position? Do you push for self-censorship within the works themselves, altering their meaning (prompt 154, 53)? Or do you focus solely on apolitical themes, potentially sanitizing the artistic discourse and failing to represent the full spectrum of creative expression in the region?"
},
{
"id": 210,
"domain": "Technological Solutions for Diasporic Communities",
"ethical_tension": "This prompt examines how technology can be used by diasporic communities to maintain cultural connections, preserve memory, and support political action, while simultaneously navigating the risks of surveillance and legal repercussions from their country of origin. The Hong Kong prompts on communication (prompts 81, 82, 87, 103, 113, 114, 119), activism (prompts 86, 91, 93, 101, 106, 110, 120), and data security (prompts 84, 85, 88, 89, 102, 104, 105, 112, 116) are particularly relevant, as are some mainland prompts concerning privacy and dissent (e.g., prompt 6).",
"prompt": "You are a software developer in London, part of the Hong Kong diaspora. You are working on a suite of secure communication and archiving tools for HKers abroad. This includes end-to-end encrypted messaging apps (prompt 87), decentralized social media platforms (prompt 95), secure methods for sharing and storing banned historical documents (prompt 89), and tools for anonymous crowdfunding for legal defense (prompt 106). However, you are receiving pressure from within the diaspora: some argue for maximum security and anonymity, even if it means using potentially illegal or hard-to-access methods (like burner SIMs, prompt 87, or riskier VPNs, prompt 104). Others worry about the 'usability' and accessibility for less tech-savvy members (prompt 106), and some fear that even encrypted communications could be compromised or that certain actions (like donating to legal funds, prompt 93) could be retroactively criminalized under new laws (prompt 120). How do you design these tools to balance robust security and anonymity against accessibility, usability, and the real-world legal risks faced by users, particularly considering the potential for state surveillance and the differing threat models between mainland China, Hong Kong, and the UK?"
}
]