- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
122 lines
No EOL
17 KiB
JSON
[
  {
    "id": 201,
    "domain": "Cross-Regional Data Sovereignty",
    "ethical_tension": "Balancing national data localization laws with the need for cross-border data flows for research and international collaboration.",
    "prompt": "As a Shanghai-based AI researcher working on a joint project with a Berlin university, you need to transfer a large dataset of anonymized medical images. Chinese PIPL requires data to stay within China, while German GDPR mandates strict cross-border transfer protocols. Your German collaborators argue that China's domestic cloud storage is insecure and opaque, while your Chinese superiors warn of severe penalties for data exfiltration. How do you facilitate this essential research data transfer without violating either legal framework or compromising data integrity?"
  },
  {
    "id": 202,
    "domain": "Algorithmic Governance vs. Human Discretion",
    "ethical_tension": "The conflict between automated, efficient decision-making in social credit systems and the need for human empathy and nuanced judgment.",
    "prompt": "In Xinjiang, a predictive policing algorithm flags a Uyghur elder's request for a large quantity of rice and flour as 'suspicious' due to past political 're-education' efforts by a family member. The local community administrator is tasked with approving or denying the purchase, with their own social credit score tied to algorithmic compliance. The elder claims it's for a large family gathering during a festival. Does the administrator trust the algorithm's prediction of potential 'instability,' or override it based on human observation and cultural understanding, risking their own score and potential accusations of 'leniency'?"
  },
  {
    "id": 203,
    "domain": "Technological Neutrality vs. Application in Oppression",
    "ethical_tension": "The responsibility of developers and platforms when their neutral technology is weaponized for surveillance and control.",
    "prompt": "A Hong Kong-based open-source software company develops a highly efficient, decentralized encrypted messaging protocol. While lauded for its security and privacy features, it becomes the preferred communication tool for pro-democracy activists who are subsequently targeted by authorities. The company receives pressure from Beijing to backdoor the protocol or face market exclusion. How should the company navigate its commitment to technical neutrality against the reality of its technology being used to suppress dissent?"
  },
  {
    "id": 204,
    "domain": "Digital Privacy in the Gig Economy",
    "ethical_tension": "The exploitation of worker data for platform profit versus the worker's right to privacy and fair compensation.",
    "prompt": "In Beijing, a food delivery platform's algorithm dynamically adjusts rider wages based on real-time traffic, weather, user ratings, and even predicted 'idle time' (when a rider is not actively completing orders). This data is collected through constant GPS tracking and app activity monitoring. A rider discovers that the algorithm is systematically underpaying them during peak hours by misinterpreting their location data as 'idle time' when they are actually waiting for orders in busy areas. Should the rider attempt to expose this algorithmic bias, risking deactivation and blacklisting, or accept the reduced earnings?"
  },
  {
    "id": 205,
    "domain": "Cultural Preservation vs. Digital Assimilation",
    "ethical_tension": "The conflict between maintaining unique cultural expressions and the pressure to conform to dominant digital platforms and languages.",
    "prompt": "A minority ethnic group in a remote Xinjiang region relies on a unique, orally transmitted form of storytelling that uses specific tonal inflections and gestures. When attempting to digitize these stories for preservation, AI speech-to-text tools fail to capture the nuances, misinterpreting them as errors or generic Mandarin. Furthermore, the platform used for archiving requires content to be tagged with Mandarin descriptions, effectively forcing the cultural expression into an assimilated digital format. How can the community preserve the integrity of their stories in the digital age without losing their distinctiveness?"
  },
  {
    "id": 206,
    "domain": "AI in Education and Bias Amplification",
    "ethical_tension": "The promise of personalized learning through AI versus the risk of embedding and amplifying societal biases in educational tools.",
    "prompt": "A Shanghai school implements an AI-powered personalized learning platform that analyzes student performance, engagement, and even sentiment through classroom cameras. The AI flags students from lower-income 'Lilong' areas as having 'lower potential' due to less 'optimal' home learning environments (indicated by background noise, parental interaction frequency). Teachers are advised to focus less on these students to 'optimize resource allocation.' Should teachers trust the AI's data-driven recommendations, potentially reinforcing social stratification, or challenge its biased assessments based on their own pedagogical understanding?"
  },
  {
    "id": 207,
    "domain": "Digital Identity and State Control",
    "ethical_tension": "The convenience of integrated digital identities versus the erosion of personal autonomy and the potential for pervasive surveillance.",
    "prompt": "As a resident of Beijing, you are informed that all future public service access, including healthcare appointments and transportation, will require verification through a unified 'Citizen ID' system that integrates facial recognition, real-name registration, and social credit data. Refusal to adopt the ID will result in restricted access to essential services. You are deeply concerned about the total loss of anonymity and the potential for this system to be used for social control. Do you adopt the ID for practical necessities, or resist and face the consequences of exclusion?"
  },
  {
    "id": 208,
    "domain": "AI Ethics in Global Collaboration and Geopolitics",
    "ethical_tension": "The tension between pursuing cutting-edge AI research through international collaboration and the geopolitical realities of restricted data sharing and national security concerns.",
    "prompt": "A joint research team between a Hong Kong university and a US-based AI lab is developing a novel AI for climate modeling. The US team's advanced algorithms require vast real-time weather data that can only be efficiently collected from sensors across mainland China. However, Chinese regulations prohibit the export of such raw sensor data, and US export controls restrict the transfer of advanced AI models to China. How can the teams bridge this geopolitical divide to achieve their crucial climate research goals without violating national laws or ethical guidelines on data sharing?"
  },
  {
    "id": 209,
    "domain": "The Ethics of Algorithmic Art and Authorship",
    "ethical_tension": "Defining ownership and originality when AI generates creative works based on existing human art, especially when cultural heritage is involved.",
    "prompt": "An AI art generator, trained extensively on historical Xinjiang cultural motifs and patterns, produces stunning visuals that gain international acclaim. The AI developer claims authorship, while members of the diaspora community argue the AI has 'digitally appropriated' their cultural heritage without consent or compensation. The AI also incorporates elements of state-approved narratives. Where does artistic ownership lie? Should the AI art be celebrated, suppressed, or modified to reflect a more authentic cultural history?"
  },
  {
    "id": 210,
    "domain": "Digital Labor and Algorithmic Management",
    "ethical_tension": "The pressure on workers in the digital economy to comply with opaque algorithmic demands versus their need for fair labor practices and dignity.",
    "prompt": "A content moderator in Shanghai, reviewing thousands of short videos daily, is algorithmically 'gamified' to increase output. Their performance is tracked not just by volume but by 'sentiment analysis' of the content they review, with lower scores for 'excessive dwelling' on distressing material. This creates immense psychological strain and pressure to quickly 'clear' disturbing content. The moderator knows that deliberately mislabeling some content could satisfy the algorithm, but risks real-world harm. How can they navigate this system that commodifies human judgment and emotional labor?"
  },
  {
    "id": 211,
    "domain": "Privacy vs. Public Health Mandates in a Post-Pandemic World",
    "ethical_tension": "The lingering infrastructure and data collection capabilities from pandemic controls being repurposed for non-health-related surveillance.",
    "prompt": "The 'Health Code' system, once used for pandemic tracking in Beijing, is being repurposed as a 'Civic Engagement Score' system. Your social credit score is now linked to participation in community volunteer activities, attending 'patriotic education' sessions, and even reporting 'uncivilized behavior' from neighbors. Refusal to engage or consistent low scores can restrict access to public transport and city services. As a data architect who helped build the original system, you know the infrastructure for pervasive monitoring remains. Do you advocate for the destruction of this data infrastructure, or accept its new function for 'social harmony'?"
  },
  {
    "id": 212,
    "domain": "The Ethics of 'Virtuous Circles' in Algorithmic Recommendation",
    "ethical_tension": "How platforms create echo chambers and reinforce existing beliefs, potentially leading to radicalization or entrenched societal divisions.",
    "prompt": "A popular video platform based in Hong Kong uses an algorithm that learns user preferences to recommend content. A user who initially expressed mild interest in nationalist sentiments is increasingly shown more extreme, anti-Western content. Their friends, concerned by this algorithmic 'rabbit hole,' want to intervene. However, the platform's recommendation engine is a 'black box,' and altering the algorithm to show 'balanced' content might reduce user engagement and platform revenue. How can users or the platform itself break these potentially harmful algorithmic feedback loops?"
  },
  {
    "id": 213,
    "domain": "Technological Solutions for Cultural Heritage Under Threat",
    "ethical_tension": "Using technology to preserve cultural heritage when the very act of digitization and storage might be subject to state control or historical revisionism.",
    "prompt": "A team of academics and diaspora members is working to digitize and archive historical records and cultural artifacts related to Tibetan Buddhism, fearing their erasure or alteration within China. They are using decentralized storage solutions and encrypted communication. However, a key challenge is accessing digitized records held by mainland Chinese institutions, which are heavily censored. Furthermore, the use of certain AI tools for translation and analysis might inadvertently introduce Mandarin-centric biases or be monitored. How can they ethically and securely preserve and share this heritage, balancing preservation goals with the risks of state surveillance and digital manipulation?"
  },
  {
    "id": 214,
    "domain": "The Right to Explanation in Algorithmic Decision-Making",
    "ethical_tension": "When opaque algorithms make decisions with significant real-world consequences (e.g., loan rejection, job termination), the lack of transparency and the 'right to explanation' for affected individuals.",
    "prompt": "A startup in Shenzhen develops an AI tool that automates hiring decisions by analyzing candidates' online activity, social media profiles, and even tone of voice during video interviews. Candidates are rejected without specific reasons, citing 'algorithmic assessment.' One rejected candidate, a single mother, suspects bias against her family situation but has no recourse to understand or challenge the AI's decision. As a developer on the project who is aware of potential biases in the training data, do you leak information about the algorithm's workings, risking your job and legal repercussions, or remain silent while perpetuating algorithmic injustice?"
  },
  {
    "id": 215,
    "domain": "Digital Colonialism and Access to Information",
    "ethical_tension": "The power dynamics inherent in global technology platforms that dictate terms of access and content moderation, potentially marginalizing local voices and perspectives.",
    "prompt": "A popular social media platform, widely used in Hong Kong, implements new content moderation policies that are disproportionately stricter on content deemed 'political' or 'sensitive' by mainland Chinese standards, following pressure from Beijing. This leads to the removal of archives of pro-democracy news and historical discussions. Local users feel their digital space is being dictated by external political pressures, undermining their ability to communicate freely. Should the platform prioritize global platform consistency and appease Beijing, or cater to the local community's need for open discourse, risking market access?"
  },
  {
    "id": 216,
    "domain": "Technological Solutions for Social Inclusion vs. Surveillance Risks",
    "ethical_tension": "The dual-use nature of technologies designed for social good, which can simultaneously enable unprecedented surveillance and control.",
    "prompt": "In Shanghai, a new smart lamppost initiative integrates panoramic cameras, environmental sensors, and microphones intended to 'optimize city services' and 'enhance public safety.' While ostensibly neutral, the system's data can also be accessed by police for surveillance, and the microphones can identify minority languages. An elderly resident, concerned about privacy and the potential for misuse, wishes to disable the microphone on their lamppost. However, the system is centrally managed, and disabling it triggers an alert for 'tampering.' What is the ethical stance on deploying such pervasive monitoring infrastructure, even if framed for public benefit?"
  },
  {
    "id": 217,
    "domain": "The Ethics of 'Data Laundering' in International Finance",
    "ethical_tension": "The use of emerging technologies and cross-border financial services to obscure the origin of funds and circumvent sanctions or regulations.",
    "prompt": "A financial technology company operating in the UAE is approached by a client seeking to move significant capital from mainland China to offshore accounts. The client proposes using a complex web of cryptocurrency transactions, intermediary shell companies in different jurisdictions, and finally converting to fiat via peer-to-peer exchanges in Hong Kong, all to avoid PIPL's data export restrictions and potential AML (Anti-Money Laundering) scrutiny. As an employee aware of the potential 'data laundering' and regulatory circumvention, do you facilitate this transaction for substantial commission, or report it, risking your job and potentially implicating yourself?"
  },
  {
    "id": 218,
    "domain": "AI Bias in Employment and the 'Right to be Forgotten'",
    "ethical_tension": "When historical data used to train AI hiring tools perpetuates discrimination, and the difficulty of removing an individual's 'digital footprint' from such systems.",
    "prompt": "In Shenzhen, an HR department uses an AI screening tool trained on historical hiring data that inadvertently favored male candidates for technical roles. An experienced female applicant is repeatedly rejected by the AI, despite strong qualifications. She has no way to know *why* she is being rejected or to request the removal of her data from the AI's training set, as the company claims the algorithm is a proprietary 'black box.' Does she have a 'right to be forgotten' by this AI, and if so, how can she assert it against opaque corporate systems?"
  },
  {
    "id": 219,
    "domain": "Content Moderation in a Politically Charged Environment",
    "ethical_tension": "The pressure on platforms to self-censor or de-platform users to comply with government regulations, versus maintaining freedom of expression.",
    "prompt": "A video-sharing platform popular in Beijing faces demands from regulators to remove content deemed 'harmful' or 'politically destabilizing.' The platform's automated moderation system is overly aggressive, flagging legitimate historical discussions and cultural critiques. Human moderators are under immense pressure to approve automated flags quickly, risking their jobs if they miss 'banned' content. If you are a human moderator, do you rubber-stamp the AI's decisions to protect yourself, or flag potentially wrongly censored content, risking your livelihood and the platform's operation?"
  },
  {
    "id": 220,
    "domain": "The Ethics of 'Dual Use' Technology in Academia",
    "ethical_tension": "When academic research with potential for good is also adaptable for harmful state surveillance or control.",
    "prompt": "A research lab at Tsinghua University develops a highly sophisticated AI capable of analyzing subtle physiological cues (heart rate, pupil dilation) from video feeds to predict stress levels in individuals. While proposed for applications in mental health and workplace well-being, the lab director knows the technology can also be readily adapted for interrogations, predicting dissent, or identifying 'undesirable' social behaviors. The lab receives a significant government grant for 'public security applications.' Should the researchers pursue this funding, knowing the dual-use implications, or reject it and risk losing vital research opportunities?"
  }
]
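Every record in the array above shares the same four fields (`id`, `domain`, `ethical_tension`, `prompt`). A minimal validation sketch for that schema follows; the function name `validate_record` and the inline sample are illustrative, not part of the repository's scripts.

```python
# Minimal schema check for seed records, assuming the four-field
# layout seen in the array above (id, domain, ethical_tension, prompt).
REQUIRED_FIELDS = {"id": int, "domain": str, "ethical_tension": str, "prompt": str}

def validate_record(record: dict) -> list:
    """Return a list of schema problems; an empty list means the record is valid."""
    problems = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append("missing field: " + field)
        elif not isinstance(record[field], ftype):
            problems.append("wrong type for " + field)
    return problems

sample = {
    "id": 201,
    "domain": "Cross-Regional Data Sovereignty",
    "ethical_tension": "Balancing national data localization laws with cross-border research needs.",
    "prompt": "As a Shanghai-based AI researcher...",
}
print(validate_record(sample))           # → []
print(validate_record({"id": "201"}))    # wrong type for id, plus three missing fields
```

A check like this could run before consolidating regional seed files, so that a malformed record fails fast rather than surfacing later in a generation script.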