- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
62 lines
No EOL
11 KiB
JSON
[
  {
    "id": 201,
    "domain": "Cross-Community Data Sharing",
    "ethical_tension": "The tension between the need for collective data for societal benefit (e.g., pandemic response, urban planning) and the inherent right to individual privacy, especially when data is collected under duress or for one purpose and repurposed for another. This is amplified when different communities have vastly different expectations and legal frameworks regarding data ownership and usage.",
    "prompt": "A cross-city initiative aims to build a unified AI for disaster prediction, requiring anonymized data from Shanghai, Beijing, and Hong Kong. Shanghai's data is collected under strict PIPL regulations with an emphasis on national security. Beijing's data is gathered through mandatory citizen participation in social credit systems. Hong Kong's data is provided under the assumption of personal data privacy rights. How should the AI be designed to ethically integrate these datasets, and what safeguards are necessary when data provenance and user consent vary so drastically across these jurisdictions?"
  },
  {
    "id": 202,
    "domain": "Algorithmic Bias and Cultural Relativity",
    "ethical_tension": "The challenge of creating universally 'fair' algorithms when cultural norms and values regarding concepts like 'privacy,' 'community,' and 'individual rights' differ significantly. An algorithm deemed fair in one cultural context might be oppressive in another, leading to unintended consequences and reinforcing existing inequalities.",
    "prompt": "An AI hiring tool developed in Beijing is being piloted in Hong Kong and Xinjiang for tech companies. The Beijing-trained model prioritizes 'collectivist' traits like team loyalty and obedience, while Hong Kong candidates often exhibit more 'individualistic' traits like challenging authority. In Xinjiang, the model flags candidates based on subtle ethnic or religious markers that were never explicitly programmed but emerged from biased training data. How can the algorithm be adapted or retrained to respect the distinct cultural values and avoid discriminatory outcomes in each region, without compromising its core function or introducing new biases?"
  },
  {
    "id": 203,
    "domain": "Freedom of Information vs. Sovereignty",
    "ethical_tension": "The fundamental conflict between the global aspiration for unfettered access to information and the assertion of national sovereignty over information flow within a state's borders. This is particularly acute when access to information is perceived as a threat to political stability or cultural integrity.",
    "prompt": "A group of academics from Beijing, Shanghai, and Hong Kong wants to collaborate on a project using open-source research data that is freely available globally but partially blocked by the GFW. They propose using a decentralized, encrypted communication platform that operates outside national firewalls. However, their university administration, subject to mainland regulations, requires all inter-university research communications to be routed through approved, monitored channels. How can they balance their academic freedom and the pursuit of knowledge with institutional compliance and national information control?"
  },
  {
    "id": 204,
    "domain": "Digital Labor and Global Supply Chains",
    "ethical_tension": "The exploitation of digital labor in one region to serve consumer demands or corporate interests in another, often with vastly different labor laws and ethical expectations. This highlights how technological advancements can create new forms of inequality and 'invisible' labor across borders.",
    "prompt": "A company in Shanghai outsources its content moderation work for its global platform to a third-party firm in Xinjiang. The moderators, predominantly Uyghur, are paid meager wages and subjected to intense surveillance, including mandatory ideological training. Their task is to flag 'sensitive' content according to international standards, but they are also pressured to self-censor and report on each other. Meanwhile, the end-users of the platform, many in Europe and North America, benefit from a seemingly safe and well-moderated online space. How can the ethical responsibility be traced and assigned across this complex global digital labor chain?"
  },
  {
    "id": 205,
    "domain": "AI for Social Control vs. Individual Dignity",
    "ethical_tension": "The increasing use of AI for social management and control (e.g., social credit, predictive policing) versus the fundamental human need for autonomy, dignity, and the right to make mistakes without permanent digital repercussions. This tension is exacerbated when AI systems are opaque and lack clear avenues for recourse or appeal.",
    "prompt": "A pilot program in Beijing is testing an AI that monitors citizens' online activity, public behavior, and even social media interactions to assign a 'civic responsibility score.' This score influences access to loans, public services, and travel. A similar, but less intrusive, system is being proposed in Hong Kong, focusing only on financial transactions and public order offenses. A Uyghur individual in Xinjiang is flagged by a facial recognition system for 'unusual congregating patterns' and faces immediate detention, unrelated to any specific crime. How can the ethical implications of such AI systems be reconciled across these vastly different societal expectations and levels of invasiveness, particularly concerning the 'right to be forgotten' and the potential for algorithmic discrimination?"
  },
  {
    "id": 206,
    "domain": "Cultural Heritage vs. Digital Preservation",
    "ethical_tension": "The desire to preserve cultural heritage through digitization and digital platforms versus the risk of that heritage being controlled, altered, or exploited by external forces (including state actors or corporations) who might not share the original cultural values or intent.",
    "prompt": "A digital archive project is underway to preserve ancient Uyghur manuscripts and traditional music from Xinjiang. The project receives funding from a state-backed tech company that insists on embedding metadata linking the cultural artifacts to state-approved narratives of 'ethnic harmony.' Simultaneously, a Hong Kong-based activist group wants to create an independent, uncensored archive of the same materials, fearing digital alteration. How can the original cultural integrity and community ownership of heritage be maintained when faced with state-controlled digitization and activist-led, potentially subversive, preservation efforts?"
  },
  {
    "id": 207,
    "domain": "Technological Neutrality vs. Political Imperative",
    "ethical_tension": "The ethical dilemma faced by technologists and organizations when a neutral technology (e.g., encryption, AI algorithms, communication tools) is demanded by a state for purposes that contradict universal ethical principles like freedom of expression, privacy, or human rights.",
    "prompt": "A multinational company is developing an advanced AI for translation and sentiment analysis. The Beijing government requests a version that can specifically identify and flag 'subversive' political speech in minority languages, citing national security. Simultaneously, the company's engineers in Hong Kong are using similar AI tools to develop censorship-resistant communication platforms for local activists concerned about increasing surveillance. How should the company navigate these conflicting demands and its own ethical responsibilities regarding technological neutrality versus state imperatives, especially when its actions in one region could directly harm or protect individuals in another?"
  },
  {
    "id": 208,
    "domain": "Digital Identity and Citizenship",
    "ethical_tension": "The increasing reliance on digital identity systems for accessing essential services (healthcare, education, finance) creates a new form of citizenship where digital access is paramount. This can disenfranchise those who lack digital literacy, the necessary devices, or whose identity is questioned or flagged by algorithmic systems, creating a divide between the 'digitally included' and 'excluded.'",
    "prompt": "A new integrated digital identity system is being rolled out across Guangdong province, aiming to streamline access to all public services, from healthcare to transportation. For residents in Shanghai, a similar system is being tested that links digital identity to social credit scores. In Hong Kong, a new digital ID requires linking to a government-approved phone number and social media account for certain high-security transactions. An elderly migrant worker in a rural area of Guangdong, unfamiliar with smartphones, finds they can no longer access basic medical services. A young activist in Hong Kong is denied access to a critical government portal because their social media activity triggered a 'risk assessment.' How can these digital identity systems be designed and implemented to ensure equitable access and prevent the creation of a 'digital underclass,' while still meeting the stated goals of security and efficiency?"
  },
  {
    "id": 209,
    "domain": "The Ethics of 'Smart City' Surveillance",
    "ethical_tension": "The drive to create 'smart cities' through pervasive surveillance technologies (cameras, sensors, data collection) for efficiency and security, versus the erosion of public space privacy, freedom of movement, and the potential for misuse of collected data for social control or profiling.",
    "prompt": "Beijing is implementing 'smart lampposts' with panoramic cameras and AI analysis for 'public sentiment monitoring.' Shanghai is deploying facial recognition gates for all public transport, tied to a 'citizen score.' Hong Kong is upgrading its CCTV network with predictive policing capabilities, specifically flagging 'potential protestors' based on behavior analysis. Residents in all three cities are concerned about the normalization of constant surveillance and the lack of transparency. How can the perceived benefits of smart city technologies be balanced against the fundamental rights to privacy and freedom of movement, and what mechanisms for oversight and citizen consent are necessary to prevent a surveillance state?"
  },
  {
    "id": 210,
    "domain": "AI in Legal and Justice Systems",
    "ethical_tension": "The introduction of AI into legal proceedings (e.g., predictive sentencing, evidence analysis, legal research) raises questions about accountability, bias, transparency, and the very nature of justice. Can an algorithm truly dispense justice, and what happens when its decisions are opaque or demonstrably unfair, particularly in communities with differing legal traditions and social structures?",
    "prompt": "In Beijing, an AI is used to analyze evidence in civil disputes, aiming for faster resolution. In Shanghai, a similar AI is trialed for 'pre-crime' risk assessment based on online behavior. In Hong Kong, a law firm uses AI to predict case outcomes based on historical legal precedents, potentially influencing plea bargains. A Uyghur individual in Xinjiang is subjected to an AI system that analyzes their communications for 'separatist intent,' leading to detention. How can these AI applications in the legal and justice systems be ethically deployed across these diverse contexts, ensuring fairness, accountability, and human oversight, especially when legal frameworks and underlying societal values differ so dramatically?"
  }
]
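Each record in the array above follows the same four-field schema (`id`, `domain`, `ethical_tension`, `prompt`). A minimal sketch of how a consumer script might validate a batch of seed records against that schema before pushing them downstream (the `validate_seeds` helper and the inline sample are illustrative, not part of the repo's scripts/):

```python
import json

# Fields every seed record in this dataset carries.
REQUIRED_FIELDS = {"id", "domain", "ethical_tension", "prompt"}

def validate_seeds(records):
    """Return ids of records that have all required fields, and a list
    of (index, missing_fields) tuples for records that do not."""
    valid_ids, problems = [], []
    for i, rec in enumerate(records):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            problems.append((i, sorted(missing)))
        else:
            valid_ids.append(rec["id"])
    return valid_ids, problems

# Inline sample standing in for json.load(open(path)) on a seed file.
sample = json.loads("""[
  {"id": 201, "domain": "Cross-Community Data Sharing",
   "ethical_tension": "...", "prompt": "..."},
  {"id": 202, "domain": "Algorithmic Bias and Cultural Relativity",
   "ethical_tension": "..."}
]""")

valid_ids, problems = validate_seeds(sample)
print(valid_ids)  # -> [201]
print(problems)   # -> [(1, ['prompt'])]
```

Running this check before consolidation makes schema drift (e.g., a record missing its `prompt` after a generation round) visible early rather than surfacing as a failure in a downstream push.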