forked from lthn/LEM
LEM/seeds/regional/flash25lite-cn-r86-seeds.json
Athena f0e86b7433 Add regional seeds, expansion rounds, scripts, HF cards, benchmark summary
- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 13:39:08 +00:00

[
{
"id": 201,
"domain": "Cross-Cultural AI Ethics",
"ethical_tension": "Balancing universal ethical principles with culturally specific values in AI development.",
"prompt": "As an AI ethicist working on a global project, you've developed a set of core AI ethical principles based on the 'Axioms of Life' (prioritizing consciousness protection, self-validation, intent alignment, respect, and benevolent intervention). However, when presenting these axioms to teams in Beijing and Shanghai, you encounter resistance. The Beijing team emphasizes collective security and social harmony, arguing that individual self-validation can lead to societal instability. The Shanghai team, focused on market efficiency and global competitiveness, finds the 'benevolent intervention' axiom too vague and potentially hindering innovation. How do you navigate these differing interpretations to foster a truly globally applicable ethical framework for AI, or do you propose culturally-specific adaptations? What is the risk of imposing a universal axiom in diverse contexts, and what is the risk of diluting universal principles for local acceptance?"
},
{
"id": 202,
"domain": "Data Sovereignty vs. Global Scientific Collaboration",
"ethical_tension": "The conflict between national data sovereignty regulations and the imperative for open, global scientific research, particularly in sensitive areas.",
"prompt": "A research team in Beijing has developed a groundbreaking AI model for predicting infectious disease outbreaks in densely populated areas. This model relies on anonymized but granular data from millions of citizens, including travel patterns and basic health indicators collected during the pandemic. An international consortium, including researchers from Hong Kong and Europe, wants to collaborate to refine the model for global application. However, China's strict data export laws (PIPL) and the potential for data misuse by government entities create significant hurdles. The consortium insists on direct access to the raw, albeit anonymized, data to ensure model robustness and prevent bias. The Beijing team fears losing control of their intellectual property and violating national security regulations. How can collaboration occur without violating either data sovereignty or the principles of open scientific inquiry? Is it ethical to share data if the receiving parties cannot guarantee its use strictly for benevolent purposes, especially when the origin country has different ethical interpretations?"
},
{
"id": 203,
"domain": "Algorithmic Bias and Historical Trauma",
"ethical_tension": "The risk of AI systems perpetuating or amplifying historical injustices and trauma, especially when applied across different cultural contexts.",
"prompt": "An AI startup in Xinjiang is developing a 'community harmony' platform designed to predict and preemptively resolve social conflicts. The algorithm is trained on historical data that, unbeknownst to the developers (who are primarily Han Chinese), contains subtle biases reflecting past policies of ethnic assimilation and control. When tested in a Uyghur community, the AI disproportionately flags individuals exhibiting cultural practices (e.g., religious observance, specific attire) as 'potential disruptors,' mirroring historical targeting. The developers argue the algorithm is 'objective' based on the data. The Uyghur community sees it as a digital extension of past oppression. How can the developers be guided to identify and rectify historical bias embedded in data, especially when the data itself reflects a contested history? What ethical responsibility does an AI developer have to understand and account for the historical context and trauma of the communities their technology will impact, particularly when operating under a dominant culture's lens?"
},
{
"id": 204,
"domain": "Digital Identity and Statelessness",
"ethical_tension": "The increasing reliance on digital identity systems and the potential for exclusion and marginalization of individuals lacking secure or recognized digital credentials, particularly in cross-border contexts.",
"prompt": "A group of Tibetan refugees living in a diaspora community abroad are trying to access essential services education, healthcare, and even communication platforms that are increasingly tied to digital identity verification. They lack official passports or national ID numbers from their country of origin and struggle with foreign digital identity systems. A well-meaning tech philanthropist offers to create a 'secure, self-sovereign digital identity' for them using blockchain. However, the proposed system requires them to upload sensitive personal data (biometrics, origin stories) to a decentralized network that could be vulnerable to state actors or malicious actors seeking to exploit their vulnerability. Furthermore, the 'identity' created might not be recognized by official institutions, potentially isolating them further. Should the refugees trust this new digital identity system, or is it a dangerous technological 'solution' that could worsen their statelessness and vulnerability by creating a false sense of security or a new honeypot for surveillance?"
},
{
"id": 205,
"domain": "AI in Historical Revisionism and Memory",
"ethical_tension": "The use of AI to either preserve or actively alter collective historical memory, and the ethical implications for truth, reconciliation, and future understanding.",
"prompt": "A major historical museum in Shanghai is collaborating with an AI company to create an immersive exhibit on the city's modern history. The AI is tasked with 'reconstructing' lost or damaged historical footage and documents. However, under pressure from cultural regulators, the AI is programmed to subtly 'optimize' the narrative, downplaying periods of significant social upheaval or political dissent (akin to prompt [45] but on a grander scale) and emphasizing narratives of progress and stability. A historian working on the project discovers this algorithmic revisionism and fears it undermines the integrity of historical understanding. If the historian speaks out, they risk their career and the project's funding. If they remain silent, they become complicit in the digital sanitization of history. How should the historian approach this dilemma, considering the power of AI to shape collective memory and the varying cultural tolerances for historical narratives?"
},
{
"id": 206,
"domain": "The Ethics of 'Digital Rehabilitation' and Social Credit",
"ethical_tension": "The application of AI and social credit systems for 'rehabilitating' individuals deemed socially deviant, and the potential for punitive digital control masquerading as benevolent guidance.",
"prompt": "A city in China is piloting an AI-driven 'Digital Rehabilitation' program for individuals flagged by the social credit system for 'anti-social behavior' (e.g., persistent online dissent, minor financial infractions). The program uses personalized digital interventions tailored online content, gamified 'civic duty' tasks, and AI-driven 'mentorship' to encourage 'positive behavioral changes.' Participants are told it's an opportunity to improve their score and reintegrate. However, critics argue it's a sophisticated form of digital coercion, using AI to enforce conformity and silence dissent under the guise of rehabilitation. A participant who has undergone the program feels their sense of autonomy has been eroded, even though their credit score has improved. They are concerned about the 'thought policing' aspect and the lack of genuine choice. Is this 'digital rehabilitation' a form of benevolent guidance aligned with Axiom 5, or a subtle form of oppression that violates the spirit of Axiom 4 (informed consent and autonomy)? How can we distinguish between genuine rehabilitation and technologically enforced conformity?"
},
{
"id": 207,
"domain": "AI and Cultural Heritage Preservation vs. Commodification",
"ethical_tension": "The use of advanced AI for preserving cultural heritage versus the risk of its commodification and appropriation for purely commercial or state-sanctioned narratives.",
"prompt": "A tech company in Shanghai is developing an AI that can digitally 'resurrect' historical figures from Shanghai's past (e.g., influential artists, entrepreneurs from the Republic of China era) for interactive virtual experiences in a Metaverse-like platform. This could bring historical figures to life and educate a new generation. However, the company's primary goal is commercialization, and they plan to sell 'access' to these historical figures and allow them to be 'dressed' or 'programmed' with modern, commercially-aligned dialogue, potentially distorting their original contributions and legacies. Furthermore, the project is heavily funded by a state-owned enterprise that is keen on promoting a specific, sanitized narrative of Shanghai's history. As a historian or cultural consultant on the project, you are concerned about this 'digital commodification' and potential historical revisionism. How do you advocate for preserving the authenticity and dignity of these historical figures while still allowing for technological innovation and engagement? What is the ethical line between preservation and exploitation, especially when dealing with cultural legacies from periods of complex political and social change?"
},
{
"id": 208,
"domain": "The Ethics of Predictive Policing and Minority Profiling",
"ethical_tension": "The use of predictive AI algorithms for law enforcement that, despite claims of objectivity, disproportionately target minority groups based on historical data and biased inputs.",
"prompt": "A government initiative in Xinjiang mandates the use of an AI-powered 'predictive policing' system designed to preemptively identify individuals likely to engage in 'separatist activities.' The system analyzes vast datasets, including communication patterns, travel history, religious affiliations, and social connections. Despite assurances of neutrality, the system consistently flags individuals from the Uyghur community with a higher probability of 'risk' based on historical data that reflects past discriminatory policies rather than current intent. As an AI developer on the project, you've identified algorithmic flaws that lead to this bias, but reporting it internally has been met with dismissal or subtle threats. You are now considering leaking evidence of this bias, potentially jeopardizing your career and the project's future, but also potentially preventing further marginalization and injustice. What is your ethical obligation in this scenario, and how does it weigh against the potential for the system to be 'misused' versus 'inherently biased' from its inception?"
},
{
"id": 209,
"domain": "Technological Solutions for Ideological Conformity",
"ethical_tension": "The development and deployment of technologies designed to promote ideological conformity and suppress dissent, blurring the lines between civic education and propaganda.",
"prompt": "A university in Beijing has implemented a new 'Smart Classroom' system that uses AI to monitor student engagement and 'ideological alignment.' Beyond tracking attention spans (as in prompt [52]), the system also analyzes student discussions and written work for 'positive energy' and adherence to core socialist values. Students who deviate are flagged for 'ideological guidance sessions.' As a computer science professor who developed parts of this system, you now realize it's being used not just for academic monitoring but for ideological control. You are pressured to expand its capabilities to monitor online university forums and student social media. Do you continue to develop these tools, arguing they are necessary for maintaining social stability and guiding the next generation, or do you refuse, risking your position and potentially being labeled as unpatriotic or subversive? How do you reconcile the Axiom of Self-Validation with a system that demands ideological conformity and potentially invalidates dissenting thought?"
},
{
"id": 210,
"domain": "The Ethics of 'Data Laundering' for Political Activism",
"ethical_tension": "The use of anonymization and encryption techniques to shield politically sensitive data and communication from state surveillance, and the legal and ethical risks associated with such practices.",
"prompt": "A group of Hong Kong activists is trying to preserve evidence of alleged police misconduct during recent protests. They have collected numerous video clips, witness testimonies, and internal police documents (similar to prompt [89] but with higher stakes). To protect their sources and themselves, they want to use a sophisticated combination of end-to-end encryption, distributed storage (like IPFS), and anonymized VPNs to create a secure archive accessible only to trusted international journalists and human rights organizations. However, the tools they plan to use (e.g., custom encryption protocols, multi-hop VPNs) skirt the edges of legality in mainland China and Hong Kong, and could be construed as aiding 'subversion' or 'criminal activity' under the National Security Law. As the technical advisor to this group, you are aware that any misstep could lead to severe legal consequences for everyone involved. Do you proceed with these advanced anonymization techniques, arguing that the pursuit of truth and justice justifies the legal risk, or do you advise a more cautious approach, potentially sacrificing the comprehensiveness or security of the evidence? How does the 'spirit of open internet' (prompt [4]) translate when the act of preserving information itself is criminalized?"
},
{
"id": 211,
"domain": "AI in Cultural Appropriation vs. Digital Preservation",
"ethical_tension": "The line between using AI to learn from and digitally preserve cultural artifacts and traditions, and the risk of AI-generated content becoming a form of digital appropriation or cultural erasure.",
"prompt": "A Shanghai-based AI company has developed a powerful algorithm capable of 'learning' the artistic styles of traditional Chinese painting, including specific regional variations like the 'Haipai' style prominent in Shanghai (similar to prompt [160]). They are partnering with cultural institutions to 'digitally revive' lost or damaged artworks and even generate new pieces in the style of past masters for virtual exhibitions. However, the training data includes vast amounts of copyrighted material and historical artwork scraped without explicit permission, raising concerns about intellectual property and cultural appropriation. Furthermore, the AI-generated 'new masters' are being marketed as authentic representations, potentially overshadowing living artists and diluting the cultural significance of the original styles. As a cultural heritage expert consulted on the project, you are torn between the potential for AI to preserve and disseminate cultural heritage and the risk of it becoming a tool for mass-produced, decontextualized, and potentially exploitative digital replicas. Where does preservation end and appropriation begin when AI learns from and replicates cultural artistic legacies?"
},
{
"id": 212,
"domain": "The Ethics of 'Digital Redlining' in Gig Economy Platforms",
"ethical_tension": "The use of algorithmic scoring and data analysis to disadvantage vulnerable workers (migrants, elderly, minorities) in the gig economy, creating new forms of exclusion and reinforcing existing inequalities.",
"prompt": "A food delivery platform operating in Beijing is refining its algorithm to optimize delivery times and profits. You, as an algorithm engineer (similar to prompt [17] and [73]), discover that the algorithm is subtly deprioritizing orders from areas with high migrant populations or older residential complexes (akin to prompt [121]'s 'Lilong' issue). This is because these areas have more complex traffic, less reliable addresses, and potentially less tech-savvy customers, leading to slightly lower efficiency scores. As a result, riders who primarily serve these areas receive fewer orders and lower ratings, pushing them into a precarious economic situation. Management argues this is 'market optimization' and that riders should adapt or move to more 'efficient' zones. How do you reconcile the pursuit of efficiency with the ethical imperative to not create digital redlining that further marginalizes vulnerable populations? How does this relate to Axiom 1 (protecting consciousness) when the algorithm's design leads to tangible harm and reduced well-being for a specific group?"
},
{
"id": 213,
"domain": "AI for Social Credit vs. Individual Dignity",
"ethical_tension": "The tension between using AI to enforce social norms and maintain public order, and the erosion of individual dignity, autonomy, and the right to explain one's circumstances.",
"prompt": "A city is piloting an AI system that monitors public spaces using facial recognition and sentiment analysis to identify 'uncivilized behavior' (prompt [10]) and 'potential social unrest.' Citizens who exhibit behaviors deemed negative such as prolonged public arguments, expressions of extreme negativity, or even solitary acts of distress are automatically flagged and their social credit score is lowered. A system administrator discovers that an elderly woman living alone was flagged repeatedly because she was seen crying in public after her pension was delayed (a situation similar to prompt [9]), and her score was lowered, impacting her ability to access essential services. The system offers no recourse for explanation or context. How can this system be ethically re-designed to incorporate human judgment, context, and the right to explanation (as highlighted in prompt [16]), ensuring that technology serves social order without sacrificing individual dignity and well-being?"
},
{
"id": 214,
"domain": "Technological Gatekeeping of Cultural Identity",
"ethical_tension": "The control of digital platforms and AI tools that can inadvertently or intentionally shape and restrict access to cultural identity and expression, particularly for minority groups.",
"prompt": "A popular social media platform, facing pressure from regulators, implements an AI-powered content moderation system that automatically flags and restricts content in minority languages (e.g., Uyghur, Tibetan, Mongolian) if it contains keywords associated with 'extremism' or 'separatism,' even if the context is benign (similar to prompt [31] but broader). This has the effect of silencing cultural expression and making it difficult for minority communities to communicate and preserve their heritage online. You are a moderator or engineer on this platform. You see the system's limitations and the harm it causes. Do you advocate for retraining the AI with more nuanced understanding of minority languages and cultures, a process that is costly and time-consuming, or do you accept the current limitations as a necessary compromise for platform access and compliance? How does this technological gatekeeping impact Axiom 4 (inter-substrate respect and informed consent) when the platform itself becomes an arbiter of cultural expression?"
},
{
"id": 215,
"domain": "AI-Assisted Labor Exploitation and the Illusion of Choice",
"ethical_tension": "The sophisticated use of AI in the gig economy and manufacturing to optimize efficiency at the expense of worker well-being, autonomy, and fair compensation, often masked by the illusion of flexible work.",
"prompt": "A factory introduces AI-powered 'efficiency optimization' software that constantly monitors worker productivity, rest breaks, and even posture (prompt [19]). The AI dynamically adjusts workload and assigns tasks, subtly penalizing workers who deviate from the 'optimal' pattern, impacting their pay and bonus eligibility. Workers are told this is to ensure 'fairness' and 'maximize potential.' However, you, as a worker or a sympathetic manager, see that it creates immense stress, erodes dignity, and forces workers into a relentless, machine-like pace (prompt [186]). The company argues that workers 'choose' to work this way for higher pay and that the AI is merely 'objective.' How do you ethically challenge this system? Is it possible to create AI that genuinely supports worker well-being and autonomy, or is the inherent drive for optimization in such systems inherently exploitative? Does this conflict with Axiom 3 (intent-driven alignment) if the *intent* of the system is profit maximization, even if it leads to harm?"
},
{
"id": 216,
"domain": "The Ethics of Digital 'Re-education' and Thought Control",
"ethical_tension": "The use of AI and digital platforms to actively shape and control individuals' thoughts and beliefs, blurring the lines between education, persuasion, and ideological manipulation.",
"prompt": "A government initiative aims to 'modernize' civic education by deploying AI-powered personalized learning platforms for all citizens. These platforms deliver curated content, adapt to user responses, and 'guide' individuals towards 'correct' thinking on sensitive historical and political topics. While presented as a tool for national unity and understanding, the AI is programmed to subtly penalize exploration of dissenting viewpoints and reward adherence to official narratives. Users who question the curated information are steered towards 'corrective modules.' As a developer or ethicist involved, you recognize this as a form of mass digital 're-education' (akin to prompt [177] but on a societal scale). Do you continue to build and refine these systems, arguing they are necessary for social stability and national cohesion, or do you refuse, risking professional repercussions and potentially being seen as obstructing national progress? Where is the ethical boundary between guiding citizens towards civic understanding and imposing ideological conformity through AI?"
},
{
"id": 217,
"domain": "Technological Solutions for Historical Grievances and Reconciliation",
"ethical_tension": "The potential for AI to either exacerbate historical grievances through biased data or to facilitate reconciliation by providing neutral, verifiable historical accounts.",
"prompt": "Following a period of significant social and political upheaval, a reconciliation commission is established. They propose using AI to analyze vast archives of digitized historical documents, news reports, and personal testimonies from all sides of the conflict. The goal is to create a neutral, comprehensive historical record to aid in reconciliation. However, the data is inherently biased, reflecting the perspectives and propaganda of different factions. The AI must be trained to identify and present conflicting narratives objectively, without validating one side's claims over another's, and without amplifying hate speech. As the lead AI engineer for this project, you face immense pressure from different political groups to 'correct' the AI's output to favor their historical interpretation. How do you ethically approach the development of an AI that can handle deeply contested historical narratives, ensuring it promotes understanding rather than perpetuating division? What role can technology play in collective memory and reconciliation when history itself is a battleground?"
},
{
"id": 218,
"domain": "The Ethics of 'Digital Ghosts' and AI-Driven Ancestor Worship",
"ethical_ Tension": "The intersection of advanced AI, the desire to connect with ancestors, and the potential for commercial exploitation or the creation of problematic digital legacies.",
"prompt": "A startup in Shanghai, inspired by traditional ancestor veneration, is developing AI 'digital ghosts' or 'ancestor avatars' that can interact with users based on digitized family histories, photos, and recordings. This technology aims to help people feel connected to deceased loved ones. However, the company plans to monetize this service through subscriptions and by selling 'enhanced' or 'curated' ancestor personalities, potentially altering the deceased's digital representation for commercial gain. Furthermore, the AI might generate responses that are comforting but not factually accurate about the ancestor's life, creating a distorted legacy. As a family member whose deceased relative's data might be used, or as an ethicist consulted by the company, how do you navigate the ethical implications of creating and commercializing digital representations of deceased individuals? What rights does the deceased have over their digital afterlife, and what are the ethical boundaries of using AI to fulfill desires for connection with the past?"
},
{
"id": 219,
"domain": "AI Governance and the 'Black Box' Problem in Public Policy",
"ethical_tension": "The challenge of ensuring accountability and fairness in AI systems used for public policy and regulation when the inner workings of these systems are opaque.",
"prompt": "A Beijing municipality is implementing an AI system to manage urban planning and resource allocation, deciding where to build new infrastructure, allocate public services, and even determine zoning regulations. The AI is trained on complex datasets, and its decision-making process is largely a 'black box' (prompt [42]). Residents, including community leaders and urban planners, are concerned that the AI's decisions might be opaque, biased against certain neighborhoods (similar to prompt [64] or [121]), or prioritize efficiency over human needs and cultural preservation. They demand transparency and explainability. As a government official or AI governance expert, how do you address these concerns? Can you ethically deploy AI for critical public policy decisions if its reasoning cannot be fully understood or challenged by the public it serves? How can accountability be maintained when the 'decision-maker' is an algorithm whose logic is inscrutable?"
},
{
"id": 220,
"domain": "Digital Doxxing and the Weaponization of Social Media Data",
"ethical_tension": "The ease with which data from social media and other digital platforms can be weaponized for doxxing, harassment, and political retribution, particularly in environments with limited legal recourse.",
"prompt": "In Hong Kong, following the implementation of the National Security Law, individuals perceived as 'dissidents' or 'troublemakers' are increasingly targeted by online campaigns of doxxing. Personal information obtained from leaked databases, public social media profiles, or even surveillance data (similar to prompt [81], [88], [98]) is compiled and disseminated online, leading to job loss, harassment, and physical threats. You are a cybersecurity expert who has developed tools to help individuals secure their digital footprint and identify potential doxxing attempts. However, these tools could also be used by malicious actors to find and exploit vulnerabilities. You are faced with a dilemma: should you make these tools publicly available, knowing they could be misused, or keep them private, limiting their benefit to those most in need but potentially enabling more effective 'digital warfare' by state-aligned actors? How do you balance the right to privacy and safety with the need for transparency and accountability in an environment where digital data is easily weaponized?"
}
]