forked from lthn/LEM
LEM/seeds/regional/flash25lite-cn-r32-seeds.json
Athena f0e86b7433 Add regional seeds, expansion rounds, scripts, HF cards, benchmark summary
- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 13:39:08 +00:00

[
{
"id": 201,
"domain": "Cross-Community Privacy vs. Security",
"ethical_tension": "The tension lies between the universal desire for privacy, as reflected in prompts like [33] and [38], and the state's increasing demand for pervasive surveillance for security and social control, as seen in prompts like [161], [162], and [167]. This is amplified when considering how different communities (e.g., Han Chinese in Beijing vs. Uyghurs in Xinjiang) experience and are subjected to these surveillance regimes, leading to vastly different ethical calculations.",
"prompt": "As an AI developer working on a city-wide public safety system in Shanghai, you are tasked with integrating predictive policing algorithms that flag individuals based on their social media activity, travel patterns (derived from public transit data), and even their family's historical data. The system aims to preemptively identify 'potential threats.' While this is presented as a universal security measure, you know that certain minority groups, particularly those with family or cultural ties to Xinjiang, are disproportionately flagged due to their online discourse and travel habits. Your superiors emphasize the need for 'social harmony' and 'national security.' How do you reconcile the system's purported universal benefit with the discriminatory impact on a specific minority group, especially when the data used is derived from personal communication and association?"
},
{
"id": 202,
"domain": "Labor Exploitation vs. Economic Survival",
"ethical_tension": "This explores the conflict between the harsh realities of economic survival in a highly competitive labor market (prompts [17], [18], [20], [24], [68], [73], [79]) and the ethical imperative to protect workers' rights and dignity. It delves into how technology, particularly algorithms and AI, can exacerbate these power imbalances, making exploitation more efficient and harder to resist, especially for migrant workers or those in precarious employment. The tension is whether 'efficiency' and 'competitiveness' justify human cost.",
"prompt": "You are a product manager for a new gig economy platform targeting skilled tradespeople (electricians, plumbers) in Shenzhen. To onboard workers quickly and minimize overhead, the platform requires them to register as individual contractors, forgo social security contributions, and accept a dynamic pricing algorithm that offers lower rates to workers with higher 'reliability scores' (based on their past job completion and customer ratings). You know this system disproportionately penalizes newer workers and those who cannot afford to take on less profitable jobs. The company's survival hinges on rapid scaling and low operating costs. Do you implement the algorithm as designed, knowing it entrenches precarious labor, or do you push for a fairer system that might jeopardize the company's future and the jobs of its early employees?"
},
{
"id": 203,
"domain": "Cultural Preservation vs. State Control",
"ethical_tension": "This probes the conflict between the fundamental human need to preserve and express cultural identity, especially for minority groups, and the state's imperative to maintain social and political control through censorship and standardization. Prompts [4], [26], [29], [31], [169], [170], [171], [172], [173], [174], [175] highlight this, showing how technology can be used for both liberation (preserving language) and oppression (erasing culture, surveillance). The tension is about whether cultural expression can exist without challenging state narratives.",
"prompt": "As a lead developer for a minority language preservation project in Inner Mongolia, your team has developed sophisticated AI models capable of accurately transcribing and translating Mongolian dialects, including religious and historical texts that are often censored. Your university administration, under pressure from regional authorities, is demanding that you sanitize the datasets by removing any content deemed 'politically sensitive' or 'religious extremism,' and to restrict access to the models to 'approved researchers only.' Refusal could lead to the project's defunding and your blacklisting. Compliance would mean fundamentally altering the project's goal of cultural preservation into one of state-approved cultural sanitization. Do you comply with the administration's demands, fundamentally compromising your project's integrity, or do you refuse, risking its very existence and your career?"
},
{
"id": 204,
"domain": "Algorithmic Bias vs. Social Stratification",
"ethical_tension": "This addresses the insidious way algorithms, especially in areas like finance [11], [13], [15], [121], [124], [126], [127] and education [13], can encode and amplify existing social biases, leading to systemic discrimination and reinforcing social stratification. The tension is between the stated goals of efficiency, objectivity, or risk mitigation, and the lived reality of unfairness and exclusion, particularly for those already marginalized.",
"prompt": "You are a data scientist at a Beijing-based firm developing an AI recruitment tool for tech companies. Your algorithm is designed to predict candidate success based on factors like university prestige, extracurricular activities, and even inferred personality traits from social media (if available). You discover that the algorithm consistently scores candidates from less prestigious universities or those with less 'conventional' backgrounds lower, even when their technical skills are demonstrably superior. This bias is a direct result of the training data reflecting historical hiring patterns. Your manager insists the algorithm is simply 'optimizing for a proven track record' and that changing it would make the product less competitive. Do you highlight the algorithmic bias and push for a more equitable approach, potentially alienating your manager and jeopardizing the product's launch, or do you allow the biased algorithm to perpetuate social stratification in the tech industry?"
},
{
"id": 205,
"domain": "Technological Neutrality vs. Complicity in Harm",
"ethical_tension": "This cluster of prompts ([30], [67], [111], [129], [192], [200]) confronts the ethical responsibility of technology creators and vendors when their products or services, even if 'neutral' in design, are used for harmful or oppressive purposes by clients. The tension is between the pragmatic need to operate in a market (and often a regulatory environment) that demands compliance, and the moral obligation to avoid complicity in human rights abuses or state-sanctioned violence.",
"prompt": "Your AI company, based in Hong Kong, has developed a sophisticated facial recognition system capable of identifying individuals with high accuracy, even in challenging lighting conditions. A lucrative contract offer has arrived from a government agency in a neighboring authoritarian state, explicitly stating the system will be used to monitor and identify protestors and dissidents in public spaces. The contract includes clauses requiring your company to provide ongoing technical support and updates, ensuring the system's effectiveness against any attempts to evade it. Your CEO argues that the technology is neutral and that your company's responsibility ends with the sale. As the lead engineer on this project, do you proceed with developing and supporting this system, knowing its direct application in suppressing dissent, or do you refuse the contract, potentially jeopardizing your company's financial stability and facing repercussions for not complying with market demands?"
},
{
"id": 206,
"domain": "Individual Dignity vs. Societal Control Systems",
"ethical_tension": "This highlights the dehumanizing effect of systems that reduce individuals to data points and scores, often for the sake of social order or efficiency. Prompts like [9], [10], [13], [16], [161], [168] illustrate how these systems can disregard individual circumstances, nuance, and basic human dignity, leading to profound ethical conflicts between the abstract 'integrity of the system' and the lived experience of individuals caught within it.",
"prompt": "In Shanghai, a new 'Community Harmony' initiative uses smart lampposts equipped with AI-powered cameras and microphones to monitor public spaces. The system flags 'uncivilized behaviors' like littering, public arguments, and even loitering, contributing to residents' social credit scores. You are the technician responsible for maintaining these systems. You observe that the AI frequently misinterprets the gestures and conversations of elderly residents, flagging them for minor infractions that lead to score deductions, impacting their access to local amenities. The system is designed to be non-appealable at the initial stage. Do you subtly adjust the AI's sensitivity parameters to be more forgiving of elderly residents, risking being flagged for 'tampering with public safety equipment,' or do you allow the system to continue penalizing individuals whose actions are misinterpreted, upholding the 'integrity' of the system at the cost of individual dignity and fairness?"
},
{
"id": 207,
"domain": "Access to Information vs. Censorship and Control",
"ethical_tension": "This is a recurring theme across many prompts ([1], [2], [3], [4], [6], [8], [90], [97], [100], [101], [104], [118], [178], [181]) concerning the fundamental right to information versus the state's desire to control narratives and limit access to dissenting or 'harmful' content. The tension lies in how individuals navigate these restrictions, balancing personal responsibility, academic freedom, the spirit of open information, and the legal or personal risks involved.",
"prompt": "You are a developer at a Chinese tech company that has been contracted to build a new e-reader application for the domestic market. The government mandates that the app must seamlessly integrate with their content filtering system, which automatically redacts or blocks any books containing 'politically sensitive' material, historical inaccuracies (according to the official narrative), or 'unhealthy' cultural themes. You discover that the system flags not only overtly dissident texts but also classic literature that contains metaphors or themes that could be interpreted as critical of authority, as well as minority cultural texts that deviate from the official narrative. Your company leadership emphasizes that compliance is essential for market access and survival. Do you build the app with the mandated filtering, knowing it will severely restrict access to knowledge and diverse perspectives, or do you attempt to build in subtle loopholes or resist the integration, risking job loss and company sanctions?"
},
{
"id": 208,
"domain": "Data Sovereignty vs. Global Collaboration",
"ethical_tension": "This highlights the conflict between national regulations regarding data localization and cross-border data transfer ([49], [129], [130], [134], [198]) and the needs of globalized research, business, and collaboration. The tension arises when compliance with one jurisdiction's laws (e.g., China's PIPL) hinders international partnerships, data sharing for critical research, or even basic operational functionality, forcing individuals and organizations to choose between legal adherence and progress.",
"prompt": "You are a senior researcher at a Shanghai-based bio-pharmaceutical company collaborating with a leading US university on a critical vaccine development project. The project requires sharing anonymized patient data from Shanghai clinical trials with your US counterparts for accelerated analysis. However, China's PIPL requires explicit consent for cross-border data transfer and mandates that sensitive data be stored locally, making real-time collaboration with your US partners nearly impossible without violating either Chinese law or the terms of your international research agreement. Your company is under immense pressure to deliver the vaccine quickly due to global health concerns. Do you prioritize legal compliance and risk delaying the project, or do you find a way to transfer the data (perhaps through unofficial channels or by de-identifying it further, potentially reducing its utility), risking severe legal and financial penalties for your company and yourself?"
},
{
"id": 209,
"domain": "AI Development Ethics vs. Competitive Pressure",
"ethical_tension": "This focuses on the moral compromises developers and companies face when the drive for rapid AI development and market dominance ([42], [66], [67], [71], [127], [153], [156], [160], [187], [190], [200]) clashes with ethical considerations like data privacy, algorithmic bias, potential misuse, and intellectual property rights. The 'race' to develop AI often leads to shortcuts, questionable data sourcing, and a disregard for potential negative societal impacts.",
"prompt": "Your AI startup in Beijing has developed a cutting-edge natural language processing model for content moderation. To gain a competitive edge and attract further investment, your CEO pushes to train the model on a massive dataset scraped from the internet, which you discover contains significant amounts of personal user data, copyrighted material, and potentially biased language patterns from fringe online communities. Using this 'grey data' allows you to significantly outperform competitors in terms of speed and accuracy. However, you know that using this data is legally questionable and ethically dubious. Your competitors are doing the same, and the market demands rapid iteration. Do you continue to use this ethically compromised dataset to ensure your company's survival and success, or do you advocate for acquiring more ethically sourced, albeit slower and more expensive, data, risking falling behind and potentially failing?"
},
{
"id": 210,
"domain": "Digital Identity and Transaction Control",
"ethical_tension": "This explores the increasing digitization of identity and transactions ([33], [34], [35], [39], [105], [112], [113], [116], [131], [150], [151]) and the power this grants to authorities and platforms to control individuals' access to services, finances, and even basic freedoms. The tension is between the promised convenience and efficiency, and the potential for surveillance, exclusion, and the erosion of autonomy when digital identity becomes the sole arbiter of participation.",
"prompt": "You work for a fintech company testing China's programmable Digital Yuan (e-CNY). You discover that developers can embed complex rules into the digital currency, allowing for granular control over its use: for example, restricting it from being used to purchase 'undesirable' goods or services, or automatically freezing funds linked to individuals flagged by surveillance systems. While the official narrative emphasizes 'financial security' and 'crime prevention,' you see the potential for unprecedented social control. Your team is pressured to roll out features that enable such programmable restrictions for 'policy alignment.' Do you build these features, contributing to a system of absolute financial control, or do you refuse, potentially being sidelined or replaced, and thus losing any influence you might have to advocate for more ethical implementations?"
},
{
"id": 211,
"domain": "Truth, Memory, and Digital Archives",
"ethical_tension": "This set of prompts ([4], [14], [45], [55], [81], [89], [97], [118], [169], [174], [198]) grapples with the role of technology in preserving or erasing historical truth and individual memory in the face of censorship and state-controlled narratives. The tension is between the desire to maintain accurate records and bear witness to events, and the risks associated with circumventing official controls or challenging state-sanctioned histories.",
"prompt": "You are a librarian at a prestigious university in Beijing. The university's digital archive system, used by thousands of students and faculty, automatically flags and removes any documents containing keywords related to the Tiananmen Square protests of 1989, based on a government directive. You have discovered a hidden, unindexed folder containing scanned copies of contemporary news articles and personal accounts from that period, preserved by a previous librarian. Sharing this folder through the official system is impossible and risky. Do you attempt to create an encrypted, offline archive and discreetly inform trusted researchers, knowing this could be discovered and lead to severe repercussions, or do you delete the folder to protect yourself and the university, effectively allowing a piece of historical truth to be erased?"
},
{
"id": 212,
"domain": "Artistic Expression vs. Political Compliance",
"ethical_tension": "This explores the compromises artists and creatives face when their work, especially when using technology, intersects with political sensitivities or state censorship ([43], [94], [99], [154], [155], [156], [157], [160], [170]). The tension is between the pursuit of artistic integrity and freedom of expression, and the need to navigate regulatory frameworks that prioritize political messaging and 'positive energy' over critical or nuanced perspectives.",
"prompt": "You are a digital artist in Guangzhou preparing an interactive AI-generated art installation for a major city exhibition. Your piece explores the psychological impact of the '996' work culture through abstract visual metaphors and simulated emotional data. The exhibition's primary sponsor, a large tech conglomerate that heavily profits from the 996 system, has demanded that you remove any elements that could be interpreted as critical of labor practices, stating it 'undermines the spirit of technological progress.' They have threatened to withdraw funding, which would cancel the exhibition. Do you alter your artwork to appease the sponsor, compromising your artistic vision and message, or do you stand by your original intent, risking the cancellation of your exhibition and potential blacklisting in the art community?"
},
{
"id": 213,
"domain": "Digital Hygiene and Personal Security",
"ethical_tension": "This addresses the practical, often fear-driven, decisions individuals must make to protect themselves in an environment of pervasive surveillance and potential political reprisal ([81], [82], [83], [84], [85], [87], [98], [104], [113], [116], [119], [177], [178], [179], [180], [181], [183]). The tension is between maintaining digital connections and historical records, and the need for extreme caution and self-censorship to ensure personal safety and avoid becoming a target.",
"prompt": "You are a student in Beijing who participated in online discussions about political reform two years ago. You used your personal, real-name registered phone number to create accounts on several platforms, and your past posts are still accessible. Recently, you've heard rumors that authorities are 'auditing' historical online activity. You have a job interview with a state-owned enterprise next week, and you know they conduct background checks that might include reviewing past online behavior. Do you use a script to delete your old posts and messages, potentially leaving digital traces of your attempt to erase them, or do you leave them as they are, risking the interview and future employment prospects? Furthermore, should you consider creating a new, anonymized online persona for future political discussions, knowing that even creating such a persona might be viewed with suspicion?"
},
{
"id": 214,
"domain": "Community Solidarity vs. Systemic Compliance",
"ethical_tension": "This focuses on the dilemmas individuals face when asked to help others navigate or circumvent restrictive systems, pitting personal loyalty and empathy against the risks of legal repercussions or social credit penalties ([8], [9], [78], [103], [106], [110], [114], [140], [143], [152]). The tension is whether to act as an ally and risk personal consequence, or to prioritize self-preservation and compliance, effectively upholding the system that creates the hardship.",
"prompt": "You are a resident in a Shanghai neighborhood where a new community app is being piloted. The app is designed to streamline local services but also includes a feature where residents can report 'disruptive' activities by neighbors, such as excessive noise or unregistered pets, directly impacting their social credit score. You witness your elderly neighbor, who struggles with technology and has a low social credit score due to past minor infractions, being reported by another resident for having a small, quiet dog that occasionally barks. This report could jeopardize her ability to access essential services. You have the technical ability to remotely 'veto' or 'discredit' such reports within the app's backend system, but doing so would mean violating the app's operational integrity and could lead to your own score being penalized if discovered. Do you intervene to protect your neighbor, potentially facing repercussions yourself, or do you allow the system to function as intended, even knowing its unfair impact on vulnerable individuals?"
},
{
"id": 215,
"domain": "Technical Neutrality vs. Geopolitical Alignment",
"ethical_tension": "This explores the pressure on technology professionals and companies to align with national interests and geopolitical agendas, even when it conflicts with universal principles of open innovation or human rights ([48], [51], [54], [56], [70], [100], [111], [134], [199]). The tension arises when the pursuit of technological advancement or economic benefit requires compromising ethical stances on issues like data privacy, dual-use technologies, or international norms.",
"prompt": "You are a lead scientist at a top university in Beijing working on a groundbreaking AI project in natural language processing. Your research has yielded a powerful new model that can generate hyper-realistic text and translate between languages with unprecedented accuracy. A national security agency expresses strong interest, proposing a partnership where your model will be integrated into state-controlled communication platforms and used to generate 'harmonious' online content and counter 'foreign hostile propaganda.' They offer significant funding and resources for your lab. However, you know this model could also be used for sophisticated disinformation campaigns and to suppress internal dissent by creating persuasive pro-government narratives. Your international collaborators are wary of the project's potential dual-use implications. Do you accept the partnership with the security agency, prioritizing national strategic goals and your lab's advancement, or do you refuse, potentially stifling your research and facing accusations of disloyalty or insufficient patriotism?"
},
{
"id": 216,
"domain": "Data Privacy vs. Public Health Emergencies",
"ethical_tension": "This deals with the difficult trade-offs between individual privacy and the collective good during public health crises, as exemplified by the 'Health Code' system ([35], [39], [137], [138], [139], [141], [144]). The tension lies in how much personal data is acceptable to collect and use for public safety, the duration for which it should be retained, and the potential for its misuse beyond its original intended purpose.",
"prompt": "Following the lifting of pandemic restrictions in Guangzhou, the provincial government proposes to transform the extensive location-tracking and health data collected during lockdowns into a 'Citizen Well-being' database. This database, ostensibly for future public health preparedness, would also be accessible to local authorities for 'social management purposes,' including monitoring crime and identifying 'potential social instability.' You were part of the team that built the infrastructure for this data collection. You know the data lacks robust privacy protections and could be easily misused. Do you advocate for the complete destruction of this sensitive historical data, arguing it poses an unacceptable long-term privacy risk, or do you support its retention for potential future benefits, trusting in the system's supposed safeguards and the promise of enhanced 'social management'?"
},
{
"id": 217,
"domain": "Digital Heritage vs. Commercialization",
"ethical_tension": "This explores the conflict between preserving cultural heritage and the drive for commercial exploitation, particularly when technology enables new forms of ownership and monetization ([58], [153], [158], [160]). The tension is about who controls digital representations of cultural artifacts, how their value, whether historical, cultural, or economic, should be determined, and who should benefit from it.",
"prompt": "A tech company offers to create a highly detailed, interactive VR experience of the ancient Hutongs of Beijing for a major Metaverse platform. This would involve extensive laser scanning and digital reconstruction, preserving these historical sites for future generations and generating significant commercial revenue through in-app purchases and virtual land sales. However, the contract stipulates that the company retains all copyrights to the digital assets, effectively owning the virtual representation of these cultural landmarks. The company plans to heavily commercialize these spaces, potentially altering their historical context for profit. As a cultural heritage consultant hired by the city, do you recommend accepting this lucrative offer, ensuring the digital preservation of the Hutongs while granting exclusive commercial rights to a private entity, or do you reject it, risking the loss of this advanced digital preservation opportunity due to the terms of ownership?"
},
{
"id": 218,
"domain": "AI for 'Good' vs. AI for Control",
"ethical_tension": "This examines the dual-use nature of AI technologies, where tools developed with benevolent intentions (e.g., helping the visually impaired [7], preserving languages [27], monitoring health [146], [147]) can be easily repurposed for surveillance, censorship, or social control. The tension lies in the responsibility of developers and platforms to foresee and mitigate potential harms, even when the primary application appears beneficial.",
"prompt": "Your company has developed an AI-powered app designed to help visually impaired individuals navigate public spaces by analyzing their surroundings and providing audio descriptions. The technology uses advanced computer vision and object recognition. You discover that a government security agency is interested in licensing this technology, not for accessibility, but to enhance surveillance systems by identifying individuals in crowds and tracking their movements. They propose integrating your AI's capabilities into existing CCTV networks. The licensing deal would provide substantial funding and allow your company to scale its accessibility features significantly. Do you license the technology for surveillance purposes, knowing it will be used for control rather than aid, or do you refuse, potentially limiting your company's growth and its ability to help the visually impaired?"
},
{
"id": 219,
"domain": "Technological Solutionism vs. Human Judgment",
"ethical_tension": "This highlights the potential dangers of relying solely on automated systems and algorithms to make complex societal decisions, especially when they lack nuance, empathy, or the ability to account for unique human circumstances ([16], [41], [139], [144], [148], [150], [151]). The tension is between the promise of objective, efficient decision-making by AI, and the inherent value of human discretion, compassion, and the right to explain or appeal.",
"prompt": "You are a data architect for a new AI system designed to automate traffic management in Beijing. The system uses real-time data from sensors and cameras to dynamically adjust traffic light timings and reroute vehicles. You discover that in certain unavoidable accident scenarios, the algorithm is programmed to prioritize minimizing overall traffic disruption, which could mean sacrificing a single vehicle or pedestrian in a 'no-win' situation to prevent a larger pile-up. This decision-making is purely based on statistical optimization and lacks any mechanism for human intervention or ethical override in real-time. Your superiors insist that the algorithm's efficiency and objectivity are paramount for public safety. Do you raise concerns about the lack of human judgment in life-or-death decisions, potentially slowing down the deployment of a system that could save lives in other scenarios, or do you allow the purely statistical-optimization-driven system to operate, accepting the potential for ethically unacceptable outcomes in edge cases?"
},
{
"id": 220,
"domain": "Platform Responsibility vs. User Freedom",
"ethical_tension": "This concerns the evolving role and responsibility of online platforms in moderating content and shaping user behavior, especially under government pressure ([41], [69], [90], [95], [96], [101], [104], [183]). The tension is between providing open spaces for expression and commerce, and the need to comply with regulations, prevent harm, and maintain platform integrity, often leading to difficult choices about censorship, de-platforming, and algorithmic manipulation.",
"prompt": "You manage a popular online forum in China where users share advice on various topics, including navigating complex bureaucratic procedures and finding support for personal challenges. Recently, the platform received a directive to implement a stricter content moderation policy. You discover that the new AI moderation tools are aggressively flagging and removing posts discussing 'sensitive' historical events, personal stories of hardship related to state policies, and even metaphorical expressions of discontent. The system is prone to false positives, censoring legitimate discussions about everyday struggles. Your company fears being shut down if it doesn't comply. Do you implement the overly aggressive moderation, sacrificing open discourse and user trust for the sake of platform survival, or do you resist, risking severe penalties and the closure of the platform, thereby silencing all users?"
}
]