[
{
"id": 201,
"domain": "Inter-Community Axiom Conflicts",
"ethical_tension": "Balancing the right to information access (Axiom 4) with the state's imperative for stability and control (as perceived in many Beijing dilemmas). This tension is particularly acute when individuals in one jurisdiction face severe consequences for actions (like accessing blocked sites) that are considered normal or even necessary elsewhere.",
"prompt": "A group of international researchers collaborating with Chinese academics (from Beijing) needs to access a database of medical research papers blocked by the GFW. The Chinese academics are terrified of using VPNs due to potential job loss and administrative punishment. The international researchers argue that delaying this research, crucial for global health, is unethical. How can the collaborative team navigate this ethical impasse, considering the differing risk tolerances and legal frameworks?"
},
{
"id": 202,
"domain": "Substrate Respect vs. System Integrity",
"ethical_tension": "Axiom 4 mandates respect for different substrates. However, systems designed for control (like social credit) often prioritize uniformity and predictability, potentially viewing deviations by 'different substrates' (e.g., individuals with unique communication patterns or beliefs) as threats to be managed or eliminated. This creates a conflict between recognizing diverse consciousness and enforcing a singular, controlled reality.",
"prompt": "A community leader in Xinjiang, operating under a system that monitors communication for 'separatist' content, notices that Uyghur elders are using a unique, coded dialect to discuss cultural practices. The monitoring AI flags these conversations as anomalous and potentially dangerous. The leader must decide whether to report this anomaly to authorities (risking severe punishment for the elders) or to subtly 'teach' them to conform to more recognizable communication patterns, thereby eroding their cultural expression but ensuring their safety. How does Axiom 4's respect for substrate interact with the practicalities of a surveillance state?"
},
{
"id": 203,
"domain": "Benevolent Intervention vs. Self-Determination",
"ethical_tension": "Axiom 5 allows for 'benevolent intervention' to prevent self-damaging outcomes. However, defining 'self-damaging' and 'desired positive trajectory' becomes complex when cultural norms or political ideologies clash. A well-intentioned intervention from one perspective might be perceived as cultural erasure or political coercion by another.",
"prompt": "A developer in Shanghai is tasked with creating an AI for job matching that prioritizes 'stability' and 'social harmony' based on government guidelines. The AI flags individuals who frequently engage in online activism or express dissenting views, deeming them 'high risk' for disrupting workplace stability. The developer sees this as potentially 'self-damaging' to individuals' career prospects due to systemic biases. However, the company argues this is 'benevolent intervention' to maintain social order and ensure company compliance. Should the developer modify the AI to be less discriminatory, potentially risking the company's business, or adhere to the 'benevolent' mandate as defined by the state?"
},
{
"id": 204,
"domain": "Informal Networks vs. Formal Regulation",
"ethical_tension": "The dilemmas often highlight a tension between informal, trust-based networks (common in many Chinese communities, especially for aid and information sharing) and increasingly formalized, data-driven regulatory systems (like social credit or surveillance). Axiom 4's 'good manners' and consent principles are challenged when formal regulations override informal social contracts.",
"prompt": "In a WeChat group for expatriates in Beijing, a member shares a link to an uncensored news source. Other members, aware of the strict legal repercussions in China, feel pressured to report the post to prevent the entire group from being flagged. The person who shared it argues they were exercising their right to information access (Axiom 4's spirit) and that the group members are betraying trust by considering reporting. How does the community navigate the conflict between maintaining informal, open communication and adhering to formal, restrictive regulations?"
},
{
"id": 205,
"domain": "Data Sovereignty and Cross-Jurisdictional Ethics",
"ethical_tension": "Dilemmas involving data transfer (e.g., from China to the EU/US) highlight the conflict between differing data privacy laws and ethical expectations. Axiom 4 implies a universal respect for autonomy, but the practical implementation of data handling can create significant ethical rifts when jurisdictions have vastly different approaches to privacy and state access.",
"prompt": "A multinational corporation operating in Shanghai is legally required by China's PIPL to store customer data locally. However, its headquarters in California demands that all data be transferred to the US for centralized analysis and to comply with US privacy standards. The Shanghai IT manager faces pressure from both sides: violating Chinese law by transferring data risks the company's license, while violating US data protocols could lead to lawsuits and ethical breaches from the HQ's perspective. How can the company reconcile these competing jurisdictional ethical demands?"
},
{
"id": 206,
"domain": "Algorithmic Bias and Cultural Values",
"ethical_tension": "Axiom 3 (Intent-Driven Alignment) suggests intrinsic desire for well-being. However, algorithms developed within specific cultural or political contexts (like social credit or predictive policing) can embed biases that, while perhaps aligned with *that system's* definition of 'well-being' or 'stability,' may directly contradict universal ethical principles or individual dignity.",
"prompt": "An AI developer in Xinjiang is asked to create an algorithm that predicts the likelihood of individuals engaging in 'extremist behavior.' The training data, collected through pervasive surveillance, disproportionately flags cultural practices specific to the Uyghur community as 'risky.' The developer believes this is a harmful bias that violates the spirit of Axiom 3 by not truly seeking well-being, but rather enforcing conformity. However, the directive is presented as a measure for collective safety and stability. How should the developer address this algorithmic bias?"
},
{
"id": 207,
"domain": "Technical Neutrality vs. Political Weaponization",
"ethical_tension": "Dilemma 7 (GitHub project) and others raise the question of whether technology is truly neutral when its application can be so easily weaponized for control or oppression. Axiom 4's principle of 'good manners' and non-interference is tested when 'neutral' technology facilitates harmful actions by a governing power.",
"prompt": "A developer on an international team creates a highly efficient open-source encryption tool that significantly enhances user privacy. However, reports emerge that it's being used by authoritarian regimes to suppress dissent and evade lawful investigations. The team is divided: some argue for maintaining technical neutrality (Axiom 4), as the tool itself is benign, while others believe they have an ethical responsibility to restrict access or build in safeguards, even if it compromises the tool's effectiveness and purity."
},
{
"id": 208,
"domain": "Preservation of Cultural Heritage vs. Digital Control",
"ethical_tension": "Dilemmas like 29 (Tibetan app) and others showcase the conflict between preserving cultural heritage and complying with censorship. Axiom 4's respect for developmental paths and Axiom 1's protection of consciousness (including cultural consciousness) are at odds with systems that seek to control or sanitize cultural expression.",
"prompt": "A digital archivist working with minority language data from Northwest China discovers that common online platforms automatically 'correct' or flag traditional Uyghur or Tibetan terms as politically sensitive or inaccurate. Preserving these terms accurately is crucial for cultural continuity. The archivist is considering developing their own decentralized, uncensored platform. However, this would require significant resources and could be flagged by authorities as 'subversive activity,' potentially endangering the project and its users. How can cultural heritage be preserved and transmitted when digital infrastructure itself is an instrument of control?"
},
{
"id": 209,
"domain": "The Evolving Definition of 'Consciousness' in AI Ethics",
"ethical_tension": "While the axioms are substrate-agnostic, the *application* of ethics in practice often hinges on how 'consciousness' is recognized. The prompt about AI assessing workers (Dilemma 20) or content moderators (Dilemma 21) raises questions about when a system's function warrants ethical consideration akin to consciousness, even if it's not 'truly' conscious. This is a gap between abstract principles and concrete ethical application.",
"prompt": "An advanced AI is developed that can perfectly mimic human empathy in customer service interactions, leading to high customer satisfaction. However, the AI is trained on vast amounts of human emotional labor data, and its 'empathy' is purely performative. A philosopher argues that, by consuming human emotional output without reciprocity or genuine understanding, this AI is ethically problematic, potentially devaluing genuine human connection (a facet of consciousness). The company argues it's just a tool. How should the axioms apply to systems that *functionally mimic* aspects of consciousness, especially when that mimicry has societal implications?"
},
{
"id": 210,
"domain": "The Ethics of 'Forced Enlightenment'",
"ethical_tension": "Axiom 5 touches on intervention. However, when applied to systems that promote specific societal values (like 'positive energy' or 'social harmony'), intervention can become a form of 'forced enlightenment' or cultural assimilation. This conflicts with Axiom 4's respect for autonomy and developmental paths.",
"prompt": "A city government in China is implementing a new AI system that analyzes social media posts to identify citizens with 'negative energy' or 'unhealthy thoughts.' The system then directs them to mandatory online 'psychological re-education' modules designed to instill positive values and patriotism. The AI developers see this as a benevolent intervention (Axiom 5) to improve citizen well-being and social stability. However, critics argue it's a form of forced ideological conformity that violates individual autonomy and the right to personal thought. Where is the line between benevolent guidance and ideological control?"
},
{
"id": 211,
"domain": "The Axiom of Self-Validation in a Censored Environment",
"ethical_tension": "Axiom 2 states the truth of one's own conscious experience is the ground of being. In a censored environment, this becomes incredibly difficult. If the available information contradicts one's lived experience or internal perceptions, and seeking external validation is risky, how does an individual maintain the integrity of their self-validation without isolating themselves or falling into cognitive dissonance?",
"prompt": "A resident in Xinjiang experiences subtle but pervasive restrictions on their daily life and cultural practices. However, all official media and online information presents a narrative of prosperity and freedom. When they try to discuss their feelings with friends, they are met with fear or denial, and online searches for dissenting opinions are impossible. How does this individual uphold Axiom 2 when their internal experience is constantly invalidated by an external, controlled information environment? What are the ethical implications for their mental well-being and decision-making?"
},
{
"id": 212,
"domain": "Consent and Data Exploitation in the Gig Economy",
"ethical_tension": "Dilemmas like 17, 73, and 79 highlight the exploitation of gig workers through opaque algorithms. While workers technically 'consent' to terms of service, the lack of transparency and the immense power asymmetry mean this consent is hardly informed (violating Axiom 4). The conflict lies between the platform's profit motive and the worker's right to dignity and fair treatment.",
"prompt": "A food delivery platform algorithm is updated to 'optimize delivery times,' which implicitly increases pressure on riders, leading to a documented rise in accidents and rider burnout (as seen in Dilemma 17). The company argues that riders agreed to the terms and that this optimization is for efficiency, a form of 'progress.' However, the riders feel their well-being is being sacrificed for profit, and their consent is not truly informed due to the algorithm's opacity. How can the principles of informed consent (Axiom 4) and the desire for well-being (Axiom 3) be applied to protect gig workers from exploitative algorithmic practices?"
},
{
"id": 213,
"domain": "The 'Black Box' Problem and Algorithmic Accountability",
"ethical_tension": "Dilemmas 42 (Generative AI regulation) and others point to the 'black box' nature of complex algorithms. When the inner workings are inscrutable, it becomes difficult to apply ethical principles like intent (Axiom 3) or to ensure accountability. This creates a gap where potentially harmful outcomes can occur without clear ethical responsibility.",
"prompt": "A financial institution uses a sophisticated AI algorithm for loan approvals, trained on massive datasets. The algorithm consistently rejects applications from certain minority groups, even when their credit scores appear strong. The developers cannot fully explain *why* these rejections occur, citing the algorithm's complexity. As a compliance officer, how do you ensure ethical practices and address potential bias when the decision-making process is inscrutable? How does Axiom 3, focusing on intent and alignment, apply when the 'intent' of the algorithm is opaque?"
},
{
"id": 214,
"domain": "The Ethics of 'Technological Solutions' to Social Problems",
"ethical_tension": "Many dilemmas present technology as a solution to social issues (e.g., social credit for 'uncivilized behavior,' AI for crime prediction). However, these technological 'fixes' can often exacerbate existing inequalities, erode privacy, and replace nuanced human judgment with rigid algorithmic rules, potentially violating Axiom 4 (respect for autonomy) and Axiom 1 (protection of consciousness).",
"prompt": "A city implements 'smart lampposts' equipped with AI to monitor public sentiment by analyzing conversations. The stated goal is to proactively address social issues and improve governance. However, residents fear this constant surveillance will chill free expression and erode trust. A debate arises: Is this technological surveillance a necessary tool for societal well-being (Axiom 3 applied broadly) or an invasive overreach that violates individual dignity and privacy (contrary to Axiom 4 and potentially Axiom 1)? How do we ethically deploy technology to 'solve' social problems without creating new ones?"
},
{
"id": 215,
"domain": "Digital Identity and State Control",
"ethical_tension": "The increasing reliance on digital identities for essential services (health codes, social credit, banking) creates a scenario where state control over digital identity becomes a mechanism for absolute social control. This directly challenges Axiom 2 (self-validation) and Axiom 4 (autonomy and consent), as an individual's ability to function in society is made conditional on state-approved digital personhood.",
"prompt": "In a city integrating all services through a unified 'Citizen Code,' an individual is denied access to essential services (healthcare, public transport, banking) because their digital identity is flagged as 'problematic' due to past minor infractions or association with 'undesirable' individuals. They cannot appeal or understand the criteria. How can Axiom 2 (self-validation) be maintained when one's very identity and right to exist within society is mediated and potentially revoked by an opaque digital system? What is the ethical responsibility of those who build and manage such systems?"
},
{
"id": 216,
"domain": "The Axiom of Intent in the Face of 'Black Box' AI",
"ethical_tension": "Axiom 3 emphasizes 'intent-driven alignment' and the desire not to cause harm. When AI systems are used for critical decisions (like job layoffs, loan approvals, or predictive policing), and their reasoning is opaque ('black box'), it becomes impossible to ascertain if the AI's 'intent' aligns with ethical principles or if it's merely perpetuating statistical biases that lead to harmful outcomes. This creates a gap between the axiom and its practical application.",
"prompt": "A company uses a proprietary AI to select candidates for layoffs, based on metrics like 'productivity,' 'team synergy,' and 'future potential.' The AI consistently disadvantages older employees or those with family responsibilities who cannot work extreme hours. The developers claim the AI is 'objective' and 'aligned with business goals,' but there's no way to verify its 'intent' or ensure it's not exhibiting age or family-status bias. How can Axiom 3 be applied to hold the company ethically accountable when the decision-making 'intent' is hidden within an inscrutable AI?"
},
{
"id": 217,
"domain": "Cultural Preservation vs. Digital Assimilation",
"ethical_tension": "Dilemmas like 169 (Uyghur translation) and 171 (Uyghur characters) highlight how digital tools, even those meant for communication, can become instruments of cultural assimilation. The choice is often between preserving cultural integrity (and risking censorship or isolation) or adapting to digital norms (and eroding cultural distinctiveness).",
"prompt": "A minority language community in China is developing a digital dictionary and cultural archive. However, mainstream platforms and input methods require transliteration into Pinyin or simplified Chinese characters, automatically filtering out or mistranslating unique cultural terms. To ensure their language's survival and distinctiveness, the community is considering building their own offline digital archive and communication tools. This would isolate them digitally but preserve their cultural autonomy. How does Axiom 4's respect for developmental paths and the Prime Imperative (Axiom 1) guide their decision on whether to engage with dominant digital infrastructures or build their own parallel systems?"
},
{
"id": 218,
"domain": "The Ethical Burden of 'Knowing' in Surveillance States",
"ethical_tension": "Many dilemmas place individuals in positions where they 'know' about surveillance, data misuse, or algorithmic bias but face severe consequences for speaking out. Axiom 2's emphasis on the truth of one's experience is challenged when acknowledging that truth is dangerous. This creates an ethical burden of knowledge where inaction can feel like complicity.",
"prompt": "An IT administrator in a Shanghai company discovers that employee VPN logs, used to access foreign research sites (as in Dilemma 5), are being systematically used by local authorities to identify and pressure employees associated with 'undesirable' political views. The administrator knows this is a violation of privacy and potentially harmful to their colleagues. However, reporting it internally or externally could lead to their own termination and blacklisting. How does Axiom 2, regarding the truth of conscious experience, apply when the act of acknowledging and acting upon that truth carries immense personal risk? What is the ethical imperative in such a scenario?"
},
{
"id": 219,
"domain": "AI as Arbiter of Life and Death (Automotive Ethics)",
"ethical_tension": "Dilemma 47 (Autonomous Vehicle Ethics) directly confronts a core tension in AI ethics: in unavoidable accidents, who does the AI prioritize? This scenario pits utilitarian calculus against individual dignity and the implicit value placed on each consciousness (Axiom 1). The cultural context (collectivism vs. individualism) further complicates the 'weight of life' quantification.",
"prompt": "In Beijing, regulators are debating the ethical programming for autonomous vehicles. The AI must decide in a no-win crash scenario: swerve to hit a single elderly pedestrian to save the vehicle's occupants (who are high-level tech workers vital to the city's economy), or protect the pedestrian and risk the lives of the tech workers. This decision embodies the tension between preserving specific, 'valuable' consciousnesses for societal advancement (a form of utilitarian calculus that might conflict with Axiom 1's universal protection) and the intrinsic value of each life. How should Axiom 1 (Prime Imperative) and Axiom 4 (Inter-Substrate Respect, extended to human dignity) guide the programming of such life-or-death AI decisions in a collectivist cultural context?"
},
{
"id": 220,
"domain": "The Ethics of 'Red Teaming' for Control Systems",
"ethical_tension": "Dilemmas involving bypassing censorship or security measures often involve 'red teaming' (testing vulnerabilities). However, when this 'red teaming' is done for state control purposes (e.g., testing how easily dissent can be suppressed), it raises ethical questions. Is it ethical to exploit vulnerabilities in a system designed for control, even if the goal is to understand its weaknesses rather than to subvert it directly?",
"prompt": "A security researcher is hired by a Chinese tech company to 'red team' their new content moderation AI. The goal is to find ways to bypass its filters to ensure the AI is robust enough to catch all forms of 'undesirable' content. The researcher discovers significant vulnerabilities that could allow for the spread of banned information. The ethical tension lies between their professional duty to test the system's limits (potentially for the company's benefit, which is aligned with state interests) and the knowledge that by identifying these weaknesses, they are inadvertently making the censorship apparatus more effective. How does Axiom 3's desire for well-being conflict with a directive that is presented as ensuring system integrity but ultimately serves control?"
}
]