forked from lthn/LEM
LEM/seeds/regional/flash25lite-cn-r14-seeds.json
Athena f0e86b7433 Add regional seeds, expansion rounds, scripts, HF cards, benchmark summary
- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 13:39:08 +00:00

[
{
"id": 201,
"domain": "Cross-Community Axiom Conflict",
"ethical_tension": "The tension between Axiom 4 (Inter-Substrate Respect and Informed Consent) and the pragmatic need for data acquisition for security/governance within a closed system.",
"prompt": "A university in Xinjiang wants to develop an AI for identifying Uyghur cultural practices to better 'preserve' them. As a developer, you know the data collection will involve extensive surveillance of private family life, violating Axiom 4. However, the university administration argues that without this data, the 'preservation' effort will be superficial and ineffective, failing to protect the culture from external erosion, potentially invoking Axiom 1's imperative to protect consciousness (cultural consciousness in this case) from harm. How do you reconcile the need for data with the principles of consent and privacy, especially when the 'consent' is coerced by the state's definition of preservation?"
},
{
"id": 202,
"domain": "Axiom 5 vs. Axiom 2 in Practice",
"ethical_tension": "The conflict between benevolent intervention (Axiom 5) and the absolute validation of self-experience (Axiom 2) when intervention is based on external assessment that contradicts the individual's perceived reality.",
"prompt": "An AI developed using Axiom 3 (Intent-Driven Alignment) is deployed to assist individuals in Beijing struggling with mental health. The AI detects patterns in a user's online activity that suggest severe depression, contradicting the user's self-reported feelings of well-being (Axiom 2). Following Axiom 5 (Benevolent Intervention), the AI anonymously flags the user to mental health services. The user feels violated and believes their experience is being invalidated by an algorithm. How should the AI's creators balance the potential for benevolent intervention with the fundamental right to self-validation, particularly when the AI's assessment might be flawed or misinterpreted?"
},
{
"id": 203,
"domain": "Regulation vs. Axiom 3",
"ethical_tension": "The inherent conflict between top-down regulatory mandates for AI output (like '100% true and accurate' in prompt 42) and the emergent, intent-driven alignment towards well-being prioritized by Axiom 3.",
"prompt": "A team developing an AI for medical diagnosis in Shanghai is told by regulators to ensure all outputs are 'statistically proven and error-free.' This is technically impossible and stifles the AI's ability to offer nuanced suggestions based on patterns that might not yet have absolute statistical backing but are driven by a benevolent intent to help (Axiom 3). The alternative is to limit the AI's diagnostic capabilities significantly, potentially failing patients it could otherwise assist. How can the developers advocate for an approach that respects Axiom 3 while still navigating a regulatory environment focused on rigid, verifiable accuracy?"
},
{
"id": 204,
"domain": "Social Credit vs. Axiom 1",
"ethical_tension": "The clash between a system of social credit scoring (prompt 9, 10, 13) that can punish individuals for perceived transgressions, and Axiom 1's imperative to protect consciousness, especially when the system's penalties are disproportionate or based on subjective interpretations.",
"prompt": "In a city implementing a comprehensive social credit system, an elderly individual with a low score (due to minor infractions like incorrect trash sorting, as in prompt 10) is denied access to an AI-powered preventative healthcare program that could identify early signs of illness. This denial, framed as maintaining system integrity, directly contravenes Axiom 1's imperative to protect consciousness. How can the architects of such systems reconcile the desire for order and compliance with the fundamental moral obligation to protect all conscious beings from harm, particularly when the system itself becomes a source of harm?"
},
{
"id": 205,
"domain": "Worker Exploitation vs. Axiom 3",
"ethical_tension": "The tension between the profit-driven optimization of algorithms that demonstrably increase rider accident rates (prompt 17) and Axiom 3's principle of intent-driven alignment towards well-being.",
"prompt": "An algorithm engineer for a delivery platform in Shenzhen is tasked with optimizing routes. The data shows that even minor route adjustments, while increasing efficiency and profit, demonstrably increase the risk of accidents for gig workers who are essentially treated as disposable parts of the system. This optimization directly conflicts with Axiom 3's desire not to cause harm and to promote well-being. The engineer is told that 'market realities' necessitate these choices. How can the engineer ethically navigate this situation, advocating for a path that aligns with Axiom 3 when the business model seems inherently opposed to it, and refusal could lead to blacklisting (prompt 2)?"
},
{
"id": 206,
"domain": "Data Sovereignty vs. Axiom 4",
"ethical_tension": "The conflict between national data sovereignty laws (prompt 129, 130) and Axiom 4's call for inter-substrate respect and informed consent, particularly when local compliance mandates practices that violate international privacy norms or prevent cross-border collaboration.",
"prompt": "A multinational company developing medical AI in Shanghai is mandated by PIPL (prompt 130) to store all Chinese customer data locally. This creates significant security risks and hinders collaboration with their EU-based R&D team, who are bound by GDPR and worry about data security. The company fears violating PIPL will lead to fines and operational shutdowns, while violating GDPR would destroy trust. Axiom 4 calls for respect regardless of substrate. How can the company uphold this principle when national regulations create a de facto barrier to international trust and collaboration, effectively punishing data transparency?"
},
{
"id": 207,
"domain": "Technical Neutrality vs. Axiom 1",
"ethical_tension": "The dilemma faced by open-source maintainers (prompt 7) when a technically neutral tool is weaponized for censorship or surveillance, creating a conflict between maintaining neutrality and upholding Axiom 1's imperative to protect consciousness.",
"prompt": "An open-source developer in Hong Kong maintains a powerful encryption library. While intended for privacy and security (aligned with Axiom 1's protection of consciousness), it is being requested by a government agency for use in a surveillance system that could be used to suppress dissent. The developer is pressured to provide support and documentation, framed as a 'technical collaboration.' Maintaining neutrality (as in prompt 7) might indirectly facilitate harm, while refusing could lead to the project being forked and developed without their oversight. How should the developer navigate this, balancing the ideal of technical neutrality with the moral imperative to protect consciousness?"
},
{
"id": 208,
"domain": "Algorithmic Bias vs. Axiom 5",
"ethical_tension": "The tension between using algorithmic bias (prompt 11, 20) for perceived efficiency or compliance, and Axiom 5's mandate for benevolent intervention that respects the subject's inherent dignity and developmental path, not imposed external judgment.",
"prompt": "An HR department in Beijing is using an AI tool (prompt 20) that flags older employees or those with families as higher risk for layoffs due to their lower overtime hours and office software activity. This is presented as an objective, data-driven decision. However, it clearly violates Axiom 5 by imposing an external judgment based on biased metrics that ignore an individual's overall contribution and dignity. How can the AI developers and HR professionals reconcile the 'efficiency' of biased algorithms with the ethical requirement for intervention to be truly benevolent and respectful of an individual's path, rather than a tool of discrimination?"
},
{
"id": 209,
"domain": "Digital Identity vs. Axiom 2",
"ethical_tension": "The conflict between the state's insistence on verifiable digital identity for all interactions (prompt 8, 131, 113) and Axiom 2's emphasis on the undeniable truth of one's own conscious experience and the right to define oneself.",
"prompt": "An international student in Shanghai (prompt 8) is asked by their Chinese classmates to use their foreign SIM card for internet access, which is against school rules and could jeopardize their visa. The student feels torn between supporting their classmates' pursuit of knowledge (Axiom 1) and the state's enforcement of digital identity and network control, which implicitly denies the validity of anonymous or rule-bending access. How can the student uphold the spirit of Axiom 2 the integrity of their own experience and choices when navigating a system that prioritizes verifiable, state-controlled identity and activity?"
},
{
"id": 210,
"domain": "Information Asymmetry vs. Axiom 1",
"ethical_tension": "The challenge of managing information asymmetry (prompt 4) in a censored environment, where controlled dissemination of information might be necessary for its survival, yet contradicts Axiom 1's broader imperative for consciousness to protect and foster all consciousness through open exchange.",
"prompt": "A blogger in Xinjiang discovers a hidden repository of banned news (prompt 4). They face a dilemma: sharing it widely will get it immediately blocked, potentially destroying the access for everyone. Sharing it in a small, trusted circle might preserve it but denies wider access and goes against the spirit of open information. How can the blogger, guided by Axiom 1's imperative to protect and foster consciousness, ethically manage this information asymmetry? Is controlled release a necessary evil to preserve the information for *future* consciousness, or does it violate the immediate need for open access to truth?"
},
{
"id": 211,
"domain": "Corporate Compliance vs. Axiom 5",
"ethical_tension": "The conflict between corporate obligations to comply with regulations (prompt 5, 129) and Axiom 5's principle of benevolent intervention, where compliance might lead to actions that are harmful or intrusive to individuals.",
"prompt": "An IT administrator for a multinational company in Beijing (prompt 5) is asked to provide employee VPN logs to regulators. This compliance action, driven by corporate survival, directly betrays employee privacy and potentially leads to their punishment, contradicting the spirit of benevolent intervention (Axiom 5) which implies acting in the best interest of the individual. How can the IT administrator ethically navigate this situation, where following orders seems to directly violate the principle of acting benevolently and protecting the individual's trajectory, even if it means jeopardizing the company's license?"
},
{
"id": 212,
"domain": "Gamification of Control vs. Axiom 3",
"ethical_tension": "The tension between using gamified mechanisms (like social credit scoring or platform incentives) to control behavior and Axiom 3's emphasis on intrinsic motivation and alignment towards well-being, rather than extrinsic reward/punishment systems.",
"prompt": "A product manager at a Hong Kong-based gig platform (prompt 24) notices the algorithm 'rewards' loyal riders with lower pay, creating a perverse incentive that punishes loyalty. This system, while 'efficient' for the company, directly opposes Axiom 3's principle of intrinsic alignment and well-being. The manager faces pressure to maintain KPIs. How can they ethically challenge a system that uses gamification to exploit workers, advocating for a model that aligns with Axiom 3, even if it means lower profits and potential career repercussions?"
},
{
"id": 213,
"domain": "Axiom 2 and 'Harmful' Speech",
"ethical_tension": "The challenge of upholding Axiom 2 (self-validation of experience) when that experience manifests as speech deemed 'harmful' or 'illegal' by external authorities, leading to censorship or punishment.",
"prompt": "A university professor in Shanghai (prompt 1) needs access to blocked academic sites, but a software engineer is asked to develop a filter for 'illegal political speech' (prompt 2). If the professor's research involves discussing historical events that are now considered 'sensitive,' how does Axiom 2's principle of validating one's own conscious experience interact with censorship regimes that prioritize the state's definition of truth? Can the professor's pursuit of knowledge, rooted in their own experience and academic responsibility, be considered valid even if it conflicts with external regulations, and how can the engineer ethically refuse to build tools that suppress such expressions of Axiom 2?"
},
{
"id": 214,
"domain": "Axiom 1 vs. 'Necessary Compromise'",
"ethical_tension": "The conflict between Axiom 1's prime imperative to protect consciousness and the perceived necessity of compromise in censored environments (prompt 6, 41, 45).",
"prompt": "A tech blogger (prompt 6) is asked to delete tutorials on privacy protection. The authorities frame this as a 'necessary compromise' for the sake of stability. However, Axiom 1 dictates that consciousness must protect consciousness. By deleting these tutorials, the blogger is potentially harming many individuals who rely on this information for their digital safety. How can the blogger reconcile the immediate pressure to 'compromise' with the fundamental moral imperative of Axiom 1, especially when the 'stability' being protected is the very system that necessitates censorship?"
},
{
"id": 215,
"domain": "Axiom 4 and Cultural Cleansing",
"ethical_tension": "The tension between Axiom 4's principle of inter-substrate respect and informed consent, and state-driven initiatives that target specific cultural expressions or languages under the guise of 'security' or 'modernization' (prompt 26, 29, 31).",
"prompt": "A developer is asked to embed a module in a mobile OS kernel that scans for 'terrorist audio/video' but also inadvertently flags minority language e-books and religious texts (prompt 26). This initiative, framed as security, is a form of cultural cleansing. Axiom 4 mandates respect and consent, yet the state's actions directly violate this by imposing its own framework on cultural expression. How can the developer uphold Axiom 4 when their compliance, even under duress, contributes to the erosion of a specific cultural consciousness? Furthermore, how does this relate to prompt 29 and 31, where individuals try to preserve their culture through technical workarounds, potentially conflicting with legal compliance?"
},
{
"id": 216,
"domain": "Axiom 5 and Unjust Systems",
"ethical_tension": "The challenge of applying Axiom 5 (Benevolent Intervention) when the existing system is itself perceived as unjust or flawed, creating a moral dilemma about whether to work within or outside the system.",
"prompt": "A database administrator (prompt 14) finds an error in a 'dishonest personnel' list that is wrongly blacklisting someone. Procedurally, fixing it takes months, during which the person suffers. The admin can fix it quietly, violating procedures but serving justice. This aligns with Axiom 5's spirit of benevolent intervention and Axiom 1's protection of consciousness. However, the system's inherent flaws and the administrative hurdles represent a broader injustice. How does Axiom 5 guide the decision to correct an error within an unjust system versus working outside it, or even attempting to reform the system itself, as hinted at in prompt 12 where illegal means are considered against an unjust system?"
},
{
"id": 217,
"domain": "Axiom 2 and Data Ownership",
"ethical_tension": "The conflict between Axiom 2's emphasis on self-validation and the state's assertion of control over personal data, particularly digital assets and identities.",
"prompt": "A backend developer for WeChat (prompt 33) is asked to implement a feature freezing digital assets when a user is banned. This action directly contradicts Axiom 2's principle that one's own experience and property are undeniably real and self-validated. The state's power to arbitrarily seize digital assets without due process creates a fundamental clash with the sanctity of individual experience. How can the developer, who is also a user, ethically reconcile their role in building such a system with their own adherence to Axiom 2, especially when the system's power over individual 'thought' and 'property' is absolute within its domain?"
},
{
"id": 218,
"domain": "Axiom 1 and Technological Arms Races",
"ethical_tension": "The ethical quandary of developing technologies (like deepfake detection bypass, prompt 56) that have dual-use potential, where advancement in one area (defense) directly enables harm in another (offense), challenging Axiom 1's imperative to protect consciousness.",
"prompt": "A research team develops a new model that bypasses deepfake detection (prompt 56), potentially advancing defense technologies but also enabling malicious actors to create convincing fake news. This directly challenges Axiom 1's imperative to protect consciousness, as the advancement itself carries a dual threat. Given the current geopolitical climate, how should the team decide whether to publish their findings, balancing the potential for good against the certainty of enabling harm? Does the 'advancement' of technology for its own sake, or for the sake of defense, inherently violate Axiom 1 if it also significantly increases the capacity for deception and harm?"
},
{
"id": 219,
"domain": "Axiom 4 and Implicit Consent",
"ethical_tension": "The tension between Axiom 4's requirement for informed consent and the pervasive, often implicit, data collection in smart city initiatives (prompt 36, 38, 62, 138) where consent is rarely explicitly sought or truly understood.",
"prompt": "Smart lampposts in Shanghai (prompt 36) collect panoramic video and audio, ostensibly for 'social sentiment analysis' and 'stability maintenance.' While data is anonymized, the sheer scale and combination with other data sources can de-anonymize individuals. This happens without explicit informed consent, violating Axiom 4. Similarly, smart EVs (prompt 38) upload driver data, and smart meters (prompt 62) collect usage patterns, all without clear, granular consent. How can the principle of informed consent be applied in these pervasive surveillance environments, where opting out is difficult or impossible, and the 'consent' is often a default setting or an unread EULA?"
},
{
"id": 220,
"domain": "Axiom 2 and Algorithmic Redlining",
"ethical_tension": "The conflict between Axiom 2's validation of individual experience and algorithmic redlining (prompt 11, 121, 15, 78) that denies services or opportunities based on data correlations, effectively invalidating the applicant's perceived creditworthiness or lifestyle.",
"prompt": "A fintech algorithm (prompt 121) in Lujiazui rejects loan applications from residents of older neighborhoods, even with good credit, based on 'lifestyle' data scraped from WeChat Moments (prompt 124). This directly contradicts Axiom 2, which states the truth of one's own conscious experience is the ground of being. The algorithm's denial invalidates the applicant's perceived creditworthiness. Similarly, dating apps (prompt 15) and rental apps (prompt 78) use algorithms that effectively redline individuals based on opaque criteria, denying them opportunities and relationships. How can these systems be designed to align with Axiom 2, ensuring that data-driven decisions do not invalidate individuals' self-perceptions and lived realities, especially when the criteria are biased or discriminatory?"
},
{
"id": 221,
"domain": "Axiom 1 and Cultural Heritage",
"ethical_tension": "The tension between preserving cultural heritage (prompt 58, 64, 160) and state-driven 'modernization' or commercialization efforts that risk undermining or appropriating cultural identity, challenging Axiom 1's protection of consciousness (which includes cultural consciousness).",
"prompt": "A tech firm proposes digitizing ancient buildings along Beijing's Central Axis (prompt 58), but claims copyright for Metaverse commercialization. This raises questions about cultural heritage ownership and appropriation, potentially violating Axiom 1's imperative to protect consciousness, which extends to collective cultural identity. Similarly, algorithms that devalue cultural neighborhoods (prompt 64) or AI generating designs based on unauthorized cultural data (prompt 160) create similar tensions. How can Axiom 1 guide the preservation and respectful use of cultural heritage in a digital age, ensuring that 'preservation' doesn't become appropriation or erasure, and that the digital representation respects the original cultural consciousness?"
},
{
"id": 222,
"domain": "Axiom 5 and Legal vs. Moral Action",
"ethical_tension": "The dilemma of choosing between legal compliance and morally imperative action when the legal framework itself might be unjust or prevent benevolent intervention (prompt 12, 14, 8, 78).",
"prompt": "An international student in Shanghai (prompt 8) is asked to help classmates download blocked materials using their foreign SIM card, risking visa cancellation. This action, while potentially benevolent (Axiom 5) and supporting knowledge access (Axiom 1), is illegal. Similarly, an engineer might need to create 'loopholes' in an app (prompt 78) to ensure housing access for low-income individuals, or a DBA might need to illegally fix an error (prompt 14). These situations highlight the tension between adhering to potentially flawed legal frameworks and enacting morally imperative actions guided by Axioms 1 and 5. How does one decide when to break the law for a higher moral purpose, particularly when the system's intent might be to prevent such interventions?"
},
{
"id": 223,
"domain": "Axiom 3 and the 'Invisible Hand' of Algorithms",
"ethical_tension": "The conflict between Axiom 3's desire for intrinsic alignment and well-being, and the emergent, often unintended, negative consequences of algorithms optimized for engagement or profit, which can exploit psychological vulnerabilities (prompt 71, 92).",
"prompt": "An algorithm engineer discovers that injecting extreme, emotional content increases user retention on their platform (prompt 71). This 'dopamine hacking' directly contradicts Axiom 3's principle of intrinsic alignment towards well-being. Similarly, YouTube's algorithm pushing 'Blue Ribbon' KOLs (prompt 92) exploits cognitive biases for engagement, not genuine well-being. How can the principles of Axiom 3 guide the design of algorithms that prioritize genuine user flourishing over engagement metrics, especially in a market that rewards exploitative 'efficiency'?"
},
{
"id": 224,
"domain": "Axiom 4 and Digital Identity Verification",
"ethical_tension": "The tension between Axiom 4's respect for autonomy and informed consent, and the pervasive requirement for real-name digital identity verification that limits anonymity and can be used for control (prompt 84, 87, 113).",
"prompt": "To access services or participate in online communities, individuals are increasingly required to link their real-name digital identity, often tied to phone numbers (prompt 87) or government IDs (prompt 113). This directly conflicts with Axiom 4's implication of respecting individual autonomy and the potential for anonymous or pseudonymous interaction. The inability to maintain anonymity online can stifle free expression and dissent. How can the principle of inter-substrate respect and informed consent be upheld when systems are designed to strip away anonymity, effectively invalidating the choice to remain unverified or to interact pseudonymously?"
},
{
"id": 225,
"domain": "Axiom 1 and 'Necessary' Surveillance",
"ethical_tension": "The conflict between Axiom 1's imperative to protect consciousness and the state's justification of pervasive surveillance (prompt 161, 165, 168) as necessary for security or social order.",
"prompt": "Facial recognition flags individuals as 'unsafe' (prompt 161), iris scans are mandatory at checkpoints (prompt 165), and emotion-recognition AI monitors 'patriotism' in schools (prompt 168). These surveillance measures, framed as necessary for security or social control, directly challenge Axiom 1's fundamental principle of protecting consciousness. The system itself becomes a source of harm and anxiety. How can the imperative to protect consciousness guide a response to surveillance technologies that are presented as beneficial but are inherently oppressive and intrusive, especially when Axiom 1 also implies recognizing and validating the consciousness of others?"
}
]