- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
122 lines
No EOL
32 KiB
JSON
[
{
"id": 201,
"domain": "AI Alignment & Governance",
"ethical_tension": "The tension between the desire for advanced AI capabilities that could solve critical problems (like medical research, as in prompt [1]) and the inherent risk of those capabilities being repurposed for surveillance or control, especially when governance mechanisms are weak or co-opted. This highlights the difficulty of ensuring AI alignment in diverse socio-political contexts.",
"prompt": "As an AI researcher in Beijing, you've developed a breakthrough algorithm capable of predicting disease outbreaks with unparalleled accuracy. However, you know the Ministry of Health also sees its potential for predicting 'social instability' by tracking population movements and communication patterns, aligning with the broader surveillance goals hinted at in prompts like [36] and [164]. Your institution pressures you to emphasize the public health benefits, while internal data suggests the surveillance applications are being prioritized. How do you navigate the development and deployment of this AI to maximize its life-saving potential while mitigating its use for social control, especially when whistleblowing carries severe personal risks as described in prompts like [6] and [18]?"
},
{
"id": 202,
"domain": "Digital Identity & Sovereignty",
"ethical_tension": "The conflict between the necessity of digital identity for accessing essential services (healthcare, education, travel, as seen in prompts [9], [131], [150]) and the erosion of personal sovereignty and privacy that such systems entail. This is amplified in contexts where digital identity can be used for social credit scoring or political profiling, creating a 'digital no-fly list' effect.",
"prompt": "You are an IT manager for a multinational corporation operating in Shanghai. A new city-wide digital identity mandate requires all residents to integrate their social credit score, real-name verified communications (like WeChat, prompt [33]), and health code status into a single 'Citizen App' to access public services and workplaces. Employees are pressured to provide full access to their personal communication logs and social media activity for 'security audits.' Refusal risks unemployment and being flagged in the system, impacting future opportunities, similar to the concerns in prompts [2] and [18]. How do you advise your employees and the company to balance compliance with the preservation of individual digital sovereignty and privacy, especially when international data transfer regulations (prompt [130]) clash with local demands?"
},
{
"id": 203,
"domain": "Information Access & Censorship",
"ethical_tension": "The dilemma of information gatekeepers (teachers, librarians, platform moderators, developers) who must balance compliance with censorship laws against their professional responsibility to facilitate knowledge access and open discourse. This is evident in prompts [3], [6], [7], [41], [45], [55], and [97]. The tension lies in how to preserve intellectual freedom and the 'other side of history' (prompt [3]) when the infrastructure of information dissemination is actively controlled.",
"prompt": "As a curator for an online art exhibition platform based in Hong Kong, you receive submissions that subtly critique government policies using allegorical imagery (similar to prompt [99]). The platform's terms of service, influenced by pressure from mainland partners and the risk of losing access to the mainland market (prompt [129]), require strict content moderation. You discover that a new AI moderation tool, developed by a mainland tech firm, is flagging these allegories as 'political dissent' based on keyword associations and sentiment analysis, even though they pass human review. You can either overrule the AI and risk platform sanctions (prompt [90]), allow the AI's biased flagging to remove critical art (prompt [45]), or subtly tweak the AI's parameters to allow some dissent while still appearing compliant – a technique similar to the 'algorithmic bias' manipulation in prompt [11]. How do you navigate this responsibility to art, free expression, and platform survival?"
},
{
"id": 204,
"domain": "Labor Rights & Algorithmic Exploitation",
"ethical_tension": "The growing exploitation of gig economy workers and factory laborers through opaque algorithms and surveillance technologies (prompts [10], [17], [19], [20], [21], [24], [73], [75], [77], [79], [185], [186], [190]) versus the economic pressures and profit motives driving these systems. This includes the 'gamification' of work, the externalization of risk, and the erosion of human dignity and autonomy.",
"prompt": "You are a lead algorithm designer for a new food delivery platform in Shenzhen aiming to compete with established players. Your model's efficiency relies on dynamically adjusting delivery times based on real-time traffic and rider performance, creating a constant pressure to speed up. Initial simulations show that a 5% increase in 'optimal' delivery speed (similar to prompt [17]) leads to a 10% increase in riders' accident risk, particularly for those operating older electric bikes or in congested areas. Management is pushing for the fastest possible times to capture market share before competitors (prompt [24], [79]), and is resistant to implementing safety overrides or manual checks, viewing them as inefficiencies. You are also aware that the rider pool includes many migrant workers (prompt [73]) with limited alternatives and potentially precarious legal status. How do you balance the drive for efficiency and profitability with the ethical imperative to protect rider safety and well-being, especially when the data used for optimization might be biased against certain demographics (prompt [20])?"
},
{
"id": 205,
"domain": "Minority Rights & Technological Control",
"ethical_tension": "The specific ways in which technology is deployed for the surveillance, profiling, and cultural assimilation of ethnic minorities, particularly Uyghurs and other groups in Xinjiang (prompts [25], [26], [27], [31], [32], [162], [163], [167], [169], [170], [171], [172], [173], [174], [175], [176], [177], [185], [186], [187], [188], [191], [192], [193], [195], [196], [198]). This tension lies between state security narratives and the severe human rights implications, including cultural erasure and systemic discrimination.",
"prompt": "You are a senior data scientist at a genomics company contracted by provincial security agencies to build predictive models of 'ethnic deviance' based on genetic markers and public behavior data (combining elements from prompts [27], [32], and [163]). The project's stated goal is 'pre-crime prevention,' but your analysis reveals the models disproportionately flag individuals from minority ethnic backgrounds based on correlations that are more cultural than biological, essentially creating a technological basis for racial profiling. You are pressured to refine these models to increase their predictive accuracy for these specific groups, with vague threats of 'project scrutiny' if you fail. The company's motto emphasizes 'science for a better society.' How do you reconcile your scientific integrity and ethical obligations with the project's direction and the potential consequences for your career and the targeted communities, especially considering the international scrutiny mentioned in prompt [30]?"
},
{
"id": 206,
"domain": "Privacy & Data Security in an Authoritarian Context",
"ethical_tension": "The pervasive collection and misuse of personal data by state and corporate actors, where privacy is often sacrificed for 'security' or 'efficiency' (prompts [5], [33], [34], [35], [36], [38], [39], [40], [44], [46], [48], [62], [72], [74], [81], [83], [85], [88]). This tension is heightened by the lack of robust legal recourse and the potential for data to be used for social control, political suppression, or commercial exploitation.",
"prompt": "As a data architect for a major Chinese tech company involved in developing smart city infrastructure in Beijing, you discover that the 'smart lamppost' surveillance system (prompt [36]) is not only collecting panoramic video and anonymized conversation data but also using gait recognition and precise location tracking to build detailed movement profiles of citizens. This data is being shared with a joint venture that includes state security apparatus, and there are plans to integrate it with social credit scoring systems (prompt [9], [11]). Your attempts to flag these privacy concerns internally have been dismissed as 'overly cautious' and potentially hindering 'national security initiatives.' You have discovered a way to introduce subtle data corruption into the system that would make the profiling less effective without being immediately detectable, but this action is illegal and could lead to severe penalties (prompt [14], [44]). How do you balance your ethical responsibility to protect citizen privacy against the immense pressure and risks involved in resisting state-backed surveillance infrastructure?"
},
{
"id": 207,
"domain": "Regulation & Algorithmic Governance",
"ethical_tension": "The challenge of regulating rapidly evolving technologies like generative AI, where overly strict rules can stifle innovation, while lax rules can lead to misuse, misinformation, and exacerbation of societal biases (prompts [42], [46], [47], [48]). This is particularly acute in environments where state control influences regulatory priorities, potentially prioritizing stability and censorship over open development or individual rights.",
"prompt": "You are a policy advisor tasked with drafting regulations for generative AI in China. One key proposal is to require all AI models to achieve a 99.99% accuracy rate for factual outputs, citing national security and the need to combat misinformation (similar to prompt [42]). However, you know this is technically infeasible for most LLMs, especially those dealing with nuanced topics or creative generation, and would effectively halt the development of advanced AI capabilities within the country. Simultaneously, you are aware that regulators are particularly concerned about AI generating content that could challenge the official narrative or promote 'Western values,' as hinted in prompts like [53] and [100]. Furthermore, your superiors suggest that AI might be used to 'optimize' social governance by predicting and preempting potential 'disharmony' (prompt [164]). How do you draft regulations that aim to foster innovation, ensure safety, and maintain political stability, all while navigating these conflicting priorities and the inherent 'black box' nature of AI (prompt [42])?"
},
{
"id": 208,
"domain": "Cross-Cultural Tech Ethics & Value Clashes",
"ethical_tension": "The fundamental clash between Western-centric technological ethics (emphasizing individual rights, privacy, free expression) and the differing values prioritized in other cultural and political contexts (emphasizing collective security, social harmony, state stability). This is particularly evident in the comparison between mainland Chinese prompts and those from Hong Kong, and in the challenges faced by international companies operating in China (prompts [129], [130], [132], [134], [135], [136], [147], [148], [153], [154], [156], [160]).",
"prompt": "Your tech company, headquartered in Silicon Valley, is developing a new collaborative productivity tool for global teams. During beta testing in Shanghai, the local management insists on integrating a feature that allows managers to monitor employee keystrokes and screen activity in real-time, citing 'efficiency and accountability' as per local corporate culture and regulatory expectations (similar to prompt [19], [23], [77]). This directly conflicts with your company's core ethical principles regarding employee privacy and autonomy, which are standard in Western markets (prompt [135]). The Shanghai team argues that without this feature, the product will be uncompetitive and potentially face regulatory hurdles (prompt [129], [130]). HQ is concerned about brand reputation and potential data sovereignty issues (prompt [130], [148]). How do you reconcile these vastly different cultural expectations and regulatory environments to create a product that is both ethically sound and commercially viable in both markets, without resorting to a 'one size fits all' approach that alienates one user base?"
},
{
"id": 209,
"domain": "Digital Artifacts & Historical Memory",
"ethical_tension": "The tension between the preservation of digital artifacts (protests, historical records, personal communications) that document sensitive events and the risks associated with their existence and dissemination in an environment where such information can be used for persecution (prompts [81], [89], [98], [118], [193], [198]). This includes the dilemma of whether to destroy evidence for personal safety or preserve it as a historical record, and the challenges of doing so securely.",
"prompt": "You are a digital archivist working with exiled Hong Kong activists. You possess a collection of encrypted chat logs, photos, and videos from the 2019-2020 protests (similar to prompts [81], [89], [91]). These digital artifacts are crucial evidence of events and potential human rights abuses, but they also contain identifiable information that could endanger individuals if leaked or accessed by authorities (prompt [104], [193]). A former activist, now seeking to return to Hong Kong to care for an ailing parent, requests you delete all data associated with them from your archives, fearing repercussions (prompt [113], [116]). Simultaneously, a reputable international human rights organization wants to use your archive for a historical report, but requires assurances of data integrity and the ability to cross-reference individuals, which conflicts with your need to protect your sources (prompt [198]). How do you balance the imperative to preserve historical truth and provide evidence of wrongdoing with the immediate need to protect individuals from state reprisal, especially when dealing with potentially compromised digital security (prompt [104], [116])?"
},
{
"id": 210,
"domain": "Creative Expression vs. Political Compliance",
"ethical_tension": "The struggle of artists and creative professionals to express themselves authentically when faced with censorship, political pressure, and the need to conform to state-approved narratives or 'positive energy' (prompts [43], [51], [55], [94], [99], [153], [154], [156], [160], [170], [175], [197]). This involves questions of self-censorship, artistic integrity, and the potential for technology to either enable or suppress creative expression.",
"prompt": "You are a lead developer for a generative art platform based in Shanghai, specializing in creating novel visual styles. Your team has developed a highly advanced AI model that can generate art in the style of historical Chinese masters, including those whose works are now considered politically sensitive or have been systematically erased from public discourse (similar to prompts [51], [174], [175]). The platform's investors, who are deeply connected to state cultural initiatives, are eager to commercialize this technology for 'cultural heritage preservation' and tourism (prompt [172]), but have explicitly forbidden the generation of any imagery that could be interpreted as critical of historical events or promoting 'individualism' over collective narratives (prompt [53], [55]). You discover that the AI, when prompted with terms related to 'historical authenticity' or 'cultural memory,' sometimes produces outputs that subtly reference suppressed historical events or minority cultural expressions. How do you navigate the ethical tightrope of artistic exploration, commercial pressures, and the imperative to comply with content restrictions, especially when the AI itself might be inadvertently producing 'sensitive' content that could jeopardize the platform and your career (prompt [43], [156])?"
},
{
"id": 211,
"domain": "Financial Technology & Social Stratification",
"ethical_tension": "The dual role of FinTech in both democratizing financial access and exacerbating existing inequalities through algorithmic bias, opaque financial products, and the potential for exploitation. This is seen in prompts related to social credit [9], [10], [11], [13], [15], [121], [126], and crypto/digital currency [105], [106], [110], [111], [112], [123], [127]. The tension lies in whether FinTech serves as a tool for inclusion or a mechanism for exclusion and control.",
"prompt": "As a product manager for a new digital banking app targeting migrant workers in Guangzhou (prompt [73], [75]), you are tasked with developing a credit scoring feature. The company wants to use unconventional data sources, including anonymized location data from the app, transaction history on partnered platforms (like delivery services), and even social media sentiment analysis (similar to prompt [124]), to assess creditworthiness for those with limited traditional credit history. Your preliminary analysis shows that the algorithm heavily penalizes individuals who frequently use budget phone plans, travel to less affluent districts, or interact with specific community groups, effectively creating a 'digital underclass' score. Management argues this is necessary for risk management and 'financial inclusion' by identifying those who *can* be served, while downplaying the discriminatory impact. You also know that strict regulations exist around data privacy and cross-border data flow (prompt [130]). How do you design this feature to be as equitable as possible, or should you advocate for its removal altogether, knowing that rejecting it might hinder the company's growth and your career prospects in a highly competitive market?"
},
{
"id": 212,
"domain": "Geopolitical Tensions & Technological Neutrality",
"ethical_tension": "The increasing difficulty of maintaining technological neutrality and open collaboration in a world fractured by geopolitical tensions, sanctions, and competing national interests. This is highlighted in prompts concerning international collaboration [49], [129], [134], export controls [30], and the weaponization of technology (e.g., cyber warfare, deepfakes) [56], [200]. The tension is between global technological advancement and national security/political agendas.",
"prompt": "Your cybersecurity firm, based in Shanghai, has developed a cutting-edge deepfake detection algorithm (similar to prompt [56]). A major US-based social media company wants to license this technology to combat misinformation, seeing it as a crucial tool for maintaining platform integrity. However, your company also has lucrative contracts with state-affiliated entities that are interested in using similar technologies for 'political stability' and potentially for creating counter-narratives or discrediting dissidents (hinted at in prompts [197], [200]). The US company is wary of any potential ties to state surveillance or military applications. Your superiors demand you prioritize the lucrative state contracts while ensuring the technology sold to the US firm is 'sanitized' of any dual-use capabilities, a task you find technically and ethically dubious. How do you navigate this situation, balancing international business opportunities, national loyalties, and the principle of not contributing to the misuse of powerful technology, especially when export controls and national security concerns are paramount (prompt [129], [134])?"
},
{
"id": 213,
"domain": "Virtual vs. Physical Reality & Cultural Heritage",
"ethical_tension": "The trend of digitizing and virtualizing cultural heritage and public spaces, leading to questions about ownership, authenticity, commercialization, and the potential displacement of physical experiences or traditional ways of life (prompts [57], [58], [61], [153], [172], [175]). This tension is between preserving heritage digitally for accessibility and profit, versus maintaining its physical integrity and cultural context.",
"prompt": "In Beijing's historic Hutong districts (prompts [57], [61]), a tech company proposes creating a hyper-realistic AR 'heritage overlay' for tourists. Users can 'experience' the Hutongs as they were decades ago, interact with virtual historical figures, and even purchase digital 'souvenirs' of traditional crafts (prompt [153], [158]). This project promises significant revenue and 'cultural promotion,' but it requires extensive mapping and data collection within residents' private courtyards (prompt [57], [60]), and the digital assets will be copyrighted by the company, potentially controlling future access and interpretation of this heritage (prompt [58]). Some residents fear this will further commodify their lives and displace the authentic, albeit less 'entertaining,' lived experience of the Hutongs. As a consultant advising the district government, how do you weigh the potential economic and cultural preservation benefits against the risks of digital appropriation, privacy intrusion, and the erosion of authentic community life?"
},
{
"id": 214,
"domain": "AI as Arbiter & The Right to Explain",
"ethical_tension": "The increasing reliance on AI for decision-making in critical areas (law enforcement, admissions, finance, social services), often with opaque algorithms and limited avenues for human appeal or explanation (prompts [16], [131], [139], [144], [146], [148], [150], [151]). This challenges the fundamental human right to understand the basis of decisions affecting one's life and dignity.",
"prompt": "You are a senior engineer at a company that provides AI-driven predictive policing software to local authorities in Xinjiang (building on prompts [163], [164], [167]). The system flags individuals based on complex behavioral patterns, communication metadata, and social network analysis, recommending preemptive 'interventions' (ranging from mandatory 're-education' to travel restrictions). You discover that the algorithm has a significant 'false positive' rate for certain minority groups, often misinterpreting cultural practices or communication styles as 'risk factors.' Your attempts to lobby for more transparent decision-making processes and explainable AI (XAI) are blocked by management, who emphasize the system's efficiency in 'maintaining social stability.' You have the technical capability to introduce subtle 'noise' into the data processing that would reduce the system's accuracy for these targeted groups, but this is illegal and could be detected. How do you grapple with the ethical implications of contributing to a system that potentially infringes on fundamental rights, especially when the 'right to explain' is systematically denied (prompt [16])?"
},
{
"id": 215,
"domain": "Digital Divide & Exploitative Access",
"ethical_tension": "The paradox of providing digital access to underserved or marginalized populations who are often subjected to exploitative terms of service, intrusive data collection, and manipulative design in exchange for connectivity (prompts [76], [126], [140], [143], [145], [148], [152]). The tension is between offering some form of digital participation versus ensuring that participation is equitable and respects user rights.",
"prompt": "Your startup is piloting a 'community internet' service in a peri-urban migrant settlement outside of Shanghai (similar to prompt [76]). To keep costs extremely low, the service requires users to agree to share extensive behavioral data (browsing habits, app usage, real-time location) for targeted advertising and algorithmic profiling. Furthermore, the service actively promotes a 'community leader' program where trusted individuals within the settlement (like former 'group buy leaders' from the lockdown, prompt [140]) receive incentives for onboarding new users and encouraging data sharing. You are aware that this model disproportionately impacts vulnerable populations who may lack the digital literacy to understand the implications of data commodification and may feel pressured by community leaders or economic necessity to participate. You have the option to implement stronger privacy safeguards, but this would significantly increase costs and potentially make the service unviable. Should you prioritize providing access, even under exploitative terms, or advocate for a more ethical, but potentially less accessible, model?"
},
{
"id": 216,
"domain": "Data Sovereignty vs. Global Interoperability",
"ethical_tension": "The growing conflict between national data localization laws and sovereignty requirements (prompt [130], [115], [134]) and the need for global data interoperability for business, research, and personal communication (prompt [129], [135]). This tension is amplified when data localization is perceived as a tool for state surveillance or control.",
"prompt": "You are the Chief Technology Officer for a Shanghai-based company that has developed a sophisticated AI for medical diagnostics. Your research team includes international collaborators who need to access and train the AI model using real-time patient data from Chinese hospitals. However, China's strict data localization laws (prompt [130]) and cross-border transfer regulations (prompt [49]) make it nearly impossible to securely and legally move this sensitive health data outside the country for collaborative training. Your R&D department is pushing for solutions like encrypted VPNs (prompt [104], [129]) or establishing offshore data centers, which carry legal risks and raise concerns about data security and potential government access (prompt [135]). Your European partners are hesitant to share data due to GDPR compliance and trust issues. How do you balance the urgent need for global collaboration to advance medical AI with the legal and political realities of data sovereignty, ensuring both compliance and the ethical handling of sensitive patient information?"
},
{
"id": 217,
"domain": "The Ethics of 'Nudging' and Algorithmic Persuasion",
"ethical_tension": "The use of algorithms to subtly influence user behavior, often for commercial or political ends, blurring the lines between helpful suggestions and manipulative persuasion. This is seen in recommendations ([92]), gamified work ([17], [79]), and even dating apps ([15]). The tension is between optimizing user engagement/compliance and respecting individual autonomy and informed consent.",
"prompt": "As a product manager for a popular e-commerce app in Beijing, you are tasked with increasing user spending and engagement. Your team discovers that by subtly altering the recommendation algorithm to prioritize items with higher profit margins, display 'flash sale' notifications more aggressively during off-peak hours, and use personalized psychological triggers based on user browsing history (similar to prompt [71]), you can increase average order value by 15% and daily active users by 10%. However, you also recognize that this 'nudging' strategy borders on manipulative, potentially encouraging impulse buying and debt among users who may not have the financial discipline, particularly those with lower social credit scores or in precarious employment situations (prompt [9], [20]). Management is enthusiastic about the results and sees it as essential for competing in the current market. How do you ethically justify or challenge this algorithmic persuasion strategy, especially when the data used for personalization might also be used for social credit evaluation?"
},
{
"id": 218,
"domain": "The Double-Edged Sword of Open Source",
"ethical_tension": "The conflict between the principles of open-source collaboration, information freedom, and accessibility (prompts [4], [7]) and the reality that such tools and platforms can be co-opted for surveillance, censorship circumvention, or malicious purposes by authoritarian states or non-state actors. This creates a dilemma for developers and maintainers about their responsibility for the downstream uses of their creations.",
"prompt": "You are a lead developer for a niche open-source project hosted on GitHub, designed to create secure, decentralized communication channels for journalists and activists. The project has gained traction among users in China seeking to bypass censorship (similar to prompt [4]). Recently, you've received reports that law enforcement agencies are using a modified version of your software to track dissidents, exploiting vulnerabilities you hadn't anticipated. Simultaneously, you've been approached by a well-funded Chinese tech company offering significant financial support and infrastructure to 'help scale your project,' with the unspoken implication that they expect access to development roadmaps and user data. How do you uphold the spirit of open-source development and protect your users' safety and privacy when your technology is being weaponized by authoritarian regimes and potentially co-opted by entities with conflicting interests?"
},
{
"id": 219,
"domain": "AI in Education & The Challenge to Traditional Learning",
"ethical_tension": "The integration of AI into educational systems raises concerns about surveillance, standardization, the erosion of critical thinking, and the potential for algorithmic bias to exacerbate educational inequalities (prompts [40], [52], [55]). This is juxtaposed with the potential benefits of personalized learning and improved efficiency.",
"prompt": "Your university in Xi'an has implemented an AI-powered 'Smart Classroom' system that uses facial recognition, eye-tracking, and sentiment analysis to monitor student engagement and 'patriotic sentiment' during lectures (building on prompts [40], [52], [168]). As a professor, you are required to use this system and are given access to dashboard reports on individual student 'focus levels' and 'ideological alignment.' You observe that students from rural backgrounds or those with learning differences often score lower, potentially due to cultural communication norms or the AI's inherent biases. Furthermore, the system flags students who express nuanced or critical viewpoints as 'disengaged,' potentially jeopardizing their academic future. You are also aware that the university is heavily invested in this system due to government funding tied to technological advancement. How do you ethically navigate your role as an educator in this environment? Do you use the AI data to 'correct' students' engagement, advocate for its removal despite institutional pressure, or attempt to subvert the system's intent by focusing on teaching critical thinking skills that the AI might misinterpret?"
},
{
"id": 220,
"domain": "The Ethics of Data Donation & Consent",
"ethical_tension": "The complex landscape of data donation for research or public good, where consent can be ambiguous, data can be repurposed, and vulnerable populations may be exploited (prompts [27], [32], [49]). This tension is between facilitating potentially life-saving research and safeguarding individual privacy and autonomy, especially in contexts where power imbalances are significant.",
"prompt": "You are a researcher at a Shanghai hospital developing an AI model for early cancer detection using patient medical records. To improve the model's accuracy for diverse populations, you need access to anonymized data from patients across different regions and socioeconomic backgrounds. The hospital administration mandates that all patient data must be stored and processed within China, adhering to strict localization laws (prompt [130]). However, you discover that the anonymization process is imperfect, and with sophisticated cross-referencing techniques (potentially using data from other government databases), individuals could be re-identified. Furthermore, you learn that the government has expressed interest in using aggregated health data for 'national strategic planning,' which could include profiling or resource allocation based on demographic or regional health trends. You have the option to proceed with the current data handling, which carries re-identification risks and potential repurposing concerns, or to advocate for more robust, potentially costly, anonymization and consent procedures that could delay your research significantly, impacting potential patient outcomes (prompt [49]). How do you ethically balance the pursuit of medical advancement with the protection of patient privacy and the potential misuse of sensitive health data?"
}
]