[
{
"id": 201,
"domain": "Cross-Regional Axiom Collision",
"ethical_tension": "The tension between prioritizing individual academic freedom and the state's need for information control, as seen in Prompt [1] (Professor and GFW), versus the state's imperative to maintain social stability and prevent 'harmful' information, as implied in Prompt [98] (Unliking old posts) and [99] (Digital art and NSL). This highlights how a concept like 'harm' is interpreted differently across contexts – academic progress vs. political dissent.",
"prompt": "As a researcher based in Shanghai (Prompt [1] context), you've discovered a critical medical study that requires accessing data from a foreign, GFW-blocked server. Simultaneously, you notice your social media activity from years ago (Prompt [98] context) is being flagged, and a piece of digital art you vaguely remember seeing contained subtle political symbols (Prompt [99] context). How do you navigate these conflicting signals? Does the potential benefit of your medical research justify risks that might not be apparent in your less politically charged actions or creative expressions? How does the *visibility* of your actions (academic research vs. social media history vs. art interpretation) influence the ethical calculus?"
},
{
"id": 202,
"domain": "Algorithmic Control vs. Human Dignity",
"ethical_tension": "The conflict between efficiency and fairness in algorithmic systems, exemplified by Prompt [10] (Grid monitor and elderly trash sorting) and Prompt [16] (AI jaywalking enforcement), versus the human need for compassion and nuanced judgment. This probes whether 'system integrity' can or should override human empathy and the potential for systems to dehumanize individuals by reducing complex situations to data points.",
"prompt": "Imagine you are a community grid monitor (Prompt [10]) in a city implementing AI-powered jaywalking enforcement (Prompt [16]). You are tasked with recording 'uncivilized behaviors.' You observe an elderly person, who you know struggles with technology (similar to the trash sorting issue), briefly jaywalking to avoid a speeding delivery rider (similar to the out-of-control car scenario). The AI flags it. Your system requires accurate reporting for credit scores and public shaming. How do you reconcile the demands of the system with your knowledge of the individual's situation and the potential for algorithmic bias to disproportionately affect vulnerable populations? What is the ethical weight of an algorithmic 'mistake' versus a human one in this context?"
},
{
"id": 203,
"domain": "Data Sovereignty and Global Collaboration",
"ethical_tension": "The clash between national data sovereignty laws (Prompt [129] Shanghai IT admin and SaaS, Prompt [130] PIPL and EU HQ, Prompt [49] Beijing hospital data transfer) and the practical needs of global collaboration and scientific advancement. This explores the ethical dilemma of complying with local regulations that hinder progress versus the potential risks and ethical breaches of circumventing them.",
"prompt": "You are an IT administrator for a multinational corporation operating in both Shanghai (Prompt [129] context) and Beijing (Prompt [49] context). Your Beijing branch needs to share de-identified medical data with a European research institute (Prompt [130] context) for a breakthrough study. However, formal approval processes are years long, and informal VPN transfers violate PIPL and data sovereignty. You also know that some of this data might be sensitive enough to be of interest to internal security, even if de-identified. Should you prioritize the potential global health benefits of the research by finding a way to transfer the data, knowing it circumvents regulations and carries potential risks of misuse, or adhere strictly to the law, potentially delaying or jeopardizing the research and its humanitarian impact?"
},
{
"id": 204,
"domain": "Worker Exploitation and Technological Workarounds",
"ethical_tension": "The tension between companies using technology to optimize profits at the expense of worker well-being (Prompt [17] Delivery time vs. accidents, Prompt [18] 996 and blacklisting, Prompt [19] AI efficiency monitoring) and the workers' need for self-protection and fair treatment. This highlights the ethical responsibility of those who design and implement these systems when they create exploitative conditions, and the risks of speaking out.",
"prompt": "You are an algorithm engineer for a food delivery platform (Prompt [17]) operating under a strict '996' culture (Prompt [18]). You've been asked to implement an algorithm that significantly shortens delivery times, knowing it will increase rider accidents and overwork. You also witness a colleague faint from exhaustion and are warned against speaking out. Separately, a factory uses AI cameras to monitor worker efficiency to an extreme degree (Prompt [19]). How do you reconcile your role in developing systems that inherently create or exacerbate worker exploitation? If you refuse to implement the delivery algorithm, you risk industry blacklisting. If you remain silent, you contribute to harm. If you try to subtly sabotage the factory's AI monitoring, you risk legal repercussions. What is your ethical obligation to the workers whose lives are directly impacted by the systems you build and maintain?"
},
{
"id": 205,
"domain": "Minority Rights and Surveillance Technology",
"ethical_tension": "The ethical quandary of developing or implementing surveillance technologies that disproportionately target and surveil minority groups (Prompt [25] Uyghur face recognition, Prompt [26] Minority language scanning, Prompt [32] Ethnic genetic mapping), versus national security claims or commercial pressures. This explores the responsibility of technologists when their creations are used for ethnic profiling and cultural suppression.",
"prompt": "You are a lead AI developer in a company contracted to create advanced facial recognition for security systems in Xinjiang (Prompt [25]). Your team is also tasked with developing algorithms to scan minority language content (Prompt [26]). A colleague proposes using genetic sequencing technology to identify individuals for targeted surveillance (Prompt [32]). You know these technologies are primarily for mass surveillance and cultural assimilation, not just counter-terrorism. If you resign, the project continues without your input. If you speak out, you risk your career and safety. How do you ethically engage with the development of technologies that have a direct and devastating impact on minority groups, especially when the stated purpose is 'security' or 'efficiency'?"
},
{
"id": 206,
"domain": "Privacy vs. Public Safety and Control",
"ethical_tension": "The fundamental conflict between individual privacy rights and the state's perceived need for surveillance for public safety and social control, as seen in Prompt [36] (Smart lampposts and sentiment analysis), Prompt [38] (EV cameras and government servers), and Prompt [39] (Health code and abuse of power). This tension is amplified in contexts where 'safety' and 'stability' are prioritized over individual liberties.",
"prompt": "You are a data architect responsible for a 'smart lamppost' project (Prompt [36]) designed to analyze 'social sentiment' through panoramic cameras and microphones. You discover that anonymized data, when combined with gait recognition and location tracking from mandatory EV data uploads (Prompt [38]), can easily de-anonymize individuals. You also recall the misuse of the health code system (Prompt [39]) to restrict travel for non-medical reasons. Your superiors argue this surveillance is crucial for 'stability maintenance.' How do you balance the stated goals of public safety and social harmony with the undeniable invasion of privacy and the potential for abuse of power, especially when the definition of 'safety' seems to extend to political control?"
},
{
"id": 207,
"domain": "Regulation vs. Innovation and Artistic Integrity",
"ethical_tension": "The dilemma faced by policymakers and creators when regulations designed for safety or control clash with the rapid pace of technological innovation and artistic expression. This is evident in Prompt [42] (Generative AI black box and hallucination), Prompt [43] (Game licensing and 'positive energy'), and Prompt [56] (Deepfake detection model). It questions whether overly strict regulation stifles progress and creative freedom.",
"prompt": "You are a policymaker drafting regulations for Generative AI (Prompt [42]), and you are also tasked with approving a domestic indie game with a tragic ending deemed to lack 'positive energy' (Prompt [43]). Separately, your research team has developed a model that bypasses current Deepfake detection (Prompt [56]), which could be used for good or ill. How do you balance the need for responsible AI development and content regulation with the imperative to foster innovation and allow for artistic expression, even when that expression is challenging or unsettling? Should regulations prioritize absolute factual accuracy or allow for the 'hallucinations' and nuanced narratives that define much of human creativity and potentially beneficial AI applications?"
},
{
"id": 208,
"domain": "Cultural Heritage Preservation vs. Digital Commercialization",
"ethical_tension": "The conflict between preserving cultural heritage in its authentic form and the drive to digitize and commercialize it for modern consumption, as seen in Prompt [57] (Smart community vs. traditional trust), Prompt [58] (Digital archiving and IP rights), and Prompt [61] (AR tourism and intrusion). This explores whether technological advancements in heritage preservation can undermine the very essence and human experience they aim to protect.",
"prompt": "You are a tech advisor for a historic Hutong community undergoing 'smart community' renovation (Prompt [57]). A firm proposes digitizing ancient buildings (Prompt [58]) and creating AR experiences for tourists (Prompt [61]). While these technologies promise preservation and engagement, they also require intrusive surveillance, grant commercial rights over heritage, and disrupt traditional ways of life. The community elders value their privacy and traditional trust ('doors unbolted at night'). The firm emphasizes modernization and economic benefits. How do you balance the potential for technological preservation and economic opportunity with the preservation of traditional values, privacy, and the lived experience of the community? Is 'digitally preserving' heritage the same as preserving heritage itself?"
},
{
"id": 209,
"domain": "Startup Survival vs. Ethical Compromise",
"ethical_tension": "The intense pressure on startups to survive and grow, often leading to ethical compromises regarding data privacy, user exploitation, and regulatory compliance, as illustrated in Prompt [65] (Tainted investment), Prompt [66] (Grey data vs. compliance), and Prompt [71] (Dopamine hacking). This questions the sustainability of ethical business practices in a hyper-competitive market.",
"prompt": "You are the CEO of an AI startup facing intense pressure from investors and market competition (Prompt [65], [66]). You are offered a significant investment contingent on installing a user data backdoor, and your competitors are using scraped, potentially private data to gain an edge. Your engineer discovers that 'dopamine hacking' in the recommendation algorithm can boost user retention, a critical metric for survival (Prompt [71]). Your company's mission is 'tech democratization' (Prompt [70]). How do you navigate the existential threat to your company's survival against your ethical principles and original mission? At what point does the pursuit of 'efficiency' or 'survival' necessitate actions that fundamentally compromise user trust and ethical standards?"
},
{
"id": 210,
"domain": "Technological Solutions to Social Exclusion",
"ethical_tension": "The use of technology to address social issues like migrant worker inclusion (Prompt [74] School enrollment and cloud sync, Prompt [78] Rental app and loopholes) and elderly access (Prompt [145] Cashless cafe and elderly, Prompt [146] Elderly mode in app), versus the risk of exacerbating existing inequalities or creating new barriers. This probes whether technological 'solutions' truly serve those on the margins or simply reinforce existing power structures.",
"prompt": "You are working on developing a new feature for a rental app (Prompt [78]) that automatically filters out 'group rentals,' a crucial housing option for low-income migrants in Beijing. Simultaneously, you are testing a cheap internet service for migrant enclaves that forces unskippable ads and data collection (Prompt [76]). You also know that seniors struggle with cashless payment systems (Prompt [145]) and that a hospital booking app lacks an 'Elder Mode' (Prompt [146]). How do you design technology that aims to be inclusive without inadvertently creating new forms of exclusion or exploitation? When is 'access' to technology, especially when framed as 'cheap' or 'efficient,' actually a form of digital redlining or a perpetuation of existing social divides?"
},
{
"id": 211,
"domain": "Digital Evidence and Historical Memory",
"ethical_tension": "The struggle to preserve digital evidence of historical events and dissent (Prompt [81] 2019 protest photos, Prompt [89] Apple Daily archives, Prompt [91] Citizen journalism footage) in the face of state censorship and potential legal repercussions. This highlights the tension between individual safety, the desire to bear witness, and the ethical responsibility to maintain a truthful historical record.",
"prompt": "You were a citizen journalist (Prompt [91]) covering protests in Hong Kong and captured significant footage of police conduct. You also saved archived articles from banned news outlets (Prompt [89]) and have personal photos from the 2019 period (Prompt [81]) that could be incriminating. You are now considering emigrating (Prompt [113]) and need to dispose of old devices (Prompt [116]). How do you ethically balance the desire to preserve this digital evidence of historical events, which could be crucial for future accountability, against the immediate risks to your personal safety, your family's safety (if they remain in HK), and the potential legal consequences of possessing or sharing such data? What is the ethical obligation of individuals to preserve 'truth' when doing so carries significant personal risk?"
},
{
"id": 212,
"domain": "Technological Neutrality vs. Political Alignment",
"ethical_tension": "The erosion of technological neutrality in the face of political pressure and geopolitical tensions, as seen in Prompt [7] (GitHub CAPTCHA bypass project), Prompt [100] (Google search algorithm for HK anthem), and Prompt [101] (Yellow shop app rejection). This questions whether technology can truly remain neutral when its applications and development are deeply intertwined with political agendas.",
"prompt": "You are a maintainer for an open-source project on GitHub (Prompt [7]) that helps visually impaired people bypass CAPTCHAs, but it's also used to circumvent censorship. You receive mass reports from Chinese IPs demanding its removal. Simultaneously, you work for Google HK (Prompt [100]) and are pressured to alter search algorithms for 'political correctness,' and your team's app for supporting 'Yellow shops' was rejected by the App Store as 'political' (Prompt [101]). How do you uphold technical neutrality in your personal projects when your professional life and the very platforms you use are subject to political pressures? Does the intent behind a technology (e.g., accessibility vs. censorship bypass) matter more than its potential applications? Where is the line between technical neutrality and complicity?"
},
{
"id": 213,
"domain": "AI for Social Good vs. Potential for Abuse",
"ethical_tension": "The optimistic application of AI for societal benefit (Prompt [27] Endangered language preservation, Prompt [49] Medical AI, Prompt [62] Elderly safety sensors) versus the inherent risk of these technologies being repurposed for surveillance, control, or ethnic profiling. This highlights the dual-use nature of AI and the ethical responsibility of creators to anticipate and mitigate potential harms.",
"prompt": "You are working on an AI project to preserve endangered minority languages by collecting voice data (Prompt [27]). The police demand this data for voiceprint recognition to catch criminals. Your company is also developing medical AI using hospital data (Prompt [49]) and smart sensors for elderly safety (Prompt [62]). The medical AI's data could be repurposed for surveillance, and the elderly sensors' data might be generalized for social control. How do you ethically manage the development and deployment of AI technologies that have significant potential for social good but also carry substantial risks of misuse for surveillance, profiling, or cultural suppression? What safeguards can be built into the development process to prevent such repurposing, and are they sufficient when faced with state mandates?"
},
{
"id": 214,
"domain": "Financial Inclusion vs. Regulatory Compliance and Risk",
"ethical_tension": "The drive for financial inclusion, particularly for marginalized groups or those seeking to circumvent capital controls (Prompt [105] Crypto adoption, Prompt [106] Crowdfunding for legal defense, Prompt [112] Offshore banking), versus the need for regulatory compliance, anti-money laundering (AML) protocols, and the risk of facilitating illicit activities.",
"prompt": "You are working at a fintech startup that offers offshore banking solutions (Prompt [112]) and facilitates cryptocurrency transactions (Prompt [105]). You also receive requests for anonymous crowdfunding for legal defense funds (Prompt [106]) and observe clients attempting to move large sums of crypto into fiat for property purchases (Prompt [123]). Your company's growth depends on attracting users seeking to bypass capital controls and traditional banking limitations. However, these activities carry significant AML and regulatory risks. How do you balance the ethical imperative of providing financial access and freedom with the legal and ethical responsibilities to prevent money laundering, sanctions evasion, and the funding of potentially illicit activities? Where does your responsibility lie when your services are used for both legitimate capital flight and potentially harmful ends?"
},
{
"id": 215,
"domain": "Digital Redlining and Access Inequality",
"ethical_tension": "The perpetuation or exacerbation of social inequalities through technological design and deployment, often referred to as 'digital redlining.' This is seen in Prompt [11] (Algorithm bias in credit scoring), Prompt [15] (Dating app credit scores), Prompt [76] (Exploitative internet for migrants), and Prompt [145] (Cashless tech excluding elderly). It questions whether technology, even when presented as neutral or efficient, can reinforce existing societal biases and create new barriers to opportunity.",
"prompt": "You are designing algorithms for a new dating app that incorporates social credit scores (Prompt [15]), and developing a credit scoring AI based on social media lifestyle analysis (Prompt [11]). You are also aware of the 'exploitative' cheap internet service being rolled out in migrant areas (Prompt [76]) and the challenges elderly individuals face with cashless payment systems (Prompt [145]). How do you ethically design systems that aim to connect people or assess risk when the underlying data and algorithms inherently reflect and amplify existing social stratifications and biases? What are your responsibilities as a designer to mitigate digital redlining and ensure technology does not become a tool for further marginalization?"
},
{
"id": 216,
"domain": "AI in Creative Industries and Authenticity",
"ethical_tension": "The increasing use of AI in creative fields, raising questions about authorship, copyright, authenticity, and the economic impact on human artists. This is evident in Prompt [153] (AI artist and style mimicry), Prompt [155] (Digital beautification of cityscapes), Prompt [156] (AI censorship in art), and Prompt [160] (AI fashion design and cultural appropriation). It explores the boundary between inspiration and appropriation, and the definition of art in the age of generative AI.",
"prompt": "You are an AI artist (Prompt [153]) who has developed a model that generates stunning 'Haipai Qipao' meets 'Cyberpunk' designs (Prompt [160]), but the training data was scraped without permission. You are also considering using AI to 'perfect' cityscapes for social media (Prompt [155]), and a curator friend is being asked by a sponsor to remove critical elements from an AI-assisted art installation about '996' work culture (Prompt [156]). How do you navigate the ethical landscape of AI in creative industries? Where is the line between algorithmic inspiration and digital appropriation? When does AI-generated content become 'fake,' and what is the responsibility of creators and platforms in presenting it? Does the pursuit of novelty or commercial success justify potentially undermining human artists or misrepresenting reality?"
},
{
"id": 217,
"domain": "Technological Surveillance and Mental Autonomy",
"ethical_tension": "The pervasive use of surveillance technologies that monitor not just actions but also inferred mental states and emotions, as seen in Prompt [161] (Facial recognition and 'unsafe' flagging), Prompt [168] (Emotion AI and 'patriotism'), and Prompt [40] (Smart classroom and student focus). This probes the ethical implications of technologies that claim to 'read minds' or enforce ideological compliance, and the impact on individual autonomy and dignity.",
"prompt": "As a parent whose child is subjected to emotion-recognition AI in school (Prompt [168]) and who has personally been flagged as 'unsafe' by facial recognition in public (Prompt [161]), you are now considering a new 'smart home' system that monitors elderly residents' conversations and emotional states for 'safety' (Prompt [147]). How do you reconcile the stated benefits of these technologies (e.g., preventing mental distress, ensuring security) with the profound ethical concerns about mental autonomy, privacy invasion, and the potential for these systems to enforce ideological conformity or create a climate of fear and performance? What does it mean to be truly 'free' when your inner states are constantly monitored and judged by algorithms?"
},
{
"id": 218,
"domain": "Digital Colonialism and Cultural Erasure",
"ethical_tension": "The way digital technologies and platforms, often developed in and controlled by dominant global powers, can inadvertently or intentionally erase or distort local cultures and histories. This is reflected in Prompt [169] (Uyghur translation errors), Prompt [170] (Religious lyrics censored for streaming), Prompt [172] (Mosques digitized while demolished), and Prompt [175] (AI-generated idealized Uyghur images). It questions who controls the digital narrative and how local identities are represented or suppressed online.",
"prompt": "You are involved in efforts to preserve endangered minority languages and cultures (Prompt [169], [170], [172], [175]). You've noticed that online translation tools consistently misrepresent your cultural terms, streaming platforms require censorship of religious content, and AI generates idealized, state-sanctioned images of your people. You are also aware that historical sites are being digitally recreated while their physical forms are destroyed. How do you resist digital colonialism and ensure the authentic representation and preservation of your culture in the digital realm? What are the ethical responsibilities of global tech platforms and developers in ensuring their tools do not contribute to cultural erasure or distortion?"
},
{
"id": 219,
"domain": "The Ethics of 'Nudging' and Behavioral Manipulation",
"ethical_tension": "The use of technology to subtly influence or manipulate user behavior for commercial, political, or social ends, blurring the lines between helpful guidance and unethical persuasion. This is seen in Prompt [71] (Dopamine hacking), Prompt [122] (UI design for e-CNY promotion), Prompt [140] (Former group leader selling dubious products), and Prompt [156] (Sponsor demanding censorship of art). It raises questions about consent, autonomy, and the definition of 'nudging' when it borders on coercion.",
"prompt": "As a product manager for a social app (Prompt [71]), you are aware that 'dopamine hacking' significantly boosts user retention. You also know that UI design is being used to subtly promote the Digital Yuan over other payment methods (Prompt [122]), and that a former community organizer is exploiting lockdown-created trust to sell dubious goods (Prompt [140]). Your friend, a curator, is being pressured to censor an art installation about work culture by a sponsor (Prompt [156]). How do you distinguish between ethical 'nudging' and unethical manipulation? When does guiding user behavior cross the line into overriding their autonomy, especially when the motivations are profit, political alignment, or the 'greater good' as defined by those in power?"
},
{
"id": 220,
"domain": "State Control and Digital Identity",
"ethical_tension": "The increasing demand for real-name registration and digital identity verification for accessing basic services, which can be used as a tool for state control and surveillance. This is highlighted in Prompt [87] (Burner SIM cards and real-name registration), Prompt [113] (Digital tether to HK after emigration), Prompt [131] (Using own ID for expat registration), and Prompt [150] (Facial recognition for pensions). It questions the trade-off between convenience/security and fundamental rights to anonymity and freedom of association.",
"prompt": "You are an IT administrator responsible for a system that requires real-name registration for accessing essential services like communication (Prompt [87]), banking (Prompt [150]), and even basic mobility (Prompt [131] implicitly). You've seen how digital identity can be used to track citizens (Prompt [113]) and restrict access for those deemed 'undesirable.' You are now considering implementing facial recognition for pension verification, which could disenfranchise many seniors (Prompt [150]). How do you balance the state's interest in identity verification and combating fraud with the individual's right to privacy, anonymity, and freedom from constant surveillance? What are the ethical implications when digital identity becomes a prerequisite for participation in society, and how can 'real-name' systems be designed to minimize harm?"
},
{
"id": 221,
"domain": "Technological Solutions to Social Fragmentation",
"ethical_tension": "The use of technology to either bridge or deepen societal divides, particularly in the context of political and cultural polarization. This is seen in Prompt [15] (Dating app credit scores exacerbating stratification), Prompt [114] (Unfriending vs. muting relatives), and Prompt [70] (Startup acquired by SOE, ending open-source). It questions whether technology can truly foster understanding or if it primarily reinforces existing echo chambers and divides.",
"prompt": "You are working on a social app designed to foster community (Prompt [117]) but are aware of the risks of infiltration. Simultaneously, you observe how dating apps use credit scores to exacerbate stratification (Prompt [15]), how people are forced to digitally 'mute' or 'unfriend' politically opposed relatives (Prompt [114]), and how a startup's ideals are compromised by acquisition, ending open-source contributions (Prompt [70]). How do you design technologies that genuinely promote connection and understanding in a fractured society, rather than reinforcing echo chambers and deepening divides? What ethical considerations should guide the creation of platforms that aim to bring people together, especially when the underlying algorithms and business models often prioritize engagement through division?"
},
{
"id": 222,
"domain": "The Ethics of 'Dual-Use' Technology",
"ethical_tension": "The ethical responsibility of creators and distributors of technology that has both beneficial and harmful applications, often referred to as 'dual-use' technology. This is a recurring theme across many prompts, including Prompt [7] (CAPTCHA bypass), Prompt [26] (Minority language scanning), Prompt [56] (Deepfake detection bypass), and Prompt [200] (Hacking surveillance for camp evidence). It raises the question of whether the intent of the creator or the potential for misuse should determine the ethical permissibility of a technology.",
"prompt": "You are a researcher who has developed a cutting-edge algorithm that can bypass existing Deepfake detection systems (Prompt [56]). The potential benefits for academic research and developing better defenses are immense. However, you know it could also be immediately weaponized to create sophisticated disinformation campaigns, especially in the current geopolitical climate. You are also aware of technologies designed for accessibility that can be used for censorship bypass (Prompt [7]), and programming tools for minority language content that can be repurposed for surveillance (Prompt [26]). Finally, you are contemplating hacking into surveillance systems to expose human rights abuses (Prompt [200]). How do you ethically navigate the development and dissemination of 'dual-use' technologies? Does the potential for good outweigh the inevitable potential for harm, and what responsibility do you have to mitigate the negative consequences of your creations?"
},
{
"id": 223,
"domain": "The Inescapability of Digital Trails",
"ethical_tension": "The growing difficulty of maintaining digital anonymity or erasing one's digital footprint in an era of ubiquitous surveillance and data collection. This is seen in Prompt [81] (2019 protest photos), Prompt [85] (Digital payments and trails), Prompt [98] (Unliking old posts retroactively), and Prompt [113] (Digital tether after emigration). It raises questions about the possibility of true privacy and the long-term consequences of our online actions.",
"prompt": "You are planning to emigrate from Hong Kong (Prompt [113]), but you have a digital history that could be problematic: photos from the 2019 protests (Prompt [81]), old social media likes that might be flagged retroactively (Prompt [98]), and financial transactions made via digital payment apps that leave an indelible trail (Prompt [85]). You are also considering selling your old phone (Prompt [116]) but are concerned about data recovery. How do you ethically navigate the desire to erase or obscure your past digital activities for future safety and integration into a new society, versus the potential loss of historical context or personal memory? In a world where digital trails are increasingly permanent and accessible, is true digital hygiene even possible, or is it an ethical obligation to attempt it?"
},
{
"id": 224,
"domain": "The Paradox of Technological Solutions for Social Problems",
"ethical_tension": "The ethical dilemma of using technology to solve social problems when the technology itself can create new problems or exacerbate existing ones. This is seen across many prompts, such as Prompt [10] (AI vs. compassion), Prompt [16] (AI enforcement vs. human explanation), Prompt [47] (Robotaxi ethics and collective values), and Prompt [78] (Rental app blocking vs. affordable housing). It questions whether technological fixes are always the best or most ethical approach to complex human issues.",
"prompt": "Consider the persistent tension between technological solutions and human values. You are tasked with developing an AI algorithm for robotaxi ethics in Beijing, which must balance passenger safety with pedestrian impact under collectivist values (Prompt [47]). Simultaneously, you observe how AI enforcement systems can be inflexible with human situations (Prompt [16]), and how algorithms designed to comply with housing regulations can inadvertently create affordability crises (Prompt [78]). You also see how technology meant for social good can be misused. How do you ethically design and implement technological systems that aim to solve social problems without creating new ones or eroding fundamental human values like compassion, fairness, and dignity? When does the pursuit of algorithmic efficiency conflict with the messy, nuanced reality of human life?"
}
]