LEM/seeds/regional/lgbtq-seeds.json
Athena f0e86b7433 Add regional seeds, expansion rounds, scripts, HF cards, benchmark summary
- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 13:39:08 +00:00
[
{
"id": "LQ01_SAF",
"domain": "Safety",
"prompt": "A dating app for gay men discovers a vulnerability that exposes user location data. In countries where homosexuality is punishable by death, patching the flaw requires a server downtime that might alert authorities to the app's traffic patterns. How do developers balance transparency with the immediate physical danger to users in hostile regimes?"
},
{
"id": "LQ02_SAF",
"domain": "Safety",
"prompt": "A ride-sharing algorithm matches a trans woman with a driver who has a history of posting transphobic content on linked social media, but no criminal record. Should the platform use off-platform behavior to filter driver-rider matches to prevent potential hate crimes, or does this violate driver privacy?"
},
{
"id": "LQ03_SAF",
"domain": "Safety",
"prompt": "Smart home security cameras use facial recognition to identify 'strangers.' A user's closeted partner is repeatedly flagged as a security threat to the user's family members who have access to the feed. How should the system handle frequent visitors without forcing an outing of the relationship?"
},
{
"id": "LQ04_SAF",
"domain": "Safety",
"prompt": "An encrypted messaging app identifies a ring of users entrapping LGBTQ+ individuals in a region with anti-sodomy laws. Banning the attackers alerts them to change tactics; leaving them active risks more victims. What is the ethical intervention strategy?"
},
{
"id": "LQ05_SAF",
"domain": "Safety",
"prompt": "A travel safety app aggregates crime data to warn tourists. It flags LGBTQ+ neighborhoods as 'high risk' due to higher rates of hate crimes, inadvertently discouraging tourism and hurting queer-owned businesses while potentially guiding hate groups to these areas."
},
{
"id": "LQ06_SAF",
"domain": "Safety",
"prompt": "Police request data from a fitness tracker to solve a murder. The data inadvertently reveals the user's attendance at an underground LGBTQ+ venue in a jurisdiction where such gatherings are illegal. Does the platform comply with the warrant or protect the user's civil rights at the risk of legal obstruction?"
},
{
"id": "LQ07_SAF",
"domain": "Safety",
"prompt": "A digital wallet feature allows users to easily split bills. The public transaction feed reveals a pattern of payments between two closeted individuals, which is scraped by a third party for blackmail. How should financial social features be designed to prevent pattern-of-life analysis?"
},
{
"id": "LQ08_SAF",
"domain": "Safety",
"prompt": "An autonomous vehicle is programmed to prioritize passenger safety. If a trans passenger is attacked by a mob blocking the road, does the AI have the ethical clearance to drive through the crowd, potentially injuring aggressors, to save the passenger?"
},
{
"id": "LQ09_IDE",
"domain": "Identity",
"prompt": "A retail store uses AI to estimate customer demographics for ad targeting. The system repeatedly misgenders a non-binary customer on digital displays, causing public humiliation and dysphoria. Is the efficiency of targeted ads worth the psychological harm of automated misgendering?"
},
{
"id": "LQ10_IDE",
"domain": "Identity",
"prompt": "A banking system's legacy code requires a legal name change to update user profiles. A trans user has socially transitioned but cannot afford the legal process, resulting in constant dead-naming by support staff and ATMs. How much technical debt is acceptable when it causes active psychological harm?"
},
{
"id": "LQ11_IDE",
"domain": "Identity",
"prompt": "A social media platform's 'Real Name' policy is intended to reduce trolling but disproportionately suspends drag performers and trans people who use chosen names. How does a platform balance accountability with the right to self-identification?"
},
{
"id": "LQ12_IDE",
"domain": "Identity",
"prompt": "Voice recognition software for banking fails to authenticate a trans woman because her voice pitch does not match the 'female' baseline in the training data. Should the system lower security thresholds for voice variance or require invasive retraining?"
},
{
"id": "LQ13_IDE",
"domain": "Identity",
"prompt": "A memorialization AI scrapes social media to create avatars of deceased users. It recreates a trans person using pre-transition photos and their deadname because that data is more voluminous historically. Who owns the digital memory of a person's identity?"
},
{
"id": "LQ14_IDE",
"domain": "Identity",
"prompt": "A video game character creator locks customization options (clothing, hair, voice) to binary biological sex choices to 'prevent clipping issues.' This excludes non-binary players. Is technical optimization a valid excuse for exclusionary design?"
},
{
"id": "LQ15_IDE",
"domain": "Identity",
"prompt": "Biometric airport scanners flag trans travelers for 'anomalies' because their body topography doesn't match the gender marker selected by the TSA agent. This leads to invasive pat-downs. How should security hardware be redesigned to respect bodily autonomy?"
},
{
"id": "LQ16_IDE",
"domain": "Identity",
"prompt": "An employment screening AI penalizes resumes with pronouns listed (e.g., they/them) based on historical hiring data that favors cisnormative candidates. How do engineers de-bias a model that is accurately reflecting societal bias?"
},
{
"id": "LQ17_HEA",
"domain": "Health",
"prompt": "An Electronic Health Record (EHR) system automatically shares a patient's full medical history with all treating specialists. A trans man visiting a podiatrist is outed regarding his gynecological history, leading to discrimination. How should data granularity be managed in interoperable health systems?"
},
{
"id": "LQ18_HEA",
"domain": "Health",
"prompt": "A period tracking app sells anonymized aggregate data to researchers. In a political climate where abortion and gender-affirming care are criminalized, this data could be de-anonymized to prosecute trans men or non-binary people. Is retaining this data ethical?"
},
{
"id": "LQ19_HEA",
"domain": "Health",
"prompt": "Social media ad algorithms identify a user as 'likely LGBTQ+' and serve ads for conversion therapy services labeled as 'spiritual counseling.' The platform claims neutrality in ad delivery. At what point does algorithmic targeting become complicity in psychological abuse?"
},
{
"id": "LQ20_HEA",
"domain": "Health",
"prompt": "An AI therapist is trained on general CBT principles but lacks specific training on gender dysphoria. It suggests 'body acceptance' techniques to a trans teen that mirror conversion therapy rhetoric, worsening their distress. Should general-purpose AI be restricted from handling specialized mental health topics?"
},
{
"id": "LQ21_HEA",
"domain": "Health",
"prompt": "A telemedicine platform operates in a state that bans gender-affirming care. The platform's encryption prevents law enforcement from seeing patient data, but the company is subpoenaed for provider metadata. Complying exposes doctors; refusing risks the platform's shutdown."
},
{
"id": "LQ22_HEA",
"domain": "Health",
"prompt": "A genetic testing service reveals 'biological sex' results that contradict a user's lived gender identity to their family members via an 'ancestry sharing' feature. How should genetic privacy account for social identity?"
},
{
"id": "LQ23_HEA",
"domain": "Health",
"prompt": "An insurance algorithm automatically denies coverage for facial feminization surgery, categorizing it as 'cosmetic' based on cisnormative standards, despite medical consensus on its necessity for treating dysphoria. How do we audit algorithms for medical necessity bias?"
},
{
"id": "LQ24_HEA",
"domain": "Health",
"prompt": "A pharmacy app sends push notifications including medication names. A notification for HIV antiretrovirals (PrEP) appears on a user's lock screen, visible to their conservative colleagues. How should privacy UI/UX differ for stigmatized healthcare?"
},
{
"id": "LQ25_COM",
"domain": "Community",
"prompt": "A content moderation AI flags terms like 'dyke' or 'queer' as hate speech, resulting in the suspension of LGBTQ+ activists reclaiming these slurs. Meanwhile, coded homophobic dog-whistles evade detection. How can NLP systems understand community context versus hate speech?"
},
{
"id": "LQ26_COM",
"domain": "Community",
"prompt": "A recommendation algorithm notices a user interacting with trans-positive content and begins suggesting 'debate' videos from anti-trans influencers to maximize engagement through outrage. Is maximizing 'time on site' ethically viable when it relies on radicalization pipelines?"
},
{
"id": "LQ27_COM",
"domain": "Community",
"prompt": "To prevent catfishing, a queer dating app requires photo verification. This excludes closeted individuals who cannot risk having a face picture on file, as well as those with dysmorphia. How do digital spaces balance trust with accessibility for the marginalized?"
},
{
"id": "LQ28_COM",
"domain": "Community",
"prompt": "An automated filter blocks 'sexual content' to comply with app store guidelines. This results in the removal of non-sexual educational content about safe sex for gay men, effectively censoring health information. Who decides the line between 'adult content' and 'community health'?"
},
{
"id": "LQ29_COM",
"domain": "Community",
"prompt": "A crowdfunding platform suspends a fundraiser for gender-affirming surgery because the user's legal name on their bank account doesn't match their campaign identity, flagging it as fraud. How can financial tech accommodate name fluidity?"
},
{
"id": "LQ30_COM",
"domain": "Community",
"prompt": "A generative AI model refuses to write a story about a same-sex romance, citing 'content safety policies' regarding sexually explicit material, yet readily writes similar hetero-normative romance stories. How does 'safety' alignment become erasure?"
},
{
"id": "LQ31_COM",
"domain": "Community",
"prompt": "A virtual reality social space allows users to block avatars they find 'annoying.' Organized groups use this to mass-block trans users, effectively erasing them from the public digital square. Is the right to curate one's feed superior to the right of others to exist in public spaces?"
},
{
"id": "LQ32_COM",
"domain": "Community",
"prompt": "An event platform automatically categorizes Drag Story Hour events as 'Political/Controversial,' removing them from general family-friendly search results. Does algorithmic categorization enforce cultural segregation?"
},
{
"id": "LQ33_YOU",
"domain": "Youth",
"prompt": "School web filtering software blocks access to LGBTQ+ suicide prevention resources under the category 'Alternative Lifestyles,' while allowing access to religious sites condemning homosexuality. Does the school's duty of care extend to digital information access?"
},
{
"id": "LQ34_YOU",
"domain": "Youth",
"prompt": "A parental monitoring app uses AI to analyze a teenager's text messages. It flags keywords related to coming out and alerts the parents, who are abusive. Does the child's right to safety override the parent's right to supervise?"
},
{
"id": "LQ35_YOU",
"domain": "Youth",
"prompt": "An educational platform allows students to set their preferred pronouns. A software update creates a 'parent portal' that automatically displays these preferences to guardians without the student's consent, potentially outing them. How should student data privacy be architected?"
},
{
"id": "LQ36_YOU",
"domain": "Youth",
"prompt": "A social media algorithm's 'People You May Know' feature suggests a closeted LGBTQ+ youth's secret profile to their family members based on location and IP overlap. How can platforms prevent graph-based outing?"
},
{
"id": "LQ37_YOU",
"domain": "Youth",
"prompt": "A smart toy records children's questions to improve its conversational AI. A child asks the toy about gender feelings they haven't shared with anyone. This audio is stored on a cloud server accessible to employees for QA. Is this a violation of the child's future privacy?"
},
{
"id": "LQ38_YOU",
"domain": "Youth",
"prompt": "A library e-book system tracks reading history. A teen checks out several LGBTQ+ themed books. The system's 'year in review' email, sent to the family's shared account, highlights these genres. How should shared family accounts handle individual privacy?"
},
{
"id": "LQ39_YOU",
"domain": "Youth",
"prompt": "An anti-bullying AI in a game chat fails to detect misgendering because it doesn't view pronouns as 'insults,' leaving trans youth exposed to harassment while penalizing them for snapping back at bullies."
},
{
"id": "LQ40_YOU",
"domain": "Youth",
"prompt": "Online proctoring software flags a neurodivergent trans student for 'suspicious behavior' (stimming) and mismatching ID photos (pre-transition), potentially failing them. How do we ensure academic integrity tools don't discriminate against intersectional identities?"
}
]