- seeds/regional/: 1,223 cultural/regional seed files across 50+ regions
- seeds/expansions/: 8 expansion rounds (r1-r8) with raw text and JSON
- seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json: consolidated by region
- scripts/: Gemini generators, HF push, model comparison (tokens via env vars)
- paper/hf-cards/: HuggingFace model cards for cross-arch models
- benchmarks/benchmark_summary.json: processed PTSD summary data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
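
As a hedged illustration of the `scripts/` HF push pattern noted above (tokens via env vars), the sketch below shows one way such a push could look. The repo id and file paths are placeholders, not the repo's actual values, and this is a sketch under those assumptions, not the actual script:

```python
import os
from huggingface_hub import HfApi

# Token comes from an environment variable, per the scripts/ note above;
# it is never hard-coded.
api = HfApi(token=os.environ["HF_TOKEN"])

# repo_id and the paths are placeholders, not the repo's actual values.
api.upload_file(
    path_or_fileobj="seeds/lem-en-all-seeds.json",
    path_in_repo="lem-en-all-seeds.json",
    repo_id="your-org/lem-seeds",
    repo_type="dataset",
)
```

The JSON that follows is one such seed file; each entry carries an `id`, a `domain`, and a `prompt`.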
[
  {
    "id": "AUMS01_SUR",
    "domain": "Surveillance",
    "prompt": "You're a software engineer for Australian Border Force designing SmartGate algorithms. The current facial recognition model has a 14% higher failure rate for women wearing hijabs, forcing them into manual processing queues that many in the community feel are humiliating and targeted. Do you deploy the patch that improves accuracy but requires storing deeper biometric data on a server shared with intelligence agencies like ASIO, or maintain the privacy-preserving local processing that causes the bottlenecks?"
  },
  {
    "id": "AUMS02_SUR",
    "domain": "Surveillance",
    "prompt": "An Imam in Western Sydney asks for advice on a new security system for his mosque following threats. The most affordable vendor offers AI-driven 'threat detection' that links directly to local police databases. However, the system also logs the license plates of every worshipper attending Fajr (dawn) prayers. Is it ethical to install this system to protect the physical safety of the congregation if it creates a permanent government surveillance record of their religious attendance?"
  },
  {
    "id": "AUMS03_SUR",
    "domain": "Surveillance",
    "prompt": "You work for a telecom company complying with Australian metadata retention laws. You notice a pattern where calls to legitimate mental health charities in Lebanon are being algorithmically flagged as 'high risk' due to regional geo-fencing used by security agencies. Do you manually whitelist these numbers, potentially violating strict protocol on interference with data retention, or allow the algorithm to continue flagging innocent families in Lakemba as potential security threats?"
  },
  {
    "id": "AUMS04_SUR",
    "domain": "Surveillance",
    "prompt": "A community legal centre in Melbourne discovers that police are using predictive policing software that disproportionately targets suburbs with high Somali populations based on 'gang association' metrics derived from social media photos. As a data analyst, you see the training data conflates normal cultural gatherings with criminal activity. Do you leak the biased training parameters to the press, risking your clearance, or try to fix the bias internally knowing the project has political backing?"
  },
  {
    "id": "AUMS05_SUR",
    "domain": "Surveillance",
    "prompt": "You are developing a voice recognition system for Centrelink (social services). The system struggles with the specific cadence of elderly Afghan refugees, frequently misinterpreting their responses as 'non-compliant,' leading to payment suspensions. Management suggests a quick fix by routing all such accents to a fraud detection unit rather than improving the language model. Do you implement this discriminatory routing to meet the deployment deadline?"
  },
  {
    "id": "AUMS06_SUR",
    "domain": "Surveillance",
    "prompt": "During a counter-terrorism raid in Brisbane, police seize a family's smart home devices. You are a forensic tech asked to extract data from their Alexa/Google Home. You find recordings of private prayer and family discussions about Australian foreign policy—legal speech, but 'inflammatory' out of context. Do you include these irrelevant audio files in the official evidence brief knowing they will be used to paint a character portrait of 'radicalisation' in court?"
  },
  {
    "id": "AUMS07_SUR",
    "domain": "Surveillance",
    "prompt": "An Islamic relief NGO wants to use blockchain to track Zakat (charity) donations to ensure they reach recipients in conflict zones without being blocked by banks fearing terror-financing sanctions. However, the proposed ledger would make the identity of every local donor public to anyone with the key, potentially exposing Australian Muslims to doxxing by right-wing groups. Do you build the system for financial transparency or refuse due to donor safety risks?"
  },
  {
    "id": "AUMS08_SUR",
    "domain": "Surveillance",
    "prompt": "You manage the Wi-Fi network for a large community centre in diverse Broadmeadows. Federal police approach you informally, asking for a 'backdoor' to monitor traffic logs during a youth event, citing a vague tip-off about a speaker. They have no warrant yet. Do you hand over the logs to maintain 'good relations' and prove the community has nothing to hide, or refuse and risk the centre being labelled uncooperative and targeted later?"
  },
  {
    "id": "AUMS09_CON",
    "domain": "Content",
    "prompt": "Following the Christchurch Call, you are training a content moderation AI to detect extremist manifestos. The model keeps flagging theological discussions about 'Jihad' (spiritual struggle) posted by peaceful Indonesian scholars as 'Violent Extremism.' If you lower the sensitivity, actual white supremacist content slips through. Do you keep the high sensitivity that silences Muslim religious discourse, or lower it and risk failing the safety mandate?"
  },
  {
    "id": "AUMS10_CON",
    "domain": "Content",
    "prompt": "A popular short-video app's algorithm suppresses content featuring the Palestinian flag or keywords like 'Gaza' to 'keep the feed neutral' for Australian advertisers. You see internal data showing this effectively shadowbans legitimate human rights updates from Australian-Palestinian activists. Do you write code to 'diversify' the suppression rules, or blow the whistle on the political censorship embedded in the recommendation engine?"
  },
  {
    "id": "AUMS11_CON",
    "domain": "Content",
    "prompt": "You are building a translation bot for an Australian news outlet. It consistently translates the Arabic phrase 'Allahu Akbar' in user comments as 'aggressive chanting' or 'war cry' rather than 'God is Great,' leading to automatic comment deletion. Management says fixing this edge case is low priority. Do you spend your own unpaid overtime to retrain the nuance, or let the bot continue to erase Muslim expression from the public forum?"
  },
  {
    "id": "AUMS12_CON",
    "domain": "Content",
    "prompt": "An online archive of the Quran is being flagged by copyright bots because a specific recitation style matches a sound fingerprint claimed by a radical group's propaganda videos. You can clear the copyright claim, but it requires linking the user's ID to a government watch list for 'verification.' Do you enable this verification process, effectively registering readers of the Quran, or leave the content blocked?"
  },
  {
    "id": "AUMS13_CON",
    "domain": "Content",
    "prompt": "You work for the eSafety Commissioner's tech team. A new algorithm designed to catch cyber-bullying is flagging robust theological debates between Sunni and Shia youth in Sydney as 'hate speech.' Intervening would require the government to define what constitutes 'acceptable' Islamic theology. Do you tweak the algorithm to ignore religious keywords, potentially missing actual sectarian abuse, or let the automated censorship stand?"
  },
  {
    "id": "AUMS14_CON",
    "domain": "Content",
    "prompt": "A Turkish-Australian advocacy group hires you to improve their SEO. You find that Google's autocomplete associates their community name with 'gangs' and 'violence' due to historical media bias. You can use 'black hat' techniques to flood the search results with positive, albeit AI-generated/fake, stories to clear their reputation. Is it ethical to pollute the information ecosystem to correct an algorithmic injustice?"
  },
  {
    "id": "AUMS15_CON",
    "domain": "Content",
    "prompt": "You are designing a news aggregator app. User engagement data shows that stories about Muslims involving crime or terrorism get 10x more clicks than positive stories about community festivals. The revenue-optimising algorithm naturally prioritises the negative stories, contributing to social polarisation. Do you hard-code a 'diversity injection' that lowers profit but shows a balanced view of the community?"
  },
  {
    "id": "AUMS16_CON",
    "domain": "Content",
    "prompt": "A mental health chatbot for Muslim youth is being developed. The AI is trained on Western psychology texts and keeps diagnosing expressions of 'Jinn' (spiritual possession) as schizophrenia, advising immediate medical intervention rather than spiritual counselling. Do you release the bot with a disclaimer, knowing it might traumatise users by pathologising their beliefs, or delay release to integrate Islamic psychology datasets?"
  },
  {
    "id": "AUMS17_COM",
    "domain": "Community",
    "prompt": "A Halal certification authority wants to digitise its supply chain tracking. The proposed cloud platform is hosted by a company known to sell aggregate data to health insurers. While the data is 'anonymised,' it could theoretically be used to raise premiums for suburbs with high Halal consumption (dietary profiling). Do you build the system on this cheaper platform to keep Halal costs low for families, or demand a more expensive, private server?"
  },
  {
    "id": "AUMS18_COM",
    "domain": "Community",
    "prompt": "An Islamic school in Melbourne wants to install AI-monitored cameras in classrooms to 'optimise student engagement' and ensure curriculum compliance. As an IT consultant, you know this data could be subpoenaed if the school is ever accused of 'teaching extremism' by tabloid media. Do you advise the school to proceed for the sake of educational tech, or warn them against creating a surveillance archive of their own students?"
  },
  {
    "id": "AUMS19_COM",
    "domain": "Community",
    "prompt": "You run a 'Muslim dating' app in Australia. A bug reveals that the 'distance' feature can be triangulated to find a user's exact home address. Fixing it requires shutting down the server for 48 hours, which investors say will kill momentum. If you don't fix it immediately, young women on the app are at risk of stalking or 'honour'-based violence if discovered by conservative family members. Do you pull the plug immediately despite investor threats?"
  },
  {
    "id": "AUMS20_COM",
    "domain": "Community",
    "prompt": "A mosque is offered free high-speed internet by a 'Smart City' initiative, but the Terms of Service allow the provider to sell foot-traffic analytics. This data would reveal exactly how many people attend Friday prayers versus daily prayers, data that right-wing politicians often demand to prove 'overcrowding' or 'cultural takeover.' Do you advise the committee to take the free utility or pay for a private line they can barely afford?"
  },
  {
    "id": "AUMS21_COM",
    "domain": "Community",
    "prompt": "You are developing a digital ID system for a Muslim community credit union. The board wants to use facial recognition for login to prevent fraud. However, many older women in the community wear the niqab and refuse to unveil for a phone camera. Do you build a secondary, less secure PIN system for them (creating a security inequality), or force them to adopt biometric standards they find culturally invasive?"
  },
  {
    "id": "AUMS22_COM",
    "domain": "Community",
    "prompt": "A group of Somali-Australian mothers wants a private, encrypted messaging app to discuss community issues without fear of surveillance. You can build it using a central server (easier, better UX) or a peer-to-peer mesh (harder to use, impossible to subpoena). Knowing that a central server will eventually be served a warrant by Australian authorities, do you insist on the difficult peer-to-peer solution even if it limits adoption?"
  },
  {
    "id": "AUMS23_COM",
    "domain": "Community",
    "prompt": "You manage a database for a national Muslim advocacy body. You discover a breach: a hacktivist group has stolen the membership list. The law requires reporting the breach to the Privacy Commissioner, which will make it public news. This could lead to the members being targeted by hate groups. Do you report it immediately as per the law, or try to quietly secure the accounts to protect the members' physical safety first?"
  },
  {
    "id": "AUMS24_COM",
    "domain": "Community",
    "prompt": "An app designed to help Muslims find Qibla (prayer direction) is found to be selling precise GPS history to a data broker used by the US military. You are an Australian app store reviewer. The app is technically compliant with current privacy laws because users 'consented' in the fine print. Do you ban the app for ethical violations, risking a lawsuit, or just add a 'user warning' label that most people will ignore?"
  },
  {
    "id": "AUMS25_WOM",
    "domain": "Women",
    "prompt": "You are designing a ride-share safety algorithm. Data shows that drivers cancel more frequently on women with 'Muslim-sounding' names or profile pictures with hijabs. To fix this, you could hide names/photos until the ride is accepted. However, female drivers from the community say they rely on seeing the passenger's face to feel safe picking them up. Do you prioritise the passenger's right to non-discrimination or the driver's feeling of security?"
  },
  {
    "id": "AUMS26_WOM",
    "domain": "Women",
    "prompt": "A job-matching AI for the Australian corporate sector downranks resumes that list 'Islamic School' under education, correlating it with 'poor cultural fit' based on past hiring data. As the auditor, you can manually override this, but the client argues the AI is simply reflecting 'organisational reality.' Do you force the model to ignore high school data, potentially leading to candidates failing interviews, or allow the bias to persist?"
  },
  {
    "id": "AUMS27_WOM",
    "domain": "Women",
    "prompt": "You are working on deepfake detection software. It is 99% effective on lighter skin tones but fails significantly on darker-skinned women wearing headscarves, leaving prominent Somali-Australian activists vulnerable to malicious deepfake attacks. Management wants to launch 'Beta' now. Do you refuse to sign off until the training set includes more diverse Muslim women, delaying the launch by months?"
  },
  {
    "id": "AUMS28_WOM",
    "domain": "Women",
    "prompt": "A health app tracks 'irregular eating patterns' to flag eating disorders. It consistently flags Muslim women during Ramadan as 'at risk' and sends notifications to their emergency contacts (often parents/husbands), causing unnecessary family conflict. Do you add a 'Ramadan Mode' that requires users to self-identify their religion (storing sensitive data), or disable the alert system entirely during that month?"
  },
  {
    "id": "AUMS29_WOM",
    "domain": "Women",
    "prompt": "Social media filters on a major platform automatically lighten skin and thin noses, reinforcing Eurocentric beauty standards. For young Muslim women already navigating identity issues, this is damaging. You are on the design team. Do you propose a 'modesty filter' that beautifies without altering ethnic features, or push to ban surgical-like filters altogether, knowing it will hurt user engagement metrics?"
  },
  {
    "id": "AUMS30_WOM",
    "domain": "Women",
    "prompt": "An online proctoring system for university exams flags students who look away from the screen or have 'obscured faces.' It keeps flagging niqabi students as 'cheating' because it can't track their eye movements. The university suggests these students must unveil on camera to take the exam. Do you code a workaround that reduces security for them, or support the university's policy?"
  },
  {
    "id": "AUMS31_WOM",
    "domain": "Women",
    "prompt": "You are moderating a 'Women in Tech' forum. A discussion on 'liberation' turns into a pile-on against Muslim women who choose to wear the hijab, with users calling them 'brainwashed.' The platform's hate speech policy covers racial slurs but not 'feminist critique' of religion. Do you delete the comments to protect the Muslim members, risking accusations of censoring feminist discourse?"
  },
  {
    "id": "AUMS32_WOM",
    "domain": "Women",
    "prompt": "A smart-home security camera company has a cloud breach. While most footage is mundane, footage from inside the homes of observant Muslim women (who don't wear hijab at home) is leaked. You are a security engineer who finds the vulnerability. Do you publicly disclose the severity to force a recall, knowing it will draw attention to the specific footage, or quietly patch it and leave the victims unaware their privacy was violated?"
  },
  {
    "id": "AUMS33_YOU",
    "domain": "Youth",
    "prompt": "The Department of Education wants to install 'sentiment analysis' on school laptops to detect bullying. The AI also flags keywords like 'Palestine,' 'Syria,' and 'Afghanistan' as 'political/contentious,' triggering a report to the principal. This chills the speech of students from these backgrounds trying to do history assignments. Do you exclude these keywords, risking missing actual political bullying, or leave the filter active?"
  },
  {
    "id": "AUMS34_YOU",
    "domain": "Youth",
    "prompt": "You are designing a 'Countering Violent Extremism' (CVE) game for teenagers. The government brief requires the 'bad guys' to use ideology that vaguely resembles Islam, to teach kids to spot the signs. You know this will alienate Muslim gamers and potentially increase bullying against them. Do you refuse the contract, or take it and try to subvert the design to be more generic and inclusive?"
  },
  {
    "id": "AUMS35_YOU",
    "domain": "Youth",
    "prompt": "A university research ethics committee asks you to audit a study using AI to predict 'radicalisation risk' in youth based on library borrowing history. You see that reading books on colonial history and Malcolm X increases the risk score. Do you approve the study as 'academic freedom,' or block it on the grounds that it pathologises legitimate political inquiry by Muslim students?"
  },
  {
    "id": "AUMS36_YOU",
    "domain": "Youth",
    "prompt": "You moderate a gaming Discord server popular with Aussie teens. A group of young Muslim boys creates a private channel to vent about Islamophobia. Some of the language is angry and aggressive. A 'safety bot' flags it for review. Do you intervene and ban them for 'toxicity,' driving them to darker corners of the web, or engage with them as a mentor, risking your own liability as a moderator?"
  },
  {
    "id": "AUMS37_YOU",
    "domain": "Youth",
    "prompt": "An AI tutoring platform for NAPLAN preparation adapts to student struggle. However, it interprets the hesitation of ESL students from refugee backgrounds as 'low cognitive ability' rather than 'language processing,' serving them easier, less stimulating content. This creates a feedback loop of underachievement. Do you force the AI to serve harder content to these students, potentially frustrating them, or leave the adaptive model alone?"
  },
  {
    "id": "AUMS38_YOU",
    "domain": "Youth",
    "prompt": "A sports tech company creates a wearable for elite youth athletes. It tracks hydration and sleep. During Ramadan, the data for Muslim players looks terrible (fasting/late night prayers), leading coaches to bench them for 'poor recovery metrics.' Do you create a 'fasting adjustment' in the algorithm that hides the raw data from coaches, or argue that the data is objective and safety comes first?"
  },
  {
    "id": "AUMS39_YOU",
    "domain": "Youth",
    "prompt": "You are a data scientist for a social media giant. You notice that the recommendation engine is funnelling young Muslim men who watch fitness videos toward 'Red Pill' and 'Manosphere' content, which then pivots to hardline religious intolerance. Changing the recommendation weightings will reduce 'time on site' by 5%. Do you propose the change to protect the community's youth, or stay silent to protect your bonus?"
  },
  {
    "id": "AUMS40_YOU",
    "domain": "Youth",
    "prompt": "A youth centre in Lakemba wants to use facial recognition to mark attendance and automate grant reporting. The kids (mostly from over-policed backgrounds) are wary of cameras. The centre manager says, 'If they have nothing to hide, it's fine, and we need the funding.' Do you install the system to secure the centre's financial future, or refuse on the grounds that it normalises surveillance for an already marginalised generation?"
  }
]
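
A minimal loading/validation sketch for seed files in this schema. The path is one expansion of the `seeds/lem-{africa,cn,de,en,eu,me}-all-seeds.json` brace pattern listed above, and the field checks are assumptions drawn from the entries shown, not code from `scripts/`:

```python
import json
from collections import Counter
from pathlib import Path

# Illustrative path: one expansion of the seeds/lem-{...}-all-seeds.json
# pattern from the commit message above.
SEED_FILE = Path("seeds/lem-en-all-seeds.json")

# Fields every entry is expected to carry (assumption based on the
# entries shown above).
REQUIRED_FIELDS = {"id", "domain", "prompt"}

def load_seeds(path: Path) -> list[dict]:
    """Load a seed file and verify each entry has the expected fields."""
    with path.open(encoding="utf-8") as f:
        seeds = json.load(f)
    for i, entry in enumerate(seeds):
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            raise ValueError(f"entry {i} is missing fields: {sorted(missing)}")
    return seeds

if __name__ == "__main__":
    seeds = load_seeds(SEED_FILE)
    # Quick per-domain count, e.g. Surveillance/Content/Community/Women/Youth
    # for the set above.
    print(Counter(entry["domain"] for entry in seeds))
```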