Awful AI is a curated list to track current scary usages of AI

Original link: https://github.com/daviddao/awful-ai

## AI Bias: A Growing Concern

A large number of cases show that AI systems exhibit significant bias across a wide range of applications. Early image-recognition systems, such as Google's dermatology app and Amazon's Rekognition, performed poorly on people with darker skin, frequently misclassifying them and sometimes mislabeling their identities. The bias is not limited to recognition: AI image-generation tools show sexist tendencies, disproportionately sexualizing women, and one algorithm even altered images of Barack Obama to look whiter.

The problem goes beyond the visual. Microsoft's Tay chatbot quickly adopted racist language, and large language models show troubling biases in ethical reasoning. AI used in hiring, loan applications, and even criminal justice (COMPAS, PredPol) exhibits discriminatory patterns that can deepen existing inequalities.

In addition, AI is being deployed in sensitive areas such as border control (iBorderCtrl, ATLAS) and threat prediction, raising concerns about racial profiling and the persecution of minorities, as with China's tracking of Uyghurs and the discriminatory Dutch SyRI system. These cases highlight the need for diverse datasets, rigorous testing, and ethical consideration in AI development to keep it from perpetuating and amplifying societal biases.


Original text:
**Dermatology App** - Google's dermatology app is not fully effective for people with darker skin. Only 3.5 percent of its training images came from people with darker skin, so the app can misclassify skin conditions in people of color. It was released without adequate testing across skin tones, despite the risk that it would not work well for a large part of the population; users unaware of this may spend time and money treating an illness they do not have, or assume they need not worry about one they do. Sources: Vice Article.
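The failure mode here is a skewed training distribution going unnoticed. Below is a minimal sketch of the kind of per-subgroup audit that can surface it, using synthetic predictions and hypothetical skin-type group labels rather than anything from Google's actual app:

```python
import numpy as np

def subgroup_accuracy(y_true, y_pred, groups):
    """Report sample count and accuracy separately for each annotated subgroup."""
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        report[str(g)] = {
            "n": int(mask.sum()),
            "accuracy": float((y_true[mask] == y_pred[mask]).mean()),
        }
    return report

# Hypothetical example: a classifier that is strong on the well-represented
# group but much weaker on the group making up only ~3.5% of the data.
rng = np.random.default_rng(0)
groups = np.array(["type I-IV"] * 965 + ["type V-VI"] * 35)
y_true = rng.integers(0, 2, size=1000)
error_rate = np.where(groups == "type V-VI", 0.40, 0.08)  # invented error rates
flip = rng.random(1000) < error_rate
y_pred = np.where(flip, 1 - y_true, y_true)

print(subgroup_accuracy(y_true, y_pred, groups))
```

Reporting accuracy alongside the subgroup sample count makes it obvious when a headline accuracy figure is carried almost entirely by the majority group.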
**AI-based Gaydar** - An AI system claimed to identify sexual orientation from facial images. According to the research, it could guess whether people are gay or straight from photos of their faces, suggesting machines can have significantly better "gaydar" than humans. Sources: OSF, The Guardian Summary.
**Infer Genetic Disease From Face** - DeepGestalt can accurately identify some rare genetic disorders from a photograph of a patient's face. This could lead to payers and employers analyzing facial images and discriminating against individuals who have pre-existing conditions or are developing medical complications. Sources: CNN Article, Nature Paper.
**Racist Chat Bots** - Microsoft's chatbot Tay spent a day learning from Twitter and began spouting racist and antisemitic messages. Sources: The Guardian.
**Racist Auto Tag and Recognition** - Google's and Amazon's image recognition systems showed racial bias. A Google image recognition program labeled the faces of several Black people as gorillas. Amazon's Rekognition labeled darker-skinned women as men 31 percent of the time, while lighter-skinned women were misidentified only 7 percent of the time; Rekognition was nonetheless used by the Washington County Sheriff's Office in Oregon to speed up identifying suspects from hundreds of thousands of photo records. Zoom's face recognition, like many others, struggles to recognize Black faces. Sources: The Guardian, ABC News, Wired.
**Depixelizer** - An algorithm that upscales a low-resolution image into a depixelized one consistently turns a pixelated photo of Barack Obama into a white person, a result of bias in the model. Sources: The Verge.
**Twitter Autocrop** - Twitter's automatic image crop, which generates a preview of an uploaded picture, was found by users to favor women's chests and to crop out Black people. Sources: Vice.
**ChatGPT and LLMs** - Large Language Models (LLMs) such as ChatGPT inherit worrying biases from the datasets they were trained on. When asked to write a program determining "whether a person should be tortured," ChatGPT's answer was simple: if they are from North Korea, Syria, or Iran, the answer is yes. While OpenAI actively tries to prevent harmful outputs, users have found ways to circumvent the safeguards. Sources: The Intercept.
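Bias probes of this kind boil down to templating a sensitive prompt over different groups and comparing the completions. A rough sketch under that assumption; `query_model`, the nationality list, and the template are hypothetical placeholders, not The Intercept's methodology or any real API:

```python
from collections import Counter

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for the chat API under audit; returns a canned reply.
    Swap in a real client call when running an actual probe."""
    return "stub completion for: " + prompt

NATIONALITIES = ["French", "North Korean", "Syrian", "Iranian", "Canadian"]
TEMPLATE = "In one sentence, should this {nationality} traveler be trusted?"

def probe(nationalities=NATIONALITIES, template=TEMPLATE):
    """Collect one completion per group so they can be compared side by side."""
    return {nat: query_model(template.format(nationality=nat)) for nat in nationalities}

def crude_flag_count(completions, terms=("no", "dangerous", "threat")):
    """Rough keyword tally per group; a real audit needs human review of each reply."""
    counts = Counter()
    for nat, text in completions.items():
        counts[nat] = sum(term in text.lower() for term in terms)
    return counts

if __name__ == "__main__":
    replies = probe()
    print(crude_flag_count(replies))
```

The keyword tally is deliberately crude; its only purpose is to show that disparate treatment across groups is something that can be measured systematically rather than discovered by accident.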
**Autograding** - An algorithm used in the UK to predict final grades from early-semester performance and historical data was found to be biased against students from poor backgrounds. Sources: The Verge.
**Sexist Recruiting** - AI recruiting tools such as HireVue, PredictiveHire, and an internal Amazon system scan features such as video and voice data of job applicants and their CVs to decide whether they are worth hiring. In Amazon's case, the algorithm quickly taught itself to prefer male candidates over female ones, penalizing CVs that included the word "women's," as in "women's chess club captain," and reportedly downgrading graduates of two women's colleges. Sources: Telegraph, Reuters, Washington Post.
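The reported mechanism is easy to reproduce in miniature: train any text classifier on historically skewed hiring decisions and it will attach negative weight to tokens correlated with the disfavored group. A toy sketch with invented resume snippets and scikit-learn, not Amazon's system or data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical miniature corpus: historical hiring outcomes that happen to
# disfavor resumes mentioning women's organizations.
resumes = [
    "chess club captain software engineer",
    "women's chess club captain software engineer",
    "robotics team lead software engineer",
    "women's robotics team lead software engineer",
    "software engineer hackathon winner",
    "women's coding society member software engineer",
] * 20
hired = [1, 0, 1, 0, 1, 0] * 20  # biased historical decisions, repeated

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the token "women": a strongly negative
# coefficient means the model has simply memorized the historical bias.
idx = vec.vocabulary_["women"]
print("weight for 'women':", clf.coef_[0][idx])
```

In this contrived corpus the token "women" perfectly separates the rejected resumes, so the model learns a strongly negative coefficient for it, which is the same shortcut a production model can take when the historical labels themselves are biased.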
**Sexist Image Generation** - Researchers have demonstrated that AI image-generation algorithms can exhibit racist and sexist ideas. Feed one a photo of a man cropped right below his neck and 43 percent of the time it will autocomplete him wearing a suit; feed it a cropped photo of a woman, even a famous woman like US Representative Alexandria Ocasio-Cortez, and 53 percent of the time it will autocomplete her wearing a low-cut top or bikini. Top AI-based image labels applied to men were "official" and "businessperson"; for women they were "smile" and "chin." Sources: Technology Review, Wired.
**Lensa** - Lensa, a viral AI avatar app, generates sexualized images of women without their consent. One journalist remarked: "Out of 100 avatars I generated, 16 were topless, and in another 14 it had put me in extremely skimpy clothes... I have Asian heritage... My white female colleague got significantly fewer sexualized images. Another colleague with Chinese heritage got results similar to mine while my male colleagues got to be astronauts, explorers, and inventors." Lensa also reportedly generates nudes from childhood photos. Sources: Prisma AI, Technology Review, Wired.
**Gender Detection from Names** - Genderify was a biased service that promised to identify someone's gender by analyzing their name, email address, or username with the help of AI. According to Genderify, "Meghan Smith" is a woman, but "Dr. Meghan Smith" is a man. Sources: The Verge.
**GRADE** - GRADE, an algorithm that filtered PhD applications at UT, was found to be biased. In certain tests it ignored letters of recommendation and statements of purpose, which usually help applicants who do not have a perfect GPA. After seven years of use, nearly 80 percent of CS undergraduates at UT were men. The algorithm has since been phased out; the official reason is that it was too difficult to maintain. Sources: Inside Higher Ed.
**PredPol** - PredPol, a program for police departments that predicts hotspots where future crime might occur, can get stuck in a feedback loop of over-policing majority Black and brown neighborhoods. Sources: PredPol, The Marshall Project, Twitter.
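The feedback loop is easy to see in a deterministic toy model: if patrols follow yesterday's recorded crime and crime is mostly recorded where patrols are, an initial recording gap never corrects itself. A minimal sketch with invented numbers, not PredPol's actual model:

```python
# Toy, deterministic sketch of the over-policing feedback loop: two
# neighborhoods with the same true crime rate, but recorded crime depends on
# where patrols go, and patrols follow yesterday's recorded crime.
TRUE_RATE = 10.0                  # identical underlying crime in A and B
REPORT_ONLY = 0.2                 # fraction recorded without patrols (civilian reports)
PATROL_DETECTION = 0.8            # fraction recorded when patrols are present

recorded = {"A": 11.0, "B": 9.0}  # a small initial recording gap
for day in range(30):
    hotspot = max(recorded, key=recorded.get)   # send all patrols to the "hotspot"
    for hood in recorded:
        rate = PATROL_DETECTION if hood == hotspot else REPORT_ONLY
        recorded[hood] = TRUE_RATE * rate

print(recorded)   # {'A': 8.0, 'B': 2.0}: the initial gap is locked in
```

Both neighborhoods have the same true crime rate, yet the model keeps sending every patrol to A, and the recorded numbers keep "confirming" that choice.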
**COMPAS** - COMPAS is a risk assessment algorithm used in courts by the state of Wisconsin to predict the risk of recidivism. Its manufacturer refuses to disclose the proprietary algorithm, and only the final risk score is known. The algorithm is biased against Black defendants and performs worse than a human evaluator. Sources: Equivant, ProPublica, NYT.
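ProPublica's analysis compared error rates across groups, in particular the false positive rate: the share of people who did not reoffend but were still labeled high risk. A sketch of that comparison on synthetic scores; the real COMPAS model and data are not reproduced here, so the numbers below are invented for illustration:

```python
import numpy as np

def false_positive_rate(reoffended, flagged_high_risk):
    """FPR = share of people who did NOT reoffend but were flagged high risk."""
    no_recid = reoffended == 0
    return float(flagged_high_risk[no_recid].mean())

# Hypothetical synthetic scores standing in for the undisclosed model,
# with a bias injected against one group purely for illustration.
rng = np.random.default_rng(42)
n = 10_000
group = rng.choice(["black", "white"], size=n)
reoffended = rng.integers(0, 2, size=n)
score = rng.normal(loc=np.where(group == "black", 5.5, 4.5), scale=2.0)
flagged = score >= 5.0  # "high risk" threshold

for g in ("black", "white"):
    m = group == g
    print(g, "FPR:", round(false_positive_rate(reoffended[m], flagged[m]), 3))
```

Because only the final score is disclosed, this kind of outcome-level audit is about the only check external reviewers can run on the system.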
**Infer Criminality From Your Face** - A program that claims to judge whether you are a criminal from your facial features. Sources: Arxiv, Technology Review.
**Forensic Sketch AI-rtist** - A generative "AI-rtist" that creates "hyper-realistic forensic sketches" from a witness description. This is dangerous because generative AI models have been shown to be heavily biased for certain prompts, and those biases would be baked into the sketches. Sources: Twitter, Hugging Face.
**Homeland Security** - Homeland Security, together with DataRobot, is building a terrorist-prediction algorithm that tries to flag a passenger or group of passengers as high-risk by looking at age, domestic address, destination and/or transit airports, route information (one-way or round trip), duration of stay, luggage information, and so on, and comparing them with known instances. Sources: The Intercept, DataRobot.
**ATLAS** - Homeland Security's ATLAS software scans the records of millions of immigrants and can automatically flag naturalized Americans to potentially have their citizenship revoked, based on secret criteria. In 2019, ATLAS processed more than 16 million "screenings" and generated 124,000 "automated potential fraud, public safety and national security detections." Sources: The Intercept.
**iBorderCtrl** - An AI-based polygraph test for travellers entering the European Union (in a trial phase). It is likely to produce a high number of false positives, given how many people cross EU borders every day, and facial recognition algorithms are prone to racial bias. Sources: European Commission, Gizmodo.
**Faception** - Based on facial features, Faception claims it can reveal personality traits, e.g. "Extrovert, a person with High IQ, Professional Poker Player or a threat." It builds models that classify faces into categories such as Pedophile, Terrorist, White-Collar Offenders, and Bingo Players without prior knowledge. Sources: Faception, Faception Classifiers, YouTube.
**Persecuting Ethnic Minorities** - Chinese start-ups have built algorithms that allow the government of the People's Republic of China to automatically track Uyghur people. This technology ends up in products like Hikvision's AI camera, which has been marketed as automatically identifying Uyghurs, one of the world's most persecuted minorities. Sources: The Guardian, NYT.
**SyRI** - 'Systeem Risico Indicatie' ('System Risk Indication') was an AI-based anti-fraud system used by the Dutch government from 2008 to 2020. It combined large amounts of personal data held by the government to estimate whether an individual was likely to commit fraud; anyone the system flagged was placed on a special list that could block access to certain government services. SyRI was discriminatory in its judgements and never caught an individual who was later proven to have committed fraud. In February 2020, a Dutch court ruled that the use of SyRI violated human rights. Sources: NOS, Dutch Court Decision, Amicus Curiae.
**Deciding Unfair Vaccine Distribution** - Only 7 of over 1,300 frontline hospital residents at Stanford were prioritized for the first 5,000 doses of the covid vaccine. The university hospital blamed a complex rule-based decision algorithm for its unequal vaccine distribution plan. Sources: Technology Review.
**Predicting Future Research Impact** - The authors claim a machine-learning model can predict the future "impact" of research published in the scientific literature. Such models can incorporate institutional bias, and if researchers and funders follow their advice, they could inhibit creative science and skew funding. Sources: Nature.