If you are looking for examples of how ML can fail despite all its incredible potential, you have come to the right place. Beyond the wonderful success stories of applied machine learning, here is a list of failed projects which we can learn a lot from.
- Classic Machine Learning
- Computer Vision
- Forecasting
- Image Generation
- Natural Language Processing
- Recommendation Systems
## Classic Machine Learning

Title | Description |
---|---|
Amazon AI Recruitment System | AI-powered automated recruitment system canceled after evidence of discrimination against female candidates |
Genderify - Gender identification tool | AI-powered tool designed to identify gender based on fields like name and email address was shut down due to built-in biases and inaccuracies |
Leakage and the Reproducibility Crisis in ML-based Science | A team at Princeton University compiled 20 reviews across 17 scientific fields that collectively identified significant errors (e.g., data leakage, no train-test split) in 329 papers applying ML-based science |
COVID-19 Diagnosis and Triage Models | Hundreds of predictive models were developed to diagnose or triage COVID-19 patients faster, but ultimately none of them were fit for clinical use, and some were potentially harmful |
COMPAS Recidivism Algorithm | Analysis of Florida's recidivism risk scoring system found evidence of racial bias |
Pennsylvania Child Welfare Screening Tool | The predictive algorithm (which helps identify which families are to be investigated by social workers for child abuse and neglect) flagged a disproportionate number of Black children for 'mandatory' neglect investigations |
Oregon Child Welfare Screening Tool | Similar to the tool used in Pennsylvania, Oregon's child welfare AI algorithm was also halted a month after the Pennsylvania report |
U.S. Healthcare System Health Risk Prediction | A widely used algorithm for predicting healthcare needs exhibited racial bias: at a given risk score, Black patients were considerably sicker than white patients |
Apple Card Credit Card | Apple's credit card (created in partnership with Goldman Sachs) was investigated by financial regulators after customers complained that its lending algorithms discriminated against women; in one case, a male customer's Apple Card credit line was 20 times higher than the one offered to his spouse |
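The Princeton leakage survey above lists errors as basic as evaluating a model on the very data it was trained on. Below is a minimal sketch of that failure mode using purely synthetic data and a toy 1-nearest-neighbour classifier (all names and numbers are illustrative, not from any of the cited papers):

```python
import random

random.seed(0)

# Synthetic binary classification data: one noisy informative feature.
data = [(random.gauss(mu, 1.0), label)
        for label, mu in [(0, -0.5), (1, 0.5)]
        for _ in range(200)]
random.shuffle(data)

def knn_predict(train, x):
    # 1-nearest-neighbour: return the label of the closest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(train, points):
    return sum(knn_predict(train, x) == y for x, y in points) / len(points)

train, test = data[:300], data[300:]

# The error from the survey: scoring the model on its own training data.
train_acc = accuracy(train, train)     # the model has memorised these points
held_out_acc = accuracy(train, test)   # honest estimate on unseen data

print(f"evaluated on training data: {train_acc:.2f}")
print(f"evaluated on held-out data: {held_out_acc:.2f}")
```

A memorising model looks flawless when scored on its own training points; only the held-out score reflects how it would behave on new data, which is why a missing train-test split can silently invalidate a paper's headline result.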
## Computer Vision

Title | Description |
---|---|
Inverness Automated Football Camera System | AI camera football-tracking technology for live streaming repeatedly mistook a linesman's bald head for the ball |
Amazon Rekognition for US Congressmen | Amazon's facial recognition technology (Rekognition) falsely matched 28 congresspeople with mugshots of criminals, while also revealing racial bias in the algorithm |
Amazon Rekognition for law enforcement | Amazon's facial recognition technology (Rekognition) misidentified women as men, particularly those with darker skin |
Zhejiang traffic facial recognition system | Traffic camera system (designed to capture traffic offenses) mistook a face in an advertisement on the side of a bus for a jaywalker |
Kneron tricking facial recognition terminals | The team at Kneron used high-quality 3-D masks to deceive AliPay and WeChat payment systems to make purchases |
Twitter smart cropping tool | Twitter's auto-crop tool for photo review displayed evident signs of racial bias |
Depixelator tool | Algorithm (based on StyleGAN) designed to generate depixelated faces showed signs of racial bias, with image output skewed towards the white demographic |
Google Photos tagging | The automatic photo tagging capability in Google Photos mistakenly labeled black people as gorillas |
GenderShades evaluation of gender classification products | GenderShades' research revealed that Microsoft and IBM’s face-analysis services for identifying the gender of people in photos frequently erred when analyzing images of women with dark skin |
New Jersey Police Facial Recognition | A false facial recognition match by New Jersey police landed an innocent black man (Nijeer Parks) in jail even though he was 30 miles away from the crime |
Tesla's dilemma between a horse cart and a truck | Tesla's visualization system mistook a horse-drawn carriage for a truck with a man walking behind it |
Google's AI for Diabetic Retinopathy Detection | The retina scanning tool fared much worse in real-life settings than in controlled experiments, with issues such as rejected scans (from poor scan image quality) and delays from intermittent internet connectivity when uploading images to the cloud for processing |
## Forecasting

Title | Description |
---|---|
Google Flu Trends | Flu prevalence prediction model based on Google searches significantly overestimated actual flu prevalence |
Zillow iBuying algorithms | Significant losses in Zillow's home-flipping business due to inaccurate (overestimated) prices from property valuation models |
Tyndaris Robot Hedge Fund | AI-powered automated trading system controlled by a supercomputer named K1 resulted in big investment losses, culminating in a lawsuit |
Sentient Investment AI Hedge Fund | The once high-flying AI-powered fund at Sentient Investment Management failed to make money and was liquidated in less than two years |
JP Morgan's Deep Learning Model for FX Algos | JP Morgan phased out a deep neural network for foreign exchange algorithmic execution, citing issues with data interpretation and the complexity involved |
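The Google Flu Trends failure is widely attributed, at least in part, to screening enormous numbers of candidate search terms against a short flu history, which makes spurious correlations almost inevitable. A hedged sketch of that selection effect, run entirely on synthetic noise (no real search or flu data):

```python
import random

random.seed(42)

N_TERMS, N_WEEKS = 1000, 40

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Pure-noise "search term" volumes and a pure-noise "flu prevalence" series:
# by construction there is NO real relationship anywhere in this data.
terms = [[random.gauss(0, 1) for _ in range(N_WEEKS)] for _ in range(N_TERMS)]
flu = [random.gauss(0, 1) for _ in range(N_WEEKS)]

half = N_WEEKS // 2
# Screen a thousand terms for the one best correlated with flu "history"...
best = max(terms, key=lambda t: abs(pearson(t[:half], flu[:half])))

in_sample = abs(pearson(best[:half], flu[:half]))
out_of_sample = abs(pearson(best[half:], flu[half:]))
print(f"correlation on screened history: {in_sample:.2f}")
print(f"correlation afterwards:          {out_of_sample:.2f}")
```

Selecting the best of many candidates guarantees a strong correlation on the screening window even when the data is pure noise; the correlation then tends to evaporate on later data, which is one mechanism behind persistent forecast overestimates.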
## Image Generation

Title | Description |
---|---|
Playground AI facial generation | When asked to turn an image of an Asian headshot into a professional LinkedIn profile photo, the AI image editor generated an output with features that made it look Caucasian instead |
Stable Diffusion Text-to-Image Model | In an experiment run by Bloomberg, it was found that Stable Diffusion (text-to-image model) exhibited racial and gender bias in the thousands of generated images related to job titles and crime |
Historical Inaccuracies in Gemini Image Generation | Google's Gemini image generation feature was found to be generating inaccurate historical image depictions in its attempt to subvert gender and racial stereotypes, such as returning non-white AI-generated people when prompted to generate USA's founding fathers |
## Natural Language Processing

Title | Description |
---|---|
Microsoft Tay Chatbot | Chatbot that posted inflammatory and offensive tweets through its Twitter account |
Nabla Chatbot | Experimental chatbot (for medical advice) using a cloud-hosted instance of GPT-3 advised a mock patient to commit suicide |
Facebook Negotiation Chatbots | The AI system was shut down after the chatbots stopped using English in their negotiations and started using a language that they created by themselves |
OpenAI GPT-3 Chatbot Samantha | A GPT-3 chatbot fine-tuned by indie game developer Jason Rohrer to emulate his dead fiancée was shut down by OpenAI after Jason refused their request to insert an automated monitoring tool amidst concerns of the chatbot being racist or overtly sexual |
Amazon Alexa plays porn | Amazon's voice-activated digital assistant unleashed a torrent of raunchy language after a toddler asked it to play a children's song |
Galactica - Meta's Large Language Model | A problem with Galactica was that it could not distinguish truth from falsehood, a basic requirement for a language model designed to generate scientific text. It was found to make up fake papers (sometimes attributing them to real authors), and generated articles about the history of bears in space as readily as ones about protein complexes. |
Energy Firm in Voice Mimicry Fraud | Cybercriminals used AI-based software to impersonate the voice of a CEO to demand a fraudulent money transfer as part of the voice-spoofing attack |
MOH chatbot dispenses safe sex advice when asked Covid-19 questions | The 'Ask Jamie' chatbot by the Singapore Ministry of Health (MOH) was temporarily disabled after it provided misaligned replies around safe sex when asked about managing positive COVID-19 results |
Google's Bard Chatbot Demo | In its first public demo advertisement, Bard made a factual error about which telescope first took pictures of a planet outside our solar system |
ChatGPT Categories of Failures | An analysis of the ten categories of failures seen in ChatGPT so far, including reasoning, factual errors, math, coding, and bias. |
TikTokers roasting McDonald's hilarious drive-thru AI order fails | Sample clips in which a production-deployed voice-ordering assistant fails to get orders right, leading to brand/reputation damage for McDonald's |
Bing Chatbot's Unhinged Emotional Behavior | In certain conversations, Bing's chatbot was found to reply with argumentative and emotional responses |
Bing's AI quotes COVID disinformation sourced from ChatGPT | Bing's response to a query on COVID-19 anti-vaccine advocacy was inaccurate and based on false information from unreliable sources |
AI-generated 'Seinfeld' suspended on Twitch for transphobic jokes | A mistake with the AI's content filter resulted in the character 'Larry' delivering a transphobic standup routine |
ChatGPT cites bogus legal cases | A lawyer used OpenAI's popular chatbot ChatGPT to "supplement" his own findings but was provided with citations to previous cases that were entirely fabricated |
Air Canada chatbot gives erroneous information | Air Canada's AI-powered chatbot hallucinated an answer inconsistent with airline policy regarding bereavement fares |
AI bot performed illegal insider trading and lied about its actions | An AI trading chatbot called Alpha (built on OpenAI's GPT-4 and developed by Apollo Research) demonstrated that it was capable of making illegal insider trades and lying about its actions |
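Several of the entries above (the bogus legal cases, Air Canada's chatbot, Galactica) share one failure mode: model output was trusted without verification. One common mitigation is to treat every model-supplied citation as unverified until it is found in a trusted index. The sketch below is hypothetical: the index contents, case names, and function are illustrative, not a real legal database or any vendor's API:

```python
# Hypothetical safeguard: partition model-cited sources into verified and
# unverified by checking them against a trusted index before relying on them.
# TRUSTED_INDEX and all case names below are made up for illustration.

TRUSTED_INDEX = {
    "Smith v. Jones, 123 F.3d 456",
    "Doe v. Roe, 789 F.2d 101",
}

def vet_citations(citations: list[str]) -> tuple[list[str], list[str]]:
    """Split model-supplied citations into (verified, unverified)."""
    verified = [c for c in citations if c in TRUSTED_INDEX]
    unverified = [c for c in citations if c not in TRUSTED_INDEX]
    return verified, unverified

# Citations as returned by a (hypothetical) LLM drafting assistant.
draft_citations = [
    "Smith v. Jones, 123 F.3d 456",              # present in the index
    "Varghese v. China Airways, 925 F.3d 1339",  # hallucinated: not indexed
]
verified, unverified = vet_citations(draft_citations)
print("verified:  ", verified)
print("unverified:", unverified)
```

The point is not the lookup itself but the workflow: anything the model asserts about the outside world is treated as a claim to be checked, never as a fact, before it reaches a court filing, a customer, or a trade.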
## Recommendation Systems

Title | Description |
---|---|
IBM's Watson Health | IBM’s Watson allegedly provided numerous unsafe and incorrect recommendations for treating cancer patients |
Netflix - $1 Million Challenge | The recommender system that won the $1 Million challenge improved the baseline RMSE by 8.43%, but this performance gain did not justify the engineering effort needed to bring it into a production environment |