Artificial Intelligence (AI) is poised to change our world forever and represents one of the greatest technological revolutions of this century.
But like any other invention made by humans, mistakes, glitches and unplanned accidents can happen.
While some of these are minor issues that merely delay a project or stall it in the early stages of development, others have far more dire consequences.
A failed AI deployment can leave a well-known brand red-faced and tarnished – albeit sometimes in hilarious ways.
Let’s look at some of these embarrassing, disastrous, and sometimes hilarious AI glitches over the past few years:
Silenced by the AI assistant
A few years ago, in 2018, AI assistants were still quite a novelty and had created a very profitable new market.
Unsurprisingly, many players jumped on the bandwagon, and LG was one of them, launching Cloi, a small talking robot designed to control a smart home through voice commands.
However, at her public debut, Cloi apparently took a dislike to her presenter, humiliating LG’s US marketing chief David VanderWaal on stage.
Before long, the tiny and cute AI was repeatedly ignoring his commands, leaving an embarrassed and frustrated VanderWaal in silence (the moment lives on via YouTube).
Maybe Cloi wanted to signal that it was time to take a break from the relationship.
The (not so) tiny difference between “bald” and “ball”
In October 2020, when the Covid-19 pandemic often meant going without human staff, a Scottish football club used an automated camera to record a game.
The automatic camera worked well for a while, recording the game between Inverness Caledonian Thistle and Ayr United at the Caledonian Stadium with no problems.
However, partway through, the camera mistook a linesman’s shiny, bald head for the ball itself.
In one extremely strange turn of events, the camera kept tracking the poor man’s head and placing it in the center of the frame, depriving spectators of the actual gameplay.
We all look forward to a future where football clubs will mandate the wearing of hats and wigs for all linesmen and players.
When facial recognition doesn’t recognize you – at all
According to our recent research, facial recognition appears to be far from a reliable technology, and its mistakes can have devastating consequences for people’s lives – up to and including prison sentences.
However, sometimes these mishaps are particularly embarrassing for the developers of these tools, especially when AI misunderstandings lead to unpredictable errors.
One such case involved the Chinese government itself. A method used in many cities to discourage people from crossing the street illegally is to publicly shame pedestrians.
Their faces are captured by street cameras and then shown on big screens, along with the legal ramifications.
In 2018, such a camera captured the face of Dong Mingzhu, the billionaire who heads China’s largest air conditioner maker, as it appeared in an advertisement on the side of a passing bus. The camera reacted to her photo and publicly shamed her even though she wasn’t there.
Needless to say, the Chinese government was the most embarrassed party here, but to keep things fair and balanced, it wasn’t the only one to receive its own dose of facial recognition-based embarrassment.
In the same year, Amazon’s Rekognition surveillance technology incorrectly matched the faces of 28 members of Congress to mug shots of people who had been arrested for crimes.
Maybe the AI took the claim that all politicians are criminals a bit too literally…
Why AI should never replace your doctor’s advice
Another government embarrassed by flawed AI was the UK government in 2020.
With the coronavirus pandemic in full swing, UK health officials launched CIBot, an AI-powered virtual assistant designed to provide people with useful information about the COVID-19 virus.
The idea was to provide the public with important advice, but the tool didn’t limit itself to evaluating official sources and went a little too far.
In the end, the bot provided inaccurate information about the severity and transmission routes of the virus and recommended unproven treatments, including inhaling steam. At least we can be thankful it didn’t end up recommending bleach as a therapy.
When generative AI starts making things up
Many say that generative AI is like a child taking its first steps into the world of real, confident intelligence.
There are cases where this claim rings particularly true: when children are asked a question they know absolutely nothing about, they often make things up on the spot, either to look good or to stretch their very limited knowledge of the world as far as it will go.
A few months ago, Google’s AI chatbot Bard apparently made the same mistake, much to the chagrin of its own creators – and in its very first demo, no less.
On the question “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old?” the chatbot gave a bullet-point reply, which included saying that the telescope “captured the first-ever images of a planet outside our own solar system”.
Some astronomers aptly noted that the first image of an exoplanet was taken nearly 20 years earlier, in 2004.
This “childhood mistake” would not have been so bad had it not caused Google’s shares to lose $100 billion in market value in a single day.
While these AI blunders aren’t as horrible as the cases where AI truly went haywire, they can be a source of significant embarrassment for the companies and developers behind them.
Still, we can’t deny how entertaining it can sometimes be to watch the absurdities created by inexperienced or flawed generative AI.
As humans, we learn more from mistakes than successes – we can only hope that those mistakes will help the AI improve even faster.