When AI facial recognition arrests innocent people

Lawsuits are mounting in the United States against the police use of facial recognition to arrest people. The most recent suit, filed in Detroit in August 2023, is the sixth such case in the past three years.

Of course, the humiliation of being wrongly arrested because an artificial intelligence (AI) made a mistake is a horrifying event that can have devastating consequences for a person.

This is all the more true if the false accusations are not discovered in time and the victim faces imprisonment.

On the other hand, proponents of this technology claim that it has made law enforcement agencies much more efficient.

In their view, these problems can be solved by fixing software bugs and consistently using higher-resolution imagery.

However, how ethical is it to keep “testing” facial recognition technology (FRT) and arresting people who may be innocent?

And how ethical is the use of AI facial recognition in general, given that it constantly violates our privacy by making it possible to identify people at any time without their consent?

First, let’s look at the damage done so far.

Face recognition errors in the past

A pregnant woman, 2023

The most recent case in which FRT misidentified a person happened in Detroit earlier this year. To make matters worse, the victim, 32-year-old Porcha Woodruff, was eight months pregnant at the time.

Woodruff was arrested in front of her two daughters (6 and 12 years old) and had to spend a day at the police station. Afterward, feeling stressed and unwell, she went to a medical center, where she went into labor.

Doctors diagnosed her with dehydration and a low heart rate. Not the best way to spend some of the trickiest days of pregnancy.

Woodruff wasn’t the only victim of FRT failures.

False allegations based on blurry surveillance footage, 2020

In January 2020, Robert Williams was accused of stealing five watches worth $3,800.

A few pieces of blurry surveillance footage were all Detroit police needed to arrest the man, who lay handcuffed on his lawn in front of the neighbors while his wife and two young daughters could only watch in despair.

In theory, facial recognition matches may serve only as an investigative lead, not as the sole basis for charging Williams with a crime.

For the police, however, the match was enough. They arrested him without further evidence, although it eventually turned out that Williams had been on his way home from work at the time of the crime.

A closer look shows that these are not isolated cases: there have been a number of similar incidents over the years.

A fake ID as the basis for a wrongful arrest, 2019

In 2019, a shoplifter left a fake Tennessee driver’s license at the scene in Woodbridge, New Jersey, after stealing candy.

When the fake ID was run through a facial recognition scanner, Nijeer Parks was identified as a “strong” match.

He was arrested and, having prior drug convictions that exposed him to double the prison sentence, he debated whether a confession might be the better option.

Luckily for him, he was finally able to prove his innocence when he produced a receipt for a Western Union money transfer made at the same hour as the shoplifting – at a location 30 miles from the gift shop.

According to criminal defense lawyers, it is not uncommon for people wrongly accused on the basis of facial recognition to confess, even though they are innocent.

The NY sock case: 6 months in jail for a “possible match”

In 2018, another man was accused of stealing a pair of socks from a TJ Maxx store in New York City.

The entire case rested on a single, fuzzy security video that yielded a “possible match” months after the incident.

After a witness confirmed that “he was the man,” the accused spent six months in prison before being found guilty – though he still maintains his innocence.

The defense’s argument? The man was actually registered at a hospital for the birth of his child at the time of the crime.

In some of the cases above, there was exculpatory evidence that the suspect had been far from the scene of the crime (it prevailed in two cases, but not in the other).

Not everyone will be so lucky, however.

In other words, the cases known to date may represent only a small fraction of the innocent people currently in prison, or facing prison time, because of a false facial recognition match.

“Regulate” vs. “Prohibit”

As is often the case in life, the examples above are less about the tools themselves than about how they are used.

In many situations, law enforcement agencies use FRT as the sole piece of evidence to put people in jail, rather than treating a potential match as one lead in a broader investigative process.

Sherlock Holmes might have welcomed the technology. But he would spend his time analyzing the evidence before accepting it as fact.
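In software terms, that discipline could be encoded as a simple rule: a match, however strong, only opens a lead and never closes a case. Here is a minimal sketch in Python; the `Match` class, the `triage` function, and the 0.90 threshold are all hypothetical and not drawn from any real police system.

```python
# A minimal sketch of treating a facial recognition hit as a lead rather
# than as evidence. Everything here (names, threshold) is hypothetical.

from dataclasses import dataclass

@dataclass
class Match:
    candidate_id: str
    score: float  # similarity score in [0, 1] from a hypothetical FRT system

LEAD_THRESHOLD = 0.90  # hypothetical cut-off for even opening a lead

def triage(match: Match) -> str:
    """Route a match into an investigative workflow instead of an arrest."""
    if match.score < LEAD_THRESHOLD:
        return "discard: score too low to justify a lead"
    # Even a strong score only opens a lead; corroboration is mandatory.
    return ("open lead: verify alibi, gather physical evidence, and seek "
            "independent witnesses before considering an arrest")

print(triage(Match("case-042", 0.97)))
```

The point of the sketch is that the strongest possible output is still only an instruction to investigate further – the arrest decision never rests on the score alone.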

There is, however, an even more serious problem: the technology itself is deeply biased, which makes its use controversial at best.

In 2019, the National Institute of Standards and Technology (NIST) released a study that found growing evidence that FRT is tied to significant racial bias.

Often, if not regularly, people with darker skin tones, younger people, and women are misidentified by AI. Asian and African American people are up to 100 times more likely to be misidentified; for Native Americans, the error rate is even higher.

Demographic factors such as age and gender also contribute, and the disparity can be even more pronounced in less accurate systems.
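To make the statistic concrete: a bias finding like NIST’s comes from computing error rates separately for each demographic group and then comparing them. The Python sketch below shows the shape of that calculation with invented data; the group labels and numbers are illustrative only and are not NIST’s.

```python
# A toy illustration of the kind of disaggregated analysis behind the NIST
# finding: computing a false match rate per demographic group.
# The data below is invented purely for demonstration.

from collections import defaultdict

# (group, was_false_match) pairs from a hypothetical evaluation run
results = [
    ("group_a", False), ("group_a", False), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
false_matches = defaultdict(int)
for group, is_false_match in results:
    totals[group] += 1
    false_matches[group] += is_false_match

for group in sorted(totals):
    print(f"{group}: false match rate = {false_matches[group] / totals[group]:.2f}")
```

If the rates differ sharply between groups – as they do in this toy data – the system is biased even if its overall accuracy looks acceptable on paper.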

In addition to the massive concerns that FRT is disproportionately targeting people of certain ethnicities, the use of this technology could violate privacy and civil liberties.

Real-time public surveillance identifies individuals without their consent, and aggregated databases are often created without any rules defining their legality.

Biometric data can be collected far too easily and covertly, and used for all sorts of purposes – including wholesale monitoring of our private lives – which many of us are likely to find unacceptable.

Technical vulnerabilities allow captured footage to be used for all sorts of malicious activities: identity theft, deepfakes, physical or digital counterfeiting, and even harassment.

These limitations may be overcome over time. Until policies are developed to regulate the use of FRT, however, innocent people will continue to be prosecuted.

Some cities, such as San Francisco, have completely banned the use of facial recognition by police and other authorities. Many argue that this may be the only solution to the problem.

Conclusion

The use of FRT for law enforcement purposes is a highly controversial topic. It is undoubtedly a powerful threat detection tool when a rapid response is essential, such as stopping terrorists or securing airports.

However, many believe that this technology is an unacceptable intrusion into privacy and that constant surveillance by the prying eyes of a government is a dangerous scenario.

What is certain is that this technology is not ready for deployment in its current state – at least not without the risk of serious repercussions.

But this lack of readiness is due not only to FRT’s technical limitations, but also to the inappropriate ways people use it.

In other words, for FRT to serve justice, we need a solid set of laws and rules to govern it. Who watches the watchmen?
