New AI technologies are constantly being introduced. And just as constantly, warnings and concerns are voiced. Sometimes the fears seem exaggerated or an offshoot of conspiracy theories. In other cases they may be justified.
This reminds us of an old Tumblr joke:
- technology enthusiasts: Everything in my house is wired to the Internet of Things! I control everything from my smartphone! My smart house is Bluetooth enabled and I can give it voice commands via Alexa! I love the future!
- programmers/engineers: The latest piece of tech I own is a 2004 printer, and I keep a loaded pistol ready to shoot it if it ever makes an unexpected noise.
- security engineers: *takes a deep sip of whiskey* I wish I had been born in the Neolithic period.
As technologies take hold and become commonplace, the concerns disappear and are replaced by new things that at first seem strange or spooky, straight out of a sci-fi novel.
Just think: self-driving vehicles are consistently touted as the way of the future, while some people close to the industry claim the technology isn't even remotely ready yet.
That gap is the main source of concern for people who understand the obvious safety risks involved!
Here are six of the AI technologies, self-driving vehicles among them, that survey respondents have found most threatening in recent months.
1) The smart pillow
New types of bedding and high-tech pillows can do more for us when we’re most vulnerable — while we sleep. There’s nothing scary about that, is there?
In a way, it's intuitive to use new AI technologies to improve devices like CPAP or BiPAP machines. Many people suffer from sleep apnea or other sleep disorders, so why not apply AI to the medical technology that treats them?
Well, for some people, including fans of dark humor, the idea of machines tracking their sleep is just plain creepy. Take, for example, smart pillows that gently nudge your head in different directions and can be connected to your smartphone.
As long as its gentle prompts do you good, you're fine, but what if the pillow starts doing things you wouldn't agree to if you were awake?
Apply this concern to any technology we use to monitor or assist our sleep!
2) AI and simulated pain
Much has been said about the application of AI in pain management, but how about the opposite – using AI to simulate pain via the human central nervous system?
If you're wondering where this would be applicable in retail and industry, it's in the gaming market. We're getting closer to virtual reality gaming in which people walk around in virtual environments.
Some companies are pioneering direct heat applications and certain types of impact that produce a physical reaction when the player is shot or stabbed during gameplay – scenarios familiar from quite a few modern shoot-'em-up games.
If you’re the speculative type, you can probably see where this is headed. There are many ways these technologies can go overboard and lead to some pretty scary and nefarious AI applications.
3) Self-driving vehicles
At this point, we come back to the concerns of letting a computer control your car.
Driving a car is not an easy task. We talk about self-driving vehicles’ ability to navigate the roads, but we tend to overlook many of the intuitive and instinctive aspects of the human task of driving.
Watch video footage of real drivers using Tesla Autopilot on Boston streets (not without human intervention!), and you'll see why many of these self-driving technologies unfortunately still lag behind when it comes to safe driving in traffic.
A single sensor failure or other mishap is enough to cause a fatality, and that's one reason we won't see fully self-driving vehicles deployed any time soon, especially on streets with heavy pedestrian traffic.
Some experts believe road freight will come first, but even that demands a level of safety that today's AI may not be able to provide.
4) Computer chip implants
Concerns about implanted microchips are as old as computer technology itself. Many of them stem from something even older: the biblical Book of Revelation and its "mark of the beast" forcibly implanted under the skin.
That being said, people also have other, more prosaic fears about having chip implants in their bodies, particularly for cognitive purposes. One Pew Research Center study found that, across a range of questions, implanted computer chips were by far respondents' top concern and the "scariest" AI technology.
5) Weapons Technology
Here’s a slightly different case where the AI simply gives people the ability to do nasty things.
A report by The Verge described how an AI program proposed no fewer than 40,000 different candidate chemical weapons in six hours.
The point here is not that the AI itself would do anything menacing or dangerous to humanity. It's that it hands human actors the keys to do these bad things themselves.
AI applications for weapons tend to make them even more powerful, and weapons are already plenty frightening to anyone with a shred of common sense!
As such, some of these applications are on the radar of those who believe AI must be used for good, not for harm.
6) Big machines
Some people are afraid of self-driving tractors; others would give a wide berth to a trash compactor that seemingly does its job without human management or intervention.
Large machines, many people feel, should be controlled by humans, not by a computer algorithm.
In this and many other respects, the concerns about AI have to do with the combination of non-human cognitive systems and large physical devices.
As long as AI's work takes place in cyberspace, we feel the technology is more under control. Is that a false sense of security? In some cases yes, in others no. These examples are just the tip of the iceberg when it comes to "scary" AI. Other reports cover even more intangible horrors, such as machines that can read your mind!
Striving for explainable and transparent AI is part of the response to these and other frightening scenarios. By taking a human-centric approach and promoting trustworthy AI that avoids black-box algorithms, we're trying to ensure confidence in the development of new technologies. And that will make all the difference in how we experience technology in the future!