When people talk about artificial intelligence (AI) that simulates human thinking, they are usually referring to a specific type of AI: neural networks. Neural networks are loosely modeled on the human brain, with thousands of algorithmic nodes that process data independently but in a coordinated manner.
However, the fact that this similarity exists does not mean that AI has developed human thinking capacity. There are many differences between natural and artificial brains, both in structure and scope. This means we still have a long way to go before AI comes close to matching the power and complexity of the human mind.
Fast and powerful artificial intelligence
Artificial Neural Networks (ANNs) are useful in a wide range of applications. Their ability to parse and quickly analyze complex data patterns makes them better suited than other types of AI to fast-changing situations such as autonomous vehicle operation and real-time dialogue.
According to Akash Takyar, CEO of LeewayHertz, a digital solutions developer, most neural network architectures consist of several layers, nodes, and functional elements. This structure helps handle distortions, data loss, and updates.
In most cases, these designs are inspired by the neurons, synapses and hierarchical structures of the human brain. Input data flows through each layer of the ANN, where it is processed and converted into some form of output – usually a decision, recommendation, or prediction.
In this way, it is still a computer that processes bits and bytes, but the paths it uses to convert raw data into actionable intelligence are more complex.
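The layered flow described above can be sketched in a few lines of code. This is a minimal, illustrative feedforward network, not any particular production architecture: the layer sizes, the ReLU activation, and the random (untrained) weights are all assumptions chosen to show how data passes through each layer and comes out as a prediction.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0.0, x)

# Illustrative network: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
# Weights are random here; in practice they are learned from training data.
sizes = [4, 8, 8, 2]
weights = [rng.normal(scale=0.5, size=(m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Pass the input through each layer, transforming it at every step."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:   # nonlinearity on hidden layers only
            x = relu(x)
    # Softmax turns the final layer into a "decision": class probabilities.
    e = np.exp(x - x.max())
    return e / e.sum()

probs = forward(np.array([0.2, -1.0, 0.5, 0.3]))
print(probs)   # two probabilities that sum to 1
```

Each layer multiplies its input by a weight matrix and applies a nonlinearity, which is the "processing and converting" step the article describes; stacking many such layers is what makes the paths from raw data to output complex.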
While this may sound like a simulated human brain, recent studies suggest otherwise. A team at MIT recently examined more than 11,000 neural networks and found that they exhibited the grid-cell-like activity characteristic of human thinking only when they were specifically trained to do so.
Research associate Rylan Schaeffer explained:
“What this suggests is that to achieve a grid cell result, the researchers training the models had to integrate these results with specific, biologically implausible implementation choices.”
Without those constraints, few networks developed the cell-like activity that emerges naturally, with no such preconditions, in real brains.
This research suggests that data scientists should probably qualify the claim that neural networks largely mimic the human brain. Given the right parameters, they can produce results based on natural neural pathways, but without those parameters they can still produce results without forming these brain-like architectures.
Ila Fiete, lead author of the paper and member of MIT’s McGovern Institute for Brain Research, said:
“If you use deep learning models, they can be a powerful tool. However, you have to be very careful in interpreting them and determining whether they really shed light on what the brain is optimizing.”
Differences in learning
Another important difference between neural networks and living brains is how they learn. According to Maxim Bazhenov, Ph.D., professor of medicine at the University of California San Diego's School of Medicine, ANNs overwrite previously learned information as new data comes in, whereas brains continually integrate new information into what they already know.

In neural networks, this leads to a phenomenon called "catastrophic forgetting," in which a network suddenly fails at tasks it once mastered or reverses predictions that were once accurate.
Strangely enough, one of the solutions to this problem is to integrate a simple biological function into the artificial model: sleep.
By alternating the training routine between peaks of new data and offline periods, researchers see a decrease in catastrophic forgetting as the model replays old memories without using old training data. This mimics the same kind of “synaptic plasticity” that occurs when we sleep.
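A toy experiment makes the effect concrete. This is a deliberately simplified sketch, not the researchers' actual method: two synthetic tasks are learned by a linear model with weight decay (standing in for the interference that erodes old knowledge), and mixing old examples back in during the "offline" phase plays the role of replay.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "tasks" that use non-overlapping input features, so one set of
# weights has room to solve both. (Entirely synthetic, for illustration.)
n, dim = 100, 4
X_a = np.zeros((n, dim)); X_a[:, :2] = rng.normal(size=(n, 2))
X_b = np.zeros((n, dim)); X_b[:, 2:] = rng.normal(size=(n, 2))
w_true = rng.normal(size=dim)
y_a, y_b = X_a @ w_true, X_b @ w_true

def train(w, X, y, epochs=300, lr=0.1, decay=0.1):
    """Gradient descent with weight decay; the decay slowly erodes weights
    the current task does not use, mimicking interference."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y) + decay * w
        w = w - lr * grad
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Sequential training: learn task A, then train only on task B.
w = train(np.zeros(dim), X_a, y_a)
err_before = mse(w, X_a, y_a)
w = train(w, X_b, y_b)
err_after = mse(w, X_a, y_a)        # task A performance collapses

# Interleaved "replay": mix task A examples back in while learning task B.
w = train(np.zeros(dim), X_a, y_a)
X_mix = np.vstack([X_b, X_a]); y_mix = np.concatenate([y_b, y_a])
w = train(w, X_mix, y_mix)
err_replay = mse(w, X_a, y_a)       # task A performance is preserved

print(f"task A error: {err_before:.3f} -> {err_after:.3f} (sequential), "
      f"{err_replay:.3f} (with replay)")
```

Note one honest difference from the sleep research: this sketch rehearses stored examples, whereas the models in the study replayed old memories during offline periods without access to the original training data.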
Despite these similarities, the fact remains that the human brain is far more powerful than even the most advanced neural network.
When researchers at the Hebrew University in Jerusalem set out to determine how complex a neural network would have to be to match the computing power of a single human neuron, they were shocked by the results. While some neurons are equivalent to "shallow" neural networks, meaning they do not have highly layered architectures, replicating a single neuron from the cerebral cortex required a deep network of five to eight layers, with each layer containing up to 128 computing units.

And this is just one neuron. With more than 10 billion neurons in an average human brain, each requiring its own deep network, computer science still has a long way to go before it can create an artificial equivalent of the human brain.
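A back-of-envelope calculation shows the scale this implies. The figures below follow the article (layers of 128 units, more than 10 billion neurons); the seven-hidden-layer depth is at the upper end of what the text describes, and the input and output sizes are illustrative assumptions, not numbers from the study.

```python
# Rough scale check: parameters in one "neuron surrogate" network,
# multiplied across every neuron in a brain.
hidden = [128] * 7              # seven hidden layers of 128 units (assumed depth)
sizes = [128] + hidden + [1]    # assumed 128 inputs and a single output

# Fully connected layers: a weight per connection between consecutive
# layers, plus a bias per unit.
params_per_neuron = sum(m * n + n for m, n in zip(sizes, sizes[1:]))

neurons = 10_000_000_000        # "more than 10 billion" neurons
total = params_per_neuron * neurons

print(f"{params_per_neuron:,} parameters per neuron surrogate")
print(f"~{total:.1e} parameters to model every neuron this way")
```

Even with these modest assumptions, modeling every neuron as its own small deep network lands above a quadrillion parameters, which is why matching the brain neuron-for-neuron remains far out of reach.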
But this does not mean that AI is a false promise, or that there is no reason to be cautious in its development and implementation. Even an artificial reptilian brain can cause significant damage if left unchecked, just like a crocodile can.
What it does mean is that the artificial intelligence we have today, even the kind modeled after the human brain, is still in its infancy and is not nearly as intuitive and intellectual as our brain.
Rather than being a threat, AI can greatly enhance our innate cognitive abilities – and yes, like a natural brain, those abilities can still be used for good or evil.