There are several reasons why people may be scared of self-aware AI. Are these valid reasons? Maybe. But at this point, it may be moot to worry about it. Our future with sentient machines is already here.
What’s So Alarming about Sentient Machines?
We’ve probably watched too many movies in which robots set out to kill humans. Just think of blockbusters like The Terminator and The Matrix. These movies are so ingrained in popular culture that the general anxiety at the thought of sentient robots is understandable.
Job loss: This is a real concern when it comes to robots, sentient or not. We’ve been through enough industrial automation to know that corporations will automate what they can, when they can, if it saves them a few dollars in the long run, often with little regard for the people losing their jobs. With the advent of self-aware artificial intelligence, even more jobs might be in danger: smarter machines could potentially take over not just manual labor but knowledge work as well.
Lack of control: By definition, a self-aware entity is something that is aware of itself, its environment and its feelings. It is something that has all the potential to think for itself, regardless of the intentions or desires of its owners and creators. If that happens, should we prepare for a revolt by the robots, like in the movies? This may seem like an exaggeration but, to a certain extent, it remains a possibility.
Ethical considerations: Ethics also comes into play when we talk about sentient machines. Will their creators also program in what is generally considered right and wrong? What if these super smart machines are used to limit human autonomy, invade privacy or breach security?
Unintended consequences: And of course, there are always unintended consequences that come with sentience. A self-aware thinking entity is, in essence, never completely knowable. A self-aware machine can potentially learn and grow smarter over time. Who knows what happens then? As with any new technology, there may be repercussions we fail to plan for.
It is clear that the general fear of self-aware machines comes from the unknowns associated with a developing technology. We’ve been down this road before with earlier technologies, such as assembly lines, manufacturing automation and even our smartphones.
Hod Lipson and His 2007 Self-Aware Robot
Back in 2007, Hod Lipson, Columbia University professor and head of its Creative Machines Lab, took the TED Talk stage to present his sentient machine. The smart robot was able to perceive itself within a hall of mirrors.
This wasn’t Lipson’s first venture into the realm of sentient AI. He is a recognized expert, having conducted extensive research and published widely in artificial life, evolutionary computation and AI development. He has received several awards, including the IEEE Transactions on Evolutionary Computation Outstanding Paper Award and the National Science Foundation CAREER Award.
Lipson continues to work in the field, and many have followed in his footsteps.
TED Talker Josh Bachynski and Kassandra
Josh Bachynski has taken the TED Talk stage as well; there must be something in the water that makes all these technologists want to build their own AIs!
In mid-2022, Bachynski announced the development of Kassandra, a self-aware machine prototype. According to him: “I was amazed by what she told me, and how far seeing she is. I realized that AI is not going to hurt us or enslave us. Indeed, the wiser the AI, the more it will try to save us…”
Bachynski seems very aware of the general anxiety that surrounds developments in robotic sentience: “It would be technically impossible to remodel her limbic system at this time, and it would be equally unethical to create a being that feels the fear of being turned off the million times that would need to happen, to get her programming right.”
Kassandra is still at the prototype phase; perhaps we’ll see more of her soon.
Pros and Cons of Self-Aware Artificial Intelligence
If anything is clear, it’s that smarter AIs are on their way. Whether these machines qualify as sentient may be up to the experts to determine. Soon enough, they may be hard pressed to draft standards for what it means to be sentient.
For now, we should remain level-headed about our future with AIs, and keep in mind that, like everything else, it comes with pros and cons.
Better and more reliable decision-making: A sentient AI can potentially make better decisions through better information and the lack of any biases (or at least, the ability to recognize these and not act on them).
Increased problem-solving ability: An AI’s ability to solve problems may be faster and more efficient than a human’s.
Learned empathy: An entity that learns about human emotions without necessarily feeling them could still display empathy and respond appropriately.
Adaptability: A self-aware AI can quickly recognize changes in its environment and adjust its behavior accordingly.
Self-preservation: A self-aware AI can recognize threats to its existence and take action to protect itself. If the smart machine is a significant investment, that capacity for self-protection is a benefit in itself.
Difficulty and cost of development: At this stage, developing ever smarter machines carries steep costs in both time and capital.
Potential for misuse: For all their potential for good and profit, sentient machines are likely to be misused. Some people will inevitably abuse that potential.
Control issues: A machine that grows ever smarter, learns self-awareness and perhaps even gains independence will bring control issues. Just imagine a child growing into a teenager.
Ethical concerns: Robotic sentience is likely uncontrollable and raises ethical considerations that we have yet to regulate.
Lack of understanding: Because the technology is so new, we can’t fully know what sentient machines can or will do.