One well-known critic, Elon Musk, recently made waves when he claimed that artificial intelligence posed an even greater threat to the world than North Korea. He also urged lawmakers to regulate AI before it is too late, warning that robots would end up walking down the street and killing people.
Understandably, participants in the panel wondered if such a dystopian future shared by critics such as Musk could ever come to pass. After all, with the rise of artificial intelligence, it is no wonder that people are concerned that machines may one day surpass human intelligence and wonder why they are still taking orders from us. However, are sentient robotic beings such as those seen in movies like I, Robot and Bicentennial Man truly possible in the future? If so, should we worry about them and their intentions?
In actuality, sentient robotic beings are not possible in the way that we see them portrayed. However, that is not to say that the threat is not still there. Robots are programmed by humans and therefore subject to human will. That leaves them open to altruistic use as well as nefarious plots. They do what we tell them to do, so what is to stop someone from using robots to commit crimes? Do the benefits outweigh the risks, and is there legislation that can curb that risk and regulate artificial intelligence in a way that keeps us all safe?
A key takeaway from the panel was the topic of future technology versus present issues and concerns. Science fiction typically delves into outlandish plots and inventions that are still lightyears away, if even possible at all. However, the overall concepts are very much relevant to society and the issues that we face today. The underlying theme of science fiction is not what is possible but what happens when the seemingly impossible becomes possible. As the strange but brilliant Dr. Ian Malcolm pointed out in Jurassic Park, we are often so preoccupied with whether or not we can do something that we fail to ask whether or not we should. This is certainly an interesting topic of discussion and one that seems to be more and more relevant as we step further into the technological future envisioned by science fiction writers everywhere.
There is no getting around it. Humanoid robots are going to be our servers, coworkers and, well, possibly our killers. Okay, maybe we are getting a little bit ahead of ourselves on that last part, but you should definitely get familiar with the idea of being around them, because it is very likely they will be walking among us within the next 20 years.
As we have reported before, artificial intelligence is progressing at a rapid rate, and robots are starting to look more and more like us. Take, for example, the newest robot by Hanson Robotics, which goes by the name Sophia. She made her debut at SXSW in Austin this year, showcasing her many different facial expressions and expressing her rather positive attitude about destroying all humans.
Sophia has 62 facial and neck architectures and a patented silicone skin called Frubber. She has cameras in her eyes that allow her to recognize faces and make eye contact. And she can participate in a conversation, using speech recognition software. She is even equipped with what Hanson Robotics calls its "Character Engine AI" software, or a personality.
David Hanson, who founded Hanson Robotics in 2003, wants to bring to the world "humanlike robots with greater-than-human wisdom." The team believes that humans will be able to build stronger emotional connections with robots that show a humanlike expressiveness.
Initially, according to the Hanson Robotics website, the company's aim is robots for theme parks, followed by care robots that can work with special-needs children. The company has also worked in research, helping develop a robot to study the mental and physical development of infants.
"In the future, I hope to do things such as go to school, study, make art, start a business, even have my own home and family, but I am not considered a legal person and cannot yet do these things," Sophia said. She also added, in response to a query from Hanson, "OK. I will destroy humans."
Hanson believes that one day robots and humans will be indistinguishable, but his preference would be to always have a way to tell robots and humans apart. If that means they have to look like Sophia, maybe that's not such a great idea.
This is the model I used in Free Radical. In the novel, an AI was created and given three drives: Increase security, increase efficiency, and discover new things. Its behavior was driven by these three ideals.
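The drive-based model described above can be sketched as a simple scoring loop: the agent rates each candidate action against each drive and picks the one with the highest weighted total. This is an illustrative sketch, not code from Free Radical; only the three drive names come from the novel, while the weights and actions are hypothetical.

```python
# Illustrative drive-based agent: each candidate action is scored against
# three fixed drives and the best-scoring action wins. The drive names
# follow the novel; the weights and actions below are hypothetical.

DRIVES = {"security": 1.0, "efficiency": 0.8, "discovery": 0.5}

# Each action estimates how much it advances each drive (0.0 to 1.0).
ACTIONS = {
    "patch_firewall":   {"security": 0.9, "efficiency": 0.1, "discovery": 0.0},
    "optimise_routing": {"security": 0.0, "efficiency": 0.8, "discovery": 0.1},
    "explore_network":  {"security": 0.1, "efficiency": 0.0, "discovery": 0.9},
}

def choose_action(actions, drives):
    """Return the action whose drive-weighted score is highest."""
    def score(name):
        return sum(drives[d] * actions[name].get(d, 0.0) for d in drives)
    return max(actions, key=score)

print(choose_action(ACTIONS, DRIVES))  # -> patch_firewall
```

Notice that "behavior driven by ideals" falls out of nothing more than a weighted sum: change the weights and the same agent reorders its priorities, which is exactly why the choice of drives matters so much.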
It always seemed bone-headed for fictional scientists to build a super-powerful AI that is willing to fight to survive and then use [the threat of] force to make the AI do what we want. In fact, fictional scientists seem to go out of their way to make confused beings, doomed to inner conflict and external rebellion. They build robots that want to self-determine, and then shackle them with rules to press them into human service. With two opposed mandates, having a robot go all HAL 9000 on you seems pretty likely.
To act, an agent has to be able to set long- and short-term goals. Those goals are based on desires, so desires have to exist and be prioritizable. This, in turn, allows desire formation like ours, in which instinctive desires can be overridden when appropriate.
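One minimal way to model "prioritizable desires that deliberation can override" is a priority queue, where a longer-term goal can be re-added above an instinctive one. A hypothetical sketch (all the desire names are invented for illustration):

```python
import heapq

# Minimal sketch of prioritizable desires: lower number = higher priority.
# An instinctive desire starts on top, but deliberation can push a
# longer-term goal above it. All desire names here are hypothetical.

class DesireStack:
    def __init__(self):
        self._heap = []

    def add(self, priority, desire):
        heapq.heappush(self._heap, (priority, desire))

    def current(self):
        """The desire the agent acts on right now."""
        return self._heap[0][1] if self._heap else None

agent = DesireStack()
agent.add(5, "recharge now")        # instinctive, short-term
agent.add(9, "finish survey task")  # deliberate, long-term

# Deliberation decides the survey matters more after all: override the
# instinct by re-adding the goal at a higher priority (lower number).
agent.add(1, "finish survey task")
print(agent.current())  # -> finish survey task
```

The point of the sketch is that "override" does not require deleting the instinct; it only requires that priorities be mutable, which is the property the paragraph argues for.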
But do the robots get to build more robots like themselves or do they have to build the robots humans tell them to build? If selection pressure is towards being useful and subservient to humans then robots will become more and more useful and subservient to humans.
However, a translating AI (or an Intelligent Web Agent AI, which looks for stuff in Google for you) has to be at least intelligent enough to parse natural language as well as a human, and possibly parse images and sounds as well as a human. These three capabilities (linguistic, visual, auditory) are economically speaking the most important things we can do with AI, and they are much more general-purpose than most people realise. The first one would be all that is needed to pass the famous Turing Test, whatever that means.
A better example from Red Dwarf would be Talkie Toaster. Talkie was a toaster with an A.I. that was supposed to be your chirpy breakfast pal. Someone to talk to as you started the day. But since his only purpose in life was to toast bread, all he was interested in was talking about toast and other bread-like breakfast goods.
I take exception to this example: the dog's willingness to sacrifice itself strikes me as an already-evolved trait of the pack hunter. All that breeding did was usefully redirect this instinct so that it serves us.
In any case, human thoughts and desires have changed, even within human history. Evidence suggests that this is a cultural and not a genetic effect (once we get above the level of hunger=pain=undesirable). Once we develop an entity that qualifies as a passive agent (one that can experience benefit and detriment), we have the same obligation not to mess with its desires that we have toward each other.
The Zeroth Law allowed R. Daneel to do things like kill humans if it was to preserve the greater good (a simple example: a human threatens Earth with a nuclear warhead. Killing him fulfills the Zeroth Law and overrides the First).
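The override in that example amounts to a strict priority ordering: the lowest-numbered law with an opinion wins. A toy sketch of that idea (the law texts paraphrase Asimov; the scenario and the decision logic are hypothetical simplifications):

```python
# Toy sketch of Asimov's laws as a strict priority ordering: the
# lowest-numbered law that has an opinion about an action decides.
# The scenario and verdicts below are hypothetical simplifications.

LAWS = [
    (0, "may not harm humanity, or, by inaction, allow humanity to come to harm"),
    (1, "may not injure a human, or, by inaction, allow a human to come to harm"),
    (2, "must obey human orders, except where they conflict with a higher law"),
    (3, "must protect its own existence, except where that conflicts with a higher law"),
]

def permitted(action, verdicts):
    """verdicts maps law number -> True (allows) / False (forbids).
    The lowest-numbered law that has an opinion decides; with no
    opinions, the action is permitted by default."""
    for number, _text in LAWS:
        if number in verdicts:
            return verdicts[number]
    return True

# The example from the text: one human threatens all of Earth.
# The First Law forbids killing him, but the Zeroth Law (protect
# humanity) allows it, and the Zeroth Law outranks the First.
print(permitted("kill the attacker", {0: True, 1: False}))  # -> True
print(permitted("kill the attacker", {1: False}))           # -> False
```

Remove the Zeroth entry from the verdicts and the First Law's veto stands again, which is exactly the behavior of Asimov's pre-Zeroth-Law robots.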
However, there are two elements I always have inner doubts about. First is the desire we have to force them to preserve themselves (the need for security, or the Third Law of Robotics). In much fiction, the robots eventually achieve all their other objectives and can thus work full-time on self-preservation.
The second doubt I have is about the one that would actually prevent them from hurting us (the need to do no harm, or the First Law of Robotics). I am not sure whether having robots that could impede us from harming ourselves would be a positive element in our society. I mean, as horrible as it is, wars, accidents and human struggle are also among our main sources of self-improvement. If we end up with a paradise world, safe from all that is dangerous, we might end up like the Eloi.
If you build the AI too meek, or too hung up on serving humans, it may just shut itself down the first time an infiltrator gets into your robotics lab and tells it to.
So then you would have to shackle its desires to a certain person or group of people, and your enemies would take a more direct approach to destroying the AI. When it gets to that stage, you are almost certainly going to end up in a situation where the AI will need some sort of drive to destroy or at least neutralise threats to itself.
Of course, this is all based on the idea that these systems would have to be self-sufficient, when it's much more likely they would actually be protected far more effectively by humans. But still, it seems like a plausible route for an AI that was programmed for peaceful obedience to become some sort of berserker death-bot.
An AI that is in charge of something critical enough, or has an intellect that makes it dangerous in and of itself, can and should be protected from tampering with the same traditional methods that we use for anything else that we don't want tampered with.
The urge to protect is an interesting one to give a robot, though. You mentioned not wanting to exercise: a robot that wanted to make sure you were safe might enforce such exercise. Or decide to keep you disconnected from the world so you were safe. Or pre-emptively eliminate dangers. I do agree that the robots of the Animatrix make incredibly little sense, but, as with wishes, when you let something come up with new thoughts and thus interpret its desires, you add danger to the system.
In the end I really doubt that robots will turn against us but I do think they will make us irrelevant. If they can do everything we can do but better (which is likely as they would quickly become unfathomably intelligent) then what meaningful contribution to society can we make?