AI, Humanoid Robots, the Singularity, or Curiosity Killed the Cat

Webtek Media

Even before The Jetsons, Isaac Asimov gave us the Laws of Robotics in a short story called “Runaround,” published in 1942. The idea of creating machines with brains that could do our work for us has been a human dream across many eras, but it was the stuff of fiction until now. And although we are closer than ever to robots that think like humans, and although we already use artificial intelligence in many ways, we have not created the ‘singularity.’ There is currently a race to build the iconic robot: a thinking, feeling machine that can anticipate our needs and moods, relieve us of tedious tasks, and take on the dangerous work that would kill or maim fragile humans.

Asimov’s Laws were written to protect humans from robots. We have always feared the arrival of humanoid robots even as we head pell-mell toward producing them. Why do we keep going? If we understood that, we would have unlocked the mystery of what drives humans to do anything.

The very fact that Asimov wrote his laws reminds us that we also know the dual nature of our reality. Regardless of what we invent, create, or do to solve a problem, the solution eventually reveals negatives as well as positives. Asimov wanted to look at what could go awry with robotics and guard against it. Clearly, the more intelligent we make our creations, the more we endow them with human decision-making skills and the capacity to learn (perhaps even beyond human intelligence), the more nervous we become that robots will no longer need humans and may harm us.

Asimov wrote:

First Law of Robotics

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law of Robotics

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law of Robotics

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
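
One way to read the Laws is as an ordered hierarchy of constraints, where each law yields to the ones above it. The little Python sketch below is purely illustrative and is not anything Asimov wrote; the Action fields and their names are my own invention for the sake of the example.

from dataclasses import dataclass

# Illustrative only: a proposed action, described by hypothetical yes/no fields.
@dataclass
class Action:
    harms_human: bool = False        # would injure a human, or let one come to harm
    disobeys_order: bool = False     # refuses an order from a human
    order_is_harmful: bool = False   # the order itself would harm a human
    endangers_self: bool = False     # puts the robot's own existence at risk
    self_risk_required: bool = False # that risk is needed to satisfy Law 1 or 2

def permitted(a: Action) -> bool:
    """Check the three Laws in priority order; later laws defer to earlier ones."""
    if a.harms_human:                                   # First Law
        return False
    if a.disobeys_order and not a.order_is_harmful:     # Second Law
        return False
    if a.endangers_self and not a.self_risk_required:   # Third Law
        return False
    return True

# Refusing an order is allowed only when the order would cause harm.
print(permitted(Action(disobeys_order=True, order_is_harmful=True)))   # True
print(permitted(Action(disobeys_order=True, order_is_harmful=False)))  # False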

We have had some evil robots, like HAL the computer in 2001: A Space Odyssey and in some of Asimov’s books, like I, Robot. In fact, Asimov wrote many books about robots. We have had robots we fell in love with, like C-3PO, R2-D2, and BB-8. We have had robots of war, like the AT-AT walkers in Star Wars.

Since robots obey their owners, and people are both good and evil, robots in fiction represent both sides of our nature. Recent books featuring robots and AIs seem to be deliberately presenting us with kinder, gentler robots whose human-like personalities make them very appealing. Authors seem to want to persuade us that Asimov was wrong when he suggested that robots that are too human may turn on us; after all, he implied, why would they need us? His robots learn that humans are not worthy of respect. Modern authors picture robot-human interactions as far more emotional and positive.

Martha Wells presents us with The Murderbot Diaries, whose main character is a robot designed to be a ‘murderbot’ but doesn’t like its role in life, finds a way to disable its governor module, and goes rogue, but in the most heroic and entertaining ways. Kazuo Ishiguro, in Klara and the Sun, presents us with the very tuned-in AI Klara, who runs on solar energy and seems to ‘out-human’ the humans by taking such good care of the person she is purchased for. Modern sci-fi of the world-building variety mixes humans and robots indiscriminately. We even have the dark view of humans who exploit robots, as in Blade Runner. Google (which runs on AI) can give you a list of books about robots.

We may be debating whether we should advance artificial intelligence, how far we should go with it, and whether we will ever reach the ‘singularity,’ but all the while we are already using AI in many spheres. We have manufacturing robots, self-driving cars, smart assistants, proactive healthcare management, disease mapping, automated financial investing, virtual travel-booking agents, social media monitoring, inter-team chat tools, conversational marketing bots, natural language processing tools, the Facebook news feed, Google Search, book recommendations, iRobot’s Roomba and Braava to vacuum or mop our floors, GPS, facial recognition, handwriting recognition, speech recognition, virtual reality, and artificial creativity, and if you google ‘artificial intelligence’ you will find even more. Some of these involve simple algorithms; some are more complex.
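
To make concrete what the ‘simple algorithms’ end of that spectrum can look like, here is a tiny Python sketch of a book recommender that just counts which titles are bought together. The purchase data and the recommend function are invented for illustration and do not describe how any real service works.

from collections import Counter
from itertools import combinations

# Made-up purchase history: each set is one customer's basket of titles.
purchases = [
    {"I, Robot", "Klara and the Sun"},
    {"I, Robot", "All Systems Red"},
    {"Klara and the Sun", "All Systems Red", "I, Robot"},
    {"Fall; or, Dodge in Hell", "All Systems Red"},
]

# Count how often each pair of titles appears in the same basket.
pair_counts = Counter()
for basket in purchases:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def recommend(title, top_n=3):
    """Suggest the titles most often bought alongside the given one."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == title:
            scores[b] += n
        elif b == title:
            scores[a] += n
    return [t for t, _ in scores.most_common(top_n)]

print(recommend("I, Robot"))  # -> ['Klara and the Sun', 'All Systems Red'] on this made-up data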

Neal Stephenson, in his book Fall; or, Dodge in Hell, suggests that the eventual outcome of reaching the singularity is oblivion, or living forever but only as digital beings. Is he issuing a warning like Asimov’s Laws, one that experts say robotics has already moved beyond, or is his warning, informed by where robotics and computing have actually gone, one we should heed? We are close to having quantum computing that does not require extreme, near-absolute-zero conditions to function, allowing us to handle data with exponentially greater speed and complexity. What good things will come with quantum computing, and what will we do that might bring harm to humans, animals, or the planet? We don’t know, but it looks like we’ll find out, unless the climate changes wrought by human excesses knock us back into a more primitive age. Stop or go on, yes or no? I think we all know the answer.
