Should we trust robots? Do you trust your cat?

Every night, Fred, my pet cat of going on four years now, curls up at the foot of my bed, purrs, and sleeps soundly. In the morning, he wakes me for his breakfast with irresistible insistence. During the day he lets me comb his long orange tabby hair, plays with his toys, and sits on the floor waiting for a treat while I clean his litter box. If it is sunny out, he will go outside and kill something. This last item shocks me. How does a fellow animal, particularly one whose daily needs are so over-served, spontaneously descend into such atavistic behaviour? What is his motivation? If he were bigger (or I were smaller), would he hunt and kill me? I realize that I really don’t know, and cannot know, him beyond our primitive dependent/provider relationship. I read somewhere once that a cat’s purring and rubbing against its owner’s legs is not a show of affection; it is just a survival technique that the cat hopes will prevent you from skinning and eating it.

Most pet owners are guilty of a certain amount of anthropomorphism. We like to think that we’re bonding with a totally non-judgemental partner when in fact we are hosting a biological robot whose physically limited intelligence has been trained and evolved over thousands of generations of subsistence hunters. Cats lack the complex faculties required for advanced planning and organization, or for social congregation beyond what is necessary to sustain their immediate gene pool. Furthermore, the common house cat shows no sign of remorse for its actions, or even cognition that it may have just extinguished the life of another living being. It is, apparently, unaware of the concept of consciousness except its own.
So there is no logical reason to assign Fred or others of his ilk any human qualities of compassion for others or sense of self in the broader world.

Yet cats have a neocortex orders of magnitude more complex than most artificial intelligence systems (the brains of robots) built to date, outside of a few very specific research laboratories.

“American computer scientist Dharmendra Modha and his IBM colleagues have created a cell-by-cell simulation of a portion of the human visual neocortex comprising 1.6 billion virtual neurons and 9 trillion synapses, which is equivalent to a cat neocortex. It runs 100 times slower than real time on an IBM BlueGene/P supercomputer consisting of 147,456 processors.”1
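To get a feel for the scale of those figures, here is a quick back-of-envelope calculation. It uses only the numbers quoted above; the per-processor breakdown and the real-time extrapolation are my own illustration, not Kurzweil’s or Modha’s.

```python
# Back-of-envelope arithmetic from the quoted simulation figures.
neurons = 1.6e9          # virtual neurons in the simulation
synapses = 9e12          # virtual synapses
processors = 147_456     # BlueGene/P processor count
slowdown = 100           # the simulation runs 100x slower than real time

# How much work each processor is carrying:
synapses_per_processor = synapses / processors   # ~61 million synapses each
neurons_per_processor = neurons / processors     # ~10,850 neurons each

# Naively scaling out the slowdown (assuming perfectly linear scaling,
# which real supercomputers never achieve) gives the hardware a real-time
# cat-scale simulation would demand:
real_time_processors = processors * slowdown     # ~14.7 million processors

print(f"{synapses_per_processor:,.0f} synapses per processor")
print(f"{neurons_per_processor:,.0f} neurons per processor")
print(f"{real_time_processors:,} processors for real time (naive estimate)")
```

Even under that generous linear-scaling assumption, matching a cat’s neocortex in real time would take a machine two orders of magnitude larger than one of the biggest supercomputers of its day.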

Modern AI training algorithms and data sets make up the gap in narrow applications only. Artificial general intelligence, the kind a person could mistake for a real human being, is still decades away. So, if an AI has no observable genetic traits that can be trusted any more reliably than those of a common house cat, but is installed in a mechanical body with thousands of times more killing power, then no, you should not leave it alone in a room with your child.

One of the reasons simple intelligences such as cats and robots are untrustworthy is that, from the perspective of an actual human being, they are only semi-conscious.
Ray Kurzweil, who is on record predicting the singularity (the convergence of human and artificial intelligence) by the year 2045, states that “we need a leap of faith as to what and who is conscious.”2 Kurzweil defines his own leap of faith as:

“Once machines do succeed in being convincing when they speak of their qualia and conscious experiences, they will indeed constitute conscious persons…. She [“a robot or an avatar”] seems, in fact, like a person. Would you accept her as a conscious person?”

He claims that if he were to witness a robot expressing a human reaction such as terror to the prospect of possible destruction, he would react to the robot “in the same empathetic way that [he] would if [he] witnessed such a scene with a person.”

There is a certain circularity to this argument, in which “conscious experiences” = “conscious person”, and he reinforces his bias by referring to the machine as “she”. These semantic concerns aside, I find his point disturbing for two reasons:

a) What he calls a “leap of faith” I would call a willful suspension of disbelief. Later he talks about how audiences seem to empathize with robot characters in movies, such as R2-D2 in the Star Wars saga. I claim that when we sit down to watch a movie, we know at the outset that we will be presented with fiction, and thus we willingly engage with the plot of the story for its entertainment value. We always know we can get up, walk away, and rejoin the real world at any time. If movies or some future virtual-reality entertainment system were to replace reality, then it is not the robots that have gained consciousness; it is the humans that have forgone it.

b) By awarding robots the human quality of consciousness, we enter debates over what rights, if any, these artificial humans hold and what legal obligations they assume. Will it be just as wrong to turn off a robot, or to recycle a defective one, as it is to kill a human?

Bonus question: Is it reasonable to assume that AI and robot technology will continue to develop in a direction that removes the need for human intervention in the operation of the system? Kurzweil distinguishes between biological intelligence and computer- or silicon-based intelligence. At some point artificial intelligence may jump the mechanical/biological barrier, when it becomes more feasible to simulate intelligence using cells and DNA rather than transistors. This will likely change our perception of what is consciousness and what is programming.

There is a lot more written in far greater detail on the safety of future artificial intelligences, particularly once they reach human-level consciousness. The foundational text on this is “Superintelligence: Paths, Dangers, Strategies”3 by Nick Bostrom. A more accessible text, available free online, is “Smarter Than Us: The Rise of Machine Intelligence”4 by Stuart Armstrong.

Personally, I don’t dispute that artificial intelligence can be incredibly useful to humanity, but I am not yet ready to anthropomorphize it to human status.


  1. Ray Kurzweil, “How to Create a Mind”, 2012, ch. 6, p. 128
  2. Ray Kurzweil, “How to Create a Mind”, 2012, ch. 9, p. 209
  3. https://en.m.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies
  4. https://smarterthan.us
