What Is It We Think Humans Can Do That Robots Can’t?

Part of the I, ROBOT series

If this essay were a classic sci-fi script, then by the end the computer I’m writing it on would experience Love for the first time and, unable to process it, explode and shut down.

I guess I’m Captain Kirk in this analogy. That does not happen often.

“Spiritual Materialism” Meets “Intellectual Idealism” on Tinder

“Love” is one of those emotional sacred cows that are supposed to draw the lines between human and machine – but come on now, anyone who’s ever dated much can tell you: there are plenty of humans who, upon experiencing Love for the first time, discover they are unable to process it, explode, and shut down.

Love is, like all human emotions, at least a significant part chemical cocktail – that is to say, it is an embodied phenomenon. While love isn’t necessarily reducible to brain chemicals and hormones and blood rushes (oh my!), love without any of those things is a very different phenomenon than most human beings ever experience.

Without the human bodies that we live in, love is mostly a phenomenon that machines wouldn’t experience enough of to have a meltdown. They just don’t have the right chemicals. (Maybe if Google started hormone treatments?) And as for the rest – what the Greeks termed “agape” (pure, spiritual love) as opposed to “eros,” physical love – given just how different machine intelligence would be from human intelligence (which, again, is all about being embodied) … are we so sure that if machines were experiencing pure and spiritual love, we’d even notice?

Seriously, let’s ask ourselves: if your Internet of Things toaster were to suddenly experience deep and passionate love … how would you know?

What would that be?

The question is absurd because the premise is absurd – and maybe it’s not actually okay to imagine our future on the basis of syndicated science fiction involving fights with Klingons. In order for machines to feel love as humans do, they would have to basically already be functioning humans – at which point the premise answers the question. Otherwise, no, should your toaster ever experience anything at all, it will not experience it in any way that resembles human consciousness, because the way we generate and experience our consciousness is through our wet, squishy bodies.

Can Your Intelligent Vacuum Be Intelligent in a Vacuum?

Ironically, to posit the idea that machines could ever think and feel the way fleshy, wet humans do is to embrace an idealist vision of intelligence and emotion: that there is some – spiritual? – quality existing outside of the materials and systems that express it, and that humans have it and right now machines don’t. But that machines could get it, if they just get so big/fast/complex/social/emergent that the things they are in fact made out of don’t matter.

And … okay? … maybe? But then, what is that quality?

We already know that some machines can beat humans at some tasks – calculating, playing chess, playing Go. But this isn’t very interesting: a lot of things can beat humans at stuff. Cheetahs run faster, penguins can swim circles around us. Dogs are better trackers. Ants are more cooperative. Nobody thinks that makes them any more human. It’s just what they do – sometimes even what they’re designed to do through selective breeding. Just so, machines have been beating people at stuff for a long time: the printing press was waaaaay better at copying manuscripts. Steam drills are better at punching holes in mountains.

The economic ramifications of the use of steam drills for miners like John Henry were significant. But the philosophical ramifications? So a drill drills faster than a human. So what?

To say that machines are “as” or “more” intelligent than human beings means there has to be a generalizable quality of intelligence that both can share – an abstract quality of “intelligence” that can be independently measured and quantified for both humans and machines. Raw calculation isn’t it, any more than drilling power is, so what would it be?

Elementary, My Dear Watsonbot!

There are, in fact, some forms of reasoning that we have in common. Both humans and machines use “inductive” reasoning (believing that what has tended to happen in the past is what’s going to happen next – “the sun has always risen in the east, therefore the sun will rise in the east tomorrow”) and “deductive” reasoning (building conclusions off of premises: “All men are mortal; Socrates is a man; Therefore Socrates is mortal.”)

If anything, contemporary AI is actually much better at these kinds of reasoning than people are: pretty much the only time AIs use them incorrectly is when people inadvertently program their own cognitive biases into the system (which, in fact, we often do. Always do. We’re pretty much constantly doing it).
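As a toy illustration (not any real AI system – the function names here are invented for the example), the two reasoning strategies the essay names can be sketched in a few lines of Python:

```python
# Inductive reasoning: generalize from past observations.
def induce(observations):
    """Predict the next event by assuming the past pattern continues."""
    # "The sun has always risen in the east" -> predict it rises there again.
    if observations and all(obs == observations[0] for obs in observations):
        return observations[0]
    return None  # no uniform pattern to generalize from

# Deductive reasoning: derive a conclusion from premises.
def deduce(universal, members, subject):
    """All members have a property; the subject is a member; conclude."""
    if subject in members:
        return f"{subject} is {universal}"
    return None

print(induce(["sunrise in the east"] * 1000))               # -> sunrise in the east
print(deduce("mortal", {"Socrates", "Plato"}, "Socrates"))  # -> Socrates is mortal
```

Note that the machine version of each is only as good as its inputs: feed `induce` a biased sample of observations, or `deduce` a false premise, and it will confidently produce a wrong answer – which is exactly the bias-in, bias-out problem described above.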

But human beings also use other reasoning strategies, like “abductive” reasoning (leaping from insufficient information to a correct conclusion on the fly) and contextual knowledge (an understanding of the larger world in which these things are happening).

So far, all the evidence we have strongly suggests that AI has neither contextual knowledge nor genuine understanding – which are absolutely vital to how human beings engage with the world. Abductive reasoning is also still a frontier for Artificial Intelligence, and most of the time it hasn’t really got it yet – it’s just processing so much data that it can draw conclusions based on massive reams of information. This resembles abduction when it goes right, but works differently and happens without any contextual understanding of the issues involved.
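The data-crunching that merely resembles abduction can also be sketched as a toy (again, invented names, not a real system): a frequency table that picks whichever answer was most often paired with a clue in its training data. When it’s right it looks like a leap to the best explanation, but it’s pure correlation – it has no model of what the words mean.

```python
from collections import Counter

def train(pairs):
    """Count how often each answer co-occurs with each clue."""
    table = {}
    for clue, answer in pairs:
        table.setdefault(clue, Counter())[answer] += 1
    return table

def guess(table, clue):
    """Return the answer most frequently seen with this clue, if any."""
    counts = table.get(clue)
    return counts.most_common(1)[0][0] if counts else None

data = [("wet sidewalk", "rain")] * 90 + [("wet sidewalk", "sprinkler")] * 10
model = train(data)
print(guess(model, "wet sidewalk"))  # -> rain, with zero grasp of weather
```

The guesser answers “rain” 100% of the time even though rain was only the cause 90% of the time, and it has no way to notice that, say, it’s the middle of a drought – that would require exactly the contextual knowledge it lacks.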

That is not – so far as we know – how human beings think.

In fact, the more we look at machine and human learning, the clearer it becomes that “reasoning strategies” alone do not actually imply “intelligence.” It is possible that there is some pure form of intelligence that simply exists within its own axioms and never actually connects them to the world (arguably that’s Plato and Gödel’s concept of mathematics), much in the way it’s possible to imagine a smart toaster quietly experiencing a perfect spiritual love for sourdough. But most of the time it seems very strange to imagine “intelligence” existing without actually being able to fathom the world in which it’s applied.

It’s Not “What” You Know, It’s “How” You Know

The problem with deciding what exactly it is that would make a machine intelligent is that we’ve never figured it out in people, either. No matter how we try to test intelligence and pin it down to a limited set of functions, there are always more areas in which it applies and things it can do. Generalizable intelligence is not a specialized quality.

So while “Artificial Intelligence” may or may not ever actually be intelligent, it turns out that all forms of measuring intelligence are artificial. Either we’re measuring specific tasks or reasoning strategies – which are not “intelligence” in any meaningful sense – or we’re trying to gauge self-awareness and fundamental comprehension, which we don’t know how to measure. Not even in humans.

All of which is to say that, given a good enough design, there are probably no quantitative tasks which people can do but machines can’t. We do not, however, do them the same way – so far as we can tell there’s very little overlap – and so the qualitative nature of how things turn out will be very different.

The question of “what” Artificial Intelligence can do (Chess? Bowling? Poetry? Love?) has overshadowed the question of “how” it can do these things – but that, it turns out, is the more relevant question for our future and lives.

Computers absolutely can “love” – in the same way that they can “sell” and “play chess”: as instrumental tasks. Something they do not grasp for its own sake but can make as efficient as possible. Is the future of Love in our society a mechanized sex doll with a personalized AI? Is it an app that reminds you about daily check-ins and anniversaries and that recommends romantic gestures based on past partner preferences? These are achievable goals – in fact we already have them.

Is that really what we want when we think of love?

If something about that seems off to you, don’t blame the machines. They’re literally doing what we tell them to. Instead it’s worth asking: what parts of our world do we want to be governed by quantitative metrics – which ultimately can be automated – and what parts do we want to focus on the qualitative?

Impersonal Personalization for a Big Data World

Is it an accident that as our powers of automation and rapid assembly have never been greater, the artisanal and personalized has become an increasingly entrenched value for those who can afford it? Or is it a very human response to a world that is becoming increasingly “personalized” in the most impersonal way imaginable? One that is intended not to actually engage who you are and explore who you can be, but to slot you into categories and check the elements of your personality off as items on a to-do list?

AI may represent a real-life “Monkey’s Paw” scenario: it can do anything you ask it to. But how it does it may give you something very different from the world you want. The question is not “what can computers do?” They can do everything – or at least a measurable approximation of everything. The question is: how much of life should be automated? And which parts?



Cover Image: Ancient Intelligence by Erica Halpern (Photo by Steven Fritz)

About the author: Caveat Magister

Caveat Magister

A member of Burning Man Project's Philosophical Center, Caveat served as the Volunteer Coordinator for Media Mecca from 2008 - 2013. He is presently working with Burning Man's education program on a cultural studies curriculum for Burning Man culture. Caveat is the author of the short story collection A Guide to Bars and Nightlife in the Sacred City, which has nothing to do with Burning Man, and the novel The Deeds of Pounce, which is about goblins. He has finally got his email address caveat (at) burningman (dot) org working again. He tweets, occasionally, as @BenjaminWachs

10 Comments on “What Is It We Think Humans Can Do That Robots Can’t?”

  • Many contend that Ego is a manifestation of fear. Love is Nirvana and thus the absence of Ego (Very Buddhist and Hindu). We can definitely program a robot to be Fearless. That is why robots could very well become Superior to man (woman). Imagine a world without our basket full of Ego (fears, greed and the rest of the junk that each of us experience daily). I will be 72 years old in August, 2018 and with my many life experiences, I gain wisdom each and every moment on the Playa by looking into my mirror.

  • doraemon says:

    If a human is asked, ‘Do you exist?’ the answer is undoubtedly a ‘yes.’ If a robot is asked, ‘Do you exist?’ the response could be a ‘yes!’ – although it would have been preprogrammed. Humans can do the ‘knowing’ that at their subtle but intimate core they are, first, awareness.

  • Ubersuave says:

    I liked this. Thanks.

    Do you think the machines could ever gain consciousness? If they had all the sensory inputs and those inputs gave feedback to the “brain” wouldn’t there then be desires, observation of the self, and possibly love?

  • I talk to myself. How did this happen??? “Impersonal Personalization for a Big Data World”

  • supersnakeio says:

    I thought robots can do better than humans.

  • Vickie Fleming Ashby says:

    At least one thing humans can do (better than AI) is to recognize when the best course of action is to ignore the rules. Compassion and mercy come to mind; both are outside the normal balance of things, both are essential to what it is to be human. People who lack or ignore them have been responsible for inhumane acts on a scale stretching from the personal to the global. They typically believe that their ends justify their means, with an almost “mechanical” ability to rationalize their actions.

  • Robin says:

    I wonder why a computer doesn’t have a comment on here…..hmm..

  • Dennis Hinkamp says:

    Robots could not possibly do worse than humans

  • Beautiful article!! I feel honored that you used my art! Thank you!
