Part of the I, ROBOT series
If this essay were a classic sci-fi script, then by the end the computer I’m writing it on would experience Love for the first time and, unable to process it, explode and shut down.
I guess I’m Captain Kirk in this analogy. That does not happen often.
“Spiritual Materialism” Meets “Intellectual Idealism” on Tinder
“Love” is one of those emotional sacred cows that are supposed to draw the lines between human and machine – but come on now, anyone who’s ever dated much can tell you: there are plenty of humans who, upon experiencing Love for the first time, discover they are unable to process it, explode, and shut down.
Love is, like all human emotions, at least a significant part chemical cocktail – that is to say, it is an embodied phenomenon. While love isn’t necessarily reducible to brain chemicals and hormones and blood rushes (oh my!), love without any of those things is a very different phenomenon than most human beings ever experience.
Without the human bodies that we live in, love is mostly a phenomenon that machines wouldn’t experience enough of to have a meltdown. They just don’t have the right chemicals. (Maybe if Google started hormone treatments?) And as for the rest – what the Greeks termed “agape” (pure, spiritual love) rather than “eros” (physical love) – given just how different machine intelligence would be from human intelligence (which, again, is all about being embodied) … are we so sure that if machines were experiencing pure and spiritual love, we’d even notice?
Seriously, let’s ask ourselves: if your Internet of Things toaster were to suddenly experience deep and passionate love … how would you know?
What would that be?
The question is absurd because the premise is absurd – and maybe it’s not actually okay to imagine our future on the basis of syndicated science fiction involving fights with Klingons. In order for machines to feel love as humans do, they would have to basically already be functioning humans – at which point the premise answers the question. Otherwise, no, should your toaster ever experience anything at all, it will not experience it in any way that resembles human consciousness, because the way we generate and experience our consciousness is through our wet, squishy bodies.
Can Your Intelligent Vacuum Be Intelligent in a Vacuum?
Ironically, to posit the idea that machines could ever think and feel the way fleshy, wet humans do is to embrace an idealist vision of intelligence and emotion: that there is some – spiritual? – quality existing outside of the materials and systems that express it, and that humans have it and right now machines don’t. But that machines could get it, if they just get so big/fast/complex/social/emergent that the things they are in fact made out of don’t matter.
And … okay? … maybe? But then, what is that quality?
We already know that some machines can beat humans at some tasks – calculating, playing chess, playing Go. But this isn’t very interesting: a lot of things can beat humans at stuff. Cheetahs run faster, penguins can swim circles around us. Dogs are better trackers. Ants are more cooperative. Nobody thinks that makes them any more human. It’s just what they do – sometimes even what they’re designed to do through selective breeding. Just so, machines have been beating people at stuff for a long time: the printing press was waaaaay better at copying manuscripts. Steam drills are better at punching holes in mountains.
The economic ramifications of the use of steam drills for miners like John Henry were significant. But the philosophical ramifications? So a drill drills faster than a human. So what?
To say that machines are “as” or “more” intelligent than human beings means there has to be a generalizable quality of intelligence that we both can share. An abstract quality of “intelligence” that can be independently measured and quantified for both humans and machines. Raw calculation isn’t it, any more than drilling power, so what would it be?
Elementary, My Dear Watsonbot!
There are, in fact, some forms of reasoning that we have in common. Both humans and machines use “inductive” reasoning (believing that what has tended to happen in the past is what’s going to happen next – “the sun has always risen in the east, therefore the sun will rise in the east tomorrow”) and “deductive” reasoning (building conclusions off of premises: “All men are mortal; Socrates is a man; therefore Socrates is mortal.”)
If anything, contemporary AI is actually much better at these kinds of reasoning than people are: pretty much the only time AIs use them incorrectly is when people inadvertently program their own cognitive biases into the system (which, in fact, we often do. Always do. We’re pretty much constantly doing it).
But human beings also use other reasoning strategies, like “abductive” reasoning (leaping from insufficient information to a plausible conclusion on the fly) and contextual knowledge (an understanding of the larger world in which these things are happening).
So far, all the evidence we have strongly suggests that AI has neither contextual knowledge nor genuine understanding – both of which are absolutely vital to how human beings engage with the world. Abductive reasoning is also still a frontier for Artificial Intelligence, and most of the time AI systems haven’t really got it yet – they’re just processing so much data that they’re able to draw conclusions from massive reams of information. This resembles abduction when it goes right, but it works differently and happens without any contextual understanding of the issues involved.
That is not – so far as we know – how human beings think.
In fact, the more we look at machine and human learning, the clearer it becomes that “reasoning strategies” alone do not actually imply “intelligence.” It is possible that there is some pure form of intelligence that simply exists within its own axioms and never actually connects them to the world (arguably that’s Plato and Gödel’s concept of mathematics), much in the way it’s possible to imagine a smart toaster quietly experiencing a perfect spiritual love for sourdough. But most of the time it seems very strange to imagine “intelligence” existing without actually being able to fathom the world in which it’s applied.
It’s Not “What” You Know, It’s “How” You Know
The problem with deciding what exactly it is that would make a machine intelligent is that we’ve never figured it out in people, either. No matter how we try to test intelligence and pin it down to a limited set of functions, there are always more areas in which it applies and things it can do. Generalizable intelligence is not a specialized quality.
So while “Artificial Intelligence” may or may not ever actually be intelligent, it turns out that all forms of measuring intelligence are artificial. Either we’re measuring specific tasks or reasoning strategies – which are not “intelligence” in any meaningful sense – or we’re trying to gauge self-awareness and fundamental comprehension, which we don’t know how to measure. Not even in humans.
All of which is to say that, given a good enough design, there are probably no quantitative tasks which people can do but machines can’t. We do not, however, do them the same way – so far as we can tell there’s very little overlap – and so the qualitative nature of how things turn out will be very different.
The question of “what” Artificial Intelligence can do (Chess? Bowling? Poetry? Love?) has overshadowed the question of “how” it can do these things – but that, it turns out, is the more relevant question for our future and lives.
Computers absolutely can “love” – in the same way that they can “sell” and “play chess”: as instrumental tasks. Something they do not grasp for its own sake but can make as efficient as possible. Is the future of Love in our society a mechanized sex doll with a personalized AI? Is it an app that reminds you about daily check-ins and anniversaries and that recommends romantic gestures based on past partner preferences? These are achievable goals – in fact we already have them.
Is that really what we want when we think of love?
If something about that seems off to you, don’t blame the machines. They’re literally doing what we tell them to. Instead, it’s worth asking ourselves: what parts of our world do we want to be governed by quantitative metrics – which ultimately can be automated – and what parts do we want to focus on the qualitative?
Impersonal Personalization for a Big Data World
Is it an accident that as our powers of automation and rapid assembly have never been greater, the artisanal and personalized has become an increasingly entrenched value for those who can afford it? Or is it a very human response to a world that is becoming increasingly “personalized” in the most impersonal way imaginable? One that is intended not to actually engage who you are and explore who you can be, but to slot you into categories and check the elements of your personality off as items on a to-do list?
AI may represent a real-life “Monkey’s Paw” scenario: it can do anything you ask it to. But how it does it may give you something very different from the world you want. The question is not “what can computers do?” They can do everything – or at least a measurable approximation of everything. The question is: how much of life should be automated? And which parts?