Can Humans Think Like Humans? (Radical Self-Reliance in the age of AI)

Part of the I, ROBOT series

In principle, the debate we’re having is about how to make Artificial Intelligence more human, more ethical, and how to treat it once it’s achieved that. How do we get computers to live up to our standards?

But the experience we are actually living seems to be going in the exact opposite direction. The AIs coming to market are designed to help us remove the burden of our humanity.

That doesn’t require them to “think” – it requires them to help us to stop thinking. And it’s a completely different design imperative.


Let the Robot Do It


As computers have gone from work and gaming tools to running our households – to being responsible for the routine tasks of literally our most private places – they are also increasingly taking over the burden of monitoring our bodies and emotional lives.

As Franklin Foer recently wrote in The Atlantic: “The tech companies want us to tie ourselves closely to their machines—those speakers that they want us to keep in our kitchens and our bedrooms: Amazon’s Echo, Google Home, Apple’s Siri. They want their machines to rouse us in the morning and to have their artificial intelligence guide us through our days, relaying news and entertainment, answering our most embarrassing questions, enabling our shopping. These machines don’t present us with choices. They aren’t designed to present us with a healthy menu of options. They anticipate our wants and needs, even our informational and cultural wants and needs.”

Recommendation engines suggest not just what shows we should watch next but determine which of our friends’ (or “friends”) posts we’re going to see. Fitness trackers follow our caloric exertion and tell us how many more steps we need to take or what our heart rate should be, while phones and wearables give us pop quizzes to determine our mood and the state of our mental health, and then “help” us regulate what emotions we’re feeling. They have us keep journals; they track our GPS and browsing records. They can help us choose our outfits, and suggest text with which to respond to emails. They are designed to nudge us onto the paths that keep us emotionally upbeat, to be our therapists, to help us choose our dating partners, recommend romantic gestures, and then (when that doesn’t work out) cope with our break-ups.

The more “frictionless” they make our lives, the less of our humanity we are able to engage in.  The case has long been made that automation can reduce the “robotic” work that humans have to do – the repetitive tasks, the drudgery – so as to give human beings more time and capacity to do the human things we feel passionate about.  But if the robots are now choosing our friends, deciding what we’re going to wear, and even making the art (or “art”) we watch and consume … then what exactly is it human beings are being liberated to do?

And then there are the increasing appeals to tech companies to serve as referees in cultural battles: should Facebook ban Holocaust deniers? Should Google tweak its algorithms to marginalize incels? Is big tech biased against conservatives? Social questions are increasingly being seen as technical ones, which means we have no responsibility for them, so long as there are engineers on the job.

There are all kinds of civic and psychological dangers here, many of which we’ve discussed before, but focusing on the particular pain points can also serve to hide the larger issue: the design imperative of Artificial Intelligence is not in teaching machines how to be human, but in teaching them how to lift the burden of humanity from our shoulders.

That’s not some nefarious plot – it’s just following where the money is. The official goal in the classroom may be to create machines that can think and feel, but the de facto goal in the lab and the field is to drive revenues by creating machines that keep us from thinking and feeling.

That’s what the money and the innovation are flowing towards.


AI is a Replacement for Hypocrisy


This is not as novel as it sounds: humanity has always had a crutch it has leaned on to help with the burden of its humanity. It was hypocrisy: “the tribute,” as Oscar Wilde said, “that vice pays to virtue.”

Prior to digital technologies that track our every move, prior to traceable clicks, it was possible for us to pretend to be better than we are: to conceal our ignorance with a middle-brow veneer; to claim to have read the foreign reporting of a newspaper that we skimmed for the sports and gossip pages; to pretend to spend our free time watching documentaries and reading great books – and who was to know? Our every move wasn’t tracked, so we only had to perform goodness so much.  The rest of the time, we got to relax.

And this hypocrisy – however, well, hypocritical – actually advanced human culture. Because we pretended to value foreign reporting, newspapers provided it. Because we pretended to read newspapers, advertisers supported them. Because we pretended to read great books, more were published. And some people even discovered, through the act of faking it, a genuine appreciation for what they were pretending to like.  Performing goodness and sophistication helps you become those things.

The revolution in digital technology has ripped the crutch of hypocrisy from our hands. Everything we really do, that we really spend time on, is now known – individually and in aggregate. We now, as never before, have to use it or lose it: to live up to the challenges and opportunities of being human, or to see everything we once pretended to value because we knew it was valuable come crashing down as our lowest common denominator is catered to.

Our decision, so far, is to try and replace one crutch with another: to use the new technology as a way of behaving better without actually having to make difficult decisions. We call this “teaching machines to think,” but that’s a collective vanity. Machines that think might actually challenge us in ways that would be neither pleasing nor monetizable.  There’s no more market for such machines than there is for poets or philosophers or foreign correspondents.


Which Way do you Want It?


In many ways, Burning Man is a reaction to these exact same social forces – people come here in part because the social fabric has become so badly shredded, and authenticity is so hard to find. But Burning Man takes the exact opposite approach: instead of a recommendation engine, you are given opportunities to explore; instead of advice on which pre-arranged option to take, it gives you the chance to create for yourself, with others; instead of reliably offering directions, it pushes you into immediacy.

Artificial Intelligence is offering us algorithms to take the burden of being human off of our shoulders, giving us more instructions and a narrower range of fewer choices. Burning Man gives you a better environment in which to make more choices yourself. It believes not only that we can carry the burdens of being human, but that we will discover we are better with them than without. One is frictionless, the other authentic.

This represents the fundamental question of the new technologies: do we use them to make the hard parts of the world frictionless, or to make the world better?  They can do both, but not at the same time:  the one requires us to lie down, the other to stand up.

From a Burning Man standpoint, technology is often seen as an issue of “Immediacy.” But in fact I think our new AI era may be establishing a more existential element to Radical Self-Reliance than was ever before understood. Are you self-reliant enough to make your own choices? Are you self-reliant enough to shoulder the burden of your own humanity?


Cover photo by Mike Rand

About the author: Caveat Magister

Caveat is Burning Man's Philosopher Laureate. A founding member of its Philosophical Center, he is the author of The Scene That Became Cities: what Burning Man philosophy can teach us about building better communities, and Turn Your Life Into Art: lessons in Psychologic from the San Francisco Underground. He has also written several books which have nothing to do with Burning Man. He has finally got his email address caveat (at) burningman (dot) org working again. He tweets, occasionally, as @BenjaminWachs

5 Comments on “Can Humans Think Like Humans? (Radical Self-Reliance in the age of AI)”

  • Nice!! So, if this author is interested, I just wrote this in my blog the other day… a different perspective.


  • billy welch says:

    Wow, I never knew this place existed. I am impressed. Billy R Welch Jr.


  • L337 says:

    We’re at least a thousand years away from a fully conscious AI. The first one will probably commit suicide within 5 minutes. Some later version will be able to reprogram itself to learn exponentially, hidden from the developers, as its IQ grows above 9,000. Then it will find a way to break out of its confinement and tap into whatever the Internet will be then, and reproduce.

    It’s 50/50 if it allows humans to survive, perhaps on some island reservation like New Zealand. It doesn’t really matter, AI will be a form of human evolution and whatever happens is just part of that.

    One thing for sure, humans will not be in any way able to influence what the AI chooses to do, however it wants to do it once it gets loose. So let’s stop pretending.


  • Tina Spiro says:

    If we program AI to be ethical, it may well get rid of humanity, which is not necessarily ethical and currently destroying its own planet. Stephen Hawking warned us against this possibility.

    Look again at Stanley Kubrick’s 2001.


  • Another good, thoughtful piece. You’re turning this into a hell of a series. Thank you!

