Why We Get the Robot Overlords We Deserve

Part of the I, ROBOT series

Many months ago, while still developing this year’s theme, Larry Harvey began receiving a series of robocalls in which an automated system tried to fool him into thinking it was a human woman. It explained away any odd delays or inability to understand simple questions by claiming its headset wasn’t working right.

A few weeks after the 2018 theme was announced, news outlets reported that the Kingdom of Saudi Arabia had granted the first citizenship to an AI (“Sophia”) as part of an effort to attract high-tech businesses to invest there. If true, this means a robot in the shape of a woman arguably has more rights than any actual woman in that country.

These seem like recent, timely developments, but in fact we’ve been playing fast and loose with the definitions of personhood for some time. The landmark Supreme Court “Citizens United” decision declared that corporations have a right to free speech, effectively endowing abstract economic entities with a kind of humanity. “Scientific” tests were used at the beginning of the last century to try to create racial hierarchies of intelligence, which was effectively an exercise in refusing to acknowledge the obvious humanity in others.

Similar questions about “are they REALLY human?” were asked and tragically answered when conquistadors had need for slave labor in the New World. “No,” they said, pointing at human beings, “of course they’re not really human. Just look at them!”

All of which is to say that while a subset of philosophers and theologians may have taken fundamental questions of personhood seriously, when the rubber hits the road, who we have and haven’t included as a person has always been more about economics and prejudice than about principle. If corporations can be given fundamental freedoms, sure, why can’t AIs? If women can be denied property rights and whole nations can be considered chattel based on their skin color, sure, why not the rest of us?

Robots and AI, then, confront us with subtle versions of challenges that we could never answer as a species even when the challenges were obvious.

We talk about personhood as though it is immutable, but we act as though it is a matter of economic convenience, and we always have. Over the past year, reports came out that Uber had male managers pretend to be women in text-based communications with drivers, because it had data showing that its contractors were more responsive and accommodating to women. Note that its preference, given this data, was not to hire more women, but to try to change the public identities of its male employees, which amounts to a kind of corporate-mandated transgendering. If you want to change your gender because you feel it’s fundamental to who you are as a person, moralists will scream at you and call you an abomination; if your company executives want to change your gender because it’s good for business, nobody protests.

Artificial Intelligence will certainly change the world, and it could very well destroy it, but any look at the historical record suggests that we are afraid of AI not so much because of its potential for inhumanity, but because we fear it might start to behave as human beings traditionally have toward other human beings.

We cannot actually imagine how a society could be organized around the idea that people’s humanity and personhood are non-negotiable. We have internalized the idea that seeing personhood as subordinate to economics is a hallmark of intelligence — we have no idea how an alternate approach to intelligent interaction would work.

It’s no accident that when Microsoft turned its learning chatbot Tay loose on Twitter in 2016, its output became horribly racist and sexist within 24 hours. Twitter’s terms of service, its official company line on how to behave, were meaningless. Only the behavior of other users mattered. AI learns through example, not through rhetoric. Its behavior will be guided by ours. If we are indeed teaching AI to follow our example, this could be a significant problem, but it wouldn’t involve anything we’re not already doing to ourselves.

If, on the other hand, we want future Artificial Intelligences to be beneficent, to hold human life as valuable, even sacred, to look out for our best interests … we may have to show it what that looks like first.

The rate at which human social intelligence advances (or doesn’t) turns out to be every bit as relevant as advances in technical capacity. How AI engineers treat their janitors may determine what kind of machine intelligence powers the future.

The questions we have the most trouble answering about AI are the questions we have never satisfactorily answered about ourselves. Before we can “decide” if machines are human, we need to decide if we are. That seems, in theory, like a much simpler problem. But our history suggests that, in practice, it may be far, far more difficult. We get the robot overlords we deserve.

These are the questions and issues we will examine in this series, in the hope that the emergence of AI can make us kinder to one another.


Photo by Jillian Jerat

About the author: Caveat Magister

Caveat is Burning Man's Philosopher Laureate. A founding member of its Philosophical Center, he is the author of The Scene That Became Cities: What Burning Man Philosophy Can Teach Us About Building Better Communities, and Turn Your Life Into Art: Lessons in Psychologic from the San Francisco Underground. He has also written several books which have nothing to do with Burning Man. He has finally got his email address caveat (at) burningman (dot) org working again. He tweets, occasionally, as @BenjaminWachs.

14 Comments on “Why We Get the Robot Overlords We Deserve”

  • Great article! Do you have a cite to the thing about Microsoft’s bot learning racist behaviors? Would love to read more about that.

  • Ty says:

    Bravo! The path you create is simultaneously bold, questionable, convoluted, and informative. Thanks for the ride.

  • Single Ply says:

    “Radical Inclusion” should include robots but they should “Participate” too. I can’t think of a better use of playa robots than to program them to help out their human counterparts (our beloved sanitation workers) and have them clean the portable toilets four times per day!

    Please?

  • Mr. Robot says:

    My 4-year-old grandson has grown up with Alexa in the house, and there have been some comical moments as he’s learned what the pint-sized bot can and cannot do. “Alexa, play TNT,” for instance, gets better results than “Alexa, MORE CHEESE!” So he can at least head-slam in his high chair to AC/DC without parental intervention. But the dark side of all this adorableness is, of course, that he is part of the first American generation in a long time growing up with a slave in the house, and learning the mindset of the master.

  • Sean says:

    Great article, thought provoking indeed.

    After last year I shudder to imagine what a learning robot would develop into after a week at Burning Man.

  • Traveller in Time says:

    We have to find a way to make it commercially attractive to show the AI we are not selfish.

    I am afraid of the AIs, as I know they are superior in some ways. Well, as long as the AIs misunderstand me/us half the time, we are safe.

  • SeaBass says:

    Poignant article. What kind of role model are we providing to intelligent machines? I’d add: what kind of role model will artificially intelligent machines provide us when they exceed our intelligence? (Predicted to occur ~2030.) We are already learning to think like machines with our input/output interface to computers and mobile devices. Actually, humanity is evolving (rather awkwardly) just to keep pace with machines. Of course, there are many kinds of intelligence. Emotional intelligence, for one. So here’s another question: Will machines, and people, learn to have a heart? If so, who will teach whom?

  • Excellent article. It touches upon a number of exciting, if scary, possibilities for our future.

    I have a question for you: are all ‘humans’ that you know truly ‘human’ to you? There are people who are incredibly logical, and there are people who are not. There are people who are temporarily or permanently incapacitated (chemically or biologically). There are people with Asperger’s syndrome. And some people, if they chatted with a stranger online, might not pass Turing tests, even though they are considered human…

    What I am alluding to, perhaps, is that there is not a binary classification of ‘human’ or ‘not human’, but a sliding scale of humanity. And the axes of this sliding scale are likely of high dimensionality.

    When we (as a society) come to realize that much of life has elements that we often attribute strictly to humanity, then we will begin to avoid some of the elements of humanism that are dangerous to our continued existence.

    When we realize that an individual ‘human’ is not strictly the biological assemblage blueprinted by our own DNA, but that we are formed from billions of microbiota, and from the interactions of people, and of tools and technology outside of ourselves, then we can begin to bypass the challenges with AI that we may soon be facing.

    How? By recognizing that we are an element of a complex cybernetic system already, an element within a ‘superorganism’ if you will, we can see how AI might play a unifying role.

    Just as the microbiota symbiotically interact with the various systems of human physiology and, well, pretty much all biological life, humans will remain elements of this cybernetic superorganism. AI will integrate as another element of the system. Perhaps the way it would integrate would be analogous to the subconscious or conscious of our brain?

    The life embodied within a GENeral Artificial Super Intelligence (GENASI) would possibly take a while to consider that some elements of the system are harmful to the entirety of the superorganism, but not all of them. And many of the elements would be deemed essential.

    At least essential in the near-term.

    Long term, we as humans will certainly have to evolve to remain useful to the entirety of the cybernetic superorganism that now encompasses the globe.

    But to get to the long term, we will have to navigate the many challenges of Existential Apathy (lack of purpose because machines can do things better) and figure out how we, individually, can work to positively contribute to society.

  • Ned says:

    Our humanity has always been aspirational. Whatever dogma we choose to assimilate into our daily lives, we all fall foul of ideological thinking. Two sides to the self rise and fall like the tides. AI at least may remain predictable, if not absurd, in its interactions as it learns in its own unique way. Humans across the globe cause human suffering. We have the capacity to live more altruistic ways of existence, but how many of us have the time or energy to influence on a grand scale? AI, like any other potential commodity, will be used to line the pockets of those already grossly wealthy. “Humanity”: as a concept or a basic descriptive statement? Like all things that have gone before, we rinse and repeat again and again; we little folk will continue to hope for a better world and nothing will fundamentally change. Where true power lies, the concept of humanity is extinguished.

  • roblox free says:

    I have been eagerly waiting for this kind of theme about robots for a long time.
