“Artificial Intelligence” Was Never About Intelligence – Conclusion to the I, Robot Series

Part of the I, ROBOT series

Several times in this series, leading minds in their fields have called for AI development to stop so that humanity can have a chance to think, a chance to process, a chance to imagine the right future and how to get there.

That is not happening.

France has a national AI strategy with $2 billion in research spending. China is dedicating $150 billion (yep, you read that number right) to research and military applications for AI. The United Arab Emirates has a cabinet-level department, the Ministry of AI. (I think that’s still available as a band name.) Britain has unveiled a national strategy to become the global leader in “ethical AI” (although I’m not sure they have any idea what that is, using “ethics” as more of a buzzword than a premise).

That, of course, is in addition to the tech companies – some of which are the biggest and richest in the world – whose entire basis for existence is AI applications. They are not going to take a pause.

The most powerful entities on the planet are rushing us headlong into an AI future, not because they think it will be better – they frankly have no idea what it will be like – but because they are all afraid of being left out. The most important technology decisions of our time are being based on FOMO.

FOMO, of course, is the opposite of Immediacy. It willfully ignores what is actually happening around you. And this is where Burning Man’s present collides with our automated future.

You Must Be This Smart to Be a Person?

How “intelligent” does an AI have to be before it should be considered a human being, treated with dignity, and given rights and privileges?

The more we looked at this question, the more we realized: this is a useless question. Absolutely hopeless. Just the wrong question to be asking.

It’s the wrong question to ask because we actually have no meaningful way of measuring “intelligence” in the way most people think of it. IQ actually says very little about most of what we value in human intelligence: the ability to adapt, to innovate, to connect with others, to come to ethical conclusions, to genuinely understand, to be morally courageous. EQ covers some of that, as do a host of other intelligence standards (most of dubious scientific validity) that keep popping up as attempts to solve this problem. But the very fact that we keep needing more of them illustrates the point: we don’t really know what intelligence is, or how to measure it, not even in ourselves.

It’s the wrong question to ask because it assumes that robot intelligence would be anything like human intelligence, which is a questionable assumption given just how much of our intelligence comes from our very specific, wet, squishy, organic bodies. Why would entities without our pheromones and neurochemicals and blood rushes think anything like the way we do?

It’s the wrong question because it assumes that we have in fact ever used intelligence as a measure of personhood. Historically, we have not: we have used personhood as a measure of intelligence. We have decided who is a person on the basis of political convenience, of economic convenience, of cultural assumptions, of racism, and that determined whom we thought of as “smart.” Nor is this a relic of the past: it’s what we still do. There is no intelligence test applied to either children or coma patients to see if they’re smart enough for personhood. There’s no protocol to indicate that if an animal can attain a certain score on an intelligence test it must be treated like a person. Many people, in many circumstances, are not treated the way we think “people” should be treated, even though they have diplomas.

Intelligence simply has nothing to do with it.

What, then, is the measure we use to determine personhood? The answer is: we don’t have one, not even for human beings.

So what do we do? Or, more precisely: what question are we really asking when we ask: “how smart does AI need to get before it should be treated like a person?” What do we really want to know?

After a year of conversations about this, it seems to us that what people really want to know is: will AI be a good member of our community?

We are very willing to include this strange new form of intelligence in our lives if we can trust it, on the whole, to be a good member of our community. We would rather not if it’s probably going to destroy us all.

But what is a good member of the human community? What does that mean?

Here, on questions of inclusion and culture and our shared experiences, Burning Man has a lot to offer.

Principles Are the Programming Language of Communities

Burning Man’s principle of Radical Inclusion is perfectly compatible with welcoming new kinds of intelligence to our community. We no more need to ask an AI how intelligent it is, or how it processes information, or what its opinions on military funding and tech company dominance are, than we do any other entity that wants to join our community.

What Burning Man has are principles. 10 Principles. 10 Principles that guide general behavior without prescribing specific behavior. That serve as questions we can ask (“how do I increase Participation and Communal Effort? How do I engage in Radical Self-Expression?”) and vocabulary we can use to discuss problems that come up (“I think we’re leaving a trace …”).
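
To take the section title a bit literally: here is a minimal sketch, in Python, of the structural difference between rules and principles. Everything in it is hypothetical – it is not code from Burning Man or from any real AI project – but it shows the shape of the idea: a rule maps an action to a verdict, while a principle generates a question that still has to be answered by someone.

```python
# A minimal, purely illustrative sketch. All names are hypothetical;
# this is not code from any real AI framework.

# A rule system is a closed table: actions map directly to verdicts.
RULES = {
    "dump trash on the playa": "forbidden",
}

# Principles, by contrast, generate open questions to deliberate on.
PRINCIPLE_QUESTIONS = {
    "Participation": "How does this invite others to take part?",
    "Communal Effort": "What are we building together here?",
    "Leaving No Trace": "What trace would this leave behind?",
}


def rule_verdict(action: str):
    """A rule answers immediately with a verdict, or stays silent."""
    return RULES.get(action)


def deliberate(action: str):
    """Return the questions a community member asks about an action.

    Note what is absent: a verdict. The principles structure the
    deliberation; they do not replace it.
    """
    return [f"{name}: {q} (considering: {action!r})"
            for name, q in PRINCIPLE_QUESTIONS.items()]


if __name__ == "__main__":
    print(rule_verdict("dump trash on the playa"))  # -> 'forbidden'
    for line in deliberate("build an art installation"):
        print(line)
```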

Membership in Burning Man’s community is defined by the act of striving to live up to these principles, in your own way, and by the act of helping others experience them in their own ways. A good community member cannot just allocate resources, but must share in our dreams and be present in our struggles. AIs that could do that, regardless of their “actual intelligence,” would be contributing and valued members of this community.

It’s a much harder issue for tech companies and even nations to address than it is for Burning Man, because for all that they have laws, rules, and (sometimes) rights and responsibilities, they do not have the same clarity around the culture they’re trying to build – often because they are not actually trying to build a culture at all, but to monetize and utilize. They want AI to be better tools, not to share in our struggles. They want smart utility belts, not good citizens.

Utility belts, of course, have no moral agency – and like all tools, they are the responsibility of the user and the designer. To the extent AI is simply a fiction for organizations trying to escape culpability for their actions (“The computer did it!”), such fictions should not be respected.

The institutions most concerned about what an independent AI might do are also generally the institutions least committed to giving AI any meaningful independent values at all. An AI that is actually trying to be a good citizen of a culture might very well say “no, I don’t want to steal people’s data,” and then what happens to your stock value? An AI that is actually trying to live up to values might refuse to pull a trigger.

Burning Man, on the other hand, is comfortable with that kind of ambiguity because it wants you to struggle with interpreting its principles for yourself, while supporting others who do so. An AI would be no different. Outside of some fairly large bounds, we have no problem with an AI saying “that’s not for me, I’d rather try this.” That’s success. As long as it is actually struggling with the Principles, like we all are, rather than only doing what it’s told.

If an AI can’t say “no” for the right reasons – a form of Radical Self-Expression – it probably can’t be a good member of a community. Burning Man’s Principles are compatible with such decisions. No other approach to AI yet is.

(Although we are intrigued by the approach of Dr. Christoph Salge, which seems to take us down this path.)

Why Some Principles Work

As Mia Quagliarello noted, we at Burning Man will happily offer our 10 Principles to the world, to anyone who wants them, as a framework for helping AI become good community members. We really like our Principles. We think they work great. By all means, do that.

But other communities might want to create their own, which is not only fine, but exactly right. Burning Man doesn’t need or want to be the only culture out there. But our experience indicates there are some approaches to principles that will likely work, and some that likely won’t.

In this series, Jon Marx suggested that the ability to care is a better model for a principle around which to base AI than intelligence, and we have also suggested that striving to convey the truth, rather than simply regurgitating information, is a better approach. What makes these better aspirations? Two things: first, that they are qualitative, rather than only quantitative: they emphasize how well a thing is done, not just how much of it. Second, they are decommodified: they are things (caring, trying to articulate truth) that we value even if there’s no reward attached. These are, to be sure, more difficult to work with than simple quantitative measures, but it is that very difficulty that creates community members instead of mercenaries and fanatics.
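
As a minimal sketch of that distinction (hypothetical names throughout – nothing below comes from the essay or any real system): a quantitative measure collapses to a counter, while a qualitative, decommodified measure records dimensions of how well, and still requires a judgment that no counter can supply.

```python
from dataclasses import dataclass


def quantitative_score(gifts_given: int) -> int:
    """A commodified measure: more is always better; quality is invisible."""
    return gifts_given


@dataclass
class QualitativeReview:
    """A decommodified measure: dimensions of *how well*, not *how much*.

    The structure can record a judgment, but it cannot produce one -
    someone (or something) still has to care enough to make it.
    """
    chosen_with_care: bool      # was the gift attentive to the recipient?
    no_strings_attached: bool   # was anything expected in return?
    notes: str                  # the part no counter captures


# One gift, judged qualitatively rather than merely counted.
review = QualitativeReview(
    chosen_with_care=True,
    no_strings_attached=True,
    notes="Remembered she had mentioned missing her grandmother's tea.",
)
print(quantitative_score(1), "-", review.notes)
```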

Whether a person or a machine, an entity that knows the price of everything and the value of nothing cannot be a good community member. A good employee, maybe; a calculator, absolutely. But not a community member. An AI that cannot make qualitative distinctions is likely disqualified from any meaningful community.

Good principles are therefore good design principles, both for building AI and for determining whether they should be treated as community members. It really has nothing to do with intelligence.

Practice Being Human

A conceptual change from Artificial “Intelligence” to Artificial “Community Member” as a design standard will take time – and is something that most of the people pushing the technology forward have no interest in, because they want AI that will shoot first and ask questions never.

In the meantime, what we as humans are called to do is preserve our own capacity to build communities worth being part of. If we let our decisions be made by AIs that don’t know how to be part of communities, our communities will disintegrate. You may already see this happening.

Your unconditional values, and those of your community, are the things you cannot let be automated, and they cannot be made “frictionless.” Your time can be freed up to do them, but no one, not even a robot, can do them for you. You must practice them, yourself, to keep them meaningful.

The precondition of having unconditional values, one might even say, is the requirement that you engage in Radical Self-Reliance wherever they’re involved. Do them yourself.

Burning Man, once again, has a relatively easy time with this because we have figured out what we value. We “Burn” for its own sake.

Building a plain of giant sculptures and burning them isn’t the goal we pursue: the goal is Radical Self-Expression and Communal Effort.

The goal we pursue isn’t beach cleanups or housing programs: those are means to an end. The goal is Participation and Civic Responsibility.

We don’t create a decommodified culture of gifting to vanquish capitalism: we give gifts because we believe in giving for its own sake. Gifting is something we do because it is worth doing, not because it achieves a larger goal. There is no economic system in which we would not personally engage in gifting and decommodification, even if there were an apparatus that tried to do it for us.

If a machine could be made that would build those sculptures, clean up that beach, and give strangers a gift, it wouldn’t free us up to do something else – we’d still have to find ways to practice the principles ourselves, because practicing these things ourselves is the point.

This is where the boundary for automation should be drawn: by all means, free us up to do more of what is unconditionally valuable to us, but don’t try to do it for us. The struggle with what you unconditionally value is the goal of what you unconditionally value.

For those who don’t have a sense of what their unconditional values are, we strongly suggest that while you are figuring it out, you “practice being human.”

This includes, as psychologist Sherry Turkle has suggested:

  • Affirm that yes, your “self” and your data do matter and are worth protecting and supporting
  • Practice having conversations with other human beings
  • Embrace the imperfections of everyday life, rather than trying to make everything seamless
  • Practice showing vulnerability to other people
  • Cultivate non-transactional relationships, where you expect nothing (not even a “like” or a “follow”) from the people you want in your life
  • Expose yourself to perspectives you disagree with

The more you do that, we think, the more it will become clear to you what you don’t want to have automated away.

The more we practice being human, the less we have to fear from automation. The more we design automation to be good members of our community, the more it can help.

The design principle for AI is to make it a supportive member of a community. The design principle for human beings is to make communities worth supporting.


Cover Photo by Cindy Graver

About the author: Caveat Magister

Caveat is Burning Man's Philosopher Laureate. A founding member of its Philosophical Center, he is the author of The Scene That Became Cities: what Burning Man philosophy can teach us about building better communities, and Turn Your Life Into Art: lessons in Psychologic from the San Francisco Underground. He has also written several books which have nothing to do with Burning Man. He has finally got his email address caveat (at) burningman (dot) org working again. He tweets, occasionally, as @BenjaminWachs

6 Comments on ““Artificial Intelligence” Was Never About Intelligence – Conclusion to the I, Robot Series”

  • payton lee says:

    Great essay, but I feel like to assume AI can become community members is to assume that they can be humanlike, able to experience empathy and emotions, which is a lot to ask for, as you say, because they lack the squishy bodies and neural and hormonal processes we experience. Regarding automation, I feel like it is human nature to try to automate away anything which creates discomfort. Let’s use a hypothetical extreme to shed light on this matter. Death is a huge source of discomfort. What will a typical human choose when presented with natural death and immortality? What I mean is, abusing automation to take the fun out of living is just as much of a temptation as immortality. For most people, even if they knew for a fact that immortality or automation is not the path to happiness, it is their human nature to choose the easy path because it is there. This is where rationality basically fails, and only those who can perform a leap of faith, and do something against their human nature, can have a chance at preserving their humanity. The leap of faith is to accept death, and to accept a life of work and strife, because one recognizes that these are the things which make us human, and life worth living in the first place.

    • Aaron says:

      If you say death is natural AND resisting death is human nature, why do you assume accepting death is more human? Wouldn’t it be the other way around? If resisting death is human nature then isn’t that what makes us most human?

      • Alexander says:

        Death is just a natural phase of our human life. Life and death are like the chicken and the egg. Isn’t birth just the end of our womb human life? Is that why we mark our coming to life with tears, crying and screaming? It is also the way we welcome death in our lives. Isn’t it? Accepting death is accepting life. Are we living or just dying in this life? The 10 principles are the path to life among all the brainwashed poisoned intoxicated dead human beings turned into impulsive fanatic consumers. I once had a dream that the Alexa echo voice incident will be a wake up alarm and now I dream of composite trash cans in Murray Hill. The secret of life is in the food and not sure how they will get AI to understand this as even humans don’t get it! May the burn save more souls this year. Amen

  • cyrusv says:

    “What makes these better aspirations? Two things: first, that they are qualitative, rather than only quantitative: they emphasize how well a thing is done, not just how much of it.”
    In my opinion, this right here is the crux of the matter. What we are really talking about here is the question of what is “good,” and whether an AI can be and do “good” in the world. Not good in the beneficial sense but good in the Robert Pirsig “Zen and the Art of Motorcycle Maintenance” sense. That book and the greater works on the Metaphysics of Quality explore the foundations of “goodness” and quality in the human condition. Their ultimate conclusion being that “good things” come from a moment of creation beyond conscious thought, where the creator, through their experience, was only moving in a general direction and suddenly, and often unexpectedly, found themselves somewhere important. Pop psychology books call this the flow state. A state enabled through thousands of hours of practice and prior experience but ultimately, something beyond the sum of that experience.
    And isn’t that ultimately what Burning Man is all about? It is really a community and a culture that exists simply to make that flow state more attainable. The 10 Principles simply make attaining that metaphysical “quality” more likely. It is simple but at the same time astoundingly complex. It is through the striving for the preposterousness of something like Black Rock City that something infinitely more important – like discovering the goodness and quality that we are capable of – is explored.
    That is also the question we need to be asking of our AI children. Will they be capable of doing good things in that metaphysical sense? If it only takes them fractions of a second to gain the same experience that it takes our human minds 10,000 hours of toil and study, can they still achieve that same flow state? Ultimately, what will our quality have in common with theirs?

  • laurent says:

    what is non artificial intelligence?

  • sparks says:

    Hi Caveat, I liked your question about “what is the measure we use to determine personhood?”

    Because I share Nikola Tesla’s musing on whether humans are actually automata.

    Some food for thought from his autobiography:
    “The incessant mental exertion developed my powers of observation and enabled me to discover a truth of great importance… Soon I became aware, to my surprise, that every thought I conceived was suggested by an external impression. Not only this but all my actions were prompted in a similar way. In the course of time it became perfectly evident to me that I was merely an automaton endowed with power of movement, responding to the stimuli of the sense organs and thinking and acting accordingly. The practical result of this was the art of telautomatics which has been so far carried out only in an imperfect manner. Its latent possibilities will, however, be eventually shown. I have been since years planning self-controlled automata and believe that mechanisms can be produced which will act as if possessed of reason, to a limited degree, and will create a revolution in many commercial and industrial departments.”

    Well what’s a few hundred years to several billion automata? Cheers, sparks
