Part of the I, ROBOT series
Several times in this series, leading minds in their fields called for AI development to stop so that humanity can have a chance to think, a chance to process, a chance to imagine the right future and how to get there.
That is not happening.
France has a national AI strategy with $2 billion in research spending. China is dedicating $150 billion (yep, you read that number right) to research and military applications for AI. The United Arab Emirates has a cabinet-level department, the Ministry of AI. (I think that’s still available as a band name.) Britain has unveiled a national strategy to become the global leader in “ethical AI” (although I’m not sure they have any idea what that is, using “ethics” as more of a buzzword than a premise).
That, of course, is in addition to the tech companies – some of which are the biggest and richest in the world – whose entire basis for existence is AI applications. They are not going to take a pause.
The most powerful entities on the planet are rushing us headlong into an AI future, not because they think it will be better – they frankly have no idea what it will be like – but because they are all afraid of being left out. The most important technology decisions of our time are being based on FOMO.
FOMO, of course, is the opposite of Immediacy. It willfully ignores what is actually happening around you. And this is where Burning Man’s present collides with our automated future.
You Must Be This Smart to Be a Person?
How “intelligent” does an AI have to be before it should be considered a human being, treated with dignity, and given rights and privileges?
The more we looked at this question, the more we realized: this is a useless question. Absolutely hopeless. Just the wrong question to be asking.
It’s the wrong question to ask because we actually have no meaningful way of measuring “intelligence” in the way most people think of it. IQ actually says very little about most of what we value in human intelligence: the ability to adapt, to innovate, to connect with others, to come to ethical conclusions, to genuinely understand, to be morally courageous. EQ covers some of that, as do a host of other intelligence standards (most of dubious scientific validity) that keep popping up as an attempt to solve this problem. But the very fact that we keep needing more of them itself illustrates the point: we don’t really know what intelligence is, or how to measure it, not even in ourselves.
It’s the wrong question to ask because it assumes that robot intelligence would even be at all like human intelligence, which is a questionable assumption given just how much of our intelligence comes from our very specific, wet, squishy, organic bodies. Why would entities without our pheromones and neurochemicals and blood rushes think anything like the way we do?
It’s the wrong question because it assumes that we have in fact ever used intelligence as a measure of personhood. Historically, we have not: we have used personhood as a measure of intelligence. We have decided who is a person on the basis of political convenience, of economic convenience, of cultural assumptions, of racism, and that determined whom we thought of as “smart.” Nor is this a relic of the past: it’s what we still do. There is no intelligence test applied to either children or coma patients to see if they’re smart enough for personhood. There’s no protocol to indicate that if an animal can attain a certain score on an intelligence test it must be treated like a person. Many people, in many circumstances, are not treated the way we think “people” should be treated, even though they have diplomas.
Intelligence simply has nothing to do with it.
What, then, is the measure we use to determine personhood? The answer is: we don’t have one, not even for human beings.
So what do we do? Or, more precisely: what question are we really asking when we ask: “how smart does AI need to get before it should be treated like a person?” What do we really want to know?
After a year of conversations about this, it seems to us that what people really want to know is: will AI be a good member of our community?
We are very willing to include this strange and novel new form of intelligence in our lives if we can trust it, on the whole, to be a good member of our community. We would rather not if it’s probably going to destroy us all.
But what is a good member of the human community? What does that mean?
Here, on questions of inclusion and culture and our shared experiences, Burning Man has a lot to offer.
Principles Are the Programming Language of Communities
Burning Man’s principle of Radical Inclusion is perfectly compatible with welcoming new kinds of intelligence to our community. We no more need to ask an AI how intelligent it is, or how it processes information, or what its opinions on military funding and tech company dominance are, than we do any other entity that wants to join our community.
What Burning Man has are principles. 10 Principles. 10 Principles that guide general behavior without prescribing specific behavior. That serve as questions we can ask (“how do I increase Participation and Communal Effort? How do I engage in Radical Self-Expression?”) and vocabulary we can use to discuss problems that come up (“I think we’re leaving a trace …”).
Membership in Burning Man’s community is defined by the act of striving to live up to these principles, in your own way, and by the act of helping others experience them in their own ways. A good community member cannot just allocate resources, but must share in our dreams and be present in our struggles. An AI that could do that, regardless of its “actual intelligence,” would be a contributing and valued member of this community.
This is a much harder issue for tech companies and even nations to address than it is for Burning Man, because for all that they have laws, rules, and (sometimes) rights and responsibilities, they do not have the same clarity around the culture they’re trying to build – often because they are not actually trying to build a culture at all, but to monetize and utilize. They want AI to be better tools, not to share in our struggles. They want smart utility belts, not good citizens.
Utility belts, of course, have no moral agency – and like all tools, they are the responsibility of the user and the designer. To the extent AI is simply a fiction for organizations trying to escape culpability for their actions (“The computer did it!”), such fictions should not be respected.
The institutions most concerned about what an independent AI might do are also generally the institutions least committed to giving AI any meaningful independent values at all. An AI that is actually trying to be a good citizen of a culture might very well say “no, I don’t want to steal people’s data,” and then what happens to your stock value? An AI that is actually trying to live up to values might refuse to pull a trigger.
Burning Man, on the other hand, is comfortable with that kind of ambiguity because it wants you to struggle with interpreting its principles for yourself, while supporting others who do so. An AI would be no different. Outside of some fairly large bounds, we have no problem with an AI saying “that’s not for me, I’d rather try this.” That’s success. As long as it is actually struggling with the Principles, like we all are, rather than only doing what it’s told.
If an AI can’t say “no” for the right reasons – a form of Radical Self-Expression – it probably can’t be a good member of a community. Burning Man’s Principles are compatible with such decisions. No other approach to AI yet is.
(Although we are intrigued by the approach of Dr. Christoph Salge, which seems to take us down this path.)
Why Some Principles Work
As Mia Quagliarello noted, Burning Man will happily offer our 10 Principles to the world, to anyone who wants them, as a framework around which to help AI become good community members. We really like our Principles. We think they work great. By all means, do that.
But other communities might want to create their own, which is not only fine, but exactly right. Burning Man doesn’t need or want to be the only culture out there. But our experience indicates there are some approaches to principles that will likely work, and some that likely won’t.
In this series, Jon Marx suggested that the ability to care is a better model for a principle around which to base AI than intelligence, and we have also suggested that striving to convey the truth, rather than simply regurgitating information, is a better approach. What makes these better aspirations? Two things: first, that they are qualitative, rather than only quantitative: they emphasize how well a thing is done, not just how much of it. Second, they are decommodified: they are things (caring, trying to articulate truth) that we value even if there’s no reward attached. These are, to be sure, more difficult to work with than simple quantitative measures, but it is that very difficulty that creates community members instead of mercenaries and fanatics.
Whether a person or a machine, an entity that knows the price of everything and the value of nothing cannot be a good community member. A good employee, maybe; a calculator, absolutely. But not a community member. An AI that cannot make qualitative distinctions is likely disqualified from any meaningful community.
Good principles are therefore good design principles, both for building AI and for determining whether they should be treated as community members. It really has nothing to do with intelligence.
Practice Being Human
A conceptual change from Artificial “Intelligence” to Artificial “Community Member” as a design standard will take time – and is something that most of the people pushing the technology forward have no interest in, because they want AI that will shoot first and ask questions never.
In the meantime, what we as humans are called to do is preserve our own capacity to build communities worth being part of. If we let our decisions be made by AI that don’t know how to be part of communities, our communities will disintegrate. You may already see this happening.
Your unconditional values, and those of your community, are the things you cannot let be automated, and they cannot be made “frictionless.” Your time can be freed up to do them, but no one, not even a robot, can do them for you. You must practice them, yourself, to keep them meaningful.
The precondition of having unconditional values, one might even say, is the requirement that you engage in Radical Self-Reliance wherever they’re involved. Do them yourself.
Burning Man, once again, has a relatively easy time with this because we have figured out what we value. We “Burn” for its own sake.
Building a plain of giant sculptures and burning them isn’t the goal we pursue: the goal is Radical Self-Expression and Communal Effort.
The goal we pursue isn’t beach cleanups or housing programs: those are means to an end. The goal is Participation and Civic Responsibility.
We don’t create a decommodified culture of gifting to vanquish capitalism: we give gifts because we believe in giving for its own sake. Gifting is something we do because it is worth doing, not because it achieves a larger goal. There is no economic system in which we would not personally engage in gifting and decommodification, even if there were an apparatus that tried to do it for us.
If a machine could have been made that would build those sculptures, clean up that beach, and give strangers a gift, it wouldn’t free us up to do something else – we’d still have to find ways to practice the principles ourselves, because practicing these things ourselves is the point.
This is where the boundary for automation should be drawn: by all means, free us up to do more of what is unconditionally valuable to us, but don’t try to do it for us. The struggle with what you unconditionally value is the goal of what you unconditionally value.
For those who don’t have a sense of what their unconditional values are, we strongly suggest that while you are figuring it out, you “practice being human.”
This includes, as psychologist Sherry Turkle has suggested:
- Affirm that yes, your “self” and your data do matter and are worth protecting and supporting
- Practice having conversations with other human beings
- Embrace the imperfections of everyday life, rather than trying to make everything seamless
- Practice showing vulnerability to other people
- Cultivate non-transactional relationships, where you expect nothing (not even a “like” or a “follow”) from the people you want in your life
- Expose yourself to perspectives you disagree with
The more you do that, we think, the more it will become clear to you what you don’t want to have automated away.
The more we practice being human, the less we have to fear from automation. The more we design automation to be good members of our community, the more it can help.
The design principle for AI is to make it a supportive member of a community. The design principle for human beings is to make communities worth supporting.