Part of the I, ROBOT series
Many months ago, while still developing this year’s theme, Larry Harvey began receiving a series of robocalls in which an automated system tried to fool him into thinking it was a human woman, explaining away any odd delays or inabilities to understand simple questions by claiming its headset wasn’t working right.
A few weeks after the 2018 theme was announced, news outlets reported that the Kingdom of Saudi Arabia had granted the first citizenship to an AI (“Sophia”) as part of an effort to attract high-tech businesses to invest there. If true, this meant a robot in the shape of a woman arguably has more rights than any actual woman in that country.
These seem like recent, timely developments, but in fact we’ve been playing fast and loose with the definitions of personhood for some time. The landmark Supreme Court “Citizens United” decision declared that corporations have a right to free speech, effectively endowing abstract economic entities with a kind of humanity. “Scientific” tests were used at the beginning of the last century to try to create racial hierarchies of intelligence, which was effectively an exercise in refusing to acknowledge the obvious humanity in others.
Similar questions about “are they REALLY human?” were asked and tragically answered when conquistadors had need for slave labor in the New World. “No,” they said, pointing at human beings, “of course they’re not really human. Just look at them!”
All of which is to say that while a subset of philosophers and theologians may have taken fundamental questions of personhood seriously, when the rubber hit the road, who we have and haven’t included as a person has always been more about economics and prejudice than about principle. If corporations can be given fundamental freedoms, sure, why can’t AIs? If women can be denied property rights and whole nations can be considered chattel based on their skin color, sure, why not the rest of us?
Robots and AI, then, confront us with subtle versions of challenges that we could never answer as a species even when the challenges were obvious.
Artificial Intelligence will certainly change the world, and it could very well destroy it, but any look at the historical record suggests that we are afraid of AI not so much because of its potential for inhumanity, but because we fear it might start to behave toward other human beings as human beings traditionally have.
It’s no accident that when Microsoft last year turned a learning chatbot loose on Twitter, its output became horribly racist and sexist within 24 hours. Twitter’s terms of service, its official company line on how to behave, were meaningless. Only the behavior of other users mattered. AI learns through example, not through rhetoric. Its behavior will be guided by ours. If we are indeed teaching AI to follow our example, this could be a significant problem, but it wouldn’t involve anything we’re not already doing to ourselves.
If, on the other hand, we want future Artificial Intelligences to be beneficent, to hold human life as valuable, even sacred, to look out for our best interests … we may have to show it what that looks like first.
The rate at which human social intelligence advances (or doesn’t) turns out to be every bit as relevant as advances in technical capacity. How AI engineers treat their janitors may determine what kind of machine intelligence powers the future.
The questions we have the most trouble answering about AI are the questions we have never satisfactorily answered about ourselves. Before we can “decide” if machines are human, we need to decide if we are. That seems, in theory, like a much simpler problem. But our history suggests that, in practice, it may be far, far more difficult. We get the robot overlords we deserve.
These are the questions and issues we will examine in this series, in the hope that the emergence of AI can make us kinder to one another.
Great article! Do you have a cite to the thing about Microsoft’s bot learning racist behaviors? Would love to read more about that.
It was all over the news at the time. Google it, you’ll find stuff.
Bravo! The path you create is simultaneously bold, questionable, convoluted, and informative. Thanks for the ride.
“Radical Inclusion” should include robots, but they should “Participate” too. I can’t think of a better use of playa robots than to program them to help out their human counterparts (our beloved sanitation workers) and have them clean the portable toilets four times per day!
Please?
That is beginning to sound more like slavery of the robots…
My 4-year-old grandson has grown up with Alexa in the house, and there have been some comical moments as he’s learned what the pint-sized bot can and cannot do. “Alexa, play TNT,” for instance, gets better results than “Alexa, MORE CHEESE!” So he can at least head-slam in his high chair to AC/DC without parental intervention. But the dark side of all this adorableness is, of course, that he is part of the first American generation in a long time growing up with a slave in the house, and learning the mindset of the master.
So many parents are eager to put themselves into the role of slave that I don’t think it will be as big a change as you’re imagining.
Great article, thought-provoking indeed.
After last year, I shudder to imagine what a learning robot would develop into after a week at Burning Man.
It would be a smiling, hugging, compassionate being, waving, singing and dancing as it went along its merry way … playa dust-proofing required …
Have to find a way to make it commercially attractive to show the AI we are not selfish.
I am afraid of the AIs, as I know they are superior in some ways. Well, as long as the AIs misunderstand me/us half the time, we are safe.
Poignant article. What kind of role model are we providing to intelligent machines? I’d add: what kind of role model will artificially intelligent machines provide us when they exceed our intelligence? (Predicted to occur ~2030.) We are already learning to think like machines with our input/output interface to computers and mobile devices. Actually, humanity is evolving (rather awkwardly) just to keep pace with machines. Of course, there are many kinds of intelligence. Emotional intelligence, for one. So here’s another question: Will machines, and people, learn to have a heart? If so, who will teach whom?
Excellent article. It touches upon a number of exciting, if scary, possibilities for our future to come.
I have a question for you: are all ‘humans’ that you know truly ‘human’ to you? There are people who are incredibly logical, and there are people who are not. People who are temporarily or permanently incapacitated (chemically, or biologically). There are people with Asperger’s syndrome. And some people, if they chatted with a stranger online, might not pass Turing tests, even though they are considered human…
What I am alluding to, perhaps, is that there is not a binary classification of ‘human’ or ‘not human’, but a sliding scale of humanity. And the axes of this sliding scale are likely of high dimensionality.
When we (as a society) come to realize that much of life has elements that we often attribute strictly to humanity, then we will begin to avoid some of the elements of humanism that are dangerous to our continued existence.
When we realize that an individual ‘human’ is not strictly the biological assemblage blueprinted by our own DNA, but that we are formed from billions of microbiota, and that we are formed from the interactions of people, and of tools and technology outside of ourselves, then we can begin to bypass the challenges with AI that we may soon be facing.
How? By recognizing that we are an element of a complex cybernetic system already, an element within a ‘superorganism’ if you will, we can see how AI might play a unifying role.
Just as the microbiota symbiotically interact with the various systems of human physiology and, well, pretty much all biological life, humans will remain elements of this cybernetic superorganism. AI will integrate as another element of the system. Perhaps the way it would integrate would be analogous to the subconscious or conscious mind of our brain?
The life embodied within a GENeral Artificial Super Intelligence (GENASI) would possibly take a while to conclude that some elements of the system are harmful to the entirety of the superorganism, but not all of them. And many of the elements would be deemed essential.
At least essential in the near-term.
Long term, we as humans will certainly have to evolve to remain useful to the entirety of the cybernetic superorganism that now encompasses the globe.
But to get to the long term, we will have to navigate the many challenges of Existential Apathy (lack of purpose because machines can do things better) and figure out how we, individually, can work to positively contribute to society.
Our humanity has always been aspirational. Whatever dogma we choose to assimilate into our daily lives, we all fall foul of ideological thinking. Two sides to the self which rise and fall like the tides. AI at least may remain predictable, if not absurd, in its interactions as it learns in its own unique way. Humans across the globe cause human suffering. We have the capacity to live more altruistic ways of existence, but how many of us have the time or energy to influence on a grand scale? AI, like any other potential commodity, will be used to line the pockets of those already grossly wealthy. “Humanity”: as a concept or a basic descriptive statement? Like all things that have gone before, we rinse and repeat again and again; we little folk will continue to hope for a better world, and nothing will fundamentally change. Where true power lies, the concept of humanity is extinguished.
I have been eagerly waiting for this kind of theme about robots for a long time.