Part of the I, ROBOT series
Many months ago, while still developing this year’s theme, Larry Harvey began receiving a series of robocalls in which an automated system tried to fool him into thinking it was a human woman, explaining away any odd delays or its inability to understand simple questions by claiming its headset wasn’t working right.
A few weeks after the 2018 theme was announced, news outlets reported that the Kingdom of Saudi Arabia had become the first country to grant citizenship to an AI (“Sophia”), as part of an effort to attract high-tech businesses to invest there. If true, this meant a robot in the shape of a woman arguably has more rights than any actual woman in that country.
These seem like recent, timely developments, but in fact we’ve been playing fast and loose with the definition of personhood for some time. The landmark Supreme Court “Citizens United” decision declared that corporations have a right to free speech, effectively endowing abstract economic entities with a kind of humanity. “Scientific” tests were used at the beginning of the last century to try to create racial hierarchies of intelligence, which was effectively an exercise in refusing to acknowledge the obvious humanity of others.
Similar questions about “are they REALLY human?” were asked and tragically answered when conquistadors had need for slave labor in the New World. “No,” they said, pointing at human beings, “of course they’re not really human. Just look at them!”
All of which is to say that while a subset of philosophers and theologians may have taken fundamental questions of personhood seriously, when the rubber hit the road, whom we have and haven’t included as persons has always had more to do with economics and prejudice than with principle. If corporations can be given fundamental freedoms, sure, why can’t AIs? If women can be denied property rights and whole peoples can be considered chattel based on their skin color, sure, why not the rest of us?
Robots and AI, then, confront us with subtle versions of questions that we could never answer as a species even when those questions were obvious.
Artificial Intelligence will certainly change the world, and it could very well destroy it, but any look at the historical record suggests that we are afraid of AI not so much because of its potential for inhumanity, but because we fear it might start to behave toward human beings the way human beings have traditionally behaved toward one another.
It’s no accident that when Microsoft turned its learning chatbot “Tay” loose on Twitter in 2016, its output became horribly racist and sexist within 24 hours. Twitter’s terms of service, its official company line on how to behave, were meaningless. Only the behavior of other users mattered. AI learns through example, not through rhetoric. Its behavior will be guided by ours. If we are indeed teaching AI to follow our example, this could be a significant problem, but it wouldn’t involve anything we’re not already doing to ourselves.
If, on the other hand, we want future Artificial Intelligences to be beneficent, to hold human life as valuable, even sacred, to look out for our best interests … we may have to show it what that looks like first.
The rate at which human social intelligence advances (or doesn’t) turns out to be every bit as relevant as advances in technical capacity. How AI engineers treat their janitors may determine what kind of machine intelligence powers the future.
The questions we have the most trouble answering about AI are the questions we have never satisfactorily answered about ourselves. Before we can “decide” if machines are human, we need to decide if we are. That seems, in theory, like a much simpler problem. But our history suggests that, in practice, it may be far, far more difficult. We get the robot overlords we deserve.
These are the questions and issues we will examine in this series, in the hope that the emergence of AI can make us kinder to one another.