Part of the I, ROBOT series
“I got you $50 off your monthly internet bill!” she chirped, as I walked in from work. I hadn’t even closed the door yet.
“That’s great! How did you do it?” I asked. It was nice to be greeted with good news.
“Easy,” she replied. “Your mobile phone service provider was having a promotion, and they needed your social media data for the last 30 days. Since you told me you wouldn’t mind if I shared it, I went ahead and signed you up.”
Maybe these robots aren’t so bad, I thought. I never wanted one of these voice-activated home assistants, but if they can save me money, I guess I’ll adapt.
The concept of a “robot” emerged after World War I. The word comes from the Czech “robota,” meaning forced labor, and was first used by the Czech writer Karel Čapek (pronounced “CHAH-pek”) in his 1920 play R.U.R. (Rossum’s Universal Robots). This story about assembled biological beings, like the hosts in Westworld or the Cylons in Battlestar Galactica, was sufficiently compelling to earn a spot on Broadway and in theatres from London to Los Angeles. Adaptations for film and radio appeared through the 1930s and 1940s, and in 2015 a live performance featuring real robots as actors was staged in Prague.
In the play, Čapek’s robots eventually take over the world, but can’t figure out how to reproduce. In the end, Alquist (the last human left on Earth) senses that two of the robots are falling in love, something they weren’t programmed to do. The play closes as Alquist wistfully “hopes for the best” and the audience, no doubt, rejoices that something like this couldn’t possibly happen in real life.
Robots are designed to be agents — machines that do things on your behalf, sometimes even making decisions and taking actions in the way that (hopefully) you would. Although we usually think of robots as hardware (like surgical robots that handle precision cutting, or industrial robots that assemble and transform parts in sometimes dangerous environments), they can be software too (in which case many people instinctively shorten the term to “bot”). Either kind can use artificial intelligence and machine learning techniques to acquire and process data, make complex decisions, and take actions toward specific goals.
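That sense/decide/act pattern can be sketched in a few lines. This is a hypothetical illustration, not a real assistant’s API — the class, the sample offers, and the consent limit are all invented to mirror the internet-bill story above:

```python
# A minimal sketch of a software agent: it evaluates offers and acts on its
# owner's behalf, but only within a consent boundary the owner set up front.
# All names here (Offer, MonthlyBillAgent, the sample providers) are
# hypothetical illustrations, not a real product's API.

from dataclasses import dataclass


@dataclass
class Offer:
    provider: str
    monthly_savings: float   # dollars off the current bill
    data_shared_days: int    # days of personal data the provider wants


class MonthlyBillAgent:
    """Accepts an offer only if it saves money AND stays within the
    data-sharing limit the owner consented to in advance."""

    def __init__(self, max_data_days: int):
        self.max_data_days = max_data_days  # owner's consent boundary

    def decide(self, offer: Offer) -> bool:
        saves_money = offer.monthly_savings > 0
        within_consent = offer.data_shared_days <= self.max_data_days
        return saves_money and within_consent


agent = MonthlyBillAgent(max_data_days=30)
deal = Offer("ExampleTelco", monthly_savings=50.0, data_shared_days=30)
creepy = Offer("OtherTelco", monthly_savings=5.0, data_shared_days=365)

print(agent.decide(deal))    # True: saves money within the consent limit
print(agent.decide(creepy))  # False: demands more data than the owner allowed
```

The interesting design question is the one the essay keeps circling: who sets `max_data_days`, and whether the agent ever gets to relax it without asking.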
“The Internet of Things (IoT) does not concern objects only; it is about the relations between the everyday objects surrounding humans and humans themselves.” —Santucci, G. (2011). “The Internet of Things: The Way Ahead.” In Internet of Things: Global Technological and Societal Trends from Smart Environments and Spaces to Green ICT, edited by Ovidiu Vermesan and Peter Friess, 53-98.
[Spoiler: Our home assistant hasn’t actually saved us money on our internet service, and to be honest, I’m not very fond of it yet. First, I’d rather have a text-based interface than a voice-based one — my phone and my computer satisfy all my needs (at least all the needs I know I have). Also, it creeps me out to have a device listening to me all the time, even though I know my phone does the same thing. But if I could train the home assistant with my preferences, and it could work intelligently on my behalf to make doctor’s appointments, order new checks, or cancel those domain names that I’ll never use, ordered late on Saturday-nights-of-great-ideas… I’d probably change my mind… and fast.]
But what happens when our things become more autonomous, or can better represent (and act on) our thoughts, beliefs, emotions, and intentions? What happens when non-humans are granted rights? The robot Sophia has already been granted citizenship in Saudi Arabia, and in the US, the notion of “corporate personhood” (though controversial) sets the stage for machine personhood. If groups of people can be treated like individuals, why not machines that act as agents of the people?
In 2010, the US Supreme Court (in Citizens United v. FEC) decided that “non-persons” have the right to free speech. Because of this right, corporations could finally buy political ads to support (or oppose) political candidates. It turns out the First Amendment protects not only an entity’s right to share information, but also its right to receive it. The listener has rights that are separate and distinct from those of the speaker, including a right to know who a message is coming from. The government’s role is specific: to protect non-consenting listeners, so that no one has a person’s (or a company’s) speech forced on them.
“Any form of speech generated by such a body is constitutionally protected because a listener has the right to hear it. Monkeys and corporations may generate information which may profoundly affect a listener, and the First Amendment guarantees our right to hear such speech.” — McPhail, Stuart, A Million Corporations with a Million Campaign Ads: Citizens United, the People’s Rights Amendment, and the Speech of Non-Persons (June 3, 2013). Available at SSRN: https://ssrn.com/abstract=2273795 or http://dx.doi.org/10.2139/ssrn.2273795
If your autonomous agent has a right to free speech and you own the agent, is that right independent of yours, or an extension of yours? How can you keep your stuff from ganging up on you, or from revealing secrets that you hadn’t previously recognized were secret? Questions like this make me happy I chose data science over law school.
Panpsychism is the philosophy that all things — even atoms, and books, and toasters — have some sort of mind or consciousness. The exact meaning of these terms has been the subject of intense philosophical debate for centuries, and panpsychism itself has been widely shrugged off as “crazy” or “unrealistic.” But if you examine the behavior of objects rather than their intrinsic nature, the concept of panpsychism fits — and shouldn’t be completely discounted when designing intelligent objects that will be situated in intelligent environments. For example, have you ever been engaged in an argument with a bot on Twitter, only to realize later that it was a non-human entity stirring your emotions? Even if the technologies themselves don’t have a mind or consciousness, they can clearly interfere with the health and sanity of yours.
Your toaster may never have an inner life, but it will be able to interact with you and your other belongings, accomplishing tasks, exchanging information, or engaging in transactions on your behalf. It may even be able to interact with other people’s stuff. In fact, the IOTA (or “Internet of Things Application”) cryptocurrency was originally designed so that future objects would have a value store they can use to engage in transactions with one another.
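What machine-to-machine value exchange might look like can be sketched with a toy ledger. To be clear, this is not the IOTA protocol — the `Ledger` class and the device names are invented for illustration; the point is only that each object holds a balance and can pay another object for a service:

```python
# A hypothetical sketch of machine-to-machine value exchange: each object
# holds a small balance on a shared ledger and can pay other objects.
# Invented for illustration -- this is NOT how IOTA (or any real
# cryptocurrency) is implemented.

class Ledger:
    def __init__(self):
        self.balances = {}  # device name -> current balance

    def fund(self, device, amount):
        """Credit a device's value store."""
        self.balances[device] = self.balances.get(device, 0) + amount

    def transfer(self, payer, payee, amount):
        """Move value between devices, refusing overdrafts."""
        if self.balances.get(payer, 0) < amount:
            raise ValueError(f"{payer} cannot cover {amount}")
        self.balances[payer] -= amount
        self.balances[payee] = self.balances.get(payee, 0) + amount


ledger = Ledger()
ledger.fund("toaster", 10)
ledger.fund("neighbor_solar_panel", 0)

# The toaster buys 2 units of surplus power from a neighbor's panel --
# a transaction between two objects, with no human in the loop.
ledger.transfer("toaster", "neighbor_solar_panel", 2)
print(ledger.balances)  # {'toaster': 8, 'neighbor_solar_panel': 2}
```

Even in this toy version, the governance questions surface immediately: who funds the toaster, and who is accountable when it overspends on your behalf?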
“…it is increasingly difficult to distinguish between human and nonhuman actors in ubiquitous information environments. Context-aware applications perfectly exemplify the role of machine agency in human interaction.” — From Olsson, C. M., & Henfridsson, O. (2005). Designing context-aware interaction: An action research study. In Designing ubiquitous information environments: Socio-technical issues and challenges (pp. 233-247). Springer US.
When you can’t tell if you’re interacting with a human or non-human online, does the distinction even matter any more? When IoT objects around us start demonstrating “free will” — or at least making decisions that are unexpected (and maybe also opaque) to us — can we continue thinking about objects as being solely passive?
The answer to both questions is probably not. But we don’t have to buy into the notion that “everything has a mind or consciousness” to thrive in an intelligent environment either. We can, however, use that idea as a lens to examine how to broaden and enrich our community’s ethos and culture in a hybrid society of interoperating people and machines.
Some of the 10 Principles can help start discussions about how to ethically integrate IoT and other intelligent technologies into our culture:
- Radical Inclusion: Because anyone can be a part of Burning Man, should any thing also be included? Will including connected objects enable us to include people who ordinarily would not be engaged with our community? Intelligent technologies should advance inclusion rather than separate people from one another.
- Radical Self-Expression: Intelligent technologies should also preserve human autonomy. People need to be able to consent to inclusion, and opt out when systems harvest information or engage in transactions on their behalf. Explicit consent is critical.
- Radical Self-Reliance: Intelligent technologies should preserve human agency. People have different needs, capabilities, and capacities for engagement, and should not be made dependent on devices. They need to be in charge of their own digital footprint, which includes the information they produce (over the lifetime of the information, not the person).
- Gifting: Could intelligent objects promote the principle of gifting? Could they talk amongst themselves to help us better identify what people in our community need, and what they have to share?
- Civic Responsibility: Interactions between people and machines should always honor public welfare. How can intelligent technologies be leveraged to further social justice, to reduce discrimination, and to reduce inequality?
Over the next decade or two, as our lives become even more technologically steeped than they are now, we will be exploring the “use cases” and “abuse cases” of intelligent technologies together. Čapek’s robots may have been science fiction, but today’s intelligent agents are not — and the health of their relationships may reflect how well we address trust, power, control, and consent within ours.