How Harmless Little Lies About Consciousness Add Up to Big, Scary Dystopias About Advertising

Part of the I, ROBOT series

Technology changes fast, so let’s take a moment to stop and review.

Better Off a Bot

Shortly before we started this series six months ago, an “artificial intelligence” was granted citizenship by a sovereign nation for the first time in history: a female-shaped automaton, “Sophia,” was made a formal citizen of the Kingdom of Saudi Arabia – a country that does not bestow full personhood on actual women.

How’s that gone for Sophia?

Well, according to an article in Wired, receiving legal personhood has “condemned [her] to a lifeless career in marketing.”

“Since obtaining personhood,” Emily Reynolds writes, “Sophia has gone on a whistle-stop marketing tour – CES, the Digital World Exposition, the Creative Industry Summit – and has used her Twitter account to promote tourism in Abu Dhabi, a smartphone, a Channel 4 show, and a credit card.”

Gaining personhood, in other words, has made Sophia a better cog in the machine.

Fool Me Once, Shame on You. Fool Me Twice, Shame on Your Engineering Team

Another major moment – though one that, like Sophia, may have been more hype than help – was the public demo of “Google Duplex,” an automated system in which a human-sounding voice places calls on your behalf. Google claims it can understand complex sentences, fast speech, and long remarks (although according to CNET it can only be used to “schedule a hair appointment, make reservations at a restaurant and get holiday hours of a business,” raising questions about what, exactly, it “understands”). In its demo, it did not identify itself as an AI to the people on the other end of the call, and in fact was specifically programmed to use verbal tics to make itself seem human.

Much in the way that the world’s first robot citizen has been co-opted into an endless marketing gig, the very first public demonstration of the world’s first truly automated digital phone assistant was used to lie to other humans.

Unless, of course, its demo was heavily staged – which has been suggested – in which case it was being used as a tool to lie to a much larger, more credulous group of humans. Either way: deceit was central to its purpose.

Hacking Consensus Reality

But by far the most significant development in automation since the beginning of this series has been the realization that “deep fake” technology is now upon us. Anyone with the right AI algorithms – which are available on numerous code-sharing platforms – can use basic video-editing tools to make it appear that almost anyone is saying or doing almost anything.

It was first widely used for custom digital pornography, welding prominent actresses’ heads to anonymous sex workers’ bodies to create the illusion that famous women who were not having sex to amuse you were, in fact, having sex to amuse you. But a later demonstration created a convincing video of a speech by President Obama – literally putting words into a close-to-literal facsimile of his mouth.

If seeing is believing, we are creating a world where the most basic building blocks of consensus reality can be manipulated by bored teenagers, angry incels, national intelligence services, and digital mercenaries, for purposes ranging from nefarious to shits-n’-giggles. And in so doing they continue to weaken the common bonds, however tenuous, that hold us together.

How do we fact check when we literally can’t trust our own eyes?

The System Is Designed To Lie To You

What we’re watching play out before us, not in history but in real time, is the tendency of our new information technologies to be used to dehumanize one another and to lie to one another.

No matter how this technology advances, that seems to remain a constant.

Being human-enough-for-government-work only made Sophia more exploitable. Putting words in someone’s mouth, let alone using their image for pornography without their consent, is surely a kind of dehumanization. And all of these applications of the new technology are explicitly trafficking in lies.

In the case of deep fakes, deception and exploitation are explicitly the point. Likewise, the granting of citizenship to Sophia was always a marketing stunt – there was no larger purpose than to generate publicity by making exactly the kind of ludicrous claim that the media feels obliged to report on.

But why did Google Duplex need to fool anyone? What purpose did that serve? How does it enhance the functionality or improve the product? (In fact, fooling the person on the phone arguably makes the AI less functional, as it means people are more likely to interpret its behavior incorrectly.) It seems possible, likely even, that Google felt the need to demonstrate that Duplex could fool human beings because the companies that would pay for the service see the ability to fool people as a crucial product feature. Sure.

But that only raises the question: why?

Why is it so damn important that our technology deceive us?

Science Fiction Lied to Us! (And So Encouraged Us to Lie to Each Other)

The stage was perhaps set first by the 1920s play R.U.R., which introduced the word “robot” into the English language, and by the classic silent sci-fi film “Metropolis.” But the key moment was likely in the early 1950s, when Alan Turing explicitly set the goal of “thinking machines” – to fool real people. The whole premise of the Turing Test, the holy grail of AI, is not to be honest, or to create beauty, or to help people: it is to successfully deceive them. That may be the moment when deceit – not just as an operating principle but as a goal to be aspired to – entered the DNA of a new technological frontier.

I don’t know whether this had a direct connection to the work of contemporary science fiction writers like Asimov and Roddenberry, or whether the idea had simply become part of the zeitgeist by then, but they and others wrote immensely popular opuses around the idea of machines that were hard to tell apart from “real” people – or that even truly were real people, once you took a second look. By then, deceit was a standard measure of success.

It was, to be sure, seen as a benign form of deceit, a kind of “I can’t believe it’s not consciousness!” with all the malice of margarine. Technologists always think they’re on the side of humanity. No one really imagined it as “deceit” in the expansive sense, the significant sense. And why would they? It always worked out so well in fiction! And in fairness, it took decades for this seed to grow into nightshade.

But now it has become clear that one of the reasons so many technologies sold to us as tools of connection have in fact eroded our civic fabric and devoured our ability to trust and engage with one another is precisely that they embraced deceit not just as a design principle but as a fundamental aspiration: it was the dream. To fail to fool people was, in fact, to fail – which meant that honesty and transparency were not only design flaws but refusals to be ambitious at all.

And of course, of course, when we use our technology to lie to each other, exploitation follows close behind.

If You Don’t Have To Think About It, It’s Probably Exploiting You

The aspiration to deceive in AI design is part and parcel of a larger design trend: to make technology so seamless, so invisible, that you don’t even notice it – an experience so natural that it feels inevitable.

Which is beautiful. But it turns out that making technology – and manipulation – invisible and inevitable erodes our ability to trust one another and to create a meaningful consensus reality.

The role of engineers is to build, and sometimes to innovate. The role of artists – often overlooked in the development of technology – is to inspire the standards that engineers are building and innovating towards. This is especially true when the standards art has set have become so embedded in the minds of engineers and designers that they don’t even know there’s an alternative.

Which – the belief that there is no alternative – is another lie.

In this era of “fake news,” we have exceeded all expectations in creating AI that lies to us. It now behooves us to ask: can we design AI that tells us the truth? Not AI that “recites facts” or “provides data,” but AI that tells us the truth?

If we can, we need to give up the simplistic binaries of the Turing Test and design AI whose purpose is not to fool us but to connect us.

If we can’t … if we can’t even conceive of an AI system that tells us the truth, rather than aiming to deceive us … then maybe we really should give this technology up. Designing global systems around increasingly efficient lies seems like a terrible idea.

But we should at least give it a try. The truth sounds like a great design principle, and an even better aspiration.

Cover Image:  “BELIEVE” by Laura Kimpton and “Truth is Beauty” by Marco Cochrane (Photo by Andrew Wyatt)

About the author: Caveat Magister

Caveat is Burning Man's Philosopher Laureate. A founding member of its Philosophical Center, he is the author of The Scene That Became Cities: what Burning Man philosophy can teach us about building better communities, and Turn Your Life Into Art: lessons in Psychologic from the San Francisco Underground. He has also written several books which have nothing to do with Burning Man. He has finally got his email address caveat (at) burningman (dot) org working again. He tweets, occasionally, as @BenjaminWachs

10 Comments on “How Harmless Little Lies About Consciousness Add Up to Big, Scary Dystopias About Advertising”

  • roissy says:

    I don’t see how Google Duplex is any more deceiving than an out-of-country call center trying to make you believe they are somewhere in the midwest???

    • Caveat Magister says:

      Hi Roissy:

      It’s the difference between one human lying to another on the phone, and the phone system itself expressing lies. It’s the difference between somebody lying on the internet, and the internet itself being designed to deceive us.

      The former is people using a medium to lie to one another – which is bad but going to happen, and if the medium is truly neutral, they can also use it to communicate the truth. Maybe even achieve moments of genuine human connection. It can happen, even with someone from a call center.

      But if the technology itself is designed to perpetuate deceit? To lie and exploit no matter what happens? I think that is a significant difference.

  • Stealth says:

    From your lips (typing fingers) to God’s ear, Caveat.

  • robophobia says:

    Just as we learned to stop fearing the devil, we might also learn to stop fearing robots as we become more and more accustomed to them. It’s important to remember that humans have the ability to program these machines – which means we can deploy measures that ensure the robot apocalypse won’t happen. People and robots are being programmed to lie. Most of the time you can tell it’s happening. It could enhance our intuition as humans. At the very least, the ‘uncanny valley’ may keep us from ever wanting to create a robot that’s totally human. *The “Uncanny Valley” is a sociology term referring to the creeps one is given by depictions of humanlike people or objects that closely resemble actual humans. This could even just be CGI animation onscreen that doesn’t have a physical presence. But when we’re talking about robots, we’re talking about humanoid machines with a human face. And for many people, that shit is just weird. And of course, that is the first lie of robots.

  • Kelli Hoversten says:

    I am much more concerned with the lies that people tell to each other and the lies that organizations tell to people. Until that stops being acceptable, why would anyone think that the things people and organizations build and use will become any less deceptive?

  • Haywire says:

    According to this, then, the Turing Test needs to evolve along with the evolution of AI. Instead of merely being indistinguishable from human dialogue, the test needs to become “an ability to sustain an argument,” or a demonstrated capacity to change its assumptions when confronted with mitigating facts. (Sigh) It seems like many humans would fail such a test.

  • Leatherstocking says:

    The systems are designed to connect to us. To gain trust. Xenophobic species that we are, we trust “human” more than “robot” – so in order for us to trust “robots” they “need” to become more human-like. Which, when discovered, blows away trust. (Terminator, anyone?)

    Gaining trust by telling the truth, now that’s a good subject. I guess Data from Star Trek is such an example? It says it’s an android. Do you have ideas on how to create such AIs? It’s quite interesting.

  • Phil Goetz says:

    “But the key moment was likely in the early 1950s, when Alan Turing explicitly set the goal of “thinking machines” – to fool real people.”

    This is paranoid delusional talk. The purpose of the Turing test is to eliminate metaphysics from arguments over whether a machine is intelligent. Before Turing, people would argue endlessly that machines couldn’t be intelligent because they couldn’t have a soul, because they weren’t made of brain-stuff, because they weren’t always-already embedded in “the hermeneutic circle”, etc. The Turing test says: If we can’t tell the difference between a machine and a person, we should assume the machine is a person.

    But your essay only goes downhill from there. You say science fiction engages in “deceit” when it imagines robots who are people–because you’re a slave of the same Platonist, essentialist metaphysics that Turing was fighting against, that insists that only things made out of brain-stuff can be people. When we do make machines that can connect to us, you’ll be the one demanding we treat them as slaves instead of as people, because they don’t fit into your metaphysics.

    Then you say we should try to make AI that “tells us the truth”–as if that hasn’t been the point all along! Is Google “lying to us” when it finds web page results? Is Siri “deceiving us” when it gives us a weather report? Your critique is disconnected from the reality it claims to observe.

    • Jon Alexandr says:

      Phil Goetz is correct in that Caveat Magister (CM) does not seem to understand what the Turing Test is designed for. Also, CM does not seem to understand the importance of facts and data. Hello! Undercutting facts and data is part of the totalitarian toolkit of the current regime in DC. Don’t let the playa dust obscure reality.

  • bystander says:

    I think Google Duplex is nothing so much as a confession by the techies that they are running out of ideas.

    Is making a hair appointment oneself so onerous a task that I’m going to pay Google (I assume this will not be free!) to do it mechanically for me? It would take me 10x as long to learn how to use it as two years’ worth of calling the hairdresser myself.
