Part of the I, ROBOT series
At the time Kafka wrote his famous stories about human beings struggling to make sense of a world full of unaccountable and inscrutable bureaucracies, there was no Artificial Intelligence.
And yet … the closest thing any of us may ever experience to a Kafka-esque nightmare is trying to work our way through an automated phone system in order to ask a question not on the main menu. Especially when it’s for an institution that has significant power over our lives.
The logic of “The Trial,” “The Castle,” or “In the Penal Colony” is almost identical to the logic of the customer service line for a global bank, the complaints desk for a major social media platform, or the filing processes of a major government agency.
Kafka was responding to a world in which the logic of bureaucracies was overcoming the wisdom of people and institutions. He was not a technologist. Yet as the world has gotten more automated, it has become more Kafka-esque. AI is either a cause of that bureaucratization or, at the very least, deeply correlated with it.
How Decisions About Decisions Get Made
Prior to the modern era, decisions had almost exclusively been made by people who were responsible for them. There were, of course, rules that had been set down — perhaps by an emperor or a king, perhaps by a pope or a bishop — but fundamentally the people who made the rules were responsible for them: some person made the decision, and was accountable for it. Just as vitally: with limited communications technology, the people outside of the ruler’s immediate environment who implemented the rules had an extraordinary measure of autonomy. They had to: it was not possible to check in quickly with the home office, or to exercise significant oversight. The circumference was far enough from the center that, whatever the theory of absolute obedience, practice required some level of independence … and that meant some person was responsible for how the rules he implemented were interpreted, how the decisions were made, and what exceptions were granted. There were systems and guidelines and limitations, to be sure, but decisions could easily be traced back to their sources.
So while enormous bureaucracies were created in the Roman Empire, in Imperial China, and in the Catholic Church, their functioning depended upon the agency of the people designated to carry them out.
That began to change with faster travel and early communications technology. Suddenly people at the peripheries could send timely requests for decisions to centralized authority figures — and centralized authority figures began to demand them. The people responsible for implementing the decisions had less authority and autonomy.
You would think that as the former decision makers at the edges lost the ability to make decisions, the power of the central authorities would increase. And it did — in theory. But in practice, it meant that centralized authorities had exponentially more decisions to make — so many that they had to create large systems of delegation. The king only has so many hours in a day, so he has to appoint ministers; the ministers only have so many hours in a day, so they appoint directors; and so on, and so on. Authority was therefore centralized, but within the central offices it was also diffused. More and more people making decisions meant that responsibility for any given decision was harder and harder to track down.
By Kafka’s time in the early 20th century, it could already seem like no one was making a decision at all: the systems were making decisions for themselves.
Bureaucracy Without Bureaucrats
Artificial Intelligence takes Kafka’s metaphor and makes it literal. Programmers who are completely detached from the decisions being made (programmers who build a system for prisons never meet the prisoners or guards; programmers who build a system for a bank have no stake in the welfare of its customers) create systems to the specifications of people who know nothing about programming, and these systems do, in fact, run themselves. The dissolution of responsibility is complete: there is often literally no person who is responsible for turning you down for a loan or tagging you as a security threat. The system did it itself. Who can you point to, who can you appeal to, who can you rail against?
AI is uniquely suited to this incarnation of bureaucracy because the logic of AI is the logic of bureaucracy: all inputs are interpreted based on heuristics and precedent, with no outside factors considered. AI is the perfect bureaucracy, and bureaucracy is the perfect AI.
Thus the promise of AI — to make our lives more convenient and our systems smarter — is only possible if our lives become more bureaucratic, more and more governed by systems that act autonomously, without responsibility or appeal. The very justification of AI — that our puny human brains can’t process enough information to make the right decisions — makes the act of appealing its decisions all the more futile. Who are you to tell the system how it should treat you?
This wholesale reorganization of our world will also surely have an impact on the human psyche. Major shifts in technology have always led to new expressions of the human condition. The Industrial Revolution, the advent of mass transit, the establishment of mass media, and the information economy have all privileged certain kinds of lifestyles and ways of relating to the world — and thus encouraged new habits of mind among the population.
The social organization that technology makes possible shapes the psyches of the people who grow up in it. The more AIs take over major decision making processes, the more the logic of perfect bureaucracy will come to dominate all aspects of life, and come to seem natural to future generations. What kind of people will such a society create?
Silicon Valley doesn’t ask that question (marketing decks don’t count). Kafka and the cyberpunks did.
Thank you Caveat, I love this. I don’t feel like most people spend enough time thinking about the consequences of this push towards complete divestment of responsibility from real human actors. There is a lot of faith and reliance on an algorithm somehow being fair and just, but I’m not sure there’s any reason to believe that.
Yuval Noah Harari writes (in Homo Deus) about how AI threatens to destroy the liberal humanist world order (our faith-based beliefs in free will, the self, and by extension: democracy and human rights). This seems entirely plausible to me, and it is terrifying. It’s enough to make me want to go get even weirder in the desert each year.
This will never happen. Believe me!
“The very justification of AI, that our puny human brains can’t process enough information to make the right decisions, makes the act of appealing their decisions more futile.”
The worst part here is that it is taken for granted that “the right decisions” are the ones that the machine takes to be “the right decisions.”
This in itself is the largest and worst “impact on the human psyche.”
I am seeing so many things coming at me in this way.
I have been saying for years, after using badly implemented programs: “this is proof the engineers don’t use what they write.”
This is a good “food for thought” article. AI may be useful in helping make a decision, but I would not want it making the final decision…
Report comment