Part of the I, ROBOT series
The problem with the way we try to give AI ethics is that we design these ethics to work for the status quo, the way things are right now.
But things never stay the same for long; the status quo is constantly changing, and that sets our AI ethics up to fail.
That’s the insight of Dr. Christoph Salge, a professor of Computer Science at the University of Hertfordshire and New York University’s Game Innovation Lab. Instead of giving Artificial Intelligence a set of rules to follow that will become obsolete as the world (and our needs) change, we should give AI principles to follow that can adapt to our own changing needs.
He’s now working on one such possibility: that AI should be given the guiding principle of empowering human beings. Their fundamental purpose, whatever else they do, would be to be empowering.
That doesn’t mean doing everything humans want, but it does mean understanding that outcomes which give people more, and more beneficial, choices rather than fewer are usually better. That way, as circumstances change, robots will understand that their primary task isn’t to make decisions for us but to give us better options from which to choose.
That’s great in theory, but how do we implement it in practice? Human beings have a hard enough time empowering other human beings; how can we teach robots to do it?
We discuss this new approach to AI ethics with Salge in this podcast of the Burning Man Philosophical Center.
Listen to all the Philosophical Center podcasts here.