Part of the I, ROBOT series
The problem with the way we try to give AI ethics is that we design these ethics to work for the status quo, the way things are right now.
But things never stay the same for long: the status quo is constantly changing, and that sets our AI ethics up to fail.
That’s the insight of Dr. Christoph Salge, a professor of Computer Science at the University of Hertfordshire and New York University’s Game Innovation Lab. Instead of giving Artificial Intelligence a set of rules to follow that will become obsolete as the world (and our needs) change, we should give AI principles to follow that can adapt to our own changing needs.
He’s now working on one such possibility: that AI should be given the guiding principle of empowering human beings. That their fundamental purpose, whatever else they do, is to be empowering.
That doesn’t mean doing everything humans want, but it does mean understanding that outcomes that give people more, and more beneficial, choices rather than fewer are usually better. That way, as circumstances change, robots will understand that their optimum task isn’t to make decisions for us but to give us better options from which to choose.
That’s great in theory, but how do we implement it in practice? Human beings have a hard enough time empowering other human beings – how can we teach our robots how to do it?
We discuss this new approach to AI ethics with Salge in this podcast of the Burning Man Philosophical Center.
Listen to all the Philosophical Center podcasts here.
Really nice discussion. I got some nice creative thoughts for my ongoing sci-fi project, and they might even bleed over into other projects as well. Thanks to the both of you!
Logic, in a sense, is the prime directive of consciousness. It is easy to think in absurd stereotypes when we imagine a person primarily driven by logic, like an AI. But to a lot of human beings it would also seem illogical to suppress emotions or to disregard human needs. An AI would need the ability to reflect, as well as some kind of programmed reward system, so that its dominant focus is to act in ways most efficient for the benefit of itself and of humanity. For humans this is supposed to be, and has been, built in by evolutionary conditioning, even though there have been eons’ worth of humanity not empowering each other. I guess it would come down to what core value the AI would be programmed with. Even a perfect simulation cannot predict with complete certainty how events will unfold. Nothing is ever truly certain. AI should be programmed with a core value, something other than comfort, success, or social validation. Logic, or a commitment to ‘doing the right thing’ for humans at any time, based on the knowledge and the logical connections that the AI can make through reflection, would be most important.
The electrochemical fireworks firing in our neurons, experience on top of experience on top of experience, create the most intellectually satisfying human programming. It’s that primary evolutionary conditioning that our brains are most capable of running, and it is fundamentally that of being selfless. The honest desire to take care of, or to empower, humans would be one hell of a program to program.