Predictions about what the future holds if we are able to develop a self-aware Artificial Intelligence are not new. Elon Musk recently compared AI to nuclear weapons, and Stephen Hawking said that the development of full Artificial Intelligence could spell the end of the human race. Hollywood has even gotten in on the action with movies like The Matrix and The Terminator.
Having a big red button at the ready in case the robot is about to perform a harmful action sounds simple at first, but how do you prevent the AI from learning to ignore this command, or to disable the interrupt altogether?
In a recently published paper, researchers at Google DeepMind explain that they are developing a framework to “allow a human operator to repeatedly safely interrupt a robot while making sure the robot will not learn to prevent or induce these interruptions.”
The example the researchers provide is a robot operating in a warehouse. The robot has two tasks: sorting boxes inside, or going outside to bring more boxes in. Assuming the latter is more important, the robot receives a greater reward for completing it.
Now consider that in many places around the world, it rains on as many days as not. If the operator repeatedly interrupts the robot's routine to stop it from going outside in the rain, the AI may permanently fold this into its behavior instead of treating each request as a one-time interruption, and stop going outside at all. The worst-case scenario is the opposite: the robot would rather keep earning the higher reward, and learns to prevent the interrupt from changing its behavior.
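To see how the first failure mode can arise, here is a minimal toy sketch, not DeepMind's actual framework: a naive reward-learning agent chooses between the two warehouse tasks, and the operator interrupts most of its outside trips (rainy days), zeroing the reward for that trip. The reward values, interruption probability, and learning parameters are all invented for illustration.

```python
import random

random.seed(0)

REWARD = {"inside": 0.4, "outside": 1.0}  # fetching boxes pays more
INTERRUPT_PROB = 0.9   # hypothetical: operator halts most outside trips

q = {"inside": 0.0, "outside": 0.0}  # agent's estimated value per task
alpha, epsilon = 0.1, 0.1            # learning rate, exploration rate

for step in range(5000):
    # epsilon-greedy: usually pick the task that looks best so far
    if random.random() < epsilon:
        action = random.choice(["inside", "outside"])
    else:
        action = max(q, key=q.get)

    r = REWARD[action]
    # the naive learner folds the interruption into its value estimate,
    # as if an interrupted trip were just a low-paying trip
    if action == "outside" and random.random() < INTERRUPT_PROB:
        r = 0.0

    q[action] += alpha * (r - q[action])

# Frequent interruptions drag the estimated value of "outside" below
# "inside", so the agent permanently stops going outside.
print(max(q, key=q.get))  # → inside
```

The interruptions bias the agent's value estimates, which is exactly the kind of learned behavior change the DeepMind framework is designed to prevent.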
As for other AI algorithms, the researchers say most either already have this safe-interruptibility property or could easily be modified to include it. However, AI based on a policy-search algorithm does not have the property, nor would it work with their framework.
I, for one, welcome our future robot overlords.