Researchers have warned it is already too late to stop killer robots – and say banning them would be little more than a temporary solution.
University at Buffalo researchers claim ‘society is entering into a situation where systems like these have and will become possible.’
Elon Musk and Professor Stephen Hawking have both warned that artificial intelligence could develop a will of its own that is in conflict with that of humanity, and could herald dangers like powerful autonomous weapons.
Killer robots have a Pentagon budget line, and a group of non-governmental organizations, including Human Rights Watch, is already working collectively to stop their development, the team says.
They claim governance and control of systems like killer robots needs to go beyond the end products.
‘We have to deconstruct the term “killer robot” into smaller cultural techniques,’ says Tero Karppi, assistant professor of media study, whose paper with Marc Böhlen, UB professor of media study, and Yvette Granta, a graduate student at the university, appears in the International Journal of Cultural Studies.
‘We need to go back and look at the history of machine learning, pattern recognition and predictive modeling, and how these things are conceived,’ says Karppi, an expert in critical platform and software studies whose interests include automation, artificial intelligence and how these systems fail.
‘What are the principles and ideologies of building an automated system? What can it do?’
By looking at killer robots, we are forced to address questions that are set to define the coming age of automation, artificial intelligence and robotics, he says.
‘Are humans better than robots at making decisions? If not, then what separates humans from robots? When we define what robots are and what they do, we also define what it means to be a human in this culture and this society,’ Karppi says.