AI researchers sound alarm against killer robots

A group of scientists including Stephen Hawking are petitioning the UN to urgently consider how AI might play a part in the future of warfare.
24 August 2017

Interview with Peter Clarke, Resurgo Genetics

War has occurred throughout human history and, chances are, it will continue to do so in the future. With this in mind, it’s important to ensure that when it does occur it’s carried out as humanely as possible, which is why treaties such as the Geneva Convention exist. Violating certain parts of this treaty, by using chemical or biological weapons, for example, constitutes a war crime. With recent developments in artificial intelligence, a new version of the convention may be required. There have been two major revolutions in warfare so far: gunpowder and nuclear weapons, and the use of artificial intelligence is seen by many as the third. In an open letter to the United Nations, more than 100 leading robotics experts, including Elon Musk, Stephen Hawking, and the founder of Google’s DeepMind, have called for a ban on the use of AI in managing weapons systems. Tom Crawford spoke to Peter Clarke, founder of Resurgo Genetics and an expert in machine learning…

Peter - The aim of this letter was to try and head off the possibility of an AI arms race. Most of the people developing these AI and robotic technologies have no wish to create killer robots. But once this technology is in place, and becomes a massively powerful tool in the suite of armaments that a nation state or other actors have to fight warfare, it will, inevitably, get used. So what the letter is trying to do is trigger a debate about having international legislation, much in the same way that you have for nuclear weapons or chemical weapons. We need to start thinking about a similar type of international-level legislation for these autonomous weapons systems, because they are going to transform the nature of warfare, and we need to prepare for that.

Tom - We don’t quite have fully autonomous killing machines yet, but I feel like we’re very close with some of the technology that currently exists. For example, drones which are flown by remote control by a pilot in another country and can drop bombs which, of course, will kill people. So how are these autonomous systems different to something like that, which still has the human element?

Peter - With the human element you’re always subject to your own personal morality and ethics: even if you happen to be separated by many thousands of miles from these events, you still have the capacity to understand that you’re destroying people’s lives. With an autonomous system, programmed with a particular set of objectives, you’re removing that human moral and ethical constraint on the behaviour of those systems and, as soon as that’s lost, we enter a very, very different world.

AI - Intruder, intruders must be destroyed - pow, pow, pow, pow, pow

Tom - Is this a case of a robot with a gun just walking around and shooting people that it thinks are a threat or that are enemies?

Peter - This is the interesting thing. I think we all have these images from Terminator and other films of great lumbering robots hunting people down but, actually, the reality could be very different. What you may have is something along the lines of swarms of little autonomous drones carrying small packets of explosives that could target individuals in a population, so you could have swarms of millions of these things sweeping over cities. There are many very different realisations of this type of technology, but we know that, in whatever form, it will transform warfare.

Tom - How would such an autonomous system even go about knowing specifically who to target?

Peter - I think much in the same way that, for example, credit reference decisions are made nowadays, where you’re aggregating information on people’s previous payment histories. Or, as marketing gets more and more targeted, where you can build up very, very precise profiles of who people are in an autonomous way and target them autonomously for advertising. These same types of techniques can be used to profile people in all sorts of other ways. You could imagine a situation in the future whereby an authoritarian regime could use this same type of targeting and profiling to identify individuals in the population who were a threat to its power structures, and have the entire chain not be subject to human decision. So it’s really taking all of the technologies that exist now in terms of robotic weapons systems and tying those into the tools that allow very precise profiling of a population. This is not talking about far-off, sci-fi, future fantasies; these are technologies that are available now, and could be put together now into a system which could be catastrophic for the globe.
