
Paul Scharre says technological advancement is likely to lead to wider use of autonomous weapons. Credit: Greta Lorenz
“Latest: 8,300 students killed. 12 universities hit,” reads a news ticker at the bottom of the screen. “Authorities are still struggling to make sense of an attack on university campuses worldwide, which targeted some students and not others,” the newsreader says.
This scene is from Slaughterbots, a short, fictional film made by the Future of Life Institute, a California-based group that has called for a ban on autonomous weapons enabled by artificial intelligence. In the story, an executive at an arms company proudly pitches swarms of tiny drones that use facial recognition to spot and execute its customers’ enemies in a crowd. Spoiler alert: things don’t end well for student activists targeted by the weapons system because of their online political profiles.
At the time of the film’s 2017 release, Paul Scharre, a former special-operations infantry soldier and executive vice-president of the Center for a New American Security, a think tank in Washington DC, said that although a Slaughterbots-style scenario was technologically feasible, there were defensive measures against the type of attack portrayed.
Scharre, who has helped to draft US government policies on autonomous weapons and published a book on the topic in 2019, fears that humanity is at risk of an arms race that could lead to unreliable, hard-to-control machines with the power to decide when and whom to kill. Nature spoke to Scharre about how technology is enabling these new weapons systems.
How do you define autonomous weapons systems, and how widely used are they?
There is no internationally agreed definition of autonomous weapons, which can complicate discussions about them. I define them as weapons that, once launched by humans, can search for, find and attack targets on their own. Since the 1980s, many countries have acquired air-defence systems that can automatically track and shoot down incoming threats, for example.
When people debate autonomous weapons today, however, they are generally referring to those used in offensive combat. An Israeli drone system called Harpy hunts targets that emit radar signals, such as ground-based air defences designed to detect incoming aircraft and missiles. Such offensive systems are designed to be used in specific scenarios, and I’m not aware that they have been used widely in combat.
Humans are still generally making the final decisions about targets, but the pace of technological development means I can certainly see a world in which offensive weapons systems with greater autonomy become widely used.
Can you give an example of a cutting-edge system in use today?
Ukraine has been experimenting with autonomous terminal-guidance systems, also called the autonomous last-mile solution. In this kind of system, human operators navigate a drone towards a moving vehicle or a person, for example, and once it has locked on, even if the communications link to the operator is severed, the drone can continue to track and then strike the target on its own. Although a human is still choosing the target, these systems are a potential stepping stone towards weapons with greater autonomy.
How do you see the future development of these systems?
As well as terminal-guidance systems, we have also seen drones equipped with machine learning being used to identify targets in Ukraine. It is not a huge leap to see these two technologies being combined to enable systems to hunt for and attack targets without human involvement.
Countries are investing in ways to counter the increased use of drones, such as jamming their communications. If drones can operate autonomously, then jamming isn’t such a big problem. So I think we’re going to see more and more autonomy in warfare.
One vision of the future is that autonomous weapons systems are given the authority to strike targets within kill zones, or ranges of space and time, on the basis of certain criteria.
