
Researchers working at the companies behind the AI tools ChatGPT, Claude and Gemini are urging the firms to stop using AI technologies for mass surveillance and in fully autonomous weapons. Credit: Jaque Silva/NurPhoto/Getty
Attacks on Iran by the United States and Israel have reminded us how close artificial-intelligence research is to the front line. Reports of AI being used to prioritize targets are not new. Every second counts in the heat of battle, and those deploying the technology think it has the potential to fire faster, or respond to incoming fire more quickly, than humans can, and with minimal need for direct communication with command centres. At last week’s Raisina Dialogue — India’s annual foreign-policy conference — Anil Chauhan, the country’s chief of defence staff, and his Philippine counterpart, Romeo Brawner, stressed that AI and automated systems are transforming warfare.
Many AI researchers say that even the most advanced technologies, known as frontier AI models, are not yet capable of performing reliably or within the existing laws of war. Employees at OpenAI and Google, two of the technology firms developing such models, have gone public with their concerns. Their warning must be heeded. As with other technologies that have the potential to kill indiscriminately, the use of AI in warfare should not be permitted until there are specific rules to govern that use.
At present, there are no international laws that explicitly mention AI use in war. However, international humanitarian law states clearly that weapons must not be used indiscriminately. Moreover, combatants must take precautions to verify their targets and minimize the risk of civilian casualties. These requirements should apply to AI as much as to any other military technology.
Last month, Anthropic — the firm in San Francisco, California, behind the AI model Claude — publicly fell out with the US Department of Defense (DoD). The DoD demanded the right to use Anthropic’s technology for any lawful purpose. Anthropic refused permission for its models to be used for mass surveillance or in weapons without human oversight, in line with what it says its models can “reliably and responsibly do”. The DoD cancelled Anthropic’s contract and officially designated the firm a “supply chain risk”, a designation that would exclude it from bidding for certain US government contracts. In response, Anthropic has sued.
By contrast, rival AI company OpenAI, also based in San Francisco, agreed to a version of the DoD’s terms. More than 100 of the firm’s employees, along with nearly 900 at Google, have signed an open letter calling on the two companies to refuse the government’s demands (see https://notdivided.org/). On Saturday, Caitlin Kalinowski, leader of robotics at OpenAI, resigned, saying, “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”
As Nature and others have reported, many researchers have ethical and moral objections to AI’s use for mass surveillance and in autonomous weapons. And some have practical concerns relating to the technology’s current capabilities. AI models can give inaccurate outputs, and how they reach their conclusions is often opaque. Incorporating AI into military technology might one day reduce civilian casualties, but the creators of some of the best AI models say that this is not yet true.
Many staff members joined US AI companies on the understanding that their work would not be used by the military, or would be used only in restricted ways. But in January 2024, OpenAI updated its policies, deleting wording that said its models could not be used in “military and warfare”. Last February, Google dropped its commitment to preventing its AI from being used in surveillance or weapons. And Anthropic’s chief executive, Dario Amodei, has previously suggested using AI to achieve military superiority over autocracies.
These developments clearly show the limitations of voluntary commitments, which can always be walked back. An alternative approach is to draw up a legally binding international agreement on what is acceptable. There are precedents for scientists taking the lead in such efforts. Nuclear weapons were developed with little regulation or international agreement. Accords such as the Treaty on the Non-Proliferation of Nuclear Weapons and the Treaty on the Prohibition of Nuclear Weapons came about partly as a result of initiatives led by scientists. Under the Chemical Weapons Convention, it is illegal to develop or deploy such weapons, and a similar legal agreement exists to prevent the use of biological weapons.
The Convention on Certain Conventional Weapons is designed to address issues relating to emerging weapons. Its member states, including the United States, have been meeting to discuss lethal autonomous weapons. This could eventually lead to an international agreement that addresses AI specifically. At present, however, the development of such an accord is hindered by a lack of support from China, Israel and the United States, and by a lack of agreement over the precise role that AI has in warfare and over what counts as ‘acting autonomously’. Gathering such evidence should be the job of the United Nations Independent International Scientific Panel, which was appointed last month and is working on its first report.
According to OpenAI, the DoD is planning to convene a working group on the subject, involving military personnel, government staff members and leaders of frontier AI laboratories. Researchers at those labs must continue to press for the use of AI in warfare to be restricted, and use the power they have to advocate for global rules. Negotiating treaties takes time. Some argue that the pace of AI development means that the world cannot afford to wait for slow diplomatic solutions. That is no argument for inaction. If we never get started, it will always be too late.