
China has been forging its own path on the regulation of technologies based on artificial intelligence. Credit: Hector Retamal/AFP/Getty
The past few years have seen no shortage of international dialogues, white papers and recommendations from advisory groups on the development and use of artificial intelligence. Yet when it comes to turning these into globally agreed rules that maximize the benefits of AI and minimize its harms, there has been a leadership vacuum.
As Nature reported last week (Nature https://doi.org/qhbv; 2025), one country is pushing forwards with plans to change that. China is proposing to set up a global body to coordinate the regulation of AI, to be known as the World Artificial Intelligence Cooperation Organization (WAICO). Establishing such a body is in all countries’ interests, and governments around the world should get on board.
AI models have astounding power and abilities that could supercharge science and boost economic growth. But they do not fully understand the world, and can fail in unpredictable ways. They could cause harm in many ways, including by exacerbating inequality, aiding criminality and assisting the spread of mis- and disinformation. Some prominent researchers even argue that superintelligent AI could one day destroy humanity.
So far, such risks have not been given due attention in the breakneck race to develop AI — a race that many fear has created an economic bubble that is on the brink of bursting. The United States, which is home to many of the companies making the most powerful and widely used models, has no national AI regulations, just a patchwork of state-level laws. On the whole, companies in the United States are expected to police themselves and establish their own internal guardrails — while also being in relentless competition.
The latest assessment of large technology companies’ safety and risk policies — the AI Safety Index — by the Future of Life Institute, based in Campbell, California, was published on 3 December. On a scale from A to F, no US firm scores higher than a C+ (see go.nature.com/48ikyhv). Yet last month, US President Donald Trump launched an initiative dubbed the Genesis Mission, which will give companies and researchers developing AI models unprecedented access to government data sets. The administration has compared it to the Apollo programme to reach the Moon.
In the European Union, the AI Act, introduced last year, requires the makers of the most powerful advanced AI systems to strengthen their analyses of the threats their models pose. The act is being implemented in stages, and it is not yet clear what effect the threat of substantial fines for non-compliance will have. Media reports suggest that companies are pressuring the EU to water down its laws.
China, meanwhile, has charted its own course. Its government is pushing hard to embed AI across society, from local-government chatbots to factory robots that can accelerate production. But its regulators have also been pushing for AI outputs to be traceable and for companies to be accountable. (Prominent China-based AI companies score even lower on the AI Safety Index than do their Western counterparts, partly, according to the Future of Life Institute, owing to a failure to address catastrophic misuse.)
Since 2022, the country has introduced an array of laws and technical standards that require developers to submit generative AI models to regulators for pre-deployment safety assessments and to include visible, indelible watermarks on AI-generated content, in a bid to prevent fraud and misinformation. That push is accelerating: China issued as many national requirements on AI in the first half of 2025 as it did in the previous three years, according to the Beijing-based consultancy Concordia AI.
Chinese officials want to prevent AI from rocking political stability, but still allow it to be used to boost economic growth, says Angela Zhang, a law researcher and technology-regulation specialist at the University of Southern California in Los Angeles. Any attempt to control the outputs of AI models in the name of safety, whether for political purposes or to align them with social values, remains fraught. But, as neuroscientist and AI entrepreneur Gary Marcus told reporters at an event in London in October, in many respects China’s motivations are “pretty similar” to those of other countries. “They don’t want to be annihilated by robots.”
Understanding China’s standards on AI is important for everyone. The country’s policy of producing free or cheap-to-access ‘open-weight’ models means that companies around the world are increasingly building their services on Chinese AI. At the same time, it’s important that China’s researchers engage multilaterally to help assess what kind of global governance would be both effective and realistic.
Existing efforts to oversee AI at the global level — for example, the Organisation for Economic Co-operation and Development’s AI Principles and the Council of Europe’s Framework Convention on AI — are either non-binding or not enforced. Regulating AI effectively will require solutions tailored to the technology, but one model from which WAICO could learn is that used by the International Atomic Energy Agency in Vienna to regulate nuclear energy. Under this system, countries agree to limits on development and open their nuclear facilities to inspection, allowing nations to verify that others are adhering to the rules.
Much of the rhetoric surrounding AI frames it as a geopolitical contest, whereby a country’s safety is guaranteed by technological dominance. But ‘winning’ is not an option for most nations, and an AI arms race won’t guarantee that people will be safer or wealthier. A better plan would be for everyone to come together to agree on what is safe and how the world wants to harness AI. China’s initiative should be welcomed, and researchers and authorities around the world should engage.