
Urgent research is needed to tackle AI threats, warned Google DeepMind chief Sir Demis Hassabis, as global leaders and tech executives debated how to keep fast-moving AI development safe. Speaking at the AI Impact Summit in New Delhi, Hassabis said safety work has to move faster, and that governments should focus on “smart regulation” aimed at the biggest risks, rather than rules that slow everything down.
His comments come at a time when there is growing disagreement over who should set the rules for AI and how strict those rules should be. Several delegations at the summit pushed for stronger international coordination, with calls for shared standards on safety testing, transparency and the use of powerful systems. The United States took a different line, arguing that heavy central control could hold back adoption and innovation.
What Hassabis says the real risks are
Hassabis pointed to two main dangers.
The first is misuse. As AI tools become more capable and cheaper to access, the same systems that help people code, design and learn can also be used by criminals and hostile groups. That can include scams, deepfakes, automated hacking attempts, and more efficient ways to spread disinformation. In his view, the industry needs clearer guardrails and faster research into how to stop “bad actors” from abusing the technology.
The second is loss of control. Hassabis said the world needs to take seriously the possibility that increasingly autonomous systems could behave in unpredictable ways as they get more powerful. That does not mean machines suddenly “become evil”, but it does raise questions about reliability, alignment, and how humans keep meaningful control when systems can plan, act and adapt at speed.
He also acknowledged the practical challenge for policymakers. AI is moving quickly, and regulation can lag behind. Even when governments want to respond, it can be hard to write rules that still make sense a year later.
Can Big Tech slow things down?
Asked whether major AI labs could slow progress to give society more time to catch up, Hassabis suggested one company cannot control the whole direction of the field. DeepMind can choose how it releases systems and what safety checks it runs, but competition and global research mean the wider pace is not set by any single player.
That position will sound familiar to many regulators. They often rely on voluntary commitments, safety frameworks and reporting, while trying to avoid a race where everyone ships first and fixes later.
A summit split on global governance
The debate in Delhi showed a clear divide.
Many leaders at the summit argued AI needs common international approaches, especially as systems cross borders instantly. They say shared governance could help prevent regulatory loopholes, set minimum safety tests for frontier models, and support poorer countries that lack resources for oversight.
The US delegation rejected the idea of “global governance” for AI. Washington’s argument is that countries should shape rules locally, based on their own needs and values. The US side also warned that large international structures could turn into slow-moving bureaucracies that discourage experimentation and deployment.
This is not just theory. The direction governments take affects everything from how AI is trained and evaluated to what companies must disclose and how liability works when systems cause harm.
What other leaders are saying
OpenAI chief Sam Altman also called for urgent regulation at the summit, backing safeguards similar to those societies have used for other powerful technologies. His point was not that AI should be locked away, but that the pace and scale of change demand clear rules, especially around high-risk uses.
India’s Prime Minister Narendra Modi urged countries to work together so the benefits of AI are shared widely, not concentrated in a few companies or nations. UK Deputy Prime Minister David Lammy said safety and security should come first, and argued politicians and tech firms need to work together rather than pointing fingers at each other.
The China question
Hassabis also weighed in on the global race for AI leadership. He suggested the US and its allies remain slightly ahead, but that China is close and could narrow the gap quickly. That adds pressure to move fast, yet it also raises the stakes. In a competitive environment, it is harder to agree on limits, and easier for safety work to be treated as optional.
Why science education still matters
On skills, Hassabis made the case that strong foundations in science and maths will still matter even as AI writes more code and automates more tasks. His view is that AI will expand what people can build, but judgement, creativity and domain knowledge will still separate good work from bad. In other words, tools help, but understanding remains valuable.
What happens next
The summit is expected to end with a shared message from participating countries and companies on how to approach AI. Even if the wording is careful, the split between those pushing global coordination and those resisting it is now very public.
For everyday people, the argument matters because AI is already changing work, education, media and security. The question is whether safety research and rules will keep pace, or whether the world will keep reacting after problems show up.
