AI Action Summit in Paris
A stress test for global AI governance
Published by Thomson Reuters Foundation, February 10, 2025
This text was published as an opinion piece at Context News, a media platform created by the Thomson Reuters Foundation.
By Lisa Soder, February 07, 2025
Expectations for next week’s AI Action Summit in Paris are sky-high - partly due to last month’s DeepSeek shock and, of course, to Donald Trump.
This is the first AI summit since the political shift in Washington to bring together world leaders, heads of government, and the global tech elite, among them U.S. Vice President JD Vance.
The first two international AI summits, held in Bletchley Park and Seoul, yielded some notable successes: despite strained relations, the U.S. and China signed a joint declaration on AI safety, and tech giants like OpenAI, Mistral, and Google DeepMind committed to greater transparency and stricter safety standards.
However, the geopolitical landscape has fundamentally changed since then. Rivalries between global powers - especially the U.S. and China - have intensified, and Europe’s willingness to confront Big Tech seems to be waning.
What, then, can and must the Paris summit achieve under these new circumstances?
Address risks
Initially, these AI summits aimed to address the risks posed by rapidly advancing technology.
Since the last summit in May, the urgency has only grown: according to the International AI Safety Report, the latest AI models are approaching the skill level of professional cybersecurity teams, in some cases identifying vulnerabilities faster than human experts.
In biotechnology, they are setting new benchmarks, at times even outperforming PhD-level scientists in planning complex lab experiments.
However, as AI capabilities increase, so do the risks: cyberattacks, deepfakes, and dual-use scenarios in biotechnology pose serious threats to democracies and public security alike.
AI applications are also massive energy consumers: by 2026, they could require as much electricity annually as an entire country the size of Austria.
Despite past promises to move beyond non-binding declarations and establish a concrete regulatory framework, little progress has been made.
Meanwhile, the political climate is shifting rapidly.
Trump has reinstated his “America First” doctrine, rolling back AI safety and environmental regulations in one of his first executive actions while threatening protectionist measures.
At the same time, China’s DeepSeek has made spectacular breakthroughs, unsettling companies like OpenAI and Microsoft and further fuelling the AI arms race.
The European Union, once hailed - or feared - as the world’s “super regulator,” has recently toned down its ambitions following the Draghi Report on European competitiveness.
The bloc now seems wary of deterring investors and tech firms or provoking Trump’s threatened tariffs.
This new geopolitical climate is reflected in the summit’s agenda: while regulatory discussions remain on the table, the focus has shifted toward innovation, culture, and public-sector AI applications.
These are undoubtedly important issues. However, as the agenda broadens, it becomes harder to enforce binding commitments on the handful of tech giants driving the highest risks.
Window of opportunity
If the summit turns into a mere showcase of successful AI projects, its core mission - setting clear boundaries for powerful tech firms and mitigating AI’s societal and environmental dangers - will be sidelined.
For the Paris summit to be a success, three points are crucial.
First, a critical assessment is needed to determine whether the self-regulatory commitments made in Bletchley Park and Seoul have led to any real progress.
Without a mechanism to reward compliance, the most irresponsible actors will ultimately benefit: firms that cut corners on safety gain a competitive edge and end up setting the standards for the entire industry.
The AI Action Summit presents an ideal opportunity to scrutinize the actual implementation of safety measures.