 

Global AI Governance in 2025

Posted by World Summit AI on Jul 30, 2025 12:02:48 PM

A Shifting Landscape of Law, Strategy, and Standards

As Artificial Intelligence becomes increasingly embedded in the fabric of society, governments worldwide are racing to create robust governance frameworks that ensure safety, accountability, and innovation. In 2025, the legal and regulatory environment for AI is evolving rapidly, with key developments in major regions signalling a new era of strategic alignment, competition, and complexity.

European Union - The EU AI Act Sets the Global Benchmark

The European Union's AI Act entered into force in 2024, making history as the world's first comprehensive legal framework for AI. Before the Act, the EU relied on voluntary guidelines to encourage responsible and trustworthy AI development. 

World Summit AI speaker Jeannette Gorzala questioned organisational commitment to following these guidelines: “This is the first reason why we need the AI Act. To establish a minimum set of safety standards for AI development.” The second reason, she adds, is to eliminate legal uncertainty and prevent market fragmentation with the introduction of a single, unified AI rulebook. (KickstartAI)

The Act adopts a tiered risk-based approach:

  • Unacceptable-risk AI systems (such as social scoring or manipulative surveillance) are banned outright.

  • High-risk AI applications, particularly in finance, employment, critical infrastructure, and law enforcement, are subject to stringent obligations.

  • Limited-risk AI faces lighter regulation, but transparency remains a core requirement.

  • Minimal- or no-risk AI faces no new rules.

From February 2025, prohibitions and AI literacy requirements became active, while obligations for general-purpose AI (GPAI) models commence in August 2025. A General-Purpose AI Code of Practice, introduced in July 2025, provides companies with a pathway to demonstrate compliance and reduce bureaucratic burden. However, Meta’s chief global affairs officer, Joel Kaplan, has publicly stated that Meta will not sign the Code, citing concerns about legal uncertainty and regulatory overreach. Google, by contrast, announced today, 30th July 2025, that it will sign, opening a clear divide between two of the major foundation model providers in their regulatory alignment and public commitment to responsible AI standards.

 
United States - A Pivot Toward Deregulation and AI Dominance
 
In contrast, the United States unveiled its AI Action Plan in July 2025, signalling a shift toward deregulation and global competitiveness. The plan encourages rapid AI innovation by linking federal funding to state adoption of less restrictive AI laws.

Key initiatives include:

  • Executive orders to accelerate AI infrastructure and exports.

  • Guidelines for cybersecurity and AI safety.

  • Incentives for AI-driven workforce development and training.

This marks a reversal from previous risk-based approaches and highlights the U.S. strategy to assert technological leadership through deregulation and international partnerships.

United Kingdom - A Strategic Alliance with OpenAI

Last week, InspiredMinds were asked to comment for Times Radio on the UK government's signing of a non-binding Memorandum of Understanding (MoU) with OpenAI to promote public sector adoption of AI, develop safety protocols, and support AI infrastructure.

OpenAI will collaborate with the UK AI Safety Institute and aims to contribute to emerging "AI Growth Zones." While the partnership boosts London’s profile as an AI hub and promises efficiencies in the public sector, critics point to its lack of legal enforceability and call for greater transparency around data sharing and how the partnership will be implemented.


India - A Maturing Framework with Sectoral Nuance

India is gradually expanding its regulatory scope from sector-specific guidelines to a more comprehensive national AI framework. Upcoming developments include:

  • A new Digital India Act expected in late 2025.

  • Updated AI governance guidance from NITI Aayog, which forecasts India’s position to “own the disruption” and underpins the country’s focus to “shape AI for the benefit of the world”.
 

The reforms focus on algorithmic accountability, regulatory compliance, and platform liabilities. India is shifting from a purely innovation-centric model to one that balances innovation with safeguards.


China - Regulatory Control Meets Global Outreach

China introduced strict new rules in March 2025, mandating explicit and implicit labeling of all AI-generated synthetic content. These rules align with broader efforts to strengthen digital ID systems and reinforce state control.

In July 2025, China proposed a global AI governance framework, calling for greater multilateral cooperation and warning of the risks of fragmented national strategies.


Russia - Coordinating Policy Through Centralised Development

Russia is advancing a national AI legislative framework via its AI Development Center, launched in early 2025. This entity will oversee coordination between government bodies, industry stakeholders, and international partners.

The focus is on regulatory harmonisation, safety, security, and scaling AI for national infrastructure and the public sector.

Global Trends - Fragmentation, Alignment, and Standardisation

While global AI governance remains fragmented, some alignment is beginning to emerge:

  • Countries like Brazil, Canada, and South Korea are developing frameworks inspired by the EU AI Act.

  • The Paris AI Action Summit (February 2025) called for harmonised global standards and compliance automation, but reinforced how far we are from a unified framework.

  • International standards, particularly ISO/IEC 42001, are increasingly influential in shaping risk management, privacy, and auditing processes.

Despite these advances, key tensions remain between state-led control, private-sector innovation, and differing ethical approaches. These will likely shape the next phase of AI regulation as more countries formalise legal frameworks.

Conclusion - A Defining Moment for Global AI Governance

2025 is proving to be a pivotal year in the global AI governance landscape. The EU leads on structure, the US pushes innovation, the UK experiments through partnership, India matures, China consolidates, and Russia coordinates.

What emerges from this patchwork of philosophies will define the safety, transparency, and control of Artificial Intelligence. As regulation catches up with rapid development, alignment through shared standards and international cooperation is both urgent and increasingly unavoidable. This is why we have an entire track dedicated to Responsible AI and Governance at World Summit AI in Amsterdam this October, alongside a series of high-level events and dialogues across World AI Week, focused on shaping the future of AI regulation and ethical deployment.

 

World Summit AI global Summit series 2025

 
World Summit AI
08 - 09 October 2025
Taets Art & Event Park, Amsterdam
worldsummit.ai
 
World Summit AI Qatar
09 – 10 December 2025
Doha Exhibition & Convention Center 
 
