For years, data centres sat in the background: expensive, essential, and mostly invisible. In the AI era, that has changed. Data centres have quietly become the physical backbone of digital power – and increasingly, a geopolitical asset in their own right.
For enterprise leaders and big‑tech operators, that really does change the conversation. It is no longer only about where to find enough compute at the right price. It is about where your models actually run, which grids power them, which jurisdictions control them, and how exposed you are to the new politics of AI infrastructure.
At World Summit AI 2026, this is exactly what the mainstage panel “Developing a new global system of data centres for the modern economy” is set up to explore: how to build resilient, secure and scalable data‑centre ecosystems that span continents and, increasingly, orbit, while navigating governance, digital sovereignty and new technologies.
That session sits alongside themes like “AI Infrastructure: efficiency, sustainability & scalability,” “Building digital sovereignty and national resilience,” and “The Building Blocks of Sovereign AI” – a clear sign that infrastructure is now a front‑of‑house topic, not just something buried deep in the stack.
The World Economic Forum describes this as a scramble to secure control over data and the digital tools of the future, with data centres themselves firmly in scope. Hosting major facilities is no longer only an economic development play. It is about digital resilience and reducing dependence on foreign connectivity and compute.
Industry leaders are talking in the same terms. Nvidia’s CEO Jensen Huang has called AI “the largest infrastructure build‑out in human history,” and has described the shift as turning data centres into “AI factories” – not just warehouses for information, but industrial plants that turn energy and silicon into intelligence. That changes how boards should see these assets. High‑end AI data centres become productive capital, not interchangeable cloud real estate.
Three shifts stand out. First, capacity is concentrating. The largest pools of AI‑ready compute are clustered in a handful of countries and in the hands of a few hyperscalers and chip vendors, with the US and China still setting the pace. That concentration creates obvious vulnerabilities for everyone else – and for enterprises that have built their AI plans on a narrow set of cloud and silicon dependencies.
Second, energy is becoming the binding constraint. AI demand is running into grid limits, permitting delays and local political resistance. Analysts highlight the growing challenge of finding enough power, people and capital to build large‑scale facilities fast enough. Others argue that access to reliable electricity is now one of the main constraints on continued AI leadership. As AMD’s CTO Mark Papermaster has put it, AI is “quickly becoming as much an energy challenge as it is a compute one,” with efficiency across the whole stack now critical.
The Iran war has made this painfully visible. Missile strikes on Gulf energy assets and regional data centres, plus disruption around the Strait of Hormuz, have pushed up oil and gas prices and raised questions about the long‑term viability of some of the Middle East’s biggest AI build‑out plans. For operators whose business models already run on thin margins and cheap power, that kind of shock is no longer a tail‑risk; it is something boards now have to plan around.
Third, trade and security policy now reach deep into the stack. Export controls, sanctions and supply‑chain disputes are pulling data centres “from the backrooms of tech to the foreground of global security concerns.” A facility is only as advanced as the chips, cooling, fibre and power infrastructure it can actually get – and all of those are now shaped by geopolitics.
The result is a new kind of infrastructure politics. Governments are no longer asking only whether they have enough cloud capacity. They are asking who owns it, where it sits, what happens in a crisis and whether dependence on foreign compute leaves critical sectors exposed.
For large enterprises, none of this is theoretical. It reshapes three kinds of risk.
Concentration risk is the most obvious. If your most important AI workloads sit in a small number of regions or with a very small number of providers, then geopolitical shocks, export restrictions or power problems can quickly become business‑continuity issues. The uneven geography of digital infrastructure means local disruptions can have outsized effects far beyond national borders.
Jurisdictional risk is rising at the same time. Data‑sovereignty rules, national‑security reviews and sector‑specific regulations increasingly shape where workloads can run and which suppliers can serve them. An AI system that is perfectly usable in one region may run into legal or political friction in another, even if the technology is identical.
And then there is physical and security risk. Commentators now talk about data centres as a new frontline in geopolitical conflict because they concentrate digital services, sensitive data and national dependency in a small set of locations. Even where there is no direct conflict, the logic is simple: the more strategic these sites become, the more they attract political attention, scrutiny and defensive planning.
One response is regional diversification. Instead of assuming a single global footprint is “good enough,” companies are spreading critical AI workloads across multiple regions and providers to reduce exposure to regulatory, supply or energy shocks. It is not cheap. But discovering that a single bottleneck can slow your entire AI roadmap is far more expensive.
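One way to make that exposure concrete is a simple concentration score. The sketch below uses purely illustrative workload shares and hypothetical provider‑region names; the Herfindahl‑style index it computes is just one possible measure, not a standard from any specific vendor tool.

```python
# Hypothetical distribution of critical AI workloads across
# provider-region pairs (shares sum to 1.0). Figures are illustrative only.
workload_shares = {
    "provider_a/us-east": 0.55,
    "provider_a/eu-west": 0.20,
    "provider_b/eu-central": 0.15,
    "provider_c/apac": 0.10,
}

def concentration_index(shares):
    """Herfindahl-style index: sum of squared shares.
    1.0 means everything runs in one place; values near 0 mean
    workloads are well spread across providers and regions."""
    return sum(s ** 2 for s in shares.values())

hhi = concentration_index(workload_shares)
print(f"Concentration index: {hhi:.3f}")
```

A score near 1.0 signals that a single regulatory or power shock could stall the whole AI roadmap; diversification pushes the score down, at a cost that can now be weighed explicitly.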
Another is tying AI strategy to energy strategy. The leaders in enterprise AI will treat compute and power as one problem, not two. That means asking harder questions about long‑term electricity access, cooling, land use, power‑purchase agreements and local political licence – not just whether GPUs are available this quarter.
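To see why compute and power are one problem, a back‑of‑envelope estimate helps. Every figure below is an assumption for illustration – per‑accelerator draw, overhead factor and electricity price are not vendor or market data:

```python
# Rough power and cost estimate for a hypothetical GPU cluster.
# All inputs are illustrative assumptions, not real specifications.
gpu_count = 10_000
gpu_power_kw = 0.7          # assumed ~700 W per accelerator under load
overhead_factor = 1.3       # assumed overhead for cooling, networking, losses
hours_per_year = 24 * 365
price_per_mwh = 80.0        # assumed $/MWh under a long-term power deal

facility_mw = gpu_count * gpu_power_kw * overhead_factor / 1000
annual_mwh = facility_mw * hours_per_year
annual_energy_cost = annual_mwh * price_per_mwh

print(f"Steady-state draw: {facility_mw:.1f} MW")
print(f"Annual energy: {annual_mwh:,.0f} MWh")
print(f"Annual energy cost: ${annual_energy_cost:,.0f}")
```

Even this crude arithmetic shows why grid access, not GPU availability, can be the binding constraint: a ten‑thousand‑GPU cluster draws power on the scale of a small town, continuously, for years.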
A third is sovereign and sector‑specific architecture. For highly regulated or politically sensitive workloads, more organisations are starting to separate out regional or sovereign environments instead of pushing everything through one global stack. That does not mean every company needs to build its own sovereign cloud, but it does mean the “one cloud fits all” era is fading, especially in public services, defence, finance and critical infrastructure.
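In practice, that separation often starts as a simple placement policy. The sketch below assumes a hypothetical internal sensitivity classification and made‑up region names – a minimal illustration of the idea, not any provider's actual mechanism:

```python
# Hypothetical mapping from workload sensitivity to permitted regions.
# Categories and region names are invented for illustration.
ALLOWED_REGIONS = {
    "public":    {"us-east", "eu-west", "apac"},
    "regulated": {"eu-west", "eu-central"},   # e.g. workloads under EU rules
    "sovereign": {"onprem-national"},         # must stay in-country
}

def placement_options(sensitivity, available_regions):
    """Return the regions where a workload of this sensitivity may run,
    given which regions are currently available."""
    allowed = ALLOWED_REGIONS.get(sensitivity, set())
    return sorted(allowed & set(available_regions))

regions_up = {"us-east", "eu-west", "eu-central", "onprem-national"}
print(placement_options("regulated", regions_up))
```

The design point is that jurisdiction becomes a first‑class input to scheduling, so that a regulated workload is never placed somewhere legally usable compute happens to be cheapest but politically untenable.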
These are exactly the trade‑offs and patterns that World Summit AI is designed to surface, alongside perspectives from speakers like AMD’s Mark Papermaster and Schneider Electric’s Philippe Rambach on hybrid architectures for critical systems.
This is where the topic stops being abstract. The questions that matter are not purely technical; they are strategic: where do your most important models actually run, which grids power them, which jurisdictions control them, and how quickly could you adapt if any of that changed?
The goal is not to panic at every headline. It is to stop treating data‑centre strategy as an implementation detail. In the AI economy, infrastructure choices now shape growth, resilience and market access far more directly than many boards still assume.
For enterprise leaders, that makes this more than a real‑estate or cloud‑optimisation story. It is about designing an AI operating model that can survive a more fragmented, power‑hungry and politically contested environment.
In the age of AI, whoever really understands and shapes the new power grid – silicon, racks, cables and megawatts, not just models – will set the terms of competition for everyone else. If you do not know who controls the compute you rely on, you are already playing by someone else’s rules.
If this is something you’re actively navigating, the conversation doesn’t stop here.
Inside the InspiredMinds! Community Hub, you’ll find deeper insights, expert perspectives, and practical discussions from the people building and deploying AI across Europe and beyond.