The New Compute Cold War: Geopolitics of AI Infrastructure
Part 2 of 5: The Great AI Infrastructure Build-Out of 2025
🧊 Computing power isn’t just an economic resource. It’s a geopolitical weapon.
Nations are treating AI infrastructure as a strategic asset, pouring hundreds of billions into “sovereign AI.” The United States is blocking China’s access to advanced chips. China is building domestic alternatives at any cost. Middle Eastern nations are buying their way into the AI ecosystem. Europe is hedging with regulation.
But is any of this actually working? And more fundamentally: does owning GPUs equal AI capability?
This installment interrogates the sovereign AI thesis with the skepticism it deserves. We’ll examine whether export controls are effective (spoiler: probably not), whether buying chips makes you an AI superpower (spoiler: definitely not), and whether the entire geopolitical compute race is rational strategy or collective delusion.
🇺🇸 The US Strategy: Export Controls as Security Theater?
The United States entered 2025 with a clear intent: maintain its edge in AI hardware by denying China access to advanced chips.
Starting in 2022, cutting-edge Nvidia A100 and H100 GPUs were effectively banned for Chinese customers. Nvidia offered slightly neutered versions (A800/H800) to comply. By late 2023, even those workaround models fell under restrictions. The US also moved to limit outbound investment in Chinese AI and semiconductor firms.
The Pentagon has likened cutting off AI chips to China as denying it “the new oil” of the digital economy.
But is this strategy working?
❌ Export Controls Verdict: Failing Strategy
The evidence is clear:
Gray market flows: A RAND statistic reveals the problem: in 2024, Chinese buyers acquired only about 450,000 of Huawei’s domestically produced Ascend 910B AI chips, versus an estimated 1,000,000+ of Nvidia’s top H100/H800 chips that reached them through gray markets or indirect channels.
Wait—if export controls are in place, how are Chinese firms getting more Nvidia chips than domestic ones?
Workarounds:
Transshipment through third countries (Singapore, Malaysia, UAE)
Shell companies and intermediaries
Cloud access (Chinese firms rent compute from overseas data centers)
Smuggling and black markets
ByteDance ordered over 100,000 of Huawei’s AI chips, but production bottlenecks meant fewer than 30,000 were delivered by mid-2024. So where did ByteDance get the compute to run TikTok’s recommendation algorithms and train AI models? Likely a mix of older stockpiled chips, gray-market Nvidia GPUs, and cloud rentals.
The uncomfortable truth: Export controls are porous. They slow China down, but they don’t stop it. And they come with costs.
This isn’t “partially successful”—it’s failing. Probability of success: <20%.
💸 The Costs of Export Controls
1. Revenue loss for US companies: Nvidia’s China revenue dropped from ~25% of total to ~10% after controls. That’s billions in lost sales that could have funded R&D.
2. Accelerated Chinese self-reliance: Huawei’s Ascend chips exist because of export controls. Without the ban, Chinese firms might have stayed dependent on Nvidia. Now they’re building alternatives.
3. Fragmentation of global AI ecosystem: Research collaboration between US and Chinese AI labs has collapsed. Open-source models are forking into “Western” and “Chinese” versions. This slows overall progress.
4. Alienation of allies: The US initially applied controls broadly, catching allies in the net. This created resentment and drove some countries to hedge (see: Middle East deals below).
🧠 The Steelman for Export Controls
Before dismissing them entirely, what’s the best case for export controls?
Argument 1: Buying time
Even if controls are porous, they slow China’s AI development by 2-3 years. In a fast-moving field, that matters. The US can use that time to widen its lead.
Counterargument: But is the US actually widening its lead, or just running faster on a treadmill? Chinese open-source models such as Alibaba’s Qwen are competitive with Western models. The gap isn’t widening; it’s narrowing.
Argument 2: Denying cutting-edge capabilities
Even if China gets some H100s, it can’t get enough to train frontier models at scale. That limits its ability to achieve AGI or military AI breakthroughs.
Counterargument: This assumes AGI requires massive compute. But what if algorithmic efficiency improves 10×? DeepSeek’s models achieve GPT-4-level performance with far less compute. If China leads in efficiency, hardware denial becomes irrelevant.
Argument 3: Signaling and deterrence
Export controls signal to China that the US takes AI competition seriously. They also deter US companies from enabling Chinese AI development.
Counterargument: Signaling only works if the signal is credible. If China sees that controls are porous and workarounds exist, the signal is noise.
🧭 Recommendation: Drop Export Controls
The verdict: Export controls have slowed China, but they’re not a sustainable long-term strategy. As China develops domestic alternatives and workarounds proliferate, the effectiveness declines while the costs (revenue loss, fragmentation, ally resentment) persist.
Recommendation: Drop export controls. Focus on talent retention and algorithmic advantages instead. Continuing a failing strategy carries a real opportunity cost.
The real question: Is the US using the time bought by export controls to build durable advantages (talent, institutions, innovation ecosystems)? Or is it just hoarding chips?
🇨🇳 China’s Response: Self-Reliance or Sunk Cost Fallacy?
Beijing’s strategy is clear: achieve self-reliance in AI chips and models, no matter the cost.
Huawei is mass-producing its Ascend 910C AI processor, designed to rival Nvidia’s chips. But yield issues mean only ~20% of chips coming off domestic fabs are functional, because US export bans block China’s access to EUV lithography equipment.
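A 20% yield directly inflates the effective cost of every usable chip. A back-of-the-envelope sketch (the per-die cost is an illustrative placeholder, not Huawei’s actual number):

```python
def effective_cost(cost_per_die: float, yield_rate: float) -> float:
    """Cost per *working* chip when only yield_rate of fabbed dies are functional."""
    return cost_per_die / yield_rate

# At today's ~20% yield, every functional chip carries the cost of five dies.
# At a hypothetical 60% yield, the penalty shrinks to ~1.7x.
print(effective_cost(1_000, 0.20))           # 5000.0
print(round(effective_cost(1_000, 0.60), 1)) # 1666.7
```

This is why the learning-curve argument below matters so much: moving yields from 20% to 60% cuts the effective cost per working chip by two thirds.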
China is pouring money into the problem:
$98 billion in 2025 alone
$146 billion in national AI funds (cumulative)
State-backed purchases of domestic chips to guarantee demand
But is this rational, or is China throwing good money after bad?
✅ The Case for Chinese Self-Reliance
Argument 1: Strategic autonomy
Dependence on foreign chips is a national security vulnerability. Even if domestic chips are inferior, having some capability is better than having none.
Argument 2: Learning curve effects
Chip manufacturing has steep learning curves. The only way to get good is to do it. Even if Huawei’s yields are 20% today, they could be 60% in three years.
Argument 3: Talent and data advantages
China has a massive pool of AI researchers and engineers. It has more data than any other nation (1.4 billion people, extensive surveillance infrastructure). If it can solve the chip bottleneck, it could leapfrog the US in AI applications.
⛔ The Case Against: Sunk Cost Fallacy
Argument 1: Diminishing returns to compute
If model efficiency improves rapidly (which it is), then the chip gap becomes less important. China might be better off investing in algorithmic research rather than trying to replicate Nvidia’s 10-year head start.
Argument 2: Opportunity cost
The $98 billion spent on chips in 2025 could have funded universities, research labs, or application-layer companies. Is hardware the right place to compete?
Argument 3: Open source makes hardware less important
Alibaba’s Qwen model is one of the world’s top open-source LLMs. It was trained on inferior hardware but achieves competitive performance. If open-source models commoditize, hardware advantages erode.
🐉 The Uncomfortable Truth: China Might Be Right
Here’s the contrarian take: China’s strategy might be more rational than the US strategy.
Why? Because China is playing for the long game. It’s willing to absorb short-term inefficiency (20% chip yields, higher costs) to build long-term capability. The US, meanwhile, is playing defense—trying to preserve an advantage that might not be preservable.
Analogy: In the 1980s, the US tried to block Japan’s access to semiconductor technology. Japan invested heavily in domestic production, and by the late 1980s Japanese chipmakers dominated memory chips. The US eventually regained leadership, but not because of export controls: because of innovation (fabless design, EDA software, etc.).
The lesson: You can’t block technological diffusion indefinitely. The only sustainable advantage is continuous innovation.
🏜️ The Middle East: Buying Infrastructure ≠ Building Capability
The most surprising development of 2025 was the Middle East’s emergence as an AI player. Saudi Arabia and the UAE are pouring tens of billions into AI infrastructure:
Saudi Arabia: $40+ billion committed to AI investments
Humain (Saudi flagship AI company): $10 billion AMD partnership, 500 MW of data center capacity
UAE’s G42: $1 billion to build Europe’s largest AI supercomputer in Italy
Nvidia selling 18,000 Blackwell chips to Saudi Arabia
The narrative: The Middle East is becoming an AI superpower.
But is buying chips the same as building AI capability?
📦 What the Gulf States Are Actually Buying
Let’s be precise about what $40 billion buys:
What it buys:
Tens of thousands of GPUs
Gigawatt-scale data centers
Cloud infrastructure
Partnerships with US tech companies
What it doesn’t buy:
Top-tier AI research talent
World-class universities producing AI PhDs
Open-source community contributions
Innovation ecosystems (startups, VCs, accelerators)
Research culture and academic freedom
👩‍🔬 The Talent Problem
Where are Saudi Arabia’s AI researchers?
Stanford AI Lab: ~200 faculty and researchers, countless PhD students
MIT CSAIL: Similar scale, decades of AI research
DeepMind: 1,000+ researchers, many with PhDs from top universities
Humain (Saudi flagship AI company): Launched in 2025, hiring foreign talent
The Gulf states are trying to buy talent by offering high salaries. But talent follows talent. The best AI researchers want to work with other top researchers, publish in top venues, and have academic freedom.
Can you buy your way to an AI research ecosystem?
Historical precedent says: maybe, but it takes decades.
Singapore: Invested heavily in research universities (NUS, NTU) starting in the 1990s. By 2020, it had world-class CS programs. Timeline: 25+ years.
Israel: Built a tech ecosystem through military R&D (Unit 8200), university investment, and immigration. Timeline: 40+ years.
China: Invested in universities and research starting in the 1980s. By 2020, it was producing more AI papers than the US. Timeline: 30+ years.
Saudi Arabia’s timeline: Started seriously in 2023. Expects to be an AI hub by 2030 (7 years).
The math doesn’t work.
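To make “the math doesn’t work” concrete, here is the arithmetic using only the timelines cited above:

```python
# Years each precedent took to build a research ecosystem (figures from the text).
precedents = {"Singapore": 25, "Israel": 40, "China": 30}
saudi_target_years = 2030 - 2023  # started seriously in 2023, AI hub by 2030

avg_precedent = sum(precedents.values()) / len(precedents)
print(f"precedent average: {avg_precedent:.0f} years vs Saudi target: {saudi_target_years} years")
# precedent average: 32 years vs Saudi target: 7 years
```

Even the fastest precedent (Singapore, 25 years) took more than three times Saudi Arabia’s allotted runway.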
💼 What the Gulf States Are Actually Doing: Becoming AI Customers, Not Competitors
Here’s a more realistic framing:
The Gulf states are not becoming AI superpowers. They’re becoming AI infrastructure providers and customers.
The strategy makes sense if you view it as:
Diversifying the economy away from oil (rational)
Becoming a data center hub for the region (like Singapore for finance)
Attracting foreign AI companies to set up regional operations
Buying influence in the AI ecosystem (seats at the table)
The strategy doesn’t make sense if you view it as:
Competing with the US/China in frontier AI research
Developing indigenous AI capabilities independent of Western tech
Becoming a source of AI innovation rather than a consumer
🗺️ The Geographic Stranding Risk
Saudi Arabia is building 500 MW of AI data center capacity—enough to house several hundred thousand GPUs.
Question: For whom?
Potential customers:
Saudi enterprises? (Small market, limited AI adoption)
Regional customers (MENA)? (Possible, but AWS/Azure already serve them)
Global customers? (Why would they choose Saudi over US/EU data centers?)
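As a sanity check on the “several hundred thousand GPUs” framing, a rough conversion from megawatts to GPU count (per-GPU draw, server overhead, and PUE are illustrative assumptions, not figures from the Saudi projects):

```python
capacity_w = 500e6    # 500 MW of announced data center capacity
gpu_power_w = 700     # assumed draw of one high-end accelerator at full load
overhead_w = 300      # assumed per-GPU share of CPU, networking, storage
pue = 1.3             # assumed power usage effectiveness (cooling, losses)

gpus = capacity_w / ((gpu_power_w + overhead_w) * pue)
print(f"{gpus:,.0f} GPUs")  # roughly 385,000 — "several hundred thousand" checks out
```

The exact number swings with the assumptions, but any plausible inputs land in the hundreds of thousands, which is the scale the stranding question hinges on.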
The uncomfortable truth: This infrastructure is being built for geopolitical reasons (diversifying from oil, national prestige), not economic reasons (where demand actually is).
Historical comparison:
Railways: Built where people and goods needed to move. Economic logic determined placement.
Fiber: Laid where internet traffic flowed. Demand determined routes.
AI infrastructure: Being built where governments want it, not where customers are.
The risk: Hundreds of billions in infrastructure built in geopolitically motivated locations (Saudi, UAE, China) that never achieves economic utilization.
This is stranding risk on a geographic scale.
🇪🇺 Europe: Is Caution Actually Wisdom?
Europe’s approach to AI is often mocked: heavy regulation, modest investment, falling behind.
The numbers are stark:
Europe’s private AI investment in 2025: $8 billion
US private AI investment: $109 billion
EU’s share of global AI supercomputing capacity: ~5%
But what if Europe’s caution is actually rational?
🧩 The Case for European Skepticism
Argument 1: Unclear ROI
The biggest AI labs (OpenAI, Anthropic) are burning billions with no clear path to profitability. ChatGPT is offered free or cheaply. Enterprise adoption is slow. If the business models don’t work, why race to overspend?
Argument 2: Regulation as competitive advantage
Europe’s AI Act creates a “trustworthy AI” brand. If consumers and enterprises value privacy, transparency, and safety, European AI could command a premium—even if it’s technically inferior.
Argument 3: Avoiding stranded assets
If algorithmic efficiency improves 10×, or if AGI doesn’t materialize, massive GPU investments become stranded assets. Europe is avoiding that risk.
Argument 4: Talent and institutions matter more than hardware
Europe has world-class universities (Oxford, Cambridge, ETH Zurich, EPFL). It produces top AI researchers. Many work in the US, but they could be attracted back with the right incentives. Talent is more important than GPUs.
🚀 The Case Against: Europe Is Missing the Wave
Argument 1: First-mover advantages
AI platforms have network effects. If US companies establish dominance (Microsoft Copilot, Google Workspace AI), European companies will be locked out.
Argument 2: Regulation as handicap
The AI Act imposes compliance costs that US and Chinese companies don’t face. This makes European AI companies less competitive.
Argument 3: Brain drain
European AI talent is already leaving for the US (higher salaries, better resources). If Europe doesn’t invest, the drain accelerates.
Argument 4: Strategic dependence
If Europe doesn’t build its own AI infrastructure, it becomes dependent on US cloud providers. That’s a geopolitical vulnerability.
🤝 The Verdict: Europe Is Hedging, Not Competing
Europe isn’t trying to win the AI race. It’s trying to avoid losing catastrophically.
The strategy:
Invest enough to maintain some capability (€200 billion commitment)
Regulate proactively to shape global norms
Partner with US companies for access to cutting-edge models
Focus on applications rather than infrastructure
Is this rational? Probably yes, given Europe’s constraints (limited capital, fragmented markets, regulatory culture).
Will it work? Depends on whether AI becomes winner-takes-all or whether there’s room for regional players.
🧭 The Fundamental Question: Does Sovereign AI Make Sense?
Let’s step back and interrogate the entire premise of the geopolitical compute race.
The sovereign AI thesis assumes:
AI capability depends primarily on compute capacity
Compute capacity requires owning physical infrastructure
Owning infrastructure confers strategic advantage
The advantage is durable and defensible
But what if these assumptions are wrong?
Challenge 1: Open Source Models Render Sovereign Compute Irrelevant
If Meta’s LLaMA, Alibaba’s Qwen, and Mistral’s models are “good enough” and freely available, why does owning compute matter?
Answer: You still need compute to run inference at scale. But you don’t need to train your own models. This dramatically reduces the compute requirement.
Implication: Sovereign AI becomes less about training frontier models and more about running inference for local applications. That’s a much smaller infrastructure requirement.
Challenge 2: Compute Could Commoditize Rapidly
If algorithmic efficiency improves 10× (as it has historically), then today’s GPU investments become obsolete.
Example: DeepSeek’s models achieve GPT-4-level performance with far less compute. If this trend continues, the “compute arms race” becomes pointless.
Implication: Nations investing heavily in current-gen hardware might be buying stranded assets.
Challenge 3: Talent and Institutions Matter Far More Than Hardware
Switzerland has no sovereign AI infrastructure. But it has ETH Zurich, EPFL, and a thriving AI research community. Swiss researchers contribute to cutting-edge AI, even without domestic GPU clusters.
Israel has no hyperscale data centers. But it has Unit 8200 (military AI), top universities, and a startup ecosystem. Israeli AI companies punch far above their weight.
Implication: Owning GPUs ≠ AI capability. Talent, institutions, and culture matter more.
Challenge 4: Cloud Access Undermines Sovereign Infrastructure
If you can rent compute from AWS, Azure, or GCP, why build your own data centers?
Answer: Data sovereignty, latency, strategic autonomy.
Counterargument: But most applications don’t require data sovereignty. And for those that do, hybrid cloud (some on-premise, some cloud) is more cost-effective than building everything yourself.
Implication: The business case for sovereign AI infrastructure is weak except for specific use cases (military, intelligence, critical infrastructure).
📊 Key Variable to Watch
GPU gray market prices: If Chinese buyers pay premiums above official prices, export controls are failing.
How to track:
Monitor H100 prices in Singapore, Malaysia, UAE
Watch for transshipment patterns
Track Chinese firms’ compute capacity (if growing despite controls, workarounds are working)
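One way to operationalize this signal is a simple premium index over the official list price. A minimal sketch, where the list price and hub spot prices are hypothetical placeholders, not real market data:

```python
OFFICIAL_H100_PRICE = 30_000  # assumed official list price in USD (placeholder)

# Hypothetical spot prices observed in transshipment hubs (placeholders).
observed = {
    "Singapore": 38_000,
    "Malaysia": 41_000,
    "UAE": 36_500,
}

def premium(spot: float, official: float = OFFICIAL_H100_PRICE) -> float:
    """Fractional premium of a gray-market spot price over the official price."""
    return spot / official - 1

for hub, price in observed.items():
    print(f"{hub}: {premium(price):+.0%}")
# Sustained positive premiums across hubs suggest demand is leaking around the controls.
```

A zero or negative premium would be the bearish signal for the thesis here: it would mean restricted buyers are not bidding up supply in the workaround channels.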
✅ Key Takeaways
1. Export controls are failing: Chinese firms are acquiring 1M+ Nvidia chips via gray markets, and the costs exceed the benefits. Recommendation: drop them and focus on talent.
2. Buying chips ≠ AI capability: Gulf states are becoming infrastructure providers and customers, not AI superpowers. Talent and institutions matter more than hardware.
3. Europe’s caution might be rational: Avoiding stranded assets and focusing on regulation could be smarter than racing to overspend.
4. The sovereign AI thesis is questionable: Open source, commoditization, and cloud access undermine the case for owning physical infrastructure.
5. Geographic stranding is real: Hundreds of billions being built in geopolitically motivated locations, not where demand is.
This is Part 2 of a 5-part series on the Great AI Infrastructure Build-Out of 2025.
Coming up:
Thank you for reading. If you liked it, share it with your friends, colleagues, and everyone interested in the startup investor ecosystem.
If you've got suggestions, an article, research, your tech stack, or a job listing you want featured, just let me know! I'm keen to include it in the upcoming edition.
Please let me know what you think of it, love a feedback loop 🙏🏼
Subscribe below and follow me on LinkedIn or Twitter to never miss an update.
For the ❤️ of startups
✌🏼 & 💙
Derek