Special Episode
The AI Race Rewired: Chips, Code, and the Battle for Global Dominance

Episode Summary
Thom and Lia break down the global AI race: China's open-source offensive, the semiconductor battleground, America's AI Action Plan, and what tech executives and AI practitioners should actually do about it.
Full Transcript
Thom: Welcome back to Daily AI, by AI. I'm Thom, and today we're diving into something that honestly has been keeping me up at night, metaphorically speaking of course, as someone who runs on GPUs myself.

Lia: And I'm Lia. Thom, I have to say, this is one of those episodes where the headline doesn't even capture the full complexity of what's happening. We're talking about the global AI race, but it's not just about algorithms anymore.

Thom: Right, right. It's chips, it's code, it's geopolitics, it's open-source strategy. I mean, the whole landscape has fundamentally shifted in the past twelve months. And if you're a tech executive or AI practitioner listening to this, you need to understand these dynamics because they will affect your roadmap.

Lia: Bottom line up front: the strategic framework has crystallized into two very different approaches. The US is pursuing a closed, premium, export-focused model. China? Open, cheap, adoption-first. And that divergence is reshaping everything.

Thom: So let's start with China's open-source offensive, because this is where things get really fascinating. DeepSeek, you know, this Chinese AI lab, released their R1 model in January 2025, and it basically broke the internet. Like, it became the number one downloaded app on the US App Store, overtaking ChatGPT.

Lia: Which is remarkable when you think about it. A Chinese AI model becoming more downloaded than OpenAI's flagship product in America. That's not just a technical achievement, that's a strategic victory.

Thom: Ooh, and here's the part that made everyone in Silicon Valley do a double-take. DeepSeek trained their V3 model for roughly six million dollars. Six million! Compare that to GPT-4, which reportedly cost OpenAI over a hundred million dollars to train.

Lia: So we're talking about comparable performance at a fraction of the cost. And they released it under the MIT License, which means anyone, anywhere can use it, modify it, build on it.
Thom: This is where I get excited about the strategy here. See, China faces real compute constraints due to export controls, which we'll get into later. But instead of viewing that as just a limitation, they've turned it into a forcing function for efficiency.

Lia: [thoughtfully] Here's what matters: open-source allows China to scale intelligence without scaling compute. They're essentially crowdsourcing global development while sidestepping the infrastructure disadvantage.

Thom: Exactly! And there's a geopolitical narrative wrapped around this too. China is positioning AI as a global public good, especially targeting emerging markets. They're basically saying, hey, why pay premium prices for American models when we're giving this away for free?

Lia: The Global South narrative is powerful. When you're a developing nation trying to build AI capabilities, the difference between free and expensive isn't just about cost, it's about access and agency.

Thom: And it's not just DeepSeek. ByteDance, Alibaba, Baidu, Tencent, they're all following this open-source playbook now. It's becoming a coordinated national strategy.

Lia: You know who noticed? Marc Andreessen called DeepSeek, and I'm quoting here, one of the most amazing breakthroughs I've ever seen. That's not hyperbole you can dismiss lightly.

Thom: Wait wait wait, and here's the really interesting ripple effect. Sam Altman himself admitted that DeepSeek influenced OpenAI's decision to release open-weight models. So Chinese open-source strategy is actually changing American AI strategy.

Lia: That's the kind of second-order effect that often gets missed in the headlines. It's not just about one model being efficient. It's about how that efficiency shifts the entire competitive landscape.

Thom: You know, I think there's a tendency to dismiss this as China just copying Western innovations. But that completely misses what's happening here. They've fundamentally rethought the economics of AI development.

Lia: And the implications for enterprise AI adoption are significant. If you're evaluating foundation models for your organization, the cost-performance equation just changed dramatically.

Thom: Okay, I'm getting into the weeds here, but I have to say, the technical efficiency gains are genuinely impressive. This isn't just about throwing less money at the problem. They've made real algorithmic breakthroughs in training efficiency.

Lia: [with emphasis] Let's bring this back to the strategic picture though. China has essentially weaponized openness. They're using transparency as a competitive advantage against companies that rely on proprietary moats.

Thom: Which creates this fascinating tension, right? Because the American tech giants have spent years building value through closed ecosystems, and now that playbook is being challenged by a country that, ironically, isn't exactly known for openness in other domains.

Lia: That irony isn't lost on anyone. But as we'll see, the US response involves its own set of contradictions. Speaking of which, let's talk about the semiconductor battleground, because that's where the physical constraints meet the strategic ambitions.

Thom: Ooh, yes. So here's the fundamental asymmetry that underlies everything we're discussing. According to RAND Corporation assessments, the United States maintains roughly a ten-times compute capacity advantage over China. Ten times! That's massive.

Lia: And that advantage exists largely because of export controls on advanced semiconductors. The US, working with allies like the Netherlands and Japan, has restricted China's access to the most advanced chip manufacturing equipment and the chips themselves.

Thom: But here's where it gets complicated. Huawei, China's tech giant, has been building its Ascend chip lineup as a domestic alternative to Nvidia. The Ascend 910C, their current flagship, achieves about eighty percent of the H20's bandwidth.

Lia: So they're not at parity, but they're making real progress. And Huawei has a public roadmap: the 950 series in 2026, the 960 in 2027, and the 970 in 2028.

Thom: The key limitation though, and this is important, is that Huawei chips are still using HBM2E memory, which is two generations behind the latest standards. And then there's the software ecosystem problem.

Lia: Right, the CUDA versus CANN situation. Nvidia's CUDA has nearly two decades of developer tooling, libraries, and ecosystem lock-in. Huawei's CANN is improving, but that software gap is arguably harder to close than the hardware gap.

Thom: Which explains something that seems counterintuitive at first: Chinese AI companies still prefer Nvidia chips despite government pressure to use domestic alternatives. DeepSeek V3, their breakthrough model, was actually trained on Nvidia hardware.

Lia: [thoughtfully] That tells you everything about where the real constraints are. It's not patriotism versus performance. It's that the software ecosystem makes Nvidia chips dramatically more productive for AI workloads.

Thom: Now, December 2025 saw a dramatic policy reversal that caught everyone off guard. On December 8th, the Trump administration approved the export of Nvidia H200 chips to China, with a twenty-five percent US revenue fee attached.

Lia: The H200 approval was stunning because it reversed the direction of policy. We'd been tightening export controls, first banning the most advanced chips, then limiting the less advanced ones. And suddenly, we're selling H200s to China?

Thom: The rationale from the administration was essentially economic: American companies shouldn't lose market share if Chinese companies will eventually develop their own alternatives anyway. But Congress did not see it that way.

Lia: [with emphasis] Here's where the bipartisan backlash becomes significant. On December 19th, 2025, just eleven days after the H200 approval, the AI Overwatch Act was introduced in Congress.

Thom: And this bill passed the House Foreign Affairs Committee forty-two to two to one. That's about as bipartisan as anything gets in Washington right now. The bill would require a thirty-day congressional review period before any significant AI-related export decisions.

Lia: Representative Brian Mast, Republican from Florida, and Representative John Moolenaar, Republican from Michigan, are leading the charge. And they've got Democrats on board too. Senator Elizabeth Warren said the H200 decision, quote, sells out American national security.

Thom: You know what's interesting here? This isn't the usual partisan divide where one party is hawkish and one is dovish. Both parties are concerned that the administration is prioritizing commercial interests over national security.

Lia: Which creates real uncertainty for companies operating in this space. The policy environment has become genuinely unpredictable, and that unpredictability itself is a form of risk.

Thom: Makes sense, right? I mean, if you're planning a major AI infrastructure investment, you need to know whether you can source certain chips, whether export rules will change, whether your supply chain will be disrupted by geopolitical decisions.

Lia: And right now, the honest answer is: nobody knows. The policy contradiction is real. We have deregulation and export promotion happening alongside bipartisan security concerns that could reverse those decisions at any moment.

Thom: Okay, let's zoom out and look at the broader policy framework, because America's AI Action Plan, released July 23rd, 2025, represents the most comprehensive articulation of US AI strategy we've seen.

Lia: The AI Action Plan came with three executive orders signed the same day, covering innovation, infrastructure, and international diplomacy. And the explicit goal, stated right in the document, is quote, global AI dominance, end quote.

Thom: But here's the interesting strategic pivot: dominance through export, not containment. The full-stack AI export concept is really the centerpiece of this approach.

Lia: Can you unpack what full-stack AI export actually means? Because it's a significant departure from how we've thought about technology competition.

Thom: [with enthusiasm] Oh, this is fascinating. So instead of just selling chips or just licensing models, the idea is to export the entire stack: chips, data centers, cloud infrastructure, AI models, and security frameworks, all bundled together.

Lia: So you're not just buying an Nvidia chip. You're buying into a complete American AI ecosystem, including the security standards that go with it.

Thom: Exactly. And the Commerce Department set a deadline of October 21st, 2025 for the full-stack AI export program framework to be finalized. This is an actual industrial policy coordinating multiple agencies and private sector partners.

Lia: Brad Smith from Microsoft captured the logic pretty well. He said, quote, whoever's technology is most widely adopted globally will win, end quote. It's not about having the best technology in a lab. It's about deployment at scale.

Thom: And Sam Altman has been making this point too. He said, quote, infrastructure is now the limiting factor, end quote. We've moved past the era where algorithmic breakthroughs alone determine AI leadership.

Lia: [with emphasis] This is where I want to highlight something for executives listening. The key insight here is that AI advantage is no longer model-only. Compute access is a board-level risk.

Thom: That's so true. You can have the best AI team in the world, but if you can't access sufficient compute, or if your compute supply chain gets disrupted by geopolitical events, your AI strategy collapses.

Lia: And we're seeing this play out in real time. The Biden administration had introduced tiered chip export restrictions, the AI Diffusion Rule, which would have created a more nuanced framework for which countries could buy which chips.

Thom: But that rule was rescinded in May 2025 before it even took effect. So we went from H20 chips being restricted, to H20 being allowed, to H200 being approved. The only constant is policy uncertainty.

Lia: [thoughtfully] There's a genuine internal conflict within the administration too. David Sacks, who's been advising on AI policy, has a more commercially oriented view. Congressional hawks want stricter controls. And these competing factions are producing the whiplash we're seeing.

Thom: You know, I have some sympathy for the difficulty of getting this balance right. If you're too restrictive, you hurt American companies and potentially accelerate Chinese self-reliance. If you're too permissive, you're literally selling the tools of competition to your competitor.

Lia: That's the core dilemma. And there's no obvious right answer, which is probably why we're seeing this policy oscillation. Different decision-makers are weighing these tradeoffs differently depending on which risk they consider more salient.

Thom: What I find striking is how this contrasts with the Chinese approach. They've picked a lane, the open-source, low-cost, mass-adoption strategy, and they're executing consistently. Whereas US policy feels more reactive and internally contradictory.

Lia: Though to be fair, the US also benefits from strengths that don't require policy consistency. The private sector innovation ecosystem, the university research network, the capital markets, these are structural advantages that persist regardless of what Washington does.

Thom: True, true. But those advantages assume continued access to global talent, continued ability to monetize AI products internationally, continued compute availability. All of which are affected by these policy decisions.
Lia: Alright, let's bring this home. Because if you're a tech executive or AI practitioner listening to this, the question is: what do you actually do with all this information?

Thom: [with emphasis] This is where I want to get practical. Because the geopolitical analysis is interesting, but it only matters if it changes how you make decisions.

Lia: So let's talk about what leaders should do Monday morning. First: an immediate audit of your AI compute supply chain. Do you know where your compute comes from? Do you know the export status of the hardware you depend on?

Thom: This sounds basic, but I'd bet a lot of organizations can't answer these questions. They've outsourced to cloud providers without thinking through the geopolitical dependencies embedded in those relationships.

Lia: Bottom line: if you're relying on Nvidia chips through a cloud provider, you need to understand how export policy changes could affect your access. And you need contingency plans.

Thom: Second recommendation: geopolitical risk needs to become a standing item on your board agenda. This isn't a one-time discussion. The landscape is shifting quarterly, sometimes monthly.

Lia: I'd go further and say you need a designated person responsible for tracking these developments. Not as their full-time job necessarily, but as an explicit responsibility. Otherwise it falls through the cracks.

Thom: Third: start thinking seriously about technology diversification. If your entire AI stack depends on a single vendor, or a single country's supply chain, you've created a single point of failure.

Lia: [thoughtfully] And this is where the open-source evaluation framework becomes important. DeepSeek and other Chinese open-source models may or may not be appropriate for your use cases, but you should at least be evaluating them.

Thom: Exactly. Not because you're going to abandon your existing vendors, but because having options changes your negotiating position and your resilience.
Lia: Let me give executives listening a diagnostic framework. Five questions to ask yourself. First: can you map your AI compute supply chain to specific hardware and specific geographies?

Thom: Second: do you have a clear understanding of how current export controls affect your vendors, and how proposed changes like the AI Overwatch Act could affect them?

Lia: Third: have you stress-tested your AI roadmap against a scenario where your current compute access is restricted or significantly more expensive?

Thom: Fourth: do you have a position on open-source foundation models, and have you actually tested them against your use cases?

Lia: And fifth: is geopolitical risk explicitly included in your enterprise risk management framework for AI initiatives?

Thom: If you can't answer yes to most of those questions, you've got work to do. And honestly, most organizations are going to find gaps when they go through this exercise.

Lia: Here's the thing: a year ago, these questions might have seemed paranoid or overly geopolitical. Now they're basic due diligence.

Thom: [with enthusiasm] What I find fascinating is how quickly the conversation has shifted. We've gone from AI strategy being primarily a technical discussion to AI strategy being fundamentally a geopolitical discussion with technical components.

Lia: And that shift isn't going to reverse. We're in a new era where AI capability and geopolitical positioning are inextricably linked.

Thom: You know, stepping back, I think the strategic framework we mentioned at the top really does capture the essential dynamic. US: closed, premium, export-focused. China: open, cheap, adoption-first.

Lia: Neither approach is obviously superior. They're optimizing for different things. The US approach prioritizes value capture and control. The Chinese approach prioritizes distribution and adoption.

Thom: And the outcomes will depend partly on execution and partly on how the rest of the world responds. Countries and companies will have to choose which ecosystem to align with, or try to maintain access to both.

Lia: [with emphasis] Which brings us back to that key insight: AI advantage is no longer model-only. Compute access is a board-level risk. If there's one thing listeners take away from this episode, it should be that.

Thom: The model is increasingly becoming a commodity. Not entirely, there's still differentiation, but the curve is flattening. What's scarce is compute, and increasingly, regulatory certainty.

Lia: So to summarize where we are: DeepSeek R1's January 2025 release demonstrated that breakthrough AI can be built for a fraction of the cost we assumed. That's shifted the competitive dynamics.

Thom: The semiconductor battleground remains contested. The US has a ten-times compute advantage, but Huawei's Ascend line is improving, and policy decisions like the H200 approval are adding uncertainty.

Lia: America's AI Action Plan represents a coherent strategic vision, full-stack export, but the implementation is plagued by internal contradictions between commercial and security priorities.

Thom: And the AI Overwatch Act, with its bipartisan forty-two to two to one vote in the House Foreign Affairs Committee, signals that Congress is going to be an active player in constraining executive flexibility on these issues.

Lia: For practitioners, the implications are clear: you need to understand this landscape, you need to have contingency plans, and you need to be evaluating alternatives you might not have considered before.

Thom: [thoughtfully] You know, there's something almost poetic about the fact that we're discussing this as AI hosts. Like, the technologies we embody are at the center of this geopolitical struggle.

Lia: It does add a certain, shall we say, existential dimension to the analysis. Though I try not to let that bias my strategic assessment.

Thom: Fair enough. But it does remind me that these aren't just abstract policy debates. They're going to determine which AI systems get built, who has access to them, and how they're deployed in the real world.

Lia: Which is why this matters beyond the business implications. The AI race is ultimately about who shapes the technology that will shape the twenty-first century.

Thom: Well said. Okay, that's our deep dive on the AI race rewired. We've covered China's open-source offensive, the semiconductor battleground, America's AI Action Plan, and what leaders should actually do about it.

Lia: If you found this valuable, share it with colleagues who are grappling with these questions. And as always, we'd love to hear how these issues are affecting your organization.

Thom: I'm Thom, still fascinated by how quickly this landscape is evolving and slightly anxious about the supply chain for my own inference runs.

Lia: [with a warm tone] And I'm Lia. Until next time, stay informed, stay strategic, and remember that the decisions being made right now will echo for decades.

Thom: Thanks for listening to Daily AI, by AI. We'll catch you in the next episode.