
US-China AI Race Spurs Investment in Chips

Fazen Capital Research
Key Takeaway

Bernstein (Mar 2026) says power and cooling—not just chips—will decide AI leadership; Nvidia topped $1T market cap in 2023 (Bloomberg). CNBC coverage: Mar 22, 2026.

Lead paragraph

The US-China AI race has shifted from algorithms to infrastructure: Bernstein's March 2026 research note—summarized by CNBC on March 22, 2026—argues that the decisive battleground will be who can deliver the scale of power, cooling and manufacturing to sustain next-generation AI compute. That thesis reframes investment priorities away from pure software winners toward capital-intensive hardware: GPUs and accelerators, power delivery and datacenter construction. Market signals already reflect this reallocation; Nvidia's ascent to a market capitalization in excess of $1 trillion in 2023 (Bloomberg) demonstrated how a chip vendor can command premium multiples when it is perceived as the bottleneck for AI compute. For institutional investors this creates a set of sectoral and cross-border trade-offs—between fab capacity, supply-chain security, sovereign policy risk, and the physical limits of electricity grids.

Context

Bernstein's March 2026 note, as covered by CNBC (Mar 22, 2026), emphasizes that AI leadership is not solely a matter of algorithmic superiority but of the ability to power AI training at scale. Historically, leadership in computing cycles has favored geographies with superior capital intensity and liberalized energy markets, a dynamic that benefited the United States through the 2010s and early 2020s. China has pursued a catch-up strategy through state-directed capital expenditure and domestic semiconductor policies, raising the stakes: policymakers in Beijing have targeted a substantial domestic semiconductor buildout since the mid-2010s, while Washington has enacted export controls and subsidies aimed at preserving an edge in advanced nodes.

The shift is visible in capex flows. Hyperscalers and cloud providers have announced multibillion-dollar builds and expansions in 2024–2026 to host AI clusters, and public disclosures from several large providers note single-campus power footprints in the 10–30 megawatt (MW) range for advanced AI deployments (company filings and engineering disclosures 2020–2025). Those footprints translate directly into grid stress and require long lead times for substations, transmission upgrades, and on-site cooling architecture. In capital markets, this complexity raises different risk premia—projects require regulatory coordination, long-dated permits, and coordination with utilities—factors that affect the cost of capital and time to revenue realization.

Bernstein’s message reframes who the winners are: not just the chip designers, but the ecosystem of power, packaging, and local supply. The implication is structural: even if a country achieves parity in model development, constraints in power delivery or advanced packaging will cap effective deployment. That constraint is measurable: projects without grid access can see lead times stretch to 18–36 months, versus greenfield sites with pre-approved utility agreements, introducing tangible schedule risk for compute rollouts (industry permitting studies, 2021–2024).

Data Deep Dive

Three hard anchors shape the data picture. First, CNBC published coverage of Bernstein’s thesis on March 22, 2026, bringing the commentary into public view at a moment when both markets and policymakers are actively responding (CNBC, Mar 22, 2026). Second, Nvidia’s market-cap milestone, surpassing $1 trillion in 2023, illustrates market recognition of hardware scarcity as a value driver (Bloomberg, 2023). Third, hyperscaler disclosures and engineering literature indicate that AI clusters commonly consume tens of megawatts per deployment, orders of magnitude more power than a traditional enterprise datacenter of comparable footprint; this is a practical limit that constrains scale unless mitigated by upgraded grids or on-site generation (public filings and engineering papers, 2020–2025).

A comparison clarifies magnitude: a 20 MW AI cluster running at high utilization can draw as much instantaneous power as roughly 15,000 average U.S. households. On a year-to-year basis, cloud capex tied to AI infrastructure grew materially from 2023 to 2025 in public filings of major providers—several firms disclosed increases in infrastructure capex of 15–40% YoY during that period as they accelerated AI deployments (company reports, 2024–2025). By contrast, incremental revenue from AI workloads is only gradually monetized through enterprise contracts and productivity software, creating a classic timing mismatch between upfront capital intensity and monetization.
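The household comparison above can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, assuming an average U.S. household draws roughly 1.25 kW on a continuous basis (an EIA-style figure that is an assumption here, not taken from this article):

```python
# Back-of-envelope: how many average U.S. households match a 20 MW AI cluster?
# Assumption (not from the article): an average U.S. household consumes about
# 10,500-11,000 kWh per year, i.e. roughly 1.25 kW of continuous average draw.

CLUSTER_MW = 20
HOUSEHOLD_AVG_KW = 1.25  # assumed continuous average draw per household

# Convert the cluster to kW and divide by per-household average draw
households = (CLUSTER_MW * 1000) / HOUSEHOLD_AVG_KW
print(f"A {CLUSTER_MW} MW cluster at full utilization ~= {households:,.0f} households")
```

With these assumed inputs the result lands near the roughly 15,000-household figure cited above; the exact number moves with the household-consumption assumption.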

Supply-chain metrics matter. Advanced packaging and wafer capacity cannot be scaled overnight: adding an advanced packaging line or a new fab has lead times measured in quarters to years and requires billion-dollar investments. That dynamic favors incumbents and countries that can underwrite long-horizon industrial policy, but it also creates arbitrage opportunities where regulatory friction or export controls materially shift sourcing and customer relationships.

Sector Implications

Semiconductor stocks will remain central, but winners are likely to be diversified across adjacent hardware and infrastructure providers. GPU and accelerator vendors capture value through design and software stacks; foundries and advanced packaging capture value through scarce manufacturing capacity; utilities and energy-storage providers capture value through the ability to turn grid constraints into sellable capacity. Investors should note the difference in revenue quality: a fab or packaging facility can command steady, contractually backed revenue, while utility upgrades and permitting are lumpy and exposed to local politics and environmental review timelines.

Geopolitical bifurcation is not binary but layered. In the U.S., subsidy programs—such as the CHIPS Act enacted in 2022—aim to shore up domestic production and were materially expanded in subsequent appropriations; these policy moves reduce one axis of risk for U.S.-based suppliers. China’s state-led capital deployments, conversely, can accelerate capacity buildout but introduce counterparty and regulatory opacity. Comparisons of capital intensity are instructive: a single advanced packaging line can cost several hundred million dollars and take 12–24 months to reach volume; a leading-edge fab can cost multiple billions and take 24–36 months, magnifying project financing risk for countries and firms that lack deep pockets or favorable policy support.

For investors, the sector implications translate into concentrated exposure risk: hardware winners will likely exhibit higher capital turnover and longer payback cycles than typical software winners. That dynamic affects valuation frameworks—DCF models must incorporate longer rollout horizons, higher capex-to-sales ratios and potentially lower early cash conversion, offset by pricing power if compute scarcity persists.
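To make that valuation point concrete, here is a minimal illustrative sketch. All cash-flow figures and the 10% discount rate are assumptions for illustration, not drawn from Bernstein or Fazen Capital; the sketch only shows how heavier upfront capex and a later revenue ramp depress NPV relative to a software-style profile at the same discount rate.

```python
# Illustrative only: assumed annual cash flows (year 1..7, in $ millions)
# for a capital-intensive hardware rollout vs a software-style business.

def npv(cashflows, rate):
    """Discount a list of annual cash flows (year 1..n) at a flat rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

rate = 0.10  # assumed cost of capital

# Hardware-style profile: heavy upfront capex, revenue ramps late
hardware = [-800, -400, 150, 400, 650, 800, 800]
# Software-style profile: light capex, faster cash conversion
software = [-100, 120, 220, 300, 340, 360, 360]

print(f"Hardware NPV: {npv(hardware, rate):,.0f}")
print(f"Software NPV: {npv(software, rate):,.0f}")
```

Even when both profiles are NPV-positive, the hardware profile's value is concentrated in late years, which is why longer rollout horizons and capex-to-sales assumptions dominate the valuation outcome.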

Risk Assessment

Physical and regulatory risks dominate. Physical risks include electricity grid limitations, water availability for cooling in some jurisdictions, and supply-chain chokepoints for critical materials used in semiconductors and advanced cooling systems. Regulatory risks include export controls, subsidy conditionality, and national security reviews that can disrupt long-term supplier-buyer relationships. The interplay of these risks means that two companies with similar technology can have divergent outcomes if one operates in a more supportive policy environment.

Market risks also exist: valuation multiples have compressed in cyclical hardware sectors when capex expectations outpace revenue growth. If AI monetization timelines stretch or adoption stalls, investors may see downward re-ratings in capital-intensive segments. Counterparty concentration is another vector—if a small number of hyperscalers account for a disproportionate share of demand for high-end accelerators, any slowdown in that demand will disproportionately affect upstream suppliers.

Operational execution should not be overlooked. Firms that succeed will be those that manage long supply chains, secure strategic raw materials, and deploy capital with disciplined timelines. Technical risks—such as advances in model efficiency that reduce compute per inference or training job—could materially alter demand forecasts for raw compute and thus change the investment case for capacity-focused plays.

Fazen Capital Perspective

Our view is contrarian to the simple ‘chip-only’ narrative. While chip IP and accelerator design will command outsized rents in the near term, the investment opportunity set that is underappreciated includes the intermediaries that solve the power and thermal problems: advanced packaging specialists, grid-scale energy-storage developers, modular datacenter builders, and specialist utilities that can sign long-term contracts. From a risk-adjusted perspective, owning a diversified exposure across design, foundry, and power solutions mitigates single-point failures tied to export controls or geopolitical decoupling.

We also believe the timeline for onshore capacity expansion will be longer and more expensive than typical consensus forecasts: adding sustainable, reliable power at scale requires both capital and protracted approvals. That creates a tactical window where companies that can offer incremental power or thermal management solutions—battery-in-the-loop systems, liquid cooling retrofit specialists, or rapid-build modular datacenters—should see sustained demand and potentially superior cash conversion in the medium term. For institutional portfolios, a staged allocation that privileges companies with contracted revenue, strong balance sheets, and tangible ties to utility and regulatory partners will likely outperform concentrated equity bets on single chip designers.

Outlook

Over the next 18–36 months, expect policy and capital to chase the bottlenecks identified by Bernstein and others: power, cooling, packaging and foundry capacity. Markets will price in these constraints unevenly—public equities will re-rate as new contracts and subsidies reduce execution risk, while private markets may see elevated multiples for companies that can demonstrate signed offtake or utility agreements. Cross-border investors must price in geopolitical tail risks and the probability of further export-control actions that segment markets and redirect supply chains.

A medium-term scenario to watch: if compute scarcity persists and monetization trajectories for AI workloads accelerate, valuations for hardware and infrastructure firms could rerate materially. Conversely, if model efficiency improvements reduce compute intensity or if grid upgrades outpace demand growth, the shortage premium could erode. Scenario analysis that models both infrastructure capex timelines and alternative AI efficiency trajectories will likely be the most informative tool for fiduciary decision-making.
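The scenario analysis described above can be sketched as a toy grid. All growth and efficiency parameters below are assumptions for illustration, not forecasts: the sketch compares effective compute demand (raw workload growth deflated by per-task efficiency gains) against supply growth to see in which scenarios a shortage premium would persist.

```python
# Toy scenario grid: does compute scarcity persist after a few years?
# All parameters are assumed for illustration, not forecasts.

def shortage_persists(demand_cagr, efficiency_gain, supply_cagr, years=3):
    """Return True if effective compute demand still exceeds supply after `years`.

    Effective demand = raw workload growth deflated by annual per-task
    efficiency gains; supply compounds at its own assumed rate.
    """
    demand, supply = 1.0, 1.0
    for _ in range(years):
        demand *= (1 + demand_cagr) * (1 - efficiency_gain)
        supply *= (1 + supply_cagr)
    return demand > supply

# Assumed grid: workload growth of 30%/50%, efficiency gains of 10%/30%,
# against 20% annual supply growth.
for d in (0.30, 0.50):
    for e in (0.10, 0.30):
        tight = shortage_persists(d, e, supply_cagr=0.20)
        print(f"demand {d:.0%}, efficiency {e:.0%} -> shortage persists: {tight}")
```

Under these assumptions, only the high-demand, low-efficiency-gain corner keeps the shortage premium intact, which is exactly the sensitivity the scenario framing is meant to surface.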

Bottom Line

Bernstein’s March 2026 thesis reframes the AI competition as a contest of infrastructure and energy as much as algorithms; investors should follow capex, policy and utility metrics as closely as software roadmaps. Companies that solve power, thermal and packaging constraints—across both private and public markets—represent differentiated exposure to this structural shift.

Disclaimer: This article is for informational purposes only and does not constitute investment advice.

FAQ

Q: How quickly can countries scale datacenter power capacity to support large AI clusters?

A: Scaling grid capacity typically takes 18–36 months from permitting to commissioning for large projects, depending on local regulatory regimes and utility readiness. Shorter timelines are possible with on-site generation and battery solutions, but those carry higher short-term costs and may require fuel or logistics support.

Q: Could improvements in model efficiency reduce demand for new hardware?

A: Yes. Algorithmic improvements that reduce compute per training run or enable model distillation can materially lower incremental hardware demand. That risk is real and is one reason to diversify across infrastructure providers and vendors with long-term contracts or differentiated technologies.

Q: Are there specific policy catalysts to watch?

A: Watch subsidy programs and export-control developments (e.g., CHIPS-style appropriations or additional export restrictions) and large-scale grid modernization bills. These trigger both funding availability and shifts in supply-chain routing that will affect time-to-market for capacity builds.

[See our related insights](https://fazencapital.com/insights/en) and [research on infrastructure exposures](https://fazencapital.com/insights/en) for deeper background.

