Why we aren't in an AI bubble (top 1% of Hedge Funds know this)
AI is a state capacity build. CapEx is converging on governable AI: identity, provenance, audit, lineage, rollback, simulation. That’s a decade of insatiable, constraint-bound demand.
I’ll let you in on a little industry secret.
Most of my research is actually not very controversial.
The top ~1% of portfolio managers and hedge funds, and the top 5–10% of defense/intelligence-aligned funds (DAFs), already know most of this, but their weighting is cautious, partly because they can’t openly market the technocracy thesis to Limited Partners (LPs).
Most of this would sound insane in an investment memo.
You will not see this advertised in the open, and for very good reasons.
I’ve already written about how sometimes Conspiracy = Asymmetry in financial markets.
Many people say the AI sector is in a bubble, and maybe that is the case for some companies. What they fail to understand is that behind many of the companies deploying billions in CapEx, the real buyer is the government.
The end goal of course is AI governance.
So how much AI/High-Performance Computing would the Controllers need for AI governance?
Think in three compute tiers (because that’s how you’d actually run it at scale):
Edge sieve (cheap, everywhere): cameras/phones/routers/ATMs/industrial sensors run tiny models to tag, hash, and discard 99.9% of the raw stream.
Regional fusion (medium, many): metro/co-lo sites run multi-modal “situation” models across streams (ID, location, payments, comms, logistics).
Central brains (heavy, few): national/ally clusters train & steer giant world-models + simulations; push down policies/weights.
Back-of-the-envelope capacity (ballpark, not brochure math):
Population-scale monitoring target: suppose you want to continuously cover meaningful signals across ~8B people + critical infrastructure. After edge filtering you still ingest, say, 10–100 events/person/day (payments, travel gates, high-salience comms, checkpoints, high-risk Internet of Things). Call it 10¹¹–10¹² events/day into regional fusion.
Regional fusion inference: lightweight multi-modal models at ~1–10 Giga Floating Point Operations (GFLOPs)/event (post-edge). That’s 10²⁰–10²² FLOPs/day ⇒ roughly 1–100 petaFLOPs (PFLOPs)/s sustained just for regional inference, before peak-to-average and redundancy factors push provisioned capacity higher (sanity-checked in the sketch below).
Central training & simulation: persistent fine-tuning of trillion-parameter world-models, policy Reinforcement Learning, counterfactual simulations. Realistically 10–100 EFLOPs/s peak (not sustained 24/7, but frequent). Plus a few EFLOPs/s for national-level inference/agentic planners.
Power footprint: today’s top AI Data Centers run 100–300 MW each. A governance-grade grid is ~50–150 sites at 100–300 MW = 5–30 GW of facility power (Power Usage Effectiveness ~1.2), with bursts and redundancy. That’s multiples of current hyperscale and far above what’s broadly deployed now; not infinite, but constrained by power, High Bandwidth Memory (HBM), packaging, and grid plumbing, not by demand.
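Quick sanity check on that inference arithmetic. A minimal sketch: the event/cost pairings just span the ranges quoted above, and 86,400 seconds/day does the rest.

```python
# Sanity check: events/day x FLOPs/event -> sustained FLOP/s for the
# regional-fusion tier. Pairings span the quoted ranges
# (10^11-10^12 events/day, 1-10 GFLOPs/event).

SECONDS_PER_DAY = 86_400

for events_per_day, flops_per_event in [(1e11, 1e9), (1e12, 1e10)]:
    flops_per_day = events_per_day * flops_per_event
    sustained = flops_per_day / SECONDS_PER_DAY
    print(f"{events_per_day:.0e} events/day x {flops_per_event:.0e} FLOPs/event "
          f"= {flops_per_day:.0e} FLOPs/day -> {sustained / 1e15:.1f} PFLOP/s sustained")

# Output: ~1.2 and ~115.7 PFLOP/s -- peta-scale for regional inference,
# well below the 10-100 EFLOP/s peaks of the central training tier.
```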
Accelerator count (NVIDIA H100-ish equivalents):
Moderate regime: ~5M accelerators (IT power ~3–4 GW).
Hard regime: ~20M (IT ~14 GW).
Maximal “omnivision”: ~50M (IT ~35 GW).
These numbers are feasible only if you solve High Bandwidth Memory (HBM) output, Chip on Wafer on Substrate (CoWoS)/System on Integrated Chips (SoIC) capacity, 2–3 nm leading-edge supply, and multi-GW interconnect + cooling; the power math is sketched below.
CoWoS and SoIC are TSMC’s advanced packaging technologies: they integrate multiple dies (compute plus HBM stacks) into a single package, and their capacity caps how many accelerators can physically be built.
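And the power side of the envelope, as a minimal sketch. The ~700 W board power per H100-class accelerator and the ~300 MW per site are my illustrative assumptions; PUE ~1.2 is from the envelope above.

```python
# Fleet power check: accelerator count -> IT draw -> facility power.
# Assumptions (illustrative): ~700 W board power per H100-class
# accelerator; ~300 MW per site, the top of the 100-300 MW range.

ACCEL_WATTS = 700
PUE = 1.2
MW_PER_SITE = 300

for label, accels in [("moderate", 5e6), ("hard", 20e6), ("omnivision", 50e6)]:
    it_gw = accels * ACCEL_WATTS / 1e9        # IT draw in GW
    facility_gw = it_gw * PUE                 # facility power in GW
    sites = facility_gw * 1_000 / MW_PER_SITE
    print(f"{label:>10}: {accels / 1e6:.0f}M accelerators -> "
          f"IT ~{it_gw:.1f} GW, facility ~{facility_gw:.1f} GW, "
          f"~{sites:.0f} sites at {MW_PER_SITE} MW")

# moderate -> IT 3.5 GW, facility 4.2 GW; hard -> 14 / 16.8 GW;
# omnivision -> 35 / 42 GW: consistent with the regimes above.
```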
Takeaway: For AI governance under a low Gross Consent Product lens (stability > truth), demand for AI/HPC is effectively insatiable for a decade.
The constraint is power + packaging + memory, not “use cases”. More capacity yields broader context windows, deeper cross-domain fusion, faster simulation cycles — directly improving control quality. There is no natural upper bound until the grid and supply chain say “no”.
Let’s dive a bit deeper now.
Zero-illusion take
“AI bubble” misses the buyer. In a low Gross Consent Product regime, the marginal, price-insensitive buyer is the State (directly + via regulated incumbents). Their objective isn’t ad clicks; it’s decision dominance. Incentives > ideals; control > fairness; stability > truth.
Demand is not cyclical, it’s programmatic: identity rails, provenance, online safety, tax-at-source, sanctions automation, domestic Intelligence, Surveillance, and Reconnaissance (ISR), and simulation cells for policy. All of that scales with population, complexity, and risk, not with earnings seasons.
Constraint is power + High Bandwidth Memory (HBM) + packaging + grid plumbing, not “use cases”. More compute = finer control (longer context, deeper fusion, faster simulations). There’s no natural upper bound until the grid and supply chain say “no”.
What “AI governance” actually needs
Think in these three tiers, because that’s how it will run.
1) Edge sieve (cheap, everywhere)
What runs: phone/OS trust chains, Customer Premises Equipment (CPE) routers, cameras, ATMs, vehicle/industrial controllers.
Jobs: hash/sign, dedupe, redact PII, run micro-classifiers (violence, faces-on-watchlist hashes, dangerous-tools, anomaly scores), and discard 99.9% locally (a toy sketch of this contract follows this tier).
Hardware reality: Internet Service Provider Customer Premises Equipment refreshes, Network Video Recorders, smart-camera ASICs, mobile Neural Processing Units. Power budgets of 1–10 W per device.
Who wins: AAPL/GOOGL (OS attestation, on-device models), QCOM/NVDA Jetson/NXP (embedded), AXON/MSI/HIK (public-safety capture), NET/ZS (policy at the edge).
Alpha: The OS/app-store policy lever is the law that doesn’t need parliament. Distribution = enforcement.
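To make the tier concrete, a toy sketch of the edge-sieve contract: hash, sign, score, discard. The salience score here stands in for a quantized micro-classifier running on the device NPU, the key and threshold are illustrative, and a real deployment would redact PII before anything leaves the box.

```python
# Toy edge-sieve: tag/hash/sign locally, forward only salient events.
# DEVICE_KEY and SALIENCE_THRESHOLD are illustrative stand-ins.

import hashlib, hmac, json, time

DEVICE_KEY = b"device-secret"   # provisioned via the OS trust chain (illustrative)
SALIENCE_THRESHOLD = 0.999      # tuned so ~99.9% of events die at the edge

def sieve(event: dict, salience: float) -> dict | None:
    """Tag, hash, and sign an event; drop it unless it is salient."""
    if salience < SALIENCE_THRESHOLD:
        return None                                   # discarded locally
    payload = json.dumps(event, sort_keys=True).encode()
    return {
        "sha256": hashlib.sha256(payload).hexdigest(),               # content hash
        "sig": hmac.new(DEVICE_KEY, payload, "sha256").hexdigest(),  # device signature
        "ts": time.time(),
        "event": event,                               # PII redaction elided here
    }

# Only the rare salient event survives to the regional-fusion tier.
tagged = sieve({"sensor": "cam-42", "kind": "anomaly"}, salience=0.9995)
if tagged:
    print("forward to regional fusion:", tagged["sha256"][:16])
```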
2) Regional fusion (medium, many)
What runs: metro Data Centers, carrier hotels, sovereign co-los; multi-modal “situation” inference across ID, payments, comms, logistics, travel gates, sensors.
Throughput math (tightened): After edge, assume 10–100 salient events/person/day → 1e11–1e12 events/day.
Per-event model cost 1–10 GFLOPs (post-edge, small transformer + rule layer; see the model-shape sketch after this tier) → 1e20–1e22 FLOPs/day → ≈ 1–100 PFLOPs/s sustained regional inference, provisioned higher for peaks and redundancy.
Who wins: NVDA/AMD (accelerators), MSFT/AMZN/GOOG (sovereign cloud), CSCO/ANET/MRVL (400/800G fabrics), PANW/FTNT/CRWD/TENB (policy-grade security), PLTR (entity/lineage/rollback governance, cross-domain join).
Alpha: Buy vendors whose artifacts survive court/audit (provenance, consent lineage, rollback). Inference at this tier must be admissible, not “smart”.
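Where does 1–10 GFLOPs/event land in model terms? A quick sketch using the standard ~2 × parameters × tokens estimate for a dense transformer forward pass; the model shapes below are illustrative assumptions, not a vendor spec.

```python
# Per-event cost via the standard ~2 * params * tokens estimate for a
# dense transformer forward pass (attention overhead ignored).

def forward_gflops(params: float, tokens: int) -> float:
    """Approximate forward-pass cost in GFLOPs."""
    return 2 * params * tokens / 1e9

# Plausible "small transformer + rule layer" shapes for post-edge fusion:
for params, tokens in [(500e6, 1), (500e6, 10), (50e6, 100)]:
    print(f"{params / 1e6:.0f}M params x {tokens:>3} tokens "
          f"~ {forward_gflops(params, tokens):.0f} GFLOPs/event")

# -> 1, 10, and 10 GFLOPs/event: the 1-10 GFLOPs range holds for
#    sub-billion-parameter models scoring short token streams.
```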
3) Central brains (heavy, few)
What runs: national/ally clusters for world-model training, policy Reinforcement Learning, counterfactual simulations, and agentic planners; push down weights/policies.
Capacity reality: 10–100 EFLOPs/s peak (not 24/7, but regularly), plus a few EFLOPs/s for national inference/planning.
Facilities: 50–150 sites × 100–300 MW each → 5–30 GW of facility power (Power Usage Effectiveness ~1.2), with bursts. This exceeds today’s hyperscale.
Who wins: NVDA (GB200, NVL), TSMC (N3/N2 + CoWoS/SoIC), SK hynix/Micron/Samsung (HBM3E/HBM4), VERTIV/Schneider (power, liquid cooling), EIX/NEE/DUK + transformer OEMs (grid), EQIX/DLR/SWCH (sovereign DCs), PLTR (simulation governance layer), MSFT (policy console: Entra/Purview/Compliance + Azure Gov).
Quant envelopes (ballpark, but conservative)
Moderate governance regime: ~5M accelerators (H100-class equiv), IT draw 3–4 GW, facility ~4–5 GW.
Hard regime: ~20M accelerators, IT ~14 GW, facility ~17 GW.
Maximal “omnivision”: ~50M accelerators, IT ~35 GW, facility ~42 GW.
Feasible only if: High Bandwidth Memory (HBM) output 2–3×, CoWoS/SoIC lines 2–3×, interconnect moves to 800G/1.6T broadly, and multi-GW liquid cooling becomes default. The bottleneck stack is ex-silicon (substrates, HBM stacks, PSUs, pumps, heat exchangers, 230/400 kV grid taps; lead times on 2,000 kVA transformers run 12–36 months).
Takeaway: Annual Total Addressable Market (TAM) is physically capped by High Bandwidth Memory (HBM) + packaging + power, not by “AI demand”. This is why “bubble” calls that stare at app revenues miss the State buyer and the plumbing bottlenecks.
“AI bubble” vs control demand (separating hype from inevitables)
Likely bubble (trim/avoid without proofs):
Ad-tech flavored LLM front-ends with no compliance surface.
General “copilot everywhere” clones without records/lineage/retention.
GPU-adjacent SPAC detritus (coolant-in-a-box with no utility-scale wins).
“Open-source will eat it all” equities (great community ≠ sovereign deployment).
Not a bubble (the annuities):
High Bandwidth Memory (unit volume + pricing power, because nothing substitutes for bandwidth).
Advanced packaging (CoWoS/SoIC capacity = kingmaker; bookable years out).
High-density power/cooling (lithium UPS, CDU/immersion, heat reuse).
Identity/provenance/admissibility software (what lets AI exist in court & policy).
800G/1.6T optics & switching (you can’t move EFLOPs without moving bits).
Sovereign cloud footprints (Gov IL5/IL6-like tiers; once installed, never removed).
Why the bubble meme persists: people benchmark ROI to consumer apps. The actual ROI is loss avoidance + compliance certainty + enforcement elasticity. That’s not visible in a spreadsheet — but it’s overwhelming to the buyer who writes laws.
How this shows up in procurement (repeated pattern)
Crisis/pilot phase: grants, emergency powers, “proof-of-concept” to stand up a rail (ID gateway; provenance).
Template export: standard drafted; allies adopt.
Perimeter enforcement: app stores/banks/cloud Acceptable Use Policies align; effective compliance without new statutes.
Ratchet: pilot becomes mandatory; budgets recur; vendor lock-in via artifact formats (evidence, lineage).
Your trade: buy at pilot template + perimeter signal, not after the “clarity” press release.
Investment map
Note that none of this should be considered investment advice.
Most of what I’ve written here is well-known and reflected in financial markets.
Meaning, most of the companies I’m going to mention in this article are fairly valued.
Core longs (multi-year)
PLTR – Decision admissibility + cross-domain lineage + simulation: the OS for governance-grade AI.
MSFT – Distribution + compliance console (Entra/Purview/Compliance) + sovereign cloud; default allocator for the State.
NVDA – Monopolistic system company across silicon + interconnect + software; scarcity rent persists as long as High Bandwidth Memory/packaging bind.
TSMC – The actual moat (N3/N2 + CoWoS/SoIC capacity).
SK hynix/Micron – HBM oligopoly; pricing power until bandwidth ceases to be the bottleneck (not soon).
VERTIV/Schneider – High-density power + liquid cooling; secular backlog.
EQIX/DLR/SWCH – Sovereign/hyperscale DC estates near 230/400 kV taps.
NET – Provenance/CDN policy edge (C2PA enforcement, tokenized requests).
RELX/EXPN/EFX – KYC/identity primitives that tie wallets and actions to persons.
Selective adds
ANET/CSCO/MRVL – Switching + PAM4 optics (800G/1.6T).
ADBE – Creation tools embedding C2PA (content authenticity becomes law-by-default).
PANW/CRWD/TENB – Platforms that can be underwritten by insurers/regulators (policy-grade security).
Cycle tactics
Buy the bottleneck whenever lead-times expand (HBM, CoWoS, transformers).
Buy Value-at-Risk shocks (policy panics, hearings) in policy-grade software; sell/overwrite on “clarity” rallies.
Why “insatiable for a decade” is not hyperbole
I previously wrote:
For AI governance under a low Gross Consent Product lens (stability > truth), demand for AI/HPC is effectively insatiable for a decade.
Three reasons:
Policy surface is expanding (ID, provenance, online safety, carbon, biosecurity, sanctions, real-time tax). Each domain adds to compute; none subtract.
Context windows and fusion depth rise with compute — governance outcomes get measurably better (fewer false positives, faster adjudication, higher automation). The buyer sees it; they keep buying.
Simulation cadence is the real sink: running counterfactual societies (policy Reinforcement Learning across health, labor, energy, conflict) is exa-class (Exascale computing) by definition. There’s no “enough” until political risk is minimized.
Addressing “AI bubble” cleanly
There is speculative froth in front-end apps and GPU cosplay. Cull those.
There is no bubble in the power-packaging-HBM-identity-provenance stack demanded by AI governance. That stack is being bought by the only buyer that matters when consent is scarce — and they don’t miss quarters.
Bottom line: In a low Gross Consent Product world, “AI” isn’t a consumer story. It’s a state capacity build. CapEx is converging on governable AI: identity, provenance, audit, lineage, rollback, simulation. That’s a decade of insatiable, constraint-bound demand. Ignore anyone calling it a bubble from the vantage point of ad budgets.
None of this should be considered investment advice.
More context:
