Jensen Huang In‑Depth Interview: Token Economy Surge, AI Computing’s Share Of GDP To Multiply One Hundredfold, NVIDIA’s $10 Trillion Valuation Inevitable

20:12 24/03/2026 · GMT Eight
NVIDIA CEO Jensen Huang stated that AI computing has shifted from “storage systems” to “factories” producing tokens, with buyers willing to pay USD 1,000 per million tokens, and predicted computing’s share of global GDP will expand 100‑fold.

NVIDIA Chief Executive Jensen Huang recently joined the Lex Fridman Podcast for an extended conversation addressing AI scaling laws, constraints on compute and power, the emergence of AI factories, the company’s strategic trajectory, and AI’s societal implications. Over more than two hours, he articulated a series of technical and economic observations that frame his outlook on the industry’s next phase.

Huang argued that the nature of computation has undergone a structural shift from passive storage and retrieval toward context‑aware generative systems. Where traditional computing functioned primarily as a repository, he described modern AI infrastructure as a production facility that directly contributes to revenue generation. In Huang's view, these AI "factories" produce a new tradable commodity, tokens, which are segmented and priced for different user tiers. He noted that some buyers are prepared to pay USD 1,000 per million tokens, and he characterized the transition as a reclassification of compute from a cost center into a profit center. Projecting this dynamic at scale, Huang asserted his conviction that computing's share of global GDP will expand one hundredfold as productivity accelerates.
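As a back‑of‑the‑envelope illustration of the "cost center to profit center" framing, the sketch below computes a factory's daily token output and margin. Only the USD 1,000‑per‑million‑token price point comes from the interview; the throughput and cost figures are hypothetical assumptions chosen for illustration.

```python
# Hypothetical token-economics sketch. Only the $1,000-per-million-token
# price point is cited in the article; all other figures are illustrative.

PRICE_PER_MILLION_TOKENS = 1_000.0   # USD, premium tier cited by Huang

def factory_margin(tokens_per_second: float,
                   cost_per_million_tokens: float) -> dict:
    """Estimate daily revenue and margin for an AI 'factory'."""
    tokens_per_day = tokens_per_second * 86_400        # seconds in a day
    revenue = tokens_per_day / 1e6 * PRICE_PER_MILLION_TOKENS
    cost = tokens_per_day / 1e6 * cost_per_million_tokens
    return {"tokens_per_day": tokens_per_day,
            "revenue_usd": revenue,
            "cost_usd": cost,
            "margin_usd": revenue - cost}

# Example: a hypothetical cluster emitting 50,000 tokens/s at a $50/M
# production cost. At the cited premium price the facility books revenue
# like a factory rather than accruing cost like a storage system.
result = factory_margin(50_000, 50.0)
```

The point of the arithmetic is directional rather than predictive: as long as the sale price per token exceeds the production cost per token, each additional watt of compute is revenue‑generating, which is the sense in which Huang calls compute a profit center.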

On the question of NVIDIA’s market capitalization trajectory, Huang treated the USD 10 trillion figure as a conceptual benchmark rather than a precise forecast, while expressing strong confidence in continued growth. He indicated that achieving revenue on the order of USD 3 trillion is within the realm of possibility under the token‑driven economic model.

Addressing power constraints, Huang acknowledged electricity as a significant concern but emphasized two complementary responses: sustained improvements in energy efficiency and more effective utilization of existing grid capacity. He proposed measuring efficiency in tokens per watt per second and described an engineering approach of extreme co‑design that drives token cost down by orders of magnitude year over year. To access additional power without overbuilding generation, he recommended contractual and operational changes that allow data centers to accept temporary reductions in supply. By designing facilities that can "gracefully degrade" performance, shifting critical loads or throttling compute rates when the grid requests reduced consumption, operators can exploit the substantial headroom that exists below the grid's peak design capacity.
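The graceful‑degradation idea can be sketched as a simple control rule: clamp the facility's draw to whatever the grid currently allows, and let throughput scale with the power budget. All names and numbers below are hypothetical illustrations of the mechanism Huang describes, not a real operator API.

```python
# Minimal sketch of grid-responsive "graceful degradation" for an AI
# data center. All interfaces and figures are hypothetical.

def throttled_power(nominal_power_mw: float, grid_cap_mw: float) -> float:
    """Clamp facility draw to the grid's current allowance."""
    return min(nominal_power_mw, grid_cap_mw)

def tokens_per_second(power_mw: float, tokens_per_joule: float) -> float:
    """Throughput under a given power budget.

    tokens/s = watts * tokens-per-joule, since 1 W = 1 J/s.
    """
    return power_mw * 1e6 * tokens_per_joule

# Hypothetical facility: 100 MW nominal draw, 0.5 tokens per joule.
nominal_mw, efficiency = 100.0, 0.5
full = tokens_per_second(throttled_power(nominal_mw, grid_cap_mw=100.0),
                         efficiency)
curtailed = tokens_per_second(throttled_power(nominal_mw, grid_cap_mw=70.0),
                              efficiency)
# Under a 30% curtailment request the factory keeps producing at 70% of
# full throughput instead of shutting down, which is what lets operators
# use headroom below the grid's peak design capacity.
```

The design choice worth noting is that curtailment is absorbed as a proportional throughput reduction rather than an outage, which is what makes interruptible‑supply contracts viable for an AI factory.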

Huang also discussed supply‑chain and memory strategies. He described a Vera Rubin rack as comprising approximately 1.3 to 1.5 million components sourced from roughly 200 suppliers, and he explained that NVIDIA has moved toward assembling fully integrated racks within the supply chain rather than relying on field assembly. This shift requires suppliers to support gigawatt‑scale testing capacity prior to shipment. Regarding memory, Huang recounted persuading major memory manufacturers three years ago to expand high‑bandwidth memory (HBM) capacity, transforming HBM from a niche supercomputing component into a mainstream data‑center technology. He further noted efforts to adapt low‑power mobile memory (LPDDR) for high‑performance computing use cases.

On scaling laws, Huang decomposed AI expansion into four vectors: pre‑training, post‑training, test‑time scaling, and agent‑based scaling. He argued that training will increasingly be constrained by compute rather than by data, with synthetic data playing a growing role. For inference, which he equated with “thinking,” Huang emphasized that reasoning, planning, and search are computationally intensive tasks that will drive rising demand for inference compute.

Huang identified CUDA’s extensive installed base and the surrounding developer ecosystem as NVIDIA’s deepest competitive moat, supported by a global workforce of 43,000 and millions of developers who rely on continuous low‑level optimization. He confirmed that NVIDIA GPUs are already operating in space for satellite imagery preprocessing, but he cautioned that constructing large‑scale orbital data centers faces fundamental thermal‑management challenges because space lacks conduction and convection cooling. For now, he argued, the pragmatic priority is to maximize underutilized terrestrial power.

Commenting on rapid infrastructure builds, Huang praised the speed and systems thinking behind xAI’s Colossus supercomputer, which deployed 100,000 GPUs in four months, attributing the achievement to first‑principles engineering and minimalist execution. He characterized Elon Musk as a systems thinker who drives urgency through hands‑on leadership.

On workforce implications, Huang stated a clear hiring preference for candidates proficient in AI across disciplines, asserting that AI fluency will be essential for accountants, lawyers, sales professionals, supply‑chain managers, pharmacists, electricians, and carpenters alike. He distinguished between roles defined primarily by repetitive tasks, which are vulnerable to automation, and roles defined by higher‑order purpose, which can leverage AI to automate routine work and enable innovation. Huang advised those unfamiliar with AI to begin by asking AI how to use it, arguing that the barrier to entry is effectively zero and that delay imposes increasing opportunity cost. He projected a dramatic expansion in the population of programmers, suggesting that the number of individuals capable of specifying computational tasks could grow from roughly 30 million to as many as 1 billion as programming becomes synonymous with describing specifications for automated construction.

Finally, when asked about artificial general intelligence, Huang offered a provocative stance: if AGI is defined as a system capable of autonomously developing applications and generating profit, he believes such systems already exist. He envisioned AI autonomously creating web services or digital applications that attract billions of users and monetize successfully, asserting that the technical feasibility for such outcomes is present today.