Chip giant crushes forecasts as hyperscalers pour billions into AI, but market jitters linger
Investors have rarely seen a company grow this fast: Nvidia’s annual revenue has rocketed from US$27bn to US$216bn in roughly three years, and analysts expect it to top US$330bn next fiscal year, according to AP News.
For the November–January period, Nvidia’s fiscal fourth-quarter revenue rose 73 percent year over year to US$68.13bn, beating the consensus of US$66.21bn, while profit nearly doubled to about US$43bn, or US$1.76 per share.
CNBC reported that adjusted earnings per share came in at US$1.62, above expectations of US$1.53.
Guidance remains aggressive.
Nvidia projects fiscal first-quarter revenue of US$78bn, plus or minus 2 percent, well ahead of analyst forecasts of US$72.6bn.
AP News noted that if the company hits that target, revenue would be up 77 percent from a year earlier, suggesting its growth rate is still accelerating.
CEO Jensen Huang told analysts demand for the company’s chips is “skyrocketing” and said, “AI is here, AI is not going to go back… AI is only going to get better from here.”
At the same time, market reaction has been cautious.
Nvidia’s market value has climbed from about US$400bn at the end of 2022 to nearly US$4.8tn, yet even blowout reports have not always satisfied investors.
After the latest numbers, the stock initially rose about 4 percent in extended trading, then slipped slightly even as Huang struck an upbeat tone on the call.
AP News also recalled that after a previous quarter that far exceeded forecasts, the shares still fell 3 percent the next day.
The entire story centres on AI infrastructure.
Nvidia now generates over 91 percent of its revenue from its data centre unit, which sells its market-leading AI chips, according to CNBC.
Data centre revenue reached US$62.3bn in the quarter, topping expectations of US$60.69bn.
Within that unit, networking products used to link large GPU clusters produced US$10.98bn in sales, up 263 percent year over year, driven by adoption of its NVLink technology and Spectrum‑X Ethernet switches and boosted by new deals with customers such as Meta.
Big Tech’s capex commitments underline why Nvidia’s data centre business has become so central.
AP News said Amazon, Microsoft, Alphabet and Meta collectively plan to spend about US$650bn this year to ramp up AI computing power.
CNBC separately reported that based on their capex guidance and analyst estimates, combined spending could approach US$700bn as they build out AI infrastructure.
Nvidia’s CFO commentary said Alphabet, Amazon, Meta and Microsoft “remained our largest customer category,” accounting for just over half of data centre revenue.
Investors watching AI cyclicality have little evidence of a slowdown so far.
Reuters quoted Bob O’Donnell of TECHnalysis Research as saying that concerns about AI demand cooling “simply are not showing up yet” and noting that data centre revenue is diversifying beyond the biggest hyperscalers, which suggests “growth opportunity in more places.”
Nvidia’s forward pipeline is also substantial.
CFO Colette Kress told analysts the company expects sales to exceed the US$500bn revenue pipeline for 2026 that it disclosed in October, and anticipates growth in every quarter of calendar 2026, though she did not offer a more specific timeline.
On the product front, Nvidia is preparing its next-generation Vera Rubin rack‑scale systems, the successor to Grace Blackwell.
According to CNBC, Kress said Nvidia shipped its first Vera Rubin samples to customers earlier this week and remains on track to begin production shipments in the second half of the year.
Vera Rubin is expected to deliver 10 times more performance per watt, a gain aimed at energy efficiency at a time when data centres face power constraints.
Nvidia is simultaneously trying to derisk its manufacturing base.
The company is expanding its supply chain beyond Asia into the United States and Latin America, producing Blackwell GPUs at Taiwan Semiconductor Manufacturing Co.’s new Arizona plants and assembling some rack-scale systems at a large Foxconn facility in Mexico.
Nvidia said these moves aim to strengthen its supply chain, add resiliency and meet growing AI infrastructure demand, while warning that scaling output will depend on local manufacturing ecosystems.
Regulation and geopolitics remain a key watchpoint.
Reuters said Nvidia’s forecast does not include data centre revenue from China, though the company has recently received US licences to ship “small amounts” of its H200 chips to Chinese customers.
US authorities last year allowed Nvidia’s H20 and AMD’s MI308, both designed for AI inference, to resume shipments to China while keeping tighter curbs on more advanced processors.
Competitive pressure is building from multiple directions.
Reuters reported that AMD plans to roll out a new flagship AI server chip later this year and has secured deals with top Nvidia customers, including Meta.
Alphabet’s Google has emerged as a major rival by supplying its in‑house TPUs to Anthropic and is in talks to provide chips to Meta.
Big Tech firms are increasingly designing and deploying their own processors in their data centres.
For now, according to AP News, Nvidia’s numbers continue to set the pace for the AI trade, even as investors debate how long such growth and capital intensity can last.