Nvidia is cheap, actually

You, a rube, might think that Nvidia is a big fat bubble, given that it’s worth almost as much as the entire UK stock market, trades at over 40 times next year’s forecast earnings and even has its own earnings parties, spivvy support act and a monstrous options ecosystem.

However, FT Alphaville has been huffing some sellside commentary, and can now confidently say that Nvidia is actually cheap. Super cheap! It’s basically a deep value stock now.

Indeed, it represents a “generational opportunity”, Bank of America said in a note today, which upped its price target to $190.

We reiterate our Buy rating, raise our CY25/26 pf-EPS est. by 13%-20%, and lift our PO to $190 from $165 (unch. CY25E 42x PE) on top AI pick NVDA. Our confidence in NVDA’s competitive lead (80-85% mkt. share) and generational opportunity ($400bn+ TAM, 4x+ vs. CY24) is boosted by: (1) recent industry events (TSMC results, AMD AI event, our meetings with AVGO, MU, optical experts, launch pace of large language models, capex commentary from top hyperscalers and NVDA mgt. re “insane Blackwell demand”); (2) NVDA’s underappreciated enterprise partnerships (Accenture, ServiceNow, Oracle, etc.) and software offerings (NIMs); and (3) ability to generate $200bn in FCF over the next two years. Meanwhile, in our view, NVDA’s valuation remains compelling at just 0.6x CY25E PE to YoY EPS growth-rate or PEG, well below “Mag-7” avg. of 1.9x. Data from the BofA strategy team suggests NVDA is broadly owned but only ~1x mkt. weighted in active portfolios.
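BofA’s “compelling” PEG claim can be sanity-checked with a few lines. The 42x CY25E PE, 0.6x PEG and 1.9x Mag-7 average come from the note; the implied growth rate is our own back-of-the-envelope derivation, not a figure BofA states:

```python
# Back-of-the-envelope check of BofA's PEG arithmetic (inputs from the note).
pe_cy25 = 42.0      # CY25E price-to-earnings multiple
peg = 0.6           # PEG: PE divided by YoY EPS growth rate (in percentage points)
mag7_avg_peg = 1.9  # "Mag-7" average PEG cited by BofA

# A 42x PE at a 0.6x PEG implies roughly 70% YoY EPS growth.
implied_growth = pe_cy25 / peg

print(f"Implied CY25E EPS growth: {implied_growth:.0f}%")
print(f"NVDA PEG vs Mag-7 average: {peg}x vs {mag7_avg_peg}x")
```

In other words, the stock only screens “cheap” on this metric if earnings really do grow at something like 70 per cent a year.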

To unpick the sellside verbiage a little, various stuff has happened — such as TSMC’s smashing results and Nvidia’s CEO saying demand for its latest chip has been “insane” — which has made Bank of America even more optimistic than it was just a few months ago.

Throw in enterprise partnerships with the likes of Accenture, and the bank now predicts that Nvidia’s earnings per share will more than quintuple to $5.67 by 2027, deflating its price-to-earnings ratio to a more modest 24 times. Free cash flow, BofA predicts, will clock in at $200bn over the next two years.
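The multiples are easy enough to reverse-engineer. The $190 target and $5.67 EPS are BofA’s; note that the 24x figure only works against the share price at the time (which the arithmetic itself implies), not against the $190 target:

```python
# Multiples implied by BofA's own numbers (EPS forecast and price objective from the note).
eps_2027 = 5.67       # BofA's 2027 EPS forecast, dollars
price_target = 190.0  # BofA's price objective
quoted_pe = 24.0      # the "more modest" multiple cited

implied_price = quoted_pe * eps_2027    # share price the 24x figure assumes (~$136)
pe_at_target = price_target / eps_2027  # the multiple if the stock actually hits $190

print(f"24x on $5.67 EPS implies a share price of ${implied_price:.2f}")
print(f"At the $190 target, the 2027 PE would be {pe_at_target:.1f}x")
```

So even on BofA’s own 2027 earnings, a stock at the $190 target would still trade at roughly 33.5 times.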

Wall Street analysts are almost uniformly positive on Nvidia — which, to be fair, has been smashing expectations for a while now — but this is pretty punchy.

Of the 64 analysts polled by LSEG, 58 rate it a buy and none rates it a sell, but BofA’s EPS forecasts are the third-highest of all analyst estimates, with only China’s Everbright and Brazil’s Banco Itaú more bullish. Its price target is outdone only by Rosenblatt Securities and Elazar Advisors (us neither).

It makes Goldman Sachs’ price target — raised to $150 just a week ago after a meeting with Nvidia’s CEO Jensen Huang — seem sober. But Goldman also thinks Nvidia is very reasonably priced, trading at near its three-year median PE, and well below its recent history compared to its peers.

[Chart: Nvidia’s next-twelve-month PE ratio]
[Chart: Nvidia’s next-twelve-month PE ratio relative to other similar companies in Goldman’s coverage universe]

Here are Goldman’s main arguments for its own price target upgrade, which we’ll quote at length given how much interest there is in the stock:

Continued focus on Accelerated Computing: With classic Moore’s Law exhibiting diminishing marginal returns (and, in turn, the need to innovate via architectural advancements increasingly apparent), and the emergence of Generative AI providing an opportunity for its customers to grow revenues and/or improve productivity, Nvidia believes data center operators will continue to focus their capital spending on Accelerated Computing and, specifically, GPUs. On the highly debated topic of customer ROI, management noted that hyperscalers with large social media and/or e-commerce platforms where customization is critical, are already witnessing solid returns on investment. Beyond the large cloud service providers (CSPs) and consumer internet companies, Nvidia expects the next wave of AI adoption to be driven by Enterprise in the form of digital AI agents that collaborate with and augment employees.

Blackwell ramp: Management highlighted the architectural shift in Blackwell vis-à-vis Hopper as well as the associated expansion in the company’s market opportunity (e.g. new CPU configuration, introduction of Spectrum-X, new NVLink switches). By integrating seven chips, and each playing a role in delivering higher performance at the data center level, we view the introduction and ramp of Blackwell not only as a near- and medium-term revenue growth driver, but also a dynamic that extends Nvidia’s competitive advantage. The ramp of Blackwell-based products remains on track with several billion dollars in revenue expected in the January quarter followed by further growth in April and beyond. Customers equipped with liquid-cooled infrastructure are expected to adopt the GB200 NVL72 (i.e. 36 Grace CPUs and 72 Blackwell GPUs connected in a rack-scale design) whereas others will likely opt for other configurations, most notably the HGX B100/200.

Increase in Inference complexity: Per our conversations, investors had historically perceived Inference as a relatively ‘easy’, less compute-intensive workload and a market in which Nvidia would face intense competition. However, as OpenAI’s recent release of its o1 models that are designed to spend more time ‘thinking’ or ‘reasoning’ before responding indicates, the complexity of Inference (and thus, the amount of compute that is required) is clearly on the rise. In fact, demand for Inference compute could grow exponentially as model builders solve for high throughput and low latency. Importantly, supported by its full-stack approach, we believe Nvidia is well-positioned to capture this growth opportunity in Inference (which is already nearing ~50% of the company’s Data Center revenue).

Competitive moat: Mr. Huang spoke to the company’s competitive moat which rests on a) the company’s large installed base (which, in turn, fuels the virtuous positive cycle that entails more developers), b) the company’s ability to innovate not just at the chip level but at the data center level, and c) its robust and growing software offerings, including domain-specific libraries such as Nvidia Parabricks (i.e. genomics analysis) and Nvidia AI Aerial (i.e. software-defined and cloud-native 5G networks). On the topic of ASICs (application-specific ICs) and their value proposition in relation to merchant GPUs, management reiterated their view that while ASICs have had and will always have a place in the data center, especially for applications such as video transcoding and general deep learning, they do not view ASICs as direct competition as they do not possess the agility, breadth (i.e. installed base) and reach (i.e. ability to work with or support any cloud service provider) Nvidia GPUs are able to offer.

Forward visibility: With lead times for its GB200 NVL products extending to ~12 months, Nvidia has strong forward visibility in its Data Center business. Importantly, the company’s engagements, particularly with the large CSPs, are deep and extend as far out as their public product roadmap (i.e. ~2027). By committing to a one-year product cadence and providing transparency to its customers, the company also hopes to achieve a healthy supply/demand balance whereby customers procure enough hardware to address near-term needs as opposed to front-loading capital spending (which often has the potential of creating unwanted year-to-year volatility).

Supply outlook and foundry strategy: Given the current demand backdrop, Nvidia expects supply to remain tight for the foreseeable future, despite its partners’ concerted effort to support the company’s growth outlook. On HBM, Nvidia expects to ultimately qualify a third supplier in Samsung, despite the company’s challenges over the past year. On its foundry strategy, management noted a) its long-standing and successful relationship with TSMC, b) while TSMC offers leading-edge process technology, the company’s agility, speed and strong/consistent execution are additional key characteristics that truly differentiates them from the competition, and c) while Nvidia would like to diversify its foundry footprint, if anything, they believe not engaging with TSMC would be a bigger risk.

Sovereign AI and AV/Humanoid Robots: In addition to the introduction of Blackwell and the growing opportunity in Inference, management called out Sovereign AI, autonomous vehicles and Humanoid robots as current and future growth drivers for the business. On Sovereign AI, recall that the company had increased its FY2025 revenue guidance from high-single digit billions of dollars to low-double digit billions of dollars on its August earnings call.

The secret is of course that there is no price target so outlandish that it can’t be made credible by some equally outlandish assumptions.

Snarking about Nvidia and optimistic analysts feels a bit dangerous at the moment, given how the company just keeps treating estimates as pesky little puddles to skip over, but, like the proverbial broken clock, our scepticism will be proven right sometime.