Amid xAI's Colossus and Amazon's (NASDAQ:AMZN) Project Rainier supercomputers, Nvidia (NASDAQ:NVDA) remains the linchpin of most AI operations, with xAI particularly dependent on it. Amazon, however, is taking an independent route, developing its own AI accelerator chips to challenge Nvidia's near-monopoly. While CEO Jensen Huang is steering Nvidia toward sustained prosperity by protecting market share through diversification and expanding AI inference capabilities, the market is inevitably becoming saturated with more players. My valuation model indicates Nvidia is moderately undervalued at present, but over the next three to five years a significant valuation decline is likely, driven by an impending revenue contraction.
Nvidia has long been established as the unrivaled AI accelerator provider, but the market is subtly shifting. New entrants, such as Amazon with its Trainium3 chip announced at re:Invent, and Google's (NASDAQ:GOOGL) (GOOG) TPUs, are likely to pressure Nvidia's market position over the long term. For now, however, Nvidia remains the undisputed leader in AI chip design.
Nvidia's flagship GPUs, such as the H100, have played a critical role in data centers, with the company accounting for 98% of data center GPU shipments in 2023. Despite developing their own chips, companies like Amazon continue to collaborate with Nvidia. For example, Amazon's Project Ceiba is an AI supercomputer being built exclusively on AWS using 20,736 Nvidia GB200 Superchips, delivering 414 exaflops of compute for Nvidia's own AI R&D.
Competitors such as Cerebras and Groq are also targeting the AI inference market as training workloads diminish. Nvidia is responding by optimizing its GPUs for inference while exploring unified solutions that bridge training and inference. Staying relevant here is critical, as one of the core risks of investing in Nvidia is its vulnerability to a revenue contraction, which would likely lead to a valuation decline.
Companies like Microsoft (NASDAQ:MSFT), Meta (NASDAQ:META), and Google have invested billions of dollars in infrastructure built around Nvidia's GPUs and CUDA platform. CUDA has been instrumental in Nvidia's dominance, offering over 400 CUDA-accelerated libraries tailored to tasks like linear algebra, deep learning, and data processing; these cut development time and improve performance for AI applications. Given the high switching costs of migration, Nvidia's position is robust. However, competitors such as Advanced Micro Devices (AMD) with ROCm, Intel (INTC) with oneAPI, and emerging players like Spectral Compute are developing alternatives to challenge Nvidia's dominance. Looking 10 years ahead, these dynamics are likely to compound, leading to a revenue decline as AI infrastructure demand tapers once the bulk of the AI training phase concludes.
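To illustrate the lock-in dynamic described above (this sketch is my own, not from any company's codebase): libraries such as CuPy expose a NumPy-compatible API backed by CUDA libraries like cuBLAS, so application code stays unchanged while the heavy lifting runs on Nvidia hardware. The switching cost lives in the backend, which is precisely what ROCm and oneAPI must replicate.

```python
# Minimal sketch: identical array code runs on CPU (NumPy) or on
# Nvidia GPUs via CuPy, whose operations dispatch to CUDA-accelerated
# libraries such as cuBLAS. The fallback lets this run anywhere.
import numpy as np

try:
    import cupy as xp  # GPU path, requires CUDA and an Nvidia GPU
except ImportError:
    xp = np            # CPU fallback with the same API

a = xp.arange(6, dtype=xp.float32).reshape(2, 3)
b = xp.ones((3, 2), dtype=xp.float32)
c = a @ b              # matrix multiply; a cuBLAS GEMM when CuPy is active
print(float(c.sum()))  # same code, different backend
```

Because the user-facing API is identical, the real migration cost for a Microsoft- or Meta-scale deployment is not this top-layer code but the tuned kernels and tooling beneath it.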