Nvidia is forming a new business unit that will design custom processors for a wide range of applications, including AI processors, Reuters reports, citing nine sources. The potential client list includes automakers, major cloud service providers (CSPs), and telecommunications companies. The custom chip unit will help Nvidia expand its business into new markets.
The new unit is led by vice president Dina McKinney, who was previously responsible for AMD’s Cat-series CPU microarchitectures, some of Qualcomm’s Adreno GPU designs, and Marvell’s infrastructure processors, and is designed to meet the needs of automotive, console, data center, telecommunications, and other applications that can take advantage of custom silicon. Nvidia has not confirmed the existence of the new business unit, but McKinney’s LinkedIn profile indicates that as VP of Silicon Engineering, she is responsible for silicon aimed at ‘cloud, 5G, gaming, and automotive,’ suggesting the diverse nature of her work.
While all major CSPs use Nvidia’s A100 and H100 processors for AI and high-performance computing (HPC) workloads, many of them, including Amazon Web Services, Google, and Microsoft, are also deploying their own custom processors for AI and general-purpose needs. This allows them to optimize costs (without paying a premium to Nvidia), tailor the capabilities of their data centers, and tune performance and power consumption, saving large sums of money. Additionally, by being responsible for the silicon design, these companies can quickly add custom features (such as new data formats) to their chips and protect their IP. As a result, while Nvidia’s AI and HPC GPUs remain irreplaceable for some workloads, many others are now deployed on custom silicon. The trend toward custom silicon is broad, and the market is growing rapidly, with CSPs essentially eating Nvidia’s lunch.
The report says that Nvidia has engaged in preliminary discussions with tech giants, including Amazon, Meta, Microsoft, Google, and OpenAI, to explore opportunities for creating custom chips, signaling an expanded focus beyond traditional ready-to-use data center offerings.
Nvidia is particularly successful in meeting the needs of AI applications with its ready-to-use A100 and H100 processors and their variations (e.g., A800, H800, rumored H20 DGX, etc.), as well as RTX series graphics processors for client PCs and data centers. The company’s Mellanox connectivity and networking products are also in high demand among cloud service providers.
But when it comes to the automotive market, Nvidia’s sales have lagged behind its lucrative data center, gaming, and professional visualization businesses. To some extent, this is because many automakers are also seeking custom silicon to power their software-defined vehicles; although the Nvidia Drive platform is ahead of many rival efforts, at least some vehicle makers prefer their own highly customized platforms for reasons of cost, competition, and IP control.
This approach not only opens new paths for Nvidia, but also puts it in direct competition with other custom chip designers, such as AMD, Alchip, Broadcom, Marvell Technology, and Sondrel. Although these companies have considerable experience, Nvidia brings many highly competitive IP blocks, including CPU, GPU, AI, HPC, networking, and sensor-processing technologies. Selling some of this IP in custom packages could significantly increase Nvidia’s total addressable market (TAM) and eventually boost its earnings.