Nvidia has agreed to pay $20 billion to AI chip startup Groq to license its AI inference hardware, while recruiting several of its employees, including its founder, in an unexpected deal that underscores how the fight for AI chip dominance is only growing more intense.
Groq said on Wednesday, December 24, that it has entered into a non-exclusive licensing agreement with Nvidia to provide expanded access to its high-performance, low-cost inference chips. To help advance and scale the licensed technology, Groq CEO Jonathan Ross, president Sunny Madra, and other members of the AI chip startup will be joining Nvidia. Groq’s new CEO will be Simon Edwards, and the company will continue to operate as an independent firm even after the Nvidia partnership.
While the full financial details haven’t been disclosed, the $20 billion licensing deal is reportedly Nvidia’s largest purchase of any technology to date. It is also about three times Groq’s $6.9 billion valuation, set after the startup closed a $750 million financing round just a few months ago. The non-exclusive licensing agreement is significant in light of two important developments shaping the AI industry:
First, the Groq deal comes amid a growing narrative that Nvidia’s unquestioned leadership of the AI chip market is fracturing. It suggests that the industry leader sees strategic value in licensing and absorbing inference technology from its emerging rivals, even as the industry’s focus shifts from the training-centric GPUs that characterised the early period of AI development to custom silicon.
Second, the deal reinforces the trend of acqui-hires within the AI industry, where having in-house talent and specialised expertise is becoming just as crucial as access to the hardware or technology itself.
Nvidia: Rising competition, strategic bets
Investors have been jittery about potential competition for Nvidia ever since its AI chip business exploded in the course of 2023, a few months after OpenAI launched its world-changing ChatGPT. Since then, Nvidia’s quarterly revenue has expanded to $57 billion from $7 billion. Major hyperscalers such as Google, Microsoft, Amazon, Meta, and Oracle have come to rely heavily on Nvidia’s graphics processing units (GPUs) to train and develop their own AI models, as well as to rent them out to AI startups like OpenAI and Anthropic.
But Nvidia’s blockbuster success has also invited an array of potential rivals, ranging from established players such as Advanced Micro Devices (AMD) and Broadcom to well-funded upstarts like Groq and Cerebras. Tech giants such as Google, Amazon, and Apple have also been developing in-house, custom-built AI chips to break their costly dependence on Nvidia’s GPUs.
So far, the sheer quality of Nvidia’s chips has maintained its edge. Yet Google is mounting a serious challenge since the launch of Gemini 3, its latest family of AI models, which was hailed as a major improvement and was reportedly developed entirely on the company’s tensor processing units (TPUs): a type of custom application-specific integrated circuit (ASIC) whose performance relative to GPUs is still in question, but which is emerging as a potential alternative.
Faced with rising competition, Nvidia has strategically ramped up its investments in chip startups and the broader ecosystem. As of October 2025, Nvidia had $60.6 billion in cash and short-term investments, up from $13.3 billion in early 2023, according to a report by The Information.
It bought Israeli chip designer Mellanox for close to $7 billion in 2019. In a similar but smaller deal than the Groq agreement, Nvidia said in September 2025 that it would shell out $900 million to license AI hardware startup Enfabrica’s technology while hiring its CEO and other employees.
Nvidia has also backed AI and energy infrastructure company Crusoe, AI model developer Cohere, and AI-centric cloud provider CoreWeave. It has further invested $5 billion in Intel as part of a partnership, and struck a $100 billion investment deal with OpenAI in exchange for a commitment from the ChatGPT maker to deploy at least 10 gigawatts of its chips.
Groq: TPU roots, inference play
GPUs are said to work well for training AI models, but as the technology matures and competition increases, companies want to optimise AI model inference, where a model is made to generate outputs on previously unseen data. Over the past few months, the AI industry has shifted its focus to hardware designed specifically for inference in order to improve the overall quality of AI models.
This is where Groq comes in. Founded in 2016, the US semiconductor startup is known for producing AI inference chips that it calls LPUs (language processing units), which can be used to optimise pre-trained models. “Inference is defining this era of AI, and we’re building the American infrastructure that delivers it with high speed and low cost,” Jonathan Ross, Groq founder and CEO, has previously said.
Notably, Ross is a former Google engineer and one of the creators of the tech giant’s custom TPU chips. Groq’s investors include Samsung, Cisco, BlackRock, Neuberger Berman, Altimeter Capital, Social Capital, and 1789 Capital, which reportedly counts US President Donald Trump’s son, Donald Trump Jr, as one of its partners.
Groq’s products are primarily aimed at developers and enterprises. They are made available as either a cloud service or an on-premises hardware cluster. Together, they are used to power AI applications built by more than two million developers, according to a recent JHB report. Its cloud service and on-prem hardware are also used to run popular open-weight AI models developed by Meta, DeepSeek, Qwen, Mistral, Google, and OpenAI. The company has said that its offerings maintain, and sometimes even improve, the performance of AI models at a significantly lower cost than alternatives.
Nvidia CEO Jensen Huang has said that the trillion-dollar company’s agreement with Groq expands its capabilities. “We plan to integrate Groq’s low-latency processors into the NVIDIA AI factory architecture, extending the platform to serve an even broader range of AI inference and real-time workloads,” Huang was quoted as saying by CNBC.

