Nvidia’s Next-Generation AI Chips Enter Full Production, CEO Huang Confirms
Nvidia CEO Jensen Huang announced that the company’s next generation of artificial intelligence chips is now in full production, marking a major milestone for the world’s most valuable semiconductor maker. Speaking at the Consumer Electronics Show (CES) in Las Vegas on Monday, Huang said the new chips deliver up to five times the AI computing power of their predecessors when running chatbots and other AI-driven applications.
Vera Rubin Platform to Power Advanced AI Systems
Huang unveiled new details about the Vera Rubin platform, which comprises six Nvidia chips and will debut later this year. The platform’s flagship server will feature 72 graphics processing units and 36 central processors. According to Huang, multiple servers can be linked into large “pods” containing more than 1,000 Rubin chips, achieving up to tenfold improvements in AI token generation efficiency, a key metric of large-scale model performance.
He explained that these gains stem from the use of proprietary data formats, which the company hopes will become an industry standard. “This is how we were able to deliver such a gigantic step up in performance, even though we only have 1.6 times the number of transistors,” Huang said.
Strong Demand Amid Rising Competition
While Nvidia remains dominant in AI model training, the company faces growing competition from traditional rivals such as Advanced Micro Devices (AMD) and from major customers, including Alphabet’s Google. Huang said that partners like CoreWeave will be among the first to deploy the new Vera Rubin systems, with Microsoft, Amazon, Oracle and Google also expected to adopt them.
The CEO also highlighted “context memory storage,” a new technology designed to help chatbots deliver quicker, more coherent responses during extended conversations. Nvidia further introduced upgraded networking switches featuring “co-packaged optics,” a technology aimed at linking thousands of machines efficiently and competing with similar offerings from Broadcom and Cisco Systems.
Open-Source Software and Strategic Expansion
Huang announced that Nvidia will make its self-driving car decision-making software, Alpamayo, publicly available, along with the data used to train it. “Not only do we open-source the models, we also open-source the data that we use to train those models, because only in that way can you truly trust how the models came to be,” he said.
He also confirmed Nvidia’s acquisition of talent and chip technology from AI startup Groq, noting that while the deal will not affect the company’s core business, it may lead to new product lines. Nvidia continues to see high demand for its earlier H200 chips, which are still being exported to China under new US government licensing requirements.
Huang concluded his CES presentation by stressing that the Vera Rubin generation represents Nvidia’s next leap forward in AI performance and scalability and will solidify its leadership in the rapidly expanding artificial intelligence hardware market.
(With inputs from Reuters)