Nvidia recently entered into a “non-exclusive license agreement” with Groq, a deal that stopped just short of an outright acquisition. It is now reported that the next-generation Feynman AI accelerators may incorporate chips based on Groq’s technology, with a physical design reminiscent of AMD’s Ryzen X3D processors.
According to the report, the next-generation accelerators would combine Nvidia’s GPUs with Groq-developed LPUs (Language Processing Units). These LPUs could be mounted on top of the main die using TSMC’s SoIC interconnect technology, much as 3D V-Cache chiplets are stacked on Ryzen X3D processors.
Groq’s LPU is purpose-built hardware for natural language processing. In early tests at launch, Groq’s language processor was regarded as one of the best, if not the best, solutions for deploying neural network models. Notably, no AI accelerator currently on the market matches this capability, and competitors are unlikely to deliver an alternative quickly.
Notably, the Feynman series is not expected until after Rubin Ultra, which is slated for release in 2027. The Feynman accelerators are reported to use a 1.6nm process and draw up to 4.4 kW each.
As the AI field evolves rapidly, these strategic moves put Nvidia in a strong position against competitors such as Intel and AMD, which are stepping up efforts in similar technologies but have yet to make comparable strides.