Recently, Nvidia entered into a “non-exclusive license agreement” with Groq, coming tantalizingly close to an actual acquisition. It is now reported that the next-generation Feynman AI accelerators might incorporate chips based on Groq’s technologies, with a physical design reminiscent of the Ryzen X3D.
This suggests that, in addition to GPUs, the next-gen accelerators will also include Groq-developed LPUs (Language Processing Units). These LPUs could be mounted on top of the main die using TSMC's SoIC interconnect technology, much as 3D V-Cache chips are stacked on Ryzen X3D processors.
Groq's LPU design stands out as cutting-edge natural language processing hardware. In initial tests at launch, Groq's language processor was considered one of the best, if not the leading, solutions for deploying neural network models. Notably, no AI accelerator currently on the market matches this capability, and competitors are unlikely to develop an alternative quickly.
It is noteworthy that the Feynman series is expected only after the Rubin Ultra, which is slated for release in 2027. The Feynman accelerators will utilize a 1.6nm process and require up to 4.4 kW each.
As the AI field rapidly evolves, Nvidia's strategic moves place it in a strong position against competitors like Intel and AMD, who are intensifying efforts in similar technologies but have yet to make comparable strides.