Nvidia has recently entered into a “non-exclusive license agreement” with Groq, an arrangement that stops just short of an outright acquisition. Reports now suggest that the next-generation Feynman AI accelerators may incorporate chips based on Groq’s technology, with a physical design reminiscent of AMD’s Ryzen X3D processors.
The agreement suggests that, alongside the GPUs, the next-generation accelerators will also include Groq-developed LPUs (Language Processing Units). These LPUs could be mounted on top of the main die using TSMC’s SoIC interconnect technology, much as 3D V-Cache chiplets are stacked on Ryzen X3D processors.
Groq’s LPU stands out as cutting-edge hardware for natural language processing. In initial tests at launch, Groq’s language processor was considered one of the best, if not the leading, solutions for running neural network inference. No AI accelerator currently on the market matches this capability, and competitors are unlikely to develop an alternative quickly.
Notably, the Feynman series is expected only after Rubin Ultra, which is slated for release in 2027. The Feynman accelerators will be built on a 1.6nm process and draw up to 4.4 kW each.
As the AI field rapidly evolves, Nvidia’s strategic moves place it in a strong position against competitors such as Intel and AMD, who are intensifying efforts in similar technologies but have yet to make comparable strides.