Credo Technology Group Holding Ltd (Credo), a company widely recognized for its advances in high-speed connectivity, has released Weaver, the industry's first memory fanout gearbox. Weaver is designed to eliminate memory bottlenecks in AI inference workloads. By moving beyond the limitations of current memory architectures, it delivers both a performance boost and greater scalability for next-generation AI and data center applications.
Revolutionizing AI Memory Efficiency
Weaver marks the debut of Credo’s new OmniConnect family, a series of solutions aimed at scale-up and scale-out challenges in large-scale AI environments. Using advanced 112G very short reach (VSR) SerDes and a proprietary architecture, Weaver boosts I/O density by up to ten times, enabling as much as 6.4 TB of memory capacity and 16 TB/s of bandwidth with LPDDR5X technology.
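To put the headline figures in perspective, a rough back-of-envelope sketch shows how many memory packages an aggregate of 16 TB/s and 6.4 TB implies. The per-package numbers below are illustrative assumptions for a hypothetical x64 LPDDR5X-8533 part, not Credo specifications:

```python
# Back-of-envelope arithmetic for Weaver's announced aggregate figures.
# Per-package values are illustrative assumptions, not Credo specs.

TOTAL_BANDWIDTH_TBPS = 16.0   # 16 TB/s aggregate bandwidth, per the announcement
TOTAL_CAPACITY_TB = 6.4       # 6.4 TB aggregate capacity, per the announcement

# Hypothetical per-LPDDR5X-package figures (assumed for illustration):
pkg_bandwidth_gbps = 68.0     # ~68 GB/s for an assumed x64 LPDDR5X-8533 package
pkg_capacity_gb = 32.0        # an assumed 32 GB package

packages_for_bandwidth = (TOTAL_BANDWIDTH_TBPS * 1000) / pkg_bandwidth_gbps
packages_for_capacity = (TOTAL_CAPACITY_TB * 1000) / pkg_capacity_gb

print(f"Packages needed to reach 16 TB/s: {packages_for_bandwidth:.0f}")
print(f"Packages needed to reach 6.4 TB:  {packages_for_capacity:.0f}")
```

Under these assumptions, both targets land in the low hundreds of packages, which illustrates why a high-density fanout device sits between the compute die and the DRAM rather than wiring each package directly.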
Conventional memory technologies such as LPDDR5X and GDDR are limited in bandwidth, density, and energy efficiency, and while High Bandwidth Memory (HBM) is powerful, it remains expensive and supply-constrained. Credo’s Weaver removes these obstacles, enabling improved throughput, density, and scalability to support the most demanding AI workloads.
Don Barnetson, Senior Vice President of Product at Credo, said Weaver was engineered for “the flexibility and scalability required for future AI inference systems.” He added that the technology allows partners to optimize memory provisioning, reduce costs, and speed up deployment for advanced AI infrastructures.
Mitesh Agrawal, CEO of Positron, praised the product’s impact, saying, “Weaver helps us solve critical memory challenges and deliver high-performance compute power for our next generation of AI inference servers.”
Smart Design for Future-Ready AI Systems
Weaver gives system integrators the ability to reconfigure AI models through flexible DRAM packaging and late binding of memory configurations. The solution is also energy-efficient and future-ready, with support for upcoming memory protocols, and it integrates telemetry and diagnostics features to maintain strong reliability and uptime in high-demand environments.
To learn more about Weaver and Credo’s OmniConnect series, industry professionals can register for the upcoming webinar, “Breaking the Memory Wall: Scaling AI Inference with Innovative Memory Fanout Architecture,” scheduled for November 10 at 8 a.m. PT / 11 a.m. ET on TechOnline.
With Weaver, Credo positions itself at the forefront of AI infrastructure innovation. The company offers a scalable, efficient, and cost-effective solution to the most pressing memory performance challenges in AI inference and data center workloads.
News Source: Businesswire.com