Liqid, a leader in composable infrastructure solutions, has unveiled new capabilities tailored for enterprise AI in edge and on-premises datacenters.
These advancements aim to deliver high-performance scalability while reducing energy use and infrastructure waste. The latest offerings allow organizations to adapt resources dynamically, minimizing underutilization and cutting power and cooling needs.

According to Liqid, enterprises using its solutions can double tokens per watt and improve tokens per dollar by 50%.
As AI drives digital transformation, Liqid’s platforms support precise, on-demand infrastructure allocation. This reduces energy consumption and boosts throughput, enabling enterprises to maximize AI infrastructure ROI.

To support AI alongside adjacent workloads such as HPC, VDI, and rendering, Liqid has introduced a new suite of solutions.
These include:

  • Liqid Matrix® 3.6, a unified platform for managing composable memory, GPU, and storage with real-time control and 100% utilization.
  • Liqid EX-5410P, a PCIe Gen5 10-slot GPU platform designed for modern 600W GPUs and accelerators. It supports ultra-low-latency, high-bandwidth connections.
  • Liqid EX-5410C, a memory solution built on CXL 2.0 to power LLMs and in-memory databases. It ensures high-speed, low-latency access to pooled memory.
  • Liqid LQD-5500, the latest Gen5 NVMe drive, offering up to 128TB of ultra-fast cache storage.

Edgar Masri, CEO of Liqid, emphasized the urgency for smarter AI infrastructure as demand surges.
“With generative AI shifting on-prem, organizations must scale without surpassing power budgets,” he said. “Our latest updates ensure agility, efficiency, and performance to match future needs.”

Liqid Matrix 3.6 provides a unified interface to orchestrate GPU, memory, and storage resources seamlessly across environments. The platform integrates with tools such as Kubernetes, VMware, OpenShift, Ansible, and Slurm, enabling organizations to compose AI resources dynamically.

The EX-5410P enables high GPU density and supports top-tier GPUs such as NVIDIA H200 and Intel Gaudi 3. Managed through Liqid Matrix, this solution lowers costs per rack unit while improving performance and agility for diverse AI workloads.

Liqid offers UltraStack and SmartStack composable GPU options.
UltraStack dedicates up to 30 GPUs per server, while SmartStack shares up to 30 GPUs across 20 servers for maximum flexibility.

Its composable memory solution disaggregates DRAM with CXL 2.0, unlocking up to 100TB of pooled memory per deployment. UltraStack dedicates the entire pool to one server, while SmartStack dynamically shares memory across 32 nodes.
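The SmartStack model described above amounts to treating GPUs or pooled memory as a shared reservoir that individual servers borrow from and return to as workloads start and finish. A minimal conceptual sketch of that allocation pattern is below; the class and method names are illustrative only, not Liqid's actual API.

```python
# Conceptual sketch of SmartStack-style composability: a shared pool of
# devices (GPUs or CXL memory units) is composed to hosts on demand and
# released back when a workload ends. Names here are hypothetical.

class ComposablePool:
    def __init__(self, total_units: int):
        self.free = total_units     # unallocated devices in the pool
        self.assigned = {}          # host -> units currently composed

    def compose(self, host: str, units: int) -> None:
        """Attach `units` devices from the pool to `host`, if available."""
        if units > self.free:
            raise RuntimeError(f"pool exhausted: only {self.free} units free")
        self.free -= units
        self.assigned[host] = self.assigned.get(host, 0) + units

    def release(self, host: str) -> None:
        """Return all of `host`'s composed devices to the shared pool."""
        self.free += self.assigned.pop(host, 0)

# Example: 30 GPUs shared across servers, as in the SmartStack scenario.
pool = ComposablePool(30)
pool.compose("server-01", 8)    # burst a training job onto 8 GPUs
pool.compose("server-02", 4)
pool.release("server-01")       # GPUs return to the pool when the job ends
print(pool.free)                # 26
```

The point of the pattern is that no server permanently owns the hardware: utilization rises because idle devices are immediately available to other nodes.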

Liqid’s LQD-5500 delivers high-speed NVMe storage with up to 128TB capacity, 50GB/s bandwidth, and 6 million IOPS. It is ideal for real-time AI analytics, HPC, and demanding enterprise data workloads.

Liqid transforms traditional server infrastructure with its composable GPU, memory, and storage architecture. Its software-defined systems reduce costs, space, and energy demands, making it a strong choice for next-generation AI factories.


News Source: Businesswire.com