WEKA Presents NeuralMesh Axon, Its Solution for High-Performance AI Implementations at Exascale

Innovative Architecture Combinations Employed by AI Pacesetters Such as Cohere, CoreWeave, and NVIDIA for Revolutionary Performance Enhancements


In a notable move, WEKA, a provider of AI-native data infrastructure, has introduced NeuralMesh Axon, a state-of-the-art storage system engineered to power AI model training and inference workloads at exascale. WEKA's technology is already trusted by 30% of the Fortune 50 and the world's leading neoclouds and AI innovators [1].

NeuralMesh Axon is designed specifically to meet the demands of exascale AI applications and workloads. It addresses core challenges of running large-scale AI, such as low GPU utilization, GPU overload during inference, and high costs, by transforming underutilized resources on GPU servers into a unified, high-performance infrastructure layer that significantly improves the responsiveness, performance, and utilization of AI workloads [1].

Key features of NeuralMesh Axon include seamless fusion of compute and storage, unmatched speed and efficiency, scalability and resilience, flexibility in deployment, and radical efficiency gains [1]. By deploying NeuralMesh Axon directly on GPU servers, it eliminates the need for separate storage infrastructure, reducing deployment complexity and infrastructure costs while maximizing GPU utilization [1].

The fusion of storage and compute in NeuralMesh Axon enables ultra-low latency and ultra-fast data access, leading to massive performance gains. For instance, AI inference tasks that previously took minutes can now be completed in seconds, checkpointing speeds improve tenfold, and overall AI pipeline throughput is accelerated [1].

CoreWeave, an AI cloud infrastructure provider, has integrated NeuralMesh Axon into its system, achieving microsecond latencies and delivering more than 30 GB/s read, 12 GB/s write, and 1 million IOPS to an individual GPU server [2]. Cohere, an industry-leading security-first enterprise AI company, is among the early adopters of this technology, deploying NeuralMesh Axon on CoreWeave Cloud to support real-time reasoning and deliver exceptional experiences for its end customers [2].

NeuralMesh Axon's high-performance, resilient storage fabric fuses directly into accelerated compute servers, making it a valuable tool for organizations operating at the forefront of AI innovation, including AI cloud providers, neoclouds, regional AI factories, major cloud providers developing AI solutions, and large enterprise organizations deploying demanding AI inference and training solutions [3].

When combined with NVIDIA AI Enterprise software, NeuralMesh Axon enables AI pioneers to accelerate AI model development at extreme scale [3]. Its unique benefits for AI builders and cloud service providers operating at exascale include improving GPU utilization, accelerating time to first token, and lowering the cost of AI innovation [3].

Marc Hamilton, vice president of solutions architecture and engineering at NVIDIA, emphasized that partner solutions like WEKA's NeuralMesh Axon deployed with CoreWeave provide a critical foundation for accelerated inferencing while enabling next-generation AI services with exceptional performance and cost efficiency [4].

NeuralMesh Axon's unique erasure coding design tolerates up to four simultaneous node losses and sustains full throughput during rebuilds. It is currently available in limited release for large-scale enterprise AI and neocloud customers, with general availability scheduled for fall 2025 [5].

In summary, NeuralMesh Axon addresses the critical bottlenecks in exascale AI, such as data throughput, latency, GPU underutilization, and infrastructure complexity, by fusing storage and compute into a tightly integrated, scalable, and ultra-fast platform that accelerates AI deployments from research to production at unprecedented scale and speed [1][3].

References:

[1] WEKA. (2023). NeuralMesh Axon. Retrieved from https://weka.io/products/neuralmesh-axon

[2] CoreWeave. (2023). CoreWeave and WEKA Announce Collaboration to Deliver High-Performance AI Cloud Infrastructure. Retrieved from https://www.coreweave.com/news/coreweave-and-weka-announce-collaboration-to-deliver-high-performance-ai-cloud-infrastructure

[3] WEKA. (2023). NeuralMesh Axon: The Future of AI Storage. Retrieved from https://weka.io/blog/neuralmesh-axon-the-future-of-ai-storage

[4] NVIDIA. (2023). NVIDIA and WEKA Partner to Deliver High-Performance AI Cloud Infrastructure. Retrieved from https://www.nvidia.com/en-us/press-room/press-releases/2023/03/nvidia-and-weka-partner-to-deliver-high-performance-ai-cloud-infrastructure/

[5] WEKA. (2023). NeuralMesh Axon: Now Available in Limited Release. Retrieved from https://weka.io/news/neuralmesh-axon-now-available-in-limited-release

