Dell Technologies has introduced updates to the Dell AI Data Platform with NVIDIA that help businesses identify and activate their enterprise data, while delivering extreme storage performance to power AI applications and autonomous AI agents.
AI is rapidly evolving from assistive tools to fully autonomous systems, but its effectiveness is limited by the data it can access, trust, and act on. Most companies hit a roadblock because much of their data remains trapped in silos, without proper structure, business context, or governance. The result: AI initiatives stall, investments underperform, and competitive advantages are lost.
Break down data silos
Dell and NVIDIA are removing one of the biggest obstacles to enterprise AI: data that is too slow, siloed, or disorganized to use. As a core component of the Dell AI Factory with NVIDIA, the Dell AI Data Platform with NVIDIA makes enterprise data AI-ready while maintaining security, governance, and world-class performance at scale. Customers see up to 12x faster vector indexing, 3x faster data processing2, and 19x faster time to first token3 compared to traditional computing approaches.
Dell data engines, accelerated by NVIDIA AI infrastructure, automate the entire data lifecycle for AI and dramatically reduce data preparation time while maintaining enterprise governance.
The Dell Data Orchestration Engine, powered by technology from Dell’s recent acquisition of Dataloop, redefines how enterprises operationalize data for AI. This low-code/no-code engine orchestrates the AI data lifecycle: automatically discovering, labeling, enriching, and transforming structured, unstructured, and multimodal data into governed, AI-ready datasets at scale. By combining automated workflows with active learning and human-in-the-loop review, organizations can continually improve dataset quality and model accuracy while maintaining governance and control. The Data Orchestration Engine Marketplace lets organizations deploy production-ready data workflows without building them from scratch, drawing on a curated library of NVIDIA NIM microservices, NVIDIA AI Blueprints, and more than 200 additional models, applications, and templates.
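The lifecycle the engine automates — discover, label, enrich, transform — can be pictured as a chained pipeline. The sketch below is purely illustrative (the stage names, the `Record` type, and the labeling rule are assumptions for this example, not Dataloop's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    """One raw asset flowing through the pipeline (illustrative only)."""
    uri: str
    labels: list = field(default_factory=list)
    metadata: dict = field(default_factory=dict)

def discover(source):
    # Stage 1: enumerate raw assets from a source system.
    return [Record(uri=u) for u in source]

def label(records):
    # Stage 2: attach labels; in practice a model or human-in-the-loop does this.
    for r in records:
        r.labels.append("image" if r.uri.endswith(".png") else "text")
    return records

def enrich(records):
    # Stage 3: add governance and business context as metadata.
    for r in records:
        r.metadata["governed"] = True
    return records

def transform(records):
    # Stage 4: emit an AI-ready dataset (here, plain dicts).
    return [{"uri": r.uri, "labels": r.labels, **r.metadata} for r in records]

dataset = transform(enrich(label(discover(["a.png", "b.txt"]))))
```

The value of a managed pipeline over ad-hoc scripts like this is that each stage is observable, repeatable, and governed, which is what makes continual quality improvement practical.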
NVIDIA Accelerated Data Engines
Dell Technologies supports the latest NVIDIA AI-Q blueprint in production environments, enabling businesses to create customizable AI agents that deliver actionable insights to improve decision-making. NVIDIA-accelerated data engine integrations within the Dell AI Data Platform enable high-performance data preparation, retrieval, and reasoning pipelines for both structured and unstructured data. Customers also get access to a growing library of preconfigured NVIDIA blueprints and NIM microservices, along with the NVIDIA Nemotron 3 Super model via the Dell Enterprise Hub on Hugging Face.
Dell Technologies will also support NVIDIA STX, a new modular reference design powered by next-generation NVIDIA Vera Rubin NVL72, NVIDIA BlueField-4 DPUs and NVIDIA Spectrum-X Ethernet networking, which accelerates the way organizations manage, process and retrieve data for AI.
The new AI Assistant within the Dell Data Analytics Engine introduces a natural-language conversational interface directly into SQL analytics. Business users can intuitively query, visualize, and collaborate on governed data products with a shared semantic understanding of key metrics, without requiring specialized SQL knowledge. This democratizes access to data, streamlines decision-making, and surfaces deeper insights faster, which is especially critical for organizations deploying AI agents that need access to structured data.
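Conceptually, such an assistant resolves a natural-language request against a governed semantic layer before any SQL runs. Here is a minimal sketch; the metric names, schema, and `SEMANTIC_LAYER` lookup are invented for illustration, and a real assistant would use a language model rather than exact-string matching:

```python
import sqlite3

# A toy "semantic layer": governed metric names mapped to vetted SQL.
SEMANTIC_LAYER = {
    "total revenue by region":
        "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region",
    "order count":
        "SELECT COUNT(*) FROM sales",
}

def ask(question, conn):
    # Resolve the question to governed SQL; users never write SQL themselves.
    sql = SEMANTIC_LAYER.get(question.lower().strip())
    if sql is None:
        raise ValueError(f"No governed metric matches: {question!r}")
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 100.0), ("EMEA", 50.0), ("APAC", 70.0)])

print(ask("Total revenue by region", conn))  # [('APAC', 70.0), ('EMEA', 150.0)]
```

The key design point is that only pre-approved, governed SQL ever executes, so every user (and every AI agent) computes "revenue" the same way.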
Within the Dell AI Data Platform with NVIDIA, the introduction of NVIDIA RTX PRO™ Blackwell Server Edition GPUs brings acceleration directly to the data platform layer. NVIDIA CUDA-X accelerated libraries, including NVIDIA cuDF for structured data processing and NVIDIA cuVS for vector indexing and search over unstructured data, work together with Dell data engines and optimized infrastructure to deliver up to 3x faster SQL queries4 and up to 12x faster vector indexing5. These improvements translate directly into more responsive AI applications and lower infrastructure costs for preparing data at scale.
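The operation cuVS accelerates is the one sketched below: comparing a query embedding against a corpus of vectors and returning the nearest neighbors. This NumPy version is a CPU brute-force stand-in to show what is being computed, not cuVS code; cuVS replaces the exhaustive scan with GPU-built approximate indexes over billions of vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
corpus = rng.standard_normal((1000, 64)).astype(np.float32)  # 1000 embeddings, dim 64

# Normalize rows so a plain dot product equals cosine similarity.
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

def search(query, k=5):
    """Brute-force top-k cosine search over the corpus, best match first."""
    q = query / np.linalg.norm(query)
    scores = corpus @ q                          # similarity to every vector
    top = np.argpartition(scores, -k)[-k:]       # unordered top-k
    return top[np.argsort(scores[top])[::-1]]    # indices, sorted best first

hits = search(corpus[42])
```

Because the brute-force scan grows linearly with corpus size, index structures (and GPU acceleration of both index build and search) are what make vector retrieval viable at enterprise scale.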
Large-scale storage software innovations
As companies move from AI experimentation to production deployment, storage becomes the primary constraint. Traditional storage architectures slow down as they scale, creating bottlenecks that leave GPUs idle and waste infrastructure investments. Dell’s AI-optimized storage engines solve this problem with architectures specifically designed to maintain performance at scale.
Dell Lightning File System, the world’s fastest parallel file system6, delivers extreme performance density for AI training and inference environments, with up to 150 GB per second per rack7, up to 20x the performance of competing all-flash offerings8 and up to twice the throughput per rack unit of competitive systems9. Its purpose-built network architecture with direct access to storage prevents slowdowns and keeps GPUs fully utilized even at large scale. Lightning FS integrates seamlessly into NVIDIA-based AI infrastructures, keeping training and inference workloads running at full speed.
Dell Exascale Storage, the only 3-in-1 storage designed for extreme-scale AI and HPC10, gives IT teams the flexibility to deploy Dell storage software (file system, object, and parallel file system) on the latest Dell PowerEdge servers. Customers can allocate Dell PowerScale, Dell ObjectScale, and Dell Lightning File System storage resources on a common hardware platform to support the most demanding AI and HPC environments, such as high-frequency trading or neoclouds. With support for NVIDIA CX-8 and CX-9 SuperNICs and planned network connectivity of up to 800GbE, Exascale delivers read performance of up to 6TB per second per rack11, providing the high throughput required for multimodal AI workloads.
Support for NVIDIA Context Memory Storage (CMX) and inference acceleration with KV cache on storage shared across Dell PowerScale, Dell ObjectScale, and Dell Lightning File System lets organizations offload KV cache from GPU memory to Dell CMX Storage and high-speed shared network storage based on performance needs. This dramatically improves GPU utilization for long-context and agentic AI workloads, allowing AI systems to maintain context across long interactions without exhausting GPU memory. The capability is essential for companies deploying AI agents that must query large volumes of historical data or sustain long conversation threads.
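The offload pattern described above can be pictured as a two-tier cache: a small hot tier standing in for GPU memory that evicts least-recently-used entries to a larger cold tier standing in for shared network storage, transparently pulling them back on access. Everything below (the class name, tier sizes, string payloads) is a toy illustration of the idea, not Dell's or NVIDIA's implementation:

```python
from collections import OrderedDict

class TieredKVCache:
    """LRU hot tier ("GPU memory") backed by an unbounded cold tier ("shared storage")."""
    def __init__(self, hot_capacity):
        self.hot = OrderedDict()   # fast, bounded tier
        self.cold = {}             # slow tier; stands in for network storage
        self.hot_capacity = hot_capacity

    def put(self, token_id, kv):
        self.hot[token_id] = kv
        self.hot.move_to_end(token_id)
        while len(self.hot) > self.hot_capacity:
            evicted_id, evicted_kv = self.hot.popitem(last=False)  # LRU out
            self.cold[evicted_id] = evicted_kv                     # offload to storage

    def get(self, token_id):
        if token_id in self.hot:
            self.hot.move_to_end(token_id)
            return self.hot[token_id]
        kv = self.cold.pop(token_id)   # fetch from the storage tier...
        self.put(token_id, kv)         # ...and promote back to the hot tier
        return kv

cache = TieredKVCache(hot_capacity=2)
for t in range(4):                 # entries for tokens 0 and 1 spill to storage
    cache.put(t, f"kv{t}")
assert 0 in cache.cold and 3 in cache.hot
assert cache.get(0) == "kv0"       # transparently promoted back
```

The payoff mirrors the text above: context that would otherwise be recomputed or would exhaust GPU memory survives in a cheaper tier and is restored only when needed.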
Performance testing on PowerScale: new testing shows that PowerScale’s software-based Parallel Network File System (pNFS) architecture delivers up to 6x the performance on large files in enterprise AI environments compared to NFSv312. This keeps GPU-intensive AI workloads continuously fed with data, reducing bottlenecks throughout the pipeline and ensuring that expensive GPU resources do not sit idle waiting for data.
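The gain comes from striping: with pNFS, a client receives a layout that maps byte ranges of a file to multiple data servers and reads them in parallel, instead of funneling every byte through a single server as NFSv3 does. A simplified round-robin layout calculation (the stripe size and server count are made-up numbers, and real pNFS layouts carry more state than this):

```python
def stripe_layout(offset, length, stripe_size, num_servers):
    """Map a byte range to (server, file offset, chunk length) tuples,
    round-robin across data servers -- the core idea behind a pNFS file layout."""
    chunks = []
    pos = offset
    end = offset + length
    while pos < end:
        stripe_index = pos // stripe_size
        server = stripe_index % num_servers          # which data server owns this stripe
        chunk_end = min((stripe_index + 1) * stripe_size, end)
        chunks.append((server, pos, chunk_end - pos))
        pos = chunk_end
    return chunks

# A 4 MiB read with 1 MiB stripes over 4 servers touches all four servers,
# so the client can issue the four reads concurrently.
layout = stripe_layout(offset=0, length=4 * 2**20, stripe_size=2**20, num_servers=4)
```

Because each chunk targets a different server, aggregate read bandwidth scales with the number of data servers rather than being capped by one.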
Towards ROI in enterprise AI
Dell Technologies marks the second anniversary of the Dell AI Factory with NVIDIA with advancements across its end-to-end AI infrastructure and its portfolio of solutions and services, designed to help companies take AI from pilot projects to full-scale production. More than 4,500 customers have already deployed Dell AI Factory, with early adopters reporting up to 2.6x ROI in the first year13. Dell is demonstrating that a comprehensive approach delivers measurable business results.
