Domino Data Lab announces hybrid MLOps architecture for future-proof, model-driven business at scale

Domino Data Lab, a leading enterprise MLOps platform trusted by more than 20 percent of the Fortune 100, announced its new Nexus Hybrid Enterprise MLOps architecture, which will enable enterprises to rapidly scale, govern, and orchestrate data science work across multiple compute clusters, in multiple geographies, on premises and even across multiple clouds.

Despite the attention paid to cloud migration, concerns about cost, security and regulation are forcing a growing majority of organizations to adopt AI infrastructure strategies that span on-premises data centers and the cloud. According to Forrester Consulting, 66 percent of IT decision makers have already invested in hybrid support for developing AI workloads, and 91 percent plan to do so within two years. The new Nexus architecture enables enterprise MLOps for this new reality. It provides the portability and cost management for AI development and deployment that enterprises need and the flexibility that data science teams need to accelerate breakthrough innovation at scale.

“Although the shift to cloud is underway, more and more organizations are currently running some combination of on-premises and cloud-based architecture,” said Melanie Posey, research director for cloud & managed services transformation at 451 Research, part of S&P Global Market Intelligence. “The reality is that cost optimization remains an ongoing issue for both cloud veterans and cloud newcomers.”

Nexus is a highly scalable hybrid enterprise MLOps platform architecture that gives organizations the best of both worlds: the cost advantages of on-premises infrastructure and the flexibility to quickly scale to the cloud from a single point of control. Customers maximize cost optimization by using their own on-premises NVIDIA GPUs while retaining the ability to offload workloads to cloud-based GPUs when additional capacity is needed, all without sacrificing reliability, security, or usability.

“Enterprise data science and IT organizations are continually demanding more infrastructure flexibility to optimize compute spend, strengthen data security, and avoid vendor lock-in,” said Nick Elprin, CEO and co-founder of Domino Data Lab. “Our Nexus architecture will help our customers unleash data science while future-proofing their infrastructure investments.”

Domino Expands Collaboration with NVIDIA as First Nexus Launch Partner

Domino has already begun development of Nexus with NVIDIA as a launch partner, a collaboration that will include specific solution architectures validated against NVIDIA technologies, with a target release later this year. Today, enterprise IT teams can learn how to scale data science workloads by completing a free, immediately available hands-on lab featuring the Domino Enterprise MLOps platform and the NVIDIA AI Enterprise software suite, which can be accessed via NVIDIA LaunchPad.

To help customers build further competitive advantage through innovative AI-enabled use cases, Domino has also joined the NVIDIA AI Accelerated program, which enables software and solution partners to leverage the NVIDIA AI platform and its rich libraries and SDKs to deliver accelerated AI applications for customers. Domino continues to work with NVIDIA to streamline the development, deployment, and management of GPU-trained models on a variety of computing platforms, from on-premises infrastructure to edge devices, leveraging Domino and the NVIDIA AI platform, which includes NVIDIA AI Enterprise and NVIDIA Fleet Command. Hybrid MLOps is a continuation of Domino’s vision of creating the most innovative and flexible solutions to meet customer demands for the most effective data science work.

“Enterprises are looking for AI solutions that balance performance and costs with a strategy that aligns with their IT policies and practices,” said Manuvir Das, vice president of enterprise computing at NVIDIA. “NVIDIA’s collaboration with Domino Data Lab provides customers with a powerful hybrid MLOps solution with the flexibility needed to maximize productivity throughout the AI development and deployment lifecycle.”
