
IREN Ltd, a provider of cloud infrastructure services, today revealed its intent to acquire Mirantis, a provider of open source OpenStack and Kubernetes software that is deployed in both cloud computing and on-premises IT environments.
Under terms of the agreement valued at $625 million, Mirantis will operate as an independent subsidiary of IREN, a former provider of bitcoin mining services that now specializes in hosting artificial intelligence (AI) workloads.
Dominic Wilde, senior vice president of marketing for Mirantis, said that while Mirantis will continue to engage with the more than 1,500 IT organizations it supports directly, the combined entity will reduce friction as IT teams move to deploy AI workloads that require deeper levels of infrastructure integration. That level of alignment will benefit IT organizations that are customers of both companies, he added.
Privately held, Mirantis has long provided a distribution of OpenStack and more recently has been curating a distribution of Kubernetes for cloud-native application environments. Most recently, Mirantis launched its k0rdent AI platform, which provides a control plane for integrating the management of infrastructure across bare metal, virtual machines and Kubernetes environments. Earlier this year, Mirantis also made available a reference architecture, based on k0rdent AI, for building and deploying AI workloads on Kubernetes clusters.
The overall goal is to make it simpler to build multi-tenant environments spanning multiple classes of processors using a set of reusable templates for compute, storage, networking and graphics processing units (GPUs) from NVIDIA, AMD and Intel.
IREN, meanwhile, will be able to leverage those frameworks to deploy workloads faster in a way that also serves to optimize workload performance and reduce total costs.
It’s unclear to what degree the management of infrastructure will further unify in the age of AI, but there is a growing amount of cost sensitivity. The GPUs needed to run AI workloads remain scarce. Paradoxically, overall GPU utilization rates remain relatively low, creating an opportunity to use platforms such as those from Mirantis to distribute AI workloads across servers and clusters in a way that should reduce total IT costs.
There is, of course, no shortage of providers of Kubernetes and OpenStack distributions. The challenge has always been determining which of those providers offer the tools and frameworks that DevOps teams need to deploy and manage these open source platforms at scale.
Unfortunately, many IT teams lack the infrastructure management expertise required to deploy and manage AI workloads. In fact, many organizations underestimate the total cost of AI by not factoring in the amount of IT infrastructure that will be required to run these types of workloads at scale. As the pressure to operationalize AI continues to increase, the number of infrastructure challenges that IT teams encounter will only continue to expand, especially as responsibility for managing the IT infrastructure needed to run these applications shifts from data science to IT operations teams. The issue then becomes not just finding the best way to streamline the management of that IT infrastructure, but also justifying the level of investment that is ultimately going to be required.

