UPDATED 09:00 EST / MARCH 15 2022


Run:ai raises $75M to grow its AI workload virtualization platform

Run:ai Labs Ltd., a startup that’s built a unique virtualization infrastructure to speed up artificial intelligence model training, said today it has raised $75 million in a new round of funding after growing its annual recurring revenue almost 10 times in the last year.

Tiger Global Management and Insight Partners led today’s Series C round, with participation from existing investors TLV Partners and S Capital VC, bringing Run:ai’s total amount raised to date to $118 million.

Run:ai, which last secured funding back in January 2021, has created what it calls a special virtualization layer for deep learning that can train AI models running on graphics processing units much faster than is normally possible, while using fewer resources. Deep learning is a subset of AI that mimics the way the human brain works and enables technologies such as image recognition, autonomous driving, smart assistants and more.

Run:ai reckons it is the first and only company in the world to marry the concept of operating system-level virtualization with AI workloads that run on GPUs. It said it was inspired by traditional virtualization, which brought better management to central processing units and revolutionized computing back in the 1990s. Its AI virtualization platform works similarly: it pools the resources of large GPU clusters and shares them among different AI workloads, automatically assigning each job the compute power it needs.
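Run:ai doesn’t publish its scheduler internals, but the pooling idea described above can be illustrated with a minimal Python sketch. All names here are hypothetical; the point is that jobs request shares (even fractional ones) from a cluster-wide pool rather than pinning whole GPUs:

```python
# Hypothetical sketch of pooled GPU scheduling: jobs draw fractional
# shares of compute from a shared pool instead of claiming whole GPUs.
from dataclasses import dataclass, field

@dataclass
class GpuPool:
    total_gpus: float                       # pooled capacity across the cluster
    allocations: dict = field(default_factory=dict)

    def available(self) -> float:
        return self.total_gpus - sum(self.allocations.values())

    def submit(self, job_id: str, gpus_needed: float) -> bool:
        """Grant the job its requested share if the pool can cover it."""
        if gpus_needed <= self.available():
            self.allocations[job_id] = gpus_needed
            return True
        return False                        # job waits until capacity frees up

    def release(self, job_id: str) -> None:
        self.allocations.pop(job_id, None)

pool = GpuPool(total_gpus=8)
pool.submit("train-resnet", 2.5)            # fractional shares are allowed
pool.submit("finetune-bert", 4)
print(pool.available())                     # prints 1.5
```

A real scheduler layers queueing, priorities and preemption on top of this, but the core accounting is the same: one shared pool, per-job allocations, and capacity returned to the pool when jobs finish.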

“We do for AI hardware what VMware and virtualization did for traditional computing,” said Run:ai co-founder and Chief Technology Officer Ronen Dar.

Perhaps the most impressive thing about Run:ai’s software is its compute abstraction layer, which uses graph-based parallel computing algorithms to analyze deep learning models and break them down into smaller parts. Then, it can train different parts of each model in parallel to accelerate the time it takes to complete the training process. This abstraction layer also helps to overcome the limitations of the GPUs’ underlying random-access memory, so bigger and more accurate models can be trained.
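The article doesn’t detail how Run:ai’s graph algorithms work, but the basic move of breaking a model into parts that train on different devices can be sketched simply. This is an illustrative partitioning helper, not Run:ai’s method; the model is just a list of layer names:

```python
# Hypothetical sketch: split an ordered list of layers into contiguous
# chunks, one per device, so each chunk can run on a different GPU.
def partition_layers(layers, num_devices):
    """Divide layers into num_devices contiguous stages of roughly equal size."""
    chunk = -(-len(layers) // num_devices)   # ceiling division
    return [layers[i:i + chunk] for i in range(0, len(layers), chunk)]

model = ["embed", "block1", "block2", "block3", "block4", "head"]
stages = partition_layers(model, 3)
print(stages)  # prints [['embed', 'block1'], ['block2', 'block3'], ['block4', 'head']]
```

Because each device now holds only its own stage’s parameters and activations, no single GPU needs enough memory for the whole model, which is the memory benefit the paragraph above describes.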

Run:ai describes its platform as a “foundation for AI clouds” and says it allows organizations to combine all of their AI computing resources, whether they’re located in the cloud, on-premises or at the edge, onto a single platform.

Andy Thurai of Constellation Research Inc. said Run:ai’s software looks promising because data scientists are not really engineers: while they can work magic with AI models and data wrangling, they struggle with properly provisioning resources. “Particularly with compute intensive AI model training, greedy data scientists can become GPU hoarders,” he said.

The ability of Run:ai’s GPU abstraction layer and orchestration platform to pool costly GPU resources and dynamically allocate them on demand will therefore be very helpful, Thurai said, reducing GPU idle time and discouraging anyone from stashing a secret stockpile of GPUs for when they need them.

“Run:ai’s platform allows for guaranteed quotas based on need, thus ensuring maximum utilization of GPUs while maximizing allocation across the board,” Thurai added. “By combining all available CPU, memory, and GPU resources, data scientists can be given access to unlimited compute similar to elastic cloud provisioning, without the need to manipulate compute infrastructure.”
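The guaranteed-quota model Thurai describes can be sketched as a scheduler where each team has a guaranteed share, idle capacity can be borrowed by whoever needs it, and (in a real system) borrowed GPUs would be preempted when their guaranteed owner returns. This is an assumption-laden toy, not Run:ai’s actual policy:

```python
# Hypothetical sketch of guaranteed quotas with borrowing of idle capacity.
class QuotaScheduler:
    def __init__(self, quotas):
        self.quotas = dict(quotas)         # guaranteed GPUs per team
        self.usage = {team: 0 for team in quotas}
        self.total = sum(quotas.values())

    def free(self):
        return self.total - sum(self.usage.values())

    def request(self, team, gpus):
        if gpus <= self.free():
            self.usage[team] += gpus
            return True
        # Out of free capacity: a real scheduler would now preempt jobs
        # running beyond their quota so this team can reclaim its guarantee.
        return False

sched = QuotaScheduler({"vision": 4, "nlp": 4})
sched.request("vision", 4)    # uses its full guaranteed quota
sched.request("vision", 3)    # borrows 3 idle GPUs beyond its quota
print(sched.free())           # prints 1
```

The design choice is the one Thurai highlights: quotas guarantee a floor for each team, while borrowing keeps otherwise-idle GPUs busy, so utilization stays high without anyone needing to hoard hardware.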

Thurai noted that Run:ai isn’t alone in trying to virtualize GPU resources. He said Amazon Web Services Inc. has also been trying to get into the space, optimizing its instances for AI workloads. “For example, accelerated computing instances such as P4 are built for AI/ML workloads, though the utilization maximization and proper allocation needs to be done separately by pooling across resource demands and distributing workloads as needed,” he said.

Run:ai doesn’t reveal who its customers are, but says they include Fortune 500 firms as well as cutting-edge startups in the automotive, finance, healthcare and gaming markets, plus academic AI research centers.

“Our growth has been phenomenal, and this investment is a vote of confidence in our path,” said Run:ai co-founder and Chief Executive Omri Geller. “Run:ai is enabling organizations to orchestrate all stages of their AI work at scale, so companies can begin their AI journey and innovate faster.”

Run:ai said it will use today’s funding to grow its staff further, having already tripled its headcount in the last year. It will also be considering strategic acquisitions to develop and enhance its software platform.

Photo: Run.ai
