UPDATED 20:41 EDT / JUNE 28 2018

CLOUD

Virtual Instruments simulates cloud migrations to identify gotchas before they become problems

Infrastructure performance management firm Virtual Instruments Inc. is applying its technology to cloud migration with a new service that helps organizations simulate the experience of moving their applications to cloud infrastructure platforms, so they can identify performance and dependency issues before they become problems.

Cloud Migration Readiness is intended to help customers avoid unpleasant surprises when shifting workloads to the cloud by identifying common glitches in a simulated environment.

It combines workload discovery, dependency mapping and workload profiling to simulate cloud workloads and identify such problems as calls to forgotten databases or storage sources. It also enables customers that are on a migration path to select the optimal CPU, memory, network and storage configurations for each migrated workload.
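Dependency mapping of this kind can be pictured as grouping observed network connections by source host, so that each workload's actual downstream dependencies become visible before the move. The following is a minimal sketch of the idea; the host names, ports, and flow records are hypothetical and purely illustrative, not Virtual Instruments' implementation.

```python
from collections import defaultdict

# Hypothetical flow records observed on the network:
# (source_host, dest_host, dest_port). All names are illustrative.
FLOWS = [
    ("web-01", "app-01", 8080),
    ("app-01", "db-legacy", 1521),   # a call to a forgotten database
    ("app-01", "nfs-store", 2049),   # a storage source that is hard to move
    ("web-01", "app-01", 8080),
]

def map_dependencies(flows):
    """Group observed connections by source host, revealing what each
    workload actually talks to before it is migrated."""
    deps = defaultdict(set)
    for src, dst, port in flows:
        deps[src].add((dst, port))
    return deps

deps = map_dependencies(FLOWS)
for host, targets in sorted(deps.items()):
    print(host, "->", sorted(targets))
```

Running a mapping like this over real traffic is what surfaces the "forgotten database" problem: `app-01`'s dependency on `db-legacy` shows up even if no one documented it.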

In making its announcement, the company cited a 2017 Forrester Consulting survey, conducted under contract with CloudHealth Technologies Inc., which found that fewer than 40 percent of surveyed firms had met or exceeded their cloud migration cost goals, while 58 percent said cloud infrastructure costs were higher than estimated.


Virtual Instruments suggested that’s because companies don’t fully understand the characteristics of the applications they move. To address that problem, it’s applying its VirtualWisdom performance monitoring and analysis platform to create snapshots of existing applications and test them on various cloud infrastructure platforms for performance and hidden dependencies.

“This is a preplanning exercise and doesn’t require actually moving,” said Jagan Jagannathan, chief innovation officer at Virtual Instruments. “Our intent is to derisk migrations to the public cloud from data centers.”

Migrations may be fraught with unintended consequences. For example, applications may be tethered to on-premises databases that were long ago forgotten, or may access storage resources that aren’t easily moved to the cloud.

The company uses a proprietary process it calls NetFlow Monitoring to look at traffic traversing a network and determine the nature of interactions. “We understand CPU usage, memory, storage and bandwidth for each virtual and physical machine,” Jagannathan said. “We use that to determine the instance, shape, storage and network characteristics we need and which provider is the best fit for the workload.”

It does that by creating a load simulator — software that mimics, on a time-compressed basis, the application to be moved — and testing it on various cloud platforms. “The simulator essentially takes the condensed profile and recreates that workload in the cloud,” Jagannathan said. “If it’s able to do so, it’s a good match. If it struggles, it’s underprovisioned.”
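A time-compressed replay can be sketched as follows: take per-interval utilization samples from the profiled workload, shrink each interval by a compression factor, and alternate between synthetic CPU work and idling to match the sampled busy fraction. This is a toy illustration of the general technique, not Virtual Instruments' simulator; the profile values and compression ratio are assumptions.

```python
import time

# Hypothetical condensed profile: (offset_minutes, cpu_busy_fraction)
# samples from the source workload. Values are illustrative.
PROFILE = [(0, 0.2), (1, 0.9), (2, 0.5)]

COMPRESSION = 60  # replay each profiled minute in one real second

def replay(profile, compression=COMPRESSION):
    """Replay a condensed workload profile on the target instance:
    for each sample, burn CPU for the busy fraction of the compressed
    interval and sleep for the rest. If the instance keeps pace with
    the replay, it is a plausible fit for the workload."""
    interval = 60.0 / compression  # real seconds per profiled minute
    for _, busy in profile:
        deadline = time.monotonic() + interval * busy
        while time.monotonic() < deadline:
            sum(i * i for i in range(1000))  # synthetic CPU work
        time.sleep(interval * (1 - busy))

replay(PROFILE)
```

In a real assessment the replay would also reproduce memory, storage, and network pressure, and the "struggles" signal would come from comparing achieved throughput against the profile rather than from wall-clock time alone.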

The service works with all major infrastructure-as-a-service platforms. It’s also customized to each customer’s needs with pricing that starts at $30,000. Total costs are affected by the volume of work to be analyzed and moved.

