UPDATED 12:52 EDT / SEPTEMBER 29 2021

AI

Opsani champions AI-driven, solution-focused approach to cloud app management

Organizations that run most (or even any) of their applications or operations in the cloud strive for scalability, cost efficiency, reliability, ease of use and value delivery. Those goals aren’t always fully met, however, for a variety of reasons both inside and outside the organization’s cloud environment.

Hence, companies look to solutions providers to help optimize, or fine-tune, their cloud operations. Datagrid Systems Inc. (dba Opsani) is one of those providers, and the company has doubled down on artificial intelligence as the key tool for solving customers’ performance and reliability issues and for balancing the compromises that arise among those competing factors.

“Sometimes, scalability is more important than cost, but what we’re going to do because of our machine learning capability, we’re going to always make sure that you’re never spending more than you should spend,” said Pat Conte (pictured), chief commercial officer at Opsani. “So we’re always going to make sure that you have the best cost for whatever the performance and reliability factors that you want to have.” 

Conte spoke with John Furrier, host of theCUBE, SiliconANGLE Media’s livestreaming studio, during the AWS Startup Showcase: New Breakthroughs in DevOps, Analytics, and Cloud Management Tools event. They discussed the current state of cloud-based application management and how AI and ML technologies can drive the paradigm forward. (* Disclosure below.)

Juggling a delicate balancing act

Ideally, organizations in the cloud try to solve for, or balance, three different factors: expenditure, performance and customer experience. That’s why many consult with solutions companies; they simply don’t have the necessary expertise in-house. Many corporate IT departments today say that grappling with the tradeoffs among these variables is a large part of the job.

Without AI/ML in the mix, you’re only going to be able to optimize one or two of those variables, according to Conte. These technologies allow for real-time control over customer experience, performance and scalability — all while keeping costs at manageable levels.

Opsani’s philosophy, for its part, is focused on delivering value to its customers’ customers, and to do that, it has built a platform that, with the power of AI and ML at the helm, can optimize all of a cloud application’s key parameters, according to Conte.

“Those [parameters] are things like the CPU usage, the memory usage, the number of replicas in a Kubernetes or container environment, those kinds of things,” he explained. “It seems like it would be simple just to grab some values and plug them in, but it’s not. In actuality, the combination of them has to be right. Otherwise, you get delays, faults or other problems with the application.”
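To make that point concrete, here is a minimal sketch, not Opsani’s actual engine, of the kind of joint evaluation such an optimizer performs. The settings, measurements, cost rates and scoring rule below are hypothetical; the takeaway is that CPU, memory and replica count are judged as one combination rather than tuned in isolation.

```python
from dataclasses import dataclass

@dataclass
class Settings:
    cpu_millicores: int   # CPU request per replica
    memory_mib: int       # memory request per replica
    replicas: int         # number of replicas

@dataclass
class Measurement:
    p95_latency_ms: float  # observed tail latency under load
    error_rate: float      # fraction of failed requests

def hourly_cost(s: Settings, cpu_rate: float = 0.04, mem_rate: float = 0.005) -> float:
    """Rough cost model using hypothetical per core-hour and per GiB-hour rates."""
    return s.replicas * (s.cpu_millicores / 1000 * cpu_rate + s.memory_mib / 1024 * mem_rate)

def score(s: Settings, m: Measurement, latency_slo_ms: float = 250.0) -> float:
    """Score a candidate combination: cheaper is better, but only if the service
    keeps meeting its latency and error-rate targets."""
    if m.error_rate > 0.01 or m.p95_latency_ms > latency_slo_ms:
        return float("-inf")   # delays or faults disqualify the combination outright
    return -hourly_cost(s)     # among healthy combinations, prefer the cheapest

# Two candidate combinations measured under the same load test.
candidates = [
    (Settings(2000, 4096, 6), Measurement(180.0, 0.001)),
    (Settings(500, 1024, 3), Measurement(410.0, 0.030)),  # too small: SLO violated
]
best = max(candidates, key=lambda c: score(*c))
print("best settings:", best[0])
```

An ML-driven optimizer of the kind Conte describes would search this space continuously against live measurements rather than comparing a fixed pair of candidates.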

Navigating a fledgling niche

Opsani operates in the area of AIOps, or artificial intelligence for IT operations, which is itself a relatively new subsector of the broader cloud computing space. It involves applying intelligence to production or development cloud operations. New as its playing field is, however, the company is already making strides in setting itself and its business model apart from the competition, according to Conte.

Organizations these days tend to handle DevOps in much the same way. When a new application or app feature is needed, developers mostly rely on existing code from sources like Git, tweaking it to fit the desired end goal. It’s usually the first working combination from that trial-and-error exercise that makes it into production, Conte explained.

When failures happen, as they occasionally do, administrators begin to overprovision resources in a bid to “soak up the problem,” further driving up cloud infrastructure costs. Opsani’s model, by contrast, plugs into the production phase first and tunes all of the aforementioned variables, such as CPU speed and usage, at machine speed. That way, the customer gets the reliability and performance gains at the best possible cost, and from the start, Conte added.

“We can plug in at the test phase as well. If you think about it, the DevOps guy can actually not have to overprovision before he throws it over to the SREs. He can actually optimize and find the right size of the application before he sends it through to the SREs, and what this does is collapse the timeframe because it means the SREs don’t have to hunt for a working set of parameters,” Conte stated.

Both of these processes form the core of autonomous optimization, which, by its very nature, negates the need for human effort in most of the optimization process and is crucial to AIOps, Conte concluded.
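As an illustration only, here is a minimal sketch of the measure-and-adjust cycle that autonomous optimization implies, assuming a simple reactive policy rather than Opsani’s actual machine learning. The functions for collecting metrics, proposing new values and applying them are hypothetical stand-ins for whatever a real platform would wire into monitoring and the orchestrator.

```python
import random
import time

def collect_metrics(settings):
    """Stand-in for pulling latency and error-rate metrics from monitoring.
    Here it just simulates a noisy response to the current settings."""
    load_factor = 1000.0 / (settings["cpu_millicores"] * settings["replicas"])
    return {
        "p95_latency_ms": 100 + 200 * load_factor + random.uniform(-10, 10),
        "error_rate": 0.0 if load_factor < 0.5 else 0.02,
    }

def propose_settings(current, metrics, latency_slo_ms=250.0):
    """Crude policy: scale out when the SLO is missed, scale in when there is
    comfortable headroom. A learning optimizer would refine this policy over time."""
    proposal = dict(current)
    if metrics["p95_latency_ms"] > latency_slo_ms or metrics["error_rate"] > 0.01:
        proposal["replicas"] += 1
    elif metrics["p95_latency_ms"] < 0.5 * latency_slo_ms and proposal["replicas"] > 1:
        proposal["replicas"] -= 1   # reclaim overprovisioned capacity
    return proposal

def apply_settings(settings):
    """Stand-in for pushing the new resource values to the orchestrator."""
    print("applying:", settings)

# The autonomous loop: measure, decide, apply, with no human in the cycle.
settings = {"cpu_millicores": 500, "memory_mib": 1024, "replicas": 2}
for _ in range(5):          # a real loop would run continuously
    metrics = collect_metrics(settings)
    new_settings = propose_settings(settings, metrics)
    if new_settings != settings:
        apply_settings(new_settings)
        settings = new_settings
    time.sleep(0.1)         # pacing; in practice this would match measurement windows
```

The same loop could just as well be pointed at a pre-production load test as at a live service, which is the test-phase use Conte describes for right-sizing an application before it reaches the SREs.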

Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the AWS Startup Showcase: New Breakthroughs in DevOps, Analytics, and Cloud Management Tools event. (* Disclosure: Datagrid Systems Inc. sponsored this segment of theCUBE. Neither Datagrid nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
