UPDATED 12:45 EST / NOVEMBER 25 2019

INFRA

There aren’t enough humans for cloud-native infra. Can DevOps deal?

Applications are the tip of the iceberg in cloud-native computing. When monoliths are broken into microservices and packaged as containers (lightweight, isolated environments for running the pieces of a distributed application), the underlying infrastructure feels it. So do administrators and DevOps teams. How can they possibly spread themselves thin enough to handle all these distributed components?

They can’t, and they shouldn’t try, according to Vijoy Pandey (pictured), vice president and chief technology officer of cloud computing at Cisco Systems Inc. Distributed apps require supporting systems, from networks to databases, that no longer obey the rules that worked for physical and virtual infrastructure.

“You can’t just have a database admin,” Pandey explained. “A database is now 500 components. So you need your [site reliability engineer] organizations and your DevOps organizations to be aligned to that.”

Pandey spoke with Stu Miniman (@stu), host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, and guest host Justin Warren (@jpwarren), chief analyst at PivotNine Pty Ltd., during the KubeCon + CloudNativeCon event in San Diego, California. They discussed how infrastructure teams can stay sane amid cloud-native chaos. (* Disclosure below.)

Three A’s of the networking future: automation, application, AI

When Pandey ran networking infrastructure at Google, his team had a saying: If your infrastructure depends on humans to scale out, then there aren’t enough humans to hire. To illustrate, take the telemetry data coming from large modern networks. “We were sending telemetry data through [Simple Network Management Protocol]. Now, we are sending telemetry data streaming,” he said.
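
To make that shift concrete, here is a minimal Python sketch contrasting the two collection models: a poll-based loop in the spirit of SNMP, where the collector asks every device for every counter on a schedule, versus a push-based stream, where devices publish samples as they are produced. The device names, counters and helper functions are illustrative placeholders, not any vendor’s actual API.

```python
import time
from typing import Callable, Dict, List

# Poll-based collection, in the spirit of SNMP: the collector asks each
# device for each counter on a fixed interval, so load grows with
# devices x counters x polling frequency.
def poll_counters(devices: List[str], counters: List[str],
                  fetch: Callable[[str, str], int],
                  interval_s: float = 30.0) -> None:
    while True:
        for device in devices:
            for counter in counters:
                value = fetch(device, counter)  # one request per device/counter
                print(f"{device} {counter}={value}")
        time.sleep(interval_s)

# Streaming telemetry: the device publishes samples as they are produced,
# and the collector simply registers a handler for whatever arrives.
class TelemetryStream:
    def __init__(self) -> None:
        self._handlers: List[Callable[[Dict], None]] = []

    def subscribe(self, handler: Callable[[Dict], None]) -> None:
        self._handlers.append(handler)

    def publish(self, sample: Dict) -> None:  # called from the device side
        for handler in self._handlers:
            handler(sample)

if __name__ == "__main__":
    stream = TelemetryStream()
    stream.subscribe(lambda sample: print(f"received {sample}"))
    stream.publish({"device": "leaf-1", "counter": "if_in_octets", "value": 123456})
```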

Managing that flood of telemetry data, along with the overall complexity of multicloud and cloud-native networks, should be offloaded to nonhuman intelligence, Pandey believes. “Putting formal verification, formal models, formal closed-loop automation systems with AI in place — I think that’s the only way to go forward, at least on large-scale networks,” he added.
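
As a rough sketch of what such a closed loop looks like in code, and assuming nothing about Cisco’s actual implementation, the example below feeds telemetry samples into a simple baseline check and triggers an automated remediation hook when a value drifts out of tolerance. In a real system the baseline would be a learned or formal model and the remediation a real configuration change; both are stand-ins here.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    device: str
    counter: str
    value: float

class ClosedLoop:
    """Telemetry in, automated action out, with no human in the path."""

    def __init__(self, expected: float, tolerance: float) -> None:
        self.expected = expected    # stands in for a learned baseline or formal model
        self.tolerance = tolerance

    def is_anomalous(self, sample: Sample) -> bool:
        return abs(sample.value - self.expected) > self.tolerance

    def remediate(self, sample: Sample) -> None:
        # Placeholder: a real loop might drain a link, push a config change,
        # or escalate to a human only when it cannot act safely on its own.
        print(f"remediating {sample.device}: {sample.counter}={sample.value}")

    def handle(self, sample: Sample) -> None:
        if self.is_anomalous(sample):
            self.remediate(sample)

if __name__ == "__main__":
    loop = ClosedLoop(expected=100.0, tolerance=20.0)
    loop.handle(Sample("leaf-1", "error_rate", 180.0))  # out of tolerance: acts
    loop.handle(Sample("leaf-2", "error_rate", 105.0))  # within tolerance: ignored
```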

Viewing complex networks through the lens of the application can also ease the burden on admins and DevOps teams. “Being deep and narrow and being very selfish about the application that you’re trying to connect to simplifies the problem. Because as an app developer, I’m only concerned about this particular app and not what it connects to,” Pandey stated.

This is the essential concept behind Cisco’s Network Service Mesh. “I just want to connect A to B within my application. … And these are the attributes that I want from that connection. … That’s what we’re trying to handle from the cloud-native perspective,” he concluded. 
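
As a rough illustration of that request model, and not the real Network Service Mesh API or resource schema, the sketch below shows an application asking for a connection from A to B with a set of attributes, leaving the mesh layer to work out the plumbing. All names and fields are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ConnectionRequest:
    """What the app developer asks for: A, B and the attributes of the link."""
    source: str                                  # workload A
    destination: str                             # workload B
    attributes: Dict[str, object] = field(default_factory=dict)

class MeshLayer:
    """Stands in for the layer that turns intent into actual network plumbing."""

    def connect(self, request: ConnectionRequest) -> None:
        # Everything below this line -- overlays, routing, policy, telemetry --
        # is the mesh's problem, not the app developer's.
        print(f"wiring {request.source} -> {request.destination} "
              f"with {request.attributes}")

if __name__ == "__main__":
    mesh = MeshLayer()
    mesh.connect(ConnectionRequest(
        source="orders-frontend",
        destination="orders-db",
        attributes={"encrypted": True, "max_latency_ms": 5},
    ))
```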

Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the KubeCon + CloudNativeCon event. (* Disclosure: Cisco Systems Inc. sponsored this segment of theCUBE. Neither Cisco nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
