Adapting to the public cloud: when DevOps forces combine | #VTUG


For many companies, the move to the public cloud is a difficult decision with many factors to consider, the foremost being security. Many opt for a private cloud or to remain on premises, but those choices may not be cost efficient.

Today, many companies are using hybrid solutions that combine on-premises infrastructure with the public cloud. For many industries, this provides data security where it is most needed, along with scale and efficiency for applications and developers.

Dewey Sasser, president of Aligned Software Solutions, consults with enterprises about the social aspects of public cloud and how to adapt to the environment. He joined Stu Miniman (@stu), senior analyst at Wikibon and host of theCUBE, SiliconANGLE Media’s mobile live streaming studio, at Virtualization Technology Users Group Winter Warmer 2017 at Gillette Stadium in Foxborough, MA, to offer his perspective on the challenges of the public cloud.

Aligning the vision

Two forces need to team up to successfully make the move to the public cloud: DevOps and operations, Sasser explained. For developers, the cloud is preferable because it offers greater accessibility, scalability, virtualized resources, agility and secure provisioning. Sasser told Miniman that it is necessary for responsibility and authority to align by empowering people to do the jobs for which they are qualified.

“The key here as we move into public cloud, the basic momentum of DevOps and agile processes and public cloud are really coming together to meet in one spot … we need to change so people can be responsible for what’s going on — on a minute-to-minute basis, whereas we used to do it on a week-to-week basis,” Sasser said, explaining that the fast pace of the cloud means granting the right permissions to the right people.

From an application standpoint, Sasser described two different techniques for moving into a new environment. The first is the Phoenix server approach, which essentially means automating everything and replacing servers rather than maintaining them; Sasser calls it "burning down the server." The second is caring for each individual server by debugging problems and fixing them in place.
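The contrast between the two techniques can be sketched in a few lines of code. This is a hypothetical illustration, not anything Sasser presented: `Server`, `phoenix_deploy` and `in_place_fix` are made-up names standing in for real provisioning tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    image_version: str
    patches: list = field(default_factory=list)  # ad-hoc fixes applied over time

def phoenix_deploy(server: Server, new_version: str) -> Server:
    # Phoenix approach: "burn down the server." Never patch in place;
    # every change ships as a freshly built image, so the replacement
    # carries no accumulated manual state.
    return Server(image_version=new_version)

def in_place_fix(server: Server, patch: str) -> Server:
    # Traditional approach: debug and patch the existing server,
    # which accumulates hand-applied state over its lifetime.
    server.patches.append(patch)
    return server
```

A phoenix-rebuilt server always starts from a clean, known image, which is what makes the approach a natural fit for the elasticity of the public cloud; an in-place-fixed server gradually drifts away from anything reproducible.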

His assessment is that both approaches work in the public cloud, though the Phoenix server is the more agile option because it exploits the cloud's elasticity. He has also handled legacy product migrations. "It's just different tools and techniques that you can take advantage of some of the elasticity and some of the dynamic provisioning you get from public cloud," he assessed.

Focus on core competencies

Sasser spoke about the pitfalls of the public cloud, but overall, he believes it can handle any workload and will garner cost savings. “A lot of the fundamentals remain the same: Networking is networking, whether it is virtual or physical. Security is also security,” Sasser remarked.

He acknowledged that understanding the application is most important, and that assessing an application's security and the customers' needs are not tasks to outsource.

“The best way I’ve heard it put is: Don’t outsource your core competency. I think what Amazon is doing [by] going up the stack … [allows you] to really focus in and tighten your efforts to just your core competencies to what really delivers value to your customer. Let Amazon worry about the rest,” he advised.

Watch the complete video interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of VTUG Winter Warmer 2017.

Photo by SiliconANGLE