While the term DevOps is relatively new, not everyone is a DevOps white belt. The reality is that many companies have been practicing DevOps for years. In fact, there are four established techniques, supported by tools, that can help guide anyone on the path to becoming a DevOps ninja.
Two key areas will help overcome deployment roadblocks:
- A streamlined, well understood process
- Automation of build, deployment and testing
While people and interactions are important, processes and tools still play a key part. Extensive automation only delivers good results when it is built on a good process. Automating a bad process just gives you a bad process that can be executed repeatably; automation does not turn a bad process into a good one.
DevOps isn’t about development taking over operations responsibilities; it is about enabling development and operations to work closely together to get software that delivers value to customers faster.
So how do you become a DevOps ninja? Let’s take a look at four techniques that everyone should master to quickly roll out changes to customers without sacrificing accountability and traceability.
Technique 1: Understand the Process
A single path to production is key to a successful DevOps implementation. Having the ability to bypass process breaks traceability, so make sure that the process that is modeled covers multiple use cases including emergency fixes and rollback.
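One way to keep emergency fixes and rollbacks on the single path to production is to model them as routes through the same stage model rather than as bypasses. Below is a minimal, hypothetical sketch of that idea; the stage names and helper functions are illustrative assumptions, not a specific product's model.

```python
# Hypothetical sketch: model the path to production as explicit stages, and
# treat the emergency-fix path as an abbreviated but still-modeled route,
# so no deployment ever bypasses the process and breaks traceability.

STANDARD_PATH = ["dev", "qa", "staging", "prod"]
EMERGENCY_PATH = ["hotfix", "staging", "prod"]  # shorter, but still a modeled path

def next_stage(path, current):
    """Next stage on the modeled path, or None when already in production."""
    i = path.index(current)
    return path[i + 1] if i + 1 < len(path) else None

def rollback_target(deploy_history):
    """Roll back to the previously deployed version, if one exists."""
    return deploy_history[-2] if len(deploy_history) >= 2 else None
```

Because rollback is just another operation on recorded history, it stays auditable like any other deployment.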
Approvals between each stage of the process are also important, and these can be automated or manual. The ability to model both automated and manual approvals will have an impact on your choice of process automation technology. Even if the goal is fully automated continuous delivery with no human interaction, it’s preferable to have technology that can handle both manual and automated approvals should the need ever arise to insert manual approvals down the line (e.g., in response to new regulatory requirements).
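To make the point concrete, here is a minimal sketch of an approval gate abstraction that treats automated checks and manual sign-offs uniformly, so a manual gate can be inserted later without restructuring the pipeline. All names here are hypothetical illustrations, not any particular tool's API.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical sketch: each stage boundary has a gate. An automated gate is
# a predicate over pipeline state; a manual gate checks for a recorded human
# sign-off. Modeling both the same way makes it cheap to add a manual gate
# later, e.g. for a new regulatory requirement.

@dataclass
class Gate:
    name: str
    check: Callable[[Dict], bool]

def automated(predicate):
    """Automated approval: a predicate evaluated against pipeline state."""
    return predicate

def manual(approver_role):
    """Manual approval: passes once someone in the role has signed off."""
    return lambda state: approver_role in state.get("signoffs", [])

pipeline = [
    Gate("unit-tests-pass", automated(lambda s: s.get("tests_passed", False))),
    Gate("release-manager-signoff", manual("release_manager")),
]

def can_promote(state):
    """Promote to the next stage only if every gate passes."""
    return all(gate.check(state) for gate in pipeline)
```

Swapping a manual gate in or out is then a one-line change to the pipeline model rather than a rework of the automation.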
Technique 2: Automate the Process
No matter whether you are part of development, QA or operations you’ve likely heard the frustrated cry of “It works on my machine!” followed by an even more frustrated cry of “I don’t care if it works on your machine. It’s not done until it is deployed in all environments!” So what can be done to avoid this?
Having development and operations working closely throughout the application development process means that there should not be surprises about choice of hardware or architecture or any number of smaller issues that may drop between the cracks in traditional handoffs. This close collaboration should allow operations to provide environments that closely reflect production environments. Virtualization is key here: being able to spin up instances on demand avoids hardware-driven bottlenecks. By giving developers environments that resemble production, changes can be tested against properly configured environments very early on in the development process.
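One simple way to keep development environments production-like is to derive every environment from a single production-shaped template, varying only sizing. The sketch below assumes hypothetical component names purely for illustration.

```python
# Hypothetical sketch: every environment inherits the production topology
# from one base template; only sizing differs per environment, so what
# developers test against resembles what operations runs.

BASE = {
    "os": "ubuntu-22.04",
    "app_server": "tomcat-9",
    "database": "postgres-15",
    "load_balanced": True,
}

OVERRIDES = {
    "dev":  {"instances": 1, "load_balanced": False},  # smaller, but same stack
    "qa":   {"instances": 2},
    "prod": {"instances": 8},
}

def environment(name):
    """Environment spec: production-shaped base plus per-environment sizing."""
    return {**BASE, **OVERRIDES[name]}
```

Because the stack components come from one template, a drift between a developer's database version and production's cannot creep in through copy-pasted environment definitions.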
Technique 3: Reproducible and Robust Deployments
Deployment automation comes in many different forms. When was the last time you talked to someone who didn’t have at least a basic script performing some deployment automation? As environments and architectures have become more complex, so too have the scripts that need to be created and maintained. Commonly, the scripts work for some but not all environments, and frequently the authors of the scripts are too busy to maintain them once the scripts become complex. Writing scripts to deploy products and handle integrations to multiple systems usually isn’t a core competency in an organization.
Fortunately, tools like Puppet and Chef have made it easy to perform many low-level operations, but there is still a need for deployment automation that can integrate into all aspects of the software development lifecycle – from managing the initial product requirements to monitoring software once it is in production. Custom integrations can be complex, fragile and time consuming to maintain, especially given the rate of innovation in many of the systems that you might want to integrate with, yet they are critical in maintaining end-to-end traceability.
It is also important to resist the temptation to create scripts that are specific to an environment. Remember, an important goal is to have a robust, repeatable process. If there are different scripts for different environments, then by the time an application is deployed to production it is possible that it is the first time the scripts have been run. By utilizing model-driven deployment automation, it is possible to create deployment automation scripts that are environment aware – meaning the same scripts are executed with the appropriate data being fed to the scripts at deployment time. This means that by the time an application has been deployed to production the associated deployment script has been executed successfully many times along the path from development environments to production.
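An environment-aware deployment can be sketched as a single deployment routine whose environment-specific values are injected at deployment time. The hostnames and steps below are hypothetical placeholders, not a real topology.

```python
# Hypothetical sketch of model-driven, environment-aware deployment:
# one script, with environment data fed in at deploy time, so the exact
# script that reaches production has already run in dev and QA.

ENVIRONMENTS = {
    "dev":  {"host": "dev.example.com", "db_url": "postgres://dev-db/app"},
    "qa":   {"host": "qa.example.com",  "db_url": "postgres://qa-db/app"},
    "prod": {"host": "www.example.com", "db_url": "postgres://prod-db/app"},
}

def deploy(artifact, env_name, log):
    """Run identical deployment steps everywhere; only the data varies."""
    env = ENVIRONMENTS[env_name]  # environment data injected at deploy time
    log.append(f"copy {artifact} to {env['host']}")
    log.append(f"point config at {env['db_url']}")
    log.append(f"restart app on {env['host']}")
    return log
```

Because `deploy` contains no environment-specific branches, the production run exercises code that has already succeeded many times earlier in the path.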
Technique 4: Put the Pieces Together
It is unlikely that most organizations will throw out technologies that are already in place. While deployment automation can handle integrations when applications are being deployed, there are multiple touch points when trying to automate an end-to-end process. Touch points include requirements management, issue tracking, continuous integration, test automation and application monitoring. The process management layer should be able to exchange information either by pulling information from another system or having information pushed to it. Having the higher-level process management framework in place ensures traceability, which is key to surviving an audit.
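The pull-or-push exchange described above can be sketched as a small traceability layer that records events from every touch point against a change identifier. The class, tool names, and change ID below are illustrative assumptions only.

```python
import time

# Hypothetical sketch of a process-management traceability layer: external
# tools either push events to it (e.g. a CI server posting a build result)
# or it pulls status from them, and everything is recorded against the
# change ID so an auditor can follow one change end to end.

class TraceLog:
    def __init__(self):
        self.events = []

    def record_push(self, source, change_id, status):
        """Push model: an external tool posts an event to us."""
        self.events.append({"source": source, "change": change_id,
                            "status": status, "at": time.time()})

    def record_pull(self, source, change_id, fetch):
        """Pull model: we query the tool via a caller-supplied fetcher."""
        self.record_push(source, change_id, fetch(change_id))

    def history(self, change_id):
        """Every recorded touch point for one change, in order."""
        return [e for e in self.events if e["change"] == change_id]
```

Whether a given tool pushes or is polled becomes an integration detail; the audit trail it feeds looks the same either way.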
These four techniques will help you get on the path to DevOps mastery and overcome your biggest deployment roadblocks. The solutions you employ will seem invisible, eliminating problems and removing obstacles. Just like a ninja.
About the Author
Jonathan Thorpe is Product Marketing Manager for all things DevOps and Continuous Delivery at Serena Software. Previously Jonathan worked as a Systems Analyst at Electric Cloud, specializing in DevOps-related solutions. Jonathan holds a degree in Computing Systems from Nottingham Trent University.