UPDATED 16:12 EST / AUGUST 15 2014

Performance trends thrive in Big Data DevOps | HP Vertica event highlights

Modern business produces data from everything it does, and a great deal of that data arises in forms that normal, everyday databases do not handle very well. The result is that much of that data is dismissed or lost into the ether because either it cannot be stored or it cannot be readily queried. The rise of Big Data, elastic data types, and distributed storage systems such as Hadoop means a lot more of this data can be collected, analyzed, and turned into actionable information.

According to Wikibon.org research, 60 percent of organizations are in the process of shifting data management resources to Hadoop, with 30 percent looking to start by the end of the year. Big Data and Hadoop represent a powerful motive force in the industry, one that DevOps teams and agile paradigms can follow to make better operational use of that data.

Chris Selland, VP of Marketing and Business Development at Hewlett-Packard Co.’s Vertica, says enterprise-grade performance is a turning point when it comes to Big Data.

In DevOps, analytics is the workhorse of continuous delivery and continuous availability, and the performance of that analytics can make or break a team's capability. Real-time analysis that turns incoming data into actionable information sooner can mean heading off a system crash rather than cleaning up after one.

The DevOps revolution, virtualization, and SLI

DevOps is a word that means many things to many people, so it was no surprise that when John Furrier asked Localytics, Inc. co-founder and chief software officer Andrew Rollins about DevOps culture, he cautioned against focusing on either dev or ops.

“DevOps is a loaded term,” says Rollins, “because sometimes people hear ‘ops’ and sometimes people hear ‘dev.’ You don’t want to get pigeonholed into one or the other.”

The paradigm of DevOps has changed how traditional IT approaches the build, test, deploy cycle, not just with automation and abstraction, but by making development part of operations. To this end, Rollins argued that it's best to present DevOps expertise as engineering expertise rather than as that of a pure developer or pure technician. This is increasingly true as SaaS platforms and cloud operating systems come to dominate Software Led Infrastructure (SLI).

David Floyer from Wikibon.org defines SLI as "pervasive virtualization across compute, storage, and networking technologies," leading to the control systems of datacenters existing in software rather than hardware. Where software and customization go, so too go developers. The shift to SLI is being led by cloud and virtualization; Wikibon.org research from Stuart Miniman shows that by 2011, about 72 percent of organizations said their data centers were at least 25 percent virtual.

When discussing how DevOps affects Localytics, Rollins mentioned that developers dealt with a Heroku-like infrastructure (a cloud platform OS). As a result, Localytics developers do not worry much about the underlying infrastructure, which virtualization and cloud are abstracting away; instead, “they’re just writing apps and they deploy them.”

With software acting as infrastructure, IT operations begins to have a very similar job to dev. While the application team builds and deploys code, the deployed code itself depends on a software-defined underpinning that DevOps engineers monitor, log, and analyze. The result is tighter lines of communication between DevOps teams and application development.

This is especially visible in the DevOps tools produced by leading analytics outfits such as Splunk Inc., Moogsoft Inc., and Logentries Inc. Splunk’s further foray into the cloud pushes virtualization, but the real trend is the release of App for Stream, which enables the collection of network-direct wire data, in line with Big Data collection and analysis. Moogsoft’s Incident.MOOG and “situation room,” as well as Logentries’s real-time team annotation products, show the tightening lines of communication that combine with analytics to build DevOps paradigms.

DevOps and scale

The other side of the DevOps and Big Data connection is scale, something that code does well with access to virtual machines. After all, in the cloud it’s possible to spin up one node or a thousand, all alike; but in the end a quality team (a component of DevOps) needs to make sure that the dependencies and code can withstand having a thousand copies running at once.

“Because the fundamental culture is managed through code, not through manual process,” Rollins adds, speaking from the engineering perspective on DevOps culture. “You want to say that you both get what it means to manage infrastructure at scale, but you also know how to make reliable code that scales as well.”

The bigger the system, the bigger the data flowing through and out of it. Distributed infrastructures, software-led design, and Big Data databases and analytics tools continue to arise in response to DevOps teams’ need for real-time visibility into the health of the systems they control.

photo credit: mansikka via photopin cc
