UPDATED 20:18 EDT / AUGUST 08 2017


New Wikibon research challenges execs to cut downtime costs in half

A perfect storm of heightened regulation, advanced cyberthreats and greater business risk is creating a new imperative for Global 2000 organizations to rethink data recovery for high-value applications.

That’s according to new research by Wikibon analyst David Floyer. In a recent Wikibon research report (subscription required), Floyer introduces a new phrase, “application-led backup and recovery,” which takes a system view of recovery, versus a decades-old approach of relying on a storage-centric architecture to manage data protection and recovery. Wikibon argues that an application-led design can noticeably cut the cost of downtime relative to conventional approaches such as those typified by purpose-built backup appliances, or PBBAs (pictured).

Why Now?

Wikibon’s research underscores three major forces that it predicts will create a tipping point for change:

  • New regulations with implications for ensuring timely recovery, such as the EU’s General Data Protection Regulation, which beginning next May imposes fines of up to the greater of 4 percent of annual global revenue or 20 million euros;
  • Cyberthreats are no longer just hacktivists causing trouble. Increasingly, organized crime (for example, WannaCry and derivative ransomware attacks) and even state-sponsored offensive cyberattacks place virtually all G2000 organizations at risk, especially those running critical infrastructure (for example, financial systems and power grids);
  • Digital business. Digital means data, and when mistakes are made or malicious attacks are perpetrated, bad things happen fast and data is corrupted at light speed. As such, the pressure to recover more quickly with less data loss is heightened by digital transformation initiatives.

According to Wikibon, the average cost of downtime for mission-critical systems at large organizations equates to nearly 5 percent of revenue, or about $735 million per year. The researcher estimates that taking an application-led recovery approach can cut this by more than half over a four-year period, to 2.3 percent.
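The arithmetic behind these figures can be checked directly. This minimal sketch derives the revenue base implied by the report's own numbers (the 4.9 percent and 2.3 percent figures appear later in the article; the implied revenue is a derived estimate, not a stated one):

```python
# Back-of-the-envelope check of Wikibon's downtime figures.
# All inputs are taken from the article; the revenue base is implied,
# not independently verified.

downtime_cost = 735e6    # ~5% of revenue per year, per the report
downtime_pct = 0.049     # the 4.9% figure cited in the study

implied_revenue = downtime_cost / downtime_pct
print(f"Implied annual revenue: ${implied_revenue / 1e9:.1f}B")   # ~$15.0B

target_pct = 0.023       # application-led target: 2.3% of revenue
target_cost = implied_revenue * target_pct
annual_savings = downtime_cost - target_cost
print(f"Target downtime cost:   ${target_cost / 1e6:.0f}M")       # ~$345M
print(f"Annual savings:         ${annual_savings / 1e6:.0f}M")    # ~$390M
```

In other words, the report's percentages imply a roughly $15 billion-revenue organization saving on the order of $390 million per year.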

What Is an Application-Led Architecture?

According to Wikibon, the old models of backup and recovery are rooted in the days of tape, which then gave way to disk-based backup in the form of PBBAs, popularized by Data Domain in the mid-2000s. These solutions are storage-based, meaning that an application-consistent copy of a database and its associated files is taken at a point in time and written out as a backup.

An application-led system, on the other hand, ships transactions from memory buffers before they are flushed to disk: ongoing transactions are written to the production database and simultaneously to a specialized recovery appliance, reducing data-loss exposure to subsecond granularity. To explain this process, Wikibon walks through in detail how an Oracle Corp. Zero Data Loss Recovery Appliance, or ZDLRA, works.
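The dual-write idea can be illustrated with a toy sketch: each committed transaction is appended to the production log and shipped to the recovery target at the same time, so data-loss exposure is bounded by the last shipped record rather than the last backup window. All class and method names below are hypothetical; real systems such as Oracle's ZDLRA implement this inside the database engine.

```python
import time

class RecoveryAppliance:
    """Hypothetical recovery target that accumulates a continuous redo stream."""
    def __init__(self):
        self.records = []

    def receive(self, record):
        self.records.append(record)

class ProductionLog:
    """Hypothetical production log that dual-writes on every commit."""
    def __init__(self, appliance):
        self.appliance = appliance
        self.log = []

    def commit(self, txn):
        record = {"txn": txn, "ts": time.time()}
        self.log.append(record)          # local write to the production log
        self.appliance.receive(record)   # simultaneous ship to the appliance

appliance = RecoveryAppliance()
db = ProductionLog(appliance)
db.commit("UPDATE accounts SET balance = balance - 100 WHERE id = 42")

# Exposure is bounded by the last shipped record, not a nightly backup:
assert appliance.records == db.log
```

Contrast this with the storage-centric model, where everything committed since the last point-in-time copy is at risk.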

The benefit of this approach comes from an end-to-end view of the data, the ability to leverage large memory with integrated software, and a dramatic simplification of the backup-and-recovery process. This last point is critical, according to Wikibon’s research. Specifically, Wikibon’s data shows that organizations that take this approach and automate procedures can reduce the number of steps involved in backup and recovery from 80 to 14 over time.

Are there really that many steps to reduce? Evidently, yes. These steps, according to Wikibon, include the following, many of which today are handled by unreliable scripts or manual and semi-automated processes:

  • Scheduling backups (full, incremental and redo log sweeps) and maintaining an exact record of the backup status for every database and application.
  • Auditing systems to ensure database backups are correct and fully completed, rather than merely reporting errors. Wikibon notes that unreported errors are often detected only at recovery time.
  • Verifying that backed-up files can support a full restore, a very time-consuming test that cannot be performed on incremental backups.
  • Ensuring that every step in the backup-and-recovery process is completed accurately.
  • Checking that logs are accurately sequenced and complete (for example, redo log files).
  • Checking for corrupt blocks and missing files (for example, RMAN Restore Validation), typically done quarterly, annually or not at all because it is so resource-intensive.
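The checks above can be sketched as an automated pass over a backup catalog, which is the kind of scripted verification the report says replaces manual effort. The catalog layout and check names below are hypothetical, for illustration only.

```python
def verify_backup(catalog_entry):
    """Return a list of problems found; empty means the backup looks sound.

    catalog_entry is a hypothetical per-backup record with a completion
    status, the redo log sequence numbers it holds, and a corrupt-block count.
    """
    problems = []

    # A backup that merely logged errors is not a completed backup.
    if catalog_entry["status"] != "COMPLETED":
        problems.append("backup did not fully complete")

    # Redo logs must form an unbroken sequence for point-in-time recovery.
    seqs = catalog_entry["redo_log_sequences"]
    if seqs != list(range(seqs[0], seqs[0] + len(seqs))):
        problems.append("gap in redo log sequence")

    # The resource-intensive corruption check, run on every entry.
    if catalog_entry["corrupt_blocks"] > 0:
        problems.append("corrupt blocks detected")

    return problems

entry = {"status": "COMPLETED",
         "redo_log_sequences": [101, 102, 104],   # 103 is missing
         "corrupt_blocks": 0}
print(verify_backup(entry))   # ['gap in redo log sequence']
```

Running such checks continuously, rather than quarterly or at recovery time, is the essence of the step reduction Wikibon describes.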

According to Wikibon, there are dozens of these steps and substeps embedded in the backup and recovery processes of today. Importantly, these steps are associated with both failure management and compliance reporting, which dramatically increases complexity.

It is this very complexity, says Wikibon, that system vendors, such as Oracle today and others in the future, are attacking with new solutions that are integrated and specifically designed to protect the world’s most sensitive data, such as financial transactions, airline reservations, retail transactions and more.

Business Case Conclusions

The Wikibon study offers some other compelling statistics related to downtime for mission-critical applications:

  • An application-led recovery architecture can cut downtime loss from 4.9 percent of revenue to 2.3 percent over a four-year period.
  • The best-case scenario for improving an existing storage-centric data protection approach gets the average customer to a 4 percent revenue loss over the same four-year period.
  • Wikibon estimates that for the average G2000 customer, an application-led architecture will deliver over $1.2 billion in value within four years, quantified as reduced losses from downtime.
  • The business case for an application-led architecture suggests a net present value of $1 billion for such a project, assuming a 5 percent discount rate. The break-even point is just six months and the internal rate of return is a whopping 591 percent.
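The NPV mechanics behind a figure like this can be sketched as follows. The year-by-year cash flows below are hypothetical placeholders, not Wikibon's model (the report's annual figures are not reproduced in the article); only the 5 percent discount rate comes from the study.

```python
def npv(rate, cash_flows):
    """Net present value of cash_flows[t] received at the end of year t,
    with cash_flows[0] as the upfront (year-0) outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical case: a modest upfront investment, then savings that ramp
# toward the ~$390M/year implied by cutting downtime from 4.9% to 2.3%
# of a ~$15B revenue base. Illustrative numbers only.
flows = [-50e6, 150e6, 300e6, 390e6, 390e6]

print(f"NPV at 5%: ${npv(0.05, flows) / 1e6:.0f}M")
```

The point of the sketch is that when annual savings dwarf the upfront cost, NPV lands near the undiscounted sum of the savings, which is why the report can show a six-month break-even and a triple-digit internal rate of return.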

Talking about data protection is like discussing insurance. But in this digital age of always-on availability, security risks and high-speed, high-risk business velocity, data loss from downtime is only increasing. Wikibon challenges senior business executives at the board level to set a moonshot goal of cutting their cost of downtime in half by early next decade.

Shareholders, Wikibon says, should take notice too and demand that organizations make reducing data losses a board level agenda item.

Image: Wikibon
