IBM Edge 2013 Takeaway: IBM Redefines Storage to Become More Relevant
IBM Edge 2013 has come and gone, so now we can take stock of the annual event's highlights and see what they tell us about IBM's strategy moving forward. First off, IBM made a slew of additions and announcements to its flash storage portfolio. Additionally, IBM focused heavily on "smarter" at IBM Edge 2013: smarter computing and smarter storage.
Here was an interesting message from IBM at IBM Edge: in the Big Data + Storage conundrum, storage isn't used simply as a repository, but as an innovation mechanism. IBM believes the word 'storage' will soon become obsolete under the weight of Big Data and what we can do with it.
IBM’s Storage Strategy
Flash, Dibs! (Well, sorta)
IBM made no bones about jumping into the Flash storage market, and seemed adamant it had a game plan for success. What IBM failed to mention is that EMC and others have been investing in flash since 2008. IBM’s messaging at IBM Edge was that flash is going to have a major impact on system architectures. Wikibon Senior Analyst Stu Miniman recently gave some additional context to that message on our morning NewsDesk,
“…that is one of the criticisms of IBM, is they do come out sometimes and say ‘We have this great idea.’ I actually went to a session of theirs in 2010 where they said ‘flash is going to radically change what’s going on in storage’…we’re sitting there saying ‘well, EMC has been putting flash in their arrays since 2008,’ and here we are in 2013 still talking about flash. We think it’s a good message though. Flash is radically changing the entire software [stack]. IBM now has a pretty solid portfolio: everything from what they’re doing in extending Easy Tier into the server, through FlashSystem, which was the TMS acquisition. Flash is the hot-button topic for storage.”
Though IBM is late to market, Miniman is glad to see that it is at least on point with its messaging. Another announcement at IBM Edge was the FlashSystem family of all-flash appliances, which IBM says delivers less than 1/10 the cost per transaction while using as little as four percent of the energy and two percent of the space of hybrid disk-and-flash systems. IBM also added support for 4TB drives to its Storwize V7000 and XIV systems, for 33 percent more capacity.
XIV’s new capabilities let clients send large volumes of data between systems through the cloud. As discussed in the video, IBM also upgraded its Easy Tier technology, an automation tool that moves data to the most effective tier within a storage system, improving efficiency and speed.
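To make the tiering idea concrete, here is a minimal sketch of how an automated tiering tool of this kind could decide placement. This is a hypothetical illustration, not IBM's actual Easy Tier algorithm; the extent IDs, threshold, and capacity figure are all made up.

```python
# Hypothetical sketch of automated storage tiering: hot extents go to
# flash, cold extents stay on disk. Not IBM's actual Easy Tier logic.
HOT_THRESHOLD = 100  # I/Os per monitoring interval; illustrative cutoff


def place_extents(io_counts, flash_capacity):
    """Assign each extent to 'flash' or 'disk' by recent I/O activity.

    io_counts: dict mapping extent id -> I/O count in the last interval
    flash_capacity: max number of extents the flash tier can hold
    """
    # Rank extents hottest-first, fill flash up to capacity, rest to disk
    ranked = sorted(io_counts, key=io_counts.get, reverse=True)
    placement = {}
    for i, extent in enumerate(ranked):
        if i < flash_capacity and io_counts[extent] >= HOT_THRESHOLD:
            placement[extent] = "flash"
        else:
            placement[extent] = "disk"
    return placement


counts = {"e1": 500, "e2": 20, "e3": 250, "e4": 90}
print(place_extents(counts, flash_capacity=2))
# The two hottest extents (e1, e3) land on flash; e2 and e4 stay on disk
```

A real implementation would of course migrate data asynchronously and weigh read/write mix, but the core idea is the same: promote by measured heat, demote when the fast tier fills up.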
Cloud Opportunities, With a Focus on the Public Cloud
Another big announcement was IBM’s new solutions for PureFlex Systems: Cloud + Mobile + Social + Analytics. As part of our coverage of IBM Edge, we had Andy Monshaw, GM of PureFlex Systems, on #theCUBE. IBM is betting on a trend: the industry is driving levels of requirements for infrastructure and applications we haven’t seen in a long time. Big Data is driving complexity so quickly that service providers can’t keep up — in essence, the old way of working is acting as an anchor.
Monshaw gave a great quote on where he and his team believe Big Data is moving the industry:
The movement to cloud is happening at a rate and pace that even the analysts aren’t recognizing…an example is clients using Salesforce.com, and that is a cloud service. They don’t think that way. I believe what we’re going to see in the marketplace is the second wave in the VM effect…SMBs are moving their business to highly virtualized servers…Second virtualization effect…IBM has massive cloud offerings…There are thousands and thousands of mid-sized managed server providers…The opportunity extends well, well, well beyond a few big players.
Further supporting its cloud efforts is IBM’s recent $2 billion acquisition of SoftLayer. ‘Big Blue’ is planning to create a new cloud services division within its Global Services unit, with SoftLayer continuing to operate as a separate business entity inside of that. The acquisition of SoftLayer marks a departure from IBM’s previous cloud strategy, which until the acquisition had been largely focused on private cloud offerings.
Taking Away the Word Storage and Replacing It With the Word Data
Big Data is extending outside of the IT arena for the first time. Inhi Cho Suh, VP of Product Management & Strategy, talked about how clients care more about managing data and moving it up the scale in terms of value. Organizational lines are starting to blur, and Big Data is the hot topic. If you didn’t catch Inhi Cho Suh’s presentation on Big Data myths, we’ll try to track it down for you. Myth #1: Big Data is only about volume. False. It’s not only about volume; there are other V’s: velocity, veracity and value. Myth #2: Big Data doesn’t include transactional data. False. The majority of Big Data implementations involve integrating transactional data; that’s how most of the world operates in terms of business operations. You want to integrate that with new data sources and new data types to actually make a measurable impact at the point of contact.
Data warehouse augmentation is one of the dominant use cases IBM sees in the shift to Big Data right now. Active archiving and real-time fraud detection on transactional data are examples that come to mind. When you travel abroad or make a big purchase outside of your norm, companies can immediately contact you via text or call — an extra layer of service that prevents fraudulent activity.
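The fraud check described above boils down to comparing a new transaction against a customer's established profile. A minimal rule-of-thumb sketch, with entirely made-up thresholds and field names, might look like this:

```python
# Hypothetical fraud-flagging rule illustrating the real-time check on
# transactional data described above. Thresholds are illustrative only.
def flag_transaction(amount, country, profile):
    """Return True if a purchase looks out of the ordinary for this customer.

    profile: dict with the customer's typical spend and home country,
             e.g. {"avg_amount": 80.0, "home_country": "US"}
    """
    abroad = country != profile["home_country"]       # purchase made abroad?
    unusually_big = amount > 5 * profile["avg_amount"]  # far above typical spend
    return abroad or unusually_big


profile = {"avg_amount": 80.0, "home_country": "US"}
print(flag_transaction(1200.0, "US", profile))  # large purchase -> True
print(flag_transaction(60.0, "FR", profile))    # abroad -> True
print(flag_transaction(60.0, "US", profile))    # in pattern -> False
```

Production systems replace these hard-coded rules with models trained on historical transactions, but the integration point is the same: the check runs against live transactional data, which is exactly why Big Data implementations can't exclude it.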
Okay Big Data, how do you get started?
“Probably the two questions I’m getting are, one, around where should I start? Meaning, which use cases are demonstrating real ROI and results. That gets to your point (Dave Vellante’s point) about value. We’ve determined there are five dominant use cases where we’ve seen clients actually be quite successful starting from. And then the second piece is, do I have the right skills in-house to get this stuff up and running? The skills can be quite varied: skills can be around do I have the analytic skills, the applied math skills to write the queries…data mining skills, it could be visualization techniques, because once you get to larger pools of data, consuming it can take quite a bit of time. Or do I have the data integration, quality, governance and privacy skills to ensure I’m putting the right governance around how that’s being accessed and touched.”
What IBM calls Software-Defined Environments, the rest of the world calls Software-Defined Storage. IBM is making heavy investments, in both dollars and resources, in flash-driven software-defined storage. IBM is trying to change the narrative from simply ‘storing’ Big Data to utilizing it to improve business verticals, save money and innovate.
IBM believes storage is becoming obsolete. Here is what that statement really means: storage will become so deeply ingrained in the infrastructure of the data center that no one will need to talk about it anymore. It will be underneath, solving problems for customers.
So that is the biggest takeaway from IBM Edge: storage (largely flash arrays) will be the underbody of data center infrastructure, solving problems like a #bawse.