Storage technology has once again become a hot area of the industry due to the advent and adoption of flash storage. The changing dynamics of cloud-based architectures and our insatiable appetite for big data have driven companies to think about how they can turn infrastructure spend – and particularly their storage – into a profit engine. As a former professor at one of the top engineering schools in the country, I’m proud to be a part of this innovation.
However, there’s been much discussion lately about whether cloud simply moves the data and storage problem from one place to another. In fact, The New York Times recently published an interesting and detailed story on the cost, energy and footprint challenges facing large data centers. Data centers continue to grow unabated as we busily fill hard drives with work data plus downloaded movies, music, family photos, tax files and every email we’ve sent or received in the past five years.
The concept and promise of cloud is great – don’t get me wrong. But the data center has a much bigger problem than the idle file servers waiting for computations that the article describes. There are also racks and racks of storage servers under heavy load that contain hard drives filled to just a fraction of their capacity. This is called short stroking, whereby lots of disk drives are used in parallel to get performance. Because these disks are running at or near 100 percent of their performance limit, the remaining capacity cannot be used – there is no performance headroom left to get additional data on and off the drive. When short stroked, these added high-speed hard drives consume enormous amounts of energy and take up vast amounts of space within the data center. And the sad fact of the matter is that 90 percent of the available capacity can go unutilized.
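To see how short stroking strands capacity, here is a back-of-the-envelope sketch. All the numbers (drive capacity, per-drive IOPS, workload demand) are illustrative assumptions, not vendor figures – the point is only that the drive count is dictated by performance, not by how much data you actually need to store:

```python
# Back-of-the-envelope model of short stroking (illustrative numbers only).
# The workload's IOPS demand, not its capacity demand, dictates drive count.

DRIVE_CAPACITY_GB = 600      # assumed capacity of a high-RPM enterprise disk
DRIVE_IOPS = 180             # assumed sustained random IOPS per drive

workload_iops = 45_000       # hypothetical application performance demand
workload_capacity_gb = 10_000  # hypothetical application capacity demand

# Drives needed to meet the IOPS target (ceiling division)
drives_for_iops = -(-workload_iops // DRIVE_IOPS)

total_capacity_gb = drives_for_iops * DRIVE_CAPACITY_GB
utilization = workload_capacity_gb / total_capacity_gb

print(f"drives needed for performance: {drives_for_iops}")
print(f"capacity purchased: {total_capacity_gb} GB")
print(f"capacity actually used: {utilization:.1%}")
```

With these assumed numbers, 250 drives are bought to satisfy performance, yet under 7 percent of the purchased capacity is ever used – the rest sits idle, consuming power and floor space, which matches the roughly 90-percent-unused figure above.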
Now, many argue that adding flash storage is the solution. In most cases, I would agree at a conceptual level. With flash, you are able to get significant performance improvements, allowing your business to run more efficiently. Ultimately, you can do a whole lot more in a lot less time.
But with innovation comes hurdles, both from a physical and a mental perspective. It’s great that we have new technologies like flash storage, but today’s storage architecture, rooted in 25 years of disk-based technology, will not allow customers to reap the true benefits of this game-changing technology. While we know it’s economically and operationally tough to rip and replace a data center based on a brand new architecture, we need to take small but important steps to begin this paradigm shift.
One quick way to begin this change is answering the following question – where is the optimal place for flash storage in your data center?
There are many differing opinions on where flash is most appropriately deployed. Some vendors advocate putting it in the array. This can be expensive and suboptimal, creating islands of flash that use only a fraction of the technology – even arrays that are under-utilized for a specific workload bear the cost of the flash. And it still doesn’t address the bottleneck of moving significant amounts of data over the network to the server. Others want you to keep flash in the server. With this deployment, the benefits are greatly limited by the lack of sharing (each server only has access to its local flash); it’s still expensive and requires significant integration.
As an industry, we are putting flash in the wrong locations. Why continue to grow the size of our data centers as if they’re landfills, rather than capitalize on the intellectual capital within the technology industry and leverage it to develop more cost-effective, ecologically beneficial alternatives? We challenge the broader storage and data management vendors of the world. We challenge ourselves to do the same in our own technology solutions. But most importantly, we challenge the consumer to be in the know, as well.
So, the answer is not in the server and not in the storage – but in the network. By placing flash on the network, you can extend and amplify the benefits and create globally shared pools of performance and efficiency. Network-attached flash will become the way that more and more data centers turn their expenditures into a profit engine. This will allow organizations to scale application performance without limit, free applications from the confines of the data center by eliminating latency and cut storage costs by more than half.
In the end, we need to invert the equation of spending most of our IT data center budget to “just keeping the lights on.” While flash is going to help us change that equation, it’s paramount to make sure you consider where this technology is placed throughout your infrastructure.
Would love to hear your comments. Also, would like to know how you’re using flash and the pros and cons you’ve experienced so far.
About the Author
Ronald Bianchini, Jr.
President and CEO
As President and Chief Executive Officer of Avere Systems, Co-Founder Ron Bianchini has a long record of accomplishment in building and leading successful companies that deliver breakthrough technologies. Prior to Avere, Ron was a Senior Vice President at NetApp, where he served as the leader of the NetApp Pittsburgh Technology Center. Before NetApp, he was CEO and Co-Founder of Spinnaker Networks, which developed the Storage Grid architecture acquired by NetApp. Ron also served as Vice President of Product Architecture of FORE Systems, where he was responsible for ATM products. Previously, he co-founded Scalable Networks [acquired by FORE], which designed and implemented a large-scale Gigabit Ethernet switch, and earlier in his career, he was a professor at Carnegie Mellon University.
Ron received an S.B. degree in Electrical Engineering from the Massachusetts Institute of Technology and M.S. and Ph.D. degrees in Electrical and Computer Engineering from Carnegie Mellon University. He also holds numerous patents in fault-tolerant distributed systems and high-speed network design and has published extensively in technical journals.