According to a new study, if HDDs continue to progress at their current pace, then by 2020 a two-platter, 2.5-inch disk drive will be capable of storing more than 14 TB and will cost about $40 (today, a typical 500 GB hard drive costs about $100). Although flash memory has also become popular – with advantages such as lower power consumption, faster read access time, and better mechanical reliability than HDDs – its cost per GB is nearly 10 times that of HDDs. In addition, flash memory technology will hit technical limits that prevent its continued scaling before 2020, keeping it from replacing HDDs.
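To put those figures side by side, here is a back-of-the-envelope cost-per-GB calculation using the numbers above (a minimal Python sketch; the `cost_per_gb` helper is my own, and the 10x flash premium is the study's rough estimate, not a measured price):

```python
def cost_per_gb(price_usd, capacity_gb):
    """Return storage cost in dollars per gigabyte."""
    return price_usd / capacity_gb

# Figures from the study quoted above.
hdd_today = cost_per_gb(100, 500)        # $100 for 500 GB  -> $0.20/GB
hdd_2020 = cost_per_gb(40, 14 * 1000)    # $40 for ~14 TB   -> ~$0.003/GB
flash_today = hdd_today * 10             # ~10x HDD premium -> ~$2.00/GB

print(f"HDD today:   ${hdd_today:.3f}/GB")
print(f"HDD in 2020: ${hdd_2020:.4f}/GB")
print(f"Flash today: ${flash_today:.2f}/GB")
```

Even if the projections are off by a wide margin, the takeaway holds: raw capacity per dollar keeps improving dramatically, roughly a 70x drop in cost per GB between today's drives and the 2020 projection.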
As we look to a future with smart grids/cities, digital medical records, lifestreaming, and things we haven't even thought of yet, the one constant is that we are outpacing our capability to store all the data being generated. I am not talking about the physical requirements for storage, because we can simply keep building bigger storage arrays that constantly catch up to our requirements. The problem that remains is overcoming the mechanical limitations of a hard drive in order to serve up the piece of data that is required, when it's required.
So there is a physical performance issue, and most likely a schema issue, as we come to terms with the limitations of data storage models for accommodating 10x, 50x, or 100x the amount of data that we deal with today. Just building cheaper, bigger hard drives is not the solution; it helps, but it's not the primary problem that requires solving. Data storage models, the digital detritus problem, and data performance are the problems I foresee.