

Storage was simple when spinning disks ruled the world, but today distinctions are drawn along several lines: disk versus flash, logical versus physical, and local versus cloud. Underlying each of these categories is the trade-off between latency and capacity.
That last distinction is becoming more important as data volumes, sources and types grow, writes Wikibon CTO David Floyer. Latency storage is high-performance and read-oriented: it holds data that is in high demand, where very fast access is vital. Traditionally this data lived on high-speed disk, but in the last few years high-speed flash has been taking over this market as the cost of flash has fallen rapidly. The introduction of new technologies such as 3D flash will drive prices even lower.
Capacity storage is write-oriented and designed to archive large volumes of data that may be accessed seldom, if ever. The focus in capacity storage is on cost and write capacity. Media choices include SATA disk, capacity flash, tape, cloud storage and write-once-read-almost-never drives. At the extreme end, new technologies such as HGST's shingled 10 TB helium drives will push storage prices down further, but at the cost of very slow read rates and I/O. Floyer writes that the metadata for capacity storage should reside on low-latency storage, so that when someone does need to access data from the archive, its location can be found quickly, improving retrieval times.
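The metadata idea can be illustrated with a minimal sketch: a small index kept on the fast tier records where each archived object lives on the slow tier, so a retrieval starts with one quick lookup rather than a scan of the archive. The paths, index format and function names here are illustrative assumptions, not details from the report.

```python
import os

# Assumed mount points for the two tiers (hypothetical paths).
FAST_INDEX_PATH = "/mnt/nvme/archive_index"   # low-latency tier
CAPACITY_DIR = "/mnt/smr/archive"             # capacity tier

def index_object(index: dict, object_id: str, offset: int, length: int) -> None:
    """Record on the fast tier where an archived object sits on the capacity tier."""
    index[object_id] = {
        "path": os.path.join(CAPACITY_DIR, object_id),
        "offset": offset,
        "length": length,
    }

def locate(index: dict, object_id: str):
    """One fast lookup replaces a slow scan of the capacity tier."""
    return index.get(object_id)

# Usage: archive one object, then find it instantly later.
index = {}
index_object(index, "video-2014-08-01", offset=0, length=4096)
hit = locate(index, "video-2014-08-01")
```

In a real system the index itself would be persisted on the low-latency tier (here suggested by FAST_INDEX_PATH), which is the point of Floyer's recommendation: the expensive fast storage holds only the small metadata, while the bulk data sits on cheap, slow media.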
Floyer recommends that CIOs plan for separate latency and capacity storage networks, with the former moved as close to the compute resource as possible. As data streams in from various sources – people, IT systems, video and surveillance, mobile devices, IoT sensors – a fraction is diverted to active processing while the remainder is written out to the capacity network and seldom accessed.
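The split Floyer describes can be sketched as a simple routing step: a predicate decides which incoming records are "hot" and stay on the latency tier for active processing, while everything else flows to the capacity tier. The predicate and record shape below are illustrative assumptions.

```python
def route(records, is_hot):
    """Divert hot records to the latency tier; write the rest to capacity."""
    latency_tier, capacity_tier = [], []
    for rec in records:
        (latency_tier if is_hot(rec) else capacity_tier).append(rec)
    return latency_tier, capacity_tier

# Example stream: only records flagged for active analytics stay on fast storage.
stream = [
    {"src": "iot-sensor", "active": True},
    {"src": "surveillance", "active": False},
    {"src": "mobile", "active": False},
]
hot, cold = route(stream, lambda r: r["active"])
```

The design choice mirrors the report's economics: the latency tier only ever sees the small, high-demand fraction, so its size (and cost) stays bounded even as total data volume grows.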
The full report, which includes Floyer’s projections for growth of both latency and capacity storage revenues, is available on the Wikibon Premium website.
photo credit: .SilentMode via photopin cc