NetApp has gone from an underdog in the market to one of the big players.
It has kept innovating to some extent, leading with unified storage and new reference architectures. But to keep pace in the big data world, NetApp needs to fill in some pieces.
With its analyst meeting this week, questions about NetApp’s next wave of innovation are natural, considering the moves by EMC and IBM, both of which are investing heavily in their converged infrastructures.
NetApp launched the FlexPod with Cisco in 2010. FlexPod is an architecture that can be built in pieces, unlike Oracle’s “red stacks” and pre-configured options such as VCE, which is only available in five configurations. See the post on converged infrastructure that Wikibon analyst Stuart Miniman wrote last week for an excellent overview of the market.
EMC’s VSPEX is its answer for the broader market where NetApp likes to play. Other converged and flexible options include IBM PureSystems and various offerings from Dell and HP.
Big data is the next frontier for NetApp and its competitors.
At Hadoop World last week, NetApp announced a joint partnership with Hortonworks and the Open Solution for Hadoop rack. The Open Solution includes NetApp FAS and E-Series storage along with Hewlett-Packard servers and Cisco switches.
That follows NetApp’s strategy of offering flexibility in its integrations.
On theCube last week at Hadoop World, NetApp’s Val Bercovici talked about how open source will impact the big data ecosystem. He summed it up by saying big data will essentially surround enterprise applications, forcing adoption.
That takes us to how NetApp will evolve its own infrastructure. What about flash? EMC recently acquired XtremIO and will use that purchase to offer an all-flash storage environment.
The answer for NetApp may again come with Cisco, which last week announced a partnership with Fusion-io, the red-hot maker of flash storage subsystems that integrate directly into servers. HP and Dell already use Fusion-io to accelerate application performance. Flash solutions like Fusion-io’s will become essential as more data gets pumped into enterprise environments.
And then there is the tiering issue.
According to Wikibon, the NetApp FAS series offers flash as a read-only cache. With it, NetApp offers manual movement of data between tiers but has resisted providing automated tiered storage (ATS).
Wikibon’s David Floyer and Nick Allen added this about NetApp:
The vendor least supportive of ATS is NetApp, which instead only sells the flash-cache PCIe card on the FAS storage arrays. The size of this cache is limited and does not offer the potential savings of a full ATS implementation, especially against the EMC VNX, which offers an integrated flash-cache and ATS.

Using flash-cache is just an extension of using traditional controller cache in storage arrays. Flash is lower cost than DRAM, and can be larger than DRAM, but the types of workload that are improved by larger cache sizes, whether from DRAM or Flash, are the same. There are diminishing returns as the cache size increases. Flash-cache is only applicable to workloads that are cache friendly, i.e., have a relatively small working set and are predictable. ATS has a different performance value proposition, and is additive to cache.

Wikibon observes that NetApp arrays have previously led other arrays in additional storage array function such as compression and de-duplication. Wikibon believes that NetApp needs to add ATS functionality to its FAS array offerings to remain competitive.
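The diminishing-returns point in that quote can be illustrated with a toy model (a hypothetical sketch, not NetApp’s actual caching logic): replay a skewed block-access trace through a simple LRU cache and watch the hit rate climb steeply until the hot working set fits, then flatten. The workload mix and block counts below are invented for illustration.

```python
import random
from collections import OrderedDict

def lru_hit_rate(accesses, cache_size):
    """Replay a block-access trace through an LRU cache; return the hit fraction."""
    cache = OrderedDict()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)      # refresh recency on a hit
        else:
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the least recently used block
    return hits / len(accesses)

random.seed(42)
# Hypothetical "cache-friendly" workload: 90% of reads hit a small hot set of
# 1,000 blocks; 10% are scattered across 1,000,000 cold blocks.
trace = [random.randrange(1_000) if random.random() < 0.9
         else 1_000 + random.randrange(1_000_000)
         for _ in range(200_000)]

for size in (500, 1_000, 10_000, 100_000):
    print(size, round(lru_hit_rate(trace, size), 3))
```

In this sketch, growing the cache past the point where the hot set fits buys almost nothing, because the remaining misses are scattered cold reads that a larger cache can’t predict, which is the argument for pairing cache with tiering rather than simply buying more cache.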
So, will we see NetApp announce some new flash options and ATS at the analyst meeting this week?
It would make sense considering the importance of this functionality in big data environments.