UPDATED 11:15 EDT / APRIL 28 2015

Hitachi reboots its data center portfolio for cloud and analytics

The aggressive investments that Hitachi Data Systems Corp. has made over the past year to catch up with the new trends sweeping through the enterprise culminated this morning with the introduction of more than a dozen hardware and software products spanning the full breadth of its portfolio.

Headlining the launch are four new converged systems that combine homegrown servers and storage with partner-supplied networking equipment in various integrated configurations. Most notable of the bunch is an implementation of VMware Inc.’s EVO:Rail architecture that also layers Hitachi’s own management capabilities on top. It’s joined by a similar appliance that offers the same pre-packaged value proposition but runs Hitachi’s own software stack instead.

The Japanese technology titan says that the combination of its file system and open-source virtualization technology – as opposed to the proprietary hypervisor from VMware powering EVO:Rail – makes the Hyper Scale-Out Platform ideal for running data-intensive workloads such as Hadoop. But the converged approach doesn’t support every implementation of the framework equally well.

In fact, scaling computational power and storage capacity in fixed increments is often far from optimal in distributed analytic clusters, where the ratio between the two resources can vary greatly from one workload to the next. Hitachi has taken that into account and is also targeting use cases that require managing storage independently with five new configurations of its beefy G1000 array.
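The trade-off above can be made concrete with a small sketch. The node sizes and workload numbers below are hypothetical, purely for illustration; the point is that when compute and storage can only grow together in fixed node-sized steps, a workload whose ratio differs from the node's leaves one resource stranded.

```python
import math

# Hypothetical converged-node dimensions (illustrative, not Hitachi specs).
NODE_CORES = 16  # compute per node
NODE_TB = 24     # usable storage (TB) per node

def nodes_needed(cores_required, tb_required):
    """Nodes needed to satisfy both resource demands at once."""
    return max(math.ceil(cores_required / NODE_CORES),
               math.ceil(tb_required / NODE_TB))

def overprovision(cores_required, tb_required):
    """How much of each resource sits idle at that node count."""
    n = nodes_needed(cores_required, tb_required)
    return {"nodes": n,
            "idle_cores": n * NODE_CORES - cores_required,
            "idle_tb": n * NODE_TB - tb_required}

# A storage-heavy workload: modest compute, lots of capacity.
print(overprovision(cores_required=32, tb_required=480))
# 480 TB forces 20 nodes (320 cores), so 288 cores are paid for but idle.
```

Decoupling storage, as the standalone G1000 configurations allow, sidesteps exactly this kind of stranded capacity.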

The scaled-down variations are aimed at letting organizations take advantage of the management stack at the heart of the ultra-reliable system without paying for capacity they don’t need. Hitachi boasts that the additions make the series the first of its kind to implement a single administrative platform across every model, which is useful in environments that incorporate multiple systems.

Administrators can manage their installations using a software module called Infrastructure Director, also introduced as part of the launch, that incorporates a recommendation engine to determine how much capacity should be allocated to each application. It’s complemented by another new tool, aptly named Instance Director, that packs capabilities for copying and backing up workloads.
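Hitachi hasn't detailed how Infrastructure Director's recommendation engine works, but one simple heuristic of the kind such tools often start from is sizing each application to its observed peak usage plus a safety margin. The function and numbers below are assumptions for illustration only, not the product's actual algorithm.

```python
# Illustrative capacity heuristic (an assumption, not Infrastructure
# Director's actual logic): recommend observed peak usage plus headroom.
def recommend_capacity(usage_samples_gb, headroom=0.25):
    """Return a recommended allocation in GB: peak sample plus headroom."""
    peak = max(usage_samples_gb)
    return round(peak * (1 + headroom))

# Hourly usage samples (GB) for a hypothetical application.
samples = [410, 455, 430, 520, 490]
print(recommend_capacity(samples))  # peak of 520 GB + 25% -> 650 GB
```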

Those operational processes are in turn tracked through an analytics service dubbed Hitachi Live Insight that aggregates logs from the company’s hardware to provide what is described as a high-level view of the data center. That kind of visibility can come in particularly handy for hunting down the cause of technical problems, a task where every minute can count when mission-critical applications are involved.
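The basic move behind that kind of troubleshooting can be sketched in a few lines: pool log entries from every device, then pull out whatever happened within a window around a reported incident. The device names and entries below are invented for illustration; this is not Live Insight's implementation.

```python
# Illustrative sketch (not Hitachi Live Insight itself): aggregate logs
# from several devices, then filter to a time window around an incident.
from datetime import datetime, timedelta

logs = [
    ("array-01",  "2015-04-28T11:02:10", "latency spike on pool 3"),
    ("switch-07", "2015-04-28T11:02:14", "port 12 CRC errors rising"),
    ("array-01",  "2015-04-28T10:15:00", "scheduled scrub complete"),
]

def around_incident(logs, incident_iso, window_secs=60):
    """Return entries within +/- window_secs of the incident, time-sorted."""
    t0 = datetime.fromisoformat(incident_iso)
    win = timedelta(seconds=window_secs)
    hits = [e for e in logs
            if abs(datetime.fromisoformat(e[1]) - t0) <= win]
    return sorted(hits, key=lambda e: e[1])

for entry in around_incident(logs, "2015-04-28T11:02:00"):
    print(entry)
# Prints the two 11:02 entries; the 10:15 scrub falls outside the window.
```

Correlating a storage array's latency spike with a switch's error counters in the same minute is precisely the cross-device view the article describes.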

