Red Hat, the open-source software and infrastructure provider, has officially jumped on the Hadoop bandwagon this week. The company has just announced its Big Data Manifesto, with a series of principles it believes will help its enterprise customers get to grips with the problems of managing rapidly growing data volumes. In line with this move, the company said it intends to donate its Red Hat Storage plug-in to Apache Hadoop’s open community as part of a strategy focused on offering enterprise customers infrastructures and platforms for open hybrid-cloud environments.
Red Hat Storage is an alternative to the Hadoop Distributed File System built on technology the company obtained through the acquisition of Gluster in 2011. It has several advantages over HDFS, including the elimination of a single point of failure.
The company doesn’t plan on going it alone. Instead, it plans to build a “robust network of ecosystem and enterprise integration partners to deliver comprehensive big data solutions to enterprise customers.” To do so, Red Hat has begun cooperating with members of the open cloud community too, such as Amazon Web Services, in order to support its Big Data customers.
Red Hat states:
“An open hybrid cloud environment enables enterprises to transfer workloads from the public cloud into their private cloud without the need to re-tool their applications. Red Hat is actively engaged in the open cloud community through projects like OpenStack and Red Hat’s own OpenShift Origin to help meet these enterprise big data expectations both today and in the future.”
To facilitate these plans, Red Hat will add Big Data functionality to its infrastructure portfolio, which includes products such as Red Hat Storage and Red Hat Enterprise Linux. Meanwhile, the Red Hat Storage Apache Hadoop plugin – currently in technology preview – will give enterprise Hadoop deployments a new storage option that maintains API compatibility and local data access while delivering enterprise storage features.
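In practice, a plugin like this works by registering an alternative filesystem implementation with Hadoop, so existing jobs keep talking to the standard Hadoop FileSystem API while the data actually lives on a Gluster volume. A rough sketch of what such a `core-site.xml` configuration might look like follows – the property names, class name, host, and volume name are illustrative assumptions based on the open-source glusterfs-hadoop project, not details from Red Hat's announcement, and may differ between plugin versions:

```xml
<!-- core-site.xml (illustrative): point Hadoop at a GlusterFS-backed
     filesystem instead of HDFS. Property names may vary by version. -->
<configuration>
  <!-- Register a FileSystem implementation for the glusterfs:// scheme -->
  <property>
    <name>fs.glusterfs.impl</name>
    <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
  </property>
  <!-- Make it the default filesystem, replacing hdfs:// -->
  <property>
    <name>fs.default.name</name>
    <value>glusterfs://gluster-server:9000</value>
  </property>
  <!-- Gluster volume to use (hypothetical name) -->
  <property>
    <name>fs.glusterfs.volname</name>
    <value>bigdata-vol</value>
  </property>
</configuration>
```

Because jobs address storage only through the generic FileSystem API, reverting to HDFS would in principle mean undoing this configuration – which is the API-compatibility point the announcement emphasizes.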
Meanwhile, Red Hat Storage will also be integrated with Red Hat Enterprise Virtualization 3.1, providing enterprises with benefits such as expanded portability, reduced operational costs, scalability, community-driven innovation through the open-source Gluster and oVirt projects, and a choice of infrastructure.
Users will be able to access their Big Data on these platforms via Red Hat’s JBoss middleware, and the company also plans to team up with Big Data hardware and software providers to improve interoperability. Eventually, the hope is that users will be able to install and integrate a wide range of comprehensive enterprise Big Data solutions, which in turn will lead to better solutions for customers built on the reference architectures that Red Hat and its ecosystem partners develop.
These are ambitious plans, and there will certainly be plenty of debate about whether or not Red Hat can actually pull them off, but at least one expert seems to think it can.
Ashish Nadkarni, Research Director of System Storage at IDC, stated:
“Red Hat is one of the very few infrastructure providers that can deliver a comprehensive big data solution because of the breadth of its infrastructure solutions and application platforms for on-premises or cloud delivery models. As a leading contributor to open source communities developing essential technologies for the big data IT stack – from Linux to OpenStack and Gluster – Red Hat will continue to play a pivotal role in Big Data.”
Red Hat is growing its presence in both the analytics market and the cloud: just a couple of months ago, the firm shelled out over $100 million for New Jersey-based virtualization solutions vendor ManageIQ. The firm’s IP is being integrated with CloudForms, a part of Red Hat’s SaaS offering.
Contributors: Maria Deutscher