Controversial ODPi specification launches to rein in the Hadoop ecosystem
A year after 14 of the biggest names in the Hadoop ecosystem joined forces to try to harmonize the vast array of disparate technologies that make up the platform, the first version of their project plan has been released. The ODPi Runtime Specification lays out what the group sees as the key ingredients a distribution should include to effectively address user demands.
Vendors wishing to meet the standard have to implement the Hadoop Distributed File System (HDFS) as the data storage component of their offerings, let customers process the information they keep inside using MapReduce, and utilize YARN to coordinate the analytics workflow. ODPi also details a long list of configuration requirements that need to be fulfilled to ensure the combined whole operates efficiently. The end goal is to have all the leading distributors of the analytics platform adopt the specification so as to provide a degree of interoperability not available to customers today.
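The division of labor among those three components can be sketched in plain Python. This is only an illustrative model of the map/shuffle/reduce pattern that MapReduce popularized; the function names are ours, not part of the ODPi specification or any Hadoop API.

```python
# Illustrative model of MapReduce's three phases (not an ODPi or Hadoop API).
from collections import defaultdict

def map_phase(records):
    # Each mapper emits (key, value) pairs -- here, a count of 1 per word.
    for line in records:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    # The framework groups all values by key between the two phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Each reducer aggregates the values for its keys -- here, summing counts.
    return {key: sum(values) for key, values in grouped.items()}

lines = ["hadoop yarn hadoop", "yarn mapreduce"]
counts = reduce_phase(shuffle(map_phase(lines)))
# counts == {"hadoop": 2, "yarn": 2, "mapreduce": 1}
```

In an actual deployment, HDFS supplies the input records, YARN schedules the map and reduce tasks across the cluster, and the framework handles the shuffle between them.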
The effort is motivated by the same goal that drives the OpenStack initiative over in the cloud ecosystem: giving users the freedom to move workloads between different vendors’ platforms without having to make expensive and time-consuming modifications. But as OpenStack’s backers have already discovered, enabling such interoperability is much easier said than done. For starters, the ODPi group will need to garner the support of the Hadoop distributors it’s hoping will adopt the standard, which may be a lost cause from the outset.
While Hortonworks Inc. was named among the consortium’s founding members when the standard was first announced a year ago, its two biggest rivals are not nearly as enthusiastic about the initiative. In fact, Cloudera Inc. went as far as publicly dismissing the effort at the time, arguing that customers simply don’t have a need to move their workloads from one Hadoop distribution to another. However, the backers of ODPi insist that interoperability has the potential to deliver tangible benefits in the long run.
One of the group’s main arguments is that having a common set of features could make it easier for vendors to embrace Hadoop. Instead of needing to be certified separately by every distributor, as is the case now, a third-party developer would only need to ensure its offering is compatible with ODPi. As a result, software makers that currently can’t or won’t spare the resources to support the analytics platform’s entire user base would be able to make their solutions much more broadly accessible, thereby increasing the choice of technologies available to customers.
The ODPi group hopes that enough vendors and users will jump on the bandwagon over time for its specification to start actively influencing the development roadmap of Hadoop. But no matter how many third parties end up joining its ranks, the plan’s success ultimately still hinges on securing the support of the three major distributors. And even if the consortium somehow manages to pull that off, it may still have a hard time reining in the sprawling upstream ecosystem.
All three major components in the ODPi Runtime Specification are currently being threatened by newer open-source alternatives. MapReduce in particular is losing a lot of ground to Spark, which is increasingly being deployed without HDFS or even YARN. And in environments where a resource manager is still required, Mesos is often used instead. In other words, the standard’s backers can expect a hard time realizing their vision for Hadoop.