On the misconceptions in the open source vs. proprietary debate in Big Data
Editor’s Note: the following is an excerpt from an interview conducted with Jim Vogt, CEO of Zettaset. As part of our Big Data series, we’ve asked industry thought leaders about scaling Big Data applications, the open source vs. open core debate, and where the true spots of innovation are moving forward. The original interview was conducted by SiliconANGLE Senior Managing Editor, Kristen Nicole.
Big Data applications at the enterprise level
If you are looking for the best way to scale, you must first build a data store that satisfies every prerequisite: how you run applications, performance, and scalability all need to be in place. With the data store infrastructure established, you create an automated, scalable, and repeatable process. You avoid the anguish of starting from scratch each time and scale through software rather than professional services. If you can scale your data store, you can harness the power of your applications while providing security and high performance with ease.
Monetizing Big Data’s open source background for commercial use
First, we need to accept that the open source community, and the process by which open source technologies evolve, takes time. Organizations looking to tap into open source because it is cost-effective therefore face the dilemma that one size does not fit all. Monetization arises at this intersection, where vendors like Zettaset can fill the gaps left by open source, addressing real customer requirements and meeting enterprise-level expectations.
Big Data best practices: open source v. open core
When it comes to open source versus open core or proprietary software, there are some mistaken assumptions. For instance, proprietary is often equated with closed and taken to imply vendor lock-in. Yet not all “non-open-source” software solutions are proprietary or closed. At Zettaset we take an approach that is transparent to Hadoop distributions and analytics/BI applications. For example, Zettaset Orchestrator is fully compatible with distributions from Cloudera, Hortonworks, and IBM. Orchestrator has an open API that enables it to provide security and high availability to these distributions and applications. This distribution-agnostic model creates broader options for organizations, enabling them to work with many different vendors and products.
What’s still lacking in Big Data?
The Big Data industry is lacking solutions that can integrate into existing IT systems. Organizations with legacy technologies are not going to rebuild or discard their existing IT investments. Thus, Zettaset solutions are designed to fit into existing environments in order to address real customer needs. Zettaset innovates by enabling enterprises to freely mix file systems, distributions, and databases to meet their business requirements.