On the misconceptions of open source vs. proprietary debate in Big Data

Editor’s Note: the following is an excerpt from an interview conducted with Jim Vogt, CEO of Zettaset. As part of our Big Data series, we’ve asked industry thought leaders about scaling Big Data applications, the open source vs. open core debate, and where the true spots of innovation are moving forward. The original interview was conducted by SiliconANGLE Senior Managing Editor, Kristen Nicole.


Big Data applications at the enterprise level


If you are looking for the best way to scale, you must first build a data store that fulfills all the prerequisites: everything from how you run applications to performance and scalability must be in place. With your data store infrastructure established, you create an automated, scalable, and repeatable process. You avoid the anguish of starting from scratch each time, and you gain leverage through software rather than through professional services. If you can scale your data store, you can harness the power of your applications while easily providing security and high performance.


Monetizing Big Data’s open source background for commercial use

First, we need to accept that the open source community, and the process by which open source technologies evolve, takes time. Organizations looking to tap into open source because it is cost-effective therefore face the dilemma that one size does not fit all. Monetization arises at this intersection, where vendors like Zettaset can fill the gaps left by open source, addressing real customer requirements and meeting enterprise-level expectations.


Big Data best practices: open source v. open core

Jim Vogt, President and CEO at Zettaset, Inc.

When it comes to open source versus open core and proprietary software, there are some mistaken assumptions. For instance, proprietary is often equated with closed and taken to imply vendor lock-in. Yet not all “non-open source” software solutions are proprietary or closed. At Zettaset we take an approach that is transparent to Hadoop distributions and analytics/BI applications. For example, Zettaset Orchestrator is fully compatible with distributions from Cloudera, Hortonworks, and IBM. Orchestrator has an open API that enables it to provide security and high availability to these distributions and applications. This distribution-agnostic model creates broader options for organizations, enabling them to work with many different vendors and products.


What’s still lacking in Big Data?

The Big Data industry is lacking solutions that can integrate with existing IT systems. Organizations with legacy technologies are not going to rebuild or discard their existing IT investments. Zettaset solutions are therefore designed to fit into existing environments in order to address real customer needs. Zettaset innovates by enabling enterprises to freely mix file systems, distributions, and databases to meet their business requirements.