

Customers are excited about the opportunities that come from data analytics. As the Hadoop Summit is making clear this week, there is a lot of exploration going on right now, and a lot that can be achieved. Apache Kafka is one system helping to do just that.
“The demand for Kafka is for the ability to simultaneously broadcast to multiple ecosystems,” said Kevin Petrie, senior director and technology evangelist at Attunity, Inc. Kafka can take a stream of data, such as real-time transactions, and feed it into a pipeline where it mingles with data already in the Hadoop data lake, such as social media streams. This allows customers to view the data together, rather than as isolated strands, and to act on that information.
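Petrie’s fan-out point is easiest to see in code. Below is a minimal sketch using the open-source kafka-python client; the broker address, topic name, message fields and consumer group ids are illustrative assumptions, not details from the interview.

```python
# A minimal sketch of the fan-out pattern Petrie describes, using the
# open-source kafka-python client. Broker address, topic name and group
# ids are illustrative, not from the interview.
import json
from kafka import KafkaProducer, KafkaConsumer

# Producer: publish real-time transaction events to a single topic.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("transactions", {"account": 42, "amount": 19.99})
producer.flush()

# Each consumer group receives the full stream independently, so the
# same events can feed a Hadoop data lake loader and a real-time
# dashboard at the same time: the "simultaneous broadcast" in question.
lake_loader = KafkaConsumer(
    "transactions",
    bootstrap_servers="localhost:9092",
    group_id="hadoop-lake-loader",
    auto_offset_reset="earliest",
)
dashboard = KafkaConsumer(
    "transactions",
    bootstrap_servers="localhost:9092",
    group_id="realtime-dashboard",
    auto_offset_reset="earliest",
)
```

Because each consumer group tracks its own offsets, adding another downstream ecosystem is simply a matter of subscribing with a new group id.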
Petrie spoke to John Furrier (@furrier) and George Gilbert (@ggilbert41), hosts of theCUBE, from the SiliconANGLE Media team, during Hadoop Summit US at the San Jose Convention Center in California.
Attunity is also working to integrate these different sources of data, one way being to automate the creation of data warehouses. Traditional warehouses store information in rows and columns in a highly structured way; Hadoop, by contrast, has become a candidate for offloading cold data, storing it in a more cost-effective manner.
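For a concrete picture of that offload pattern, here is a minimal hand-rolled sketch using PySpark; the JDBC endpoint, table and column names, and cutoff date are all hypothetical, and Attunity’s products automate this kind of movement rather than requiring hand-written jobs.

```python
# A hand-rolled sketch of warehouse offload: copy cold (rarely queried)
# rows out of a relational warehouse and land them in Hadoop as Parquet,
# where storage is far cheaper. All names and values are illustrative.
import os
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cold-data-offload").getOrCreate()

# Read only the cold slice of the fact table from the warehouse.
cold = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://warehouse:5432/sales")  # hypothetical endpoint
    .option("dbtable", "(SELECT * FROM transactions WHERE tx_date < DATE '2015-01-01') AS t")
    .option("user", "etl")
    .option("password", os.environ["WAREHOUSE_PW"])
    .load()
)

# Partition by year so lake-side queries can prune old files cheaply.
(
    cold.withColumn("tx_year", F.year("tx_date"))
    .write.mode("append")
    .partitionBy("tx_year")
    .parquet("hdfs:///lake/transactions_cold")
)
```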
Attunity’s tools, and the logical data warehouses they help automate, let customers view and manage all of their data in a more holistic fashion.
Striving to make data analytics even stronger, Attunity announced Visibility 7.1 at the Hadoop Summit, bringing deeper intelligence to Hadoop. The tool will help customers understand how files are being managed, who is using the data and how, and “ensure Hadoop data lake has the necessary specifications,” Petrie said.
Ultimately, it will provide intuitive metrics that are crucial to customers when making decisions, and it could save them a lot of time and money.
Watch the entire video interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of the Hadoop Summit US.