Converging on the future of Big Data | #HPdiscover
When it comes to cloud computing, Hewlett-Packard has gotten on board with its ConvergedSystem platform, including the newly released ConvergedSystem 900 (CS900) server. Using a common converged infrastructure architecture across its entire server, storage, and networking lineup, HP ConvergedSystem pools those resources, allowing them to be shared across a variety of applications while still being managed from a single management platform using standard security software.
Paul Miller, VP of Marketing for HP Converged Systems, sat down with Dave Vellante and Jeff Frick of theCUBE at HP Discover in Las Vegas to talk about the latest developments with the converged system.
Revolutionary scalability and real-time data analysis
Unlike the smaller, more time-intensive databases of the past, the CS900 relieves much of that burden. According to Miller, for the first time customers have a database that scales up to 12 physical terabytes of data, where they can run in-memory transactions and analyze the same data at the same time.
“This is revolutionary,” Miller said.
He also pointed out that when people talk about Big Data and in-memory, they usually focus on the speed. However, two other important elements go beyond speed alone. “It’s the ability to have this business transformation of doing real-time analytics on the data, not moving it out. And what that does then is bring massive simplicity.”
He added that the customers he talks to spend more time moving data from the transactional database to a separate database for business warehousing (BW) or analytics. That takes time and resources, adds complexity, and requires multiple different software tools.
“When you can collapse it all into one, you have a great solution that is highly scalable, taking out more complexity than you can out of anything else,” he said. “And that’s the real magic of what we announced with the ConvergedSystem 900.”
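To make that contrast concrete, here is a minimal sketch of the two approaches Miller describes, using Python’s built-in sqlite3 module purely as a stand-in; it is not tied to the CS900 or any HP software, and the orders table and its columns are hypothetical.

```python
import sqlite3

# Stand-in transactional database with a hypothetical "orders" table.
oltp = sqlite3.connect(":memory:")
oltp.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
oltp.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "EMEA", 120.0), (2, "AMER", 75.5), (3, "EMEA", 42.0)])

# Approach 1: the extra hop Miller describes -- extract rows from the
# transactional store and load them into a separate analytics database
# before any reporting query can run.
analytics = sqlite3.connect(":memory:")
analytics.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
analytics.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                      oltp.execute("SELECT id, region, amount FROM orders"))
copied_report = analytics.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region").fetchall()

# Approach 2: collapse the two into one -- run the same aggregate
# directly on the transactional database, so there is no copy step and
# no second system or tool chain to manage.
live_report = oltp.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region").fetchall()

print(copied_report, live_report)
```

The first path is the extra hop he criticizes; the second is the “collapse it all into one” model, where analytics run against the live transactional data with no copy step in between.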
In-memory
One of the biggest reasons for the success of the ConvergedSystem is its in-memory capability. In-memory computing lets a server access data faster because it pulls that data from main memory rather than from disk. According to Miller, in-memory is so hot right now for a few reasons.
“One is the cost of memory has come down significantly. Two, architectures like the CS900 enable you to scale to 12 terabytes. Most customers can get compression from 3 to up to 10 times. So, we’re talking about real, live data, online transaction processing (OLTP) data, in the 50 to 80 terabytes,” he explained. “That means now almost any workload in the world, any database in the world, we can handle and drive. So, in the past, you could only have scalability to 2 or 4 terabytes. Not that interesting for most people’s real-time data — most retailers, big financial shops. But now with 12 terabytes you can really get the performance and the scale that you need.”
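As a rough back-of-the-envelope check on those figures, and reading Miller’s compression numbers as ratios of raw data size to in-memory footprint (an interpretation of the interview, not an HP specification), the arithmetic below shows how 12 terabytes of physical memory can cover the raw OLTP volumes he mentions.

```python
# Back-of-the-envelope: how much raw OLTP data a 12 TB in-memory system
# can hold at the compression ratios Miller cites (roughly 3x to 10x).
physical_memory_tb = 12

for ratio in (3, 4, 7, 10):
    effective_tb = physical_memory_tb * ratio
    print(f"{ratio}x compression -> ~{effective_tb} TB of raw data in memory")

# At around 4x to 7x, the effective capacity lands roughly in the
# 50-80 TB range Miller describes for real-world OLTP data.
```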