Internet of Things: Impact on the Data Center + Big Data?
Editor’s Note: the following is an excerpt from an interview conducted with CohesiveFT CTO Chris Swan regarding the impact of the Internet of Things on the data center. Swan speaks on how the consumerization of IT is influencing Big Data, the gap between Big Data hype and Big Data solutions, and just how we connect Big Data to BYOD to the Internet of Things.
Internet of Things + Data Democratization
The Internet of Things (IoT) hasn’t begun to impact data center architecture yet. In due course we will see a ‘cloud inversion’, where processing and storage move from big, centrally managed data centers out to the edge. This is an entirely predictable change, as we’ve seen the oscillation between centralised and distributed models throughout the history of IT.
‘Big Data’ is largely an industry contrivance, whilst consumer cloud and smartphones are very real, so they’re following different trajectories. To misquote Roger Needham – ‘Whoever thinks their problem is solved by big data, doesn’t understand their problem and doesn’t understand big data’. There’s actually a fairly limited subset of problems where you get better results by throwing a larger amount of data at a simpler algorithm, and somehow we’ve allowed the hype around those things to saturate the market (mostly at the behest of vendors selling stuff with a ‘big data’ label slapped on the side).
The users I worked with for the past 12 years in financial services had dedicated groups of quantitative analysts (who we might also call ‘data scientists’ these days) to help them navigate data sets and the algorithms to apply to them. Such expertise is thin on the ground and very unevenly distributed.
Consumerization of IT + Internet of Things, and Their Collective Influence on the Data Center
There are many problems where end users want the power to ask questions of their data in real time and under their own control, so there’s a renaissance in tools to help do that. Whether those data sets are ‘big’ is largely irrelevant. As we instrument the world around us there will be an increasing need to store and process the data generated from that. The products and services that we see at the moment are mostly about getting that data to a central place in order to make sense of it and extract value.
Today’s cloud works despite the ‘thin straw’ of network connectivity, but as instrumentation generates ever more data, centralised models won’t be practical in the longer term, which is why processing and storage will become distributed. Since many of the use cases for IoT are local (e.g. traffic control, home automation, higher-resolution weather forecasting), it makes perfect sense for the data to stay local.
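The ‘keep it local’ argument can be illustrated with a toy sketch (the sensor, thresholds and field names here are hypothetical, not from the interview): raw readings are aggregated at the edge, and only a compact summary ever crosses the ‘thin straw’ to a central service.

```python
import random
import statistics

def summarize_readings(readings, alert_threshold=30.0):
    """Aggregate raw sensor readings locally; return only the compact
    summary that would be sent upstream. The raw data stays at the edge."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "alerts": sum(1 for r in readings if r > alert_threshold),
    }

# Simulate a local temperature sensor producing raw data at the edge.
random.seed(42)
raw = [20 + random.random() * 15 for _ in range(1000)]

summary = summarize_readings(raw)
# 1,000 raw points remain local; only this four-field dict would travel
# over the network to any central service.
print(summary)
```

The design choice mirrors the point above: bandwidth is spent on meaning (counts, means, alerts) rather than on shipping every raw sample to a central store.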
Making Big Data More Consumable
It’s not the architecture that needs to change in most cases, it’s the user experience. The tools for managing data have been built for and sold to an IT elite, and now end users want to cut out the middle man and ask questions of their data without help desks and trouble tickets. The software-defined data center allows us to build systems on demand rather than statically provision for a given volume of capacity. This opens up the opportunity to dynamically build out the resources needed to answer a question that a user is asking of a given set of data.
The main win here is being able to smash through the single-system barrier that previously existed, where anything bigger than a PC running a spreadsheet or local database involved an (enormously expensive and time-consuming) IT project to build a data warehouse or similar.
The Gap Between Big Data Hype and Big Data Solutions, and Where Does Security Lie?
The Big Data hype is mostly driven by vendors selling a story that unique, powerful and differentiating insights can be found by analysing more data, and their solutions are needed to do that. The flip side of the hype is that there are now a whole bunch of excellent open source solutions to problems associated with managing and analysing large data sets. So it’s now possible to build cost effective systems that work in different ways to traditional enterprise data management.
The key to solutions in this space is having an understanding of a given data set and the algorithms that can be applied to extract meaning and value from it. There’s no magic on offer here – just big scary mathematics that can be made to look like magic by skilled conjurors.
It’s entirely unreasonable to expect end users to take care of security – they just want to get on with their job with a minimum of interruptions and annoyances. That doesn’t mean it’s the IT department’s problem either. Security needs to be baked into solutions so that data is protected all the way from input to eyeball. Business users need to define and clearly communicate their risk appetite, and it’s up to IT to source and integrate the components of an overall solution that provide corresponding controls on risk.