UPDATED 16:31 EST / JUNE 18 2013

The Industrial Internet Will Open Up a Whole New Horizon for Developers

Development and operations have never quite seen a challenge like the one heralded by the advent of the Internet of Things and the Industrial Internet, two terms that mean essentially the same thing but live in different contexts. To make matters more interesting, these concepts bind together not only a multitude of devices but everything we know about the cloud and big data, all in one package. As more and more “things” are connected and emitting data via sensors, and more and more APIs are needed to permit the connection, authentication, and transmission of that data, both development and operations will have their hands full thinking about how their systems interact with their distant satellite information sources.

In order to understand the Industrial Internet, Wikibon’s David Floyer looked into which industry segments the technology has penetrated. Already we’re seeing GE looking into hooking up aviation equipment to Amazon’s cloud to get better metrics on how aircraft engines and parts are doing, and that should be a signal to the developer community at large that it’s time to start thinking about the problems associated with generating and collecting that information.

API Frameworks: Sensors and servers will have to be “smart” about their habits

As the number of sensors expands, each of those devices will have to connect back for its data to be collected. Here, developers might find the best model in the mobile industry, because it already does this: mobile devices load apps (across thousands of devices), the apps connect to the Internet and back to their servers, and voilà, that’s how they provide their services and make their money. The Industrial Internet will be extremely similar, with the difference that most of the time the devices will be inside a private network rather than reaching out over the open Internet.

However, this still means that scalability and reliability come into play when thinking about all those sensors. API frameworks will need to exist that allow the devices collecting data from sensors to connect back to the storage system, authenticate themselves, and then deliver the data, and, probably in a lot of cases, receive instructions back on how to handle future incoming data.
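To make that concrete, here is a rough sketch in Python of what the sensor-side half of such a framework might look like: push a batch of readings, authenticate, and accept whatever updated instructions the server hands back. The endpoint path, field names, and bearer-token scheme are placeholders of my own, not any vendor’s actual API.

```python
# Hypothetical sensor-side client: authenticate, deliver buffered readings,
# and accept any new collection instructions the server sends back.
import time
import requests

API_BASE = "https://collector.example.internal/api/v1"   # assumed private endpoint

def push_readings(sensor_id: str, token: str, readings: list[dict]) -> dict:
    """Deliver a batch of readings and return the server's updated instructions."""
    resp = requests.post(
        f"{API_BASE}/sensors/{sensor_id}/readings",
        json={"sent_at": time.time(), "readings": readings},
        headers={"Authorization": f"Bearer {token}"},     # assumed auth scheme
        timeout=10,
    )
    resp.raise_for_status()
    # e.g. {"sample_interval_s": 60, "buffer_limit": 500}
    return resp.json().get("instructions", {})
```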

Take the GE example, where a sensor connected to a wind-farm turbine could collect data about the turbine (speed, power output, temperature, resistance, etc.) every minute or every 10 seconds. All that data would either have to be stored on the device to be transmitted later or transmitted instantly. Depending on the bandwidth available, it might not be possible (or even desirable) for every sensor on an entire turbine to collect data every second, so it’s set to report every 60 seconds. However, if something starts to go wrong, a faster collection rate might be triggered.
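That adaptive cadence is easy to picture in code. The loop below is only an illustration: the 95 °C limit, the intervals, and the read_sensor() and transmit() helpers are all assumptions, not anything GE has described.

```python
# Illustrative adaptive-sampling loop: report every 60 seconds in normal
# operation, but tighten the interval when a reading drifts out of bounds.
import time

NORMAL_INTERVAL_S = 60
ALERT_INTERVAL_S = 10
TEMP_LIMIT_C = 95.0          # assumed "something is going wrong" threshold

def collection_loop(read_sensor, transmit):
    interval = NORMAL_INTERVAL_S
    while True:
        reading = read_sensor()   # e.g. {"rpm": ..., "temp_c": ..., "output_kw": ...}
        transmit(reading)         # send now, or buffer for later delivery
        # Drop back to the slower cadence once the reading looks healthy again.
        interval = ALERT_INTERVAL_S if reading["temp_c"] > TEMP_LIMIT_C else NORMAL_INTERVAL_S
        time.sleep(interval)
```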

Not only do the sensors on the “thing” need to be instructed properly by their own internal resources, but the receiving server needs to be prepared to see more or less data flowing from them.
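On the receiving side, one common way to absorb that variability is to decouple ingestion from processing. The sketch below is a minimal, assumed design using a bounded in-memory queue; a production collector would use a durable message broker, but the shape of the idea is the same.

```python
# Minimal sketch of a receiving tier that accepts batches of whatever size a
# sensor sends and lets the processing side drain them at its own pace.
import queue
import threading

ingest_queue: "queue.Queue[dict]" = queue.Queue(maxsize=100_000)

def accept_batch(sensor_id: str, readings: list[dict]) -> int:
    """Called by the API layer: enqueue however many readings the sensor sent."""
    for r in readings:
        ingest_queue.put({"sensor_id": sensor_id, **r}, timeout=5)
    return len(readings)

def process_forever(handle_reading):
    """Drain the queue at the processing tier's own pace."""
    while True:
        handle_reading(ingest_queue.get())
        ingest_queue.task_done()

if __name__ == "__main__":
    threading.Thread(target=process_forever, args=(print,), daemon=True).start()
    accept_batch("turbine-7", [{"temp_c": 71.2}, {"temp_c": 71.4}])
    ingest_queue.join()
```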

Thinking along these lines will be just as important as developing authentication methods between the sensor devices and the servers (probably public/private key exchanges to encrypt the transmissions, or at least to identify individual sensors) and producing API frameworks that leave room for new sensors to be added on the fly in environments where sensors will come and go. The scale of the Industrial Internet could be just as daunting as a few hundred thousand customers with smartphones, and it is a little more critical to make sure each sensor knows how to operate in its niche.
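One way the per-sensor identification piece could work: each device holds a private key and signs its payloads, and the collector verifies the signature against a registry of public keys before trusting the data. This sketch uses the third-party cryptography package; key distribution and on-the-fly registration are out of scope here, and the sensor ID is made up.

```python
# Sign payloads on the device, verify on the collector, so each sensor
# can be individually identified.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Provisioning time: generate a keypair per sensor and register the public key.
device_key = Ed25519PrivateKey.generate()
registry = {"turbine-7-vibration-2": device_key.public_key()}

# Device side: sign the serialized payload.
payload = json.dumps({"sensor_id": "turbine-7-vibration-2", "rpm": 1480}).encode()
signature = device_key.sign(payload)

# Collector side: verify before accepting the reading.
def is_authentic(sensor_id: str, payload: bytes, signature: bytes) -> bool:
    try:
        registry[sensor_id].verify(signature, payload)
        return True
    except (KeyError, InvalidSignature):
        return False

assert is_authentic("turbine-7-vibration-2", payload, signature)
```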

Cloud-thinking across time and “thing”

On the consumer side of the Internet of Things, Expertmaker CTO and founder Lars Hard mentioned to me that it was “all about the sensors,” and there’s no reason to believe this is any less true for the Industrial Internet. Those sensors are just connected to many more things than they are directly to people, and the end product is designed to go to an operator rather than a consumer.

This means that after the challenge of detecting and connecting all those sensors and then collecting all their data is done (or, more accurately, is ongoing), there’s going to be an analysis phase. This is where big data and technology like Hadoop come into play. For an extensive list of examples and challenges, see the report by Jeff Kelly of Wikibon on the subject.
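For a sense of what that looks like in practice, here is a toy Hadoop Streaming-style job over historical sensor logs: a mapper that emits (turbine, temperature) pairs and a reducer that averages them. The CSV input format is an assumption made for the example.

```python
# Toy Hadoop Streaming job: average temperature per turbine from logs shaped
# like "turbine_id,timestamp,temp_c".
# Run as: job.py map < logs.csv | sort | job.py reduce
import sys
from itertools import groupby

def mapper(lines):
    for line in lines:
        turbine_id, _timestamp, temp_c = line.strip().split(",")
        print(f"{turbine_id}\t{temp_c}")

def reducer(lines):
    # Hadoop sorts mapper output by key before it reaches the reducer.
    parsed = (line.strip().split("\t") for line in lines)
    for turbine_id, group in groupby(parsed, key=lambda kv: kv[0]):
        temps = [float(v) for _, v in group]
        print(f"{turbine_id}\t{sum(temps) / len(temps):.2f}")

if __name__ == "__main__":
    (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)
```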

As with most big data analysis, there’s a spectrum of modes operators are interested in, split between real-time analysis and historical analysis. For the most part, big data will concern itself with historical analysis, trying to detect and identify faults or enabling engineers to optimize equipment based on how it acted and reacted in the past. For more potentially catastrophic problems, the system will want to compare that historical data to current incoming data and warn a human (or trigger a programmed response) about whatever is currently going on.
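The bridge between the two modes is usually some kind of baseline distilled from the historical side that the real-time side can check new readings against. A minimal sketch, with field names assumed:

```python
# Summarize past readings into a per-sensor baseline (mean and standard
# deviation) for the real-time path to compare new data against.
from collections import defaultdict
from statistics import mean, stdev

def build_baselines(history: list[dict]) -> dict:
    """history: past readings like {"sensor_id": ..., "value": ...}."""
    by_sensor = defaultdict(list)
    for r in history:
        by_sensor[r["sensor_id"]].append(r["value"])
    return {
        sid: {"mean": mean(vals), "stdev": stdev(vals) if len(vals) > 1 else 0.0}
        for sid, vals in by_sensor.items()
    }
```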

In the wind-turbine example, there might be a case where greatly increased wind speed requires the system to react by reducing the total electrical load on the generator to keep it from providing too much resistance. Sensors reporting from the turbine to the analytical servers could fire that trigger automatically or ask a human operator to judge from the visualized data in front of her and make the decision.
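A real-time rule for that decision could look something like the following. The thresholds, the shed_load and notify_operator callbacks, and a baseline shaped like the sketch above are all assumptions for illustration.

```python
# Illustrative real-time rule: shed load automatically on an extreme wind-speed
# deviation, or escalate a milder one to a human operator.
def evaluate_wind_reading(reading: dict, baseline: dict, shed_load, notify_operator):
    deviation = reading["wind_speed_ms"] - baseline["mean"]
    sigma = baseline["stdev"] or 1.0
    if deviation > 5 * sigma:
        shed_load(reading["sensor_id"])   # programmed response, no human in the loop
    elif deviation > 3 * sigma:
        notify_operator(reading)          # surface it for a human to judge
```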

However, I see a whole other problem cropping up, and that’s occasionally connected sensors. All this data may paint a better picture of an industrial application, but with real-time big data analysis, not all the data arrives in a continuous stream in the order it was sent. As a result, the Hadoop system underneath needs not just to handle a lot of differentiated data from numerous sources, but to be prepared for historical data that arrives late and out of order.

After all, some of those sensors might have had to wait minutes or hours before offloading because network congestion got too high, and decision making would need to take into account which sensors had checked in and when they were next expected to.
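Two small pieces of that bookkeeping are easy to sketch: order readings by the timestamp recorded on the device rather than by arrival time, and flag sensors that have gone quiet past their expected window. The field names and the twice-the-interval rule are my own assumptions.

```python
# Cope with occasionally connected sensors: sort on source timestamps and
# track which sensors are overdue for a check-in.
import time

def order_for_analysis(readings: list[dict]) -> list[dict]:
    """Late uploads land out of order; sort on the device-side timestamp."""
    return sorted(readings, key=lambda r: r["measured_at"])

def overdue_sensors(last_seen: dict, expected_interval_s: dict, now=None) -> list[str]:
    """Which sensors are past twice their expected reporting interval?"""
    now = now if now is not None else time.time()
    return [
        sid for sid, ts in last_seen.items()
        if now - ts > 2 * expected_interval_s.get(sid, 60)
    ]
```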

Food for thought in the Industrial Internet.

