UPDATED 11:18 EST / MARCH 26 2013

Google Glass and iWatch: Next-Gen Wearable Computing is All About the Sensors, Baby

Computing technology has come a long way from mainframes that took up entire floors of buildings; what was once mammoth now fits in a pocket. This is the crux of wearable computing: making the big small and giving mobility to big data and big information. Much of the mobile environment has focused on pulling data out of the cloud and into the hands of interested parties, forming windows into the Internet and the vast sea of human knowledge. But mobile devices don't just talk; they can also listen.

Smartphones are small, powerful information devices that are becoming audio-visual labs, capable of seeing and analyzing the world around them. Phones include GPS and other kinds of location awareness, and they tell the time (the first "sensor" any phone had); on the more exotic side, some include accelerometers, air-pressure sensors, and other suites of sensing technology. Everything a mobile device can "sense" about the world around the user is another pair of eyes and ears (and ESP) that apps and agents under the wearer's control can process and collate.
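To make that "process and collate" idea concrete, here is a minimal sketch in Python of an app folding a few sensor readings into a context summary. Every name, field, and threshold below is hypothetical; it models no real mobile OS API.

```python
# A toy model of a phone's sensor suite. All names and fields here are
# hypothetical; this mirrors no real mobile operating system API.
import time
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SensorSnapshot:
    timestamp: float = field(default_factory=time.time)
    latitude: float = 0.0                       # GPS fix
    longitude: float = 0.0
    accel_g: Tuple[float, float, float] = (0.0, 0.0, 1.0)  # accelerometer, in g
    pressure_hpa: float = 1013.25               # barometric pressure

def collate(snapshots: List[SensorSnapshot]) -> dict:
    """Fold a stream of snapshots into a crude context summary."""
    latest = snapshots[-1]
    # Horizontal acceleration above a small threshold hints at movement.
    moving = any(abs(s.accel_g[0]) + abs(s.accel_g[1]) > 0.3 for s in snapshots)
    return {
        "position": (latest.latitude, latest.longitude),
        "likely_moving": moving,
        # Falling pressure across the window can hint at incoming weather.
        "pressure_trend_hpa": latest.pressure_hpa - snapshots[0].pressure_hpa,
    }
```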

The "wearable" in wearable computing is exemplified by two offerings in the tech market: Google Glass (worn on the face) and the iWatch (worn on the wrist). Both locations permit even further use of sensors. Google Glass adds a point-of-view camera and path-of-gaze detection, while the iWatch could carry heartbeat, blood-pressure, and blood-oxygen sensors, along with accelerometers that could guess at what gestures the wearer's hand was making, even detecting the flex and tension of fingers.
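As a rough picture of how a wrist accelerometer could "guess at gestures," here is a toy classifier over raw readings. The thresholds are invented for illustration; an actual watch would rely on trained models rather than hand-picked cutoffs.

```python
import math
from typing import List, Tuple

def guess_wrist_gesture(samples: List[Tuple[float, float, float]]) -> str:
    """Crude gesture guess from raw wrist accelerometer readings (in g).

    The cutoffs here are invented for illustration only; a shipping
    device would use trained models, not hand-picked thresholds.
    """
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    peak = max(mags)
    mean = sum(mags) / len(mags)
    if peak > 2.5:
        return "shake or tap"
    if peak < 1.2 and abs(mean - 1.0) < 0.05:   # ~1 g means gravity only
        return "at rest"
    return "slow movement (possibly a raise-to-look)"
```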

Through the looking glass with augmented reality

To understand why sensors are so important to wearable computing, we must first look at the interface that wearables provide. Google Glass and the iWatch both face the problem every mobile device must overcome: very little screen real estate in a highly cluttered environment. According to Expertmaker CTO and founder Lars Hard, the danger for mobile wearables is that with so much information available, delivering too much of it, or the wrong information, makes a device detrimental to wearers rather than useful.

“[Wearable devices] must be smarter, and more proactive, prepared to give us information more than we request it,” Hard said in an interview. “If you’re projecting images and text in front of people’s eyes, then it must be the right material that you actually project. The relevance must be in total focus.”

Everyday mobile devices already make users superhuman by bringing information to them from all around: local news about events for a night out (even that there's a chance of rain), directions to restaurants during lunch, even a tap into the bus schedule to warn the wearer that a ride might be late. The problem is that all of this information is not relevant at the same time. Worse, most of it is clutter, and if it shows up in Google Glass it occludes the action in front of the wearer, which is anything but helpful.
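One way to picture the filtering job is a relevance score that gates what is allowed to reach the tiny display. The sketch below is purely illustrative: the fields and weights are assumptions, not anything drawn from Glass or the iWatch.

```python
def relevance(item: dict, context: dict) -> float:
    """Score an information item against the wearer's current context.

    The fields and weights are assumptions made for this sketch,
    not taken from any shipping product.
    """
    score = 0.0
    if item.get("location") == context.get("location"):
        score += 2.0   # nearby things matter more
    if item.get("time_window") == context.get("time_of_day"):
        score += 1.0   # lunch offers at lunchtime
    if item.get("urgent"):
        score += 3.0   # a late bus beats a distant rain forecast
    return score

def pick_for_display(items: list, context: dict, limit: int = 1) -> list:
    """Return only the few items worth occluding the wearer's view for."""
    ranked = sorted(items, key=lambda i: relevance(i, context), reverse=True)
    return [i for i in ranked[:limit] if relevance(i, context) > 0]
```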

Making sense of the sensors

This is where augmented reality and artificial intelligence mix to make sense of those sensors.

Hard explains that because most of the information that could be displayed isn't useful right now, and because devices such as Google Glass will have extremely limited user interfaces, it will be up to the device to know what to display, and when.

Both Glass and the iWatch will be more passive on the interface front than a smartphone, so they must be smarter about pushing information to users.

“We see a huge amount of signals coming to these devices, but one clear trend is that we add more and more sensors,” Hard adds, noting that sensors will be the savior when it comes to information overload. “We have accelerometers, temperature, etc. more and more sensor data, therefore these devices are very aware of the context that the person has surrounded themselves in.

“AI is one of the few things that we can use to take all of this sensor information and sort it out.”
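A toy version of that sorting step might fuse a few sensor-derived features into a single context label that apps can use to decide what deserves the display. Again, the features and cutoffs here are invented stand-ins for what a real system would learn.

```python
def infer_context(accel_variance: float, speed_mps: float, hour: int) -> str:
    """Fuse a few sensor-derived features into one context label.

    A stand-in for the AI layer Hard describes: the features and
    cutoffs are invented, where a real system would learn them.
    """
    if speed_mps > 3.0:
        return "in transit"            # faster than a brisk walk
    if accel_variance > 0.5:
        return "walking"               # jostling wrist or pocket
    if 9 <= hour < 17:
        return "stationary, likely at work"
    return "at rest"
```

Crude as these rules are, they illustrate the point: the more a wearable can sense, the better positioned it is to decide what is worth putting in front of the wearer's eyes.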

