UPDATED 13:27 EST / DECEMBER 31 2012


The State of Augmented Reality in 2012: Heads Up, Metadata, and Mapping

Cameras. They’re everywhere and in everything. On the street, we usually connect cameras with the idea that someone is watching us, but in our hands they have a second purpose: they let us record and translate our own experience through a device. That’s a long way of saying, “My smartphone can use its camera to give me more information about what I can see, and at the same time share my experience.”

Right now, augmented reality is still in its infancy. It’s the stuff of video games and a feature slapped onto applications questing for a foothold in a culture that is putting smartphones in every hand. People already look to their phones for a great deal of information: maps, movie times, traffic reports, the news, even self-recording. But it may still be a few more years before we see many people peering at the screen and looking (through the camera) at the world.

Smartphones are a clumsy way to do augmented reality in a manner that comes naturally; in the next article, I’ll discuss Google Glass and the implications of a wearable augmented reality device.

Augmented reality and mapping our world with metadata

In 2011 we talked about a trend of hyperlocal information delivery. Smartphones and other connected devices provide an excellent opportunity to let people hook into a nexus of information about the area around them. Use Bluetooth, GPS, or some other means of sensing your surroundings, and your smartphone becomes a focal point for businesses, museums, passersby, or others to deliver information fitting for wherever you’re standing.

This part of augmented reality is all about giving a second layer of context to place. Point your smartphone at a statue in a park and receive a short blurb about the statue’s history; point it at a business across the street and get a lunch menu; point it at the street and a menu appears offering information on when the next bus passes by or offers to hail a cab. All this from knowing where you’re standing and what the phone is pointed at.
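That lookup boils down to knowing the phone’s position and compass heading and filtering nearby points of interest down to whatever sits in the camera’s field of view. A minimal sketch, using hypothetical points of interest and made-up coordinates (none of these names or locations come from the article):

```python
import math

# Hypothetical points of interest near the user, tagged with coordinates.
POINTS_OF_INTEREST = [
    {"name": "Founder's Statue", "lat": 40.7680, "lon": -73.9819,
     "blurb": "A short blurb about the statue's history."},
    {"name": "Corner Deli", "lat": 40.7685, "lon": -73.9812,
     "blurb": "Lunch menu: sandwiches, soup, coffee."},
]

def bearing(lat1, lon1, lat2, lon2):
    """Compass bearing in degrees from point 1 to point 2."""
    d_lon = math.radians(lon2 - lon1)
    lat1, lat2 = math.radians(lat1), math.radians(lat2)
    x = math.sin(d_lon) * math.cos(lat2)
    y = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon))
    return math.degrees(math.atan2(x, y)) % 360

def visible_points(lat, lon, heading, fov=60):
    """Return points of interest that fall inside the camera's field of view."""
    hits = []
    for poi in POINTS_OF_INTEREST:
        b = bearing(lat, lon, poi["lat"], poi["lon"])
        # Smallest angular difference between the bearing and the heading.
        diff = abs((b - heading + 180) % 360 - 180)
        if diff <= fov / 2:
            hits.append(poi)
    return hits
```

A real app would pull the candidate list from a server based on GPS position first, then use the compass to decide which blurbs to overlay on screen.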

Nokia has been attempting to lead this trend in augmented reality connected devices by baking the capability directly into its phones. As I said above, a camera plus a way to give software context about where you’re standing allows your phone to become a second skin over what you can see. Think about the applications to shopping, such as this app developed by IBM to aid in personal shopping.

Using a smartphone or tablet to “look” at products on shelves, and with access to a huge database of information, a user could employ such an app not just to navigate the store (imagine being able to ask your phone, “Where’s the taco seasoning?”) but also to run quick calculations such as price-per-ounce comparisons between products, all displayed on the surface of the mobile device as you look “through” it at a shelf of products.
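The price-per-ounce comparison itself is simple arithmetic once the app has product data. A sketch with made-up product records standing in for the store’s database (the brand names and prices are invented for illustration):

```python
# Hypothetical product records, as an app might pull from a store database.
products = [
    {"name": "Brand A taco seasoning", "price": 1.29, "ounces": 1.0},
    {"name": "Brand B taco seasoning", "price": 2.49, "ounces": 2.5},
]

def price_per_ounce(product):
    """Unit price, so different package sizes compare fairly."""
    return product["price"] / product["ounces"]

# Rank cheapest-per-ounce first, as an overlay might order them on screen.
ranked = sorted(products, key=price_per_ounce)
for p in ranked:
    print(f"{p['name']}: ${price_per_ounce(p):.3f}/oz")
```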

Going to cook dinner? Blend in a cookbook app that delivers a list of ingredients to the augmented reality app—which turns that into a shopping list, which provides a list of likely products, connects with the store’s own database of products, and even helps price it out.
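That chain, from recipe to priced shopping list, is really a series of lookups. A sketch under the assumption that the cookbook app hands over an ingredient list and the store exposes a product catalog (all names and prices here are hypothetical):

```python
# Hypothetical ingredient list delivered by a cookbook app.
recipe_ingredients = ["ground beef", "taco seasoning", "tortillas"]

# Hypothetical store catalog mapping ingredients to likely products.
catalog = {
    "ground beef": {"product": "80/20 ground beef, 1 lb", "price": 4.99},
    "taco seasoning": {"product": "Brand B taco seasoning", "price": 2.49},
    "tortillas": {"product": "Flour tortillas, 10 ct", "price": 2.79},
}

def build_shopping_list(ingredients, catalog):
    """Match each ingredient to a likely product and total the price."""
    items = [catalog[i] for i in ingredients if i in catalog]
    total = sum(item["price"] for item in items)
    return items, total

items, total = build_shopping_list(recipe_ingredients, catalog)
print(f"{len(items)} items, estimated total ${total:.2f}")
```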

Although the science fiction video SIGHT is a sterile look at how this technology might be used, it’s still an awesome view of how it could change our lives (if we didn’t sacrifice our humanity for the tool).

Delivering information in a more convenient way

Augmented reality has applications for public transit, getting from place to place, and even just learning more about the world on the fly; but it also has a lot of applications in the workplace. One excellent use of augmented reality is in jobs that require a certain amount of on-the-spot information from gauges, communication, or simulation.

In 2012, I came across an article about a welding mask that included a camera and a heads-up display (HUD) that enabled welders to finely adjust their welding technique to match the material and flame heat for a stronger weld. Software would watch through the camera in real time and project into the welder’s vision a readout that helped them decide how quickly or slowly to weld the seam for the best results.

We’ve seen this sort of thing already with visualization systems such as Kinect hacked to help surgeons blur the lines between visualizing and working on a patient, or enabling surgeons to manipulate images without touching anything (to maintain the antiseptic environment). Much medical equipment already does a great deal of context-adding when used to visualize the human body, making it a perfect candidate for a HUD during surgery.

We’ve seen the first steps toward using vehicle windows as a HUD with “smart glass.” Right now this is more about entertaining passengers than delivering information to drivers, but it could let public transit or delivery drivers receive maps and other information that doesn’t obstruct their line of sight, enabling them to check a route without taking their eyes off the road.

This is only the first part of a series on augmented reality from 2012 into 2013, so stay tuned for a discussion of Google Glass and wearable augmented reality technology.

