UPDATED 13:31 EDT / APRIL 05 2012

NEWS

The Future’s So Bright I Gotta Wear Google Glasses: 5 Possible Innovations

By now, almost everyone watching Google (and the Internet) will be aware of the first prototype of the search giant's sci-fi augmented reality glasses out of its secretive Google X lab. They've changed somewhat from our first expectations (a bit less face-covering and more VISOR-like), but Project Glass still holds all the elements necessary to make it a great piece of hardware for the geek cognoscenti.

As a still-burgeoning technology that will likely run on Android, and will therefore receive all the apps we'd expect, we can guess that the glasses will have access to Google Video, Google Maps, and many other things. That's the foundation upon which these sci-fi glasses will build into the world; but where can we go from here?

Seeing your way home: Augmented reality GPS on-the-nose

Sure, Google Maps is wonderful and can even give you an overhead map (anyone who plays online games will know exactly where I'm going here) or a semi-transparent minimap. Augmented reality glasses could help people find their way around in two ways: a minimally distracting map in your vision, and an overlay on what you see that displays guidance.

With the simple underlying technology of GPS, location awareness, and Google Maps, a minimap could be projected onto the upper corner of your vision, or a semi-transparent map could overlay it entirely. The minimap could render at roughly 10-meter resolution in either a satellite view or an atlas style. Connected to direction awareness (via a compass or some other sensor), it could then give the wearer an indicator for points of interest and their intended destination.
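The math behind such an indicator is well-trodden geodesy: given the wearer's GPS fix and compass heading plus a point of interest, compute the distance and the bearing relative to where they're facing. A minimal Python sketch of that idea (the function names are mine for illustration, not any real Glass API):

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (meters) and initial bearing (degrees, 0 = north)
    from point 1 to point 2, via the standard haversine formula."""
    R = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    distance = 2 * R * math.asin(math.sqrt(a))
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return distance, bearing

def poi_indicator(heading, lat, lon, poi_lat, poi_lon):
    """Where a point of interest sits relative to the wearer's compass heading:
    0 degrees is dead ahead, negative is to the left, positive to the right."""
    distance, bearing = distance_and_bearing(lat, lon, poi_lat, poi_lon)
    relative = (bearing - heading + 540) % 360 - 180
    return distance, relative
```

From there, the HUD only needs to decide which edge of the minimap the indicator hugs and how far away to say the destination is.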

Next, there's direct augmented reality rather than a UI overlay. Since the glasses should have a camera that can "see" what the wearer sees, given the GPS coordinates of a go-to-here destination they could project arrows, lines, or even flashing lights along a potential route. With awareness of nearby buildings and roads, the direction lines could act as guides. It's a little dangerous because it could be more distracting than a plain map, but it would make finding a location far easier.
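Placing an arrow in the camera view is, at its crudest, a matter of mapping that relative bearing into screen space. A toy sketch, assuming a display with a known horizontal field of view (the field-of-view and resolution numbers here are invented, not Glass specs):

```python
def bearing_to_screen_x(relative_bearing, fov_deg=90.0, screen_width=1280):
    """Map a relative bearing (degrees, 0 = straight ahead) to a horizontal
    pixel position on the display. Returns None when the waypoint is outside
    the field of view, in which case the HUD would show an edge arrow instead."""
    half_fov = fov_deg / 2.0
    if abs(relative_bearing) > half_fov:
        return None
    # A linear mapping is a crude stand-in for a real camera projection,
    # but it is plenty for drawing a flashing guide arrow.
    return int((relative_bearing + half_fov) / fov_deg * screen_width)
```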

This is pretty basic video game heads-up display (HUD) technology, a UI that anyone who plays online games is already used to, and it would probably work great here.

Highly immersive augmented reality locations: Meta-museum everywhere

Advertisers will love this element of augmented reality and visual overlay, but it also has a great deal of use for Wikipedia-like projects. I imagine a wiki-museum taking advantage of augmented reality and location awareness to attach meta-information to monuments, objects, and locales, letting wearers of the glasses pull up further information on the place they're in.

For example, if I'm walking through downtown Tempe and look at the historical buildings, some of them have plaques describing the history of the place, but that's only a small portion of everything that's happened there. Looking at a plaque, or a historical building, could trigger an overlay event asking if I wanted more information. That could lead me to a voiceover, an HTML display, or something else with further detail.
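The plumbing for that trigger is simple: keep a geotagged database of entries and raise a prompt whenever the wearer lingers near one. A hypothetical sketch (the entries, coordinates, and 50-meter threshold are all invented for illustration):

```python
import math

# Hypothetical geotagged wiki entries; names and coordinates are invented
# for illustration, not pulled from any real dataset.
POI_DATABASE = [
    {"name": "Hayden Flour Mill", "lat": 33.4295, "lon": -111.9420,
     "summary": "Historic mill on the edge of downtown Tempe."},
    {"name": "Territorial-era storefront", "lat": 33.4145, "lon": -111.9094,
     "summary": "Plaque marks the original 1890s facade."},
]

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance in meters between two GPS fixes."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(a))

def nearby_entries(lat, lon, radius_m=50):
    """Entries within radius_m of the wearer, nearest first, so the HUD
    can raise a 'want more information?' prompt."""
    hits = [(distance_m(lat, lon, p["lat"], p["lon"]), p) for p in POI_DATABASE]
    hits.sort(key=lambda pair: pair[0])
    return [p for d, p in hits if d <= radius_m]
```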

Add that to a Wikipedia-like system with crowdsourcing and democratized information and you have a ready-made system for local history, venues, and information. Instant augmented reality tourism.

We've already seen this in effect with cell phone apps that do augmented reality advertising: look at a billboard or an ad on the side of a bus and the glasses could display an advertisement for you, tell you the closest place to buy a Coke, or show where a particular bar is situated. This links directly to the navigation system described above.

Augmented reality shopping and the virtual grocer

We've all done it before: we go grocery or retail shopping and then either forget what we wanted to buy (those of us who forget to make lists) or find something we like and can't figure out whether we should purchase it. The advent of smartphones has helped a great deal, because now people can simply photograph the UPC of a product and get reviews, store lists on their phones and check them off, and so forth.

Now the possibility of augmented reality spectacles opens up a whole new world of information capture.

Combine the last three discussion points with a grocery store that has published metadata about all of its aisles, uses RFID to track where items are stocked, and keeps it all online for the glasses to connect to, and you've got the makings of a virtual grocer. All you'd have to do is go online, enter your desired products, and then go to the store. First, a minimap gives you the location of each item you're looking for; then another app watches the camera on the glasses and overlays HUD highlights representing coupons or deals; and finally, grabbing an item and tossing it into your cart or bag could check it off your list.
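That last step could be as simple as matching RFID tag reads against the shopping list. A toy sketch, where the tag IDs and the store's data feed are entirely made up:

```python
# Hypothetical shopping list keyed by RFID tag ID; in practice this would
# come from the grocer's published metadata feed, which is an assumption.
shopping_list = {
    "tag-0001-milk":  {"name": "Milk, 1 gal", "aisle": 4, "done": False},
    "tag-0002-bread": {"name": "Bread",       "aisle": 7, "done": False},
}

def on_rfid_read(tag_id):
    """Called whenever the glasses (or a smart cart) see a tag drop into
    the basket; checks the matching item off the list."""
    item = shopping_list.get(tag_id)
    if item and not item["done"]:
        item["done"] = True
        print(f"Checked off: {item['name']} (aisle {item['aisle']})")
    if all(entry["done"] for entry in shopping_list.values()):
        print("List complete; routing you toward checkout.")

on_rfid_read("tag-0001-milk")  # -> Checked off: Milk, 1 gal (aisle 4)
```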

In fact, if you find something you want to buy but don't know enough about it, you could point the camera at the UPC code or the packaging to pull up information from the product maker (or advertiser) and use that to decide whether you want it.
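The lookup itself is already a solved problem on phones; conceptually it's one HTTP request per scanned code. A sketch against a placeholder service (example.com stands in for whatever product-info API a real app would use):

```python
import json
import urllib.request

# example.com is a placeholder; a real app would point at whatever
# product-information service it partnered with.
PRODUCT_API = "https://example.com/products/"

def lookup_upc(upc):
    """Fetch product details (reviews, ingredients, pricing) for a UPC
    decoded from the glasses' camera frame."""
    with urllib.request.urlopen(PRODUCT_API + upc) as response:
        return json.load(response)
```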

People should be able to choose their level of immersion with this sort of application, especially with respect to the privacy policies of the grocer and the app they're using. After all, grocery stores and advertisers love data points about customers, and this would be an extremely big one: knowing not just what people shop for, but how they shop for it.

Gesture-controlled information retrieval: My office comes to my commute

Tablets and smartphones have really opened up the possibility of the virtual office in ways we haven't seen before. With an Internet connection everywhere, it's possible to stay on IM, get e-mail, and even work up presentations on a tiny screen. Augmented reality glasses add yet another layer: after a little adjustment, they would add a considerable amount of real estate to the visual field for displaying and interacting with data.

Fundamentally, the processing power and storage of the glasses themselves would be limited; however, thanks to cloud storage and streaming, it would be trivial to have them run a virtual desktop elsewhere and stream it into our vision while we're on the train or in a plane. NFC or Bluetooth could even link them to a more powerful device like a laptop.

Someone riding the train to work could bring up a Minority Report-like HUD and interact with their work software as they sit there (or just play video games). We've already seen how simple gesture recognition can be with a pair of cameras and an infrared projector, thanks to the Kinect; the glasses can see your hands and detect the motion of your head. With low enough latency, they could allow a person to bring up information and manipulate it with in-air gestures.
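Once the hand is tracked, even a crude swipe detector is just arithmetic over recent positions. A hypothetical sketch (the thresholds are invented; the hand tracking itself would come from the glasses' camera pipeline):

```python
def detect_swipe(positions, min_dx=0.3, max_dy=0.1):
    """Classify a horizontal swipe from normalized (x, y) hand positions
    sampled over roughly the last half second.

    positions: list of (x, y) tuples in [0, 1] screen coordinates.
    Returns 'left', 'right', or None.
    """
    if len(positions) < 2:
        return None
    (x0, y0), (x1, y1) = positions[0], positions[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dy) > max_dy:   # too much vertical drift to count as a swipe
        return None
    if dx > min_dx:
        return "right"     # e.g. advance to the next slide
    if dx < -min_dx:
        return "left"      # e.g. go back a slide
    return None

# A hand tracked moving steadily rightward across the view:
print(detect_swipe([(0.2, 0.50), (0.45, 0.52), (0.7, 0.50)]))  # -> 'right'
```

Real systems would layer filtering and learned classifiers on top, but the shape of the problem really is this simple.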

The first virtual keyboards will probably be slow to react and primitive, but gesture recognition technology is becoming a mainstay of innovation (thanks again to the Kinect), and with that in mind, augmented reality glasses would give us the next link in the chain.

Experience what other people experience: The “personal movie” redux

In my e-book Born to the Spectacle: The Anti-Nokia Experience I introduced the idea of a highly interconnected society that uses augmented reality to "see" what other people see, and that experiences rich media and magazines in a way that integrates them into daily life. Well, the Google glasses are certainly not that far along, but as long as they have a camera (or ideally a pair of cameras) they can record what a person sees and hears in a format that can be played back to someone else.

We can already guess that the glasses will be connected to Google Video (YouTube) and people should be able to record directly from them—possibly with a swish of their hand—and then share whatever they see and experience immediately. In fact, people might see this as a chance to stream their experience at loud concerts, or directly from the scene of unfolding news, or simply stand in front of a mirror to talk to one of their friends.

That's the glasses-to-Internet aspect, but I also see a glasses-to-glasses aspect, which would probably start out extremely primitive. Taking advantage of as much of the real estate of the lenses as possible, a recording from one pair of glasses displayed through another could provide a more immersive portal for the wearer. They may not be right there with the recorder, but having the experience recorded and then projected back in a fashion closer to what it looked like firsthand could be a new form of rich media.

In Born to the Spectacle reporters gathered news and information by experiencing it directly. Here, we could have interviews done by bloggers and citizen journalists who look directly at their interviewees and speak to them one-on-one, with the recording played back through the glasses of other people, who get to look into the eyes of their subjects or stand right at the scene of a big event.

Point-of-view streaming could become the next big thing, especially given the power of sites such as Justin.tv.

