Google has been riding a wave of hype around its Google Glasses (called “Project Glass”), and the company isn’t shy about the product. In fact, Google interrupted the G+ portion of its own keynote to demo the glasses and show the astonished audience a multitude of uses for the new social- and Internet-enabled spectacles.
The first part of the demonstration showed off how small and light this wearable device is. Certainly, the glasses are substantial enough to get noticed (who can miss a bit of white and metal up near a person’s eyebrow?), but they don’t get in the way while doing a lot of useful things for the consumer.
“We wanted to put all the equipment off to the side, creating an asymmetrical design, but allowing us to design it around anything,” the demonstrator said, showing off how the standard Google Glasses can be attached to a multitude of different frames. People who actually wear spectacles won’t have to worry about Google Glasses not fitting their favorite frames: the asymmetric design lets the device overlay most glasses.
As a result, Google Glasses won’t be the bane of prescription glasses wearers anywhere.
One thing that we’ve learned from Google’s presentation culture is that most of what the company does is minimalistic and fashionable, so it’s no surprise that its high-tech spectacles follow this aesthetic.
Capture and share your world from your point-of-view
Google Glasses come with a very light, very small camera that can record and upload video from the wearer’s point of view, enabling immediate recording, sharing, and the ability to relive a moment instantly.
This reminds me of what science fiction authors describe as “gargoyling”: capturing a person’s point of view and providing it to other people without having to wear cumbersome equipment. It lets people make their friends part of their experience by sharing pictures and recordings taken from their own perspective.
One example provided by the Google presenters showed how the glasses could be used to create a set of instructions for making dumplings. As people cook, they have a direct, hands-on relationship with the tools they use: the knives, the sink, kneading dough, chopping vegetables. With the Glasses recording (or taking pictures on command), the cook can rapidly generate a series of images that tell the story of making the dumplings.
It’s also possible to record or snap pictures and share them immediately, telling a story or delivering an experience in near real-time as things happen. With Google Glasses, pictures can be snapped and sent to G+ (or possibly another service) as they’re captured, letting friends see them as soon as they upload and even comment on them.
The example for this was presented as part of Google Hangouts, with the Google Glasses recording live and broadcasting to a hangout while the users spoke to one another about what appeared. It’s easy to see the amazing implications this could have for video-calling from a device such as the spectacles: essentially letting a wearer take someone else out into the field and show them what they’re seeing.
Unhindered information retrieval mentioned but not fleshed out
Another exceptional element of what Google Glasses could do for consumers was mentioned by the presenters but not really fleshed out: using the glasses, instead of a mobile device or a computer, to gather information about the world around you or to answer questions you have.
I’ve mentioned this as a possible innovation that would make Google Glasses a killer app: the capability for augmented reality and for acting as a second screen for mobile devices. The presenters spoke about using the glasses to capture images of your world and send them to the Internet so friends or others could help answer questions, but not so much about how that information would be retrieved.
Perhaps that will be part of a future demonstration.