UPDATED 11:30 EDT / NOVEMBER 19 2009

Deeper Dive on the $350 Sixth Sense

While browsing the newly redesigned Engadget, I caught a story about the MIT project Sixth Sense that we talked about a bit on Tuesday.  I’ve since done a little more digging on the project and extended an invitation to its members to talk more about what they’re working on here at SiliconANGLE (though I’m sure they’re particularly busy right now with the flood of attention they’re getting, hopefully they’ll be able to make some time for us).

So I read through the whitepaper that MIT published on the project, and was able to get a bit more detail on what is running this setup and the hardware that makes it so cheap and compelling.

As I made my way through the paper, I was able to get a better look at the camera in some of the illustrations, and it was as I suspected: the Logitech QuickCam Pro 9000 for Laptops (seen in the image to the right).

It’s a quality camera, one that I’ve owned in the past, and is second only to the camera I own now (the non-laptop version of the same model). It’s perfect for this sort of application because not only is it high-resolution, it’s also highly compact and easy to clip to just about anything. The camera, depending on where you get it, runs between $60 and $150.

The projector, which I wasn’t particularly familiar with, turned out to be a 3M MPro110, according to the whitepaper. I’ve never used one, but it appears to function fairly well across a range of lighting conditions, from dim to bright (if the demo video is any guide), and can be had fairly cheaply – from $160 to $250.

It’s also worth noting that this thing relies on a laptop or other portable computing device tucked away in a pocket or a shoulder bag.  I believe during the TED demo they said it used a mobile phone, but the whitepaper talks about an attached laptop.

Of course, the secret sauce in this equation is the software, and the whitepaper offered very few clues as to what sort of gestural algorithms or command interpreter it’s running on.  I’ve interviewed a number of companies that work with gestural interfaces driven by camera input, and it all seems fairly proprietary and highly complex (not to mention cutting edge), so while it’s entirely possible they came up with this in the lab, I wouldn’t be surprised to hear that they’re licensing or experimenting with someone else’s technology.
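What the demos do show is colored marker caps on the user’s fingertips being tracked by the clipped-on camera. Just as an illustration of that general approach – this is purely my own sketch, not the project’s code – here’s what basic color-blob fingertip tracking looks like with OpenCV and a webcam. The HSV threshold values are assumptions for a red marker and would need tuning for real lighting.

```python
# Minimal sketch of camera-based marker tracking (an assumption about the
# general technique, not SixthSense's actual implementation).
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # e.g. a clipped-on webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # HSV separates hue from brightness, which tolerates lighting changes
    # better than thresholding raw RGB.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Keep only pixels in the (assumed) hue/saturation range of a red cap.
    mask = cv2.inRange(hsv, np.array([0, 120, 80]), np.array([10, 255, 255]))
    mask = cv2.medianBlur(mask, 5)  # suppress speckle noise

    # Treat the largest blob as the fingertip marker; its centroid is the
    # pointer position that any gesture logic downstream would consume.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        m = cv2.moments(c)
        if m["m00"] > 0:
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            cv2.circle(frame, (cx, cy), 8, (0, 255, 0), 2)

    cv2.imshow("marker tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

The real trick, of course, isn’t finding the blob – it’s turning sequences of those tracked points into reliable gestures and mapping them onto projected UI elements, which is exactly the part the whitepaper stays quiet about.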

My hope is that it’s home-grown MIT technology, and that they’d be interested in open-sourcing it to the community. Being able to load up some software like this and turn the world into my surface computing device seems like it’d just be loads of fun.

I’ve included the whitepaper I’ve pulled this info from below the jump, since I can’t seem to locate where I originally grabbed it from.


WUW – Wear Ur World

