UPDATED 12:47 EDT / DECEMBER 01 2010

Want 3D Processing? Oliver Kreylos Shows Two Kinects are Better than One

When last we checked in on Oliver Kreylos, he was hacking a single Xbox 360 Kinect device to produce fairly cool 3D streaming visuals. Now he's gone a step further and tied the feeds of two Kinect cameras together to give his creation stereoscopic vision. This is the same way humans perceive depth: with two cameras (eyes) set slightly apart, we're able to discern how far away an object is by how its image differs from eye to eye. In Oliver's case, however, he's placing the cameras much farther apart so that they can cover a broader, but still overlapping, field of view.
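(The Kinect actually measures depth directly with its structured-light IR projector rather than by comparing two passive images, but the eye analogy above maps onto the classic stereo relation: depth is inversely proportional to how far a feature shifts between the two views. A minimal illustrative sketch, with made-up focal length and baseline numbers, not anything from Kreylos's software:)

```python
# Toy stereo-depth illustration: with focal length f (in pixels) and a
# camera baseline B (in meters), a feature that shifts by d pixels
# between the two views lies at depth Z = f * B / d.

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return depth in meters for a given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# A nearby object shifts more between the views than a distant one:
near = depth_from_disparity(525.0, 0.07, 40.0)  # large disparity -> close
far = depth_from_disparity(525.0, 0.07, 5.0)    # small disparity -> far
```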

We picked up the videos of his work, first covered over at Engadget, and hope everyone is suitably impressed.

He’s back at it again, this time blowing minds and demonstrating that two Kinects can be paired and their output meshed — one basically filling in the gaps of the other. He found that the two do create some interference, the dotted IR pattern of one causing some holes and blotches in the other, but when the two are combined they basically help each other out and the results are quite impressive… Oliver is able to rotate the camera perspective and basically film himself from a new camera angle that exists somewhere in between the positions of the two Kinects, and do so in real time. Sure, the quality leaves a lot to be desired, but still. Wow.
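(The core idea behind that "meshing" step can be sketched simply: each Kinect reports 3D points in its own coordinate frame, and a rigid transform — a rotation plus a translation, found by calibration — brings the second camera's points into the first camera's frame, after which the two clouds can just be concatenated and rendered from any virtual viewpoint. A toy sketch with a hypothetical yaw-only rotation, not Kreylos's actual code:)

```python
# Toy sketch of merging two depth-camera point clouds into one frame.
# Camera A defines the world frame; camera B's points are rotated about
# the vertical (y) axis by yaw_rad and translated by t before merging.

import math

def transform_point(p, yaw_rad, t):
    """Rotate p about the y axis by yaw_rad, then translate by t."""
    x, y, z = p
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (c * x + s * z + t[0], y + t[1], -s * x + c * z + t[2])

def merge_clouds(cloud_a, cloud_b, yaw_rad, t):
    """Cloud A is already in the world frame; cloud B is transformed into it."""
    return list(cloud_a) + [transform_point(p, yaw_rad, t) for p in cloud_b]

# One point seen by each camera; camera B sits to the side, rotated 90 degrees:
merged = merge_clouds([(0.0, 0.0, 2.0)], [(0.0, 0.0, 2.0)],
                      math.pi / 2, (1.0, 0.0, 0.0))
```

In the merged cloud, regions occluded from one camera's view are covered by points from the other, which is exactly the gap-filling effect described above.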

Although there is a bit of interference, my guess is that it's caused by the refresh rates of the two devices not being quite in sync. In humans, this sort of problem is resolved by the visual cortex and the optic nerves; it's likely that Oliver will solve the issue through image-processing software.
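(One plausible software fix — purely a hypothetical sketch, since the article only speculates that image processing could help — is to treat the interference holes as dropout pixels in the depth map and fill each one with the median of its valid neighbors:)

```python
# Toy hole-filling sketch for a depth map with interference dropouts.
# depth is a flat row-major list; a value of 0 marks a dropout pixel,
# which is replaced by the median of its valid (nonzero) 8-neighbors.

def fill_holes(depth, rows, cols):
    out = list(depth)
    for r in range(rows):
        for c in range(cols):
            if depth[r * cols + c] == 0:
                neighbors = []
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            v = depth[rr * cols + cc]
                            if v > 0:
                                neighbors.append(v)
                if neighbors:
                    neighbors.sort()
                    out[r * cols + c] = neighbors[len(neighbors) // 2]
    return out

# A 3x3 patch with one dropout in the middle:
patched = fill_holes([5, 5, 5, 5, 0, 5, 5, 5, 5], 3, 3)
```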

Now, watching this, non-gaming applications immediately jump to mind. Imagine teleconferencing software running in 3D. Have a product to show off? Put it on the table in view of the dual-Kinect teleconference setup and everyone on the call can get a solid pan-and-scan of the entire thing, even remotely. It even opens up hilarious presentation methods for stand-up auditorium conferences. Or what about the first band to release a 3D music video shot with these cameras, letting viewers watch the drummer's hands from various angles?

Microsoft’s Pandora’s box of a technology continues to wow from the homebrew front. Expect a lot more to come out of it from both in-industry and DIY developers. (Especially now that Microsoft has so swiftly backpedaled from its verbal gaffe of a position on what it called “hacking” the device.)

