UPDATED 11:51 EDT / DECEMBER 29 2010

Robotics Expert Uses Kinect to “Connect” Human and Machine

Thank you, Japan. I’ll start with that, because it looks like they’ve just helped us produce yet another amazing innovation for the Xbox Kinect. In this case, it’s teleoperator software for controlling a robot! Stick around for the interview video after the fold.

Information is sparse, but at least we know his name and some spiffy details, brought to us by the Robots Dreams blog:

Taylor Veltrop, pretty much working on his own in a suburb of Tokyo, has accomplished very professional and noteworthy work in humanoid robotics including integrating the Willow Garage ROS system and the Roboard with a Kondo KHR-1HV; publishing detailed information enabling others to replicate and improve on his work in an Open Source fashion; and making tons of previously obscure information, like Kondo UART configurations, clear and easy to understand and work with. If that wasn’t enough, he’s also a high level LEGO Mindstorms robot designer, and recently qualified as an official participant in the Aldebaran NAO Robot Developer Program.

I would be amazed if NASA isn’t very interested in this sort of thing for telepresent operation of in-field robots, waldos, and armatures. Right now, most interfaces for these devices go through computer screens and involve complex joystick rigs, and sometimes hands-on motion actuators that capture finger and wrist movement through physical touch.

Taylor Veltrop participates in the Aldebaran NAO Robot Developer Program, an amazing repository of robotics knowledge and Open Source development that pioneers innovation in these sorts of projects. A controller of this type, using skeletal sensing, could certainly revolutionize the remote operation of devices that translate human dexterity.

The Microsoft Xbox 360 Kinect’s ability to track hands and even fingers (as seen in software built to help people learn American Sign Language) could make it so that no external device is needed. A user could simply kick on the peripheral, reach into its field of view, and begin remote manipulation.
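Under the hood, a rig like this boils down to reading tracked skeleton joints and mapping them onto robot servo angles. Here’s a minimal sketch of that idea in Python, assuming a ROS setup (which Veltrop’s work already uses) where a Kinect tracker such as openni_tracker publishes the user’s joints as tf frames; the frame names and the command topic below are illustrative assumptions, not his actual configuration.

#!/usr/bin/env python
# Minimal sketch: turn Kinect skeleton tracking into a robot joint command.
# Assumes a tracker (e.g. openni_tracker) is publishing the user's joints
# as tf frames; frame names and the servo topic are hypothetical.
import math
import rospy
import tf
from std_msgs.msg import Float64

def angle_between(v1, v2):
    """Angle in radians between two 3-D vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def vector(frm, to, listener):
    """Position of frame `to` relative to frame `frm`, expressed in `frm`."""
    (trans, _) = listener.lookupTransform(frm, to, rospy.Time(0))
    return trans

if __name__ == '__main__':
    rospy.init_node('kinect_teleop_sketch')
    listener = tf.TransformListener()
    # Hypothetical servo command topic on the robot side.
    pub = rospy.Publisher('/right_elbow_controller/command', Float64)
    rate = rospy.Rate(30)  # roughly the Kinect's frame rate
    while not rospy.is_shutdown():
        try:
            upper = vector('/right_elbow_1', '/right_shoulder_1', listener)
            fore = vector('/right_elbow_1', '/right_hand_1', listener)
            # Elbow flexion is the angle between upper arm and forearm.
            pub.publish(Float64(angle_between(upper, fore)))
        except (tf.LookupException, tf.ConnectivityException,
                tf.ExtrapolationException):
            pass  # skeleton not tracked yet; skip this frame
        rate.sleep()

The point is the simplicity: once the sensor hands you joint positions, a servo command is one dot product and an arccosine away, with no gloves or joysticks in the loop.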

Imagine doing telepresence work from Earth: controlling an armature to repair a space station, or carefully cutting and collecting rocks on the moon. Or, if you want something closer to the gravity well, think about the advances in microsurgery this sort of interface could provide.

