Facebook’s modular ‘droidlet’ platform is a one-stop shop for building smarter robots
Facebook Inc.’s artificial intelligence research team today announced an open-source platform that it says researchers can use to collaborate on building more intelligent robots.
The idea is that by mixing and matching components including machine learning models and perceptual modules used to sense the environment, researchers can build new robots capable of performing complex tasks more quickly.
In a blog post today, Facebook AI researchers Mary Williamson and Arthur Szlam described the droidlet platform as a modular, heterogeneous embodied-agent architecture that sits at the intersection of natural language processing, computer vision and robotics. The advantage of droidlet, they said, is that it simplifies integrating machine learning algorithms with different types of robots, facilitating more rapid prototyping.
Using droidlet, researchers will be able to test different algorithms on their robots quickly, they said, or swap out one natural language understanding model for another to see which is most effective.
Robots today can already do some impressive things, but there’s a long way to go before we can build truly intelligent machines such as R2-D2 in “Star Wars,” which can think and act entirely independently. At present, existing robots cannot understand or engage with the world in the same way as humans can.
“This is largely because today’s robots don’t understand the world around them at a deep level,” Williamson and Szlam said. “They can be programmed to back up when bumping into a chair, but they can’t recognize what a chair is or know that bumping into a spilled soda can will only make a bigger mess.”
It’s hoped that the droidlet platform will eventually change that by allowing researchers to collaborate and build more intelligent robots faster, the researchers said.
Droidlet, as described in an accompanying research paper, is a collection of components for building embodied agents. It includes a set of perceptual modules, such as object detection and pose estimation systems, that process information from the physical world the robot operates in, and a memory system that acts as a nexus for all of those modules, storing what the robot learns about its environment.
There’s also a set of “lower-level tasks,” instructions for robots such as “move three feet forward” and “place item in hand at given coordinates,” that effect changes in the robot’s environment. Finally, there’s a controller that decides which tasks to execute based on the state of the memory system.
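The division of labor described above, with perception writing into a shared memory, and a controller reading that memory to queue low-level tasks, can be sketched roughly as follows. This is an illustrative mock-up, not the actual droidlet API; the class and method names (`AgentMemory`, `Controller`, `MoveForward`) are invented for this sketch.

```python
# Hypothetical sketch of the perceive -> memory -> controller -> task flow
# the article describes. Names are illustrative, not droidlet's real API.

class AgentMemory:
    """Nexus where perceptual modules store what the agent knows."""
    def __init__(self):
        self.facts = []

    def record(self, fact):
        self.facts.append(fact)

    def query(self, predicate):
        return [f for f in self.facts if predicate(f)]


class MoveForward:
    """A 'lower-level task' that would effect a change in the world."""
    def __init__(self, feet):
        self.feet = feet


class Controller:
    """Decides which tasks to execute based on the state of memory."""
    def __init__(self, memory):
        self.memory = memory
        self.task_queue = []

    def update(self):
        # If perception has recorded no obstacle, queue a movement task.
        obstacles = self.memory.query(lambda f: f.get("kind") == "obstacle")
        if not obstacles:
            self.task_queue.append(MoveForward(feet=3))
        return self.task_queue
```

The point of routing everything through the memory object is that the controller never talks to sensors directly, so either side can be replaced independently.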
Williamson and Szlam said this modular approach is advantageous because it means researchers can use the same intelligent agent with different robotic hardware, simply by swapping out the tasks and perceptual modules as required by each one’s physical architecture and sensor requirements.
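That swap-out idea can be illustrated with a small sketch: if every perceptual module conforms to a shared interface, the same agent logic runs unchanged whether the robot senses through a physical camera or a simulated world. Again, these classes are hypothetical stand-ins, not droidlet's real module interfaces.

```python
# Hypothetical illustration of swapping perceptual modules per platform.
# Interface and class names are invented for this sketch.
from abc import ABC, abstractmethod

class PerceptionModule(ABC):
    @abstractmethod
    def perceive(self):
        """Return a list of facts about the environment."""

class DepthCameraPerception(PerceptionModule):
    """Module for a physical robot with a depth camera."""
    def perceive(self):
        return [{"kind": "obstacle", "source": "depth_camera"}]

class SimWorldPerception(PerceptionModule):
    """Module for a simulated environment."""
    def perceive(self):
        return [{"kind": "block", "source": "sim_world"}]

def run_agent(perception: PerceptionModule):
    # The agent body is identical; only the plugged-in module differs.
    facts = perception.perceive()
    return [f["kind"] for f in facts]
```

Calling `run_agent(DepthCameraPerception())` or `run_agent(SimWorldPerception())` exercises the same agent code against different hardware or simulators, which is the friction-reduction the researchers describe.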
“The droidlet platform supports researchers building embodied agents more generally by reducing friction in integrating ML models and new capabilities, whether scripted or learned, into their systems, and by providing UX for human-agent interaction and data annotation,” they added.
The droidlet platform can be used to create robots that work either in the physical world or in simulated environments such as Minecraft or Facebook’s own virtual Habitat world. It includes an interactive dashboard that serves as an operational interface for building agents, with various debugging and visualization tools, as well as an interface for correcting errors on the fly and for data annotation.
A number of basic open-source agents are available for researchers and even hobbyists to work with, too. These include primitives for visual perception and language, Williamson and Szlam said.
Constellation Research Inc. analyst Andy Thurai told SiliconANGLE that droidlet looks to be a very exciting initiative by Facebook. He said the platform approach to building AI in robots is the right way to do things, as most robots have very rigid and limited AI capabilities at present.
“While they can self-learn to an extent, there is no modular approach for integrating the best-of-breed that newer enterprise microservices and API paradigms dictate,” he said. “For example, can robots use existing NLP, computer vision APIs, such as Google cloud vision, rather than build these capabilities themselves? If that happens, the robot, in a matter of seconds, will be able to understand and speak all languages that Google can interpret instantaneously.”
The second main advantage of the droidlet approach, Thurai said, is that it will essentially join AI and robotics at the hip. This means it will be far easier to retrieve real-world data from robots’ sensors and then update their ML models to act on current conditions.
“For example, you could build a robot that’s able to deploy inbuilt body armor while walking into a hazardous chemical spill situation,” he said. “It would also be able to warn other robots and humans in the area. And if it can’t figure out what the smell is, it can send it to the cloud and have the core AI analyze it and get back with specific information about what the chemical spill is.”
Droidlet certainly has potential, but Thurai noted that it’s still very much a nascent initiative that will likely take several years and a lot of effort to fulfill its promise.
“What is cool about this is researchers can experiment with the hypothesis using Minecraft or Facebook’s Habitat as the test bed before deploying in the real world,” Thurai said. “If this works the way it is intended, ultimately, rather than buying pre-built robots or robotics programs, developers will be able to build robots capable of self-learning using the real world environment they operate in.”