UPDATED 14:22 EDT / DECEMBER 17 2013


Robot engineered by Agile team faces off in competition to aid disaster recovery

Fifty minutes after the March 2011 Tohoku earthquake, a tsunami flooded the Fukushima Daiichi Nuclear Power Plant, causing equipment failures, nuclear meltdowns and the release of radioactive material. Workers struggled to supply power to the reactors' coolant systems and restore power to the control rooms. Fortunately, no fatalities were linked to short-term radiation exposure, but deaths from certain cancers, such as leukemia, are predicted as a result of accumulated exposure to radiation.

Instead of sending workers, wouldn't it have been great if robots had been sent to restore power to the nuclear plant, sparing all those everyday heroes from radiation exposure? This idea, using robots in scenarios where humans could be injured, poisoned or killed, is the genesis of the upcoming December 20 DARPA Robotics Challenge (DRC), a competition designed to showcase robotics in the setting of a natural or man-made disaster recovery. Here, semi-autonomous humanoid robots will perform tasks usually carried out by humans, such as climbing ladders, crossing debris-filled corridors, or restoring power to nuclear control rooms. I was fortunate enough to join one of the teams that made it to this round of the competition: the Institute for Human and Machine Cognition (IHMC). For one month, I helped boost their development workflow and assisted as a software engineer.

The IHMC is a not-for-profit research institute of the Florida University System that develops robotics software and hardware with the goal of leveraging and extending human capabilities. The team uses Atlassian software to support an iterative development process that ensures continuous delivery of cutting-edge robots: JIRA for issue management; JIRA Agile for agile workflows; Confluence, a content collaboration platform, for recording research; Bamboo for continuous integration; and FishEye for browsing their 7GB SVN repository.

The team already won the first round of the competition, a set of virtualized obstacle courses implemented in Gazebo (a robotics simulator maintained by the Open Source Robotics Foundation) for which they wrote their own software controllers and operator interfaces. But the upcoming competition presents a slew of new challenges as it takes the trials from the virtual to the real world. As you can imagine, this leap involves a huge amount of iterative development, testing and calibration. Working with the Boston Dynamics' Atlas robot, an anthropomorphic robot specifically designed to negotiate terrain and operate tools in a manner similar to a human, the team is furiously prepping itself for this upcoming round of battle.

Getting a (robotic) leg up on the competition with test-driven development

With millions of dollars on the line, testing needs to be taken very seriously. Just one unfortunate bug can damage or destroy the Atlas robot, which is why IHMC's commitment to unit and regression testing gives it an edge over the competition. When implementing an algorithm, whether based on existing literature or IHMC's own original research, developers often write a set of test cases before actually coding the solution. IHMC uses a suite of custom-made unit tests* (1,730 at the time of writing) to quickly iterate on code, make sweeping refactors and save countless hours of manual testing.
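To make the test-first habit concrete, here is a minimal sketch of the flow in Python. IHMC's actual suite is written in Java against JUnit, and the function and test names below are invented for illustration; the point is only the ordering: the assertions about the desired behavior (endpoints, monotonicity) exist before and independently of the implementation.

```python
def minimum_jerk(t: float) -> float:
    """Scale factor for a minimum-jerk trajectory, t in [0, 1].

    Classic 5th-order polynomial: starts and ends with zero
    velocity and acceleration. (Illustrative, not IHMC's code.)
    """
    return 10 * t**3 - 15 * t**4 + 6 * t**5


def test_minimum_jerk_endpoints():
    # Written first: the trajectory must start at 0 and end at 1.
    assert abs(minimum_jerk(0.0)) < 1e-9
    assert abs(minimum_jerk(1.0) - 1.0) < 1e-9


def test_minimum_jerk_is_monotonic():
    # Written first: the scale factor must never move backwards.
    samples = [minimum_jerk(i / 100) for i in range(101)]
    assert all(a <= b for a, b in zip(samples, samples[1:]))


test_minimum_jerk_endpoints()
test_minimum_jerk_is_monotonic()
```

With the tests pinned down, the implementation can be refactored freely; any regression trips an assertion immediately instead of surfacing as a stumble on real hardware.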

The team generates coverage reports in Atlassian Clover to ensure the test suite is kept up to date. If a Clover report shows that test coverage has dropped, the offending commits are tracked down and the author gets some “re-education” on the value of unit testing.

With thorough regression testing, new team members can start committing code quickly and the risk of causing a regression to the existing code base is minimized. This is especially important for IHMC because half of their DRC team consists of visiting interns, and it’s crucial that the project maintains continuity and undisrupted progress.

CI for smoother, faster development

For me, the coolest feature of the team's test suite is its collection of end-to-end functional tests. These tests boot up a robotics simulator (simulation software maintained by the IHMC called the SimulationConstructionSet, or SCS), deploy a virtual robot and make it perform a task, such as walking over uneven terrain or manipulating an object. The tests not only make assertions about the environment and the end state of the robot, but they also record the robot's telemetry and film the simulation to facilitate human review and speed debugging in the event of an assertion failure.
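The shape of such a test can be sketched in a few lines. This is a toy stand-in, not IHMC's SimulationConstructionSet (which is Java); the fake simulator and every name here are invented. What it shows is the pattern the article describes: boot a simulation, command a task, assert on the end state, and keep a telemetry log around so a human can review what happened when an assertion fails.

```python
class FakeSimulator:
    """Toy stand-in for a physics simulator: the robot advances 0.3 m per step."""

    def __init__(self):
        self.x = 0.0          # robot position along the walking axis (metres)
        self.fallen = False
        self.telemetry = []   # (step, position) samples kept for human review

    def step_forward(self):
        self.x += 0.3
        self.telemetry.append((len(self.telemetry), self.x))


def test_robot_walks_two_meters():
    sim = FakeSimulator()
    for _ in range(7):        # command the task: seven steps of 0.3 m
        sim.step_forward()
    # Assertions about the end state of the robot; on failure, the
    # telemetry log is surfaced so a human can replay what went wrong.
    assert not sim.fallen
    assert sim.x >= 2.0, f"only reached {sim.x:.2f} m; telemetry: {sim.telemetry}"
    return sim


sim = test_robot_walks_two_meters()
```

In the real suite the "telemetry" is the robot's full sensor stream and the review artifact is a video of the simulation, but the test skeleton is the same.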

And while running and recording an SCS environment is computationally expensive and increases the time to execute the test suite by a factor of three, the IHMC circumvents the delay by running the video and telemetry builds nightly in Atlassian Bamboo. The build telemetry is written out to a shared drive and the videos are uploaded to YouTube and published as a Bamboo build artifact for the team to review when they arrive for work.

These end-to-end tests are extremely valuable, not only for monitoring regressions but also as true integration tests. At any given time, multiple developers are working on different facets of the project: from modifications to the SCS simulator, to new abstractions over the myriad electrical signals required to take a robotic step, to tweaks to the robot's force control software, and more. All of these changes happen concurrently, and are tested in concert using Atlassian Bamboo via the Simulation Construction Set.

This continuous integration of changes ensures that developers don’t commit conflicting software changes or otherwise break each other’s code. If, for example, a developer tweaking the Instantaneous Capture Point (the place the robot is aiming to place its foot to ‘catch’ itself in the controlled falling we know as ‘walking’) makes changes that aren’t compatible with another developer’s new convex hull calculation algorithm (used, in part, to determine the points where the robot’s center of mass should be balanced over to be considered ‘dynamically balanced’), they’ll know about it shortly after they commit.
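The convex-hull check mentioned above can be illustrated with a simple sketch. This is not IHMC's algorithm, just the textbook idea it builds on: a standing robot is statically balanced when the ground projection of its center of mass lies inside the convex hull of its foot contact points (the support polygon). All names and numbers here are invented for the example.

```python
def cross(o, a, b):
    """2D cross product of vectors OA and OB; positive means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])


def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]


def is_balanced(com_xy, contact_points):
    """True if the center-of-mass projection is inside the support polygon."""
    hull = convex_hull(contact_points)
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], com_xy) >= 0 for i in range(n))


# Two rectangular footprints side by side: a CoM between the feet is
# balanced, a CoM well in front of both feet is not.
feet = [(0, 0), (0.25, 0), (0.25, 0.1), (0, 0.1),        # left foot corners
        (0, 0.3), (0.25, 0.3), (0.25, 0.4), (0, 0.4)]    # right foot corners
print(is_balanced((0.12, 0.2), feet))  # True  - CoM over the support polygon
print(is_balanced((0.60, 0.2), feet))  # False - CoM in front of both feet
```

A change to the hull calculation and a change to the capture-point logic both feed into checks like this one, which is exactly why running them together in CI catches incompatible commits early.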

As a career web developer, I’m pretty jealous of all this. There’s something a bit more exciting about committing a logic error and watching a virtual robot trip and fall face-first down some stairs than watching a webdriver test fall over with a 500!

Stay tuned for part 2!

About the Author

Tim Pettersen is a developer at Atlassian. He's spent the last few years working on developer tools, most recently the hot new enterprise Git hosting solution Atlassian Stash. Tim's passions in software are pluggability, API design and integration. When he's not speaking at conferences, Tim enjoys hacking on anything Android, Git or realtime related.

