UPDATED 13:01 EST / DECEMBER 17 2020

EMERGING TECH

MIT and Penn State join forces with AWS to help classify images of flood disaster zones

Researchers at MIT Lincoln Laboratory and students at the Penn State College of Information Sciences and Technology have been working on artificial intelligence models that use disaster scene images to inform responders about flooding.

For humans, this process is relatively easy, but when a dataset made up of more than 100,000 aerial images that vary in altitude, cloud cover, context and area needs to be processed in a matter of days or hours, computers become a necessity. That’s when the researchers turned to Amazon Web Services Inc. to use its cloud services.

Students at Penn State began with a project that used the Low Altitude Disaster Imagery, or LADI, dataset – a collection of aerial images taken above disaster scenes since 2015 – to train the computer vision algorithm.

AWS does most of the heavy lifting by providing the compute resources to train computer vision algorithms to understand the difference between lakes – which are clearly not flood zones – and actual flooding. That way, when a disaster happens, the machine learning algorithm can be fed aerial images and quickly flag flood zones for first responders, who can look over the photos to see where they may be needed.
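For illustration only – this is not the project’s actual code – a flood/not-flood classifier of this kind is commonly built by fine-tuning a pretrained image model on labeled examples. A minimal sketch in PyTorch, with assumed class labels and hyperparameters:

```python
# Illustrative sketch only (not the LADI baseline classifier): fine-tune a
# pretrained ResNet to separate "flooded" scenes from "not flooded" ones.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # assumed labels: 0 = not flooded, 1 = flooded

model = models.resnet50(pretrained=True)                  # start from ImageNet weights
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # replace the output head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """Run one optimization step over a batch of aerial images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```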

Guessing whether an image shows a flood zone could be as easy as asking, “Is there a clear shoreline with discernible sand?” or “Are there visible trees sticking out of the water?”

Although that might seem easy for humans, it’s not that easy for computers. For example, in 2019 a leading computer vision model mislabeled a flooded region as a “toilet” and a highway surrounded by flooding as a “runway.” When the computer is less confident about a label, the solution is to add a human.

Augmenting AI with human intelligence

Thus, the machine learning and LADI dataset portion of the project is only half of the puzzle. The other half is the human workers from Amazon’s Mechanical Turk, who come into play when the machine learning algorithm is not confident about whether an image shows a flood zone.

MTurk, as it’s often called for short, is a crowdsourcing marketplace where individuals and businesses outsource tasks to a virtual workforce – in this case, image classification. In this way, MTurk workers review and label images to shore up any gaps in the algorithm, adding a human element.
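The routing logic behind that division of labor is simple in principle: keep the model’s confident answers and queue the rest for people. A minimal sketch, with an assumed confidence threshold and record format (nothing here is from the actual project):

```python
# Minimal sketch of the human-in-the-loop split: confident predictions are
# accepted as-is; uncertain ones are queued for MTurk review. The 0.8
# threshold and the record format are illustrative assumptions.
def split_predictions(predictions, threshold=0.8):
    """Partition model outputs into accepted labels and a human-review queue."""
    accepted, needs_review = [], []
    for pred in predictions:
        # e.g. {"image": "scene_001.jpg", "label": "flooded", "confidence": 0.93}
        if pred["confidence"] >= threshold:
            accepted.append(pred)
        else:
            needs_review.append(pred)  # these would go to MTurk workers
    return accepted, needs_review
```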

“We met with the MIT Lincoln Laboratory team in June 2019 and recognized shared goals around improving annotation models for satellite and LADI objects, as we’ve been developing similar computer vision solutions here at AWS,” said Kumar Chellapilla, general manager of Human-in-the-Loop Machine Learning Services at AWS. “We connected the team with the AWS Machine Learning Research Awards, now part of the Amazon Research Awards program, and the AWS Open Data Program and funded MTurk credits for the development of MIT Lincoln Laboratory’s ground truth dataset.”

According to Penn State, this work has led to a trained model with an expected accuracy of 79%. The students’ code and models are now being integrated into the LADI project as an open-source baseline classifier and tutorial.

“During a disaster, a lot of data can be collected very quickly,” said Andrew Weinert, a staff research associate at Lincoln Laboratory who helped facilitate the project with the College of IST. “But collecting data and actually putting information together for decision-makers is a very different thing.”

Amazon also supported the development of a user interface for urban search and rescue teams, which enabled MIT Lincoln Laboratory to pilot real-time Civil Air Patrol image annotation during Hurricane Dorian.

This fall, the same MIT team will build a pipeline for CAP data using Amazon Augmented AI, or A2I, to route low-confidence results to MTurk for human review.

“A2I is like ‘phone a friend’ for the model,” said Weinert. “It helps us route the images that can’t confidently be labeled by the classifier to MTurk Workers for review. Ultimately, developing the tools that can be used by first responders to get help to those that need it.”
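In code, that “phone a friend” step amounts to starting a human loop whenever the model’s confidence falls below a cutoff. A hedged sketch using the boto3 A2I runtime client – the threshold, flow definition ARN and payload fields below are placeholders, not values from the project:

```python
# Hedged sketch of the A2I "phone a friend" step: when confidence is low,
# start a human loop that puts the image in front of MTurk workers. The
# threshold, flow definition ARN and payload fields are placeholders.
import json
import boto3

a2i = boto3.client("sagemaker-a2i-runtime")

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff, not the project's actual value

def classify_or_escalate(image_uri: str, label: str, confidence: float) -> dict:
    """Accept confident labels; route uncertain images to human review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"image": image_uri, "label": label, "source": "model"}
    a2i.start_human_loop(
        HumanLoopName=f"flood-review-{abs(hash(image_uri)) % 100000}",
        FlowDefinitionArn="arn:aws:sagemaker:us-east-1:111122223333:flow-definition/flood-review",  # placeholder
        HumanLoopInput={"InputContent": json.dumps({
            "taskObject": image_uri,
            "modelLabel": label,
            "confidence": confidence,
        })},
    )
    return {"image": image_uri, "label": None, "source": "human-review-pending"}
```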

Photo: Pixabay
