UPDATED 11:00 EDT / NOVEMBER 17 2021

AI

AWS makes AI and machine learning tangible with first major art debut at Smithsonian

Amazon Web Services Inc. has commissioned its first-ever major art piece, a site-specific sculpture powered by artificial intelligence and designed by artist and architect Suchi Reddy that will be the centerpiece of the Smithsonian’s “Futures” exhibit.

The artwork, called “me + you,” was unveiled today in the 90-foot-tall central rotunda of the Smithsonian’s historic Arts and Industries Building in Washington, D.C. The location is significant as America’s first national museum, and the soaring rotunda suits the interactive sculpture, which itself stands nearly two stories tall.

The sculpture takes up the center of the room, with a base from which large fiber-optic cables reach out toward visitors, each ending in an inviting circular interface that Reddy (pictured, right) called a “mandala.” Rising from the center of the sculpture is a broad, segmented series of panels called a “totem,” upon which colorful kinetic patterns of light flow upward, representing the futures that members of the public speak to the artwork.

The idea of the artwork is to present how humans and technology interface and evolve together, Reddy told SiliconANGLE in an interview while she demonstrated the sculpture in action.

“The idea of the sculpture … is really about a co-evolution of humans and technology that highlights the consciousness, awareness and responsibility of humans and our interaction with technology,” Reddy said.

The mandalas, she explained, were in “attract” mode at the time, and visitors could walk up and speak into the sculpture to trigger it. The trigger word the mandalas listen for is “future,” so a visitor could say into a mandala “My future looks hopeful,” or “bright,” “sad,” “interesting” or any number of other words.

The sculpture would then use AI and machine learning developed by the team of Swami Sivasubramanian (left), vice president of Amazon Machine Learning at AWS, to detect the word describing the future and determine its sentiment. That sentiment would then be translated into a series of patterns and colors on the mandala, flow up the central totemic piece and mix with other people’s spoken futures.

“We translate both the word and the sentiment of the word into these patterns of color and light that shows us what our individual visions of the future look like,” said Reddy. “And the central piece, which is a totemic piece, that looks like a loom and hearkens to looms being early computers, puts together the colors of all of our visions and becomes a collective vision of our future that’s constantly evolving.”

Under the hood, Sivasubramanian said, the sculpture combines an array of AWS technologies to bring all of these human and technology interactions together.

When a person walks up to the sculpture and speaks into it, Amazon Transcribe converts the speech into machine-readable text so that the trigger phrase and the future word can be detected. After that, Amazon Comprehend examines the sentiment behind the future word, assigning it an emotion that determines the color palette for the pattern of lights.
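AWS hasn’t published the installation’s actual code, but the flow described above can be sketched with the AWS SDK for Python (boto3). The following is a minimal illustration only: it assumes a visitor’s recorded clip has already been uploaded to a hypothetical S3 bucket, uses the batch transcription API rather than whatever live streaming pipeline the sculpture presumably runs, and the bucket name, job name and color mapping are all illustrative.

```python
import time
import boto3

transcribe = boto3.client("transcribe")
comprehend = boto3.client("comprehend")

# Hypothetical S3 location of a visitor's recorded phrase, e.g. "My future looks hopeful"
AUDIO_URI = "s3://example-bucket/visitor-clip.wav"

# Start an asynchronous batch transcription job (the installation itself
# more likely streams audio; the batch API keeps this sketch simple).
transcribe.start_transcription_job(
    TranscriptionJobName="future-clip-demo",
    Media={"MediaFileUri": AUDIO_URI},
    MediaFormat="wav",
    LanguageCode="en-US",
)

# Poll until the transcription job finishes.
while True:
    job = transcribe.get_transcription_job(TranscriptionJobName="future-clip-demo")
    if job["TranscriptionJob"]["TranscriptionJobStatus"] in ("COMPLETED", "FAILED"):
        break
    time.sleep(2)

# For brevity, assume the transcript text has been fetched from the
# TranscriptFileUri returned by the job; here it is hard-coded.
text = "My future looks hopeful"

# React only when the trigger word "future" is present, as described above.
if "future" in text.lower():
    result = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    label = result["Sentiment"]  # POSITIVE, NEGATIVE, NEUTRAL or MIXED

    # Hypothetical mapping from sentiment to a light palette; the real
    # color logic inside "me + you" is not publicly documented.
    palettes = {
        "POSITIVE": "warm golds",
        "NEGATIVE": "deep blues",
        "NEUTRAL": "soft whites",
        "MIXED": "shifting violets",
    }
    print(label, "->", palettes.get(label, "soft whites"))
```

A production installation handling live speech would more likely use Amazon Transcribe’s streaming API and feed results to the lighting controller continuously, but the division of labor is the same: Transcribe produces the text, Comprehend produces the sentiment.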

Finally, AWS used an Amazon SageMaker notebook to build a custom machine learning model that provides the framework putting everything in motion. SageMaker is Amazon’s comprehensive development platform for the step-by-step creation of machine learning models, covering labeling, data preparation, feature engineering, bias detection, training, tuning, hosting and monitoring.
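The team’s actual notebook and model aren’t public, but the general SageMaker workflow the company describes, building, training and deploying a custom model, looks roughly like the hedged sketch below using the SageMaker Python SDK. The training script name, IAM role and S3 data path are hypothetical placeholders, not details from the project.

```python
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical IAM role

# Hypothetical estimator: "train.py" stands in for whatever custom model
# logic the team wrote, none of which has been published.
estimator = SKLearn(
    entry_point="train.py",
    role=role,
    instance_type="ml.m5.xlarge",
    framework_version="0.23-1",
    sagemaker_session=session,
)

# Train on labeled examples stored at a hypothetical S3 path.
estimator.fit({"train": "s3://example-bucket/futures/training-data"})

# Deploy the trained model to a real-time endpoint that the sculpture or
# its companion app could call.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(predictor.endpoint_name)
```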

These technologies power both the sculpture at the Smithsonian and a companion mobile app that operates the same way as the mandalas, except that the app takes typed text directly instead of voice.

To create “me + you,” a team of AWS engineers worked with Reddy for more than two years, spending more than 1,200 hours building the underlying AI and cloud technology that powers the sculpture.

“With this exhibit, we aim to blend technology and art in a way that is inspiring,” Sivasubramanian said when asked why AWS chose now to commission its first major artwork. “It can serve as an inspiration to the next generation of builders and artists.”

Sivasubramanian explained that AWS machine learning technologies have been put to use across numerous industries, including healthcare, information technology and manufacturing. He added that machine learning is one of the most transformative technologies in AWS’ stable and that its adoption has been accelerating.

However, most industrial use of machine learning happens behind the scenes, largely hidden from sight. He hopes that demonstrating the technology in the hands of nonprofits and artists will give the public a more tangible way to engage meaningfully with it.

“As I was talking to Suchi yesterday, she spoke about how now machine learning will be part of her core toolkit for the next set of artworks she will explore in her journey,” Sivasubramanian said. “This is her way that she will inspire other builders, and it’s our way of supporting the local arts community as well.”

Photos: AWS
