Google tries to get AI to make music with “Magenta”

Synthesizer

Google already stunned the artificial intelligence community earlier this year when its AlphaGo AI defeated one of the world's best Go players, and now the company is setting its sights even higher with Magenta, a new project whose purpose is to develop AI capable of creating music and other forms of art.

There is no question that AI has incredible potential when it comes to numbers-driven tasks like analyzing large amounts of data, but replicating human creativity is a far more difficult task for AI researchers.

Interestingly, Google’s new project follows a methodology not unlike AlphaGo’s. The developers “train” the AI by feeding it an enormous library of existing data, which in the case of Magenta is music. Then, much like AlphaGo determines the best current move based on the moves that have been made so far, the Magenta music AI chooses which musical notes to play based on what it has heard.

In this way, the AI works more like a jam-session robot than a purely creative AI. During the recent Moogfest event in North Carolina, Google software engineer Adam Roberts demonstrated Magenta’s capabilities by playing a few notes, which Magenta then extended with its own contribution.

“It’s basically just taking what I played and trying to find what’s the most probable note to come after that based on all of the music that we’ve played for it,” Roberts explained during his demonstration.
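Roberts is describing next-step prediction: pick the note most likely to follow, given the notes heard so far and the training corpus. A minimal sketch of that idea, using a toy first-order Markov chain over MIDI note numbers (this is an illustration of the concept, not Magenta’s actual model, which is built on TensorFlow):

```python
from collections import Counter, defaultdict

def train(sequences):
    # Count, for each note, how often every other note follows it.
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def continue_melody(counts, seed, length=4):
    # Extend the seed by repeatedly choosing the most probable next note.
    melody = list(seed)
    for _ in range(length):
        followers = counts.get(melody[-1])
        if not followers:
            break  # never saw this note during training
        melody.append(followers.most_common(1)[0][0])
    return melody

# "Train" on two short C-major phrases (MIDI note numbers).
corpus = [[60, 62, 64, 65, 67], [60, 62, 64, 62, 60]]
model = train(corpus)
print(continue_melody(model, [60, 62], length=3))
```

A real model like Magenta’s uses a neural network rather than raw bigram counts, but the interface is the same: play a few notes in, get the statistically likely continuation back out.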

The music AI is still in its infancy, and while its capabilities are certainly impressive, it likely has a long way to go before it starts putting musicians out of a job. According to Roberts, the team plans to open-source the program in the near future on GitHub at github.com/tensorflow/magenta.

Aside from music, Google has plans to explore other art forms using AI, including visual art, video, and text.

Photo by Meagan