UPDATED 17:02 EDT / JANUARY 27 2023

AI

Google develops new AI system for generating high-fidelity music

Google LLC researchers have developed an artificial intelligence system that can generate high-fidelity music based on a text description provided by the user.

Google detailed the system in a Jan. 26 research paper spotted today by TechCrunch. The AI, known as MusicLM, was trained on 280,000 hours of audio. It’s based on an earlier AI-powered music generator called AudioLM that was detailed last October. 

The new MusicLM system takes a natural language description of a musical track as input and automatically generates corresponding audio. Users can specify the type and number of instruments that the AI should simulate, the genre and other details. 

MusicLM also allows users to describe a track in more abstract terms. During one internal test, Google researchers instructed the AI to generate music that “induces the experience of being lost in space.” Moreover, MusicLM is capable of generating music based on a melody whistled or hummed by the user.

The system generates music that “remains consistent over several minutes” in some cases, Google’s researchers detailed. Internal tests determined that the AI system delivers higher audio quality than existing AI-based music generators. Moreover, it does so while adhering more closely to the description provided by the user.

MusicLM comprises not one but several neural networks that each manage a different part of the music generation workflow. The system’s neural networks are based on the so-called Transformer architecture. Introduced by Google in 2017, the Transformer is a popular neural network design that is used especially widely for natural language processing.
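According to the paper, those networks divide the work in stages: a text embedding conditions a first Transformer stage that produces coarse, high-level audio tokens, and a later stage expands those into fine-grained acoustic tokens that a neural audio codec decodes into a waveform. The sketch below illustrates only that staged structure; every class and method name is a hypothetical stand-in, since Google has not released MusicLM’s code.

```python
# Illustrative skeleton of a staged text-to-music pipeline in the spirit of
# MusicLM/AudioLM. Every component here is a hypothetical stand-in; Google
# has not published the actual implementation.
from dataclasses import dataclass
from typing import Any

@dataclass
class StagedMusicGenerator:
    text_encoder: Any     # text prompt -> conditioning embedding
    semantic_stage: Any   # Transformer: embedding -> coarse semantic tokens
    acoustic_stage: Any   # Transformer: semantic tokens -> fine acoustic tokens
    codec_decoder: Any    # neural audio codec: acoustic tokens -> waveform

    def generate(self, prompt: str, seconds: float = 30.0):
        cond = self.text_encoder.embed(prompt)                 # embed the description
        semantic = self.semantic_stage.sample(cond, seconds)   # high-level structure
        acoustic = self.acoustic_stage.sample(semantic, cond)  # audio detail
        return self.codec_decoder.decode(acoustic)             # raw audio samples
```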

A neural network typically weighs many data points when making a decision, such as how a piece of music should be generated. The Transformer architecture lets the network prioritize those data points by importance: the most relevant details influence the result more than the rest, which improves accuracy.
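That prioritization is carried out by the attention mechanism. The snippet below is a minimal NumPy rendering of scaled dot-product attention, the Transformer’s core operation, not anything specific to MusicLM: each input is scored against the others, the scores are normalized into weights, and highly weighted inputs dominate the output.

```python
# Minimal scaled dot-product attention, the core of the Transformer,
# showing how inputs are weighted by relevance before being combined.
import numpy as np

def attention(Q, K, V):
    """Q, K, V: (seq_len, d) arrays. Returns a weighted mix of V's rows."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: importance per input
    return weights @ V                               # important inputs dominate

# Toy usage: self-attention over 4 inputs of dimension 8
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = attention(x, x, x)
print(out.shape)  # (4, 8)
```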

The MusicLM system also incorporates an AI approach known as sequence-to-sequence modeling. The approach involves turning a piece of text, such as a user’s description of a musical track, into an abstract mathematical representation called an embedding. This embedding can be turned into another type of data, such as audio, more easily than the original text description.
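A toy illustration of the idea, using a throwaway tokenizer and random weights rather than MusicLM’s actual components: however long the description, it is reduced to one fixed-size vector that a downstream model can condition on.

```python
# Toy illustration of text embedding. The tokenizer and weights here are
# random stand-ins, not MusicLM's actual components.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 10_000, 128
embedding_table = rng.normal(size=(VOCAB, DIM))

def embed_text(text: str) -> np.ndarray:
    token_ids = [hash(w) % VOCAB for w in text.lower().split()]  # toy tokenizer
    vectors = embedding_table[token_ids]   # one vector per token
    return vectors.mean(axis=0)            # pool into one fixed-size embedding

e = embed_text("a calming violin melody backed by a distorted guitar riff")
print(e.shape)  # (128,) -- same size regardless of description length
```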

Google has not yet released the code for MusicLM. However, the company’s researchers published an AI training dataset to support further research into automated music generation. The dataset comprises about 5,500 pieces of music, each paired with a text description designed to make the audio easier for neural networks to interpret.
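The dataset, released under the name MusicCaps, pairs each clip with its caption. Below is a minimal sketch of reading it with pandas; the file path and column names are assumptions about the release format, not details confirmed by the paper.

```python
# A minimal sketch of iterating over a captioned-music dataset with pandas.
# The file path and column names ("ytid", "caption") are assumptions about
# the release format, not confirmed by the article or paper.
import pandas as pd

df = pd.read_csv("musiccaps.csv")                  # assumed: one row per clip
print(len(df), "captioned clips")
for _, row in df.head(3).iterrows():
    print(row["ytid"], "->", row["caption"][:80])  # clip ID and caption excerpt
```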

Photo: Google
