Google just released new details about its latest AI project dubbed Magenta.
The tech giant originally announced Magenta last month, stating that the initiative would focus on training AI to create its own music, video, and art.
But in a blog post published Wednesday, the company gave a better idea of why it aims to make AI more creative.
Basically, if Magenta’s AI succeeds at becoming more creative, it could help improve the language capabilities of Google’s AI.
So far, machine-generated music works well in small chunks but can't sustain a longer narrative arc, Douglas Eck, the research scientist leading Magenta, wrote in the blog post. If Google can create AI capable of generating full-length songs, it would have huge potential in other areas as well, like language modelling.
“The design of models that learn to construct long narrative arcs is important not only for music and art generation, but also areas like language modelling, where it remains a challenge to carry meaning even across a long paragraph, much less whole stories,” Eck wrote.
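The limitation Eck describes can be illustrated with a toy sketch. A simple first-order Markov model picks each note based only on the previous one, so short phrases sound locally plausible but there is no long-range structure. This is an invented illustration, not Magenta's actual model; the note names and transition table are made up.

```python
import random

# Toy first-order Markov model over note names. Each note depends only on
# the previous note, so short runs are locally coherent, but nothing ties
# the beginning of the melody to its end -- no "long narrative arc".
# The transition table is invented for illustration.
TRANSITIONS = {
    "C": ["E", "G", "C"],
    "E": ["G", "C", "E"],
    "G": ["C", "E", "G"],
}

def generate_melody(start="C", length=8, seed=0):
    rng = random.Random(seed)       # seeded so the sketch is reproducible
    note, melody = start, [start]
    for _ in range(length - 1):
        note = rng.choice(TRANSITIONS[note])  # memory of exactly one note
        melody.append(note)
    return melody

print(generate_melody())
```

Because the model's "memory" is a single note, making the melody longer never adds structure; that is the gap that models with long narrative arcs would need to close.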
Building AI that can create music also has more specific use cases. When Google announced Magenta last month, Eck said that it could create music to counteract stress. So, if someone’s wearable detected their heart rate was elevated, the AI could play soothing music.
To make that AI more creative, Google will use TensorFlow, a machine learning library that Google made open source in November. TensorFlow uses a process known as deep learning to complete tasks on its own, and you may have unknowingly interacted with it when using the search function in Google Photos.
TensorFlow’s sophisticated image recognition is why you can search for a landmark in Google Photos and expect relevant photos to appear.
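Conceptually, once an image-recognition model has tagged each photo with labels, search reduces to matching the query against those labels. The minimal sketch below assumes that setup; the photo names and labels are invented, and no real recognition model is involved.

```python
# Hypothetical sketch: photos already tagged by an image-recognition model.
# Searching is then just matching the query text against each photo's labels.
# All filenames and labels below are invented for illustration.
PHOTO_LABELS = {
    "IMG_001.jpg": {"eiffel tower", "paris", "night"},
    "IMG_002.jpg": {"beach", "sunset"},
    "IMG_003.jpg": {"eiffel tower", "daytime"},
}

def search_photos(query):
    """Return filenames whose label set contains the query, sorted for stability."""
    q = query.lower()
    return sorted(p for p, labels in PHOTO_LABELS.items() if q in labels)

print(search_photos("eiffel tower"))
```

The hard part in practice is producing the labels, which is where deep learning comes in; the search step itself is simple once the tags exist.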
The Magenta team has also posted its software to a GitHub page for outside researchers and coders to use. Eventually, coders will be able to contribute their own code to Magenta’s GitHub repository.