
Google's new AI system turns text into music
According to a report published by The Verge, an American technology news website, via TechCrunch, researchers at Google have developed an artificial intelligence system that can generate minutes-long musical pieces from text prompts, and can even transform a whistled or hummed melody into other instruments.
This is similar to the way that systems like DALL-E generate images from written prompts.
Google has released a number of samples produced with the model, which the outlet identifies as MusicLM, although the model itself is not available for the public to experiment with.
The examples are quite convincing.
There are five-minute pieces generated from just one or two words like 'melodic techno,' as well as thirty-second fragments that sound like full songs, created from paragraph-long descriptions prescribing a genre, a vibe, and even specific instruments.
The demonstration website also includes the results of training the model to generate 10-second clips of instruments such as the cello or maracas, eight-second clips in a given genre, music that would fit a jail escape, and even what a beginner piano player sounds like compared with an accomplished musician.
According to The Verge, the demo also includes the model's interpretations of phrases such as 'futuristic club' and 'accordion death metal.'
MusicLM is also capable of imitating human voices, and while it seems to get the tone and general sound of voices right, the vocals it produces have a distinctly odd, synthetic quality.
AI-generated music is not a new phenomenon: according to The Verge, systems have been credited with composing blockbuster songs, replicating Bach more convincingly than a human could as far back as the 1990s, and backing live performances.
However, none of them has been able to generate music of particularly intricate composition or high fidelity, largely because of technological limits and a dearth of training data.
According to the report, MusicLM may be the first model capable of doing so.
According to a research paper hosted by Cornell University, MusicLM was trained to generate coherent songs from descriptions of 'significant complexity,' such as 'enchanting jazz song with a memorable saxophone solo and a solo singer' or 'Berlin '90s techno with a low bass and strong kick.'
In addition, Google has announced that Chrome users on Android can now lock their Incognito browsing sessions, so that no one else can reopen them without authenticating.
Google made the announcement in a blog post, stating that 'you can require biometric authentication when you restart an Incognito session that was stopped.'
This feature was previously exclusive to iOS and is now available on Android as well.