The sober lines of the Mies van der Rohe Pavilion act as a sounding board for the notes that Marco Mezquida draws from the piano this Tuesday, accompanied by sounds from the four speakers that surround him. Tubular bells, Tibetan voices, white noise: synthetic textures that blend with the Menorcan pianist’s improvisations in “Piano AI”, a collaboration between man and machine that heralds the new possibilities that artificial intelligence (AI) offers the world of music.
This proposal by the Sónar festival to combine AI with analog music, an initiative that began in 2021, is a new approach to a phenomenon that in recent months has burst into the world of music with great excitement, as it has in other fields. Paul McCartney announced yesterday that AI has been used to recover John Lennon’s voice from a 1978 demo in order to finish recording the song Now and Then, which will be released this year. “It’s a little scary but it’s exciting, because it’s the future. We’ll have to see where it takes us”, said the ex-Beatle, who weeks earlier could be heard performing New, a song the bassist composed in 2013, together with his late partner Lennon, in a version created entirely with AI that has been uploaded to social networks.
Technology has also virtually united David Guetta and Eminem, in a track produced by the DJ himself which, although he was satisfied with the quality of the result, he never released in order to avoid possible accusations of plagiarism. That is what happened with Heart on My Sleeve, a song published on social networks that unites the voices of Drake and The Weeknd through artificial intelligence software in a piece that neither of them composed. After racking up 15 million plays, the track disappeared under pressure from the record industry.
Similar in concept is Aisis, that is, the recreation through artificial intelligence of a reunion of the Gallagher brothers to record new Oasis songs. In this case the starting point was tracks composed in 2013 by the band Breezer, performed by an AI fed with recordings of Liam Gallagher, vocalist of the defunct band, who admitted to having heard a track that sounded “better than other nonsense out there”.
And more than a few artists are beginning to look this new technology in the eye, as the American CJ Carr has done: with his Dadabots project he has gone a step further and trained an AI to compose songs in specific styles, in his case black metal, math rock and even free jazz in the manner of John Coltrane. Though perhaps the most striking tracks are those in which Kurt Cobain performs Gorillaz songs or, upping the ante, Frank Sinatra sings Britney Spears’ Toxic.
Behind this explosion of possibilities is ChatGPT, the text generation software that has revolutionized the world of artificial intelligence and that, together with other generative models, has driven the creation of programs that let you modify your voice or create music with AI. “Since natural language has an impact on almost all areas of expression, it has impacted everything”, notes Francesc Xavier Serra, director of the music technology research group at Pompeu Fabra University, who states with certainty that the technology already exists to create, for example, a device that lets you sing with someone else’s voice, such as the aforementioned Drake, John Lennon or Sinatra, without a doubt a future karaoke hit. “That was the first project we did with Yamaha 30 years ago”, explains Xavier Serra; “it was not commercialized because of computing power, but now it could be done perfectly well; someone just needs to do it”.
“It’s a world with enormous potential; I think we’re at the first of many chapters”, comments Marco Mezquida, who recalls that behind his collaboration with artificial intelligence lies the work of Philippe Salembier and Josep Maria Comajuncosa, researchers at the Polytechnic University of Catalonia, the human control behind the machine. The possibility of AI developing on its own, without human presence, strikes the pianist as “surprising”; he considers that it would in any case be able to compose musical pieces, “understanding the parameters of a composition: an exposition, a development and a denouement”, but that improvising would be harder. “To connect and create from the stimuli of other musicians, and to respond in a way that is coherent, musical and artistically plausible, as a jazz quartet can, for example, right now I see that as a bit far off, but surely it’s a matter of time, because all of this is possible”.
Xavier Serra goes a step further and points out that 40 years ago, when he first became interested in music technology, scores were already being composed using algorithms, although “I was convinced that it would never replace a composer or the skill of a well-trained musician”. Now “I don’t think that way anymore; I am convinced that we have reached that point, and if we have not gone further it is because of copyright issues and political and industrial conflicts, not because of technological limitations”. In other words, artificial intelligence can already “perfectly generate music of a quality similar to what a mediocre composer could produce”.
With the possibilities of this new technology come fears about the dangers it implies, in the face of which the visual artist and art theorist Daniel G. Andújar wonders “how to generate a space of resistance in this increasingly globalized world”. Take Spotify, for example, and how the music streaming service exploits its users’ information to create playlists: databases that feed the AI, that will become increasingly difficult to access, and that will be marketed to compose more commercial songs. “The danger comes from privatization”, he comments; “Spotify will sell you success patterns, what the sounds are, what the repeating patterns are; in fact, it is already working on it.”
“With AI, a lot of music will be created automatically”, predicts Xavier Serra, “made-to-measure music for playlists to fall asleep to”, for which no composer will be needed because an AI will do it perfectly. Music that once employed many artists, jobs that “will surely be lost”.
Tied to this problem is the new paradigm posed by the possibility of selling an artist’s tone of voice or style without any recording behind it. This is what Grimes is doing: the Canadian dance music artist allows her voice to be used via AI in other artists’ songs in exchange for 50% of the royalties.
“The Beatles are selling all the rights to their music”, Daniel G. Andújar offers as an example, warning that, as an artist, “the moment you have a hit you will sell your face, your gestures, your voice, your timbre. You won’t even need to sing anymore. And you won’t have to act either, because beyond that they are also building databases of actors, of movement, of dance, of style simulations”.
“With this new technology, the record industry has to change, but it is very reluctant”, reasons Xavier Serra, recalling that it already managed to weather the first revolution, that of the internet and streaming services, in which “the music industry managed to get in and control it”. “I would like to think that it will not be able to adapt this time”, he acknowledges, hoping that the arrival of artificial intelligence will “radically change the concept of creation and of what is monetizable”, and he points out that the concept of copying is cultural, not physical. “Everyone has heard all the music, and throughout history people have been inspired by it and have copied it.”