Artificial Intelligence is becoming increasingly widespread, even in unexpected areas. Until a few years ago it was thought that its use would be limited to industrial production, repetitive tasks and, in general, jobs that do not enrich the human spirit. Today this assumption is no longer valid. Now AI is also an artist.

Painting, sculpture, poetry, photography, cinema: there is no artistic field in which Artificial Intelligence has not been applied at least once, often with surprising results.

The musical field is perhaps the most affected by this creative revolution, probably because music, after all, is an art built on mathematics and physics, and is therefore predisposed to the influence of algorithms, code, and data.

The latest albums, soundtracks and songs released by artists and companies using tools based on Artificial Intelligence have shaken the music market. According to experts, the sector now risks a profound revolution (if not outright destruction). But is that really the case? In other words, should we shoot the "artificial" piano player or not?

 

How does the application of artificial intelligence to music work?

Put simply, in order to learn, the AI is fed thousands and thousands of songs through neural networks (mathematical models that imitate biological neural networks). These networks are trained with machine learning, and in particular with deep learning (a sub-category of ML that can also infer meaning without human supervision). The pieces are fragmented and analyzed, the system extracts their basic information and recognizes recurring patterns, and it then uses those patterns to create original works similar to those any artist could compose.
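To make the idea concrete, here is a deliberately tiny sketch of the same principle in Python: patterns are extracted from existing melodies and then recombined into a new one. It uses a simple Markov chain rather than a deep neural network, and the corpus, the note numbers and the generate function are invented for illustration; real systems train deep networks on enormous datasets of audio or scores.

```python
import random
from collections import defaultdict

# Toy "training corpus": melodies as lists of MIDI note numbers.
# In a real system these would come from thousands of songs.
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 72, 67, 64, 60],
    [62, 64, 66, 67, 69, 67, 66, 64, 62],
]

# "Fragment and study" the pieces: count which note tends to follow which.
transitions = defaultdict(list)
for melody in corpus:
    for current_note, next_note in zip(melody, melody[1:]):
        transitions[current_note].append(next_note)

def generate(start_note=60, length=16):
    """Create a new melody by sampling the learned transition patterns."""
    melody = [start_note]
    for _ in range(length - 1):
        candidates = transitions.get(melody[-1])
        if not candidates:          # dead end: fall back to the starting note
            candidates = [start_note]
        melody.append(random.choice(candidates))
    return melody

print(generate())   # e.g. [60, 62, 64, 65, 67, 65, ...]
```

The output resembles the training melodies without copying any of them, which is, in miniature, what the far larger deep-learning systems described above aim for.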

 

It all depends on how it is used, and how it sounds…

While the learning process is similar in any system based on machine learning, there are two quite different ways of applying AI to music: Sony's Flow Machines and Google's Magenta, for example, sit at the two extremes.

The first is not a creative Artificial Intelligence, or at least not in the usual sense of the term; it simply facilitates the artist's work, freeing their creativity and stimulating it with suggestions and ideas based on their preferences and inclinations.

Magenta, on the other hand, is a true artificial composer that, depending on the inputs it is given, can independently create an original track. The quality of the compositions is still unsatisfying in many respects, but the technology is advancing rapidly and so are its results.

These are not the only tools available at the moment; among others, we can mention AIVA, OpenAI's MuseNet, Amper and Jukedeck. Each is specialized in particular features and functionalities. What they have in common is that they have all attracted the attention of media and investors.

If we also consider the recommendation algorithms of streaming platforms like Spotify or Apple Music, or all the applications of AI in the field of editing tools, it is clear that the penetration of this technology in the musical field is more advanced than we might believe.
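As a toy illustration of what those recommendation algorithms do under the hood, the sketch below (in Python, with invented track names and feature vectors) ranks a small catalogue by how closely each track matches a listener's taste profile; real platforms combine far richer signals, such as listening history, skips and collaborative filtering.

```python
import math

# Invented audio-feature vectors (e.g. energy, danceability, acousticness).
tracks = {
    "track_a": [0.9, 0.8, 0.1],
    "track_b": [0.2, 0.3, 0.9],
    "track_c": [0.8, 0.7, 0.2],
}

def cosine(u, v):
    """Similarity between two feature vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# A listener's taste profile, e.g. the average of tracks they replay often.
taste = [0.85, 0.75, 0.15]

# Rank the catalogue by similarity to the listener's profile.
for name, features in sorted(tracks.items(), key=lambda kv: -cosine(taste, kv[1])):
    print(name, round(cosine(taste, features), 3))
```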

 

But what are the possible consequences of large-scale adoption?

At least in the short term, there should be no substantial change in the way we listen to or choose our music.

Some "artificial" songs and albums, like "I AM AI" by Taryn Southern, sung by the performer but composed, played and produced by the open-source software Amper, will continue to come out and will surely get a good commercial success, but they will be exceptions, and probably they will be appreciated for their innovativeness and not for their intrinsic quality.

Over time, however, things will change. One sign of this evolution is the acquisition of Jukedeck, which we mentioned earlier as one of the best intelligent music-composition tools, by TikTok, one of the most successful social networks of recent years and one especially loved by the younger generations.

Imagine what could come of this marriage. Perhaps, once registered on the social network, we will be able to create our own song with the help of an advanced AI, then sing it and share it with friends.

In this way, it would be possible to break down a barrier that is impassable for most people: learning a musical instrument.

 

Every subscriber could become a singer, a musician, and maybe a music influencer.

However beautiful or frightening it may be, this scenario is the fruit of our imagination. Things are undoubtedly changing, though, and music is facing many transformations driven by technological innovation (augmented-reality concerts, artists who are no longer alive returning to sing as holograms, cryptocurrencies used to buy songs and albums directly from the artists, and so on).

Ultimately, to answer the question we asked ourselves at the beginning: should we shoot the "artificial" piano player or not?

Well, one thing is always true: blocking innovation is counterproductive. The goal is to guide it down the right path, allowing a gentle transition for artists and industry professionals without harming anyone.

Artificial Intelligence was born as a tool to enable or facilitate human activities. In this case, if we learn to use it properly, it could stimulate people's creativity, finally giving shape to art for everyone.

Photo by bady qb on Unsplash