The Ethics of the AI Song Contest
--
by Ed Newton-Rex and Hendrik Vincent Koops
Earlier this year, we were on the judging panel for the first international AI Song Contest. Run by Dutch national broadcaster VPRO, and inspired by Eurovision, the contest brought together 13 teams who battled it out, in front of the public and us judges, for supremacy in AI music. The idea wasn’t to have AI write an entire song – rather, it was to encourage human-AI collaboration.
The submissions were fantastic. The contest was won by a team called Uncanny Valley from Australia, but it was incredibly hard to pick between the entries. There are lots of ways AI can be used in a piece of music, and it can be hard to tell exactly how it was used just by listening, so our judging was made much easier by each team submitting a write-up of their process along with their song.
These write-ups made for fascinating reading. Human-AI co-creation of music is a very new field, and there hasn’t been much study of how people approach it. We asked the teams a comprehensive set of questions, and their answers gave us valuable insight into the co-creation process. We wrote a paper detailing some of the things we learned, including the kinds of tools people are currently using for AI co-creation and the kinds of approaches that seem to work for them.
One thing we didn’t cover in the paper, though, is the ethical considerations behind the submissions. The ethics of AI systems in general are an incredibly important topic, and the ethics of AI music are no exception. So, as part of the teams’ write-ups, we asked them a number of questions about how they thought about the ethics of what they were doing. The answers we received are well worth diving into, as they reveal a lot about the state of ethical thinking among practitioners of human-AI musical collaboration today.
It’s important to bear in mind that many of the entrants to the contest were new to AI music — so there’s no reason they should have given any thought to these ethical considerations before the contest. Evaluating their responses isn’t meant to highlight any perceived inadequacies in their thinking, or to point fingers at anyone whose views differ from our own — far from it. We just wanted to review the responses to see what we could learn.
The questions
We limited our ethical questions to just two:
- How do you feel in terms of ownership and authorship of the song relative to the AI?
- What are some ethical considerations that came up in the process?
By keeping them general, we left the teams room to discuss anything that came to mind in the ethical space.
Ownership
In regular music, ownership of a song is generally meant to reflect how much of it each musician wrote (in theory, at least). How do you account for ownership of a song created through collaboration between humans and AI? What should the split be between the engineers who built the AI system, the person who used the system to create the song, and the musicians whose music the system was trained on?
Entrants’ views on ownership of the songs can be divided along two lines:
- What they considered their level of ownership to be.
- Which parties they took into account at all when forming an opinion on ownership.
On the first question, of the nine teams who discussed ownership: four considered themselves the sole owners of the created music (or, in one case, said they felt they had created a song and offered no suggestion that ownership should be allotted elsewhere); three considered ownership to be shared between various parties, with the majority falling to them; and two suggested joint ownership without offering an opinion on where the balance lay.
On the second question, seven teams mentioned the AI systems or the developers of those systems when considering the question of ownership, two teams mentioned no parties other than themselves, and only one team mentioned the musicians who had provided the training data when discussing ownership.
Among this small set of practitioners, then, a significant minority considered themselves sole owners of the created work, and far more teams treated the AI systems, or the developers who wrote them, as candidates for ownership than the musicians whose training data was used. One team, for instance, wrote this:
We chose which algorithm to run and also selected the track. So in that sense, we’re the owner. On the other hand, we made use of Magenta, which generated the basics. And magenta is created by other people. On the other hand, you also can’t generate a guitar tune without a guitar and we never say that Gibson is part of the owner of a song. Is AI a tool or is it a creator? It’s both, so there’s a very gray zone.
Calling it a ‘gray zone’ seems accurate — the question of ownership of human-AI co-created music doesn’t have a clear answer yet. But entirely leaving out the musicians whose music was used as training data seems unlikely to go down well in the musical community.
Training data and copyright
Every team used AI systems that were trained on some other musicians’ music. What would the entrants think of this? And would they touch on the question of whether or not training on third-party music breaches copyright?
Five teams commented on questions of data use, copyright, and/or giving credit to authors of training data. Of these, three noted the possibility that exact replicas of segments of the training data might be created, which led two teams to implement checks to keep AI-generated material that duplicated the training data out of their entries. It is interesting to note that these checks suggest a misinterpretation of copyright law: copyright concerns the point at which a copy of the original material is made (i.e., in machine learning systems, the point at which the training data is copied for the purposes of training), rather than the output of the system.
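The write-ups don’t describe how these duplication checks were implemented, but a minimal, hypothetical version is easy to sketch. The example below is ours, not any team’s actual code: it assumes melodies are represented as lists of (pitch, duration) pairs, and flags a generated melody if any run of n consecutive notes also appears verbatim in the training data.

```python
# Hypothetical sketch of a training-data duplication check, assuming melodies
# are lists of (pitch, duration) tuples. Not any team's actual implementation.

def ngrams(seq, n):
    """All contiguous runs of n consecutive notes in a melody."""
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def duplicates_training_data(generated, training_corpus, n=8):
    """True if any n-note run in the generated melody appears verbatim
    in any training melody."""
    generated_runs = ngrams(generated, n)
    return any(generated_runs & ngrams(melody, n) for melody in training_corpus)

# Usage: discard or regenerate any output that copies the training data.
training = [[(60, 1), (62, 1), (64, 1), (65, 1), (67, 1), (69, 1), (71, 1), (72, 1), (74, 2)]]
candidate = [(60, 1), (62, 1), (64, 1), (65, 1), (67, 1), (69, 1), (71, 1), (72, 1)]
print(duplicates_training_data(candidate, training))  # True
```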
One team considered the implications for copyright law in future, and one team pointed out the need to credit the authors of the original musical training material, saying they preferred to err on the side of being “too generous”. Overall, discussion of data use, copyright, and crediting the authors of training data was notable predominantly for its absence; where it did appear, it was generally clear that entrants’ knowledge of the legal questions surrounding copyright was low.
Lyrics
Only one team considered the risk of generating lyrics they would rather avoid, and checked the output “for things like racism, gender bias, etc.” before finalising their song. Given that “Kill the government” was a line in one of the entries, this is something that should perhaps be more widely considered in musical co-creation in future.
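The team didn’t describe exactly how they performed this check, and a robust version would need human review or a trained classifier. Purely as an illustration, here is a hypothetical first-pass screen that flags generated lines containing terms from a curated list (the list below is a placeholder).

```python
# Hypothetical first-pass lyric screen, not the team's actual method. It only
# flags lines for human review; it cannot catch subtler problems such as bias.

FLAGGED_TERMS = {"kill", "hate"}  # placeholder; a real list would be carefully curated

def flag_lyrics(lyrics):
    """Return generated lines containing any flagged term, for review or regeneration."""
    flagged = []
    for line in lyrics.splitlines():
        words = {word.strip(".,!?").lower() for word in line.split()}
        if words & FLAGGED_TERMS:
            flagged.append(line)
    return flagged

print(flag_lyrics("Dancing in the light\nKill the government"))  # ['Kill the government']
```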
Machines taking musicians’ jobs
The question of technological displacement of human musicians’ jobs was only addressed by one team, who said:
“Fully automated composition systems will achieve commercially and artistically viable results. This will endanger an important source of income for many artists.”
The emphasis elsewhere in the entrants’ process documents was on the potential positive implications of human-AI co-creation, such as one team’s assertion that “this research and results open up a world in front of AI composers who can assist in composing complete albums without burning out artists and giving them sleepless nights”. That emphasis, coupled with the lack of attention to technological unemployment, might indicate that there is an argument for increasing education among practitioners of human-AI musical co-creation as to the potential adverse effects of AI in music creation, particularly any impact on the jobs market for musicians.
Some other ethical considerations mentioned
One team noted that
“we believe that casual listeners won’t even notice that our song was composed thanks to the assistance of AI.”
This presents an interesting ethical dilemma: should listeners have a right to be made aware of the use of AI in musical creation? In France, images that have been digitally altered to make subjects appear thinner are required to be accompanied by a disclaimer of that fact; perhaps there is an argument for a similar requirement when music has been created with input from AI systems.
And one team noted the importance of transparency around how music created using AI is made, saying:
“Videos posted on YouTube with titles like “song composed by Artificial Intelligence” […] are clickbait, misinformation, fake news!”
Their opinion was that
“it’s fundamentally problematic and wrong to mislead the public about “where AI is at” […] because it sows distrust in AI in particular, and science and academia in general.”
It is certainly true that claims of music being created by AI are often not accompanied by detailed explanations of how the music was made, and there is perhaps room, in future, for better explanations of how the human-AI co-creation process worked in individual cases.
What next for ethics in AI music?
Comparing the teams’ responses on ethics with their descriptions of their technical processes, it’s clear that academia and industry have put far more work and thought into developing AI systems for making music than into exploring the ethical implications of doing so. This is perhaps unsurprising: with new technology, it’s often the case that it’s built first and questioned, ethically, only later.
However, now that AI music creation tools are becoming more advanced, it is time to start asking ethical questions. Who owns music that’s created by humans in collaboration with AI, where the AI is trained on musicians’ material? Is training on copyrighted music without permission ethical? Do listeners have a right to know that AI was involved in the creation process? How should we think about the impact AI might have on musicians’ jobs? Our hope is that, in future iterations of the AI Song Contest, and in other forums, these questions will be explored more and more, and that a growing discussion will help ensure AI music is taken in a direction that everyone — researchers, musicians and listeners — can be happy with.