In a paper published on the preprint server Arxiv.org, the researchers describe the novel approach, which uses a specially designed component to capture the timbre of singers from noisy singing data.
The work -- like OpenAI's music-generating Jukebox AI -- has obvious commercial implications.
Music artists are often pulled in for pick-up sessions to address mistakes, changes, or additions after a recording finishes. AI-assisted voice synthesis could eliminate the need for these sessions, saving time and money for the singers' employers.
It could also be used to create deepfakes that stand in for musicians, making it seem as though they sang lyrics they never did (or putting them out of work).
The researchers said that singing voices have more complicated patterns and rhythms than normal speaking voices. Synthesising them requires information to control both duration and pitch, which makes the task challenging. Moreover, few singing training data sets are publicly available, and songs used in training must be manually analysed at both the lyrics and audio level.
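To make the duration-and-pitch point concrete, here is a minimal illustrative sketch in Python (an assumption for clarity, not the paper's actual model): unlike plain text-to-speech, a singing synthesiser must be told how long to hold each phoneme and at what pitch, so a musical score is expanded into frame-level conditioning before any audio is generated.

```python
# Illustrative sketch only -- not the researchers' method. Each note in the
# score is (phoneme, duration_in_frames, midi_pitch); we expand the score
# into the per-frame (phoneme, f0) sequence a synthesis model could consume.

def midi_to_hz(note: int) -> float:
    """Convert a MIDI note number to frequency in Hz (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def expand_score(score):
    """Expand (phoneme, frames, midi) tuples into per-frame (phoneme, f0) pairs."""
    frames = []
    for phoneme, n_frames, midi in score:
        f0 = midi_to_hz(midi)
        frames.extend((phoneme, f0) for _ in range(n_frames))
    return frames

# A toy two-note fragment: hold "la" on A4 for 3 frames, then "li" on C5 for 2.
score = [("la", 3, 69), ("li", 2, 72)]
conditioning = expand_score(score)
print(len(conditioning))  # 5 frames total
print(conditioning[0])    # ('la', 440.0)
```

The point of the sketch is that the model's input carries explicit timing and pitch information per frame, which ordinary speech synthesis does not need.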