Researchers at the University of Waterloo in Canada have developed LyricJam, a deep learning system capable of generating live lyrics for instrumental music. The goal is to create a tool that allows artists to explore themes and ideas related to the emotions produced by their music.
Will LyricJam become an essential tool for artists in search of inspiration? That is the idea behind a project from researchers at the University of Waterloo in Ontario: a computer system that can generate lyrics for live instrumental music. Unlike other lyric-generation systems, LyricJam can suggest words to match the rhythm, chords, or instruments in real time, as an artist plays live music. This can help artists find new lyrics that fit their music.
The researchers have been developing this deep learning system for several years. Initially, the goal was to generate lyrics in a given artist's style by analyzing their existing lyrics and music. The system is now evolving toward a model in which artificial intelligence acts as a kind of musical partner, suggesting new possibilities to creators and inspiring new work.
Making music in a different way
"The goal of this research was to design a system that can generate lyrics reflecting the mood and emotions expressed through various aspects of music, such as chords, instruments, tempo, etc.," Olga Vechtomova, one of the researchers on the project, told TechXplore. "We set out to create a tool that musicians could use to get inspiration for their own song writing."
In essence, the system processes live music played by a musician or band and generates lyrics that match the emotions the music expresses. Artists can then review the generated lyrics, drawing inspiration from them or adapting them, which gives them an opportunity to discover new and interesting themes or lyric ideas they had not previously considered.
So how does it all work? The software converts the incoming audio into spectrograms, then uses its learned models to generate lyrics corresponding to the music in real time. To learn efficiently, the model uses two variational autoencoders: one learns representations of the audio, the other learns representations of the lyrics. An autoencoder is an artificial neural network that learns a compact representation of a data set; it becomes variational when it learns a probability distribution over that representation, which allows it to generate new outputs, in this case lyric suggestions.
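To make the idea of a variational autoencoder concrete, here is a minimal sketch in NumPy. It is not LyricJam's actual model: the linear encoder/decoder, the dimensions (128 spectrogram bins, a 16-dimensional latent space), and the loss form are illustrative assumptions. It shows the core mechanism the paragraph above describes: an encoder that outputs a probability distribution (a mean and log-variance) over a latent space, a sampling step, and a decoder that generates outputs from the sample.

```python
import numpy as np

rng = np.random.default_rng(0)


def reparameterize(mu, logvar):
    # Sample z = mu + sigma * eps: the "variational" step that draws
    # from the learned distribution instead of using a fixed code.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps


class TinyVAE:
    """Minimal variational autoencoder: a linear encoder producing
    (mu, logvar) over a latent space, and a linear decoder."""

    def __init__(self, input_dim, latent_dim):
        # Untrained illustrative weights; a real model would learn these.
        self.W_enc = rng.standard_normal((input_dim, 2 * latent_dim)) * 0.01
        self.W_dec = rng.standard_normal((latent_dim, input_dim)) * 0.01
        self.latent_dim = latent_dim

    def encode(self, x):
        h = x @ self.W_enc
        return h[:, :self.latent_dim], h[:, self.latent_dim:]

    def decode(self, z):
        return z @ self.W_dec

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = reparameterize(mu, logvar)
        return self.decode(z), mu, logvar


def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction error plus a KL-divergence term that keeps the
    # learned distribution close to a standard normal prior.
    recon = np.mean((x - x_hat) ** 2)
    kl = -0.5 * np.mean(1 + logvar - mu ** 2 - np.exp(logvar))
    return recon + kl


# A stand-in for spectrogram frames: a batch of 4 vectors of 128 bins.
x = rng.standard_normal((4, 128))
vae = TinyVAE(input_dim=128, latent_dim=16)
x_hat, mu, logvar = vae.forward(x)
print(x_hat.shape)  # reconstructed batch, same shape as the input
```

Because the decoder consumes a *sample* rather than a fixed code, the same input can yield varied outputs, which is what lets a system like this propose lyric suggestions rather than a single deterministic answer.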
It is possible to try out LyricJam on its dedicated site to see its progress; for now, it only generates lyrics in English. LyricJam could one day become an extremely valuable tool for artists looking for unique lyrics and wishing to diversify their themes. The researchers at the University of Waterloo are continuing to refine the system and are designing further tools to extend its capabilities.