Abstract
Recent developments in music streaming applications and websites have kept the music emotion recognition task active and exciting. Significant challenges in music emotion recognition include data accessibility, data volume, and the identification of emotionally relevant features. Several researchers have shown that emotionally relevant features can be identified by analyzing lyrics and audio signals; the challenging part is the limited availability of datasets annotated with lyrical emotions. In this study, lyrical features relevant for identifying four emotions (happy, sad, relaxed, and angry) were extracted from the Music4All dataset using several machine learning algorithms based on a semantic psychometric model. In addition, a transfer learning approach was used to learn the emotions of lyrics from an in-domain dataset and then predict the emotions of the target dataset. Further, the BERT model improved the overall accuracy to 92%. A simple lyrics recommender system was also built using the Sentence Transformer model.
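To illustrate the recommender component mentioned above, the following is a minimal sketch of lyric-similarity retrieval with the sentence-transformers library. The model name, the example lyrics, and the top_k parameter are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a lyrics recommender with sentence-transformers.
# Model name and example lyrics are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model; the paper does not specify one

# Hypothetical catalogue of song lyrics (stand-ins for Music4All entries).
lyrics = [
    "Walking on sunshine, nothing can bring me down",
    "Tears fall quietly in the empty room tonight",
    "Rage burns inside, I won't back down again",
]
lyric_embeddings = model.encode(lyrics, convert_to_tensor=True)

def recommend(query_lyric: str, top_k: int = 2):
    """Return the top_k catalogue lyrics most semantically similar to the query."""
    query_embedding = model.encode(query_lyric, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, lyric_embeddings)[0]
    best = scores.topk(k=min(top_k, len(lyrics)))
    return [(lyrics[int(i)], float(scores[int(i)])) for i in best.indices]

print(recommend("I feel so happy and free today"))
```

In this sketch, recommendations are ranked purely by cosine similarity between sentence embeddings; the paper's actual system may differ in model choice and ranking details.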