Human expressions play an important role in extracting an individual's emotional state. They help determine a person's current state and mood, allowing emotion to be understood from facial features such as the eyes, cheeks, forehead, or even the curve of a smile.
Surveys confirm that people use music as a form of expression and often relate a particular piece of music to their emotions. Building on how music affects the human brain and body, our project extracts the user's facial expressions and features to determine their current mood. Once the emotion is detected, a playlist of songs suited to that mood is presented to the user. This can help alleviate or calm the user's mood, and it retrieves mood-appropriate songs quickly, saving the time otherwise spent searching for them. In parallel, we are developing software that can be used anywhere, providing the functionality of playing music according to the detected emotion.
I. INTRODUCTION
A 2019–2020 survey found that 68% of people aged 18 to 34 listen to music every day, and that the average weekly time spent listening to songs is 16 hours and 14 minutes. This clearly shows that music acts as an escape for a few moments and provides relaxation. With the advancement of technology, several music players with features such as fast forward, pause, shuffle, and repeat have been developed, but applications that provide facial recognition are not yet in regular use. Our project can therefore play an important role in this scenario, as this music player works on the emotions and behavior of the user. It recognizes the user's facial emotion and plays songs accordingly. The emotions are recognized using a machine learning method, the EMO algorithm. The webcam captures an image of the user, or the user can instead select an emoji for their expression. The application then extracts the user's facial features from the captured image. The foremost concept of this project is to automatically play songs based on the user's emotions, providing user-preferred music with respect to the detected emotion. According to the emotion, music is played from predefined directories.
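As a minimal illustration of this idea, the following Python sketch maps a detected emotion to a song drawn from a predefined directory. The directory layout and function name are illustrative assumptions, not our actual implementation.

```python
import os
import random

# Predefined emotion-to-directory mapping (illustrative layout).
EMOTION_DIRS = {
    "happy": "songs/happy",
    "sad": "songs/sad",
    "neutral": "songs/neutral",
    "angry": "songs/angry",
}

def play_for_emotion(emotion: str) -> str:
    """Return a song path from the predefined directory for the emotion."""
    directory = EMOTION_DIRS.get(emotion, EMOTION_DIRS["neutral"])
    songs = [f for f in os.listdir(directory) if f.endswith(".mp3")]
    return os.path.join(directory, random.choice(songs))
```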
II. LITERATURE REVIEW
There are several applications that provide facilities and services for generating a music playlist or playing a particular song, but all of this involves manual work. Various techniques and approaches have been proposed and developed to classify the human emotional state, yet the proposed approaches focus on only some of the basic emotions, using complex techniques such as Viola-Jones. Several research papers outlining the idea are summarized below:
In this paper, the authors state that people nowadays tend to have increasingly more stress due to the bad economy, high living expenses, and so on. Listening to music is a key activity that helps reduce stress. However, it may be unhelpful if the music does not suit the listener's current emotion, and no existing music player is able to select songs based on the user's emotion. To solve this problem, the paper proposes an emotion-based music player that suggests songs based on the user's emotion: sad, happy, neutral, or angry. The application receives either the user's heart rate or facial image from a smart band or mobile camera, and then uses a classification method to identify the user's emotion. The paper presents two types of classification methods, heart-rate-based and facial-image-based. The application then returns songs whose mood matches the user's emotion. The experimental results show that the proposed approach can classify the happy emotion accurately because the heart rate range of this emotion is wide.
The authors note that digital audio is easy to record, play, process, and manage. Its ubiquity means that devices for handling it are cheap, letting more people record and play music and speech, and the web has improved access to recorded audio, so the amount of recorded music that people own has increased rapidly. Most current audio players compress audio files and store them in internal memory. Because storage costs have consistently declined, the amount of music that can be stored has grown rapidly: a player with 16 GB of memory can hold approximately 3,200 songs if each song is stored in compressed format and occupies 5 MB. Effectively organizing such large volumes of music is difficult; people often listen repeatedly to a small number of favorite songs while others remain unjustifiably neglected. The authors developed Affection, an efficient system for managing music collections. Affection groups pieces of music that convey similar emotions and labels each group with a corresponding icon, letting listeners easily select music according to their emotions.
In this paper, a smart music system is designed that recognizes emotion from a voice speech signal. The objective of the speech emotion recognition (SER) system is to determine the emotional state of a human being from their voice. The study recognizes five emotions: anger, anxiety, boredom, happiness, and sadness. The important aspects of implementing this SER system include speech processing using the Berlin emotional database, extracting suitable features, and selecting appropriate pattern recognition or classifier methods to identify the emotional states. Once the emotion of the speech is recognized, the system automatically selects a piece of music as a cheer-up strategy from the stored database of song playlists. The results show that this SER system, implemented over five emotions, achieves an emotion classification performance of 76.31% using the GMM model and a better overall accuracy of 81.57% with the SVM model.
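For intuition, a minimal scikit-learn sketch of such a GMM-versus-SVM comparison is shown below. The random features stand in for the acoustic features extracted from the Berlin database, so the accuracies it prints are placeholders, not the reported results.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X = np.random.rand(500, 13)        # stand-in for MFCC-style speech features
y = np.random.randint(0, 5, 500)   # 5 emotions: anger, anxiety, boredom, happiness, sadness

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# SVM: one discriminative classifier over all five classes.
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("SVM accuracy:", svm.score(X_te, y_te))

# GMM: one generative model per class; predict by maximum log-likelihood.
gmms = {c: GaussianMixture(n_components=2, random_state=0).fit(X_tr[y_tr == c])
        for c in np.unique(y_tr)}
pred = [max(gmms, key=lambda c: gmms[c].score(x.reshape(1, -1))) for x in X_te]
print("GMM accuracy:", np.mean(np.array(pred) == y_te))
```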
III. METHODOLOGY
We propose an emotion-based music player that plays songs according to the emotion of the user. It aims to provide user-preferred music with emotional awareness, and it is based on the idea of automating much of the interaction between the music player and its user.
The Emotion-Based Music Player is installed on a personal computer, where the user can access their customized playlists and play songs based on their emotions.
It is a useful application for music listeners with a PC and an internet connection, and it is accessible to anyone who creates a profile on the system.
We have provided our users with the additional option of using emojis to generate the playlist. Whenever the user does not wish to or is unable to take a snapshot of their mood (for example, because of extremely high or low lighting, a malfunctioning camera, or a low-resolution camera that cannot capture a picture of the face clear enough to detect the mood), the user can click the "Use Emoji" button and select the emoji that represents their current mood, or the mood for which they want the playlist generated, as sketched below.
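A minimal sketch of this fallback, assuming Python; the emoji set and labels here are illustrative assumptions rather than the app's exact ones.

```python
# Illustrative mapping from the "Use Emoji" fallback to an emotion label.
EMOJI_TO_EMOTION = {
    "😊": "happy",
    "😢": "sad",
    "😠": "angry",
    "😐": "neutral",
}

def emotion_from_emoji(emoji: str) -> str:
    # Fall back to "neutral" if the selected emoji is not recognized.
    return EMOJI_TO_EMOTION.get(emoji, "neutral")
```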
This flow chart provides an overview of the application and briefly explains its functionality.
Login/Sign-up Stage: Users have to create a profile to store personal data. If the user already has an account, they can log in to access their customized playlists and songs. Once a user logs in, their profile remains active in the application database until they manually log out.
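A minimal sketch of this stage, assuming Python with sqlite3; the schema and the unsalted hash are simplifications for illustration, not our production scheme.

```python
import hashlib
import sqlite3

conn = sqlite3.connect("emo_player.db")
conn.execute("""CREATE TABLE IF NOT EXISTS users
                (username TEXT PRIMARY KEY, password_hash TEXT)""")

def sign_up(username: str, password: str) -> None:
    # Unsalted SHA-256 for brevity only; a real deployment should salt.
    pw_hash = hashlib.sha256(password.encode()).hexdigest()
    conn.execute("INSERT INTO users VALUES (?, ?)", (username, pw_hash))
    conn.commit()

def log_in(username: str, password: str) -> bool:
    pw_hash = hashlib.sha256(password.encode()).hexdigest()
    row = conn.execute("SELECT 1 FROM users WHERE username=? AND password_hash=?",
                       (username, pw_hash)).fetchone()
    return row is not None
```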
Emotion Capture Stage: Once the authentication phase is complete, the application asks the user's permission to access media and photos and uses the web camera to capture the user's image.
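A minimal capture sketch, assuming Python with OpenCV (cv2) and the default webcam at device index 0:

```python
import cv2

def capture_user_image(path: str = "capture.jpg") -> str:
    cam = cv2.VideoCapture(0)      # open the default webcam
    ok, frame = cam.read()         # grab a single frame
    cam.release()
    if not ok:
        raise RuntimeError("Webcam capture failed; fall back to the emoji option.")
    cv2.imwrite(path, frame)       # persist the image for the API stage
    return path
```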
API Stage: After the image is captured, the application sends the captured image to the SDK, where it is processed; the image feedback is then sent back to the application.
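Since the paper does not name the SDK, the following sketch uses a hypothetical HTTP endpoint and response schema only to show the shape of this stage.

```python
import requests

def get_emotion_scores(image_path: str) -> dict:
    with open(image_path, "rb") as f:
        resp = requests.post(
            "https://api.example.com/emotion",   # hypothetical SDK endpoint
            files={"image": f},
            timeout=10,
        )
    resp.raise_for_status()
    # Assumed response shape: {"happy": 0.81, "sad": 0.05, ...}
    return resp.json()
```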
Recognition Stage: In this stage, the application receives the image information and recognizes the emotion based on a defined threshold. This emotion is sent to the database to fetch the corresponding playlist.
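A sketch of this stage; the 0.5 threshold, the neutral fallback, and the songs table schema are assumptions made for illustration.

```python
import sqlite3

THRESHOLD = 0.5  # assumed cut-off; the paper does not specify its value

def recognize_emotion(scores: dict) -> str:
    emotion, confidence = max(scores.items(), key=lambda kv: kv[1])
    # Fall back to "neutral" when no score clears the defined threshold.
    return emotion if confidence >= THRESHOLD else "neutral"

def fetch_playlist(conn: sqlite3.Connection, emotion: str) -> list:
    rows = conn.execute(
        "SELECT title FROM songs WHERE emotion = ?", (emotion,)
    ).fetchall()
    return [title for (title,) in rows]
```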
Display Stage: Here, the songs are ordered by the EMO algorithm, and the user can play any song from the displayed list. The user can add, remove, or modify songs, and can change a song's category and interest level at any time in the application. The application also has a recommendation tab where the system notifies the user of songs that are rarely played.
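The EMO algorithm itself is not specified here, so the sketch below simply orders songs by the user-set interest level and surfaces rarely played songs for the recommendation tab; both the ranking key and the play-count cut-off are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Song:
    title: str
    interest_level: int   # user-assigned; higher means preferred
    play_count: int

def order_playlist(songs: list[Song]) -> list[Song]:
    # Rank by interest level as a stand-in for the unspecified EMO ordering.
    return sorted(songs, key=lambda s: s.interest_level, reverse=True)

def rarely_played(songs: list[Song], max_plays: int = 2) -> list[Song]:
    # Candidates for the recommendation tab.
    return [s for s in songs if s.play_count <= max_plays]
```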
IV. RESULT DISCUSSIONS
Since every person has unique facial features, it is difficult to detect human emotion or mood with complete accuracy, but with clear facial expressions it can be detected to a reasonable extent. The camera of the device should have a high resolution. The application runs successfully and meets the expected outcome as precisely as possible.
For example, for the "angry", "fear", "disgust", and "surprise" moods, devotional, motivational, and patriotic songs are suggested to the user. Hence, the user is also provided with mood improvement.
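This mapping can be represented as a simple lookup table; the genre lists follow the example given above, and the neutral fallback is an assumption.

```python
# Lookup table for the mood-improvement suggestion described above.
MOOD_IMPROVEMENT = {
    "angry":    ["devotional", "motivational", "patriotic"],
    "fear":     ["devotional", "motivational", "patriotic"],
    "disgust":  ["devotional", "motivational", "patriotic"],
    "surprise": ["devotional", "motivational", "patriotic"],
}

def suggested_genres(emotion: str) -> list:
    # Emotions outside the table keep their regular emotion playlist.
    return MOOD_IMPROVEMENT.get(emotion, [emotion])
```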
Instructions explained to the user: in this scenario, users were given instructions on how to perform the prediction of the expressed emotion, which produced the following results. In cases where the inner emotion was sad but the facial expression was happy, the prediction sometimes failed.
ACKNOWLEDGMENT
We, the authors, would like to extend a special vote of thanks to the reviewers of this paper for their valuable suggestions to improve it. This work was supported by the Acropolis Institute of Technology and Research, Indore (M.P.).
V. CONCLUSION
The Emotion-Based Music Player is used to automate and improve the music player experience for the end user. The application meets the basic needs of music listeners without troubling them as existing applications do, and it increases the system's interaction with the user in many ways. It eases the work of the end user by capturing their image with a camera, determining their emotion, and suggesting a customized playlist through a more advanced and interactive system. The user will also be notified of songs that are rarely played, to help them free up storage space.
Our main aim is to save users' time and to satisfy them. We have designed this application so that it can run on mobile (Android) as well as desktop (Windows).
REFERENCES
[1] Hafeez Kabani, Sharik Khan, Omar Khan, and Shabana Tadvi, "Emotion Based Music Player," International Journal of Engineering Research and General Science, vol. 3, no. 1, January–February 2015.
[2] Shlok Gilda, Husain Zafar, Chintan Soni, and Kshitija Waghurdekar, "Smart Music Player Integrating Facial Emotion Recognition and Music Mood Recommendation," 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), 2017.
[3] Srushti Sawant, Shraddha Patil, and Shubhangi Biradar, "Emotion Based Music System," International Journal of Innovations & Advancement in Computer Science (IJIACS), ISSN 2347-8616, vol. 7, no. 3, March 2018.