This project detects human emotions in real time from facial expressions and recommends music based on the detected mood. The user can choose to play the recommended music either in the web browser or on a selected Spotify device.

## Features

- Real-time emotion detection using OpenCV and dlib
- Music recommendation based on detected emotions
- Playback choice between web browser and selected Spotify device
- 5-minute interval between mood checks to avoid continuous re-detection

## Technologies

- OpenCV
- dlib
- numpy
- TensorFlow/Keras
- Spotipy (Spotify API)
- python-dotenv
- webbrowser

## Prerequisites

- Python 3.x
- Spotify Developer Account
- Spotify API credentials (Client ID, Client Secret)

## Installation

- Clone the repository:

  ```shell
  git clone https://github.com/debjit-mandal/MoodMelody.git
  cd MoodMelody
  ```
- Create and activate a virtual environment:

  ```shell
  python -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  ```
- Install the dependencies:

  ```shell
  pip install -r requirements.txt
  ```
- Set up environment variables:

  Create a `.env` file in the project root directory with the following content:

  ```
  SPOTIFY_CLIENT_ID=your_spotify_client_id
  SPOTIFY_CLIENT_SECRET=your_spotify_client_secret
  SPOTIFY_REDIRECT_URI=http://localhost:8888/callback
  ```

  Replace `your_spotify_client_id` and `your_spotify_client_secret` with your actual Spotify API credentials.
- Run the project:

  ```shell
  python src/main.py
  ```
- Choose a device for playback:

  The script lists the available Spotify devices. Choose one for playback, or default to playback in the web browser.
- Webcam detection and music recommendation:

  The webcam detects your emotion and music is recommended accordingly. A new song is played only when a new emotion is detected, and mood changes are checked at most once every 5 minutes to avoid continuous detection.
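The credential and device-selection steps above can be sketched in Python. This is an illustrative sketch, not the project's actual code: the `choose_device` helper and `main` function are hypothetical names, and the Spotipy calls assume `spotipy` and `python-dotenv` are installed.

```python
import os
import webbrowser


def choose_device(devices, choice):
    """Map a user's 1-based menu choice to a Spotify device id.

    `devices` follows the shape of sp.devices()["devices"];
    a choice of 0 (or anything out of range) means browser playback.
    """
    if 1 <= choice <= len(devices):
        return devices[choice - 1]["id"]
    return None  # fall back to the web browser


def main():
    # Third-party packages (pip install spotipy python-dotenv);
    # imported lazily so the pure helper above has no dependencies.
    import spotipy
    from dotenv import load_dotenv
    from spotipy.oauth2 import SpotifyOAuth

    load_dotenv()  # reads the SPOTIFY_* variables from .env
    sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
        client_id=os.environ["SPOTIFY_CLIENT_ID"],
        client_secret=os.environ["SPOTIFY_CLIENT_SECRET"],
        redirect_uri=os.environ["SPOTIFY_REDIRECT_URI"],
        scope="user-read-playback-state user-modify-playback-state",
    ))
    devices = sp.devices()["devices"]
    for i, d in enumerate(devices, start=1):
        print(f"{i}: {d['name']} ({d['type']})")
    device_id = choose_device(devices, int(input("Device (0 for browser): ")))
    if device_id is None:
        webbrowser.open("https://open.spotify.com")
```

Keeping `choose_device` free of network calls makes the selection logic easy to test without Spotify credentials.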
## Project Structure

```
MoodMelody/
│
├── src/
│   ├── face_detection.py
│   ├── emotion_recognition.py
│   ├── music_recommendation.py
│   ├── mood_tracking.py
│   ├── main.py
│   └── train_emotion_recognition_model.py     # Training script
│
├── models/
│   └── emotion_model.h5                       # Trained emotion recognition model
│
├── data/
│   ├── shape_predictor_68_face_landmarks.dat  # dlib face landmark model
│   └── fer2013.csv                            # FER2013 dataset file for training
│
├── requirements.txt
├── .gitignore
├── .env
└── README.md
```
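As a sketch of the cooldown described in the usage notes (a new song only on a new emotion, at most once every 5 minutes), `mood_tracking.py` might implement something like the following. The `MoodTracker` class and `should_play` method are illustrative names, not the project's actual API:

```python
import time

COOLDOWN_SECONDS = 5 * 60  # 5-minute interval between mood changes


class MoodTracker:
    """Decide when a newly detected emotion should trigger a new song."""

    def __init__(self, cooldown=COOLDOWN_SECONDS, clock=time.monotonic):
        self.cooldown = cooldown
        self.clock = clock  # injectable for testing
        self.current_emotion = None
        self.last_change = None

    def should_play(self, emotion):
        """Return True if `emotion` is new and the cooldown has elapsed."""
        now = self.clock()
        if emotion == self.current_emotion:
            return False  # same mood: keep the current song
        if self.last_change is not None and now - self.last_change < self.cooldown:
            return False  # a different mood, but detected too soon
        self.current_emotion = emotion
        self.last_change = now
        return True
```

Injecting the clock keeps the timing logic deterministic under test, while production code simply uses the default `time.monotonic`.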
## Contributing

Contributions are welcome! Please fork this repository and submit pull requests.
## License

This project is licensed under the MIT License - see the LICENSE file for details.