Musify is an innovative web application that combines the power of deep learning and image processing technologies to detect facial emotions and provide personalized music recommendations. Whether you're feeling happy, sad, excited, or calm, Musify delivers a seamless and immersive experience, bringing together the worlds of facial emotion detection and music.
Live Demo | Jupyter Notebook
- Emotion Detection: Initially used a custom-built CNN model for facial emotion detection, and now leverages a pre-trained facial expression recognition model for even better results.
- Personalized Music Recommendations: Integrates with the Spotify API to curate customized music playlists based on users' facial expressions.
- Face Detection: Employs face-api.js for face detection, ensuring a seamless user experience.
- Accuracy and loss
- Confusion matrix
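The emotion-to-playlist flow described above can be sketched in plain JavaScript. This is a hypothetical mapping, not the repository's actual logic: the genre choices are illustrative, and the `expressions` shape (a label-to-probability object) and the seven emotion labels follow face-api.js's expression classes.

```javascript
// Hypothetical emotion → Spotify seed-genre mapping (illustrative, not Musify's actual table).
// Labels match face-api.js's seven expression classes.
const EMOTION_GENRES = {
  happy: ["pop", "dance"],
  sad: ["acoustic", "piano"],
  angry: ["metal", "rock"],
  surprised: ["edm", "electronic"],
  fearful: ["ambient", "chill"],
  disgusted: ["punk", "grunge"],
  neutral: ["indie", "lo-fi"],
};

// Pick the highest-probability label from a face-api.js-style
// expressions object, e.g. { happy: 0.91, sad: 0.02, neutral: 0.07 }.
function dominantEmotion(expressions) {
  return Object.entries(expressions).reduce((best, cur) =>
    cur[1] > best[1] ? cur : best
  )[0];
}

// Resolve the seed genres to query the Spotify API with.
function genresFor(expressions) {
  return EMOTION_GENRES[dominantEmotion(expressions)] ?? ["pop"];
}
```

For example, `genresFor({ happy: 0.91, sad: 0.02, neutral: 0.07 })` resolves the dominant emotion to `happy` and returns its seed genres.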
- React: JavaScript library for building user interfaces.
- TensorFlow: Provides a comprehensive ecosystem for building and deploying machine learning and deep learning models.
- TensorFlow.js: JavaScript library that allows running TensorFlow models directly in the web browser or in Node.js.
- Sass: CSS extension language that provides more advanced features and capabilities.
- Fork the repo
- Clone the repo to your local machine
```shell
git clone https://github.com/codedmachine111/musify.git
```
- Change current directory
```shell
cd musify
```
- Install the latest version of Node.js, then install all the dependencies using:
```shell
npm install
```
- To use Spotify in the web app, create a `.env` file in the root directory of the project and add:
```
VITE_SPOTIFY_CLIENT_ID = "YOUR-SPOTIFY-CLIENT-ID"
VITE_SPOTIFY_CLIENT_SECRET = "YOUR-SPOTIFY-CLIENT-SECRET"
VITE_APP_URL = "VITE-APP-URL-AFTER-HOSTING"
```
- Run the development server:
```shell
npm run dev
```
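The Spotify credentials configured above are typically exchanged for an access token. A minimal sketch of that step, assuming the standard client-credentials flow from Spotify's Web API documentation (in a Vite app these values are read via `import.meta.env`; note that shipping a client secret in browser code is unsafe for production, so a server-side exchange is preferable):

```javascript
// Build the "Basic base64(client_id:client_secret)" header Spotify's
// token endpoint expects for the client-credentials grant.
function basicAuthHeader(clientId, clientSecret) {
  const encoded = Buffer.from(`${clientId}:${clientSecret}`).toString("base64");
  return `Basic ${encoded}`;
}

// Request an access token (endpoint and grant type per Spotify's Web API docs).
// Error handling is minimal; this is a sketch, not Musify's actual code.
async function fetchAccessToken(clientId, clientSecret) {
  const res = await fetch("https://accounts.spotify.com/api/token", {
    method: "POST",
    headers: {
      Authorization: basicAuthHeader(clientId, clientSecret),
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: "grant_type=client_credentials",
  });
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  return (await res.json()).access_token;
}
```

The returned token is then sent as a `Bearer` header on subsequent Spotify API calls, such as fetching playlists for the detected emotion.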
Contributions are welcome! If you have any suggestions, improvements, or bug fixes, please submit a pull request or open an issue on the GitHub repository.
This project is licensed under the MIT License.