# EmotionNet: Multi-Stream Emotion Analyser

Welcome to the EmotionNet project repository. This project focuses on emotion recognition using a multi-modal approach, leveraging both video and audio data.
## Project Structure

- `src/`: Source code for the project.
- `notebooks/`: Jupyter notebooks used for experimentation and data analysis.
- `data/`: Placeholder for the dataset (not included; see below).
- `models/`: Placeholder for the trained models (not included; see below).

## Dataset & Models

Due to GitHub's size constraints, the dataset and the trained models are not included in this repository. If you need access to them, please contact the repository owner.
## Getting Started

### 1. Clone the repository

```bash
git clone https://github.com/ab-uol/EmotionNet.git
cd EmotionNet
```

### 2. Set up the environment

It is recommended to use a virtual environment. Install the required packages with `pip install -r requirements.txt`.

### 3. Run the code
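The environment setup above can be sketched as the following commands (a minimal sketch assuming a Unix-like shell with Python 3 on the PATH; on Windows, activate with `.venv\Scripts\activate` instead):

```shell
# Create an isolated virtual environment in the project root
python3 -m venv .venv

# Activate it for the current shell session
source .venv/bin/activate

# Install the pinned dependencies (guarded so the snippet also runs outside a checkout)
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
```

Using a dedicated virtual environment keeps the project's dependencies from conflicting with other Python projects on your machine.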
Navigate to the `src/` directory and run the desired scripts or notebooks.

## Contribution & Feedback

If you have any suggestions or feedback, please open an issue or submit a pull request. We appreciate your contributions!