# Toxic-comment-classification

A classification task aimed at detecting toxic comments on social media using machine learning and deep learning. This case study is based on the Kaggle competition: https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification

Detailed blog post: https://medium.com/@vanpariyavishal02/toxicity-of-the-comment-by-jigsaw-1faea0e716b3

## Contents

| S.No | Section | Jupyter Notebook |
|------|---------|------------------|
| 1 | EDA | jigsaw-eda.ipynb |
| 2 | Baseline model | Jigsaw-baseline.ipynb |
| 3 | Machine learning models | ML_model.ipynb |
| 4 | Model 1: two LSTM layers and one attention layer | bi-lstm-attention.ipynb |
| 5 | Model 2: stacked LSTM with FastText and Word2Vec embeddings | Stacked-lstm-fasttext-glove.ipynb |
| 6 | Model 3: two LSTM layers and two attention layers | bi-lstm-bi-attention.ipynb |
| 7 | Feature extraction for BERT | BERT_FE.ipynb |
| 8 | BERT training | BERT.ipynb |
| 9 | End-to-end pipeline class | final_function.ipynb |
| 10 | Deployment code on GCP | Deployment_code.ipynb |
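
To give a flavor of the baseline approach, below is a minimal sketch of a classical ML toxic-comment classifier (TF-IDF features plus logistic regression). This is an illustrative assumption about the kind of model in the baseline/ML notebooks, not the exact code from them; the toy comments and labels here stand in for the competition's `comment_text` column and binarized toxicity target.

```python
# Minimal baseline sketch: TF-IDF + logistic regression.
# Toy data below is illustrative; the notebooks train on the
# Kaggle jigsaw-unintended-bias dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "you are wonderful",
    "you are an idiot",
    "have a great day",
    "shut up, loser",
]
labels = [0, 1, 0, 1]  # 1 = toxic, 0 = non-toxic

# Word unigrams and bigrams feed a linear classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
clf.fit(comments, labels)

# Predict toxicity for an unseen comment (0 or 1).
pred = clf.predict(["what a lovely idea"])[0]
```

The deep-learning notebooks replace the TF-IDF features with pretrained embeddings (FastText, GloVe, BERT) and the linear model with LSTM/attention networks, but the train-then-predict structure is the same.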