Soft Spoken

By Adam Snaider, Mathew Li, Shivan Vipani, and Michael Wono

About

This project was created at SB Hacks IV.

It is a machine learning model that detects the level of toxicity in online comments. The idea and the dataset came from this Kaggle competition.

Usage

To run the training step, you must first clean the data and create the word embeddings. To do this, run the Python script vocab.py; it will generate all the data the later steps need.
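
For reference, the snippet below is a minimal sketch of the kind of cleaning and vocabulary-building this step performs. The file name train.csv, the column name comment_text, and every function here are illustrative assumptions, not code taken from vocab.py.

```python
# Hypothetical sketch of the preprocessing stage: clean raw comments and
# map the most frequent words to integer ids. Names are assumptions only.
import csv
import re
from collections import Counter

def clean(text):
    """Lowercase a comment and keep only letters, splitting into tokens."""
    return re.sub(r"[^a-z ]+", " ", text.lower()).split()

def build_vocab(csv_path, max_words=20000):
    """Count words over all comments and keep the most frequent ones."""
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts.update(clean(row["comment_text"]))  # assumed column name
    # Reserve id 0 for padding and id 1 for out-of-vocabulary words.
    vocab = {w: i + 2 for i, (w, _) in enumerate(counts.most_common(max_words))}
    vocab["<pad>"], vocab["<unk>"] = 0, 1
    return vocab

if __name__ == "__main__":
    vocab = build_vocab("train.csv")  # assumed name of the Kaggle training file
    print(f"Vocabulary size: {len(vocab)}")
```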

After all the data has been processed, it's time to train the model by running the train.py module. The model is built with TensorFlow, so make sure TensorFlow is installed.
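
As a rough illustration, a TensorFlow/Keras model for this kind of multi-label problem could look like the sketch below. The architecture, sizes, and file names are assumptions made for the example; the actual model in train.py may be quite different.

```python
# Minimal TensorFlow/Keras sketch of a multi-label toxicity classifier.
# Architecture, hyperparameters, and paths are illustrative assumptions.
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 20002  # assumed, matching the vocabulary sketch above
MAX_LEN = 200       # assumed maximum comment length in tokens
NUM_LABELS = 6      # assumed; the Kaggle toxic-comment data has six label columns

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    # Sigmoid (not softmax) because a comment can carry several labels at once.
    tf.keras.layers.Dense(NUM_LABELS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder random data; in the real pipeline these come from vocab.py's output.
x = np.random.randint(0, VOCAB_SIZE, size=(128, MAX_LEN))
y = np.random.randint(0, 2, size=(128, NUM_LABELS)).astype("float32")
model.fit(x, y, epochs=1, batch_size=32)
model.save("toxicity_model.keras")  # assumed output path
```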

After train.py finishes its run, you can run the server.py module and navigate to http://127.0.0.1:9876/. From there, just write a comment in the text box and click Submit to see the results in the chart.
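
The sketch below shows the general shape of such a prediction endpoint. Flask is used here purely for illustration (server.py may be built on a different framework), and the model path, tokenizer, and label names are assumptions as well.

```python
# Illustrative prediction server on the address/port mentioned above.
# Framework choice (Flask), model path, and label names are assumptions.
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)
model = tf.keras.models.load_model("toxicity_model.keras")  # assumed path
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def encode(comment):
    """Stand-in tokenizer; the real code would reuse the vocabulary from vocab.py."""
    return np.zeros((1, 200), dtype="int32")

@app.route("/predict", methods=["POST"])
def predict():
    comment = request.form.get("comment", "")
    scores = model.predict(encode(comment))[0]
    # Return one probability per toxicity label for the chart to display.
    return jsonify(dict(zip(LABELS, scores.round(3).tolist())))

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=9876)
```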
