Datasets for Deep Learning Personas
TL;DR: These are the datasets we've used in our fun AI side project over at https://personas.huggingface.co/
We've trained seq2seq models using DeepQA, a TensorFlow implementation of "A Neural Conversational Model" (a.k.a. the Google paper), which describes a deep-learning-based chatbot.
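To give a feel for what gets trained on these dialogue datasets, here is a minimal sketch of the encoder-decoder seq2seq idea from the Google paper, written with tf.keras. It is not DeepQA's actual code; the vocabulary size, hidden size, and dummy training pairs are placeholders.

```python
# Minimal encoder-decoder seq2seq sketch (not DeepQA's code).
# VOCAB_SIZE, HIDDEN_DIM, and the random data below are illustrative only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 8000   # assumed vocabulary size
EMBED_DIM = 64
HIDDEN_DIM = 256

# Encoder: embed the input utterance and keep the final LSTM state.
enc_inputs = layers.Input(shape=(None,), dtype="int32")
enc_embed = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(enc_inputs)
_, state_h, state_c = layers.LSTM(HIDDEN_DIM, return_state=True)(enc_embed)

# Decoder: generate the reply conditioned on the encoder state
# (teacher forcing during training).
dec_inputs = layers.Input(shape=(None,), dtype="int32")
dec_embed = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(dec_inputs)
dec_outputs = layers.LSTM(HIDDEN_DIM, return_sequences=True)(
    dec_embed, initial_state=[state_h, state_c])
logits = layers.Dense(VOCAB_SIZE)(dec_outputs)

model = Model([enc_inputs, dec_inputs], logits)
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Dummy integer-encoded (question, reply) pairs just to show the shapes;
# in practice these come from the corpora listed below.
questions = np.random.randint(1, VOCAB_SIZE, size=(32, 20))
replies = np.random.randint(1, VOCAB_SIZE, size=(32, 20))
model.fit([questions, replies[:, :-1]], replies[:, 1:], epochs=1)
```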
- Cornell Movie-Dialogs Corpus
- Supreme Court Conversation Data
- Ubuntu Dialogue Corpus, for tech-support-style discussions
- Stack Exchange Data Dump
This is an anonymized dump of all user-contributed content on the Stack Exchange network. Each site is formatted as a separate archive consisting of XML files zipped via 7-zip using bzip2 compression. Each site archive includes Posts, Users, Votes, Comments, PostHistory and PostLinks. For complete schema information, see the included readme.txt.
Attribution: CC BY-SA 3.0
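If you want to turn a site archive into conversational training pairs yourself, a rough sketch like the one below works off the extracted Posts.xml. The attribute names (PostTypeId, AcceptedAnswerId, Body) come from the dump's readme; the file path and the HTML stripping are just illustrative.

```python
# Rough sketch: pair each question in an extracted Posts.xml with its
# accepted answer. Path and cleanup are illustrative placeholders.
import re
import xml.etree.ElementTree as ET

def strip_html(text):
    """Very naive HTML-to-text cleanup; swap in a real parser if needed."""
    return re.sub(r"<[^>]+>", " ", text or "").strip()

def qa_pairs(posts_xml_path):
    questions = {}   # question Id -> (body, accepted answer Id)
    answers = {}     # answer Id -> body
    # Stream through the file instead of loading it all at once.
    for _, row in ET.iterparse(posts_xml_path):
        if row.tag != "row":
            continue
        post_type = row.get("PostTypeId")
        if post_type == "1" and row.get("AcceptedAnswerId"):
            questions[row.get("Id")] = (row.get("Body"),
                                        row.get("AcceptedAnswerId"))
        elif post_type == "2":
            answers[row.get("Id")] = row.get("Body")
        row.clear()  # free the element once processed
    for body, accepted_id in questions.values():
        if accepted_id in answers:
            yield strip_html(body), strip_html(answers[accepted_id])

# Example usage (path is hypothetical):
# for q, a in qa_pairs("stackexchange/askubuntu.com/Posts.xml"):
#     print(q[:60], "->", a[:60])
```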