pymongo.errors.OperationFailure: distinct too big, 16mb cap #686
I got around the 16mb cap by pulling up the mongo shell and entering db.collection.allowDiskUse="true", but then you hit issues such as this in mongo, after entering an input and while waiting for a response: 2017-05-15T21:30:24.661-0400 I COMMAND [conn40] warning: log line attempted (5147kB) over max size (10kB). I haven't figured out a workaround for that, but as of right now it's looking like that corpus is just wayyyy too large.
I tried it. Even when the MongoDB aggregation framework is used, MongoDB restricts a BSON document to no more than 16 MB, so it fails.
Running into this as well now. So it's simply not possible to train on the Ubuntu corpus? @vkosuri
@peterel I never tried; I'll give it a try by next week.
@PeterL, I tried training on the Ubuntu Corpus; in 16 hours, about 70% of the corpus was indexed.
@telkomops Then we are in the same situation. I had it running for around 15 hours and am now running into the 16MB issue. How did you proceed? I'm thinking of scrapping MongoDB altogether and trying either some SQL version or maybe some other "chatbot framework". The jsonAdapter is simply much too slow to use in my test runs.
IMO, ChatterBot would really benefit from a rewrite to integrate Keras/TensorFlow as the machine/deep learning backend. It's not a difficult implementation now that Keras has been rewritten by Google, complete with high-level Python APIs. It provides powerful, well-maintained machine learning models/algorithms that have numerous NLP/chatbot examples on the web, will handle training storage, provides a web UI to dig into the bot's neural network (training progress, etc.), and far more. I'd be very happy to contribute should the proposal be accepted, as I was thinking of migrating my project away from ChatterBot to Keras anyway, but I'd much rather make a good thing great here. Some useful links:
Opened a proposal under #761. "+1" if you're interested.
+1
@vkosuri Did you find the time to give this a try and reproduce the issue?
@peterel I trained my bot with the offline Ubuntu corpus https://arxiv.org/abs/1506.08909; instead of the above issue, I have seen file-not-found issues. I'll give it another try today. If possible, where exactly did you see the above error? I mean, on which .tsv file did you encounter it?
@vkosuri Hmm, not sure I follow. This issue is about using the Ubuntu corpus and training on it for a long time, only to get a "pymongo.errors.OperationFailure: distinct too big, 16mb cap" error. In my case, I trained with the Ubuntu corpus for a couple of hours and all was well; it gave "decent" answers. Then I left it overnight, and when I tried a new conversation I hit this 16mb cap error. Or is your comment saying that it matters which version of the Ubuntu corpus one trains with (the "built-in" one or the offline one you are referring to)?
@telkomops I gave up on the Ubuntu corpus as well. Using the built-in one and hoping users will "fill up" some proper responses. Are you going with Keras or TensorFlow instead? Or is there any other framework similar to ChatterBot you'd recommend?
@telkomops / @peterel, The Ubuntu corpus is a massive data set. I think it may have been a mistake for me to add documentation and training support for it. ChatterBot isn't ready to handle that much data, yet. I'm working to improve this, but the changes required to optimize these queries on large data sets are still a few releases away.
@gunthercox Many thanks for your efforts. ChatterBot is very cool :) I do agree, though, about the Ubuntu corpus. Since it won't "work" with ChatterBot at the moment, it's probably better to remove it from the docs, or at least make folks aware that it won't work right now. The alternative is, like for me, to spend days on training and then realize that it doesn't work. Again, thanks for your efforts!
Instead of removing it from ChatterBot, how about moving it to https://github.com/gunthercox/chatterbot-corpus? Any users who would like to use UbuntuCorpusTrainer could then use it via: from chatterbot_corpus.trainers import UbuntuCorpusTrainer
@vkosuri Hmm, if the Ubuntu Corpus can't be used with ChatterBot, I think it's better not to include it at all, or at least to show a "disclaimer". Otherwise you'll end up with folks spending days on training only to see it crash... which is not so good :)
The maximum BSON document size is 16 megabytes. There is a lot on this, and on workarounds, on Stack Overflow. Most recommend using GridFS, which here would mean switching from .distinct() to .aggregate() (as mentioned above in this thread too). This doesn't work yet, but it would probably look something like the aggregation examples here, applied in mongodb.py: http://api.mongodb.com/python/current/examples/aggregation.html
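The snippet that was meant to accompany this comment did not survive in the thread. As a rough, hedged reconstruction (the collection name `statements` and the field name `text` are assumptions for illustration, not taken from ChatterBot's actual mongodb.py), swapping a `distinct` call for an aggregation might look like:

```python
# Hedged sketch: build an aggregation pipeline equivalent to
# collection.distinct(field). Unlike the distinct command, whose entire
# result must fit in a single 16 MB BSON document, aggregate() returns a
# cursor, and allowDiskUse=True lets the $group stage spill to temporary
# files on disk.

def distinct_pipeline(field):
    """Pipeline that groups documents by one field, yielding each
    distinct value as the _id of a separate result document."""
    return [{"$group": {"_id": "$" + field}}]

# Usage with a real pymongo collection (names assumed):
#
#     from pymongo import MongoClient
#     statements = MongoClient().chatterbot.statements
#     texts = [doc["_id"] for doc in
#              statements.aggregate(distinct_pipeline("text"),
#                                   allowDiskUse=True)]
```

Because each distinct value comes back as its own small document in the cursor, no single result document approaches the 16 MB cap, though each individual value must of course still fit within it.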
Hi @gunthercox, I have trained my bot with the Ubuntu Dialog Corpus for 1 day, and after running it I am receiving this error. Thanks
When trying to train from and use the Ubuntu Dialog Corpus with the MongoDB Storage Adapter I'm hitting the following exception. The code is pretty much identical to the Ubuntu Corpus example in this repo.
I believe the issue is related to MongoDB not being able to handle strings over 160 characters (which there are in the Ubuntu Corpus). So this should either be resolved somehow, or support should be dropped, as it's currently "broken".