Hi, I just had a question regarding the parameters of the model: did you choose them empirically, or did you use an optimization technique to select them?
Thanks!!
Hi, this space is for issues rather than questions, but I can answer yours: no, the choice wasn't arbitrary. I reserved a small fraction of the training set as a validation set and monitored the cross-entropy on it during training (I also printed the validation answers to get a better feel for performance). The variable n_test in line 34 of train_bot.py defines the amount of data reserved for this validation process. It's now set to a small value because I have already found the best architecture of this model to use as a chatbot. For other uses, such as translation or text summarization, it's better to re-evaluate the architecture, i.e. word embedding dimension, thought vector dimension, number of neurons and layers, etc.
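The validation setup described above can be sketched roughly as follows. This is only an illustrative snippet, not the actual code from train_bot.py: the helper `split_train_validation` and the dummy data are hypothetical, and only the idea of reserving `n_test` samples for validation comes from the answer above.

```python
import numpy as np

def split_train_validation(pairs, n_test):
    """Reserve the last n_test samples as a validation set.

    Hypothetical helper mirroring the role of the n_test variable
    mentioned in train_bot.py; the function name and signature are
    illustrative, not taken from the repository.
    """
    train = pairs[:-n_test]
    validation = pairs[-n_test:]
    return train, validation

# Dummy stand-in for question-answer training pairs
pairs = np.arange(1000)

# Reserve a small slice for monitoring cross-entropy during training
train, validation = split_train_validation(pairs, n_test=50)
print(len(train), len(validation))  # 950 50
```

During hyperparameter search (embedding dimension, thought vector dimension, number of neurons and layers), the cross-entropy on the validation slice is what guides the choice; once the architecture is fixed, `n_test` can be kept small.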