make_sentence_that_contains #52
Comments
There's no built-in method to do this in markovify, but you could do something like:

    while True:
        sentence = text_model.make_sentence()
        if sentence and "computer" in sentence:
            break
    print(sentence)
Thanks, I will give that a try.
I'm sorry to re-open this feature request, but I've been trying the suggested while-loop approach, and with a 6 MB corpus it takes almost 30 seconds (or more) to generate a sentence containing the specific word. Thank you very much, and keep up the good work!
Hi @voltaxvoltax, and thanks for reminding me about this thread. Implementing a more efficient version of make_sentence_that_contains would require a chain that can also move backward through the text: assuming the word is in the corpus, you would start from it, generate the rest of the sentence forward, and generate the beginning of the sentence backward with a reversed model.
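For anyone who wants to experiment with that idea, here is a rough sketch, not part of markovify itself: reverse_corpus and make_sentence_that_contains are invented names for illustration, while markovify.Text and make_sentence_with_start are real library calls. It trains a second model on a word-reversed copy of the corpus and stitches a backward head and a forward tail together at the keyword:

    import markovify

    def reverse_corpus(text):
        # Reverse the word order of every line so a model trained on the
        # result effectively walks "backward" through the original sentences.
        return "\n".join(
            " ".join(reversed(line.split())) for line in text.splitlines()
        )

    def make_sentence_that_contains(corpus_text, word, state_size=2):
        forward = markovify.Text(corpus_text, state_size=state_size)
        backward = markovify.Text(reverse_corpus(corpus_text), state_size=state_size)
        try:
            # Tail: from the keyword to the end of the sentence.
            tail = forward.make_sentence_with_start(word, strict=False)
            # Head: from the keyword back to the start, generated in reverse.
            head_reversed = backward.make_sentence_with_start(word, strict=False)
        except Exception:
            # make_sentence_with_start raises if the word is unknown to the
            # model (KeyError or ParamError, depending on the version).
            return None
        if not (tail and head_reversed):
            return None
        head = " ".join(reversed(head_reversed.split()))
        # Both fragments contain the keyword once; drop it from the head.
        return (head.rsplit(word, 1)[0].strip() + " " + tail).strip()

Punctuation in the backward half will be rough, since sentence-ending marks end up at the start of the reversed lines, but it shows the general shape of the approach.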
Just to +1 interest in this: I have a project right now with this exact use case. I agree with your analysis that the only way to achieve this is a reverse Markov probability, but it would be highly useful. To describe my goal quickly, it would 'half-simulate' a conversation between different models: the first one generates a sentence, the second one generates a sentence containing some word from the first one, and so on. Thanks for the consideration. I'd be very tolerant of longer model processing time if a second, reverse model were processed at the same time.
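Just as a sketch of how that conversation loop could look on top of the brute-force sampling approach above (the helper names here are made up for illustration; only make_sentence is markovify's API):

    import random
    import markovify

    def sentence_containing(model, word, tries=200):
        # Brute-force approach from earlier in the thread: keep sampling
        # until a sentence contains the word, or give up after `tries`.
        for _ in range(tries):
            sentence = model.make_sentence()
            if sentence and word in sentence.split():
                return sentence
        return None

    def half_simulated_conversation(model_a, model_b, turns=6):
        models = [model_a, model_b]
        sentence = None
        while not sentence:  # make_sentence() can return None
            sentence = model_a.make_sentence()
        transcript = [sentence]
        for turn in range(1, turns):
            speaker = models[turn % 2]
            # Carry one word from the previous utterance into the reply.
            word = random.choice(sentence.split()).strip(".,!?")
            reply = sentence_containing(speaker, word) or speaker.make_sentence()
            if reply:
                transcript.append(reply)
                sentence = reply
        return transcript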
You could generate a sentence that begins with the keyword, then generate another random sentence, cut it on a verb (using NLTK), and append that piece at the beginning of the first sentence.
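A possible sketch of that idea, assuming NLTK's tokenizer and tagger are available (the helper name is made up; make_sentence_with_start and make_sentence are the real markovify calls):

    import nltk
    import markovify

    # Requires: nltk.download("punkt") and
    # nltk.download("averaged_perceptron_tagger")

    def keyword_sentence_with_prefix(model, keyword):
        try:
            # A sentence that begins with the keyword.
            second_half = model.make_sentence_with_start(keyword, strict=False)
        except Exception:  # raised if the keyword is unknown to the model
            return None
        first = model.make_sentence()
        if not (first and second_half):
            return None
        # Cut the random sentence at its first verb and prepend everything
        # up to (and including) that verb to the keyword sentence.
        tagged = nltk.pos_tag(nltk.word_tokenize(first))
        for i, (token, tag) in enumerate(tagged):
            if tag.startswith("VB"):
                prefix = " ".join(tok for tok, _ in tagged[:i + 1])
                return prefix + " " + second_half
        return second_half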
This works fine for me on a corpus of 14 MB.
It also helps to check the Markov chain to see how often that word appears. If it appears at a very low frequency, that may be why the model takes so long to generate a sentence containing it. There's nothing we can do about it in that case, other than choosing a different word or substantially increasing the size of the corpus.
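One quick way to check that is to look at the transition counts markovify stores on text_model.chain.model, a dict mapping state tuples to {next_word: count} dicts (this assumes the chain has not been compiled):

    def word_frequency(text_model, word):
        # Sum how often `word` appears as a possible next word anywhere in
        # the chain; a rough proxy for how often it occurs in the corpus.
        return sum(
            follows.get(word, 0)
            for follows in text_model.chain.model.values()
        )

    print(word_frequency(text_model, "computer"))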
I'm working on a naive approach in my fork. To build the reversed matrix, I reverse the sentences and then process them; probably not the best idea, but it looks like it's working. We will probably need another method in order to create that matrix. To make it work, I combine the two models. Support for sentences containing multiple words is still in progress and experimental right now. Any suggestions are welcome.
Is it possible to generate a Markov chain sentence that contains a key word?
I'm thinking of a chat bot situation. For example, if the word "computer" is used in the utterance, then this command would be called:
text_model.make_sentence_that_contains("computer")
If so, could you please either add this to markovify or show me how I could write the code for my personal use?
Many thanks.