Logic adapters not processing correctly #111
I believe the issue here is due to an overly aggressive confidence threshold on the closest match adapter. As you mentioned, line 11 in multi_adapter.py appears to be ignoring output from evaluate_mathematically, even when it properly generates a response with a confidence of 1. Logging the output of each adapter being used, I noticed that the closest match adapter's confidence value is being returned as an integer percentage (above 1). Dividing by 100 appears to correct the issue by bringing it back down to a representation between 0 and 1. I am currently writing up tests and I will create a pull request shortly.
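For context, this is the kind of normalization I mean; the function name and the percentage assumption are mine, a minimal sketch rather than the actual change in the PR:

```python
def normalize_confidence(raw_confidence):
    """Map a confidence reported as an integer percentage (e.g. 87)
    back onto the 0 to 1 scale the multi adapter compares against."""
    if raw_confidence > 1:
        return raw_confidence / 100.0
    return raw_confidence

# A closest match score of 87 (percent) becomes 0.87, while an adapter
# that already reports a value between 0 and 1 is left untouched.
print(normalize_confidence(87))  # 0.87
print(normalize_confidence(1))   # 1
```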
As stated in the PR, that issue has been fixed. The only remaining issue in the current logic adapter implementation is that both the adapter's response and the user's prompt are saved to the database. This could allow the bot to give some odd responses to a prompt the user gives. I think the solution is pretty simple, though: we return an additional variable which tells ChatterBot whether to save the response to the database. If that variable is true, ChatterBot saves the prompt and the response to the database; otherwise, it doesn't. Ideas?
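To make the idea concrete, here is a rough sketch of the return value I have in mind; the class, the three-element return, and the `handle` helper are hypothetical illustrations, not the current ChatterBot API:

```python
from datetime import datetime

class TimeLogicAdapter(object):
    """Hypothetical adapter that reports whether its output should be learned."""

    def process(self, statement):
        response = "The current time is " + datetime.now().strftime("%I:%M %p")
        confidence = 1
        save_response = False  # generated on the fly, so don't store it as learned data
        return confidence, response, save_response

def handle(adapter, statement, database):
    confidence, response, save_response = adapter.process(statement)
    if save_response:
        # Only persist the exchange when the adapter says it is safe to learn from.
        database.append((statement, response))
    return response

print(handle(TimeLogicAdapter(), "What time is it?", []))
```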
Some general thoughts on adding a variable to determine whether a given statement should be saved to the database when returned from a logic adapter: my initial thought is that whatever a logic adapter returns should always be saved (I am open to alternate suggestions). My reasoning is that the response a logic adapter returns is part of a conversation, and in a conversation each statement and the order in which the statements occur are important characteristics. Although nothing implemented at the moment takes the context of the current conversation into account, it is my hope that there will be in the future. Of the statements in a bot's database, the most valuable are those for which the bot has learned which other statement(s) the given statement is in response to, because that data makes it possible to select a match to a user's input that is known to yield a response. So if a user asks the bot a question, let's say something like "What time is it?", and the bot replies "The current time is 10:59pm", we can only assume that whatever the user says next is a response to the bot's output. I would say that reasonably warrants maintaining the integrity of this data relationship by saving it to the database.
I agree with you with respect to the user's response being a response to the bot's output. The only issue with the current system is that the bot's response is used as a response to a user's question when it should not be. For example, if I ask the bot what the current time is, it does exactly what it should and responds with the current time. However, later on in the chat, maybe 10 minutes later, it will display that same time stamp as a response to something I say. This is not correct. @gunthercox Do you see what I mean?
Hmm, that's strange. This might actually be an issue with the response selection algorithm. Technically that should only happen if there are very few possible responses in the database and it is choosing one at random, or if the user's input is very close to a question about the current time. I will write up a few test cases to see if I can consistently recreate this. It may be possible to refine the response selection process without blocking previous responses from being stored.
@gunthercox Can I close this?
I believe this should be safe to close. |
Although it looks fine on the surface, there are issues with the current method for using multiple logic adapters. This can easily be seen when you run ChatterBot and type "what time is it?". Sometimes I will get the current time, while other times I will get "what time is it?" back, which is displayed because there is no other statement in the database. In addition, most of the time when I ask that question later in the conversation, I do not get the actual time but a response from the database.
After debugging the error for a fair amount of time, I found where the problem is coming from. The current way responses are prioritized makes getting the correct logic adapter to respond very difficult. I am not sure why this is occurring, but the problem is in https://github.com/gunthercox/ChatterBot/blob/master/chatterbot/adapters/logic/multi_adapter.py#L11. You can test this by adding a print statement in evaluate_mathematically inside the try block (located at https://github.com/gunthercox/ChatterBot/blob/master/chatterbot/adapters/logic/evaluate_mathematically.py#L31) that prints the expression after it is assigned the evaluated input. If you then type in something like "what is 100 * 10?", you will see that evaluate_mathematically properly generates a response and returns a confidence of 1, yet the result is still not used.
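If anyone else wants to reproduce this without editing the library files, the same information can be gathered by wrapping whatever each adapter returns and printing it; the two-value return shown here is an assumption about the adapter interface, not an exact signature:

```python
def trace_adapter(name, process):
    """Wrap an adapter's process function so its confidence and response are printed."""
    def wrapped(statement):
        confidence, response = process(statement)
        print("%s -> confidence=%s, response=%r" % (name, confidence, response))
        return confidence, response
    return wrapped

# Example with a dummy adapter standing in for EvaluateMathematically:
def fake_math_adapter(statement):
    return 1, "100 * 10 = 1000"

traced = trace_adapter("EvaluateMathematically", fake_math_adapter)
traced("what is 100 * 10?")
```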
Further, the database responses generated from the evaluate_mathematically and time functions are incorrect. This might be unrelated, or the program might be checking the database before checking the other adapters, and when it gets a confidence of 1 for an equal statement it returns that statement instead of replacing it with the result generated by the other logic adapters. This is an issue particularly for logic adapters that would have been considered plugins, which do not want their results saved to the database.
To test, the order of my logic adapters is as follows:
" logic_adapters=[
"chatterbot.adapters.logic.EvaluateMathematically",
"chatterbot.adapters.logic.TimeLogicAdapter",
"chatterbot.adapters.logic.ClosestMatchAdapter"
],"
@gunthercox Any ideas on how to fix this? I have come up with two separate solutions so far:
1. Re-implement plugin support. By classifying these adapters differently, we can easily fix the database issue as well as give higher priority to getting a result from a plugin, which would fix the issue regarding incorrect ordering.
2. Leave the system as-is, but change the way adapters are processed. This would make the code less readable, but it may be a much better solution than option 1. It would require adding a flag to tell ChatterBot whether to save the result and the statement to the database. In addition, it would require a new way to process and assign confidence values for logic adapters. I would argue that we remove the tie breaker and use the following logic instead (the for loop stays the same):
a) At https://github.com/gunthercox/ChatterBot/blob/master/chatterbot/adapters/logic/multi_adapter.py#L22, add an additional if statement above it which returns the result immediately when the confidence is 1.
b) Otherwise, fall back to the current behavior and keep the result with the highest confidence (see the sketch after this list).
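Here is a minimal sketch of what that would look like; the function name and the assumption that each adapter returns a (confidence, response) pair are placeholders for however the real multi adapter is structured:

```python
def select_response(statement, adapters):
    """Return immediately on a fully confident adapter, otherwise fall back
    to the current behaviour of keeping the highest-confidence result."""
    best_confidence = 0
    best_response = statement

    for adapter in adapters:
        confidence, response = adapter.process(statement)

        # (a) A fully confident adapter such as EvaluateMathematically or
        # TimeLogicAdapter wins outright, before the database-backed
        # closest match adapter has a chance to override it.
        if confidence == 1:
            return confidence, response

        # (b) Otherwise keep whichever adapter reported the highest confidence.
        if confidence > best_confidence:
            best_confidence = confidence
            best_response = response

    return best_confidence, best_response
```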
Thoughts?