Adapted memoization policies for graph #9383

Merged: 49 commits from 9344-memoization-graph-component into main on Sep 2, 2021.
Changes shown from 34 commits.

Commits (49):
c63f592  Adapted memoization policies for graph (twerkmeister, Aug 17, 2021)
8e7be7e  Fixed conftest import (twerkmeister, Aug 18, 2021)
0209f73  Code quality (twerkmeister, Aug 18, 2021)
d809ae5  Removed required packages var (twerkmeister, Aug 18, 2021)
fdb0413  Delegating a couple things to the super class (twerkmeister, Aug 18, 2021)
967a28b  Not persisting config anymore for graph policy components (twerkmeister, Aug 23, 2021)
ccfea81  adjusted unit testing for memoization graph component (twerkmeister, Aug 23, 2021)
52857e6  removed unused imports (twerkmeister, Aug 23, 2021)
eadd3cb  Removed max history hack in ted policy (twerkmeister, Aug 24, 2021)
d1013a6  Making sure policy config is used for initializing featurizers if nec… (twerkmeister, Aug 24, 2021)
75b0678  Removed duplicate method (twerkmeister, Aug 24, 2021)
f3164ac  Merge branch 'main' into 9344-memoization-graph-component (twerkmeister, Aug 24, 2021)
be45a01  Added policy priority to ted policy default config (twerkmeister, Aug 24, 2021)
716238a  Removed unecessary type check (twerkmeister, Aug 24, 2021)
df0fd0c  Removed kwargs overwrite of policy config (twerkmeister, Aug 24, 2021)
1eb211d  Made policy config a privat var (twerkmeister, Aug 24, 2021)
9be669a  Revert "Made policy config a privat var" (twerkmeister, Aug 24, 2021)
4491703  Simplified policy test suites (twerkmeister, Aug 24, 2021)
3789d0b  Merge branch 'main' into 9344-memoization-graph-component (twerkmeister, Aug 24, 2021)
14c5b05  Code quality + unexpecTED adjustment (twerkmeister, Aug 24, 2021)
f770661  Persisting policy by default when training (twerkmeister, Aug 25, 2021)
f53b219  Keeping original policy name in unit tests (twerkmeister, Aug 25, 2021)
0607458  Only carrying over max_history from policy to featurizer for MaxHisto… (twerkmeister, Aug 25, 2021)
d7543e1  Turned load fail logging to warning from info (twerkmeister, Aug 25, 2021)
e6e9fca  Removed forward reference (twerkmeister, Aug 25, 2021)
63074ea  Accessing POLICY_MAX_HISTORY directly (twerkmeister, Aug 25, 2021)
d1c2bb8  Removed compatibility code for old policy component types (twerkmeister, Aug 25, 2021)
70be73b  Removed unused import (twerkmeister, Aug 25, 2021)
18cad3e  Moved persist implementation from Policy class to MemoizationPolicy (twerkmeister, Aug 25, 2021)
d997416  Merge branch 'main' into 9344-memoization-graph-component (twerkmeister, Aug 25, 2021)
cd155b9  Merge branch 'main' into 9344-memoization-graph-component (twerkmeister, Aug 30, 2021)
cf64b39  Removed metadata and persist abstract methods from policy class (twerkmeister, Aug 30, 2021)
1dd8a32  Merge branch 'main' into 9344-memoization-graph-component (twerkmeister, Aug 30, 2021)
a1f2cbf  Fixed access to policy.max_history (twerkmeister, Aug 30, 2021)
7bda727  Test commit (twerkmeister, Aug 30, 2021)
3958782  Joined all memoization policy tests (twerkmeister, Aug 30, 2021)
1018ff8  Fixed delorean code (twerkmeister, Aug 30, 2021)
0c59a5b  Update rasa/core/policies/memoization.py (twerkmeister, Aug 30, 2021)
919827d  policy featurizer creation refactoring (twerkmeister, Aug 30, 2021)
f17ff76  Made _standard_featurizer more uniform and fixed it in UnexpectedTED … (twerkmeister, Aug 30, 2021)
fc45164  Removed unused imports (twerkmeister, Aug 30, 2021)
fb00f5a  Merge branch 'main' into 9344-memoization-graph-component (twerkmeister, Aug 30, 2021)
b63cfe7  Removing name from config dict before instantiating feeaturizers (twerkmeister, Aug 30, 2021)
f35e0ce  Revert "Removing name from config dict before instantiating feeaturiz… (twerkmeister, Aug 30, 2021)
a261108  Revert "policy featurizer creation refactoring" (twerkmeister, Aug 30, 2021)
be57395  made create featurizer use copy of policy again (twerkmeister, Aug 30, 2021)
9e2210d  removed surplus argument (twerkmeister, Aug 30, 2021)
208da22  Merge branch 'main' into 9344-memoization-graph-component (twerkmeister, Sep 1, 2021)
2793c4c  Merge branch 'main' into 9344-memoization-graph-component (joejuzl, Sep 2, 2021)
369 changes: 369 additions & 0 deletions rasa/core/policies/_memoization.py
@@ -0,0 +1,369 @@
# WARNING: This module will be dropped before Rasa Open Source 3.0 is released.
# Please don't make any changes in this module; instead, adapt
# `MemoizationPolicyGraphComponent` in the regular
# `rasa.core.policies.memoization` module. This module is a workaround to
# defer breaking changes due to the architecture revamp in 3.0.
# flake8: noqa
import zlib

import base64
import json
import logging

from tqdm import tqdm
from typing import Optional, Any, Dict, List, Text

import rasa.utils.io
import rasa.shared.utils.io
from rasa.shared.core.domain import State, Domain
from rasa.shared.core.events import ActionExecuted
from rasa.core.featurizers.tracker_featurizers import (
    TrackerFeaturizer,
    MaxHistoryTrackerFeaturizer,
)
from rasa.shared.nlu.interpreter import NaturalLanguageInterpreter
from rasa.core.policies.policy import Policy, PolicyPrediction
from rasa.shared.core.trackers import DialogueStateTracker
from rasa.shared.core.generator import TrackerWithCachedStates
from rasa.shared.utils.io import is_logging_disabled
from rasa.core.constants import MEMOIZATION_POLICY_PRIORITY, DEFAULT_MAX_HISTORY

logger = logging.getLogger(__name__)


class MemoizationPolicy(Policy):
    """A policy that follows exact examples of `max_history` turns in training stories.

    Since `slots` that are set some time in the past are
    preserved in all future feature vectors until they are set
    to None, this policy implicitly remembers and most importantly
    recalls examples in the context of the current dialogue
    longer than `max_history`.

    This policy is not supposed to be the only policy in an ensemble;
    it is optimized for precision and not recall.
    It should get 100% precision because it emits probabilities of 1.0
    along its predictions, which makes every mistake fatal as
    no other policy can overrule it.

    If you need to recall turns from training dialogues where
    some slots might not be set during prediction time, and there are
    training stories for this, use AugmentedMemoizationPolicy.
    """

    ENABLE_FEATURE_STRING_COMPRESSION = True

    USE_NLU_CONFIDENCE_AS_SCORE = False

    @staticmethod
    def _standard_featurizer(
        max_history: Optional[int] = None,
    ) -> MaxHistoryTrackerFeaturizer:
        # Memoization policy always uses MaxHistoryTrackerFeaturizer
        # without state_featurizer
        return MaxHistoryTrackerFeaturizer(
            state_featurizer=None, max_history=max_history
        )
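
    # How the "precision over recall" trade-off described in the class
    # docstring plays out (a hedged sketch; the tracker and domain fixtures
    # are hypothetical):
    #
    #     policy = MemoizationPolicy(max_history=2)
    #     policy.train(training_trackers, domain, interpreter)
    #     policy.predict_action_probabilities(seen_tracker, domain, interpreter)
    #     # -> exactly one action at probability 1.0
    #     policy.predict_action_probabilities(novel_tracker, domain, interpreter)
    #     # -> all zeros; another policy in the ensemble must decide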

    def __init__(
        self,
        featurizer: Optional[TrackerFeaturizer] = None,
        priority: int = MEMOIZATION_POLICY_PRIORITY,
        max_history: Optional[int] = DEFAULT_MAX_HISTORY,
        lookup: Optional[Dict] = None,
        **kwargs: Any,
    ) -> None:
        """Initialize the policy.

        Args:
            featurizer: tracker featurizer
            priority: the priority of the policy
            max_history: maximum history to take into account when featurizing trackers
            lookup: a dictionary that stores featurized tracker states and
                predicted actions for them
        """
        if not featurizer:
            featurizer = self._standard_featurizer(max_history)

        super().__init__(featurizer, priority, **kwargs)

        self.max_history = self.featurizer.max_history
        self.lookup = lookup if lookup is not None else {}

    def _create_lookup_from_states(
        self,
        trackers_as_states: List[List[State]],
        trackers_as_actions: List[List[Text]],
    ) -> Dict[Text, Text]:
        """Creates lookup dictionary from the tracker represented as states.

        Args:
            trackers_as_states: representation of the trackers as a list of states
            trackers_as_actions: representation of the trackers as a list of actions

        Returns:
            lookup dictionary
        """
        lookup = {}

        if not trackers_as_states:
            return lookup

        assert len(trackers_as_actions[0]) == 1, (
            f"The second dimension of trackers_as_actions should be 1, "
            f"instead of {len(trackers_as_actions[0])}"
        )

        ambiguous_feature_keys = set()

        pbar = tqdm(
            zip(trackers_as_states, trackers_as_actions),
            desc="Processed actions",
            disable=is_logging_disabled(),
        )
        for states, actions in pbar:
            action = actions[0]

            feature_key = self._create_feature_key(states)
            if not feature_key:
                continue

            if feature_key not in ambiguous_feature_keys:
                if feature_key in lookup.keys():
                    if lookup[feature_key] != action:
                        # delete contradicting example created by
                        # partial history augmentation from memory
                        ambiguous_feature_keys.add(feature_key)
                        del lookup[feature_key]
                else:
                    lookup[feature_key] = action
                pbar.set_postfix({"# examples": "{:d}".format(len(lookup))})

        return lookup
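
    # A minimal sketch (with hypothetical states) of the contradiction
    # handling above: the same state sequence labelled with two different
    # actions cancels itself out of the lookup instead of keeping either label.
    #
    #     same_states = [{"prev_action": {"action_name": "action_listen"},
    #                     "user": {"intent": "greet"}}]
    #     lookup = policy._create_lookup_from_states(
    #         [same_states, same_states], [["utter_greet"], ["utter_goodbye"]]
    #     )
    #     assert lookup == {}  # the ambiguous key was deleted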

    def _create_feature_key(self, states: List[State]) -> Text:
        # we sort keys to make sure that the same states
        # represented as dictionaries have the same json strings
        # quotes are removed for aesthetic reasons
        feature_str = json.dumps(states, sort_keys=True).replace('"', "")
        if self.ENABLE_FEATURE_STRING_COMPRESSION:
            compressed = zlib.compress(
                bytes(feature_str, rasa.shared.utils.io.DEFAULT_ENCODING)
            )
            return base64.b64encode(compressed).decode(
                rasa.shared.utils.io.DEFAULT_ENCODING
            )
        else:
            return feature_str
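
    # To inspect a stored key by hand, reverse the two steps above (a sketch
    # assuming rasa.shared.utils.io.DEFAULT_ENCODING resolves to UTF-8):
    #
    #     raw = zlib.decompress(base64.b64decode(key)).decode("utf-8")
    #     # raw is the quote-stripped, key-sorted JSON dump of the states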

    def train(
        self,
        training_trackers: List[TrackerWithCachedStates],
        domain: Domain,
        interpreter: NaturalLanguageInterpreter,
        **kwargs: Any,
    ) -> None:
        # only considers original trackers (no augmented ones)
        training_trackers = [
            t
            for t in training_trackers
            if not hasattr(t, "is_augmented") or not t.is_augmented
        ]
        (
            trackers_as_states,
            trackers_as_actions,
        ) = self.featurizer.training_states_and_labels(training_trackers, domain)
        self.lookup = self._create_lookup_from_states(
            trackers_as_states, trackers_as_actions
        )
        logger.debug(f"Memorized {len(self.lookup)} unique examples.")

    def _recall_states(self, states: List[State]) -> Optional[Text]:
        return self.lookup.get(self._create_feature_key(states))

    def recall(
        self, states: List[State], tracker: DialogueStateTracker, domain: Domain,
    ) -> Optional[Text]:
        """Finds the action based on the given states.

        Args:
            states: List of states.
            tracker: The tracker.
            domain: The Domain.

        Returns:
            The name of the action.
        """
        return self._recall_states(states)

    def _prediction_result(
        self, action_name: Text, tracker: DialogueStateTracker, domain: Domain
    ) -> List[float]:
        result = self._default_predictions(domain)
        if action_name:
            if self.USE_NLU_CONFIDENCE_AS_SCORE:
                # the memoization will use the confidence of NLU on the latest
                # user message to set the confidence of the action
                score = tracker.latest_message.intent.get("confidence", 1.0)
            else:
                score = 1.0

            result[domain.index_for_action(action_name)] = score

        return result
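
    # The returned list is a probability vector over all domain actions
    # (a sketch with a hypothetical three-action domain):
    #
    #     result = policy._prediction_result("utter_greet", tracker, domain)
    #     # -> [0.0, 1.0, 0.0] if domain.index_for_action("utter_greet") == 1
    #     # stays all zeros if action_name is falsy, i.e. the policy abstains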

    def predict_action_probabilities(
        self,
        tracker: DialogueStateTracker,
        domain: Domain,
        interpreter: NaturalLanguageInterpreter,
        **kwargs: Any,
    ) -> PolicyPrediction:
        """Predicts the next action the bot should take after seeing the tracker.

        Args:
            tracker: the :class:`rasa.core.trackers.DialogueStateTracker`
            domain: the :class:`rasa.shared.core.domain.Domain`
            interpreter: Interpreter which may be used by the policies to create
                additional features.

        Returns:
            The policy's prediction (e.g. the probabilities for the actions).
        """
        result = self._default_predictions(domain)

        states = self._prediction_states(tracker, domain)
        logger.debug(f"Current tracker state:{self.format_tracker_states(states)}")
        predicted_action_name = self.recall(states, tracker, domain)
        if predicted_action_name is not None:
            logger.debug(f"There is a memorised next action '{predicted_action_name}'")
            result = self._prediction_result(predicted_action_name, tracker, domain)
        else:
            logger.debug("There is no memorised next action")

        return self._prediction(result)

    def _metadata(self) -> Dict[Text, Any]:
        return {
            "priority": self.priority,
            "max_history": self.max_history,
            "lookup": self.lookup,
        }

    @classmethod
    def _metadata_filename(cls) -> Text:
        return "memorized_turns.json"


class AugmentedMemoizationPolicy(MemoizationPolicy):
    """The policy that remembers examples from training stories for `max_history` turns.

    If you need to recall turns from training dialogues
    where some slots might not be set during prediction time,
    add relevant stories without such slots to the training data,
    e.g. reminder stories.

    Since `slots` that are set some time in the past are
    preserved in all future feature vectors until they are set
    to None, this policy has the capability to recall the turns
    up to `max_history` from training stories during prediction,
    even if additional slots were filled in the past
    for the current dialogue.
    """

    @staticmethod
    def _back_to_the_future(
        tracker: DialogueStateTracker, again: bool = False
    ) -> Optional[DialogueStateTracker]:
        """Sends Marty to the past to get
        the new featurization for the future."""
        idx_of_first_action = None
        idx_of_second_action = None

        # we need to find the second executed action
        for e_i, event in enumerate(tracker.applied_events()):
            # find second ActionExecuted
            if isinstance(event, ActionExecuted):
                if idx_of_first_action is None:
                    idx_of_first_action = e_i
                else:
                    idx_of_second_action = e_i
                    break

        # use the first action if this is the first trip,
        # and the second action if we went again
        idx_to_use = idx_of_second_action if again else idx_of_first_action
        if idx_to_use is None:
            return None

        # make the second ActionExecuted the first one
        events = tracker.applied_events()[idx_to_use:]
        if not events:
            return None

        mcfly_tracker = tracker.init_copy()
        for e in events:
            mcfly_tracker.update(e)

        return mcfly_tracker
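
    # Sketch of the hops over a hypothetical event sequence (A = ActionExecuted,
    # U = UserUttered). The first call keeps events from the first action on;
    # each call with again=True restarts from the second action of the current
    # copy, so every hop forgets one more action turn and the slots set before it:
    #
    #     events:      [A1, U1, A2, U2, A3, ...]
    #     first call   -> copy starting at A1
    #     again=True   -> copy starting at A2
    #     again=True   -> copy starting at A3
    #     ...returns None once no further ActionExecuted exists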

    def _recall_using_delorean(
        self, old_states: List[State], tracker: DialogueStateTracker, domain: Domain,
    ) -> Optional[Text]:
        """Applies the back-to-the-future idea to change the past and get a new future.

        Recursively go to the past to correctly forget slots,
        and then back to the future to recall.

        Args:
            old_states: List of states.
            tracker: The tracker.
            domain: The Domain.

        Returns:
            The name of the action.
        """
        logger.debug("Launch DeLorean...")

        mcfly_tracker = self._back_to_the_future(tracker)
        while mcfly_tracker is not None:
            states = self._prediction_states(mcfly_tracker, domain,)

            if old_states != states:
                # check if we like new futures
                memorised = self._recall_states(states)
                if memorised is not None:
                    logger.debug(f"Current tracker state {states}")
                    return memorised
                old_states = states

            # go back again
            mcfly_tracker = self._back_to_the_future(mcfly_tracker, again=True)

        # No match found
        logger.debug(f"Current tracker state {old_states}")
        return None

    def recall(
        self, states: List[State], tracker: DialogueStateTracker, domain: Domain,
    ) -> Optional[Text]:
        """Finds the action based on the given states.

        Uses the back-to-the-future idea to change the past and check whether
        the new future can be used to recall the action.

        Args:
            states: List of states.
            tracker: The tracker.
            domain: The Domain.

        Returns:
            The name of the action.
        """
        predicted_action_name = self._recall_states(states)
        if predicted_action_name is None:
            # let's try a different method to recall that tracker
            return self._recall_using_delorean(states, tracker, domain,)
        else:
            return predicted_action_name
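
# A hedged end-to-end sketch (training/tracker fixtures are hypothetical) of
# how the two recall strategies chain together at prediction time:
#
#     policy = AugmentedMemoizationPolicy(max_history=3)
#     policy.train(training_trackers, domain, interpreter)
#     prediction = policy.predict_action_probabilities(tracker, domain, interpreter)
#     # recall() first tries the exact lookup; only if the current states were
#     # never memorized does _recall_using_delorean() retry with progressively
#     # truncated histories until a match is found or the past runs out.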