Dream emotion dist #568

Merged: 35 commits merged into dev on Nov 16, 2023
Conversation

asleepann (Collaborator) commented Sep 12, 2023

Made the dream_emotion dist with the following annotators: bot-emotion-classifier (determines the emotion and mood of the bot) and emotional-bot-response (rewrites hypothesis texts according to the emotion and mood of the bot). The response selector in dream_emotion selects the emotional text of the chosen hypothesis.
(old PR: #563)
DO NOT DELETE THIS BRANCH

del api_conf[key]
break

logger.info(f"Available APIs: {', '.join([api['display_name'] for api in api_conf.values()])}")
Collaborator:
You do not use API_CONFIGS and api_conf, so I suggest removing them (and from the yml files too).

Collaborator Author:
removed

@@ -0,0 +1,25 @@
name: emotion-response_selector
Collaborator:
I suggest following the unofficial rule and naming components with "_", not "-".
With "-" we name containers.
(It was an important rule when we were using Kubernetes, so we keep following it for now to be able to use Kubernetes again in the future without breaking changes.)

Collaborator Author:
fixed
also, thank you for explanations :)

@@ -0,0 +1,23 @@
name: bot-emotion-classifier
Collaborator:
same

Collaborator:
BTW, why is the bot emotion classifier an annotator and not a bot utterance annotator?

Collaborator Author:
name fixed
don't know why I didn't fix the type of the annotator before :)
now it should be correct: changed the state manager method and the formatters

SERVICE_PORT: 8050
SERVICE_NAME: emotional-bot-response
WORK_DIR: annotators/emotional_bot_response
FLASK_APP: server
Collaborator:
no need to write it here

Collaborator Author:
removed

build:
args:
SERVICE_PORT: 8050
SERVICE_NAME: emotional-bot-response
Collaborator:
with _

Collaborator Author:
fixed

N_UTTERANCES_CONTEXT: 5
FILTER_TOXIC_OR_BADLISTED: 1
EMOTIONAL_RESPONSES: 1
FLASK_APP: server
Collaborator:
you don't need to put it here - it's already below

Collaborator Author:
removed

state_formatters/dp_formatters.py (resolved)
hypots = [h["text"] for h in hypotheses]

bot_mood_label = (
dialog["bot_utterances"][-1]["annotations"].get("bot_emotion_classifier", {}).get("bot_mood_label", "happy")
Collaborator:
What if this function gets triggered on the first step of the dialog, when there are no bot utterances yet?

Collaborator Author:
added handling of this situation
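For illustration, a minimal sketch of such a guard, assuming the formatter structure quoted in the diff above; the "happy" default mirrors the one already used there:

def get_bot_mood_label(dialog: dict) -> str:
    # Guard against the first dialog step, when bot_utterances is still empty.
    if dialog.get("bot_utterances"):
        annotations = dialog["bot_utterances"][-1].get("annotations", {})
        return annotations.get("bot_emotion_classifier", {}).get("bot_mood_label", "happy")
    return "happy"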

if GENERATIVE_SERVICE_CONFIG:
with open(f"common/generative_configs/{GENERATIVE_SERVICE_CONFIG}", "r") as f:
GENERATIVE_SERVICE_CONFIG = json.load(f)
GENERATIVE_TIMEOUT = int(getenv("GENERATIVE_TIMEOUT", 30))
Collaborator:
float

Collaborator Author:
fixed

def make_prompt(sentence, emotion="neutral", mood="happy", intensity=7):
prompt = f"""You are a writer who needs to rewrite a sentence to express
specific emotions, moods, and intensity levels. Your task is to generate
a new sentence that conveys the desired emotions, moods, and emotion intensity
Collaborator:
Put the prompt into a file; you can then substitute values into it as follows. In the prompt, put:

- Sentence: {SENTENCE}

and then do: prompt.replace("{SENTENCE}", sentence)
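A minimal sketch of that approach; the file path and the extra placeholder names are hypothetical, not the PR's actual ones:

from pathlib import Path

# Hypothetical location of the prompt template; the real file name may differ.
PROMPT_TEMPLATE = Path("common/prompts/emotional_rewrite.txt").read_text()

def make_prompt(sentence, emotion="neutral", mood="happy", intensity=7):
    # Fill in the placeholders the template is assumed to contain,
    # e.g. "- Sentence: {SENTENCE}".
    return (
        PROMPT_TEMPLATE.replace("{SENTENCE}", sentence)
        .replace("{EMOTION}", emotion)
        .replace("{MOOD}", mood)
        .replace("{INTENSITY}", str(intensity))
    )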

Collaborator Author:
done

except Exception as e:
sentry_sdk.capture_exception(e)
logger.exception(e)
response = None
Collaborator:
None is a bad choice. If the response is a str, then set the default value to "".
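A compact sketch of that default; rewrite_with_llm() is a hypothetical stand-in for the actual generation call:

import logging
import sentry_sdk

logger = logging.getLogger(__name__)

def rewrite_with_llm(prompt: str) -> str:
    # Hypothetical stand-in for the real generative-service call.
    raise RuntimeError("service unavailable")

response = ""  # str default instead of None
try:
    response = rewrite_with_llm("some prompt")
except Exception as e:
    sentry_sdk.capture_exception(e)
    logger.exception(e)
# `response` is now always a str, empty on failure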

Collaborator Author:
fixed

except Exception as exc:
logger.exception(exc)
sentry_sdk.capture_exception(exc)
result = {"hypotheses": "fallback standard response"}
Collaborator:
"" empty fallback

Collaborator Author:
fixed

bot_mood_labels = ["angry"]
bot_emotions = ["anger"]
responses = rewrite_sentences(sentences, bot_mood_labels, bot_emotions)
logger.info("TEST. Sentences: {}".format(sentences))
Collaborator:
It is more common to use an f-string now, which would be:

f"TEST. Sentences: {sentences}"

Collaborator Author:
fixed

bot_emotion = request.json.get("bot_emotion", [])

results = []
for i in range(len(sentences)):
Collaborator:
just FYI
you can do:

for sent, mood_lab, emot in zip(sentences, bot_mood_labels, bot_emotions):
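For illustration, a tiny self-contained sketch of that loop; rewrite_sentence() and the example inputs are hypothetical stand-ins for the service's actual logic:

def rewrite_sentence(sentence: str, mood_label: str, emotion: str) -> str:
    # Hypothetical stand-in for the real rewriting call.
    return f"{sentence} [{emotion}, {mood_label}]"

sentences = ["I am glad to see you."]  # example inputs
bot_mood_labels = ["angry"]
bot_emotions = ["anger"]

results = []
for sent, mood_lab, emot in zip(sentences, bot_mood_labels, bot_emotions):
    results.append(rewrite_sentence(sent, mood_lab, emot))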

Collaborator Author:
thanks! done


sentences = request.json.get("sentences", [])
bot_mood_label = request.json.get("bot_mood_label", [])
bot_emotion = request.json.get("bot_emotion", [])
Collaborator:
multiple bot_emotionS (and mood labelS)

Collaborator Author:
fixed

build:
args:
SERVICE_PORT: 8048
SERVICE_NAME: emotion-response_selector
Collaborator:
name of component with only _
name of container with only -

Collaborator Author:
fixed

"metadata": {
"display_name": "Dream",
"author": "DeepPavlov",
"description": "Main version of DeepPavlov Dream Socialbot",
Collaborator:
change it

Collaborator Author:
fixed

{
"sentences": hypots,
"bot_mood_label": bot_mood_labels,
"bot_emotion": bot_emotions,
Collaborator:
bot_emotions
bot_mood_labels

Collaborator Author:
fixed

if GENERATIVE_SERVICE_CONFIG:
with open(f"common/generative_configs/{GENERATIVE_SERVICE_CONFIG}", "r") as f:
GENERATIVE_SERVICE_CONFIG = json.load(f)
GENERATIVE_TIMEOUT = int(getenv("GENERATIVE_TIMEOUT", 30.0))
Collaborator:
Suggested change:
- GENERATIVE_TIMEOUT = int(getenv("GENERATIVE_TIMEOUT", 30.0))
+ GENERATIVE_TIMEOUT = float(getenv("GENERATIVE_TIMEOUT", 30.0))

Collaborator Author:
fixed

dilyararimovna (Collaborator) left a comment:
Good Job! thank you!

print("SENTENCE:", sentence)
print("USER EMOTION: ", user_emotion)
print("SENTIMENT: ", sentiment)
print("OLD BOT MOOD: ", bot_mood)
Collaborator:
All prints should be deleted (or changed to logger calls) -- they will not be shown because the service uses a logger.

Collaborator Author:
deleted them (I already had the same info in logger)

annotated_utterances = request.json.get("annotated_utterances", [])
bot_moods = request.json.get("bot_mood", [])

for i in range(len(sentences)):
Collaborator:
Just writing to show you how it is nicer to express these 4 lines of code in one:

for sentence, annotated_utterance, bot_mood in zip(sentences, annotated_utterances, bot_moods):

Collaborator Author:
I don't know why, but zip in other people's code sometimes confuses me, and I find for i easier to understand, but I will try to use zip instead because it's truly more elegant :)

)
try:
hypotheses = send_request_to_prompted_generative_service(
"",
Collaborator:
The dialog context is a list of utterances, not a str.
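A hedged sketch of building that context; the dummy dialog and the window of 3 turns are illustrative, not the PR's actual values. The resulting list would replace the "" passed as the first argument above:

# Take the texts of the last few utterances as the dialog context.
dialog = {"utterances": [{"text": "hi"}, {"text": "hello! how are you?"}, {"text": "great, thanks"}]}
dialog_context = [uttr["text"] for uttr in dialog["utterances"][-3:]]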

Collaborator Author:
fixed

sentence-ranker:8128, property-extraction:8136, prompt-selector:8135, openai-api-chatgpt:8145,
dff-dream-persona-chatgpt-prompted-skill:8137, dff-dream-faq-prompted-skill:8170,
openai-api-chatgpt-16k:8167, bot-emotion-classifier:8051, emotional-bot-response:8050,
emotion-ranking-based-response-selector:8048"
Collaborator:
I suggest getting rid of the extra components in this distribution, for example the factoid components, etc.
So you may keep only the components from the general prompted distribution (aka https://github.com/deeppavlov/dream/tree/main/assistant_dists/dream_persona_openai_prompted):

sentseg:8011, combined-classification:8087,
sentence-ranker:8128, prompt-selector:8135, openai-api-chatgpt:8145,
dff-dream-persona-chatgpt-prompted-skill:8137, dff-dream-faq-prompted-skill:8170,
openai-api-chatgpt-16k:8167, bot-emotion-classifier:8051, emotional-bot-response:8050,
emotion-ranking-based-response-selector:8048

Collaborator Author:
done. Hope I removed unnecessary components from all config files...

components/fuipwahef749384h2e.yml (resolved)
- candidate_annotators
required_previous_services: null
state_manager_method: add_bot_utterance
tags: null
Collaborator:
Selectors. And please also fix it in the pipeline conf, as for the skill selector (this is a bug in the whole dream repo -- I am going to fix it in a separate pull request for all response selectors; I actually thought I had done it previously, but it seems not yet).
https://github.com/deeppavlov/dream/blob/dev/assistant_dists/dream_persona_openai_prompted/pipeline_conf.json#L159-L161 (also find the card for this component and do the same by analogy)

Collaborator Author:
added tags to response selector in component card and in pipeline_conf

group: candidate_annotators
connector:
protocol: http
timeout: 15.0
Collaborator:
Why is this timeout 15 seconds if your generative timeout is 120? Fix it, including in the pipeline and other config files. Maybe you can decrease the generative timeout instead (it won't take 120 seconds because your prompt is hard-coded, so the OpenAI response would be fast).

Collaborator Author:
changed generative timeout to 30 sec everywhere


SENTENCE_RANKER_ANNOTATION_NAME = getenv("SENTENCE_RANKER_ANNOTATION_NAME")
SENTENCE_RANKER_SERVICE_URL = getenv("SENTENCE_RANKER_SERVICE_URL")
SENTENCE_RANKER_TIMEOUT = int(getenv("SENTENCE_RANKER_TIMEOUT"))
Collaborator:
Suggested change:
- SENTENCE_RANKER_TIMEOUT = int(getenv("SENTENCE_RANKER_TIMEOUT"))
+ SENTENCE_RANKER_TIMEOUT = float(getenv("SENTENCE_RANKER_TIMEOUT"))

Collaborator:
Also compare the file with the latest version of the original response selector (which it was copied from), because I think there were some bug fixes; I want you to check whether you have the same bugs here.

Collaborator Author:
fixed. Also, added new test files as in ranking_based_response_selector: test.py and conftest.py


if EMOTIONAL_RESPONSES:
emotional_hypotheses = hypotheses[best_id].get("annotations").get("emotional_bot_response")
selected_responses.append(emotional_hypotheses.pop("hypotheses"))
Collaborator:
You do not have to extract the text from emotional_hypotheses with pop.
But from hypotheses[best_id] you do need to pop "text" in any case, to remove the extra key from the dict and keep only the attributes, without the text key.
Plus, you need a check that the annotator's prediction, i.e. this emotional response, is not empty.
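A minimal sketch of that selection logic, assuming the annotation layout quoted in the diff above; the dummy data and names are illustrative:

EMOTIONAL_RESPONSES = True
selected_responses = []
best_id = 0
hypotheses = [
    {
        "text": "Original reply.",
        "confidence": 0.9,
        "annotations": {"emotional_bot_response": {"hypotheses": "Rewritten, happier reply."}},
    }
]

best_candidate = hypotheses[best_id]
selected_text = best_candidate.pop("text")  # keep only the attributes in the dict

if EMOTIONAL_RESPONSES:
    emotional = best_candidate.get("annotations", {}).get("emotional_bot_response", {})
    emotional_text = emotional.get("hypotheses", "")
    if emotional_text:  # fall back to the original text if the annotator returned nothing
        selected_text = emotional_text

selected_responses.append(selected_text)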

Collaborator Author:
done

@@ -0,0 +1,9 @@
SERVICE_PORT: 8048
SERVICE_NAME: emotion_response_selector
Collaborator:
In the pipeline conf you use it as response_selector, not emotion_response_selector, so change the name back here (and in the docker compose yml too).

Collaborator Author:
done

build:
args:
SERVICE_PORT: 8048
SERVICE_NAME: emotion_response_selector
Collaborator:
In the pipeline conf you use it as response_selector, not emotion_response_selector, so change the name back here (and in the docker compose yml too).

Collaborator Author:
done

dialog_formatter: state_formatters.dp_formatters:bot_mood_emotion_formatter
response_formatter: state_formatters.dp_formatters:simple_formatter_service
previous_services: skills
required_previous_services: skills
Collaborator:
Don't make this required; let it be null here.

Collaborator Author:
done

build:
args:
SERVICE_PORT: 8048
SERVICE_NAME: emotion_response_selector
Collaborator:
In the pipeline conf you use it as response_selector, not emotion_response_selector, so change the name back here (and in the docker compose yml too).

Collaborator Author:
done

{
"bot_mood": new_bot_mood,
"bot_mood_label": new_bot_mood_label,
"bot_emotion": bot_emotion,
Collaborator:
Oops... I missed this previously.
So, you need to create an empty list result, append those dicts to it on every iteration of the for loop above, and then jsonify result.

Collaborator:
This way your input and output batches will have the same length; right now the input may contain, for example, 5 samples, while the output has length 1 anyway.
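A self-contained sketch of a batch-aligned handler; the route name and compute_bot_mood() are hypothetical placeholders, not the annotator's actual code:

from flask import Flask, jsonify, request

app = Flask(__name__)

def compute_bot_mood(sentence, annotated_utterance, bot_mood):
    # Hypothetical stand-in for the real mood-update logic.
    return bot_mood, "happy", "joy"

@app.route("/model", methods=["POST"])
def respond():
    sentences = request.json.get("sentences", [])
    annotated_utterances = request.json.get("annotated_utterances", [])
    bot_moods = request.json.get("bot_mood", [])

    results = []  # one output dict per input sample
    for sentence, annotated_utterance, bot_mood in zip(sentences, annotated_utterances, bot_moods):
        new_bot_mood, new_bot_mood_label, bot_emotion = compute_bot_mood(sentence, annotated_utterance, bot_mood)
        results.append(
            {
                "bot_mood": new_bot_mood,
                "bot_mood_label": new_bot_mood_label,
                "bot_emotion": bot_emotion,
            }
        )
    return jsonify(results)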

Collaborator Author:
done

model_type: NN-based
is_customizable: true
author: publisher@deeppavlov.ai
description: Propose alternative hypotheses with specified emotional color.
Collaborator:
If it proposes hypotheses, why is it a candidate annotator and NOT a skill?
I suggest making a few changes and moving it to skills.

Collaborator Author:
This is not a skill, because it takes existing hypotheses and changes them according to the specified emotions. Yes, it sounds confusing; we initially discussed this too, but we need to finish it for the AC report. I can change the description in the component.

Collaborator Author:
Changed description

Collaborator:
So it doesn't produce new hypotheses, but processes existing ones?

Collaborator Author:
yes

"is_enabled": true,
"source": {
"component": "components/rFC0YJOoDFvS.yml",
"service": "services/agent_services/service_configs/dream"
Collaborator:
A new component card should be created for the component above (the card uses the title of the dist in one place, so it is unique). And dream -> dream_emotion.

Collaborator Author:
done

"is_enabled": true,
"source": {
"component": "components/sbDcAqiNqxFz.yml",
"service": "services/agent_services/service_configs/dream"
Collaborator:
A new component card should be created for the component above (the card uses the title of the dist in one place, so it is unique). And dream -> dream_emotion.

Collaborator Author:
done

dilyararimovna (Collaborator) left a comment:
[image]

dilyararimovna merged commit e12986e into dev on Nov 16, 2023
2 checks passed
oserikov pushed a commit that referenced this pull request Nov 21, 2024
* add dream_emotion dist with annotators

* add component cards

* fix codestyle flake8

* fix codestyle

* another codestyle fix

* PR fixes

* fixed batches

* move preprocessing to annotator

* fix codestyle black

* minor fixes

* removed unnecessary api_configs

* changed type of annotator to bot utts annotator

* fixed component file

* fixed service.yml files

* fixed formatters

* fixed emotional response server and dist files

* moved prompt into separate file

* fixed annotators

* fixed annotators again

* fixed emotion response selector

* fixed emotion dist and components

* fixed codestyle

* fixed codestyle again

* fixed codestyle black

* fixed naming of emotion response selector

* fixed emotional_bot_response component

* fixed emotion response selector server

* added prev_services in annotator in pipeline_conf

* fixed result formatting of bot_emo_cls

* fixed codestyle

* fixed codestyle again

* changed description of emo_bot_response

* added necessary component cards

* fix: timeout

* fix: timeout

---------

Co-authored-by: Dilyara Zharikova (Baymurzina) <dilyara.rimovna@gmail.com>