MIRACLE

EMNLP2023-findings: MIRACLE: Towards Personalized Dialogue Generation with Latent-Space Multiple Personal Attribute Control

arXiv: https://arxiv.org/abs/2310.18342

The dataset is uploaded here.


The Concept

We model the multi-faceted personality as the fusion of multiple personal attributes ($P_1, P_2, \cdots, P_N$), where each attribute may have many aspects ($p_1, p_2, \cdots, p_m$).
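Concretely, a target persona is obtained by picking one aspect per attribute. The sketch below only illustrates this composition; the attribute and aspect names are placeholders, not necessarily the labels used in our released data.

# A persona fuses multiple personal attributes (P_1, ..., P_N), each
# instantiated with one of its aspects. Names here are illustrative only.
persona = {
    "languagestyle": "lyrical",  # P_1 fixed to one aspect
    "attitude": "optimistic",    # P_2 (hypothetical attribute)
    "hobby": "sports",           # P_3 (hypothetical attribute)
}
print(persona)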

Setup

This repo is built on top of 🤗 Hugging Face Transformers and torchdiffeq.

Training and evaluation can be done on a single RTX 3090/4090.

Recommended environment:

conda create -n miracle python=3.9
conda activate miracle
pip install -r requirements.txt

Training and evaluating

In the following, we assume you have downloaded the dataset we used.

First, you should train two models for evaluation:

  • a single-personal-attribute text classifier for computing the personalization score (a usage sketch follows this list):
ATTR=languagestyle bash train_classifier.sh
  • an NLI model for computing dialogue response semantic coherence:
bash train_nli.sh
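
As a rough illustration of how the trained attribute classifier can be used at evaluation time, the sketch below labels generated responses with a fine-tuned text classifier and reports the fraction predicted as the target aspect. The checkpoint path, label name, and this exact scoring recipe are assumptions for illustration; see the evaluation scripts for the actual procedure.

# Sketch: score responses with a fine-tuned single-attribute classifier.
# The checkpoint path and label name below are illustrative assumptions.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="outputs/classifier-languagestyle",  # hypothetical checkpoint path
)

responses = [
    "The moonlight spills like silver ink across the quiet sea.",
    "Yeah, sounds good, let's do that.",
]
target_aspect = "lyrical"  # hypothetical label name

preds = clf(responses)
score = sum(p["label"] == target_aspect for p in preds) / len(preds)
print(f"personalization score for '{target_aspect}': {score:.2f}")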

Then, you can train our MIRACLE model and generate personalized responses by:

bash pipeline.sh

which calls train_cvae.sh for training and gen_cave.sh for generation. We select the model from the 11th epoch as our final model. Note that for different datasets or scenarios, you may need to adjust the hyper-parameters to obtain better results.

Customize your data

To train our model, you can use any personal-attribute-dense dataset in the same format as our released data.

We also upload our ChatGPT API script for reference. It generates personalized responses at the aspect level. Note that you need to prepare your own topics in dataset/sample_topics:

python chatgpt_data.py --key 'Your OpenAI API key' --aspect 'xx1'
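
For orientation, here is a minimal sketch of the kind of aspect-conditioned call such a script makes. The prompt wording, model name, and the openai>=1.0 client usage are assumptions for illustration; see chatgpt_data.py for the actual prompts and output handling.

# Sketch: ask ChatGPT to answer a topic in the style of one aspect.
# Prompt, model name, and client usage are illustrative assumptions.
from openai import OpenAI

client = OpenAI(api_key="Your OpenAI API key")

def generate_response(topic: str, aspect: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Reply to the user's post in a {aspect} style."},
            {"role": "user", "content": topic},
        ],
    )
    return completion.choices[0].message.content

print(generate_response("How was your weekend hike?", "xx1"))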

With the collected data, you need to format it as follows in dataset/dialogue_yy.jsonl for each attribute:

{"input": ["user post1", "user post2"], "output": ["model resp1", "model resp2"], "tag": 'xx1'}
{"input": ["user post1", "user post2"], "output": ["model resp1", "model resp2"], "tag": 'xx2'}
...

We recommend cleaning the data before use to ensure its personality richness.
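
A small loader/validator sketch for such a file may help catch formatting mistakes early; the file name and the checks below are only assumptions about the schema shown above.

# Sketch: load a dialogue_yy.jsonl file and sanity-check the expected keys.
# File name and checks are illustrative assumptions.
import json

def load_dialogues(path: str):
    dialogues = []
    with open(path, encoding="utf-8") as f:
        for line_no, line in enumerate(f, 1):
            record = json.loads(line)
            missing = {"input", "output", "tag"} - set(record)
            assert not missing, f"line {line_no}: missing keys {missing}"
            assert isinstance(record["input"], list) and isinstance(record["output"], list), \
                f"line {line_no}: 'input'/'output' must be lists of turns"
            dialogues.append(record)
    return dialogues

data = load_dialogues("dataset/dialogue_languagestyle.jsonl")  # hypothetical file name
print(len(data), "dialogues;", "tags:", sorted({d["tag"] for d in data}))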

Before training, edit schema.py to set your dataset file paths.
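
The exact contents of schema.py are defined by this repo; the snippet below is only a hypothetical illustration of the kind of attribute-to-file mapping you would point at your own data.

# Hypothetical illustration only; the real variable names and structure
# are defined in schema.py and may differ.
DATASET_PATHS = {
    "languagestyle": "dataset/dialogue_languagestyle.jsonl",
    "attitude": "dataset/dialogue_attitude.jsonl",
}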

Similar Insights

  • Controllable and Compositional Generation with Latent-Space Energy-Based Models. NIPS 2021 code
  • Composable Text Controls in Latent Space with ODEs. EMNLP 2022 code
  • A Distributional Lens for Multi-Aspect Controllable Text Generation. EMNLP 2022 code
  • Controllable Text Generation via Probability Density Estimation in the Latent Space. ACL 2023 code

Citation

@misc{lu2023miracle,
    title={MIRACLE: Towards Personalized Dialogue Generation with Latent-Space Multiple Personal Attribute Control},
    author={Zhenyi Lu and Wei Wei and Xiaoye Qu and XianLing Mao and Dangyang Chen and Jixiong Chen},
    year={2023},
    eprint={2310.18342},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
