Conversations Are Not Flat: Modeling the Dynamic Information Flow across Dialogue Utterances

This repository contains the code and pre-trained models for our ACL 2021 paper Conversations Are Not Flat: Modeling the Dynamic Information Flow across Dialogue Utterances (pdf).

**************************** Updates ****************************

The Chinese version is coming soon!

  • 6/30: We released the code and pre-trained model (English version) of DialoFlow.
  • 5/10: We released the code and pre-trained model of Flow Score. Try it out!

Overview

We propose DialoFlow, a new paradigm that constructs the dynamic information flow in the dialogue history by modeling the semantic influence brought about by each utterance. In addition, we design Flow Score, an automatic reference-free evaluation metric based on the pre-trained DialoFlow, for interactive dialogue quality evaluation.

[Figure: Overview of DialoFlow]
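As a rough illustration of the framing above (a toy sketch of our own, not the released implementation): the dialogue history is encoded into a context representation after each utterance, and the semantic influence of an utterance can be viewed as the change between adjacent context representations.

```python
# Toy sketch of the "dynamic information flow" framing described above
# (an assumed reading, not the released DialoFlow code).
import torch

def semantic_influences(context_states):
    """context_states: tensor [num_prefixes, hidden_dim], one row per
    dialogue-history prefix. Returns one influence vector per utterance,
    i.e. the change between adjacent context representations."""
    return context_states[1:] - context_states[:-1]
```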

Requirements

  • torch==1.7.0
  • transformers==3.0.2
  • apex
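A quick environment check against the pinned versions (a minimal sketch; apex is built from source and is only probed for importability here):

```python
# Verify the pinned dependency versions before running the scripts.
import torch
import transformers

assert torch.__version__.startswith("1.7"), torch.__version__
assert transformers.__version__.startswith("3.0"), transformers.__version__

try:
    import apex  # NVIDIA apex, installed from source
    print("apex available")
except ImportError:
    print("apex not found; mixed-precision training will be unavailable")
```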

Pre-trained models

DialoFlow is pre-trained on the Reddit dataset and is based on GPT-2.

For more details about the dataset, please refer to DialoGPT.

We release three pre-trained models: DialoFlow_base, DialoFlow_medium, and DialoFlow_large.

Please download the pre-trained models and place them under the models/ directory.

The models fine-tuned on the BST dataset and the Chinese version will be released soon.

Dialogue Generation

We provide the code for dialogue generation using the pre-trained DialoFlow model.

The script generate.py contains two decoding methods: beam search and nucleus sampling.

You can modify the code for your own data and task.
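The flags of the released script are repo-specific, so as an illustration only, here is a minimal sketch of the two decoding strategies using the Hugging Face generate API with a vanilla GPT-2 stand-in (the model name and prompt are placeholders, not the released DialoFlow checkpoints):

```python
# Minimal decoding sketch: beam search vs. nucleus sampling.
# Uses vanilla GPT-2 as a stand-in; it does NOT load the DialoFlow checkpoints.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

context = "Hi, how was your weekend?"
input_ids = tokenizer.encode(context, return_tensors="pt")

# Beam search: deterministic, keeps the top `num_beams` partial hypotheses.
beam_ids = model.generate(input_ids, max_length=50, num_beams=5, early_stopping=True)

# Nucleus (top-p) sampling: samples from the smallest token set whose
# cumulative probability exceeds top_p.
nucleus_ids = model.generate(input_ids, max_length=50, do_sample=True, top_p=0.9, top_k=0)

print(tokenizer.decode(beam_ids[0], skip_special_tokens=True))
print(tokenizer.decode(nucleus_ids[0], skip_special_tokens=True))
```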

Fine-tuning

We fine-tune the pre-trained model on the DailyDialog dataset:

cd dailydialog
bash fine-tune.sh

Flow Score

Flow Score is an automatic reference-free metric for interactive dialogue evaluation, built on the pre-trained DialoFlow. The code for Flow Score can be found here.
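At a high level (a simplified sketch of the idea, not the exact formula from the paper), Flow Score compares, for each utterance, the semantic influence the model predicts with the influence actually observed once the utterance is appended, e.g. via cosine similarity:

```python
# Simplified illustration of the Flow Score idea (not the paper's exact formula):
# compare predicted vs. observed per-utterance semantic influence.
import torch.nn.functional as F

def toy_flow_score(predicted_influence, observed_influence):
    """Both arguments: lists of 1-D tensors, one influence vector per utterance."""
    sims = [
        F.cosine_similarity(p.unsqueeze(0), o.unsqueeze(0)).item()
        for p, o in zip(predicted_influence, observed_influence)
    ]
    # Average similarity over utterances; higher means the dialogue follows
    # the information flow the model expects.
    return sum(sims) / len(sims)
```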

Citation

Please cite our paper if you use DialoFlow in your work.

@inproceedings{li2021dialoflow,
   title={Conversations are not Flat: Modeling the Dynamic Information Flow across Dialogue Utterances},
   author={Li, Zekang and Zhang, Jinchao and Fei, Zhengcong and Feng, Yang and Zhou, Jie},
   booktitle={Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
   year={2021}
}