Aligning Large Language Models on Information Extraction

Paper | Pretrained Models

We introduce ADELIE (Aligning large language moDELs on Information Extraction), an aligned LLM that effectively solves various IE tasks, including closed IE, open IE, and on-demand IE. We first collect and construct a high-quality alignment corpus, IEInstruct, for IE. We then train ADELIE-SFT on IEInstruct using instruction tuning, and further train it with the direct preference optimization (DPO) objective, resulting in ADELIE-DPO. Extensive experiments on various held-out IE datasets demonstrate that both models (ADELIE-SFT and ADELIE-DPO) achieve state-of-the-art (SoTA) performance among open-source models. We further explore the general capabilities of ADELIE, and experimental results reveal that they do not exhibit a noticeable decline.

News❗️❗️❗️

  • [2024-11-04] To facilitate deployment, we have further trained Qwen2.5-1.5B (ADELIE-SFT-1.5B and ADELIE-DPO-1.5B) and Llama3.2-3B (ADELIE-SFT-3B and ADELIE-DPO-3B). The models are now available in the HF repository.
  • [2024-09-20] Our paper has been accepted by the EMNLP 2024 main conference!
  • [2024-05-06] Initial release of the codebase, containing the data construction and training code for our arXiv paper.

An inference example
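
(The original README displays a figure with an inference dialogue at this point.)

Below is a minimal inference sketch using Transformers. The model ID, prompt wording, and generation settings are illustrative assumptions, not taken from the figure.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face model ID; see the Pretrained models section.
model_id = "THU-KEG/ADELIE-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# An illustrative closed-IE style instruction.
prompt = (
    "Please extract all (head entity, relation, tail entity) triples "
    "from the following text:\n"
    "Yunjia Qi is a researcher at Tsinghua University."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))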

Installation

The code repository is based on PyTorch and Transformers. Please use the following command to install the necessary dependencies: pip install -r requirements.txt

Pretrained models

We release ADELIE models based on three backbones: Llama-2 (7B), Llama-3.2 (3B), and Qwen2.5 (1.5B). The models are available in the 🤗HuggingFace Hub.

The table below presents the average F1 scores (%) of the ADELIE models across closed IE, open IE, and on-demand IE tasks, as well as their overall performance (%) on general benchmarks. For dataset details, please refer to the paper.

Model              Closed IE   Open IE   On-demand IE   General Average Score
-----------------  ---------   -------   ------------   ---------------------
Llama2 7B          5.7         5.6       22.4           52.2
ADELIE-SFT         42.6        46.9      60.4           53.5
ADELIE-DPO         42.7        47.6      60.5           53.8
-----------------  ---------   -------   ------------   ---------------------
Llama3.2 3B        19.1        18.5      20.8           55.5
ADELIE-SFT-3B      41.8        47.6      60.8           55.6
ADELIE-DPO-3B      39.2        47.8      60.7           55.6
-----------------  ---------   -------   ------------   ---------------------
Qwen2.5 1.5B       16.5        14.2      20.5           54.6
ADELIE-SFT-1.5B    37.7        44.6      58.9           55.0
ADELIE-DPO-1.5B    38.5        45.6      59.2           55.1

Generate the ADELIE dataset

ADELIE-SFT is trained on IEInstruct. It is further trained with the direct preference optimization (DPO) objective on IEFeedback, resulting in ADELIE-DPO.
Among our training and testing tasks, the copyright of TACRED, ACE 2005, and RichERE belongs to LDC, and we access them through our LDC membership. All the other datasets are open-source, and we strictly adhere to their licenses.
We therefore remove the non-open-source datasets from IEInstruct and IEFeedback and make these two training datasets public. You can download the data from ADELIE Datasets.

IEInstruct

To obtain the full versions of IEInstruct and the evaluation dataset, first download the raw datasets as described in the data/Readme.md file, then proceed with the following instructions:

# Generate a unified data format
sh ./scripts/generate_unified_data.sh

# Generate IEInstruct mixture
sh ./scripts/generate_mixtural_train_data.sh
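
For orientation, each IEInstruct instance pairs a natural-language IE instruction with a target answer. The snippet below is a hypothetical illustration of such an instance; the field names are not the repository's actual schema.

# Hypothetical shape of a single instruction-tuning instance;
# field names are illustrative, not the repository's actual schema.
example = {
    "instruction": "Extract all entities of type PERSON and "
                   "ORGANIZATION from the given sentence.",
    "input": "Hao Peng joined the lab at Tsinghua University in 2021.",
    "output": "PERSON: Hao Peng; ORGANIZATION: Tsinghua University",
}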

IEFeedback

# Generate sampled data
sh ./scripts/generate_dpo_sample_data.sh

# Sample output from ADELIE-SFT
sh ./train4llama/scripts/predict.sh

# Generate IEFeedback mixture
sh ./scripts/generate_mixtural_dpo_data.sh
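
DPO training consumes preference pairs: for each prompt, a preferred ("chosen") and a dispreferred ("rejected") response, where the rejected side is drawn from the ADELIE-SFT samples produced above. The snippet below is a hypothetical illustration of such a pair; the field names and contents are not the repository's actual schema.

# Hypothetical shape of one IEFeedback preference pair;
# field names and contents are illustrative only.
preference_pair = {
    "prompt": "Extract all relation triples from: 'ADELIE is developed "
              "by THU-KEG.'",
    "chosen": "(ADELIE, developed by, THU-KEG)",    # preferred answer
    "rejected": "(ADELIE, developed by, EMNLP)",    # flawed sampled output
}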

Model training

First, you need to generate the ADELIE dataset.

Second, you can train ADELIE-SFT and ADELIE-DPO by running the following commands.

# ADELIE-SFT: 
sh train4llama/scripts/finetune_with_accelerate.sh

# ADELIE-DPO: 
sh train4llama/scripts/dpo_train_with_accelerate.sh

Please note that the DPO training data includes outputs generated by ADELIE-SFT. Therefore, after completing ADELIE-SFT training, you need to generate the DPO training data following the IEFeedback generation steps described above.
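
For readers unfamiliar with DPO, the sketch below shows the standard DPO loss (Rafailov et al., 2023) that the second training stage optimizes. This is the generic formulation, not code from this repository; the beta default and argument conventions are assumptions.

import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Standard DPO loss: increase the margin by which the policy prefers
    # the chosen response over the rejected one, relative to a frozen
    # reference model (here, the SFT checkpoint). All arguments are
    # per-example sequence log-probabilities (tensors).
    policy_logratio = policy_chosen_logps - policy_rejected_logps
    ref_logratio = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (policy_logratio - ref_logratio)).mean()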

Our training code is based on open-instruct.

Evaluation

We have publicly released the preprocessed test datasets for evaluating IE capabilities, excluding the RichERE dataset. Execute the following command to run the IE evaluation.

Note: for the on-demand IE and open IE datasets, you must first download the raw data from ODIE and ROBUST, respectively, and place it in the data directory before running the evaluation.

sh ./train4llama/scripts/eval.sh
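
The scores in the table above are F1 based. As a reminder of how micro-averaged F1 over extracted items (e.g., relation triples) is typically computed, here is a generic sketch; it is not the repository's evaluation code.

def micro_f1(pred_sets, gold_sets):
    # Generic micro-averaged F1 over sets of extracted items
    # (e.g., triples); not taken from the repository's eval scripts.
    tp = sum(len(p & g) for p, g in zip(pred_sets, gold_sets))
    n_pred = sum(len(p) for p in pred_sets)
    n_gold = sum(len(g) for g in gold_sets)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)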

Citation

@misc{qi2024adelie,
      title={ADELIE: Aligning Large Language Models on Information Extraction}, 
      author={Yunjia Qi and Hao Peng and Xiaozhi Wang and Bin Xu and Lei Hou and Juanzi Li},
      year={2024},
      eprint={2405.05008},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
