ELLMER

ELLMER is a hybrid method for explaining LLM predictions for Entity Resolution tasks. This repository contains the code for the paper "Can we trust LLM Self-Explanations for Entity Resolution?".

Installation

To install ELLMER locally, run:

pip install .
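
If you prefer an isolated environment, a typical setup using standard Python tooling (not specific to ELLMER) is:

python -m venv .venv
source .venv/bin/activate
pip install .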

Usage

To replicate the experiments, first download the DeepMatcher datasets to your local disk, then use the Python evaluation script (scripts/eval.py).
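
Each dataset is expected to live in its own directory under the base directory, since datasets are selected by directory name (see below). An illustrative layout (dataset names other than beers are placeholders):

deepmatcher_datasets/
    beers/
    another_dataset/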

You can select the LLM backend via the --model_type option:

  • OpenAI models deployed on Azure: --model_type azure_openai
  • a local Llama2-13B model: --model_type llama2
  • a local Falcon model: --model_type falcon
  • Hugging Face models: --model_type hf --model_name meta-llama/Llama-3.1-8B-Instruct

You can set how many samples the evaluation uses (--samples) and the explanation granularity (--granularity; accepted values are token and attribute).

You can select one or more datasets for the evaluation by passing the names of the corresponding directories inside --base_dir to the --datasets option. For example:

python scripts/eval.py --base_dir path/to/deepmatcher_datasets --model_type azure_openai --datasets beers --samples 5 --granularity token
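
As another illustration composed from the options above (not an additional documented invocation), a run with a local Hugging Face model and attribute-level explanations would look like:

python scripts/eval.py --base_dir path/to/deepmatcher_datasets --model_type hf --model_name meta-llama/Llama-3.1-8B-Instruct --datasets beers --samples 5 --granularity attribute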

Other optional parameters can be specified in the script.

Notebooks
