Tevatron for TextReact

This repository contains the training code for the SMILES-to-text retriever of TextReact. It is forked from Tevatron.

Contact: Yujie Qian yujieq@csail.mit.edu

Setup

Follow the installation instructions below and install this repo as an editable package:

conda install -c pytorch faiss-cpu=1.7.4 mkl=2021 blas=1.0=mkl
pip install --editable .

Data

If you have already downloaded the data for TextReact, create soft links in this repo:

ln -s /path/to/textreact/data data
ln -s /path/to/textreact/data/Tevatron_data preprocessed

Otherwise, (1) download the corpus to data/, and (2) download the preprocessed data to preprocessed/.

wget https://huggingface.co/datasets/yujieq/TextReact/resolve/main/USPTO_rxn_corpus.zip
unzip USPTO_rxn_corpus.zip -d data
wget https://huggingface.co/datasets/yujieq/TextReact/resolve/main/Tevatron_data.zip
unzip Tevatron_data.zip
mv Tevatron_data preprocessed

Scripts

The following scripts can be used to reproduce the TextReact experiments. Each script contains the commands for training, encoding, and retrieval; a sketch of such a pipeline is shown below.
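
As a rough sketch (assuming the upstream Tevatron v1 driver flags; the model name, data paths, and hyperparameters below are placeholders rather than the settings used for TextReact), a train/encode/retrieve pipeline chains three command line drivers:

# Train the retriever (placeholder hyperparameters)
python -m tevatron.driver.train \
  --output_dir output/RCR \
  --model_name_or_path bert-base-uncased \
  --train_dir preprocessed/RCR \
  --fp16 \
  --per_device_train_batch_size 8 \
  --learning_rate 5e-6 \
  --num_train_epochs 3

# Encode the corpus and the test queries (input file names are placeholders)
python -m tevatron.driver.encode \
  --output_dir temp \
  --model_name_or_path output/RCR \
  --fp16 \
  --encode_in_path preprocessed/RCR/corpus.json \
  --encoded_save_path output/RCR/corpus_emb.pkl
python -m tevatron.driver.encode \
  --output_dir temp \
  --model_name_or_path output/RCR \
  --fp16 \
  --encode_in_path preprocessed/RCR/test_query.json \
  --encoded_save_path output/RCR/query_emb.pkl \
  --encode_is_qry

# Retrieve with FAISS over the encoded corpus
python -m tevatron.faiss_retriever \
  --query_reps output/RCR/query_emb.pkl \
  --passage_reps output/RCR/corpus_emb.pkl \
  --depth 100 \
  --batch_size -1 \
  --save_text \
  --save_ranking_to output/RCR/test_rank.json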

To evaluate retriever performance (taking RCR as an example):

python -m tevatron.faiss_retriever.evaluate --corpus data/USPTO_rxn_corpus.csv --file output/RCR/test_rank.json

The following is the original README of Tevatron.

Tevatron

Tevatron is a simple and efficient toolkit for training and running dense retrievers with deep language models. The toolkit has a modularized design for easy research; a set of command line tools are also provided for fast development and testing. A set of easy-to-use interfaces to Huggingface's state-of-the-art pre-trained transformers ensures Tevatron's superior performance.

Tevatron is currently in its initial development stage. We will be actively adding new features, and API changes may happen. Suggestions, feature requests, and PRs are welcome.

Features

  • Command line interface for dense retriever training/encoding and dense index search.
  • Flexible and extendable Pytorch retriever models.
  • Highly efficient Trainer, a subclass of Huggingface Trainer, that natively supports training performance features like mixed precision and distributed data parallel.
  • Fast and memory-efficient train/inference data access based on memory mapping with Apache Arrow through Huggingface datasets.
  • Jax/Flax training/encoding on TPU.

Installation

First install the neural network and similarity search backends, namely Pytorch (or Jax) and FAISS. Check out the official installation guides for Pytorch, Jax/Flax, and FAISS accordingly.
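
For example, a CPU-only research environment might install both backends like this (a minimal sketch; the channel and package choices are common defaults, not requirements of this toolkit, so check the official guides for GPU builds):

pip install torch
conda install -c pytorch faiss-cpu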

Then install Tevatron with pip,

pip install tevatron

Or, typically for development and research, clone this repo and install it as editable,

git clone https://github.com/texttron/tevatron
cd tevatron
pip install --editable .

Note: The current code base has been tested with torch==1.10.1, faiss-cpu==1.7.2, transformers==4.15.0, and datasets==1.17.0.

Optionally, you can also install GradCache to enable our gradient cache feature during training:

git clone https://github.com/luyug/GradCache
cd GradCache
pip install .
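
With GradCache installed, gradient caching can then be enabled through the training driver. A minimal sketch, assuming the v1 driver's flag names; the chunk sizes are illustrative and should be tuned to your GPU memory:

python -m tevatron.driver.train \
  --output_dir model_dir \
  --model_name_or_path bert-base-uncased \
  --grad_cache \
  --gc_q_chunk_size 32 \
  --gc_p_chunk_size 16

Smaller chunk sizes lower peak GPU memory at the cost of extra compute, which makes large effective batch sizes feasible on limited hardware.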

Examples

In the /examples folder, we provide full pipeline instructions for various IR/QA tasks.

Citation

If you find Tevatron helpful, please consider citing our paper.

@article{Gao2022TevatronAE,
  title={Tevatron: An Efficient and Flexible Toolkit for Dense Retrieval},
  author={Luyu Gao and Xueguang Ma and Jimmy J. Lin and Jamie Callan},
  journal={ArXiv},
  year={2022},
  volume={abs/2203.05765}
}

Contacts

If you have a toolkit-specific question, feel free to open an issue.

You can also reach out to us for general comments/suggestions/questions through email.
