This repository contains the accompanying code for the Llama-2 experiments in "Structured information extraction from scientific text with large language models" by Dagdelen & Dunn et al. It includes code for loading, fine-tuning, and running inference with Llama-2 models fine-tuned using the LLM-NERRE method.
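For orientation, the snippet below is a minimal sketch of loading a fine-tuned Llama-2 checkpoint and generating a completion with Hugging Face transformers. The checkpoint path, prompt delimiter, and generation settings are illustrative assumptions, not the exact pipeline used here; see the scripts in this repository for the actual workflow.

```python
"""Illustrative only: load a fine-tuned Llama-2 checkpoint and run greedy
generation. Paths and the prompt/completion delimiter are assumptions."""
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

CHECKPOINT = "path/to/finetuned-llama-2"  # hypothetical local checkpoint directory

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForCausalLM.from_pretrained(
    CHECKPOINT, torch_dtype=torch.float16, device_map="auto"
)

# A scientific abstract followed by an assumed prompt/completion delimiter.
prompt = "Some scientific abstract text...\n\n###\n\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)

# Strip the prompt tokens and print only the generated completion.
completion = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(completion, skip_special_tokens=True))
```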
Note that this release does not include the raw Llama-2 weights due to their size. We have included code to download them from Figshare (https://doi.org/10.6084/m9.figshare.24501331.v1) in the correct format.
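As a rough illustration of what that download involves, the sketch below fetches the files attached to the Figshare article via the public Figshare v2 API, assuming article ID 24501331 (taken from the DOI above). Prefer the download script shipped with this repository in practice.

```python
"""Illustrative sketch: download the weight files attached to the Figshare
article. Assumes the public Figshare v2 API and article ID 24501331."""
from pathlib import Path

import requests

ARTICLE_ID = 24501331  # from https://doi.org/10.6084/m9.figshare.24501331.v1
API_URL = f"https://api.figshare.com/v2/articles/{ARTICLE_ID}"


def download_weights(dest_dir: str = "weights") -> None:
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)

    # List the files attached to the Figshare article.
    article = requests.get(API_URL, timeout=30).json()
    for file_info in article["files"]:
        target = dest / file_info["name"]
        print(f"Downloading {file_info['name']} ...")
        # Stream each file to disk in chunks to avoid loading it into memory.
        with requests.get(file_info["download_url"], stream=True, timeout=30) as r:
            r.raise_for_status()
            with open(target, "wb") as f:
                for chunk in r.iter_content(chunk_size=1 << 20):
                    f.write(chunk)


if __name__ == "__main__":
    download_weights()
```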