pv_evaluation is a Python package built to help advance research on author/inventor name disambiguation systems such as PatentsView. It provides:
- A large set of benchmark datasets for U.S. patents inventor name disambiguation.
- Disambiguation summary statistics, evaluation methodology, and performance estimators through the ER-Evaluation Python package.
See the project website for full documentation. The Examples page provides real-world examples of the use of pv_evaluation submodules.
pv_evaluation has the following submodules:
- `benchmark.data`: Access to evaluation datasets and standardized comparison benchmarks. The following benchmark datasets are available:
    - Academic Life Sciences (ALS) inventors benchmark.
    - Israeli inventors benchmark.
    - Engineering and Sciences (ENS) inventors benchmark.
    - Lai's 2011 inventors benchmark.
    - PatentsView's 2021 inventors benchmark.
    - Binette et al.'s 2022 inventors benchmark.
- `benchmark.report`: Visualization of key monitoring and performance metrics.
- `templates`: Templated performance summary reports.
Install the released version of pv_evaluation using:

```shell
pip install pv-evaluation
```
Rendering reports requires installing Quarto from quarto.org.
Note: Working with the full patent data requires large amounts of memory (we suggest having 64GB RAM available).
See the examples page for complete reproducible examples. The examples below only provide a quick overview of pv_evaluation's functionality.
Generate an HTML report summarizing properties of the current disambiguation algorithm (see this example):

```python
from pv_evaluation.templates import render_inventor_disambiguation_report

render_inventor_disambiguation_report(
    ".",
    disambiguation_files=["disambiguation_20211230.tsv", "disambiguation_20220630.tsv"],
    inventor_not_disambiguated_file="g_inventor_not_disambiguated.tsv",
)
```
Access PatentsView-Evaluation's large collection of benchmark datasets:
```python
from pv_evaluation.benchmark import *

load_lai_2011_inventors_benchmark()
load_israeli_inventors_benchmark()
load_patentsview_inventors_benchmark()
load_als_inventors_benchmark()
load_ens_inventors_benchmark()
load_binette_2022_inventors_benchmark()
load_air_umass_assignees_benchmark()
load_nber_subset_assignees_benchmark()
```
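Benchmarks are represented as membership vectors: pandas Series whose index holds mention IDs and whose values are cluster assignments (the format used by ER-Evaluation). As a sketch of what you can do with one — using a toy Series with made-up IDs in place of a real benchmark:

```python
import pandas as pd

# Toy membership vector standing in for a real benchmark: the index holds
# inventor mention IDs (made up here) and values are cluster assignments.
membership = pd.Series(
    ["c1", "c1", "c2", "c3", "c3", "c3"],
    index=["pat1-0", "pat2-0", "pat2-1", "pat3-0", "pat4-0", "pat5-0"],
)

n_mentions = len(membership)        # 6 inventor mentions
n_inventors = membership.nunique()  # 3 distinct inventors (clusters)
cluster_sizes = membership.value_counts()

print(n_mentions, n_inventors)       # 6 3
print(cluster_sizes.to_dict())       # {'c3': 3, 'c1': 2, 'c2': 1}
```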
See this example of how representative performance estimates are obtained from Binette et al.'s 2022 benchmark dataset.
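At the heart of this methodology is the comparison of co-clustered pairs between a predicted disambiguation and a benchmark. The toy computation below illustrates pairwise precision and recall in plain Python — it is not the package's API, and the actual estimators additionally account for how the benchmark was sampled:

```python
from itertools import combinations

def matching_pairs(clustering):
    """All unordered pairs of mention IDs placed in the same cluster."""
    clusters = {}
    for mention, cluster in clustering.items():
        clusters.setdefault(cluster, []).append(mention)
    return {frozenset(pair)
            for members in clusters.values()
            for pair in combinations(members, 2)}

# Hypothetical predicted disambiguation vs. benchmark truth
# (mention IDs m1..m4 are made up for illustration):
predicted = {"m1": "A", "m2": "A", "m3": "B", "m4": "A"}
truth     = {"m1": "A", "m2": "A", "m3": "B", "m4": "B"}

pred_pairs = matching_pairs(predicted)  # {m1,m2}, {m1,m4}, {m2,m4}
true_pairs = matching_pairs(truth)      # {m1,m2}, {m3,m4}

precision = len(pred_pairs & true_pairs) / len(pred_pairs)
recall = len(pred_pairs & true_pairs) / len(true_pairs)
print(precision, recall)  # precision = 1/3, recall = 1/2
```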
- Binette, Olivier, Sarvo Madhavan, Jack Butler, Beth Anne Card, Emily Melluso, and Christina Jones. (2023). PatentsView-Evaluation: Evaluation Datasets and Tools to Advance Research on Inventor Name Disambiguation. arXiv e-prints: arxiv:2301.03591
- Binette, Olivier, Sokhna A York, Emma Hickerson, Youngsoo Baek, Sarvo Madhavan, and Christina Jones. (2022). Estimating the Performance of Entity Resolution Algorithms: Lessons Learned Through PatentsView.org. arXiv e-prints: arxiv:2210.01230
Look through the GitHub issues for bugs and feature requests. To contribute to this package:
- Fork this repository
- Make your changes and update CHANGELOG.md
- Submit a pull request
- For maintainers: if needed, update the "release" branch and create a release.
A conda environment is provided for development convenience. To create or update this environment, make sure you have conda installed and then run `make env`. You can then activate the development environment using `conda activate pv-evaluation`.
The makefile provides other development utilities, such as `make black` to format Python files, `make data` to re-generate benchmark datasets from raw data located on AWS S3, and `make docs` to generate the documentation website.
Raw public data is located on PatentsView's AWS S3 server at https://s3.amazonaws.com/data.patentsview.org/PatentsView-Evaluation/data-raw.zip. This zip file should be updated as needed to reflect datasets provided by this package and to ensure that original data sources are preserved without modification.
The minimal testing requirement for this package is a check that all code executes without error. We recommend placing execution checks in a runnable notebook and using the testbook package for execution within unit tests. User examples should also be provided to exemplify usage on real data.
Report bugs and submit feedback at https://github.com/PatentsView/PatentsView-Evaluation/issues.