Project Page | Paper | Supplementary material | Extended abstract | 2min talk | 4min talk | 8min talk
This repo provides the code for HIVE, a human evaluation framework for computer vision interpretability methods.
If you find our work useful, please cite our paper:

```
@inproceedings{Kim2022HIVE,
    author = {Sunnie S. Y. Kim and Nicole Meister and Vikram V. Ramaswamy and Ruth Fong and Olga Russakovsky},
    title = {{HIVE}: Evaluating the Human Interpretability of Visual Explanations},
    booktitle = {European Conference on Computer Vision (ECCV)},
    year = {2022}
}
```
The `hit_templates/` folder contains the HTML templates for our study UIs:
- `combined_gradcam_nolabels.html`
- `combined_bagnet_nolabels.html`
- `combined_protopnet_distinction.html`
- `combined_prototree_distinction.html`
- `combined_protopnet_agreement.html`
- `combined_prototree_agreement.html`
- `combined_gradcam_labels.html`
- `combined_bagnet_labels.html`
- `combined_prototree_agreement_tree.html`
We ran our studies as Human Intelligence Tasks (HITs) deployed on Amazon Mechanical Turk (AMT), using simple-amt, a microframework for working with AMT. The templates above implement our study UIs; below we provide brief instructions for launching a study, monitoring its progress, and retrieving the results. Please check out the original simple-amt repository for more information on how to run a HIT on AMT.
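In simple-amt, the `--input_json_file` passed to `launch_hits.py` (see the launch command below) contains one JSON object per line, one per HIT, whose fields are made available to the HTML template. Here is a minimal sketch of how such a file could be generated; the field names are hypothetical placeholders, not the actual keys our templates read:

```python
import json

# Hypothetical sketch: write the per-HIT input file consumed by
# launch_hits.py. Each line is a standalone JSON object; "image_url"
# is a placeholder field name, not a key our templates necessarily use.
examples = [
    {"image_url": "https://example.com/img_0001.png"},
    {"image_url": "https://example.com/img_0002.png"},
]

with open("examples/input_prototree_distinction.txt", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```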
To launch HITs (the `--prod` flag runs against the production AMT site; omit it to test in the AMT sandbox first):

```
python launch_hits.py \
  --html_template=hit_templates/combined_prototree_distinction.html \
  --hit_properties_file=hit_properties/properties.json \
  --input_json_file=examples/input_prototree_distinction.txt \
  --hit_ids_file=examples/hit_ids_prototree_distinction.txt --prod
```
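The `--hit_properties_file` holds HIT metadata such as the title and reward. Below is a hypothetical sketch of generating one; the exact schema is defined by simple-amt's `launch_hits.py`, so treat every key here as an assumption and check it against that script:

```python
import json

# Hypothetical sketch of a HIT properties file. All keys below are
# illustrative assumptions, not a verified simple-amt schema.
properties = {
    "title": "Evaluate model explanations",          # shown to workers on AMT
    "description": "Answer questions about images",  # hypothetical
    "reward": 0.50,                                   # assumed key: USD per assignment
    "max_assignments": 1,                             # assumed key: workers per HIT
}

with open("hit_properties/properties.json", "w") as f:
    json.dump(properties, f, indent=2)
```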
To check the progress of the launched HITs:

```
python show_hit_progress.py \
  --hit_ids_file=examples/hit_ids_prototree_distinction.txt --prod
```
To retrieve the results of completed HITs (`get_results.py` prints results to stdout, so we redirect them to a file):

```
python get_results.py \
  --hit_ids_file=examples/hit_ids_prototree_distinction.txt --prod \
  > examples/results_prototree_distinction.txt
```
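Each line of the resulting file is a standalone JSON object for one completed assignment. A minimal sketch for loading the results follows; the keys accessed below (`worker_id`, `output`) follow simple-amt's result format as we understand it, so verify them against your own results file:

```python
import json

# Minimal sketch: load results saved by get_results.py. Each line is one
# assignment; "worker_id" and "output" are assumed key names, verify
# against an actual results file.
results = []
with open("examples/results_prototree_distinction.txt") as f:
    for line in f:
        line = line.strip()
        if line:
            results.append(json.loads(line))

print(f"Loaded {len(results)} assignments")
for r in results:
    print(r.get("worker_id"), r.get("output"))
```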
Finally, to approve completed HITs and pay the workers:

```
python approve_hits.py \
  --hit_ids_file=examples/hit_ids_prototree_distinction.txt --prod
```