This repository contains the source code for training and evaluating the models from the master's thesis *Analysing and overcoming the dataset bias for optical flow backbone networks*.
This code has been developed under Anaconda (Python 3.6), PyTorch 1.5, and CUDA > 10.2 on Ubuntu 18.04.
- Install PyTorch and tqdm (`conda install -c conda-forge tqdm==4.43.0`).
- Install the correlation package:
  - Depending on your system, configure `-gencode`, `-ccbin`, and `cuda-path` in `models/correlation_package/setup.py` accordingly.
  - Then, install the correlation package: `./install.sh`
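As an orientation, the `nvcc` flags to adjust in `models/correlation_package/setup.py` typically look like the sketch below. The exact file contents differ per repository version, and the compute capabilities and compiler path shown here are examples only — pick the values matching your GPU and CUDA toolchain.

```python
# Hypothetical excerpt of models/correlation_package/setup.py.
# Adjust the nvcc flags to match your GPU and CUDA installation.
from setuptools import setup
from torch.utils.cpp_extension import CUDAExtension, BuildExtension

cxx_args = ['-std=c++14']
nvcc_args = [
    '-gencode', 'arch=compute_61,code=sm_61',  # e.g. GTX 10xx series
    '-gencode', 'arch=compute_70,code=sm_70',  # e.g. V100
    '-ccbin', '/usr/bin/gcc-7',                # host compiler compatible with your CUDA version
]

setup(
    name='correlation_cuda',
    ext_modules=[
        CUDAExtension(
            'correlation_cuda',
            ['correlation_cuda.cc', 'correlation_cuda_kernel.cu'],
            extra_compile_args={'cxx': cxx_args, 'nvcc': nvcc_args},
        )
    ],
    cmdclass={'build_ext': BuildExtension},
)
```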
The following datasets are used for this project:
- Configure paths:
  - Copy `config_sample.py` to `config.py` (`config.py` is ignored by git).
  - Adjust the paths for datasets and temp folders to match your setup.
  - Adjust the output paths for the evaluation scripts (`evaluations` folder).
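A filled-in `config.py` might look roughly like the sketch below. The variable names and dataset keys here are illustrative assumptions (only `dataset_locations` is referenced elsewhere in this README); consult `config_sample.py` for the actual keys.

```python
# Hypothetical sketch of a filled-in config.py; the actual variable
# names are defined in config_sample.py -- copy that file and adjust.
temp_directory = "/tmp/flowbias"  # illustrative temp folder

# maps dataset names to their locations in the local file system
# (dataset keys below are examples, not the repository's actual keys)
dataset_locations = {
    "flyingChairs": "/data/datasets/FlyingChairs_release/data",
    "sintel": "/data/datasets/MPI-Sintel",
    "kitti": "/data/datasets/KITTI_flow",
}
```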
The `scripts` folder contains the training scripts for the experiments demonstrated in the thesis. To train a model, simply run the corresponding script file, e.g., `./pwcnet_no_experts.sh`.

In the script files, please configure your own experiment directory (`EXPERIMENTS_HOME`) and the dataset directories (e.g., `FLYINGCHAIRS_HOME`). Valid values are paths in the local file system or a dataset specified in `dataset_locations` in `config.py`.
The basic error metrics for every trained model are stored separately. For further analysis, all evaluation results are collected into a single file, `eval_summary.csv`. This step also computes missing values and derived metrics.

To add a new model to the analysis, follow the steps below:
- Add a new line to `evaluations/eval_models.sh`, pointing to the new model(s).
- Run `eval_models.sh` (~30 min for evaluating a PWC model on all four datasets).
- Open `model_meta.py`:
  - Add a new line to `model_meta`:
    - Set the key name to the name chosen in `eval_models.sh`.
    - Fill in the model parameters; the fields correspond to the entries of the `model_meta_fields` array.
  - Add the newly inserted key to `model_meta_ordering`; this is used for ordering the models when creating `eval_summary.csv`.
- Run `evaluations/collect_model_results.py` to update the `eval_summary.csv` file.
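The `model_meta.py` steps above can be sketched as follows. The field names and entry values are hypothetical — the actual fields are defined by the `model_meta_fields` array in the repository.

```python
# Hypothetical sketch of registering a new model in model_meta.py.
# The real field names are given by the model_meta_fields array.
model_meta_fields = ["model", "folder_name", "training_dataset"]  # illustrative

model_meta = {
    # key name must match the name chosen in evaluations/eval_models.sh
    "pwc_chairs": ["PWCNet", "pwcnet_no_experts", "flyingChairs"],
}

# controls the row order of the models when creating eval_summary.csv
model_meta_ordering = ["pwc_chairs"]
```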
Note: By default, `evaluate_for_all_datasets.py` will not re-evaluate a dataset split if the model meta file already contains a result for it. This prevents computing the same results over and over again. If you have changed your evaluation method, use the `reevaluate` variable to force a re-evaluation of the affected dataset split(s).
Author: Moritz Willig (https://moritz-willig.de)
Base repository: https://github.com/MoritzWillig/flowbias
The repository is based on *Iterative Residual Refinement for Joint Optical Flow and Occlusion Estimation* by Junhwa Hur. Portions of the source code (e.g., training pipeline, runtime, argument parser, and logger) are from Jochen Gast.