PyTorch implementation of Latent Discriminant deterministic Uncertainty (LDU), ECCV 2022.
Paper
In this work we advance a scalable and effective Deterministic Uncertainty Method (DUM) that relaxes the Lipschitz constraint which typically hinders the practicality of such architectures. We learn a discriminant latent space by leveraging a distinction maximization (DM) layer over an arbitrarily sized set of trainable prototypes.
Overview of LDU: the DNN learns a discriminative latent space thanks to learnable prototypes. The DNN backbone computes a feature vector z for an input x, and the DM layer matches it against the prototypes. The computed similarities, which reflect the position of z in the learned feature space, are then processed by the classification layer and the uncertainty estimation layer. The dashed arrows point to the loss functions optimized during training.
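The architecture described above can be sketched as follows. This is a minimal illustration, not the repository's implementation: the similarity metric (negative Euclidean distance here), the head shapes, and the names `DMLayer` and `LDUHead` are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn


class DMLayer(nn.Module):
    """Sketch of a distinction maximization (DM) layer: the feature
    vector z from the backbone is matched against a set of learnable
    prototypes. Negative Euclidean distance is used as the similarity
    here; the paper's exact metric may differ."""

    def __init__(self, feat_dim: int, num_prototypes: int):
        super().__init__()
        # Trainable prototypes living in the backbone's feature space.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, feat_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, feat_dim) -> similarities: (batch, num_prototypes)
        return -torch.cdist(z, self.prototypes)


class LDUHead(nn.Module):
    """Hypothetical wiring of the DM similarities into the two heads
    named in the overview: classification and uncertainty estimation."""

    def __init__(self, feat_dim: int, num_prototypes: int, num_classes: int):
        super().__init__()
        self.dm = DMLayer(feat_dim, num_prototypes)
        self.classifier = nn.Linear(num_prototypes, num_classes)
        self.uncertainty = nn.Linear(num_prototypes, 1)

    def forward(self, z: torch.Tensor):
        sims = self.dm(z)
        # Class logits and a scalar uncertainty score per sample.
        return self.classifier(sims), torch.sigmoid(self.uncertainty(sims))


# Example: a batch of 4 feature vectors of dimension 16, 8 prototypes, 3 classes.
head = LDUHead(feat_dim=16, num_prototypes=8, num_classes=3)
logits, unc = head(torch.randn(4, 16))
```

In training, the dashed-arrow losses from the figure would be attached to `logits` and `unc`; this sketch only shows the forward pass.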
For more details, please refer to our paper.
We currently provide code only for the toy example, classification, and monocular depth estimation.
The semantic segmentation part will be released in the near future.
We provide a toy example illustrating LDU on the two-moons dataset.
In the folder monocular_depth_estimation/, we provide the code and instructions for applying LDU to the monocular depth estimation task. Detailed information can be found in monocular_depth_estimation/README.md.
- Add classification code
If you find this work useful for your research, please consider citing our paper:
@article{franchi2022latent,
  title={Latent Discriminant deterministic Uncertainty},
  author={Franchi, Gianni and Yu, Xuanlong and Bursuc, Andrei and Aldea, Emanuel and Dubuisson, Severine and Filliat, David},
  journal={arXiv preprint arXiv:2207.10130},
  year={2022}
}