# Adverasial-Model-Confidence

Mentor: Dr. Soumya Dutta

## Objective

Visualizing the impact of uncertainty and adversarial attacks on deep classifier models.

## Description

This project addresses the quality, confidence, and robustness of predictions made by deep classifier models built on convolutional neural networks. It takes a visual analytics approach so that users can understand how uncertainty estimation techniques and adversarial attacks affect model performance. The result is the Model Visualizer website, which shows how a classifier behaves under different circumstances, such as uncertainty and adversarial attack. By exploring factors such as prediction confidence and accuracy, the tool visually compares the behavior of a model under adversarial attack with that of a benign model.
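The repository's own attack and visualization code is not reproduced here, but the comparison the website surfaces can be illustrated with a minimal sketch. Assuming a PyTorch CNN classifier (`model`, inputs `x`, labels `y` are placeholders), the snippet below crafts FGSM adversarial examples and contrasts the top-class softmax confidence on benign versus perturbed inputs; the actual project may use different attacks or uncertainty estimators.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb x along the sign of the loss gradient (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the gradient-sign direction and keep pixels in a valid range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def prediction_confidence(model, x):
    """Return the top-class softmax probability for each input."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    return probs.max(dim=1).values

# Hypothetical usage: compare confidence on benign vs. adversarial inputs.
# conf_benign = prediction_confidence(model, x)
# conf_adv    = prediction_confidence(model, fgsm_attack(model, x, y))
```

A drop in top-class confidence (or a flip in the predicted label) on the perturbed inputs is the kind of behavior the visualizer is meant to expose.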


This repository contains code/files for:


## Contributors