
DeepQuant

Towards the Quantification of Safety Risks in Deep Neural Networks

Datasets

(1) ACSC-Xu

(2) MNIST

(3) CIFAR-10

(4) ImageNet

Software

1: MATLAB R2018b

2: Neural Network Toolbox

3: Image Processing Toolbox

4: Parallel Computing Toolbox

Run

Folder "ACSC_XU_NN_Robustness" contains two robustness-metric methods for the ACSC_XU dataset.

Folder "Decision Reachability" contains ReLU and Tanh neural networks.

Folder "MNIST_NN_Uncertainty" contains the uncertainty examples for MNIST.
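The robustness metrics above quantify how far an input can be perturbed before a network's decision changes. As a minimal illustrative sketch of that idea (in Python rather than MATLAB, and not the repository's algorithm), the exact L-infinity robustness radius of a simple linear classifier has a closed form:

```python
import numpy as np

def linf_robust_radius(w, b, x):
    """Exact L-infinity robustness radius of the linear classifier
    sign(w.x + b) at input x.

    No perturbation delta with max|delta_i| < radius can flip the
    prediction, since the worst case shifts the score by
    radius * ||w||_1. (Illustrative only; deep networks need the
    reachability/uncertainty analyses implemented in this repo.)
    """
    w = np.asarray(w, dtype=float)
    x = np.asarray(x, dtype=float)
    return abs(float(w @ x + b)) / np.abs(w).sum()

# Toy example: score = 1*1 + (-2)*0.25 + 0.5 = 1.0, ||w||_1 = 3
r = linf_robust_radius([1.0, -2.0], 0.5, [1.0, 0.25])
print(r)  # 1/3
```

A larger radius means the decision is more robust at that input; the folder's methods extend this notion to nonlinear (ReLU/Tanh) networks.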

Sample Results


About

Quantifying the Robustness of Deep Neural Networks - Complex & Intelligent Systems
