Stars
A repo for counting stars and contributing. Press F to pay respects to glorious developers.
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
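As a quick taste of the library, here is a minimal evasion sketch with ART's PyTorch wrapper; the tiny MLP and random data are placeholders standing in for a real model and dataset.

```python
import numpy as np
import torch
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Placeholder model and data; substitute your own trained network.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
)

x_test = np.random.rand(8, 1, 28, 28).astype(np.float32)
y_test = np.random.randint(0, 10, size=8)

# Craft adversarial examples with the Fast Gradient Method.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)

preds = np.argmax(classifier.predict(x_adv), axis=1)
print("accuracy on adversarial examples:", np.mean(preds == y_test))
```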
Black-box attacks for deep neural network models
Keras code and weights files for popular deep learning models.
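The same architectures are also available through keras.applications; a minimal sketch of loading pretrained ImageNet weights there, with a random array standing in for a real image:

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions,
)

# Load ResNet50 with pretrained ImageNet weights.
model = ResNet50(weights="imagenet")

# Placeholder input batch; replace with a real 224x224 RGB image.
x = np.random.rand(1, 224, 224, 3) * 255.0
preds = model.predict(preprocess_input(x))
print(decode_predictions(preds, top=3)[0])
```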
😎 A curated list of awesome JupyterLab extension projects. 🌠 Detailed introductions with images.
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
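A minimal usage sketch against a PyTorch model, assuming the Foolbox 3.x API; the tiny network and random batch are placeholders:

```python
import torch
import foolbox as fb

# Placeholder network and data; substitute a real trained model.
model = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10)
).eval()
fmodel = fb.PyTorchModel(model, bounds=(0, 1))

images = torch.rand(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))

# Run an L-infinity PGD attack at a fixed perturbation budget.
attack = fb.attacks.LinfPGD()
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)
print("attack success rate:", is_adv.float().mean().item())
```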
Python implementations of the programming exercises from Andrew Ng's Coursera machine learning course
Implementation of the Boundary Attack algorithm as described in Brendel, Wieland, Jonas Rauber, and Matthias Bethge. "Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models."
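For intuition (this is not the repo's code), the core loop alternates a random step along the decision boundary with a small step toward the original input, keeping a candidate only if it stays adversarial; `is_adversarial` below is a hypothetical label-only oracle:

```python
import numpy as np

def boundary_attack_sketch(x_orig, x_adv, is_adversarial,
                           steps=1000, delta=0.1, eps=0.01):
    """Simplified Boundary Attack loop. `is_adversarial(x)` is a
    hypothetical oracle returning True if x is still misclassified."""
    for _ in range(steps):
        direction = x_orig - x_adv
        # Orthogonal step: random noise projected onto the plane
        # perpendicular to the direction back to the original.
        noise = np.random.randn(*x_adv.shape)
        noise -= (noise.ravel().dot(direction.ravel())
                  / (np.linalg.norm(direction) ** 2 + 1e-12)) * direction
        candidate = x_adv + delta * np.linalg.norm(direction) * (
            noise / (np.linalg.norm(noise) + 1e-12))
        # Small step toward the original to shrink the perturbation.
        candidate = candidate + eps * (x_orig - candidate)
        candidate = np.clip(candidate, 0.0, 1.0)
        if is_adversarial(candidate):
            x_adv = candidate
    return x_adv
```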
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks
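The heart of ZOO is estimating gradients from the model's confidence scores alone, via symmetric finite differences over sampled coordinates; a NumPy sketch of that estimator, with `f` a hypothetical black-box loss function:

```python
import numpy as np

def zoo_gradient_estimate(f, x, h=1e-4, n_coords=128):
    """Zeroth-order gradient estimate of a black-box loss f at x.
    Samples n_coords random coordinates and applies the symmetric
    difference  g_i ≈ (f(x + h·e_i) − f(x − h·e_i)) / (2h)."""
    flat = x.ravel().astype(np.float64)
    grad = np.zeros_like(flat)
    idx = np.random.choice(flat.size, size=min(n_coords, flat.size),
                           replace=False)
    for i in idx:
        e = np.zeros_like(flat)
        e[i] = h
        grad[i] = (f((flat + e).reshape(x.shape))
                   - f((flat - e).reshape(x.shape))) / (2 * h)
    return grad.reshape(x.shape)
```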
Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017]
CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness) is a robustness metric for deep neural networks
Code for reproducing the robustness evaluation scores in “Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach,” ICLR 2018
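Roughly, CLEVER lower-bounds the distortion needed to change a prediction by the classification margin g(x) divided by a local Lipschitz constant estimated with extreme value theory; the sketch below swaps the reverse-Weibull fit for a plain max over sampled gradient norms, so it only illustrates the idea, and both callbacks are hypothetical:

```python
import numpy as np

def clever_style_bound(margin_fn, grad_norm_fn, x, radius=0.5, n_samples=512):
    """Simplified CLEVER-style robustness lower bound.
    margin_fn(x): hypothetical margin g(x) = f_true(x) - max_other f_j(x).
    grad_norm_fn(x): hypothetical norm of the gradient of g at x.
    Full CLEVER fits a reverse Weibull distribution to batch maxima of
    gradient norms; here we use the raw sample maximum instead."""
    samples = x + np.random.uniform(-radius, radius,
                                    size=(n_samples,) + x.shape)
    lipschitz_est = max(grad_norm_fn(s) for s in samples)
    return min(margin_fn(x) / (lipschitz_est + 1e-12), radius)
```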
Collections of papers, code, interpretations, and livestreams for cvpr2024/cvpr2023/cvpr2022/cvpr2021/cvpr2020/cvpr2019/cvpr2018/cvpr2017, curated by the 极市 (ExtremeMart) team
Code to reproduce experiments from "A Statistical Approach to Assessing Neural Network Robustness"