jiang4355 / Starred · GitHub

SurFree: a fast surrogate-free black-box attack

Python · 43 stars · 12 forks · Updated Jun 27, 2024

Repo for counting stars and contributing. Press F to pay respect to glorious developers.

271,817 stars · 21,098 forks · Updated Oct 3, 2024

Python · 85 stars · 20 forks · Updated Feb 6, 2021

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Python · 5,253 stars · 1,213 forks · Updated May 12, 2025

Blackbox attacks for deep neural network models

Jupyter Notebook · 70 stars · 19 forks · Updated Aug 2, 2018

Keras code and weights files for popular deep learning models.

Python · 7,334 stars · 2,451 forks · Updated Oct 1, 2020

😎 A curated list of awesome Jupyterlab extension projects. 🌠 Detailed introduction with images.

265 stars · 27 forks · Updated Nov 10, 2022

A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX

Python · 2,859 stars · 432 forks · Updated Apr 3, 2024

Python implementations of the exercises from Andrew Ng's machine learning course on Coursera

HTML · 128 stars · 35 forks · Updated Jun 23, 2019

Implementation of the Boundary Attack algorithm as described in Brendel, Wieland, Jonas Rauber, and Matthias Bethge. "Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine …

Python · 96 stars · 21 forks · Updated Dec 12, 2020
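The Boundary Attack above needs only the model's final decision, not gradients or scores. A minimal sketch of the idea, using a hypothetical `is_adversarial` oracle (here a toy linear decision rule standing in for a real black-box classifier): start from an adversarial point, propose small random steps contracted toward the original input, and keep only proposals that remain adversarial.

```python
import numpy as np

def is_adversarial(x):
    # Hypothetical black-box decision oracle: True when the model's
    # predicted label differs from the original. A toy linear rule
    # stands in for a real classifier here.
    return x.sum() > 1.0

def boundary_attack(x_orig, x_adv, steps=200, step_size=0.1, seed=0):
    """Sketch of a decision-based boundary walk: random perturbation,
    then contraction toward the original input, accepting a move only
    if the candidate stays adversarial."""
    rng = np.random.default_rng(seed)
    x = x_adv.copy()
    for _ in range(steps):
        # Random step proposal around the current point.
        candidate = x + step_size * rng.normal(size=x.shape)
        # Contract 10% of the way toward the original input.
        candidate = candidate + 0.1 * (x_orig - candidate)
        if is_adversarial(candidate):
            x = candidate
    return x
```

Because only adversarial candidates are accepted, the walk stays on the adversarial side of the decision boundary while its distance to the original input shrinks.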

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks

Python · 169 stars · 47 forks · Updated Aug 3, 2021
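The core trick behind ZOO is estimating gradients from loss-value queries alone, via coordinate-wise symmetric finite differences. A minimal sketch (the function name and loop structure are illustrative, not the repo's API):

```python
import numpy as np

def zoo_gradient_estimate(f, x, h=1e-4):
    """Zeroth-order gradient estimate: for each coordinate i,
    df/dx_i ≈ (f(x + h*e_i) - f(x - h*e_i)) / (2h), using only
    black-box loss evaluations, no backpropagation."""
    grad = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e.flat[i] = h
        grad.flat[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grad
```

Each coordinate costs two model queries, which is why ZOO pairs this estimator with coordinate selection and dimensionality-reduction tricks in practice.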

Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017]

Python · 18 stars · 3 forks · Updated Apr 8, 2018

CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness) is a robustness metric for deep neural networks

Python · 61 stars · 20 forks · Updated Aug 3, 2021
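CLEVER's sampling step amounts to estimating a local Lipschitz constant from gradient norms at random points near the input. A rough sketch under that assumption (CLEVER proper fits a reverse Weibull distribution to batch maxima; this sketch stops at the empirical maximum, and `grad_fn` is a hypothetical gradient oracle):

```python
import numpy as np

def lipschitz_estimate(grad_fn, x0, radius=0.5, n_samples=500, seed=0):
    """Sample points uniformly within a ball of the given radius around
    x0, record the gradient norm at each, and return the maximum as a
    plug-in estimate of the local Lipschitz constant."""
    rng = np.random.default_rng(seed)
    norms = []
    for _ in range(n_samples):
        d = rng.normal(size=x0.shape)
        d = radius * rng.uniform() * d / np.linalg.norm(d)  # point in ball
        norms.append(np.linalg.norm(grad_fn(x0 + d)))
    return max(norms)
```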

Code for reproducing the robustness evaluation scores in "Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach," ICLR 2018

Python · 52 stars · 18 forks · Updated Sep 18, 2018

A collection of papers, code, interpretations, and livestreams for CVPR 2017–2024, curated by the Jishi (极市) team

12,493 stars · 2,286 forks · Updated Apr 25, 2024

Code to reproduce experiments from "A Statistical Approach to Assessing Neural Network Robustness"

Python · 12 stars · 9 forks · Updated Feb 11, 2019