DOI: 10.1145/3474370.3485662
keynote

Moving Target Defense against Adversarial Machine Learning

Published: 15 November 2021

Abstract

As Machine Learning (ML) models are deployed across a growing range of applications and fields, the threat of adversarial attacks against them is also increasing. Adversarial samples crafted via specialized attack algorithms have been shown to significantly degrade the performance of ML models. Furthermore, adversarial samples generated for a particular model can transfer to other models, decreasing accuracy and other performance metrics for models they were not originally crafted against. Recent research has proposed many defense approaches for making ML models robust, ranging from adversarial input re-training to defensive distillation, among others. While these approaches operate at the model level, we propose an alternative approach to defending ML models against adversarial attacks, using Moving Target Defense (MTD). We formulate the problem and provide preliminary results to showcase the validity of the proposed approach.
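The core MTD idea the abstract describes can be sketched as follows: rather than hardening a single model, keep a pool of independently trained models and randomly switch which one answers each query, so that adversarial examples crafted against any one model transfer less reliably to the model actually serving a given input. This is a minimal illustrative sketch, not the authors' implementation; the class name, the toy constant "models", and the uniform switching distribution are all hypothetical.

```python
import random

# Illustrative sketch of a moving-target classifier: a pool of models
# with a (possibly weighted) random choice of which model serves each
# query. The paper's framing suggests the switching strategy itself can
# be optimized (its author tags mention reinforcement learning).
class MovingTargetClassifier:
    def __init__(self, models, weights=None):
        self.models = models
        # Default to uniform switching over the pool.
        self.weights = weights or [1.0 / len(models)] * len(models)

    def predict(self, x):
        # Sample a model per query, so an attacker cannot know which
        # model their adversarial input will actually face.
        model = random.choices(self.models, weights=self.weights, k=1)[0]
        return model(x)

# Toy stand-in "models": constant classifiers over labels 0 and 1.
pool = [lambda x: 0, lambda x: 1]
mtd = MovingTargetClassifier(pool)
labels = {mtd.predict(None) for _ in range(100)}
print(labels)  # with overwhelming probability both models get sampled
```

Because each prediction is routed to a randomly chosen pool member, an adversarial sample optimized against one member only succeeds when its perturbation happens to transfer to whichever model is sampled at query time.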

Supplementary Material

MP4 File (MTD21-mtd02.mp4)
This video explains the motivation behind the paper and gives brief background on the techniques and tools used in the research. The main part of the video explains the proposed approach and its implementation in detail. Preliminary results are then presented and discussed. Finally, the video outlines future work.


Cited By

  • (2023) Adversarial Machine Learning Attacks against Intrusion Detection Systems: A Survey on Strategies and Defense. Future Internet 15(2), 62. DOI: 10.3390/fi15020062. Online publication date: 31-Jan-2023
  • (2023) A Survey on Moving Target Defense: Intelligently Affordable, Optimized and Self-Adaptive. Applied Sciences 13(9), 5367. DOI: 10.3390/app13095367. Online publication date: 25-Apr-2023


Published In

MTD '21: Proceedings of the 8th ACM Workshop on Moving Target Defense
November 2021
48 pages
ISBN: 9781450386586
DOI: 10.1145/3474370
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. adversarial attacks
  2. adversarial machine learning
  3. moving target defense
  4. reinforcement learning

Qualifiers

  • Keynote

Conference

CCS '21

Acceptance Rates

Overall Acceptance Rate: 40 of 92 submissions, 43%


Article Metrics

  • Downloads (Last 12 months): 55
  • Downloads (Last 6 weeks): 6
Reflects downloads up to 03 Jan 2025

