DOI: 10.1145/3472538.3472540
research-article
Public Access

Dealing with Adversarial Player Strategies in the Neural Network Game iNNk through Ensemble Learning

Published: 21 October 2021

Abstract

Applying neural network (NN) methods in games can lead to various new and exciting game dynamics not previously possible. However, they also lead to new challenges such as the lack of large, clean datasets, varying player skill levels, and changing gameplay strategies. In this paper, we focus on the adversarial player strategy aspect in the game iNNk, in which players try to communicate secret code words through drawings with the goal of not being deciphered by a NN. Some strategies exploit weaknesses in the NN that consistently trick it into making incorrect classifications, leading to unbalanced gameplay. We present a method that combines transfer learning and ensemble methods to obtain a data-efficient adaptation to these strategies. This combination significantly outperforms the baseline NN across all adversarial player strategies despite only being trained on a limited set of adversarial examples. We expect the methods developed in this paper to be useful for the rapidly growing field of NN-based games, which will require new approaches to deal with unforeseen player creativity.
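The method described in the abstract combines transfer learning (adapting a pretrained classifier on a small set of adversarial drawings) with ensemble learning (combining several such classifiers). The following is a minimal, illustrative sketch of the ensemble-averaging idea only; it is not the paper's actual implementation, and the probability matrices stand in for the outputs of hypothetical fine-tuned models.

```python
import numpy as np

def ensemble_predict(prob_matrices):
    """Average per-model class probabilities and return the argmax class.

    prob_matrices: list of (n_samples, n_classes) arrays, one per
    (hypothetical) fine-tuned model in the ensemble.
    """
    # Stack to shape (n_models, n_samples, n_classes), then average
    # over the model axis to get one probability matrix.
    avg = np.mean(np.stack(prob_matrices, axis=0), axis=0)
    return np.argmax(avg, axis=1)

# Toy example: the two models disagree on sample 0; averaging their
# probabilities resolves the disagreement toward the more confident model.
m1 = np.array([[0.6, 0.4], [0.2, 0.8]])
m2 = np.array([[0.3, 0.7], [0.1, 0.9]])
print(ensemble_predict([m1, m2]))  # -> [1 1]
```

Averaging soft probabilities (rather than taking a majority vote over hard labels) is one common way to combine ensemble members; the paper's exact combination rule may differ.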


Cited By

  • (2022) "I Want To See How Smart This AI Really Is": Player Mental Model Development of an Adversarial AI Player. Proceedings of the ACM on Human-Computer Interaction 6, CHI PLAY, 1–26. https://doi.org/10.1145/3549482. Online publication date: 31-Oct-2022.
  • (2022) STEWART. Computers in Industry 140, C. https://doi.org/10.1016/j.compind.2022.103660. Online publication date: 1-Sep-2022.


    Information & Contributors

    Information

    Published In

    FDG '21: Proceedings of the 16th International Conference on the Foundations of Digital Games
    August 2021
    534 pages
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. adversarial attacks
    2. ensemble methods
    3. games
    4. neural networks
    5. transfer learning

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    FDG'21

    Acceptance Rates

    Overall Acceptance Rate 152 of 415 submissions, 37%

    Article Metrics

    • Downloads (Last 12 months): 339
    • Downloads (Last 6 weeks): 19
    Reflects downloads up to 19 Dec 2024

