SafeAI@AAAI 2021: Virtual Event
- Huáscar Espinoza, John A. McDermid, Xiaowei Huang, Mauricio Castillo-Effen, Xin Cynthia Chen, José Hernández-Orallo, Seán Ó hÉigeartaigh, Richard Mallah:
Proceedings of the Workshop on Artificial Intelligence Safety 2021 (SafeAI 2021) co-located with the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2021), Virtual, February 8, 2021. CEUR Workshop Proceedings 2808, CEUR-WS.org 2021
Session 1: Dynamic Safety and Anomaly Assessment
- Haiwen Huang, Zhihan Li, Lulu Wang, Sishuo Chen, Xinyu Zhou, Bin Dong:
Feature Space Singularity for Out-of-Distribution Detection.
- Saasha Nair, Sina Shafaei, Daniel Auge, Alois C. Knoll:
An Evaluation of "Crash Prediction Networks" (CPN) for Autonomous Driving Scenarios in CARLA Simulator.
- Franziska Schwaiger, Maximilian Henne, Fabian Küppers, Felippe Schmoeller Roza, Karsten Roscher, Anselm Haselhoff:
From Black-box to White-box: Examining Confidence Calibration under different Conditions.
Session 2: Safety Considerations for the Assurance of AI-based Systems
- Rob Ashmore, Alec Banks:
The Utility of Neural Network Test Coverage Measures.
- Gavin Leech, Nandi Schoots, Joar Skalse:
Safety Properties of Inductive Logic Programming.
- Jan-Pieter Paardekooper, Mauro Comi, Corrado Grappiolo, Ron Snijders, Willeke van Vught, Rutger Beekelaar:
A Hybrid-AI Approach for Competence Assessment of Automated Driving Functions.
Session 3: Adversarial Machine Learning and Trustworthiness
- Takuma Amada, Kazuya Kakizaki, Seng Pei Liew, Toshinori Araki, Joseph Keshet, Jun Furukawa:
Adversarial Robustness for Face Recognition: How to Introduce Ensemble Diversity among Feature Extractors?
- John Hyatt, Michael Lee:
Multi-Modal Generative Adversarial Networks Make Realistic and Diverse but Untrustworthy Predictions When Applied to Ill-posed Problems.
- Javier Hernandez-Ortega, Ruben Tolosana, Julian Fiérrez, Aythami Morales:
DeepFakesON-Phys: DeepFakes Detection based on Heart Rate Estimation.
Session 4: Safe Autonomous Agents
- Hal Ashton:
What Criminal and Civil Law Tells Us about Safe RL Techniques to Generate Law-Abiding Behaviour.
- Jakub Tetek, Marek Sklenka, Tomas Gavenciak:
Performance of Bounded-Rational Agents With the Ability to Self-Modify.
- Jared Markowitz, Marie Chau, I-Jeng Wang:
Deep CPT-RL: Imparting Human-Like Risk Sensitivity to Artificial Agents.
- David Lindner, Kyle Matoba, Alexander Meulemans:
Challenges for Using Impact Regularizers to Avoid Negative Side Effects.
Poster Papers
- Christopher Harper, Praminda Caleb-Solly:
Towards an Ontological Framework for Environmental Survey Hazard Analysis of Autonomous Systems.
- Adrien Gauffriau, François Malgouyres, Mélanie Ducoffe:
Overestimation Learning with Guarantees.
- Franz Wotawa:
On the Use of Available Testing Methods for Verification & Validation of AI-based Software and Systems.
- Vibhu Gautam, Youcef Gheraibia, Rob Alexander, Richard Hawkins:
Runtime Decision Making Under Uncertainty in Autonomous Vehicles.
- John Burden, José Hernández-Orallo, Seán Ó hÉigeartaigh:
Negative Side Effects and AI Agent Indicators: Experiments in SafeLife.
- Ville Vakkuri, Marianna Jantunen, Erika Halme, Kai-Kristian Kemell, Anh Nguyen-Duc, Tommi Mikkonen, Pekka Abrahamsson:
Time for AI (Ethics) Maturity Model Is Now.
- Ernest Wozniak, Henrik J. Putzer, Carmen Cârlan:
AI-Blueprint for Deep Neural Networks.
- Václav Divis, Marek Hrúz:
Neural Criticality: Validation of Convolutional Neural Networks.
- Francesco Cartella, Orlando Anunciação, Yuki Funabiki, Daisuke Yamaguchi, Toru Akishita, Olivier Elshocht:
Adversarial Attacks for Tabular Data: Application to Fraud Detection and Imbalanced Data.
- Nivasini Ananthakrishnan, Shai Ben-David, Tosca Lechner:
Classification Confidence Scores with Point-wise Guarantees.