Exploring the Impact of Conceptual Bottlenecks on Adversarial Robustness of Deep Neural Networks

Rasheed, Bader; Abdelhamid, Mohamed; Khan, Adil; Menezes, Igor; Masood Khatak, Asad

Authors
Bader Rasheed
Mohamed Abdelhamid
Professor Adil Khan A.M.Khan@hull.ac.uk
Professor
Dr Igor Menezes I.G.Menezes@hull.ac.uk
Senior Lecturer (Associate Professor) in OBHRM and People Analytics
Asad Masood Khatak
Abstract
Deep neural networks (DNNs), while powerful, often lack interpretability and are vulnerable to adversarial attacks. Concept bottleneck models (CBMs), which incorporate intermediate high-level concepts into the model architecture, promise enhanced interpretability. This study investigates the robustness of CBMs against adversarial attacks, comparing their clean and adversarial performance with that of standard convolutional neural networks (CNNs). The premise is that CBMs prioritize conceptual integrity and data compression, enabling them to maintain high performance under adversarial conditions by filtering out non-essential variations in input data. Our extensive evaluations across different datasets and adversarial attacks confirm that CBMs not only maintain higher accuracy but also show improved defense capabilities against a range of adversarial attacks compared to traditional models. Our findings indicate that CBMs, particularly those trained sequentially, inherently exhibit higher robustness against adversarial attacks than their standard CNN counterparts. Additionally, we explore the effects of increasing conceptual complexity and the application of adversarial training techniques. While adversarial training generally boosts robustness, the gains differ between CBMs and CNNs, highlighting the role of training strategy in achieving adversarial resilience.
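To make the architecture described in the abstract concrete, the sketch below shows a minimal concept bottleneck model in PyTorch, a sequential-training step, and an FGSM perturbation of the kind commonly used to probe adversarial robustness. The layer sizes, function names, and attack settings are illustrative assumptions, not the paper's actual implementation or evaluation protocol.

```python
# Illustrative sketch only: a minimal concept bottleneck model (CBM),
# a sequential-training step, and an FGSM attack. Names, layer sizes,
# and hyperparameters are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, n_concepts: int = 112, n_classes: int = 200):
        super().__init__()
        # Backbone maps the input image to predicted high-level concepts.
        self.concept_net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_concepts),
        )
        # The label head sees only the concepts, forming the bottleneck.
        self.label_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concepts = torch.sigmoid(self.concept_net(x))
        return concepts, self.label_net(concepts)

def sequential_training_step(model, x, concept_targets, labels, stage, optimizer):
    """One step of sequential training: fit the concept predictor first,
    then fit the label head on (detached) predicted concepts."""
    bce, ce = nn.BCELoss(), nn.CrossEntropyLoss()
    concepts, _ = model(x)
    if stage == "concepts":
        loss = bce(concepts, concept_targets)
    else:  # "labels": the concept predictor is treated as fixed
        loss = ce(model.label_net(concepts.detach()), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def fgsm(model, x, y, eps=8 / 255):
    """Single-step FGSM perturbation for probing robustness."""
    x = x.clone().detach().requires_grad_(True)
    _, logits = model(x)
    nn.CrossEntropyLoss()(logits, y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```

In this framing, the robustness argument of the abstract is that a small input perturbation must first change the predicted concepts before it can change the label, and the bottleneck filters out the non-essential input variation that such perturbations exploit.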
Citation
Rasheed, B., Abdelhamid, M., Khan, A., Menezes, I., & Masood Khatak, A. (2024). Exploring the Impact of Conceptual Bottlenecks on Adversarial Robustness of Deep Neural Networks. IEEE Access, 12, 131323-131335. https://doi.org/10.1109/ACCESS.2024.3457784
Journal Article Type | Article |
---|---|
Acceptance Date | Sep 5, 2024 |
Online Publication Date | Sep 11, 2024 |
Publication Date | Jan 1, 2024 |
Deposit Date | Sep 8, 2024 |
Publicly Available Date | Oct 1, 2024 |
Electronic ISSN | 2169-3536 |
Publisher | Institute of Electrical and Electronics Engineers |
Peer Reviewed | Peer Reviewed |
Volume | 12 |
Pages | 131323-131335 |
DOI | https://doi.org/10.1109/ACCESS.2024.3457784 |
Keywords | Concept bottleneck models; Adversarial attacks; Robustness; Interpretable models |
Public URL | https://hull-repository.worktribe.com/output/4820709 |
Files
Published article (PDF, 1.6 MB)
Publisher Licence URL
https://creativecommons.org/licenses/by-nc-nd/4.0/
Copyright Statement
© 2024 The Authors. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.
For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/