Exploring the Impact of Conceptual Bottlenecks on Adversarial Robustness of Deep Neural Networks

Rasheed, Bader; Abdelhamid, Mohamed; Khan, Adil; Menezes, Igor; Masood Khatak, Asad

Authors

Bader Rasheed

Mohamed Abdelhamid

Dr Igor Menezes I.G.Menezes@hull.ac.uk
Senior Lecturer (Associate Professor) in OBHRM and People Analytics

Asad Masood Khatak



Abstract

Deep neural networks (DNNs), while powerful, often lack interpretability and are vulnerable to adversarial attacks. Concept bottleneck models (CBMs), which incorporate intermediate high-level concepts into the model architecture, promise enhanced interpretability. This study examines the robustness of CBMs against adversarial attacks, comparing their clean and adversarial performance with that of standard convolutional neural networks (CNNs). The premise is that CBMs prioritize conceptual integrity and data compression, enabling them to maintain high performance under adversarial conditions by filtering out non-essential variations in the input data. Our extensive evaluations across different datasets and adversarial attacks confirm that CBMs not only maintain higher accuracy but also show improved defense against a range of adversarial attacks compared to traditional models. Our findings indicate that CBMs, particularly those trained sequentially, inherently exhibit higher adversarial robustness than their standard CNN counterparts. We additionally explore the effects of increasing conceptual complexity and of adversarial training. While adversarial training generally boosts robustness, the gain differs between CBMs and CNNs, highlighting the role of training strategy in achieving adversarial resilience.
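As a rough illustration of the concept bottleneck idea described in the abstract, below is a minimal PyTorch-style sketch of an input-to-concepts-to-label model with a two-stage ("sequential") training pass. The backbone, concept count, loss choices, and training procedure here are illustrative assumptions, not the architecture or setup used in the paper.

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Minimal concept bottleneck: image -> concept logits -> class logits.

    The backbone, concept count, and head sizes are placeholders for
    illustration only, not the architecture reported in the paper.
    """

    def __init__(self, num_concepts: int = 8, num_classes: int = 10):
        super().__init__()
        # Concept predictor: compresses the image into a small vector of
        # high-level concept scores (the "bottleneck").
        self.concept_net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, num_concepts),
        )
        # Label predictor: sees only the concepts, never the raw pixels.
        self.label_net = nn.Linear(num_concepts, num_classes)

    def forward(self, x: torch.Tensor):
        concepts = torch.sigmoid(self.concept_net(x))
        logits = self.label_net(concepts)
        return concepts, logits


# One possible reading of "sequential training": first fit the concept
# predictor on concept annotations, then freeze it and fit the label head.
model = ConceptBottleneckModel()
x = torch.randn(4, 3, 32, 32)              # dummy batch of images
concept_targets = torch.rand(4, 8)          # dummy concept annotations
class_targets = torch.randint(0, 10, (4,))  # dummy class labels

concept_loss = nn.BCELoss()(model(x)[0], concept_targets)
concept_loss.backward()                     # stage 1: train concept_net

for p in model.concept_net.parameters():    # stage 2: freeze concepts,
    p.requires_grad_(False)                 # train the label head only
label_loss = nn.CrossEntropyLoss()(model(x)[1], class_targets)
label_loss.backward()
```

In this sketch the label predictor can only use the compressed concept representation, which is the property the paper credits with filtering out non-essential input variations under adversarial perturbation.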

Citation

Rasheed, B., Abdelhamid, M., Khan, A., Menezes, I., & Masood Khatak, A. (2024). Exploring the Impact of Conceptual Bottlenecks on Adversarial Robustness of Deep Neural Networks. IEEE Access, 12, 131323-131335. https://doi.org/10.1109/ACCESS.2024.3457784

Journal Article Type Article
Acceptance Date Sep 5, 2024
Online Publication Date Sep 11, 2024
Publication Date Jan 1, 2024
Deposit Date Sep 8, 2024
Publicly Available Date Oct 1, 2024
Electronic ISSN 2169-3536
Publisher Institute of Electrical and Electronics Engineers
Peer Reviewed Yes
Volume 12
Pages 131323-131335
DOI https://doi.org/10.1109/ACCESS.2024.3457784
Keywords Concept bottleneck models; Adversarial attacks; Robustness; Interpretable models
Public URL https://hull-repository.worktribe.com/output/4820709
