
SafeML: Safety Monitoring of Machine Learning Classifiers Through Statistical Difference Measures

Aslansefat, Koorosh; Sorokos, Ioannis; Whiting, Declan; Tavakoli Kolagari, Ramin; Papadopoulos, Yiannis


Abstract

Ensuring the safety and explainability of machine learning (ML) is a topic of increasing relevance as data-driven applications venture into safety-critical domains, which are traditionally committed to high safety standards that cannot be met by testing alone when the system under scrutiny is an otherwise inaccessible black box. The interaction between safety and security is a particularly central challenge, as security violations can compromise safety. This paper contributes to addressing both safety and security within a single protection concept, applicable during the operation of ML systems, through active monitoring of the behavior and operational context of the data-driven system based on distance measures over the Empirical Cumulative Distribution Function (ECDF). We investigate abstract datasets (XOR, Spiral, Circle) and a current security-specific intrusion-detection dataset of simulated network traffic (CICIDS2017), using distributional-shift detection measures including the Kolmogorov-Smirnov, Kuiper, Anderson-Darling, Wasserstein, and mixed Wasserstein-Anderson-Darling measures. Our preliminary findings indicate a meaningful correlation between ML decisions and the ECDF-based distance measures of the input features. These measures can therefore provide a confidence level that can be used for (a) analyzing the applicability of the ML system in a given field (safety/security) and (b) detecting whether the field data has been maliciously manipulated. (Our preliminary code and results are available at https://github.com/ISorokos/SafeML.)
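To illustrate the idea described in the abstract, the sketch below shows one way to compute ECDF-based distances between training features and field (runtime) features and aggregate them into a dissimilarity score. It is a minimal illustration, not the authors' SafeML implementation (see the GitHub repository above for that); it assumes SciPy's `ks_2samp`, `wasserstein_distance`, and `anderson_ksamp`, and the function and variable names (`ecdf_distances`, `dissimilarity_score`, the synthetic data) are hypothetical.

```python
# Minimal sketch of ECDF-based distributional-shift monitoring (illustrative only,
# not the SafeML reference code): per-feature two-sample distances between the
# training data and the field data are averaged into one dissimilarity score.
import numpy as np
from scipy import stats


def ecdf_distances(train_col: np.ndarray, field_col: np.ndarray) -> dict:
    """Two-sample ECDF distance measures for a single feature column."""
    ks_stat, _ = stats.ks_2samp(train_col, field_col)                 # Kolmogorov-Smirnov
    wass = stats.wasserstein_distance(train_col, field_col)           # Wasserstein (earth mover's)
    ad_stat = stats.anderson_ksamp([train_col, field_col]).statistic  # k-sample Anderson-Darling
    return {"ks": ks_stat, "wasserstein": wass, "anderson_darling": ad_stat}


def dissimilarity_score(train_X: np.ndarray, field_X: np.ndarray, measure: str = "ks") -> float:
    """Average the chosen distance over all features; larger values indicate more shift."""
    per_feature = [ecdf_distances(train_X[:, j], field_X[:, j])[measure]
                   for j in range(train_X.shape[1])]
    return float(np.mean(per_feature))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_X = rng.normal(0.0, 1.0, size=(1000, 4))       # stand-in for training features
    field_same = rng.normal(0.0, 1.0, size=(500, 4))      # field data from the same distribution
    field_shifted = rng.normal(0.7, 1.3, size=(500, 4))   # field data with distributional shift

    print("in-distribution KS score:", dissimilarity_score(train_X, field_same))
    print("shifted KS score        :", dissimilarity_score(train_X, field_shifted))
```

In a monitoring setting such as the one the paper describes, a score like this would be compared against a threshold calibrated on held-out data to derive the confidence level mentioned in the abstract; the exact aggregation and thresholding used by SafeML are defined in the paper and repository, not here.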

Citation

Aslansefat, K., Sorokos, I., Whiting, D., Tavakoli Kolagari, R., & Papadopoulos, Y. (2020). SafeML: Safety Monitoring of Machine Learning Classifiers Through Statistical Difference Measures. Lecture Notes in Computer Science, 12297, 197-211. https://doi.org/10.1007/978-3-030-58920-2_13

Journal Article Type: Conference Paper
Conference Name: IMBSA: International Symposium on Model-Based Safety and Assessment
Conference Location: Lisbon
Acceptance Date: Mar 1, 2020
Online Publication Date: Sep 4, 2020
Publication Date: 2020
Deposit Date: Feb 17, 2021
Publicly Available Date: Jul 1, 2021
Journal: Lecture Notes in Computer Science
Print ISSN: 0302-9743
Electronic ISSN: 1611-3349
Publisher: Springer Verlag
Peer Reviewed: Yes
Volume: 12297
Pages: 197-211
ISBN: 9783030589196
DOI: https://doi.org/10.1007/978-3-030-58920-2_13
Keywords: Safety; SafeML; Machine Learning; Deep Learning; Artificial Intelligence; Statistical difference; Domain adaptation
Public URL: https://hull-repository.worktribe.com/output/3579760

Files

Author-created version (PDF, 1.4 MB)

Copyright Statement
Copyright © 2020, Springer Nature Switzerland AG




