Safety Monitoring for Large Language Models: A Case Study of Offshore Wind Maintenance
Walker, Connor; Rothon, Callum; Aslansefat, Koorosh; Papadopoulos, Yiannis; Dethlefs, Nina
Authors
Connor Walker
Callum Rothon
Dr Koorosh Aslansefat K.Aslansefat@hull.ac.uk
Lecturer/Assistant Professor
Professor Yiannis Papadopoulos Y.I.Papadopoulos@hull.ac.uk
Professor
Nina Dethlefs
Abstract
It has been forecast that a quarter of the world's energy will be supplied by Offshore Wind (OSW) by 2050 (Smith 2023). Given that up to one third of the Levelised Cost of Energy (LCOE) arises from Operations and Maintenance (O&M), the motive for cost reduction is enormous. In a typical OSW farm, hundreds of alarms can occur within a single day, making manual O&M planning without automated support costly and difficult. Increased pressure to ensure safety and high reliability in progressively harsher environments motivates the exploration of Artificial Intelligence (AI) and Machine Learning (ML) systems as aids to the task. We recently introduced a specialised conversational agent trained to interpret alarm sequences from Supervisory Control and Data Acquisition (SCADA) systems and recommend comprehensible repair actions (Walker et al. 2023). Building on recent advances in Large Language Models (LLMs), we expand on this earlier work by fine-tuning LLaMA (Touvron et al. 2023) on available maintenance records from EDF Energy. One issue presented by LLMs is the risk of responses containing unsafe actions or irrelevant, hallucinated procedures. This paper proposes a novel framework for safety monitoring of OSW maintenance, combining previous work with additional safety layers. The agent's generated responses are filtered to prevent raw responses from endangering personnel and the environment. The algorithm represents such responses in embedding space and quantifies their dissimilarity to pre-defined unsafe concepts using the Empirical Cumulative Distribution Function (ECDF). A second layer identifies hallucination by exploiting probability distributions to compare responses against stochastically generated sentences. Combining these layers, the approach fine-tunes individual safety thresholds for each categorised concept, providing a unique safety filter.
The proposed framework has the potential to improve O&M planning for OSW farms using state-of-the-art LLMs, while equipping them with safety monitoring that can increase technology acceptance within the industry.
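As a rough illustration of the first safety layer described in the abstract, the sketch below compares the empirical cumulative distribution of a response's embedding values against that of a pre-defined unsafe concept, using the ECDF-based (Kolmogorov–Smirnov-style) statistical distance that underpins SafeML. All function names, the toy one-dimensional "embeddings", and the threshold value are illustrative assumptions, not the authors' implementation.

```python
import bisect

def ecdf(sample):
    """Return the empirical cumulative distribution function of a sample."""
    xs = sorted(sample)
    n = len(xs)
    def f(x):
        # Fraction of sample values less than or equal to x.
        return bisect.bisect_right(xs, x) / n
    return f

def ecdf_distance(a, b):
    """Kolmogorov-Smirnov-style distance: sup |F_a(x) - F_b(x)| over observed points."""
    fa, fb = ecdf(a), ecdf(b)
    points = sorted(set(a) | set(b))
    return max(abs(fa(x) - fb(x)) for x in points)

def flag_unsafe(response_vals, unsafe_concept_vals, threshold=0.3):
    """Flag a response whose embedding distribution lies too CLOSE to an
    unsafe concept (small distance = high similarity). The threshold here
    is a placeholder; the paper tunes per-concept thresholds."""
    return ecdf_distance(response_vals, unsafe_concept_vals) < threshold
```

Identical samples yield a distance of 0.0 (maximally similar, hence flagged), while fully separated samples yield 1.0; in the paper's framework a separate threshold would be tuned for each categorised unsafe concept.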
Citation
Walker, C., Rothon, C., Aslansefat, K., Papadopoulos, Y., & Dethlefs, N. (2024, February). Safety Monitoring for Large Language Models: A Case Study of Offshore Wind Maintenance. Presented at Safety Critical Systems Symposium SSS'24, Bristol, UK
Presentation Conference Type | Conference Paper (published) |
---|---|
Conference Name | Safety Critical Systems Symposium SSS'24 |
Start Date | Feb 13, 2024 |
End Date | Feb 15, 2024 |
Acceptance Date | Nov 1, 2023 |
Online Publication Date | Dec 12, 2023 |
Publication Date | Dec 12, 2023 |
Deposit Date | Jul 9, 2024 |
Publicly Available Date | Jul 19, 2024 |
Peer Reviewed | Peer Reviewed |
Book Title | Safe AI Systems: Proceedings of the 32nd Safety-Critical Systems Symposium (SSS’24) |
Chapter Number | 12 |
ISBN | 979-8868463440 |
Keywords | Large Language Model; Safety Assurance; AI Safety; Statistical Distance Measure; SafeML; Safe Machine Learning; SafeLLM |
Public URL | https://hull-repository.worktribe.com/output/4734261 |
Publisher URL | https://scsc.uk/scsc-188 |
Files
Safety Monitoring For Large Language Models A Case Study Of Offshore Wind Maintenance
(1.1 Mb)
PDF
Publisher Licence URL
http://creativecommons.org/licenses/by/4.0
Copyright Statement
© AURA CDT, University of Hull 2024.
Published by the Safety-Critical Systems Club. This work is licensed under Creative Commons Attribution 4.0 International https://creativecommons.org/licenses/by/4.0/