Body Form Modulates the Prediction of Human and Artificial Behaviour from Gaze Observation
Scandola, Michele; Cross, Emily S.; Caruana, Nathan; Tidoni, Emmanuele
Abstract
The future of human–robot collaboration relies on people's ability to understand and predict robots' actions. The machine-like appearance of robots, as well as contextual information, may influence people's ability to anticipate their behaviour. We conducted six separate experiments to investigate how spatial cues and task instructions modulate people's ability to understand what a robot is doing. Participants observed goal-directed and non-goal-directed gaze shifts made by human and robot agents, as well as directional cues displayed by a triangle. We report that biasing an observer's attention, by showing just one object an agent can interact with, can improve people's ability to understand what humanoid robots will do. Crucially, this cue had no impact on people's ability to predict the upcoming behaviour of the triangle. Moreover, task instructions that focus on the visual and motor consequences of the observed gaze were found to influence mentalising abilities. We suggest that the human-like shape of an agent and its physical capabilities facilitate the prediction of an upcoming action. The reported findings expand current models of gaze perception and may have important implications for human–human and human–robot collaboration.
Citation
Scandola, M., Cross, E. S., Caruana, N., & Tidoni, E. (2023). Body Form Modulates the Prediction of Human and Artificial Behaviour from Gaze Observation. International Journal of Social Robotics. https://doi.org/10.1007/s12369-022-00962-2
| Journal Article Type | Article |
| --- | --- |
| Acceptance Date | Dec 19, 2022 |
| Online Publication Date | Jan 24, 2023 |
| Publication Date | 2023 |
| Deposit Date | Feb 3, 2023 |
| Publicly Available Date | Feb 6, 2023 |
| Journal | International Journal of Social Robotics |
| Print ISSN | 1875-4791 |
| Electronic ISSN | 1875-4805 |
| Publisher | Springer |
| Peer Reviewed | Peer Reviewed |
| DOI | https://doi.org/10.1007/s12369-022-00962-2 |
| Keywords | Gaze perception; Body perception; Action prediction; Human–robot interaction; Mentalising |
| Public URL | https://hull-repository.worktribe.com/output/4190395 |
Files
Published article (PDF, 3.5 MB)
Publisher Licence URL: http://creativecommons.org/licenses/by/4.0
Copyright Statement
© The Author(s) 2023.
Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.