Mixed Reality Human-Robot Interface for Robotic Operations in Hazardous Environments

Author:
  1. Szczurek, Krzysztof

Supervisors:
  1. Raúl Marín (Supervisor)
  2. Mario Di Castro (Co-supervisor)

Defending university: Universitat Jaume I

Date of defense: 23 October 2023

Examination committee:
  1. Pedro José Sanz Valero (Chair)
  2. Itziar Cabanes Axpe (Secretary)
  3. Alejandro Ribes Cortés (Member)

Type: Thesis

Teseo: 824311 (DIALNET, TDX)

Abstract

Interventions in high-risk hazardous environments often require teleoperated remote systems or mobile robotic manipulators to prevent human exposure to danger. The need for secure and effective teleoperation is growing, demanding enhanced environmental understanding and collision prevention. Human-robot interfaces must therefore be designed with reliability and safety in mind so that the operator can perform remote inspections, repairs, or maintenance. Modern interfaces provide some degree of telepresence, but they do not allow full immersion in the controlled situation. Mixed Reality (MR) technologies with Head-Mounted Devices (HMDs) can address this issue, since they allow stereoscopic perception and simultaneous interaction with virtual and real objects. However, such human-robot interfaces had not yet been demonstrated in telerobotic interventions in hazardous environments, and this thesis set out to address that challenge. The research was carried out at the European Organization for Nuclear Research (CERN) for mobile robots operated remotely in particle accelerators and experimental areas. Over the course of the thesis, three successive goals were achieved.

Firstly, the teleoperator was provided with immersive interactions while still ensuring accurate positioning of the robot. These techniques had to be adapted to the delays, bandwidth restrictions, and fluctuations of the shared 4G network in the realistic underground particle-accelerator environment. A network optimization framework was developed that enabled Mixed Reality technologies such as 3D collision detection and avoidance, trajectory planning, real-time control, and automated target approach. A novel application-layer congestion control with automatic settings was applied to the video and point cloud feedback through adaptive algorithms.

Secondly, the MR human-robot interface was designed to function with Augmented Reality (AR) HMDs in wireless network environments.
The multimodal interface provided efficient and precise interaction through hand and eye tracking, user motion tracking, voice recognition, and video, 3D point cloud, and audio feedback from the robot. Furthermore, the interface allowed multiple experts to collaborate locally and remotely in the AR workspace, sharing or monitoring the robot's control. The interface was tested in real intervention scenarios at CERN to evaluate its performance. Network characterization and measurements were conducted to assess whether the interface met the operational requirements and whether the network architecture could support single- and multi-user communication loads.

Finally, the 3D MR human-robot interface was compared with a well-validated 2D interface to ensure that it was safe and efficient. The 3D MR interface introduced multiple useful functionalities, which could have increased the operator's workload and stress while adding system complexity. The CERN 3D MR and operational 2D interfaces were compared using the NASA TLX assessment method, custom questionnaires, task-execution-time curves, and measurements of heart rate (HR), respiration rate (RR), and skin electrodermal activity (EDA) taken by the developed Operator Monitoring System (OMS), a system designed to measure a teleoperator's physiological parameters during robotic interventions. The developed interface systems demonstrated operational readiness, achieving Technology Readiness Level (TRL) 8 through successful single- and multi-user missions. Limitations and areas for further research were identified, such as optimizing the network architecture for multi-user scenarios and automatically selecting interaction strategies for higher-level interface actions depending on network conditions. Practical use of the OMS revealed the need for machine learning techniques in signal interpretation to detect non-standard situations.
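To illustrate the kind of application-layer adaptive congestion control the abstract describes for video and point cloud feedback, the following is a minimal AIMD-style (additive-increase, multiplicative-decrease) rate controller. It is a sketch under stated assumptions, not the thesis implementation: the class name, thresholds, and step sizes are all illustrative.

```python
# Hypothetical sketch of application-layer congestion control for robot
# feedback streams (video / point cloud). The sender samples round-trip
# time and packet loss, backs the target bitrate off multiplicatively
# when the network looks congested, and recovers additively otherwise.
# All names and numeric thresholds below are illustrative assumptions.

class AdaptiveRateController:
    def __init__(self, min_kbps=250, max_kbps=8000, rtt_budget_ms=150):
        self.rate_kbps = max_kbps        # start optimistic, back off on congestion
        self.min_kbps = min_kbps         # floor: keep some feedback flowing
        self.max_kbps = max_kbps         # ceiling: link capacity estimate
        self.rtt_budget_ms = rtt_budget_ms

    def update(self, measured_rtt_ms, loss_ratio):
        """Return a new target bitrate given the latest network sample."""
        congested = measured_rtt_ms > self.rtt_budget_ms or loss_ratio > 0.02
        if congested:
            # multiplicative decrease, clamped to the minimum rate
            self.rate_kbps = max(self.min_kbps, self.rate_kbps * 0.7)
        else:
            # additive increase, clamped to the maximum rate
            self.rate_kbps = min(self.max_kbps, self.rate_kbps + 100)
        return self.rate_kbps
```

In a shared 4G network with fluctuating latency, a controller of this shape keeps the lower-priority point cloud stream from starving the latency-critical control channel; the thesis applies the idea with automatic settings rather than fixed thresholds.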
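The NASA TLX comparison between the 2D and 3D MR interfaces rests on a standard computation: six subscales rated 0-100, weighted by 15 pairwise comparisons. A minimal sketch of that score, with made-up example numbers rather than thesis data:

```python
# Standard NASA TLX weighted-workload computation, as used to compare
# interface workload. Each of the six subscales is rated 0-100; each
# weight is the number of times that subscale was chosen in the 15
# pairwise comparisons, so the weights sum to 15. Example values below
# are invented for illustration only.

SCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def nasa_tlx(ratings, weights):
    """Overall weighted workload: sum(rating * weight) / 15."""
    assert set(ratings) == set(SCALES) and set(weights) == set(SCALES)
    assert sum(weights.values()) == 15, "weights come from 15 pairwise comparisons"
    return sum(ratings[s] * weights[s] for s in SCALES) / 15.0
```

Running one score per participant per interface (2D vs. 3D MR) yields the paired samples that, alongside task-execution times and the OMS physiological signals, support the safety-and-efficiency comparison.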