Search Results (302)

Search Parameters:
Keywords = teleoperation

21 pages, 22783 KiB  
Article
A Latency Composition Analysis for Telerobotic Performance Insights Across Various Network Scenarios
by Nick Bray, Matthew Boeding, Michael Hempel, Hamid Sharif, Tapio Heikkilä, Markku Suomalainen and Tuomas Seppälä
Future Internet 2024, 16(12), 457; https://doi.org/10.3390/fi16120457 - 4 Dec 2024
Viewed by 349
Abstract
Telerobotics involves the operation of robots from a distance, often using advanced communication technologies combining wireless and wired technologies and a variety of protocols. This application domain is crucial because it allows humans to interact with and control robotic systems safely and from a distance, often performing activities in hazardous or inaccessible environments. Thus, by enabling remote operations, telerobotics not only enhances safety but also expands the possibilities for medical and industrial applications. In some use cases, telerobotics bridges the gap between human skill and robotic precision, making the completion of complex tasks requiring high accuracy possible without being physically present. With the growing availability of high-speed networks around the world, especially with the advent of 5G cellular technologies, applications of telerobotics can now span a gamut of scenarios ranging from remote control in the same room to robotic control across the globe. However, there are a variety of factors that can impact the control precision of the robotic platform and the user experience of the teleoperator. One such critical factor is latency, especially across large geographical areas or complex network topologies. Consequently, military telerobotics and remote operations, for example, rely on dedicated communications infrastructure for such tasks. However, this creates a barrier to entry for many other applications and domains, as the cost of dedicated infrastructure would be prohibitive. In this paper, we examine the network latency of robotic control over shared network resources in a variety of network settings, such as a local network, access-controlled networks through Wi-Fi and cellular, and a remote transatlantic connection between Finland and the United States. The aim of this study is to quantify and evaluate the constituent latency components that comprise the control feedback loop of this telerobotics experience: a camera feed for the operator to observe the telerobotic platform’s environment in one direction, and the control communications from the operator to the robot in the reverse direction. The results show stable average round-trip latencies of 6.6 ms for the local network connection, 58.4 ms when connecting over Wi-Fi, 115.4 ms when connecting through cellular, and 240.7 ms when connecting from Finland to the United States over a VPN access-controlled network. These findings provide a better understanding of the capabilities and performance limitations of conducting telerobotics activities over commodity networks, and lay the foundation for our future work to use these insights for optimizing the overall user experience and the responsiveness of this control loop.
(This article belongs to the Special Issue Advances and Perspectives in Human-Computer Interaction II)
Figures 1–15: photo of the Baxter robot; timing diagram and request/response flow between client, host, and robotic platform; network diagrams for the wired University, Lab Wi-Fi, mobile hotspot, and transatlantic (VPN) connection scenarios; box plots of latency for each scenario; a map of the UNL and VTT endpoints of the overseas tests; and comparisons of IK solver, move request, camera feed, and client request duration latencies across the tested network scenarios.
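As a rough illustration of the round-trip measurement this latency study describes, the sketch below times a command/response exchange over a plain TCP socket. It is not the authors' implementation: the host name, port, payload, and the assumption of an echo-style endpoint are placeholders for the example.

```python
# Hedged sketch: timing the round trip of small control-sized messages over
# TCP, in the spirit of the latency-composition study above. The host, port,
# payload, and echo-style protocol are illustrative assumptions.
import socket
import statistics
import time

def measure_rtt(host: str, port: int, samples: int = 100) -> list:
    """Send small payloads and time each echo round trip, in milliseconds."""
    rtts = []
    with socket.create_connection((host, port), timeout=5.0) as sock:
        for _ in range(samples):
            payload = b"move_request"          # stand-in for a control message
            t_start = time.perf_counter()
            sock.sendall(payload)
            sock.recv(1024)                    # wait for the echoed response
            rtts.append((time.perf_counter() - t_start) * 1000.0)
    return rtts

if __name__ == "__main__":
    rtts = measure_rtt("robot-host.example", 9000)   # hypothetical endpoint
    print(f"mean RTT: {statistics.mean(rtts):.1f} ms, "
          f"max RTT: {max(rtts):.1f} ms")
```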
13 pages, 1871 KiB  
Article
Exploring the Psychological and Physiological Effects of Operating a Telenoid: The Preliminary Assessment of a Minimal Humanoid Robot for Mediated Communication
by Aya Nakae, Hani M. Bu-Omer, Wei-Chuan Chang, Chie Kishimoto and Hidenobu Sumioka
Sensors 2024, 24(23), 7541; https://doi.org/10.3390/s24237541 - 26 Nov 2024
Viewed by 455
Abstract
Background: As the Internet of Things (IoT) expands, it enables new forms of communication, including interactions mediated by teleoperated robots like avatars. While extensive research exists on the effects of these devices on communication partners, there is limited research on the impact on the operators themselves. This study aimed to objectively assess the psychological and physiological effects of operating a teleoperated robot, specifically Telenoid, on its human operator. Methods: Twelve healthy participants (2 women and 10 men, aged 18–23 years) were recruited from Osaka University. Participants engaged in two communication sessions with a first-time partner: face-to-face and Telenoid-mediated. Telenoid is a minimalist humanoid robot teleoperated by a participant. Blood samples were collected before and after each session to measure hormonal and oxidative markers, including cortisol, diacron reactive oxygen metabolites (d-ROMs), and the biological antioxidant activity of plasma (BAP). Psychological stress was assessed using validated questionnaires (POMS-2, HADS, and SRS-18). Results: A trend toward decreased cortisol levels was observed during Telenoid-mediated communication, whereas face-to-face interactions showed no significant changes. Oxidative stress, measured by d-ROMs, significantly increased after face-to-face interactions but not in Telenoid-mediated sessions. Significant correlations were found between oxytocin, d-ROMs, and psychological stress scores, particularly for helplessness and total stress measures. However, no significant changes were observed in other biomarkers or between the two conditions for most psychological measures. Conclusions: These findings suggest that cortisol and d-ROMs may serve as objective biomarkers for assessing psychophysiological stress during robot-mediated communication. Telenoid’s minimalist design may help reduce social pressures and mitigate stress compared to face-to-face interactions. Further research with larger, more diverse samples and longitudinal designs is needed to validate these findings and explore the broader impacts of teleoperated robots.
(This article belongs to the Section Sensors and Robotics)
Figures 1–4: study design; experimental setup for the Facing and Telenoid sessions; pre/post changes in serum cortisol, oxytocin, d-ROMs, and BAP for both sessions; and pre/post changes in POMS-2, HADS, and SRS-18 questionnaire scores (significance marked at p < 0.1, p < 0.05, and p < 0.01).
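The biomarker results above rest on paired pre/post comparisons in a small sample. The sketch below shows that style of analysis with SciPy's Wilcoxon signed-rank test; the numbers are placeholders, not the study's data.

```python
# Hedged sketch of a paired pre/post biomarker comparison (e.g., a marker
# measured before and after a session). The values are placeholders only.
import numpy as np
from scipy.stats import wilcoxon

pre  = np.array([12.1, 9.8, 14.3, 11.0, 10.5, 13.2, 9.1, 12.7, 11.9, 10.2, 13.8, 9.6])
post = np.array([10.9, 9.5, 13.1, 10.2, 10.7, 12.0, 8.8, 11.9, 11.1,  9.8, 12.5, 9.4])

stat, p = wilcoxon(pre, post)            # paired, non-parametric test
print(f"Wilcoxon W = {stat:.1f}, p = {p:.3f}")
print(f"median change = {np.median(post - pre):.2f}")
```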
20 pages, 8922 KiB  
Article
Prediction and Elimination of Physiological Tremor During Control of Teleoperated Robot Based on Deep Learning
by Juntao Chen, Zhiqing Zhang, Wei Guan, Xinxin Cao and Ke Liang
Sensors 2024, 24(22), 7359; https://doi.org/10.3390/s24227359 - 18 Nov 2024
Viewed by 540
Abstract
Currently, teleoperated robots, with the operator’s input, can fully perceive unknown factors in a complex environment and have strong environmental interaction and perception abilities. However, physiological tremors in the human hand can seriously affect the accuracy of processes that require high-precision control. Therefore, this paper proposes an EEMD-IWOA-LSTM model, which decomposes the physiological tremor of the hand into several intrinsic mode function (IMF) components using the EEMD decomposition strategy and converts the complex, nonlinear, and non-stationary physiological tremor curve of the human hand into multiple simple sequences. An LSTM neural network is used to build a prediction model for each IMF component, and an IWOA is proposed to optimize the model, thereby improving the prediction accuracy of the physiological tremor and enabling its elimination. The prediction results of this model are compared with those of other models, and the EEMD-IWOA-LSTM results presented in this study show clearly superior performance. In the two examples, the MSE values of the proposed prediction model are 0.1148 and 0.00623, respectively. The tremor-elimination model proposed in this study can effectively remove the physiological tremor of the human hand during teleoperation and improve the control accuracy of the teleoperated robot.
(This article belongs to the Special Issue Advanced Robotic Manipulators and Control Applications)
Figures 1–15: control flow chart of the teleoperation system; tremor suppression model; LSTM structure; EEMD decomposition process and results; EEMD-LSTM model structure; IWOA flow chart; modeling process and tremor-signal prediction results for Example 1; fitness curves of the IMF components; error box plots for the x, y, and z axes; comparison of activation functions; and R² of the tremor data for the three axes in the two cases.
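A hedged sketch of the decompose-then-predict idea behind the EEMD-IWOA-LSTM model is shown below: EEMD splits the tremor signal into IMF components and a small LSTM forecasts each one. It assumes the third-party PyEMD and TensorFlow/Keras packages, uses illustrative window and layer sizes rather than the paper's settings, and omits the IWOA hyperparameter optimization entirely.

```python
# Hedged sketch of the decompose-then-predict idea described above: EEMD splits
# the tremor signal into IMFs and a small LSTM forecasts each one. Assumes the
# PyEMD and TensorFlow/Keras packages; window size, layer sizes, and training
# settings are illustrative, not the paper's values. The IWOA step is omitted.
import numpy as np
from PyEMD import EEMD                      # assumption: PyEMD provides EEMD
from tensorflow import keras

def make_windows(series: np.ndarray, width: int = 20):
    X = np.stack([series[i:i + width] for i in range(len(series) - width)])
    y = series[width:]
    return X[..., None], y                  # (samples, timesteps, 1), targets

def fit_imf_forecaster(imf: np.ndarray) -> keras.Model:
    X, y = make_windows(imf)
    model = keras.Sequential([
        keras.Input(shape=(X.shape[1], 1)),
        keras.layers.LSTM(32),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=10, batch_size=32, verbose=0)
    return model

# Decompose a synthetic tremor-like signal, then train one forecaster per IMF;
# the overall prediction is the sum of the per-IMF predictions.
t = np.linspace(0, 10, 2000)
tremor = 0.5 * np.sin(2 * np.pi * 8 * t) + 0.1 * np.random.randn(t.size)
imfs = EEMD().eemd(tremor)
models = [fit_imf_forecaster(imf) for imf in imfs]
```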
14 pages, 7441 KiB  
Article
Construction of a Wi-Fi System with a Tethered Balloon in a Mountainous Region for the Teleoperation of Vehicular Forestry Machines
by Gyun-Hyung Kim, Hyeon-Seung Lee, Ho-Seong Mun, Jae-Heun Oh and Beom-Soo Shin
Forests 2024, 15(11), 1994; https://doi.org/10.3390/f15111994 - 12 Nov 2024
Viewed by 553
Abstract
In this study, a Wi-Fi system with a tethered balloon is proposed for the teleoperation of vehicular forestry machines. The system was developed to establish Wi-Fi communication for stable teleoperation at a timber harvesting site. It consisted of a helium balloon, Wi-Fi nodes, a measurement system, a global navigation satellite system (GNSS) antenna, and a wind speed sensor. The measurement system included a GNSS module, an inertial measurement unit (IMU), a data logger, and an altitude sensor. While the helium balloon carrying the Wi-Fi system was 60 m in the air, the received signal strength indicator (RSSI) was measured by moving a Wi-Fi receiver on the ground. Another GNSS set was also used to collect the latitude and longitude of the Wi-Fi receiver as it traveled. The developed Wi-Fi system with a tethered balloon can create a Wi-Fi zone of up to 1.9 ha at an average wind speed of 2.2 m/s. It is also capable of supporting the teleoperation of vehicular forestry machines with a maximum latency of 185.7 ms.
(This article belongs to the Section Forest Operations and Engineering)
Figures 1–15: concept of forest machine teleoperation using Wi-Fi on a tethered balloon; helium balloon overview with jig and Wi-Fi node details; data acquisition logic of the developed data logger; mobile mooring and console station; data collection and analysis; study site; wind velocity and balloon coordinates; roll, pitch, and yaw versus balloon altitude; system installation; schematic of latency and Wi-Fi roaming between nodes; LOS distance calculation; traveled path in planar coordinates; the created Wi-Fi zone; and overall latency versus RSSI.
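One simple way to turn the logged RSSI and receiver positions described above into a coverage figure is to threshold the RSSI samples and take the area of the convex hull of the usable positions. The sketch below assumes synchronized position and RSSI arrays; the -75 dBm threshold and the synthetic data are illustrative assumptions, not the paper's processing chain.

```python
# Hedged sketch: estimate a usable Wi-Fi zone area (in hectares) from logged
# RSSI samples and planar receiver coordinates. The -75 dBm threshold and the
# synthetic inputs are illustrative assumptions only.
import numpy as np
from scipy.spatial import ConvexHull

def wifi_zone_area_ha(xy_m: np.ndarray, rssi_dbm: np.ndarray,
                      threshold_dbm: float = -75.0) -> float:
    """xy_m: (N, 2) receiver positions in metres; rssi_dbm: (N,) RSSI samples."""
    usable = xy_m[rssi_dbm >= threshold_dbm]
    if len(usable) < 3:
        return 0.0
    hull = ConvexHull(usable)
    return hull.volume / 10_000.0      # ConvexHull.volume is the 2D area (m^2)

# Example with synthetic points scattered around the balloon's ground position.
rng = np.random.default_rng(0)
xy = rng.uniform(-120, 120, size=(500, 2))
rssi = -40.0 - 0.25 * np.linalg.norm(xy, axis=1) + rng.normal(0, 3, 500)
print(f"estimated zone: {wifi_zone_area_ha(xy, rssi):.2f} ha")
```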
21 pages, 5673 KiB  
Article
HaptiScan: A Haptically-Enabled Robotic Ultrasound System for Remote Medical Diagnostics
by Zoran Najdovski, Siamak Pedrammehr, Mohammad Reza Chalak Qazani, Hamid Abdi, Sameer Deshpande, Taoming Liu, James Mullins, Michael Fielding, Stephen Hilton and Houshyar Asadi
Robotics 2024, 13(11), 164; https://doi.org/10.3390/robotics13110164 - 10 Nov 2024
Viewed by 1007
Abstract
Medical ultrasound is a widely used diagnostic imaging modality that provides real-time imaging at a relatively low cost. However, its widespread application is hindered by the need for expert operation, particularly in remote regional areas where trained sonographers are scarce. This paper presents the development of HaptiScan, a state-of-the-art telerobotic ultrasound system equipped with haptic feedback. The system utilizes a commercially available robotic manipulator, the UR5 robot from Universal Robots, integrated with a force/torque sensor and the Phantom Omni haptic device. This configuration enables skilled sonographers to remotely conduct ultrasound procedures via an internet connection, addressing both the geographic and ergonomic limitations faced in traditional sonography. Key innovative features of the system include real-time force feedback, ensuring that sonographers can precisely control the ultrasound probe from a remote location. The system is further enhanced by safety measures such as over-force sensing, patient discomfort monitoring, and emergency stop mechanisms. Quantitative indicators of the system’s performance include successful teleoperation over long distances with time delays, as demonstrated in simulations. These simulations validate the system’s control methodologies, showing stable performance with force feedback under varying time delays and distances. Additionally, the UR5 manipulator’s precision, kinematic, and dynamic models are mathematically formulated to optimize teleoperation. The results highlight the effectiveness of the proposed system in overcoming the technical challenges of remote ultrasound procedures, offering a viable solution for real-world telemedicine applications.
(This article belongs to the Special Issue Development of Biomedical Robotics)
Figures 1–14: graphical abstract of the proposed methodology; the haptically enabled robotic ultrasound platform and its CAD model; kinematic and vectorial representations of the Phantom Omni; the UR5 model with DH coordinate frame assignments; the Signostics Signos RT handheld ultrasound device and the probe support mechanism with the ATI Nano 17 sensor; teleoperation system scheme; SimMechanics model of the Phantom Omni; time delay; Cartesian position, orientation, and velocity plots for the manipulators; joint angles and velocities of the master and slave manipulators; and force error observed under varying time delays.
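The kinematic model mentioned above is typically expressed with Denavit–Hartenberg (DH) parameters; a generic forward-kinematics sketch for a UR5-like arm is given below. The DH values are the commonly published nominal UR5 numbers and should be verified against the actual robot; this is not the HaptiScan implementation.

```python
# Hedged sketch: Denavit-Hartenberg forward kinematics for a UR5-like arm.
# The a/d/alpha values are nominal published UR5 parameters (verify for your
# robot); this is a generic illustration, not the HaptiScan code.
import numpy as np

# (theta_offset, d, a, alpha) per joint, in metres and radians
UR5_DH = [
    (0.0, 0.089159,  0.0,      np.pi / 2),
    (0.0, 0.0,      -0.425,    0.0),
    (0.0, 0.0,      -0.39225,  0.0),
    (0.0, 0.10915,   0.0,      np.pi / 2),
    (0.0, 0.09465,   0.0,     -np.pi / 2),
    (0.0, 0.0823,    0.0,      0.0),
]

def dh_matrix(theta, d, a, alpha):
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(q):
    """q: six joint angles (rad) -> 4x4 base-to-tool transform."""
    T = np.eye(4)
    for qi, (offset, d, a, alpha) in zip(q, UR5_DH):
        T = T @ dh_matrix(qi + offset, d, a, alpha)
    return T

print(forward_kinematics(np.zeros(6))[:3, 3])   # tool position at the zero pose
```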
20 pages, 27274 KiB  
Article
Subtask-Based Usability Evaluation of Control Interfaces for Teleoperated Excavation Tasks
by Takumi Nagate, Hikaru Nagano, Yuichi Tazaki and Yasuyoshi Yokokohji
Robotics 2024, 13(11), 163; https://doi.org/10.3390/robotics13110163 - 9 Nov 2024
Viewed by 782
Abstract
This study aims to experimentally determine the most suitable control interface for different subtasks in the teleoperation of construction robots in a simulation environment. We compare a conventional lever-based rate control interface (“Rate-lever”) with two alternative methods: rate control (“Rate-3D”) and position control (“Position-3D”), both using a 3D positional input device. In the experiments, participants operated a construction machine in a virtual environment and evaluated the control interfaces across three tasks: sagittal plane excavation, turning, and continuous operation. The results revealed that “Position-3D” outperformed the others for sagittal excavation, while both “Rate-lever” and “Rate-3D” were more effective for turning. Notably, “Position-3D” and “Rate-3D” can be implemented on the same input device and are easily integrated. This opens up the possibility of a hybrid interface that allows operators to obtain optimized performance in both sagittal and horizontal tasks.
(This article belongs to the Special Issue Robot Teleoperation Integrating with Augmented Reality)
Figures 1–26: typical construction equipment task consisting of consecutive subtasks; correspondence between input device and construction machinery (position–rate and position–position); the conventional rate-lever method; kinematic model of the excavator arm; correspondence between the 3D positional input device and the excavator arm; the inactive-area function of the Rate-3D method; experimental environments for the interface conditions; simulated environments for Tasks 1–3; the integrated hybrid control method; and working time, trajectory error, excavation volume, operability, ease of learning, physical demand, and mental demand results for each task (significance marked at p < 0.05 and p < 0.01).
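The practical difference between the compared interfaces lies in how a 3D input displacement is mapped to the excavator arm: position control maps displacement to a target position, while rate control maps displacement beyond an inactive zone to a velocity. The sketch below illustrates both mappings in a generic form; the gains, dead-zone radius, and time step are assumptions, not the study's settings.

```python
# Hedged sketch of the two mappings compared in the study: position control
# maps device displacement directly to a target bucket position, while rate
# control maps displacement beyond an inactive zone to a bucket velocity.
# Gains, the dead-zone radius, and the time step are illustrative values.
import numpy as np

POSITION_GAIN = 5.0        # m of bucket motion per m of device motion
RATE_GAIN = 2.0            # (m/s of bucket motion) per m of device offset
DEAD_ZONE = 0.02           # m, "inactive area" around the neutral pose
DT = 0.01                  # s, control period

def position_3d(device_disp, home_target):
    """Position-3D: the bucket target tracks the scaled device displacement."""
    return home_target + POSITION_GAIN * device_disp

def rate_3d(device_disp, current_target):
    """Rate-3D: device offset outside the dead zone commands a velocity."""
    dist = np.linalg.norm(device_disp)
    if dist < DEAD_ZONE:
        return current_target                  # inside the inactive area: hold
    direction = device_disp / dist
    velocity = RATE_GAIN * (dist - DEAD_ZONE) * direction
    return current_target + velocity * DT

disp = np.array([0.05, 0.0, 0.01])
print(position_3d(disp, np.zeros(3)), rate_3d(disp, np.zeros(3)))
```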
20 pages, 12356 KiB  
Article
Quantifying the Remote Driver’s Interaction with 5G-Enabled Level 4 Automated Vehicles: A Real-World Study
by Shuo Li, Yanghanzi Zhang, Simon Edwards and Phil Blythe
Electronics 2024, 13(22), 4366; https://doi.org/10.3390/electronics13224366 - 7 Nov 2024
Viewed by 769
Abstract
This investigation aimed to quantify the human–machine interaction between remote drivers of teleoperation systems and a Level 4 automated vehicle in a real-world setting. The primary goal was to investigate the effects of disengagement and distraction on remote driver performance and behaviour. Key findings revealed that mental disengagement, achieved through distraction via a reading task, significantly slowed the remote driver’s reaction time by an average of 5.309 s when the Level 4 automated system required intervention. Similarly, disengagement resulted in a 4.232 s delay in decision-making time for remote drivers when they needed to step in and make critical strategic decisions. Moreover, mental disengagement affected the remote drivers’ attention focus on the road and increased their cognitive workload compared to constant monitoring. Furthermore, when actively controlling the vehicle remotely, drivers experienced a higher cognitive workload than in both “monitoring” and “disengagement” conditions. The findings emphasize the importance of designing teleoperation systems that keep remote drivers actively engaged with their environment, minimise distractions, and reduce disengagement. Such designs are essential for enhancing safety and effectiveness in remote driving scenarios, ultimately supporting the successful deployment of Level 4 automated vehicles in real-world applications.
(This article belongs to the Special Issue Advanced Technologies in Intelligent Transport Systems)
Figures 1–9: the Level 4 automated vehicle and the teleoperation system; the trial route of the connected and automated logistics; the remote driver in the “monitoring” and “disengaged” conditions at the teleoperation workstation; illustrations of motor readiness time and decision-making time; and fixation duration heat maps (Tobii I-VT gaze filter, 30 px radius) for the monitoring, disengaged, and teleoperation conditions.
18 pages, 9899 KiB  
Article
A Robotic Teleoperation System with Integrated Augmented Reality and Digital Twin Technologies for Disassembling End-of-Life Batteries
by Feifan Zhao, Wupeng Deng and Duc Truong Pham
Batteries 2024, 10(11), 382; https://doi.org/10.3390/batteries10110382 - 30 Oct 2024
Viewed by 1009
Abstract
Disassembly is a key step in remanufacturing, especially for end-of-life (EoL) products such as electric vehicle (EV) batteries, which are challenging to dismantle due to uncertainties in their condition and potential risks of fire, fumes, explosions, and electrical shock. To address these challenges, this paper presents a robotic teleoperation system that leverages augmented reality (AR) and digital twin (DT) technologies to enable a human operator to work away from the danger zone. By integrating AR and DTs, the system not only provides a real-time visual representation of the robot’s status but also enables remote control via gesture recognition. A bidirectional communication framework established within the system synchronises the virtual robot with its physical counterpart in an AR environment, which enhances the operator’s understanding of both the robot and task statuses. In the event of anomalies, the operator can interact with the virtual robot through intuitive gestures based on information displayed on the AR interface, thereby improving decision-making efficiency and operational safety. The application of this system is demonstrated through a case study involving the disassembly of a busbar from an EoL EV battery. Furthermore, the performance of the system in terms of task completion time and operator workload was evaluated and compared with that of AR-based control methods without informational cues and ‘smartpad’ controls. The findings indicate that the proposed system reduces operation time and enhances user experience, demonstrating its broad application potential in complex industrial settings.
(This article belongs to the Section Battery Processing, Manufacturing and Recycling)
Figures 1–12: framework of the proposed system; the physical robot and the virtual robot registered via a Vuforia image target; key information in the AR interface (Unity and AR HMD views); the AR-based robot control method (gesture recognition and control interface); coordinate system transformation; the teleoperated human–robot collaborative disassembly platform; the busbar disassembly workflow and results; the three teleoperation methods compared; and disassembly times, average NASA RTLX scores, and NASA RTLX indicator scores for the different control methods.
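At its core, the bidirectional framework described above mirrors the physical robot's joint states into the virtual (digital twin) robot while forwarding operator commands back. The sketch below shows that pattern with two in-process queues standing in for the real transport (ROS topics, sockets, and so on); all names and messages are assumptions for illustration.

```python
# Hedged sketch of the bidirectional sync pattern described above: one loop
# mirrors the physical robot's joint states into the virtual twin shown in AR,
# the other forwards operator gesture commands back to the robot. The queues
# stand in for the real transport layer (ROS topics, sockets, ...).
import queue
import threading
import time

state_q = queue.Queue()        # physical robot -> virtual twin (joint states)
command_q = queue.Queue()      # operator gestures -> physical robot
virtual_joints = [0.0] * 6     # twin pose rendered in the AR interface

def twin_sync_loop(stop):
    """Keep the virtual robot's joints matched to the latest reported state."""
    while not stop.is_set():
        try:
            virtual_joints[:] = state_q.get(timeout=0.1)
        except queue.Empty:
            pass                                   # no new state this tick

def command_loop(stop):
    """Forward operator gestures (e.g., 'pause', 'retract') to the robot."""
    while not stop.is_set():
        try:
            print("sending to physical robot:", command_q.get(timeout=0.1))
        except queue.Empty:
            pass

stop = threading.Event()
threads = [threading.Thread(target=f, args=(stop,)) for f in (twin_sync_loop, command_loop)]
for t in threads:
    t.start()
state_q.put([0.1, -0.5, 1.2, 0.0, 0.3, 0.0])       # simulated robot update
command_q.put("pause")                              # simulated operator gesture
time.sleep(0.3)
stop.set()
for t in threads:
    t.join()
print("twin joints:", virtual_joints)
```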
26 pages, 9199 KiB  
Article
Wireless PID-Based Control for a Single-Legged Rehabilitation Exoskeleton
by Rabé Andersson, Mikael Cronhjort and José Chilo
Machines 2024, 12(11), 745; https://doi.org/10.3390/machines12110745 - 22 Oct 2024
Viewed by 770
Abstract
The demand for remote rehabilitation is increasing, creating opportunities for convenient and effective home-based therapy for sick and elderly patients. In this study, we use AnyBody simulations to analyze muscle activity and determine key parameters for designing a rehabilitation exoskeleton, as well as to select the appropriate motor torque to assist patients during rehabilitation sessions. The exoskeleton was designed with a PID control mechanism for the precise management of motor positions and joint torques, and it operates in both automated and teleoperation modes. Hip and knee movements are monitored via smartphone-based IMU sensors, enabling real-time feedback. Bluetooth communication ensures seamless control during various training scenarios. Our study demonstrates that remotely controlled rehabilitation systems can be implemented effectively, offering vital support not only during global health crises such as pandemics but also in improving the accessibility of rehabilitation services in remote or underserved areas. This approach has the potential to transform the way physical therapy is delivered, making it more accessible and adaptable to the needs of a larger patient population.
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
Figures 1–18: targeted user groups and the corresponding position and torque control strategies; link and joint (hip and knee) coordinate frames of the right leg; SolidWorks prototypes of the exoskeleton and leg; hip and knee joint trajectories of the subjects; the human model in the AnyBody modeling system with and without the exoskeleton; normalized maximum muscle activities; joint torque calculation notes and the exoskeleton prototype with a mannequin; the wireless connection and operation protocols; connections of the Arduino Nano 33 IoT, MKR CAN shield, MyActuator RMD-X8 motors, and Arduino Uno Rev3; the PID controller and hybrid position–torque control diagrams; the programming flowchart for the angular position and torque trajectories; the IMU-based measurement setup on the mannequin; and hip and knee joint trajectories in teleoperation mode.
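The joint control described above is a standard PID position loop. A generic discrete version tracking a hip-angle setpoint on a toy first-order joint model is sketched below; the gains and plant dynamics are illustrative, not the paper's tuned values or motor model.

```python
# Hedged sketch of a discrete PID position loop like the one used for the
# exoskeleton joints. Gains and the first-order joint model are illustrative;
# they are not the tuned values or motor dynamics from the paper.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.01
pid = PID(kp=8.0, ki=2.0, kd=0.5, dt=dt)
angle, velocity = 0.0, 0.0                    # toy joint state (deg, deg/s)
for step in range(500):                       # 5 s of simulated control
    torque = pid.update(setpoint=30.0, measurement=angle)   # 30 deg hip target
    velocity += (torque - 0.8 * velocity) * dt               # toy joint dynamics
    angle += velocity * dt
print(f"angle after 5 s: {angle:.1f} deg")
```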
14 pages, 13034 KiB  
Article
Learning Underwater Intervention Skills Based on Dynamic Movement Primitives
by Xuejiao Yang, Yunxiu Zhang, Rongrong Li, Xinhui Zheng and Qifeng Zhang
Electronics 2024, 13(19), 3860; https://doi.org/10.3390/electronics13193860 - 29 Sep 2024
Viewed by 547
Abstract
Improving the autonomy of underwater interventions by remotely operated vehicles (ROVs) can help mitigate the impact of communication delays on operational efficiency. Currently, underwater interventions by ROVs usually rely on real-time teleoperation or preprogramming by operators, which is time-consuming, increases the cognitive burden on operators, and requires extensive specialized programming. Instead, this paper adopts the intuitive learning-from-demonstration (LfD) approach, which takes operator demonstrations as input and models the trajectory characteristics of the task with the dynamic movement primitive (DMP) method for task reproduction and for generalizing knowledge to new environments. Unlike existing applications of DMP-based robot trajectory learning methods, we propose the underwater DMP (UDMP) method to address the problem that the complexity and stochasticity of underwater operational environments (e.g., current perturbations and floating operations) diminish the representativeness of the demonstrated trajectories. First, a Gaussian mixture model (GMM) and Gaussian mixture regression (GMR) are used for feature extraction from multiple demonstration trajectories to obtain typical trajectories as inputs to the DMP method. The UDMP method is more suitable for LfD of underwater interventions than methods that directly learn the nonlinear term of the DMP. In addition, we improve the commonly used homomorphic teleoperation mode to a heteromorphic mode, which allows the operator to focus more on the end-operation task. Finally, the effectiveness of the developed method is verified by simulation experiments.
Figures 1–12: components of an underwater teleoperation system; overview of the learning framework; composition of the experimental system; position and orientation of the demonstration trajectories; GMM–GMR preprocessing of the demonstration trajectories (position and orientation); the nonlinear forcing terms in the DMP models of the demonstrations; position and orientation trajectories and errors reproduced by the DMP and UDMP methods; and generalization of the two methods to a new target position.
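For readers unfamiliar with the DMP formulation that UDMP builds on, the sketch below rolls out a one-dimensional discrete DMP: a critically damped point attractor shaped by a learned forcing term over Gaussian basis functions. The parameters are illustrative, the weights are left at zero, and the GMM-GMR preprocessing that distinguishes UDMP is not included.

```python
# Hedged sketch of a 1-D dynamic movement primitive (DMP): a critically damped
# point attractor plus a forcing term shaped by Gaussian basis functions.
# Parameters are illustrative; the paper's UDMP additionally preprocesses
# multiple demonstrations with GMM-GMR before fitting.
import numpy as np

alpha_z, beta_z, alpha_x, tau = 25.0, 25.0 / 4.0, 3.0, 1.0
n_basis, dt = 20, 0.002
c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))       # basis centres in x
h = n_basis / c                                          # basis widths

def forcing(x, w, y0, g):
    psi = np.exp(-h * (x - c) ** 2)
    return (psi @ w) / psi.sum() * x * (g - y0)

def rollout(w, y0, g, steps=500):
    y, z, x, traj = y0, 0.0, 1.0, []
    for _ in range(steps):
        f = forcing(x, w, y0, g)
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)                   # canonical system
        traj.append(y)
    return np.array(traj)

# With zero weights the DMP converges smoothly from y0 to the goal g; fitting
# w to a demonstration (e.g., by locally weighted regression) reproduces and
# generalizes the demonstrated trajectory shape.
print(rollout(np.zeros(n_basis), y0=0.0, g=0.4)[-1])
```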
27 pages, 28326 KiB  
Article
Full-Body Pose Estimation of Humanoid Robots Using Head-Worn Cameras for Digital Human-Augmented Robotic Telepresence
by Youngdae Cho, Wooram Son, Jaewan Bak, Yisoo Lee, Hwasup Lim and Youngwoon Cha
Mathematics 2024, 12(19), 3039; https://doi.org/10.3390/math12193039 - 28 Sep 2024
Cited by 1 | Viewed by 888
Abstract
We envision a telepresence system that enhances remote work by facilitating both physical and immersive visual interactions between individuals. However, during robot teleoperation, communication often lacks realism, as users see the robot’s body rather than the remote individual. To address this, we propose a method for overlaying a digital human model onto a humanoid robot using XR visualization, enabling an immersive 3D telepresence experience. Our approach employs a learning-based method to estimate the 2D poses of the humanoid robot from head-worn stereo views, leveraging a newly collected dataset of full-body poses for humanoid robots. The stereo 2D poses and sparse inertial measurements from the remote operator are optimized to compute 3D poses over time. The digital human is localized from the perspective of a continuously moving observer, utilizing the estimated 3D pose of the humanoid robot. Our moving camera-based pose estimation method does not rely on any markers or external knowledge of the robot’s status, effectively overcoming challenges such as marker occlusion, calibration issues, and dependencies on headset tracking errors. We demonstrate the system in a remote physical training scenario, achieving real-time performance at 40 fps, which enables simultaneous immersive and physical interactions. Experimental results show that our learning-based 3D pose estimation method, which operates without prior knowledge of the robot, significantly outperforms alternative approaches requiring the robot’s global pose, particularly during rapid headset movements, achieving markerless digital human augmentation from head-worn views.
(This article belongs to the Topic Extended Reality: Models and Applications)
Figures 1–11: overview of the digital human-augmented robotic telepresence system and its local and remote users; the three humanoid robot versions used; the digital human avatar rescaled to the robot; the telepresence and capture prototypes (six IMUs and an Apple Vision Pro for the remote person, XReal Light AR glasses and head-mounted stereo cameras for the local person); the 2D joint detector network (Hourglass and DSNT modules); the full-body 3D pose estimation pipeline; qualitative joint detection results from head-worn views; SMPL overlay comparisons in world space and head-worn views against Mocap and Smplify baselines; and digital human visualizations captured through the XR glasses.
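A core geometric step in the pipeline above is lifting matched 2D joint detections from the head-worn stereo pair into 3D. A minimal linear (DLT) triangulation is sketched below; the camera matrices and the test point are placeholders, and the full system additionally fuses inertial measurements and optimizes poses over time.

```python
# Hedged sketch of linear (DLT) triangulation of one joint from a calibrated
# stereo pair, the geometric core of lifting 2D keypoints to 3D. The projection
# matrices and the test point below are placeholders for illustration.
import numpy as np

def triangulate(P_left, P_right, uv_left, uv_right):
    """Solve A X = 0 for the homogeneous 3D point via SVD."""
    A = np.stack([
        uv_left[0]  * P_left[2]  - P_left[0],
        uv_left[1]  * P_left[2]  - P_left[1],
        uv_right[0] * P_right[2] - P_right[0],
        uv_right[1] * P_right[2] - P_right[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy calibration: identical intrinsics, right camera shifted 0.1 m along x.
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

point = np.array([0.2, -0.1, 2.0, 1.0])            # ground-truth joint position
uv_l = P_left @ point
uv_l = uv_l[:2] / uv_l[2]
uv_r = P_right @ point
uv_r = uv_r[:2] / uv_r[2]
print(triangulate(P_left, P_right, uv_l, uv_r))    # ~ [0.2, -0.1, 2.0]
```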
18 pages, 17808 KiB  
Article
Virtual Hand Deformation-Based Pseudo-Haptic Feedback for Enhanced Force Perception and Task Performance in Physically Constrained Teleoperation
by Kento Yamamoto, Yaonan Zhu, Tadayoshi Aoyama and Yasuhisa Hasegawa
Robotics 2024, 13(10), 143; https://doi.org/10.3390/robotics13100143 - 24 Sep 2024
Viewed by 1433
Abstract
Force-feedback devices enhance task performance in most robot teleoperations. However, their increased size with additional degrees of freedom can limit the robot’s applicability. To address this, an interface that visually presents force feedback is proposed, eliminating the need for bulky physical devices. Our telepresence system renders robotic hands transparent in the camera image while displaying virtual hands. The forces applied to the robot deform these virtual hands. The deformation creates an illusion that the operator’s hands are deforming, thus providing pseudo-haptic feedback. We conducted a weight comparison experiment in a virtual reality environment to evaluate force sensitivity. In addition, we conducted an object touch experiment to assess the speed of contact detection in a robot teleoperation setting. The results demonstrate that our method significantly surpasses conventional pseudo-haptic feedback in conveying force differences. Operators detected object touch 24.7% faster using virtual hand deformation compared to conditions without feedback. This matches the response times of physical force-feedback devices. This interface not only increases the operator’s force sensitivity but also matches the performance of conventional force-feedback devices without physically constraining the operator. Therefore, the interface enhances both task performance and the experience of teleoperation.
(This article belongs to the Special Issue Extended Reality and AI Empowered Robots)
Graphical abstract and Figures 1–11: the interface presenting pseudo-haptic feedback through virtual hand deformation; the teleoperation system (HTC VIVE Pro Eye HMD, quantum metagloves for hand tracking, Universal Robots arms, ROS integration); the chroma-keyed transparent robotic hand overlaid with the virtual hand; the offset between pseudo-haptic feedback and fingertip movement; virtual finger deformation in Experiments 1 and 2; the weight comparison environment and the deformation appearance for each offset; correct answer rates and perceived-weight ratings (Likert scale) for pseudo-haptic feedback versus virtual finger deformation; the force-feedback experimental environment and operator viewpoints; and times to recognize plug contact and insertion under the no-feedback, visual-force-feedback, and force-feedback conditions (significance marked at p < 0.05 and p < 0.01).
18 pages, 1924 KiB  
Article
Safety, Efficiency, and Mental Workload in Simulated Teledriving of a Vehicle as Functions of Camera Viewpoint
by Oren Musicant, Assaf Botzer and Bar Richmond-Hacham
Sensors 2024, 24(18), 6134; https://doi.org/10.3390/s24186134 - 23 Sep 2024
Viewed by 609
Abstract
Teleoperation services are expected to operate on-road and often in urban areas. In current teleoperation applications, teleoperators gain a higher viewpoint of the environment from a camera on the vehicle’s roof. However, it is unclear how this viewpoint compares to a conventional viewpoint in terms of safety, efficiency, and mental workload. In the current study, teleoperators (n = 148) performed driving tasks in a simulated urban environment with a conventional viewpoint (i.e., the simulated camera was positioned inside the vehicle at the height of a driver’s eyes) and a higher viewpoint (the simulated camera was positioned on the vehicle roof). The tasks required negotiating road geometry and other road users. At the end of the session, participants completed the NASA-TLX questionnaire. Results showed that participants completed most tasks faster with the higher viewpoint and reported lower frustration and mental demand. The camera position affected neither collision rates nor the probability of hard braking and steering events. We conclude that a viewpoint from the vehicle roof may improve teleoperation efficiency without compromising driving safety, while also lowering the teleoperators’ mental workload. Full article
(This article belongs to the Special Issue On-Board and Remote Sensors in Intelligent Vehicles-2nd Edition)
Figure 1. Left panel: position of the simulated cameras. Right panel: teledriver (top right) and driver (bottom right) viewpoints on three 27″ monitors with a forward field of view of 135°.
Figure 2. A map of the simulated route and significant points along it. Red arrows indicate three key locations where navigation errors sometimes occurred because participants did not respond correctly to the direction signs (with blue backgrounds). These signs are depicted at the bottom of the figure.
Figure 3. Completion time ratio between the teledriver and driver viewpoints (x-axis) by driving challenge (y-axis). Notes: (1) The ratio estimates are based on a mixed-effects model to control for repeated observations. (2) Asterisks represent statistical significance: * p < 0.05, ** p < 0.01, *** p < 0.001. (3) Below each confidence interval line, we specify the mean [SD] time (in seconds) to complete the corresponding challenge with the teledriver (numerator) and driver (denominator) viewpoints. The estimates of the mixed-effects model (see note 1) differ slightly from the simple ratio of the means; for example, for pedestrian crossing (last line in Figure 3), the mixed-model estimate of 1.03 differs from the ratio of the 12.9 s and 10.9 s written below it.
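One plausible way to reproduce ratio estimates of this kind is to fit a mixed-effects model to log-transformed completion times with a random intercept per participant and then exponentiate the viewpoint coefficient. The sketch below uses statsmodels on synthetic stand-in data; the column names, challenge labels, and effect sizes are assumptions, not the study’s data or exact analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per participant x challenge x viewpoint.
rng = np.random.default_rng(1)
rows = []
for pid in range(40):
    for challenge, base in [("roundabout", 20.0), ("parking", 35.0), ("pedestrian_crossing", 12.0)]:
        for viewpoint in ["driver", "teledriver"]:
            speedup = 0.9 if viewpoint == "teledriver" else 1.0  # toy effect
            rows.append({"participant": pid, "challenge": challenge, "viewpoint": viewpoint,
                         "completion_time": base * speedup * rng.lognormal(sigma=0.15)})
df = pd.DataFrame(rows)
df["log_ct"] = np.log(df["completion_time"])

# A random intercept per participant controls for repeated observations;
# exponentiating the viewpoint coefficient yields a teledriver/driver time ratio.
model = smf.mixedlm("log_ct ~ C(viewpoint, Treatment('driver')) + C(challenge)",
                    data=df, groups=df["participant"])
result = model.fit()
print(np.exp(result.params.filter(like="viewpoint")))
```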
Figure 4. Survival analysis. The survival probability is on the y-axis and the route distance is on the x-axis. Note: the two-sided horizontal arrows designate the driving challenges, with longer arrows for longer road segments.
Figure 5. The probability (y-axis) of braking events (left panel) and steering events (right panel) as a function of camera viewpoint (separate lines) and acceleration threshold (ranging from 2 to 8 m/s² on the x-axis).
Figure 6. The ratio of maximal braking (left panel) and steering (right panel) intensity between the teledriver and driver viewpoints during the various driving challenges (y-axis). Notes: (1) Asterisks represent statistical significance: * p < 0.05, ** p < 0.01, *** p < 0.001. (2) Below each confidence interval line, we specify the mean [SD] of the maximal braking/steering intensity for the teledriver (numerator) and driver (denominator) viewpoints. The mixed-effects model estimates differ slightly from the simple ratio of the means (see the similar note below Figure 3).
Figure 7. Teledriver and driver viewpoints on the six subscales of the NASA-TLX. Note: asterisks represent statistical significance: * p < 0.05, ** p < 0.01.
23 pages, 3808 KiB  
Article
Gesture Recognition Framework for Teleoperation of Infrared (IR) Consumer Devices Using a Novel pFMG Soft Armband
by Sam Young, Hao Zhou and Gursel Alici
Sensors 2024, 24(18), 6124; https://doi.org/10.3390/s24186124 - 22 Sep 2024
Viewed by 979
Abstract
Wearable technologies represent a significant advancement in facilitating communication between humans and machines. Powered by artificial intelligence (AI), human gestures detected by wearable sensors can provide people with seamless interaction with physical, digital, and mixed environments. In this paper, the foundations of a gesture-recognition framework for the teleoperation of infrared consumer electronics are established. The framework is based on force myography data of the upper forearm, acquired from a prototype of a novel soft pressure-based force myography (pFMG) armband. The sub-processes of the framework are detailed, including the acquisition of infrared and force myography data; pre-processing; feature construction/selection; classifier selection; post-processing; and interfacing/actuation. The gesture recognition system is evaluated using force myography data obtained from 12 subjects while performing five classes of gestures. Our results demonstrate average inter-session and inter-trial gesture recognition accuracies of approximately 92.2% and 88.9%, respectively. The gesture recognition framework successfully teleoperated several infrared consumer electronics as a wearable, safe, and affordable human–machine interface system. The contribution of this study centres on proposing and demonstrating a user-centred design methodology that allows direct human–machine interaction and interfacing for applications where humans and devices are in the same loop or coexist, as typified by users and infrared-communicating devices in this study. Full article
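To make the windowed-feature-plus-classifier stage of such a framework concrete, the sketch below computes simple per-channel features for each force myography window and trains an LDA classifier (the model whose confusion matrix appears in Figure 8) with scikit-learn. The feature set, array shapes, and synthetic data are illustrative assumptions rather than the paper’s exact configuration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def window_features(window):
    """Simple per-channel features for one FMG window of shape (samples, channels)."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.sqrt((window ** 2).mean(axis=0))])  # RMS

# X_windows: (n_windows, samples_per_window, n_channels); y: gesture labels.
# Random data stands in for armband recordings here.
rng = np.random.default_rng(0)
X_windows = rng.normal(size=(200, 135, 8))
y = rng.integers(0, 5, size=200)

X = np.array([window_features(w) for w in X_windows])
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
clf.fit(X[:150], y[:150])
print("held-out accuracy:", accuracy_score(y[150:], clf.predict(X[150:])))
```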
(This article belongs to the Special Issue Intelligent Human-Computer Interaction Systems and Their Evaluation)
Figure 1. Novel pneumatic myography (PMG) armband [13].
Figure 2. Gesture recognition and device teleoperation framework.
Figure 3. Gestures performed by subjects (left to right): Wave In, Wave Out, Fist, Spread Fingers, and Pinch.
Figure 4. Sliding window implementation with 50% overlap (not to scale).
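A sliding window with 50% overlap, as depicted in Figure 4, can be generated as follows; the window length, sampling rate, and channel count in the example are placeholders.

```python
import numpy as np

def sliding_windows(signal, window_len, overlap=0.5):
    """Split a (samples, channels) signal into overlapping windows.

    With overlap=0.5 each window starts half a window after the previous one,
    matching the 50% overlap shown in Figure 4.
    """
    step = max(1, int(window_len * (1.0 - overlap)))
    starts = range(0, signal.shape[0] - window_len + 1, step)
    return np.stack([signal[s:s + window_len] for s in starts])

# Example: 135 samples of 8-channel data (placeholder rate and channel count),
# split into 20-sample windows with 50% overlap.
signal = np.zeros((135, 8))
print(sliding_windows(signal, window_len=20).shape)  # (12, 20, 8)
```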
Figure 5. Comparison between dynamic gestures (top row) and quasi-dynamic gestures (bottom row).
Figure 6. Characterisation of gesture recordings for one trial (2.7 s of data per gesture).
Figure 7. Sample of a participant’s classifier model results with respect to accuracy, log loss, and training/prediction time; note the different scales. (a) Inter-trial; (b) inter-session.
Figure 8. Normalized confusion matrix for the given participant sample with the LDA classifier model.
Figure 9. Demonstration of customised gestures for device teleoperation; gestures for shaka, peace, okay, and rest (control) are depicted here.
Figure 10. Operation of an LED strip, TV set-top box, movie hard drive player, and laser tag equipment, from left to right. Also shown are additional custom gestures that are intuitive to the application (e.g., a pistol gesture for the shooting mechanic in the laser tag game).
Figure 11. Spatial augmentation demonstration. Top row: a Fist gesture augmented by arm orientation changes the LED light colour. Bottom row: without an activated gesture, the orientation does nothing.
19 pages, 6078 KiB  
Article
Using a Guidance Virtual Fixture on a Soft Robot to Improve Ureteroscopy Procedures in a Phantom
by Chun-Feng Lai, Elena De Momi, Giancarlo Ferrigno and Jenny Dankelman
Robotics 2024, 13(9), 140; https://doi.org/10.3390/robotics13090140 - 18 Sep 2024
Viewed by 893
Abstract
Manipulating a flexible ureteroscope is difficult, due to its bendable body and hand–eye coordination problems, especially when exploring the lower pole of the kidney. Though robotic interventions have been adopted in various clinical scenarios, they are rarely used in ureteroscopy. This study proposes a teleoperation system consisting of a soft robotic endoscope together with a Guidance Virtual Fixture (GVF) to help users explore the kidney’s lower pole. The soft robotic arm was a cable-driven, 3D-printed design with a helicoid structure. The GVF was dynamically constructed using video streams from an endoscopic camera. Through a haptic controller, the GVF provided haptic feedback to guide the users in following a trajectory. In the user study, participants were asked to follow trajectories while the soft robotic arm was in a retroflex posture. The results suggest that the GVF can reduce errors in trajectory tracking tasks once users receive proper training and gain more experience. Based on the NASA Task Load Index questionnaires, most participants preferred having the GVF when manipulating the robotic arm. In conclusion, the results demonstrate the benefits and potential of using a robotic arm with a GVF. More research is needed to investigate the effectiveness of GVFs and the robotic endoscope in ureteroscopic procedures. Full article
(This article belongs to the Section Soft Robotics)
Figure 1. The robotic endoscope system, ATLAScope, used to simulate a robotized ureteroscope. (1) Two stepper motors; (2) two pulleys; (3) cable tunnels to guide the driving cables; (4) soft robotic arm with the HelicoFlex design, with a total length of 90 mm and a steerable segment of 70 mm; (5) miniaturized endoscopic camera and the two bending directions of the robotic arm.
Figure 2. The teleoperation system with the GVF consists of ATLAScope, a haptic controller, and a communication channel. The user commands the haptic controller with u̇_c, the velocity of the tip of the haptic controller. This movement is translated into ẏ_i, the desired velocity of the target in the image space. The velocity of the motors, θ̇, in the actuation space is then determined by the Moore–Penrose inverse of the model-free Jacobian matrix, J†_free. After the motors move the tip of the endoscopic camera to a new position t, the camera captures a new image. The Segmentation and Target Detection module processes this new image and returns a new target vector, p_i, which is the shortest vector from the center of the image to the route. Within the Virtual Fixture module, this target vector is translated into a force f_c by the spring–damper model F(·) and exerted on the user. K, k, and ξ are the working-space transformation matrix, the spring constant, and the damping constant, respectively. Ω denotes a coordinate space, and its superscripts C, I, A, and E stand for controller, image, actuation, and end-effector, respectively.
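The two computations in this loop (mapping the desired image-space target velocity to motor velocities through the Moore–Penrose pseudoinverse of the Jacobian estimate, and turning the guidance vector into a controller force via a spring–damper law) can be sketched in a few lines of NumPy. The Jacobian entries, the gains k and ξ, and the scaling matrix K below are placeholders, not values identified in the paper.

```python
import numpy as np

# Placeholder model-free Jacobian estimate: image-space motion per unit motor motion.
J_free = np.array([[0.8, 0.1],
                   [-0.2, 0.9]])

def motor_velocity(y_dot_i):
    """theta_dot = pinv(J_free) @ y_dot_i (Moore-Penrose pseudoinverse)."""
    return np.linalg.pinv(J_free) @ y_dot_i

def guidance_force(p_i, p_i_prev, dt, K=np.eye(2), k=0.5, xi=0.05):
    """Spring-damper force on the haptic controller from the image-space guidance vector.

    p_i is the shortest vector from the image centre to the route; K maps image
    coordinates into the controller workspace. k and xi are illustrative gains.
    """
    p_c = K @ p_i
    p_c_dot = K @ (p_i - p_i_prev) / dt
    return k * p_c + xi * p_c_dot

# One control tick: command a small image-space velocity and compute the guidance force.
theta_dot = motor_velocity(np.array([2.0, -1.0]))  # image units/s -> motor units/s
f_c = guidance_force(np.array([15.0, 5.0]), np.array([18.0, 6.0]), dt=0.02)
print(theta_dot, f_c)
```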
Figure 3. GVF coordinate transformation between the image space Ω^I (left) and the controller space Ω^C (right) by the scaling transformation matrix K. Both the teleoperation manipulation and the GVF rely on information in Ω^I. To link Ω^I with Ω^C, u_c in Ω^C is projected onto the x–y plane to form u_m. Using the space transformation matrix K, u_m is transformed into the desired target movement y_i in Ω^I. Conversely, the guidance vector p_i created by the GVF in Ω^I can be transformed into Ω^C as p_c using K⁻¹. In the top figure, R is the set of two-dimensional vectors of the segmented route. c, r_s, and p_i are the center of the image, the closest point in R to c, and the guidance vector, respectively.
Figure 4. Experimental set-up with the flexible arm bent in a retroflex posture. (1) Soft robotic arm; (2) 3D-printed fixture mold to restrict the movement of the soft robotic arm; (3) tip of the robotic arm, equipped with a miniaturized endoscopic camera, in a retroflex posture; (4) target plane with a triangle or oval route; (5) the two designed routes and their dimensions.
Figure 5. A flow diagram showing the user study protocol. After the first training session, the participants are divided into two groups (Group A and Group B). Each group has two sets of runs: one set of Control tasks (Control) and one set of Guidance Virtual Fixture tasks (GVF-on). Within each set, there are two different routes (Oval Route and Triangle Route) that participants had to repeat five times, after which they filled in one NASA TLX questionnaire. Finally, all participants filled in a Comparison Questionnaire. A crossover group is highlighted within the dashed line, and the colors and dashed lines represent the groups of data shown in the following figures.
Figure 6. Box-and-whisker plots comparing overall results for the three performance metrics: Completion Time (CT), Mean Absolute Error (MAE), and Max Error (ME). Blue represents the Control set, and orange represents the GVF-on set. Boxes with slashes show the results for the Triangle Route. Hollow circles are outliers, and the black horizontal lines in the boxes indicate the median values.
Figure 7. Results of the crossover groups in each run. The box-and-whisker plots show the three performance metrics (Completion Time, Mean Absolute Error, Max Error) per run with respect to the Oval Route (upper row) and the Triangle Route (lower row). Blue: Control; orange: GVF-on. Darker color tones: Group A; lighter tones: Group B. (*) p < 0.05. (a) First crossover group. (b) Second crossover group. (c) First crossover group. (d) Second crossover group.
Figure 8. Results of each crossover group, compared along two dimensions: within its crossover group and between Control and GVF-on. Blue: Control; orange: GVF-on. Darker tones: Group A; lighter tones: Group B. Boxes without slashes: Oval Route; boxes with slashes: Triangle Route. (*) p < 0.05 and (**) p < 0.01.
Figure 9. Bar plots representing the results of the NASA TLX questionnaires. Left: all participants; middle: Group A; right: Group B. The bars and error bars show the mean and standard error of each TLX index, respectively. Note: scales are converted to percentages.
Figure 10. Bar plot showing the results of the Comparison Questionnaires. The bars show the participants’ preferences between the two sets of tasks with respect to the six task load indexes, as well as their general preference between the two sets of tasks.