Search Results (110)

Search Parameters:
Keywords = dual-arm robot

26 pages, 8608 KiB  
Article
Manipulability-Aware Task-Oriented Grasp Planning and Motion Control with Application in a Seven-DoF Redundant Dual-Arm Robot
by Ching-Chang Wong, Chi-Yi Tsai, Yu-Cheng Lai and Shang-Wen Wong
Electronics 2024, 13(24), 5025; https://doi.org/10.3390/electronics13245025 - 20 Dec 2024
Abstract
Task-oriented grasp planning poses complex challenges in modern robotics, requiring the precise determination of the grasping pose of a robotic arm to grasp objects with a high level of manipulability while avoiding hardware constraints, such as joint limits, joint over-speeds, and singularities. This paper introduces a novel manipulability-aware (M-aware) grasp planning and motion control system for seven-degree-of-freedom (7-DoF) redundant dual-arm robots to achieve task-oriented grasping with optimal manipulability. The proposed system consists of two subsystems: (1) M-aware grasp planning and (2) M-aware motion control. The former predicts task-oriented grasp candidates from an RGB-D image and selects the best grasping pose among the candidates. The latter enables the robot to select an appropriate arm to perform the grasping task while maintaining a high level of manipulability. To achieve this goal, we propose a new manipulability evaluation function that evaluates the manipulability score (M-score) of a given robot arm configuration with respect to a desired grasping pose, ensuring safe grasping actions and avoiding joint limits and singularities. Experimental results demonstrate that our system can autonomously detect the graspable areas of a target object, select an appropriate grasping pose, grasp the target with a high level of manipulability, and achieve an average success rate of about 98.6%.
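As a rough illustration of the quantities an M-score combines, the sketch below pairs the standard Yoshikawa manipulability measure with a joint-limit score bounded in [0, 1] (1 at mid-range, 0 at a limit). The paper defines its own evaluation functions, so the joint-limit formula here is an assumed stand-in, not the authors' expression.

```python
import numpy as np

def manipulability(J):
    """Yoshikawa manipulability measure w = sqrt(det(J J^T))."""
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

def joint_limit_score(q, q_min, q_max):
    """Bounded joint-limit score in [0, 1]: 1 at mid-range, 0 at a limit.
    The paper's JL-score is likewise bounded; this product-of-parabolas
    form is an illustrative stand-in, not the authors' exact function."""
    q, q_min, q_max = map(np.asarray, (q, q_min, q_max))
    s = 4 * (q - q_min) * (q_max - q) / (q_max - q_min) ** 2
    return float(np.clip(s, 0.0, 1.0).prod())
```

A configuration far from every joint limit scores 1 and degrades smoothly toward 0 at a limit, which is what makes the score easy to combine multiplicatively with a manipulability term.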
Figure 1. Architecture of the proposed M-aware grasp planning and motion control system.
Figure 2. The JL-score plot of the joint limits evaluation function proposed in (a) previous studies [22,23] and (b) this work. The evaluation function in (a) features an unbounded value range, making it challenging to quantitatively assess the degree of joint limits. In contrast, the function in (b) has a bounded value range between 0 and 1, simplifying the quantitative assessment of the degree of joint limits.
Figure 3. System configuration of the lab-made 7-DoF redundant dual-arm robot used in this study. Each arm has an additional prismatic joint to adjust its z-axis position and extend the workspace along the z0-axis.
Figure 4. The S-score plot of the proposed singularity evaluation functions (15) and (16). These plots illustrate that the singularity evaluation of the robot arm is treated as a joint limit evaluation of its wrist singularity factors. Consequently, the S-score remains within the bounded range of 0 to 1.
Figure 5. System architecture of the proposed M-aware grasp planning method, which determines maximum-M-score grasp poses for the robot arm to perform object grasping tasks.
Figure 6. The result of each step in the TOGD method: (a) the input RGB image, (b) the result of Mask R-CNN, (c) the affordances of all segmented objects (the darker regions), and (d) the feasible grasp rectangles.
Figure 7. The state definition of the grasp rectangle used in this work.
Figure 8. Definition of position and orientation errors used in grasp matching: (a) position error and (b) orientation error.
Figure 9. Illustration of (a) the definition of the target pose vector used in this study and (b) the proposed grasp pose transformation from the camera frame to the robot base frame.
Figure 10. The hardware equipment and environment settings used in the experiments of this study.
Figure 11. Three target objects used in the experiment and their default affordance masks: (a) a PET bottle with cap and body masks, (b) a cup with mouth and body masks, and (c) a scraper with handle and head masks.
Figure 12. Comparison between the proposed TOGD method and the existing Grasp R-CNN method: (a) input image showing two target objects (bottle and cup) and three non-target objects (vacuum gripper, ruler, and screwdriver); (b) task-agnostic grasp detection results of the Grasp R-CNN method, presenting multiple possible grasp rectangles for target and non-target objects; (c) object detection results obtained by the Mask R-CNN method; and (d) task-oriented grasp detection results of the proposed TOGD method, showing multiple grasp rectangles only for the two target objects.
Figure 13. A virtual 7-DoF dual-arm robot model used to test the performance of all compared methods in the Gazebo simulator.
Figure 14. Comparison results of the manipulability evolution of each control method in (a) Test 1 and (b) Test 2.
Figure 15. Comparison results of the angle evolution of all joints in Test 3 (a) without and (b) with the proposed JOSI method.
Figure 16. Experimental setup for verifying the performance of the proposed M-aware grasp planning and motion control system, in which the planar platform in front of the robot is divided into five regions (A to E) for object grasping tests.
Figure 17. Demonstration of the proposed system controlling the dual-arm robot to perform a task-oriented pouring action: (a) system initialization; (b,c) the robot uses its left arm to grasp the bottle; (d,e) the robot uses its right arm to grasp the cup; (f) the robot prepares to perform the pouring action; (g-i) the robot performs the pouring action.
22 pages, 45649 KiB  
Article
A Whole-Body Coordinated Motion Control Method for Highly Redundant Degrees of Freedom Mobile Humanoid Robots
by Hao Niu, Xin Zhao, Hongzhe Jin and Xiuli Zhang
Biomimetics 2024, 9(12), 766; https://doi.org/10.3390/biomimetics9120766 - 16 Dec 2024
Abstract
Humanoid robots are becoming a global research focus. Due to the limitations of bipedal walking technology, mobile humanoid robots equipped with a wheeled chassis and dual arms have emerged as the most suitable configuration for performing complex tasks in factory or home environments. To address the high redundancy arising from the wheeled chassis and dual-arm design of mobile humanoid robots, this study proposes a whole-body coordinated motion control algorithm based on arm potential energy optimization. By constructing a gravity potential energy model for the arms and a virtual torsional spring elastic potential energy model with the shoulder-wrist line as the rotation axis, we establish an optimization index function for the arms. A neural network with variable stiffness is introduced to fit the virtual torsional spring, representing the stiffness variation trend of the human arm. Additionally, a posture mapping method is employed to map the human arm potential energy model to the robot, enabling realistic humanoid movements. Combining task-space and joint-space planning algorithms, we designed experiments for single-arm manipulation, independent object retrieval, and dual-arm carrying in a simulation of a 23-degree-of-freedom mobile humanoid robot. The results validate the effectiveness of the approach, demonstrating smooth motion, the ability to maintain a low potential energy state, and conformity to the operational characteristics of the human arm.
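The optimization index described above, gravity potential energy plus a virtual torsional spring about the shoulder-wrist line, can be sketched roughly as follows. The mass and stiffness constants and the choice of "elbow-down" as the zero-energy swivel reference are assumptions for illustration, not values or definitions from the paper.

```python
import numpy as np

def arm_potential_index(elbow, shoulder, wrist, m_arm=2.0, k=1.5, g=9.81):
    """Illustrative optimization index combining (i) gravity potential energy
    of the arm (approximated by elbow height) and (ii) a virtual torsional
    spring about the shoulder-wrist line that penalizes elbow rotation away
    from the lowest-energy, elbow-down posture. m_arm and k are assumed
    constants, not values from the paper."""
    elbow, shoulder, wrist = map(np.asarray, (elbow, shoulder, wrist))
    e_grav = m_arm * g * elbow[2]                 # gravity PE ~ elbow height
    axis = wrist - shoulder
    axis = axis / np.linalg.norm(axis)
    r = elbow - shoulder
    r_perp = r - (r @ axis) * axis                # elbow offset normal to axis
    down = np.array([0.0, 0.0, -1.0])
    down_perp = down - (down @ axis) * axis       # "elbow-down" reference
    cosang = (r_perp @ down_perp) / (
        np.linalg.norm(r_perp) * np.linalg.norm(down_perp) + 1e-12)
    phi = np.arccos(np.clip(cosang, -1.0, 1.0))   # swivel angle about the axis
    e_spring = 0.5 * k * phi ** 2                 # virtual torsional spring PE
    return e_grav + e_spring
```

Minimizing such an index over the arm's null space drives the elbow toward a low, human-like posture, which matches the low-potential-energy behavior the abstract reports.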
Graphical abstract
Figure 1. Whole-body coordinated motion control architecture for the mobile humanoid robot.
Figure 2. Illustration of grasping experiments at different heights.
Figure 3. Violin plots of elbow and wrist height variation during arm movement. (a) Changes in wrist and elbow height in the desktop grasping experiment. (b) Changes in wrist and elbow height in the lift-arm experiment.
Figure 4. Illustration of the arm angle.
Figure 5. Human–robot posture mapping process.
Figure 6. Neural network for virtual stiffness fitting. (a) Neural network architecture used for the virtual torsional spring fitting process. (b) Loss variation during training.
Figure 7. Mobile humanoid robot model.
Figure 8. Experimental simulation scenario for the mobile humanoid robot.
Figure 9. Trajectory generalization process of the robot's right arm. (a) Generalized trajectory. (b) Execution of the robot's generalized trajectory, where stages (a)-(d), (e)-(h), and (i)-(l) show the robot performing three different generalized trajectories.
Figure 10. Obstacle avoidance process in the operational-space trajectory. (a) Obstacle avoidance trajectory. (b) Robot obstacle avoidance trajectory execution process.
Figure 11. Independent dual-arm grasping and placement process. Stages (a)-(l) represent the turning and placing process after the robot grasps the object.
Figure 12. Joint velocities of the mobile humanoid robot. (a) Left arm joint velocity. (b) Right arm joint velocity. (c) Waist joint velocity. (d) Mecanum wheel velocity.
Figure 13. Energy variation diagram. (a) Total potential energy variation curve of the arm. (b) Trend of the virtual torsional spring elastic potential energy of the arm. The blue dots represent the actual data, and the red dotted line represents the corresponding trend.
Figure 14. Comparison of the proposed method with the pseudo-inverse method. (a) Cases of motion planning failure using the pseudo-inverse method. (b) Motion effect of the pseudo-inverse method. (c) Motion effect of the proposed method.
Figure 15. Illustration of primary–secondary arm task-space trajectories.
Figure 16. Dual-arm coordinated grasping motion process.
Figure 17. Joint velocities of the mobile humanoid robot. (a) Left arm joint velocity. (b) Right arm joint velocity. (c) Waist joint velocity. (d) Mecanum wheel velocity.
19 pages, 9948 KiB  
Article
Design of and Experiment with a Dual-Arm Apple Harvesting Robot System
by Wenlei Huang, Zhonghua Miao, Tao Wu, Zhengwei Guo, Wenkai Han and Tao Li
Horticulturae 2024, 10(12), 1268; https://doi.org/10.3390/horticulturae10121268 - 28 Nov 2024
Abstract
Robotic harvesting has become an urgent need for the development of the apple industry due to the sharp decline in agricultural labor. At present, harvesting apples using robots in unstructured orchard environments remains a significant challenge. This paper focuses on addressing the challenges of perception, localization, and dual-arm coordination in harvesting robots and presents a dual-arm apple harvesting robot system. First, the paper introduces the integration of the robot's hardware and software systems, as well as the control system architecture, and describes the robot's workflow. Second, combining a dual-vision perception system, the paper adopts a fruit recognition method based on a multi-task network model and a frustum-based fruit localization approach to identify and localize fruits. Finally, to improve collaboration efficiency, a multi-arm task planning method based on a genetic algorithm is used to optimize the target harvesting sequence for each arm. Field experiments were conducted in an orchard to evaluate the overall performance of the robot system. The field trials demonstrated that the robot system achieved an overall harvest success rate of 76.97%, with an average picking time of 7.29 s per fruit and a fruit damage rate of only 5.56%.
(This article belongs to the Section Fruit Production Systems)
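The genetic-algorithm sequencing step above treats each arm's harvesting order as a traveling-salesman-style problem (the paper's Figure 13 frames it as cities and salesmen). The toy below evolves a picking order for a single arm with permutation individuals, ordered crossover, and swap mutation; it illustrates the idea only and is not the paper's multi-arm zoning algorithm, and all parameters are arbitrary.

```python
import random

def tour_length(order, pts):
    """Total 2D path length for visiting pts in the given index order."""
    return sum(((pts[a][0] - pts[b][0]) ** 2 + (pts[a][1] - pts[b][1]) ** 2) ** 0.5
               for a, b in zip(order, order[1:]))

def ga_pick_sequence(pts, pop=60, gens=200, seed=0):
    """Toy GA for one arm's harvesting order: elitist selection, ordered
    crossover, swap mutation. Illustrative stand-in, not the paper's method."""
    rng = random.Random(seed)
    n = len(pts)
    popn = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda o: tour_length(o, pts))
        elite = popn[:pop // 2]
        children = []
        while len(children) < pop - len(elite):
            p1, p2 = rng.sample(elite, 2)
            i, j = sorted(rng.sample(range(n), 2))
            seg = p1[i:j]                         # keep a slice of parent 1
            child = seg + [g for g in p2 if g not in seg]  # fill from parent 2
            if rng.random() < 0.2:                # swap mutation
                a, b = rng.sample(range(n), 2)
                child[a], child[b] = child[b], child[a]
            children.append(child)
        popn = elite + children
    return min(popn, key=lambda o: tour_length(o, pts))
```

For fruits on a line, the best order simply sweeps across them; the paper additionally splits the fruit set between two arms with overlapping work zones.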
Figure 1. Dual-arm picking robot prototype.
Figure 2. The overall structure of the robotic harvester for apple picking. (a) The main components of the robotic harvester. (b) The relevant dimensions of the robotic harvester.
Figure 3. The structure of the 4-DOF robotic arm. (a) The motion outer frame. (b) The picking arm.
Figure 4. The structure of the gripper. (a) The relevant dimensions of the gripper. (b) An actual image of the gripper.
Figure 5. The structure of the fruit transport and collection mechanism.
Figure 6. Dual-arm apple harvesting robot control system.
Figure 7. Topology of control and communication of the harvesting robot.
Figure 8. Workflow diagram of the overall robotic system.
Figure 9. Dual-vision fruit information acquisition system.
Figure 10. The multi-task MTN-O network for fruit identification.
Figure 11. MTN-O network model detection performance.
Figure 12. Dedicated and common working areas of the robotic arms.
Figure 13. Asynchronous overlapping zoning planning for 30 cities (fruits) and 2 salesmen (robotic arms).
Figure 14. Standardized orchard of dwarf rootstock. (a) Row and line spacing. (b) Spindle-shaped tree.
Figure 15. Harvesting workspace in the canopy. (a) Front view. (b) Top view.
Figure 16. Field experiment.
Figure 17. Several common failure scenarios. (a) Recognition and localization error. (b) Obstacle obstruction. (c) Grasp failure. (d) Separation failure.
21 pages, 7973 KiB  
Article
Research on Target Hybrid Recognition and Localization Methods Based on an Industrial Camera and a Depth Camera in Complex Scenes
by Mingxin Yuan, Jie Li, Borui Cao, Shihao Bao, Li Sun and Xiangbin Li
Electronics 2024, 13(22), 4381; https://doi.org/10.3390/electronics13224381 - 8 Nov 2024
Abstract
In order to improve the target visual recognition and localization accuracy of robotic arms in complex scenes with similar targets, hybrid recognition and localization methods based on an industrial camera and a depth camera are proposed. First, according to the speed and accuracy requirements of target recognition and localization, YOLOv5s is introduced as the basic algorithm model for target hybrid recognition and localization. Then, to improve the accuracy of target recognition and coarse localization based on the industrial camera (eye-to-hand), the AFPN feature fusion module, the simple and parameter-free attention module (SimAM), and soft non-maximum suppression (Soft NMS) are introduced. To improve the accuracy of target recognition and fine localization based on the depth camera (eye-in-hand), the SENetV2 backbone network structure, a dynamic head module, a deformable attention mechanism, and a chain-of-thought prompted adaptive enhancer network are introduced. After that, on the basis of constructing a dual-camera platform for target hybrid recognition and localization, the hand–eye calibration and the collection and production of the image datasets required for model training are completed. Finally, for the docking of the oil filling port, hybrid recognition and localization experimental tests are completed in sequence. The test results show that in target recognition and coarse localization based on the industrial camera, the recognition accuracy of the designed model reaches 99%, and the average localization errors in the horizontal and vertical directions are 2.22 mm and 3.66 mm, respectively. In target recognition and fine localization based on the depth camera, the recognition accuracy of the designed model reaches 98%, and the average errors in the depth, horizontal, and vertical directions are 0.12 mm, 0.28 mm, and 0.16 mm, respectively. These results not only verify the effectiveness of the target hybrid recognition and localization methods based on dual cameras but also demonstrate that they meet the high-precision recognition and localization requirements in complex scenes.
(This article belongs to the Special Issue Applications and Challenges of Image Processing in Smart Environment)
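Of the modules named above, Soft NMS is the easiest to show in a few lines: instead of discarding boxes that overlap a higher-scoring detection, it decays their scores, which helps when similar targets sit close together. This is the standard Gaussian variant (Bodla et al.); sigma and the score threshold are typical defaults, not necessarily the paper's settings.

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS over (x1, y1, x2, y2) boxes: decay overlapping
    scores by exp(-iou^2 / sigma) instead of suppressing them outright.
    Returns kept indices (in pick order) and the decayed scores."""
    boxes = np.asarray(boxes, float)
    scores = np.asarray(scores, float).copy()
    keep, idx = [], list(range(len(boxes)))
    while idx:
        m = max(idx, key=lambda i: scores[i])     # current best detection
        keep.append(m)
        idx.remove(m)
        for i in idx:
            xx1 = max(boxes[m][0], boxes[i][0]); yy1 = max(boxes[m][1], boxes[i][1])
            xx2 = min(boxes[m][2], boxes[i][2]); yy2 = min(boxes[m][3], boxes[i][3])
            inter = max(0.0, xx2 - xx1) * max(0.0, yy2 - yy1)
            area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
            iou = inter / (area(boxes[m]) + area(boxes[i]) - inter)
            scores[i] *= np.exp(-iou ** 2 / sigma)   # Gaussian score decay
        idx = [i for i in idx if scores[i] > score_thresh]
    return keep, scores
```

A fully overlapping duplicate keeps a heavily decayed score rather than vanishing, so borderline detections of nearby similar objects can survive.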
Figure 1. Network structure of YOLOv5s.
Figure 2. Network structure of AFPN.
Figure 3. The simple and parameter-free attention module.
Figure 4. Network structure of SENet.
Figure 5. Network structure of SENetV2.
Figure 6. Network structure of the dynamic detection head.
Figure 7. Network structure of DAT.
Figure 8. Network structure of CPA-Enhancer.
Figure 9. Visual experimental platform.
Figure 10. Recognition and localization test of nine circles: (a) circular feature; (b) pixel coordinates.
Figure 11. Extraction of corner points and reading of the end pose of the robotic arm: (a) calibration round 1; (b) calibration round 2; (c) calibration round 3.
Figure 12. Example of the filling port dataset: (a) layout environment 1; (b) layout environment 2; (c) layout environment 3.
Figure 13. Recognition results: (a) Group A; (b) Group B; (c) Group C; (d) Group D; (e) Group E.
Figure 14. Effect of coarse localization.
Figure 15. Recognition effect for the filling port: (a) scene 1; (b) scene 2; (c) scene 3; (d) scene 4; (e) scene 5; (f) scene 6.
Figure 16. Recognition effect for the filling port: (a) scene 1; (b) scene 2; (c) scene 3; (d) scene 4; (e) scene 5; (f) scene 6.
Figure 17. Docking experiment results for the filling port: (a) initial position; (b) position after coarse localization; (c) fine localization begins; (d) pose adjustment of the robotic arm; (e) pose adjustment completed; (f) autonomous docking completed.
14 pages, 1809 KiB  
Article
Dual-Arm Space Robot On-Orbit Operation of Auxiliary Docking Prescribed Performance Impedance Control
by Dongbo Liu and Li Chen
Aerospace 2024, 11(11), 867; https://doi.org/10.3390/aerospace11110867 - 23 Oct 2024
Abstract
The impedance control of a dual-arm space robot in an on-orbit auxiliary docking operation is studied. First, for the closed-chain hybrid system formed by the dual-arm space robot after the capture operation, the dynamics equation with uncontrolled position and controlled attitude is established. A second-order linear impedance model and a second-order approximate environment model are established for the problem of simultaneous output force/pose control at the end of the manipulator. Then, to meet the transient performance requirements of the on-orbit auxiliary docking operation, a sliding mode controller with equivalent replacement of tracking errors is designed by introducing Prescribed Performance Control (PPC) theory. Next, Radial Basis Function Neural Networks (RBFNNs) are used to accurately compensate for the modeling uncertainties of the system. Finally, the stability of the system is verified by Lyapunov analysis. The simulation results show that the attitude control accuracy is better than 0.5°, the position control accuracy is better than 10⁻³ m, and the output force control accuracy is better than 0.5 N for a 30 N contact force. They also indicate that the proposed control algorithm can keep the transient performance of the controlled system within the preset range and achieve high-precision force/pose control, ensuring a more stable on-orbit auxiliary docking operation of the dual-arm space robot.
(This article belongs to the Section Astronautics & Space Science)
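A second-order linear impedance model of the kind mentioned above relates end-effector motion to contact force through virtual mass, damping, and stiffness: M x'' + B x' + K (x - x_d) = f_ext. The one-dimensional simulation below shows the defining steady-state property, x settles at x_d + f_ext / K; the gains are illustrative, not the paper's parameters.

```python
def simulate_impedance(f_ext, x_d=0.0, M=1.0, B=20.0, K=100.0, dt=1e-3, T=2.0):
    """Integrate the 1-D impedance model M*x'' + B*x' + K*(x - x_d) = f_ext
    with semi-implicit Euler and return the final position. With these
    illustrative gains the system is critically damped (B = 2*sqrt(M*K)),
    and the steady state is x_d + f_ext / K."""
    x, v = x_d, 0.0
    for _ in range(int(T / dt)):
        a = (f_ext - B * v - K * (x - x_d)) / M   # acceleration from the model
        v += a * dt
        x += v * dt
    return x
```

For a 30 N contact force and K = 100 N/m, the end deflects 0.3 m from the desired pose, which is how an impedance controller trades position error for a regulated contact force during docking.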
Figure 1. Dual-arm space robot system and auxiliary docking target spacecraft system.
Figure 2. Dual-arm space robot end trajectory tracking curves of the on-orbit auxiliary docking operation. (a) The carrier attitude. (b) The X-direction position. (c) The Y-direction position. (d) The space robot end attitude.
Figure 3. Dual-arm space robot joint trajectory tracking curves of the on-orbit auxiliary docking operation. (a) The left arm. (b) The right arm.
Figure 4. Dual-arm space robot end output force and base and joint control torques of the on-orbit docking operation. (a) The end. (b) The base. (c) The left-arm joints. (d) The right-arm joints.
Figure 5. Tracking errors of the on-orbit insertion operation. (a) PPC method. (b) PD method.
20 pages, 17753 KiB  
Article
KOALA: A Modular Dual-Arm Robot for Automated Precision Pruning Equipped with Cross-Functionality Sensor Fusion
by Charan Vikram, Sidharth Jeyabal, Prithvi Krishna Chittoor, Sathian Pookkuttath, Mohan Rajesh Elara and Wang You
Agriculture 2024, 14(10), 1852; https://doi.org/10.3390/agriculture14101852 - 21 Oct 2024
Abstract
Landscape maintenance is essential for ensuring agricultural productivity, promoting sustainable land use, and preserving soil and ecosystem health. Pruning is a labor-intensive landscaping task that often involves repetitive operations. To address these limitations, this paper presents the development of a dual-arm holonomic robot (called the KOALA robot) for precision plant pruning. The robot utilizes a cross-functionality sensor fusion approach, combining light detection and ranging (LiDAR) sensor and depth camera data for plant recognition and for isolating the data points that require pruning. The You Only Look Once v8 (YOLOv8) object detection model powers the plant detection algorithm, achieving a 98.5% pruning plant detection rate and a 95% pruning accuracy using camera, depth sensor, and LiDAR data. The fused data allow the robot to identify the target boxwood plants, assess the density of the pruning area, and optimize the pruning path. The robot operates at a pruning speed of 10–50 cm/s and a maximum travel speed of 0.5 m/s, and can perform up to 4 h of pruning. The robot's base can lift 400 kg, ensuring stability and versatility for multiple applications. The findings demonstrate the robot's potential to significantly enhance efficiency, reduce labor requirements, and improve landscape maintenance precision compared to traditional manual methods. This paves the way for further advancements in automating repetitive tasks within landscaping applications.
(This article belongs to the Section Agricultural Technology)
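Fusing LiDAR returns with camera detections, as described above, typically involves projecting LiDAR points into the image plane through the extrinsic calibration (R, t) and the pinhole intrinsics K, so that points falling inside a detected bounding box can be isolated for pruning. The sketch below shows that standard projection step; the matrices are placeholders, not the KOALA robot's calibration.

```python
import numpy as np

def project_lidar_to_image(pts_lidar, R, t, K):
    """Project 3D LiDAR points into camera pixels: transform with the
    extrinsics (R, t), drop points behind the camera, then apply the
    pinhole intrinsics K and the perspective divide."""
    pts = np.asarray(pts_lidar, float)
    cam = (R @ pts.T).T + t                 # LiDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0]                # keep points in front of the camera
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]           # perspective divide -> pixels
```

Once projected, a simple mask test against the YOLO box selects the LiDAR points belonging to the target plant.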
Figure 1. Robot chassis 3D model and the developed dual-arm precision pruning robot.
Figure 2. Detailed 3D model of the robot end-effector.
Figure 3. Block diagram of the electrical subsystem of the KOALA robot.
Figure 4. Methodology of the mapping and pruning process.
Figure 5. Output of the 2D LiDARs merged to give a 360° view around the robot.
Figure 6. A 2D map of the deployment zone retrieved from the 2D LiDARs and IMUs.
Figure 7. Distortion correction for 2D points projected into the 3D camera coordinate system.
Figure 8. LiDAR point cloud preprocessing.
Figure 9. Wall segmentation from the LiDAR scan.
Figure 10. LiDAR point cloud preprocessing.
Figure 11. Segmentation of the target area using a colored mask and a binary mask.
Figure 12. Training results of the YOLOv8 algorithm.
Figure 13. (a) Deployment zone for testing the pruning algorithm; (b) detection of pruning subjects using YOLOv8.
Figure 14. (a) Detection of target points at various stages of the pruning operation; (b) original color image, binary mask, and color mask of the target zone; (c) waypoint estimation at various stages of pruning.
14 pages, 6903 KiB  
Communication
Development of Dual-Arm Human Companion Robots That Can Dance
by Joonyoung Kim, Taewoong Kang, Dongwoon Song, Gijae Ahn and Seung-Joon Yi
Sensors 2024, 24(20), 6704; https://doi.org/10.3390/s24206704 - 18 Oct 2024
Abstract
As gestures play an important role in human communication, a number of service robots have been equipped with a pair of human-like arms for gesture-based human–robot interactions. However, the arms of most human companion robots are limited to slow and simple gestures due to the low maximum velocity of the arm actuators. In this work, we present the JF-2 robot, a mobile home service robot equipped with a pair of torque-controlled anthropomorphic arms. Thanks to the low-inertia design of the arms, responsive Quasi-Direct Drive (QDD) actuators, and active compliant control of the joints, the robot can replicate fast human dance motions while remaining safe in its environment. In addition to the JF-2 robot, we also present the JF-mini, a scaled-down, low-cost version of the JF-2 mainly targeted for commercial use at kindergartens and childcare facilities. The suggested system is validated through three experiments: a safety test, teaching children how to dance along to music, and bringing a requested item to a human subject.
(This article belongs to the Special Issue Intelligent Social Robotic Systems)
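Replicating human motion on a robot arm, as in the retargeting pipeline above, starts from 3D keypoints (shoulder, elbow, hand) and recovers joint angles from them. The helper below computes the elbow flexion angle from such keypoints; it is a generic sketch of that first step, not the JF-2 retargeting code.

```python
import numpy as np

def elbow_angle(shoulder, elbow, hand):
    """Elbow flexion angle (degrees) from 3D keypoints: the angle at the
    elbow between the upper arm (elbow->shoulder) and the forearm
    (elbow->hand). 180 deg means a fully straight arm."""
    u = np.asarray(shoulder, float) - np.asarray(elbow, float)
    v = np.asarray(hand, float) - np.asarray(elbow, float)
    c = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
```

Shoulder pitch/roll/yaw follow similarly from the upper-arm direction, after which the angles are clamped to the robot's 4-DOF joint limits before playback.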
Figure 1. The JF-2 (left) and JF-mini (right) human companion robots without casing.
Figure 2. Hardware configurations and dimensions of the JF-2 and JF-mini robots.
Figure 3. Four-DOF arm mechanisms of the JF-2 and JF-mini robots.
Figure 4. Use cases of the chest display. (a) Robot control. (b) Synchronously showing an animation while dancing.
Figure 5. Various facial expressions with the circular LCD head.
Figure 6. Software architecture of the JF-2 and JF-mini robots.
Figure 7. Three different human motion capture methods. (a) Motion capture-based. (b) VR tracker-based. (c) Keypoint detection-based.
Figure 8. Human motion-replicating process. (a) Shoulder, elbow, and hand 3D positions assuming zero chest tilt and roll angles. (b) Retargeted joint angles in a simulated environment. (c) Arm postures realized with the JF-2 robot.
Figure 9. (a) Snapshots of the safety test. (b) Time-force results.
Figure 10. JF-2 and JF-mini robots dancing along to four different children's songs.
Figure 11. JF-2 robot executing the gift delivery task.
Figure 12. Examples of dance motions by the JF-2.
Full article ">
18 pages, 6112 KiB  
Article
A Globally Guided Dual-Arm Reactive Motion Controller for Coordinated Self-Handover in a Confined Domestic Environment
by Zihang Geng, Zhiyuan Yang, Wei Xu, Weichao Guo and Xinjun Sheng
Biomimetics 2024, 9(10), 629; https://doi.org/10.3390/biomimetics9100629 - 16 Oct 2024
Viewed by 876
Abstract
Future humanoid robots will be widely deployed in our daily lives. Motion planning and control in an unstructured, confined, and human-centered environment utilizing the dexterity and cooperative ability of dual-arm robots is still an open issue. We propose a globally guided dual-arm reactive [...] Read more.
Future humanoid robots will be widely deployed in our daily lives. Motion planning and control in an unstructured, confined, and human-centered environment utilizing the dexterity and cooperative ability of dual-arm robots is still an open issue. We propose a globally guided dual-arm reactive motion controller (GGDRC) that combines the strengths of global planning and reactive methods. In this framework, a global planner module with a prospective task horizon provides feasible guidance in Cartesian space, and a local reactive controller module addresses real-time collision avoidance and coordinated task constraints by exploiting dual-arm redundancy. GGDRC extends the state-of-the-art optimization-based reactive method to motion-restricted dynamic scenarios requiring dual-arm cooperation. We design a pick–handover–place task to compare the performance of these two methods. Results demonstrate that GGDRC achieves precise collision avoidance (5 mm margin) and a high success rate (84.5%). Full article
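The velocity-damper constraint at the heart of such a QP-based reactive controller can be sketched as follows; the function and parameter names (`d_safe`, `d_check`, `xi`) are our notation for an illustrative version, not the authors' code:

```python
import math

def velocity_damper_bound(d_min, d_safe, d_check, xi=1.0):
    """Lower bound on the separation rate d_dot between two bodies.

    When the minimum distance d_min falls below the check distance
    d_check, the controller enforces
        d_dot >= -xi * (d_min - d_safe) / (d_check - d_safe),
    so the allowed approach speed shrinks to zero at the safe boundary
    d_safe and becomes a forced retreat inside it.  Far away
    (d_min >= d_check) the constraint is inactive.
    """
    if d_min >= d_check:
        return -math.inf          # no restriction outside the check zone
    return -xi * (d_min - d_safe) / (d_check - d_safe)
```

In the QP, this scalar bound becomes one inequality row on the joint velocities via the contact normal and the arm Jacobians.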
(This article belongs to the Special Issue Human-Inspired Grasp Control in Robotics)
Show Figures

Figure 1

Figure 1
<p>Desktop organizing task. In this cluttered scenario, a dual-arm humanoid robot picks up the water bottle and puts it on the shelf.</p>
Full article ">Figure 2
<p>Framework of a globally guided dual-arm reactive controller (GGDRC). This framework consists of two modules: (<b>left</b>) global object trajectory planner for reference waypoints over the task horizon and (<b>right</b>) QP-based local reactive controller for dual-arm motion coordination and obstacle avoidance exploiting kinematic redundancy.</p>
Full article ">Figure 3
<p>Dual-arm model and overlapped workspace: (<b>a</b>) dual-arm model: <math display="inline"><semantics> <mrow> <mi>z</mi> <mn>0</mn> <mo>−</mo> <mi>z</mi> <mn>7</mn> </mrow> </semantics></math> denote the rotation axes of the joints; (<b>b</b>) overlapped workspace: the blue area denotes the workspace of the left arm and the red area that of the right arm.</p>
Full article ">Figure 4
<p>Dual-arm motion decomposition in a self-handover task. In independent mode, <math display="inline"><semantics> <msub> <mi>x</mi> <mi>l</mi> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>x</mi> <mi>r</mi> </msub> </semantics></math> denote the left EE motion and the right EE motion, respectively. Meanwhile, in coordinated mode, <math display="inline"><semantics> <msub> <mi>x</mi> <mrow> <mi>r</mi> <mi>e</mi> <mi>l</mi> </mrow> </msub> </semantics></math> is the relative motion between two end effectors and <math display="inline"><semantics> <msub> <mi>x</mi> <mrow> <mi>a</mi> <mi>b</mi> <mi>s</mi> </mrow> </msub> </semantics></math> is the absolute object motion simplified as <math display="inline"><semantics> <msub> <mi>x</mi> <mi>l</mi> </msub> </semantics></math>.</p>
Full article ">Figure 5
<p>The velocity damper (VD) method. (<b>a</b>) VD is activated when the minimum distance <math display="inline"><semantics> <msub> <mi>d</mi> <mrow> <mi>m</mi> <mi>i</mi> <mi>n</mi> </mrow> </msub> </semantics></math> is less than the check distance <math display="inline"><semantics> <msub> <mi>d</mi> <mrow> <mi>c</mi> <mi>h</mi> <mi>e</mi> <mi>c</mi> <mi>k</mi> </mrow> </msub> </semantics></math>. Then <math display="inline"><semantics> <msub> <mi>d</mi> <mrow> <mi>s</mi> <mi>a</mi> <mi>f</mi> <mi>e</mi> </mrow> </msub> </semantics></math> is the ideal safe boundary. (<b>b</b>) Illustration of the VD applying to convex geometries. The blue and red ellipses denote the current positions on which <span class="html-italic">A</span> and <span class="html-italic">B</span> are the witness points. The translucent red ellipse denotes the relative position in next time step on which <math display="inline"><semantics> <msup> <mi>B</mi> <mo>′</mo> </msup> </semantics></math> is the point corresponding to <span class="html-italic">B</span>. The relative velocity <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi>B</mi> </msub> <mo>−</mo> <msub> <mi>v</mi> <mi>A</mi> </msub> </mrow> </semantics></math> will lead two ellipses away from each other.</p>
Full article ">Figure 6
<p>Comparison between VD and MVD at a safe boundary. (<b>a</b>) In this 2D VD example, the witness points change (from <span class="html-italic">A</span> and <span class="html-italic">B</span> to <span class="html-italic">A’</span> and <span class="html-italic">B’</span>) because of the relative rotation. The minimum distance decreases despite a velocity damper being applied to <span class="html-italic">A</span> and <span class="html-italic">B</span>. (<b>b</b>) In this 3D MVD example, the witness points are consistent. By combining translation and rotation dampers, the two geometries are kept away from the separating planes.</p>
Full article ">Figure 7
<p>Comparison of different collision avoidance methods in a confined workspace. (<b>a</b>) Environment of the picking task. The robot hand is in close proximity to the obstacle when reaching the target grasp pose. (<b>b</b>) Minimum Euclidean distances between the robot and the obstacle or the goal pose for VD, MVD, and GG-MVD. The VD method (red) failed at 1.92 s. MVD (blue) succeeded at 12.41 s with a minimum distance of 4.8 mm to the obstacle. GG-MVD (green) also succeeded, finishing about 3.7× faster than MVD at 3.32 s with a larger minimum distance of 8.7 mm to the obstacle.</p>
Full article ">Figure 8
<p>Different motions of a dual-arm robot generated by NEO’ and GGDRC in the pick–handover–place task: (<b>a</b>) start configuration, (<b>b</b>) object grasped by the left hand, (<b>c</b>) two hands approaching each other, (<b>d</b>) object self-handover, (<b>e</b>) dual-arm independent motion, and (<b>f</b>) object placed on the shelf. NEO’ failed in this task because of the local minimum, while GGDRC succeeded with the global reference trajectory of the object.</p>
Full article ">Figure 9
<p>Joint angles of the whole pick–handover–place process with two arms plotted separately. There are two vertical dashed lines in every subplot. The first line represents the moment when the left robot hand grasps the object; in the meanwhile, the global planner starts to generate the reference trajectory of the object. The second one denotes that self-handover takes place, and the object is transferred from the left hand to the right hand.</p>
Full article ">Figure 10
<p>Top view of the repeated pick–handover–place experiment with 200 trials. Randomly sampled start positions are illustrated in the left area of the desktop. Blue circle markers denote the successful trials, while orange cross markers denote the failed trials. (<b>a</b>) The result of GGDRC with a success rate of <math display="inline"><semantics> <mrow> <mn>84.5</mn> <mo>%</mo> </mrow> </semantics></math> and an average runtime of 10.83 s. (<b>b</b>) The result of NEO’ with a success rate of <math display="inline"><semantics> <mrow> <mn>16.5</mn> <mo>%</mo> </mrow> </semantics></math> and an average runtime of 19.43 s.</p>
Full article ">Figure 11
<p>Distribution of minimum distance <math display="inline"><semantics> <msub> <mi>d</mi> <mrow> <mi>m</mi> <mi>i</mi> <mi>n</mi> </mrow> </msub> </semantics></math> from the robot to the moving obstacle with different velocities. Obstacle approaching (<b>Top</b>) from the left side; and (<b>Bottom</b>) from the right side. For each value of velocity, a boxplot shows the distribution of 15 data points. The upper and lower whiskers are based on a 1.5 IQR value, and outliers are indicated by hollow circle markers. The blue middle line across the box denotes the median value, and the green triangle denotes the arithmetic mean value of valid trials. In addition, the red cross markers represent failed trials because of collision.</p>
Full article ">
13 pages, 4426 KiB  
Article
Enhancement of Robot Position Control for Dual-User Operation of Remote Robot System with Force Feedback
by Pingguo Huang and Yutaka Ishibashi
Appl. Sci. 2024, 14(20), 9376; https://doi.org/10.3390/app14209376 - 14 Oct 2024
Viewed by 853
Abstract
We focus on dual-user operation, where two users control a single remote robot equipped with a force sensor using haptic interface devices. We employ a cooperative task in which the two users control the remote robot to collaborate with remote robot systems with [...] Read more.
We focus on dual-user operation, where two users control a single remote robot equipped with a force sensor using haptic interface devices. We employ a cooperative task in which the two users control the remote robot, with force feedback, to carry an object together. By measuring the force acting upon the object while varying the network delays between the remote robot and the two users, we aim to clarify the mechanism, observed in our previous work, by which the user with the lower network latency can help the other user. We find that as network delays increase, the force exerted on the object intensifies, indicating that operating the remote robot effectively becomes more challenging. We also find that when the total network delay is the same, the average force magnitude and the average maximum force magnitude remain nearly identical; despite the challenges faced by the user with the larger network delay, the user with the smaller delay can operate the remote robot more easily and assist the other user. To reduce the force acting upon the object, we propose an enhancement of the robot position control, which determines the position of the remote robot arm while accounting for network delay, and we investigate its effects by experiment. Experimental results demonstrate that the proposed method is effective in reducing the applied force, because it adjusts the ratio between the commands of the user with the lower delay and the user with the higher delay; the user with the lower delay can operate the remote robot more easily and respond to it more quickly.
Our findings and proposed method can be useful for improving work accuracy and operability when designing remote robot systems with force feedback for practical applications. Full article
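The core idea of the proposed enhancement — weighting each user's commanded position inversely to that user's network delay — might be sketched as follows; the weighting formula and all names are our illustration, not the paper's exact control law:

```python
def blended_position(cmd_a, cmd_b, delay_a, delay_b, eps=1e-6):
    """Blend two users' commanded positions into one target for the arm.

    Each command is weighted by the inverse of its network delay, so the
    user with the lower delay -- who can react to the robot more quickly --
    contributes a larger share of the target position.  eps guards
    against division by zero for a local (zero-delay) user.
    """
    w_a = 1.0 / (delay_a + eps)
    w_b = 1.0 / (delay_b + eps)
    total = w_a + w_b
    return tuple((w_a * a + w_b * b) / total for a, b in zip(cmd_a, cmd_b))
```

With equal delays this reduces to the midpoint of the two commands; as one user's delay grows, the target slides toward the faster user's command.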
(This article belongs to the Special Issue Trajectory Analysis, Positioning and Control of Mobile Robots)
Show Figures

Figure 1

Figure 1
<p>Setup of single-user operation (remote robot system with force feedback).</p>
Full article ">Figure 2
<p>Block diagram of stabilization control with filter.</p>
Full article ">Figure 3
<p>System configuration of dual-user operation.</p>
Full article ">Figure 4
<p>Appearance at master and slave terminals.</p>
Full article ">Figure 5
<p>Plane view showing arrangement of the wooden stick, wooden blocks, and paper blocks.</p>
Full article ">Figure 6
<p>Average of average force magnitude and average of maximum force magnitude.</p>
Full article ">Figure 7
<p>Results of <span class="html-italic">t</span>-test between combinations of network delays.</p>
Full article ">Figure 8
<p>Average of average force magnitude and average of maximum force magnitude when the sum of network delay 1 and network delay 2 is the same.</p>
Full article ">Figure 9
<p>Average of average force magnitude in experiment 1.</p>
Full article ">Figure 10
<p>Average of average force magnitude in experiment 2.</p>
Full article ">
17 pages, 8199 KiB  
Article
Curriculum Design and Sim2Real Transfer for Reinforcement Learning in Robotic Dual-Arm Assembly
by Konstantin Wrede, Sebastian Zarnack, Robert Lange, Oliver Donath, Tommy Wohlfahrt and Ute Feldmann
Machines 2024, 12(10), 682; https://doi.org/10.3390/machines12100682 - 29 Sep 2024
Viewed by 1314
Abstract
Robotic systems are crucial in modern manufacturing. Complex assembly tasks require the collaboration of multiple robots. Their orchestration is challenging due to tight tolerances and precision requirements. In this work, we set up two Franka Panda robots performing a peg-in-hole insertion task of [...] Read more.
Robotic systems are crucial in modern manufacturing. Complex assembly tasks require the collaboration of multiple robots, whose orchestration is challenging due to tight tolerances and precision requirements. In this work, we set up two Franka Panda robots performing a peg-in-hole insertion task with 1 mm clearance. We structure the control system hierarchically, planning the robots’ feedback-based trajectories with a central policy trained with reinforcement learning. These trajectories are executed by a low-level impedance controller on each robot. To enhance training convergence, we use reverse curriculum learning, novel for such a two-armed control task, iteratively structured into a minimum-requirements phase and a fine-tuning phase. We incorporate domain randomization, varying the initial joint configurations of the task to improve the generalizability of the learned policy. After training, we test the system in simulation, analyzing the impact of the curriculum parameters on the resulting process time and its variance. Finally, we transfer the trained model to the real world, resulting in a small decrease in task duration. A comparison of our approach with classical path planning and control shows a decrease in process time but higher robustness to calibration errors. Full article
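The reverse-curriculum iteration described above can be sketched as a loop that starts the peg near the hole and backs the start pose away whenever the policy clears a success threshold; the callback structure, threshold, and offsets below are illustrative assumptions, not the paper's implementation:

```python
def reverse_curriculum(train_phase, eval_success, start_offsets,
                       threshold=0.8, max_iters=100):
    """Iterate a reverse curriculum over increasingly hard start poses.

    start_offsets: distances (m) of the initial peg pose from the hole,
    ordered easy -> hard.  Each iteration trains at the current stage and
    advances only when the evaluated success rate clears the threshold
    (the 'minimum requirements' gate); fine-tuning would keep training
    at the final stage.
    """
    stage, visited = 0, []
    for _ in range(max_iters):
        if stage >= len(start_offsets):
            break                          # curriculum completed
        offset = start_offsets[stage]
        visited.append(offset)
        train_phase(offset)                # e.g. a batch of RL episodes
        if eval_success(offset) >= threshold:
            stage += 1                     # passed: harden the task
    return visited
```

Domain randomization would enter inside `train_phase`, perturbing the initial joint configurations at each episode.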
Show Figures

Figure 1

Figure 1
<p>Environment consisting of two robot arms, peg and hole workpieces. (<b>a</b>) Setup in simulation. (<b>b</b>) Setup in reality.</p>
Full article ">Figure 2
<p>Hierarchical control system architecture with a central policy in the form of an NN generating setpoints for the decentralized low-level impedance controllers.</p>
Full article ">Figure 3
<p>Quantities describing the geometric relationship between the peg and the hole.</p>
Full article ">Figure 4
<p>Iterative training procedure of the dual-robot peg-in-hole task as a reverse curriculum. Quantities at the positions marked in red change when domain randomization (DR) is activated.</p>
Full article ">Figure 5
<p>Return per training episode. Each color highlights a new iteration of the curriculum.</p>
Full article ">Figure 6
<p>Comparison of process time for the proposed RL-based control system in simulation and reality.</p>
Full article ">Figure 7
<p>Comparison of process time for the classical and the proposed RL-based control in reality.</p>
Full article ">Figure 8
<p>Comparison of EE forces during insertion of a squared peg for the classical and RL control strategy.</p>
Full article ">Figure 9
<p>Adaptation of the peg to a circular base area. Comparison of process time for the classical and the proposed RL-based control system in reality.</p>
Full article ">Figure 10
<p>Distribution of process cycle times for RL control with an injected base offset of 2 mm and 3 mm.</p>
Full article ">
15 pages, 6865 KiB  
Article
Method for Bottle Opening with a Dual-Arm Robot
by Francisco J. Naranjo-Campos, Juan G. Victores and Carlos Balaguer
Biomimetics 2024, 9(9), 577; https://doi.org/10.3390/biomimetics9090577 - 23 Sep 2024
Cited by 1 | Viewed by 1125
Abstract
This paper introduces a novel approach to robotic assistance in bottle opening using the dual-arm robot TIAGo++. The solution enhances accessibility by addressing the needs of individuals with injuries or disabilities who may require help with common manipulation tasks. The aim of this [...] Read more.
This paper introduces a novel approach to robotic assistance in bottle opening using the dual-arm robot TIAGo++. The solution enhances accessibility by addressing the needs of individuals with injuries or disabilities who may require help with common manipulation tasks. The aim of this paper is to propose a method involving vision, manipulation, and learning techniques to effectively address the task of bottle opening. The process begins with the acquisition of bottle and cap positions using an RGB-D camera and computer vision. Subsequently, the robot picks the bottle with one gripper and grips the cap with the other, each by planning safe trajectories. Then, the opening procedure is executed via a position and force control scheme that ensures both grippers follow the unscrewing path defined by the cap thread. Within the control loop, force sensor information is employed to control the vertical axis movements, while gripper rotation control is achieved through a Deep Reinforcement Learning (DRL) algorithm trained to determine the optimal angle increments for rotation. The results demonstrate the successful training of the learning agent. The experiments confirm the effectiveness of the proposed method in bottle opening with the TIAGo++ robot, showcasing the practical viability of the approach. Full article
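The unscrewing path follows the cap-thread helix, where each rotation increment dθ must be matched by a vertical rise of ρ·dθ/2π for thread pitch ρ. A minimal waypoint generator under that geometric relation (the function name, signature, and example pitch are ours, not the paper's code):

```python
import math

def unscrew_waypoints(pitch, total_angle, step):
    """Generate (theta, z) waypoints along a cap-thread helix.

    pitch: vertical rise per full turn (m); total_angle: total rotation
    needed to unscrew the cap (rad); step: rotation increment per
    waypoint (rad).  z follows the helix relation
    z = pitch * theta / (2 * pi).
    """
    n = int(round(total_angle / step))
    return [(i * step, pitch * i * step / (2.0 * math.pi))
            for i in range(n + 1)]
```

For example, with an illustrative 3 mm pitch, one full turn raises the gripper by exactly 3 mm; the force controller would then keep the vertical axis tracking these rises while the DRL agent chooses the rotation increments.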
(This article belongs to the Special Issue Computer-Aided Biomimetics: 2nd Edition)
Show Figures

Figure 1

Figure 1
<p>TIAGo++ dual-arm mobile manipulator robot with a bottle in one gripper.</p>
Full article ">Figure 2
<p>Process pipeline to achieve the opening of the bottle with the dual-arm robot.</p>
Full article ">Figure 3
<p>A 3D representation of a helix curve shown in blue. The radial coordinate <math display="inline"><semantics> <mi>θ</mi> </semantics></math> is highlighted in red, while the vertical coordinate <span class="html-italic">Z</span> is shown in green. The pitch of the helix (<math display="inline"><semantics> <mi>ρ</mi> </semantics></math>) is depicted with a magenta dashed line.</p>
Full article ">Figure 4
<p>Thread standard for bottles with a 26 mm aperture. On the left, the thread development from the outside is shown, and on the right, the profile view is provided. Adapted from [<a href="#B35-biomimetics-09-00577" class="html-bibr">35</a>].</p>
Full article ">Figure 5
<p>Image processing to obtain centroids of bottle cap and body. (<b>a</b>) ROI of the bottle detected with YOLO, marked in red. (<b>b</b>) Watershed applied to obtain a mask of the bottle’s body, shaded in white. (<b>c</b>) Centroids of body and cap of the bottle, marked in blue and yellow.</p>
Full article ">Figure 6
<p>Frames of the sequence of picking the bottle. (<b>a</b>) Initial state after vision. (<b>b</b>) Arm in pre-pick configuration. (<b>c</b>) Grasp the bottle. (<b>d</b>) Retreat the arm.</p>
Full article ">Figure 7
<p>Robot grasping the bottle and the cap.</p>
Full article ">Figure 8
<p>Control scheme for force and position. The controller, represented by the control vector <math display="inline"><semantics> <mover accent="true"> <mi>c</mi> <mo>→</mo> </mover> </semantics></math>, provides the desired poses <span class="html-italic">P</span> of both grippers. These poses are transformed into joint values <span class="html-italic">q</span> using inverse kinematics (IK). The achieved poses of the robot are determined via forward kinematics to obtain the angles <math display="inline"><semantics> <mi>θ</mi> </semantics></math> along the <span class="html-italic">Z</span> axis for positional feedback. Additionally, the force along the <span class="html-italic">Z</span> axis of the right gripper is measured by the force/torque sensor (F/T) for force feedback.</p>
Full article ">Figure 9
<p>Simulation of the TIAGo++ robot in the initial position for unscrewing a bottle. (<b>a</b>) Perspective. (<b>b</b>) Profile. (<b>c</b>) Elevation. (<b>d</b>) Plan.</p>
Full article ">Figure 10
<p>Training summary values of DRL agent over 100 runs. (<b>a</b>) Episode length, mean and standard deviation across 100 runs. (<b>b</b>) Episode reward, mean and standard deviation across 100 runs.</p>
Full article ">Figure 11
<p>Relative angle over steps in simulation and real robot, mean and standard deviation over 10 executions.</p>
Full article ">
21 pages, 2113 KiB  
Article
Periodic Scheduling Optimization for Dual-Arm Cluster Tools with Arm Task and Residency Time Constraints via Petri Net Model
by Lei Gu, Naiqi Wu, Tan Li, Siwei Zhang and Wenyu Wu
Mathematics 2024, 12(18), 2912; https://doi.org/10.3390/math12182912 - 19 Sep 2024
Viewed by 574
Abstract
In order to improve quality assurance in wafer manufacturing, there are strict process requirements. Besides the well-known residency time constraints (RTCs), a dual-arm cluster tool also requires each robot arm to execute a specific set of tasks. We call such a tool an [...] Read more.
In order to improve quality assurance in wafer manufacturing, there are strict process requirements. Besides the well-known residency time constraints (RTCs), a dual-arm cluster tool also requires each robot arm to execute a specific set of tasks. We call such a tool an arm task-constrained dual-arm cluster tool (ATC-DACT). To meet this constraint, one of the arms is designated as the dirty arm and the other as the clean arm: the dirty arm handles raw wafers, while the clean arm handles processed wafers. This requirement raises a new problem for scheduling a cluster tool. This paper discusses the scheduling problem of ATC-DACTs with RTCs. Due to the arm task constraints, the proven, effective swap strategy is no longer applicable to ATC-DACTs, making the scheduling problem difficult. To address this problem, we explicitly describe robot waiting as an event and build a Petri net (PN) model. Then, we propose a hybrid task sequence (HTS) as an operation strategy by combining the swap and backward strategies. Based on the HTS, we establish necessary and sufficient conditions for schedulability and develop a linear programming model. We then develop an algorithm using these results to optimally schedule the system. Industrial case studies demonstrate the application of this method. Full article
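The flavor of such schedulability conditions can be conveyed with a toy steady-state model: the cycle period is set by the slowest step, and the schedule is feasible only if the wafer wait induced at every other step fits its residency slack. The simplified timing model and all names below are our assumptions, not the paper's Petri-net formulation:

```python
def check_cycle(proc, delta, robot_act):
    """Toy periodic-schedule check under residency time constraints.

    proc: processing time at each step; delta: allowed residency slack
    beyond processing at each step; robot_act: robot activity time
    serialized with every step.  The steady-state period is set by the
    slowest step, and the schedule is feasible iff each step's induced
    wafer wait lies within [0, delta_i].
    """
    period = max(p + robot_act for p in proc)
    waits = [period - robot_act - p for p in proc]
    feasible = all(0.0 <= w <= d for w, d in zip(waits, delta))
    return period, waits, feasible
```

The paper's linear program plays the analogous role of picking robot waiting times that keep every sojourn inside its residency window while minimizing the period.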
(This article belongs to the Section Computational and Applied Mathematics)
Show Figures

Figure 1

Figure 1
<p>A dual-arm cluster tool.</p>
Full article ">Figure 2
<p>A PN Model for an ATC-DACT (<span class="html-italic">n</span> &gt; 2).</p>
Full article ">Figure 3
<p>A PN Model for an ATC-DACT (<span class="html-italic">n</span> = 2).</p>
Full article ">
17 pages, 6887 KiB  
Article
Research on the Safety Design and Trajectory Planning for a New Dual Upper Limb Rehabilitation Robot
by Chenchen Zhang, Hao Yan, Jian Wei, Fangcao Zhang, Zhongliang Shi and Xingao Li
Actuators 2024, 13(9), 364; https://doi.org/10.3390/act13090364 - 18 Sep 2024
Cited by 1 | Viewed by 744
Abstract
The increasing utilization of upper limb rehabilitation robots in rehabilitation therapy has brought to light significant safety concerns regarding their mechanical structures and control systems. This study focuses on a six degrees of freedom (DOF) upper limb rehabilitation robot, which has been designed [...] Read more.
The increasing utilization of upper limb rehabilitation robots in rehabilitation therapy has brought to light significant safety concerns regarding their mechanical structures and control systems. This study focuses on a six-degrees-of-freedom (DOF) upper limb rehabilitation robot designed with an emphasis on safety in both its mechanical structure and its trajectory planning. Various parameter schemes for the shoulder joint angles were proposed, and the robotic arm’s structure was developed by analyzing the spatial motion trajectories of the shoulder joint motor. This design successfully minimizes the installation space while maximizing the range of motion. Additionally, an enhanced artificial potential field method is introduced to plan self-collision avoidance trajectories for dual-arm movements. This approach effectively mitigates the risk of collisions between the robotic arm and the human body, as well as between the two robotic arms, during movement. The efficacy of this method has been validated through experimental testing. Full article
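The repulsive term of an artificial-potential-field avoidance law — the mechanism such self-collision trajectory planning builds on — can be sketched as follows; the gain and influence radius are illustrative values, not the paper's tuned parameters:

```python
import math

def repulsive_velocity(p, q, d0=0.10, gain=0.5):
    """Velocity pushing point p on one arm away from the closest point q
    on the other arm (or the human body) once their distance drops below
    the influence radius d0; zero outside the influence region.

    Uses the classic potential-field gradient magnitude
    gain * (1/d - 1/d0) / d**2 along the q -> p direction.
    """
    diff = [a - b for a, b in zip(p, q)]
    d = math.sqrt(sum(c * c for c in diff))
    if d >= d0 or d == 0.0:
        return [0.0] * len(p)
    mag = gain * (1.0 / d - 1.0 / d0) / (d * d)
    return [mag * c / d for c in diff]   # scaled unit vector away from q
```

An enhanced variant would add terms that suppress the local-minimum and oscillation problems of the basic field, which is what such papers typically modify.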
(This article belongs to the Section Actuators for Robotics)
Show Figures

Figure 1

Figure 1
<p>Upper limb rehabilitation robot.</p>
Full article ">Figure 2
<p>(<b>a</b>) Joint drive module. (<b>b</b>) Upper limb rehabilitation machine position control block diagram.</p>
Full article ">Figure 3
<p>Upper limb shoulder joint bone model.</p>
Full article ">Figure 4
<p>Schematic diagram of the head collision between the mechanism and the human. (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>θ</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>45</mn> <mo>°</mo> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>90</mn> <mo>°</mo> </mrow> </semantics></math>, where there will be no collision. (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>θ</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>60</mn> <mo>°</mo> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>90</mn> <mo>°</mo> </mrow> </semantics></math>, where there will be no collision. (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>θ</mi> <mn>2</mn> </msub> <mo>=</mo> <msub> <mi>θ</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>90</mn> <mo>°</mo> </mrow> </semantics></math>, where there is a collision.</p>
Full article ">Figure 5
<p>Upper-arm capsule bounding box.</p>
Full article ">Figure 6
<p>Forearm capsule bounding box.</p>
Full article ">Figure 7
<p>Secondary projection diagram of the space bounding box.</p>
Full article ">Figure 8
<p>Analysis of the safe distance between the mechanical arm and the human body in a sitting position.</p>
Full article ">Figure 9
<p>Analysis of the safe distance between the mechanical arm and the human body in a sitting position.</p>
Full article ">Figure 10
<p>Algorithm flow chart.</p>
Full article ">Figure 11
<p>Simulation without the self-collision avoidance algorithm. (<b>a</b>) Changes in the shoulder and elbow joint angles of the arm. (<b>b</b>) The minimum distance between each link of the two arms.</p>
Full article ">Figure 12
<p>Two-arm simulation three-dimensional diagram.</p>
Full article ">Figure 13
<p>Simulation with the self-collision avoidance algorithm. (<b>a</b>) Changes in the shoulder and elbow joint angles of the arm. (<b>b</b>) The minimum distance between each link of the two arms.</p>
Full article ">Figure 14
<p>Two-arm simulation three-dimensional diagram.</p>
Full article ">Figure 15
<p>Experimental scene diagram of the dual-arm self-collision avoidance.</p>
Full article ">Figure 16
<p>Double-arm self-collision avoidance experiment. (<b>a</b>) Angle of each joint of the right arm. (<b>b</b>) The minimum distance between each link of the two arms.</p>
Full article ">
15 pages, 3585 KiB  
Article
Upper-Limb and Low-Back Load Analysis in Workers Performing an Actual Industrial Use-Case with and without a Dual-Arm Collaborative Robot
by Alessio Silvetti, Tiwana Varrecchia, Giorgia Chini, Sonny Tarbouriech, Benjamin Navarro, Andrea Cherubini, Francesco Draicchio and Alberto Ranavolo
Safety 2024, 10(3), 78; https://doi.org/10.3390/safety10030078 - 11 Sep 2024
Viewed by 805
Abstract
In the Industry 4.0 scenario, human–robot collaboration (HRC) plays a key role in factories to reduce costs, increase production, and help aged and/or sick workers maintain their job. The approaches of the ISO 11228 series commonly used for biomechanical risk assessments cannot be [...] Read more.
In the Industry 4.0 scenario, human–robot collaboration (HRC) plays a key role in factories to reduce costs, increase production, and help aged and/or sick workers maintain their jobs. The approaches of the ISO 11228 series commonly used for biomechanical risk assessments cannot be applied in Industry 4.0, as they do not involve interactions between workers and HRC technologies. The use of wearable sensor networks and software for biomechanical risk assessments could provide a more reliable picture of the effectiveness of collaborative robots (coBots) in reducing the biomechanical load for workers. The aim of the present study was to investigate some biomechanical parameters with the 3D Static Strength Prediction Program (3DSSPP) software v.7.1.3 on workers executing a practical manual material-handling task, by comparing a dual-arm coBot-assisted scenario with a no-coBot scenario. In this study, we calculated the mean and the standard deviation (SD) values from eleven participants for some 3DSSPP parameters. We considered the following parameters: the percentage of maximum voluntary contraction (%MVC), the maximum allowed static exertion time (MaxST), the low-back spine compression forces at the L4/L5 level (L4Ort), and the strength percent capable value (SPC). 
The advantages of introducing the coBot, according to our statistics, concerned trunk flexion (SPC from 85.8% without coBot to 95.2%; %MVC from 63.5% without coBot to 43.4%; MaxST from 33.9 s without coBot to 86.2 s), left shoulder abduction–adduction (%MVC from 46.1% without coBot to 32.6%; MaxST from 32.7 s without coBot to 65 s), and right shoulder abduction–adduction (%MVC from 43.9% without coBot to 30.0%; MaxST from 37.2 s without coBot to 70.7 s) in Phase 1, and right shoulder humeral rotation (%MVC from 68.4% without coBot to 7.4%; MaxST from 873.0 s without coBot to 125.2 s), right shoulder abduction–adduction (%MVC from 31.0% without coBot to 18.3%; MaxST from 60.3 s without coBot to 183.6 s), and right wrist flexion/extension rotation (%MVC from 50.2% without coBot to 3.0%; MaxST from 58.8 s without coBot to 1200.0 s) in Phase 2. Moreover, Phase 3, which consisted of another manual handling task, would be removed by using a coBot. In summary, using a coBot in this industrial scenario would reduce the biomechanical risk for workers, particularly for the trunk, both shoulders, and the right wrist. Finally, the 3DSSPP software could be an easy, fast, and cost-free tool for biomechanical risk assessments in an Industry 4.0 scenario where the ISO 11228 series cannot be applied; it could be used by occupational medicine physicians and health and safety technicians, and could also help employers to justify a long-term investment. Full article
Show Figures

Figure 1

Figure 1
<p>Some 3DSSPP reconstructions of the three subtasks analyzed: Phase 1 with (<b>a1</b>) and without (<b>b1</b>) the coBot; Phase 2 with (<b>a2</b>) and without (<b>b2</b>) the coBot; and Phase 3 with (<b>a3</b>) and without (<b>b3</b>) the coBot.</p>
Full article ">Figure 2
<p>Mean and SD values for Phase 1, with Bazar (wB) in blue and without Bazar (woB) in red, for the investigated parameters (L4–L5 orthogonal forces, strength percent capable value, %MVC, and maximum holding time). An asterisk (*) over the bars shows statistical significance.</p>
Full article ">Figure 3
<p>Mean and SD values for Phase 2, with Bazar (wB) in blue and without Bazar (woB) in red, for the investigated parameters (L4–L5 orthogonal forces, strength percent capable value, %MVC, and maximum holding time). An asterisk (*) over the bars shows statistical significance.</p>
Full article ">Figure 4
<p>Mean and SD values for Phase 3 without Bazar (woB) in red, for the investigated parameters (L4–L5 orthogonal forces, strength percent capable value, %MVC, and maximum holding time). When using the Bazar coBot, this phase would be fully automated, so there are no values with the Bazar (wB).</p>
Full article ">
63 pages, 37620 KiB  
Article
BLUE SABINO: Development of a BiLateral Upper-Limb Exoskeleton for Simultaneous Assessment of Biomechanical and Neuromuscular Output
by Christopher K. Bitikofer, Sebastian Rueda Parra, Rene Maura, Eric T. Wolbrecht and Joel C. Perry
Machines 2024, 12(9), 617; https://doi.org/10.3390/machines12090617 - 3 Sep 2024
Cited by 1 | Viewed by 1261
Abstract
Arm and hand function play a critical role in the successful completion of everyday tasks. Lost function due to neurological impairment impacts millions of lives worldwide. Despite improvements in the ability to assess and rehabilitate arm deficits, knowledge about underlying sources of impairment and related sequelae remains limited. The comprehensive assessment of function requires the measurement of both biomechanical and neuromuscular contributors to performance during the completion of tasks that often use multiple joints and span three-dimensional workspaces. To our knowledge, the complexity of movement and diversity of measures required are beyond the capabilities of existing assessment systems. To bridge current gaps in assessment capability, a new exoskeleton instrument is developed with comprehensive bilateral assessment in mind. The development of the BiLateral Upper-limb Exoskeleton for Simultaneous Assessment of Biomechanical and Neuromuscular Output (BLUE SABINO) expands on prior iterations toward full-arm assessment during reach-and-grasp tasks through the development of a dual-arm and dual-hand system, with 9 active degrees of freedom per arm and 12 degrees of freedom (six active, six passive) per hand. Joints are powered by electric motors driven by a real-time control system with input from force and force/torque sensors located at all attachment points between the user and exoskeleton. Biosignals from electromyography and electroencephalography can be simultaneously measured to provide insight into neurological performance during unimanual or bimanual tasks involving arm reach and grasp. Design trade-offs achieve near-human performance in exoskeleton speed and strength, with positional measurement at the wrist having an error of less than 2 mm and a supported range of motion approximately equivalent to that of the 50th-percentile human. The system's adjustability in seat height, shoulder width, arm length, and orthosis width accommodates subjects from approximately the 5th-percentile female to the 95th-percentile male. Integration between the precision-actuation, human–robot-interaction force/torque-sensing, and biosignal-acquisition systems successfully provides the simultaneous measurement of human movement and neurological function. The bilateral design enables use with left- or right-side impairments as well as intra-subject performance comparisons. With the resulting instrument, the authors plan to investigate underlying neural and physiological correlates of arm function, impairment, learning, and recovery. Full article
(This article belongs to the Special Issue Advances in Assistive Robotics)
Figure 1. Bilateral exoskeleton predecessors to the BLUE SABINO: (LEFT) the EXO-UL7 and (RIGHT) the EXO-UL8.
Figure 2. The BLUE SABINO instrument design is composed of a width-adjustable base, a height-adjustable chair, length-adjustable upper-arm and forearm segments, size-adjustable HRA attachments, remote-center four-bar mechanisms, two-DOF shoulder modules (PRISM), and optional 12-DOF hand modules.
Figure 3. The kinematics of the human arm from the shoulder to the wrist can be represented by nine degrees of freedom. (A) Joints J1–J9 and their corresponding anatomical axes (indicated by red dashed arrows oriented along the axes of rotation). (B) The kinematics of BLUE SABINO accommodate these nine degrees of freedom per arm (here, red arrows indicate the selected positive orientation of each joint's rotation axis).
Figure 4. BLUE SABINO rigid-body links. The rigid links of the right BLUE SABINO arm are shown in an exploded view (top left and center). Three of the links form movable assemblies composed of various parallel-link mechanisms. The upper-arm (top right) and forearm (bottom left) remote-center mechanisms are composed of five primary links and an additional link for arm-length adjustment. PRISM (bottom right) is constructed with ten links, nine moving and one stationary base.
Figure 5. Anthropomorphic arm modeling for human–robot attachments (HRAs). Elliptical profiles for the proximal and distal ends of the (A) upper arm and (B) forearm, and (C) a lofted U-shaped profile for the hand.
Figure 6. Adjustable HRA orthotic designs and exploded assembly views. (A) The upper-arm orthosis. (B) The forearm orthosis. (C) The hand orthosis.
Figure 7. Definition of manipulator points (q), axes (ω), and force-sensor body frames.
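Figure 7's joint points (q) and axes (ω) are the ingredients of a serial-arm kinematic model. As a minimal, hedged illustration of the idea (a planar two-link analogue, not the exoskeleton's actual nine-DOF model; link lengths L1 and L2 are invented):

```python
import math

# Planar 2-link forward kinematics: each joint contributes a rotation
# about its axis, and link offsets accumulate into the end-effector pose.
L1, L2 = 0.30, 0.25  # hypothetical link lengths, metres

def fk(theta1, theta2):
    """End-effector (x, y) for joint angles in radians."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y
```

With both joints at zero the arm lies along x, so `fk(0, 0)` returns the fully extended reach (L1 + L2, 0); the full system generalizes this accumulation to nine joints per arm.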
Figure 8. BLUE SABINO 18-DOF bilateral electromechanical system.
Figure 9. Anticipated torque distributions per joint during ADL tasks (adapted from [75]), used to select motors and gears for Joints 1–6.
Figure 10. Layout for BLUE SABINO power and communication distribution.
Figure 11. BLUE SABINO system startup sequence.
Figure 12. Automatic and manual safety systems are integrated into the BLUE SABINO control architecture. Automatic systems provide fast and dependable safety responses, while the manual system allows the user and operator to stop the system manually, if needed.
Figure 13. Admittance control scheme for BLUE SABINO. (1) User-applied forces are converted to human joint torques, τh. (2) The admittance-control loop uses τh to set target states. (3) Joint-level admittance models, including inertia ma, velocity damping bv, and velocity-error damping bve, set the inner-loop trajectory targets. (4) The trajectory-control loop computes proportional-derivative (PD) admittance-state tracking control torques, τPD. (5) Model-based compensation for friction and gravity is added to the control torque, resulting in τu. (6) Safety limits are enforced on human–robot interaction forces and joint range of motion. Control-state monitoring disables control torque throughput if any safety limits are exceeded or network/device faults are detected.
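The Figure 13 caption describes a joint-level admittance pipeline: human torque drives a virtual admittance model, a PD loop tracks the resulting target states, model compensation is added, and a safety clamp bounds the command. A minimal single-joint sketch of that pattern (all parameter values here, including ma, bv, bve, and the gains, are illustrative placeholders, not the authors' published values):

```python
# Single-joint admittance control sketch (illustrative values only).
M_A, B_V, B_VE = 0.5, 2.0, 0.1  # admittance inertia / damping (placeholders)
KP, KD = 50.0, 5.0              # PD tracking gains (placeholders)
TAU_LIMIT = 10.0                # safety clamp on commanded torque, N*m
DT = 0.001                      # control period, s

def admittance_step(tau_h, q_t, qd_t):
    """Steps (2)-(3): map human torque tau_h to admittance target states."""
    qdd_t = (tau_h - B_V * qd_t) / M_A
    qd_t += qdd_t * DT
    q_t += qd_t * DT
    return q_t, qd_t

def control_torque(q, qd, q_t, qd_t, tau_g_hat, tau_f_hat):
    """Steps (4)-(6): PD tracking + gravity/friction terms + safety clamp."""
    tau_pd = KP * (q_t - q) + (KD + B_VE) * (qd_t - qd)
    tau_u = tau_pd + tau_g_hat + tau_f_hat
    return max(-TAU_LIMIT, min(TAU_LIMIT, tau_u))
```

A constant applied torque makes the target state drift forward smoothly (the virtual mass-damper), while the clamp guarantees the commanded torque never exceeds the safety limit regardless of tracking error.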
Figure 14. System integration phases: (A) An initial two-DOF version supported elbow flexion/extension and forearm pronosupination. (B) The five-DOF version added three orthogonal joints to the shoulder. (C) The future 18-DOF bilateral version adds two joints at the wrist and two joints at the base of the shoulder.
Figure 15. Motion-capture setup and predefined robot trajectories. (Top) The motion of the right-side seven-DOF BLUE SABINO arm is recorded simultaneously using a set of five Flex 13 IR motion-capture cameras. The cameras track the spatial positions of retroreflective markers on a special wrist-mounted motion-capture end-effector part. OptiTrack software fits a rigid body in real time to the marker set to define the position and orientation of Pc. (Bottom) Three-dimensional views of the upper-arm section of BLUE SABINO (with orthosis components removed for clarity) are shown in relation to the predefined trajectories traced out for motion-capture experiments. The path is traced by the end-effector point Pc, whose initial position is indicated by the purple sphere and represents the centroid of the rigid body tracked by the motion-capture system.
Figure 16. Sinusoid tracking inputs. The input position (red) and velocity (blue) state-target signals are shown for an experiment using joint J5. The PD control torque τPD (purple) and the gravity/friction compensation torque τ̂g + τ̂f (orange) are combined to generate the control input torque τu.
Figure 17. Logarithmic-ramping chirp-state inputs. The input position (red) and velocity (blue) state-target signals, and the instantaneous command frequency (black), are shown for the first half of the experiment, with the chirp ramping up between 0.1 and 2 Hz.
Figure 18. Biosignal acquisition validation task. The user begins the task with the fingertips touching the start target (tennis ball) located in the lower front part of the workspace. After hearing an audio cue, the user reaches to the second target (tennis ball) in the upper right part of the workspace, touches it, and returns their hand to touch the start target. An example trajectory for a single motion is illustrated in pink (left) in the context of a virtual robot model. Pink arrows overlaid on the experimental setup (center) represent the movement from one target to the next and back. The transparent grey scatter (right) illustrates the area traveled in all repetitions on the same virtual robot charts.
Figure 19. Topographical layout of EEG montage with the reference electrode at A2 (blue) and ground at AFz (red).
Figure 20. EMG montage and target muscles of the upper limb. Five EMG locations were placed on the skin over the shown target muscles. Bipolar electrodes were placed in pairs to enable differential measurement for improved noise rejection.
Figure 21. BLUE SABINO exoskeleton: (A) Seven-DOF bilateral arm configuration with task display screen, operator console, control tower, and shoulder-width adjustment mechanism; (B) Experimental chair with footrest and control-enable footswitch; (C) Overhead and (D) front views with a subject wearing the right-hand three-fingered OTHER Hand module.
Figure 22. OTHER Hand on the BLUE SABINO system.
Figure 23. Animation of BLUE SABINO joints via a kinematics-driven MATLAB script.
Figure 24. Results of motion capture for a segment of the UofI I trajectory illustrate the high agreement between the end-effector position measured by motion capture (blue) vs. encoder position (red). The absolute difference (purple) remains low, indicating that the robot is able to accurately measure its true position within 0.4 mm on average for the task.
Figure 25. Mean tracking error per shape. The means of the distance error between the end-effector point pc and the target position are shown in the left and center charts. The left chart shows errors reported by forward kinematics according to joint-position measurements. The center chart shows the error according to motion capture. The rightmost chart displays the mean absolute difference between the forward-kinematic and motion-capture measurements. Error bars display 95% confidence intervals of the means. Blue bars indicate statistics computed over individual actions (five repetitions each), while orange bars show statistics over all motions and repetitions (20 repetitions total).
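Error bars like those in Figure 25 are built from per-repetition means. A minimal sketch of a mean with an approximate 95% confidence interval (the sample values below are invented for illustration, and a z = 1.96 normal approximation is assumed rather than the authors' exact statistic):

```python
import math
from statistics import mean, stdev

def mean_ci95(samples):
    """Return (mean, half-width) of an approximate 95% CI.
    Uses the normal z = 1.96; small samples would normally use a
    t critical value instead."""
    m = mean(samples)
    half = 1.96 * stdev(samples) / math.sqrt(len(samples))
    return m, half

# Hypothetical per-repetition tracking errors in mm (not measured data).
errors_mm = [0.38, 0.42, 0.35, 0.44, 0.40]
m, h = mean_ci95(errors_mm)
print(f"mean = {m:.3f} mm, 95% CI = +/-{h:.3f} mm")
```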
Figure 26. The adjustments supporting 5th- to 95th-percentile users in BLUE SABINO are included in its custom chair, base structure, length-adjustable arms, and the size-adjustable orthotic components forming the human–machine attachments (HMAs) and adjustment mechanisms.
Figure 27. Range-of-motion comparison between healthy male and female ROM reported in [62,63,103,104,105], healthy movements measured by motion capture during ADL tasks [64,106,107], and BLUE SABINO's achieved ROM. BLUE SABINO's ROM encompasses approximately 95% of the ADL motion range for all joints combined. It also covers between 83% and 89% of healthy 50th-percentile ROM on all joints.
Figure 28. Sinusoid tracking-state accuracy. (Top) Position- and velocity-state tracking accuracy is shown for each of the joints of BLUE SABINO 5-DOF-RIGHT. Progression between states moves clockwise around each circle, with the position and velocity states shown in red and the error shown in blue. Only the portion of the input with full 10-degree amplitude (between 20 and 80 s in Figure 16) is shown. (Bottom) An enlarged view of the J7 state chart shows the cyclical tracking error in detail. Sixty wave cycles are shown, with measured-state line colors progressing from red to yellow to blue to highlight the variation of tracking accuracy between cycles.
Figure 29. Sinusoid tracking phase-magnitude characterization. A single sine-wave input and position/velocity response is shown for J7. The velocity-state measurement is filtered using a noncausal low-pass filter with a 10 Hz cutoff that smooths the signal, making it easier to identify the response peak time. The state time delay β is extracted as the time distance between peaks of the target and measured position states. The state-magnitude ratios are computed via the measurement and target state values at the identified peak times.
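The peak-based delay and magnitude extraction described for Figure 29 can be sketched on synthetic signals (the 1 Hz frequency, amplitudes, and 50 ms lag below are invented for illustration; real data would be peak-picked after the low-pass filtering the caption mentions):

```python
import math

def peak_index(signal):
    """Index of the maximum sample (crude peak picking over one period)."""
    return max(range(len(signal)), key=signal.__getitem__)

DT = 0.001   # 1 kHz sampling
F = 1.0      # 1 Hz sinusoid
LAG = 0.050  # simulated 50 ms response delay

t = [i * DT for i in range(int(1.0 / DT))]  # one full period
target = [10.0 * math.sin(2 * math.pi * F * ti) for ti in t]
measured = [9.0 * math.sin(2 * math.pi * F * (ti - LAG)) for ti in t]

i_t, i_m = peak_index(target), peak_index(measured)
beta = (i_m - i_t) * DT              # state time delay, seconds
ratio = measured[i_m] / target[i_t]  # state-magnitude ratio
```

On these synthetic signals the recovered delay is the injected 50 ms and the magnitude ratio is 9/10.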
Figure 30. RMS chirp-trajectory state-tracking error vs. frequency.
Figure 31. Chirp-trajectory state-tracking-error variance vs. frequency.
Figure 32. Logarithmic-ramping chirp-tracking time-series results. The input position (red) and velocity (blue) state-target signals and the instantaneous command frequency (dashed black) are shown in the first four columns. Command-torque signals are shown in the fifth and sixth columns, overlaid with each actuator's continuous and peak torque output bands.
Figure 33. Ensemble-averaged robotic measures. (Top) Hand displacement in x (red), y (blue), z (green), and absolute displacement (purple). (Middle) Hand-velocity ensemble average (red) of all measurements (grey). (Bottom) Required shoulder torque to complete each reach trajectory and the ensemble-average torque profile (purple). Individual trajectories from all reach movements and the shoulder torques required to drive the exoskeleton are displayed in grey.
Figure 34. Trial-averaged EEG, EMG, and robot kinematics: the ensemble-averaged biosignals, including contralateral low-beta EEG at C3 and EMG from three shoulder muscles, and robotic measurements, including the displacement and velocity of the right hand and the absolute summed magnitude of interaction torque between the user and the robot's shoulder joints.
Figure 35. Topographical progression of EEG power. The top portion of the plot shows Lβ power and μ power at C3, as well as shoulder torque from robot joints (RJ) 3–5, from 1000 ms before the cue presentation to 4000 ms after the cue. On the same timescale along the bottom are topographical heat maps showing the percent change in Lβ power with respect to baseline.
Figure 36. Adjustable HRA based on an ellipse-fit forearm model. (Left) Three-piece HRA design. (Right) Ellipse size range with an enlarged view of the potential range of alignment errors. Colored dots represent the center location of each ellipse when the adjustment is fully contracted (cyan), centered (yellow), and fully expanded (green). Red axes represent the model coordinate system, which was built around the 50th-percentile arm.
Figure A1. A triple-pivot four-bar mechanism is similar to a standard four-bar mechanism (A), with two intermediate links operated in parallel (B) and with both intermediate links extended beyond the output-link pivot to a remote-center (RC) output link (C).
Figure A2. Remote-center mechanisms. (A) Remote-center mechanisms at the upper arm and forearm allow the placement of actuators for internal/external rotation and pronation/supination away from the anatomical centers of rotation. (B) The mechanisms use ball-bearing pairs that are spaced apart to reduce angular play resulting from each bearing experiencing radial play in opposite directions. Precision shims ensure a snug fit at each bearing interface. The shims and the compression preload applied by the precision shoulder bolts reduce both axial and radial play.
Figure A3. Experimental mean and standard-deviation measurements vs. optimal-fit and relaxed-Coulombic-fit models show the torque required to overcome friction in each system motor. The sigmoid model of Equation (18), with parameters fit using FMINCON optimization, best fits the experimentally measured torque–velocity profiles. However, the relaxed model reduces activation in the low-velocity region, improving chatter rejection.
Figure A4. The effect of the friction model using relaxed-fit vs. optimal-fit friction parameters is illustrated. Friction-torque compensation for a white-noise-corrupted 1 Hz velocity signal is computed using the proposed sigmoid friction model with both sets of parameters. The relaxed-fit model effectively reduces the effect of discontinuous chatter as velocity passes through zero.
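Figures A3 and A4 describe a sigmoid friction compensator whose low-velocity slope is deliberately relaxed to reject chatter. A minimal sketch of the idea (a tanh form is used as a stand-in; Equation (18)'s exact expression and the fitted parameter values are not reproduced, and the Coulomb level and slopes below are invented):

```python
import math

def sigmoid_friction(vel, tau_c, k):
    """Smooth Coulomb-like friction torque: tau_c * tanh(k * vel).
    A large slope k approaches a hard sign() switch; a smaller
    ("relaxed") k commands little torque near zero velocity."""
    return tau_c * math.tanh(k * vel)

TAU_C = 1.5  # hypothetical Coulomb friction torque, N*m
optimal = lambda v: sigmoid_friction(v, TAU_C, k=200.0)  # sharp fit
relaxed = lambda v: sigmoid_friction(v, TAU_C, k=20.0)   # relaxed fit

# Near zero velocity the relaxed model commands far less torque, so
# sensor noise around v = 0 produces less compensation chatter.
print(optimal(0.005), relaxed(0.005))
```

At higher speeds both fits saturate at the same Coulomb level, which is why relaxing only the low-velocity slope costs little compensation accuracy elsewhere.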