Search Results (150)

Search Parameters:
Keywords = multi-arms robot

19 pages, 9948 KiB  
Article
Design of and Experiment with a Dual-Arm Apple Harvesting Robot System
by Wenlei Huang, Zhonghua Miao, Tao Wu, Zhengwei Guo, Wenkai Han and Tao Li
Horticulturae 2024, 10(12), 1268; https://doi.org/10.3390/horticulturae10121268 - 28 Nov 2024
Viewed by 385
Abstract
Robotic harvesting has become an urgent need for the development of the apple industry, due to the sharp decline in agricultural labor. At present, harvesting apples using robots in unstructured orchard environments remains a significant challenge. This paper focuses on addressing the challenges of perception, localization, and dual-arm coordination in harvesting robots and presents a dual-arm apple harvesting robot system. First, the paper introduces the integration of the robot’s hardware and software systems, as well as the control system architecture, and describes the robot’s workflow. Second, building on a dual-vision perception system, the paper adopts a fruit recognition method based on a multi-task network model and a frustum-based fruit localization approach to identify and localize fruits. Finally, to improve collaboration efficiency, a multi-arm task planning method based on a genetic algorithm is used to optimize the target harvesting sequence for each arm. Field experiments were conducted in an orchard to evaluate the overall performance of the robot system. The field trials demonstrated that the robot system achieved an overall harvest success rate of 76.97%, with an average fruit picking time of 7.29 s per fruit and a fruit damage rate of only 5.56%.
(This article belongs to the Section Fruit Production Systems)
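The genetic-algorithm sequencing step lends itself to a compact illustration. Below is a minimal sketch, assuming each fruit is a 3D point and each arm's cost is the length of its visit path; the encoding, operators, and parameters are illustrative choices, not the paper's implementation. Minimizing the makespan (the slower arm's path) is one plausible way to balance work between the two arms.

```python
# Minimal GA sketch for dual-arm harvest sequencing: assign each fruit to an
# arm and order the visits so the slower arm finishes as early as possible.
# All names, sizes, and operators here are illustrative assumptions.
import math
import random

FRUITS = [(random.uniform(0, 1), random.uniform(0, 1), random.uniform(0, 1))
          for _ in range(20)]

def path_length(points):
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def makespan(genome):
    # genome: list of (fruit_index, arm_id); each arm visits its fruits in order
    arms = {0: [], 1: []}
    for idx, arm in genome:
        arms[arm].append(FRUITS[idx])
    return max(path_length(arms[0]), path_length(arms[1]))

def mutate(genome):
    g = genome[:]
    if random.random() < 0.5:                      # swap visit order
        i, j = random.sample(range(len(g)), 2)
        g[i], g[j] = g[j], g[i]
    else:                                          # reassign one fruit to the other arm
        i = random.randrange(len(g))
        g[i] = (g[i][0], 1 - g[i][1])
    return g

population = [[(i, random.randint(0, 1)) for i in random.sample(range(20), 20)]
              for _ in range(60)]
for _ in range(200):
    population.sort(key=makespan)
    parents = population[:20]                      # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = min(population, key=makespan)
print("best makespan:", round(makespan(best), 3))
```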
Figure 1. Dual-arm picking robot prototype.
Figure 2. The overall structure of the robotic harvester for apple picking. (a) The main components of the robotic harvester. (b) The relevant dimensions of the robotic harvester.
Figure 3. The structure of the 4-DOF robotic arm. (a) The motion outer frame. (b) The picking arm.
Figure 4. The structure of the gripper. (a) The relevant dimensions of the gripper. (b) An actual image of the gripper.
Figure 5. The structure of the fruit transport and collection mechanism.
Figure 6. Dual-arm apple harvesting robot control system.
Figure 7. Topology of control and communication of harvesting robot.
Figure 8. Workflow diagram of overall robotic system.
Figure 9. Dual-vision fruit information acquisition system.
Figure 10. The multi-task MTN-O of fruit identification.
Figure 11. MTN-O network model detection performance.
Figure 12. Dedicated and common working areas of robotic arms.
Figure 13. Asynchronous overlapping zoning planning of 30 cities (fruits) and 2 salesmen (robotic arms).
Figure 14. Standardized orchard of dwarf rootstock. (a) Row and line space. (b) Spindle-shaped tree.
Figure 15. Harvesting workspace in canopy. (a) Front view. (b) Top view.
Figure 16. Field experiment.
Figure 17. Several common failure scenarios. (a) Recognition and localization error. (b) Obstacle obstruction. (c) Grasp failure. (d) Separation failure.
26 pages, 3127 KiB  
Review
Advances in Robotic Surgery: A Review of New Surgical Platforms
by Paola Picozzi, Umberto Nocco, Chiara Labate, Isabella Gambini, Greta Puleo, Federica Silvi, Andrea Pezzillo, Rocco Mantione and Veronica Cimolin
Electronics 2024, 13(23), 4675; https://doi.org/10.3390/electronics13234675 - 27 Nov 2024
Viewed by 1792
Abstract
In recent decades, the development of surgical systems which minimize patient impact has been a major focus for surgeons and researchers, leading to the advent of robotic systems for minimally invasive surgery. These technologies offer significant patient benefits, including enhanced outcome quality and accuracy, reduced invasiveness, lower blood loss, decreased postoperative pain, diminished infection risk, and shorter hospitalization and recovery times. Surgeons benefit from the elimination of human tremor, ergonomic advantages, improved vision systems, better access to challenging anatomical areas, and magnified 3D HD visualization of the operating field. Since 2000, Intuitive Surgical has developed multiple generations of master-slave multi-arm robots, securing over 7000 patents, which created significant barriers for competitors. This monopoly resulted in the widespread adoption of their technology, now used in over 11 million surgeries globally. With the expiration of key patents, new robotic platforms featuring innovative designs, such as modular systems, are emerging. This review examines advancements in robotic surgery within the fields of general, urological, and gynecological surgery. The objective is to analyze the current robotic surgical platforms, their technological progress, and their impact on surgical practices. By examining these platforms, this review provides insights into their development, potential benefits, and future directions in robotic-assisted surgery.
Figure 1. PRISMA flowchart.
Figure 2. Number of papers per country.
Figure 3. Senhance® robotic platform [143].
Figure 4. Revo-i® patient cart; robotic arms (A, B, C, D) [144].
Figure 5. Hugo™ robotic platform [146].
Figure 6. Hinotori™ surgical system [147].
Figure 7. Versius® surgical robot [148].
Figure 8. Mantra surgical robot [151].
20 pages, 9751 KiB  
Article
6D Pose Estimation of Industrial Parts Based on Point Cloud Geometric Information Prediction for Robotic Grasping
by Qinglei Zhang, Cuige Xue, Jiyun Qin, Jianguo Duan and Ying Zhou
Entropy 2024, 26(12), 1022; https://doi.org/10.3390/e26121022 - 26 Nov 2024
Viewed by 539
Abstract
In industrial robotic arm gripping operations within disordered environments, the loss of physical information on the object’s surface is often caused by factors such as varying lighting conditions, weak surface textures, and sensor noise, which leads to inaccurate object detection and pose estimation. A method for industrial object pose estimation using point cloud data is proposed to improve pose estimation accuracy. During feature extraction, both global and local information are captured by integrating the appearance features of RGB images with the geometric features of point clouds. Integrating semantic information with instance features effectively distinguishes instances of similar objects, and the fusion of depth information and RGB color channels enriches spatial context and structure. A cross-entropy loss function is employed for multi-class target classification, and a discriminative loss function enables instance segmentation. A novel point cloud registration method is also introduced to address re-projection errors when mapping 3D keypoints to 2D planes. This method utilizes 3D geometric information, extracting edge features using point cloud curvature and normal vectors, and registers them with models to obtain accurate pose information. Experimental results demonstrate that the proposed method is effective and superior on the LineMod and YCB-Video datasets. Finally, objects are grasped by deploying a robotic arm on the grasping platform.
(This article belongs to the Section Multidisciplinary Applications)
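The curvature-and-normal edge extraction described in the abstract can be approximated by a PCA over local neighborhoods. Below is a minimal sketch, assuming a raw N×3 point cloud; the neighborhood size and the 90th-percentile curvature threshold are placeholder choices, not the paper's values.

```python
# Sketch of curvature-based edge extraction from a point cloud: PCA over k
# nearest neighbors gives a normal (smallest eigenvector) and a curvature
# proxy (smallest-eigenvalue ratio). High-curvature points are kept as edges.
import numpy as np
from scipy.spatial import cKDTree

def normals_and_curvature(points, k=16):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    curvature = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        patch = points[nbrs] - points[nbrs].mean(axis=0)
        eigval, eigvec = np.linalg.eigh(patch.T @ patch)  # ascending eigenvalues
        normals[i] = eigvec[:, 0]          # eigenvector of smallest eigenvalue
        curvature[i] = eigval[0] / eigval.sum()
    return normals, curvature

points = np.random.rand(2000, 3)
normals, curv = normals_and_curvature(points)
edges = points[curv > np.percentile(curv, 90)]   # keep the most "edgy" 10%
print(edges.shape)
```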
Figure 1. Pose estimation network structure. Feature extraction is performed on the given scene RGB and point cloud images, matching the target pose through semantic and instance prediction and designing a priority grasping strategy to achieve accurate grasping by the robotic arm.
Figure 2. Instance segmentation branch network.
Figure 3. Hash table framework. For two point pairs in the model edge appearance, keypoint pair features are calculated and saved in the hash table for subsequent feature matching of the template point cloud with the instance point cloud.
Figure 4. Definition of B2B-DL descriptors. To find corresponding pairs of points between the scene and the model, a descriptor was devised by calculating the tangent lines and considering their direction as the direction of the points.
Figure 5. The coordinate transformation relationship between the instance and model point clouds. The inverse transformation T_{p→g}^{-1} repositions the reference point p_r of the instance point cloud to the origin, and the normal direction is aligned parallel to the x-axis of the coordinate system.
Figure 6. Positional information for each instance obtained by aligning the model point cloud with the field instance point cloud.
Figure 7. The experimental platform, consisting mainly of the 3D industrial camera, KUKA robotic arm, target object, and PLC S7-1200, used for disordered gripping of industrial parts.
Figure 8. Suction devices: each part corresponds to one kind of suction device. The first row consists of two sets of fixtures, the beam fixture and the roller fixture; the second row includes the suction devices for the wheel fixture and the base fixture. The left side of each set of fixtures is its 3D model, and the right side is the real fixture.
Figure 9. Coordinate transformation between the camera, robotic arm, and object of the robotic gripping platform.
Figure 10. Four low-textured industrial parts. The first row shows, from left to right, the real beam, hub, roller, and base. The second row shows the CAD models of the beam, hub, roller, and base, from left to right.
Figure 11. Gripping workflow: the robot arm first receives the signal from the PLC to grasp the target workpiece, then picks up the corresponding fixture, returns to the home position, waits for the gripping position information, and executes the gripping operation.
Figure 12. Pose estimation results on the LineMod dataset.
Figure 13. Pose estimation results on the YCB-Video dataset.
Figure 14. Pose estimation results on a real dataset. The second column shows the results after instance segmentation clustering, and the third column shows the pose estimation results for the target object.
Figure 15. Pose estimation results obtained from testing in cluttered and occluded scenes. The columns give the RGB image, depth map, point cloud converted from the depth map, instance point cloud clustering, edge estimation, and pose estimation results.
Figure 16. Results of the robotic arm gripping experiments performed in the real scene. Two kinds of workpieces are selected as gripping objects: a hub and eight wheels. Each column shows the process of converting the pose results obtained with the algorithm into position information in robotic arm coordinates so that the robotic arm grasps each object.
42 pages, 28776 KiB  
Article
Orbital-Based Automatic Multi-Layer Multi-Pass Welding Equipment for Small Assembly Plates
by Yang Cai, Gongzhi Yu, Jikun Yu and Yayue Ji
Appl. Sci. 2024, 14(23), 10878; https://doi.org/10.3390/app142310878 - 24 Nov 2024
Viewed by 676
Abstract
To address the technical challenges, production quality issues, and inefficiencies caused by the heavy reliance on traditional manual processing of small assembly plates in the shipbuilding industry, this paper presents the design and analysis of a track-based automatic welding device. This equipment provides a solution for achieving batch and continuous welding in the field of automatic welding technology. The design section includes the mechanical design of the equipment’s core mechanisms, the design of the operating systems, the development of visual scanning strategies under working conditions, and the formulation of multi-layer and multi-pass welding processes. The analysis section comprises the static analysis of the equipment’s mechanical structure, kinematic analysis of the robotic arm, and inspection analysis of the device. Compared with manual welding, multi-layer and multi-pass welding experiments conducted using the equipment demonstrated stabilized welding quality for small assembly plates. Under the conditions of single plates with different groove positions and gaps, when the gap was 4 mm, processing efficiency increased by 7.35%, and processing time was reduced by 10.2%; when the gap was 5 mm, processing efficiency increased by 10.7%, and processing time decreased by 7.39%. The welding formation rate for the overall processing of single plate panels and web grooves increased by 11.48%, total material consumption decreased by 13.4%, and unit material consumption decreased by 13.5%. For mass production of small assembly plates of the same specifications, processing time was reduced by 16.7%, and there was a 41.4% reduction in costs. The equipment effectively addresses the low level of automation and heavy dependence on traditional manual processing in the shipbuilding industry, contributing to cost reduction and efficiency improvement.
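The kinematic analysis mentioned above rests on the standard Denavit-Hartenberg convention. The following is a minimal forward-kinematics sketch; the parameter table is a hypothetical 6-axis example, not the equipment's actual geometry.

```python
# Minimal Denavit-Hartenberg forward kinematics: chain per-joint homogeneous
# transforms to get the base-to-TCP pose. The DH table below is illustrative.
import numpy as np

def dh_transform(theta, d, a, alpha):
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Hypothetical 6-axis table: (d, a, alpha) per joint, lengths in meters
DH_TABLE = [(0.4, 0.025, np.pi / 2), (0.0, 0.455, 0.0), (0.0, 0.035, np.pi / 2),
            (0.42, 0.0, -np.pi / 2), (0.0, 0.0, np.pi / 2), (0.08, 0.0, 0.0)]

def forward_kinematics(joint_angles):
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, DH_TABLE):
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # base-to-TCP homogeneous transform

print(forward_kinematics([0.1, -0.5, 0.3, 0.0, 0.2, 0.0])[:3, 3])  # TCP position
```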
Figure 1. Overview of track equipment.
Figure 2. Partial diagram of the track and base.
Figure 3. Schematic of fixing mechanism.
Figure 4. Schematic diagram of quick-insert mechanism.
Figure 5. Rail cart grid division model diagram. (a) Finite element module division diagram. (b) Finite element mesh division diagram.
Figure 6. Rail cart analysis diagram.
Figure 7. Rail cart grid division model diagram. (a) Finite element module division diagram. (b) Finite element mesh division diagram.
Figure 8. Simplified mechanism bias load analysis diagram.
Figure 9. Track section grid division model diagram. (a) Finite element module division diagram. (b) Finite element mesh division diagram.
Figure 10. Track section analysis diagram.
Figure 11. Robotic arm connecting seat grid division model diagram. (a) Finite element module division diagram. (b) Finite element mesh division diagram.
Figure 12. Robotic arm connecting seat analysis diagram.
Figure 13. Fixing mechanism grid division model diagram. (a) Finite element module division diagram. (b) Finite element mesh division diagram.
Figure 14. Fixing mechanism analysis diagram.
Figure 15. Overall working logic diagram of equipment.
Figure 16. Overall operating equipment summary diagram.
Figure 17. Relative diagram of the coordinate system for the calibration process.
Figure 18. Schematic of Euler angle rotation.
Figure 19. Robotic arm TCP eight-point calibration diagram.
Figure 20. Calibration conversion relationship diagram.
Figure 21. Vision sensor calibration diagram. (a) Laser origin positioning calibration diagram. (b) Laser field distance dimensions.
Figure 22. Calibration result graph.
Figure 23. Structural diagram of the robotic arm body.
Figure 24. D-H coordinate system model diagram.
Figure 25. Kinematic analysis flow chart.
Figure 26. Robotic arm attitude verification diagram.
Figure 27. Trajectory planning flow chart.
Figure 28. Comparative analysis chart for trajectory planning.
Figure 29. Workspace solution analysis diagram.
Figure 30. Working condition and robotic arm posture information diagram.
Figure 31. Workspace point cloud data map.
Figure 32. Small assembly plates working conditions.
Figure 33. Cutaway view of plate butt bevel.
Figure 34. Equipment operation test chart.
Figure 35. TCP motion trajectory projection.
Figure 36. Bevel scanning and information processing flow chart.
Figure 37. Welding experiment flow chart.
Figure 38. Welding performance comparative analysis diagram.
Figure 39. Schematic diagram of automatic and manual welding results.
Figure 40. Automatic welding equipment welding process renderings. (a) Panel multi-layer and multi-pass welding process effect. (b) Web multi-layer and multi-pass welding process effect.
Figure 41. Comprehensive evaluation diagram.
Figure 42. Summary diagram.
15 pages, 3119 KiB  
Article
Fault Detection in Harmonic Drive Using Multi-Sensor Data Fusion and Gravitational Search Algorithm
by Nan-Kai Hsieh and Tsung-Yu Yu
Machines 2024, 12(12), 831; https://doi.org/10.3390/machines12120831 - 21 Nov 2024
Viewed by 440
Abstract
This study proposes a fault diagnosis method for harmonic drive systems based on multi-sensor data fusion and the gravitational search algorithm (GSA). As a critical component in robotic arms, harmonic drives are prone to failures due to wear, insufficient grease, or improper loading, which can compromise system stability and production efficiency. To enhance diagnostic accuracy, the research employs wavelet packet decomposition (WPD) and empirical mode decomposition (EMD) to extract multi-scale features from vibration signals. These features are subsequently fused, and GSA is used to optimize the high-dimensional fused features, eliminating redundant data and mitigating overfitting. The optimized features are then input into a support vector machine (SVM) for fault classification, with K-fold cross-validation used to assess the model’s generalization capabilities. Experimental results demonstrate that the proposed diagnosis method, which integrates multi-sensor data fusion with GSA optimization, significantly improves fault diagnosis accuracy compared to methods using single-sensor signals or unoptimized features. This improvement is particularly notable in multi-class fault scenarios. Additionally, GSA’s global search capability effectively addresses overfitting issues caused by high-dimensional data, resulting in a diagnostic model with greater reliability and accuracy across various fault conditions.
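The feature-extraction and classification stages of such a pipeline can be sketched compactly. In the sketch below, wavelet-packet band energies stand in for the paper's WPD/EMD features, and a simple top-k univariate selection stands in for GSA optimization; signal lengths, wavelet choice, and k are assumptions, not the paper's settings.

```python
# Sketch of the fusion-and-classification stage: wavelet-packet band energies
# as features, a stand-in feature selector instead of GSA, SVM + K-fold CV.
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wpd_band_energies(signal, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(signal, wavelet, maxlevel=level)
    return np.array([np.sum(np.square(node.data))
                     for node in wp.get_level(level, "natural")])

rng = np.random.default_rng(0)
# Toy two-class data: "faulty" signals have larger vibration amplitude
X = np.vstack([wpd_band_energies(rng.standard_normal(1024) * (1 + label))
               for label in (0, 1) for _ in range(40)])
y = np.repeat([0, 1], 40)

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=4),   # stand-in for GSA selection
                      SVC(kernel="rbf"))
print(cross_val_score(model, X, y, cv=5).mean())     # K-fold validation
```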
Figure 1. Enhanced harmonic drive fault diagnosis framework diagram.
Figure 2. Three-layered wavelet packet decomposition process diagram.
Figure 3. (a) Experimental setup; (b) schematic of the sixth axis; (c) gear wear; (d) bearing damage; (e) improper load; (f) gear fracture.
Figure 4. K-fold cross-validation diagram.
Figure 5. Accuracy comparison chart for different optimization methods. (a) FWPD, (b) FWPD+GSA, (c) FEMD, (d) FEMD+GSA.
Figure 6. Accuracy comparison chart.
Figure 7. Computation time comparison of different methods.
30 pages, 15227 KiB  
Review
A Survey of Planar Underactuated Mechanical System
by Zixin Huang, Chengsong Yu, Ba Zeng, Xiangyu Gong and Hongjian Zhou
Machines 2024, 12(12), 829; https://doi.org/10.3390/machines12120829 - 21 Nov 2024
Viewed by 364
Abstract
Planar underactuated mechanical systems have been a popular research issue in the area of mechanical systems and nonlinear control. This paper reviews the current research status of control methods for a class of planar underactuated manipulator (PUM) systems containing a single passive joint. First, the general dynamics and kinematics models of the PUM are given and its control characteristics are introduced. Second, according to the position of the passive joint, PUMs are classified into passive first joint, passive last joint, and passive intermediate joint systems, and the existing intelligent control methods for each class are analyzed and discussed. Finally, in response to the above discussion, we provide a brief theoretical analysis and summarize the challenges faced by PUM control, namely system uncertainty and robustness, the lack of unified control methods, and research on underactuated systems with multiple uncontrollable passive joints; at the same time, practical applications face limitations that remain to be addressed, namely anti-jamming, cooperative control of multiple planar underactuated robotic arms, and the development of spatial underactuated robotic arm systems. For these challenges and problems in the control of PUM systems, we elaborate point by point and put forward research directions and related ideas for future work, taking into account the contributions of current work.
(This article belongs to the Section Machine Design and Theory)
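For readers unfamiliar with the Acrobot discussed in the survey (passive first joint, actuated second joint), the following is a minimal dynamics sketch using the standard two-link manipulator equations; masses, lengths, inertias, and the torque profile are illustrative values only.

```python
# Minimal Acrobot simulation: solve M(q) ddq + C(q, dq) + G(q) = [0, tau2]^T
# for the joint accelerations (only joint 2 is actuated), then integrate.
import numpy as np

m1 = m2 = 1.0; l1 = 1.0; lc1 = lc2 = 0.5; I1 = I2 = 0.1; g = 9.81

def acrobot_accel(q, dq, tau2):
    q1, q2 = q; dq1, dq2 = dq
    d11 = m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*np.cos(q2)) + I1 + I2
    d12 = m2*(lc2**2 + l1*lc2*np.cos(q2)) + I2
    d22 = m2*lc2**2 + I2
    h = -m2*l1*lc2*np.sin(q2)
    C = np.array([h*dq2**2 + 2*h*dq1*dq2, -h*dq1**2])          # Coriolis/centrifugal
    G = np.array([(m1*lc1 + m2*l1)*g*np.cos(q1) + m2*lc2*g*np.cos(q1 + q2),
                  m2*lc2*g*np.cos(q1 + q2)])                   # gravity
    M = np.array([[d11, d12], [d12, d22]])
    return np.linalg.solve(M, np.array([0.0, tau2]) - C - G)

# Semi-implicit Euler forward simulation from the hanging configuration
q = np.array([-np.pi / 2, 0.0]); dq = np.zeros(2); dt = 1e-3
for step in range(5000):
    dq += acrobot_accel(q, dq, tau2=0.5 * np.sin(2 * np.pi * step * dt)) * dt
    q += dq * dt
print(q)
```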
Figure 1. Physical structure of planar n-DoF manipulator.
Figure 2. Physical structure of planar A^m PA^n manipulator.
Figure 3. Structural sketch of the planar A^m PA^n manipulator.
Figure 4. Physical structure of planar Acrobot manipulator.
Figure 5. Physical structure of planar PAA manipulator system.
Figure 6. Physical structure of planar manipulator system. (a) PA manipulator system. (b) PAAA manipulator system. (c) PA^n manipulator system. (d) PAPA manipulator system.
Figure 7. Acrobot simulation results (first data set). (a) Angle. (b) Angular velocity. (c) Torque.
Figure 8. Pendubot simulation results (first data set). (a) Angle. (b) Angular velocity. (c) Torque.
Figure 9. Acrobot simulation results (second data set). (a) Angle. (b) Angular velocity. (c) Torque.
Figure 10. Pendubot simulation results (second data set). (a) Angle. (b) Angular velocity. (c) Torque.
Figure 11. Physical structure of planar Pendubot manipulator system.
Figure 12. Physical structure of planar AAP manipulator system.
Figure 13. Physical structure of planar A^m P (m > 2) manipulator system.
Figure 14. Physical structure of planar APA manipulator system.
Figure 15. Physical structure of planar A^m PA^n (m ≥ 1, n ≥ 1, m + n ≥ 2) manipulator system.
20 pages, 2004 KiB  
Communication
Towards Open-Set NLP-Based Multi-Level Planning for Robotic Tasks
by Peteris Racinskis, Oskars Vismanis, Toms Eduards Zinars, Janis Arents and Modris Greitans
Appl. Sci. 2024, 14(22), 10717; https://doi.org/10.3390/app142210717 - 19 Nov 2024
Viewed by 573
Abstract
This paper outlines a conceptual design for a multi-level natural language-based planning system and describes a demonstrator. The main goal of the demonstrator is to serve as a proof-of-concept by accomplishing end-to-end execution in a real-world environment, and showing a novel way of interfacing an LLM-based planner with open-set semantic maps. The target use-case is executing sequences of tabletop pick-and-place operations using an industrial robot arm and RGB-D camera. The demonstrator processes unstructured user prompts, produces high-level action plans, queries a map for object positions and grasp poses using open-set semantics, then uses the resulting outputs to parametrize and execute a sequence of action primitives. In this paper, the overall system structure, high-level planning using language models, low-level planning through action and motion primitives, as well as the implementation of two different environment modeling schemes (2.5-dimensional or fully 3-dimensional) are described in detail. The impacts of quantizing image embeddings on object recall are assessed and high-level planner performance is evaluated using a small reference scene data set. We observe that, for the simple constrained test command data set, the high-level planner is able to achieve a total success rate of 96.40%, while the semantic maps exhibit maximum recall rates of 94.69% and 92.29% for the 2.5d and 3d versions, respectively.
(This article belongs to the Special Issue Digital Technologies Enabling Modern Industries)
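The open-set map query can be illustrated in a few lines: score stored embeddings by cosine similarity against a text-query embedding and aggregate the best-matching map cells. In the sketch below, the CLIP-style encoder is assumed and `encode_text` is a hypothetical stand-in, not the paper's API; the centroid aggregation is likewise a simplification.

```python
# Sketch of an open-set semantic-map query: rank per-voxel embeddings by
# cosine similarity to a text query and return a crude object location.
import numpy as np

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b) + 1e-9)

rng = np.random.default_rng(0)
voxel_centers = rng.uniform(0, 1, size=(5000, 3))   # map geometry (toy data)
voxel_embeds = rng.standard_normal((5000, 512))     # one embedding per voxel

def encode_text(query):                             # hypothetical text encoder
    return rng.standard_normal(512)

def locate(query, top_k=50):
    scores = cosine_similarity(voxel_embeds, encode_text(query))
    best = np.argsort(scores)[-top_k:]              # most similar voxels
    return voxel_centers[best].mean(axis=0)         # crude object centroid

print(locate("scotch tape"))
```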
Figure 1. An open-set segmentation of a scene (left), colored by similarity to the queries "scissors" (center) and "computer mouse" (right).
Figure 2. Architecture of the system demonstrator.
Figure 3. Pick (I) and place (II) action flowcharts.
Figure 4. A scene in the reference data set (see Section 5.2). (I) Image of the scene. (II) Vector octree map reconstruction at a resolution of 0.01 m, colored by similarity to the query "scotch tape", with a grasp pose estimate indicated by shifted coordinate axis markers. (III) Manually annotated reference grasps; center (red cross) and direction (blue asterisk) indicators drawn as seen by the user of the data set tagging utility. (IV) A visualization showing a semantic map grasp pose estimate and the corresponding ground-truth annotation.
Figure 5. End-to-end execution on the real robot. Given the command "put the toy car on the realsense box", the HLP produces a sequence containing a pick action (I, II) followed by a place action (III), which the LLP decomposes into primitives and executes. Object poses are obtained from the vectree map.
8 pages, 24773 KiB  
Communication
A Comparison Between Single-Stage and Two-Stage 3D Tracking Algorithms for Greenhouse Robotics
by David Rapado-Rincon, Akshay K. Burusa, Eldert J. van Henten and Gert Kootstra
Sensors 2024, 24(22), 7332; https://doi.org/10.3390/s24227332 - 17 Nov 2024
Viewed by 595
Abstract
With the current demand for automation in the agro-food industry, accurately detecting and localizing relevant objects in 3D is essential for successful robotic operations. However, this is a challenge due to the presence of occlusions. Multi-view perception approaches allow robots to overcome occlusions, but a tracking component is needed to associate the objects detected by the robot over multiple viewpoints. Multi-object tracking (MOT) algorithms can be categorized into two-stage and single-stage methods. Two-stage methods tend to be simpler to adapt and implement for custom applications, while single-stage methods present a more complex end-to-end tracking approach that can yield better results in occluded situations at the cost of more training data. The potential advantages of single-stage methods over two-stage methods depend on the complexity of the sequence of viewpoints that a robot needs to process. In this work, we compare a 3D two-stage MOT algorithm, 3D-SORT, against a 3D single-stage MOT algorithm, MOT-DETR, in three different types of sequences with varying levels of complexity. The sequences represent simpler and more complex motions that a robot arm can perform in a tomato greenhouse. Our experiments in a tomato greenhouse show that the single-stage algorithm consistently yields better tracking accuracy, especially in the more challenging sequences where objects are fully occluded or non-visible during several viewpoints.
(This article belongs to the Section Sensors and Robotics)
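The data-association step at the heart of a two-stage tracker like 3D-SORT can be sketched directly: match new 3D detections to existing tracks with the Hungarian algorithm over pairwise distances. The gating distance below is an assumed value, not the paper's parameter.

```python
# Sketch of two-stage 3D data association: Hungarian assignment over the
# track-to-detection distance matrix, with a simple distance gate.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, max_dist=0.05):
    """tracks, detections: (N, 3) and (M, 3) arrays of 3D positions (meters)."""
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    unmatched = set(range(len(detections))) - {c for _, c in matches}
    return matches, sorted(unmatched)   # unmatched detections spawn new tracks

tracks = np.array([[0.10, 0.20, 0.50], [0.40, 0.25, 0.55]])
dets = np.array([[0.11, 0.20, 0.50], [0.80, 0.10, 0.60], [0.40, 0.26, 0.55]])
print(associate(tracks, dets))
```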
Figure 1. (Left) Robotic system used for data collection: a 6-DoF ABB IRB1200 robot arm mounted on a mobile platform (not visible in the image) that allows movement over the greenhouse heating rails, with a scissor-like cutting and gripping tool and a RealSense L515 camera mounted on the end effector. (Right) Illustration of the planar path followed by the robot with respect to the plant in front of it; an area of 60 cm (height) by 40 cm (width) was covered in steps of 2 cm.
Figure 2. Examples of the images and point clouds collected by the robot. (Left) The distance from the camera to the plant is 40 cm. (Right) The distance from the camera to the plant is 60 cm.
Figure 3. 3D-SORT (top): the color image is processed by the object detection algorithm, and the resulting detections are combined with the point cloud to generate a 3D position per detected object, which serves as the re-ID property used in the data association step; the Hungarian algorithm then associates the locations of newly detected objects with the previously tracked object positions. MOT-DETR (bottom): color images and point clouds are used at the same time to detect objects with their corresponding class and re-ID features (black-box features); the re-ID features are then passed to a Hungarian-based data association algorithm.
Full article ">
19 pages, 13598 KiB  
Article
Structural Parameter Optimization of a Tomato Robotic Harvesting Arm: Considering Collision-Free Operation Requirements
by Chuanlang Peng, Qingchun Feng, Zhengwei Guo, Yuhang Ma, Yajun Li, Yifan Zhang and Liangzheng Gao
Plants 2024, 13(22), 3211; https://doi.org/10.3390/plants13223211 - 15 Nov 2024
Viewed by 652
Abstract
The current harvesting arms used in harvesting robots are developed based on standard products. Due to design constraints, they are unable to effectively avoid obstacles while harvesting tomatoes in tight spaces. To enhance the robot’s capability in obstacle-avoidance picking of tomato bunches with various postures, this study proposes a geometric parameter optimization method for a 7 degree of freedom (DOF) robotic arm. This method ensures that the robot can reach a predetermined workspace with a more compact arm configuration. The optimal picking posture for the end-effector is determined by analyzing the spatial distribution of tomato bunches, the main stem position, and peduncle posture, enabling a quantitative description of the obstacle-avoidance workspace. The Denavit–Hartenberg (D-H) model of the harvesting arm and the expected collision-free workspace are set as constraints. The compactness of the arm and the accessibility of the harvesting space serve as the optimization objectives. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) multi-objective genetic algorithm is employed to optimize the arm lengths, and the results were validated through a virtual experiment using workspace traversal. The results indicate that the optimized structure of the tomato harvesting arm is compact, with a reachability of 92.88% in the workspace, based on the collision-free harvesting criteria. This study offers a reference for structural parameter optimization of robotic arms specialized in fruit and vegetable harvesting.
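The trade-off behind this optimization (compactness versus reachability) can be illustrated with a non-dominated filter over sampled candidates, which is the core selection criterion inside NSGA-II. The reachability model below is a toy stand-in for the paper's collision-free workspace check, and the link-length names are illustrative.

```python
# Sketch of the two-objective arm-length trade-off: sample candidate link
# lengths, score compactness vs. (negated) reachability, keep the Pareto set.
import numpy as np

rng = np.random.default_rng(1)
candidates = rng.uniform(0.1, 0.6, size=(200, 4))   # e.g., d1, d3, d5, d7 (m)

def objectives(lengths):
    compactness = lengths.sum()                     # minimize total arm length
    # Toy reachability: longer, more balanced arms reach more of the workspace
    reach = np.clip(lengths.sum(), 0, 1.2) / 1.2 * np.exp(-3 * np.var(lengths))
    return np.array([compactness, -reach])          # both objectives minimized

F = np.array([objectives(c) for c in candidates])

def non_dominated(F):
    keep = []
    for i, fi in enumerate(F):
        dominated = any(np.all(fj <= fi) and np.any(fj < fi) for fj in F)
        if not dominated:
            keep.append(i)
    return keep

front = non_dominated(F)
print(len(front), "Pareto-optimal candidates; NSGA-II evolves toward this front")
```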
Figure 1. Industrial greenhouse tomato environment. (a) Workspace of tomato harvesting; (b) tomato bunch on plant.
Figure 2. Growth morphology of tomato bunches and requirements for obstacle-avoidance picking operations. (a) Tomato bunches with peduncles oriented in front of the main stem; (b) tomato bunches with peduncles oriented to the back of the main stem; (c) tomato bunches with peduncles oriented to the left of the main stem; (d) tomato bunches with peduncles oriented to the right of the main stem; (e) manual method for picking tomato bunches.
Figure 3. Tomato characteristics and picking posture description. (a) Tomato peduncle orientation distribution area and peduncle posture description; (b) description of acceptable picking posture.
Figure 4. Peduncle cutting end-effector.
Figure 5. Tomato harvesting robot model. (a) Robot model and its components; (b) 7-DOF robotic arm model and its coordinate system.
Figure 6. Potential picking tasks for one tomato bunch. The red point indicates the intersection point between the peduncle and the main stem, the red arrow indicates the main stem vector, the green arrow indicates the peduncle vector, the yellow point indicates the picking point, and gray, pink, blue, and light red indicate random points within regions I, II, III, and IV, respectively.
Figure 7. Picking reachability as each arm length parameter varies. (a) Picking reachability over d1; (b) picking reachability over d3; (c) picking reachability over d5; (d) picking reachability over d7.
Figure 8. Reachability at spatial points. The left sphere shows a right-front 45° view, and the right sphere shows a left-rear 45° view; the red arrow indicates the main stem vector, the green arrow indicates the peduncle vector, green points indicate reachable points, and red points indicate inaccessible points.
Figure 9. Reachability in the workspace. Point colors bin picking reachability: red 32–40.5%, orange 40.5–49%, yellow 49–57.5%, green 57.5–66%, light blue 66–74.5%, blue 74.5–83%, dark blue 83–91.5%, and purple 91.5–100%.
Full article ">
21 pages, 9035 KiB  
Article
Design and Implementation of an AI-Based Robotic Arm for Strawberry Harvesting
by Chung-Liang Chang and Cheng-Chieh Huang
Agriculture 2024, 14(11), 2057; https://doi.org/10.3390/agriculture14112057 - 15 Nov 2024
Cited by 1 | Viewed by 949
Abstract
This study presents the design and implementation of a wire-driven, multi-joint robotic arm equipped with a cutting and gripping mechanism for harvesting delicate strawberries, with the goal of reducing labor and costs. The arm is mounted on a lifting mechanism and linked to a laterally movable module, which is affixed to the tube cultivation shelf. The trained deep learning model can instantly detect strawberries, identify optimal picking points, and estimate the contour area of fruit while the mobile platform is in motion. A two-stage fuzzy logic control (2s-FLC) method is employed to adjust the length of the arm and bending angle, enabling the end of the arm to approach the fruit picking position. The experimental results indicate a 90% accuracy in fruit detection, an 82% success rate in harvesting, and an average picking time of 6.5 s per strawberry, reduced to 5 s without arm recovery time. The performance of the proposed system in harvesting strawberries of different sizes under varying lighting conditions is also statistically analyzed and evaluated in this paper.
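The flavor of one stage of such a fuzzy controller can be sketched with triangular membership functions and centroid defuzzification. The universes, rule base, and breakpoints below are illustrative assumptions in the spirit of FLC 1 (vertical pixel offset and fruit contour area in, PWM command out), not the paper's tuned controller.

```python
# Minimal Mamdani fuzzy-control sketch: fuzzify two inputs, fire a small rule
# base with min-AND, aggregate with max, and defuzzify by centroid.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with peak b and support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def flc1(v_offset, area):
    # Fuzzify inputs (two sets per input; breakpoints are assumptions)
    off_low = tri(v_offset, -1.0, 0.0, 600.0)
    off_high = tri(v_offset, 0.0, 600.0, 601.0)
    area_small = tri(area, -1.0, 0.0, 2e5)
    area_big = tri(area, 0.0, 2e5, 2e5 + 1)
    # Rule base: firing strength (min AND) -> output fuzzy set
    rules = [(min(off_low, area_small), "slow"),
             (min(off_low, area_big), "mid"),
             (min(off_high, area_small), "mid"),
             (min(off_high, area_big), "fast")]
    pwm = np.linspace(0.0, 255.0, 256)
    out = {"slow": tri(pwm, -1.0, 0.0, 128.0),
           "mid": tri(pwm, 64.0, 128.0, 192.0),
           "fast": tri(pwm, 128.0, 255.0, 256.0)}
    agg = np.zeros_like(pwm)
    for strength, label in rules:              # Mamdani max-min aggregation
        agg = np.maximum(agg, np.minimum(strength, out[label]))
    return float((pwm * agg).sum() / (agg.sum() + 1e-9))  # centroid defuzzification

print(flc1(v_offset=450.0, area=1.2e5))
```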
Figure 1. Schematic of joint arm swing (the black dotted line indicates the trajectory of the arm swing).
Figure 2. Structure of the multi-jointed robotic arm (center); base of the arm (top left) and end joint (bottom left); internal hoses and thin wires within the arm (right).
Figure 3. Design of clamp and cutting tool. (a) The structure of clamping and cutting tools; (b) clamp in the open state; (c) clamp in the closed state; (d) prototype of the two sets of clamps; (e) mounting of the clamp on the joint arm (with the upper clamp in the open state and the lower clamp in the closed state); (f) the clamp in action for picking strawberries (the nozzle is installed inside the tube, bottom left).
Figure 4. Clamp cutting part with two blades and gripping part with two foam pads.
Figure 5. Hydroponic fruit picking platform: ➀ hydroponic PVC pipe and aluminum extrusion track; ➁ pulley module; ➂ module for raising and lowering the arm; ➃ arm with two camera units.
Figure 6. Robotic arm and lifting module: (a) prototype of lifting module and mechanism; (b–d) actions of extending the robotic arm.
Figure 7. Process of creating the object model.
Figure 8. Side view of fruit models in three different sizes, labeled Size 1 (a), Size 2 (b), and Size 3 (c).
Figure 9. Coordinate configuration of arm and strawberry.
Figure 10. Block diagram of the 2s-FLC system.
Figure 11. Input and output variable fuzzification for FLC 1 and FLC 2. (a) v′ as input of FLC 1; (b) a as input of FLC 1 and FLC 2; (c) PWM_Z as output of FLC 1; (d) PWM_Z as input of FLC 2; (e) PWM_Y as output of FLC 2.
Figure 12. Example of fuzzy inference and defuzzification; fuzzy inference results when v′ = 600 and a = 10^5 (pixels) (FLC 1).
Figure 13. Fuzzy inference surfaces of FLC 1 (left) and FLC 2 (right).
Figure 14. Strawberry identification results.
Figure 15. Bending test of the jointed arm. (a) Simulated joint arm bending using the Simulink tool; (b) arm bending without the plastic tube inserted; (c) arm bending with the plastic tube inserted.
Figure 16. Swing trajectory of the joint arm. (a) Bending trajectories of PVC plastic pipes with insertion (blue line) and without insertion (black dashed line); (b) relationship between joint arm lengths and swing trajectories (each color represents the swing trajectory for a different joint arm length).
Figure 17. Average time per fruit for the single-fruit picking operation.
Figure 18. Snapshot of the experimental site (strawberry models of different sizes hung on one side).
Figure 19. Performance comparison of the detection model at various times.
Figure 20. Strawberry picking experiment site (with strawberry models of different sizes hanging on both sides).
Figure 21. Snapshots of the joint arm grasping a strawberry. (a) The joint arm is lowered and aligned with the target; (b) the joint arm rises; (c) the joint arm bends; (d) the gripper cuts the stem; (e) the gripper clamps the stem; (f) the arm is lowered; (g) the gripper releases the stem; (h) the mobile platform moves to the next target. Images (i–l) illustrate the lifting and bending of the arm toward the strawberry stem (i, j), the gripping action (k), and the arm in a lowered position (l).
Full article ">
18 pages, 7770 KiB  
Article
Vision-Based Localization Method for Picking Points in Tea-Harvesting Robots
by Jingwen Yang, Xin Li, Xin Wang, Leiyang Fu and Shaowen Li
Sensors 2024, 24(21), 6777; https://doi.org/10.3390/s24216777 - 22 Oct 2024
Cited by 1 | Viewed by 897
Abstract
To address the issue of accurately recognizing and locating picking points for tea-picking robots in unstructured environments, a visual positioning method based on RGB-D information fusion is proposed. First, an improved T-YOLOv8n model is proposed, which improves detection and segmentation performance across multi-scale scenes through network architecture and loss function optimizations. In the far-view test set, the detection accuracy of tea buds reached 80.8%; for the near-view test set, the mAP0.5 values for tea stem detection in bounding boxes and masks reached 93.6% and 93.7%, respectively, showing improvements of 9.1% and 14.1% over the baseline model. Secondly, a layered visual servoing strategy for near and far views was designed, integrating the RealSense depth sensor with robotic arm cooperation. This strategy identifies the region of interest (ROI) of the tea bud in the far view and fuses the stem mask information with depth data to calculate the three-dimensional coordinates of the picking point. The experiments show that this method achieved a picking point localization success rate of 86.4%, with a mean depth measurement error of 1.43 mm. The proposed method improves the accuracy of picking point recognition and reduces depth information fluctuations, providing technical support for the intelligent and rapid picking of premium tea.
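The depth-fusion step reduces to back-projecting a pixel plus its depth into camera-frame 3D coordinates via the pinhole model. A minimal sketch follows; the intrinsics are placeholder values, not the RealSense calibration from the paper, and the median filtering over the mask is one simple way to suppress depth noise.

```python
# Sketch of picking-point localization: median-filter depth over a stem mask,
# then deproject the mask centroid into camera-frame 3D coordinates.
import numpy as np

FX, FY = 615.0, 615.0        # focal lengths (pixels), placeholder values
CX, CY = 320.0, 240.0        # principal point (pixels), placeholder values

def deproject(u, v, depth_m):
    """Pixel (u, v) + depth (meters) -> (x, y, z) in the camera frame."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

def picking_point(mask, depth_image):
    vs, us = np.nonzero(mask)
    z = np.median(depth_image[vs, us])   # robust depth over the stem mask
    return deproject(us.mean(), vs.mean(), z)

mask = np.zeros((480, 640), dtype=bool); mask[200:210, 300:306] = True
depth = np.full((480, 640), 0.35)        # 35 cm, uniform toy depth image
print(picking_point(mask, depth))
```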
Figure 1. Picking operation point positioning principle.
Figure 2. Tea samples in different scenarios. (a) Complex lighting: different lighting conditions (e.g., sunny and cloudy days); (b) multi-angle observation: images captured from 30° and 60° angles; (c) background complexity: varying levels of background clutter.
Figure 3. Construction of MFN combined test sets: (a) test set A; (b) test set B.
Figure 4. T-YOLOv8n model framework diagram (improvements are marked in color).
Figure 5. SGE working principle diagram.
Figure 6. MSConv convolution structure.
Figure 7. PIoU loss function.
Figure 8. Example of the layered visual spatial localization strategy for picking points in near and far views.
Figure 9. Flowchart for spatial localization of picking points.
Figure 10. Model training loss curve.
Figure 11. Detection results of tea buds (tea label) in test set A. (a) Strong light; (b) rainy; (c) normal light + 30° angle; (d) normal light + 45° angle; (e) normal light + 60° angle. Note: white boxes indicate missed detections, and yellow boxes indicate false detections.
Figure 12. Segmentation effects for stem recognition in test set B. (a) Original image and annotation information (the area within the white circle is a magnified view of the region outlined by the square); (b) YOLOv8n-seg; (c) local enlargement of the rectangular box in (b); (d) T-YOLOv8n; (e) local enlargement of the rectangular box in (d).
Figure 13. Picking point localization experiments based on the layered near- and far-view strategy. (a) Far-view point; (b) near-view point.
Figure 14. Comparison experiment between single-stage vision and two-stage vision. (a) Single-stage visual scheme; (b) two-stage visual scheme.
Figure 15. Depth error distribution.
Full article ">
26 pages, 2848 KiB  
Article
Scheduling Cluster Tools with Multi-Space Process Modules and a Multi-Finger-Arm Robot in Wafer Fabrication Subject to Wafer Residency Time Constraints
by Lei Gu, Naiqi Wu, Yan Qiao, Siwei Zhang and Tan Li
Appl. Sci. 2024, 14(20), 9490; https://doi.org/10.3390/app14209490 - 17 Oct 2024
Viewed by 645
Abstract
To increase productivity, more sophisticated cluster tools are developed. One way to achieve this is to increase both the number of spaces in a process module (PM) and the number of fingers on a robot arm, leading to a cluster tool with multi-space PMs and a multi-finger-arm robot. This paper discusses the scheduling problem of cluster tools with four-space PMs and a four-finger-arm robot, a typical configuration adopted in modern fabs. Of the two arms in such a tool, one is used as a clean arm while the other is used as a dirty one; in this way, wafer quality can be improved. However, scheduling such cluster tools to satisfy the residency time constraints is very challenging, and there is no prior research report on this issue. This article conducts an in-depth analysis of steady-state scheduling for this type of cluster tool to explore the effect of different scheduling strategies. Based on the derived properties, four robot task sequences are presented as scheduling strategies. With them, four linear programming models are developed to optimize the cycle time of the system and find feasible schedules. The performance of these strategies depends on the activity parameters. Experiments are carried out to test the effect of different parameters on the performance of the strategies. The results show that, given a group of parameters, one can apply all the strategies and choose the best result obtained by any of them.
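The general shape of such a cycle-time linear program can be sketched compactly. Below is a toy formulation, not the paper's model: in steady state each wafer's post-processing wait in module i is w_i = T - p_i - a_i, bounded by the residency limit q_i, and the cycle time T must also cover the robot's task time; all numbers are illustrative.

```python
# Toy LP sketch of cycle-time minimization under wafer residency limits,
# solved with scipy's linprog. Variables x = [T, w1, w2, w3]; minimize T.
import numpy as np
from scipy.optimize import linprog

p = np.array([60.0, 90.0, 75.0])   # processing times per PM (s)
a = np.array([12.0, 10.0, 14.0])   # robot activity time charged to each PM (s)
q = np.array([35.0, 20.0, 30.0])   # residency limits after processing (s)
robot_cycle = 80.0                 # total robot task time per cycle (s)

c = [1.0, 0.0, 0.0, 0.0]                          # objective: minimize T
A_eq = np.hstack([-np.ones((3, 1)), np.eye(3)])   # w_i - T = -(p_i + a_i)
b_eq = -(p + a)
bounds = [(robot_cycle, None)] + [(0.0, qi) for qi in q]  # T >= robot time, 0 <= w_i <= q_i

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("feasible:", res.success, "| optimal cycle time:", res.x[0] if res.success else None)
```

If the per-module feasibility windows [p_i + a_i, p_i + a_i + q_i] fail to intersect, the LP is infeasible, which is exactly how such models detect that no schedule can satisfy the residency constraints.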
Figure 1. A cluster tool with single-space PMs.
Figure 2. A cluster tool with four-space PMs.
Figure 3. Description of robot movements under the OBS strategy.
Figure 4. Description of robot movements under the OHTS strategy.
Figure 5. Description of robot movements under the TBS strategy.
Figure 6. Description of robot movements under the THTS strategy.
Figure 7. The cycle time varies with α1.
Figure 8. The cycle time varies with α2.
Figure 9. The cycle time varies with α3.
Figure 10. The cycle time varies with υ.
Full article ">
28 pages, 2513 KiB  
Article
ROS Gateway: Enhancing ROS Availability across Multiple Network Environments
by Byoung-Youl Song and Hoon Choi
Sensors 2024, 24(19), 6297; https://doi.org/10.3390/s24196297 - 29 Sep 2024
Viewed by 802
Abstract
As the adoption of large-scale model-based AI grows, the field of robotics is undergoing significant changes. The emergence of cloud robotics, where advanced tasks are offloaded to fog or cloud servers, is gaining attention. However, the widely used Robot Operating System (ROS) does not support communication between robot software across different networks. This paper introduces ROS Gateway, a middleware designed to improve the usability and extend the communication range of ROS in multi-network environments, which is important for processing sensor data in cloud robotics. We detail its structure, protocols, and algorithms, highlighting improvements over traditional ROS configurations. The ROS Gateway efficiently handles high-volume data from advanced sensors such as depth cameras and LiDAR, ensuring reliable transmission. Based on the rosbridge protocol and implemented in Python 3, ROS Gateway is compatible with rosbridge-based tools and runs on both x86 and ARM-based Linux environments. Our experiments show that the ROS Gateway significantly improves performance metrics such as topic rate and delay compared to standard ROS setups. We also provide predictive formulas for topic receive rates to guide the design and deployment of robotic applications using ROS Gateway, supporting performance estimation and system optimization. These enhancements are essential for developing responsive and intelligent robotic systems in dynamic environments.
(This article belongs to the Section Sensors and Robotics)
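Since ROS Gateway builds on the rosbridge protocol, a gateway of this kind can be pictured as a relay of rosbridge JSON operations between networks. The sketch below is a minimal, hypothetical illustration using the third-party websockets package; the endpoint addresses, topic, and single-coroutine relay structure are assumptions, and it omits the raw transport mode, reconnection handling, and the worker processes the paper describes.

```python
# A minimal sketch of a rosbridge-protocol-style relay between two networks.
# The endpoint URLs are placeholders; this is not the ROS Gateway implementation.
import asyncio
import json
import websockets  # third-party: pip install websockets

ROBOT_SIDE = "ws://192.168.0.10:9090"   # rosbridge server on the robot subnet (assumed)
CLOUD_SIDE = "ws://10.0.0.20:9090"      # rosbridge server on the cloud subnet (assumed)

async def relay_topic(topic: str, msg_type: str) -> None:
    async with websockets.connect(ROBOT_SIDE) as src, \
               websockets.connect(CLOUD_SIDE) as dst:
        # Subscribe on the robot side and advertise on the cloud side,
        # using rosbridge v2 protocol operations.
        await src.send(json.dumps({"op": "subscribe", "topic": topic,
                                   "type": msg_type}))
        await dst.send(json.dumps({"op": "advertise", "topic": topic,
                                   "type": msg_type}))
        async for raw in src:
            msg = json.loads(raw)
            if msg.get("op") == "publish" and msg.get("topic") == topic:
                await dst.send(raw)  # forward the serialized message as-is

if __name__ == "__main__":
    asyncio.run(relay_topic("/scan", "sensor_msgs/LaserScan"))
```

Forwarding the serialized JSON without re-encoding keeps per-message overhead low, which matters at the high publishing rates of depth cameras and LiDAR discussed in the paper.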
Show Figures

Figure 1: Example of a ROS-based robotic application in a cloud or fog configuration. Subnet α connects the hosts that constitute the cloud or fog computing resources; subnet β connects the robots, or is the local host network within a robot; subnet γ connects a multi-host control system at a remote location and contains no robots.
Figure 2: The Gateway architecture. The Client Worker is activated when a configuration enabling connectivity to a gateway in a different network is applied. Server Workers start as a single process that receives commands from external sources; a new process is then assigned for each client that establishes a connection.
Figure 3: Comparison of ROS topic transmission rates across configurations: ROS on a single device (ros2-localhost) and on a local subnet (ros2-subnet), the Gateway on separate networks (Gateway-pub-json, Gateway-sub-json, Gateway-sub-raw), and rosbridge on separate networks (rosbridge-pub-json, rosbridge-sub-json, rosbridge-sub-raw). The best performance for each configuration is plotted, with error bars showing the worst; higher is better.
Figure 4: Comparison of ROS topic transmission delay for the same configurations, plotted on a log scale. The minimum delay for each configuration is plotted, with error bars showing the maximum; lower is better. Horizontal dashed lines mark the real-time limit delay at each occurrence rate.
Figure 5: Observed topic rates by Gateway configuration and sensor publishing rate. The best performance for each configuration is plotted, with error bars showing the worst; higher is better.
Figure 6: Observed topic delays by Gateway configuration and sensor publishing rate, plotted on a log scale. The minimum delay for each configuration is plotted, with error bars showing the maximum; lower is better. Horizontal dashed lines mark the real-time limit delay at each occurrence rate.
Figure A1: Topic task sequence diagram.
Figure A2: Service task sequence diagram.
Figure A3: Action task sequence diagram.
21 pages, 7474 KiB  
Review
Balancing Accuracy and Efficiency: The Status and Challenges of Agricultural Multi-Arm Harvesting Robot Research
by Jiawei Chen, Wei Ma, Hongsen Liao, Junhua Lu, Yuxin Yang, Jianping Qian and Lijia Xu
Agronomy 2024, 14(10), 2209; https://doi.org/10.3390/agronomy14102209 - 25 Sep 2024
Viewed by 1305
Abstract
As the global fruit-growing area continues to expand and population aging intensifies, fruit and vegetable production is constrained by labor shortages and high labor costs. Single-arm harvesting robots are inefficient, so research on multi-arm harvesting robots, which can balance harvesting accuracy and efficiency, has become a hot topic. This paper summarizes the performance of multi-arm harvesting robots in indoor and outdoor environments in terms of automatic navigation technology, fruit and vegetable identification and localization, multi-arm workspace optimization, and multi-arm harvesting task planning, and analyzes their advantages and challenges in practical applications. The results show that the main challenges hindering wide-scale application are the lack of automatic field navigation for multi-arm harvesting robots, low harvesting rates in unstructured environments, and the complexity of multi-arm task-planning algorithms. Future studies should focus on building standardized growing environments, to control the amount of information the robots must acquire, and on optimizing multi-arm control strategies to address these challenges; both are important directions for research on multi-arm harvesting robots. Full article
(This article belongs to the Section Precision and Digital Agriculture)
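Task planning is where much of the algorithmic complexity the review points to resides. As a deliberately simplified, hypothetical stand-in for the optimization-based planners surveyed, the toy sketch below greedily assigns each detected fruit to whichever arm would reach it soonest; the fruit coordinates, arm base positions, and travel speed are made-up values.

```python
# A toy sketch of multi-arm harvesting task planning: greedily assign each
# detected fruit to the arm that can reach it soonest. This is a simplified
# illustration, not a method from any of the surveyed papers.
import math

fruits = [(0.2, 1.1, 1.4), (0.5, 0.9, 1.6), (-0.3, 1.0, 1.2), (0.1, 1.2, 1.8)]
arms = {"left": (-0.4, 0.0, 1.0), "right": (0.4, 0.0, 1.0)}  # base positions (m), assumed
SPEED = 0.5  # end-effector travel speed (m/s), assumed

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Each arm tracks when it becomes free and where its end-effector is.
state = {name: {"t": 0.0, "pos": pos} for name, pos in arms.items()}
plan = {name: [] for name in arms}

for fruit in sorted(fruits, key=lambda f: f[2], reverse=True):  # pick top-down
    # Choose the arm that would finish reaching this fruit earliest.
    best = min(state, key=lambda n: state[n]["t"] + dist(state[n]["pos"], fruit) / SPEED)
    state[best]["t"] += dist(state[best]["pos"], fruit) / SPEED
    state[best]["pos"] = fruit
    plan[best].append(fruit)

print(plan)
```

A greedy assignment like this ignores arm-arm collision zones and stem approach angles, which is precisely why the surveyed systems resort to heavier machinery such as genetic algorithms for sequencing.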
Show Figures

Figure 1: Two types of harvesting robots: first type (a,b); second type (c,d). (a) Tomato harvesting robot [13]; (b) aubergine harvesting robot [8]; (c) apple harvesting robot [12]; (d) mushroom harvesting robot [15].
Figure 2: Different navigation techniques: (a) interline navigation [24]; (b) boundary following [25]; (c) SLAM [26]; (d) deep learning navigation [28].
Figure 3: Different types of occlusion: (a) no occlusion; (b) fruit occlusion; (c) leaf occlusion; (d) branch occlusion.
Figure 4: Soft fruit stem localization: (a) tomato fruit stem position and pose information [83], with arrows depicting the tomato pose; (b) strawberry fruit stem harvesting point [84]; (c) grape fruit stem segmentation [85].
Figure 5: Harvesting steps for a kiwifruit trellis [22]: (a) Cartesian robotic system; (b) articulated robotic system.
Figure 6: Multi-arm robot workspace analysis: (a) articulated-system robot [7]; (b) Cartesian-system robot [12].
Figure 7: Three multi-arm harvesting sequences: (a) strawberry [11]; (b) mushroom [15]; (c) grape [7].
21 pages, 5473 KiB  
Article
Automatic Optimal Robotic Base Placement for Collaborative Industrial Robotic Car Painting
by Khalil Zbiss, Amal Kacem, Mario Santillo and Alireza Mohammadi
Appl. Sci. 2024, 14(19), 8614; https://doi.org/10.3390/app14198614 - 24 Sep 2024
Viewed by 773
Abstract
This paper investigates the problem of optimal base placement in collaborative robotic car painting. The objective is to find the optimal fixed base positions of a collection of given articulated robotic arms on the factory floor/ceiling such that vehicle paint coverage is maximized while the possibility of robot collisions is minimized. Leveraging the inherent two-dimensional geometric features of robotic car painting, we construct two types of cost functions that formally capture the notions of paint coverage maximization and collision minimization. Using these cost functions, we formulate a multi-objective optimization problem, which can be readily solved by any standard multi-objective optimizer. The resulting optimal base placement algorithm decouples base placement from motion/trajectory planning. In particular, our computationally efficient algorithm requires no information from motion/trajectory planners, either a priori or during base placement computations. Rather, it offers a hierarchical solution in the sense that its results can be utilized within already available robotic painting motion/trajectory planners. The proposed solution's effectiveness is demonstrated through simulation results of multiple industrial robotic arms collaboratively painting a Ford F-150 truck. Full article
(This article belongs to the Special Issue Artificial Intelligence and Its Application in Robotics)
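To make the two-cost construction concrete, here is a heavily simplified, hypothetical sketch: the projected robot workspaces are approximated by equal discs rather than the minimum volume enclosing ellipsoids used in the paper, the vehicle is reduced to a sampled 2D outline, and the multi-objective problem is scalarized with a weighted sum instead of a true multi-objective optimizer. All geometric values are placeholders.

```python
# A simplified sketch of the two-cost base placement idea. Discs stand in
# for the paper's projected ellipsoids; all numbers are invented.
import math
import numpy as np
from scipy.optimize import differential_evolution

REACH = 2.0                                   # projected workspace radius (m), assumed
body = np.array([[x, 0.9] for x in np.linspace(-2.5, 2.5, 30)] +
                [[x, -0.9] for x in np.linspace(-2.5, 2.5, 30)])  # crude car outline

def disc_overlap(c1, c2, r=REACH):
    """Intersection area of two equal discs (stand-in for ellipse overlap)."""
    d = float(np.hypot(*(c1 - c2)))
    if d >= 2 * r:
        return 0.0
    return 2 * r * r * math.acos(d / (2 * r)) - 0.5 * d * math.sqrt(4 * r * r - d * d)

def cost(flat, w=0.5, n_robots=3):
    bases = flat.reshape(n_robots, 2)
    # Cost 1: total pairwise workspace overlap, a proxy for collision risk.
    c1 = sum(disc_overlap(bases[i], bases[j])
             for i in range(n_robots) for j in range(i + 1, n_robots))
    # Cost 2: fraction of body sample points outside every robot's reach.
    dists = np.linalg.norm(body[:, None, :] - bases[None, :, :], axis=2)
    c2 = float(np.mean(dists.min(axis=1) > REACH))
    return w * c1 / (math.pi * REACH ** 2) + (1 - w) * c2

res = differential_evolution(cost, bounds=[(-4, 4), (-3, 3)] * 3, seed=0)
print("optimal bases:", res.x.reshape(3, 2), "cost:", res.fun)
```

Sweeping the weight w recovers a set of trade-off solutions, mimicking the Pareto front that a standard multi-objective optimizer would deliver on the paper's exact cost functions.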
Show Figures

Figure 1: Multi-robot car-painting base placement problem. The base positions directly impact the quality of vehicle paint coverage and the possibility of robot collisions during painting: robots placed on golden diamonds are more prone to collisions, while robots placed on red diamonds cannot ensure complete vehicle paint coverage.
Figure 2: The kinematic structure of an example RRR articulated arm (ABB IRB 4600 industrial manipulator [31]), where the body, shoulder, and elbow joints are all revolute. Unit lengths are in millimeters.
Figure 3: The CAD model of an example vehicle (Ford Motor Company F-150 truck).
Figure 4: Algorithm 1 generates a 3D point cloud and a minimum volume enclosing ellipsoid (MVEE) for each robotic arm.
Figure 5: Intuition behind the first cost function. The ellipsoids are the MVEEs that closely fit the reachable workspaces of the robot forearms; the larger the intersection volumes between the ellipsoids, the higher the possibility of robot collision during painting, and larger 3D intersection volumes correspond to larger overlap areas of their projections on the factory floor.
Figure 6: The planar ellipse overlap areas define the first cost function, which quantifies the possibility of collision between the robotic arms during painting.
Figure 7: The algorithm proposed in [51] distinguishes a variety of ellipse intersection cases and computes the overlap area of any two general ellipses without resorting to proxy curves.
Figure 8: Boundary detection in the vehicle's CAD model and creation of the grid map representations.
Figure 9: The overall proposed solution to the multi-robot car-painting base placement problem.
Figure 10: Snapshots of multi-robot car painting (a homogeneous team of robots) using the optimal base positions obtained from the base placement algorithm.
Figure 11: Snapshots of multi-robot car painting (a homogeneous team of robots) using non-optimal base positions.
Figure 12: Normalized manipulability of the robotic arms during collaborative vehicle painting: (left) optimal and (right) non-optimal base positions. The percentage on each histogram bar gives the share of the painting duration for which the robot's normalized manipulability fell within that bar's interval: a value of p0% on a bar over [σ1, σ2] for the i-th robot (1 ≤ i ≤ 3) means that robot's normalized manipulability lay in [σ1, σ2] for p0% of the painting task.
Figure 13: Elbow singular configurations of (left) ABB IRB 4600 and (right) FANUC R-2000iC robotic arms.
Figure 14: Snapshots of multi-robot car painting (a heterogeneous team of robots) using the optimal base positions obtained from the base placement algorithm.
Figure 15: Snapshots of multi-robot car painting (a heterogeneous team of robots) using non-optimal base positions.