Robotics, Volume 13, Issue 9 (September 2024) – 16 articles

Cover Story: This paper presents a real-time re-planning control system for autonomous quadrotors navigating uncertain environments. The framework integrates a modified PX4 Autopilot with a Raspberry Pi 5 companion computer to execute on-the-fly trajectory adjustments. Utilizing minimum-snap trajectory generation, the system ensures efficient obstacle avoidance by leveraging the differential flatness property of quadrotors. The simulation results validate the algorithm, and the real-world hardware tests demonstrate successful collision avoidance, showcasing the practical applicability of this approach for autonomous flights.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
19 pages, 6078 KiB  
Article
Using a Guidance Virtual Fixture on a Soft Robot to Improve Ureteroscopy Procedures in a Phantom
by Chun-Feng Lai, Elena De Momi, Giancarlo Ferrigno and Jenny Dankelman
Robotics 2024, 13(9), 140; https://doi.org/10.3390/robotics13090140 - 18 Sep 2024
Viewed by 901
Abstract
Manipulating a flexible ureteroscope is difficult, due to its bendable body and hand–eye coordination problems, especially when exploring the lower pole of the kidney. Though robotic interventions have been adopted in various clinical scenarios, they are rarely used in ureteroscopy. This study proposes a teleoperation system consisting of a soft robotic endoscope together with a Guidance Virtual Fixture (GVF) to help users explore the kidney’s lower pole. The soft robotic arm was a cable-driven, 3D-printed design with a helicoid structure. The GVF was dynamically constructed using video streams from an endoscopic camera. With a haptic controller, the GVF can provide haptic feedback to guide the users in following a trajectory. In the user study, participants were asked to follow trajectories while the soft robotic arm was in a retroflex posture. The results suggest that the GVF can reduce errors in the trajectory tracking tasks once users receive proper training and gain more experience. Based on the NASA Task Load Index questionnaires, most participants preferred having the GVF when manipulating the robotic arm. In conclusion, the results demonstrate the benefits and potential of using a robotic arm with a GVF. More research is needed to investigate the effectiveness of the GVF and the robotic endoscope in ureteroscopic procedures. Full article
(This article belongs to the Section Soft Robotics)
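The abstract's Guidance Virtual Fixture maps an image-space target vector to a guiding force on the haptic controller through a spring–damper model $F(\cdot)$. A minimal sketch of that mapping is given below; the function name, gain values, and scaling matrix are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gvf_force(p_i, v_c, K, k_spring, xi):
    """Spring-damper guidance force from an image-space target vector.

    p_i      : 2D vector from the image center to the nearest route point (px)
    v_c      : controller tip velocity projected on the x-y plane (m/s)
    K        : 2x2 image-to-controller scaling matrix
    k_spring : spring constant (N/m); xi : damping constant (N*s/m)
    """
    p_c = np.linalg.inv(K) @ p_i      # map the guidance vector into controller space
    return k_spring * p_c - xi * v_c  # attract toward the route, damp user motion

# Example: route point 12 px right and 5 px up of center, controller at rest
K = np.diag([400.0, 400.0])           # px per meter, an illustrative scaling
print(gvf_force(np.array([12.0, 5.0]), np.zeros(2), K, 80.0, 2.0))
```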
Figures:
Figure 1: The robotic endoscope system, ATLAScope, used to simulate a robotized ureteroscope. (1) Two stepper motors; (2) two pulleys; (3) cable tunnels to guide the driving cables; (4) soft robotic arm with the HelicoFlex design, with a total length of 90 mm and a steerable segment of 70 mm; (5) miniaturized endoscopic camera and the two bending directions of the robotic arm.
Figure 2: The teleoperation system with the GVF consists of ATLAScope, a haptic controller, and a communication channel. The user commands the haptic controller with $\dot{u}_c$, the velocity of the tip of the haptic controller. This movement is translated into $\dot{y}_i$, the desired velocity of the target in the image space. The motor velocities $\dot{\theta}$ in the actuation space are then determined by the Moore–Penrose inverse of the model-free Jacobian matrix $J_{free}^{\dagger}$. After the motors move the tip of the endoscopic camera to a new position $t$, the camera captures a new image. The Segmentation and Target Detection module processes this image and returns a new target vector $p_i$, the shortest vector from the center of the image to the route. Within the Virtual Fixture module, this target vector is translated into a force $f_c$ by the spring–damper model $F(\cdot)$ and exerted on the user. $K$, $k$, and $\xi$ are the working-space transformation matrix, the spring constant, and the damping constant, respectively. $\Omega$ denotes a coordinate space, and its superscripts $C$, $I$, $A$, and $E$ stand for controller, image, actuation, and end-effector, respectively.
Figure 3: GVF coordinate transformation between the image space $\Omega^I$ (left) and the controller space $\Omega^C$ (right) via the scaling transformation matrix $K$. Both the teleoperated manipulation and the GVF rely on the information in $\Omega^I$. To link $\Omega^I$ with $\Omega^C$, $u_c$ in $\Omega^C$ is projected onto the $x$–$y$ plane to form $u_m$. Using the space transformation matrix $K$, $u_m$ is transformed into the desired target movement $y_i$ in $\Omega^I$. Conversely, the guidance vector $p_i$ created by the GVF in $\Omega^I$ can be transformed into $\Omega^C$ as $p_c$ using $K^{-1}$. In the top figure, $R$ is the set of two-dimensional vectors of the segmented route. $c$, $r_s$, and $p_i$ are the center of the image, the closest point in $R$ to $c$, and the guidance vector, respectively.
Figure 4: Experimental set-up with the flexible arm bent in a retroflex posture. (1) Soft robotic arm; (2) 3D-printed fixture mold to restrict the movement of the soft robotic arm; (3) tip of the robotic arm, equipped with a miniaturized endoscopic camera, in a retroflex posture; (4) target plane with a triangle or oval route; (5) the two designed routes and their dimensions.
Figure 5: Flow diagram of the user study protocol. After the first training session, the participants are divided into two groups (Group A and Group B). Each group has two sets of runs: one set of Control tasks (Control) and one set of Guidance Virtual Fixture tasks (GVF-on). Within each set there are two routes (Oval Route and Triangle Route) that participants repeated five times, after which they filled in one NASA TLX Questionnaire. Finally, all participants filled in a Comparison Questionnaire. A crossover group is highlighted within the dashed line, and the colors and dashed lines indicate the groups of data shown in the following figures.
Figure 6: Box-and-whisker plots comparing overall results for the three performance metrics: Completion Time (CT), Mean Absolute Error (MAE), and Max Error (ME). Blue represents the Control set; orange represents the GVF-on set. Boxes with slashes are the results for the Triangle Route. Hollow circles are outliers, and the black horizontal lines inside the boxes indicate the medians.
Figure 7: Results of the crossover groups in each run. The box-and-whisker plots show the three performance metrics (Completion Time, Mean Absolute Error, Max Error) per run for the Oval Route (upper row) and the Triangle Route (lower row). Blue: Control; orange: GVF-on. Darker tones: Group A; lighter tones: Group B. (*) $p < 0.05$. (a) First crossover group. (b) Second crossover group. (c) First crossover group. (d) Second crossover group.
Figure 8: Results of each crossover group, compared along two dimensions: within its crossover group, and between Control and GVF-on. Blue: Control; orange: GVF-on. Darker tones: Group A; lighter tones: Group B. Boxes without slashes: Oval Route; boxes with slashes: Triangle Route. (*) $p < 0.05$; (**) $p < 0.01$.
Figure 9: Bar plots of the NASA TLX Questionnaire results. Left: all participants; middle: Group A; right: Group B. The bars and error bars show each TLX index's mean and standard error, respectively. Note: scales are converted to percentages.
Figure 10: Bar plot of the Comparison Questionnaire results, showing participants' preferences between the two tasks with respect to the six task-load indexes and their general preference between the two sets of tasks.
18 pages, 1078 KiB  
Article
Non-Orthogonal Serret–Frenet Parametrization Applied to Path Following of B-Spline Curves by a Mobile Manipulator
by Filip Dyba and Marco Frego
Robotics 2024, 13(9), 139; https://doi.org/10.3390/robotics13090139 - 12 Sep 2024
Viewed by 799
Abstract
A tool for path following for a mobile manipulator is herein presented. The control algorithm is obtained by projecting a local frame associated with the robot onto the desired path, thus obtaining a non-orthogonal moving frame. The Serret–Frenet frame moving along the curve is considered as a reference. A curve resulting from the control points of a B-spline in 2D or 3D is investigated as the desired path. It is used to show how the geometric continuity of the path has an impact on the performance of the robot in terms of undesired force spikes. This can be understood by looking at the curvature and, in 3D, at the torsion of the path. These unwanted effects vanish and better performance is achieved thanks to the change of the B-spline order. The theoretical results are confirmed by the simulation study for a mobile manipulator consisting of a non-holonomic wheeled base coupled with a holonomic robotic arm with three degrees of freedom (rotational and prismatic). Full article
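Since the paper ties force spikes to the curvature (and, in 3D, torsion) continuity of the B-spline path, a short sketch can make the effect concrete: evaluating planar curvature for several spline orders over the same control polygon. The control points, clamped-knot construction, and use of SciPy are illustrative assumptions, not the authors' code; the order-2 (polygonal) case is omitted because its curvature degenerates to spikes at the knots.

```python
import numpy as np
from scipy.interpolate import BSpline

ctrl = np.array([[0, 0], [1, 2], [3, 3], [5, 1], [7, 2], [8, 0]], dtype=float)

def max_curvature(k):
    """Max planar curvature of a clamped B-spline of order k (degree k-1)."""
    n = len(ctrl)
    # clamped uniform knot vector: k repeated end knots, n-k interior knots
    t = np.concatenate([np.zeros(k), np.linspace(0, 1, n - k + 2)[1:-1], np.ones(k)])
    spl = BSpline(t, ctrl, k - 1)
    u = np.linspace(0, 1, 500)
    d1, d2 = spl.derivative(1)(u), spl.derivative(2)(u)
    # kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
    kappa = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) / np.linalg.norm(d1, axis=1) ** 3
    return kappa.max()

for k in (3, 4, 5):
    # higher order raises the continuity class of the curvature profile
    print(f"order {k}: max curvature = {max_curvature(k):.2f}")
```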
Figures:
Figure 1: Schematic view of the mobile manipulator considered in the simulation study.
Figure 2: Non-orthogonal projection in $\mathbb{R}^3$ space.
Figure 3: Example B-spline curves: (a) B-spline curves of various orders $k$ for the same set of control points and knot points (denoted with circles); (b) curvature of the respective B-spline curves (blue: $k=2$; red: $k=3$; magenta: $k=4$; cyan: $k=5$; black: control points).
Figure 4: Control system structure.
Figure 5: The desired path definition: (a) path (blue solid: the desired path; red dashed: the performed path); (b) local frame evolution.
Figure 6: Reference signals: (a) desired position with respect to the path, $d_d$; (b) reference velocity profiles, $z_{ref}$, generated by the kinematic controller (26).
Figure 7: Errors: (a) path-following errors, $e_d$; (b) velocity-profile-following errors, $e_z$.
Figure 8: Geometric parameters: (a) curvature, $\kappa$; (b) torsion, $\tau$.
Figure 9: Control commands (27): (a) peaks in the control signals; (b) zoomed view.
Figure 10: The desired path definition: (a) path (blue solid: the desired path; red dashed: the performed path); (b) local frame evolution.
Figure 11: Geometric parameters: (a) curvature, $\kappa$; (b) torsion, $\tau$.
Figure 12: Visualization of the mobile manipulator performance: the mobile manipulator mimics the behaviour of the Serret–Frenet frame along the desired path.
Figure 13: Reference signals: (a) desired position with respect to the path, $d_d$; (b) reference velocity profiles, $z_{ref}$, generated by the kinematic controller (26).
Figure 14: Errors: (a) path-following errors, $e_d$; (b) velocity-profile-following errors, $e_z$.
Figure 15: Control signals (27): (a) mobile platform wheels and rotational joints; (b) prismatic joint.
19 pages, 27719 KiB  
Article
Assistive Control through a Hapto-Visual Digital Twin for a Master Device Used for Didactic Telesurgery
by Daniel Pacheco Quiñones, Daniela Maffiodo and Med Amine Laribi
Robotics 2024, 13(9), 138; https://doi.org/10.3390/robotics13090138 - 11 Sep 2024
Viewed by 700
Abstract
This article explores the integration of a hapto-visual digital twin on a master device used for bilateral teleoperation. The device, known as a quasi-spherical parallel manipulator, is currently employed for remote center of motion control in teleoperated minimally invasive surgery. After providing detailed insights into the device’s kinematics, including its geometric configuration, Jacobian, and reachable workspace, the paper illustrates the overall control system, encompassing both hardware and software components. The article describes how a digital twin, which implements a haptic assistive control and a visually enhanced representation of the device, was integrated into the system. The digital twin was then tested with the device: in the experiments, one “student” end-user must follow a predefined “teacher” trajectory. Preliminary results demonstrate that the overall system is a good starting point for didactic telesurgery operations. The control action, yet to be optimized and tested on more subjects, seems to deliver satisfactory performance and accuracy. Full article
(This article belongs to the Special Issue Digital Twin-Based Human–Robot Collaborative Systems)
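The haptic assistive control admits the student's motion inside a moving region around the teacher trajectory; one variant shown in the paper's figures is an elliptic admitted area whose major axis follows the teacher's direction of motion. A minimal membership test under assumed axis lengths might look like the following; the function and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

def in_elliptic_admitted_area(q_stud, q_teach_next, dq_teach, a=3.0, b=1.0):
    """True if the student pose (psi, theta) lies inside an ellipse centered
    on the next teacher sample, with the major axis aligned to the teacher
    motion direction dq_teach. Axis lengths a, b (degrees) are assumptions."""
    phi = np.arctan2(dq_teach[1], dq_teach[0])   # major-axis orientation
    c, s = np.cos(phi), np.sin(phi)
    R = np.array([[c, s], [-s, c]])              # world -> ellipse frame
    e = R @ (np.asarray(q_stud, float) - np.asarray(q_teach_next, float))
    return (e[0] / a) ** 2 + (e[1] / b) ** 2 <= 1.0

# Student 1.5 deg behind the teacher along the motion direction: admitted
print(in_elliptic_admitted_area((8.5, 5.0), (10.0, 5.0), (1.0, 0.0)))
```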
Figures:
Figure 1: (a) The 3-RRR spherical parallel manipulator. (b) The 2RRR-1URU quasi-spherical parallel manipulator (qSPM): the URU leg is labeled as leg A, and the RRR legs as legs B and C.
Figure 2: (a) Schematic representation of the nomenclature of the qSPM device. (b) Operative RCM workspace of the bilaterally teleoperated system. The platform's orientation $r_E$ is transmitted to the slave's instrumented tool through a proper rotation matrix $R_T$.
Figure 3: (a) Reachable workspace inside the Euler space for working mode $m_3$. (b) A sectioned view of the reachable workspace on the $(\psi, \theta)$ plane with $\phi_r = -40^\circ$: red and purple areas correspond, respectively, to $S_J$ and $C_V$ within the operative workspace $W_{op}$, outlined in black; green dots denote the workspace center $w_c$. A detailed discussion of the workspaces can be found in [25].
Figure 4: (a) Restricted neighborhood $\hat{\mathcal{D}}(k)$, Equation (9b), colored according to Equation (11): white, grey, and black areas correspond, respectively, to the force-related admitted area $\mathcal{A}_F(k)$, Equations (11a)–(11c). Red points correspond to the point $(\psi_{teach}(k), \theta_{teach}(k))$. (b) Geometrical explanation of the vector $t_E$, Equation (12). The vector is not to scale, and $d(\psi, \theta)$ was enlarged for readability.
Figure 5: (a) Elliptic admitted area $\mathcal{A}_F(k)$ centered at $(\psi_{teach}(k+1), \theta_{teach}(k+1))$ with the major axis parallel to $\Delta q_{teach}$. (b) Sliced admitted area $\mathcal{A}_F(k)$ composed of a reduced circular admitted area of $1^\circ$ span and a circular sector of angular span $\Delta\chi$ with bisection parallel to $\Delta q_{teach}$.
Figure 6: Different poses of the device in the RViz environment. The visual feedback is enriched by the reference frames related to $q_{teach}$ (blue) and $q_{stud}$ (red/green hue), Equation (7).
Figure 7: Schematic architecture of the overall system. The master's software architecture is highlighted in colored areas, with blocks referring to Section 2.5. Since the operating-mode signal $cmd$ acts on every block inside the operation subsystem, it is drawn entering that subsystem as a whole for clarity.
Figure 8: Selected teacher trajectory $q_{teach}$ reported in the Euler angles $(\psi_{r,teach}, \theta_{r,teach}, \phi_{r,teach})$.
Figure 9: Testing setup used for experiments (Ex1), (Ex2), and (Ex3), in which the device is controlled through a ROS environment, as in Section 2.5, and the visual part of the digital twin is generated using RViz, as in Section 2.4.4.
Figure 10: Time-based plots of the following variables: (top) Euler angles associated with $q_{stud}$ (continuous) and $q_{teach}$ (dashed), Equation (7); (middle) their differences $(e_\psi, e_\theta)$, Equation (8); (bottom) distance $d(\psi, \theta)$, Equation (10), in relation to $\delta_{thr}$ and $\delta_{span}$, Equation (11).
Figure 11: Time-based plots of the following variables: (top) differences $(e_\psi, e_\theta)$, Equation (8); (middle-top) distance $d(\psi, \theta)$, Equation (10), in relation to $\delta_{thr}$ and $\delta_{span}$, Equation (11); (middle-bottom) elements of $F_E$, Equation (11), within the operative reference frame; (bottom) elements of $D_E$, Equation (14), within the operative reference frame.
Figure 12: Time-based plots of the following variables: (top) Euler angles associated with $q_{stud}$ (continuous) and $q_{teach}$ (dashed), Equation (7); (middle-top) distance $d(\psi, \theta)$, Equation (10), in relation to $\delta_{thr}$ and $\delta_{span}$, Equation (11); (middle) difference $e_\phi$, Equation (8); (middle-bottom) elements of $M_E$, Equation (13), within the operative reference frame; (bottom) elements of $D_E$, Equation (14), within the operative reference frame.
Figure A1: Other results of (Ex1) (a–c): time-based plots of the following variables: (top) Euler angles associated with $q_{stud}$ (continuous) and $q_{teach}$ (dashed), Equation (7); (middle) their differences $(e_\psi, e_\theta)$, Equation (8); (bottom) distance $d(\psi, \theta)$, Equation (10), in relation to $\delta_{thr}$ and $\delta_{span}$, Equation (11).
Figure A2: Other results of (Ex3) (a–c): time-based plots of the following variables: (top) differences $(e_\psi, e_\theta)$, Equation (8); (middle-top) distance $d(\psi, \theta)$, Equation (10), in relation to $\delta_{thr}$ and $\delta_{span}$, Equation (11); (middle-bottom) elements of $F_E$, Equation (11), within the operative reference frame; (bottom) elements of $D_E$, Equation (14), within the operative reference frame.
24 pages, 2157 KiB  
Article
Harnessing the Power of Large Language Models for Automated Code Generation and Verification
by Unai Antero, Francisco Blanco, Jon Oñativia, Damien Sallé and Basilio Sierra
Robotics 2024, 13(9), 137; https://doi.org/10.3390/robotics13090137 - 11 Sep 2024
Viewed by 2271
Abstract
The cost landscape in advanced technology systems is shifting dramatically. Traditionally, hardware costs took the spotlight, but now, programming and debugging complexities are gaining prominence. This paper explores this shift and its implications, focusing on reducing the cost of programming complex robot behaviors, using the latest innovations from the Generative AI field, such as large language models (LLMs). We leverage finite state machines (FSMs) and LLMs to streamline robot programming while ensuring functionality. The paper addresses LLM challenges related to content quality, emphasizing a two-fold approach using predefined software blocks and a Supervisory LLM. Full article
(This article belongs to the Section AI in Robotics)
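The two-fold approach, in which a generator LLM composes behavior from predefined software blocks and a Supervisory LLM checks the result, can be sketched as a simple review loop. The callables, prompt wording, and "OK" verdict convention below are hypothetical placeholders, not a specific vendor API or the authors' code.

```python
from typing import Callable

def generate_verified_code(task: str,
                           generator: Callable[[str], str],
                           supervisor: Callable[[str, str], str],
                           max_rounds: int = 3) -> str:
    """Generate-then-verify loop: a generator LLM proposes FSM code built
    from predefined blocks, and a Supervisory LLM either approves it or
    returns a critique that is fed back into the next generation round."""
    prompt = f"Compose an FSM from the predefined blocks to: {task}"
    code = generator(prompt)
    for _ in range(max_rounds):
        verdict = supervisor(task, code)          # e.g. "OK" or a critique
        if verdict.strip().upper() == "OK":
            return code
        code = generator(f"{prompt}\nFix these issues:\n{verdict}")
    raise RuntimeError("Supervisory LLM did not approve the generated code")
```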
Figures:
Figure 1: AI performance on benchmarks (relative to human performance) [15].
Figure 2: Steps in the proposed methodology, indicating the relevant large language model (LLM) roles.
Figure 3: Connection to the iTHOR simulated environment.
Figure 4: iTHOR simulated kitchen.
Figure 5: Simulated kitchen showing element locations.
Figure 6: Tomato slice on a plate, as requested by the user.
Figure 7: Real (physical) test device and environment.
Figure 8: LLM reasoning degradation.
27 pages, 9595 KiB  
Article
A Control System Design and Implementation for Autonomous Quadrotors with Real-Time Re-Planning Capability
by Yevhenii Kovryzhenko, Nan Li and Ehsan Taheri
Robotics 2024, 13(9), 136; https://doi.org/10.3390/robotics13090136 - 9 Sep 2024
Viewed by 1368
Abstract
Real-time (re-)planning is crucial for autonomous quadrotors to navigate in uncertain environments where obstacles may be detected and trajectory plans must be adjusted on-the-fly to avoid collision. In this paper, we present a control system design for autonomous quadrotors that has real-time re-planning capability, including the hardware pipeline for the hardware–software integration to realize the proposed real-time re-planning algorithm. The framework is based on a modified version of the PX4 Autopilot and a Raspberry Pi 5 companion computer. The planning algorithm utilizes minimum-snap trajectory generation, taking advantage of the differential flatness property of quadrotors, to realize computationally light, real-time re-planning using an onboard computer. We first verify the control system and the planning algorithm through simulation experiments, followed by implementing and demonstrating the system on hardware using a quadcopter. Full article
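For a single segment with fixed position, velocity, acceleration, and jerk at both endpoints, a seventh-order polynomial has exactly as many coefficients as constraints, which is the simplest closed-form special case of the minimum-snap generation the abstract mentions. The sketch below solves that 8x8 system per axis; it is an illustration of the idea, not the paper's multi-segment solver.

```python
import math
import numpy as np

def snap_segment(p0, pf, T):
    """Coefficients c0..c7 of p(t) = sum c_n t^n for a rest-to-rest move:
    position p0 -> pf in T seconds, with velocity, acceleration, and jerk
    zero at both ends (8 constraints, 8 unknowns)."""
    def row(t, d):
        # d-th time derivative of the monomial basis [1, t, ..., t^7] at t
        r = np.zeros(8)
        for n in range(d, 8):
            r[n] = math.factorial(n) / math.factorial(n - d) * t ** (n - d)
        return r
    A = np.array([row(0.0, d) for d in range(4)] + [row(T, d) for d in range(4)])
    b = np.array([p0, 0, 0, 0, pf, 0, 0, 0], dtype=float)
    return np.linalg.solve(A, b)

coeffs = snap_segment(0.0, 5.0, 2.0)   # move 5 m in 2 s on one axis
print(np.polyval(coeffs[::-1], 1.0))   # midpoint position: 2.5 m by symmetry
```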
Figures:
Figure 1: Definition of the inertial and quadrotor body-fixed frames of reference; the positive sense of rotation is shown. The $i$-th propeller spins with angular velocity $\omega_i$ and generates thrust force $T_i$.
Figure 2: Cascaded control system architecture diagram.
Figure 3: Definition of segment parameters and the time-of-flight vector for a multi-segment trajectory.
Figure 4: Re-planning algorithm diagram.
Figure 5: Overview of the developed hardware–software interfacing pipeline. Additional modules and functionalities of the vehicle management system that are used, but omitted from the diagram, are grouped and denoted by a '...' submodule for brevity.
Figure 6: Simulation. The vehicle's initial state and an offline-generated, time-allocated minimum-snap trajectory that is feasible with respect to the central (dark blue) obstacle. (a) Top view; (b) 3D view.
Figure 7: Simulation. The initial reference trajectory solution, with re-planning positions shown as green markers.
Figure 8: Simulation. The first replanned reference trajectory.
Figure 9: Simulation. The second replanned reference trajectory.
Figure 10: Simulation. The third replanned reference trajectory.
Figure 11: Simulation. The fourth replanned reference trajectory.
Figure 12: Simulation results for detection of the first obstacle and update of the reference trajectory. (a) State of the vehicle and reference trajectory right before the first obstacle was detected; (b) updated mission after the first obstacle was detected.
Figure 13: Simulation results for detection of the second obstacle and update of the reference trajectory. (a) State of the vehicle and reference trajectory right before the second obstacle was detected; (b) updated mission after the second obstacle was detected.
Figure 14: Simulation results for detection of the third obstacle and update of the reference trajectory. (a) State of the vehicle and reference trajectory right before the third obstacle was detected; (b) updated mission after the third obstacle was detected.
Figure 15: Simulation results for detection of the fourth obstacle and update of the reference trajectory. (a) Vehicle successfully performing the final obstacle-avoidance maneuver; (b) the path and the final reference trajectory of a successful mission.
Figure 16: Experimental results with the initial state of the vehicle and an initial straight-line minimum-snap trajectory.
Figure 17: Experimental results showing the two reference trajectories computed for the experimental scenario. (a) The initial reference trajectory solution; (b) the first replanned reference trajectory.
Figure 18: Experimental results for collision-avoidance re-planning with one obstacle. (a) State of the vehicle and reference trajectory right before the first obstacle was detected; (b) updated mission after the first obstacle was detected.
Figure 19: Experimental results. (a) The quadrotor successfully performs an obstacle-avoidance maneuver; (b) the path and the final reference trajectory of a successful mission.
19 pages, 12437 KiB  
Article
Vibration Propulsion in Untethered Insect-Scale Robots with Piezoelectric Bimorphs and 3D-Printed Legs
by Mario Rodolfo Ramírez-Palma, Víctor Ruiz-Díez, Víctor Corsino and José Luis Sánchez-Rojas
Robotics 2024, 13(9), 135; https://doi.org/10.3390/robotics13090135 - 9 Sep 2024
Viewed by 1281
Abstract
This research presents the development and evaluation of a miniature autonomous robot inspired by insect locomotion, capable of bidirectional movement. The robot incorporates two piezoelectric bimorph resonators, 3D-printed legs, an electronic power circuit, and a battery-operated microcontroller. Each piezoelectric motor features ceramic plates measuring 15 × 1.5 × 0.6 mm3 and weighing 0.1 g, with an optimized electrode layout. The bimorphs vibrate at two flexural modes with resonant frequencies of approximately 70 and 100 kHz. The strategic placement of the 3D-printed legs converts out-of-plane motion into effective forward or backward propulsion, depending on the vibration mode. A differential drive configuration, using the two parallel piezoelectric motors and calibrated excitation signals from the microcontroller, allows for arbitrary path navigation. The fully assembled robot measures 29 × 17 × 18 mm3 and weighs 7.4 g. The robot was tested on a glass surface, reaching a maximum speed of 70 mm/s and a rotational speed of up to 190 deg./s, with power consumption of 50 mW, a cost of transport of 10, and an estimated continuous operation time of approximately 6.7 h. The robot successfully followed pre-programmed paths, demonstrating its precise control and agility in navigating complex environments, marking a significant advancement in insect-scale autonomous robotics. Full article
(This article belongs to the Section Intelligent Robots and Mechatronics)
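Speed control via burst-type excitation, as in the trajectory compensation described above, can be sketched as a simple gating of the resonant drive signal: each motor is driven for $T_{on}$ and idled for $T_{off}$, and unequal ON fractions on the two bimorphs steer the robot. The timing values below are illustrative assumptions, not the paper's calibrated signals.

```python
def burst_gate(t, t_on, t_off):
    """True while the resonant drive of one bimorph is gated ON.
    The ON fraction t_on / (t_on + t_off) sets the motor's average speed."""
    return (t % (t_on + t_off)) < t_on

# Differential drive: a shorter ON window on the left motor curves the robot left.
dt = 0.002
for i in range(5):
    t = i * dt
    left = burst_gate(t, t_on=0.004, t_off=0.006)   # 40% duty burst
    right = burst_gate(t, t_on=0.008, t_off=0.002)  # 80% duty burst
    print(f"t={t:.3f} s  left={left}  right={right}")
```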
Figures:
Figure 1: Detailed schematic of the miniature robot showing the piezoelectric bimorph resonators, 3D-printed legs, battery, and microcontroller board.
Figure 2: Side view of the mode shapes for the vibration modes (50) and (60), highlighting the distinct half-lobes between the nodal and anti-nodal lines, which are crucial for effective locomotion. The figure shows the positions of the legs $L_{1-4}$ and the semi-nodes $N_{1-2}$. The colored half-lobes represent the areas where a leg produces forward or backward motion of the robot; to achieve bidirectional movement, the legs should be positioned between the green vertical lines. The semi-nodes $N_{1-2}$ are critical areas for supporting the robot's weight during standing-wave locomotion in both modes.
Figure 3: Final design and geometry of the locomotion system.
Figure 4: Simulated stress distribution on the bimorph surface, highlighting the regions $\Omega_+$ (red) and $\Omega_-$ (blue) for electrode placement in each mode.
Figure 5: Subregions $\Omega_+$ and $\Omega_-$ common to modes (50) and (60).
Figure 6: Graphical description of the four different types of motion of the robot. The bimorphs are positioned 180° away from each other and can be actuated in either mode (50) or (60) for bidirectional thrust.
Figure 7: (a) High-voltage piezo drive circuit; (b) measured PWM signal from the microcontroller (orange) and voltage between the PZT bimorph plate terminals (blue).
Figure 8: Burst-type control signal for the trajectory compensation of the robot, showing the adjustment of $T_{on}$, $T_{off}$, and $T_b$ to control the robot's speed and direction.
Figure 9: Structure and dimensions in millimeters of the LFS piezo bimorph vibration sensor (RS PRO, Japan) [29].
Figure 10: Electrode design to maximize the efficiency of modes (50) and (60), showing the division into $E_+$, $E_-$, and neutral regions.
Figure 11: The 3D-printed leg design using Formlabs Rigid 10K resin (bottom and lateral views), featuring four mini claws for enhanced attachment and stability [31].
Figure 12: The final, fully assembled robot: upside down (left) and upright (right).
Figure 13: The enhancement of the resonance peaks for modes (50) and (60) after the electrode layout implementation steps described in Figures 5 and 10.
Figure 14: Resonance peaks of each bimorph of the locomotion system when the robot is fully assembled.
Figure 15: Frequency adjustment for the maximum speed of the locomotion system.
Figure 16: Bidirectional rotational movement, clockwise (0–1.8 s) and counterclockwise (1.8–4 s).
Figure 17: Frames of the clockwise (0–1.8 s) and counterclockwise (1.8–4 s) rotation.
Figure 18: Bidirectional straight-line movement.
Figure 19: Complex L-shaped trajectory carried out by the robot.
Figure 20: Robot trajectory for a programmed sequence: straight line, deviation, straight line.
17 pages, 11591 KiB  
Article
A Novel Fuzzy Logic Switched MPC for Efficient Path Tracking of Articulated Steering Vehicles
by Xuanwei Chen, Jiaqi Cheng, Huosheng Hu, Guifang Shao, Yunlong Gao and Qingyuan Zhu
Robotics 2024, 13(9), 134; https://doi.org/10.3390/robotics13090134 - 5 Sep 2024
Viewed by 910
Abstract
This paper introduces a novel fuzzy logic switched model predictive control (MPC) algorithm for articulated steering vehicles, addressing significant path tracking challenges due to varying road conditions and vehicle speeds. Traditional single-model and parameter-based controllers struggle with tracking errors and computational inefficiencies under diverse operational conditions. Therefore, a kinematics-based MPC algorithm is first developed, showing strong real-time performance but encountering accuracy issues on low-adhesion surfaces and at high speeds. Then, a 4-DOF dynamics-based MPC algorithm is designed to enhance tracking accuracy and control stability. The proposed solution is a switched MPC strategy, integrating a fuzzy control system that dynamically switches between kinematics-based and dynamics-based MPC algorithms based on error, solution time, and heading angle indicators. Subsequently, simulation tests are conducted using SIMULINK and ADAMS to verify the performance of the proposed algorithm. The results confirm that this fuzzy-based MPC algorithm can effectively mitigate the drawbacks of single-model approaches, ensuring precise, stable, and efficient path tracking across diverse adhesion road conditions. Full article
(This article belongs to the Section AI in Robotics)
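The fuzzy switching layer can be pictured as a tiny rule base over the three indicators named in the abstract (tracking error, solution time, and heading angle): large errors argue for the dynamics-based MPC, while long solve times argue for keeping the lighter kinematics-based MPC. The membership shapes, normalization, and threshold below are assumptions for illustration, not the paper's tuned controller.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def choose_mpc(track_err, heading_err, solve_time):
    """Inputs normalized to [0, 1]. Returns which MPC model to run next step."""
    need_dynamics = max(tri(track_err, 0.2, 1.0, 1.8),
                        tri(heading_err, 0.2, 1.0, 1.8))  # accuracy pressure
    keep_kinematics = tri(solve_time, 0.2, 1.0, 1.8)      # computation pressure
    score = need_dynamics - 0.5 * keep_kinematics
    return "dynamics-based MPC" if score > 0.4 else "kinematics-based MPC"

print(choose_mpc(track_err=0.9, heading_err=0.4, solve_time=0.2))
# -> "dynamics-based MPC": accurate but slower model wins on large errors
```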
Figures:
Figure 1: (a) Geometry of the kinematic model; (b) geometry of the dynamic model.
Figure 2: The desired path in the tests.
Figure 3: Tracking performance of the controllers on the road with an adhesion coefficient of 0.8.
Figure 4: Tracking performance of the controllers on the road with an adhesion coefficient of 0.4.
Figure 5: Schematics of the proposed switched MPC.
Figure 6: Switching cost for the different models.
Figure 7: Membership functions for each input variable.
Figure 8: The output of the fuzzy controller.
Figure 9: The overall flow chart of the fuzzy logic switched MPC.
Figure 10: The ADAMS model.
Figure 11: Adhesion coefficient parameters of the desired path.
Figure 12: Tracking performance at 1 m/s.
Figure 13: Tracking performance at 2 m/s.
15 pages, 2870 KiB  
Article
Towards Prosthesis Control: Identification of Locomotion Activities through EEG-Based Measurements
by Saqib Zafar, Hafiz Farhan Maqbool, Muhammad Imran Ashraf, Danial Javaid Malik, Zain ul Abdeen, Wahab Ali, Juri Taborri and Stefano Rossi
Robotics 2024, 13(9), 133; https://doi.org/10.3390/robotics13090133 - 1 Sep 2024
Viewed by 1258
Abstract
The integration of advanced control systems in prostheses necessitates the accurate identification of human locomotion activities, a task that can significantly benefit from EEG-based measurements combined with machine learning techniques. The main contribution of this study is the development of a novel framework for the recognition and classification of locomotion activities using electroencephalography (EEG) data, comparing the performance of different machine learning algorithms. Data of the lower limb movements during level-ground walking as well as going up stairs, down stairs, up ramps, and down ramps were collected from 10 healthy volunteers. Time- and frequency-domain features were extracted by applying independent component analysis (ICA). Subsequently, they were used to train and test random forest and k-nearest neighbors (kNN) algorithms. For classification, random forest proved to be the best-performing algorithm, achieving an overall accuracy of up to 92%. The findings of this study contribute to the field of assistive robotics by confirming that EEG-based measurements, when combined with appropriate machine learning models, can serve as robust inputs for prosthesis control systems. Full article
(This article belongs to the Special Issue AI for Robotic Exoskeletons and Prostheses)
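The classifier comparison at the heart of the framework is straightforward to prototype with scikit-learn. The sketch below uses random features as a stand-in for the ICA-cleaned time- and frequency-domain features; the shapes, class count, and hyperparameters are assumptions, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 28))    # e.g. 2 features per each of 14 channels
y = rng.integers(0, 5, size=500)  # level walk, stairs up/down, ramps up/down

# 5-fold cross-validated accuracy for the two classifiers compared in the study
for name, clf in [("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")  # ~chance level on random data
```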
Figures:
Figure 1: (a) The hardware of the EMOTIV Epoc headset; (b) the positions of the 14 electrodes.
Figure 2: Flowchart of the data processing and analysis.
Figure 3: Frequency response of different filters in the EEGLAB toolbox.
Figure 4: EEG signals (left) and independent components (right); the x-axis represents time in seconds, while the y-axis represents the microvoltage measured by each electrode and the ICs.
Figure 5: (a) Example of Infomax ICA results for ascending stairs; (b) example of ADJUST results for descending stairs. The numbers on the scalp maps stand for the specific components found by the independent component analysis, whereas the percentages indicate the confidence associated with the type of identified component. The colors on the scalp represent different voltage levels, with warmer colors indicating higher electrical potential, whereas blue and green indicate lower activity. Black curves indicate isopotential lines: closer lines indicate steeper gradients of electrical potential, whereas widely spaced lines indicate more gradual changes.
17 pages, 3904 KiB  
Article
Adaptive Path Planning for Subsurface Plume Tracing with an Autonomous Underwater Vehicle
by Zhiliang Wu, Shuozi Wang, Xusong Shao, Fang Liu and Zefeng Bao
Robotics 2024, 13(9), 132; https://doi.org/10.3390/robotics13090132 - 31 Aug 2024
Viewed by 953
Abstract
Autonomous underwater vehicles (AUVs) have been increasingly applied in marine environmental monitoring. Their outstanding capability of performing tasks without human intervention makes them a popular tool for environmental data collection, especially in unknown and remote regions. This paper addresses the path planning problem when AUVs are used to perform plume source tracing in an unknown environment. The goal of path planning is to locate the plume source efficiently. The path planning approach is developed using the Double Deep Q-Network (DDQN) algorithm in the deep reinforcement learning (DRL) framework. The AUV gains knowledge by interacting with the environment, and the optimal direction is extracted from the mapping obtained by a deep neural network. The proposed approach was tested by numerical simulation and on a real ground vehicle. In the numerical simulation, several initial sampling strategies were compared on the basis of survey efficiency. The results show that direct learning based on interaction with the environment could be an appropriate survey strategy for plume source tracing problems. The comparison with the canonical lawnmower path used in practice shows that path planning using DRL algorithms is potentially promising for large-scale environment exploration. Full article
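The core of DDQN, relative to vanilla DQN, is decoupling action selection from action evaluation when forming the bootstrap target: the online network picks the greedy next action, and the target network scores it, which curbs Q-value overestimation. A minimal sketch of that target computation follows; the array shapes and toy numbers are illustrative assumptions.

```python
import numpy as np

def ddqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double DQN bootstrap targets for a batch of transitions:
    the online net selects the greedy action, the target net evaluates it."""
    a_max = np.argmax(q_online_next, axis=1)               # action selection
    q_eval = q_target_next[np.arange(len(a_max)), a_max]   # action evaluation
    return rewards + gamma * (1.0 - dones) * q_eval

# Toy batch of 2 transitions with 4 discrete heading choices as actions
q_on = np.array([[0.1, 0.9, 0.3, 0.0], [0.5, 0.2, 0.4, 0.1]])
q_tg = np.array([[0.2, 0.7, 0.1, 0.0], [0.6, 0.3, 0.2, 0.1]])
print(ddqn_targets(q_on, q_tg, rewards=np.array([1.0, 0.0]),
                   dones=np.array([0.0, 1.0])))  # -> [1.693, 0.0]
```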
Figures:
Figure 1. Schematics of the AUV plume tracing phases: (a) Phase I: sawtooth survey in the vertical plane; (b) Phase II: plume characterization in the horizontal plane.
Figure 2. AUV kinematic model in the NED system, where {x_b, y_b} denotes the body-fixed reference frame, α indicates the direction of the ocean current, and β denotes the angular difference between the NED system and the body-fixed reference frame.
Figure 3. DDQN algorithm for AUV path planning for plume tracing, where s and s′ denote the current and next states, respectively, a denotes the action taken, a_max denotes the action corresponding to the maximal Q-value output from the action selection network, r denotes the immediate reward received after the action is executed, and {s, a, r, s′} denotes the experiences saved in the AUV experience replay buffer.
Figure 4. Contour plot of the Ackley function as a steady plume, generated using Equation (5) with the coefficients a = −36, b = 2.2, and k = π.
Figure 5. Asymptotic performance of the DDQN algorithm in the averaged cumulative reward: (a) survey strategy with 30 initial random sampling episodes; (b) survey strategy with 30 initial uniform sampling episodes; (c) survey strategy without initial sampling.
Figure 6. Asymptotic performance of the DDQN algorithm in the success rate: (a) survey strategy with 30 initial random sampling episodes; (b) survey strategy with 30 initial uniform sampling episodes; (c) survey strategy without initial sampling.
Figure 7. Evolution of the adaptive survey path without initial sampling. The AUV starts from the lower left corner. The red, green, and black lines are survey paths generated by the proposed approach during learning, and the yellow line is the lawnmower path.
Figure 8. Asymptotic performance of the DDQN algorithm in transient plume tracing: (a) averaged cumulative reward, averaged over every one hundred episodes; (b) success rate, i.e., the rate of successfully arriving at the plume source over every one hundred episodes.
Figure 9. Transient plume tracing using adaptive path planning and the lawnmower survey pattern: (a) Stage 1: 10 time steps (~5.5 h); (b) Stage 2: 20 time steps (~11 h); (c) Stage 3: 30 time steps (~16.5 h); (d) Stage 4: 42 time steps (~23 h).
Figure 10. Total survey time needed with different numbers of AUVs in the steady plume source tracing problem. The time for individual operation in the lawnmower and adaptive approaches is set to 60 time steps in the numerical simulation, or 33 h and 20 min according to the preset vehicle endurance. Note that when one AUV is utilized, the battery charging time is ignored in the calculation.
Figure 11. TurtleBot 2 and the map used in the experiment: (a) TurtleBot 2; (b) the map printed with a contour plot of the Ackley function with the coefficients a = −36, b = 2.2, and k = π. The global maximum is shown in bright yellow.
Figure 12. Snapshots of the TurtleBot 2 robot autonomously navigating the static Ackley environment.
Figure 13. Robot trajectory in the static Ackley environment.
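To illustrate the Double DQN idea named in the abstract above, here is a hedged sketch of the target computation: the online network selects the next action and the target network evaluates it. The network internals are stubbed with NumPy and all names are illustrative, not from the paper.

```python
# Minimal Double DQN target sketch under stated assumptions.
import numpy as np

GAMMA = 0.99
N_ACTIONS = 8  # e.g., eight heading choices for the AUV (assumption)

def q_online(states):   # stand-in for the action-selection network
    return np.tanh(states @ np.ones((states.shape[1], N_ACTIONS)))

def q_target(states):   # stand-in for the slowly updated target network
    return 0.9 * q_online(states)

def ddqn_targets(rewards, next_states, done):
    # Select the greedy action with the online net...
    a_max = np.argmax(q_online(next_states), axis=1)
    # ...but evaluate it with the target net (reduces overestimation).
    q_eval = q_target(next_states)[np.arange(len(a_max)), a_max]
    return rewards + GAMMA * q_eval * (1.0 - done)

# Toy batch standing in for samples from the {s, a, r, s'} replay buffer.
batch = np.random.default_rng(1).normal(size=(32, 4))
print(ddqn_targets(np.ones(32), batch, np.zeros(32))[:4])
```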
17 pages, 9604 KiB  
Article
An Arch-Shaped Electrostatic Actuator for Multi-Legged Locomotion
by Yusuke Seki and Akio Yamamoto
Robotics 2024, 13(9), 131; https://doi.org/10.3390/robotics13090131 - 30 Aug 2024
Viewed by 1145
Abstract
A simple actuator to create non-reciprocal leg motion is imperative in realizing a multi-legged micro-locomotion mechanism. This work focuses on an arch-shaped electrostatic actuator as a candidate and proposes an operation protocol to realize a non-reciprocal trajectory. The actuator consists of a hard sheet and a flexible sheet, with a leg attached to the flexible sheet. The flexible sheet is deformed through an electrostatic zipping motion that changes the height and/or angle of the attached leg. The fabricated prototype weighed 0.1 g and swung about 15 degrees at an applied voltage of 1000 V. The swinging force exceeded 5 mN, five times the gravitational force on the actuator. Large performance deviations among prototypes were found, attributable to the manual fabrication process and the varying conditions of the silicone oil injected into the gap. The trajectory measurement showed that the leg tip moved along a non-reciprocal trajectory with a vertical shift of about 0.3 mm between the forward and backward swings. The prototype locomotion mechanism using four actuators successfully demonstrated forward and backward motion with the non-reciprocal swing of its four legs. The observed locomotion speed was about 0.3 mm/s. Although the speed was limited, the results show the potential of the actuator for use in multi-legged micro-locomotion systems. Full article
Figures:
Figure 1. Three types of miniature locomotion mechanisms.
Figure 2. The structure and operation principle of the arch-shaped electrostatic actuator. A high voltage is applied to the electrode shown in red. The flexible sheet deforms through a zipping motion, as shown in the upper inset. Applying voltage to the two electrodes lifts the attached leg; applying voltage to one electrode swings the leg.
Figure 3. Realization of a non-reciprocal trajectory of the leg tip.
Figure 4. Fabrication process of the arch-shaped electrostatic actuator. (a) The components of the stator sheet. (b) The stator sheet is fixed onto the fixing base, and the flexible sheet (stainless shim tape) is arranged onto the stator with the rod. (c) The flexible sheet is formed into an arch shape by combining the rod and the cover. (d) The flexible sheets are glued, and the structure is removed from the base. The structure is then cut into individual actuators, as in (e).
Figure 5. Dimensions and appearance of the arch-shaped actuators fabricated in this work.
Figure 6. First buckling mode of a beam fixed at both ends. The analysis in this work assumed that the flexible sheet buckles in this mode.
Figure 7. Calculation results of the analytical model. (a) Change in the height, or vertical displacement, of the flexible sheet. (b) Leg-swing angle.
Figure 8. Setup for measuring the leg-swing characteristics. When the motion was measured, the load cell was removed and the camera recorded the motion of the actuator; the load cell was connected when the output force was measured.
Figure 9. Step response of the leg-swing motion. In (a), a step voltage of 1000 V was applied to one stator electrode. The measurement was repeated five times for the same actuator and plotted in different colors.
Figure 10. Silicone oil sometimes flowed out from the gap and adhered to the leg.
Figure 11. Relationship between the leg-tilt angle and the applied voltage, measured by applying a ramp voltage. The measurement was performed for three different prototypes, plotted in different colors. The applied voltage and leg-tilt angle derived from the model are also plotted.
Figure 12. Setup for measuring the output force of the leg swing. The load cell and the leg were set to 0.4 mm, and a bipolar voltage was applied. The load cell was then moved toward −0.4 mm while measuring the force.
Figure 13. Leg-swing force of three prototypes measured using the setup in Figure 12. Negative force indicates force toward the negative x direction. The plots show that the three prototypes behaved differently.
Figure 14. Step response of the leg-lift motion. A step voltage of 1000 V was applied to the two electrodes simultaneously. The measurement was repeated five times for the same actuator.
Figure 15. Setup for measuring the load characteristics along the vertical direction.
Figure 16. Vertical resistive force when the actuators were pushed by a load cell. The measurement was performed for three different prototypes, plotted in different colors. (a) Relationship between the vertical force and displacement. (b) Snapshots showing the actuator's behavior under load; the snapshots in (b) correspond to the points in (a) with the same letter.
Figure 17. Voltage waveforms to produce the non-reciprocal motion of the leg.
Figure 18. Snapshots showing the non-reciprocal motion at a cycling frequency of 0.5 Hz.
Figure 19. Measured trajectories of the leg tip when the actuator was driven using the waveforms in Figure 17 at different cycling frequencies. Each plot shows the trajectories of five cycles.
Figure 20. Prototype of a multi-legged locomotion mechanism.
Figure 21. Grouping of the actuators and the voltage waveforms to realize multi-legged locomotion.
Figure 22. Horizontal motion of the main body of the multi-legged mechanism. The two plots represent two different cases in which the voltage patterns were swapped.
Figure 23. Photos of the multi-legged mechanism in operation and the behavior of its legs.
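Since energizing one stator electrode swings the leg and energizing both lifts it, phasing the two channels can trace a non-reciprocal (loop-like) tip trajectory. The sketch below is a guess at such a drive schedule, not the paper's actual waveforms (Figure 17); the four-phase split and timings are assumptions, while the 1000 V level and 0.5 Hz cycle follow the text.

```python
# Hedged sketch of a two-channel drive schedule for the arch actuator.
V_ON = 1000.0  # volts, as used for the prototypes

def electrode_voltages(t: float, period: float = 2.0):
    """Return (V1, V2) at time t for an assumed 0.5 Hz non-reciprocal cycle."""
    phase = (t % period) / period
    if phase < 0.25:
        return (V_ON, 0.0)   # swing toward electrode 1, leg down
    if phase < 0.50:
        return (V_ON, V_ON)  # both electrodes on: leg lifted
    if phase < 0.75:
        return (0.0, V_ON)   # swing back toward electrode 2 while lifted
    return (0.0, 0.0)        # release: leg returns to the neutral height

print([electrode_voltages(0.5 * k) for k in range(4)])
```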
16 pages, 23045 KiB  
Article
Tetherbot: Experimental Demonstration and Path Planning of Cable-Driven Climbing in Microgravity
by Simon Harms, Carlos Giese Bizcocho, Hiroto Wakizono, Kyosuke Murasaki, Hibiki Kawagoe and Kenji Nagaoka
Robotics 2024, 13(9), 130; https://doi.org/10.3390/robotics13090130 - 30 Aug 2024
Viewed by 1173
Abstract
In this paper, we introduce Tetherbot, a cable-driven climbing robot designed for microgravity environments with sparse holding points, such as space stations or asteroids. Tetherbot consists of a platform with a robotic arm that is suspended via cables from multiple grippers. It achieves climbing locomotion by alternately positioning the platform with the cables and relocating the grippers with the robotic arm from one holding point to the next. The main contribution of this work is the first experimental demonstration of autonomous cable-driven climbing in an environment with sparse holding points. To this end, we outline the design, kinematics, and statics of the Tetherbot and present a path planning algorithm to relocate the grippers. We demonstrate autonomous cable-driven climbing through an experiment conducted in a simulated microgravity environment using the path planning algorithm and a prototype of the robot. The results showcase Tetherbot’s ability to achieve autonomous cable-driven climbing locomotion, thereby demonstrating that cable-driven climbing is a viable concept and laying the foundation for future robots of this type. Full article
(This article belongs to the Section Aerospace Robotics and Autonomous Systems)
Figures:
Graphical abstract.
Figure 1. Conceptual illustration of Tetherbot. (a) Climbing locomotion principle. (b) Tetherbot on the outside of a space station. (c) Tetherbot exploring an asteroid. Background images adapted from [7,8].
Figure 2. Static and kinematic model of a cable-driven climbing robot.
Figure 3. Tetherbot's cable and arm joint configuration.
Figure 4. Tetherbot's pick-and-place motion sequence. (a) Initial state: all grippers are docked to a holding point and the platform is in the 10-cable configuration. (b) Move the platform to a pose that is stable with eight cables and switch to the eight-cable configuration. (c) Move the platform to a pose where the arm can reach the gripper to be picked; in the following, this is referred to as platform alignment. (d) Move the arm's end-effector over the gripper's docking adapter. (e) Dock the arm's end-effector to the gripper's docking adapter by approaching it in a straight line, and slacken the cables of the gripper. (f) Undock the arm's end-effector and gripper from the hold by removing the gripper in a straight line. (g) Move the platform to a pose where the arm can reach the hold on which to place the gripper. (h) Move the arm's end-effector and gripper over the hold's docking adapter. (i) Dock the arm's end-effector and gripper to the hold's docking adapter by approaching it in a straight line, and tension the cables of the gripper. (j) Undock the arm's end-effector from the gripper by removing the end-effector in a straight line, and switch to the 10-cable configuration.
Figure 5. Steps of Tetherbot's motion planning algorithm and trajectory generation.
Figure 6. Experiment environment with the prototype of Tetherbot.
Figure 7. Control system of the prototype. (a) Simplified diagram of the control system. (b) Arm/platform controller design.
Figure 8. Experiment results: (a) images of Tetherbot picking and placing the gripper; (b) position and orientation of Tetherbot's platform with respect to time and the phase of the motion sequence; (c) position of Tetherbot's arm and capacity margin with respect to time and the phase of the motion sequence.
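The statics the abstract mentions reduce, in essence, to checking that a platform pose admits cable tensions that balance the external wrench within tension limits (the basis of the capacity margin tracked in Figure 8c). Below is a hedged, simplified sketch of that feasibility test as a linear program; the structure matrix is a random stand-in, not Tetherbot's geometry, and the yes/no check is a simplification of a true capacity margin.

```python
# Hedged wrench-feasibility sketch for a cable-driven platform.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n_cables, dof = 8, 6
W = rng.normal(size=(dof, n_cables))  # stand-in structure matrix (unit cable wrenches)
w_ext = np.array([0.0, 0.0, -9.81 * 5, 0.0, 0.0, 0.0])  # e.g., platform weight

# Find tensions t with W t = -w_ext and t within assumed limits;
# minimizing total tension just picks one feasible point if any exists.
res = linprog(c=np.ones(n_cables),
              A_eq=W, b_eq=-w_ext,
              bounds=[(1.0, 200.0)] * n_cables)  # taut but below rated load
print("pose is wrench-feasible:", res.success)
```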
21 pages, 5591 KiB  
Article
Design of a Three-Degree of Freedom Planar Parallel Mechanism for the Active Dynamic Balancing of Delta Robots
by Christian Mirz, Mathias Hüsing, Yukio Takeda and Burkhard Corves
Robotics 2024, 13(9), 129; https://doi.org/10.3390/robotics13090129 - 27 Aug 2024
Viewed by 1114
Abstract
Delta robots are the most common parallel robots for manipulation tasks. In many industrial applications, they must be operated at reduced speed, or dwell times have to be included in the motion planning, to prevent frame vibrations. As a result, their full potential cannot be realized. Against this background, this publication is concerned with the mechanical design of an active dynamic balancing unit for the reduction of frame vibrations. In the first part of this publication, the main design requirements for an active dynamic balancing mechanism are discussed, followed by a presentation of possible mechanism designs. Subsequently, one of the most promising mechanisms is described in detail and its kinematics and dynamics equations are derived. Finally, the dimensions of a prototype mechanism designed to experimentally validate the concept of active dynamic balancing are defined using the example of Suisui Bot, a low-cost Delta robot. Full article
Figures:
Figure 1. Typical setup of a Delta robot.
Figure 2. Eigenmodes and natural frequencies of a typical Delta robot frame.
Figure 3. Illustrations visualizing the origin of parasitic shaking moments and the influence of the counterweight mass on the auxiliary variable λ, and thus on the power requirement of the balancing unit.
Figure 4. Renderings and schematics of the balancing mechanism.
Figure 5. Balancing unit attached to the Suisui Bot.
Figure 6. Variables used for the derivation of the inverse kinematics of the balancing mechanism. All vectors are given in the global COS.
Figure 7. Kinematic parameters of the Suisui Bot.
Figure 8. (a) Sectioning of the turntable as used for trajectory planning. (b) A set of 75 trajectories defined by a Halton sequence and recorded in experiments.
Figure 9. Trajectories used to assess the maximum shaking forces and moments.
Figure 10. Fourier transform of the shaking forces and moments calculated for the critical cases for the trajectories in an asterisk and a square pattern.
Figure A1. Page one of the morphological matrix containing solution candidates for the functions of the dynamic balancing task. For simplicity, solutions that can be used for both force and moment balancing are only listed under the subfunction force balancing [31].
Figure A2. Page two of the morphological matrix containing solution candidates for the functions of the dynamic balancing task. For simplicity, solutions that can be used for both force and moment balancing are only listed under the subfunction force balancing.
Figure A3. Page three of the morphological matrix containing solution candidates for the functions of the dynamic balancing task. For simplicity, solutions that can be used for both force and moment balancing are only listed under the subfunction force balancing.
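Figure 8b mentions that the 75 test trajectories were defined by a Halton sequence, a standard low-discrepancy way to cover a workspace evenly. A hedged sketch of such sampling follows; the workspace bounds are illustrative assumptions, not the Suisui Bot's actual turntable layout.

```python
# Minimal Halton-sequence sampling sketch for pick-and-place targets.
import numpy as np
from scipy.stats import qmc

sampler = qmc.Halton(d=2, scramble=False, seed=0)
unit_pts = sampler.random(n=75)  # low-discrepancy points in [0, 1)^2

# Assumed x/y workspace bounds in metres (placeholder values).
lo, hi = np.array([-0.3, -0.3]), np.array([0.3, 0.3])
targets = qmc.scale(unit_pts, lo, hi)
print(targets[:3])
```

Compared with uniform random sampling, a Halton sequence avoids clusters and gaps, which is why it suits a small, fixed budget of experimental trajectories.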
22 pages, 10563 KiB  
Article
Low-Cost Cable-Driven Robot Arm with Low-Inertia Movement and Long-Term Cable Durability
by Van Pho Nguyen, Wai Tuck Chow, Sunil Bohra Dhyan, Bohan Zhang, Boon Siew Han and Hong Yee Alvin Wong
Robotics 2024, 13(9), 128; https://doi.org/10.3390/robotics13090128 - 27 Aug 2024
Cited by 3 | Viewed by 3105
Abstract
Our study presents a novel design for a cable-driven robotic arm, emphasizing low cost, low inertia movement, and long-term cable durability. The robotic arm shares similar specifications with the UR5 robotic arm, featuring a total of six degrees of freedom (DOF) distributed in a 1:1:1:3 ratio at the arm base, shoulder, elbow, and wrist, respectively. The three DOF at the wrist joints are driven by a cable system, with heavy motors relocated from the end-effector to the shoulder base. This repositioning results in a lighter cable-actuated wrist (weighing 0.8 kg), which enhances safety during human interaction and reduces the torque requirements for the elbow and shoulder motors. Consequently, the overall cost and weight of the robotic arm are reduced, achieving a payload-to-body weight ratio of 5:8.4 kg. To ensure good positional repeatability, the shoulder and elbow joints, which influence longer moment arms, are designed with a direct-drive structure. To evaluate the design’s performance, tests were conducted on loading capability, cable durability, position repeatability, and manipulation. The tests demonstrated that the arm could manipulate a 5 kg payload with a positional repeatability error of less than 0.1 mm. Additionally, a novel cable tightener design was introduced, which served dual functions: conveniently tightening the cable and reducing the high-stress concentration near the cable locking end to minimize cable loosening. When subjected to an initial cable tension of 100 kg, this design retained approximately 80% of the load after 10 years at a room temperature of 24 °C. Full article
(This article belongs to the Section Industrial Robots and Automation)
Figures:
Figure 1. (a) Drawbacks of industrial collaborative robot arms with direct-drive joints in handling heavy loads and generating low-inertia interaction. (b) Our solution: a cable-driven robot arm. In the inset picture, the encoder and torque sensor may be located before or after the gearbox.
Figure 2. 3D design of the cable-driven robot arm in (a) perspective view and (b) top view. The dash-line boxes show the boundaries of the main cluster structures in the robot arm. S_E, S_FU, S_FF, S_W, S_e, and S_P are, respectively, the center lines of the elbow joint, the tube holders at the forearm, the tube holders at the upper arm, the wrist, the end-effector, and the pinion wrist. S_B, S_S, S_F, S_W1, S_W2, and S_W3 are, in turn, the center lines of the shafts of the base motor M_B, shoulder motor M_S, forearm motor M_F, and wrist motors {M_W1, M_W2, M_W3}.
Figure 3. Schematic illustration of the 3-DOF differential-gear wrist design in (a) isometric view, (b) front view, and (c) side view. The six cables are labeled c_1-1, c_1-2, c_2-1, c_2-2, c_3-1, and c_3-2.
Figure 4. Illustration of the inside structure and the decoupling mechanism in the 3-DOF differential-gear wrist.
Figure 5. Schematic illustration of the cable tightening mechanism (a) and the principle of adjusting the cable tension (b). The yellow dashed circle marks the hub surface of the pulley, and S_W1,2,3 means S_W1, S_W2, or S_W3. The models in this figure also apply to both the wrist and pinion pulleys. The cable is assumed to be locked in the wrist pulley and the male/female pulley, while A and B are the first contact points between the cable and the pulleys.
Figure 6. Experimental relation between T_hold and T_load (a), and the ratio T_hold/T_load on the pulley over the two variables μ and the number of rounds (b). In graph (a), the pulley with a diameter of 32 mm, the load cell, and the force sensor are clamped; the cable, made from Dyneema, has a diameter of 2 mm and is wound several rounds on the pulley hub surface. T_load is set at the load cell, while T_hold is measured at the force sensor. In (b), the ratio T_hold/T_load measured in experiments is shown as solid lines, and the dotted lines show interpolations of these data over different values of μ.
Figure 7. Front-view illustration of the cable layout in the robot arm. M_S, M_F, M_W1, M_W2, and M_W3 are, respectively, the motors driving the shoulder joint, forearm, and wrist. P_E, P_M, P_B, P_P, P_W1, P_W2, P_W3, P_D1, and P_D2 are, respectively, the elbow, minor, base, planar, wrist (1–3), direct-1, and direct-2 pulleys, with center lines S_E, S_M, S_B, S_P, S_W1, S_W2, S_W3, S_D1, and S_D2. Inset images outlined by green and red dashed lines show the front view of the cable layout at the elbow and the decoupling mechanism, respectively.
Figure 8. Kinematics analysis of the six-DOF cable-driven robot arm. Inset images show the rotations of the wrist about the three axes z_3, z_4, and z_5, indicated by red dashed lines. The range of motion of the joints is [θ_1, θ_2, θ_3, θ_4, θ_5, θ_6] = [360, 111, 106, 160, 360, 180] (°). θ_5 can reach a full round if θ̇_6 is zero and only a half round otherwise.
Figure 9. Fabrication and assembly processes for making the robot arm.
Figure 10. Electrical design and control system for the cable-driven robot arm.
Figure 11. Experimental setups for testing cable loosening under a 100 kg static payload (a) and a 2 kg dynamic payload (b), with (b-1), (b-2), and (b-3) for rotation around z_3, z_4, and z_5, respectively.
Figure 12. Experimental setups for testing the repeatability of the arm (a) and the wrist, with (b), (c), and (d) for rotation around z_3, z_4, and z_5, respectively. The grasped object is a box held and manipulated by a suction-cup gripper.
Figure 13. Evaluation of cable loosening under the 2 kg dynamic payload. Each dot indicates a tension measurement, and the dotted and solid lines show the interpolation of the test data over time.
Figure 14. Cable durability test on the cable tightener with 10 wound rounds on its hub surface under the 100 kg static payload setup. The blue curve is the function interpolated from the experimental data (red markers).
Figure 15. Demonstration of the cable-driven robot arm manipulating objects: (a) filament box, (b) heavy box, (c) foam sheet, (d) pneumatic-joint box, (e) storing box, and (f) component bag. (1), (2), and (3) are the phases of the experiments, following the order of sucking, lifting, moving, and releasing.
Figure 16. Weight distribution (a) and cost distribution (b) of our robot arm after fabrication.
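The tension ratio in Figure 6 — decaying with both the friction coefficient μ and the number of wound rounds — is the behavior the classical capstan equation predicts, T_hold = T_load · e^(−μθ) for wrap angle θ. A hedged sketch of that relation follows; the μ value and loads are illustrative, not the measured Dyneema/pulley data.

```python
# Capstan-equation sketch for the cable tightener's holding tension.
import math

def holding_tension(t_load: float, mu: float, rounds: float) -> float:
    """Tension remaining at the locking end after `rounds` wraps on the hub."""
    return t_load * math.exp(-mu * 2.0 * math.pi * rounds)

for rounds in (2, 5, 10):
    print(rounds, "rounds ->", round(holding_tension(1000.0, 0.1, rounds), 2), "N")
```

The exponential decay explains the design rationale in the abstract: a few extra wraps drastically reduce the stress concentration near the cable locking end.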
18 pages, 2405 KiB  
Article
Experimental Comparison of Two 6D Pose Estimation Algorithms in Robotic Fruit-Picking Tasks
by Alessio Benito Alterani, Marco Costanzo, Marco De Simone, Sara Federico and Ciro Natale
Robotics 2024, 13(9), 127; https://doi.org/10.3390/robotics13090127 - 26 Aug 2024
Viewed by 1328
Abstract
This paper presents an experimental comparison between two existing methods representative of two categories of 6D pose estimation algorithms nowadays commonly used in the robotics community. The first category includes purely deep learning methods, while the second one includes hybrid approaches combining learning pipelines and geometric reasoning. The hybrid method considered in this paper is a pipeline of an instance-level deep neural network based on RGB data only and a geometric pose refinement algorithm based on the availability of the depth map and the CAD model of the target object. Such a method can handle objects whose dimensions differ from those of the CAD. The pure learning method considered in this comparison is DenseFusion, a consolidated state-of-the-art pose estimation algorithm selected because it uses the same input data, namely, RGB image and depth map. The comparison is carried out by testing the success rate of fresh food pick-and-place operations. The fruit-picking scenario has been selected for the comparison because it is challenging due to the high variability of object instances in appearance and dimensions. The experiments carried out with apples and limes show that the hybrid method outperforms the pure learning one in terms of accuracy, thus allowing the pick-and-place operation of fruits with a higher success rate. An extensive discussion is also presented to help the robotics community select the category of 6D pose estimation algorithms most suitable to the specific application. Full article
(This article belongs to the Section Sensors and Control in Robotics)
Figures:
Figure 1. Robot with sensorized gripper grasping a lime.
Figure 2. Architecture of the DenseFusion network.
Figure 3. Simplified sketch of the architecture of the DOPE network: the nine images at the first-stage output depict as white dots the keypoints corresponding to the projections of the eight 3D cuboid vertices and the object centroid in the image plane.
Figure 4. Pose estimation of an apple smaller than the CAD model: the apple under the table is the one estimated by DOPE, and the apple on the table is the one estimated by DOPE + pose refinement. The cone delimited by the red lines contains all possible apples translated and scaled to produce the same image in the camera plane. The dotted blue line passes through the centroids of these apples.
Figure 5. Pipeline of the robotic fruit-picking execution. The 6D pose estimation block implements one of the two compared methods: DOPE + pose refinement or DenseFusion.
Figure 6. Block scheme of the grasp controller.
Figure 7. Illustration of the grasping strategy.
Figure 8. Sample frames of the apple (left) and lime (right) datasets.
Figure 9. Top and side views of the five apples used for the first test with corresponding dimensions (left) and the five locations for grasp attempts (right).
Figure 10. Top and side views of the two limes used for the first test with corresponding dimensions (left) and the five locations for grasp attempts (right).
Figure 11. Fruits and locations (well illuminated on the left, overshadowed on the right) of the grasp attempts selected for the second test.
Figure 12. Third test: the three orientations selected for the apple.
Figure 13. Third test: grasp configurations planned for the three orientations of the apple, where the blue axis is the estimated z axis of the object frame (symmetry axis).
Figure 14. Grasp forces applied during the pick-and-place operation of an apple (left) and a lime (right).
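The geometric intuition in Figure 4 is that all objects scaled and translated along the same camera ray produce the same RGB image, so the measured depth at the object's centroid pixel disambiguates the true size and distance. The sketch below is a toy reduction of that idea under stated assumptions, not the paper's full refinement algorithm.

```python
# Hedged sketch: slide a CAD-sized pose estimate along its viewing ray
# so that its depth matches the depth-map measurement.
import numpy as np

def refine_along_ray(t_est: np.ndarray, z_measured: float):
    """t_est: translation (x, y, z) estimated assuming the CAD model's size.
    Returns the refined translation and the implied object scale factor."""
    s = z_measured / t_est[2]  # ratio of measured depth to assumed depth
    # Scaling the whole translation keeps the pixel projection unchanged
    # (perspective projection x/z, y/z is invariant under uniform scaling).
    return s * t_est, s

t_dope = np.array([0.05, -0.02, 0.60])  # illustrative DOPE estimate, metres
t_ref, scale = refine_along_ray(t_dope, z_measured=0.52)
print(t_ref, "scale ~", round(scale, 3))
```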
25 pages, 14907 KiB  
Article
Closed-Form Continuous-Time Neural Networks for Sliding Mode Control with Neural Gravity Compensation
by Claudio Urrea, Yainet Garcia-Garcia and John Kern
Robotics 2024, 13(9), 126; https://doi.org/10.3390/robotics13090126 - 23 Aug 2024
Viewed by 934
Abstract
This study proposes the design of a robust controller based on a Sliding Mode Control (SMC) structure. The proposed controller, called Sliding Mode Control based on Closed-Form Continuous-Time Neural Networks with Gravity Compensation (SMC-CfC-G), includes the development of an inverse model of the UR5 industrial robot, which is widely used in various fields. It also includes the development of a gravity vector using neural networks, which outperforms the gravity vector obtained through traditional robot modeling. To develop a gravity compensator, a feedforward Multi-Layer Perceptron (MLP) neural network was implemented. The use of Closed-Form Continuous-Time (CfC) neural networks for the development of a robot's inverse model was introduced, allowing efficient modeling of the robot. The behavior of the proposed controller was verified under load and torque disturbances at the end effector, demonstrating its robustness against disturbances and variations in operating conditions. The adaptability and ability of the proposed controller to maintain superior performance in dynamic industrial environments are highlighted, outperforming the classic SMC, Proportional-Integral-Derivative (PID), and Neural controllers. Consequently, a high-precision controller with a maximum error of approximately 1.57 mm was obtained, making it useful for applications requiring high accuracy. Full article
(This article belongs to the Section Intelligent Robots and Mechatronics)
Figures:
Figure 1. Schematic representation of the implemented neural networks. (a) Multi-Layer Perceptron. (b) Closed-Form Continuous-Time neural networks.
Figure 2. UR5 robot modeled in Simscape. (a) Robot reference frames according to the D–H algorithm; (b) D–H representation.
Figure 3. Planar projection of the axis coordinates. (a) Planar projection of the fifth-axis coordinate to the base coordinate; (b) planar projection of the sixth-axis coordinate to the base coordinate; (c) planar projection of the fifth axis to the first-axis joint coordinate [29].
Figure 4. UR5 identification procedure schemes.
Figure 5. Chirp-type signals used for UR5 identification.
Figure 6. Proposed control scheme (SMC-CfC-G).
Figure 7. Trajectories in Cartesian space. (a) Three-petal flower; (b) combination of lines and curves.
Figure 8. Trajectory tracking by the designed controllers. (a) Desired Cartesian trajectory 1 (three-petal flower); (b) desired Cartesian trajectory 2 (combination of lines and curves).
Figure 9. Simulation results for the implemented control strategies. (a) x-axis tracking, trajectory 1; (b) x-axis tracking, trajectory 2.
Figure 10. Simulation results for the implemented control strategies. (a) y-axis tracking, trajectory 1; (b) y-axis tracking, trajectory 2.
Figure 11. Simulation results for the implemented control strategies. (a) z-axis tracking, trajectory 1; (b) z-axis tracking, trajectory 2.
Figure 12. Tracking of Cartesian trajectory 2 in the presence of external disturbances. (a) Increase in end-effector load; (b) multidirectional torque.
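To make the control structure named in the abstract concrete, here is a hedged sketch of a sliding mode law with the gravity vector supplied by a learned model. The gains, the saturation-based switching term, and the gravity stub are assumptions; the paper's trained CfC/MLP networks are not reproduced here.

```python
# Minimal SMC-with-neural-gravity sketch under stated assumptions.
import numpy as np

LAMBDA, K, PHI = 5.0, 20.0, 0.05  # surface slope, switching gain, boundary layer

def gravity_nn(q: np.ndarray) -> np.ndarray:
    """Stand-in for the trained MLP gravity compensator G(q)."""
    return 9.81 * np.sin(q)  # placeholder, not the learned model

def smc_torque(q, qd, q_ref, qd_ref):
    e, ed = q_ref - q, qd_ref - qd
    s = ed + LAMBDA * e                      # sliding surface s = e_dot + lambda*e
    switching = np.clip(s / PHI, -1.0, 1.0)  # sat() instead of sign() to limit chattering
    return gravity_nn(q) + K * switching     # gravity feedforward + robust SMC term

q, qd = np.zeros(6), np.zeros(6)
print(smc_torque(q, qd, q_ref=np.full(6, 0.1), qd_ref=np.zeros(6)))
```

The feedforward term carries the pose-dependent load, so the switching gain K only has to dominate the residual model error, which is what keeps chattering small.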
14 pages, 4926 KiB  
Article
Eight-Bar Elbow Joint Exoskeleton Mechanism
by Giorgio Figliolini, Chiara Lanni, Luciano Tomassi and Jesús Ortiz
Robotics 2024, 13(9), 125; https://doi.org/10.3390/robotics13090125 - 23 Aug 2024
Viewed by 901
Abstract
This paper deals with the design and kinematic analysis of a novel mechanism for the elbow joint of an upper-limb exoskeleton, with the aim of reducing the effort and physical strain of operators carrying out heavy tasks. In particular, the proposed eight-bar elbow joint exoskeleton mechanism consists of a motorized Watt I six-bar linkage and a suitable RP dyad, which mechanically connects the outer side of the human arm to the corresponding forearm by hook-and-loop (velcro) fasteners, thus assisting their closing relative motion when lifting objects during repetitive and heavy operations. This relative motion is not a pure rotation, and thus the upper part of the exoskeleton is fastened to the arm, while the lower part is not rigidly connected to the forearm but attached through a prismatic pair that allows both rotation and sliding along the forearm axis. The human arm itself is sketched by means of a crossed four-bar linkage, whose coupler link is considered attached to the glyph of the prismatic pair, which is fastened to the forearm. Therefore, the kinematic analysis of the whole ten-bar mechanism, obtained by joining the Watt I six-bar linkage and the RP dyad to the crossed four-bar linkage, is formulated to investigate the main kinematic performance and for design purposes. The proposed algorithm yielded several numerical and graphical results. Finally, a double-parallelogram linkage, a particular case of the Watt I six-bar linkage, was considered in combination with the RP dyad and the crossed four-bar linkage, leading to a first mechanical design and a 3D-printed prototype. Full article
(This article belongs to the Section Neurorobotics)
Figures:
Figure 1. Ten-bar exoskeleton elbow joint mechanism.
Figure 2. Ten-bar mechanism.
Figure 3. Watt I six-bar linkage: (a) vector loops; (b) ICs.
Figure 4. Crossed four-bar linkage: (a) vector loop; (b) ICs.
Figure 5. Ten-bar mechanism: ICs.
Figure 6. Ten-bar mechanism: result for a crank angle θ2 = 255° of A0A (blue and magenta indicate the eight-bar elbow joint exoskeleton mechanism and the crossed four-bar linkage, respectively).
Figure 7. Ten-bar mechanism: result for a crank angle θ2 = 290° of A0A (blue and magenta indicate the eight-bar elbow joint exoskeleton mechanism and the crossed four-bar linkage, respectively).
Figure 8. Ten-bar elbow joint exoskeleton mechanism: (a) kinematic sketch; (b) application.
Figure 9. Ten-bar linkage for the upper-limb exoskeleton: result for a crank angle θ2 = 300° of A0A (blue and magenta indicate the eight-bar elbow joint exoskeleton mechanism and the crossed four-bar linkage, respectively).
Figure 10. Ten-bar linkage: result for a crank angle θ2 = 252° of A0A (blue and magenta indicate the eight-bar elbow joint exoskeleton mechanism and the crossed four-bar linkage, respectively).
Figure 11. Ten-bar mechanism and 3D-printed prototype for different crank angles θ2: (a) 225°; (b) 240°; (c) 252°; (d) 270°; (e) 300°; (f) 315°.
Figure 12. Whole sequence of the ten-bar mechanism's closing motion.
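The position analysis behind the crank-angle results above rests on closing the linkage's vector loops. A hedged sketch for a single four-bar loop follows: it solves r2·e^(iθ2) + r3·e^(iθ3) = r1 + r4·e^(iθ4) for the unknown angles with a numerical root finder. The link lengths are illustrative, not the exoskeleton's dimensions, and for a crossed linkage the solved branch depends on the initial guess.

```python
# Vector-loop closure sketch for a four-bar linkage (assumed dimensions).
import numpy as np
from scipy.optimize import fsolve

r1, r2, r3, r4 = 1.0, 0.4, 1.1, 0.9  # assumed ground, crank, coupler, rocker

def loop_eqs(unknowns, theta2):
    theta3, theta4 = unknowns
    # Real and imaginary parts of the loop-closure equation.
    fx = r2 * np.cos(theta2) + r3 * np.cos(theta3) - r4 * np.cos(theta4) - r1
    fy = r2 * np.sin(theta2) + r3 * np.sin(theta3) - r4 * np.sin(theta4)
    return (fx, fy)

theta2 = np.deg2rad(255.0)  # crank angle, as in Figure 6
theta3, theta4 = fsolve(loop_eqs, x0=(0.5, 1.5), args=(theta2,))
print(np.rad2deg([theta3, theta4]))
```

Sweeping θ2 over the crank range and repeating the solve reproduces the kind of configuration sequence shown in Figures 11 and 12.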