Article

Real-Time Kinematically Synchronous Planning for Cooperative Manipulation of Multi-Arms Robot Using the Self-Organizing Competitive Neural Network

1 Institute of Robotics, Henan University of Technology, Zhengzhou 450001, China
2 State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(11), 5120; https://doi.org/10.3390/s23115120
Submission received: 17 April 2023 / Revised: 24 May 2023 / Accepted: 25 May 2023 / Published: 27 May 2023
(This article belongs to the Special Issue New Advances in Robotically Enabled Sensing)
Figure 1. A type of cooperative manipulation. (a) Carrying. (b) Operating rudder. (c) Operating a wrench. (d) Using pliers. (e) Multi-station operation.
Figure 2. The diagram for the common features in the cooperative manipulation of multi-arms.
Figure 3. Simple configuration of multi-arm robot.
Figure 4. Kinematically synchronous planning for multi-arm robot. U_in = t = (t_1, t_2, …, t_N)^T. U_out = s = (s_1, s_2, …, s_N)^T.
Figure 5. Motion planning and EE motion for the EE with the minimum pose error, e_min(T). v̂_i(T) = μ·e_min(T).
Figure 6. The configuration of three-arm robot with 15-DoFs.
Figure 7. Inverse kinematics based on the traditional method in real time. (a) Motion of multi-arms. (b) Joint angles. (c) EE position velocity. (d) EE attitude velocity. (e) EE position error. (f) EE attitude error.
Figure 8. Inverse kinematics based on the sub-base method in real time. (a) Motion of multi-arms. (b) Joint angles. (c) EE position velocity. (d) EE attitude velocity. (e) EE position error. (f) EE attitude error.
Figure 9. The configuration of two-arm robot with 13-DoFs.
Figure 10. The principle of two-arm robot with 13-DoFs.
Figure 11. Carrying task. (a) Initial configuration. (b) Manipulating process.
Figure 12. Trajectories for dual arms in carrying task. (a) EE movement. (b) Joint trajectory. (c) EE position. (d) EE attitude. (e) Position velocity. (f) Attitude velocity. (g) Pose velocity error. (h) EE pose error.
Figure 13. Manipulating pliers. (a) Initial configuration. (b) Manipulating process.
Figure 14. Trajectories for dual arms in manipulating pliers. (a) EE movement. (b) Joint trajectory. (c) EE position. (d) EE attitude. (e) Position velocity. (f) Attitude velocity. (g) Pose velocity error. (h) EE pose error.
Figure 15. Manipulating rudder. (a) Initial configuration. (b) Manipulating process.
Figure 16. Trajectories for dual arms in manipulating rudder. (a) EE movement. (b) Joint trajectory. (c) EE position. (d) EE attitude. (e) Position velocity. (f) Attitude velocity. (g) Pose velocity error. (h) EE pose error.

Abstract

This paper presents a real-time kinematically synchronous planning method for the collaborative manipulation of a multi-arm robot with physical coupling, based on the self-organizing competitive neural network. The method defines sub-bases for the multi-arm configuration to obtain the Jacobian matrix of the common degrees of freedom, so that the sub-base motion converges along the direction of the total pose error of the end-effectors (EEs). This ensures the uniformity of the EE motion before the error converges completely and contributes to the collaborative manipulation of the multi-arms. An unsupervised competitive neural network model is proposed to adaptively increase the convergence ratio of the multi-arms via online learning with the inner-star rule. Combined with the defined sub-bases, the synchronous planning method rapidly achieves the synchronous movement of the multi-arm robot for collaborative manipulation. Theoretical analysis proves the stability of the multi-arm system via Lyapunov theory. Various simulations and experiments demonstrate that the proposed kinematically synchronous planning method is feasible and applicable to different symmetric and asymmetric cooperative manipulation tasks of a multi-arm system.

1. Introduction

The development of artificial intelligence technology has facilitated research on the autonomous manipulation of robot manipulators. Meanwhile, the growing demand for robots to replace human manual operation has made cooperative robot motion important in autonomous operations (e.g., manipulating a rudder, using pliers or a wrench, carrying large objects, or other similar manual tasks in daily life) [1]. Various multi-arm robots, such as Baxter [2], YUMI [3], Justin [4], and Robonaut [5], have been developed to meet this demand owing to their outstanding capability of cooperative manipulation in place of humans.
Multi-arm robots can not only complete the manipulation tasks of a single-arm robot (or robot manipulator) but also accomplish more complex cooperative manipulation tasks, owing to their large workspace, greater number of degrees of freedom (DoFs), and greater flexibility. The cooperative manipulation of robots has been studied extensively, and the methods can be divided into force-based [6,7,8,9] and kinematics-based strategies [10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29].
The force-based strategies include two types. The first relies on dynamics control, but the highly complicated nonlinear dynamic model makes it difficult to apply in robot systems. The second depends on position compensation control, which adjusts the position of the end-effector (EE) to maintain a certain interaction force and has been studied extensively. The hybrid control of force and position [6,7] and the impedance control [8,9] are the typical methods using position compensation control. The hybrid force/position control needs to accurately decompose the coordinate space of the EE into a position space and a force space. The impedance control changes the contact force and the EE position according to the impedance value, but the impedance parameters must be adjusted adaptively to ensure compliant interaction in cooperative manipulation. Although variable impedance has been proposed to improve the flexibility of interaction, the real impedance parameters are difficult to obtain. Intelligent control methods, such as neural networks [10], have been proposed to simplify the control schemes and reduce the difficulty of modeling the robot control system caused by strong coupling, time-varying behavior, and uncertainty. In addition, the force-based strategies rely on force/torque sensors and the corresponding control algorithms and are mainly used in non-redundant robot systems [11].
The kinematics-based strategies are simple and easy to combine with intelligent algorithms to realize autonomous robot manipulation, which has made them a research focus of cooperative manipulation. Thus, kinematics-based cooperative manipulation is mainly considered in this paper. Cooperative manipulation usually imposes strict requirements on the motion synchronization among the arms. The existing kinematics-based strategies for cooperative manipulation rely on motion planning, including leader–follower [12,13], cooperative-task space (CTS) [14,15,16,17,18,19], task-oriented [20,21], and intelligent approaches [11,22,23].
The leader–follower approach defines a leader arm and a follower arm for the robot system, and the follower plans its motion according to the movement of the leader [12,13]. In the CTS approach, robot arms without a leader–follower relationship contribute equally to symmetric cooperative manipulation tasks through defined relative and absolute motions. The extended CTS approach was further proposed to accomplish asymmetric behaviors and uncoordinated tasks [18,19]. The task-oriented approach uses the general formula of the defined cooperative task to transform the coordinate-system kinematics among arms and plan the motion of each single arm [20,21]. The intelligent approach regards the collaborative task of the arms as a constrained quadratic programming problem and utilizes an advanced neural network as the solver to control the arm motion [11,22]. The intelligent approaches can simplify the solving process of the inverse kinematics for dual arms, but the adjustment of the EE attitudes is neglected in cooperative tasks. In Ref. [23], the dual-arm path-planning problem was transformed into a multi-objective optimization problem, and a co-evolutionary method with a shared workspace was proposed as the solver. A motion planner based on the kinematic model of a dual-arm robot system was designed in [24] to ensure grasping stability and dexterity. The movement under the relative motion frame of the EEs was studied in [25] for the problem of two-hand assembly. An asymmetric task-planning method based on Lyapunov theory was proposed in [26] to solve the problems of designing the control law of absolute motion tasks and updating the distribution of relative tasks among arms. Fractional-order derivatives and uncertain fractional-order differential equations were utilized to predict and correct motion trends, and the rationality of the method was verified in different cases [27].
A state feedback robust controller based on local information was designed to ensure that the states of multiple robots converge to a common motion state [28]. An extended Kalman filter collaborative algorithm based on the error compensation was proposed to reduce the state estimation error of delayed filtering in multi robot systems [29].
The key focus of kinematics-based planning is ensuring motion synchronization in collaborative tasks. In the existing literature, the motion of one arm is always taken as a reference to plan the motion of the other arms. Moreover, the existing approaches are more suitable for multi-arm robots coupled through a common, fixed base than for those with a dynamically coupled base. Even when applied to a robot with a dynamically coupled base, the base is set to be stationary to ensure cooperation between the arms [24], owing to the uncoordinated movement between the EEs of redundant arms. Few studies have been devoted to the kinematics-based planning of redundant multi-arms with a dynamically coupled base that ensures cooperation between the arms.
Therefore, on the basis of previous studies [24,30,31], this paper proposes a novel kinematics-based synchronous planning method for the collaborative manipulation of redundant multi-arms with dynamic coupling, which involves inverse kinematics based on the sub-base method and the self-organizing competitive neural network. The following aspects differ from the existing literature.
A class of cooperative manipulation tasks of the multi-arm robot described by a generalized coordinate transformation matrix is summarized, including carrying, manipulating a rudder, using a wrench, manipulating pliers, multi-station manipulation, and other similar cooperative manipulations. The configuration branch division of the multi-arm robot based on the sub-base method is proposed to identify each branch of the robot, and the inverse kinematics is calculated based on the damped least-squares method. The multi-arm robot system can then converge synchronously along the decreasing direction of the total error. The self-organizing competitive neural network is proposed to promote motion synchronization between multiple arms; it regards the cooperative movement between arms as a competitive relationship of neurons instead of relying on a defined arm motion as a reference, as in existing research. The inner-star learning rule is used to change the neuron weights, and all neuron weight values are updated to adjust the motion of the multi-arms in every instance of competitive learning. The resulting multi-arm motion-planning method realizes the synchronization of the arms' motion states. The stability of the motion-planning algorithm is analyzed by using Lyapunov theory and the inner-star learning rule. The feasibility of the proposed method, the synchronization of the motion states, and its applicability are demonstrated with dual-arm and three-arm robots with a dynamic base in different symmetric and asymmetric cooperative manipulations.
The remainder of this paper is organized as follows. Section 2 presents a type of cooperative manipulation, the sub-base description, and the Jacobian matrix definition for the multi-arms with physical coupling. Section 3 discusses the real-time kinematically synchronous planning method of collaborative manipulation based on the self-organizing competitive neural network and the stability. Section 4 and Section 5 provide the simulation and experimental results in different cases, respectively. Section 6 presents the conclusion.

2. Cooperative Manipulation of Multi-Arms

2.1. A Type of Kinematically Cooperative Manipulation

As shown in Figure 1, this paper considers a type of cooperative manipulation for multi-arms, such as carrying, manipulating a rudder, using pliers, and multi-station manipulation. Figure 2 presents the common features of these kinematically cooperative manipulations, which can be summarized as follows:
(1) The coordinate system {Ti} in or out of the specified object is referenced for the object pose, ti, of each arm EE in real time. The object pose, ti, is defined as follows:
$$\begin{cases} \mathbf{Q}_i = \mathbf{Q}_{T_i} + \mathbf{R}_{t_i}^{T_i}\,\mathbf{D}_i \\ \mathbf{R}_{t_i}(\hat{\mathbf{f}}_{t_i},\,\psi_{t_i}) = \mathbf{R}_{T_i}^{O_0}\,\mathbf{R}_{t_i}^{T_i} \\ \mathbf{t}_i = (\mathbf{Q}_i,\,\hat{\mathbf{f}}_{t_i}\cdot\psi_{t_i})^{T}, \end{cases} \tag{1}$$
where Q_i is the desired position vector of the i-th EE in the coordinate system {O_0}; Q_{T_i} is the vector from {O_0} to the i-th coordinate system {T_i}; D_i is the position vector of t_i; R_{t_i}^{T_i} denotes the rotation matrix from the coordinate system of t_i to {T_i}; R_{T_i}^{O_0} refers to the rotation matrix from {T_i} to {O_0}; and R_{t_i}(f̂_{t_i}, ψ_{t_i}) defines the rotation operator about the axis direction f̂_{t_i} by ψ_{t_i} radians. t_i ∈ ℝ^{b×1}, with b = 3 for a planar robot and b = 6 for a spatial robot; i = 1, 2, …, N, where N is the number of arms (or EEs).
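The axis-angle attitude term in the pose definition above can be made concrete with Rodrigues' rotation formula. The following sketch (function names and the simple frame layout are illustrative, not from the paper) builds the rotation operator R(f̂, ψ) and assembles a 6-component pose t_i = (Q_i, f̂·ψ)^T, assuming the position is composed as Q_i = Q_{T_i} + R·D_i:

```python
import numpy as np

def rot_axis_angle(f_hat, psi):
    """Rotation operator about unit axis f_hat by psi radians (Rodrigues' formula)."""
    f = np.asarray(f_hat, dtype=float)
    f = f / np.linalg.norm(f)
    K = np.array([[0.0, -f[2], f[1]],
                  [f[2], 0.0, -f[0]],
                  [-f[1], f[0], 0.0]])   # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(psi) * K + (1.0 - np.cos(psi)) * (K @ K)

def object_pose(Q_Ti, R_ti_Ti, D_i, f_hat, psi):
    """Assemble t_i = (Q_i, f_hat * psi)^T: position plus axis-angle attitude.
    The composition Q_i = Q_Ti + R_ti_Ti @ D_i is an assumption of this sketch."""
    Q_i = np.asarray(Q_Ti, dtype=float) + np.asarray(R_ti_Ti) @ np.asarray(D_i, dtype=float)
    return np.concatenate([Q_i, psi * np.asarray(f_hat, dtype=float)])
```

For instance, rotating the x-axis about the z-axis by π/2 yields the y-axis, which is a quick sanity check of the operator.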
(2) When the arms perform collaborative operations, they form a closed loop, and there is a certain motion constraint relationship between the EEs. Thus, the motion states of the EEs from "1, 2, …, i" to "1′, 2′, …, i′" are kinematically consistent and synchronous. The descriptions of the movement states, such as the motion error and the motion rate of each EE, are mutual during the execution of these tasks. Such common motion states of cooperative manipulations are expressed as the problem in Equation (2), which is used as the judgment criterion for coordinated synchronous motion and is proved in Section 3.2.
$$\begin{cases} \lim_{t\to\infty} \mathbf{e}_i = \mathbf{t}_i - \mathbf{s}_i = \mathbf{0} & \text{(a)} \\ \lim_{t\to\infty} (\mathbf{v}_i - \dot{\mathbf{t}}_i) = \mathbf{0} & \text{(b)}, \end{cases} \tag{2}$$
where e_i denotes the pose error of the i-th EE, v_i defines the i-th EE velocity, s_i is the pose of the i-th EE, and t is the time. Equation (2a) refers to the pose errors of the EEs along the direction of error convergence; it not only makes the multi-arms reach the execution positions on the manipulated object at the same time but also minimizes the movement error between the arms in the process of cooperative manipulation. Equation (2b) guarantees that the movement speeds of the EEs are synchronous and that the manipulation speed of each EE is equal or close to the set or constraint value during the manipulation. Thus, the synchronization of the arms during cooperative manipulation can be achieved.
The i-th EE pose, velocity, and object pose are s_i, v_i, and t_i, respectively, with s_i, v_i, t_i ∈ ℝ^{b×1}. The i-th EE pose error and velocity can be calculated as follows:
$$\mathbf{e}_i(T) = \mathbf{t}_i(T) - \mathbf{s}_i(T), \tag{3}$$
$$\mathbf{v}_i(T) = \dot{\mathbf{s}}_i = \frac{\mathbf{s}_i(T) - \mathbf{s}_i(T-\Delta T)}{\Delta T}, \tag{4}$$
where T signifies the current time, (T − ΔT) is the last sampling time, ΔT denotes the sampling period, s = (s_1, s_2, …, s_N)^T, and t = (t_1, t_2, …, t_N)^T.
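Equations (3) and (4) are direct to implement; a minimal sketch (the function names are illustrative):

```python
import numpy as np

def ee_pose_error(t_i, s_i):
    """Equation (3): e_i(T) = t_i(T) - s_i(T)."""
    return np.asarray(t_i, dtype=float) - np.asarray(s_i, dtype=float)

def ee_velocity(s_now, s_prev, dT):
    """Equation (4): backward-difference EE velocity over one sampling period."""
    return (np.asarray(s_now, dtype=float) - np.asarray(s_prev, dtype=float)) / dT
```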

2.2. Multi-Arms Robot with Physical Coupling

This paper considers the general configuration of the multi-arm robot with physical coupling to achieve cooperative manipulation, as shown in Figure 3. Suppose that the multi-arms have r joints and each θ_j is a joint angle. The complete joint configuration of the multi-arms is defined as Θ = (θ_1, …, θ_r)^T, Θ ∈ ℝ^{r×1}. The pose mapping of the multi-arm robot from joint space to Cartesian space can be expressed as follows:
$$\mathbf{s} = f(\boldsymbol{\Theta}), \tag{5}$$
and s_i = f_i(Θ) for the i-th EE.
The corresponding inverse mapping is as follows:
$$\boldsymbol{\Theta} = f^{-1}(\mathbf{s}), \tag{6}$$
where f is a highly nonlinear operator whose inverse is difficult to solve. The iterative method via the Jacobian matrix is used to approximate the solution of the mapping problem.
The traditional Jacobian matrix, J, is the matrix of partial derivatives of the whole chain system with respect to the EE pose s. The Jacobian matrix provides a linear approximation of the inverse kinematic problem: it linearly relates the motion of the EE to instantaneous changes of the link translations and joint angles. The traditional Jacobian matrix, J, is a function of the joint angles, Θ, defined as follows:
$$J(\boldsymbol{\Theta})_{ij} = \left(\frac{\partial \mathbf{s}_i}{\partial \theta_j}\right)_{ij}, \tag{7}$$
where i = 1, …, N; j = 1, …, r; and J(Θ)_{ij} ∈ ℝ^{b×1}.
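When the forward map f is available but an analytic Jacobian is not, the entries ∂s_i/∂θ_j can be approximated by finite differences; a minimal sketch (the function name and step size eps are choices of this sketch, not the paper's):

```python
import numpy as np

def numerical_jacobian(f, theta, eps=1e-6):
    """Forward-difference Jacobian J[i, j] = d s_i / d theta_j of s = f(theta)."""
    theta = np.asarray(theta, dtype=float)
    s0 = np.asarray(f(theta), dtype=float)
    J = np.zeros((s0.size, theta.size))
    for j in range(theta.size):
        dth = np.zeros_like(theta)
        dth[j] = eps
        J[:, j] = (np.asarray(f(theta + dth), dtype=float) - s0) / eps
    return J
```

For example, for f(θ) = (θ_0², θ_0·θ_1) at θ = (1, 2), the result should be close to the analytic Jacobian [[2, 0], [2, 1]].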

2.3. Definition of Sub-Bases

The traditional Jacobian matrix can ensure that each EE converges along its own error-reduction direction. Unlike the traditional Jacobian matrix, J, the sub-bases are defined to make the EEs converge along the decreasing direction of the system's total error and to guarantee that the EEs converge at the same time.
This paper defines the nodes with multiple branches as the sub-bases for the multi-arm configuration, as shown in Figure 3. The Jacobian matrix is modified as follows:
$$J(\boldsymbol{\Theta}) = \operatorname{diag}\left(J_{1,1},\ J_{n,1},\ \ldots,\ J_{n,k},\ J_1,\ \ldots,\ J_N\right). \tag{8}$$
For the first sub-base pose, P_{1,1}, the corresponding element of the Jacobian matrix is as follows:
$$J_{1,1}(\boldsymbol{\Theta})_j = \sum_{i=1}^{N}\left(\frac{\partial \mathbf{s}_i}{\partial \theta_j}\right)_j, \tag{9}$$
where J_{1,1}(Θ)_j ∈ ℝ^{b×1}, J_{1,1}(Θ) ∈ ℝ^{b×M_{0,1}}, and θ_j belongs to the chain P_0–P_{1,1} with M_{0,1} DoFs.
For the n,k-th sub-base pose, P_{n,k}, the corresponding element of the Jacobian matrix is as follows:
$$J_{n,k}(\boldsymbol{\Theta})_j = \left[\sum_{i=N-N_k+1}^{N}\frac{\partial \mathbf{s}_i}{\partial \theta_j}\right]_j = \left(\frac{\partial \mathbf{s}_{N-N_k+1}}{\partial \theta_j} + \cdots + \frac{\partial \mathbf{s}_N}{\partial \theta_j}\right)_j, \tag{10}$$
where J_{n,k}(Θ)_j ∈ ℝ^{b×1}, J_{n,k}(Θ) ∈ ℝ^{b×M_{n,k}}, and θ_j belongs to the chain P_{1,1}–P_{n,k} with M_{n,k} DoFs. By analogy, the Jacobian matrices corresponding to the other sub-bases can be obtained.
For J_1, …, J_N, which have no common degrees of freedom, the corresponding elements can be obtained according to (7).
$$N = N_1 + N_2 + \cdots + N_k, \tag{11}$$
$$r = M_{0,1} + M_{n,1} + \cdots + M_{n,k} + M_1 + \cdots + M_N. \tag{12}$$
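Numerically, the sub-base Jacobian above amounts to summing per-EE columns for shared joints and stacking the resulting blocks diagonally; a minimal numpy sketch (block shapes and function names are illustrative):

```python
import numpy as np

def subbase_column(per_ee_columns):
    """A sub-base column for a shared joint theta_j: the sum of the columns
    d s_i / d theta_j over the EEs carried by that joint."""
    return np.sum(np.asarray(per_ee_columns, dtype=float), axis=0)

def block_diag(blocks):
    """Stack Jacobian blocks as diag(J_{1,1}, J_{n,1}, ..., J_{n,k}, J_1, ..., J_N)."""
    rows = sum(B.shape[0] for B in blocks)
    cols = sum(B.shape[1] for B in blocks)
    J = np.zeros((rows, cols))
    r = c = 0
    for B in blocks:
        J[r:r + B.shape[0], c:c + B.shape[1]] = B
        r += B.shape[0]
        c += B.shape[1]
    return J
```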
The velocities of the first sub-base and the n,k-th sub-base are calculated as follows:
$$\dot{\mathbf{P}}_{1,1} = \eta_1 \cdot \frac{1}{N} \cdot \sum_{i=1}^{N} k_p \cdot \frac{d(\mathbf{t}_i - \mathbf{s}_i)}{dt}, \tag{13}$$
$$\dot{\mathbf{P}}_{n,k} = \eta_{n,k} \cdot \frac{1}{N_k} \cdot \sum_{i=N-N_k+1}^{N} k_p \cdot \frac{d(\mathbf{t}_i - \mathbf{s}_i)}{dt}, \tag{14}$$
where η_1 and η_{n,k} are gain coefficients. N and N_k are fixed values related to the configuration of the multi-arms.
The inverse kinematics is as follows:
$$\dot{\boldsymbol{\Theta}} = J^*(\boldsymbol{\Theta})\dot{\mathbf{S}} = J^*(\boldsymbol{\Theta})\left(\dot{\mathbf{P}}_{1,1}\ \ \dot{\mathbf{P}}_{n,1}\ \ \cdots\ \ \dot{\mathbf{P}}_{n,k}\ \ \dot{\mathbf{s}}\right)^{T}, \tag{15}$$
where J* denotes the pseudo-inverse of the Jacobian matrix, J(Θ), based on the damped least-squares method: J* = J^T(JJ^T + λI)^{-1}, with J(Θ) ∈ ℝ^{b(k+N+1)×r} and J* ∈ ℝ^{r×b(k+N+1)}. λ (λ > 0) represents the damping factor, which handles the ill-conditioned J in the neighborhood of singular configurations for redundant manipulators and guarantees the minimum possible EE deviation at all configurations. I is the identity matrix of dimension b(k+N+1) × b(k+N+1). In accordance with the traditional fixed-proportion-based method [31] for the real-time tracking of a given object pose, ṫ, the EE velocities are planned as follows:
$$\dot{\mathbf{s}} = \dot{\mathbf{t}} + k_p \cdot (\mathbf{t} - \mathbf{s}), \tag{16}$$
where k p is the gain coefficient.
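The damped least-squares pseudo-inverse J* = J^T(JJ^T + λI)^{-1} and the velocity plan ṡ = ṫ + k_p·(t − s) can be sketched as follows (the damping value and function names are illustrative):

```python
import numpy as np

def dls_pinv(J, lam=0.01):
    """Damped least-squares pseudo-inverse J* = J^T (J J^T + lambda*I)^{-1};
    well behaved near singular configurations for lam > 0."""
    JJt = J @ J.T
    return J.T @ np.linalg.inv(JJt + lam * np.eye(JJt.shape[0]))

def planned_velocity(t_dot, t, s, kp):
    """Planned EE velocity s_dot = t_dot + kp * (t - s)."""
    return np.asarray(t_dot, dtype=float) + kp * (np.asarray(t, dtype=float) - np.asarray(s, dtype=float))
```

As λ → 0 and away from singularities, J* approaches the ordinary pseudo-inverse; larger λ trades tracking accuracy for robustness near singular configurations.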

2.4. Iteration for Multi-Arms Robot Motion

The iterative method is utilized to achieve the real-time movement of the multi-arms by updating the joint angles, Θ, according to (17):
$$\boldsymbol{\Theta}(T) = \boldsymbol{\Theta}(T - \Delta T) + \Delta\boldsymbol{\Theta}, \tag{17}$$
where ΔΘ, deduced from (15), becomes
$$\Delta\boldsymbol{\Theta} \approx J^*(\boldsymbol{\Theta})\Delta\mathbf{S} = J^*(\boldsymbol{\Theta})\left(\Delta\mathbf{P}_{1,1}\ \ \Delta\mathbf{P}_{n,1}\ \ \cdots\ \ \Delta\mathbf{P}_{n,k}\ \ \Delta\mathbf{s}\right)^{T} = J^*(\boldsymbol{\Theta})\begin{bmatrix} \eta_1 \cdot \frac{1}{N} \cdot \sum_{i=1}^{N} \mu\,(\mathbf{t}_i - \mathbf{s}_i) \\ \eta_{n,1} \cdot \frac{1}{N_1} \cdot \sum_{i=1}^{N_1} \mu\,(\mathbf{t}_i - \mathbf{s}_i) \\ \vdots \\ \eta_{n,k} \cdot \frac{1}{N_k} \cdot \sum_{i=N-N_k+1}^{N} \mu\,(\mathbf{t}_i - \mathbf{s}_i) \\ \Delta\mathbf{t} + \mu\,(\mathbf{t} - \mathbf{s}) \end{bmatrix}, \tag{18}$$
where μ = k_p·ΔT and μ < 1. Moreover, Δt represents the change of the object pose over a sampling interval, ΔT.
Then, according to the sub-base method, the movement of the multi-arm robot can be achieved. The sub-base motion facilitates synchronous convergence, and the synchronization is more obvious when there are enough common DoFs. The corresponding verifications are presented in Section 4 and Section 5.2.
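Specialized to a single chain, the iteration of Equation (17) with the update ΔΘ ≈ J*(Θ)·μ·(t − s) is the familiar damped-least-squares IK loop. The sketch below uses a planar two-link arm as a stand-in for one branch (link lengths, gains, and iteration count are illustrative, not from the paper):

```python
import numpy as np

def fk_2link(theta, l1=1.0, l2=1.0):
    """Planar 2-link forward kinematics, an illustrative stand-in for s = f(Theta)."""
    return np.array([l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1]),
                     l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])])

def jac_2link(theta, l1=1.0, l2=1.0):
    """Analytic Jacobian of the 2-link forward map."""
    s1, s12 = np.sin(theta[0]), np.sin(theta[0] + theta[1])
    c1, c12 = np.cos(theta[0]), np.cos(theta[0] + theta[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def iterate_ik(theta, t_target, mu=0.2, lam=1e-3, iters=300):
    """Theta(T) = Theta(T - dT) + J*(Theta) * mu * (t - s), with a DLS pseudo-inverse."""
    theta = np.asarray(theta, dtype=float)
    for _ in range(iters):
        J = jac_2link(theta)
        Jstar = J.T @ np.linalg.inv(J @ J.T + lam * np.eye(2))
        theta = theta + Jstar @ (mu * (np.asarray(t_target) - fk_2link(theta)))
    return theta
```

Each pass shrinks the Cartesian error by roughly a factor of (1 − μ), so a few hundred iterations drive it far below typical sensing resolution.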

3. Kinematically Synchronous Planning

3.1. Synchronous Planning Using Self-Organizing Competitive Neural Network

The DoFs of the sub-base are not always sufficient to ensure that the system converges along the decreasing direction of the EE pose error, which results in asynchronous EE motion. Thus, a self-organizing competitive neural network based on the inner-star rule is proposed to adjust the synchronism of the multi-arm movement. Equation (2) indicates that the error e_i and the velocity v_i tend to stable values. In accordance with the principle of the self-organizing competitive neural network, the kinematically synchronous planning for the multi-arm robot is designed as shown in Figure 4.
The learning rule of the inner star model defines the weight updating as follows:
$$\Delta w_i = \eta\,(P_i - w_i)\,Y_i, \tag{19}$$
where η denotes the learning rate; P_i is the i-th input element of the neuron, namely the minimum pose-velocity error norm min_i(‖v_i(T) − ṫ_i‖); w_i is the weight value; i = 1, 2, …, N, where N is the number of arms (or EEs); and Y_i is the value of the output neuron, defined as
$$Y_i = \begin{cases} 1, & \text{if } \left\| P_i' - P_i \right\| > \varepsilon \\ 0, & \text{otherwise,} \end{cases} \tag{20}$$
where
$$\varepsilon = \frac{\left\|\mathbf{P}\right\|}{N}. \tag{21}$$
The input element, P i , and the weight value, w i , are defined as
$$P_i = \tilde{v} = \min\left(\left\|\mathbf{v}_1(T)-\dot{\mathbf{t}}_1\right\|,\ \left\|\mathbf{v}_2(T)-\dot{\mathbf{t}}_2\right\|,\ \ldots,\ \left\|\mathbf{v}_N(T)-\dot{\mathbf{t}}_N\right\|\right), \tag{22}$$
$$w_i = \left\|\mathbf{v}_i(T)-\dot{\mathbf{t}}_i\right\|. \tag{23}$$
The input vector P is constituted by P i and defined as
$$\mathbf{P} = (P_1,\ P_2,\ \ldots,\ P_N)^T, \quad \mathbf{P}\in\mathbb{R}^{N\times 1}. \tag{24}$$
Since the minimum ṽ is used as the input and any EE may become the one with the minimum ṽ, the proposed method no longer uses one manipulator as a reference, as in the existing literature. In each period, the new input vector, P′, of the neural network is defined as
$$\mathbf{P}' = (P_1',\ P_2',\ \ldots,\ P_N')^T, \tag{25}$$
where
$$P_i' = \left\|\mathbf{v}_i(T)-\dot{\mathbf{t}}_i\right\|. \tag{26}$$
To make the system reach the synchronized state quickly, the winning weight value receives a larger reward in each cycle. The new weight increment, Δŵ_i, is designed as
$$\Delta\hat{w}_i = \Delta w_i \cdot \tanh\!\left(c_i\,\frac{\left\|\mathbf{v}_i(T) - \dot{\mathbf{t}}_i\right\|}{\sum_{i=1}^{N}\left\|\mathbf{v}_i(T) - \dot{\mathbf{t}}_i\right\| + \sigma_i}\right), \tag{27}$$
where σ_i is a small positive real number, and c_i denotes a positive real number that adjusts the updating slope of the neuron weight and contributes to the synchronization of the EE motion. The convergence response time is shorter when the parameter c_i is larger.
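The inner-star rule Δw_i = η(P_i − w_i)Y_i and the tanh scaling of Equation (27) can be sketched as follows; the constants eta, c, and sigma are illustrative values, and v_err_norms stands for the norms ‖v_i(T) − ṫ_i‖:

```python
import numpy as np

def inner_star_update(P_i, w_i, Y_i, eta=0.1):
    """Inner-star rule: delta_w_i = eta * (P_i - w_i) * Y_i."""
    return eta * (P_i - w_i) * Y_i

def scaled_update(dw_i, v_err_norms, i, c=3.0, sigma=1e-6):
    """Equation (27): scale delta_w_i by tanh of the i-th share of the total
    velocity-error norm, so larger deviations from the minimum are corrected faster."""
    share = v_err_norms[i] / (np.sum(v_err_norms) + sigma)
    return dw_i * np.tanh(c * share)
```

A neuron with Y_i = 0 (the winner, already at the minimum) leaves its weight unchanged, while the others are pulled toward the minimum velocity-error norm at a rate set by their share of the total error.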
The i-th EE’s planned related velocity is deduced according to the iterative method as follows:
$$\hat{\mathbf{v}}_i(T) = \mu\,(\mathbf{t}_i-\mathbf{s}_i)\cdot\left[1-\frac{\Delta\hat{w}_i}{\left\|\mathbf{v}_i(T)-\dot{\mathbf{t}}_i\right\|+\delta_i}\right]. \tag{28}$$
Then, the planned velocity for EEs is
$$\bar{\mathbf{v}}_i(T) = \hat{\mathbf{v}}_i(T) + \Delta\mathbf{t}_i. \tag{29}$$
Thus, (18) becomes
$$\Delta\boldsymbol{\Theta} \approx J^*(\boldsymbol{\Theta})\Delta\mathbf{S} = J^*(\boldsymbol{\Theta})\begin{bmatrix} \eta_1 \cdot \frac{1}{N} \cdot \sum_{i=1}^{N}\hat{\mathbf{v}}_i(T) \\ \eta_{n,1} \cdot \frac{1}{N_1} \cdot \sum_{i=1}^{N_1}\hat{\mathbf{v}}_i(T) \\ \vdots \\ \eta_{n,k} \cdot \frac{1}{N_k} \cdot \sum_{i=N-N_k+1}^{N}\hat{\mathbf{v}}_i(T) \\ \bar{\mathbf{v}}(T) \end{bmatrix}, \tag{30}$$
where v̄(T) = [v̄_1(T), v̄_2(T), …, v̄_N(T)]^T. The learning rates η, η_1, and η_{n,k} are related to the step size of each iteration. The convergence response time is shorter when the learning rate is larger.
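The planned velocities above then follow directly; a minimal sketch, where the minus sign inside the bracket of v̂_i is inferred from the flattened source and delta and mu are illustrative small constants:

```python
import numpy as np

def planned_related_velocity(t_i, s_i, dw_hat_i, v_err_norm_i, mu=0.2, delta=1e-6):
    """v_hat_i = mu*(t_i - s_i) * [1 - dw_hat_i / (||v_i - t_dot_i|| + delta)]."""
    return mu * (np.asarray(t_i, dtype=float) - np.asarray(s_i, dtype=float)) \
        * (1.0 - dw_hat_i / (v_err_norm_i + delta))

def planned_ee_velocity(v_hat_i, dt_i):
    """v_bar_i = v_hat_i + delta_t_i: add the object's pose change over the cycle."""
    return v_hat_i + np.asarray(dt_i, dtype=float)
```

When the weight increment Δŵ_i is zero (the winning arm), the plan reduces to the fixed-proportion step μ(t_i − s_i); a positive increment slows the arm toward the common motion state.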

3.2. Stability Analysis

Suppose that the learning rates, i.e., η, η_1, η_{n,1}, …, η_{n,k}, are small. Based on the inner-star rule, the weight value associated with the minimum EE pose error, e_min, is never updated, and the change of the minimum pose error in a cycle is Δe_min(T). Thus, for the EE with the minimum pose error, the Lyapunov function is defined as follows:
$$V(T) = \frac{1}{2}\left\|\mathbf{e}_{\min}(T)\right\|^2, \tag{31}$$
$$V(T+\Delta T) = \frac{1}{2}\left\|\mathbf{e}_{\min}(T+\Delta T)\right\|^2. \tag{32}$$
The change of the pose error is
$$\mathbf{e}_{\min}(T+\Delta T) = \mathbf{e}_{\min}(T) - \Delta\mathbf{e}_{\min}(T). \tag{33}$$
For the EE with the minimum pose error, e_min(T), the planned iterative step based on the fixed-proportion method [31] is μ·e_min(T) in each control cycle, as shown in Figure 5. However, the actual motion of the EE has a small deviation, φ, as follows:
$$\Delta\mathbf{e}_{\min}(T) = \mu\,\mathbf{e}_{\min}(T) + \boldsymbol{\varphi}, \tag{34}$$
where 0 < μ < 1.
Due to the high-precision motion of the advanced manipulator and the small learning rate in the proposed method, the deviation φ usually lies in a very small range. The norm of the deviation φ is less than μ‖e_min(T)‖, i.e., ‖φ‖ < μ‖e_min(T)‖, which ensures that the EE can move along the planned path. Therefore,
$$\begin{aligned} \Delta V(T) &= \frac{1}{2}\left\|\mathbf{e}_{\min}(T+\Delta T)\right\|^2 - \frac{1}{2}\left\|\mathbf{e}_{\min}(T)\right\|^2 \\ &= \frac{1}{2}\left\{\left\|\mathbf{e}_{\min}(T) - \mu\,\mathbf{e}_{\min}(T) - \boldsymbol{\varphi}\right\|^2 - \left\|\mathbf{e}_{\min}(T)\right\|^2\right\} \\ &= \frac{1}{2}\left\{\left\|(1-\mu)\,\mathbf{e}_{\min}(T) - \boldsymbol{\varphi}\right\|^2 - \left\|\mathbf{e}_{\min}(T)\right\|^2\right\} \\ &\le \frac{1}{2}\left\{\left[(1-\mu)\left\|\mathbf{e}_{\min}(T)\right\| + \left\|\boldsymbol{\varphi}\right\|\right]^2 - \left\|\mathbf{e}_{\min}(T)\right\|^2\right\} \\ &< \frac{1}{2}\left\{\left[(1-\mu)\left\|\mathbf{e}_{\min}(T)\right\| + \mu\left\|\mathbf{e}_{\min}(T)\right\|\right]^2 - \left\|\mathbf{e}_{\min}(T)\right\|^2\right\} = 0. \end{aligned} \tag{35}$$
Thus,
$$\lim_{T\to\infty}\left\|\mathbf{e}_{\min}(T)\right\| = 0, \tag{36}$$
and the minimum pose error, e_min, is convergent.
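The inequality above can also be checked numerically: for any error vector and any deviation with ‖φ‖ < μ‖e_min‖, the update Δe_min = μ·e_min + φ strictly decreases V. A synthetic check (all values are randomly generated, not experimental data):

```python
import numpy as np

# For ||phi|| < mu*||e||, e_next = e - (mu*e + phi) satisfies
# ||e_next|| <= (1 - mu)||e|| + ||phi|| < ||e||, so V = 0.5*||e||^2 decreases.
rng = np.random.default_rng(0)
mu = 0.3
dV_samples = []
for _ in range(1000):
    e = rng.normal(size=3)
    phi = rng.normal(size=3)
    phi *= 0.9 * mu * np.linalg.norm(e) / np.linalg.norm(phi)  # enforce ||phi|| = 0.9*mu*||e||
    e_next = e - (mu * e + phi)
    dV_samples.append(0.5 * np.linalg.norm(e_next) ** 2 - 0.5 * np.linalg.norm(e) ** 2)
```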
Based on Equations (3) and (4), the derivative ė_min(T) is
$$\begin{aligned} \dot{\mathbf{e}}_{\min}(T) &= \dot{\mathbf{t}}_{\min}(T) - \mathbf{v}_{\min}(T) = \dot{\mathbf{t}}_{\min}(T) - \dot{\mathbf{s}}_{\min}(T) \\ &= \frac{\mathbf{t}_{\min}(T) - \mathbf{t}_{\min}(T-\Delta T)}{\Delta T} - \frac{\mathbf{s}_{\min}(T) - \mathbf{s}_{\min}(T-\Delta T)}{\Delta T} \\ &= \frac{\mathbf{e}_{\min}(T) - \mathbf{e}_{\min}(T-\Delta T)}{\Delta T} = -\frac{1}{\Delta T}\,\Delta\mathbf{e}_{\min}(T-\Delta T) \\ &= -\frac{1}{\Delta T}\left[\mu\,\mathbf{e}_{\min}(T-\Delta T) + \boldsymbol{\varphi}\right]. \end{aligned} \tag{37}$$
Since ‖φ‖ < μ‖e_min(T)‖ and lim_{T→∞}‖e_min(T)‖ = 0,
$$\lim_{T\to\infty}\dot{\mathbf{e}}_{\min}(T) = \mathbf{0}. \tag{38}$$
According to the rule of inner star model, the input and the weight ultimately become equal, as follows:
$$P_i = w_i. \tag{39}$$
That is,
$$\tilde{v} = \left\|\dot{\mathbf{e}}_{\min}(T)\right\| = \left\|\mathbf{v}_i(T)-\dot{\mathbf{t}}_i\right\| = 0, \tag{40}$$
$$\Delta\hat{w}_i = \Delta w_i = 0. \tag{41}$$
Then,
$$\lim_{T\to\infty}\mathbf{e}_i(T) = \lim_{T\to\infty}\mathbf{e}_{\min}(T) = \mathbf{0}, \tag{42}$$
$$\lim_{T\to\infty}\dot{\mathbf{e}}_i(T) = \lim_{T\to\infty}\dot{\mathbf{e}}_{\min}(T) = \mathbf{0}. \tag{43}$$
Therefore, the proposed planning method based on the self-organizing competitive neural network is convergent, and Equation (2) is proved to be valid to achieve motion synchronization.

4. Simulation

The three-arm robot with 15 DoFs is utilized to provide contrast simulations between the inverse kinematics based on the sub-bases and the traditional method [31], as shown in Figure 6. There are three common DoFs, which are enough for all EEs to make the pose errors converge along the decreasing direction of the total error simultaneously. The synchronization performance of the EE movements is compared by tracking static designated locations on the specified carried object. The robot configuration parameters, initial parameters, and kinematics parameters are presented in Table 1, Table 2 and Table 3. The poses of the static object are t_1 = (2.49 m, 2.79 m, 1.536 rad)^T, t_2 = (5.0 m, −0.70 m, 1.536 rad)^T, and t_3 = (4.49 m, 1.50 m, 1.536 rad)^T. Moreover, t = (t_1, t_2, t_3)^T and ṫ = 0. The simulation results are presented in Figure 7 and Figure 8.
Both methods can make the EEs arrive at the designated locations on the specified object, as illustrated in Figure 7a and Figure 8a. The joint angles are shown in Figure 7b and Figure 8b. Since ṫ = 0, ‖Δe_i(T)‖ equals ‖v_i(T)‖·ΔT, and the velocity curves of the EE pose are similar to those of the EE pose error, as shown in Figure 7c–f and Figure 8c–f. Because of the sub-base motion, the convergence of the EE pose error is much faster than that based on the traditional method. Furthermore, the EE pose velocities form uniform motion states before complete convergence in Figure 8c,d. Thus, the proposed sub-base method is conducive to the synchronization from the initial motion state to the cooperative motion state and improves the carrying efficiency of the multi-arm robot.

5. Experimental Verification

5.1. Experimental Setup

The two-arm robot with 13 DoFs is used to verify the proposed synchronous planning for collaborative manipulation; one DoF is shared at the base joint, as shown in Figure 9. The D-H parameters of the two-arm robot are presented in Table 4. In Figure 10, the principle of the experimental setup can be briefly described as follows.
(1) The global depth camera transfers the observed frames of depth and color images to the computer over a universal serial bus (USB) in real time. A robot operating system (ROS) node runs on the computer and extracts the position information of the object from each frame. The vision-processing procedures in the computer are developed based on morphology, using the Open Source Computer Vision Library and the camera Software Development Kit.
(2) The joint feedback data, Θ, of the robot are transmitted to the TMS320F28335 controller through the Controller Area Network (CAN) bus. At the same time, the controller transfers the received joint data, Θ, to the ROS nodes through the Serial Communication Interface (SCI) bus in real time.
(3) The ROS nodes receive the data (x_obj, Θ) and execute the forward and inverse kinematics calculations and the motion-planning algorithm. The real-time control command data, Θ_d and Θ̇_d, are sent to the robot joint actuators through the RS485 bus to control the motion. The baud rate of the CAN bus, USB serial bus, SCI bus, and RS485 bus is set to 1 MHz. The control period of the whole system, containing the proposed kinematically synchronous planning method, is less than 10 ms.

5.2. Synchronous Planning Experiments

(1)
Collaboration Carrying
A collaborative carrying task was performed to verify the feasibility of the proposed planning method, as illustrated in Figure 11. The parameters for carrying are listed in Table 5 and Table 6. The carried object undergoes both translational and rotational motion. The rotation center position is (−1.5 mm, −342.7 mm, −0.85 mm)^T. The EEs of the left and right arms first reach their initial locations, as shown in Figure 11a. The tracked initial locations on the carried object are t_1^init = (115.841, −342.7, −0.85, 1.2092, −1.2092, −1.2092)^T and t_2^init = (−118.841, −342.7, −0.85, 1.2092, −1.2092, −1.2092)^T. At 6.75 s, the two arms begin to carry the object, as shown in Figure 11b. In each cycle period, ΔT, the rotation angle about the rotation center is 0.0005 rad and the translation of the rotation center is 0.15 mm. During the collaboration process, the tracked locations on the carried object move back and forth with a 5 s cycle, and the tracked pose trajectories over the initial 2.25 s (from 6.75 s to 9 s) are as follows:
$$
\mathbf{t}_1 = \mathbf{t}_1^{\mathrm{init}} -
\begin{pmatrix}
117.341\left[\sin\left(0.0005(T-6.75)+0.0005\Delta T\right)-\sin\left(0.0005(T-6.75)\right)\right]+0.15(T-6.75)\\
117.341\left[\cos\left(0.0005(T-6.75)+0.0005\Delta T\right)-\cos\left(0.0005(T-6.75)\right)\right]\\
0.15(T-6.75)\\
\hat{\mathbf{f}}_1\cdot\psi_1
\end{pmatrix}
$$
$$
\mathbf{t}_2 = \mathbf{t}_2^{\mathrm{init}} +
\begin{pmatrix}
117.341\left[\sin\left(0.0005(T-6.75)+0.0005\Delta T\right)-\sin\left(0.0005(T-6.75)\right)\right]+0.15(T-6.75)\\
117.341\left[\cos\left(0.0005(T-6.75)+0.0005\Delta T\right)-\cos\left(0.0005(T-6.75)\right)\right]\\
0.15(T-6.75)\\
\hat{\mathbf{f}}_2\cdot\psi_2
\end{pmatrix}
$$
where f̂_1, ψ_1, f̂_2, and ψ_2 can be obtained according to the following equation:
$$
R_1(\hat{\mathbf{f}}_1,\psi_1)=R_2(\hat{\mathbf{f}}_2,\psi_2)=
\begin{bmatrix}
\cos\!\left(\tfrac{\pi}{2}-0.0005(T-6.75)\right) & -\sin\!\left(\tfrac{\pi}{2}-0.0005(T-6.75)\right) & 0\\
\sin\!\left(\tfrac{\pi}{2}-0.0005(T-6.75)\right) & \cos\!\left(\tfrac{\pi}{2}-0.0005(T-6.75)\right) & 0\\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0\\
0 & 0 & -1\\
0 & 1 & 0
\end{bmatrix}
$$
All the parameters in Table 5 and Table 6, the rotation center position, t_1^init, and t_2^init are used in Figure 4. The inverse kinematics based on Equation (30) yields the joint angles, Θ. The change of the weight value, Δŵ_i, of the self-organizing competitive neural network is obtained according to Equation (27). The learning rates, i.e., η, η_1, η_{n,1}, …, η_{n,k}, are used to calculate the carrying step size of the EE motion in Cartesian space. Through the continuous learning and competition of the proposed planning method, the position velocities of the EEs gradually converge to a consistent movement. Figure 12a,b show the motion paths of the EEs and the joint trajectories, respectively. Figure 12c,d show the periodic pose changes in Cartesian space, where the Y-axis position, the Z-axis position, and the attitude of the EEs remain the same. The position and attitude velocities reach synchronous motion states as the position-velocity and attitude-velocity errors decrease to 0, as illustrated in Figure 12e–h.
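The position terms of the tracked carrying trajectory combine a per-cycle rotational increment of a point at radius 117.341 mm about the rotation center with the cumulative translation term 0.15(T − 6.75). A small sketch of these terms follows; the function name, defaults, and axis assignment are illustrative, not a statement of the authors' exact frames.

```python
import numpy as np

def carrying_offset(T, t0=6.75, r=117.341, w=0.0005, v=0.15, dT=0.05):
    """Rotational increments and cumulative translation at time T (T >= t0)."""
    tau = T - t0
    # per-cycle advance of the tracked point along the rotation circle
    rot_x = r * (np.sin(w * tau + w * dT) - np.sin(w * tau))
    rot_y = r * (np.cos(w * tau + w * dT) - np.cos(w * tau))
    # cumulative translation of the rotation center, 0.15 mm per cycle period
    trans = v * tau
    return rot_x, rot_y, trans
```

For small angles the rotational increment magnitude is approximately r·w·ΔT per cycle, which is the step size the planner must track.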
(2)
Manipulating the pliers
Figure 13 shows the two arms manipulating a pair of pliers. The parameters are listed in Table 5 and Table 6. The rotation center position is (−50.0 mm, −347.55 mm, 0.85 mm)^T. The length of the pliers' handle is 135 mm. The tracked initial locations on the pliers for the EEs of the left and right arms are t_1^init = (86.841, −212.7, 0.85, 1.2092, −1.2092, −1.2092)^T and t_2^init = (−186.841, −212.7, 0.85, 1.2092, −1.2092, −1.2092)^T. At 6.25 s, the two arms begin to manipulate the pliers, as shown in Figure 13b. During the collaboration process, the tracked locations on the pliers move back and forth with a 3 s cycle, and the tracked pose trajectories over the initial 1.5 s (from 6.25 s to 7.75 s) are as follows:
$$
\mathbf{t}_1 = \mathbf{t}_1^{\mathrm{init}} +
\begin{pmatrix}
135\left[\sin\left(0.0005(T-6.25)+0.0005\Delta T\right)-\sin\left(0.0005(T-6.25)\right)\right]\\
135\left[\cos\left(0.0005(T-6.25)+0.0005\Delta T\right)-\cos\left(0.0005(T-6.25)\right)\right]\\
0\\
\hat{\mathbf{f}}_1\cdot\psi_1
\end{pmatrix}
$$
$$
\mathbf{t}_2 = \mathbf{t}_2^{\mathrm{init}} +
\begin{pmatrix}
135\left[\sin\left(0.0005(T-6.25)-0.0005\Delta T\right)-\sin\left(0.0005(T-6.25)\right)\right]\\
135\left[\cos\left(0.0005(T-6.25)+0.0005\Delta T\right)-\cos\left(0.0005(T-6.25)\right)\right]\\
0\\
\hat{\mathbf{f}}_2\cdot\psi_2
\end{pmatrix}
$$
where f ^ 1 , ψ 1 , f ^ 2 , and ψ 2 can be obtained according to the following equations:
$$
R_1(\hat{\mathbf{f}}_1,\psi_1)=
\begin{bmatrix}
\cos\!\left(\tfrac{\pi}{2}-0.0005(T-6.25)\right) & -\sin\!\left(\tfrac{\pi}{2}-0.0005(T-6.25)\right) & 0\\
\sin\!\left(\tfrac{\pi}{2}-0.0005(T-6.25)\right) & \cos\!\left(\tfrac{\pi}{2}-0.0005(T-6.25)\right) & 0\\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0\\
0 & 0 & -1\\
0 & 1 & 0
\end{bmatrix}
$$
$$
R_2(\hat{\mathbf{f}}_2,\psi_2)=
\begin{bmatrix}
\cos\!\left(\tfrac{\pi}{2}+0.0005(T-6.25)\right) & -\sin\!\left(\tfrac{\pi}{2}+0.0005(T-6.25)\right) & 0\\
\sin\!\left(\tfrac{\pi}{2}+0.0005(T-6.25)\right) & \cos\!\left(\tfrac{\pi}{2}+0.0005(T-6.25)\right) & 0\\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0\\
0 & 0 & -1\\
0 & 1 & 0
\end{bmatrix}
$$
The inverse kinematics based on Equation (30) yields the joint angles, Θ. The change of the weight value, Δŵ_i, is obtained according to Equation (27) and used to adjust the EE motions according to Equation (28). The learning rates, i.e., η, η_1, η_{n,1}, …, η_{n,k}, are used to calculate the step size for manipulating the pliers in Cartesian space. The proposed planning method makes the position velocities of the EEs gradually converge to a consistent movement. Figure 14a,b show the motion paths of the EEs and the joint trajectories, respectively. Figure 14c,d show the periodic pose changes in Cartesian space, where the Y-axis position, the Z-axis position, and the attitude of the EEs remain the same. The position and attitude velocities of the EEs are almost identical, with only a small residual error once the synchronous motion states are reached, as illustrated in Figure 14e–h.
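The attitude entries f̂·ψ correspond to an axis–angle representation of the rotation matrices above. A standard recovery of the unit axis and angle from a rotation matrix is sketched below; this generic routine is an assumption, not necessarily the authors' implementation, and it is valid for 0 < ψ < π.

```python
import numpy as np

def axis_angle(R, eps=1e-9):
    """Return the unit axis f and angle psi of rotation matrix R (psi < pi)."""
    cos_psi = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    psi = np.arccos(cos_psi)
    if psi < eps:                        # identity rotation: axis is arbitrary
        return np.array([1.0, 0.0, 0.0]), 0.0
    # off-diagonal antisymmetric part encodes the axis direction
    f = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(psi))
    return f, psi
```

For a pure rotation about the z-axis, for example, the recovered axis is (0, 0, 1) and the recovered angle is the rotation angle itself.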
(3)
Manipulating a rudder
Manipulating a rudder is illustrated in Figure 15. The parameters are listed in Table 5 and Table 6. The rotation center position is (0.0 mm, −342.7 mm, −0.85 mm)^T. The radius of the rudder is 136.84 mm. The tracked initial locations on the rudder for the EEs of the left and right arms are t_1^init = (136.84, −342.7, −0.85, 1.2092, −1.2092, −1.2092)^T and t_2^init = (−136.84, −342.7, 0.85, 1.2092, −1.2092, −1.2092)^T. At 6.75 s, the two arms begin to manipulate the rudder, as shown in Figure 16b. During the collaboration process, the tracked locations on the rudder move back and forth with a 2.5 s cycle, and the tracked pose trajectories over the initial 1.25 s (from 6.75 s to 8 s) are as follows:
$$
\mathbf{t}_1 = \mathbf{t}_1^{\mathrm{init}} -
\begin{pmatrix}
136.841\left[\sin\left(0.0005(T-6.75)+0.0005\Delta T\right)-\sin\left(0.0005(T-6.75)\right)\right]\\
136.841\left[\cos\left(0.0005(T-6.75)+0.0005\Delta T\right)-\cos\left(0.0005(T-6.75)\right)\right]\\
0\\
\hat{\mathbf{f}}_1\cdot\psi_1
\end{pmatrix}
$$
$$
\mathbf{t}_2 = \mathbf{t}_2^{\mathrm{init}} +
\begin{pmatrix}
136.841\left[\sin\left(0.0005(T-6.75)+0.0005\Delta T\right)-\sin\left(0.0005(T-6.75)\right)\right]\\
136.841\left[\cos\left(0.0005(T-6.75)+0.0005\Delta T\right)-\cos\left(0.0005(T-6.75)\right)\right]\\
0\\
\hat{\mathbf{f}}_2\cdot\psi_2
\end{pmatrix}
$$
where f ^ 1 , ψ 1 , f ^ 2 , and ψ 2 can be obtained according to the following equations:
$$
R_1(\hat{\mathbf{f}}_1,\psi_1)=R_2(\hat{\mathbf{f}}_2,\psi_2)=
\begin{bmatrix}
\cos\!\left(\tfrac{\pi}{2}-0.0005(T-6.75)\right) & -\sin\!\left(\tfrac{\pi}{2}-0.0005(T-6.75)\right) & 0\\
\sin\!\left(\tfrac{\pi}{2}-0.0005(T-6.75)\right) & \cos\!\left(\tfrac{\pi}{2}-0.0005(T-6.75)\right) & 0\\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0\\
0 & 0 & -1\\
0 & 1 & 0
\end{bmatrix}
$$
Similar to the previous experiments, the inverse kinematics based on Equation (30) yields the joint angles, Θ. The change of the weight value, Δŵ_i, is obtained according to Equation (27) and used to adjust the EE motions according to Equation (28). The learning rates, i.e., η, η_1, η_{n,1}, …, η_{n,k}, are used to calculate the step size for manipulating the rudder in Cartesian space. Since the rudder is symmetric about its center, the tracked locations are also symmetric and the EE attitude changes are identical. In Figure 16c,d, the two arms begin to manipulate the rudder at 6.75 s. In Figure 16e–h, the EE pose errors converge to 0, and the corresponding EE motions ultimately reach almost identical states through the proposed planning method with the learning of the self-organizing competitive neural network.
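The convergence criterion applied in all three tasks — the inter-arm pose-velocity error and each EE pose error both approaching 0 — can be expressed as a simple post-hoc check. The function name and tolerance below are illustrative choices, not from the paper.

```python
import numpy as np

def is_synchronized(v1, v2, e1, e2, tol=1e-3):
    """v1, v2: final EE pose velocities; e1, e2: final EE pose errors."""
    # cooperative motion: both arms move with the same pose velocity...
    vel_err = np.linalg.norm(np.asarray(v1) - np.asarray(v2))
    # ...while each arm's tracking error has been driven to (near) zero
    pose_err = max(np.linalg.norm(e1), np.linalg.norm(e2))
    return vel_err < tol and pose_err < tol
```

Note that neither arm is compared against a designated reference trajectory; only the relative velocity error and the individual pose errors enter the check, mirroring the non-master/slave character of the method.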
Considering (1)–(3) in Section 5.2 together, the pose velocities and pose errors converge to 0 for the successful execution of the coordinated manipulation, in correspondence with Equation (2), and no arm's motion is set as the reference for the others. Hence, the proposed planning method based on the self-organizing competitive neural network is feasible, synchronous, and effective in achieving collaborative manipulation for arms with physical coupling, and it contributes to practical applications.

6. Conclusions

This paper presents a real-time kinematically synchronous planning method for collaborative manipulation based on the self-organizing competitive neural network. The method addresses a class of collaborative manipulation characterized by the synchronization of EE motion. Sub-bases are defined for the multi-arm configuration to obtain the Jacobian matrix of the common DoFs and to ensure that the pose errors converge along the direction that reduces the total EE pose error. Simulations of multi-arms with common DoFs show motion consistency before the pose errors converge completely, which benefits the collaborative manipulation of multi-arms. On this basis, an unsupervised competitive neural network is proposed that treats EE synchronous motion as a competition among neurons and adaptively increases the convergence rate of the multi-arm system through the mutual learning and competition of neurons under the inner-star rules. The stability of the multi-arm system is analyzed via Lyapunov theory. Simulations and experiments confirm that the proposed synchronous planning method is feasible and synchronous and has application potential in different cooperative manipulation tasks.

Author Contributions

Conceptualization, H.Z., H.J., M.G. and J.Z.; methodology, H.Z., H.J., M.G. and J.Z.; software, H.Z. and M.G.; validation, H.Z. and M.G.; formal analysis, H.Z.; investigation, H.Z.; resources, H.Z.; data curation, H.Z.; writing—original draft preparation, H.Z.; writing—review and editing, H.Z.; visualization, H.Z.; supervision, H.J. and J.Z.; project administration, H.J. and J.Z.; funding acquisition, H.J. and J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Science and Technology Innovation 2030-"Brain Science and Brain-like Research" Major Project: 2021ZD0201400; the National Natural Science Foundation of China under Grants 92048301 and 61473102; the Science Foundation of Henan University of Technology under Grants 2021BS060 and 2020BS059; and the Key Scientific Research Project Plans of Higher Education Institutions in Henan Province under Grants 23B410002 and 22A410002.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest. This manuscript was approved by all authors for publication. The work described is original research that has not been published previously and is not under consideration for publication elsewhere, in whole or in part.

Figure 1. A type of cooperative manipulation. (a) Carrying. (b) Operating rudder. (c) Operating a wrench. (d) Using pliers. (e) Multi-station operation.
Figure 2. The diagram for the common features in the cooperative manipulation of multi-arms.
Figure 3. Simple configuration of multi-arm robot.
Figure 4. Kinematically synchronous planning for multi-arm robot. Uin = t = (t1, t2, …, tN)T. Uout = s = (s1, s2, …, sN)T.
Figure 5. Motion planning and EE motion for the EE with the minimum pose error, e min ( T ) . v ^ i ( T ) = μ e min ( T ) .
Figure 6. The configuration of three-arm robot with 15-DoFs.
Figure 7. Inverse kinematics based on the traditional method in real time. (a) Motion of multi-arms. (b) Joint angles. (c) EE position velocity. (d) EE attitude velocity. (e) EE position error. (f) EE attitude error.
Figure 8. Inverse kinematics based on the sub-base method in real time. (a) Motion of multi-arms. (b) Joint angles. (c) EE position velocity. (d) EE attitude velocity. (e) EE position error. (f) EE attitude error.
Figure 9. The configuration of two-arm robot with 13-DoFs.
Figure 10. The principle of two-arm robot with 13-DoFs.
Figure 11. Carrying task. (a) Initial configuration. (b) Manipulating process.
Figure 12. Trajectories for dual arms in carrying task. (a) EE movement. (b) Joint trajectory. (c) EE position. (d) EE attitude. (e) Position velocity. (f) Attitude velocity. (g) Pose velocity error. (h) EE pose error.
Figure 13. Manipulating pliers. (a) Initial configuration. (b) Manipulating process.
Figure 14. Trajectories for dual arms in manipulating pliers. (a) EE movement. (b) Joint trajectory. (c) EE position. (d) EE attitude. (e) Position velocity. (f) Attitude velocity. (g) Pose velocity error. (h) EE pose error.
Figure 15. Manipulating rudder. (a) Initial configuration. (b) Manipulating process.
Figure 16. Trajectories for dual arms in manipulating rudder. (a) EE movement. (b) Joint trajectory. (c) EE position. (d) EE attitude. (e) Position velocity. (f) Attitude velocity. (g) Pose velocity error. (h) EE pose error.
Table 1. Parameters of three-arm robot.

l_i         1      2      3      4      5
Length (m)  1.18   0.88   0.88   0.88   0.88
l_i         6      7      8      9      10
Length (m)  0.88   57.85  0.88   0.88   0.88
l_i         11     12     13     14     15
Length (m)  57.85  0.88   0.88   0.88   57.85
Table 2. Initial joint parameters of three-arm robot.

θ_i                1      2      3      4      5
Initial angle (°)  −5.0   5.0    5.0    30.0   20.0
θ_i                6      7      8      9      10
Initial angle (°)  20.0   20.0   −70.0  20.0   20.0
θ_i                11     12     13     14     15
Initial angle (°)  20.0   10.0   10.0   20.0   20.0
Table 3. Kinematic parameters.

Parameters  μ        η_{1,1}  N     b     k      R
Value       0.08     1/6      3     3     1      15
Parameters  M_{0,1}  M_1      M_2   M_3   λ      ΔT
Value       3        4        4     4     0.01   0.05
Table 4. D-H parameters of two-arm robot *.

Left Arm                                      Right Arm
i   d_i  a_i (mm)   α_i (rad)  β_i (rad)      i   d_i  a_i (mm)   α_i (rad)  β_i (rad)
1   0    h_1 = 42   0          0              1   0    h_1 = 42   π          0
2   0    h_2 = 84   π/2        0              2   0    h_2 = 84   π/2        0
3   0    h_3 = 84   −π/2       0              3   0    h_3 = 84   −π/2       0
4   0    h_4 = 84   π/2        0              4   0    h_4 = 84   π/2        0
5   0    h_5 = 78   −π/2       0              5   0    h_5 = 78   −π/2       0
6   0    h_7 = 71   π/2        0              6   0    h_7 = 71   π/2        0
7   0    h_7 = 71   0          0              7   0    h_7 = 71   0          0

* The base (i.e., θ_1) is the common joint of two arms.
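Table 4 can be consumed by a standard D-H link transform. The sketch below uses the classic (θ, d, a, α) convention and treats β_i as a fixed offset added to θ_i; that treatment of β is an assumption about the authors' convention, since β is not defined in this excerpt.

```python
import numpy as np

def dh_transform(theta, d, a, alpha, beta=0.0):
    """Homogeneous transform for one D-H link; beta is a fixed joint offset."""
    t = theta + beta                     # assumed role of beta_i (see lead-in)
    ct, st = np.cos(t), np.sin(t)
    ca, sa = np.cos(alpha), np.sin(alpha)
    # classic D-H: rotate/translate about z by (theta, d), then about x by (alpha, a)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])
```

Chaining one such transform per row of Table 4 (and multiplying left-to-right) gives the forward kinematics of each arm from the shared base joint.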
Table 5. Kinematic parameters of two-arm robot.

Parameters  μ        η_{1,1}  N     b     k      r
Value       0.005    5        2     2     1      13
Parameters  M_{0,1}  M_1      M_2   λ     ΔT
Value       1        6        6     0.01  0.05
Table 6. The parameters of self-organizing competitive neural network.

Parameters  δ_i       σ_i       c_i    η
Value       1 × 10−5  1 × 10−5  2.0    0.03