
CN118288294B - Robot vision servo and man-machine cooperative control method based on image variable admittance - Google Patents

Robot vision servo and man-machine cooperative control method based on image variable admittance Download PDF

Info

Publication number
CN118288294B
CN118288294B (application CN202410591654.XA)
Authority
CN
China
Prior art keywords
mechanical arm
image
matrix
camera
admittance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410591654.XA
Other languages
Chinese (zh)
Other versions
CN118288294A (en)
Inventor
王冬瑞
马磊
孙永奎
林剑飞
鲁文儒
郝浩楠
邓泽宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN202410591654.XA priority Critical patent/CN118288294B/en
Publication of CN118288294A publication Critical patent/CN118288294A/en
Application granted granted Critical
Publication of CN118288294B publication Critical patent/CN118288294B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a robot vision servo and man-machine cooperative control method based on image variable admittance, which comprises the following steps: performing kinematic and dynamic modeling on the mechanical arm in the visual servo system to obtain a mechanical arm kinematic model and a dynamic model; calibrating the visual servo system by the chessboard method to obtain the conversion matrix between the mechanical arm end effector and the camera; constructing first-order and second-order kinematic models of the visual servo; obtaining the dynamic model of the mechanical arm in the feature space; calculating and adjusting the admittance parameters by using the virtual positions of the feature points in the image; determining the visual servo control law; the camera acquires the image feature points of the two-dimensional code in real time, and the joint velocity of the mechanical arm is obtained in real time according to the visual servo control law, so that the motion of the mechanical arm is controlled and the visual servo process is completed. The invention solves the problem of inconsistent driving layers of the force sensor and the visual sensor, couples the visual sensor and the force sensor, and improves the flexibility of the system.

Description

Robot vision servo and man-machine cooperative control method based on image variable admittance
Technical Field
The invention relates to a robot vision servo and man-machine cooperative control method based on image variable admittance, and belongs to the technical field of robot visual servoing.
Background
Visual servoing is a control method that uses visual features (perception) to control a robot for positioning or trajectory tracking. A visual servoing system offers higher accuracy, flexibility and robustness in unstructured environments than a sensorless system. Robot force control, in turn, adapts better to environmental uncertainty and improves the precision and safety of the robot's contact operations.
At present, in the operation and maintenance of China's electrified railway overhead contact system, detection has been automated and partially intelligentized: the technical state of the contact network is analyzed and diagnosed mainly through means such as the 6C system, inspection tours and component inspection. Maintenance implements three-level repair (temporary repair, comprehensive repair, and fine measurement and finish repair) according to the technical state, but still depends largely on manual work. Visual servoing and man-machine cooperation are key technologies for realizing semi-automation and full automation of intelligent contact-system maintenance.
Currently, most human-machine interaction control techniques are based on direct explicit closed-loop force control or on indirect force control such as impedance and admittance control. Multi-sensor fusion enables multi-level, multi-space information processing and enhances the flexibility and intelligence of robot systems. However, combining vision and force sensors is not straightforward. Some researchers designed a control strategy based on a task-frame formulation in which the force along the constraint direction is regulated by shared control while the remaining directions of motion are visually controlled. Others proposed a hybrid vision/force control method that handles camera and constraint-surface uncertainties by tracking the image features of the end effector and applying orthogonal contact forces to an unknown surface. These methods all use visual and force information to control motion, but share the drawback that vision cannot correct errors generated in the force-controlled direction.
Aiming at these problems, researchers proposed an adaptive dynamic controller based on task-space feedback that solves the contact problem under bottleneck constraints through hybrid vision/force control, dividing the task space into a constraint space, an image space and a force space to accomplish visual servoing and force-trajectory tracking. First- and second-order impedance control laws based on image vision have also been devised, in which the elastic moment component of the hybrid impedance equation is defined on the image plane and used to design and compute visual errors. These methods divide the task space into an image feature space and design a visual-error-based controller, yet still achieve compliance in Cartesian space. Unlike vision/force hybrid control in Cartesian or joint space, a generalized framework for vision/impedance hybrid control in feature space has been proposed; the framework is defined in the task space of the visual servoing system irrespective of the choice of visual features, and compliance is achieved by selecting constant admittance parameters in the feature space.
Disclosure of Invention
The invention aims to overcome the defect of prior-art hybrid vision/force control that vision cannot correct errors generated in the force-control direction, as well as the poor system flexibility and safety of constant admittance control.
The technical scheme provided by the invention for solving the above technical problems is as follows: a robot vision servo and man-machine cooperative control method based on image variable admittance, comprising the following steps:
step S100, performing kinematic and dynamic modeling on a mechanical arm in a visual servo system to obtain a mechanical arm kinematic model and a dynamic model, and performing coordinate system setting on a reference frame of the visual servo system;
Step S200, calibrating a visual servo system through a chessboard method to obtain a conversion matrix between an end effector of the mechanical arm and a camera;
Step S300, establishing a kinematic and dynamic model of camera vision, and combining the mechanical arm kinematic model and the dynamic model to construct a first-order kinematic model and a second-order kinematic model of vision servo;
Step S400, a dynamic model of the mechanical arm under the image feature space is established, and the dynamic model of the mechanical arm under the feature space is obtained by combining the mechanical arm kinematics, the dynamic model, a first-order kinematic model and a second-order kinematic model of the visual servo;
Step S500, calculating and adjusting admittance parameters in real time by using the virtual positions of the feature points in the image in combination with a logistic sigmoid function;
step S600, determining a visual servo control law according to errors of expected characteristic points and current characteristic points;
Step S700, acquiring the image feature points of the two-dimensional code in real time by the camera, and obtaining the joint velocity of the mechanical arm in real time according to the visual servo control law, thereby controlling the motion of the mechanical arm and completing the visual servo process.
The mechanical arm kinematic model is as follows:

x = f(q), \qquad \dot{x} = J(q)\,\dot{q}, \qquad \ddot{x} = J(q)\,\ddot{q} + \dot{J}(q)\,\dot{q}

Wherein: q, \dot{q} and \ddot{q} are respectively the joint position, velocity and acceleration vectors in joint space; J(q) is the mechanical arm Jacobian matrix; x, \dot{x} and \ddot{x} are the position, velocity and acceleration vectors in the operation space; f(\cdot) is the coordinate system mapping function.
The mechanical arm dynamic model is as follows:

M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) + F(\dot{q}) = \tau + \tau_{ext}

Wherein: q, \dot{q} and \ddot{q} are respectively the joint position, velocity and acceleration vectors in joint space; M(q) is the inertia matrix of the manipulator; C(q,\dot{q}) is the Coriolis and centripetal matrix; g(q) is the gravity vector; F(\dot{q}) is the vector of Coulomb, viscous and static friction; \tau_{ext} is the external torque received; \tau is the control input.
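As an illustrative sketch only (not the patent's robot model), the joint-space dynamic model above can be solved numerically for the joint acceleration; all matrices and values below are hypothetical placeholders for a 2-DOF arm:

```python
import numpy as np

def forward_dynamics(M, C, g, F, tau, tau_ext, q_d):
    """Joint acceleration from M(q) q_dd + C q_d + g + F = tau + tau_ext."""
    rhs = tau + tau_ext - C @ q_d - g - F
    return np.linalg.solve(M, rhs)

M = np.array([[2.0, 0.3], [0.3, 1.0]])   # inertia matrix (symmetric positive definite)
C = np.array([[0.0, -0.1], [0.1, 0.0]])  # Coriolis/centripetal matrix
g = np.array([9.0, 3.0])                 # gravity torque vector
F = np.array([0.05, 0.02])               # friction torque vector (Coulomb + viscous)
tau = np.array([10.0, 4.0])              # control input
tau_ext = np.zeros(2)                    # external torque
q_d = np.array([0.1, -0.2])              # joint velocity

q_dd = forward_dynamics(M, C, g, F, tau, tau_ext, q_d)
```

The same relation, read in the other direction, is what an inverse-dynamics controller evaluates to compute the required torque.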
The further technical scheme is that the specific process of step S200 includes:
Step S210, using a chessboard calibration board with N points, and randomly placing the chessboard calibration board in a REALSENSE camera field of view;
s220, performing picture acquisition on the chessboard calibration board by using a REALSENSE camera;
Step S230, calculating the circle center pixel coordinates of N dots in the image;
Step S240, recording the three-dimensional pose of the mechanical arm end effector at the moment in the form of a translation vector and a rotation vector;
Step S250, repeating step S220-step S240 for 16 times;
Step S260, resolving the 2D-3D data to obtain the coordinate conversion relation {}^cT_o between the chessboard calibration board and the camera;
Step S270, calculating the conversion matrix {}^cT_e from the mechanical arm end effector to the camera according to the coordinate conversion relation {}^cT_o.
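The transform chain behind steps S260 and S270 can be illustrated with homogeneous transforms. The sketch below assumes, purely for illustration, that the board pose relative to the end effector (eTo) is known and that a single view suffices; the patent instead solves the hand-eye relation from the 16 recorded 2D-3D pairs (for which a tool such as OpenCV's calibrateHandEye is commonly used):

```python
import numpy as np

def rot_z(a):
    """Rotation matrix about the z axis by angle a (rad)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def make_T(R, t):
    """Homogeneous transform from rotation matrix R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def inv_T(T):
    """Inverse of a homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# hypothetical ground-truth transforms
cTe_true = make_T(rot_z(0.3), [0.02, -0.05, 0.10])  # end effector -> camera
eTo = make_T(rot_z(-0.7), [0.30, 0.10, 0.40])       # board -> end effector

# what calibration observes: board pose in the camera frame
cTo = cTe_true @ eTo

# step S270: recover cTe from cTo and the board/end-effector relation
cTe = cTo @ inv_T(eTo)
```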
The further technical scheme is that the first-order kinematic model of the visual servo is:

\dot{s} = L_s\,v_c

Wherein: \dot{s} is the velocity of the image feature points, L_s is the image interaction matrix, and v_c is the movement velocity of the camera.
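For a single normalized point feature s = (x, y) at depth Z, a classical form of the image interaction matrix is the following (a standard choice in image-based visual servoing; the patent does not spell out its interaction matrix, so this specific form is an assumption):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix L_s for a normalized image point (x, y) at
    depth Z, mapping the camera twist [vx, vy, vz, wx, wy, wz] to the
    feature velocity s_dot = L_s @ v_c."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

L = interaction_matrix(0.1, -0.2, 1.5)
v_c = np.array([0.0, 0.0, 0.1, 0.0, 0.0, 0.0])  # pure forward camera motion
s_dot = L @ v_c                                  # resulting feature velocity
```

Stacking one such 2x6 block per tracked point (e.g. the four corners of the two-dimensional code) gives the full interaction matrix used by the control law.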
The second-order kinematic model of the visual servo is:

\ddot{s} = J_s\,\ddot{q} + \left(\dot{L}_s\,{}^cT_e\,{}^eJ_e + L_s\,{}^c\dot{T}_e\,{}^eJ_e + L_s\,{}^cT_e\,{}^e\dot{J}_e\right)\dot{q}, \qquad J_s = L_s\,{}^cT_e\,{}^eJ_e

Wherein: {}^cT_e is the conversion matrix from the mechanical arm end effector to the camera; {}^eJ_e is the mechanical arm Jacobian matrix; J_s is the image feature Jacobian matrix; \dot{q} and \ddot{q} are respectively the velocity and acceleration vectors in joint space; L_s is the image interaction matrix; \ddot{s} is the acceleration vector of the image feature points; \dot{L}_s is the first-order derivative of the image interaction matrix; {}^c\dot{T}_e is the first-order derivative of the conversion matrix from the mechanical arm end effector to the camera; {}^e\dot{J}_e is the first-order derivative of the mechanical arm Jacobian matrix.
The mechanical arm dynamic model in the feature space is:

M_c(q)\,\ddot{s} = f_s + f_{s,ext} + f_q

wherein:

M_c(q)^{-1} = J_s\,M(q)^{-1}\,J_s^{\top}, \qquad \tau_{ext} = {}^eJ_e^{\top}\,{}^cT_e^{\top}\,{}^cf_c

Wherein: M_c(q)^{-1} is the inverse of the projection of the manipulator inertia matrix in the camera frame; J_s is the feature Jacobian matrix; {}^eJ_e is the mechanical arm Jacobian matrix; {}^cT_e is the conversion matrix from the mechanical arm end effector to the camera; \ddot{s} is the acceleration vector of the image feature points; f_s is the moment in the image feature space; f_{s,ext} is the external contact force in the image feature space; f_q is the external disturbance in the image feature space; {}^eJ_e^{\top} is the transpose of the mechanical arm Jacobian matrix; {}^cf_c is the force in the camera coordinate system.
The further technical scheme is that the specific process of step S500 is as follows: combining the admittance formula in the feature space with the feature-point distance difference under the logistic sigmoid function, the variable admittance parameters of the damping D_s and the stiffness K_s are adjusted through the following s-type function:

\sigma(e_s) = \frac{1}{1 + e^{-\gamma_1 (e_s - \gamma_2)}}, \qquad V(e_s) = V_{min} + (V_{max} - V_{min})\,\sigma(e_s)

Wherein: \gamma_1 and \gamma_2 are parameters that affect the growth rate and the inflection point of the curve; \sigma(e_s) is the logistic sigmoid function; V_{max} is the maximum value of the admittance parameter; V_{min} is the minimum value of the admittance parameter; d_s is the total length of the virtual path; e_s is the length of the virtual path between the current feature point and the desired feature point.
A further technical scheme is that in step S500, when calculating and adjusting the damping D_s, the maximum and minimum damping values and the sigmoid shape parameters are selected, and the damping coefficient is adjusted by the s-type function with this set of parameters; when calculating and adjusting the stiffness K_s, the maximum and minimum stiffness values and the corresponding sigmoid shape parameters are selected, and the stiffness coefficient is adjusted by the s-type function with this set of parameters.
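A minimal sketch of the s-type parameter scheduling described above; the numeric bounds and sigmoid parameters are illustrative, since the patent elides the concrete values:

```python
import math

def logistic(e_s, gamma1, gamma2):
    """Logistic sigmoid with growth rate gamma1 and inflection point gamma2."""
    return 1.0 / (1.0 + math.exp(-gamma1 * (e_s - gamma2)))

def admittance_param(e_s, v_min, v_max, gamma1, gamma2):
    """Interpolate an admittance parameter between v_min and v_max as the
    virtual path length e_s between current and desired feature varies."""
    return v_min + (v_max - v_min) * logistic(e_s, gamma1, gamma2)

# illustrative damping schedule over a virtual path of total length d_s = 100 px,
# sampled at the start, the inflection point, and the end of the path
D = [admittance_param(e, v_min=5.0, v_max=40.0, gamma1=0.1, gamma2=50.0)
     for e in (0.0, 50.0, 100.0)]
```

The stiffness K_s would be scheduled the same way with its own bounds and shape parameters, so that the compliance of the system changes smoothly along the virtual path instead of switching abruptly.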
The further technical scheme is that the specific process of step S600 includes:
Step S610, establishing the admittance control law between the external contact force and the feature point error:

M_d\,\ddot{e}_v + D_s\,\dot{e}_v + K_s\,e_v = f_{s,ext}, \qquad e_v = s_d - s_v

Wherein: M_d, D_s and K_s are respectively the virtual mass, damping and stiffness; e_v is the feature point error; \ddot{e}_v is the acceleration of the feature point error; \dot{e}_v is the velocity of the feature point error; s_d is the initially set desired feature point; s_v is the feature point moved by the external contact force; f_{s,ext} is the external contact force in the image feature space.

Step S620, selecting e = s - s_v as the error between the current feature point and the desired feature point, where s is the image feature point.

Step S630, integrating the feature point error acceleration \ddot{e}_v once and twice to obtain the feature point error velocity \dot{e}_v and the feature point s_v moved by the external contact force.

Step S640, determining the visual servo control law:

v_c = -\lambda\,L_s^{+}\,e - L_s^{+}\,\dot{e}_v

Wherein: v_c is the movement velocity of the camera; \dot{e}_v is the velocity of the feature point error; \lambda is the controller gain; L_s^{+} is the pseudo-inverse of the image interaction matrix; e is the error between the current feature point and the desired feature point.
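A toy sketch of the visual servo control law, using the proportional IBVS term v_c = -λ L_s⁺ e with a fixed interaction matrix and omitting the admittance feed-forward term for brevity (in practice L_s is re-estimated every frame, so the values below are purely illustrative):

```python
import numpy as np

def ibvs_velocity(L_s, e, lam=0.5):
    """Camera twist from the IBVS law v_c = -lam * pinv(L_s) @ e."""
    return -lam * np.linalg.pinv(L_s) @ e

# fixed interaction matrix of one point feature at (x, y) = (0.1, -0.2), Z = 1.5
L_s = np.array([
    [-1 / 1.5, 0.0, 0.1 / 1.5, 0.1 * -0.2, -(1 + 0.1 ** 2), -0.2],
    [0.0, -1 / 1.5, -0.2 / 1.5, 1 + (-0.2) ** 2, -0.1 * -0.2, -0.1],
])

e = np.array([0.05, -0.03])   # feature error s - s_d
dt = 0.1
for _ in range(100):          # Euler-integrate s_dot = L_s @ v_c
    v_c = ibvs_velocity(L_s, e)
    e = e + dt * (L_s @ v_c)
```

Because L_s has full row rank, L_s L_s⁺ is the identity and the error decays exponentially at rate λ, which is the usual stability argument for this control law.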
The invention has the following beneficial effects:
1. Unlike visual/force hybrid control in Cartesian or joint space, the invention solves the problem of inconsistent force sensor and visual sensor driving layers;
2. Unlike variable admittance control in Cartesian or joint space, the invention couples the visual sensor and the force sensor, improving the flexibility of the system;
3. Compared with constant admittance vision/force hybrid control in the feature space, the introduction of variable admittance allows the admittance parameters to be adjusted in real time, improving the dynamic characteristics, human-machine interaction and safety of the system.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a graph showing the variation of admittance parameters in a variable admittance system according to an embodiment of the present invention;
wherein fig. 2 (a) is a variation diagram of damping parameters, and fig. 2 (b) is a variation diagram of stiffness parameters;
FIG. 3 is a schematic diagram showing the visual servo stage of the control method for different admittance parameters according to an embodiment of the present invention;
wherein fig. 3 (a) is an execution effect of low damping high rigidity constant admittance, fig. 3 (b) is an execution effect of high damping low rigidity constant admittance, fig. 3 (c) is an execution effect of high damping high rigidity constant admittance, and fig. 3 (d) is an execution effect of variable admittance;
FIG. 4 is a diagram showing the result of the control method for four different admittance parameters according to an embodiment of the present invention;
Wherein, fig. 4 (a) shows the execution results of the visual servo completion time and the error bars thereof, fig. 4 (b) shows the execution results of the speed and the angular speed and the error bars thereof in the process of the embodiment, fig. 4 (c) shows the execution effects of the total task completion time and the error bars thereof, and fig. 4 (d) shows the execution effects of the force and the error bars thereof in the process of the task.
Detailed Description
The following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the accompanying drawings; the described embodiments are some, but not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
As shown in fig. 1, the robot vision servo and man-machine cooperative control method based on image variable admittance specifically comprises the following steps:
step S100, performing kinematic and dynamic modeling on a mechanical arm in a visual servo system to obtain a mechanical arm kinematic model and a dynamic model, and performing coordinate system setting on a reference frame of the visual servo system;
wherein the mechanical arm kinematic model is:

x = f(q), \qquad \dot{x} = J(q)\,\dot{q}, \qquad \ddot{x} = J(q)\,\ddot{q} + \dot{J}(q)\,\dot{q}

Wherein: q, \dot{q} and \ddot{q} are respectively the joint position, velocity and acceleration vectors in joint space; J(q) is the mechanical arm Jacobian matrix; x, \dot{x} and \ddot{x} are the position, velocity and acceleration vectors in the operation space; f(\cdot) is the coordinate system mapping function.
The mechanical arm dynamics model is as follows:

M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) + F(\dot{q}) = \tau + \tau_{ext}

wherein:

\tau_{ext} = {}^eJ_e^{\top}\,{}^ef_e

Wherein: q, \dot{q} and \ddot{q} are respectively the joint position, velocity and acceleration vectors in joint space; M(q) is the inertia matrix of the manipulator; C(q,\dot{q}) is the Coriolis and centripetal matrix; g(q) is the gravity vector; F(\dot{q}) is the vector of Coulomb, viscous and static friction; \tau_{ext} is the external torque received; {}^ef_e is the external wrench in the end effector coordinate system; \tau is the control input;
Step S200, calibrating a visual servo system through a chessboard method to obtain a conversion matrix between an end effector of the mechanical arm and a camera;
Step S210, using a chessboard calibration board with N points, and randomly placing the chessboard calibration board in a REALSENSE camera field of view;
s220, performing picture acquisition on the chessboard calibration board by using a REALSENSE camera;
Step S230, calculating the circle center pixel coordinates of N dots in the image;
Step S240, recording the three-dimensional pose of the mechanical arm end effector at the moment in the form of a translation vector and a rotation vector;
Step S250, repeating step S220-step S240 for 16 times;
Step S260, resolving the 2D-3D data to obtain the coordinate conversion relation {}^cT_o between the chessboard calibration board and the camera;
Step S270, calculating the conversion matrix {}^cT_e from the mechanical arm end effector to the camera according to the coordinate conversion relation {}^cT_o;
Step S300, establishing a kinematic and dynamic model of camera vision, and combining the mechanical arm kinematic model and the dynamic model to construct a first-order kinematic model and a second-order kinematic model of vision servo;
the first-order kinematic model of visual servoing is:

\dot{s} = L_s\,v_c

Wherein: \dot{s} is the velocity of the image feature points, L_s is the image interaction matrix, and v_c is the movement velocity of the camera;
the second-order kinematic model of visual servoing is:

\ddot{s} = J_s\,\ddot{q} + \left(\dot{L}_s\,{}^cT_e\,{}^eJ_e + L_s\,{}^c\dot{T}_e\,{}^eJ_e + L_s\,{}^cT_e\,{}^e\dot{J}_e\right)\dot{q}, \qquad J_s = L_s\,{}^cT_e\,{}^eJ_e

Wherein: {}^cT_e is the conversion matrix from the mechanical arm end effector to the camera; {}^eJ_e is the mechanical arm Jacobian matrix; J_s is the image feature Jacobian matrix; \dot{q} and \ddot{q} are respectively the velocity and acceleration vectors in joint space; L_s is the image interaction matrix; \ddot{s} is the acceleration vector of the image feature points; \dot{L}_s is the first-order derivative of the image interaction matrix; {}^c\dot{T}_e is the first-order derivative of the conversion matrix from the mechanical arm end effector to the camera; {}^e\dot{J}_e is the first-order derivative of the mechanical arm Jacobian matrix;
Step S400, a dynamic model of the mechanical arm under the image feature space is established, and the dynamic model of the mechanical arm under the feature space is obtained by combining the mechanical arm kinematics, the dynamic model, a first-order kinematic model and a second-order kinematic model of visual servo, wherein the dynamic model is specifically expressed as follows:
M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) + F(\dot{q}) = \tau + {}^eJ_e^{\top}\,{}^cT_e^{\top}\,{}^cf_c

wherein {}^cT_e^{\top} and {}^cf_c are respectively the wrench transformation matrix mapping from the camera coordinate system to the end effector coordinate system and the wrench acting in the camera coordinate system. Combining the feature Jacobian matrix J_s = L_s\,{}^cT_e\,{}^eJ_e relates the twist transformation matrix to the wrench transformation matrix.

Rewriting the dynamic model yields the dynamic model of the mechanical arm in the feature space:

M_c(q)\,\ddot{s} = f_s + f_{s,ext} + f_q

wherein:

M_c(q)^{-1} = J_s\,M(q)^{-1}\,J_s^{\top}

Wherein: M_c(q)^{-1} is the inverse of the projection of the manipulator inertia matrix in the camera frame; J_s is the feature Jacobian matrix; {}^eJ_e is the mechanical arm Jacobian matrix; {}^cT_e is the conversion matrix from the mechanical arm end effector to the camera; \ddot{s} is the acceleration vector of the image feature points; f_s is the moment in the image feature space; f_{s,ext} is the external contact force in the image feature space; f_q is the external disturbance in the image feature space; {}^eJ_e^{\top} is the transpose of the mechanical arm Jacobian matrix; {}^cf_c is the force in the camera coordinate system;
Step S500, calculating and adjusting admittance parameters in real time by using the virtual positions of the feature points in the image in combination with a logistic sigmoid function;
selecting a logistic sigmoid function, specifically expressed as:

\sigma(e_s) = \frac{1}{1 + e^{-\gamma_1 (e_s - \gamma_2)}}

Wherein: \gamma_1 and \gamma_2 are parameters that affect the growth rate and the inflection point of the curve.

By combining the admittance formula in the feature space with the feature point distance difference, the variable admittance parameters of the damping D_s and the stiffness K_s are adjusted through the following s-type function:

V(e_s) = V_{min} + (V_{max} - V_{min})\,\sigma(e_s)

Wherein: \sigma(e_s) is the logistic sigmoid function; V_{max} is the maximum value of the admittance parameter; V_{min} is the minimum value of the admittance parameter; d_s is the total length of the virtual path; e_s is the length of the virtual path between the current feature point and the desired feature point;
When calculating and adjusting the damping D_s, the maximum and minimum damping values and the sigmoid shape parameters are selected, and the damping coefficient is adjusted by the s-type function with this set of parameters; the result is shown in fig. 2 (a).
When calculating and adjusting the stiffness K_s, the maximum and minimum stiffness values and the corresponding sigmoid shape parameters are selected, and the stiffness coefficient is adjusted by the s-type function with this set of parameters; the result is shown in fig. 2 (b).
step S600, determining a visual servo control law according to errors of expected characteristic points and current characteristic points;
Step S610, establishing the admittance control law between the external contact force and the feature point error:

M_d\,\ddot{e}_v + D_s\,\dot{e}_v + K_s\,e_v = f_{s,ext}, \qquad e_v = s_d - s_v

Wherein: M_d, D_s and K_s are respectively the virtual mass, damping and stiffness; e_v is the feature point error; \ddot{e}_v is the acceleration of the feature point error; \dot{e}_v is the velocity of the feature point error; s_d is the initially set desired feature point; s_v is the feature point moved by the external contact force; f_{s,ext} is the external contact force in the image feature space;

Step S620, selecting e = s - s_v as the error between the current feature point and the desired feature point, where s is the image feature point;

Step S630, integrating the feature point error acceleration \ddot{e}_v once and twice to obtain the feature point error velocity \dot{e}_v and the feature point s_v moved by the external contact force;

Step S640, determining the visual servo control law:

v_c = -\lambda\,L_s^{+}\,e - L_s^{+}\,\dot{e}_v

Wherein: v_c is the movement velocity of the camera; \dot{e}_v is the velocity of the feature point error; \lambda is the controller gain; L_s^{+} is the pseudo-inverse of the image interaction matrix; e is the error between the current feature point and the desired feature point;
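The double integration of step S630 can be sketched as an explicit Euler integration of the admittance equation M_d ë_v + D_s ė_v + K_s e_v = f_ext for one scalar feature coordinate; the gains below are illustrative, not the patent's:

```python
def admittance_step(e_v, de_v, f_ext, M_d, D_s, K_s, dt):
    """One Euler step of M_d * dde_v + D_s * de_v + K_s * e_v = f_ext."""
    dde_v = (f_ext - D_s * de_v - K_s * e_v) / M_d
    de_v = de_v + dt * dde_v      # first integration: error velocity
    e_v = e_v + dt * de_v         # second integration: error position
    return e_v, de_v

M_d, D_s, K_s, dt = 1.0, 8.0, 20.0, 0.01  # illustrative virtual mass/damping/stiffness
e_v, de_v = 0.0, 0.0
f_ext = 10.0                      # constant external contact force
for _ in range(2000):             # 20 s of simulated interaction
    e_v, de_v = admittance_step(e_v, de_v, f_ext, M_d, D_s, K_s, dt)
```

Under a constant force the virtual displacement settles at f_ext / K_s, so lowering the stiffness (as the variable admittance does near the operator) makes the desired feature point yield more to the same contact force.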
step S700, acquiring image feature points of the two-dimensional code in real time by a camera, and obtaining the joint speed of the mechanical arm in real time according to a visual servo control law, so as to control the movement of the mechanical arm and complete a visual servo process;
Specifically, the designed visual servo controller is programmed on the PC, and the mechanical arm, the torque sensor and the ViSP platform communicate through ROS. The embodiment takes overhead contact system bolt maintenance as its background, aiming to improve the efficiency and safety of the contact-system maintenance process.
The embodiment comprises three stages, namely, a first stage, wherein an overhaul target is identified, and a visual servo moves to an overhaul position; the second stage, moving the maintenance mechanical arm to the next maintenance point through man-machine cooperation; and in the third stage, the next maintenance target is identified, and the visual servo moves to a maintenance position.
FIG. 3 shows the experimental results. Fig. 3 (a) is the experiment with low damping and high stiffness, where the visual servoing converges faster but overshoots; fig. 3 (b) is the experiment with high damping and low stiffness, where the visual servo convergence is slowest; fig. 3 (c) is the experiment with high damping and high stiffness, where convergence is faster than with high damping and low stiffness but slower than with low damping and high stiffness; fig. 3 (d) is the experiment with variable admittance, where the visual servoing converges fastest and without overshoot.
FIG. 4 shows the statistical results. Fig. 4 (a) shows that the variable admittance control converges with the best effect; fig. 4 (b) shows the speed magnitude during visual servoing, where the variable admittance control speed is second only to the low damping high stiffness control; fig. 4 (c) shows the total task time, with variable admittance giving the shortest completion time; fig. 4 (d) shows the interaction force during human-machine interaction, where the force under variable admittance is second only to the high damping low stiffness case. Combining these performance indexes, the variable admittance has the best dynamic response in the pure visual servo stage and better human-machine interaction over the whole task, which verifies the effectiveness of the invention.
Compared with the prior art, the invention has the following advantages: (1) unlike visual/force hybrid control in Cartesian or joint space, the invention solves the problem of inconsistent force sensor and visual sensor driving layers; (2) unlike variable admittance control in Cartesian or joint space, the invention couples the visual sensor and the force sensor, improving the flexibility and human-machine interaction of the system; (3) compared with constant admittance vision/force hybrid control in the feature space, the introduction of variable admittance allows the admittance parameters to be adjusted in real time, improving the dynamic characteristics, human-machine interaction and safety of the system.
The present invention is not limited to the above-mentioned embodiments; any simple modification, equivalent change or variation made to the above embodiments according to the technical substance of the present invention by a person skilled in the art, without departing from the technical solution of the present invention, falls within the scope of the technical solution of the present invention.

Claims (7)

1. A robot vision servo and man-machine cooperative control method based on image variable admittance, characterized by comprising the following steps:
step S100, performing kinematic and dynamic modeling on a mechanical arm in a visual servo system to obtain a mechanical arm kinematic model and a dynamic model, and performing coordinate system setting on a reference frame of the visual servo system;
Step S200, calibrating a visual servo system through a chessboard method to obtain a conversion matrix between an end effector of the mechanical arm and a camera;
Step S210, using a chessboard calibration board with N points, and randomly placing the chessboard calibration board in a REALSENSE camera field of view;
s220, performing picture acquisition on the chessboard calibration board by using a REALSENSE camera;
Step S230, calculating the circle center pixel coordinates of N dots in the image;
Step S240, recording the three-dimensional pose of the mechanical arm end effector at the moment in the form of a translation vector and a rotation vector;
Step S250, repeating step S220-step S240 for 16 times;
step S260, solving the 2D-3D data to obtain the coordinate conversion relation {}^cT_o between the chessboard calibration board and the camera;
Step S270, calculating the conversion matrix {}^cT_e from the mechanical arm end effector to the camera according to the coordinate conversion relation {}^cT_o;
Step S300, establishing a kinematic and dynamic model of camera vision, and combining the mechanical arm kinematic model and the dynamic model to construct a first-order kinematic model and a second-order kinematic model of vision servo;
Step S400, a dynamic model of the mechanical arm under the image feature space is established, and the dynamic model of the mechanical arm under the image feature space is obtained by combining the mechanical arm kinematics, the dynamic model, a first-order kinematic model and a second-order kinematic model of the visual servo;
wherein:
Wherein: m c(q)-1 is the inverse of the projection of the manipulator inertia matrix in the camera frame; j s is the image feature jacobian matrix; eJe Is a mechanical arm Jacobian matrix; cTe A conversion matrix from the mechanical arm end effector to the camera; Acceleration vectors which are characteristic points of the image; f s is the moment in the image feature space; f sext is the external contact force under the image feature space; f q is external disturbance in the image feature space; the method is a transposition of a mechanical arm Jacobian matrix; a conversion matrix from the mechanical arm end effector to the camera; cfc Is the force in the camera coordinate system; a coriolis matrix and a centripetal matrix; g (q) is a gravity vector; Coulomb, viscosity and stiction vectors; τ is the control input; q is, Respectively a joint position vector and a joint speed vector in joint space;
S500, calculating and adjusting admittance parameters in real time by utilizing virtual positions of feature points in an image in combination with a logistic sigmoid function;
Selecting an admittance formula and a characteristic point distance difference under the combination of a logistic sigmoid function and a characteristic space, and adjusting variable admittance parameters of the damping D s and the rigidity K s;
Wherein: gamma 1 and gamma 2 are parameters that affect the rate of increase and inflection point of the curve; sigma (e s) is a logistic signature function; v max is the maximum value of the admittance parameter; v min is the minimum value of the admittance parameter; d s is the total length of the virtual path; e s is the length of the virtual path between the current feature point and the desired feature point;
step S600, determining a visual servo control law according to errors of expected characteristic points and current characteristic points;
and S700, acquiring image feature points of the two-dimensional code in real time by a camera, and obtaining the joint speed of the mechanical arm in real time according to a visual servo control law, so as to control the movement of the mechanical arm and complete a visual servo process.
2. The image admittance-based robot vision servo and man-machine cooperative control method according to claim 1, wherein the mechanical arm kinematics model is as follows:
x=P(q)
Wherein: q is, Respectively a joint position vector, a velocity vector and an acceleration vector in joint space; eJe Is a mechanical arm Jacobian matrix; x is,Respectively a position vector, a speed vector and an acceleration vector in an operation space, wherein P (q) is a coordinate system mapping function; is the first order derivative of the jacobian matrix of the mechanical arm.
3. The image admittance-based robot vision servo and man-machine cooperative control method according to claim 1, wherein the mechanical arm dynamics model is as follows:
Wherein: q is, Respectively a joint position vector, a velocity vector and an acceleration vector in joint space; m (q) is an inertial matrix of the manipulator; a coriolis matrix and a centripetal matrix; g (q) is a gravity vector; coulomb, viscosity and stiction vectors; τ e is the external torque received; τ is the control input.
4. The image admittance-based robot vision servo and man-machine cooperative control method according to claim 1, wherein the first-order kinematic model of the vision servo is:
Wherein: for the image feature point speed, L s is the image interaction matrix and v c is the camera motion speed.
5. The image admittance-based robot vision servo and man-machine cooperative control method of claim 4, wherein the second-order kinematic model of the vision servo is:
Wherein: cTe A conversion matrix from the mechanical arm end effector to the camera; eJe Is a mechanical arm Jacobian matrix; j s is the image feature jacobian matrix; Respectively a velocity vector and an acceleration vector in joint space; l s is an image interaction matrix; Acceleration vectors which are characteristic points of the image; A first order derivative of the image interaction matrix; a first order derivative of a conversion matrix from the mechanical arm end effector to the camera; The first order derivative of the mechanical arm Jacobian matrix; f q is the external disturbance in the image feature space.
6. The method for controlling the visual servoing and man-machine cooperation of a robot based on image admittance according to claim 1, wherein when calculating and adjusting the damping D s in the step S500: selecting V max =40 as the maximum value of damping parameters, V min =10 as the minimum value of damping parameters, γ 1=25,γ2 =2.5, and adjusting the damping coefficient by a logistic signature function of the group of parameters;
When calculating and adjusting the stiffness K s: v max = 200 is chosen as the maximum value of the stiffness parameter, V min = 100 is the minimum value of the stiffness parameter, γ 1=50,γ2 = 1.5, and the stiffness coefficient is adjusted by a logistic signature function of this set of parameters.
7. The method for controlling the visual servo and the man-machine cooperation of the robot based on the image admittance according to claim 1, wherein the specific process of the step S600 comprises:
step S610, establishing admittance control rate between external contact force and characteristic point error;
e=sd-sw
Wherein: h s、Ds、Ks is virtual mass, damping and stiffness, respectively; e is the characteristic point error; Acceleration, which is characteristic point error; The speed of the feature point error; s d is a desired feature point initially set, s w is a feature point moved by an external contact force; f sext is the external contact force under the image feature space;
Step S620, selecting e s=s-sw as an error between the current feature point and the expected feature point, wherein S is an image feature point;
step S630, acceleration by error of characteristic points The speed of the characteristic point error can be obtained by carrying out primary and secondary integrationAnd a feature point s w moved by an external contact force;
Step S640, determining that the visual servo control law is:
wherein: v c is the speed of movement of the camera; the speed of the feature point error; alpha is the controller gain; is the pseudo-inverse of the image interaction matrix; e s is the error between the current feature point and the desired feature point.
CN202410591654.XA 2024-05-14 2024-05-14 Robot vision servo and man-machine cooperative control method based on image variable admittance Active CN118288294B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410591654.XA CN118288294B (en) 2024-05-14 2024-05-14 Robot vision servo and man-machine cooperative control method based on image variable admittance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410591654.XA CN118288294B (en) 2024-05-14 2024-05-14 Robot vision servo and man-machine cooperative control method based on image variable admittance

Publications (2)

Publication Number Publication Date
CN118288294A CN118288294A (en) 2024-07-05
CN118288294B true CN118288294B (en) 2024-10-18

Family

ID=91676866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410591654.XA Active CN118288294B (en) 2024-05-14 2024-05-14 Robot vision servo and man-machine cooperative control method based on image variable admittance

Country Status (1)

Country Link
CN (1) CN118288294B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118544362A (en) * 2024-07-26 2024-08-27 湖南大学 Iterative learning control method for impedance of vision-force fusion mechanical arm under guidance of person

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104942809A (en) * 2015-06-23 2015-09-30 广东工业大学 Mechanical arm dynamic fuzzy approximator based on visual servo system
CN115122325A (en) * 2022-06-30 2022-09-30 湖南大学 Robust visual servo control method for anthropomorphic manipulator with view field constraint

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3740352B1 (en) * 2018-01-15 2023-03-15 Technische Universität München Vision-based sensor system and control method for robot arms

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104942809A (en) * 2015-06-23 2015-09-30 广东工业大学 Mechanical arm dynamic fuzzy approximator based on visual servo system
CN115122325A (en) * 2022-06-30 2022-09-30 湖南大学 Robust visual servo control method for anthropomorphic manipulator with view field constraint

Also Published As

Publication number Publication date
CN118288294A (en) 2024-07-05

Similar Documents

Publication Publication Date Title
CN110039542B (en) Visual servo tracking control method with speed and direction control function and robot system
CN110238839B (en) Multi-shaft-hole assembly control method for optimizing non-model robot by utilizing environment prediction
CN107627303B (en) PD-SMC control method of visual servo system based on eye-on-hand structure
Wang et al. Uncalibrated visual tracking control without visual velocity
CN113681543B (en) Mechanical arm zero-force control method based on model prediction
CN109782601B (en) Design method of self-adaptive neural network synchronous robust controller of coordinated mechanical arm
CN109240091B (en) Underwater robot control method based on reinforcement learning and tracking control method thereof
CN105772917B (en) A kind of three joint spot welding robot's Trajectory Tracking Control methods
CN108453732B (en) Self-adaptive dynamic force/position hybrid control method for closed robot of control system
Zhao et al. Cooperative manipulation for a mobile dual-arm robot using sequences of dynamic movement primitives
CN115122325A (en) Robust visual servo control method for anthropomorphic manipulator with view field constraint
CN116460860B (en) Model-based robot offline reinforcement learning control method
CN114942593B (en) Mechanical arm self-adaptive sliding mode control method based on disturbance observer compensation
CN118288294B (en) Robot vision servo and man-machine cooperative control method based on image variable admittance
CN110053044B (en) Model-free self-adaptive smooth sliding mode impedance control method for clamping serial fruits by parallel robot
CN111958584A (en) Trajectory planning method, device and system
CN111702767A (en) Manipulator impedance control method based on inversion fuzzy self-adaptation
CN115480583B (en) Visual servo tracking and impedance control method for flying operation robot
CN115351780A (en) Method for controlling a robotic device
CN111230882A (en) Self-adaptive variable impedance control method for fruit sorting parallel robot clamping mechanism
CN112847235B (en) Robot step force guiding assembly method and system based on deep reinforcement learning
CN115416024A (en) Moment-controlled mechanical arm autonomous trajectory planning method and system
Tong et al. Neural network based visual servo control under the condition of heavy loading
CN110434854B (en) Redundant manipulator visual servo control method and device based on data driving
CN111546344A (en) Mechanical arm control method for alignment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant