[go: up one dir, main page]
More Web Proxy on the site http://driver.im/

CN118288294B - A robot visual servo and human-machine collaborative control method based on image variable admittance - Google Patents

A robot visual servo and human-machine collaborative control method based on image variable admittance Download PDF

Info

Publication number
CN118288294B
CN118288294B CN202410591654.XA CN202410591654A CN118288294B CN 118288294 B CN118288294 B CN 118288294B CN 202410591654 A CN202410591654 A CN 202410591654A CN 118288294 B CN118288294 B CN 118288294B
Authority
CN
China
Prior art keywords
image
robot
visual servo
camera
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410591654.XA
Other languages
Chinese (zh)
Other versions
CN118288294A (en
Inventor
王冬瑞
马磊
孙永奎
林剑飞
鲁文儒
郝浩楠
邓泽宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN202410591654.XA priority Critical patent/CN118288294B/en
Publication of CN118288294A publication Critical patent/CN118288294A/en
Application granted granted Critical
Publication of CN118288294B publication Critical patent/CN118288294B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/163Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a robot vision servo and man-machine cooperative control method based on image admittance, which comprises the following steps: performing kinematic and dynamic modeling on the mechanical arm in the visual servo system to obtain a mechanical arm kinematic model and a dynamic model; calibrating a visual servo system through a chessboard method to obtain a conversion matrix between the mechanical arm end effector and the camera; constructing a first-order kinematic model and a second-order kinematic model of the visual servo; obtaining a dynamic model of the mechanical arm in a characteristic space; calculating and adjusting admittance parameters by using virtual positions of feature points in the image; determining a visual servo control law; the camera acquires image characteristic points of the two-dimensional code in real time, and the joint speed of the mechanical arm is obtained in real time according to a visual servo control law, so that the mechanical arm is subjected to motion control, and a visual servo process is completed. The invention solves the problem of inconsistent driving layers of the force sensor and the visual sensor, couples the visual sensor and the force sensor, and improves the flexibility of the system.

Description

一种基于图像变导纳的机器人视觉伺服与人机协同控制方法A robot visual servo and human-machine collaborative control method based on image variable admittance

技术领域Technical Field

本发明涉及一种基于图像变导纳的机器人视觉伺服与人机协同控制方法,属于机器人视觉伺服技术领域。The invention relates to a robot visual servo and human-machine collaborative control method based on image variable admittance, and belongs to the technical field of robot visual servo.

背景技术Background Art

视觉伺服是一种利用视觉特征(感知)来控制机器人进行机器人定位或轨迹跟踪的控制方法。与无传感器系统相比,视觉伺服系统在处理非结构化环境时具有更高的精度、灵活性和鲁棒性。机器人的力控制可以更好地适应环境的不确定性,提高机器人接触作业的精度和安全性。Visual servoing is a control method that uses visual features (perception) to control the robot for robot positioning or trajectory tracking. Compared with sensorless systems, visual servo systems have higher accuracy, flexibility and robustness when dealing with unstructured environments. The robot's force control can better adapt to environmental uncertainties and improve the accuracy and safety of robot contact operations.

目前我国电气化铁路接触网运维中,“检测”已经实现了自动化和部分智能化,主要利用6C系统、检查巡视、部件检验等方法和手段,分析和诊断接触网技术状态。“维修”则根据技术状态,实施临修、全面修和精测精修三级修理,但上述三级维修基本依赖人工。视觉伺服和人机协同是实现接触网智能检修半自动化及全自动化的重要技术。At present, in the operation and maintenance of electrified railway contact network in my country, "detection" has been automated and partially intelligent, mainly using 6C system, inspection and patrol, component inspection and other methods and means to analyze and diagnose the technical status of the contact network. "Maintenance" is carried out according to the technical status, including temporary repair, comprehensive repair and precise measurement and repair, but the above three levels of maintenance basically rely on manual labor. Visual servo and human-machine collaboration are important technologies for realizing semi-automatic and full-automatic intelligent maintenance of contact network.

目前,大多数人机交互控制技术都是基于直接显式闭环力控制或间接力控制,如阻抗和导纳控制。多传感器融合可以实现多层次、多空间的信息处理,增强了机器人系统的灵活性和智能性。然而,视觉和力传感器的结合并不是一件容易的事情。有学者设计了基于任务框架形式的控制策略,该策略通过控制来改进共享控制力沿着约束的方向,进行运动剩下的方向由视觉控制。还有学者提出了一种混合视觉/力控制方法来处理相机和约束表面的不确定性,通过跟踪末端执行器的图像特征,并对未知表面施加正交接触力。上述方法都利用了视觉和力觉信息进行运动控制,但存在视觉无法对力控制方向产生的误差进行校正的缺点。At present, most human-machine interactive control technologies are based on direct explicit closed-loop force control or indirect force control, such as impedance and admittance control. Multi-sensor fusion can realize multi-level and multi-space information processing, enhancing the flexibility and intelligence of the robot system. However, the combination of vision and force sensors is not an easy task. Some scholars have designed a control strategy based on the form of a task framework, which improves the shared control force by controlling the direction of the constraint, and the remaining direction of the movement is controlled by vision. Other scholars have proposed a hybrid vision/force control method to deal with the uncertainty of the camera and the constraint surface by tracking the image features of the end effector and applying orthogonal contact forces to the unknown surface. The above methods all use visual and force information for motion control, but there is a disadvantage that vision cannot correct the errors caused by the force control direction.

针对上述问题,有学者提出了一种基于任务空间反馈的自适应动态控制器,通过视觉/力混合控制解决瓶颈约束下的接触问题。将任务空间划分为约束空间、图像空间和力空间,完成视觉伺服操作和作用力轨迹跟踪控制任务。也有学者设计了基于图像视觉的一阶和二阶阻抗控制律。在上述控制律中,在像面上定义混合阻抗方程的弹性力矩分量,用于视觉误差的设计与计算。上述方法虽然将任务空间划分为图像特征空间,并设计了基于视觉误差的控制器,但其遵从性仍然是在笛卡尔空间中实现的。与笛卡尔空间或关节空间中的视觉/力混合控制不同,学者提出了特征空间中视觉/阻抗混合控制的广义框架。该框架是在视觉伺服系统的任务空间中定义的,不考虑视觉特征的选择。通过在特征空间中选择恒定导纳参数来实现顺应性。In response to the above problems, some scholars proposed an adaptive dynamic controller based on task space feedback to solve the contact problem under bottleneck constraints through vision/force hybrid control. The task space is divided into constraint space, image space and force space to complete the visual servo operation and force trajectory tracking control tasks. Some scholars have also designed first-order and second-order impedance control laws based on image vision. In the above control law, the elastic torque component of the mixed impedance equation is defined on the image plane for the design and calculation of visual errors. Although the above method divides the task space into image feature space and designs a controller based on visual errors, its compliance is still implemented in Cartesian space. Unlike vision/force hybrid control in Cartesian space or joint space, scholars proposed a generalized framework for vision/impedance hybrid control in feature space. This framework is defined in the task space of the visual servo system without considering the selection of visual features. Compliance is achieved by selecting a constant admittance parameter in the feature space.

发明内容Summary of the invention

为了克服现有技术中混合视觉/力控制存在视觉无法对力控制方向产生的误差进行校正的缺点,以及不变导纳控制中系统灵活性、安全性较差的问题,本发明旨在提供一种基于图像变导纳的机器人视觉伺服与人机协同控制方法。In order to overcome the shortcomings of hybrid vision/force control in the prior art that vision cannot correct the errors caused by the force control direction, and the problems of poor system flexibility and safety in constant admittance control, the present invention aims to provide a robot visual servo and human-machine collaborative control method based on image variable admittance.

本发明解决上述技术问题所提供的技术方案是:一种基于图像变导纳的机器人视觉伺服与人机协同控制方法,包括以下步骤:The present invention provides a technical solution to solve the above technical problems: a robot visual servo and human-machine collaborative control method based on image variable admittance, comprising the following steps:

步骤S100、对视觉伺服系统中的机械臂进行运动学和动力学建模获得机械臂运动学模型、动力学模型,对视觉伺服系统参考框架进行坐标系设定;Step S100, performing kinematic and dynamic modeling on the robot arm in the visual servo system to obtain a kinematic model and a dynamic model of the robot arm, and setting a coordinate system for a reference frame of the visual servo system;

步骤S200、通过棋盘法对视觉伺服系统进行标定,获得机械臂末端执行器与相机之间的转换矩阵;Step S200, calibrating the visual servo system by a chessboard method to obtain a transformation matrix between the end effector of the robot arm and the camera;

步骤S300、建立相机视觉的运动学和动力学模型,结合机械臂运动学模型和动力学模型构建视觉伺服的一阶运动学模型、二阶运动学模型;Step S300, establishing the kinematic and dynamic models of camera vision, and combining the kinematic model and dynamic model of the robot arm to construct the first-order kinematic model and the second-order kinematic model of the visual servo;

步骤S400、建立机械臂在图像特征空间下的动力学模型,结合机械臂运动学、动力学模型和视觉伺服的一阶运动学模型、二阶运动学模型获得机械臂在特征空间下的动力学模型;Step S400, establishing a dynamic model of the robot arm in the image feature space, combining the robot arm kinematics, dynamic model and the first-order kinematic model and second-order kinematic model of the visual servo to obtain the dynamic model of the robot arm in the feature space;

步骤S500、结合logistic sigmoid函数,利用图像中特征点的虚拟位置来实时计算和调整导纳参数;Step S500, combining the logistic sigmoid function, using the virtual position of the feature point in the image to calculate and adjust the admittance parameter in real time;

步骤S600、根据期望特征点和当前特征点的误差,确定视觉伺服控制律;Step S600, determining a visual servoing control law according to an error between a desired feature point and a current feature point;

步骤S700、相机实时采集二维码的图像特征点,根据视觉伺服控制律实时得到机械臂的关节速度,从而对机械臂进行运动控制,完成视觉伺服过程。Step S700: The camera collects the image feature points of the QR code in real time, and obtains the joint speed of the robot arm in real time according to the visual servo control law, thereby controlling the motion of the robot arm and completing the visual servo process.

进一步的技术方案是,所述机械臂运动学模型为:A further technical solution is that the kinematic model of the robotic arm is:

式中:分别为关节空间下的关节位置矢量、速度矢量和加速度矢量;为机械臂雅克比矩阵;分别为操作空间下的位置矢量、速度矢量和加速度矢量,为坐标系映射函数。Where: , , They are the joint position vector, velocity vector and acceleration vector in the joint space respectively; is the Jacobian matrix of the robot arm; , , are the position vector, velocity vector and acceleration vector in the operation space respectively, is the coordinate system mapping function.

进一步的技术方案是,所述机械臂动力学模型为:A further technical solution is that the mechanical arm dynamics model is:

式中:分别为关节空间下的关节位置矢量、速度矢量和加速度矢量;为机械手的惯性矩阵;为科里奥利矩阵和向心矩阵;为重力矢量;为库仑,粘性和静摩擦矢量;为受到的外部扭矩;为控制输入。Where: , , They are the joint position vector, velocity vector and acceleration vector in the joint space respectively; is the inertia matrix of the manipulator; are the Coriolis and centripetal matrices; is the gravity vector; are the Coulomb, viscosity and static friction vectors; is the external torque; For control input.

进一步的技术方案是,所述步骤S200的具体过程包括:A further technical solution is that the specific process of step S200 includes:

步骤S210、使用具有N个点的棋盘标定板,随机置于RealSense相机视野中;Step S210: using a chessboard calibration plate with N points and randomly placing it in the field of view of the RealSense camera;

步骤S220、使用RealSense相机对棋盘标定板进行图片采集;Step S220: Use the RealSense camera to collect images of the chessboard calibration plate;

步骤S230、计算图像中N个圆点的圆心像素坐标;Step S230, calculating the pixel coordinates of the centers of N dots in the image;

步骤S240、以平移向量和旋转向量的形式记录此时机械臂末端执行器的三维位姿;Step S240, recording the three-dimensional position and posture of the end effector of the robot arm at this time in the form of a translation vector and a rotation vector;

步骤S250、重复步骤S220-步骤S240一共16次;Step S250, repeating steps S220 to S240 a total of 16 times;

步骤S260、解算2D-3D数据,求出棋盘标定板与相机之间的坐标转换关系Step S260: Solve the 2D-3D data to find the coordinate transformation relationship between the chessboard calibration plate and the camera ;

步骤S270、根据坐标转换关系,计算机械臂末端执行器到相机的转换矩阵Step S270: According to the coordinate conversion relationship , calculate the transformation matrix from the end effector of the robot to the camera .

进一步的技术方案是,所述视觉伺服的一阶运动学模型为:A further technical solution is that the first-order kinematic model of the visual servo is:

式中:为图像特征点速度,为图像交互矩阵,为相机的运动速度。Where: is the image feature point velocity, is the image interaction matrix, is the camera's movement speed.

进一步的技术方案是,所述视觉伺服的二阶运动学模型为:A further technical solution is that the second-order kinematic model of the visual servo is:

式中:为机械臂末端执行器到相机的转换矩阵;为机械臂雅克比矩阵;为图像特征雅克比矩阵;分别为关节空间下的速度矢量和加速度矢量;为图像交互矩阵;为图像特征点的加速度矢量;为图像交互矩阵的一阶导;为机械臂末端执行器到相机的转换矩阵的一阶导;为机械臂雅克比矩阵的一阶导。Where: is the transformation matrix from the robot end effector to the camera; is the Jacobian matrix of the robot arm; is the image feature Jacobian matrix; , are the velocity vector and acceleration vector in the joint space respectively; is the image interaction matrix; is the acceleration vector of the image feature point; is the first-order derivative of the image interaction matrix; is the first-order derivative of the transformation matrix from the robot end effector to the camera; is the first-order derivative of the Jacobian matrix of the robot.

进一步的技术方案是,所述机械臂在特征空间下的动力学模型为:A further technical solution is that the dynamic model of the robot arm in the feature space is:

其中:in:

式中:为机械手惯量矩阵在相机框架内投影的逆;为特征雅克比矩阵;为机械臂雅克比矩阵;为机械臂末端执行器到相机的转换矩阵;为图像特征点的加速度矢量;为图像特征空间下的力矩;为图像特征空间下的外部接触力;为图像特征空间下的外部扰动;为机械臂雅克比矩阵的转置;为机械臂末端执行器到相机的转换矩阵;为相机坐标系下的力。Where: is the inverse of the projection of the manipulator inertia matrix into the camera frame; is the characteristic Jacobian matrix; is the Jacobian matrix of the robot arm; is the transformation matrix from the robot end effector to the camera; is the acceleration vector of the image feature point; is the moment in the image feature space; is the external contact force in the image feature space; is the external disturbance in the image feature space; is the transpose of the Jacobian matrix of the robot; is the transformation matrix from the robot end effector to the camera; is the force in the camera coordinate system.

进一步的技术方案是,所述步骤S500的具体过程为:选取logistic sigmoid函数结合特征空间下的导纳公式以及特征点距离差,通过下面的s型函数调节阻尼和刚度的变导纳参数;A further technical solution is that the specific process of step S500 is: select the logistic sigmoid function in combination with the admittance formula in the feature space and the feature point distance difference, and adjust the damping through the following s-shaped function and stiffness Variable admittance parameter;

式中:是影响曲线的增长率和拐点的参数;为logistic sigmoid函数;为导纳参数的最大值;为导纳参数的最小值;为虚拟路径的总长度;为当前特征点与期望特征点之间虚拟路径的长度。Where: and It is a parameter that affects the growth rate and inflection point of the curve; is the logistic sigmoid function; is the maximum value of the admittance parameter; is the minimum value of the admittance parameter; is the total length of the virtual path; It is the length of the virtual path between the current feature point and the expected feature point.

进一步的技术方案是,所述步骤S500中计算和调整阻尼时:选取为阻尼参数的最大值,为阻尼参数的最小值,,阻尼系数由这组参数的s型函数调节;A further technical solution is that the damping is calculated and adjusted in step S500 When: Select is the maximum value of the damping parameter, is the minimum value of the damping parameter, , , the damping coefficient is adjusted by the sigmoid function of this set of parameters;

计算和调整刚度时:选取为刚度参数的最大值,为刚度参数的最小值,,刚度系数由这组参数的s型函数调节。Calculating and adjusting stiffness When: Select is the maximum value of the stiffness parameter, is the minimum value of the stiffness parameter, , , the stiffness coefficient is adjusted by a sigmoid function of this set of parameters.

进一步的技术方案是,所述步骤S600的具体过程包括:A further technical solution is that the specific process of step S600 includes:

步骤S610、建立外部接触力与特征点误差之间的导纳控制率;Step S610, establishing an admittance control ratio between the external contact force and the characteristic point error;

式中:分别为虚质量、阻尼和刚度;为特征点误差;为特征点误差的加速度;为特征点误差的速度;为初始设定的期望特征点,为受到外部接触力所移动的特征点;为图像特征空间下的外部接触力;Where: , , are virtual mass, damping and stiffness respectively; is the feature point error; is the acceleration of the feature point error; is the speed of feature point error; is the expected feature point set initially, is the feature point moved by the external contact force; is the external contact force in the image feature space;

步骤S620、选取作为当前特征点与期望特征点之间误差,其中为图像特征点;Step S620: Select As the error between the current feature point and the expected feature point, is the image feature point;

步骤S630、通过对特征点误差的加速度进行一次和二次积分可以求得特征点误差的速度和受到外部接触力所移动的特征点Step S630: Acceleration of feature point error Performing the first and second integrations can obtain the speed of the characteristic point error and feature points moved by external contact forces ;

步骤S640、确定视觉伺服控制律为:Step S640: Determine the visual servo control law as:

式中:为相机的运动速度;为特征点误差的速度;为控制器增益;为图像交互矩阵的伪逆;为当前特征点与期望特征点之间误差。Where: is the camera's movement speed; is the speed of feature point error; is the controller gain; is the pseudo-inverse of the image interaction matrix; is the error between the current feature point and the expected feature point.

本发明具有以下有益效果:The present invention has the following beneficial effects:

1、与笛卡尔空间或关节空间中的视觉/力混合控制不同,该发明解决了力传感器和视觉传感器驱动层不一致的问题;1. Different from the visual/force hybrid control in Cartesian space or joint space, this invention solves the problem of inconsistent driving layers of force sensors and visual sensors;

2、与笛卡尔空间或关节空间中变导纳控制不同,该发明耦合了视觉传感器和力传感器,提高了系统的灵活性;2. Different from variable admittance control in Cartesian space or joint space, this invention couples visual sensors and force sensors to improve the flexibility of the system;

3、特征空间中视觉/力混合的不变导纳控制相比,变导纳的引入可以实时调整导纳参数,提高了系统的动态特性、人机交互性和安全性。3. Compared with the invariant admittance control of vision/force hybrid in feature space, the introduction of variable admittance can adjust the admittance parameters in real time, improving the dynamic characteristics, human-computer interaction and safety of the system.

附图说明BRIEF DESCRIPTION OF THE DRAWINGS

图1为本发明的流程框图;Fig. 1 is a flow chart of the present invention;

图2为本发明一实施例中变导纳系统中导纳参数的变化曲线;FIG2 is a curve showing a change in admittance parameters in a variable admittance system according to an embodiment of the present invention;

其中图2(a)为阻尼参数的变化图,图2(b)为刚度参数的变化图;Figure 2(a) shows the change of damping parameters, and Figure 2(b) shows the change of stiffness parameters;

图3为本发明一实施例中不同导纳参数的控制方法视觉伺服阶段执行效果示意图;3 is a schematic diagram of the execution effect of the visual servo stage of the control method for different admittance parameters in one embodiment of the present invention;

其中图3(a)为低阻尼高刚度不变导纳的执行效果,图3(b)是高阻尼低刚度不变导纳的执行效果,图3(c)为高阻尼高刚度不变导纳的执行效果,图3(d)为变导纳的执行效果;FIG3(a) shows the implementation effect of low damping, high stiffness and constant admittance, FIG3(b) shows the implementation effect of high damping, low stiffness and constant admittance, FIG3(c) shows the implementation effect of high damping, high stiffness and constant admittance, and FIG3(d) shows the implementation effect of variable admittance.

图4为本发明一实施例中四种不同导纳参数的控制方法执行结果示意图;FIG4 is a schematic diagram of the execution results of a control method for four different admittance parameters in one embodiment of the present invention;

其中图4(a)视觉伺服完成时间及其误差条的执行结果,图4(b)是实施例过程中速度和角速度及其误差条的执行结果,图4(c)为总任务完成时间及其误差条的执行效果,图4(d)为任务过程中力及其误差条的执行效果。Among them, Figure 4(a) shows the execution result of visual servoing completion time and its error bar, Figure 4(b) shows the execution result of speed and angular velocity and their error bars during the implementation process, Figure 4(c) shows the execution effect of total task completion time and its error bar, and Figure 4(d) shows the execution effect of force and its error bar during the task.

具体实施方式DETAILED DESCRIPTION

下面将结合附图对本发明的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。The technical solution of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only part of the embodiments of the present invention, not all of the embodiments. Based on the embodiments of the present invention, all other embodiments obtained by ordinary technicians in this field without creative work are within the scope of protection of the present invention.

如图1所示,一种基于图像变导纳的机器人视觉伺服与人机协同控制方法,具体包括以下步骤:As shown in FIG1 , a robot visual servo and human-machine collaborative control method based on image variable admittance specifically includes the following steps:

步骤S100、对视觉伺服系统中的机械臂进行运动学和动力学建模获得机械臂运动学模型、动力学模型,对视觉伺服系统参考框架进行坐标系设定;Step S100, performing kinematic and dynamic modeling on the robot arm in the visual servo system to obtain a kinematic model and a dynamic model of the robot arm, and setting a coordinate system for a reference frame of the visual servo system;

其中所述机械臂运动学模型为:The kinematic model of the robot arm is:

式中:分别为关节空间下的关节位置矢量、速度矢量和加速度矢量;为机械臂雅克比矩阵;分别为操作空间下的位置矢量、速度矢量和加速度矢量,为坐标系映射函数。Where: , , They are the joint position vector, velocity vector and acceleration vector in the joint space respectively; is the Jacobian matrix of the robot arm; , , are the position vector, velocity vector and acceleration vector in the operation space respectively, is the coordinate system mapping function.

所述机械臂动力学模型为:The mechanical arm dynamics model is:

其中:in:

式中:分别为关节空间下的关节位置矢量、速度矢量和加速度矢量;为机械手的惯性矩阵;为科里奥利矩阵和向心矩阵;为重力矢量;为库仑,粘性和静摩擦矢量;为受到的外部扭矩;为末端执行器坐标系中的外部扭矩;为控制输入;Where: , , They are the joint position vector, velocity vector and acceleration vector in the joint space respectively; is the inertia matrix of the manipulator; are the Coriolis and centripetal matrices; is the gravity vector; are the Coulomb, viscosity and static friction vectors; is the external torque; is the external torque in the end-effector coordinate system; is the control input;

步骤S200、通过棋盘法对视觉伺服系统进行标定,获得机械臂末端执行器与相机之间的转换矩阵;Step S200, calibrating the visual servo system by a chessboard method to obtain a transformation matrix between the end effector of the robot arm and the camera;

步骤S210、使用具有N个点的棋盘标定板,随机置于RealSense相机视野中;Step S210: using a chessboard calibration plate with N points and randomly placing it in the field of view of the RealSense camera;

步骤S220、使用RealSense相机对棋盘标定板进行图片采集;Step S220: Use the RealSense camera to collect images of the chessboard calibration plate;

步骤S230、计算图像中N个圆点的圆心像素坐标;Step S230, calculating the center pixel coordinates of N dots in the image;

步骤S240、以平移向量和旋转向量的形式记录此时机械臂末端执行器的三维位姿;Step S240, recording the three-dimensional position and posture of the end effector of the robot arm at this time in the form of a translation vector and a rotation vector;

步骤S250、重复步骤S220-步骤S240一共16次;Step S250, repeating steps S220 to S240 a total of 16 times;

步骤S260、解算2D-3D数据,求出棋盘标定板与相机之间的坐标转换关系Step S260: Solve the 2D-3D data to find the coordinate transformation relationship between the chessboard calibration plate and the camera ;

步骤S270、根据坐标转换关系,计算机械臂末端执行器到相机的转换矩阵Step S270: According to the coordinate conversion relationship , calculate the transformation matrix from the end effector of the robot to the camera ;

步骤S300、建立相机视觉的运动学和动力学模型,结合机械臂运动学模型和动力学模型构建视觉伺服的一阶运动学模型、二阶运动学模型;Step S300, establishing the kinematic and dynamic models of camera vision, and constructing the first-order kinematic model and the second-order kinematic model of visual servoing in combination with the kinematic model and the dynamic model of the robot arm;

视觉伺服的一阶运动学模型为:The first-order kinematic model of visual servoing is:

式中:为图像特征点速度,为图像交互矩阵,为相机的运动速度;Where: is the image feature point velocity, is the image interaction matrix, is the camera's movement speed;

视觉伺服的二阶运动学模型为:The second-order kinematic model of visual servoing is:

式中:为机械臂末端执行器到相机的转换矩阵;为机械臂雅克比矩阵;为图像特征雅克比矩阵;分别为关节空间下的速度矢量和加速度矢量;为图像交互矩阵;为图像特征点的加速度矢量;为图像交互矩阵的一阶导;为机械臂末端执行器到相机的转换矩阵的一阶导;为机械臂雅克比矩阵的一阶导;Where: is the transformation matrix from the robot end effector to the camera; is the Jacobian matrix of the robot arm; is the image feature Jacobian matrix; , are the velocity vector and acceleration vector in the joint space respectively; is the image interaction matrix; is the acceleration vector of the image feature point; is the first-order derivative of the image interaction matrix; is the first-order derivative of the transformation matrix from the robot end effector to the camera; is the first-order derivative of the Jacobian matrix of the robot;

步骤S400、建立机械臂在图像特征空间下的动力学模型,结合机械臂运动学、动力学模型和视觉伺服的一阶运动学模型、二阶运动学模型获得机械臂在特征空间下的动力学模型,具体表示为:Step S400: Establish a dynamic model of the robot in the image feature space, and obtain the dynamic model of the robot in the feature space by combining the robot kinematics, dynamic model, and the first-order kinematic model and second-order kinematic model of the visual servo, which is specifically expressed as:

其中分别为从摄像机坐标系映射到末端执行器坐标系的力矩变换矩阵和在摄像机坐标系中作用的力矩。结合特征雅可比矩阵,将扭转变换矩阵与扳手变换矩阵关联为in , and are the torque transformation matrix from the camera coordinate system to the end effector coordinate system and the torque acting in the camera coordinate system. Combined with the characteristic Jacobian matrix, the torsion transformation matrix is related to the wrench transformation matrix as follows: ;

对其进行改写得到机械臂在特征空间下的动力学模型;Rewrite it to get the dynamic model of the robot arm in the feature space;

其中:in:

式中:为机械手惯量矩阵在相机框架内投影的逆;为特征雅克比矩阵;为机械臂雅克比矩阵;为机械臂末端执行器到相机的转换矩阵;为图像特征点的加速度矢量;为图像特征空间下的力矩;为图像特征空间下的外部接触力;为图像特征空间下的外部扰动;为机械臂雅克比矩阵的转置;为机械臂末端执行器到相机的转换矩阵;为相机坐标系下的力;Where: is the inverse of the projection of the manipulator inertia matrix into the camera frame; is the characteristic Jacobian matrix; is the Jacobian matrix of the robot arm; is the transformation matrix from the robot end effector to the camera; is the acceleration vector of the image feature point; is the moment in the image feature space; is the external contact force in the image feature space; is the external disturbance in the image feature space; is the transpose of the Jacobian matrix of the robot; is the transformation matrix from the robot end effector to the camera; is the force in the camera coordinate system;

步骤S500、结合logistic sigmoid函数,利用图像中特征点的虚拟位置来实时计算和调整导纳参数;Step S500, combining the logistic sigmoid function, using the virtual position of the feature point in the image to calculate and adjust the admittance parameter in real time;

选取logistic sigmoid函数,具体表示为:Select the logistic sigmoid function, which is specifically expressed as:

式中:是影响曲线的增长率和拐点的参数;Where: and It is a parameter that affects the growth rate and inflection point of the curve;

结合特征空间下的导纳公式以及特征点距离差,通过下面的s型函数调节阻尼和刚度的变导纳参数;Combined with the admittance formula in the feature space and the distance difference of the feature points, the damping is adjusted by the following s-shaped function and stiffness Variable admittance parameter;

式中:是影响曲线的增长率和拐点的参数;为logistic sigmoid函数;为导纳参数的最大值;为导纳参数的最小值;为虚拟路径的总长度;为当前特征点与期望特征点之间虚拟路径的长度;Where: and It is a parameter that affects the growth rate and inflection point of the curve; is the logistic sigmoid function; is the maximum value of the admittance parameter; is the minimum value of the admittance parameter; is the total length of the virtual path; is the length of the virtual path between the current feature point and the expected feature point;

计算和调整阻尼时:选取为阻尼参数的最大值,为阻尼参数的最小值,,阻尼系数由这组参数的s型函数调节,其结果如图2(a)所示;Calculating and adjusting damping When: Select is the maximum value of the damping parameter, is the minimum value of the damping parameter, , , the damping coefficient is adjusted by the sigmoid function of this set of parameters, and the result is shown in Figure 2(a);

计算和调整刚度时:选取为刚度参数的最大值,为刚度参数的最小值,,刚度系数由这组参数的s型函数调节,其结果如图2(b)所示;Calculating and adjusting stiffness When: Select is the maximum value of the stiffness parameter, is the minimum value of the stiffness parameter, , , the stiffness coefficient is adjusted by the sigmoid function of this set of parameters, and the result is shown in Figure 2(b);

步骤S600、根据期望特征点和当前特征点的误差,确定视觉伺服控制律;Step S600, determining a visual servoing control law according to an error between a desired feature point and a current feature point;

步骤S610、建立外部接触力与特征点误差之间的导纳控制率;Step S610, establishing an admittance control ratio between the external contact force and the characteristic point error;

式中:分别为虚质量、阻尼和刚度;为特征点误差;为特征点误差的加速度;为特征点误差的速度;为初始设定的期望特征点,为受到外部接触力所移动的特征点;为图像特征空间下的外部接触力;Where: , , are virtual mass, damping and stiffness respectively; is the feature point error; is the acceleration of the feature point error; is the speed of feature point error; is the expected feature point set initially, is the feature point moved by the external contact force; is the external contact force in the image feature space;

步骤S620、选取作为当前特征点与期望特征点之间误差,其中为图像特征点;Step S620: Select As the error between the current feature point and the expected feature point, is the image feature point;

为初始设定的期望特征点,为受到外部接触力所移动的特征点; is the expected feature point set initially, is the feature point moved by the external contact force;

步骤S630、通过对特征点误差的加速度进行一次和二次积分可以求得特征点误差的速度和受到外部接触力所移动的特征点Step S630: Acceleration of feature point error Performing the first and second integrations can obtain the speed of the characteristic point error and feature points moved by external contact forces ;

步骤S640、确定视觉伺服控制律为:Step S640: Determine the visual servo control law as:

式中:为相机的运动速度;为特征点误差的速度;为控制器增益;为图像交互矩阵的伪逆;为当前特征点与期望特征点之间的误差;Where: is the camera's movement speed; is the speed of feature point error; is the controller gain; is the pseudo-inverse of the image interaction matrix; is the error between the current feature point and the expected feature point;

步骤S700、相机实时采集二维码的图像特征点,根据视觉伺服控制律实时得到机械臂的关节速度,从而对机械臂进行运动控制,完成视觉伺服过程;Step S700: The camera collects the image feature points of the two-dimensional code in real time, and obtains the joint speed of the robot arm in real time according to the visual servo control law, thereby controlling the motion of the robot arm and completing the visual servo process;

具体地,根据所设计的视觉伺服控制器在PC端进行编程,机械臂、力矩传感器、Visp平台之间通过ROS进行通信。实施例以接触网螺栓检修为背景,目的为提高接触网检修过程的效率及安全性。Specifically, programming is performed on the PC according to the designed visual servo controller, and the manipulator, torque sensor, and Visp platform communicate with each other through ROS. The embodiment is based on the background of overhead line bolt maintenance, and aims to improve the efficiency and safety of the overhead line maintenance process.

实施例包括三个阶段,第一阶段,识别检修目标,视觉伺服运动到检修位置;第二阶段,通过人机协同,将检修机械臂移动到下一个检修点;第三阶段,识别到下一个检修目标,视觉伺服运动到检修位置。The embodiment includes three stages. In the first stage, the maintenance target is identified and visual servoing is performed to the maintenance position. In the second stage, the maintenance robot arm is moved to the next maintenance point through human-machine collaboration. In the third stage, the next maintenance target is identified and visual servoing is performed to the maintenance position.

图3是实验效果展示,图3(a)为低阻尼高刚度的实验,其视觉伺服收敛较快,但是存在超调;图3(b)是高阻尼低刚度的实验,其视觉伺服收敛最慢;图3(c)为高阻尼高刚度的实验,由于高刚度的存在,其视觉伺服收敛速度比高阻尼低刚度要快,但比地租你高刚度要慢;图3(d)为变导纳的实验,其视觉伺服收敛最快且不存在超调。Figure 3 shows the experimental results. Figure 3(a) is an experiment with low damping and high stiffness. Its visual servo converges faster, but there is overshoot. Figure 3(b) is an experiment with high damping and low stiffness. Its visual servo converges the slowest. Figure 3(c) is an experiment with high damping and high stiffness. Due to the existence of high stiffness, its visual servo convergence speed is faster than that of high damping and low stiffness, but slower than that of high stiffness. Figure 3(d) is an experiment with variable admittance. Its visual servo converges the fastest and there is no overshoot.

图4是实验效果展示,图4(a)代表视觉伺服收敛的时间,变导纳控制效果最好;图4(b)代表视觉伺服运行过程中速度大小,变导纳控制速度仅低于低阻尼高阻抗的控制;图4(c)代表总任务的时间,变导纳控制具有最短的任务完成时间;图4(d)代表人机交互过程中交互力大小,变导纳控制的交互力大小仅比高阻尼低刚度的大;综合上述实验性能指标,变导纳在纯视觉伺服过程中具有最好的动态响应,在整个实验任务中就有较好的人机交互性中。该结果验证了本发明的有效性。Figure 4 is a display of the experimental results. Figure 4 (a) represents the convergence time of visual servoing, and the variable admittance control has the best effect; Figure 4 (b) represents the speed during the operation of visual servoing, and the variable admittance control speed is only lower than the low damping and high impedance control; Figure 4 (c) represents the total task time, and the variable admittance control has the shortest task completion time; Figure 4 (d) represents the interactive force during the human-computer interaction process, and the interactive force of the variable admittance control is only larger than that of the high damping and low stiffness; Based on the above experimental performance indicators, the variable admittance has the best dynamic response in the pure visual servoing process, and has good human-computer interactivity in the entire experimental task. This result verifies the effectiveness of the present invention.

与现有的技术相比,本发明的优点在于:(1) 与笛卡尔空间或关节空间中的视觉/力混合控制不同,该发明解决了力传感器和视觉传感器驱动层不一致的问题;(2) 与笛卡尔空间或关节空间中变导纳控制不同,该发明耦合了视觉传感器和力传感器,提高了系统的灵活性和人机交互性;(3) 与特征空间中视觉/力混合的不变导纳控制相比,变导纳的引入可以实时调整导纳参数,提高了系统的动态特性、人机交互性和安全性。Compared with the existing technology, the advantages of the present invention are: (1) Different from the vision/force hybrid control in Cartesian space or joint space, the present invention solves the problem of inconsistent driving layers of force sensors and vision sensors; (2) Different from the variable admittance control in Cartesian space or joint space, the present invention couples the vision sensor and the force sensor, thereby improving the flexibility and human-computer interactivity of the system; (3) Compared with the invariant admittance control of vision/force hybrid in feature space, the introduction of variable admittance can adjust the admittance parameters in real time, thereby improving the dynamic characteristics, human-computer interactivity and safety of the system.

以上所述,并非对本发明作任何形式上的限制,虽然本发明已通过上述实施例揭示,然而并非用以限定本发明,任何熟悉本专业的技术人员,在不脱离本发明技术方案范围内,可利用上述揭示的技术内容作出一些变动或修饰为等同变化的等效实施例,但凡是未脱离本发明技术方案的内容,依据本发明的技术实质对以上实施例所作的任何简单修改、等同变化与修饰,均属于本发明技术方案的范围内。The above description is not intended to limit the present invention in any form. Although the present invention has been disclosed through the above embodiments, it is not intended to limit the present invention. Any technician familiar with the profession can make some changes or modifications to equivalent embodiments of equivalent changes using the technical contents disclosed above without departing from the scope of the technical solution of the present invention. However, any simple modification, equivalent changes and modifications made to the above embodiments based on the technical essence of the present invention without departing from the content of the technical solution of the present invention are within the scope of the technical solution of the present invention.

Claims (7)

1.一种基于图像变导纳的机器人视觉伺服与人机协同控制方法,其特征在于,包括以下步骤:1. A robot visual servo and human-machine collaborative control method based on image variable admittance, characterized in that it includes the following steps: 步骤S100、对视觉伺服系统中的机械臂进行运动学和动力学建模获得机械臂运动学模型、动力学模型,对视觉伺服系统参考框架进行坐标系设定;Step S100, performing kinematic and dynamic modeling on the robot arm in the visual servo system to obtain a kinematic model and a dynamic model of the robot arm, and setting a coordinate system for a reference frame of the visual servo system; 步骤S200、通过棋盘法对视觉伺服系统进行标定,获得机械臂末端执行器与相机之间的转换矩阵;Step S200, calibrating the visual servo system by a chessboard method to obtain a transformation matrix between the end effector of the robot arm and the camera; 步骤S210、使用具有N个点的棋盘标定板,随机置于RealSense相机视野中;Step S210: using a chessboard calibration plate with N points and randomly placing it in the field of view of the RealSense camera; 步骤S220、使用RealSense相机对棋盘标定板进行图片采集;Step S220: Use the RealSense camera to collect images of the chessboard calibration plate; 步骤S230、计算图像中N个圆点的圆心像素坐标;Step S230, calculating the pixel coordinates of the centers of N dots in the image; 步骤S240、以平移向量和旋转向量的形式记录此时机械臂末端执行器的三维位姿;Step S240, recording the three-dimensional position and posture of the end effector of the robot arm at this time in the form of a translation vector and a rotation vector; 步骤S250、重复步骤S220-步骤S240一共16次;Step S250, repeating steps S220 to S240 a total of 16 times; 步骤S260、解算2D-3D数据,求出棋盘标定板与相机之间的坐标转换关系cToStep S260, solving the 2D-3D data to obtain the coordinate transformation relationship c T o between the chessboard calibration plate and the camera; 步骤S270、根据坐标转换关系cTo,计算机械臂末端执行器到相机的转换矩阵cTeStep S270, calculating the transformation matrix c Te from the robot end effector to the camera according to the coordinate transformation relationship c T o ; 步骤S300、建立相机视觉的运动学和动力学模型,结合机械臂运动学模型和动力学模型构建视觉伺服的一阶运动学模型、二阶运动学模型;Step S300, establishing the kinematic and dynamic models of camera vision, and combining the kinematic model and dynamic model of the robot arm to construct the first-order kinematic model and the second-order kinematic model of the visual servo; 步骤S400、建立机械臂在图像特征空间下的动力学模型,结合机械臂运动学、动力学模型和视觉伺服的一阶运动学模型、二阶运动学模型获得机械臂在图像特征空间下的动力学模型;Step S400, establishing a dynamic model of the robot arm in the image feature space, and combining the robot arm kinematics, dynamic model and the first-order kinematic model and second-order kinematic model of the visual servo to obtain the dynamic model of the robot arm in the image feature space; 其中:in: 式中:Mc(q)-1为机械手惯量矩阵在相机框架内投影的逆;Js为图像特征雅克比矩阵;eJe为机械臂雅克比矩阵;cTe为机械臂末端执行器到相机的转换矩阵;为图像特征点的加速度矢量;fs为图像特征空间下的力矩;fsext为图像特征空间下的外部接触力;fq为图像特征空间下的外部扰动;为机械臂雅克比矩阵的转置;为机械臂末端执行器到相机的转换矩阵;cfc为相机坐标系下的力;为科里奥利矩阵和向心矩阵;g(q)为重力矢量;为库仑,粘性和静摩擦矢量;τ为控制输入;q、分别为关节空间下的关节位置矢量、速度矢量;Where: Mc (q) -1 is the inverse of the projection of the manipulator inertia matrix in the camera frame; Js is the image feature Jacobian matrix; eJe is the manipulator Jacobian matrix; cTe is the transformation matrix from the manipulator end effector to the camera; is the acceleration vector of the image feature point; fs is the torque in the image feature space; fsext is the external contact force in the image feature space; fq is the external disturbance in the image feature space; is the transpose of the Jacobian matrix of the robot; is the transformation matrix from the end effector of the robot to the camera; c f c is the force in the camera coordinate system; are the Coriolis matrix and the centripetal matrix; g(q) is the gravity vector; are the Coulomb, viscosity and static friction vectors; τ is the control input; q, They are the joint position vector and velocity vector in the joint space respectively; 步骤S500、结合logistic sigmoid函数,利用图像中特征点的虚拟位置来实时计算和调整导纳参数;Step S500, combining the logistic sigmoid function, using the virtual position of the feature point in the image to calculate and adjust the admittance parameter in real time; 选取logistic sigmoid函数结合特征空间下的导纳公式以及特征点距离差,调节阻尼Ds和刚度Ks的变导纳参数;The logistic sigmoid function is selected to combine the admittance formula in the feature space and the distance difference of the feature points to adjust the variable admittance parameters of damping Ds and stiffness Ks ; 式中:γ1和γ2是影响曲线的增长率和拐点的参数;σ(es)为logistic sigmoid函数;Vmax为导纳参数的最大值;Vmin为导纳参数的最小值;ds为虚拟路径的总长度;es为当前特征点与期望特征点之间虚拟路径的长度;Where: γ 1 and γ 2 are parameters that affect the growth rate and inflection point of the curve; σ( es ) is the logistic sigmoid function; V max is the maximum value of the admittance parameter; V min is the minimum value of the admittance parameter; ds is the total length of the virtual path; es is the length of the virtual path between the current feature point and the expected feature point; 步骤S600、根据期望特征点和当前特征点的误差,确定视觉伺服控制律;Step S600, determining a visual servoing control law according to an error between a desired feature point and a current feature point; 步骤S700、相机实时采集二维码的图像特征点,根据视觉伺服控制律实时得到机械臂的关节速度,从而对机械臂进行运动控制,完成视觉伺服过程。Step S700: The camera collects the image feature points of the QR code in real time, and obtains the joint speed of the robot arm in real time according to the visual servo control law, thereby controlling the motion of the robot arm and completing the visual servo process. 2.根据权利要求1所述的一种基于图像变导纳的机器人视觉伺服与人机协同控制方法,其特征在于,所述机械臂运动学模型为:2. According to the method of robot visual servo and human-machine collaborative control based on image variable admittance in claim 1, it is characterized in that the kinematic model of the robot arm is: x=P(q)x=P(q) 式中:q、分别为关节空间下的关节位置矢量、速度矢量和加速度矢量;eJe为机械臂雅克比矩阵;x、分别为操作空间下的位置矢量、速度矢量和加速度矢量,P(q)为坐标系映射函数;为机械臂雅克比矩阵的一阶导。Where: q, are the joint position vector, velocity vector and acceleration vector in the joint space respectively; e J e is the Jacobian matrix of the robot; x, are the position vector, velocity vector and acceleration vector in the operation space respectively, and P(q) is the coordinate system mapping function; is the first-order derivative of the Jacobian matrix of the robot. 3.根据权利要求1所述的一种基于图像变导纳的机器人视觉伺服与人机协同控制方法,其特征在于,所述机械臂动力学模型为:3. According to the method of robot visual servo and human-machine collaborative control based on image variable admittance in claim 1, it is characterized in that the mechanical arm dynamics model is: 式中:q、分别为关节空间下的关节位置矢量、速度矢量和加速度矢量;M(q)为机械手的惯性矩阵;为科里奥利矩阵和向心矩阵;g(q)为重力矢量;为库仑,粘性和静摩擦矢量;τe为受到的外部扭矩;τ为控制输入。Where: q, are the joint position vector, velocity vector and acceleration vector in the joint space respectively; M(q) is the inertia matrix of the manipulator; are the Coriolis matrix and the centripetal matrix; g(q) is the gravity vector; are the Coulomb, viscosity and static friction vectors; τ e is the external torque; τ is the control input. 4.根据权利要求1所述的一种基于图像变导纳的机器人视觉伺服与人机协同控制方法,其特征在于,所述视觉伺服的一阶运动学模型为:4. According to the method of robot visual servo and human-machine collaborative control based on image variable admittance in claim 1, it is characterized in that the first-order kinematic model of the visual servo is: 式中:为图像特征点速度,Ls为图像交互矩阵,vc为相机的运动速度。Where: is the speed of the image feature points, Ls is the image interaction matrix, and vc is the movement speed of the camera. 5.根据权利要求4所述的一种基于图像变导纳的机器人视觉伺服与人机协同控制方法,其特征在于,所述视觉伺服的二阶运动学模型为:5. The robot visual servo and human-machine collaborative control method based on image variable admittance according to claim 4, characterized in that the second-order kinematic model of the visual servo is: 式中:cTe为机械臂末端执行器到相机的转换矩阵;eJe为机械臂雅克比矩阵;Js为图像特征雅克比矩阵;分别为关节空间下的速度矢量和加速度矢量;Ls为图像交互矩阵;为图像特征点的加速度矢量;为图像交互矩阵的一阶导;为机械臂末端执行器到相机的转换矩阵的一阶导;为机械臂雅克比矩阵的一阶导;fq为图像特征空间下的外部扰动。Where: c T e is the transformation matrix from the end effector of the robot to the camera; e J e is the Jacobian matrix of the robot; J s is the Jacobian matrix of the image feature; are the velocity vector and acceleration vector in the joint space respectively; Ls is the image interaction matrix; is the acceleration vector of the image feature point; is the first-order derivative of the image interaction matrix; is the first-order derivative of the transformation matrix from the robot end effector to the camera; is the first-order derivative of the Jacobian matrix of the robot arm; fq is the external disturbance in the image feature space. 6.根据权利要求1所述的一种基于图像变导纳的机器人视觉伺服与人机协同控制方法,其特征在于,所述步骤S500中计算和调整阻尼Ds时:选取Vmax=40为阻尼参数的最大值,Vmin=10为阻尼参数的最小值,γ1=25,γ2=2.5,阻尼系数由这组参数的logisticsigmoid函数调节;6. A robot visual servo and human-machine collaborative control method based on image variable admittance according to claim 1, characterized in that, when calculating and adjusting the damping Ds in step S500: V max = 40 is selected as the maximum value of the damping parameter, V min = 10 is selected as the minimum value of the damping parameter, γ 1 = 25, γ 2 = 2.5, and the damping coefficient is adjusted by the logistic sigmoid function of this group of parameters; 计算和调整刚度Ks时:选取Vmax=200为刚度参数的最大值,Vmin=100为刚度参数的最小值,γ1=50,γ2=1.5,刚度系数由这组参数的logistic sigmoid函数调节。When calculating and adjusting the stiffness Ks : select V max = 200 as the maximum value of the stiffness parameter, V min = 100 as the minimum value of the stiffness parameter, γ 1 = 50, γ 2 = 1.5, and the stiffness coefficient is adjusted by the logistic sigmoid function of this set of parameters. 7.根据权利要求1所述的一种基于图像变导纳的机器人视觉伺服与人机协同控制方法,其特征在于,所述步骤S600的具体过程包括:7. The robot visual servo and human-machine collaborative control method based on image variable admittance according to claim 1, characterized in that the specific process of step S600 includes: 步骤S610、建立外部接触力与特征点误差之间的导纳控制率;Step S610, establishing an admittance control ratio between the external contact force and the characteristic point error; e=sd-sw e= sd - sw 式中:Hs、Ds、Ks分别为虚质量、阻尼和刚度;e为特征点误差;为特征点误差的加速度;为特征点误差的速度;sd为初始设定的期望特征点,sw为受到外部接触力所移动的特征点;fsext为图像特征空间下的外部接触力;Where: Hs , Ds , Ks are virtual mass, damping and stiffness respectively; e is the characteristic point error; is the acceleration of the feature point error; is the speed of feature point error; sd is the expected feature point set initially, sw is the feature point moved by the external contact force; fsext is the external contact force in the image feature space; 步骤S620、选取es=s-sw作为当前特征点与期望特征点之间误差,其中s为图像特征点;Step S620, selecting e s =ss w as the error between the current feature point and the expected feature point, where s is the image feature point; 步骤S630、通过对特征点误差的加速度进行一次和二次积分可以求得特征点误差的速度和受到外部接触力所移动的特征点swStep S630: Acceleration of feature point error Performing the first and second integrations can obtain the speed of the characteristic point error and the characteristic point s w moved by the external contact force; 步骤S640、确定视觉伺服控制律为:Step S640: Determine the visual servo control law as: 式中:vc为相机的运动速度;为特征点误差的速度;α为控制器增益;为图像交互矩阵的伪逆;es为当前特征点与期望特征点之间误差。Where: v c is the camera's movement speed; is the speed of the characteristic point error; α is the controller gain; is the pseudo-inverse of the image interaction matrix; es is the error between the current feature point and the expected feature point.
CN202410591654.XA 2024-05-14 2024-05-14 A robot visual servo and human-machine collaborative control method based on image variable admittance Active CN118288294B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410591654.XA CN118288294B (en) 2024-05-14 2024-05-14 A robot visual servo and human-machine collaborative control method based on image variable admittance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410591654.XA CN118288294B (en) 2024-05-14 2024-05-14 A robot visual servo and human-machine collaborative control method based on image variable admittance

Publications (2)

Publication Number Publication Date
CN118288294A CN118288294A (en) 2024-07-05
CN118288294B true CN118288294B (en) 2024-10-18

Family

ID=91676866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410591654.XA Active CN118288294B (en) 2024-05-14 2024-05-14 A robot visual servo and human-machine collaborative control method based on image variable admittance

Country Status (1)

Country Link
CN (1) CN118288294B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118544362B (en) * 2024-07-26 2024-12-06 湖南大学 An impedance iterative learning control method for a human-guided vision-force fusion manipulator

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104942809A (en) * 2015-06-23 2015-09-30 广东工业大学 Mechanical arm dynamic fuzzy approximator based on visual servo system
CN115122325A (en) * 2022-06-30 2022-09-30 湖南大学 Robust visual servo control method for anthropomorphic manipulator with view field constraint

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3740352B1 (en) * 2018-01-15 2023-03-15 Technische Universität München Vision-based sensor system and control method for robot arms

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104942809A (en) * 2015-06-23 2015-09-30 广东工业大学 Mechanical arm dynamic fuzzy approximator based on visual servo system
CN115122325A (en) * 2022-06-30 2022-09-30 湖南大学 Robust visual servo control method for anthropomorphic manipulator with view field constraint

Also Published As

Publication number Publication date
CN118288294A (en) 2024-07-05

Similar Documents

Publication Publication Date Title
CN110039542B (en) Visual servo tracking control method with speed and direction control function and robot system
CN111546315B (en) Robot flexible teaching and reproducing method based on human-computer cooperation
Hashimoto A review on vision-based control of robot manipulators.
Wang et al. Uncalibrated visual tracking control without visual velocity
Cheah et al. Adaptive vision and force tracking control for robots with constraint uncertainty
US20210299860A1 (en) Method and system for robot action imitation learning in three-dimensional space
CN114912287B (en) Robot autonomous grabbing simulation system and method based on target 6D pose estimation
Sauvée et al. Image based visual servoing through nonlinear model predictive control
CN115122325A (en) Robust visual servo control method for anthropomorphic manipulator with view field constraint
CN110744541A (en) Vision-guided underwater mechanical arm control method
CN118288294B (en) A robot visual servo and human-machine collaborative control method based on image variable admittance
CN103331756A (en) Mechanical arm motion control method
CN109358507A (en) A Visual Servo Adaptive Tracking Control Method with Time-varying Performance Boundary Constraints
CN111515928B (en) Mechanical arm motion control system
CN112207835A (en) Method for realizing double-arm cooperative work task based on teaching learning
CN116834014A (en) Intelligent cooperative control method and system for capturing non-cooperative targets by space dobby robot
CN116276998A (en) Hand-eye-free calibration method and system for robotic arm grasping based on reinforcement learning
CN115416021A (en) Mechanical arm control method based on improved impedance control
CN111546344A (en) Mechanical arm control method for alignment
CN107894709A (en) Controlled based on Adaptive critic network redundancy Robot Visual Servoing
CN117428772A (en) A six-degree-of-freedom robotic arm visual grabbing method for multi-station scenarios
CN110434854A (en) A kind of redundancy mechanical arm Visual servoing control method and apparatus based on data-driven
JPH0635525A (en) Robot arm control method
Liu et al. Dynamic tracking of manipulators using visual feedback from an uncalibrated fixed camera
CN118544362B (en) An impedance iterative learning control method for a human-guided vision-force fusion manipulator

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant