CN110464471A - Surgical robot and control method and control device for its end instrument - Google Patents
- Publication number: CN110464471A
- Application number: CN201910854900.5A
- Authority
- CN
- China
- Prior art keywords
- pose information
- target pose
- instrument
- target
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods, e.g. tourniquets
- A61B17/00234—Surgical instruments, devices or methods, e.g. tourniquets for minimally invasive surgery
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
- A61B34/37—Master-slave robots
- A61B34/70—Manipulators specially adapted for use in surgery
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/361—Image-producing devices, e.g. surgical cameras
- A61B2017/00017—Electrical control of surgical instruments
- A61B2034/302—Surgical robots specifically adapted for manipulations within body cavities, e.g. within abdominal or thoracic cavities
Abstract
The present invention relates to a surgical robot and to a control method and control device for its end instruments. The method comprises: decomposing the initial target pose information of each controlled operation end instrument to obtain a respective set of pose information; combining the sets of pose information to calculate first target pose information of the distal end of the mechanical arm in a first coordinate system, second target pose information of the image end instrument in a second coordinate system, and third target pose information of each controlled operation end instrument in the second coordinate system; and, when every piece of target pose information is valid, controlling the mechanical arm to move according to the first target pose information, controlling the operation arm corresponding to the image end instrument to move according to the second target pose information so that the image end instrument is held at its current pose, and controlling the operation arm of each controlled operation end instrument to move according to the corresponding third target pose information. On the premise of keeping the field of view unchanged, the invention helps enlarge the intraoperative operating space of the operation end instruments.
Description
Technical Field
The invention relates to the field of medical instruments, and in particular to a surgical robot and a control method and control device for its end instruments.
Background
Minimally invasive surgery is a surgical approach in which the operation is performed inside a body cavity using modern medical instruments such as laparoscopes and thoracoscopes and related equipment. Compared with traditional open surgery, minimally invasive surgery offers advantages such as smaller wounds, less pain and faster recovery.
With the progress of science and technology, minimally invasive surgical robot technology has gradually matured and become widely applied. A minimally invasive surgical robot generally comprises a master console and a slave operation device. The master console includes a handle through which the doctor sends control commands to the slave operation device. The slave operation device comprises a mechanical arm and a plurality of operation arms arranged at the distal end of the mechanical arm, each operation arm carrying an end instrument. In the working state, the end instruments follow the motion of the handle so as to realize remote surgical operation.
The end instruments include image end instruments, which provide the surgical field of view, and operation end instruments, which perform the surgical operation. During surgery it is often desirable to give the operation end instruments a larger range of motion (i.e., a larger operating space, which can also be understood as greater flexibility) within a fixed field of view. Because this range is limited by the small range of motion of the operation arm itself, enlarging it by also moving the mechanical arm can be considered. However, changing the pose of the distal end of the mechanical arm easily causes an undesired change of the field of view, which can affect surgical safety.
Disclosure of Invention
Accordingly, it is necessary to provide a surgical robot capable of extending the range of motion of the operation end instrument while maintaining the field of view, and a method and a device for controlling the end instrument.
In one aspect, a control method of end instruments in a surgical robot is provided, the control method including: an acquisition step of acquiring initial target pose information of each controlled operation end instrument; a decomposition step of decomposing each piece of initial target pose information to obtain a respective set of pose information, wherein each set of pose information comprises first component target pose information of the distal end of the mechanical arm in a first coordinate system and second component target pose information of the controlled operation end instrument in a second coordinate system, the first coordinate system being the base coordinate system of the mechanical arm and the second coordinate system being the tool coordinate system of the mechanical arm; a first judgment step of judging the validity of each set of pose information; a calculation step of, when at least one set of pose information is valid and on the condition that the image end instrument is kept at its current pose, calculating, by combining the sets of pose information, first target pose information of the distal end of the mechanical arm in the first coordinate system, second target pose information of the image end instrument in the second coordinate system, and third target pose information of each controlled operation end instrument in the second coordinate system; a second judgment step of judging the validity of the first target pose information, the second target pose information and each piece of third target pose information; and a control step of, when the first target pose information, the second target pose information and each piece of third target pose information are valid, controlling the mechanical arm to move according to the first target pose information so that the distal end of the mechanical arm reaches the corresponding target pose, controlling the operation arm corresponding to the image end instrument to move according to the second target pose information so that the image end instrument keeps its current pose, and controlling the operation arm corresponding to each controlled operation end instrument to move according to the corresponding third target pose information so that the controlled operation end instrument reaches the corresponding target pose.
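As a concrete illustration of how an initial target pose splits across the two coordinate systems named above, the following is a minimal numpy sketch (Python and all variable names are illustrative assumptions, not part of the patent): a target pose of a controlled operation end instrument in the first coordinate system (the base frame of the mechanical arm) factors into a first component pose of the arm's distal end in the base frame and a second component pose of the instrument in the second coordinate system (the tool frame at the arm's distal end).

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation matrix and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Initial target pose of a controlled operation end instrument in the first (base) coordinate system.
T_base_instr = make_pose(rot_z(0.3), [0.40, 0.10, 0.25])

# One possible decomposition: pick a first component pose for the distal end of the mechanical arm ...
T_base_tool = make_pose(rot_z(0.1), [0.35, 0.05, 0.30])

# ... then the second component (instrument pose in the second / tool coordinate system) follows from
# T_base_instr = T_base_tool @ T_tool_instr, i.e. T_tool_instr = inv(T_base_tool) @ T_base_instr.
T_tool_instr = np.linalg.inv(T_base_tool) @ T_base_instr

# Recomposing the two components reproduces the initial target pose exactly.
assert np.allclose(T_base_tool @ T_tool_instr, T_base_instr)
```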
Wherein, when there is one controlled operation end instrument and its pose information set is valid, the calculating step includes: converting the current pose information of the image end instrument, on the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information, to obtain target pose information of the image end instrument in the second coordinate system; and assigning the first component target pose information as the first target pose information, assigning the converted target pose information of the image end instrument in the second coordinate system as the second target pose information, and assigning the second component target pose information as the third target pose information.
Wherein, when there are two or more controlled operation end instruments and one pose information set is valid while the other pose information sets are invalid, the calculating step comprises: on the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in the valid pose information set, converting the current pose information of the image end instrument to obtain target pose information of the image end instrument in the second coordinate system, and converting the current pose information of the controlled operation end instrument associated with each invalid pose information set to obtain second expected target pose information of that controlled operation end instrument in the second coordinate system; and assigning the first component target pose information in the valid pose information set as the first target pose information, assigning the target pose information of the image end instrument in the second coordinate system as the second target pose information, assigning the second component target pose information in the valid pose information set as the third target pose information of the associated controlled operation end instrument, and assigning each piece of second expected target pose information as the third target pose information of the corresponding controlled operation end instrument.
Wherein, when there are a plurality of controlled operation end instruments, two or more pose information sets are valid, and the remaining pose information sets are invalid, the calculating step includes: for each valid pose information set, converting the current pose information of the image end instrument, on the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in that set, to obtain target pose information of the image end instrument in the second coordinate system; judging the validity of each piece of target pose information of the image end instrument in the second coordinate system; when only one piece of target pose information of the image end instrument is valid, on the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in the valid pose information set associated with the valid target pose information of the image end instrument, converting the current pose information of each controlled operation end instrument associated with an invalid pose information set to obtain second expected target pose information of that controlled operation end instrument in the second coordinate system, and converting the initial target pose information of each controlled operation end instrument associated with the remaining valid pose information sets to obtain first expected target pose information of that controlled operation end instrument in the second coordinate system; judging the validity of each piece of first expected target pose information; if every piece of first expected target pose information is valid, assigning the first component target pose information in the pose information set associated with the valid target pose information of the image end instrument as the first target pose information, assigning the valid target pose information of the image end instrument as the second target pose information, assigning the second component target pose information in that pose information set as the third target pose information of the associated controlled operation end instrument, and assigning each piece of first expected target pose information and each piece of second expected target pose information as the third target pose information of the corresponding controlled operation end instrument; if at least part of the first expected target pose information is invalid, on the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in the valid pose information set associated with the valid target pose information of the image end instrument, converting the current pose information of each controlled operation end instrument associated with invalid first expected target pose information to obtain second expected target pose information of that controlled operation end instrument in the second coordinate system, and then assigning the first component target pose information in the pose information set associated with the valid target pose information of the image end instrument as the first target pose information, assigning the valid target pose information of the image end instrument as the second target pose information, assigning the second component target pose information in that pose information set as the third target pose information of the associated controlled operation end instrument, assigning each piece of valid first expected target pose information as the third target pose information of the corresponding controlled operation end instrument, and assigning each piece of second expected target pose information as the third target pose information of the corresponding controlled operation end instrument; and when two or more pieces of target pose information of the image end instrument are valid, selecting one of them as valid and treating the rest as invalid, and proceeding to the step performed when only one piece of target pose information of the image end instrument is valid.
Wherein, when there are two or more controlled operation end instruments and every pose information set is valid, the calculating step includes: for each valid pose information set, converting the current pose information of the image end instrument, on the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in that set, to obtain target pose information of the image end instrument in the second coordinate system; judging the validity of each piece of target pose information of the image end instrument in the second coordinate system; when only one piece of target pose information of the image end instrument is valid, on the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in the valid pose information set associated with the valid target pose information of the image end instrument, converting the initial target pose information of each controlled operation end instrument associated with the remaining valid pose information sets to obtain first expected target pose information of that controlled operation end instrument in the second coordinate system; judging the validity of each piece of first expected target pose information; if every piece of first expected target pose information is valid, assigning the first component target pose information in the pose information set associated with the valid target pose information of the image end instrument as the first target pose information, assigning the valid target pose information of the image end instrument as the second target pose information, assigning the second component target pose information in that pose information set as the third target pose information of the associated controlled operation end instrument, and assigning each piece of first expected target pose information as the third target pose information of the corresponding controlled operation end instrument; if at least part of the first expected target pose information is invalid, on the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in the valid pose information set associated with the valid target pose information of the image end instrument, converting the current pose information of each controlled operation end instrument associated with invalid first expected target pose information to obtain second expected target pose information of that controlled operation end instrument in the second coordinate system, and then assigning the first component target pose information in the pose information set associated with the valid target pose information of the image end instrument as the first target pose information, assigning the valid target pose information of the image end instrument as the second target pose information, assigning the second component target pose information in that pose information set as the third target pose information of the associated controlled operation end instrument, assigning each piece of valid first expected target pose information as the third target pose information of the corresponding controlled operation end instrument, and assigning each piece of second expected target pose information as the third target pose information of the corresponding controlled operation end instrument; and when two or more pieces of target pose information of the image end instrument are valid, selecting one of them as valid and treating the rest as invalid, and proceeding to the step performed when only one piece of target pose information of the image end instrument is valid.
Wherein the decomposing step comprises: acquiring an input operation command related to the task degrees of freedom of the distal end of the mechanical arm; and decomposing each piece of initial target pose information in combination with the task degrees of freedom to obtain a set of pose information comprising first component target pose information of the distal end of the mechanical arm in the first coordinate system and second component target pose information of the controlled operation end instrument in the second coordinate system.
Wherein the operation command comprises a first operation command and a second operation command; the first operation command is associated with the case where the task degrees of freedom of the distal end of the mechanical arm completely match the effective degrees of freedom of the mechanical arm; and the second operation command is associated with the case where the task degrees of freedom of the distal end of the mechanical arm completely match the pose (attitude) degrees of freedom among the effective degrees of freedom of the mechanical arm.
Wherein the decomposing step comprises: acquiring current pose information of the distal end of the mechanical arm in the first coordinate system; converting the initial target pose information to obtain the second component target pose information on the condition that the distal end of the mechanical arm is kept at the current pose corresponding to that current pose information; judging the validity of the second component target pose information; if the second component target pose information is valid, converting the initial target pose information to obtain the first component target pose information on the condition that the controlled operation end instrument reaches the target pose corresponding to the second component target pose information; and if the second component target pose information is invalid, adjusting it to a valid value, updating the second component target pose information, and converting the initial target pose information to obtain the first component target pose information on the condition that the controlled operation end instrument reaches the target pose corresponding to the updated second component target pose information.
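The paragraph above describes an iterative split: first try to realize the whole motion with the operation arm alone, and only move the mechanical arm for the part the instrument cannot reach. The sketch below is a hypothetical Python rendering of that idea; the validity test and the adjustment policy are stand-ins supplied by the caller and are not specified by the patent.

```python
import numpy as np

def second_component_if_arm_fixed(T_base_instr_target, T_base_tool_current):
    # Hold the distal end of the mechanical arm at its current pose and express the
    # instrument's initial target pose in the second (tool) coordinate system.
    return np.linalg.inv(T_base_tool_current) @ T_base_instr_target

def first_component_for(T_base_instr_target, T_tool_instr_target):
    # With the instrument at its tool-frame target, the arm's distal end must satisfy
    # T_base_tool = T_base_instr_target @ inv(T_tool_instr_target).
    return T_base_instr_target @ np.linalg.inv(T_tool_instr_target)

def decompose(T_base_instr_target, T_base_tool_current, is_valid, adjust_to_valid):
    """Decomposition step: second component first (arm held still), adjusted if invalid,
    then the first component that makes the composition equal the initial target."""
    T_tool_instr = second_component_if_arm_fixed(T_base_instr_target, T_base_tool_current)
    if not is_valid(T_tool_instr):
        T_tool_instr = adjust_to_valid(T_tool_instr)   # pull back into the instrument's range
    T_base_tool = first_component_for(T_base_instr_target, T_tool_instr)
    return T_base_tool, T_tool_instr

# Toy usage with a stand-in validity rule: the instrument may reach at most 0.1 m from the tool origin.
REACH = 0.10
def is_valid(T):
    return np.linalg.norm(T[:3, 3]) <= REACH
def adjust_to_valid(T):
    T = T.copy()
    d = np.linalg.norm(T[:3, 3])
    if d > REACH:
        T[:3, 3] *= REACH / d
    return T

T_target = np.eye(4); T_target[:3, 3] = [0.05, 0.00, 0.20]   # initial target pose in the base frame
T_base_tool, T_tool_instr = decompose(T_target, np.eye(4), is_valid, adjust_to_valid)
assert np.allclose(T_base_tool @ T_tool_instr, T_target)      # the two components recompose the target
```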
Wherein the first judging step includes: judging the validity of the first component target pose information obtained by converting the initial target pose information; if the first component target pose information is valid, judging that the pose information set is valid; and if the first component target pose information is invalid, judging that the pose information set is invalid.
Wherein the step of judging the validity of target pose information comprises: resolving the target pose information into target motion state parameters of each joint assembly in the corresponding arm body, the arm body being the mechanical arm or the operation arm; comparing the target motion state parameter of each joint assembly in the arm body with the motion state threshold of that joint assembly; if the target motion state parameter of one or more joint assemblies in the arm body exceeds the motion state threshold of the corresponding joint assembly, judging that the target pose information is invalid; and if the target motion state parameters of all joint assemblies in the arm body do not exceed the motion state thresholds of the corresponding joint assemblies, judging that the target pose information is valid.
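In code, the validity test just described amounts to resolving the target pose into per-joint motion state parameters (for example via inverse kinematics) and comparing each parameter with its threshold. A minimal sketch follows, assuming the per-joint targets have already been computed; the data layout and limit values are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class JointLimits:
    min_pos: float   # rad or m, depending on the joint type
    max_pos: float
    max_vel: float   # absolute velocity threshold

def target_pose_is_valid(joint_targets, joint_limits):
    """Return True only if every joint assembly's target motion state stays within its thresholds.

    joint_targets: list of (position, velocity) tuples, one per joint assembly of the arm body
    joint_limits:  list of JointLimits in the same order
    """
    for (pos, vel), lim in zip(joint_targets, joint_limits):
        if not (lim.min_pos <= pos <= lim.max_pos):
            return False   # a single out-of-range joint makes the whole target pose invalid
        if abs(vel) > lim.max_vel:
            return False
    return True

# Example: a three-joint arm body; the second joint's target velocity exceeds its threshold.
limits = [JointLimits(-1.5, 1.5, 2.0), JointLimits(-2.0, 2.0, 1.0), JointLimits(0.0, 0.1, 0.5)]
targets = [(0.2, 0.5), (1.0, 1.4), (0.05, 0.1)]
print(target_pose_is_valid(targets, limits))   # False
```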
In yet another aspect, a computer-readable storage medium is provided, which stores a computer program configured to be executed by one or more processors to implement the steps of the control method according to any one of the above embodiments.
In still another aspect, there is provided a control apparatus of a surgical robot, including: a memory for storing a computer program; and a processor for loading and executing the computer program; wherein the computer program is configured to be loaded by the processor and to execute steps implementing the control method according to any of the embodiments described above.
In yet another aspect, a surgical robot is provided, comprising: a mechanical arm; one or more operation arms carrying end instruments and arranged at the distal end of the mechanical arm, the end instruments comprising an image end instrument and one or more operation end instruments, each operation end instrument being configurable as a controlled operation end instrument; and a control device connected to the mechanical arm and the operation arms respectively, the control device being configured to perform the steps of the control method according to any one of the above embodiments.
The invention has the following beneficial effects:
on the premise of keeping the field of view unchanged, the operating space of the operation end instruments can be enlarged by means of the movement of the mechanical arm, which is convenient and safe in use.
Drawings
FIG. 1 is a schematic structural diagram of a surgical robot according to an embodiment of the present invention;
FIG. 2 is a partial schematic view of the surgical robot of FIG. 1;
FIG. 3 is a partial schematic view of the surgical robot of FIG. 1;
FIG. 4 is a flowchart of one embodiment of a method for controlling a distal end instrument in a surgical robot according to the present invention;
FIG. 5 is a flowchart of a method of controlling a distal end instrument in a surgical robot according to yet another embodiment of the present invention;
FIG. 6 is a simplified diagram of a surgical robot according to an embodiment of the present invention in a use state;
FIG. 7 is a flowchart of a method of controlling a distal end instrument in a surgical robot according to yet another embodiment of the present invention;
FIG. 8 is a simplified schematic view of a surgical robot in accordance with another embodiment of the present invention;
FIGS. 9-12 are flow charts of various embodiments of methods for controlling a distal instrument in a surgical robot in accordance with the present invention;
FIG. 13 is a schematic diagram of the mechanical arm of the surgical robot of FIG. 1;
FIG. 14 is an analysis diagram of spatial movement angles in the control method of the surgical robot according to the present invention;
FIG. 15 is a flowchart of a method of controlling a distal end instrument in a surgical robot according to yet another embodiment of the present invention;
FIG. 16 is a flow chart of an embodiment of a method for controlling a distal end instrument in a surgical robot according to the present invention in a two-to-one mode of operation;
FIG. 17 is a schematic view of the operation of an embodiment of the method for controlling a distal end instrument in a surgical robot according to the present invention in a two-to-one operation mode;
FIGS. 18-19 are flow charts of another embodiment of a method for controlling a distal end instrument in a surgical robot according to the present invention in a two-to-one mode of operation;
FIG. 20 is a schematic view illustrating the operation of the surgical robot in a two-to-one operation mode according to another embodiment of the method for controlling the distal end instrument;
FIG. 21 is a flowchart of an embodiment of a method of controlling a distal instrument in a surgical robot in a one-to-one mode of operation according to the present invention;
FIG. 22 is a schematic view illustrating the operation of an embodiment of the method for controlling a distal end instrument in a one-to-one operation mode of the surgical robot according to the present invention;
FIGS. 23-25 are flow charts of various embodiments of methods for controlling a distal instrument in a surgical robot in accordance with the present invention;
FIG. 26 is a schematic structural view of another embodiment of the surgical robot of the present invention.
Detailed Description
To facilitate an understanding of the invention, the invention will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
It will be understood that when an element is referred to as being "disposed on" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. When an element is referred to as being "coupled" to another element, it can be directly coupled to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for illustrative purposes only and do not represent the only embodiments. As used herein, the terms "distal" and "proximal" are used as terms of orientation that are conventional in the art of interventional medical devices, wherein "distal" refers to the end of the device that is distal from the operator during a procedure, and "proximal" refers to the end of the device that is proximal to the operator during a procedure.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. The term "each" as used herein includes one and more than one. The term "plurality", as used herein, includes two and more.
Fig. 1 to 3 are schematic structural diagrams and partial schematic diagrams of a surgical robot according to an embodiment of the present invention.
The surgical robot includes a master console 1 and a slave operation device 2. The master console 1 has a motion input device 11 and a display 12. The doctor sends control commands to the slave operation device 2 by operating the motion input device 11, so that the slave operation device 2 performs the corresponding operation, and observes the operation area through the display 12. The slave operation device 2 has an arm body comprising a mechanical arm 21 and operation arms 31 detachably attached to the distal end of the mechanical arm 21. The mechanical arm 21 includes a base and a connecting assembly connected in series, the connecting assembly having a plurality of joint assemblies; in the configuration illustrated in FIG. 1 these are the joint assemblies 210-214. An operation arm 31 comprises a connecting rod 32, a connecting assembly 33 and an end instrument 34 connected in sequence, wherein the connecting assembly 33 has a plurality of joint assemblies and the operation arm 31 adjusts the pose of the end instrument 34 by adjusting those joint assemblies; the end instruments 34 include an image end instrument 34A and operation end instruments 34B. More specifically, the operation arms 31 are mounted on the power mechanism 22 at the distal end of the mechanical arm 21 and are driven by driving portions in the power mechanism 22. The mechanical arm 21 and/or the operation arms 31 can follow the motion input device 11, and the mechanical arm 21 can also be dragged by an external force.
For example, the motion input device 11 may be connected to the master console 1 by a wire, or connected to it by an articulated link. The motion input device 11 may be hand-held or wearable (typically worn on the distal part of the hand, such as the fingers or the palm) and provides multiple effective degrees of freedom; it is, for example, configured in the form of a handle as shown in FIG. 3. In one case, the number of effective degrees of freedom of the motion input device 11 is lower than the number of task degrees of freedom defined for the distal end of the arm body; in another case, it is not lower than that number. The number of effective degrees of freedom of the motion input device 11 is at most 6. To control the motion of the arm body flexibly in Cartesian space, the motion input device 11 is exemplarily configured with 6 effective degrees of freedom, where the effective degrees of freedom of the motion input device 11 are the degrees of freedom that can follow the motion of the hand. This gives the doctor a large operating space and allows more meaningful data to be obtained by resolving each effective degree of freedom, so that the mechanical arm 21 can be controlled in almost any configuration.
The motion input device 11 follows the hand motion of the doctor and collects, in real time, the motion information of the device itself caused by the hand motion. This motion information can be resolved into position information, attitude information, velocity information, acceleration information and the like. The motion input device 11 includes, but is not limited to, a magnetic-navigation position sensor, an optical position sensor, or a linkage-type master manipulator.
In one embodiment, a method of controlling a tip instrument in a surgical robot is provided. As shown in fig. 4, the control method includes:
and step S1, an acquisition step, namely acquiring initial target pose information of each controlled operation terminal instrument.
Each operation end instrument 34B mounted on the power mechanism 22 can be configured as a controlled operation end instrument. One operator can control at most two controlled operation end instruments 34B at the same time; when there are more than two controlled operation end instruments 34B, two or more operators can control them cooperatively.
And step S2, a decomposition step, namely decomposing the pose information of each initial target to respectively obtain a group of pose information sets.
Each set of pose information comprises first component target pose information of the distal end of the mechanical arm in a first coordinate system and second component target pose information of the controlled operation end instrument in a second coordinate system. The first coordinate system refers to a base coordinate system of the robot arm, and the second coordinate system refers to a tool coordinate system of the robot arm.
In step S3, the first determination step is to determine the validity of each group of posture information sets.
Specifically, the validity of the two pieces of component pose information contained in each set of pose information is judged; when both pieces are valid, the set of pose information is determined to be valid, otherwise it is determined to be invalid.
Step S4, a calculation step, namely, under the condition that at least one set of pose information is valid and the image end instrument is kept at the current pose, calculating first target pose information of the distal end of the mechanical arm in a first coordinate system, second target pose information of the image end instrument in a second coordinate system, and third target pose information of each controlled operation end instrument in the second coordinate system by combining each set of pose information.
In this step, it is preferable that every controlled operation end instrument 34B reaches its first desired pose; if some controlled operation end instruments 34B cannot reach the first desired pose, they are instead expected to reach the second desired pose. The first desired pose is the target pose corresponding to the initial target pose information, which is associated with the motion information input through the motion input device and may or may not coincide with the current pose. The second desired pose is the current pose: when a controlled operation end instrument 34B cannot reach its first desired pose, it should at least hold its second desired pose so as to guarantee the safety of the operation.
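The fallback rule described above, first desired pose when reachable and otherwise the second desired pose (the current pose), can be expressed in a few lines. The sketch below is a simplified illustration only; the reachability predicate and the scalar "poses" are assumptions made for brevity.

```python
def choose_desired_poses(instrument_targets, reachable):
    """Prefer each controlled operation end instrument's first desired pose (its initial target);
    fall back to the second desired pose (its current pose) when the first cannot be reached safely.

    instrument_targets: dict mapping instrument name -> (first_desired_pose, current_pose)
    reachable:          predicate deciding whether a pose can be reached within limits
    """
    return {name: (first if reachable(first) else current)
            for name, (first, current) in instrument_targets.items()}

# Toy usage with scalar "poses": instrument 'b' cannot reach its target, so it holds its current pose.
targets = {"a": (0.3, 0.1), "b": (1.5, 0.4)}
print(choose_desired_poses(targets, reachable=lambda p: abs(p) <= 1.0))   # {'a': 0.3, 'b': 0.4}
```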
Step S5, a second judgment step of judging the validity of the first target pose information, the second target pose information, and each third target pose information.
And step S6, a control step, namely when the first target pose information, the second target pose information and each third target pose information are effective, controlling the mechanical arm to move according to the first target pose information so that the far end of the mechanical arm reaches the corresponding target pose, controlling the operation arm corresponding to the image end instrument to move according to the second target pose information so that the image end instrument keeps the current pose, and controlling the operation arm corresponding to the controlled operation end instrument to move according to each third target pose information so that the controlled operation end instrument reaches the corresponding target pose.
Through steps S1 to S6, when the acquired first target pose information, second target pose information and third target pose information are all valid, the controlled operation end instrument 34B can reach the first desired pose or the second desired pose while the image end instrument 34A is kept at its current pose to provide a stable field of view; in addition, in some scenarios the range of motion of the controlled operation end instrument 34B can be extended by combining the movement of the mechanical arm with that of the corresponding operation arm, which makes the surgery easier to carry out.
In one embodiment, specifically, in step S3, if all of the sets of pose information are judged to be invalid, it indicates that none of the controlled operation end instruments 34B (whether one or several) is adjustable; therefore the subsequent steps are not performed, i.e., the control is ended, and the process returns to step S1.
In one embodiment, referring to fig. 5 and 6, when the controlled operation end instrument is configured as one and the set of pose information is valid, the step S4, namely the calculating step, includes:
and S411, converting the current pose information of the instrument at the end of the image to obtain the target pose information of the instrument in the second coordinate system under the condition that the far end of the mechanical arm reaches the target pose corresponding to the first component target pose information.
The current pose information of each end instrument 34, including the image end instrument 34A and the controlled operation end instruments 34B, may be expressed in the first coordinate system, in the second coordinate system, or in another reference coordinate system; these representations are mutually convertible. In the examples herein, "current pose information" refers to current pose information in the second coordinate system, although current pose information in other coordinate systems could equally be used.
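The conversion in step S411 can be written as a single change of reference frame: the image end instrument must keep its current pose in the base frame, so its new tool-frame target is that fixed base-frame pose re-expressed relative to the arm tip's new pose. A minimal numpy sketch follows (the 4x4 homogeneous-transform convention and all names are assumptions, not notation from the patent).

```python
import numpy as np

def camera_target_in_new_tool_frame(T_base_cam_current, T_base_tool_new):
    """Second target pose of the image end instrument: its unchanged base-frame pose re-expressed
    in the tool frame after the distal end of the mechanical arm has moved to T_base_tool_new."""
    return np.linalg.inv(T_base_tool_new) @ T_base_cam_current

# Sanity check: composing the new arm-tip pose with the converted camera pose leaves the image end
# instrument exactly where it was in the base frame, i.e. the field of view is held constant.
T_base_cam = np.eye(4); T_base_cam[:3, 3] = [0.30, 0.00, 0.20]
T_base_tool_new = np.eye(4); T_base_tool_new[:3, 3] = [0.05, 0.02, 0.00]
T_tool_cam_target = camera_target_in_new_tool_frame(T_base_cam, T_base_tool_new)
assert np.allclose(T_base_tool_new @ T_tool_cam_target, T_base_cam)
```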
Step S412, assigning the first component target pose information to first target pose information, assigning the converted target pose information of the image end instrument in a second coordinate system to second target pose information, and assigning the second component target pose information to third target pose information.
In the above step S5, i.e., the second judgment step, since the set of pose information has already been judged to be valid, the first target pose information and the third target pose information are necessarily valid; therefore only the second target pose information actually needs to be judged.
And if the first target pose information, the second target pose information and the third target pose information are not all effective, ending the control.
If the first to third target pose information are all valid, the process proceeds to step S6, i.e., the control step. As shown in fig. 6, the mechanical arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose, the operation arm 31A is controlled to move according to the second target pose information so that the image end instrument 34A keeps its current pose, and the operation arm 31B is controlled to move according to the third target pose information so that the controlled operation end instrument 34B reaches the corresponding target pose (the first desired pose).
In one embodiment, referring to fig. 7 and 8, when the controlled operation end instrument is configured to be a plurality of instruments, and when it is determined in step S3 that only one pose information set is valid and the remaining pose information sets are invalid, step S4 is a calculation step including:
step S421, under the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in the effective pose information set, converting the current pose information of the image end instrument to obtain the target pose information of the image end instrument in the second coordinate system, and converting the current pose information of the controlled operation end instrument associated with each invalid pose information to obtain the second expected target pose information of the controlled operation end instrument in the second coordinate system.
Step S422, assigning the first component target pose information in the effective pose information set to first target pose information, assigning the target pose information of the image terminal instrument in a second coordinate system to second target pose information, assigning the second component target pose information in the effective pose information set to third target pose information of the associated controlled operation terminal instrument, and assigning each second expected target pose to third target pose information of the corresponding controlled operation terminal instrument.
In the above step S5, i.e., the second judgment step, it is only necessary to actually judge whether or not the second target pose information of the image end instrument 34A and the third target pose information of the controlled manipulation end instrument 34B associated with each invalid set of pose information are valid.
And if the first target pose information, the second target pose information and each third target pose information are not all effective, ending the control.
If the first target pose information, the second target pose information, and each third target pose information are all valid, the process proceeds to step S6, which is a control step. As shown in FIG. 8, if one set of pose information associated with the controlled manipulation tip instrument 34B1 is valid, and two sets of pose information associated with the controlled manipulation tip instruments 34B 2-34B 3 are invalid:
the control system controls the mechanical arm 21 to move according to the first target pose information so as to enable the power mechanism 22 at the far end of the mechanical arm to reach the corresponding target pose, controls the operation arm 31A to move according to the second target pose information so as to enable the image end instrument 34A to keep the current pose, controls the operation arm 31B to move according to the third target pose information of the controlled operation end instrument 34B1 so as to enable the controlled operation end instrument 34B to reach the corresponding target pose (the first desired pose), and controls the operation arms 31C to 31D to move according to the third target pose information of the controlled operation end instruments 34B2 to 34B3 so as to enable the controlled operation end instruments 34B2 to 34B3 to reach the corresponding target pose (the second desired pose, namely, the current pose is kept).
Although FIG. 8 shows the same mechanical arm 21 carrying three controlled operation end instruments 34B, steps S1 to S6 including steps S421 to S422 also apply, on the same principle, to the case where the same mechanical arm 21 carries two, or four or more, controlled operation end instruments 34B.
In one embodiment, referring to fig. 8 and 9, when there are a plurality of controlled operation end instruments, two or more pose information sets are valid, and the remaining pose information sets are invalid, the step S4 includes:
and step S431, respectively converting the current pose information of the instrument at the end of the image to obtain the target pose information of the instrument in the second coordinate system under the condition that the far end of the mechanical arm reaches the target pose corresponding to the first component target pose information in each effective pose information set.
And step S432, judging the effectiveness of the pose information of each target of the image end instrument in the second coordinate system.
In this step, if only one piece of the target pose information of the image end instrument is valid, the process proceeds to step S433; if two or more pieces are valid, the process proceeds to step S438.
Step S433, under the condition that the far end of the mechanical arm reaches the target pose associated with the target pose information of the effective image end instrument and corresponding to the first component target pose information in the effective pose information set, the current pose information of each controlled operation end instrument associated with the ineffective target pose information set is converted to obtain the second expected target pose information of the controlled operation end instrument in the second coordinate system, and the initial target pose information of each controlled operation end instrument associated with the rest effective target pose information sets is converted to obtain the first expected target pose information of the controlled operation end instrument in the second coordinate system.
In step S434, the validity of each first expected target posture information is determined.
In this step, if the position and posture information of each first expected target is valid, the step S435 is entered; if at least part of the first expected target posture information is invalid, the process proceeds to step S436.
Step S435, assign the first component target pose information in the pose information set associated with the valid target pose information of the image end instrument as the first target pose information, assign the valid target pose information of the image end instrument as the second target pose information, assign the second component target pose information in that pose information set as the third target pose information of the associated controlled operation end instrument, and assign each piece of first expected target pose information and each piece of second expected target pose information, respectively, as the third target pose information of the corresponding controlled operation end instrument.
That is, the valid first expected target pose information is assigned as the third target pose information of the associated controlled operation end instruments, and the second expected target pose information is assigned as the third target pose information of the controlled operation end instruments associated with the invalid pose information sets.
And step S436, under the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in the effective pose information set associated with the target pose information of the effective image end instrument, converting the current pose information of the controlled operation end instrument associated with each first expected target pose information which is invalid to obtain the second expected target pose information of the controlled operation end instrument in the second coordinate system.
Step S437, assign the first component target pose information in the set of pose information associated with the target pose information of the effective image end instrument to first target pose information, assign the target pose information of the effective image end instrument to second target pose information, assign the second component target pose information in the set of pose information associated with the target pose information of the effective image end instrument to third target pose information of the associated controlled operation end instrument, assign the effective first expected target pose information to third target pose information of the corresponding controlled operation end instrument, and assign the second expected target pose information obtained at different stages (i.e., conditions) to third target pose information of the corresponding controlled operation end instrument.
The valid first expected target pose information is correspondingly assigned as the third target pose information of the associated controlled operation end instruments. Each piece of second expected target pose information arises in one of two cases: one associated with an invalid pose information set, and the other associated with invalid first expected target pose information (where the first desired pose is converted into the second desired pose); it must therefore be assigned to the controlled operation end instrument associated with the respective case. In step S438, one piece of valid target pose information of the image end instrument is selected as valid and the rest are treated as invalid, and the process then proceeds to step S433, i.e., the step performed when only one piece of target pose information of the image end instrument is valid.
In step S438, multiple combinations, each taking a different one of the valid pieces of target pose information of the image end instrument as valid and the others as invalid, may be configured and evaluated through steps S433 to S437, and the control step of step S6 may then be performed with a combination for which the calculated first target pose information, second target pose information and each third target pose information are all valid.
In some embodiments, if the first target pose information, the second target pose information and each third target pose information calculated for two or more different combinations are all valid, one valid set of first target pose information, second target pose information and third target pose information may be selected for performing the control step of step S6 on the basis of certain metrics. These metrics include, but are not limited to, one or more of the range of motion, the motion velocity and the motion acceleration of the arm bodies (the mechanical arm or the operation arms); for example, the set used in step S6 may be the one with a smaller range of motion, a smaller motion velocity and/or a smaller motion acceleration of the arm bodies. The metrics also include, but are not limited to, the number of controlled operation end instruments that can reach the first desired pose rather than the second desired pose; the combination in which more instruments can reach the first desired pose is preferentially selected for control. These metrics may also be combined with one another to select an optimal set for the control of step S6.
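As one possible, purely illustrative realization of this selection (the cost function below is not prescribed by the patent), every valid combination can be scored and the best one kept, preferring combinations in which more controlled operation end instruments reach their first desired pose and, among those, the one requiring the least joint motion:

```python
def select_combination(candidates):
    """candidates: list of dicts with
         'num_first_desired':   number of controlled operation end instruments reaching their
                                first desired pose under this combination
         'joint_displacements': per-joint displacement magnitudes of all arm bodies involved
    Prefer more instruments at the first desired pose, then the smallest total motion."""
    return min(candidates,
               key=lambda c: (-c["num_first_desired"], sum(c["joint_displacements"])))

# Example: combination B moves slightly more overall but lets two instruments reach their
# first desired pose, so it is selected.
combos = [
    {"name": "A", "num_first_desired": 1, "joint_displacements": [0.1, 0.2, 0.1]},
    {"name": "B", "num_first_desired": 2, "joint_displacements": [0.2, 0.2, 0.1]},
]
print(select_combination(combos)["name"])   # B
```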
In some embodiments, priorities may be set for each controlled operation end instrument in advance or in real time (e.g., through voice instructions) during the operation of the controlled operation end instrument, and the target pose information of the valid image end instrument associated with the controlled operation end instrument with higher priority is sequentially selected as valid and the rest as invalid in step S438, so as to ensure that the control object with higher priority can achieve the first desired pose as much as possible. The setting of the priority can be set according to the authority of an operator, can also be set according to a specific control object, and can be flexibly configured.
Assume, as shown in fig. 8, that the pose information set associated with the controlled operation end instrument 34B1 is invalid and that both pose information sets associated with the controlled operation end instruments 34B2 and 34B3 are valid, and denote by C2 and C3 the target pose information of the image end instrument in the second coordinate system calculated in step S431 for the sets associated with 34B2 and 34B3, respectively.
Case (1.1): if it is determined in step S432 that both C2 and C3 are invalid, the control is terminated.
Case (1.2): assume that it is judged in step S432 that C2 is valid and C3 is invalid.
Based on the first component pose information of the set of pose information associated with the controlled operation tip instrument 34B2, the target pose information (second desired target pose information) of the controlled operation tip instrument 34B1 in the second coordinate system and the target pose information (first desired target pose information) of the controlled operation tip instrument 34B3 in the second coordinate system are calculated according to step S433.
Assuming that the first desired target posture information of the controlled operation tip instrument 34B3 in the second coordinate system is valid as judged according to step S434:
after the value is assigned in step S435, the process proceeds to step S5, and when the first target pose information, the second target pose information, and each of the third target pose information are valid, the process proceeds to step S6. In step S6, the manipulator arm 21 is controlled to move so that the power mechanism 22 at the distal end thereof reaches the corresponding target pose according to the first target pose information, the manipulator arm 31A is controlled to move so that the image-end instrument 34A is kept at the current pose according to the second target pose information, the manipulator arm 31B is controlled to move so that the controlled operation end instrument 34B reaches the corresponding target pose (the second desired pose, i.e., the current pose is kept) according to the third target pose information of the controlled operation end instrument 34B1, and the manipulator arms 31C to 31D are controlled to move so that the controlled operation end instruments 34B2 to 34B3 reach the corresponding target pose (the first desired pose) according to the third target pose information of the controlled operation end instruments 34B2 to 34B3, respectively.
Assuming that the first desired target posture information of the controlled operation tip instrument 34B3 in the second coordinate system is determined to be invalid according to step S434:
based on the first component target pose information of the set of pose information associated with the controlled operation end instrument 34B2, the second desired target pose information of the controlled operation end instrument 34B3 in the second coordinate system is calculated according to step S436 and assigned in step S437, and the process proceeds to step S5, and proceeds to step S6 when the first target pose information, the second target pose information, and each third target pose information are valid. In step S6, the manipulator arm 21 is controlled to move so that the power mechanism 22 at the distal end thereof reaches the corresponding target pose based on the first target pose information, the manipulator arm 31A is controlled to move so that the image end instrument 34A is kept at the current pose based on the second target pose information, the manipulator arms 31B and 31D are controlled to move so that the controlled operation end instruments 34B1 and 34B3 reach the corresponding target poses (the second desired pose, i.e., the current pose) based on the third target pose information of the controlled operation end instruments 34B1 and 34B3, respectively, and the manipulator arm 31C is controlled to move so that the controlled operation end instrument 34B2 reaches the corresponding target pose (the first desired pose) based on the third target pose information of the controlled operation end instrument 34B2.
Case (1.3): assume that both C2 and C3 are valid as determined in step S432.
C2 may be selected as valid and C3 as invalid, or C2 as invalid and C3 as valid; either selection then proceeds as in case (1.2) above.
Although fig. 8 shows the same mechanical arm 21 carrying three controlled operation end instruments 34B, steps S1 to S6 including steps S431 to S438 are, by the same principle, also applicable to the case where the same mechanical arm 21 carries four or more controlled operation end instruments 34B.
In one embodiment, referring to fig. 8 and fig. 10, when there are two or more controlled operation end instruments and each pose information set is valid, step S4 includes:
and step S441, respectively converting the current pose information of the image end instrument to obtain the target pose information of the image end instrument in a second coordinate system under the condition that the far end of the mechanical arm reaches the target pose corresponding to the first component target pose information in each effective pose information set.
And step S442, judging the effectiveness of the pose information of each target of the image end instrument in the second coordinate system.
In this step, if only one of the target pose information of the image end instrument is valid, the process proceeds to step S443; if more than two of the target pose information of the image end instrument are valid, the process proceeds to step S448.
Step S443, under the condition that the distal end of the manipulator reaches the target pose corresponding to the first component target pose information in the effective pose information set associated with the target pose information of the effective image end instrument, converting the initial target pose information of each controlled operation end instrument associated with the remaining effective target pose information sets to obtain the first expected target pose information of the controlled operation end instrument in the second coordinate system.
In step S444, the validity of each first expected target posture information is determined.
In this step, if each piece of first desired target pose information is valid, the process proceeds to step S445; if at least part of the first desired target pose information is invalid, the process proceeds to step S446.
Step S445, assign first component target pose information in the set of pose information associated with the target pose information of the active image end instrument to first target pose information, assign the target pose information of the active image end instrument to second target pose information, assign second component target pose information in the set of pose information associated with the target pose information of the active image end instrument to third target pose information of the associated controlled operation end instrument, and assign each first desired target pose information to third target pose information of the corresponding controlled operation end instrument.
Step S446, under the condition that the far end of the mechanical arm reaches the target pose corresponding to the first component target pose information in the effective pose information set associated with the target pose information of the effective image end instrument, the current pose information of the controlled operation end instrument associated with each first expected target pose information which is invalid is converted to obtain the second expected target pose information of the controlled operation end instrument in the second coordinate system.
Step S447, assign first component target pose information in the set of pose information associated with the target pose information of the active image end instrument to first target pose information, assign the target pose information of the active image end instrument to second target pose information, assign second component target pose information in the set of pose information associated with the target pose information of the active image end instrument to third target pose information of the associated controlled operation end instrument, assign each of the active first desired target pose information to third target pose information of the corresponding controlled operation end instrument, and assign each of the second desired target pose information to third target pose information of the corresponding controlled operation end instrument.
The second desired target pose information here is associated only with the case where the first desired target pose information is invalid (the first desired pose is converted into the second desired pose).
In step S448, one of the valid target pose information of the image end instrument is selected to remain valid and the rest are treated as invalid, and the process then proceeds to step S443 with only one valid target pose information of the image end instrument.
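For orientation, the overall flow of steps S441 to S448 can be summarized in the following rough Python sketch. It is not the patent's implementation: the PoseSet fields, the convert_under_arm_pose and is_valid callables, and the rule of simply keeping the first valid candidate in step S448 are all placeholders for the conversions, validity checks, and selection strategies described in the text.

```python
from typing import Callable, Dict, NamedTuple, Optional, Tuple

Pose = object  # opaque pose representation for this sketch

class PoseSet(NamedTuple):
    first_component: Pose     # arm distal-end target pose in the first coordinate system
    second_component: Pose    # instrument target pose in the second coordinate system
    initial_target: Pose      # instrument initial target pose in the first coordinate system
    current: Pose             # instrument current pose in the first coordinate system

def compute_targets(
    sets: Dict[str, PoseSet],
    image_current: Pose,
    convert_under_arm_pose: Callable[[Pose, Pose], Pose],
    is_valid: Callable[[Pose], bool],
) -> Optional[Tuple[Pose, Pose, Dict[str, Pose]]]:
    # S441/S442: image end instrument target pose for each valid candidate set.
    image_targets = {k: convert_under_arm_pose(image_current, s.first_component)
                     for k, s in sets.items()}
    valid_keys = [k for k, c in image_targets.items() if is_valid(c)]
    if not valid_keys:
        return None                                   # no valid candidate: control ends
    k = valid_keys[0]                                 # S448: keep one, treat the rest as invalid
    chosen = sets[k]
    third = {k: chosen.second_component}              # the chosen set keeps its own split
    for j, s in sets.items():                         # S443/S444 for the remaining sets
        if j == k:
            continue
        first_desired = convert_under_arm_pose(s.initial_target, chosen.first_component)
        third[j] = (first_desired if is_valid(first_desired)
                    else convert_under_arm_pose(s.current, chosen.first_component))  # S446
    # S445/S447: first, second and third target pose information, respectively.
    return chosen.first_component, image_targets[k], third
```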
Assume that all three sets of pose information associated with the controlled operation end instruments 34B1 to 34B3 are valid, and that the target pose information of the image end instrument in the second coordinate system, calculated according to step S441 above and associated with the controlled operation end instruments 34B1, 34B2, and 34B3, is C1, C2, and C3, respectively, as shown in fig. 8.
Case (2.1): if it is determined in step S442 that none of C1-C3 is valid, the control is ended.
Case (2.2): assume that only C1 is valid and C2 and C3 are invalid as determined by step S442.
Based on the first component target pose information of the set of pose information associated with the controlled operation end instrument 34B1, the target pose information (first desired target pose information) of the controlled operation end instruments 34B2 and 34B3 in the second coordinate system is calculated according to step S443.
Assuming that the first desired target posture information of the controlled operation tip instruments 34B2, 34B3 in the second coordinate system is valid according to the step S444:
after the values are assigned in step S445, the process proceeds to step S5, and when the first target pose information, the second target pose information, and each third target pose information are valid, the process proceeds to step S6. In step S6, the manipulator arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose, the manipulator arm 31A is controlled to move according to the second target pose information so that the image end instrument 34A maintains the current pose, and the manipulator arms 31B to 31D are controlled to move according to the third target pose information of the controlled operation end instruments 34B1 to 34B3, respectively, so that these instruments reach the corresponding target poses (the first desired pose).
Assume that it is determined according to step S444 that the first desired target pose information of the controlled operation end instruments 34B2 and 34B3 in the second coordinate system is at least partially invalid, for example that the first desired target pose information of the controlled operation end instrument 34B2 in the second coordinate system is valid while that of the controlled operation end instrument 34B3 is invalid:
based on the first component target pose information of the set of pose information associated with the controlled operation end instrument 34B1, the second desired target pose information of the controlled operation end instrument 34B3 in the second coordinate system is calculated according to step S446, and after the values are assigned in step S447, the process proceeds to step S5, and when the first target pose information, the second target pose information, and each third target pose information are all valid, the process proceeds to step S6. In step S6, the manipulator arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose, the manipulator arm 31A is controlled to move according to the second target pose information so that the image end instrument 34A maintains the current pose, the manipulator arms 31B and 31C are controlled to move according to the third target pose information of the controlled operation end instruments 34B1 and 34B2, respectively, so that these instruments reach the corresponding target poses (the first desired pose), and the manipulator arm 31D is controlled to move according to the third target pose information of the controlled operation end instrument 34B3 so that it reaches the corresponding target pose (the second desired pose, i.e., the current pose is kept).
Assume that it is determined according to step S444 that the first desired target pose information of both controlled operation end instruments 34B2 and 34B3 in the second coordinate system is invalid:
based on the first component target pose information of the set of pose information associated with the controlled operation end instrument 34B1, the second desired target pose information of the controlled operation end instruments 34B2 and 34B3 in the second coordinate system is calculated according to step S446, and after the values are assigned in step S447, the process proceeds to step S5, and when the first target pose information, the second target pose information, and each third target pose information are all valid, the process proceeds to step S6. In step S6, the manipulator arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose, the manipulator arm 31A is controlled to move according to the second target pose information so that the image end instrument 34A maintains the current pose, the manipulator arm 31B is controlled to move according to the third target pose information of the controlled operation end instrument 34B1 so that it reaches the corresponding target pose (the first desired pose), and the manipulator arms 31C and 31D are controlled to move according to the third target pose information of the controlled operation end instruments 34B2 and 34B3 so that these instruments reach the corresponding target poses (the second desired pose).
Although fig. 8 shows the same mechanical arm 21 carrying three controlled operation end instruments 34B, steps S1 to S6 including steps S441 to S448 are, by the same principle, also applicable to the case where the same mechanical arm 21 carries two or more controlled operation end instruments 34B.
Through the above embodiments, by combining the movements of the mechanical arm 21 and the operation arms 31, it can be ensured as far as possible that the controlled operation end instrument 34B reaches the first desired pose required for the operation while the image end instrument 34A is kept at the current pose, and, when the first desired pose cannot be reached, that the controlled operation end instrument 34B is kept at the current pose, thereby reducing the surgical risk.
In an embodiment, as shown in fig. 11, the step S1, namely the obtaining step, includes:
in step S11, motion information of the movement of the teleoperation controlled object input by the motion input device is acquired.
Here, the controlled object refers specifically to a controlled operation end instrument.
And step S12, analyzing the motion information into initial target pose information of the controlled object.
Typically, the motion information may be pose information of the motion-input device.
In one embodiment, as shown in fig. 12, the step S12 includes:
and step S121, analyzing and mapping the motion information into pose increment information of the controlled object.
Where "mapping" is a transformation relationship, it may include natural mapping relationships and unnatural mapping relationships.
The natural mapping relationship is a one-to-one correspondence between the controlled motion-input device and the controlled object: horizontal movement increment information maps to horizontal movement increment information, vertical movement increment information to vertical movement increment information, forward and backward movement increment information to forward and backward movement increment information, yaw rotation increment information to yaw rotation increment information, pitch rotation increment information to pitch rotation increment information, and roll rotation increment information to roll rotation increment information.
The unnatural mapping relationship is any mapping relationship other than the natural one. In one example, the unnatural mapping includes, but is not limited to, a conversion mapping, which in turn includes, but is not limited to, the one-to-one mapping of the horizontal movement increment information, the vertical movement increment information, and the rotation increment information in the fixed coordinate system onto the yaw increment information, the pitch increment information, and the roll increment information of the controlled object. Configuring such an unnatural mapping makes the controlled object easier to control in some cases, such as the two-to-one operation mode.
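The difference between the two kinds of mapping can be illustrated with a short sketch. The 6-vector increment layout [dx, dy, dz, d_yaw, d_pitch, d_roll] and the gain value are assumptions made for the example, not details taken from the patent.

```python
import numpy as np

def natural_mapping(device_increment: np.ndarray) -> np.ndarray:
    # One-to-one correspondence: translation increments drive translation,
    # rotation increments drive rotation.
    return device_increment.copy()

def conversion_mapping(device_increment: np.ndarray, gain: float = 1.0) -> np.ndarray:
    # Example of an unnatural (conversion) mapping: the device's horizontal,
    # vertical and rotational increments drive the controlled object's yaw,
    # pitch and roll increments, as described above.
    dx, dy, dz, d_yaw, d_pitch, d_roll = device_increment
    obj = np.zeros(6)
    obj[3] = gain * dx      # horizontal movement -> yaw increment
    obj[4] = gain * dy      # vertical movement   -> pitch increment
    obj[5] = gain * d_roll  # rotation            -> roll increment
    return obj
```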
Step S122, position information of each joint component in the controlled object is acquired.
The corresponding position information can be obtained from position sensors, such as encoders, installed at each joint assembly of the controlled object. In the exemplary embodiment illustrated in fig. 1 and fig. 13, the mechanical arm 21 has 5 degrees of freedom, and a set of position information (d1, θ2, θ3, θ4, θ5) can be detected by means of the position sensors.
And step S123, calculating the current pose information of the controlled object in the first coordinate system according to the position information of each joint assembly.
Here the calculation can generally be performed by forward (positive) kinematics. A kinematic model is established from the fixed point of the mechanical arm 21 (namely the point C; the origin of the tool coordinate system of the mechanical arm 21 lies at this fixed point) to the base of the mechanical arm 21, and the conversion matrix between the point C and the base, T_B_C, is output. It can be calculated as the product of the successive joint transforms, for example T_B_C = T1(d1)·T2(θ2)·T3(θ3)·T4(θ4)·T5(θ5).
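The forward-kinematics calculation of steps S122 to S123 might look like the following sketch. The joint layout (one prismatic joint plus four revolute joints) and the link lengths are hypothetical placeholders; the real arm's kinematic parameters are not given in the text.

```python
import numpy as np

def trans_z(d):
    T = np.eye(4); T[2, 3] = d; return T

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4); T[:2, :2] = [[c, -s], [s, c]]; return T

def trans_x(a):
    T = np.eye(4); T[0, 3] = a; return T

def forward_kinematics(d1, th2, th3, th4, th5, links=(0.3, 0.25, 0.2, 0.1)):
    # Chain the per-joint homogeneous transforms from the base B to the point C.
    a2, a3, a4, a5 = links
    T = trans_z(d1)
    for theta, a in zip((th2, th3, th4, th5), (a2, a3, a4, a5)):
        T = T @ rot_z(theta) @ trans_x(a)
    return T  # 4x4 pose of the point C in the base (first) coordinate system

T_BC = forward_kinematics(0.15, 0.1, -0.2, 0.3, 0.0)
position = T_BC[:3, 3]  # current position of the point C
```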
And step S124, calculating initial target pose information of the controlled object in the first coordinate system by combining the incremental pose information and the current pose information.
Based on the conversion matrix T_B_C between the point C and the base, the pose information of the point C in the fixed coordinate system is acquired. Assuming that the coordinate system of the point C is rotated to the attitude described by this matrix without changing the position of the point C, the rotation axis angles [θx0, θy0, θz0] can be obtained, as shown in fig. 14, where θx0 is the roll angle, θy0 is the yaw angle, and θz0 is the pitch angle. In the mechanical arm 21 shown in fig. 13, however, the roll degree of freedom is absent, so θx0 is in fact not adjustable. The fixed coordinate system may, for example, be defined at the display, but it may of course also be defined at any location that is at least immovable during the operation.
Further, specifically, in the control step of step S6, the control of the robot arm, the image end instrument, and the manipulation end instrument as the control objects may include the steps of:
and calculating the target position information of each corresponding joint assembly according to the target pose information of the distal end of the control object, which may be calculated by inverse kinematics, for example.
And controlling each joint assembly in the control object to reach the corresponding target pose in a linkage manner according to the target position information of each joint assembly.
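These two sub-steps might be sketched as follows with a generic damped least-squares inverse-kinematics solver; the solver, the error metric, and the fk callable (for example the forward-kinematics sketch above) are illustrative choices, not the patent's method.

```python
import numpy as np

def pose_error(T_target: np.ndarray, T_current: np.ndarray) -> np.ndarray:
    # 6D error: position difference plus a small-angle rotation vector.
    dp = T_target[:3, 3] - T_current[:3, 3]
    R_err = T_target[:3, :3] @ T_current[:3, :3].T
    w = 0.5 * np.array([R_err[2, 1] - R_err[1, 2],
                        R_err[0, 2] - R_err[2, 0],
                        R_err[1, 0] - R_err[0, 1]])
    return np.concatenate([dp, w])

def solve_ik(fk, q0, T_target, iters=200, damping=1e-2, fd_step=1e-5):
    # Damped least-squares IK with a finite-difference Jacobian.
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        e = pose_error(T_target, fk(*q))
        if np.linalg.norm(e) < 1e-6:
            break
        J = np.zeros((6, len(q)))
        for i in range(len(q)):
            dq = np.zeros(len(q)); dq[i] = fd_step
            J[:, i] = (pose_error(T_target, fk(*(q + dq))) - e) / fd_step
        q -= np.linalg.solve(J.T @ J + damping * np.eye(len(q)), J.T @ e)
    return q  # joint targets, to be sent to the joint controllers in linkage
```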
In one embodiment, as shown in fig. 15, the step S12 includes:
in step S125, a selection instruction associated with the operation mode type input for the controlled object is acquired.
The operation modes include a two-to-one operation mode and a one-to-one operation mode: the two-to-one operation mode refers to controlling one controlled object with two controlled motion-input devices, and the one-to-one operation mode refers to controlling one controlled object with one controlled motion-input device. When controlling the movement of a controlled object, either mode can be selected; for the one-to-one operation mode, it can further be selected which motion-input device is to be used as the controlled motion-input device. For example, when the same operator operates with both hands, that operator may, depending on the configuration, control one controlled object in the two-to-one operation mode or control two controlled objects in the one-to-one operation mode. The same applies when there are two or more operators, provided the surgical robot provides enough motion-input devices.
And step S126, acquiring the motion information input by the controlled motion input equipment by combining the type of the operation mode, and analyzing and mapping the motion information into the incremental pose information of the far end of the controlled object in the first coordinate system.
In one embodiment, for the one-to-one operation mode, the pose information Pn at the n-th time instant is obtained from the corresponding single controlled motion-input device 11 using, for example, the formula Pn = K·PnD, where PnD is the pose information read from that motion-input device at the n-th time instant and K is a scaling factor; in general K > 0, and more preferably 1 ≥ K > 0, so as to scale the pose and facilitate control.
In one embodiment, for the two-to-one operation mode, the pose information Pn at the n-th time instant is obtained from the corresponding two controlled motion-input devices 11 using, for example, the formula Pn = K1·PnL + K2·PnR, where PnL and PnR are the pose information of the two motion-input devices at the n-th time instant, and K1 and K2 are the scaling factors of the respective motion-input devices 11; typically K1 > 0 and K2 > 0, and more preferably 1 ≥ K1 > 0 and 1 ≥ K2 > 0.
The incremental pose information Δp_n_n-1 of the controlled motion-input device(s) 11, in either the one-to-one or the two-to-one operation mode, between a preceding time instant and a following time instant can be calculated according to the following formula:
Δp_n_n-1 = Pn - Pn-1
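Written out, the formulas above amount to the following sketch, where poses are represented simply as 6-vectors and the coefficient values are examples only; the patent does not fix the pose representation.

```python
import numpy as np

def combined_pose_one_to_one(P_device, K=0.5):
    # Pn = K * (device pose at instant n), with 1 >= K > 0 to scale the motion.
    return K * np.asarray(P_device, dtype=float)

def combined_pose_two_to_one(P_left, P_right, K1=0.5, K2=0.5):
    # Pn = K1 * PnL + K2 * PnR; with K1 = K2 = 0.5 this is the midpoint of the
    # line connecting the two controlled motion-input devices.
    return K1 * np.asarray(P_left, dtype=float) + K2 * np.asarray(P_right, dtype=float)

def pose_increment(P_n, P_n_minus_1):
    # Δp_n_n-1 = Pn - Pn-1 between two successive sampling instants.
    return np.asarray(P_n, dtype=float) - np.asarray(P_n_minus_1, dtype=float)
```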
in an embodiment, as shown in fig. 16 and fig. 17, when the selection instruction acquired in step S125 is associated with a two-to-one operation mode, step S126 includes:
in step S1261, the first position and orientation information of the two controlled motion input devices at the previous time are respectively obtained.
Step S1262, respectively obtaining second position and orientation information of the two controlled motion input devices at the later time.
And S1263, calculating and acquiring the incremental pose information of the two controlled motion input devices in a fixed coordinate system by combining the first scale coefficient and the first pose information and the second pose information of the two controlled motion input devices.
In step S1263, the following steps may be specifically implemented:
and calculating the incremental pose information of the first pose information and the second pose information of one controlled motion input device in the fixed coordinate system, and calculating the incremental pose information of the first pose information and the second pose information of the other controlled motion input device in the fixed coordinate system.
And calculating the increment pose information of one motion input device in the fixed coordinate system and the increment pose information of the other motion input device in the fixed coordinate system by combining the first scale coefficient to respectively obtain the increment pose information of the two motion input devices in the fixed coordinate system.
In the two-to-one operation mode, the first scale coefficient may be 0.5, for example; that is, if K1 and K2 both take the value 0.5, the acquired incremental pose information represents the incremental pose information of the midpoint of the line connecting the two controlled motion-input devices. Depending on the actual situation, K1 and K2 may also be assigned other values, and K1 and K2 may be the same or different.
And S1264, mapping the incremental pose information of the two controlled motion input devices in the fixed coordinate system to the incremental pose information of the far end of the controlled object in the first coordinate system.
In an embodiment, as shown in fig. 18, when the selection instruction acquired in step S125 is associated with a two-to-one operation mode, step S126 may also include:
in step S1265, first position information of the two controlled motion input devices in the fixed coordinate system at the previous time is respectively obtained.
In step S1266, second position information of the two controlled motion input devices in the fixed coordinate system at the later time is respectively obtained.
And S1267, calculating and acquiring horizontal movement increment information, vertical movement increment information and rotation increment information of the two controlled motion input devices in the fixed coordinate system by combining the second proportionality coefficient and the first position information and the second position information of the two controlled motion input devices in the fixed coordinate system.
And S1268, mapping the horizontal movement increment information, the vertical movement increment information and the rotation increment information of the two controlled motion input devices in a fixed coordinate system into the yaw angle increment information, the pitch angle increment information and the roll angle increment information of the remote end of the controlled object in a first coordinate system correspondingly.
Further, as shown in fig. 19 and 20, the step S1268 of calculating and acquiring incremental rotation information of the two controlled motion input devices in the fixed coordinate system according to the first position information and the second position information of the two controlled motion input devices in the fixed coordinate system includes:
step S12681, a first position vector between the two controlled motion-input devices at a previous time is established.
Step S12682, a second position vector between the two controlled motion input devices at a later time is established.
Step S12683, the rotation increment information of the two controlled motion input devices in the fixed coordinate system is obtained by combining the third scaling factor and the included angle between the first position vector and the second position vector.
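A compact sketch of steps S12681 to S12683: the rotation increment follows from the included angle between the two device-to-device position vectors, scaled by the third scaling coefficient (named K3 here for illustration).

```python
import numpy as np

def rotation_increment(p_left_prev, p_right_prev, p_left_now, p_right_now, K3=1.0):
    v1 = np.asarray(p_right_prev, dtype=float) - np.asarray(p_left_prev, dtype=float)  # first position vector
    v2 = np.asarray(p_right_now, dtype=float) - np.asarray(p_left_now, dtype=float)    # second position vector
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))   # included angle of the two vectors
    return K3 * angle                                   # scaled rotation increment
```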
In an embodiment, as shown in fig. 21 and fig. 22, when the selection instruction acquired in step S125 is associated with a one-to-one operation mode, step S126 may include:
in step S12611, first pose information of the controlled motion input device in the fixed coordinate system at the previous time is obtained.
In step S12612, second pose information of the controlled motion input device in the fixed coordinate system at the later time is obtained.
And S12613, calculating and acquiring the incremental pose information of the controlled motion input equipment in the fixed coordinate system by combining the fourth proportionality coefficient and the first pose information and the second pose information of the controlled motion input equipment in the fixed coordinate system.
Step S12614, mapping the incremental pose information of the controlled motion input device in the fixed coordinate system to the incremental pose information of the remote end of the controlled object in the first coordinate system.
It should be noted that, in some usage scenarios, when the mechanical arm 21 moves it is necessary to ensure that its distal end moves around a stationary point (a remote center of motion), that is, performs RCM-constrained motion. Specifically, this can be achieved by setting the task degrees of freedom of the distal end of the mechanical arm so that they are related only to the attitude degrees of freedom. The task degrees of freedom of the distal end of an arm body can be understood as the degrees of freedom in which the distal end of the arm body is allowed to move in Cartesian space, which is at most 6. The degrees of freedom that the distal end of the arm body actually possesses in Cartesian space are its effective degrees of freedom, which are related to its configuration (i.e., its structural features) and can be understood as the degrees of freedom that the distal end of the arm body can realize in Cartesian space.
The stationary point has a relatively fixed positional relationship with the distal end of the robotic arm. Depending on the particular control objective, the origin of the second coordinate system may be the fixed point in some embodiments, or a point on the distal end of the robotic arm in other embodiments.
In an embodiment, as shown in fig. 23, specifically in step S2, i.e. the decomposition step, the method may include:
in step S211, an input operation command associated with the degree of freedom of the task at the distal end of the robot arm is acquired.
Step S212, decomposing each initial target pose information by combining the task freedom degrees respectively to obtain a group of pose information sets including first component target pose information of the far end of the mechanical arm in a first coordinate system and second component target pose information of the controlled operation end instrument in a second coordinate system.
Wherein the operation command may include a first operation command and a second operation command. The first operation command is associated with the condition that the task degree of freedom of the distal end of the mechanical arm 21 is completely matched with the effective degree of freedom of the mechanical arm 21, so that the distal end of the mechanical arm can move freely in the effective degree of freedom of the mechanical arm; the second operating command, which corresponds to the above-mentioned RCM constrained motion, is associated with the case where the task degree of freedom of the distal end of the robot arm 21 perfectly matches the attitude degree of freedom in the effective degrees of freedom of the robot arm 21, so as to ensure that the distal end thereof, i.e., the power mechanism 22, moves around the motionless point when the robot arm 21 moves. Of course, other combinations of task degrees of freedom may be defined to facilitate control, and are not described in detail herein.
For example, when the second operation command is acquired in step S211, only the information on the degree of freedom of the attitude is changed while the information on the degree of freedom of the position is kept unchanged in the first component target pose information obtained by decomposition. In this way, the distal end of the robot arm 21 moves around the fixed point, and the desired posture is achieved mainly depending on the movement of the controlled operation end instrument 34B, and the safety of the operation can be ensured.
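One way to read the second operation command is sketched below: the position part of the decomposed first component target pose is pinned to the current position of the distal end, so only the attitude degrees of freedom change. The 4x4 homogeneous-matrix representation is an assumption of the sketch.

```python
import numpy as np

def constrain_to_rcm(T_first_component_target: np.ndarray, T_current: np.ndarray) -> np.ndarray:
    # Keep the position fixed (the distal end pivots about the stationary point)
    # and let only the attitude part of the decomposed target change.
    T = T_first_component_target.copy()
    T[:3, 3] = T_current[:3, 3]
    return T
```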
In an embodiment, specifically in step S2, i.e. the decomposition step, the method may include:
step S221, acquiring current pose information of the far end of the mechanical arm in a first coordinate system;
step S222, converting the initial target pose information to obtain second component target pose information under the condition that the far end of the mechanical arm is kept at the current pose corresponding to the current pose information of the mechanical arm;
step S223, judging the validity of the pose information of the second component target;
in this step, if the pose information of the second component target is valid, go to step S224; otherwise, the process proceeds to step S225.
Step S224, under the condition that the controlled operation terminal instrument reaches the target pose corresponding to the second component target pose information, converting the initial target pose information to obtain first component target pose information;
and step S225, adjusting the second component target pose information to be effective, updating the second component target pose information, and converting the initial target pose information to obtain the first component target pose information under the condition that the controlled operation terminal instrument reaches the target pose corresponding to the updated second component target pose information.
Through steps S221 to S225, when the pose of the controlled operation end instrument is adjusted, the corresponding operation arm is adjusted preferentially: if the movement of the operation arm alone satisfies the adjustment of the controlled operation end instrument, only the operation arm needs to move; if the movement of the operation arm is not sufficient for the adjustment of the controlled operation end instrument, the adjustment can be completed in combination with the movement of the mechanical arm.
Furthermore, in the above step S3, i.e. the first determination step, since the pose information of the second component target itself is valid or is valid after adjustment, only the pose information of the first component target needs to be determined, and when the pose information of the first component target is valid, the corresponding pose information set can be determined to be valid, otherwise, the pose information set is determined to be invalid.
Referring to fig. 6, steps S221 to S222 may be implemented specifically by the following formula (1):

T_B_T2 = T_B_T1 · T_T1_T2    (1)

where T_B_T2 is the initial target pose information of the controlled operation end instrument 34B in the first coordinate system, T_B_T1 is the current pose information of the distal end of the mechanical arm in the first coordinate system, and T_T1_T2 is the target pose information of the controlled operation end instrument 34B in the second coordinate system; T2 is the tool coordinate system of the controlled operation end instrument 34B, T1 is the tool coordinate system of the mechanical arm, and B is the base coordinate system of the mechanical arm. In the calculation, since T_B_T2 and T_B_T1 are known, T_T1_T2 can be calculated from formula (1).

If it is judged in step S223 that T_T1_T2 is invalid, T_T1_T2 can be adjusted to be valid; then, since T_B_T2 and the adjusted T_T1_T2 are known, T_B_T1 can be calculated from formula (1).

It will be understood by those skilled in the art that the foregoing embodiments, which involve calculating (converting) the target pose information of the distal end of the corresponding arm body in the corresponding coordinate system, can all be implemented by means of formula (1); only the meaning of the individual terms, such as T_B_T1, may vary with the circumstances, for example it may denote the target pose information or the current pose information of the corresponding arm body in the first coordinate system, which is not described in detail here.
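Using 4x4 homogeneous transforms, the two uses of formula (1) in steps S221 to S225 can be sketched as follows; the function names are illustrative and the matrices stand for the pose information defined above.

```python
import numpy as np

def second_component_target(T_B_T2_target: np.ndarray, T_B_T1_current: np.ndarray) -> np.ndarray:
    # Step S222: with the arm held at its current pose, formula (1) gives
    #   T_B_T2 = T_B_T1 @ T_T1_T2  =>  T_T1_T2 = inv(T_B_T1) @ T_B_T2
    return np.linalg.inv(T_B_T1_current) @ T_B_T2_target

def first_component_target(T_B_T2_target: np.ndarray, T_T1_T2_adjusted: np.ndarray) -> np.ndarray:
    # Step S225: with the (adjusted) instrument pose in the second coordinate
    # system fixed, the required arm distal-end pose in the base frame is
    #   T_B_T1 = T_B_T2 @ inv(T_T1_T2)
    return T_B_T2_target @ np.linalg.inv(T_T1_T2_adjusted)
```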
In an embodiment, the step of judging the validity of any target pose information obtained above includes:
and step S71, resolving the target pose information into target motion state parameters of each joint component in the corresponding arm body.
Step S72, the target motion state parameters of each joint component in the arm body are compared with the motion state threshold of each joint component in the arm body.
Step S73, if more than one target motion state parameter of each joint component in the arm body exceeds the motion state threshold of the corresponding joint component, the target pose information is judged to be invalid; and if the target motion state parameters of all the joint assemblies in the arm body do not exceed the motion state threshold of the corresponding joint assembly, judging that the target pose information is effective.
Further, in the aforementioned step S225, specifically in the step of adjusting the second component target pose information to be valid, the motion state of each joint assembly of the arm body that exceeds its motion state threshold may be adjusted to within the corresponding threshold so as to become valid. In one embodiment, the motion state of each joint assembly that exceeds its threshold may be adjusted exactly to the corresponding threshold, so that the operation arm moves to its limit as far as possible before the adjustment is completed in cooperation with the mechanical arm.
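Steps S71 to S73 and the clamping adjustment of step S225 might be sketched as follows; the rate-based motion state parameter, the threshold values, and the joint-space representation are assumptions of the sketch rather than details from the patent.

```python
import numpy as np

def validity_and_clamp(q_target, q_current, q_rate_limits, dt):
    # Resolve the target pose into joint targets beforehand (e.g. by inverse
    # kinematics); here we only check and clamp the resulting joint motion.
    q_target = np.asarray(q_target, dtype=float)
    q_current = np.asarray(q_current, dtype=float)
    limits = np.asarray(q_rate_limits, dtype=float)
    required_rate = (q_target - q_current) / dt            # per-joint motion state
    valid = bool(np.all(np.abs(required_rate) <= limits))  # S72/S73 validity check
    # If invalid, clamp each joint's motion to its threshold so the operation arm
    # still moves to its limit before the mechanical arm completes the adjustment.
    clamped_rate = np.clip(required_rate, -limits, limits)
    q_adjusted = q_current + clamped_rate * dt
    return valid, q_adjusted
```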
The above described embodiments are suitable for controlling end instruments in a surgical robot of the type shown in fig. 1. The surgical robot of this type includes one robot arm 21 and one or more operation arms 31 having end instruments 34 installed at the distal end of the robot arm 21, and the robot arm 21 and the operation arms 31 each have several degrees of freedom.
The above embodiments are equally applicable to the control of end instruments in a surgical robot of the type shown in fig. 26. The surgical robot of this type includes a main arm 32 ', one or more adjusting arms 30' installed at a distal end of the main arm 32 ', and one or more manipulation arms 31' having a distal end instrument installed at a distal end of the adjusting arm 30 ', the main arm 32', the adjusting arm 30 ', and the manipulation arm 31' each having several degrees of freedom. As shown in fig. 26, in the surgical robot, four adjustment arms 30 ' may be provided, and only one operation arm 31 ' may be provided for each adjustment arm 30 '. According to the actual use scenario, the three-segment arm structure of the surgical robot shown in fig. 26 can be configured as the two-segment arm structure of the surgical robot shown in fig. 1 to realize control. In an embodiment, in the case where the concepts of the operation arms in the two types of surgical robots are identical, for example, depending on the configuration, each of the adjustment arms 30' in the type of surgical robot shown in fig. 26 may be regarded as the robot arm 21 in the type of surgical robot shown in fig. 1 for control; for example, depending on the arrangement, the entire adjustment arm 30 'and the main arm 32' of the surgical robot of the type shown in fig. 26 may be controlled as the robot arm 21 of the surgical robot of the type shown in fig. 1. In one embodiment, the main arm 32 ' of the surgical robot shown in fig. 26 may be regarded as the mechanical arm 21 of the surgical robot shown in fig. 1, and the whole of the adjusting arm 30 ' and the corresponding operation arm 31 ' of the surgical robot shown in fig. 26 may be regarded as the operation arm 31 of the surgical robot shown in fig. 1.
In one embodiment, the control method of the surgical robot is generally configured to be implemented in a processing system of the surgical robot, and the processing system has more than one processor.
In one embodiment, a computer-readable storage medium is provided, in which a computer program is stored, the computer program being configured to be executed by one or more processors to implement the steps of the control method according to any one of the above-mentioned embodiments.
The surgical robot, the control method thereof and the computer readable storage medium of the invention have the following advantages:
By controlling the movement of the mechanical arm 21, the image end instrument 34A is kept at the current pose while the operation end instrument 34B reaches the target pose, so that, on the premise of keeping the visual field unchanged, the operation space of the operation end instrument 34B can be enlarged by the movement of the mechanical arm 21, which is convenient and safe to use.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (10)
1. A method for controlling a distal end instrument in a surgical robot including a robot arm provided at a distal end thereof with two or more operation arms having distal end instruments including an image distal end instrument and one or more operation distal end instruments each configured as a controlled operation distal end instrument, the method comprising:
an acquisition step of acquiring initial target pose information of each controlled operation terminal instrument;
decomposing the initial target pose information to obtain a group of pose information sets respectively, wherein each group of pose information set comprises first component target pose information of the far end of the mechanical arm in a first coordinate system and second component target pose information of the controlled operation end instrument in a second coordinate system, the first coordinate system refers to a base coordinate system of the mechanical arm, and the second coordinate system refers to a tool coordinate system of the mechanical arm;
a first judgment step of judging the validity of each set of pose information sets;
a calculation step of calculating, in a condition that at least one set of the pose information sets is valid and the image end apparatus is kept at the current pose, first target pose information of the distal end of the mechanical arm in a first coordinate system, second target pose information of the image end apparatus in a second coordinate system, and third target pose information of each controlled operation end apparatus in the second coordinate system, respectively, by combining each set of the pose information sets;
a second judgment step of judging validity of the first target pose information, the second target pose information, and each of the third target pose information;
and a control step of controlling the mechanical arm to move according to the first target pose information so as to enable the distal end of the mechanical arm to reach a corresponding target pose when the first target pose information, the second target pose information and each third target pose information are effective, controlling the operation arm corresponding to the image end instrument to move according to the second target pose information so as to enable the image end instrument to keep a current pose, and controlling the operation arm corresponding to the controlled operation end instrument to move according to each third target pose information so as to enable the controlled operation end instrument to reach a corresponding target pose.
2. The control method according to claim 1, wherein when there is one controlled operation tip instrument and the set of pose information is valid, the calculating step includes:
under the condition that the far end of the mechanical arm reaches the target pose corresponding to the first component target pose information, converting the current pose information of the image end instrument to obtain the target pose information of the image end instrument in a second coordinate system;
and assigning the first component target pose information into the first target pose information, assigning the target pose information of the image end instrument in a second coordinate system obtained by conversion into the second target pose information, and assigning the second component target pose information into the third target pose information.
3. The control method according to claim 1, wherein when there are two or more of the controlled operation tip instruments, and one of the pose information sets is valid and the remaining pose information sets are invalid, the calculating step includes:
under the condition that the far end of the mechanical arm reaches a target pose corresponding to first component target pose information in the effective pose information set, converting the current pose information of the image end instrument to obtain the target pose information of the image end instrument in a second coordinate system, and converting the current pose information of the controlled operation end instrument associated with each ineffective pose information to obtain second expected target pose information of the controlled operation end instrument in the second coordinate system;
assigning first component target pose information in the effective pose information set to the first target pose information, assigning target pose information of the image end instrument in a second coordinate system to the second target pose information, assigning second component target pose information in the effective pose information set to third target pose information of the associated controlled operation end instrument, and assigning each second expected target pose to third target pose information of the corresponding controlled operation end instrument.
4. The control method according to claim 1, wherein when the number of the controlled operation tip instruments is plural, and two or more of the pose information sets are valid and the remaining pose information sets are invalid, the calculating step includes:
respectively converting the current pose information of the image end instrument to obtain the target pose information of the image end instrument in a second coordinate system under the condition that the far end of the mechanical arm reaches the target pose corresponding to the first component target pose information in each effective pose information set;
judging the effectiveness of the pose information of each target of the image terminal instrument in a second coordinate system;
when only one of the target pose information of the image end instrument is effective, under the condition that the far end of the mechanical arm reaches a target pose corresponding to first component target pose information in an effective pose information set associated with the effective target pose information of the image end instrument, converting the current pose information of each controlled operation end instrument associated with the ineffective target pose information set to obtain second expected target pose information of the controlled operation end instrument in a second coordinate system, and converting the initial target pose information of each controlled operation end instrument associated with the rest effective target pose information set to obtain first expected target pose information of the controlled operation end instrument in the second coordinate system;
judging the validity of each first expected target position and attitude information;
assigning first component target pose information in the set of pose information associated with valid target pose information of the image end instrument to the first target pose information, assigning valid target pose information of the image end instrument to the second target pose information, assigning second component target pose information in the set of pose information associated with valid target pose information of the image end instrument to third target pose information of the associated controlled operation end instrument, and assigning each first desired target pose information and each second desired target pose information to third target pose information of the corresponding controlled operation end instrument, respectively, if each first desired target pose information is valid;
if at least part of the first expected target pose information is invalid, under the condition that the far end of the mechanical arm reaches a target pose corresponding to first component target pose information in the valid pose information set and associated with valid target pose information of the image end instrument, converting current pose information of the controlled operation end instrument associated with the invalid first expected target pose information to obtain second expected target pose information of the controlled operation end instrument in a second coordinate system;
assigning first component target pose information in the set of pose information associated with valid target pose information for the end-of-image instrument to the first target pose information, assigning valid target pose information for the end-of-image instrument to the second target pose information, assigning second component target pose information in the set of pose information associated with valid target pose information for the end-of-image instrument to the associated third target pose information for the end-of-controlled-operation instrument, assigning each valid first desired target pose information to the corresponding third target pose information for the end-of-controlled-operation instrument, and assigning each second desired target pose information to the corresponding third target pose information for the end-of-controlled-operation instrument;
when more than two pieces of target pose information of the image end instrument are effective, selecting one of the effective target pose information of the image end instrument as effective and the rest as ineffective, and entering the step when only one of the effective target pose information of the image end instrument is effective.
5. The control method according to claim 1, wherein when the number of the controlled operation tip instruments is two or more and each of the posture information sets is valid, the calculating step includes:
respectively converting the current pose information of the image end instrument to obtain the target pose information of the image end instrument in a second coordinate system under the condition that the far end of the mechanical arm reaches the target pose corresponding to the first component target pose information in each effective pose information set;
judging the effectiveness of the pose information of each target of the image terminal instrument in a second coordinate system;
when only one of the target pose information of the image end instrument is effective, under the condition that the far end of the mechanical arm reaches a target pose corresponding to first component target pose information in an effective pose information set associated with the effective target pose information of the image end instrument, converting initial target pose information of each controlled operation end instrument associated with the rest effective target pose information sets to obtain first expected target pose information of the controlled operation end instrument in a second coordinate system;
judging the validity of each first expected target position and attitude information;
assigning first component target pose information in the set of pose information associated with valid target pose information of the image end instrument to the first target pose information, assigning valid target pose information of the image end instrument to the second target pose information, assigning second component target pose information in the set of pose information associated with valid target pose information of the image end instrument to third target pose information of the associated controlled operation end instrument, and assigning each first desired target pose information to third target pose information of the corresponding controlled operation end instrument if each first desired target pose information is valid;
if at least part of the first expected target pose information is invalid, under the condition that the far end of the mechanical arm reaches a target pose corresponding to first component target pose information in the valid pose information set and associated with valid target pose information of the image end instrument, converting current pose information of the controlled operation end instrument associated with the invalid first expected target pose information to obtain second expected target pose information of the controlled operation end instrument in a second coordinate system;
assigning first component target pose information in the set of pose information associated with valid target pose information of the image end instrument to the first target pose information, assigning valid target pose information of the image end instrument to the second target pose information, assigning second component target pose information in the set of pose information associated with valid target pose information of the image end instrument to the associated third target pose information of the controlled operation end instrument, assigning each valid first desired target pose information to the corresponding third target pose information of the controlled operation end instrument, and assigning each second desired target pose information to the corresponding third target pose information of the controlled operation end instrument;
when more than two pieces of target pose information of the image end instrument are effective, selecting one of the effective target pose information of the image end instrument as effective and the rest as ineffective, and entering the step when only one of the effective target pose information of the image end instrument is effective.
6. The control method according to claim 1, wherein the decomposing step includes:
acquiring an input operation command related to the task degree of freedom of the remote end of the mechanical arm;
and decomposing each initial target pose information by combining the task freedom degrees to obtain a group of pose information sets including first component target pose information of the far end of the mechanical arm in a first coordinate system and second component target pose information of the controlled operation end instrument in a second coordinate system.
7. The control method according to claim 6, characterized in that:
the operation commands comprise a first operation command and a second operation command;
the first operation command is associated with a case where a task degree of freedom of the distal end of the robot arm completely matches an effective degree of freedom of the robot arm;
the second operation command is associated with a case where a task degree of freedom of the distal end of the robot arm completely matches a pose degree of freedom in the effective degrees of freedom of the robot arm.
8. The control method according to claim 1, wherein the decomposing step includes:
acquiring current pose information of the far end of the mechanical arm in a first coordinate system;
converting the initial target pose information to obtain second component target pose information under the condition that the far end of the mechanical arm is kept at the current pose corresponding to the current pose information of the mechanical arm;
judging the validity of the pose information of the second component target;
if the second component target pose information is valid, converting the initial target pose information to obtain the first component target pose information under the condition that the controlled operation end instrument reaches a target pose corresponding to the second component target pose information;
and if the second component target pose information is invalid, adjusting the second component target pose information to be valid, updating the second component target pose information, and converting the initial target pose information to obtain the first component target pose information under the condition that the controlled operation end instrument reaches a target pose corresponding to the updated second component target pose information.
9. A control device for a surgical robot, comprising:
a memory for storing a computer program;
and a processor for loading and executing the computer program;
wherein the computer program is configured to be loaded by the processor and to execute steps implementing a control method according to any of claims 1-8.
10. A surgical robot, comprising:
a mechanical arm;
one or more operation arms provided with end instruments and arranged at the distal end of the mechanical arm, wherein the end instruments comprise an image end instrument and one or more operation end instruments, and the operation end instruments are all configured to be controlled to operate the end instruments;
and a control device respectively connected with the mechanical arm and the operating arm;
the control device is used for executing the steps of realizing the control method according to any one of claims 1-8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910854900.5A CN110464471B (en) | 2019-09-10 | 2019-09-10 | Surgical robot and control method and control device for tail end instrument of surgical robot |
PCT/CN2020/114113 WO2021047520A1 (en) | 2019-09-10 | 2020-09-08 | Surgical robot and control method and control device for distal instrument thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910854900.5A CN110464471B (en) | 2019-09-10 | 2019-09-10 | Surgical robot and control method and control device for tail end instrument of surgical robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110464471A true CN110464471A (en) | 2019-11-19 |
CN110464471B CN110464471B (en) | 2020-12-01 |
Family
ID=68515486
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910854900.5A Active CN110464471B (en) | 2019-09-10 | 2019-09-10 | Surgical robot and control method and control device for tail end instrument of surgical robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110464471B (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120265051A1 (en) * | 2009-11-09 | 2012-10-18 | Worcester Polytechnic Institute | Apparatus and methods for mri-compatible haptic interface |
CN103097086A (en) * | 2010-09-07 | 2013-05-08 | 奥林巴斯株式会社 | Master-slave manipulator |
JP2012066321A (en) * | 2010-09-22 | 2012-04-05 | Fuji Electric Co Ltd | Robot system and robot assembly system |
JP5856837B2 (en) * | 2011-12-22 | 2016-02-10 | 川崎重工業株式会社 | Robot teaching point creation method and robot system |
CN104991448A (en) * | 2015-05-25 | 2015-10-21 | 哈尔滨工程大学 | Solving method of kinematics of underwater mechanical arm based on configuration plane |
CN108697481A (en) * | 2016-03-04 | 2018-10-23 | 柯惠Lp公司 | Resolved motion control system for robotic surgical system |
CN106256310A (en) * | 2016-08-18 | 2016-12-28 | 中国科学院深圳先进技术研究院 | It is automatically adjusted the method and system of nasal endoscopes pose |
CN109421050A (en) * | 2018-09-06 | 2019-03-05 | 北京猎户星空科技有限公司 | A kind of control method and device of robot |
CN109288591A (en) * | 2018-12-07 | 2019-02-01 | 微创(上海)医疗机器人有限公司 | Surgical robot system |
Non-Patent Citations (2)
Title |
---|
修立刚 (Xiu Ligang): "Mechanical structure design and motion simulation of an abdominal interventional surgical robot", CNKI Master's Theses Full-text Database *
陈鹏 (Chen Peng): "Research on the control system of a laparoscopic minimally invasive surgical robot", CNKI Master's Theses Full-text Database *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021047520A1 (en) * | 2019-09-10 | 2021-03-18 | 深圳市精锋医疗科技有限公司 | Surgical robot and control method and control device for distal instrument thereof |
WO2022002155A1 (en) * | 2020-07-01 | 2022-01-06 | 北京术锐技术有限公司 | Master-slave motion control method, robot system, device, and storage medium |
WO2022037392A1 (en) * | 2020-08-19 | 2022-02-24 | 北京术锐技术有限公司 | Robot system and control method |
CN112264996A (en) * | 2020-10-16 | 2021-01-26 | 中冶赛迪上海工程技术有限公司 | Steel grabbing machine positioning control method and system |
CN112264996B (en) * | 2020-10-16 | 2022-06-14 | 中冶赛迪上海工程技术有限公司 | Steel grabbing machine positioning control method and system |
CN112587243A (en) * | 2020-12-15 | 2021-04-02 | 深圳市精锋医疗科技有限公司 | Surgical robot and control method and control device thereof |
CN112643678A (en) * | 2020-12-29 | 2021-04-13 | 北京配天技术有限公司 | Mechanical arm, control device thereof, control system of mechanical arm and control method |
CN112618029A (en) * | 2021-01-06 | 2021-04-09 | 深圳市精锋医疗科技有限公司 | Surgical robot and method and control device for guiding surgical arm to move |
CN113545851A (en) * | 2021-06-11 | 2021-10-26 | 诺创智能医疗科技(杭州)有限公司 | Control method, system, equipment and storage medium for reconstructing instrument surgical field center |
CN113545851B (en) * | 2021-06-11 | 2022-07-29 | 诺创智能医疗科技(杭州)有限公司 | Control method, system, equipment and storage medium for reconstructing instrument surgical field center |
CN113524201A (en) * | 2021-09-07 | 2021-10-22 | 杭州柳叶刀机器人有限公司 | Active adjusting method and device for pose of mechanical arm, mechanical arm and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110464471B (en) | 2020-12-01 |
Similar Documents
Publication | Title |
---|---|
CN110464471B (en) | Surgical robot and control method and control device for tail end instrument of surgical robot | |
CN110559083B (en) | Surgical robot and control method and control device for tail end instrument of surgical robot | |
CN110559082B (en) | Surgical robot and control method and control device for mechanical arm of surgical robot | |
CN111315309B (en) | System and method for controlling a robotic manipulator or associated tool | |
CN110464469B (en) | Surgical robot, method and device for controlling distal end instrument, and storage medium | |
CN110464470B (en) | Surgical robot and control method and control device for arm body of surgical robot | |
CN110464473B (en) | Surgical robot and control method and control device thereof | |
JP6007194B2 (en) | Surgical robot system for performing surgery based on displacement information determined by user designation and its control method | |
US11541551B2 (en) | Robotic arm | |
Stroppa et al. | Human interface for teleoperated object manipulation with a soft growing robot | |
CN110464472A (en) | The control method of operating robot and its end instrument, control device | |
US20150367514A1 (en) | Real-time robotic grasp planning | |
CN106965187B (en) | Method for generating feedback force vector when bionic hand grabs object | |
WO2021047520A1 (en) | Surgical robot and control method and control device for distal instrument thereof | |
CN118105174A (en) | Control method, device and storage medium for surgical robot system | |
JP7079899B2 (en) | Robot joint control | |
JP2020171516A (en) | Surgical system and control method of surgical system | |
JP7079881B2 (en) | Robot joint control | |
CN116077089B (en) | Multimode safety interaction method and device for ultrasonic scanning robot | |
EP4313508A1 (en) | Systems and methods for controlling a robotic manipulator or associated tool | |
CN118493375A (en) | Control method and device of surgical robot, electronic equipment and storage medium | |
WO2022101175A1 (en) | Adaptive robot-assisted system and method for evaluating the position of the trocar in a robot-assisted laparoscopic surgery intervention | |
CN118288303A (en) | Gesture-based redundant soft robot user-friendly real-time control method | |
CN117442347A (en) | Remote control method for underdrive mechanism | |
GB2623464A (en) | Robotic joint control |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||