CN111483750A - Control method and control device for robot system
- Publication number
- CN111483750A (application CN202010066275.0A)
- Authority
- CN
- China
- Prior art keywords
- operation object
- robot system
- scanning
- end effector
- control sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1612—Programme controls characterised by the hand, wrist, grip control
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G1/00—Storing articles, individually or in orderly arrangement, in warehouses or magazines
- B65G1/02—Storage devices
- B65G1/04—Storage devices mechanical
- B65G1/137—Storage devices mechanical with arrangements or automatic control means for selecting which articles are to be removed
- B65G1/1373—Storage devices mechanical with arrangements or automatic control means for selecting which articles are to be removed for fulfilling orders in warehouses
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G1/00—Storing articles, individually or in orderly arrangement, in warehouses or magazines
- B65G1/02—Storage devices
- B65G1/04—Storage devices mechanical
- B65G1/137—Storage devices mechanical with arrangements or automatic control means for selecting which articles are to be removed
- B65G1/1373—Storage devices mechanical with arrangements or automatic control means for selecting which articles are to be removed for fulfilling orders in warehouses
- B65G1/1378—Storage devices mechanical with arrangements or automatic control means for selecting which articles are to be removed for fulfilling orders in warehouses the orders being assembled on fixed commissioning areas remote from the storage areas
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/39—Robotics, robotics to robotics hand
- G05B2219/39543—Recognize object and plan hand shapes in grasping movements
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40053—Pick 3-D object from pile of objects
Landscapes
- Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Robotics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Orthopedic Medicine & Surgery (AREA)
- Manipulator (AREA)
- De-Stacking Of Articles (AREA)
- Specific Conveyance Elements (AREA)
Abstract
An object of the present disclosure is to achieve a high degree of cooperation among units, including robots, and to sufficiently improve the storage efficiency of operation objects. The control method of the present disclosure includes: acquiring an approach position at which an end effector grips an operation object; acquiring a scanning position for scanning an identifier of the operation object; and creating or acquiring a control sequence based on the approach position and the scanning position, and instructing the robot to execute the control sequence. The control sequence includes: (1) gripping the operation object at a start position; (2) scanning the identifier of the operation object with a scanner located between the start position and the work position; (3) when a predetermined condition is satisfied, temporarily releasing the operation object from the end effector at a grip conversion position and gripping the operation object again with the end effector so as to change the grip; and (4) moving the operation object to the work position.
Description
Technical Field
The present disclosure relates generally to a robot system, and more particularly to a control device, a control method, a logistics system, a program, and a storage medium for a robot system that operates an operation target such as an article.
Background
Robots (e.g., machines configured to automatically/autonomously perform physical actions) are now widely used in many fields owing to their ever-improving performance and decreasing cost. For example, robots can be used to perform various operations and tasks, such as handling and moving operation objects in manufacturing, assembly, packaging, transfer, and conveyance. In performing such work, robots can repeatedly reproduce human motions, and can therefore replace or reduce human involvement in dangerous or repetitive tasks.
As a system (robot system) using such robots, for example, patent document 1 proposes an automatic distribution system for automating and saving labor in the flow from warehousing to delivery of articles, the system including: a transport container storage mechanism for temporarily storing transport containers; and an automatic article delivery mechanism for automatically collecting articles from the transport containers into an outgoing container based on outgoing information.
Documents of the prior art
Patent document
Patent document 1: japanese patent laid-open publication No. 2018-167950
Disclosure of Invention
However, despite these technological advances, many robots still lack the fineness required to reproduce human involvement in large-scale and/or complex work. Automation and sophistication of robot systems therefore remain insufficient: many jobs are still difficult to take over from human workers, and robot systems lack fine-grained (granular) control and flexibility in the operations they execute. Accordingly, there remains a need for improved techniques for managing the various actions and/or interactions between robots and for further advancing the automation and performance of robot systems. An object of the present disclosure is therefore to provide a control device, a control method, and the like for a robot system that achieve a high degree of cooperation among units, including robots, and that, for example, sufficiently improve the efficiency of storing operation objects.
In order to solve the above problems, the present invention adopts the following configuration.
[1] That is, a control method of a robot system including a robot having a robot arm and an end effector according to the present disclosure includes: acquiring an approach position at which the end effector grips (clamps) an operation object; acquiring a scanning position for scanning an identifier of the operation object; and creating or acquiring a control sequence based on the approach position and the scanning position, and instructing the robot to execute the control sequence. The control sequence includes the following (1) to (4):
(1) gripping the operation object at a start position;
(2) scanning identification information of the operation object (e.g., a computer-readable identifier such as a barcode or a Quick Response (QR) code (registered trademark)) with a scanner located between the start position and the work position;
(3) when a predetermined condition is satisfied, temporarily releasing the operation object from the end effector at a grip conversion position, and gripping the operation object again with the end effector so as to change the grip; and
(4) moving the operation object to a work position.
The "operation object" is an object to be operated by a robot provided in the robot system, and includes, for example, one or more articles (commodities) and a container, such as a bottle or a box, in which the articles are placed or stored. In other embodiments and examples, the "operation object" may also encompass a shelf, a pallet, a conveyor, another temporary placement location, and the like. The "control sequence" is a preset sequence of operations under which one or more units, such as robots, included in the robot system are controlled to execute each job. (A hedged code sketch of the sequence of (1) to (4) above is given below.)
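For illustration only, the following is a minimal Python sketch of how steps (1) to (4) of [1] might be assembled into an executable sequence. The robot/scanner methods (`grip`, `move_to`, `regrip`, `place`, `scan`) and `condition_met` are hypothetical assumptions; the disclosure does not prescribe any particular API.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ControlStep:
    name: str
    action: Callable[[], object]  # deferred call into the (assumed) robot/scanner API


@dataclass
class ControlSequence:
    steps: List[ControlStep] = field(default_factory=list)

    def execute(self) -> None:
        for step in self.steps:
            step.action()


def build_control_sequence(robot, scanner, approach_position, scan_position,
                           start_position, work_position,
                           grip_conversion_position, condition_met):
    """Assemble steps (1)-(4) of structure [1]; `robot`, `scanner` and
    `condition_met` are hypothetical interfaces, not part of the disclosure."""
    steps = [
        # (1) grip the operation object at the start position, using the
        #     acquired approach position for the end effector
        ControlStep("grip", lambda: robot.grip(start_position, approach_position)),
        # (2) scan the identifier with a scanner located between the start
        #     position and the work position
        ControlStep("move_to_scan", lambda: robot.move_to(scan_position)),
        ControlStep("scan", lambda: scanner.scan()),
    ]
    if condition_met():
        # (3) temporarily release the object at the grip conversion position
        #     and grip it again with the grip changed
        steps.append(ControlStep("grip_conversion",
                                 lambda: robot.regrip(grip_conversion_position)))
    # (4) move the operation object to the work position
    steps.append(ControlStep("place", lambda: robot.place(work_position)))
    return ControlSequence(steps)
```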
[2] In the above configuration, the control sequence may be configured to include the following (5) and (6):
(5) setting, as the predetermined condition, that the efficiency of storing the operation object at the work position is improved when the direction in which the end effector grips the operation object is changed by the grip conversion; and
(6) calculating the storage efficiency at the work position before the grip conversion of the operation object and the storage efficiency at the work position after the grip conversion of the operation object.
[3] In the above structure, the control sequence may be further configured to include the following (7) and (8):
(7) acquiring the height of the operation object; and
(8) calculating the storage efficiency based on the height of the operation object.
[4] In the above configuration, the height of the operation object may be calculated based on a height position (level) of the top surface of the operation object and a height position (level) of the bottom surface of the operation object, both measured in a state where the operation object is gripped by the end effector.
[5] In the above configuration, the height of the operation object may be measured when the scanner scans the operation object.
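As an aside, a hedged sketch of the efficiency condition of items (5) and (6) in [2], using the object height obtained as in [3] to [5]: the storage efficiency at the work position is estimated before and after a hypothetical grip conversion. The single-dimension "layer-filling" efficiency model and all names below are illustrative assumptions, not part of the disclosure.

```python
def object_height(top_surface_level, bottom_surface_level):
    # [4]/[5]: the height follows from the top- and bottom-surface levels
    # measured while the end effector grips the object (e.g., during scanning).
    return top_surface_level - bottom_surface_level


def storage_efficiency(remaining_height, vertical_dim):
    # Illustrative model only: fraction of the remaining container height that
    # can be filled by stacking identical objects with `vertical_dim` upward.
    if vertical_dim <= 0 or vertical_dim > remaining_height:
        return 0.0
    layers = int(remaining_height // vertical_dim)
    return layers * vertical_dim / remaining_height


def grip_conversion_improves_storage(remaining_height, height, width):
    # (6) storage efficiency before the grip conversion (measured height kept
    # vertical) versus after it (the object re-gripped so its width is vertical).
    before = storage_efficiency(remaining_height, height)
    after = storage_efficiency(remaining_height, width)
    # (5) the predetermined condition: the conversion improves storage efficiency.
    return after > before
```

With 0.50 m of container height remaining, for example, an object 0.30 m tall and 0.24 m wide fills 60% of that height as gripped, but 96% (two 0.24 m layers) after a grip conversion, so the predetermined condition would be satisfied under this illustrative model.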
[6] In the above configuration, the control sequence may be configured to include: (9) when the predetermined condition is satisfied, the operation object is placed on the temporary placement table at the grip conversion position and is temporarily released from the end effector.
[7] The above configuration may further include: acquiring imaging data representing a pickup area including the operation object; determining an initial posture of the operation object based on the imaging data; calculating a reliable reference representing the likelihood that the determined initial posture of the operation object is correct; and acquiring the approach position and the scanning position based on the reliable reference.
Here, the "posture" represents a position and/or orientation of the operation object (for example, an orientation in a stopped state), and includes a translation component and/or a rotation component in the grid system used by the robot system. The "posture" may be represented by a vector, a set of angles (e.g., Euler angles and/or roll-pitch-yaw angles), a homogeneous transformation, or a combination thereof; such coordinate transformations of the "posture" of the operation object may contain translation components, rotation components, variations thereof, or a combination thereof.
In addition, the "reliable reference" represents a quantitative reference indicating a degree of coincidence (degree of certainty or likelihood) between the determined posture of the operation object and the actual posture of the operation object in the real world. In other words, the "reliable reference" is a reference indicating the accuracy of the determined posture of the operation object, or an index indicating the possibility that the determined posture matches the actual posture of the operation object. The "reliability criterion" can be quantified based on, for example, a result of matching between one or more visual characteristics (for example, shape, color, image, design, mark, text, etc.) of the operation target in the image data of the pickup area including the operation target and information related to the visual characteristics of the operation target stored in the master data.
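One way to picture the "reliable reference" is as a normalized matching score between the visual characteristics extracted from the captured data and those registered in the master data. The feature set, equal weighting, and function name below are assumptions for illustration only.

```python
def reliable_reference(detected_features, master_features, weights=None):
    """Hypothetical confidence score in [0, 1]: the weighted fraction of visual
    characteristics (shape, color, design, markings, text, ...) extracted from
    the captured data that match the master-data entry for the candidate posture."""
    if weights is None:
        weights = {name: 1.0 for name in master_features}
    total = sum(weights.values())
    matched = sum(weight for name, weight in weights.items()
                  if detected_features.get(name) == master_features.get(name))
    return matched / total if total else 0.0


# Example: two of three characteristics match, so the determined posture is
# plausible (about 0.67) but may fall below a sufficiency threshold of, say, 0.8.
score = reliable_reference(
    {"shape": "box", "top_marking": "A", "color": "brown"},
    {"shape": "box", "top_marking": "A", "color": "white"},
)
```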
[8] In the above configuration, the control sequence may be configured to include: (10) selectively calculating the approach position and the scanning position from a performance metric and/or a scanning metric based on a result of comparing the reliable reference with a sufficiency threshold, the scanning metric being independent of whether the initial posture of the operation object is correct and dependent on the likelihood that the identifier of the operation object is not covered by the end effector.
[9] In the above configuration, when the reliable reference does not satisfy the sufficiency threshold, the approach position and the scanning position may be acquired based on the scanning metric, or may be acquired by giving the scanning metric priority over the performance metric.
[10] Alternatively, in the above configuration, when the reliable reference satisfies the sufficiency threshold, the approach position and the scanning position may be acquired based on the performance metric.
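A hedged sketch of the selection in [8] to [10], assuming each candidate is an (approach position, scanning position) pair and that both metrics score a candidate with higher-is-better semantics; none of these names are defined by the disclosure.

```python
def select_positions(reliable_ref, sufficiency_threshold,
                     candidates, performance_metric, scan_metric):
    """[8]-[10]: choose an (approach position, scanning position) pair from
    `candidates`; `performance_metric` and `scan_metric` each map a candidate
    to a score.  All names here are illustrative assumptions."""
    if reliable_ref < sufficiency_threshold:
        # [9] the initial posture is uncertain: prioritize the scanning metric,
        # i.e., the likelihood that the identifier is not covered by the end effector.
        key = scan_metric
    else:
        # [10] the posture is sufficiently certain: prioritize the performance
        # metric (e.g., expected throughput or cycle time).
        key = performance_metric
    return max(candidates, key=key)
```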
[11] In the above configuration, the control sequence may be configured to include the following (11) and (12):
(11) acquiring a first scanning position for providing identification information of the operation object to the scanner and a second scanning position for providing alternative identification information of the operation object to the scanner;
(12) once the operation object has been moved to the first scanning position, moving the operation object to the work position and disregarding the second scanning position in a case where the scanning result indicates a successful scan, or moving the operation object to the second scanning position in a case where the scanning result indicates a failed scan.
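Item (12) above can be read as a simple fallback flow; the sketch below assumes hypothetical `robot` and `scanner` interfaces and that the object proceeds to the work position after the second scan attempt, which the disclosure does not state explicitly.

```python
def scan_with_fallback(robot, scanner, first_scan_position,
                       second_scan_position, work_position):
    """(11)/(12): present the identifier at the first scanning position; only if
    that scan fails, present the alternative identifier at the second scanning
    position.  `robot` and `scanner` are assumed interfaces."""
    robot.move_to(first_scan_position)
    if scanner.scan():                       # successful scan:
        robot.move_to(work_position)         # go to the work position and
        return True                          # disregard the second position
    robot.move_to(second_scan_position)      # failed scan: try the alternative
    result = scanner.scan()                  # identifier on another surface
    robot.move_to(work_position)
    return result
```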
[12] Additionally, the present disclosure provides a non-transitory computer-readable storage medium storing processor commands for implementing a method of controlling a robot system including a robot with a robot arm and an end effector, the processor commands including: a command for acquiring an approach position at which the end effector grips an operation object; a command for acquiring a scanning position for scanning identification information of the operation object; and a command for creating or acquiring a control sequence based on the approach position and the scanning position and for instructing the robot to execute the control sequence. The control sequence includes the following (1) to (4):
(1) gripping the operation object at a start position;
(2) scanning an identifier of the operation object with a scanner located between the start position and the work position;
(3) when a predetermined condition is satisfied, temporarily releasing the operation object from the end effector at a grip conversion position, and gripping the operation object again with the end effector so as to change the grip; and
(4) moving the operation object to a work position.
[13] In the above configuration, the control sequence may be configured to include the following (5) and (6):
(5) setting, as the predetermined condition, that the efficiency of storing the operation object at the work position is improved when the direction in which the end effector grips the operation object is changed by the grip conversion; and
(6) calculating the storage efficiency at the work position before the grip conversion of the operation object and the storage efficiency at the work position after the grip conversion of the operation object.
[14] In the above structure, the control sequence may be configured to include the following (7) and (8):
(7) acquiring the height of the operation object; and
(8) calculating the storage efficiency based on the height of the operation object.
[15] In the above configuration, the height of the operation object may be calculated based on a height position (level) of the top surface of the operation object and a height position (level) of the bottom surface of the operation object, both measured in a state where the operation object is gripped by the end effector.
[16] Further, the present disclosure provides a control device of a robot system including a robot with a robot arm and an end effector, the control device being configured to execute the control method of any one of [1] to [11].
Drawings
Fig. 1 is a diagram showing an example environment in which a robot system according to an embodiment of the present disclosure operates.
Fig. 2 is a block diagram showing an example of a hardware configuration of a robot system according to an embodiment of the present disclosure.
Fig. 3A is a perspective view schematically showing a first posture of the operation target.
Fig. 3B is a perspective view schematically showing a second posture of the operation object.
Fig. 3C is a perspective view schematically showing a third posture of the operation target.
Fig. 4A is a plan view illustrating an exemplary operation performed by the robot system according to the embodiment of the present disclosure.
Fig. 4B is a front view showing an exemplary job executed by the robot system according to the embodiment of the present disclosure.
Fig. 5A is a flowchart showing an example of an operation procedure of the robot system according to the embodiment of the present disclosure.
Fig. 5B is a flowchart showing an example of the operation procedure of the robot system according to the embodiment of the present disclosure.
Description of the reference numerals
100 … robot system; 102 … discharge unit; 104 … transfer unit; 106 … transport unit; 108 … stacking unit; 112 … operation object; 114 … start position; 116 … work position; 118 … grip conversion position; 202 … processor; 204 … storage device; 206 … communication device; 208 … input-output device; 210 … display; 212 … operating device; 214 … transfer motor; 216 … sensor; 222 … imaging device; 224 … position sensor; 226 … contact sensor; 252 … master data; 254 … tracking data; 302 … operation object; 304 … first exposed surface; 306 … second exposed surface; 312 … first posture; 314 … second posture; 316 … third posture; 322 … top surface; 324 … bottom surface; 326 … outer peripheral surface; 332 … identifier; 334 … identifier location; 402, 404 … operation; 412, 416 … scanner; 414 … robot arm; 422 … first control sequence; 424 … second control sequence; 432 … first approach position; 434 … second approach position; 442 … first providing position; 444 … second providing position; 450 … container; 464 … self-propelled trolley; 464 … pallet; 466 … distance measuring device; 468 … temporary placement table; 472, 474 … control sequence.
Detailed Description
According to the present disclosure, there are provided a robot system in which a plurality of units (for example, various robots, various devices, and a control device provided integrally therewith or separately) are highly integrated, a control device therefor, a logistics system provided with the same, a method therefor, and the like. That is, the robot system according to an embodiment of the present disclosure is, for example, an integrated system capable of autonomously performing one or more tasks. In addition, when an operation object is stored in a storage container or the like, the robot system according to an embodiment of the present disclosure performs advanced processing that can significantly improve the storage efficiency of operation objects based on the shape and size of the operation object and the spatial volume of the storage container. In addition, by creating or acquiring and executing a control sequence based on a reliable reference associated with the initial posture of the operation object, a highly advanced scanning job for the operation object is provided.
The robot system according to the embodiment of the present disclosure can be configured to perform a task based on an operation (for example, physical movement and/or orientation) performed on an operation target. Specifically, the robot system can pick up an operation object from a pick-up area (for example, a large box, a bottle, a container, a pallet, a container, a cage, a conveyor, etc. as a supply source of the operation object) including a start position, move the operation object to a placement area (for example, a large box, a bottle, a container, a pallet, a container, a cage, a conveyor, etc. as a movement destination of the operation object), and change or replace the arrangement of various operation objects.
In addition, the control sequence executed by the robot system may include scanning one or more identifiers (for example, a barcode or a Quick Response (QR) code (registered trademark)) located at one or more specific positions and/or surfaces of the operation object at the time of transfer. Accordingly, the robot system can perform various operations including: gripping the object to be picked up, adjusting its posture to an appropriate position/orientation for scanning the identifier, changing the grip by changing the posture (releasing the grip and gripping the object again), transferring the object to the work position, releasing the grip, and placing the object at the work position.
The robot system may include an imaging device (e.g., a camera, an infrared sensor/camera, a radar, a laser radar, etc.) for recognizing the position and posture of the operation object and the surrounding environment of the operation object. The robot system can also calculate a reliable reference relating to the posture of the operation object. As the operation object is transferred, the robot system can acquire images indicating its position and posture in a pickup area including the start position, in a placement area including the work position, and in an area along the movement path of the operation object that includes the grip conversion position (for example, a suitable work table such as a temporary placement table, or another robot).
Further, the robot system can perform image processing for identifying or selecting the operation objects in a predetermined order (for example, from top to bottom, from outside to inside, from inside to outside, and the like). Further, the robot system can recognize the outline of the operation object based on, for example, the color, brightness, depth, position, and/or combination of these values of pixels in the pattern image of the captured data, and can determine, for example, the initial posture of the operation object in the pickup area from the image by grouping them, or the like. In the determination of the initial attitude, the robot system can calculate a reliable reference in accordance with a predetermined flow and/or equation.
The robot system can perform gripping conversion of the operation object (change of gripping position of the operation object) as necessary at a gripping conversion position provided in the middle of a path from a pickup area including the start position to a placement area including the work position. In addition, the robot system can acquire the height of the operation target as needed by, for example, an imaging device having a distance measurement function while the operation target moves from a pickup area or the like including the start position to a placement area or the like including the work position.
In addition, the robot system can execute a control sequence for each job based on the position, posture, height, and reliable reference of the operation object, or a combination thereof, and/or on the position and posture of the robot, or a combination thereof. For example, the control sequence can be created or acquired by motion planning and/or machine learning such as deep learning. The control sequence handles, for example, the following processing for the arrangement conversion, grip conversion, replacement, and the like of operation objects: gripping the operation object at the start position and/or at an arbitrary position along the movement path, manipulating the operation object, placing the operation object at the target work position, and the like.
Here, a conventional robot system executes the following control sequence: an operation object is gripped in a pickup area or the like including a start position, moved in the gripped state to a placement area or the like including a work position, and released. In such a conventional system, the gripped operation object is simply moved and then released from that grip, so the space for stacking or storing operation objects cannot be used effectively. Consequently, from the viewpoint of the stacking or storage efficiency of operation objects, manual intervention (adjustment, redoing, replenishment, system stop, and the like) and the corresponding operation inputs are sometimes required.
On the other hand, unlike the conventional robot system, the robot system according to the present disclosure can create or acquire a control sequence and execute it based on the shape information of the operation object and the stacking or storage information of the operation object. In other words, the robot system according to the present disclosure can further optimize the efficiency of stacking or storing operation objects based on the shape information of the operation object and the stacking or storage information of the operation object. In addition, the robot system according to the present disclosure can change the gripping position of the operation object to one suitable for optimizing the stacking or storage efficiency of the operation object, at the grip conversion position located along the path from the start position to the work position.
Unlike the conventional system, the robot system according to the present disclosure can, as needed, create or acquire and execute a control sequence suitable for optimizing the stacking or storage efficiency of the operation object according to the actual height of the operation object. For example, even operation objects whose one or more identifiers, located at one or more specific positions and/or surfaces, scan as the same operation object may in practice have different shapes and sizes. Therefore, in a stage of the control sequence upstream of (preceding) the grip conversion position, the actual height of the operation object is measured based on distance information to the operation object whose support position is known, for example by an imaging device (camera or distance measuring device) oriented in the vertical direction. The stacking or storage efficiency of operation objects at the work position can then be calculated based on the actually measured height of the operation object, and the control sequence can be further optimized based on the result.
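One possible reading of this in-transit height measurement, as a sketch: a vertically oriented distance measuring device at a known mounting level returns the range to one surface of the gripped object while the opposite surface sits at a known support level; the height follows from the difference of the two levels, consistent with [4]. The frame conventions and names are assumptions.

```python
def height_from_range_measurement(sensor_mount_level, measured_range,
                                  known_support_level):
    """Sketch of the in-transit height measurement described above: a vertically
    oriented distance measuring device at a known mounting level returns the
    range to one surface of the gripped object, while the opposite surface sits
    at a known support level (e.g., the end-effector gripping plane).  The
    convention of a downward-looking sensor and upward-measured levels is an
    assumption."""
    measured_surface_level = sensor_mount_level - measured_range
    return abs(measured_surface_level - known_support_level)
```

For instance, a downward-looking sensor mounted at 2.00 m that reads a range of 0.60 m to the top surface of an object whose bottom surface is held at a known level of 1.10 m would yield a height of 0.30 m.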
Moreover, unlike conventional systems, the robot system of the present disclosure can create or acquire control sequences as needed based on the reliable reference and execute them. For example, depending on the reliable reference, the handling mode for the operation object can be changed, the gripping position on the operation object can be changed, the posture/position of the operation object can be changed, and/or a part of the movement path can be changed.
In addition, an operation object held in the pickup area or the like is generally posed with its top surface exposed horizontally (facing upward) and its side surfaces exposed vertically (facing laterally). Therefore, in the master data, the robot system of the present disclosure registers one identifier on the bottom surface of the operation object (i.e., the surface opposite the top surface) and another identifier on a side surface of the operation object.
In addition, the robot system in the present disclosure can calculate a reliable reference as needed when processing an image of a pickup area in recognition of an operation object. When the reliability criterion exceeds a sufficiency threshold and a sufficient certainty that the top surface of the operation object is exposed is recognized, the robot system can dispose an end effector on the exposed top surface, grip the top surface, and rotate the operation object so that the bottom surface of the operation object is provided at a predetermined position in front of the scanner. On the other hand, in the case where the reliability criterion is less than the sufficiency threshold value and it is not possible to recognize whether the top surface or the bottom surface of the operation target is exposed, the robot system may arrange the end effector along one side surface of the operation target, hold the side surface of the operation target, and rotate the operation target so as to pass between the opposing scanner groups, for example.
In this case, the operation target is scanned within the moving path of the operation target, for example, between a pickup area including the start position and a placement area including the work position, thereby improving work efficiency and work speed. At this time, the robot system in the present disclosure creates or acquires a control sequence in cooperation with the scanner at the scanning position, thereby enabling effective combination of the movement job of the operation target and the scanning job of the operation target. Further, by creating or acquiring a control sequence based on a reliable reference for the initial pose of the operation target, the efficiency, speed and accuracy associated with the scanning job can be further improved.
In addition, the robot system in the present disclosure can create or acquire a corresponding control sequence for the case where the initial posture of the operation object is incorrect. Thus, even when the posture of the operation object is determined erroneously (for example, due to a calibration error, an unexpected posture, unexpected lighting conditions, or the like), the possibility of correctly and reliably scanning the operation object can be increased. As a result, the overall throughput of the robot system can be increased, and operator labor and intervention can be further reduced.
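The threshold-based branching described above (grip the top surface and present the bottom to a single scanner when the reliable reference is sufficient, otherwise grip a side surface and pass the object between opposing scanners) can be sketched as follows; the returned labels are illustrative assumptions only.

```python
def plan_scan_presentation(reliable_ref, sufficiency_threshold):
    """Sketch of the branching described above; the string labels are
    illustrative only, not terms defined by the disclosure."""
    if reliable_ref > sufficiency_threshold:
        # Top surface exposed with sufficient certainty: grip the top surface
        # and rotate the object so its bottom surface faces a single scanner.
        return {"grip_surface": "top", "presentation": "bottom_to_scanner"}
    # Uncertain whether the top or bottom surface is exposed: grip a side
    # surface and pass the object between opposing scanners so that whichever
    # surface carries the identifier can be read.
    return {"grip_surface": "side", "presentation": "between_opposing_scanners"}
```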
In the present specification, the specific details are given for the sake of thorough understanding of the present disclosure, and the present disclosure is not limited thereto. In addition, in the embodiments of the present disclosure, the technology described in the present specification may be implemented without these specific details. Furthermore, for specific functions or routines, etc., that are well known, details are not set forth in order to avoid unnecessarily obscuring the present disclosure. Reference in the specification to "an embodiment", "one embodiment" or the like means that a particular feature, structure, material, or characteristic described is included in at least one embodiment of the present disclosure. Therefore, the expressions described in the present specification do not necessarily all refer to the same embodiment. On the other hand, such references are not necessarily mutually exclusive. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments. On this basis, it should be understood that the various embodiments illustrated are merely illustrative expressions and are not necessarily shown to scale.
Moreover, for well-known structures or processes that are typically associated with robotic systems and subsystems and that unnecessarily obscure several salient aspects of the present disclosure, descriptions are omitted for clarity of the disclosure. Further, although the present specification describes various embodiments of the present disclosure, the present disclosure may include a structure different from that described in this section or a structure having different constituent elements as other embodiments. Accordingly, the present disclosure may include other embodiments with or without additional elements or several elements described below.
The terms "computer" and "controller" as used herein may refer to any data processor, and may include Internet appliances and hand-held devices (including palm-top computers, wearable computers, cellular or mobile phones, multiprocessor systems, processor-based or programmable consumer electronics, network computers, minicomputers, and the like). Information processed by these computers and controllers may be presented on any suitable display medium, such as a liquid crystal display (LCD). Embodiments of the present disclosure may take the form of commands, including routines executed by a programmable computer or controller; such commands may be stored in any suitable computer-readable medium and may be implemented in any suitable combination of hardware and/or firmware.
In addition, terms such as "coupled" and "connected" in this specification may be used for derivative forms and describe structural relationships between constituent elements. It should be understood that these terms are not intended as synonyms for each other. In particular, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct contact with each other. Unless the context clearly dictates otherwise, the term "coupled" may be used to indicate that two or more elements are in direct or indirect contact with each other (with other intervening elements present therebetween), or that two or more elements are in co-operation or interaction with each other (with a causal relationship such as that associated with the transmission/reception of signals or function calls), or both.
[ Suitable Environment ]
Fig. 1 is a diagram showing an environment in which a robot system 100 according to an embodiment of the present disclosure operates. The robot system 100 includes one or more units such as robots configured to perform one or more tasks.
For the example shown in fig. 1, the robot system 100 may be provided with a discharge unit 102, a transfer unit 104, a transport unit 106, a stacking unit 108, or a combination thereof, within a warehouse or a distribution/transport hub. Among these units, examples of robots for operating an operation object include robots that operate the operation object with a robot arm and an end effector, such as so-called unpacking robots, picking robots, and grasping robots. Each unit in the robot system 100 executes a control sequence in which a plurality of jobs are combined, such as unloading operation objects from a truck, van, or the like for storage in a warehouse, taking operation objects out of a storage location, moving operation objects between containers, or loading operation objects onto a truck, van, or the like for conveyance. That is, the "work" herein is a concept including various operations and actions for the purpose of transferring an operation object from a "certain position" to another "certain position".
More specifically, the "work" includes operating the operation object 112 (for example, moving it, orienting it, or changing its posture) from its start position 114 to its work position 116, changing the grip on the operation object 112 at a grip conversion position 118 provided along the movement path from the start position 114 to the work position 116, scanning the operation object 112 to acquire its identification information, and the like.
Further, for example, the discharge unit 102 may be configured to transfer the operation object 112 from a position in a transport vehicle (e.g., a truck) to a position on a conveyor. The transfer unit 104 may be configured to transfer the operation object 112 from a certain position (for example, a pickup area including a start position) to another position (for example, a placement area including a work position on the transport unit 106), and to change the grip on the operation object 112 along the movement path. Further, the transport unit 106 can transfer the operation object 112 from the area associated with the transfer unit 104 to the area associated with the stacking unit 108. The stacking unit 108 can transfer the operation object 112 from the transfer unit 104 to a storage position (e.g., a predetermined position on a shelf in a warehouse), for example by moving a pallet or the like on which the operation object 112 is placed.
Although the description herein describes an example in which the robot system 100 is applied to a transportation center, it is understood that the robot system 100 may be configured to perform operations in other environments and for other purposes in order to perform manufacturing, assembly, packaging, healthcare, and/or other types of automated operations. It is understood that the robotic system 100 may include other units such as manipulators, service robots, modular robots, etc., not shown in fig. 1. For example, the robot system 100 may include, for example, a discharge unit from a pallet for transferring the operation object 112 from a cage or pallet to a conveyor or another pallet, a container switching unit for transferring the operation object 112 between containers, a packing unit for packing the operation object 112, an arrangement converting unit for grouping the operation objects according to the characteristics of the operation object 112, a pickup unit for performing various operations (for example, arrangement conversion, grouping, and/or transfer) on the operation object 112 according to the characteristics of the operation object 112, a pallet for storing the operation object 112, a self-propelled carriage unit (for example, an automated guided vehicle, or the like) for moving a rack, or a combination thereof.
[ Suitable System ]
Fig. 2 is a block diagram showing an example of a hardware configuration of the robot system 100 according to the embodiment of the present disclosure. The robot system 100 includes, for example, electronic or electrical devices such as one or more processors 202, one or more storage devices 204, one or more communication devices 206, one or more input-output devices 208, one or more operating devices 212, one or more transfer motors 214, one or more sensors 216, or a combination thereof. These electronic or electrical devices are coupled to each other by wired and/or wireless connections.
The wired connections may include, for example, a system bus, a Peripheral Component Interconnect (PCI) bus or PCI-Express bus, a HyperTransport or Industry Standard Architecture (ISA) bus, a Small Computer System Interface (SCSI) bus, a Universal Serial Bus (USB), an IIC (I2C) bus, or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (also referred to as "FireWire"). The wireless connections may be based on, for example, Near Field Communication (NFC), Internet of Things (IoT) protocols (e.g., NB-IoT, LTE-M, etc.), and/or other wireless communication protocols.
The processor 202 may include a data processor (e.g., a Central Processing Unit (CPU), a special purpose computer, and/or an onboard server) configured to execute commands (e.g., software commands) stored in a storage device 204 (e.g., computer memory). Processor 202 may execute program commands to control/interact with other devices to cause robotic system 100 to perform control sequences including various actions, tasks, and/or operations.
The storage device 204 may store master data 252. The master data 252 may include, as information related to the operation object 112, for example, its size, shape, mass, center of gravity, center-of-mass position, templates related to postures and outlines, model data for recognizing different postures, a stock keeping unit (SKU), a color scheme, images, recognition information, an identifier, an expected position of the operation object, expected sensor measurement values (for example, force, torque, pressure, or other physical quantities related to a contact reference value), or a combination thereof.
Further, the storage device 204 may store, for example, tracking data 254 for the operation objects 112. The tracking data 254 may include a log of the objects being scanned or operated, imaging data (e.g., photographs, point clouds, live video, etc.) of the operation object 112 at one or more locations (e.g., the start position, the work position, the grip conversion position, etc.), and the position and/or posture of the operation object 112 at one or more locations.
The communication device 206 may include, for example, circuitry configured to communicate with external or remote devices over a network, a receiver, a transmitter, a modulator/demodulator (modem), a signal detector, a signal encoder/decoder, a connector port, a network card, and so forth. Additionally, the communication device 206 may be configured to transmit, receive, and/or process electrical signals according to one or more communication protocols (e.g., the Internet Protocol (IP), wireless communication protocols, etc.). For example, the robot system 100 may use the communication device 206 to exchange information between units, or with external systems or devices, for purposes such as reporting, data collection, analysis, and troubleshooting.
The input-output devices 208 serve as user interface devices configured to receive information and instructions from an operator and to communicate information to an operator, e.g., by prompting, and may include input devices such as a keyboard, a mouse, a touch screen, a microphone, user interface (UI) sensors (e.g., a camera for receiving motion commands), and wearable input devices, as well as output devices such as the display 210, speakers, haptic circuits, and haptic feedback devices. In addition, the robot system 100 may use the input-output devices 208 to interact with an operator when performing an action, a task, an operation, or a combination thereof.
The robot system 100 may include physical or structural members (e.g., robot manipulators, robot arms, etc.; hereinafter simply referred to as "structural members") connected, for example, by links or joints, in order to execute control sequences that include displacements such as moving or rotating the operation object 112. Such structural members and links or joints may be configured to operate an end effector (e.g., a gripper, a hand, etc.) that performs one or more tasks (e.g., gripping, rotating, welding, assembling, etc.) in the robot system 100. In addition, the robot system 100 may include operating devices 212 (e.g., motors, actuators, wires, artificial muscles, electroactive polymers, etc.) configured to drive or operate (e.g., displace and/or reorient) the structural members about or at their joints, and transfer motors 214 configured to transfer the units from one location to another.
In addition, the robotic system 100 may include a sensor 216 configured to acquire information for performing operations to manipulate the structural component and/or the transfer unit. The sensors 216 are configured as devices that detect or measure one or more physical characteristics of the robotic system 100 (e.g., the state, condition, position, etc. of one or more structural members, links, joints, etc.) and/or ambient environmental characteristics, and may include, for example, accelerometers, gyroscopes, force sensors, strain gauges, tactile sensors, torque sensors, position encoders, etc.
Further, the sensors 216 may include one or more imaging devices 222 configured to detect the surrounding environment (e.g., visible-light and/or infrared cameras, two-dimensional and/or three-dimensional imaging cameras, distance measuring devices such as lidar or radar, etc.). The imaging devices 222 can generate representations of the detected environment, such as digital images and/or point clouds, for purposes such as obtaining visual information for automated inspection, robot guidance, and other robotic applications.
The robot system 100 can process the digital images, point clouds, distance measurement data, and the like, for example via the processor 202, and recognize the operation object 112 of fig. 1, the start position 114 of fig. 1, the work position 116 of fig. 1, the grip conversion position 118 between the start position 114 and the work position 116, the posture of the operation object 112, a reliable reference for the posture of the operation object 112 at, for example, the start position 114, the height of the operation object 112, or a combination thereof.
Further, in order to operate the operation object 112, the robot system 100 acquires and analyzes images of a designated area (for example, a pickup area in a truck or on a conveyor belt, a placement area on the conveyor belt for arranging the operation object 112, an area for gripping and converting the operation object 112, an area for arranging the operation object in a container, an area on a pallet for stacking the operation object 112, and the like) by various means, and can recognize the operation object 112, its start position 114, its work position 116, a gripping and converting position 118, and the like. The imaging device 222 includes, for example, one or more cameras configured to generate images of a pickup region, a placement region, a region set therebetween for grip conversion of the operation object 112, and the like.
The imaging device 222 may include one or more distance measuring devices such as a laser radar or a radar configured to measure a distance to the operation target 112 supported at a predetermined position on an upstream side (a previous stage) of the grip changing position 118. The robot system 100 can determine the start position 114, the work position 116, the grip conversion position 118, the relevant posture, the actual height of the operation object 112, the reliability criterion, and the like based on the acquired image and/or the distance measurement data.
The imaging device 222 may include one or more scanners 412, 416 (for example, barcode scanners, QR code (registered trademark) scanners, and the like) configured to scan identification information of the operation object 112 (for example, the identifier 332 in fig. 3A and/or 3C described later) between the start position 114 and the work position 116 (preferably, at a stage before the grip conversion position 118) while the operation object is being conveyed or moved (see fig. 4A and 4B described later). The robot system 100 can then create or acquire control sequences for presenting one or more portions of the operation object 112 to one or more of the scanners 412, 416.
Further, the sensors 216 may include, for example, position sensors 224 (e.g., position encoders, potentiometers, etc.) configured to detect the positions of the structural members, links, or joints. The position sensors 224 are used to track the positions and/or orientations of the structural members, links, or joints during the execution of a job.
Additionally, the sensors 216 may include, for example, contact sensors 226 configured to measure physical structure or contact related characteristics between surfaces (e.g., pressure sensors, force sensors, strain gauges, piezoresistive/piezoelectric sensors, capacitive sensors, elastic resistive sensors, other tactile sensors, etc.). The contact sensor 226 can measure a characteristic corresponding to the gripping of the end effector of the operation object 112. Thus, the contact sensor 226 can output a contact reference indicating a quantified measurement value (for example, a measured force, moment, position, or the like) corresponding to the degree of contact between the end effector and the operation object 112. The "contact reference" may include, for example, a reading of one or more forces or moments related to the force applied to the operation object 112 by the end effector.
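As a hedged illustration of how such a contact reference might be used, the sketch below aggregates contact-sensor readings and compares them with an expected reference value (e.g., stored in the master data); the minimum-reading aggregation rule is an assumption, not part of the disclosure.

```python
def grip_is_secure(contact_readings, required_contact_reference):
    """Sketch only: aggregate contact-sensor readings (e.g., forces or moments
    at several contact points of the end effector) into a contact reference and
    compare it with an expected value, e.g., taken from the master data."""
    contact_reference = min(contact_readings)  # weakest gripping point
    return contact_reference >= required_contact_reference
```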
[ determination of reliable reference for initial attitude ]
Fig. 3A, 3B, and 3C are perspective views schematically showing a first posture 312, a second posture 314, and a third posture 316 as examples of various postures (positions and orientations) of the operation object 302. To recognize the posture of the operation object 302, the robot system 100 may process, for example, two-dimensional images, three-dimensional images, point clouds, and/or other captured data from the imaging devices 222. In addition, the robot system 100 may analyze the captured data of one or more imaging devices 222 directed toward the pickup area, for example in order to recognize the initial posture of the operation object 302.
In order to recognize the posture of the operation object 302, the robot system 100 first analyzes the pattern image of the operation object 302 in the captured data based on a predetermined recognition mechanism, recognition rules, and/or templates related to postures and contours, and identifies or groups the contours (for example, peripheral edges or surfaces) of the operation object 302. More specifically, based on, for example, the contour and posture templates in the master data 252, the robot system 100 can identify groups of contours corresponding to patterns in the change of color, brightness, depth/position, and/or combinations thereof, or in their values (for example, whether a value stays constant or changes at a known rate/pattern) across the entire outline of the operation object.
Once the contours of the operation object 302 are grouped, the robot system 100 can identify, for example, one or more surfaces, edges, and/or points of the operation object 302, as well as its posture, in the grid or coordinate system used by the robot system 100.
In addition, the robot system 100 can recognize one or more exposed surfaces (e.g., the first exposed surface 304, the second exposed surface 306, and the like) of the operation object 302 within the captured data. Further, the robot system 100 can identify the operation object 302 by determining the shape of its contour and one or more dimensions (for example, length, width, and/or height) based on the contour of the operation object 302 and the calibration-related imaging data or mapping data of the imaging device 222, and by comparing the determined dimensions with the corresponding data in the master data 252. Further, the robot system 100 can identify whether an exposed surface is the top surface 322, the bottom surface 324, or the outer peripheral surface 326 by comparing the dimensions of the exposed surface with the length, width, and height of the identified operation object 302.
The robot system 100 can also recognize the operation object 302 by comparing one or more marks (for example, characters, numbers, shapes, visual images, logos, or a combination thereof) displayed on one or more exposed surfaces with one or more predetermined images in the master data 252. In this case, the master data 252 may include, for example, one or more images of trade names, marks, designs, pictures, or combinations thereof on the package surface of the operation object 302. The robot system 100 compares a part of the captured data (for example, a part within the outline of the operation object 302) with the master data 252 to recognize the operation object 302, and can likewise recognize the posture (particularly, the orientation) of the operation object 302 based on a predetermined image pattern unique to a surface.
Fig. 3A shows the first posture 312 in a case where the first exposed surface 304 (e.g., an upward-facing exposed surface) is the top surface 322 of the operation object 302, and the second exposed surface 306 (e.g., an exposed surface substantially facing the source of the captured data) is one of the outer peripheral surfaces 326 of the operation object 302.
In recognizing the exposed surfaces, the robot system 100 processes the captured data of fig. 3A and can map the measured size (for example, the number of pixels) of the first exposed surface 304 and/or the second exposed surface 306 to a real-world size using a predetermined calibration or mapping function. In addition, the robot system 100 can compare the mapped dimensions to known/expected dimensions of the operation objects 302 within the master data 252 and, based on the result, identify the operation object 302. Further, the robot system 100 can identify that the first exposed surface 304 is either the top surface 322 or the bottom surface 324 after determining that a pair of intersecting edges bounding the first exposed surface 304 matches the length and the width of the identified operation object 302. Similarly, the robot system 100 can recognize the second exposed surface 306 as an outer peripheral surface 326 after determining that one edge of the second exposed surface 306 matches the height of the identified operation object 302.
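As a minimal sketch of the dimension-matching idea described above, the following assumes a simple per-pixel calibration scale and a master-data entry giving length, width, and height; the function name, tolerance, and helper structure are illustrative assumptions rather than the disclosed implementation.

```python
# Map measured pixel dimensions of an exposed face to real-world units, then match
# the face against the known length/width/height of a registered object.

def identify_exposed_surface(edge_px, scale_mm_per_px, known_dims, tol=5.0):
    """edge_px: (e1, e2) pixel lengths of two intersecting edges of the exposed face.
    known_dims: dict with 'length', 'width', 'height' in mm from the master data.
    Returns 'top_or_bottom', 'peripheral', or None if nothing matches."""
    e1, e2 = edge_px[0] * scale_mm_per_px, edge_px[1] * scale_mm_per_px

    def close(a, b):
        return abs(a - b) <= tol

    l, w, h = known_dims["length"], known_dims["width"], known_dims["height"]
    # A face bounded by length x width is the top or bottom surface.
    if (close(e1, l) and close(e2, w)) or (close(e1, w) and close(e2, l)):
        return "top_or_bottom"
    # Any face with one edge matching the height is a peripheral (side) surface.
    if close(e1, h) or close(e2, h):
        return "peripheral"
    return None

print(identify_exposed_surface((300, 200), 1.0, {"length": 300, "width": 200, "height": 150}))
```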
In addition, the robot system 100 can process the captured data of fig. 3A and recognize one or more marks specific to the surface of the operation object 302. In this case, the master data 252 may contain one or more images and/or other visual characteristics (e.g., color, size, etc.) of the surface and/or unique markings of the operands 302 described above. As shown in fig. 3A, the operation object 302 has "a" on the top surface 322, and therefore the robot system 100 can recognize the operation object 302 as an operation object stored in the master data 252, and further can recognize the first exposed surface 304 as the top surface 322 of the operation object 302.
In addition, the master data 252 may contain an identifier 332 as identification information of the operation object 302. More specifically, the master data 252 may include an image and/or encoded message of the identifier 332 of the operation object 302, a location 334 of the identifier 332 relative to a set of surfaces and/or edges, one or more visual characteristics thereof, or a combination thereof. As shown in fig. 3A, the robotic system 100 can identify the second exposed surface 306 as the outer peripheral surface 326 based on the presence of the identifier 332 and/or on its position matching the location 334 of the identifier 332.
Additionally, FIG. 3B represents a second posture 314 in which the operation object 302 is rotated by 90 degrees about a vertical axis in the direction B of FIG. 3A. For example, the reference point "α" of the operation object 302 is at the lower left corner in FIG. 3A and at the lower right corner in FIG. 3B. As a result, compared to the first posture 312, the top surface 322 of the operation object 302 appears in a different orientation in the captured data, and/or the outer peripheral surface 326 of the operation object 302 bearing the identifier 332 is not visually recognizable.
The robotic system 100 may recognize the various postures of the operation object 302 based on the orientation of one or more visual features such as the identifier 332. For example, the first posture 312 and/or the third posture 316 may be determined where a dimension matching the known length of the operation object 302 extends horizontally in the captured data, a dimension matching the known height extends vertically, and/or a dimension matching the known width extends along the depth axis. Likewise, the robotic system 100 may determine the second posture 314 where a dimension matching the width extends horizontally in the captured data, a dimension matching the height extends vertically, and/or a dimension matching the length extends along the depth axis.
The robot system 100 can also determine whether the operation object 302 is in the first posture 312 or the second posture 314 based on the orientation of a visual marker such as the "A" shown in fig. 3A and 3B. Further, the robot system 100 can determine that the operation object 302 is in the first posture 312 based on which marks are visible on which combination of surfaces, for example, in a case where the identifier 332 of the operation object 302 is visible together with the mark "A" (i.e., on a different surface).
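The following sketch illustrates, under stated assumptions, how such a decision between the postures might be expressed: the measured dimensions along the image axes are matched against the known length/width/height, and visible marks break the remaining tie. The function name, tolerance, and mark labels are hypothetical.

```python
# Illustrative decision logic only; dimension matching and mark detection are
# assumed to be provided elsewhere.

def classify_pose(horiz_dim, vert_dim, depth_dim, known, visible_marks, tol=5.0):
    """known: dict with 'length', 'width', 'height'.
    visible_marks: set of detected marks, e.g. {'A'} or {'identifier'}."""
    def match(value, key):
        return abs(value - known[key]) <= tol

    if match(horiz_dim, "length") and match(vert_dim, "height"):
        # First and third postures share this footprint; a visible mark breaks the tie.
        if "A" in visible_marks:            # top-surface mark visible -> top faces up
            return "first_posture"
        if "identifier" in visible_marks:   # bottom-surface identifier visible -> bottom faces up
            return "third_posture"
        return "first_or_third_posture"     # ambiguous: low reliable reference
    if match(horiz_dim, "width") and match(vert_dim, "height"):
        return "second_posture"
    return "unknown"

print(classify_pose(300, 150, 200,
                    {"length": 300, "width": 200, "height": 150}, {"A"}))
```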
Further, FIG. 3C represents a third posture 316 in which the operation object 302 is rotated by 180 degrees about a horizontal axis in the direction C of FIG. 3A. For example, the reference point "α" of the operation object 302 is at the lower left front corner in FIG. 3A and at the upper left rear corner in FIG. 3C. Thus, compared to the first posture 312, the first exposed surface 304 is the bottom surface 324 of the operation object, and in addition, neither the top surface 322 nor the outer peripheral surface 326 bearing the identifier 332 of the operation object 302 can be visually recognized.
As described above, the robot system 100 can recognize from the dimensions determined from the image data that the operation object 302 is in either the first posture 312 or the third posture 316, and in the case where the mark on the top surface 322 (for example, "A") is visible, can determine that the operation object 302 is in the first posture 312. Conversely, the robotic system 100 may determine that the operation object 302 is in the third posture 316 when a mark on the bottom surface (e.g., an instance of the identifier 332 of the operation object) is visible.
When determining the posture of the operation object 302, the actual world situation may affect the accuracy of the determination. For example, the visual recognizability of the surface mark may be reduced due to reflection and/or shading caused by the condition of light. Furthermore, depending on the actual orientation of the operation object 302, the exposure of one or more surfaces or the angle of visual recognition may be reduced, and therefore, any mark on the surface may not be recognized. Therefore, the robot system 100 can calculate a reliable reference relating to the posture of the operation object 302 after the determination.
In addition, the robot system 100 may calculate the reliable reference based on a certainty interval related to the measurement of dimensions in the captured data. In this case, the certainty interval may increase as the distance between the operation object 302 and the imaging source (e.g., the imaging device 222) decreases, and/or as the measured edge of the operation object 302 approaches an orientation orthogonal to the direction of projection from the imaging source. Further, the robotic system 100 may calculate the reliable reference based on, for example, the degree of matching between the marks or designs in the captured data and the known marks/designs in the master data 252. Further, the robot system 100 may measure the degree of overlap or deviation between at least a part of the captured data and a predetermined mark/image.
In this case, the robot system 100 can identify the operation object 302 and/or its orientation by finding the maximum overlap and/or minimum deviation according to a mechanism such as minimum mean square error (MMSE), and can calculate the reliable reference based on the obtained degree of overlap/deviation. Further, the robot system 100 can calculate the movement path of the operation object 302 in the control sequence based on the obtained reliable reference; in other words, the robot system 100 can appropriately move the operation object 302 based on the obtained reliable reference.
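As a non-limiting illustration of converting an MMSE-style template comparison into a reliable reference, the sketch below compares a captured image patch with a stored template and maps the mean squared error into a [0, 1] score. The array shapes, the normalization, and the function name are assumptions for illustration only.

```python
import numpy as np

def reliable_reference(patch: np.ndarray, template: np.ndarray) -> float:
    """Both inputs are grayscale images of identical shape with values in [0, 255]."""
    patch = patch.astype(float) / 255.0
    template = template.astype(float) / 255.0
    mse = float(np.mean((patch - template) ** 2))   # minimum mean square error criterion
    return 1.0 - min(mse, 1.0)                       # 1.0 = perfect overlap, 0.0 = no match

rng = np.random.default_rng(0)
template = rng.integers(0, 256, size=(32, 32))
noisy = np.clip(template + rng.normal(0, 10, size=template.shape), 0, 255)
print(round(reliable_reference(noisy, template), 3))
```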
[ System operation ]
Fig. 4A is a plan view illustrating an exemplary job 402 executed by the robot system 100 according to an embodiment of the present disclosure. As described above, the job 402 is an example of a control sequence (for example, performed by the units shown in fig. 1 or the like) executed by the robot system 100. As shown in FIG. 4A, for example, the job 402 may include: moving the operation object 112 from a pickup area including the start position 114 to a placement area including the work position 116 via the grip conversion position 118; scanning the operation object 112 while it moves from the start position 114 to the work position 116; and performing grip conversion (changing the gripping position) on the operation object 112 at the grip conversion position 118. Thus, the robot system 100 can update the trace data 254 as needed, for example by adding the scanned operation object 112 to the trace data 254, removing the operation object 112 from the trace data 254, or evaluating the operation object 112.
In addition, in order to identify and/or determine the start position 114, the robot system 100 may include a scanner 412 (an example of the imaging device 222) directed toward the pickup area (more specifically, for example, an area designated for a pallet or a large box for component transfer, and/or an area on the receiving side of a conveyor belt) for capturing 3D vision or the like of the pickup area, thereby being able to acquire imaging data of the designated area. The robot system 100 may perform computer image processing (visual field processing) of the captured data, for example by the processor 202, in order to recognize the various operation objects 112 located in the predetermined area.
The robot system 100 may select the operation target 112 for executing the job 402 from the recognized operation targets 112 based on, for example, a predetermined selection criterion, a selection rule, and/or a template related to the posture and the contour, and may further process the shot data for determining the start position 114 and/or the initial posture with respect to the operation target 112.
The robot system 100 may also include another scanner 416 (an example of the imaging device 222) oriented toward the placement area and a predetermined area (more specifically, an area designated for a pallet or magazine whose arrangement is to be changed, the sending-side area of a conveyor belt, or the like) in order to identify and/or specify the work position 116 and the grip conversion position 118, and thereby may acquire imaging data of the designated area. The robot system 100 may perform computer image processing (visual field processing) of the captured data, for example by the processor 202, in order to recognize the work position 116 for placing the operation object 112, the grip conversion position 118, and/or the posture of the operation object 112. In addition, the robot system 100 may recognize and select the work position 116 and the grip conversion position 118 (whether or not based on the imaging result) according to a prescribed reference or rule for stacking and/or arranging the plurality of operation objects 112.
Among other things, a scanner 416 may be oriented horizontally so as to scan marks present on a vertically oriented surface of the operation object 112 adjacent to it (e.g., at a height corresponding to the height of the corresponding scanner(s)). Further, a scanner 416 may be oriented vertically so as to scan marks present on a horizontally oriented surface of the operation object 112 located above or below it. Also, scanners 416 may be arranged opposite each other so as to scan both sides of an operation object 112 located between them.
In addition, depending on the position and/or scanning direction of the scanners 416, the robotic system 100 may manipulate the operation object 112 so as to place it at a provided location and/or so that more than one surface/portion of the operation object 112 can be scanned by the scanners 416. Further, the robot system 100 may include, for example, an imaging device 222 (see fig. 4B) configured to operate together with a scanner 416 and measure the height position of the bottom surface 324 of the operation object 112 whose gripped (supported) position is known.
In this way, using the identified start position 114, grip conversion position 118, and/or work position 116, the robotic system 100 can operate one or more structural components of each unit (e.g., the robotic arm 414 and/or the end effector) in order to perform the job 402. Thus, for example, the robot system 100 can create or acquire, by the processor 202, a control sequence corresponding to one or more actions performed by the corresponding unit for executing the job 402.
For example, the control sequence associated with the transfer unit 104 may include: placing the end effector at an approach position (e.g., a position/location at which the end effector is placed in order to hold the operation object 112); holding the operation object 112; lifting the operation object 112; moving the operation object 112 from the start position 114 to a provided position/posture for the scanning operation; performing grip conversion (changing the gripping position) on the operation object 112 at the grip conversion position 118; moving the operation object 112 from the start position 114 to the work position 116 via the grip conversion position 118 as needed; lowering the operation object 112; and releasing the grip on the operation object 112.
Additionally, the robotic system 100 may determine a sequence of commands and/or settings for operating one or more operating devices 212 of the robotic arm 414 and/or the end effector in order to create or otherwise obtain the control sequence. In this case, the robotic system 100, for example using the processor 202, may calculate commands and/or settings for operating the operating devices 212 of the robotic arm 414 and the end effector so as to place the end effector at the approach position near the start position 114, hold the operation object 112 with the end effector, place the end effector at positions near the scanning position and the grip conversion position 118, place the end effector near the work position 116, and release the operation object 112 from the end effector. Thus, the robot system 100 can perform the operations for completing the job 402 by driving the operating devices 212 according to the control sequence determined by the commands and/or settings.
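As one possible, non-authoritative way to picture such a command sequence, the sketch below represents it as an ordered list of (command, target) steps that a dispatcher could feed to the operating devices 212. The step names mirror the sequence described above, but all identifiers and targets are hypothetical placeholders.

```python
def build_transfer_sequence(approach_pos, start_pos, scan_pose, grip_change_pos, work_pos):
    """Return an illustrative ordered list of (command, target) steps."""
    return [
        ("move_end_effector", approach_pos),   # place end effector at the approach position
        ("grip", start_pos),                   # hold the operation object
        ("lift", start_pos),
        ("move_object", scan_pose),            # provided position/posture for scanning
        ("regrip", grip_change_pos),           # grip conversion, if required
        ("move_object", work_pos),
        ("lower", work_pos),
        ("release", work_pos),
    ]

for command, target in build_transfer_sequence("P_approach", "P_start", "P_scan",
                                                "P_grip_change", "P_work"):
    print(f"{command:>18} -> {target}")
```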
In addition, the robot system 100 can create or acquire a control sequence based on the reliable reference related to the posture of the manipulation object 112. In this case, the robot system 100 may place the end effector at various positions for picking up, calculate various provided positions/postures with respect to the operation object 112, or combine them, for example, in order to hold or cover different surfaces in accordance with a reliable reference with respect to the posture.
As an example, when the operation object 112 is the operation object 302 in the first posture 312 of fig. 3A (in this case, the top surface 322 of the operation object 302 is fully exposed upward) and the reliable reference relating to the posture is high (i.e., the degree of certainty exceeds the sufficiency threshold and the determined posture is highly likely to be correct), the robot system 100 can create or acquire the first control sequence 422 including the first approach position 432 and the first provided position 442. At this point, for example, because the certainty that the top surface 322 of the operation object 302 is facing upward (i.e., that the bottom surface 324 with the object identifier 332 of fig. 3C is facing downward) is sufficient, the robotic system 100 may calculate a first control sequence 422 that includes a first approach position 432 for placing the end effector directly on the top surface 322 of the operation object 302.
As a result, the robot system 100 can hold the operation object 112 with the end effector contacting/covering the top surface 322 of the operation object 302 so that the bottom surface 324 of the operation object 302 remains exposed. Additionally, the robotic system 100 may calculate a first control sequence 422 that includes a first provided position 442 at which the operation object 112 is placed directly above the upward-facing scanner 416, which scans the identifier 332 located on the bottom surface 324.
On the other hand, where the reliable reference associated with the posture is low (i.e., the degree of certainty is less than the sufficiency threshold and the determined posture is less likely to be correct), the robotic system 100 may create or acquire a second control sequence 424 (i.e., different from the first control sequence 422) that includes a second approach position 434 and one or more second provided positions 444. At this time, the robot system 100 may, for example, measure the size of the operation object 112 and compare it with the master data 252 to determine whether the operation object 302 is in the first posture 312 of fig. 3A or the third posture 316 of fig. 3C (for example, when the certainty level of the measurement exceeds a predetermined threshold).
However, in the robot system 100, it may be difficult to capture and process a mark printed on the surface of the operation target 112, and as a result, the reliability criterion regarding the determined posture may be smaller than the sufficiency threshold. In other words, the robotic system 100 is sometimes unable to adequately determine whether the upwardly-facing exposed surface of the operational object 302 is its top surface 322 (e.g., the first pose 312) or its bottom surface 324 (e.g., the third pose 316).
In this case, because of the low reliable reference (low degree of certainty), the robotic system 100 may calculate a second control sequence 424 that includes a second approach position 434 for placing the end effector against one of the outer peripheral surfaces 326 of the operation object 302 of fig. 3A (e.g., oriented and/or facing in a direction parallel to the top surface 322 and/or the bottom surface 324 of the operation object 302).
As a result, the robot system 100 can grip the operation object 112 with the end effector contacting and covering one outer peripheral surface 326 of the operation object 302, leaving both the top surface 322 and the bottom surface 324 of the operation object 302 exposed. Additionally, the robotic system 100 may present the top surface 322 and the bottom surface 324 of the operation object 302 simultaneously or sequentially in front of (e.g., within the scanning field of and/or facing) the scanners 416. With the operation object 112 in the scanning position, the robotic system 100 may acquire the identifier(s) 332 of the operation object 302 using the scanners 416 (e.g., scanners 416 facing at least the top surface 322 and the bottom surface 324 of the operation object 302), scanning the presented surfaces simultaneously and/or sequentially.
Additionally, the second control sequence 424 includes one or more second provided positions 444 for placing the initially downward-facing surface (the bottom surface 324 of the operation object 302) horizontally, directly above the upward-facing scanner 416, and/or placing the initially upward-facing surface (the top surface 322 of the operation object) vertically, directly in front of the horizontally facing scanner 416. The second control sequence 424 also includes a reorientation/rotation action (e.g., the action shown by the dashed hollow circle) to provide the two provided positions/postures, whereby both the top surface 322 and the bottom surface 324 are scanned using the orthogonally oriented scanners 416. Further, the robotic system 100 may, for example, first present the top surface 322 of the operation object 302 to the upward-facing scanner for scanning, and then rotate the operation object 302 by 90 degrees to present its bottom surface 324 to the horizontally facing scanner 416 for scanning. At this time, the reorientation/rotation action may be made conditional, so that the robot system 100 executes the corresponding command only in case of a failure to read the identifier 332 of the operation object 302.
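A minimal sketch of the branch described above is given below: when the reliable reference exceeds the sufficiency threshold a single-scan sequence is planned, otherwise a sequence that keeps both candidate surfaces exposed and conditionally reorients the object. The threshold value and step names are assumptions, not the disclosed control sequences themselves.

```python
def plan_scan_sequence(reliable_ref, sufficiency_threshold=0.8):
    """Return an illustrative list of scanning steps depending on the reliable reference."""
    if reliable_ref >= sufficiency_threshold:
        # Posture is trusted: grip the top surface and present the bottom surface once.
        return [("grip", "top_surface"),
                ("present", "bottom_surface_to_upward_scanner")]
    # Posture is uncertain: grip a peripheral surface so top and bottom stay exposed,
    # then present one surface and reorient only if the first scan fails.
    return [("grip", "peripheral_surface"),
            ("present", "first_candidate_surface"),
            ("if_scan_failed_rotate_90_deg", None),
            ("present", "second_candidate_surface")]

print(plan_scan_sequence(0.95))
print(plan_scan_sequence(0.40))
```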
Alternatively, as an example, the robot system 100 may create or acquire a control sequence (not shown) for gripping/covering one outer peripheral surface 326 along the width of the operation object 302 in a case where the reliable reference is low. In this case, the robotic system 100 moves the operation object 302 between a horizontally opposed pair of scanners 416, presenting the outer peripheral surfaces 326 along the length of the operation object 302, so that, for example as shown in fig. 3A, the identifier 332 on one of the outer peripheral surfaces 326 can be scanned. Details of the control sequence based on the reliable reference will be described later with reference to fig. 5A and 5B.
The robot system 100 may also recalculate the control sequence based on the two-dimensional or three-dimensional shape of the operation object 112 held by the end effector (hereinafter, the operation object 302 is simply referred to as the operation object 112) and on information about the operation objects 112 already placed in the storage container 450 (for example, a large box, a bucket, or the like) at the work position 116.
For example, the robot system 100 grasps the size of the operation object 112 in both the first control sequence and the second control sequence. Further, since the other operation objects 112 already stored in the storage container 450 placed at the working position 116 and the sizes thereof are known, the robot system 100 can obtain the spatial information of the empty volume in the storage container 450. The robot system 100 can calculate the spatial shape parameter of the operation object 112 when various postures of the operation object 112 held by the end effector change two-dimensionally or three-dimensionally. Thus, by comparing these spatial shape parameters with the spatial information in the storage container 450, it is possible to optimally select a mode or a scheme for storing the operation object 112 in the storage container 450 at a higher packing density.
In this case, the robot system 100 may consider whether or not the end effector interferes with the operation objects 112 already stored in the storage container 450 when the end effector enters the storage container 450. Also, when changing the posture of the operation object 112 yields a higher filling rate of operation objects 112 into the storage container 450 than directly storing the held operation object 112 in the storage container 450 in its current orientation, the robot system 100 can create or acquire a control sequence including an operation of performing a grip change on the operation object 112 so as to place it in the posture optimized for storage.
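The sketch below illustrates one simplified way such a storage-pose selection could be expressed: enumerate axis-aligned orientations of the held object, discard orientations that do not fit the free space or that leave insufficient headroom for the end effector, and keep the orientation with the least wasted footprint. The clearance value and all names are assumptions made only for this illustration.

```python
from itertools import permutations

def choose_storage_pose(obj_dims, free_space, effector_clearance=20.0):
    """obj_dims: (l, w, h) of the operation object in mm.
    free_space: (x, y, z) of the empty volume in the storage container in mm.
    Returns the best-fitting axis-aligned orientation, or None if none fits."""
    best = None
    for pose in set(permutations(obj_dims)):          # each axis-aligned orientation
        fits = all(p <= s for p, s in zip(pose, free_space))
        # crude interference check: the end effector needs headroom above the object
        headroom_ok = pose[2] + effector_clearance <= free_space[2]
        if fits and headroom_ok:
            waste = (free_space[0] - pose[0]) * (free_space[1] - pose[1])
            if best is None or waste < best[1]:
                best = (pose, waste)
    return best[0] if best else None

print(choose_storage_pose((300, 200, 150), (320, 400, 260)))
```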
Fig. 4B is a front view showing an exemplary job 404 executed by the robot system 100 according to the embodiment of the present disclosure. In this example, a plurality of operation targets 112 are mixed and placed on a pallet 464, and the pallet 464 is transported to a pickup area including the start position 114 while being mounted on a self-propelled carriage 462 of, for example, an AGV (Automated Guided Vehicle). Although fig. 4B shows a state in which a plurality of operation targets 112 having the same shape are placed in order and mixed, it should be noted that a plurality of operation targets 112 having different sizes and shapes are often stacked on the pallet 464 at random depending on the actual unloading situation.
The pickup area to which the pallet 464 is conveyed is imaged by the scanner 412, and an operation object 112 is selected as described with reference to fig. 4A. The selected operation object 112 is gripped by the end effector provided at the distal end of the robot arm 414 of the transfer unit 104 (in this example, by holding its top surface 322) and is scanned by the scanner 416, whereby the identifier 332 is acquired. The robot system 100 can compare the information of the identifier 332 of the operation object 112 with the master data 252, for example, and thereby grasp information including the size of the operation object 112.
On the other hand, even the operation objects 112 having the same identifier 332 sometimes have different sizes (particularly heights) in practice. Therefore, when the robot system 100 scans the operation target 112, for example, the distance to the bottom surface 324 of the operation target 112 is measured by the distance measuring device 466 (an example of the imaging device 222) provided on or near the floor surface of the work space. At this time, when the movement of the operation target 112 is temporarily stopped when the scanning is performed, the distance from the bottom surface 324 of the operation target 112 may be measured during the temporary stop. Although fig. 4B shows a case where the measurement is performed by the distance measuring device 466 immediately after the operation target 112 is unloaded (depalletized) from the pallet 464, the timing of the measurement is not particularly limited as long as the measurement is performed at a position upstream of the grip changing position 118 in the control sequence.
In this example, the robot system 100 can grasp the height position (gripping level) of the top surface 322 of the operation object 112 at the time of measurement by a control sequence or appropriate position measurement. Thus, the height 112h of the operation target 112 can be obtained by obtaining the measurement value of the distance to the bottom surface 324 of the operation target 112. That is, the robot system 100 can acquire measurement data of the bottom surface 324 of the operation object 112 by the distance measurement device 466, and calculate the height 112h based on the acquired measurement data and the height position (grip level) of the top surface 322 of the operation object 112. When the height 112h is different from the value stored as the master data 252 of the operation target 112, the robot system 100 may replace the master data 252 or add and update the master data 252.
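A minimal sketch of the height calculation described above follows, assuming the floor-mounted range device reports the height of the object's bottom surface above the floor and that the grip level gives the height of the gripped top surface. The function names, the tolerance, and the master-data layout are illustrative assumptions.

```python
def measure_object_height(grip_level_mm: float, bottom_height_mm: float) -> float:
    """grip_level_mm: known height of the gripped top surface above the floor.
    bottom_height_mm: measured height of the bottom surface above the floor."""
    return grip_level_mm - bottom_height_mm

def maybe_update_master(master: dict, object_id: str, measured_h: float, tol: float = 2.0):
    """Overwrite the stored height when the measurement deviates beyond the tolerance."""
    if abs(master[object_id]["height"] - measured_h) > tol:
        master[object_id]["height"] = measured_h
    return master

master = {"box_A": {"length": 300, "width": 200, "height": 150}}
h = measure_object_height(grip_level_mm=950.0, bottom_height_mm=803.0)
print(h, maybe_update_master(master, "box_A", h))
```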
In this way, after the actual size of the manipulation object 112 is known, the robot system 100 can calculate the spatial shape parameters of the posture when the manipulation object 112 is gripped from each direction. Then, the robot system 100 compares these spatial shape parameters with the spatial information in the container 450 placed at the working position 116, and optimally selects a recipe or a mode for storing the operation object 112 in the container 450 with a higher packing density.
At this time, the robot system 100 calculates whether or not the end effector would interfere with the operation objects 112 already stored in the storage container 450 when the end effector enters the storage container 450, and can exclude a storage mode if interference is likely to occur. Further, because the robot system 100 can create a control sequence that includes an operation of re-gripping the operation object 112, it can modify the control sequence 472 used up to this point (corresponding to the first control sequence 422 or the second control sequence 424 of fig. 4A) so as to change the posture of the operation object 112 to the posture optimized for storage, when doing so yields a higher filling rate of operation objects 112 into the storage container 450 than directly storing the gripped operation object 112 in the storage container 450 in its current orientation at this point in time.
Conversely, when the posture of the currently held operation object 112 is optimal in terms of storage efficiency, the robot system 100 stores the held operation object 112 in the storage container 450 such as a bucket placed on the warehousing conveyor of the conveyor unit 106 at the working position 116 without changing the control sequence 472.
When the robot system 100 performs grip conversion on the operation object 112, it operates on the operation object 112 based on the recalculated control sequence 474. For example, the scanned operation object 112 is moved to the region around the grip conversion position 118, the end effector is turned to a predetermined direction to set the operation object 112 in a temporary placement posture, and in this state the operation object is placed on the temporary placement stage 468 and the grip is released. The temporary placement stage is not particularly limited; for example, a pedestal capable of supporting and holding the operation object 112 in an inclined state is a preferable example in terms of ease of gripping and stability during holding, in particular because the operation object 112 can be placed so that at least two of its surfaces remain exposed. The robot system 100 can then change the direction of the end effector, grip a surface different from the surface gripped before the temporary placement, and thereby change the grip on the operation object 112.
The robot system 100 stores the grip-converted operation object 112 in a storage container 450 such as a bucket placed on the warehousing conveyor or the like of the conveying unit 106 at the work position 116. In this case, the end effector may be operated so as to, for example, rock back and forth, left and right, or up and down with respect to the target position rather than being positioned there directly. Further, a plurality of end effectors (or a plurality of types of end effectors) may be provided, and each may be selectively used according to the size of the operation object 112.
Further, in order to perform the actions associated with the job 402 described above, the robotic system 100 may track the current position (e.g., a set of coordinates corresponding to the grid used by the robotic system 100) and/or the current posture of the operation object 112. For example, the robotic system 100 may track the current position/posture based on data from the position sensors 224 of fig. 2, for example via the processor 202. The robotic system 100 can determine the configuration of one or more portions (e.g., links, joints) of the robotic arm 414 based on data from the position sensors 224. The robot system 100 may further calculate the position/posture of the end effector based on the position and orientation of the robot arm 414, and thereby calculate the current position of the operation object 112 held by the end effector. In addition, the robotic system 100 may track the current position using an estimation mechanism, based on processing of other sensor readings (e.g., force readings or acceleration readings), the executed operational commands/settings and/or their associated timing, or a combination thereof.
[ operation procedure (control sequence based on reliable reference) ]
Fig. 5A is a flowchart of a method 500 showing an example of the flow of the operation of the robot system 100 according to an embodiment of the present disclosure. To execute the job 402 of fig. 4A in accordance with the reliable reference associated with the determination of the initial pose of the operational object 112, the method 500 includes the steps of acquiring/calculating a control sequence based on the reliable reference and implementing it. Additionally, the method 500 may be implemented by the one or more processors 202 based on executing commands stored in the one or more storage devices 204.
In block 501, the robotic system 100 may identify the scanning fields of the one or more imaging devices 222 of fig. 2. For example, the robotic system 100 may, through the one or more processors 202, identify the space scanned by one or more imaging devices 222, such as the scanners 412, 416 of fig. 4A and 4B. The robotic system 100 identifies scanning fields that are oriented in opposite directions (orientations) and/or orthogonal directions based on the orientation of the scanners 416. As shown in fig. 4A and 4B, the scanners 416 may be arranged on both sides in the horizontal direction, or on both sides in the vertical direction, opposite and/or facing each other. In addition, the scanners 416 may also be oriented orthogonally to each other, with, for example, one facing up or down and another facing a horizontal direction.
The robotic system 100 may identify the scanning fields from the master data 252, for example. The master data 252 may contain, for the imaging devices 222, grid locations, coordinates, and/or other indications representing the corresponding scanning fields. The master data 252 may be predetermined according to the layout and/or physical configuration of the imaging devices 222, the capabilities of the imaging devices 222, environmental factors (e.g., light conditions, and/or shading/configuration), or combinations thereof. In addition, the robotic system 100 may implement a calibration procedure in order to identify the scanning fields. For example, the robot system 100 can determine whether or not the corresponding imaging device 222 has correctly scanned a known mark by placing a known mark or code at a set position using the transfer unit 104. The robotic system 100 may identify the scanning field based on the position of the known marker when it is scanned correctly.
In block 502, the robotic system 100 can scan the designated areas. The robotic system 100 may generate captured data (e.g., digital images and/or point clouds) for one or more designated areas, such as the pickup area and/or the placement area, using one or more imaging devices 222 (e.g., the scanner 412 of fig. 4A and 4B and/or other area scanners), for example via commands/prompts sent by the processor 202. The captured data can be communicated by the imaging devices 222 to the one or more processors 202. Accordingly, the one or more processors 202 may receive, for subsequent processing, captured data representing the pickup area (e.g., including the operation objects 112 before execution of the job), the grip conversion area, and/or the placement area (e.g., including the operation objects 112 after execution of the job).
In block 504, the robotic system 100 can identify the operational object 112 and associated positions (e.g., the start position 114 of fig. 1, and/or the job position 116 of fig. 1), and/or an initial pose of the operational object 112. To identify the contour (e.g., surrounding edges and/or surfaces) of the operational object 112, the robotic system 100 may analyze the captured data, for example, by the processor 202 based on a pattern recognition mechanism and/or recognition rules. The robotic system 100 may further identify the groupings of contours and/or surfaces of the operands 112 based on, for example, prescribed recognition mechanisms, recognition rules, and/or pose, contour-related templates to which the various operands 112 correspond.
The robotic system 100 can, for example, identify groupings of the contours of the operands 112 that correspond to patterns (e.g., whether the same values, whether changes occur at a known scale/pattern) of color, brightness, depth/position, and/or combinations thereof of the entire contour of the operands 112. In addition, for example, the robotic system 100 may identify groupings of contours and/or surfaces of the operational object 112 from templates, images, or combinations thereof of prescribed shapes/poses specified in the master data 252.
From the operation objects 112 identified in the pickup area, the robotic system 100 may select one (e.g., according to a prescribed sequence or set of rules, and/or a template of the object outline) as the target operation object 112 for the job. The robotic system 100 may select the operation object 112, for example, from a point cloud representing distances/positions relative to the known position of the scanner 412. The robot system 100 may select an operation object 112 that has two or more surfaces exposed and visible in the imaging result, for example one located at a corner or edge. The robot system 100 may also select the operation object 112 according to a predetermined pattern or sequence (for example, from left to right or from nearest to farthest with respect to a reference position).
For the selected operation object 112, the robot system 100 may further process the captured data in order to determine the start position 114 and/or the initial posture. For example, the robot system 100 can determine the start position 114 by mapping the position of the operation object 112 in the captured data (e.g., a predetermined reference point related to the determined posture) to a position within the grid used by the robot system 100, according to a predetermined calibration map.
The robot system 100 may process the shot data of the placement area and determine the empty space between the operation objects 112. The robot system 100 can determine an empty space based on a contour of the operation object 112 mapped on a predetermined calibration map in which the position of the image is mapped to a real position and/or coordinates used by the system. The robot system 100 can determine the empty space as a space between the outlines of the manipulation objects 112 (even the surfaces of the manipulation objects 112) belonging to different groups. Then, the robot system 100 measures one or more dimensions of the empty space, compares the measured dimensions with one or more dimensions of the operation target 112 (for example, dimensions stored in the master data 252), and determines an appropriate empty space for the operation target 112. Additionally, the robotic system 100 may select an appropriate/free space as the work location 116 according to a prescribed pattern (e.g., relative to a reference location, left-to-right, nearest-to-farthest, bottom-to-top, etc.).
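As a simplified, non-authoritative illustration of the empty-space selection described above, the sketch below works in a single dimension of the placement area: gaps between already-placed objects are measured and compared with the object footprint, scanning left to right. A real system would work in two or three dimensions; the margin, names, and scan order are assumptions.

```python
def select_work_position(occupied_spans, area_length, obj_length, margin=10.0):
    """occupied_spans: sorted list of (start, end) positions already used, in mm.
    Returns the left edge of a suitable placement, or None if no gap is adequate."""
    cursor = 0.0
    for start, end in sorted(occupied_spans):
        gap = start - cursor
        if gap >= obj_length + 2 * margin:
            return cursor + margin          # left edge of the chosen placement
        cursor = max(cursor, end)
    if area_length - cursor >= obj_length + 2 * margin:
        return cursor + margin
    return None                              # no adequate empty space

print(select_work_position([(0, 250), (430, 700)], area_length=1200, obj_length=300))
```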
The robot system 100 may determine the work position 116 without processing the captured data or after processing it. For example, the robot system 100 may place the operation object 112 in the placement area according to a predetermined control sequence and position without imaging the area. Alternatively, the robot system 100 may process the captured data in order to perform a plurality of jobs (e.g., moving a plurality of operation objects 112, such as objects located in a layer/column shared within a stack).
In block 522, for example, the robotic system 100 may determine the initial posture (e.g., an estimate of the resting posture of the operation object 112 in the pickup area) based on processing of the captured data (e.g., captured data from the scanner 412). The robot system 100 can determine the initial posture of the operation object 112 by comparing the contour of the operation object 112 with the contours of predetermined posture templates of the master data 252 (for example, by comparing pixel values). The predetermined posture templates may include, for example, the differing outlines of the operation object 112 corresponding to its expected orientations. The robotic system 100 can identify the set of contours associated with the selected operation object 112 (e.g., the edges of an exposed surface, such as the first exposed surface 304 of figs. 3A and/or 3C, and/or the second exposed surface 306 of fig. 3A). The robot system 100 can determine the initial posture by selecting the posture template with the smallest measured difference from the compared contours of the operation object 112.
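A minimal sketch of this template comparison is given below, assuming the observed contour and each stored posture template are available as binary masks of the same shape; the posture whose template differs least, pixel-wise, is selected. The array shapes and template names are illustrative assumptions.

```python
import numpy as np

def select_initial_pose(observed_mask: np.ndarray, pose_templates: dict) -> str:
    """observed_mask: binary array. pose_templates: {pose_name: binary array of same shape}."""
    diffs = {name: int(np.sum(observed_mask != tmpl)) for name, tmpl in pose_templates.items()}
    return min(diffs, key=diffs.get)         # posture with the minimum contour difference

obs = np.zeros((20, 20), dtype=bool); obs[5:15, 3:17] = True          # 10 x 14 footprint
templates = {
    "first_posture": np.zeros((20, 20), dtype=bool),
    "second_posture": np.zeros((20, 20), dtype=bool),
}
templates["first_posture"][5:15, 3:17] = True                         # matches the observation
templates["second_posture"][3:17, 5:15] = True                        # rotated footprint
print(select_initial_pose(obs, templates))
```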
For further example, the robotic system 100 may determine an initial pose of the operand 112 based on the physical dimensions of the operand 112. The robot system 100 can estimate the physical size of the operation object 112 based on the size of the exposed surface obtained by photographing data. The robot system 100 measures the length and/or angle of each of the outlines of the operation object 112 in the captured data, and then can map or convert the measured length to a real-world length or a standard length using a calibration map, a conversion table or process, a predetermined equation, or a combination thereof. The robotic system 100 may use the measured dimensions in order to identify the manipulation object 112 and/or the exposed surface(s) corresponding to the physical dimensions.
The robotic system 100 may compare the estimated physical dimensions to a set of known dimensions (e.g., height, length, and/or width) of the operation objects 112 and their surfaces within the master data 252 to identify the operation object 112 and/or the exposed surface(s). The robotic system 100 may identify the exposed surface(s) and the corresponding posture using the matched set of dimensions. For example, the robotic system 100 may identify the exposed surface as the top surface 322 of the operation object 302 of fig. 3A or the bottom surface 324 of the operation object 302 of fig. 3C if the dimensions of the exposed surface match the length and width associated with the expected operation object 112. Based on the orientation of the exposed surface, the robotic system 100 may determine the initial posture of the operation object 112 (e.g., the first posture 312 or the third posture 316 of the operation object 302 with the corresponding surface facing upward).
For example, the robotic system 100 may determine an initial pose of the operand 112 based on visual images of one or more surfaces of the operand 112, and/or one or more markers thereof. The robotic system 100 may compare the pixel values of the connected contour set to the prescribed label-based pose template of the master data 252. The gesture template of the marker base may include, for example, one or more specific markers of the operation object 112 expected to have various orientations. The robotic system 100 may determine the initial pose of the operational object 112 by selecting a surface, surface orientation, and/or corresponding one of the poses that is a measure of the least difference associated with the compared images.
In block 524, the robotic system 100 may calculate a reliable reference associated with the initial pose of the operational object 112. The robotic system 100 may calculate reliable benchmarks as part of the process of determining the initial pose. For example, the reliable reference may correspond to a reference for a difference between the contour of the operation object 112 and the contour of the template selected as described above. Additionally, for example, the reliable reference may correspond to the estimated physical dimension and/or angle-related tolerance level described above. In addition, for example, the reliable reference may correspond to a reference for a difference between a visible mark within the captured data and an image of the template described above.
In block 506, the robot system 100 may calculate a control sequence for executing the job 402 related to the operation object 112 (e.g., the first control sequence 422 of fig. 4A, the second control sequence 424 of fig. 4A, the control sequence 472 of fig. 4B, etc.), including, where necessary, the control sequence 474 for the grip conversion operation on the operation object 112 shown in fig. 4B.
For example, the robotic system 100 may create or acquire a control sequence by calculating a sequence of commands, settings, or a combination thereof for operating the operating devices 212 of the robotic arm 414 and/or the end effector of fig. 4A and 4B. For some jobs, the robotic system 100 calculates the control sequence and set values for operating the robotic arm 414 and/or the end effector so as to move the operation object 112 from the start position 114 to the work position 116 via the grip conversion position 118 as needed. The robotic system 100 may implement a control sequence mechanism (e.g., a process, a function, an equation, an algorithm, a computer-generated/readable model, or a combination thereof) configured to calculate a movement path within the space.
For example, the robotic system 100 may use the A* algorithm, the D* algorithm, and/or other grid-based searches to calculate a movement path through space in order to move the operation object 112 from the start position 114 to the work position 116, via the grip conversion position 118 as needed, through one or more provided postures/positions (e.g., one or more corresponding scanning positions of the end effector). The control sequence mechanism uses a further process, function, equation, and/or mapping table, or a combination thereof, to convert the movement path into the sequence of commands or settings for the operating devices 212.
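For illustration only, the following is a compact A* search on a small 2D occupancy grid. The actual system would plan in a higher-dimensional space with kinematic constraints; the grid, heuristic, and names here are assumptions used solely to show the grid-search idea referenced above.

```python
import heapq

def astar(grid, start, goal):
    """grid: list of rows, 0 = free, 1 = blocked. start/goal: (row, col). Returns a path or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]
    seen = {start: 0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < seen.get(nxt, float("inf")):
                    seen[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```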
The robotic system 100 may selectively create or acquire control sequences based on the reliable reference. The robotic system 100 may calculate a control sequence including an approach location (e.g., the first approach position 432 of fig. 4A and/or the second approach position 434 of fig. 4A), one or more scanning positions (e.g., the first provided position 442 of fig. 4A and/or the second provided position 444 of fig. 4A), or a combination thereof, based on the reliable reference. For example, the robotic system 100 may calculate the approach location and/or the scanning location according to a metric (e.g., a metric of performance and/or a metric of scanning) selected based on a comparison of the reliable reference with the sufficiency threshold. A scanning position is a position of the end effector at which one or more surfaces of the operation object 112 are presented in front of (i.e., in the scanning field of) one or more corresponding object scanners 416 that scan one or more identifiers 332 of the operation object 112.
In block 532, the robotic system 100 may calculate a set of available approach locations, for example via the processor 202. The available approach locations may correspond to open or unoccupied positions around the start position 114 at which the end effector can be adequately placed. The robot system 100 may place the end effector at the selected approach location so that it can contact and hold the operation object 112 without disturbing another operation object 112.
For example, the robotic system 100 may calculate the set of available approach locations by calculating the separation distances between the outline of the operation object 112 and the outlines of the adjacent operation objects 112. The robotic system 100 may compare each separation distance with a prescribed set of distances corresponding to the physical size/shape of the end effector and/or its various orientations. The robotic system 100 may identify an available approach location wherever the corresponding separation distance exceeds the prescribed distance corresponding to the size of the end effector.
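A minimal sketch of this filtering follows, assuming the clearance to the nearest neighboring object and the required end-effector extent are known per candidate side; the side labels, numbers, and names are illustrative assumptions.

```python
def available_approach_locations(clearances_mm, effector_extent_mm):
    """clearances_mm: {'top': d, 'left': d, 'right': d, ...} measured between contours.
    effector_extent_mm: {'top': size, 'left': size, ...} required space per orientation.
    Returns the sides from which the end effector can approach without interference."""
    return [side for side, clearance in clearances_mm.items()
            if clearance > effector_extent_mm.get(side, float("inf"))]

clearances = {"top": 400.0, "left": 35.0, "right": 120.0}
effector = {"top": 150.0, "left": 90.0, "right": 90.0}
print(available_approach_locations(clearances, effector))   # -> ['top', 'right']
```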
In determination block 534, the robotic system 100 may compare the reliable reference with one or more sufficiency thresholds to determine whether it is satisfied. In the event that the reliable reference satisfies the sufficiency threshold (e.g., the reliable reference exceeds the required sufficiency threshold), the robotic system 100 may calculate a control sequence (e.g., the first control sequence 422) based on the metric of performance. In that case, the robotic system 100 may infer that the initial pose is correct and calculate the control sequence without regard to the metric of scanning, which corresponds to the likelihood of scanning at least one identifier 332 of the operation object 112 and/or to the possibility that the initial pose is incorrect.
As an example, at block 542 the robotic system 100 may calculate candidate solutions. The candidate solutions may be instances of the control sequence corresponding to unique combinations of available approach locations and scanning positions (e.g., provided positions/orientations of the operation object 112). The robotic system 100 can rotate the corresponding model/pose within the master data 252 according to the initial pose so that the location(s) 334 of the identifier 332 can be calculated. The robotic system 100 may then remove available approach locations at which the end effector would cover the location 334 of the identifier 332 (e.g., locations directly above, in front of, and/or within a threshold distance of it).
The robotic system 100 may calculate candidate solutions for each available proximate location remaining in the set (e.g., the result of the calculation of block 532). For each candidate solution, the robotic system 100 may further calculate a unique scan position based on the available proximity positions. The robotic system 100 may calculate the scanning position based on the rotation and/or movement of the model of the operand 112, whereby the surface corresponding to the location 334 of the identifier 332 is within the scanning field and faces the corresponding scanner 416. The robotic system 100 may rotate and/or move the model according to a prescribed flow, equation, function, etc.
In block 544, the robotic system 100 may calculate a metric of performance associated with each candidate solution. The robotic system 100 may calculate the metric of performance so as to correspond to the throughput (rate) associated with completing the job 402. For example, the metric of performance may be associated with the travel distance of the operation object 112 under the candidate solution, the estimated travel time, the number of changes to commands and/or settings for the operating devices 212, the completion rate (i.e., the complement of the piece-loss rate), or a combination thereof. The robot system 100 may calculate the corresponding values for the candidate control sequences using one or more pieces of measured or known data (for example, accelerations/velocities associated with settings/commands, and/or piece-loss rates associated with a gripping surface and/or a motion direction), predetermined calculation processes, equations, functions, and the like.
In block 546, the robotic system 100 may select the candidate with the greatest performance metric (i.e., with the corresponding proximate location) as the control sequence. For example, the robot system 100 selects, as a control sequence, a candidate corresponding to the highest completion rate, the shortest movement distance, the fewest number of changes in commands and/or settings, the fastest movement duration, or a combination thereof from among the set of candidates. Thus, the robotic system 100 may select as the proximity location an available proximity location within the set corresponding to the highest performance metric.
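As a hedged illustration of this selection step, each candidate below is scored by a weighted combination of travel distance, estimated duration, and expected loss rate, and the best-scoring candidate is kept. The weights, field names, and numbers are assumptions, not values taken from the disclosure.

```python
def performance_metric(candidate, w_dist=0.4, w_time=0.4, w_loss=0.2):
    # Smaller distance/time/loss is better, so invert into a "higher is better" score.
    return -(w_dist * candidate["distance_m"]
             + w_time * candidate["duration_s"]
             + w_loss * candidate["loss_rate"] * 100)

def select_control_sequence(candidates):
    """Return the candidate with the greatest performance metric."""
    return max(candidates, key=performance_metric)

candidates = [
    {"name": "via_top_grip", "distance_m": 2.1, "duration_s": 6.5, "loss_rate": 0.01},
    {"name": "via_side_grip", "distance_m": 2.6, "duration_s": 7.2, "loss_rate": 0.005},
]
print(select_control_sequence(candidates)["name"])
```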
In contrast, if the reliable reference does not satisfy the sufficiency threshold (e.g., the reliable reference is less than the required sufficiency threshold), the robotic system 100 may calculate the candidate solutions based on a different criterion. As indicated at block 538, the robotic system 100 may calculate a control sequence (e.g., the second control sequence 424) based on the metric of scanning. The metric of scanning is a value corresponding to the likelihood (e.g., a binary value, or a non-binary score/percentage) that at least one identifier 332 of the operation object 112 remains uncovered by the end effector and can be scanned, regardless of whether the initial pose is correct.
For example, the robotic system 100 may prioritize the metric of scanning over the metric of performance (e.g., by satisfying it first and/or assigning it a greater weight) if the reliable reference does not satisfy the sufficiency threshold. Thus, the robotic system 100 may calculate a control sequence that includes one or more scanning positions for presenting at least one uncovered identifier 332 of the operation object 112 in front of one or more scanners 416 (i.e., within a scanning field and/or facing the corresponding scanner).
Fig. 5B is a flow diagram illustrating an example sequence of actions of a robotic system according to an embodiment of the present disclosure, and illustrates a flow diagram 538 for selectively calculating a control sequence (e.g., for one or more positions of an end effector) based on a scan metric.
In this example, as shown at block 552, calculating the control sequence based on the metric of scanning may include calculating a set of locations of exposed identifiers 332. The robot system 100 may calculate the set of locations of exposed identifiers 332 with respect to the initial posture of the operation object 112 (for example, the locations 334 of identifiers 332 that can be scanned directly while the end effector is at the holding position). The robotic system 100 may calculate the locations 334 of the exposed identifiers 332 for each of the available approach locations. Assuming the initial pose is correct, at the corresponding approach location the locations 334 of the exposed identifiers 332 correspond to the locations 334 of the identifiers 332 of the operation object 112 that are not covered by the end effector.
As depicted at block 542, the master data 252 may include computer models or templates (e.g., offset measures relative to the edges and/or images of one or more operation objects 112) that describe the expected locations 334 of the identifiers 332 for each of the operation objects 112. The robotic system 100 may calculate the set of locations of the exposed identifiers 332 by rotating and/or moving the prescribed model/template within the master data 252 so as to match the initial pose. The robotic system 100 may then remove approach locations at which the end effector would cover a location 334 of the identifier 332. In other words, the robotic system 100 may remove available approach locations that are located directly above, in front of, and/or within a threshold distance of the location 334 of the identifier 332.
In block 554, the robotic system 100 may calculate a set of locations 334 for the alternate identifications 332. The robotic system 100 may calculate a set of positions 334 of the alternative identifications 332 for the alternative pose of the initial pose. For each available proximity location, the robotic system 100 may calculate an alternate pose, and for each alternate pose, the robotic system 100 may calculate a location of the alternate identifier 332. Thus, assuming the initial pose is incorrect, at the corresponding proximate location, the location of the alternate identifier 332 may correspond to the location 334 of the identifier 332 of the operand 112 that remains uncovered by the end effector. With respect to the location 334 of the exposed marker 332, as described above, the robotic system 100 can calculate the location 334 of the alternate marker 332 based on rotating and/or moving the prescribed model/template within the master data 252 according to the alternate pose.
In block 556, the robotic system 100 may calculate the exposure likelihood associated with each of the approach locations, each of the alternative poses, each of the identifiers 332 of the operation object 112, or a combination thereof. The exposure likelihood indicates the likelihood that the identifiers of one or more operation objects 112 remain exposed and scannable when the end effector grips the operation object 112 from the corresponding approach location. The exposure likelihood covers both the scenario in which the initial pose is correct and the scenario in which it is incorrect. In other words, the exposure likelihood may represent the likelihood that the identifiers of one or more operation objects 112 remain exposed and scannable even if the initial posture is incorrect.
For example, the robotic system 100 may calculate the exposure likelihood as a condition-dependent certainty, such as a probability value that applies to a particular condition (e.g., a specific instance of an approach position, an alternative pose, an identifier 332 of the operation object 112, or a combination thereof). The robotic system 100 may calculate the exposure likelihood based on combining (e.g., by adding and/or multiplying) the condition-dependent certainty with a certainty/likelihood that the particular condition is true (e.g., a value based on the reliable reference). When multiple identifiers 332 are expected to be exposed at the considered approach position and/or the considered pose, the robotic system 100 may calculate the exposure likelihood based on combining the certainty associated with each additionally exposed identifier.
The robotic system 100 may calculate the exposure likelihood by combining the certainty values of the exposed identifier locations and the alternative identifier locations for the individual poses associated with the approach position under consideration. For example, the robotic system 100 may calculate the exposure likelihood using location-related certainties with opposite signs (e.g., positive and negative) for the locations of the exposed identifiers and the locations of the alternative identifiers. The robotic system 100 may calculate the exposure likelihood by adding the magnitudes of the two certainty values and/or by adding the signed certainty values. The overall magnitude may represent the overall likelihood that one or more identifiers 332 of the operation object 112 remain scannable, and the signed/vector likelihood may represent the likelihood that the scannable state is maintained even if the initial pose of the operation object 112 was incorrect. Thus, an approach position is preferable when the overall magnitude is large and the signed/vector likelihood approaches zero, since this represents similar likelihoods that an identifier 332 of the operation object 112 remains scannable regardless of whether the initial pose is correct or incorrect.
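The arithmetic in block 556 can be pictured with the short sketch below. The particular sum/difference combination and the sample certainty values are assumptions chosen to illustrate the magnitude and signed components; they are not a formula given by this disclosure.

```python
# Hypothetical sketch of the exposure-likelihood combination in block 556.
# Exposed-set certainties enter with positive sign, alternative-set certainties
# with negative sign: the magnitude reflects overall scannability and the signed
# value reflects the balance between the two pose scenarios. Values are made up.

def exposure_likelihood(exposed_certainties, alternative_certainties):
    exposed_total = sum(exposed_certainties)
    alternative_total = sum(alternative_certainties)
    magnitude = exposed_total + alternative_total   # overall scannability
    signed = exposed_total - alternative_total      # ~0 means balanced scenarios
    return magnitude, signed

# A large magnitude with a signed value near zero suggests an identifier stays
# scannable whether or not the initial pose estimate was correct.
magnitude, signed = exposure_likelihood([0.9, 0.7], [0.8, 0.75])
```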
In block 558, the robotic system 100 may select an approach position. The robotic system 100 may select, as the approach position, an available approach position that includes uncovered locations 334 of identifiers 332 in both the set of exposed identifiers 332 (e.g., the set of estimated locations of identifiers 332 of the operation object 112 assuming the initial pose is correct) and the set of alternative identifiers 332 (e.g., the set of estimated locations of identifiers 332 of the operation object 112 assuming the initial pose is incorrect). In other words, the robotic system 100 may select an approach position that leaves at least one identifier 332 of the operation object 112 exposed and scannable regardless of the correctness of the initial pose. The robotic system 100 may select, as the approach position, the available approach position that has the largest overall magnitude, has a signed/vector likelihood closest to zero, and/or otherwise corresponds to the exposure likelihood that best matches a target condition.
The robotic system 100 may calculate a scanning likelihood (e.g., a likelihood that an exposed identifier 332 of the operation object 112 can be successfully scanned) based on the exposure likelihood. For example, the robotic system 100 may combine the exposure likelihood with an evaluation value associated with the corresponding exposed identifier 332 (e.g., a tracked rate of successful scans, a physical size, and/or a type of the identifier 332). The robotic system 100 may select the available approach position corresponding to the highest scanning likelihood as the approach position.
The robotic system 100 may compare the set of exposed identifiers 332 with the set of alternative identifiers 332 and determine whether the two sets contain locations on opposing surfaces of the operation object 112 (e.g., as between the first pose 312 and the third pose 316). In that case, the robotic system 100 may select an available approach position corresponding to a third surface (e.g., one peripheral surface 326 of the operation object 302) that is orthogonal to the two opposing surfaces.
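One way to picture the selection in block 558 is the ranking sketch below, which scores each available approach position by a scanning likelihood built from the exposure magnitude, the closeness of the signed value to zero, and an identifier evaluation value. The weighting scheme and the field names are illustrative assumptions, not values from this disclosure.

```python
# Hypothetical sketch of block 558: rank approach positions by a scanning
# likelihood that favors a large exposure magnitude, a signed value near zero,
# and identifiers with a good scan track record. Weights and fields are assumed.

def scanning_likelihood(candidate):
    exposure_score = candidate["magnitude"] * (1.0 - abs(candidate["signed"]))
    return exposure_score * candidate["identifier_success_rate"]

def select_approach_position(candidates):
    return max(candidates, key=scanning_likelihood)

candidates = [
    {"name": "top",  "magnitude": 1.6, "signed": 0.45, "identifier_success_rate": 0.97},
    {"name": "side", "magnitude": 1.5, "signed": 0.05, "identifier_success_rate": 0.95},
]
best = select_approach_position(candidates)  # "side": balanced across pose scenarios
```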
In block 560, for example in the event that the reliable reference does not satisfy the sufficiency threshold, the robotic system 100 may create or acquire candidate control sequences based on the selected approach position. The robotic system 100 may calculate candidate control sequences that present the locations 334 of identifiers 332 in both the set of exposed identifiers 332 and the set of alternative identifiers 332 toward the corresponding scanners at one or more presentation positions/scanning positions. In other words, the robot system 100 can calculate candidate control sequences under which the operation object 112 remains scannable regardless of whether the initial pose is correct or not.
The robotic system 100 may create or acquire candidate control sequences that account for the locations 334 of the identifiers 332 in both the set of exposed identifiers 332 and the set of alternative identifiers 332. For example, the robotic system 100 may calculate candidate control sequences corresponding to locations 334 of identifiers 332 that may be present on opposing and/or orthogonal surfaces. Therefore, the robot system 100 can cope with a pose oriented opposite to the initial pose (for example, a pose in which the visually confirmed outline of the operation object 112 occupies the same position/angle but faces the opposite direction) and/or other rotated poses relative to the initial pose. As an illustrative example, referring again to FIGS. 3A and 3C, when the gripping position corresponds to one peripheral surface 326 of the operation object 302, the robot system 100 may calculate a candidate control sequence that supports both the first pose 312 and the third pose 316.
To account for poses with multiple possibilities (e.g., an erroneous estimate of the initial pose), the robotic system 100 may calculate scanning poses that place the identifiers 332 of the operation object 112 in both the set of exposed identifiers 332 and the set of alternative identifiers 332 within the scanning field. As indicated at block 562, the robotic system 100 may calculate a set of candidate scanning poses associated with the operation object 112 within or passing through the scanning field. Once the approach position is selected, the robotic system 100 may calculate candidate scanning poses, as described for block 542, by rotating and/or translating the model of the locations 334 of the identifiers 332 such that the locations 334 of the identifiers 332 are disposed within the scanning field.
In block 564, the robotic system 100 may map the set of exposed identifiers 332 and the set of alternative identifiers 332 to the candidate scanning poses, respectively. The robotic system 100 may map the set of exposed identifiers 332 based on rotating the model of the locations 334 of the identifiers 332 with the initial pose as a starting point. The robotic system 100 may map the set of alternative identifiers 332 based on rotating the model of the locations 334 of the identifiers 332 with an alternative pose (e.g., an inverted pose) as a starting point.
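Blocks 562 and 564 can be sketched as a simple geometric test: map the identifier-location model through a candidate pose and check whether any mapped location falls inside a scanner's field of view. In the sketch below the scanning field is approximated as an axis-aligned box, and all geometry, names, and numbers are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch of blocks 562/564: check whether identifier locations,
# mapped through a candidate scanning pose, land inside an axis-aligned box
# standing in for a scanner's field of view.

def in_scan_field(points, field_min, field_max):
    """True for each point that falls inside the axis-aligned scanning field."""
    points = np.asarray(points)
    return np.all((points >= field_min) & (points <= field_max), axis=1)

def pose_presents_identifiers(identifier_offsets, candidate_pose, field_min, field_max):
    """Map object-frame identifier offsets through the candidate pose and test the field."""
    offsets_h = np.hstack([identifier_offsets, np.ones((len(identifier_offsets), 1))])
    world = (candidate_pose @ offsets_h.T).T[:, :3]
    return bool(np.any(in_scan_field(world, field_min, field_max)))

# Example with made-up numbers: a scanning field in front of the robot.
field_min, field_max = np.array([0.4, -0.2, 0.2]), np.array([0.8, 0.2, 0.6])
offsets = np.array([[0.1, 0.0, 0.0]])
candidate_pose = np.eye(4)
candidate_pose[:3, 3] = [0.5, 0.0, 0.4]
ok = pose_presents_identifiers(offsets, candidate_pose, field_min, field_max)  # True
```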
When mapping the locations 334 of the identifiers 332, the robotic system 100 may, in block 568, compare the locations 334 and/or orientations of the identifiers 332 of the operation object 112 in both the set of exposed identifiers 332 and the set of alternative identifiers 332 against the scanning fields. In decision block 570, the robotic system 100 may determine whether a candidate pose presents identifiers 332 of the operation object 112 from both the set of exposed identifiers 332 and the set of alternative identifiers 332 to the scanners at the same time.
In block 572, the robotic system 100 may identify, as a scanning pose, a candidate pose that simultaneously presents identifiers 332 of the operation object 112 from both the set of exposed identifiers 332 and the set of alternative identifiers 332 to different scanners/scanning fields. For example, when the locations in the set of exposed identifiers 332 and the set of alternative identifiers 332 fall on opposing surfaces of the operation object 112 and the gripping position corresponds to one peripheral surface 326 of the operation object 112, the robot system 100 can identify a scanning pose that places the operation object 112 between a pair of opposing scanners such that each of the two opposite surfaces faces one of the scanners.
In block 574, if none of the candidate poses simultaneously presents identifiers 332 of the operation object 112 from both the set of exposed identifiers 332 and the set of alternative identifiers 332, the robotic system 100 may calculate a plurality of scanning positions (e.g., a first scanning position and a second scanning position) that each present at least one identifier 332 from one of the two sets. For example, the first scanning position may present the location 334 of one or more identifiers 332 within the set of exposed identifiers 332 of the operation object 112 to a corresponding scanner, and the second scanning position may present the location 334 of one or more identifiers 332 within the set of alternative identifiers 332 of the operation object 112 to a corresponding scanner. The second scanning position may be reached from the first scanning position by pivoting the end effector, translating the end effector, or a combination thereof.
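Blocks 570 through 574 amount to a small decision: prefer a single pose that presents both sets at once, otherwise fall back to two successive scanning positions, one per set. The sketch below expresses that decision with a hypothetical `presents(pose, id_set)` predicate (for instance, the field test sketched above); it is one plausible reading, not the patent's control logic.

```python
# Hypothetical sketch of blocks 570-574: pick one scanning pose if it presents
# identifiers from both the exposed set and the alternative set at once,
# otherwise plan a first and a second scanning position (one per set).

def plan_scan_positions(candidate_poses, exposed_set, alternative_set, presents):
    for pose in candidate_poses:
        if presents(pose, exposed_set) and presents(pose, alternative_set):
            return [pose]              # a single scanning pose suffices
    first = next(p for p in candidate_poses if presents(p, exposed_set))
    second = next(p for p in candidate_poses if presents(p, alternative_set))
    return [first, second]             # scan twice, pivoting/translating in between

# Toy demo with made-up poses labeled by the surfaces they expose.
poses = ["top_only", "bottom_only", "both_sides"]
presents = lambda pose, id_set: id_set in pose or pose == "both_sides"
print(plan_scan_positions(poses, "top", "bottom", presents))   # ['both_sides']
```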
Referring again to the example shown in FIGS. 4A and 4B, the second control sequence 424 may correspond to the second approach position 434, which corresponds to a third surface (e.g., one peripheral surface 326 of the operation object 112) orthogonal to the two opposing surfaces (e.g., as between the first pose 312 and the third pose 316), as described above. Thus, the first scanning position may correspond to a first one of the second presentation positions 444, which places the surface corresponding to the initial pose (e.g., the surface estimated to be the bottom surface 324 of the operation object 112) above the upward-facing scanner 416 and facing that scanner. The second scanning position may correspond to a second one of the second presentation positions 444, at which the operation object 112 is rotated by 90 degrees, for example counterclockwise, on the way from the start position 114 toward the work position 116. Thus, the second scanning position corresponds to the second presentation position 444 that places the surface corresponding to the alternative pose (e.g., the surface that would be the bottom surface 324 of the operation object 112 under the third pose 316) in front of the horizontally facing scanner 416, in a vertical orientation facing that scanner 416.
The robot system 100 may calculate a candidate plan for placing the end effector at the selected approach position, contacting and gripping the operation object 112 accordingly, and then lifting and moving the operation object 112 through the identified scanning pose and/or set of scanning positions, using one or more mechanisms described above (e.g., an A* mechanism). For example, when a scanning pose is identified, the robot system 100 may calculate a candidate plan that establishes the scanning pose of the operation object 112 within the scanning field or while passing through the scanning field. In the case where a scanning pose cannot be identified, the robot system 100 may calculate a candidate plan that moves/orients the end effector so as to pass successively through the plurality of scanning positions, thereby successively moving/rotating the operation object 112 through the corresponding presentation positions/orientations.
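The reference to an A* mechanism suggests a graph search over discrete waypoints between the approach position, the scanning positions, and the work position. The sketch below is a generic A* over a coordinate grid with obstacles, offered only as a reminder of how such a planner is typically structured; the grid, costs, and Manhattan heuristic are assumptions and nothing here is taken from this disclosure.

```python
import heapq

# Generic A* over a 2D grid (assumed stand-in for waypoint planning between the
# approach position, scanning positions, and work position).

def a_star(start, goal, blocked, width, height):
    def heuristic(node):
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    open_set = [(heuristic(start), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue                          # already expanded with a better cost
        came_from[node] = parent
        if node == goal:
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in blocked:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + heuristic(nxt), ng, nxt, node))
    return None                               # no collision-free path found

# Toy example: route around a small obstacle column.
path = a_star((0, 0), (3, 0), blocked={(1, 0), (1, 1)}, width=4, height=3)
```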
In block 576, the robotic system 100 may create or update the scanning likelihood associated with each candidate control sequence. The robotic system 100 may update the scanning likelihood based on the various likelihoods and/or priorities described above in connection with block 544 (e.g., exposure likelihoods, scanning positions, the scanners 416 utilized, the identifiers 332 deemed to be exposed, associated error and/or loss rates, or a combination thereof), but with respect to the metric of the scan rather than the metric of performance.
In block 578, the robotic system 100 may create or acquire the control sequence by selecting among the candidates based on the scanning likelihood. The robot system 100 may select, as the control sequence, the candidate with the highest scanning likelihood. For example, the robotic system 100 may select the candidate with the highest likelihood of placing at least one location 334 of an exposed identifier 332 and at least one location 334 of an alternative identifier 332 in one or more scanning fields (i.e., in front of one or more scanners 416) during movement of the operation object 112, e.g., in the scanning space between the start position 114 and the work position 116.
Where two or more candidates correspond to scanning likelihoods within a small difference (e.g., within a prescribed threshold), the robotic system 100 may calculate and evaluate the metric of performance corresponding to each such candidate (e.g., as described for blocks 544 and 546). The robot system 100 may then select the candidate closest to the target condition as the control sequence.
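Blocks 576 and 578 together with this tie-break can be read as a two-level selection rule; the sketch below is one hypothetical encoding of it, with the 0.05 threshold and the dictionary fields invented for the example.

```python
# Hypothetical sketch of blocks 576-578 plus the tie-break: pick the candidate
# control sequence with the highest scanning likelihood, and among near-ties
# (within `tie_threshold`) pick the one with the best performance metric.

def select_control_sequence(candidates, tie_threshold=0.05):
    best_scan = max(c["scan_likelihood"] for c in candidates)
    near_best = [c for c in candidates
                 if best_scan - c["scan_likelihood"] <= tie_threshold]
    return max(near_best, key=lambda c: c["performance"])

candidates = [
    {"name": "single-pose sweep", "scan_likelihood": 0.93, "performance": 0.70},
    {"name": "two-position plan", "scan_likelihood": 0.91, "performance": 0.85},
    {"name": "long detour",       "scan_likelihood": 0.60, "performance": 0.95},
]
chosen = select_control_sequence(candidates)   # "two-position plan" wins the tie-break
```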
The robotic system 100 may also depart from the illustrated exemplary flow. For example, the robotic system 100 may select an approach position as described above. Based on the selected approach position, the robot system 100 can perform a predetermined set of operations on the operation object 112, such as gripping, lifting, reorienting, moving horizontally, lowering back down, releasing, or a combination thereof. During or after the predetermined set of operations, the robotic system 100 may again image or scan the pickup area (e.g., by looping back to block 502) and again determine the initial pose and the reliable reference (e.g., via blocks 522 and 524).
Returning to FIG. 5A, in block 508, the robotic system 100 may begin implementing the resulting control sequence. The robotic system 100 implements the control sequence by operating the one or more processors 202 to send the commands and/or settings of the control sequence to other devices (e.g., the corresponding operation devices 212 and/or other processors) so as to execute the jobs 402, 404. The robot system 100 thus operates the operation devices 212 according to the sequence of commands, settings, or a combination thereof, thereby executing the control sequence. For example, the robotic system 100 may operate the operation devices 212 to place the end effector at the approach position around the start position 114, contact and grip the operation object 112, or perform a combination thereof.
In block 582, the robotic system 100 can move the end effector to the scanning position, thereby moving the operation object 112 to the presentation position/orientation. For example, after lifting the operation object 112 from the start position 114, or while moving the end effector, the robot system 100 can establish the scanning pose associated with the operation object 112. Additionally, the robotic system 100 may move the end effector to the first scanning position.
In block 584, the robotic system 100 may operate the scanners 416 to scan the operation object 112. For example, the one or more processors 202 may send commands to the scanners 416 to scan, and/or send queries to the scanners 416 to receive scan statuses and/or scan values. In block 585, for example, where the control sequence includes a scanning pose, the robot system 100 may execute the control sequence so as to move the operation object 112 in the scanning pose across the scanning field in a direction orthogonal to the orientation of the scanning field. During movement of the operation object 112, the scanners 416 may scan (simultaneously and/or successively) multiple surfaces associated with multiple probable locations 334 of identifiers 332 of the operation object 112.
In decision block 586, the robotic system 100 evaluates the scan results (e.g., statuses and/or scan values) and determines whether the operation object 112 has been scanned. For example, the robotic system 100 may evaluate the scan results after executing the control sequence up to the first scanning position. In block 588, where the scan result indicates a successful scan of the operation object 112 (e.g., the status corresponds to detection of a valid code/identifier, and/or the scan value matches an identified/expected operation object 112), the robotic system 100 may move the operation object 112 toward the work position 116. Based on the successful scan, the robot system 100 may disregard any subsequent scanning positions (e.g., the second scanning position) and move the operation object 112 directly to the work position 116.
In the case where the scan result indicates that the scan of the operation object 112 was unsuccessful, the robot system 100 determines, in decision block 590, whether the current scanning position is the last one in the control sequence. If it is not the last scanning position, the robotic system 100 may move the operation object 112 to the next presentation position/orientation, as shown by the loop back to block 582.
In the event that the current scanning position is the last one in the control sequence, the robotic system 100 may perform one or more improvement tasks, as shown in block 592. The robotic system 100 may stop and/or cancel the control sequence in the event that the scan results associated with all scanning positions of the control sequence indicate a scan failure. The robotic system 100 may generate an error status/message to alert an operator. The robot system 100 may place the operation object 112 whose scanning has failed in a designated area (i.e., a position different from the start position 114 and the work position 116).
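The execution described in blocks 582 through 592 boils down to a scan-and-advance loop with a fallback. A plausible, non-authoritative rendering is sketched below; the `robot` and `scanner` objects and their methods are invented placeholders, not interfaces defined by this disclosure.

```python
# Hypothetical sketch of blocks 582-592: step through the scanning positions of
# the control sequence, trigger a scan at each one, move to the work position on
# the first success, and fall back to an improvement task if every scan fails.

def execute_scan_sequence(robot, scanner, scan_positions, work_position, error_area):
    for position in scan_positions:                 # block 582: next presentation pose
        robot.move_end_effector(position)
        result = scanner.scan()                     # block 584: trigger/query the scanner
        if result.get("success"):                   # blocks 586/588: successful scan
            robot.move_object_to(work_position)     # skip any remaining scan positions
            return result.get("value")
    # block 592: every scanning position failed; perform an improvement task
    robot.move_object_to(error_area)
    robot.report_error("scan failed at every scanning position")
    return None
```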
Based on successful completion of the job 402, 404 (i.e., successful scanning of the operation object 112 and placement at the work position 116) or implementation of the improvement task, the robotic system 100 may proceed to move the next operation object 112 for the next job. The robot system 100 may image the designated area again, as indicated by the loop back to block 502, or may select the next operation object 112 using the existing imaging data, as indicated by the loop back to block 504.
By scanning the operation object 112 in space (e.g., at a position between the start position 114 and the work position 116), the efficiency and speed associated with executing the jobs 402, 404 may be improved. By calculating a control sequence that includes the scanning positions and operating in cooperation with the scanners 416, the robot system 100 can effectively combine the job of moving the operation object 112 and the job of scanning the operation object 112. Further, by creating or acquiring a control sequence based on the reliable reference for the initial pose, the efficiency, speed, and accuracy associated with the scanning job may be further improved. As described above, the robotic system 100 is able to create or acquire a control sequence in which an alternative orientation accounts for the scenario in which the initial pose is incorrect. Therefore, the robot system 100 can increase the likelihood of correctly/successfully scanning the operation object 112 even when the pose determination is wrong due to a calibration error, an unexpected pose, unexpected lighting conditions, or the like. Increasing the likelihood of a correct scan can increase the overall throughput of the robotic system 100 and reduce operator labor/intervention.
[Conclusion]
The foregoing detailed description of the embodiments of the present disclosure is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed above. While specific embodiments of, and examples for, the disclosure are described above for illustrative purposes, various equivalent modifications are possible within the scope of the disclosure, as those skilled in the relevant art will recognize. For example, while flows or blocks are presented in a given order, alternative embodiments may execute routines having steps, or employ systems having blocks, in a different order, and some flows or blocks may be deleted, moved, added, subdivided, combined, and/or modified to provide alternatives or subcombinations. Each of these flows or blocks may be implemented in a variety of different ways. Further, while flows or blocks are shown as being performed serially, they may instead be performed in parallel or at different times. Moreover, any specific numbers noted herein are merely examples, and alternative embodiments may employ different values or ranges.
These and other changes can be made to the disclosure in light of the above detailed description. While the detailed description describes specific embodiments of the disclosure and the best mode contemplated for carrying it out, the disclosure can be practiced in many ways, no matter how detailed the above description appears in text. The details of the present system may vary considerably in their specific implementations while still being encompassed by the technology disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the disclosure should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the disclosure with which that terminology is associated. Accordingly, the invention is not limited except as by the appended claims. In general, the terms used in the following claims should not be construed to limit the disclosure to the specific embodiments disclosed herein, unless the above detailed description section explicitly defines such terms.
While certain aspects of the invention are presented below in certain specific claim forms, the applicants contemplate the various aspects of the invention in any number of claim forms. The applicants hereby reserve the right to pursue such additional claim forms after filing the application or in a subsequent application.
Claims (16)
1. A control method of a robot system including a robot with a robot arm and an end effector, the control method of the robot system comprising:
acquiring an approach position at which the end effector grips an operation object;
acquiring a scanning position for scanning the identifier of the operation object; and
creating or acquiring a control sequence based on the approach position and the scanning position, instructing the robot to execute the control sequence,
the control sequence includes the following (1) to (4):
(1) holding the operation object from a start position;
(2) scanning identification information of the operation object with a scanner located between the start position and the working position;
(3) when a predetermined condition is satisfied, temporarily releasing the operation object from the end effector at a grip conversion position, and again gripping the operation object with the end effector so as to convert the grip; and
(4) moving the operation object to a working position.
2. The control method of a robot system according to claim 1,
the control sequence includes the following (5) and (6):
(5) setting, as the predetermined condition, an improvement in efficiency of storing the operation object at the working position when the direction in which the operation object is gripped by the end effector is changed by changing the gripping direction of the operation object; and
(6) calculating storage efficiency at the work position before the gripping conversion of the operation object and storage efficiency at the work position after the gripping conversion of the operation object.
3. The control method of a robot system according to claim 2,
the control sequence includes the following (7) and (8):
(7) acquiring the height of the operation object; and
(8) calculating the storage efficiency based on the height of the operation object.
4. The control method of a robot system according to claim 3,
calculating the height of the operation object from the height position of the top surface of the operation object and the height position of the bottom surface of the operation object measured in a state where the operation object is gripped by the end effector.
5. The control method of a robot system according to claim 3 or 4, wherein,
the height of the operation object is measured while the operation object is scanned by the scanner.
6. The control method of a robot system according to any one of claims 1 to 5, wherein,
the control sequence includes: (9) when the predetermined condition is satisfied, the operation object is placed on the temporary placement table at the grip conversion position and is temporarily released from the end effector.
7. The control method of a robot system according to any one of claims 1 to 6, wherein,
the control method of the robot system further includes:
acquiring shooting data representing a pickup area including the operation object;
determining an initial posture of the operation object based on the shot data;
calculating a reliable reference representing a likelihood that an initial pose of the operational object is correct; and
acquiring the approach position and the scan position based on the reliable reference.
8. The control method of a robot system according to claim 7,
the control sequence includes: (10) selectively calculating the approach location and the scan location in terms of a measure of performance and/or a measure of scanning based on a result of comparing the reliable reference to a sufficiency threshold,
the metric of the scan is independent of whether the initial pose of the operation object is correct or incorrect, and is related to the likelihood that the identifier of the operation object is not covered by the end effector.
9. The control method of a robot system according to claim 7,
in the event that the reliable reference does not satisfy the sufficiency threshold, the approach location and the scan location are acquired based on a metric of the scan, or the approach location and the scan location are acquired based on a metric of the scan with priority over a metric of the performance.
10. The control method of a robot system according to claim 7,
acquiring the approach location and the scan location based on a metric of the performance if the reliable reference satisfies the sufficiency threshold.
11. The control method of a robot system according to any one of claims 1 to 10,
the control sequence includes the following (11) and (12):
(11) acquiring a first scanning position for providing identification information of the operation object to the scanner and a second scanning position for providing alternative identification information of the operation object to the scanner;
(12) moving the operation object to the first scanning position, and, in a case where the scanning result indicates a successful scan, moving the operation object to the working position while disregarding the second scanning position, or, in a case where the scanning result indicates a failed scan, moving the operation object to the second scanning position.
12. A non-transitory computer readable storage medium storing processor commands for implementing a method of controlling a robotic system including a robot with a robot arm and an end effector,
the processor command includes:
a command for acquiring an approach position at which the end effector grips an operation object;
a command for acquiring a scanning position for scanning the identification information of the operation object; and
a command for creating or acquiring a control sequence based on the approach position and the scanning position and instructing the robot to execute the control sequence,
the control sequence includes the following (1) to (4):
(1) holding the operation object from a start position;
(2) scanning an identification of the operation object with a scanner located between the start position and the job position;
(3) when a predetermined condition is satisfied, temporarily releasing the operation object from the end effector at a grip conversion position, and again gripping the operation object with the end effector so as to convert the grip; and
(4) moving the operation object to a working position.
13. The non-transitory computer-readable storage medium of claim 12,
the control sequence includes the following (5) and (6):
(5) setting, as the predetermined condition, an improvement in efficiency of storing the operation object at the working position when the direction in which the operation object is gripped by the end effector is changed by changing the gripping direction of the operation object; and
(6) calculating storage efficiency at the work position before the gripping conversion of the operation object and storage efficiency at the work position after the gripping conversion of the operation object.
14. The non-transitory computer-readable storage medium of claim 13,
the control sequence includes the following (7) and (8):
(7) acquiring the height of the operation object; and
(8) calculating the storage efficiency based on the height of the operation object.
15. The non-transitory computer-readable storage medium of claim 14,
the height of the operation object is calculated from the height position of the top surface of the operation object and the height position of the bottom surface of the operation object measured in a state where the operation object is gripped by the end effector.
16. A control device for a robot system comprising a robot having a robot arm and an end effector, and performing the control method according to any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010277929.4A CN111470244B (en) | 2019-01-25 | 2020-01-20 | Control method and control device for robot system |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/258,120 US10456915B1 (en) | 2019-01-25 | 2019-01-25 | Robotic system with enhanced scanning mechanism |
US16/258,120 | 2019-01-25 | ||
JP2019-118678 | 2019-06-26 | ||
JP2019118678 | 2019-06-26 | ||
JP2019-213029 | 2019-11-26 | ||
JP2019213029A JP6697204B1 (en) | 2019-01-25 | 2019-11-26 | Robot system control method, non-transitory computer-readable recording medium, and robot system control device |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010277929.4A Division CN111470244B (en) | 2019-01-25 | 2020-01-20 | Control method and control device for robot system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111483750A true CN111483750A (en) | 2020-08-04 |
Family
ID=70682480
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010066275.0A Pending CN111483750A (en) | 2019-01-25 | 2020-01-20 | Control method and control device for robot system |
Country Status (3)
Country | Link |
---|---|
JP (2) | JP6697204B1 (en) |
CN (1) | CN111483750A (en) |
DE (1) | DE102020101767B4 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117295480A (en) * | 2021-04-29 | 2023-12-26 | X趋势人工智能公司 | Robot apparatus for dispensing specified items |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102021202340A1 (en) | 2021-03-10 | 2022-09-15 | Robert Bosch Gesellschaft mit beschränkter Haftung | METHOD OF CONTROLLING A ROBOT TO PICK AND INSPECT AN OBJECT AND ROBOT CONTROL DEVICE |
DE102021115473A1 (en) * | 2021-06-15 | 2022-12-15 | Linde Material Handling Gmbh | Methods for automatic picking and mobile picking robots |
DE102022101825B3 (en) | 2022-01-26 | 2023-02-23 | Ssi Schäfer Automation Gmbh (At) | Method and system for automated material handling including 100% verification |
DE102022203410A1 (en) | 2022-04-06 | 2023-10-12 | Robert Bosch Gesellschaft mit beschränkter Haftung | Method for controlling a robotic device |
DE102022121538A1 (en) | 2022-08-25 | 2024-03-07 | Bayerische Motoren Werke Aktiengesellschaft | Method for depositing an object using a robot |
WO2024134794A1 (en) * | 2022-12-21 | 2024-06-27 | 日本電気株式会社 | Processing device, robot system, processing method, and recording medium |
CN116002368B (en) * | 2023-02-09 | 2023-08-04 | 安徽布拉特智能科技有限公司 | Battery cell feeding control method, electronic equipment and storage medium |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05217014A (en) * | 1992-02-07 | 1993-08-27 | Fujitsu Ltd | Method for reading out bar code |
JP2013078825A (en) | 2011-10-04 | 2013-05-02 | Yaskawa Electric Corp | Robot apparatus, robot system, and method for manufacturing workpiece |
JP5366031B2 (en) | 2011-10-17 | 2013-12-11 | 株式会社安川電機 | Robot sorting system, robot apparatus, and method for manufacturing sorted articles |
JP5803769B2 (en) * | 2012-03-23 | 2015-11-04 | トヨタ自動車株式会社 | Mobile robot |
JP6415291B2 (en) | 2014-12-09 | 2018-10-31 | キヤノン株式会社 | Information processing apparatus, information processing method, and program |
JP6466297B2 (en) | 2015-09-14 | 2019-02-06 | 株式会社東芝 | Object detection apparatus, method, depalletizing automation apparatus, and packing box |
JP6724499B2 (en) * | 2016-04-05 | 2020-07-15 | 株式会社リコー | Object gripping device and grip control program |
AT519176B1 (en) | 2016-10-14 | 2019-02-15 | Engel Austria Gmbh | robot system |
JP7001354B2 (en) | 2017-03-29 | 2022-01-19 | トーヨーカネツ株式会社 | Automatic logistics system |
JP7045139B2 (en) | 2017-06-05 | 2022-03-31 | 株式会社日立製作所 | Machine learning equipment, machine learning methods, and machine learning programs |
DE102017005882B4 (en) | 2017-06-22 | 2019-08-01 | FPT Robotik GmbH & Co. KG | Method of operating a robot to verify its working environment |
- 2019-11-26 JP JP2019213029A patent/JP6697204B1/en active Active
- 2020-01-20 CN CN202010066275.0A patent/CN111483750A/en active Pending
- 2020-01-24 DE DE102020101767.7A patent/DE102020101767B4/en active Active
- 2020-04-10 JP JP2020070953A patent/JP7495688B2/en active Active
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117295480A (en) * | 2021-04-29 | 2023-12-26 | X趋势人工智能公司 | Robot apparatus for dispensing specified items |
Also Published As
Publication number | Publication date |
---|---|
JP7495688B2 (en) | 2024-06-05 |
JP2021003801A (en) | 2021-01-14 |
JP2021003800A (en) | 2021-01-14 |
JP6697204B1 (en) | 2020-05-20 |
DE102020101767A1 (en) | 2020-07-30 |
DE102020101767B4 (en) | 2021-07-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110116406B (en) | Robotic system with enhanced scanning mechanism | |
US11772267B2 (en) | Robotic system control method and controller | |
CN111483750A (en) | Control method and control device for robot system | |
JP6822719B2 (en) | Robot system with automatic package scanning and registration mechanism, and how it works | |
JP2023155399A (en) | Robotic system with piece-loss management mechanism | |
JP7175487B1 (en) | Robotic system with image-based sizing mechanism and method for operating the robotic system | |
CN111470244B (en) | Control method and control device for robot system | |
US12097627B2 (en) | Control apparatus for robotic system, control method for robotic system, computer-readable storage medium storing a computer control program, and robotic system | |
JP7218881B1 (en) | ROBOT SYSTEM WITH OBJECT UPDATE MECHANISM AND METHOD FOR OPERATING ROBOT SYSTEM | |
WO2023073780A1 (en) | Device for generating learning data, method for generating learning data, and machine learning device and machine learning method using learning data | |
CN115609569A (en) | Robot system with image-based sizing mechanism and method of operating the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||