CN113298876A - Storage position identification method and device - Google Patents
- Publication number
- CN113298876A (application CN202010699074.4A)
- Authority
- CN
- China
- Prior art keywords
- target
- area
- hard disk
- grid
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J9/1687—Assembly, peg and hole, palletising, straight line, weaving pattern movement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1408—Methods for optical code recognition the method being specifically adapted for the type of code
- G06K7/1417—2D bar codes
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Toxicology (AREA)
- Electromagnetism (AREA)
- Health & Medical Sciences (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Abstract
One or more embodiments of the present specification provide a storage location identification method and apparatus. The method may include: determining a target area corresponding to a target device to which a target object belongs; performing grid area division on the target area; and determining, according to the grid area division result and a mapping relation between objects and grid areas, a target grid area corresponding to the target object as the storage position of the target object on the target device.
Description
Technical Field
The disclosure relates to the technical field of robots, in particular to a storage position identification method and device.
Background
With the development of science and technology, more and more robots have entered people's life and work; replacing manual labor with robots can significantly improve production and work efficiency. At present, however, the assembly, replacement, and maintenance of objects still need to be completed manually, which often incurs substantial labor and capital costs.
Disclosure of Invention
In view of this, one or more embodiments of the present disclosure provide a storage location identification method and apparatus.
Specifically, one or more embodiments of the present disclosure provide the following technical solutions:
according to a first aspect of one or more embodiments of the present specification, there is provided a storage position identification method including:
determining a target area corresponding to a target device to which a target object belongs;
and performing grid area division on the target area, and determining, according to the grid area division result and a mapping relation between objects and grid areas, a target grid area corresponding to the target object as the storage position of the target object on the target device.
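The two operations in this aspect, partitioning the target area into a grid and mapping an object to its grid cell, can be sketched as follows. The area dimensions, grid size, and mapping table below are illustrative assumptions, not values taken from this specification.

```python
def divide_into_grid(x0, y0, width, height, rows, cols):
    """Partition a rectangular target area into rows x cols grid cells.

    Returns a dict mapping (row, col) to the cell's bounding box
    (x_min, y_min, x_max, y_max) in the same units as the input area.
    """
    cell_w = width / cols
    cell_h = height / rows
    return {
        (r, c): (x0 + c * cell_w, y0 + r * cell_h,
                 x0 + (c + 1) * cell_w, y0 + (r + 1) * cell_h)
        for r in range(rows)
        for c in range(cols)
    }


def locate_storage_position(object_id, grid, object_to_cell):
    """Look up the target grid cell (the storage position) for an object."""
    return grid[object_to_cell[object_id]]


# Hypothetical example: a 600 x 400 mm target area, a 4 x 6 grid of slots,
# and a mapping table from hard disk ID to grid cell.
grid = divide_into_grid(0.0, 0.0, 600.0, 400.0, rows=4, cols=6)
mapping = {"disk-017": (2, 3)}
print(locate_storage_position("disk-017", grid, mapping))  # -> (300.0, 200.0, 400.0, 300.0)
```

The returned bounding box is the storage position of the object within the target area; in the hard-disk aspect below, the cell would correspond to one hard disk slot on the target server.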
According to a second aspect of one or more embodiments of the present specification, there is provided a storage position identification method including:
determining a target area corresponding to a target server to which a target hard disk belongs;
and determining a target grid area corresponding to the target hard disk as a corresponding storage position of the target hard disk on the target server according to a grid area division result of the target area and a mapping relation between the hard disk and the grid area.
According to a third aspect of one or more embodiments of the present specification, there is provided a storage position identification device including:
a first target area determination unit, configured to determine a target area corresponding to a target device to which a target object belongs;
and a first grid area determining unit, configured to perform grid area division on the target area and determine, according to the grid area division result and the mapping relation between objects and grid areas, a target grid area corresponding to the target object as the storage position of the target object on the target device.
According to a fourth aspect of one or more embodiments of the present specification, there is provided a storage position identification device including:
the second target area determining unit is used for determining a target area corresponding to a target server to which the target hard disk belongs;
and the second grid area determining unit is used for carrying out grid area division on the target area and determining a target grid area corresponding to the target hard disk according to a grid area division result and a mapping relation between the hard disk and the grid area so as to be used as a corresponding storage position of the target hard disk on the target server.
According to a fifth aspect of the present specification, there is provided an electronic apparatus. The electronic device includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method according to the first aspect or the second aspect by executing the executable instructions.
According to a sixth aspect of the present description, a computer-readable storage medium is presented, having stored thereon computer instructions which, when executed by a processor, carry out the steps of the method according to the first or second aspect.
Drawings
Fig. 1 is a schematic view of a vertical cabinet according to an exemplary embodiment of the present disclosure.
Fig. 2 is a schematic diagram of a component structure of a data center operation and maintenance robot according to an exemplary embodiment of the present disclosure.
Fig. 3 is a schematic structural diagram of a clamping device provided in an exemplary embodiment of the present disclosure.
Fig. 4 is another schematic structural diagram of a clamping device provided in an exemplary embodiment of the present disclosure.
Fig. 5 is a schematic diagram of a component structure of an assembly robot according to an exemplary embodiment of the present disclosure.
Fig. 6 is a flowchart of a storage position identification method according to an exemplary embodiment of the present disclosure.
Fig. 7 is a flowchart of another storage location identification method according to an exemplary embodiment of the present disclosure.
Fig. 8 is a schematic diagram of feature points in a server feature model according to an exemplary embodiment of the present specification.
Fig. 9 is a schematic diagram of an image in an image capturing area according to an exemplary embodiment of the present specification.
Fig. 10 is a schematic diagram of an image conversion process in an image capturing area according to an exemplary embodiment of the present specification.
Fig. 11 is a diagram illustrating a mesh region division result of a target region according to an exemplary embodiment of the present specification.
Fig. 12 is a diagram illustrating another grid area division result of a target area according to an exemplary embodiment of the present specification.
Fig. 13 is a schematic diagram of a target grid area of an image capture area provided in an exemplary embodiment of the present description.
FIG. 14 is a flow chart of a method of assembling an object provided by an exemplary embodiment of the present description.
FIG. 15 is a flow chart of another method of assembling an object provided in an exemplary embodiment of the present description.
Fig. 16 is a schematic diagram of a three-dimensional coordinate system provided in an exemplary embodiment of the present description.
Fig. 17 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Fig. 18 is a block diagram of a storage position identification device according to an exemplary embodiment of the present specification.
Fig. 19 is a block diagram of another storage position identification device provided in an exemplary embodiment of the present specification.
Fig. 20 is a schematic structural diagram of another electronic device provided in an exemplary embodiment of the present specification.
Fig. 21 is a block diagram of an object mounting apparatus according to an exemplary embodiment of the present disclosure.
Fig. 22 is a block diagram of another object mounting apparatus provided in an exemplary embodiment of the present description.
Fig. 23 is a block diagram of another object mounting apparatus provided in an exemplary embodiment of the present description.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the specification, as detailed in the claims which follow.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described herein. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
At present, the work of assembling an object into its corresponding receiving portion still needs to be completed manually. The receiving portion may be any structure capable of receiving the object, such as a receiving groove, a receiving hole, an assembly groove, or an insertion slot, which is not limited in this specification. For example, the object may be pulled out of an assembly groove for replacement, or inserted into the assembly groove to complete maintenance. Such manual work often requires substantial manpower and capital costs, and it is difficult to improve the efficiency of assembly. In particular, the replacement and maintenance of the hard disks of data servers in large-scale data center machine rooms is still completed manually; the management efficiency of server hard disks is low, and it is difficult to meet the operation and maintenance requirements of data centers.
The data center machine room may include multiple rows of vertical cabinets 10 as shown in fig. 1. Each vertical cabinet 10 contains a plurality of servers, and each server has a plurality of densely arranged hard disk slots 11, which may be distributed in an array along the horizontal or vertical direction. Each hard disk slot 11 may hold a corresponding hard disk 12. The clearance between a hard disk 12 and its slot 11 is small, usually about 0.5 mm; if the hard disk 12 is not aligned with the slot 11 during insertion, both the hard disk 12 and the slot 11 are easily damaged. Safe insertion of the hard disk 12 therefore demands high precision and careful control of the applied force.
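The regular slot array described above lends itself to a simple index-to-position calculation, and the roughly 0.5 mm clearance sets the tolerance a positioning system must beat. The sketch below is a hypothetical illustration; the origin, pitch values, and helper names are assumptions, not values from this specification.

```python
def slot_center(row, col, origin_x, origin_y, pitch_x, pitch_y):
    """Nominal center of the slot at (row, col) in a regular slot array.

    origin is the center of slot (0, 0); pitch is the center-to-center
    spacing between adjacent slots along each axis (e.g. in millimetres).
    """
    return (origin_x + col * pitch_x, origin_y + row * pitch_y)


# With a ~0.5 mm slot clearance, the lateral positioning error must stay
# below that clearance for a damage-free insertion.
CLEARANCE_MM = 0.5

def is_safe_to_insert(lateral_error_mm):
    """A lateral misalignment larger than the clearance risks damage."""
    return abs(lateral_error_mm) < CLEARANCE_MM

print(slot_center(2, 5, 10.0, 20.0, 30.0, 15.0))  # -> (160.0, 50.0)
```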
The clamping device, the data center operation and maintenance robot and the assembling robot of the exemplary embodiment of the present specification will be described in detail below with reference to the accompanying drawings. The features of the following examples and embodiments can be supplemented or combined with each other without conflict.
Fig. 2 is a schematic diagram illustrating the composition of a data center operation and maintenance robot 20 according to an exemplary embodiment of the present disclosure. As shown in fig. 2, the operation and maintenance robot 20 includes a walking unit 202, a clamping unit 204, a spatial position detection unit 206, a force detection unit 208, a control unit 210, and a driving unit 212.
The walking unit 202 may enable the operation and maintenance robot 20 to walk to a vertical cabinet including a target server, where the target server includes a plurality of receiving units, and the target server may include a target receiving unit corresponding to a target hard disk.
The clamping portion 204 may enable the operation and maintenance robot 20 to clamp the target hard disk.
The spatial position detection unit 206 may be configured to detect the spatial position of each storage unit on the server, and the force detection unit 208 may be used to detect the force information of the target hard disk while the clamping unit 204 inserts the target hard disk into, and/or removes it from, the target storage unit. The spatial position detection unit 206 may be a depth camera. The spatial position may be defined with respect to a geodetic reference system, or the position of an object may be described in an established coordinate system; taking a three-dimensional coordinate system as an example, the depth camera may detect the coordinates of the points in its visual image area, also referred to below as the image capturing area. Other devices capable of obtaining the spatial position of the storage unit on the target server may also be used, which is not limited in this specification. The force detection unit 208 may be a six-dimensional force sensor, which can measure three force components and three moment components simultaneously; other devices capable of measuring multiple force and moment components may also be used, which is not limited in this specification.
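A six-dimensional force sensor reading, i.e. three force components and three moment components, can be represented minimally as follows. The choice of z as the insertion axis and the field names are assumptions made for illustration.

```python
from dataclasses import dataclass
import math

@dataclass
class Wrench:
    """One reading from a six-dimensional force sensor: three force
    components (N) and three moment components (N*m)."""
    fx: float
    fy: float
    fz: float
    mx: float
    my: float
    mz: float

    def lateral_force(self):
        """Magnitude of the force perpendicular to the insertion axis
        (taken here as z, an assumption about the sensor frame)."""
        return math.hypot(self.fx, self.fy)

# A reading with a large lateral force suggests the hard disk is pressing
# against the slot edge (misalignment); a near-zero reading suggests alignment.
reading = Wrench(fx=3.0, fy=4.0, fz=1.0, mx=0.0, my=0.0, mz=0.0)
print(reading.lateral_force())  # -> 5.0
```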
The control unit 210 can visually position the target storage unit according to the spatial position obtained by the spatial position detection unit 206, and then send an initial control signal to the driving unit 212 based on the visual positioning result. The control unit 210 can also send adjustment control signals and insertion/extraction control signals to the driving unit 212 according to the force information of the target hard disk detected by the force detection unit 208.
According to the initial control signal received from the control unit 210, the driving unit 212 may move the target hard disk to the initial position of the target receiving portion, achieving a positioning accuracy between the target hard disk and the target receiving portion of 1 to 2 mm. According to a received adjustment control signal, the driving unit 212 may also drive the clamping unit 204 to adjust the attitude of the target hard disk, for example by moving it along the horizontal or vertical axis, so that the target hard disk is aligned with the target receiving portion. Then, once the target hard disk is aligned with the target receiving portion, the driving unit 212 may drive the clamping unit 204 to insert the target hard disk into, and/or extract it from, the target receiving portion according to the received insertion/extraction control signal.
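The attitude adjustment driven by the adjustment control signal can be sketched as a proportional correction: move a small step opposite to the sensed lateral force. The gain and step limit below are illustrative assumptions, not values from this specification.

```python
def corrective_move(fx, fy, gain=0.1, max_step=0.5):
    """Map a sensed lateral force (N) to a small corrective translation (mm)
    opposite to the contact force, clamped to max_step per control cycle."""
    def clamp(v):
        return max(-max_step, min(max_step, v))
    return (clamp(-gain * fx), clamp(-gain * fy))

# A force pushing along +x and -y yields a nudge toward -x and +y,
# with the larger component clamped to the per-cycle step limit.
dx, dy = corrective_move(3.0, -8.0)
```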
The operation and maintenance robot 20 may further include a collecting unit 214, which may be configured to collect a barcode identifier on the target server. The barcode identifier represents the identity information of the target server, so the robot can identify the target server by collecting its barcode. The barcode identifier may be a one-dimensional barcode, a two-dimensional barcode, or another barcode pattern, for example an EAN-13, UPC-A, QR CODE, or GS1 QR type barcode; it may also be a three-dimensional barcode, a multi-dimensional barcode, a digital information hologram, or the like, which is not limited in this specification. For example, the collecting unit may be an RGB camera and the barcode identifier a two-dimensional code, so that the RGB camera can identify the identity information of the server from the collected two-dimensional code. The identity information may include encoded information corresponding to the server or the server's location information, which is not limited in this specification.
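As a sketch of how a decoded barcode might yield the identity information described above, the following assumes a hypothetical "id:room:rack:slot" payload layout; the specification only states that the barcode encodes identity information such as the server's code or location.

```python
def parse_server_identity(payload):
    """Parse a decoded barcode payload into server identity information.

    The 'id:room:rack:slot' layout is a hypothetical example, not a format
    defined by this specification.
    """
    server_id, room, rack, slot = payload.split(":")
    return {"server_id": server_id,
            "location": {"room": room, "rack": rack, "slot": int(slot)}}

info = parse_server_identity("SRV-0421:DC1:R12:7")
print(info["server_id"], info["location"]["slot"])  # -> SRV-0421 7
```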
The operation and maintenance robot 20 may further include a light supplement unit 216, which may provide supplementary lighting while the spatial position detection unit 206 detects the spatial position of the storage unit, and also while the collecting unit 214 collects the barcode identifier. The light supplement unit 216 may be a light supplement plate or another device with a reflective surface, which is not limited in this specification.
The clamping unit 204 of the operation and maintenance robot 20 may include a clamping device 30, which, as shown in fig. 3 and 4, may include a clamping assembly 40 and a force detection assembly 50.
As shown in fig. 3 and 4, the clamping assembly 40 may include a clamping body 401 and a clamping jaw 402, with the clamping jaw 402 and the force detection assembly 50 located at opposite ends of the clamping body 401. The clamping jaw 402 may include a plurality of motion assemblies; the number of motion assemblies may be set according to practical requirements and is not limited in this specification. For example, the clamping jaw 402 shown in figs. 3 and 4 includes two motion assemblies. Each motion assembly may include a proximal phalanx 406, a distal phalanx 407, a linkage, and a driving cylinder 404. The jaw body 401, the proximal phalanx 406, and the distal phalanx 407 are hinged in sequence, and the two ends of the driving cylinder 404 may be hinged to the jaw body 401 and the distal phalanx 407, respectively. The linkage may include a crank 403 and a connecting rod 405: one end of the crank 403 may be hinged on the hinge axis between the jaw body 401 and the proximal phalanx 406, the other end of the crank 403 may be hinged to the connecting rod 405, and the two ends of the connecting rod 405 may be hinged to the crank 403 and the distal phalanx 407, respectively. The hinge axes between the distal phalanx 407 and the proximal phalanx 406, between the distal phalanx 407 and the connecting rod 405, and between the distal phalanx 407 and the driving cylinder 404 may be arranged in sequence along the same straight line and parallel to each other. This structure gives each motion assembly independent and flexible movement, facilitating various clamping actions.
The surface of the distal phalanx 407 on the side contacting the object may be provided with a flexible contact 408. The flexible contact 408 may be made of an elastic material such as rubber or elastic plastic, so as to avoid damaging the surface of the object while the distal phalanx 407 is in contact with it and to better protect the object's integrity.
The clamping assembly 40 may be configured to clamp a target object, and the force detection assembly 50 may detect the force information of the target object while the clamping assembly 40 inserts the target object into, and/or pulls it out of, the target receiving portion shown in fig. 1. The force detection assembly 50 may further send the detected force information to a controller electrically connected to it, so that the controller can adjust the posture of the clamping assembly 40 according to the received force information, for example by driving the clamping device along the horizontal or vertical axis, so that the target object is aligned with the corresponding target receiving portion. The target object can thus be safely inserted and/or extracted, ensuring that neither the target object nor the target receiving portion is damaged. The clamping assembly 40 may move or adjust its posture relative to the clamping device 30, or the clamping assembly 40 may be integrated with the clamping device 30 and driven by the movement or posture adjustment of the clamping device 30 as a whole, which is not limited in this specification.
In an embodiment, the clamping device 30 may further include a spatial position detection component, which may be configured to detect the spatial position of the receiving portion and transmit it to a controller electrically connected to the component, so that the controller can visually position the target receiving portion according to that spatial position and move the target object to the initial position of the target receiving portion. The spatial position detection component may be a depth camera 60 as shown in fig. 3, which collects the distance between each point in the image capturing area and the camera. Other devices that can acquire the spatial position of the storage portion, such as ultrasonic or infrared positioning devices, may also be used, which is not limited in this specification. When the spatial position detection component is a depth camera, the positioning accuracy between the target object and the target receiving portion can reach 1 to 2 millimeters based on the spatial position information detected by the depth camera.
In an embodiment, the controller may be used to perform posture adjustment control on the clamping device 30 or perform visual positioning on the receiving portion, and the controller may be located on the clamping device 30, or the controller may be located on a robot body of an assembly robot to which the clamping device 30 belongs, which is not limited in this specification.
In an embodiment, the clamping device 30 may further include a driving assembly, which can drive the clamping device to move so as to adjust the pose of the target object. For example, the driving assembly can drive the clamping device to move along a horizontal or vertical axis, and can also drive the clamping device to perform opening and closing actions, so that the target object can be grasped and placed.
In an embodiment, the clamping device 30 may include a collecting component, which may be configured to collect a barcode identifier on the target device corresponding to the target receiving portion. The barcode identifier represents the identity information of the target device, so collecting it allows the identity of the target device to be identified and confirmed. The barcode identifier may be a one-dimensional barcode, a two-dimensional barcode, or another barcode pattern, for example an EAN-13, UPC-A, QR CODE, or GS1 QR type barcode; it may also be a three-dimensional barcode, a multi-dimensional barcode, a digital information hologram, or the like, which is not limited in this specification. For example, the barcode identifier on the target device may be a two-dimensional code and the collecting component an RGB camera 70 as shown in fig. 3; the RGB camera 70 may scan the two-dimensional code on the target device to obtain and confirm its identity information. The identity information may include encoded information corresponding to the target device or the target device's location information, which is not limited in this specification.
In an embodiment, the clamping device 30 may further include a light supplement component, such as the light supplement plate 80 shown in fig. 3. The light supplement plate 80 may provide supplementary lighting while the collecting component collects the target barcode identifier, for example while the RGB camera 70 collects the two-dimensional code information. It may likewise provide supplementary lighting while the spatial position detection component detects the spatial position of the target storage portion, for example while the depth camera 60 detects that spatial position information, ensuring sufficient brightness during shooting or collection.
In an embodiment, as shown in fig. 3, the depth camera 60, the RGB camera 70, and the light supplement plate 80 are all connected to the clamping device 30 through a camera mounting plate 90, which may be mounted between the force detection assembly 50 and the clamping body 401. With this arrangement, if a component mounted on the camera mounting plate, such as the depth camera 60 or the RGB camera 70, collides abnormally with another object, the force detection assembly 50 can detect the abnormal force and moment, reminding the relevant personnel of the abnormal situation so that it can be handled in time. Alternatively, the camera mounting plate may be disposed between the force detection assembly 50 and the robot body of the assembly robot to which the clamping device belongs, which is not limited in this specification.
In an embodiment, the clamping device 30 may further include a mounting assembly that mounts the clamping device to the robot body of the assembly robot to which it belongs; the mounting assembly may be a connection pad 100 as shown in fig. 3. The force detection assembly 50 may be disposed between the clamping assembly 40 and the mounting assembly, so that it can detect the force information of the clamping assembly 40. In operation, the clamping assembly 40, the force detection assembly 50, and the camera mounting plate 90 can then move synchronously with, and driven by, the mounting assembly.
The following description takes as an example the clamping device inserting the target hard disk into and/or extracting it from the target slot. Of course, the object clamped by the clamping device in this specification may be not only a hard disk but also another object that needs to be assembled, such as a battery sheet or a filter sheet, which is not limited in this specification.
In one embodiment, the process of inserting the target hard disk into the target slot by the clamping device may be as follows. The motion assembly of the clamping jaw of the clamping device may grab the target hard disk by opening and closing and, according to the visual positioning result of the spatial position detection assembly, move the target hard disk to the initial position of the target slot, i.e., to the vicinity of the target slot. The clamping device may then bring the end of the target hard disk into contact with the open end of the target slot so that the target hard disk slowly enters the slot. If the target hard disk is not aligned with the target slot, a force arises between the hard disk and the slot; this force is transmitted through the motion assembly to the force detection assembly, which obtains the acting force and the corresponding moment. For example, the force detection assembly may be a six-dimensional force sensor, which transmits the force information to the corresponding controller. The controller may then finely adjust the posture of the clamping device based on the force information, for example by driving the clamping device to move along the horizontal axis, along the vertical axis or along a preset direction, so that the target hard disk becomes aligned with the target slot. Once no force acts between the target hard disk and the target slot, the target hard disk can be safely inserted, effectively preventing the hard disk and the corresponding slot from being damaged by rigid collision.
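The force-guided insertion described above amounts to a simple feedback loop: null the lateral contact forces reported by the force sensor, then advance. The following is a minimal sketch of that loop; the sensor interface `read_force`, the pose command `adjust_pose`, and the threshold and gain values are hypothetical stand-ins, not from this specification.

```python
# Illustrative sketch of the force-feedback fine-adjustment loop. A real
# system would read a six-dimensional force sensor and command the robot
# controller; here all interfaces are injected callables (assumptions).

FORCE_THRESHOLD = 0.5  # N; below this, contact is considered aligned (assumed value)
GAIN = 0.1             # mm per N; proportional adjustment gain (assumed value)

def insert_with_force_feedback(read_force, adjust_pose, advance, max_steps=100):
    """Advance the hard disk while nulling lateral contact forces.

    read_force()        -> (fx, fy) lateral force components in N
    adjust_pose(dx, dy) shifts the gripper laterally by (dx, dy) mm
    advance()           -> True once the disk is fully seated
    """
    for _ in range(max_steps):
        fx, fy = read_force()
        if abs(fx) > FORCE_THRESHOLD or abs(fy) > FORCE_THRESHOLD:
            # Move opposite to the reaction force to align disk and slot.
            adjust_pose(-GAIN * fx, -GAIN * fy)
        elif advance():
            return True  # seated with no lateral load
    return False
```

With a proportional gain, any initial misalignment decays geometrically until the lateral forces drop below the threshold, after which the loop only advances.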
In an embodiment, the operation process of the clamping device pulling the target hard disk out of the target slot may be as follows. According to the visual positioning result of the spatial position detection assembly, the clamping device may move the motion assembly of the clamping jaw to the initial position of the target slot, i.e., to the vicinity of the target slot, and then control the motion assembly to touch the bayonet switch corresponding to the target hard disk so that the bayonet opens. The motion assembly may then move to the vicinity of the target hard disk and open, so as to clamp the end of the target hard disk upon closing; during clamping, the motion assembly performs compliant position adjustment according to the force information detected by the force detection assembly. After the motion assembly has clamped the target hard disk, the clamping device may slowly retreat to execute the pulling action. During pulling, force control along the pulling direction may be performed according to the force information fed back by the force detection assembly, so that the target hard disk bears no force in any direction other than the pulling direction; the target hard disk can thus slowly withdraw from the target slot and be pulled out smoothly.
Fig. 5 is a schematic diagram illustrating the composition of an assembly robot according to an exemplary embodiment of the present disclosure; as illustrated in fig. 5, the assembly robot includes a robot main body and a clamping device. The assembly robot may be used to insert a target object into and/or extract it from a corresponding target accommodating portion, where the target object may be not only a server hard disk but also another object that needs to be assembled, such as a battery sheet or a filter sheet, which is not limited in this specification. The robot main body of the assembly robot may include a walking part, a clamping part, a spatial position detection part, a force detection part, a control part and a driving part. The clamping device and these parts of the robot main body are similar to those in the above embodiments; the related implementation details can refer to the above embodiments and are not described again here.
According to the above technical solution, the force detection assembly in the clamping device of this specification can detect the force information of the target object while the clamping device inserts the target object into and/or pulls it out of the target accommodating portion. The controller uses this force information for posture adjustment control of the clamping device so that the target object is aligned with the target accommodating portion. Because the posture of the clamping device can be adjusted in real time according to the force information, accurate positioning of the clamping device and safe insertion and extraction of the target object can be realized, avoiding damage to the target object and the target accommodating portion during assembly. Meanwhile, since the motion assembly of the clamping device comprises the proximal finger bone and the distal finger bone, the motion assembly has independent movement capability, making it more flexible and facilitating various grabbing actions.
Fig. 6 is a flowchart of a storage position identifying method shown in this specification. As shown in fig. 6, the method may be applied to an assembly apparatus (such as the assembly robot shown in fig. 5, and the control logic is implemented by a control section or other control apparatus included in the robot); the method may comprise the steps of:
In an embodiment, the assembly robot may recognize the image in the image capturing area, so that a target area corresponding to the target device to which the target object belongs may be determined.
In an embodiment, the assembly robot may adjust the image capturing area of a visual camera, where the visual camera transmits the image projected through a lens onto its sensor to a machine device capable of storing, analyzing or displaying it; for example, the visual camera may be a CCD (Charge Coupled Device) camera or a CMOS (Complementary Metal Oxide Semiconductor) camera, which is not limited in this specification. The assembly robot may adjust the target device to which the target object belongs to a preset position of the image capturing area, where the preset position may be any position of the image capturing area set according to actual requirements, for example a central position, a top position, an end position or an upper-left position of the image capturing area, which is not limited in this disclosure. The target object may be a server hard disk, a battery, a filter or another item that needs to be assembled, and the target object may be assembled into, or unloaded from, a corresponding storage position in the target device, which is not limited in this specification.
In an embodiment, the assembly robot may sequentially recognize the barcode identifier corresponding to each object in the image capturing area. The barcode identifier may be a one-dimensional barcode, a two-dimensional barcode or another barcode pattern, for example an EAN-13 barcode, a UPC-A barcode, a QR Code barcode or a GS1 QR barcode; the barcode identifier may also be a three-dimensional barcode, a multi-dimensional barcode or a digital information hologram, which is not limited in this specification.
When the recognition result of a certain barcode identifier matches the preset description information of the target object, that barcode identifier may be determined as the target barcode identifier corresponding to the target object. The preset description information may include a name, a number, location description information and the like. The location description information may be a specific text description, for example "the 4th device in the 3rd row of the 5th machine room", or a specific coordinate location stored in advance, or a coordinate location set composed of a plurality of specific coordinate locations, which is not limited in this specification.
In an embodiment, the assembly robot may obtain a position of a target barcode identifier, where the position of the target barcode identifier may be included in a recognition result of the target barcode identifier, or the assembly robot may perform visual positioning according to the determined target barcode identifier, so as to determine the position of the target barcode identifier, or the assembly robot may search the position of the target barcode identifier from a preset barcode identifier position record table according to information, such as a number of the target barcode identifier, included in the recognition result of the target barcode identifier, which is not limited in this specification.
In an embodiment, the assembly robot may determine a position of the target object corresponding to the target barcode identifier according to a predefined relative position relationship between the barcode identifier and the object, so as to adjust an image acquisition area of the visual camera, and adjust the target object to a preset position of the image acquisition area.
In one embodiment, before adjusting the image capture area of the visual camera, the assembly robot may first align the visual camera so that the plane of the visual camera lens is parallel to the target object plane. The assembly robot may determine a target object feature plane according to the Random Sample Consensus (RANSAC) algorithm and complete the camera alignment according to the feature plane equation. Given a sample data set containing abnormal data, the RANSAC algorithm can estimate the parameters of a mathematical model and identify the valid sample data. Of course, other algorithms may be used to determine the target object feature plane, which is not limited in this specification.
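As an illustration of the RANSAC plane-fitting step, the following sketch estimates a plane from 3D points containing outliers. The iteration count, tolerance and seed are assumed values; a real implementation would run on the depth-camera point cloud.

```python
import random

def ransac_plane(points, iters=200, tol=0.01, seed=0):
    """Minimal RANSAC plane fit, sketching the camera-alignment step.

    points: list of (x, y, z) tuples. Returns (a, b, c, d) with
    a*x + b*y + c*z + d = 0, chosen to maximize the inlier count.
    """
    rng = random.Random(seed)
    best_plane, best_inliers = None, -1
    for _ in range(iters):
        p1, p2, p3 = rng.sample(points, 3)
        # Plane normal = cross product of two in-plane edge vectors.
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        n = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
        norm = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        a, b, c = n[0] / norm, n[1] / norm, n[2] / norm
        d = -(a*p1[0] + b*p1[1] + c*p1[2])
        # Count points within tol of the candidate plane (inliers).
        inliers = sum(1 for p in points
                      if abs(a*p[0] + b*p[1] + c*p[2] + d) < tol)
        if inliers > best_inliers:
            best_plane, best_inliers = (a, b, c, d), inliers
    return best_plane
```

The outliers (the "abnormal data" above) are simply never counted as inliers, so the returned plane is fitted to the valid samples only.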
In an embodiment, the assembly robot may divide the target area into grid areas. According to the grid division result and the mapping relationship between each object and the grid areas, the assembly robot may then determine the target grid area corresponding to the target object and use it as the storage position of the target object on the target device; that is, the target object may be assembled to the corresponding storage position.
In an embodiment, the assembly robot may directly identify the center point of the object within the image acquisition area. The assembling robot may be preset with feature information of the object center point, and then the assembling robot may identify the object center point in the image acquisition area according to the feature information. The characteristic information of the center point of the object may include an indicator light at the center point of the object or an infrared signal emitted from the center point of the object, and the like, which is not limited in this specification.
In an embodiment, the assembly robot may perform feature point matching on the image in the image acquisition area according to the object feature model, thereby determining at least one group of object feature points, and may then determine the object center point corresponding to each group of object feature points according to the relative position relationship between the object feature points and the object center point. The object feature model may be trained by the assembly robot in advance or transmitted to the assembly robot by another device, which is not limited in this specification. The relative position relationship between an object feature point and the object center point may be recorded in the object feature model or in the assembly robot, which is not limited in this specification. The relative position relationship may include the orientation and distance of a certain object feature point relative to the object center point, for example, an object feature point may lie x1 mm to the left of the object center point in the horizontal direction; the relative position relationship may also include the coordinate positions of the object feature point and the corresponding object center point in the same coordinate system, which is not limited in this specification. An object feature point may be a single pixel in the image or a region composed of a plurality of pixels, which is also not limited in this specification.
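As a concrete sketch of this conversion, suppose (hypothetically) that the model records a per-feature-point offset from the feature point to the object center; each matched feature point then yields one center-point estimate:

```python
# Hypothetical relative positions: feature name -> (dx, dy) offset from the
# feature point to the object center, in image units. Values are illustrative.
FEATURE_OFFSETS = {
    "feature_1": (30.0, 0.0),    # feature sits 30 units left of the center
    "feature_2": (-30.0, 0.0),   # feature sits 30 units right of the center
    "feature_3": (0.0, 15.0),
}

def centers_from_features(matched):
    """matched: dict mapping feature name -> (x, y) matched image position.
    Returns one object-center estimate per matched feature point."""
    return [(x + FEATURE_OFFSETS[name][0], y + FEATURE_OFFSETS[name][1])
            for name, (x, y) in matched.items()]
```

When the matches are accurate, all estimates coincide; when they deviate, the spread of estimates is exactly what the clustering step described later resolves.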
In an embodiment, a corresponding object feature point threshold may be preset, and each group of object feature points is converted into a corresponding object center point only when the number of object feature points in that group is not less than the threshold. When the number of object feature points in a group is less than the threshold, a device or environment abnormality may have occurred during the matching of that group, leaving too few object feature points; such a group is not converted into an object center point, which avoids introducing erroneous object center point positions. Alternatively, when the number of object feature points in a group is less than the threshold, the assembly robot may move according to a received instruction to adjust the image in the image acquisition area of the visual camera until the number of object feature points is not less than the threshold. For example, the object feature point threshold may be set to 80% of the number of object feature points included in the object feature model.
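The threshold check itself is a one-line filter. This sketch uses the 80% ratio from the example above; the group representation is a hypothetical list of matched points.

```python
def gate_feature_groups(groups, model_size, ratio=0.8):
    """Keep only feature-point groups large enough to trust.

    groups: list of groups, each a list of matched feature points.
    model_size: number of feature points in the object feature model.
    ratio: threshold fraction (0.8 per the example in the text).
    """
    threshold = ratio * model_size
    return [g for g in groups if len(g) >= threshold]
```

Groups below the threshold are dropped rather than converted, mirroring the behavior described above.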
In an embodiment, the assembling robot may compare the identified respective object center points with a preset position, and may determine an object center point at which a position matches the preset position as a target center point corresponding to the target object, for example, determine an object center point at which the position coincides with the preset position as the target center point or determine an object center point at which the position is closest to the preset position as the target center point, etc.
In an embodiment, when a corresponding object center point can be determined according to any group of object feature points, an object center point at which the position matches a preset position may be determined as a target center point corresponding to a target object.
In an embodiment, when a plurality of object center points can be determined from any group of object feature points, the assembly robot may cluster the obtained object center points to obtain the corresponding cluster centers; when the preset position is the center of the image acquisition area, the assembly robot may determine the cluster center closest to that center as the target center point corresponding to the target object. A density peak clustering algorithm may be used to cluster the object center points; of course, a k-means clustering algorithm or mean shift clustering may also be used, which is not limited in this specification. The density peak clustering algorithm clusters based on density, using high-density areas as the judgment basis. Compared with traditional methods, it is suitable for data sets of arbitrary shape, does not require the number of clusters to be set in advance, and can automatically find the cluster centers, realizing efficient clustering of arbitrarily shaped data.
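A minimal sketch of this step: group nearby center-point estimates and pick the cluster mean closest to the image center. For brevity this uses a simple greedy radius grouping rather than the density-peak algorithm named above; the radius and coordinates are assumed values.

```python
def cluster_centers(points, radius=10.0):
    """Greedy radius clustering; returns the mean of each cluster."""
    clusters = []
    for p in points:
        for c in clusters:
            # Join the first cluster whose seed point is within `radius`.
            if (p[0] - c[0][0])**2 + (p[1] - c[0][1])**2 <= radius**2:
                c.append(p)
                break
        else:
            clusters.append([p])  # start a new cluster
    return [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            for c in clusters]

def target_center(points, image_center):
    """Cluster the estimates, then choose the mean nearest the image center."""
    return min(cluster_centers(points),
               key=lambda c: (c[0] - image_center[0])**2 + (c[1] - image_center[1])**2)
```

A density-peak or k-means implementation would slot in behind the same `target_center` selection rule.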
In actual operation, the positions of the matched object feature points may deviate somewhat, so the object center points determined from the relative position relationship between the object feature points and the object center point also deviate; each group of object feature points may therefore yield multiple object center points, which the assembly robot can then cluster. Alternatively, the assembly robot may use different object feature models to perform feature point matching on the image in the image acquisition area multiple times, so that each matching yields different object feature points and multiple object center points are determined, which the assembly robot can likewise cluster.
In an embodiment, the object feature model may include feature point description information obtained in advance, such as the pixel value corresponding to a feature point, or the pixel values and pixel value patterns in the area corresponding to a feature point. The assembly robot may match the feature point description information included in the object feature model against the image actually acquired in the image acquisition area, thereby determining the corresponding object feature points in the actually acquired image.
The object feature model may contain only the feature point description information of the object edge region. The assembly robot may match this edge-region description information against the actually acquired image to determine the feature points corresponding to the object edge region. This significantly reduces the amount of computation in the matching process and improves matching efficiency; it also reduces the amount of data and computation in training the object feature model, improving training efficiency.
In one embodiment, the assembly robot may first divide the image within the image acquisition area into edge area images and non-edge area images through a pre-trained edge feature model. The object feature model may only include feature point description information of the object edge region, and the assembling robot may match the feature point description information included in the object feature model with an edge region image in the image capturing region, so that a feature point corresponding to the object edge region in the image capturing region may be determined.
The edge feature model may be a neural network model, trained as follows: object edge sample images labeled "object edge" and object non-edge sample images labeled "non-object edge" may be input. The server can then compare the predicted recognition result output by the edge feature model with the actual label of each sample image, i.e., "object edge" or "non-object edge", and adjust the corresponding parameters of the edge feature model according to the difference between the predicted and actual results, so that the model recognizes edge and non-edge regions more accurately. The edge feature model may be a Convolutional Neural Network (CNN) model, a Recurrent Neural Network (RNN) model or a Generative Adversarial Network (GAN) model, which is not limited in this specification.
In one embodiment, the object feature model may be a neural network model, and the training process of the object feature model is as follows: the object sample images may be input, where the object sample images are labeled with the distinguishing feature information of the corresponding object feature points, for example, each object sample image is labeled with "feature point No. 1", "feature point No. 2", and "feature point No. 3".
Then, the image actually captured in the image capture area is input into the trained object feature model, which may output at least one group of object feature points; for example, each group may include "feature point No. 1", "feature point No. 2" or "feature point No. 3". The object center point corresponding to each group of object feature points is then determined according to the relative position relationship between each object feature point and the object center point recorded in the object feature model. The object feature model may be a Convolutional Neural Network (CNN) model, a Recurrent Neural Network (RNN) model or a Generative Adversarial Network (GAN) model, which is not limited in this specification.
In an embodiment, the target region corresponding to the target center point can be determined according to the relative position relationship between the object center point and the object region. For example, the object region may be a rectangular region with sides of 200 mm and 100 mm whose geometric center is the object center point, so that the corresponding object region can be delimited from the object center point and the size information of the object.
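Deriving the region rectangle from the center point and the known dimensions is direct; this sketch uses the 200 mm x 100 mm example above and assumes an axis-aligned (left, top, right, bottom) convention.

```python
def region_from_center(center, width=200.0, height=100.0):
    """Return (left, top, right, bottom) for a rectangle whose geometric
    center is `center`. Dimensions are in the same units as the center."""
    cx, cy = center
    return (cx - width / 2, cy - height / 2, cx + width / 2, cy + height / 2)
```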
In an embodiment, the assembly robot may perform feature point matching on the image in the target region according to the object feature model, thereby determining at least one group of object feature points, and may then determine the object center point corresponding to each group of object feature points according to the relative position relationship between the object feature points and the object center point. This relative position relationship may be recorded in the object feature model or in the assembly robot, which is not limited in this specification. It may include the orientation and distance of a certain object feature point relative to the object center point, for example, an object feature point may lie x2 mm to the left of the object center point in the horizontal direction; it may also include the coordinate positions of the object feature point and the corresponding object center point in the same coordinate system, which is not limited in this specification. Of course, an object feature point may be a single pixel in the image or a region composed of a plurality of pixels, which is also not limited in this specification.
In an embodiment, corresponding object feature point thresholds may be preset, and each group of object feature points is converted into a corresponding object center point only when the number of object feature points in that group is not less than the threshold. When the number of object feature points in a group is less than the threshold, a device or environment abnormality may have occurred during the matching of that group, leaving too few object feature points; such a group is not converted into an object center point, which avoids introducing erroneous object center point positions.
In an embodiment, the object feature model may include feature point description information of object feature points acquired in advance, or the object feature model may also be a neural network model, a training and matching process of the object feature model is similar to the training and matching process of the object feature model, and details of implementation related to the training and matching process may refer to the above description, which is not described herein again.
In an embodiment, under the condition that a corresponding object center point can be determined according to any group of object feature points, the target area can be divided into corresponding grid areas according to the relative position relationship between the object center point and the grid areas.
In an embodiment, when a plurality of object center points can be determined from any group of object feature points, the assembly robot may cluster them to obtain the corresponding cluster centers. The assembly robot may determine the cluster center whose position matches a preset object arrangement direction as the object center point corresponding to each group of object feature points, where the object arrangement direction may include at least one of the horizontal direction and the vertical direction. For example, if the objects are arranged in a straight line in the horizontal direction, the cluster center closest to that horizontal line is selected as the object center point; if the objects are arranged in straight lines in both the horizontal and vertical directions, the cluster center closest to both lines may be selected as the object center point.
In actual operation, the positions of the matched object feature points may deviate somewhat, so the object center points determined from the relative position relationship between the object feature points and the object center point also deviate; each group of object feature points may therefore yield multiple object center points, which the assembly robot can then cluster. Alternatively, the assembly robot may use different object feature models to perform feature point matching on the image in the image acquisition area, so that each matching yields different object feature points and multiple object center points are determined, which the assembly robot can likewise cluster.
The assembly robot can divide the target area into grid areas according to the relative position relationship between the object center point and the grid areas. For example, a grid area may be preset as a rectangular area with sides of 100 mm and 20 mm, and the corresponding grid areas can then be delimited from the object center point and the size information of the grid area.
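Once the target area bounds are known, partitioning it into grid cells of the preset size (100 mm x 20 mm in the example above) is straightforward. This sketch assumes an axis-aligned (left, top, right, bottom) area in the same units as the cell size.

```python
def divide_into_grid(area, cell_w=100.0, cell_h=20.0):
    """Partition a (left, top, right, bottom) area into cell rectangles,
    row by row; partial cells at the edges are dropped."""
    left, top, right, bottom = area
    cells = []
    y = top
    while y + cell_h <= bottom + 1e-9:   # epsilon guards float round-off
        x = left
        while x + cell_w <= right + 1e-9:
            cells.append((x, y, x + cell_w, y + cell_h))
            x += cell_w
        y += cell_h
    return cells
```

Each returned cell is a candidate storage position; the mapping relationship between objects and grid areas then picks out the target cell.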
In one embodiment, the assembly robot may perform feature point matching on the image in the target region according to an array feature model, thereby determining at least one group of array feature points. The assembly robot may then divide the target region into grid areas according to the standard grid array defined in the array feature model, generating a target grid array in which the positions of the array feature points are consistent with their positions in the standard grid array.
In an embodiment, a corresponding array feature point threshold may be preset, and each group of array feature points is converted into a corresponding array center point only when the number of array feature points in that group is not less than the threshold. When the number of array feature points in a group is less than the threshold, a device or environment abnormality may have occurred during the matching of that group, leaving too few array feature points; such a group is not converted into an array center point, which avoids introducing erroneous array center point positions.
In an embodiment, the array feature model may include feature point description information of the array feature points acquired in advance, or the array feature model may also be a neural network model, a training and matching process of the array feature model is similar to the training and matching process of the object feature model, and details of implementation related to the training and matching process may also refer to the above description, which is not described herein again.
In an embodiment, when a first grid division result of the target area has been obtained according to the object feature model and a second grid division result has been obtained according to the array feature model, the assembly robot may calculate a homography matrix between the two results. The assembly robot can correct the grid division result of the target area according to the homography matrix, and the corrected result can be used to determine the target grid area corresponding to the target object. Compared with either the first or the second grid division result alone, the corrected result is more accurate, is more favorable for determining the target grid area, and improves the accuracy of the subsequent visual positioning of the target grid area. In computer vision, a homography matrix relates any two images of the same plane in space through a homography; it is commonly applied to image rectification, image stitching and camera pose estimation. Of course, other methods may be used to correct the grid division result of the target area, which is not limited in this specification.
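For illustration, the homography between the two division results can be estimated from four or more corresponding points (e.g. cell corners). The sketch below uses a basic direct linear transform (DLT) with numpy; a production system would more likely call a robust library routine such as OpenCV's findHomography.

```python
import numpy as np

def estimate_homography(src, dst):
    """src, dst: lists of >= 4 corresponding (x, y) points.
    Returns the 3x3 homography H with dst ~ H @ src (homogeneous)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Standard DLT constraint rows for one correspondence.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector is the null vector of A (smallest singular value).
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale

def apply_homography(H, pt):
    """Map a 2D point through H using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

Mapping the corners of one division result through the estimated H yields the corrected grid, as described above.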
In an embodiment, the assembling robot may perform visual positioning on the target grid area, so that the target object may be assembled to the corresponding storage position or unloaded from the corresponding storage position of the target grid area according to the visual positioning result.
Of course, the assembly robot may use a depth camera to perform visual positioning on the target grid area, and the assembly robot may also use an infrared positioning technology, an ultrasonic positioning technology, or a binocular camera spatial positioning technology to position the target grid area, which is not limited in this specification.
Fig. 7 is a flowchart of a storage position identifying method shown in this specification. As shown in fig. 7, the method may be applied to an assembly apparatus (such as the assembly robot shown in fig. 5, and the control logic is implemented by a control section or other control apparatus included in the robot); taking a target object as a server hard disk for exemplary description; the method may comprise the steps of:
at step 702, indication information is received.
In this embodiment, the assembling robot can move according to the received indication information. The indication information indicates that the target hard disk is to be inserted into the storage position numbered X001 in server F04 of cabinet 03 in machine room 5. That is, the target server number corresponding to the target hard disk is F04, and the corresponding storage position number is X001.
In this embodiment, the assembly robot can move to cabinet 03 in machine room 5 according to the indication information, and may first align the vision camera so that the lens plane of the vision camera is parallel to the plane of the servers in the server cabinet. The assembling robot can determine a server feature plane according to the Random Sample Consensus (RANSAC) algorithm and complete camera alignment according to the server feature plane equation.
In this embodiment, it is assumed that each server has a corresponding two-dimensional code identifier, and the relative position relationship between the two-dimensional code identifier and the corresponding server is that the two-dimensional code identifier is on the left side of the server, the center point of the two-dimensional code identifier and the center point of the corresponding server are on the same horizontal line, and the distance between the center point of the two-dimensional code identifier and the center point of the corresponding server is 260 mm, where the identification result of the two-dimensional code identifier includes the serial number of the server.
Then, the visual camera in the assembly robot may sequentially collect two-dimensional code identifiers corresponding to the server devices in the image collection area, and recognize the collected two-dimensional code identifiers. In the case where the recognition result of the two-dimensional code identifier is identical to the number of the target server, it may be determined that the two-dimensional code identifier is a target two-dimensional code identifier corresponding to the target server. The assembly robot can acquire the coordinate position of the acquired target two-dimensional code identifier, can determine the approximate position of the target server according to the relative position relation between the two-dimensional code identifier and the server, and can adjust the image acquisition area of the visual camera according to the approximate position of the target server so as to adjust the target server to the central position of the image acquisition area.
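The coarse-positioning arithmetic above can be sketched as follows: since the two-dimensional code identifier sits 260 mm to the left of the server center on the same horizontal line, the approximate server center in the image is the code center shifted right by 260 mm divided by the image scale. The scale value used below is hypothetical.

```python
# Sketch of deriving the approximate server position from the target
# two-dimensional code identifier; mm-per-pixel scale is hypothetical.

QR_TO_SERVER_MM = 260.0   # stated offset between code center and server center

def server_center_from_qr(qr_center_px, mm_per_px):
    x, y = qr_center_px
    return (x + QR_TO_SERVER_MM / mm_per_px, y)

def pan_to_center(server_center_px, image_center_px):
    """Pixel shift needed to bring the target server to the image center."""
    return (image_center_px[0] - server_center_px[0],
            image_center_px[1] - server_center_px[1])
```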
At step 708, a server feature model is generated.
In this embodiment, the server feature model may be generated through model training. The training process of the server feature model is as follows: server edge sample images corresponding to the server edge regions are cut out from the server sample images, and corresponding feature points are marked in each server edge sample image; the server feature model is then trained with the plurality of server edge sample images to obtain the server feature points shown in fig. 8. The server feature model contains four server feature points, namely feature point 81, feature point 82, feature point 83 and feature point 84, where feature point 81 is an indicator light region at the server edge containing a plurality of pixel points, feature point 82 is a switch region at the server edge containing a plurality of pixel points, and feature points 83 and 84 are single pixel points.
In this embodiment, a plurality of server edge sample images are used for training the server feature model, only feature points need to be marked on the server edge sample images in the training process, and feature points do not need to be marked on the whole server images, so that the data amount required in the training process and the operation amount in the training process can be greatly reduced, and the training process can be obviously simplified.
In the present embodiment, it is assumed that the relative positional relationships between the above four feature points recorded in the server feature model and the corresponding server center point are as follows:
the distance from feature point 81 to the server center point is −x3 to −x4 in the x-axis direction and +y1 to +y2 in the y-axis direction;
the distance from feature point 82 to the server center point is −x5 to −x6 in the x-axis direction and +y3 to +y4 in the y-axis direction;
the distance from feature point 83 to the server center point is +x7 in the x-axis direction and −y5 in the y-axis direction;
the distance from feature point 84 to the server center point is +x8 in the x-axis direction and +y6 in the y-axis direction.
And step 710, performing feature point matching on the image according to the server feature model.
In this embodiment, feature point matching may be performed on the server feature model and the image in the image capturing area Pic of the visual camera, and assuming that the image capturing area contains 3 images corresponding to the servers, as shown in fig. 9, 3 sets of server feature points in the image capturing area Pic are illustrated, where the 1 st set of server feature points includes feature point 911, feature point 912, feature point 913, and feature point 914, the 2 nd set of server feature points includes feature point 921, feature point 922, feature point 923, and feature point 924, and the 3 rd set of server feature points includes feature point 931, feature point 932, feature point 933, and feature point 934.
The assembly robot may determine the server center point corresponding to each group of server feature points according to the relative position relationship between the server feature points and the server center points recorded in the server feature model, as shown in fig. 9.
In this embodiment, under the condition that the number of the server center points determined according to the group 1 server feature points is multiple, the assembling robot may cluster the multiple server center points corresponding to the group 1 server feature points by using a density peak clustering algorithm to obtain corresponding clustering center points c, where the number of the obtained clustering center points may be one or more. And determining a server center point a according to the 2 nd group of server feature points, and determining a server center point b according to the 3 rd group of server feature points. Of course, the assembly robot may also cluster the server center points determined by all the 3 groups of server feature points, at this time, the cluster center point corresponding to the 2 nd group of server center points is still the server center point a, and the cluster center point corresponding to the 3 rd group of server center points is still the server center point b, which is not limited in this specification.
The assembly robot then compares the obtained cluster center point c, server center point a and server center point b with the center position of the image acquisition area, determines server center point a, which is closest to that center position, as the target center point of the target server, and obtains the position of the target center point.
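The selection step can be sketched as a nearest-point search against the center of the image acquisition area; coordinates below are hypothetical.

```python
# Sketch: among the candidate center points (cluster center c, server
# centers a and b), pick the one closest to the acquisition-area center.
import math

def pick_target_center(candidates, area_center):
    return min(candidates,
               key=lambda c: math.hypot(c[0] - area_center[0],
                                        c[1] - area_center[1]))
```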
At step 714, a target area is determined.
In this embodiment, according to the relative position relationship between the server center point and the server area, the target area 92 corresponding to the target center point, i.e. server center point a, can be determined. Assuming the server end face is a rectangular area 420 mm long and 100 mm wide, the target area 92 corresponding to the target server can be delimited by combining the target center point position with the server end face size.
Of course, the server region 91 corresponding to the cluster center c and the server region 93 corresponding to the server center b may also be determined according to the relative position relationship between the cluster center c and the server center b, and the server regions, which is not limited in this specification.
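Delimiting the target area from the target center point and the server end-face size (420 mm x 100 mm in the example) is simple rectangle arithmetic, sketched here in metric units for clarity.

```python
# Sketch of delimiting the target area 92 around a server center point.

def server_region(center, length=420.0, width=100.0):
    cx, cy = center
    return (cx - length / 2, cy - width / 2,   # top-left corner
            cx + length / 2, cy + width / 2)   # bottom-right corner
```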
In this embodiment, the hard disk feature model may be generated through model training. The training process of the hard disk feature model is as follows: corresponding hard disk feature points are marked in a plurality of hard disk sample images, where each hard disk sample image may contain the image of only a single hard disk. A hard disk feature point may be a switch area containing a plurality of pixel points, an indicator light area containing a plurality of pixel points, or a single pixel point; in addition, the hard disk feature points may cover the entire hard disk end face or only the edge area of the hard disk end face, which is not limited in this specification. The process of training and generating the hard disk feature model is similar to that of the server feature model in step 708, and is not described here again.
In this embodiment, the hard disk feature model may record a relative position relationship between a hard disk feature point included in the hard disk feature model and a corresponding hard disk center point.
And step 718, performing feature point matching on the target area according to the hard disk feature model.
And step 720, converting the hard disk feature points into hard disk center points.
In this embodiment, feature point matching may be performed on the hard disk feature model and the image in the target area 92, assuming that the target area 92 includes 12 hard disks, the hard disks are all assembled in corresponding slots, and assuming that 12 groups of hard disk feature points are determined, where a slot is a storage position corresponding to a hard disk.
In this embodiment, the assembly robot may determine the hard disk center point corresponding to each group of hard disk feature points according to the relative position relationship between the hard disk feature points and the hard disk center points recorded in the hard disk feature model.
In this embodiment, when the number of hard disk center points determined from any group of hard disk feature points is more than one, or when the determined hard disk center points do not conform to the preset hard disk arrangement direction, the assembling robot may cluster the multiple hard disk center points corresponding to that group of hard disk feature points using a density peak clustering algorithm to obtain corresponding cluster center points. The assembly robot can determine, among the cluster center points, the one whose position matches the preset hard disk arrangement direction as the hard disk center point corresponding to each group of hard disk feature points. The assembling robot can fit the obtained cluster centers using the RANSAC algorithm and/or the least squares method, so that the positions of the cluster centers match the preset hard disk arrangement direction. Of course, other algorithms may also be used to fit the cluster centers so that the hard disk center points match the preset hard disk arrangement direction, which is not limited in this specification.
Assuming that the preset hard disk arrangement direction is a straight line along the horizontal direction, the center point closest to that horizontal line is selected as the hard disk center point. Taking fig. 10 as an example, the hard disk center points determined according to the hard disk feature model are d′11, d′12, d′13 and d′14, and the dotted line L is the preset hard disk arrangement direction. The assembly robot fits the obtained center points with the RANSAC algorithm and/or the least squares method to obtain the hard disk center points d11, d12, d13 and d14 shown in fig. 10, and performs grid region division on the target region 92 according to the fitted hard disk center points.
In this embodiment, the grid area corresponding to the hard disk center point may be determined according to the relative position relationship between the hard disk center point and the grid area. As shown in fig. 11, assuming the hard disk end face is a rectangular area 100 mm long and 30 mm wide, the target area 92 may be divided into mesh areas by combining the position of each hard disk center point d with the hard disk end face size, so as to obtain a first mesh area division result A.
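The fitting and grid-division steps above can be sketched together: the detected hard disk center points are fitted to the preset horizontal arrangement line (for a horizontal line, the least-squares fit is simply the mean of the y-coordinates), and a 100 mm x 30 mm cell is laid out around each fitted center. The center data below is hypothetical.

```python
# Sketch of least-squares fitting to a horizontal arrangement line followed
# by grid-cell layout; cell size matches the 100 mm x 30 mm example.

def fit_horizontal(centers):
    """Least-squares fit of the line y = c to the center points."""
    return sum(y for _, y in centers) / len(centers)

def grid_cells(centers, cell_w=100.0, cell_h=30.0):
    y_fit = fit_horizontal(centers)
    return [(x - cell_w / 2, y_fit - cell_h / 2,
             x + cell_w / 2, y_fit + cell_h / 2) for x, _ in centers]
```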
In this embodiment, the array feature model may be generated through model training. The training process of the array feature model is as follows: array sample images corresponding to the hard disk array area are cut out from the plurality of server sample images, and corresponding array feature points are marked in each array sample image. An array feature point may be a switch area containing a plurality of pixel points, an indicator light area containing a plurality of pixel points, or the like, or a single pixel point, and some of the array feature points may coincide with the server feature points and the hard disk feature points. The process of training and generating the array feature model is similar to that of the server feature model in step 708, and is not described here again.
In this embodiment, the assembling robot may perform mesh region division on the target region 92 according to a standard mesh array defined in the array feature model, and generate a second mesh region division result B, as shown in fig. 12, where the position of each array feature point in the standard grid array is consistent with its position in the second grid area division result B.
At step 728, a homography matrix is determined.
In this embodiment, the assembly robot may obtain the first grid region division result A according to the hard disk feature model and the second grid region division result B according to the array feature model, and may calculate a homography matrix between the first grid region division result A and the second grid region division result B. The assembly robot may then correct the grid region division result of the target region 92 according to the homography matrix to obtain a corrected grid region division result C.
In this embodiment, the assembly robot may determine the target grid area corresponding to storage position number X001 from the corrected grid area division result C according to a predefined storage position sequence in the target server, where the hard disk sequence in the target server may be the number corresponding to each storage position in the target server, as shown in fig. 13. The assembly robot can perform visual positioning on the target grid area using the depth camera, so as to insert the target hard disk into, and/or pull it out of, the storage position according to the visual positioning result.
According to the above technical solution, the target object is adjusted to a preset position in the image acquisition area, the target area corresponding to the target object is determined according to the identified object center point, and the target grid area corresponding to the target object is then determined, so that the target object can be assembled to, or unloaded from, the storage position corresponding to the target grid area. In this way, the assembly robot can move accurately to the corresponding target object and position itself at the grid area corresponding to that object, allowing it to perform assembly or unloading operations accurately and thereby improving the assembly efficiency of the target object.
FIG. 14 is a flow chart of a method of assembling an object shown in the present specification. As shown in fig. 14, the method may be applied to an assembly apparatus (such as the assembly robot shown in fig. 5, and the control logic is implemented by a control section or other control apparatus included in the robot); the method may comprise the steps of:
In an embodiment, the assembly robot may perform visual positioning on the target receiving portion, and move the target object to an initial position corresponding to the target receiving portion according to the visual positioning result, so that the target object is pre-aligned with the target receiving portion. The target object may be any object that needs to be assembled, such as a server hard disk, a battery or a filter, and the target receiving portion may be any device that can receive the target object, such as a receiving groove, a receiving hole, an assembly groove or a slot, which is not limited in this specification. For example, the assembly robot may perform visual positioning on the target receiving portion according to the spatial position of the target receiving portion detected by the depth camera, so that the assembly robot can move the target object to an initial position at the target receiving portion according to the visual positioning result. The accuracy of the visual positioning performed by the depth camera is 1-2 mm; that is, the assembly robot can adjust the alignment accuracy between the target object and the target receiving portion to 1-2 mm with the depth camera. The depth camera may be a depth camera based on actively projected structured light, a passive binocular camera, a depth camera based on time of flight, and the like, which is not limited in this specification.
In one embodiment, the assembly robot may initially position the target receiving portion, and move the target object to an initial position corresponding to the target receiving portion according to a result of the initial positioning. The manner of initial positioning by the assembly robot may include: the positioning is performed by using an infrared positioning technology, an ultrasonic positioning technology, or a binocular camera space positioning technology, which is not limited in this specification.
In one embodiment, the force detection part in the assembly robot may acquire force information of the target object during the process of inserting the target object into the target receiving part from the initial position. The force receiving information may include information such as a contact force between the target object and the target accommodating portion and a moment of force acting in response to the contact force, which is not limited in this specification.
In an embodiment, the assembling robot may perform posture adjustment on the target object according to the acquired stress information, where the posture adjustment may include moving the target object by a corresponding adjustment distance along an adjustment direction, where the adjustment direction may include a horizontal axis direction, a vertical axis direction, or a preset direction, and the like, which is not limited in this specification.
In an embodiment, in the case that the contact force of the target object is not less than the contact force threshold or the acting torque is not less than the torque threshold, indicating that the target object is not aligned with the target accommodating portion, the assembling robot may move the target object by a corresponding adjustment distance in the adjustment direction. And under the condition that the contact force is smaller than the contact force threshold value and the acting torque is smaller than the torque threshold value, the target object is indicated to be aligned to the target accommodating part, so that severe collision cannot occur between the target object and the target accommodating part, the contact force and the acting torque are all in a safe range, the assembling robot can insert the target object into the target accommodating part by a fixed distance, and the fixed distance can be set according to actual needs.
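The per-step decision described above can be sketched as a simple threshold check: advance by the fixed distance while contact force and acting torque stay under their thresholds, otherwise switch to posture adjustment. The threshold values below are hypothetical.

```python
# Sketch of the insertion-step decision; threshold values are hypothetical.

F_THRESHOLD = 5.0    # contact force threshold, N (hypothetical)
M_THRESHOLD = 0.3    # acting torque threshold, N*m (hypothetical)

def insertion_step(contact_force, acting_torque):
    if contact_force < F_THRESHOLD and acting_torque < M_THRESHOLD:
        return "advance"      # aligned: insert by the fixed distance
    return "adjust"           # misaligned: adjust posture first
```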
In an embodiment, the assembly robot may also establish a coordinate system in advance, for example, for convenience of subsequent calculation, the assembly robot may establish a three-dimensional coordinate system in advance, and adjust a plane coordinate system composed of vertical coordinates and horizontal coordinates of the three-dimensional coordinate system to be parallel to a front surface of a target device to which the target receiving portion belongs, and the like. Of course, a multidimensional coordinate system may be established according to actual needs, and the present description is not limited thereto.
In an embodiment, when the contact force of the target object is not less than the contact force threshold or the acting torque is not less than the torque threshold, the assembly robot may input the obtained contact force, acting torque and the like into a preset zero moment point calculation formula to obtain the zero moment point position of the target object. Since the clamping portion of the assembly robot grips the target object at a preset position, the assembly robot can also directly calculate the position of the geometric center point of the target object at that moment.
In an embodiment, the assembly robot may compare the zero moment point position and the center point position of the target object, and may move the target object according to the zero moment point position when the offset distance between the zero moment point position and the center point position of the target object in the adjustment direction is not less than the offset threshold; in the case where the offset distance between the zero moment point position and the center point position of the target object in the adjustment direction is smaller than the offset threshold, the assembly robot does not need to move the target object in the adjustment direction.
In an embodiment, the assembling robot may obtain a target component force of the contact force of the target object in the adjustment direction, and in a case that the target component force is not less than a component force threshold, the assembling robot may move the target object according to the target component force; in the case where the target component force is smaller than the component force threshold value, the assembly robot does not need to move the target object in the adjustment direction according to the target component force.
In an embodiment, the assembling robot may obtain the offset distance between the zero moment point position and the center point position in the adjustment direction and the target component force of the contact force of the target object in the adjustment direction. The assembling robot may determine a moment moving direction of the target object according to the relative relationship between the zero moment point position and the center point position; for example, the moment moving direction may be the direction pointing from the center point position of the target object to the zero moment point position. The assembling robot may also determine a component force moving direction of the target object according to the target component force; for example, the component force moving direction may be the direction in which the target component force points. When the moment moving direction matches the component force moving direction, i.e. the two directions are consistent, the assembling robot can move the target object according to the zero moment point position; when the moment moving direction does not match the component force moving direction, i.e. the two directions are inconsistent, the assembling robot can move the target object according to the target component force.
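A one-dimensional sketch (along a single adjustment axis) of the cue-selection logic above: compare the sign of the zero-moment-point offset against the sign of the component force, and fall back to whichever cue exceeds its threshold. The thresholds are hypothetical.

```python
# 1-D sketch of choosing between the zero-moment-point cue and the
# component-force cue along one adjustment axis; thresholds hypothetical.
# zmp_offset = zero moment point position minus center point position.

def choose_cue(zmp_offset, component_force, offset_th=0.5, force_th=2.0):
    moment_dir = (zmp_offset > 0) - (zmp_offset < 0)
    force_dir = (component_force > 0) - (component_force < 0)
    big_offset = abs(zmp_offset) >= offset_th
    big_force = abs(component_force) >= force_th
    if big_offset and big_force:
        # directions agree -> move by the zero moment point, else by the force
        return "zmp" if moment_dir == force_dir else "force"
    if big_offset:
        return "zmp"
    if big_force:
        return "force"
    return "none"              # no move needed along this axis
```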
In an embodiment, when the assembly robot moves the target object according to the zero moment point position, the adjustment distance corresponding to the target object is an offset distance between the zero moment point position and the center point position in the adjustment direction, and the adjustment direction may be a direction from the center point position of the target object to the zero moment point position.
In an embodiment, when the assembling robot moves the target object according to the target component force, the assembling robot may determine the adjustment distance corresponding to the target component force according to a mapping relationship between component force and adjustment distance, and the adjustment direction may be the direction of the target component force.
In an embodiment, the adjustment direction may be an axial direction along the established coordinate system, wherein the adjustment direction may include at least one of: positive direction along the horizontal axis, negative direction along the horizontal axis, positive direction along the vertical axis, negative direction along the vertical axis, and the like.
In an embodiment, if the offset distance between the zero moment point position and the center point position of the target object in every adjustment direction is smaller than the offset threshold, the target component force of the contact force in every adjustment direction is smaller than the component force threshold, and the assembly robot still cannot successfully insert the target object into the corresponding target accommodating portion, this indicates an abnormal condition. The assembling robot may then perform trial adjustment on the target object according to a preset adjustment direction and the preset adjustment distance corresponding to that direction, continue to insert the trial-adjusted target object into the target accommodating portion, acquire the stress information of the target object, and re-determine whether to continue posture adjustment according to the stress information, until the target object is successfully inserted into the corresponding target accommodating portion. The preset adjustment direction and the corresponding preset adjustment distance may both be set according to actual needs, which is not limited in this specification.
For example, the assembling robot may move the target object by 0.1 mm in the positive direction of the horizontal axis and then slowly insert it into the target receiving portion; at this point the force information of the target object can be obtained, and whether to perform posture adjustment is determined according to the force information as described above. Suppose the force information still shows that the offset distances between the zero moment point position and the target object center point position in every adjustment direction are smaller than the offset threshold and that the target component force of the contact force in every adjustment direction is smaller than the component force threshold, yet the target object still cannot be inserted into the target receiving portion. The assembling robot may then move the target object by 0.1 mm in the negative direction of the vertical axis, continue to slowly try to insert it into the target receiving portion, obtain the force information of the target object, and again determine whether to perform posture adjustment according to the force information as described above, until the target object can be successfully inserted into the target receiving portion.
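The trial-adjustment loop in this example can be sketched as follows; `try_insert` is a hypothetical stand-in for the real force-guided insertion attempt, and the trial moves are the hypothetical 0.1 mm steps from the example.

```python
# Sketch of the trial-adjustment loop: apply preset trial moves one after
# another and retry the slow insertion after each move.

def trial_adjust(try_insert, trial_moves):
    x = y = 0.0
    for dx, dy in trial_moves:
        x += dx            # e.g. +0.1 mm along the horizontal axis
        y += dy            # e.g. -0.1 mm along the vertical axis
        if try_insert(x, y):
            return (x, y)  # offset at which insertion succeeded
    return None            # all trial moves exhausted: report abnormality
```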
In an embodiment, the assembly robot may further detect the depth to which the target object has been inserted into the target accommodating portion, and when the depth is not less than a depth threshold, the assembly robot may stop performing posture adjustment on the target object according to the force information. That is, when the assembly robot detects that the insertion depth of the target object is not less than the depth threshold, indicating that the target object is accurately positioned and can be successfully inserted into the corresponding target receiving portion, the subsequent insertion can simply proceed along the target receiving portion. The assembly robot may detect the insertion depth after each posture adjustment of the target object, or may detect it according to a preset time period, and the like, which is not limited in this specification.
Of course, in the technical solution of this specification, the target object may be a server hard disk, in which case the corresponding target receiving portion may be a slot for the server hard disk. The process of assembling the server hard disk is similar to the above embodiments, and the related implementation details may refer to those embodiments, so they are not described again here.
For the convenience of understanding, the technical solutions in the present specification are further described below with reference to the accompanying drawings. Referring to fig. 15, fig. 15 is a flowchart of an object assembling method according to an exemplary embodiment of the present disclosure. Taking a target object as a server hard disk for example description, the method is applied to an assembly device (such as an assembly robot shown in fig. 5, and control logic is implemented by a control part included in the robot or other control devices); the method may comprise the steps of:
In this embodiment, the assembly robot may perform visual positioning on the target slot according to the spatial position information of the target slot detected by the depth camera, and then drive the clamping portion to move the target hard disk to the initial position of the target slot according to the visual positioning result. The accuracy of the visual positioning performed by the depth camera is 1-2 mm, so the assembly robot can adjust the alignment accuracy between the target hard disk and the target slot to 1-2 mm. The clearance of the relative position between the target hard disk and the target slot is no more than 0.5 mm; only when the clearance between the target hard disk and the target slot is less than 0.5 mm is the target hard disk aligned with the target slot, such that the target hard disk can be smoothly inserted into the target slot without a violent collision between them.
At step 1506, stress information is detected.
In this embodiment, with the target hard disk in the initial position, the assembly robot may drive the clamping component to continuously and slowly move the target hard disk along the axial direction of the target slot by a fixed distance to approach the target slot, so that the target hard disk may gradually approach and be inserted into the target slot. Assuming that the force detection part in the assembly robot is a six-dimensional force sensor, the six-dimensional force sensor can detect the stress information of the target hard disk in each process of slowly moving for a fixed distance, wherein the stress information comprises the contact force F between the target hard disk and the target slot and the corresponding acting moment M. The fixed distance may be set according to actual conditions of the target hard disk and the target slot, which is not limited in this specification.
In this embodiment, because the target hard disk approaches the target slot slowly and continuously, no severe collision occurs between them and the contact force remains within a safety range. A safety stress limit exists between the target hard disk and the target slot, and the speed at which the target hard disk is inserted needs to be adjusted accordingly, so that both the contact force F and the acting moment M stay within the safety range and neither the target hard disk nor the target slot is damaged.
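The fixed-distance, force-guarded advance described above can be sketched as follows; the function names, the sensor interface, and the control granularity are illustrative assumptions, not details given in the specification:

```python
def insert_in_steps(total_depth, step, read_force, force_limit):
    """Advance the hard disk toward the slot in fixed increments, reading the
    six-dimensional force sensor before each planned increment; stop moving
    as soon as the contact force leaves the safety range.
    (All names and limits are illustrative assumptions.)"""
    depth = 0.0
    while depth < total_depth:
        if read_force(depth) >= force_limit:
            return depth, False  # abort: contact force outside safety range
        depth = min(depth + step, total_depth)  # one fixed-distance step
    return depth, True
```

A quiet insertion (constant 0.5 N reading against a 2 N limit) runs to full depth, while a jammed one aborts at the depth where the limit was hit.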
At step 1508, it is determined whether the contact force is less than the contact force threshold and whether the acting moment is less than the moment threshold.
In this embodiment, the assembly robot may determine whether the contact force F in the detected stress information is smaller than a preset contact force threshold and whether the acting moment M is smaller than a moment threshold. The contact force threshold and the moment threshold may be preset in the assembly robot or sent to the assembly robot by another device, which is not limited in this specification.
If the contact force F in the detected stress information is smaller than the contact force threshold and the acting moment M is smaller than the moment threshold, the target hard disk can be considered aligned with the target slot; that is, the target hard disk can enter the target slot smoothly with no contact force, or only a slight one (for example, friction between the outer surface of the target hard disk and the inner surface of the target slot). The process then proceeds to step 1516.
If the contact force F in the detected stress information is not less than the contact force threshold, or the acting moment M is not less than the moment threshold, the target hard disk is not aligned with the target slot; that is, a large contact force is generated between the outer surface of the target hard disk and the inner surface of the target slot, and the assembly robot needs to move the target hard disk by a corresponding adjustment distance along an adjustment direction. The process then proceeds to step 1510.
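The decision at step 1508 reduces to a dual-threshold check; a minimal sketch, with all names and values illustrative:

```python
def is_aligned(contact_force, moment, force_threshold, moment_threshold):
    """Step 1508 as described: the hard disk counts as aligned only if both
    the contact force F and the acting moment M are below their thresholds."""
    return contact_force < force_threshold and moment < moment_threshold
```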
In this embodiment, the assembly robot may pre-establish a three-dimensional coordinate system, such as the xyz axes shown in fig. 16, where the x axis is the horizontal axis, the y axis is the vertical axis, and the z axis is the axis along the target slot. The adjustment direction of the target hard disk may therefore be the positive or negative direction along the horizontal axis, or the positive or negative direction along the vertical axis. Of course, the orientation of the coordinate system may also be set according to actual needs, which is not limited in this specification.
In this embodiment, the assembly robot may input the acquired contact force F, the acting moment M, the coordinates of the center of mass of the target hard disk in the coordinate system, etc. into a preset zero moment point calculation formula, so as to calculate the zero moment point position (X1, Y1, Z1) of the target hard disk. Assuming that the assembly robot always clamps the center of the two side vertical surfaces of the target hard disk and has acquired in advance the size information of the target hard disk, i.e., its length, width, and height, the assembly robot can also calculate the current geometric center point position (X2, Y2, Z2) of the target hard disk.
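The specification does not give its zero moment point formula. One conventional form for a wrist force/torque reading, shown here purely as an illustrative assumption, is X1 = (Fx·z − My)/Fz and Y1 = (Fy·z + Mx)/Fz in the sensor plane:

```python
def zero_moment_point(F, M, z_plane=0.0):
    """Compute (X1, Y1) from force (Fx, Fy, Fz) and moment (Mx, My, Mz).
    This is one textbook ZMP formula, assumed for illustration only; the
    patent's 'preset zero moment point calculation formula' is not given."""
    Fx, Fy, Fz = F
    Mx, My, Mz = M
    if abs(Fz) < 1e-9:
        raise ValueError("axial force too small to define a zero moment point")
    X1 = (Fx * z_plane - My) / Fz
    Y1 = (Fy * z_plane + Mx) / Fz
    return X1, Y1
```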
In this embodiment, the assembly robot may further decompose the obtained contact force F into the component forces Fx, Fy, and Fz along the x-axis, y-axis, and z-axis directions.
In this embodiment, the assembly robot may obtain the offset distance (X1-X2) between the zero moment point position and the center point position of the target hard disk in the horizontal axis direction, assuming an offset threshold of 0.5 mm, and may further obtain the component force Fx of the target hard disk in the horizontal axis direction, assuming a component force threshold of 2 N in the horizontal axis direction.
In this embodiment, if the absolute value of the offset distance (X1-X2) is not less than 0.5 mm, or the magnitude of Fx is not less than 2 N, the position of the target hard disk in the horizontal axis direction is incorrect. The assembly robot first slowly retracts the target hard disk along the axial direction of the target slot by the fixed distance, and then adjusts the position of the target hard disk in the horizontal axis direction.
The assembly robot determines the moment moving direction of the target hard disk according to the relative relationship between the zero moment point position and the center point position, and determines the component force moving direction according to the component force Fx. When the two directions match, the assembly robot moves the target hard disk according to the zero moment point position; when they do not match, it moves the target hard disk according to the component force.
For example, if the offset distance (X1-X2) is -0.8 mm, the target hard disk needs to be moved in the negative direction of the horizontal axis; that is, the moment moving direction is the negative horizontal direction. If the component force Fx also points in the negative horizontal direction, the component force moving direction is likewise the negative horizontal direction. The two directions are then consistent, and the assembly robot moves the target hard disk by 0.8 mm in the negative direction of the horizontal axis according to the zero moment point position.
If the offset distance (X1-X2) is +0.8 mm, the target hard disk needs to be moved in the positive direction of the horizontal axis; that is, the moment moving direction is the positive horizontal direction. If the component force Fx points in the negative horizontal direction, the component force moving direction is the negative horizontal direction. The two directions are then inconsistent, and the assembly robot moves the target hard disk according to the component force Fx. The assembly robot may maintain in advance a mapping relationship between component force and moving distance: for example, when Fx is +1 N, move the target hard disk by 0.1 mm in the positive horizontal direction; when Fx is +2 N, by 0.2 mm in the positive horizontal direction; when Fx is -1 N, by 0.1 mm in the negative horizontal direction; and so on. Assuming Fx is -1 N at this moment, the assembly robot determines from the mapping relationship to move the target hard disk by 0.1 mm in the negative horizontal direction.
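The horizontal-axis decision rule and the example force-to-distance mapping above can be sketched as follows; a linear 0.1 mm-per-newton mapping is assumed as a generalization of the tabulated example, and the threshold checks that gate this adjustment are omitted for brevity:

```python
def move_by_component_force(fx, mm_per_newton=0.1):
    """Signed horizontal move (mm) for component force fx (N), following the
    example mapping (1 N -> 0.1 mm, 2 N -> 0.2 mm, sign gives direction)."""
    return fx * mm_per_newton

def choose_move(offset, fx, mm_per_newton=0.1):
    """Move by the zero-moment-point offset (X1 - X2) when its direction
    agrees with the component-force direction; otherwise fall back to the
    force-to-distance mapping, as the embodiment describes."""
    moment_dir = -1 if offset < 0 else 1  # sign of (X1 - X2)
    force_dir = -1 if fx < 0 else 1
    if moment_dir == force_dir:
        return offset                     # trust the zero moment point
    return move_by_component_force(fx, mm_per_newton)
```

With the text's examples: an offset of -0.8 mm and Fx of -1 N agree, yielding a -0.8 mm move; an offset of +0.8 mm with Fx of -1 N disagree, yielding a -0.1 mm move.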
In this embodiment, when the moment moving direction is consistent with the component force moving direction, the calculated zero moment point position can be considered accurate and reliable, so the assembly robot moves the target hard disk according to the zero moment point position, bringing it close to its correct position on the horizontal axis. When the two directions are inconsistent, the calculated zero moment point position has a large error, so the assembly robot instead moves the target hard disk according to the component force, which still brings it closer to its correct position on the horizontal axis.
In this embodiment, if the absolute value of the offset distance (X1-X2) is smaller than the offset threshold of 0.5 mm and the magnitude of Fx is smaller than the component force threshold of 2 N, the position of the target hard disk on the horizontal axis is correct and no horizontal adjustment is needed, so the assembly robot does not move the target hard disk in the horizontal axis direction.
In this embodiment, the assembly robot may similarly obtain the offset distance (Y1-Y2) between the zero moment point position and the center point position of the target hard disk in the vertical axis direction, assuming an offset threshold of 0.5 mm, and may further obtain the component force Fy of the target hard disk in the vertical axis direction, assuming a component force threshold of 2 N in the vertical axis direction.
If the absolute value of the offset distance (Y1-Y2) is not less than 0.5 mm, or the magnitude of Fy is not less than 2 N, the position of the target hard disk in the vertical axis direction is incorrect. The assembly robot first slowly retracts the target hard disk along the axial direction of the target slot by the fixed distance, and then adjusts the position of the target hard disk in the vertical axis direction. This adjustment is similar to the adjustment in the horizontal axis direction, and its implementation details may refer to that process; they are not repeated here.
If the absolute value of the offset distance (Y1-Y2) is less than 0.5 mm and the magnitude of Fy is less than 2 N, the target hard disk is correctly positioned in the vertical axis direction and needs no vertical adjustment, so the assembly robot does not move it in the vertical axis direction.
In this embodiment, after moving the target hard disk in the horizontal axis direction and/or moving the target hard disk in the vertical axis direction, the process proceeds to step 1504.
In this embodiment, when the contact force F in the detected stress information is smaller than the contact force threshold and the acting moment M is smaller than the moment threshold, the assembly robot may further detect the depth to which the target hard disk is inserted into the target slot. If the detected depth is smaller than the depth threshold H, the assembly robot continues to adjust the posture of the target hard disk; if the depth is not less than the depth threshold H, the assembly robot stops adjusting the posture of the target hard disk according to the stress information, and the process proceeds to step 1518.
At step 1518, the target hard disk is inserted along the axial direction of the target slot.
When the assembly robot detects that the depth of the target hard disk inserted into the target slot is not less than the depth threshold H, the target hard disk is accurately positioned and can be inserted successfully into the corresponding target slot. The subsequent insertion can directly use the target slot itself as a guide reference, without further posture adjustment of the target hard disk.
In this embodiment, if the offset distances between the zero moment point position and the center point position of the target hard disk in every adjustment direction are smaller than the offset threshold, and the component forces of the contact force in every adjustment direction are smaller than the component force threshold, yet the assembly robot still cannot insert the target hard disk into the corresponding target slot, this indicates an abnormal condition in the target hard disk or between the target hard disk and the target slot. The assembly robot may then perform a trial adjustment on the target hard disk along a preset adjustment direction by the preset adjustment distance corresponding to that direction, attempt again to insert the trial-adjusted target hard disk into the target slot, obtain the stress information of the target hard disk, and adjust its posture according to the stress information, repeating until the target hard disk is successfully inserted into the corresponding target slot. The preset adjustment direction and the corresponding preset adjustment distance may both be set according to actual needs.
For example, the assembly robot may move the target hard disk by 0.1 mm in the positive direction of the horizontal axis and then slowly insert it into the target slot. If the target hard disk still cannot be inserted, while the offset distances between its zero moment point position and its center point position in every adjustment direction remain smaller than the offset threshold and the component forces of the contact force in every adjustment direction remain smaller than the component force threshold, the assembly robot rotates the previous adjustment action by 90 degrees clockwise to obtain a new adjustment direction, namely the negative direction of the vertical axis. It then moves the target hard disk by 0.1 mm in that direction, slowly inserts it into the target slot again, obtains the stress information, and adjusts the posture accordingly. The spiral search continues in this manner until the target hard disk is successfully inserted into the corresponding target slot. Of course, the previous adjustment action may instead be rotated counterclockwise by a preset angle to obtain a new adjustment direction, and so on, which is not limited in this specification.
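The clockwise spiral search in the example can be sketched as follows; the direction encoding (x horizontal, y vertical, following fig. 16) and the number of attempts are illustrative assumptions:

```python
def spiral_search_directions(start=(1, 0), turns=8):
    """Generate trial adjustment directions by rotating the previous action
    90 degrees clockwise each attempt, as in the example: +x, then -y,
    then -x, then +y, and so on. Each tuple is a unit step (dx, dy)."""
    dx, dy = start
    out = []
    for _ in range(turns):
        out.append((dx, dy))
        dx, dy = dy, -dx  # 90-degree clockwise rotation in the x-y plane
    return out
```

Scaling each direction by the preset adjustment distance (0.1 mm in the example) gives the trial move for each attempt.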
According to the above technical solution, the stress information of the target object is detected while the target object is inserted into the target slot from the initial position, and the posture of the target object is adjusted according to the stress information so that the target object is aligned with and inserted into the target accommodating portion. The target object can thus be inserted safely and accurately, damage to the target object and the target accommodating portion from collisions can be avoided, fully automatic insertion and extraction of the target object can be achieved by the assembly robot, the assembly efficiency can be improved, and the assembly cost can be reduced.
Fig. 17 shows a schematic structural diagram of an electronic device according to an exemplary embodiment of the present specification. Referring to fig. 17, at the hardware level, the electronic device includes a processor 1702, an internal bus 1704, a network interface 1706, a memory 1708, and a nonvolatile memory 1710, and may also include hardware required for other services. The processor 1702 reads the corresponding computer program from the nonvolatile memory 1710 into the memory 1708 and then runs it, forming the storage position identification apparatus at the logical level. Of course, besides the software implementation, the present specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or logic devices.
Referring to fig. 18, in a software implementation, the storage position identification apparatus may include a first target area determination unit 1802 and a first grid area determining unit 1804. Wherein:
a first target area determination unit 1802, configured to determine a target area corresponding to a target device to which a target object belongs;
a first grid area determining unit 1804, configured to perform grid area division on the target area, and determine a target grid area corresponding to the target object according to the grid area division result and a mapping relationship between objects and grid areas, to serve as the corresponding storage position of the target object on the target device.
Optionally, the first target area determination unit 1802 is specifically configured to:
identifying a target central point corresponding to the target area in the acquired visual image area;
and determining the object area corresponding to the target central point as a target area corresponding to the target object.
Optionally, the first target area determination unit 1802 is specifically configured to:
adjusting the target object to a preset position of the visual image area;
and identifying the center point of the object in the visual image area, and determining the center point of the object, of which the position is matched with the preset position, in the visual image area as the target center point corresponding to the target object.
Optionally, the first target area determination unit 1802 is specifically configured to:
sequentially identifying the bar code identifications corresponding to the objects in the visual image area to determine target bar code identifications corresponding to the target objects, wherein the identification results of the target bar code identifications are matched with preset description information of the target objects;
and adjusting the target object to the preset position of the visual image area according to the position of the target bar code identifier in the visual image area and the relative position relation between the target bar code identifier and the target object.
Optionally, the first target area determination unit 1802 is specifically configured to:
performing feature point matching on the image in the visual image region according to the object feature model to determine at least one group of object feature points;
and determining the object center points corresponding to each group of object feature points according to the relative position relationship between the object feature points and the object center points.
Optionally, the object feature model is used to identify feature points of an edge region of the object.
Optionally, the preset position includes a central position of the visual image area; the first target area determination unit 1802 is specifically configured to:
under the condition that a plurality of object center points are determined according to any group of object feature points, clustering the object center points to obtain corresponding clustering center points;
determining a cluster center point closest to the central location as a target center point corresponding to the target object.
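The cluster-center selection described above can be sketched minimally as follows; the Euclidean distance metric and the data layout are assumptions, since the specification does not fix them:

```python
import math

def target_center_point(cluster_centers, image_center):
    """Pick the cluster center point closest to the central position of the
    visual image area, as the determining unit describes.
    cluster_centers: iterable of (x, y) tuples; image_center: (x, y)."""
    return min(cluster_centers, key=lambda c: math.dist(c, image_center))
```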
Optionally, the first grid area determining unit 1804 is specifically configured to:
performing feature point matching on the image in the target area according to the object feature model to determine at least one group of object feature points;
determining object center points corresponding to each group of object feature points according to the relative position relationship between the object feature points and the object center points;
and carrying out grid area division on the target area according to the relative position relation between the object center point and the grid area.
Optionally, the first grid area determining unit 1804 is specifically configured to:
under the condition that a plurality of object center points are determined according to any group of object feature points, clustering the object center points to obtain corresponding clustering center points;
and determining the clustering center point of which the position is matched with the preset object arrangement direction as an object center point corresponding to each group of object feature points.
Optionally, the object arrangement direction includes at least one of: horizontal direction, vertical direction.
Optionally, the first grid area determining unit 1804 is specifically configured to:
performing characteristic point matching on the image in the target area according to the array characteristic model to determine at least one group of array characteristic points;
carrying out grid area division on the target area according to a standard grid array defined in the array characteristic model to generate a target grid array; wherein the corresponding positions of the array feature points in the target grid array coincide with the corresponding positions in the standard grid array.
Optionally, the apparatus further includes:
a first calculating unit 1806, configured to calculate a homography matrix between a first grid area division result and a second grid area division result of the target area when the first grid area division result of the target area is obtained according to the object feature model and the second grid area division result of the target area is obtained according to the array feature model;
a first correcting unit 1808, configured to correct a grid region division result of the target region according to the homography matrix, where the corrected grid region division result is used to determine a target grid region corresponding to the target object.
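Applying the homography correction described above can be sketched as follows; estimating the 3x3 matrix itself (for example with OpenCV's findHomography from matched division points) is outside this sketch, and the NumPy-based point layout is an assumption:

```python
import numpy as np

def correct_points(points, H):
    """Map 2D grid points through a 3x3 homography H (homogeneous mapping),
    as the correcting unit would use to reconcile the grid area division
    results from the object feature model and the array feature model."""
    pts = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    mapped = pts @ np.asarray(H, float).T   # apply H to homogeneous points
    return mapped[:, :2] / mapped[:, 2:3]   # divide out the scale factor
```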
Optionally, the apparatus further includes:
the first visual positioning unit 1810 is configured to visually position the target grid area, so as to assemble the target object to a storage position corresponding to the target grid area according to a visual positioning result, or unload the target object from the storage position corresponding to the target grid area.
Referring to fig. 19, in a software implementation, the storage position identifying apparatus may include a second target area determining unit 1902 and a second grid area determining unit 1904. Wherein:
a second target area determining unit 1902, configured to determine a target area corresponding to a target server to which a target hard disk belongs;
a second grid area determining unit 1904, configured to perform grid area division on the target area, and determine, according to the grid area division result and a mapping relationship between hard disks and grid areas, a target grid area corresponding to the target hard disk, as the corresponding storage position of the target hard disk on the target server.
Optionally, the second target area determining unit 1902 is specifically configured to:
identifying a target central point corresponding to the target area in the acquired visual image area;
and determining the server area corresponding to the target central point as the target area corresponding to the target server.
Optionally, the second target area determining unit 1902 is specifically configured to:
adjusting the target server to a preset position of the visual image area;
and identifying server central points in the visual image area, and determining the server central point whose position matches the preset position as the target central point corresponding to the target server.
Optionally, the second target area determining unit 1902 is specifically configured to:
sequentially identifying the bar code identifications corresponding to the servers in the visual image area to determine target bar code identifications corresponding to the target servers, wherein the identification results of the target bar code identifications are matched with preset description information of the target servers;
and adjusting the target server to the preset position of the visual image area according to the position of the target bar code identifier in the visual image area and the relative position relation between the target bar code identifier and the target server.
Optionally, the second target area determining unit 1902 is specifically configured to:
performing feature point matching on the image in the visual image area according to a server feature model to determine at least one group of server feature points;
and determining the server central points corresponding to the characteristic points of each group of servers according to the relative position relationship between the characteristic points of the servers and the server central points.
Optionally, the server feature model is used to identify feature points of the server edge area.
Optionally, the preset position includes a central position of the visual image area; the second target area determining unit 1902 is specifically configured to:
under the condition that a plurality of server central points are determined according to any group of server characteristic points, clustering the plurality of server central points to obtain corresponding clustering central points;
determining a cluster center point closest to the central location as a target center point corresponding to the target server.
Optionally, the second grid area determining unit 1904 is specifically configured to:
performing feature point matching on the image in the target area according to the hard disk feature model to determine at least one group of hard disk feature points;
determining hard disk central points corresponding to each group of hard disk characteristic points according to the relative position relationship between the hard disk characteristic points and the hard disk central points;
and carrying out grid area division on the target area according to the relative position relation between the central point of the hard disk and the grid area.
Optionally, the second grid area determining unit 1904 is specifically configured to:
under the condition that a plurality of hard disk central points are determined according to any group of hard disk characteristic points, clustering the plurality of hard disk central points to obtain corresponding clustering central points;
and determining the cluster central point of which the position is matched with the preset hard disk arrangement direction as the hard disk central point corresponding to each group of hard disk characteristic points.
Optionally, the hard disk arrangement direction includes at least one of: horizontal direction, vertical direction.
Optionally, the second grid area determining unit 1904 is specifically configured to:
performing characteristic point matching on the image in the target area according to the array characteristic model to determine at least one group of array characteristic points;
carrying out grid area division on the target area according to a standard grid array defined in the array characteristic model to generate a target grid array; wherein the corresponding positions of the array feature points in the target grid array coincide with the corresponding positions in the standard grid array.
Optionally, the apparatus further includes:
a second calculating unit 1906, configured to calculate a homography matrix between a first grid area division result and a second grid area division result of the target area when the first grid area division result of the target area is obtained according to the hard disk feature model and the second grid area division result of the target area is obtained according to the array feature model;
a second correcting unit 1908, configured to correct the grid area division result of the target area according to the homography matrix, where the corrected grid area division result is used to determine a target grid area corresponding to the target hard disk.
Optionally, the apparatus further includes:
a second visual positioning unit 1910, configured to visually position the target grid area, so as to assemble the target hard disk to a storage position corresponding to the target grid area according to a visual positioning result, or unload the target hard disk from the storage position corresponding to the target grid area.
Fig. 20 shows a schematic structural diagram of an electronic device according to an exemplary embodiment of the present specification. Referring to fig. 20, at the hardware level, the electronic device includes a processor 2002, an internal bus 2004, a network interface 2006, a memory 2008, and a nonvolatile memory 2010, and may also include hardware required for other services. The processor 2002 reads the corresponding computer program from the nonvolatile memory 2010 into the memory 2008 and then runs it, forming the object assembling apparatus at the logical level. Of course, besides the software implementation, the present specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or logic devices.
Referring to fig. 21, in a software implementation, the object assembling apparatus may include a first positioning unit 2102, a first acquiring unit 2104, and a first adjusting unit 2106. Wherein:
a first positioning unit 2102 configured to perform visual positioning on a target accommodating portion, and move a target object to an initial position of the target accommodating portion according to a visual positioning result;
a first acquiring unit 2104, configured to acquire stress information of the target object in the process of inserting the target object into the target accommodating portion from the initial position;
a first adjusting unit 2106, configured to perform posture adjustment on the target object according to the stress information, so that the target object is aligned with and inserted into the target accommodating portion.
Optionally, the stress information includes a contact force between the target object and the target accommodating portion and an acting moment corresponding to the contact force.
Optionally, the first adjusting unit 2106 is specifically configured to:
under the condition that the contact force is not smaller than a contact force threshold value or the acting moment is not smaller than a moment threshold value, moving the target object by a corresponding adjusting distance along an adjusting direction;
Optionally, the apparatus further includes:
a first insertion unit 2108, configured to insert the target object into the target accommodating portion if the contact force is less than the contact force threshold and the acting moment is less than the moment threshold.
Optionally, the first adjusting unit 2106 is specifically configured to:
acquiring a zero moment point position corresponding to the target object and an offset distance of a central point position of the target object in an adjusting direction, and moving the target object according to the zero moment point position under the condition that the offset distance is not smaller than an offset threshold value; or,
acquiring a target component force of the contact force in the adjusting direction, and moving the target object according to the target component force under the condition that the target component force is not smaller than a component force threshold; or,
under the condition that the offset distance is not less than an offset threshold or the target component force is not less than a component force threshold, if the moment moving direction of the target object determined according to the relative relation between the zero moment point position and the central point position is matched with the component force moving direction of the target object determined according to the target component force, moving the target object according to the zero moment point position; and if the moment moving direction is not matched with the component force moving direction, moving the target object according to the target component force.
Optionally, the first adjusting unit 2106 is specifically configured to:
under the condition that the target object is moved according to the zero moment point position, the adjusting distance is the offset distance, and the adjusting direction is the direction from the central point position of the target object to the zero moment point position;
and under the condition that the target object is moved according to the target component force, the adjustment distance is the distance corresponding to the target component force in a mapping relation between component force and adjustment distance, and the adjustment direction is the direction of the target component force.
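The adjustment logic described for the first adjusting unit can be summarized in a short sketch. All function names, threshold values, and the force-to-distance mapping below are illustrative assumptions for exposition, not values or interfaces taken from this disclosure.

```python
import math

# Illustrative thresholds; the disclosure does not specify numeric values.
CONTACT_FORCE_THRESHOLD = 5.0    # N
MOMENT_THRESHOLD = 0.5           # N*m
OFFSET_THRESHOLD = 1.0           # mm
COMPONENT_FORCE_THRESHOLD = 2.0  # N

def force_to_distance(component_force):
    """Hypothetical mapping between a component force and an adjustment distance."""
    return 0.2 * abs(component_force)

def plan_adjustment(contact_force, moment, center, zmp, component_force):
    """Plan one posture adjustment from force feedback.

    Returns None when both readings are below threshold (keep inserting),
    otherwise (distance, unit_direction) for the next corrective move.
    """
    if contact_force < CONTACT_FORCE_THRESHOLD and moment < MOMENT_THRESHOLD:
        return None  # aligned enough: continue insertion
    # Zero-moment-point cue: move from the center point toward the ZMP
    # by the offset distance, when that offset exceeds its threshold.
    offset = math.dist(center, zmp)
    if offset >= OFFSET_THRESHOLD:
        dx, dy = zmp[0] - center[0], zmp[1] - center[1]
        return offset, (dx / offset, dy / offset)
    # Otherwise fall back to the contact-force component along one axis,
    # using the (hypothetical) force-to-distance mapping.
    if abs(component_force) >= COMPONENT_FORCE_THRESHOLD:
        direction = (1.0, 0.0) if component_force > 0 else (-1.0, 0.0)
        return force_to_distance(component_force), direction
    return None  # no informative signal: a trial adjustment may be needed
```

In this sketch the zero-moment-point cue simply takes precedence; per the paragraphs above, an implementation could instead compare the moment-derived and force-derived moving directions when both signals exceed their thresholds and prefer the zero-moment-point move only when the two directions match.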
Optionally, the adjustment direction includes at least one of: positive direction along the horizontal axial direction, negative direction along the horizontal axial direction, positive direction along the vertical axial direction, and negative direction along the vertical axial direction.
Optionally, the apparatus further includes:
a first trial adjustment unit 2110, configured to, under the condition that the offset distance between the zero moment point position corresponding to the target object and the center point position of the target object in any adjusting direction is smaller than an offset threshold value and the target component force of the contact force in any adjusting direction is smaller than a component force threshold value, perform trial adjustment on the target object according to a preset adjusting direction corresponding to the target object and a preset adjusting distance corresponding to the preset adjusting direction if the target object cannot be successfully inserted into the target accommodating portion;
and continue inserting the trial-adjusted target object into the target accommodating portion, acquire force information of the target object, and adjust the posture of the target object according to the force information until the target object is successfully inserted into the target accommodating portion.
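The trial-adjustment behavior above can be sketched as a retry loop around two robot primitives; `try_insert` and `adjust` are hypothetical callables standing in for the insertion action and the posture-adjustment action, and the preset list and round limit are illustrative.

```python
def insert_with_trial_adjustment(try_insert, adjust, presets, max_rounds=10):
    """Attempt insertion; on failure with no informative force signal,
    cycle through preset (direction, distance) trial adjustments.

    try_insert: () -> bool, returns True when insertion succeeds.
    adjust: (direction, distance) -> None, applies one trial adjustment.
    presets: list of (direction, distance) pairs tried in order, cyclically.
    """
    for round_no in range(max_rounds):
        if try_insert():
            return True
        direction, distance = presets[round_no % len(presets)]
        adjust(direction, distance)
    return False  # give up after max_rounds attempts
```

A caller would supply presets such as small moves along the positive and negative horizontal and vertical axes, matching the adjustment directions listed above.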
Optionally, the method further includes:
a first detection unit 2112 for detecting the depth of insertion of the target object into the target housing portion;
a first stopping unit 2114, configured to stop performing posture adjustment on the target object according to the force information when the depth is not less than a depth threshold.
Referring to fig. 22, in a software embodiment, the object mounting apparatus may include a second positioning unit 2202, a second obtaining unit 2204, and a second adjusting unit 2206. Wherein:
a second positioning unit 2202, configured to perform initial positioning on the target accommodating portion, and move the target object to an initial position of the target accommodating portion according to an initial positioning result;
a second acquiring unit 2204, configured to acquire force information of the target object during the process of inserting the target object into the target accommodating portion from the initial position;
a second adjusting unit 2206, configured to perform posture adjustment on the target object according to the force information, so that the target object is aligned with and inserted into the target accommodating portion.
Optionally, the force information includes a contact force between the target object and the target accommodating portion and an acting moment corresponding to the contact force.
Optionally, the second adjusting unit 2206 is specifically configured to:
under the condition that the contact force is not smaller than a contact force threshold value or the acting moment is not smaller than a moment threshold value, moving the target object by a corresponding adjusting distance along an adjusting direction.
Optionally, the apparatus further includes:
a second insertion unit 2208, for inserting the target object into the target accommodating portion if the contact force is less than the contact force threshold and the acting moment is less than the moment threshold.
Optionally, the second adjusting unit 2206 is specifically configured to:
acquiring a zero moment point position corresponding to the target object and an offset distance between the zero moment point position and a central point position of the target object in an adjusting direction, and moving the target object according to the zero moment point position under the condition that the offset distance is not smaller than an offset threshold value; or,
acquiring a target component force of the contact force in the adjusting direction, and moving the target object according to the target component force under the condition that the target component force is not smaller than a component force threshold; or,
under the condition that the offset distance is not less than an offset threshold or the target component force is not less than a component force threshold, if the moment moving direction of the target object determined according to the relative relation between the zero moment point position and the central point position is matched with the component force moving direction of the target object determined according to the target component force, moving the target object according to the zero moment point position; and if the moment moving direction is not matched with the component force moving direction, moving the target object according to the target component force.
Optionally, the second adjusting unit 2206 is specifically configured to:
under the condition that the target object is moved according to the zero moment point position, the adjusting distance is the offset distance, and the adjusting direction is the direction from the central point position of the target object to the zero moment point position;
and under the condition that the target object is moved according to the target component force, the adjustment distance is the distance corresponding to the target component force in a mapping relation between component force and adjustment distance, and the adjustment direction is the direction of the target component force.
Optionally, the adjustment direction includes at least one of: positive direction along the horizontal axial direction, negative direction along the horizontal axial direction, positive direction along the vertical axial direction, and negative direction along the vertical axial direction.
Optionally, the apparatus further includes:
a second trial adjustment unit 2210, configured to, under the condition that the offset distance between the zero moment point position corresponding to the target object and the center point position of the target object in any adjustment direction is smaller than an offset threshold and the target component force of the contact force in any adjustment direction is smaller than a component force threshold, perform trial adjustment on the target object according to a preset adjustment direction corresponding to the target object and a preset adjustment distance corresponding to the preset adjustment direction if the target object cannot be successfully inserted into the target accommodating portion;
and continue inserting the trial-adjusted target object into the target accommodating portion, acquire force information of the target object, and adjust the posture of the target object according to the force information until the target object is successfully inserted into the target accommodating portion.
Optionally, the apparatus further includes:
a second detection unit 2212 for detecting the depth of the target object inserted into the target receiving portion;
a second stopping unit 2214, configured to stop performing posture adjustment on the target object according to the force information when the depth is not less than a depth threshold.
Referring to fig. 23, in a software embodiment, the hard disk mounting apparatus may include a third positioning unit 2302, a third obtaining unit 2304 and a third adjusting unit 2306. Wherein:
a third positioning unit 2302 for visually positioning a target storage portion on the server and moving a target hard disk to an initial position of the target storage portion according to a visual positioning result;
a third acquiring unit 2304, configured to acquire force information of the target hard disk in the process of inserting the target hard disk into the target storage portion from the initial position;
a third adjusting unit 2306, configured to perform posture adjustment on the target hard disk according to the force information, so that the target hard disk is aligned with and inserted into the target storage portion.
Optionally, the force information includes a contact force between the target hard disk and the target storage portion and a moment of action corresponding to the contact force.
Optionally, the third adjusting unit 2306 is specifically configured to:
under the condition that the contact force is not smaller than a contact force threshold value or the acting moment is not smaller than a moment threshold value, moving the target hard disk by a corresponding adjusting distance along an adjusting direction.
Optionally, the apparatus further includes:
a third inserting unit 2308, for inserting the target hard disk into the target storage portion when the contact force is smaller than the contact force threshold and the acting moment is smaller than the moment threshold.
Optionally, the third adjusting unit 2306 is specifically configured to:
acquiring a zero moment point position corresponding to the target hard disk and an offset distance between the zero moment point position and a central point position of the target hard disk in an adjusting direction, and moving the target hard disk according to the zero moment point position under the condition that the offset distance is not less than an offset threshold; or,
acquiring a target component force of the contact force in the adjusting direction, and moving the target hard disk according to the target component force under the condition that the target component force is not smaller than a component force threshold; or,
under the condition that the offset distance is not less than an offset threshold or the target component force is not less than a component force threshold, if the moment moving direction of the target hard disk determined according to the relative relation between the zero moment point position and the central point position is matched with the component force moving direction of the target hard disk determined according to the target component force, moving the target hard disk according to the zero moment point position; and if the moment moving direction is not matched with the component force moving direction, moving the target hard disk according to the target component force.
Optionally, the third adjusting unit 2306 is specifically configured to:
under the condition that the target hard disk is moved according to the position of the zero moment point, the adjusting distance is the offset distance, and the adjusting direction is the direction from the central point position of the target hard disk to the position of the zero moment point;
and under the condition that the target hard disk is moved according to the target component force, the adjustment distance is the distance corresponding to the target component force in a mapping relation between component force and adjustment distance, and the adjustment direction is the direction of the target component force.
Optionally, the adjustment direction includes at least one of: positive direction along the horizontal axial direction, negative direction along the horizontal axial direction, positive direction along the vertical axial direction, and negative direction along the vertical axial direction.
Optionally, the apparatus further includes:
a third trial adjustment unit 2310, configured to, under the condition that the offset distance between the zero moment point position corresponding to the target hard disk and the center point position of the target hard disk in any adjustment direction is smaller than an offset threshold and the target component force of the contact force in any adjustment direction is smaller than a component force threshold, perform trial adjustment on the target hard disk according to a preset adjustment direction corresponding to the target hard disk and a preset adjustment distance corresponding to the preset adjustment direction if the target hard disk cannot be successfully inserted into the target storage portion;
and continue inserting the trial-adjusted target hard disk into the target storage portion, acquire force information of the target hard disk, and adjust the posture of the target hard disk according to the force information until the target hard disk is successfully inserted into the target storage portion.
Optionally, the apparatus further includes:
a third detecting unit 2312, for detecting the depth of insertion of the target hard disk into the target storage portion;
a third stopping unit 2314, configured to, when the depth is not less than a depth threshold, stop performing posture adjustment on the target hard disk according to the force information and continue inserting the target hard disk into the target storage portion along the direction indicated by the target storage portion.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of the present description to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments herein. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
The above description is only for the purpose of illustrating the preferred embodiments of the one or more embodiments of the present disclosure, and is not intended to limit the scope of the one or more embodiments of the present disclosure, and any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the one or more embodiments of the present disclosure should be included in the scope of the one or more embodiments of the present disclosure.
Claims (23)
1. A storage position identification method, comprising:
determining a target area corresponding to a target device to which a target object belongs;
and carrying out grid area division on the target area, and determining a target grid area corresponding to the target object according to a grid area division result and a mapping relation between the object and the grid area, to be used as a corresponding storage position of the target object on the target device.
2. The method according to claim 1, wherein the determining a target area corresponding to a target device to which the target object belongs comprises:
identifying a target central point corresponding to the target area in the acquired visual image area;
and determining the device area corresponding to the target central point as the target area corresponding to the target device.
3. The method of claim 2, wherein identifying a target center point corresponding to the target area in the acquired visual image area comprises:
adjusting the target device to a preset position of the visual image area;
and identifying device center points in the visual image area, and determining the device center point whose position in the visual image area matches the preset position as the target center point corresponding to the target device.
4. The method of claim 3, wherein the adjusting the target device to the preset position of the visual image area comprises:
sequentially identifying bar code identifiers corresponding to the devices in the visual image area to determine a target bar code identifier corresponding to the target device, wherein an identification result of the target bar code identifier matches preset description information of the target device;
and adjusting the target device to the preset position of the visual image area according to the position of the target bar code identifier in the visual image area and the relative position relation between the target bar code identifier and the target device.
5. The method of claim 3, wherein identifying device center points in the visual image area comprises:
performing feature point matching on the image in the visual image area according to a device feature model to determine at least one group of device feature points;
and determining the device center point corresponding to each group of device feature points according to the relative position relationship between device feature points and device center points.
6. The method of claim 5, wherein the device feature model is used to identify feature points of an edge region of a device.
7. The method of claim 5, wherein the preset position comprises a central position of the visual image area; and determining the device center point whose position in the visual image area matches the preset position as the target center point corresponding to the target device comprises:
under the condition that a plurality of device center points are determined according to any group of device feature points, clustering the device center points to obtain corresponding clustering center points;
and determining the clustering center point closest to the central position as the target center point corresponding to the target device.
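A minimal sketch of the clustering step in claim 7; the claim does not fix a particular clustering algorithm, so simple greedy distance-based clustering is assumed here, and the radius and coordinates are illustrative.

```python
def cluster_points(points, radius=10.0):
    """Greedy clustering: a point joins the first cluster whose running
    mean lies within `radius`; returns the mean of each cluster."""
    clusters = []  # list of lists of (x, y) points
    for p in points:
        for c in clusters:
            cx = sum(q[0] for q in c) / len(c)
            cy = sum(q[1] for q in c) / len(c)
            if (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= radius ** 2:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [(sum(q[0] for q in c) / len(c), sum(q[1] for q in c) / len(c))
            for c in clusters]

def target_center_point(candidate_centers, image_center):
    """Cluster the candidate center points and pick the cluster center
    closest to the central position of the visual image area (claim 7)."""
    centers = cluster_points(candidate_centers)
    return min(centers, key=lambda c: (c[0] - image_center[0]) ** 2
                                      + (c[1] - image_center[1]) ** 2)
```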
8. The method of claim 1, wherein the grid area division result is obtained by:
performing feature point matching on the image in the target area according to the object feature model to determine at least one group of object feature points;
determining object center points corresponding to each group of object feature points according to the relative position relationship between the object feature points and the object center points;
and carrying out grid area division on the target area according to the relative position relation between the object center point and the grid area.
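The grid division of claim 8 can be illustrated by building one grid cell around each detected object center point and then locating a target center within the resulting cells; the fixed cell size and the containment lookup are simplifying assumptions, not limitations of the claim.

```python
def grid_regions_from_centers(centers, cell_w, cell_h):
    """Divide the target area into grid cells using the relative position
    between each object center point and its grid region: each cell is
    the cell_w x cell_h rectangle centered on a detected center point."""
    return [(cx - cell_w / 2, cy - cell_h / 2, cx + cell_w / 2, cy + cell_h / 2)
            for cx, cy in centers]

def locate_target_grid(regions, target_center):
    """Return the index of the grid region containing the target center,
    i.e. the storage position looked up via the object-to-grid mapping."""
    for i, (x0, y0, x1, y1) in enumerate(regions):
        if x0 <= target_center[0] <= x1 and y0 <= target_center[1] <= y1:
            return i
    return None  # target center falls outside all grid regions
```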
9. The method according to claim 8, wherein the determining the object center points corresponding to each set of object feature points comprises:
under the condition that a plurality of object center points are determined according to any group of object feature points, clustering the object center points to obtain corresponding clustering center points;
and determining the clustering center point of which the position is matched with the preset object arrangement direction as an object center point corresponding to each group of object feature points.
10. The method of claim 9, wherein the object alignment direction comprises at least one of: horizontal direction, vertical direction.
11. The method according to claim 1 or 8, wherein the grid area division result is obtained by:
performing feature point matching on the image in the target area according to an array feature model to determine at least one group of array feature points;
carrying out grid area division on the target area according to a standard grid array defined in the array feature model to generate a target grid array; wherein the corresponding positions of the array feature points in the target grid array coincide with the corresponding positions in the standard grid array.
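As a sketch of claim 11's step of generating a target grid array, the snippet below translates a standard grid array so that a matched array feature point coincides with its detected position. The single-anchor, translation-only alignment is an assumption for brevity; a real implementation would typically estimate a fuller transform from several feature points.

```python
def align_standard_grid(standard_cells, standard_anchor, detected_anchor):
    """Translate each standard grid cell (x0, y0, x1, y1) so that the
    standard anchor feature point lands on the detected feature point,
    yielding the target grid array over the target area."""
    dx = detected_anchor[0] - standard_anchor[0]
    dy = detected_anchor[1] - standard_anchor[1]
    return [(x0 + dx, y0 + dy, x1 + dx, y1 + dy)
            for (x0, y0, x1, y1) in standard_cells]
```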
12. The method of claim 11, further comprising:
under the condition that a first grid area division result of the target area is obtained according to the object feature model and a second grid area division result of the target area is obtained according to the array feature model, calculating a homography matrix between the first grid area division result and the second grid area division result;
and correcting the grid area division result of the target area according to the homography matrix, wherein the corrected grid area division result is used for determining the target grid area corresponding to the target object.
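The homography of claim 12 can be estimated from point correspondences between the two division results (for example, matching grid-cell corners) with the standard direct linear transform. The sketch below uses NumPy rather than any library named in this disclosure; at least four non-degenerate correspondences are required.

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via the DLT
    least-squares formulation; src/dst are lists of (x, y) points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right singular vector of the smallest
    # singular value, reshaped to 3x3 and normalized.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Map a point through H, e.g. to correct one grid division result
    onto the other before determining the target grid area."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[0] / v[2], v[1] / v[2]
```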
13. The method of claim 1, further comprising:
and visually positioning the target grid area to assemble the target object to a storage position corresponding to the target grid area or unload the target object from the storage position corresponding to the target grid area according to a visual positioning result.
14. A storage position identification method, comprising:
determining a target area corresponding to a target server to which a target hard disk belongs;
and carrying out grid area division on the target area, and determining a target grid area corresponding to the target hard disk according to a grid area division result and a mapping relation between the hard disk and the grid area to be used as a corresponding storage position of the target hard disk on the target server.
15. The method of claim 14, wherein the determining a target area corresponding to a target server to which a target hard disk belongs comprises:
identifying a target central point corresponding to the target area in the acquired visual image area;
and determining the server area corresponding to the target central point as a target area corresponding to the target server.
16. The method of claim 15, wherein identifying a target center point corresponding to the target area in the captured visual image area comprises:
adjusting the target server to a preset position of the visual image area;
and identifying a server center point in the visual image area, and determining a server center point, of which the position in the visual image area is matched with the preset position, as a target center point corresponding to the target server.
17. The method of claim 16, wherein identifying a server center point within the visual image area comprises:
performing feature point matching on the image in the visual image area according to a server feature model to determine at least one group of server feature points;
and determining the server central points corresponding to the characteristic points of each group of servers according to the relative position relationship between the characteristic points of the servers and the server central points.
18. The method of claim 14, wherein the grid area division result is obtained by:
performing feature point matching on the image in the target area according to the hard disk feature model to determine at least one group of hard disk feature points;
determining hard disk central points corresponding to each group of hard disk characteristic points according to the relative position relationship between the hard disk characteristic points and the hard disk central points;
and carrying out grid area division on the target area according to the relative position relation between the central point of the hard disk and the grid area.
19. The method according to claim 14 or 18, wherein the grid area division result is obtained by:
performing feature point matching on the image in the target area according to an array feature model to determine at least one group of array feature points;
carrying out grid area division on the target area according to a standard grid array defined in the array feature model to generate a target grid array; wherein the corresponding positions of the array feature points in the target grid array coincide with the corresponding positions in the standard grid array.
20. A storage position recognition device, comprising:
a first target area determination unit, configured to determine a target area corresponding to a target device to which a target object belongs;
and a first grid area determining unit, configured to carry out grid area division on the target area, and determine a target grid area corresponding to the target object according to a grid area division result and a mapping relation between the object and the grid area, to be used as a corresponding storage position of the target object on the target device.
21. A storage position recognition device, comprising:
the second target area determining unit is used for determining a target area corresponding to a target server to which the target hard disk belongs;
and the second grid area determining unit is used for carrying out grid area division on the target area and determining a target grid area corresponding to the target hard disk according to a grid area division result and a mapping relation between the hard disk and the grid area so as to be used as a corresponding storage position of the target hard disk on the target server.
22. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 1-19 by executing the executable instructions.
23. A computer-readable storage medium having stored thereon computer instructions, which, when executed by a processor, carry out the steps of the method according to any one of claims 1-19.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010699074.4A CN113298876A (en) | 2020-07-20 | 2020-07-20 | Storage position identification method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113298876A true CN113298876A (en) | 2021-08-24 |
Family
ID=77318565
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010699074.4A Pending CN113298876A (en) | 2020-07-20 | 2020-07-20 | Storage position identification method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113298876A (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100564728B1 (en) * | 2004-09-17 | 2006-03-28 | (주)래디안트 | System and method for determining position of mobile communication device by grid-based pattern matching algorithm |
CN104770080A (en) * | 2012-11-01 | 2015-07-08 | 松下知识产权经营株式会社 | Electronic component mounting system |
CN106437024A (en) * | 2016-08-31 | 2017-02-22 | 芜湖天航科技(集团)股份有限公司 | Mounting method of special-shaped large-span net rack high-altitude positioning blocks |
JP2017147360A (en) * | 2016-02-18 | 2017-08-24 | Juki株式会社 | Electronic component inspecting method, electronic component mounting method, and electronic component mounting device |
CN107303636A (en) * | 2016-04-19 | 2017-10-31 | 泰科电子(上海)有限公司 | Automatic setup system and automatic assembly method based on robot |
CN107995683A (en) * | 2017-12-13 | 2018-05-04 | 北京小米移动软件有限公司 | Alignment system, indoor orientation method, server and storage medium |
WO2018092236A1 (en) * | 2016-11-17 | 2018-05-24 | 株式会社Fuji | Work robot and work position correction method |
US20190091870A1 (en) * | 2017-09-28 | 2019-03-28 | Seiko Epson Corporation | Robot System |
CN109992640A (en) * | 2019-04-11 | 2019-07-09 | 北京百度网讯科技有限公司 | Method, apparatus, device, and storage medium for determining a position grid |
CN110340630A (en) * | 2019-07-17 | 2019-10-18 | 中国科学院自动化研究所 | Robot automated assembly method and device based on multi-sensor fusion |
CN110815213A (en) * | 2019-10-21 | 2020-02-21 | 华中科技大学 | Part identification and assembly method and device based on multi-dimensional feature fusion |
CN113290570A (en) * | 2020-07-20 | 2021-08-24 | 阿里巴巴集团控股有限公司 | Clamping device, data center operation and maintenance robot and assembly robot |
CN113298877A (en) * | 2020-07-20 | 2021-08-24 | 阿里巴巴集团控股有限公司 | Object assembling method and device |
History
- 2020-07-20: Application CN202010699074.4A filed in CN; patent CN113298876A/en, status: active, Pending
Non-Patent Citations (1)
Title |
---|
Xu Gang; Zhang Wenming; Li Haibin; Liu Bin: "Binocular stereo vision method based on wavelet multi-resolution grid partitioning", Acta Optica Sinica (光学学报), no. 04, 15 April 2009 (2009-04-15) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9259844B2 (en) | Vision-guided electromagnetic robotic system | |
KR100693262B1 (en) | Image processing apparatus | |
CN112476434B (en) | Visual 3D pick-and-place method and system based on cooperative robot | |
CN110555889B (en) | CALTag and point cloud information-based depth camera hand-eye calibration method | |
CN104227722B (en) | Robot system and robot control method | |
CN106945035B (en) | Robot control apparatus, robot system, and control method for robot control apparatus | |
Song et al. | CAD-based pose estimation design for random bin picking using a RGB-D camera | |
CN112109086B (en) | Grabbing method for industrial stacked parts, terminal equipment and readable storage medium | |
CN112836558B (en) | Mechanical arm tail end adjusting method, device, system, equipment and medium | |
US11625842B2 (en) | Image processing apparatus and image processing method | |
JP4709668B2 (en) | 3D object recognition system | |
US9082017B2 (en) | Robot apparatus and position and orientation detecting method | |
CN110930442B (en) | Method and device for determining positions of key points in robot hand-eye calibration based on calibration block | |
JP3654042B2 (en) | Object identification method and apparatus | |
Kaymak et al. | Implementation of object detection and recognition algorithms on a robotic arm platform using raspberry pi | |
US20230297068A1 (en) | Information processing device and information processing method | |
CN111681268A (en) | Method, device, equipment and storage medium for identifying and detecting sequence number of optical mark point by mistake | |
CN113298877A (en) | Object assembling method and device | |
Zhang et al. | Vision-based six-dimensional peg-in-hole for practical connector insertion | |
CN113290570A (en) | Clamping device, data center operation and maintenance robot and assembly robot | |
Xu et al. | A vision-guided robot manipulator for surgical instrument singulation in a cluttered environment | |
CN114505864A (en) | Hand-eye calibration method, device, equipment and storage medium | |
CN113298876A (en) | Storage position identification method and device | |
US20230100238A1 (en) | Methods and systems for determining the 3d-locations, the local reference frames and the grasping patterns of grasping points of an object | |
CN114750164B (en) | Transparent object grabbing method, transparent object grabbing system and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code | | Ref country code: HK; Ref legal event code: DE; Ref document number: 40059185; Country of ref document: HK |