CN111191546A - Intelligent product assembling method based on machine vision recognition
- Publication number: CN111191546A
- Application number: CN201911330665.8A
- Authority: CN (China)
- Prior art keywords: robot, detection, target detection, model, camera
- Prior art date: 2019-12-20
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V20/20: Scenes; scene-specific elements in augmented reality scenes
- G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/23213: Non-hierarchical clustering using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
- G06T7/0004: Image analysis; inspection of images; industrial image inspection
- G06T7/70: Determining position or orientation of objects or cameras
- G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/10004: Image acquisition modality; still image; photographic image
- G06T2207/20081: Special algorithmic details; training, learning
- G06T2207/20084: Special algorithmic details; artificial neural networks [ANN]
- G06T2207/30108: Subject of image; industrial image inspection
- G06T2207/30164: Subject of image; workpiece, machine component
- G06V2201/07: Indexing scheme relating to image or video recognition; target detection
Abstract
The invention adopts a method combining deep learning with traditional digital image processing that balances detection accuracy against detection speed: a deep-learning target detection algorithm first detects the rough position of an object in a complex environment, and the detected position is then enlarged into a region of interest on which image processing extracts more accurate target contour information. The invention also improves the deep-learning-based target detection method, raising both detection speed and detection accuracy while greatly reducing the demands on workshop hardware and cutting production cost. Overall, the invention substantially improves the assembly efficiency and assembly accuracy of parts and reduces assembly failures caused by positional deviation of parts.
Description
Technical Field
The invention belongs to the technical field of automobile production, and particularly relates to an intelligent product assembling method based on machine vision recognition.
Background
Many parts must be assembled during automobile production. Traditional manual assembly consumes large amounts of manpower and material resources, has low production efficiency, and cannot meet current demands for automated production. Most common automated part assembly lines today teach a robot once and then assemble parts at fixed positions. However, parts inevitably deviate from those fixed positions while being conveyed, and assembling according to the fixed taught positions then causes assembly failure and can even damage the parts. To solve this problem, the invention uses computer vision to locate the assembly position of each part to be assembled and guides the robot to assemble according to the located position. Commonly used target-localization methods fall roughly into detection methods based on image processing and target detection methods based on deep learning. An image-processing detection method that operates directly on all the information in the whole picture is strongly affected by environmental noise and fails easily when noise is excessive. Deep-learning target detection is more robust; common methods include the R-CNN series, YOLO, and SSD. Yet although deep-learning detection copes better with harsh illumination conditions, the position insensitivity of convolutional neural networks makes its localization coarser than image-processing edge detection. Deep-learning semantic segmentation localizes accurately but demands substantial computer GPU resources, detects slowly, and struggles to meet the real-time requirements of workshop production, while part assembly requires high detection accuracy.
Therefore, the invention first uses a deep-learning target detection method to obtain an approximate target position, slightly enlarges that detected region to form the region of interest (which greatly reduces noise), and then applies image-processing detection inside the region of interest to obtain the precise position of the target to be assembled.
On the other hand, part assembly lines require automated assembly equipment, yet the common robot teaching method fixes the positions, so even a slight positional deviation of a part causes the equipment to fail the assembly and damage the assembled part. To solve this problem, the invention introduces computer vision to guide the robot and ensure smooth assembly of the parts, combining deep learning and image processing so that the assembly position of the part can be reliably detected.
Disclosure of Invention
Aiming at the above technical problems, the invention provides an intelligent product assembling method based on machine vision recognition, which comprises the following steps:
(1) model improvement and training:
1.1, first improving and optimizing YOLOv2 with focal loss and ShuffleNetV2 to obtain a new target detection model;
1.2, photographing a workpiece to be assembled in a production workshop by using a camera to obtain 5000 pictures, and then labeling the 5000 collected pictures by using a target detection labeling tool to obtain position information of the workpiece in the pictures;
1.3, augmenting the constructed data set by rotation, cropping, scaling, translation, noise injection and similar methods to obtain more data;
1.4, dividing the augmented data into a training set and a verification set at a ratio of 4:1, feeding the training set into the improved target detection model with a training batch size of 32 and a learning rate of 10⁻³, training on a local computer for 100,000 steps, then verifying on the verification set and adjusting the hyper-parameters to optimize the model;
(2) calibrating a camera: in order to obtain a conversion relation between a pixel coordinate and a robot coordinate, calibrating a camera at a photographing point to obtain a coordinate transformation matrix;
(3) assembly position detection:
3.1, transplanting the model trained locally in step (1) to the workshop industrial personal computer; the camera then takes a picture of the workpiece, the target detection model is called and the picture is fed into it, and non-maximum suppression is applied to the model's output to filter out low-confidence and overlapping detection boxes, giving the final detection result of the target detection model;
3.2, taking the deep-learning detection result from 3.1, enlarging the detected region by a factor of 1.5 to form the region of interest, and then applying grayscale processing, median filtering, edge detection, and contour fitting with a random sample consensus algorithm inside the region of interest to obtain the precise pixel position of the workpiece;
(4) robot-guided assembly: converting the pixel position from step (3) into robot coordinates through the coordinate transformation matrix calibrated in step (2), and guiding the robot arm to move to the resulting part assembly position in the robot coordinate system to complete assembly of the part.
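The confidence filtering and non-maximum suppression of step 3.1 can be sketched as follows; this is a minimal NumPy sketch, and the score and IoU thresholds are illustrative assumptions, since the text does not fix them:

```python
import numpy as np

def nms(boxes, scores, score_thr=0.5, iou_thr=0.45):
    """Step 3.1 post-processing: drop low-confidence boxes, then suppress
    boxes that overlap a higher-scoring box too much.
    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences."""
    keep_mask = scores >= score_thr
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = scores.argsort()[::-1]            # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the current best box with every remaining box
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter + 1e-9)
        order = order[1:][iou < iou_thr]      # keep only weakly overlapping boxes
    return boxes[keep], scores[keep]
```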
Further, in step (2), because the plane viewed by the camera is parallel to the robot arm, an affine transformation is used to obtain the transformation relationship between the camera image plane and the robot's two-dimensional coordinate plane. First, a mark point is found on the workpiece and photographed; the pixel coordinates and the robot coordinates of the mark point are then calculated; the robot is then moved and the point photographed twice more, and the three pixel coordinates and three robot plane coordinates so obtained give the conversion between pixel coordinates and the robot's plane coordinates.
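A minimal sketch of this three-point calibration with OpenCV; the coordinate values below are invented placeholders, since the patent records no numbers:

```python
import numpy as np
import cv2

# The same mark point photographed at the initial robot pose and after
# moving the robot twice (all values are illustrative placeholders).
pixel_pts = np.float32([[1034, 612], [1520, 615], [1042, 980]])           # image coords
robot_pts = np.float32([[412.3, 108.7], [452.1, 108.9], [412.9, 138.2]])  # robot XY (mm)

M = cv2.getAffineTransform(pixel_pts, robot_pts)   # 2x3 affine matrix

def pixel_to_robot(u, v):
    """Map a detected pixel position into the robot's 2-D coordinate plane."""
    x, y = M @ np.array([u, v, 1.0])
    return x, y
```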
Advantageous effects:
First, the invention adopts automated part assembly, reducing the use of manpower and material resources while ensuring the speed of part assembly. In addition, the invention uses computer vision to guide the assembly position adaptively: even if a part deviates somewhat from its previously set position, the deviation can be corrected so that assembly of the part still succeeds.
The invention combines deep learning with traditional digital image processing in a way that balances detection accuracy and detection speed: a deep-learning target detection algorithm first finds the rough position of the object in a complex environment, and that detected position, slightly enlarged, becomes the region of interest for image processing that extracts more accurate target contour information. Because the image processing runs only inside the detection box, the detection position is refined, the edge information of the object to be detected is recovered, and the accuracy required during assembly is guaranteed. Compared with detection by image processing alone, restricting processing to the slightly enlarged detection region leaves a smaller region of interest with far less noise. The invention also improves the deep-learning-based target detection method itself, raising detection speed and accuracy while greatly reducing workshop hardware requirements and production cost. Overall, the invention substantially improves the assembly efficiency and assembly accuracy of parts and reduces assembly failures caused by positional deviation of parts.
Detailed Description
The present invention will be described in further detail with reference to examples, but the present invention is not limited thereto.
The visual guidance of the invention divides into two parts, hardware and software. Hardware: a Basler industrial camera is selected to photograph the parts, and a suitable industrial personal computer processes the part pictures captured by the camera for localization. Software: an appropriate detection algorithm is determined and then implemented in code. The detection algorithm adopted by the invention is as follows:
1. detection algorithm combining deep learning and image processing
First, the improved deep-learning target detection algorithm based on YOLOv2 detects the workpiece to be assembled; the position it returns may deviate and may not completely cover the workpiece area. The detected box is therefore slightly enlarged, by a factor of 1.5, to form the region of interest, so that other noise in the picture is excluded during image processing. Image processing then operates on the region of interest and fits the precise position of the assembly workpiece, which guides the robot to assemble.
(1) First, 5000 pictures of the workpiece to be detected are collected in the production workshop and then labeled. The captured pictures are annotated with the Label Image software to build the data set required for training.
(2) The constructed training set is clustered with the K-means algorithm to obtain a suitable number and size of anchor boxes, using the following clustering loss function:
loss = Σ_i Σ_j (1 - IoU(box_i, center_j))
where box_i is the size of the i-th labeled ground-truth box and center_j is the size of the j-th cluster center. Minimizing this loss yields the final number and sizes of the anchor boxes, from which a suitable number and size are selected. Clustering makes model training converge more easily.
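A sketch of this clustering on the labeled box sizes, assuming distance 1 - IoU between width/height pairs aligned at a common origin; k = 5 follows YOLOv2's convention and is an assumption, as the text leaves the anchor count open:

```python
import numpy as np

def iou_wh(wh, centers):
    """IoU between box sizes and cluster centers, width/height only."""
    inter = np.minimum(wh[:, None, 0], centers[None, :, 0]) * \
            np.minimum(wh[:, None, 1], centers[None, :, 1])
    union = (wh[:, 0] * wh[:, 1])[:, None] + \
            centers[None, :, 0] * centers[None, :, 1] - inter
    return inter / union

def kmeans_anchors(wh, k=5, iters=100, seed=0):
    """Cluster labeled box sizes under the loss above (distance = 1 - IoU)."""
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(wh, centers), axis=1)   # nearest center
        new = np.array([wh[assign == j].mean(axis=0) if np.any(assign == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers

# anchors = kmeans_anchors(np.array(box_sizes, dtype=float), k=5)
```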
(3) Data enhancement is performed according to the anchor boxes obtained in the previous step and the data set, and the improved YOLOv2 model is then trained until it converges.
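The enhancement operations named in step 1.3 (rotation, cropping, scaling, translation, added noise) might be realized as below, assuming OpenCV; the parameter ranges are illustrative, and the box labels would need the same geometric transform:

```python
import numpy as np
import cv2

def augment(img, rng):
    """One random variant combining rotation, scaling, translation and
    Gaussian pixel noise (cropping would be handled analogously)."""
    h, w = img.shape[:2]
    angle = rng.uniform(-15, 15)                 # degrees, illustrative range
    scale = rng.uniform(0.8, 1.2)                # zoom in/out
    tx, ty = rng.uniform(-0.1, 0.1, 2) * (w, h)  # translation in pixels
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    M[:, 2] += (tx, ty)
    out = cv2.warpAffine(img, M, (w, h))
    noise = rng.normal(0, 8, out.shape)          # additive Gaussian noise
    return np.clip(out + noise, 0, 255).astype(np.uint8)

# augmented = augment(picture, np.random.default_rng(0))
```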
(4) The trained model is transplanted to the industrial personal computer; the camera then takes pictures that are fed into the model for detection, and the detection result gives the rough position of the workpiece.
(5) Because the deep-learning detection algorithm returns an enclosing rectangle of the object that does not necessarily contain the object completely, the improved YOLOv2 detection box is enlarged by a factor of 1.5 and passed to the image-processing part as the region of interest.
(6) For the region of interest obtained in the previous step, grayscale conversion, filtering, edge detection, and a random sample consensus fit recover the fine contour of the assembled workpiece, from which its precise position is obtained (see the sketch after this list).
(7) The resulting pixel coordinates are then converted into robot coordinates.
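Steps (5) and (6) might look like the following OpenCV sketch; the median-filter kernel, the Canny thresholds, and the minimum-area-rectangle fit standing in for the RANSAC contour fit are all assumptions:

```python
import cv2

def refine_in_roi(img, box, scale=1.5):
    """Enlarge the YOLOv2 box about its center, then refine inside the ROI
    with grayscale conversion, median filtering and edge detection."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    hw, hh = (x2 - x1) * scale / 2, (y2 - y1) * scale / 2
    h, w = img.shape[:2]
    rx1, ry1 = max(int(cx - hw), 0), max(int(cy - hh), 0)
    rx2, ry2 = min(int(cx + hw), w), min(int(cy + hh), h)
    roi = img[ry1:ry2, rx1:rx2]

    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    blur = cv2.medianBlur(gray, 5)               # kernel size is an assumption
    edges = cv2.Canny(blur, 50, 150)             # thresholds are assumptions
    cnts, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not cnts:
        return None
    c = max(cnts, key=cv2.contourArea)           # dominant contour
    (px, py), _, _ = cv2.minAreaRect(c)
    return px + rx1, py + ry1                    # back to full-image pixels
```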
The invention first obtains a rough position with deep-learning detection and then fits the precise position by image processing on that region. Localization is more accurate than with a detection algorithm using deep learning alone, and compared with processing the whole image there is less noise and better tolerance of complex illumination environments.
2. YOLOv2 algorithm improvement and optimization
To guarantee detection speed, the invention adopts YOLOv2, currently one of the faster single-stage target detection algorithms, and replaces its original convolutional layers with the ShuffleNetV2 convolutional neural network, which has fewer parameters and runs faster, to raise detection speed; a fine-grained feature method further strengthens the model's detection accuracy. In addition, focal loss is adopted to improve YOLOv2's loss function, further improving the model's detection accuracy.
(1) To make the YOLOv2 target detection model faster and able to run smoothly without a GPU in the production workshop, ShuffleNetV2 replaces YOLOv2's original Darknet-19 convolutional structure. First, ShuffleNetV2's fully connected layer, global pooling layer, and conv5 layer are removed; the output of ShuffleNetV2's stage3 block is then reshaped so that its size matches the stage4 convolutional output, channel-concatenated with ShuffleNetV2's last-layer features, and the resulting feature output is fed into the loss function for training.
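A sketch of this backbone surgery using torchvision's ShuffleNetV2; the layer names follow that implementation, and PixelUnshuffle (space-to-depth) is an assumption about how the reshaping and channel superposition are realized:

```python
import torch
import torch.nn as nn
from torchvision.models import shufflenet_v2_x1_0

class ShuffleYOLOBackbone(nn.Module):
    """ShuffleNetV2 trunk without fc/global pooling/conv5, with the stage3
    feature map reshaped to the stage4 resolution and channel-concatenated."""
    def __init__(self):
        super().__init__()
        net = shufflenet_v2_x1_0()
        self.stem = nn.Sequential(net.conv1, net.maxpool, net.stage2)
        self.stage3, self.stage4 = net.stage3, net.stage4
        self.s2d = nn.PixelUnshuffle(2)   # halves H and W, multiplies C by 4

    def forward(self, x):
        x = self.stem(x)
        fine = self.stage3(x)             # stride-16 features
        coarse = self.stage4(fine)        # stride-32 features
        passthrough = self.s2d(fine)      # now matches coarse's spatial size
        return torch.cat([passthrough, coarse], dim=1)

# feats = ShuffleYOLOBackbone()(torch.randn(1, 3, 416, 416))  # (1, 1392, 13, 13)
```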
(2) Improving the loss function: single-stage target detection algorithms such as YOLOv2 always suffer from an imbalance between positive and negative samples. To solve this problem, the invention adopts the focal loss function to improve the confidence-loss part of the original YOLOv2. The focal loss formula is:
focal_loss(p) = -α·y·(1 - p)^γ·log(p) - (1 - α)·(1 - y)·p^γ·log(1 - p)
where p is the confidence predicted by the YOLOv2 model and y is the true label confidence (1 if an object is contained, 0 if not); α is a balance variable weighting positive against negative samples, with α = 0.25, and γ is an adjustable scaling parameter, with γ = 2.
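A direct sketch of this loss, assuming PyTorch and confidences already squashed into (0, 1):

```python
import torch

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss replacing YOLOv2's confidence loss.
    p: predicted object confidence in (0, 1); y: true label (1 = object)."""
    eps = 1e-7
    p = p.clamp(eps, 1.0 - eps)                      # numerical safety
    pos = -alpha * y * (1 - p).pow(gamma) * torch.log(p)
    neg = -(1 - alpha) * (1 - y) * p.pow(gamma) * torch.log(1 - p)
    return (pos + neg).mean()

# loss = focal_loss(torch.sigmoid(conf_logits), conf_targets)
```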
In improving the target detection model, the lightweight ShuffleNetV2 model replaces the original convolutional layers and is given a fine-grained feature operation, and focal loss is then adopted to improve the detection model's loss function. The improved model surpasses the original YOLOv2 in both detection accuracy and detection speed and detects well on an industrial personal computer without GPU configuration.
Claims (2)
1. The intelligent product assembling method based on machine vision recognition comprises the following steps:
(1) model improvement and training:
1.1, first improving and optimizing YOLOv2 with focal loss and ShuffleNetV2 to obtain a new target detection model;
1.2, photographing a workpiece to be assembled in a production workshop by using a camera to obtain 5000 pictures, and then labeling the 5000 collected pictures by using a target detection labeling tool to obtain position information of the workpiece in the pictures;
1.3, augmenting the constructed data set by rotation, cropping, scaling, translation, noise injection and similar methods to obtain more data;
1.4, dividing the augmented data into a training set and a verification set at a ratio of 4:1, feeding the training set into the improved target detection model with a training batch size of 32 and a learning rate of 10⁻³, training on a local computer for 100,000 steps, then verifying on the verification set and adjusting the hyper-parameters to optimize the model;
(2) calibrating a camera: in order to obtain a conversion relation between a pixel coordinate and a robot coordinate, calibrating a camera at a photographing point to obtain a coordinate transformation matrix;
(3) assembly position detection:
3.1, transplanting the model trained locally in step (1) to the workshop industrial personal computer; the camera then takes a picture of the workpiece, the target detection model is called and the picture is fed into it, and non-maximum suppression is applied to the model's output to filter out low-confidence and overlapping detection boxes, giving the final detection result of the target detection model;
3.2, taking the deep-learning detection result from 3.1, enlarging the detected region by a factor of 1.5 to form the region of interest, and then applying grayscale processing, median filtering, edge detection, and contour fitting with a random sample consensus algorithm inside the region of interest to obtain the precise pixel position of the workpiece;
(4) robot-guided assembly: converting the pixel position from step (3) into robot coordinates through the coordinate transformation matrix calibrated in step (2), and guiding the robot arm to move to the resulting part assembly position in the robot coordinate system to complete assembly of the part.
2. The intelligent product assembling method based on machine vision recognition of claim 1, wherein in step (2), since the plane viewed by the camera is parallel to the robot arm, an affine transformation is used to obtain the transformation relationship between the camera image plane and the robot's two-dimensional coordinate plane: first a mark point is found on the workpiece and photographed; the pixel coordinates and the robot coordinates of the mark point are then calculated; the robot is then moved and the point photographed twice more, and the three pixel coordinates and three robot plane coordinates so obtained give the conversion between pixel coordinates and the robot's plane coordinates.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911330665.8A CN111191546A (en) | 2019-12-20 | 2019-12-20 | Intelligent product assembling method based on machine vision recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111191546A true CN111191546A (en) | 2020-05-22 |
Family
ID=70709323
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911330665.8A Pending CN111191546A (en) | 2019-12-20 | 2019-12-20 | Intelligent product assembling method based on machine vision recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111191546A (en) |
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070271064A1 (en) * | 2006-05-16 | 2007-11-22 | The Boeing Company | System and method for identifying a feature of a workpiece |
US20110280472A1 (en) * | 2010-05-14 | 2011-11-17 | Wallack Aaron S | System and method for robust calibration between a machine vision system and a robot |
CN105654464A (en) * | 2014-11-28 | 2016-06-08 | 佳能株式会社 | Image processing apparatus and image processing method |
CN105954359A (en) * | 2016-05-24 | 2016-09-21 | 武汉理工大学 | Distributed ultrasonic nondestructive testing device and method for internal defects of complex-shape part |
CN106709909A (en) * | 2016-12-13 | 2017-05-24 | 重庆理工大学 | Flexible robot vision recognition and positioning system based on depth learning |
CN107665603A (en) * | 2017-09-06 | 2018-02-06 | 哈尔滨工程大学 | A kind of real-time detection method for judging parking stall and taking |
CN110020648A (en) * | 2018-01-10 | 2019-07-16 | 上银科技股份有限公司 | Workpiece measures and localization method |
CN108399639A (en) * | 2018-02-12 | 2018-08-14 | 杭州蓝芯科技有限公司 | Fast automatic crawl based on deep learning and arrangement method |
US10262437B1 (en) * | 2018-10-31 | 2019-04-16 | Booz Allen Hamilton Inc. | Decentralized position and navigation method, device, and system leveraging augmented reality, computer vision, machine learning, and distributed ledger technologies |
CN109895095A (en) * | 2019-02-11 | 2019-06-18 | 赋之科技(深圳)有限公司 | A kind of acquisition methods of training sample, device and robot |
CN110580723A (en) * | 2019-07-05 | 2019-12-17 | 成都智明达电子股份有限公司 | method for carrying out accurate positioning by utilizing deep learning and computer vision |
CN110524580A (en) * | 2019-09-16 | 2019-12-03 | 西安中科光电精密工程有限公司 | A kind of welding robot visual component and its measurement method |
Non-Patent Citations (3)
Title |
---|
Lu Jian et al., "Analysis of dynamic land-use change during urbanization based on remote-sensing monitoring: a case study of Longquan City, Zhejiang Province", Journal of Anhui Agricultural Sciences, no. 25, 31 December 2016, pages 191-195 *
Xu Zhenzhou et al., "Application of machine vision in an automatic valve-core assembly system", Automation Application, no. 10, 25 October 2015, pages 32-35 *
Cai Chengtao et al., "Vision-Based Ocean Buoy Target Detection Technology", vol. 1, Harbin Engineering University Press, pages 51-53 *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112070736A (en) * | 2020-09-01 | 2020-12-11 | 上海电机学院 | Object volume vision measurement method combining target detection and depth calculation |
CN112070736B (en) * | 2020-09-01 | 2023-02-24 | 上海电机学院 | Object volume vision measurement method combining target detection and depth calculation |
CN114608801A (en) * | 2020-12-08 | 2022-06-10 | 重庆云石高科技有限公司 | Automatic detection algorithm for falling of connecting wire of locomotive axle temperature probe |
CN114608801B (en) * | 2020-12-08 | 2024-04-19 | 重庆云石高科技有限公司 | Automatic detection algorithm for falling off of connecting wire of locomotive shaft temperature probe |
CN112658637A (en) * | 2020-12-16 | 2021-04-16 | 歌尔光学科技有限公司 | Assembling equipment |
CN112712076A (en) * | 2020-12-29 | 2021-04-27 | 中信重工开诚智能装备有限公司 | Visual positioning device and method based on label-free positioning |
CN113033322A (en) * | 2021-03-02 | 2021-06-25 | 国网江苏省电力有限公司南通供电分公司 | Method for identifying hidden danger of oil leakage of transformer substation oil filling equipment based on deep learning |
CN113034548A (en) * | 2021-04-25 | 2021-06-25 | 安徽科大擎天科技有限公司 | Multi-target tracking method and system suitable for embedded terminal |
CN114445393A (en) * | 2022-02-07 | 2022-05-06 | 无锡雪浪数制科技有限公司 | Bolt assembly process detection method based on multi-vision sensor |
CN114445393B (en) * | 2022-02-07 | 2023-04-07 | 无锡雪浪数制科技有限公司 | Bolt assembly process detection method based on multi-vision sensor |
CN116894616A (en) * | 2023-08-04 | 2023-10-17 | 湖南大学 | Method for intelligently controlling new energy logistics vehicle assembly based on machine vision recognition system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111191546A (en) | Intelligent product assembling method based on machine vision recognition | |
CN111062915B (en) | Real-time steel pipe defect detection method based on improved YOLOv3 model | |
CN111179233B (en) | Self-adaptive deviation rectifying method based on laser cutting of two-dimensional parts | |
CN111241931A (en) | Aerial unmanned aerial vehicle target identification and tracking method based on YOLOv3 | |
CN110175615B (en) | Model training method, domain-adaptive visual position identification method and device | |
CN111951238A (en) | Product defect detection method | |
CN116402787B (en) | Non-contact PCB defect detection method | |
CN115439458A (en) | Industrial image defect target detection algorithm based on depth map attention | |
CN111881743B (en) | Facial feature point positioning method based on semantic segmentation | |
CN112819748B (en) | Training method and device for strip steel surface defect recognition model | |
CN111209907A (en) | Artificial intelligent identification method for product characteristic image in complex light pollution environment | |
CN111553949A (en) | Positioning and grabbing method for irregular workpiece based on single-frame RGB-D image deep learning | |
CN111368637B (en) | Transfer robot target identification method based on multi-mask convolutional neural network | |
CN107545247B (en) | Stereo cognition method based on binocular recognition | |
CN112669269A (en) | Pipeline defect classification and classification method and system based on image recognition | |
CN117745708A (en) | Deep learning algorithm-based wood floor surface flaw detection method | |
CN113435542A (en) | Coal and gangue real-time detection method based on deep learning | |
CN108520533B (en) | Workpiece positioning-oriented multi-dimensional feature registration method | |
CN108564601B (en) | Fruit identification tracking method and system based on deep learning algorithm | |
CN113888603A (en) | Loop detection and visual SLAM method based on optical flow tracking and feature matching | |
CN111105418B (en) | High-precision image segmentation method for rectangular targets in image | |
CN111738264A (en) | Intelligent acquisition method for data of display panel of machine room equipment | |
CN110826564A (en) | Small target semantic segmentation method and system in complex scene image | |
CN117611571A (en) | Strip steel surface defect detection method based on improved YOLO model | |
CN116664608A (en) | Target detection and positioning method based on image enhancement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200522 |