Abstract
During dispensing, a traditional dispensing robot locates the component pad contour with the aid of Mark points and uses that contour directly as the dispensing contour. However, welding and other factors often change a component's posture after soldering, so the actual dispensing contour rarely matches the pad exactly and the dispensed track deviates. In addition, component recognition based on convolutional neural networks requires a large number of training samples, which hinders extension to new component types. This paper addresses the high-precision dispensing task. Building on indirect component positioning, it uses Mask R-CNN to extract the dispensing tracks of complex components in different environments; compared with traditional methods, this approach achieves higher robustness and dispensing accuracy. Transfer learning is also used to train the network, giving the algorithm better scalability and flexibility when facing the detection and segmentation of new components. Experimental results show that the proposed dispensing-track extraction method offers higher precision and flexibility than traditional methods.
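To illustrate the core idea of deriving a dispensing track from a Mask R-CNN prediction, the following is a minimal sketch (not the authors' implementation): given a binary instance mask for a component, the dispensing track can be approximated by the mask's boundary pixels. The function name `mask_to_track` and the 4-connectivity boundary criterion are illustrative assumptions.

```python
import numpy as np

def mask_to_track(mask):
    """Extract the boundary pixels of a binary instance mask.

    Illustrative sketch: the boundary of the segmented component
    approximates the contour a dispensing robot would follow. A pixel
    is on the boundary if it is foreground and at least one of its
    4-connected neighbours is background (or outside the image).
    """
    mask = np.asarray(mask, dtype=bool)
    # Pad with background so image-border pixels count as boundary.
    padded = np.pad(mask, 1, constant_values=False)
    # A pixel is interior only if all four 4-connected neighbours
    # are foreground.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = mask & ~interior
    return np.argwhere(boundary)  # (row, col) coordinates

# Example: a solid 5x5 square mask; its boundary is the 16 outer pixels.
square = np.ones((5, 5), dtype=bool)
track = mask_to_track(square)
```

In practice the boundary points would still need ordering into a continuous path and mapping from pixel to robot coordinates before they could drive the dispensing head.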
Data availability
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
Funding
This work was supported by the National Natural Science Foundation of China (No. 91748106), the Hubei Province Natural Science Foundation of China (No. 2019CFB526), and the Shenzhen Science and Technology Innovation Project (CYZZ20160412111639184).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Peng, G., Xiong, C., Zhou, Y. et al. Extraction method of dispensing track for components based on transfer learning and Mask-RCNN. Multimed Tools Appl 83, 2959–2978 (2024). https://doi.org/10.1007/s11042-023-15755-6