
Extraction method of dispensing track for components based on transfer learning and Mask-RCNN

Published in: Multimedia Tools and Applications

Abstract

In conventional dispensing, the robot typically obtains the component pad profile through Mark-point-assisted positioning and uses that profile directly as the dispensing contour. However, welding and other factors often change the posture of a component after it is mounted, so the actual dispensing contour no longer matches the pad exactly and a positioning deviation is introduced. Moreover, component recognition based on convolutional neural networks requires a large number of training samples, which hinders extension to new component types. Focusing on the high-precision dispensing task, and building on indirect component positioning, this paper uses Mask R-CNN to extract the dispensing tracks of complex components in different environments; compared with traditional methods, this approach offers higher robustness and dispensing accuracy. In addition, the network is trained with transfer learning, which gives the algorithm better scalability and flexibility when it must detect and segment new components. Experimental results show that the proposed dispensing track extraction method achieves higher precision and flexibility than the traditional method.
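The abstract combines two ideas: transfer learning, so that a Mask R-CNN pre-trained on a large dataset can be adapted to new component types from few samples, and instance segmentation, whose predicted mask is converted into a contour that serves as the dispensing track. The sketch below illustrates both steps; it is not the authors' implementation, and the torchvision Mask R-CNN, the OpenCV contour extraction, the class count, and the mask threshold are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): fine-tune a pre-trained Mask R-CNN on a
# small component dataset and turn its predicted mask into a dispensing contour.
import cv2
import numpy as np
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor


def build_transfer_model(num_classes: int):
    """COCO-pretrained Mask R-CNN whose box/mask heads are re-initialised for the
    (small) component dataset; the backbone weights are reused (transfer learning)."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model


def mask_to_track(mask_prob: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarise a predicted soft mask and return the largest outer contour as an
    (N, 2) array of pixel coordinates - a candidate dispensing track."""
    binary = (mask_prob > threshold).astype(np.uint8)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.empty((0, 2), dtype=int)
    largest = max(contours, key=cv2.contourArea)
    return largest.reshape(-1, 2)


if __name__ == "__main__":
    model = build_transfer_model(num_classes=2)   # background + one component type
    model.eval()
    image = torch.rand(3, 480, 640)               # stand-in for a board image in [0, 1]
    with torch.no_grad():
        prediction = model([image])[0]
    if len(prediction["masks"]) > 0:
        track = mask_to_track(prediction["masks"][0, 0].numpy())
        print("dispensing track points:", track.shape)
```

Reusing the pre-trained backbone and re-initialising only the box and mask heads is what keeps the sample requirement for a new component small; in practice the extracted contour would still need to be mapped from image pixels to robot coordinates before it could drive the dispensing head.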




Data availability

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.


Funding

This work was supported by the National Natural Science Foundation of China (No. 91748106), the Hubei Province Natural Science Foundation of China (No. 2019CFB526), and the Shenzhen Science and Technology Innovation Project (CYZZ20160412111639184).

Author information


Corresponding author

Correspondence to Yicheng Zhou.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Peng, G., Xiong, C., Zhou, Y. et al. Extraction method of dispensing track for components based on transfer learning and Mask-RCNN. Multimed Tools Appl 83, 2959–2978 (2024). https://doi.org/10.1007/s11042-023-15755-6


