WO2017081736A1 - Lead tip position image recognition method and lead tip position image recognition system - Google Patents
Lead tip position image recognition method and lead tip position image recognition system
- Publication number
- WO2017081736A1 (PCT/JP2015/081520)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- lead
- image
- tip
- image recognition
- learning
- Prior art date
Classifications
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05K—PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
- H05K13/00—Apparatus or processes specially adapted for manufacturing or adjusting assemblages of electric components
- H05K13/08—Monitoring manufacture of assemblages
- H05K13/081—Integration of optical monitoring devices in assembly lines; Processes using optical monitoring devices specially adapted for controlling devices or machines in assembly lines
- H05K13/0813—Controlling of single components prior to mounting, e.g. orientation, component geometry
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
- G06F18/2178—Validation; Performance evaluation; Active pattern learning techniques based on feedback of a supervisor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05K—PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
- H05K13/00—Apparatus or processes specially adapted for manufacturing or adjusting assemblages of electric components
- H05K13/04—Mounting of components, e.g. of leadless components
Definitions
- The present invention relates to a lead tip position image recognition method and a lead tip position image recognition system that recognize the center position of the tip of a lead of an electronic component inserted into a through hole of a circuit board by processing an image of the lead tip captured by a camera.
- Conventionally, the tip of the lead of the electronic component is imaged by the camera from below, and the image is processed by general pattern matching to recognize the center position of the lead tip; the positional deviation of the electronic component is then corrected based on the recognition result, and the lead of the electronic component is positioned directly above the through hole of the circuit board.
- When the tip of the lead of the electronic component is stably formed into a circular shape, this image processing is not particularly difficult.
- In practice, however, the tip shape of the lead is not stable. Even for electronic components of the same product type, the tip of the lead may be burred as shown in FIGS. 4(a) and 4(b), or the lead tip may be blurred by scattered light (halation) as shown in FIGS. 4(c) and 4(d). The tip of the lead may be deformed into an ellipse or the like as shown in FIG. 4(e), partially cut away as shown in FIG. 4(f), or, as shown in FIGS. 4(g) and 4(h), the cut surface may appear in the middle of the tip.
- Because the shape of the lead tip thus appears in various forms depending on the state of its cut surface, it is difficult to accurately recognize the lead tip even with image processing, and the recognition accuracy of the center position of the lead tip is poor.
- The problem to be solved by the present invention is to make it possible to accurately recognize the center position of the lead tip from the lead image captured by the camera even when the shape of the lead tip is not stable.
- To solve this problem, the present invention provides a lead tip position image recognition method in which an image (hereinafter referred to as a "lead image") of the tip of a lead of an electronic component to be inserted into a through hole of a circuit board is processed by an image recognition device to recognize the center position of the lead tip.
- The method includes a learning step in which an operator designates the center position of the lead tip in the lead image and the image recognition device learns so that its output when that lead image is input becomes the designated center position, and a recognition step in which a lead image obtained by imaging the lead tip with the camera is input to the image recognition device and the center position of the lead tip is output.
- The image recognition device may be configured using a machine learning system such as a neural network.
- When an image processing error occurs in the recognition step, it is preferable to switch to the learning step and learn so that the output when the lead image in which the error occurred is input to the image recognition device becomes the center position of the lead tip designated by the operator. If learning is performed on lead images in which image processing errors have occurred, similar lead images input to the image recognition device thereafter no longer cause errors and the center position of the lead tip can be recognized accurately, so the recognition accuracy of the center position of the lead tip can be increased while reducing the frequency of image processing errors.
- In the learning step, at least one of rotation, mapping (mirroring), luminance change, and shape change may be applied to the lead image to generate a plurality of lead images, and learning may be performed so that the output when each of the plurality of lead images is input to the image recognition device becomes the center position of the lead tip designated by the operator. In this way, even when few lead images are available for learning, the teacher data (combinations of "input" and "correct output") can be multiplied and learning can proceed efficiently, increasing the recognition accuracy of the center position of the lead tip.
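The augmentation described above can be sketched as follows; a minimal illustration using 90-degree rotations, mirroring, and a brightness change, where the function name and the NumPy representation are hypothetical, not the patent's implementation:

```python
import numpy as np

def augment_lead_images(image, center):
    """Return (image, center) pairs augmented by 90-degree rotations,
    mirroring, and a simple brightness change.

    image  -- 2-D grayscale array, values in [0, 255]
    center -- (row, col) of the operator-designated lead-tip center
    """
    pairs = []
    img, (r, c) = image, center
    for _ in range(4):                                             # 0/90/180/270 degrees
        pairs.append((img, (r, c)))
        pairs.append((np.fliplr(img), (r, img.shape[1] - 1 - c)))  # mirrored copy
        pairs.append((np.clip(img * 0.8, 0, 255), (r, c)))         # dimmed copy
        # Rotate 90 degrees counter-clockwise; a pixel at (r, c) in an
        # H x W image moves to (W - 1 - c, r) in the rotated W x H image.
        img, (r, c) = np.rot90(img), (img.shape[1] - 1 - c, r)
    return pairs
```

Each augmented pair keeps the operator-designated center aligned with the transformed image, so every generated sample remains valid teacher data.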
- When a positioning error occurs in the insertion step, in which the lead of the electronic component held by the mounting head is positioned in and inserted into the through hole of the circuit board based on the recognition result of the image recognition device, it is also possible to switch to the learning step and learn so that the output when the lead image in which the positioning error occurred is input to the image recognition device becomes the center position of the lead tip designated by the operator.
- A learning device that learns the relationship (teacher data) between the lead image input to the image recognition device and the output center position of the lead tip may be provided separately from the image recognition device, and the result learned by the learning device may be transmitted to the image recognition device. Since the teacher data then need not be learned inside the image recognition device, its computing capability does not have to be raised to the level required for learning, and the learning device and the image recognition device can efficiently share the learning and the recognition of the center position of the lead tip.
- Alternatively, the image recognition device may perform both the learning of the teacher data and the recognition of the center position of the lead tip. In that case, the image recognition device is configured to switch between a recognition mode, in which a lead image is input and the center position of the lead tip is output, and a learning mode for learning the teacher data. A designation unit is provided with which the operator designates the center position of the lead tip in the lead image in the learning mode, and the image recognition device learns so that the center position designated by the operator becomes the output when that lead image is input.
- FIG. 1 is a block diagram showing the configuration of a lead tip position image recognition system according to a first embodiment of the present invention.
- FIG. 2 is a diagram illustrating a process of imaging the tip of the lead of the electronic component with a camera.
- FIG. 3 is a view for explaining the process of inserting the lead of the electronic component into the through hole of the circuit board.
- 4A to 4H are views showing lead images in which the shape of the tip of the lead is shown in various shapes.
- FIG. 5 is a diagram for explaining a configuration example of a neural network.
- FIG. 6 is a diagram showing an image of pixel data of the input layer and output layer of the neural network when positive sample data is used as the teacher data.
- FIG. 7 is a diagram showing an image of pixel data of the input layer and the output layer of the neural network when negative sample data is used as the teacher data.
- FIG. 8 is a diagram for explaining a method of detecting a region where a lead exists in a captured image by raster scanning.
- FIG. 9 is a diagram for explaining a method for detecting a region where a lead exists in a captured image by blob analysis.
- FIG. 10 is a flowchart showing the flow of processing of the component placement machine control program.
- FIG. 11 is a flowchart showing the flow of processing of the learning processing program.
- FIG. 12 is a block diagram showing the configuration of the lead tip position image recognition system according to the second embodiment of the present invention.
- The component mounting machine 11 includes a head moving device 14 that moves a mounting head 13 holding an electronic component 12, a component supply device 15 that supplies the electronic component 12, a conveyor 17 that conveys a circuit board 16, a component imaging camera 18 that images the electronic component 12 held by the mounting head 13 from its lower surface side, and a mark imaging camera 19 that images a reference mark or the like of the circuit board 16 from above.
- The component imaging camera 18 is fixed facing upward between the component supply device 15 and the conveyor 17, and an illumination device 20 that illuminates the electronic component 12 held by the mounting head 13 from below is attached to its upper portion.
- The mark imaging camera 19 is attached to the mounting head 13 facing downward and moves integrally with the mounting head 13 by means of the head moving device 14.
- The control device 21 of the component mounting machine 11 is configured by a computer or the like, and controls the operation of holding the electronic component 12 supplied by the component supply device 15 with the mounting head 13, moving it above the circuit board 16 by the head moving device 14, and inserting the lead 22 of the electronic component 12 into the through hole 23 of the circuit board 16.
- The control device 21 also functions as the image recognition device 25: before the electronic component 12 held by the mounting head 13 is moved above the circuit board 16, the tip (lower end) of the lead 22 of the electronic component 12 is imaged from below by the component imaging camera 18 to obtain a lead image, this lead image is input to the image recognition device 25, and the center position of the tip of the lead 22 is output.
- The control device 21 and the image recognition device 25 may be configured as separate computers or as a single computer.
- The image recognition device 25 is configured using a machine learning system such as a neural network or deep learning. In the learning mode (learning process) it uses supervised learning to learn the relationship (teacher data) between the shape of the tip of the lead 22 in the input lead image and the center position of the tip of the lead 22 as the output; in the recognition mode (recognition process) it can then recognize the center position of the tip of the lead 22 from an input lead image with high accuracy.
- Learning of the teacher data may be performed before the start of production.
- Learning of the teacher data is also performed every time an image processing error or a positioning error occurs.
- Here, an "image processing error" means that the recognition process has failed to recognize the center position of the tip of the lead 22 in the lead image, and a "positioning error" means that, in the insertion step of positioning and inserting the lead 22 of the electronic component 12 held by the mounting head 13 into the through hole 23 of the circuit board 16, the lead 22 could not be inserted into the through hole 23.
- To teach the teacher data, a lead image in which an image processing error or a positioning error has occurred is displayed on a display device 26 such as a liquid crystal display or a CRT; the operator, looking at the lead image displayed on the display device 26, operates an operation unit 27 (designation means) such as a keyboard, mouse, or touch panel to designate the center position of the tip of the lead 22 in the lead image and inputs it to the image recognition device 25. The image recognition device 25 then learns so that the output when the lead image in which the error occurred is input becomes the center position of the tip of the lead 22 designated by the operator.
- For example, a lead image with burrs may be rotated by a predetermined angle to create a plurality of lead images whose rotation angles differ by that angle, and these may be used for learning. The center position of the tip of the lead 22 can then be accurately recognized from the lead image regardless of the direction in which the burr occurs.
- In the neural network, the signal flows through the input layer, the intermediate layer (hidden layer), and the output layer. The intermediate layer may be a single layer or two or more layers.
- A neural network here refers to any model in which neurons (nodes) in each layer, connected into a network by synaptic connections, acquire problem-solving ability by changing their weights (synaptic connection strengths) through learning. Each neuron in the intermediate and output layers receives stimuli from the neurons in the previous layer; the stimuli are weighted, summed, and passed to the neurons in the next layer. What matters is the weighting: by adjusting the weights during the learning of the teacher data so that the optimal value is output, a highly accurate "correct output" can be obtained even for an "unknown input".
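The weighted-sum-and-pass behavior of each layer can be illustrated with a minimal forward pass; the layer sizes, random weights, and sigmoid activation are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def forward(x, layers):
    """Propagate input x through a list of (weights, bias) layers.

    Each neuron receives the weighted sum of the previous layer's
    outputs (the 'stimuli'), adds a bias, and applies a sigmoid.
    """
    for w, b in layers:
        x = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # weighted sum -> activation
    return x

rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(16, 100)) * 0.1, np.zeros(16)),   # input 100 -> hidden 16
    (rng.normal(size=(100, 16)) * 0.1, np.zeros(100)),  # hidden 16 -> output 100
]
y = forward(rng.normal(size=100), layers)
```

Learning would then adjust the weight matrices so that the output layer activation peaks at the operator-designated center; only the forward pass is shown here.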
- In this example, a lead image of 80 [pixel] × 80 [pixel] is input to the input layer, and the output layer outputs where in the input 80 [pixel] × 80 [pixel] lead image the center position of the tip of the lead 22 lies.
- FIG. 6 shows an example with an input and output of 10 [pixel] × 10 [pixel] for ease of illustration.
- To increase the resolution of the recognized center position, the output layer may be made larger than the input layer; for example, with an input layer of 80 [pixel] × 80 [pixel], the output layer may be designed as 160 [pixel] × 160 [pixel] or 320 [pixel] × 320 [pixel].
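One way to read a center position out of such an output layer is to take the brightest output cell and scale its coordinates back to input-image pixels; in this sketch the function name and the peak location are hypothetical, and it also shows how a 160 × 160 output yields half-pixel precision for an 80 × 80 input:

```python
import numpy as np

def center_from_output(output_map, input_size=80):
    """Convert the output-layer activation map into a lead-tip center
    position in input-image pixel coordinates.

    output_map -- 2-D array of activations (e.g. 80x80, 160x160, 320x320);
                  the brightest cell marks the recognized center.
    """
    h, w = output_map.shape
    r, c = np.unravel_index(np.argmax(output_map), output_map.shape)
    scale = input_size / h  # output layers larger than the input give sub-pixel steps
    return r * scale, c * scale

out = np.zeros((160, 160))
out[91, 45] = 1.0                 # hypothetical activation peak
print(center_from_output(out))    # -> (45.5, 22.5)
```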
- The teacher data may be learned and stored for each type of electronic component 12, or may be learned and stored jointly for a plurality of types.
- When the captured image is larger than the input layer of the neural network, a processing range corresponding to the input layer may be searched sequentially over the captured image by raster scanning or the like, as shown in FIG. 8, so that regions where the lead 22 may exist are detected and applied to the input layer of the neural network.
- Blob analysis or the like may be used instead of raster scanning. For example, by binarizing the captured image and performing blob analysis, candidate regions where the lead 22 may exist are detected, as shown in FIG. 9, and applied to the input layer of the neural network. Blob analysis may detect portions other than the lead 22, and its position detection accuracy is low even when the lead 22 is detected, but the neural network applied as post-processing can detect only the lead 22 with high positional accuracy.
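A raster scan over the captured image might look like the following sketch, which slides an 80 × 80 window (matching the network's input layer) in fixed steps and keeps windows bright enough to plausibly contain a lead; the step size and brightness threshold are illustrative assumptions:

```python
import numpy as np

def raster_scan_candidates(captured, window=80, step=20, threshold=1000):
    """Slide a window over the captured image and return candidate
    patches that may contain a lead, each with its top-left corner."""
    candidates = []
    h, w = captured.shape
    for top in range(0, h - window + 1, step):
        for left in range(0, w - window + 1, step):
            patch = captured[top:top + window, left:left + window]
            if patch.sum() > threshold:   # crude presence test; the network
                candidates.append((top, left, patch))  # then refines the position
    return candidates
```

Each candidate patch would be fed to the input layer; the neural network, not this coarse scan, is what provides the accurate center position.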
- The control device 21 of the component mounting machine 11 executes the programs shown in FIGS. 10 and 11 in cooperation with the image recognition device 25, thereby holding the electronic component 12 supplied by the component supply device 15 with the mounting head 13 of the component mounting machine 11, recognizing the center position of the tip of the lead 22 of the electronic component 12 by image processing, and controlling the operation of inserting the lead 22 into the through hole 23 of the circuit board 16, while learning the teacher data each time an image processing error or positioning error occurs.
- The component mounting machine control program of FIG. 10 controls the operation from holding the electronic component 12 supplied by the component supply device 15 until the lead 22 of the electronic component 12 is inserted into the through hole 23 of the circuit board 16, and is activated at the timing when the holding operation of the electronic component 12 is started.
- When this program is started, first, in step 101, the electronic component 12 supplied by the component supply device 15 is held by the mounting head 13 of the component mounting machine 11 and moved to the imaging position above the component imaging camera 18. The process then proceeds to step 102, where the tip (lower end) of the lead 22 of the electronic component 12 is imaged from below by the camera 18 to obtain a lead image.
- The process then proceeds to step 103, where the lead image is input to the image recognition device 25 and image processing for outputting the center position of the tip of the lead 22 is executed.
- In the next step 104, it is determined whether an image processing error has occurred (whether recognition of the center position of the tip of the lead 22 in the lead image has failed). If an image processing error has occurred, the process proceeds to step 107 and the learning processing program of FIG. 11 is executed.
- If no image processing error has occurred in step 104, the process proceeds to step 105, where the electronic component 12 held by the mounting head 13 of the component mounting machine 11 is moved above the circuit board 16. Based on the center position of the tip of the lead 22 output by the image recognition device 25, the positional deviation of the electronic component 12 is corrected so that the lead 22 is positioned directly above the through hole 23 of the circuit board 16; the mounting head 13 is then lowered and the lead 22 is inserted into the through hole 23.
- In the next step 106, it is determined whether a positioning error has occurred (whether insertion of the lead 22 of the electronic component 12 into the through hole 23 of the circuit board 16 has failed). If not, the program ends; if a positioning error has occurred, the process proceeds to step 107 and the learning processing program of FIG. 11 is executed.
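The flow of steps 101 to 107 can be summarized in a short Python sketch, with the machine operations passed in as callables; the function signature is an assumption for illustration, not the patent's actual control interface:

```python
def place_component(hold, capture, recognize, insert, learn):
    """Sketch of the FIG. 10 control flow (steps 101-107)."""
    hold()                          # step 101: hold part, move above camera
    lead_image = capture()          # step 102: image the lead tip from below
    center = recognize(lead_image)  # step 103: image recognition device
    if center is None:              # step 104: image processing error?
        learn(lead_image)           # step 107: operator-taught learning
        return False
    if not insert(center):          # step 105: correct offset and insert
        learn(lead_image)           # steps 106-107: positioning error
        return False
    return True                     # insertion succeeded
```

Both failure paths funnel the offending lead image into the same learning routine, mirroring how step 107 is shared by the two error branches in the flowchart.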
- The learning processing program of FIG. 11 is executed in step 107 of FIG. 10 when an image processing error or positioning error occurs.
- When this program is started, first, in step 201, the lead image in which the image processing error or positioning error occurred is displayed on the display device 26; in the next step 202, the program waits while the operator, looking at the lead image displayed on the display device 26, operates the operation unit 27 to designate the center position of the tip of the lead 22 in the lead image.
- In the next step 203, learning is performed so that the output when the lead image in which the error occurred is input to the image recognition device 25 becomes the center position of the tip of the lead 22 designated by the operator.
- This learning process may be performed by the image recognition device 25 itself or by the control device 21 of the component mounting machine 11; in the latter case, the result (weighting) learned by the control device 21 is transmitted to the image recognition device 25.
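The teacher-data sample taught here — the failed lead image as the "input" and the operator-designated center as the "correct output" — might be represented as a one-hot target map, in the spirit of the output-layer image of FIG. 6; this encoding is an illustrative assumption:

```python
import numpy as np

def make_teacher_pair(lead_image, designated_center):
    """Build one teacher-data sample: the failed lead image as 'input'
    and a target map whose only hot pixel is the operator-designated
    center as 'correct output'."""
    target = np.zeros_like(lead_image, dtype=float)
    target[designated_center] = 1.0   # operator's (row, col) designation
    return lead_image, target
```

Such pairs can be accumulated and used for (re-)training, whether the training runs on the image recognition device itself or on a separate learner whose weights are transmitted back.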
- Through the learning described above, the relationship (teacher data) between the shape of the tip of the lead 22 in the lead image input to the image recognition device 25 and the output center position of the tip of the lead 22 is learned, so that the center position of the tip of the lead 22 can be accurately recognized from an input lead image.
- In the first embodiment described above, when an image processing error occurs in which the center position of the tip of the lead 22 in the lead image cannot be recognized, learning is performed so that the output when that lead image is input to the image recognition device 25 becomes the center position designated by the operator. Even if a similar lead image is input to the image recognition device 25 thereafter, no image processing error occurs and the center position of the tip of the lead 22 can be recognized with high accuracy, so the recognition accuracy can be increased while reducing the frequency of image processing errors.
- Likewise, in the first embodiment, when a positioning error occurs in which the lead 22 cannot be inserted into the through hole 23 of the circuit board 16, learning is performed so that the output when the lead image in which the positioning error occurred is input to the image recognition device 25 becomes the center position of the tip of the lead 22 designated by the operator. This increases the recognition accuracy of the center position of the tip of the lead 22 when a similar lead image is subsequently input to the image recognition device 25, and reduces the frequency of positioning errors.
- The first embodiment assumes that the control device 21 and the image recognition device 25 of the component mounting machine 11 have sufficient computing capacity, and therefore has them perform both the learning of the teacher data and the recognition of the center position of the tip of the lead 22. In general, however, learning of the teacher data requires high computing capability, which the control device 21 and the image recognition device 25 of the component mounting machine 11 may lack.
- In the second embodiment, therefore, a learning device 31 with high computing capacity is provided in addition to the component mounting machine 11 and connected to it through a network. The learning device 31 learns the relationship (teacher data) between the lead image input to the image recognition device 25 of the component mounting machine 11 and the center position of the tip of the lead 22 to be output, and transmits the learning result (weighting) to the image recognition device 25 of the component mounting machine 11.
- The learning device 31 may be configured using a production management computer that manages the production line including the component mounting machine 11, or a dedicated learning computer may be newly provided. The learning device 31 may also be connected to a plurality of component mounting machines 11 via the network, so that one learning device 31 learns the teacher data of the plurality of component mounting machines 11.
- The teacher data (the lead image and the center position of the tip of the lead 22 designated by the operator) may be generated on the component mounting machine 11 side and transmitted to the learning device 31; alternatively, only the lead image may be transmitted to the learning device 31 and displayed on its display screen, and the operator may designate the center position of the tip of the lead 22 in the lead image displayed on the display screen of the learning device 31.
- The teacher data and learning results may be stored in a production line server or in the storage device of the component mounting machine 11, and the learning results may be transmitted to other component mounting machines 11 in the production line, or new lead images may be added to the teacher data for re-learning.
- In addition, edge information of the lead 22 (contour boundary lines, edge gradients, etc.) may be used as the input lead image data, and an employee of the manufacturer of the component mounting machine 11 may learn the teacher data using an in-house learning device and provide the learning result (weighting) to users of the component mounting machine 11. Various other modifications can be made without departing from the gist of the invention.
Abstract
Description
First, the configuration of the entire component mounting machine will be described with reference to FIGS. 1 to 3.
The component mounting machine control program of FIG. 10 controls the operation from holding the electronic component 12 supplied by the component supply device 15 until the lead 22 of the electronic component 12 is inserted into the through hole 23 of the circuit board 16, and is started at the timing when the holding operation of the electronic component 12 begins. When this program is started, first, in step 101, the electronic component 12 supplied by the component supply device 15 is held by the mounting head 13 of the component mounting machine 11 and moved to the imaging position above the component imaging camera 18. The process then proceeds to step 102, where the tip (lower end) of the lead 22 of the electronic component 12 is imaged from below by the camera 18 to obtain a lead image.
The learning processing program of FIG. 11 is executed in step 107 of FIG. 10 when an image processing error or positioning error occurs. When this program is started, first, in step 201, the lead image in which the image processing error or positioning error occurred is displayed on the liquid crystal display device 26; in the next step 202, the program waits while the operator, looking at the lead image displayed on the display device 26, operates the operation unit 27 to designate the center position of the tip of the lead 22 in the lead image.
Claims (11)
- 1. A lead tip position image recognition method in which an image (hereinafter referred to as a "lead image") obtained by imaging with a camera the tip of a lead of an electronic component to be inserted into a through hole of a circuit board is processed by an image recognition device to recognize the center position of the tip of the lead, the method comprising:
a learning step of learning so that, when an operator designates the center position of the tip of the lead in the lead image and the lead image is input to the image recognition device, the output becomes the center position of the tip of the lead designated by the operator; and
a recognition step of inputting to the image recognition device a lead image obtained by imaging the tip of the lead of the electronic component with the camera and outputting the center position of the tip of the lead.
- 2. The lead tip position image recognition method according to claim 1, wherein the image recognition device is configured using a neural network.
- 3. The lead tip position image recognition method according to claim 1 or 2, wherein, when an image processing error in which the center position of the tip of the lead cannot be recognized occurs in the recognition step, the method switches to the learning step and learns so that the output when the lead image in which the image processing error occurred is input to the image recognition device becomes the center position of the tip of the lead designated by the operator.
- 4. The lead tip position image recognition method according to any one of claims 1 to 3, wherein, in the learning step, at least one of rotation, mapping, luminance change, and shape change is applied to the lead image to generate a plurality of lead images, and learning is performed so that the output when each of the plurality of lead images is input to the image recognition device becomes the center position of the tip of the lead designated by the operator.
- 5. The lead tip position image recognition method according to any one of claims 1 to 4, wherein the camera is a component imaging camera that images, from its lower surface side, an electronic component held by a mounting head of a component mounting machine, and wherein, when a positioning error in which the lead of the electronic component cannot be inserted into the through hole of the circuit board occurs in an insertion step of positioning and inserting the lead of the electronic component held by the mounting head into the through hole of the circuit board based on the recognition result of the image recognition device, the method switches to the learning step and learns so that the output when the lead image in which the positioning error occurred is input to the image recognition device becomes the center position of the tip of the lead designated by the operator.
- 6. A lead tip position image recognition system in which an image (hereinafter referred to as a "lead image") obtained by imaging with a camera the tip of a lead of an electronic component to be inserted into a through hole of a circuit board is processed by an image recognition device to recognize the center position of the tip of the lead, wherein
the image recognition device is configured to receive the lead image as input and output the center position of the tip of the lead,
the system comprising: a learning device that learns the relationship between the lead image input to the image recognition device and the center position of the tip of the lead to be output, and transmits the learning result to the image recognition device; and designation means with which an operator designates the center position of the tip of the lead in the lead image,
wherein the learning device learns so that the center position of the tip of the lead in the lead image designated by the operator with the designation means becomes the output when the lead image is input to the image recognition device.
- 7. A lead tip position image recognition system in which an image (hereinafter referred to as a "lead image") obtained by imaging with a camera the tip of a lead of an electronic component to be inserted into a through hole of a circuit board is processed by an image recognition device to recognize the center position of the tip of the lead, wherein
the image recognition device is switchable between a recognition mode in which the lead image is input and the center position of the tip of the lead is output, and a learning mode in which the relationship between the input lead image and the output center position of the tip of the lead is learned,
the system comprising designation means with which an operator designates the center position of the tip of the lead in the lead image in the learning mode,
wherein the image recognition device learns so that the center position of the tip of the lead in the lead image designated by the operator with the designation means in the learning mode becomes the output when the lead image is input.
- 8. The lead tip position image recognition system according to claim 7, wherein the image recognition device is configured using a neural network.
- 9. The lead tip position image recognition system according to claim 7 or 8, wherein, when an image processing error in which the center position of the tip of the lead in the lead image cannot be recognized occurs in the recognition mode, the image recognition device switches to the learning mode and learns so that the output when the lead image in which the image processing error occurred is input becomes the center position of the tip of the lead designated by the operator.
- 10. The lead tip position image recognition system according to any one of claims 7 to 9, wherein, in the learning mode, the image recognition device applies at least one of rotation, mapping, luminance change, and shape change to the lead image to generate a plurality of lead images, and learns so that the output when each of the plurality of lead images is input becomes the center position of the tip of the lead designated by the operator.
- 11. The lead tip position image recognition system according to any one of claims 7 to 10, mounted on a component mounting machine, wherein the camera is a component imaging camera that images, from its lower surface side, an electronic component held by a mounting head of the component mounting machine, and wherein, when a positioning error in which the lead of the electronic component cannot be inserted into the through hole of the circuit board occurs in an insertion step of positioning and inserting the lead of the electronic component held by the mounting head into the through hole of the circuit board based on the recognition result of the image recognition device, the image recognition device switches to the learning mode and learns so that the output when the lead image in which the positioning error occurred is input becomes the center position of the tip of the lead designated by the operator.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/772,825 US10380457B2 (en) | 2015-11-09 | 2015-11-09 | Lead tip position image recognition method and lead tip position image recognition system |
JP2017549889A JP6727228B2 (ja) | 2015-11-09 | 2015-11-09 | リード先端位置画像認識方法及びリード先端位置画像認識システム |
EP15908260.1A EP3376843B1 (en) | 2015-11-09 | 2015-11-09 | Lead end-position image recognition method and lead end-position image recognition system |
CN201580084427.7A CN108353534B (zh) | 2015-11-09 | 2015-11-09 | 引脚前端位置图像识别方法及引脚前端位置图像识别系统 |
PCT/JP2015/081520 WO2017081736A1 (ja) | 2015-11-09 | 2015-11-09 | リード先端位置画像認識方法及びリード先端位置画像認識システム |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2015/081520 WO2017081736A1 (ja) | 2015-11-09 | 2015-11-09 | Lead tip position image recognition method and lead tip position image recognition system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017081736A1 true WO2017081736A1 (ja) | 2017-05-18 |
Family
ID=58694960
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/081520 WO2017081736A1 (ja) | 2015-11-09 | 2015-11-09 | Lead tip position image recognition method and lead tip position image recognition system |
Country Status (5)
Country | Link |
---|---|
US (1) | US10380457B2 (ja) |
EP (1) | EP3376843B1 (ja) |
JP (1) | JP6727228B2 (ja) |
CN (1) | CN108353534B (ja) |
WO (1) | WO2017081736A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019155593A1 (ja) * | 2018-02-09 | 2019-08-15 | 株式会社Fuji | System for creating a learned model for component image recognition and method for creating a learned model for component image recognition |
JP2019159017A (ja) * | 2018-03-09 | 2019-09-19 | Kddi株式会社 | Alignment device for multi-core optical fiber and training-data generation device for the alignment device |
WO2020012628A1 (ja) * | 2018-07-13 | 2020-01-16 | 株式会社Fuji | Foreign matter detection method and electronic component mounting device |
WO2023195173A1 (ja) * | 2022-04-08 | 2023-10-12 | 株式会社Fuji | Component mounting system and image classification method |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10824137B2 (en) * | 2017-06-19 | 2020-11-03 | Panasonic Intellectual Property Management Co., Ltd. | Mounting board manufacturing system |
CN112867906B (zh) * | 2018-10-23 | 2022-11-22 | 株式会社富士 | Component data generation method and component mounter |
US20220174851A1 (en) * | 2019-03-28 | 2022-06-02 | Panasonic Intellectual Property Management Co., Ltd. | Production data creation device and production data creation method |
CN111862196A (zh) * | 2019-04-30 | 2020-10-30 | 瑞典爱立信有限公司 | Method, apparatus, and computer-readable storage medium for detecting through-holes of a flat object |
CN110381721B (zh) * | 2019-07-16 | 2020-07-31 | 深圳市中禾旭精密机械有限公司 | Horizontal intelligent fully automatic high-speed insertion system |
JP7382575B2 (ja) * | 2019-11-05 | 2023-11-17 | パナソニックIpマネジメント株式会社 | Mounting condition estimation device, learning device, mounting condition estimation method, and program |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0545123A (ja) * | 1991-08-20 | 1993-02-23 | Matsushita Electric Ind Co Ltd | Lead tip position detection device |
JPH08148893A (ja) * | 1994-11-16 | 1996-06-07 | Sony Corp | Component insertion device |
JP2003298293A (ja) * | 2002-03-29 | 2003-10-17 | Hitachi High-Tech Instruments Co Ltd | Electronic component mounting apparatus |
JP2004172221A (ja) * | 2002-11-18 | 2004-06-17 | Yamagata Casio Co Ltd | Component mounting apparatus, component mounting method, and program therefor |
JP2012234488A (ja) * | 2011-05-09 | 2012-11-29 | Fuji Mach Mfg Co Ltd | Reference mark model template creation method |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS60103700A (ja) * | 1983-11-11 | 1985-06-07 | 株式会社日立製作所 | Component positioning device |
US4813255A (en) * | 1986-02-21 | 1989-03-21 | Hewlett-Packard Company | System for sensing and forming objects such as leads of electronic components |
US4728195A (en) * | 1986-03-19 | 1988-03-01 | Cognex Corporation | Method for imaging printed circuit board component leads |
US5058177A (en) | 1988-01-04 | 1991-10-15 | Motorola, Inc. | Method for inspection of protruding features |
JPH04105341A (ja) * | 1990-08-24 | 1992-04-07 | Hitachi Ltd | Method and apparatus for detecting lead bending and lifting of semiconductor devices |
US5119436A (en) * | 1990-09-24 | 1992-06-02 | Kulicke And Soffa Industries, Inc | Method of centering bond positions |
JPH08234226A (ja) * | 1995-02-28 | 1996-09-13 | Toshiba Corp | Image processing device for liquid crystal substrate manufacturing |
US6289492B1 (en) * | 1998-12-18 | 2001-09-11 | Cognex Corporation | Methods and apparatuses for defining a region on an elongated object |
US20050008212A1 (en) * | 2003-04-09 | 2005-01-13 | Ewing William R. | Spot finding algorithm using image recognition software |
JP4417779B2 (ja) * | 2004-05-31 | 2010-02-17 | 株式会社日立ハイテクインスツルメンツ | Electronic component mounting apparatus and electronic component mounting method |
WO2011086889A1 (ja) * | 2010-01-12 | 2011-07-21 | 日本電気株式会社 | Feature point selection system, feature point selection method, and feature point selection program |
CN103249295B (zh) * | 2012-02-08 | 2017-07-07 | Juki株式会社 | Electronic component mounting method, electronic component mounting apparatus, and electronic component mounting system |
CN104335692B (zh) | 2012-06-06 | 2017-03-22 | 富士机械制造株式会社 | Component insertion device |
JP6019409B2 (ja) * | 2013-11-13 | 2016-11-02 | パナソニックIpマネジメント株式会社 | Electronic component mounting apparatus and electronic component mounting method |
2015
- 2015-11-09 WO PCT/JP2015/081520 patent/WO2017081736A1/ja active Application Filing
- 2015-11-09 EP EP15908260.1A patent/EP3376843B1/en active Active
- 2015-11-09 CN CN201580084427.7A patent/CN108353534B/zh active Active
- 2015-11-09 US US15/772,825 patent/US10380457B2/en active Active
- 2015-11-09 JP JP2017549889A patent/JP6727228B2/ja active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0545123A (ja) * | 1991-08-20 | 1993-02-23 | Matsushita Electric Ind Co Ltd | Lead tip position detection device |
JPH08148893A (ja) * | 1994-11-16 | 1996-06-07 | Sony Corp | Component insertion device |
JP2003298293A (ja) * | 2002-03-29 | 2003-10-17 | Hitachi High-Tech Instruments Co Ltd | Electronic component mounting apparatus |
JP2004172221A (ja) * | 2002-11-18 | 2004-06-17 | Yamagata Casio Co Ltd | Component mounting apparatus, component mounting method, and program therefor |
JP2012234488A (ja) * | 2011-05-09 | 2012-11-29 | Fuji Mach Mfg Co Ltd | Reference mark model template creation method |
Non-Patent Citations (1)
Title |
---|
See also references of EP3376843A4 * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019155593A1 (ja) * | 2018-02-09 | 2019-08-15 | 株式会社Fuji | System for creating a learned model for component image recognition and method for creating a learned model for component image recognition |
CN111656883A (zh) * | 2018-02-09 | 2020-09-11 | 株式会社富士 | System and method for generating a learned model for component image recognition |
JPWO2019155593A1 (ja) * | 2018-02-09 | 2020-10-22 | 株式会社Fuji | System for creating a learned model for component image recognition and method for creating a learned model for component image recognition |
CN111656883B (zh) * | 2018-02-09 | 2021-07-30 | 株式会社富士 | System and method for generating a learned model for component image recognition |
US11386546B2 (en) | 2018-02-09 | 2022-07-12 | Fuji Corporation | System for creating learned model for component image recognition, and method for creating learned model for component image recognition |
JP2019159017A (ja) * | 2018-03-09 | 2019-09-19 | Kddi株式会社 | Alignment device for multi-core optical fiber and training-data generation device for the alignment device |
WO2020012628A1 (ja) * | 2018-07-13 | 2020-01-16 | 株式会社Fuji | Foreign matter detection method and electronic component mounting device |
JPWO2020012628A1 (ja) * | 2018-07-13 | 2021-02-25 | 株式会社Fuji | Foreign matter detection method and electronic component mounting device |
EP3822619A4 (en) * | 2018-07-13 | 2021-07-21 | Fuji Corporation | METHOD AND DEVICE FOR DETECTING FOREIGN MATERIALS |
JP7050926B2 (ja) | 2018-07-13 | 2022-04-08 | 株式会社Fuji | Foreign matter detection method and electronic component mounting device |
WO2023195173A1 (ja) * | 2022-04-08 | 2023-10-12 | 株式会社Fuji | Component mounting system and image classification method |
Also Published As
Publication number | Publication date |
---|---|
US10380457B2 (en) | 2019-08-13 |
CN108353534B (zh) | 2020-11-24 |
JP6727228B2 (ja) | 2020-07-22 |
EP3376843A1 (en) | 2018-09-19 |
JPWO2017081736A1 (ja) | 2018-08-23 |
EP3376843B1 (en) | 2024-11-13 |
US20180314918A1 (en) | 2018-11-01 |
CN108353534A (zh) | 2018-07-31 |
EP3376843A4 (en) | 2019-02-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017081736A1 (ja) | Lead tip position image recognition method and lead tip position image recognition system | |
JP6608682B2 (ja) | Positioning method, appearance inspection device, program, computer-readable recording medium, and appearance inspection method | |
KR102300951B1 (ko) | Substrate inspection device and method for determining defect types of a screen printer | |
US20210073973A1 (en) | Method and apparatus for component fault detection based on image | |
KR20210008352A (ko) | System and method for detecting defects in imaged items | |
JP2023134688A (ja) | System and method for detecting and classifying patterns in images with a vision system | |
US10664939B2 (en) | Position control system, position detection device, and non-transitory recording medium | |
CN117730347A (zh) | Automatically generating one or more machine vision jobs based on regions of interest (ROIs) in digital images | |
JP6894335B2 (ja) | Component counting device, component counting method, and program | |
JP5769559B2 (ja) | Image processing device, image processing program, robot device, and image processing method | |
US20230162344A1 (en) | Appearance inspection apparatus and appearance inspection method | |
JP2008300456A (ja) | Inspection system for an object under inspection | |
KR20180092033A (ko) | Component registration device | |
JP5960433B2 (ja) | Image processing device and image processing method | |
JP2020134424A (ja) | Spectroscopic inspection method, image processing device, and robot system | |
CN114518079A (zh) | In-hole feature detection system and detection method | |
JP5921190B2 (ja) | Image processing device and image processing method | |
US20240303982A1 (en) | Image processing device and image processing method | |
JP6418739B2 (ja) | Evaluation device, surface mounter, and evaluation method | |
JPWO2016185615A1 (ja) | Component orientation determination data creation device and component orientation determination data creation method | |
JP5778685B2 (ja) | System and method for alignment and inspection of ball grid array devices | |
CN118511204A (zh) | System and method for applying deep-learning tools to machine vision, and interface therefor | |
TWI704630B (zh) | Semiconductor equipment and inspection method thereof | |
JP2024541040A (ja) | System and method for applying deep-learning tools to machine vision, and interface therefor | |
KR20230119928A (ko) | Machine vision inspection system for non-circular container production processes using deep learning | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15908260 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017549889 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15772825 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015908260 Country of ref document: EP |