WO2020175666A1 - Color filter inspection device, inspection device, color filter inspection method, and inspection method - Google Patents
Color filter inspection device, inspection device, color filter inspection method, and inspection method
- Publication number
- WO2020175666A1 (PCT/JP2020/008236)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- defect
- color filter
- classification
- candidate
- captured image
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B5/00—Optical elements other than lenses
- G02B5/20—Filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
Definitions
- Color filter inspection device, inspection device, color filter inspection method, and inspection method
- The present invention relates to a color filter inspection device, an inspection device, a color filter inspection method, and an inspection method.
- Conventionally, on color filter manufacturing lines, a color filter unevenness inspection device is used to detect non-uniform regions (unevenness) in, for example, the aperture area or the thickness of the colored layer, which are one type of defect that arises on a color filter substrate when manufacturing conditions fluctuate.
- The color filter unevenness inspection device detects unevenness by performing image processing on a captured image of the color filter substrate. There is a correlation between the type of unevenness (attribute items determined by shape, area, shading, and so on) and the process that caused it. When the device detected unevenness, an operator judged the type of unevenness from its image and adjusted the manufacturing conditions of the causative process to suppress its occurrence.
- However, skill is required for an operator to determine the type of unevenness from an image of the unevenness, and it is very difficult to ensure that the determination is made properly when the operator changes, so automation is desired.
- An image classification device that can be used for visual inspection of semiconductor substrates and the like is disclosed in JP 2017-54239 A. This technique extracts feature quantities from captured images and classifies defects, using neural networks, decision trees, discriminant analysis, and the like with teacher data.
- However, the technology disclosed in JP 2017-54239 A detects defects based on a difference image between the captured image and a reference image. Unevenness of a color filter often appears as a very subtle difference in shading, and a method using a difference image carries a high risk of erroneous detection, making it unsuitable for detecting color filter unevenness. Moreover, with that technology the feature quantities of a defect are input to a classifier to determine the defect type, so it is necessary to decide which feature quantities to use when constructing the classifier, which makes the technology difficult to use.
- An object of the present invention is to provide a color filter inspection device, an inspection device, a color filter inspection method, and an inspection method capable of simply and accurately detecting defects due to unevenness of a color filter and classifying those defects.
- The present invention is a color filter inspection apparatus comprising: a defect detection unit that detects a defect candidate based on a captured image of a color filter; and a defect output unit that determines whether the defect candidate is not a defect by comparing at least one physical quantity of the detected defect candidate with a threshold for that physical quantity, and outputs defect candidates other than those determined not to be defects to a neural network for defect classification.
- The present invention is also a color filter inspection device comprising: a defect detection unit that detects a defect candidate based on a captured image of a color filter; a defect output unit that outputs the detected defect candidate to a neural network for defect classification; and a classification determination unit that determines the result of the defect classification output from the neural network based on a first analysis result of the captured image of the color filter.
- The first analysis result may include occurrence position information indicating the occurrence position of defects for each classification.
- The color filter inspection device may further include a defect determination unit that determines whether or not a defect candidate for which the result of the defect classification has been determined is a defect, based on a second analysis result of the captured image of the color filter.
- The second analysis result may include occurrence history information indicating the defect occurrence history for each classification.
- The neural network may include a convolutional layer that generates a first feature map from the captured image by convolution processing, a pooling layer that performs pooling processing to reduce the size or variation of the first feature map and generate a second feature map, and an output layer that outputs the result of the defect classification.
- The color filter inspection device may further include an image cutout unit that cuts out a range including the defect candidate detected by the defect detection unit from the captured image.
- The defect detection unit may obtain a range including a defect candidate from data obtained by first-order differentiation of the captured image in a plurality of directions.
- The present invention is also an inspection apparatus comprising: a defect detection unit that detects a defect candidate based on a captured image of an object; and a defect output unit that determines whether the defect candidate is not a defect by comparing at least one physical quantity of the detected defect candidate with a threshold for that physical quantity, and outputs defect candidates other than those determined not to be defects to a neural network for defect classification.
- The present invention is also an inspection apparatus comprising: a defect detection unit that detects a defect candidate based on a captured image of an object; a defect output unit that outputs the detected defect candidate to a neural network for defect classification; and a classification determination unit that determines the result of the defect classification output from the neural network based on a first analysis result of the captured image of the object.
- The present invention is also a color filter inspection method comprising: a step in which a defect detection unit detects a defect candidate based on a captured image of a color filter; and a step in which a defect output unit determines whether the defect candidate is not a defect by comparing at least one physical quantity of the detected defect candidate with a threshold for that physical quantity, and outputs defect candidates other than those determined not to be defects to a neural network for defect classification.
- The present invention is also a color filter inspection method comprising: a step in which a defect detection unit detects a defect candidate based on a captured image of a color filter; a step in which a defect output unit outputs the detected defect candidate to a neural network for defect classification; and a step in which a classification determination unit determines the result of the defect classification output from the neural network based on a first analysis result of the captured image of the color filter.
- The method may further comprise a step in which a photographing unit photographs the color filter, a convolution step in which the neural network generates a first feature map from the captured image by convolution processing, and a pooling step in which pooling processing reduces the size or variation of the first feature map to generate a second feature map.
- The present invention is also an inspection method comprising: a step in which a defect detection unit detects a defect candidate based on a captured image of an object; and a step in which a defect output unit determines whether the defect candidate is not a defect by comparing at least one physical quantity of the detected defect candidate with a threshold for that physical quantity, and outputs defect candidates other than those determined not to be defects to a neural network for defect classification.
- The present invention is also an inspection method comprising: a step in which a defect detection unit detects a defect candidate based on a captured image of an object; a step in which a defect output unit outputs the detected defect candidate to a neural network for defect classification; and a step in which a classification determination unit determines the result of the defect classification output from the neural network based on a first analysis result of the captured image of the object.
- According to the present invention, it is possible to provide a color filter inspection device, an inspection device, a color filter inspection method, and an inspection method capable of easily and accurately detecting defects of a color filter and classifying those defects.
- FIG. 1 is a diagram showing a first embodiment of a color filter inspection device 1 according to the present invention.
- FIG. 2 is a flow chart showing the flow of operations of the color filter inspection device 1.
- FIG. 3 is a diagram showing the configuration of the neural network of the defect classification unit 15 and the learning process.
- FIG. 4 is a diagram for explaining average pooling.
- FIG. 5 is a diagram showing an example in which a differential image is converted into a binary image.
- FIG. 6 is a diagram showing an example in which a region formed of a set of adjacent pixels having a first pixel value is extracted from the binary image of FIG. 5.
- FIG. 7 is a diagram showing a second embodiment of the color filter inspection device 1 according to the present invention.
- FIG. 8 is a flowchart showing the flow of operations of the color filter inspection apparatus 1 according to the second embodiment.
- FIG. 1 is a diagram showing an embodiment of a color filter inspection device 1 according to the present invention.
- The color filter inspection apparatus 1 is an apparatus for inspecting manufacturing defects in the color filter manufacturing process.
- Here, inspection for unevenness of a color filter, which is an example of the object, is described as an example, but the apparatus may also inspect for defects other than unevenness.
- The color filter inspection apparatus 1 includes an imaging unit 11, a defect detection unit 12, an image cutout unit 13, an input unit 14, a defect classification unit 15, and a learning model construction unit 17.
- The defect detection unit 12, the image cutout unit 13, the input unit 14, the defect classification unit 15, and the learning model construction unit 17 are configured by incorporating a dedicated program into a computer, and the program realizes the functions of each component.
- The imaging unit 11 includes an illumination, a camera, a transport device, and the like (not shown), photographs the color filter to be inspected, and acquires a captured image.
- The captured image taken by the imaging unit 11 is sent to the defect detection unit 12 and the image cutout unit 13.
- The camera of the imaging unit may be a line sensor camera or an area sensor camera.
- The imaging unit 11 may be configured to receive light reflected from the color filter or light transmitted through it.
- The defect detection unit 12 detects, by image processing, defect candidates that may be unevenness in the image.
- As the unevenness detection method performed by the defect detection unit 12 of the present embodiment, the method disclosed in Japanese Patent No. 4363953 is used. That is, the defect detection unit 12 performs first-order differentiation processing on the captured image using a spatial filter to determine regions where unevenness is expected to exist, and detects regions where unevenness may exist by using the spatial rate of change of the gradation value in each region as an evaluation value indicating the degree of unevenness.
- The image cutout unit 13 cuts out a predetermined range including the defect candidate detected by the defect detection unit 12 from the captured image to obtain an unevenness peripheral image.
- The image cutout unit 13 sends the cut-out unevenness peripheral image and the captured image to the input unit 14.
- The input unit 14 inputs the unevenness peripheral image and the captured image to the input layer 151 of the defect classification unit 15.
- By detecting defect candidates with the defect detection unit 12 and cutting out images with the image cutout unit 13, the processing load of the defect classification unit 15 described later can be reduced.
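- As a concrete illustration of the cut-out step, the following is a minimal Python sketch, assuming the defect candidate is represented by a centre coordinate and that a 64 × 64 pixel patch (the input size used by the neural network below) is wanted; the function name and interface are hypothetical.

```python
# Minimal sketch of the cut-out step (hypothetical interface).
import numpy as np

def cut_out_patch(image: np.ndarray, cy: int, cx: int, size: int = 64) -> np.ndarray:
    """Return a size x size patch centred on the defect candidate, clipped to the image."""
    half = size // 2
    top, left = max(cy - half, 0), max(cx - half, 0)
    bottom = min(top + size, image.shape[0])
    right = min(left + size, image.shape[1])
    return image[top:bottom, left:right]

captured = np.random.rand(1024, 1024)                 # stand-in for the captured image
peripheral = cut_out_patch(captured, cy=200, cx=350)  # unevenness peripheral image
```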
- The defect classification unit 15 is composed of a neural network; it analyzes the unevenness peripheral image by artificial intelligence, classifies the type of unevenness, and outputs the result as a defect classification.
- The defect classification unit 15 has a learning model 16 and classifies unevenness (classifies defects) by comparing the unevenness peripheral image with the learning model 16.
- The unevenness of a color filter can take various specific forms depending on its cause.
- Since the detection by the defect detection unit 12 may include erroneous detections, it is possible to classify such erroneous detections as well.
- The defect classification unit 15 includes the learning model 16.
- The learning model 16 is constructed by the learning model construction unit 17 learning the learning data 18 (a group of images for each type of unevenness) in advance.
- The learning model construction unit 17 constructs a learning model from the learning data 18. As described above, before the color filter inspection is executed, the learning model construction unit 17 constructs a learning model 19 by learning the learning data 18 (a group of images for each type of unevenness) in advance, and then transfers it to the learning model 16 of the defect classification unit 15.
- The learning model 19 is composed of the parameter sets used in the neural network.
- FIG. 2 is a flow chart showing the operation flow of the color filter inspection device 1.
- In step (hereinafter referred to as "S") S11, the imaging unit 11 acquires a captured image by photographing the color filter.
- In S12, the defect detection unit 12 detects defect candidates.
- Specifically, the defect detection unit 12 performs first-order differentiation processing using a spatial filter on the image acquired in S11 to extract regions where unevenness is expected.
- The image cutout unit 13 then cuts out a predetermined range including each defect candidate from the captured image to obtain an unevenness peripheral image.
- The input unit 14 inputs the unevenness peripheral image (cut-out image) to the input layer of the defect classification unit 15.
- In S15, the defect classification unit 15 classifies defects from the unevenness peripheral image (cut-out image).
- The method of detecting defect candidates in S12 will now be described.
- A differential direction is determined on the captured image, and a differential image is created by obtaining the first-order derivative, in that direction, of the grayscale value of each pixel constituting the image.
- FIG. 5 is a diagram showing an example of converting a differential image into a binary image.
- A predetermined threshold is set, and in the differential image a first pixel value is given to pixels whose value is at or above the threshold and a second pixel value is given to pixels whose value is below the threshold.
- The differential image is thereby converted into a binary image as shown in FIG. 5.
- FIG. 6 is a diagram showing an example of extracting a region consisting of a set of adjacent pixels having the first pixel value from the binary image of FIG. 5.
- A region consisting of a set of adjacent pixels having the first pixel value is extracted from this binary image, as shown in FIG. 6.
- The pixel group forming this region is treated as a set of a plurality of one-dimensional pixel arrays along the differential direction, and for each one-dimensional pixel array the difference in gradation value between the pixels located at its two ends is divided by the length of the array to obtain an evaluation value for that array.
- The representative value of the evaluation values obtained for the plurality of one-dimensional pixel arrays is used as the evaluation value indicating the unevenness of the region. The higher the evaluation value, the more conspicuous the unevenness is against its surroundings.
- The evaluation value is obtained for each differential direction, and if a predetermined condition is satisfied, it is evaluated that unevenness exists in the evaluation target region.
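- The following Python sketch illustrates this evaluation under several assumptions: a single horizontal differential direction, a user-supplied binarisation threshold, and the mean as the representative value (the text leaves these choices open); the function name is hypothetical.

```python
# Sketch of the unevenness evaluation along one (horizontal) differential direction.
import numpy as np
from scipy import ndimage

def unevenness_scores(gray: np.ndarray, diff_threshold: float):
    # First-order differential of the grayscale values along the x direction.
    diff = np.abs(np.diff(gray.astype(float), axis=1))
    # Binarise: pixels at or above the threshold receive the first pixel value (True).
    binary = diff >= diff_threshold
    # Extract regions made of adjacent pixels having the first pixel value (Fig. 6).
    labels, n_regions = ndimage.label(binary)
    scores = []
    for region_id in range(1, n_regions + 1):
        ys, xs = np.nonzero(labels == region_id)
        row_scores = []
        for y in np.unique(ys):
            xr = xs[ys == y]                       # one-dimensional pixel array along x
            length = xr.max() - xr.min() + 1
            grad_diff = abs(float(gray[y, xr.max()]) - float(gray[y, xr.min()]))
            row_scores.append(grad_diff / length)  # spatial change rate of the gradation value
        scores.append(float(np.mean(row_scores)))  # representative value for the region
    return scores                                  # higher score -> more conspicuous unevenness
```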
- Since the processing of S12 can accurately evaluate the presence or absence of unevenness, regions containing defect candidates can be accurately passed to the neural network of the defect classification unit 15.
- The defect classification in S15 will now be described together with a more specific configuration of the defect classification unit 15.
- FIG. 3 is a diagram showing the structure of the neural network of the defect classification unit 15 and the learning process.
- The neural network of the defect classification unit 15 of this embodiment has an input layer 151, a convolutional layer 152, a pooling layer 153, a fully connected layer 154, and an output layer 155.
- The input layer 151 is a layer that receives the unevenness peripheral image from the input unit 14.
- The convolutional layer 152 performs convolution processing using a coefficient matrix of arbitrary size.
- In this embodiment, convolution processing is performed using a 3 × 3 coefficient matrix, and correction is performed using a bias value.
- In the convolution processing, a small 3 × 3 pixel image is first extracted from the 64 × 64 pixel input image, a convolution calculation is performed between this small image and the 3 × 3 coefficient matrix, the result is corrected with the bias value, and a ReLU (Rectified Linear Unit) activation is applied to generate the first feature map.
- The pooling layer 153 performs pooling processing on the first feature map to obtain a second feature map.
- The pooling processing reduces the size or variation of the first feature map generated in the convolutional layer 152 to generate the second feature map. For example, a 2 × 2 pixel block is extracted from the first feature map and the maximum or average brightness of the block is calculated. Specifically, average pooling, maximum pooling, or the like can be used.
- FIG. 4 is a diagram illustrating average pooling.
- In average pooling, the size is reduced by dividing the map into 2 × 2 pixel pooling areas and averaging the luminance values in each area.
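- The convolution, ReLU, and average-pooling steps can be illustrated with the short numpy sketch below; the 3 × 3 coefficient matrix, bias value, and 64 × 64 input size follow the text, while the concrete coefficient values are placeholders and only a single feature map is shown.

```python
# Sketch of one convolution + ReLU pass and one average-pooling pass (single map).
import numpy as np
from scipy import ndimage

def conv_relu(patch: np.ndarray, kernel: np.ndarray, bias: float) -> np.ndarray:
    """Convolve with a 3x3 coefficient matrix, correct with a bias value, apply ReLU."""
    fmap = ndimage.convolve(patch.astype(float), kernel, mode="nearest") + bias
    return np.maximum(fmap, 0.0)                # ReLU (Rectified Linear Unit)

def average_pooling(fmap: np.ndarray, k: int = 2) -> np.ndarray:
    """Average pooling over non-overlapping k x k areas (Fig. 4)."""
    h, w = fmap.shape[0] // k * k, fmap.shape[1] // k * k
    blocks = fmap[:h, :w].reshape(h // k, k, w // k, k)
    return blocks.mean(axis=(1, 3))

patch = np.random.rand(64, 64)                  # cut-out image around a defect candidate
kernel = np.random.randn(3, 3)                  # learned 3x3 coefficient matrix (placeholder)
first_map = conv_relu(patch, kernel, bias=0.1)  # first feature map
second_map = average_pooling(first_map)         # second feature map (32 x 32)
```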
- The fully connected layer 154 in FIG. 3 combines the 64 second feature maps to create fully connected data.
- The output layer 155 applies a parameter set (weighting parameters, bias parameters) to the fully connected data and outputs the classification result among nine types of defects using an activation function.
- The parameter set (weighting parameters, bias parameters) is applied to all 64 second feature maps.
- The parameter set (weighting parameters, bias parameters) is set using the backpropagation method based on the learning data 18. The output error is calculated from the learning data 18, the parameter set is updated by gradient descent using a least-squares error function, and the parameter values (weighting parameters, bias parameters) are determined by repeating this learning multiple times.
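- As a toy illustration of the output layer and the parameter update, the sketch below applies a weighting/bias parameter set to the flattened second feature maps and performs one gradient-descent step; the layer sizes and learning rate are assumptions, the common softmax/cross-entropy gradient is used in place of the least-squares form mentioned above, and a real implementation would also backpropagate through the convolutional layer.

```python
# Toy output layer + one gradient-descent update (assumed sizes, simplified loss).
import numpy as np

rng = np.random.default_rng(0)
n_features, n_classes = 64 * 32 * 32, 9     # 64 second feature maps (32x32), 9 defect types
W = rng.normal(scale=0.01, size=(n_classes, n_features))   # weighting parameters
b = np.zeros(n_classes)                                     # bias parameters

def forward(x_flat: np.ndarray) -> np.ndarray:
    logits = W @ x_flat + b
    e = np.exp(logits - logits.max())
    return e / e.sum()                      # softmax activation -> classification result

def sgd_step(x_flat: np.ndarray, teacher: np.ndarray, lr: float = 0.01) -> None:
    """One update of the parameter set from one item of the learning data 18."""
    global W, b
    y = forward(x_flat)
    err = y - teacher                       # output error (softmax/cross-entropy gradient)
    W -= lr * np.outer(err, x_flat)         # gradient-descent update of the weights
    b -= lr * err                           # gradient-descent update of the biases
```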
- In this embodiment, the network is configured with one convolutional layer and one pooling layer.
- Alternatively, a network including a plurality of convolutional layers and pooling layers may be configured, and the convolution processing and the pooling processing may be repeated a predetermined number of times.
- In this embodiment, the coefficient matrix used in the convolutional layer is learned in advance and its values are fixed, but the coefficient matrix of the convolutional layer and the parameter set (weighting parameters, bias parameters) of the fully connected data may be learned at the same time.
- A neural network of this kind is generally called a CNN (Convolutional Neural Network).
- The configuration example of the neural network described above is merely an example and can be changed as appropriate.
- The color filter inspection apparatus 1 of the present embodiment can accurately classify defects by using the defect classification unit 15 based on a neural network. It therefore becomes possible to detect defects in the manufacturing process early and to improve the manufacturing process efficiently.
- For example, a specific type of unevenness caused by foreign matter in the step of applying a photosensitive material to the substrate can be recognized as such by a skilled worker, but even this particular type of unevenness varies greatly in shape, area, and gradation. It is therefore difficult to define rules that discriminate such unevenness based on shape, area, density, and the like; by using the defect classification unit 15 based on a neural network, however, this specific unevenness can be discriminated accurately.
- FIG. 7 is a diagram showing a second embodiment of the color filter inspection device 1 according to the present invention.
- The color filter inspection apparatus 1 of the second embodiment further includes a defect classification unit 21, which is an example of a defect output unit, a classification determination unit 22, and a defect determination unit 23.
- The defect classification unit 21, the classification determination unit 22, and the defect determination unit 23 are configured by incorporating a dedicated program into the computer, and the program realizes the functions of each component.
- The defect detection unit 12 detects defect candidates that may be unevenness, based on the captured image of the color filter taken by the imaging unit 11.
- The defect detection unit 12 may detect defect candidates directly from the captured image, but to improve the accuracy of defect determination it is preferable to detect them from an image obtained by preprocessing the captured image.
- The defect classification unit 21 calculates at least one physical quantity for each defect candidate detected by the defect detection unit 12 and compares the calculated physical quantity with a threshold for that quantity to determine whether the defect candidate is not a defect.
- The defect classification unit 21 outputs the defect candidates other than those determined not to be defects to the neural network, that is, to the defect classifier 15, for defect classification. In other words, the defect classification unit 21 does not output the defect candidates determined not to be defects to the defect classifier 15.
- The classification determination unit 22 confirms the result of the defect classification output from the defect classifier 15, based on a first analysis result of the captured image of the color filter acquired in advance. Specifically, the classification determination unit 22 confirms the defect classification by comparing it with occurrence position information, an example of the first analysis result, which indicates the occurrence position of defects for each classification.
- The defect determination unit 23 determines whether or not a defect candidate whose defect classification has been confirmed is a defect, based on a second analysis result of the captured image of the color filter acquired in advance. Specifically, the defect determination unit 23 determines whether the defect candidate whose classification has been confirmed is a defect by comparing it with occurrence history information, an example of the second analysis result, which indicates the defect occurrence history for each classification.
- FIG. 8 is a flow chart showing a flow of operations of the color filter inspection device 1 in the second embodiment.
- First, the imaging unit 11 acquires a captured image obtained by photographing the color filter.
- After the captured image is obtained, in S22 the defect detection unit 12 performs preprocessing on the captured image to improve the accuracy of defect determination.
- The preprocessing is, for example, a series of processes including shading processing, smoothing processing, padding processing, and black-and-white inversion processing.
- The shading processing removes density unevenness from an image that has density unevenness.
- The smoothing processing blurs the captured image in order to remove noise other than defect candidates contained in it.
- For the smoothing processing, for example, a Gaussian filter can be used.
- The padding processing is required for the convolution calculation performed when detecting defect candidates, described later; it copies the luminance values of pixels located at the edge of the captured image to the outside of the image. Padding is performed to make up for the lack of surrounding pixels when a pixel located at the edge of the captured image is taken as the pixel of interest, that is, the center pixel of, for example, a 3 × 3 convolution.
- The black-and-white inversion processing inverts the captured image in black and white when black defects are the target of determination, and does not invert it when white defects are the target of determination.
- Whether the determination target is a black defect or a white defect depends on the type of defect to be determined. With the black-and-white inversion processing, a white image can be detected and determined as a defect candidate regardless of the defect type, which simplifies the defect determination.
- The shading processing, smoothing processing, padding processing, and black-and-white inversion processing constituting the preprocessing may be reordered as appropriate.
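- A possible implementation of this preprocessing chain is sketched below; the filter sizes and the shading estimate (a large-sigma Gaussian background) are illustrative choices, not values given in the text.

```python
# Possible preprocessing chain (illustrative parameter values).
import numpy as np
from scipy import ndimage

def preprocess(gray: np.ndarray, detect_black_defects: bool = True) -> np.ndarray:
    img = gray.astype(float)
    # Shading processing: remove slow density unevenness by subtracting a blurred background.
    background = ndimage.gaussian_filter(img, sigma=50)
    img = img - background + background.mean()
    # Smoothing processing: Gaussian filter to suppress noise other than defect candidates.
    img = ndimage.gaussian_filter(img, sigma=1)
    # Black-and-white inversion: invert only when black defects are the determination target.
    if detect_black_defects:
        img = 255.0 - img
    # Padding processing: copy the luminance values of edge pixels outward so that a
    # 3x3 convolution is defined even when an edge pixel is the pixel of interest.
    return np.pad(img, pad_width=1, mode="edge")
```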
- After performing the preprocessing, in S23 the defect detection unit 12 performs defect candidate detection processing to detect defect candidates from the preprocessed captured image.
- The defect candidate detection processing is composed of, for example, a series of processes including difference filter processing by convolution, binarization processing, horizontal closing processing, vertical closing processing, horizontal opening processing, vertical opening processing, and labeling processing.
- The difference filter processing by convolution obtains the magnitude of the change in brightness value at each point, that is, at each pixel, of the preprocessed captured image.
- At a defect candidate the brightness value changes greatly compared with its surroundings, so the defect candidate can be detected by obtaining the magnitude of the change in brightness value with the difference filter.
- Each pixel of the preprocessed captured image is set as the pixel of interest in turn, and a convolution calculation is performed between the brightness values of the 3 × 3 pixels centered on the pixel of interest and a 3 × 3 coefficient matrix of the filter.
- By the convolution calculation, the difference value of the brightness between the pixel of interest and its adjacent pixels is obtained. The greater the difference in brightness, the larger the difference value, and a location with a large difference value can be detected as a defect candidate.
- The binarization processing generates, from the captured image after the difference filter processing, a binary image in which pixels whose difference value is at or above a threshold are white and pixels below the threshold are black. Through the binarization processing, areas with large changes in brightness value are detected as white images, that is, as defect candidates, and areas with small changes in brightness value become black images and are excluded from the defect candidates.
- The horizontal closing processing dilates and then erodes the white areas of the binary image in the horizontal direction.
- The vertical closing processing dilates and then erodes the white areas of the binary image in the vertical direction.
- The horizontal opening processing erodes and then dilates the white areas of the binary image in the horizontal direction.
- The vertical opening processing erodes and then dilates the white areas of the binary image in the vertical direction.
- The labeling processing assigns a number to each pixel in the white areas of the binary image.
- The same number is assigned to the pixels belonging to a single connected white region, and different numbers are assigned to the pixels of different white regions.
- The difference filter processing, binarization processing, horizontal closing processing, vertical closing processing, horizontal opening processing, vertical opening processing, and labeling processing constituting the defect candidate detection processing may be reordered as appropriate.
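- The defect candidate detection chain can be sketched as follows; the difference-filter kernel, binarisation threshold, and structuring-element sizes are placeholder values chosen for illustration.

```python
# Sketch of the defect candidate detection chain (placeholder kernel, threshold, sizes).
import numpy as np
from scipy import ndimage

def detect_candidates(pre: np.ndarray, threshold: float = 10.0):
    # Difference filter by convolution: magnitude of the brightness change at each pixel.
    kernel = np.array([[0, -1, 0],
                       [-1, 4, -1],
                       [0, -1, 0]], dtype=float)
    diff = np.abs(ndimage.convolve(pre, kernel, mode="nearest"))
    # Binarization: pixels at or above the threshold become white (candidate pixels).
    binary = diff >= threshold
    # Horizontal and vertical closing (dilation followed by erosion).
    binary = ndimage.binary_closing(binary, structure=np.ones((1, 5), bool))
    binary = ndimage.binary_closing(binary, structure=np.ones((5, 1), bool))
    # Horizontal and vertical opening (erosion followed by dilation).
    binary = ndimage.binary_opening(binary, structure=np.ones((1, 3), bool))
    binary = ndimage.binary_opening(binary, structure=np.ones((3, 1), bool))
    # Labeling: assign a number to each connected white region.
    labels, n_candidates = ndimage.label(binary)
    return labels, diff, n_candidates
```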
- The defect classification unit 21 then performs defect classification processing individually on the labeled defect candidates with different numbers, that is, on each white region.
- The defect classification processing calculates at least one physical quantity for each detected defect candidate and compares the calculated physical quantity with a threshold for that quantity to determine whether the defect candidate is not a defect, and outputs the defect candidates other than those determined not to be defects to the defect classifier 15.
- The defect classification processing is composed of a series of processes including horizontal width calculation, vertical width calculation, area calculation, brightness-difference peak value calculation, brightness-difference average value calculation, slope rate calculation, area ratio calculation, streak rate calculation, circularity calculation, threshold determination, and output processing.
- The horizontal width calculation processing calculates the horizontal width of the defect candidate as one of its physical quantities. For example, the horizontal width is obtained as the distance between the two points at which a straight line passing through the center of gravity of the defect candidate intersects both ends of the defect candidate in the X direction.
- The vertical width calculation processing calculates the vertical width of the defect candidate as one of its physical quantities.
- For example, the vertical width is obtained as the distance between the two points at which a straight line passing through the center of gravity of the defect candidate intersects both ends of the defect candidate in the Y direction.
- The area calculation processing calculates the area of the defect candidate as one of its physical quantities.
- The brightness-difference peak value calculation processing calculates, as one of the physical quantities of the defect candidate, the maximum difference value of the defect candidate obtained by the difference filter processing.
- The brightness-difference average value calculation processing calculates the average of the brightness differences of the defect candidate as one of its physical quantities.
- The slope rate calculation processing calculates the amount of variation of the brightness difference of the defect candidate as one of its physical quantities.
- The area ratio calculation processing calculates, as one of the physical quantities of the defect candidate, the ratio of the white area (the defect candidate) to the black area within a rectangular region defined so as to contain the entire defect candidate and circumscribe its outermost ends in the X and Y directions.
- The streak rate calculation processing regards the defect candidate as a single rectangular streak and calculates, as one of its physical quantities, the ratio between the long side and the short side of the streak. More specifically, it calculates the ratio between the long side and the short side of a rectangular region defined so as to contain the entire defect candidate and circumscribe its outermost ends in the X and Y directions.
- The circularity calculation processing calculates the circularity of the defect candidate as one of its physical quantities. Where S is the area of the defect candidate and L is its perimeter, the circularity may be calculated as 4πS/L².
- The threshold determination processing is carried out individually for each differently numbered defect candidate; each physical quantity calculated by the various calculation processes described above is compared with a determination threshold preset for that physical quantity, to determine whether or not the defect candidate is not a defect.
- The differently numbered defect candidates are thereby sorted into defect candidates determined not to be defects and the remaining defect candidates. The remaining candidates have not yet been determined to be defects at this point; they may therefore include not only candidates that will finally be determined to be defects but also candidates that may yet turn out not to be defects.
- The output processing outputs the defect candidates other than those determined not to be defects by the threshold determination processing to the defect classifier 15, and does not output the defect candidates determined not to be defects to the defect classifier 15.
- As a result, only the defect candidates other than those determined not to be defects become targets of defect classification using the neural network, and the defect candidates determined not to be defects are excluded from that classification.
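- The threshold screening on per-candidate physical quantities, and the resulting decision of which candidates to pass to the defect classifier 15, might look like the sketch below; only a few of the quantities listed above are computed, and the judgement thresholds are hypothetical.

```python
# Sketch of the threshold screening; thresholds and the screening rule are assumptions.
import numpy as np
from scipy import ndimage

def screen_candidates(labels: np.ndarray, diff: np.ndarray, n_candidates: int):
    survivors = []                                    # candidates passed to the defect classifier 15
    for k in range(1, n_candidates + 1):
        mask = labels == k
        ys, xs = np.nonzero(mask)
        width = int(xs.max() - xs.min() + 1)          # horizontal width
        height = int(ys.max() - ys.min() + 1)         # vertical width
        area = int(mask.sum())                        # area
        peak = float(diff[mask].max())                # brightness-difference peak value
        streak = max(width, height) / max(min(width, height), 1)    # streak rate
        boundary = mask ^ ndimage.binary_erosion(mask)
        perimeter = max(int(np.count_nonzero(boundary)), 1)
        circularity = 4.0 * np.pi * area / perimeter ** 2            # 4*pi*S / L^2
        # Candidates whose quantities all stay below these assumed limits are judged
        # "not a defect" and are NOT output to the neural network.
        if not (area < 5 and peak < 15.0 and streak < 2.0):
            survivors.append(k)
    return survivors
```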
- The defect classification that follows classifies the defect candidates other than those determined not to be defects by the threshold determination processing, using the neural network as in the first embodiment.
- After the defect classification, in S26 the classification determination unit 22 performs classification confirmation processing to confirm the defect classification of the defect candidates output from the defect classifier 15.
- The classification determination unit 22 compares the defect classification with the occurrence position information indicating the occurrence position of defects for each classification, and confirms the defect classification when the position of the classified defect candidate matches an occurrence position indicated in the occurrence position information for the corresponding classification. Conversely, the classification determination unit 22 does not confirm the defect classification when the position of the classified defect candidate does not match an occurrence position indicated in the occurrence position information for the corresponding classification.
- The classification determination unit 22 may perform the classification confirmation processing by further using, in addition to the occurrence position information, a reliability threshold for the defect classification set individually for each classification.
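- A minimal sketch of the classification confirmation step is shown below, assuming the first analysis result is stored as one bounding box of typical occurrence positions per classification plus an optional per-classification reliability threshold; this data layout is hypothetical.

```python
# Sketch of the classification confirmation; the per-class data layout is hypothetical.
def confirm_classification(candidate_xy, predicted_class, reliability,
                           occurrence_boxes, reliability_thresholds):
    """Return True when the defect classification can be confirmed."""
    x, y = candidate_xy
    x0, y0, x1, y1 = occurrence_boxes[predicted_class]      # occurrence position information
    position_matches = x0 <= x <= x1 and y0 <= y <= y1
    reliable = reliability >= reliability_thresholds.get(predicted_class, 0.0)
    # Confirm only when the candidate lies at an occurrence position known for the
    # corresponding classification (and, optionally, is reliable enough).
    return position_matches and reliable
```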
- After the classification confirmation processing, in S27 the defect determination unit 23 performs defect determination processing to determine whether or not a defect candidate whose defect classification has been confirmed is a defect.
- The defect determination unit 23 determines whether or not the defect candidate is a defect by comparing the defect candidate whose classification has been confirmed with the occurrence history information indicating the defect occurrence history for each classification.
- The defect determination unit 23 determines that the defect candidate is a defect when the defect candidate whose classification has been confirmed matches the occurrence history indicated in the occurrence history information for the corresponding classification.
- The occurrence history may be, for example, the number of consecutive occurrences of the defect candidate.
- The defect determination unit 23 determines that the defect candidate is not a defect when the defect candidate whose classification has been confirmed does not match the occurrence history indicated in the occurrence history information for the corresponding classification. In addition, the defect determination unit 23 determines that a defect candidate whose classification has not been confirmed is a defect that does not belong to any classification.
- The defect determination unit 23 may perform the defect determination processing by further using, in addition to the occurrence history information, the defect intensity indicating the density of the defect, that is, the brightness difference.
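- The defect determination step might be sketched as follows, assuming the second analysis result is a required number of consecutive occurrences per classification and that the defect intensity (brightness difference) may optionally be checked as well; the data layout is hypothetical.

```python
# Sketch of the defect determination; the history/intensity data layout is hypothetical.
def judge_defect(predicted_class, consecutive_count, intensity,
                 required_history, required_intensity=None):
    """Return True when the candidate is finally determined to be a defect."""
    if predicted_class is None:
        # A candidate whose classification was not confirmed is treated as a defect
        # that does not belong to any classification.
        return True
    if consecutive_count < required_history.get(predicted_class, 1):
        return False        # occurrence history does not match -> not a defect
    if required_intensity is not None and intensity < required_intensity:
        return False        # optional check on defect intensity (brightness difference)
    return True
```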
- By combining the defect classification using the neural network with a defect determination method that uses criteria other than the neural network, it is possible to perform defect determination with higher accuracy than when only the neural network is used.
- The defect classification unit 15 may directly process the captured image of the color filter. In this case, the defect classification unit 15 outputs the defect classification from the entire image in the neural network.
- In the embodiments above, an example has been described in which the defect detection unit 12 detects defect candidates using the method disclosed in Japanese Patent No. 4363953.
- However, the present invention is not limited to this, and defect candidates may be detected using any conventionally known defect detection method.
- The present invention can also be applied to defect inspection of objects other than color filters.
- For example, the present invention can be applied to the inspection of defects in objects such as film, glass, silicon, and metal whose appearance, whether coated or self-luminous, is to be inspected.
- The appearance of the inspection object is not limited to one that can be observed under visible light, and may be one that can be observed under infrared or ultraviolet light.
- The present invention can also be applied to the inspection of defects in medical radiographic images.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Chemical & Material Sciences (AREA)
- Biochemistry (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Optics & Photonics (AREA)
- Analytical Chemistry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Image Analysis (AREA)
- Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
- Spectrometry And Color Measurement (AREA)
- Eye Examination Apparatus (AREA)
- Image Processing (AREA)
Abstract
A color filter inspection device (1) is provided with: a defect detection unit (12) that detects defect candidates on the basis of a captured image of a color filter; and a defect output unit (21) that determines whether each defect candidate is a defect by comparing at least one physical quantity of the detected defect candidate with a threshold of the physical quantity, and outputs defect candidates other than the defect candidates determined as not defects to a neural network for defect classification.
Description
\¥0 2020/175666 1 ?01/^2020/008236 \¥0 2020/175666 1 ?01/^2020/008236
明 細 書 Specification
発明の名称 : Title of invention:
カラーフィルタ検査装置、 検査装置、 カラーフィルタ検査方法および検査 方法 Color filter inspection device, inspection device, color filter inspection method and inspection method
技術分野 Technical field
[0001 ] 本発明は、 カラーフィルタ検査装置、 検査装置、 カラーフィルタ検査方法 および検査方法に関するものである。 The present invention relates to a color filter inspection device, an inspection device, a color filter inspection method, and an inspection method.
背景技術 Background technology
[0002] 従来、 カラーフィルタ製造ラインでは製造条件の変動が原因でカラーフィ ルタ基板に発生する欠陥の一つとして、 開口面積や着色層の膜厚等の不均一 性領域 (ムラ) を検出するためカラーフィルタのムラ検査装置が用いられて いる。 カラーフィルタのムラ検査装置は、 カラーフィルタ基板の撮影画像を 画像処理することによりムラを検出する装置である。 ムラの種類 (形状、 面 積、 濃淡等から決まる属性項目) と原因工程には相関があり、 装置がムラを 検出した場合は、 作業員がムラの画像からムラの種類を判別し、 ムラの発生 を抑えるように原因工程の製造条件を調整していた。 [0002] Conventionally, in the color filter manufacturing line, as one of the defects generated in the color filter substrate due to the fluctuation of the manufacturing conditions, a non-uniform region (unevenness) such as an opening area or the thickness of the colored layer is detected. A color filter unevenness inspection device is used. The color filter unevenness inspection device is a device that detects unevenness by performing image processing on a captured image of a color filter substrate. There is a correlation between the type of unevenness (attribute items determined by shape, area, shading, etc.) and the causal process. If the device detects unevenness, the operator determines the type of unevenness from the image of unevenness and The manufacturing conditions of the causative process were adjusted to suppress the occurrence.
[0003] しかし、 作業員がムラの画像からムラの種類を判別するためには、 熟練が 必要であり、 また、 作業員が変わっても判別を適正に行えるようにすること は、 非常に難易度が高く、 自動化が望まれている。 [0003] However, it is very difficult for a worker to discriminate the type of the irregularity from the irregularity image, and it is very difficult to properly perform the discrimination even if the worker changes. High degree, automation is desired.
また、 半導体基板等の外観検査に用いることができる画像分類装置が」 2 0 1 7 - 5 4 2 3 9八に開示されている。 この技術は、 撮影画像から特徴 量を抽出して欠陥分類を行うものであり、 教師データを用いてニューラルネ ッ トワーク、 決定木、 半別分析等により欠陥分類を行っている。 In addition, an image classification device that can be used for visual inspection of semiconductor substrates and the like is disclosed in "201 17-5 4 2 3 9 8". This technology extracts feature quantities from captured images and classifies defects, and classifies defects using neural networks, decision trees, and semi-differential analysis using teacher data.
[0004] しかし、
2 0 1 7 - 5 4 2 3 9八に開示されている技術では、 撮影画 像と参照画像との差分画像に基づき欠陥を検出している。 カラーフィルタの ムラは、 非常に微妙な濃淡の差として現れる場合も多く、 差分画像を用いる 手法では、 誤検出してしまうおそれが高く、 カラーフィルタのムラの検出に
\¥0 2020/175666 2 卩(:171? 2020 /008236 [0004] However, The technology disclosed in 2 0 1 7-5 4 2 3 9 8 detects defects based on the difference image between the captured image and the reference image. Color filter unevenness often appears as a very subtle difference in light and shade, and the method that uses the difference image is highly likely to cause false detection. \¥ 0 2020/175666 2 (:171? 2020/008236
は適さないものであった。 また、 」 2 0 1 7— 5 4 2 3 9八に開示されて いる技術では、 分類器に欠陥の特徴量を入力して欠陥の種類を判別している が、 分類器を構築する際に、 どの特徴量を利用するか決める必要があり、 利 用するための難易度が高かった。 Was not suitable. In addition, in the technique disclosed in “2 0 1 7— 5 4 2 3 9 8”, the feature quantity of the defect is input to the classifier to determine the type of the defect. However, when constructing the classifier, , It was necessary to decide which feature quantity to use, and it was difficult to use.
発明の開示 Disclosure of the invention
[0005] 本発明の課題は、 簡単かつ正確にカラーフィルタのムラによる欠陥の検出 と、 欠陥の分類とを行うことができるカラーフィルタ検査装置、 検査装置、 カラーフィルタ検査方法および検査方法を提供することである。 An object of the present invention is to provide a color filter inspection device, an inspection device, a color filter inspection method and an inspection method capable of simply and accurately detecting a defect due to unevenness of a color filter and classifying the defect. That is.
[0006] [課題を解決するための手段] [0006] [Means for Solving the Problems]
本発明は、 以下のような解決手段により、 前記課題を解決する。 なお、 理 解を容易にするために、 本発明の実施形態に対応する符号を付して説明する が、 これに限定されるものではない。 The present invention solves the above problems by the following solution means. It should be noted that, in order to facilitate the understanding, the reference numerals corresponding to the embodiments of the present invention will be given and described, but the present invention is not limited thereto.
[0007] 本発明は、 カラーフィルタの撮影画像に基づいて欠陥候補を検出する欠陥 検出部と、 前記検出された欠陥候補についての少なくとも 1つの物理量と、 前記物理量の閾値とを比較することで、 前記欠陥候補が欠陥でないか否かを 判定し、 欠陥でないと判定された欠陥候補以外の欠陥候補を、 欠陥分類のた めにニューラルネッ トワークに出力する欠陥出力部と、 を備えるカラーフィ ルタ検査装置である。 [0007] The present invention compares a defect detection unit that detects a defect candidate based on a captured image of a color filter, at least one physical quantity of the detected defect candidate, and a threshold of the physical quantity, A color filter inspection apparatus comprising: a defect output unit that determines whether the defect candidate is not a defect and outputs defect candidates other than the defect candidate that is determined not to be a defect to a neural network for defect classification. Is.
[0008] 本発明は、 カラーフィルタの撮影画像に基づいて欠陥候補を検出する欠陥 検出部と、 前記検出された欠陥候補を、 欠陥分類のためにニューラルネッ ト ワークに出力する欠陥出力部と、 カラーフィルタの撮影画像の第 1の分析結 果に基づいて前記ニューラルネッ トワークから出力された前記欠陥分類の結 果を確定する分類確定部と、 を備えるカラーフィルタ検査装置である。 [0008] The present invention provides a defect detection unit that detects a defect candidate based on a captured image of a color filter, a defect output unit that outputs the detected defect candidate to a neural network for defect classification, A color filter inspection device comprising: a classification determination unit that determines the result of the defect classification output from the neural network based on the first analysis result of the captured image of the color filter.
[0009] 前記第 1の分析結果は、 分類毎の欠陥の発生位置を示す発生位置情報を含 んでいてもよい。 [0009] The first analysis result may include generation position information indicating a generation position of a defect for each classification.
[0010] 前記カラーフィルタ検査装置は、 カラーフィルタの撮影画像の第 2の分析 結果に基づいて前記欠陥分類の結果が確定された欠陥候補が欠陥であるか否 かを判定する欠陥判定部をさらに備えていてもよい。
\¥0 2020/175666 3 卩(:171? 2020 /008236 The color filter inspection device further includes a defect determination unit that determines whether or not the defect candidate for which the result of the defect classification has been determined is a defect based on the second analysis result of the captured image of the color filter. You may have it. \\0 2020/175666 3 (:171? 2020/008236
[001 1 ] 前記第 2の分析結果は、 分類毎の欠陥の発生履歴を示す発生履歴情報を含 んでいてもよい。 [001 1] The second analysis result may include generation history information indicating a defect generation history for each classification.
[0012] 前記ニューラルネッ トワークは、 前記撮影画像から畳み込み処理により第 [0012] The neural network is configured to perform a convolution process from the captured image,
1特徴マップを生成する畳み込み層と、 プーリング処理を行い前記第 1特徴 マップのサイズ又は変化を低減して第 2特徴マップを生成するプーリング層 と、 前記欠陥分類の結果を出力する出力層と、 を備えていてもよい。 1 a convolutional layer that generates a feature map, a pooling layer that performs pooling processing to reduce the size or change of the first feature map to generate a second feature map, and an output layer that outputs the result of the defect classification, May be provided.
[0013] 前記カラーフィルタ検査装置は、 前記欠陥検出部で検出された前記欠陥候 補を含む範囲を前記撮影画像から切り出す画像切り出し部をさらに備えてい てもよい。 The color filter inspection device may further include an image cutout unit that cuts out a range including the defect candidate detected by the defect detection unit from the captured image.
[0014] 前記欠陥検出部は、 前記撮影画像を複数方向から一次微分したデータから 欠陥候補を含む範囲を求めてもよい。 [0014] The defect detection unit may obtain a range including a defect candidate from data obtained by first differentiating the captured image from a plurality of directions.
[0015] 本発明は、 対象物の撮影画像に基づいて欠陥候補を検出する欠陥検出部と 、 前記検出された欠陥候補についての少なくとも 1つの物理量と、 前記物理 量の閾値とを比較することで、 前記欠陥候補が欠陥でないか否かを判定し、 欠陥でないと判定された欠陥候補以外の欠陥候補を、 欠陥分類のためにニュ —ラルネッ トワークに出力する欠陥出力部と、 を備える検査装置である。 According to the present invention, by comparing a defect detection unit that detects a defect candidate based on a captured image of an object, at least one physical quantity of the detected defect candidate with a threshold value of the physical quantity. A defect output unit that determines whether the defect candidate is not a defect, and outputs defect candidates other than the defect candidate determined not to be a defect to a neural network for defect classification. is there.
[0016] 本発明は、 対象物の撮影画像に基づいて欠陥候補を検出する欠陥検出部と 、 前記検出された欠陥候補を、 欠陥分類のためにニューラルネッ トワークに 出力する欠陥出力部と、 対象物の撮影画像の第 1の分析結果に基づいて前記 ニューラルネッ トワークから出力された前記欠陥分類の結果を確定する分類 確定部と、 を備える検査装置である。 The present invention includes: a defect detection unit that detects a defect candidate based on a captured image of an object; a defect output unit that outputs the detected defect candidate to a neural network for defect classification; An inspection apparatus comprising: a classification determination unit that determines the result of the defect classification output from the neural network based on a first analysis result of a captured image of an object.
[0017] 本発明は、 欠陥検出部が、 カラーフィルタの撮影画像に基づいて欠陥候補 を検出する工程と、 欠陥出力部が、 前記検出された欠陥候補についての少な くとも 1つの物理量と、 前記物理量の閾値とを比較することで、 前記欠陥候 補が欠陥でないか否かを判定し、 欠陥でないと判定された欠陥候補以外の欠 陥候補を、 欠陥分類のためにニューラルネッ トワークに出力する工程と、 を 備えるカラーフィルタ検査方法である。 According to the present invention, the defect detection unit detects a defect candidate based on a captured image of a color filter, and the defect output unit includes at least one physical quantity of the detected defect candidate, and By comparing with the physical quantity threshold value, it is determined whether the defect indicator is not a defect, and defect candidates other than the defect candidate determined not to be a defect are output to the neural network for defect classification. A method for inspecting a color filter, comprising:
[0018] 本発明は、 欠陥検出部が、 カラーフィルタの撮影画像に基づいて欠陥候補
\¥0 2020/175666 4 卩(:171? 2020 /008236 According to the present invention, the defect detection unit is configured to detect defect candidates based on a captured image of a color filter. \\0 2020/175 666 4 (:171? 2020/008236
を検出する工程と、 欠陥出力部が、 前記検出された欠陥候補を、 欠陥分類の ためにニューラルネッ トワークに出力する工程と、 分類確定部が、 カラーフ ィルタの撮影画像の第 1の分析結果に基づいて前記ニューラルネッ トワーク から出力された前記欠陥分類の結果を確定する工程と、 を備えるカラーフィ ルタ検査方法である。 The defect output unit outputs the detected defect candidates to a neural network for defect classification, and the classification confirmation unit outputs the first analysis result of the captured image of the color filter. And a step of determining the result of the defect classification output from the neural network based on the result.
[0019] 本発明は、 撮影部が、 前記カラーフィルタを撮影する工程と、 前記ニュー ラルネッ トワークが、 前記撮影画像から畳み込み処理により第 1特徴マップ を生成する畳み込み工程と、 プーリング処理を行い前記第 1特徴マップのサ イズ又は変化を低減して第 2特徴マップを生成するプーリングエ程と、 をさ らに備えていてもよい。 [0019] The present invention includes: a photographing unit photographing the color filter; a convolution process in which the neural network generates a first feature map from the photographed image by a convolution process; and a pooling process. A pooling process for reducing the size or change of one feature map to generate the second feature map may be further provided.
[0020] 本発明は、 欠陥検出部が、 対象物の撮影画像に基づいて欠陥候補を検出す る工程と、 欠陥出力部が、 前記検出された欠陥候補についての少なくとも 1 つの物理量と、 前記物理量の閾値とを比較することで、 前記欠陥候補が欠陥 でないか否かを判定し、 欠陥でないと判定された欠陥候補以外の欠陥候補を 、 欠陥分類のためにニューラルネッ トワークに出力する工程と、 を備える検 査方法である。 According to the present invention, the defect detecting unit detects a defect candidate based on a captured image of an object, and the defect output unit includes at least one physical quantity of the detected defect candidate and the physical quantity. By comparing with the threshold value of, to determine whether the defect candidate is not a defect, and output the defect candidates other than the defect candidate determined to be not a defect to the neural network for defect classification, It is an inspection method that includes
[0021 ] 本発明は、 欠陥検出部が、 対象物の撮影画像に基づいて欠陥候補を検出す る工程と、 欠陥出力部が、 前記検出された欠陥候補を、 欠陥分類のために二 ューラルネッ トワークに出力する工程と、 分類確定部が、 対象物の撮影画像 の第 1の分析結果に基づいて前記ニューラルネッ トワークから出力された前 記欠陥分類の結果を確定する工程と、 を備える検査方法である。 According to the present invention, the defect detection unit detects a defect candidate based on a captured image of an object, and the defect output unit uses the detected defect candidate for a dual network for defect classification. And a step of causing the classification determination unit to determine the result of the defect classification output from the neural network based on the first analysis result of the captured image of the object. is there.
[0022] 本発明によれば、 簡単かつ正確にカラーフィルタの欠陥の検出と、 欠陥の 分類とを行うことができるカラーフィルタ検査装置、 検査装置、 カラーフィ ルタ検査方法および検査方法を提供することができる。 According to the present invention, it is possible to provide a color filter inspecting device, an inspecting device, a color filter inspecting method and an inspecting method capable of easily and accurately detecting a defect of a color filter and classifying the defect. it can.
図面の簡単な説明 Brief description of the drawings
[0023] [図 1 ]本発明によるカラーフィルタ検査装置 1の第 1実施形態を示す図である FIG. 1 is a diagram showing a first embodiment of a color filter inspection device 1 according to the present invention.
[図 2]カラーフィルタ検査装置 1の動作の流れを示すフローチヤートである。
\¥0 2020/175666 5 卩(:171? 2020 /008236 FIG. 2 is a flow chart showing a flow of operations of the color filter inspection device 1. \\0 2020/175666 5 (:171? 2020/008236
[図 3]欠陥分類部 1 5のニユーラルネッ トワークの構成と、 学習処理について 示す図である。 [Fig. 3] Fig. 3 is a diagram showing the configuration of the neural network of the defect classification unit 15 and the learning process.
[図 4]平均プーリングを説明する図である。 [FIG. 4] A diagram for explaining average pooling.
[図 5]微分画像を 2値画像にする例を示す図である。 [FIG. 5] A diagram showing an example in which a differential image is converted into a binary image.
[図 6]図 5の 2値画像について第 1の画素値を有する隣接画素の集合からなる 領域を抽出した例を示す図である。 [FIG. 6] FIG. 6 is a diagram showing an example in which a region formed of a set of adjacent pixels having a first pixel value is extracted from the binary image of FIG. 5.
[図 7]本発明によるカラーフィルタ検査装置 1の第 2実施形態を示す図である FIG. 7 is a diagram showing a second embodiment of the color filter inspection device 1 according to the present invention.
[図 8]第 2実施形態のカラーフィルタ検査装置 1 における動作の流れを示すフ 口ーチヤートでめる。 [FIG. 8] A flowchart showing the flow of operations in the color filter inspection apparatus 1 according to the second embodiment.
発明を実施するための形態 MODE FOR CARRYING OUT THE INVENTION
[0024] 以下、 本発明を実施するための最良の形態について図面等を参照して説明 する。 各実施形態で参照する図面において、 同一部分または同様な機能を有 する部分には同一の符号または類似の符号を付し、 その繰り返しの説明は省 略する。 [0024] Hereinafter, the best mode for carrying out the present invention will be described with reference to the drawings. In the drawings referred to in the respective embodiments, the same portions or portions having similar functions are denoted by the same reference numerals or similar reference numerals, and repeated description thereof is omitted.
[0025] (第 1実施形態) [0025] (First Embodiment)
図 1は、 本発明によるカラーフィルタ検査装置 1の実施形態を示す図であ る。 FIG. 1 is a diagram showing an embodiment of a color filter inspection device 1 according to the present invention.
カラーフィルタ検査装置 1は、 カラーフィルタの製造過程において、 製造 上の欠陥を検査する装置であり、 ここでは、 対象物の一例であるカラーフィ ルタのムラを検討する場合を例示するが、 ムラ以外の欠陥を検査するもので あってもよい。 The color filter inspecting apparatus 1 is an apparatus for inspecting manufacturing defects in the manufacturing process of the color filter. Here, an example of examining the unevenness of the color filter, which is an example of the object, is exemplified. It may be for inspecting for defects.
本実施形態のカラーフィルタ検査装置 1は、 撮影部 1 1 と、 欠陥検出部 1 2と、 画像切り出し部 1 3と、 入力部 1 4と、 欠陥分類部 1 5と、 学習モデ ル構築部 1 7とを備えている。 なお、 欠陥検出部 1 2と、 画像切り出し部 1 3と、 入力部 1 4と、 欠陥分類部 1 5と、 学習モデル構築部 1 7とは、 コン ピユータに、 専用のプログラムを組み込んで構成されており、 プログラムが 各構成の機能を実現する。
The color filter inspection apparatus 1 according to the present embodiment includes an imaging unit 11, a defect detection unit 12, an image cutout unit 13, an input unit 14, a defect classification unit 15, and a learning model construction unit 17. The defect detection unit 12, the image cutout unit 13, the input unit 14, the defect classification unit 15, and the learning model construction unit 17 are configured by incorporating a dedicated program into a computer, and the program realizes the functions of each component.
[0026] 撮影部 1 1は、 不図示の照明、 カメラ、 搬送装置等を備えており、 検査対 象のカラーフィルタの撮影を行い、 撮影画像を取得する。 撮影部 1 1が撮影 した撮影画像は、 欠陥検出部 1 2及び画像切り出し部 1 3へ送られる。 なお 、 撮影部のカメラとしては、 ラインセンサカメラであってもよいし、 エリア センサカメラであってもよい。 The imaging unit 11 includes an illumination, a camera, a transport device, and the like (not shown), photographs the color filter to be inspected, and acquires a captured image. The captured image taken by the imaging unit 11 is sent to the defect detection unit 12 and the image cutout unit 13. The camera of the imaging unit may be a line sensor camera or an area sensor camera.
撮影部 1 1は、 カラーフィルタからの反射光を受光する形態でも透過光を 受光する形態であってもよい。 The image capturing unit 11 may be configured to receive the reflected light from the color filter or to receive the transmitted light.
[0027] 欠陥検出部 1 2は、 画像処理で画像からムラである可能性がある欠陥候補 を検出する。 本実施形態の欠陥検出部 1 2が行うムラの検出方法は、 特許第 4 3 6 3 9 5 3号公報に開示されている手法を用いている。 すなわち、 欠陥 検出部 1 2は、 撮影画像に対して空間フィルタを用いた一次微分処理を行い 、 ムラが存在すると予想される領域を定め、 この領域についての階調値の空 間的な変化割合をムラの程度を示す評価値として、 ムラが存在する可能性の ある領域を検出する。 The defect detection unit 12 detects, by image processing, defect candidates that may be unevenness in the image. The unevenness detection method performed by the defect detection unit 12 of the present embodiment uses the technique disclosed in Japanese Patent No. 4363953. That is, the defect detection unit 12 performs first-order differentiation processing using a spatial filter on the captured image, determines regions where unevenness is expected to exist, and detects regions where unevenness may exist by using the spatial rate of change of the gradation value in each region as an evaluation value indicating the degree of unevenness.
[0028] 画像切り出し部 1 3は、 欠陥検出部 1 2が検出した欠陥候補を含む所定の 範囲について撮影画像から切り出して、 ムラ周辺画像を取得する。 画像切り 出し部 1 3は、 切り出したムラ周辺画像と、 撮影画像とを入力部 1 4へ送る The image cutout unit 13 cuts out a predetermined range including the defect candidate detected by the defect detection unit 12 from the photographed image, and acquires a nonuniformity peripheral image. The image cutout unit 13 sends the cutout unevenness peripheral image and the captured image to the input unit 14
[0029] 入力部 1 4は、 ムラ周辺画像と、 撮影画像とを欠陥分類部 1 5の入力層 1 5 1へ入力する。 The input unit 14 inputs the unevenness peripheral image and the captured image to the input layer 151 of the defect classification unit 15.
欠陥検出部 1 2によって欠陥候補を検出し、 画像切り出し部 1 3によって 画像を切り出すことにより、 後述の欠陥分類部 1 5による処理を軽くするこ とができる。 By detecting defect candidates by the defect detection unit 12 and cutting out the image by the image cutout unit 13, the processing by the defect classification unit 15 described later can be lightened.
[0030] 欠陥分類部 1 5は、 ニューラルネッ トワークにより構成されており、 ムラ周 辺画像を人工知能により解析して、 ムラの種類を分類して欠陥分類として出 力する。 欠陥分類部 1 5は、 学習モデル 1 6を有しており、 ムラ周辺画像を 学習モデル 1 6と対比することによりムラの分類 (欠陥の分類) を行う。 ここで、 カラーフィルタのムラは、 ムラの発生原因によってムラの具体的な形態、 例えば、 形状、 大きさ、 面積等が異なっている。 よって、 ムラの分類を 行うことにより、 ムラ (欠陥) の発生原因の特定が可能である。 また、 欠陥 検出部 1 2の検出には、 誤検出が含まれる場合もあり得るので、 この誤検出 も分類することが可能である。 The defect classification unit 15 is composed of a neural network; it analyzes the unevenness peripheral image by artificial intelligence, classifies the type of unevenness, and outputs the result as a defect classification. The defect classification unit 15 has a learning model 16 and classifies the unevenness (classifies the defect) by comparing the unevenness peripheral image with the learning model 16. Here, the concrete form of unevenness of a color filter, for example its shape, size, and area, differs depending on the cause of the unevenness. Therefore, by classifying the unevenness, the cause of the unevenness (defect) can be identified. Furthermore, since the detection by the defect detection unit 12 may include false detections, such false detections can also be classified.
[0031 ] また、 欠陥分類部 1 5は、 学習モデル 1 6を備えている。 Further, the defect classifying unit 15 includes a learning model 16.
学習モデル 1 6は、 学習モデル構築部 1 7によって予め学習データ 1 8 ( ムラの種類毎の画像群) を学習して構築される。 The learning model 16 is constructed by learning the learning data 18 (image group for each type of unevenness) in advance by the learning model constructing unit 17.
[0032] 学習モデル構築部 1 7は、 学習データ 1 8から学習モデルを構築する。 上 述したように、 学習モデル構築部 1 7は、 カラーフィルタの検査を実行する よりも先に、 予め学習データ 1 8 (ムラの種類毎の画像群) を学習して学習 モデル 1 9を構築し、 欠陥分類部 1 5の学習モデル 1 6に移す。 学習モデル 1 9はニューラルネッ トワークで利用するパラメータセッ ト等から構成され ている。 The learning model building unit 17 builds a learning model from the learning data 18. As described above, the learning model construction unit 17 constructs the learning model 19 by learning the learning data 18 (image group for each type of unevenness) in advance before executing the color filter inspection. Then, it shifts to the learning model 16 of the defect classification unit 15. The learning model 19 is composed of parameter sets used in the neural network.
[0033] 次に、 カラーフィルタ検査装置 1の動作、 及び、 欠陥分類部 1 5の構成と 動作をより詳しく説明する。 Next, the operation of the color filter inspection apparatus 1 and the configuration and operation of the defect classification unit 15 will be described in more detail.
図 2は、 カラーフィルタ検査装置 1の動作の流れを示すフローチヤートであ る。 FIG. 2 is a flow chart showing the operation flow of the color filter inspection device 1.
ステップ (以下、 Sとする) S 1 1では、 撮影部 1 1が、 カラーフィルタを 撮影して、 画像を取得する。 In step (hereinafter, S) S11, the imaging unit 11 photographs the color filter and acquires an image.
S 1 2では、 欠陥検出部が欠陥候補を検出する。 本実施形態では、 欠陥検 出部 1 2は、 ステップ S 1 1で取得した画像に対して空間フィルタを用いた一 次微分処理を行い、 ムラが存在すると予想される領域を抽出する。 In S12, the defect detection unit detects defect candidates. In the present embodiment, the defect detection unit 12 performs first-order differentiation processing using a spatial filter on the image acquired in step S11 and extracts regions where unevenness is expected to exist.
S 1 3では、 画像切り出し部 1 3が、 欠陥候補を含む所定の範囲について 撮影画像から切り出して、 ムラ周辺画像を取得する。 In S13, the image cutout unit 13 cuts out a predetermined range including the defect candidate from the captured image and acquires an unevenness peripheral image.
S 1 4では、 入力部 1 4が、 ムラ周辺画像 (切り出された画像) を欠陥分 類部 1 5の入力層へ入力する。 In S14, the input unit 14 inputs the unevenness peripheral image (the cut-out image) to the input layer of the defect classification unit 15.
[0034] S 1 5では、 欠陥分類部 1 5がムラ周辺画像 (切り出された画像) から欠 陥を分類する。 [0034] In S15, the defect classification unit 15 classifies the defect from the unevenness peripheral image (the cut-out image).
[0035] S 1 2の欠陥候補の検出方法について説明する。 撮影画像上の微分方向を 決定し、 画像を構成する各画素の階調値について微分方向に関する一次微分 を求めることにより微分画像を作成する。 The method of detecting defect candidates in S12 will now be described. A differentiation direction on the captured image is determined, and a differential image is created by obtaining the first-order derivative, in that direction, of the gradation value of each pixel constituting the image.
図 5は、 微分画像を 2値画像にする例を示す図である。 FIG. 5 is a diagram showing an example of converting a differential image into a binary image.
所定のしきい値を設定し、 微分画像について、 しきい値以上の画素値を有 する画素については第 1の画素値を与え、 しきい値未満の画素値を有する画 素については第 2の画素値を与えることにより、 微分画像を図 5に示すよう な 2値画像にする。 A predetermined threshold is set, and in the differential image a first pixel value is given to pixels whose values are at or above the threshold and a second pixel value is given to pixels whose values are below the threshold, thereby converting the differential image into a binary image as shown in Fig. 5.
図 6は、 図 5の 2値画像について第 1の画素値を有する隣接画素の集合か らなる領域を抽出した例を示す図である。 FIG. 6 is a diagram showing an example of extracting a region consisting of a set of adjacent pixels having the first pixel value in the binary image of FIG.
この 2値画像について、 第 1の画素値を有する隣接画素の集合からなる領 域を図 6に示すように抽出する。 この領域を構成する画素群を、 微分方向に 沿った複数の一次元画素配列の集合として、 この一次元画素配列の両端に位 置する画素の階調値の差を当該一次元画素配列の長さで除した値を一次元画 素配列に関する不均一性を示す評価値として、 複数の一次元画素配列につい て求めた複数の評価値の代表値を、 評価対象領域についての不均一性を示す 評価値とする。 評価値の値が高ければ周囲からムラが目立って見える。 評価 値を微分方向毎に求めて、 一定の所定の条件を満たせば、 評価対象領域にム ラが存在すると評価する。 From this binary image, regions consisting of sets of adjacent pixels having the first pixel value are extracted as shown in Fig. 6. The pixel group constituting such a region is treated as a set of one-dimensional pixel arrays along the differentiation direction. For each one-dimensional pixel array, the difference between the gradation values of the pixels at its two ends divided by the length of the array is taken as an evaluation value indicating the non-uniformity of that array, and a representative value of the evaluation values obtained for the plurality of one-dimensional pixel arrays is taken as the evaluation value indicating the non-uniformity of the evaluation target region. The higher the evaluation value, the more the unevenness stands out from its surroundings. The evaluation value is obtained for each differentiation direction, and if a certain predetermined condition is satisfied, it is evaluated that unevenness exists in the evaluation target region.
[0036] S 1 2の処理はムラ領域の大きさや形状が予測できない場合でも、 正確に ムラの存在の有無を評価することができるため、 精度よく欠陥候補のある領 域を欠陥分類部 1 5のニューラルネッ トワークに渡すことができる。 Even when the size and shape of an uneven region cannot be predicted, the processing of S12 can accurately evaluate whether unevenness is present, so regions containing defect candidates can be passed accurately to the neural network of the defect classification unit 15.
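As a concrete illustration of this kind of evaluation, the following Python sketch (assuming NumPy and SciPy are available) differentiates the image along one direction, binarizes the derivative, and scores each extracted region by the endpoint gradation difference divided by the run length. The function name, the simple difference filter, the threshold value, and the choice of the mean as the representative value are illustrative assumptions, not values fixed by the embodiment.

```python
import numpy as np
from scipy import ndimage

def evaluate_unevenness(gray, axis=1, diff_threshold=3.0):
    """Differentiate along one direction, binarize, and score each region by
    (endpoint gradation difference) / (run length) per 1-D pixel array."""
    gray = gray.astype(float)
    # First-order derivative along the chosen direction (simple spatial filter).
    diff = np.abs(np.diff(gray, axis=axis, prepend=gray.take([0], axis=axis)))
    # Binarize: pixels whose derivative reaches the threshold get the first value (True).
    mask = diff >= diff_threshold
    # Group adjacent "first value" pixels into candidate regions.
    labels, n_regions = ndimage.label(mask)
    scores = {}
    for region in range(1, n_regions + 1):
        region_mask = labels == region
        # Treat the region as a set of 1-D pixel arrays along the differentiation direction.
        lines = region_mask if axis == 1 else region_mask.T
        img_lines = gray if axis == 1 else gray.T
        run_scores = []
        for row_mask, row_vals in zip(lines, img_lines):
            cols = np.flatnonzero(row_mask)
            if cols.size >= 2:
                length = cols[-1] - cols[0] + 1
                run_scores.append(abs(row_vals[cols[-1]] - row_vals[cols[0]]) / length)
        if run_scores:
            # Representative value (here the mean) is the region's evaluation value.
            scores[region] = float(np.mean(run_scores))
    return scores
```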
[0037] 欠陥分類部 1 5の S 1 5における動作について、 欠陥分類部 1 5のより具 体的な構成と共に説明する。 The operation of the defect classification unit 15 in S15 will be described together with a more specific configuration of the defect classification unit 15.
図 3は、 欠陥分類部 1 5のニューラルネッ トワークの構成と、 学習処理に ついて示す図である。 FIG. 3 is a diagram showing the structure of the neural network of the defect classification unit 15 and the learning process.
本実施形態の欠陥分類部 1 5のニューラルネッ トワークは、 入力層 1 5 1 と、 畳み込み層 1 5 2と、 プーリング層 1 5 3と、 全結合層 1 5 4と、 出力層 1 5 5とを備えている。 The neural network of the defect classification unit 15 of this embodiment includes an input layer 151, a convolutional layer 152, a pooling layer 153, a fully connected layer 154, and an output layer 155.
[0038] 入力層 1 5 1は、 入力部 1 4からムラ周辺画像の入力を受ける層である。 The input layer 1 5 1 is a layer that receives an input of the uneven peripheral image from the input unit 14.
図 3の例では、 6 4 X 6 4ピクセルの画像の入力を受けることとして示して いるが、 これは、 適宜変更可能である。 In the example of FIG. 3, it is shown that an image of 64×64 pixels is input, but this can be changed appropriately.
[0039] 畳み込み層 1 5 2は、 任意の大きさの係数行列を用いた畳み込み処理を行 う。 ここでは 3 X 3行列係数を用いて畳み込み処理を行ってバイアス値によ る補正を行っている。 [0039] The convolutional layer 1 5 2 performs a convolution process using a coefficient matrix of an arbitrary size. Here, convolution processing is performed using the 3 x 3 matrix coefficient, and correction is performed using the bias value.
畳み込み処理では 6 4 X 6 4ピクセルの入力画像から 3 X 3ピクセルの第 1小画像を抽出し、 この小画像と 3 X 3係数行列を用いて畳み込み計算を行 い、 それにバイアス値とバイアス係数を乗算したものを加算して ReLU (Rectified Linear Unit) を適用し、 第 1特徴マップを生成する。 図 3では、 6 4 種類の行列係数を用いて 6 4枚の畳み込み画像を作成する例を例示している が、 これは、 適宜変更可能である。 In the convolution processing, a first small image of 3 x 3 pixels is extracted from the 64 x 64 pixel input image, a convolution is computed from this small image and the 3 x 3 coefficient matrix, the product of the bias value and the bias coefficient is added, and ReLU (Rectified Linear Unit) is applied to generate the first feature map. Fig. 3 illustrates an example in which 64 convolution images are created using 64 kinds of matrix coefficients, but this can be changed as appropriate.
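A minimal NumPy sketch of this convolution step is shown below; the random kernels, the valid-padding output size, and the zero bias are illustrative assumptions, and in practice the 64 coefficient matrices and bias values would be the trained ones.

```python
import numpy as np

def conv2d_relu(image, kernel, bias=0.0):
    """Slide a 3x3 kernel over the 64x64 input, add a bias, apply ReLU,
    and return one feature map."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + kh, x:x + kw]          # 3x3 small image
            out[y, x] = np.sum(patch * kernel) + bias  # convolution + bias
    return np.maximum(out, 0.0)                        # ReLU

# 64 kernels -> 64 first feature maps, as in the example of Fig. 3.
image = np.random.rand(64, 64)
kernels = [np.random.randn(3, 3) for _ in range(64)]
feature_maps = [conv2d_relu(image, k) for k in kernels]
```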
[0040] プーリング層 1 5 3では、 第 1特徴マップにプーリング処理を行い、 第 2 特徴マップを得る。 プーリング処理は畳み込み層 1 5 2で生成した第 1特徴 マップのサイズ又は変化を低減して第 2特徴マップを生成している。 例えば 、 第 1特徴マップから、 2 X 2ピクセルの画像を抽出し、 この画像の最大輝 度や平均輝度を算出する。 具体的には、 平均プーリング、 最大プーリング、 In the pooling layer 153, pooling processing is performed on the first feature map to obtain the second feature map. The pooling process reduces the size or change of the first feature map generated in the convolutional layer 152 to generate the second feature map. For example, a 2×2 pixel image is extracted from the first feature map, and the maximum brightness and average brightness of this image are calculated. Specifically, average pooling, maximum pooling,
Lpプーリング等を実行する。 Lp pooling, or the like is performed.
図 4は、 平均プーリングを説明する図である。 FIG. 4 is a diagram illustrating average pooling.
図 4に示す平均プーリングの例では、 2 X 2ピクセルのプーリング領域に 分割し、 輝度値を平均することによりサイズを低減化している。 In the example of average pooling shown in Fig. 4, the size is reduced by dividing the pooling area into 2 x 2 pixels and averaging the luminance values.
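The average pooling of Fig. 4 can be sketched as follows (NumPy assumed); the 2 x 2 block size matches the example, and the function simply replaces each non-overlapping block with its mean brightness.

```python
import numpy as np

def average_pool(feature_map, size=2):
    """2x2 average pooling: split the feature map into non-overlapping blocks
    and replace each block with its mean brightness."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size            # drop any ragged edge
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.mean(axis=(1, 3))

pooled = average_pool(np.random.rand(62, 62))    # e.g. one first feature map
```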
[0041 ] 図 3の全結合層 1 5 4は、 6 4個の第 2特徴マップを結合し全結合データ を作成する。 [0041] The fully connected layer 154 in Fig. 3 combines the 64 second feature maps to create fully connected data.
[0042] 出力層 1 5 5は、 全結合データにパラメータセッ ト (重みパラメータ、 バ イアスパラメータ) を適用し、 活性関数を用いて 9種類の欠陥の分類結果を 出力している。 パラメータセッ ト (重みパラメータ、 バイアスパラメータ) は、 6 4個の第 2特徴マップの全てに適用可能なものである。
[0043] パラメータセッ ト (重みパラメータ、 バイアスパラメータ) の設定は、 学 習データ 1 8によりバックプロパゲーション法を用いて行われる。 学習デー 夕 1 8から出力誤差を算出し、 誤差関数に最小自乗法を利用し勾配降下法等 によってパラメータセッ ト (重みパラメータ、 バイアスパラメータ) の更新 をおこない、 学習を複数回繰り返すことによりパラメータセッ ト (重みパラ メータ、 バイアスパラメータ) の値を決める。 The parameter set (weight parameters, bias parameters) is set using the backpropagation method with the learning data 18. The output error is calculated from the learning data 18, the parameter set (weight parameters, bias parameters) is updated by gradient descent or the like using a least-squares error function, and the values of the parameter set (weight parameters, bias parameters) are determined by repeating this learning a plurality of times.
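The following sketch shows, in NumPy, how the fully connected data and the output layer described above could be evaluated for the nine defect classes; softmax is used here as one possible activation function, and the random weight and bias values merely stand in for the parameter set obtained by the training described in [0043].

```python
import numpy as np

def classify(feature_maps, weights, biases):
    """Flatten the 64 second feature maps into fully connected data, apply the
    parameter set (weights, biases), and convert the 9 class scores to
    probabilities with a softmax activation."""
    x = np.concatenate([fm.ravel() for fm in feature_maps])  # fully connected data
    scores = weights @ x + biases                            # weights: (9, n), biases: (9,)
    exp = np.exp(scores - scores.max())                      # numerically stable softmax
    return exp / exp.sum()

# Illustrative shapes: 64 pooled maps of 31 x 31 -> 9 defect classes.
rng = np.random.default_rng(0)
n_features = 64 * 31 * 31
probabilities = classify([rng.random((31, 31)) for _ in range(64)],
                         rng.normal(scale=0.01, size=(9, n_features)),
                         np.zeros(9))
```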
[0044] 図 3では畳み込み層とプーリング層をそれぞれ 1層にしてネッ トワークを 構成しているが、 複数の畳み込み層とプーリング層からなるネッ トワークを 構成し、 畳み込み処理とプーリング処理を予め定められた回数繰り返しても よい。 また、 畳み込み層に使用する係数行列は、 予め学習させて値を決めて 固定させているが、 畳み込み層と全結合データのパラメータセッ ト (重みパ ラメータ、 バイアスパラメータ) を同時に学習させてもよい。 In Fig. 3 the network is configured with one convolutional layer and one pooling layer, but a network consisting of a plurality of convolutional layers and pooling layers may be configured, and the convolution processing and pooling processing may be repeated a predetermined number of times. In addition, although the coefficient matrices used in the convolutional layer are trained in advance and their values are then fixed, the convolutional layer and the parameter set (weight parameters, bias parameters) of the fully connected data may be trained at the same time.
[0045] 本実施形態では Convolutional Neural Network (CNN) と呼ばれる Deep Learning技術を用いることができ、 シグモイ ド関数や ReLU関数等の活性化関数が複数の層で組み合わされたネッ トワークを形成してもよい。 In this embodiment, a Deep Learning technique called a Convolutional Neural Network (CNN) can be used, and a network in which activation functions such as sigmoid functions and ReLU functions are combined over a plurality of layers may be formed.
なお、 上述したニューラルネッ トワークの構成例は、 一例を示したに過ぎ ず、 適宜変更可能である。 The configuration example of the neural network described above is merely an example, and can be changed as appropriate.
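For reference, a network of the kind described (one convolutional layer, one average-pooling layer, one fully connected layer, nine output classes) could be written compactly in Python with PyTorch as below; the channel counts and layer sizes follow the example of Fig. 3, while the use of PyTorch, the padding, and the random input are illustrative assumptions.

```python
import torch
from torch import nn

# Illustrative CNN: 1-channel 64 x 64 patch -> 64 feature maps -> 9 defect classes.
model = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=3, padding=1),  # 64 filters of 3 x 3
    nn.ReLU(),
    nn.AvgPool2d(2),                             # 64 x 64 -> 32 x 32 feature maps
    nn.Flatten(),
    nn.Linear(64 * 32 * 32, 9),                  # fully connected layer to 9 classes
)

scores = model(torch.randn(1, 1, 64, 64))        # one cut-out unevenness peripheral image
```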
[0046] 以上説明したように、 本実施形態のカラーフィルタ検査装置 1は、 ニュー ラルネッ トワークを利用した欠陥分類部 1 5を用いることにより、 精度よく 欠陥の分類を行うことができる。 よって、 製造工程における欠陥を早期に発 見して、 製造工程の改善を効率よく行うことが可能となる。 As described above, the color filter inspecting apparatus 1 of the present embodiment can accurately classify defects by using the defect classifying unit 15 that uses the neural network. Therefore, it becomes possible to detect defects in the manufacturing process early and improve the manufacturing process efficiently.
[0047] 例えば、 感光材料を基板に塗布する工程の異物が原因で発生する特定のム ラは、 熟練した作業員が見れば特定のムラと認識できるが、 特定のムラでも 形状、 面積、 濃淡等のばらつきが大きい。 そのため、 形状、 面積、 濃淡等か ら特定のムラと判別する規則を決定するのは困難であるが、 ニューラルネッ トワークを利用した欠陥分類器 1 5を用いることにより、 精度よく特定のム ラと判別することができる。 感光材料を基板に塗布する工程の異物が特定のムラと相関があることは分かっているので、 特定のムラの発生を知った作業 員は、 感光材料を基板に塗布する工程の異物の確認や除去を行うことで早期 の改善が見込める。 For example, a specific type of unevenness caused by foreign matter in the step of applying a photosensitive material to a substrate can be recognized as that specific unevenness by a skilled worker, but even that specific unevenness varies greatly in shape, area, shading, and the like. It is therefore difficult to determine rules for discriminating the specific unevenness from its shape, area, shading, and so on, but by using the defect classifier 15 based on a neural network it can be discriminated accurately as the specific unevenness. Since it is known that foreign matter in the step of applying the photosensitive material to the substrate correlates with this specific unevenness, a worker who learns that the specific unevenness has occurred can expect an early improvement by checking for and removing foreign matter in that application step.
[0048] (第 2実施形態) (Second Embodiment)
図 7は、 本発明によるカラーフィルタ検査装置 1の第 2実施形態を示す図 である。 FIG. 7 is a diagram showing a second embodiment of the color filter inspection device 1 according to the present invention.
第 2実施形態のカラーフィルタ検査装置 1は、 第 1実施形態の構成に加え て、 さらに、 欠陥出力部の一例である欠陥分別部 2 1 と、 分類確定部 2 2と 、 欠陥判定部 2 3とを備えている。 これら欠陥分別部 2 1、 分類確定部 2 2 および欠陥判定部 2 3は、 コンピュータに、 専用のプログラムを組み込んで 構成されており、 プログラムが各構成の機能を実現する。 In addition to the configuration of the first embodiment, the color filter inspection apparatus 1 of the second embodiment further includes a defect sorting unit 21, which is an example of a defect output unit, a classification confirmation unit 22, and a defect determination unit 23. The defect sorting unit 21, the classification confirmation unit 22, and the defect determination unit 23 are configured by incorporating a dedicated program into a computer, and the program realizes the functions of each component.
[0049] 欠陥検出部 1 2は、 撮像部 1 1 によるカラーフィルタの撮影画像に基づい てムラである可能性がある欠陥候補を検出する。 欠陥検出部 1 2は、 撮影画 像から直接欠陥候補を検出してもよいが、 撮影画像に対して欠陥の判定精度 を上げるための前処理を行った画像から欠陥候補を検出することが好ましい The defect detection unit 12 detects a defect candidate that may be uneven, based on the captured image of the color filter taken by the image capturing unit 11. The defect detection unit 12 may directly detect the defect candidate from the photographed image, but it is preferable to detect the defect candidate from the image obtained by preprocessing the photographed image to improve the accuracy of the defect determination.
[0050] 欠陥分別部 2 1は、 欠陥検出部 1 2によって検出された欠陥候補について の少なくとも 1つの物理量を算出し、 算出された物理量と物理量の閾値とを 比較することで欠陥候補が欠陥でないか否かを判定する。 欠陥分別部 2 1は 、 欠陥でないと判定された欠陥候補以外の欠陥候補を、 欠陥分類のためにニューラルネッ トワークすなわち欠陥分類器 1 5に出力する。 一方、 欠陥分別 部 2 1は、 欠陥でないと判定された欠陥候補は、 欠陥分類器 1 5に出力しな い。 The defect sorting unit 21 calculates at least one physical quantity for each defect candidate detected by the defect detection unit 12 and determines whether or not the defect candidate is not a defect by comparing the calculated physical quantity with a threshold for that physical quantity. The defect sorting unit 21 outputs the defect candidates other than those determined not to be defects to the neural network, that is, to the defect classifier 15, for defect classification. On the other hand, the defect sorting unit 21 does not output the defect candidates determined not to be defects to the defect classifier 15.
[0051 ] 分類確定部 2 2は、 予め取得されているカラーフィルタの撮影画像の第 1 の分析結果に基づいて、 欠陥分類器 1 5から出力された欠陥分類の結果を確 定する。 具体的には、 分類確定部 2 2は、 欠陥分類と、 第 1の分析結果の一 例である分類毎の欠陥の発生位置を示す発生位置情報とを比較することで、 欠陥分類を確定する。
[0052] 欠陥判定部 2 3は、 予め取得されているカラーフィルタの撮影画像の第 2 の分析結果に基づいて、 欠陥分類が確定された欠陥候補が欠陥であるか否か を判定する。 具体的には、 欠陥判定部 2 3は、 欠陥分類が確定された欠陥候 補と、 第 2の分析結果の一例である分類毎の欠陥の発生履歴を示す発生履歴 情報とを比較することで、 欠陥分類が確定された欠陥候補が欠陥であるか否 かを判定する。 The defect determination unit 23 determines whether or not a defect candidate whose defect classification has been confirmed is a defect, based on a second analysis result of captured images of color filters acquired in advance. Specifically, the defect determination unit 23 determines whether or not the defect candidate whose defect classification has been confirmed is a defect by comparing the defect candidate with occurrence history information, which is an example of the second analysis result and indicates the occurrence history of defects for each classification.
[0053] 次に、 第 2実施形態におけるカラーフィルタ検査装置 1の動作をより詳し く説明する。 [0053] Next, the operation of the color filter inspection apparatus 1 according to the second embodiment will be described in more detail.
図 8は、 第 2実施形態におけるカラーフィルタ検査装置 1の動作の流れを 示すフローチヤートである。 FIG. 8 is a flow chart showing a flow of operations of the color filter inspection device 1 in the second embodiment.
[0054] 先ず、 S 2 1 において、 撮影部 1 1は、 カラーフィルタを撮影した撮影画 像を取得する。 First, in S21, the imaging unit 11 acquires a captured image of the color filter.
[0055] 撮影画像を取得した後、 S 2 2において、 欠陥検出部 1 2は、 撮影画像に 対して欠陥の判定精度を上げるための前処理を行う。 前処理は、 例えば、 シ エーディング処理と、 平滑化処理と、 放射処理と、 白黒反転処理との一連の 処理によって構成される。 After the captured image is acquired, in S22 the defect detection unit 12 performs preprocessing on the captured image to improve the accuracy of defect determination. The preprocessing is composed of, for example, a series of processes: shading processing, smoothing processing, radiation processing, and black-and-white inversion processing.
シエーディング処理は、 濃度ムラのある画像から濃度ムラを取り除く処理 である。 The shading process is a process for removing density unevenness from an image having density unevenness.
平滑化処理は、 撮影画像に含まれる欠陥候補以外のノイズを除去するため に撮影画像をぼかす処理である。 平滑化処理には、 例えばガウシアンフィル 夕を用いることができる。 The smoothing process is a process of blurring a captured image in order to remove noise other than defect candidates included in the captured image. For smoothing processing, for example, Gaussian filter can be used.
放射処理は、 後述する欠陥候補検出の際の畳み込み演算に必要な処理とし て、 撮影画像の端部に位置する画素の輝度値をその外側にコピーする処理で ある。 放射処理は、 撮影画像の端部に位置する画素を注目画素すなわち中心 画素とした例えば 3 X 3画素を用いて畳み込みを行う場合に、 注目画素の周 辺の画素が不足するのでこの不足を補うために行われる。 The radiation processing is processing required for the convolution operation in the defect candidate detection described later, and copies the brightness values of the pixels located at the edge of the captured image to the outside of the image. The radiation processing is performed to make up for the shortage of surrounding pixels that occurs when a pixel located at the edge of the captured image is taken as the pixel of interest, that is, the center pixel, of a convolution using, for example, 3 x 3 pixels.
白黒反転処理は、 黒い欠陥を判定対象とする場合には撮影画像を白黒反転 し、 白い欠陥を判定対象とする場合には撮影画像を白黒反転しない処理である。 判定対象が黒い欠陥であるか又は白い欠陥であるかについては、 判定す べき欠陥の種類によって異なる。 白黒反転処理によれば、 欠陥の種類によら ずに白い画像を欠陥候補として検出して判定することができるので、 欠陥判 定を簡便化することができる。 The black-and-white inversion processing inverts the captured image when a black defect is the determination target and does not invert the captured image when a white defect is the determination target. Whether the determination target is a black defect or a white defect depends on the type of defect to be determined. With the black-and-white inversion processing, a white image can be detected and judged as a defect candidate regardless of the type of defect, so defect determination can be simplified.
なお、 前処理を構成する上記のシェーディング処理、 平滑化処理、 放射処 理および白黒反転処理は、 これらの前後を適宜入れ替えてもよい。 The order of the shading processing, smoothing processing, radiation processing, and black-and-white inversion processing that constitute the preprocessing may be changed as appropriate.
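A minimal sketch of such a preprocessing chain, assuming NumPy and SciPy, is given below; the specific shading-correction method, the Gaussian sigma values, and the padding width are illustrative assumptions rather than values from the embodiment.

```python
import numpy as np
from scipy import ndimage

def preprocess(image, detect_dark_defects=True):
    """Illustrative preprocessing chain: shading correction, smoothing,
    edge "radiation" padding, and optional black-and-white inversion."""
    img = image.astype(float)
    # Shading correction: subtract a heavily blurred copy to flatten slow density unevenness
    # (one common approach; the patent does not fix a particular method).
    img = img - ndimage.gaussian_filter(img, sigma=25) + img.mean()
    # Smoothing: Gaussian blur to suppress noise other than the defect candidates.
    img = ndimage.gaussian_filter(img, sigma=1.0)
    # "Radiation": replicate the edge pixels outward so a 3x3 convolution has neighbours everywhere.
    img = np.pad(img, 1, mode="edge")
    # Black-and-white inversion so that the defects of interest always appear bright.
    if detect_dark_defects:
        img = img.max() - img
    return img
```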
[0056] 前処理を行った後、 S 2 3において、 欠陥検出部 1 2は、 前処理後の撮影 画像から欠陥候補を検出する欠陥候補検出処理を行う。 欠陥候補検出処理は 、 例えば、 畳み込み演算による差分フィルタ処理と、 二値化処理と、 水平方 向クロージング処理と、 垂直方向クロージング処理と、 水平方向オープニン グ処理と、 垂直方向オープニング処理と、 ラベリング処理との一連の処理に よって構成される。 After the preprocessing, in S23 the defect detection unit 12 performs defect candidate detection processing for detecting defect candidates from the preprocessed captured image. The defect candidate detection processing is composed of, for example, a series of processes: difference filter processing by a convolution operation, binarization processing, horizontal closing processing, vertical closing processing, horizontal opening processing, vertical opening processing, and labeling processing.
畳み込み演算による差分フィルタ処理は、 前処理後の撮影画像の各点すな わち各画素において、 輝度値の変化の大きさを求める処理である。 ムラや異 物などの欠陥候補がある場所では、 輝度値が周囲に比べて大きく変化してい るため、 差分フィルタ処理によって輝度値の変化の大きさを求めることで欠 陥候補を検出することができる。 具体的には、 差分フィルタ処理においては 、 前処理後の撮影画像の各画素のそれぞれを順に注目画素に設定し、 注目画 素を中心とした例えば 3 X 3画素のそれぞれの輝度値と、 フィルタとしての 例えば 3 X 3の係数行列とを用いた畳み込み演算を行う。 畳み込み演算によ って、 注目画素ごとに隣接する画素との間の輝度値の差分値が算出される。 輝度値の変化の大きい場所ほど差分値は大きくなり、 差分値が大きな場所を 欠陥候補として検出することができる。 The difference filter processing by convolution obtains, at each point (each pixel) of the preprocessed captured image, the magnitude of the change in brightness value. At locations where there is a defect candidate such as unevenness or foreign matter, the brightness value changes greatly compared with the surroundings, so defect candidates can be detected by obtaining the magnitude of the brightness change through the difference filter processing. Specifically, in the difference filter processing, each pixel of the preprocessed captured image is set in turn as the pixel of interest, and a convolution operation is performed using the brightness values of, for example, 3 x 3 pixels centered on the pixel of interest and, as the filter, a 3 x 3 coefficient matrix. The convolution operation calculates, for each pixel of interest, a difference value of the brightness with respect to the adjacent pixels. The larger the change in brightness at a location, the larger the difference value, and locations with large difference values can be detected as defect candidates.
[0057] 二値化処理は、 畳み込み演算による差分フィルタ処理後の撮影画像に基づ いて、 当該画像の差分値が閾値以上の画素を白、 閾値未満の画素を黒とした 二値画像を生成する処理である。 二値化処理によって、 輝度値の変化が大き い領域が白の画像すなわち欠陥候補として検出され、 輝度値の変化の小さい 領域すなわち欠陥候補以外は黒の画像となる。
[0057] The binarization processing generates, based on the captured image after the difference filter processing by convolution, a binary image in which pixels whose difference value is at or above a threshold are white and pixels whose difference value is below the threshold are black. Through the binarization processing, regions where the change in brightness value is large are detected as white images, that is, as defect candidates, and regions where the change in brightness value is small, that is, everything other than the defect candidates, become black.
[0058] 水平方向クロージング処理は、 二値画像の白色領域を水平方向すなわち横 方向に膨張させて収縮させる処理である。 [0058] The horizontal direction closing process is a process of expanding and contracting the white area of the binary image in the horizontal direction, that is, in the horizontal direction.
[0059] 垂直方向クロージング処理は、 二値画像の白色領域を垂直方向すなわち縦 方向に膨張させて収縮させる処理である。 [0059] The vertical direction closing process is a process of expanding and contracting the white area of the binary image in the vertical direction, that is, in the vertical direction.
[0060] 水平方向クロージング処理および垂直方向クロージング処理によれば、 白 い領域の中に存在する穴を潰したり、 近くに存在する白い領域をくっつけた りすることができるので、 ノイズを除去して欠陥候補の精度を高めることが できる。 [0060] According to the horizontal direction closing process and the vertical direction closing process, holes existing in a white area can be crushed and white areas existing nearby can be stuck together. The accuracy of defect candidates can be improved.
[0061 ] 水平方向オープニング処理は、 二値画像の白色領域を水平方向に収縮させ て膨張させる処理である。 [0061] The horizontal opening process is a process of contracting and expanding the white area of the binary image in the horizontal direction.
[0062] 垂直方向オープニング処理は、 二値画像の白色領域を垂直方向に収縮させ て膨張させる処理である。 The vertical opening process is a process of contracting and expanding the white area of the binary image in the vertical direction.
[0063] 水平方向オープニング処理および垂直方向オープニング処理によれば、 小 さい白い塊を除去したり、 近くに存在する二つの白い領域をくっつけたりす ることができるので、 ノイズを除去して欠陥候補の精度を高めることができ る。 With the horizontal opening processing and the vertical opening processing, small white lumps can be removed and two white regions located close to each other can be joined, so noise can be removed and the accuracy of the defect candidates can be improved.
[0064] ラベリング処理は、 二値画像の白色領域に画素毎に番号を付ける処理であ る。 ラベリング処理においては、 例えば、 連続するひと塊の白色領域に属す る複数の画素については、 互いに同一の番号を付与し、 異なる塊の白色領域 同士の間では、 画素に付与される番号を互いに異ならせる。 ラベリング処理 を行うことで、 後述する欠陥分別処理においてブロブ解析による欠陥の分類 が可能となる。 The labeling processing assigns numbers to the pixels of the white regions of the binary image. In the labeling processing, for example, the same number is given to the pixels belonging to one continuous white region, and different numbers are given to pixels of white regions belonging to different blobs. Performing the labeling processing makes it possible to sort the defects by blob analysis in the defect sorting processing described later.
なお、 欠陥候補検出処理を構成する上記の差分フィルタ処理、 二値化処理 、 水平方向クロージング処理、 垂直方向クロージング処理、 水平方向オープ ニング処理、 垂直方向オープニング処理およびラベリング処理は、 これらの前後を適宜入れ替えてもよい。 The order of the difference filter processing, binarization processing, horizontal closing processing, vertical closing processing, horizontal opening processing, vertical opening processing, and labeling processing that constitute the defect candidate detection processing may be changed as appropriate.
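The chain of defect-candidate detection steps described above can be sketched in Python as follows (NumPy and SciPy assumed); the Laplacian-like difference kernel, the binarization threshold, and the structuring-element sizes are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_candidates(pre, diff_threshold=10.0):
    """Difference filter, binarization, directional closing/opening, and labeling."""
    # Difference filter by convolution: a Laplacian-like 3x3 kernel measures how
    # strongly each pixel's brightness deviates from its neighbours.
    kernel = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float)
    diff = np.abs(ndimage.convolve(pre, kernel, mode="nearest"))
    # Binarization: difference values at or above the threshold become white (True).
    binary = diff >= diff_threshold
    # Horizontal then vertical closing: fill holes and join nearby white areas.
    binary = ndimage.binary_closing(binary, structure=np.ones((1, 5)))
    binary = ndimage.binary_closing(binary, structure=np.ones((5, 1)))
    # Horizontal then vertical opening: remove small white specks.
    binary = ndimage.binary_opening(binary, structure=np.ones((1, 5)))
    binary = ndimage.binary_opening(binary, structure=np.ones((5, 1)))
    # Labeling: give every connected white blob its own number for blob analysis.
    labels, num_blobs = ndimage.label(binary)
    return diff, labels, num_blobs
```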
[0065] 欠陥候補検出処理を行った後、 S 2 4において、 欠陥分別部 2 1は、 ラベリング処理された異なる番号の欠陥候補すなわち白色領域ごとに個別に欠陥分別処理を行う。 欠陥分別処理は、 検出された欠陥候補についての少なくと も 1つの物理量を算出し、 算出された物理量と物理量の閾値とを比較するこ とで欠陥候補が欠陥でないか否かを判定し、 欠陥でないと判定された欠陥候 補以外の欠陥候補を欠陥分類器 1 5に出力する処理である。 例えば、 欠陥分 別処理は、 横幅算出処理と、 縦幅算出処理と、 面積算出処理と、 輝度差ピー ク値算出処理と、 輝度差平均値算出処理と、 斜率算出処理と、 面積比算出処 理と、 スジ率算出処理と、 円形度算出処理と、 閾値判定処理と、 出力処理と の一連の処理によって構成される。 After the defect candidate detection processing, in S24 the defect sorting unit 21 performs defect sorting processing individually for each labeled defect candidate, that is, for each differently numbered white region. The defect sorting processing calculates at least one physical quantity for each detected defect candidate, determines whether or not the defect candidate is not a defect by comparing the calculated physical quantity with a threshold for that physical quantity, and outputs the defect candidates other than those determined not to be defects to the defect classifier 15. For example, the defect sorting processing is composed of a series of processes: width calculation, height calculation, area calculation, luminance-difference peak value calculation, luminance-difference average value calculation, slope rate calculation, area ratio calculation, streak rate calculation, circularity calculation, threshold determination, and output processing.
[0066] 横幅算出処理は、 欠陥候補の物理量の 1つとして、 欠陥候補の横幅を算出 する処理である。 横幅は、 例えば、 欠陥候補の重心点を通る X方向の直線が 欠陥候補の X方向の両端とそれぞれ交わる 2点を定義した場合に、 当該 2点 の間の X方向の距離として求められる。 The width calculation process calculates the width of the defect candidate as one of its physical quantities. The width is obtained, for example, as the distance in the X direction between two points defined where a straight line in the X direction passing through the center of gravity of the defect candidate intersects the two ends of the defect candidate in the X direction.
[0067] 縦幅算出処理は、 欠陥候補の物理量の 1つとして、 欠陥候補の縦幅を算出 する処理である。 縦幅は、 例えば、 欠陥候補の重心点を通る Y方向の直線が 欠陥候補の Y方向の両端とそれぞれ交わる 2点を定義した場合に、 当該 2点 の間の Y方向の距離として求められる。 The height calculation process calculates the height of the defect candidate as one of its physical quantities. The height is obtained, for example, as the distance in the Y direction between two points defined where a straight line in the Y direction passing through the center of gravity of the defect candidate intersects the two ends of the defect candidate in the Y direction.
[0068] 面積算出処理は、 欠陥候補の物理量の 1つとして、 欠陥候補の面積を算出 する処理である。 The area calculation process is a process of calculating the area of the defect candidate as one of the physical quantities of the defect candidate.
[0069] 輝度差ピーク値算出処理は、 欠陥候補の物理量の 1つとして、 欠陥候補の うち差分フィルタ処理による差分値の最大値を算出する処理である。 The brightness difference peak value calculation process is a process of calculating the maximum difference value of the defect candidates by the difference filtering process as one of the physical quantities of the defect candidates.
[0070] 輝度差平均値算出処理は、 欠陥候補の物理量の 1つとして、 欠陥候補の輝 度差の平均値を算出する処理である。 [0070] The brightness difference average value calculation process is a process of calculating the average value of the brightness differences of the defect candidates as one of the physical quantities of the defect candidates.
[0071 ] 斜率算出処理は、 欠陥候補の物理量の 1つとして、 欠陥候補の輝度差の変 化量を算出する処理である。 [0071] The slope rate calculation process is a process of calculating the variation amount of the brightness difference of the defect candidate as one of the physical quantities of the defect candidate.
[0072] 面積比算出処理は、 欠陥候補の物理量の 1つとして、 1つの欠陥候補の全 体を包含し且つ当該欠陥候補の X方向および丫方向の最外端部と外接するよ うに定義された矩形領域内における欠陥候補すなわち白色領域と黒色領域と の面積比を算出する処理である。
[0072] The area ratio calculation process calculates, as one of the physical quantities of the defect candidate, the area ratio between the defect candidate (the white region) and the black region within a rectangular region defined so as to contain the whole of one defect candidate and to circumscribe the outermost ends of the defect candidate in the X and Y directions.
[0073] スジ率算出処理は、 欠陥候補の物理量の 1つとして、 欠陥候補を 1つの矩 形のスジとみなして、 スジの長辺と短辺との比を算出する処理である。 より 詳しくは、 スジ率算出処理は、 1つの欠陥候補の全体を包含し且つ当該欠陥 候補の X方向および Y方向の最外端部と外接するように定義された矩形領域 の長辺と短辺との比を算出する処理である。 The streak rate calculation process regards the defect candidate as a single rectangular streak and calculates, as one of its physical quantities, the ratio between the long side and the short side of the streak. More specifically, the streak rate calculation process calculates the ratio between the long side and the short side of a rectangular region defined so as to contain the whole of one defect candidate and to circumscribe the outermost ends of the defect candidate in the X and Y directions.
[0074] 円形度算出処理は、 欠陥候補の物理量の 1つとして、 欠陥候補の円形度を 算出する処理である。 円形度は、 欠陥候補の面積を S、 欠陥候補の周囲長を Lとした場合に、 4πS/L² で算出してもよい。 The circularity calculation process calculates the circularity of the defect candidate as one of its physical quantities. The circularity may be calculated as 4πS/L², where S is the area of the defect candidate and L is its perimeter.
なお、 上記の横幅算出処理、 縦幅算出処理、 面積算出処理、 輝度差ピーク 値算出処理、 輝度差平均値算出処理、 斜率算出処理、 面積比算出処理、 スジ 率算出処理および円形度算出処理は、 これらの前後を適宜入れ替えてもよい The above-mentioned width calculation processing, height calculation processing, area calculation processing, brightness difference peak value calculation processing, brightness difference average value calculation processing, slope rate calculation processing, area ratio calculation processing, streak rate calculation processing and circularity calculation processing , These may be replaced before and after as appropriate
[0075] 閾値判定処理は、 異なる塊の欠陥候補ごとに個別に、 上述した各種の算出 処理で算出された欠陥候補の各物理量のそれぞれを各物理量について予め設 定された物理量ごとの判定閾値と比較することで、 欠陥候補が欠陥でないか 否かを判定する処理である。 閾値判定処理によって、 異なる塊の欠陥候補が 、 欠陥でないと判定された欠陥候補と、 欠陥でないと判定された欠陥候補以 外の欠陥候補とに振り分けられる。 欠陥でないと判定された欠陥候補以外の 欠陥候補は、 この時点ではまだ欠陥であるかどうかが確定していない。 した がって、 欠陥でないと判定された欠陥候補以外の欠陥候補には、 最終的に欠 陥であると確定される欠陥候補だけでなく、 欠陥でないかどうかが疑わしい 欠陥候補も含まれ得る。 The threshold determination process determines, individually for each separate blob of defect candidates, whether or not the defect candidate is not a defect by comparing each of the physical quantities of the defect candidate calculated in the various calculation processes described above with a determination threshold preset for each physical quantity. By the threshold determination process, the defect candidates of the different blobs are sorted into defect candidates determined not to be defects and defect candidates other than those determined not to be defects. For the defect candidates other than those determined not to be defects, it is not yet settled at this point whether they are defects. Therefore, the defect candidates other than those determined not to be defects can include not only defect candidates that are finally confirmed to be defects but also defect candidates for which it is doubtful whether they are defects.
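The sketch below illustrates how a few of the physical quantities described above could be computed per labeled blob and compared against per-quantity thresholds; the quantity subset, the dictionary-based interface, and the rule that a blob is treated as a non-defect only when every listed quantity stays below its threshold are assumptions made for illustration.

```python
import numpy as np

def sort_candidates(diff, labels, num_blobs, thresholds):
    """Compute physical quantities per labeled blob and keep the blobs that
    are not ruled out as non-defects (threshold determination + output)."""
    kept = []
    for blob_id in range(1, num_blobs + 1):
        ys, xs = np.nonzero(labels == blob_id)
        q = {}
        q["width"] = xs.max() - xs.min() + 1            # horizontal extent
        q["height"] = ys.max() - ys.min() + 1           # vertical extent
        q["area"] = ys.size                             # number of blob pixels
        q["diff_peak"] = diff[ys, xs].max()             # luminance-difference peak value
        q["diff_mean"] = diff[ys, xs].mean()            # luminance-difference average value
        q["area_ratio"] = q["area"] / (q["width"] * q["height"])   # blob vs. bounding box
        q["streak_ratio"] = max(q["width"], q["height"]) / min(q["width"], q["height"])
        # Threshold determination: treat the blob as "not a defect" only if every
        # listed quantity stays below its preset threshold (thresholds keys must
        # be a subset of the quantity names above).
        is_non_defect = all(q[name] < thresholds[name] for name in thresholds)
        if not is_non_defect:
            kept.append((blob_id, q))                   # forwarded to the defect classifier
    return kept
```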
[0076] 出力処理は、 閾値判定処理によって欠陥でないと判定された欠陥候補以外 の欠陥候補を欠陥分類器 1 5に出力し、 閾値判定処理によって欠陥でないと 判定された欠陥候補を欠陥分類器 1 5に出力しない処理である。 これにより 、 欠陥でないと判定された欠陥候補以外の欠陥候補のみが、 ニューラルネッ トワークを用いた欠陥分類の対象となり、 欠陥でないと判定された欠陥候補 は、 ニューラルネッ トワークを用いた欠陥分類の対象から除外される。
The output process outputs to the defect classifier 15 the defect candidates other than those determined not to be defects by the threshold determination process, and does not output to the defect classifier 15 the defect candidates determined not to be defects. As a result, only the defect candidates other than those determined not to be defects become targets of defect classification using the neural network, and the defect candidates determined not to be defects are excluded from defect classification using the neural network.
[0077] 欠陥分別処理を行った後、 S 2 5において、 欠陥分類器 1 5は、 欠陥分類 処理を行う。 欠陥分類処理は、 閾値判定処理によって欠陥でないと判定され た欠陥候補以外の欠陥候補について、 第 1実施形態と同様にニューラルネッ トワークを用いて何れの種類の欠陥であるのかを分類する処理である。 After the defect sorting processing, in S25 the defect classifier 15 performs defect classification processing. The defect classification processing classifies which type of defect each defect candidate is, using the neural network as in the first embodiment, for the defect candidates other than those determined not to be defects by the threshold determination processing.
[0078] 欠陥分類処理を行った後、 S 2 6において、 分類確定部 2 2は、 欠陥分類 器 1 5から出力された欠陥候補の欠陥分類を確定する分類確定処理を行う。 分 類確定部 2 2は、 欠陥分類と、 分類毎の欠陥の発生位置を示す発生位置情報 とを比較し、 分類された欠陥候補の位置が、 対応する分類の発生位置情報に 示される発生位置と合致する場合に、 欠陥分類を確定する。 一方、 分類確定 部 2 2は、 分類された欠陥候補の位置が、 対応する分類の発生位置情報に示 される発生位置と合致しない場合に、 欠陥分類を確定しない。 After the defect classification processing, in S26 the classification confirmation unit 22 performs classification confirmation processing for confirming the defect classification of the defect candidates output from the defect classifier 15. The classification confirmation unit 22 compares the defect classification with occurrence position information indicating the occurrence position of defects for each classification, and confirms the defect classification when the position of the classified defect candidate matches an occurrence position indicated in the occurrence position information of the corresponding classification. On the other hand, the classification confirmation unit 22 does not confirm the defect classification when the position of the classified defect candidate does not match an occurrence position indicated in the occurrence position information of the corresponding classification.
[0079] なお、 分類確定部 2 2は、 発生位置情報以外にも、 予め分類ごとに個別に 設定された欠陥分類の信頼度の閾値を更に用いて分類確定処理を行ってもよ い。 The classification confirming unit 22 may perform the classification confirming process by further using the threshold value of the reliability of the defect classification set individually for each classification in addition to the occurrence position information.
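A minimal sketch of this confirmation rule is shown below; the rectangular occurrence-position regions, the data structures, and the optional per-class confidence threshold are illustrative assumptions.

```python
def confirm_classification(defect_class, position, position_info,
                           confidence=None, min_confidence=None):
    """Confirm a classification only if the candidate lies inside a region where
    that class of defect is known to occur (optionally also requiring that the
    network's confidence clears a per-class threshold)."""
    x, y = position
    inside = any(x0 <= x <= x1 and y0 <= y <= y1
                 for (x0, y0, x1, y1) in position_info.get(defect_class, []))
    if confidence is not None and min_confidence is not None:
        inside = inside and confidence >= min_confidence.get(defect_class, 0.0)
    return inside

# Example: "coating_streak" defects are known to occur near the left edge of the panel.
position_info = {"coating_streak": [(0, 0, 200, 4000)]}
confirmed = confirm_classification("coating_streak", (150, 1200), position_info)
```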
[0080] 分類確定処理を行った後、 S 2 7において、 欠陥判定部 2 3は、 欠陥分類 が確定された欠陥候補が欠陥であるか否かを判定する欠陥判定処理を行う。 欠陥判定部 2 3は、 欠陥分類が確定された欠陥候補と、 分類毎の欠陥の発生 履歴を示す発生履歴情報とを比較することで、 欠陥候補が欠陥であるか否か を判定する。 欠陥判定部 2 3は、 欠陥分類が確定された欠陥候補が、 対応す る分類の発生履歴情報に示される発生履歴と整合する場合に、 欠陥候補が欠 陥であると判定する。 発生履歴は、 例えば、 欠陥候補の連続発生回数であっ てもよい。 一方、 欠陥判定部 2 3は、 欠陥分類が確定された欠陥候補が、 対 応する分類の発生履歴情報に示される発生履歴と整合しない場合に、 欠陥候 補が欠陥でないと判定する。 なお、 欠陥判定部 2 3は、 欠陥分類が確定され なかった欠陥候補については、 どの分類にも属しない欠陥として判定する。 After the classification confirmation processing, in S27 the defect determination unit 23 performs defect determination processing for determining whether or not a defect candidate whose defect classification has been confirmed is a defect. The defect determination unit 23 determines whether or not the defect candidate is a defect by comparing the defect candidate whose defect classification has been confirmed with occurrence history information indicating the occurrence history of defects for each classification. The defect determination unit 23 determines that the defect candidate is a defect when the defect candidate whose defect classification has been confirmed is consistent with the occurrence history indicated in the occurrence history information of the corresponding classification. The occurrence history may be, for example, the number of consecutive occurrences of the defect candidate. On the other hand, the defect determination unit 23 determines that the defect candidate is not a defect when it is not consistent with the occurrence history indicated in the occurrence history information of the corresponding classification. A defect candidate whose defect classification was not confirmed is judged by the defect determination unit 23 as a defect that does not belong to any classification.
[0081 ] なお、 欠陥判定部 2 3は、 発生履歴情報以外にも、 欠陥の濃さすなわち輝 度差を示す欠陥強度を更に用いて欠陥判定処理を行ってもよい。
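The defect judgment based on occurrence history could be sketched as follows; the use of a consecutive-occurrence count and the optional intensity check follow the description above, while the data structures, thresholds, and the example class name are assumptions.

```python
def judge_defect(defect_class, history_counts, min_consecutive,
                 intensity=None, min_intensity=None):
    """Report a confirmed candidate as a defect only when its class has occurred
    a required number of times in a row (optionally also requiring that the
    defect intensity, i.e. the brightness difference, is large enough)."""
    consecutive = history_counts.get(defect_class, 0) + 1       # include the current occurrence
    is_defect = consecutive >= min_consecutive.get(defect_class, 1)
    if intensity is not None and min_intensity is not None:
        is_defect = is_defect and intensity >= min_intensity.get(defect_class, 0.0)
    return is_defect

# Example: report a "nozzle_streak" only after it has been seen on 3 consecutive substrates.
print(judge_defect("nozzle_streak", {"nozzle_streak": 2}, {"nozzle_streak": 3}))
```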
[0082] 第 2の実施形態によれば、 ニューラルネッ トワークを用いた欠陥分類と、 ニューラルネッ トワーク以外の判定基準を用いた欠陥判定手法とを組み合わ せることで、 ニューラルネッ トワークのみを用いる場合と比較して精度の高 い欠陥判定を行うことができる。 According to the second embodiment, by combining defect classification using a neural network with a defect determination technique using criteria other than the neural network, defect determination can be performed with higher accuracy than when only the neural network is used.
[0083] (変形形態) [0083] (Variations)
以上説明した実施形態に限定されることなく、 種々の変形や変更が可能で あって、 それらも本発明の範囲内である。 The present invention is not limited to the embodiment described above, and various modifications and changes can be made, which are also within the scope of the present invention.
[0084] ( 1 ) 第 1実施形態において、 欠陥検出部 1 2によって欠陥候補を検出し、 画像切り出し部 1 3によって画像を切り出すことにより、 欠陥分類部 1 5による処理を軽くする例を挙げて説明した。 これに限らず、 カラーフィ ルタの撮影画像を直接、 欠陥分類部 1 5によって処理する構成としてもよい 。 この場合、 欠陥分類部 1 5のニューラルネッ トワークにおいて、 画像全体 から欠陥分類を出力する。 (1) In the first embodiment, an example was described in which the defect detection unit 12 detects defect candidates and the image cutout unit 13 cuts out images, thereby lightening the processing by the defect classification unit 15. The configuration is not limited to this, and the captured image of the color filter may be processed directly by the defect classification unit 15. In this case, the neural network of the defect classification unit 15 outputs the defect classification from the whole image.
[0085] ( 2 ) 第 1実施形態において、 欠陥検出部 1 2は、 特許第 4 3 6 3 9 5 3号 公報に開示されている手法を用いて欠陥候補の検出を行う例を挙げて説明し た。 これに限らず、 従来公知の欠陥検出の手法を用いて欠陥候補の検出を行 う構成としてもよい。 (2) In the first embodiment, an example was described in which the defect detection unit 12 detects defect candidates using the technique disclosed in Japanese Patent No. 4363953. The configuration is not limited to this, and defect candidates may be detected using a conventionally known defect detection technique.
[0086] 本発明は、 カラーフィルタ以外の対象物の欠陥検査にも適用することがで きる。 例えば、 フィルム、 ガラス、 シリコン、 金属等に、 塗装や自己発光に よって検査対象外観を形成した対象物の欠陥を検査するために本発明を適用 することもできる。 検査対象外観は、 可視光下で検出できるものに限らず、 赤外光下や紫外光下で検出できるものであってもよい。 また、 医療用のレン トゲン画像の欠陥を検査するために本発明を適用することもできる。 The present invention can also be applied to defect inspection of objects other than color filters. For example, the present invention can be applied to a film, glass, silicon, metal or the like for inspecting a defect of an object having an appearance to be inspected by coating or self-luminous. The appearance of the inspection object is not limited to one that can be detected under visible light, and may be one that can be detected under infrared light or ultraviolet light. The present invention can also be applied to inspect defects in medical radiographic images.
[0087] なお、 実施形態及び変形形態は、 適宜組み合わせて用いることもできるが 、 詳細な説明は省略する。 また、 本発明は以上説明した実施形態によって限 定されることはない。 The embodiments and modified embodiments may be used in combination as appropriate, but detailed description thereof will be omitted. Further, the present invention is not limited to the embodiments described above.
符号の説明 Explanation of symbols
[0088] 1 カラーフィルタ検査装置
1 1 撮影部 1 1 Shooting section
1 2 欠陥検出部 1 2 Defect detection section
1 3 画像切り出し部 1 3 Image cropping section
1 4 入力部 1 4 Input section
1 5 欠陥分類部 1 5 Defect classification section
1 6, 1 9 学習モデル 1 6, 1 9 Learning model
1 7 学習モデル構築部 1 7 Learning model construction department
1 8 学習データ 1 8 learning data
1 5 1 入力層 1 5 1 Input layer
1 52 畳み込み層 1 52 Convolutional layer
1 53 プーリング層 1 53 Pooling layer
1 54 全結合層 1 54 Fully coupled layer
1 55 出力層
1 55 Output layer
Claims
[請求項 1 ] カラーフィルタの撮影画像に基づいて欠陥候補を検出する欠陥検出 部と、 [Claim 1] A defect detection unit that detects a defect candidate based on a captured image of a color filter,
前記検出された欠陥候補についての少なくとも 1 つの物理量と、 前 記物理量の閾値とを比較することで、 前記欠陥候補が欠陥でないか否 かを判定し、 欠陥でないと判定された欠陥候補以外の欠陥候補を、 欠 陥分類のためにニューラルネッ トワークに出力する欠陥出力部と、 を備えるカラーフィルタ検査装置。 By comparing at least one physical quantity of the detected defect candidate with the threshold value of the above-mentioned physical quantity, it is determined whether or not the defect candidate is a defect, and a defect other than the defect candidate determined not to be a defect is detected. A color filter inspection device comprising: a defect output unit that outputs candidates to a neural network for defect classification.
[請求項 2] カラーフィルタの撮影画像に基づいて欠陥候補を検出する欠陥検出 部と、 [Claim 2] A defect detection unit for detecting a defect candidate based on a captured image of a color filter,
前記検出された欠陥候補を、 欠陥分類のためにニューラルネッ トワ —クに出力する欠陥出力部と、 A defect output unit that outputs the detected defect candidates to a neural network for defect classification,
カラーフィルタの撮影画像の第 1 の分析結果に基づいて前記ニュー ラルネッ トワークから出力された前記欠陥分類の結果を確定する分類 確定部と、 A classification determination unit that determines the result of the defect classification output from the neural network based on the first analysis result of the captured image of the color filter,
を備えるカラーフィルタ検査装置。 A color filter inspection device including.
[請求項 3] 前記第 1 の分析結果は、 分類毎の欠陥の発生位置を示す発生位置情 報を含む、 請求項 2に記載のカラーフィルタ検査装置。 3. The color filter inspection device according to claim 2, wherein the first analysis result includes generation position information indicating a generation position of a defect for each classification.
[請求項 4] カラーフィルタの撮影画像の第 2の分析結果に基づいて前記欠陥分 類の結果が確定された欠陥候補が欠陥であるか否かを判定する欠陥判 定部をさらに備える、 請求項 2又は 3に記載のカラーフィルタ検査装 置。 4. A defect determination unit for determining whether or not the defect candidate for which the result of the defect classification has been determined is a defect based on the second analysis result of the captured image of the color filter. The color filter inspection device according to item 2 or 3.
[請求項 5] 前記第 2の分析結果は、 分類毎の欠陥の発生履歴を示す発生履歴情 報を含む、 請求項 4に記載のカラーフィルタ検査装置。 5. The color filter inspection device according to claim 4, wherein the second analysis result includes generation history information indicating a generation history of defects for each classification.
[請求項 6] 前記ニューラルネッ トワークは、 [Claim 6] The neural network is
前記撮影画像から畳み込み処理により第 1特徴マツプを生成する畳 み込み層と、 A convolutional layer that generates a first feature map from the captured image by convolutional processing;
プーリング処理を行い前記第 1特徴マップのサイズ又は変化を低減して第 2特徴マップを生成するプーリング層と、 A pooling layer that performs pooling processing to reduce the size or change of the first feature map and generates a second feature map;
前記欠陥分類の結果を出力する出力層と、 An output layer for outputting the result of the defect classification,
を備える、 請求項 1乃至 5のいずれか 1項に記載のカラーフィルタ 検査装置。 The color filter inspection device according to any one of claims 1 to 5, further comprising:
[請求項 7] 前記欠陥検出部で検出された前記欠陥候補を含む範囲を前記撮影画 像から切り出す画像切り出し部をさらに備える、 請求項 1乃至 6のい ずれか 1項に記載のカラーフィルタ検査装置。 7. The color filter inspection according to any one of claims 1 to 6, further comprising an image cutout unit that cuts out a range including the defect candidate detected by the defect detection unit from the captured image. apparatus.
[請求項 8] 前記欠陥検出部は、 前記撮影画像を複数方向から一次微分したデー 夕から欠陥候補を含む範囲を求める、 請求項 7に記載のカラーフィル 夕検査装置。 8. The color filter inspection apparatus according to claim 7, wherein the defect detection unit obtains a range including a defect candidate from a data obtained by first-order differentiating the captured image from a plurality of directions.
[請求項 9] 対象物の撮影画像に基づいて欠陥候補を検出する欠陥検出部と、 前記検出された欠陥候補についての少なくとも 1つの物理量と、 前 記物理量の閾値とを比較することで、 前記欠陥候補が欠陥でないか否 かを判定し、 欠陥でないと判定された欠陥候補以外の欠陥候補を、 欠 陥分類のためにニューラルネッ トワークに出力する欠陥出力部と、 を備える検査装置。 [Claim 9] By comparing a defect detection unit that detects a defect candidate based on a captured image of an object, at least one physical quantity of the detected defect candidate, and a threshold of the physical quantity described above, An inspection apparatus comprising: a defect output unit that determines whether or not a defect candidate is a defect and outputs defect candidates other than the defect candidate determined to be not a defect to a neural network for defect classification.
[請求項 10] 対象物の撮影画像に基づいて欠陥候補を検出する欠陥検出部と、 前記検出された欠陥候補を、 欠陥分類のためにニューラルネッ トワ —クに出力する欠陥出力部と、 [Claim 10] A defect detection unit that detects a defect candidate based on a captured image of an object; a defect output unit that outputs the detected defect candidate to a neural network for defect classification;
対象物の撮影画像の第 1の分析結果に基づいて前記ニューラルネッ トワークから出力された前記欠陥分類の結果を確定する分類確定部と を備える検査装置。 An inspection apparatus, comprising: a classification determination unit that determines a result of the defect classification output from the neural network based on a first analysis result of a captured image of the object.
[請求項 1 1 ] 欠陥検出部が、 カラーフィルタの撮影画像に基づいて欠陥候補を検 出する工程と、 [Claim 11] A step in which the defect detection unit detects a defect candidate based on a captured image of a color filter,
欠陥出力部が、 前記検出された欠陥候補についての少なくとも 1つ の物理量と、 前記物理量の閾値とを比較することで、 前記欠陥候補が 欠陥でないか否かを判定し、 欠陥でないと判定された欠陥候補以外の欠陥候補を、 欠陥分類のためにニューラルネッ トワークに出力する工程と、 A step in which the defect output unit determines whether or not the defect candidate is not a defect by comparing at least one physical quantity of the detected defect candidate with a threshold of the physical quantity, and outputs defect candidates other than the defect candidates determined not to be defects to a neural network for defect classification,
を備えるカラーフィルタ検査方法。 A method for inspecting a color filter, comprising:
[請求項 12] 欠陥検出部が、 カラーフィルタの撮影画像に基づいて欠陥候補を検 出する工程と、 [Claim 12] a step in which the defect detection unit detects a defect candidate based on a captured image of the color filter,
欠陥出力部が、 前記検出された欠陥候補を、 欠陥分類のためにニュ —ラルネッ トワークに出力する工程と、 The defect output unit outputs the detected defect candidates to a neural network for defect classification,
分類確定部が、 カラーフィルタの撮影画像の第 1 の分析結果に基づ いて前記ニューラルネッ トワークから出力された前記欠陥分類の結果 を確定する工程と、 A step of determining the defect classification result output from the neural network based on the first analysis result of the captured image of the color filter,
を備えるカラーフィルタ検査方法。 A method for inspecting a color filter, comprising:
[請求項 13] 撮影部が、 前記カラーフィルタを撮影する工程と、 [Claim 13] A step of photographing the color filter by a photographing unit,
前記ニューラルネッ トワークが、 前記撮影画像から畳み込み処理に より第 1特徴マップを生成する畳み込み工程と、 A convolution step in which the neural network generates a first feature map from the captured image by convolution processing;
プーリング処理を行い前記第 1特徴マップのサイズ又は変化を低減 して第 2特徴マップを生成するプーリングエ程と、 をさらに備える、 請求項 1 1又は 1 2に記載のカラーフィルタ検査 方法。 The color filter inspection method according to claim 11 or 12, further comprising: a pooling step of performing a pooling process to reduce the size or change of the first feature map to generate a second feature map.
[請求項 14] 欠陥検出部が、 対象物の撮影画像に基づいて欠陥候補を検出するエ 程と、 [Claim 14] A step in which the defect detection unit detects a defect candidate based on a captured image of an object,
欠陥出力部が、 前記検出された欠陥候補についての少なくとも 1つ の物理量と、 前記物理量の閾値とを比較することで、 前記欠陥候補が 欠陥でないか否かを判定し、 欠陥でないと判定された欠陥候補以外の 欠陥候補を、 欠陥分類のためにニューラルネッ トワークに出力する工程と、 A step in which the defect output unit determines whether or not the defect candidate is not a defect by comparing at least one physical quantity of the detected defect candidate with a threshold of the physical quantity, and outputs defect candidates other than the defect candidates determined not to be defects to a neural network for defect classification,
を備える検査方法。 Inspection method comprising.
[請求項 1 5] 欠陥検出部が、 対象物の撮影画像に基づいて欠陥候補を検出するエ 程と、
欠陥出力部が、 前記検出された欠陥候補を、 欠陥分類のためにニユ —ラルネッ トワークに出力する工程と、 A defect output unit outputs the detected defect candidates to a neural network for defect classification;
分類確定部が、 対象物の撮影画像の第 1の分析結果に基づいて前記 ニューラルネッ トワークから出力された前記欠陥分類の結果を確定す る工程と、 A step in which the classification determining unit determines the result of the defect classification output from the neural network based on the first analysis result of the captured image of the object;
を備える検査方法。
Inspection method comprising.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021502393A JP7415286B2 (en) | 2019-02-28 | 2020-02-28 | Color filter inspection equipment, inspection equipment, color filter inspection method and inspection method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019036102 | 2019-02-28 | ||
JP2019-036102 | 2019-02-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020175666A1 (en) | 2020-09-03 |
Family
ID=72238882
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/008236 WO2020175666A1 (en) | 2019-02-28 | 2020-02-28 | Color filter inspection device, inspection device, color filter inspection method, and inspection method |
Country Status (3)
Country | Link |
---|---|
JP (1) | JP7415286B2 (en) |
TW (1) | TWI822968B (en) |
WO (1) | WO2020175666A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113469997B (en) * | 2021-07-19 | 2024-02-09 | 京东科技控股股份有限公司 | Method, device, equipment and medium for detecting plane glass |
2020
- 2020-02-28 TW TW109106635A patent/TWI822968B/en active
- 2020-02-28 JP JP2021502393A patent/JP7415286B2/en active Active
- 2020-02-28 WO PCT/JP2020/008236 patent/WO2020175666A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06273349A (en) * | 1993-03-23 | 1994-09-30 | Sumitomo Metal Ind Ltd | Flaw decision system |
JP2005127823A (en) * | 2003-10-23 | 2005-05-19 | Dainippon Printing Co Ltd | Method and apparatus for measuring nonuniformity in optical characteristics, and method and apparatus for determining quality of product by utilizing same |
JP2013167596A (en) * | 2012-02-17 | 2013-08-29 | Honda Motor Co Ltd | Defect inspection device, defect inspection method, and program |
JP2018005639A (en) * | 2016-07-04 | 2018-01-11 | タカノ株式会社 | Image classification device, image inspection device, and program |
JP2018005640A (en) * | 2016-07-04 | 2018-01-11 | タカノ株式会社 | Classifying unit generation device, image inspection device, and program |
Non-Patent Citations (1)
Title |
---|
TSENG, DINCHANG ET AL.: "DEFECT CLASSIFICATION FOR LCD COLOR FILTERS USING NEURAL-NETWORK DECISION TREE CLASSIFIER", INTERNATIONAL JOURNAL OF INNOVATIVE COMPUTING, INFORMATION AND CONTROL, vol. 7, no. 7, 2011, pages 3695 - 3707, XP055735696, ISSN: 1349-4198 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023282043A1 (en) * | 2021-07-08 | 2023-01-12 | Jfeスチール株式会社 | Inspection method, classification method, management method, steel material manufacturing method, training model generation method, training model, inspection device, and steel material manufacturing facility |
JPWO2023282043A1 (en) * | 2021-07-08 | 2023-01-12 | ||
JP7459957B2 (en) | 2021-07-08 | 2024-04-02 | Jfeスチール株式会社 | Inspection method, classification method, management method, steel manufacturing method, learning model generation method, learning model, inspection device, and steel manufacturing equipment |
JP7510132B1 (en) | 2023-11-22 | 2024-07-03 | 株式会社デンケン | Visual inspection device, machine learning model learning method, teaching image generation method and program |
Also Published As
Publication number | Publication date |
---|---|
TW202034421A (en) | 2020-09-16 |
JP7415286B2 (en) | 2024-01-17 |
TWI822968B (en) | 2023-11-21 |
JPWO2020175666A1 (en) | 2020-09-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102254773B1 (en) | Automatic decision and classification system for each defects of building components using image information, and method for the same | |
Akagic et al. | Pavement crack detection using Otsu thresholding for image segmentation | |
Choudhary et al. | Crack detection in concrete surfaces using image processing, fuzzy logic, and neural networks | |
JP5546317B2 (en) | Visual inspection device, visual inspection discriminator generation device, visual inspection discriminator generation method, and visual inspection discriminator generation computer program | |
Mathavan et al. | Use of a self-organizing map for crack detection in highly textured pavement images | |
CN114155181B (en) | Automatic optimization of inspection schemes | |
CN110779928B (en) | Defect detection device and method | |
Bong et al. | Vision-based inspection system for leather surface defect detection and classification | |
WO2020175666A1 (en) | Color filter inspection device, inspection device, color filter inspection method, and inspection method | |
CN112330593A (en) | Building surface crack detection method based on deep learning network | |
CN113095438A (en) | Wafer defect classification method and device, system, electronic equipment and storage medium thereof | |
US20200265575A1 (en) | Flaw inspection apparatus and method | |
CN115953373B (en) | Glass defect detection method, device, electronic equipment and storage medium | |
JP7298176B2 (en) | Defect inspection device and trained model | |
Ibrahim et al. | Characterization of cracking in pavement distress using image processing techniques and k-nearest neighbour | |
Mir et al. | Machine learning-based evaluation of the damage caused by cracks on concrete structures | |
CN112200790B (en) | Cloth defect detection method, device and medium | |
Dow et al. | Skeleton-based noise removal algorithm for binary concrete crack image segmentation | |
JP2024505874A (en) | System and method for detecting paint defects using machine learning | |
JPH08189904A (en) | Surface defect detector | |
Revathy et al. | Fabric defect detection and classification via deep learning-based improved Mask RCNN | |
Kuo et al. | Automated inspection of micro-defect recognition system for color filter | |
CN117455917B (en) | Establishment of false alarm library of etched lead frame and false alarm on-line judging and screening method | |
CN113016023B (en) | Information processing method and computer-readable non-transitory recording medium | |
Singh et al. | Segmentation technique for the detection of Micro cracks in solar cell using support vector machine |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20762839 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021502393 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20762839 Country of ref document: EP Kind code of ref document: A1 |