
JP2008139074A - Method for detecting defect in image - Google Patents

Method for detecting defect in image

Info

Publication number
JP2008139074A
JP2008139074A
Authority
JP
Japan
Prior art keywords
pixel
image
feature
class
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2006323457A
Other languages
Japanese (ja)
Inventor
Munetoshi Numata
宗敏 沼田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lossev Technology Corp
Original Assignee
Lossev Technology Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lossev Technology Corp filed Critical Lossev Technology Corp
Priority to JP2006323457A priority Critical patent/JP2008139074A/en
Publication of JP2008139074A publication Critical patent/JP2008139074A/en
Pending legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To accurately discriminate defects at pattern edges in a method that compares a sample image with a target image and detects defects in the target image pixel by pixel.
SOLUTION: A feature vector consisting of feature quantities such as density values is extracted for each pixel, the position of the feature vector relative to a previously obtained boundary line or surface separating a normal class from an abnormal class is determined for each pixel, and the pixel is judged defective if the feature vector falls in the abnormal class.
COPYRIGHT: (C)2008,JPO&INPIT

Description

The present invention relates to an image defect detection method that compares a plurality of learning samples with a target by image processing to detect defects, for example in the inspection of objects such as pattern inspection of printed circuit boards, inspection for wrong or foreign components, and inspection of package printing errors.

FIG. 1 shows a conventional image defect detection method in which a good-product master and a target are compared by image processing to detect defects. The good-product master image and the target image are aligned in advance; the alignment uses image-processing techniques such as the residual correlation method or the normalized correlation method. The target image and the master image are superimposed, the difference is taken pixel by pixel, and a pixel is judged defective if the absolute value of the difference exceeds a certain threshold (a minimal sketch of this comparison is given below). This method, however, has the following three problems.
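As a concrete illustration (not part of the patent text), the following minimal sketch carries out this conventional comparison with one good-product master, one pre-aligned grayscale target, and a single global threshold. The function name and the threshold value of 30 are assumptions made only for the example.

```python
import numpy as np

def detect_defects_global(master: np.ndarray, target: np.ndarray,
                          threshold: float = 30.0) -> np.ndarray:
    """Conventional scheme of FIG. 1: per-pixel absolute difference against a
    single good-product master, judged with one threshold for the whole image.
    Both images are assumed pre-aligned, same shape, grayscale."""
    diff = np.abs(target.astype(np.float64) - master.astype(np.float64))
    return diff > threshold  # True marks a pixel judged defective

# Usage (illustrative): mask = detect_defects_global(master_img, target_img)
```

Because the same threshold applies everywhere, pixels near pattern edges, where the density value varies strongly from target to target, are flagged constantly; this is exactly problem (1) below.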

(1) In both the good-product master image and the target image, the density value changes sharply at pattern edges, so the absolute difference there is large. Because the dimensions of pattern edges have tolerances, the edge position varies from target to target, even within the size of one pixel of resolution. Such variation in pattern position appears as variation in the density values of pixels near the edge. Consequently, if a single threshold is used for the entire image, defective pixels are reported in large numbers at pattern edges.

(2) To solve problem (1), a threshold can be assigned to each pixel. Setting a threshold manually for every pixel, however, is cumbersome and impractical.

(3) To solve problem (2), the single good-product master can be replaced by a plurality of learning samples. The learning sample images are superimposed, the mean μij and the standard deviation σij are computed for each pixel (i, j), and μij ± cσij is used as the per-pixel threshold, where c is a constant. This can be done automatically by image processing, and because the per-pixel variation of the density values is absorbed by the term cσij, defective pixels no longer occur in large numbers, even at edges. At edges, however, the density values vary widely, so the standard deviation σij is also large; while this suppresses the flood of defective pixels, it also loosens the judgment, and a defect located on an edge is therefore easily overlooked.
JP 2001-175865 A, JP H10-253544 A, JP H11-242746 A

The object of the present invention is therefore to correctly discriminate defects, particularly at pattern edges, in a method that compares a sample image with a target image and detects defects in the target image pixel by pixel.

To achieve this object, the invention of claim 1 provides a method of comparing a sample image with a target image and detecting defects in the target image pixel by pixel, in which a feature vector consisting of feature quantities such as density values is extracted for each pixel, the position of the feature vector relative to a previously obtained boundary line or surface separating a normal class from an abnormal class is determined for each pixel, and the pixel is judged defective when the feature vector lies in the abnormal class.

In the invention of claim 2, based on claim 1, a plurality of sample images is used, a feature vector is obtained for each pixel of each image, and the boundary line or surface separating the normal class from the abnormal class is obtained in advance for each pixel by learning.

In the invention of claim 3, based on claim 1, the target image and the plurality of sample images are aligned in advance.

The invention of claim 4 is characterized, based on claim 2, in that the per-pixel boundary line or surface separating the normal class from the abnormal class is an equiprobability ellipse or an equiprobability ellipsoid.

The invention of claim 5 is characterized, based on claim 2, in that the plurality of sample images are images of samples drawn at random from the manufacturing process of the object under inspection.

In the invention of claim 6, based on claim 2, the mean and variance of each feature quantity and the covariance between feature quantities are updated and stored each time a sample image is added, and the boundary line or surface is computed from these stored values after learning is finished.

In the invention of claim 7, based on claim 1, the sample image and the target image are both color images, and the feature vector consists of the r (red), g (green), and b (blue) density values.

Because the invention of claim 1 extracts, for each pixel, a feature vector consisting of feature quantities such as density values, determines for each pixel the position of that vector relative to the previously obtained boundary line or surface separating the normal class from the abnormal class, and judges the pixel defective when the vector lies in the abnormal class, abnormal pixels located at image edges can be found accurately.

Because the invention of claim 2 uses a plurality of sample images, obtains a feature vector for each pixel of each image, and learns the per-pixel boundary line or surface separating the normal class from the abnormal class in advance, an appropriate boundary that reflects the plurality of sample images is obtained.

Because the invention of claim 3 aligns the target image and the plurality of sample images in advance, the subsequent processing, namely the extraction of feature vectors and the judgment of defective pixels, can be performed easily.

Because the invention of claim 4 uses an equiprobability ellipse or equiprobability ellipsoid as the per-pixel boundary between the normal class and the abnormal class, pixels that the conventional method wrongly accepted as normal despite being defective can be judged correctly as defective.

Because the invention of claim 5 uses, as the plurality of sample images, images of samples drawn at random from the manufacturing process of the object under inspection, a run of consecutive defective samples cannot inflate the equiprobability ellipse or ellipsoid to the point where the boundary line or surface encloses the feature vectors of defective samples.

Because the invention of claim 6 updates and stores the mean and variance of each feature quantity and the covariance between feature quantities each time a sample image is added, and computes the boundary line or surface from these stored values after learning, the required storage is greatly reduced, and the image defect detection method of claim 2 can be run even on a computer with little memory.

By taking the r (red), g (green), and b (blue) density values as the feature vector, the invention of claim 7 allows the invention of claim 1 to be applied to color sample images and color target images, with the same effect to be expected.

FIG. 2 shows the image defect detection method of the present invention, which detects defects in a target image pixel by pixel. For a given pixel (i, j), let the feature quantities be x1, x2, ..., xm; for example, x1 may be the density value, x2 the edge strength, and x3 the mean density of the neighboring pixels. The number of feature quantities m may be any integer of 2 or more; in FIG. 2, m = 3 for convenience. When the sample images and the target image are both color images, the r (red), g (green), and b (blue) density values can be used as the feature quantities (claim 7).
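As a hedged sketch of what such per-pixel feature extraction might look like (the patent does not prescribe an implementation; the gradient-magnitude edge strength and the 3x3 neighborhood mean are illustrative choices), the following builds an (H, W, 3) feature array for m = 3:

```python
import numpy as np

def extract_feature_images(gray: np.ndarray) -> np.ndarray:
    """Per-pixel features from a grayscale image: x1 = density value,
    x2 = edge strength (gradient magnitude), x3 = 3x3 neighborhood mean.
    Returns an (H, W, 3) float array; the choice of features is illustrative."""
    img = gray.astype(np.float64)
    gy, gx = np.gradient(img)                  # x2: intensity gradient
    edge = np.hypot(gx, gy)
    padded = np.pad(img, 1, mode="edge")       # x3: 3x3 mean via shifted copies
    h, w = img.shape
    neigh = sum(padded[dy:dy + h, dx:dx + w]
                for dy in range(3) for dx in range(3)) / 9.0
    return np.stack([img, edge, neigh], axis=-1)
```

For color images (claim 7), the feature array can instead simply be the three channels, e.g. np.stack([r, g, b], axis=-1).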

Assume that a normal class and an abnormal class exist in the x1x2...xm feature space. When the feature quantities of a pixel of the target image form the feature vector X = (x1, x2, ..., xm), the pixel is judged normal if X belongs to the normal class and abnormal if X belongs to the defective class.

Specifically, a feature vector consisting of feature quantities such as the density value, the edge strength, and the mean density of neighboring pixels is extracted for each pixel, the position of the feature vector relative to the previously obtained boundary line or surface separating the normal class from the abnormal class is determined for each pixel, and the pixel is judged defective when the feature vector lies in the abnormal class (claim 1).

The boundary is prepared in advance: using a plurality of sample images, a feature vector is obtained for each pixel of each image, and the per-pixel boundary line or surface separating the normal class from the abnormal class is learned (claim 2). This per-pixel boundary is an equiprobability ellipse or equiprobability ellipsoid (claim 4). The plurality of sample images are images of samples drawn at random from the manufacturing process of the object under inspection (claim 5).

FIG. 3 illustrates the conventional technique described in paragraph 0005. With this technique it is difficult to judge defects at pattern edges correctly because, even when a defect lies on an edge, the standard deviation σij of the sample images at an edge pixel (i, j) is inherently large; the density value fij of the defective pixel is therefore judged normal even though it differs from the per-pixel mean μij of the sample images.

In the method of the present invention shown in FIG. 4, by contrast, a feature space is used for each pixel, and a pixel is judged normal or abnormal according to whether the feature vector X extracted for that pixel lies inside or outside the decision boundary, so this problem does not arise. In FIG. 4, m = 2, with the density value of the edge pixel as feature x1 and its edge strength as feature x2. Between the density value and the edge strength of a pixel on an edge there is usually a positive or a negative correlation; both signs occur because the edge strength is taken as the absolute value of a difference. Since the distribution of the density values of a given pixel over the sample images can be regarded as a normal distribution centered on the mean μij, the distribution of that pixel's edge strength can likewise be regarded as normal (with, of course, a mean and variance different from those of the density value). The curves of equal probability are therefore ellipses, and the decision boundary line is an equiprobability ellipse. With three feature quantities the decision boundary is a surface, an equiprobability ellipsoid; with four or more it is a hypersurface, an equiprobability hyperellipsoid. In this specification, for convenience, when there are three or more feature quantities no distinction is made between surfaces and hypersurfaces, or between ellipsoids and hyperellipsoids, and the decision boundary is simply called a surface, an equiprobability ellipsoid.

With the conventional technique of FIG. 3 there is only one feature quantity, so abnormal edge pixels could not be detected; with the technique of the present invention in FIG. 4 there are two feature quantities, and by taking an ellipse as the decision boundary line, abnormal edge pixels (those whose feature vector X belongs to the abnormal class) can also be detected.

One might then think that the method of the present invention is nothing more than the conventional method of FIG. 3 with the number of feature quantities m increased from 1 to 2 or more, but that is not the case. The conventional method of FIG. 3 is a statistical method that sets the lower limit to μ − cσ and the upper limit to μ + cσ; its flowchart is shown in FIG. 5. Applying the same statistical idea to two feature quantities gives the flowchart of FIG. 6.
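A sketch of this purely statistical per-pixel test, covering the m = 1 case of FIG. 5 and the m = 2 case of FIG. 6 in one function (the per-pixel means and standard deviations are assumed to have been computed beforehand from the n learning samples; names are illustrative):

```python
import numpy as np

def box_test_is_normal(x: np.ndarray, mu: np.ndarray, sigma: np.ndarray,
                       c: float = 3.0) -> bool:
    """Conventional per-pixel statistical judgment: the pixel is normal only if
    every feature x_k lies within [mu_k - c*sigma_k, mu_k + c*sigma_k].
    x, mu, sigma are length-m vectors for one pixel position (i, j)."""
    return bool(np.all(np.abs(x - mu) <= c * sigma))
```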

With such a mere combination of statistical tests, however, the feature vector X is judged normal even though it belongs to an abnormal pixel (FIG. 7). In the right-hand part of FIG. 7, X lies inside the normal-pixel decision region of the conventional method (the shaded area) and is judged normal; in fact, only vectors inside the decision boundary line shown on the left of FIG. 7 are normal.

The method of the present invention, by contrast, is not a mere combination of statistical tests: it uses an ellipse (or ellipsoid) learned from the samples as the decision boundary line (or surface), so a feature vector X lying outside that boundary is correctly judged to be an abnormal pixel.
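The difference can be seen in a small numerical sketch (the numbers are illustrative, not from the patent): for two strongly correlated features, a vector can pass the per-feature ±3σ test of FIG. 6 yet lie far outside the 3σ equiprobability ellipse measured by the Mahalanobis distance, which is the situation of FIG. 7.

```python
import numpy as np

# Learned statistics of one edge pixel for two correlated features (illustrative)
mu = np.array([100.0, 50.0])
cov = np.array([[100.0, 90.0],     # correlation coefficient 0.9
                [90.0, 100.0]])
sigma = np.sqrt(np.diag(cov))

x = np.array([130.0, 20.0])        # deviates by +3 sigma and -3 sigma

inside_box = bool(np.all(np.abs(x - mu) <= 3.0 * sigma))   # True: box test says "normal"
d2 = float((x - mu) @ np.linalg.inv(cov) @ (x - mu))        # squared Mahalanobis distance
inside_ellipse = d2 <= 3.0 ** 2                             # False: ellipse says "defective"
print(inside_box, round(d2, 1), inside_ellipse)             # True 180.0 False
```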

An embodiment of the present invention is described below with reference to the drawings. First, n mutually aligned sample images are prepared (FIG. 8), and feature quantities such as the density value and the edge strength are extracted for each pixel; call them x1, x2, ..., xm. For a given pixel of interest A, the feature quantities of each sample image are plotted in the m-dimensional x1x2...xm feature space (FIG. 9). Since the feature quantities of the samples can generally be approximated by a normal distribution, a curve or surface of equal probability Φ is an ellipse or an ellipsoid.

FIG. 9 shows the equiprobability ellipsoids for Φ = 0.682, 0.955, and 0.997. These are the equiprobability ellipsoids (claim 4) whose extent corresponds to one, two, and three times the standard deviation of each feature quantity. Statistically, the interior of each ellipsoid can be regarded as the normal class with confidence 68.2%, 95.5%, and 99.7%, respectively.

The equiprobability ellipse or ellipsoid of confidence 99.7% may be taken as the decision boundary between the normal class and the abnormal class. It is the equiprobability ellipse or ellipsoid obtained with arbitrary constants c1, c2, ..., cm and the standard deviations σ1, σ2, ..., σm of the feature quantities; here c1 = c2 = ... = cm = 3. The line or surface of the equiprobability ellipse or ellipsoid changes shape little by little as the number of sample images grows: specifically, its center position, its radius along each axis, and its tilt change. The more sample images there are, however, the more the change in the shape of the boundary line or surface converges.
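One way to obtain the boundary parameters mentioned here (center, axis radii, and tilt of the equiprobability ellipse or ellipsoid) from the n feature vectors of a single pixel position is sketched below; it assumes c1 = c2 = ... = cm = c = 3 and uses an eigendecomposition of the sample covariance matrix. This is an illustrative construction, not text from the patent.

```python
import numpy as np

def fit_equiprobability_ellipsoid(samples: np.ndarray, c: float = 3.0):
    """samples: (n, m) feature vectors of one pixel position over n sample images.
    Returns the center (mean), the half-length of each principal axis, the axis
    directions (columns of eigvecs), and the covariance matrix for later tests."""
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)        # (m, m) sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # principal axes = eigenvectors
    axis_half_lengths = c * np.sqrt(eigvals)   # ellipsoid extent along each axis
    return mean, axis_half_lengths, eigvecs, cov
```

A feature vector X then lies inside this boundary exactly when its Mahalanobis distance from the mean is at most c, which is the test used in the classification sketch further below.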

To obtain the equiprobability ellipse or ellipsoid, it is not necessary to store every feature quantity of every pixel of every sample image; it suffices to store the mean and variance of each feature quantity and the covariance between feature quantities, and to compute the center, axis radii, and tilt of the equiprobability ellipse or ellipsoid from these values when learning ends (claim 6). The required storage is thereby greatly reduced. That is, when a new learning sample is added, the equiprobability ellipse or ellipsoid need not be recomputed each time; only the stored values need to be updated.
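A sketch of this incremental bookkeeping (an illustrative class, not the patent's implementation): per pixel, only the sample count, the running sum of the feature vectors, and the running sum of their outer products are kept, from which the mean and covariance, and hence the ellipsoid, are computed once learning ends.

```python
import numpy as np

class RunningPixelStats:
    """Accumulates sufficient statistics for one pixel position so the mean and
    covariance (and thus the equiprobability ellipsoid) can be computed after
    learning without storing every sample's feature vector."""
    def __init__(self, m: int):
        self.n = 0
        self.sum = np.zeros(m)
        self.sum_outer = np.zeros((m, m))

    def update(self, x: np.ndarray) -> None:
        """Fold in the feature vector of one new learning sample."""
        self.n += 1
        self.sum += x
        self.sum_outer += np.outer(x, x)

    def mean_cov(self):
        """Mean and unbiased sample covariance from the accumulated sums (n >= 2)."""
        mean = self.sum / self.n
        cov = (self.sum_outer - self.n * np.outer(mean, mean)) / (self.n - 1)
        return mean, cov
```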

Next, when a new target image is given, it is first aligned with the learning sample images (claim 3).
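The alignment itself is only named in this text (residual correlation or normalized correlation, as in the background section). As a hedged simplification, the sketch below estimates an integer-pixel translation by plain FFT-based cross-correlation of mean-subtracted images; a production implementation would rather use the normalized-correlation matching the text names, with sub-pixel refinement.

```python
import numpy as np

def estimate_shift(reference: np.ndarray, target: np.ndarray):
    """Integer (dy, dx) translation that best aligns target to reference,
    found as the peak of the circular cross-correlation (FFT-based)."""
    ref = reference.astype(np.float64) - reference.mean()
    tgt = target.astype(np.float64) - target.mean()
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(tgt))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:       # wrap large shifts to negative offsets
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

def align_to_samples(target: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Circularly shift the target so it lines up with the learning-sample reference."""
    dy, dx = estimate_shift(reference, target)
    return np.roll(np.roll(target, dy, axis=0), dx, axis=1)
```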

Then the feature quantities x1, x2, ..., xm are computed for every pixel; this is the feature vector X. Given the feature vector X of the pixel A at the same position as the pixel of interest in the sample images, X is cast into the x1x2...xm feature space of the learning samples as in FIG. 2; if it lies inside the equiprobability ellipse or ellipsoid that forms the decision boundary, the pixel is judged normal, and otherwise it is judged abnormal (claim 1).
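Putting the pieces together, a sketch of the per-pixel judgment (function names are assumptions): each target pixel's feature vector is tested against that pixel's learned mean and covariance, and a squared Mahalanobis distance above c² places it outside the equiprobability ellipsoid, i.e. in the abnormal class.

```python
import numpy as np

def is_defective(x: np.ndarray, mean: np.ndarray, cov: np.ndarray,
                 c: float = 3.0) -> bool:
    """True if the feature vector x lies outside the equiprobability ellipsoid,
    i.e. its squared Mahalanobis distance exceeds c**2 (pixel judged defective)."""
    diff = x - mean
    d2 = float(diff @ np.linalg.solve(cov, diff))
    return d2 > c * c

def defect_map(features: np.ndarray, means: np.ndarray, covs: np.ndarray,
               c: float = 3.0) -> np.ndarray:
    """features: (H, W, m) target features; means: (H, W, m); covs: (H, W, m, m).
    Returns a boolean (H, W) map marking pixels judged defective."""
    h, w, _ = features.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            out[i, j] = is_defective(features[i, j], means[i, j], covs[i, j], c)
    return out
```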

The learning samples used in the method of the present invention need not all be strictly normal; samples believed to be good products are sufficient. Such samples may include on the order of 0.3% defective ones, but since the equiprobability surface of the ellipse or ellipsoid with c1 = c2 = ... = cm = 3 corresponds to Φ ≈ 99.7% rather than 100%, the abnormal pixels of a defective sample included in the learning set are still placed outside the equiprobability ellipse or ellipsoid. If the proportion of defective samples is larger, the constants c1, c2, ..., cm can simply be set smaller. If defective samples arrive in an unbroken run, the equiprobability ellipse or ellipsoid grows until it encloses the feature vectors of the defective samples, which is undesirable; such a situation does not occur, however, as long as the learning samples are drawn at random from the manufacturing process of the inspection objects, which is the population (claim 5).

Because the method of the present invention can accurately find abnormal pixels at image edges, it can be applied widely to image defect detection in which a plurality of learning samples and a target are compared by image processing, such as pattern inspection of printed circuit boards, inspection for wrong components, and inspection of package printing errors.

FIG. 1 is an explanatory diagram of a conventional image defect detection method.
FIG. 2 is an explanatory diagram of the image defect detection method of the present invention.
FIG. 3 is an explanatory diagram of the conventional technique described in paragraph 0005.
FIG. 4 is an explanatory diagram of the technique of the present invention.
FIG. 5 is a flowchart of the per-pixel judgment of the conventional technique (m = 1).
FIG. 6 is a flowchart of the per-pixel judgment of the conventional technique (m = 2).
FIG. 7 is an explanatory diagram comparing the normal-pixel decision region of the conventional per-pixel judgment (m = 2) with the decision boundary of the technique of the present invention (m = 2).
FIG. 8 is an explanatory diagram of the n mutually aligned sample images used in the technique of the present invention.
FIG. 9 is an explanatory diagram of the equiprobability ellipse or ellipsoid obtained by plotting the feature quantities of each sample image in the m-dimensional x1x2...xm feature space for a pixel of interest A.

Claims (7)

1. A method of detecting defects in an image, in which a sample image and a target image are compared and defects in the target image are detected pixel by pixel, characterized in that a feature vector consisting of feature quantities such as density values is extracted for each pixel, the position of the feature vector relative to a previously obtained boundary line or surface separating a normal class from an abnormal class is determined for each pixel, and the pixel is judged defective when the feature vector lies in the abnormal class.
2. The method of detecting defects in an image according to claim 1, characterized in that a plurality of sample images is used, a feature vector is obtained for each pixel of each image, and the boundary line or surface separating the normal class from the abnormal class is obtained in advance for each pixel by learning.
3. The method of detecting defects in an image according to claim 1, characterized in that the target image and the plurality of sample images are aligned in advance.
4. The method of detecting defects in an image according to claim 2, characterized in that the per-pixel boundary line or surface separating the normal class from the abnormal class is an equiprobability ellipse or an equiprobability ellipsoid.
5. The method of detecting defects in an image according to claim 2, characterized in that the plurality of sample images are images of samples drawn at random from the manufacturing process of the object under inspection.
6. The method of detecting defects in an image according to claim 2, characterized in that the mean and variance of each feature quantity and the covariance between feature quantities are updated and stored each time a sample image is added, and the boundary line or surface is computed from these stored values after learning is finished.
7. The method of detecting defects in an image according to claim 1, characterized in that the sample image and the target image are both color images and the feature vector consists of the r (red), g (green), and b (blue) density values.
JP2006323457A 2006-11-30 2006-11-30 Method for detecting defect in image Pending JP2008139074A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006323457A JP2008139074A (en) 2006-11-30 2006-11-30 Method for detecting defect in image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2006323457A JP2008139074A (en) 2006-11-30 2006-11-30 Method for detecting defect in image

Publications (1)

Publication Number Publication Date
JP2008139074A true JP2008139074A (en) 2008-06-19

Family

ID=39600711

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2006323457A Pending JP2008139074A (en) 2006-11-30 2006-11-30 Method for detecting defect in image

Country Status (1)

Country Link
JP (1) JP2008139074A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09259765A (en) * 1996-03-26 1997-10-03 Sony Corp White uniformity control method of color cathode-ray tube
JPH10253544A (en) * 1997-01-10 1998-09-25 Hitachi Ltd Method and apparatus for visual examination
JPH11242746A (en) * 1997-12-25 1999-09-07 Nec Corp Device for detecting defect of image and method therefor
JP2001175865A (en) * 1999-12-22 2001-06-29 Matsushita Electric Works Ltd Image processing method and its device
JP2004125434A (en) * 2002-09-30 2004-04-22 Ngk Spark Plug Co Ltd Appearance examining method and device for electronic circuit component and method of manufacturing electronic circuit component

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010095460A1 (en) * 2009-02-19 2010-08-26 日本電気株式会社 Image processing system, image processing method, and image processing program
CN102326183A (en) * 2009-02-19 2012-01-18 日本电气株式会社 Image processing system, image processing method, and image processing program
JP5500163B2 (en) * 2009-02-19 2014-05-21 日本電気株式会社 Image processing system, image processing method, and image processing program
US8903195B2 (en) 2009-02-19 2014-12-02 Nec Corporation Specification of an area where a relationship of pixels between images becomes inappropriate
CN102254178A (en) * 2010-05-21 2011-11-23 株式会社其恩斯 Image processing apparatus, image processing method, and computer program
JP2011243139A (en) * 2010-05-21 2011-12-01 Keyence Corp Image processing device, image processing method and computer program
JP2011243138A (en) * 2010-05-21 2011-12-01 Keyence Corp Image processing apparatus, image processing method and computer program
US8594416B2 (en) 2010-05-21 2013-11-26 Keyence Corporation Image processing apparatus, image processing method, and computer program
US8866903B2 (en) 2010-05-21 2014-10-21 Keyence Corporation Image processing apparatus, image processing method, and computer program
JP2012021919A (en) * 2010-07-16 2012-02-02 Keyence Corp Image processing system, image processing method and computer program
WO2019039757A1 (en) * 2017-08-24 2019-02-28 주식회사 수아랩 Method and device for generating training data and computer program stored in computer-readable recording medium
US11605003B2 (en) 2017-08-24 2023-03-14 Sualab Co., Ltd. Method and device for generating training data and computer program stored in computer-readable recording medium

Similar Documents

Publication Publication Date Title
US11227381B2 (en) Substrate defect inspection apparatus, substrate defect inspection method, and storage medium
JP6143445B2 (en) Method and apparatus for inspecting via holes
US11373292B2 (en) Image generation device and appearance inspection device
US20170169554A1 (en) System and method for patch based inspection
US20050282299A1 (en) Wafer inspection system and method thereof
JP2008139074A (en) Method for detecting defect in image
US20240185412A1 (en) Image processing apparatus for image inspection, image processing method, and storage medium
JP6696323B2 (en) Pattern inspection apparatus and pattern inspection method
KR101022187B1 (en) Substrate inspection device
WO2014103617A1 (en) Alignment device, defect inspection device, alignment method, and control program
Chavan et al. Quality control of PCB using image processing
JP2010008159A (en) Visual inspection processing method
CN111191670B (en) Sorting device and sorting method based on neural network
WO2016152159A1 (en) Dna chip image spot validity assessing device, dna chip image spot validity assessing method, and dna chip image spot validity assessing program
JP2016194434A (en) Inspection system and inspection method
US8055056B2 (en) Method of detecting defects of patterns on a semiconductor substrate and apparatus for performing the same
JP5498109B2 (en) Defect detection apparatus and defect detection method
JP2010243451A (en) Apparatus and method for visual inspection
JP7070334B2 (en) Image classification device, image inspection device, and image classification method
KR101975816B1 (en) Apparatus and Method for Discriminating Defects in Auto Repair System
JP2004515830A (en) Quadratic dispersion plot technique for defect detection
KR101697071B1 (en) Method for discriminating defect of polarizing plate
JP2000121495A (en) Screen inspection method
JP4275582B2 (en) Board inspection equipment
JP2005291988A (en) Method and apparatus for inspecting wiring pattern

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20091013

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20110920

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110926

A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20120206