WO2024210035A1 - Image data correction method, and inspection device - Google Patents
- Publication number: WO2024210035A1
- Application: PCT/JP2024/012694 (JP2024012694W)
- Authority: WO (WIPO/PCT)
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N23/00—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
- G01N23/02—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
- G01N23/04—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and forming images of the material
- G01N23/046—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and forming images of the material using tomography, e.g. computed tomography [CT]
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
Definitions
- FIG. 1 is an explanatory diagram for explaining a configuration of an inspection device according to an embodiment of the present invention.
- FIG. 2 is an explanatory diagram for explaining each functional block of a control unit of the inspection device.
- FIG. 3 is a flowchart for explaining the inspection process in the above-mentioned inspection device, in which (a) shows the selection process of an imaging region (FOV), and (b) shows the image acquisition and judgment process for the selected imaging region (FOV).
- FIG. 4 is a flowchart for explaining an image correction process in the image acquisition and determination process.
- FIG. 5 is an explanatory diagram for explaining a method of setting a correction region in frequency image data.
- the control unit 10 controls all operations of the inspection device 1 described above. Below, the main functions of the control unit 10 are explained using FIG. 2. Although not shown, input devices such as a keyboard and a mouse are connected to the control unit 10.
- the control unit 10 includes a memory unit 34, an imaging processing unit 35, a cross-sectional image generating unit 36, a board inspection surface detecting unit 38, a pseudo cross-sectional image generating unit 40, and an inspection unit 42.
- the imaging processing unit 35 of the control unit 10 also has the function of an imaging control unit that controls the operation of the radiation quality changing unit 14, the radiation generator driving unit 16, the board holding unit driving unit 18, and the detector driving unit 20.
- each of these functional blocks is realized by the cooperation of hardware, such as a CPU that executes various arithmetic processes, and a RAM that is used as a work area for storing data and executing programs, and software. Therefore, these functional blocks can be realized in various ways by combining hardware and software.
- the memory unit 34 stores information such as the imaging conditions for capturing a transmission image of the electronic board and the design of the electronic board to be inspected.
- the memory unit 34 also stores image data such as transmission images and reconstructed images (cross-sectional images, pseudo-cross-sectional images) of the electronic board, as well as the inspection results of the inspection unit 42 described below.
- the memory unit 34 also stores information for driving the radiation generator driving unit 16, the board holder driving unit 18, and the detector driving unit 20 (e.g., the speed at which the radiation generator driving unit 16 drives the radiation generator 22, the speed at which the board holder driving unit 18 drives the board holder 24, and the speed at which the detector driving unit 20 drives the detector 26).
- the imaging processing unit 35 drives the radiation generator 22, the substrate holding unit 24, and the detector 26 using the radiation generator driving unit 16, the substrate holding unit driving unit 18, and the detector driving unit 20 to image the specimen 12 held by the substrate holding unit 24, obtain transmission image data, and generate reconstructed image data (cross-sectional image data) from the transmission image data.
- the method of obtaining the transmission image data (capturing the transmission image) and generating the reconstructed image data (cross-sectional image data) by this imaging processing unit 35 will be described later.
- the cross-sectional image generating unit 36 generates reconstructed image data (cross-sectional image data) based on the multiple transmission image data acquired from the storage unit 34. This can be achieved using known techniques, such as the FBP method or the maximum likelihood estimation method. Different reconstruction algorithms result in different properties of the reconstructed image data and different times required for reconstruction. Therefore, multiple reconstruction algorithms and parameters used in the algorithms may be prepared in advance and the user may select one. This provides the user with the freedom to choose, such as prioritizing a shorter reconstruction time or prioritizing better image quality even if it takes more time.
- Each of the generated cross-sectional image data is stored in the storage unit 34 together with attribute information, such as information that determines the position of each cross-sectional image data in the Z-axis direction and the positions (coordinates) of pixels in the cross-sectional image data in the X-axis direction and the Y-axis direction.
- the board inspection surface detection unit 38 identifies image data (cross-sectional image data) that shows the surface to be inspected on the electronic board, which is the object to be inspected 12, (e.g., the surface of the electronic board), from among the multiple cross-sectional image data generated by the cross-sectional image generation unit 36.
- hereinafter, the identified image data (cross-sectional image data) is referred to as the "inspection surface image" (inspection surface image data).
- the pseudo cross-sectional image generating unit 40 generates, from the cross-sectional image data generated by the cross-sectional image generating unit 36, an image covering a thicker region of the board than a single cross-sectional image by stacking a predetermined number of consecutive cross-sectional images (cross-sectional image data).
- the number of cross-sectional images to be stacked is determined by the thickness of the region of the board shown by one cross-sectional image (hereinafter referred to as the "slice thickness") and the slice thickness of the pseudo cross-sectional image.
- the inspection surface image data identified by the board inspection surface detecting unit 38 is used to identify the position of the solder.
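As a rough illustration of the stacking described above, the sketch below builds a pseudo cross-sectional image from consecutive slices of a reconstructed volume; the array layout, the use of a mean for stacking, and the function name are assumptions for illustration, not details taken from this disclosure.

```python
import numpy as np

def pseudo_cross_section(volume: np.ndarray, start: int,
                         target_thickness: float, slice_thickness: float) -> np.ndarray:
    """Stack consecutive cross-sectional images into one pseudo cross-sectional image.

    volume is assumed to be a stack of cross-sectional images indexed as
    volume[z, y, x]; the number of slices to stack follows from the desired
    slice thickness of the pseudo cross-sectional image divided by the slice
    thickness of a single cross-sectional image.
    """
    n = max(1, int(round(target_thickness / slice_thickness)))
    return volume[start:start + n].mean(axis=0)
```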
- the inspection unit 42 inspects the solder joint condition based on the cross-sectional image data generated by the cross-sectional image generation unit 36, the inspection surface image data identified by the board inspection surface detection unit 38, and the pseudo cross-sectional image data generated by the pseudo cross-sectional image generation unit 40. Since the solder that joins the electronic board and the component is located near the board inspection surface, by inspecting the inspection surface image data and the cross-sectional image data that shows the area on the radiation generator 22 side relative to the inspection surface image data, it is possible to determine whether the solder is properly joining the board and the component.
- solder joint condition refers to whether or not an appropriate conductive path is formed when the electronic board and the component are joined by solder. Inspection of the solder joint condition includes bridge inspection, molten state inspection, and void inspection. “Bridge” refers to an undesirable conductive path between conductors caused by solder joining. “Melted state” refers to a state in which the joint between the electronic board and the component is insufficient due to insufficient melting of the solder, that is, whether or not there is a so-called “floating” state. "Void” refers to a defect in the solder joint caused by air bubbles in the solder joint. Therefore, the inspection unit 42 includes a bridge inspection unit 44, a molten state inspection unit 46, and a void inspection unit 48.
- the bridge inspection unit 44 and the void inspection unit 48 inspect for bridges and voids, respectively, based on the pseudo cross-sectional image data generated by the pseudo cross-sectional image generation unit 40.
- the molten state inspection unit 46 inspects the molten state of the solder based on the inspection surface image data identified by the board inspection surface detection unit 38.
- the inspection results in the bridge inspection unit 44, molten state inspection unit 46, and void inspection unit 48 are stored in the memory unit 34.
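Void inspection is not specified at the algorithm level in this document; the following is only a naive sketch of the idea of flagging a joint by the fraction of void-like pixels in a pseudo cross-sectional image of the solder, with the threshold, the grey-level convention, and the 5 % limit all being illustrative assumptions.

```python
import numpy as np

def void_ratio(solder_region: np.ndarray, void_threshold: float) -> float:
    # Voids attenuate radiation less than solder, so inside the solder region
    # they show up at a distinct grey level; which side of the threshold they
    # fall on depends on the grey-level convention of the image.
    voids = solder_region > void_threshold
    return float(voids.mean())

# Example use: treat the joint as suspect if more than 5 % of the solder
# area looks like void (purely an illustrative limit).
# abnormal = void_ratio(region, 0.8) > 0.05
```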
- FIGS. 3 and 4 are flowcharts showing the flow (inspection process) of capturing a transmission image (acquiring transmission image data), generating reconstructed image data (cross-sectional image data), identifying inspection surface image data, and inspecting the solder joint state.
- the process in this flowchart starts, for example, when the control unit 10 receives an instruction to start the inspection from an input device (not shown).
- as shown in FIG. 3(a), the imaging processing unit 35 of the control unit 10 uses the radiation generator driving unit 16 to set the irradiation field of the radiation emitted from the radiation generator 22 (the imaging area (FOV) described above, which is irradiated to obtain transmission image data) (step S100), and starts the image acquisition and judgment process (step S102). Note that when there are multiple imaging areas (FOV) on the inspected object 12, the imaging areas (FOV) are selected and set in a predetermined order.
- the image capture processing unit 35 of the control unit 10 executes a transmission image capture and reconstructed image generation process as shown in FIG. 3B, captures the test object 12 to acquire transmission image data, and generates reconstructed image data using the transmission image data (step S1020). Specifically, in the transmission image capture and reconstructed image generation process S1020, the image capture processing unit 35 of the control unit 10 moves the substrate holder 24 by the substrate holder drive unit 18, and moves the detector 26 by the detector drive unit 20 to change the imaging position, while setting the radiation quality of the radiation generator 22 by the radiation quality change unit 14, irradiates the current imaging field (FOV) of the test object 12 with radiation, acquires transmission image data, and stores it in the storage unit 34.
- the cross-sectional image generation unit 36 of the control unit 10 reads out a plurality of transmission image data from the storage unit 34, generates reconstructed image data (cross-sectional image data) using the transmission image data, and stores it in the storage unit 34.
- the movement path of the substrate holder 24 by the substrate holder driver 18 and the movement path of the detector 26 by the detector driver 20 when acquiring the transmission image data are set in advance in the substrate holder driver 18 and the detector driver 20 by reading information stored in the memory 34 or inputting information from an input device.
- the position of the radiation generator 22 in the Z-axis direction is also set in advance in the radiation generator driver 16 by a similar method.
- when acquiring the transmission image data, the substrate holder driver 18 and the detector driver 20 may move the substrate holder 24 and the detector 26 to the desired position and stop them there before the transmission image data is acquired, or the transmission image data may be acquired while the substrate holder driver 18 and the detector driver 20 move the substrate holder 24 and the detector 26 to the desired position.
- the acquired transmission image data is stored in the memory 34 for each imaging area (FOV).
- the board inspection surface detection unit 38 of the control unit 10 receives the transmission image data or the reconstructed image data (cross-sectional image data) from the cross-sectional image generation unit 36, and executes a board inspection surface detection/pseudo cross-sectional image generation process to identify the inspection surface image from the received data (step S1040).
- the storage unit 34 stores in advance cross-sectional image data (called "reference image data") of the board inspection surface of a normal inspected object 12 that has no abnormalities in the solder joint state, etc.
- the board inspection surface detection unit 38 of the control unit 10 compares the reference image data with each of the cross-sectional image data generated in step S1020, identifies the cross-sectional image data that most closely matches the reference image data as the inspection surface image data, stores the identified cross-sectional image data (inspection surface image data) in the storage unit 34, and stores the position in the Z-axis direction in the storage unit 34 as the position of the board inspection surface in the current field of view FOV.
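The comparison above is only described as finding the cross-sectional image that "most closely matches" the reference image data; the sketch below uses normalised cross-correlation as one plausible similarity measure, which is an assumption, as are the array shapes and the function name.

```python
import numpy as np

def find_inspection_surface(slices: np.ndarray, reference: np.ndarray) -> int:
    """Return the Z index of the cross-sectional image most similar to the reference.

    slices is assumed to be a stack of cross-sectional images (slices[z, y, x]);
    similarity is measured here by normalised cross-correlation.
    """
    ref = (reference - reference.mean()) / (reference.std() + 1e-12)
    best_z, best_score = 0, -np.inf
    for z in range(slices.shape[0]):
        s = slices[z]
        s_norm = (s - s.mean()) / (s.std() + 1e-12)
        score = float((s_norm * ref).mean())
        if score > best_score:
            best_z, best_score = z, score
    return best_z
```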
- the pseudo cross-sectional image generating unit 40 of the control unit 10 generates pseudo cross-sectional image data based on the identified inspection surface image data and the Z-direction position of the substrate inspection surface, and stores the data in the storage unit 34.
- in the flow described above, the cross-sectional image data are first generated as reconstructed image data from the transmission image data, and the reference plane image data is then determined and the pseudo cross-sectional image data generated for that cross-sectional image data.
- alternatively, the reference plane image data may be determined and the pseudo cross-sectional image data generated each time a piece of cross-sectional image data is generated, before the next cross-sectional image data is generated.
- the imaging processing unit 35 of the control unit 10 performs image correction processing on the cross-sectional image data generated by the cross-sectional image generating unit 36, the inspection surface image data identified by the board inspection surface detecting unit 38, and the pseudo cross-sectional image data generated by the pseudo cross-sectional image generating unit 40 (collectively referred to as "cross-sectional image data") (step S1060).
- the image processing unit 35 of the control unit 10 judges whether correction is required for the cross-sectional image data of the current imaging field (FOV) (step S1061). If there is a heat sink under the pin to be inspected, as in the above-mentioned IGBT, the heat sink is also reflected in the cross-sectional image data and becomes noise (shadow) on the solder state of the pin to be inspected. Therefore, in order to improve the inspection accuracy, it is necessary to remove the noise of the heat sink by correction processing. Whether or not correction processing is required may be judged from information such as the design of the electronic board, which is the inspected object 12, or information on whether or not correction is required for each imaging field (FOV) may be set in advance.
- if correction is required (step S1061: Y), the image processing unit 35 of the control unit 10 selects one of the multiple pieces of cross-sectional image data (cross-sectional image data, inspection surface image data, pseudo cross-sectional image data) prepared in step S1040 and reads it from the storage unit 34 (step S1062).
- the image processing unit 35 of the control unit 10 saves the information of the selected pixels of the cross-sectional image data by storing the position in the cross-sectional image data and the value (brightness value) of the pixel that satisfies the first condition in the memory unit 34 (step S1063).
- the first condition is a pixel with a value greater than a threshold value set as an upper limit value and a pixel with a value smaller than a threshold value set as a lower limit value.
- Fourier transform and inverse Fourier transform are used to remove images of shapes arranged at a predetermined interval, such as heat sinks (periodic noise), from the cross-sectional image data.
- the first condition is to select information on pixels with values greater than a threshold value set as an upper limit value and pixels with values smaller than a threshold value set as a lower limit value.
- pixels in a specified region in the cross-sectional image data may be set as the first condition.
- portions that do not include periodic noise such as a heat sink do not require correction (noise removal) using Fourier transform and inverse Fourier transform, so such regions, i.e., pixels specified at coordinates where it is known in advance that noise removal is not required, can be set as the first condition, and can be set aside in advance so as not to be affected by correction.
- as the first condition, both of the above may be set, i.e., pixels with values greater than the threshold set as the upper limit and pixels with values smaller than the threshold set as the lower limit, together with pixels in a specified region of the cross-sectional image data (pixels at coordinates where it is known in advance that noise removal is not required); other conditions may also be set.
- the imaging processing unit 35 of the control unit 10 performs a Fourier transform on the selected cross-sectional image data to generate frequency image data (step S1064).
- this Fourier transform measures how much the pixel values (brightness values) change per unit pixel, converting the image data in the spatial domain into frequency image data in the spatial frequency domain; specifically, a two-dimensional Fourier transform is performed on the cross-sectional image data to generate the frequency image data.
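As a toy illustration of why this works, the snippet below builds a synthetic image with a periodic stripe pattern standing in for the heat-sink fins and shows that the pattern concentrates into a few isolated peaks in the two-dimensional spectrum; the image size, period, and amplitudes are arbitrary illustrative values.

```python
import numpy as np

h, w = 256, 256
yy, xx = np.mgrid[0:h, 0:w]
# Synthetic "cross-sectional image": a flat background plus stripes with a
# period of 16 pixels, standing in for the periodic shadow of heat-sink fins.
image = 100.0 + 20.0 * np.sin(2 * np.pi * xx / 16)

freq = np.fft.fftshift(np.fft.fft2(image))
magnitude = np.abs(freq)

# Apart from the DC component at the centre, the periodic pattern appears as
# two isolated peaks offset by w/16 bins from the centre along the horizontal
# frequency axis.
peak_rows, peak_cols = np.where(magnitude > 0.05 * magnitude.max())
print(sorted(zip(peak_rows.tolist(), peak_cols.tolist())))
```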
- the imaging processing unit 35 of the control unit 10 corrects the frequency image data. If the frequency components corresponding to the image of the heat sink are removed from the entire frequency image data, then when the corrected frequency image data is inverse Fourier transformed, images that do not need to be deleted may be deleted from the corrected cross-sectional image data, or images that do not actually exist may be generated. Therefore, the inspection device 1 according to this embodiment is configured to divide the frequency image data into multiple regions (called "correction regions") and perform correction on at least one of these correction regions.
- FIG. 5 shows the case where frequency image data Ip is divided into four regions: correction region A1 divided by boundary Lb1, correction regions A2a and A2b divided by boundary Lb2, and the remaining correction region A3.
- the image capture processing unit 35 of the control unit 10 selects one correction region from these multiple correction regions (step S1065), and corrects pixels within the correction region that satisfy a condition set for the correction region (this condition is called the "second condition") (step S1066).
- the second condition is pixels with frequency components within a specified range, and the correction involves removing the value (setting it to 0).
- the shadow of the heat sink can be removed by removing the pixel value of the frequency component that takes into account the placement interval of the heat sink, etc.
- when the image capture processing unit 35 of the control unit 10 completes the correction of the correction area selected in step S1065, it determines whether or not there is a next correction area (step S1067), and if it determines that there is a next correction area (step S1067: Y), it returns to step S1065 to select the next correction area and repeats the above-mentioned steps S1066 and S1067.
- the second condition may be changed for each correction area.
- the range of frequency components to be removed may be changed between correction area A1 and correction areas A2a and A2b, that is, the parameters of the second condition may be changed.
- the content of the second condition (the method of determining whether or not to perform correction) may be changed such that pixels within a predetermined frequency component range are removed in correction area A1, and pixels with frequency components greater than a predetermined threshold value are removed in correction areas A2a and A2b.
- the second condition may be configured such that correction area A3 is not corrected.
- the size and position of the correction area and the correction conditions for each correction area may be determined in advance or may be configured to be selected during inspection.
- the position of the correction area in the frequency image data and the correction method may be determined by learning using AI.
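Below is a hedged sketch of such region-wise correction of the frequency image data, loosely following the layout of FIG. 5; the concrete region boundaries, the radius band, and the magnitude threshold are illustrative assumptions, not values given in this document.

```python
import numpy as np

def correct_frequency_regions(freq: np.ndarray) -> np.ndarray:
    """Apply a different second condition in each correction region of a
    centred (fftshifted) frequency image."""
    out = freq.copy()
    h, w = out.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)          # distance from the DC component

    # Correction region A1: an annular band around the centre; the second
    # condition here is "frequency component within a predetermined range",
    # and such pixels are removed (set to 0).
    a1 = (r > 20) & (r < 40)
    out[a1] = 0

    # Correction regions A2a/A2b: outer strips on the left and right; the
    # second condition here is "magnitude above a predetermined threshold".
    a2 = (xx < w // 8) | (xx >= w - w // 8)
    out[a2 & (np.abs(out) > 1e4)] = 0

    # Correction region A3 (everything else) is left uncorrected.
    return out
```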
- if the image capture processing unit 35 of the control unit 10 determines that there is no next correction region (step S1067: N), it performs an inverse Fourier transform on the frequency image data corrected in step S1066 to generate corrected cross-sectional image data (step S1068).
- the image capture processing unit 35 of the control unit 10 then reads out the information on the pixels saved in step S1063 from the storage unit 34, and replaces the values at the saved positions in the corrected cross-sectional image data with the saved values (step S1069).
- the values of the pixels at the positions saved in step S1063 return to the state before correction using the Fourier transform and inverse Fourier transform, and are not affected by this correction.
- the image capturing processing unit 35 of the control unit 10 stores the cross-sectional image data corrected by the above processing in the storage unit 34 (step S1070), and determines whether or not there is next cross-sectional image data (step S1071). If the image capturing processing unit 35 of the control unit 10 determines that there is next cross-sectional image data (step S1071: Y), it returns to step S1062 to select the next cross-sectional image data, and repeats the above-mentioned steps S1063 to S1071.
- if the image capture processing unit 35 of the control unit 10 determines that there is no next cross-sectional image data (step S1071: N), it ends the image correction process S1060.
- the bridge inspection unit 44 of the control unit 10 obtains, from the pseudo cross-sectional image generation unit 40 (reads from the storage unit 34), a pseudo cross-sectional image showing the solder ball with a slice thickness comparable to the solder ball, and inspects whether or not a bridge exists (step S1100). If no bridge is detected (step S1100: N), the molten state inspection unit 46 of the control unit 10 obtains an inspection surface image from the board inspection surface detection unit 38 (reads from the storage unit 34) and inspects whether or not the solder is molten (step S1120).
- if the solder is molten (step S1140: Y), the void inspection unit 48 of the control unit 10 obtains a pseudo cross-sectional image that partially shows the solder ball from the pseudo cross-sectional image generation unit 40 (read from the storage unit 34) and inspects whether or not a void exists (step S1160). If no voids are found (step S1180: N), the inspection unit 42 of the control unit 10 determines that the solder joint condition is normal (step S1200) and outputs this information to the memory unit 34.
- if a bridge is detected (step S1100: Y), the solder is not melted (step S1140: N), or a void is present (step S1180: Y), the inspection unit 42 determines that the solder joint condition is abnormal (step S1220) and outputs this information to the memory unit 34. When the solder condition is output to the memory unit 34, the image acquisition and judgment process S102 in this flowchart ends.
- the image capture processing unit 35 of the control unit 10 determines whether or not there is a next imaging area (FOV) (step S104), and if it is determined that there is a next imaging area (step S104: Y), the control unit 10 returns to step S100, selects the next imaging area (FOV), and repeats the image acquisition and determination process S102. On the other hand, if the image capture processing unit 35 of the control unit 10 determines that there is no next imaging area (FOV) (step S104: N), it ends the inspection process and removes the inspected object 12 from the inspection device 1.
- the image acquisition and determination process S102 may be performed for each imaging field (FOV), or the examination may be performed in parallel with the acquisition of transmission image data and the generation of reconstructed image data for other imaging fields (FOVs), starting from the imaging field (FOV) for which generation of reconstructed image data (cross-sectional image data and pseudo cross-sectional image data) has been completed.
- a correction method using Fourier transform and inverse Fourier transform is used to remove periodic noise in the transmission image data or cross-sectional image data.
- information on pixels that do not require correction is saved, and after the correction it is written back into the transmission image data or cross-sectional image data. These pixels are therefore unaffected by the correction, and only the periodic noise is removed efficiently.
- this frequency image data can be divided into multiple correction regions, and the correction method and its parameters (second condition) can be changed for each correction region, so that only the noise that is intended to be removed can be efficiently removed from the frequency components of a periodic image.
- Reference signs: 1 Inspection device; 10 Control unit; 12 Object to be inspected; 22 Radiation generator (radiation source); 24 Board holding part (holding part); 26 Detector
Abstract
Provided are an image data correction method and an inspection device capable of efficiently removing only images constituting noise, among images having periodicity, from transmission image data or cross-sectional image data. A control unit 10 of an inspection device 1 executes: a first step for storing information (position and value) of pixels satisfying a first condition, among pixels of image data (transmission image data or cross-sectional image data) of an inspection target object 12; a second step for subjecting the image data to a Fourier transform to generate frequency image data; a third step for correcting the values of pixels satisfying a second condition, among pixels of the frequency image data; a fourth step for subjecting the corrected frequency image data to an inverse Fourier transform to generate corrected image data; and a fifth step for replacing the values of the pixels at the positions stored in the first step with the stored values, in the corrected image data.
Description
The present invention relates to a method for correcting image data and an inspection device.
In electronic boards, the soldered connection state (hereinafter referred to as the "solder joint state") between electronic components (e.g., pins) and the wiring on the board is difficult to determine by visual inspection, so tomosynthesis-type X-ray inspection equipment is used. In such electronic boards to be inspected, electronic components may be arranged three-dimensionally. For example, an IGBT (insulated gate bipolar transistor), which is one type of power module semiconductor, has a heat sink for heat dissipation mounted on the back side of the semiconductor chip (IGBT) because a large current flows through it. The heat sink has multiple pin-shaped fins formed on it, so when the electronic board is irradiated with X-rays to obtain transmission image data, the pins to be inspected and the heat sink are imaged overlapping each other, and the heat sink appears as a shadow (noise) over the pins in the transmission image data or in the three-dimensional image data (cross-sectional image data) reconstructed from the transmission image data. The fins of this heat sink are arranged side by side at a predetermined interval and therefore have periodicity, so a method is used that removes the periodic shadow (noise) from the transmission image data or cross-sectional image data using the Fourier transform and the inverse Fourier transform (see, for example, Patent Document 1).
However, when using Fourier transform or inverse Fourier transform to remove specific frequency components as noise from transmission image data or cross-sectional image data, there are cases where feature points of the object being inspected are also removed, or shapes (images) that do not actually exist are generated, making it impossible to obtain a clear image of the object being inspected.
The present invention has been made in consideration of these problems, and aims to provide an image data correction method that efficiently removes only noise images from images that have periodicity in image data (transmission image data or cross-sectional image data), and an inspection device in which this correction method is implemented.
In order to solve the above problem, the image data correction method according to the present invention is a method for correcting transmitted image data obtained by irradiating an object to be inspected with radiation emitted from a radiation source and detecting radiation that has passed through the object to be inspected, or three-dimensional image data reconstructed from the transmitted image data, and includes a first step of storing a position and a value in the image data as information on pixels that satisfy a first condition among the pixels of the image data, a second step of Fourier transforming the image data to generate frequency image data, a third step of correcting the values of pixels that satisfy a second condition among the pixels of the frequency image data and outputting corrected frequency image data, a fourth step of inverse Fourier transforming the corrected frequency image data to generate corrected image data, and a fifth step of replacing the pixel values at the positions stored in the first step in the corrected image data with the stored values.
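To make the five steps above concrete, here is a minimal sketch in Python/NumPy, assuming the image data is a two-dimensional array of brightness values; the threshold names, the frequency mask, and the function name are illustrative assumptions rather than details prescribed by this method.

```python
import numpy as np

def correct_image(image: np.ndarray, lower: float, upper: float,
                  freq_mask: np.ndarray) -> np.ndarray:
    # First step: remember positions and values of pixels that satisfy the
    # first condition (here: values above the upper or below the lower threshold).
    keep = (image > upper) | (image < lower)
    saved_values = image[keep]

    # Second step: Fourier transform the image data into frequency image data.
    freq = np.fft.fftshift(np.fft.fft2(image))

    # Third step: correct pixels of the frequency image data that satisfy the
    # second condition (here: zero the components selected by freq_mask).
    freq[freq_mask] = 0

    # Fourth step: inverse Fourier transform to obtain corrected image data.
    corrected = np.real(np.fft.ifft2(np.fft.ifftshift(freq)))

    # Fifth step: put the saved values back at the stored positions.
    corrected[keep] = saved_values
    return corrected
```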
Furthermore, in the image data correction method according to the present invention, it is preferable that the third step divides the frequency image data into a plurality of regions, and corrects the values of pixels in at least one region that satisfy the second condition.
Furthermore, in the image data correction method according to the present invention, it is preferable that the third step performs correction using a correction method based on the second condition set for each of the regions.
In addition, in the image data correction method according to the present invention, it is preferable that the first condition is a pixel having a value greater than a threshold value set as an upper limit value, and a pixel having a value less than a threshold value set as a lower limit value.
In addition, in the image data correction method according to the present invention, it is preferable that the first condition is a pixel in a specified area within the image data.
The inspection device according to the present invention also includes a radiation source, a holder for holding an object to be inspected, a detector for detecting radiation from the radiation source that has passed through the object to be inspected and acquiring transmission image data of the object to be inspected, and a control unit, and the control unit corrects two or more pieces of transmission image data acquired by changing the relative positions of the radiation source, the holder, and the detector, or three-dimensional image data reconstructed from the transmission image data, using the image data correction method described above, and inspects the object to be inspected using the corrected image data.
The image data correction method according to the present invention and the inspection device in which this correction method is implemented can efficiently remove only the noise images from the periodic images in the image data (transmission image data or cross-sectional image data).
Below, a preferred embodiment of the present invention will be described with reference to the drawings. As shown in FIG. 1, the inspection device 1 according to this embodiment is configured to have a control unit 10, which is made up of a processing device such as a personal computer (PC), a monitor 11, and an imaging unit 32. The imaging unit 32 also has a radiation generator 22, a substrate holding unit 24, a detector 26, a radiation quality changing unit 14, a radiation generator driving unit 16, a substrate holding unit driving unit 18, and a detector driving unit 20.
The radiation generator 22 is a device (ray source) that generates radiation such as X-rays, and generates radiation by colliding accelerated electrons with a target such as tungsten or diamond. Note that, although the radiation in this embodiment is described as being X-rays, this is not limiting. For example, the radiation may be alpha rays, beta rays, gamma rays, ultraviolet rays, visible light, or infrared rays. The radiation may also be microwaves or terahertz waves.
The board holding unit 24 holds the electronic board, which is the object under inspection 12. The object under inspection 12 held by the board holding unit 24 is irradiated with radiation generated by the radiation generator 22, and the radiation that has passed through the object under inspection 12 is detected by the detector 26 to capture an image. Hereinafter, the radiation transmission image of the object under inspection 12 captured by the detector 26 will be referred to as a "transmission image." As will be described later, in this embodiment, the board holding unit 24, which holds the electronic board, which is the object under inspection 12, and the detector 26 are moved relative to the radiation generator 22 to obtain multiple transmission images, and a reconstructed image (cross-sectional image), which is a three-dimensional image, is generated from these transmission images.
The transmission image captured by the detector 26 (transmission image data, which is data of the transmission image output from the detector 26) is sent to the control unit 10, and is reconstructed into image data including the three-dimensional shape of the solder at the joint portion using a known technique such as the Filtered Backprojection method (FBP method). The reconstructed image data and transmission image data are stored in a storage in the control unit 10 (for example, the storage unit 34 described later) or an external storage (not shown). Hereinafter, image data obtained by extracting one cross section of the three-dimensional shape calculated based on the transmission image data is called a cross-sectional image (cross-sectional image data). In addition, a set of one or more cross-sectional image data is called "three-dimensional image data" or "reconstructed image data". In other words, image data obtained by cutting out an arbitrary cross section from the reconstructed image data is cross-sectional image data. Such reconstructed images and cross-sectional images are output to the monitor 11. In addition to the reconstructed images and cross-sectional images, the monitor 11 also displays the inspection results of the solder joint state described later. Hereinafter, the reconstructed image in this embodiment is also called "planar CT" because it is reconstructed from a planar image (transmission image data) captured by the detector 26 as described above.
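For readers who want to experiment with the reconstruction step, the sketch below uses scikit-image's parallel-beam radon/iradon (a textbook FBP implementation) purely as a stand-in; the oblique planar-CT geometry of this device is different, and the phantom and projection angles are arbitrary illustrative choices.

```python
import numpy as np
from skimage.transform import radon, iradon

# Toy phantom standing in for one plane of the board; values are arbitrary.
phantom = np.zeros((128, 128))
phantom[40:90, 50:80] = 1.0

theta = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = radon(phantom, theta=theta)                 # simulated projections
reconstruction = iradon(sinogram, theta=theta, filter_name='ramp')

# With a stack of such 2-D reconstructions, a cross-sectional image is simply
# one slice of the volume, e.g. cross_section = volume[z_index].
```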
The radiation quality change unit 14 changes the radiation quality generated by the radiation generator 22. The radiation quality is determined by the voltage (hereafter referred to as "tube voltage") applied to accelerate the electrons to be collided with the target, and the current (hereafter referred to as "tube current") that determines the number of electrons. The radiation quality change unit 14 is a device that controls the tube voltage and tube current. This radiation quality change unit 14 can be realized using known technology such as a transformer or rectifier.
Here, the quality of radiation is determined by the brightness and hardness of the radiation (spectral distribution of radiation). Increasing the tube current increases the number of electrons that collide with the target, and the number of radiation photons generated. As a result, the brightness of the radiation increases. For example, some components such as capacitors are thicker than other components, and in order to capture a transmission image of these components, it is necessary to irradiate them with radiation of high brightness. In such cases, the brightness of the radiation is adjusted by adjusting the tube current. Also, increasing the tube voltage increases the energy of the electrons that collide with the target, and the energy (spectrum) of the generated radiation increases. In general, the greater the energy of radiation, the greater its penetrating power into materials, and the less likely it is to be absorbed by materials. A transmission image captured using such radiation has low contrast. For this reason, the tube voltage can be used to adjust the contrast of a transmission image.
The radiation generator driving unit 16 has a driving mechanism such as a motor (not shown) and can move the radiation generator 22 up and down along the axis A passing through its focal point (the axis (optical axis) passing through the center of the radiation direction of the radiation emitted from the radiation generator 22; the direction of this axis is referred to as the "Z-axis direction"). This changes the distance between the radiation generator 22 and the inspected object (electronic board) 12 held by the board holding unit 24, thereby changing the irradiation field and the magnification of the transmission image captured by the detector 26. The position of the radiation generator 22 in the Z-axis direction is detected by the generator position detection unit 23 and output to the control unit 10.
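Because the radiation fans out from the focal spot, moving the generator along the Z axis changes the geometric magnification of the projected image. The following is a minimal sketch of that relationship; the function name and the numeric distances are illustrative assumptions, not values from the embodiment.

```python
def geometric_magnification(source_to_detector_mm: float,
                            source_to_object_mm: float) -> float:
    """Point-source projection magnification: detector distance over object distance."""
    return source_to_detector_mm / source_to_object_mm

# Moving the generator closer to the board (smaller source-to-object distance)
# enlarges the transmitted image on the detector.
print(geometric_magnification(400.0, 100.0))  # 4.0
print(geometric_magnification(400.0, 200.0))  # 2.0
```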
The detector driving unit 20 also has a driving mechanism such as a motor (not shown) and rotates the detector 26 along the detector rotation track 30. The board holder driving unit 18 likewise has a driving mechanism such as a motor (not shown) and translates the board holding unit 24 on the plane in which the board rotation track 28 lies. The board holding unit 24 is configured to rotate along the board rotation track 28 in synchronization with the rotational movement of the detector 26. This makes it possible to capture multiple transmission images with different projection directions and projection angles while changing the relative positional relationship between the inspected object 12 held by the board holding unit 24 and the radiation generator 22. In the inspection device 1 according to this embodiment, the area on the inspected object 12 from which a transmission image can be acquired is determined by the size of the radiation-detecting area of the detector 26 and by the relative positions of the radiation generator 22, the inspected object 12 (board holding unit 24), and the detector 26. This area from which a transmission image can be acquired (imaging area) is called the "FOV (field of view)".
The rotation radii of the board rotation track 28 and the detector rotation track 30 are not fixed and can be changed freely. This makes it possible to arbitrarily change the angle at which radiation is irradiated onto the board of the electronic board that is the inspected object 12 and onto the components mounted on it. The orbital planes of the board rotation track 28 and the detector rotation track 30 are perpendicular to the Z-axis direction described above. Taking two mutually orthogonal directions within this orbital plane as the X-axis and Y-axis directions, the position of the board holding unit 24 in the X-axis and Y-axis directions is detected by the board position detection unit 29 and output to the control unit 10, and the position of the detector 26 in the X-axis and Y-axis directions is detected by the detector position detection unit 31 and output to the control unit 10.
The control unit 10 controls all operations of the inspection device 1 described above. The main functions of the control unit 10 are explained below with reference to FIG. 2. Although not shown, input devices such as a keyboard and a mouse are connected to the control unit 10.
The control unit 10 includes a storage unit 34, an imaging processing unit 35, a cross-sectional image generating unit 36, a board inspection surface detecting unit 38, a pseudo cross-sectional image generating unit 40, and an inspection unit 42. Although not shown, the imaging processing unit 35 of the control unit 10 also functions as an imaging control unit that controls the operation of the radiation quality changing unit 14, the radiation generator driving unit 16, the board holder driving unit 18, and the detector driving unit 20. Each of these functional blocks is realized by the cooperation of hardware, such as a CPU that executes various arithmetic processes and RAM used as a work area for storing data and executing programs, and software. These functional blocks can therefore be realized in various forms by combining hardware and software.
The storage unit 34 stores information such as the imaging conditions for capturing transmission images of the electronic board and the design of the electronic board to be inspected. It also stores image data such as transmission images and reconstructed images (cross-sectional images and pseudo cross-sectional images) of the electronic board, as well as the inspection results of the inspection unit 42 described later. The storage unit 34 further stores information for driving the radiation generator driving unit 16, the board holder driving unit 18, and the detector driving unit 20 (for example, the speed at which the radiation generator driving unit 16 drives the radiation generator 22, the speed at which the board holder driving unit 18 drives the board holding unit 24, and the speed at which the detector driving unit 20 drives the detector 26).
The imaging processing unit 35 drives the radiation generator 22, the board holding unit 24, and the detector 26 via the radiation generator driving unit 16, the board holder driving unit 18, and the detector driving unit 20, images the inspected object 12 held by the board holding unit 24 to acquire transmission image data, and generates reconstructed image data (cross-sectional image data) from the transmission image data. The acquisition of transmission image data (capture of transmission images) and the generation of reconstructed image data (cross-sectional image data) by the imaging processing unit 35 are described later.
The cross-sectional image generating unit 36 generates reconstructed image data (cross-sectional image data) based on the multiple pieces of transmission image data acquired from the storage unit 34. This can be achieved using known techniques such as the FBP method or the maximum likelihood estimation method. Different reconstruction algorithms yield reconstructed image data with different properties and require different amounts of time. Multiple reconstruction algorithms and their parameters may therefore be prepared in advance so that the user can select among them, giving the user the freedom to prioritize a shorter reconstruction time or to prioritize better image quality even if it takes more time. Each piece of generated cross-sectional image data is stored in the storage unit 34 together with attribute information, such as the Z-axis position of that cross-sectional image data and information determining the X-axis and Y-axis positions (coordinates) of the pixels within it.
The board inspection surface detecting unit 38 identifies, from among the multiple pieces of cross-sectional image data generated by the cross-sectional image generating unit 36, the image data (cross-sectional image data) showing the surface to be inspected on the electronic board that is the inspected object 12 (for example, the surface of the electronic board). Hereinafter, the cross-sectional image (cross-sectional image data) showing the inspection surface of the electronic board is referred to as the "inspection surface image (inspection surface image data)".
The pseudo cross-sectional image generating unit 40 images a region of the board thicker than a single cross-sectional image by stacking a predetermined number of consecutive cross-sectional images (cross-sectional image data) generated by the cross-sectional image generating unit 36. The number of cross-sectional images to stack is determined from the thickness of the board region represented by one cross-sectional image (hereinafter the "slice thickness") and the desired slice thickness of the pseudo cross-sectional image. For example, if the slice thickness of a cross-sectional image is 50 μm and the slice thickness of the pseudo cross-sectional image is to match the height of a BGA solder ball (hereinafter simply "solder"), say 500 μm, then 500/50 = 10 cross-sectional images are stacked. The inspection surface image data identified by the board inspection surface detecting unit 38 is used here to locate the solder.
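The stacking rule above can be sketched as follows. `cross_sections` is assumed to be an array of reconstructed slices ordered along the Z axis, and averaging the stacked slices is only one possible way of combining them; the embodiment specifies only that consecutive slices are stacked.

```python
import numpy as np

def pseudo_cross_section(cross_sections: np.ndarray,
                         start_index: int,
                         slice_thickness_um: float,
                         target_thickness_um: float) -> np.ndarray:
    """Stack consecutive cross-sectional slices into one thicker pseudo cross-section.

    cross_sections: array of shape (num_slices, H, W), ordered along Z.
    """
    n = int(round(target_thickness_um / slice_thickness_um))  # e.g. 500 / 50 = 10
    stack = cross_sections[start_index:start_index + n]
    return stack.mean(axis=0)  # combine the stacked slices (averaging is one choice)

# Example: 10 slices of 50 um cover the 500 um height of a solder ball.
slices = np.random.rand(40, 256, 256)
pseudo = pseudo_cross_section(slices, start_index=12,
                              slice_thickness_um=50.0, target_thickness_um=500.0)
```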
The inspection unit 42 inspects the solder joint state based on the cross-sectional image data generated by the cross-sectional image generating unit 36, the inspection surface image data identified by the board inspection surface detecting unit 38, and the pseudo cross-sectional image data generated by the pseudo cross-sectional image generating unit 40. Since the solder that joins the electronic board and a component lies near the board inspection surface, inspecting the inspection surface image data and the cross-sectional image data showing the region on the radiation generator 22 side of the inspection surface makes it possible to judge whether the solder properly joins the board and the component.
Here, the "solder joint state" refers to whether the electronic board and the component are joined by solder so that an appropriate conductive path is formed. Inspection of the solder joint state includes bridge inspection, molten state inspection, and void inspection. A "bridge" is an undesirable conductive path between conductors created by joined solder. The "molten state" refers to whether the joint between the electronic board and the component is insufficient because the solder did not melt sufficiently, that is, whether the component is in a so-called "floating" state. A "void" is a defect in a solder joint caused by air bubbles within it. Accordingly, the inspection unit 42 includes a bridge inspection unit 44, a molten state inspection unit 46, and a void inspection unit 48.
The operation of the bridge inspection unit 44, the molten state inspection unit 46, and the void inspection unit 48 is described in detail later. The bridge inspection unit 44 and the void inspection unit 48 inspect for bridges and voids, respectively, based on the pseudo cross-sectional image data generated by the pseudo cross-sectional image generating unit 40, while the molten state inspection unit 46 inspects the molten state of the solder based on the inspection surface image data identified by the board inspection surface detecting unit 38. The inspection results of the bridge inspection unit 44, the molten state inspection unit 46, and the void inspection unit 48 are stored in the storage unit 34.
FIGS. 3 and 4 are flowcharts showing the flow of the inspection process, from capturing transmission images (acquiring transmission image data), generating reconstructed image data (cross-sectional image data), and identifying the inspection surface image data, through inspecting the solder joint state. The process in these flowcharts starts, for example, when the control unit 10 receives an instruction to start inspection from an input device (not shown).
When the inspected object 12 is carried into the inspection device 1 and inspection starts, the imaging processing unit 35 of the control unit 10 sets, via the radiation generator driving unit 16, the irradiation field of the radiation emitted from the radiation generator 22 (the imaging area irradiated with radiation in order to acquire transmission image data for the field of view FOV described above) as shown in FIG. 3(a) (step S100), and starts the image acquisition and judgment process (step S102). When there are multiple imaging areas (FOVs) on the inspected object 12, the imaging areas (FOVs) are selected and set in a predetermined order.
When the image acquisition and judgment process of step S102 starts, the imaging processing unit 35 of the control unit 10 executes the transmission image capture and reconstructed image generation process shown in FIG. 3(b), images the inspected object 12 to acquire transmission image data, and generates reconstructed image data from that transmission image data (step S1020). Specifically, in the transmission image capture and reconstructed image generation process S1020, the imaging processing unit 35 of the control unit 10 moves the board holding unit 24 via the board holder driving unit 18 and moves the detector 26 via the detector driving unit 20 to change the imaging position while setting the quality of the radiation generator 22 via the radiation quality changing unit 14, irradiates the current imaging area (FOV) of the inspected object 12 with radiation, acquires transmission image data, and stores it in the storage unit 34. The cross-sectional image generating unit 36 of the control unit 10 then reads multiple pieces of transmission image data from the storage unit 34, generates reconstructed image data (cross-sectional image data) from them, and stores it in the storage unit 34.
The movement path of the board holding unit 24 driven by the board holder driving unit 18 and the movement path of the detector 26 driven by the detector driving unit 20 when acquiring transmission image data are assumed to be set in advance in the board holder driving unit 18 and the detector driving unit 20, either by reading information stored in the storage unit 34 or by input from the input device. The Z-axis position of the radiation generator 22 is likewise assumed to be set in advance in the radiation generator driving unit 16 by a similar method. In this case as well, the board holder driving unit 18 and the detector driving unit 20 may move the board holding unit 24 and the detector 26 to a desired position and stop them there before the transmission image data is acquired, or the transmission image data may be acquired at the desired positions while the board holding unit 24 and the detector 26 are being moved. The acquired transmission image data is stored in the storage unit 34 for each imaging area (FOV).
The board inspection surface detecting unit 38 of the control unit 10 receives the transmission image data or the reconstructed image data (cross-sectional image data) from the cross-sectional image generating unit 36 and executes the board inspection surface detection and pseudo cross-sectional image generation process, which identifies the inspection surface image from that data (step S1040). The storage unit 34 stores in advance cross-sectional image data of the board inspection surface of a normal inspected object 12 with no abnormality in the solder joint state or the like (referred to as "reference image data"). In the board inspection surface detection and pseudo cross-sectional image generation process S1040, the board inspection surface detecting unit 38 of the control unit 10 compares the reference image data with each piece of cross-sectional image data generated in step S1020, identifies the cross-sectional image data that best matches the reference image data as the inspection surface image data, stores the identified cross-sectional image data (inspection surface image data) in the storage unit 34, and stores its Z-axis position in the storage unit 34 as the position of the board inspection surface in the current field of view FOV. As a method for identifying the cross-sectional image data that best matches the reference image data, a phase-only correlation method can be used, for example, which allows the match rate to be determined quickly and independently of positional misalignment. The pseudo cross-sectional image generating unit 40 of the control unit 10 then generates pseudo cross-sectional image data based on the identified inspection surface image data and the Z-direction position of the board inspection surface, and stores it in the storage unit 34.
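The phase-only correlation mentioned above can be sketched as follows. It returns a match score that is insensitive to translation, which is why the best-matching slice can be picked without first aligning the images. The function below is a generic illustration under that assumption, not the device's actual implementation.

```python
import numpy as np

def phase_only_correlation_peak(image: np.ndarray, reference: np.ndarray) -> float:
    """Peak of the phase-only correlation surface: a translation-invariant match score."""
    F = np.fft.fft2(image)
    G = np.fft.fft2(reference)
    cross_power = F * np.conj(G)
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase
    poc_surface = np.fft.ifft2(cross_power).real
    return float(poc_surface.max())

# The slice whose POC peak against the reference image data is highest would be
# taken as the inspection surface image, e.g.:
# best = max(range(len(slices)), key=lambda i: phase_only_correlation_peak(slices[i], reference))
```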
Here, multiple pieces of cross-sectional image data are generated as reconstructed image data from the transmission image data, and the inspection surface image data is then determined and the pseudo cross-sectional image data generated from them. Alternatively, each time one piece of cross-sectional image data is generated as reconstructed image data, the inspection surface image data may be judged and the pseudo cross-sectional image data generated before the next piece of cross-sectional image data is generated.
Next, the imaging processing unit 35 of the control unit 10 performs image correction processing on the cross-sectional image data generated by the cross-sectional image generating unit 36, the inspection surface image data identified by the board inspection surface detecting unit 38, and the pseudo cross-sectional image data generated by the pseudo cross-sectional image generating unit 40 (collectively referred to below as "cross-sectional image data") (step S1060).
As shown in FIG. 4, the imaging processing unit 35 of the control unit 10 judges whether correction is required for the cross-sectional image data of the current imaging area (FOV) (step S1061). When there is a heat sink under the pin to be inspected, as in the IGBT mentioned above, the heat sink also appears in the cross-sectional image data and becomes noise (a shadow) superimposed on the solder state of the pin under inspection; to improve inspection accuracy, this heat sink noise must be removed by correction processing. Whether correction processing is required may be judged from information such as the design of the electronic board that is the inspected object 12, or information on whether correction is required may be set in advance for each imaging area (FOV). If the imaging processing unit 35 judges that correction of the cross-sectional image data is required (step S1061: Y), it selects one of the multiple pieces of cross-sectional image data (cross-sectional image data, inspection surface image data, and pseudo cross-sectional image data) obtained in step S1040 and reads it from the storage unit 34 (step S1062).
The imaging processing unit 35 of the control unit 10 saves the information of those pixels of the selected cross-sectional image data that satisfy a first condition by storing their positions within the cross-sectional image data and their values (brightness values) in the storage unit 34 (step S1063). Here, the first condition covers pixels whose value is greater than a threshold set as an upper limit and pixels whose value is smaller than a threshold set as a lower limit. The inspection device 1 of this embodiment uses a Fourier transform and an inverse Fourier transform to remove from the cross-sectional image data the image of shapes arranged at a predetermined interval, such as a heat sink (periodic noise). If the frequency components corresponding to the shadow of the heat sink are erased from the frequency image data obtained by Fourier transforming the cross-sectional image data, then, when the corrected frequency image data is inverse Fourier transformed, shapes (images) that do not actually exist may be generated in portions containing frequency components higher than the erased ones. Therefore, by saving the values of the high-frequency pixels of the cross-sectional image data before the Fourier transform and returning the saved values to their original positions (pixels) after the correction by Fourier transform and inverse Fourier transform, the creation of such images (the generation of noise) by the Fourier transform can be prevented. Here, to select the high-frequency pixels, the first condition selects the information of pixels whose value is greater than the threshold set as the upper limit and pixels whose value is smaller than the threshold set as the lower limit.
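A minimal sketch of step S1063 follows, assuming the first condition is the upper/lower threshold pair described above; the threshold values themselves are placeholders chosen per inspection target.

```python
import numpy as np

def save_first_condition_pixels(image: np.ndarray,
                                upper_threshold: float,
                                lower_threshold: float):
    """Return the positions (boolean mask) and values of pixels satisfying the first condition."""
    mask = (image > upper_threshold) | (image < lower_threshold)
    saved_values = image[mask].copy()   # kept aside so later correction cannot alter them
    return mask, saved_values
```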
Alternatively, pixels in a predetermined region of the cross-sectional image data may be used as the first condition. Portions of the cross-sectional image data in which no periodic noise such as a heat sink appears do not require correction (noise removal) by Fourier transform and inverse Fourier transform, so pixels in such regions, that is, pixels at coordinates known in advance not to require noise removal, can be designated as the first condition, saved in advance, and thereby kept unaffected by the correction. As the first condition, both the pixels whose value is greater than the upper-limit threshold or smaller than the lower-limit threshold described above and the pixels in a predetermined region of the cross-sectional image data (pixels at coordinates known in advance not to require noise removal) may be set, or other conditions may be set.
Next, the imaging processing unit 35 of the control unit 10 performs a Fourier transform on the selected cross-sectional image data to generate frequency image data (step S1064). This Fourier transform detects how much the pixel values (brightness values) vary per unit pixel and converts the image data in the spatial domain into frequency image data in the spatial-frequency domain; specifically, a two-dimensional Fourier transform is applied to the cross-sectional image data to generate the frequency image data.
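Step S1064 amounts to a two-dimensional discrete Fourier transform of the cross-sectional image. A one-line sketch using NumPy is given below; the fftshift is only an assumption of this sketch, used to place the zero-frequency component at the centre so that correction regions like those of FIG. 5 are easy to express.

```python
import numpy as np

def to_frequency_image(cross_section: np.ndarray) -> np.ndarray:
    """2-D FFT of a cross-sectional image, zero frequency shifted to the centre."""
    return np.fft.fftshift(np.fft.fft2(cross_section))
```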
The imaging processing unit 35 of the control unit 10 then corrects the frequency image data. If the frequency components corresponding to the image of the heat sink were removed from the entire frequency image data, then, when the corrected frequency image data is inverse Fourier transformed, images that did not need to be deleted might be deleted from the corrected cross-sectional image data, or images that do not actually exist might be generated. The inspection device 1 according to this embodiment is therefore configured to divide the frequency image data into multiple regions (called "correction regions") and to perform the correction on at least one of these correction regions.
FIG. 5 shows a case in which the frequency image data Ip is divided into four regions: a correction region A1 delimited by a boundary Lb1, correction regions A2a and A2b delimited by a boundary Lb2, and the remaining correction region A3. The imaging processing unit 35 of the control unit 10 selects one correction region from these multiple correction regions (step S1065) and corrects those pixels within the correction region that satisfy the condition set for that correction region (called the "second condition") (step S1066). Here, the second condition is that a pixel belongs to frequency components within a predetermined range, and the correction consists of removing its value (setting it to 0). That is, by removing the values of the pixels of the frequency components chosen with the arrangement interval of the heat sink and the like taken into account, the shadow of the heat sink can be removed.
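One way the region-wise correction of steps S1065–S1066 could be realised is sketched below. The rectangular region and the frequency band to be removed are placeholder assumptions; the embodiment leaves the region boundaries and the second condition to be chosen per inspection target.

```python
import numpy as np

def correct_region(freq_image: np.ndarray,
                   region_slice: tuple,
                   radius_min: float,
                   radius_max: float) -> None:
    """Zero frequency components within [radius_min, radius_max] inside one correction region.

    freq_image: fftshift-ed 2-D spectrum (modified in place).
    region_slice: (row_slice, col_slice) delimiting the correction region.
    """
    h, w = freq_image.shape
    rows, cols = np.ogrid[:h, :w]
    radius = np.hypot(rows - h / 2, cols - w / 2)            # distance from zero frequency
    band = (radius >= radius_min) & (radius <= radius_max)   # the "second condition"
    region_mask = np.zeros_like(band)
    region_mask[region_slice] = True
    freq_image[band & region_mask] = 0.0

# Example: remove one frequency band only in the upper half of the spectrum,
# leaving the rest of the frequency image untouched.
# correct_region(freq, (slice(0, freq.shape[0] // 2), slice(None)), 30.0, 40.0)
```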
When the correction of the correction region selected in step S1065 is complete, the imaging processing unit 35 of the control unit 10 judges whether there is a next correction region (step S1067). If it judges that there is a next correction region (step S1067: Y), it returns to step S1065, selects the next correction region, and repeats steps S1066 and S1067 described above.
Although FIG. 5 shows the case where the boundaries Lb1 and Lb2 are arranged so as to overlap, these boundaries need not overlap. The second condition may also differ between correction regions. For example, the range of frequency components to be removed may differ between the correction region A1 and the correction regions A2a and A2b, that is, the parameters of the second condition may be changed. The content of the second condition (the way of deciding whether to correct) may also differ, for example by removing pixels within a predetermined range of frequency components in the correction region A1 while removing pixels with frequency components larger than a predetermined threshold in the correction regions A2a and A2b. Alternatively, the second condition may specify that the correction region A3 is not corrected.
When the current imaging area (FOV) contains a correction target (for example, a heat sink), information on the position and shape of that heat sink (what shape the heat dissipation fins have and at what intervals they are arranged) is clear from the design information of the electronic board, so the size and position of the correction regions and the correction conditions for each correction region (the second condition and its parameters) may be determined in advance or may be selected at inspection time. The positions of the correction regions within the frequency image data and the correction method may also be determined by AI-based learning.
If the imaging processing unit 35 of the control unit 10 judges that there is no next correction region (step S1067: N), it performs an inverse Fourier transform on the frequency image data corrected in step S1066 to generate corrected cross-sectional image data (step S1068). The imaging processing unit 35 of the control unit 10 then reads the information of the pixels saved in step S1063 from the storage unit 34 and replaces, in the corrected cross-sectional image data, the values at the saved positions with the saved pixel values (step S1069). The values of the pixels at the positions saved in step S1063 thereby return to their state before the correction by Fourier transform and inverse Fourier transform, and are therefore unaffected by this correction.
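Steps S1068–S1069 then invert the transform and put the saved pixels back. The sketch below continues the earlier helpers; the function and variable names are the assumptions of those sketches, not names from the embodiment.

```python
import numpy as np

def finish_correction(corrected_freq: np.ndarray,
                      mask: np.ndarray,
                      saved_values: np.ndarray) -> np.ndarray:
    """Inverse FFT of the corrected spectrum, then restore the saved pixel values."""
    corrected = np.fft.ifft2(np.fft.ifftshift(corrected_freq)).real
    corrected[mask] = saved_values   # pixels matching the first condition stay unaffected
    return corrected
```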
The imaging processing unit 35 of the control unit 10 further stores the cross-sectional image data corrected by the above processing in the storage unit 34 (step S1070) and judges whether there is a next piece of cross-sectional image data (step S1071). If it judges that there is a next piece of cross-sectional image data (step S1071: Y), it returns to step S1062, selects the next piece of cross-sectional image data, and repeats steps S1063 to S1071 described above.
If the imaging processing unit 35 of the control unit 10 judges that there is no next piece of cross-sectional image data (step S1071: N), it ends the image correction process S1060.
Returning to FIG. 3(b), the bridge inspection unit 44 of the control unit 10 obtains from the pseudo cross-sectional image generating unit 40 (reading it from the storage unit 34) a pseudo cross-sectional image that shows the solder ball with a slice thickness comparable to the solder ball itself, and inspects for bridges (step S1100). If no bridge is detected (step S1100: N), the molten state inspection unit 46 of the control unit 10 obtains the inspection surface image from the board inspection surface detecting unit 38 (reading it from the storage unit 34) and inspects whether the solder has melted (step S1120). If the solder has melted (step S1140: Y), the void inspection unit 48 of the control unit 10 obtains from the pseudo cross-sectional image generating unit 40 (reading it from the storage unit 34) a pseudo cross-sectional image that partially shows the solder ball and inspects whether voids are present (step S1160). If no void is found (step S1180: N), the inspection unit 42 of the control unit 10 judges that the solder joint state is normal (step S1200) and outputs that result to the storage unit 34. If a bridge is detected (step S1100: Y), if the solder has not melted (step S1140: N), or if a void is present (step S1180: Y), the inspection unit 42 judges that the solder joint state is abnormal (step S1220) and outputs that result to the storage unit 34. When the solder state has been output to the storage unit 34, the image acquisition and judgment process S102 of this flowchart ends.
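The bridge → molten state → void decision chain of steps S1100–S1220 can be summarised with the sketch below; the three boolean inputs are placeholders standing in for the inspections described in the embodiment.

```python
def judge_solder_joint(has_bridge: bool, is_melted: bool, has_void: bool) -> str:
    """Order of checks follows steps S1100-S1220: bridge, then molten state, then voids."""
    if has_bridge:
        return "abnormal: bridge detected"
    if not is_melted:
        return "abnormal: solder not melted (floating)"
    if has_void:
        return "abnormal: void detected"
    return "normal"
```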
Returning to FIG. 3(a), when the image acquisition and judgment process S102 ends, the imaging processing unit 35 of the control unit 10 judges whether there is a next imaging area (FOV) (step S104). If it judges that there is a next imaging area (step S104: Y), it returns to step S100, selects the next imaging area (FOV), and repeats the image acquisition and judgment process S102. If it judges that there is no next imaging area (FOV) (step S104: N), it ends the inspection process, and the inspected object 12 is carried out of the inspection device 1.
As described above, the image acquisition and judgment process S102 may be executed for each imaging area (FOV), or the inspection may be executed, starting from the imaging area (FOV) for which generation of the reconstructed image data (cross-sectional image data and pseudo cross-sectional image data) has finished, in parallel with the acquisition of transmission image data and the generation of reconstructed image data for the other imaging areas (FOVs).
In the configuration described above, correction processing is applied to the cross-sectional image data to remove periodic noise, but the image correction processing explained with reference to FIG. 4 may instead be applied to the transmission image data before the reconstructed image data (cross-sectional image data) is generated. By removing periodic noise such as that of a heat sink from the transmission image data, this noise can be removed from the reconstructed image data (cross-sectional image data) generated from the corrected transmission image data.
The effects of the image data correction method of the inspection device 1 according to this embodiment are summarized below.
First, although a correction method using a Fourier transform and an inverse Fourier transform is used to remove periodic noise from the transmission image data or the cross-sectional image data, the information of the pixels that do not require correction is saved before this correction and written back into the transmission image data or cross-sectional image data afterwards. These pixels are therefore unaffected by the correction, and only the periodic noise is removed efficiently.
Second, when the frequency image data obtained by Fourier transforming the transmission image data or the cross-sectional image data is corrected, the frequency image data can be divided into multiple correction regions and the correction method and its parameters (the second condition) can be changed for each correction region, so that, of the frequency components of the periodic image, only the noise that is actually meant to be removed is removed efficiently.
1 Inspection device
10 Control unit
12 Inspected object
22 Radiation generator (radiation source)
24 Board holding unit (holding unit)
26 Detector
Claims (6)
- 1. A method for correcting transmission image data acquired by irradiating an object to be inspected with radiation emitted from a radiation source and detecting the radiation transmitted through the object to be inspected, or three-dimensional image data reconstructed from the transmission image data, the method comprising:
a first step of storing, as information on pixels of the image data that satisfy a first condition, the positions and values of those pixels in the image data;
a second step of Fourier transforming the image data to generate frequency image data;
a third step of correcting the values of pixels of the frequency image data that satisfy a second condition and outputting corrected frequency image data;
a fourth step of inverse Fourier transforming the corrected frequency image data to generate corrected image data; and
a fifth step of replacing, in the corrected image data, the pixel values at the positions stored in the first step with the stored values.
- 2. The image data correction method according to claim 1, wherein the third step divides the frequency image data into a plurality of regions and corrects the values of pixels in at least one of the regions that satisfy the second condition.
- 3. The image data correction method according to claim 2, wherein the third step performs the correction using a correction method based on the second condition set for each of the regions.
- 4. The image data correction method according to claim 1, wherein the first condition is that a pixel has a value greater than a threshold set as an upper limit or a value smaller than a threshold set as a lower limit.
- 5. The image data correction method according to claim 1, wherein the first condition is that a pixel lies in a predetermined region within the image data.
- 6. An inspection device comprising:
a radiation source;
a holding unit that holds an object to be inspected;
a detector that detects radiation from the radiation source transmitted through the object to be inspected and acquires transmission image data of the object to be inspected; and
a control unit,
wherein the control unit corrects two or more pieces of transmission image data acquired while changing the relative positions of the radiation source, the holding unit, and the detector, or three-dimensional image data reconstructed from the transmission image data, by the image data correction method according to any one of claims 1 to 5, and inspects the object to be inspected using the corrected image data.
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2023-059932 | 2023-04-03 | | |
| JP2023059932 | 2023-04-03 | | |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2024210035A1 | 2024-10-10 |
Family
ID=92971745
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2024/012694 (WO2024210035A1) | Image data correction method, and inspection device | 2023-04-03 | 2024-03-28 |

Country Status (1)

| Country | Link |
|---|---|
| WO | WO2024210035A1 (en) |