CN114926357A - Self-correcting method for LED array light source pose of computed microscopy imaging system - Google Patents
- Publication number
- CN114926357A (application number CN202210532164.3A)
- Authority
- CN
- China
- Prior art keywords
- led array
- led
- pose
- image
- light
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B21/00—Microscopes
- G02B21/06—Means for illuminating specimens
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B21/00—Microscopes
- G02B21/36—Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
- G02B21/365—Control or image processing arrangements for digital or video microscopes
- G02B21/367—Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
A self-correction method for the pose of an LED array light source of a computational microscopy imaging system, which eliminates the existing systems' need for a multi-degree-of-freedom precision mechanical adjustment device and solves the inaccuracy, incompleteness, and excessive correction time of existing correction methods. It includes: (1) sequentially lighting the LED lamps in the LED array and collecting an FPM data set with LED pose deviation using a CMOS camera; (2) selecting a low-resolution image with a bright-dark field boundary from the data set; (3) extracting the bright field region of the image using a Unet network; (4) fitting the circle on which the bright-dark field boundary arc lies, and calculating its center and radius; (5) calculating the pose parameters of the LED array through the established mathematical model, using the circle center and radius from step (4); (6) correcting the LED lamp positions in the FPM reconstruction algorithm to obtain a corrected FPM reconstructed image.
Description
Technical Field
The invention relates to the technical field of computational microscopy imaging, in particular to a self-correcting method for the pose of an LED array light source of a computational microscopy imaging system.
Background
The richness of life rests on the diversity and complexity of biological cell structures, which the unaided human eye cannot appreciate without advanced observation techniques. It was not until the 17th century that Leeuwenhoek, relying on advanced fabrication techniques, opened humanity's door to the microscopic world, observing bacteria and recording their morphology with a microscope of his own making. However, the information throughput of a conventional microscopic imaging system is limited by its hardware parameters: high-resolution imaging typically comes at the expense of a reduced field of view, and vice versa. Moreover, most biological cells are weakly absorbing phase objects that cannot be observed directly under a bright-field microscope; high-contrast images can only be acquired by means of staining or fluorescent labeling, but because some cells are difficult to stain and fluorescence suffers from phototoxicity and photobleaching, observing weakly absorbing samples remains challenging.
Thanks to the combination of computer technology and optical microscopy, emerging computational microscopy imaging techniques recover the complex amplitude information of a sample from acquired data by modeling and analyzing the microscopy system. Fourier ptychographic microscopy (FPM), a computational microscopy technique proposed in 2013, achieves wide-field, super-resolution quantitative phase imaging. An FPM system replaces the light source of a conventional microscope with a programmable LED array; during data acquisition, LED lamps at different positions are lit in sequence, producing illumination at different incident angles that shifts high-frequency sample information into the pupil of the objective, so that sample information beyond the cutoff frequency of the microscope objective is captured. During recovery, the acquired data are stitched together in the computer by a synthetic-aperture, phase-retrieval algorithm, finally yielding the complex amplitude of the sample. The key to an accurate imaging result in computational microscopy is that the established model corresponds to the actual system; in practice, however, the actual LED array of an FPM system often deviates from the ideal one, so the selected spectral sub-regions are misplaced during reconstruction and the final result degrades. Matching the LED array model used in reconstruction to the actual LED array is therefore decisive for obtaining good imaging results.
Current methods for this model-matching problem fall into two broad categories. The first adjusts the actual LED array to the ideal position so that it matches the ideal model. FPM systems taking this approach, however, typically require multi-degree-of-freedom precision mechanical adjustment; they tend to be costly, bulky, and time-consuming, and the adjustment procedure is impractical for inexperienced users. The second category matches the established model to the actual LED array automatically through an algorithm. In 2016, Sun et al. searched for the LED array pose parameters by combining simulated annealing with nonlinear regression; their experiments show that the method corrects LED array pose deviation automatically and effectively, but its annealing evaluation function is affected by factors such as LED intensity fluctuation and camera noise, and the iterative search is time-consuming. In 2022, Zheng et al. obtained the LED array pose parameters quickly from the shift characteristics of defocused bright-field images and corrected the reconstructed amplitude, phase, and pupil function, but this method places high demands on the accuracy of the focusing apparatus. Moreover, existing methods of the second category correct only the non-tilt pose parameters, so their description of the actual LED array pose is incomplete.
Disclosure of Invention
To overcome the deficiencies of the prior art, the invention provides a self-correction method for the pose of an LED array light source of a computational microscopy imaging system, which eliminates the existing systems' need for a multi-degree-of-freedom precision mechanical adjustment device and solves the inaccuracy, incompleteness, and excessive correction time of existing correction methods.
The technical scheme of the invention is as follows: the self-correcting method for the pose of the LED array light source of the computed microscopy imaging system comprises the following steps of:
(1) sequentially lighting the LED lamps in the LED array, and collecting an FPM data set with LED pose deviation using a CMOS camera;
(2) selecting a low-resolution image with a bright-dark field boundary from the data set;
(3) extracting a bright field area of the image by using a Unet network;
(4) fitting a circle where the light and dark field boundary arc is located, and calculating the circle center and the radius;
(5) calculating the pose parameters of the LED array through the established mathematical model by combining the circle center and the radius in the step (4);
(6) correcting the positions of the LED lamps in the FPM reconstruction algorithm to obtain a corrected FPM reconstructed image.
Because the invention lights the LED lamps in the LED array sequentially and collects the FPM data set with LED pose deviation using a CMOS camera, the FPM system needs no precision mechanical device to adjust the light source and no initial alignment step; the volume, weight, and cost of the system are reduced, and the system is friendly to inexperienced users. Because a low-resolution image with a bright-dark field boundary is selected from the data set and its bright field region is extracted with a Unet network, the light spots are extracted by a trained neural network; the extraction is robust, unaffected by LED intensity fluctuation and CMOS camera noise, and highly accurate. Because the circle on which the bright-dark field boundary arc lies is fitted and a correspondence model between the arc-boundary circle center and the LED lamp is established, the pose parameters are solved directly without an iterative search algorithm, so little time is consumed and a large error range can be measured. Because the circle center and radius are calculated, the LED array pose parameters are computed through the established mathematical model, and the LED lamp positions are corrected in the FPM reconstruction algorithm to obtain the corrected FPM reconstructed image, a more complete LED array model is established and the pose of the LED array is better described.
Drawings
Fig. 1 shows a flow chart of a method for correcting the pose of an LED array light source of a computed microscopy imaging system according to the invention.
FIG. 2 shows a schematic diagram of the FPM system of the present invention.
Fig. 3 shows a lighting schematic of an LED lamp in the light field range employed in an embodiment of the present invention.
Fig. 4 shows a schematic view of the illumination of an LED lamp in the boundary range of the light and dark fields used in the embodiment of the present invention.
Fig. 5 shows a schematic diagram of a bright-dark field boundary employed in an embodiment of the present invention.
Fig. 6 shows a schematic diagram of a corresponding model between the circle center of the light spot arc boundary and the coordinates of the LED lamp, which is established in the embodiment of the present invention.
Detailed Description
As shown in fig. 1, the method for self-correcting the pose of the LED array light source of the computed microscopy imaging system comprises the following steps:
(1) sequentially lighting the LED lamps in the LED array, and collecting an FPM data set with LED pose deviation using a CMOS camera;
(2) selecting a low-resolution image with a bright-dark field boundary from the data set;
(3) extracting a bright field area of the image by using a Unet network;
(4) fitting a circle where the light and dark field boundary arc is located, and calculating the circle center and the radius;
(5) calculating the pose parameters of the LED array through the established mathematical model by combining the circle center and radius from step (4);
(6) correcting the positions of the LED lamps in the FPM reconstruction algorithm to obtain a corrected FPM reconstructed image.
Preferably, the step (1) comprises the following substeps:
(1.1) using a programmable LED array of size M × N as the light source to provide illumination, placing the LED array far enough from the sample that the illumination wave emitted by each LED lamp can be treated as a quasi-monochromatic plane wave; assuming the wave has unit amplitude, the illumination wave emitted by the LED in row m (1 ≤ m ≤ M) and column n (1 ≤ n ≤ N), denoted $\mathrm{LED}_{m,n}$, is expressed as

$$u_{m,n}(x,y) = e^{j2\pi(u_{m,n}x + v_{m,n}y)}$$

where (x, y) are the two-dimensional Cartesian coordinates of the sample plane, j is the imaginary unit, and $(u_{m,n}, v_{m,n})$ is the wave vector of the light wave, expressed as

$$u_{m,n} = \frac{x_c - x_{m,n}}{\lambda\sqrt{(x_c - x_{m,n})^2 + (y_c - y_{m,n})^2 + h^2}}, \qquad v_{m,n} = \frac{y_c - y_{m,n}}{\lambda\sqrt{(x_c - x_{m,n})^2 + (y_c - y_{m,n})^2 + h^2}}$$

where $(x_c, y_c)$ are the center coordinates of the sample sub-region, $x_{m,n}$ and $y_{m,n}$ denote the position of $\mathrm{LED}_{m,n}$, λ is the wavelength of the illumination wave, and h is the distance between the LED array and the sample;
(1.2) assuming the sample is thin with complex transmittance o(x, y), the light wave transmitted through the sample is represented as

$$e_{m,n}(x,y) = o(x,y)\,e^{j2\pi(u_{m,n}x + v_{m,n}y)};$$
(1.3) the light wave transmitted through the sample passes through the microscope objective and reaches its back focal plane, a process denoted as

$$G_{m,n}(u,v) = \mathcal{F}\{e_{m,n}(x,y)\}\,P(u,v) = O(u - u_{m,n}, v - v_{m,n})\,P(u,v)$$

where $O(u - u_{m,n}, v - v_{m,n})$ represents the Fourier transform of the transmitted sample light wave, (u, v) are the two-dimensional spatial-frequency coordinates in the Fourier spectral plane, $\mathcal{F}$ is the Fourier transform operator, and P(u, v) is the pupil function;
(1.4) the light wave reaches the CMOS camera, which records a low-resolution intensity image $I_{m,n}(x,y)$; the process is shown as

$$I_{m,n}(x,y) = \left|\mathcal{F}^{-1}\{O(u - u_{m,n}, v - v_{m,n})\,P(u,v)\}\right|^{2}.$$
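The forward model of steps (1.1)-(1.4) can be sketched numerically. The following Python/NumPy function is a minimal simulation under assumed parameters (square sampling grid, pupil modeled as an ideal binary circle, LED treated as a point source, sample sub-region centered at the origin); all names and parameter values are illustrative, not part of the patent.

```python
import numpy as np

def fpm_low_res_image(sample, led_xy, wavelength, h, pixel_size, pupil_radius):
    """Simulate the low-resolution intensity image recorded for one LED.

    sample       : complex transmittance o(x, y) on an n x n grid
    led_xy       : (x_mn, y_mn) position of the lit LED; sample centre at origin
    wavelength   : illumination wavelength (same length unit throughout)
    h            : LED-array-to-sample distance
    pixel_size   : sampling interval of the object grid
    pupil_radius : pupil cutoff in spatial-frequency units (NA / wavelength)
    """
    n = sample.shape[0]
    # Step (1.1): illumination wave vector from the LED geometry.
    x_mn, y_mn = led_xy
    r = np.sqrt(x_mn**2 + y_mn**2 + h**2)
    u_mn = -x_mn / (wavelength * r)
    v_mn = -y_mn / (wavelength * r)
    # Step (1.2): thin-sample transmission of the tilted plane wave.
    x = (np.arange(n) - n // 2) * pixel_size
    X, Y = np.meshgrid(x, x)
    exit_wave = sample * np.exp(2j * np.pi * (u_mn * X + v_mn * Y))
    # Step (1.3): shifted spectrum filtered by the pupil at the back focal plane.
    fu = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_size))
    FU, FV = np.meshgrid(fu, fu)
    pupil = (FU**2 + FV**2 <= pupil_radius**2).astype(float)
    spectrum = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(exit_wave)))
    # Step (1.4): the camera records the squared modulus of the filtered field.
    field = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(spectrum * pupil)))
    return np.abs(field)**2
```

With a featureless sample and the on-axis LED, the simulated bright-field image is uniform, which is a quick sanity check of the pipeline.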
Preferably, the step (2) comprises the following substeps:
(2.1) selecting one image captured in step (1) and binarizing it to obtain the binary image $I^{b}_{m,n}(x,y)$;
(2.2) computing the proportion $R_{m,n}$ of pixels in $I^{b}_{m,n}$ whose logical value is 1, a process represented as

$$R_{m,n} = \frac{1}{XY}\sum_{x=1}^{X}\sum_{y=1}^{Y} I^{b}_{m,n}(x,y)$$

where X and Y respectively represent the number of pixel rows and columns of the CMOS camera;
(2.3) judging from $R_{m,n}$ whether the current image has a suitable bright and dark field region; if so, selecting the image as the input of step (3), and otherwise judging the next image;
(2.4) repeating steps (2.1) to (2.3), stopping when all images have been judged or the termination condition is met, to obtain the data set for pose self-correction $\{I_q\}$, $q \in [1, Q]$, where Q represents the total number of images with appropriate bright and dark field regions.
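Steps (2.1)-(2.4) amount to a brightness-ratio filter over the acquired images. A minimal sketch follows, assuming a mean-value threshold as a stand-in for the maximum between-class variance (Otsu) threshold used in the embodiment, and the [0.15, 0.85] acceptance interval given there:

```python
import numpy as np

def select_boundary_images(images, lo=0.15, hi=0.85):
    """Keep only images whose bright-pixel ratio R suggests a bright/dark-field
    boundary.  The mean threshold is a simplified stand-in for Otsu's method."""
    selected = []
    for img in images:
        t = img.mean()                      # hypothetical stand-in for the Otsu threshold
        binary = (img >= t).astype(int)
        r = binary.sum() / binary.size      # proportion of logical-1 pixels R_{m,n}
        if lo <= r <= hi:
            selected.append(img)
    return selected
```

An image split roughly half bright and half dark passes the filter, while fully bright or nearly dark frames are rejected, which matches the intent of step (2.3).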
Preferably, the step (3) comprises the following substeps:
(3.1) establishing the data set required for training the Unet: randomly adjusting the LED array so that it takes different pose deviations, placing a number of samples on the stage in turn, lighting the LED lamps that produce bright-dark field boundary images, and collecting a series of images as the inputs of the training set; then, with no sample in place, collecting a series of images as the coarse labels of the training set, images illuminated by the same LED serving as corresponding input and coarse label;
(3.2) binarizing the training-set labels to obtain the bright and dark field regions of each image, using these as the true training-set labels, and then training the Unet with the training set until convergence;
(3.3) extracting the bright and dark field regions of each image with the trained Unet network to obtain Q light-spot images.
Preferably, the step (4) comprises the following sub-steps:
(4.1) selecting the q-th light spot generated in step (3) and extracting its edge pixels with the Canny operator;
(4.2) fitting the circle on which the edge pixels lie using the random sample consensus (RANSAC) algorithm to obtain candidate circle centers and radii;
(4.3) finding the iteration s with the largest number of inlier pixels, and recording the corresponding circle center and radius;
(4.4) computing the variance $V_{R,q}$ of all fitted radii; for each edge pixel, calculating its distance to the recorded circle center and, if that distance differs from the recorded radius by more than $3V_{R,q}$, removing the pixel; traversing all pixels in this way;
(4.5) fitting the remaining pixels by least squares to obtain the circle center $(x_{cir,q}, y_{cir,q})$ and radius $R_{cir,q}$ of the q-th light spot;
(4.6) letting q = q + 1;
(4.7) repeating steps (4.1) to (4.6) until q = Q.
Preferably, said step (4.2) comprises the sub-steps of:
(4.2.1) determining the number of iterations K of the algorithm, with the iteration counter initialized to k = 0;
(4.2.2) randomly selecting 3 edge pixels;
(4.2.3) solving for the circle passing through the 3 selected pixels to obtain a candidate circle center and radius;
(4.2.4) calculating the distance from each edge pixel to the candidate circle center, and recording the number of pixels whose distance lies within a certain neighborhood of the candidate radius;
(4.2.5) letting k = k + 1;
(4.2.6) looping over steps (4.2.2)-(4.2.5), stopping the iteration when k = K.
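The RANSAC loop of steps (4.2.1)-(4.2.6) can be sketched as follows; the tolerance, iteration count, and function names are illustrative assumptions, and the three-point circle is obtained from the standard circumcenter formula.

```python
import numpy as np

def circle_from_3_points(p1, p2, p3):
    """Centre and radius of the circle through three points (None if collinear)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy, np.hypot(ax - ux, ay - uy))

def ransac_circle(points, iters=200, tol=1.0, seed=0):
    """Steps (4.2.1)-(4.2.6): repeatedly fit a circle to 3 random edge pixels
    and keep the candidate with the most pixels within `tol` of its radius."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    best, best_inliers = None, -1
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        circ = circle_from_3_points(*sample)
        if circ is None:
            continue                         # collinear sample, try again
        ux, uy, r = circ
        d = np.hypot(pts[:, 0] - ux, pts[:, 1] - uy)
        inliers = int(np.sum(np.abs(d - r) < tol))
        if inliers > best_inliers:           # step (4.2.4): keep the best candidate
            best, best_inliers = circ, inliers
    return best
```

In the patent's pipeline this candidate would then be refined by the outlier rejection and least-squares fit of steps (4.4)-(4.5).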
Preferably, the step (5) comprises the following substeps:
(5.1) establishing the pose model of the LED array and describing the position of each LED unit in space through a mathematical model: assuming the LED array is a rigid body, taking the plane that passes through the LED centers and is perpendicular to the z-axis as the plane z = 0, and denoting the spacing between adjacent LED lamps by $d_{LED}$, the spatial position coordinates of each LED unit are expressed in terms of the pose parameters,
where $(\theta_x, \theta_y, \theta_z)$ are the rotation angles of the LED array about the x, y, and z axes respectively, $(\Delta x, \Delta y)$ is the translation of the LED array along the x and y axes, and the relative position in the z direction is described by the distance h between the center of the LED array and the sample;
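A sketch of the rigid-body pose model of step (5.1). The Rz @ Ry @ Rx rotation order and the origin-centred indexing are assumptions (the patent does not reproduce the exact matrix here); any fixed convention works as long as the forward model and the regression use the same one.

```python
import numpy as np

def led_positions(M, N, d_led, theta_x, theta_y, theta_z, dx, dy):
    """Spatial coordinates of every LED for the rigid-body pose model of (5.1).
    The nominal array lies in the z = 0 plane, centred at the origin."""
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                        # assumed rotation order
    m = np.arange(M) - (M - 1) / 2          # row offsets, array centred on origin
    n = np.arange(N) - (N - 1) / 2          # column offsets
    NN, MM = np.meshgrid(n, m)
    grid = np.stack([NN * d_led, MM * d_led, np.zeros_like(MM)], axis=-1)
    pos = grid @ R.T                        # rotate the rigid array
    pos[..., 0] += dx                       # in-plane translation (dx, dy);
    pos[..., 1] += dy                       # z handled separately via h
    return pos                              # shape (M, N, 3)
```

For the identity pose the centre LED of a 3 x 3 array sits at the translation offset, and a 90-degree in-plane rotation carries a lamp on the +x axis onto the +y axis.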
(5.2) establishing the correspondence model between the circle center of the spot arc boundary and the LED lamp coordinates: mapping the CMOS camera into object space yields the conjugate CMOS window, which is equivalently located on the in-focus sample plane, its object-space center coordinates being the image-space coordinates scaled by the magnification A of the microscope objective; the distance from the CMOS entrance window to the entrance pupil is determined by the numerical aperture NA and the mean object-space spot radius, itself computed from the fitted radii; the spot center coordinates in object space are then expressed in terms of the position coordinates of the LED lamp, the index q corresponding to a lamp (m, n);
(5.3) assuming an estimate of the pose parameters of the LED array, calculating the position coordinates of the LED units according to the LED array pose model established in step (5.1), then calculating the estimated object-space spot centers according to the model established in step (5.2);
(5.4) calculating the optimal pose estimate of the LED array by nonlinear regression: if the estimated pose parameters are optimal, the spot centers computed via the model of step (5.2) are closest to the actual spot centers, so the mathematical model is established as

$$\hat{\Theta} = \arg\min_{\Theta}\sum_{q=1}^{Q}\left[\left(\hat{x}_{cir,q}(\Theta) - x_{cir,q}\right)^2 + \left(\hat{y}_{cir,q}(\Theta) - y_{cir,q}\right)^2\right]$$

where $\Theta = (\theta_x, \theta_y, \theta_z, \Delta x, \Delta y, h)$ denotes the pose parameters.
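The nonlinear regression of step (5.4) can be illustrated with a small Gauss-Newton fit. The spot-center mapping below is a deliberately simplified stand-in (a pure scaling) for the patent's conjugate-window model of step (5.2), and only three pose parameters are fitted to keep the sketch short; all names are illustrative.

```python
import numpy as np

def predicted_centres(leds_xy, scale):
    """Simplified stand-in for the step-(5.2) correspondence model: each LED at
    (x, y) maps to a spot centre at (-scale * x, -scale * y)."""
    return -scale * leds_xy

def fit_pose(nominal_xy, measured, scale, iters=50):
    """Step (5.4) as Gauss-Newton on (theta_z, dx, dy): minimise the summed
    squared distance between predicted and measured spot centres."""
    p = np.zeros(3)                         # initial guess (theta_z, dx, dy)

    def residual(p):
        c, s = np.cos(p[0]), np.sin(p[0])
        R = np.array([[c, -s], [s, c]])
        leds = nominal_xy @ R.T + p[1:]     # posed LED coordinates
        return (predicted_centres(leds, scale) - measured).ravel()

    for _ in range(iters):
        r = residual(p)
        J = np.empty((r.size, 3))
        for k in range(3):                  # numerical Jacobian, forward差 step 1e-6
            dp = np.zeros(3); dp[k] = 1e-6
            J[:, k] = (residual(p + dp) - r) / 1e-6
        p -= np.linalg.lstsq(J, r, rcond=None)[0]
    return p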
Preferably, the step (6) comprises the following sub-steps:
(6.1) giving initial estimates $P_0(u,v)$ of the pupil function and $O_0(u,v)$ of the high-resolution sample spectrum and starting the iterative optimization algorithm: the initial pupil function estimate is set to a circular low-pass filter with amplitude 1 inside the passband, 0 outside the passband, and phase 0; the initial sample spectrum estimate is set to the Fourier spectrum of an up-sampled low-resolution image;
(6.2) calculating the corrected illumination wave vectors: first, substituting the optimal pose parameter estimate into the LED array pose model established in step (5.1) to calculate the corrected LED lamp coordinates, then calculating the corrected wave vector $(u'_{m,n}, v'_{m,n})$ of each illumination wave;
(6.3) giving the estimate of the low-resolution image under $\mathrm{LED}_{m,n}$ illumination: from the $P_0(u,v)$ and $O_0(u,v)$ given in step (6.1), the Fourier spectrum estimate of the low-resolution image is represented as

$$\Phi_{i,m,n}(u,v) = O_i(u - u'_{m,n}, v - v'_{m,n})\,P_i(u,v)$$

and inverse Fourier transforming it gives the estimate of the low-resolution image:

$$\phi_{i,m,n}(x,y) = \mathcal{F}^{-1}\{\Phi_{i,m,n}(u,v)\};$$
(6.4) substituting the square root of the actually recorded image $I_{m,n}$ for the modulus of the low-resolution image estimate obtained in the previous step, to obtain the updated image

$$\phi'_{i,m,n}(x,y) = \sqrt{I_{m,n}(x,y)}\,\frac{\phi_{i,m,n}(x,y)}{\left|\phi_{i,m,n}(x,y)\right|};$$
(6.5) using the updated image $\phi'_{i,m,n}$ to update the corresponding sample spectrum estimate $O_i(u,v)$ and the pupil function $P_i(u,v)$, where * denotes complex conjugation, $\delta_1$ and $\delta_2$ are normalization constants preventing the denominators from becoming zero, i is the iteration number, and $\Delta\Phi_{i,m,n}$ is the error auxiliary function of the update process:

$$\Delta\Phi_{i,m,n}(u,v) = \mathcal{F}\{\phi'_{i,m,n}(x,y)\} - \Phi_{i,m,n}(u,v);$$
(6.6) repeating steps (6.1)-(6.5) iteratively until all images in the data set acquired in step (1) have been processed; then repeating the whole iterative process several times until the reconstruction result converges, convergence being judged by a relative-error evaluation function; if the current evaluation function value is smaller than the set threshold, the reconstruction loop is exited; finally, the spectrum estimate of the sample is inverse Fourier transformed to obtain a high-resolution, large-field-of-view complex amplitude image.
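The reconstruction loop of step (6) can be sketched in simplified form. The version below performs plain spectrum replacement without the pupil-function update or the δ-regularized corrections, and assumes integer spectrum shifts; it illustrates the loop structure only, not the patent's full update rule.

```python
import numpy as np

def fpm_reconstruct(images, shifts, pupil, up, n_iter=10):
    """Simplified sketch of the step-(6) loop: spectrum-replacement FPM.

    images : list of low-resolution intensity images (n x n)
    shifts : list of integer (du, dv) spectrum shifts from the corrected
             wave vectors of step (6.2)
    pupil  : binary pupil mask (n x n), ones inside the passband
    up     : upsampling factor of the high-resolution grid
    """
    n = images[0].shape[0]
    N = n * up
    spectrum = np.zeros((N, N), complex)
    # (6.1) initialise the spectrum from the first (assumed central) image
    spectrum[N//2 - n//2:N//2 + n//2, N//2 - n//2:N//2 + n//2] = \
        np.fft.fftshift(np.fft.fft2(np.sqrt(images[0])))
    for _ in range(n_iter):
        for img, (du, dv) in zip(images, shifts):
            r0, c0 = N//2 - n//2 + dv, N//2 - n//2 + du
            sub = spectrum[r0:r0 + n, c0:c0 + n] * pupil        # (6.3)
            low = np.fft.ifft2(np.fft.ifftshift(sub))
            low = np.sqrt(img) * np.exp(1j * np.angle(low))     # (6.4)
            new_sub = np.fft.fftshift(np.fft.fft2(low))
            # (6.5), simplified: replace the passband of the shifted sub-spectrum
            spectrum[r0:r0 + n, c0:c0 + n] = \
                np.where(pupil > 0, new_sub, spectrum[r0:r0 + n, c0:c0 + n])
    # final step of (6.6): inverse transform to the complex amplitude image
    return np.fft.ifft2(np.fft.ifftshift(spectrum))
```

A single uniform image with zero shift reconstructs to a uniform complex amplitude, which checks that the shift bookkeeping and the modulus-replacement step are self-consistent.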
For better illustrating the objects and advantages of the present invention, the following detailed description is made with reference to the accompanying drawings and examples.
The invention provides a light source pose self-correction method based on Fourier ptychographic microscopy, comprising the following specific steps:
Step 1.1: a programmable LED array of size M × N is used as the light source to provide illumination, and the LED array is placed far enough from the sample (about 93 mm in this embodiment) that, for any small region of the sample, the illumination waves emitted by the LED lamps can be regarded as quasi-monochromatic plane waves. Assuming the wave has unit amplitude, the illumination wave emitted by $\mathrm{LED}_{m,n}$ in row m (1 ≤ m ≤ M) and column n (1 ≤ n ≤ N) can be expressed as

$$u_{m,n}(x,y) = e^{j2\pi(u_{m,n}x + v_{m,n}y)}$$

where (x, y) are the two-dimensional Cartesian coordinates of the sample plane, j is the imaginary unit, and $(u_{m,n}, v_{m,n})$ is the wave vector of the light wave, which can be expressed as

$$u_{m,n} = \frac{x_c - x_{m,n}}{\lambda\sqrt{(x_c - x_{m,n})^2 + (y_c - y_{m,n})^2 + h^2}}, \qquad v_{m,n} = \frac{y_c - y_{m,n}}{\lambda\sqrt{(x_c - x_{m,n})^2 + (y_c - y_{m,n})^2 + h^2}}$$

where $(x_c, y_c)$ are the center coordinates of the sample sub-region, $x_{m,n}$ and $y_{m,n}$ denote the position of $\mathrm{LED}_{m,n}$, λ is the wavelength of the illumination wave, and h is the distance between the LED array and the sample;
step 1.2, assuming the sample is a thin sample, the light wave transmitted through the sample o (x, y) can be expressed as
Step 1.3: the light wave transmitted through the sample passes through the microscope objective and reaches its back focal plane, which can be expressed as

$$G_{m,n}(u,v) = \mathcal{F}\{e_{m,n}(x,y)\}\,P(u,v) = O(u - u_{m,n}, v - v_{m,n})\,P(u,v)$$

where $O(u - u_{m,n}, v - v_{m,n})$ represents the Fourier transform of the transmitted sample light wave, (u, v) are the two-dimensional spatial-frequency coordinates in the Fourier spectral plane, $\mathcal{F}$ is the Fourier transform operator, and P(u, v) is the pupil function;
step 1.4, the light wave reaches the COMS camera to obtain a low-resolution intensity imageThis process is equivalent to performing an inverse fourier transform, which can be expressed as
Step 2: selecting low-resolution images with a bright-dark field boundary. Fig. 3 shows the lighting of an LED lamp within the bright field range, and Fig. 4 shows the lighting of an LED lamp within the bright-dark field boundary range, including the LED array 1, microscope objective 3, entrance pupil 7, and CMOS entrance window 8. Fig. 5 shows the bright-dark field boundary used in the embodiment, including the projection 9 of the entrance pupil 7 on the CMOS entrance window 8, a two-dimensional view 10 of the CMOS entrance window, a bright field region 11, a dark field region 13, the bright-dark field boundary arc 14, the center 12 of projection 9 under bright-field illumination, and the center 15 of projection 9 when an LED lamp within the boundary range is lit. Step 2 selects, from the M × N images captured in step 1, images that contain both a bright field region 11 and a dark field region 13; the specific steps are as follows:
step 2.1, selecting 1 image captured in step 1 and binarizing it to obtain I^b_{m,n}(x, y). The binarization threshold can be determined by the maximum inter-class variance (Otsu) method, and the process can be expressed as I^b_{m,n}(x, y) = 1 if I_{m,n}(x, y) > T, and 0 otherwise;
Wherein T represents a set threshold;
step 2.2, computing the proportion R_{m,n} of pixels in I^b_{m,n}(x, y) with logic value 1; the process can be expressed as R_{m,n} = (1 / XY)·Σ_x Σ_y I^b_{m,n}(x, y);
In the formula, X and Y respectively represent the number of pixel rows and columns of the CMOS camera;
step 2.3, when an image is considered to have a suitable bright field region 11 and dark field region 13, R_{m,n} lies in the interval [0.15, 0.85] in this example. Determine whether R_{m,n} lies in this interval; if so, select the image as the input of step 3, otherwise continue to judge the next image.
Step 2.4, repeating steps 2.1 to 2.3; when R_{m,n} of several consecutive images all fall below 0.15, the remaining images are dark-field images, so the iteration stops, yielding the data set for pose self-correction I^c_q(x, y), q ∈ [1, Q].
In the formula, Q represents the total number of images with suitable bright field region 11 and dark field region 13.
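Steps 2.1 to 2.3 can be sketched as follows. The maximum inter-class variance (Otsu) threshold and the [0.15, 0.85] interval follow the text; the function names and the histogram bin count are illustrative choices of this sketch.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Maximum inter-class variance (Otsu) threshold, as used in step 2.1."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                      # class-0 probability up to each bin
    mu = np.cumsum(p * centers)            # cumulative mean up to each bin
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b = np.nan_to_num(sigma_b)       # between-class variance per threshold
    return centers[np.argmax(sigma_b)]

def bright_field_ratio(img):
    """Step 2.2: proportion R of pixels whose binarized logic value is 1."""
    T = otsu_threshold(img)
    return np.mean(img > T)

def has_boundary(img, lo=0.15, hi=0.85):
    """Step 2.3: keep images whose bright-field ratio lies in [lo, hi]."""
    return lo <= bright_field_ratio(img) <= hi
```

An image whose ratio lies in the interval contains both a bright field region and a dark field region, i.e., the entrance-pupil projection boundary crosses the sensor, which is exactly the geometric information the later fitting steps need.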
And 3, extracting the bright-dark field region of the image using a Unet network. The neural network Unet removes the complex and diverse sample information in the captured image, yielding a light spot image that contains only the bright-dark field region. The method comprises the following specific steps:
preferably, the sub-steps included in step 3 are as follows:
and 3.1, establishing a data set required by training Unet. And randomly adjusting the LED array to enable the LED array to have different pose deviations, sequentially placing a plurality of samples on an objective table, lighting LED lamps for generating bright and dark field boundary images, and collecting a series of images as input of a training set. Then, when a sample is not placed, acquiring a series of images as coarse marks of a training set, and taking the images illuminated by the same LED as corresponding input and coarse marks;
step 3.2, carrying out binarization processing on the training set marks to obtain a bright field area and a dark field area on the image, using the bright field area and the dark field area as real training set marks, and then using the training set to train the Unet to be convergent;
and 3.3, extracting the bright-dark field region of each image using the trained Unet network to obtain Q light spot images.
And 4, calculating the circle center and radius of the bright-dark field boundary arc 14. Process the light spots obtained in step 3 with an edge detection algorithm to obtain the bright-dark field boundary arc 14, fit the circle on which the arc lies to obtain the projection 9 of the entrance pupil 7 on the CMOS entrance window 8, and finally obtain the circle center position and radius of the bright-dark field boundary arc. The specific sub-steps of step 4 are as follows:
and 4.1, selecting the q-th light spot generated in step 3 (initially q = 1) and extracting the pixels of the light spot edge using the Canny operator.
Step 4.2, fitting the arc where the edge pixel is located by using a random sampling consistency algorithm, wherein the algorithm is as follows:
1) determining the number of iterations K of the algorithm; the iteration counter is initialized to k = 0;
2) randomly selecting 3 edge pixels;
3) fitting the circle determined by these 3 pixels to obtain a candidate circle center (x^k_cir, y^k_cir) and radius R^k;
4) calculating the distance from every edge pixel to the candidate circle center, and recording the number N^k of pixels whose distance lies within a set neighborhood of R^k;
5) k = k + 1;
6) looping steps 2)-5), and stopping the iteration when k = K.
Step 4.3, finding the iteration index s corresponding to the maximum N^k, and recording its circle center (x^s_cir, y^s_cir) and radius R^s.
Step 4.4, computing the variance V_{R,q} of all fitted radii. For each pixel, calculate its distance to the recorded circle center (x^s_cir, y^s_cir); if the difference between this distance and R^s is greater than 3V_{R,q}, remove the pixel. Traverse all pixels in this way;
step 4.5, fitting the circle center and radius of the remaining pixels using the least squares method to obtain the circle center (x_{cir,q}, y_{cir,q}) and radius R_{cir,q} of the q-th light spot arc boundary;
Step 4.6, letting q = q + 1;
and 4.7, repeating steps 4.1-4.6 until q equals Q.
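Steps 4.1 to 4.5 can be sketched as below. The three-point circle, the RANSAC consensus count, and a least-squares (Kåsa) refinement follow the text; the outlier test here uses the spread of the measured distances rather than the variance of the fitted radii, which is a simplification of step 4.4, and all names are illustrative.

```python
import numpy as np

def circle_from_3_points(p1, p2, p3):
    """Circle through three points (sub-step 3 of the RANSAC loop)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None                            # collinear points, no circle
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, np.hypot(ax - ux, ay - uy)

def ransac_circle(points, iters=200, tol=2.0, seed=0):
    """Steps 4.2-4.3: keep the candidate circle with the largest consensus."""
    rng = np.random.default_rng(seed)
    best, best_count = None, -1
    for _ in range(iters):
        idx = rng.choice(len(points), 3, replace=False)
        model = circle_from_3_points(*points[idx])
        if model is None:
            continue
        ux, uy, r = model
        d = np.abs(np.hypot(points[:, 0] - ux, points[:, 1] - uy) - r)
        count = int(np.sum(d < tol))           # pixels near the candidate arc
        if count > best_count:
            best, best_count = model, count
    return best

def refine_least_squares(points, model, k=3.0):
    """Steps 4.4-4.5: drop outliers, then algebraic (Kåsa) least-squares fit."""
    ux, uy, r = model
    d = np.hypot(points[:, 0] - ux, points[:, 1] - uy)
    inliers = points[np.abs(d - r) <= k * np.std(d)]
    # solve x^2 + y^2 = 2*xc*x + 2*yc*y + (r^2 - xc^2 - yc^2) in least squares
    A = np.column_stack([inliers[:, 0], inliers[:, 1], np.ones(len(inliers))])
    b = inliers[:, 0] ** 2 + inliers[:, 1] ** 2
    cx2, cy2, c = np.linalg.lstsq(A, b, rcond=None)[0]
    xc, yc = cx2 / 2, cy2 / 2
    return xc, yc, np.sqrt(c + xc**2 + yc**2)
```

RANSAC tolerates edge pixels that belong to the straight sensor border rather than the arc; the final least-squares pass then refines the center and radius on the surviving arc pixels only.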
And 5, calculating the pose parameters of the LED array. Establishing a pose model of the LED array, namely describing the spatial position of each LED lamp in the LED array through pose parameters, then establishing a corresponding model of the position coordinates of the LED lamps and the light and dark field boundary arc parameters calculated in the step 4, and finally acquiring the pose parameters of the LED array by using a nonlinear regression method. The specific substeps of step 5 are as follows:
and 5.1, establishing the pose model of the LED array, i.e., describing the position of each LED lamp in space through a mathematical model. Assuming the LED array is a rigid body, the plane passing through the center of the LED array and perpendicular to the z-axis is the z = 0 plane, and the distance between adjacent LED lamps is d_LED; the spatial position coordinates of each LED lamp can then be expressed in terms of the pose parameters below.
Wherein (θ_x, θ_y, θ_z) are the rotation angles of the LED array about the x, y, and z axes, (Δx, Δy) are the translations of the LED array along the x and y axes, and the relative position in the z direction is described by the distance h between the center of the LED array and the sample, so the position of each LED lamp can be described by the 6 parameters (θ_x, θ_y, θ_z, Δx, Δy, h).
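A sketch of the 6-parameter pose model of step 5.1 is given below. The patent does not show the explicit rotation convention at this point, so the rotation order Rz·Ry·Rx and all names are assumptions of this sketch; h enters the model separately as the array-to-sample distance.

```python
import numpy as np

def led_positions(theta_x, theta_y, theta_z, dx, dy, d_led, M, N):
    """Spatial coordinates of every LED lamp under the rigid-body pose model
    (theta_x, theta_y, theta_z, dx, dy); angles in radians.
    Returns an (M, N, 3) array of (x, y, z) coordinates."""
    # grid indices centred on the array centre, lying in the z = 0 plane
    m = np.arange(M) - (M - 1) / 2
    n = np.arange(N) - (N - 1) / 2
    grid = np.stack(np.meshgrid(m * d_led, n * d_led, indexing="ij")
                    + [np.zeros((M, N))], axis=-1)        # (M, N, 3)
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                  # rotation order is an assumption here
    pos = grid @ R.T                  # rotate each LED about the array centre
    pos[..., 0] += dx                 # in-plane translation (dx, dy)
    pos[..., 1] += dy
    return pos
```

The rigid-body assumption is what makes the problem tractable: instead of estimating M × N individual LED positions, only 6 global parameters need to be recovered.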
And 5.2, establishing the correspondence model between the circle center of the light spot arc boundary and the coordinates of the LED lamp. FIG. 6 is a two-dimensional schematic of this correspondence model established in the embodiment of the present invention, in which LED_0 denotes the central lamp, and LED_{-1} and LED_{-2} denote LED lamps within the bright-dark boundary range. For convenience of analysis, in this embodiment the aperture stop 4 and the CMOS camera 6 are referred to object space, giving the entrance pupil 7 and the CMOS entrance window 8, whose object-space center coordinates are obtained by dividing the image-space coordinates by A, where A is the magnification of the microscope objective. The entrance pupil radius is BD = NA·h_1, where NA is the numerical aperture of the microscope objective. The segment CC_0 in FIG. 6 is the radius of the projection 9 of the entrance pupil 7 on the CMOS entrance window 8, i.e., the radius of the circle on which the object-space light spot arc boundary lies, and can be expressed as the mean of the circle radii calculated in step 4.
By the similar-triangle relation, the distance h_1 between the entrance pupil 7 and the CMOS entrance window 8 is obtained.
By a similar derivation, the object-space light spot centers, such as points C_{-1} and C_{-2}, can be expressed in terms of the position coordinates of the LED lamps,
wherein q corresponds to (m, n).
Step 5.3, assuming an estimated pose parameter vector (θ_x, θ_y, θ_z, Δx, Δy, h) for the LED array, calculating the position coordinates of the LED units by equation (7), and then calculating the estimated object-space light spot centers according to equation (10).
And 5.4, calculating the optimal pose estimate of the LED array using a nonlinear regression method. If the estimated pose parameters are optimal, the spot centers calculated in step 5.3 should be closest to the actual spot centers, so a mathematical model can be established as the minimization of the sum of squared distances between the estimated and the fitted spot centers over all Q light spots.
In the formula, the sum of squared center distances is the loss function that needs to be minimized, and its minimizer is the optimal pose parameter estimate of the LED array.
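The nonlinear regression of step 5.4 can be sketched as a Gauss-Newton loop with a finite-difference Jacobian; the patent does not specify the solver, so this generic routine and its names are illustrative. Here `predict` stands for the composition of equations (7) and (10): pose parameters in, model spot centers out.

```python
import numpy as np

def fit_pose(measured, predict, p0, iters=50, eps=1e-6):
    """Step 5.4: minimise the summed squared distance between the model spot
    centres predict(p) and the measured centres (both (Q, 2) arrays) by
    Gauss-Newton with a numerical Jacobian."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = (predict(p) - measured).ravel()      # residual vector
        J = np.empty((r.size, p.size))
        for j in range(p.size):                  # finite-difference Jacobian
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = ((predict(p + dp) - measured).ravel() - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # Gauss-Newton step
        p = p + step
        if np.linalg.norm(step) < 1e-10:         # converged
            break
    return p
```

The least-squares solve of each step also handles the case where some parameters are weakly constrained by the measured centers, returning the minimum-norm update.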
And 6, outputting the corrected FPM reconstructed image. Using the pose parameters obtained in step 5, compute the model matching the actual LED positions, and reconstruct according to the corrected model to obtain an accurate high-resolution, large-field-of-view complex amplitude image.
Preferably, step 6 comprises the following sub-steps:
step 6.1, giving the initial pupil function and the spectrum estimate of the high-resolution sample, P_0(u, v) and O_0(u, v), and taking them as the starting point of the iterative optimization algorithm. In general, the pupil of a microscope objective is circular, so the initial pupil function estimate is set to a circular low-pass filter with value 1 inside the passband, 0 outside the passband, and phase 0; the initial sample spectrum estimate can be set to the Fourier spectrum of an up-sampled low-resolution image;
and 6.2, calculating the corrected light wave vector. First, according to the LED array pose model established in step 5.1, use the optimal pose parameter estimate to calculate the corrected LED lamp coordinates; then calculate the corrected light wave vector (u'_{m,n}, v'_{m,n}).
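Step 6.2 reuses the wave-vector geometry of step 1.1 with the corrected LED coordinates; a sketch, with illustrative names, is:

```python
import numpy as np

def wave_vectors(led_xy, xc, yc, wavelength, h):
    """Corrected illumination wave vectors (step 6.2): for each corrected LED
    coordinate (x_mn, y_mn) compute (u'_mn, v'_mn) from the geometry of
    step 1.1, with (xc, yc) the sub-region centre and h the array-to-sample
    distance."""
    x = led_xy[..., 0]
    y = led_xy[..., 1]
    # distance from each LED to the sub-region centre on the sample
    r = np.sqrt((xc - x) ** 2 + (yc - y) ** 2 + h ** 2)
    u = (xc - x) / (wavelength * r)
    v = (yc - y) / (wavelength * r)
    return u, v
```

An LED directly below the sub-region center gives (u, v) = (0, 0), i.e., normal-incidence bright-field illumination; off-axis LEDs give the oblique wave vectors that shift the spectrum in the reconstruction.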
Step 6.3, giving the low-resolution image estimate under LED_{m,n} illumination. With the P_0(u, v) and O_0(u, v) given in step 6.1, the Fourier spectrum estimate of the low-resolution image is expressed as O_i(u − u'_{m,n}, v − v'_{m,n})·P_i(u, v).
An inverse Fourier transform then gives the low-resolution image estimate φ_{i,m,n}(x, y) = F^{-1}{O_i(u − u'_{m,n}, v − v'_{m,n})·P_i(u, v)}.
step 6.4, replacing the modulus of the low-resolution image estimate obtained in the previous step with the square root of the actually recorded image, keeping its phase, to obtain the updated image φ'_{i,m,n}(x, y).
Step 6.5, using the updated image φ'_{i,m,n}(x, y) to update the corresponding sample spectrum estimate O_i(u, v) and pupil function P_i(u, v), obtaining O_{i+1}(u, v) and P_{i+1}(u, v).
In the formula, * is the complex conjugate operation, δ_1 and δ_2 are normalization constants that prevent the denominator from being zero, i is the number of iterations, and ΔΦ_{i,m,n} is the error auxiliary function of the update process, i.e., the difference between the Fourier spectra of the updated and the estimated low-resolution images.
and 6.6, repeating sub-steps 6.2-6.5 until all images in the data set acquired in step 1 have been processed. Subsequently, the whole iteration process is repeated multiple times until the reconstruction result converges; convergence can be judged by a relative error evaluation function that compares the recorded intensities with the current low-resolution image estimates.
If the current evaluation function is smaller than the set threshold, jump out of the reconstruction loop. Finally, perform an inverse Fourier transform on the spectrum estimate of the sample to obtain the high-resolution, large-field-of-view complex amplitude image.
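Steps 6.1 to 6.6 can be sketched as the following simplified recovery loop. It performs the amplitude replacement of step 6.4 and an object-spectrum update, but keeps the pupil fixed instead of applying the full joint update of step 6.5, and omits the convergence test of step 6.6; the names and the pixel-shift convention are assumptions of this sketch.

```python
import numpy as np

def fpm_reconstruct(images, shifts, pupil, hi_shape, n_iter=10):
    """Simplified FPM recovery (steps 6.1-6.6): amplitude replacement plus
    object-spectrum update with a fixed binary pupil.

    images : list of recorded low-resolution intensity images
    shifts : corrected wave vectors (du, dv) per image, in Fourier pixels
    """
    F, iF = np.fft.fft2, np.fft.ifft2
    sh, sw = pupil.shape
    mask = pupil > 0
    # step 6.1: flat initial object -> initial spectrum estimate O_0(u, v)
    O = np.fft.fftshift(F(np.ones(hi_shape, dtype=complex)))
    cy0, cx0 = hi_shape[0] // 2, hi_shape[1] // 2
    for _ in range(n_iter):
        for img, (du, dv) in zip(images, shifts):
            ys = slice(cy0 + dv - sh // 2, cy0 + dv + sh - sh // 2)
            xs = slice(cx0 + du - sw // 2, cx0 + du + sw - sw // 2)
            phi_F = O[ys, xs] * pupil                        # step 6.3
            phi = iF(np.fft.ifftshift(phi_F))
            # step 6.4: keep the phase, replace the modulus by sqrt(I)
            phi = np.sqrt(img) * np.exp(1j * np.angle(phi))
            phi_F_new = np.fft.fftshift(F(phi))
            O[ys, xs][mask] = phi_F_new[mask]                # object update
    # final operation of step 6.6: inverse transform of the spectrum estimate
    return iF(np.fft.ifftshift(O))
```

Because the corrected wave vectors of step 6.2 decide which sub-region of the spectrum each image updates, a pose error misplaces these sub-regions; the self-correction of steps 2 to 5 is what keeps them aligned.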
The method is an LED array light source pose self-correction method for the computed microscopy imaging system, comprising a system self-correction algorithm and a Fourier ptychographic microscopy (FPM) imaging algorithm.
In the embodiment, the pose deviation of the LED array is corrected using the image information of the bright-dark field boundary light spots. The acquired original data set is preprocessed by threshold segmentation to obtain the images containing bright-dark field boundary information; the bright-dark field region is extracted from the images containing sample information using a neural network; the bright-dark field boundary arc is fitted using error analysis and least squares theory to obtain the center position and radius of the circle on which the arc lies; the pose parameters of the LED array are obtained from the established matching model between the LED array and the bright-dark field boundary; and the light wave vectors in the reconstruction process are corrected with the fitted pose parameters to obtain an accurate FPM reconstructed image.
The method for self-correcting the pose of the LED array light source of the computed microscopy imaging system can obtain the pose parameters of the LED array light source more quickly, accurately, and completely, improves the imaging result without mechanically adjusting the LED array light source, and promotes the practical application and development of computed microscopy imaging systems that use an LED array as the illumination source.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent variations and modifications made to the above embodiment according to the technical spirit of the present invention still belong to the protection scope of the technical solution of the present invention.
Claims (8)
1. A self-correcting method for the pose of an LED array light source of a computed microscopy imaging system, characterized in that it comprises the following steps:
(1) sequentially lighting the LED lamps in the LED array, and collecting an FPM data set with LED pose deviation by using a CMOS camera;
(2) selecting a low-resolution image with a bright-dark field boundary from the data set;
(3) extracting the bright-dark field region of the image by using a Unet network;
(4) fitting a circle where a boundary arc of the light and dark fields is located, and calculating the circle center and the radius;
(5) calculating the pose parameters of the LED array through the established mathematical model by combining the circle center and the radius in the step (4);
(6) and correcting the position of the LED lamp in the FPM reconstruction algorithm to obtain a corrected FPM reconstruction image.
2. The method for self-correcting the pose of the LED array light source of the computed microscopy imaging system as recited in claim 1, wherein: the step (1) comprises the following sub-steps:
(1.1) using a programmable LED array with specification M × N as the light source to provide illumination, placing the LED array far enough from the sample that the illumination light wave emitted by each LED lamp can be regarded as a quasi-monochromatic plane wave; assuming the amplitude of the light wave is 1, the wave function of the illumination light emitted by LED_{m,n} in the m-th row (1 ≤ m ≤ M) and n-th column (1 ≤ n ≤ N) is expressed as exp[j2π(u_{m,n}x + v_{m,n}y)]
where (x, y) is the two-dimensional Cartesian coordinate system of the sample plane, j is the imaginary unit, and (u_{m,n}, v_{m,n}) is the wave vector of the light wave, expressed as u_{m,n} = (x_c − x_{m,n}) / (λ·sqrt((x_c − x_{m,n})² + (y_c − y_{m,n})² + h²)) and v_{m,n} = (y_c − y_{m,n}) / (λ·sqrt((x_c − x_{m,n})² + (y_c − y_{m,n})² + h²))
wherein (x_c, y_c) are the center position coordinates of the sample sub-region, x_{m,n} and y_{m,n} denote the position coordinates of LED_{m,n}, λ is the wavelength of the illumination light wave, and h is the distance between the LED array and the sample;
(1.2) assuming the sample is a thin sample, the light wave transmitted through the sample o(x, y) is represented as o(x, y)·exp[j2π(u_{m,n}x + v_{m,n}y)];
(1.3) the light wave transmitted through the sample passes through the microscope objective and reaches its back focal plane, the process being denoted as O(u − u_{m,n}, v − v_{m,n})·P(u, v)
wherein O(u − u_{m,n}, v − v_{m,n}) represents the Fourier transform of the transmitted sample light wave, (u, v) are two-dimensional spatial frequency coordinates in the Fourier spectral plane, F{·} is the Fourier transform operator, and P(u, v) is the pupil function;
(1.4) the light wave reaches the CMOS camera to give a low-resolution intensity image, the process being represented as I_{m,n}(x, y) = |F^{-1}{O(u − u_{m,n}, v − v_{m,n})·P(u, v)}|².
3. The method for self-correcting the pose of the LED array light source of the computed microscopy imaging system as recited in claim 2, wherein: the step (2) comprises the following sub-steps:
(2.1) selecting an image captured in step (1) and binarizing it, the threshold being determined by the maximum inter-class variance method, to obtain I^b_{m,n}(x, y);
(2.2) computing the proportion R_{m,n} of pixels in I^b_{m,n}(x, y) with logic value 1, the process being represented as R_{m,n} = (1 / XY)·Σ_x Σ_y I^b_{m,n}(x, y)
Wherein X and Y respectively represent the number of rows and columns of pixels of the CMOS camera;
(2.3) judging from R_{m,n} whether the current image has suitable bright and dark field regions; if so, selecting the image as the input of step (3), otherwise judging the next image;
(2.4) repeating steps (2.1) to (2.3), and stopping the iteration when all images have been judged or the termination condition is met, to obtain the data set for pose self-correction I^c_q(x, y), q ∈ [1, Q], where Q represents the total number of images with suitable bright and dark field regions.
4. The method for self-correcting the pose of the LED array light source of the computed microscopy imaging system as recited in claim 3, wherein: the step (3) comprises the following sub-steps:
(3.1) establishing a data set required for training the Unet: randomly adjusting the LED array to enable the LED array to have different pose deviations, sequentially placing a plurality of samples on an objective table, lighting an LED lamp generating a bright-dark field boundary image, and collecting a series of images as input of a training set; then, when a sample is not placed, acquiring a series of images as coarse marks of a training set, and taking the images illuminated by the same LED as corresponding input and coarse marks;
(3.2) carrying out binarization processing on the training set marks to obtain a bright field area and a dark field area on the image, using the bright field area and the dark field area as real training set marks, and then using the training set to train the Unet to be convergent;
and (3.3) extracting the bright-dark field region of each image by using the trained Unet network to obtain Q light spot images.
5. The method for self-correcting the pose of the LED array light source of the computed microscopy imaging system as recited in claim 4, wherein: the step (4) comprises the following sub-steps:
(4.1) selecting the q-th light spot generated in step (3) and extracting its edge pixels by using the Canny operator;
(4.2) fitting a circle where the edge pixel is located by using a random sampling consistency algorithm to obtain the center and the radius of the circle;
(4.3) finding the iteration index s corresponding to the maximum N^k, and recording its circle center (x^s_cir, y^s_cir) and radius R^s;
(4.4) computing the variance V_{R,q} of all the fitted radii; for each pixel, calculating its distance to the recorded circle center, and if the difference between this distance and R^s is greater than 3V_{R,q}, removing the pixel; traversing all pixels in this way;
(4.5) fitting the circle center and radius of the remaining pixels by using the least squares method to obtain the circle center (x_{cir,q}, y_{cir,q}) and radius R_{cir,q} of the q-th light spot;
(4.6) letting q = q + 1;
(4.7) repeating steps (4.1) to (4.6) until q equals Q.
6. The method for self-correcting the pose of the LED array light source of the computed microscopy imaging system as recited in claim 5, wherein: the step (4.2) comprises the following sub-steps:
(4.2.1) determining the number of iterations K of the algorithm, the iteration counter being initialized to k = 0;
(4.2.2) randomly selecting 3 edge pixels;
(4.2.3) fitting the circle determined by the 3 pixels to obtain a candidate circle center (x^k_cir, y^k_cir) and radius R^k;
(4.2.4) calculating the distance from every edge pixel to the candidate circle center and recording the number N^k of pixels whose distance lies within a set neighborhood of R^k;
(4.2.5) letting k = k + 1;
(4.2.6) looping steps (4.2.2) to (4.2.5) and stopping the iteration when k equals K.
7. The method for self-correcting the pose of the LED array light source of the computed microscopy imaging system as recited in claim 6, wherein: the step (5) comprises the following sub-steps:
(5.1) establishing the pose model of the LED array, describing the position of each LED unit in space through a mathematical model: assuming the LED array is a rigid body, the plane passing through the center of the LED array and perpendicular to the z-axis being the z = 0 plane, and the distance between adjacent LED lamps being d_LED, the spatial position coordinates of each LED unit are expressed in terms of the pose parameters below
wherein (θ_x, θ_y, θ_z) respectively represent the rotation angles of the LED array about the x, y, and z axes, (Δx, Δy) represent the translations of the LED array along the x and y axes, and the relative position in the z direction is described by the distance h between the center of the LED array and the sample;
(5.2) establishing the correspondence model between the circle center of the light spot arc boundary and the coordinates of the LED lamp: referring the CMOS camera to object space to obtain the conjugate CMOS entrance window, which is equivalently located on the focused sample plane; the object-space center coordinates are then the image-space coordinates divided by A, where A is the magnification of the microscope objective;
the distance h_1 from the CMOS entrance window to the entrance pupil is obtained from the similar-triangle relation, wherein NA is the numerical aperture and the mean object-space spot radius is expressed as the mean of the fitted radii R_{cir,q} referred to object space.
Expressing the center coordinates of the light spots in object space as the position coordinates of the LED lamp
Wherein q corresponds to m and n;
(5.3) assuming an estimated pose parameter vector for the LED array, calculating the position coordinates of the LED units according to the LED array pose model established in step (5.1), and then calculating the estimated object-space light spot centers according to the model established in step (5.2);
(5.4) calculating the optimal pose estimate of the LED array by using a nonlinear regression method: if the estimated pose parameters are optimal, the light spot centers calculated in step (5.3) are closest to the actual light spot centers, so the mathematical model is established as the minimization of the sum of squared distances between the estimated and the actual light spot centers.
8. The method for self-correcting the pose of the LED array light source of the computed microscopy imaging system as recited in claim 7, wherein: the step (6) comprises the following sub-steps:
(6.1) giving the initial pupil function and the spectrum estimate of the high-resolution sample, P_0(u, v) and O_0(u, v), as the starting point of the iterative optimization algorithm: the initial pupil function estimate is set to a circular low-pass filter with value 1 inside the passband, 0 outside the passband, and phase 0; the initial sample spectrum estimate is set to the Fourier spectrum of an up-sampled low-resolution image;
(6.2) calculating the corrected light wave vector: first, according to the LED array pose model established in step (5.1), using the optimal pose parameter estimate to calculate the corrected LED lamp coordinates, and then calculating the corrected light wave vector (u'_{m,n}, v'_{m,n});
(6.3) giving the low-resolution image estimate under LED_{m,n} illumination: with the P_0(u, v) and O_0(u, v) stated in step (6.1), the Fourier spectrum estimate of the low-resolution image is represented as O_i(u − u'_{m,n}, v − v'_{m,n})·P_i(u, v)
inverse Fourier transforming it to obtain the low-resolution image estimate φ_{i,m,n}(x, y) = F^{-1}{O_i(u − u'_{m,n}, v − v'_{m,n})·P_i(u, v)};
(6.4) replacing the modulus of the low-resolution image estimate obtained in the previous step with the square root of the actually recorded image, keeping its phase, to obtain the updated image φ'_{i,m,n}(x, y);
(6.5) using the updated image φ'_{i,m,n}(x, y) to update the corresponding sample spectrum estimate O_i(u, v) and pupil function P_i(u, v), obtaining O_{i+1}(u, v) and P_{i+1}(u, v)
wherein * is the complex conjugate operation, δ_1 and δ_2 are normalization constants that prevent the denominator from being zero, i is the number of iterations, and ΔΦ_{i,m,n} is the error auxiliary function of the update process, i.e., the difference between the Fourier spectra of the updated and the estimated low-resolution images;
(6.6) repeating steps (6.2) to (6.5) in the iteration process until all images in the data set acquired in step (1) have been processed; then repeating the whole iteration process several times until the reconstruction result converges, convergence being judged according to a relative error evaluation function that compares the recorded intensities with the current low-resolution image estimates;
if the current evaluation function is smaller than the set threshold, jumping out of the reconstruction loop; and finally, performing an inverse Fourier transform on the spectrum estimate of the sample to obtain a high-resolution, large-field-of-view complex amplitude image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210532164.3A CN114926357B (en) | 2022-05-07 | 2022-05-07 | LED array light source pose self-correction method for computing microscopic imaging system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210532164.3A CN114926357B (en) | 2022-05-07 | 2022-05-07 | LED array light source pose self-correction method for computing microscopic imaging system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114926357A true CN114926357A (en) | 2022-08-19 |
CN114926357B CN114926357B (en) | 2024-09-17 |
Family
ID=82809084
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210532164.3A Active CN114926357B (en) | 2022-05-07 | 2022-05-07 | LED array light source pose self-correction method for computing microscopic imaging system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114926357B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117830592A (en) * | 2023-12-04 | 2024-04-05 | 广州成至智能机器科技有限公司 | Unmanned aerial vehicle night illumination method, system, equipment and medium based on image |
CN118641160A (en) * | 2024-08-19 | 2024-09-13 | 中国科学院长春光学精密机械与物理研究所 | Method for measuring illumination angle of spectrum conjugated Fourier laminated microscopic imaging system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111311682A (en) * | 2020-02-24 | 2020-06-19 | 卡莱特(深圳)云科技有限公司 | Pose estimation method and device in LED screen correction process and electronic equipment |
CN113160212A (en) * | 2021-05-11 | 2021-07-23 | 杭州电子科技大学 | Fourier laminated imaging system and method based on LED array position error fast correction |
CN113671682A (en) * | 2021-08-23 | 2021-11-19 | 北京理工大学重庆创新中心 | Frequency domain light source position accurate correction method based on Fourier laminated microscopic imaging |
KR20220028816A (en) * | 2020-08-31 | 2022-03-08 | 한국표준과학연구원 | Reflective Fourier ptychographic microscopy with misalignment error correction algorithm |
CN114355601A (en) * | 2021-12-24 | 2022-04-15 | 北京理工大学 | LED array light source pose deviation correction method and device of microscopic imaging system |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111311682A (en) * | 2020-02-24 | 2020-06-19 | 卡莱特(深圳)云科技有限公司 | Pose estimation method and device in LED screen correction process and electronic equipment |
KR20220028816A (en) * | 2020-08-31 | 2022-03-08 | 한국표준과학연구원 | Reflective Fourier ptychographic microscopy with misalignment error correction algorithm |
CN113160212A (en) * | 2021-05-11 | 2021-07-23 | 杭州电子科技大学 | Fourier laminated imaging system and method based on LED array position error fast correction |
CN113671682A (en) * | 2021-08-23 | 2021-11-19 | 北京理工大学重庆创新中心 | Frequency domain light source position accurate correction method based on Fourier laminated microscopic imaging |
CN114355601A (en) * | 2021-12-24 | 2022-04-15 | 北京理工大学 | LED array light source pose deviation correction method and device of microscopic imaging system |
Non-Patent Citations (1)
Title |
---|
张韶辉等: "傅里叶叠层显微成像模型、算法及系统研究综述", 激光与光电子学进展, vol. 58, no. 14, 25 July 2021 (2021-07-25) * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117830592A (en) * | 2023-12-04 | 2024-04-05 | 广州成至智能机器科技有限公司 | Unmanned aerial vehicle night illumination method, system, equipment and medium based on image |
CN118641160A (en) * | 2024-08-19 | 2024-09-13 | 中国科学院长春光学精密机械与物理研究所 | Method for measuring illumination angle of spectrum conjugated Fourier laminated microscopic imaging system |
Also Published As
Publication number | Publication date |
---|---|
CN114926357B (en) | 2024-09-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10944896B2 (en) | Single-frame autofocusing using multi-LED illumination | |
CN112243519B (en) | Material testing of optical test pieces | |
CN107003229B (en) | Analytical method comprising holographic determination of the position of a biological particle and corresponding device | |
JP4806630B2 (en) | A method for acquiring optical image data of three-dimensional objects using multi-axis integration | |
Horstmeyer et al. | Convolutional neural networks that teach microscopes how to image | |
CN107065159A (en) | A kind of large visual field high resolution microscopic imaging device and iterative reconstruction method based on big illumination numerical aperture | |
CN110146974B (en) | Intelligent biological microscope | |
EP2976747B1 (en) | Image quality assessment of microscopy images | |
CN101151623A (en) | Classifying image features | |
CN114926357A (en) | Self-correcting method for LED array light source pose of computed microscopy imaging system | |
WO2017040669A1 (en) | Pattern detection at low signal-to-noise ratio | |
US11403861B2 (en) | Automated stain finding in pathology bright-field images | |
CN113671682B (en) | Frequency domain light source position accurate correction method based on Fourier laminated microscopic imaging | |
CN107180411A (en) | A kind of image reconstructing method and system | |
CN115032196B (en) | Full-scribing high-flux color pathological imaging analysis instrument and method | |
CN115060367A (en) | Full-glass data cube acquisition method based on microscopic hyperspectral imaging platform | |
CN108537862B (en) | Fourier diffraction scanning microscope imaging method with self-adaptive noise reduction function | |
CN113674402A (en) | Plant three-dimensional hyperspectral point cloud model generation method, correction method and device | |
CN113759535B (en) | High-resolution microscopic imaging method based on multi-angle illumination deconvolution | |
US11422355B2 (en) | Method and system for acquisition of fluorescence images of live-cell biological samples | |
CN109003228A (en) | A kind of micro- big visual field automatic Mosaic imaging method of dark field | |
CN118067001A (en) | Light source position accurate correction method combined with pupil function | |
CN114120318B (en) | Dark field image target point accurate extraction method based on integrated decision tree | |
Chen et al. | Random positional deviations correction for each LED via ePIE in Fourier ptychographic microscopy | |
WO2019140434A2 (en) | Overlapping pattern differentiation at low signal-to-noise ratio |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||