
CN114926357A - Self-correcting method for LED array light source pose of computed microscopy imaging system - Google Patents

Self-correcting method for LED array light source pose of computed microscopy imaging system

Info

Publication number
CN114926357A
CN114926357A (application CN202210532164.3A)
Authority
CN
China
Prior art keywords
led array
led
pose
image
light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210532164.3A
Other languages
Chinese (zh)
Other versions
CN114926357B (en)
Inventor
张韶辉
郑传建
郝群
赫建
许刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Jinghua Precision Optics Co ltd
Beijing Institute of Technology BIT
Original Assignee
Guangzhou Jinghua Precision Optics Co ltd
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Jinghua Precision Optics Co ltd, Beijing Institute of Technology BIT filed Critical Guangzhou Jinghua Precision Optics Co ltd
Priority to CN202210532164.3A
Publication of CN114926357A
Application granted
Publication of CN114926357B
Active legal status
Anticipated expiration

Links

Images

Classifications

    • G06T 5/80 Geometric correction (G06T 5/00 Image enhancement or restoration)
    • G02B 21/06 Means for illuminating specimens (G02B 21/00 Microscopes)
    • G02B 21/365 Control or image processing arrangements for digital or video microscopes
    • G02B 21/367 Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
    • G06N 3/08 Learning methods (G06N 3/02 Neural networks)
    • G06T 7/13 Edge detection (G06T 7/10 Segmentation; Edge detection)
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Studio Devices (AREA)
  • Microscopes, Condenser (AREA)

Abstract

A self-correction method for the pose of the LED array light source of a computational microscopy imaging system eliminates the need for the multi-degree-of-freedom precision mechanical adjustment devices of existing systems and addresses the inaccuracy, incompleteness, and excessive correction time of existing correction methods. It includes: (1) sequentially lighting the LED lamps in the LED array and collecting an FPM data set with LED pose deviation using a CMOS camera; (2) selecting low-resolution images containing a bright-dark field boundary from the data set; (3) extracting the bright-field region of each image with a Unet network; (4) fitting the circle on which the bright-dark field boundary arc lies and calculating its center and radius; (5) calculating the pose parameters of the LED array from the established mathematical model, using the center and radius from step (4); (6) correcting the LED lamp positions in the FPM reconstruction algorithm to obtain a corrected FPM reconstruction image.

Description

Self-correcting method for LED array light source pose of computed microscopy imaging system
Technical Field
The invention relates to the technical field of computational microscopy imaging, in particular to a self-correcting method for the pose of an LED array light source of a computational microscopy imaging system.
Background
The beauty of life lies in the diversity and complexity of biological cellular structures, which the unaided human eye cannot appreciate without advanced observation techniques. It was not until the 17th century that Leeuwenhoek opened the door to the microscopic world, observing bacteria and their morphological structures with a microscope of his own making. However, the information throughput of a conventional microscopic imaging system is limited by its hardware parameters: high-resolution imaging typically comes at the expense of a reduced field of view, and vice versa. Moreover, most biological cells are weakly absorbing phase objects that cannot be observed directly under a bright-field microscope; high-contrast images can only be acquired by staining or fluorescence labeling. Because some cells are difficult to stain, and because of the phototoxicity and photobleaching associated with fluorescence, observing weakly absorbing samples remains challenging.
Thanks to the combination of computer technology and optical microscopy, emerging computational microscopy imaging techniques recover the complex amplitude information of a sample from acquired data by modeling and analyzing the microscopy system. Fourier ptychographic microscopy (FPM), a computational microscopy technique proposed in 2013, achieves wide-field, super-resolution quantitative phase imaging. An FPM system replaces the light source of a conventional microscope with a programmable LED array. During acquisition, LEDs at different positions are lit sequentially, illuminating the sample at different incident angles and shifting high-frequency sample information into the pupil of the objective, so that sample information beyond the cutoff frequency of the microscope objective is captured. To recover the sample information, the acquired data are stitched in the computer by a synthetic-aperture and phase-retrieval algorithm, finally yielding the complex amplitude of the sample. The key to accurate results in computational microscopy imaging is that the established model corresponds to the actual system; when an FPM system is built, however, the actual LED array often deviates from the ideal one, so the selected spectral sub-regions are mislocated during reconstruction and the final reconstruction quality is poor. Matching the LED array model used in reconstruction to the actual LED array is therefore a decisive factor in obtaining good imaging results.
Currently, methods for solving the above model-matching problem fall into two broad categories. The first adjusts the actual LED array to the ideal position so that it matches the ideal model. However, FPM systems using this approach typically require multi-degree-of-freedom precision mechanical adjustment; they are costly, bulky, and time-consuming to align, and the adjustment technique is difficult for inexperienced users to master. The second category matches the established model to the actual LED array automatically, by algorithm. In 2016, Sun et al. searched for the pose parameters of the LED array with a method combining simulated annealing and nonlinear regression; experiments showed that the method effectively and automatically corrects LED array pose deviation, but its annealing evaluation function is affected by factors such as LED intensity fluctuation and camera noise, and the iterative search is time-consuming. In 2022, Zheng et al. quickly obtained the LED array pose parameters from the shift characteristics of defocused bright-field images and corrected the reconstructed amplitude, phase, and pupil function, but this method places high demands on the accuracy of the focusing apparatus. In addition, existing methods of the second category correct only the non-tilt pose parameters, so their description of the actual LED array pose is incomplete.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a self-correcting method for the pose of the LED array light source of a computational microscopy imaging system, which eliminates the need for the multi-degree-of-freedom precision mechanical adjustment devices of existing systems and solves the inaccuracy, incompleteness, and excessive correction time of existing correction methods.
The technical scheme of the invention is as follows: the self-correcting method for the pose of the LED array light source of the computed microscopy imaging system comprises the following steps of:
(1) sequentially lighting the LED lamps in the LED array, and collecting an FPM data set with LED pose deviation by using a CMOS camera;
(2) selecting a low-resolution image with a bright-dark field boundary from the data set;
(3) extracting a bright field area of the image by using a Unet network;
(4) fitting a circle where the light and dark field boundary arc is located, and calculating the circle center and the radius;
(5) calculating the pose parameters of the LED array through the established mathematical model by combining the circle center and the radius in the step (4);
(6) and correcting the position of the LED lamp in the FPM reconstruction algorithm to obtain a corrected FPM reconstruction image.
The invention sequentially lights the LED lamps in the LED array and collects an FPM data set with LED pose deviation using a CMOS camera, so no precision mechanical device is needed to adjust the light source of the FPM system and no initial alignment step is required; this reduces the volume, weight, and cost of the system and is friendly to inexperienced users. Low-resolution images with a bright-dark field boundary are selected from the data set, and the bright-field region of each image is extracted with a Unet network; because the light spots are extracted by a trained neural network, the extraction is robust, unaffected by LED intensity fluctuation and CMOS camera noise, and highly accurate. The circle on which the bright-dark field boundary arc lies is fitted, and a model relating the arc-boundary circle center to the LED lamp is established, so the pose parameters are solved directly without an iterative search algorithm; the method is therefore fast and tolerates a large error range. Finally, the circle center and radius are used to calculate the pose parameters of the LED array through the established mathematical model, and the LED lamp positions in the FPM reconstruction algorithm are corrected to obtain a corrected FPM reconstruction image; the resulting LED array model is more complete and describes the pose of the LED array better.
Drawings
Fig. 1 shows a flow chart of a method for correcting the pose of an LED array light source of a computed microscopy imaging system according to the invention.
FIG. 2 shows a schematic diagram of the FPM system of the present invention.
Fig. 3 shows a lighting schematic of an LED lamp in the light field range employed in an embodiment of the present invention.
Fig. 4 shows a schematic view of the illumination of an LED lamp in the boundary range of the light and dark fields used in the embodiment of the present invention.
Fig. 5 shows a schematic diagram of a bright-dark field boundary employed in an embodiment of the present invention.
Fig. 6 shows a schematic diagram of a corresponding model between the circle center of the light spot arc boundary and the coordinates of the LED lamp, which is established in the embodiment of the present invention.
Detailed Description
As shown in fig. 1, the method for self-correcting the pose of the LED array light source of the computed microscopy imaging system comprises the following steps:
(1) sequentially lighting the LED lamps in the LED array, and collecting an FPM data set with LED pose deviation by using a CMOS camera;
(2) selecting a low-resolution image with a bright-dark field boundary from the data set;
(3) extracting a bright field area of the image by using a Unet network;
(4) fitting a circle where the light and dark field boundary arc is located, and calculating the circle center and the radius;
(5) calculating the pose parameters of the LED array through the established mathematical model, combining the circle center and the radius from step (4);
(6) and correcting the position of the LED lamp in the FPM reconstruction algorithm to obtain a corrected FPM reconstruction image.
The invention sequentially lights the LED lamps in the LED array and collects an FPM data set with LED pose deviation using a CMOS camera, so no precision mechanical device is needed to adjust the light source of the FPM system and no initial alignment step is required; this reduces the volume, weight, and cost of the system and is friendly to inexperienced users. Low-resolution images with a bright-dark field boundary are selected from the data set, and the bright-field region of each image is extracted with a Unet network; because the light spots are extracted by a trained neural network, the extraction is robust, unaffected by LED intensity fluctuation and CMOS camera noise, and highly accurate. The circle on which the bright-dark field boundary arc lies is fitted, and a model relating the arc-boundary circle center to the LED lamp is established, so the pose parameters are solved directly without an iterative search algorithm; the method is therefore fast and tolerates a large error range. Finally, the circle center and radius are used to calculate the pose parameters of the LED array through the established mathematical model, and the LED lamp positions in the FPM reconstruction algorithm are corrected to obtain a corrected FPM reconstruction image; the resulting LED array model is more complete and describes the pose of the LED array better.
Preferably, the step (1) comprises the following substeps:
(1.1) using a programmable LED array of specification M × N as the light source, placing the LED array far enough from the sample that the illumination wave emitted by each LED lamp can be taken as a quasi-monochromatic plane wave; assuming the wave amplitude is 1, the wave function of the illumination emitted by LED_{m,n} in row m (1 < m < M) and column n (1 < n < N) is expressed as

$$U_{m,n}(x, y) = e^{j2\pi(u_{m,n}x + v_{m,n}y)}$$

where $(x, y)$ is the two-dimensional Cartesian coordinate system of the sample plane, $j$ is the imaginary unit, and $(u_{m,n}, v_{m,n})$ is the wave vector of the light wave, expressed as

$$u_{m,n} = \frac{x_c - x_{m,n}}{\lambda\sqrt{(x_c - x_{m,n})^2 + (y_c - y_{m,n})^2 + h^2}}, \qquad v_{m,n} = \frac{y_c - y_{m,n}}{\lambda\sqrt{(x_c - x_{m,n})^2 + (y_c - y_{m,n})^2 + h^2}}$$

where $(x_c, y_c)$ are the center coordinates of the sample sub-region, $x_{m,n}$ and $y_{m,n}$ are the coordinates of LED_{m,n}, $\lambda$ is the wavelength of the illumination wave, and $h$ is the distance between the LED array and the sample;
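The wave-vector relation of substep (1.1) can be sketched numerically. The function and argument names below are illustrative, not taken from the patent:

```python
import math

def wave_vector(x_c, y_c, x_led, y_led, wavelength, h):
    """Illumination wave vector (u, v) for an LED at (x_led, y_led), a sample
    sub-region centred at (x_c, y_c), and LED-to-sample distance h, following
    the step (1.1) relation. All lengths must use the same unit."""
    r = math.sqrt((x_c - x_led) ** 2 + (y_c - y_led) ** 2 + h ** 2)
    u = (x_c - x_led) / (wavelength * r)
    v = (y_c - y_led) / (wavelength * r)
    return u, v

# An on-axis LED directly under the sub-region centre gives normal incidence.
u0, v0 = wave_vector(0.0, 0.0, 0.0, 0.0, 0.000632, 93.0)  # 632 nm in mm, h = 93 mm
```

For an off-axis LED the wave vector points from the LED toward the sub-region center, so its sign encodes the illumination direction.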
(1.2) assuming the sample is a thin sample with complex transmittance $o(x, y)$, the light wave transmitted through the sample is represented as

$$o(x, y)\, e^{j2\pi(u_{m,n}x + v_{m,n}y)};$$
(1.3) the light wave transmitted through the sample passes through the microscope objective and reaches its back focal plane, a process denoted as

$$\mathcal{F}\{o(x, y)\, e^{j2\pi(u_{m,n}x + v_{m,n}y)}\}\, P(u, v) = O(u - u_{m,n}, v - v_{m,n})\, P(u, v)$$

where $O(u - u_{m,n}, v - v_{m,n})$ is the (shifted) Fourier transform of the transmitted sample wave, $(u, v)$ are the two-dimensional spatial frequency coordinates of the Fourier spectral plane, $\mathcal{F}$ is the Fourier transform operator, and $P(u, v)$ is the pupil function;
(1.4) the light wave reaches the CMOS camera, yielding a low-resolution intensity image $I_{m,n}(x, y)$; the process is expressed as

$$I_{m,n}(x, y) = \left|\mathcal{F}^{-1}\{O(u - u_{m,n}, v - v_{m,n})\, P(u, v)\}\right|^2$$

where $\mathcal{F}^{-1}$ is the inverse Fourier transform operator.
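A minimal numerical sketch of the step (1.4) image-formation model, under simplifying assumptions not stated in the patent: a square grid, a circular pupil, and a spectrum shift expressed in whole pixels. All names are illustrative:

```python
import numpy as np

def low_res_intensity(obj_spectrum, pupil, du, dv):
    """Simulate one low-resolution intensity image per step (1.4):
    shift the object spectrum by the illumination wave vector (here du, dv
    in pixels), filter with the pupil, inverse-transform, take |.|^2."""
    shifted = np.roll(np.roll(obj_spectrum, -du, axis=0), -dv, axis=1)
    field = np.fft.ifft2(np.fft.ifftshift(shifted * pupil))
    return np.abs(field) ** 2

n = 64
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (xx ** 2 + yy ** 2 <= (n // 8) ** 2).astype(float)   # circular low-pass pupil
rng = np.random.default_rng(0)
spectrum = np.fft.fftshift(np.fft.fft2(rng.random((n, n))))  # toy object spectrum
img = low_res_intensity(spectrum, pupil, du=3, dv=-2)        # one tilted-illumination frame
```

Each LED thus produces one low-resolution frame whose spectrum is a differently shifted, pupil-filtered patch of the object spectrum, which is what the reconstruction later stitches together.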
Preferably, the step (2) comprises the following substeps:
(2.1) selecting one image captured in step (1) and binarizing it to obtain $I^{b}_{m,n}(x, y)$;
(2.2) solving for the proportion $R_{m,n}$ of pixels of $I^{b}_{m,n}(x, y)$ with logical value 1; the process is represented as

$$R_{m,n} = \frac{\sum_{x}\sum_{y} I^{b}_{m,n}(x, y)}{XY}$$

where $X$ and $Y$ respectively denote the numbers of pixel rows and columns of the CMOS camera;
(2.3) judging from $R_{m,n}$ whether the current image has a suitable bright-dark field region; if so, selecting the image as input for step (3), otherwise judging the next image;
(2.4) repeating steps (2.1) to (2.3), stopping when all images have been judged or a termination condition is met, and obtaining the data set $\{I_q\}$, $q \in [1, Q]$, for pose self-correction, where $Q$ is the total number of images with suitable bright and dark field regions.
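The selection rule of substeps (2.2)–(2.3) can be sketched as follows; the function names and the [0.15, 0.85] interval (taken from the embodiment described later) are illustrative:

```python
def bright_ratio(binary_image):
    """R_{m,n} of step (2.2): the fraction of pixels with logical value 1,
    for a binary image given as a list of rows of 0/1 values."""
    total = sum(len(row) for row in binary_image)
    ones = sum(sum(row) for row in binary_image)
    return ones / total

def has_boundary(binary_image, lo=0.15, hi=0.85):
    """Step (2.3): keep an image only if its bright-field fraction is neither
    near 0 (pure dark field) nor near 1 (pure bright field), i.e. the frame
    plausibly contains a bright-dark field boundary arc."""
    return lo <= bright_ratio(binary_image) <= hi

half_bright = [[1, 1, 0, 0] for _ in range(4)]  # 50% bright: boundary present
all_dark = [[0, 0, 0, 0] for _ in range(4)]     # dark-field frame: rejected
```

Frames passing this test form the data set $\{I_q\}$ used for the circle fitting in step (4).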
Preferably, the step (3) comprises the following substeps:
(3.1) establishing the data set required to train the Unet: randomly adjusting the LED array so that it takes different pose deviations, placing several samples on the stage in turn, lighting the LED lamps that produce bright-dark field boundary images, and collecting a series of images as training-set inputs; then, with no sample placed, acquiring a series of images as coarse training-set labels, pairing the images illuminated by the same LED as corresponding input and coarse label;
(3.2) binarizing the training-set labels to obtain the bright and dark field regions of each image, using these as the true training-set labels, and then training the Unet on the training set until convergence;
(3.3) extracting the bright and dark field regions of the images with the trained Unet network to obtain Q light-spot images.
Preferably, the step (4) comprises the following substeps:
(4.1) selecting the q-th light spot produced in step (3) and extracting its edge pixels with the Canny operator;
(4.2) fitting the circle on which the edge pixels lie with a random sample consensus (RANSAC) algorithm to obtain the circle center and radius;
(4.3) finding the iteration s at which the inlier count $N_k$ is largest, and recording the corresponding circle center $(x^{s}_{cir,q}, y^{s}_{cir,q})$ and radius $R^{s}_{cir,q}$;
(4.4) solving for the variance $V_{R,q}$ of all fitted radii; selecting a pixel, calculating its distance to the center $(x^{s}_{cir,q}, y^{s}_{cir,q})$, and removing the pixel if this distance differs from $R^{s}_{cir,q}$ by more than $3V_{R,q}$; traversing all pixels in this way;
(4.5) fitting the remaining pixels by least squares to obtain the center $(x_{cir,q}, y_{cir,q})$ and radius $R_{cir,q}$ of the q-th light spot;
(4.6) letting q = q + 1;
(4.7) repeating steps (4.1) to (4.6) until q = Q.
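The least-squares refinement of substep (4.5) can be sketched with the Kåsa algebraic circle fit, a common choice; the patent does not specify which least-squares formulation is used, and all names below are illustrative:

```python
import math

def fit_circle_lsq(points):
    """Kåsa least-squares circle fit: solve x^2 + y^2 + D x + E y + F = 0
    in the least-squares sense, then recover centre (-D/2, -E/2) and radius.
    Suitable for refining centre and radius after outlier removal."""
    # Build the 3x3 normal equations A^T A p = A^T b for p = (D, E, F),
    # where each point contributes row (x, y, 1) and rhs -(x^2 + y^2).
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            atb[i] += row[i] * rhs
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    m = [ata[i] + [atb[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    p = [0.0] * 3
    for r in (2, 1, 0):
        p[r] = (m[r][3] - sum(m[r][c] * p[c] for c in range(r + 1, 3))) / m[r][r]
    d, e, f = p
    cx, cy = -d / 2.0, -e / 2.0
    radius = math.sqrt(cx * cx + cy * cy - f)
    return (cx, cy), radius

# Noise-free arc (about 2.9 rad) of a circle centred at (3, -2) with radius 5.
pts = [(3 + 5 * math.cos(t / 10.0), -2 + 5 * math.sin(t / 10.0)) for t in range(30)]
(cx, cy), r = fit_circle_lsq(pts)
```

Because the bright-dark field boundary is only an arc, not a full circle, the fit must tolerate partial coverage, which the algebraic formulation does.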
Preferably, said step (4.2) comprises the substeps of:
(4.2.1) determining the number of iterations K of the algorithm, with k = 0 at the start of the iteration;
(4.2.2) randomly selecting 3 edge pixels;
(4.2.3) determining from the 3 selected pixels a circle center $(x^{k}_{cir,q}, y^{k}_{cir,q})$ and radius $R^{k}_{cir,q}$;
(4.2.4) calculating the distances from the edge pixels to this center, and recording the number of pixels $N_k$ whose distance lies within a neighborhood of $R^{k}_{cir,q}$;
(4.2.5) letting k = k + 1;
(4.2.6) looping steps (4.2.2)–(4.2.5) and stopping the iteration when k = K.
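The core of one RANSAC iteration — substeps (4.2.3) and (4.2.4) — can be sketched as follows; function names and the tolerance value are illustrative:

```python
import math

def circle_from_3_points(p1, p2, p3):
    """Step (4.2.3): the unique circle through three non-collinear edge
    pixels, via the circumcentre (perpendicular-bisector) construction."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear")
    s1, s2, s3 = x1 * x1 + y1 * y1, x2 * x2 + y2 * y2, x3 * x3 + y3 * y3
    cx = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    cy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    return (cx, cy), math.hypot(x1 - cx, y1 - cy)

def count_inliers(points, center, radius, tol=1.0):
    """Step (4.2.4): N_k, the number of edge pixels whose distance to the
    candidate centre lies within a tolerance band around the radius."""
    cx, cy = center
    return sum(1 for x, y in points
               if abs(math.hypot(x - cx, y - cy) - radius) <= tol)

c, r = circle_from_3_points((0.0, 1.0), (1.0, 0.0), (0.0, -1.0))  # unit circle
```

Repeating this K times and keeping the candidate with the largest $N_k$ gives the robust initial estimate that substep (4.3) records.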
Preferably, the step (5) comprises the following substeps:
(5.1) establishing the pose model of the LED array, describing the spatial position of each LED unit by a mathematical model: assuming the LED array is a rigid body, taking the plane through the LED centers perpendicular to the z-axis as the plane z = 0, and letting the spacing between adjacent LED lamps be $d_{LED}$, the spatial position coordinates of each LED unit are expressed as

$$\begin{pmatrix} x_{m,n} \\ y_{m,n} \\ z_{m,n} \end{pmatrix} = R(\theta_x, \theta_y, \theta_z) \begin{pmatrix} \left(n - \frac{N+1}{2}\right) d_{LED} \\ \left(m - \frac{M+1}{2}\right) d_{LED} \\ 0 \end{pmatrix} + \begin{pmatrix} \Delta x \\ \Delta y \\ 0 \end{pmatrix}$$

where $R(\theta_x, \theta_y, \theta_z)$ is the rotation matrix for rotations $(\theta_x, \theta_y, \theta_z)$ of the LED array about the x, y, and z axes, $(\Delta x, \Delta y)$ is the translation of the LED array along the x and y axes, and the relative position in the z direction is described by the distance h between the center of the LED array and the sample;
(5.2) establishing the model relating the circle center of the light-spot arc boundary to the LED lamp coordinates: mapping the CMOS camera into object space to obtain the conjugate CMOS window, which is equivalently located on the in-focus sample plane; its object-space center coordinates are the camera-plane center coordinates divided by A, where A is the magnification of the microscope objective; the distance from the conjugate CMOS window to the entrance pupil is determined from the numerical aperture NA and the mean object-space spot radius $\bar{R}^{obj}_{cir}$, which is expressed as

$$\bar{R}^{obj}_{cir} = \frac{1}{Q}\sum_{q=1}^{Q} \frac{R_{cir,q}}{A};$$

the object-space spot center coordinates $(x^{obj}_{cir,q}, y^{obj}_{cir,q})$ are then expressed in terms of the position coordinates of the LED lamp, where q corresponds to the index pair (m, n);
(5.3) assuming pose parameter estimates $(\theta_x^{e}, \theta_y^{e}, \theta_z^{e}, \Delta x^{e}, \Delta y^{e}, h^{e})$ for the LED array, calculating the LED unit position coordinates from the LED array pose model established in step (5.1), and then, from the model established in step (5.2), calculating the estimates $(\hat{x}_{cir,q}, \hat{y}_{cir,q})$ of the object-space spot centers;
(5.4) calculating the optimal pose estimate of the LED array by nonlinear regression: if the estimated pose parameters are optimal, the spot centers calculated in step (5.2) are closest to the actual spot centers; the mathematical model is established as

$$\left(\hat{\theta}_x, \hat{\theta}_y, \hat{\theta}_z, \Delta\hat{x}, \Delta\hat{y}, \hat{h}\right) = \arg\min \sum_{q=1}^{Q}\left[\left(\hat{x}_{cir,q} - x^{obj}_{cir,q}\right)^2 + \left(\hat{y}_{cir,q} - y^{obj}_{cir,q}\right)^2\right]$$

where the sum is the loss function to be minimized and $(\hat{\theta}_x, \hat{\theta}_y, \hat{\theta}_z, \Delta\hat{x}, \Delta\hat{y}, \hat{h})$ is the optimal pose parameter estimate of the LED array.
Preferably, the step (6) comprises the following substeps:
(6.1) giving initial estimates $P_0(u, v)$ and $O_0(u, v)$ of the pupil function and of the high-resolution sample spectrum, and starting the iterative optimization algorithm: the initial pupil function estimate is set to a circular low-pass filter with amplitude 1 inside the passband, 0 outside, and phase 0; the initial sample spectrum estimate is set to the Fourier spectrum of an up-sampled low-resolution image;
(6.2) calculating the corrected light-wave vectors: first, from the LED array pose model established in step (5.1) and the optimal pose parameter estimate $(\hat{\theta}_x, \hat{\theta}_y, \hat{\theta}_z, \Delta\hat{x}, \Delta\hat{y}, \hat{h})$, calculating the corrected LED lamp coordinates $(x^{c}_{m,n}, y^{c}_{m,n}, z^{c}_{m,n})$; then calculating the corrected wave vectors $(u^{c}_{m,n}, v^{c}_{m,n})$ by the relation of step (1.1);
(6.3) giving an estimate of the low-resolution image under LED_{m,n} illumination: from the $P_0(u, v)$ and $O_0(u, v)$ given in step (6.1), the Fourier spectrum estimate of the low-resolution image is represented as

$$\Psi_{i,m,n}(u, v) = O_i(u - u^{c}_{m,n}, v - v^{c}_{m,n})\, P_i(u, v),$$

and inverse Fourier transforming it gives the low-resolution image estimate

$$\psi_{i,m,n}(x, y) = \mathcal{F}^{-1}\{\Psi_{i,m,n}(u, v)\};$$
(6.4) substituting the square root of the actually recorded image $I_{m,n}(x, y)$ for the modulus of the low-resolution image estimate obtained in the previous step, giving the updated image

$$\psi'_{i,m,n}(x, y) = \sqrt{I_{m,n}(x, y)}\, \frac{\psi_{i,m,n}(x, y)}{\left|\psi_{i,m,n}(x, y)\right|};$$
(6.5) using the updated image $\psi'_{i,m,n}(x, y)$ to update the corresponding sample spectrum estimate $O_i(u, v)$ and pupil function $P_i(u, v)$, obtaining

$$O_{i+1}(u, v) = O_i(u, v) + \frac{P_i^{*}(u + u^{c}_{m,n}, v + v^{c}_{m,n})}{\left|P_i(u + u^{c}_{m,n}, v + v^{c}_{m,n})\right|^2 + \delta_1}\, \Delta\Phi_{i,m,n}(u + u^{c}_{m,n}, v + v^{c}_{m,n})$$

$$P_{i+1}(u, v) = P_i(u, v) + \frac{O_i^{*}(u - u^{c}_{m,n}, v - v^{c}_{m,n})}{\left|O_i(u - u^{c}_{m,n}, v - v^{c}_{m,n})\right|^2 + \delta_2}\, \Delta\Phi_{i,m,n}(u, v)$$

where * denotes complex conjugation, $\delta_1$ and $\delta_2$ are normalization constants that prevent the denominators from being zero, i is the iteration number, and $\Delta\Phi_{i,m,n}$ is the error auxiliary function of the update process:

$$\Delta\Phi_{i,m,n}(u, v) = \mathcal{F}\{\psi'_{i,m,n}(x, y)\} - \mathcal{F}\{\psi_{i,m,n}(x, y)\};$$
(6.6) repeating steps (6.1)–(6.5) iteratively until all images in the data set acquired in step (1) have been processed; then repeating the whole iteration several times until the reconstruction result converges, judging convergence by a relative error evaluation function expressed as

$$E_i = \frac{\sum_{m,n}\sum_{x,y}\left(\sqrt{I_{m,n}(x, y)} - \left|\psi_{i,m,n}(x, y)\right|\right)^2}{\sum_{m,n}\sum_{x,y} I_{m,n}(x, y)};$$

if the current evaluation function is smaller than the set threshold, the reconstruction loop exits; finally, inverse Fourier transforming the sample spectrum estimate yields a high-resolution, wide-field complex amplitude image.
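The amplitude-replacement operation of substep (6.4) — the heart of each phase-retrieval iteration — can be sketched as follows; the function name and the small regularizer `eps` (added to avoid division by zero) are illustrative:

```python
import numpy as np

def amplitude_replace(psi_est, measured_intensity, eps=1e-12):
    """Step (6.4): keep the phase of the estimated low-resolution field but
    impose the square root of the recorded intensity as its modulus."""
    return np.sqrt(measured_intensity) * psi_est / (np.abs(psi_est) + eps)

rng = np.random.default_rng(1)
psi = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))  # field estimate
meas = rng.random((8, 8))                                             # recorded intensity
psi_new = amplitude_replace(psi, meas)
```

The updated field `psi_new` carries the measured amplitude and the estimated phase; its spectrum then drives the object and pupil updates of substep (6.5).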
For better illustrating the objects and advantages of the present invention, the following detailed description is made with reference to the accompanying drawings and examples.
The invention provides a light-source pose self-correcting method based on Fourier ptychographic microscopy, with the following specific steps:
step 1, collecting an FPM data set with LED pose deviation. In the FPM system, the cells of the LEDs in the LED array with the pose deviation are illuminated one by one, and a series of corresponding low resolution images are taken with the CMOS camera as a data set. The specific sub-steps included in step 1 are as follows:
step 1.1, a programmable LED array with a specification of M × N is used as a light source to provide illumination, and the LED array is placed at a position far enough away from the sample, in an embodiment, the distance is about 93mm, so that the illumination light waves emitted by the LED lamp are considered as quasi-monochromatic plane light waves for any small area of the sample. Assuming that the amplitude of the light wave is 1, then m (1)<m<M) lines, n (1)<n<N) columns of LEDs m,n The wave function of the emitted illumination light can be expressed as
Figure BDA0003632272180000101
Where (x, y) is the two-dimensional Cartesian coordinate system of the sample plane, j is the unit imaginary number, (u) m,n ,v m,n ) Is the wave vector of light waves, which can be expressed as
Figure BDA0003632272180000102
In the formula (x) c ,y c ) Is the sub-region center position coordinate of the sample, x m,n And y m,n Indicating LED m,n λ is the wavelength of the illuminating light wave, h is the distance between the LED array and the sample;
step 1.2, assuming the sample is a thin sample, the light wave transmitted through the sample o (x, y) can be expressed as
Figure BDA0003632272180000103
Step 1.3: the light wave transmitted through the sample passes through the microscope objective and reaches its back focal plane, which can be expressed as

$$\mathcal{F}\{o(x, y)\, e^{j2\pi(u_{m,n}x + v_{m,n}y)}\}\, P(u, v) = O(u - u_{m,n}, v - v_{m,n})\, P(u, v)$$

where $O(u - u_{m,n}, v - v_{m,n})$ is the (shifted) Fourier transform of the transmitted sample wave, $(u, v)$ are the two-dimensional spatial frequency coordinates of the Fourier spectral plane, $\mathcal{F}$ is the Fourier transform operator, and $P(u, v)$ is the pupil function;
step 1.4, the light wave reaches the COMS camera to obtain a low-resolution intensity image
Figure BDA0003632272180000106
This process is equivalent to performing an inverse fourier transform, which can be expressed as
Figure BDA0003632272180000107
In the formula,
Figure BDA0003632272180000108
is an inverse fourier transform operator.
Step 2, selecting low-resolution images with a bright-dark field boundary. Fig. 3 is a lighting schematic of an LED lamp within the bright-field range, and fig. 4 a lighting schematic of an LED lamp within the bright-dark field boundary range, showing the LED array 1, microscope objective 3, entrance pupil 7, and CMOS entrance window 8. Fig. 5 is a schematic of the bright-dark field boundary used in the embodiment, showing the projection 9 of the entrance pupil 7 onto the CMOS entrance window 8, a two-dimensional schematic 10 of the CMOS entrance window, the bright-field region 11, the dark-field region 13, the bright-dark field boundary arc 14, the center 12 of projection 9 under bright-field illumination, and the center 15 of projection 9 when an LED lamp within the boundary range is lit. Step 2 selects, from the M × N images captured in step 1, those images containing both a bright-field region 11 and a dark-field region 13; the specific steps are as follows:
Step 2.1: select one image captured in step 1 and binarize it to obtain I^b_{m,n}(x, y). The binarization threshold can be determined by the maximum between-class variance (Otsu) method, and the process can be expressed as
I^b_{m,n}(x, y) = 1 if I^c_{m,n}(x, y) ≥ T, and 0 otherwise
Wherein T represents a set threshold;
Step 2.2: compute the proportion R_{m,n} of pixels in I^b_{m,n}(x, y) whose logic value is 1. The process can be expressed as
R_{m,n} = (1 / (X·Y)) · Σ_x Σ_y I^b_{m,n}(x, y)
where X and Y respectively denote the number of pixel rows and columns of the CMOS camera;
Step 2.3: an image is considered to have suitable bright-field region 11 and dark-field region 13 when R_{m,n} lies within a set interval, [0.15, 0.85] in this example. Determine whether R_{m,n} falls within this interval; if so, select the image as input to step 3, and otherwise continue to the next image.
Step 2.4: repeat steps 2.1 to 2.3. When R_{m,n} of several consecutive images all fall below 0.15, the remaining images are dark-field images; iteration then stops, yielding the data set for pose self-correction
I^c_q(x, y), q ∈ [1, Q]
where Q denotes the total number of images containing suitable bright-field region 11 and dark-field region 13.
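The selection rule of step 2 can be sketched as follows. The Otsu threshold and the bright-pixel ratio test are implemented directly in NumPy; the function names and the synthetic test frame are illustrative assumptions, not part of the embodiment.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    # Maximum between-class variance (Otsu) threshold, as used in step 2.1.
    hist, edges = np.histogram(img, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # cumulative class probability
    mu = np.cumsum(p * centers)          # cumulative class mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centers[np.argmax(sigma_b)]   # threshold maximizing sigma_b

def has_bright_dark_boundary(img, lo=0.15, hi=0.85):
    # Steps 2.2-2.3: keep an image only if the bright-pixel ratio R_{m,n}
    # lies in the interval [lo, hi].
    t = otsu_threshold(img)
    r = float(np.mean(img > t))          # fraction of pixels above threshold
    return lo <= r <= hi

# illustrative example: a frame that is 40% bright passes the test
frame = np.zeros((100, 100))
frame[:, :40] = 1.0
print(has_bright_dark_boundary(frame))
```

A fully dark frame (ratio near 0) or a fully bright frame (ratio near 1) is rejected, which reproduces the dark-field stopping rule of step 2.4.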
Step 3: extract the bright/dark-field regions of the images with the Unet network, i.e., use the neural network to remove the complex and varied sample information from the captured images and obtain spot images that contain only the bright/dark-field regions. The specific sub-steps are as follows:
Step 3.1: build the data set required to train the Unet. Randomly adjust the LED array so that it takes on different pose deviations, place several samples on the stage in turn, light the LED lamps that produce bright/dark-field boundary images, and collect a series of images as training inputs. Then, with no sample in place, collect a series of images as coarse labels, pairing each input with the coarse label illuminated by the same LED;
Step 3.2: binarize the coarse labels to obtain the bright-field and dark-field regions on each image, use these as the true training labels, and train the Unet to convergence;
Step 3.3: use the trained Unet to extract the bright/dark-field region of each image, obtaining Q spot images.
Step 4: calculate the center and radius of the bright/dark-field boundary arc 14. Process the spots obtained in step 3 with an edge detection algorithm to obtain the boundary arc 14, fit the circle on which the arc lies to recover the projection 9 of the entrance pupil 7 on the CMOS entrance window 8, and finally obtain the center position and radius of the bright/dark-field boundary arc. The specific sub-steps of step 4 are as follows:
Step 4.1: select the q-th spot generated in step 3 (initially q = 1) and extract its edge pixels with the Canny operator.
Step 4.2: fit the circle containing the edge pixels with a random sample consensus (RANSAC) algorithm:
1) set the number of iterations K; initialize k = 0;
2) randomly select 3 edge pixels;
3) determine a circle center and radius from the 3 selected pixels;
4) calculate the distance from every edge pixel to this center, and record the number of pixels whose distance lies within a set neighborhood of the fitted radius;
5) let k = k + 1;
6) loop steps 2)–5), stopping the iteration when k = K.
Step 4.3: find the iteration s with the largest inlier count, and record its fitted circle center and radius.
Step 4.4: compute the variance V_{R,q} of all fitted radii. For each edge pixel, calculate its distance to the recorded circle center; if the difference between this distance and the recorded radius exceeds 3V_{R,q}, remove the pixel. Traverse all pixels;
Step 4.5: fit the remaining pixels by least squares to obtain the center (x_{cir,q}, y_{cir,q}) and radius R_{cir,q} of the q-th spot's arc boundary;
Step 4.6: let q = q + 1;
Step 4.7: repeat steps 4.1 to 4.6 until q = Q.
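The RANSAC circle fit of step 4.2 can be sketched as below. The three-point circumcircle formula and the inlier count follow the listed sub-steps; the noisy synthetic arc, the random seed, and all names are illustrative assumptions.

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    # Circumcircle through three points (step 4.2, items 2-3).
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:                      # collinear pixels: no circle
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, float(np.hypot(ax - ux, ay - uy))

def ransac_circle(pts, n_iter=300, tol=1.0, seed=0):
    # Step 4.2: fit circles to random 3-pixel samples and keep the fit
    # whose radius neighborhood contains the most edge pixels (inliers).
    rng = np.random.default_rng(seed)
    best, best_count = None, -1
    for _ in range(n_iter):
        fit = circle_from_3pts(*pts[rng.choice(len(pts), 3, replace=False)])
        if fit is None:
            continue
        cx, cy, r = fit
        dist = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
        count = int(np.sum(np.abs(dist - r) < tol))
        if count > best_count:
            best, best_count = fit, count
    return best

# illustrative example: a noisy half-arc, as cut out by the bright/dark boundary
rng = np.random.default_rng(1)
t = rng.uniform(0.0, np.pi, 300)
pts = np.c_[10 + 50 * np.cos(t), -5 + 50 * np.sin(t)]
pts += rng.normal(0.0, 0.3, pts.shape)
cx, cy, r = ransac_circle(pts)
```

In the embodiment the RANSAC result is further refined by the 3V_{R,q} outlier rejection of step 4.4 and the least-squares fit of step 4.5; the sketch above covers only the consensus search.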
Step 5: calculate the pose parameters of the LED array. Establish a pose model of the LED array, i.e., describe the spatial position of each LED lamp through pose parameters; then establish a model relating the LED lamp position coordinates to the bright/dark-field boundary arc parameters calculated in step 4; finally, obtain the pose parameters of the LED array by nonlinear regression. The specific sub-steps of step 5 are as follows:
Step 5.1: establish the pose model of the LED array, i.e., describe the spatial position of each LED lamp through a mathematical model. Assume the LED array is a rigid body, the plane through the array center and perpendicular to the z-axis is the plane z = 0, and the spacing between adjacent LED lamps is d_LED; the spatial position coordinates of each LED lamp can then be written as a function of the pose parameters (equation (7)), where (θ_x, θ_y, θ_z) are the rotation angles of the LED array about the x, y, and z axes, (Δx, Δy) are its translations along the x and y axes, and the relative position along z is described by the distance h between the array center and the sample. The position of every LED lamp is therefore described by the 6 parameters (θ_x, θ_y, θ_z, Δx, Δy, h).
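A rigid-body pose model of this kind can be sketched as follows. The rotation order Rz·Ry·Rx, the sign convention for h, and the default array size M = N = 7 with spacing d_led = 4.0 are assumptions of this sketch, since equation (7) itself is not reproduced in the text.

```python
import numpy as np

def led_positions(theta_x, theta_y, theta_z, dx, dy, h, M=7, N=7, d_led=4.0):
    # Step 5.1 pose model sketch: the nominal LED grid lies in the z = 0
    # plane; it is rotated about x, y, z, translated by (dx, dy), and the
    # sample sits a distance h above the array center.
    i = (np.arange(M) - (M - 1) / 2.0) * d_led
    j = (np.arange(N) - (N - 1) / 2.0) * d_led
    gx, gy = np.meshgrid(i, j, indexing="ij")
    pts = np.stack([gx, gy, np.zeros_like(gx)], axis=-1)      # (M, N, 3)
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                                          # combined rotation
    out = pts @ R.T
    out[..., 0] += dx
    out[..., 1] += dy
    out[..., 2] -= h          # each LED sits a distance h below the sample plane
    return out

# with zero rotation and translation the central LED lies at (0, 0, -h)
pos = led_positions(0, 0, 0, 0, 0, 50.0)
```

The six arguments map one-to-one onto the pose parameters (θ_x, θ_y, θ_z, Δx, Δy, h) of step 5.1, so the same function can serve as the forward model inside the regression of step 5.4.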
Step 5.2: establish the correspondence model between the center of the spot arc boundary and the LED lamp coordinates. Fig. 6 is a two-dimensional schematic of this model, in which LED_0 denotes the central LED and LED_{-1} and LED_{-2} denote LED lamps within the bright/dark-field boundary range. For convenience of analysis, the aperture stop 4 and the CMOS camera 6 are referred to object space in this embodiment, giving the entrance pupil 7 and the CMOS entrance window 8, whose object-space center coordinates scale with the magnification A of the microscope objective. The entrance pupil radius is BD = NA·h_1, where NA is the numerical aperture of the microscope objective. The segment CC_0 in Fig. 6 is the radius of the projection 9 of the entrance pupil 7 on the CMOS entrance window 8, i.e., the radius of the circle on which the object-space spot arc boundary lies, expressed as the mean of the circle radii calculated in step 4.
From the similar-triangle relation in Fig. 6, the distance between the entrance pupil 7 and the CMOS entrance window 8 is obtained.
By a similar derivation, the object-space spot centers, points C_{-1} and C_{-2} in Fig. 6, can be expressed in terms of the position coordinates of the LED lamps, where the index q corresponds to the pair (m, n).
Step 5.3: assume an estimate of the LED array pose parameters; compute the LED lamp position coordinates from the pose model of step 5.1 (equation (7)), and then compute the estimated object-space spot centers from the correspondence model of step 5.2 (equation (10)).
Step 5.4: calculate the optimal pose estimate of the LED array by nonlinear regression. If the estimated pose parameters are optimal, the spot centers calculated in step 5.3 are closest to the actual spot centers; a mathematical model is therefore established whose loss function, the total distance between the estimated and measured spot centers, is minimized, and whose minimizer is the optimal pose parameter estimate of the LED array.
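The regression of steps 5.3–5.4 can be sketched with a reduced, in-plane pose model (rotation θ_z plus translation (Δx, Δy) only); the full six-parameter model of step 5.1 slots into the same residual. The Gauss–Newton solver with a numerical Jacobian stands in for the unspecified nonlinear regression method of the embodiment; a library routine such as scipy.optimize.least_squares would serve equally well. All names and the synthetic data are illustrative.

```python
import numpy as np

def predicted_centers(params, grid):
    # Simplified step 5.3: each LED's nominal grid position is rotated by
    # theta_z and shifted by (dx, dy) to predict its spot center.
    theta_z, dx, dy = params
    c, s = np.cos(theta_z), np.sin(theta_z)
    R = np.array([[c, -s], [s, c]])
    return grid @ R.T + np.array([dx, dy])

def fit_pose(measured, grid, iters=50, eps=1e-6):
    # Step 5.4: Gauss-Newton least squares over the pose parameters,
    # minimizing the distance between predicted and measured spot centers.
    p = np.zeros(3)
    for _ in range(iters):
        r = (predicted_centers(p, grid) - measured).ravel()
        J = np.empty((r.size, 3))
        for k in range(3):                       # numerical Jacobian
            dp = np.zeros(3); dp[k] = eps
            J[:, k] = ((predicted_centers(p + dp, grid) - measured).ravel() - r) / eps
        p = p - np.linalg.lstsq(J, r, rcond=None)[0]
    return p

# illustrative example: recover a known pose from noiseless spot centers
grid = np.array([[x, y] for x in (-4.0, 0.0, 4.0) for y in (-4.0, 0.0, 4.0)])
true_params = np.array([0.05, 1.2, -0.7])        # (theta_z, dx, dy)
measured = predicted_centers(true_params, grid)
estimate = fit_pose(measured, grid)
```

With noiseless centers the recovered parameters match the true pose to numerical precision; with measured arc centers from step 4 the same residual yields the least-squares pose estimate.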
Step 6: output the corrected FPM reconstructed image. From the pose parameters obtained in step 5, compute the model that matches the actual LED positions, and reconstruct with this corrected model to obtain an accurate high-resolution, large-field-of-view complex amplitude image.
Preferably, step 6 comprises the following sub-steps:
Step 6.1: give the initial pupil function and high-resolution sample spectrum estimates P_0(u, v) and O_0(u, v), which serve as the starting point of the iterative optimization algorithm. In general the pupil of a microscope objective is circular, so the initial pupil function estimate is set to a circular low-pass filter with amplitude 1 inside the passband, 0 outside the passband, and phase 0; the initial sample spectrum estimate can be set to the Fourier spectrum of an oversampled low-resolution image;
Step 6.2: calculate the corrected light-wave vectors. First, from the LED array pose model established in step 5.1 and the optimal pose parameter estimate, calculate the corrected LED lamp coordinates; then calculate the corrected light-wave vectors (u_{m,n}, v_{m,n}).
Step 6.3: form the low-resolution image estimate under LED_{m,n} illumination. Given P_0(u, v) and O_0(u, v) from step 6.1, the Fourier spectrum estimate of the low-resolution image is O_i(u − u_{m,n}, v − v_{m,n})·P_i(u, v); inverse Fourier transforming it yields the low-resolution image estimate.
Step 6.4: replace the modulus of the low-resolution image estimate obtained in the previous step with the square root of the actually recorded image I^c_{m,n}(x, y), keeping its phase, to obtain an updated image.
Step 6.5: use the updated image to update the corresponding sample spectrum estimate O_i(u, v) and the pupil function P_i(u, v). In the update formulas, * denotes the complex conjugate operation, δ_1 and δ_2 are regularization constants that prevent the denominators from being zero, i is the iteration number, and ΔΦ_{i,m,n} is the error auxiliary function of the update process.
Step 6.6: repeat steps 6.1 to 6.5 until all images in the data set acquired in step 1 have been processed. The whole iteration is then repeated several times until the reconstruction converges; convergence can be judged with a relative error evaluation function, and when the current evaluation value falls below a set threshold the reconstruction loop exits. Finally, inverse Fourier transforming the sample spectrum estimate yields the high-resolution, large-field-of-view complex amplitude image.
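One inner pass of steps 6.3–6.5 can be sketched for a single LED as follows. This is a PIE-style spectrum update, not necessarily the exact update formulas of the embodiment (which are not reproduced in the text); the pupil update has the symmetric, conjugate-swapped form, and integer-pixel spectrum shifts are an assumption of the sketch.

```python
import numpy as np

def fpm_spectrum_update(O, P, img_sqrt, shift, alpha=1.0):
    # Steps 6.3-6.5 for one LED, spectrum update only.
    du, dv = shift
    O_part = np.roll(O, (du, dv), (0, 1))               # O(u - u_mn, v - v_mn)
    psi = np.fft.ifft2(np.fft.ifftshift(O_part * P))    # low-res estimate (6.3)
    psi_new = img_sqrt * np.exp(1j * np.angle(psi))     # modulus swap     (6.4)
    dPhi = np.fft.fftshift(np.fft.fft2(psi_new - psi))  # error spectrum
    O_part = O_part + alpha * np.conj(P) * dPhi / (np.abs(P).max() ** 2 + 1e-12)
    return np.roll(O_part, (-du, -dv), (0, 1))          # shift back       (6.5)

# illustrative check: with an all-pass pupil a single update reproduces
# the measured modulus exactly
n = 16
rng = np.random.default_rng(0)
P = np.ones((n, n))
target_sqrt = np.abs(rng.normal(size=(n, n))) + 0.1
O0 = np.fft.fftshift(np.fft.fft2(rng.normal(size=(n, n))))
O1 = fpm_spectrum_update(O0, P, target_sqrt, (0, 0))
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(O1)))
```

Looping this update over all LEDs with the corrected wave vectors from step 6.2, and repeating until the relative error of step 6.6 drops below the threshold, constitutes the reconstruction loop.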
The above is an LED array light source pose self-correction method for a computed microscopy imaging system, comprising a system self-correction algorithm and a Fourier ptychographic microscopy (FPM) reconstruction algorithm.
In this embodiment, the pose deviation of the LED array is corrected using the image information of the bright/dark-field boundary spots: the acquired raw data set is preprocessed by threshold segmentation to obtain images containing bright/dark-field boundary information; a neural network extracts the bright/dark-field regions from images containing sample information; error analysis and least-squares theory are used to fit the boundary arc and obtain the center position and radius of its circle; the established matching model between the LED array and the bright/dark-field boundary yields the pose parameters of the LED array; and the fitted pose parameters correct the light-wave vectors in the reconstruction process, giving an accurate FPM reconstructed image.
The LED array light source pose self-correction method of the computed microscopy imaging system obtains the pose parameters of the LED array light source quickly, accurately, and completely, improves the imaging result without physically adjusting the LED array, and thereby promotes the practical application and development of computed microscopy imaging systems illuminated by LED arrays.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent variations and modifications made to the above embodiment according to the technical spirit of the present invention still belong to the protection scope of the technical solution of the present invention.

Claims (8)

1. The self-correcting method for the pose of the LED array light source of the computed microscopic imaging system is characterized by comprising the following steps of: which comprises the following steps:
(1) sequentially lighting the LED lamps in the LED array, and collecting an FPM data set with LED pose deviation by using a CMOS camera;
(2) selecting a low-resolution image with a bright-dark field boundary from the data set;
(3) extracting the bright and dark field regions of the image by using a Unet network;
(4) fitting a circle where a boundary arc of the light and dark fields is located, and calculating the circle center and the radius;
(5) calculating the pose parameters of the LED array through the established mathematical model by combining the circle center and the radius in the step (4);
(6) correcting the positions of the LED lamps in the FPM reconstruction algorithm to obtain a corrected FPM reconstructed image.
2. The method for self-correcting the pose of the LED array light source of the computed microscopy imaging system as recited in claim 1, wherein: the step (1) comprises the following sub-steps:
(1.1) using a programmable LED array with the specification of M×N as a light source to provide illumination, placing the LED array far enough from the sample that the illumination light wave emitted by each LED lamp can be regarded as a quasi-monochromatic plane wave; assuming the amplitude of the light wave is 1, the wave function of the illumination light emitted by the LED_{m,n} in row m (1<m<M) and column n (1<n<N) is expressed as
Figure FDA0003632272170000011
wherein (x, y) is the two-dimensional Cartesian coordinate system of the sample plane, j is the imaginary unit, and (u_{m,n}, v_{m,n}) is the wave vector of the light wave, expressed as
Figure FDA0003632272170000012
wherein (x_c, y_c) are the center position coordinates of the sample sub-region, x_{m,n} and y_{m,n} denote the position of LED_{m,n}, λ is the wavelength of the illumination light wave, and h is the distance between the LED array and the sample;
(1.2) assuming the sample is a thin sample, the light wave transmitted through the sample o (x, y) is represented as
Figure FDA0003632272170000021
(1.3) the light wave transmitted through the sample passes through the microscope objective and reaches its back focal plane, the process being denoted as
Figure FDA0003632272170000022
wherein O(u − u_{m,n}, v − v_{m,n}) represents the Fourier transform of the transmitted sample light wave, (u, v) are two-dimensional spatial frequency coordinates in the Fourier spectral plane,
Figure FDA0003632272170000023
is the Fourier transform operator, P (u, v) is the pupil function;
(1.4) the light wave reaches the CMOS camera to give a low-resolution intensity image
Figure FDA0003632272170000024
The process is represented as
Figure FDA0003632272170000025
Wherein,
Figure FDA0003632272170000026
is an inverse fourier transform operator.
3. The method for self-correcting the pose of the LED array light source of the computed microscopy imaging system as recited in claim 2, wherein: the step (2) comprises the following sub-steps:
(2.1) selecting 1 image shot in the step (1), and then binarizing the image to obtain
Figure FDA0003632272170000027
(2.2) solving
Figure FDA0003632272170000028
the proportion R_{m,n} of pixels whose logic value is 1; the process is represented as
Figure FDA0003632272170000029
Wherein X and Y respectively represent the number of rows and columns of pixels of the CMOS camera;
(2.3) judging from R_{m,n} whether the current image has suitable bright-field and dark-field regions; if so, selecting the image as the input of step (3), and otherwise judging the next image;
(2.4) repeating steps (2.1) to (2.3), and stopping the iteration when all images have been judged or the termination condition is met, to obtain the data set for pose self-correction I^c_q(x, y), q ∈ [1, Q], where Q represents the total number of images with suitable bright-field and dark-field regions.
4. The method for self-correcting the pose of the LED array light source of the computed microscopy imaging system as recited in claim 3, wherein: the step (3) comprises the following sub-steps:
(3.1) establishing a data set required for training the Unet: randomly adjusting the LED array to enable the LED array to have different pose deviations, sequentially placing a plurality of samples on an objective table, lighting an LED lamp generating a bright-dark field boundary image, and collecting a series of images as input of a training set; then, when a sample is not placed, acquiring a series of images as coarse marks of a training set, and taking the images illuminated by the same LED as corresponding input and coarse marks;
(3.2) carrying out binarization processing on the training set marks to obtain a bright field area and a dark field area on the image, using the bright field area and the dark field area as real training set marks, and then using the training set to train the Unet to be convergent;
and (3.3) extracting a light and dark field region of the image by using the trained Unet network to obtain Q facula images.
5. The method for self-correcting the pose of the LED array light source of the computed microscopy imaging system as recited in claim 4, wherein: the step (4) comprises the following sub-steps:
(4.1) selecting the q-th light spot generated in the step (3), and extracting an edge pixel by using a canny operator;
(4.2) fitting a circle where the edge pixel is located by using a random sampling consistency algorithm to obtain the center and the radius of the circle;
(4.3) finding
Figure FDA0003632272170000031
The maximum corresponding iteration times s, and the circle center is recorded
Figure FDA0003632272170000032
And radius
Figure FDA0003632272170000033
(4.4) computing the variance V_{R,q} of all fitted radii; for each pixel, calculating its distance to the recorded circle center, and if the difference between this distance and the recorded radius exceeds 3V_{R,q}, removing the pixel; traversing all pixels;
(4.5) fitting the remaining pixels by least squares to obtain the center (x_{cir,q}, y_{cir,q}) and radius R_{cir,q} of the q-th spot;
(4.6) letting q = q + 1;
(4.7) repeating steps (4.1) to (4.6) until q = Q.
6. The method for self-correcting the pose of the LED array light source of the computed microscopy imaging system as recited in claim 5, wherein: the step (4.2) comprises the following sub-steps:
(4.2.1) determining the iteration number K of the algorithm, wherein the initial iteration process K is 0;
(4.2.2) randomly selecting 3 edge pixels;
(4.2.3) determining the circle center by using the selected 3 image elements
Figure FDA0003632272170000043
And radius
Figure FDA0003632272170000044
(4.2.4) calculating the distance from the pixel to the center of the circle, and recording the distance
Figure FDA0003632272170000045
Number of pixels in a certain neighborhood
Figure FDA0003632272170000046
(4.2.5) making k ═ k + 1;
(4.2.6) looping steps (4.2.2) - (4.2.5) and stopping iteration when K equals K.
7. The method for self-correcting the pose of the LED array light source of the computed microscopy imaging system as recited in claim 6, wherein: the step (5) comprises the following sub-steps:
(5.1) establishing a pose model of the LED array, describing the position of each LED unit in space through a mathematical model: assuming the LED array is a rigid body, the plane passing through the array center and perpendicular to the z-axis is the plane z = 0, and the spacing between adjacent LED lamps is d_LED; the spatial position coordinates of each LED unit are then expressed as
Figure FDA0003632272170000051
wherein (θ_x, θ_y, θ_z) respectively represent the rotation angles of the LED array about the x, y, and z axes, (Δx, Δy) represent the translations of the LED array along the x and y axes, and the relative position in the z direction is described by the distance h between the center of the LED array and the sample;
(5.2) establishing the correspondence model between the center of the spot arc boundary and the LED lamp coordinates: referring the CMOS camera to object space to obtain the conjugate CMOS window, which is equivalently located on the in-focus sample plane; the center coordinates in object space are then
Figure FDA0003632272170000052
Wherein A is the magnification of the microscope objective;
the distance from the CMOS entrance window to the entrance pupil is
Figure FDA0003632272170000053
Wherein NA is the numerical aperture,
Figure FDA0003632272170000054
is the mean value of the object space spot radius and is expressed as
Figure FDA0003632272170000055
Expressing the center coordinates of the light spots in object space as the position coordinates of the LED lamp
Figure FDA0003632272170000056
Wherein q corresponds to m and n;
(5.3) assuming that the pose parameter of the LED array is
Figure FDA0003632272170000057
Calculating the position coordinates of the LED units according to the LED array pose model established in the step (5.1)
Figure FDA0003632272170000058
Then, according to the model established in the step (5.2), the estimation of the object space light spot circle center is calculated
Figure FDA0003632272170000061
(5.4) calculating the optimal pose estimate of the LED array by nonlinear regression: if the estimated pose parameters are optimal, the spot centers calculated in step (5.3) are closest to the actual spot centers, and the mathematical model is established as
Figure FDA0003632272170000062
Wherein,
Figure FDA0003632272170000063
is a loss function that needs to be minimized,
Figure FDA0003632272170000064
the method is the optimal pose parameter estimation of the LED array.
8. The method for self-correcting the pose of the LED array light source of the computed microscopy imaging system as recited in claim 7, wherein: the step (6) comprises the following sub-steps:
(6.1) giving the initial pupil function and high-resolution sample spectrum estimates P_0(u, v) and O_0(u, v) as the start of the iterative optimization algorithm; the initial pupil function estimate is set to a circular low-pass filter with amplitude 1 inside the passband, 0 outside the passband, and phase 0; the initial sample spectrum estimate is set to the Fourier spectrum of an oversampled low-resolution image;
(6.2) calculating the wave vector of the corrected light wave: firstly, according to the LED array pose model established in the step (5.1), the optimal pose parameters are used for estimation
Figure FDA0003632272170000065
Calculating coordinates of the corrected LED lamp
Figure FDA0003632272170000066
Then calculating the wave vector of the corrected light wave
Figure FDA0003632272170000067
Figure FDA0003632272170000068
(6.3) giving the low-resolution image estimate under LED_{m,n} illumination: according to P_0(u, v) and O_0(u, v) given in step (6.1), the Fourier spectrum estimate of the low-resolution image is represented as
Figure FDA0003632272170000071
Inverse fourier transforming it to obtain an estimate of the low resolution image:
Figure FDA0003632272170000072
(6.4) using the actually recorded image
Figure FDA0003632272170000073
replacing the modulus of the low-resolution image obtained in the previous step with the square root of this image to obtain an updated image
Figure FDA0003632272170000074
(6.5) updating the image
Figure FDA0003632272170000075
Spectral estimation for updating corresponding samples
Figure FDA0003632272170000076
and the pupil function P_i(u, v), obtaining
Figure FDA0003632272170000077
Figure FDA0003632272170000078
wherein * denotes the complex conjugate operation, δ_1 and δ_2 are regularization constants that prevent the denominator from being zero, i is the number of iterations, and ΔΦ_{i,m,n} is the error auxiliary function of the update process:
Figure FDA0003632272170000079
(6.6) repeating the steps (6.1) - (6.5) in an iteration process until all the images in the data set acquired in the step (1) are processed; then, the whole iteration process is repeated for a plurality of times until the reconstruction result is converged, whether the result is converged is judged according to a relative error evaluation function, and the error evaluation function is expressed as
Figure FDA00036322721700000710
If the current evaluation function is smaller than the set threshold value, jumping out of the reconstruction cycle; and finally, carrying out inverse Fourier transform on the frequency spectrum estimation of the sample to obtain a complex amplitude image with high resolution and a large field of view.
CN202210532164.3A 2022-05-07 2022-05-07 LED array light source pose self-correction method for computing microscopic imaging system Active CN114926357B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210532164.3A CN114926357B (en) 2022-05-07 2022-05-07 LED array light source pose self-correction method for computing microscopic imaging system


Publications (2)

Publication Number Publication Date
CN114926357A true CN114926357A (en) 2022-08-19
CN114926357B CN114926357B (en) 2024-09-17

Family

ID=82809084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210532164.3A Active CN114926357B (en) 2022-05-07 2022-05-07 LED array light source pose self-correction method for computing microscopic imaging system

Country Status (1)

Country Link
CN (1) CN114926357B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117830592A (en) * 2023-12-04 2024-04-05 广州成至智能机器科技有限公司 Unmanned aerial vehicle night illumination method, system, equipment and medium based on image
CN118641160A (en) * 2024-08-19 2024-09-13 中国科学院长春光学精密机械与物理研究所 Method for measuring illumination angle of spectrum conjugated Fourier laminated microscopic imaging system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311682A (en) * 2020-02-24 2020-06-19 卡莱特(深圳)云科技有限公司 Pose estimation method and device in LED screen correction process and electronic equipment
CN113160212A (en) * 2021-05-11 2021-07-23 杭州电子科技大学 Fourier laminated imaging system and method based on LED array position error fast correction
CN113671682A (en) * 2021-08-23 2021-11-19 北京理工大学重庆创新中心 Frequency domain light source position accurate correction method based on Fourier laminated microscopic imaging
KR20220028816A (en) * 2020-08-31 2022-03-08 한국표준과학연구원 Reflective Fourier ptychographic microscopy with misalignment error correction algorithm
CN114355601A (en) * 2021-12-24 2022-04-15 北京理工大学 LED array light source pose deviation correction method and device of microscopic imaging system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张韶辉等: "傅里叶叠层显微成像模型、算法及系统研究综述", 激光与光电子学进展, vol. 58, no. 14, 25 July 2021 (2021-07-25) *


Also Published As

Publication number Publication date
CN114926357B (en) 2024-09-17

Similar Documents

Publication Publication Date Title
US10944896B2 (en) Single-frame autofocusing using multi-LED illumination
CN112243519B (en) Material testing of optical test pieces
CN107003229B (en) Analytical method comprising holographic determination of the position of a biological particle and corresponding device
JP4806630B2 (en) A method for acquiring optical image data of three-dimensional objects using multi-axis integration
Horstmeyer et al. Convolutional neural networks that teach microscopes how to image
CN107065159A (en) A kind of large visual field high resolution microscopic imaging device and iterative reconstruction method based on big illumination numerical aperture
CN110146974B (en) Intelligent biological microscope
EP2976747B1 (en) Image quality assessment of microscopy images
CN101151623A (en) Classifying image features
CN114926357A (en) Self-correcting method for LED array light source pose of computed microscopy imaging system
WO2017040669A1 (en) Pattern detection at low signal-to-noise ratio
US11403861B2 (en) Automated stain finding in pathology bright-field images
CN113671682B (en) Frequency domain light source position accurate correction method based on Fourier laminated microscopic imaging
CN107180411A (en) A kind of image reconstructing method and system
CN115032196B (en) Full-scribing high-flux color pathological imaging analysis instrument and method
CN115060367A (en) Full-glass data cube acquisition method based on microscopic hyperspectral imaging platform
CN108537862B (en) Fourier diffraction scanning microscope imaging method with self-adaptive noise reduction function
CN113674402A (en) Plant three-dimensional hyperspectral point cloud model generation method, correction method and device
CN113759535B (en) High-resolution microscopic imaging method based on multi-angle illumination deconvolution
US11422355B2 (en) Method and system for acquisition of fluorescence images of live-cell biological samples
CN109003228A (en) A kind of micro- big visual field automatic Mosaic imaging method of dark field
CN118067001A (en) Light source position accurate correction method combined with pupil function
CN114120318B (en) Dark field image target point accurate extraction method based on integrated decision tree
Chen et al. Random positional deviations correction for each LED via ePIE in Fourier ptychographic microscopy
WO2019140434A2 (en) Overlapping pattern differentiation at low signal-to-noise ratio

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant