
CN113433130A - Method and device for generating confocal imaging by wide-field imaging - Google Patents

Method and device for generating confocal imaging by wide-field imaging

Info

Publication number
CN113433130A
Authority
CN
China
Prior art keywords
imaging
wide
confocal
field
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110783049.9A
Other languages
Chinese (zh)
Inventor
季向阳
李博文
连晓聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202110783049.9A
Publication of CN113433130A
Current legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Microscopes, Condenser (AREA)

Abstract

The application relates to the technical field of microscopic imaging, and in particular to a method and a device for generating confocal imaging by wide-field imaging, wherein the method comprises the following steps: acquiring a current wide-field image acquired by a wide-field microscope; processing the current wide-field image to obtain a wide-field image whose imaging quality satisfies a preset condition; and inputting the wide-field image into a pre-trained confocal imaging model to obtain a confocal image through imaging. The embodiment of the application can obtain a confocal image from a deep-learning-based confocal imaging model, realizing a fast, low-phototoxicity and inexpensive imaging method whose imaging quality matches that of confocal microscopy, thereby effectively meeting imaging requirements.

Description

Method and device for generating confocal imaging by wide-field imaging
Technical Field
The application relates to the technical field of microscopic imaging, in particular to a method and a device for generating confocal imaging by wide-field imaging.
Background
In the related art, wide-field microscopy is a very common microscopic imaging method, and almost every laboratory involved in biological research has a wide-field imaging microscope. Its advantages are convenient imaging, a simple principle and inexpensive equipment, but its results have low resolution and suffer from severe background interference. In contrast, confocal imaging offers high resolution, low background interference and high image definition, and is widely applied in fields such as biological observation and medical diagnosis. However, confocal imaging has a low imaging speed and high phototoxicity, photobleaches biological samples, cannot image photosensitive biological samples and cannot achieve rapid imaging. Moreover, confocal equipment is generally expensive, so the technique still needs to be improved.
Content of application
The application provides a method and a device for generating confocal imaging by wide-field imaging, which are used for solving the technical problems of the confocal imaging method in the related art, namely its low imaging speed, high phototoxicity, photobleaching of biological samples, inability to image photosensitive biological samples, inability to achieve rapid imaging, and the like.
The embodiment of the first aspect of the present application provides a method for generating confocal imaging by wide-field imaging, which includes the following steps: acquiring a current wide-field image acquired by a wide-field microscope; processing the current wide-field image to obtain a wide-field image with imaging quality meeting a preset condition; and inputting the wide-field image into a pre-trained confocal imaging model, and imaging to obtain a confocal image.
Optionally, in an embodiment of the present application, before inputting the wide-field image into the pre-trained confocal imaging model, the method includes: collecting a plurality of groups of wide field images and confocal images to generate a training set; and training the deep neural network by using the training set to obtain a confocal imaging model.
Optionally, in an embodiment of the present application, the network structure of the deep neural network includes a generator and a discriminator as sub-neural networks, so as to output a confocal image based on a wide-field image after learning the paired wide-field images and confocal images of the training set.
Optionally, in an embodiment of the present application, the method further includes: determining the loss function as a combination of absolute error loss, generative adversarial network loss and VGG (Visual Geometry Group network) loss; and performing iterative optimization using an Adam optimizer to train the deep neural network.
Optionally, in an embodiment of the present application, the acquiring of the plurality of sets of wide-field images and confocal images includes: acquiring, from a target sample, images of the same region at different depths and images of different regions at different depths as the plurality of sets of wide-field images and confocal images.
The second aspect of the present application provides an apparatus for wide-field imaging to generate confocal imaging, including: the acquisition module is used for acquiring a current wide-field image acquired by the wide-field microscope; the processing module is used for processing the current wide-field image to obtain a wide-field image with imaging quality meeting a preset condition; and the imaging module is used for inputting the wide-field image into a pre-trained confocal imaging model to obtain a confocal image through imaging.
Optionally, in an embodiment of the present application, the apparatus further includes: an acquisition module, used for collecting a plurality of groups of wide-field images and confocal images to generate a training set; and a training module, used for training the deep neural network by using the training set to obtain the confocal imaging model.
Optionally, in an embodiment of the present application, the apparatus further includes: a calculation module, used for determining the loss function as a combination of absolute error loss, generative adversarial network loss and VGG network loss; and an optimization module, used for performing iterative optimization with an Adam optimizer to train the deep neural network.
An embodiment of a third aspect of the present application provides an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the method of wide field imaging generating confocal imaging as described in the above embodiments.
A fourth aspect of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method for generating confocal imaging by wide-field imaging as described in the foregoing embodiments.
By exploiting the speed, low phototoxicity and low cost of wide-field microscopic imaging, training data are learned through a deep learning method and a wide-field microscopic image is converted into a confocal-like image with high image quality and low background noise. The purpose of obtaining a confocal image from a deep-learning-based confocal imaging model is thus achieved, realizing a fast, low-phototoxicity and inexpensive imaging method with the same imaging quality as confocal microscopy and effectively meeting imaging requirements. Therefore, the technical problems of the confocal imaging method in the related art, such as low imaging speed, high phototoxicity, photobleaching of biological samples, inability to image photosensitive biological samples and inability to achieve rapid imaging, are solved.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart of a method for generating confocal imaging by wide-field imaging according to an embodiment of the present application;
FIG. 2 is an exemplary view of a wide-field microscope and a confocal microscope according to one embodiment of the present application;
FIG. 3 is a schematic diagram of a generator in a deep learning network architecture according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a structure of a discriminator in a deep learning network structure according to an embodiment of the present application;
FIG. 5 is a graph illustrating the results of high quality confocal-like imaging, according to one embodiment of the present application;
FIG. 6 is a graphical illustration of confocal imaging results for examination of new different types of samples according to one embodiment of the present application;
FIG. 7 is an exemplary diagram of an apparatus for wide field imaging to generate confocal imaging according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Description of reference numerals:
10 - apparatus for generating confocal imaging by wide-field imaging; 100 - acquisition module; 200 - processing module; 300 - imaging module.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The method and apparatus for generating confocal imaging by wide-field imaging according to embodiments of the present application are described below with reference to the accompanying drawings. Aiming at the technical problems mentioned in the Background, namely that the confocal imaging method in the related art has a low imaging speed and high phototoxicity, photobleaches biological samples, cannot image photosensitive biological samples and cannot achieve rapid imaging, the application provides a method for generating confocal imaging by wide-field imaging. In this method, the speed, low phototoxicity and low cost of wide-field microscopic imaging are exploited: training data are learned through a deep learning method, and a wide-field microscopic image is converted into a confocal-like image with high image quality and low background noise. The purpose of obtaining a confocal image from a deep-learning-based confocal imaging model is thus achieved, realizing a fast, low-phototoxicity and inexpensive imaging method with the same imaging quality as confocal microscopy and effectively meeting imaging requirements. Therefore, the above technical problems of the confocal imaging method in the related art are solved.
Specifically, fig. 1 is a schematic flowchart of a method for generating confocal imaging by wide-field imaging according to an embodiment of the present disclosure.
As shown in fig. 1, the method of wide field imaging to generate confocal imaging comprises the following steps:
in step S101, a current wide-field image acquired by the wide-field microscope is acquired.
It can be appreciated that, in the online phase, the embodiment of the present application first uses the wide-field image acquired by the wide-field microscope as the input for confocal imaging.
In step S102, the current wide-field image is processed to obtain a wide-field image whose imaging quality satisfies a preset condition.
It will be appreciated that, after new data is acquired, the wide-field microscope image is first processed so that it can be brought to the same imaging quality as confocal microscopy; that is, the preset condition can be set to the same imaging quality as confocal microscopy.
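As a purely illustrative aside, a minimal preprocessing sketch in Python is given below. The background-subtraction and normalization steps, the percentile value and the function name preprocess_widefield are assumptions made for illustration; the text above only requires that the processed image satisfy the preset condition.

```python
import numpy as np

def preprocess_widefield(image: np.ndarray, background_percentile: float = 5.0) -> np.ndarray:
    """Hypothetical preprocessing of a raw wide-field frame (illustrative only)."""
    img = image.astype(np.float32)
    # Estimate and remove a flat background level (assumed strategy).
    background = np.percentile(img, background_percentile)
    img = np.clip(img - background, 0.0, None)
    # Normalize intensities to [0, 1] so the network sees a fixed range.
    peak = img.max()
    return img / peak if peak > 0 else img
```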
In step S103, the wide-field image is input into a pre-trained confocal imaging model, and a confocal image is obtained by imaging.
Therefore, the imaging result of the embodiment of the present application combines the advantages of confocal imaging, such as high resolution and low background, with the advantages of wide-field microscopic imaging, such as speed and low phototoxicity, and can be used for long-duration imaging and for imaging thick biological samples.
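For orientation, the online steps S101-S103 can be sketched end to end as follows in PyTorch. The tensor layout and the idea of passing an already loaded generator module are illustrative assumptions, not details given in the original text.

```python
import numpy as np
import torch

def widefield_to_confocal(widefield_stack: np.ndarray,
                          generator: torch.nn.Module,
                          device: str = "cuda") -> np.ndarray:
    """Sketch of steps S101-S103: preprocessed wide-field stack in, confocal-like stack out."""
    generator = generator.to(device).eval()
    # Assumed layout: (depth, height, width) -> (batch, channel, depth, height, width).
    x = torch.from_numpy(widefield_stack).float()[None, None].to(device)
    with torch.no_grad():
        y = generator(x)  # S103: prediction by the pre-trained confocal imaging model
    return y.squeeze().cpu().numpy()
```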
Optionally, in an embodiment of the present application, before inputting the wide-field image into the pre-trained confocal imaging model, the method includes: collecting a plurality of groups of wide-field images and confocal images to generate a training set; and training the deep neural network by using the training set to obtain the confocal imaging model.
It can be understood that, in the embodiment of the present application, in the off-line stage, the wide-field image and the confocal image are acquired in pairs as training data, and the deep learning model is trained through a computer.
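Assembled as training data, such pairs can be represented by a minimal PyTorch dataset like the sketch below; the on-disk layout (parallel lists of aligned .npy volumes) is an assumption made for illustration.

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class PairedWidefieldConfocalDataset(Dataset):
    """Wide-field / confocal pairs stored as aligned .npy volumes (assumed layout)."""

    def __init__(self, widefield_paths, confocal_paths):
        assert len(widefield_paths) == len(confocal_paths)
        self.widefield_paths = widefield_paths
        self.confocal_paths = confocal_paths

    def __len__(self):
        return len(self.widefield_paths)

    def __getitem__(self, idx):
        wf = np.load(self.widefield_paths[idx]).astype(np.float32)
        cf = np.load(self.confocal_paths[idx]).astype(np.float32)
        # Add a channel dimension: (D, H, W) -> (1, D, H, W).
        return torch.from_numpy(wf)[None], torch.from_numpy(cf)[None]
```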
Optionally, in an embodiment of the present application, acquiring the multiple sets of wide-field images and confocal images includes: acquiring, from a target sample, images of the same region at different depths and images of different regions at different depths as the multiple sets of wide-field images and confocal images.
For example, fig. 2 shows a typical wide-field microscope image and a typical confocal microscope image, respectively. In the present embodiment, the two microscopes are used to photograph the same region of the same biological sample (a mouse brain slice), and the photographs are precisely aligned to form matched data. These steps are repeated multiple times, capturing images of the same region of the biological sample at different depths as well as images of different regions at different depths, so that multiple sets of paired data are obtained. To ensure the learning effect, the embodiment of the present application may use 2000 or more matched images as training data.
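The precise alignment of the wide-field and confocal photographs could, for instance, be carried out as a rigid translation estimated by phase cross-correlation; the sketch below uses scikit-image and SciPy for this and is an assumption about the registration step, which the original text does not detail.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_pair(widefield: np.ndarray, confocal: np.ndarray) -> np.ndarray:
    """Shift the wide-field image onto the confocal image (rigid translation only)."""
    # Estimate the sub-pixel translation between the two modalities.
    offset, _, _ = phase_cross_correlation(confocal, widefield, upsample_factor=10)
    # Apply the estimated shift so that the pair becomes pixel-aligned matched data.
    return nd_shift(widefield, offset)
```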
Wherein, in one embodiment of the present application, the network structure of the deep neural network includes a generator and a discriminator as sub-neural networks, so as to output a confocal image based on a wide-field image after learning the paired wide-field images and confocal images of the training set.
Specifically, after the training data are prepared, the deep neural network can be trained using, for example, an NVIDIA Titan Xp GPU. The code is implemented with the open-source framework PyTorch, under which the neural network structure is designed. The architecture is based on a generative adversarial network: the network consists of a generator and a discriminator acting as two sub-neural networks. By learning a large number of paired pictures (A, B), the generator learns to convert the A of such a pair into B; in the present invention A and B correspond to the wide-field and confocal pictures, respectively. The specific structure of the generator is shown in fig. 3. The wide-field image enters the network at the upper left of the figure and, after a series of convolution, down-sampling and up-sampling operations, is output as a confocal-like image at the upper right; the network also passes low-level information directly to the high-level layers, so that the high-level features yield better results. The three-dimensional data block is obtained by stacking images captured at different depths of the same region. The number at the upper left of each data block denotes the number of channels, and the number at the lower right denotes the size of the three-dimensional data block, i.e. the number of pixels along its three dimensions. The discriminator, also a neural network and shown in fig. 4, judges whether the quality of the generated image is close to that of the real image, that is, whether the confocal image generated from the wide-field image can no longer be distinguished by the machine from a confocal image obtained by direct acquisition. Adding this extra structure brings a considerable improvement in the quality of the resulting confocal-like image.
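To make the architecture concrete, a heavily simplified PyTorch sketch of a three-dimensional encoder-decoder generator with a skip connection and a convolutional discriminator is given below. The channel counts, kernel sizes and layer choices are illustrative assumptions; the actual structures are those of figures 3 and 4.

```python
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """Simplified encoder-decoder over (batch, 1, D, H, W) volumes with even spatial sizes (illustrative)."""

    def __init__(self, base_channels: int = 32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(1, base_channels, 3, padding=1),
                                 nn.LeakyReLU(0.2))
        self.down = nn.Sequential(nn.Conv3d(base_channels, base_channels * 2, 3, stride=2, padding=1),
                                  nn.LeakyReLU(0.2))
        self.up = nn.Sequential(nn.ConvTranspose3d(base_channels * 2, base_channels, 4, stride=2, padding=1),
                                nn.ReLU())
        self.out = nn.Conv3d(base_channels * 2, 1, 3, padding=1)

    def forward(self, x):
        low = self.enc(x)                # low-level features
        high = self.up(self.down(low))   # down-sample, then up-sample
        # Concatenation passes low-level information directly to the output.
        return self.out(torch.cat([high, low], dim=1))


class Discriminator3D(nn.Module):
    """Judges whether a volume looks like a directly acquired confocal stack."""

    def __init__(self, base_channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, base_channels, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(base_channels, base_channels * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(base_channels * 2, 1, 3, padding=1),  # patch-wise real/fake scores
        )

    def forward(self, x):
        return self.net(x)
```

In this sketch the generator maps an aligned wide-field volume to a confocal-like volume, and the discriminator scores whether a volume looks directly acquired, mirroring the two roles described above.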
The neural network structure may be, but is not limited to, a structure operating on three-dimensional data or on two-dimensional data; it may be set by a person skilled in the art according to actual circumstances and is not particularly limited herein.
In addition, in an embodiment of the present application, the method further includes: determining the loss function as a combination of absolute error loss, generative adversarial network loss and VGG network loss; and performing iterative optimization using an Adam optimizer to train the deep neural network.
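A minimal sketch of such a combined objective in PyTorch is shown below. The loss weights, the choice of a binary cross-entropy adversarial term, the VGG-16 layers used for the perceptual term and the learning rate are assumptions for illustration; generator, discriminator, fake and real are placeholder names.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

l1_loss = nn.L1Loss()                  # absolute error loss
adv_loss = nn.BCEWithLogitsLoss()      # one common adversarial loss formulation
vgg_features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in vgg_features.parameters():    # VGG is used only as a fixed feature extractor
    p.requires_grad = False

def generator_objective(fake, real, disc_out, w_l1=100.0, w_adv=1.0, w_vgg=10.0):
    """Weighted sum of L1, adversarial and VGG losses (weights are assumed)."""
    loss_l1 = l1_loss(fake, real)
    loss_adv = adv_loss(disc_out, torch.ones_like(disc_out))  # try to fool the discriminator
    # VGG expects 3-channel 2D images; here the central depth slice is used (assumption),
    # and ImageNet normalization is omitted for brevity.
    to_rgb = lambda v: v[:, :, v.shape[2] // 2].repeat(1, 3, 1, 1)
    loss_vgg = l1_loss(vgg_features(to_rgb(fake)), vgg_features(to_rgb(real)))
    return w_l1 * loss_l1 + w_adv * loss_adv + w_vgg * loss_vgg

# Iterative optimization with Adam (learning rate and betas are assumed values):
# optimizer_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
# optimizer_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
```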
It can be understood that, after the network structure is determined, for example 1500 matched picture pairs are fed into the network for training, the loss function is set to the combination of absolute error loss, generative adversarial network (GAN) loss and VGG network loss, and the Adam optimizer is used for iterative optimization; training the network for about 60 epochs yields the final result. The trained model was then used to test the remaining 1500 data pairs. Fig. 5 is an example of an imaging result. The input is the wide-field image of a data pair (fig. 5a); after passing through the generator, an output confocal image predicted by the network is obtained and compared with the confocal image of the pair (fig. 5b-d: the first column is the input wide-field image, the second column the input confocal image, the third column the network-predicted confocal image, the fourth column the error between the input wide-field image and the input confocal image, and the fifth column the error between the predicted confocal image and the input confocal image). By observation and by computing the common image-evaluation metrics RMSE (root-mean-square error) and SSIM (structural similarity), the difference is found to be very small, which proves the feasibility of the embodiment of the present application.
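The RMSE and SSIM comparison can be reproduced with a short evaluation sketch such as the one below (SSIM as provided by scikit-image); the data-range handling is an assumption made for illustration.

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_pair(predicted: np.ndarray, reference: np.ndarray):
    """Root-mean-square error and structural similarity between two images."""
    rmse = float(np.sqrt(np.mean((predicted - reference) ** 2)))
    ssim = structural_similarity(predicted, reference,
                                 data_range=float(reference.max() - reference.min()))
    return rmse, ssim
```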
In actual implementation, the trained model is also used to measure other samples or other types of cells. Fig. 6 shows further examples of imaging results. Fig. 6a and 6c (first column: input wide-field image; second column: input confocal image; third column: network-predicted confocal image; fourth column: error between the predicted confocal image and the input confocal image) show the same cell structure (neuron cell bodies), but with signals of different thickness. Fig. 6b and 6d show results for cells of a different type and different signal thickness from the training data (nuclei of neurons). The model is well applicable to new data, effectively removes background noise and greatly improves imaging quality. These two experiments fully demonstrate the capabilities of the embodiments of the present application.
According to the method for generating confocal imaging by wide-field imaging provided in the embodiment of the present application, the speed, low phototoxicity and low cost of wide-field microscopic imaging are exploited: training data are learned through a deep learning method, and a wide-field microscopic image is converted into a confocal-like image with high image quality and low background noise. The purpose of obtaining a confocal image from a deep-learning-based confocal imaging model is thus achieved, realizing an imaging method whose imaging quality equals that of confocal microscopy and effectively meeting imaging requirements.
Next, an apparatus for generating confocal imaging by wide-field imaging proposed according to an embodiment of the present application is described with reference to the drawings.
Fig. 7 is a block diagram of an apparatus for wide field imaging to generate confocal imaging according to an embodiment of the present disclosure.
As shown in fig. 7, the apparatus 10 for generating confocal imaging by wide-field imaging includes: an acquisition module 100, a processing module 200, and an imaging module 300.
Specifically, the acquiring module 100 is configured to acquire a current wide-field image acquired by the wide-field microscope.
And the processing module 200 is configured to process the current wide-field image to obtain a wide-field image with imaging quality meeting a preset condition.
And the imaging module 300 is configured to input the wide-field image into a pre-trained confocal imaging model, and perform imaging to obtain a confocal image.
Optionally, in an embodiment of the present application, the apparatus 10 of the embodiment of the present application further includes: an acquisition module and a training module.
The acquisition module is used for acquiring a plurality of groups of wide-field images and confocal images to generate a training set; and the training module is used for training the deep neural network by utilizing the training set to obtain the confocal imaging model.
Optionally, in an embodiment of the present application, the apparatus 10 of the embodiment of the present application further includes: a calculation module and an optimization module.
The calculation module is used for determining the loss function as a combination of absolute error loss, generative adversarial network loss and VGG network loss.
The optimization module is used for performing iterative optimization with an Adam optimizer to train the deep neural network.
It should be noted that the foregoing explanation of the embodiment of the method for generating confocal imaging by wide-field imaging also applies to the apparatus for generating confocal imaging by wide-field imaging in this embodiment, and details are not repeated here.
According to the apparatus for generating confocal imaging by wide-field imaging provided in the embodiment of the present application, the speed, low phototoxicity and low cost of wide-field microscopic imaging are exploited: training data are learned through a deep learning method, and a wide-field microscopic image is converted into an image whose image quality is as high, and whose background noise is as low, as those of a confocal image. The purpose of obtaining a confocal image from a deep-learning-based confocal imaging model is thus achieved, realizing an imaging method with the same imaging quality as confocal microscopy and effectively meeting imaging requirements.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may include:
a memory 801, a processor 802, and a computer program stored on the memory 801 and executable on the processor 802.
The processor 802, when executing the program, implements the method of wide field imaging generating confocal imaging provided in the embodiments described above.
Further, the electronic device further includes:
a communication interface 803 for communicating between the memory 801 and the processor 802.
A memory 801 for storing computer programs operable on the processor 802.
The memory 801 may comprise a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk memory.
If the memory 801, the processor 802 and the communication interface 803 are implemented independently, the communication interface 803, the memory 801 and the processor 802 may be connected to each other via a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 8, but this is not intended to represent only one bus or type of bus.
Optionally, in a specific implementation, if the memory 801, the processor 802, and the communication interface 803 are integrated on one chip, the memory 801, the processor 802, and the communication interface 803 may complete communication with each other through an internal interface.
The processor 802 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The present embodiment also provides a computer readable storage medium having stored thereon a computer program, characterized in that the program, when executed by a processor, implements the method of wide field imaging generating confocal imaging as above.
In the description herein, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples", etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine different embodiments or examples, and the features of different embodiments or examples, described in this specification without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality of" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiments of the present application, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, a plurality of steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having appropriate combinational logic gates, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.

Claims (10)

1. A method of wide field imaging to generate confocal imaging, comprising the steps of:
acquiring a current wide-field image acquired by a wide-field microscope;
processing the current wide-field image to obtain a wide-field image with imaging quality meeting a preset condition; and
inputting the wide-field image into a pre-trained confocal imaging model, and imaging to obtain a confocal image.
2. The method of claim 1, prior to inputting the wide-field image into the pre-trained confocal imaging model, comprising:
collecting a plurality of groups of wide field images and confocal images to generate a training set;
and training the deep neural network by using the training set to obtain a confocal imaging model.
3. The method of claim 2, wherein the network structure of the deep neural network comprises a generator and a discriminator as sub-neural networks, so as to output confocal images based on wide-field images after learning the paired wide-field images and confocal images of the training set.
4. The method of claim 2 or 3, further comprising:
determining a loss function as a combination of absolute error loss, generative adversarial network loss and VGG network loss;
performing iterative optimization using an Adam optimizer to train the deep neural network.
5. The method of claim 2, wherein the acquiring the plurality of sets of wide-field images and confocal images comprises:
acquiring, from a target sample, images of the same region at different depths and images of different regions at different depths as the plurality of sets of wide-field images and confocal images.
6. An apparatus for wide field imaging to produce confocal imaging, comprising:
the acquisition module is used for acquiring a current wide-field image acquired by the wide-field microscope;
the processing module is used for processing the current wide-field image to obtain a wide-field image with imaging quality meeting a preset condition; and
the imaging module is used for inputting the wide-field image into a pre-trained confocal imaging model to obtain a confocal image through imaging.
7. The apparatus of claim 6, further comprising:
the acquisition module is used for acquiring a plurality of groups of wide-field images and confocal images to generate a training set;
and the training module is used for training the deep neural network by using the training set to obtain a confocal imaging model.
8. The apparatus of claim 6 or 7, further comprising:
the computing module is used for determining the loss function as a combination of absolute error loss, generative adversarial network loss and VGG network loss;
and the optimization module is used for performing iterative optimization by using an Adam optimizer so as to train the deep neural network.
9. An electronic device, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the method of wide field imaging generating confocal imaging as claimed in any of claims 1-5.
10. A computer readable storage medium having stored thereon a computer program, the program being executable by a processor for performing the method of wide field imaging generating confocal imaging as claimed in any one of claims 1 to 5.
CN202110783049.9A 2021-07-12 2021-07-12 Method and device for generating confocal imaging by wide-field imaging Pending CN113433130A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110783049.9A CN113433130A (en) 2021-07-12 2021-07-12 Method and device for generating confocal imaging by wide-field imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110783049.9A CN113433130A (en) 2021-07-12 2021-07-12 Method and device for generating confocal imaging by wide-field imaging

Publications (1)

Publication Number Publication Date
CN113433130A (en) 2021-09-24

Family

ID=77759964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110783049.9A Pending CN113433130A (en) 2021-07-12 2021-07-12 Method and device for generating confocal imaging by wide-field imaging

Country Status (1)

Country Link
CN (1) CN113433130A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115841423A (en) * 2022-12-12 2023-03-24 之江实验室 Wide-field illumination fluorescence super-resolution microscopic imaging method based on deep learning
CN117953005A (en) * 2024-02-03 2024-04-30 浙江荷湖科技有限公司 Simulation-based rapid wide-field calcium imaging neuron extraction method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106842529A (en) * 2017-01-23 2017-06-13 清华大学 Quick three-dimensional micro imaging system
CN110296967A (en) * 2019-07-15 2019-10-01 清华大学 High speed and high resoltuion wide field chromatography imaging method and device
US20190333199A1 (en) * 2018-04-26 2019-10-31 The Regents Of The University Of California Systems and methods for deep learning microscopy
CN111239731A (en) * 2020-01-06 2020-06-05 南京航空航天大学 Synthetic aperture radar rapid imaging method and device based on neural network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106842529A (en) * 2017-01-23 2017-06-13 清华大学 Quick three-dimensional micro imaging system
US20190333199A1 (en) * 2018-04-26 2019-10-31 The Regents Of The University Of California Systems and methods for deep learning microscopy
CN110296967A (en) * 2019-07-15 2019-10-01 清华大学 High speed and high resoltuion wide field chromatography imaging method and device
CN111239731A (en) * 2020-01-06 2020-06-05 南京航空航天大学 Synthetic aperture radar rapid imaging method and device based on neural network

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115841423A (en) * 2022-12-12 2023-03-24 之江实验室 Wide-field illumination fluorescence super-resolution microscopic imaging method based on deep learning
CN117953005A (en) * 2024-02-03 2024-04-30 浙江荷湖科技有限公司 Simulation-based rapid wide-field calcium imaging neuron extraction method and system

Similar Documents

Publication Publication Date Title
CN110853022B (en) Pathological section image processing method, device and system and storage medium
CN109191476A (en) The automatic segmentation of Biomedical Image based on U-net network structure
CN113433130A (en) Method and device for generating confocal imaging by wide-field imaging
Sweere et al. Deep learning-based super-resolution and de-noising for XMM-newton images
CN113191390A (en) Image classification model construction method, image classification method and storage medium
CN111445546A (en) Image reconstruction method and device, electronic equipment and storage medium
CN109978897B (en) Registration method and device for heterogeneous remote sensing images of multi-scale generation countermeasure network
CN112633248B (en) Deep learning full-in-focus microscopic image acquisition method
Goutham et al. Brain tumor classification using EfficientNet-B0 model
CN114445356A (en) Multi-resolution-based full-field pathological section image tumor rapid positioning method
KR20220129406A (en) A method and apparatus for image segmentation using residual convolution based deep learning network
CN111462005B (en) Method, apparatus, computer device and storage medium for processing microscopic image
Gautam et al. Demonstrating the risk of imbalanced datasets in chest x-ray image-based diagnostics by prototypical relevance propagation
CN112861958A (en) Method and device for identifying and classifying kidney disease immunofluorescence pictures
KR102329546B1 (en) System and method for medical diagnosis using neural network and non-local block
CN108596900B (en) Thyroid-associated ophthalmopathy medical image data processing device and method, computer-readable storage medium and terminal equipment
CN110739051B (en) Method for establishing eosinophilic granulocyte proportion model by using nasal polyp pathological picture
CN110276802B (en) Method, device and equipment for positioning pathological tissue in medical image
CN108020680A (en) Fluid measurement instrument and method based on PIV
CN113379770B (en) Construction method of nasopharyngeal carcinoma MR image segmentation network, image segmentation method and device
CN114155340A (en) Reconstruction method and device of scanning light field data, electronic equipment and storage medium
CN114897693A (en) Microscopic image super-resolution method based on mathematical imaging theory and generation countermeasure network
EP3965070A1 (en) Learning model generation method, identification method, learning model generation system, identification system, learning model generation program, identification program, and recording medium
CN113658036B (en) Data augmentation method, device, computer and medium based on countermeasure generation network
Kassim et al. A cell augmentation tool for blood smear analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2021-09-24)