CN114004854B - Real-time processing display system and method for slice image under microscope - Google Patents
- Publication number
- CN114004854B (application CN202111096321.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- focus
- slice
- microscope
- current
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B21/00—Microscopes
- G02B21/0004—Microscopes specially adapted for specific applications
- G02B21/0012—Surgical microscopes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/181—Segmentation; Edge detection involving edge growing; involving edge linking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
Abstract
The invention discloses a real-time processing and display system and method for slice images under a microscope. The system comprises a microscope system, an image processing system and an auxiliary display system. The microscope system acquires slice field-of-view images under the microscope. The image processing system splices successive slice field images to obtain a current historical field spliced image; it also performs lesion recognition on each slice field image, extracts the edges of the lesion connected domains in the resulting lesion recognition image, and superimposes those edges on the corresponding slice field image to obtain a microscope field image with the lesion label; it further splices multiple microscope field images with the lesion label to obtain a current historical field lesion image. The auxiliary display system displays the microscope field image with the lesion label, the current historical field spliced image and the current historical field lesion image in real time. The invention provides image data support for pathological diagnosis and improves diagnosis efficiency and the accuracy of diagnosis results.
Description
Technical Field
The invention relates to the field of medical image processing, in particular to a system and a method for processing and displaying slice images in real time under a microscope.
Background
A pathology report is a diagnostic report produced by preparing a patient's biopsy tissue and having a pathologist examine it under a microscope. It is the most reliable and accurate means of clinical diagnosis and is known as the gold standard of clinical diagnosis.
In practice, a pathologist spends a significant amount of time each day diagnosing slices of various diseases under a microscope. According to statistics, China has only about 15,000 pathologists, with a remaining talent gap of roughly 10,000, which directly causes heavy per-pathologist workloads and low pathological examination efficiency. For most small hospitals in particular, the lack of experienced pathologists, combined with scarce advanced medical equipment, results in tremendous in-hospital pathological diagnosis pressure.
With the application of deep learning, and in particular of semantic segmentation networks to lesion recognition in medical images, such networks have gradually been shown to effectively improve the diagnostic accuracy and efficiency of pathologists, and are widely applied to lesion recognition in polyp, esophageal cancer, CT, magnetic resonance and slice images. However, most deep-learning-based projects are still at the research stage; there is a lack of a real-time processing system for microscopic slice images that can be used directly for clinical diagnosis and that effectively combines the network model with the actual clinical situation.
Disclosure of Invention
The invention aims to provide a real-time processing and display system and method for slice images under a microscope which, based on the slice field images acquired by the microscope, can provide in real time a microscope field image with the lesion label, a current historical field spliced image and a current historical field lesion image, offering image data support for the pathological diagnosis of pathologists so as to improve diagnosis efficiency and the accuracy of diagnosis results.
The invention adopts the following technical scheme:
a slice image real-time processing display system under a microscope comprises a microscope system, an image processing system and an auxiliary display system;
The microscope system is used for collecting a slice visual field image under a microscope;
The image processing system is used for sequentially splicing the plurality of slice visual field images according to the time sequence to obtain a current historical visual field spliced image and sending the current historical visual field spliced image to the auxiliary display system; the method is also used for identifying focus of each slice view image, extracting the edge of a focus connected domain in the identified focus identification image and superposing the extracted edge on the corresponding slice view image to obtain a microscope view image with a focus label and sending the microscope view image to an auxiliary display system; the system is also used for sequentially splicing a plurality of microscope field images with the lesion labels according to the time sequence to obtain a current historical field lesion image and sending the current historical field lesion image to an auxiliary display system;
The auxiliary display system is used for displaying the microscope field image with the lesion label, the current historical field spliced image and the current historical field lesion image in real time.
The image processing system comprises an image splicing module, a focus identification module and an image superposition module;
The image stitching module is used for sequentially acquiring the slice visual field images acquired by the microscope system and sequentially transmitting the acquired slice visual field images to the focus recognition module; the image stitching module is also used for stitching a plurality of slice visual field images in sequence according to the time sequence, so as to obtain a current historical visual field stitching image and send the current historical visual field stitching image to the auxiliary display system; the image stitching module is also used for sequentially acquiring the microscope field images with the disease focus labels transmitted by the image superposition module, stitching the microscope field images with the disease focus labels sequentially according to time sequence, finally acquiring the current historical field focus images and transmitting the current historical field focus images to the auxiliary display system;
The focus recognition module is used for recognizing focus of each slice view image transmitted by the image stitching module through a focus recognition network to obtain focus recognition images containing focus connected domains, and then transmitting the obtained focus recognition images to the image superposition module;
The image superposition module is used for sequentially acquiring the slice view image acquired by the microscope system and the focus identification image transmitted by the focus identification module, extracting the edge of the focus connected domain in the focus identification image, superposing the edge of the focus connected domain in the extracted focus identification image on the corresponding slice view image, obtaining the microscope view image with the focus label, and transmitting the microscope view image with the focus label to the image stitching module and the auxiliary display system.
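The superposition module's edge extraction can be illustrated with a minimal sketch: treating the lesion recognition image as a binary mask, the edge of each lesion connected domain is the set of lesion pixels with at least one 4-neighbour outside the mask. All function and variable names below are illustrative, not from the patent.

```python
import numpy as np

def lesion_edges(mask):
    """Edge pixels of lesion connected domains: lesion pixels with at least
    one 4-neighbour outside the mask (a stand-in for morphological erosion
    followed by a difference)."""
    m = mask.astype(bool)
    interior = m.copy()
    interior[1:, :] &= m[:-1, :]    # neighbour above is inside the mask
    interior[:-1, :] &= m[1:, :]    # neighbour below is inside the mask
    interior[:, 1:] &= m[:, :-1]    # neighbour to the left is inside
    interior[:, :-1] &= m[:, 1:]    # neighbour to the right is inside
    return m & ~interior

def overlay_edges(field_rgb, mask, color=(255, 0, 0)):
    """Superimpose the extracted edges on the slice field image (RGB, uint8)."""
    out = field_rgb.copy()
    out[lesion_edges(mask)] = color
    return out
```

A production system would more likely use a contour-tracing routine, but the interior/boundary distinction shown here is the core of the edge-extraction step.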
When the image processing system splices two images, it proceeds as follows:

a: Let the current image acquired by the image processing system be f1(x, y) and the previous image be f2(x, y), with f2(x, y) = f1(x − dx, y − dy); that is, f1(x, y) is obtained by translating f2(x, y) by (dx, dy). Fourier-transforming the current image and the previous image respectively yields the frequency-domain images F1(u, v) and F2(u, v), where F2(u, v) = F1(u, v)·e^(−i·2π(u·dx + v·dy));

here the translation comprises parallel movement in the horizontal and/or vertical direction; f1(x, y) is the gray value of the current image at pixel coordinate (x, y); f2(x, y) is the gray value of the previous image at (x, y); F1(u, v) and F2(u, v) are the values of the frequency-domain images of the current and previous images at frequency coordinate (u, v); i is the imaginary unit; dx and dy are the displacements between the two images along the x-axis and y-axis respectively;

b: The conjugate of the frequency-domain image F2 is taken to obtain the conjugated frequency-domain image F2*(u, v); this is multiplied with the frequency-domain image F1 and normalized to obtain the cross-power spectrum H(u, v) = F1(u, v)·F2*(u, v) / |F1(u, v)·F2*(u, v)|;

c: Inverse Fourier transformation of the cross-power spectrum H(u, v) yields the real-domain map Fe(x, y), an impulse-function image; the coordinates of the peak position in Fe(x, y) give the displacement (dx, dy) between the two images. The previous image is then placed at each of four candidate positions relative to the current image (upper left, lower left, upper right, lower right), each consistent with a movement of dx along the x-axis and dy along the y-axis between the two images. The mean of the absolute gray-value differences over the overlap region of the two images is computed at each of the four positions; the position with the minimum mean absolute difference is the correct splicing relation of the two images. The images are then spliced according to the displacement (dx, dy) and this positional relation to obtain the current historical field spliced image or the current historical field lesion image;

d: Each current image acquired by the image processing system is spliced with its previous image as above and merged into the spliced result; when the last image has been processed, the image processing system has completed splicing all images and obtained the current historical field spliced image or the current historical field lesion image.
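Steps a to c describe standard phase-correlation registration; a minimal NumPy sketch (function and variable names are illustrative, not from the patent) might look like:

```python
import numpy as np

def phase_correlation(curr, prev):
    """Estimate the translation (dx, dy) between two overlapping field
    images via the normalized cross-power spectrum (steps a-c above)."""
    F1 = np.fft.fft2(curr)              # frequency-domain image of current image
    F2 = np.fft.fft2(prev)              # frequency-domain image of previous image
    R = F1 * np.conj(F2)                # multiply F1 with conjugated F2
    H = R / (np.abs(R) + 1e-12)         # normalize -> cross-power spectrum H(u, v)
    corr = np.real(np.fft.ifft2(H))     # impulse-function image Fe(x, y)
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak coordinates to signed displacements
    if dy > curr.shape[0] // 2:
        dy -= curr.shape[0]
    if dx > curr.shape[1] // 2:
        dx -= curr.shape[1]
    return int(dx), int(dy)
```

With real, only partially overlapping fields the circular-shift assumption behind the FFT breaks down, which is why the four-position overlap comparison of step c remains useful for resolving the true splicing relation.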
In the process of training the focus recognition module, a staged (step-by-step) network training method is adopted:

First, the network model is trained with a first loss function; during this stage, only the network model with the best pixel-level IoU index on the validation set Valid is stored, as the intermediate network model.

Then, starting from the intermediate network model, training continues with a second loss function; again, only the network model with the best pixel-level IoU index on the validation set Valid is stored, as the final network model. The first loss function and the second loss function are different loss functions.
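The staged procedure amounts to a checkpoint-on-best-IoU loop run once per loss function; `train_epoch` and `eval_iou` below are hypothetical stand-ins for the actual network training step and validation-set evaluation, as is the epoch count.

```python
def train_staged(train_epoch, eval_iou, loss_fns, init_state, epochs_per_stage=3):
    """Staged training sketch: one pass per loss function, each stage resuming
    from the best model of the previous one. After every epoch the model is
    kept only if it improves the pixel-level IoU on the validation set."""
    state = init_state
    for loss_fn in loss_fns:                 # first loss, then second loss
        best_iou, best_state = -1.0, state
        cur = state
        for _ in range(epochs_per_stage):
            cur = train_epoch(cur, loss_fn)  # one epoch with this loss
            iou = eval_iou(cur)              # pixel-level IoU on Valid
            if iou > best_iou:               # checkpoint only on improvement
                best_iou, best_state = iou, cur
        state = best_state                   # intermediate model, then final model
    return state
```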
In the process of training the focus recognition module, the second loss function adopts a composite loss function aimed at multiple evaluation indices, with the specific formulas:

CompoundLoss = F_focal(α × LesionLoss + (1 − α) × PixelLoss);

F_focal(x) = −k × (1 − x)^γ × log(x);

where CompoundLoss is the loss function, LesionLoss is the lesion-level loss, PixelLoss is the pixel-level loss, α is a weighting coefficient, F_focal(x) is the focal function, k and γ are fixed parameters, and x is the focal-function variable. PixelLoss combines a pixel-level precision loss Pre_PixelLoss and a pixel-level recall loss Rec_PixelLoss with weighting coefficient β, where T1 is the lesion area of the true label map, P1 is the lesion area of the predicted label map, |T1 ∩ P1| is the number of pixels in the intersection of T1 and P1, and smooth is a small quantity that prevents the denominator from being 0. LesionLoss likewise combines a lesion-level precision loss Pre_LesionLoss and a lesion-level recall loss Rec_LesionLoss, where T2 is the set of lesion connected domains of the true label map, P2 is the set of lesion connected domains of the predicted label map, and |∩(P2, T2)| is the number of accurately predicted lesions.
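A hedged sketch of the composite loss: the focal wrapper and the CompoundLoss combination follow the formulas above, while the per-term PixelLoss is *assumed* to be a Dice-style precision/recall pair (the patent's exact per-term formulas are not reproduced in the text, so the forms marked "assumed" below are illustrative only).

```python
import numpy as np

def focal(x, k=1.0, gamma=2.0):
    # F_focal(x) = -k * (1 - x)^gamma * log(x), per the formula above
    return -k * (1.0 - x) ** gamma * np.log(x)

def pixel_loss(T1, P1, beta=0.5, smooth=1e-6):
    """Assumed Dice-style pair: the precision term normalizes the missed
    intersection by the predicted area |P1|, the recall term by the true
    area |T1|; `smooth` keeps the denominators away from zero."""
    inter = np.logical_and(T1, P1).sum()
    pre = 1.0 - inter / (P1.sum() + smooth)   # Pre_PixelLoss (assumed form)
    rec = 1.0 - inter / (T1.sum() + smooth)   # Rec_PixelLoss (assumed form)
    return beta * pre + (1.0 - beta) * rec

def compound_loss(lesion_loss, pix_loss, alpha=0.5, k=1.0, gamma=2.0):
    # CompoundLoss = F_focal(alpha * LesionLoss + (1 - alpha) * PixelLoss)
    return focal(alpha * lesion_loss + (1.0 - alpha) * pix_loss, k, gamma)
```

LesionLoss would be built the same way from the connected-domain sets T2 and P2, counting accurately predicted lesions instead of intersecting pixels.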
The image processing system also comprises an edge expansion module, which acquires the current historical field spliced image from the image splicing module, performs image filling on the expansion area of the current slice field image within the current historical field spliced image, and sends the edge-expanded current slice field image to the focus recognition module;
When image filling is carried out, if real image content exists in the current historical field spliced image within the expansion area of the current slice field image, that real content is used to fill the expansion area; if the expansion area has no real image content in the current historical field spliced image, the real image content on the side of the current slice field image adjacent to the expansion area is mirror-copied, and the resulting mirrored content is used to fill the expansion area. After filling, the edge-expanded current slice field image is obtained.
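The fill rule might be sketched as follows, treating zero pixels in the stitched canvas as "no real content" (an assumption for illustration; a real system would track coverage explicitly) and using `numpy.pad(..., mode='symmetric')` for the mirror copy. The placement coordinates and margin are hypothetical parameters, and the tile is assumed to sit at least `m` pixels inside the canvas.

```python
import numpy as np

def expand_tile(tile, canvas, y0, x0, m):
    """Fill the m-pixel expansion area around `tile` (placed at (y0, x0)
    in the stitched `canvas`): use real historical content where the canvas
    has it (nonzero, for illustration), otherwise mirror-copy the tile's
    own adjacent content."""
    h, w = tile.shape
    mirrored = np.pad(tile, m, mode='symmetric')          # mirror-copy fallback
    region = canvas[y0 - m:y0 + h + m, x0 - m:x0 + w + m] # historical view content
    out = np.where(region > 0, region, mirrored)          # prefer real content
    out[m:m + h, m:m + w] = tile                          # keep the tile itself
    return out
```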
After the image processing system performs lesion recognition using the edge-expanded current slice field image, the resulting lesion recognition image is cropped to the size of the current slice field image before edge expansion, i.e. the expansion area is deleted; the edges of the lesion connected domains in the cropped recognition image are then extracted and superimposed on the corresponding slice field image to obtain the microscope field image with the lesion label.
The image processing system sends the extracted edges of the lesion connected domains to an augmented reality module, which superimposes them directly on the corresponding lesions in the microscope eyepiece field of view through optical-path conduction.
A method of real-time processing and displaying of a slice image under a microscope using the real-time processing and displaying system according to any one of claims 1 to 8, comprising the steps of, in order:
a: the microscope system acquires a slice visual field image under a microscope;
B: the image stitching module acquires a slice visual field image acquired by the microscope system and transmits the acquired slice visual field image to the focus recognition module;
c: the focus recognition module performs focus recognition on the slice view image transmitted by the image stitching module through a focus recognition network to obtain a focus recognition image containing a focus connected domain, and then sends the obtained focus recognition image to the image superposition module;
D: the image superposition module extracts the edge of the focus connected domain in the focus identification image, superimposes the extracted edge of the focus connected domain in the focus identification image on the corresponding slice view image, obtains a microscope view image with a focus label and sends the microscope view image to the image splicing module and the auxiliary display system; the auxiliary display system displays a microscope view image with a disease focus label in real time;
E: the image splicing module judges whether a previous slice view image and/or a microscope view image with a disease focus label exist, if so, the front and back slice view images and/or the front and back microscope view images with the disease focus label are spliced, a current history view spliced image and/or a current history view focus image obtained by splicing are sent to an auxiliary display system, and then the step A is returned; if not, directly returning to the step A;
the current historical field stitched image and/or the current historical field lesion image is then displayed in real-time by the auxiliary display system.
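Steps A to E amount to a per-frame loop over incoming field images; the sketch below uses stand-in callables for the patent's modules (all names and the string-based plumbing are illustrative only).

```python
def run_pipeline(camera, recognize, overlay, stitch, display):
    """Schematic per-frame loop for steps A-E: acquire, recognize, overlay
    lesion edges, display, and splice into the historical images."""
    spliced = lesion_map = None
    for field in camera:                        # A: acquire a slice field image
        mask = recognize(field)                 # B/C: lesion recognition network
        labeled = overlay(field, mask)          # D: superimpose lesion edges
        display("field", labeled)               # real-time labeled field view
        if spliced is None:                     # first image: nothing to splice
            spliced, lesion_map = field, labeled
        else:                                   # E: splice with history
            spliced = stitch(spliced, field)
            lesion_map = stitch(lesion_map, labeled)
        display("history", spliced)             # current historical field spliced image
        display("lesions", lesion_map)          # current historical field lesion image
    return spliced, lesion_map
```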
In the step B, the image stitching module sends the slice view field image acquired by the microscope system to the edge expanding module, the edge expanding module fills the expanding area of the current slice view field image in the current historical view field stitching image, and the current slice view field image after edge expansion is sent to the focus identifying module;
In the step C, after receiving the current slice view image after edge expansion sent by the edge expansion module, the focus recognition module performs focus recognition by using a focus recognition network to obtain a focus recognition image containing a focus connected domain, then cuts the obtained focus recognition image by the size of the current slice view image before edge expansion, namely, deletes the expansion area of the current slice view image, and then sends the cut focus recognition image to the image superposition module.
The invention can effectively combine the focus recognition network model and the microscopic slice image of the actual clinical scene, provide the microscopic field image with the focus label, the current historical field spliced image and the current historical field focus image in real time after processing based on the microscopic slice field image acquired by the microscope, provide image data support for pathological diagnosis and improve diagnosis efficiency and accuracy of diagnosis results.
Drawings
FIG. 1 is a schematic block diagram of a real-time slice image processing and displaying system under a microscope according to the present invention;
FIG. 2 is a schematic diagram of a method for displaying a slice image in real time under a microscope according to the present invention;
FIG. 3 is a view image of a current slice at edge extension;
FIG. 4 is a slice view image that has been filled with real image content in a current historical view stitching image;
FIG. 5 is a view of a slice after filling the portion of FIG. 4 without real image content with mirrored image content;
fig. 6 is a current historical field of view lesion image.
Detailed Description
The invention is described in detail below with reference to the attached drawings and examples:
as shown in fig. 1 to 6, the real-time slice image processing and displaying system under the microscope of the present invention comprises a microscope system, an image processing system and an auxiliary display system;
The microscope system is used for collecting a slice visual field image under a microscope;
The image processing system is used for sequentially splicing the plurality of slice visual field images according to the time sequence to obtain a current historical visual field spliced image and sending the current historical visual field spliced image to the auxiliary display system; the method is also used for identifying focus of each slice view image, extracting the edge of a focus connected domain in the identified focus identification image and superposing the extracted edge on the corresponding slice view image to obtain a microscope view image with a focus label and sending the microscope view image to an auxiliary display system; the system is also used for sequentially splicing a plurality of microscope field images with the lesion labels according to the time sequence to obtain a current historical field lesion image and sending the current historical field lesion image to an auxiliary display system;
The auxiliary display system is used for displaying the microscope field image with the lesion label, the current historical field spliced image and the current historical field lesion image in real time.
In the invention, a microscope system acquires a slice visual field image under a microscope in real time and sequentially transmits the slice visual field image to an image processing system according to a time sequence;
In this embodiment, the microscope system can adopt an Olympus CX31 binocular microscope, which provides a microscopic imaging interface; a microscope camera can be attached to an eyepiece of the binocular observation tube through an adapter, realizing acquisition of slice field-of-view images under the microscope. The microscope camera can adopt a G1UD05C, which supplies the image processing system in the host computer with a 620 × 460 RGB image sequence at 2 frames per second over a USB transmission line.
In the invention, the image processing system comprises an image splicing module, a focus recognition module and an image superposition module;
The image stitching module is used for sequentially acquiring the slice visual field images acquired by the microscope system and sequentially transmitting the acquired slice visual field images to the focus recognition module; the image stitching module is also used for stitching a plurality of slice visual field images in sequence according to the time sequence, so as to obtain a current historical visual field stitching image and send the current historical visual field stitching image to the auxiliary display system; the image stitching module is also used for sequentially acquiring the microscope field images with the disease focus labels transmitted by the image superposition module, stitching the microscope field images with the disease focus labels sequentially according to time sequence, finally acquiring the current historical field focus images and transmitting the current historical field focus images to the auxiliary display system;
The focus recognition module is used for recognizing focus of each slice view image transmitted by the image stitching module through a focus recognition network to obtain focus recognition images containing focus connected domains, and then transmitting the obtained focus recognition images to the image superposition module;
The image superposition module is used for sequentially acquiring the slice view image acquired by the microscope system and the focus identification image transmitted by the focus identification module, extracting the edge of the focus connected domain in the focus identification image, superposing the edge of the focus connected domain in the extracted focus identification image on the corresponding slice view image, acquiring the microscope view image with the focus label, and transmitting the microscope view image with the focus label to the image stitching module and the auxiliary display system;
When an image stitching module in an image processing system acquires a first slice view image, the first slice view image is sent to a focus recognition module, the focus recognition module utilizes a focus recognition network to recognize focuses, then the focus recognition image containing focus connected domains is sent to an image overlaying module, the image overlaying module is used for extracting the edges of the focus connected domains in the focus recognition image, the extracted edges of the focus connected domains in the focus recognition image are overlaid on the first slice view image, and then the obtained microscope view image with focus labels is sent to the image stitching module and an auxiliary display system; the microscope field of view image with the lesion label is displayed by an auxiliary display system. The image stitching module can directly send the first slice visual field image which is not stitched to the auxiliary display system, and the auxiliary display system displays the first slice visual field image.
After an image stitching module in the image processing system acquires a second slice view image, stitching the first slice view image and the second slice view image according to a time sequence to obtain a current historical view stitching image (namely, stitching images of the first slice view image and the second slice view image) and sending the current historical view stitching image to an auxiliary display system; meanwhile, the image stitching module also sends a second slice view field image to the focus recognition module, the focus recognition module utilizes a focus recognition network to recognize focuses, the focus recognition image containing focus connected domains is sent to the image overlaying module, the image overlaying module extracts the edges of the focus connected domains in the focus recognition image, the extracted edges of the focus connected domains in the focus recognition image are overlaid on the second slice view field image, and then the obtained microscope view field image with the focus labels is sent to the image stitching module and the auxiliary display system; after the image stitching module acquires the second microscope field image with the disease focus label, stitching the first microscope field image with the disease focus label with the second microscope field image with the disease focus label according to the time sequence, finally obtaining the current historical field focus image (the stitched image of the first microscope field image with the disease focus label with the second microscope field image with the disease focus label) and sending the current historical field focus image to the auxiliary display system; the auxiliary display system displays the microscope field image with the lesion label, the current historical field spliced image and the current historical field lesion image in real time.
Similarly, as the image stitching module in the image processing system sequentially acquires subsequent slice field images, it splices the multiple slice field images in time order by the above method to obtain the current historical field spliced image and sends it to the auxiliary display system; the image superposition module likewise sequentially sends the microscope field images with the lesion label to the image stitching module and the auxiliary display system. Meanwhile, the image stitching module splices the multiple microscope field images with the lesion label in time order, finally obtaining the current historical field lesion image, which it sends to the auxiliary display system; the auxiliary display system then displays the microscope field image with the lesion label, the current historical field spliced image and the current historical field lesion image in real time. The invention can thus provide image data support for the pathological diagnosis of pathologists, showing how many lesions lie along the historical path of the microscope lens and where they are located, so as to improve diagnosis efficiency and the accuracy of diagnosis results.
In this embodiment, when the image stitching module in the image processing system stitches two images (taking slice view images as an example), it proceeds as follows:
a: setting the current slice view image obtained by an image stitching module in the image processing system as f 1 (x, y), and setting the previous slice view image as f 2 (x, y), wherein f 2(x,y)=f1 (x-dx, y-dy), namely f 1 (x, y) is obtained by f 2 (x, y) translation (dx, dy); performing Fourier transform on the current slice view image and the previous slice view image respectively to obtain a frequency domain image F 1 (u, v) and a frequency domain image F 2(u,v),F2(u,v)=F1(u,v)e-i·2π(u·dx+v·dy);
Here the translation comprises parallel movement in the horizontal and/or vertical direction; f1(x, y) is the gray value of the pixel at coordinate (x, y) of the current slice view image, where (x, y) is the pixel coordinate; f2(x, y) is the gray value of the pixel at (x, y) of the previous slice view image; F1(u, v) is the value of the frequency-domain image of the current slice view image at frequency-domain coordinate (u, v); F2(u, v) is the value of the frequency-domain image of the previous slice view image at (u, v); i is the imaginary unit; dx is the moving distance between the two slice view images along the x-axis and dy the moving distance along the y-axis.
B: conjugation is carried out on the frequency domain image F 2 to obtain a conjugated frequency domain imageAnd then the conjugated frequency domain image/>Multiplying the cross power spectrum with the frequency domain image F 1, performing normalization processing to obtain a cross power spectrum H (u, v),
C: performing Fourier inverse transformation on the cross power spectrum H (u, v) to obtain a real domain diagram F e(x,y),Fe (x, y) which is a pulse function image; the coordinates of the peak position in F e (x, y) are obtained as displacement amounts (dx, dy) of the front slice view image and the rear slice view image, then the front slice view image is respectively placed at the left upper, the left lower, the right upper and the right lower positions of the current slice view image, and the moving distance between the two slice view images in the x-axis direction is dx and the moving distance in the y-axis direction is dy when the four positions are obtained; respectively calculating absolute value average values of gray value differences of overlapping areas of front and rear slice view images at the four positions, wherein the minimum absolute value average value is the correct splicing position relation of the front and rear images, and then splicing the images according to displacement (dx, dy) and the position relation to obtain a current historical view spliced image;
d: Each current slice view image acquired by the image stitching module is stitched with the preceding image by steps a to c and merged into the stitched result; once the last slice view image has been processed, the image stitching module has stitched all slice view images and the current historical view stitched image up to the last slice view image is obtained.
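Steps a to c amount to classical phase correlation. The sketch below is a minimal NumPy illustration, not the patent's implementation; the function name is invented, and the conjugate is placed on F1 (sign conventions differ between texts) so the peak lands at the positive shift:

```python
import numpy as np

def phase_correlation_shift(img1, img2):
    """Estimate the (dy, dx) shift such that img2 is img1 translated
    by (dy, dx), via the normalized cross power spectrum (steps a-c)."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    cross = np.conj(F1) * F2                 # conjugate product of the spectra
    H = cross / (np.abs(cross) + 1e-12)      # normalize -> cross power spectrum
    corr = np.fft.ifft2(H).real              # impulse-like real-domain map Fe
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past half the image size correspond to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

For a pure circular shift the peak of the impulse map is exact; for real microscope fields the overlap region makes it approximate, which is why step c still checks the four candidate placements.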
Similarly, the image stitching module stitches the successive microscope view images with lesion labels by the same method, obtaining the current historical view lesion image up to the last microscope view image with lesion labels.
The image stitching method adopted by the invention is highly robust and effectively avoids the severe stitching failures seen in existing stitching algorithms (such as affine-transformation-based and feature-matching-based image stitching algorithms).
In the invention, the lesion recognition module may adopt any of various existing neural-network-based lesion recognition networks, such as the fully connected neural network, villus network or proliferation network disclosed in ZL202010825928.9, ZL202010828700.5, ZL202010828696.2 and ZL202010826873.3; the lesion recognition network is then trained with training data of different lesion images to improve its recognition accuracy. In this embodiment, the lesion recognition module adopts an image segmentation network based on DeepLabV3+.
DeepLab introduced the concept of atrous (dilated) convolution: a 3×3 atrous convolution acts on 9 feature points spaced rate pixels apart, enabling convolution of features at different scales. DeepLabV3+ builds on DeepLab by incorporating the structural characteristics of UNet, adding an encoder-decoder structure. The atrous convolution structure preserves the ability to enlarge the receptive field and capture multi-scale lesion information, while the encoder-decoder structure retains both shallow and deep feature-map information; this richer, multi-scale feature information underpins the excellent semantic segmentation capability of DeepLabV3+.
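The atrous sampling pattern described above can be illustrated with a small NumPy sketch (the function is illustrative only; a real DeepLabV3+ uses framework-level convolution layers):

```python
import numpy as np

def atrous_conv3x3(feat, kernel, rate=2):
    """3x3 atrous (dilated) convolution: the 9 kernel taps sample feature
    points spaced `rate` pixels apart, enlarging the receptive field to
    (2*rate + 1) x (2*rate + 1) without adding parameters. Zero padding."""
    h, w = feat.shape
    pad = rate
    padded = np.pad(feat, pad, mode="constant")
    out = np.zeros_like(feat, dtype=float)
    for ki in range(3):
        for kj in range(3):
            di, dj = (ki - 1) * rate, (kj - 1) * rate   # dilated tap offset
            out += kernel[ki, kj] * padded[pad + di : pad + di + h,
                                           pad + dj : pad + dj + w]
    return out
```

With rate=1 this reduces to an ordinary 3×3 convolution; larger rates widen the receptive field, which is what lets the network capture multi-scale lesion context.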
During network training, the invention provides, for the specific scenario of medical-image lesion recognition, a new composite loss function targeting multiple evaluation indices. Its formula is as follows:
CompoundLoss = F_focal(α × LesionLoss + (1 − α) × PixelLoss);

F_focal(x) = −k × (1 − x)^γ × log(x);

Here CompoundLoss is the loss function, LesionLoss is the lesion-level loss, PixelLoss is the pixel-level loss, α is a weighting coefficient, F_focal(x) denotes the focal function, k and γ are fixed parameters, and x is the focal-function variable; β is a weighting coefficient, Pre_PixelLoss denotes the pixel-level precision loss, Rec_PixelLoss the pixel-level recall loss, T1 the lesion area of the true label map, P1 the lesion area of the predicted label map, |T1 ∩ P1| the number of pixels in the intersection of T1 and P1, and smooth is a small quantity that prevents the denominator from being 0; Pre_LesionLoss denotes the lesion-level precision loss, Rec_LesionLoss the lesion-level recall loss, T2 the set of lesion connected domains of the true label map, P2 the set of lesion connected domains of the predicted label map, and |∩(P2, T2)| the number of accurately predicted lesions.
The composite loss function adopted in the invention combines pixel-level loss and lesion-level loss. The pixel-level loss drives improvement of the pixel-level evaluation indices, while the lesion-level loss, on the one hand, secures the lesion-level evaluation indices; on the other hand, because the lesion-level loss is computed per lesion connected domain, a small lesion carries the same weight as a large one, which effectively strengthens the recognition of small lesions. By introducing the two weighting coefficients α and β, the composite loss function makes it possible to obtain network models for different requirements: a model trained with α < 0.5 achieves a higher recall rate and can offer the pathologist a richer set of potential lesion areas, while a model trained with α = 0.5 balances recall and precision for more well-rounded performance.
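The PixelLoss and LesionLoss sub-formulas are referenced above only through their symbol descriptions, so the sketch below assumes a standard β-weighted precision/recall form for the pixel-level term; only the focal wrapper and the outer composition follow the formulas as given. All function names are illustrative:

```python
import numpy as np

def focal(x, k=1.0, gamma=2.0):
    # F_focal(x) = -k * (1 - x)^gamma * log(x), as stated in the text
    return -k * (1.0 - x) ** gamma * np.log(x)

def pixel_loss(pred, true, beta=0.5, smooth=1e-6):
    """Assumed form: beta-weighted pixel-level precision and recall
    losses over binary lesion masks (T1, P1 in the text)."""
    inter = np.logical_and(pred, true).sum()
    pre = 1.0 - (inter + smooth) / (pred.sum() + smooth)   # precision loss
    rec = 1.0 - (inter + smooth) / (true.sum() + smooth)   # recall loss
    return beta * pre + (1.0 - beta) * rec

def compound_loss(pixel, lesion, alpha=0.5, k=1.0, gamma=2.0):
    # CompoundLoss = F_focal(alpha * LesionLoss + (1 - alpha) * PixelLoss)
    return focal(alpha * lesion + (1.0 - alpha) * pixel, k, gamma)
```

A lesion-level analogue would compute the same precision/recall pair over the sets of connected domains T2 and P2, counting a prediction as correct per lesion rather than per pixel.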
For network model training, a stepwise network training method based on multiple loss functions is designed to accompany the composite loss function provided by the invention:
First, the network model is trained with a first loss function, which may be an IoULoss or BCEWithLogitsLoss loss function. During this stage, only the network model with the best pixel-level IoU index on the validation set Valid is saved, as the intermediate network model;
then, starting from the intermediate network model, it is trained with a second loss function, which may be the composite loss function; again only the network model with the best pixel-level IoU index on the validation set Valid is saved, as the final network model. The lesion-level evaluation indices are thus improved while the pixel-level indices are maintained.
This stepwise training method based on multiple loss functions combines the strengths of the individual loss functions on different evaluation indices, so a better network model can be obtained by training.
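The keep-the-best-validation-IoU rule used in both stages can be sketched independently of any deep-learning framework; the callable-based structure and function names below are assumptions, not the patent's code:

```python
import numpy as np

def pixel_iou(pred, true, smooth=1e-6):
    """Pixel-level IoU between two binary masks."""
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return (inter + smooth) / (union + smooth)

def train_stage(train_epoch, validate, epochs):
    """Run one training stage; return only the snapshot with the best
    validation pixel-level IoU, as the checkpoint rule above requires."""
    best_iou, best_model = -1.0, None
    for _ in range(epochs):
        model = train_epoch()      # one epoch with this stage's loss function
        iou = validate(model)      # pixel-level IoU on validation set Valid
        if iou > best_iou:
            best_iou, best_model = iou, model
    return best_model, best_iou
```

Stage one would pass a `train_epoch` closure that optimizes the first loss; stage two resumes from the returned intermediate model with the composite loss, using the same selection rule.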
When the lesion recognition module recognizes lesions, recognition quality is poor at the edges of the slice view image, mainly because edge lesions are only partially visible there. For this reason, the invention also provides a dedicated edge expansion module in the image processing system.
The edge expansion module acquires the current historical view stitched image from the image stitching module, fills the expansion area of the current slice view image using the current historical view stitched image, and sends the edge-expanded current slice view image to the lesion recognition module;
During image filling, if real image content exists in the current historical view stitched image for the expansion area of the current slice view image, that real image content is used to fill the expansion area; if no real image content exists there, the real image content on the side of the current slice view image adjacent to the expansion area is mirror-copied to obtain mirror content, which is used to fill the expansion area; after filling, the edge-expanded current slice view image is obtained;
as shown in figs. 3 to 5: fig. 3 is the current slice view image; fig. 4 is the slice view image after filling with real image content from the current historical view stitched image, where everything in fig. 4 beyond the image already present in fig. 3 is real content from the stitched image, and the black portion is the part of the expansion area for which no real image content exists in the stitched image; fig. 5 is the slice view image after the no-real-content portion of fig. 4 has been filled with mirror content, i.e. the final edge-expanded current slice view image. Fig. 6 is an example of a current historical view lesion image, in which the regions enclosed by black lines are the recognized lesions.
The edge expansion module effectively enriches the lesion information at the edge of the current slice view image, so that the lesion recognition module can recognize edge lesions more accurately, improving its recognition accuracy. During edge expansion, the filled real image content directly improves recognition accuracy; a black fill, by contrast, would markedly reduce it. The mirror content used in the invention replaces the black fill in the portions of the expansion area without real image content, avoiding the negative effect a black image would have on the recognition accuracy of the lesion recognition module.
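The fill rule can be sketched as follows, assuming the mosaic is a gray image in which a zero pixel marks "no real content" and (y0, x0) is the current image's position in mosaic coordinates; the function name, arguments and the zero-as-empty convention are all assumptions for illustration:

```python
import numpy as np

def expand_edges(current, mosaic, y0, x0, m):
    """Fill an m-pixel expansion area around `current`: use real content
    from the stitched history image where it exists, otherwise keep a
    mirror copy of the adjacent current-image content."""
    h, w = current.shape
    # Mirror fill as the fallback (reflect padding needs m < each dimension).
    out = np.pad(current, m, mode="reflect")
    ys, xs = y0 - m, x0 - m            # expanded window in mosaic coordinates
    for i in range(h + 2 * m):
        for j in range(w + 2 * m):
            yi, xj = ys + i, xs + j
            inside = 0 <= yi < mosaic.shape[0] and 0 <= xj < mosaic.shape[1]
            if inside and mosaic[yi, xj] != 0:   # real content available
                out[i, j] = mosaic[yi, xj]
    return out
```

The pixel loops keep the rule explicit; a production version would vectorize the overlap copy with array slicing.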
After receiving the edge-expanded current slice view image from the edge expansion module, the lesion recognition module performs lesion recognition with the lesion recognition network to obtain a lesion recognition image containing lesion connected domains, crops the lesion recognition image to the size of the current slice view image before edge expansion (i.e. deletes the expansion area), and sends the cropped lesion recognition image to the image superposition module;
In the invention, after receiving the lesion recognition image from the lesion recognition module, the image superposition module extracts the edges of the lesion connected domains in the lesion recognition image and superimposes the extracted edges on the corresponding slice view image. Extracting the edge of a lesion connected domain and superimposing it on the corresponding slice view image are conventional image processing techniques and are not described here.
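One conventional way to perform both operations, sketched with NumPy (the names are illustrative; any standard contour routine, e.g. from OpenCV, would serve equally well):

```python
import numpy as np

def mask_edges(mask):
    """Edge of lesion connected domains: mask pixels that have at least
    one 4-neighbour outside the mask (a common boundary definition)."""
    p = np.pad(mask, 1, mode="constant")
    interior = (p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:])
    return mask & ~interior

def overlay_edges(image, mask, value=255):
    """Superimpose the extracted edges onto the slice view image."""
    out = image.copy()
    out[mask_edges(mask)] = value
    return out
```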
The invention may additionally be provided with an augmented reality module; the augmented reality module and augmented reality technology are mature prior art. The image superposition module transmits the extracted lesion connected domain edges to the augmented reality module, which superimposes them directly on the corresponding lesions in the field of view of the microscope eyepiece via optical path conduction, so that the pathologist can observe the microscope field of view with lesion labels (i.e. with the lesion connected domain edges displayed) in real time directly through the eyepiece.
As shown in fig. 2, the real-time processing and display method for slice images under a microscope, implemented with the above system, comprises the following steps in order:
a: acquiring a slice view image under a microscope by a microscope system;
B: the image stitching module acquires the slice view image captured by the microscope system and transmits it to the lesion recognition module;
C: the lesion recognition module performs lesion recognition on the slice view image transmitted by the image stitching module through the lesion recognition network to obtain a lesion recognition image containing lesion connected domains, then sends the lesion recognition image to the image superposition module;
D: the image superposition module extracts the edges of the lesion connected domains in the lesion recognition image, superimposes the extracted edges on the corresponding slice view image, obtains the microscope view image with lesion labels, and sends it to the image stitching module and the auxiliary display system; the auxiliary display system displays the microscope view image with lesion labels in real time;
E: the image stitching module judges whether a previous slice view image and/or a previous microscope view image with lesion labels exists. If so, it stitches the successive slice view images into the current historical view stitched image and/or the successive microscope view images with lesion labels into the current historical view lesion image, sends the stitched images to the auxiliary display system, and returns to step A to acquire the next slice view image until all slice view images have been acquired; if not, it returns directly to step A and continues acquiring the next slice view image until all slice view images have been acquired;
the current historical view stitched image and/or the current historical view lesion image is then displayed in real time by the auxiliary display system.
To recognize edge lesions more accurately and improve the recognition accuracy of the lesion recognition module, the edge expansion module is also used for edge expansion.
In step B, the image stitching module sends the slice view image acquired by the microscope system to the edge expansion module; the edge expansion module fills the expansion area of the current slice view image using the current historical view stitched image and sends the edge-expanded current slice view image to the lesion recognition module;
During image filling, if real image content exists in the current historical view stitched image for the expansion area of the current slice view image, that real image content is used to fill the expansion area; if no real image content exists there, the real image content on the side of the current slice view image adjacent to the expansion area is mirror-copied to obtain mirror content, which is used to fill the expansion area; after filling, the edge-expanded current slice view image is obtained;
In step C, after receiving the edge-expanded current slice view image from the edge expansion module, the lesion recognition module performs lesion recognition with the lesion recognition network to obtain a lesion recognition image containing lesion connected domains, crops the lesion recognition image to the size of the current slice view image before edge expansion (i.e. deletes the expansion area), and sends the cropped lesion recognition image to the image superposition module.
Claims (7)
1. A real-time slice image processing and display system under a microscope, characterized in that it comprises a microscope system, an image processing system and an auxiliary display system;
The microscope system is used for collecting a slice visual field image under a microscope;
The image processing system is used for stitching the slice view images in time order to obtain the current historical view stitched image and sending it to the auxiliary display system; it is also used for recognizing lesions in each slice view image, extracting the edges of the lesion connected domains in the resulting lesion recognition image and superimposing them on the corresponding slice view image to obtain the microscope view image with lesion labels, which is sent to the auxiliary display system; it is further used for stitching the microscope view images with lesion labels in time order to obtain the current historical view lesion image and sending it to the auxiliary display system;
The auxiliary display system is used for displaying the microscope view image with lesion labels, the current historical view stitched image and the current historical view lesion image in real time;
the image processing system comprises an image stitching module, a lesion recognition module and an image superposition module;
The image stitching module is used for sequentially acquiring the slice view images captured by the microscope system and transmitting them in turn to the lesion recognition module; it is also used for stitching the slice view images in time order to obtain the current historical view stitched image and sending it to the auxiliary display system; it is further used for sequentially acquiring the microscope view images with lesion labels transmitted by the image superposition module, stitching them in time order, obtaining the current historical view lesion image and sending it to the auxiliary display system;
The lesion recognition module is used for performing lesion recognition on each slice view image transmitted by the image stitching module through the lesion recognition network to obtain lesion recognition images containing lesion connected domains, and then transmitting the obtained lesion recognition images to the image superposition module;
The image superposition module is used for sequentially acquiring the slice view image captured by the microscope system and the lesion recognition image transmitted by the lesion recognition module, extracting the edges of the lesion connected domains in the lesion recognition image, superimposing the extracted edges on the corresponding slice view image to obtain the microscope view image with lesion labels, and transmitting the microscope view image with lesion labels to the image stitching module and the auxiliary display system;
in the process of training the lesion recognition module, a stepwise network training method is adopted;
first, the network model is trained with a first loss function; during training only the network model with the best pixel-level IoU index on the validation set Valid is saved, as the intermediate network model;
then, based on the intermediate network model, it is trained with a second loss function; during training only the network model with the best pixel-level IoU index on the validation set Valid is saved, as the final network model; the first loss function and the second loss function are different loss functions;
in the process of training the lesion recognition module, the second loss function adopts a composite loss function targeting multiple evaluation indices, whose formula is as follows:
CompoundLoss = F_focal(α × LesionLoss + (1 − α) × PixelLoss);

F_focal(x) = −k × (1 − x)^γ × log(x);

wherein CompoundLoss is the loss function, LesionLoss is the lesion-level loss, PixelLoss is the pixel-level loss, α is a weighting coefficient, F_focal(x) denotes the focal function, k and γ are fixed parameters, and x is the focal-function variable; β is a weighting coefficient, Pre_PixelLoss denotes the pixel-level precision loss, Rec_PixelLoss the pixel-level recall loss, T1 the lesion area of the true label map, P1 the lesion area of the predicted label map, |T1 ∩ P1| the number of pixels in the intersection of T1 and P1, and smooth is a small quantity that prevents the denominator from being 0; Pre_LesionLoss denotes the lesion-level precision loss, Rec_LesionLoss the lesion-level recall loss, T2 the set of lesion connected domains of the true label map, P2 the set of lesion connected domains of the predicted label map, and |∩(P2, T2)| the number of accurately predicted lesions.
2. The real-time slice image processing and display system under a microscope according to claim 1, wherein the image processing system performs the following steps when stitching two images:
a: setting the current image acquired by the image processing system as f 1 (x, y), wherein the previous image is f 2 (x, y), and f 2(x,y)=f1 (x-dx, y-dy), namely f 1 (x, y) is obtained by f 2 (x, y) translation (dx, dy); performing Fourier transform on the current image and the previous image respectively to obtain a frequency domain image F 1 (u, v) and a frequency domain image F 2(u,v),F2(u,v)=F1(u,v)e-i·2π(u·dx+v·dy);
wherein the translation comprises parallel movement in the horizontal and/or vertical direction; f1(x, y) is the gray value of the pixel at coordinate (x, y) of the current image, where (x, y) is the pixel coordinate; f2(x, y) is the gray value of the pixel at (x, y) of the previous image; F1(u, v) is the value of the frequency-domain image of the current image at frequency-domain coordinate (u, v); F2(u, v) is the value of the frequency-domain image of the previous image at (u, v); i is the imaginary unit; dx is the moving distance between the two images along the x-axis and dy the moving distance along the y-axis;
b: conjugation is carried out on the frequency domain image F 2 to obtain a conjugated frequency domain image And then the conjugated frequency domain image/>Multiplying the cross power spectrum with the frequency domain image F 1, performing normalization processing to obtain a cross power spectrum H (u, v),
C: performing Fourier inverse transformation on the cross power spectrum H (u, v) to obtain a real domain diagram F e(x,y),Fe (x, y) which is a pulse function image; the coordinates of the peak position in F e (x, y) are obtained as displacement amounts (dx, dy) of the front image and the rear image, then the front image is respectively placed at four positions of the upper left, the lower left, the upper right and the lower right of the current image, the moving distance between the two images in the x-axis direction is dx when the four positions are located, and the moving distance in the y-axis direction is dy; respectively calculating absolute value average values of gray value differences of overlapping areas of the front image and the rear image at the four positions, wherein the absolute value average value is the right splicing position relation of the front image and the rear image, and then splicing the images according to displacement (dx, dy) and the position relation to obtain a current historical field spliced image or a current historical field focus image;
d: each current image acquired by the image processing system is stitched with the preceding image by steps a to c and merged into the stitched result; once the last image has been processed, the image processing system has stitched all images and the current historical view stitched image or current historical view lesion image up to the last image is obtained.
3. The real-time slice image processing and display system under a microscope according to claim 1, wherein: the image processing system further comprises an edge expansion module, which is used for acquiring the current historical view stitched image from the image stitching module, performing image filling of the expansion area of the current slice view image using the current historical view stitched image, and sending the edge-expanded current slice view image to the lesion recognition module;
during image filling, if real image content exists in the current historical view stitched image for the expansion area of the current slice view image, that real image content is used to fill the expansion area; if no real image content exists there, the real image content on the side of the current slice view image adjacent to the expansion area is mirror-copied to obtain mirror content, which is used to fill the expansion area; after filling, the edge-expanded current slice view image is finally obtained.
4. The real-time slice image processing and display system under a microscope according to claim 3, wherein: after recognizing lesions using the edge-expanded current slice view image, the image processing system crops the resulting lesion recognition image to the size of the current slice view image before edge expansion, i.e. deletes the expansion area, then extracts the edges of the lesion connected domains in the cropped lesion recognition image and superimposes them on the corresponding slice view image to obtain the microscope view image with lesion labels.
5. The real-time slice image processing and display system under a microscope according to claim 1, wherein: the image processing system sends the extracted lesion connected domain edges to an augmented reality module, and the augmented reality module superimposes the lesion connected domain edges directly on the corresponding lesions in the field of view of the microscope eyepiece via optical path conduction.
6. A method for real-time processing and displaying of a slice image under a microscope using the real-time processing and displaying system according to any one of claims 1 to 5, comprising the steps of, in order:
A: the microscope system acquires a slice view image under the microscope;
B: the image stitching module acquires the slice view image captured by the microscope system and transmits it to the lesion recognition module;
C: the lesion recognition module performs lesion recognition on the slice view image transmitted by the image stitching module through the lesion recognition network to obtain a lesion recognition image containing lesion connected domains, then sends the lesion recognition image to the image superposition module;
D: the image superposition module extracts the edges of the lesion connected domains in the lesion recognition image, superimposes the extracted edges on the corresponding slice view image, obtains the microscope view image with lesion labels, and sends it to the image stitching module and the auxiliary display system; the auxiliary display system displays the microscope view image with lesion labels in real time;
E: the image splicing module judges whether a previous slice view image and/or a microscope view image with a disease focus label exist, if so, the front and back slice view images and/or the front and back microscope view images with the disease focus label are spliced, a current history view spliced image and/or a current history view focus image obtained by splicing are sent to an auxiliary display system, and then the step A is returned; if not, directly returning to the step A;
the current historical view stitched image and/or the current historical view lesion image is then displayed in real time by the auxiliary display system.
7. The method for real-time processing and displaying of slice images under a microscope according to claim 6, wherein:
in step B, the image stitching module sends the slice view image acquired by the microscope system to the edge expansion module; the edge expansion module fills the expansion area of the current slice view image using the current historical view stitched image and sends the edge-expanded current slice view image to the lesion recognition module;
in step C, after receiving the edge-expanded current slice view image from the edge expansion module, the lesion recognition module performs lesion recognition with the lesion recognition network to obtain a lesion recognition image containing lesion connected domains, crops the lesion recognition image to the size of the current slice view image before edge expansion (i.e. deletes the expansion area), and then sends the cropped lesion recognition image to the image superposition module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111096321.2A CN114004854B (en) | 2021-09-16 | 2021-09-16 | Real-time processing display system and method for slice image under microscope |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114004854A CN114004854A (en) | 2022-02-01 |
CN114004854B true CN114004854B (en) | 2024-06-07 |
Family
ID=79921803
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114764796A (en) * | 2022-04-25 | 2022-07-19 | 杭州迪英加科技有限公司 | Method for displaying film viewing track of microscope |
CN115620852B (en) * | 2022-12-06 | 2023-03-31 | 深圳市宝安区石岩人民医院 | Tumor section template information intelligent management system based on big data |
CN118311016B (en) * | 2024-06-07 | 2024-09-10 | 浙江大学 | Method and system for detecting position and morphology of dendritic spines of high-resolution complete neurons |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108510497A (en) * | 2018-04-10 | 2018-09-07 | 四川和生视界医药技术开发有限公司 | The display methods and display device of retinal images lesion information |
CN109785300A (en) * | 2018-12-27 | 2019-05-21 | 华南理工大学 | A kind of cancer medical image processing method, system, device and storage medium |
WO2019127451A1 (en) * | 2017-12-29 | 2019-07-04 | 深圳前海达闼云端智能科技有限公司 | Image recognition method and cloud system |
CN110458249A (en) * | 2019-10-10 | 2019-11-15 | 点内(上海)生物科技有限公司 | A kind of lesion categorizing system based on deep learning Yu probability image group |
CN110619318A (en) * | 2019-09-27 | 2019-12-27 | 腾讯科技(深圳)有限公司 | Image processing method, microscope, system and medium based on artificial intelligence |
CN111784711A (en) * | 2020-07-08 | 2020-10-16 | 麦克奥迪(厦门)医疗诊断系统有限公司 | Lung pathology image classification and segmentation method based on deep learning |
WO2021093109A1 (en) * | 2019-11-14 | 2021-05-20 | 武汉兰丁智能医学股份有限公司 | Mobile phone-based miniature microscopic image acquisition device, image splicing method, and image recognition method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109784424B (en) * | 2019-03-26 | 2021-02-09 | 腾讯科技(深圳)有限公司 | Image classification model training method, image processing method and device |
Non-Patent Citations (1)
Title |
---|
Review of medical image recognition technology based on deep learning; Zhang Qi, Zhang Rongmei, Chen Bin; Journal of Hebei Academy of Sciences; 2020-09-15 (Issue 03); full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114004854B (en) | Real-time processing display system and method for slice image under microscope | |
WO2021036616A1 (en) | Medical image processing method, medical image recognition method and device | |
CN111227864B (en) | Device for detecting focus by using ultrasonic image and computer vision | |
US9672620B2 (en) | Reconstruction with object detection for images captured from a capsule camera | |
CN111214255B (en) | Medical ultrasonic image computer-aided method | |
CN107527069A (en) | Image processing method, device, electronic equipment and computer-readable medium | |
US20210366121A1 (en) | Image matching method and device, and storage medium | |
CN108806776A (en) | A method of the Multimodal medical image based on deep learning | |
CN112164043A (en) | Method and system for splicing multiple fundus images | |
CN116993699A (en) | Medical image segmentation method and system under eye movement auxiliary training | |
CN108470585A (en) | A kind of long-range mask method of interactive virtual sliced sheet and system | |
CN116703837B (en) | MRI image-based rotator cuff injury intelligent identification method and device | |
CN115830064A (en) | Weak and small target tracking method and device based on infrared pulse signals | |
CN115424319A (en) | Strabismus recognition system based on deep learning | |
CN114332858A (en) | Focus detection method and device and focus detection model acquisition method | |
CN113676721A (en) | Image acquisition method and system of AR glasses | |
CN117726822B (en) | Three-dimensional medical image classification segmentation system and method based on double-branch feature fusion | |
CN117528131B (en) | AI integrated display system and method for medical image | |
CN115994887B (en) | Medical image dense target analysis method based on dynamic anchor points | |
CN116807361B (en) | CT image display method, electronic equipment and device | |
CN118780980A (en) | Magnifying endoscope image processing method, device and storage medium | |
Liu et al. | A semantic segmentation algorithm supported by image processing and neural network | |
CN118037669A (en) | Spine detection and segment positioning method, system, terminal and storage medium | |
Zhu et al. | A real-time computer-aided diagnosis method for hydatidiform mole recognition using deep neural network | |
CN110415239B (en) | Image processing method, image processing apparatus, medical electronic device, and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |