
CN116829958A - Methods, apparatus and computer program products for slide processing


Info

Publication number
CN116829958A
Authority
CN
China
Prior art keywords
image
sub
information
tray
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180092893.5A
Other languages
Chinese (zh)
Inventor
龙畅
陶晓君
邢伟彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Roche Diagnostic Products Shanghai Co ltd
Original Assignee
Roche Diagnostic Products Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Roche Diagnostic Products Shanghai Co ltd filed Critical Roche Diagnostic Products Shanghai Co ltd
Publication of CN116829958A
Legal status: Pending

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365Control or image processing arrangements for digital or video microscopes
    • G02B21/367Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/34Microscope slides, e.g. mounting specimens on microscope slides
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/1444Selective acquisition, locating or processing of specific regions, e.g. highlighted text, fiducial marks or predetermined fields
    • G06V30/1448Selective acquisition, locating or processing of specific regions, e.g. highlighted text, fiducial marks or predetermined fields based on markings or identifiers characterising the document or the area
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N35/00Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
    • G01N35/00584Control arrangements for automatic analysers
    • G01N35/00722Communications; Identification
    • G01N35/00732Identification of carriers, materials or components in automatic analysers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30024Cell structures in vitro; Tissue sections in vitro
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30072Microarray; Biochip, DNA array; Well plate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Optics & Photonics (AREA)
  • Analytical Chemistry (AREA)
  • Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

Embodiments of the present disclosure relate to batch scanning solutions for pathological slides. Some embodiments of the present disclosure provide a method for slide processing. The method includes obtaining a first image of a tray including a plurality of slots. Each slot is capable of receiving a slide that includes a label and a sample for pathological analysis. The method also includes determining a plurality of sub-images from the first image based on the structure of the tray, wherein each of the plurality of sub-images corresponds to one of the plurality of slots. The method further includes extracting information from the label of the slide based on at least one sub-image. According to the invention, multiple slides can be processed automatically at once, improving the efficiency of slide management.

Description

Methods, apparatus and computer program products for slide processing
Technical Field
Embodiments of the present disclosure relate generally to the field of digital pathology and, more particularly, to methods, apparatuses, and computer program products for slide processing.
Background
In clinical medicine, a patient's sample can greatly aid pathological analysis. For example, a pathologist may take a sample from a patient and then observe the sample, held on a slide, under a microscope. In hospitals and other pathology analysis institutions, a large number of slides are handled every day. At the different stages of slide processing, such as sampling, embedding, sectioning, staining, reading, and archiving, it is necessary to track a patient's slides to prevent them from being lost or erroneously associated with a different patient. In view of the number of slides to be processed, it is desirable to further increase the efficiency of tracking.
EP2966493 A1 describes a sample viewing apparatus with which pathological samples can be observed easily, without spending too much time, and suspicious pathological samples can be examined in detail. The apparatus comprises: an image capturing unit that acquires a partial image representing at least one of a plurality of pathology samples mounted on a housing portion, as well as an overall image of the plurality of pathology samples mounted on the housing portion; an input unit for inputting identification information of the housing portion; a display unit that displays an enlarged version of the partial image acquired by the image capturing unit; an image specification unit for specifying a partial image displayed on the display unit; and a storage unit that stores the identification information input via the input unit and the position, relative to the whole image, of the partial image specified via the image specification unit, such that the position and the identification information are associated with the whole image.
Disclosure of Invention
The present disclosure provides a solution for slide processing in which multiple slides can be automatically processed at once, thereby improving the efficiency of slide management.
In a first aspect, a method for slide processing is provided. The method includes obtaining a first image of a tray including a plurality of slots. Each slot is capable of receiving a slide, and the slide includes a label and a sample for pathological analysis. The method further includes determining a plurality of sub-images from the first image based on the structure of the tray, each of the plurality of sub-images corresponding to one of the plurality of slots. The method further includes extracting information from the label of the slide based on at least one sub-image.
The first image of the tray may be obtained by an image capturing device. The image capture device may include a camera, a light source, and a housing for containing the camera and the light source. The tray may be placed in the housing for image capture.
Determining the plurality of sub-images may include obtaining a second image by enhancing the first image; and determining the plurality of sub-images from the second image based on the structure of the tray. Typically, some pixels relevant to the sub-image analysis are not easily detected in the raw image. Since the first image is enhanced before the sub-images are determined, the efficiency of sub-image segmentation for further object detection, in particular slide detection, may be improved.
Determining the plurality of sub-images from the second image may include determining a template corresponding to a structure of the tray, the template indicating a layout of the plurality of slots; and dividing the second image into a plurality of sub-images according to the determined template. The template may be configured to fit the size of a sub-image, which may specifically include or depict the shape of the slot and/or slide.
Determining the plurality of sub-images from the second image may include detecting edges of the plurality of slots in the second image; and determining the plurality of sub-images from the second image based on the edges. Since the sub-images are determined based on the edges of the slots, the edges of the slides can be accurately identified, which further helps to identify the information provided by the label, in particular barcode information. In particular, the barcode region may be located for detection.
Extracting information from the label of the slide may include dividing the sub-image into a plurality of regions, at least one of the plurality of regions having a size corresponding to the label; and extracting information from at least one of the plurality of regions. The label may include a two-dimensional code, and extracting the information may include, for each of the plurality of regions: extracting the information encoded by the two-dimensional code from the label region; in response to a failure of the information extraction, obtaining data of a luminance channel corresponding to the region; generating a binary image based on the data of the luminance channel; filtering out connected regions in the binary image based on their shape or size; and extracting the information from the filtered binary image.
The method may further include storing the information in association with the first image. The method may further include determining an index corresponding to the sub-image containing the label, the index indicating the location of the slot on the tray that holds the slide; storing the information may then include storing the information in association with the index. Storing the information may also include storing the information in association with the sub-image corresponding to the slide.
The information may include at least one of: data encoded by a graphical code on the label, at least one character on the label, or at least one symbol on the label.
The method may further comprise: presenting the first image and a visual element indicating a state of one of the plurality of sub-images, the state being selected from the group consisting of: a first state indicating that a label is included in the corresponding sub-image and the information indicated by the label is successfully extracted; a second state indicating that no label is contained in the corresponding sub-image; and a third state indicating that a label is contained in the corresponding sub-image but the information indicated by the label cannot be extracted.
The information may indicate an identity of the slide, and the method may further include: receiving an input indicating a target identity of a slide; obtaining a target image of the tray based on the target identity; and presenting the target image in response to the input.
In a second aspect, an electronic device is provided that includes one or more processors and one or more memories coupled to the one or more processors and having computer-executable instructions stored thereon. The computer-executable instructions, when executed by the one or more processors, cause the device to perform the acts of: obtaining a first image of a tray comprising a plurality of slots capable of holding slides, each slide comprising a label and a sample for pathological analysis; determining a plurality of sub-images from the first image based on the structure of the tray, each of the plurality of sub-images corresponding to one of the plurality of slots; and extracting information from the label of the slide based on at least one of the sub-images.
The electronic device may be configured to perform a method for slide processing as described above or as will be described in more detail below.
The first image of the tray may be obtained by an image capturing device comprising a camera, a light source and a housing for housing the camera and the light source. The tray may be placed in the housing for image capture.
Determining the plurality of sub-images may include: obtaining a second image by enhancing the first image; and determining a plurality of sub-images from the second image based on the structure of the tray. Determining the plurality of sub-images from the second image may include: determining a template corresponding to the structure of the tray, the template indicating a layout of the plurality of slots; and dividing the second image into a plurality of sub-images according to the determined template.
Extracting information from the label of the slide may include: detecting a region associated with the tag in at least one of the sub-images; and extracting information based on the region associated with the tag.
Extracting information from the label of the slide may include: dividing the sub-image into a plurality of regions, at least one of the plurality of regions having a size corresponding to the label; and extracting information from at least one of the plurality of regions. The label may include a two-dimensional code, and extracting the information may include, for each of the plurality of regions: extracting the information encoded by the two-dimensional code from the label region; in response to a failure of the information extraction, obtaining data of a luminance channel corresponding to the region; generating a binary image based on the data of the luminance channel; filtering out connected regions in the binary image based on their shape or size; and extracting the information from the filtered binary image.
The actions may also include storing the information in association with the first image. The actions may also include determining an index corresponding to the sub-image containing the label, the index indicating the location of the slot on the tray that holds the slide. Further, storing the information may include storing the information in association with the index. Storing the information may also include storing the information in association with the sub-image corresponding to the slide.
The information may include at least one of: data encoded by a graphical code on the label, at least one character on the label, or at least one symbol on the label.
The actions may also include: presenting the first image and a visual element indicating a state of one of the plurality of sub-images, the state being selected from the group consisting of: a first state indicating that a label is included in the corresponding sub-image and the information indicated by the label is successfully extracted; a second state indicating that no label is contained in the corresponding sub-image; and a third state indicating that a label is contained in the corresponding sub-image but the information indicated by the label cannot be extracted.
The information may indicate an identity of the slide, and the actions may further include: receiving an input indicating a target identity of a slide; obtaining a target image of the tray based on the target identity; and presenting the target image in response to the input.
In a third aspect, there is provided a computer readable storage medium having stored thereon computer executable instructions which, when executed by a processor of an apparatus, cause the apparatus to perform the steps of the method of the first aspect described above.
In a fourth aspect, there is provided a computer program product comprising computer executable instructions which, when executed by a processor of an apparatus, cause the apparatus to perform the steps of the method of the first aspect described above.
In a fifth aspect, a scanning device is provided. The scanning device includes a housing having an opening for loading a tray having a plurality of slots. The tray is capable of holding at least one slide including a label and a sample for pathological analysis. The scanning device further comprises a camera. The camera may be configured to capture an image of the tray so that information can be extracted from the label. Furthermore, the scanning device may comprise a light source, in particular a light source with configurable illumination parameters. The scanning device also includes a processor configured to perform actions comprising: obtaining an image of the tray in response to the tray being loaded into the scanning device, in particular by using the camera and the light source; determining a plurality of sub-images from the image of the tray based on the structure of the tray, each of the plurality of sub-images corresponding to one of the plurality of slots; and extracting information from the label of the slide based on at least one of the sub-images. Further, the actions may include storing the information in association with the image of the tray. The processor may be configured to perform the method for slide processing as described above or as will be described in more detail below.
It should be understood that the summary is not intended to identify key or essential features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the description that follows.
The term "pathology" as used herein is a broad term and should be given its ordinary and customary meaning to those of ordinary skill in the art and is not limited to a special or custom meaning. The term may particularly refer to, but is not limited to, processes and/or tests for examining the etiology, pathogenesis, course of disease and effects, including corresponding processes in the human or animal body. In particular, pathological analysis may include macroscopic and microscopic based tissue assessment.
The term "sample" as used herein is a broad term and should be given its ordinary and customary meaning to those skilled in the art without being limited to a special or custom meaning. The term may particularly refer to, but is not limited to, the removal or collection of any element from the human or animal body for pathological analysis. In particular, a sample may refer to a piece of tissue or an entire organ. However, a sample may also refer to a bodily fluid, such as blood or urine.
The term "slide" as used herein is a broad term and should be given its ordinary and customary meaning to those of ordinary skill in the art and is not limited to a special or custom meaning. The term may particularly refer to, but is not limited to, an elongated element capable of accommodating at least one sample. In particular, the slide may have at least one support surface configured to receive the at least one sample. The support surface may in particular be or may comprise at least one flat surface. The slide may be made of at least one optically transparent material, such as glass, in particular. However, other materials may also be possible.
The term "label" as used herein is a broad term and should be given its ordinary and customary meaning to those skilled in the art and is not limited to a special or custom meaning. The term may particularly refer to, but is not limited to, any identifier that is attachable to an article, such as a decal that is attachable or capable of being attached to a surface of another element. For example, the identifier may include an optical identifier, such as a bar code or two-dimensional code, and/or an electronic identifier, such as an RFID identifier. The label may comprise at least one adhesive surface. In particular, the label may be configured to be affixed to a surface of a slide. More specifically, the label may be configured to be affixed to a surface of the slide adjacent to a support surface of the slide configured to receive at least one sample. The label may be configured for tracking slides during different processing stages. As will be described in further detail below, the tag may include at least one element, such as at least one-dimensional code or at least one two-dimensional code, for information indicative of the slide. Thus, the term "extracting information from a tag" may particularly refer to, but is not limited to, machine-reading data from a one-dimensional code or a two-dimensional code by at least one optical reader and electronic processing of the data.
The term "slide treatment" as used herein is a broad term and should be given its ordinary and customary meaning to those of ordinary skill in the art and is not limited to a special or custom meaning. The term may particularly refer to, but is not limited to, a process employing one or more steps of at least one of slide preparation and slide analysis. Thus, slide processing may include a collection of different slide preparation and slide analysis steps in a pathology analysis. The different steps may in particular comprise sampling, embedding, slicing, staining, reading and/or archiving.
The term "tray" as used herein is a broad term and should be given its ordinary and customary meaning to those of ordinary skill in the art and is not limited to a special or custom meaning. The term may particularly refer to, but is not limited to, any carrier element, such as a flat element, configured to carry at least one other object. In particular, the tray may comprise at least one support surface configured for accommodating at least one other object. More specifically, the tray may include at least one recess or slot configured to receive at least one other object. The term "slot" as used herein is a broad term that shall be given its ordinary and customary meaning to those skilled in the art and is not limited to a special or custom meaning. The term may particularly refer to, but is not limited to, a recess or opening in any element such as in a tray. In particular, the slot may have a surrounding frame configured to clamp the object in a desired position. Thus, the surrounding frame may be configured to prevent misalignment of the object at least to a large extent when the element is tilted or transported.
The term "image" as used herein is a broad term and should be given its ordinary and customary meaning to those of ordinary skill in the art without being limited to a special or custom meaning. The term may particularly, but not exclusively, refer to data recorded by using a camera, such as a plurality of electronic readings from an imaging device, such as pixels of a camera chip. Thus, the image itself may comprise pixels, which are related to the pixels of the camera chip. Thus, when referring to a "pixel," either a unit of image information generated by a single pixel of the camera chip or a single pixel of the camera chip is referenced directly. The image may include raw pixel data. For example, the image may include data in RGB space, monochrome data from one of R, G or B pixels, bayer pattern images, and the like. The term "sub-image" as used herein is a broad term and should be given its ordinary and customary meaning to those skilled in the art without being limited to a special or custom meaning. The term may particularly refer to, but is not limited to, an image depicting a section or portion of another image. In particular, as will be outlined in more detail below, the image may be divided into a plurality of sub-images. Thus, each sub-image may depict another portion of the image.
As described above, a method for slide processing includes determining a plurality of sub-images from a first image based on a structure of a tray. The term "structure of the tray" as used herein is a broad term and should be given its ordinary and customary meaning to those of ordinary skill in the art without being limited to a special or custom meaning. The term may particularly refer to, but is not limited to, the arrangement of the slots, the size of the slots, the shape of the slots, the orientation of the slots relative to each other, and the number of slots on the tray, according to which the plurality of sub-images are obtained. In addition, other parameters may also be considered. Thus, sub-images may be determined from the image such that a region of interest, such as a single slot, a slide received in the slot, or a region of the slide received in the slot, is at least largely completely depicted in the sub-image.
The term "enhanced image" as used herein is a broad term and should be given its ordinary and customary meaning to those of ordinary skill in the art and is not limited to a special or custom meaning. The term may refer specifically to, but is not limited to, any process that improves image color, contrast, and/or image quality. Enhancement may illustratively include converting a color image to a grayscale image, converting an RGB image to an image having another color space such as HSV, HLS, etc., sharpening the image, and/or eliminating blur. More details will be given in more detail below.
The term "detected edge" as used herein is a broad term and should be given its ordinary and customary meaning to those of ordinary skill in the art and is not limited to a special or custom meaning. The term may particularly refer to, but is not limited to, any mathematical method whose purpose is to separate two-dimensional regions in a digital image from each other if they differ sufficiently in color or gray value, brightness or texture along a straight line or curve.
Drawings
The following detailed description of embodiments of the present disclosure may be best understood when read in conjunction with the following drawings, where:
FIG. 1 illustrates an environment 100 in which example embodiments of the present disclosure may be implemented;
FIG. 2 illustrates an example tray according to some embodiments;
FIG. 3 illustrates an example flowchart of a slide processing method according to an embodiment of the present disclosure;
FIG. 4 illustrates an example flowchart of a method of enhancing a tray image;
FIG. 5 illustrates an example diagram of dividing a tray image for batch scanning according to an embodiment of the present disclosure;
FIG. 6 illustrates an example flowchart of a method for filtering a label image; and
FIG. 7 illustrates a schematic block diagram of an example device 700 for implementing embodiments of the present disclosure.
The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements.
The following detailed description of embodiments may relate to methods, electronic devices, computer-readable storage media, computer program products, and scanning devices for slide processing.
Detailed Description
The principles of the present disclosure will now be described with reference to some embodiments. It should be understood that these embodiments are described for illustrative purposes only and to assist those skilled in the art in understanding and practicing the present disclosure, and are not intended to limit the scope of the present disclosure in any way. The disclosure described herein may be implemented in various ways other than those described below.
In the following description and claims, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
In this disclosure, references to "one embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature, structure, or characteristic is described in connection with an example embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
It will be understood that, although the terms "first" and "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the listed terms.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "having," when used herein, specify the presence of stated features, elements, and/or components, but do not preclude the presence or addition of one or more other features, elements, components, and/or groups thereof.
As described above, for pathology analysis, a slide containing a patient sample is prepared, read, and then analyzed. These slides are critical to the patient and require effective tracking at different stages of the procedure. For example, when a sample on a slide is to be stained, it may be desirable to record the identity of the slide to prevent the slide from being erroneously associated with a different patient.
For another example, when a patient is transferred to a different hospital or institution for further diagnosis or treatment, the slides associated with the patient also need to be physically transferred. During transfer of slides, an original hospital/institution may need to keep track of which slides were transferred, while a new hospital/institution may need to keep track of which slides were received.
According to some conventional solutions, labels on slides may be utilized for tracking during the various stages discussed above. For example, a bar code contained on a tag may be scanned and information encoded by the bar code may be extracted and recorded for tracking.
However, a hospital or other pathology analysis facility handles a large number of slides every day. According to conventional solutions, one needs to manually scan the labels on the slides one by one to record the information of the slides, which incurs considerable labor and time costs.
According to an example embodiment of the present disclosure, a solution for slide processing is presented. In this solution, a first image of a tray comprising a plurality of slots is first obtained, wherein the slots are capable of receiving slides, and each slide comprises a label and a sample for pathological analysis. A plurality of sub-images may then be determined from the first image based on the structure of the tray, wherein each of the plurality of sub-images corresponds to one of the plurality of slots. Information may then be extracted from the label of the slide based on at least one sub-image. With this solution, information from multiple slides can be extracted automatically at once, greatly reducing the amount of manual work spent scanning slides, as illustrated by the sketch below.
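For illustration only, the following Python sketch outlines this flow end to end under simple assumptions (a tray with two rows of ten slots, QR-coded labels, and OpenCV available); it is not the claimed implementation, and all names and parameters are illustrative:

```python
# Minimal end-to-end sketch of the batch-scanning flow; assumes a tray
# with two rows of ten slots and QR-coded labels. Illustrative only.
import cv2

def process_tray_image(path, rows=2, cols=10):
    first_image = cv2.imread(path)                      # first image of the tray
    gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    detector = cv2.QRCodeDetector()
    results = {}
    for r in range(rows):                               # split per tray structure
        for c in range(cols):
            sub = gray[r * h // rows:(r + 1) * h // rows,
                       c * w // cols:(c + 1) * w // cols]
            text, _, _ = detector.detectAndDecode(sub)  # extract label information
            index = r * cols + c + 1                    # slot position on the tray
            if text:
                results[index] = text
    return results
```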
Hereinafter, example embodiments of the present disclosure are described with reference to the accompanying drawings. Fig. 1 illustrates an environment 100 in which example embodiments of the present disclosure may be implemented. As shown in fig. 1, environment 100 may include a scanning device 110 configured to capture an image of a tray 116.
In some embodiments, the tray 116 may include a plurality of slots for receiving slides. Fig. 2 illustrates an example of a tray 116 according to some embodiments. As shown in fig. 2, the tray 116 may include 20 slots 210 shaped to receive slides 220. In the example of fig. 2, 20 slots 210 are arranged in two rows, each row including 10 slots 210.
It should be understood that the shape of the tray 116 and the arrangement of the slots 210 as shown in fig. 2 are merely examples, and that any other suitable tray structure may be used. In one example, a tray with a circular shape can be utilized to hold slides. In another example, 20 slots 210 may be arranged in four rows, each row including 5 slots.
When the tray 116 is used for slide tracking, one or more slides 220 may be placed in the slots 210. For illustration, an example slide 220 is shown in FIG. 2. The slide 220 may include a label 222 and a corresponding sample 224. In some embodiments, the label 222 can include at least one element for indicating information of the slide 220.
In some embodiments, the label 222 can include at least one character that can indicate information associated with the slide 220. For example, the label 222 may include the text "John" indicating the name of the patient associated with the slide 220. Illustratively, the at least one character, such as the text "John," may be depicted in a first region 221 within the label 222 marked with a dashed line.
In some other embodiments, the label 222 can include at least one symbol that can indicate information associated with the slide 220. For example, a hospital logo may be printed on the label 222 to indicate which hospital prepared the slide 220. Illustratively, at least one logo or at least one character associated with a hospital may be depicted in a second area 223 marked with a dashed line within the label 222; e.g., the text "3rd hospital" may be shown in the second area 223 within the label 222.
In some further embodiments, the tag 222 may include a graphically encoded representation. Examples of graphically encoded representations may include, but are not limited to: one-dimensional bar codes, two-dimensional codes (e.g., QR codes), and any other graphical representation for encoding information. In the example of fig. 2, the two-dimensional code is contained in the tag 222.
Referring back to FIG. 1, in some embodiments, the structure of the scanning device 110 may be specifically designed for easy capture of images of the tray 116. As shown in FIG. 1, the scanning device 110 may include a housing 118 that encloses a camera 114 for capturing images of a tray 116. In some embodiments, the housing 118 may include an opening 113 for loading and unloading the tray 116. Further, the light source 112 may be provided within the housing 118. Examples of the light source 112 may include, but are not limited to, incandescent lamps, halogen lamps, fluorescent lamps, mercury lamps, light-emitting diodes (LEDs), and the like. The light source may have configurable lighting parameters, such as luminous flux, color temperature, power, brightness, etc., which are adjustable by the user.
During capturing of the image of the tray 116, the tray 116 may be placed within the housing 118, so that the influence of ambient light outside the housing 118 may be reduced and the quality of the captured image may be improved.
In some embodiments, scanning device 110 may also include a support 117. The tray 116 may be fixed on a support 117 for image capturing. In this case, the different trays may have a relatively stable position in the captured image, thereby facilitating analysis of the captured image. It should be appreciated that although only one camera 114 and one light source 112 are shown in the example of fig. 1, multiple cameras 114 and/or multiple light sources may be included in the scanning device 110.
As shown in fig. 1, the scanning device 110 may be communicatively coupled to a computing device 120. In some embodiments, the scanning device 110 may capture an image 140 of the tray 116 and then send the captured image 140 to the computing device 120 for analysis.
For one example, a pathologist may press a button (not shown in fig. 1) on scanning device 110 to cause scanning device 110 to capture image 140, which is then sent to computing device 120.
For another example, the scanning device 110 may detect whether the tray 116 is in place within the housing 118, and may begin capturing the image 140 of the tray 116 a predetermined period of time after determining that the tray 116 is ready.
For another example, the scanning device 110 may receive instructions from the computing device 120 and then begin capturing the image 140 of the tray 116. For example, a user may interact with the computing device 120 through a graphical user interface to cause the scanning device 110 to capture an image 140 of the tray 116.
Upon receiving the image captured by the scanning device 110, the computing device 120 may extract information from the tag(s) of the slide(s) from the captured image 140. The process of extracting information will be discussed in detail below with reference to fig. 3-7.
In some embodiments, computing device 120 may be coupled with display 122. The display 122 may present a graphical user interface to a user (e.g., pathologist). The user may view the analysis results of the tray 116, for example, through a graphical user interface. In addition, the graphical user interface may also provide the user with some components for controlling the scanning device 110. In one example, a user may interact with a graphical user interface to turn power to scanning device 110 on and/or off. In another example, the user may also configure parameters for capturing the image 140 of the tray 116 through the graphical user interface.
In some further embodiments, computing device 120 may be further coupled to storage device 130. For example, computing device 120 may store the extracted information in storage device 130, and may retrieve the stored information from storage device 130 in response to future queries.
Although computing device 120 is shown as a separate entity from scanning device 110, computing device 120 may be included in scanning device 110 as hardware and/or software components of scanning device 110 or integrated with scanning device 110. In this case, image analysis, which will be discussed in detail below, may then be implemented by the scanning device 110 itself.
It should also be appreciated that the scanning device 110 may have a different structure than the example in fig. 1. For example, a pathologist may use a cellular phone or camera to capture an image 140 of the tray 116 placed on a table. The solution of slide processing, which will be discussed in detail below, can also be applied to images of trays captured by scanning devices other than the example in fig. 1.
FIG. 3 shows a flowchart of an example process 300 for slide processing according to an embodiment of the present disclosure. Process 300 may be implemented by the computing device 120, as shown in FIG. 1. For ease of illustration, process 300 will be described with reference to FIGS. 1-2.
At block 310, the computing device 120 obtains an image 140 (also referred to as a first image 140) of the tray 116 including a plurality of slots 210, wherein the slots 210 are capable of receiving slides 220, and each slide 220 includes a label 222 and a sample 224 for pathology analysis.
In some embodiments, at least one slide 220 may be placed in the slot(s) 210 in order to capture the image 140 of the tray 116. It should be noted that not all slots 210 need be occupied. For example, when using a tray 116 that includes 20 slots 210, six slides may be placed in any suitable slots 210 for image capture, while the other 14 slots remain empty.
When the tray 116 is prepared, the scanning device 110 may be used to capture an image 140 of the tray 116. In some embodiments, the scanning device 110 may send the captured image 140 of the tray 116 to the computing device 120 over a wired or wireless network. Alternatively, after capturing the image 140 of the tray 116, the scanning device 110 may send the captured image 140 together with other captured images in a batch (e.g., at a later time of day).
In some further embodiments, after the image 140 of the tray 116 is captured, it may be stored in a storage device, such as the storage device 130. The computing device 120 may later obtain the image 140 of the tray 116 from the storage device 130 for further analysis.
In some further embodiments, if computing device 120 is implemented as an internal component of scanning device 110, computing device 120 may obtain image 140 of tray 116 from an image capturing component (e.g., camera 114), for example, through internal communication within scanning device 110.
It should be appreciated that the image of the tray 116 may be captured by any suitable device, including but not limited to the example scanning device 110 shown in fig. 1.
At block 320, the computing device 120 determines a plurality of sub-images from the first image 140 based on the structure of the tray 116, wherein each of the plurality of sub-images corresponds to one of the plurality of slots 210.
At block 330, the computing device 120 extracts information from the label 222 of the slide 220 based on the at least one sub-image.
In some embodiments, to increase the accuracy of image processing, computing device 120 may first obtain a second image by enhancing first image 140. Fig. 4 illustrates a flow chart of an example process 400 for enhancing a first image.
As shown in FIG. 4, at block 410, the computing device 120 may convert the captured first image 140 of the tray 116 into a grayscale image. In most cases, the first image 140 is a color image, which contains more information for later analysis. In some embodiments, the computing device 120 may convert the first image 140 into an 8-bit grayscale image, in which the gray value of a pixel is represented by an 8-bit byte (i.e., the value range 0-255).
In some embodiments, the maximum of the R, G, and B components of each pixel in the first image 140 may be taken as its gray value. Alternatively, a weighted average of the R, G, and B components of each pixel in the first image 140 may be calculated as the gray value.
For another example, a grayscale image may be obtained by converting an RGB image to another color space (e.g., HSV, HLS, etc.) and then calculating a grayscale value based on components of the other color space.
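As an illustration of these variants, a minimal Python sketch using OpenCV and NumPy follows; the choice among the variants is not fixed by the disclosure, and the function name and mode switch are assumptions:

```python
import cv2
import numpy as np

def to_grayscale(first_image, mode="weighted"):
    """Convert a BGR tray image to an 8-bit grayscale image (sketch)."""
    if mode == "max":
        # maximum of the R, G and B components of each pixel
        gray = first_image.max(axis=2)
    elif mode == "weighted":
        # weighted average of R, G and B (standard luminance weights)
        gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    else:
        # derive the gray value from another color space, e.g. HLS
        hls = cv2.cvtColor(first_image, cv2.COLOR_BGR2HLS)
        gray = hls[:, :, 1]  # L channel
    return gray.astype(np.uint8)
```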
At block 420, the computing device 120 may obtain a second image by sharpening the grayscale image. In some embodiments, to eliminate blurring in the grayscale image, it may be sharpened by, for example, the Laplace operator. For example, an example Laplace operator may be a 4-connected kernel, with a value of 4 for the center element and a value of -1 for the four neighboring elements.
Furthermore, to handle pixel value overflow, the sharpened grayscale image can be clipped back to an 8-bit grayscale image. By sharpening the grayscale image, more pronounced edges can be obtained, which facilitates the image analysis described below.
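One plausible reading of this sharpening step, sketched in Python; the exact way the kernel response is combined with the image and the overflow handling are assumptions, not prescribed by the text:

```python
import cv2
import numpy as np

# 4-connected Laplacian kernel: 4 at the center, -1 at the four neighbors
LAPLACE_KERNEL = np.array([[ 0, -1,  0],
                           [-1,  4, -1],
                           [ 0, -1,  0]], dtype=np.float32)

def sharpen(gray):
    # compute the Laplacian response and add it back to emphasize edges
    lap = cv2.filter2D(gray.astype(np.float32), -1, LAPLACE_KERNEL)
    sharpened = gray.astype(np.float32) + lap
    # handle pixel value overflow by clipping back to the 8-bit range
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```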
At block 430, the computing device 120 may determine whether the standard deviation of the second image is greater than a threshold. In some embodiments, a standard deviation of the second image may be calculated and compared to a threshold.
If the standard deviation is greater than the threshold, it may be indicated that the second image has sufficient edge information and that the blur has been effectively eliminated. In this case, the process 400 proceeds to block 440. At block 440, the computing device may divide the enhanced second image into a plurality of sub-images, which may be based in particular on the structure of the tray 116.
If it is determined at block 430 that the standard deviation is less than or equal to the threshold, the process 400 proceeds to block 450. At block 450, the computing device 120 may prompt the user to capture another image of the tray. When capturing a new image of the tray, the user may configure the lighting parameters of the light source (e.g., luminous flux, color temperature, power, brightness, etc.) and/or the parameters of the camera (e.g., resolution, white balance, focal length, etc.), so that the new image may have a higher quality for recognition. In some embodiments, a preview of the image may be presented on the display to help the user adjust the parameters.
In some embodiments, an image that meets the threshold requirement may not be available after multiple attempts. In this case, one of the captured images having the highest standard deviation may be selected for subsequent processing. Alternatively, the threshold used in block 430 may be lowered, for example, and then an image with a relatively higher quality may be selected.
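A minimal sketch of this quality gate and the fallback selection; the threshold value is an assumption, since the disclosure does not fix a number:

```python
import numpy as np

STD_THRESHOLD = 30.0  # assumed value; not specified by the disclosure

def has_enough_edge_information(second_image):
    # a high standard deviation suggests sufficient edge information
    return float(np.std(second_image)) > STD_THRESHOLD

def best_of_attempts(images):
    # fallback after repeated failures: keep the capture with the
    # highest standard deviation for subsequent processing
    return max(images, key=lambda img: float(np.std(img)))
```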
To detect the tag(s) included in the second image, the computing device may also determine a plurality of sub-images from the second image based on the structure of the tray. In some embodiments, the computing device 120 may determine a template corresponding to the structure of the tray 116, where the template may indicate the layout of the plurality of slots 210.
In some embodiments, the tray(s) 116 for holding slides are designed to have a single structure. In other words, the layout of the slots 210 on the tray(s) 116 is the same. In this case, a template for indicating the layout of the plurality of slots 210 in the tray 116 may be predetermined and maintained. For example, the template may indicate that there are 20 slots arranged in two rows, each row comprising 10 slots uniformly arranged in space.
In some further embodiments, trays having multiple structural types may be used. For example, multiple trays with different layouts may be provided, and a pathologist may select one of them to accommodate the slides. In this case, a plurality of templates corresponding to the plurality of trays may be predetermined, and then maintained in association with the identifications of the trays. For example, a first template indicating a first layout of slots may be maintained in association with a first identification, and a second template indicating a second layout of slots may be maintained in association with a second identification.
When determining the template corresponding to the tray 116 being used, the identity of the tray 116 may be obtained first. In one example, a pathologist may first enter through a graphical user interface that the tray 116 with the first identification is being used. The computing device 120 may then obtain the maintained first template based on the first identification.
In another example, the computing device may automatically detect the identity of the tray 116, e.g., based on the first or second images. For example, a number "1" may be printed on the tray 116 to indicate that the tray 116 has a first identification. The computing device 120 may then obtain the maintained first template based on the first identification.
After obtaining the layout of the slots 210 in the tray 116, the computing device 120 may then divide the second image into a plurality of sub-images according to the determined template, as sketched below. For example, the computing device 120 may divide the second image into 20 sub-images according to an obtained template indicating that there are 20 slots arranged in two rows, each row including 10 slots uniformly arranged in space.
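A sketch of such template-based division for the example layout of two rows of ten slots; uniform cells with no margins are assumed, while a real template could also carry slot offsets and sizes:

```python
def split_by_template(second_image, rows=2, cols=10):
    """Divide the enhanced tray image into uniformly arranged sub-images."""
    h, w = second_image.shape[:2]
    sub_images = []
    for r in range(rows):
        for c in range(cols):
            sub = second_image[r * h // rows:(r + 1) * h // rows,
                               c * w // cols:(c + 1) * w // cols]
            sub_images.append(sub)  # index = r * cols + c + 1
    return sub_images
```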
In some other embodiments, the computing device 120 may detect edges of the plurality of slots 210 in the second image. It should be noted that any suitable edge detection algorithm, such as the Sobel algorithm or the Canny algorithm, may be applied to detect the edges of the slots. The subject matter of the present disclosure is not limited in this respect.
Further, the computing device 120 may then determine a plurality of sub-images from the second image based on the edges. For example, computing device 120 may determine a plurality of regions bounded by edges as a plurality of sub-images.
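The edge-based alternative could look like the following sketch; the Canny thresholds and the minimum region area are assumptions, and a grayscale input image is assumed, since the disclosure leaves the edge detection algorithm open:

```python
import cv2

def sub_images_from_edges(second_image):
    """Find slot boundaries by edge detection and take the bounding
    boxes of sufficiently large contours as sub-images (sketch;
    second_image is assumed to be an 8-bit grayscale image)."""
    edges = cv2.Canny(second_image, 50, 150)        # assumed thresholds
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) > 5000]          # keep slot-sized regions
    return [second_image[y:y + h, x:x + w] for (x, y, w, h) in boxes]
```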
FIG. 5 shows a schematic diagram for slide processing according to an embodiment of the present disclosure. As shown in FIG. 5, an enhanced second image 510 may be obtained according to the procedure described above. The computing device 120 may then determine 20 sub-images 520 from the second image 510 based on the process described above.
In some embodiments, during information extraction, an index may be assigned to each of the plurality of sub-images 520. In the example of fig. 5, 20 indices (e.g., 1, 2, 3 … K) are assigned to sub-image 520. The assigned index may help to associate information to be extracted with the corresponding sub-image.
In some further embodiments, the computing device 120 may also determine the plurality of sub-images directly from the first image, without enhancing the first image into the second image. In that case, the multiple sub-images obtained from the first image 140 may themselves be enhanced for later processing, according to the process 400 discussed with reference to FIG. 4.
In addition, to facilitate extracting information from the label(s), the computing device 120 may also divide the sub-image 520 into a plurality of regions, wherein at least one of the plurality of regions has a size corresponding to the label 222.
Generally, as shown in FIG. 2, the slide 220 may have an elongated shape with a label portion at one end of the slide and a sample portion at the other end. Further, different labels 222 may have the same size. To reduce the computational cost of detecting labels, the computing device 120 may divide the sub-image 520 into a plurality of regions 530. In the example of FIG. 5, the sub-image 520 is divided into three regions 530, and each region 530 has a size corresponding to the label 222. That is, the label 222 always falls within one of these regions 530 and is never split across two or more of the regions 530.
Since the label 222 is typically located at one end of the slide 220, only the regions 530 at the ends of the sub-image 520 are considered potential regions 540 that may include the label 222. Thus, the computing device 120 may extract information from at least one of the plurality of regions 530 without considering, for example, the region in the middle of the sub-image.
In some embodiments, it may only be necessary to process one of the plurality of regions 530 if the direction in which the slide(s) 220 are placed on the tray 116 is always the same and the direction in which the tray 116 is placed in the scanning device 110 is always the same. For example, if as shown in fig. 5, it can be ensured that the label 222 is always placed on top of the slide 220, the computing device 120 can process only the top region 530 (e.g., the region with index "91") of the sub-image 520 (e.g., the sub-image with index "9"), thereby reducing the computing cost.
In some further embodiments, two or more of the plurality of regions 530 may need to be processed if the direction in which the slide(s) 220 are placed on the tray 116 may be different. In the example of fig. 5, where sub-image 520 is divided into three regions 530, the top region (e.g., the region with index "91") and the bottom region (e.g., the region with index "93") of the three regions may be processed, and the middle region (e.g., the region with index "92") may be skipped accordingly. In this way, the manner in which the slide is placed can be flexible and the computational cost can be reduced.
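The region-splitting logic above might be sketched as follows; three vertically stacked regions are assumed, matching the example of FIG. 5, and the function and parameter names are illustrative:

```python
def candidate_label_regions(sub_image, parts=3, label_always_on_top=False):
    """Split a slot sub-image into label-sized regions and keep only
    the end regions where a label can appear (sketch)."""
    h = sub_image.shape[0]
    regions = [sub_image[i * h // parts:(i + 1) * h // parts]
               for i in range(parts)]
    if label_always_on_top:
        return [regions[0]]           # fixed orientation: top region only
    return [regions[0], regions[-1]]  # otherwise check top and bottom,
                                      # skipping the middle region
```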
The regions determined to possibly contain labels are collected for subsequent recognition. As described above, the labels 222 may include different types of information, and different recognition solutions may be used. For example, a label may include at least one of a character, a symbol, or a graphically encoded representation.
In some embodiments, Optical Character Recognition (OCR) may be applied to each collected image to recognize characters or graphical symbols on the label 222. For example, the text "IM123456789" may be extracted from the label 222 by OCR and determined as the identity of the slide 220.
In some other embodiments, for each collected image, an appropriate barcode recognition algorithm (e.g., for Data Matrix, QR code, PDF417, etc.) may be used to decode the graphically encoded representation. For example, the identification "IM123456789" may be encoded by a two-dimensional code on the label 222, and the computing device 120 may extract the identification by decoding the two-dimensional code.
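A corresponding sketch for the graphically encoded case could rely on OpenCV's built-in QR detector; for other symbologies (Data Matrix, PDF417), a different decoder library would be substituted. Again, the specific API is an assumption, not the disclosure's prescribed method.

```python
import cv2

def decode_qr(region):
    """Try to decode a QR code in the region; return the payload or None."""
    data, points, _ = cv2.QRCodeDetector().detectAndDecode(region)
    return data or None  # e.g. "IM123456789" on success
```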
Furthermore, since each sub-image includes at most one label image (one slide corresponds to one label), it is not necessary to apply the recognition algorithm to all regions divided from the same sub-image. For example, if information has already been extracted from the region with index "11", the region with index "13" need not be processed for recognition and can be skipped, thereby speeding up the batch scanning of the present disclosure.
However, since the label may be contaminated during slide preparation and barcode recognition algorithms are generally susceptible to noise, the information in the barcode, particularly in a two-dimensional code, may fail to be extracted.
Reference is now made to fig. 6, which illustrates a flowchart of an example process 600 for filtering a label image including, for example, a two-dimensional code, in order to make the batch scanning of the present disclosure more robust. Specifically, at block 610, for each image from which the recognition algorithm failed to extract information, the computing device 120 may obtain the corresponding RGB image. As described above, the images to which the recognition algorithm has been applied but from which no information could be extracted are enhanced grayscale images (see steps 410-420). The original RGB versions of these images can now be obtained to recover details that may have been lost in the previous processing.
At block 620, the computing device 120 may convert the RGB image to a grayscale image using the L (lightness) channel of the HLS (hue, lightness, saturation) color space. In some embodiments, the RGB image may be converted to an HLS image by calculating the HLS components, as is well known in the art, and the value of the L channel of each pixel may be extracted as the gray level of that pixel.
At block 630, the computing device 120 may convert the grayscale image to a binary image by, for example, adaptive thresholding. In some embodiments, a threshold is applied to the grayscale image to generate a binary image, wherein a pixel is set to white (e.g., value 255) if its value exceeds the adaptive threshold and is set to black (e.g., value 0) otherwise. The threshold varies depending on the local statistics around the pixel to which it is applied. For example, if a pixel is located in an area whose surrounding pixels generally have relatively high gray values, the threshold applied to the pixel may be adaptively increased. Likewise, if a pixel is located in an area whose surrounding pixels generally have relatively low gray values, the threshold applied to the pixel may be adaptively decreased. Thus, the adaptive thresholding method can prevent loss of image detail in overexposed or underexposed areas when generating the binary image.
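Blocks 620 and 630 could be sketched with OpenCV as follows; the neighborhood size and offset of the adaptive threshold are illustrative assumptions rather than values taken from the disclosure.

```python
import cv2

def to_binary(rgb_image):
    """Grayscale via the HLS lightness channel, then adaptive thresholding."""
    # In OpenCV's HLS layout the channels are (H, L, S), so index 1 is L.
    lightness = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HLS)[:, :, 1]
    # The threshold for each pixel is derived from the mean of its local
    # 31x31 neighborhood, so bright and dark areas are binarized separately.
    return cv2.adaptiveThreshold(
        lightness, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
        cv2.THRESH_BINARY, blockSize=31, C=5)
```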
At block 640, the computing device 120 may detect edges in the binary image. According to some embodiments, a Canny operator may be applied to the binary image to generate a plurality of edges.
At block 650, the computing device may dilate the detected edges. According to some embodiments, the kernel of the dilation operation may be a 4-connected kernel (considering the 4 pixels above, below, to the left of, and to the right of the center pixel) or an 8-connected kernel (considering the 8 pixels surrounding the center pixel). Through the dilation, isolated pixels are effectively removed, most of which are statistically noise.
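Blocks 640 and 650 might be sketched as below, again assuming OpenCV; the Canny thresholds are illustrative values, not taken from the disclosure.

```python
import cv2

def edge_and_dilate(binary, four_connected=True):
    """Detect edges in the binary image, then dilate them (blocks 640-650)."""
    edges = cv2.Canny(binary, 50, 150)  # 50/150 are assumed thresholds
    # A cross-shaped 3x3 kernel realizes the 4-connected case; a full 3x3
    # rectangle would realize the 8-connected case.
    shape = cv2.MORPH_CROSS if four_connected else cv2.MORPH_RECT
    kernel = cv2.getStructuringElement(shape, (3, 3))
    return cv2.dilate(edges, kernel, iterations=1)
```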
The computing device 120 may also filter pixels in the dilated image to generate an image for the algorithmic recognition of the two-dimensional code. At block 660, the computing device 120 may filter pixels in the dilated image based on connectivity. If a pixel is not 8-connected to any other foreground pixel, it is an isolated pixel and should be discarded (e.g., its value is set to 255 as background, which means white).
At block 670, the computing device 120 may filter pixels in the dilated image based on shape, particularly based on the shape of the connected region containing the pixels. In some embodiments, only pixels located in an approximately square region may be retained (e.g., with their value set to 0 as foreground, which means black). For example, the ratio of the width to the height of the connected region containing the pixels may be calculated. The ratio should fall within a range of predetermined values (e.g., 0.7-1.2), depending on the type of two-dimensional code; otherwise, the pixels should be discarded.
At block 680, the computing device 120 may filter pixels in the dilated image based on size, particularly based on the size of the connected region containing the pixels. In some embodiments, only pixels located in a connected region of an appropriate size may be retained (e.g., with their value set to 0 as foreground, which means black). For example, the product of the width and the height of the connected region is a quick way to estimate the area of the region, and this area should fall within a range of predetermined values depending on the type of two-dimensional code; otherwise, the pixels should be discarded.
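One way to realize the three filters of blocks 660-680 in a single pass is to label connected components and test each component's connectivity, aspect ratio, and area, as in the sketch below. The aspect-ratio range 0.7-1.2 follows the example above; the area bounds are assumed placeholders that would depend on the code type and image resolution.

```python
import cv2
import numpy as np

def filter_components(binary, ratio_range=(0.7, 1.2), area_range=(100, 10000)):
    """Keep only roughly square, plausibly sized connected regions."""
    # Foreground is black (0) on white (255), so invert before labelling.
    foreground = (binary == 0).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(foreground, connectivity=8)
    keep = np.zeros_like(foreground)
    for i in range(1, n):  # label 0 is the background
        w = stats[i, cv2.CC_STAT_WIDTH]
        h = stats[i, cv2.CC_STAT_HEIGHT]
        if stats[i, cv2.CC_STAT_AREA] <= 1:
            continue  # isolated pixel: discard as noise (block 660)
        if not ratio_range[0] <= w / h <= ratio_range[1]:
            continue  # not approximately square (block 670)
        if not area_range[0] <= w * h <= area_range[1]:
            continue  # width-by-height area out of range (block 680)
        keep[labels == i] = 1
    # Restore the black-foreground-on-white convention.
    return np.where(keep == 1, 0, 255).astype(np.uint8)
```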
The algorithm for decoding the two-dimensional barcode may then be applied again to the result of process 600 to extract the information in the label. If the information still cannot be recognized and extracted, a Laplace operator may be applied to the image to sharpen or enhance it, and the algorithm may then be applied once more. The Laplace operator may be, for example, a 3x3 kernel with a value of 2, 4, 6, 8, or 10 for the center element and values of -1, 0, or 1 for the 8 surrounding elements. In some embodiments, the image may be processed repeatedly with the Laplace operator before the algorithm is applied to extract the information.
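A hedged sketch of this sharpening retry follows. The kernel below is the 4-connected Laplacian (center weight 4, -1 at the four neighbors) plus the identity, so the result is a sharpened image rather than an edge map; the exact weights and iteration count are assumptions among the many weightings the description allows.

```python
import cv2
import numpy as np

SHARPEN_KERNEL = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], dtype=np.float32)

def sharpen(image, iterations=1):
    """Apply the Laplacian-based sharpening kernel, optionally repeatedly,
    before retrying the two-dimensional-code decoding algorithm."""
    for _ in range(iterations):
        image = cv2.filter2D(image, -1, SHARPEN_KERNEL)
    return image
```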
Based on the above discussion, embodiments of the present disclosure can reliably extract information from a batch of slides at a time, thereby improving the efficiency of slide management.
In some embodiments, the computing device 120 may present, for example via the display 122, the captured image 140 (the first image) of the tray 116 and a visual element indicating a status of one of the plurality of sub-images.
In some embodiments, the computing device 120 may present, via the display 122, a first state indicating that the label is contained in the respective sub-image and that the information indicated by the label was successfully extracted. For example, a green block may be presented to indicate that the corresponding sub-image was successfully recognized and the information extracted. Further, the extracted information may be presented, for example, in response to clicking on the block.
In some other embodiments, the computing device 120 may present a second state indicating that no label is included in the corresponding sub-image. For example, a grey block may be presented to indicate that no slide is placed in the corresponding slot.
In some further embodiments, the computing device may present a third state indicating that the label is included in the corresponding sub-image but that the information indicated by the label cannot be extracted. For example, a red block may be presented to indicate that recognition of the corresponding sub-image failed.
In some embodiments, the computing device 120 may record the extracted information. As described above, the extracted information may include the identification of the slide 220. In some embodiments, the computing device 120 may update the information associated with the identification to indicate that a particular procedure (e.g., a retention procedure) for the slide 220 has been completed. In this case, the computing device 120 may later provide information regarding the processing stage of the slide based on the stored information, for example, in response to an identification-based query. Thus, by extracting information from a batch of slides, the efficiency of slide tracking can be improved.
In some other embodiments, the computing device 120 may store the extracted information in association with the image 140 of the tray 116. In this case, the captured image 140 may help verify that the extracted information is correct, and the user may update the information by viewing the stored image 140. Further, by storing information visually indicated or encoded on the label in association with the captured image 140, the user can retrieve the sample of the slide after it has been archived for further review or analysis.
In some embodiments, the computing device 120 may receive input indicating a target identification of the slide, and may then obtain a target image of the tray based on the target identification. For example, computing device 120 may use the identification to look up the target image from storage device 130. Further, the computing device 120 may present the target image as a response to the input. In this way, a user can easily retrieve the original captured image of the tray using the particular identification of the slide.
In some further embodiments, the computing device 120 may also determine an index corresponding to the sub-image of the label, where the index may indicate the location of the slot on the tray that holds the slide. As described above, each sub-image may be assigned a respective index. Computing device 120 may also store the extracted information in association with the index.
For example, the identification extracted from the tag may be stored in association with the index "1". In this way, a user can easily identify from the stored image 140 which of the plurality of slides corresponds to the identification.
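The storage and retrieval just described might be organized along these lines; the record layout, field names, and path are hypothetical, since the disclosure does not prescribe a storage schema.

```python
# Hypothetical record for one tray scan; paths and field names are assumed.
scan_record = {
    "tray_image": "scans/2021-12-03/tray_0001.png",  # the captured image 140
    "slides": {
        1: {"identification": "IM123456789", "status": "extracted"},
        2: {"identification": None, "status": "empty"},
        # ... one entry per slot index on the tray
    },
}

def find_tray_image(records, target_id):
    """Look up the stored tray image (and slot index) for a slide identification."""
    for record in records:
        for index, slide in record["slides"].items():
            if slide["identification"] == target_id:
                return record["tray_image"], index
    return None

# Example: retrieve the original tray image for slide "IM123456789".
result = find_tray_image([scan_record], "IM123456789")
```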
It should be understood that the extracted slide information may be applied for any other suitable purpose, which is not exhaustively listed herein.
Fig. 7 shows a schematic block diagram of an example device 700 for implementing embodiments of the present disclosure. For example, the computing device 120 according to embodiments of the present disclosure may be implemented by the device 700. As shown, the device 700 includes a Central Processing Unit (CPU) 701 that can perform various suitable actions and processes based on computer program instructions stored in a Read-Only Memory (ROM) 702 or loaded from a storage unit 708 into a Random Access Memory (RAM) 703. The RAM 703 may also store various programs and data required for the operation of the device 700. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706, such as a keyboard and a mouse; an output unit 707, such as various types of displays and speakers; a storage unit 708, such as a magnetic disk and an optical disk; and a communication unit 709, such as a network card, a modem, or a wireless transceiver. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunication networks.
The procedures and processes described above, such as the processes 300, 400, and 600, may be performed by the processing unit 701. For example, in some embodiments, the process 300 may be implemented as a computer software program tangibly embodied in a machine-readable medium, e.g., the storage unit 708. In some embodiments, the computer program may be loaded and/or installed, in part or in whole, onto the device 700 via the ROM 702 and/or the communication unit 709. One or more steps of the methods or processes described above may be implemented when the computer program is loaded into the RAM 703 and executed by the CPU 701.
The present disclosure may be a method, apparatus, system, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions loaded thereon for performing aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse passing through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein can be downloaded from a computer readable storage medium to respective computing/processing devices, or to an external computer or external storage device, via the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembly instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the C language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, state information of the computer readable program instructions is used to customize electronic circuitry, such as programmable logic circuitry, a field-programmable gate array (FPGA), or a programmable logic array (PLA). The electronic circuitry may execute the computer readable program instructions to implement various aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
The computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be stored in a computer readable storage medium and cause a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium stored with the instructions includes an article of manufacture including instructions for implementing the aspects of the functions/acts specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other devices to produce a computer implemented process. Accordingly, instructions which execute on the computer, other programmable data processing apparatus, or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially in parallel, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be understood that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
Various implementations of the present disclosure have been described above. The foregoing description is illustrative rather than exhaustive and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen to best explain the principles of the embodiments and their practical application, or the technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
List of examples
Example 1: a method for slide processing, comprising:
obtaining a first image of a tray comprising a plurality of slots capable of holding slides, the slides comprising labels and samples for pathology analysis;
determining a plurality of sub-images from the first image based on a structure of the tray, each sub-image of the plurality of sub-images corresponding to one of the plurality of slots; and
extracting information from the label of the slide based on at least one of the sub-images.
Example 2: the method of embodiment 1, wherein the first image of the tray is obtained by an image capturing device comprising a camera, a light source, and a housing for enclosing the camera and the light source, and wherein the tray is placed in the housing for image capturing.
Example 3: the method of embodiment 1, wherein determining the plurality of sub-images comprises:
obtaining a second image by enhancing the first image; and
determining the plurality of sub-images from the second image based on the structure of the tray.
Example 4: the method of embodiment 3, wherein determining the plurality of sub-images from the second image comprises:
Determining a template corresponding to the structure of the tray, the template indicating a layout of the plurality of slots; and
dividing the second image into the plurality of sub-images according to the determined template.
Example 5: the method of embodiment 3, wherein determining the plurality of sub-images from the second image comprises:
detecting edges of the plurality of slots in the second image; and
determining the plurality of sub-images from the second image based on the edges.
Example 6: the method of embodiment 1, wherein extracting information from the label of the slide comprises:
dividing a sub-image into a plurality of regions, at least one region of the plurality of regions having a size corresponding to the label; and
extracting the information from at least one of the plurality of regions.
Example 7: the method of embodiment 6, wherein the label comprises a two-dimensional code, and wherein extracting the information comprises:
for each of the plurality of regions:
extracting, from the region, the information encoded in the two-dimensional code of the label;
in response to a failure to extract the information:
obtaining data of a luminance channel corresponding to the region;
generating a binary image based on the data of the luminance channel;
filtering out connected regions in the binary image based on the shape or size of the connected regions; and
extracting the information from the filtered binary image.
Example 8: the method of embodiment 1, further comprising:
storing the information in association with the first image.
Example 9: the method of embodiment 8, further comprising:
determining an index corresponding to the sub-image containing the label, the index indicating the location of the slot on the tray that accommodates the slide; and
wherein storing the information further comprises:
storing the information in association with the index.
Example 10: the method of embodiment 8, wherein storing the information further comprises:
storing the information in association with the sub-image corresponding to the slide.
Example 11: the method of embodiment 1, wherein the information comprises at least one of:
data encoded by a graphical code on the label,
at least one character on the label, or
At least one symbol on the label.
Example 12: the method of embodiment 1, further comprising:
presenting the first image and a visual element indicating a state of one of the plurality of sub-images, the state being selected from the group consisting of:
a first state, indicating that the label is contained in the corresponding sub-image and that the information indicated by the label was successfully extracted,
a second state, indicating that no label is contained in the corresponding sub-image, and
a third state, indicating that the label is contained in the corresponding sub-image but that the information indicated by the label cannot be extracted.
Example 13: the method of embodiment 1, wherein the information indicates an identification of the slide, the method further comprising:
receiving an input indicating a target identification of a slide;
obtaining a target image of the tray based on the target identification; and
presenting the target image as a response to the input.
Example 14: an electronic device, comprising:
one or more processors;
one or more memories coupled to the one or more processors and having stored thereon computer-executable instructions that, when executed by the one or more processors, cause the apparatus to perform actions comprising:
obtaining a first image of a tray comprising a plurality of slots capable of holding slides, the slides comprising labels and samples for pathology analysis;
determining a plurality of sub-images from the first image based on a structure of the tray, each sub-image of the plurality of sub-images corresponding to one of the plurality of slots; and extracting information from the label of the slide based on at least one of the sub-images.
Example 15: the device of embodiment 14, wherein the first image of the tray is obtained by an image capture device comprising a camera, a light source, and a housing for enclosing the camera and the light source, and wherein the tray is placed in the housing for image capture.
Example 16: the apparatus of embodiment 14, wherein determining the plurality of sub-images comprises:
obtaining a second image by enhancing the first image; and
determining the plurality of sub-images from the second image based on the structure of the tray.
Example 17: the apparatus of embodiment 16, wherein determining the plurality of sub-images from the second image comprises:
Determining a template corresponding to the structure of the tray, the template indicating a layout of the plurality of slots; and
dividing the second image into the plurality of sub-images according to the determined template.
Example 18: the apparatus of embodiment 14, wherein extracting information from the label of the slide comprises:
detecting a region associated with the label in at least one of the sub-images; and
extracting the information based on the region associated with the label.
Example 19: the apparatus of embodiment 14, wherein extracting information from the label of the slide comprises:
dividing a sub-image into a plurality of regions, at least one region of the plurality of regions having a size corresponding to the label; and
extracting the information from at least one of the plurality of regions.
Example 20: the apparatus of embodiment 19, wherein the label comprises a two-dimensional code, and wherein extracting the information comprises:
for each of the plurality of regions:
extracting, from the region, the information encoded in the two-dimensional code of the label;
in response to a failure to extract the information:
obtaining data of a luminance channel corresponding to the region;
generating a binary image based on the data of the luminance channel;
filtering out connected regions in the binary image based on the shape or size of the connected regions; and
extracting the information from the filtered binary image.
Example 21: the apparatus of embodiment 14, wherein the actions further comprise:
storing the information in association with the first image.
Example 22: the apparatus of embodiment 21, wherein the acts further comprise:
determining an index corresponding to the sub-image containing the label, the index indicating the location of the slot on the tray that accommodates the slide; and
wherein storing the information further comprises:
storing the information in association with the index.
Example 23: the apparatus of embodiment 21, wherein storing the information further comprises:
storing the information in association with the sub-image corresponding to the slide.
Example 24: the apparatus of embodiment 14, wherein the information comprises at least one of:
data encoded by a graphical code on the label,
at least one character on the label, or
At least one symbol on the label.
Example 25: the apparatus of embodiment 14, wherein the acts further comprise:
presenting the first image and a visual element indicating a state of one of the plurality of sub-images, the state being selected from the group consisting of:
a first state, indicating that the label is contained in the corresponding sub-image and that the information indicated by the label was successfully extracted,
a second state, indicating that no label is contained in the corresponding sub-image, and
a third state, indicating that the label is contained in the corresponding sub-image but that the information indicated by the label cannot be extracted.
Example 26: the apparatus of embodiment 14, wherein the information indicates an identification of the slide, and wherein the acts further comprise:
receiving an input indicating a target identification of a slide;
obtaining a target image of the tray based on the target identification; and
presenting the target image as a response to the input.
Example 27: a computer readable storage medium having stored thereon computer executable instructions which, when executed by a processor of an apparatus, cause the apparatus to perform the steps of any of the methods according to embodiments 1-13.
Example 28: a computer program product comprising computer executable instructions which, when executed by a processor of an apparatus, cause the apparatus to perform the steps of the method according to any of embodiments 1-13.
Example 29: a scanning device, comprising:
a housing having an opening for loading a tray having a plurality of slots, the tray being capable of holding at least one slide, the slide including a label and a sample for pathology analysis; and a camera configured to capture an image of the tray to extract information from the tag.
Example 30: the apparatus of embodiment 29 further comprising a light source having configurable lighting parameters.
Example 31: the apparatus of embodiment 29, further comprising:
a processor configured to perform actions comprising: in response to the tray being loaded into the scanning device,
obtaining an image of the tray;
determining a plurality of sub-images from the image of the tray based on a structure of the tray, each of the plurality of sub-images corresponding to one of the plurality of slots; and
extracting information from the label of the slide based on at least one of the sub-images.
List of reference numerals
100 environment
110 scanning device
112 light source
113 opening
114 camera
116 tray
117 support
118 housing
120 computing device
122 display
130 storage device
140 first image
210 groove
220 slide
221 first region
222 label
223 second region
224 sample
300 example procedure
310 obtain a first image of a tray comprising a plurality of slots capable of holding slides including labels and samples for pathology analysis
320 determining a plurality of sub-images from the first image based on the structure of the tray, each sub-image of the plurality of sub-images corresponding to one of the plurality of slots
330 extract information from the label of the slide based on at least one sub-image
400 example procedure
410 convert the captured first image of the tray into a grayscale image
420 sharpen gray image
430 is the standard deviation greater than the threshold?
440 divide the enhanced second image into sub-images
450 prompt capturing another image of the tray
500 schematic diagram
510 second image
520 sub-picture
530 area
600 example procedure
610 obtaining RGB images
620 converting RGB image into gray image using L-channel of HLS
630 converting gray scale image into binary image
640 detecting edges of binary images
650 enlarge the detected edge
660 filter pixels based on connectivity
670 shape-based filtering pixels
680 filter pixels based on size
700 device
701 CPU
702 ROM
703 RAM
704 bus
705I/O interface
706 input unit
707 output unit
708 memory cell
709 communication unit

Claims (30)

1. A method for slide processing, comprising:
obtaining a first image (140) of a tray (116) comprising a plurality of slots (210), the slots (210) being capable of receiving slides (220), the slides (220) comprising labels (222) and samples (224) for pathology analysis;
determining a plurality of sub-images from the first image (140) based on a structure of the tray (116), each of the plurality of sub-images corresponding to one of the plurality of slots (210); and
extracting information from the label (222) of the slide (220) based on at least one of the sub-images.
2. The method of claim 1, wherein the first image (140) of the tray (116) is obtained by an image capturing device comprising a camera (114), a light source (112), and a housing (118) for enclosing the camera (114) and the light source (112), and wherein the tray (116) is placed in the housing (118) for image capturing.
3. The method of claim 1, wherein determining the plurality of sub-images comprises:
Obtaining a second image (510) by enhancing the first image (140); and
determining the plurality of sub-images (520) from the second image (510) based on the structure of the tray (116).
4. A method according to claim 3, wherein determining the plurality of sub-images (520) from the second image (510) comprises:
determining a template corresponding to the structure of the tray (116), the template indicating a layout of the plurality of slots (210); and
dividing the second image (510) into the plurality of sub-images (520) according to the determined template.
5. A method according to claim 3, wherein determining the plurality of sub-images (520) from the second image (510) comprises:
detecting edges of the plurality of slots (210) in the second image (510); and
determining the plurality of sub-images (520) from the second image (510) based on the edges.
6. The method of claim 1, wherein extracting information from the label (222) of the slide (220) comprises:
dividing a sub-image (520) into a plurality of regions (530), at least one region of the plurality of regions (530) having a size corresponding to the label (222); and
extracting the information from at least one of the plurality of regions (530).
7. The method of claim 6, wherein the label (222) comprises a two-dimensional code, and wherein extracting the information comprises:
for each of the plurality of regions (530):
extracting, from the region (530), the information encoded in the two-dimensional code of the label (222);
in response to a failure to extract the information:
obtaining data of a luminance channel corresponding to the region (530);
generating a binary image based on the data of the luminance channel;
filtering out connected regions in the binary image based on the shape or size of the connected regions; and
extracting the information from the filtered binary image.
8. The method of claim 1, further comprising:
storing the information in association with the first image (140).
9. The method of claim 8, further comprising:
determining an index corresponding to the sub-image containing the label (222), the index indicating the position of the slot (210) on the tray (116) housing the slide (220); and
wherein storing the information further comprises:
storing the information in association with the index.
10. The method of claim 8, wherein storing the information further comprises:
storing the information in association with the sub-image corresponding to the slide (220).
11. The method of claim 1, wherein the information comprises at least one of:
data encoded by a graphical code on the label (222),
at least one character on the label (222), or
At least one symbol on the tag (222).
12. The method of claim 1, further comprising:
presenting the first image (140) and a visual element indicating a state of one of the plurality of sub-images, the state being selected from the group consisting of:
a first state, indicating that the label (222) is contained in the corresponding sub-image and that the information indicated by the label (222) was successfully extracted,
a second state, indicating that no label (222) is contained in the corresponding sub-image, and
a third state, indicating that the label (222) is contained in the corresponding sub-image but that the information indicated by the label (222) cannot be extracted.
13. The method of claim 1, wherein the information indicates an identification of the slide (220), the method further comprising:
receiving an input indicating a target identification of a slide (220);
obtaining a target image of the tray (116) based on the target identification; and
presenting the target image as a response to the input.
14. An electronic device, comprising:
one or more processors;
one or more memories coupled to the one or more processors and having stored thereon computer-executable instructions that, when executed by the one or more processors, cause the apparatus to perform actions comprising:
obtaining a first image (140) of a tray (116) comprising a plurality of slots (210), the slots (210) being capable of receiving slides (220), the slides (220) comprising labels (222) and samples (224) for pathology analysis;
determining a plurality of sub-images from the first image (140) based on a structure of the tray (116), each of the plurality of sub-images corresponding to one of the plurality of slots (210); and
extracting information from the label (222) of the slide (220) based on at least one of the sub-images.
15. The device of claim 14, wherein the first image (140) of the tray (116) is obtained by an image capturing device comprising a camera (114), a light source (112), and a housing (118) for enclosing the camera (114) and the light source (112), and wherein the tray (116) is placed in the housing (118) for image capturing.
16. The apparatus of claim 14, wherein determining the plurality of sub-images comprises:
obtaining a second image (510) by enhancing the first image (140); and
determining the plurality of sub-images (520) from the second image (510) based on the structure of the tray (116).
17. The apparatus of claim 16, wherein determining the plurality of sub-images (520) from the second image (510) comprises:
determining a template corresponding to the structure of the tray (116), the template indicating a layout of the plurality of slots (210); and
dividing the second image (510) into the plurality of sub-images (520) according to the determined template.
18. The apparatus of claim 14, wherein extracting information from the label (222) of the slide (220) comprises:
detecting a region associated with the label (222) in at least one of the plurality of sub-images; and
extracting the information based on the region associated with the label (222).
19. The apparatus of claim 14, wherein extracting information from the label (222) of the slide (220) comprises:
dividing a sub-image into a plurality of regions, at least one of the plurality of regions having a size corresponding to the label (222); and
extracting the information from the at least one of the plurality of regions.
20. The apparatus of claim 19, wherein the label (222) comprises a two-dimensional code, and wherein extracting the information comprises:
for each of the plurality of regions:
extracting, from the region, the information encoded in the two-dimensional code of the label (222);
in response to a failure to extract the information:
obtaining data of a luminance channel corresponding to the region;
generating a binary image based on the data of the luminance channel;
filtering out connected regions in the binary image based on the shape or size of the connected regions; and
extracting the information from the filtered binary image.
21. The apparatus of claim 14, wherein the actions further comprise:
storing the information in association with the first image (140).
22. The apparatus of claim 21, wherein the actions further comprise:
determining an index corresponding to the sub-image containing the label (222), the index indicating the position of the slot (210) on the tray (116) housing the slide (220); and
wherein storing the information further comprises:
storing the information in association with the index.
23. The apparatus of claim 21, wherein storing the information further comprises:
storing the information in association with the sub-image corresponding to the slide (220).
24. The apparatus of claim 14, wherein the information comprises at least one of:
data encoded by a graphical code on the label (222),
at least one character on the label (222), or
At least one symbol on the tag (222).
25. The apparatus of claim 14, wherein the actions further comprise:
presenting the first image (140) and a visual element indicating a state of one of the plurality of sub-images, the state being selected from the group consisting of:
a first state, indicating that the label (222) is contained in the corresponding sub-image and that the information indicated by the label (222) was successfully extracted,
a second state, indicating that no label (222) is contained in the corresponding sub-image, and
a third state, indicating that the label (222) is contained in the corresponding sub-image but that the information indicated by the label (222) cannot be extracted.
26. The apparatus of claim 14, wherein the information indicates an identification of the slide (220), and wherein the actions further comprise:
receiving an input indicating a target identification of a slide (220);
obtaining a target image of the tray (116) based on the target identification; and
presenting the target image as a response to the input.
27. A computer readable storage medium having stored thereon computer executable instructions which, when executed by a processor of an apparatus, cause the apparatus to perform the steps of any of the methods according to claims 1-13.
28. A computer program product comprising computer executable instructions which, when executed by a processor of an apparatus, cause the apparatus to perform the steps of any of the methods according to claims 1-13.
29. A scanning device (110), comprising:
a housing (118) having an opening for loading a tray (116) having a plurality of slots (210), the tray (116) being capable of receiving at least one slide (220), the slide (220) including a label (222) and a sample (224) for pathology analysis;
a camera (114) configured to capture an image of the tray (116) to extract information from the label (222); and
a processor configured to perform actions comprising: in response to the tray (116) being loaded into the scanning device,
obtaining an image of the tray (116);
determining a plurality of sub-images from the image of the tray (116) based on a structure of the tray (116), each sub-image of the plurality of sub-images corresponding to one slot of the plurality of slots (210); and
extracting information from the label (222) of the slide (220) based on at least one of the sub-images.
30. The apparatus of claim 29, further comprising a light source (112) having a configurable illumination parameter.
CN202180092893.5A 2020-12-04 2021-12-03 Methods, apparatus and computer program products for slide processing Pending CN116829958A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN2020134048 2020-12-04
CNPCT/CN2020/134048 2020-12-04
PCT/CN2021/135516 WO2022117094A1 (en) 2020-12-04 2021-12-03 Method, device, and computer program product for slides processing

Publications (1)

Publication Number Publication Date
CN116829958A true CN116829958A (en) 2023-09-29

Family

ID=79282985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180092893.5A Pending CN116829958A (en) 2020-12-04 2021-12-03 Methods, apparatus and computer program products for slide processing

Country Status (2)

Country Link
CN (1) CN116829958A (en)
WO (1) WO2022117094A1 (en)


Also Published As

Publication number Publication date
WO2022117094A1 (en) 2022-06-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination