
CN112766314B - Anatomical structure recognition method, electronic device, and storage medium - Google Patents

Anatomical structure recognition method, electronic device, and storage medium

Info

Publication number
CN112766314B
CN112766314B
Authority
CN
China
Prior art keywords
anatomical structure
medical image
category
target
target medical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011625657.9A
Other languages
Chinese (zh)
Other versions
CN112766314A (en)
Inventor
高菲菲
曹晓欢
薛忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd filed Critical Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202011625657.9A priority Critical patent/CN112766314B/en
Publication of CN112766314A publication Critical patent/CN112766314A/en
Application granted granted Critical
Publication of CN112766314B publication Critical patent/CN112766314B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an anatomical structure identification method, an electronic device, and a storage medium. The identification method comprises the following steps: performing anatomical structure identification on a target medical image to obtain an initial anatomical structure category; performing part identification on the target medical image to obtain a part category; determining candidate anatomical structure categories corresponding to the part category; and correcting the initial anatomical structure category using the candidate anatomical structure categories to obtain a final anatomical structure category. According to the invention, the part category corresponding to the medical image is identified in addition to the initial anatomical structure category, and the identified initial anatomical structure category is corrected using the candidate anatomical structure categories determined from the part category, yielding the final anatomical structure category. This verifies the initial anatomical structure category, helps improve the accuracy of the final anatomical structure category, enables accurate positioning of the anatomical structure in the medical image, and avoids adverse effects.

Description

Anatomical structure recognition method, electronic device, and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an anatomical structure identification method, an electronic device, and a storage medium.
Background
A conventional method for identifying anatomical structures in medical images uses a trained anatomical structure detection model. Such a model can be trained using anatomical structure labeling criteria and image data with labeled anatomical structures, for example by training a convolutional neural network, or by constructing a conventional B-spline model. However, the recognition result of the trained anatomical structure detection model is subject to error and may therefore lead to adverse consequences.
Disclosure of Invention
The invention aims to overcome the defect of the prior art that the recognition result of an anatomical structure detection model may be wrong, and provides an anatomical structure recognition method, an electronic device, and a storage medium.
The invention solves the above technical problem by the following technical solutions:
A method of identifying an anatomical structure, comprising:
performing anatomical structure identification on a target medical image to obtain an initial anatomical structure category;
performing part identification on the target medical image to obtain a part category;
determining a candidate anatomical structure category corresponding to the part category;
and correcting the initial anatomical structure category using the candidate anatomical structure category to obtain a final anatomical structure category.
Preferably, the step of correcting the initial anatomical structure category using the candidate anatomical structure category comprises:
intersecting the candidate anatomical structure category with the initial anatomical structure category to obtain the final anatomical structure category.
Preferably, the step of performing part identification on the target medical image to obtain a part category includes:
inputting the target medical image into a part identification model to obtain a target part label range, wherein the part identification model is trained using medical images in which each layer image is labeled with a part label;
and the step of determining the candidate anatomical structure category corresponding to the part category comprises:
searching a preset dictionary using the target part label range to obtain candidate anatomical structure categories;
wherein a first correspondence exists between part labels and part categories, and the preset dictionary comprises a second correspondence between part labels and anatomical structure categories.
Preferably, the step of performing anatomical structure identification on the target medical image to obtain an initial anatomical structure category includes:
inputting the target medical image into an anatomical structure recognition model to obtain an initial anatomical structure category and an initial position range corresponding to the initial anatomical structure category, wherein the anatomical structure recognition model is trained using medical images labeled with anatomical structure labeling boxes, and the labeling information of each anatomical structure labeling box comprises the anatomical structure category and the position range of the labeling box in the medical image;
and after the step of obtaining the target part label range, the method further comprises:
determining a position range corresponding to the final anatomical structure category in the target medical image using the target part label range corresponding to the final anatomical structure category;
determining a candidate position range corresponding to the final anatomical structure category in the target medical image using the position range corresponding to the final anatomical structure category;
and obtaining the intersection of the candidate position range corresponding to the final anatomical structure category and the initial position range to obtain a final position range corresponding to the final anatomical structure category.
Preferably, the target medical image includes multiple layers of images, and the step of inputting the target medical image into the part identification model to obtain the target part label range includes:
inputting the top layer image of the target medical image into the part identification model to obtain a top layer part label;
inputting the bottom layer image of the target medical image into the part identification model to obtain a bottom layer part label;
and obtaining the target part label range from the top layer part label and the bottom layer part label.
Preferably, the target medical image includes multiple layers of images, and the step of inputting the target medical image into the part identification model to obtain the target part label range includes:
extracting a plurality of random layer images from the target medical image;
inputting the random layer images into the part identification model respectively to obtain a random layer part label for each random layer image;
fitting the random layer part labels of the random layer images against the positions of the random layer images in the target medical image to obtain a third correspondence between the part label and the layer position of a same layer image;
acquiring the top layer part label of the top layer image and the bottom layer part label of the bottom layer image of the target medical image according to the third correspondence;
and obtaining the target part label range from the top layer part label and the bottom layer part label.
Preferably, the step of fitting the random layer part labels of the random layer images and the positions of the random layer images in the target medical image comprises:
performing linear fitting or continuous piecewise linear fitting on the random layer part labels of the random layer images and the positions of the random layer images in the target medical image.
Preferably, after the step of obtaining the final position range corresponding to the final anatomical structure category, the method further comprises:
filtering the target medical image according to the final position range corresponding to the final anatomical structure category to obtain a target anatomical structure image corresponding to the final anatomical structure category.
Preferably, after the step of obtaining the target anatomical structure image corresponding to the final anatomical structure category, the method further comprises:
processing the target anatomical structure image using an algorithm corresponding to the final anatomical structure category to obtain a processing result.
An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements any of the anatomical structure identification methods described above when executing the computer program.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the anatomical structure identification methods described above.
The invention has the following positive effects: in addition to the initial anatomical structure category corresponding to the medical image, the part category corresponding to the medical image is identified, and the identified initial anatomical structure category is corrected using the candidate anatomical structure categories determined from the part category to obtain the final anatomical structure category. This verifies the initial anatomical structure category, improves the accuracy of the final anatomical structure category, enables accurate positioning of the anatomical structure in the medical image, and avoids adverse effects.
Drawings
Fig. 1 is a partial flow chart of a method of identifying an anatomical structure according to embodiment 1 of the invention.
Fig. 2 is a flowchart of step S1021 in the method for identifying an anatomical structure according to embodiment 1 of the invention.
Fig. 3 is another flowchart of step S1021 in the anatomical structure identification method according to embodiment 1 of the invention.
Fig. 4 is another partial flowchart of an anatomical structure identification method according to embodiment 1 of the invention.
Fig. 5 is a block diagram of an anatomical structure recognition system according to embodiment 2 of the invention.
Fig. 6 is a schematic structural diagram of an electronic device according to embodiment 3 of the present invention.
Detailed Description
The invention is further illustrated by means of the following examples, which are not intended to limit the scope of the invention.
Example 1
The present embodiment provides a method for identifying an anatomical structure, and referring to fig. 1, the method for identifying an anatomical structure includes:
S101, performing anatomical structure identification on a target medical image to obtain an initial anatomical structure category;
S102, performing part identification on the target medical image to obtain a part category;
S103, determining candidate anatomical structure categories corresponding to the part category;
S104, correcting the initial anatomical structure category using the candidate anatomical structure categories to obtain a final anatomical structure category.
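Steps S101 to S104 can be sketched as follows. This is a minimal, hypothetical illustration: the two recognizers are stubbed with fixed outputs, and the part-to-candidate table `REGION_TO_CANDIDATES` is invented, since the embodiment does not fix concrete models or category values.

```python
# Hypothetical sketch of steps S101-S104. The two recognition models are
# stubbed with fixed outputs; REGION_TO_CANDIDATES stands in for the
# correspondence between part categories and anatomical structure categories.

def recognize_anatomy(image):
    # S101: anatomical structure recognition (stub for a detection model)
    return {"lung", "liver"}

def recognize_part(image):
    # S102: part recognition (stub for the part identification model)
    return "chest"

REGION_TO_CANDIDATES = {
    "chest": {"lung", "heart"},
    "abdomen": {"liver", "stomach"},
}

def identify(image):
    initial = recognize_anatomy(image)        # S101
    part = recognize_part(image)              # S102
    candidates = REGION_TO_CANDIDATES[part]   # S103
    return initial & candidates               # S104: correct by intersection
```

In this sketch the spurious initial category "liver" is removed because it is not a plausible chest structure, leaving "lung" as the final anatomical structure category.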
In an embodiment, the target medical image to be identified may be acquired by a single-modality device such as a CT (computed tomography), PET (positron emission tomography), or MRI (magnetic resonance imaging) device, or by a multi-modality PET/CT or PET/MR device. In an embodiment, the anatomical structure categories correspond to organ categories, e.g., lung, heart, stomach, etc., and the part categories may be custom-defined according to the actual application, e.g., head, neck, chest, abdomen, etc. It should be appreciated that the correspondence between anatomical structure categories and part categories is relatively deterministic; e.g., the chest corresponds to organ categories such as lung and heart.
Specifically, in this embodiment, anatomical structure recognition and part recognition are performed on the target medical image separately, to obtain the initial anatomical structure category of the anatomical structure depicted in the target medical image and the part category of the body part depicted in it; the candidate anatomical structure categories corresponding to the identified part category are then determined according to the correspondence between anatomical structure categories and part categories; finally, the identified initial anatomical structure category is corrected using the determined candidate anatomical structure categories.
For example, suppose the initial anatomical structure category obtained by anatomical structure recognition of the target medical image is lung, the part category obtained by part recognition is chest, and the candidate anatomical structure categories corresponding to the chest include lung and heart; correcting the initial anatomical structure category with the candidate anatomical structure categories then yields the final anatomical structure category, lung.
Compared with performing only single recognition of the anatomical structure of the target medical image, this embodiment performs dual recognition of both the anatomical structure and the body part: anatomical structure recognition yields the initial anatomical structure category, while part recognition yields the part category used to correct it. This improves the accuracy and robustness of the final anatomical structure category obtained by the recognition method of this embodiment.
In this embodiment, steps S101 and S102 may be performed simultaneously or sequentially, which is not intended to be limiting. In addition, in this embodiment, the final anatomical structure category may be obtained by taking the intersection of the candidate anatomical structure categories and the initial anatomical structure category; step S104 may thus specifically include the step of intersecting the candidate anatomical structure categories with the initial anatomical structure category to obtain the final anatomical structure category.
In this embodiment, step S102 may specifically include:
s1021, inputting the target medical image into a part identification model to obtain a target part label range.
In this embodiment, the part identification model is trained using medical images in which each layer image is labeled with a part label.
Specifically, in this embodiment, a plurality of anatomical key points may be used as marker points for dividing the human body, and the marker points may be numbered. For example, when the number of marker points is N+1 (where N is an integer), the marker points may be denoted L_0, L_1, ..., L_(N-1), L_N, dividing the human body into N part categories. The number of marker points can be customized according to the actual application.
On this basis, a template person is simulated from a large amount of data and annotated with the N+1 preset marker points, so that the anatomical proportions P_0 : ... : P_(N-1) between the N+1 marker points are obtained, where P_0 : ... : P_(N-1) = (L_1 - L_0) : ... : (L_N - L_(N-1)). The part labels (T_0, T_1, ..., T_(N-1), T_N) corresponding to the N+1 marker points are then constructed from these proportions, such that (T_1 - T_0) : ... : (T_N - T_(N-1)) = P_0 : ... : P_(N-1).
Further, when a medical image includes multiple layers of images, the part label corresponding to each layer image is obtained by piecewise linear distribution of the part labels of the adjacent marker points. For example, if K+1 layers of images lie between adjacent marker points L_x and L_(x+1), the part label corresponding to the k-th of these layers is T_x + (T_(x+1) - T_x) * (k/K), where k = 0, ..., K.
Thus, in this embodiment, the part labels corresponding to the marker points are fixed across different medical images, and a first correspondence exists between part labels and part categories. On this basis, the part labels of each layer of a medical image to be used for training can be annotated, and the part identification model of this embodiment is then trained on medical images whose layers are so labeled. The part identification model preferably adopts a regression model; the loss function used during training may include, but is not limited to, MSE, Huber, and Log-Cosh; and the model may be established by a conventional method or by a deep learning method.
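The piecewise linear distribution of part labels between adjacent marker points can be sketched as follows; the label values are arbitrary and serve only to illustrate the interpolation formula.

```python
def interpolate_labels(t_x, t_x1, K):
    """Part labels of the K+1 layer images lying between adjacent marker
    points with labels t_x and t_x1: T_x + (T_(x+1) - T_x) * (k/K)."""
    return [t_x + (t_x1 - t_x) * (k / K) for k in range(K + 1)]

# five layers between two markers labeled 0.0 and 1.0
labels = interpolate_labels(0.0, 1.0, 4)
```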
In this embodiment, the input of the part identification model is the target medical image, and the output is the target part label corresponding to the target medical image. More precisely, the input of the part identification model is a single layer image contained in the target medical image and the output is the target part label corresponding to that single layer image; that is, the input of the part identification model may be a 2D medical image or a 2.5D medical image.
In this embodiment, when the target medical image (e.g., a 2D or 2.5D image) is a single layer image, the target part label obtained by inputting it into the part identification model constitutes the target part label range corresponding to the target medical image. When the target medical image (e.g., a 3D image) includes multiple layers of images, each layer image may be input into the part identification model to obtain the target part label corresponding to that layer, yielding the target part label range corresponding to the target medical image; alternatively, only the top layer part label of the top layer image and the bottom layer part label of the bottom layer image may be obtained.
Specifically, in one aspect, referring to fig. 2, step S1021 may include:
S1021-11, inputting a top image in the target medical image into a part identification model to obtain a top part label;
S1021-12, inputting a bottom image in the target medical image into a part identification model to obtain a bottom part label;
s1021-13, obtaining a target part label range according to the top part label and the bottom part label.
Specifically, the top layer image, located at the first layer of the multi-layer image, is input into the part identification model to obtain a top layer part label T_top, and the bottom layer image, located at the last layer, is input into the part identification model to obtain a bottom layer part label T_bottom; the target medical image then corresponds to the target part label range [T_top, T_bottom].
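A minimal sketch of steps S1021-11 to S1021-13 follows, with a hypothetical stand-in for the part identification model (a trained model would map a 2D slice to a learned scalar label; here the label is simply proportional to the layer index).

```python
def part_label_model(layer):
    # Hypothetical stand-in for the trained part identification model:
    # the label is simply proportional to the layer's z index.
    return layer["z"] * 10

def target_label_range(volume):
    t_top = part_label_model(volume[0])       # S1021-11: top layer image
    t_bottom = part_label_model(volume[-1])   # S1021-12: bottom layer image
    return (t_top, t_bottom)                  # S1021-13: [T_top, T_bottom]

volume = [{"z": z} for z in range(11)]        # an 11-layer toy volume
label_range = target_label_range(volume)
```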
On the other hand, referring to fig. 3, step S1021 may include:
s1021-21, extracting a plurality of random layer images from the target medical image;
s1021-22, respectively inputting a plurality of random layer images into a part recognition model to obtain a random layer part label of each random layer image;
S1021-23, fitting random layer position labels of a plurality of random layer images and positions of the plurality of random layer images in the target medical image to obtain a third corresponding relation between the position labels of the same layer of images and the positions of the positions;
s1021-24, acquiring a top layer position label of a top layer image and a bottom layer position label of a bottom layer image in the target medical image according to a third corresponding relation;
S1021-25, obtaining a target position label range according to the top position label and the bottom position label.
Specifically, the position of each random layer image in the target medical image is known. After each random layer image is input into the part identification model, a random layer part label is obtained. The list of random layer part labels of all random layer images is then fitted against the list of their layer positions to obtain a third correspondence between the part label and the layer position of a same layer image. Based on this third correspondence, the top layer part label T_top corresponding to the top layer image and the bottom layer part label T_bottom corresponding to the bottom layer image are obtained, and hence the target part label range [T_top, T_bottom] corresponding to the target medical image.
In this embodiment, the random layer part labels of the random layer images may be fitted against the positions of the random layer images in the target medical image by linear fitting or by continuous piecewise linear fitting, according to the actual application; this embodiment is not intended to be limiting in this respect.
Compared with the implementation in which the top layer image and the bottom layer image are input directly into the part identification model to obtain the top and bottom part labels, this indirect way of obtaining the top and bottom part labels is more robust, and the resulting target part label range is more accurate.
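The linear-fitting variant (steps S1021-21 to S1021-25) can be sketched in pure Python as a least-squares line fit; the layer positions and observed labels below are invented for illustration.

```python
def fit_line(positions, labels):
    """Least-squares line label = a * position + b through the
    (layer position, random layer part label) pairs."""
    n = len(positions)
    mx = sum(positions) / n
    my = sum(labels) / n
    a = sum((x - mx) * (y - my) for x, y in zip(positions, labels)) \
        / sum((x - mx) ** 2 for x in positions)
    return a, my - a * mx

def top_bottom_labels(positions, labels, n_layers):
    """Extrapolate the fitted line to the top (layer 0) and bottom
    (layer n_layers - 1) images: the third correspondence in use."""
    a, b = fit_line(positions, labels)
    return b, a * (n_layers - 1) + b

# random layers 3, 7, and 12 of a 21-layer volume, with observed labels
t_top, t_bottom = top_bottom_labels([3, 7, 12], [0.3, 0.7, 1.2], 21)
```

Because the fit averages over several observations, a single noisy model output perturbs T_top and T_bottom less than it would in the direct top/bottom variant.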
In this embodiment, let the number of anatomical structure categories to be identified be M (where M is a positive integer). From the simulated template person, the part label range [T_i, T_j] corresponding to each anatomical structure category O_m can be determined, where m = 0, ..., M-1, T_i denotes the part label at the start of the range corresponding to the category, and T_j denotes the part label at its end. A preset dictionary of part label ranges per anatomical structure category is then built from this second correspondence between part labels and anatomical structure categories. On this basis, step S103 in this embodiment may specifically include the step of searching the preset dictionary using the target part label range to obtain the candidate anatomical structure categories.
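Searching the preset dictionary with a target part label range can be sketched as an interval-overlap test. The label ranges below are invented for illustration; the patent derives them from the simulated template person.

```python
# Hypothetical preset dictionary: anatomical structure category -> [T_i, T_j]
ANATOMY_LABEL_RANGES = {
    "brain": (0.00, 0.10),
    "lung":  (0.30, 0.55),
    "heart": (0.35, 0.50),
    "liver": (0.50, 0.70),
}

def candidate_categories(t_top, t_bottom):
    """S103: categories whose label range overlaps [t_top, t_bottom]."""
    return {
        name
        for name, (t_i, t_j) in ANATOMY_LABEL_RANGES.items()
        if t_i <= t_bottom and t_j >= t_top   # interval overlap test
    }

cands = candidate_categories(0.32, 0.52)
```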
Based on the above, the embodiment realizes the dual recognition of the initial anatomical structure category and the part category corresponding to the target medical image, corrects the initial anatomical structure category based on the candidate anatomical structure category corresponding to the part category, and is beneficial to improving the accuracy and the robustness of the final anatomical structure category obtained by the recognition method of the embodiment.
Further, in addition to identifying the anatomical structure category in the target medical image, this embodiment can also locate the anatomical structure to which the identified category refers, for example by means of a labeling (bounding) box. Specifically, step S101 in this embodiment may include the step of inputting the target medical image into the anatomical structure recognition model to obtain an initial anatomical structure category and an initial position range corresponding to the initial anatomical structure category.
In this embodiment, the anatomical structure recognition model is trained using medical images labeled with anatomical structure labeling boxes, where the labeling information of each box comprises the anatomical structure category and the position range of the box in the medical image. The input of the anatomical structure recognition model is the target medical image, and the output is the initial anatomical structure category corresponding to the target medical image together with the initial position range corresponding to that category; the input may be a 2D medical image or a 3D medical image.
In this embodiment, the anatomical structure recognition model preferably adopts an object detection model performing a classification task and a position regression task. During training, the loss function for the classification task may include, but is not limited to, cross-entropy and focal loss, and the loss function for the position regression task may include, but is not limited to, MAE, MSE, and IoU. The anatomical structure recognition model may be established by a conventional method or by a deep learning method. It should be understood that the part identification model and the anatomical structure recognition model in this embodiment are trained separately.
Furthermore, since each layer image in the target medical image corresponds to a part label, the position range in the target medical image of the anatomical structure referred to by the final anatomical structure category can be determined from the target part label range corresponding to that category. The embodiment thereby achieves dual positioning of the anatomical structure in the target medical image, which helps improve the accuracy and robustness of the final position range corresponding to the final anatomical structure category.
Specifically, referring to fig. 4, the present embodiment may further include, after step S1021:
S105, determining the position range corresponding to the final anatomical structure category in the target medical image using the target part label range corresponding to the final anatomical structure category;
S106, determining the candidate position range corresponding to the final anatomical structure category in the target medical image using the position range corresponding to the final anatomical structure category;
S107, obtaining the intersection of the candidate position range corresponding to the final anatomical structure category and the initial position range to obtain the final position range corresponding to the final anatomical structure category.
In this embodiment, when the target medical image contains a single layer image, the candidate position range determined by the target part label is that single layer image itself. When the target medical image contains multiple layers of images, the target medical image has a target part label range [R_top, R_bottom] and a corresponding layer position range [D_0, D_total] (where bottom - top = total). The final anatomical structure category in turn has a target part label range [R_origin, R_end], with [R_origin, R_end] contained in [R_top, R_bottom], and this label range corresponds to a layer position range [D_start, D_finish] in the target medical image, with [D_start, D_finish] contained in [D_0, D_total] and end - origin = finish - start. In this way the candidate position range corresponding to the final anatomical structure category is obtained, and the initial position range output by the anatomical structure recognition model can then be corrected.
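The mapping from a part label sub-range to a layer position range, followed by the intersection of step S107, can be sketched as follows. It assumes, as above, that labels vary linearly with layer position; all numeric values are illustrative.

```python
def label_range_to_layer_range(r_top, r_bottom, d_total, r_origin, r_end):
    """Map a label sub-range [r_origin, r_end] of [r_top, r_bottom] onto
    the corresponding sub-range of the layer range [0, d_total] (S105/S106)."""
    scale = d_total / (r_bottom - r_top)
    return ((r_origin - r_top) * scale, (r_end - r_top) * scale)

def intersect_ranges(a, b):
    """S107: intersection of two closed ranges, or None if disjoint."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# candidate layer range for a category whose label range is [0.30, 0.55]
candidate = label_range_to_layer_range(0.0, 1.0, 100, 0.30, 0.55)
# initial position range from the anatomical structure recognition model
final = intersect_ranges(candidate, (28.0, 50.0))
```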
Referring to fig. 4, the present embodiment further includes, after step S107:
S108, filtering the target medical image according to the final position range corresponding to the final anatomical structure category to obtain a target anatomical structure image corresponding to the final anatomical structure category;
S109, processing the target anatomical structure image using an algorithm corresponding to the final anatomical structure category to obtain a processing result.
Specifically, this embodiment can determine a target anatomical structure category from the final anatomical structure categories, then accurately crop the target anatomical structure image out of the target medical image based on the final position range corresponding to that category, removing as much image data irrelevant to the algorithm to be invoked as possible, so that subsequent algorithms can be invoked accurately.
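Filtering the target medical image down to the final position range (step S108) amounts to cropping the layer stack. A minimal sketch, using a plain list of layers as a stand-in for a 3D volume:

```python
def crop_to_range(volume, d_start, d_finish):
    """S108: keep only the layers inside the final position range, so the
    organ-specific algorithm of S109 sees no irrelevant image data."""
    return volume[d_start:d_finish + 1]

volume = list(range(100))                 # stand-in: one entry per layer image
target_image = crop_to_range(volume, 30, 50)
```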
Example 2
The present embodiment provides an anatomical structure recognition system, referring to fig. 5, the recognition system of the present embodiment includes:
A first recognition module 101, configured to perform anatomical structure recognition on the target medical image to obtain an initial anatomical structure category;
the second recognition module 102 is configured to perform location recognition on the target medical image to obtain a location category;
a first determining module 103, configured to determine a candidate anatomical structure category corresponding to the location category;
A first correction module 104 is configured to correct the initial anatomical structure category by using the candidate anatomical structure category to obtain a final anatomical structure category.
In an embodiment, the target medical image to be identified may be a medical image acquired by a single-modality device such as CT (Computed Tomography), PET (Positron Emission Computed Tomography), or MRI (Magnetic Resonance Imaging), or by a multi-modality device such as PET/CT or PET/MR. In an embodiment, the anatomical structure categories correspond to organ categories, e.g., lung, heart, stomach, etc., and the part categories may be custom-partitioned according to the actual application, e.g., head, neck, chest, abdomen, etc. It should be appreciated that the correspondence between anatomical structure categories and part categories is relatively deterministic, e.g., the chest corresponds to organ categories such as lung, heart, etc.
Specifically, in this embodiment, anatomical structure recognition and location recognition are performed on the target medical image respectively, so as to obtain an initial anatomical structure category of an anatomical structure corresponding to the target medical image and a location category of a location corresponding to the target medical image; then, according to the corresponding relation between the anatomical structure category and the position category, determining the candidate anatomical structure category corresponding to the identified position category; and finally correcting the identified initial anatomical structure category by utilizing the determined candidate anatomical structure category.
For example, the initial anatomical structure category obtained by performing anatomical structure recognition on the target medical image is lung, the part category obtained by performing part recognition is chest, and the candidate anatomical structure categories corresponding to the chest include lung and heart; correcting the initial anatomical structure category with the candidate anatomical structure categories yields the final anatomical structure category, which is lung.
Compared with performing only single recognition of the anatomical structure on the target medical image, this embodiment performs dual recognition of both the anatomical structure and the body part, where the anatomical structure recognition aims at obtaining the initial anatomical structure category, and the part recognition aims at obtaining the part category and correcting the initial anatomical structure category. This is beneficial to improving the accuracy and robustness of the final anatomical structure category obtained by the recognition system of this embodiment.
In this embodiment, the first recognition module 101 and the second recognition module 102 may be invoked simultaneously or sequentially, which is not limited here. Furthermore, in this embodiment, the final anatomical structure category may be obtained by taking the intersection of the candidate anatomical structure category and the initial anatomical structure category; in this case, the first correction module 104 may be specifically configured to intersect the candidate anatomical structure category with the initial anatomical structure category to obtain the final anatomical structure category.
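As an illustrative sketch of this intersection step (function names and category values are hypothetical, not from the patent), the correction can be expressed as a simple set intersection:

```python
def correct_categories(initial, candidates):
    """Final categories = intersection of the initial anatomical structure
    categories and the candidate categories implied by the part category."""
    return sorted(set(initial) & set(candidates))

# Part recognition says "chest", whose candidates are lung and heart;
# anatomy recognition proposed lung and (spuriously) stomach:
print(correct_categories({"lung", "stomach"}, {"lung", "heart"}))  # ['lung']
```

The intersection discards any initial category that is anatomically implausible for the recognized part, which is exactly the corrective effect described above.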
In this embodiment, the second recognition module 102 may be specifically configured to input the target medical image into the position recognition model to obtain the target position label range.
In this embodiment, the position recognition model is obtained by training medical images in which each layer of images is labeled with a position label.
Specifically, in this embodiment, a plurality of anatomical key points may be used as marker points for dividing the human body structure, and the marker points may be numbered. For example, when the number of marker points is N+1 (where N is an integer), the marker points may be denoted as L_0, L_1, …, L_(N-1), L_N, dividing the human body structure into N part categories. The number of marker points can be set in a self-defined manner according to the practical application.
On this basis, a template person is simulated from a large amount of data and labeled with the N+1 preset marker points, so that the anatomical structure proportions (P_0 : … : P_(N-1)) between the N+1 marker points are obtained, where P_0 : … : P_(N-1) = (L_1 − L_0) : … : (L_N − L_(N-1)). The position labels (T_0, T_1, …, T_(N-1), T_N) corresponding to the N+1 marker points are then constructed based on these proportions, where (T_1 − T_0) : … : (T_N − T_(N-1)) = P_0 : … : P_(N-1).
Further, when the medical image includes multiple layers of images, the position label corresponding to each layer is obtained by piecewise linear distribution of the position labels corresponding to adjacent marker points. For example, if (K+1) layers of images lie between adjacent marker points L_x and L_(x+1), the position label corresponding to the k-th of these layers is T_x + (T_(x+1) − T_x) × (k/K), where k = 0, …, K.
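The piecewise linear distribution described above can be sketched as follows (a minimal illustration; the function name and example values are assumptions, not from the patent):

```python
def slice_labels(t_x, t_x1, num_layers):
    """Position labels for the (K+1) layers lying between adjacent marker
    points with labels t_x and t_x1, where K = num_layers - 1.
    Implements T_x + (T_(x+1) - T_x) * (k / K) for k = 0..K."""
    K = num_layers - 1
    return [t_x + (t_x1 - t_x) * k / K for k in range(num_layers)]

# Five layers between marker labels 2.0 and 3.0:
print(slice_labels(2.0, 3.0, 5))  # [2.0, 2.25, 2.5, 2.75, 3.0]
```

Repeating this between each pair of adjacent marker points yields a label for every layer of the training image, which is what the regression model is trained against.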
Thus, in this embodiment, the position labels corresponding to the marker points are fixed across different medical images, and there is a first correspondence between position labels and part categories. On this basis, each layer of a medical image to be trained can be labeled with a position label, and the medical images whose layers are labeled with position labels are then used to train the position recognition model of this embodiment. The position recognition model preferably adopts a regression model; the loss function adopted during training may include, but is not limited to, MSE, Huber, Log-Cosh, etc. In addition, the position recognition model may be established by a traditional method or a deep learning method.
In this embodiment, the input of the position recognition model is the target medical image, and the output is the target position label corresponding to the target medical image. Further, in this embodiment, the input of the position recognition model may be a single-layer image included in the target medical image, with the output being the target position label corresponding to that single-layer image; that is, the input of the position recognition model may be a 2D or 2.5D medical image.
In this embodiment, when the target medical image (e.g., 2D image or 2.5D image) is a single-layer image, the target portion label obtained by inputting the single-layer target medical image into the portion recognition model is the target portion label range corresponding to the target medical image. When the target medical image (e.g., 3D image) includes multiple layers of images, each layer of image included in the target medical image may be input into the position recognition model to obtain a target position label corresponding to each layer of image, so as to obtain a target position label range corresponding to the target medical image, or obtain a top layer position label corresponding to a top layer image and a bottom layer position label corresponding to a bottom layer image of the target medical image.
Specifically, in one aspect, the second identification module 102 may include:
the first recognition unit is used for inputting a top image in the target medical image into the position recognition model to obtain a top position label;
the second recognition unit is used for inputting the bottom image in the target medical image into the position recognition model to obtain a bottom position label;
the first determining unit is used for obtaining a target part label range according to the top part label and the bottom part label.
Specifically, after the top layer image located at the first layer of the multi-layer image is input into the position recognition model, the top position label T_top is obtained; after the bottom layer image located at the last layer of the multi-layer image is input into the position recognition model, the bottom position label T_bottom is obtained. The target medical image thus corresponds to the target position label range [T_top, T_bottom].
In another aspect, the second identification module 102 may include:
the extraction unit is used for extracting a plurality of random layer images from the target medical image;
The third recognition unit is used for inputting a plurality of random layer images into the position recognition model respectively to obtain random layer position labels of each random layer image;
the fitting unit is used for fitting the random layer position labels of the plurality of random layer images and the layer positions of the plurality of random layer images in the target medical image to obtain a third correspondence between the position label of a layer image and its layer position;
The second determining unit is used for acquiring a top layer position label of a top layer image and a bottom layer position label of a bottom layer image in the target medical image according to the third corresponding relation;
And the third determining unit is used for obtaining the target part label range according to the top part label and the bottom part label.
Specifically, for each random layer image, its layer position in the target medical image is known. After the random layer images are input into the position recognition model, random layer position labels are obtained; the list of random layer position labels corresponding to all the random layer images and the list of their layer positions are then fitted to obtain the third correspondence between a layer's position label and its layer position. Based on this correspondence, the top position label T_top corresponding to the top layer image and the bottom position label T_bottom corresponding to the bottom layer image are obtained, and on this basis the target position label range [T_top, T_bottom] corresponding to the target medical image is obtained.
In this embodiment, the random layer position labels of the plurality of random layer images and the layer positions of the plurality of random layer images in the target medical image may be linearly fitted or continuously piecewise linearly fitted according to the practical application, which is not limited in this embodiment.
Compared with the implementation in which the top and bottom position labels are obtained by directly inputting the top and bottom layer images into the position recognition model, this way of indirectly obtaining the top and bottom position labels through fitting has better robustness, and the resulting target position label range is more accurate.
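Assuming an ordinary least-squares line fit (one of the fitting choices mentioned above; the function name and numbers are illustrative, not from the patent), the indirect derivation of the top and bottom position labels might look like:

```python
import numpy as np

def label_range_from_random_layers(layer_indices, predicted_labels, num_layers):
    """Fit label ≈ a * layer_index + b from a few random layers, then
    extrapolate to the top layer (index 0) and the bottom layer
    (index num_layers - 1)."""
    a, b = np.polyfit(layer_indices, predicted_labels, deg=1)
    t_top = a * 0 + b
    t_bottom = a * (num_layers - 1) + b
    return t_top, t_bottom

# Labels predicted by the model on 4 random layers of a 101-layer image:
t_top, t_bottom = label_range_from_random_layers(
    [10, 30, 60, 90], [1.1, 3.1, 6.1, 9.1], 101)
```

Because the fit averages over several layers, a single noisy prediction at the very top or bottom layer no longer determines the whole range, which is the robustness benefit noted above.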
In this embodiment, the number of anatomical structure categories to be identified is set to M (where M is a positive integer). From the simulated template person, the position label range [T_i, T_j] corresponding to each anatomical structure category O_m can be determined, where m = 0, …, M−1, T_i denotes the position label of the starting end corresponding to the anatomical structure category, and T_j denotes the position label of the terminal end. A preset dictionary of anatomical structure categories and position labels is then established based on the second correspondence between position labels and anatomical structure categories. On this basis, the first determining module 103 in this embodiment may be specifically configured to search the preset dictionary using the target position label range to obtain the candidate anatomical structure categories.
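A minimal sketch of such a preset dictionary lookup, assuming overlap between label ranges as the matching criterion (the categories and numeric ranges shown are invented for illustration, not taken from the patent):

```python
def lookup_candidates(preset_dict, target_range):
    """Candidate anatomical structure categories are those whose label
    range [t_i, t_j] overlaps the target position label range."""
    lo, hi = target_range
    return sorted(cat for cat, (t_i, t_j) in preset_dict.items()
                  if t_i <= hi and t_j >= lo)

# Hypothetical preset dictionary mapping categories to label ranges:
preset = {"lung": (2.0, 4.0), "heart": (2.5, 3.5), "stomach": (4.2, 5.0)}
print(lookup_candidates(preset, (2.2, 3.8)))  # ['heart', 'lung']
```

Any category whose label range lies entirely outside the target range (here, stomach) is excluded from the candidates used to correct the initial recognition result.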
Based on the above, the dual recognition of the initial anatomical structure category and the part category corresponding to the target medical image is realized, and the initial anatomical structure category is corrected based on the candidate anatomical structure category corresponding to the part category, which is beneficial to improving the accuracy and the robustness of the final anatomical structure category obtained by the recognition system of the embodiment.
Further, in addition to identifying the anatomical structure category in the target medical image, this embodiment can also locate the anatomical structure pointed to by the identified anatomical structure category, for example by means of a labeling box. Specifically, the first recognition module 101 in this embodiment may be specifically configured to input the target medical image into the anatomical structure recognition model to obtain the initial anatomical structure category and the initial position range corresponding to the initial anatomical structure category.
In this embodiment, the anatomical structure recognition model is obtained by training medical images labeled with anatomical structure labeling boxes, where the labeling information of an anatomical structure labeling box includes the anatomical structure category and the position range of the labeling box in the medical image. In this embodiment, the input of the anatomical structure recognition model is the target medical image, and the output is the initial anatomical structure category corresponding to the target medical image and the initial position range corresponding to the initial anatomical structure category; the input of the anatomical structure recognition model may be a 2D or 3D medical image.
In this embodiment, the anatomical structure recognition model preferably adopts a target detection model that performs a classification task and a position regression task. During training of the anatomical structure recognition model, the loss function adopted by the classification task may include, but is not limited to, Cross Entropy, Focal Loss, etc., and the loss function adopted by the position regression task may include, but is not limited to, MAE, MSE, IoU, etc. In addition, the anatomical structure recognition model may be established by a traditional method or a deep learning method. It should be understood that the position recognition model and the anatomical structure recognition model in this embodiment are trained separately.
Furthermore, each layer of image in the target medical image corresponds to a position label; therefore, based on the target position label range corresponding to the final anatomical structure category, the position range in the target medical image of the anatomical structure pointed to by the final anatomical structure category can be determined. On this basis, this embodiment realizes double positioning of the anatomical structure in the target medical image, which is beneficial to improving the accuracy and robustness of the final position range corresponding to the final anatomical structure category.
Specifically, referring to fig. 5, the identification system of the present embodiment may further include:
a second determining module 105, configured to determine a position range of the final anatomical structure category in the target medical image using the target position label range corresponding to the final anatomical structure category;
a third determining module 106, configured to determine a candidate position range corresponding to the final anatomical structure category in the target medical image using the position range of the final anatomical structure category in the target medical image;
The second correction module 107 is configured to intersect the candidate location range corresponding to the final anatomical structure category with the initial location range, so as to obtain a final location range corresponding to the final anatomical structure category.
In this embodiment, when the target medical image is a single-layer image, the candidate position range determined by the target position label is the single-layer target medical image itself. When the target medical image includes multiple layers of images, the target medical image corresponds to a target position label range [R_top, R_bottom] and a position range [D_0, D_total] (where bottom − top = total); the final anatomical structure category corresponds to a target position label range [R_origin, R_end] (where [R_origin, R_end] ⊆ [R_top, R_bottom]), which in turn corresponds to a position range [D_start, D_finish] in the target medical image (where [D_start, D_finish] ⊆ [D_0, D_total] and end − origin = finish − start). The candidate position range corresponding to the final anatomical structure category can thereby be obtained, and the initial position range output by the anatomical structure recognition model can then be corrected.
Referring to fig. 5, the identification system of the present embodiment further includes:
The filtering module 108 is configured to filter the target medical image according to a final position range corresponding to the final anatomical structure category, so as to obtain a target anatomical structure image corresponding to the final anatomical structure category;
the processing module 109 is configured to process the target anatomical structure image by using an algorithm corresponding to the final anatomical structure class, so as to obtain a processing result.
Specifically, this embodiment can determine a target anatomical structure category from the final anatomical structure categories, and then accurately extract the target anatomical structure image from the target medical image based on the final position range corresponding to the target anatomical structure category, removing to the greatest extent image data irrelevant to the invoked algorithm and thereby enabling accurate invocation of the subsequent algorithm.
Example 3
The present embodiment provides an electronic device, which may be embodied in the form of a computing device (for example, a server device), including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the anatomical structure recognition method provided in embodiment 1 when executing the computer program.
Fig. 6 shows a schematic diagram of the hardware structure of the present embodiment, and as shown in fig. 6, the electronic device 9 specifically includes:
At least one processor 91, at least one memory 92, and a bus 93 for connecting the different system components (including the processor 91 and the memory 92), wherein:
The bus 93 includes a data bus, an address bus, and a control bus.
The memory 92 includes volatile memory such as Random Access Memory (RAM) 921 and/or cache memory 922, and may further include Read Only Memory (ROM) 923.
Memory 92 also includes a program/utility 925 having a set (at least one) of program modules 924, such program modules 924 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The processor 91 executes various functional applications and data processing, such as the anatomical structure recognition method provided in embodiment 1 of the present invention, by running a computer program stored in the memory 92.
The electronic device 9 may further communicate with one or more external devices 94 (e.g., keyboard, pointing device, etc.). Such communication may occur through an input/output (I/O) interface 95. Also, the electronic device 9 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through a network adapter 96. The network adapter 96 communicates with other modules of the electronic device 9 via the bus 93. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in connection with the electronic device 9, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, data backup storage systems, and the like.
It should be noted that although several units/modules or sub-units/modules of an electronic device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more units/modules described above may be embodied in one unit/module in accordance with embodiments of the present application. Conversely, the features and functions of one unit/module described above may be further divided into ones that are embodied by a plurality of units/modules.
Example 4
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the anatomical structure recognition method provided by embodiment 1.
More specifically, the readable storage medium may include, but is not limited to: a portable disk, hard disk, random access memory, read-only memory, erasable programmable read-only memory, optical storage device, magnetic storage device, or any suitable combination of the foregoing.
In a possible embodiment, the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps of the anatomical structure recognition method described in embodiment 1 when the program product is run on the terminal device.
Wherein the program code for carrying out the invention may be written in any combination of one or more programming languages, which program code may execute entirely on the user device, partly on the user device, as a stand-alone software package, partly on the user device and partly on the remote device or entirely on the remote device.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that this is by way of example only, and the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the principles and spirit of the invention, but such changes and modifications fall within the scope of the invention.

Claims (9)

1. A method of identifying an anatomical structure, comprising:
performing anatomical structure recognition on the target medical image to obtain an initial anatomical structure category;
Performing part identification on the target medical image to obtain a part category;
determining a candidate anatomical structure category corresponding to the location category;
correcting the initial anatomical structure category by using the candidate anatomical structure category to obtain a final anatomical structure category;
the step of correcting the initial anatomical structure category using the candidate anatomical structure category comprises:
And intersecting the candidate anatomical structure category with the initial anatomical structure category to obtain a final anatomical structure category.
2. The method of claim 1, wherein the step of performing part identification on the target medical image to obtain a part category comprises:
Inputting the target medical image into a part recognition model to obtain a target part label range, wherein the part recognition model is obtained by training medical images in which each layer of images is marked with a part label;
the step of determining the candidate anatomical structure category corresponding to the part category comprises:
Searching a preset dictionary by utilizing the target part label range to obtain candidate anatomical structure categories;
wherein a first correspondence exists between the part label and the part category, and the preset dictionary comprises a second correspondence between the part label and the anatomical structure category.
3. The method of claim 2, wherein the step of identifying the anatomical structure from the target medical image to obtain an initial anatomical structure class comprises:
Inputting the target medical image into an anatomical structure recognition model to obtain an initial anatomical structure category and an initial position range corresponding to the initial anatomical structure category, wherein the anatomical structure recognition model is obtained by training a medical image marked with an anatomical structure marking frame, and marking information of the anatomical structure marking frame comprises the anatomical structure category and the position range of the anatomical structure marking frame in the medical image;
after the step of obtaining the target part label range, the method further comprises:
determining a position range of the final anatomical structure category in the target medical image using the target part label range corresponding to the final anatomical structure category;
Determining a candidate position range corresponding to the final anatomical structure category in the target medical image by utilizing the position range corresponding to the final anatomical structure category in the target medical image;
And obtaining an intersection of the candidate position range corresponding to the final anatomical structure category and the initial position range to obtain a final position range corresponding to the final anatomical structure category.
4. The method of claim 2, wherein the target medical image comprises a multi-layer image, and wherein the step of inputting the target medical image into the part recognition model to obtain the target part label range comprises:
Inputting a top layer image in the target medical image into the part recognition model to obtain a top part label;
Inputting a bottom layer image in the target medical image into the part recognition model to obtain a bottom part label;
and obtaining the target part label range according to the top part label and the bottom part label.
5. The method of claim 2, wherein the target medical image comprises a multi-layer image, and wherein the step of inputting the target medical image into the part recognition model to obtain the target part label range comprises:
extracting a plurality of random layer images from the target medical image;
Respectively inputting the plurality of random layer images into the part recognition model to obtain a random layer part label of each random layer image;
Fitting the random layer part labels of the plurality of random layer images and the layer positions of the plurality of random layer images in the target medical image to obtain a third correspondence between the part label of a layer image and its layer position;
Acquiring a top part label of a top layer image and a bottom part label of a bottom layer image in the target medical image according to the third correspondence;
and obtaining the target part label range according to the top part label and the bottom part label.
6. The method of claim 5, wherein the step of fitting the random layer part labels of the plurality of random layer images and the layer positions of the plurality of random layer images in the target medical image comprises:
Performing linear fitting or continuous piecewise linear fitting on the random layer part labels of the plurality of random layer images and the layer positions of the plurality of random layer images in the target medical image.
7. The method of identifying an anatomical structure according to claim 3, further comprising, after the step of obtaining the final position range corresponding to the final anatomical structure category:
And filtering the target medical image according to the final position range corresponding to the final anatomical structure category to obtain a target anatomical structure image corresponding to the final anatomical structure category.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of identifying an anatomical structure according to any one of claims 1 to 7 when executing the computer program.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, carries out the steps of the method of identifying an anatomical structure according to any one of claims 1 to 7.
CN202011625657.9A 2020-12-31 2020-12-31 Anatomical structure recognition method, electronic device, and storage medium Active CN112766314B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011625657.9A CN112766314B (en) 2020-12-31 2020-12-31 Anatomical structure recognition method, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011625657.9A CN112766314B (en) 2020-12-31 2020-12-31 Anatomical structure recognition method, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN112766314A CN112766314A (en) 2021-05-07
CN112766314B true CN112766314B (en) 2024-05-28

Family

ID=75698928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011625657.9A Active CN112766314B (en) 2020-12-31 2020-12-31 Anatomical structure recognition method, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN112766314B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344926B (en) * 2021-08-05 2021-11-02 武汉楚精灵医疗科技有限公司 Method, device, server and storage medium for recognizing biliary-pancreatic ultrasonic image
CN118096772A (en) * 2024-04-29 2024-05-28 西安交通大学医学院第一附属医院 Anatomical part recognition system, control method, medium, equipment and terminal

Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1906634A (en) * 2003-11-19 2007-01-31 Siemens Corporate Research, Inc. System and method for detecting and matching anatomical structures using appearance and shape
TW201019905A (en) * 2008-08-27 2010-06-01 Ibm System and method for automatic recognition and labeling of anatomical structures and vessels in medical imaging scans
CN101855649A (en) * 2007-11-14 2010-10-06 Koninklijke Philips Electronics N.V. Method for automatically correcting mis-orientation of medical images
CN102428469A (en) * 2009-05-19 2012-04-25 Koninklijke Philips Electronics N.V. Retrieving and viewing medical images
CN108542351A (en) * 2018-01-26 2018-09-18 Xuzhou Yunlian Medical Technology Co., Ltd. Synchronous display system for medical tomographic images and three-dimensional anatomical images
CN109074665A (en) * 2016-12-02 2018-12-21 Avent, Inc. System and method for navigating to a target anatomical object in medical-imaging-based procedures
CN109671036A (en) * 2018-12-26 2019-04-23 Shanghai United Imaging Healthcare Co., Ltd. Image correction method, apparatus, computer device, and storage medium
CN109754396A (en) * 2018-12-29 2019-05-14 Shanghai United Imaging Intelligence Co., Ltd. Image registration method, apparatus, computer device, and storage medium
CN109800805A (en) * 2019-01-14 2019-05-24 Shanghai United Imaging Intelligence Co., Ltd. Artificial-intelligence-based image processing system and computer device
CN110023995A (en) * 2016-11-29 2019-07-16 Koninklijke Philips N.V. Cardiac segmentation method for cardiac motion correction
CN110136103A (en) * 2019-04-24 2019-08-16 Ping An Technology (Shenzhen) Co., Ltd. Medical image interpretation method, apparatus, computer device, and storage medium
CN110335259A (en) * 2019-06-25 2019-10-15 Tencent Technology (Shenzhen) Co., Ltd. Medical image recognition method, apparatus, and storage medium
CN110363760A (en) * 2019-07-22 2019-10-22 Guangdong University of Technology Computer system for medical image recognition
CN110378876A (en) * 2019-06-18 2019-10-25 Ping An Technology (Shenzhen) Co., Ltd. Deep-learning-based image recognition method, apparatus, device, and storage medium
CN110490841A (en) * 2019-07-18 2019-11-22 Shanghai United Imaging Intelligence Co., Ltd. Computer-aided image analysis method, computer device, and storage medium
CN110689521A (en) * 2019-08-15 2020-01-14 Manteia Data Technology Co., Ltd. (Xiamen Area, Fujian Free Trade Zone) Automatic identification method and system for the human body part to which a medical image belongs
KR20200012707A (en) * 2019-02-20 2020-02-05 Kim Ye-hyun Method for predicting anatomical landmarks and device for predicting anatomical landmarks using the same
CN110914866A (en) * 2017-05-09 2020-03-24 HeartFlow, Inc. Systems and methods for anatomical segmentation in image analysis
CN111160367A (en) * 2019-12-23 2020-05-15 Shanghai United Imaging Intelligence Co., Ltd. Image classification method, apparatus, computer device, and readable storage medium
CN111709485A (en) * 2020-06-19 2020-09-25 Tencent Technology (Shenzhen) Co., Ltd. Medical image processing method, apparatus, and computer device
CN112001925A (en) * 2020-06-24 2020-11-27 Shanghai United Imaging Healthcare Co., Ltd. Image segmentation method, radiation therapy system, computer device, and storage medium
CN112037200A (en) * 2020-08-31 2020-12-04 Shanghai Jiao Tong University Method for automatically identifying anatomical features and reconstructing models in medical images
CN112037164A (en) * 2019-06-03 2020-12-04 Ruichuan Data Co., Ltd. Body part identification method and device in medical images
CN112102235A (en) * 2020-08-07 2020-12-18 Shanghai United Imaging Intelligence Co., Ltd. Human body part recognition method, computer device, and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7702137B2 (en) * 2004-11-10 2010-04-20 M2S, Inc. Anatomical visualization and measurement system
US7648460B2 (en) * 2005-08-31 2010-01-19 Siemens Medical Solutions Usa, Inc. Medical diagnostic imaging optimization based on anatomy recognition
US20110311116A1 (en) * 2010-06-17 2011-12-22 Creighton University System and methods for anatomical structure labeling
US11232319B2 (en) * 2014-05-16 2022-01-25 The Trustees Of The University Of Pennsylvania Applications of automatic anatomy recognition in medical tomographic imagery based on fuzzy anatomy models
DE102016204225B3 (en) * 2016-03-15 2017-07-20 Friedrich-Alexander-Universität Erlangen-Nürnberg Method for automatic recognition of anatomical landmarks and device
US10452899B2 (en) * 2016-08-31 2019-10-22 Siemens Healthcare Gmbh Unsupervised deep representation learning for fine-grained body part recognition

Also Published As

Publication number Publication date
CN112766314A (en) 2021-05-07

Similar Documents

Publication Publication Date Title
CN110473203B (en) medical image segmentation
CN111160367B (en) Image classification method, apparatus, computer device, and readable storage medium
US8437521B2 (en) Systems and methods for automatic vertebra edge detection, segmentation and identification in 3D imaging
CN110059697B (en) Automatic lung nodule segmentation method based on deep learning
CN111311655B (en) Multi-mode image registration method, device, electronic equipment and storage medium
CN118334070A (en) System and method for anatomical segmentation in image analysis
US8818057B2 (en) Methods and apparatus for registration of medical images
EP1895468A2 (en) Medical image processing apparatus
US8295568B2 (en) Medical image display processing apparatus and medical image display processing program
CN112950651A (en) Automatic delineation method of mediastinal lymph drainage area based on deep learning network
CN112766314B (en) Anatomical structure recognition method, electronic device, and storage medium
EP3444824B1 (en) Detecting and classifying medical images based on continuously-learning whole body landmarks detections
JP7101809B2 (en) Image processing equipment, image processing methods, and programs
CN113168914B (en) Interactive iterative image annotation
CN112037146A (en) Medical image artifact automatic correction method and device and computer equipment
US11620747B2 (en) Method and system for image segmentation using a contour transformer network model
CN113313699A (en) X-ray chest disease classification and positioning method based on weak supervised learning and electronic equipment
CN115424691A (en) Case matching method, system, device and medium
CN113240638B (en) Target detection method, device and medium based on deep learning
EP4327333A1 (en) Methods and systems for automated follow-up reading of medical image data
CN115049660A (en) Method and device for positioning characteristic points of cardiac anatomical structure
CN114066905A (en) Medical image segmentation method, system and device based on deep learning
CN113177923B (en) Medical image content identification method, electronic equipment and storage medium
CN112561894B (en) Intelligent electronic medical record generation method and system for CT image
CN115239740A (en) GT-UNet-based full-center segmentation algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant