
CN112907517B - Image processing method, device, computer equipment and storage medium - Google Patents

Image processing method, device, computer equipment and storage medium

Info

Publication number
CN112907517B
CN112907517B (application CN202110115340.9A)
Authority
CN
China
Prior art keywords
target
target object
information
region
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110115340.9A
Other languages
Chinese (zh)
Other versions
CN112907517A (en)
Inventor
王娜 (Wang Na)
刘星龙 (Liu Xinglong)
黄宁 (Huang Ning)
张少霆 (Zhang Shaoting)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Shangtang Shancui Medical Technology Co., Ltd.
Original Assignee
Shanghai Shangtang Shancui Medical Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Shangtang Shancui Medical Technology Co., Ltd.
Priority to CN202110115340.9A
Publication of CN112907517A
Priority to PCT/CN2021/118044 (published as WO2022160731A1)
Application granted
Publication of CN112907517B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/24323 Tree-organised classifiers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an image processing method, apparatus, computer device, and storage medium, wherein the method includes: acquiring a first image of a first target object; acquiring region size information of a target region in the first image; acquiring first position information of a second target object in the target region; determining first feature data based on the region size information and the first position information, wherein the first feature data comprise data on the positional relationship of the second target object relative to the target region; and determining target location information of the second target object in the first target object based on the first feature data. The image processing method provided by the embodiments of the present disclosure can process images such as medical images more efficiently, so that the specific positions of features such as lesions reflected in the images can be determined more quickly.

Description

Image processing method, device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of machine learning technologies, and in particular, to an image processing method, an image processing apparatus, a computer device, and a storage medium.
Background
Deep learning (DL) refers to a class of methods that combine low-level features to form more abstract high-level representations of attribute categories or features, in order to discover distributed feature representations of data. It can perform feature recognition on information such as images, text, and sound, and interpret the content of that information based on the recognized features, so it has a wide range of application scenarios. For example, in the medical field, deep learning algorithms may be used to process medical images to obtain the specific information reflected by the images. At present, however, processing medical images with deep learning algorithms suffers from low efficiency.
Disclosure of Invention
The embodiment of the disclosure at least provides an image processing method, an image processing device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including: acquiring a first image of a first target object; acquiring region size information of a target region in the first image; acquiring first position information of a second target object in the target area; determining first feature data based on the region size information and the first position information, wherein the first feature data comprises data of a position relation of the second target object relative to the target region; and determining target location information of the second target object in the first target object based on the first feature data.
In this way, the specific position of the second target object within the first target object is determined based on the relative positional relationship between the first target object and the second target object. By exploiting the physiological structure of the human body, the method can quickly determine the specific position of the second target object from the image, and is therefore more efficient.
In an optional embodiment, the determining, based on the first feature data, target location information of the second target object in the first target object includes: determining first part information of a target part of the second target object in the first target object based on the first characteristic data, wherein the target part comprises a plurality of sub parts; and determining second part information of a target sub-part where the second target object is located from the plurality of sub-parts as the target part information based on the first feature data and the first part information.
Thus, the target part information of the second target object on the first target object is determined step by step, which is more reliable.
In an alternative embodiment, the acquiring the first image of the first target object includes resampling the first image based on a preset resolution.
In this way, the first image is resampled to the preset resolution, which reduces the amount of computation when the relative positional relationship between the first target object and the second target object is determined based on the first image, lowering the computational load and improving efficiency.
In an alternative embodiment, the acquiring the region size information of the target region in the first image includes: region size information of the target region is determined based on the resampled first image.
In an alternative embodiment, the acquiring the first location information of the second target object in the target area includes: acquiring second position information of the second target object in the first image; and determining the first position information of the second target object in an area coordinate system established based on the target area based on third position information of the target area in the first image and second position information of the second target object.
In this way, converting the position information of the second target object in the first image into the region coordinate system helps characterize the specific position of the second target object within the first target object.
In an alternative embodiment, the determining the first position information of the second target object in the area coordinate system established based on the target area based on the third position information of the target area in the first image and the second position information of the second target object includes: determining conversion relation information between the region coordinate system and a world coordinate system established based on the first image based on third position information of the target region in the first image and the first image; and determining first position information of the second target object in an area coordinate system established based on the target area based on second position information of the second target object in the first image and the conversion relation information.
In this way, by converting the position information in world coordinates into the region coordinate system and determining the first position information of the second target object in the region coordinate system established based on the target region, operations on negative coordinate values that may exist in the world coordinate system are avoided, which reduces the complexity of the image processing.
In an alternative embodiment, the region size information includes at least one of: the target region has a region height, a region width, a region depth, and a region centerline length.
In an alternative embodiment, the first location information includes: three-dimensional position information of a center point of the second target object in a region coordinate system established based on the target region and a distance between the center point and an origin of the region coordinate system.
Thus, the specific position of the second target object in the region coordinate system can be determined from the first position information, which improves the accuracy of the subsequently determined first feature data and, in turn, the accuracy of the subsequently determined first part information and second part information.
In an alternative embodiment, determining, based on the first feature data, the first part information of the target part of the second target object in the first target object includes: classifying the first feature data by using a first classifier to obtain the first part information of the target part of the second target object in the first target object.
Therefore, the first feature data are classified by the first classifier to obtain the first part information; the computational difficulty is lower, and the result is more accurate.
In an optional embodiment, the determining, based on the first feature data and the first location information, second location information of a target sub-location where the second target object is located from the plurality of sub-locations includes: forming second feature data based on the first feature data and the first part information; and classifying the second characteristic data by using a second classifier to obtain second part information of the target sub-part where the second target object is located.
In this way, the classification result of the first classifier and the first feature data are combined to form the second feature data, and the second classifier determines, based on the second feature data, the second part information of the target sub-part where the second target object is located; the processing speed is high, and the result is more accurate.
In an alternative embodiment, determining the first feature data based on the region size information and the first location information includes: and carrying out normalization processing on the first position information based on the region size information to obtain the first characteristic data.
In this way, normalizing the first target object in the first image to a more uniform size reduces errors in determining the specific location of the second target object in the first target object due to differences between individuals.
In a second aspect, an embodiment of the present disclosure further provides an image processing apparatus, including:
The first acquisition module is used for acquiring a first image of a first target object;
the second acquisition module is used for acquiring the region size information of the target region in the first image;
a third acquisition module, configured to acquire first position information of a second target object in the target area;
A determining module, configured to determine first feature data based on the region size information and the first position information, where the first feature data includes data of a positional relationship of the second target object with respect to the target region; and determining target location information of the second target object in the first target object based on the first feature data.
In an alternative embodiment, the determining module, when determining, based on the first feature data, target location information of the second target object in the first target object, is configured to: determining first part information of a target part of the second target object in the first target object based on the first characteristic data, wherein the target part comprises a plurality of sub parts; and determining second part information of a target sub-part where the second target object is located from the plurality of sub-parts as the target part information based on the first feature data and the first part information.
In an alternative embodiment, the first acquisition module is configured to, when acquiring a first image of a first target object, resample the first image based on a preset resolution.
In an alternative embodiment, the second obtaining module is configured to, when obtaining the region size information of the target region in the first image: region size information of the target region is determined based on the resampled first image.
In an alternative embodiment, the third obtaining module is configured to, when obtaining the first position information of the second target object in the target area: acquiring second position information of the second target object in the first image; and determining the first position information of the second target object in an area coordinate system established based on the target area based on third position information of the target area in the first image and second position information of the second target object.
In an alternative embodiment, the third obtaining module is configured to, when determining, based on third location information of the target area in the first image and second location information of the second target object, the first location information of the second target object in an area coordinate system established based on the target area: determining conversion relation information between the region coordinate system and a world coordinate system established based on the first image based on third position information of the target region in the first image and the first image; and determining first position information of the second target object in an area coordinate system established based on the target area based on second position information of the second target object in the first image and the conversion relation information.
In an alternative embodiment, the region size information includes at least one of: the target region has a region height, a region width, a region depth, and a region centerline length.
In an alternative embodiment, the first location information includes: three-dimensional position information of a center point of the second target object in a region coordinate system established based on the target region and a distance between the center point and an origin of the region coordinate system.
In an alternative embodiment, the determining module, when determining, based on the first feature data, the first part information of the target part of the second target object in the first target object, is configured to: classify the first feature data by using a first classifier to obtain the first part information of the target part of the second target object in the first target object.
In an optional embodiment, the determining module is configured to, when determining, based on the first feature data and the first location information, second location information of a target sub-location where the second target object is located from a plurality of sub-locations in the target location: forming second feature data based on the first feature data and the first part information; and classifying the second characteristic data by using a second classifier to obtain the second part information of the target sub-part where the second target object is located.
In an alternative embodiment, the determining module is configured to, when determining the first feature data based on the region size information and the first location information: and carrying out normalization processing on the first position information based on the region size information to obtain the first characteristic data.
In a third aspect, an optional implementation of the present disclosure further provides a computer device including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when executed by the processor, the machine-readable instructions perform the steps in the first aspect or any possible implementation of the first aspect.
In a fourth aspect, an alternative implementation of the present disclosure further provides a computer readable storage medium having stored thereon a computer program which when executed performs the steps of the first aspect, or any of the possible implementation manners of the first aspect.
For a description of the effects of the image processing apparatus, the computer device, and the storage medium, refer to the description of the image processing method above; it is not repeated here.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below; they are incorporated in and constitute a part of the specification, show embodiments consistent with the present disclosure, and together with the description serve to illustrate the technical solutions of the present disclosure. It should be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other related drawings from them without inventive effort.
FIG. 1 illustrates a flow chart of an image processing method provided by an embodiment of the present disclosure;
FIG. 2 (a) shows a schematic diagram of a first image provided by an embodiment of the present disclosure;
Fig. 2 (b) shows a schematic diagram of a projection frame on a two-dimensional plane that indicates the minimum bounding box of a target area, according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of a specific method for image processing of lung CT in the image processing method according to an embodiment of the present disclosure;
Fig. 4 shows a schematic diagram of an image processing apparatus provided by an embodiment of the present disclosure;
Fig. 5 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the disclosed embodiments generally described and illustrated herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
It has been found that in modern medicine, medical images are typically processed using deep learning models. When the deep learning model is used for processing the medical image, the neural network is used for extracting the image characteristics of the medical image, and the specific position of the target object in the organ is determined according to the image characteristics.
In addition, in order to distinguish different parts within a human organ, the organ is generally divided into multiple levels. For example, the lung parenchyma of a human body contains 5 lung lobes, which are subdivided into 18 lung segments. When determining the position of a target object in a human organ based on a medical image, a deep learning algorithm is currently generally used to extract image features of the medical image, and the specific position of the target object in the organ is determined based on the extracted features. However, because medical images are generally obtained by tomographic scanning of the human body, and because the complexity of human organ structure causes different parts of the same organ tissue to overlap in the images, processing medical images with a deep learning model suffers from low accuracy when determining the specific position of a target object in an organ.
Based on the above study, the present disclosure provides an image processing method, an apparatus, a computer device, and a storage medium, which utilize a relative positional relationship between a first target object and a second target object to determine a specific position of the second target object in the first target object, and utilize a physiological structure of a human body to quickly determine the specific position of the second target object from an image, thereby having higher efficiency.
In addition, when determining the target part information of the target part of the second target object in the first target object by using first feature data capable of representing the relative positional relationship between the first target object and the second target object, the embodiments of the present disclosure first determine the first part information of the target part of the second target object in the first target object by using the first feature data, and then determine the second part information of the target sub-part where the second target object is located based on the first part information and the first feature data; the specific position of the second target object is thus determined step by step, with higher efficiency.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
To facilitate understanding of the present embodiment, an image processing method disclosed in an embodiment of the present disclosure is first described in detail. The execution subject of the image processing method provided by the embodiments of the present disclosure is generally a computer device with certain computing capability, for example a terminal device, a server, or another processing device. The terminal device may be a user equipment (UE), a mobile device, a user terminal, a cellular telephone, a cordless telephone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the image processing method may be implemented by a processor invoking computer-readable instructions stored in a memory.
An image processing method provided by an embodiment of the present disclosure is described below.
Referring to fig. 1, a flowchart of an image processing method according to an embodiment of the disclosure is shown, where the method includes steps S101 to S104, where:
S101: acquiring a first image of a first target object;
S102: acquiring region size information of a target region in a first image;
S103: acquiring first position information of a second target object in a target area;
S104: determining first feature data based on the region size information and the first position information, wherein the first feature data comprises data of a position relation of the second target object relative to the target region; and determining target location information of the second target object in the first target object based on the first feature data.
In the embodiments of the present disclosure, the first feature data representing the relative positional relationship between the first target object and the second target object are determined by using the region size information of the target region where the first target object is located and the first position information of the second target object in the target region; the target part information of the target part of the second target object in the first target object is then determined by using the first feature data. The specific position of the second target object is thereby determined quickly from the image, with higher efficiency.
The following describes the above-mentioned S101 to S104 in detail.
In the embodiment of the present disclosure, a medical image obtained by scanning a lung is taken as a first image as an example, and an image processing method provided in the embodiment of the present disclosure is described in detail.
For S101 above, the first target object is the lung (a respiratory organ of the human body, including the lung parenchyma and other structures). Because the specific position of the lung in the human body, its actual physical and chemical indexes, and so on cannot be determined directly, computed tomography (CT) is generally used to scan the first target object, and the medical image obtained by scanning is then used to determine the relevant information of the lung. When the first target object is scanned by CT, the scanned medical image is the first image, for example a lung CT image. Referring to Fig. 2 (a), a schematic diagram of a first image according to an embodiment of the disclosure is provided.
For S102, after the first image of the first target object is obtained, the region where the lung parenchyma is located may be determined from the first image as the target region, and the region size information of the target region may be acquired. Since the lung parenchyma in the first image is irregular in shape and is a three-dimensional object in the human body, a minimum bounding box (a three-dimensional box) can be used to determine the spatial region where the first target object is located when determining the region where the lung parenchyma is located; that is, the target region includes the region indicated by the minimum bounding box of the lung parenchyma.
Referring to fig. 2 (b), a schematic diagram of a projection frame on a two-dimensional plane indicating a minimum bounding box of a target area according to an embodiment of the present disclosure is provided, including a projection frame 21.
In particular, since the lung is a three-dimensional object in the human body, when determining the target region for the lung parenchyma, the target region of the lung parenchyma can be characterized by three-dimensional size information of the lung parenchyma.
When determining the target region, because the absorption and transmittance of the lung parenchyma with respect to the scanning radiation differ from those of other parts of the human body, the target region where the lung parenchyma is located is reflected in the two-dimensional plane image of the first image through image brightness, shadow contour lines, and the like; that is, the region where the lung parenchyma is located can be determined from the first image by image processing. For example, the first image may be processed using a semantic segmentation network, a target detection network, or the like, and the region of interest (ROI) where the lung parenchyma is located may be determined from the first image, so as to obtain the region size information of the target region where the lung parenchyma is located. Meanwhile, since data in the depth direction (along the normal vector of the two-dimensional plane image of the first image) are obtained when the first image is acquired by CT scanning, the size of the minimum bounding box in the depth direction can also be determined. At this point, the target region of the lung parenchyma in the first image can be determined.
In the embodiments of the present disclosure, when determining the ROI region of the lung parenchyma from the first image, the region containing the lung parenchyma may be identified in the first image as the target region by determining the boundary points of the ROI region in the first image.
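As a concrete illustration of this step, the following minimal Python sketch (assuming the lung parenchyma has already been segmented into a binary voxel mask, for example by a semantic segmentation network; the function and variable names are hypothetical) computes the minimum bounding box of the target region from the foreground voxels:

    import numpy as np

    def lung_bounding_box(mask):
        # Voxel indices (z, y, x) of the lung parenchyma in the binary mask.
        coords = np.argwhere(mask > 0)
        if coords.size == 0:
            raise ValueError("mask contains no foreground voxels")
        # The boundary points of the ROI give the minimum axis-aligned box.
        min_corner = coords.min(axis=0)
        max_corner = coords.max(axis=0)
        return min_corner, max_corner

The two returned corners identify the target region; their differences along each axis give the region depth, height, and width used below.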
The target region may be determined based on three-dimensional size information. Specifically, the three-dimensional coordinate values of several vertices of the ROI region in a world coordinate system determined based on the first image may be used as the three-dimensional size information characterizing the target region; alternatively, the three-dimensional coordinate values of several vertices of the ROI region in world coordinates together with the side lengths of its edges may be used as the three-dimensional size information.
Here, the three-dimensional size information of the target region in the world coordinate system is the third position information, described in the embodiments of the present disclosure, of the target region where the lung parenchyma is located in the first image.
The world coordinate system is established according to a certain standard when the first image is obtained by scanning the human body; for example, it is established with the center of the human body as the origin, or with the scanning center of the scanning apparatus as the origin.
In one embodiment, the region size information of the target region may be determined directly based on the three-dimensional size information of the ROI region.
Wherein the region size information includes at least one of: region height, region width, region depth, and region centerline length of the target region.
In addition, in another example of the present disclosure, when the region size information is determined, the coordinate values of some pixel points in the world coordinate system may be negative in the obtained first image, depending on how the world coordinate system is established. To reduce the complexity of the image processing, in this embodiment, after the target region where the lung parenchyma is located is determined, a region coordinate system may be established based on the target region, the target region may be converted from the world coordinate system into the region coordinate system, and the region size information of the target region may be determined in the region coordinate system.
For example, when establishing a region coordinate system corresponding to the target region, the region coordinate system may be established using the vertex of the upper left corner of the target region as the origin of the region coordinate system, and the region size information may be determined using the region coordinate system.
In one possible implementation manner, after the first image is acquired, the first image may be resampled based on the preset resolution, and then the region size information of the target region is determined based on the resampled first image, so as to reduce the subsequent calculation amount when determining the relative position relationship between the first target object and the second target object based on the first image.
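As an illustration of this resampling step, the following sketch (assuming the CT volume is held as a numpy array with a known voxel spacing; the helper names and the choice of scipy are assumptions, since the disclosure does not prescribe a library) resamples the first image to a preset resolution and maps a point into the resampled image:

    import numpy as np
    from scipy.ndimage import zoom

    def resample_to_spacing(volume, spacing, new_spacing=(1.0, 1.0, 1.0), order=1):
        # Zoom factors that bring the volume from its native voxel spacing
        # (mm per voxel, z/y/x) to the preset resolution; order=1 is linear.
        factors = np.asarray(spacing) / np.asarray(new_spacing)
        return zoom(volume, factors, order=order), factors

    def to_resampled(point, factors):
        # Map a voxel coordinate from the original image into the resampled
        # image (e.g. second position information -> fourth position information).
        return np.asarray(point) * factors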
For S103 above, the second target object may be, for example, another object located in the lung parenchyma; for example, the second target object may include a nodule or a foreign body in the lung parenchyma. By determining the first position information of the second target object in the target region, the specific position of the second target object in the lung parenchyma can be determined more accurately.
In a specific implementation, the following manner may be used in determining the first location information, for example: acquiring second position information of a second target object in the first image; and determining first position information of the second target object in an area coordinate system established based on the target area based on third position information of the target area in the first image and second position information of the second target object.
In this case, when determining the first position information of the second target object, it is easier to determine the position information of the second target object and of the target region in the first image. Therefore, in this embodiment, the second position information of the second target object in the first image and the third position information of the target region in the first image are determined first; the second position information determined in the first image is then converted into the first position information in the target region by using conversion relationship information determined from the second position information and the third position information.
Specifically, for example, the target detection network may be used to perform target detection processing on the first image, so as to obtain second position information of the second target object in the first image. At this time, the second position information includes, for example, area information of an area occupied by the second target object in the first image, such as a center point position coordinate of the second target object in the first image, an area size of an area where the second target object is located, and the like. Then, first position information of the second target object in an area coordinate system established based on the target area is determined by using second position information of the second target object in the first image and third position information of the target area in the first image.
Here, for example, conversion relation information between the region coordinate system and the world coordinate system established based on the first image may be determined based on third position information of the target region in the first image and the first image; and determining first position information of the second target object in an area coordinate system established based on the target area based on second position information of the second target object in the first image and conversion relation information.
Here, the first location information may include, for example: three-dimensional position information of a center point of the second target object in a region coordinate system established based on the target region and a distance between the center point and an origin of the region coordinate system.
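For an axis-aligned bounding box, the conversion relationship between the world coordinate system and the region coordinate system reduces to a translation by the region origin. A minimal sketch follows; the vertex chosen as origin and the numeric coordinates are illustrative, not values from the disclosure:

    import numpy as np

    def world_to_region(point_world, region_origin_world):
        # Translate world coordinates into the region coordinate system whose
        # origin is a chosen vertex of the target region's bounding box.
        return np.asarray(point_world) - np.asarray(region_origin_world)

    # First position information of the second target object: the center point
    # in region coordinates and its distance to the region origin.
    center_region = world_to_region((95.0, -40.0, 210.0), (35.0, -120.0, 150.0))
    dis = float(np.linalg.norm(center_region))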
In another possible case, where the first image is resampled to the preset resolution, the second position information of the second target object in the first image may also be transformed to obtain fourth position information of the second target object in the resampled first image. The first position information of the second target object in the target region is then obtained based on the fourth position information of the second target object in the resampled first image and the position information of the target region in the first image resampled to the preset resolution.
For S104, when determining the first feature data based on the region size information and the first position information, for example, the following manner may be adopted: and carrying out normalization processing on the first position information based on the region size information to obtain first characteristic data.
For example, the ratio of the corresponding parameters in the area size information and the first position information may be calculated, so as to normalize the first position information.
The first feature data can represent the relative position information of the second target object and the first target object. This avoids the problem that, because the sizes of the target regions of first target objects in first images differ between individuals, the target part information determined for second target objects on different first target objects would otherwise be inaccurate.
For example, assume the region size information includes: the region height H, region width W, region depth D, and region centerline length Dis of the target region.
The first location information includes: three-dimensional position information (x, y, z) of a center point of the second target object in a region coordinate system established based on the target region, and a distance dis between the center point and an origin of the region coordinate system.
The first characteristic data includes: x/W, y/H, z/D, dis/Dis.
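A worked sketch of this normalization (the numbers are arbitrary placeholders, not values from the disclosure):

    import numpy as np

    def first_feature_data(center_region, dis, W, H, D, Dis):
        # Normalize the first position information by the region size
        # information to obtain the first feature data.
        x, y, z = center_region
        return np.array([x / W, y / H, z / D, dis / Dis])

    # e.g. a nodule center at (120, 80, 60) in a 240 x 200 x 180 region:
    features = first_feature_data((120.0, 80.0, 60.0), dis=155.6,
                                  W=240.0, H=200.0, D=180.0, Dis=310.0)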
The obtained first feature data are then used in the subsequent determination of the target part information.
When determining the target part information of the target part of the second target object in the first target object based on the first feature data, one difficulty is that human organs are divided into multi-level structures and are therefore relatively complex, so directly determining the target part of the second target object may have low accuracy.
Therefore, in order to improve the accuracy of determining the target part of the second target object, the embodiments of the present disclosure may determine the target part information of the target part of the second target object in the first target object, for example, in the following manner:
Determining first part information of a target part of the second target object in the first target object based on the first characteristic data, wherein the target part comprises a plurality of sub parts; and determining second part information of a target sub-part where the second target object is located from the plurality of sub-parts as the target part information based on the first feature data and the first part information.
Illustratively, the first target object is divided into a plurality of parts, and each part includes a plurality of sub-parts. For example, when the first target object is a lung, it includes five lung lobes, each of which is a part: the left upper lobe, the left lower lobe, the right upper lobe, the right middle lobe, and the right lower lobe.
Each lung lobe, in turn, includes a plurality of lung segments, each lung segment being a sub-site on the lung lobe.
For example, the left upper lobe includes 4 lung segments: the apicoposterior segment, the anterior segment, the superior lingular segment, and the inferior lingular segment of the left upper lobe.
Determining the first part information of the target part of the second target object in the first target object based on the first feature data thus narrows down the specific position of the second target object.
Here, for example, the first feature data may be classified by using a first classifier, so as to obtain first location information of a target location of the second target object in the first target object.
Here, the first classifier includes, for example, but is not limited to, at least one of: decision tree, support vector machine.
The first classifier is, for example, pre-trained using sample data. For example, a plurality of medical images containing similar first target objects may be obtained in advance, and the first part information of the second target object in the first target object is labeled for each medical image. Then, first sample feature data of each medical image are obtained in a manner similar to S101 to S103 of the image processing method provided by the embodiments of the present disclosure, and the first classifier is trained using the first sample feature data and the first part information labeled for each medical image.
After the first feature data is obtained, the first feature data can be input into a first classifier to obtain first part information of a target part of the second target object in the first target object.
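The disclosure leaves the classifier open (e.g. a decision tree or a support vector machine); a minimal scikit-learn sketch with an SVM, using the 4-dimensional feature vector above, might look as follows. The training data, kernel choice, and variable names are all assumptions for illustration:

    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical first sample feature data (x/W, y/H, z/D, dis/Dis) from
    # labeled medical images, with lobe codes 0..4 as the first part labels.
    X_train = np.array([[0.30, 0.25, 0.70, 0.45],
                        [0.35, 0.70, 0.30, 0.55],
                        [0.75, 0.20, 0.75, 0.50],
                        [0.70, 0.50, 0.50, 0.40],
                        [0.72, 0.80, 0.25, 0.60]])
    y_lobe = np.array([0, 1, 2, 3, 4])

    first_classifier = SVC(kernel="rbf")
    first_classifier.fit(X_train, y_lobe)

    # Classify the first feature data of a new image.
    features = np.array([0.32, 0.28, 0.68, 0.47])
    lobe_code = int(first_classifier.predict(features.reshape(1, -1))[0])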
Then, based on the first feature data and the first part information, second feature data is formed; and classifying the second characteristic data by using a second classifier to obtain second part information of the target sub-part where the second target object is located.
Here, the second classifier includes, for example, but is not limited to, at least one of: decision tree, support vector machine.
The second classifier is, for example, pre-trained using sample data. For example, a plurality of medical images containing similar first target objects may be obtained in advance, and for each medical image, the first part information of the second target object in the first target object and the second part information of the target sub-part where the second target object is located are labeled. Then, in a manner similar to S101 to S103 of the image processing method provided by the embodiments of the present disclosure, first sample feature data of each medical image are obtained, second sample feature data of each medical image are formed using the first sample feature data and the first part information labeled for each medical image, and the second classifier is trained using the second sample feature data and the second part information labeled for each medical image.
After training to obtain the second classifier, the second classifier can be utilized to classify the second feature data obtained based on the first image, so as to obtain second part information of the target sub-part of the second target object in the first target object.
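Continuing the sketch above, one plausible way to form the second feature data from the first feature data and the first part information is simply to append the lobe code to the feature vector; the disclosure does not fix the exact composition, so this is an assumption:

    # Second sample feature data: first sample features plus the labeled
    # first part (lobe) information, giving 5-dimensional vectors.
    X_train2 = np.column_stack([X_train, y_lobe])
    y_segment = np.array([10, 14, 0, 3, 5])      # hypothetical segment codes

    second_classifier = SVC(kernel="rbf")
    second_classifier.fit(X_train2, y_segment)

    # Classify: append the first classifier's output to the new features.
    x2 = np.append(features, lobe_code).reshape(1, -1)
    segment_code = int(second_classifier.predict(x2)[0])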
The second part information is the target part information of the second target object in the first target object.
For example, the first part information determined by the first classifier is: the left upper lobe; the second part information is: the apicoposterior segment of the left upper lobe. That is, the target part information of the second target object in the first target object is: the apicoposterior segment of the left upper lobe.
In the embodiment of the disclosure, the first classifier and the second classifier may be trained separately or jointly, and specifically determined according to actual needs.
Referring to fig. 3, an embodiment of the present disclosure also provides a flowchart of a specific method of image processing of lung CT. Wherein the first target object is a lung; the second target object is a lung nodule; the first image is lung CT; the method comprises the following steps:
S301: preprocessing the lung CT image; the preprocessing includes at least one of the following: determining the target region where the lung parenchyma is located, and resampling to a preset resolution.
The method of preprocessing is described in detail in S101, and is not described herein.
S302: carrying out feature extraction and processing on the lung CT image to obtain the first feature data.
The method of feature extraction and processing is described in detail in the above-mentioned S101 and S102, and will not be described here again.
S303: inputting the first feature data into a first SVM classifier, and determining the lung lobe where the lung nodule is located in the lung CT image by using the first SVM classifier, to obtain first part information of the lung lobe where the lung nodule is located.
When the first classifier determines the lung lobe where the lung nodule is located in the lung CT image, the first classifier encodes the 5 lung lobes; the correspondence between lobes and numbers is: left upper lobe-0, left lower lobe-1, right upper lobe-2, right middle lobe-3, right lower lobe-4. Illustratively, when the first classifier determines that the lung nodule is in the left upper lobe, the output first part information code is 0, indicating that the classification result determined by the first classifier is that the lung nodule is in the left upper lobe.
S304: forming second feature data based on the first feature data and the first part information, inputting the second feature data into a second SVM classifier, and determining the specific lung segment where the lung nodule is located in the lung CT image by using the second SVM classifier, to obtain second part information of the lung segment where the lung nodule is located.
When the second classifier determines the lung segment where the lung nodule is located in the lung CT image, the second classifier encodes the 18 lung segments; the correspondence between segments and numbers is, for example: right upper lobe apical segment-0, right upper lobe posterior segment-1, right upper lobe anterior segment-2, right middle lobe lateral segment-3, right middle lobe medial segment-4, right lower lobe superior segment-5, right lower lobe medial basal segment-6, right lower lobe anterior basal segment-7, right lower lobe lateral basal segment-8, right lower lobe posterior basal segment-9, left upper lobe apicoposterior segment-10, left upper lobe anterior segment-11, left upper lobe superior lingular segment-12, left upper lobe inferior lingular segment-13, left lower lobe superior segment-14, left lower lobe anteromedial basal segment-15, left lower lobe lateral basal segment-16, left lower lobe posterior basal segment-17. For example, when the first classifier determines that the lung nodule is in the left upper lobe and the second classifier determines that it is in the apicoposterior segment, the output second part information code is 10, indicating that the classification result determined by the second classifier is that the lung nodule is in the apicoposterior segment of the left upper lobe.
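In code, such an encoding is naturally a lookup table. Only the lobe code 0 and the segment code 10 are pinned down by the text above; the remaining assignments in this sketch are illustrative:

    LOBE_NAMES = {0: "left upper lobe", 1: "left lower lobe",
                  2: "right upper lobe", 3: "right middle lobe",
                  4: "right lower lobe"}

    SEGMENT_NAMES = {10: "left upper lobe apicoposterior segment",
                     11: "left upper lobe anterior segment",
                     12: "left upper lobe superior lingular segment",
                     13: "left upper lobe inferior lingular segment"}

    def decode(lobe_code, segment_code):
        # Translate the two classifier outputs into target part information.
        return LOBE_NAMES[lobe_code], SEGMENT_NAMES[segment_code]

    print(decode(0, 10))  # ('left upper lobe', 'left upper lobe apicoposterior segment')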
Through the above process, the specific position of the lung nodule, that is, the target part information, can be determined quickly and accurately.
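Tying S301 to S304 together, a non-authoritative end-to-end sketch (reusing the helper functions from the earlier snippets; the bounding-box diagonal stands in for the region centerline length Dis, which the disclosure does not define precisely):

    import numpy as np

    def locate_nodule(ct, spacing, lung_mask, nodule_center, clf1, clf2):
        # S301: preprocess - resample the CT and the parenchyma mask.
        ct_rs, factors = resample_to_spacing(ct, spacing)
        mask_rs, _ = resample_to_spacing(lung_mask.astype(np.float32), spacing, order=0)
        lo, hi = lung_bounding_box(mask_rs > 0.5)        # target region
        d, h, w = (hi - lo + 1)                          # region depth/height/width
        # S302: features - nodule position in the region coordinate system.
        center = to_resampled(nodule_center, factors) - lo
        dis = float(np.linalg.norm(center))
        Dis = float(np.linalg.norm(hi - lo))             # stand-in for centerline length
        feats = np.array([center[2] / w, center[1] / h, center[0] / d, dis / Dis])
        # S303 / S304: first SVM -> lobe code, second SVM -> segment code.
        lobe = int(clf1.predict(feats.reshape(1, -1))[0])
        seg = int(clf2.predict(np.append(feats, lobe).reshape(1, -1))[0])
        return lobe, seg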
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same inventive concept, the embodiments of the present disclosure further provide an image processing apparatus corresponding to the image processing method. Since the principle by which the apparatus solves the problem is similar to that of the image processing method described above, the implementation of the apparatus may refer to the implementation of the method, and repeated description is omitted.
Referring to fig. 4, a schematic diagram of an image processing apparatus according to an embodiment of the disclosure is provided, where the apparatus includes: a first acquisition module 41, a second acquisition module 42, a third acquisition module 43, and a determination module 44; wherein,
A first acquiring module 41, configured to acquire a first image of a first target object;
A second acquiring module 42, configured to acquire region size information of a target region in the first image;
a third obtaining module 43, configured to obtain first location information of a second target object in the target area;
a determining module 44, configured to determine first feature data based on the region size information and the first location information, where the first feature data includes data of a location relationship of the second target object with respect to the target region; and determining target location information of the second target object in the first target object based on the first feature data.
In an alternative embodiment, the determining module 44 is configured, when determining, based on the first feature data, target location information of the second target object in the first target object, to: determining first part information of a target part of the second target object in the first target object based on the first characteristic data, wherein the target part comprises a plurality of sub parts; and determining second part information of a target sub-part where the second target object is located from the plurality of sub-parts as the target part information based on the first feature data and the first part information.
In an alternative embodiment, the first obtaining module 41 is configured to, when obtaining a first image of a first target object, resample the first image based on a preset resolution.
In an alternative embodiment, the second obtaining module 42 is configured to, when obtaining the region size information of the target region in the first image: region size information of the target region is determined based on the resampled first image.
In an alternative embodiment, the third obtaining module 43 is configured to, when obtaining the first position information of the second target object in the target area: acquiring second position information of the second target object in the first image; and determining the first position information of the second target object in an area coordinate system established based on the target area based on third position information of the target area in the first image and second position information of the second target object.
In an alternative embodiment, the third obtaining module 43 is configured to, when determining, based on third location information of the target area in the first image and second location information of the second target object, the first location information of the second target object in an area coordinate system established based on the target area: determining conversion relation information between the region coordinate system and a world coordinate system established based on the first image based on third position information of the target region in the first image and the first image; and determining first position information of the second target object in an area coordinate system established based on the target area based on second position information of the second target object in the first image and the conversion relation information.
In an alternative embodiment, the region size information includes at least one of: the target region has a region height, a region width, a region depth, and a region centerline length.
In an alternative embodiment, the first location information includes: three-dimensional position information of a center point of the second target object in a region coordinate system established based on the target region and a distance between the center point and an origin of the region coordinate system.
In an alternative embodiment, the determining module 44 is configured, when determining, based on the first feature data, the first part information of the target part of the second target object in the first target object, to: classify the first feature data by using a first classifier to obtain the first part information of the target part of the second target object in the first target object.
In an alternative embodiment, the determining module 44 is configured to, when determining, based on the first feature data and the first location information, second location information of a target sub-location where the second target object is located from a plurality of sub-locations in the target location: forming second feature data based on the first feature data and the first part information; and classifying the second characteristic data by using a second classifier to obtain the second part information of the target sub-part where the second target object is located.
In an alternative embodiment, the determining module 44 is configured to, when determining the first feature data based on the region size information and the first location information: and carrying out normalization processing on the first position information based on the region size information to obtain the first characteristic data.
For the processing flow of each module in the apparatus and the interaction flows between the modules, reference may be made to the related descriptions in the above method embodiments; details are not repeated here.
The embodiments of the present disclosure further provide a computer device. As shown in fig. 5, which is a schematic structural diagram of the computer device provided by the embodiments of the present disclosure, the computer device includes:
a processor 51 and a memory 52. The memory 52 stores machine-readable instructions executable by the processor 51, and the processor 51 is configured to execute the machine-readable instructions stored in the memory 52; when the machine-readable instructions are executed by the processor 51, the processor 51 performs the following steps:
acquiring a first image of a first target object; acquiring region size information of a target region in the first image; acquiring first position information of a second target object in the target region; determining first feature data based on the region size information and the first position information, wherein the first feature data comprises data of a positional relationship of the second target object relative to the target region; and determining target part information of the second target object in the first target object based on the first feature data.
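Tying the steps together, one possible realization of this flow is sketched below. It reuses the illustrative helpers and classifiers from the earlier sketches (which are assumed to be in scope and fitted) and is, like them, an assumption about one way of implementing the method rather than a statement of the claimed method itself.

```python
# Assumes resample_to_spacing, region_size_from_mask, world_to_region,
# first_feature_data and the fitted classifiers from the earlier sketches.
import numpy as np


def locate_target(region_mask, center_image):
    """region_mask: binary mask of the target region on the resampled grid;
    center_image: center point of the second target object in image coordinates."""
    region_origin = np.argwhere(region_mask > 0).min(axis=0).astype(float)
    region_size = region_size_from_mask(region_mask)
    center_region = world_to_region(center_image, region_origin)
    features = first_feature_data(center_region, region_size)
    part, subpart = predict(features.reshape(1, -1))
    return part[0], subpart[0]  # first part information, second part information
```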
The memory 52 includes an internal memory 521 and an external memory 522. The internal memory 521 is used for temporarily storing operation data of the processor 51 and data exchanged with the external memory 522, such as a hard disk; the processor 51 exchanges data with the external memory 522 via the internal memory 521.
For the specific execution process of the above instructions, reference may be made to the steps of the image processing method described in the embodiments of the present disclosure; details are not repeated here.
The embodiments of the present disclosure further provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the image processing method described in the foregoing method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure further provide a computer program product carrying a program code; the instructions included in the program code may be used to perform the steps of the image processing method described in the foregoing method embodiments, for which reference may be made to the foregoing method embodiments; details are not repeated here.
The above computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solutions of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope disclosed by the present disclosure, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

1. An image processing method, comprising:
acquiring a first image of a first target object;
acquiring region size information of a target region in the first image;
acquiring first position information of a second target object in the target region;
determining first feature data based on the region size information and the first position information, wherein the first feature data comprises data of a positional relationship of the second target object relative to the target region; and
determining target part information of the second target object in the first target object based on the first feature data;
wherein the determining of the target part information of the second target object in the first target object based on the first feature data comprises: determining first part information of a target part of the second target object in the first target object based on the first feature data, wherein the target part comprises a plurality of sub-parts; and determining, from the plurality of sub-parts, second part information of a target sub-part where the second target object is located as the target part information, based on the first feature data and the first part information.
2. The image processing method according to claim 1, wherein the acquiring of the first image of the first target object comprises resampling the first image based on a preset resolution.
3. The image processing method according to claim 2, wherein the acquiring of the region size information of the target region in the first image comprises:
determining the region size information of the target region based on the resampled first image.
4. The image processing method according to claim 1, wherein the acquiring of the first position information of the second target object in the target region comprises:
acquiring second position information of the second target object in the first image; and
determining, based on third position information of the target region in the first image and the second position information of the second target object, the first position information of the second target object in a region coordinate system established based on the target region.
5. The image processing method according to claim 4, wherein the determining of the first position information of the second target object in the region coordinate system established based on the target region comprises:
determining conversion relation information between the region coordinate system and a world coordinate system established based on the first image, based on the third position information of the target region in the first image and the first image; and
determining the first position information of the second target object in the region coordinate system based on the second position information of the second target object in the first image and the conversion relation information.
6. The image processing method according to claim 1, wherein the region size information comprises at least one of:
a region height, a region width, a region depth, and a region centerline length of the target region.
7. The image processing method according to claim 1, wherein the first position information comprises: three-dimensional position information of a center point of the second target object in a region coordinate system established based on the target region, and a distance between the center point and an origin of the region coordinate system.
8. The image processing method according to claim 1, wherein the determining of the first part information of the target part of the second target object in the first target object based on the first feature data comprises:
classifying the first feature data by using a first classifier to obtain the first part information of the target part of the second target object in the first target object.
9. The image processing method according to claim 1, wherein the determining of the second part information of the target sub-part where the second target object is located from the plurality of sub-parts based on the first feature data and the first part information comprises:
forming second feature data based on the first feature data and the first part information; and
classifying the second feature data by using a second classifier to obtain the second part information of the target sub-part where the second target object is located.
10. The image processing method according to any one of claims 1 to 9, wherein the determining of the first feature data based on the region size information and the first position information comprises:
normalizing the first position information based on the region size information to obtain the first feature data.
11. An image processing apparatus, comprising:
a first acquisition module, configured to acquire a first image of a first target object;
a second acquisition module, configured to acquire region size information of a target region in the first image;
a third acquisition module, configured to acquire first position information of a second target object in the target region;
a determining module, configured to determine first feature data based on the region size information and the first position information, wherein the first feature data comprises data of a positional relationship of the second target object relative to the target region, and to determine target part information of the second target object in the first target object based on the first feature data;
wherein the determining module is specifically configured, when determining the target part information of the second target object in the first target object based on the first feature data, to: determine first part information of a target part of the second target object in the first target object based on the first feature data, wherein the target part comprises a plurality of sub-parts; and determine, from the plurality of sub-parts, second part information of a target sub-part where the second target object is located as the target part information, based on the first feature data and the first part information.
12. A computer device, comprising: a processor and a memory, the memory storing machine-readable instructions executable by the processor, wherein when the machine-readable instructions are executed by the processor, the processor performs the steps of the image processing method according to any one of claims 1 to 10.
13. A computer-readable storage medium, having stored thereon a computer program which, when run by a computer device, performs the steps of the image processing method according to any one of claims 1 to 10.
CN202110115340.9A 2021-01-28 2021-01-28 Image processing method, device, computer equipment and storage medium Active CN112907517B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110115340.9A CN112907517B (en) 2021-01-28 2021-01-28 Image processing method, device, computer equipment and storage medium
PCT/CN2021/118044 WO2022160731A1 (en) 2021-01-28 2021-09-13 Image processing method and apparatus, electronic device, storage medium, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110115340.9A CN112907517B (en) 2021-01-28 2021-01-28 Image processing method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112907517A (en) 2021-06-04
CN112907517B (en) 2024-07-19

Family

ID=76119281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110115340.9A Active CN112907517B (en) 2021-01-28 2021-01-28 Image processing method, device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112907517B (en)
WO (1) WO2022160731A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907517B (en) * 2021-01-28 2024-07-19 上海商汤善萃医疗科技有限公司 Image processing method, device, computer equipment and storage medium
CN116797596B (en) * 2023-08-17 2023-11-28 杭州健培科技有限公司 Lung segment recognition model and training method for lung nodule

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8747319B2 (en) * 2005-10-07 2014-06-10 Hitachi Medical Corporation Image displaying method and medical image diagnostic system
KR20120044484A (en) * 2010-10-28 2012-05-08 삼성전자주식회사 Apparatus and method for tracking object in image processing system
EP3079772B1 (en) * 2013-12-10 2020-02-05 Merck Sharp & Dohme Corp. Immunohistochemical proximity assay for pd-1 positive cells and pd-ligand positive cells in tumor tissue
WO2017160829A1 (en) * 2016-03-15 2017-09-21 The Trustees Of Columbia University In The City Of New York Method and apparatus to perform local de-noising of a scanning imager image
CN108875535B (en) * 2018-02-06 2023-01-10 北京旷视科技有限公司 Image detection method, device and system and storage medium
TR201806307A2 (en) * 2018-05-04 2018-06-21 Elaa Teknoloji Ltd Sti A METHOD AND ALGORITHM FOR REALIZING SAFE BIOPSY RECOVERY IN THE LUNG AIRLINES
CN109613920B (en) * 2018-12-27 2022-02-11 睿驰达新能源汽车科技(北京)有限公司 Method and device for determining vehicle position
CN109464757B (en) * 2018-12-29 2021-07-20 上海联影医疗科技股份有限公司 Method, system, device and storage medium for determining position of target object
CN110473168A (en) * 2019-07-09 2019-11-19 天津大学 A kind of Lung neoplasm automatic checkout system squeezing-motivate residual error network based on 3D
CN111368923B (en) * 2020-03-05 2023-12-19 上海商汤智能科技有限公司 Neural network training method and device, electronic equipment and storage medium
CN111582207B (en) * 2020-05-13 2023-08-15 北京市商汤科技开发有限公司 Image processing method, device, electronic equipment and storage medium
CN111860388A (en) * 2020-07-27 2020-10-30 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN112907517B (en) * 2021-01-28 2024-07-19 上海商汤善萃医疗科技有限公司 Image processing method, device, computer equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740602A (en) * 2019-01-10 2019-05-10 上海联影医疗科技有限公司 Pulmonary artery phase vessel extraction method and system
CN111291813A (en) * 2020-02-13 2020-06-16 腾讯科技(深圳)有限公司 Image annotation method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
WO2022160731A1 (en) 2022-08-04
CN112907517A (en) 2021-06-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40045099; country of ref document: HK)
TA01 Transfer of patent application right (effective date of registration: 20240611; address after: 200233, Units 6-01, 6-49, 6-80, 6th Floor, No. 1900 Hongmei Road, Xuhui District, Shanghai; applicant after: Shanghai Shangtang Shancui Medical Technology Co.,Ltd., China; address before: Room 1605a, building 3, 391 Guiping Road, Xuhui District, Shanghai; applicant before: SHANGHAI SENSETIME INTELLIGENT TECHNOLOGY Co.,Ltd., China)
GR01 Patent grant