Disclosure of Invention
Embodiments of the present disclosure provide at least an image processing method, an image processing apparatus, a computer device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including: acquiring a first image of a first target object; acquiring region size information of a target region in the first image; acquiring first position information of a second target object in the target region; determining first feature data based on the region size information and the first position information, where the first feature data includes data characterizing the positional relationship of the second target object relative to the target region; and determining target part information of the second target object in the first target object based on the first feature data.
In this way, the specific position of the second target object in the first target object is determined from the relative positional relationship between the two. Because the method exploits the physiological structure of the human body, it can quickly determine the specific position of the second target object from the image and is therefore more efficient.
In an optional embodiment, determining the target part information of the second target object in the first target object based on the first feature data includes: determining first part information of a target part of the second target object in the first target object based on the first feature data, where the target part includes a plurality of sub-parts; and determining, from the plurality of sub-parts and based on the first feature data and the first part information, second part information of the target sub-part where the second target object is located, as the target part information.
In this way, the target part information of the second target object on the first target object is determined step by step, which yields higher reliability.
In an optional embodiment, acquiring the first image of the first target object includes: resampling the first image based on a preset resolution.
In this way, resampling the first image to the preset resolution reduces the amount of computation subsequently required to determine the relative positional relationship between the first target object and the second target object from the first image, lowering the computational load and improving efficiency.
In an optional embodiment, acquiring the region size information of the target region in the first image includes: determining the region size information of the target region based on the resampled first image.
In an optional embodiment, acquiring the first position information of the second target object in the target region includes: acquiring second position information of the second target object in the first image; and determining the first position information of the second target object in a region coordinate system established based on the target region, based on third position information of the target region in the first image and the second position information of the second target object.
In this way, converting the position information of the second target object in the first image into the region coordinate system helps characterize the specific position of the second target object within the first target object.
In an optional embodiment, determining the first position information of the second target object in the region coordinate system established based on the target region, based on the third position information of the target region in the first image and the second position information of the second target object, includes: determining conversion relationship information between the region coordinate system and a world coordinate system established based on the first image, based on the third position information of the target region in the first image and the first image; and determining the first position information of the second target object in the region coordinate system based on the second position information of the second target object in the first image and the conversion relationship information.
In this way, converting position information from world coordinates into the region coordinate system to determine the first position information of the second target object avoids operating on the negative coordinate values that may exist in the world coordinate system, simplifying the image processing.
In an optional embodiment, the region size information includes at least one of: a region height, a region width, a region depth, and a region centerline length of the target region.
In an optional embodiment, the first position information includes: three-dimensional position information of a center point of the second target object in the region coordinate system established based on the target region, and a distance between the center point and an origin of the region coordinate system.
Thus, the specific position of the second target object in the region coordinate system can be determined from the first position information, which improves the accuracy of the subsequently determined first feature data and, in turn, of the first part information and the second part information.
In an optional embodiment, determining the first part information of the target part of the second target object in the first target object based on the first feature data includes: classifying the first feature data with a first classifier to obtain the first part information of the target part of the second target object in the first target object.
In this way, the first part information is obtained by classifying the first feature data with the first classifier, which is computationally simple and yields a more accurate result.
In an optional embodiment, determining, from the plurality of sub-parts and based on the first feature data and the first part information, the second part information of the target sub-part where the second target object is located includes: forming second feature data based on the first feature data and the first part information; and classifying the second feature data with a second classifier to obtain the second part information of the target sub-part where the second target object is located.
In this way, the classification result of the first classifier and the first feature data together form the second feature data, and the second classifier determines from the second feature data the second part information of the target sub-part where the second target object is located; processing is fast and the result is more accurate.
In an optional embodiment, determining the first feature data based on the region size information and the first position information includes: normalizing the first position information based on the region size information to obtain the first feature data.
In this way, the first target object in the first image is normalized to a uniform scale, reducing errors in determining the specific position of the second target object in the first target object that would otherwise arise from differences between individuals.
In a second aspect, an embodiment of the present disclosure further provides an image processing apparatus, including:
a first acquisition module configured to acquire a first image of a first target object;
a second acquisition module configured to acquire region size information of a target region in the first image;
a third acquisition module configured to acquire first position information of a second target object in the target region; and
a determining module configured to determine first feature data based on the region size information and the first position information, where the first feature data includes data characterizing the positional relationship of the second target object relative to the target region, and to determine target part information of the second target object in the first target object based on the first feature data.
In an optional embodiment, the determining module, when determining the target part information of the second target object in the first target object based on the first feature data, is configured to: determine first part information of a target part of the second target object in the first target object based on the first feature data, where the target part includes a plurality of sub-parts; and determine, from the plurality of sub-parts and based on the first feature data and the first part information, second part information of the target sub-part where the second target object is located, as the target part information.
In an optional embodiment, the first acquisition module, when acquiring the first image of the first target object, is configured to resample the first image based on a preset resolution.
In an optional embodiment, the second acquisition module, when acquiring the region size information of the target region in the first image, is configured to: determine the region size information of the target region based on the resampled first image.
In an optional embodiment, the third acquisition module, when acquiring the first position information of the second target object in the target region, is configured to: acquire second position information of the second target object in the first image; and determine the first position information of the second target object in a region coordinate system established based on the target region, based on third position information of the target region in the first image and the second position information of the second target object.
In an optional embodiment, the third acquisition module, when determining the first position information of the second target object in the region coordinate system established based on the target region, based on the third position information of the target region in the first image and the second position information of the second target object, is configured to: determine conversion relationship information between the region coordinate system and a world coordinate system established based on the first image, based on the third position information of the target region in the first image and the first image; and determine the first position information of the second target object in the region coordinate system based on the second position information of the second target object in the first image and the conversion relationship information.
In an optional embodiment, the region size information includes at least one of: a region height, a region width, a region depth, and a region centerline length of the target region.
In an optional embodiment, the first position information includes: three-dimensional position information of a center point of the second target object in the region coordinate system established based on the target region, and a distance between the center point and an origin of the region coordinate system.
In an optional embodiment, the determining module, when determining the first part information of the target part of the second target object in the first target object based on the first feature data, is configured to: classify the first feature data with a first classifier to obtain the first part information of the target part of the second target object in the first target object.
In an optional embodiment, the determining module, when determining, from the plurality of sub-parts of the target part and based on the first feature data and the first part information, the second part information of the target sub-part where the second target object is located, is configured to: form second feature data based on the first feature data and the first part information; and classify the second feature data with a second classifier to obtain the second part information of the target sub-part where the second target object is located.
In an optional embodiment, the determining module, when determining the first feature data based on the region size information and the first position information, is configured to: normalize the first position information based on the region size information to obtain the first feature data.
In a third aspect, an optional implementation of the present disclosure further provides a computer device including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when executed by the processor, the machine-readable instructions perform the steps of the first aspect or of any possible implementation of the first aspect.
In a fourth aspect, an optional implementation of the present disclosure further provides a computer-readable storage medium having stored thereon a computer program which, when executed, performs the steps of the first aspect or of any possible implementation of the first aspect.
For a description of the effects of the image processing apparatus, the computer device, and the storage medium, reference is made to the description of the image processing method above; it is not repeated here.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated herein, may be arranged and designed in a wide variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure as claimed, but merely represents selected embodiments of the disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without inventive effort fall within the protection scope of the present disclosure.
It has been found that, in modern medicine, medical images are typically processed with deep learning models: a neural network extracts image features from the medical image, and the specific position of a target object within an organ is determined from those image features.
In addition, to distinguish different parts within a human organ, the organ is generally divided into multiple levels. For example, the lung parenchyma of the human body comprises five lobes, which are further subdivided into 18 lung segments. When the position of a target object in a human organ is determined from a medical image, current practice is to extract image features of the medical image with a deep learning algorithm and to determine the specific position of the target object in the organ from the extracted features. However, because medical images are generally obtained by tomographic scanning of the human body, and because the complexity of human organ structure causes different parts belonging to the same organ tissue to overlap in the image, processing medical images with a deep learning model suffers from low accuracy when determining the specific position of a target object in an organ.
Based on the above study, the present disclosure provides an image processing method, an apparatus, a computer device, and a storage medium that determine the specific position of a second target object in a first target object from the relative positional relationship between the two; by exploiting the physiological structure of the human body, the specific position of the second target object can be determined from the image quickly, with higher efficiency.
In addition, when determining the target part information of the second target object in the first target object from first feature data that characterizes the relative positional relationship between the first target object and the second target object, the embodiments of the present disclosure first determine, from the first feature data, first part information of the target part of the second target object in the first target object, and then determine, based on the first part information and the first feature data, second part information of the target sub-part where the second target object is located. Determining the specific position of the second target object in this stepwise manner is more efficient.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
To facilitate understanding of the present embodiment, an image processing method disclosed in an embodiment of the present disclosure is first described in detail. The execution subject of the image processing method provided in the embodiments of the present disclosure is generally a computer device with a certain computing capability, for example a terminal device, a server, or another processing device; the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular telephone, a cordless telephone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the image processing method may be implemented by a processor invoking computer-readable instructions stored in a memory.
An image processing method provided by an embodiment of the present disclosure is described below.
Referring to fig. 1, which shows a flowchart of an image processing method according to an embodiment of the disclosure, the method includes steps S101 to S104:
S101: acquiring a first image of a first target object;
S102: acquiring region size information of a target region in the first image;
S103: acquiring first position information of a second target object in the target region;
S104: determining first feature data based on the region size information and the first position information, where the first feature data includes data characterizing the positional relationship of the second target object relative to the target region; and determining target part information of the second target object in the first target object based on the first feature data.
According to the embodiments of the present disclosure, the first feature data characterizing the relative positional relationship between the first target object and the second target object is determined from the region size information of the target region where the first target object is located and the first position information of the second target object in the target region, and the target part information of the second target object in the first target object is then determined from the first feature data; in this way, the specific position of the second target object is determined from the image quickly and efficiently.
The following describes the above-mentioned S101 to S104 in detail.
In the embodiments of the present disclosure, the image processing method provided herein is described in detail taking a medical image obtained by scanning the lungs as an example of the first image.
For S101 above, the first target object is the lung (a respiratory organ of the human body that includes the lung parenchyma among other structures). Because the exact position of the lung in the human body, its actual physical and chemical indices, and the like cannot be determined directly, the first target object is generally scanned by computed tomography (CT), and the resulting medical image is used to determine the relevant information about the lung. When the first target object is scanned by CT, the scanned medical image is the first image, for example a lung CT. Referring to fig. 2 (a), a schematic diagram of a first image according to an embodiment of the disclosure is provided.
For S102, after the first image of the first target object is obtained, the region where the lung parenchyma is located may be determined from the first image as the target region, and the region size information of the target region may be acquired. Because the lung parenchyma in the first image is irregular in shape and is a three-dimensional object in the human body, a minimum bounding box (a three-dimensional box) can be used to determine the spatial region where the first target object is located; that is, the target region includes the region indicated by the minimum bounding box of the lung parenchyma.
Referring to fig. 2 (b), a schematic diagram of the projection, on a two-dimensional plane, of the minimum bounding box of the target region is provided according to an embodiment of the present disclosure, including a projection frame 21.
In particular, since the lung is a three-dimensional object in the human body, the target region of the lung parenchyma can be characterized by three-dimensional size information of the lung parenchyma.
When determining the target region, because the lung parenchyma absorbs and transmits the scanning radiation differently from other parts of the human body, the target region where the lung parenchyma is located is reflected in the two-dimensional plane image of the first image through image brightness, shadow contours, and the like; that is, the region where the lung parenchyma is located can be determined from the first image by image processing. For example, the first image may be processed with a semantic segmentation network, a target detection network, or the like, and the region of interest (ROI) where the lung parenchyma is located may be determined from the first image, thereby obtaining the region size information of the target region where the lung parenchyma is located. Moreover, because CT scanning also yields data in the depth direction (along the normal vector of the two-dimensional plane image of the first image), the size of the minimum bounding box in the depth direction can likewise be determined. The target region of the lung parenchyma in the first image can then be determined.
In the embodiments of the present disclosure, when determining the ROI of the lung parenchyma from the first image, the region containing the lung parenchyma can be identified in the first image as the target region by determining the boundary points of the ROI in the first image.
The target region may be determined based on the three-dimensional size information. Specifically, the three-dimensional coordinate values of several vertices of the ROI in a world coordinate system determined based on the first image may serve as the three-dimensional size information characterizing the target region; alternatively, the three-dimensional coordinate values of several vertices of the ROI in world coordinates together with the side lengths of its edges may serve as the three-dimensional size information.
Here, the three-dimensional size information of the target region in the world coordinate system is the third position information, described in the embodiments of the present disclosure, of the target region where the lung parenchyma is located in the first image.
The world coordinate system is established according to a certain standard when the first image is obtained by scanning the human body; for example, it may be established with the center of the human body as the origin, or with the scanning center of the scanning apparatus as the origin.
In one embodiment, the region size information of the target region may be determined directly based on the three-dimensional size information of the ROI.
The region size information includes at least one of: a region height, a region width, a region depth, and a region centerline length of the target region.
In addition, in another example of the present disclosure, depending on how the world coordinate system is established, some pixel points of the obtained first image may have negative coordinate values in the world coordinate system. To simplify the image processing, in this embodiment, after the target region where the lung parenchyma is located is determined, a region coordinate system may be established based on the target region, the target region may be converted from the world coordinate system into the region coordinate system, and the region size information of the target region may be determined in the region coordinate system.
For example, the region coordinate system corresponding to the target region may be established with the top-left vertex of the target region as its origin, and the region size information may then be determined in that coordinate system.
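As an illustration only, the bounding-box computation described above can be sketched in Python. The binary parenchyma mask (e.g., the output of a segmentation network), the (z, y, x) axis ordering, and the use of the box diagonal as the centerline length are assumptions for this sketch, not details given by the disclosure.

```python
import numpy as np

def region_size_info(parenchyma_mask: np.ndarray):
    """Derive region size information from a 3D binary lung-parenchyma mask.

    The mask is assumed to be indexed as (z, y, x): depth, height, width.
    Returns the minimum-bounding-box origin (usable as the origin of the
    region coordinate system) plus the region height, width, depth, and a
    centerline length, taken here as the box diagonal (an assumed definition).
    """
    zs, ys, xs = np.nonzero(parenchyma_mask)
    origin = np.array([zs.min(), ys.min(), xs.min()])
    extent = np.array([zs.max(), ys.max(), xs.max()]) - origin + 1
    depth, height, width = (int(v) for v in extent)
    centerline = float(np.linalg.norm(extent))
    return origin, height, width, depth, centerline
```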
In one possible implementation, after the first image is acquired, it may be resampled based on a preset resolution, and the region size information of the target region may then be determined from the resampled first image, reducing the subsequent amount of computation when the relative positional relationship between the first target object and the second target object is determined from the first image.
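A minimal resampling sketch follows, using scipy.ndimage.zoom; the voxel-spacing input and the isotropic 1 mm target are assumed examples, not values fixed by the disclosure.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_resolution(volume: np.ndarray, spacing, target_spacing=(1.0, 1.0, 1.0)):
    """Resample a CT volume to a preset resolution.

    volume:  3D array of CT intensities.
    spacing: original voxel spacing per axis (e.g., read from the scan header).
    target_spacing: the preset resolution; isotropic 1 mm is an assumed example.
    """
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    return zoom(volume, factors, order=1)  # linear interpolation per axis
```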
For S103 above, the second target object may be, for example, another object located in the lung parenchyma; for instance, it may include a nodule or a foreign body in the lung parenchyma. By determining the first position information of the second target object in the target region, the specific position of the second target object in the lung parenchyma can be determined more accurately.
In a specific implementation, the first position information may be determined, for example, as follows: acquiring second position information of the second target object in the first image; and determining the first position information of the second target object in a region coordinate system established based on the target region, based on third position information of the target region in the first image and the second position information of the second target object.
Here, because it is easier to determine the position information of the second target object and the region size information of the target region in the first image, this embodiment first determines the second position information of the second target object in the first image and the third position information of the target region in the first image, and then converts the second position information determined in the first image into the first position information in the target region, using conversion relationship information determined from the second position information and the third position information.
Specifically, for example, a target detection network may be used to perform target detection on the first image to obtain the second position information of the second target object in the first image. The second position information includes, for example, information about the area occupied by the second target object in the first image, such as the coordinates of its center point in the first image and the size of the area where it is located. The first position information of the second target object in the region coordinate system established based on the target region is then determined from the second position information of the second target object in the first image and the third position information of the target region in the first image.
Here, for example, conversion relationship information between the region coordinate system and the world coordinate system established based on the first image may be determined from the third position information of the target region in the first image and from the first image itself; the first position information of the second target object in the region coordinate system is then determined from the second position information of the second target object in the first image and the conversion relationship information.
The first position information may include, for example: three-dimensional position information of a center point of the second target object in the region coordinate system established based on the target region, and the distance between the center point and the origin of the region coordinate system.
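In the simplest reading of the conversion step, the region coordinate system is a translated copy of the world coordinate system with its origin at a bounding-box vertex. A sketch under that assumption (rotation and scaling are deliberately omitted):

```python
import numpy as np

def to_region_coords(center_world: np.ndarray, region_origin_world: np.ndarray):
    """Convert the second target object's center point from the world coordinate
    system into the region coordinate system, assuming a pure translation."""
    center_region = center_world - region_origin_world  # three-dimensional position
    dis = float(np.linalg.norm(center_region))          # distance to the region origin
    return center_region, dis
```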
In another possible case, where the first image has been resampled to the preset resolution, the second position information of the second target object in the first image may also be transformed to obtain fourth position information of the second target object in the resampled first image. The first position information of the second target object in the target region is then obtained from the fourth position information of the second target object in the resampled first image and the position information of the target region in the first image resampled to the preset resolution.
For S104, the first feature data may be determined from the region size information and the first position information, for example, as follows: normalizing the first position information based on the region size information to obtain the first feature data.
For example, the ratios between corresponding parameters of the first position information and the region size information may be computed, thereby normalizing the first position information.
Representing the relative position of the second target object with respect to the first target object by the first feature data avoids the target part information determined for second target objects on different first target objects being inaccurate because individual differences make the target regions of the first target objects in their first images differ in size.
Illustratively, suppose the region size information includes: a region height H, a region width W, a region depth D, and a region centerline length Dis of the target region.
The first position information includes: the three-dimensional position information (x, y, z) of the center point of the second target object in the region coordinate system established based on the target region, and the distance dis between the center point and the origin of the region coordinate system.
The first feature data then includes: x/W, y/H, z/D, and dis/Dis.
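The normalization above is direct to implement; a minimal sketch with illustrative numbers:

```python
def first_feature_data(x, y, z, dis, W, H, D, Dis):
    """Normalize the first position information by the region size information,
    yielding the scale-invariant first feature data [x/W, y/H, z/D, dis/Dis]."""
    return [x / W, y / H, z / D, dis / Dis]

# Illustrative values: center point (120, 90, 60), dis = 160, in a region with
# W = 300, H = 240, D = 150 and centerline length Dis = 420.
print(first_feature_data(120, 90, 60, 160, 300, 240, 150, 420))
# [0.4, 0.375, 0.4, 0.3809...]
```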
The obtained first feature data is then used to determine the target part information in S104.
When the target part information of the second target object in the first target object is determined based on the first feature data, the multi-level structure into which a human organ is divided makes the organ complex, so determining the target part of the second target object directly has low accuracy.
Therefore, to improve the accuracy of determining the target part of the second target object, the embodiments of the present disclosure may determine the target part information of the target part of the second target object in the first target object, for example, as follows:
determining first part information of a target part of the second target object in the first target object based on the first feature data, where the target part includes a plurality of sub-parts; and determining, from the plurality of sub-parts and based on the first feature data and the first part information, second part information of the target sub-part where the second target object is located, as the target part information.
Illustratively, the first target object is divided into a plurality of parts, and each part includes a plurality of sub-parts. For example, when the first target object is the lung, it includes five lung lobes, each lobe being one part: the left upper lobe, the left lower lobe, the right upper lobe, the right middle lobe, and the right lower lobe.
Each lung lobe in turn includes a plurality of lung segments, each lung segment being a sub-part of the lobe.
For example, the left upper lobe includes four lung segments: the apicoposterior segment, the anterior segment, the superior lingular segment, and the inferior lingular segment.
Determining the first part information of the target part of the second target object in the first target object based on the first feature data thus locates the second target object at the part level.
Here, for example, the first feature data may be classified with a first classifier to obtain the first part information of the target part of the second target object in the first target object.
The first classifier includes, for example but not limited to, at least one of: a decision tree and a support vector machine.
The first classifier is, for example, trained in advance using sample data. For example, a plurality of medical images containing similar first target objects may be obtained in advance, and each medical image may be annotated with the first part information of the second target object in the first target object. First sample feature data are then obtained for each medical image in a manner similar to S101 to S103 of the image processing method provided by the embodiments of the present disclosure, and the first classifier is trained with the first sample feature data and the first part information annotated for each medical image.
After the first feature data is obtained, it can be input into the first classifier to obtain the first part information of the target part of the second target object in the first target object.
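As a sketch of this step, the first classifier can be realized with scikit-learn's support vector machine; the training arrays below are random placeholders standing in for the annotated first sample feature data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X1 = rng.random((100, 4))      # placeholder first sample feature data [x/W, y/H, z/D, dis/Dis]
y1 = rng.integers(0, 5, 100)   # placeholder lobe annotations: codes 0..4 for the five lobes

first_classifier = SVC()       # a support vector machine, one classifier named above
first_classifier.fit(X1, y1)

feature = [[0.4, 0.375, 0.4, 0.381]]                   # first feature data for one nodule
lobe_code = int(first_classifier.predict(feature)[0])  # first part information, e.g. 0
```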
Then, second feature data is formed based on the first feature data and the first part information, and the second feature data is classified with a second classifier to obtain the second part information of the target sub-part where the second target object is located.
The second classifier likewise includes, for example but not limited to, at least one of: a decision tree and a support vector machine.
The second classifier is, for example, trained in advance using sample data. For example, a plurality of medical images containing similar first target objects may be obtained in advance, and each medical image may be annotated with the first part information of the second target object in the first target object and the second part information of the target sub-part where the second target object is located. First sample feature data are then obtained for each medical image in a manner similar to S101 to S103 of the image processing method provided by the embodiments of the present disclosure; second sample feature data are formed for each medical image from its first sample feature data and its annotated first part information; and the second classifier is trained with the second sample feature data and the second part information annotated for each medical image.
After the second classifier is trained, it can be used to classify the second feature data obtained from the first image, yielding the second part information of the target sub-part of the second target object in the first target object.
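The disclosure states only that the second feature data is formed from the first feature data and the first part information; appending the lobe code to the feature vector is one plausible reading, sketched here with placeholder data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X1 = rng.random((100, 4))                  # first sample feature data (placeholder)
lobe_labels = rng.integers(0, 5, 100)      # annotated first part information per sample
segment_labels = rng.integers(0, 18, 100)  # annotated second part information per sample

# Second sample feature data: first feature data with the lobe code appended
# (concatenation is an assumption, not stated in the disclosure).
X2 = np.column_stack([X1, lobe_labels])

second_classifier = SVC()
second_classifier.fit(X2, segment_labels)

sample = np.array([[0.4, 0.375, 0.4, 0.381, 0.0]])        # first feature data + lobe code 0
segment_code = int(second_classifier.predict(sample)[0])  # second part information
```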
This second part information is the target part information of the second target object in the first target object.
For example, the first part information determined by the first classifier is: the left upper lobe; the second part information is: the apicoposterior segment of the left upper lobe. The target part information of the second target object in the first target object is then: the apicoposterior segment of the left upper lobe.
In the embodiments of the present disclosure, the first classifier and the second classifier may be trained separately or jointly, as determined by actual needs.
Referring to fig. 3, an embodiment of the present disclosure further provides a flowchart of a specific image processing method for lung CT, where the first target object is the lung, the second target object is a lung nodule, and the first image is a lung CT. The method includes the following steps:
S301: preprocessing the lung CT, where the preprocessing includes at least one of the following: determining the target region where the lung parenchyma is located, and resampling the target region to a preset resolution.
The preprocessing methods are described in detail in the discussion of S101 and S102 above and are not repeated here.
S302: performing feature extraction and processing on the lung CT to obtain the first feature data.
The feature extraction and processing methods are described in detail in the discussion of S102 to S104 above and are not repeated here.
S303: inputting the first feature data into a first SVM classifier, which determines the lung lobe where the lung nodule is located in the lung CT, yielding the first part information of the lobe where the nodule is located.
When the first classifier determines the lobe where the lung nodule in the lung CT is located, the five lobes are encoded, with the correspondence between lobe and number being: left upper lobe-0, left lower lobe-1, right upper lobe-2, right middle lobe-3, right lower lobe-4. Illustratively, when the first classifier determines that the lung nodule is in the left upper lobe, the output first part information is encoded as 0, indicating that the classification result determined by the first classifier is that the lung nodule is in the left upper lobe.
S304: forming the second feature data based on the first feature data and the first part information, inputting the second feature data into a second SVM classifier, and determining with the second SVM classifier the specific lung segment where the lung nodule is located in the lung CT, yielding the second part information of the segment where the nodule is located.
When the second classifier determines the lung segment where the lung nodule in the lung CT is located, the 18 lung segments are encoded; the correspondence between segment and number is, for example: apical segment of the right upper lobe-0, posterior segment of the right upper lobe-1, anterior segment of the right upper lobe-2, lateral segment of the right middle lobe-3, medial segment of the right middle lobe-4, superior segment of the right lower lobe-5, medial basal segment of the right lower lobe-6, anterior basal segment of the right lower lobe-7, lateral basal segment of the right lower lobe-8, posterior basal segment of the right lower lobe-9, apicoposterior segment of the left upper lobe-10, anterior segment of the left upper lobe-11, superior lingular segment of the left upper lobe-12, inferior lingular segment of the left upper lobe-13, superior segment of the left lower lobe-14, anteromedial basal segment of the left lower lobe-15, lateral basal segment of the left lower lobe-16, posterior basal segment of the left lower lobe-17. For example, when the first classifier determines that the lung nodule is in the left upper lobe and the second classifier determines that it is in the apicoposterior segment, the output second part information is encoded as 10, indicating that the classification result determined by the second classifier is that the lung nodule is in the apicoposterior segment of the left upper lobe.
Through the above process, the specific position of the lung nodule, that is, the target part information, can be determined quickly and accurately.
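Tying the earlier sketches together, the lobe-then-segment pipeline of fig. 3 can be outlined end to end; this reuses the hypothetical helpers defined above and illustrates the flow, rather than being the disclosed implementation itself.

```python
import numpy as np

def locate_nodule(parenchyma_mask, nodule_center, first_classifier, second_classifier):
    """Sketch of S302-S304. nodule_center is the nodule's center point in the
    (already resampled) volume's (z, y, x) index space; all helper names and
    conventions are assumptions carried over from the earlier sketches."""
    origin, H, W, D, Dis = region_size_info(parenchyma_mask)      # region size information
    (z, y, x), dis = to_region_coords(np.asarray(nodule_center, float), origin)
    feature = first_feature_data(x, y, z, dis, W, H, D, Dis)      # first feature data
    lobe_code = int(first_classifier.predict([feature])[0])       # S303: lobe
    second_feature = feature + [lobe_code]                        # second feature data
    segment_code = int(second_classifier.predict([second_feature])[0])  # S304: segment
    return lobe_code, segment_code
```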
It will be appreciated by those skilled in the art that, in the methods of the specific embodiments described above, the written order of the steps does not imply a strict order of execution; the actual execution order should be determined by the function of each step and its possible inherent logic.
Based on the same inventive concept, embodiments of the present disclosure further provide an image processing apparatus corresponding to the image processing method. Since the principle by which the apparatus solves the problem is similar to that of the image processing method described in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated description is omitted.
Referring to fig. 4, a schematic diagram of an image processing apparatus according to an embodiment of the disclosure is provided. The apparatus includes: a first acquisition module 41, a second acquisition module 42, a third acquisition module 43, and a determining module 44, where:
the first acquisition module 41 is configured to acquire a first image of a first target object;
the second acquisition module 42 is configured to acquire region size information of a target region in the first image;
the third acquisition module 43 is configured to acquire first position information of a second target object in the target region; and
the determining module 44 is configured to determine first feature data based on the region size information and the first position information, where the first feature data includes data characterizing the positional relationship of the second target object relative to the target region, and to determine target part information of the second target object in the first target object based on the first feature data.
In an optional embodiment, the determining module 44, when determining the target part information of the second target object in the first target object based on the first feature data, is configured to: determine first part information of a target part of the second target object in the first target object based on the first feature data, where the target part includes a plurality of sub-parts; and determine, from the plurality of sub-parts and based on the first feature data and the first part information, second part information of the target sub-part where the second target object is located, as the target part information.
In an optional embodiment, the first acquisition module 41, when acquiring the first image of the first target object, is configured to resample the first image based on a preset resolution.
In an optional embodiment, the second acquisition module 42, when acquiring the region size information of the target region in the first image, is configured to: determine the region size information of the target region based on the resampled first image.
In an optional embodiment, the third acquisition module 43, when acquiring the first position information of the second target object in the target region, is configured to: acquire second position information of the second target object in the first image; and determine the first position information of the second target object in a region coordinate system established based on the target region, based on third position information of the target region in the first image and the second position information of the second target object.
In an optional embodiment, the third acquisition module 43, when determining the first position information of the second target object in the region coordinate system established based on the target region, based on the third position information of the target region in the first image and the second position information of the second target object, is configured to: determine conversion relationship information between the region coordinate system and a world coordinate system established based on the first image, based on the third position information of the target region in the first image and the first image; and determine the first position information of the second target object in the region coordinate system based on the second position information of the second target object in the first image and the conversion relationship information.
In an optional embodiment, the region size information includes at least one of: a region height, a region width, a region depth, and a region centerline length of the target region.
In an optional embodiment, the first position information includes: three-dimensional position information of a center point of the second target object in the region coordinate system established based on the target region, and a distance between the center point and an origin of the region coordinate system.
In an optional embodiment, the determining module 44, when determining the first part information of the target part of the second target object in the first target object based on the first feature data, is configured to: classify the first feature data with a first classifier to obtain the first part information of the target part of the second target object in the first target object.
In an optional embodiment, the determining module 44, when determining, from the plurality of sub-parts of the target part and based on the first feature data and the first part information, the second part information of the target sub-part where the second target object is located, is configured to: form second feature data based on the first feature data and the first part information; and classify the second feature data with a second classifier to obtain the second part information of the target sub-part where the second target object is located.
In an optional embodiment, the determining module 44, when determining the first feature data based on the region size information and the first position information, is configured to: normalize the first position information based on the region size information to obtain the first feature data.
For the processing flow of each module in the apparatus and the interaction flows between the modules, reference may be made to the relevant descriptions in the above method embodiments; details are not repeated here.
An embodiment of the present disclosure further provides a computer device. As shown in fig. 5, which is a schematic structural diagram of the computer device provided by an embodiment of the present disclosure, the computer device includes:
a processor 51 and a memory 52, where the memory 52 stores machine-readable instructions executable by the processor 51, and the processor 51 is configured to execute the machine-readable instructions stored in the memory 52; when the machine-readable instructions are executed by the processor 51, the processor 51 performs the following steps:
acquiring a first image of a first target object; acquiring region size information of a target region in the first image; acquiring first position information of a second target object in the target region; determining first feature data based on the region size information and the first position information, where the first feature data includes data characterizing the positional relationship of the second target object relative to the target region; and determining target part information of the second target object in the first target object based on the first feature data.
The memory 52 includes a memory 521 (also called an internal memory) and an external memory 522. The internal memory 521 temporarily stores operation data for the processor 51 and data exchanged with an external memory 522 such as a hard disk; the processor 51 exchanges data with the external memory 522 through the internal memory 521.
For the specific execution process of the above instructions, reference may be made to the steps of the image processing method described in the embodiments of the present disclosure; details are not repeated here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the image processing method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
Embodiments of the present disclosure further provide a computer program product carrying program code, where the instructions included in the program code may be used to perform the steps of the image processing method described in the above method embodiments; for details, reference may be made to the above method embodiments, which are not repeated here.
The computer program product may be implemented in hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a software development kit (SDK).
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is merely a logical functional division, and other divisions are possible in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through communication interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that anyone familiar with the art may, within the technical scope disclosed herein, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure and shall all be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.