CN118261794A - Ultrasonic image processing method, device, equipment and computer readable storage medium
- Publication number
- CN118261794A (application CN202410335794.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- ultrasonic image
- processed
- feature
- ultrasonic
- Prior art date
- Legal status: Granted
Classifications
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting, based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4046—Scaling of whole images or parts thereof, e.g. expanding or contracting, using neural networks
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T7/0012—Biomedical image inspection
- G06T2207/10132—Ultrasound image
- G06T2207/30004—Biomedical image processing
- G06T2207/30168—Image quality inspection
Abstract
The present application relates to the field of image processing technologies, and in particular, to an ultrasound image processing method, apparatus, device, and computer readable storage medium. The method includes: acquiring an ultrasonic image to be processed, and performing contrast enhancement processing on the ultrasonic image to be processed to obtain a processed ultrasonic image; determining the image definition of the processed ultrasonic image, and judging whether the image definition accords with a preset definition threshold; if the image definition accords with the definition threshold, outputting the processed ultrasonic image; if the image definition does not accord with the definition threshold, inputting the processed ultrasonic image into a super-resolution reconstruction model to obtain a reconstructed ultrasonic image, obtaining a target ultrasonic image based on the reconstructed ultrasonic image and outputting the target ultrasonic image, wherein the super-resolution reconstruction model is trained with low-resolution ultrasonic images as input data and high-resolution ultrasonic images as training labels. The application improves the quality of the ultrasonic image while ensuring its timeliness.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an ultrasound image processing method, apparatus, device, and computer readable storage medium.
Background
Ultrasonic imaging is widely used in the field of medical imaging as an auxiliary means for medical diagnosis. The imaging quality of ultrasound images has a large impact on a physician's diagnostic judgment. A high-quality ultrasonic image can clearly show a lesion and provide a more accurate diagnostic basis for the physician, whereas when image quality is poor, it may be difficult for a physician to accurately determine the nature of the lesion.
To solve this problem, an image enhancement method is generally used to process the ultrasound image in real time to improve imaging quality. However, high-quality image enhancement is often complex and requires long processing times, for example image processing by neural network algorithms; simple enhancement methods, such as non-neural-network algorithms like filter-based denoising, are fast but may yield unsatisfactory ultrasound image quality. As a result, the ultrasound imaging process cannot easily reconcile imaging quality with timeliness.
Disclosure of Invention
The application mainly aims to provide an ultrasonic image processing method, an ultrasonic image processing device, ultrasonic image processing equipment and a computer readable storage medium, and aims to improve the imaging quality of an ultrasonic image and ensure the timeliness of the ultrasonic image.
In order to achieve the above object, the present application provides an ultrasonic image processing method comprising the steps of:
Acquiring an ultrasonic image to be processed, and performing contrast enhancement processing on the ultrasonic image to be processed to obtain a processed ultrasonic image;
Determining the image definition of the processed ultrasonic image, and judging whether the image definition accords with a preset definition threshold;
outputting the processed ultrasonic image if the image definition accords with the definition threshold;
And if the image definition does not accord with the definition threshold, inputting the processed ultrasonic image into a super-resolution reconstruction model to obtain a reconstructed ultrasonic image, obtaining a target ultrasonic image based on the reconstructed ultrasonic image and outputting the target ultrasonic image, wherein the super-resolution reconstruction model is obtained by training by taking the low-resolution ultrasonic image as input data and the high-resolution ultrasonic image as a training label.
Optionally, the step of performing contrast enhancement processing on the to-be-processed ultrasonic image to obtain a processed ultrasonic image includes:
Determining an interested region and a non-interested region in the ultrasonic image to be processed, wherein the non-interested region is an image region except the interested region in the ultrasonic image to be processed;
Copying the ultrasonic image to be processed to obtain a copied ultrasonic image;
carrying out gray level normalization on the ultrasonic image to be processed, determining an image stretching range of the ultrasonic image to be processed, and mapping pixel values of pixels of the ultrasonic image to be processed in the image stretching range to obtain a first processed image;
performing low-pass filtering denoising treatment on the copied ultrasonic image to obtain a second treated image;
and merging the region of interest of the first processed image and the non-region of interest of the second processed image to obtain a processed ultrasonic image.
Optionally, the step of determining the image sharpness of the processed ultrasound image includes:
And calculating the mean square error of pixel values of all pixels in the processed ultrasonic image, and determining the image definition of the processed ultrasonic image based on the mean square error, wherein the mean square error is inversely proportional to the brightness uniformity.
Optionally, the step of inputting the processed ultrasound image into a super-resolution reconstruction model to obtain a reconstructed ultrasound image if the image sharpness does not meet the sharpness threshold includes:
If the image definition does not accord with the definition threshold, acquiring an adjacent frame ultrasonic image of the ultrasonic image to be processed;
and inputting the adjacent frame ultrasonic image and the processed ultrasonic image into the super-resolution reconstruction model to obtain a reconstructed ultrasonic image.
Optionally, the super-resolution reconstruction model comprises a convolution layer, an upsampling layer, a fusion layer, a residual block and an activation function;
the step of inputting the adjacent frame ultrasonic image and the processed ultrasonic image into the super-resolution reconstruction model to obtain a reconstructed ultrasonic image comprises the following steps:
Inputting the adjacent frame ultrasonic image and the processed ultrasonic image into the convolution layer to obtain a first feature of the adjacent frame ultrasonic image and a second feature of the processed ultrasonic image;
Inputting the first feature and the second feature into the up-sampling layer to obtain a third feature corresponding to the first feature and a fourth feature corresponding to the second feature;
Inputting the third feature and the fourth feature into the fusion layer to obtain a fusion feature;
inputting the fusion characteristic into the residual block to obtain a target characteristic;
inputting the target feature into the activation function to obtain a reconstructed ultrasonic image.
Optionally, the super-resolution reconstruction model further includes an attention mechanism block;
The step of inputting the third feature and the fourth feature into the fusion layer to obtain a fusion feature includes:
Inputting the third feature and the fourth feature into the attention mechanism block, and obtaining attention weights based on the similarity between the third feature and the fourth feature through the attention mechanism block;
Weighting the fourth feature through the attention weight to obtain a weighted feature;
and inputting the third feature and the weighted feature into the fusion layer to obtain a fusion feature.
Optionally, the step of obtaining and outputting the target ultrasound image based on the reconstructed ultrasound image includes:
acquiring a first weight corresponding to the processed ultrasonic image and a second weight corresponding to the reconstructed ultrasonic image;
Weighting the processed ultrasonic image through the first weight to obtain a first weighted image, and weighting the reconstructed ultrasonic image through the second weight to obtain a second weighted image;
And superposing the first weighted image and the second weighted image to obtain a target ultrasonic image, and outputting the target ultrasonic image.
In order to achieve the above object, the present application also provides an ultrasonic image processing apparatus comprising:
the acquisition module is used for acquiring an ultrasonic image to be processed, and carrying out contrast enhancement processing on the ultrasonic image to be processed to obtain a processed ultrasonic image;
The judging module is used for determining the image definition of the processed ultrasonic image and judging whether the image definition accords with a preset definition threshold value or not;
the first output module is used for outputting the processed ultrasonic image if the image definition accords with the definition threshold;
And the second output module is used for inputting the processed ultrasonic image into a super-resolution reconstruction model to obtain a reconstructed ultrasonic image if the image definition does not accord with the definition threshold value, obtaining a target ultrasonic image based on the reconstructed ultrasonic image and outputting the target ultrasonic image, wherein the super-resolution reconstruction model is obtained by training by taking the low-resolution ultrasonic image as input data and taking the high-resolution ultrasonic image as a training label.
To achieve the above object, the present application also provides an ultrasonic image processing apparatus comprising: the ultrasonic image processing device comprises a memory, a processor and an ultrasonic image processing program which is stored in the memory and can run on the processor, wherein the ultrasonic image processing program realizes the steps of the ultrasonic image processing method when being executed by the processor.
In addition, in order to achieve the above object, the present application also proposes a computer-readable storage medium having stored thereon an ultrasound image processing program which, when executed by a processor, implements the steps of the ultrasound image processing method as described above.
In the application, an ultrasonic image to be processed is acquired, and contrast enhancement processing is performed on it to obtain a processed ultrasonic image; the image definition of the processed ultrasonic image is determined, and whether the image definition accords with a preset definition threshold is judged; if the image definition accords with the definition threshold, the processed ultrasonic image is output; if the image definition does not accord with the definition threshold, the processed ultrasonic image is input into a super-resolution reconstruction model to obtain a reconstructed ultrasonic image, and a target ultrasonic image is obtained based on the reconstructed ultrasonic image and output, wherein the super-resolution reconstruction model is trained with the low-resolution ultrasonic image as input data and the high-resolution ultrasonic image as a training label.
Compared with processing the ultrasonic image to be processed solely with a complex method such as a neural network algorithm, or solely with a simple image processing algorithm, in the present application the definition of the ultrasonic image to be processed is rapidly improved through contrast enhancement processing, and the definition of the processed ultrasonic image is then judged: when the image definition accords with the threshold, the processed ultrasonic image is output, avoiding an unnecessary complex image processing flow and improving the timeliness of ultrasonic image processing; when the image definition does not accord with the threshold, super-resolution reconstruction is performed based on the super-resolution reconstruction model, so that the quality of the finally output ultrasonic image is ensured.
Drawings
FIG. 1 is a schematic diagram of a hardware operating environment according to an embodiment of the present application;
FIG. 2 is a flow chart of a first embodiment of the ultrasound image processing method of the present application;
FIG. 3 is a flow chart of a second embodiment of the ultrasound image processing method of the present application;
FIG. 4 is a schematic flow chart of super-resolution reconstruction according to an embodiment of the present application;
fig. 5 is a schematic functional block diagram of an ultrasonic image processing apparatus according to a preferred embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Referring to fig. 1, fig. 1 is a schematic device structure of a hardware running environment according to an embodiment of the present application.
It should be noted that the ultrasonic image processing apparatus according to the embodiment of the present application may be an ultrasonic imaging apparatus, or an apparatus that establishes a communication connection with an ultrasonic imaging apparatus, which is not limited herein.
As shown in fig. 1, the ultrasonic image processing apparatus may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to enable communication between these components. The user interface 1003 may include a Display and an input unit such as a Keyboard, and optionally may further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory, such as a disk memory. The memory 1005 may optionally also be a storage device separate from the processor 1001.
It will be appreciated by those skilled in the art that the device structure shown in fig. 1 is not limiting of the ultrasound image processing device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and an ultrasound image processing program may be included in the memory 1005 as one type of computer storage medium. The operating system is a program that manages and controls the hardware and software resources of the device, supporting the execution of ultrasound image processing programs, as well as other software or programs. In the device shown in fig. 1, the user interface 1003 is mainly used for data communication with the client; the network interface 1004 is mainly used for establishing communication connection with a server; and the processor 1001 may be configured to call an ultrasound image processing program stored in the memory 1005 and perform the following operations:
Acquiring an ultrasonic image to be processed, and performing contrast enhancement processing on the ultrasonic image to be processed to obtain a processed ultrasonic image;
Determining the image definition of the processed ultrasonic image, and judging whether the image definition accords with a preset definition threshold;
outputting the processed ultrasonic image if the image definition accords with the definition threshold;
And if the image definition does not accord with the definition threshold, inputting the processed ultrasonic image into a super-resolution reconstruction model to obtain a reconstructed ultrasonic image, obtaining a target ultrasonic image based on the reconstructed ultrasonic image and outputting the target ultrasonic image, wherein the super-resolution reconstruction model is obtained by training by taking the low-resolution ultrasonic image as input data and the high-resolution ultrasonic image as a training label.
Further, the step of performing contrast enhancement processing on the to-be-processed ultrasonic image to obtain a processed ultrasonic image includes:
Determining an interested region and a non-interested region in the ultrasonic image to be processed, wherein the non-interested region is an image region except the interested region in the ultrasonic image to be processed;
Copying the ultrasonic image to be processed to obtain a copied ultrasonic image;
carrying out gray level normalization on the ultrasonic image to be processed, determining an image stretching range of the ultrasonic image to be processed, and mapping pixel values of pixels of the ultrasonic image to be processed in the image stretching range to obtain a first processed image;
performing low-pass filtering denoising treatment on the copied ultrasonic image to obtain a second treated image;
and merging the region of interest of the first processed image and the non-region of interest of the second processed image to obtain a processed ultrasonic image.
Further, the step of determining the image sharpness of the processed ultrasound image includes:
And calculating the mean square error of pixel values of all pixels in the processed ultrasonic image, and determining the image definition of the processed ultrasonic image based on the mean square error, wherein the mean square error is inversely proportional to the brightness uniformity.
Further, if the image sharpness does not meet the sharpness threshold, the step of inputting the processed ultrasonic image into a super-resolution reconstruction model to obtain a reconstructed ultrasonic image includes:
If the image definition does not accord with the definition threshold, acquiring an adjacent frame ultrasonic image of the ultrasonic image to be processed;
and inputting the adjacent frame ultrasonic image and the processed ultrasonic image into the super-resolution reconstruction model to obtain a reconstructed ultrasonic image.
Further, the super-resolution reconstruction model comprises a convolution layer, an up-sampling layer, a fusion layer, a residual block and an activation function;
the step of inputting the adjacent frame ultrasonic image and the processed ultrasonic image into the super-resolution reconstruction model to obtain a reconstructed ultrasonic image comprises the following steps:
Inputting the adjacent frame ultrasonic image and the processed ultrasonic image into the convolution layer to obtain a first feature of the adjacent frame ultrasonic image and a second feature of the processed ultrasonic image;
Inputting the first feature and the second feature into the up-sampling layer to obtain a third feature corresponding to the first feature and a fourth feature corresponding to the second feature;
Inputting the third feature and the fourth feature into the fusion layer to obtain a fusion feature;
inputting the fusion characteristic into the residual block to obtain a target characteristic;
inputting the target feature into the activation function to obtain a reconstructed ultrasonic image.
Further, the super-resolution reconstruction model further comprises an attention mechanism block;
The step of inputting the third feature and the fourth feature into the fusion layer to obtain a fusion feature includes:
Inputting the third feature and the fourth feature into the attention mechanism block, and obtaining attention weights based on the similarity between the third feature and the fourth feature through the attention mechanism block;
Weighting the fourth feature through the attention weight to obtain a weighted feature;
and inputting the third feature and the weighted feature into the fusion layer to obtain a fusion feature.
Further, the step of obtaining and outputting a target ultrasound image based on the reconstructed ultrasound image includes:
acquiring a first weight corresponding to the processed ultrasonic image and a second weight corresponding to the reconstructed ultrasonic image;
Weighting the processed ultrasonic image through the first weight to obtain a first weighted image, and weighting the reconstructed ultrasonic image through the second weight to obtain a second weighted image;
And superposing the first weighted image and the second weighted image to obtain a target ultrasonic image, and outputting the target ultrasonic image.
Based on the above-described structure, various embodiments of an ultrasound image processing method are proposed.
Referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of an ultrasound image processing method according to the present application.
Embodiments of the present application provide embodiments of the ultrasound image processing method. It should be noted that although a logical order is shown in the flowchart, in some cases the steps may be performed in a different order than shown or described herein. In this embodiment, the execution body of the ultrasound image processing method may be an ultrasound imaging apparatus or an apparatus that establishes a communication connection with the ultrasound imaging apparatus; this is not limited in this embodiment, and for convenience of description the execution body is omitted in the explanation of each embodiment. In this embodiment, the ultrasound image processing method includes steps S10 to S40.
Step S10, an ultrasonic image to be processed is obtained, and contrast enhancement processing is carried out on the ultrasonic image to be processed to obtain a processed ultrasonic image.
In this embodiment, in the process of ultrasound imaging, an ultrasound image is processed in real time, that is, each time a frame of ultrasound image is acquired, the frame of ultrasound image is processed to ensure the definition of the ultrasound image, and the ultrasound image that needs to be processed in the current frame is hereinafter referred to as an ultrasound image to be processed. It should be noted that the ultrasound image in this embodiment may be an ultrasound image of organs or tissues such as liver, lung, heart, etc., and is not limited herein.
In this embodiment, after the to-be-processed ultrasonic image is received, contrast enhancement processing is performed on it, and the ultrasonic image obtained after the contrast enhancement processing is referred to as the processed ultrasonic image.
In this embodiment, the specific manner of the contrast enhancement process is not limited, for example, in a possible implementation manner, a histogram equalization method may be used to adjust the gray level distribution of the ultrasound image, so that the contrast of the image is enhanced; in another possible implementation, the laplacian operator may also be used to detect edges and details in the image, and enhance the contrast by enhancing the edges; in another possible embodiment, a contrast stretching mode may be adopted.
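For illustration, a minimal Python sketch of two of the contrast-enhancement options named above follows, assuming OpenCV and NumPy are available; the function names and the Laplacian weight are illustrative assumptions, not part of the application.

```python
import cv2
import numpy as np

def enhance_by_equalization(img_gray: np.ndarray) -> np.ndarray:
    # Histogram equalization: redistribute gray levels to raise contrast.
    return cv2.equalizeHist(img_gray)

def enhance_by_laplacian(img_gray: np.ndarray, weight: float = 0.5) -> np.ndarray:
    # Laplacian edge detection; subtracting the scaled Laplacian sharpens edges.
    lap = cv2.Laplacian(img_gray, cv2.CV_64F)
    out = img_gray.astype(np.float64) - weight * lap
    return np.clip(out, 0, 255).astype(np.uint8)
```

Both functions expect an 8-bit single-channel ultrasound frame; either can stand in for step S10 in the gating sketch shown after step S40 below.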
Step S20, determining the image definition of the processed ultrasonic image, and judging whether the image definition accords with a preset definition threshold.
In this embodiment, after the processed ultrasound image is obtained, the image definition of the processed ultrasound image is determined, and whether the image definition meets a preset definition threshold is determined, so as to determine whether the imaging quality of the processed ultrasound image meets the diagnostic requirement of a doctor. The definition threshold may be set according to actual requirements, which is not limited herein.
Specifically, the manner of determining the image sharpness of the processed ultrasonic image is not limited herein. For example, in a possible embodiment, the sharpness may be evaluated by quantifying the contrast of the processed ultrasonic image through parameters such as its contrast coefficient or contrast expansion factor, where higher contrast corresponds to higher sharpness; in another possible implementation manner, the noise power spectrum or the noise standard deviation of the processed ultrasonic image can be calculated, where lower noise corresponds to higher sharpness; in another possible embodiment, the resolution of the processed ultrasonic image can be measured, where higher resolution corresponds to higher image sharpness.
And step S30, outputting the processed ultrasonic image if the image definition accords with the definition threshold.
In this embodiment, if the image sharpness meets the sharpness threshold, it indicates that the image quality of the processed ultrasound image meets the medical requirement, and the method can be used for auxiliary diagnosis, and at this time, the processed ultrasound image is output to display the processed ultrasound image on the ultrasound imaging device.
And step S40, if the image definition does not accord with the definition threshold, inputting the processed ultrasonic image into a super-resolution reconstruction model to obtain a reconstructed ultrasonic image, obtaining a target ultrasonic image based on the reconstructed ultrasonic image and outputting the target ultrasonic image, wherein the super-resolution reconstruction model is obtained by training with a low-resolution ultrasonic image as input data and a high-resolution ultrasonic image as a training label.
And if the image definition does not accord with the definition threshold, inputting the processed ultrasonic image into a super-resolution reconstruction model to obtain a reconstructed ultrasonic image. It should be noted that, the specific structure of the super-resolution reconstruction model is not limited, and may be set according to actual requirements.
In this embodiment, the super-resolution reconstruction model is obtained by training with a low-resolution ultrasound image as input data and a high-resolution ultrasound image as a training tag, and the specific training process is not described here. It should be noted that the ultrasound images used in the model training process are those in the public case library.
Super-resolution reconstruction obtains a high-resolution ultrasonic image from a low-resolution ultrasonic image; in this embodiment, image reconstruction is performed on the processed ultrasonic image based on the super-resolution reconstruction model to obtain a reconstructed ultrasonic image. It should be noted that the reconstruction may operate on the processed ultrasonic image alone, or may combine the processed ultrasonic image with a plurality of related low-resolution images, for example the adjacent frame ultrasonic images.
And obtaining a target ultrasonic image based on the reconstructed ultrasonic image, and outputting the target ultrasonic image. In a possible implementation manner, the reconstructed ultrasonic image can be directly taken as a target ultrasonic image and output; in another possible embodiment, the target ultrasound image may be obtained after processing the reconstructed ultrasound image, for example, an image fusion process may be performed.
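As an overview, the gating logic of steps S10 to S40 can be sketched as follows; `enhance`, `sharpness_fn` and `super_resolve` are placeholder callables standing in for the operations detailed in the embodiments below, and the equal fusion weights are an assumption (the weighted variant appears in steps S403 to S405).

```python
import numpy as np

def process_frame(frame: np.ndarray, enhance, sharpness_fn, super_resolve,
                  threshold: float) -> np.ndarray:
    # Step S10: fast contrast enhancement on the frame to be processed.
    processed = enhance(frame)
    # Steps S20/S30: if sharpness already meets the threshold, output directly.
    if sharpness_fn(processed) >= threshold:
        return processed
    # Step S40: otherwise run the super-resolution reconstruction model ...
    reconstructed = super_resolve(processed)
    # ... and fuse the processed and reconstructed images (equal weights here).
    out = 0.5 * processed.astype(np.float64) + 0.5 * reconstructed.astype(np.float64)
    return np.clip(out, 0, 255).astype(np.uint8)
```

The fast path returns after only the cheap enhancement, which is where the timeliness gain described above comes from; the model is invoked only for frames that fail the threshold.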
Further, in one possible embodiment, step S10, performing contrast enhancement processing on the ultrasonic image to be processed to obtain a processed ultrasonic image, includes steps S101 to S105.
Step S101, determining a region of interest and a non-region of interest in the ultrasound image to be processed, where the non-region of interest is an image region of the ultrasound image to be processed except the region of interest.
In this embodiment, the ultrasonic image to be processed is divided into a region of interest and a non-region of interest. The region of interest (ROI, Region of Interest) is the area of the ultrasonic image that requires focused attention or processing, such as diseased tissue, blood vessels, organs, or other structures that need close observation. The non-region of interest is the image region of the ultrasonic image to be processed other than the region of interest.
The specific manner of extracting the region of interest is not limited herein, and for example, in a possible embodiment, the region of interest may be identified by using edge features, and the contour of the region of interest is identified, so that the contour of the region of interest is extracted; in another possible implementation manner, a region growing mode is adopted, specifically, a group of seed points are selected from pixels of an ultrasonic image to be processed, adjacent pixels with similar properties to the seed points are combined into a region of interest, and the whole region of interest can be finally extracted by continuously expanding the range of the region; edge extraction may also be performed by means of machine learning, threshold detection, or the like.
And step S102, copying the ultrasonic image to be processed to obtain a copied ultrasonic image.
The ultrasonic image to be processed is copied to obtain a copied ultrasonic image. The copied ultrasonic image contains a region of interest and a non-region of interest corresponding to those in the ultrasonic image to be processed.
Step S103, carrying out gray level normalization on the ultrasonic image to be processed, determining an image stretching range of the ultrasonic image to be processed, and mapping pixel values of all pixels of the ultrasonic image to be processed in the image stretching range to obtain a first processed image.
In this embodiment, contrast stretching is performed on the region of interest to improve its contrast against the non-region of interest, so that the region of interest becomes easier to observe and identify and an accurate reference image is provided for the doctor.
Specifically, in this embodiment, the to-be-processed ultrasound image is subjected to gray level normalization, an image stretching range of the to-be-processed ultrasound image is determined, and then the pixel values of each pixel of the to-be-processed ultrasound image are mapped in the image stretching range to obtain the first processed image. The manner of determining the image stretching range is not limited herein, and for example, in a possible embodiment, a histogram describing the gray level distribution of the image may be obtained, and the minimum value and the maximum value of the pixel values in the ultrasound image to be processed may be determined based on the gray level histogram, where the minimum value and the maximum value are respectively used as the minimum value and the maximum value of the image stretching range.
And step S104, performing low-pass filtering denoising processing on the copied ultrasonic image to obtain a second processed image.
The second processed image is obtained by performing low-pass filtering denoising processing on the copied ultrasonic image. In this embodiment, the timeliness of ultrasonic image processing is improved by applying only this simplified image processing operation to the non-region of interest.
Step S105, merging the region of interest of the first processed image and the non-region of interest of the second processed image to obtain a processed ultrasound image.
Combining the region of interest of the first processed image and the non-region of interest of the second processed image results in a processed ultrasound image.
In this embodiment, a region of interest and a non-region of interest in the ultrasonic image to be processed are determined, wherein the non-region of interest is the image region other than the region of interest; the ultrasonic image to be processed is copied to obtain a copied ultrasonic image; gray level normalization is performed on the ultrasonic image to be processed, its image stretching range is determined, and the pixel values of each pixel are mapped within the image stretching range to obtain a first processed image; low-pass filtering denoising processing is performed on the copied ultrasonic image to obtain a second processed image; and the region of interest of the first processed image and the non-region of interest of the second processed image are merged to obtain the processed ultrasonic image. Compared with performing the same operation on all regions of the ultrasonic image to be processed, in this embodiment the image is divided into a region of interest and a non-region of interest: the region of interest receives more detailed image processing and analysis, improving its image quality and accuracy, while the non-region of interest receives simpler image processing, saving computing resources and time and improving the timeliness of image processing.
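A minimal sketch of steps S101 to S105 follows, assuming a boolean `roi_mask` has already been extracted by one of the methods above and using a Gaussian kernel as one possible low-pass filter; both are assumptions, not the patented implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_enhance(img: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    img_f = img.astype(np.float64)
    # Step S103: gray-level normalization and contrast stretching, mapping pixel
    # values over the stretching range given by the image's min/max gray levels.
    lo, hi = img_f.min(), img_f.max()
    stretched = (img_f - lo) / max(hi - lo, 1e-6) * 255.0   # first processed image
    # Step S104: low-pass denoising of the copied image (Gaussian as one choice).
    smoothed = gaussian_filter(img_f, sigma=1.5)            # second processed image
    # Step S105: ROI taken from the stretched image, non-ROI from the smoothed copy.
    return np.clip(np.where(roi_mask, stretched, smoothed), 0, 255).astype(np.uint8)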
Further, in a possible embodiment, step S20, determining the image definition of the processed ultrasonic image, includes step S201.
Step S201, calculating a mean square error of pixel values of each pixel in the processed ultrasonic image, and determining image definition of the processed ultrasonic image based on the mean square error, wherein the mean square error is inversely proportional to brightness uniformity.
And calculating the mean square error of pixel values of all pixels in the processed ultrasonic image, and determining the image definition of the processed ultrasonic image based on the mean square error, wherein the mean square error is inversely proportional to the brightness uniformity. Specifically, the mean square error of the pixel value may be used as the image definition, or the mean square error of the pixel value may be mapped to the definition according to a preset mapping relationship, which is not limited herein.
In this embodiment, the mean square error of the pixel value is a standardized and objective index, and the sharpness is evaluated based on the mean square error, so that accuracy, objectivity and interpretability of sharpness evaluation can be ensured.
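Under the reading that the mean square error of the pixel values, i.e. their mean square deviation from the image mean, is used directly as the sharpness score, a sketch could be:

```python
import numpy as np

def image_sharpness(img: np.ndarray) -> float:
    # Mean square deviation of pixel values from their mean: a uniformly bright
    # (flat) image scores near zero, a high-contrast image scores high, matching
    # the stated inverse relation between this error and brightness uniformity.
    pixels = img.astype(np.float64)
    return float(np.mean((pixels - pixels.mean()) ** 2))
```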
In this embodiment, an ultrasonic image to be processed is acquired and subjected to contrast enhancement processing to obtain a processed ultrasonic image; the image definition of the processed ultrasonic image is determined, and whether it accords with a preset definition threshold is judged; if the image definition accords with the definition threshold, the processed ultrasonic image is output; if not, the processed ultrasonic image is input into a super-resolution reconstruction model to obtain a reconstructed ultrasonic image, and a target ultrasonic image is obtained based on the reconstructed ultrasonic image and output, wherein the super-resolution reconstruction model is trained with the low-resolution ultrasonic image as input data and the high-resolution ultrasonic image as a training label.
Compared with processing the ultrasonic image to be processed solely with a complex method such as a neural network algorithm, or solely with a simple image processing algorithm, in the present method the definition of the ultrasonic image to be processed is rapidly improved through contrast enhancement processing, and the definition of the processed ultrasonic image is then judged: when the image definition accords with the threshold, the processed ultrasonic image is output, avoiding an unnecessary complex image processing flow and improving the timeliness of ultrasonic image processing; when the image definition does not accord with the threshold, super-resolution reconstruction is performed based on the super-resolution reconstruction model, so that the quality of the finally output ultrasonic image is ensured.
In summary, in this embodiment, by combining efficient contrast enhancement processing, an automated judgment flow, and a high-resolution reconstruction model, the quality of an ultrasound image is ensured, and the timeliness of processing is effectively improved.
Further, based on the above-mentioned first embodiment, a second embodiment of the ultrasound image processing method of the present application is proposed, in which step S40 includes steps S401 to S402.
And step S401, if the image definition does not accord with the definition threshold, acquiring the adjacent frame ultrasonic image of the ultrasonic image to be processed.
In this embodiment, image reconstruction is performed based on the processed ultrasound image and a plurality of low resolution images related to the processed ultrasound image, so as to obtain a reconstructed ultrasound image, so as to improve accuracy of the reconstructed ultrasound image.
Specifically, if the image definition does not meet the definition threshold, acquiring an adjacent frame ultrasonic image of the ultrasonic image to be processed, and carrying out subsequent image reconstruction based on the processed ultrasonic image and the adjacent frame ultrasonic image.
And step S402, inputting the adjacent frame ultrasonic image and the processed ultrasonic image into the super-resolution reconstruction model to obtain a reconstructed ultrasonic image.
And inputting the ultrasonic images of the adjacent frames and the processed ultrasonic images into a super-resolution reconstruction model to obtain a reconstructed ultrasonic image.
Further, in a possible embodiment, the super-resolution reconstruction model includes a convolution layer, an upsampling layer, a fusion layer, a residual block, and an activation function. Step S402: inputting the adjacent frame ultrasonic image and the processed ultrasonic image into the super-resolution reconstruction model to obtain a reconstructed ultrasonic image, wherein the method comprises the steps of S4021-S4025.
Step S4021, inputting the adjacent frame ultrasound image and the processed ultrasound image into the convolution layer to obtain a first feature of the adjacent frame ultrasound image and a second feature of the processed ultrasound image.
In this embodiment, the super-resolution reconstruction model includes a convolution layer, an upsampling layer, a fusion layer, a residual block, and an activation function. Specifically, the adjacent frame ultrasonic image and the processed ultrasonic image are input into the convolution layer, which performs feature extraction on both images to obtain the first feature of the adjacent frame ultrasonic image and the second feature of the processed ultrasonic image.
Step S4022, inputting the first feature and the second feature into the upsampling layer to obtain a third feature corresponding to the first feature and a fourth feature corresponding to the second feature.
The first feature and the second feature are input into an upsampling layer, the resolution of the first feature and the second feature is improved through the upsampling layer, so as to obtain a third feature corresponding to the first feature and a fourth feature corresponding to the second feature.
And step S4023, inputting the third feature and the fourth feature into the fusion layer to obtain a fusion feature.
Inputting the third feature and the fourth feature into a fusion layer, and fusing the third feature and the fourth feature to obtain a fusion feature.
The specific feature fusion method is not limited herein; for example, feature fusion may be performed by weighted fusion, feature concatenation, PCA (Principal Component Analysis), a kernel-based nonlinear feature fusion method, or the like. For example, in a possible implementation manner, weighted fusion is adopted to improve the efficiency of feature fusion and the timeliness of ultrasonic image processing. The specific process may be: determining the respective fusion weights of the third feature and the fourth feature, multiplying each feature by its fusion weight, and then combining the results to obtain the fusion feature, where the combination may be a simple addition, a multiplication, or another fusion function.
And step S4024, inputting the fusion characteristic into the residual block to obtain a target characteristic.
The fusion feature is input into the residual block, where it undergoes transformations such as convolution, activation functions, and batch normalization to obtain an enhanced feature; the enhanced feature and the fusion feature are then added through the residual connection. The residual block enables the super-resolution reconstruction network to better understand and represent the inherent structure of the fusion feature, improving the performance of the super-resolution reconstruction model and thus the accuracy of the reconstructed ultrasonic image.
Step S4025, inputting the target feature into the activation function to obtain a reconstructed ultrasound image.
And inputting the target characteristics into an activation function to obtain a reconstructed ultrasonic image. After the target features are input into the activation function, the activation function can perform nonlinear transformation on the target features, and the representation capability of the features is enhanced, so that the performance of the super-resolution reconstruction model is improved, and the accuracy of the reconstructed ultrasonic image is improved.
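The following PyTorch sketch wires the named components (convolution layer, up-sampling layer, fusion layer, residual block, activation function) in the order of steps S4021 to S4025; the channel count, the x2 scale factor, bilinear up-sampling, the output convolution, and the sigmoid activation are all assumptions made for a runnable example, not details from the application.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRSketch(nn.Module):
    """Conv -> upsample -> fusion -> residual block -> activation (illustrative)."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.feat = nn.Conv2d(1, ch, kernel_size=3, padding=1)   # shared convolution layer
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=1)         # fusion layer
        self.res1 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)  # residual block, conv 1
        self.res2 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)  # residual block, conv 2
        self.out = nn.Conv2d(ch, 1, kernel_size=3, padding=1)    # maps features to an image

    def forward(self, processed: torch.Tensor, adjacent: torch.Tensor) -> torch.Tensor:
        f1 = self.feat(adjacent)    # step S4021: first feature (adjacent frame)
        f2 = self.feat(processed)   # step S4021: second feature (processed frame)
        # Step S4022: up-sampling layer (x2, bilinear - an assumption).
        f3 = F.interpolate(f1, scale_factor=2, mode="bilinear", align_corners=False)
        f4 = F.interpolate(f2, scale_factor=2, mode="bilinear", align_corners=False)
        fused = self.fuse(torch.cat([f3, f4], dim=1))            # step S4023: fusion layer
        enhanced = F.relu(self.res2(F.relu(self.res1(fused))))   # step S4024: residual body
        target = fused + enhanced                                # residual connection
        return torch.sigmoid(self.out(target))                   # step S4025: activation
```

For example, `SRSketch()(processed, adjacent)` with two tensors of shape (B, 1, H, W) returns a reconstruction of shape (B, 1, 2H, 2W).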
Further, in a possible embodiment, the super-resolution reconstruction model further includes an attention mechanism block. Step S4023, inputting the third feature and the fourth feature into the fusion layer to obtain a fusion feature, includes steps S40231 to S40233.
And step S40231, inputting the third feature and the fourth feature into the attention mechanism block, and obtaining attention weight through the attention mechanism block based on the similarity between the third feature and the fourth feature.
In this embodiment, the super-resolution reconstruction model further includes an attention mechanism block, and the attention mechanism block focuses on the correlation between the processed ultrasound image and the adjacent frame ultrasound image, and improves the accuracy of reconstructing the ultrasound image based on the correlation between the processed ultrasound image and the adjacent frame ultrasound image.
Specifically, the third feature and the fourth feature are input into an attention mechanism block, and attention weights are obtained by the attention mechanism block based on the similarity between the third feature and the fourth feature.
In this embodiment, the attention mechanism block obtains a weighting coefficient matrix by multiplying the third feature and the fourth feature pixel by pixel; the attention mechanism block then normalizes the weighting coefficient matrix using a softmax function to obtain the attention weight of each pixel, which reflects the degree of attention, i.e., the degree of correlation, of each pixel in the third feature to each pixel in the fourth feature. The specific formula of the attention mechanism may be:
A = softmax(F1 · F2)
where F1 is the third feature, F2 is the fourth feature, and A represents the attention weight.
Step S40232, performing a weighting process on the fourth feature by using the attention weight to obtain a weighted feature.
And weighting the fourth feature by the attention weight to obtain a weighted feature.
And step S40233, inputting the third feature and the weighted feature into the fusion layer to obtain a fusion feature.
And inputting the third feature and the weighted feature into a fusion layer to obtain a fusion feature.
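Steps S40231 to S40233 could be sketched as follows; normalizing the pixel-wise products over the flattened spatial axis is one possible reading of the softmax in the formula above, and concatenating the third and weighted features as the fusion-layer input is likewise an assumption.

```python
import torch
import torch.nn.functional as F

def attention_fuse(f3: torch.Tensor, f4: torch.Tensor) -> torch.Tensor:
    b, c, h, w = f3.shape
    # Step S40231: pixel-wise products F1 . F2, normalized with softmax -> A.
    scores = (f3 * f4).view(b, c, h * w)
    attn = F.softmax(scores, dim=-1).view(b, c, h, w)
    # Step S40232: weight the fourth feature with the attention weights.
    weighted = attn * f4
    # Step S40233: hand the third feature and the weighted feature to the fusion layer.
    return torch.cat([f3, weighted], dim=1)
```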
For example, referring to fig. 4, in this embodiment, the specific process of super-resolution reconstruction may be:
and inputting the adjacent frame ultrasonic images and the processed ultrasonic images into a convolution layer to obtain first features of the adjacent frame ultrasonic images and second features of the processed ultrasonic images.
And inputting the first feature and the second feature into an up-sampling layer to obtain a third feature corresponding to the first feature and a fourth feature corresponding to the second feature.
The third feature and the fourth feature are input into an attention mechanism block, and attention weights are obtained by the attention mechanism block based on the similarity between the third feature and the fourth feature.
And weighting the fourth feature by the attention weight to obtain a weighted feature.
And inputting the third feature and the weighted feature into a fusion layer to obtain a fusion feature.
And inputting the fusion characteristic into a residual error block to obtain a target characteristic.
Inputting the target feature into an activation function to obtain a reconstructed ultrasonic image.
Further, in a possible embodiment, step S40, obtaining and outputting a target ultrasonic image based on the reconstructed ultrasonic image, includes steps S403 to S405.
Step S403, acquiring a first weight corresponding to the processed ultrasound image and a second weight corresponding to the reconstructed ultrasound image.
In the embodiment, the processed ultrasonic image and the reconstructed ultrasonic image are subjected to image fusion to obtain the target ultrasonic image, so that more comprehensive and richer image information can be formed, more complete image information is provided for the target ultrasonic image, and the accuracy of the target ultrasonic image is improved.
Specifically, in this embodiment a weight pair is preset: the weight corresponding to the processed ultrasonic image is referred to as the first weight, and the weight corresponding to the reconstructed ultrasonic image is referred to as the second weight. The first and second weights control the respective proportions of the processed ultrasonic image and the reconstructed ultrasonic image in the target ultrasonic image, and thus the output effect of the target ultrasonic image. In a specific embodiment, the sharpness of the region of interest in the reconstructed ultrasonic image may be compared with that in the processed ultrasonic image, and the image whose region of interest is sharper is given the larger weight of the pair.
Step S404, performing a weighting process on the processed ultrasound image by using the first weight to obtain a first weighted image, and performing a weighting process on the reconstructed ultrasound image by using the second weight to obtain a second weighted image.
And carrying out weighting treatment on the processed ultrasonic image through the first weight to obtain a first weighted image, and carrying out weighting treatment on the reconstructed ultrasonic image through the second weight to obtain a second weighted image.
And step S405, superposing the first weighted image and the second weighted image to obtain a target ultrasonic image, and outputting the target ultrasonic image.
And superposing the first weighted image and the second weighted image to obtain a target ultrasonic image, and outputting the target ultrasonic image.
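Steps S403 to S405 reduce to a weighted overlay; the weight values below are placeholders, and the sketch assumes both images have been brought to the same size (e.g. the reconstruction resampled to the display resolution).

```python
import numpy as np

def weighted_overlay(processed: np.ndarray, reconstructed: np.ndarray,
                     w1: float = 0.4, w2: float = 0.6) -> np.ndarray:
    # Steps S404/S405: weight each image, then superpose. Per the embodiment,
    # the image whose region of interest is sharper would get the larger weight.
    out = w1 * processed.astype(np.float64) + w2 * reconstructed.astype(np.float64)
    return np.clip(out, 0, 255).astype(np.uint8)
```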
In this embodiment, if the image definition does not accord with the definition threshold, the adjacent frame ultrasonic image of the ultrasonic image to be processed is acquired, and the adjacent frame ultrasonic image and the processed ultrasonic image are input into the super-resolution reconstruction model to obtain the reconstructed ultrasonic image. This realizes super-resolution reconstruction of the ultrasonic image and improves its quality, thereby providing more accurate information for medical diagnosis and treatment.
In addition, an embodiment of the present application further provides an ultrasound image processing apparatus, referring to fig. 5, including:
The acquisition module 10 is used for acquiring an ultrasonic image to be processed, and performing contrast enhancement processing on the ultrasonic image to be processed to obtain a processed ultrasonic image;
the judging module 20 is configured to determine an image sharpness of the processed ultrasonic image, and judge whether the image sharpness meets a preset sharpness threshold;
A first output module 30, configured to output the processed ultrasound image if the image sharpness meets the sharpness threshold;
And a second output module 40, configured to, if the image sharpness does not meet the sharpness threshold, input the processed ultrasound image into a super-resolution reconstruction model to obtain a reconstructed ultrasound image, obtain a target ultrasound image based on the reconstructed ultrasound image, and output the target ultrasound image, where the super-resolution reconstruction model is obtained by training with low-resolution ultrasound images as input data and high-resolution ultrasound images as training labels (a sketch of such a training step follows).
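The supervised setup named here pairs a low-resolution input with a high-resolution label. A minimal PyTorch sketch of one training step is below; the pixel-wise MSE loss and the single-input model signature are assumptions, since the patent does not specify a loss function.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_step(model: nn.Module, optimizer: torch.optim.Optimizer,
               low_res: torch.Tensor, high_res: torch.Tensor) -> float:
    """One supervised step: the low-resolution ultrasound image is the
    input data, the paired high-resolution image is the training label."""
    optimizer.zero_grad()
    pred = model(low_res)                # reconstructed image
    loss = F.mse_loss(pred, high_res)    # assumed pixel-wise loss
    loss.backward()
    optimizer.step()
    return loss.item()
```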
Further, the acquisition module 10 is further configured to:
Determining a region of interest and a non-interest region in the ultrasonic image to be processed, wherein the non-interest region is the image region of the ultrasonic image to be processed other than the region of interest;
Copying the ultrasonic image to be processed to obtain a copied ultrasonic image;
carrying out gray-level normalization on the ultrasonic image to be processed, determining an image stretching range of the ultrasonic image to be processed, and mapping the pixel values of its pixels into the image stretching range to obtain a first processed image;
performing low-pass filtering and denoising on the copied ultrasonic image to obtain a second processed image;
and merging the region of interest of the first processed image with the non-interest region of the second processed image to obtain the processed ultrasonic image, as sketched below.
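A compact Python sketch of this contrast-enhancement pipeline, assuming a boolean mask marks the region of interest, a full-range linear stretch, and a Gaussian kernel for the low-pass filter (none of which the patent fixes):

```python
import cv2
import numpy as np

def contrast_enhance(img: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    """Acquisition-module sketch: stretch the original for contrast,
    low-pass a copy for denoising, then merge ROI and background.
    roi_mask is a boolean array marking the region of interest."""
    # Gray-level normalization and linear stretch to the 8-bit range.
    f = img.astype(np.float64)
    lo, hi = f.min(), f.max()
    first = ((f - lo) / max(hi - lo, 1e-9) * 255.0).astype(np.uint8)

    # Low-pass (Gaussian) filtering of the copied image.
    second = cv2.GaussianBlur(img.copy(), (5, 5), 0)

    # ROI from the stretched image, non-interest region from the copy.
    return np.where(roi_mask, first, second)
```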
Further, the judging module 20 is further configured to:
Calculating the mean square error of the pixel values of all pixels in the processed ultrasonic image, and determining the image sharpness of the processed ultrasonic image based on the mean square error, wherein the mean square error is inversely proportional to the brightness uniformity: the more uniform the brightness, the lower the score, as sketched below.
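Read this way, the score is the variance of the gray levels about their mean. A short sketch, with the threshold comparison from the judging step included:

```python
import numpy as np

def image_sharpness(img: np.ndarray) -> float:
    """Mean square error of all pixel values about their mean, i.e.
    the gray-level variance: a uniformly bright image scores low."""
    f = img.astype(np.float64)
    return float(np.mean((f - f.mean()) ** 2))

def meets_threshold(img: np.ndarray, threshold: float) -> bool:
    """Judging-module sketch: compare against the preset threshold."""
    return image_sharpness(img) >= threshold
```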
Further, the second output module 40 is further configured to:
If the image sharpness does not meet the sharpness threshold, acquiring an adjacent frame ultrasonic image of the ultrasonic image to be processed;
and inputting the adjacent frame ultrasonic image and the processed ultrasonic image into the super-resolution reconstruction model to obtain a reconstructed ultrasonic image.
Further, the super-resolution reconstruction model includes a convolution layer, an upsampling layer, a fusion layer, a residual block and an activation function, and the second output module 40 is further configured to:
Inputting the adjacent frame ultrasonic image and the processed ultrasonic image into the convolution layer to obtain a first feature of the adjacent frame ultrasonic image and a second feature of the processed ultrasonic image;
Inputting the first feature and the second feature into the up-sampling layer to obtain a third feature corresponding to the first feature and a fourth feature corresponding to the second feature;
Inputting the third feature and the fourth feature into the fusion layer to obtain a fusion feature;
inputting the fusion characteristic into the residual block to obtain a target characteristic;
inputting the target feature into the activation function to obtain a reconstructed ultrasonic image; a sketch of this topology follows.
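A minimal PyTorch sketch of the described topology. The channel width, the 2x upsampling factor, the shared feature extractor, the 1x1 fusion convolution, and the sigmoid output are all illustrative choices the patent leaves open:

```python
import torch
import torch.nn as nn

class SRFusionNet(nn.Module):
    """Sketch: convolution -> upsampling -> fusion -> residual block
    -> activation, with two input images as described."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.conv = nn.Conv2d(1, ch, 3, padding=1)   # feature extractor
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)   # upsampling layer
        self.fuse = nn.Conv2d(2 * ch, ch, 1)         # fusion layer
        self.residual = nn.Sequential(               # residual block body
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.head = nn.Conv2d(ch, 1, 3, padding=1)
        self.act = nn.Sigmoid()                      # output activation

    def forward(self, neighbor: torch.Tensor, processed: torch.Tensor):
        f1 = self.conv(neighbor)          # first feature (adjacent frame)
        f2 = self.conv(processed)         # second feature (processed image)
        f3, f4 = self.up(f1), self.up(f2)  # third and fourth features
        fused = self.fuse(torch.cat([f3, f4], dim=1))  # fusion feature
        target = fused + self.residual(fused)          # target feature
        return self.act(self.head(target))  # reconstructed image
```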
Further, the super-resolution reconstruction model further comprises an attention mechanism block, and the second output module 40 is further configured to:
Inputting the third feature and the fourth feature into the attention mechanism block, and obtaining attention weights based on the similarity between the third feature and the fourth feature through the attention mechanism block;
Weighting the fourth feature through the attention weight to obtain a weighted feature;
and inputting the third feature and the weighted feature into the fusion layer to obtain a fusion feature, as sketched below.
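A sketch of the attention mechanism block; per-pixel cosine similarity mapped through a sigmoid is one assumed way to turn feature similarity into attention weights. Its output can replace the plain concatenation in the model sketch above:

```python
import torch
import torch.nn.functional as F

def attention_fuse(f3: torch.Tensor, f4: torch.Tensor) -> torch.Tensor:
    """Attention-block sketch: derive per-pixel attention weights from
    the similarity of the third and fourth features, weight the fourth
    feature, and concatenate the result for the fusion layer."""
    sim = F.cosine_similarity(f3, f4, dim=1, eps=1e-8)  # (N, H, W)
    attn = torch.sigmoid(sim).unsqueeze(1)              # weights in (0, 1)
    weighted = f4 * attn                                # weighted feature
    return torch.cat([f3, weighted], dim=1)             # fusion-layer input
```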
Further, the second output module 40 is further configured to:
acquiring a first weight corresponding to the processed ultrasonic image and a second weight corresponding to the reconstructed ultrasonic image;
Weighting the processed ultrasonic image through the first weight to obtain a first weighted image, and weighting the reconstructed ultrasonic image through the second weight to obtain a second weighted image;
And superposing the first weighted image and the second weighted image to obtain a target ultrasonic image, and outputting the target ultrasonic image.
The embodiments of the ultrasonic image processing device of the present application may refer to the embodiments of the ultrasonic image processing method of the present application, and will not be described herein.
In addition, an embodiment of the present application further provides a computer readable storage medium, on which an ultrasonic image processing program is stored; the ultrasonic image processing program, when executed by a processor, implements the steps of the ultrasonic image processing method described above.
Embodiments of the ultrasound image processing equipment and the computer readable storage medium of the present application may refer to embodiments of the ultrasound image processing method of the present application, and will not be described herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware alone, though in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, or optical disk) and comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods of the embodiments of the present application.
The foregoing description covers only the preferred embodiments of the present application and is not intended to limit the scope of the application; any equivalent structural or process transformation made using the contents of this specification, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present application.
Claims (10)
1. An ultrasound image processing method, characterized in that the ultrasound image processing method comprises the steps of:
Acquiring an ultrasonic image to be processed, and performing contrast enhancement processing on the ultrasonic image to be processed to obtain a processed ultrasonic image;
Determining the image sharpness of the processed ultrasonic image, and judging whether the image sharpness meets a preset sharpness threshold;
outputting the processed ultrasonic image if the image sharpness meets the sharpness threshold;
And if the image sharpness does not meet the sharpness threshold, inputting the processed ultrasonic image into a super-resolution reconstruction model to obtain a reconstructed ultrasonic image, obtaining a target ultrasonic image based on the reconstructed ultrasonic image and outputting the target ultrasonic image, wherein the super-resolution reconstruction model is obtained by training with low-resolution ultrasonic images as input data and high-resolution ultrasonic images as training labels.
2. The ultrasound image processing method according to claim 1, wherein the step of performing contrast enhancement processing on the ultrasound image to be processed to obtain a processed ultrasound image comprises:
Determining a region of interest and a non-interest region in the ultrasonic image to be processed, wherein the non-interest region is the image region of the ultrasonic image to be processed other than the region of interest;
Copying the ultrasonic image to be processed to obtain a copied ultrasonic image;
carrying out gray-level normalization on the ultrasonic image to be processed, determining an image stretching range of the ultrasonic image to be processed, and mapping the pixel values of its pixels into the image stretching range to obtain a first processed image;
performing low-pass filtering and denoising on the copied ultrasonic image to obtain a second processed image;
and merging the region of interest of the first processed image with the non-interest region of the second processed image to obtain the processed ultrasonic image.
3. The ultrasound image processing method according to claim 1, wherein the step of determining the image sharpness of the processed ultrasound image includes:
calculating the mean square error of the pixel values of all pixels in the processed ultrasonic image, and determining the image sharpness of the processed ultrasonic image based on the mean square error, wherein the mean square error is inversely proportional to the brightness uniformity.
4. The ultrasound image processing method according to claim 1, wherein the step of inputting the processed ultrasound image into a super-resolution reconstruction model to obtain a reconstructed ultrasound image if the image sharpness does not meet the sharpness threshold comprises:
If the image sharpness does not meet the sharpness threshold, acquiring an adjacent frame ultrasonic image of the ultrasonic image to be processed;
and inputting the adjacent frame ultrasonic image and the processed ultrasonic image into the super-resolution reconstruction model to obtain a reconstructed ultrasonic image.
5. The ultrasound image processing method of claim 4, wherein the super-resolution reconstruction model comprises a convolution layer, an upsampling layer, a fusion layer, a residual block, and an activation function;
the step of inputting the adjacent frame ultrasonic image and the processed ultrasonic image into the super-resolution reconstruction model to obtain a reconstructed ultrasonic image comprises the following steps:
Inputting the adjacent frame ultrasonic image and the processed ultrasonic image into the convolution layer to obtain a first feature of the adjacent frame ultrasonic image and a second feature of the processed ultrasonic image;
Inputting the first feature and the second feature into the up-sampling layer to obtain a third feature corresponding to the first feature and a fourth feature corresponding to the second feature;
Inputting the third feature and the fourth feature into the fusion layer to obtain a fusion feature;
inputting the fusion characteristic into the residual block to obtain a target characteristic;
inputting the target feature into the activation function to obtain a reconstructed ultrasonic image.
6. The ultrasound image processing method of claim 5, wherein the super-resolution reconstruction model further comprises an attention mechanism block;
The step of inputting the third feature and the fourth feature into the fusion layer to obtain a fusion feature includes:
Inputting the third feature and the fourth feature into the attention mechanism block, and obtaining attention weights based on the similarity between the third feature and the fourth feature through the attention mechanism block;
Weighting the fourth feature through the attention weight to obtain a weighted feature;
and inputting the third feature and the weighted feature into the fusion layer to obtain a fusion feature.
7. The ultrasound image processing method according to any one of claims 1 to 6, wherein the step of obtaining and outputting a target ultrasound image based on the reconstructed ultrasound image includes:
acquiring a first weight corresponding to the processed ultrasonic image and a second weight corresponding to the reconstructed ultrasonic image;
Weighting the processed ultrasonic image through the first weight to obtain a first weighted image, and weighting the reconstructed ultrasonic image through the second weight to obtain a second weighted image;
And superposing the first weighted image and the second weighted image to obtain a target ultrasonic image, and outputting the target ultrasonic image.
8. An ultrasound image processing device, characterized in that the ultrasound image processing device comprises:
the acquisition module is used for acquiring an ultrasonic image to be processed, and carrying out contrast enhancement processing on the ultrasonic image to be processed to obtain a processed ultrasonic image;
The judging module is used for determining the image sharpness of the processed ultrasonic image and judging whether the image sharpness meets a preset sharpness threshold;
the first output module is used for outputting the processed ultrasonic image if the image sharpness meets the sharpness threshold;
And the second output module is used for, if the image sharpness does not meet the sharpness threshold, inputting the processed ultrasonic image into a super-resolution reconstruction model to obtain a reconstructed ultrasonic image, obtaining a target ultrasonic image based on the reconstructed ultrasonic image and outputting the target ultrasonic image, wherein the super-resolution reconstruction model is obtained by training with low-resolution ultrasonic images as input data and high-resolution ultrasonic images as training labels.
9. An ultrasound image processing equipment, characterized in that the ultrasound image processing equipment comprises: a memory, a processor, and an ultrasound image processing program stored on the memory and executable on the processor, wherein the ultrasound image processing program, when executed by the processor, implements the steps of the ultrasound image processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which an ultrasound image processing program is stored, wherein the ultrasound image processing program, when executed by a processor, implements the steps of the ultrasound image processing method according to any one of claims 1 to 7.