
CN115797276A - Method, apparatus, electronic device and medium for processing endoscope lesion images - Google Patents

Method, apparatus, electronic device and medium for processing endoscope lesion images

Info

Publication number
CN115797276A
Authority
CN
China
Prior art keywords
image
pixel
parameter
processed
green channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211466341.9A
Other languages
Chinese (zh)
Inventor
刘华成
廖想宏
曹歌
鲁应君
李占鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou Xianzan Technology Co ltd
Daichuan Medical Shenzhen Co ltd
Original Assignee
Huizhou Xianzan Technology Co ltd
Daichuan Medical Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou Xianzan Technology Co ltd and Daichuan Medical Shenzhen Co ltd
Priority to CN202211466341.9A
Publication of CN115797276A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The application provides a lesion image processing method for an endoscope, together with an apparatus, an electronic device, and a storage medium. The method comprises: acquiring an image to be processed, the image to be processed being a lesion image acquired by an endoscope; adjusting the color of the image to be processed through a first parameter to obtain a first image, the first parameter being determined based on the green-channel pixel mean of the image to be processed, where the colors of the image to be processed comprise green, red, and blue; converting the first image into a YUV color space and performing normalization processing on it; and adjusting the brightness of the first image through a second parameter to obtain a lesion-enhanced image, the second parameter being determined based on the Y-channel pixel mean of the first image. By adjusting parameters, the method enhances a lesion image acquired by an endoscope in the two dimensions of color and brightness, so that the details of the lesion region are clearly highlighted. Moreover, the method needs no GPU, can run on an ARM CPU, and reduces resource consumption.

Description

Method and apparatus for processing endoscope lesion images, electronic device and medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for processing a lesion image of an endoscope, an electronic device, and a storage medium.
Background
An endoscope is a common medical instrument that integrates traditional optics, ergonomics, precision machinery, modern electronics, mathematics, software, and more, and it is widely used in noninvasive and minimally invasive surgery. When an endoscope is used for diagnosis and treatment in which color fidelity matters less than detail rendition, interference gray levels degrade the displayed detail of the acquired images: the resulting medical images often suffer from unclear details, low contrast, and similar problems, which compromises the safety and effectiveness of diagnosis and treatment. Making the images collected by the endoscope display clearer texture details is therefore a key aspect of an endoscope's optical performance.
At present, images acquired by an endoscope are generally subjected to global enhancement to raise the contrast of the whole image and thereby improve overall detail visibility. However, a doctor examining a medical image concentrates on the lesion region of the image. Because global enhancement cannot effectively strengthen local textures and details and only raises global contrast, improving the visual effect globally, the enhancement effect and detail visibility in the region that actually needs to be observed remain low.
Disclosure of Invention
The main purpose of the embodiments of the present application is to provide a method and an apparatus for processing lesion images of an endoscope, an electronic device, and a storage medium, aiming to enhance a lesion image acquired by an endoscope in the two dimensions of color and brightness by adjusting parameters, so that the details of the lesion region are clearly highlighted. Moreover, the method needs no GPU, can run on an ARM CPU, and reduces resource consumption.
To achieve the above object, a first aspect of an embodiment of the present application proposes a lesion image processing method for an endoscope, the method including:
acquiring an image to be processed, wherein the image to be processed is a lesion image acquired by an endoscope;
adjusting the color of the image to be processed through a first parameter to obtain a first image, wherein the first parameter is determined based on the green-channel pixel mean of the image to be processed, and the colors of the image to be processed comprise green, red and blue;
converting the first image into a YUV color space, and performing normalization processing on the first image;
and adjusting the brightness of the first image through a second parameter to obtain a lesion-enhanced image, wherein the second parameter is determined based on the Y-channel pixel mean of the first image.
In some embodiments, the first parameter is calculated by the following formula:

[formula reproduced in the source only as an image: Figure BDA0003956396920000021]

where p represents the first parameter and g_ave represents the mean value of the green-channel pixels in the image to be processed.
In some embodiments, the adjusting the color of the image to be processed by the first parameter to obtain a first image includes:
calculating to obtain the mean value of the pixels of the green channel of the image to be processed;
calculating to obtain a first parameter according to the green channel pixel mean value;
adjusting the green channel pixel of each pixel block in the image to be processed according to the first parameter;
and adjusting the red channel pixel and the blue channel pixel of each pixel block in the image to be processed according to the original green channel pixel and the adjusted green channel pixel of each pixel block in the image to be processed to obtain a first image.
In some embodiments, said adjusting of the green-channel pixel value of said target pixel block according to said first parameter is performed by the following formula:

g′(x, y) = [g(x, y)]^p

where g′(x, y) represents the adjusted green-channel pixel value, g(x, y) represents the original green-channel pixel value, and p represents the first parameter.
In some embodiments, said adjusting the green channel pixels of each block of pixels in the image to be processed according to the first parameter comprises:
selecting a target pixel block, wherein the target pixel block is any one pixel block in the image to be processed;
acquiring an original green channel pixel of the target pixel block;
adjusting the green channel pixel of the target pixel block according to the first parameter;
and traversing each pixel block in the image to be processed, and adjusting the green channel pixel of each pixel block according to the first parameter.
In some embodiments, adjusting the red channel pixel and the blue channel pixel of each pixel block in the image to be processed according to the original green channel pixel and the adjusted green channel pixel of each pixel block in the image to be processed to obtain a first image includes:
selecting a target pixel block, wherein the target pixel block is any one pixel block in the image to be processed;
acquiring a first pixel value and a second pixel value, wherein the first pixel value is an original green channel pixel of the target pixel block, and the second pixel value is a green channel pixel adjusted by the target pixel block;
adjusting the red channel pixel and the blue channel pixel of the target pixel block according to the first pixel value and the second pixel value;
and traversing each pixel block in the image to be processed, and adjusting the red channel pixel and the blue channel pixel of each pixel block according to the original green channel pixel and the adjusted green channel pixel of each pixel block.
In some embodiments, the second parameter is calculated by the following formula:

[formula reproduced in the source only as an image: Figure BDA0003956396920000031]

where q represents the second parameter and Y_ave represents the mean of the Y-channel pixels in the first image.
In some embodiments, the adjusting the brightness of the first image by the second parameter to obtain a lesion enhancement image includes:
calculating to obtain a Y-channel pixel mean value of the first image;
calculating to obtain a second parameter according to the Y-channel pixel mean value;
adjusting the Y-channel pixel of each pixel block in the first image according to the second parameter to obtain a second image;
and performing inverse normalization processing on the second image, and converting the second image into an RGB color space to obtain a lesion-enhanced image.
To achieve the above object, a second aspect of the embodiments of the present application provides an electronic device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the method of the first aspect when executing the computer program.
To achieve the above object, a third aspect of the embodiments of the present application proposes a computer-readable storage medium, which stores a computer program; the computer program implements the method of the first aspect when executed by a processor.
The application provides a lesion image processing method for an endoscope, together with an apparatus, an electronic device, and a storage medium. The method comprises: acquiring an image to be processed, the image to be processed being a lesion image acquired by an endoscope; adjusting the color of the image to be processed through a first parameter to obtain a first image, the first parameter being determined based on the green-channel pixel mean of the image to be processed, where the colors of the image to be processed comprise green, red, and blue; converting the first image into a YUV color space and performing normalization processing on it; and adjusting the brightness of the first image through a second parameter to obtain a lesion-enhanced image, the second parameter being determined based on the Y-channel pixel mean of the first image. By adjusting parameters, the method enhances a lesion image acquired by an endoscope in the two dimensions of color and brightness, so that the details of the lesion region are clearly highlighted. Moreover, the method needs no GPU, can run on an ARM CPU, and reduces resource consumption.
Drawings
Fig. 1 is a flowchart of a lesion image processing method for an endoscope provided in an embodiment of the present application;
fig. 2 is a graph showing a variation of a first parameter with an average value of pixels of a green channel according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a step of adjusting colors of an image to be processed by using a first parameter to obtain a first image according to an embodiment of the present application;
FIG. 4 is a graph of a stretch curve corresponding to different values of a first parameter provided in an embodiment of the present application;
FIG. 5 is a flowchart of the steps provided by the embodiment of the present application for adjusting the green channel pixels of each pixel block in the image to be processed according to the first parameter;
fig. 6 is a flowchart of a step of adjusting a red channel pixel and a blue channel pixel of each pixel block in an image to be processed according to an original green channel pixel and an adjusted green channel pixel of each pixel block in the image to be processed, so as to obtain a first image, according to an embodiment of the present application;
fig. 7 is a flowchart illustrating a step of adjusting the brightness of the first image according to the second parameter to obtain a lesion enhancement image according to an embodiment of the present application;
fig. 8 is a flowchart of lesion image processing for an endoscope according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that although functional blocks are partitioned in a schematic diagram of an apparatus and a logical order is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the partitioning of blocks in the apparatus or the order in the flowchart. The terms first, second and the like in the description and in the claims, and the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
In recent years, as computer vision research has expanded, endoscopic image features have been widely applied to detection, identification, positioning, and reconstruction, and have become an effective way to create value in the computer vision field. Feature matching on endoscopic images is therefore a key technique for acquiring the surface morphology of an object. However, owing to the particular operating environment of clinical surgery, constraints on operation time, and differences among endoscopic devices, the quality of the acquired endoscopic images varies widely.
In endoscopic video imaging systems, images are typically post-processed with image-enhancement techniques to improve image quality, for example: sharpening and contrast enhancement to improve the clarity of human tissues (blood vessels, organ edges, and the like) in the abdominal cavity; saturation correction for better subjective quality; and enhancement of non-uniformly illuminated images.
In related image-enhancement technology, the enhancement strength of different regions is mainly adjusted through hand-crafted features, for example: adjusting the saturation of red regions using the chroma value of the current pixel to avoid red overflow; adaptively adjusting the contrast-enhancement strength using the variance of the local region; and detecting blood vessels from color information so as to display the vessel regions with enhancement.
Practice shows that such adaptive image enhancement based on hand-crafted features depends on researchers' deep understanding of medical images and rich image-processing experience, so the barrier to entry is high; the extracted features are usually generic, the robustness of these schemes is poor, and they perform unsatisfactorily in some extreme scenes.
Based on this, the embodiments of the present application propose a lesion image processing method for an endoscope. Applied on the endoscope, the method enhances each acquired lesion image in real time, so that the details of the lesion region are clearly highlighted and a doctor can obtain lesion information and the lesion position quickly.
Referring to fig. 1, fig. 1 is a flowchart of a lesion image processing method for an endoscope provided in an embodiment of the present application, and the method in fig. 1 may include, but is not limited to, steps S101 to S104.
Step S101, acquiring an image to be processed, wherein the image to be processed is a lesion image acquired by an endoscope.
In the embodiment of the present application, the lesion image acquired by the endoscope is an image in the RGB color space, and it must first be normalized when acquired. Image normalization transforms an image into a fixed standard form through a series of standard processing transformations; the result is called the normalized image. Specifically, in the embodiment of the present application, each pixel of the input image takes a value between 0 and 255, which is too large for the subsequent computation. Normalization is therefore required, specifically pixel-value normalization: each pixel value is divided by 255 to obtain a value between 0 and 1 for calculation.
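As a minimal sketch of this normalization step (NumPy-based; the function name is illustrative and not from the patent):

```python
import numpy as np

def normalize_image(img_rgb_u8: np.ndarray) -> np.ndarray:
    """Map an 8-bit RGB image from [0, 255] into [0, 1] by dividing by 255."""
    return img_rgb_u8.astype(np.float32) / 255.0
```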
Step S102, adjusting the color of the image to be processed through a first parameter to obtain a first image, wherein the first parameter is determined based on the mean value of the green channel pixels of the image to be processed, and the color of the image to be processed comprises green, red and blue.
In the embodiment of the present application, after the pixel values of the image to be processed have been divided by 255, the green-channel pixel mean of the image to be processed can be calculated, and the first parameter is determined from this mean. The color of the image to be processed is then adjusted according to the first parameter to obtain the first image.
The calculation process of the green channel pixel mean value of the image to be processed is as follows: determining the number of pixel blocks in the image to be processed, for example, the image to be processed includes N pixel blocks, and each pixel block corresponds to one corresponding green channel pixel, red channel pixel, and blue channel pixel, so that the number of green channel pixels in the image to be processed is also N. And accumulating the N green channel pixels in the image to be processed, and dividing by N to obtain the mean value of the green channel pixels.
For example, an image to be processed with a resolution of 800 × 800 contains 640000 pixel blocks, so there are 640000 green-channel pixel values in the image to be processed; summing these 640000 values and dividing by 640000 yields the green-channel pixel mean.
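Under the same illustrative assumptions (a normalized H x W x 3 RGB array with the green channel at index 1), the mean computation is one line:

```python
import numpy as np

def green_channel_mean(img_rgb: np.ndarray) -> float:
    """Mean of the green channel taken over every pixel block of the image."""
    return float(img_rgb[..., 1].mean())
```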
In the embodiment of the present application, after the green-channel pixel mean is obtained through calculation, the first parameter can be determined from it. The first parameter is calculated by the following formula:

[formula reproduced in the source only as an image: Figure BDA0003956396920000071]

where p represents the first parameter and g_ave represents the mean of the green-channel pixels in the image to be processed.
Referring to fig. 2, fig. 2 is a graph showing how the first parameter varies with the green-channel pixel mean according to an embodiment of the present application. As shown in Fig. 2, the curve is a nonlinear function on the interval [0, 1], with the horizontal axis representing the green-channel pixel mean and the vertical axis the first parameter. If the green-channel pixel mean is less than 0.5, the first parameter is less than 1; the smaller the mean, the smaller the first parameter and the more strongly the low-pixel-value regions of the image are stretched. If the mean is greater than 0.5, the first parameter is greater than 1; the larger the mean, the larger the first parameter and the more strongly the high-pixel-value regions are stretched. If the mean is exactly 0.5, the first parameter is 1 and no adjustment is made.
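The formula itself survives in this text only as an image reference, so the exact expression cannot be recovered here. Purely as an illustration, one function that satisfies every property just described (value 1 at a mean of 0.5, below 1 for smaller means, above 1 for larger means, monotonically increasing on [0, 1]) is:

```latex
% Illustrative stand-in only; NOT the patented formula, which appears in
% the source solely as an image (Figure BDA0003956396920000021).
p = 2^{\,2 g_{\mathrm{ave}} - 1}, \qquad g_{\mathrm{ave}} \in [0, 1]
```

Any monotone curve through the point (0.5, 1) with the same sign behavior would play the same role; the code sketches below therefore take p (and later q) as an argument, and only the end-to-end sketch at the end of the description hard-codes this stand-in.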
Referring to fig. 3, fig. 3 is a flowchart of steps of adjusting colors of an image to be processed by a first parameter to obtain a first image according to an embodiment of the present application, including but not limited to step S301 to step S304.
Step S301, calculating to obtain a green channel pixel mean value of an image to be processed;
step S302, calculating to obtain a first parameter according to the green channel pixel mean value;
step S303, adjusting the green channel pixel of each pixel block in the image to be processed according to the first parameter;
step S304, adjusting the red channel pixel and the blue channel pixel of each pixel block in the image to be processed according to the original green channel pixel and the adjusted green channel pixel of each pixel block in the image to be processed to obtain a first image.
In the embodiment of the present application, the green-channel pixel mean of the image to be processed is first calculated, and the first parameter p is then obtained from this mean by the formula given above (reproduced in the source only as an image: Figure BDA0003956396920000081). The green-channel pixel of each pixel block in the image to be processed is then adjusted according to the first parameter p. Specifically, the adjustment formula is:

g′(x, y) = [g(x, y)]^p

where g′(x, y) is the adjusted green-channel pixel value, g(x, y) is the original green-channel pixel value, and p is the first parameter.
And finally, adjusting the red channel pixel and the blue channel pixel of each pixel block in the image to be processed according to the original green channel pixel and the adjusted green channel pixel of each pixel block in the image to be processed to obtain a first image.
For example, an image to be processed with a resolution of 800 × 800 contains 640000 pixel blocks; the green-channel pixel mean g_ave can be calculated, and the first parameter p is then obtained from g_ave. A pixel block is now selected from the 640000 pixel blocks, say the first pixel block, whose original green-channel pixel is g_1(x, y); the adjusted green-channel pixel can then be calculated from the first parameter p as g_1′(x, y) = [g_1(x, y)]^p. That is, the original green-channel pixel g_1(x, y) of the first pixel block is stretched according to the first parameter p. Referring to fig. 4, fig. 4 shows the stretch curves corresponding to different values of the first parameter provided in the embodiments of the present application. As shown in Fig. 4, the smaller the first parameter p, the steeper the curve over the low pixel interval [0, 0.5] of the image to be processed, so the low-pixel regions are stretched to a wider dynamic range; the larger the first parameter p, the steeper the curve over the high pixel interval [0.5, 1], so the high-pixel regions are pulled to a wider dynamic range, thereby improving image contrast.
In the embodiment of the present application, after the adjusted green-channel pixel g_1′(x, y) of the first pixel block has been calculated from the first parameter p, the red-channel and blue-channel pixels of the first pixel block can be further adjusted from the original green-channel pixel g_1(x, y) and the adjusted green-channel pixel g_1′(x, y). Specifically, the adjusted red-channel pixel of the first pixel block is calculated by the formula r_1′(x, y) = r_1(x, y) + [g_1′(x, y) - g_1(x, y)], where r_1′(x, y) denotes the adjusted red-channel pixel of the first pixel block and r_1(x, y) denotes its original red-channel pixel. Likewise, the adjusted blue-channel pixel of the first pixel block is calculated by the formula b_1′(x, y) = b_1(x, y) + [g_1′(x, y) - g_1(x, y)], where b_1′(x, y) denotes the adjusted blue-channel pixel of the first pixel block and b_1(x, y) denotes its original blue-channel pixel. The adjustment of the first pixel block is thereby completed.
In the embodiment of the present application, each pixel block in the image to be processed is traversed, and the green-channel, blue-channel, and red-channel pixels of each pixel block are adjusted in the same way as for the first pixel block.
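A sketch of this color-adjustment stage under the assumptions above; NumPy vectorization replaces the explicit per-pixel-block traversal of Figs. 5 and 6 below but computes the same values, and the clamp of the shifted red and blue channels to [0, 1] is an implementation detail the patent does not state:

```python
import numpy as np

def adjust_color(img_rgb: np.ndarray, p: float) -> np.ndarray:
    """Stretch the green channel by the first parameter p, then shift the
    red and blue channels of each pixel block by the same green offset."""
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    g_new = np.power(g, p)                  # g'(x, y) = [g(x, y)] ** p
    delta = g_new - g                       # per-pixel green offset
    r_new = np.clip(r + delta, 0.0, 1.0)    # r'(x, y) = r(x, y) + (g' - g)
    b_new = np.clip(b + delta, 0.0, 1.0)    # b'(x, y) = b(x, y) + (g' - g)
    return np.stack([r_new, g_new, b_new], axis=-1)
```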
Referring to fig. 5, fig. 5 is a flowchart of steps provided by an embodiment of the present application for adjusting a green channel pixel of each pixel block in an image to be processed according to a first parameter, including but not limited to steps S501 to S504.
Step S501, selecting a target pixel block, wherein the target pixel block is any pixel block in an image to be processed;
step S502, obtaining an original green channel pixel of a target pixel block;
step S503, adjusting the green channel pixel of the target pixel block according to the first parameter;
step S504, each pixel block in the image to be processed is traversed, and the green channel pixel of each pixel block is adjusted according to the first parameter.
In the embodiment of the present application, after the first parameter has been calculated, any pixel block is selected from the image to be processed, for example the first pixel block, and its original green-channel pixel g_1(x, y) is obtained; the green-channel pixel of the first pixel block is then adjusted according to the first parameter p, giving the adjusted green-channel pixel g_1′(x, y). Next, a second pixel block is selected from the image to be processed, its original green-channel pixel g_2(x, y) is obtained, and its green-channel pixel is adjusted according to the first parameter p, giving the adjusted green-channel pixel g_2′(x, y). In this way the green-channel pixel of every pixel block in the image to be processed is adjusted, yielding a set of adjusted green-channel pixel values. For example, for an image to be processed with a resolution of 800 × 800, 640000 adjusted green-channel pixel values are calculated.
It should be noted that the green-channel pixel mean calculated after normalization of the image to be processed is a fixed value, so the first parameter p computed from that mean is likewise fixed. However, because the original green-channel pixel of each pixel block in the image to be processed differs, the corresponding adjusted green-channel pixels also differ.
Referring to fig. 6, fig. 6 is a flowchart of steps of adjusting a red channel pixel and a blue channel pixel of each pixel block in an image to be processed according to an original green channel pixel and an adjusted green channel pixel of each pixel block in the image to be processed to obtain a first image according to the embodiment of the present application, where the steps include, but are not limited to, steps S601 to S604.
Step S601, selecting a target pixel block, wherein the target pixel block is any one pixel block in the image to be processed;
step S602, a first pixel value and a second pixel value are obtained, wherein the first pixel value is an original green channel pixel value of a target pixel block, and the second pixel value is a green channel pixel adjusted by the target pixel block;
step S603, adjusting the red channel pixel and the blue channel pixel of the target pixel block according to the first pixel value and the second pixel value;
step S604, traversing each pixel block in the image to be processed, and adjusting the red channel pixel and the blue channel pixel of each pixel block according to the original green channel pixel and the adjusted green channel pixel of each pixel block.
In the embodiment of the present application, since the original green-channel pixel of each pixel block in the image to be processed is available, and the adjusted green-channel pixel of each pixel block has been calculated in the step shown in fig. 5, the red-channel and blue-channel pixels of each pixel block can be adjusted from its original and adjusted green-channel pixels. Specifically, suppose the first pixel block is selected from the image to be processed, with original green-channel pixel g_1(x, y) and adjusted green-channel pixel g_1′(x, y); its adjusted red-channel pixel is then calculated by the formula r_1′(x, y) = r_1(x, y) + [g_1′(x, y) - g_1(x, y)], and likewise its adjusted blue-channel pixel by the formula b_1′(x, y) = b_1(x, y) + [g_1′(x, y) - g_1(x, y)]. Following this method, the adjusted red-channel and blue-channel pixels of every pixel block in the image to be processed can be calculated; that is, the red-channel and blue-channel pixels of every pixel block can be adjusted, and the first image is obtained.
It should be noted that, because the original green-channel pixels of the pixel blocks differ, their adjusted green-channel pixels differ, and their original red-channel pixels differ, the adjusted red-channel pixel calculated for each pixel block also differs. Likewise for the blue channel: differing original green-channel, adjusted green-channel, and original blue-channel pixels yield differing adjusted blue-channel pixels. That is, although the first parameter is fixed for a given image to be processed, the adjustment (stretching) applied to each pixel block on the basis of that parameter differs.
Step S103, converting the first image into YUV color space, and normalizing the first image.
In the embodiment of the present application, considering that the image to be processed in the RGB color space can only be enhanced in the dimension of color, in order to further highlight human tissues (blood vessels, organ edges, etc.) in the lesion image, enhancement processing needs to be performed in the dimension of brightness, and at this time, it is necessary to convert the first image into the YUV color space first and perform normalization processing on the first image.
RGB is a standard for representing colors in the field of digitization, also called a color space, and represents a specific color by combining different luminance values of the three primary colors R, G, and B. RGB stores luminance values instead of chrominance values.
YUV is a color encoding method in which Y denotes luminance and U and V denote chrominance. Y alone gives a black-and-white image; adding U and V gives a color image. One benefit of YUV is that it keeps a color system well compatible with traditional black-and-white systems while exploiting the human visual system's greater sensitivity to luminance than to chrominance.
The R, G, and B components of the RGB color space are all closely related to brightness; that is, all three components carry substantial brightness information. Converting from the RGB color space to the YUV color space in effect extracts the luminance information from the R, G, and B components and places it in the Y component, and extracts the hue and saturation information from the R, G, and B components and places it in the U and V components.
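The patent does not pin down which RGB/YUV matrix is used; the sketch below assumes the common BT.601 analog-YUV convention, which is only one possible choice:

```python
import numpy as np

def rgb_to_yuv(img_rgb: np.ndarray) -> np.ndarray:
    """BT.601 analog YUV: luminance into Y, chrominance into U and V."""
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return np.stack([y, u, v], axis=-1)

def yuv_to_rgb(img_yuv: np.ndarray) -> np.ndarray:
    """Exact inverse of rgb_to_yuv above."""
    y, u, v = img_yuv[..., 0], img_yuv[..., 1], img_yuv[..., 2]
    r = y + v / 0.877
    b = y + u / 0.492
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)
```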
And step S104, adjusting the brightness of the first image through a second parameter to obtain a lesion-enhanced image, wherein the second parameter is determined based on the Y-channel pixel mean of the first image.
After normalization processing is carried out on the first image, the Y-channel pixel mean value of the first image can be calculated, and the second parameter is determined according to the Y-channel pixel mean value. And then, the brightness of the first image can be adjusted according to the second parameter to obtain a focus enhanced image.
The calculation process of the Y-channel pixel mean value of the first image is as follows: the number of pixel blocks in the first image is determined, for example, the first image includes N pixel blocks, and since each pixel block corresponds to a corresponding one of the Y-channel pixels, the number of the Y-channel pixels in the first image is also N. The N Y-channel pixels in the first image are accumulated and then divided by N to obtain a Y-channel pixel mean value.
For example, a first image with a resolution of 800 × 800 contains 640000 pixel blocks, so there are 640000 Y-channel pixel values in the first image; summing these values and dividing by 640000 yields the Y-channel pixel mean.
In the embodiment of the present application, after the Y-channel pixel mean is obtained through calculation, the second parameter can be determined from it. The second parameter is calculated by the following formula:

[formula reproduced in the source only as an image: Figure BDA0003956396920000121]

where q represents the second parameter and Y_ave represents the mean of the Y-channel pixels in the first image.
In the embodiment of the present application, the principle of the second parameter is the same as that of the first parameter, and details are not described here.
Referring to fig. 7, fig. 7 is a flowchart illustrating steps of adjusting the brightness of the first image according to the second parameter to obtain a lesion enhanced image according to an embodiment of the present disclosure, including but not limited to steps S701 to S704.
Step S701, calculating the Y-channel pixel mean of the first image;
Step S702, calculating the second parameter from the Y-channel pixel mean;
Step S703, adjusting the Y-channel pixel of each pixel block in the first image according to the second parameter to obtain a second image;
Step S704, performing inverse normalization on the second image, and converting the second image into an RGB color space to obtain a lesion-enhanced image.
In the embodiment of the present application, the Y-channel pixel mean of the first image is first calculated, and the second parameter q is then obtained from this mean by the formula given above (reproduced in the source only as an image: Figure BDA0003956396920000131). The Y-channel pixel of each pixel block in the first image is then adjusted according to the second parameter q. Specifically, the adjustment formula is:

Y′(x, y) = [Y(x, y)]^q

where Y′(x, y) is the adjusted Y-channel pixel value, Y(x, y) is the original Y-channel pixel value, and q is the second parameter.
Illustratively, a first image with a resolution of 800 × 800 contains 640000 pixel blocks; the Y-channel pixel mean Y_ave can be calculated, and the second parameter q is then obtained from Y_ave. A pixel block is now selected from the 640000 pixel blocks, say the first pixel block, whose original Y-channel pixel is Y_1(x, y); the adjusted Y-channel pixel can then be calculated from the second parameter q as Y_1′(x, y) = [Y_1(x, y)]^q. That is, the original Y-channel pixel Y_1(x, y) of the first pixel block is stretched according to the second parameter q. In this way the Y-channel pixel of every pixel block in the first image is adjusted, yielding a set of adjusted Y-channel pixel values. For example, for a first image with a resolution of 800 × 800, 640000 adjusted Y-channel pixel values are obtained; that is, the enhancement of the first image in the brightness dimension is complete, and the second image is obtained.
It should be noted that the Y-channel pixel mean calculated after normalization of the first image is a fixed value, so the second parameter q computed from that mean is likewise fixed. However, because the original Y-channel pixel of each pixel block in the first image differs, the corresponding adjusted Y-channel pixels also differ.
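A sketch of this luminance stage (steps S701 to S703), assuming the YUV layout produced by the conversion sketch above, with q supplied by the caller:

```python
import numpy as np

def adjust_luminance(img_yuv: np.ndarray, q: float) -> np.ndarray:
    """Apply Y'(x, y) = [Y(x, y)] ** q to the Y channel only, leaving the
    U and V chrominance channels untouched."""
    out = img_yuv.copy()
    out[..., 0] = np.power(np.clip(out[..., 0], 0.0, 1.0), q)
    return out
```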
In the embodiment of the present application, after the second image is obtained, the second image needs to be subjected to inverse normalization processing, and is converted into an RGB color space, so as to obtain a lesion enhancement image.
Referring to fig. 8, fig. 8 is a flowchart of processing a lesion image for an endoscope according to an embodiment of the present disclosure, which includes, but is not limited to, steps S801 to S810.
Step S801, performing normalization processing on the RGB lesion image acquired by the endoscope;
Step S802, calculating the green-channel pixel mean of the RGB lesion image;
Step S803, calculating the first parameter from the green-channel pixel mean;
Step S804, adjusting the green-channel pixel of each pixel block in the RGB lesion image according to the first parameter;
Step S805, adjusting the red-channel pixel and the blue-channel pixel of each pixel block according to the original and adjusted green-channel pixels of each pixel block in the RGB lesion image, to obtain a first image;
Step S806, converting the first image into the YUV color space and performing normalization processing;
Step S807, calculating the Y-channel pixel mean of the first image;
Step S808, calculating the second parameter from the Y-channel pixel mean;
Step S809, adjusting the Y-channel pixel of each pixel block in the first image according to the second parameter, to obtain a second image;
and Step S810, performing inverse normalization processing on the second image and converting it into the RGB color space, to obtain the lesion-enhanced image.
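Putting the stages together, a compact end-to-end sketch of steps S801 to S810; the parameter curve and the YUV matrix are the illustrative assumptions flagged earlier, not the patented formulas:

```python
import numpy as np

def param_from_mean(mean: float) -> float:
    # Stand-in for the patent's formula image: equals 1 at a mean of 0.5,
    # is < 1 below it and > 1 above it (the behavior described for Fig. 2).
    return float(2.0 ** (2.0 * mean - 1.0))

def enhance_lesion_image(img_u8: np.ndarray) -> np.ndarray:
    """End-to-end sketch of steps S801-S810 for an 8-bit RGB lesion image."""
    img = img_u8.astype(np.float32) / 255.0              # S801 normalize
    p = param_from_mean(float(img[..., 1].mean()))       # S802-S803
    g_new = np.power(img[..., 1], p)                     # S804 stretch green
    delta = g_new - img[..., 1]                          # green offset
    r_new = np.clip(img[..., 0] + delta, 0.0, 1.0)       # S805 shift red
    b_new = np.clip(img[..., 2] + delta, 0.0, 1.0)       # S805 shift blue
    # S806: RGB -> YUV (BT.601 convention, an assumption)
    y = 0.299 * r_new + 0.587 * g_new + 0.114 * b_new
    u = 0.492 * (b_new - y)
    v = 0.877 * (r_new - y)
    q = param_from_mean(float(y.mean()))                 # S807-S808 (same
                                                         # curve assumed for q)
    y = np.power(np.clip(y, 0.0, 1.0), q)                # S809 stretch Y
    # S810: YUV -> RGB, then de-normalize back to 8 bits
    r = y + v / 0.877
    b = y + u / 0.492
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    out = np.stack([r, g, b], axis=-1)
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)
```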
It should be noted that the image processing method provided in the embodiments of the present application can run on the ARM CPU of an endoscope and is fast: for an image with a resolution of 800 × 800, only 20 ms is needed to obtain the lesion-enhanced image. After the endoscope acquires a lesion image, the image enhancement can be executed automatically, and the enhanced image with the lesion region highlighted is finally output for the doctor's reference.
The embodiment of the application also provides electronic equipment, the electronic equipment comprises a memory and a processor, the memory stores a computer program, and the processor executes the computer program to realize the lesion image processing method for the endoscope. The electronic equipment can be any intelligent terminal including a tablet computer, a vehicle-mounted computer and the like.
Referring to fig. 9, fig. 9 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure. The electronic device includes:
the processor 901 may be implemented by a general-purpose CPU (central processing unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute a relevant program to implement the technical solution provided in the embodiment of the present application;
the memory 902 may be implemented in the form of a Read Only Memory (ROM), a static storage device, a dynamic storage device, or a Random Access Memory (RAM). The memory 902 may store an operating system and other application programs, and when the technical solution provided by the embodiments of the present specification is implemented by software or firmware, the relevant program codes are stored in the memory 902 and called by the processor 901 to execute the lesion image processing method for endoscope of the embodiments of the present application;
an input/output interface 903 for inputting and outputting information;
a communication interface 904, configured to implement communication interaction between the device and another device, where communication may be implemented in a wired manner (e.g., USB, network cable, etc.), or in a wireless manner (e.g., mobile network, WIFI, bluetooth, etc.);
a bus 905 that transfers information between various components of the device (e.g., the processor 901, memory 902, input/output interface 903, and communication interface 904);
wherein the processor 901, the memory 902, the input/output interface 903 and the communication interface 904 are communicatively connected to each other within the device via a bus 905.
The embodiment of the application also provides a storage medium which is a computer readable storage medium, and the storage medium stores a computer program, and the computer program is executed by a processor to realize the lesion image processing method for the endoscope.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the embodiments of the present application are for more clearly illustrating the technical solutions of the embodiments of the present application, and do not constitute limitations on the technical solutions provided in the embodiments of the present application, and it is obvious to those skilled in the art that the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems with the evolution of technologies and the emergence of new application scenarios.
It will be understood by those skilled in the art that the embodiments shown in the figures are not limiting, and may include more or fewer steps than those shown, or some of the steps may be combined, or different steps.
The above-described apparatus embodiments are merely illustrative, wherein the modules illustrated as separate components may or may not be physically separate, may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
One of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/modules in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules explicitly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the above-described modules is merely a logical division, and an actual implementation may have another division, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the present application, which are essential or part of the technical solutions contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, which includes multiple instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing programs, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings, and the scope of the claims of the embodiments of the present application is not limited thereby. Any modifications, equivalents and improvements that may occur to those skilled in the art without departing from the scope and spirit of the embodiments of the present application are intended to be within the scope of the claims of the embodiments of the present application.

Claims (10)

1. A lesion image processing method for an endoscope, the method comprising:
acquiring an image to be processed, wherein the image to be processed is a lesion image acquired by an endoscope;
adjusting the color of the image to be processed through a first parameter to obtain a first image, wherein the first parameter is determined based on the mean value of green channel pixels of the image to be processed, and the color of the image to be processed comprises green, red and blue;
converting the first image into a YUV color space, and performing normalization processing on the first image;
and adjusting the brightness of the first image through a second parameter to obtain a lesion-enhanced image, wherein the second parameter is determined based on the Y-channel pixel mean of the first image.
2. The method of claim 1, wherein the first parameter is calculated by the following formula:
[formula reproduced in the source only as an image: Figure FDA0003956396910000011]

where p represents the first parameter and g_ave represents the green-channel pixel mean in the image to be processed.
3. The method according to claim 1, wherein the adjusting the color of the image to be processed by the first parameter to obtain the first image comprises:
calculating to obtain the mean value of the pixels of the green channel of the image to be processed;
calculating to obtain a first parameter according to the green channel pixel mean value;
adjusting the green channel pixel of each pixel block in the image to be processed according to the first parameter;
and adjusting the red channel pixel and the blue channel pixel of each pixel block in the image to be processed according to the original green channel pixel and the adjusted green channel pixel of each pixel block in the image to be processed to obtain a first image.
4. The method of claim 3, wherein the adjusting the green channel pixel value of the target pixel block according to the first parameter is performed by the following formula:
g′(x,y)=[g(x,y)]^p
where g' (x, y) is the adjusted green channel pixel value, g (x, y) is the original green channel pixel value, and p is the first parameter.
5. The method of claim 3, wherein the adjusting the green channel pixels of each pixel block in the image to be processed according to the first parameter comprises:
selecting a target pixel block, wherein the target pixel block is any one pixel block in the image to be processed;
acquiring a green channel pixel of the target pixel block;
adjusting the green channel pixel of the target pixel block according to the first parameter;
and traversing each pixel block in the image to be processed, and adjusting the green channel pixel of each pixel block according to the first parameter.
6. The method according to claim 3, wherein the adjusting the red channel pixel and the blue channel pixel of each pixel block in the image to be processed according to the original green channel pixel and the adjusted green channel pixel of each pixel block in the image to be processed to obtain the first image comprises:
selecting a target pixel block, wherein the target pixel block is any one pixel block in the image to be processed;
acquiring a first pixel value and a second pixel value, wherein the first pixel value is an original green channel pixel of the target pixel block, and the second pixel value is a green channel pixel adjusted by the target pixel block;
adjusting the red channel pixel and the blue channel pixel of the target pixel block according to the first pixel value and the second pixel value;
and traversing each pixel block in the image to be processed, and adjusting the red channel pixel and the blue channel pixel of each pixel block according to the original green channel pixel and the adjusted green channel pixel of each pixel block.
7. The method of claim 1, wherein the second parameter is calculated by the following formula:
[formula reproduced in the source only as an image: Figure FDA0003956396910000021]

where q represents the second parameter and Y_ave represents the mean of the Y-channel pixels in the first image.
8. The method of claim 1, wherein the adjusting the brightness of the first image by the second parameter to obtain a lesion-enhanced image comprises:
calculating to obtain a Y-channel pixel mean value of the first image;
calculating to obtain a second parameter according to the Y-channel pixel mean value;
adjusting the Y-channel pixel of each pixel block in the first image according to the second parameter to obtain a second image;
and performing inverse normalization processing on the second image, and converting the second image into an RGB color space to obtain a lesion-enhanced image.
9. An electronic device, characterized in that the electronic device comprises a memory storing a computer program and a processor implementing the method of any of claims 1 to 8 when the processor executes the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 8.
CN202211466341.9A 2022-11-22 2022-11-22 Method, apparatus, electronic device and medium for processing endoscope lesion images Pending CN115797276A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211466341.9A CN115797276A (en) Method, apparatus, electronic device and medium for processing endoscope lesion images

Publications (1)

Publication Number Publication Date
CN115797276A true CN115797276A (en) 2023-03-14

Family

ID=85440050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211466341.9A Pending CN115797276A (en) 2022-11-22 Method, apparatus, electronic device and medium for processing endoscope lesion images

Country Status (1)

Country Link
CN (1) CN115797276A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118096546A (en) * 2024-04-29 2024-05-28 南京诺源医疗器械有限公司 Endoscope image processing method and device and electronic equipment
CN118096546B (en) * 2024-04-29 2024-08-06 南京诺源医疗器械有限公司 Endoscope image processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination