WO2021114684A1 - Image processing method and apparatus, computing device, and storage medium - Google Patents
Image processing method and apparatus, computing device, and storage medium
- Publication number
- WO2021114684A1 (PCT/CN2020/105019)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- enhanced
- brightness
- image
- brightness value
- Prior art date
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Definitions
- This application relates to the technical field of image processing, in particular to image processing methods, devices, computing devices, and storage media.
- in current image processing methods, the enhancement of a pixel's brightness usually depends on the enhancement results of pixels that were enhanced before it; this makes the brightness enhancement process occupy a large amount of image processing resources, such as precious CPU (Central Processing Unit) resources, for a long time.
- the embodiments of the present application provide an image processing method, device, computing device, and storage medium.
- an image processing method, which is executed by a computing device and includes: determining the brightness value of an image; and, when the brightness value of the image is less than an image brightness threshold, enhancing the brightness of the image.
- enhancing the brightness of the image includes: determining each pixel of the image as a pixel to be enhanced; determining a brightness enhancement value of the pixel to be enhanced based on the brightness value of the image, the initial brightness value of the pixel to be enhanced, and the initial brightness values of the adjacent pixels of the pixel to be enhanced; and using the brightness enhancement value as the enhanced brightness value of the pixel to be enhanced.
- an image processing device, including: a determining module configured to determine a brightness value of an image; and an enhancement module configured to enhance the brightness of the image when the brightness value of the image is less than an image brightness threshold.
- the enhancement module further includes: a first determining sub-module configured to determine each pixel of the image as a pixel to be enhanced; a second determining sub-module configured to determine a brightness enhancement value of the pixel to be enhanced based on the brightness value of the image, the initial brightness value of the pixel to be enhanced, and the initial brightness values of the adjacent pixels of the pixel to be enhanced; and an enhancement sub-module configured to use the brightness enhancement value as the enhanced brightness value of the pixel to be enhanced.
- a computing device, including a processor and a memory configured to store computer-executable instructions thereon, where the computer-executable instructions, when executed by the processor, perform any of the methods described above.
- a computer-readable storage medium which stores computer-executable instructions, and when the computer-executable instructions are executed, any method as described above is executed.
- Figure 1 illustrates an exemplary application scenario that can be implemented in an embodiment of the present application
- Fig. 2 illustrates a schematic flowchart of an image processing method according to an embodiment of the present application
- Fig. 3 illustrates an exemplary flow chart of a method for determining the brightness value of the image based on the brightness component in the YUV color space data of the image according to an embodiment of the present application
- FIG. 4 illustrates an exemplary flowchart of a method for determining the brightness enhancement value of the pixel to be enhanced based on the brightness value of the image, the initial brightness value of the pixel to be enhanced, and the initial brightness values of the neighboring pixels of the pixel to be enhanced, according to an embodiment of the present application;
- Fig. 5 illustrates a schematic diagram of an exemplary filter template according to an embodiment of the present application
- FIG. 6 illustrates a schematic diagram of filtering the pixel brightness value of an image using the filter template shown in FIG. 5;
- FIG. 7 illustrates a schematic effect diagram of using an image processing method according to an embodiment of the present application to enhance an original image
- Fig. 8 illustrates an exemplary structural block diagram of an image processing device according to an embodiment of the present application.
- Figure 9 illustrates an example system that includes example computing devices that represent one or more systems and/or devices that can implement the various technologies described herein.
- LUT refers to a display look-up table (Look-Up-Table), which is essentially a RAM: data is written into the RAM in advance, and each time a signal is input it is equivalent to inputting an address for a table lookup; the content corresponding to that address is found and then output;
- LUTs have a wide range of applications; for example, a LUT can be used as a mapping table of pixel brightness values, which transforms an actually sampled pixel brightness value into another corresponding brightness value after a certain transformation (for example, inversion, binarization, or a linear or non-linear transformation), which can highlight the useful information of the image and enhance its light contrast;
- the RGB color space is based on the three basic colors R (red), G (green), and B (blue), which are superimposed to different degrees to produce a rich and wide range of colors, so it is commonly known as the three-primary-color mode; the biggest advantage of the RGB color space is that it is intuitive and easy to understand; the disadvantage is that the three components R, G, and B are highly correlated, that is, if one component of a color changes to a certain extent, the color itself is likely to change;
- RGB signals can undergo a series of transformations to obtain a luminance signal Y and two color difference signals R-Y (i.e. U) and B-Y (i.e. V).
- This color representation method is the so-called YUV color space representation.
- the importance of using the YUV color space is that its luminance signal Y and chrominance signals U and V are separated.
- "Y” represents brightness, which is gray value; while "U” and "V” represent chromaticity, which are used to describe the color and saturation of the image, and are used to specify the color of the pixel.
- Y'UV, YUV, YCbCr, YPbPr and other color spaces can be collectively referred to as YUV color spaces.
- Fig. 1 illustrates an exemplary application scenario 100 in which a technical solution according to an embodiment of the present application can be implemented.
- the application scenario 100 may typically be a scene for conducting a video conference, in which a participant at the near end (ie, a local) conducts a video conference with a participant at the far end in the scene.
- the camera 101 located at the near end of the video conference can collect video about the video conference.
- the video is composed of a series of video frames.
- the video frames are, for example, near-end images of the local participants taking part in the video conference or of a speaker among them.
- the image processing device 102 on the computing device 110 may be used to enhance the brightness of the image to obtain an enhanced image with better quality.
- the enhanced image may be displayed on the local display 103.
- the enhanced image can also be sent to the far end of the video conference to be displayed for viewing by the participants of the remote conference.
- as an example, the encoder 104 on the computing device 110 can be used to encode the enhanced image, and the encoded image is then sent to the remote end through the network 105.
- at the remote end, the decoder 106 can be used to decode the received encoded image, and the decoded image is then displayed on the remote display 107, so that participants at the remote end can view the high-quality near-end image.
- the network 105 may be, for example, a wide area network (WAN), a local area network (LAN), a wireless network, a public telephone network, an intranet, and any other types of networks well known to those skilled in the art.
- FIG. 2 illustrates a schematic flowchart of an image processing method 200 according to an embodiment of the present application, which is executed by a computing device, such as the computing device 110 shown in FIG. 1 or the computing device 910 shown in FIG. 9. As shown in FIG. 2, the method 200 includes the following steps.
- step 201 the brightness value of the image is determined.
- the image is embodied in the format of YUV color space data.
- each color has a luminance component Y and two chrominance components U and V.
- the YUV color space data of the image can be acquired first, and then the brightness value of the image can be determined based on the brightness component in the YUV color space data.
- the image is embodied in the format of RGB color space data.
- the RGB color space is defined by the colors R (red), G (green) and B (blue) recognized by the human eye, and can represent most colors.
- the RGB color space data of the image may be acquired first, and then the RGB color space data of the image can be converted into the YUV color space data of the image.
- the RGB color space data of the image can be converted into the YUV color space data of the image, for example, by the following formulas: Y = 0.299R + 0.587G + 0.114B; U = -0.1687R - 0.3313G + 0.5B + 128; V = 0.5R - 0.4187G - 0.0813B + 128.
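- as an illustration only (the following NumPy-based helper and its name are not part of the application text, although it implements the conversion formulas given above), the conversion might be sketched as:

```python
import numpy as np

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image (values 0-255) to YUV using the formulas above."""
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.1687 * r - 0.3313 * g + 0.5 * b + 128.0
    v = 0.5 * r - 0.4187 * g - 0.0813 * b + 128.0
    return np.stack([y, u, v], axis=-1)
```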
- any suitable method may be used to determine the brightness value of the image based on the brightness component Y in the YUV color space data. For example, the brightness components of all pixels of the image can be obtained, and then the average value of these brightness components can be taken as the brightness value of the image.
- this is not restrictive, and various other suitable methods are considered.
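- a minimal sketch of this averaging approach is shown below; the function name and the threshold constant are illustrative assumptions, not part of the application text:

```python
def image_brightness(yuv: np.ndarray) -> float:
    """Brightness value of the image: the mean of the Y (luminance) components of all pixels."""
    return float(yuv[..., 0].mean())

# Example use (IMAGE_BRIGHTNESS_THRESHOLD is an assumed, application-specific constant):
# if image_brightness(yuv) < IMAGE_BRIGHTNESS_THRESHOLD:
#     ...enhance the brightness of the image...
```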
- step 202 when the brightness value of the image is less than the image brightness threshold, the brightness of the image is enhanced.
- in other words, if the brightness value of the image is not less than the image brightness threshold, the brightness of the image may not be enhanced, because in this case the image meets the requirements for image brightness and usually already has a higher quality.
- the step 202 may further include the following steps 2021-2023.
- each pixel of the image is determined as a pixel to be enhanced.
- the brightness of the image needs to be enhanced on the basis of each individual pixel.
- step 2022 based on the brightness value of the image, the initial brightness value of the pixel to be enhanced, and the initial brightness value of adjacent pixels of the pixel to be enhanced, the brightness enhancement value of the pixel to be enhanced is determined.
- the initial brightness value of the pixel described here represents the brightness value of the pixel when the pixel is not subjected to enhancement processing, that is, the brightness value before being enhanced.
- the neighboring pixels of the pixel to be enhanced may be pixels other than the pixel to be enhanced in an area with the pixel to be enhanced as a center point.
- the shape and size of the area can be preset as needed.
- the area may be a square area with a size of 3 ⁇ 3 pixels, of course, this is not a limitation.
- the brightness enhancement value is used as the enhanced brightness value of the pixel to be enhanced.
- the brightness value of the pixel to be enhanced may be adjusted to the brightness enhancement value to achieve the enhancement of the brightness of the pixel to be enhanced.
- the brightness of the image is enhanced.
- by comprehensively considering the brightness value of the image, the initial brightness value of the pixel to be enhanced, and the initial brightness values of its neighboring pixels, the quality of the pixel to be enhanced can be assessed more comprehensively to achieve a more accurate enhancement result. Moreover, the entire enhancement process is based only on the initial brightness values of the pixels, so the pixel dependence of conventional image processing is removed, the brightness of the pixels in the image can be enhanced in parallel, and the resource occupancy of the image processing process is greatly reduced.
- FIG. 3 illustrates an exemplary flowchart of a method 300 for determining the brightness value of the image based on the brightness component in the YUV color space data of the image according to an embodiment of the present application, executed by a computing device such as the computing device 110 shown in FIG. 1 or the computing device 910 shown in FIG. 9.
- the method 300 includes the following steps.
- step 301 the pixels of the image are sampled at a sampling interval, and the brightness component in the YUV color space data of the sampled pixel is obtained.
- the sampling interval depends on the sampling rate ⁇ at which the pixels of the image are sampled.
- the sampling interval has a first interval component in the row direction of the pixels of the image, and has a second interval component in the column direction of the pixels of the image.
- the first interval component is a value obtained by dividing the number of pixels in the row direction by the sampling rate
- the second interval component is a value obtained by dividing the number of pixels in the column direction by the sampling rate.
- for example, if the number of pixels in the row direction is I_W (also called the image width) and the number of pixels in the column direction is I_H (also called the image height), then the first interval component (that is, the sampling interval in the row direction) is I_W/γ and the second interval component (that is, the sampling interval in the column direction) is I_H/γ.
- step 302 the brightness components of the sampled pixels are added to obtain the sum of the brightness of the sampled pixels.
- that is, the brightness values of all pixels obtained by sampling are added to obtain the sampled pixel brightness sum L_T.
- step 303 sampling-rate smoothing is performed on the sampled pixel brightness sum to obtain a smoothed brightness value; for example, the sum may be divided by the square of the sampling rate γ, i.e., L_S = L_T/γ².
- step 304 the smoothed brightness value is determined as the brightness value of the image.
- the brightness value of the image can be determined by a fast, accurate, and low resource occupancy method 300.
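- a minimal NumPy sketch of method 300 is given below, assuming sampling starts at the image origin and proceeds at the row/column intervals described above; the function and variable names are illustrative:

```python
def sampled_image_brightness(y: np.ndarray, gamma: int) -> float:
    """Estimate the image brightness value from a subsample of the Y (luminance) plane.

    y: H x W luminance plane; gamma: sampling rate. The sampling interval is W/gamma
    in the row direction and H/gamma in the column direction, so roughly gamma*gamma
    pixels are visited.
    """
    h, w = y.shape
    col_interval = max(h // gamma, 1)   # second interval component (column direction)
    row_interval = max(w // gamma, 1)   # first interval component (row direction)
    sampled = y[::col_interval, ::row_interval]
    l_t = float(sampled.sum())          # step 302: sampled pixel brightness sum L_T
    return l_t / (gamma * gamma)        # step 303: sampling-rate smoothing, L_S = L_T / gamma^2
```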
- FIG. 4 illustrates an exemplary flowchart of a method 400 for determining the brightness enhancement value of the pixel to be enhanced based on the brightness value of the image, the initial brightness value of the pixel to be enhanced, and the initial brightness values of its neighboring pixels, according to an embodiment of the present application; the method is executed by a computing device, such as the computing device 110 shown in FIG. 1 or the computing device 910 shown in FIG. 9. The method 400 can be used to implement step 2022 described above with reference to FIG. 2.
- step 401 based on the initial brightness value of the pixel to be enhanced and the initial brightness values of the neighboring pixels of the pixel to be enhanced, the initial brightness value of the pixel to be enhanced is filtered to obtain the filtered brightness value of the pixel to be enhanced.
- the initial brightness value of the pixel to be enhanced may be filtered by means of a filter template, the filter template including weights corresponding one-to-one to the brightness of the pixel to be enhanced and the brightness of the neighboring pixels of the pixel to be enhanced.
- in this case, the weighted sum of the initial brightness value of the pixel to be enhanced and the initial brightness values of the adjacent pixels can be determined by means of the filter template, where the weight for each pixel's brightness is the corresponding weight in the filter template; the weighted sum is then determined as the filtered brightness value of the pixel to be enhanced. The entire filtering process is thus performed based only on the initial brightness value of the pixel to be enhanced and the initial brightness values of the adjacent pixels.
- in other words, the filtering of subsequent pixels does not depend on the filtering results of previously filtered pixels, which further removes the pixel dependence of the image processing process in conventional technology, allows the brightness of the pixels in the image to be enhanced in parallel, and greatly reduces the resource occupancy of the image processing process.
- FIG. 5 illustrates a schematic diagram of an exemplary filter template 500 according to an embodiment of the present application.
- the filter template corresponds to the pixels in a square area with a size of 3×3 pixels (including the pixel to be enhanced and its neighboring pixels), and includes weights a0, a1, a2, a3, a4, a5, a6, a7, a8 corresponding one-to-one to the brightness of the pixel to be enhanced and the brightness of its neighboring pixels, where a0 is the weight corresponding to the brightness of the pixel to be enhanced, a1 through a8 are the weights corresponding to the brightness of the adjacent pixels, and their sum is typically equal to one.
- optionally, the value of a0 is 1/2, the values of a2, a4, a6, and a8 are all 1/8, and the values of a1, a3, a5, and a7 are 0; this choice is advantageous in image processing, especially in the image processing of video conference scenes.
- FIG. 6 illustrates a schematic diagram of filtering the pixel brightness value of an image using the filter template 500 shown in FIG. 5.
- the pixel brightness value can be filtered in a manner from left to right in the row direction of the image pixels and top to bottom in the column direction.
- the brightness values of image pixels can be filtered in any order, for example, the brightness values of multiple pixels at any different positions can be filtered at the same time, because the filtering process removes the pixel dependency in the conventional technology.
- assume the initial brightness value of the center-point pixel (i.e., the pixel to be enhanced) shown in FIG. 6 is I_0, and the initial brightness values of the eight neighboring pixels, taken clockwise starting from the upper-left corner of the center pixel, are I_1, I_2, I_3, I_4, I_5, I_6, I_7, I_8.
- the filter template shown in FIG. 5 is then used to filter the initial brightness value of the pixel to be enhanced, and the resulting filtered brightness value of the pixel to be enhanced (i.e., the center-point pixel) is (I_0×a0)+(I_1×a1)+(I_2×a2)+(I_3×a3)+(I_4×a4)+(I_5×a5)+(I_6×a6)+(I_7×a7)+(I_8×a8).
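- a vectorised sketch of this filtering step is shown below; reading a1-a8 clockwise from the upper-left neighbor as described, the non-zero weights fall on the four edge-adjacent neighbors. Border handling by edge replication is an assumption, as the text does not specify how boundary pixels are treated:

```python
def filter_brightness(y: np.ndarray) -> np.ndarray:
    """Apply the FIG. 5-style template: center weight 1/2, up/down/left/right 1/8, corners 0.

    Every output value is computed only from the original (unmodified) brightness plane,
    so all pixels can be filtered independently and in parallel.
    """
    p = np.pad(y.astype(np.float32), 1, mode="edge")  # assumed border handling
    center = p[1:-1, 1:-1]
    up, down = p[:-2, 1:-1], p[2:, 1:-1]
    left, right = p[1:-1, :-2], p[1:-1, 2:]
    return 0.5 * center + 0.125 * (up + down + left + right)
```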
- the brightness enhancement value of the pixel to be enhanced is determined based on the brightness value of the image, the initial brightness value of the pixel to be enhanced, and the filtered brightness value of the pixel to be enhanced.
- the brightness enhancement value of the pixel to be enhanced may be determined according to the following formula:
- E(x) is the brightness enhancement value of the pixel to be enhanced
- I(x) is the initial brightness value of the pixel to be enhanced
- t(x) is the atmospheric light transmittance
- I_B is the brightness value of the image
- E_V is the filtered brightness value of the pixel to be enhanced
- A is the atmospheric light intensity value
- w is a non-zero constant, and optionally 0.2831.
- before each image is enhanced, a two-dimensional lookup table (LUT) between the initial brightness value of the pixel to be enhanced, the filtered brightness value of the pixel to be enhanced, and the brightness enhancement value of the pixel to be enhanced may be established in advance based on the above formula; the first dimension represents the initial brightness value, the second dimension represents the filtered brightness value, and the value obtained by looking up the table is the brightness enhancement value of the pixel to be enhanced.
- the following algorithm may be used to establish the two-dimensional lookup table in advance:
- the brightness value of the image is 75.59
- the atmospheric light intensity value is 3
- i is the initial brightness value I(x) of the pixel to be enhanced
- j is the filtered brightness value E_V of the pixel to be enhanced
- t is the atmospheric light transmittance
- m is the brightness enhancement value E(x) calculated according to the above formula
- T_lut[i][j] is the brightness enhancement value of the pixel to be enhanced obtained by looking up the table.
- clip(·) is a numerical truncation function: if the value is between 0 and 255, it is retained; if the value is less than zero, it is set to zero; if it is greater than 255, it is set to 255. The purpose of this step is to ensure that the calculated brightness enhancement value m is in the range 0-255, so it is not strictly necessary, and m can be assigned directly to T_lut[i][j].
- to make the brightness of the image smoother, the brightness enhancement value E(x) obtained above may optionally be smoothed as clip(0.6*E(x)+0.4*E_V), and the result of this calculation is then used as the brightness enhancement value with which the initial brightness of the pixel to be enhanced is enhanced; this step is, of course, not required.
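- the sketch below illustrates the look-up-table, clipping, and optional smoothing steps described above. The exact expressions for E(x) and t(x) appear as images in the published application and are not reproduced in this text, so the enhancement formula is injected as a caller-supplied function rather than implemented here; all names are illustrative:

```python
def clip(v: float) -> int:
    """Numerical truncation to the range 0-255, as described above."""
    return int(min(max(v, 0.0), 255.0))

def build_lut(enhance_fn):
    """Precompute T_lut[i][j] for every pair (initial brightness i, filtered brightness j).

    enhance_fn(i, j) stands in for the application's E(x) formula (not reproduced here).
    """
    return [[clip(enhance_fn(i, j)) for j in range(256)] for i in range(256)]

def smoothed_enhancement(e_x: float, e_v: float) -> int:
    """Optional final smoothing: clip(0.6 * E(x) + 0.4 * E_V)."""
    return clip(0.6 * e_x + 0.4 * e_v)

# Because each pixel's enhanced value depends only on its original brightness and its
# filtered brightness, the whole image can be enhanced with one parallel table gather:
# enhanced = np.asarray(t_lut)[y_initial.astype(int), y_filtered.astype(int)]
```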
- FIG. 7 illustrates a schematic effect diagram of enhancing an original image using an image processing method according to an embodiment of the present application. Compared with the original image 710 on the left, the enhanced image 720 on the right of FIG. 7 is brighter both in the overall image (721 versus 711) and in local areas such as the hand area (722 versus 712), and the quality of the enhanced image is clearly much improved. Moreover, the inventor compared the resource occupancy rates of the image processing method according to the embodiments of the present application and a conventional image processing method with pixel dependence or coupling on different hardware platforms, as shown in Table 1 below:
- the image processing method according to the embodiment of the present application greatly reduces the resource occupancy rate of the image processing process compared with the conventional image processing method.
- FIG. 8 illustrates an exemplary structural block diagram of an image processing apparatus 800 according to an embodiment of the present application.
- the image processing apparatus 800 is located in a computing device, such as the computing device 110 shown in FIG. 1.
- the device 800 includes a determination module 801 and an enhancement module 802.
- the enhancement module 802 further includes a first determination sub-module 803, a second determination sub-module 804, and an enhancement sub-module 805.
- the determining module 801 is configured to determine the brightness value of the image.
- the determining module 801 may be configured to determine the brightness value of the image using various methods.
- the image is embodied in the format of YUV color space data.
- each color has a luminance component Y, and two chrominance components U and V.
- the determining module 801 may be configured to obtain the YUV color space data of the image, and then determine the brightness value of the image based on the brightness component in the YUV color space data.
- the determining module 801 may be configured to obtain the RGB color space data of the image, and then convert the RGB color space data of the image into the YUV color space data of the image.
- the determining module 801 may be configured to: sample pixels of the image at sampling intervals to obtain the brightness components in the YUV color space data of the sampled pixels; and add the brightness components of the sampled pixels to obtain Sampling pixel brightness sum; performing sampling rate smoothing processing on the sampling pixel brightness sum to obtain a smoothed brightness value; determining the smoothed brightness value as the brightness value of the image.
- the enhancement module 802 is configured to enhance the brightness of the image when the brightness value of the image is less than the image brightness threshold. In other words, if the brightness value of the image is not less than the image brightness threshold, the enhancement module 802 may not enhance the brightness of the image, because in that case the image meets the requirements for image brightness and usually has a higher quality.
- the first determining submodule 803 is configured to determine each pixel of the image as a pixel to be enhanced.
- the second determining submodule 804 is configured to determine the brightness enhancement of the pixel to be enhanced based on the brightness value of the image, the initial brightness value of the pixel to be enhanced, and the initial brightness value of the neighboring pixels of the pixel to be enhanced value.
- the initial brightness value mentioned here represents the brightness value of the pixel to be enhanced when the enhancement processing is not performed, that is, the brightness value before being enhanced.
- the neighboring pixels of the pixel to be enhanced may be pixels other than the pixel to be enhanced in an area with the pixel to be enhanced as a center point.
- the shape and size of the area can be preset as needed.
- the area may be an area with a size of 3 ⁇ 3 pixels, of course, this is not a limitation.
- the second determining sub-module 804 may be configured to filter the initial brightness value of the pixel to be enhanced, based on the initial brightness value of the pixel to be enhanced and the initial brightness values of its neighboring pixels, to obtain the filtered brightness value of the pixel to be enhanced, and then to determine the brightness enhancement value of the pixel to be enhanced based on the brightness value of the image, the initial brightness value of the pixel to be enhanced, and the filtered brightness value of the pixel to be enhanced.
- as an example, the second determining sub-module 804 may be configured to determine the weighted sum of the initial brightness value of the pixel to be enhanced and the initial brightness values of the adjacent pixels, and then determine the weighted sum as the filtered brightness value of the pixel to be enhanced.
- the second determining submodule 804 may be configured to determine the brightness enhancement value of the pixel to be enhanced using the following formula:
- E(x) is the brightness enhancement value of the pixel to be enhanced
- I(x) is the initial brightness value of the pixel to be enhanced
- t(x) is the atmospheric light transmittance
- I_B is the brightness value of the image
- E_V is the filtered brightness value of the pixel to be enhanced
- A is the atmospheric light intensity value
- w is a non-zero constant.
- the enhancement sub-module 805 is configured to use the brightness enhancement value as the enhanced brightness value of the pixel to be enhanced. After determining the brightness enhancement value of the pixel to be enhanced, the enhancement sub-module 805 may adjust the brightness value of the pixel to be enhanced to the brightness enhancement value to achieve the enhancement of the brightness of the pixel to be enhanced.
- Figure 9 illustrates an example system 900 that includes an example computing device 910 that represents one or more systems and/or devices that can implement the various technologies described herein.
- the computing device 910 may be, for example, a server of a service provider, a device associated with the server, a system on a chip, and/or any other suitable computing device or computing system.
- the image processing apparatus 800 described above with respect to FIG. 8 may take the form of a computing device 910. Alternatively, the image processing apparatus 800 may be implemented as a computer program in the form of an image processing application 916.
- the example computing device 910 as illustrated includes a processing system 911, one or more computer-readable media 912, and one or more I/O interfaces 913 that are communicatively coupled to each other.
- the computing device 910 may also include a system bus or other data and command transfer system that couples various components to each other.
- the system bus may include any one or a combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus using any of a variety of bus architectures.
- Various other examples are also contemplated, such as control and data lines.
- the processing system 911 represents a function of performing one or more operations using hardware. Therefore, the processing system 911 is illustrated as including hardware elements 914 that can be configured as processors, functional blocks, and the like. This may include implementation in hardware as an application specific integrated circuit or other logic devices formed using one or more semiconductors.
- the hardware element 914 is not limited by the material it is formed of or the processing mechanism adopted therein.
- the processor may be composed of semiconductor(s) and/or transistors (e.g., electronic integrated circuits (ICs)).
- processor-executable instructions may be electronically executable instructions.
- the computer-readable medium 912 is illustrated as including a memory/storage device 915.
- the memory/storage device 915 represents memory/storage capacity associated with one or more computer-readable media.
- the memory/storage device 915 may include volatile media (such as random access memory (RAM)) and/or nonvolatile media (such as read only memory (ROM), flash memory, optical disks, magnetic disks, etc.).
- the memory/storage device 915 may include fixed media (e.g., RAM, ROM, fixed hard drive, etc.) and removable media (e.g., flash memory, removable hard drive, optical disk, etc.).
- the computer-readable medium 912 may be configured in various other ways as described further below.
- the one or more I/O interfaces 913 represent functions that allow a user to input commands and information to the computing device 910 and optionally also allow various input/output devices to be used to present information to the user and/or other components or devices.
- input devices include keyboards, cursor control devices (e.g., mice), microphones (e.g., for voice input), scanners, touch functions (e.g., capacitive or other sensors configured to detect physical touch), cameras ( For example, visible or invisible wavelengths (such as infrared frequencies) can be used to detect motions that do not involve touch as gestures) and so on.
- Examples of output devices include display devices (for example, monitors or projectors), speakers, printers, network cards, tactile response devices, and so on. Therefore, the computing device 910 may be configured in various ways as described further below to support user interaction.
- the computing device 910 also includes an image processing application 916.
- the image processing application 916 may be, for example, a software instance of the image processing apparatus 800, and combined with other elements in the computing device 910 to implement the technology described herein.
- modules include routines, programs, objects, elements, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
- module means that these technologies can be implemented on various computing platforms with various processors.
- the computer-readable media may include various media that can be accessed by the computing device 910.
- the computer-readable medium may include “computer-readable storage medium” and “computer-readable signal medium”.
- Computer-readable storage medium refers to a medium and/or device capable of permanently storing information, and/or a tangible storage device. Therefore, computer-readable storage media refers to non-signal bearing media.
- Computer-readable storage media include hardware such as volatile and nonvolatile, removable and non-removable media and/or storage devices implemented in a method or technology suitable for storing information (such as computer-readable instructions, data structures, program modules, logic elements/circuits, or other data).
- Examples of computer-readable storage media may include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disk (DVD) or other optical storage devices, hard disks, cassette tapes, magnetic tapes, disk storage Apparatus or other magnetic storage device, or other storage device, tangible medium, or article suitable for storing desired information and accessible by a computer.
- Computer-readable signal medium refers to a signal-bearing medium that is configured to send instructions to the hardware of the computing device 910, such as via a network.
- the signal medium can typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave, a data signal, or other transmission mechanism.
- the signal medium also includes any information transmission medium.
- modulated data signal refers to a signal that encodes information in the signal in such a way to set or change one or more of its characteristics.
- communication media include wired media such as a wired network or direct connection, and wireless media such as acoustic, RF, infrared, and other wireless media.
- the hardware element 914 and the computer-readable medium 912 represent instructions, modules, programmable device logic, and/or fixed device logic implemented in hardware, which in some embodiments can be used to implement the technology described herein. At least some aspects.
- the hardware elements may include integrated circuits or system-on-chips, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), and other implementations in silicon or components of other hardware devices.
- a hardware element can serve as a processing device that executes program tasks defined by instructions, modules, and/or logic embodied by the hardware element, as well as a hardware device that stores instructions for execution, for example, the computer-readable storage medium described previously.
- software, hardware or program modules and other program modules may be implemented as one or more instructions and/or logic embodied by one or more hardware elements 914 on some form of computer-readable storage medium.
- the computing device 910 may be configured to implement specific instructions and/or functions corresponding to software and/or hardware modules. Therefore, for example, by using the computer-readable storage medium and/or the hardware element 914 of the processing system, the module may be implemented at least partially in hardware to implement the module as a module executable by the computing device 910 as software.
- the instructions and/or functions may be executable/operable by one or more articles of manufacture (eg, one or more computing devices 910 and/or processing system 911) to implement the techniques, modules, and examples described herein.
- the computing device 910 may adopt various different configurations.
- the computing device 910 may be implemented as a computer-type device including a personal computer, a desktop computer, a multi-screen computer, a laptop computer, a netbook, and the like.
- the computing device 910 may also be implemented as a mobile device type device including mobile devices such as a mobile phone, a portable music player, a portable game device, a tablet computer, a multi-screen computer, and the like.
- the computing device 910 may also be implemented as a television-type device, which includes a device with or connected to a generally larger screen in a casual viewing environment. These devices include televisions, set-top boxes, game consoles, etc.
- the techniques described herein can be supported by these various configurations of the computing device 910 and are not limited to specific examples of the techniques described herein.
- the functions can also be implemented in whole or in part on the "cloud" 920 by using a distributed system, such as through the platform 922 as described below.
- Cloud 920 includes and/or represents a platform 922 for resources 924.
- the platform 922 abstracts the underlying functions of the hardware (for example, server) and software resources of the cloud 920.
- the resources 924 may include applications and/or data that can be used when performing computer processing on a server remote from the computing device 910.
- Resources 924 may also include services provided through the Internet and/or through subscriber networks such as cellular or Wi-Fi networks.
- the platform 922 can abstract resources and functions to connect the computing device 910 with other computing devices.
- the platform 922 can also be used to abstract the classification of resources to provide a corresponding level of classification of the requirements encountered for the resources 924 implemented via the platform 922. Therefore, in the interconnected device embodiment, the implementation of the functions described herein may be distributed throughout the system 900. For example, the functions may be partially implemented on the computing device 910 and through a platform 922 that abstracts the functions of the cloud 920.
- each functional unit may be implemented in a single unit, in multiple units, or as a part of other functional units without departing from this application.
- functionality described as being performed by a single unit may be performed by multiple different units. Therefore, a reference to a specific functional unit is only regarded as a reference to an appropriate unit for providing the described functionality, rather than indicating a strict logical or physical structure or organization. Therefore, the present application may be implemented in a single unit, or may be physically and functionally distributed between different units and circuits.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
Abstract
The embodiments of this application disclose an image processing method and apparatus, a computing device, and a storage medium. The method includes determining the brightness value of an image. The method further includes, when the brightness value of the image is less than an image brightness threshold, enhancing the brightness of the image, which includes: determining each pixel of the image as a pixel to be enhanced; determining a brightness enhancement value of the pixel to be enhanced based on the brightness value of the image, the initial brightness value of the pixel to be enhanced, and the initial brightness values of the adjacent pixels of the pixel to be enhanced; and using the brightness enhancement value as the enhanced brightness value of the pixel to be enhanced.
Description
本申请要求于2019年12月12日提交中国专利局、申请号为201911274591.0、申请名称为“图像处理方法和图像处理装置”的中国专利申请的优先权。
本申请涉及图像处理的技术领域,具体地涉及图像处理方法、装置、计算设备和存储介质。
发明背景
随着互联网技术的发展,在各行各业中都对图像的质量的要求越来越高。例如,远程视频会议作为办公协同产品各项功能中最重要的一部分,对视频图像的质量有着非常高的要求。如果用户所处的环境存在光照不理想的情况,会造成视频会议中场景图像的质量比较差。如果不对这样环境下的场景图像进行处理,视频会议的体验显然将会非常差。
为了呈现高质量的图像,通常需要对图像的亮度进行增强。然而,目前的图像处理方法中,对像素的亮度的增强通常都会依赖于该像素之前增强的像素的增强结果,这无疑使得图像亮度的增强过程长时间占用大量的图像处理资源,例如宝贵的CPU(中央处理单元)资源。例如,在视频会议领域中,视频会议系统的大部分处理资源用于这样的图像处理任务,这会对视频会议系统的性能提升造成很大的阻碍。
发明内容
有鉴于此,本申请实施例提供了图像处理方法、装置、计算设备和存储介质。
根据本申请实施例的一方面,提供了一种图像处理方法,由计算设备执行,包括:确定图像的亮度值;当所述图像的亮度值小于图像亮度阈值时,对所述图像的亮度进行增强。响应于所述图像的亮度值小于图像亮度阈值,则对所述图像的亮度进行增强包括:将所述图像的每个像素确定为待增强像素;基于所述图像的亮度值、所述待增强像素的初始亮度值以及所述待增强像素的相邻像素的初始亮度值,确定所述待增强像素的亮度增强值;将所述亮度增强值作为所述待增强像素的增强后的亮度值。
根据本申请实施例的另一方面,提供了一种图像处理装置,包括:确定模块, 被配置成确定图像的亮度值;增强模块,被配置成当所述图像的亮度值小于图像亮度阈值时,对所述图像的亮度进行增强。所述增强模块还包括:第一确定子模块,被配置成将所述图像的每个像素确定为待增强像素;第二确定子模块,被配置成基于所述图像的亮度值、所述待增强像素的初始亮度值以及所述待增强像素的相邻像素的初始亮度值,确定所述待增强像素的亮度增强值;增强子模块,被配置成将所述亮度增强值作为所述待增强像素的增强后的亮度值。
根据本申请的另一方面,提供了一种计算设备,包括处理器;以及存储器,配置为在其上存储有计算机可执行指令,当计算机可执行指令被处理器执行时执行如上面所述的任意方法。
根据本申请的另一方面,提供了一种计算机可读存储介质,其存储有计算机可执行指令,当所述计算机可执行指令被执行时,执行如上面所述的任意方法。
附图简要说明
现在将更详细并且参考附图来描述本申请的实施例,其中:
图1图示了根据本申请的实施例的可以被实施在其中的示例性应用场景;
图2图示了根据本申请的一个实施例的一种图像处理方法的示意性流程图;
图3图示了根据本申请的一个实施例的基于图像的YUV颜色空间数据中的亮度分量确定所述图像的亮度值的方法的示例性流程图;
图4图示了根据本申请的一个实施例的基于图像的亮度值、待增强像素的初始亮度值以及所述待增强像素的相邻像素的初始亮度值,确定所述待增强像素的亮度增强值的方法的示例性流程图;
图5图示了根据本申请的一个实施例的示例性滤波模板的示意图;
图6图示了使用图5所示的滤波模板对图像的像素亮度值进行滤波的示意图;
图7图示了使用根据本申请的实施例的图像处理方法对原始图像进行增强的示意效果图;
图8图示了根据本申请的一个实施例的一种图像处理装置的示例性结构框图;以及
图9图示了一个示例系统,其包括代表可以实现本文描述的各种技术的一个或 多个系统和/或设备的示例计算设备。
实施方式
下面的说明提供用于充分理解和实施本申请的各种实施例的特定细节。本领域的技术人员应当理解,本申请的技术方案可以在没有这些细节中的一些的情况下被实施。在某些情况下,并没有示出或详细描述一些熟知的结构和功能,以避免不必要地使对本申请的实施例的描述模糊不清。在本申请中使用的术语以其最宽泛的合理方式来理解,即使其是结合本申请的特定实施例被使用的。
首先,对本申请实施例中涉及的部分用语进行说明,以便于本领域技术人员理解:
LUT:指显示查找表(Look-Up-Table),本质上就是一个RAM。它把数据事先写入RAM后,每当输入一个信号就等于输入一个地址进行查表,找出地址对应的内容,然后将所述内容输出;LUT的应用范围比较广泛,例如:LUT可以用作像素亮度值的映射表,它将实际采样到的像素亮度值经过一定的变换(例如,反转、二值化、线性或非线性变换等)变成了另外一个与之对应的亮度值,这样可以起到突出图像的有用信息以及增强图像的光对比度的作用;
RGB颜色空间:RGB颜色空间以R(Red:红)、G(Green:绿)、B(Blue:蓝)三种基本色为基础,进行不同程度的叠加,产生丰富而广泛的颜色,所以俗称三基色模式;RGB颜色空间最大的优点就是直观,容易理解;缺点是R,G,B这3个分量是高度相关的,即如果一个颜色的某一个分量发生了一定程度的改变,那么这个颜色很可能要发生改变;
YUV颜色空间:RGB信号可以经过一系列变换得到亮度信号Y和两个色差信号R-Y(即U)、B-Y(即V)。这种色彩的表示方法就是所谓的YUV颜色空间表示。采用YUV颜色空间的重要性是它的亮度信号Y和色度信号U、V是分离的。“Y”表示明亮度,也就是灰度值;而“U”和“V”表示的则是色度,作用是描述影像色彩及饱和度,用于指定像素的颜色。一般地,Y'UV、YUV、YCbCr、YPbPr等颜色空间都可以统称为YUV颜色空间。
图1图示了根据本申请的实施例的技术方案可以实施在其中的示例性应用场景100。如图1所示,所述应用场景100典型地可以为进行视频会议的场景,其中位于近端(也即本地)的参与者在所述场景中与位于远端的参与者进行视频会议。如图1所示,位于视频会议近端的摄像头101可以采集关于视频会议的视频,所述视频由一系列视频帧构成,所述视频帧例如是参与视频会议的本地参与者或其中的发言者的近端图像。如果本地视频会议的环境中存在光照不理想的情况,则所述摄像头采集的近端图像的质量(特别地,亮度)也会比较差。这时,可以利用计算设备110上的图像处理装置102对所述图像的亮度进行增强,以得到质量较好的增强后的图像。一方面,所述增强后图像可以被显示在本地显示器103上。另一方面,也可以将增强后的图像发送到视频会议的远端进行显示以供远端的会议参与者观看。作为示例,可以使用计算设备110上的编码器104对所述增强后的图像进行编码,然后将编码得到的编码图像通过网络105发送到远端,在远端可以利用解码器106对接收到的编码图像进行解码,然后将解码后的图像在远端显示器107上进行显示,使得位于远端的参与者可以观看到高质量的近端图像。
所述网络105例如可以是广域网(WAN)、局域网(LAN)、无线网络、公用电话网、内联网以及本领域的技术人员熟知的任何其它类型的网络。应当指出,上面描述的场景仅仅是本申请的实施例可以被实施在其中的一个示例,事实上本申请的实施例可以被实施在任何需要对图像进行处理、特别是对图像的亮度进行增强的场景中。
图2图示了根据本申请的一个实施例的一种图像处理方法200的示意性流程图,由计算设备执行,例如图1中所示的计算设备110或者图9所示的计算设备910。如图2所示,所述方法200包括如下步骤。
在步骤201,确定图像的亮度值。
可以使用各种方式来确定所述图像的亮度值。在一些实施例中,所述图像是以YUV颜色空间数据的格式体现的。在YUV空间中,每一个颜色具有一个亮度分量Y和两个色度分量U和V。在这种情况下,可以首先获取到所述图像的YUV颜色 空间数据,然后基于所述YUV颜色空间数据中的亮度分量,确定所述图像的亮度值。
在一些实施例,所述图像是以RGB颜色空间数据的格式体现的。RGB颜色空间是依据人眼识别的颜色R(红)、G(绿)B(蓝)定义出的空间,可表示大部分颜色。在RGB颜色空间中,将色调、亮度、饱和度三个量放在一起表示,很难分开。在这种情况下,可以首先获取所述图像的RGB颜色空间数据,然后将所述图像的RGB颜色空间数据转换为所述图像的YUV颜色空间数据。
作为示例,可以通过如下公式将所述图像的RGB颜色空间数据转换为所述图像的YUV颜色空间数据:
Y=0.299R+0.587G+0.114B (1)
U=-0.1687R-0.3313G+0.5B+128 (2)
V=0.5R-0.4187G-0.0813B+128 (3)
在一些实施例中,可以使用任何合适的方法来基于所述YUV颜色空间数据中的亮度分量Y来确定所述图像的亮度值。例如,可以获取所述图像的所有像素的亮度分量,然后取这些亮度分量的平均值以作为所述图像的亮度值。当然,这不是限制性的,其它各种适合的方法都被考虑。
在步骤202,当所述图像的亮度值小于图像亮度阈值时,对所述图像的亮度进行增强。
换句话说,如果所述图像的亮度值不小于所述图像亮度阈值,则可以不对所述图像的亮度进行增强,因为这种情况下所述图像满足对于图像亮度的要求,通常具有较高的质量。
所述步骤202进一步可以包括如下步骤2021-2023。
在步骤2021,将所述图像的每个像素确定为待增强的像素。
在本申请的实施例中,需要在每个单独的像素的基础上对所述图像的亮度进行 增强。
在步骤2022,基于所述图像的亮度值、所述待增强像素的初始亮度值以及所述待增强像素的相邻像素的初始亮度值,确定所述待增强像素的亮度增强值。
应当指出,这里所述的像素的初始亮度值表示所述像素未被进行增强处理时的亮度值,也即被增强前的亮度值。
在一些实施例中,所述待增强像素的相邻像素可以是位于以所述待增强像素为中心点的区域内除所述待增强像素外的其它像素。所述区域的形状和大小可以根据需要被预先设定。作为示例,所述区域可以为3×3个像素大小的正方形区域,当然这不是限制的。
在步骤2023,将所述亮度增强值作为所述待增强像素的增强后的亮度值。
在确定所述待增强像素的亮度增强值后,可以将所述待增强像素的亮度值调整为所述亮度增强值以实现对所述待增强像素的亮度的增强。通过对每个待增强像素的亮度进行增强,实现了对所述图像的亮度增强。
在本申请的实施例中,通过综合考虑所述图像的亮度值、所述待增强像素的初始亮度值以及所述待增强像素的相邻像素的初始亮度值,能够较全面地确定所述待增强像素的质量以实现较准确的增强结果。而且,整个增强过程都仅仅是基于像素的初始亮度值进行的,因此解除了常规技术中图像处理过程存在的像素的依赖关系,使得对图像中像素的亮度的增强可以并行进行,大幅降低了图像处理过程的资源占用率。
应当指出,尽管本申请的实施例是以对图像进行增强处理为主题进行描述的,但是本申请的实施例同样可以适用于对视频进行增强处理的场景中,因为对视频进行增强处理本质上是对视频中的每一帧图像进行增强处理。
上面描述了在获取到所述图像的YUV颜色空间数据后,可以基于所述YUV颜色空间数据中的亮度分量来确定所述图像的亮度值。图3图示了根据本申请的一个实施例的基于图像的YUV颜色空间数据中的亮度分量确定所述图像的亮度值的方法300的示例性流程图,由计算设备执行,例如图1中所示的计算设备110或者图 9所示的计算设备910。所述方法300包括如下步骤。
在步骤301,按采样间隔对所述图像的像素进行采样,获取被采样的像素的YUV颜色空间数据中的亮度分量。
所述采样间隔取决于对所述图像的像素进行采样的采样率γ。在一些实施例中,所述采样间隔在所述图像的像素的行方向上具有第一间隔分量,并且在所述图像的像素的列方向上具有第二间隔分量。所述第一间隔分量为行方向上的像素数目除以采样率得到的值,所述第二间隔分量为列方向上的像素数目除以采样率得到的值。在采样时,可以从所述图像的行和列方向的起点(例如,图像的左上角)开始,每经过所述采样间隔对所述图像的像素进行采样一次,当然这不是限制性的。
作为示例，假设在所述图像的像素的行方向上的像素数目为I_W（其也称为图像的宽度），在所述图像的像素的列方向上的像素数目为I_H（其也称为图像的高度），则所述第一间隔分量（也即，行方向的采样间隔）为I_W/γ，第二间隔分量（也即，列方向的采样间隔）为I_H/γ。
在步骤302，将采样像素的亮度分量相加，得到采样像素亮度和。也就是说，将采样得到的所有像素的亮度值进行相加，以得到采样像素亮度和L_T。
在步骤303，对所述采样像素亮度和进行采样率平滑处理，得到平滑后的亮度值。作为示例，可以将所述采样像素亮度和除以采样率的平方，得到所述平滑后的亮度值，也即所述平滑后的亮度值L_S=L_T/γ²。应当指出，其它能够对采样像素亮度和进行平滑的方式也是被考虑的。
在步骤304,将所述平滑后的亮度值确定为所述图像的亮度值。
可以通过以上步骤,以一种快速、准确、资源占用率较低的方法300确定了所述图像的亮度值。
图4图示了根据本申请的一个实施例的基于图像的亮度值、待增强像素的初始 亮度值以及所述待增强像素的相邻像素的初始亮度值,确定所述待增强像素的亮度增强值的方法400的示例性流程图,由计算设备执行,例如图1中所示的计算设备110或者图9所示的计算设备910。所述方法400可以用来实施如上面参照图2描述的步骤2022。
在步骤401,基于所述待增强像素的初始亮度值以及所述待增强像素的相邻像素的初始亮度值,对所述待增强像素的初始亮度值进行滤波,得到所述待增强像素的滤波后的亮度值。
在一些实施例中,可以借助于滤波模板来对所述待增强像素的初始亮度值进行滤波,所述滤波模板包括与所述待增强像素的亮度以及所述待增强像素的相邻像素的亮度一一对应的权重。在这种情况下,可以借助于滤波模板,确定所述待增强像素的初始亮度值以及所述相邻像素的初始亮度值的加权和,这里牵涉的像素的亮度的权重即为所述滤波模板中包括的权重。然后,将所述加权和确定为所述待增强像素的滤波后的亮度值。可见,整个滤波过程都是基于待增强像素的初始亮度值以及所述相邻像素的初始亮度值进行的,换句话说,滤波过程中后续像素的滤波不依赖于在前像素的滤波结果,因此进一步解除了常规技术中图像处理过程的像素依赖关系,使得对图像中像素的亮度的增强可以并行进行,大幅降低了图像处理过程的资源占用率。
作为示例,图5图示了根据本申请的一个实施例的示例性滤波模板500的示意图。如图5所示,所述滤波模板与3×3个像素大小的正方形区域内的像素(包括待增强像素以及所述待增强像素的相邻像素)相对应,并且包括与待增强像素的亮度以及所述待增强像素的相邻像素的亮度一一对应的权重a0、a1、a2、a3、a4、a5、a6、a7、a8,其中a0为对应于待增强像素的亮度的权重,a1、a2、a3、a4、a5、a6、a7、a8为对应于相邻像素的亮度的权重,并且它们的和典型地等于1。可选地,a0的值为1/2,a2、a4、a6、a8的值均为1/8,a1、a3、a5、a7的值为0,这在图像处理中、特别是在视频会议场景的图像处理中是有利的。
作为示例，图6图示了使用图5所示的滤波模板500对图像的像素亮度值进行滤波的示意图。如图6所示，可以按照图像像素行方向上从左到右、列方向上从上到下的方式来对像素亮度值进行滤波。事实上，可以按任意顺序对图像像素的亮度值进行滤波，例如可以同时对任意不同位置的多个像素的亮度值进行滤波，这是因为该滤波过程解除了常规技术中的像素依赖关系。假设图6中所示的中心点像素（即，待增强像素）的初始亮度值为I_0，从中心点像素的左上角开始按顺时针方向的8个相邻像素的初始亮度值分别为I_1、I_2、I_3、I_4、I_5、I_6、I_7、I_8，则使用图5所示的滤波模板对所述待增强像素的初始亮度值进行滤波后得到的所述待增强像素（即，中心点像素）的滤波后的亮度值为
(I_0×a0)+(I_1×a1)+(I_2×a2)+(I_3×a3)+(I_4×a4)+(I_5×a5)+(I_6×a6)+(I_7×a7)+(I_8×a8)　(4)
在步骤402,基于所述图像的亮度值、所述待增强像素的初始亮度值和所述待增强像素的滤波后的亮度值确定所述待增强像素的亮度增强值。在一些实施例中,可以按照如下公式确定所述待增强像素的亮度增强值,
其中,E(x)为所述待增强像素的亮度增强值,I(x)为所述待增强像素的初始亮度值,t(x)是大气光透射率,其表达式为:
其中，I_B为所述图像的亮度值，E_V是所述待增强像素的滤波后的亮度值，A为大气光强度值，w是不为零的常数，并且可选地为0.2831。
在一些实施例中,在每对一副图像进行亮度增强前,可以预先基于上述公式建立待增强像素的初始亮度值和所述待增强像素的滤波后的亮度值与所述待增强像素 的亮度增强值之间的二维查找表(LUT),其中第一个维度表示待增强像素的初始亮度值,第二个维度表示所述待增强像素的滤波后的亮度值,通过查找表得到的值为所述待增强像素的亮度增强值。
作为示例,可以采用如下算法预先建立所述二维查找表:
其中，所述图像的亮度值为75.59，大气光强度值为3，i为待增强像素的初始亮度值I(x)，j为待增强像素的滤波后的亮度值E_V，t为大气光透射率，m为根据上述公式计算得到的亮度增强值E(x)，T_lut[i][j]为查表得到的所述待增强像素的亮度增强值。clip(·)为数值截断函数，如果数值在0到255之间，则保留该值，如果数值小于零，则将该数值置为零，如果大于255，则将该数值置为255，这一步的目的是确保计算得到的亮度增强值m在0-255的范围内，因此并不是必需的，可以直接将m确定为T_lut[i][j]。
在一些实施例中，为了使图像的亮度更加平滑，可以将上面所得到的亮度增强值E(x)进行如下平滑处理，即clip(0.6*E(x)+0.4*E_V)，并将其计算结果作为待增强像素的亮度增强值对所述待增强像素的初始亮度进行增强，当然这并不是必须的。
图7图示了使用根据本申请的实施例的图像处理方法对原始图像进行增强的示意效果图。可见,位于图7右侧的增强后的图像720相比于左侧的原始图像710,无论整体图像的亮度721与711,还是局部区域(例如,手部区域)的亮度722与712,都得到了增强,增强后的图像的质量明显得到了较大的提升。而且,发明人对使用不同的硬件平台实施根据本申请的实施例的图像处理方法与存在像素依赖或耦合的常规技术的图像处理方法时的资源占用率进行了对比,如下表1所示:
表1
可见,根据本申请的实施例的图像处理方法相比于常规技术的图像处理方法大幅降低了图像处理过程的资源占用率。
图8图示了根据本申请的一个实施例的图像处理装置800的示例性结构框图,该图像处理装置800位于计算设备中,例如图1中所示的计算设备110。如图8所示,所述设备800包括确定模块801和增强模块802,其中增强模块802还包括第一确定子模块803、第二确定子模块804、增强子模块805。
所述确定模块801被配置成确定图像的亮度值。
所述确定模块801可以被配置成使用各种方式来确定所述图像的亮度值。在一些实施例中,所述图像是以YUV颜色空间数据的格式体现的。在YUV空间中,每一个颜色具有一个亮度分量Y,和两个色度分量U和V。在这种情况下,所述确定 模块801可以被配置成获取到所述图像的YUV颜色空间数据,然后基于所述YUV颜色空间数据中的亮度分量,确定所述图像的亮度值。
在一些实施例中,所述确定模块801可以被配置成获取所述图像的RGB颜色空间数据,然后将所述图像的RGB颜色空间数据转换为所述图像的YUV颜色空间数据。
在一些实施例中,所述确定模块801可以被配置成:按采样间隔对所述图像的像素进行采样,获取采样像素的YUV颜色空间数据中的亮度分量;将采样像素的亮度分量相加以得到采样像素亮度和;对所述采样像素亮度和进行采样率平滑处理,得到平滑后的亮度值;将所述平滑后的亮度值确定为所述图像的亮度值。
所述增强模块802被配置成当所述图像的亮度值小于图像亮度阈值时,对所述图像的亮度进行增强。换句话说,如果所述图像的亮度值不小于图像亮度阈值,则所述增强模块802可以不对所述图像的亮度进行增强,因为所述图像满足对于图像亮度的要求,通常具有较高的质量。
第一确定子模块803被配置成将所述图像的每个像素确定为待增强像素。
第二确定子模块804被配置成基于所述图像的亮度值、所述待增强像素的初始亮度值以及所述待增强像素的相邻像素的初始亮度值,确定所述待增强像素的亮度增强值。这里所述的初始亮度值表示所述待增强像素未被进行增强处理时的亮度值,也即被增强前的亮度值。
在一些实施例中,所述待增强像素的相邻像素可以是位于以所述待增强像素为中心点的区域内除所述待增强像素外的其它像素。所述区域的形状和大小可以根据需要被预先设定。作为示例,所述区域可以为3×3个像素大小的区域,当然这不是限制的。
在一些实施例中,所述第二确定子模块804可以被配置成基于所述待增强像素的初始亮度值以及所述待增强像素的相邻像素的初始亮度值,对所述待增强像素的初始亮度值进行滤波,得到所述待增强像素的滤波后的亮度值;然后,基于所述图像的亮度值、所述待增强像素的初始亮度值和所述待增强像素的滤波后的亮度值, 确定所述待增强像素的亮度增强值。作为示例,所述第二确定子模块804可以被配置成确定所述待增强像素的初始亮度值以及所述相邻像素的初始亮度值的加权和,然后将所述加权和确定为所述待增强像素的滤波后的亮度值。
在一些实施例中,所述第二确定子模块804可以被配置成以如下公式确定所述待增强像素的亮度增强值,
其中,E(x)为所述待增强像素的亮度增强值,I(x)为所述待增强像素的初始亮度值,t(x)大气光透射率,其表达式为:
其中，I_B为所述图像的亮度值，E_V是所述待增强像素的滤波后的亮度值，A为大气光强度值，w是不为零的常数。
增强子模块805被配置成将所述亮度增强值作为所述待增强像素的增强后的亮度值。在确定所述待增强像素的亮度增强值后,增强子模块805可以将所述待增强像素的亮度值调整为所述亮度增强值以实现对所述待增强像素的亮度的增强。
图9图示了示例系统900,其包括代表可以实现本文描述的各种技术的一个或多个系统和/或设备的示例计算设备910。计算设备910可以是例如服务提供商的服务器、与服务器相关联的设备、片上系统、和/或任何其它合适的计算设备或计算系统。上面关于图8描述的图像处理装置800可以采取计算设备910的形式。替换地,图像处理装置800可以以图像处理应用916的形式被实现为计算机程序。
如图示的示例计算设备910包括彼此通信耦合的处理系统911、一个或多个计算机可读介质912以及一个或多个I/O接口913。尽管未示出,但是计算设备910还可以包括系统总线或其他数据和命令传送系统,其将各种组件彼此耦合。系统总 线可以包括不同总线结构的任何一个或组合,所述总线结构诸如存储器总线或存储器控制器、外围总线、通用串行总线、和/或利用各种总线架构中的任何一种的处理器或局部总线。还构思了各种其他示例,诸如控制和数据线。
处理系统911代表使用硬件执行一个或多个操作的功能。因此,处理系统911被图示为包括可被配置为处理器、功能块等的硬件元件914。这可以包括在硬件中实现为专用集成电路或使用一个或多个半导体形成的其它逻辑器件。硬件元件914不受其形成的材料或其中采用的处理机构的限制。例如,处理器可以由(多个)半导体和/或晶体管(例如,电子集成电路(IC))组成。在这样的上下文中,处理器可执行指令可以是电子可执行指令。
计算机可读介质912被图示为包括存储器/存储装置915。存储器/存储装置915表示与一个或多个计算机可读介质相关联的存储器/存储容量。存储器/存储装置915可以包括易失性介质(诸如随机存取存储器(RAM))和/或非易失性介质(诸如只读存储器(ROM)、闪存、光盘、磁盘等)。存储器/存储装置915可以包括固定介质(例如,RAM、ROM、固定硬盘驱动器等)以及可移动介质(例如,闪存、可移动硬盘驱动器、光盘等)。计算机可读介质912可以以下面进一步描述的各种其他方式进行配置。
一个或多个I/O接口913代表允许用户向计算设备910输入命令和信息并且可选地还允许使用各种输入/输出设备将信息呈现给用户和/或其他组件或设备的功能。输入设备的示例包括键盘、光标控制设备(例如,鼠标)、麦克风(例如,用于语音输入)、扫描仪、触摸功能(例如,被配置为检测物理触摸的容性或其他传感器)、相机(例如,可以采用可见或不可见的波长(诸如红外频率)将不涉及触摸的运动检测为手势)等等。输出设备的示例包括显示设备(例如,监视器或投影仪)、扬声器、打印机、网卡、触觉响应设备等。因此,计算设备910可以以下面进一步描述的各种方式进行配置以支持用户交互。
计算设备910还包括图像处理应用916。图像处理应用916可以例如是图像处理装置800的软件实例,并且与计算设备910中的其他元件相组合地实现本文描述的 技术。
本文可以在软件硬件元件或程序模块的一般上下文中描述各种技术。一般地,这些模块包括执行特定任务或实现特定抽象数据类型的例程、程序、对象、元素、组件、数据结构等。本文所使用的术语“模块”,“功能”和“组件”一般表示软件、固件、硬件或其组合。本文描述的技术的特征是与平台无关的,意味着这些技术可以在具有各种处理器的各种计算平台上实现。
所描述的模块和技术的实现可以存储在某种形式的计算机可读介质上或者跨某种形式的计算机可读介质传输。计算机可读介质可以包括可由计算设备910访问的各种介质。作为示例而非限制,计算机可读介质可以包括“计算机可读存储介质”和“计算机可读信号介质”。
与单纯的信号传输、载波或信号本身相反,“计算机可读存储介质”是指能够持久存储信息的介质和/或设备,和/或有形的存储装置。因此,计算机可读存储介质是指非信号承载介质。计算机可读存储介质包括诸如易失性和非易失性、可移动和不可移动介质和/或以适用于存储信息(诸如计算机可读指令、数据结构、程序模块、逻辑元件/电路或其他数据)的方法或技术实现的存储设备之类的硬件。计算机可读存储介质的示例可以包括但不限于RAM、ROM、EEPROM、闪存或其它存储器技术、CD-ROM、数字通用盘(DVD)或其他光学存储装置、硬盘、盒式磁带、磁带,磁盘存储装置或其他磁存储设备,或其他存储设备、有形介质或适于存储期望信息并可以由计算机访问的制品。
“计算机可读信号介质”是指被配置为诸如经由网络将指令发送到计算设备910的硬件的信号承载介质。信号介质典型地可以将计算机可读指令、数据结构、程序模块或其他数据体现在诸如载波、数据信号或其它传输机制的调制数据信号中。信号介质还包括任何信息传递介质。术语“调制数据信号”是指以这样的方式对信号中的信息进行编码来设置或改变其特征中的一个或多个的信号。作为示例而非限制,通信介质包括诸如有线网络或直接连线的有线介质以及诸如声、RF、红外和其它无线介质的无线介质。
如前所述,硬件元件914和计算机可读介质912代表以硬件形式实现的指令、模块、可编程器件逻辑和/或固定器件逻辑,其在一些实施例中可以用于实现本文描述的技术的至少一些方面。硬件元件可以包括集成电路或片上系统、专用集成电路(ASIC)、现场可编程门阵列(FPGA)、复杂可编程逻辑器件(CPLD)以及硅中的其它实现或其他硬件设备的组件。在这种上下文中,硬件元件可以作为执行由硬件元件所体现的指令、模块和/或逻辑所定义的程序任务的处理设备,以及用于存储用于执行的指令的硬件设备,例如,先前描述的计算机可读存储介质。
前述的组合也可以用于实现本文所述的各种技术和模块。因此,可以将软件、硬件或程序模块和其它程序模块实现为在某种形式的计算机可读存储介质上和/或由一个或多个硬件元件914体现的一个或多个指令和/或逻辑。计算设备910可以被配置为实现与软件和/或硬件模块相对应的特定指令和/或功能。因此,例如通过使用处理系统的计算机可读存储介质和/或硬件元件914,可以至少部分地以硬件来实现将模块实现为可由计算设备910作为软件执行的模块。指令和/或功能可以由一个或多个制品(例如,一个或多个计算设备910和/或处理系统911)可执行/可操作以实现本文所述的技术、模块和示例。
在各种实施方式中,计算设备910可以采用各种不同的配置。例如,计算设备910可以被实现为包括个人计算机、台式计算机、多屏幕计算机、膝上型计算机、上网本等的计算机类设备。计算设备910还可以被实现为包括诸如移动电话、便携式音乐播放器、便携式游戏设备、平板计算机、多屏幕计算机等移动设备的移动装置类设备。计算设备910还可以实现为电视类设备,其包括具有或连接到休闲观看环境中的一般地较大屏幕的设备。这些设备包括电视、机顶盒、游戏机等。
本文描述的技术可以由计算设备910的这些各种配置来支持,并且不限于本文所描述的技术的具体示例。功能还可以通过使用分布式系统、诸如通过如下所述的平台922而在“云”920上全部或部分地实现。
云920包括和/或代表用于资源924的平台922。平台922抽象云920的硬件(例如,服务器)和软件资源的底层功能。资源924可以包括在远离计算设备910的服 务器上执行计算机处理时可以使用的应用和/或数据。资源924还可以包括通过因特网和/或通过诸如蜂窝或Wi-Fi网络的订户网络提供的服务。
平台922可以抽象资源和功能以将计算设备910与其他计算设备连接。平台922还可以用于抽象资源的分级以提供遇到的对于经由平台922实现的资源924的需求的相应水平的分级。因此,在互连设备实施例中,本文描述的功能的实现可以分布在整个系统900内。例如,功能可以部分地在计算设备910上以及通过抽象云920的功能的平台922来实现。
应当理解,为清楚起见,参考不同的功能单元对本申请的实施例进行了描述。然而,将明显的是,在不偏离本申请的情况下,每个功能单元的功能性可以被实施在单个单元中、实施在多个单元中或作为其它功能单元的一部分被实施。例如,被说明成由单个单元执行的功能性可以由多个不同的单元来执行。因此,对特定功能单元的参考仅被视为对用于提供所描述的功能性的适当单元的参考,而不是表明严格的逻辑或物理结构或组织。因此,本申请可以被实施在单个单元中,或者可以在物理上和功能上被分布在不同的单元和电路之间。
将理解的是,尽管第一、第二、第三等术语在本文中可以用来描述各种设备、元件、部件或部分,但是这些设备、元件、部件或部分不应当由这些术语限制。这些术语仅用来将一个设备、元件、部件或部分与另一个设备、元件、部件或部分相区分。
尽管已经结合一些实施例描述了本申请,但是其不旨在被限于在本文中所阐述的特定形式。相反,本申请的范围仅由所附权利要求来限制。附加地,尽管单独的特征可以被包括在不同的权利要求中,但是这些可以可能地被有利地组合,并且包括在不同权利要求中不暗示特征的组合不是可行的和/或有利的。特征在权利要求中的次序不暗示特征必须以其工作的任何特定次序。此外,在权利要求中,词“包括”不排除其它元件,并且不定冠词“一”或“一个”不排除多个。权利要求中的附图标记仅作为明确的例子被提供,不应该被解释为以任何方式限制权利要求的范围。
Claims (18)
- An image processing method, performed by a computing device, the method comprising: determining a brightness value of an image; and when the brightness value of the image is less than an image brightness threshold, enhancing the brightness of the image, which comprises: determining each pixel of the image as a pixel to be enhanced; determining a brightness enhancement value of the pixel to be enhanced based on the brightness value of the image, an initial brightness value of the pixel to be enhanced, and initial brightness values of neighboring pixels of the pixel to be enhanced; and using the brightness enhancement value as an enhanced brightness value of the pixel to be enhanced.
- The method according to claim 1, wherein determining the brightness value of the image comprises: obtaining YUV color space data of the image; and determining the brightness value of the image based on a luminance component in the YUV color space data.
- The method according to claim 2, wherein obtaining the YUV color space data of the image comprises: obtaining RGB color space data of the image; and converting the RGB color space data of the image into the YUV color space data of the image.
- The method according to claim 2, wherein determining the brightness value of the image based on the luminance component in the YUV color space data comprises: sampling pixels of the image at a sampling interval and obtaining the luminance components in the YUV color space data of the sampled pixels; adding the luminance components of the sampled pixels to obtain a sampled-pixel luminance sum; performing sampling-rate smoothing on the sampled-pixel luminance sum to obtain a smoothed brightness value; and determining the smoothed brightness value as the brightness value of the image.
- The method according to claim 4, wherein the sampling interval has a first interval component in a row direction of the pixels of the image and a second interval component in a column direction, the first interval component being a value obtained by dividing the number of pixels in the row direction by a sampling rate, and the second interval component being a value obtained by dividing the number of pixels in the column direction by the sampling rate.
- The method according to claim 4, wherein performing sampling-rate smoothing on the sampled-pixel luminance sum to obtain the smoothed brightness value comprises: dividing the sampled-pixel luminance sum by the square of the sampling rate to obtain the smoothed brightness value.
- The method according to claim 1, wherein the neighboring pixels of the pixel to be enhanced comprise other pixels located within a region centered on the pixel to be enhanced.
- The method according to claim 1, wherein determining the brightness enhancement value of the pixel to be enhanced based on the brightness value of the image, the initial brightness value of the pixel to be enhanced, and the initial brightness values of the neighboring pixels of the pixel to be enhanced comprises: filtering the initial brightness value of the pixel to be enhanced based on the initial brightness value of the pixel to be enhanced and the initial brightness values of the neighboring pixels of the pixel to be enhanced, to obtain a filtered brightness value of the pixel to be enhanced; and determining the brightness enhancement value of the pixel to be enhanced based on the brightness value of the image, the initial brightness value of the pixel to be enhanced, and the filtered brightness value of the pixel to be enhanced.
- The method according to claim 8, wherein filtering the initial brightness value of the pixel to be enhanced to obtain the filtered brightness value of the pixel to be enhanced comprises: determining a weighted sum of the initial brightness value of the pixel to be enhanced and the initial brightness values of the neighboring pixels; and determining the weighted sum as the filtered brightness value of the pixel to be enhanced.
- An image processing apparatus, comprising: a determining module configured to determine a brightness value of an image; and an enhancement module configured to enhance the brightness of the image when the brightness value of the image is less than an image brightness threshold, the enhancement module comprising: a first determining sub-module configured to determine each pixel of the image as a pixel to be enhanced; a second determining sub-module configured to determine a brightness enhancement value of the pixel to be enhanced based on the brightness value of the image, an initial brightness value of the pixel to be enhanced, and initial brightness values of neighboring pixels of the pixel to be enhanced; and an enhancement sub-module configured to use the brightness enhancement value as an enhanced brightness value of the pixel to be enhanced.
- The apparatus according to claim 11, wherein the determining module is configured to: obtain YUV color space data of the image; and determine the brightness value of the image based on a luminance component in the YUV color space data.
- The apparatus according to claim 12, wherein the determining module is configured to: sample pixels of the image at a sampling interval and obtain the luminance components in the YUV color space data of the sampled pixels; add the luminance components of the sampled pixels to obtain a sampled-pixel luminance sum; perform sampling-rate smoothing on the sampled-pixel luminance sum to obtain a smoothed brightness value; and determine the smoothed brightness value as the brightness value of the image.
- The apparatus according to claim 11, wherein the second determining sub-module is configured to: filter the initial brightness value of the pixel to be enhanced based on the initial brightness value of the pixel to be enhanced and the initial brightness values of the neighboring pixels of the pixel to be enhanced, to obtain a filtered brightness value of the pixel to be enhanced; and determine the brightness enhancement value of the pixel to be enhanced based on the brightness value of the image, the initial brightness value of the pixel to be enhanced, and the filtered brightness value of the pixel to be enhanced.
- The apparatus according to claim 14, wherein the second determining sub-module is configured to: determine a weighted sum of the initial brightness value of the pixel to be enhanced and the initial brightness values of the neighboring pixels; and determine the weighted sum as the filtered brightness value of the pixel to be enhanced.
- The apparatus according to claim 11, wherein the neighboring pixels of the pixel to be enhanced comprise other pixels located within a region centered on the pixel to be enhanced.
- A computing device, comprising: a memory configured to store computer-executable instructions; and a processor configured to perform the method according to any one of claims 1-10 when the computer-executable instructions are executed by the processor.
- A computer-readable storage medium storing computer-executable instructions which, when executed, perform the method according to any one of claims 1-10.
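For illustration only, and without limiting the claims, the claimed pipeline can be sketched in a few lines of Python. The sketch below assumes a BT.601 RGB-to-luma conversion for claims 2-3, a uniform 3×3 neighborhood as the weighted filter of claims 8-9, and hypothetical values for the sampling rate, brightness threshold, target level, and per-pixel gain; the actual enhancement formula is the one given in the description, not the placeholder gain used here.

```python
import numpy as np


def rgb_to_y(rgb):
    """BT.601 luma from an RGB image of shape (H, W, 3) with values in [0, 255]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b


def sampled_image_brightness(y, sampling_rate=16):
    """Image brightness as in claims 4-6: sample the luma plane at intervals of
    (rows // sampling_rate, cols // sampling_rate), sum the sampled luma values,
    then smooth the sum by dividing it by the square of the sampling rate."""
    h, w = y.shape
    row_step = max(h // sampling_rate, 1)
    col_step = max(w // sampling_rate, 1)
    samples = y[::row_step, ::col_step][:sampling_rate, :sampling_rate]
    return samples.sum() / sampling_rate ** 2


def filtered_luma(y):
    """Filtered brightness as in claims 8-9: a weighted sum of each pixel and its
    neighbours; a uniform 3x3 kernel is assumed here purely for illustration."""
    padded = np.pad(y, 1, mode="edge")
    acc = np.zeros_like(y)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += padded[1 + dy:1 + dy + y.shape[0], 1 + dx:1 + dx + y.shape[1]]
    return acc / 9.0


def enhance(rgb, brightness_threshold=80.0, target=128.0):
    """Claim 1: when the image brightness is below the threshold, enhance every
    pixel from the image brightness, its initial luma and its filtered luma.
    The gain formula below is a placeholder, not the formula of the patent."""
    rgb = rgb.astype(np.float64)
    y = rgb_to_y(rgb)
    image_brightness = sampled_image_brightness(y)
    if image_brightness >= brightness_threshold:
        return rgb.astype(np.uint8)              # bright enough, leave untouched
    base = filtered_luma(y)                      # local brightness estimate
    shortfall = (brightness_threshold - image_brightness) / brightness_threshold
    gain = 1.0 + shortfall * np.clip(target / np.maximum(base, 1.0) - 1.0, 0.0, 3.0)
    return np.clip(rgb * gain[..., None], 0, 255).astype(np.uint8)


if __name__ == "__main__":
    dark_frame = np.random.randint(0, 60, size=(240, 320, 3), dtype=np.uint8)
    out = enhance(dark_frame)
    print(out.shape, out.dtype, out.mean())
```

The 3×3 neighborhood stands in for the "region centered on the pixel to be enhanced" of claim 7; a larger window or non-uniform weights would fit the claims equally well, and sampling on a fixed grid (claims 4-6) keeps the per-frame brightness estimate inexpensive.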
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20899802.1A EP4006825B1 (en) | 2019-12-12 | 2020-07-28 | Image processing method and apparatus, computing device, and storage medium |
US17/572,579 US11983852B2 (en) | 2019-12-12 | 2022-01-10 | Image processing method and apparatus, computing device, and storage medium |
US18/604,765 US20240303792A1 (en) | 2019-12-12 | 2024-03-14 | Image processing method and apparatus, computing device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911274591.0 | 2019-12-12 | ||
CN201911274591.0A CN110910333B (zh) | 2019-12-12 | 2019-12-12 | 图像处理方法和图像处理设备 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/572,579 Continuation US11983852B2 (en) | 2019-12-12 | 2022-01-10 | Image processing method and apparatus, computing device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021114684A1 (zh) | 2021-06-17 |
Family
ID=69824917
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/105019 WO2021114684A1 (zh) | 2019-12-12 | 2020-07-28 | 图像处理方法、装置、计算设备和存储介质 |
Country Status (4)
Country | Link |
---|---|
US (2) | US11983852B2 (zh) |
EP (1) | EP4006825B1 (zh) |
CN (1) | CN110910333B (zh) |
WO (1) | WO2021114684A1 (zh) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110910333B (zh) | 2019-12-12 | 2023-03-14 | 腾讯科技(深圳)有限公司 | 图像处理方法和图像处理设备 |
CN111526366B (zh) * | 2020-04-28 | 2021-08-06 | 深圳市思坦科技有限公司 | 图像处理方法、装置、摄像设备和存储介质 |
US12080030B2 (en) | 2020-04-28 | 2024-09-03 | Shenzhen Sitan Technology Co., Ltd. | Image processing method and device, camera apparatus and storage medium |
US20220270556A1 (en) * | 2021-02-19 | 2022-08-25 | Vizio, Inc. | Systems and methods for enhancing television display for video conferencing and video watch party applications |
CN113592782B (zh) * | 2021-07-07 | 2023-07-28 | 山东大学 | 一种复合材料碳纤维芯棒x射线图像缺陷提取方法及系统 |
CN115018736A (zh) * | 2022-07-22 | 2022-09-06 | 广州市保伦电子有限公司 | 一种图像亮度均匀处理方法及处理终端 |
US20240388674A1 (en) * | 2023-05-15 | 2024-11-21 | Google Llc | Providing lighting adjustment in a video conference |
CN117854432B (zh) * | 2024-02-02 | 2024-08-20 | 深圳市艾斯莱特光电有限公司 | 一种led灯的控制方法、控制装置和存储介质 |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7426312B2 (en) * | 2005-07-05 | 2008-09-16 | Xerox Corporation | Contrast enhancement of images |
KR100809347B1 (ko) * | 2006-07-31 | 2008-03-05 | 삼성전자주식회사 | 쉐도우 영역 보상 방법 및 장치 |
US8582915B2 (en) * | 2011-06-27 | 2013-11-12 | Wuxi Jinnang Technology Development Ltd. | Image enhancement for challenging lighting conditions |
CN104137531B (zh) * | 2012-03-02 | 2017-06-06 | 三菱电机株式会社 | 图像处理装置和方法 |
WO2015085130A2 (en) * | 2013-12-07 | 2015-06-11 | Razzor Technologies Inc. | Adaptive contrast in image processing and display |
EP3281147A1 (en) * | 2015-04-07 | 2018-02-14 | FotoNation Limited | Image processing method and apparatus |
CN106412383A (zh) * | 2015-07-31 | 2017-02-15 | 阿里巴巴集团控股有限公司 | 视频图像的处理方法和装置 |
US9449252B1 (en) * | 2015-08-27 | 2016-09-20 | Sony Corporation | System and method for color and brightness adjustment of an object in target image |
WO2017036908A1 (en) * | 2015-08-31 | 2017-03-09 | Thomson Licensing | Method and apparatus for inverse tone mapping |
CN105407296B (zh) * | 2015-11-18 | 2021-03-23 | 腾讯科技(深圳)有限公司 | 实时视频增强方法和装置 |
WO2017132858A1 (en) * | 2016-02-03 | 2017-08-10 | Chongqing University Of Posts And Telecommunications | Methods, systems, and media for image processing |
US10097765B2 (en) * | 2016-04-20 | 2018-10-09 | Samsung Electronics Co., Ltd. | Methodology and apparatus for generating high fidelity zoom for mobile video |
CN106875344B (zh) * | 2016-12-21 | 2019-09-17 | 浙江大华技术股份有限公司 | 一种视频图像的补光处理方法及装置 |
CN107248245B (zh) * | 2017-06-06 | 2019-05-24 | 余姚市菲特塑料有限公司 | 警灯闪烁强度控制平台 |
CN109697698B (zh) * | 2017-10-20 | 2023-03-21 | 腾讯科技(深圳)有限公司 | 低照度增强处理方法、装置和计算机可读存储介质 |
WO2019133991A1 (en) * | 2017-12-29 | 2019-07-04 | Wu Yecheng | System and method for normalizing skin tone brightness in a portrait image |
US10489928B2 (en) * | 2018-02-23 | 2019-11-26 | Librestream Technologies Inc. | Image processing system for inspecting object distance and dimensions using a hand-held camera with a collimated laser |
CN108737750A (zh) * | 2018-06-07 | 2018-11-02 | 北京旷视科技有限公司 | 图像处理方法、装置及电子设备 |
CN110581986A (zh) * | 2018-06-08 | 2019-12-17 | 夏普株式会社 | 图像显示装置以及图像数据的修正方法 |
JP7280670B2 (ja) * | 2018-07-06 | 2023-05-24 | キヤノン株式会社 | 画像処理装置、制御方法、及びプログラム |
CN110120021B (zh) * | 2019-05-05 | 2021-04-09 | 腾讯科技(深圳)有限公司 | 图像亮度的调整方法、装置、存储介质及电子装置 |
CN110288546B (zh) | 2019-06-27 | 2022-11-01 | 华侨大学 | 一种采用双向伽马变换的低照度图像增强方法 |
CN110278425A (zh) | 2019-07-04 | 2019-09-24 | 潍坊学院 | 图像增强方法、装置、设备和存储介质 |
- 2019-12-12 CN CN201911274591.0A patent/CN110910333B/zh active Active
- 2020-07-28 EP EP20899802.1A patent/EP4006825B1/en active Active
- 2020-07-28 WO PCT/CN2020/105019 patent/WO2021114684A1/zh unknown
- 2022-01-10 US US17/572,579 patent/US11983852B2/en active Active
- 2024-03-14 US US18/604,765 patent/US20240303792A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1320324A (zh) * | 1998-09-01 | 2001-10-31 | 迪维奥公司 | 数字图像轮廓增强方法和装置 |
CN103824250A (zh) * | 2014-01-24 | 2014-05-28 | 浙江大学 | 基于gpu的图像色调映射方法 |
US20190228511A1 (en) * | 2016-08-04 | 2019-07-25 | Intel Corporation | Tone-mapping high dynamic range images |
CN110910333A (zh) * | 2019-12-12 | 2020-03-24 | 腾讯科技(深圳)有限公司 | 图像处理方法和图像处理设备 |
Non-Patent Citations (1)
Title |
---|
See also references of EP4006825A4 * |
Also Published As
Publication number | Publication date |
---|---|
US20220130022A1 (en) | 2022-04-28 |
EP4006825B1 (en) | 2024-12-25 |
CN110910333B (zh) | 2023-03-14 |
US20240303792A1 (en) | 2024-09-12 |
CN110910333A (zh) | 2020-03-24 |
US11983852B2 (en) | 2024-05-14 |
EP4006825A4 (en) | 2022-10-12 |
EP4006825A1 (en) | 2022-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021114684A1 (zh) | 图像处理方法、装置、计算设备和存储介质 | |
US9916518B2 (en) | Image processing apparatus, image processing method, program and imaging apparatus | |
TWI399100B (zh) | 影像處理方法 | |
WO2017084255A1 (zh) | 实时视频增强方法、终端和非易失性计算机可读存储介质 | |
CN105261326A (zh) | 调整显示色域的显示设备及其调整显示色域的方法 | |
CN109274985B (zh) | 视频转码方法、装置、计算机设备和存储介质 | |
CN109361949B (zh) | 视频处理方法、装置、电子设备以及存储介质 | |
WO2018072270A1 (zh) | 一种图像显示增强方法及装置 | |
CN104981861A (zh) | 信号转换装置和方法以及程序和记录介质 | |
CN105100763B (zh) | 色彩补偿方法及电路、显示装置 | |
US8860806B2 (en) | Method, device, and system for performing color enhancement on whiteboard color image | |
CN104919490B (zh) | 用于图像处理的结构描述符 | |
US11062435B2 (en) | Rendering information into images | |
CN105025283A (zh) | 一种新的色彩饱和度调整方法、系统及移动终端 | |
CN111226256A (zh) | 用于图像动态范围调整的系统和方法 | |
TW202021330A (zh) | 圖像縮放方法和裝置 | |
US7796827B2 (en) | Face enhancement in a digital video | |
TWI415480B (zh) | 影像處理方法與影像處理系統 | |
WO2017113620A1 (zh) | 高动态范围图像处理方法及系统 | |
WO2019020112A1 (zh) | 终端显示方法、终端及计算机可读存储介质 | |
JP5865517B2 (ja) | 画像表示方法及び装置 | |
CN101383975A (zh) | 图像处理的方法及装置及其电子装置 | |
WO2017107605A1 (zh) | 一种图像细节处理方法和装置、终端、存储介质 | |
TWI633537B (zh) | 影像優化方法 | |
CN115294945A (zh) | 对象展示、生成颜色查找表的方法及装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20899802 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2020899802 Country of ref document: EP Effective date: 20220223 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |