
WO2021102939A1 - Image processing method and device (图像处理方法及设备) - Google Patents

Info

Publication number
WO2021102939A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
interest
region
video frame
infrared video
Prior art date
Application number
PCT/CN2019/122060
Other languages
English (en)
French (fr)
Inventor
张青涛
庹伟
赵新涛
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2019/122060
Publication of WO2021102939A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements

Definitions

  • This application relates to the field of computer technology, and in particular to an image processing method and equipment.
  • The embodiments of the present application provide an image processing method and device that can process local details in an image more accurately, helping to reveal the target information of a region of interest.
  • In a first aspect, an embodiment of the present application provides an image processing method, the method including: acquiring a region of interest input by a user for an image in an infrared video frame; enlarging the image according to the region of interest to obtain a processed image; and outputting the processed image.
  • In a second aspect, an embodiment of the present application provides an image processing device, the image processing device including a processor and a memory. Program instructions are stored in the memory, and the processor calls the program instructions to: acquire a region of interest input by a user for an image in an infrared video frame; enlarge the image according to the region of interest to obtain a processed image; and output the processed image.
  • In a third aspect, an embodiment of the present application provides a computer-readable storage medium having computer program instructions stored therein; when the computer program instructions are executed by a processor, they are used to execute the image processing method of the above-mentioned first aspect.
  • The embodiments of the present application can obtain the region of interest input by a user for an image in an infrared video frame and enlarge the image based on that region. Compared with enlarging the image globally, this eliminates the influence of objects outside the region of interest, enables more precise processing of the region, and thereby reveals more of its target information.
  • Fig. 1a is a schematic diagram of an image provided by an embodiment of the present application.
  • Fig. 1b is a schematic diagram of another image provided by an embodiment of the present application.
  • Fig. 2 is a schematic diagram of an image processing system provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a method for obtaining an area range provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another area range acquisition method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a method for obtaining a region of interest according to an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of another image processing method provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of another image processing method provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of another image processing method provided by an embodiment of the present application.
  • Fig. 10 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
  • To this end, this application proposes an image processing method.
  • The image processing method can obtain a region of interest input by a user for an image in an infrared video frame, enlarge the image according to the region of interest to obtain a processed image, and output the processed image. Enlarging the image based on the region of interest allows the region to be processed more accurately, thereby displaying its target information.
  • Fig. 1a is a schematic diagram of an image provided by an embodiment of the present application
  • Fig. 1b is a schematic diagram of another image provided by an embodiment of the present application.
  • The user can input the region of interest 101 for the image shown in FIG. 1a, thereby obtaining the region of interest in FIG. 1a.
  • The image is then enlarged according to the region of interest 101, yielding the enlarged image shown in FIG. 1b.
  • For example, the region of interest 101 can be enlarged to fill the entire display area: the region of interest 101 in the image shown in FIG. 1a is enlarged to obtain FIG. 1b, in which the region of interest fills the entire display area, so the region of interest is enlarged for display output.
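The enlargement described above can be sketched as a crop-and-resize: cut the region of interest out of the frame and scale it to fill the display area. The patent does not specify an interpolation method; this minimal pure-Python sketch uses nearest-neighbor mapping, and the frame-as-nested-list layout and the `(top, left, height, width)` ROI tuple are illustrative assumptions.

```python
def enlarge_roi(frame, roi, out_h, out_w):
    """Crop `roi` from `frame` and scale it to out_h x out_w (nearest-neighbor)."""
    top, left, h, w = roi
    out = []
    for y in range(out_h):
        src_y = top + y * h // out_h          # map output row back into the ROI
        row = []
        for x in range(out_w):
            src_x = left + x * w // out_w     # map output column back into the ROI
            row.append(frame[src_y][src_x])
        out.append(row)
    return out

# Toy 10x10 "infrared frame"; a 4x4 ROI at (2, 3) is blown up to 8x8.
frame = [[r * 10 + c for c in range(10)] for r in range(10)]
zoomed = enlarge_roi(frame, (2, 3, 4, 4), 8, 8)
```

Because only ROI pixels are sampled, objects outside the region cannot affect the output, which is the point the passage above makes about ROI-based versus global enlargement.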
  • the movable platform may be a movable device, an unmanned aerial vehicle, an unmanned vehicle, and the like.
  • Terminal devices may include mobile phones, notebooks, tablet computers, and so on.
  • the image processing system may include control equipment and a camera.
  • the camera may be located on a movable platform.
  • The movable platform may be an unmanned aerial vehicle, an unmanned vehicle, an unmanned ship, etc.; in the following, an unmanned aerial vehicle is described as an example.
  • the camera may be an infrared camera for collecting infrared images.
  • An infrared camera can also be called an infrared thermal imager, an infrared sensor, and so on.
  • the camera transmits the acquired infrared image to the control device.
  • the camera can transmit the collected infrared image to the control device through a drone, or a communication module is installed on the camera to directly transmit the collected infrared image to the control device.
  • The control device performs the related operations of the image processing method described in this application: the control device obtains the region of interest input by the user for the image in the infrared video frame, then processes the image collected by the infrared sensor according to the region of interest selected by the user, and obtains and outputs the processed image.
  • the control device may be, for example, a remote control, a ground station, a terminal or a server, etc.
  • the camera transmits the acquired infrared image to the drone, and the processor of the drone performs the related operations of the image processing method described in this application.
  • the drone can obtain the user's region of interest for the image input in the infrared video frame through the control device.
  • The drone processes the image collected by the infrared sensor according to the region of interest selected by the user, and can obtain and output the processed image.
  • the image processing process in this case will be described below in conjunction with FIG. 2.
  • FIG. 2 is a schematic diagram of an image processing system provided by an embodiment of the application.
  • the image processing method described in the embodiment of the present application can be applied to the image processing system shown in FIG. 2 but is not limited to this image processing system.
  • the image processing system includes an unmanned aerial vehicle 201 and a control device 202.
  • the unmanned aerial vehicle 201 is equipped with an infrared sensor, and the infrared sensor is used to shoot infrared video frames.
  • the drone can perform the image processing method mentioned in the embodiment of this application on the image in the infrared video frame to obtain an image after image enhancement processing.
  • the UAV can send the enhanced image to the control device for display.
  • the control device can display the image after the image enhancement processing.
  • the drone can obtain the user's region of interest for the image input in the infrared video frame through the control device.
  • the drone processes the image collected by the infrared sensor according to the region of interest selected by the user, and can obtain and output the processed image.
  • the control device may be a drone remote control or a terminal, and the remote control or the display screen on the terminal may display the image after the image enhancement processing.
  • FIG. 3 is a schematic flowchart of an image processing method provided by an embodiment of the application.
  • the image processing method shown in FIG. 3 may include the following steps:
  • S301 Acquire a region of interest input by a user for an image in an infrared video frame.
  • the infrared video frame may be an infrared video frame output by an infrared sensor.
  • Step S301 may include: the camera configured on the drone sends the acquired infrared video frames to the control device, either directly or through the drone; the control device receives and displays the infrared video frames and obtains the region of interest input by the user for the image; the control device then sends the acquired region of interest to the drone, so that the drone obtains the region of interest input by the user by receiving it from the control device.
  • The control device can obtain the region of interest (ROI) input by the user through its own touch screen.
  • When the control device is the execution subject of the image processing method shown in FIG. 3, the difference from the previous embodiment is that in step S301 the control device does not need to send the region of interest to the drone.
  • step S301 may include: acquiring the area range of the image input in the infrared video frame by the user through the touch screen; and taking the area range in the image as the area of interest.
  • The touch screen may also be referred to as a touchscreen or a touch panel.
  • the user can also determine the area of interest through physical keys, joysticks of the remote control, buttons, etc.
  • step S301 may include: acquiring the area range of the image input in the infrared video frame by the user through the touch screen; taking the area outside the area range in the image as the area of interest.
  • Acquiring the area range input by the user for the image in the infrared video frame through the touch screen may include, but is not limited to: acquiring, through the touch screen, a click operation input by the user on the image in the infrared video frame; and taking the circular area centered on the coordinates corresponding to the click operation, with a preset value as the radius, as the area range input by the user.
  • the preset value may be preset by the system.
  • FIG. 4 is a schematic diagram of a method for obtaining an area range according to an embodiment of the application.
  • the user inputs a click operation on the dot 401 in the image through the touch screen, then the determined area range can be the circle area obtained by taking the coordinates corresponding to the dot 401 as the center and the preset value as the radius. .
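The click-to-circle selection above can be sketched as follows: every pixel whose distance from the clicked coordinate is within the preset radius belongs to the circular area range. The function and parameter names, and the `(x, y)` pixel convention, are assumptions for illustration only.

```python
def circular_roi(click_x, click_y, radius, width, height):
    """Pixels within `radius` of the clicked point form the circular area range."""
    pixels = set()
    for y in range(height):
        for x in range(width):
            # Compare squared distances to avoid a sqrt per pixel.
            if (x - click_x) ** 2 + (y - click_y) ** 2 <= radius ** 2:
                pixels.add((x, y))
    return pixels

# A click at (5, 5) on a 10x10 frame with a preset radius of 2.
roi = circular_roi(5, 5, 2, 10, 10)
```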
  • the area range where the user inputs the image in the infrared video frame through the touch screen may be a polygonal area range.
  • the polygonal area can be rectangular, triangular, octagonal, and so on.
  • FIG. 5 is a schematic diagram of another method for acquiring a region range provided by an embodiment of the application.
  • If the user inputs a triangular area range on the image in the infrared video frame through the touch screen, the triangular area can be used as the area range input by the user.
  • The region of interest may be the area where a target object is located in the image, so step S301 may include: detecting the target object in the image in the infrared video frame; and determining the area where the target object is located as the region of interest.
  • Step S301 may include: obtaining a scale value and an orientation input by the user; for the image in the infrared video frame, determining the frame occupying that scale value of the image at that orientation as the region of interest, or determining the area outside that frame as the region of interest.
  • FIG. 6 is a schematic diagram of a method for obtaining a region of interest according to an embodiment of the application.
  • For example, if the user inputs a scale value of 40% and an orientation of the right part, the area of the image in the video frame other than the right 40% frame is the region of interest input by the user.
  • the method for the user to input the scale value and orientation of the region of interest through the touch screen is not limited to the method described in FIG. 6.
  • the user can click on a variety of labels displayed on the touch screen, and the labels are used to indicate the scale value and orientation of the region of interest in the image, thereby determining the region of interest in the image.
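The scale-and-orientation selection described above can be sketched as a small helper that turns a ratio and a side into a rectangular frame. The orientation keywords and the `(top, left, height, width)` return layout are assumptions, not the patent's actual interface.

```python
def roi_from_ratio(width, height, ratio, orientation):
    """Return the frame occupying `ratio` of the image at the given side,
    as (top, left, height, width)."""
    if orientation == "right":
        w = round(width * ratio)
        return (0, width - w, height, w)
    if orientation == "left":
        return (0, 0, height, round(width * ratio))
    if orientation == "top":
        return (0, 0, round(height * ratio), width)
    if orientation == "bottom":
        h = round(height * ratio)
        return (height - h, 0, h, width)
    raise ValueError(f"unknown orientation: {orientation}")

# Fig. 6 example: a 40% frame on the right of a 100x80 image.
frame_roi = roi_from_ratio(100, 80, 0.40, "right")
```

The area outside this frame would then be taken as the region of interest, per the Fig. 6 example above.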
  • S302 Perform magnification processing on the image according to the region of interest to obtain a processed image.
  • step S302 may include: performing magnification processing on pixels of the region of interest in the image to obtain a processed image. It can be seen that zooming in on the pixels of the region of interest can reduce the impact of objects outside the region of interest compared to performing global zoom processing on the image, thereby revealing more detailed information about the region of interest.
  • the amplification processing may be digital amplification.
  • step S302 may include: respectively performing magnification processing on the pixel points of the region of interest in the image and the pixel points outside the region of interest in the image to obtain a processed image.
  • the detailed information of the region of interest can be richer than that of the region outside the region of interest.
  • For example, the pixels of the region of interest in the image may be enlarged according to a first enlargement ratio, and the pixels outside the region of interest may be enlarged according to a second enlargement ratio, to obtain the processed image.
  • The first enlargement ratio and the second enlargement ratio may be different. It should be understood that the embodiments of the present application may also enlarge the pixels inside and outside the region of interest based on other methods.
  • Obtaining the region of interest input by the user for the image in the infrared video frame and enlarging the image based on that region can, compared with enlarging the image globally, process the details of the region of interest more accurately and thereby reveal its target information.
  • the user can input the region of interest, which helps to improve the user experience.
  • the image processing method may further include image enhancement processing, and the operation of the image enhancement processing may include one or more of histogram statistics and stretching, medium and high frequency detail enhancement, and pseudo color mapping.
  • the acquisition of the region of interest can be performed before the image is enlarged according to the region of interest, or it can be performed before the image enhancement processing described above.
  • The region of interest can also be pushed to the above-mentioned image enhancement operations, so that each image enhancement operation is performed based on the region of interest.
  • the optional implementation manners are described below in conjunction with FIG. 7 to FIG. 9.
  • FIG. 7 is a schematic diagram of an image processing method provided by an embodiment of the application.
  • the image in the infrared video frame can be image-enhanced.
  • Then, the region of interest input by the user for the image in the infrared video frame is acquired, so that the image is magnified based on the region of interest.
  • the region of interest input by the user for the image after the image enhancement processing can be obtained.
  • The difference between the image processing method shown in FIG. 7 and that shown in FIG. 3 is that, before the region of interest input by the user is obtained, image enhancement processing can also be performed on the image in the infrared video frame.
  • The image processing method may include the following steps: perform image enhancement processing on the image in the infrared video frame; acquire the region of interest input by the user for the enhanced image; and enlarge the image according to the region of interest to obtain and output the processed image.
  • the image processing method can perform image enhancement processing on the image before acquiring the region of interest in the image to obtain an image with better contrast. Obtaining the region of interest in an image with better contrast, and zooming in based on the region of interest, is conducive to better showing the detailed information of the region of interest.
  • FIG. 8 is a schematic diagram of another image processing method provided by an embodiment of the application.
  • the difference between the image processing method shown in FIG. 8 and the image processing method shown in FIG. 7 is that after acquiring the region of interest, the region of interest can be pushed to the image enhancement processing operation.
  • The image processing method may include the following steps: acquire the region of interest input by the user for the image in the infrared video frame; perform image enhancement processing on the image based on the region of interest; and perform image magnification processing on the image based on the region of interest to obtain and output the processed image.
  • After the image processing method acquires the region of interest, it pushes the region of interest to the image enhancement operation and performs image enhancement processing on the image according to the region of interest. In this way, the image processing method can show more detailed information in the region of interest.
  • FIG. 9 is a schematic diagram of another image processing method provided by an embodiment of the application.
  • Before performing image enhancement and magnification processing on the image, the region of interest input by the user for the image in the infrared video frame can be obtained, and the region of interest can be pushed to both the image enhancement operation and the magnification operation; thereby, image enhancement processing and magnification processing can be performed on the image according to the region of interest.
  • the image processing method may include the following steps:
  • The image processing method can obtain the region of interest before performing image enhancement processing on the image, and perform image enhancement and enlargement processing based on the region of interest. Compared with performing enhancement and enlargement based on the global image, the details of the region of interest can be processed more accurately, and richer details of the region can be displayed.
  • performing image enhancement processing on an image according to a region of interest may include one or more of the following implementation manners.
  • the image enhancement processing includes histogram statistics and stretching processing.
  • Performing image enhancement processing on the image in the infrared video frame includes: performing histogram statistics on the pixels of the region of interest in the image to obtain a statistical result; and stretching the statistical result to obtain the processed image.
  • Stretching the statistical result can be understood as stretching the pixels covered by the statistical result. It can be seen that performing histogram statistics and stretching on the image based on the region of interest can make the gray-value distribution of the image more uniform, thereby displaying more information about the region of interest.
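The ROI-based statistics-and-stretch step can be sketched as follows. The patent does not fix the exact stretch, so this sketch uses the simplest variant: take the min/max gray values observed inside the region of interest and linearly map that range onto [0, 255]; the function name and the flat-region fallback are illustrative assumptions.

```python
def stretch_roi(values, lo_out=0, hi_out=255):
    """Linearly stretch the observed gray range of `values` onto [lo_out, hi_out]."""
    lo, hi = min(values), max(values)     # the "histogram statistics" of the ROI
    if hi == lo:                          # flat region: nothing to stretch
        return [lo_out] * len(values)
    scale = (hi_out - lo_out) / (hi - lo)
    return [round(lo_out + (v - lo) * scale) for v in values]

# A low-contrast ROI occupying only gray levels 100..140 gets spread over 0..255.
roi_pixels = [100, 110, 120, 130, 140]
stretched = stretch_roi(roi_pixels)
```

Because the statistics come only from ROI pixels, a very hot or cold object outside the region cannot compress the ROI's output range, matching the motivation given above.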
  • Alternatively, performing image enhancement processing on the image in the infrared video frame includes: performing histogram statistics separately on the pixels inside the region of interest and the pixels outside it, to obtain a first statistical result and a second statistical result; stretching the first statistical result and the second statistical result separately; and fusing the separately stretched images to obtain the stretched image. By stretching the pixels inside and outside the region of interest separately, an image with a more uniform gray-value distribution can be obtained, thereby enriching the detailed information of the region of interest.
  • performing stretching processing on the above-mentioned first statistical result includes: performing stretching processing on the first statistical result by using a first stretching interval.
  • Stretching the above-mentioned second statistical result includes: using a second stretch interval to stretch the second statistical result.
  • the first stretching interval and the second stretching interval may be different.
  • the situation where the first stretching interval is different from the second stretching interval includes but is not limited to the following ways: for example, the maximum value in the first stretching interval is greater than or equal to the maximum value in the second stretching interval, and The minimum value in the first stretching interval is less than or equal to the minimum value in the second stretching interval.
  • the difference between the maximum value and the minimum value in the first stretching interval is greater than or equal to the difference between the maximum value and the minimum value in the second stretching interval.
  • Or, the interval range of the first stretching interval and that of the second stretching interval overlap, but the overlap range differs from the full ranges of both intervals, and so on.
  • For example, if the range of the first stretching interval is [2, 23] and the range of the second stretching interval is [21, 45], their overlapping range is [21, 23], so the first stretching interval is determined to be different from the second stretching interval.
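The overlap condition in the worked example can be checked with a two-line helper; representing stretching intervals as inclusive `(lo, hi)` pairs is an assumption for illustration.

```python
def interval_overlap(a, b):
    """Overlapping range of two inclusive intervals, or None if disjoint."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# The example above: [2, 23] and [21, 45] overlap in [21, 23].
overlap = interval_overlap((2, 23), (21, 45))
```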
  • the image enhancement processing includes medium and high frequency detail enhancement processing.
  • Performing image enhancement processing on the image in the infrared video frame to obtain the processed image further includes: performing medium- and high-frequency detail enhancement on the pixels of the region of interest in the image, to obtain the detail-enhanced image.
  • the medium and high frequency detail enhancement refers to a method of enhancing the detail components in an image. It can be seen that the medium and high frequency detail enhancement processing of the image based on the region of interest can eliminate the shot noise in the image, and then extract the details in the image, and better display the detailed information of the region of interest in the image.
  • Alternatively, performing image enhancement processing on the image in the infrared video frame to obtain the processed image further includes: performing medium- and high-frequency detail enhancement separately on the pixels inside the region of interest and the pixels outside it, to obtain the detail-enhanced image.
  • For example, the degree of detail enhancement can be increased in the region of interest and correspondingly reduced in the areas outside it.
  • The higher the degree of detail enhancement, the more obvious the detail information. It can be seen that performing medium- and high-frequency detail enhancement separately on the pixels inside and outside the region of interest can improve the processing accuracy of the region of interest, thereby showing better detail information of the region.
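A common way to realize mid/high-frequency detail enhancement, used here purely as an illustrative stand-in since the patent names no specific algorithm, is unsharp-mask style boosting: subtract a smoothed (low-frequency) copy of the signal to isolate the detail component, then add that detail back with a gain. A larger gain for ROI pixels than for the rest mirrors the differing enhancement degrees described above; the 1-D rows, 3-tap box smoothing, and gain values are all assumptions.

```python
def enhance_details(row, gain):
    """Boost mid/high-frequency components of a 1-D pixel row by `gain`."""
    n = len(row)
    # Low-frequency estimate: 3-tap box filter with edge clamping.
    smoothed = [
        (row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3
        for i in range(n)
    ]
    # Detail component = original - smoothed; add it back amplified.
    return [row[i] + gain * (row[i] - smoothed[i]) for i in range(n)]

row = [10, 10, 40, 10, 10]                       # a single bright detail
roi_enhanced = enhance_details(row, gain=2.0)    # stronger boost inside the ROI
bg_enhanced = enhance_details(row, gain=0.5)     # weaker boost outside it
```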
  • the image enhancement processing further includes pseudo-color mapping processing.
  • According to the region of interest, performing image enhancement processing on the image in the infrared video frame to obtain the processed image also includes: using a target color palette to perform pseudo-color mapping on the pixels in the region of interest in the image, to obtain the pseudo-color mapped image.
  • the pseudo-color mapping refers to a method of converting a grayscale image into a color image.
  • the target palette corresponding to the region of interest can be selected from the preset palette library.
  • the preset color palette library includes one or more color palettes corresponding to one or more objects respectively.
  • the color palettes corresponding to different regions of interest are different.
  • The palettes in the preset palette library can be generated according to the type of the region of interest. For example, if the type of the region of interest is type 1, the palette can be palette 1; if the type is type 2, the palette can be palette 2.
  • the color palette in the preset color palette library can also be generated based on other information of the region of interest, such as selecting a corresponding color palette based on the number of pixels in the region where the region of interest is located, which is not limited in the embodiment of the present application.
  • the grayscale image can be converted into a color image, thereby increasing the resolution of the details of the image, and displaying the information of the region of interest in the image more comprehensively.
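Pseudo-color mapping with a target palette can be sketched as a 256-entry lookup table from gray value to an (R, G, B) triple, through which the ROI pixels are mapped. The warm-gradient palette below is a made-up example, not one of the palettes from the preset library the patent describes.

```python
def make_example_palette():
    """A 256-entry gray -> (R, G, B) LUT: a simple warm gradient (illustrative only)."""
    return [(min(255, 2 * g), max(0, 2 * g - 255), g // 4) for g in range(256)]

def pseudo_color(gray_pixels, palette):
    """Map each gray value through the palette lookup table."""
    return [palette[g] for g in gray_pixels]

palette = make_example_palette()
colored = pseudo_color([0, 128, 255], palette)   # dark, mid, and hot gray levels
```

In the embodiment above, the lookup table would come from the preset palette library, selected per region of interest, rather than being computed like this.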
  • the step of "obtaining the user's region of interest for the image input in the infrared video frame” can also be performed at any stage of image enhancement processing on the image. For example, it can be performed before performing histogram statistics and stretching processing on the image, or before performing pseudo-color mapping processing on the image, and so on.
  • the above-mentioned image enhancement processing process may include any one or more of histogram statistics and stretching, medium and high frequency detail enhancement, or pseudo-color mapping, and the processing sequence is not limited.
  • For example, the above-mentioned image enhancement process includes histogram statistics and stretching, medium- and high-frequency detail enhancement, and pseudo-color mapping, executed in that order.
  • While the infrared sensor collects the video stream, the collected video stream can be sent to the control device in real time, and the control device can display the infrared video stream through the display.
  • the user can update the selected area of interest through the control device in real time.
  • The image processing device (such as a drone or control device) will then perform the image processing procedure described above based on the updated region of interest.
  • FIG. 10 is a schematic structural diagram of an image processing device according to an embodiment of the present application.
  • the image processing device of the embodiment of the present application includes a memory 1001 and a processor 1002.
  • the memory 1001 may include a volatile memory (volatile memory), such as random-access memory (RAM); the memory 1001 may also include a non-volatile memory (non-volatile memory), such as flash memory (flash memory), solid-state drive (solid-state drive, SSD), etc.; the memory 1001 may also include a combination of the foregoing types of memories.
  • The processor 1002 may be a central processing unit (CPU); the processor 1002 may also be a graphics processing unit (GPU).
  • the processor 1002 may further include a hardware chip.
  • the above-mentioned hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), etc.
  • the above-mentioned PLD may be a field-programmable gate array (FPGA), a general array logic (generic array logic, GAL), and the like.
  • the image processing device may be a movable platform or a movable device, such as a drone.
  • the image processing device can be configured with a sensor (such as an infrared sensor) and a processor.
  • The infrared sensor can obtain an infrared video frame and transmit it to the processor; the processor can then perform the related operations of the image processing method described in this application on the image in the infrared video frame.
  • the image processing device may also be equipped with a communication interface, which is used to send the processed image to the terminal, and the terminal will output, for example, display the processed image.
  • the image processing device may be a terminal device, and the terminal device is configured with a processor and a memory as shown in FIG. 10, in addition, a display and a communication interface may also be configured.
  • the communication interface is used to receive infrared video frames sent by mobile platforms such as drones, and the processor is used to perform the relevant operations of the image processing method described in this application for the infrared video frames received by the communication interface, and output the processed images To the monitor, the processed image is displayed on the monitor.
  • The image processing device of the embodiments of the present application can implement, through the processor 1002, the methods of the embodiments shown in FIG. 3 or FIG. 7 to FIG. 8.
  • For ease of description, only the parts related to the embodiments of the present application are described here; for specific implementation, refer to the embodiments shown in FIG. 3 or FIG. 7 to FIG. 8.
  • the memory 1001 stores program codes
  • the processor 1002 calls the program codes in the memory.
  • The processor 1002 is configured to: obtain the region of interest input by the user for the image in the infrared video frame; enlarge the image according to the region of interest to obtain a processed image; and output the processed image.
  • the processor 1002 is configured to: perform magnification processing on the pixel points of the region of interest in the image to obtain a processed image.
  • the processor 1002 is configured to: respectively perform magnification processing on the pixel points of the region of interest in the image and the pixel points outside the region of interest in the image to obtain a processed image.
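The magnification of the region of interest described in the preceding embodiments can be illustrated with a minimal sketch of the simplest case, where the ROI is enlarged to fill the whole display area (the Fig. 1a to Fig. 1b example). This is not the implementation of this application: the function name `zoom_roi`, the nested-list gray image representation, and nearest-neighbour interpolation are all assumptions chosen for brevity.

```python
def zoom_roi(image, roi, out_h, out_w):
    """Nearest-neighbour digital zoom: enlarge the region of interest
    so that it fills the whole output frame.

    image: 2-D list of gray values; roi: (top, left, height, width)."""
    top, left, h, w = roi
    out = []
    for y in range(out_h):
        src_y = top + y * h // out_h          # map output row back into the ROI
        row = []
        for x in range(out_w):
            src_x = left + x * w // out_w     # map output column back into the ROI
            row.append(image[src_y][src_x])
        out.append(row)
    return out
```

Applied to a 4x4 frame with the 2x2 central ROI at (1, 1), each ROI pixel is replicated into a 2x2 block of the output, so only the ROI survives in the processed image.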
  • the processor 1002 is configured to: before performing magnification processing on the image in the infrared video frame, perform the aforementioned acquisition of the region of interest input by the user for the image in the infrared video frame;
  • and, before the image is magnified according to the region of interest to obtain the processed image, perform image enhancement processing on the image in the infrared video frame.
  • the processor 1002 is configured to: before performing magnification processing on the image in the infrared video frame, perform the aforementioned acquisition of the region of interest input by the user for the image in the infrared video frame;
  • and, before the image is magnified according to the region of interest to obtain the processed image, subject the image in the infrared video frame to image enhancement processing according to the region of interest.
  • the processor 1002 is configured to: perform magnification processing on the image according to the region of interest, and perform image enhancement processing on the image in the infrared video frame according to the region of interest before obtaining the processed image;
  • and, before performing the image enhancement processing according to the region of interest, perform the step of obtaining the region of interest input by the user for the image in the infrared video frame.
  • the processor 1002 is configured to: perform histogram statistics on the pixels of the region of interest in the image to obtain a statistical result; and perform stretching processing on the statistical result to obtain a processed image.
  • the processor 1002 is configured to: perform histogram statistics on pixels in the region of interest in the image and pixels outside the region of interest in the image, respectively, to obtain a first statistical result and a second statistical result;
  • the first statistical result and the second statistical result are stretched respectively, and the separately stretched images are fused to obtain a processed image.
  • the processor 1002 is configured to: use a first stretching interval to perform stretching processing on the first statistical result, and use a second stretching interval to perform stretching processing on the second statistical result, the first stretching interval The stretching interval is different from the second stretching interval.
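The dual-interval stretch described above can be sketched as follows. This is a deliberately simplified illustration, not the application's implementation: the per-region histogram statistics are reduced to min/max values, and the function names and the boolean ROI mask are invented for the example.

```python
def _stretch(p, lo, hi, a, b):
    # Linearly stretch gray value p from the source range [lo, hi]
    # onto the stretching interval [a, b].
    if hi == lo:
        return a
    return a + (p - lo) * (b - a) // (hi - lo)

def stretch_by_roi(image, mask, roi_interval, bg_interval):
    """Gather gray-value statistics (here just min/max) separately for
    ROI and non-ROI pixels -- the 'first' and 'second' statistical
    results -- stretch each group into its own interval, then fuse the
    two stretched results back into a single image."""
    roi = [p for row, m in zip(image, mask) for p, f in zip(row, m) if f]
    bg = [p for row, m in zip(image, mask) for p, f in zip(row, m) if not f]
    stats = {True: (min(roi), max(roi)), False: (min(bg), max(bg))}
    bounds = {True: roi_interval, False: bg_interval}
    return [[_stretch(p, *stats[f], *bounds[f]) for p, f in zip(row, m)]
            for row, m in zip(image, mask)]
```

Because the two stretching intervals differ, the ROI can occupy a wider output range than the background, which is what makes its detail stand out after fusion.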
  • the processor 1002 is configured to: perform medium and high frequency detail enhancement processing on the pixels of the region of interest in the image to obtain a processed image.
  • the processor 1002 is configured to: perform medium and high frequency detail enhancement processing on pixels in the region of interest in the image and pixels outside the region of interest in the image, respectively, to obtain a processed image.
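One common way to realize medium/high-frequency detail enhancement is unsharp masking: subtract a low-pass (blurred) base layer from the image and amplify the residual detail layer. The sketch below is an assumed realization for illustration only, not the method of this application; the 3x3 box blur and the `gain` parameter are arbitrary choices.

```python
def enhance_details(image, gain):
    """Unsharp-mask style detail enhancement on a 2-D list of gray
    values: the low-frequency base is a 3x3 box blur with edge
    clamping; the mid/high-frequency detail (original minus base) is
    amplified by `gain` and added back."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at the border
                    xx = min(max(x + dx, 0), w - 1)
                    acc += image[yy][xx]
            base = acc / 9.0
            detail = image[y][x] - base
            row.append(int(round(image[y][x] + gain * detail)))
        out.append(row)
    return out
```

Applying a higher gain inside the ROI and a lower gain outside would give the per-region behavior described above: uniform areas are unchanged while edges inside the ROI are amplified.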
  • the processor 1002 is configured to: use a target color palette to perform pseudo-color mapping processing on pixels in the region of interest in the image to obtain a processed image.
  • the processor 1002 is configured to: select a target palette corresponding to the region of interest from a preset palette library, where the preset palette library includes one or more palettes corresponding to one or more objects.
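The palette-library lookup and ROI-restricted pseudo-color mapping might look like the following sketch. The palette names, their entries, and the `PALETTE_LIBRARY` structure are entirely hypothetical; a real preset palette library would map object types to calibrated colormaps.

```python
# Hypothetical palette library: each named palette maps a gray level
# (0-255) to an (R, G, B) triple. The entries are illustrative only.
PALETTE_LIBRARY = {
    "iron": [(g, g // 2, 255 - g) for g in range(256)],
    "gray": [(g, g, g) for g in range(256)],
}

def pseudo_color(image, mask, palette_name):
    """Look up the target palette in the preset library and apply
    pseudo-color mapping only to pixels inside the region of interest;
    pixels outside the ROI stay as plain gray triples."""
    palette = PALETTE_LIBRARY[palette_name]
    return [
        [palette[p] if f else (p, p, p) for p, f in zip(row, m)]
        for row, m in zip(image, mask)
    ]
```

Mapping only the ROI turns the gray infrared image into a color image there, increasing the distinguishability of detail in the region of interest.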
  • the processor 1002 is configured to: obtain the area range input by the user, through the touch screen, for the image in the infrared video frame;
  • the area range in the image is taken as the region of interest, or the area outside the area range in the image is taken as the region of interest.
  • the processor 1002 is configured to: obtain the user's click operation, input through the touch screen, on the image in the infrared video frame; and determine a circular region centered on the coordinates corresponding to the click operation, with a preset value as the radius, as the area range input by the user for the image.
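The click-to-circle rule above (a circular area centered on the clicked coordinate, with a preset value as the radius, as in Fig. 4) can be sketched as below. The function name and the set-of-coordinates return type are assumptions for illustration.

```python
def circular_region(click_x, click_y, radius, width, height):
    """Return the set of pixel coordinates inside the circular area
    range centered on the clicked coordinate, clipped to the frame."""
    region = set()
    for y in range(max(0, click_y - radius), min(height, click_y + radius + 1)):
        for x in range(max(0, click_x - radius), min(width, click_x + radius + 1)):
            # keep only pixels within `radius` of the click point
            if (x - click_x) ** 2 + (y - click_y) ** 2 <= radius ** 2:
                region.add((x, y))
    return region
```

For a click at (2, 2) with radius 1 this yields the clicked pixel plus its four axis-aligned neighbours, i.e. the discrete disc of radius 1.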
  • the processor 1002 is configured to: obtain the scale value and the orientation input by the user;
  • for the image in the infrared video frame, the frame occupying the scale value of the image at the given orientation is determined as the region of interest, or the region outside that frame is determined as the region of interest.
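The scale-value-and-orientation rule (as in Fig. 6, e.g. scale value 40% with orientation "right") can be sketched as follows. The orientation names and the (left, top, right, bottom) rectangle convention are assumptions made for this illustration, not part of the application.

```python
def roi_from_ratio(width, height, ratio, orientation, invert=False):
    """Determine the ROI rectangle (left, top, right, bottom) occupying
    `ratio` of the frame at the given orientation; with invert=True the
    complementary rectangle is returned instead (the 'region outside
    the frame' case)."""
    cut = int(width * ratio) if orientation in ("left", "right") else int(height * ratio)
    rects = {
        "left": ((0, 0, cut, height), (cut, 0, width, height)),
        "right": ((width - cut, 0, width, height), (0, 0, width - cut, height)),
        "top": ((0, 0, width, cut), (0, cut, width, height)),
        "bottom": ((0, height - cut, width, height), (0, 0, width, height - cut)),
    }
    inside, outside = rects[orientation]
    return outside if invert else inside
```

With a 100x50 frame, ratio 0.4 and orientation "right", the ROI is the rightmost 40% of the frame; inverting it returns the remaining 60% on the left.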
  • the image processing device provided in this embodiment can execute the image processing method provided in the foregoing embodiment, and its execution mode and beneficial effects are similar, and will not be repeated here.
  • the embodiment of the present application also provides a computer-readable storage medium storing computer program instructions which, when executed by a processor, are used to execute any one or more of the embodiments described in FIG. 3 or FIG. 7 to FIG. 8.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the modules is only a logical function division, and there may be other divisions in actual implementation; for example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or modules, and may be in electrical, mechanical or other forms.
  • modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules, that is, they may be located in one place, or they may be distributed on multiple network modules. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional modules in the various embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, or in the form of hardware plus software functional modules.
  • the above-mentioned integrated modules implemented in the form of software functional modules may be stored in a computer readable storage medium.
  • the above-mentioned software function module is stored in a readable storage medium and includes several instructions to make a computer device (a personal computer, server, network device, etc.) or a processor execute part of the steps of the method described in each embodiment of the present application.
  • the aforementioned storage media include: U disk, mobile hard disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disks, optical disks, and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

An image processing method and device. The method obtains a region of interest input by a user for an image in an infrared video frame; magnifies the image according to the region of interest to obtain a processed image; and outputs the processed image. Thus, the embodiments of this application process the image according to the region of interest input by the user for the image, which allows local details in the image to be processed more accurately and helps reveal the target information of the region of interest.

Description

图像处理方法及设备 技术领域
本申请涉及计算机技术领域,尤其涉及一种图像处理方法及设备。
背景技术
随着信息时代的不断发展,图像作为视觉信息的载体,逐渐成为人类获取信息、表达信息和传递信息的重要手段。因此,如何进行图像处理,以获得高质量的图像显得愈发重要。比如,红外传感器输出的图像的直方图往往很集中,还需要经过一系列的图像处理,以提高图像的可辨识度。然而,图像经过全局放大处理后,由于图像中的目标信息会受到其他物体信息的影响,导致目标信息并不能更好的被展示出来。
发明内容
本申请实施例提供一种图像处理方法及设备,能够更精确的处理图像中的局部细节,有利于展现出感兴趣区域的目标信息。
第一方面,本申请实施例提供了一种图像处理方法,所述方法包括:
获取用户针对红外视频帧中的图像输入的感兴趣区域;
根据感兴趣区域,对图像进行放大处理,获得处理后的图像;
输出处理后的图像。
第二方面,本申请实施例提供了一种图像处理设备,所述图像处理设备包括:处理器和存储器;
所述存储器中存储有程序指令;
所述处理器调用所述程序指令,用于
获取用户针对红外视频帧中的图像输入的感兴趣区域;
根据感兴趣区域,对图像进行放大处理,获得处理后的图像;
输出处理后的图像。
第三方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序指令,所述计算机程序指令被处理器执行时,用于执行上述第一方面的图像处理方法。
本申请实施例可获取用户针对红外视频帧中的图像输入的感兴趣区域,并基于感兴趣区域对图像进行放大处理,相比于基于全局对图像进行放大处理而言,可消除感兴趣区域之外的物体对感兴趣区域的影响,实现对感兴趣区域更精确的处理,从而展现出感兴趣区域内更多的目标信息。
附图说明
为了更清楚地说明本申请实施例技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍。
图1a是本申请实施例提供的一种图像的示意图;
图1b是本申请实施例提供的另一种图像的示意图;
图2是本申请实施例提供的一种图像处理系统的示意图;
图3是本申请实施例提供的一种图像处理方法的流程示意图;
图4是本申请实施例提供的一种区域范围获取方法的示意图;
图5是本申请实施例提供的另一种区域范围获取方法的示意图;
图6是本申请实施例提供的一种感兴趣区域获取方法的示意图;
图7是本申请实施例提供的另一种图像处理方法的流程示意图;
图8是本申请实施例提供的又一种图像处理方法的流程示意图;
图9是本申请实施例提供的又一种图像处理方法的流程示意图;
图10是本申请实施例提供的一种图像处理设备的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。
为了更好的展示红外视频帧中的图像的目标信息,本申请提出了一种图像处理方法。该图像处理方法可获取用户针对红外视频帧中的图像输入的感兴趣区域;根据感兴趣区域,对图像进行放大处理,获得处理后的图像;输出处理后的图像。可见,基于感兴趣区域对图像进行放大处理,能够更精确的处理图像中感兴趣区域,从而展示出感兴趣区域的目标信息。
请参阅图1a和图1b,图1a是本申请实施例提供的一种图像的示意图,图1b是本申请 实施例提供的另一种图像的示意图。用户可以针对图1a所示的图像输入感兴趣区域101,进而得到图1a中的感兴趣区域。相应的,根据感兴趣区域101对图像进行放大处理,获得放大处理之后的图1b。如图1b所示,可对该感兴趣区域101进行放大处理,如放大到整个显示区域。可见,对图1a所示图像中的感兴趣区域101进行放大处理,得到感兴趣区域放大到整个显示区域的图1b,使感兴趣区域得到放大显示输出。
本申请实施例提及的图像处理方法可应用于可移动平台和终端设备等中。其中,可移动平台可为可移动设备、无人机、无人车等。终端设备可以包括手机、笔记本、平板电脑等。
本申请实施例所述的图像处理方法可应用各种图像处理系统中。例如,图像处理系统可包括控制设备和相机,可选的,该相机可位于可移动平台上,该可移动平台可以是无人机、无人车、无人船等等,下述以无人机为例进行描述。该相机可以是红外相机,用于采集红外图像。红外相机还可以称为红外热像仪、红外传感器等等。相机将获取的红外图像传输给控制设备,例如,相机可以通过无人机将采集的红外图像传输给控制设备,或者,相机上安装通信模块,直接将采集的红外图像传输给控制设备。这种情况下,控制设备进行本申请所述的图像处理方法的相关操作,其中,控制设备获取用户针对红外视频帧中的图像输入的感兴趣区域,进而根据用户选择的感兴趣区域对红外传感器采集的图像进行处理,可获得并输出处理后的图像。该控制设备例如可以是遥控器、地面站、终端或者服务器等等。
另一种可能的实现方式是,相机将获取的红外图像传输给无人机,无人机的处理器进行本申请所述的图像处理方法的相关操作。其中,无人机可以通过控制设备获取用户针对红外视频帧中的图像输入的感兴趣区域,无人机根据用户选择的感兴趣区域对红外传感器采集的图像进行处理,可获得并输出处理后的图像。下面结合图2描述这种情况的图像处理过程。
请参阅图2,图2为本申请实施例提供的一种图像处理系统的示意图。本申请实施例所述的图像处理方法可应用于图2所示的图像处理系统中但不限于该图像处理系统。如图2所示,该图像处理系统包括无人机201、控制设备202,其中,无人机201配置有红外传感器,红外传感器用于拍摄红外视频帧。无人机可对红外视频帧中的图像进行本申请实施例提及到的图像处理方法,获得图像增强处理后的图像。进而,无人机可将图像增强处理 后的图像发送给控制设备进行显示。相应的,控制设备可显示该图像增强处理后的图像。其中,无人机可以通过控制设备获取用户针对红外视频帧中的图像输入的感兴趣区域,无人机根据用户选择的感兴趣区域对红外传感器采集的图像进行处理,可获得并输出处理后的图像。例如,该控制设备可为无人机遥控器或者终端,遥控器或终端上的显示屏可以显示该图像增强处理后的图像。
以下结合附图对本申请实施例提供的图像处理方法进行阐述。
请参阅图3,图3为本申请实施例提供的一种图像处理方法的流程示意图。图3所示的图像处理方法可以包括以下步骤:
S301,获取用户针对红外视频帧中的图像输入的感兴趣区域。
一种实施方式中,红外视频帧可以为红外传感器输出的红外视频帧。
一种实施方式中,无人机作为执行主体,执行图3所示的图像处理方法时,步骤301可包括:无人机配置的相机将获取的红外视频帧发送给控制设备,或通过无人机发送给控制设备;控制设备接收并显示该红外视频帧,获取用户针对该红外视频帧中的图像输入的感兴趣区域;控制设备将获取的感兴趣区域发送给无人机,无人机通过接收来自控制设备的感兴趣区域的方式,获取用户针对红外视频帧中的图像输入的感兴趣区域。
其中,控制设备可通过自身配置的触摸屏获取用户输入的感兴趣区域(region of interest,ROI)。
另一种实施方式中,控制设备作为执行主体,执行图3所示的图像处理方法时,与上一实施方式的不同之处在于,步骤301中,控制设备不需将感兴趣区域发送给无人机。
一种实施方式中,步骤S301可包括:获取用户通过触摸屏针对红外视频帧中的图像输入的区域范围;将图像中区域范围作为感兴趣区域。
其中,触摸屏也可为触控屏、触控面板,用户除了可以通过触摸屏确定感兴趣区域以外,还可以通过物理按键、遥控器的摇杆、按钮等确定感兴趣区域。
另一种实施方式中,步骤S301可包括:获取用户通过触摸屏针对红外视频帧中的图像输入的区域范围;将图像中区域范围之外的区域作为感兴趣区域。
其中,获取用户通过触摸屏针对红外视频帧中的图像输入的区域范围,可包括但不限于:获取用户通过触摸屏对红外视频帧中的图像输入的点击操作;确定以点击操作对应的坐标为中心,以预设值为半径的圆面区域,作为用户针对图像输入的区域范围。其中,所述预设值可为系统预设的。
举例来说,参阅图4,图4为本申请实施例提供的一种区域范围获取方法的示意图。如图4所示,用户通过触摸屏对图像中圆点401输入点击操作,那么,确定的区域范围可为以该圆点401对应的坐标为中心,以预设值为半径,得到的圆形区域。
可选的,用户通过触摸屏针对红外视频帧中的图像输入的区域范围可为多边形区域范围。其中,多边形区域范围可为长方形、三角形、八边形等。
举例来说,参阅图5,图5为本申请实施例提供的另一种区域范围获取方法的示意图。如图5所示,用户通过触摸屏在红外视频帧中的图像输入三角形区域范围,那么该三角形区域范围可作为用户针对图像输入的区域范围。
又一种实施方式中,所述感兴趣区域可以为图像中目标对象所在的区域,故步骤S301可包括:检测红外视频帧中的图像中的目标对象;确定所述目标对象所在的区域作为感兴趣区域。
又一种实施方式中,步骤S301可包括:获取用户输入的比例值和方位;针对红外视频帧中的图像,确定图像中的比例值的画幅作为感兴趣区域,或者确定红外视频帧中的图像中方位的比例值的画幅之外的区域,作为感兴趣区域。
举例来说,参阅图6,图6为本申请实施例提供的一种感兴趣区域获取方法的示意图。如图6所示,假设用户输入的比例值为40%,方位为右部,则可确定红外视频帧中的图像的右部的40%画幅为用户输入的感兴趣区域,或者,可确定红外视频帧中的图像除右部的40%画幅以外的其他区域为用户输入的感兴趣区域。
应当理解的是,用户通过触摸屏输入感兴趣区域的比例值和方位的方法不限于图6所述的方法。例如,用户可点击触摸屏显示的多种标签,标签用于指示图像中感兴趣区域的比例值和方位,进而确定出图像中感兴趣区域。
S302,根据感兴趣区域,对图像进行放大处理,获得处理后的图像。
一种实施方式中,步骤S302可包括:对图像中感兴趣区域的像素点进行放大处理,获得处理后的图像。可见,针对感兴趣区域的像素点进行放大处理,相比于对图像进行全局的放大处理而言,可减少感兴趣区域以外的物体产生的影响,从而展现出感兴趣区域更详细的细节信息。本申请实施例中,放大处理可以为数字放大。
另一种实施方式中,步骤S302可包括:对图像中感兴趣区域的像素点,以及图像中感兴趣区域之外的像素点分别进行放大处理,获得处理后的图像。可使感兴趣区域相比于感兴趣区域之外的区域,其细节信息更加丰富。
可选的,可以根据第一放大比例值对图像中的感兴趣区域的像素点进行放大处理,以及根据第二放大比例值对图像中的感兴趣区域之外的区域的像素点进行放大处理,获得处理后的图像。其中,第一放大比例值与第二放大比例值可不相同。应当理解的是,本申请实施例也可基于其他方式对感兴趣区域的像素点与感兴趣区域之外的区域的像素点分别进行放大处理。
S303,输出处理后的图像。
可见,获取用户针对红外视频帧中的图像输入的感兴趣区域,并基于感兴趣区域对图像进行放大处理,相比于基于全局对图像进行放大处理而言,可更精确的处理图像中感兴趣区域的细节,进而展现出感兴趣区域的目标信息。另外,可由用户输入感兴趣区域,有利于改善用户体验。
本申请实施例中,图像处理方法还可以包括图像增强处理,该图像增强处理的操作可包括直方图统计和拉伸、中高频细节增强、伪彩映射中的一种或多种处理。其中,感兴趣区域的获取,可在根据感兴趣区域对图像进行放大处理之前执行,也可以在上述所述的图像增强处理之前执行。另外,当感兴趣区域的获取在对图像进行放大处理之前执行时,还可将该感兴趣区域推送给上述所述的图像增强处理的操作中,从而使得各图像增强处理的操作基于该感兴趣区域进行。以下结合附图7至图9对可选的实施方式进行阐述。
请参阅图7,图7为本申请实施例提供的一种图像处理方法的示意图。如图7所示,根据感兴趣区域对图像进行放大处理之前,可对红外视频帧中的图像进行图像增强处理。相应的,对红外视频帧中的图像进行图像增强处理之后,对图像进行放大处理之前,获取用户针对红外视频帧中的图像输入的感兴趣区域,从而基于感兴趣区域对图像进行放大处理。其中,图7中,可以获取用户针对图像增强处理后的图像输入的感兴趣区域。
可见,图7所示的图像处理方法与图3所示的图像处理方法的不同之处在于,获取用户针对红外视频帧中的图像输入的感兴趣区域之前,还可对红外视频帧中的图像进行图像增强处理。具体的,该图像处理方法可以包括以下步骤:
对红外视频帧中的图像进行图像增强处理;
获取用户针对图像增强处理后的图像输入的感兴趣区域;
根据感兴趣区域,对图像进行放大处理,获得并输出处理后的图像。
上述步骤的相关内容可参阅步骤S301~S303的阐述,在此不作赘述。
可见,该图像处理方法在获取图像中感兴趣区域之前,可对图像进行图像增强处理,获得对比度更好的图像。在对比度更好的图像获取感兴趣区域,并基于感兴趣区域进行放大处理,有利于更好的展现感兴趣区域的细节信息。
请参阅图8,图8为本申请实施例提供的又一种图像处理方法的示意图。图8所示的图像处理方法与图7所示的图像处理方法的不同之处在于,获取感兴趣区域后,可将感兴趣区域推送给图像增强处理操作。具体的,该图像处理方法可以包括以下步骤:
对红外视频帧中的图像进行图像增强处理;
获取用户针对图像增强处理后的图像输入的感兴趣区域;
根据感兴趣区域,对图像进行放大处理,获得并输出处理后的图像;
并且,针对后续的红外视频流中的其他图像,基于感兴趣区域对图像进行图像增强处理;以及基于感兴趣区域对图像进行图像放大处理,获得并输出处理后的图像。
可见,该图像处理方法获取感兴趣区域后,将感兴趣区域推送给图像增强处理操作,并根据感兴趣区域对图像进行图像增强处理,相比于基于全局对图像进行图像增强处理而言,可展示出感兴趣区域内更多的细节信息。
请参阅图9,图9为本申请实施例提供的又一种图像处理方法的示意图。如图9所示,在对图像进行图像增强处理和放大处理之前,可获取用户针对红外视频帧中的图像输入的感兴趣区域,并将该感兴趣区域分别推送给图像增强处理操作,以及放大处理操作,从而,可根据感兴趣区域对图像进行图像增强处理和放大处理。具体的,该图像处理方法可以包括以下步骤:
获取用户针对红外视频帧中的图像输入的感兴趣区域;
基于感兴趣区域对图像进行图像增强处理,以及基于感兴趣区域对图像进行放大处理,获得并输出处理后的图像。
可见,该图像处理方法可在对图像进行图像增强处理前获取感兴趣区域,并基于感兴趣区域对图像进行图像增强处理和放大处理,相比于基于全局对图像进行图像增强处理和放大处理而言,能够更精确的处理感兴趣区域的细节,从而展示出感兴趣区域更加丰富的细节。
其中,在图8、图9所述的图像处理方法中,根据感兴趣区域对图像进行图像增强处理可以包括以下一种或多种实施方式。
一种实施方式中,图像增强处理包括直方图统计和拉伸处理。根据所述感兴趣区域, 对红外视频帧中的图像进行图像增强处理,包括:对所述图像中所述感兴趣区域的像素点进行直方图统计,获得统计结果;对所述统计结果进行拉伸处理,获得处理后的图像。其中,对统计结果进行拉伸处理可称为对统计结果的像素点进行拉伸处理。可见,基于感兴趣区域对图像进行直方图统计和拉伸处理,可使图像的灰度值分布更加均匀,进而展示图像中感兴趣区域的更多信息。
另一种实施方式中,根据所述感兴趣区域,对红外视频帧中的图像进行图像增强处理,包括:对图像中感兴趣区域的像素点,以及图像中感兴趣区域外的像素点分别进行直方图统计,获得第一统计结果和第二统计结果;对第一统计结果和第二统计结果分别进行拉伸处理,并将分别拉伸处理后的图像进行图像融合,获得拉伸处理后的图像。可见,对感兴趣区域的像素点和感兴趣区域之外的像素点分别进行直方图统计和拉伸处理,可得到灰度值分布更加均匀的图像,从而使感兴趣区域的细节信息更加丰富。
其中,对上述第一统计结果进行拉伸处理包括:采用第一拉伸区间对第一统计结果进行拉伸处理。对上述第二统计结果进行拉伸处理包括:采用第二拉伸区间对第二统计结果进行拉伸处理。
其中,第一拉伸区间与第二拉伸区间可不相同。其中,第一拉伸区间与第二拉伸区间不相同的情况包括但不限于如下方式:比如,第一拉伸区间中的最大值,大于或等于第二拉伸区间中的最大值,且第一拉伸区间中的最小值小于或等于第二拉伸区间中的最小值。再比如,第一拉伸区间中的最大值与最小值之间的差值,大于或等于第二拉伸区间中的最大值与最小值之间的差值。又比如,第一拉伸区间所包括的区间范围与第二拉伸区间所包括的区间范围不存在公共部分。又比如,第一拉伸区间所包括的区间范围与第二拉伸区间所包括的区间范围存在重叠范围,但是该重叠范围与第一拉伸区间和第二拉伸区间的区间范围都不相同等。
举例来说,第一拉伸区间的区间范围是[2,23],第二拉伸区间的区间范围是[21,45],那么第一拉伸区间与第二拉伸区间的重叠区间范围为[21,23],因此确定第一拉伸区间与第二拉伸区间不相同。
上述描述的执行“基于第一拉伸区间对第一统计结果进行拉伸处理”和“基于第二拉伸区间对第二拉伸结果进行拉伸处理”这两个拉伸处理过程,并不区分执行的先后顺序。在一种实现方式中,可同时进行“基于第一拉伸区间对第一统计结果进行拉伸处理”和“基于第二拉伸区间对第二拉伸结果进行拉伸处理”这两个过程。
一种实施方式中,图像增强处理包括中高频细节增强处理。根据所述感兴趣区域,对红外视频帧中的图像进行图像增强处理,获得处理后的图像,还包括:对图像中所述感兴趣区域的像素点进行中高频细节增强处理,获得中高频细节增强处理后的图像。所述中高频细节增强是指对图像中的细节分量进行增强的方法。可见,基于感兴趣区域对图像进行中高频细节增强处理,可消除图像中的散粒噪音,进而提取出图像中的细节,更好地展现图像中感兴趣区域的细节信息。
另一种实施方式中,根据所述感兴趣区域,对红外视频帧中的图像进行图像增强处理,获得处理后的图像,还包括:对图像中感兴趣区域的像素点,以及图像中感兴趣区域外的像素点分别进行中高频细节增强处理,获得中高频细节增强处理后的图像。例如,在感兴趣区域提高细节增强程度,相应的,在感兴趣区域之外的区域降低细节增强程度,细节增强程度越高,细节信息越明显。可见,针对感兴趣区域的像素点和感兴趣区域之外的像素点,分别进行中高频细节增强处理,可提高感兴趣区域的处理精度,进而展现更好的感兴趣区域的细节信息。
一种实施方式中,图像增强处理还包括伪彩映射处理。根据所述感兴趣区域,对红外视频帧中的图像进行图像增强处理,获得处理后的图像,还包括:采用目标调色盘对图像中感兴趣区域的像素点进行伪彩映射处理,获得伪彩映射处理后的图像。所述伪彩映射是指将灰度图像转换为彩色图像的方法。例如,可从预设调色盘库中选择感兴趣区域对应的目标调色盘。其中,预设调色盘库中包括一个或多个对象分别对应的一个或多个调色盘。
可选的,不同的感兴趣区域对应的调色盘是不相同的。例如,预设调色盘库中的调色盘可根据感兴趣区域的类型生成的。例如,若感兴趣区域的类型为类型1,则调色盘可以为调色盘1;若感兴趣区域的类型为类型2,则调色盘可以为调色盘2。预设调色盘库中的调色盘还可根据感兴趣区域的其他信息生成,比如基于感兴趣区域所在区域的像素点数量选择对应的调色盘等,本申请实施例对此不作限定。
可见,基于感兴趣区域对图像进行伪彩映射处理,可将灰度图像转换为彩色图像,进而增加图像的细节分别度,展现更全面地展现图像中感兴趣区域的信息。
本申请实施例中,“获取用户针对红外视频帧中的图像输入的感兴趣区域”的步骤,还可在对图像进行图像增强处理的任一环节处执行。例如,可以在对图像进行直方图统计和拉伸处理前,或者在对图像进行伪彩映射处理前执行等等。
需要说明的是,上述图像增强处理过程可以包括直方图统计和拉伸、中高频细节增强 或伪彩映射中的任意一种或多种,且处理的过程顺序不作限定。例如,上述图像增强处理过程包括直方图统计和拉伸、中高频细节增强以及伪彩映射,且该三个过程的执行顺序依次是:直方图统计和拉伸、中高频细节增强以及伪彩映射。
需要说明的是,在红外传感器采集视频流的过程中,可以实时的将采集的视频流发送给控制设备,控制设备可以通过显示器对红外视频流进行显示。用户可以通过控制设备实时的更新选择的感兴趣区域,当用户选择的感兴趣区域变更时,图像处理设备(例如无人机或控制设备)会更新感兴趣区域,并基于更新后的感兴趣区域执行上述图像处理过程。
请参见图10,图10是本申请实施例提供的一种图像处理设备的结构示意图,本申请实施例的所述图像处理设备包括:存储器1001和处理器1002。所述存储器1001可以包括易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM);存储器1001也可以包括非易失性存储器(non-volatile memory),例如快闪存储器(flash memory),固态硬盘(solid-state drive,SSD)等;存储器1001还可以包括上述种类的存储器的组合。
所述处理器1002可以是中央处理器(central processing unit,CPU),所述处理器1002还可以是图像处理器(graphics processing unit,GPU)。所述处理器1002还可以进一步包括硬件芯片。上述硬件芯片可以是专用集成电路(application-specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)等。上述PLD可以是现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)等。
一种实施方式中,该图像处理设备可为可移动平台或可移动设备,如无人机等。该实施方式中,图像处理设备可配置有传感器(如红外传感器)和处理器,红外传感器可获取红外视频帧,并将该红外视频帧传输给处理器;处理器可对红外视频帧中的图像执行本申请所述的图像处理方法的相关操作。可选的,该图像处理设备还可配置有通信接口,通信接口用于将处理后的图像发送给终端,由终端输出,如显示,该处理后的图像。
另一种实施方式中,图像处理设备可为终端设备,该终端设备如图10所示配置有处理器和存储器,另外,还可配置有显示器和通信接口。通信接口用于接收无人机等可移动平台发送的红外视频帧,处理器用于针对通信接口接收的红外视频帧,执行本申请所述的图像处理方法的相关操作,并将处理后的图像输出给显示器,由显示器显示该处理后的图像。
本申请实施例的所述图像处理设备通过所述处理器1002可以用于实施上述图3或图7~图8所示的本申请各实施例实现的方法,为了便于说明,仅示出了与本申请实施例相关的部分,实现请参照图3或图7~图8所示的本申请各实施例。
一种实施方式中,存储器1001中存储有程序代码,处理器1002调用存储器中的程序代码,当程序代码被执行时,所述处理器1002用于:获取用户针对红外视频帧中的图像输入的感兴趣区域;根据感兴趣区域对图像进行放大处理,获得处理后的图像;输出处理后的图像。
一种实施方式中,处理器1002用于:对图像中感兴趣区域的像素点进行放大处理,获得处理后的图像。
一种实施方式中,处理器1002用于:对图像中感兴趣区域的像素点,以及图像中感兴趣区域之外的像素点分别进行放大处理,获得处理后的图像。
一种实施方式中,处理器1002用于:对红外视频帧中的图像进行放大处理前,执行所述的获取用户针对红外视频帧中的图像输入的感兴趣区域;
根据所述感兴趣区域,对图像进行放大处理,获得处理后的图像之前,对红外视频帧中的图像进行图像增强处理。
一种实施方式中,处理器1002用于:对红外视频帧中的图像进行放大处理前,执行所述的获取用户针对红外视频帧中的图像输入的感兴趣区域;
根据所述感兴趣区域,对所述图像进行放大处理,获得处理后的图像之前,根据感兴趣区域对红外视频帧中的图像进行图像增强处理。
一种实施方式中,处理器1002用于:根据感兴趣区域,对图像进行放大处理,获得处理后的图像之前,根据感兴趣区域对红外视频帧中的图像进行图像增强处理;
根据所述感兴趣区域对红外视频帧中的图像进行图像增强处理之前,执行所述的获取用户针对红外视频帧中的图像输入的感兴趣区域的步骤。
一种实施方式中,处理器1002用于:对图像中感兴趣区域的像素点进行直方图统计,获得统计结果;
对统计结果进行拉伸处理,获得处理后的图像。
一种实施方式中,处理器1002用于:对图像中感兴趣区域的像素点,以及图像中感兴趣区域外的像素点分别进行直方图统计,获得第一统计结果和第二统计结果;
对第一统计结果和第二统计结果分别进行拉伸处理,并将分别拉伸处理后的图像进行 图像融合,获得处理后的图像。
一种实施方式中,处理器1002用于:采用第一拉伸区间对第一统计结果进行拉伸处理,以及采用第二拉伸区间对第二统计结果进行拉伸处理,所述第一拉伸区间与所述第二拉伸区间不同。
一种实施方式中,处理器1002用于:对图像中感兴趣区域的像素点进行中高频细节增强处理,获得处理后的图像。
一种实施方式中,处理器1002用于:对图像中感兴趣区域的像素点,以及图像中感兴趣区域外的像素点分别进行中高频细节增强处理,获得处理后的图像。
一种实施方式中,处理器1002用于:采用目标调色盘对图像中感兴趣区域的像素点进行伪彩映射处理,获得处理后的图像。
一种实施方式中,处理器1002用于:从预设调色盘库中选择感兴趣区域对应的目标调色盘,所述预设调色盘库中包括一个或多个对象对应的一个或多个调色盘。
一种实施方式中,处理器1002用于:获取用户通过触摸屏针对红外视频帧中的图像输入的区域范围;
将图像中区域范围作为感兴趣区域,或将图像中区域范围之外的区域作为感兴趣区域。
一种实施方式中,处理器1002用于:获取用户通过触摸屏对红外视频帧中的图像输入的点击操作;
确定以点击操作对应的坐标为中心,以预设值为半径的圆面区域,作为用户针对图像输入的区域范围。
一种实施方式中,处理器1002用于:获取用户输入的比例值和方位;
针对红外视频帧中的图像,确定图像中的比例值的画幅作为感兴趣区域,或者确定红外视频帧中的图像中方位的比例值的画幅之外的区域,作为感兴趣区域。
本实施例提供的图像处理设备能够执行前述实施例提供的图像处理方法,其执行方式和有益效果类似,在这里不再赘述。
本申请实施例还提供一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序指令,计算机程序指令被处理器执行时,用于执行图3或图7~图8所述的任一或多个实施例。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块的划分, 仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个模块或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或模块的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理模块,即可以位于一个地方,或者也可以分布到多个网络模块上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用硬件加软件功能模块的形式实现。
上述以软件功能模块的形式实现的集成的模块,可以存储在一个计算机可读取存储介质中。上述软件功能模块存储在一个可读存储介质中,包括若干指令用以使得一台计算机装置(个人计算机,服务器,或者网络装置等)或处理器(processor)执行本申请各个实施例所述方法的部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
本领域技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。上述描述的装置的工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
最后应说明的是:以上各实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述各实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (41)

  1. 一种图像处理方法,其特征在于,包括:
    获取用户针对红外视频帧中的图像输入的感兴趣区域;
    根据所述感兴趣区域,对所述图像进行放大处理,获得处理后的图像;
    输出处理后的图像。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述感兴趣区域,对所述图像进行放大处理,获得处理后的图像,包括:
    对所述图像中所述感兴趣区域的像素点进行放大处理,获得处理后的图像。
  3. 根据权利要求1所述的方法,其特征在于,所述根据所述感兴趣区域,对所述图像进行放大处理,获得处理后的图像,包括:
    对所述图像中所述感兴趣区域的像素点,以及所述图像中所述感兴趣区域之外的像素点分别进行放大处理,获得处理后的图像。
  4. 根据权利要求1至3任一项所述的方法,其特征在于,
    对红外视频帧中的图像进行放大处理前,执行所述的获取用户针对红外视频帧中的图像输入的感兴趣区域;
    所述根据所述感兴趣区域,对所述图像进行放大处理,获得处理后的图像之前,所述方法还包括:
    对红外视频帧中的图像进行图像增强处理。
  5. 根据权利要求1至3任一项所述的方法,其特征在于,
    对红外视频帧中的图像进行放大处理前,执行所述的获取用户针对红外视频帧中的图像输入的感兴趣区域;
    所述根据所述感兴趣区域,对所述图像进行放大处理,获得处理后的图像之前,所述方法还包括:
    根据所述感兴趣区域对红外视频帧中的图像进行图像增强处理。
  6. 根据权利要求1至3任一项所述的方法,其特征在于,
    所述根据所述感兴趣区域,对所述图像进行放大处理,获得处理后的图像之前,所述方法还包括:
    根据所述感兴趣区域对红外视频帧中的图像进行图像增强处理;
    所述根据所述感兴趣区域对红外视频帧中的图像进行图像增强处理之前,执行所述的获取用户针对红外视频帧中的图像输入的感兴趣区域的步骤。
  7. 根据权利要求1至6任一项所述的方法,其特征在于,所述根据所述感兴趣区域,对红外视频帧中的图像进行图像增强处理,包括:
    对所述图像中所述感兴趣区域的像素点进行直方图统计,获得统计结果;
    对所述统计结果进行拉伸处理,获得处理后的图像。
  8. 根据权利要求1至6任一项所述的方法,其特征在于,所述根据所述感兴趣区域,对红外视频帧中的图像进行图像增强处理,包括:
    对所述图像中所述感兴趣区域的像素点,以及所述图像中所述感兴趣区域外的像素点分别进行直方图统计,获得第一统计结果和第二统计结果;
    对所述第一统计结果和所述第二统计结果分别进行拉伸处理,并将分别拉伸处理后的图像进行图像融合,获得处理后的图像。
  9. 根据权利要求8所述的方法,其特征在于,所述对所述第一统计结果和所述第二统计结果分别进行拉伸处理,包括:
    采用第一拉伸区间对所述第一统计结果进行拉伸处理,以及采用第二拉伸区间对所述第二统计结果进行拉伸处理,所述第一拉伸区间与所述第二拉伸区间不同。
  10. 根据权利要求1至9任一项所述的方法,其特征在于,所述根据所述感兴趣区域,对红外视频帧中的图像进行图像增强处理,获得处理后的图像,还包括:
    对所述图像中所述感兴趣区域的像素点进行中高频细节增强处理,获得处理后的图像。
  11. 根据权利要求1至9任一项所述的方法,其特征在于,所述根据所述感兴趣区域,对红外视频帧中的图像进行图像增强处理,获得处理后的图像,还包括:
    对所述图像中所述感兴趣区域的像素点,以及所述图像中所述感兴趣区域外的像素点分别进行中高频细节增强处理,获得处理后的图像。
  12. 根据权利要求1至11任一项所述的方法,其特征在于,所述根据所述感兴趣区域,对红外视频帧中的图像进行图像增强处理,获得处理后的图像,还包括:
    采用目标调色盘对所述图像中所述感兴趣区域的像素点进行伪彩映射处理,获得处理后的图像。
  13. 根据权利要求12所述的方法,其特征在于,所述采用目标调色盘对所述图像中所述感兴趣区域的像素点进行伪彩映射处理之前,还包括:
    从预设调色盘库中选择所述感兴趣区域对应的目标调色盘,所述预设调色盘库中包括一个或多个对象对应的一个或多个调色盘。
  14. 根据权利要求1至13任一项所述的方法,其特征在于,所述获取用户针对红外视频帧中的图像输入的感兴趣区域,包括:
    获取用户通过触摸屏针对红外视频帧中的图像输入的区域范围;
    将所述图像中所述区域范围作为感兴趣区域,或将所述图像中所述区域范围之外的区域作为感兴趣区域。
  15. 根据权利要求1至13任一项所述的方法,其特征在于,所述感兴趣区域为多边形区域或目标物体所在的区域。
  16. 根据权利要求14或15所述的方法,其特征在于,所述获取用户通过触摸屏针对红外视频帧中的图像输入的区域范围,包括:
    获取用户通过触摸屏对红外视频帧中的图像输入的点击操作;
    确定以所述点击操作对应的坐标为中心,以预设值为半径的圆面区域,作为用户针对所述图像输入的区域范围。
  17. 根据权利要求1至13任一项所述的方法,其特征在于,所述获取用户针对红外视频帧中的图像输入的感兴趣区域,包括:
    获取用户输入的比例值和方位;
    针对所述红外视频帧中的图像,确定所述图像中的所述比例值的画幅作为感兴趣区域,或者确定所述红外视频帧中的所述图像中所述方位的所述比例值的画幅之外的区域,作为感兴趣区域。
  18. 根据权利要求1至17任一项所述的方法,其特征在于,所述红外视频帧为红外传感器获得的红外视频帧。
  19. 一种图像处理设备,其特征在于,所述图像处理设备包括:处理器和存储器;
    所述存储器中存储有程序指令;
    所述处理器调用所述程序指令,用于
    获取用户针对红外视频帧中的图像输入的感兴趣区域;
    根据所述感兴趣区域,对所述图像进行放大处理,获得处理后的图像;
    输出处理后的图像。
  20. 根据权利要求19所述的设备,其特征在于,所述图像处理设备还包括传感器;
    所述传感器用于获取红外视频帧。
  21. 根据权利要求19所述的设备,其特征在于,所述图像处理设备还包括显示器;
    所述显示器,用于显示处理后的图像。
  22. 根据权利要求20所述的设备,其特征在于,所述传感器包括红外传感器。
  23. 根据权利要求19至22任一项所述的设备,其特征在于,所述图像处理设备为可移动设备或所述可移动设备的控制设备。
  24. 根据权利要求19所述的设备,其特征在于,所述处理器根据所述感兴趣区域,对所述图像进行放大处理,获得处理后的图像,包括:
    对所述图像中所述感兴趣区域的像素点进行放大处理,获得处理后的图像。
  25. 根据权利要求19所述的设备,其特征在于,所述处理器根据所述感兴趣区域,对所述图像进行放大处理,获得处理后的图像,包括:
    对所述图像中所述感兴趣区域的像素点,以及所述图像中所述感兴趣区域之外的像素点分别进行放大处理,获得处理后的图像。
  26. 根据权利要求19至24任一项所述的设备,其特征在于,所述处理器对红外视频帧中的图像进行放大处理前,执行所述的获取用户针对红外视频帧中的图像输入的感兴趣区域的操作;
    所述处理器根据所述感兴趣区域,对所述图像进行放大处理,获得处理后的图像之前,还用于对红外视频帧中的图像进行图像增强处理。
  27. 根据权利要求26所述的设备,其特征在于,所述处理器对红外视频帧中的图像进行放大处理前,执行所述的获取用户针对红外视频帧中的图像输入的感兴趣区域的操作;
    所述处理器根据所述感兴趣区域,对所述图像进行放大处理,获得处理后的图像前,还用于根据所述感兴趣区域对红外视频帧中的图像进行图像增强处理。
  28. 根据权利要求19至24任一项所述的设备,其特征在于,所述处理器根据所述感兴趣区域,对所述图像进行放大处理,获得处理后的图像之前,还用于根据所述感兴趣区域对红外视频帧中的图像进行图像增强处理;
    所述处理器根据所述感兴趣区域对红外视频帧中的图像进行图像增强处理之前,执行所述的获取用户针对红外视频帧中的图像输入的感兴趣区域的操作。
  29. 根据权利要求19至28任一项所述的设备,其特征在于,所述处理器根据所述感兴趣区域,对红外视频帧中的图像进行图像增强处理,包括:
    对所述图像中所述感兴趣区域的像素点进行直方图统计,获得统计结果;
    对所述统计结果进行拉伸处理,获得处理后的图像。
  30. 根据权利要求19至28任一项所述的设备,其特征在于,所述处理器根据所述感兴趣区域,对红外视频帧中的图像进行图像增强处理,包括:
    对所述图像中所述感兴趣区域的像素点,以及所述图像中所述感兴趣区域外的像素点分别进行直方图统计,获得第一统计结果和第二统计结果;
    对所述第一统计结果和所述第二统计结果分别进行拉伸处理,并将分别拉伸处理后的图像进行图像融合,获得处理后的图像。
  31. 根据权利要求30所述的设备,其特征在于,所述处理器根据所述感兴趣区域,对红外视频帧中的图像进行图像增强处理,包括:
    采用第一拉伸区间对所述第一统计结果进行拉伸处理,以及采用第二拉伸区间对所述第二统计结果进行拉伸处理,所述第一拉伸区间与所述第二拉伸区间不同。
  32. 根据权利要求19至31任一项所述的设备,其特征在于,所述处理器根据所述感兴趣区域,对红外视频帧中的图像进行图像增强处理,包括:
    对所述图像中所述感兴趣区域的像素点进行中高频细节增强处理,获得处理后的图像。
  33. 根据权利要求19至31任一项所述的设备,其特征在于,所述处理器根据所述感兴趣区域,对红外视频帧中的图像进行图像增强处理,包括:
    对所述图像中所述感兴趣区域的像素点,以及所述图像中所述感兴趣区域外的像素点分别进行中高频细节增强处理,获得处理后的图像。
  34. 根据权利要求19至33任一项所述的设备,其特征在于,所述处理器根据所述感兴趣区域,对红外视频帧中的图像进行图像增强处理,包括:
    采用目标调色盘对所述图像中所述感兴趣区域的像素点进行伪彩映射处理,获得处理后的图像。
  35. 根据权利要求34所述的设备,其特征在于,所述处理器采用目标调色盘对所述图 像中所述感兴趣区域的像素点进行伪彩映射处理之前,还用于从预设调色盘库中选择所述感兴趣区域对应的目标调色盘,所述预设调色盘库中包括一个或多个对象对应的一个或多个调色盘。
  36. 根据权利要求19至35任一项所述的设备,其特征在于,所述处理器获取用户针对红外视频帧中的图像输入的感兴趣区域,包括:
    获取用户通过触摸屏针对红外视频帧中的图像输入的区域范围;
    将所述图像中所述区域范围作为感兴趣区域,或将所述图像中所述区域范围之外的区域作为感兴趣区域。
  37. 根据权利要求19至35任一项所述的设备,其特征在于,所述感兴趣区域为多边形区域或目标物体所在的区域。
  38. 根据权利要求36或37所述的设备,其特征在于,所述处理器获取用户通过触摸屏针对红外视频帧中的图像输入的区域范围,包括:
    获取用户通过触摸屏对红外视频帧中的图像输入的点击操作;
    确定以所述点击操作对应的坐标为中心,以预设值为半径的圆面区域,作为用户针对所述图像输入的区域范围。
  39. 根据权利要求19至35任一项所述的设备,其特征在于,所述处理器获取用户针对红外视频帧中的图像输入的感兴趣区域,包括:
    获取用户输入的比例值和方位;
    针对所述红外视频帧中的图像,确定所述图像中的所述比例值的画幅作为感兴趣区域,或者确定所述红外视频帧中所述图像中所述方位的所述比例值的画幅之外的区域,作为感兴趣区域。
  40. 根据权利要求19至39任一项所述的方法,其特征在于,所述红外视频帧为红外传感器获得的红外视频帧。
  41. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有计算机程序指令,所述计算机程序指令被处理器执行时,用于执行如权利要求1-18任一项所述的图像处理方法。
PCT/CN2019/122060 2019-11-29 2019-11-29 图像处理方法及设备 WO2021102939A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/122060 WO2021102939A1 (zh) 2019-11-29 2019-11-29 图像处理方法及设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/122060 WO2021102939A1 (zh) 2019-11-29 2019-11-29 图像处理方法及设备

Publications (1)

Publication Number Publication Date
WO2021102939A1 true WO2021102939A1 (zh) 2021-06-03

Family

ID=76128998

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/122060 WO2021102939A1 (zh) 2019-11-29 2019-11-29 图像处理方法及设备

Country Status (1)

Country Link
WO (1) WO2021102939A1 (zh)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070109324A1 (en) * 2005-11-16 2007-05-17 Qian Lin Interactive viewing of video
EP1912157A1 (en) * 2006-10-09 2008-04-16 MAGNETI MARELLI SISTEMI ELETTRONICI S.p.A. Digital image processing system for automatically representing surrounding scenes to the driver of a vehicle for driving assistance, and corresponding operating method
CN101895741A (zh) * 2009-05-22 2010-11-24 宏正自动科技股份有限公司 对感兴趣范围特殊处理的图像处理及传输的方法与系统
CN103561629A (zh) * 2011-05-27 2014-02-05 奥林巴斯株式会社 内窥镜装置
CN103583037A (zh) * 2011-04-11 2014-02-12 菲力尔系统公司 红外相机系统和方法
CN105446673A (zh) * 2014-07-28 2016-03-30 华为技术有限公司 屏幕显示的方法及终端设备
CN106108941A (zh) * 2016-06-13 2016-11-16 杭州融超科技有限公司 一种超声图像区域质量增强装置及方法
CN106233328A (zh) * 2014-02-19 2016-12-14 埃弗加泽公司 用于改进、提高或增强视觉的设备和方法
CN106485660A (zh) * 2016-09-28 2017-03-08 北京小米移动软件有限公司 电子地图的缩放方法和装置
CN106934782A (zh) * 2017-01-16 2017-07-07 中国计量大学 一种红外图像增强方法
CN108139799A (zh) * 2016-04-22 2018-06-08 深圳市大疆创新科技有限公司 基于用户的兴趣区(roi)处理图像数据的系统和方法

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070109324A1 (en) * 2005-11-16 2007-05-17 Qian Lin Interactive viewing of video
EP1912157A1 (en) * 2006-10-09 2008-04-16 MAGNETI MARELLI SISTEMI ELETTRONICI S.p.A. Digital image processing system for automatically representing surrounding scenes to the driver of a vehicle for driving assistance, and corresponding operating method
CN101895741A (zh) * 2009-05-22 2010-11-24 宏正自动科技股份有限公司 对感兴趣范围特殊处理的图像处理及传输的方法与系统
CN103583037A (zh) * 2011-04-11 2014-02-12 菲力尔系统公司 红外相机系统和方法
CN103561629A (zh) * 2011-05-27 2014-02-05 奥林巴斯株式会社 内窥镜装置
CN106233328A (zh) * 2014-02-19 2016-12-14 埃弗加泽公司 用于改进、提高或增强视觉的设备和方法
CN105446673A (zh) * 2014-07-28 2016-03-30 华为技术有限公司 屏幕显示的方法及终端设备
CN108139799A (zh) * 2016-04-22 2018-06-08 深圳市大疆创新科技有限公司 基于用户的兴趣区(roi)处理图像数据的系统和方法
CN106108941A (zh) * 2016-06-13 2016-11-16 杭州融超科技有限公司 一种超声图像区域质量增强装置及方法
CN106485660A (zh) * 2016-09-28 2017-03-08 北京小米移动软件有限公司 电子地图的缩放方法和装置
CN106934782A (zh) * 2017-01-16 2017-07-07 中国计量大学 一种红外图像增强方法

Similar Documents

Publication Publication Date Title
CN110136056B (zh) 图像超分辨率重建的方法和装置
US9697416B2 (en) Object detection using cascaded convolutional neural networks
US9344690B2 (en) Image demosaicing
US20210209802A1 (en) Image Detection Method, Apparatus, Electronic Device and Storage Medium
CN113034358B (zh) 一种超分辨率图像处理方法以及相关装置
US9076221B2 (en) Removing an object from an image
US11030715B2 (en) Image processing method and apparatus
US12125458B2 (en) Display terminal adjustment method and display terminal
CN110944160A (zh) 一种图像处理方法及电子设备
CN112381183A (zh) 目标检测方法、装置、电子设备及存储介质
CN111768377B (zh) 图像色彩评估方法、装置、电子设备及存储介质
CN113628259A (zh) 图像的配准处理方法及装置
WO2021102939A1 (zh) 图像处理方法及设备
CN112437237A (zh) 拍摄方法及装置
CN110047126B (zh) 渲染图像的方法、装置、电子设备和计算机可读存储介质
WO2024055531A1 (zh) 照度计数值识别方法、电子设备及存储介质
WO2019150649A1 (ja) 画像処理装置および画像処理方法
WO2021102928A1 (zh) 图像处理方法及装置
US20180276458A1 (en) Information processing device, method and storage medium
CN114820547B (zh) 车道线检测方法、装置、计算机设备、存储介质
US9552632B2 (en) Dynamic waveform region enhancement
US11770617B2 (en) Method for tracking target object
CN112150345A (zh) 图像处理方法及装置、视频处理方法和发送卡
JP2016152467A (ja) 追尾装置、追尾方法及び追尾プログラム
CN111492399B (zh) 图像处理装置、图像处理方法以及记录介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19954223

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19954223

Country of ref document: EP

Kind code of ref document: A1