
WO2019105261A1 - Background blurring method and apparatus, and device - Google Patents


Info

Publication number
WO2019105261A1
Authority
WO
WIPO (PCT)
Prior art keywords
blurring
sub-regions
main image
Prior art date
Application number
PCT/CN2018/116475
Other languages
French (fr)
Chinese (zh)
Inventor
欧阳丹 (Ouyang Dan)
谭国辉 (Tan Guohui)
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Publication of WO2019105261A1 publication Critical patent/WO2019105261A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/958 Computational photography systems, e.g. light-field imaging systems, for extended depth of field imaging
    • H04N23/959 Computational photography systems, e.g. light-field imaging systems, for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Definitions

  • the present application relates to the field of electronic devices, and in particular, to a background blur processing method, apparatus, and terminal device.
  • the present application provides a background blurring processing method, apparatus, and device, so as to solve the prior-art problem that the depth of field beyond a certain distance cannot be accurately calculated, so that blurring of the image background cannot be carried out according to the depth of field, resulting in a poor visual blurring effect.
  • an embodiment of the present application provides a background blurring processing method, including: acquiring a main image captured by a main camera and a sub image captured by a sub camera, and acquiring depth information of the main image according to the main image and the sub image; determining an original blurring intensity of different sub-regions in the background area of the main image according to the depth information and the in-focus area; determining a distribution orientation of the different sub-regions according to a display manner of the main image, and determining a blurring weight of the different sub-regions according to a weight setting policy corresponding to the distribution orientation; determining a target blurring intensity of the different sub-regions according to their original blurring intensities and corresponding blurring weights; and blurring the background area of the main image according to the target blurring intensity of the different sub-regions.
  • a further embodiment provides a background blurring processing apparatus, including: a first acquiring module, configured to acquire a main image captured by a main camera and a sub image captured by a sub camera, and to acquire depth information of the main image according to the main image and the sub image; a first determining module, configured to determine an original blurring intensity of different sub-regions in the background area of the main image according to the depth information and the focus area; a second determining module, configured to determine a distribution orientation of the different sub-regions according to a display manner of the main image, and to determine a blurring weight of the different sub-regions according to a weight setting policy corresponding to the distribution orientation; a third determining module, configured to determine a target blurring intensity of the different sub-regions according to their original blurring intensities and corresponding blurring weights; and a processing module, configured to perform blurring processing on the background area of the main image according to the target blurring intensity of the different sub-regions.
  • a further embodiment of the present application provides a computer device including a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to perform the background blurring processing method described in the above embodiments of the present application.
  • a further embodiment of the present application provides a non-transitory computer readable storage medium having a computer program stored thereon, which when executed by a processor, implements a background blurring processing method as described in the above embodiments of the present application.
  • FIG. 1 is a flow chart of a background blurring processing method according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a dual camera viewing angle coverage according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of acquiring a depth of field of a dual camera according to an embodiment of the present application
  • FIG. 5(a) is a schematic diagram showing division of a plurality of sub-areas in a background area of a main image according to an embodiment of the present application;
  • FIG. 5(b) is a schematic diagram showing division of a plurality of sub-areas in a background area of a main image according to another embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of a background blur processing apparatus according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a background blur processing apparatus according to another embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a background blurring processing apparatus according to still another embodiment of the present application.
  • FIG. 10 is a schematic diagram of an image processing circuit according to another embodiment of the present application.
  • the execution body of the background blur processing method and apparatus of the embodiment of the present application may be a terminal device, where the terminal device may be a hardware device with a dual camera such as a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
  • the wearable device can be a smart bracelet, a smart watch, smart glasses, and the like.
  • the present application provides a background blurring processing method that associates the positional relationship of regions corresponding to different depths of field with different blurring intensities, controlling the regions corresponding to different depths of field to be blurred with different intensities, so that even when the depth-of-field information cannot be obtained accurately, the area corresponding to each depth of field can still receive blurring of an appropriate intensity.
  • FIG. 1 is a flowchart of a background blur processing method according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
  • Step 101 Acquire a main image captured by the main camera and a sub image captured by the sub camera, and acquire depth information of the main image according to the main image and the sub image.
  • the depth of field is the spatial depth over which imaging remains clear to the human eye before and after the focus area where the subject is located.
  • the human eye distinguishes depth of field mainly through binocular vision. This is the same principle by which dual cameras resolve depth of field, relying mainly on triangulation (triangular ranging) as shown in Figure 2.
  • Figure 2 depicts the imaged object, the positions of the two cameras O_R and O_T, and the focal planes of the two cameras.
  • the focal plane is at a distance f from the plane in which the two cameras lie. The two cameras image at the focal plane position, obtaining two captured images.
  • P and P' are the positions of the same object in the two captured images.
  • the distance from point P to the left boundary of its captured image is X_R, and the distance from point P' to the left boundary of its captured image is X_T.
  • O_R and O_T are the two cameras, lying in the same plane at a distance B from each other.
  • the distance Z between the object in Figure 2 and the plane of the two cameras satisfies Z = B·f / d, where d = X_R − X_T is the difference between the positions of the same object in the two captured images. Since B and f are constant, the distance Z of the object can be determined from d.
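The triangulation relation Z = B·f / d can be sketched numerically. This is an illustrative helper, not code from the patent; the function name and units are assumptions:

```python
def depth_from_disparity(baseline, focal_px, disparity_px):
    """Z = B * f / d: distance of an object from the camera plane.

    baseline:     distance B between the two camera centers
    focal_px:     focal length f, in the same pixel units as the disparity
    disparity_px: d = X_R - X_T, offset of the same point between the images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return baseline * focal_px / disparity_px
```

Because B and f are fixed for a given dual-camera module, a larger disparity always means a closer object, which is why the disparity map alone suffices to rank depths.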
  • the above formula is implemented based on two parallel cameras.
  • the main camera is used to capture the main image that is actually presented, while the sub-image obtained by the sub-camera is mainly used to calculate the depth of field. Based on the above analysis, the FOV of the sub-camera is generally larger than that of the main camera, but even so, as shown in Figure 3, an object at close range may still be imaged differently in the images acquired by the two cameras.
  • the adjusted calculated depth of field range is as follows:
  • a map of the differences between corresponding points is calculated from the main image acquired by the main camera and the sub-image acquired by the sub-camera, represented as a disparity map, which records the displacement of the same points between the two images.
  • Step 102 Determine an original blurring intensity of different sub-regions in the background area of the main image according to the depth of field information and the focus area.
  • the range of clear imaging before the focus area is the foreground depth of field
  • the area corresponding to the foreground depth of field is the foreground area
  • the range of clear imaging after the focus area is the background depth of field
  • the area corresponding to the background depth of field is the background area
  • the background area is divided into a plurality of different sub-areas in the horizontal direction, where the size and shape of each sub-area can be adjusted according to application requirements, and the sizes and shapes of the different sub-areas may be the same as or different from each other.
  • for each sub-area, the depth of field at the position closest to the focus area and the depth of field at the position farthest from it are obtained, and the two are averaged; this average depth of field is taken as the average depth information of the corresponding sub-area, and the original blurring intensity is determined from it, where the higher the average depth information, the greater the original blurring intensity.
  • the depth of field interval of each sub-area can be adjusted according to application requirements.
  • the depth of field interval of multiple different sub-areas can be the same or different.
  • the sub-areas may be shaped according to the background area, as shown in FIG. 5(a), into a plurality of sub-areas of inconsistent size; or, for convenience of further blurring processing, as shown in FIG. 5(b), the sub-areas may be distributed horizontally from the bottom to the top of the image, each having the same width as the image.
  • alternatively, the average depth of field of a sub-area may be calculated from the probability distribution of depths of field in the depth information obtained for that sub-area.
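The per-sub-area averaging described above can be sketched as follows. This is an illustrative reading of the method, not the patent's implementation; the linear mapping from average depth to intensity is an assumption:

```python
def original_blur_intensities(subregion_depths, max_intensity=1.0):
    """For each sub-area, average the depth at the position nearest the
    focus area and at the position farthest from it, then map a higher
    average depth to a higher original blurring intensity.

    subregion_depths: one list of depth samples per sub-area.
    """
    averages = []
    for depths in subregion_depths:
        # Nearest and farthest depths of field within the sub-area, averaged.
        avg = (min(depths) + max(depths)) / 2.0
        averages.append(avg)
    top = max(averages)
    # Hypothetical linear mapping: intensity proportional to average depth.
    return [max_intensity * a / top for a in averages]
```

A real pipeline would take the per-pixel depths from the disparity map; here plain lists stand in for them.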
  • Step 103 Determine a distribution orientation of different sub-regions according to a display manner of the main image, and determine a blur weight of the different sub-regions according to a weight setting policy corresponding to the distribution orientation.
  • the display manner includes the display orientation of the shooting subject, its proportion of the entire image, and the like.
  • the bottom of the image is the ground
  • the top of the image is the sky
  • the subject of the photograph is located on the ground; therefore, the closer a sub-region in the background area of the main image is to the top of the image, the less relevant it is to the current subject, and the closer it is to the bottom of the image, the more relevant it is.
  • for another example, if the right side of the image is a portrait and the left side is a beach or the sea, and the current photographing mode is the portrait mode, then the closer a sub-region is to the left side of the image, the less relevant it is to the current subject.
  • the distribution orientation of the different sub-regions may thus be determined according to the display manner of the main image, and the blurring weights of the different sub-regions determined according to the weight setting strategy corresponding to that orientation. For example, in the scene above, sub-regions closer to the upper direction are unrelated to the subject of the main image, while sub-regions closer to the lower direction are related to it, so the blurring weights of the different sub-areas are determined according to a top-down weight-reduction strategy. Note that the up and down directions here do not refer only to the traditional physical top and bottom; as the above analysis shows, they depend on the display mode of the main image. If the main image is displayed vertically, up and down indicate the physical top and bottom; if it is displayed horizontally, they indicate the left and right direction.
  • sub-regions closer to the top of the image thus have larger blurring weights, and sub-areas near the bottom have smaller blurring weights, which makes the final blur effect more natural and closer to a real optical out-of-focus effect.
  • the display mode of the main image may be estimated in different ways depending on the application scenario.
  • for example, the orientation of the terminal device may be obtained by reading its gyroscope information, from which the display manner of the main image taken by the terminal device is inferred.
  • the relationship between distribution orientation and blurring effect may be obtained in advance from a large number of experiments; based on this relationship, a correspondence between distribution orientation and weight setting policy is established and stored, so that after the display manner of the main image is determined, the correspondence is queried to obtain the applicable weight setting policy.
  • further, a weight distribution curve, linear or nonlinear, containing the mapping between the positioning coordinate of a sub-region and its blurring weight may be set in advance, where the positioning coordinate can be any point in the central region of the sub-region; the positioning coordinates of the different sub-areas are then used to query this preset linear or nonlinear weight distribution curve to determine the blurring weights of the different sub-areas.
  • alternatively, the correspondence between the display direction of the image and the blurring weights of sub-regions of different orientations may be learned, and a deep neural network model constructed from the learning result; the display direction of the main image and the orientations of the different sub-regions are input into the model, and the blurring weight of each sub-region is obtained from the model's output.
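The top-down weight-reduction strategy with a linear distribution curve can be sketched as below. The endpoint weights are invented for illustration; the patent does not specify numeric values:

```python
def blur_weight(center_y, image_height, w_top=1.0, w_bottom=0.5):
    """Linear distribution curve for the top-down weight-reduction
    strategy: sub-regions whose positioning coordinate lies nearer the
    top of the image receive a larger blurring weight.

    center_y: vertical positioning coordinate of the sub-region's center,
              with 0 at the top of the image. w_top and w_bottom are
              hypothetical endpoint weights.
    """
    t = center_y / image_height  # 0.0 at the top, 1.0 at the bottom
    return w_top + (w_bottom - w_top) * t
```

For a horizontally displayed image, the same curve would be queried with the horizontal coordinate instead, matching the remark that "up" and "down" follow the display mode.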
  • Step 104 Determine target blurring strengths of different sub-regions according to original blurring strengths of different sub-regions and corresponding blurring weights.
  • Step 105 Perform blurring processing on the background area of the main image according to the target blurring intensity of different sub-areas.
  • the target blurring strengths of different sub-regions may be determined from their original blurring strengths and corresponding blurring weights in different ways, including but not limited to the following:
  • the product of each sub-region's original blurring intensity and its blurring weight is taken as its target blurring intensity. For example, if the blurring weights of two sub-regions whose original blurring intensities are a and b are 80% and 90% respectively, then a*80% and b*90% are taken as the target blurring intensities of those two sub-regions.
  • alternatively, the blurring weights of adjacent sub-regions whose original blurring intensities differ greatly are brought closer together to a certain degree. For example, for adjacent sub-regions 1 and 2, if the original blurring intensity of sub-region 1 is much larger than that of sub-region 2, the blurring weight of sub-region 1 is multiplied by a coefficient smaller than 1, where the larger the gap between the original blurring intensities, the smaller the coefficient; the product of sub-region 1's original blurring intensity, blurring weight and this coefficient is taken as its target blurring intensity, while the product of sub-region 2's original blurring intensity and blurring weight is taken as its target blurring intensity. Alternatively, the blurring weight of sub-region 2 is multiplied by a coefficient greater than 1, where the larger the gap, the larger the coefficient.
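The two combination strategies above can be sketched together. The threshold and the coefficient formula below are hypothetical; the text only states that the coefficient shrinks as the gap between adjacent original intensities grows:

```python
def target_intensities(originals, weights, smooth=True):
    """Target blurring intensity of each sub-region: the product of its
    original intensity and its blurring weight. Optionally, when a
    sub-region's original intensity greatly exceeds its neighbor's, its
    weight is scaled by a coefficient below 1 to soften the transition.
    """
    targets = [o * w for o, w in zip(originals, weights)]
    if not smooth:
        return targets
    for i in range(len(originals) - 1):
        a, b = originals[i], originals[i + 1]
        if b > 0 and a / b > 2.0:    # "much larger": hypothetical threshold
            coeff = 2.0 * b / a      # shrinks as the gap grows, stays < 1
            targets[i] = a * weights[i] * coeff
    return targets
```

With smoothing off this reduces to the plain product rule from the first strategy; with it on, abrupt jumps between adjacent sub-regions are damped.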
  • the manner of performing the blurring process on the background area of the main image according to the target blurring intensity of different sub-areas also includes, but is not limited to, the following processing manners:
  • the depth information of the sub-area may be acquired in the manner described in step 102.
  • in summary, the main image captured by the main camera and the sub image captured by the sub camera are acquired, and the depth information of the main image is obtained from them; the original blurring intensity of different sub-regions in the background area of the main image is determined according to the depth information and the focus area, and multiplied by the blurring weight of the corresponding sub-region to obtain the target blurring intensity of that sub-region; the background area of the main image is then blurred according to the target blurring intensities to obtain the blurred image.
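The final blurring step can be sketched with a simple per-region box blur standing in for whatever blur filter an implementation would actually use; the names and the mapping from intensity to radius are assumptions:

```python
def blur_background(image, region_rows, intensities, base_radius=4):
    """Blur each horizontal sub-region of the background with a box blur
    whose radius grows with that region's target blurring intensity.

    image:       list of rows of grayscale values (background of the main image)
    region_rows: (start, end) row range of each sub-region
    intensities: target blurring intensity of each sub-region
    """
    out = [row[:] for row in image]
    for (start, end), strength in zip(region_rows, intensities):
        radius = max(1, int(round(base_radius * strength)))
        for y in range(start, end):
            row = image[y]
            blurred = []
            for x in range(len(row)):
                # Average a window of width 2*radius + 1 (clipped at edges).
                lo, hi = max(0, x - radius), min(len(row), x + radius + 1)
                window = row[lo:hi]
                blurred.append(sum(window) / len(window))
            out[y] = blurred
    return out
```

A production pipeline would use a proper Gaussian or lens-shaped kernel; the point here is only that each sub-region receives a blur strength proportional to its target intensity.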
  • the display direction of the main image may be estimated from the gyroscope information of the terminal device; the upper and lower orientations of the different sub-areas are then determined according to that display direction, and the blurring weights of the different sub-areas are determined according to the top-to-bottom weight-reduction strategy.
  • in addition to dividing the background region into sub-regions in a direction parallel to the in-focus area, the background region may be divided into different sub-regions in the vertical direction (the direction perpendicular to the in-focus area).
  • for example, the background area is divided into a plurality of different sub-areas according to the size of the depth of field; for each sub-area, the depth of field closest to the focus area and the depth of field farthest from it are obtained and averaged, the average is taken as the average depth-of-field information of the corresponding sub-area, and the original blurring intensity is determined from it, where the higher the average depth-of-field information, the greater the original blurring intensity.
  • alternatively, the average depth of field of a sub-area may be calculated from the probability distribution of depths in its depth-of-field information; or, when the depth-of-field information of one or more sub-areas far from the focus area is inaccurate, their average depth-of-field information may be derived from the trend of the more accurately obtained average depth-of-field information of the sub-areas nearer the focus area.
  • the blurring weight of each sub-area can be calculated in the same manner as the above-described example to obtain the target blurring intensity of each sub-area for blurring processing.
  • in summary, the background blur processing method of the present application acquires the main image captured by the main camera and the sub image captured by the sub camera, acquires the depth information of the main image from them, determines the original blurring intensities of the different sub-regions according to the depth information and the focus area, determines their blurring weights according to the display manner of the main image, and blurs the background area of the main image according to the resulting target blurring intensities.
  • FIG. 7 is a schematic structural diagram of a background blur processing apparatus according to an embodiment of the present application. As shown in FIG. 7, the background blur processing apparatus includes a first acquiring module 100, a first determining module 200, a second determining module 300, a third determining module 400, and a processing module 500.
  • the first obtaining module 100 is configured to acquire a main image captured by the main camera and a sub image captured by the sub camera, and acquire depth information of the main image according to the main image and the sub image.
  • the first determining module 200 is configured to determine an original blurring intensity of different sub-regions in the background area of the main image according to the depth of field information and the focus area.
  • the first determining module 200 includes a first determining unit 210, a first obtaining unit 220, and a second obtaining unit 230.
  • the first obtaining unit 220 is configured to acquire, according to the second depth information, average depth information of different sub-regions in the background area of the main image.
  • the second obtaining unit 230 is configured to acquire the original blurring intensity of different sub-regions according to the first depth information and the average depth information of different sub-regions.
  • the second determining module 300 is configured to determine a distributed orientation of different sub-regions according to a display manner of the main image, and determine a blurring weight of the different sub-regions according to a weight setting policy corresponding to the distributed orientation.
  • the second determining module 300 includes a third obtaining unit 310 and a second determining unit 320.
  • the third obtaining unit 310 is configured to acquire positioning coordinates of different sub-regions.
  • the processing module 500 is configured to perform a blurring process on the background area of the main image according to the target blurring intensity of different sub-regions.
  • the division of modules in the background blur processing device above is for illustrative purposes only; in other embodiments, the background blur processing device may be divided into different modules as needed to complete all or part of its functions.
  • the background blur processing device of the present application acquires the main image captured by the main camera and the sub image captured by the sub camera, acquires the depth information of the main image from them, determines the original blurring intensities and blurring weights of the different sub-regions according to the depth information, the focus area and the display manner of the main image, and blurs the background area of the main image according to the target blurring intensities of the different sub-areas.
  • the image processing circuit includes an ISP processor 1040 and a control logic 1050.
  • the image data captured by imaging device 1010 is first processed by ISP processor 1040, which analyzes the image data to capture image statistical information that may be used to determine and/or control one or more control parameters of imaging device 1010.
  • the imaging device 1010 (camera) may include one or more lenses 1012 and an image sensor 1014, and in the present application the imaging device 1010 includes two sets of cameras for implementing the background blurring method; with continued reference to FIG. 10, the imaging device 1010 can simultaneously capture scene images with a primary camera and a secondary camera.
  • image sensor 1014 can include a color filter array (eg, a Bayer filter), and image sensor 1014 can acquire light intensity captured by each imaging pixel of image sensor 1014 and The wavelength information is provided and a set of raw image data that can be processed by the ISP processor 1040 is provided.
  • the sensor 1020 can provide raw image data to the ISP processor 1040 according to the sensor 1020 interface type, and the ISP processor 1040 can calculate depth information and the like from the raw image data acquired by the image sensor 1014 in the main camera and the raw image data acquired by the image sensor 1014 in the secondary camera.
  • the sensor 1020 interface may utilize a SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
  • the ISP processor 1040 processes the raw image data pixel by pixel in a variety of formats.
  • each image pixel can have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 1040 can perform one or more image processing operations on the raw image data and collect statistical information about the image data, where the image processing operations can be performed with the same or different bit-depth precision.
  • ISP processor 1040 can also receive pixel data from image memory 1030. For example, raw pixel data is sent from the sensor 1020 interface to the image memory 1030, and the raw pixel data in the image memory 1030 is then provided to the ISP processor 1040 for processing.
  • Image memory 1030 can be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and can include DMA (Direct Memory Access) features.
  • ISP processor 1040 When receiving raw image data from sensor 1020 interface or from image memory 1030, ISP processor 1040 can perform one or more image processing operations, such as time domain filtering.
  • the processed image data can be sent to image memory 1030 for additional processing prior to being displayed.
  • the ISP processor 1040 receives the processed data from the image memory 1030 and performs image data processing in the raw domain and in the RGB and YCbCr color spaces.
  • the processed image data may be output to display 1070 for viewing by a user and/or further processed by a graphics engine or a GPU (Graphics Processing Unit). Additionally, the output of ISP processor 1040 can also be sent to image memory 1030, and display 1070 can read image data from image memory 1030.
  • image memory 1030 can be configured to implement one or more frame buffers.
  • the statistics determined by the ISP processor 1040 can be sent to the control logic 1050 unit.
  • the statistical data may include image sensor 1014 statistical information such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, lens 1012 shading correction, and the like.
  • the control logic 1050 can include a processor and/or a microcontroller that executes one or more routines, such as firmware, and the one or more routines can determine control parameters of the imaging device 1010 and control parameters of the ISP processor 1040 based on the received statistical data.
  • the control parameters may include sensor 1020 control parameters (eg, gain, integration time for exposure control), camera flash control parameters, lens 1012 control parameters (eg, focus or zoom focal length), or a combination of these parameters.
  • the ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (eg, during RGB processing), as well as lens 1012 shading correction parameters.
  • in this way, the background area of the main image is blurred according to the target blurring intensity of the different sub-areas.
  • the present application also proposes a non-transitory computer readable storage medium that enables execution of a background blurring processing method as in the above embodiment when instructions in the storage medium are executed by a processor.
  • the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” or “second” may include at least one of the features, either explicitly or implicitly.
  • the meaning of "a plurality” is at least two, such as two, three, etc., unless specifically defined otherwise.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • more specific examples of computer readable media include: an electrical connection (electronic device) having one or more wires, a portable computer disk cartridge (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CDROM).
  • the computer readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or, where appropriate, processing it in another suitable manner, and then stored in a computer memory.
  • portions of the application can be implemented in hardware, software, firmware, or a combination thereof.
  • multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • for example, if implemented in hardware, as in another embodiment, they can be implemented by any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. While the embodiments of the present application have been shown and described above, it is understood that the above-described embodiments are illustrative and are not to be construed as limiting the scope of the present application; those skilled in the art may make variations, modifications, substitutions, and alterations to the embodiments within the scope of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

Disclosed in the present application are a background blurring method and apparatus, and a device. The method comprises: obtaining a main image captured by a main camera and an auxiliary image captured by an auxiliary camera, and obtaining depth-of-field information of the main image according to the main image and the auxiliary image; determining original blurring intensities of different sub-regions in a background region of the main image according to the depth-of-field information and a focus region; determining distribution orientations of the different sub-regions according to a display mode of the main image, and determining blurring weights of the different sub-regions according to a weight setting strategy corresponding to the distribution orientations; determining target blurring intensities of the different sub-regions according to the original blurring intensities and the corresponding blurring weights of the different sub-regions; and blurring the background region of the main image according to the target blurring intensities of the different sub-regions. The blurring effect is thereby more natural and closer to a real optical defocus effect.

Description

Background blurring processing method, apparatus, and device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 201711242134.4, entitled "Background Blurring Processing Method, Apparatus and Device", filed on November 30, 2017 by Guangdong OPPO Mobile Communications Co., Ltd.
Technical field
The present application relates to the field of electronic devices, and in particular to a background blurring processing method, apparatus, and terminal device.
Background
With advances in the manufacturing of terminal devices such as smartphones, many terminal devices now use a dual-camera system: depth-of-field information is computed from two images captured simultaneously by the two cameras, and the depth information is then used for blurring. In blurring, users generally expect the effect to approximate real optical defocus, that is, the greater the depth of field, the stronger the blurring. However, the accuracy of current depth-of-field computation is limited, and the depth beyond a certain distance may not be computed accurately; consequently, far regions of the image cannot be blurred according to depth, and the visual effect of the blurring is poor.
Summary
The present application provides a background blurring processing method, apparatus, and device, to solve the technical problem in the prior art that the depth of field beyond a certain distance cannot be computed accurately, so that far regions of an image cannot be blurred according to depth, resulting in a poor visual effect of the blurring.
An embodiment of the present application provides a background blurring processing method, including: acquiring a main image captured by a main camera and an auxiliary image captured by an auxiliary camera, and obtaining depth-of-field information of the main image from the main image and the auxiliary image; determining original blurring intensities of different sub-regions in the background region of the main image according to the depth information and the focus region; determining the distribution orientations of the different sub-regions according to the display mode of the main image, and determining blurring weights of the different sub-regions according to a weight setting strategy corresponding to the distribution orientations; determining target blurring intensities of the different sub-regions from their original blurring intensities and corresponding blurring weights; and blurring the background region of the main image according to the target blurring intensities of the different sub-regions.
Another embodiment of the present application provides a background blurring processing apparatus, including: a first acquiring module, configured to acquire a main image captured by a main camera and an auxiliary image captured by an auxiliary camera, and to obtain depth-of-field information of the main image from the main image and the auxiliary image; a first determining module, configured to determine original blurring intensities of different sub-regions in the background region of the main image according to the depth information and the focus region; a second determining module, configured to determine the distribution orientations of the different sub-regions according to the display mode of the main image, and to determine blurring weights of the different sub-regions according to a weight setting strategy corresponding to the distribution orientations; a third determining module, configured to determine target blurring intensities of the different sub-regions from their original blurring intensities and corresponding blurring weights; and a processing module, configured to blur the background region of the main image according to the target blurring intensities of the different sub-regions.
A further embodiment of the present application provides a computer device, including a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the background blurring processing method described in the above embodiments of the present application.
A further embodiment of the present application provides a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the background blurring processing method described in the above embodiments of the present application.
The technical solutions provided by the embodiments of the present application may have the following beneficial effects:
The main image captured by the main camera and the auxiliary image captured by the auxiliary camera are acquired, and the depth-of-field information of the main image is obtained from the two images. The original blurring intensities of different sub-regions in the background region of the main image are determined according to the depth information and the focus region. The distribution orientations of the different sub-regions are determined according to the display mode of the main image, and their blurring weights are determined according to a weight setting strategy corresponding to the distribution orientations. The target blurring intensity of each sub-region is then determined from its original blurring intensity and corresponding blurring weight, and finally the background region of the main image is blurred according to the target blurring intensities of the different sub-regions. The blurring effect is thereby more natural and closer to a real optical defocus effect.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1 is a flowchart of a background blurring processing method according to an embodiment of the present application;
Figure 2 is a schematic diagram of the principle of triangulation ranging according to an embodiment of the present application;
Figure 3 is a schematic diagram of the viewing-angle coverage of dual cameras according to an embodiment of the present application;
Figure 4 is a schematic diagram of depth-of-field acquisition with dual cameras according to an embodiment of the present application;
Figure 5(a) is a schematic diagram of the division of a plurality of sub-regions in the background region of a main image according to an embodiment of the present application;
Figure 5(b) is a schematic diagram of the division of a plurality of sub-regions in the background region of a main image according to another embodiment of the present application;
Figure 6 is a flowchart of a background blurring processing method according to a specific embodiment of the present application;
Figure 7 is a schematic structural diagram of a background blurring processing apparatus according to an embodiment of the present application;
Figure 8 is a schematic structural diagram of a background blurring processing apparatus according to another embodiment of the present application;
Figure 9 is a schematic structural diagram of a background blurring processing apparatus according to still another embodiment of the present application; and
Figure 10 is a schematic diagram of an image processing circuit according to another embodiment of the present application.
Detailed description
The embodiments of the present application are described in detail below. Examples of the embodiments are illustrated in the drawings, where the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and intended to explain the present application, and are not to be construed as limiting the present application.
The background blurring processing method, apparatus, and terminal device of the embodiments of the present application are described in detail below with reference to the drawings.
The background blurring processing method and apparatus of the embodiments of the present application may be executed by a terminal device, where the terminal device may be a hardware device with dual cameras such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device. The wearable device may be a smart bracelet, a smart watch, smart glasses, or the like.
Based on the above analysis, in the prior art blurring is performed according to depth-of-field information; therefore, when the depth information cannot be obtained accurately due to limited precision, the intended blurring intensity of the corresponding region cannot be achieved, which degrades the blurring effect of the image.
To solve this technical problem, the present application provides a background blurring processing method that, based on the positional relationship between regions corresponding to different depths of field, controls those regions to be blurred with different intensities. Thus, even if the depth information cannot be obtained accurately, each region corresponding to a different depth of field can still receive blurring of an appropriate intensity.
Figure 1 is a flowchart of a background blurring processing method according to an embodiment of the present application. As shown in Figure 1, the method includes the following steps:
Step 101: acquire the main image captured by the main camera and the auxiliary image captured by the auxiliary camera, and obtain the depth-of-field information of the main image from the main image and the auxiliary image.
After focusing on the photographed subject, the spatial depth range before and behind the focus region within which the human eye perceives sharp imaging is the depth of field.
It should be noted that, in practice, the human eye distinguishes depth mainly through binocular vision, on the same principle by which dual cameras distinguish depth, namely the principle of triangulation ranging shown in Figure 2. Figure 2 shows, in real space, the imaged object, the positions O_R and O_T of the two cameras, and the focal planes of the two cameras. The focal planes are at distance f from the plane in which the two cameras lie; the two cameras form images at the focal plane positions, yielding two captured images.
P and P' are the positions of the same object in the two captured images, where the distance from point P to the left boundary of its captured image is X_R, and the distance from point P' to the left boundary of its captured image is X_T. O_R and O_T denote the two cameras, which lie in the same plane at a distance B from each other.
Based on the principle of triangulation ranging, the distance Z between the object in Figure 2 and the plane in which the two cameras lie satisfies the following relationship:
B / Z = (B + X_T - X_R) / (Z - f)
From this it can be derived that:
Z = B * f / (X_R - X_T) = B * f / d
where d is the difference between the positions of the same object in the two captured images. Since B and f are constants, the distance Z of the object can be determined from d.
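The derivation above reduces to the classic stereo relation Z = B * f / d. A minimal numeric sketch (the baseline, focal length, and disparity values below are hypothetical, not from the source):

```python
def depth_from_disparity(baseline_b, focal_f, disparity_d):
    """Distance Z of a point from the camera plane via Z = B*f/d."""
    if disparity_d <= 0:
        raise ValueError("disparity must be positive")
    return baseline_b * focal_f / disparity_d

# Hypothetical values: B = 20 mm baseline, f = 4 mm focal length,
# d = 0.08 mm disparity on the sensor, so the point is about 1 m away.
z = depth_from_disparity(20.0, 4.0, 0.08)
```

As the relation shows, doubling the disparity halves the estimated distance, which is why nearby objects are easier to range than distant ones.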
It should be emphasized that the formula above assumes two identical, parallel cameras, but in practice there are many complications; for example, part of the scene is always visible to only one of the two cameras used for depth calculation. For this reason, the two cameras are in practice given different FOV designs for depth calculation: the main camera takes the main image that is actually used, while the auxiliary image captured by the auxiliary camera serves mainly as a reference for computing depth. On this basis, the FOV of the auxiliary camera is generally larger than that of the main camera; yet even so, as shown in Figure 3, nearby objects may still fail to appear simultaneously in the images acquired by both cameras. The adjusted relationship for the computable depth-of-field range is given by the following formula:
[formula image in the original publication, not reproduced here]
The depth-of-field range of the main image can then be calculated according to the adjusted formula.
Of course, besides triangulation ranging, other methods can be used to calculate the depth of field of the main image. For example, when the main camera and the auxiliary camera photograph the same scene, the distance from an object in the scene to the cameras is proportional to the displacement difference, posture difference, and so on between the images formed by the main and auxiliary cameras; therefore, in an embodiment of the present application, the distance Z can be obtained from this proportional relationship.
For example, as shown in Figure 4, a map of point-wise differences, represented here as a disparity map, is computed from the main image acquired by the main camera and the auxiliary image acquired by the auxiliary camera; it expresses the displacement difference between the same points in the two images. Since the displacement difference in triangulation ranging is inversely proportional to Z, the disparity map is often used directly as the depth-of-field map.
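Because disparity and depth are related by Z = B * f / d, a disparity map can be converted element-wise into a depth map. A sketch in NumPy (array values hypothetical; real pipelines must also handle pixels with no match between the two views, marked here with infinity):

```python
import numpy as np

def disparity_to_depth(disparity, baseline_b, focal_f, eps=1e-6):
    """Element-wise Z = B*f/d; zero-disparity pixels (no stereo match)
    are marked invalid with infinity."""
    d = np.asarray(disparity, dtype=float)
    depth = np.full(d.shape, np.inf)
    valid = d > eps
    depth[valid] = baseline_b * focal_f / d[valid]
    return depth

# Hypothetical disparities in mm on the sensor; B = 20 mm, f = 4 mm.
disparity = np.array([[0.08, 0.04],
                      [0.00, 0.16]])
depth = disparity_to_depth(disparity, 20.0, 4.0)
# Larger disparity means smaller depth; the zero-disparity pixel is invalid.
```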
Step 102: determine the original blurring intensities of different sub-regions in the background region of the main image according to the depth-of-field information and the focus region.
It can be understood that the imaging range in front of the focus region is the foreground depth of field, and the region corresponding to it is the foreground region; the range of sharp imaging behind the focus region is the background depth of field, and the region corresponding to it is the background region. The background region of the main image is determined from the depth information and the focus region, and the original blurring intensities of the different sub-regions in the background region are then determined preliminarily, to serve as the baseline that is subsequently adjusted when blurring each sub-region of the background region.
As a possible implementation, the background region is divided into different sub-regions along the horizontal direction (the direction parallel to the focus region).
The first depth information of the foreground region and the second depth information of the background region in the main image are determined from the depth information and the focus region; the average depth information of the different sub-regions in the background region of the main image is obtained from the second depth information; and the original blurring intensities of the different sub-regions are then obtained from the first depth information and the average depth information of the different sub-regions.
Specifically, in this example the background region is divided horizontally into a plurality of different sub-regions, where the size and shape of each sub-region can be adjusted as needed, and the sizes and shapes of the different sub-regions may be the same or different. The depth at the position in each sub-region nearest to the focus region and the depth at the position farthest from the focus region are obtained and averaged; this average serves as the average depth information of the corresponding sub-region, from which the original blurring intensity is determined: the higher the average depth information, the greater the original blurring intensity.
The size of the depth interval of each sub-region can be adjusted as needed, and the depth intervals of the different sub-regions may be equal or different. As shown in Figure 5(a), the sub-regions may be divided according to the shape of the background region into a plurality of sub-regions of unequal size; or, for the convenience of further blurring, as shown in Figure 5(b), the sub-regions may be horizontal strips of the same width as the image, distributed from the bottom of the image to the top.
It should be emphasized that if the depth at the position in a sub-region nearest to the focus region or at the position farthest from it could not be obtained due to the limited precision of depth calculation, the average depth of the sub-region can be computed from the probability distribution of the depth values that were obtained within it.
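The per-strip averaging described above can be sketched as follows. The strip average follows the text (mean of the nearest and farthest depth within the strip), while the linear mapping from average depth to original blurring intensity (the factor k) is a hypothetical stand-in, since the source does not specify the exact mapping:

```python
import numpy as np

def original_blur_intensities(background_depth, n_strips, focus_depth, k=0.01):
    """Split the background depth map into horizontal strips and derive an
    original blurring intensity per strip from its average depth.

    avg = (nearest depth + farthest depth) / 2, as in the text; the mapping
    k * (avg - focus_depth) is a hypothetical choice."""
    strips = np.array_split(np.asarray(background_depth, dtype=float),
                            n_strips, axis=0)
    intensities = []
    for strip in strips:
        avg_depth = (strip.min() + strip.max()) / 2.0
        intensities.append(max(0.0, k * (avg_depth - focus_depth)))
    return intensities

# Hypothetical background depths (mm); the top rows are farthest away.
depth = np.array([[3000.0, 3200.0],
                  [2000.0, 2200.0],
                  [1200.0, 1400.0]])
print(original_blur_intensities(depth, 3, focus_depth=1000.0))
```

With these values, the farthest (top) strip receives the largest original intensity, matching the rule that higher average depth means stronger blurring.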
Step 103: determine the distribution orientations of the different sub-regions according to the display mode of the main image, and determine the blurring weights of the different sub-regions according to a weight setting strategy corresponding to the distribution orientations.
The display mode includes the display orientation of the photographed subject, its proportion of the entire image, and the like.
It can be understood that in a first shooting scene, the bottom of the image is the ground, the top is the sky, and the photographed subject stands on the ground; hence, the closer a sub-region of the main image's background is to the top of the image, the less relevant it is to the current subject, and the closer it is to the bottom, the more relevant. Or, in a second shooting scene, the right side of the image is a portrait and the left side is a beach or the sea; if the current shooting mode is portrait mode, then the closer to the left of the image, the less relevant to the current subject, and the closer to the right, the more relevant. Or again, in a third shooting scene, namely portrait mode in which the portrait occupies a large proportion of the whole image and the background region lies only in the four corners, the farther a region is from the portrait, the less relevant it is to the current subject.
Therefore, in an embodiment of the present application, the distribution orientations of the different sub-regions can be determined according to the display mode of the main image, and the blurring weights of the different sub-regions can then be determined according to a weight setting strategy corresponding to those orientations. For example, in the first scene above, the closer a sub-region is to the upper orientation, the less relevant it is to the subject of the main image, and the closer to the lower orientation, the more relevant; the blurring weights of the different sub-regions are thus determined by a top-to-bottom decreasing-weight strategy. Here "up" and "down" do not simply mean the physical up and down: as the above analysis shows, they depend on the display mode of the main image. If the main image is displayed vertically, "up" and "down" mean the physical up and down; if the main image is displayed horizontally, they refer to the left-right direction.
For example, when the main image is displayed vertically, the photographed subject is relatively small compared with the whole image, and the up-down orientation is the physical up and down, sub-regions nearer the top of the picture receive larger blurring weights and sub-regions nearer the bottom receive smaller ones; this makes the final blurring effect more natural and closer to a real optical defocus effect.
In practice, depending on the application scenario, different implementations can be used to infer the display mode of the main image. For example, the orientation of the terminal device can be obtained by reading its gyroscope information, from which the display mode of the main image at the time of shooting can be inferred.
In the implementation of the present application, there are various ways to determine the blurring weights of the different sub-regions according to the weight setting strategy corresponding to the distribution orientations. For example, the relationship between distribution orientation and blurring effect can be obtained in advance from a large number of experiments, and a correspondence between distribution orientations and weight setting strategies can be built and stored accordingly, so that once the display mode of the main image is determined, this correspondence can be queried to obtain the corresponding weight setting strategy.
For clearer illustration, the following takes the determination of the blurring weights of the different sub-regions by a top-to-bottom decreasing-weight strategy as an example:
Method one:
In this method, a linear distribution curve, or a nonlinear distribution curve, containing the correspondence between the positioning coordinates of sub-regions and blurring weights is preset, where a positioning coordinate can be the coordinate of any point in the central area of a sub-region. The positioning coordinates of the different sub-regions are obtained, and the preset linear or nonlinear weight distribution curve is queried with them to determine the blurring weights of the different sub-regions.
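Method one's linear weight curve can be sketched as a straight-line interpolation over strip index, with the strip nearest the top of the image receiving the largest weight; the endpoint weights below are hypothetical:

```python
def linear_blur_weights(n_strips, w_top=1.0, w_bottom=0.5):
    """Top-to-bottom decreasing weight curve (Method one): strip 0 is the
    topmost and gets the largest blurring weight. The endpoint weights
    w_top and w_bottom are hypothetical choices."""
    if n_strips < 2:
        return [w_top] * n_strips
    step = (w_top - w_bottom) / (n_strips - 1)
    return [w_top - i * step for i in range(n_strips)]

print(linear_blur_weights(5))  # falls linearly from 1.0 to 0.5
```

A nonlinear curve (e.g. quadratic in the strip index) would be a drop-in replacement for the interpolation line.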
Method two:
The correspondence between the display direction of the image and the blurring weights of sub-regions at different orientations is learned from a large amount of experimental data, and a deep neural network model is built from the learning result. The display direction of the main image and the orientations of the different sub-regions are then fed into the model, and the blurring weight of each sub-region is obtained from the model's output.
Step 104: determine the target blurring intensities of the different sub-regions from their original blurring intensities and corresponding blurring weights.
Step 105: blur the background region of the main image according to the target blurring intensities of the different sub-regions.
Specifically, after the original blurring intensities and corresponding blurring weights of the different sub-regions are determined, the target blurring intensity of each sub-region is determined from its original blurring intensity and corresponding blurring weight, and the background region of the main image is blurred according to these target intensities. By blurring different sub-regions with different intensities according to the relationship between a sub-region's orientation and the display direction of the main image, each sub-region can receive blurring of a suitable intensity without precise depth information for every sub-region, making the blurring result more natural.
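Step 105 can be sketched by blurring each horizontal strip of the background with a kernel whose radius grows with the strip's target intensity. The box blur below is a deliberately crude stand-in for the real blurring filter, and the rounding from intensity to radius is a hypothetical choice:

```python
import numpy as np

def box_blur_rows(strip, radius):
    """Horizontal box blur with edge padding; a crude stand-in for the
    real blurring filter, for illustration only."""
    if radius <= 0:
        return strip.astype(float)
    k = 2 * radius + 1
    padded = np.pad(strip, ((0, 0), (radius, radius)), mode="edge")
    out = np.zeros(strip.shape, dtype=float)
    for i in range(k):  # sum the k shifted windows, then normalize
        out += padded[:, i:i + strip.shape[1]]
    return out / k

def blur_background(background, target_intensities):
    """Blur each horizontal strip with a radius derived from its target
    blurring intensity (the rounding rule is hypothetical)."""
    strips = np.array_split(np.asarray(background, dtype=float),
                            len(target_intensities), axis=0)
    blurred = [box_blur_rows(s, int(round(t)))
               for s, t in zip(strips, target_intensities)]
    return np.vstack(blurred)
```

A production implementation would use a proper defocus kernel and blend strip boundaries, but the per-strip intensity control is the same.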
也就是说，仅仅根据初步获取的景深信息确定的不同子区域的原始虚化强度，可能由于景深信息获取的不精确，而导致该虚化强度与真实光学虚焦效果对应的虚化强度有偏差，而本申请的实施例中，进一步根据主图像的显示方向和不同子区域的方位确定不同子区域的虚化权重，进而根据该虚化权重修正原始虚化强度确定目标虚化强度，使得虚化结果更加自然。That is, an original blurring strength determined solely from the initially acquired depth-of-field information may deviate from the strength a real optical defocus effect would produce, because the depth information itself may be inaccurate. In the embodiments of the present application, the blurring weights of the different sub-regions are further determined according to the display direction of the main image and the orientations of the sub-regions, and the original blurring strengths are then corrected by these weights to obtain the target blurring strengths, making the blurring result more natural.
需要说明的是，根据应用场景的不同，对不同子区域的原始虚化强度和对应的虚化权重确定不同子区域的目标虚化强度的实现方式不同，包括但不限于以下几种方式：It should be noted that, depending on the application scenario, the target blurring strengths of the different sub-regions can be determined from the original blurring strengths and the corresponding blurring weights in different ways, including but not limited to the following:
方式一：Method 1:
通过获取不同子区域的原始虚化强度与对应的虚化权重的乘积，确定不同子区域的目标虚化强度，比如，原始虚化强度分别为a和b的两个子区域的虚化权重分别为80%和90%，则可以将a*80%和b*90%分别作为针对上述两个子区域的目标虚化强度。The target blurring strength of each sub-region is obtained as the product of its original blurring strength and its blurring weight. For example, if two sub-regions with original blurring strengths a and b have blurring weights of 80% and 90% respectively, then a*80% and b*90% can be taken as the target blurring strengths of those two sub-regions.
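As a hedged illustration, Method 1 amounts to a single element-wise multiplication; the region names and numeric values below are hypothetical and stand in for the a/80% and b/90% example above:

```python
# Method 1 sketch: target strength = original strength x blurring weight.
# All names and values are illustrative, not from the published claims.
original_strength = {"region_a": 4.0, "region_b": 6.0}   # the "a" and "b" above
blur_weight = {"region_a": 0.80, "region_b": 0.90}       # 80% and 90%

target_strength = {
    region: original_strength[region] * blur_weight[region]
    for region in original_strength
}
# region_a: 4.0 * 0.80 = 3.2; region_b: 6.0 * 0.90 = 5.4
```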
方式二:Method 2:
为了使得相邻的子区域的虚化效果衔接更加自然，对原始虚化强度差距较大的相邻子区域的虚化权重进行一定程度的拉近处理，比如，对于相邻子区域1和2，如果子区域1的原始虚化强度相较于子区域2较大，则此时对子区域1的虚化权重乘以一个小于1的系数，其中，子区域1的原始虚化强度相较于子区域2越大，该系数越小，进而，将子区域1的原始虚化强度、虚化权重和系数的乘积作为子区域1的目标虚化强度，将子区域2的原始虚化强度和虚化权重的乘积作为子区域2的目标虚化强度，或者，对子区域2的虚化权重乘以一个大于1的系数，其中，子区域1的原始虚化强度相较于子区域2越大，该系数越大，进而，将子区域2的原始虚化强度、虚化权重和系数的乘积作为子区域2的目标虚化强度，将子区域1的原始虚化强度和虚化权重的乘积作为子区域1的目标虚化强度。To make the blurring effects of adjacent sub-regions transition more naturally, the blurring weights of adjacent sub-regions whose original blurring strengths differ greatly are pulled closer together to some degree. For example, for adjacent sub-regions 1 and 2, if the original blurring strength of sub-region 1 is larger than that of sub-region 2, the blurring weight of sub-region 1 is multiplied by a coefficient smaller than 1, where the larger the gap between the two original strengths, the smaller the coefficient; the product of sub-region 1's original blurring strength, blurring weight and coefficient is then taken as sub-region 1's target blurring strength, while the product of sub-region 2's original blurring strength and blurring weight is taken as sub-region 2's target blurring strength. Alternatively, the blurring weight of sub-region 2 is multiplied by a coefficient greater than 1, where the larger the gap, the larger the coefficient; the product of sub-region 2's original blurring strength, blurring weight and coefficient is then taken as sub-region 2's target blurring strength, while the product of sub-region 1's original blurring strength and blurring weight is taken as sub-region 1's target blurring strength.
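A minimal sketch of the first variant of Method 2 follows. The text only requires a coefficient below 1 that shrinks as the strength gap between the two adjacent sub-regions grows; the particular formula `1/(1 + k*gap)` below is an assumption chosen to satisfy that monotonic behaviour, not the patented form:

```python
# Method 2 sketch: the stronger adjacent sub-region's weight is additionally
# scaled by a coefficient < 1 that decreases as the strength gap grows,
# pulling the two target strengths closer together.
def pulled_targets(s1, w1, s2, w2, k=0.1):
    """s1 >= s2 are the original strengths of adjacent sub-regions 1 and 2."""
    coeff = 1.0 / (1.0 + k * (s1 - s2))   # < 1; smaller for larger gaps (assumed form)
    return s1 * w1 * coeff, s2 * w2

t1, t2 = pulled_targets(8.0, 0.9, 5.0, 0.9)
```

With k = 0.1 and a strength gap of 3, the coefficient is 1/1.3 ≈ 0.77, so sub-region 1's target drops from 7.2 to about 5.5 while sub-region 2's stays at 4.5 — closer together, as Method 2 intends.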
进一步地,根据不同子区域的目标虚化强度对所述主图像背景区域进行虚化处理的方式也包括但不限于以下处理方式:Further, the manner of performing the blurring process on the background area of the main image according to the target blurring intensity of different sub-areas also includes, but is not limited to, the following processing manners:
作为一种可能的实现方式:As a possible implementation:
根据不同子区域的目标虚化强度和不同子区域中各像素的景深信息确定背景区域中每个像素的虚化系数，根据背景区域中每个像素的虚化系数对背景区域进行高斯模糊处理生成虚化照片，由此，实现了背景景深信息越大，虚化程度越强的效果。The blurring coefficient of each pixel in the background area is determined from the target blurring strength of its sub-region and the depth-of-field information of the pixels in the different sub-regions, and the background area is then Gaussian-blurred according to each pixel's blurring coefficient to generate the blurred photo. In this way, the larger the background depth of field, the stronger the blurring.
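The per-pixel scheme above can be sketched as blending each background pixel toward a blurred copy of the image. In this sketch a 3x3 box blur stands in for the Gaussian blur, and the coefficient normalization (strength x depth, scaled to [0, 1]) is an assumption for illustration:

```python
import numpy as np

def box_blur(img):
    """3x3 box blur with edge padding; a stand-in for Gaussian blurring."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def blur_background(img, strength_map, depth_map):
    # Per-pixel blurring coefficient: grows with the sub-region's target
    # strength and with the pixel's depth, so farther background blurs more.
    raw = strength_map * depth_map
    coeff = raw / raw.max()                  # assumed normalization to [0, 1]
    return (1.0 - coeff) * img + coeff * box_blur(img)
```

Because the output is a convex blend of the original and blurred images, pixels with coefficient 0 stay sharp and pixels with coefficient 1 are fully blurred.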
当然,在本实施例中,如果不能精确获知到各像素的景深信息,可以如步骤102描述的方式来获取子区域的景深信息。Of course, in this embodiment, if the depth information of each pixel cannot be accurately obtained, the depth information of the sub-area may be acquired in the manner described in step 102.
为了更加清楚地描述本申请的背景虚化处理方式，下面结合具体的应用场景进行举例:To describe the background blurring processing of the present application more clearly, an example is given below with reference to a specific application scenario:
如图6所示，获取主摄像头拍摄的主图像以及副摄像头拍摄的副图像，并根据所述主图像和所述副图像获取所述主图像的景深信息，进而，根据所述景深信息和对焦区域确定所述主图像背景区域中不同子区域的原始虚化强度，将原始虚化强度乘以与该原始虚化强度所在子区域的虚化权重，获取到对应子区域的目标虚化强度，进而，根据该目标虚化强度对主图像的背景区域进行虚化处理得到虚化后的图像。As shown in FIG. 6, the main image captured by the main camera and the sub image captured by the sub camera are acquired, and the depth-of-field information of the main image is obtained from the main image and the sub image. The original blurring strengths of the different sub-regions in the background area of the main image are then determined from the depth information and the focus area; each original blurring strength is multiplied by the blurring weight of the sub-region it belongs to, yielding the target blurring strength of the corresponding sub-region; finally, the background area of the main image is blurred according to these target strengths to obtain the blurred image.
其中，继续参考图6，可以根据终端设备的陀螺仪信息推测主图像的显示方向，进而，根据主图像的显示方向，确定不同子区域的上下方位并按照由上到下权重递减策略确定所述不同子区域的虚化权重。With continued reference to FIG. 6, the display direction of the main image may be inferred from the gyroscope information of the terminal device; then, according to the display direction of the main image, the up-down orientations of the different sub-regions are determined, and their blurring weights are determined according to a top-to-bottom decreasing-weight strategy.
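A hedged sketch of the top-to-bottom decreasing-weight strategy: once the gyroscope-derived display direction fixes which sub-region rows are "upper", weights fall from 1.0 at the top row toward a floor at the bottom row. The linear falloff and the 0.5 floor are illustrative assumptions; the text only specifies that weights decrease from top to bottom:

```python
# Top-to-bottom decreasing weights for n_rows sub-region rows.
def row_weights(n_rows, floor=0.5):
    if n_rows == 1:
        return [1.0]
    step = (1.0 - floor) / (n_rows - 1)
    return [1.0 - i * step for i in range(n_rows)]
```

For example, `row_weights(5)` returns `[1.0, 0.875, 0.75, 0.625, 0.5]`, so upper background rows keep their original blurring strength almost intact while lower rows are attenuated.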
基于以上实施例,除了上述将背景区域按照与对焦区域平行的方向划分子区域外,还可以按照垂直方向(垂直于对焦区域的方向)将背景区域划分为不同子区域。Based on the above embodiment, in addition to dividing the background region into sub-regions in a direction parallel to the in-focus region, the background region may be divided into different sub-regions in a vertical direction (a direction perpendicular to the in-focus region).
根据景深信息和对焦区域确定主图像中前景区域的第一景深信息和背景区域的第二景深信息，按照第二景深信息的大小将背景区域分为不同的子区域，其中，距离对焦区域越近的子区域景深越小，根据第二景深信息获取主图像背景区域中不同子区域的平均景深信息，进而，根据第一景深信息和不同子区域的平均景深信息获取不同子区域的原始虚化强度。The first depth-of-field information of the foreground area and the second depth-of-field information of the background area in the main image are determined from the depth information and the focus area, and the background area is divided into different sub-regions according to the magnitude of the second depth information, where sub-regions closer to the focus area have smaller depths of field. The average depth-of-field information of the different sub-regions in the background area is obtained from the second depth information, and the original blurring strengths of the different sub-regions are then obtained from the first depth information and the sub-regions' average depth information.
具体而言，在本示例中，将背景区域按照景深的大小分为多个不同的子区域，进而，分别获取每个子区域中距离对焦区域最近位置的景深和距离对焦区域最远位置的景深，并根据该最近位置的景深和最远位置的景深取平均，将该平均后的景深作为对应子区域的平均景深信息，进而，根据该子区域的平均景深信息确定原始虚化强度，其中，平均景深信息越高，原始虚化强度越大。Specifically, in this example, the background area is divided into a plurality of different sub-regions according to depth of field. For each sub-region, the depth of field at the position nearest to the focus area and the depth of field at the position farthest from the focus area are obtained and averaged, and the averaged value is taken as the average depth-of-field information of that sub-region. The original blurring strength is then determined from the sub-region's average depth information, where the higher the average depth, the larger the original blurring strength.
其中，需要强调的是，如果子区域中距离对焦区域最近位置的景深和距离对焦区域最远位置的景深，由于景深计算精度的限制没有获取到，则可以根据该子区域中获取的景深信息中的景深概率分布计算出该子区域的平均景深等；或者，距离对焦区域较远的一个或多个子区域的景深信息获取不精确，则可以根据获取到的较为精确的距离对焦区域较近的多个子区域的平均景深信息的变化趋势，推导出距离对焦区域较远的一个或多个子区域的平均景深信息。It should be emphasized that if the depth of field at a sub-region's nearest position to the focus area or at its farthest position cannot be obtained due to limits on depth-calculation accuracy, the sub-region's average depth of field can instead be computed from the probability distribution of the depth values obtained within it. Alternatively, if the depth information of one or more sub-regions far from the focus area is obtained inaccurately, the average depth information of those sub-regions can be derived from the trend of the more accurately obtained average depth information of the sub-regions closer to the focus area.
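The depth statistics of the preceding two paragraphs can be sketched as follows. The nearest/farthest averaging matches the text; the linear continuation in `extrapolate_next` is an assumed stand-in for the "trend" the text mentions, since no specific extrapolation method is given:

```python
# Average depth of a sub-region: mean of its nearest and farthest depths
# from the focus area.
def average_depth(nearest, farthest):
    return (nearest + farthest) / 2.0

# Fallback for a far sub-region whose depth was measured unreliably:
# continue the trend of the nearer, reliable sub-regions (assumed linear).
def extrapolate_next(known_averages):
    if len(known_averages) < 2:
        return known_averages[-1]
    return known_averages[-1] + (known_averages[-1] - known_averages[-2])
```

For example, `average_depth(2.0, 6.0)` gives 4.0, and `extrapolate_next([2.0, 4.0, 6.0])` continues the trend to 8.0 for the next, farther sub-region.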
进而,在本申请的实施例中,可以采用与上述示例示出的同样的方式计算每个子区域的虚化权重进而获取每个子区域的目标虚化强度进行虚化处理。Further, in the embodiment of the present application, the blurring weight of each sub-area can be calculated in the same manner as the above-described example to obtain the target blurring intensity of each sub-area for blurring processing.
综上所述，本申请的背景虚化处理方法，获取主摄像头拍摄的主图像以及副摄像头拍摄的副图像，并根据主图像和副图像获取主图像的景深信息，根据景深信息和对焦区域确定主图像背景区域中不同子区域的原始虚化强度，根据所述主图像的显示方式确定不同子区域的分布方位，并按照与分布方位对应的权重设置策略确定不同子区域的虚化权重，进而，根据不同子区域的原始虚化强度和对应的虚化权重确定不同子区域的目标虚化强度，最终，根据不同子区域的目标虚化强度对主图像背景区域进行虚化处理。由此，实现了虚化效果更加自然，更加接近真实光学虚焦的效果。为了实现上述实施例，本申请还提出了一种背景虚化处理装置，图7是根据本申请一个实施例的背景虚化处理装置的结构示意图，如图7所示，该背景虚化处理装置包括：第一获取模块100、第一确定模块200、第二确定模块300、第三确定模块400和处理模块500。In summary, the background blur processing method of the present application acquires the main image captured by the main camera and the sub image captured by the sub camera, obtains the depth-of-field information of the main image from the main image and the sub image, determines the original blurring strengths of the different sub-regions in the background area of the main image from the depth information and the focus area, determines the distribution orientations of the different sub-regions from the display manner of the main image, determines the blurring weights of the different sub-regions according to the weight-setting strategy corresponding to those orientations, then determines the target blurring strengths of the different sub-regions from the original strengths and the corresponding weights, and finally blurs the background area of the main image according to the target strengths. The blurring effect is thereby more natural and closer to a real optical defocus effect. To implement the above embodiments, the present application further provides a background blur processing apparatus. FIG. 7 is a schematic structural diagram of a background blur processing apparatus according to an embodiment of the present application. As shown in FIG. 7, the apparatus includes a first obtaining module 100, a first determining module 200, a second determining module 300, a third determining module 400, and a processing module 500.
其中,第一获取模块100,用于获取主摄像头拍摄的主图像以及副摄像头拍摄的副图像,并根据主图像和副图像获取主图像的景深信息。The first obtaining module 100 is configured to acquire a main image captured by the main camera and a sub image captured by the sub camera, and acquire depth information of the main image according to the main image and the sub image.
第一确定模块200,用于根据景深信息和对焦区域确定主图像背景区域中不同子区域的原始虚化强度。The first determining module 200 is configured to determine an original blurring intensity of different sub-regions in the background area of the main image according to the depth of field information and the focus area.
在本申请的一个实施例中,如图8所示,在如图7所示的基础上,第一确定模块200包括第一确定单元210、第一获取单元220和第二获取单元230。In an embodiment of the present application, as shown in FIG. 8, on the basis of that shown in FIG. 7, the first determining module 200 includes a first determining unit 210, a first obtaining unit 220, and a second obtaining unit 230.
其中，第一确定单元210，用于根据景深信息和对焦区域确定主图像中前景区域的第一景深信息和背景区域的第二景深信息。The first determining unit 210 is configured to determine, according to the depth-of-field information and the focus area, first depth-of-field information of the foreground area in the main image and second depth-of-field information of the background area.
第一获取单元220,用于根据第二景深信息获取主图像背景区域中不同子区域的平均景深信息。The first obtaining unit 220 is configured to acquire, according to the second depth information, average depth information of different sub-regions in the background area of the main image.
第二获取单元230,用于根据第一景深信息和不同子区域的平均景深信息获取不同子区域的原始虚化强度。The second obtaining unit 230 is configured to acquire the original blurring intensity of different sub-regions according to the first depth information and the average depth information of different sub-regions.
第二确定模块300,用于根据主图像的显示方式确定不同子区域的分布方位,并按照与分布方位对应的权重设置策略确定不同子区域的虚化权重。The second determining module 300 is configured to determine a distributed orientation of different sub-regions according to a display manner of the main image, and determine a blurring weight of the different sub-regions according to a weight setting policy corresponding to the distributed orientation.
在本申请的一个实施例中,如图9所示,在如图7所示的基础上,第二确定模块300包括第三获取单元310和第二确定单元320。In an embodiment of the present application, as shown in FIG. 9, on the basis of that shown in FIG. 7, the second determining module 300 includes a third obtaining unit 310 and a second determining unit 320.
其中,第三获取单元310,用于获取不同子区域的定位坐标。The third obtaining unit 310 is configured to acquire positioning coordinates of different sub-regions.
第二确定单元320，用于根据定位坐标，按照预设的线性权重分布曲线或者非线性权重分布曲线查询确定不同子区域的虚化权重。The second determining unit 320 is configured to look up, according to the positioning coordinates, a preset linear weight distribution curve or nonlinear weight distribution curve to determine the blurring weights of the different sub-regions.
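A hedged sketch of the lookup performed by the second determining unit 320: a sub-region's weight is read off a preset curve at its positioning coordinate y (here assumed normalized, 0 at the top and 1 at the bottom). Both curve shapes and their endpoints are illustrative; the text leaves the curve unspecified beyond linear vs. nonlinear:

```python
# Preset weight distribution curves evaluated at a normalized coordinate y.
def weight_linear(y, top=1.0, bottom=0.5):
    return top + (bottom - top) * y

def weight_quadratic(y, top=1.0, bottom=0.5):
    return top + (bottom - top) * y ** 2   # nonlinear: decays slowly near the top
```

The quadratic variant keeps upper sub-regions close to full weight and concentrates the attenuation near the bottom, one plausible reason to prefer a nonlinear curve.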
第三确定模块400,用于根据不同子区域的原始虚化强度和对应的虚化权重确定不同子区域的目标虚化强度。The third determining module 400 is configured to determine target blurring strengths of different sub-regions according to original blurring strengths of different sub-regions and corresponding blurring weights.
处理模块500,用于根据不同子区域的目标虚化强度对主图像背景区域进行虚化处理。The processing module 500 is configured to perform a blurring process on the background area of the main image according to the target blurring intensity of different sub-regions.
需要说明的是,前述对方法实施例的描述,也适用于本申请实施例的装置,其实现原理类似,在此不再赘述。It should be noted that the foregoing description of the method embodiment is also applicable to the device in the embodiment of the present application, and the implementation principle is similar, and details are not described herein again.
上述背景虚化处理装置中各个模块的划分仅用于举例说明，在其他实施例中，可将背景虚化处理装置按照需要划分为不同的模块，以完成上述背景虚化处理装置的全部或部分功能。The division of the modules in the above background blur processing apparatus is for illustration only; in other embodiments, the apparatus may be divided into different modules as needed to implement all or part of the functions of the background blur processing apparatus.
综上所述，本申请的背景虚化处理装置，获取主摄像头拍摄的主图像以及副摄像头拍摄的副图像，并根据主图像和副图像获取主图像的景深信息，根据景深信息和对焦区域确定主图像背景区域中不同子区域的原始虚化强度，根据主图像的显示方式确定不同子区域的分布方位，并按照与分布方位对应的权重设置策略确定不同子区域的虚化权重，进而，根据不同子区域的原始虚化强度和对应的虚化权重确定不同子区域的目标虚化强度，最终，根据不同子区域的目标虚化强度对主图像背景区域进行虚化处理。由此，实现了虚化效果更加自然，更加接近真实光学虚焦的效果。In summary, the background blur processing apparatus of the present application acquires the main image captured by the main camera and the sub image captured by the sub camera, obtains the depth-of-field information of the main image from the main image and the sub image, determines the original blurring strengths of the different sub-regions in the background area of the main image from the depth information and the focus area, determines the distribution orientations of the different sub-regions from the display manner of the main image, determines the blurring weights of the different sub-regions according to the weight-setting strategy corresponding to those orientations, then determines the target blurring strengths of the different sub-regions from the original strengths and the corresponding weights, and finally blurs the background area of the main image according to the target strengths. The blurring effect is thereby more natural and closer to a real optical defocus effect.
为了实现上述实施例，本申请还提出了一种计算机设备，其中，计算机设备为包含存储计算机程序的存储器及运行计算机程序的处理器的任意设备，比如，可以为智能手机、个人电脑等，上述计算机设备中还包括图像处理电路，图像处理电路可以利用硬件和/或软件组件实现，可包括定义ISP(Image Signal Processing,图像信号处理)管线的各种处理单元。图10为一个实施例中图像处理电路的示意图。如图10所示，为便于说明，仅示出与本申请实施例相关的图像处理技术的各个方面。To implement the above embodiments, the present application further provides a computer device, which may be any device including a memory storing a computer program and a processor running the computer program, such as a smartphone or a personal computer. The computer device further includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 10 is a schematic diagram of an image processing circuit in one embodiment. As shown in FIG. 10, for ease of description, only the aspects of the image processing technique related to the embodiments of the present application are shown.
如图10所示，图像处理电路包括ISP处理器1040和控制逻辑器1050。成像设备1010捕捉的图像数据首先由ISP处理器1040处理，ISP处理器1040对图像数据进行分析以捕捉可用于确定和/或控制成像设备1010的一个或多个控制参数的图像统计信息。成像设备1010(照相机)可包括具有一个或多个透镜1012和图像传感器1014的摄像头，其中，为了实施本申请的背景虚化处理方法，成像设备1010包含两组摄像头，其中，继续参照图8，成像设备1010可基于主摄像头和副摄像头同时拍摄场景图像，图像传感器1014可包括色彩滤镜阵列(如Bayer滤镜)，图像传感器1014可获取用图像传感器1014的每个成像像素捕捉的光强度和波长信息，并提供可由ISP处理器1040处理的一组原始图像数据。传感器1020可基于传感器1020接口类型把原始图像数据提供给ISP处理器1040，其中，ISP处理器1040可基于传感器1020提供的主摄像头中的图像传感器1014获取的原始图像数据和副摄像头中的图像传感器1014获取的原始图像数据计算景深信息等。传感器1020接口可以利用SMIA(Standard Mobile Imaging Architecture,标准移动成像架构)接口、其它串行或并行照相机接口或上述接口的组合。As shown in FIG. 10, the image processing circuit includes an ISP processor 1040 and control logic 1050. Image data captured by the imaging device 1010 is first processed by the ISP processor 1040, which analyzes the image data to capture image statistics that can be used to determine and/or control one or more control parameters of the imaging device 1010. The imaging device 1010 (camera) may include a camera having one or more lenses 1012 and an image sensor 1014; to implement the background blurring method of the present application, the imaging device 1010 includes two sets of cameras. With continued reference to FIG. 8, the imaging device 1010 can capture images of a scene with the main camera and the sub camera simultaneously. The image sensor 1014 may include a color filter array (such as a Bayer filter), can acquire the light intensity and wavelength information captured by each of its imaging pixels, and can provide a set of raw image data that can be processed by the ISP processor 1040. The sensor 1020 can provide the raw image data to the ISP processor 1040 according to the sensor 1020 interface type, and the ISP processor 1040 can calculate depth-of-field information and the like from the raw image data acquired by the image sensor 1014 in the main camera and the raw image data acquired by the image sensor 1014 in the sub camera, both provided via the sensor 1020. The sensor 1020 interface may be a SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above interfaces.
ISP处理器1040按多种格式逐个像素地处理原始图像数据。例如,每个图像像素可具有8、10、12或14比特的位深度,ISP处理器1040可对原始图像数据进行一个或多个图像处理操作、收集关于图像数据的统计信息。其中,图像处理操作可按相同或不同的位深度精度进行。The ISP processor 1040 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel can have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 1040 can perform one or more image processing operations on the raw image data, collecting statistical information about the image data. Among them, image processing operations can be performed with the same or different bit depth precision.
ISP处理器1040还可从图像存储器1030接收像素数据。例如，从传感器1020接口将原始像素数据发送给图像存储器1030，图像存储器1030中的原始像素数据再提供给ISP处理器1040以供处理。图像存储器1030可为存储器装置的一部分、存储设备、或电子设备内的独立的专用存储器，并可包括DMA(Direct Memory Access,直接存储器存取)特征。The ISP processor 1040 can also receive pixel data from the image memory 1030. For example, raw pixel data is sent from the sensor 1020 interface to the image memory 1030, and the raw pixel data in the image memory 1030 is then provided to the ISP processor 1040 for processing. The image memory 1030 can be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and can include DMA (Direct Memory Access) features.
当接收到来自传感器1020接口或来自图像存储器1030的原始图像数据时,ISP处理器1040可进行一个或多个图像处理操作,如时域滤波。处理后的图像数据可发送给图像存储器1030,以便在被显示之前进行另外的处理。ISP处理器1040从图像存储器1030接收处理数据,并对所述处理数据进行原始域中以及RGB和YCbCr颜色空间中的图像数据处理。处理后的图像数据可输出给显示器1070,以供用户观看和/或由图形引擎或GPU(Graphics Processing Unit,图形处理器)进一步处理。此外,ISP处理器1040的输出还可发送给图像存储器1030,且显示器1070可从图像存储器1030读取图像数据。在一个实施例中,图像存储器1030可被配置为实现一个或多个帧缓冲器。此外,ISP处理器1040的输出可发送给编码器/解码器1060,以便编码/解码图像数据。编码的图像数据可被保存,并在显示于显示器1070设备上之前解压缩。编码器/解码器1060可由CPU或GPU或协处理器实现。When receiving raw image data from sensor 1020 interface or from image memory 1030, ISP processor 1040 can perform one or more image processing operations, such as time domain filtering. The processed image data can be sent to image memory 1030 for additional processing prior to being displayed. The ISP processor 1040 receives the processed data from the image memory 1030 and performs image data processing in the original domain and in the RGB and YCbCr color spaces. The processed image data may be output to display 1070 for viewing by a user and/or further processed by a graphics engine or a GPU (Graphics Processing Unit). Additionally, the output of ISP processor 1040 can also be sent to image memory 1030, and display 1070 can read image data from image memory 1030. In one embodiment, image memory 1030 can be configured to implement one or more frame buffers. Additionally, the output of ISP processor 1040 can be sent to encoder/decoder 1060 to encode/decode image data. The encoded image data can be saved and decompressed before being displayed on the display 1070 device. Encoder/decoder 1060 can be implemented by a CPU or GPU or coprocessor.
ISP处理器1040确定的统计数据可发送给控制逻辑器1050单元。例如，统计数据可包括自动曝光、自动白平衡、自动聚焦、闪烁检测、黑电平补偿、透镜1012阴影校正等图像传感器1014统计信息。控制逻辑器1050可包括执行一个或多个例程(如固件)的处理器和/或微控制器，一个或多个例程可根据接收的统计数据，确定成像设备1010的控制参数以及ISP处理器1040的控制参数。例如，控制参数可包括传感器1020控制参数(例如增益、曝光控制的积分时间)、照相机闪光控制参数、透镜1012控制参数(例如聚焦或变焦用焦距)、或这些参数的组合。ISP控制参数可包括用于自动白平衡和颜色调整(例如，在RGB处理期间)的增益水平和色彩校正矩阵，以及透镜1012阴影校正参数。The statistics determined by the ISP processor 1040 can be sent to the control logic 1050 unit. For example, the statistics may include image sensor 1014 statistics such as auto exposure, auto white balance, auto focus, flicker detection, black-level compensation, and lens 1012 shading correction. The control logic 1050 may include a processor and/or microcontroller executing one or more routines (such as firmware), and the one or more routines can determine, from the received statistics, control parameters of the imaging device 1010 as well as control parameters of the ISP processor 1040. For example, the control parameters may include sensor 1020 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 1012 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and a color correction matrix for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 1012 shading correction parameters.
以下为运用图10中图像处理技术实现背景虚化处理方法的步骤:The following are the steps for implementing the background blurring method using the image processing technique in FIG. 10:
获取主摄像头拍摄的主图像以及副摄像头拍摄的副图像,并根据所述主图像和所述副图像获取所述主图像的景深信息;Acquiring a main image captured by the main camera and a sub image captured by the sub camera, and acquiring depth information of the main image according to the main image and the sub image;
根据所述景深信息和对焦区域确定所述主图像背景区域中不同子区域的原始虚化强度;Determining an original blurring intensity of different sub-regions in the background area of the main image according to the depth of field information and the focus area;
根据所述主图像的显示方式确定所述不同子区域的分布方位,并按照与所述分布方位对应的权重设置策略确定所述不同子区域的虚化权重;Determining a distribution orientation of the different sub-regions according to a display manner of the main image, and determining a blur weight of the different sub-regions according to a weight setting policy corresponding to the distribution orientation;
根据所述不同子区域的原始虚化强度和对应的虚化权重确定所述不同子区域的目标虚化强度;Determining a target blurring intensity of the different sub-regions according to an original blurring intensity of the different sub-regions and a corresponding blurring weight;
根据所述不同子区域的目标虚化强度对所述主图像背景区域进行虚化处理。The main image background area is blurred according to the target blurring intensity of the different sub-areas.
为了实现上述实施例,本申请还提出一种非临时性计算机可读存储介质,当所述存储介质中的指令由处理器被执行时,使得能够执行如上述实施例的背景虚化处理方法。In order to implement the above embodiments, the present application also proposes a non-transitory computer readable storage medium that enables execution of a background blurring processing method as in the above embodiment when instructions in the storage medium are executed by a processor.
在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不必须针对的是相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。In the description of the present specification, the description with reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" and the like means a specific feature described in connection with the embodiment or example. A structure, material or feature is included in at least one embodiment or example of the application. In the present specification, the schematic representation of the above terms is not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, various embodiments or examples described in the specification, as well as features of various embodiments or examples, may be combined and combined.
此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。在本申请的描述中,“多个”的含义是至少两个,例如两个,三个等,除非另有明确具体的限定。Moreover, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, features defining "first" or "second" may include at least one of the features, either explicitly or implicitly. In the description of the present application, the meaning of "a plurality" is at least two, such as two, three, etc., unless specifically defined otherwise.
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现定制逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本申请的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本申请的实施例所属技术领域的技术人员所理解。Any process or method description in the flowcharts or otherwise described herein may be understood to represent a module, segment or portion of code comprising one or more executable instructions for implementing the steps of a custom logic function or process. And the scope of the preferred embodiments of the present application includes additional implementations, in which the functions may be performed in a substantially simultaneous manner or in the reverse order depending on the functions involved, in accordance with the illustrated or discussed order. It will be understood by those skilled in the art to which the embodiments of the present application pertain.
在流程图中表示或在此以其他方式描述的逻辑和/或步骤，例如，可以被认为是用于实现逻辑功能的可执行指令的定序列表，可以具体实现在任何计算机可读介质中，以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用，或结合这些指令执行系统、装置或设备而使用。就本说明书而言，"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下：具有一个或多个布线的电连接部(电子装置)，便携式计算机盘盒(磁装置)，随机存取存储器(RAM)，只读存储器(ROM)，可擦除可编辑只读存储器(EPROM或闪速存储器)，光纤装置，以及便携式光盘只读存储器(CDROM)。另外，计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质，因为可以例如通过对纸或其他介质进行光学扫描，接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序，然后将其存储在计算机存储器中。The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from the instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically by, for example, optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then storing it in a computer memory.
应当理解，本申请的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中，多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。如，如果用硬件来实现和在另一实施方式中一样，可用本领域公知的下列技术中的任一项或他们的组合来实现：具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路，具有合适的组合逻辑门电路的专用集成电路，可编程门阵列(PGA)，现场可编程门阵列(FPGA)等。It should be understood that the parts of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any of the following techniques known in the art, or a combination thereof: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
本技术领域的普通技术人员可以理解实现上述实施例方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。One of ordinary skill in the art can understand that all or part of the steps carried by the method of implementing the above embodiments can be completed by a program to instruct related hardware, and the program can be stored in a computer readable storage medium. When executed, one or a combination of the steps of the method embodiments is included.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. When implemented in the form of a software functional module and sold or used as an independent product, the integrated module may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (15)

  1. A background blurring processing method, comprising:
    acquiring a main image captured by a main camera and a sub-image captured by a sub-camera, and acquiring depth-of-field information of the main image according to the main image and the sub-image;
    determining original blurring intensities of different sub-regions in a background area of the main image according to the depth-of-field information and a focus area;
    determining a distribution orientation of the different sub-regions according to a display manner of the main image, and determining blurring weights of the different sub-regions according to a weight setting strategy corresponding to the distribution orientation;
    determining target blurring intensities of the different sub-regions according to the original blurring intensities of the different sub-regions and the corresponding blurring weights; and
    performing blurring processing on the background area of the main image according to the target blurring intensities of the different sub-regions.
  2. The method according to claim 1, wherein determining the original blurring intensities of the different sub-regions in the background area of the main image according to the depth-of-field information and the focus area comprises:
    determining first depth-of-field information of a foreground area in the main image and second depth-of-field information of the background area according to the depth-of-field information and the focus area;
    acquiring average depth-of-field information of the different sub-regions in the background area of the main image according to the second depth-of-field information; and
    acquiring the original blurring intensities of the different sub-regions according to the first depth-of-field information and the average depth-of-field information of the different sub-regions.
  3. The method according to claim 1 or 2, wherein, when the distribution orientation of the different sub-regions is a top-to-bottom orientation, determining the blurring weights of the different sub-regions according to the weight setting strategy corresponding to the distribution orientation comprises:
    determining the blurring weights of the different sub-regions according to a top-to-bottom weight decrementing strategy.
  4. The method according to claim 3, wherein determining the blurring weights of the different sub-regions according to the top-to-bottom weight decrementing strategy comprises:
    acquiring positioning coordinates of the different sub-regions; and
    determining the blurring weights of the different sub-regions by querying a preset linear weight distribution curve or a preset nonlinear weight distribution curve according to the positioning coordinates.
  5. The method according to any one of claims 1-4, wherein determining the target blurring intensities of the different sub-regions according to the original blurring intensities of the different sub-regions and the corresponding blurring weights comprises:
    obtaining products of the original blurring intensities of the different sub-regions and the corresponding blurring weights to determine the target blurring intensities of the different sub-regions.
  6. The method according to any one of claims 1-5, wherein performing the blurring processing on the background area of the main image according to the target blurring intensities of the different sub-regions comprises:
    determining a blurring coefficient of each pixel in the background area according to the target blurring intensities of the different sub-regions and depth-of-field information of each pixel in the different sub-regions; and
    performing Gaussian blur processing on the background area according to the blurring coefficient of each pixel in the background area to generate a blurred photograph.
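The weighting scheme of claims 1-6 can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the patented implementation: the depth-gap-to-intensity mapping, the endpoints of the linear weight curve, and every function name (`original_intensities`, `top_down_weights`, and so on) are invented for illustration.

```python
import numpy as np

def original_intensities(fg_depth, region_avg_depths, max_intensity=1.0):
    """Original blurring intensity per background sub-region (claim 2):
    the farther a region's average depth lies from the in-focus
    foreground depth, the stronger its base blur (one plausible mapping)."""
    gaps = np.abs(np.asarray(region_avg_depths, dtype=float) - fg_depth)
    return max_intensity * gaps / gaps.max()

def top_down_weights(n_regions, top=1.0, bottom=0.2):
    """Blurring weights decreasing from top to bottom along a preset
    linear weight distribution curve (claims 3-4); the 1.0 -> 0.2
    endpoints are assumptions."""
    return np.linspace(top, bottom, n_regions)

def target_intensities(orig, weights):
    """Target intensity = original intensity x blurring weight (claim 5)."""
    return orig * weights

# Example: four horizontal background sub-regions, foreground in focus
# at depth 1.0, with the top region farthest from the camera.
fg = 1.0
avg_depths = [5.0, 4.0, 3.0, 2.0]
orig = original_intensities(fg, avg_depths)   # [1.0, 0.75, 0.5, 0.25]
w = top_down_weights(4)                       # decreasing top to bottom
tgt = target_intensities(orig, w)
```

The resulting `tgt` values would then drive the per-pixel blurring coefficients of claim 6, for example by feeding a Gaussian blur whose kernel size grows with the coefficient.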
  7. A background blurring processing apparatus, comprising:
    a first acquiring module, configured to acquire a main image captured by a main camera and a sub-image captured by a sub-camera, and to acquire depth-of-field information of the main image according to the main image and the sub-image;
    a first determining module, configured to determine original blurring intensities of different sub-regions in a background area of the main image according to the depth-of-field information and a focus area;
    a second determining module, configured to determine a distribution orientation of the different sub-regions according to a display manner of the main image, and to determine blurring weights of the different sub-regions according to a weight setting strategy corresponding to the distribution orientation;
    a third determining module, configured to determine target blurring intensities of the different sub-regions according to the original blurring intensities of the different sub-regions and the corresponding blurring weights; and
    a processing module, configured to perform blurring processing on the background area of the main image according to the target blurring intensities of the different sub-regions.
  8. The apparatus according to claim 7, wherein the first determining module comprises:
    a first determining unit, configured to determine first depth-of-field information of a foreground area in the main image and second depth-of-field information of the background area according to the depth-of-field information and the focus area;
    a first acquiring unit, configured to acquire average depth-of-field information of the different sub-regions in the background area of the main image according to the second depth-of-field information; and
    a second acquiring unit, configured to acquire the original blurring intensities of the different sub-regions according to the first depth-of-field information and the average depth-of-field information of the different sub-regions.
  9. The apparatus according to claim 7 or 8, wherein, when the distribution orientation of the different sub-regions is a top-to-bottom orientation, the second determining module is configured to:
    determine the blurring weights of the different sub-regions according to a top-to-bottom weight decrementing strategy.
  10. The apparatus according to claim 9, wherein the second determining module is configured to:
    acquire positioning coordinates of the different sub-regions; and
    determine the blurring weights of the different sub-regions by querying a preset linear weight distribution curve or a preset nonlinear weight distribution curve according to the positioning coordinates.
  11. The apparatus according to any one of claims 7-10, wherein the third determining module is configured to:
    obtain products of the original blurring intensities of the different sub-regions and the corresponding blurring weights to determine the target blurring intensities of the different sub-regions.
  12. The apparatus according to any one of claims 7-11, wherein the processing module is configured to:
    determine a blurring coefficient of each pixel in the background area according to the target blurring intensities of the different sub-regions and depth-of-field information of each pixel in the different sub-regions; and
    perform Gaussian blur processing on the background area according to the blurring coefficient of each pixel in the background area to generate a blurred photograph.
  13. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein, when the processor executes the program, the background blurring processing method according to any one of claims 1-6 is implemented.
  14. A computer-readable storage medium having a computer program stored thereon, wherein, when the program is executed by a processor, the background blurring processing method according to any one of claims 1-6 is implemented.
  15. An image processing circuit, comprising an ISP processor, wherein the ISP processor comprises a main camera and a sub-camera each having an image sensor,
    wherein the ISP processor is configured to: acquire, through an interface of the image sensors, a main image captured by the main camera and a sub-image captured by the sub-camera; acquire depth-of-field information of the main image according to the main image and the sub-image; determine original blurring intensities of different sub-regions in a background area of the main image according to the depth-of-field information and a focus area; determine a distribution orientation of the different sub-regions according to a display manner of the main image, and determine blurring weights of the different sub-regions according to a weight setting strategy corresponding to the distribution orientation; determine target blurring intensities of the different sub-regions according to the original blurring intensities of the different sub-regions and the corresponding blurring weights; and perform blurring processing on the background area of the main image according to the target blurring intensities of the different sub-regions.
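One plausible realization of the final per-pixel Gaussian blur step (claims 6, 12, and 15) is to precompute a few uniformly blurred copies of the image and then select, per pixel, the copy whose blur level matches that pixel's blurring coefficient. This level-selection approximation and all function names are assumptions for illustration; the application does not specify this scheme.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D normalized Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur_separable(img, sigma):
    """Separable Gaussian blur of a 2-D grayscale image."""
    if sigma <= 0:
        return img.copy()
    # Truncate the kernel so it never exceeds the image extent
    # (np.convolve mode="same" returns max(M, N) samples otherwise).
    radius = min(int(3 * sigma) + 1, (min(img.shape) - 1) // 2)
    k = gaussian_kernel(sigma, radius)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def variable_blur(img, coeff_map, max_sigma=4.0, levels=4):
    """Approximate per-pixel blur: blur the image at `levels` discrete
    sigmas, then pick each output pixel from the level indexed by its
    blurring coefficient (coeff_map values in [0, 1])."""
    sigmas = np.linspace(0.0, max_sigma, levels)
    stack = np.stack([blur_separable(img, s) for s in sigmas])
    idx = np.clip((coeff_map * (levels - 1)).round().astype(int), 0, levels - 1)
    rows, cols = np.indices(img.shape)
    return stack[idx, rows, cols]
```

A production ISP would more likely use a hardware blur unit or a library routine (e.g. an OpenCV-style Gaussian blur) per level; the discrete-level trick above simply keeps a depth-varying blur cheap by reusing a handful of uniformly blurred copies.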
PCT/CN2018/116475 2017-11-30 2018-11-20 Background blurring method and apparatus, and device WO2019105261A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711242134.4 2017-11-30
CN201711242134.4A CN108053363A (en) 2017-11-30 2017-11-30 Background blurring processing method, device and equipment

Publications (1)

Publication Number Publication Date
WO2019105261A1

Family

ID=62121994

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/116475 WO2019105261A1 (en) 2017-11-30 2018-11-20 Background blurring method and apparatus, and device

Country Status (2)

Country Link
CN (1) CN108053363A (en)
WO (1) WO2019105261A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053363A (en) * 2017-11-30 2018-05-18 广东欧珀移动通信有限公司 Background blurring processing method, device and equipment
CN110555809B (en) * 2018-06-04 2022-03-15 瑞昱半导体股份有限公司 Background blurring method based on foreground image and electronic device
CN109191469A (en) * 2018-08-17 2019-01-11 广东工业大学 A kind of image automatic focusing method, apparatus, equipment and readable storage medium storing program for executing
EP3873083A4 (en) 2018-11-02 2021-12-01 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Depth image processing method, depth image processing apparatus and electronic apparatus
KR102597518B1 (en) * 2019-02-20 2023-11-03 삼성전자주식회사 An electronic dievice applying bokeh effect to image and controlling method thereof
CN111539960B (en) * 2019-03-25 2023-10-24 华为技术有限公司 Image processing method and related device
CN112785487B (en) * 2019-11-06 2023-08-04 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment
WO2021114061A1 (en) * 2019-12-09 2021-06-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Electric device and method of controlling an electric device
CN112040203B (en) * 2020-09-02 2022-07-05 Oppo(重庆)智能科技有限公司 Computer storage medium, terminal device, image processing method and device
CN113066001B (en) * 2021-02-26 2024-10-22 华为技术有限公司 Image processing method and related equipment
CN114339071A (en) * 2021-12-28 2022-04-12 维沃移动通信有限公司 Image processing circuit, image processing method and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106993112A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 Background-blurring method and device and electronic installation based on the depth of field
CN107395965A (en) * 2017-07-14 2017-11-24 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN108053363A (en) * 2017-11-30 2018-05-18 广东欧珀移动通信有限公司 Background blurring processing method, device and equipment

Also Published As

Publication number Publication date
CN108053363A (en) 2018-05-18

Similar Documents

Publication Publication Date Title
WO2019105261A1 (en) Background blurring method and apparatus, and device
WO2019105262A1 (en) Background blur processing method, apparatus, and device
JP7003238B2 (en) Image processing methods, devices, and devices
CN107945105B (en) Background blurring processing method, device and equipment
EP3480784B1 (en) Image processing method, and device
US10757312B2 (en) Method for image-processing and mobile terminal using dual cameras
JP6911192B2 (en) Image processing methods, equipment and devices
US9998650B2 (en) Image processing apparatus and image pickup apparatus for adding blur in an image according to depth map
WO2019109805A1 (en) Method and device for processing image
KR102143456B1 (en) Depth information acquisition method and apparatus, and image collection device
WO2019105214A1 (en) Image blurring method and apparatus, mobile terminal and storage medium
WO2020259271A1 (en) Image distortion correction method and apparatus
WO2019105254A1 (en) Background blur processing method, apparatus and device
CN108154514B (en) Image processing method, device and equipment
WO2019105297A1 (en) Image blurring method and apparatus, mobile device, and storage medium
WO2019105260A1 (en) Depth of field obtaining method, apparatus and device
CN107872631B (en) Image shooting method and device based on double cameras and mobile terminal
JP2016208075A (en) Image output device, method for controlling the same, imaging apparatus, and program
US20230033956A1 (en) Estimating depth based on iris size
CN117058183A (en) Image processing method and device based on double cameras, electronic equipment and storage medium
WO2022011657A1 (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN112866547A (en) Focusing method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 18884721; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
     Ref country code: DE
122  Ep: pct application non-entry in european phase
     Ref document number: 18884721; Country of ref document: EP; Kind code of ref document: A1