CN107241559B - Portrait photographing method and device and camera equipment - Google Patents
Portrait photographing method and device and camera equipment
- Publication number
- CN107241559B (application CN201710461638.9A)
- Authority
- CN
- China
- Prior art keywords
- portrait
- image
- brightness
- current shooting
- shooting scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/76—Circuitry for compensating brightness variation in the scene by influencing the image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/71—Circuitry for evaluating the brightness variation
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
The invention discloses a portrait photographing method and device and camera equipment. The method comprises the following steps: when the current shooting scene is detected to be a backlight scene, extracting a portrait part in the current shooting scene based on the depth of field information of the current shooting scene; acquiring first exposure compensation of the portrait part, and shooting the current shooting scene according to the first exposure compensation to obtain a first image; extracting a background part in the current shooting scene; acquiring second exposure compensation of the background part, and shooting the current shooting scene according to the second exposure compensation to obtain a second image; and carrying out fusion processing on the first image and the second image to obtain a fused target image. Embodiments of the invention achieve automatic exposure of a backlit portrait based on multi-frame fusion, so that both the portrait and the background in the finally obtained image are reasonably exposed, thereby achieving a better visual effect and improving user experience.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a portrait photographing method and device and camera equipment.
Background
With the popularization of digital cameras and various mobile terminals with shooting functions, taking digital photographs has become a common part of people's lives. When photographing, the subject is often backlit.
In the related art, when a photo is taken under a backlight condition, the conventional global automatic photometry method is generally used to expose the current shooting scene. However, in a photo exposed in this way, the portrait is often underexposed and noticeably dark, while the background is too bright and tends to be overexposed. For example, as shown in fig. 1, under the combined effect of portrait underexposure and background overexposure, it is difficult for the photograph to achieve a satisfactory visual effect, and the user experience is degraded.
Disclosure of Invention
The object of the present invention is to solve, at least to some extent, one of the above-mentioned technical problems.
Therefore, a first object of the invention is to provide a portrait photographing method. The method achieves automatic exposure of the backlit portrait based on multi-frame fusion, and enables both the portrait and the background in the finally obtained image to be reasonably exposed, thereby achieving a better visual effect and improving user experience.
The second objective of the present invention is to provide a portrait photographing apparatus.
A third object of the present invention is to propose an image pickup apparatus.
A fourth object of the invention is to propose a non-transitory computer-readable storage medium.
In order to achieve the above object, an embodiment of the first aspect of the invention provides a portrait photographing method, including: when it is detected that a current shooting scene is a backlight scene, extracting a portrait part in the current shooting scene based on depth of field information of the current shooting scene; acquiring first exposure compensation of the portrait part, and shooting the current shooting scene according to the first exposure compensation to obtain a first image; extracting a background part in the current shooting scene; acquiring second exposure compensation of the background part, and shooting the current shooting scene according to the second exposure compensation to obtain a second image; and carrying out fusion processing on the first image and the second image to obtain a fused target image.
According to the portrait photographing method of the embodiment of the invention, when the current shooting scene is detected to be a backlight scene, the portrait part in the current shooting scene is extracted based on the depth of field information of the current shooting scene. A first exposure compensation of the portrait part is obtained, and the current shooting scene is shot according to the first exposure compensation to obtain a first image. The background part in the current shooting scene is then extracted, a second exposure compensation of the background part is obtained, and the current shooting scene is shot according to the second exposure compensation to obtain a second image. Finally, the first image and the second image are fused to obtain a fused target image. Automatic exposure of the backlit portrait based on multi-frame fusion is thereby achieved: two images with different exposure compensations are shot and fused, so that, compared with the result of exposure by the global automatic photometry method, both the portrait part and the background part of the final image are reasonably exposed, realizing a better visual effect and improving user experience.
In order to achieve the above object, a portrait photographing apparatus according to a second aspect of the present invention includes: the first acquisition module is used for acquiring the depth of field information of the current shooting scene; the first extraction module is used for extracting a portrait part in the current shooting scene based on the depth of field information when the current shooting scene is detected to be a backlight scene; the shooting module is used for acquiring first exposure compensation of the portrait part and shooting the current shooting scene according to the first exposure compensation to obtain a first image; the second extraction module is used for extracting a background part in the current shooting scene; the shooting module is further used for acquiring second exposure compensation of the background part and shooting the current shooting scene according to the second exposure compensation to obtain a second image; and the fusion module is used for carrying out fusion processing on the first image and the second image so as to obtain a fused target image.
According to the portrait photographing device of the embodiment of the invention, when the current shooting scene is detected to be a backlight scene, the first extraction module extracts the portrait part in the current shooting scene based on the depth of field information of the current shooting scene. The shooting module obtains the first exposure compensation of the portrait part and shoots the current shooting scene according to the first exposure compensation to obtain a first image. The second extraction module extracts the background part in the current shooting scene, and the shooting module obtains the second exposure compensation of the background part and shoots the current shooting scene according to the second exposure compensation to obtain a second image. The fusion module then fuses the first image and the second image to obtain a fused target image. Automatic exposure of the backlit portrait based on multi-frame fusion is thereby achieved: two images with different exposure compensations are shot and fused, so that, compared with the result of exposure by the global automatic photometry method, both the portrait part and the background part of the final image are reasonably exposed, realizing a better visual effect and improving user experience.
To achieve the above object, an image capturing apparatus according to an embodiment of the third aspect of the present invention includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the portrait photographing method according to the embodiment of the first aspect of the present invention is implemented.
To achieve the above object, a non-transitory computer-readable storage medium is provided in an embodiment of a fourth aspect of the present invention, on which a computer program is stored, and the computer program is executed by a processor to implement the portrait photographing method according to the embodiment of the first aspect of the present invention.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is an exemplary diagram of a photograph after exposure using a conventional global automatic photometry method when taking a photograph under a backlight condition;
FIG. 2 is a flow chart of a method of photographing a human image according to one embodiment of the invention;
FIG. 3 is a schematic structural diagram of a portrait photographing apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a portrait photographing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic configuration diagram of an image pickup apparatus according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
A person image photographing method, apparatus, and image pickup device according to an embodiment of the present invention are described below with reference to the accompanying drawings.
Fig. 2 is a flowchart of a portrait photographing method according to an embodiment of the present invention. It should be noted that the portrait photographing method according to the embodiment of the present invention can be applied to the portrait photographing apparatus according to the embodiment of the present invention, and the portrait photographing apparatus can be configured in the image capturing device. The image capturing apparatus may be an apparatus having a shooting function, for example, a mobile terminal (e.g., a hardware apparatus having various operating systems, such as a mobile phone and a tablet computer), a digital camera, and the like.
As shown in fig. 2, the portrait photographing method may include:
S210, when the current shooting scene is detected to be a backlight scene, extracting a portrait part in the current shooting scene based on the depth of field information of the current shooting scene. The portrait part can be understood as the portrait in the current shooting scene, and the portrait comprises a human face part and a body part.
Before extracting the portrait part in the current shooting scene based on the depth of field information of the current shooting scene, the depth of field information of the current shooting scene may be acquired. The depth of field is the range of distances in front of and behind the subject, measured from the front edge of the camera lens or other imager, within which a sharp image can be obtained. After focusing is completed, a clear image can be formed within a certain range in front of and behind the focal point; this range of distances is called the depth of field. In other words, there is a space of a certain length in front of and behind the focal plane; when the subject lies within this space, the blur of its image on the sensor or film stays within the allowable circle of confusion, and the length of this space is the depth of field.
Preferably, in an embodiment of the present invention, the depth of field information of the current shooting scene may be acquired by dual cameras or by a depth RGBD camera (RGB + depth, a color-depth image containing both color information and distance information). Taking dual cameras as an example, a specific implementation may be as follows: a first angle θ1 between the subject and the left camera and a second angle θ2 between the subject and the right camera are calculated; then, using the triangle principle with the center distance between the left and right cameras (a fixed value), θ1 and θ2, the distance between the subject and the lens is calculated, and this distance is the depth of field information of the current shooting scene.
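For illustration only, a minimal sketch of this dual-camera triangulation is given below, assuming θ1 and θ2 are the interior angles that the subject direction makes with the camera baseline at the left and right cameras; the function name, the unit convention and the example values are illustrative assumptions and are not part of the disclosed method.

```python
import math

def depth_from_stereo_angles(theta1_deg: float, theta2_deg: float, baseline_mm: float) -> float:
    """Perpendicular distance from the subject to the camera baseline.

    theta1_deg / theta2_deg: interior angles between the subject direction and the
    baseline, measured at the left and right camera respectively.
    baseline_mm: fixed center distance between the two cameras.
    """
    t1 = math.radians(theta1_deg)
    t2 = math.radians(theta2_deg)
    # Law of sines in the (left camera, right camera, subject) triangle gives the
    # perpendicular distance from the subject to the baseline.
    return baseline_mm * math.sin(t1) * math.sin(t2) / math.sin(t1 + t2)

# Example: 20 mm baseline, subject roughly centered about 1 m in front of the cameras.
print(round(depth_from_stereo_angles(89.43, 89.43, 20.0)))  # ≈ 1005 (mm)
```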
For another example, taking the depth RGBD camera as an example, a specific implementation process of acquiring the depth information of the current shooting scene through the depth RGBD camera may be as follows: and detecting the distance between the shot object and the camera by using a depth detector (such as an infrared sensor) in the depth RGBD camera, wherein the distance is the depth of field information of the current shooting scene.
When the current shooting scene is detected to be a backlight scene, a face area in the current shooting scene can be identified by a face detection technology, and the depth of field information of the current shooting scene can be obtained, for example, through the depth RGBD camera. The distance between the face and the lens can then be calculated from the depth of field information, and the portrait part in the current shooting scene can be determined according to this distance. More specifically, the area where the face is located is first located by the face detection technology, the distance between the face and the lens is calculated from the depth of field information of the current shooting scene through the following formula (1), and the whole portrait part is then located according to this distance. Formula (1) can be expressed as follows:
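As an illustrative reconstruction (the original rendering of formula (1) is not reproduced in this text), the expression below is the standard total depth-of-field relation consistent with the variable definitions that follow, and should be read as an assumption rather than a verbatim copy of the patent's formula:

$$\Delta L \;=\; \frac{2 f^{2} F \sigma L^{2}}{f^{4} - F^{2} \sigma^{2} L^{2}} \qquad (1)$$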
wherein ΔL is the depth of field information of the current shooting scene, f is the focal length of the lens, F is the aperture value when the lens shoots, σ is the diameter of the circle of confusion, and L is the distance between the face and the lens.
S220, acquiring first exposure compensation of the portrait part, and shooting the current shooting scene according to the first exposure compensation to obtain a first image.
Specifically, after the portrait portion in the current shooting scene is obtained, a photometry result may be calculated separately for the portrait portion, and the first exposure compensation corresponding to the portrait portion may be obtained from that photometry result. More specifically, the luminance value of the portrait portion may be acquired first, and then, taking the preset exposure time as a reference, the exposure duration required for the luminance of the portrait portion to reach the target luminance may be calculated from the difference between the target luminance and the luminance value of the portrait portion. A first picture can then be taken according to the first exposure compensation; this first picture is the first image. The target luminance can be understood as the luminance at which the portrait is clear under the backlight condition and has a good visual effect.
For example, an image of the portrait portion may be acquired with a preset exposure time (e.g., 1/100 s). The image of the portrait area is then divided into M × N small blocks, where M and N are positive integers (e.g., 64 × 48 blocks). Extremely bright and extremely dark blocks (i.e., blocks whose brightness values are too high or too low) are deleted from the M × N blocks to obtain the effective blocks. A brightness weighted average of the effective blocks is then calculated, where blocks near the center carry a higher weight than blocks at the periphery; the resulting weighted average is the brightness value of the image, i.e., the photometry result of the portrait portion. Then, based on the difference between the target luminance and the luminance value of the portrait portion, the exposure duration required to reach the target luminance can be calculated with 1/100 s as the reference; this duration is the first exposure compensation of the portrait portion.
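For illustration only, a minimal Python sketch of this block-based, center-weighted metering and of the subsequent exposure-time calculation is given below; the block counts, brightness thresholds, weighting function and the linear brightness-versus-exposure-time model are illustrative assumptions, not values prescribed by the patent.

```python
import math
import numpy as np

def center_weighted_luminance(gray: np.ndarray, m: int = 64, n: int = 48,
                              low: float = 16.0, high: float = 240.0) -> float:
    """Divide a grayscale image into m x n blocks, drop extremely bright or dark
    blocks, and return a center-weighted average luminance (the photometry result)."""
    h, w = gray.shape
    bh, bw = h // n, w // m
    means, weights = [], []
    cy, cx = (n - 1) / 2.0, (m - 1) / 2.0
    for j in range(n):
        for i in range(m):
            block = gray[j * bh:(j + 1) * bh, i * bw:(i + 1) * bw]
            mean = float(block.mean())
            if mean <= low or mean >= high:   # discard extremely dark/bright blocks
                continue
            # Blocks closer to the image center carry a higher weight.
            dist = math.hypot((j - cy) / n, (i - cx) / m)
            means.append(mean)
            weights.append(1.0 / (1.0 + dist))
    return float(np.average(means, weights=weights))

def exposure_time_for_target(measured_luma: float, target_luma: float,
                             preset_exposure_s: float = 1.0 / 100) -> float:
    """Scale the preset exposure time so that the measured luminance would reach the
    target luminance (assumes luminance grows roughly linearly with exposure time)."""
    return preset_exposure_s * (target_luma / measured_luma)
```

The same routine can be reused in step S240 for the background portion, simply by metering the background image instead of the portrait image.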
S230, extracting a background part in the current shooting scene.
Specifically, after the first picture is taken according to the first exposure compensation, the portions of the current shooting scene other than the portrait portion may be uniformly treated as the background portion. In this step, the background portion may be extracted from the current shooting scene according to a distance-difference separation technique, together with the already extracted portrait portion.
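For illustration only, a minimal sketch of such a distance-difference separation is given below, assuming a per-pixel depth map and the face-to-lens distance computed earlier; the tolerance value is an illustrative assumption.

```python
import numpy as np

def split_portrait_background(depth_mm: np.ndarray, face_distance_mm: float,
                              tolerance_mm: float = 300.0):
    """Split the scene into portrait and background masks by distance difference:
    pixels whose depth lies within `tolerance_mm` of the face distance are treated
    as the portrait, and everything else is uniformly treated as the background."""
    portrait_mask = np.abs(depth_mm - face_distance_mm) <= tolerance_mm
    return portrait_mask, ~portrait_mask
```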
S240, acquiring second exposure compensation of the background part, and shooting the current shooting scene according to the second exposure compensation to obtain a second image.
Specifically, after the background portion in the current shooting scene is obtained, a photometry result may be calculated separately for the background portion, and the second exposure compensation corresponding to the background portion may be obtained from that photometry result. More specifically, the luminance value of the background portion may be acquired, and then, taking the preset exposure time as a reference, the exposure duration required for the luminance of the background portion to reach the target luminance may be calculated from the difference between the target luminance and the luminance value of the background portion. A second picture can then be taken according to the second exposure compensation; this second picture is the second image.
For example, an image of the background portion may be acquired with a preset exposure time (e.g., 1/100 s). The image of the background portion is then divided into M × N small blocks, where M and N are both positive integers (e.g., 64 × 48 blocks). Extremely bright and extremely dark blocks (i.e., blocks whose brightness values are too high or too low) are deleted from the M × N blocks to obtain the effective blocks. A brightness weighted average of the effective blocks is then calculated, where blocks near the center carry a higher weight than blocks at the periphery; the resulting weighted average is the brightness value of the image, i.e., the photometry result of the background portion. Then, based on the difference between the target luminance and the luminance value of the background portion, the exposure duration required to reach the target luminance can be calculated with 1/100 s as the reference; this duration is the second exposure compensation for the background portion.
S250, carrying out fusion processing on the first image and the second image to obtain a fused target image.
Specifically, in the embodiment of the present invention, the portrait portion in the first image and the background portion in the second image may be stitched together, and the boundary at the seam may be eliminated by using a smoothing filter, to obtain the fused target image. For example, the portrait portion in the first image may be substituted into the corresponding portrait area of the second image to obtain the target image. In the finally fused target image, both the portrait and the background are therefore reasonably exposed, avoiding the portrait underexposure and background overexposure of the conventional method.
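For illustration only, a minimal sketch of this stitch-and-smooth fusion is given below, assuming the two frames are aligned and a boolean portrait mask (for example, from the depth-based separation above) is available; feathering the mask with a Gaussian filter is one simple smoothing choice, not necessarily the exact smoothing filter used by the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_portrait_background(first_img: np.ndarray, second_img: np.ndarray,
                             portrait_mask: np.ndarray,
                             feather_sigma: float = 5.0) -> np.ndarray:
    """Paste the well-exposed portrait from the first image onto the well-exposed
    background of the second image; feathering the mask hides the seam boundary."""
    alpha = gaussian_filter(portrait_mask.astype(np.float32), sigma=feather_sigma)
    alpha = np.clip(alpha, 0.0, 1.0)[..., None]   # broadcast over color channels
    fused = alpha * first_img.astype(np.float32) + (1.0 - alpha) * second_img.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```

Any feathered alpha blend (or a multi-band blend) could be substituted here; the essential point is that the portrait pixels come from the portrait-metered frame and the background pixels come from the background-metered frame.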
According to the portrait photographing method of the embodiment of the invention, when the current shooting scene is detected to be a backlight scene, the portrait part in the current shooting scene is extracted based on the depth of field information of the current shooting scene. A first exposure compensation of the portrait part is obtained, and the current shooting scene is shot according to the first exposure compensation to obtain a first image. The background part in the current shooting scene is then extracted, a second exposure compensation of the background part is obtained, and the current shooting scene is shot according to the second exposure compensation to obtain a second image. Finally, the first image and the second image are fused to obtain a fused target image. Automatic exposure of the backlit portrait based on multi-frame fusion is thereby achieved: two images with different exposure compensations are shot and fused, so that, compared with the result of exposure by the global automatic photometry method, both the portrait part and the background part of the final image are reasonably exposed, realizing a better visual effect and improving user experience.
Corresponding to the portrait photographing methods provided in the foregoing embodiments, an embodiment of the present invention further provides a portrait photographing apparatus, and since the portrait photographing apparatus provided in the embodiment of the present invention corresponds to the portrait photographing methods provided in the foregoing embodiments, the embodiments of the portrait photographing method are also applicable to the portrait photographing apparatus provided in the embodiment, and detailed description is not given in this embodiment. Fig. 3 is a schematic structural diagram of a portrait photographing apparatus according to an embodiment of the present invention. As shown in fig. 3, the portrait photographing apparatus may include: a first acquisition module 310, a first extraction module 320, a capture module 330, a second extraction module 340, and a fusion module 350.
The first obtaining module 310 is configured to obtain the depth of field information of the current shooting scene. In an embodiment of the present invention, the first obtaining module 310 may obtain the depth of field information of the current shooting scene through dual cameras or a depth RGBD camera.
The first extraction module 320 is configured to extract a portrait portion in the current shooting scene based on the depth information when the current shooting scene is detected as a backlight scene. As an example, as shown in fig. 4, the first extraction module 320 may include: a calculation unit 321 and a determination unit 322. The calculating unit 321 is configured to calculate, by using a face detection technology, a distance between a face and a lens according to depth information of a current shooting scene. The determination unit 322 is configured to determine a portrait portion in the current shooting scene according to the distance.
The shooting module 330 is configured to obtain a first exposure compensation of the portrait portion, and shoot a current shooting scene according to the first exposure compensation to obtain a first image.
As an example, a specific implementation by which the shooting module 330 obtains the first exposure compensation of the portrait part may be as follows: acquire the brightness value of the portrait part, and calculate, with the preset exposure time as a reference, the exposure duration required for the brightness of the portrait part to reach the target brightness, according to the difference between the target brightness and the brightness value of the portrait part.
The second extraction module 340 is used for extracting a background part in the current shooting scene.
The shooting module 330 is further configured to obtain a second exposure compensation of the background portion, and shoot the current shooting scene according to the second exposure compensation to obtain a second image. As an example, a specific implementation by which the shooting module 330 obtains the second exposure compensation of the background portion may be as follows: acquire the brightness value of the background portion, and calculate, with the preset exposure time as a reference, the exposure duration required for the brightness of the background portion to reach the target brightness, according to the difference between the target brightness and the brightness value of the background portion.
The fusion module 350 is configured to perform fusion processing on the first image and the second image to obtain a fused target image. Specifically, in one embodiment of the present invention, the fusion module 350 may perform a stitching process on the portrait portion in the first image and the background portion in the second image, and simultaneously remove the boundary at the seam using a smoothing filter to obtain the fused target image.
According to the portrait photographing device of the embodiment of the invention, when the current shooting scene is detected to be a backlight scene, the first extraction module extracts the portrait part in the current shooting scene based on the depth of field information of the current shooting scene. The shooting module obtains the first exposure compensation of the portrait part and shoots the current shooting scene according to the first exposure compensation to obtain a first image. The second extraction module extracts the background part in the current shooting scene, and the shooting module obtains the second exposure compensation of the background part and shoots the current shooting scene according to the second exposure compensation to obtain a second image. The fusion module then fuses the first image and the second image to obtain a fused target image. Automatic exposure of the backlit portrait based on multi-frame fusion is thereby achieved: two images with different exposure compensations are shot and fused, so that, compared with the result of exposure by the global automatic photometry method, both the portrait part and the background part of the final image are reasonably exposed, realizing a better visual effect and improving user experience.
In order to implement the above embodiments, the present invention also provides an image pickup apparatus.
Fig. 5 is a schematic configuration diagram of an image pickup apparatus according to an embodiment of the present invention. It should be noted that the image capturing apparatus may be an apparatus having a shooting function, for example, a mobile terminal (e.g., a hardware apparatus having various operating systems such as a mobile phone and a tablet computer), a digital camera, and the like.
As shown in fig. 5, the image pickup apparatus 50 may include: the storage 51, the processor 52 and the computer program 53 stored in the storage 51 and capable of running on the processor 52, when the processor 52 executes the computer program 53, the portrait photographing method according to any of the above-mentioned embodiments of the present invention is implemented.
In order to implement the above embodiments, the present invention also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the portrait photographing method according to any of the above embodiments of the present invention.
In order to implement the above embodiments, the present invention also proposes a computer program product in which instructions, when executed by a processor, perform a portrait photographing method, the method comprising the steps of:
S110', when the current shooting scene is detected to be a backlight scene, a portrait part in the current shooting scene is extracted based on the depth of field information of the current shooting scene.
S120', acquiring first exposure compensation of the portrait part, and shooting the current shooting scene according to the first exposure compensation to obtain a first image.
S130', extracting a background part in the current shooting scene.
S140', acquiring second exposure compensation of the background part, and shooting the current shooting scene according to the second exposure compensation to obtain a second image.
S150', the first image and the second image are fused to obtain a fused target image.
In the description of the present invention, it is to be understood that the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
Claims (12)
1. A portrait photographing method is characterized by comprising the following steps:
when it is detected that a current shooting scene is a backlight scene, extracting a portrait part in the current shooting scene based on depth of field information of the current shooting scene;
acquiring first exposure compensation of the portrait part, and shooting the current shooting scene according to the first exposure compensation to obtain a first image; wherein said obtaining a first exposure compensation for said portrait portion comprises:
acquiring a luminance value of the portrait portion, wherein the acquiring the luminance value of the portrait portion includes: dividing the image of the portrait area into M × N small blocks, wherein M and N are positive integers, deleting image blocks of which the brightness is higher than a first threshold value and image blocks of which the brightness is lower than a second threshold value from the M × N small blocks, and obtaining effective blocks of the portrait area, wherein the second threshold value is smaller than the first threshold value; calculating a brightness weighted average value of the effective block, wherein the finally obtained brightness weighted average value is the brightness value of the portrait part, and the weight of the central position in the effective block is higher than the weight of the peripheral positions;
calculating the exposure time length required by the brightness of the portrait part to reach the target brightness by taking preset exposure time as a reference according to the difference between the target brightness and the brightness value of the portrait part;
extracting a background part in the current shooting scene;
acquiring second exposure compensation of the background part, and shooting the current shooting scene according to the second exposure compensation to obtain a second image;
and carrying out fusion processing on the first image and the second image to obtain a fused target image.
2. The portrait photographing method of claim 1, wherein the depth information of the current photographing scene is acquired by:
acquiring the depth of field information of the current shooting scene through dual cameras or a depth RGBD camera.
3. The portrait photographing method of claim 1, wherein extracting the portrait part in the current photographing scene based on the depth information of the current photographing scene comprises:
calculating the distance between the face and the lens according to the depth of field information of the current shooting scene by a face detection technology;
and determining a portrait part in the current shooting scene according to the distance.
4. The portrait photographing method of claim 1, wherein
the obtaining a second exposure compensation for the background portion includes:
acquiring a brightness value of the background part;
and calculating the exposure time length required by the brightness of the background part to reach the target brightness by taking preset exposure time as a reference according to the difference between the target brightness and the brightness value of the background part.
5. The portrait photographing method according to claim 1, wherein the fusing the first image and the second image to obtain a fused target image comprises:
splicing the portrait part in the first image and the background part in the second image, and eliminating the boundary at the seam by adopting a smoothing filter to obtain the fused target image.
6. A portrait photographing apparatus, comprising:
the first acquisition module is used for acquiring the depth of field information of the current shooting scene;
the first extraction module is used for extracting a portrait part in the current shooting scene based on the depth of field information when the current shooting scene is detected to be a backlight scene;
the shooting module is used for acquiring first exposure compensation of the portrait part and shooting the current shooting scene according to the first exposure compensation to obtain a first image; wherein the shooting module is specifically configured to: acquiring a brightness value of the portrait part; calculating the exposure time length required by the brightness of the portrait part to reach the target brightness by taking preset exposure time as a reference according to the difference between the target brightness and the brightness value of the portrait part; wherein the obtaining of the brightness value of the portrait portion includes: dividing the image of the portrait area into M × N small blocks, wherein M and N are positive integers, deleting image blocks of which the brightness is higher than a first threshold value and image blocks of which the brightness is lower than a second threshold value from the M × N small blocks, and obtaining effective blocks of the portrait area, wherein the second threshold value is smaller than the first threshold value; calculating a brightness weighted average value of the effective block, wherein the finally obtained brightness weighted average value is the brightness value of the portrait part, and the weight of the central position in the effective block is higher than the weight of the peripheral positions;
the second extraction module is used for extracting a background part in the current shooting scene;
the shooting module is further used for acquiring second exposure compensation of the background part and shooting the current shooting scene according to the second exposure compensation to obtain a second image;
and the fusion module is used for carrying out fusion processing on the first image and the second image so as to obtain a fused target image.
7. The portrait photographing apparatus of claim 6, wherein the first obtaining module obtains the depth information of the current photographing scene through a dual camera or a depth RGBD camera.
8. The portrait photographing apparatus of claim 6, wherein the first extraction module comprises:
the calculating unit is used for calculating the distance between the face and the lens according to the depth of field information of the current shooting scene through a face detection technology;
and the determining unit is used for determining the portrait part in the current shooting scene according to the distance.
9. The portrait photographing device of claim 6, wherein the photographing module is specifically configured to:
acquiring a brightness value of the background part;
and calculating the exposure time length required by the brightness of the background part to reach the target brightness by taking preset exposure time as a reference according to the difference between the target brightness and the brightness value of the background part.
10. The portrait photographing device of claim 6, wherein the fusion module is specifically configured to:
splicing the portrait part in the first image and the background part in the second image, and eliminating the boundary at the seam by adopting a smoothing filter to obtain the fused target image.
11. An image pickup apparatus comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the portrait photographing method according to any one of claims 1 to 5.
12. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the program is executed by a processor to implement the portrait photographing method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710461638.9A CN107241559B (en) | 2017-06-16 | 2017-06-16 | Portrait photographing method and device and camera equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710461638.9A CN107241559B (en) | 2017-06-16 | 2017-06-16 | Portrait photographing method and device and camera equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107241559A CN107241559A (en) | 2017-10-10 |
CN107241559B true CN107241559B (en) | 2020-01-10 |
Family
ID=59987571
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710461638.9A Active CN107241559B (en) | 2017-06-16 | 2017-06-16 | Portrait photographing method and device and camera equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107241559B (en) |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109670386A (en) * | 2017-10-16 | 2019-04-23 | 深圳泰首智能技术有限公司 | Face identification method and terminal |
CN107592473A (en) * | 2017-10-31 | 2018-01-16 | 广东欧珀移动通信有限公司 | Exposure parameter method of adjustment, device, electronic equipment and readable storage medium storing program for executing |
CN109819173B (en) * | 2017-11-22 | 2021-12-03 | 浙江舜宇智能光学技术有限公司 | Depth fusion method based on TOF imaging system and TOF camera |
CN108156369B (en) * | 2017-12-06 | 2020-03-13 | Oppo广东移动通信有限公司 | Image processing method and device |
CN108289170B (en) * | 2018-01-12 | 2020-02-14 | 深圳奥比中光科技有限公司 | Photographing apparatus, method and computer readable medium capable of detecting measurement area |
CN108322646B (en) * | 2018-01-31 | 2020-04-10 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN108616687B (en) * | 2018-03-23 | 2020-07-21 | 维沃移动通信有限公司 | Photographing method and device and mobile terminal |
CN111314603B (en) | 2018-03-27 | 2021-11-19 | 华为技术有限公司 | Photographing method, photographing device and mobile terminal |
CN108616689B (en) * | 2018-04-12 | 2020-10-02 | Oppo广东移动通信有限公司 | Portrait-based high dynamic range image acquisition method, device and equipment |
CN108650466A (en) * | 2018-05-24 | 2018-10-12 | 努比亚技术有限公司 | The method and electronic equipment of photo tolerance are promoted when a kind of strong light or reversible-light shooting portrait |
CN108683859A (en) * | 2018-08-16 | 2018-10-19 | Oppo广东移动通信有限公司 | It takes pictures optimization method, device, storage medium and terminal device |
CN108718388B (en) * | 2018-08-29 | 2020-02-11 | 维沃移动通信有限公司 | Photographing method and mobile terminal |
CN111182199B (en) * | 2018-11-13 | 2022-02-11 | 深圳富泰宏精密工业有限公司 | Electronic device and photographing method |
CN109525783A (en) * | 2018-12-25 | 2019-03-26 | 努比亚技术有限公司 | A kind of exposure image pickup method, terminal and computer readable storage medium |
CN109618102B (en) * | 2019-01-28 | 2021-08-31 | Oppo广东移动通信有限公司 | Focusing processing method and device, electronic equipment and storage medium |
CN109729275A (en) * | 2019-03-14 | 2019-05-07 | Oppo广东移动通信有限公司 | Imaging method, device, terminal and storage medium |
CN111489323B (en) * | 2020-04-09 | 2023-09-19 | 中国科学技术大学先进技术研究院 | Double-light-field image fusion method, device, equipment and readable storage medium |
CN111586308B (en) * | 2020-04-10 | 2022-03-29 | 北京迈格威科技有限公司 | Image processing method and device and electronic equipment |
CN111464748B (en) * | 2020-05-07 | 2021-04-23 | Oppo广东移动通信有限公司 | Image processing method, mobile terminal and computer readable storage medium |
CN112085686A (en) * | 2020-08-21 | 2020-12-15 | 北京迈格威科技有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN114418914A (en) * | 2022-01-18 | 2022-04-29 | 上海闻泰信息技术有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN118264919A (en) * | 2022-07-01 | 2024-06-28 | 北京讯通安添通讯科技有限公司 | Method and device for taking photo supplementary image information in dim light environment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN2639926Y (en) * | 2003-07-31 | 2004-09-08 | 上海海鸥数码影像股份有限公司 | Digital camera automatic exposure circuit
CN1829290A (en) * | 2005-03-04 | 2006-09-06 | Lg电子株式会社 | Mobile communications terminal for compensating automatic exposure of camera and method thereof |
CN106331510A (en) * | 2016-10-31 | 2017-01-11 | 维沃移动通信有限公司 | Backlight photographing method and mobile terminal |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4289259B2 (en) * | 2004-08-31 | 2009-07-01 | カシオ計算機株式会社 | Imaging apparatus and exposure control method |
- 2017-06-16: CN application CN201710461638.9A, patent CN107241559B (Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN2639926Y (en) * | 2003-07-31 | 2004-09-08 | 上海海鸥数码影像股份有限公司 | Digital camera automatic exposure circuit
CN1829290A (en) * | 2005-03-04 | 2006-09-06 | Lg电子株式会社 | Mobile communications terminal for compensating automatic exposure of camera and method thereof |
CN106331510A (en) * | 2016-10-31 | 2017-01-11 | 维沃移动通信有限公司 | Backlight photographing method and mobile terminal |
Also Published As
Publication number | Publication date |
---|---|
CN107241559A (en) | 2017-10-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107241559B (en) | Portrait photographing method and device and camera equipment | |
CN107977940B (en) | Background blurring processing method, device and equipment | |
WO2018228467A1 (en) | Image exposure method and device, photographing device, and storage medium | |
US11671702B2 (en) | Real time assessment of picture quality | |
CN107948519B (en) | Image processing method, device and equipment | |
KR102306272B1 (en) | Dual camera-based imaging method, mobile terminal and storage medium | |
KR102279436B1 (en) | Image processing methods, devices and devices | |
CN107945105B (en) | Background blurring processing method, device and equipment | |
CN107948538B (en) | Imaging method, imaging device, mobile terminal and storage medium | |
CN107358593B (en) | Image forming method and apparatus | |
CN111915505B (en) | Image processing method, device, electronic equipment and storage medium | |
CN107846556B (en) | Imaging method, imaging device, mobile terminal and storage medium | |
CN108024057B (en) | Background blurring processing method, device and equipment | |
CN108156369B (en) | Image processing method and device | |
CN108616689B (en) | Portrait-based high dynamic range image acquisition method, device and equipment | |
US9485436B2 (en) | Image processing apparatus and image processing method | |
CN108053438B (en) | Depth of field acquisition method, device and equipment | |
CN107465856A (en) | Image capture method, device and terminal device | |
CN110708463B (en) | Focusing method, focusing device, storage medium and electronic equipment | |
CN107087112B (en) | Control method and control device for double cameras | |
CN106791451B (en) | Photographing method of intelligent terminal | |
CN108289170B (en) | Photographing apparatus, method and computer readable medium capable of detecting measurement area | |
JP2010072619A (en) | Exposure operation device and camera | |
CN107370961A (en) | image exposure processing method, device and terminal device | |
JP2009009072A (en) | Dynamic focus zone for camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
CB02 | Change of applicant information |
Address after: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18 Applicant after: OPPO Guangdong Mobile Communications Co., Ltd. Address before: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18 Applicant before: Guangdong Opel Mobile Communications Co., Ltd. |
|
GR01 | Patent grant | |