CN104113702B - Flash control method and control device, image acquisition method and image acquisition device - Google Patents
Flash control method and control device, image acquisition method and image acquisition device
- Publication number
- CN104113702B CN104113702B CN201410361259.9A CN201410361259A CN104113702B CN 104113702 B CN104113702 B CN 104113702B CN 201410361259 A CN201410361259 A CN 201410361259A CN 104113702 B CN104113702 B CN 104113702B
- Authority
- CN
- China
- Prior art keywords
- depth
- flash
- subject
- distribution information
- light
- Prior art date
- Legal status: Active (the status is an assumption and is not a legal conclusion)
Landscapes
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
The technical solution of the embodiments of the present application discloses a flash control method and device. The method includes: obtaining distribution information of at least one subject in a scene to be captured; determining, according to the distribution information, multiple target regions corresponding to multiple depth ranges of the scene to be captured; and determining multiple groups of flash parameters corresponding to the multiple target regions. The technical solution of the embodiments of the present application also discloses an image acquisition method and device. The method includes: obtaining multiple groups of flash parameters of multiple target regions corresponding to multiple depth ranges; in response to a shooting instruction, flashing the scene to be captured multiple times with the multiple groups of flash parameters while shooting the scene multiple times to obtain multiple initial images; and synthesizing the multiple initial images. The technical solution of the embodiments of the present application determines the multiple groups of flash parameters according to the distribution information of the subjects in the scene to be captured, and can therefore acquire a well-exposed image of the scene.
Description
Technical field
This application relates to the field of image acquisition technologies, and in particular to a flash control method and control device, and an image acquisition method and image acquisition device.
Background technology
When ambient light is poor, especially at night, taking photos requires a flash to supplement the light on the scene; the light emitted by the flash illuminates the scene during shooting, yielding a better photographic result. Some flashes are mounted directly on the camera; for example, mobile phones and consumer cameras generally have a built-in flash module, while more professional cameras use external flash units (speedlights) to provide better fill light for the scene.
Invention content
An object of the present application is to provide a flash control solution and a related image acquisition solution.
In a first aspect, a possible embodiment of the present application provides a flash control method, including:
obtaining distribution information of at least one subject in a scene to be captured;
determining, according to the distribution information, multiple depth ranges of the scene to be captured relative to a shooting reference position, and multiple target regions corresponding to the multiple depth ranges; and
determining multiple groups of flash parameters corresponding to the multiple target regions.
In a second aspect, a possible embodiment of the present application provides a flash control device, including:
a distribution information acquisition submodule, configured to obtain distribution information of at least one subject in a scene to be captured;
a target region determination submodule, configured to determine, according to the distribution information, multiple depth ranges of the scene to be captured relative to a shooting reference position and multiple target regions corresponding to the multiple depth ranges; and
a parameter determination submodule, configured to determine multiple groups of flash parameters corresponding to the multiple target regions.
In a third aspect, a possible embodiment of the present application provides an image acquisition method, including:
obtaining multiple groups of flash parameters corresponding to multiple target regions of multiple depth ranges in a scene to be captured;
in response to a shooting instruction, flashing the scene to be captured multiple times with the multiple groups of flash parameters, and shooting the scene to be captured multiple times to obtain multiple initial images, where each shot of the multiple shots corresponds to one flash of the multiple flashes; and
synthesizing the multiple initial images.
In a fourth aspect, a possible embodiment of the present application provides an image acquisition device, including:
a parameter acquisition module, configured to obtain multiple groups of flash parameters corresponding to multiple target regions of multiple depth ranges in a scene to be captured;
a flash module, configured to, in response to a shooting instruction, flash the scene to be captured multiple times with the multiple groups of flash parameters;
an image capture module, configured to, in response to the shooting instruction, shoot the scene to be captured multiple times to obtain multiple initial images, where each shot of the multiple shots corresponds to one flash of the multiple flashes; and
a processing module, configured to synthesize the multiple initial images.
At least one embodiment of the present application determines, according to the distribution information of at least one subject in a scene to be captured, multiple groups of flash parameters corresponding to multiple target regions in multiple depth ranges, so that when the scene to be captured is shot, the flash can provide suitable fill light, according to the multiple groups of flash parameters, for subjects at multiple different depths in the scene, and a well-exposed image of the scene can thus be acquired.
Description of the drawings
Fig. 1 is a schematic flowchart of a flash control method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of an application scenario of a flash control method according to an embodiment of the present application;
Fig. 3a and Fig. 3b are schematic diagrams of application scenarios of two flash control methods according to embodiments of the present application;
Fig. 4 is a schematic structural block diagram of a flash control device according to an embodiment of the present application;
Fig. 5a is a schematic structural block diagram of another flash control device according to an embodiment of the present application;
Fig. 5b-5d are schematic structural block diagrams of the distribution information acquisition submodules of three flash control devices according to embodiments of the present application;
Fig. 5e is a schematic structural block diagram of the depth range determination unit of a flash control device according to an embodiment of the present application;
Fig. 5f is a schematic structural block diagram of the target region determination unit of a flash control device according to an embodiment of the present application;
Fig. 5g is a schematic structural block diagram of the parameter determination submodule of a flash control device according to an embodiment of the present application;
Fig. 6 is a schematic structural block diagram of yet another flash control device according to an embodiment of the present application;
Fig. 7 is a flowchart of an image acquisition method according to an embodiment of the present application;
Fig. 8a-8d are schematic diagrams of image synthesis in an image acquisition method according to an embodiment of the present application;
Fig. 9 is a schematic structural block diagram of an image acquisition device according to an embodiment of the present application;
Fig. 10a is a schematic structural block diagram of another image acquisition device according to an embodiment of the present application;
Fig. 10b is a schematic structural block diagram of yet another image acquisition device according to an embodiment of the present application;
Fig. 10c is a schematic structural block diagram of the second determination submodule of an image acquisition device according to an embodiment of the present application;
Fig. 11 is a schematic structural block diagram of still another image acquisition device according to an embodiment of the present application.
Specific implementation mode
Specific embodiments of the present application are described in further detail below with reference to the accompanying drawings (identical reference numerals denote identical elements across the drawings) and embodiments. The following embodiments are intended to illustrate the present application, not to limit its scope.
Those skilled in the art will understand that terms such as "first" and "second" in the present application are only used to distinguish different steps, devices, or modules, and neither denote any particular technical meaning nor indicate a necessary logical order between them.
The inventors of the present application have found that when a scene to be captured contains multiple subjects at different depths from the shooting position, it is often difficult to obtain a suitable flash effect. For example, when the metering point is far from the shooting position, nearby subjects may receive excessive flash and become overexposed; when the metering point is close to the shooting position, distant subjects may be underexposed because the flash does not reach them. For such cases, as shown in Fig. 1, a possible embodiment of the present application provides a flash control method, including:
S110: obtaining distribution information of at least one subject in a scene to be captured;
S120: determining, according to the distribution information, multiple depth ranges of the scene to be captured relative to a shooting reference position, and multiple target regions corresponding to the multiple depth ranges;
S130: determining multiple groups of flash parameters corresponding to the multiple target regions.
For example, a flash control device provided by the present application serves as the execution body of this embodiment and performs S110-S130. Specifically, the flash control device may be provided in a user equipment in a software, hardware, or combined software and hardware manner; the user equipment includes but is not limited to: a camera, a mobile phone with an image acquisition function, smart glasses, and the like.
The technical solution of this embodiment of the present application determines, according to the distribution information of at least one subject in the scene to be captured, multiple groups of flash parameters corresponding to multiple target regions in multiple depth ranges, so that when the scene to be captured is shot, the flash can provide suitable fill light, according to the multiple groups of flash parameters, for subjects at multiple different depths in the scene, and a well-exposed image of the scene can thus be acquired.
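Before the step-by-step description, the overall S110-S130 flow can be summarized in a short sketch. It assumes the distribution information arrives as a dense depth map (a NumPy array of distances in meters from the shooting reference position); the grouping thresholds and the single-entry flash parameter group are illustrative assumptions, not values from the application.

```python
import numpy as np

def determine_depth_ranges(subject_depths, margin=0.2, gap=0.5):
    """S120 (part 1): merge subject depths closer than `gap`, pad each group by `margin` (meters)."""
    ds = sorted(subject_depths)
    ranges, lo, hi = [], ds[0], ds[0]
    for d in ds[1:]:
        if d - hi <= gap:
            hi = d
        else:
            ranges.append((lo - margin, hi + margin))
            lo = hi = d
    ranges.append((lo - margin, hi + margin))
    return ranges

def determine_target_regions(depth_map, depth_ranges):
    """S120 (part 2): one boolean mask (the lateral distribution) per depth range."""
    return [(lo <= depth_map) & (depth_map <= hi) for lo, hi in depth_ranges]

def determine_flash_parameters(depth_ranges):
    """S130: one group of flash parameters per target region; here only a flash
    distance taken from the mean depth of the range, as suggested later in the text."""
    return [{"flash_distance_m": (lo + hi) / 2.0} for lo, hi in depth_ranges]
```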
The steps of this embodiment of the present application are further described below through the following embodiments:
S110: obtaining distribution information of at least one subject in a scene to be captured.
In this embodiment of the present application, the distribution information includes depth information of the at least one subject relative to a shooting reference position.
In this embodiment of the present application, the shooting reference position is a position that is fixed relative to the position of the image acquisition device that shoots the scene to be captured, and may be set as needed. For example, in one possible implementation of this embodiment, the shooting reference position may be the imaging surface or the lens position of the image acquisition device; in another possible implementation, the shooting reference position may be, for example, the position of a depth information acquisition module; or, in yet another possible implementation, the shooting reference position may be, for example, the position of the flash.
In this embodiment of the present application, the scene to be captured generally contains at least one subject with a large depth span. For example, in one possible implementation, the scene to be captured includes a target object, a background object behind the target object, and a foreground object in front of the target object. Here, each of these objects may be an independent object, such as a person, or a part of an object; for example, a palm stretched out in front of a person may be a target object while the person's body is a background object.
The depth information may be obtained in multiple ways in this embodiment of the present application, for example:
The depth information may be obtained by depth acquisition.
In one possible implementation, the depth information may be obtained by a depth sensor of the flash control device. The depth sensor may be, for example, an infrared ranging sensor, an ultrasonic ranging sensor, a stereo-camera ranging sensor, or the like.
In another possible implementation, the depth information may also be obtained from at least one external device. For example, in one possible implementation, the flash control device does not have the depth sensor, but another user equipment, for example the user's smart glasses, has the depth sensor; in this case, the depth information may be obtained from that other user equipment. In this implementation, the flash control device may obtain the depth information by communicating with the external device through a communication device.
In this embodiment of the present application, the distribution information further includes lateral distribution information of the at least one subject along a direction substantially perpendicular to the depth direction.
Optionally, in one possible implementation, the lateral distribution information may be two-dimensional distribution information, on a shooting imaging surface, of the image region corresponding to the at least one subject on that shooting imaging surface.
Optionally, in one possible implementation, the distribution information may be obtained from a depth map of the scene to be captured relative to the shooting reference position.
Those skilled in the art will recognize that the depth map contains the depth value corresponding to each subject in the scene to be captured, and therefore includes both the depth information described above and the lateral distribution information. For example, in one possible implementation, the lateral distribution information may be the two-dimensional distribution, on the depth map, of the region corresponding to the at least one subject on the depth map.
In this embodiment of the present application, the two kinds of two-dimensional distribution information described above may be obtained by information acquisition. For example, in one possible implementation, an image of the scene to be captured may be shot in advance, and the two-dimensional distribution information of the image region corresponding to the at least one subject may then be obtained by image processing; in another possible implementation, the above depth map may be obtained by a depth sensor, whereby the depth information and the two-dimensional distribution information can be obtained at the same time.
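To illustrate how a single depth map carries both kinds of distribution information, here is a toy sketch, assuming the map is simply a 2D array of distances; the segmentation by depth thresholding is illustrative only.

```python
import numpy as np

# A toy 6x8 depth map for a scene like Fig. 2: a near person (~2 m) on the left,
# scenery (~3 m) on the right, and a wall (~4 m) everywhere else.
depth_map = np.full((6, 8), 4.0)
depth_map[2:6, 1:3] = 2.0
depth_map[1:5, 5:7] = 3.0

for lo, hi in [(1.8, 2.2), (2.8, 3.2), (3.8, 4.2)]:
    mask = (lo <= depth_map) & (depth_map <= hi)   # two-dimensional distribution on the map
    rows, cols = np.where(mask)
    print(f"range {lo}-{hi} m: mean depth {depth_map[mask].mean():.1f} m, "
          f"rows {rows.min()}-{rows.max()}, cols {cols.min()}-{cols.max()}")
```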
In another possible implementation of this embodiment, similar to the acquisition of the depth information described above, the two-dimensional distribution information may also be obtained from at least one external device.
S120: determining, according to the distribution information, multiple depth ranges of the scene to be captured relative to a shooting reference position, and multiple target regions corresponding to the multiple depth ranges.
In one possible implementation of this embodiment, determining the multiple target regions according to the distribution information includes:
determining the multiple depth ranges according to the depth information; and
determining, according to the two-dimensional distribution information, at least one target region of the multiple target regions corresponding to each depth range of the multiple depth ranges.
In one possible implementation, determining the multiple depth ranges according to the depth information includes:
determining the depth distribution of the at least one subject according to the depth information; and
determining the multiple depth ranges according to the depth distribution.
For example, as shown in Fig. 2, in one possible implementation, the scene to be captured contains three subjects, whose depth distribution is as follows: the first object 211 is a person, at a depth d1 of 2 meters relative to the shooting reference position 220; the second object 212 is a piece of scenery, at a corresponding depth d2 of 3 meters; and the third object 213 is a city wall background, at a corresponding depth d3 of 4 meters. In this case, for example, three depth ranges may be determined according to the depth distribution: a first depth range of 1.8-2.2 meters, a second depth range of 2.8-3.2 meters, and a third depth range of 3.8-4.2 meters.
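As a worked check of this example: a fixed margin of 0.2 meters around each subject depth (an assumption inferred from the numbers above, not stated by the application) reproduces the three ranges.

```python
subject_depths_m = {"first object 211": 2.0, "second object 212": 3.0, "third object 213": 4.0}
margin_m = 0.2  # fixed margin inferred from the ranges given in the text
for name, d in subject_depths_m.items():
    print(f"{name}: {d - margin_m:.1f} m to {d + margin_m:.1f} m")
# first object 211: 1.8 m to 2.2 m
# second object 212: 2.8 m to 3.2 m
# third object 213: 3.8 m to 4.2 m
```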
In one possible implementation of this embodiment, determining the at least one target region corresponding to each depth range according to the two-dimensional distribution information may include:
determining, according to the two-dimensional distribution information, the lateral distribution, perpendicular to the depth direction, of the at least one subject within each depth range; and
determining the at least one target region according to the lateral distribution.
The following description takes obtaining the depth information and the lateral distribution from the depth map of the scene to be captured as an example.
Fig. 3a shows the depth map of the scene of Fig. 2 relative to the shooting reference position 220, where the first object 211 corresponds to the first region 311, the second object 212 corresponds to the second region 312, and the third object 213 corresponds to the third region 313; in Fig. 3a, different types of shading represent different distances to the shooting reference position 220.
By processing the depth map, the two-dimensional distribution of the first region 311 on the depth map can be obtained; for example, from the shape and location of the first region 311 on the depth map, the lateral distribution of the first object 211 within the first depth range can be obtained. Likewise, the lateral distributions of the second object 212 and the third object 213 within their respective depth ranges can be obtained.
The target region within each depth range is obtained according to the lateral distribution. In the embodiment shown in Fig. 3a, there is only one target region in each depth range; in other possible embodiments of the present application, there may be multiple target regions within one depth range. For example, in the embodiment shown in Fig. 3b, there are two subjects in the first depth range, corresponding respectively to a first subregion 311a and a second subregion 311b that are laterally separated; accordingly, there may be two target regions in the first depth range.
Of course, those skilled in the art will recognize that, optionally, the capability of the flash module may also be taken into account when determining the multiple depth ranges and the multiple target regions. For example, when the flash module cannot change its direction, the two target regions in the first depth range shown in Fig. 3b may be merged into one large target region containing both.
S130: determining multiple groups of flash parameters corresponding to the multiple target regions.
In this embodiment of the present application, the multiple target regions correspond one-to-one with the multiple groups of flash parameters.
When determining the group of flash parameters corresponding to a target region, step S130 may satisfy the following condition: when a flash module flashes with that group of flash parameters, the flash covers the target region with a light intensity that meets a set standard.
In this embodiment of the present application, each group of flash parameters in the multiple groups includes:
a flash distance parameter.
The flash distance parameter here corresponds to the distance reached by the flash at a light intensity that meets the set standard (for example, a fill-light standard under which the image is neither underexposed nor overexposed). After the flash module flashes according to the flash distance parameter corresponding to a target region, the flash distance of that flash suits the corresponding depth range. In one possible implementation, since a depth range is generally an interval of values, the flash distance may be determined from the mean depth of the depth range.
Optionally, in one possible implementation of this embodiment, the flash distance parameter may include:
a flash power.
In general, the greater the flash power of the flash module, the farther the flash distance.
Optionally, in another possible implementation, the flash distance parameter may include:
a flash focal length.
In general, the greater the flash focal length of the flash module, the more concentrated the light, and the farther the flash distance.
Optionally, in yet another possible implementation, the flash module includes multiple external flash submodules located at different depths along the shooting direction. In this case, the flash distance of the flash module can also be determined by the flash position of the flash module; therefore, in this implementation, the flash distance parameter further includes:
a flash position.
For example, five external flash submodules are respectively arranged at positions whose depths relative to the shooting reference position along the shooting direction are 0.5 meters, 1 meter, 2 meters, 3 meters, and 5 meters. In the above embodiment with three subjects, the flash position corresponding to the person may be, for example, 1 meter; the flash position corresponding to the scenery may be, for example, 2 meters; and the flash position corresponding to the city wall background may be, for example, 3 meters. Of course, when determining the flash position, factors such as the flash capability and installation conditions of the external flash submodules may also be taken into account. And in another possible implementation, the position of the flash module may itself be adjustable.
Of course, those skilled in the art will recognize that, in other possible embodiments of the present application, the flash distance parameter may include several of the flash power, the flash focal length, and the flash position; for example, the flash distance of the flash module may be determined by adjusting the flash power and the flash focal length simultaneously. Alternatively, other parameters that can be used to adjust the flash distance of the flash module may also be applied in the embodiments of the present application.
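The application leaves the mapping from a required flash distance to a concrete power setting open. One common model, used here purely as an assumption, is the guide-number relation GN = distance × f-number, with the required power scaling as the square of the required guide number:

```python
def flash_power_fraction(distance_m, f_number, full_power_gn, iso=100):
    """Fraction of full flash power whose guide number reaches the given distance.

    Assumes the inverse-square guide-number model GN = distance * f-number at
    ISO 100, with GN scaling by sqrt(ISO/100); illustrative, not from the patent.
    """
    needed_gn = distance_m * f_number / (iso / 100) ** 0.5
    return min((needed_gn / full_power_gn) ** 2, 1.0)

# Mean depths of the three ranges of the earlier example, at f/2.8 with a GN 28 flash:
for d in (2.0, 3.0, 4.0):
    print(f"{d} m -> {flash_power_fraction(d, 2.8, 28.0):.2f} of full power")
```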
In one possible implementation of this embodiment, to improve the fill-light effect on the corresponding target region, each group of flash parameters further includes:
a flash direction.
In one possible implementation of this embodiment, the coverage of a depth range by the flash module may be limited. For example, when the depth range is far away, the flash module needs to increase the flash focal length to ensure that the flash intensity meets the set standard; as the flash focal length increases, the flash coverage angle decreases. To achieve a better fill-light effect, the target region needs to fall within the flash coverage angle, and the flash direction therefore needs to be adjusted. Taking the embodiment shown in Fig. 3b as an example, those skilled in the art will recognize that of the two flash directions corresponding to the two target regions of the first subregion 311a and the second subregion 311b, one points to the left and the other to the right.
In another possible implementation of this embodiment, optionally, each group of flash parameters further includes:
a flash coverage angle.
From the above description it can be seen that adjusting the flash coverage angle of the flash module determines the lateral coverage, within a depth range, of the flash it emits. For example, the larger the flash coverage angle, the larger the light-spot coverage area corresponding to the flash in a given depth range, and vice versa. The flash coverage angle can therefore be determined according to the size of the target region. Those skilled in the art will recognize that, when the light intensity of the flash module on a target region meets the set standard, a smaller light-spot coverage area that still covers the target region consumes less energy; determining a suitable flash coverage angle according to the target region can therefore save energy. In addition, as described above, the flash distance and the flash coverage angle can be adjusted simultaneously by adjusting the flash focal length of the flash module: the greater the flash focal length, the greater the flash distance and the smaller the flash coverage angle. Therefore, for a given flash module power, when the target region is small, the flash coverage angle can be narrowed to just cover the target region, thereby reaching a greater flash distance.
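The geometric relation between region size, depth, and the required coverage angle is simple trigonometry; a sketch, assuming the region's lateral extent at its depth is known in meters:

```python
import math

def required_coverage_angle_deg(region_width_m, depth_m):
    """Full beam angle needed for the flash to span a region of the given lateral
    width at the given depth (thin-beam trigonometry; illustrative only)."""
    return 2 * math.degrees(math.atan(region_width_m / (2 * depth_m)))

# The same 1 m wide subject needs a narrower beam the farther away it is, which is
# why a longer flash focal length (narrower angle) also yields a greater flash distance.
for d in (2.0, 3.0, 4.0):
    print(f"1 m wide region at {d} m -> {required_coverage_angle_deg(1.0, d):.1f} degrees")
```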
Of course, those skilled in the art will recognize that in other possible embodiments of the present application, factors such as the color and brightness of the scene to be captured may also be considered when determining the multiple groups of flash parameters.
In the embodiments described above, the flash control device does not include a flash module; it only generates the multiple groups of flash parameters, which can then be supplied to one or more flash modules. In another possible implementation of this embodiment, the flash control device may also include the flash module, in which case the flash control method further includes:
in response to a shooting instruction, flashing multiple times with the multiple groups of flash parameters.
In this implementation, one flash of the multiple flashes corresponds to one group of the multiple groups of flash parameters. In another implementation, optionally, when the flash module includes multiple flash submodules, one flash may also correspond to multiple flash submodules flashing with multiple groups of flash parameters. For example, in the embodiment shown in Fig. 3b, within one flash, two flash submodules pointing in different directions may flash at the same time, one with the first group of flash parameters corresponding to the first subregion 311a and the other with the second group corresponding to the second subregion 311b, so as to provide fill light respectively for the person on the left and the person on the right of the first depth range in the image to be captured.
Those skilled in the art can see that, because the multiple groups of flash parameters correspond to multiple target regions in multiple different depth ranges, the multiple flashes may also correspond to different flash distances and/or different flash coverage angles. In this embodiment of the present application, the multiple flashes can provide suitable fill light for subjects at different depths and with different lateral distributions, avoiding uneven exposure.
Those skilled in the art will understand that, in the above methods of the specific embodiments of the present application, the serial numbers of the steps do not imply an execution order; the execution order of the steps should be determined by their functions and internal logic, and should not limit the implementation of the specific embodiments of the present application in any way.
As shown in Fig. 4, a possible embodiment of the present application provides a flash control device 400, including:
a distribution information acquisition submodule 410, configured to obtain distribution information of at least one subject in a scene to be captured;
a target region determination submodule 420, configured to determine, according to the distribution information, multiple depth ranges of the scene to be captured relative to a shooting reference position and multiple target regions corresponding to the multiple depth ranges; and
a parameter determination submodule 430, configured to determine multiple groups of flash parameters corresponding to the multiple target regions.
The technical solution of this embodiment of the present application determines, according to the distribution information of at least one subject in the scene to be captured, multiple groups of flash parameters corresponding to multiple target regions of multiple depth ranges, so that when the scene to be captured is shot, the flash can provide suitable fill light, according to the multiple groups of flash parameters, for subjects at multiple different depths in the scene, and a well-exposed image of the scene can thus be acquired.
The modules of this embodiment of the present application are further described below through the following embodiments:
As shown in Fig. 5a, in one possible implementation of this embodiment, the distribution information acquisition submodule 410 may include:
an information acquisition unit 414, configured to obtain the distribution information by information acquisition.
The distribution information is further described below.
As shown in Fig. 5b, in another possible implementation of this embodiment, the distribution information acquisition submodule 410 may include:
a communication unit 415, configured to obtain the distribution information from at least one external device.
In one possible implementation of this embodiment, the distribution information includes:
depth information of the at least one subject relative to the shooting reference position.
As shown in Fig. 5c, the distribution information acquisition submodule 410 may include:
a depth information acquisition unit 411, configured to obtain the depth information.
In this embodiment of the present application, the shooting reference position is a position that is fixed relative to the position of the image acquisition device that shoots the scene to be captured, and may be set as needed; for details, refer to the corresponding description in the above method embodiment, which is not repeated here.
In this embodiment of the present application, the scene to be captured generally contains at least one subject with a large depth span; for details, refer to the corresponding description in the above method embodiment, which is not repeated here.
In one possible implementation of this embodiment, the depth information acquisition unit 411 may be a depth sensor configured to acquire the depth information. In another possible implementation, the depth information acquisition unit 411 may also be a communication device configured to obtain the depth information from an external device; for details, refer to the corresponding description in the above method embodiment, which is not repeated here.
In this embodiment of the present application, the distribution information further includes lateral distribution information of the at least one subject along a direction substantially perpendicular to the depth direction.
Optionally, in one possible implementation, the lateral distribution information may be two-dimensional distribution information, on a shooting imaging surface, of the image region corresponding to the at least one subject on that shooting imaging surface. As shown in Fig. 5c, in this implementation, the distribution information acquisition submodule 410 further includes:
a two-dimensional distribution information acquisition unit 412, configured to obtain the two-dimensional distribution information.
In one possible implementation of this embodiment, the two-dimensional distribution information acquisition unit 412 may include an image sensor configured to obtain an image of the scene to be captured, from which the two-dimensional distribution information is then obtained by image processing. Of course, in another possible implementation, the two-dimensional distribution information acquisition unit 412 may also be a communication device configured to obtain the two-dimensional distribution information from an external device.
As shown in Fig. 5d, optionally, in one possible implementation, the distribution information acquisition submodule 410 may include:
a depth map processing unit 413, configured to obtain the distribution information from a depth map of the scene to be captured relative to the shooting reference position.
In this implementation, in addition to the depth information described above, the distribution information further includes:
the two-dimensional distribution information, on the depth map, of the region corresponding to the at least one subject on the depth map.
Therefore, in this implementation, the depth map processing unit 413 may further be configured to obtain, from the depth map, the depth information described above and the two-dimensional distribution information on the depth map.
In another possible implementation of this embodiment, the two-dimensional distribution information may likewise be obtained from at least one external device.
As shown in Fig. 5a, in one possible implementation of this embodiment, the target region determination submodule 420 includes:
a depth range determination unit 421, configured to determine the multiple depth ranges according to the depth information; and
a target region determination unit 422, configured to determine, according to the two-dimensional distribution information, at least one target region of the multiple target regions corresponding to each depth range of the multiple depth ranges.
As shown in Fig. 5e, in this implementation, the depth range determination unit 421 includes:
a depth distribution determination subunit 4211, configured to determine the depth distribution of the at least one subject according to the depth information; and
a depth range determination subunit 4212, configured to determine the multiple depth ranges according to the depth distribution.
For the functions of the subunits of the depth range determination unit 421, refer to the corresponding description in the above method embodiment, which is not repeated here.
As shown in Fig. 5f, in this implementation, the target region determination unit 422 includes:
a lateral distribution determination subunit 4221, configured to determine, according to the two-dimensional distribution information, the lateral distribution, perpendicular to the depth direction, of the at least one subject within each depth range; and
a target region determination subunit 4222, configured to determine the at least one target region according to the lateral distribution.
For the functions of the subunits of the target region determination unit 422, refer to the corresponding description in the above method embodiment, which is not repeated here.
As shown in Fig. 5a, in one possible implementation of this embodiment, the parameter determination submodule 430 includes:
a flash distance parameter determination unit 431, configured to determine the flash distance parameter corresponding to each target region of the multiple target regions; and
a flash direction determination unit 432, configured to determine the flash direction corresponding to each target region.
The flash distance parameter may consist of one or more of the following parameters: the flash power, the flash focal length, and the flash position.
For the functions of the flash distance parameter determination unit 431 and the flash direction determination unit 432, refer to the corresponding description in the above method embodiments.
As shown in Fig. 5g, in another possible implementation of this embodiment, in addition to the flash distance parameter determination unit 431 and the flash direction determination unit 432, the parameter determination submodule 430 further includes:
a flash angle determination unit 433, configured to determine the flash coverage angle corresponding to each target region.
For the function of the flash angle determination unit 433, refer to the corresponding description in the above method embodiments.
As shown in Fig. 5a, in one possible implementation of this embodiment, the device 400 may further include:
a flash module 440, configured to, in response to a shooting instruction, flash multiple times with the multiple groups of flash parameters.
For the function of the flash module 440, refer to the corresponding description in the above method embodiments.
Those skilled in the art can see that, because the multiple groups of flash parameters correspond to multiple target regions in multiple different depth ranges, the multiple flashes may also correspond to different flash distances and/or different flash coverage angles. In this embodiment of the present application, the multiple flashes can provide suitable fill light for subjects at different depths and with different lateral distributions, avoiding uneven exposure.
Fig. 6 is a schematic structural diagram of yet another flash control device 500 according to an embodiment of the present application; the specific embodiments of the present application do not limit the specific implementation of the flash control device 500. As shown in Fig. 6, the flash control device 500 may include:
a processor 510, a communications interface 520, a memory 530, and a communication bus 540, where:
the processor 510, the communications interface 520, and the memory 530 communicate with one another through the communication bus 540;
the communications interface 520 is configured to communicate with network elements such as clients; and
the processor 510 is configured to execute a program 532, and may specifically perform the relevant steps in the above method embodiments.
Specifically, the program 532 may include program code, and the program code includes computer operation instructions.
The processor 510 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The memory 530 is configured to store the program 532. The memory 530 may include a high-speed RAM memory, and may further include a non-volatile memory, for example at least one magnetic disk memory. The program 532 may specifically cause the flash control device 500 to perform the following steps:
obtaining distribution information of at least one subject in a scene to be captured;
determining, according to the distribution information, multiple depth ranges of the scene to be captured relative to a shooting reference position, and multiple target regions corresponding to the multiple depth ranges; and
determining multiple groups of flash parameters corresponding to the multiple target regions.
For the specific implementation of the steps in the program 532, refer to the corresponding description of the corresponding steps and units in the above embodiments, which is not repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the devices and modules described above, reference may be made to the corresponding process descriptions in the foregoing method embodiments, which are not repeated here.
As shown in Fig. 7, a possible embodiment of the present application provides an image acquisition method, including:
S610: obtaining multiple groups of flash parameters corresponding to multiple target regions of multiple depth ranges in a scene to be captured;
S620: in response to a shooting instruction, flashing the scene to be captured multiple times with the multiple groups of flash parameters, and shooting the scene to be captured multiple times to obtain multiple initial images, where each shot of the multiple shots corresponds to one flash of the multiple flashes;
S630: synthesizing the multiple initial images.
For example, an image acquisition device provided by the present application serves as the execution body of this embodiment and performs S610-S630. Specifically, the image acquisition device may be provided in a user equipment in a software, hardware, or combined software and hardware manner, or the image acquisition device may itself be the user equipment; the user equipment includes but is not limited to: a camera, a mobile phone with an image acquisition function, smart glasses, and the like.
The technical solution of this embodiment of the present application determines, according to the distribution information of at least one subject in the scene to be captured, multiple groups of flash parameters corresponding to multiple target regions of multiple depth ranges, so that when the scene to be captured is shot, the flash can provide suitable fill light, according to the multiple groups of flash parameters, for subjects at multiple different depths in the scene, and a well-exposed image of the scene can thus be acquired.
The steps of this embodiment of the present application are further described below through the following embodiments:
S610: obtaining multiple groups of flash parameters corresponding to multiple target regions of multiple depth ranges in a scene to be captured.
In this embodiment of the present application, step S610 may obtain the multiple groups of flash parameters in multiple ways, for example:
In one possible implementation, the multiple groups of flash parameters are obtained from at least one external device.
In one possible implementation, the image acquisition device may be a digital camera, while another user equipment of the user, for example a mobile phone or smart glasses, obtains the depth information of the current scene to be captured through its own depth sensor and obtains the multiple groups of flash parameters according to the depth information; the image acquisition device then obtains the multiple groups of flash parameters by communicating with that external device.
In another possible implementation, step S610 obtains the multiple groups of flash parameters in the same way as in the flash control method of the embodiment shown in Fig. 1, including:
obtaining distribution information of at least one subject in the scene to be captured;
determining, according to the distribution information, the multiple depth ranges of the scene to be captured relative to a shooting reference position, and the multiple target regions corresponding to the multiple depth ranges; and
determining the multiple groups of flash parameters corresponding to the multiple target regions.
Optionally, in one possible implementation, the distribution information may include:
depth information of the at least one subject relative to the shooting reference position.
Optionally, in one possible implementation, the distribution information may further include:
two-dimensional distribution information, on a shooting imaging surface, of the image region corresponding to the at least one subject on that shooting imaging surface.
Optionally, in one possible implementation, obtaining the distribution information may include:
obtaining the distribution information from a depth map of the scene to be captured relative to the shooting reference position.
In this implementation, in addition to the depth information, the distribution information may further include:
the two-dimensional distribution information, on the depth map, of the region corresponding to the at least one subject on the depth map.
Optionally, in one possible implementation, determining the multiple target regions according to the distribution information includes:
determining the multiple depth ranges according to the depth information; and
determining, according to the two-dimensional distribution information, at least one target region of the multiple target regions corresponding to each depth range of the multiple depth ranges.
Optionally, in one possible implementation, determining the multiple depth ranges according to the depth information includes:
determining the depth distribution of the at least one subject according to the depth information; and
determining the multiple depth ranges according to the depth distribution.
Optionally, in one possible implementation, determining the at least one target region corresponding to each depth range according to the two-dimensional distribution information includes:
determining, according to the two-dimensional distribution information, the lateral distribution, perpendicular to the depth direction, of the at least one subject within each depth range; and
determining the at least one target region according to the lateral distribution.
For further description of obtaining the multiple groups of flash parameters, refer to the corresponding description in the embodiments shown in Fig. 1 to Fig. 3b, which is not repeated here.
S620: in response to a shooting instruction, flashing the scene to be captured multiple times with the multiple groups of flash parameters, and shooting the scene to be captured multiple times to obtain multiple initial images, where each shot of the multiple shots corresponds to one flash of the multiple flashes.
In one possible implementation of this embodiment, the shooting instruction may be an instruction generated by a user's operation action; for example, the shooting instruction is generated according to a user's action of pressing the shutter, a voice shooting command, or the like. In another possible implementation, the shooting instruction may also be generated when some preset shooting conditions are met, for example: in a monitoring scene, a photo is taken every 5 minutes by preset; or a photo is taken when a moving object enters the scene.
In this embodiment of the present application, multiple flashes are performed corresponding to the multiple groups of flash parameters, where one shot is performed corresponding to each flash to obtain one initial image of the scene to be captured; after the multiple flashes, the multiple shots are also completed, yielding the multiple initial images.
In this implementation, the parameters of each shot may be identical. Of course, in other possible implementations of this embodiment, they may also be adjusted according to the multiple groups of flash parameters as required by the user's desired effect, for example: the focal length of each shot is matched to the flash distance of the corresponding flash.
S630: synthesizing the multiple initial images.
In one possible implementation of this embodiment, step S630 may include:
determining at least one image subregion of each initial image of the multiple initial images according to at least one exposure standard; and
synthesizing the multiple initial images according to the at least one image subregion of each initial image.
On the initial image corresponding to a flash, the subjects within the depth range corresponding to that flash are properly exposed, and the image regions corresponding to those subjects on that initial image should meet at least one exposure standard (for example, a luminance standard, a resolution standard, or the like). Therefore, in this implementation, the at least one image subregion of each initial image is determined solely from the exposure effect of each region of the obtained multiple initial images.
After the at least one image subregion of each initial image of the multiple initial images is obtained, suitable image subregions are selected for splicing and fusion, where, in one possible implementation, the boundary pixels between the image subregions may be blurred or averaged using fusion technology to keep the whole photo continuous.
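A minimal sketch of this synthesis, assuming the initial images are grayscale float arrays in [0, 1] and using closeness to mid-gray as the exposure standard; the seam blurring or averaging described above is omitted for brevity:

```python
import numpy as np

def synthesize(initial_images, well_exposed=0.5):
    """S630 sketch: at each pixel, keep the value from whichever initial image is
    best exposed there (closest to mid-gray)."""
    stack = np.stack(initial_images)          # shape (n, H, W)
    error = np.abs(stack - well_exposed)      # distance from good exposure
    best = np.argmin(error, axis=0)           # per-pixel index of best-exposed image
    return np.take_along_axis(stack, best[None], axis=0)[0]
```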
In addition to synthesizing the images according to the exposure effect of the obtained initial images, in another possible implementation of this embodiment, the images may also be synthesized according to the target image subregions, on the corresponding initial images, that correspond to the target regions of each shot in the scene to be captured. For example, step S630 may include:
determining at least one target image subregion of each initial image of the multiple initial images according to the multiple target regions; and
synthesizing the multiple initial images according to the at least one target image subregion of each initial image.
Optionally, in one possible implementation, determining the at least one target image subregion of each initial image according to the multiple target regions includes:
determining at least one target subject of each shot of the multiple shots according to the multiple target regions; and
determining the at least one target image subregion of each initial image according to the at least one target subject of each shot.
For example, in the embodiment shown in Fig. 2, the image regions on each initial image corresponding respectively to the first object 211, the second object 212, and the third object 213 can be determined from the depth map of the scene to be captured. After the three groups of flash parameters are determined from the three target regions of the three objects, it may be determined that, for example, the target subject of the first group of flash parameters is the first object 211, the target subject of the second group of flash parameters is the second object 212, and the target subject of the third group of flash parameters is the third object 213. Therefore, as shown in Fig. 8a-8c, in the first initial image 710, flashed and shot with the first group of flash parameters, the target image subregion is the first target image subregion 711 corresponding to the first object 211 (target image subregions are indicated by diagonal hatching); likewise, the target image subregion of the second initial image 720 corresponding to the second group of flash parameters is the second target image subregion 721 corresponding to the second object 212, and the target image subregion of the third initial image 730 corresponding to the third group of flash parameters is the third target image subregion 731 corresponding to the third object 213. As shown in Fig. 8d, synthesizing these three target image subregions yields a composite image 740 in which every depth is properly exposed.
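The Fig. 8a-8d flow amounts to mask-based compositing; a sketch, assuming one boolean target-image-subregion mask per initial image (derived, for example, from the depth map), with the masks together covering the frame:

```python
import numpy as np

def synthesize_by_target_regions(initial_images, target_masks):
    """Fig. 8 sketch: copy each initial image only inside its own target image subregion."""
    composite = np.zeros_like(initial_images[0])
    for image, mask in zip(initial_images, target_masks):
        composite[mask] = image[mask]
    return composite
```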
Those skilled in the art can see that, with the method of this embodiment of the present application, by applying to the scene to be captured multiple flashes directed at different depths and/or different lateral distributions, suitable fill light can be provided for subjects at different depths and/or with different lateral distributions, avoiding uneven exposure.
Those skilled in the art will understand that, in the above methods of the specific embodiments of the present application, the serial numbers of the steps do not imply an execution order; the execution order of the steps should be determined by their functions and internal logic, and should not limit the implementation of the specific embodiments of the present application in any way.
As shown in Fig. 9, a possible embodiment of the present application provides an image acquisition device 800, including:
a parameter acquisition module 810, configured to obtain multiple groups of flash parameters corresponding to multiple target regions of multiple depth ranges in a scene to be captured;
a flash module 820, configured to, in response to a shooting instruction, flash the scene to be captured multiple times with the multiple groups of flash parameters;
an image capture module 830, configured to, in response to the shooting instruction, shoot the scene to be captured multiple times to obtain multiple initial images, where each shot of the multiple shots corresponds to one flash of the multiple flashes; and
a processing module 840, configured to synthesize the multiple initial images.
The technical solution of this embodiment of the present application determines, according to the distribution information of at least one subject in the scene to be captured, multiple groups of flash parameters corresponding to multiple target regions of multiple depth ranges, so that when the scene to be captured is shot, the flash can provide suitable fill light, according to the multiple groups of flash parameters, for subjects at multiple different depths in the scene, and a well-exposed image of the scene can thus be acquired.
Each module of the embodiment of the present application is further detailed below:
Optionally, as shown in Figure 10a, in a possible embodiment of the present application, the parameter acquisition module 810 may comprise:
a communication submodule 811, for obtaining the multiple groups of flash parameters from at least one external device.
For example, in one possible embodiment, the image acquisition device 800 may be a digital camera, while another user device, such as the user's mobile phone or smart glasses, obtains depth information of the current scene to be captured through its own depth sensor and derives the multiple groups of flash parameters from that depth information; the image acquisition device 800 then obtains the multiple groups of flash parameters by communicating with that external device.
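A minimal sketch of such a communication submodule follows; the JSON-over-TCP wire format, the request message and the port number are assumptions for illustration only, since the patent does not specify a protocol:

```python
import json
import socket

def fetch_flash_params(host, port=5005):
    """Ask a companion device (e.g. a phone with a depth sensor) for the
    flash parameter groups it derived from its depth map."""
    with socket.create_connection((host, port), timeout=2.0) as conn:
        conn.sendall(b'{"request": "flash_params"}\n')
        payload = conn.makefile().readline()
    # Expected (hypothetical) payload, one object per group:
    # [{"distance": 1.2, "direction": -10.0, "coverage_angle": 40.0}, ...]
    return json.loads(payload)
```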
Optionally, as shown in Figure 10b, in a possible embodiment of the present application, the parameter acquisition module 810 may comprise:
a distribution information acquisition submodule 812, for obtaining the distribution information of at least one subject in the scene to be captured;
a target region determination submodule 813, for determining, according to the distribution information, the multiple depth ranges of the scene to be captured relative to a shooting reference position and the multiple target regions corresponding to the multiple depth ranges;
a parameter determination submodule 814, for determining the multiple groups of flash parameters corresponding to the multiple target regions.
In this embodiment, the structure and function of the parameter acquisition module 810 may be the same as those of the flash control device 400 described above, i.e., the distribution information acquisition submodule 812, the target region determination submodule 813 and the parameter determination submodule 814 are identical in structure and function to the distribution information acquisition submodule 410, the target region determination submodule 420 and the parameter determination submodule 430 of the flash control device 400. The structure and function of the parameter acquisition module 810 are therefore not repeated here; refer to the description of the flash control device 400 in the embodiment shown in Figure 4.
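To make the submodule pipeline concrete, here is a hedged sketch of one way depth ranges and target regions could be derived from a depth map; the gap-based clustering rule and the 0.5 m threshold are illustrative assumptions, not the patent's prescribed algorithm:

```python
import numpy as np

def depth_ranges_from_map(depth_map, gap=0.5):
    """Cluster the sorted depth samples into ranges: subjects separated by
    more than `gap` metres of empty depth fall into different ranges."""
    depths = np.sort(depth_map[np.isfinite(depth_map)].ravel())
    ranges, start = [], depths[0]
    for prev, cur in zip(depths, depths[1:]):
        if cur - prev > gap:          # an empty stretch closes the current range
            ranges.append((start, prev))
            start = cur
    ranges.append((start, depths[-1]))
    return ranges

def target_regions(depth_map, ranges):
    """One binary mask per depth range: the lateral (two-dimensional)
    distribution of the subjects that fall inside that range."""
    return [(depth_map >= lo) & (depth_map <= hi) for lo, hi in ranges]
```

For a scene like that of Figures 8a-8d, such clustering would yield one depth range per object 211/212/213, and three masks giving their lateral distributions.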
Optionally, as shown in Figure 10a, in a possible embodiment, the processing module 840 comprises:
a first determination submodule 841, for determining, according to at least one exposure standard, at least one image sub-region of each initial image among the multiple initial images;
a first synthesis submodule 842, for synthesizing the multiple initial images according to the at least one image sub-region of each initial image.
Since, on the initial image corresponding to a given flash, the subjects within the depth range corresponding to that flash are properly exposed, the image regions corresponding to those subjects on that initial image satisfy at least one exposure standard (for example, a luminance standard, a resolution standard, etc.). Therefore, in this embodiment, the first determination submodule 841 can determine the at least one image sub-region on each initial image simply according to the exposure of each region on the obtained multiple initial images.
After the first determination submodule 841 has obtained the at least one image sub-region of each initial image among the multiple initial images, the first synthesis submodule 842 can select suitable image sub-regions for stitching and fusion; in one possible embodiment, fusion techniques such as blurring or averaging can be applied to the boundary pixels between the image sub-regions to maintain the continuity of the photograph as a whole.
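A minimal sketch of this exposure-based selection and boundary averaging, assuming 8-bit RGB images and an illustrative luminance standard (the 40-215 grey-level window and the Gaussian feathering radius are assumptions, not values from the disclosure):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def well_exposed_mask(img, lo=40, hi=215):
    """One possible luminance standard: a pixel counts as properly exposed
    when its grey value sits away from both clipping ends."""
    grey = img.mean(axis=2)
    return (grey > lo) & (grey < hi)

def fuse_with_feathering(images, sigma=5.0):
    """Per pixel, keep the exposures that pass the standard, and blur the
    mask edges so stitched boundaries stay continuous (the 'averaging'
    mentioned in the text)."""
    weights = [gaussian_filter(well_exposed_mask(im).astype(np.float32), sigma)
               for im in images]
    total = np.maximum(sum(weights), 1e-6)[..., None]
    out = sum(im.astype(np.float32) * w[..., None]
              for im, w in zip(images, weights))
    return (out / total).astype(np.uint8)
```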
Optionally, as shown in Figure 10b, in another possible embodiment, the processing module 840 comprises:
a second determination submodule 843, for determining, according to the multiple target regions, at least one target image sub-region of each initial image among the multiple initial images;
a second synthesis submodule 844, for synthesizing the multiple initial images according to the at least one target image sub-region of each initial image.
Optionally, as shown in Figure 10c, in a possible embodiment, the second determination submodule 843 comprises:
a target determination unit 8431, for determining, according to the multiple target regions, at least one target subject of each shot among the multiple shots;
a sub-region determination unit 8432, for determining the at least one target image sub-region of each initial image according to the at least one target subject of the corresponding shot.
For the functions of the modules and units in the embodiment shown in Figure 10c, refer to the corresponding descriptions of the embodiments shown in Figures 8a-8d, which are not repeated here.
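A brief sketch of the sub-region determination step, assuming each shot's target subjects are already available as a binary mask aligned with the initial image (a simplification of what the target determination unit 8431 would supply):

```python
import numpy as np

def target_image_subregions(target_masks):
    """Bound each shot's target-subject mask with the image sub-region that
    the second synthesis submodule 844 would then cut out and merge."""
    boxes = []
    for mask in target_masks:
        ys, xs = np.nonzero(mask)
        boxes.append((int(xs.min()), int(ys.min()),
                      int(xs.max()) + 1, int(ys.max()) + 1))
    return boxes  # one (x0, y0, x1, y1) box per initial image
```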
Figure 11 is a structural schematic diagram of another image acquisition device 1000 provided by the embodiments of the present application; the specific embodiments of the present application do not limit the specific implementation of the image acquisition device 1000. As shown in Figure 11, the image acquisition device 1000 may comprise:
a processor 1010, a communications interface 1020, a memory 1030 and a communication bus 1040, wherein:
the processor 1010, the communications interface 1020 and the memory 1030 communicate with one another via the communication bus 1040.
The communications interface 1020 is used for communicating with network elements such as clients.
The processor 1010 is used for executing a program 1032, and may specifically execute the relevant steps in the above method embodiments.
Specifically, the program 1032 may include program code, the program code including computer operation instructions.
The processor 1010 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The memory 1030 is used for storing the program 1032. The memory 1030 may comprise high-speed RAM, and may also comprise non-volatile memory, for example at least one disk memory. The program 1032 may specifically be used to cause the image acquisition device 1000 to execute the following steps:
obtaining multiple groups of flash parameters corresponding to multiple target regions of multiple depth ranges in a scene to be captured;
in response to a shooting instruction, flashing the scene to be captured multiple times with the multiple groups of flash parameters, and shooting the scene to be captured multiple times to obtain multiple initial images, wherein each shot among the multiple shots corresponds to one flash among the multiple flashes;
synthesizing the multiple initial images.
For the specific implementation of each step in the program 1032, refer to the corresponding descriptions of the corresponding steps and units in the above embodiments, which are not repeated here. It will be clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the devices and modules described above, reference may be made to the corresponding process descriptions in the foregoing method embodiments, which are likewise not repeated here.
Those of ordinary skill in the art will appreciate that the units and method steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or in software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of the present application.
If the functions are implemented in the form of software functional units and are sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes over the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage media include various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above embodiments are intended only to illustrate the present application and not to limit it; those of ordinary skill in the relevant technical field may also make various changes and modifications without departing from the spirit and scope of the present application, so all equivalent technical solutions also fall within the scope of the present application, and the scope of patent protection of the present application shall be defined by the claims.
Claims (50)
1. A flash control method, characterized by comprising:
obtaining distribution information of at least one subject in a scene to be captured, the distribution information comprising: depth information of the at least one subject relative to a shooting reference position;
determining, according to the distribution information, multiple depth ranges of the scene to be captured relative to the shooting reference position, and multiple target regions corresponding to the multiple depth ranges;
determining multiple groups of flash parameters corresponding to the multiple target regions.
2. The method of claim 1, characterized in that the distribution information further comprises:
two-dimensional distribution information, on a shooting imaging surface, of the image regions corresponding to the at least one subject on the shooting imaging surface.
3. The method of claim 1, characterized in that obtaining the distribution information comprises:
obtaining the distribution information according to a depth map of the scene to be captured relative to the shooting reference position.
4. The method of claim 3, characterized in that the distribution information further comprises:
two-dimensional distribution information, on the depth map, of the regions corresponding to the at least one subject on the depth map.
5. The method of claim 1, characterized in that obtaining the distribution information comprises:
obtaining the distribution information by information acquisition.
6. The method of claim 1, characterized in that obtaining the distribution information comprises:
obtaining the distribution information from at least one external device.
7. The method of claim 2 or 4, characterized in that determining the multiple target regions according to the distribution information comprises:
determining the multiple depth ranges according to the depth information;
determining, according to the two-dimensional distribution information, at least one target region among the multiple target regions corresponding to each depth range of the multiple depth ranges.
8. The method of claim 7, characterized in that determining the multiple depth ranges according to the depth information comprises:
determining a depth distribution of the at least one subject according to the depth information;
determining the multiple depth ranges according to the depth distribution.
9. The method of claim 7, characterized in that determining the at least one target region corresponding to each depth range according to the two-dimensional distribution information comprises:
determining, according to the two-dimensional distribution information, a lateral distribution of the at least one subject, perpendicular to the depth direction, within each depth range;
determining the at least one target region according to the lateral distribution.
10. The method of claim 1, characterized in that each group of flash parameters among the multiple groups of flash parameters comprises:
a flash distance parameter and a flash direction.
11. The method of claim 10, characterized in that each group of flash parameters further comprises:
a flash coverage angle.
12. The method of claim 1, characterized in that the method further comprises:
in response to a shooting instruction, flashing multiple times with the multiple groups of flash parameters.
13. A flash control device, characterized by comprising:
a distribution information acquisition submodule, for obtaining distribution information of at least one subject in a scene to be captured, the distribution information comprising: depth information of the at least one subject relative to a shooting reference position;
a target region determination submodule, for determining, according to the distribution information, multiple depth ranges of the scene to be captured relative to the shooting reference position and multiple target regions corresponding to the multiple depth ranges;
a parameter determination submodule, for determining multiple groups of flash parameters corresponding to the multiple target regions.
14. The device of claim 13, characterized in that the distribution information further comprises:
two-dimensional distribution information, on a shooting imaging surface, of the image regions corresponding to the at least one subject on the shooting imaging surface;
and the distribution information acquisition submodule further comprises:
a two-dimensional distribution information acquisition unit, for obtaining the two-dimensional distribution information.
15. The device of claim 13, characterized in that the distribution information acquisition submodule comprises:
a depth map processing unit, for obtaining the distribution information according to a depth map of the scene to be captured relative to the shooting reference position.
16. The device of claim 15, characterized in that the distribution information further comprises:
two-dimensional distribution information, on the depth map, of the regions corresponding to the at least one subject on the depth map;
and the depth map processing unit is further used for obtaining the two-dimensional distribution information according to the depth map.
17. The device of claim 13, characterized in that the distribution information acquisition submodule comprises:
an information acquisition unit, for obtaining the distribution information by information acquisition.
18. The device of claim 13, characterized in that the distribution information acquisition submodule comprises:
a communication unit, for obtaining the distribution information from at least one external device.
19. The device of claim 14 or 16, characterized in that the target region determination submodule comprises:
a depth range determination unit, for determining the multiple depth ranges according to the depth information;
a target region determination unit, for determining, according to the two-dimensional distribution information, at least one target region among the multiple target regions corresponding to each depth range of the multiple depth ranges.
20. The device of claim 19, characterized in that the depth range determination unit comprises:
a depth distribution determination subunit, for determining a depth distribution of the at least one subject according to the depth information;
a depth range determination subunit, for determining the multiple depth ranges according to the depth distribution.
21. The device of claim 19, characterized in that the target region determination unit comprises:
a lateral distribution determination subunit, for determining, according to the two-dimensional distribution information, a lateral distribution of the at least one subject, perpendicular to the depth direction, within each depth range;
a target region determination subunit, for determining the at least one target region according to the lateral distribution.
22. The device of claim 13, characterized in that the parameter determination submodule comprises:
a flash distance parameter determination unit, for determining a flash distance parameter corresponding to each target region among the multiple target regions;
a flash direction determination unit, for determining a flash direction corresponding to each target region.
23. The device of claim 22, characterized in that the parameter determination submodule further comprises:
a flash angle determination unit, for determining a flash coverage angle corresponding to each target region.
24. The device of claim 13, characterized in that the device further comprises:
a flash module, for flashing multiple times with the multiple groups of flash parameters in response to a shooting instruction.
25. An image acquisition method, characterized by comprising:
obtaining multiple groups of flash parameters corresponding to multiple target regions of multiple depth ranges in a scene to be captured;
in response to a shooting instruction, flashing the scene to be captured multiple times with the multiple groups of flash parameters, and shooting the scene to be captured multiple times to obtain multiple initial images, wherein each shot among the multiple shots corresponds to one flash among the multiple flashes;
synthesizing the multiple initial images.
26. The method of claim 25, characterized in that obtaining the multiple groups of flash parameters comprises:
obtaining the multiple groups of flash parameters from at least one external device.
27. The method of claim 25, characterized in that obtaining the multiple groups of flash parameters comprises:
obtaining distribution information of at least one subject in the scene to be captured;
determining, according to the distribution information, the multiple depth ranges of the scene to be captured relative to a shooting reference position, and the multiple target regions corresponding to the multiple depth ranges;
determining the multiple groups of flash parameters corresponding to the multiple target regions.
28. The method of claim 27, characterized in that the distribution information comprises:
depth information of the at least one subject relative to the shooting reference position.
29. The method of claim 28, characterized in that the distribution information further comprises:
two-dimensional distribution information, on a shooting imaging surface, of the image regions corresponding to the at least one subject on the shooting imaging surface.
30. The method of claim 28, characterized in that obtaining the distribution information comprises:
obtaining the distribution information according to a depth map of the scene to be captured relative to the shooting reference position.
31. The method of claim 30, characterized in that the distribution information further comprises:
two-dimensional distribution information, on the depth map, of the regions corresponding to the at least one subject on the depth map.
32. The method of claim 29 or 31, characterized in that determining the multiple target regions according to the distribution information comprises:
determining the multiple depth ranges according to the depth information;
determining, according to the two-dimensional distribution information, at least one target region among the multiple target regions corresponding to each depth range of the multiple depth ranges.
33. The method of claim 32, characterized in that determining the multiple depth ranges according to the depth information comprises:
determining a depth distribution of the at least one subject according to the depth information;
determining the multiple depth ranges according to the depth distribution.
34. The method of claim 32, characterized in that determining the at least one target region corresponding to each depth range according to the two-dimensional distribution information comprises:
determining, according to the two-dimensional distribution information, a lateral distribution of the at least one subject, perpendicular to the depth direction, within each depth range;
determining the at least one target region according to the lateral distribution.
35. The method of claim 25, characterized in that synthesizing the multiple initial images comprises:
determining, according to at least one exposure standard, at least one image sub-region of each initial image among the multiple initial images;
synthesizing the multiple initial images according to the at least one image sub-region of each initial image.
36. The method of claim 25, characterized in that synthesizing the multiple initial images comprises:
determining, according to the multiple target regions, at least one target image sub-region of each initial image among the multiple initial images;
synthesizing the multiple initial images according to the at least one target image sub-region of each initial image.
37. The method of claim 36, characterized in that determining the at least one target image sub-region of each initial image according to the multiple target regions comprises:
determining, according to the multiple target regions, at least one target subject of each shot among the multiple shots;
determining the at least one target image sub-region of each initial image according to the at least one target subject of the corresponding shot.
38. An image acquisition device, characterized by comprising:
a parameter acquisition module, for obtaining multiple groups of flash parameters corresponding to multiple target regions of multiple depth ranges in a scene to be captured;
a flash module, for flashing the scene to be captured multiple times with the multiple groups of flash parameters in response to a shooting instruction;
an image capture module, for shooting the scene to be captured multiple times in response to the shooting instruction to obtain multiple initial images, wherein each shot among the multiple shots corresponds to one flash among the multiple flashes;
a processing module, for synthesizing the multiple initial images.
39. The device of claim 38, characterized in that the parameter acquisition module comprises:
a communication submodule, for obtaining the multiple groups of flash parameters from at least one external device.
40. The device of claim 38, characterized in that the parameter acquisition module comprises:
a distribution information acquisition submodule, for obtaining distribution information of at least one subject in the scene to be captured;
a target region determination submodule, for determining, according to the distribution information, the multiple depth ranges of the scene to be captured relative to a shooting reference position and the multiple target regions corresponding to the multiple depth ranges;
a parameter determination submodule, for determining the multiple groups of flash parameters corresponding to the multiple target regions.
41. The device of claim 40, characterized in that the distribution information comprises:
depth information of the at least one subject relative to the shooting reference position;
and the distribution information acquisition submodule comprises:
a depth information acquisition unit, for obtaining the depth information.
42. The device of claim 41, characterized in that the distribution information further comprises:
two-dimensional distribution information, on a shooting imaging surface, of the image regions corresponding to the at least one subject on the shooting imaging surface;
and the distribution information acquisition submodule further comprises:
a two-dimensional distribution information acquisition unit, for obtaining the two-dimensional distribution information.
43. The device of claim 41, characterized in that the distribution information acquisition submodule comprises:
a depth map processing unit, for obtaining the distribution information according to a depth map of the scene to be captured relative to the shooting reference position.
44. The device of claim 43, characterized in that the distribution information further comprises:
two-dimensional distribution information, on the depth map, of the regions corresponding to the at least one subject on the depth map;
and the depth map processing unit is further used for obtaining the two-dimensional distribution information according to the depth map.
45. The device of claim 42 or 44, characterized in that the target region determination submodule comprises:
a depth range determination unit, for determining the multiple depth ranges according to the depth information;
a target region determination unit, for determining, according to the two-dimensional distribution information, at least one target region among the multiple target regions corresponding to each depth range of the multiple depth ranges.
46. The device of claim 45, characterized in that the depth range determination unit comprises:
a depth distribution determination subunit, for determining a depth distribution of the at least one subject according to the depth information;
a depth range determination subunit, for determining the multiple depth ranges according to the depth distribution.
47. The device of claim 45, characterized in that the target region determination unit comprises:
a lateral distribution determination subunit, for determining, according to the two-dimensional distribution information, a lateral distribution of the at least one subject, perpendicular to the depth direction, within each depth range;
a target region determination subunit, for determining the at least one target region according to the lateral distribution.
48. The device of claim 38, characterized in that the processing module comprises:
a first determination submodule, for determining, according to at least one exposure standard, at least one image sub-region of each initial image among the multiple initial images;
a first synthesis submodule, for synthesizing the multiple initial images according to the at least one image sub-region of each initial image.
49. The device of claim 38, characterized in that the processing module comprises:
a second determination submodule, for determining, according to the multiple target regions, at least one target image sub-region of each initial image among the multiple initial images;
a second synthesis submodule, for synthesizing the multiple initial images according to the at least one target image sub-region of each initial image.
50. The device of claim 49, characterized in that the second determination submodule comprises:
a target determination unit, for determining, according to the multiple target regions, at least one target subject of each shot among the multiple shots;
a sub-region determination unit, for determining the at least one target image sub-region of each initial image according to the at least one target subject of the corresponding shot.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201410361259.9A | 2014-07-25 | 2014-07-25 | Flash control method and control device, image-pickup method and harvester
Publications (2)

Publication Number | Publication Date
---|---
CN104113702A | 2014-10-22
CN104113702B | 2018-09-04
Family
ID=51710325
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201410361259.9A | Flash control method and control device, image-pickup method and harvester | 2014-07-25 | 2014-07-25

Country Status (1)

Country | Link
---|---
CN | CN104113702B
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107623819B (en) * | 2015-04-30 | 2019-08-09 | Oppo广东移动通信有限公司 | A kind of method taken pictures and mobile terminal and related media production |
CN106303267B (en) * | 2015-05-25 | 2019-06-04 | 北京智谷睿拓技术服务有限公司 | Image capture device and method |
CN105049704A (en) * | 2015-06-17 | 2015-11-11 | 青岛海信移动通信技术股份有限公司 | Shooting method and equipment |
CN109923856A (en) * | 2017-05-11 | 2019-06-21 | 深圳市大疆创新科技有限公司 | Light supplementing control device, system, method and mobile device |
CN109218623B (en) * | 2018-11-05 | 2021-07-20 | 浙江大华技术股份有限公司 | Light supplementing method and device, computer device and readable storage medium |
CN111246119A (en) * | 2018-11-29 | 2020-06-05 | 杭州海康威视数字技术股份有限公司 | Camera and light supplement control method and device |
CN111866373B (en) * | 2020-06-19 | 2021-12-28 | 北京小米移动软件有限公司 | Method, device and medium for displaying shooting preview image |
CN112312113B (en) * | 2020-10-29 | 2022-07-15 | 贝壳技术有限公司 | Method, device and system for generating three-dimensional model |
WO2022095012A1 (en) * | 2020-11-09 | 2022-05-12 | 深圳市大疆创新科技有限公司 | Shutter adjustment method and device, photography device, and movable platform |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6654062B1 (en) * | 1997-11-13 | 2003-11-25 | Casio Computer Co., Ltd. | Electronic camera |
CN103685875A (en) * | 2012-08-28 | 2014-03-26 | 株式会社理光 | Imaging apparatus |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4823743B2 (en) * | 2006-04-03 | 2011-11-24 | 三星電子株式会社 | Imaging apparatus and imaging method |
JP4973719B2 (en) * | 2009-11-11 | 2012-07-11 | カシオ計算機株式会社 | Imaging apparatus, imaging method, and imaging program |
CN103118163A (en) * | 2011-11-16 | 2013-05-22 | 中兴通讯股份有限公司 | Method and terminal of controlling photographing flash light |
Legal Events

Date | Code | Title | Description
---|---|---|---
| C06 | Publication |
| PB01 | Publication |
| C10 | Entry into substantive examination |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |