CN107295269A - Light metering method, terminal, and computer-readable storage medium - Google Patents
Light metering method, terminal, and computer-readable storage medium
- Publication number
- CN107295269A CN107295269A CN201710639685.8A CN201710639685A CN107295269A CN 107295269 A CN107295269 A CN 107295269A CN 201710639685 A CN201710639685 A CN 201710639685A CN 107295269 A CN107295269 A CN 107295269A
- Authority
- CN
- China
- Prior art keywords
- image
- region
- light
- depth
- depth value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/71—Circuitry for evaluating the brightness variation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10052—Images from lightfield camera
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a light metering method, a terminal, and a computer-readable storage medium. The method includes: capturing images of a viewing area with a dual camera to obtain a first image and a second image; computing the depth map of the viewing area from the first image and the second image, where each pixel in the depth map has a corresponding depth value; dividing the depth map into regions based on the depth values of its pixels, and determining from the depth map the subject region to be metered; and metering the subject region to obtain the light intensity parameter of the subject region.
Description
Technical field
The present invention relates to the field of camera photography, and in particular to a light metering method and terminal based on depth information, and a computer storage medium.
Background
With the development of intelligent terminals, the performance of their cameras has become better and better. At present, the metering mode of such cameras is mainly fixed-region spot metering, which meters light at the focusing area or at the central area of the camera's field of view. Metering controls the exposure parameters and thus determines how bright or dark the photo will be.
Because the metering point of the fixed-region spot metering mode is fixed at the focusing area or the central area of the camera, the metered region may not be the subject region of the image (for example, a person's face). This can result in poor exposure of the image subject and degrade the shooting quality of the picture.
Summary of the invention
To solve the above technical problem, embodiments of the present invention provide a light metering method, a terminal, and a computer storage medium.
The light metering method provided in an embodiment of the present invention includes:
capturing images of a viewing area with a dual camera to obtain a first image and a second image;
computing the depth map of the viewing area from the first image and the second image, where each pixel in the depth map has a corresponding depth value;
dividing the depth map into regions based on the depth values of its pixels, and determining from the depth map the subject region to be metered;
metering the subject region to obtain the light intensity parameter of the subject region.
In an embodiment of the present invention, dividing the depth map into regions based on the depth values of its pixels and determining from the depth map the subject region to be metered includes:
dividing the depth map into regions with an image segmentation algorithm, based on the depth values of its pixels, to obtain multiple image regions, where the mean square deviation of the depth values of the pixels within each image region is less than or equal to a preset value;
for each image region, computing the average of the depth values of the pixels in the image region, and taking that average as the depth value of the image region;
based on the depth value of each image region, determining the image region whose depth value meets a preset condition as the subject region to be metered.
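As a rough illustration of the constraint above, the following sketch splits a depth map until the depth values inside every region have a standard deviation no greater than the preset value, then takes each region's mean depth as the region's depth value. The quadtree-style splitting, the threshold, and the toy depth map are assumptions for illustration only, not the patent's actual segmentation algorithm.

```python
import numpy as np

def split_regions(depth, max_std=0.5):
    """Recursively split the depth map into quadrants until the depth
    values inside each region have a standard deviation <= max_std.
    Returns a list of (row_slice, col_slice) regions."""
    regions = []

    def split(r0, r1, c0, c1):
        block = depth[r0:r1, c0:c1]
        if block.size == 0:
            return
        if block.std() <= max_std or (r1 - r0 <= 1 and c1 - c0 <= 1):
            regions.append((slice(r0, r1), slice(c0, c1)))
            return
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        split(r0, rm, c0, cm); split(r0, rm, cm, c1)
        split(rm, r1, c0, cm); split(rm, r1, cm, c1)

    split(0, depth.shape[0], 0, depth.shape[1])
    return regions

# A toy depth map: a near subject (depth 1.0) on a far background (depth 5.0).
depth = np.full((8, 8), 5.0)
depth[2:6, 2:6] = 1.0
regions = split_regions(depth, max_std=0.1)
# Each region's mean depth serves as the region's depth value.
region_depths = [float(depth[r].mean()) for r in regions]
```

Any splitting scheme works as long as the per-region deviation bound holds; a real implementation would more likely merge neighboring pixels of similar depth than split top-down.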
In an embodiment of the present invention, determining the image region whose depth value meets a preset condition as the subject region to be metered includes:
taking the image region with the largest depth value as the subject region to be metered.
In an embodiment of the present invention, metering the subject region to obtain the light intensity parameter of the subject region includes:
computing the depth histogram of the subject region from the depth values of the pixels in the subject region;
determining, from the depth histogram, the depth value with the largest pixel count;
taking the image region corresponding to that depth value as the metering center, and metering the metering center to obtain the light intensity parameter of the subject region.
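The histogram step above can be sketched as follows. The bin count and the toy pixel depths are assumed for illustration; the real embodiment would go on to meter the pixels at the returned depth rather than return the depth itself.

```python
import numpy as np

def metering_center_depth(region_depths, num_bins=16):
    """Build a depth histogram over the subject region's pixels and
    return the depth value (bin center) with the largest pixel count."""
    counts, edges = np.histogram(region_depths, bins=num_bins)
    i = int(np.argmax(counts))            # bin holding the most pixels
    return 0.5 * (edges[i] + edges[i + 1])

# Toy subject region: most pixels sit near depth 1.2 m, with a few outliers.
pixels = np.array([1.2] * 50 + [1.1] * 10 + [3.0] * 5)
center_depth = metering_center_depth(pixels)
```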
In an embodiment of the present invention, computing the depth map of the viewing area from the first image and the second image includes:
matching pixels between the first image and the second image;
for each matched pair of pixels, computing the depth value of the pixel from the coordinates of the pair and the optical parameters of the dual camera;
forming the depth map from the depth values of all pixels.
The terminal provided in an embodiment of the present invention includes:
a dual camera, configured to capture images of a viewing area and obtain a first image and a second image;
a memory, configured to store a light metering program;
a processor, configured to execute the light metering program in the memory to:
compute the depth map of the viewing area from the first image and the second image, where each pixel in the depth map has a corresponding depth value;
divide the depth map into regions based on the depth values of its pixels, and determine from the depth map the subject region to be metered;
meter the subject region to obtain the light intensity parameter of the subject region.
In an embodiment of the present invention, the processor is further configured to execute the light metering program in the memory to:
divide the depth map into regions with an image segmentation algorithm, based on the depth values of its pixels, to obtain multiple image regions, where the mean square deviation of the depth values of the pixels within each image region is less than or equal to a preset value;
for each image region, compute the average of the depth values of the pixels in the image region, and take that average as the depth value of the image region;
based on the depth value of each image region, determine the image region whose depth value meets a preset condition as the subject region to be metered.
In an embodiment of the present invention, the processor is further configured to execute the light metering program in the memory to:
take the image region with the largest depth value as the subject region to be metered.
In an embodiment of the present invention, the processor is further configured to execute the light metering program in the memory to:
compute the depth histogram of the subject region from the depth values of the pixels in the subject region;
determine, from the depth histogram, the depth value with the largest pixel count;
take the image region corresponding to that depth value as the metering center, and meter the metering center to obtain the light intensity parameter of the subject region.
In an embodiment of the present invention, the processor is further configured to execute the light metering program in the memory to:
match pixels between the first image and the second image;
for each matched pair of pixels, compute the depth value of the pixel from the coordinates of the pair and the optical parameters of the dual camera;
form the depth map from the depth values of all pixels.
The computer storage medium provided in an embodiment of the present invention stores one or more programs that can be executed by one or more processors to implement any of the light metering methods described above.
In the technical solution of the embodiments of the present invention, images of a viewing area are captured with a dual camera to obtain a first image and a second image; the depth map of the viewing area is computed from the first image and the second image, where each pixel in the depth map has a corresponding depth value; the depth map is divided into regions based on the depth values of its pixels, and the subject region to be metered is determined from the depth map; and the subject region is metered to obtain its light intensity parameter. With this technical solution, the subject and the background of the viewing area are automatically distinguished based on depth information, and the subject region is then metered automatically. The metering data are accurate, which ensures clear exposure of the subject region in the viewing area and improves the shooting experience.
Brief description of the drawings
Fig. 1 is a hardware architecture diagram of an optional mobile terminal for implementing the embodiments of the invention;
Fig. 2 is an architecture diagram of a communications network system provided by an embodiment of the present invention;
Fig. 3 is a first schematic diagram of an image captured with the fixed-region spot metering mode;
Fig. 4 is a second schematic diagram of an image captured with the fixed-region spot metering mode;
Fig. 5 is a first flow chart of the light metering method of an embodiment of the present invention;
Fig. 6 is a schematic diagram of the dual camera of an embodiment of the present invention;
Fig. 7 is a schematic diagram of the formation of the depth map of an embodiment of the present invention;
Fig. 8 is a second flow chart of the light metering method of an embodiment of the present invention;
Fig. 9 is a schematic diagram of the triangulation principle of an embodiment of the present invention;
Fig. 10 is an image segmentation schematic diagram of an embodiment of the present invention;
Fig. 11 is a schematic diagram of a histogram of an embodiment of the present invention;
Fig. 12 is a schematic diagram of the structure of the terminal of an embodiment of the present invention.
Detailed description
It should be understood that the specific embodiments described here only illustrate the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "part", or "unit" are used only to aid the explanation of the present invention and have no specific meaning in themselves; "module", "part", and "unit" may therefore be used interchangeably.
A terminal can be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDA), portable media players (PMP), navigation devices, wearable devices, smart bracelets, and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
The subsequent description takes a mobile terminal as an example. Those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the constructions according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Referring to Fig. 1, which is a hardware architecture diagram of a mobile terminal for implementing the embodiments of the invention, the mobile terminal 100 can include parts such as an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not limit the mobile terminal; a mobile terminal may include more or fewer parts than illustrated, combine some parts, or arrange the parts differently.
The parts of the mobile terminal are introduced below with reference to Fig. 1:
The radio frequency unit 101 can be used to receive and send signals during messaging or a call; specifically, it receives downlink information from the base station and passes it to the processor 110 for handling, and sends uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, and a duplexer. In addition, the radio frequency unit 101 can also communicate with the network and other devices through wireless communication. The wireless communication can use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communication), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), and TDD-LTE (Time Division Duplexing-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband internet access. Although Fig. 1 shows the WiFi module 102, it is understood that it is not an essential part of the mobile terminal and can be omitted as needed without changing the essence of the invention.
When the mobile terminal 100 is in a mode such as call signal reception mode, call mode, recording mode, speech recognition mode, or broadcast reception mode, the audio output unit 103 can convert the audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to the specific function performed by the mobile terminal 100 (for example, a call signal reception sound or a message reception sound). The audio output unit 103 can include a loudspeaker, a buzzer, and so on.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 can include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processor 1041 processes the image data of still pictures or video obtained by an image capture apparatus (such as a camera) in video capture mode or image capture mode. The processed image frames can be displayed on the display unit 106, stored in the memory 109 (or other storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operational modes such as telephone call mode, recording mode, and speech recognition mode, and can process such sound into audio data. In the case of telephone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101. The microphone 1042 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) the noise or interference produced while receiving and sending audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one kind of motion sensor, the accelerometer can detect the magnitude of acceleration in all directions (generally three axes) and can detect the magnitude and direction of gravity when stationary; it can be used for applications that identify the phone's posture (such as horizontal/vertical screen switching, related games, and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tapping). The mobile phone can also be configured with other sensors such as a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, and infrared sensor, which are not repeated here.
The display unit 106 is used for displaying information input by the user or supplied to the user. The display unit 106 can include a display panel 1061, which can be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), and so on.
The user input unit 107 can be used to receive input numeric or character information and to produce key signal input related to the user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, collects the user's touch operations on or near it (such as the user's operations on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connecting devices according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes the commands sent by the processor 110. In addition, the touch panel 1071 can be realized in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 can also include other input devices 1072. Specifically, the other input devices 1072 can include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not limited here.
Further, the touch panel 1071 can cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 realize the input and output functions of the mobile terminal as two independent parts, in certain embodiments the touch panel 1071 and the display panel 1061 can be integrated to realize the input and output functions of the mobile terminal, which is not limited here.
The interface unit 108 serves as the interface through which at least one external device can connect with the mobile terminal 100. For example, the external device can include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The interface unit 108 can be used to receive input from an external device (for example, data information or electric power) and transfer the received input to one or more elements in the mobile terminal 100, or it can be used to transmit data between the mobile terminal 100 and an external device.
The memory 109 can be used to store software programs and various data. The memory 109 can mainly include a program storage area and a data storage area, where the program storage area can store the operating system and the application programs needed for at least one function (such as a sound playing function and an image playing function), and the data storage area can store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 109 can include high-speed random access memory, and can also include non-volatile memory, for example at least one magnetic disk memory, a flash memory device, or other solid-state memory parts.
The processor 110 is the control center of the mobile terminal. It uses various interfaces and lines to connect the parts of the whole mobile terminal, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 can integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 110.
The mobile terminal 100 can also include the power supply 111 (such as a battery) that powers the parts. Preferably, the power supply 111 can be logically connected with the processor 110 through a power management system, so as to realize functions such as charging management, discharging management, and power consumption management through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 can also include a Bluetooth module and so on, which is not repeated here.
To facilitate understanding of the embodiments of the present invention, the communications network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communications network system provided by an embodiment of the present invention. The communications network system is an LTE system of the universal mobile communications technology. The LTE system includes, communicatively connected in sequence, a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and the operator's IP services 204.
Specifically, the UE 201 can be the above-described terminal 100, which is not repeated here.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022, among others. The eNodeB 2021 can be connected with the other eNodeBs 2022 through a backhaul (such as an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide the UE 201 with access to the EPC 203.
The EPC 203 can include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and so on. The MME 2031 is a control node that handles the signaling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 is used to provide registers to manage functions such as the home location register (not shown) and preserves user-specific information about service features, data rates, and so on. All user data can be transmitted through the SGW 2034; the PGW 2035 can provide IP address allocation for the UE 201 and other functions; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides the available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP services 204 can include the internet, an intranet, an IMS (IP Multimedia Subsystem), or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should understand that the present invention is applicable not only to the LTE system but also to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems, which are not limited here.
Based on the above mobile terminal hardware structure and communications network system, the embodiments of the method of the present invention are proposed.
Fig. 3 is a first schematic diagram of an image captured with the fixed-region spot metering mode, and Fig. 4 is a second schematic diagram of an image captured with the fixed-region spot metering mode. As shown in Fig. 3, because the metering point was mistakenly placed on the sky in the background, the subject of the picture, the person's face, appears dark. If the metering point is placed on the shooting subject, correct exposure can be obtained; as shown in Fig. 4, the metering point was placed on the picture's subject, the person's face, when this photo was taken, so the face is clearly exposed. It can be seen that an image captured with the fixed-region spot metering mode is likely to have poor exposure. Therefore, the embodiments of the present invention propose a method of metering light based on depth information.
Fig. 5 is the first flow chart of the light metering method of an embodiment of the present invention. As shown in Fig. 5, the light metering method includes the following steps:
Step 501: Capture images of the viewing area with the dual camera to obtain a first image and a second image.
The technical solution of the embodiments of the present invention is applied in a terminal; the terminal can be a device such as a mobile phone, tablet computer, palmtop computer, or game console.
In the embodiments of the present invention, the objects in the viewing area can be divided into two major classes: subject objects and background objects, where the subject objects need to reach the best shooting effect. Here, the region where the subject objects are located is referred to as the subject region, and the region where the background objects are located is referred to as the background region. For example, if the viewing area is a person in front of a building, the region where the person is located is the subject region, and the region where the building, or any object other than the person, is located is the background region. The embodiments of the present invention aim to distinguish the subject region so that light can be metered specifically on it, which effectively guarantees the exposure of the subject region.
In the embodiments of the present invention, the terminal has a dual camera and realizes the shooting function through it. When realizing the shooting function, the exposure parameters must be determined reasonably in order to obtain a better shooting effect. Here, the dual camera includes two cameras, a first camera and a second camera; in one example, the first camera and the second camera have the same physical structure and optical parameters.
Fig. 6 is a schematic diagram of the dual camera of an embodiment of the present invention, where diagram (a) is the top view of the dual camera and diagram (b) is its side view. Typically, the dual camera of a terminal uses a cooperative shooting technique to achieve better photographic effects such as depth of field and 3D shooting. As shown in Fig. 6, the first camera 11 and the second camera 12 are two visible-light cameras, and 13 is the connecting member of the two cameras. The first camera 11 and the second camera 12 are fixed in the connecting member 13, with their imaging planes kept as parallel as possible. With this imaging system, the left-eye image (namely the first image) and the right-eye image (namely the second image) can be obtained at the same moment.
Step 502: Compute the depth map of the viewing area from the first image and the second image, where each pixel in the depth map has a corresponding depth value.
In the embodiments of the present invention, the depth map is computed by image processing software. Specifically, pixels are matched between the first image and the second image; for each matched pair of pixels, the depth value of the pixel is computed from the coordinates of the pair and the optical parameters of the dual camera; and the depth map is formed from the depth values of all pixels.
Fig. 7 is a schematic diagram of depth-map formation in the embodiment of the present invention. Suppose the first camera is the left camera and the second camera is the right camera: the left camera captures the left-eye image (also called the left-view image), and the right camera captures the right-eye image (also called the right-view image). As shown in Fig. 7, Pleft is an arbitrary pixel of the left-view image; an image-matching technique is used to find the pixel Pright in the right-view image that is most similar to Pleft. Here, pixel similarity can be compared using the colour and brightness of the pixels. Once the matching pixel is found, the depth value (Depth) of the pixel can be calculated according to the principle of triangulation. The depth values of all pixels form the depth map.
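As a rough illustration of the matching step just described, a minimal sum-of-absolute-differences (SAD) search along one image row can be sketched as follows. The function name, the brightness-only comparison, and the window size are illustrative assumptions; the patent does not prescribe a particular matching algorithm.

```python
def match_pixel(left_row, right_row, x, patch=2):
    """Find the column in right_row whose neighbourhood is most similar
    (by brightness) to the neighbourhood of left_row[x]."""
    def sad(xl, xr):
        # Sum of absolute brightness differences over a small window.
        return sum(abs(left_row[xl + k] - right_row[xr + k])
                   for k in range(-patch, patch + 1))
    candidates = range(patch, len(right_row) - patch)
    return min(candidates, key=lambda xr: sad(x, xr))
```

In a real system the search would be restricted to the epipolar line and to a plausible disparity range, and colour channels would be compared alongside brightness, as the text suggests.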
Step 503: Based on the depth value of each pixel in the depth map, divide the depth map into regions, and determine from the depth map the subject region to be metered.
Considering that all pixels of a single object in the viewing area have roughly the same depth value, the depth map can be divided into regions based on the depth value of each pixel, with each region formed from contiguous pixels of roughly equal depth. Each region then represents one object, and among these objects is the subject the user cares about; the subject region can therefore be determined.
In the embodiment of the present invention, the subject region consists of multiple pixels; each pixel corresponds to one depth value and, in addition, to colour data and brightness data.
Step 504: Meter the subject region to obtain the light-intensity parameter corresponding to the subject region.
In the embodiment of the present invention, once the subject region is determined, it is metered. In one example, the metering system of the terminal measures the brightness reflected by the subject, which is also called reflective metering. Metering methods are divided, according to the position of the metering element, into external metering and internal metering. In external metering, the optical paths of the metering element and the lens are independent of each other. In internal metering, light is measured through the lens, i.e. through-the-lens (TTL) metering. Metering the subject region yields the light-intensity parameter corresponding to the subject region; the exposure parameters are adjusted according to this light-intensity parameter, so that the subject region is correctly exposed and the shooting effect is greatly improved.
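The idea of metering only the subject region can be sketched minimally as follows, with the light-intensity parameter simplified to the mean brightness of the region's pixels. The actual metering hardware, weighting, and parameter form are not specified by the patent; this is an illustrative reduction.

```python
def meter_region(brightness, region_mask):
    """Reflective-metering sketch: take the mean brightness of the
    pixels inside the subject region as the light-intensity parameter."""
    values = [b for b, inside in zip(brightness, region_mask) if inside]
    return sum(values) / len(values)
```

Because only the subject pixels contribute, a bright or dark background no longer skews the exposure decision, which is the point of subject-targeted metering.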
Fig. 8 is the second schematic flow chart of the light-metering method of the embodiment of the present invention. As shown in Fig. 8, the light-metering method includes the following steps:
Step 801: Use dual cameras to capture images of the viewing area, obtaining a first image and a second image.
The technical scheme of the embodiment of the present invention is applied in a terminal; the terminal may be a device such as a mobile phone, a tablet computer, a palmtop computer, or a game console.
In the embodiment of the present invention, the objects in the viewing area can be divided into two major classes: subject objects and background objects, where the subject object is the one that should achieve the best shooting effect. Here, the region where the subject object is located is called the subject region, and the region where the background objects are located is called the background region. For example, if the viewing area is a person in front of a building, the region where the person is located is the subject region, and the region where the building, or all objects other than the person, is located is the background region. The embodiment of the present invention aims to distinguish the subject region so that metering can be targeted at it, which effectively guarantees the exposure of the subject region.
In the embodiment of the present invention, the terminal has dual cameras and implements its shooting function through them. When shooting, the exposure parameters must be determined reasonably in order to obtain a good shooting effect. Here, the dual-camera module includes two cameras, a first camera and a second camera; in one example, the first camera and the second camera have identical physical structures and optical parameters.
Fig. 6 is a schematic diagram of the dual cameras of the embodiment of the present invention, in which figure (a) is a top view of the dual cameras and figure (b) is a side view. Typically, the dual cameras of a terminal shoot cooperatively, so as to achieve effects such as improved depth of field or 3D capture. As shown in Fig. 6, the first camera 11 and the second camera 12 are two visible-light cameras, and 13 is the connecting member of the two cameras. The first camera 11 and the second camera 12 are fixed on the connecting member 13, with their imaging planes kept as close to parallel as possible. With this imaging system, a left-eye image (namely the first image) and a right-eye image (namely the second image) can be obtained at the same moment.
Step 802: Based on the first image and the second image, calculate the depth map corresponding to the viewing area, where each pixel in the depth map corresponds to one depth value.
In the embodiment of the present invention, the depth map is calculated by image-processing software. Specifically, pixel matching is performed between the first image and the second image; for each matched pair of pixels, the depth value of the pixel is calculated based on the coordinate information of the pair and the optical parameters of the dual cameras; the depth values of all pixels form the depth map.
Fig. 7 is a schematic diagram of depth-map formation in the embodiment of the present invention. Suppose the first camera is the left camera and the second camera is the right camera: the left camera captures the left-eye image (also called the left-view image), and the right camera captures the right-eye image (also called the right-view image). As shown in Fig. 7, Pleft is an arbitrary pixel of the left-view image; an image-matching technique is used to find the pixel Pright in the right-view image that is most similar to Pleft. Here, pixel similarity can be compared using the colour and brightness of the pixels. Once the matching pixel is found, the depth value (Depth) of the pixel can be calculated according to the principle of triangulation. The depth values of all pixels form the depth map.
Fig. 9 is a schematic diagram of the triangulation principle of the embodiment of the present invention, where Cleft is the optical centre of the left camera, Cright is the optical centre of the right camera, Oleft is the centre of the left-eye image, Oright is the centre of the right-eye image, P is an arbitrary point in physical space, Pleft is the imaging point of P in the left-eye image, Pright is the imaging point of P in the right-eye image, f is the focal length of the lenses, Z is the distance from point P to the cameras, and T is the distance between the two cameras. From the triangle relations:
Depth = Pleft − Pright
Z = f × T / Depth
On this basis, the depth value of each pixel in the image can be calculated. Note that the depth value Depth here is the disparity, so a larger depth value corresponds to a smaller distance Z, i.e. a nearer point.
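The two relations above translate directly into code. The symbols follow Fig. 9; the function name is an illustrative assumption. Since the patent's "depth value" is the disparity Pleft − Pright, larger values mean nearer points.

```python
def distance_from_disparity(p_left, p_right, f, T):
    """Z = f * T / Depth, where Depth = Pleft - Pright (the disparity)."""
    depth = p_left - p_right   # the patent's "depth value" (disparity)
    return f * T / depth       # distance Z from point P to the cameras
```

For example, with f = 500 (in pixels), T = 0.1 m, and a disparity of 20 pixels, Z = 500 × 0.1 / 20 = 2.5 m.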
Step 803: Based on the depth value of each pixel in the depth map, divide the depth map into regions using an image-segmentation algorithm, obtaining multiple image regions, where in each image region the mean-square deviation of the depth values of the included pixels is less than or equal to a preset value.
Considering that all pixels of a single object in the viewing area have roughly the same depth value (namely, their mean-square deviation is less than or equal to the preset value), the depth map can be divided into regions based on the depth value of each pixel, with contiguous pixels of roughly equal depth divided into one region. Each region then represents one object, and among these objects is the subject the user cares about; the subject region can therefore be determined.
Traditional image-segmentation algorithms operate in the 2D plane and therefore lack the important cue of spatial distance; they usually find it difficult to separate the background and the subject in a scene precisely. The embodiment of the present invention uses the depth information, combined with an image-segmentation algorithm (for example the mean-shift algorithm), to divide the image into regions, such as a subject region and a background region. Here, after the different image regions have been obtained by the segmentation algorithm, the following morphological processing is also required: extracting the image contours and filling interior holes of the regions, so as to ensure the integrity of the segmented regions.
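As a simplified stand-in for the depth-aware segmentation discussed above, the grouping of contiguous, similar-depth pixels can be sketched as a flood fill, where the tolerance plays the role of the preset deviation bound. This is an illustrative simplification, not the mean-shift algorithm or the patent's exact procedure.

```python
def segment_by_depth(depth, tol=1.0):
    """Group 4-connected pixels whose depth values differ by at most
    tol into regions; returns a list of (y, x) coordinate lists."""
    h, w = len(depth), len(depth[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if seen[y][x]:
                continue
            stack, region = [(y, x)], []
            seen[y][x] = True
            while stack:
                cy, cx = stack.pop()
                region.append((cy, cx))
                # Visit the 4-connected neighbours with similar depth.
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                            and abs(depth[ny][nx] - depth[cy][cx]) <= tol):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            regions.append(region)
    return regions
```

A production implementation would follow this with the morphological steps named in the text (contour extraction, hole filling) to clean up the region boundaries.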
Fig. 10 is an image-segmentation schematic of the embodiment of the present invention, where figure (a) is the original image, figure (b) is the depth map corresponding to the original image, and figure (c) is the segmented subject-region map.
Step 804: For each image region, calculate the average of the depth values of the pixels included in the image region, and take the average as the depth value corresponding to the image region.
Step 805: Based on the depth value corresponding to each image region, determine the image region whose depth value meets a preset condition as the subject region to be metered.
In the embodiment of the present invention, the image region corresponding to the maximum depth value is taken as the subject region to be metered. Here, the subject region consists of multiple pixels; each pixel corresponds to one depth value and, in addition, to colour data and brightness data.
Specifically, after the image-segmentation algorithm, different image regions are obtained. Given the prior that the subject of a photograph is generally closer to the camera than the background is, combined with the depth information of the viewing area (here, according to the triangulation principle, the closer a point is to the camera, the larger its depth value), the image region with the larger depth value is selected as the subject region.
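Steps 804–805, with the maximum-depth-value condition, reduce to picking the region whose mean depth value (disparity, i.e. nearest to the camera) is largest. A sketch, assuming each region is a list of (y, x) pixel coordinates as in the earlier segmentation example:

```python
def pick_subject_region(regions, depth):
    """Return the region with the largest mean depth value (disparity),
    i.e. the region nearest to the camera."""
    def mean_depth(region):
        # Step 804: average the depth values of the region's pixels.
        return sum(depth[y][x] for y, x in region) / len(region)
    # Step 805: the maximum mean depth value meets the preset condition.
    return max(regions, key=mean_depth)
```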
Step 806: Based on the depth values of the pixels included in the subject region, calculate the depth histogram corresponding to the subject region; based on the depth histogram, determine the most frequent depth value.
In the embodiment of the present invention, the depth histogram of the subject region is calculated from the region's depth information. In one example, suppose the subject region includes N pixels, N being a positive integer, with depth values d1, d2, d3, ..., dN. A histogram of these N depth values is computed, mainly to find the depth value that occurs most often in the data. Fig. 11 is a schematic diagram of such a histogram in the embodiment of the present invention. To obtain the metering centre of the subject region, the most frequent depth value in the histogram is found; suppose it is di. The region at depth di is then taken as the metering centre, and automatic metering processing is performed on this metering centre.
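The histogram step can be sketched as binning the region's depth values and returning the most frequent bin. The bin width is an assumption made here for illustration; the patent only requires finding the most frequent depth value di.

```python
from collections import Counter

def metering_center_depth(region_depths, bin_size=1.0):
    """Histogram the subject region's depth values and return the most
    frequent (binned) value, used as the metering-centre depth di."""
    bins = Counter(round(d / bin_size) * bin_size for d in region_depths)
    return bins.most_common(1)[0][0]
```

Binning makes the mode robust to small per-pixel depth noise, which raw value counting would not be.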
Step 807: Take the image area corresponding to the most frequent depth value as the metering centre, meter the metering centre, and obtain the light-intensity parameter corresponding to the subject region.
In the embodiment of the present invention, once the metering centre is determined, it is metered. In one example, the metering system of the terminal measures the brightness reflected by the subject, which is also called reflective metering. Metering methods are divided, according to the position of the metering element, into external metering and internal metering. In external metering, the optical paths of the metering element and the lens are independent of each other. In internal metering, light is measured through the lens, i.e. through-the-lens (TTL) metering. Metering the metering centre of the subject region yields the light-intensity parameter corresponding to the subject region; the exposure parameters are adjusted according to this light-intensity parameter, so that the subject region is correctly exposed and the shooting effect is greatly improved.
Fig. 12 is a schematic structural diagram of the terminal of the embodiment of the present invention. As shown in Fig. 12, the terminal includes:
dual cameras 1201, configured to capture images of the viewing area, obtaining a first image and a second image;
a memory 1202, configured to store a metering program;
a processor 1203, configured to execute the metering program in the memory 1202 to perform the following operations:
calculating, based on the first image and the second image, a depth map corresponding to the viewing area, wherein each pixel in the depth map corresponds to one depth value;
dividing the depth map into regions based on the depth value of each pixel in the depth map, and determining from the depth map a subject region to be metered;
metering the subject region to obtain a light-intensity parameter corresponding to the subject region.
In the embodiment of the present invention, the processor 1203 is further configured to execute the metering program in the memory 1202 to perform the following operations:
dividing the depth map into regions using an image-segmentation algorithm, based on the depth value of each pixel in the depth map, to obtain multiple image regions, wherein in each image region the mean-square deviation of the depth values of the included pixels is less than or equal to a preset value;
for each image region, calculating the average of the depth values of the pixels included in the image region, and taking the average as the depth value corresponding to the image region;
determining, based on the depth value corresponding to each image region, the image region whose depth value meets a preset condition as the subject region to be metered.
In the embodiment of the present invention, the processor 1203 is further configured to execute the metering program in the memory 1202 to perform the following operation:
taking the image region corresponding to the maximum depth value as the subject region to be metered.
In the embodiment of the present invention, the processor 1203 is further configured to execute the metering program in the memory 1202 to perform the following operations:
calculating, based on the depth values of the pixels included in the subject region, a depth histogram corresponding to the subject region;
determining, based on the depth histogram, the most frequent depth value;
taking the image area corresponding to the most frequent depth value as a metering centre, metering the metering centre, and obtaining the light-intensity parameter corresponding to the subject region.
In the embodiment of the present invention, the processor 1203 is further configured to execute the metering program in the memory 1202 to perform the following operations:
performing pixel matching between the first image and the second image;
for each matched pair of pixels, calculating the depth value of the pixel based on the coordinate information of the pair of pixels and the optical parameters of the dual cameras;
forming the depth map from the depth values of all pixels.
Those skilled in the art will understand that the functions of the components of the terminal in the embodiment of the present invention can be understood with reference to the foregoing description of the light-metering method.
If the above terminal of the embodiment of the present invention is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical scheme of the embodiment of the present invention, or more precisely the part that contributes beyond the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the method described in each embodiment of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read Only Memory), a magnetic disk, or an optical disc. Thus, the embodiment of the present invention is not limited to any specific combination of hardware and software.
Correspondingly, the embodiment of the present invention also provides a computer storage medium in which a computer program is stored, the computer program being configured to perform the light-metering method of the embodiment of the present invention.
It should be noted that, herein, the terms "comprising", "including", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device that comprises the element.
The above numbering of the embodiments of the present invention is for description only and does not represent the relative merit of any embodiment.
The embodiments of the invention have been described above with reference to the accompanying drawings, but the invention is not limited to the specific embodiments described. The above embodiments are merely illustrative rather than restrictive; inspired by the present invention, those of ordinary skill in the art can devise many further forms without departing from the concept of the invention and the scope of protection of the claims, and all of these fall within the protection of the present invention.
Claims (10)
1. A light-metering method, characterised in that the method comprises:
capturing images of a viewing area using dual cameras to obtain a first image and a second image;
calculating, based on the first image and the second image, a depth map corresponding to the viewing area, wherein each pixel in the depth map corresponds to one depth value;
dividing the depth map into regions based on the depth value of each pixel in the depth map, and determining from the depth map a subject region to be metered; and
metering the subject region to obtain a light-intensity parameter corresponding to the subject region.
2. The light-metering method according to claim 1, characterised in that dividing the depth map into regions based on the depth value of each pixel in the depth map and determining from the depth map a subject region to be metered comprises:
dividing the depth map into regions using an image-segmentation algorithm, based on the depth value of each pixel in the depth map, to obtain multiple image regions, wherein in each image region the mean-square deviation of the depth values of the included pixels is less than or equal to a preset value;
for each image region, calculating the average of the depth values of the pixels included in the image region, and taking the average as the depth value corresponding to the image region; and
determining, based on the depth value corresponding to each image region, the image region whose depth value meets a preset condition as the subject region to be metered.
3. The light-metering method according to claim 2, characterised in that determining the image region whose depth value meets a preset condition as the subject region to be metered comprises:
taking the image region corresponding to the maximum depth value as the subject region to be metered.
4. The light-metering method according to claim 1, characterised in that metering the subject region to obtain a light-intensity parameter corresponding to the subject region comprises:
calculating, based on the depth values of the pixels included in the subject region, a depth histogram corresponding to the subject region;
determining, based on the depth histogram, the most frequent depth value; and
taking the image area corresponding to the most frequent depth value as a metering centre, metering the metering centre, and obtaining the light-intensity parameter corresponding to the subject region.
5. The light-metering method according to claim 1, characterised in that calculating, based on the first image and the second image, a depth map corresponding to the viewing area comprises:
performing pixel matching between the first image and the second image;
for each matched pair of pixels, calculating the depth value of the pixel based on the coordinate information of the pair of pixels and the optical parameters of the dual cameras; and
forming the depth map from the depth values of all pixels.
6. A terminal, characterised in that the terminal comprises:
dual cameras, configured to capture images of a viewing area to obtain a first image and a second image;
a memory, configured to store a metering program; and
a processor, configured to execute the metering program in the memory to perform the following operations:
calculating, based on the first image and the second image, a depth map corresponding to the viewing area, wherein each pixel in the depth map corresponds to one depth value;
dividing the depth map into regions based on the depth value of each pixel in the depth map, and determining from the depth map a subject region to be metered; and
metering the subject region to obtain a light-intensity parameter corresponding to the subject region.
7. The terminal according to claim 6, characterised in that the processor is further configured to execute the metering program in the memory to perform the following operations:
dividing the depth map into regions using an image-segmentation algorithm, based on the depth value of each pixel in the depth map, to obtain multiple image regions, wherein in each image region the mean-square deviation of the depth values of the included pixels is less than or equal to a preset value;
for each image region, calculating the average of the depth values of the pixels included in the image region, and taking the average as the depth value corresponding to the image region; and
determining, based on the depth value corresponding to each image region, the image region whose depth value meets a preset condition as the subject region to be metered.
8. The terminal according to claim 7, characterised in that the processor is further configured to execute the metering program in the memory to perform the following operation:
taking the image region corresponding to the maximum depth value as the subject region to be metered.
9. The terminal according to claim 6, characterised in that the processor is further configured to execute the metering program in the memory to perform the following operations:
calculating, based on the depth values of the pixels included in the subject region, a depth histogram corresponding to the subject region;
determining, based on the depth histogram, the most frequent depth value; and
taking the image area corresponding to the most frequent depth value as a metering centre, metering the metering centre, and obtaining the light-intensity parameter corresponding to the subject region.
10. A computer storage medium, characterised in that the computer storage medium stores one or more programs executable by one or more processors to implement the method steps of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710639685.8A CN107295269A (en) | 2017-07-31 | 2017-07-31 | A kind of light measuring method and terminal, computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107295269A true CN107295269A (en) | 2017-10-24 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102316352A (en) * | 2011-08-08 | 2012-01-11 | 清华大学 | Stereo video depth image manufacturing method based on area communication image and apparatus thereof |
US20140098263A1 (en) * | 2012-10-09 | 2014-04-10 | Kabushiki Kaisha Toshiba | Image processing apparatus and image processing method |
CN104333710A (en) * | 2014-11-28 | 2015-02-04 | 广东欧珀移动通信有限公司 | Camera exposure method, camera exposure device and camera exposure equipment |
CN104853109A (en) * | 2015-04-30 | 2015-08-19 | 广东欧珀移动通信有限公司 | Flashing automatically adjusting method and shooting terminal |
CN106851123A (en) * | 2017-03-09 | 2017-06-13 | 广东欧珀移动通信有限公司 | Exposal control method, exposure-control device and electronic installation |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107872631A (en) * | 2017-12-06 | 2018-04-03 | 广东欧珀移动通信有限公司 | Image capturing method, device and mobile terminal based on dual camera |
CN111434104A (en) * | 2017-12-07 | 2020-07-17 | 富士胶片株式会社 | Image processing device, imaging device, image processing method, and program |
US11838648B2 (en) | 2017-12-07 | 2023-12-05 | Fujifilm Corporation | Image processing device, imaging apparatus, image processing method, and program for determining a condition for high dynamic range processing |
CN109922255A (en) * | 2017-12-12 | 2019-06-21 | 黑芝麻国际控股有限公司 | For generating the dual camera systems of real-time deep figure |
CN108462832A (en) * | 2018-03-19 | 2018-08-28 | 百度在线网络技术(北京)有限公司 | Method and device for obtaining image |
CN109376588A (en) * | 2018-09-05 | 2019-02-22 | 北京达佳互联信息技术有限公司 | A kind of face surveys luminous point choosing method, device and capture apparatus |
CN109376588B (en) * | 2018-09-05 | 2019-08-02 | 北京达佳互联信息技术有限公司 | A kind of face surveys luminous point choosing method, device and capture apparatus |
CN109688338A (en) * | 2019-01-19 | 2019-04-26 | 创新奇智(北京)科技有限公司 | A kind of exposure method based on scene depth, system and its electronic device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20171024