CN107566731A - Focusing method, terminal, and computer-readable storage medium - Google Patents
Focusing method, terminal, and computer-readable storage medium
- Publication number: CN107566731A (application CN201710896167.4A)
- Authority: CN (China)
- Prior art keywords: view-finding interface, subregion, focusing
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Studio Devices (AREA)
Abstract
The invention discloses a focusing method, a terminal, and a computer-readable storage medium. The method includes: dividing a target area of a view-finding interface into N subregions, N being a positive integer; selecting M subregions from the N subregions based on the position of a subject in the view-finding interface, and taking the region formed by the M subregions as the focusing area, M being a positive integer less than or equal to N; and focusing on the object in the focusing area. The method ensures the sharpness of the subject in the view-finding interface and improves the shooting experience.
Description
Technical field
The present invention relates to the field of camera shooting technology, and in particular to a focusing method, a terminal, and a computer storage medium.
Background art
With the development of intelligent terminals, the performance of their cameras keeps improving. To meet users' pursuit of high-quality shooting, cameras provide a continuous auto-focus (Continuous Auto Focus) function. A common continuous auto-focus scheme works as follows: the camera detects changes in ambient light, changes in the camera's pose data, and changes of the objects in the view-finding interface; when one or more of these changes exceeds the corresponding threshold, the camera is triggered to refocus.
Whichever change triggers the refocusing, the measurement region of continuous auto-focus (that is, the focusing area) is always a block at the center of the terminal screen, because the central area of the screen is normally where the subject is framed when shooting.
However, the central area of the screen may contain more than the subject. When the subject is small, for example, the central area also contains other objects, so that after focusing on the central area the subject may still be blurred. The focusing effect of this focusing mode is poor and degrades the picture quality.
Summary of the invention
To solve the above technical problem, embodiments of the present invention provide a focusing method, a terminal, and a computer storage medium.
The focusing method provided by an embodiment of the present invention includes:
dividing a target area of a view-finding interface into N subregions, N being a positive integer;
selecting M subregions from the N subregions based on the position of a subject in the view-finding interface, and taking the region formed by the M subregions as the focusing area, M being a positive integer less than or equal to N;
focusing on the object in the focusing area.
In an embodiment of the present invention, selecting M subregions from the N subregions based on the position of the subject in the view-finding interface includes:
acquiring depth information of each object in the view-finding interface;
determining the subject in the view-finding interface based on the depth information of each object, and determining the position of the subject in the view-finding interface;
selecting, from the N subregions, the M subregions where the subject is located, based on the position of the subject in the view-finding interface.
In an embodiment of the present invention, selecting M subregions from the N subregions based on the position of the subject in the view-finding interface includes:
performing feature extraction on the image corresponding to the view-finding interface to obtain characteristic parameters;
determining the subject in the view-finding interface based on the extracted characteristic parameters, and determining the position of the subject in the view-finding interface;
selecting, from the N subregions, the M subregions where the subject is located, based on the position of the subject in the view-finding interface.
In an embodiment of the present invention, the N subregions include a first group of subregions and a second group of subregions, and each subregion in the second group is arranged on the periphery of the first group;
selecting M subregions from the N subregions based on the position of the subject in the view-finding interface includes:
selecting the first group of subregions from the N subregions;
determining whether the subject is present in the first group of subregions;
if the subject is present in the first group of subregions, taking the first group of subregions as the focusing area;
if the subject is not present in the first group of subregions, taking the second group of subregions as the focusing area.
In an embodiment of the present invention, focusing on the object in the focusing area includes:
adjusting the focal length of the camera to obtain images of the view-finding interface at different focal lengths;
when the sharpness of the image in the focusing area of the view-finding interface is at its maximum, determining the current focal length as the focusing focal length corresponding to the focusing area.
In an embodiment of the present invention, the view-finding interface includes one or more focusing areas;
the method further includes:
marking out the focusing area in the picture corresponding to the view-finding interface.
The terminal provided by an embodiment of the present invention includes:
a camera, configured to perform image acquisition on the view-finding area;
a memory, configured to store a focusing program;
a processor, configured to execute the focusing program in the memory to perform the following operations:
dividing a target area of the view-finding interface into N subregions, N being a positive integer;
selecting M subregions from the N subregions based on the position of a subject in the view-finding interface, and taking the region formed by the M subregions as the focusing area, M being a positive integer less than or equal to N;
focusing on the object in the focusing area.
In an embodiment of the present invention, the processor is further configured to execute the focusing program in the memory to perform the following operations:
acquiring depth information of each object in the view-finding interface;
determining the subject in the view-finding interface based on the depth information of each object, and determining the position of the subject in the view-finding interface;
selecting, from the N subregions, the M subregions where the subject is located, based on the position of the subject in the view-finding interface.
In an embodiment of the present invention, the processor is further configured to execute the focusing program in the memory to perform the following operations:
performing feature extraction on the image corresponding to the view-finding interface to obtain characteristic parameters;
determining the subject in the view-finding interface based on the extracted characteristic parameters, and determining the position of the subject in the view-finding interface;
selecting, from the N subregions, the M subregions where the subject is located, based on the position of the subject in the view-finding interface.
In an embodiment of the present invention, the N subregions include a first group of subregions and a second group of subregions, and each subregion in the second group is arranged on the periphery of the first group;
the processor is further configured to execute the focusing program in the memory to perform the following operations:
selecting the first group of subregions from the N subregions;
determining whether the subject is present in the first group of subregions;
if the subject is present in the first group of subregions, taking the first group of subregions as the focusing area;
if the subject is not present in the first group of subregions, taking the second group of subregions as the focusing area.
In an embodiment of the present invention, the processor is further configured to execute the focusing program in the memory to perform the following operations:
adjusting the focal length of the camera to obtain images of the view-finding interface at different focal lengths;
when the sharpness of the image in the focusing area of the view-finding interface is at its maximum, determining the current focal length as the focusing focal length corresponding to the focusing area.
In an embodiment of the present invention, the view-finding interface includes one or more focusing areas;
the terminal further includes:
a display, configured to mark out the focusing area in the picture corresponding to the view-finding interface.
The computer-readable storage medium provided by an embodiment of the present invention stores one or more programs, and the one or more programs can be executed by one or more processors to implement any of the focusing methods described above.
In the technical solution of the embodiments of the present invention, a target area of the view-finding interface is divided into N subregions, N being a positive integer; M subregions are selected from the N subregions based on the position of the subject in the view-finding interface, and the region formed by the M subregions is taken as the focusing area, M being a positive integer less than or equal to N; and the object in the focusing area is focused on. With this technical solution, the target area of the view-finding interface (that is, the original focusing area) is refined, and a suitable focusing area is then automatically selected for focusing based on the position of the subject in the view-finding interface. Because the granularity of the view-finding interface is refined, this focusing approach has higher focusing precision, ensures the sharpness of the subject in the view-finding interface, and improves the shooting experience.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing the embodiments of the present invention;
Fig. 2 is an architecture diagram of a communications network system provided by an embodiment of the present invention;
Fig. 3 is the first schematic diagram of the view-finding interface of a camera;
Fig. 4 is the first schematic flowchart of the focusing method of an embodiment of the present invention;
Fig. 5 is the second schematic diagram of the view-finding interface of the camera of an embodiment of the present invention;
Fig. 6 is the third schematic diagram of the view-finding interface of the camera of an embodiment of the present invention;
Fig. 7 is the fourth schematic diagram of the view-finding interface of the camera of an embodiment of the present invention;
Fig. 8 is the fifth schematic diagram of the view-finding interface of the camera of an embodiment of the present invention;
Fig. 9 is the sixth schematic diagram of the view-finding interface of the camera of an embodiment of the present invention;
Fig. 10 is the second schematic flowchart of the focusing method of an embodiment of the present invention;
Fig. 11 is the third schematic flowchart of the focusing method of an embodiment of the present invention;
Fig. 12 is the seventh schematic diagram of the view-finding interface of the camera of an embodiment of the present invention;
Fig. 13 is a schematic diagram of the structure of the terminal of an embodiment of the present invention.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
In the following description, suffixes such as "module", "part" or "unit" used to denote elements are only intended to facilitate the description of the present invention and have no specific meaning by themselves; therefore, "module", "part" and "unit" can be used interchangeably.
A terminal can be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (Personal Digital Assistant, PDA), portable media players (Portable Media Player, PMP), navigation devices, wearable devices, smart bracelets and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
A mobile terminal is taken as an example in the following description. Those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the construction according to the embodiments of the present invention can also be applied to fixed-type terminals.
Referring to Fig. 1, which is a schematic diagram of the hardware structure of a mobile terminal implementing the embodiments of the present invention, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110 and a power supply 111. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not limit the mobile terminal; the mobile terminal may include more or fewer parts than illustrated, combine certain parts, or arrange the parts differently.
The parts of the mobile terminal are described in detail below with reference to Fig. 1:
The radio frequency unit 101 can be used for receiving and sending signals during information transmission and calls. Specifically, it receives downlink information from a base station and forwards it to the processor 110 for processing, and sends uplink data to the base station. Generally, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices via wireless communication. The wireless communication can use any communication standard or protocol, including but not limited to GSM (Global System of Mobile communication), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution) and TDD-LTE (Time Division Duplexing-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media and so on, providing the user with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it is understood that it is not an essential component of the mobile terminal and can be omitted as needed without changing the essence of the invention.
When the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a speech recognition mode, a broadcast reception mode or a similar mode, the audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call signal reception sound or a message reception sound). The audio output unit 103 may include a loudspeaker, a buzzer and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processor 1041 processes the image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in a telephone call mode, a recording mode, a speech recognition mode or another operational mode, and can process such sound into audio data. In the telephone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101. The microphone 1042 can implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated while receiving and sending audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved close to the ear. As one kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes) and can detect the magnitude and direction of gravity when static; it can be used for applications that identify the posture of the mobile phone (such as landscape/portrait switching, related games and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer and tapping). The mobile phone can also be equipped with other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, which will not be described in detail here.
The display unit 106 is used to display the information input by the user or the information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display, or the like.
The user input unit 107 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 with a finger, a stylus or any other suitable object or accessory) and drives the corresponding connected devices according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes the commands sent by the processor 110. In addition, the touch panel 1071 can be implemented in resistive, capacitive, infrared, surface-acoustic-wave and other types. Besides the touch panel 1071, the user input unit 107 can also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as a volume control key and a switch key), a trackball, a mouse and a joystick, which are not limited here.
Further, the touch panel 1071 can cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are two independent parts implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 can be integrated to implement the input and output functions of the mobile terminal, which is not limited here.
The interface unit 108 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The interface unit 108 can be used to receive input (for example, data information or electric power) from an external device and transfer the received input to one or more elements in the mobile terminal 100, or to transmit data between the mobile terminal 100 and an external device.
The memory 109 can be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area can store the operating system, the applications required by at least one function (such as a sound playback function and an image playback function) and so on, and the data storage area can store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk memory, a flash memory device or another solid-state memory device.
The processor 110 is the control center of the mobile terminal. It connects all parts of the entire mobile terminal through various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 can integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 110.
The mobile terminal 100 may also include a power supply 111 (such as a battery) that supplies power to all parts. Preferably, the power supply 111 is logically connected to the processor 110 through a power management system, so that functions such as charge management, discharge management and power consumption management are implemented through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 may also include a Bluetooth module and the like, which will not be described in detail here.
To facilitate understanding of the embodiments of the present invention, the communications network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communications network system provided by an embodiment of the present invention. The communications network system is an LTE system of the universal mobile communications technology, and the LTE system includes, connected in communication in sequence, a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203 and an operator IP service 204.
Specifically, the UE 201 may be the above-mentioned terminal 100, which is not described again here.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022. The eNodeB 2021 can be connected with the other eNodeBs 2022 through backhaul (for example, an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036 and so on. The MME 2031 is a control node that processes signaling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 provides registers for managing functions such as a home location register (not shown) and stores user-specific information about service features, data rates and so on. All user data can be transmitted through the SGW 2034; the PGW 2035 can provide IP address allocation and other functions for the UE 201; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, selecting and providing available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP service 204 may include the Internet, an intranet, an IMS (IP Multimedia Subsystem) or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should understand that the present invention is applicable not only to the LTE system but also to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA and future new network systems, which are not limited here.
Based on the above mobile terminal hardware structure and communications network system, the embodiments of the method of the present invention are proposed.
Fig. 3 is the first schematic diagram of the view-finding interface of a camera. As shown in Fig. 3, the measurement region of continuous auto-focus (that is, the focusing area) of the camera is the central area of the screen, and this central area occupies 1/4 of the whole screen. The subject (that is, the target object the user is shooting) is located at the upper-right corner of the central area; that is, if the central area is denoted A, the region of the subject is denoted A1, and the region of the central area other than the subject is denoted A2, then clearly A = A1 + A2.
During continuous auto-focus, the measurement region of continuous auto-focus is the 1/4-sized region at the center of the screen (that is, region A in Fig. 3). Because the proportion of the subject within region A is small (that is, region A1 in Fig. 3), the continuous auto-focus processing must also accommodate the imaging of the other objects in region A (that is, region A2 in Fig. 3), so the subject may still be blurred after continuous auto-focus. For this and similar situations, the embodiments of the present invention propose an improved continuous auto-focus scheme.
Fig. 4 is the first schematic flowchart of the focusing method of an embodiment of the present invention. As shown in Fig. 4, the focusing method includes the following steps:
Step 401: Divide a target area of the view-finding interface into N subregions, N being a positive integer.
The technical solution of this embodiment of the present invention is applied to a terminal; the terminal may be a device such as a mobile phone, a tablet computer, a palmtop computer or a game console, and the terminal is equipped with a camera. In addition, the technical solution of this embodiment of the present invention can also be applied to a standalone camera device.
In this embodiment of the present invention, the region captured by the camera is called the view-finding area, and when the scene in the view-finding area is presented on the screen, the picture on the screen is called the view-finding interface. The view-finding interface therefore corresponds to the view-finding area; that is, the view-finding interface is the picture of the view-finding area presented on the screen.
In this embodiment of the present invention, it is assumed that the size of the view-finding interface is the same as that of the terminal screen. Of course, the view-finding interface may also be smaller than the terminal screen, that is, occupy only part of the screen. To distinguish the main body of the view-finding interface from the background, and considering that the main body normally lies in the central area of the view-finding interface, the central area of the view-finding interface is taken as the target area; this target area is the main-body region. In one embodiment, the target area occupies 1/4 of the view-finding interface; of course, the size and position of the target area are not limited to this.
In this embodiment of the present invention, because the subject may be small relative to the target area, the target area is refined so that the subject can be focused on more accurately. Specifically, the target area is divided into N subregions. In one embodiment, all of the N subregions have the same size. In another embodiment, the N subregions all differ in size, or some of the subregions differ in size.
Fig. 5 is the second schematic diagram of the view-finding interface of the camera of an embodiment of the present invention. As shown in Fig. 5, the 1/4 region at the center of the view-finding interface (that is, the target area) is divided into 5 × 5 = 25 subregions (that is, N = 25). Those skilled in the art should understand that the division of the target area is not limited to the manner shown in Fig. 5; the target area can also be divided in other ways.
Step 402: Select M subregions from the N subregions based on the position of the subject in the view-finding interface, and take the region formed by the M subregions as the focusing area, M being a positive integer less than or equal to N.
In this embodiment of the present invention, the subject is the object the user wants to shoot, and the user wants a high-definition image of the subject. For example, if the view-finding area is a person in front of a building, the person is the subject, and the building, or every object other than the person, is the background. This embodiment of the present invention aims to shoot the subject clearly, so the range of the focusing area needs to be determined based on the position of the subject in the view-finding interface.
Specifically, M subregions are selected from the N subregions based on the position of the subject in the view-finding interface, and the region formed by the M subregions is taken as the focusing area, M being a positive integer less than or equal to N. Here, the selected M subregions are exactly the region where the subject is located.
As shown in Fig. 6, the 1/4 region at the center of the view-finding interface (that is, the target area) is divided into 5 × 5 = 25 subregions (that is, N = 25). Assuming the subject is located at the upper-right corner of the target area in the view-finding interface, the 4 subregions at the upper-right corner (that is, M = 4) are selected from the N subregions as the focusing area.
As shown in Fig. 7, the 1/4 region at the center of the view-finding interface (that is, the target area) is divided into 5 × 5 = 25 subregions (that is, N = 25). Assuming the subject is located at the center of the target area in the view-finding interface, the 9 subregions at the center (that is, M = 9) are selected from the N subregions as the focusing area.
In this embodiment of the present invention, the positions and the number of the selected M subregions depend on the subject: the positions of the selected M subregions depend on the position of the subject, and the number of the selected M subregions depends on the size of the subject.
Step 403: Focus on the object in the focusing area.
In this embodiment of the present invention, the object in the focusing area is focused on using auto-focus technology. In terms of basic principle, auto-focus can be divided into two major classes: one is ranging auto-focus, based on measuring the distance between the lens and the target being shot; the other is focus-detection auto-focus, based on obtaining a sharp image on the focusing screen.
The focusing process is described below taking focus-detection auto-focus as an example: the focal length of the camera is adjusted to obtain images of the view-finding interface at different focal lengths; when the sharpness of the image in the focusing area of the view-finding interface reaches its maximum, the current focal length is determined to be the focusing focal length corresponding to the focusing area.
The above focus-detection auto-focus mainly includes the contrast method and the phase method:
1) Contrast method. This method achieves auto-focus by detecting the contour edges of the image. The sharper the contour edges of the image, the larger its brightness gradient, in other words, the larger the contrast between the scenery at the edge and the background. Conversely, in a defocused image the contour edges are blurred and the brightness gradient, or contrast, drops; the more defocused, the lower the contrast. Using this principle, two photoelectric detectors are placed at equal distances in front of and behind the charge-coupled device (CCD, Charge-coupled Device), the image of the photographed object is split onto the two detectors simultaneously, and the contrast of the image on each detector is output separately. When the absolute value of the difference between the contrasts output by the two detectors is at its minimum, the image plane of the focusing lies exactly between the two detectors, that is, close to the imaging surface of the CCD, and focusing is complete.
2) Phase method. This method achieves auto-focus by detecting the offset of the image. A grid plate made up of parallel lines, alternately transparent and opaque, is placed at the position of the CCD. Behind the grid plate, two photoreceptors are placed at appropriate positions, symmetric about the optical axis, and the grid plate oscillates back and forth in the direction perpendicular to the optical axis. When the focusing surface coincides with the grid plate, the light passing through the transparent lines of the grid plate reaches the two photoreceptors behind it at the same time. When defocused, the light beam reaches the two photoreceptors one after the other, so there is a phase difference between their output signals. After circuit processing, the two phase-shifted signals control an actuator to adjust the position of the objective lens so that the focusing surface coincides with the plane of the grid plate.
In one embodiment, the focusing method further includes the following step:
marking out the focusing area in the picture corresponding to the view-finding interface.
Here, the view-finding interface includes one or more focusing areas. As shown in Fig. 8, the view-finding interface includes 1 focusing area; the focusing area is marked out with a box in the picture corresponding to the view-finding interface, so the user can easily know the position of the sharp region. As shown in Fig. 9, the view-finding interface includes 9 focusing areas; the focusing areas are marked out with boxes in the picture corresponding to the view-finding interface, so the user can easily know the positions of the sharp regions.
Figure 10 is the second schematic flowchart of the focusing method of an embodiment of the present invention. As shown in Fig. 10, the focusing method includes the following steps:
Step 1001: Divide a target area of the view-finding interface into N subregions, N being a positive integer.
The technical solution of this embodiment of the present invention is applied to a terminal; the terminal may be a device such as a mobile phone, a tablet computer, a palmtop computer or a game console, and the terminal is equipped with a camera. In addition, the technical solution of this embodiment of the present invention can also be applied to a standalone camera device.
In this embodiment of the present invention, the region captured by the camera is called the view-finding area, and when the scene in the view-finding area is presented on the screen, the picture on the screen is called the view-finding interface. The view-finding interface therefore corresponds to the view-finding area; that is, the view-finding interface is the picture of the view-finding area presented on the screen.
In this embodiment of the present invention, it is assumed that the size of the view-finding interface is the same as that of the terminal screen. Of course, the view-finding interface may also be smaller than the terminal screen, that is, occupy only part of the screen. To distinguish the main body of the view-finding interface from the background, and considering that the main body normally lies in the central area of the view-finding interface, the central area of the view-finding interface is taken as the target area; this target area is the main-body region. In one embodiment, the target area occupies 1/4 of the view-finding interface; of course, the size and position of the target area are not limited to this.
In this embodiment of the present invention, because the subject may be small relative to the target area, the target area is refined so that the subject can be focused on more accurately. Specifically, the target area is divided into N subregions. In one embodiment, all of the N subregions have the same size. In another embodiment, the N subregions all differ in size, or some of the subregions differ in size.
Fig. 5 is the second schematic diagram of the view-finding interface of the camera of an embodiment of the present invention. As shown in Fig. 5, the 1/4 region at the center of the view-finding interface (that is, the target area) is divided into 5 × 5 = 25 subregions (that is, N = 25). Those skilled in the art should understand that the division of the target area is not limited to the manner shown in Fig. 5; the target area can also be divided in other ways.
Step 1002: Acquire depth information of each object in the view-finding interface.
In this embodiment of the present invention, the terminal has a depth camera. In one embodiment, the depth camera includes a first camera and a second camera; in one example, the first camera and the second camera have the same physical structure and optical parameters. At the same moment, the depth camera obtains a left-eye image (that is, a first image) and a right-eye image (that is, a second image). A depth map is then calculated by image-processing software. Specifically, pixel matching is performed between the first image and the second image; for each pair of matched pixels, the depth value corresponding to the pixel is calculated based on the coordinate information of the pair of pixels and the optical parameters of the two cameras; and the depth values of all pixels form the depth map. In this scheme, the depth map covers the depth information of each object in the view-finding interface.
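The sketch below illustrates one possible realization of this depth-map calculation under stated assumptions: the left/right images are already rectified, pixel matching is delegated to OpenCV's block matcher, and depth is recovered from disparity as focal length × baseline / disparity. The disclosed embodiment only specifies pixel matching plus the cameras' optical parameters, so the matcher choice and its settings are assumptions.

```python
import cv2
import numpy as np

def depth_map_from_stereo(left_gray, right_gray, focal_px, baseline_m):
    """Estimate a depth map (metres per pixel of the left image) from a rectified
    left/right image pair captured by the dual cameras at the same moment."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)   # pixel matching
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    valid = disparity > 0                     # unmatched pixels keep depth = infinity
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```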
Step 1003: Determine the subject in the view-finding interface based on the depth information of each object in the view-finding interface, and determine the position of the subject in the view-finding interface.
In this embodiment of the present invention, the subject generally stands out relative to the background, so the depth information of the subject is closer to the camera than that of the other objects. Based on this understanding, a continuous depth area can be determined from the depth information of each object in the view-finding interface; when this depth area is the one nearest to the camera, it represents the depth area of the subject, and the position of the subject in the view-finding interface can be determined from the coordinate positions of the pixels of this depth area.
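A simplified stand-in for this step is sketched below: it treats the pixels whose depth lies within a small tolerance of the nearest depth value as the subject's depth area and returns their bounding box. Thresholding near the minimum depth (rather than an explicit connected-region analysis) and the 0.3 m tolerance are assumptions for illustration.

```python
import numpy as np

def subject_box_from_depth(depth, tolerance_m=0.3):
    """Return the bounding box (x, y, w, h) of the depth area nearest to the camera,
    taken as the subject's position in the view-finding interface, or None."""
    finite = depth[np.isfinite(depth)]
    if finite.size == 0:
        return None
    nearest = float(finite.min())
    mask = np.isfinite(depth) & (depth <= nearest + tolerance_m)
    ys, xs = np.nonzero(mask)
    x, y = int(xs.min()), int(ys.min())
    return (x, y, int(xs.max()) - x + 1, int(ys.max()) - y + 1)
```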
Step 1004: Select, from the N subregions, the M subregions where the subject is located, based on the position of the subject in the view-finding interface, and take the region formed by the M subregions as the focusing area, M being a positive integer less than or equal to N.
In this embodiment of the present invention, the subject is the object the user wants to shoot, and the user wants a high-definition image of the subject. For example, if the view-finding area is a person in front of a building, the person is the subject, and the building, or every object other than the person, is the background. This embodiment of the present invention aims to shoot the subject clearly, so the range of the focusing area needs to be determined based on the position of the subject in the view-finding interface.
Specifically, M subregions are selected from the N subregions based on the position of the subject in the view-finding interface, and the region formed by the M subregions is taken as the focusing area, M being a positive integer less than or equal to N. Here, the selected M subregions are exactly the region where the subject is located.
As shown in Fig. 6, the 1/4 region at the center of the view-finding interface (that is, the target area) is divided into 5 × 5 = 25 subregions (that is, N = 25). Assuming the subject is located at the upper-right corner of the target area in the view-finding interface, the 4 subregions at the upper-right corner (that is, M = 4) are selected from the N subregions as the focusing area.
As shown in Fig. 7, the 1/4 region at the center of the view-finding interface (that is, the target area) is divided into 5 × 5 = 25 subregions (that is, N = 25). Assuming the subject is located at the center of the target area in the view-finding interface, the 9 subregions at the center (that is, M = 9) are selected from the N subregions as the focusing area.
In this embodiment of the present invention, the positions and the number of the selected M subregions depend on the subject: the positions of the selected M subregions depend on the position of the subject, and the number of the selected M subregions depends on the size of the subject.
Step 1005: Focus on the object in the focusing area.
In this embodiment of the present invention, the object in the focusing area is focused on using auto-focus technology. In terms of basic principle, auto-focus can be divided into two major classes: one is ranging auto-focus, based on measuring the distance between the lens and the target being shot; the other is focus-detection auto-focus, based on obtaining a sharp image on the focusing screen.
The focusing process is described below taking focus-detection auto-focus as an example: the focal length of the camera is adjusted to obtain images of the view-finding interface at different focal lengths; when the sharpness of the image in the focusing area of the view-finding interface reaches its maximum, the current focal length is determined to be the focusing focal length corresponding to the focusing area.
The above focus-detection auto-focus mainly includes the contrast method and the phase method:
1) Contrast method. This method achieves auto-focus by detecting the contour edges of the image. The sharper the contour edges of the image, the larger its brightness gradient, in other words, the larger the contrast between the scenery at the edge and the background. Conversely, in a defocused image the contour edges are blurred and the brightness gradient, or contrast, drops; the more defocused, the lower the contrast. Using this principle, two photoelectric detectors are placed at equal distances in front of and behind the CCD, the image of the photographed object is split onto the two detectors simultaneously, and the contrast of the image on each detector is output separately. When the absolute value of the difference between the contrasts output by the two detectors is at its minimum, the image plane of the focusing lies exactly between the two detectors, that is, close to the imaging surface of the CCD, and focusing is complete.
2) Phase method. This method achieves auto-focus by detecting the offset of the image. A grid plate made up of parallel lines, alternately transparent and opaque, is placed at the position of the CCD. Behind the grid plate, two photoreceptors are placed at appropriate positions, symmetric about the optical axis, and the grid plate oscillates back and forth in the direction perpendicular to the optical axis. When the focusing surface coincides with the grid plate, the light passing through the transparent lines of the grid plate reaches the two photoreceptors behind it at the same time. When defocused, the light beam reaches the two photoreceptors one after the other, so there is a phase difference between their output signals. After circuit processing, the two phase-shifted signals control an actuator to adjust the position of the objective lens so that the focusing surface coincides with the plane of the grid plate.
Figure 11 is the third schematic flowchart of the focusing method of an embodiment of the present invention. As shown in Fig. 11, the focusing method includes the following steps:
Step 1101: Divide a target area of the view-finding interface into N subregions, N being a positive integer.
The technical solution of this embodiment of the present invention is applied to a terminal; the terminal may be a device such as a mobile phone, a tablet computer, a palmtop computer or a game console, and the terminal is equipped with a camera. In addition, the technical solution of this embodiment of the present invention can also be applied to a standalone camera device.
In this embodiment of the present invention, the region captured by the camera is called the view-finding area, and when the scene in the view-finding area is presented on the screen, the picture on the screen is called the view-finding interface. The view-finding interface therefore corresponds to the view-finding area; that is, the view-finding interface is the picture of the view-finding area presented on the screen.
In this embodiment of the present invention, it is assumed that the size of the view-finding interface is the same as that of the terminal screen. Of course, the view-finding interface may also be smaller than the terminal screen, that is, occupy only part of the screen. To distinguish the main body of the view-finding interface from the background, and considering that the main body normally lies in the central area of the view-finding interface, the central area of the view-finding interface is taken as the target area; this target area is the main-body region. In one embodiment, the target area occupies 1/4 of the view-finding interface; of course, the size and position of the target area are not limited to this.
In this embodiment of the present invention, because the subject may be small relative to the target area, the target area is refined so that the subject can be focused on more accurately. Specifically, the target area is divided into N subregions. In one embodiment, all of the N subregions have the same size. In another embodiment, the N subregions all differ in size, or some of the subregions differ in size.
Fig. 5 is the second schematic diagram of the view-finding interface of the camera of an embodiment of the present invention. As shown in Fig. 5, the 1/4 region at the center of the view-finding interface (that is, the target area) is divided into 5 × 5 = 25 subregions (that is, N = 25). Those skilled in the art should understand that the division of the target area is not limited to the manner shown in Fig. 5; the target area can also be divided in other ways.
Step 1102: Perform feature extraction on the image corresponding to the view-finding interface to obtain characteristic parameters.
In this embodiment of the present invention, feature extraction means extracting the information of each pixel in the image, determining whether each pixel belongs to an image feature, and grouping the contiguous pixels belonging to the same image feature into one class. The result of feature extraction is that all the pixels of the image are divided into different subsets, and these subsets usually correspond to isolated points, continuous curves or continuous regions. The characteristic parameters obtained by performing feature extraction on the image include, but are not limited to, edges, corners, regions and ridges.
Step 1103: Determine the subject in the view-finding interface based on the extracted characteristic parameters, and determine the position of the subject in the view-finding interface.
In this embodiment of the present invention, the subject generally stands out relative to the background, so the contour characterized by the characteristic parameters of the subject is more prominent. Based on this understanding, a contour can be determined from the extracted characteristic parameters; this contour represents the contour of the subject, and the position of the subject in the view-finding interface can be determined from the coordinate positions of the pixels of the contour.
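For illustration only, the sketch below realizes steps 1102 and 1103 with Canny edge features and contour grouping, taking the largest contour as the subject's outline and its bounding box as the subject's position; the specific detector, its thresholds and the "largest contour" rule are assumptions, since the embodiment only requires that a prominent contour be identified from the extracted characteristic parameters.

```python
import cv2
import numpy as np

def subject_box_from_features(image_bgr):
    """Extract edge features, group them into contours, take the most prominent
    (largest-area) contour as the subject, and return its bounding box (x, y, w, h)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Close small gaps so edge fragments of the same object form one contour.
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    subject = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(subject)          # (x, y, w, h) in image coordinates
```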
Step 1104: Select, from the N subregions, the M subregions where the subject is located, based on the position of the subject in the view-finding interface, and take the region formed by the M subregions as the focusing area, M being a positive integer less than or equal to N.
In this embodiment of the present invention, the subject is the object the user wants to shoot, and the user wants a high-definition image of the subject. For example, if the view-finding area is a person in front of a building, the person is the subject, and the building, or every object other than the person, is the background. This embodiment of the present invention aims to shoot the subject clearly, so the range of the focusing area needs to be determined based on the position of the subject in the view-finding interface.
Specifically, M subregions are selected from the N subregions based on the position of the subject in the view-finding interface, and the region formed by the M subregions is taken as the focusing area, M being a positive integer less than or equal to N. Here, the selected M subregions are exactly the region where the subject is located.
As shown in Fig. 6, the 1/4 region at the center of the view-finding interface (that is, the target area) is divided into 5 × 5 = 25 subregions (that is, N = 25). Assuming the subject is located at the upper-right corner of the target area in the view-finding interface, the 4 subregions at the upper-right corner (that is, M = 4) are selected from the N subregions as the focusing area.
As shown in Fig. 7, the 1/4 region at the center of the view-finding interface (that is, the target area) is divided into 5 × 5 = 25 subregions (that is, N = 25). Assuming the subject is located at the center of the target area in the view-finding interface, the 9 subregions at the center (that is, M = 9) are selected from the N subregions as the focusing area.
In this embodiment of the present invention, the positions and the number of the selected M subregions depend on the subject: the positions of the selected M subregions depend on the position of the subject, and the number of the selected M subregions depends on the size of the subject.
Step 1105: Focus on the object in the focusing area.
In this embodiment of the present invention, the object in the focusing area is focused on using auto-focus technology. In terms of basic principle, auto-focus can be divided into two major classes: one is ranging auto-focus, based on measuring the distance between the lens and the target being shot; the other is focus-detection auto-focus, based on obtaining a sharp image on the focusing screen.
The focusing process is described below taking focus-detection auto-focus as an example: the focal length of the camera is adjusted to obtain images of the view-finding interface at different focal lengths; when the sharpness of the image in the focusing area of the view-finding interface reaches its maximum, the current focal length is determined to be the focusing focal length corresponding to the focusing area.
The above focus-detection auto-focus mainly includes the contrast method and the phase method:
1) Contrast method. This method achieves auto-focus by detecting the contour edges of the image. The sharper the contour edges of the image, the larger its brightness gradient, in other words, the larger the contrast between the scenery at the edge and the background. Conversely, in a defocused image the contour edges are blurred and the brightness gradient, or contrast, drops; the more defocused, the lower the contrast. Using this principle, two photoelectric detectors are placed at equal distances in front of and behind the CCD, the image of the photographed object is split onto the two detectors simultaneously, and the contrast of the image on each detector is output separately. When the absolute value of the difference between the contrasts output by the two detectors is at its minimum, the image plane of the focusing lies exactly between the two detectors, that is, close to the imaging surface of the CCD, and focusing is complete.
2) Phase method. This method achieves auto-focus by detecting the offset of the image. A grid plate made up of parallel lines, alternately transparent and opaque, is placed at the position of the CCD. Behind the grid plate, two photoreceptors are placed at appropriate positions, symmetric about the optical axis, and the grid plate oscillates back and forth in the direction perpendicular to the optical axis. When the focusing surface coincides with the grid plate, the light passing through the transparent lines of the grid plate reaches the two photoreceptors behind it at the same time. When defocused, the light beam reaches the two photoreceptors one after the other, so there is a phase difference between their output signals. After circuit processing, the two phase-shifted signals control an actuator to adjust the position of the objective lens so that the focusing surface coincides with the plane of the grid plate.
In the embodiment of the present invention, the focusing area may also be determined by other schemes. A quicker implementation is as follows: as shown in Figure 12, it is assumed that the N subregions include a first group of subregions and a second group of subregions, and each subregion in the second group of subregions is arranged around the periphery of the first group of subregions.
Then, the first group of subregions is selected from the N subregions; it is determined whether the subject is present in the first group of subregions; if the subject is present in the first group of subregions, the first group of subregions is taken as the focusing area; if the subject is not present in the first group of subregions, the second group of subregions is taken as the focusing area.
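A minimal sketch of this group selection, assuming the subregions and the subject are described by axis-aligned rectangles (x0, y0, x1, y1), might read:

```python
def overlaps(a, b):
    # Axis-aligned rectangles given as (x0, y0, x1, y1).
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def pick_group_focus_area(first_group, second_group, subject_box):
    # The first (inner) group of subregions becomes the focusing area whenever
    # the subject overlaps any of its subregions; otherwise the peripheral
    # second group is used instead.
    if any(overlaps(cell, subject_box) for cell in first_group):
        return first_group
    return second_group
```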
Figure 13 is a schematic diagram of the structural composition of the terminal of the embodiment of the present invention. As shown in Figure 13, the terminal includes:
a camera 1301, configured to perform image acquisition on the view-finding area;
a memory 1302, configured to store a focusing program; and
a processor 1303, configured to execute the focusing program in the memory to implement the following operations:
dividing a target area in the view-finding interface into N subregions, N being a positive integer;
selecting M subregions from the N subregions based on the position of the subject in the view-finding interface, and taking the region formed by the M subregions as the focusing area, M being a positive integer less than or equal to N; and
focusing on the object in the focusing area.
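The grid division underlying these operations can be sketched as follows; the choice of a rows x cols grid and the rectangle convention are assumptions made for the illustration, since the disclosure only requires some partition of the target area into N subregions.

```python
def split_into_subregions(target_box, rows, cols):
    # Divide the target area (x0, y0, x1, y1) into rows * cols = N subregions,
    # returned as a list of rectangles in the same (x0, y0, x1, y1) convention.
    x0, y0, x1, y1 = target_box
    cell_w = (x1 - x0) / cols
    cell_h = (y1 - y0) / rows
    cells = []
    for r in range(rows):
        for c in range(cols):
            cells.append((x0 + c * cell_w, y0 + r * cell_h,
                          x0 + (c + 1) * cell_w, y0 + (r + 1) * cell_h))
    return cells

# Example: a 3 x 3 grid over a 900 x 600 target area gives N = 9 subregions.
grid = split_into_subregions((0, 0, 900, 600), rows=3, cols=3)
```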
In the embodiment of the present invention, the processor 1303 is further configured to execute the focusing program in the memory to implement the following operations:
acquiring depth information of each object in the view-finding interface;
determining the subject in the view-finding interface based on the depth information of each object in the view-finding interface, and determining the position of the subject in the view-finding interface; and
selecting, from the N subregions, the M subregions in which the subject is located, based on the position of the subject in the view-finding interface.
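One possible reading of this depth-based selection, assuming the subject is taken to be the cluster of nearest depth values and that the subregions are the grid cells sketched earlier, is:

```python
import numpy as np

def subject_box_from_depth(depth_map: np.ndarray, tolerance: float = 0.15):
    # Treat the pixels whose depth lies within `tolerance` (relative) of the
    # nearest valid depth as the subject and return its bounding box
    # (x0, y0, x1, y1).  Both the clustering rule and the tolerance are
    # assumptions of this sketch.
    valid = depth_map[np.isfinite(depth_map) & (depth_map > 0)]
    nearest = float(valid.min())
    mask = np.abs(depth_map - nearest) <= tolerance * nearest
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

def select_subregions(cells, subject_box):
    # Keep the M grid cells that the subject's bounding box intersects.
    sx0, sy0, sx1, sy1 = subject_box
    return [(x0, y0, x1, y1) for (x0, y0, x1, y1) in cells
            if x0 < sx1 and sx0 < x1 and y0 < sy1 and sy0 < y1]
```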
In the embodiment of the present invention, the processor 1303 is further configured to execute the focusing program in the memory to implement the following operations:
performing feature extraction on the image corresponding to the view-finding interface to obtain feature parameters;
determining the subject in the view-finding interface based on the extracted feature parameters, and determining the position of the subject in the view-finding interface; and
selecting, from the N subregions, the M subregions in which the subject is located, based on the position of the subject in the view-finding interface.
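The disclosure does not fix a particular feature extractor; as one hedged example, a gradient-magnitude feature map could localize the subject roughly as follows, with the keep_ratio threshold being an assumption of the sketch. The resulting bounding box can then be passed to select_subregions above.

```python
import numpy as np

def subject_box_from_features(gray: np.ndarray, keep_ratio: float = 0.02):
    # A crude feature map: brightness-gradient magnitude.  The strongest
    # responses (top `keep_ratio` of pixels) are assumed to belong to the
    # subject, and their bounding box (x0, y0, x1, y1) is returned.
    gy, gx = np.gradient(gray.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    threshold = np.quantile(magnitude, 1.0 - keep_ratio)
    ys, xs = np.nonzero(magnitude >= threshold)
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1
```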
In the embodiment of the present invention, the N subregions include a first group of subregions and a second group of subregions, and each subregion in the second group of subregions is arranged around the periphery of the first group of subregions;
the processor 1303 is further configured to execute the focusing program in the memory to implement the following operations:
selecting the first group of subregions from the N subregions;
determining whether the subject is present in the first group of subregions;
if the subject is present in the first group of subregions, taking the first group of subregions as the focusing area; and
if the subject is not present in the first group of subregions, taking the second group of subregions as the focusing area.
In the embodiment of the present invention, the processor 1303 is further configured to execute the focusing program in the memory to implement the following operations:
adjusting the focal length of the camera to obtain images of the view-finding interface corresponding to different focal lengths; and
when the sharpness of the image within the focusing area of the view-finding interface is at its maximum, determining the current focal length as the in-focus focal length corresponding to the focusing area.
In the embodiment of the present invention, the view-finding interface includes one or more focusing areas;
the terminal further includes:
a display 1304, configured to mark the focusing area in the picture corresponding to the view-finding interface.
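Marking the focusing area on the preview picture can be as simple as drawing a hollow rectangle over the frame; the following sketch assumes the frame is a height x width x 3 RGB array and is illustrative only.

```python
import numpy as np

def mark_focus_area(frame: np.ndarray, box, color=(0, 255, 0), thickness=2):
    # Draw a hollow rectangle around the focusing area on a copy of the preview
    # frame; box is (x0, y0, x1, y1) in pixel coordinates.
    x0, y0, x1, y1 = (int(v) for v in box)
    marked = frame.copy()
    marked[y0:y0 + thickness, x0:x1] = color  # top edge
    marked[y1 - thickness:y1, x0:x1] = color  # bottom edge
    marked[y0:y1, x0:x0 + thickness] = color  # left edge
    marked[y0:y1, x1 - thickness:x1] = color  # right edge
    return marked
```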
It will be appreciated by those skilled in the art that the functions of the components of the terminal in the embodiment of the present invention can be understood with reference to the related description of the foregoing focusing method.
If the above terminal of the embodiment of the present invention is implemented in the form of software function modules and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the embodiment of the present invention, in essence, or the part that contributes to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the method described in each embodiment of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read Only Memory), a magnetic disk, or an optical disc. Thus, the embodiments of the present invention are not limited to any specific combination of hardware and software.
Correspondingly, the embodiment of the present invention also provides a computer-readable storage medium in which a computer program is stored, the computer program being configured to perform the focusing method of the embodiment of the present invention.
It should be noted that, herein, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
The above embodiments of the present invention are described for illustration only and do not represent the relative merits of the embodiments.
The embodiments of the invention are described above with reference to the accompanying drawings, but the invention is not limited to the above specific embodiments. The above embodiments are merely illustrative rather than restrictive; under the inspiration of the present invention, those of ordinary skill in the art can also derive many further forms without departing from the concept of the present invention and the scope of protection of the claims, all of which fall within the protection of the present invention.
Claims (10)
1. A focusing method, characterized in that the method comprises:
dividing a target area in a view-finding interface into N subregions, N being a positive integer;
selecting M subregions from the N subregions based on the position of a subject in the view-finding interface, and taking the region formed by the M subregions as a focusing area, M being a positive integer less than or equal to N; and
focusing on an object in the focusing area.
2. The focusing method according to claim 1, characterized in that the selecting of M subregions from the N subregions based on the position of the subject in the view-finding interface comprises:
acquiring depth information of each object in the view-finding interface;
determining the subject in the view-finding interface based on the depth information of each object in the view-finding interface, and determining the position of the subject in the view-finding interface; and
selecting, from the N subregions, the M subregions in which the subject is located, based on the position of the subject in the view-finding interface.
3. The focusing method according to claim 1, characterized in that the selecting of M subregions from the N subregions based on the position of the subject in the view-finding interface comprises:
performing feature extraction on the image corresponding to the view-finding interface to obtain feature parameters;
determining the subject in the view-finding interface based on the extracted feature parameters, and determining the position of the subject in the view-finding interface; and
selecting, from the N subregions, the M subregions in which the subject is located, based on the position of the subject in the view-finding interface.
4. The focusing method according to claim 1, characterized in that the N subregions comprise a first group of subregions and a second group of subregions, each subregion in the second group of subregions being arranged around the periphery of the first group of subregions; and
the selecting of M subregions from the N subregions based on the position of the subject in the view-finding interface comprises:
selecting the first group of subregions from the N subregions;
determining whether the subject is present in the first group of subregions;
if the subject is present in the first group of subregions, taking the first group of subregions as the focusing area; and
if the subject is not present in the first group of subregions, taking the second group of subregions as the focusing area.
5. The focusing method according to any one of claims 1 to 4, characterized in that the focusing on the object in the focusing area comprises:
adjusting the focal length of the camera to obtain images of the view-finding interface corresponding to different focal lengths; and
when the sharpness of the image within the focusing area of the view-finding interface is at its maximum, determining the current focal length as the in-focus focal length corresponding to the focusing area.
6. The focusing method according to any one of claims 1 to 4, characterized in that the view-finding interface comprises one or more focusing areas; and
the method further comprises:
marking the focusing area in the picture corresponding to the view-finding interface.
7. A terminal, characterized in that the terminal comprises:
a camera, configured to perform image acquisition on a view-finding area;
a memory, configured to store a focusing program; and
a processor, configured to execute the focusing program in the memory to implement the following operations:
dividing a target area in a view-finding interface into N subregions, N being a positive integer;
selecting M subregions from the N subregions based on the position of a subject in the view-finding interface, and taking the region formed by the M subregions as a focusing area, M being a positive integer less than or equal to N; and
focusing on an object in the focusing area.
8. The terminal according to claim 7, characterized in that the processor is further configured to execute the focusing program in the memory to implement the following operations:
acquiring depth information of each object in the view-finding interface;
determining the subject in the view-finding interface based on the depth information of each object in the view-finding interface, and determining the position of the subject in the view-finding interface; and
selecting, from the N subregions, the M subregions in which the subject is located, based on the position of the subject in the view-finding interface.
9. The terminal according to any one of claims 7 to 8, characterized in that the view-finding interface comprises one or more focusing areas; and
the terminal further comprises:
a display, configured to mark the focusing area in the picture corresponding to the view-finding interface.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the method steps of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710896167.4A CN107566731A (en) | 2017-09-28 | 2017-09-28 | A kind of focusing method and terminal, computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107566731A true CN107566731A (en) | 2018-01-09 |
Family
ID=60983178
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710896167.4A Pending CN107566731A (en) | 2017-09-28 | 2017-09-28 | A kind of focusing method and terminal, computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107566731A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060087578A1 (en) * | 2004-10-27 | 2006-04-27 | Samsung Techwin Co., Ltd. | Method of controlling digital photographing apparatus for out-focusing operation and digital photographing apparatus adopting the method |
US20090074392A1 (en) * | 2007-09-14 | 2009-03-19 | Canon Kabushiki Kaisha | Imaging apparatus and focusing control method |
CN104102068A (en) * | 2013-04-11 | 2014-10-15 | 聚晶半导体股份有限公司 | Automatic focusing method and automatic focusing device |
CN104333748A (en) * | 2014-11-28 | 2015-02-04 | 广东欧珀移动通信有限公司 | Method, device and terminal for obtaining image main object |
CN104363377A (en) * | 2014-11-28 | 2015-02-18 | 广东欧珀移动通信有限公司 | Method and apparatus for displaying focus frame as well as terminal |
WO2016110012A1 (en) * | 2015-01-05 | 2016-07-14 | 中兴通讯股份有限公司 | Focus region selection method and apparatus, and computer-readable storage medium |
CN105744163A (en) * | 2016-02-23 | 2016-07-06 | 湖南拓视觉信息技术有限公司 | Video camera and video recording method performing tracking focusing based on depth information |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108055470A (en) * | 2018-01-22 | 2018-05-18 | 努比亚技术有限公司 | A kind of method of focusing, terminal and storage medium |
WO2020034776A1 (en) * | 2018-08-14 | 2020-02-20 | Oppo广东移动通信有限公司 | Image processing method and apparatus, terminal device, and computer readable storage medium |
CN112235503A (en) * | 2019-07-15 | 2021-01-15 | 北京字节跳动网络技术有限公司 | Focusing test method and device, computer equipment and storage medium |
WO2021102961A1 (en) * | 2019-11-29 | 2021-06-03 | 深圳市大疆创新科技有限公司 | Focusing method for photographing apparatus, image processing device, and storage medium |
CN114666497A (en) * | 2022-02-28 | 2022-06-24 | 青岛海信移动通信技术股份有限公司 | Imaging method, terminal device, storage medium, and program product |
CN114666497B (en) * | 2022-02-28 | 2024-03-15 | 青岛海信移动通信技术有限公司 | Imaging method, terminal device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107317963A (en) | A kind of double-camera mobile terminal control method, mobile terminal and storage medium | |
CN108322644A (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
CN108108704A (en) | Face identification method and mobile terminal | |
CN107566731A (en) | A kind of focusing method and terminal, computer-readable storage medium | |
CN107730462A (en) | A kind of image processing method, terminal and computer-readable recording medium | |
CN107613208A (en) | Adjusting method and terminal, the computer-readable storage medium of a kind of focusing area | |
CN107194963A (en) | A kind of dual camera image processing method and terminal | |
CN108269230A (en) | Certificate photo generation method, mobile terminal and computer readable storage medium | |
CN107682627A (en) | A kind of acquisition parameters method to set up, mobile terminal and computer-readable recording medium | |
CN108008889A (en) | Photographic method, mobile terminal and the computer-readable recording medium of flexible screen | |
CN107680060A (en) | A kind of image distortion correction method, terminal and computer-readable recording medium | |
CN108063901A (en) | A kind of image-pickup method, terminal and computer readable storage medium | |
CN107690065A (en) | A kind of white balance correcting, device and computer-readable recording medium | |
CN107704176A (en) | A kind of picture-adjusting method and terminal | |
CN107959795A (en) | A kind of information collecting method, equipment and computer-readable recording medium | |
CN107493426A (en) | A kind of information collecting method, equipment and computer-readable recording medium | |
CN107666526A (en) | A kind of terminal with camera | |
CN107239205A (en) | A kind of photographic method, mobile terminal and storage medium | |
CN107707821A (en) | Modeling method and device, bearing calibration, terminal, the storage medium of distortion parameter | |
CN110086993A (en) | Image processing method, device, mobile terminal and computer readable storage medium | |
CN107240072A (en) | A kind of screen luminance adjustment method, terminal and computer-readable recording medium | |
CN107040723A (en) | A kind of imaging method based on dual camera, mobile terminal and storage medium | |
CN107357500A (en) | A kind of picture-adjusting method, terminal and storage medium | |
CN107404618A (en) | A kind of image pickup method and terminal | |
CN107295269A (en) | A kind of light measuring method and terminal, computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180109 |