CN104135609B - Auxiliary photo-taking method, apparatus and terminal - Google Patents
- Publication number: CN104135609B (application CN201410302630.4A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention provides an auxiliary photo-taking method, apparatus and terminal. The method includes: when shooting a scene, acquiring image frames in real time; matching the currently acquired image frame in real time against a template image corresponding to the scene; and outputting the resulting matching degree in real time. According to the technical scheme provided by the invention, by outputting the matching degree in real time, the user can be assisted in obtaining multiple photos in which the position and posture of the photographed subject are substantially consistent, thereby effectively improving the user experience.
Description
Technical field
The present invention relates to the field of communications, and in particular to an auxiliary photo-taking method, apparatus and terminal.
Background
In work and daily life, the following photographing need commonly arises: the same subject must be photographed once at regular intervals, and the resulting photos are assembled into an image sequence for comparison or record keeping. For example, a scene may be shot at the same position every day, and the change of the weather analyzed from the photographed scene.
Because the photos need to be compared, or simply for better observation, the position and posture of the subject in each picture must be substantially consistent; that is, the shooting position and angle of the capture apparatus must be substantially consistent. In the related art, this group of images is obtained by fixing the camera at one position and in one posture. This approach is workable when the photographing interval is short, but if the interval is long it becomes difficult to keep the camera fixed in one position and one posture, and the approach may cease to be practical.
Therefore, how to help the user obtain multiple photos in which the position and posture of the subject are substantially consistent is a technical problem that urgently needs to be solved.
Summary of the invention
The purpose of the present invention is to provide an auxiliary photo-taking method, apparatus and terminal, so as to solve at least one of the above problems.
According to a first aspect of the present invention, an auxiliary photo-taking method is provided.
The auxiliary photo-taking method according to the present invention includes: when shooting a scene, acquiring image frames in real time; matching the currently acquired image frame in real time against the template image corresponding to the scene; and outputting the resulting matching degree in real time.
Matching the image frame against the template image in real time includes: acquiring first feature points of the template image; acquiring second feature points of the image frame; and matching the first feature points against the second feature points in real time to obtain the matching degree.
Acquiring the first feature points and the second feature points includes: extracting contour features from the image frame and the template image respectively; among the extracted contour features, deleting those whose length is below a first threshold; deleting image corner points whose response is below a second threshold; taking the contour features and corner points that survive the deletion as feature points; and screening the feature points until they are evenly distributed, then determining the edge direction in the neighborhood of each feature point to obtain the final feature points.
Matching the image frame against the template image in real time includes: by analyzing image texture, dividing the image frame and the template image respectively into multiple regions of uniform texture; and matching the divided regions of the image frame and the template image block by block.
Before the image frame is matched against the template image in real time, the method further includes: in response to a user operation, setting a matching weight for each selected region of the template image. Matching the image frame against the template image in real time then includes: determining the matching degree according to the matching weights.
Before the currently acquired image frame is matched against the preset template image in real time, the method further includes: after the scene is photographed for the first time, taking the resulting photo as the template image; and, from the second photographing of the scene onwards, averaging all photos taken so far after each shot and taking the averaged photo as the template image.
Before image frames are acquired in real time, the method further includes: processing the template image to obtain a shooting reference image, where the shooting reference image is either the first feature points obtained from the template image or a semi-transparent version of the template image; and superimposing the shooting reference image on the image captured in real time by the terminal.
When outputting the matching degree, the method further includes: obtaining, through feature point matching, a transformation matrix that transforms the image frame to the template image; and transforming the current image frame with the transformation matrix so that it takes on the posture of the template image.
According to a second aspect of the present invention, an auxiliary photo-taking apparatus is provided.
The auxiliary photo-taking apparatus according to the present invention includes: a first acquisition module for acquiring image frames in real time when a scene is being shot; a matching module for matching the currently acquired image frame in real time against the template image corresponding to the scene; and an output module for outputting the resulting matching degree in real time.
The matching module includes: a first acquisition unit for extracting the first feature points of the template image; a second acquisition unit for acquiring the second feature points of the image frame; and a first matching unit for matching the first feature points against the second feature points in real time to obtain the matching degree.
The matching module includes: a texture analysis module for dividing, by analyzing image texture, the image frame and the template image respectively into multiple regions of uniform texture; and a second matching unit for matching the divided regions of the image frame and the template image block by block.
The apparatus further includes: a setup module for responding to a user operation and setting the matching weight of each selected region in the template image. The matching module then further includes: a determining unit for determining the matching degree according to the matching weights.
The apparatus further includes: a first determining module for taking, after the scene is photographed for the first time, the resulting photo as the template image; and a second determining module for averaging, from the second photographing of the scene onwards and after each shot, all photos taken so far and taking the averaged photo as the template image.
The apparatus further includes: a second acquisition module for processing the template image to obtain a shooting reference image, where the shooting reference image includes the first feature points obtained from the template image, or is a semi-transparent version of the template image; and a superposition module for superimposing the shooting reference image on the image captured in real time by the terminal.
The apparatus further includes: a third acquisition module for obtaining, through feature point matching, a transformation matrix that transforms the image frame to the template image; and a processing module for transforming the current image frame with the transformation matrix so that it takes on the posture of the template image.
According to a third aspect of the present invention, a terminal is provided.
The terminal according to the present invention includes: one or more processors; a memory; and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules being used to: acquire image frames in real time when a scene is being shot; match the currently acquired image frame in real time against the template image corresponding to the scene; and output the resulting matching degree in real time.
The technical scheme provided by the embodiments of the present disclosure can bring the following benefits. In the prior art, when the photographing interval is long it is difficult to keep the camera fixed in one position and in one posture, so the prior-art approach lacks operability. In contrast, when a scene is being shot, the currently acquired image frame is matched in real time against the template image corresponding to the scene and the resulting matching degree is output in real time, which assists the user in obtaining multiple photos in which the position and posture of the subject are substantially consistent, thereby effectively improving the user experience.
It should be appreciated that the general description above and the detailed description below are merely exemplary and do not limit the present disclosure.
Brief description of the drawings
Fig. 1 is a flow chart of an auxiliary photo-taking method according to an embodiment of the present invention;
Fig. 2 is a flow chart of an auxiliary photo-taking method according to Embodiment 1 of the present invention;
Fig. 3 is a flow chart of an auxiliary photo-taking method according to Embodiment 2 of the present invention;
Fig. 4 is a structural block diagram of an auxiliary photo-taking apparatus according to an embodiment of the present invention;
Fig. 5 is a structural block diagram of an auxiliary photo-taking apparatus according to Embodiment 1 of the present invention;
Fig. 6 is a structural block diagram of an auxiliary photo-taking apparatus according to Embodiment 2 of the present invention; and
Fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
The accompanying drawings here are incorporated into and form part of this specification; they show embodiments consistent with the present invention and, together with the specification, serve to explain the principles of the invention.
Detailed description of the embodiments
The present invention is described in further detail below through specific embodiments and with reference to the accompanying drawings.
Fig. 1 is a flow chart of an auxiliary photo-taking method according to an embodiment of the present invention. As shown in Fig. 1, the auxiliary photo-taking method mainly includes the following processing:
In step S101, when a scene is being shot, image frames are acquired in real time;
In step S103, the currently acquired image frame is matched in real time against the template image corresponding to the scene;
In step S105, the resulting matching degree is output in real time.
In the method shown in Fig. 1, when a scene is being shot, the currently acquired image frame is matched in real time against the template image corresponding to the scene and the resulting matching degree is output in real time, which assists the user in obtaining multiple photos in which the position and posture of the subject are substantially consistent, thereby effectively improving the user experience.
The template image described above can be determined as follows: after the scene is photographed for the first time, the resulting photo is taken as the template image; from the second photographing of the scene onwards, after each shot all photos taken so far are averaged with weights, and the averaged photo is taken as the template image.
Through this processing, from the terminal's second shot onwards the template image is updated after every shot to the weighted average of all photos taken so far, which effectively ensures that the position and posture of the subject are closer to one another across the photos obtained for the scene.
The image averaging can proceed as follows. Taking the case where the user has taken 10 photos in total as an example, each pixel is processed as follows: the pixel values (for example, RGB values) of the corresponding pixel in each picture are added together and divided by 10 to obtain the average pixel value. After every pixel has been processed, the processed pixels form the average image.
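The per-pixel arithmetic above can be sketched in a few lines. The patent contains no code; the representation below (images as nested lists of RGB tuples, a hypothetical `average_images` helper) is purely illustrative:

```python
def average_images(images):
    """Average N same-sized images pixel by pixel (unweighted)."""
    n = len(images)
    height, width = len(images[0]), len(images[0][0])
    result = []
    for y in range(height):
        row = []
        for x in range(width):
            # Sum the corresponding channel values across all images, divide by N.
            r = sum(img[y][x][0] for img in images) / n
            g = sum(img[y][x][1] for img in images) / n
            b = sum(img[y][x][2] for img in images) / n
            row.append((r, g, b))
        result.append(row)
    return result

# Two 1x2 "photos": averaging (100,0,0) with (200,0,0) gives (150,0,0).
photos = [
    [[(100, 0, 0), (0, 50, 0)]],
    [[(200, 0, 0), (0, 150, 0)]],
]
avg = average_images(photos)
```

A weighted variant would multiply each image's contribution by its weight and divide by the sum of weights instead of by N.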
When averaging the images, weighting may or may not be applied.
Of course, in a specific implementation, the photo obtained after the first shot may also be kept as the template image throughout; this scheme is simpler and easier to realize.
Before image frames are acquired in real time, the following processing may also be included: the template image is processed to obtain a shooting reference image, where the shooting reference image is either the first feature points obtained from the template image (here, first feature points are feature points obtained from the template image, and second feature points are feature points obtained from the currently acquired image frame) or a semi-transparent version of the template image; the shooting reference image is then superimposed on the image captured in real time by the terminal. Specifically, the shooting reference image is overlaid on top of the real-time image of the terminal and presented to the user.
To better assist the user in obtaining a photo whose subject position and posture are close to those in the template image, the shooting reference image can be superimposed on the real-time image to guide the user toward the optimal position, so that a photo that better matches the template image is obtained.
The shooting reference image used to guide the user toward the optimal position can be obtained in several ways. Way one: the first feature points are obtained from the template image, superimposed on top of the real-time image of the terminal, and presented to the user; while shooting the scene, the user continuously adjusts the shooting position according to the first feature points. Way two: the template image is made semi-transparent, and the semi-transparent template image is superimposed on top of the real-time image of the terminal; while shooting the scene, the user continuously adjusts the shooting position according to the semi-transparent template image.
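Way two amounts to alpha-blending the template over the live preview. A minimal sketch, assuming grayscale pixel rows and a fixed 50% template opacity (the function name and the alpha value are illustrative choices, not taken from the patent):

```python
def overlay_semitransparent(template, live, alpha=0.5):
    """Blend a semi-transparent template image over the live preview.

    template, live: same-sized 2-D lists of grayscale values (0-255).
    alpha: opacity of the template layer (0 = invisible, 1 = opaque).
    """
    return [
        [alpha * t + (1.0 - alpha) * v for t, v in zip(trow, vrow)]
        for trow, vrow in zip(template, live)
    ]

template = [[255, 0]]
live = [[0, 0]]
blended = overlay_semitransparent(template, live)  # [[127.5, 0.0]]
```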
In step S103 above, the image frame can be matched in real time against the template image in several ways. For example:
Way one: the first feature points of the template image are acquired; the second feature points of the image frame are acquired; and the template image and the image frame are matched in real time using the first feature points and the second feature points to obtain the matching degree.
Acquiring the first feature points of the template image may further include the following processing:
1. Contour features are extracted from the template image;
2. Among the extracted contour features, those whose length is below a first threshold are deleted, as are image corner points whose response is below a second threshold;
3. The contour features and corner points that survive the deletion are taken as feature points; the feature points are screened until they are evenly distributed, and the edge direction in the neighborhood of each feature point is determined to obtain the final feature points.
Similarly, acquiring the second feature points of the image frame may further include the following processing:
1. Contour features are extracted from the image frame;
2. Among the extracted contour features, those whose length is below a first threshold are deleted, as are image corner points whose response is below a second threshold;
3. The contour features and corner points that survive the deletion are taken as feature points; the feature points are screened until they are evenly distributed, and the edge direction in the neighborhood of each feature point is determined to obtain the final feature points.
Contour features can be extracted from the template image or the image frame as follows: the template image or image frame is binarized, i.e. converted into an image containing only black and white, so as to separate the target area of the image from the background area; contour features are then extracted from the target area of the binarized template image or image frame.
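The pruning step in the extraction above (dropping short contours and weak corners) reduces to two threshold filters. A sketch under simple assumed representations, contours as lists of points and corners as ((x, y), response) pairs; the thresholds here are arbitrary example values, since the patent says they are set dynamically:

```python
def filter_features(contours, corners, min_length, min_response):
    """Keep contours at least min_length points long and corners whose
    response reaches min_response, mirroring step 2 of the extraction."""
    kept_contours = [c for c in contours if len(c) >= min_length]
    kept_corners = [(pt, r) for pt, r in corners if r >= min_response]
    return kept_contours, kept_corners

contours = [[(0, 0), (0, 1)], [(5, 5), (5, 6), (5, 7), (5, 8)]]
corners = [((2, 2), 0.9), ((3, 3), 0.1)]
kept_c, kept_k = filter_features(contours, corners, min_length=3, min_response=0.5)
```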
Matching the template image against the image frame in real time using the first feature points and the second feature points may further include the following processing:
First, feature points are extracted from each of the two images to be matched (the template image and the currently acquired image frame), yielding two feature point sets. Centered on each feature point, the edge direction in its neighborhood is computed, and the pixel values of the neighborhood are used as the descriptor of that feature point. The coordinate position of each feature point in the template image is determined, and the image frame is searched, within the local area corresponding to that coordinate position, for the neighborhood centered on the matching feature point. The sum of squared pixel differences over the overlapping part of the two corresponding neighborhoods is computed, and the matching degree of the two neighborhoods is determined from this sum. The matching-degree results of all corresponding neighborhoods in the two images are then combined to determine the matching degree of the two images.
Way two: by analyzing image texture, the image frame and the template image are divided respectively into multiple regions of uniform texture; the divided regions of the image frame and the template image are then matched block by block.
Here, texture analysis refers to the process of extracting texture features through image processing techniques so as to obtain a quantitative or qualitative description of the texture. Specifically, the texture of the image is analyzed, region growing is performed according to the degree of texture variation, and the image frame and the template image are each divided into multiple regions of uniform texture.
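Region growing can be illustrated with a deliberately simplified version: real texture segmentation grows regions on texture descriptors, but the same flood-fill mechanics can be shown on raw intensities with a similarity tolerance (both the representation and the tolerance are assumptions for illustration):

```python
from collections import deque

def region_grow(image, tolerance):
    """Label 4-connected regions whose pixel values stay within `tolerance`
    of the region's seed pixel; a crude stand-in for texture-based growing."""
    h, w = len(image), len(image[0])
    labels = [[None] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] is not None:
                continue
            seed = image[sy][sx]
            queue = deque([(sy, sx)])
            labels[sy][sx] = next_label
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] is None
                            and abs(image[ny][nx] - seed) <= tolerance):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels, next_label

image = [
    [10, 11, 200],
    [10, 12, 205],
]
labels, n_regions = region_grow(image, tolerance=20)  # splits into two regions
```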
Matching the divided regions of the image frame and the template image block by block may further include the following processing: the coordinate position of each texture region in the template image is determined, and the image frame is searched, within the local area corresponding to that coordinate position, for the corresponding texture region; the disparity values of the two corresponding texture regions are computed to obtain a dense disparity map of the uniform-texture regions; the matching degree of the two corresponding texture regions is determined from this computation; and the matching-degree results of all corresponding texture regions in the two images are combined to determine the matching degree of the two images.
Before the image frame is matched against the template image in real time, the following processing may also be included: in response to a user operation, a matching weight is set for each selected region of the template image; matching the image frame against the template image in real time then includes determining the matching degree according to the matching weights.
The user can manually select, in the template image (for example, the photo from the first shot), the regions that need to be matched with emphasis, or the regions that do not, thereby constructing a matching weight map in which emphasized regions carry larger weights and non-emphasized regions carry smaller weights. For example, when the matching-degree results of corresponding texture regions in the two images are combined to determine the matching degree of the two images, the weight of each region in the matching weight map can be taken into account: regions that need emphasis carry larger weights, regions that do not carry smaller weights, and the weight of regions of no concern can even be set to zero. Because the weights are incorporated when computing the matching degree of the two images, the matching result is further optimized.
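Combining per-region scores under a weight map is a weighted average. A sketch, assuming per-region matching degrees have already been computed (the normalization by total weight is an illustrative convention, not mandated by the patent):

```python
def weighted_match_degree(region_scores, region_weights):
    """Combine per-region matching degrees into an overall matching degree
    using the user's weight map; zero-weight regions contribute nothing."""
    total_weight = sum(region_weights)
    if total_weight == 0:
        raise ValueError("at least one region must have a non-zero weight")
    return sum(s * w for s, w in zip(region_scores, region_weights)) / total_weight

# Three regions: the first needs emphasis, the last is of no concern.
scores = [0.9, 0.5, 0.1]
weights = [3.0, 1.0, 0.0]
overall = weighted_match_degree(scores, weights)  # (2.7 + 0.5) / 4 = 0.8
```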
When outputting the matching degree, the following processing may also be included: through feature point matching, a transformation matrix that transforms the image frame to the template image is obtained; the current image frame is then transformed with the transformation matrix so that it takes on the posture of the template image.
In the scheme above, although the current matching degree (for example, a matching score) can be provided in real time while the user shoots, the user still has to adjust the camera manually to find an accurate position. Using feature point matching, the transformation matrix from the current image to the template image is obtained, and the current image is transformed into the posture of the template image. In this way, the user does not need to adjust very accurately; rough alignment suffices, which effectively improves the user experience.
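The patent does not specify how the transformation matrix is computed; in practice one would robustly estimate a homography or affine transform from many matched feature points (for example with RANSAC). As an illustration of the underlying idea only, the following sketch solves exactly for a 2x3 affine matrix from three matched pairs and applies it to a point:

```python
def affine_from_three_pairs(src, dst):
    """Solve exactly for the 2x3 affine matrix mapping three src points
    onto three dst points, via Cramer's rule on a*x + b*y + c = x'."""
    (x0, y0), (x1, y1), (x2, y2) = src
    det = x0 * (y1 - y2) - y0 * (x1 - x2) + (x1 * y2 - x2 * y1)
    rows = []
    for k in (0, 1):  # k = 0 solves the x' row, k = 1 the y' row
        b0, b1, b2 = dst[0][k], dst[1][k], dst[2][k]
        a = (b0 * (y1 - y2) - y0 * (b1 - b2) + (b1 * y2 - b2 * y1)) / det
        b = (x0 * (b1 - b2) - b0 * (x1 - x2) + (x1 * b2 - x2 * b1)) / det
        c = (x0 * (y1 * b2 - y2 * b1) - y0 * (x1 * b2 - x2 * b1)
             + b0 * (x1 * y2 - x2 * y1)) / det
        rows.append((a, b, c))
    return rows

def apply_affine(matrix, point):
    (a0, b0, c0), (a1, b1, c1) = matrix
    x, y = point
    return (a0 * x + b0 * y + c0, a1 * x + b1 * y + c1)

# Matched pairs produced by a pure translation of (+5, -3); the fit recovers it.
src = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dst = [(5.0, -3.0), (15.0, -3.0), (5.0, 7.0)]
M = affine_from_three_pairs(src, dst)
warped = apply_affine(M, (2.0, 2.0))  # -> (7.0, -1.0)
```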
The above embodiments are described further below in conjunction with Fig. 2 and Fig. 3.
Fig. 2 is a flow chart of an auxiliary photo-taking method according to Embodiment 1 of the present invention. As shown in Fig. 2, when the terminal captures an image frame for the first time, the auxiliary photo-taking method mainly includes the following processing:
In step S201, after the scene is photographed for the first time, an image frame is obtained.
In step S203, the obtained image frame is blurred and down-sampled.
Because the interval between any two shots in the present invention is long, what is photographed may change slightly even though the main subject is unchanged, for example changes of flowers, plants and trees in the scenery, color changes, or interference from moving objects. The present invention only concerns matching at large scales in the image and does not concern slight changes, so the image can be Gaussian-blurred and then down-sampled, which both removes noise and reduces the processing complexity.
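The blur-then-downsample step can be sketched as follows. For brevity this uses a 3x3 box blur as a crude stand-in for the Gaussian blur the patent mentions, and factor-2 subsampling; both simplifications are assumptions for illustration:

```python
def box_blur3(image):
    """3x3 box blur with edge clamping; a simplified stand-in for Gaussian blur."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Average the 3x3 neighborhood, clamping indices at the borders.
            vals = [
                image[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            ]
            row.append(sum(vals) / 9.0)
        out.append(row)
    return out

def downsample2(image):
    """Keep every second pixel in each direction (factor-2 down-sampling)."""
    return [row[::2] for row in image[::2]]

frame = [[float((x + y) % 7) for x in range(8)] for y in range(8)]
small = downsample2(box_blur3(frame))  # 8x8 -> 4x4
```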
In step S205, the illumination effect is removed from the obtained image frame.
Because the weather, and hence the illumination, of two shots may differ greatly and thereby affect the image matching, the image needs to be processed before matching to remove the effect of illumination. For example, the Retinex image enhancement method can be used to reduce the influence of illumination on the image.
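The core idea of Retinex enhancement can be sketched in its single-scale form: a smoothed copy of the image approximates the illumination, and subtracting it in the log domain leaves (roughly) the reflectance. Real Retinex implementations use a Gaussian surround at multiple scales; the 3x3 box blur and the test image here are simplifying assumptions:

```python
import math

def box_blur3(image):
    """3x3 box blur with edge clamping, used here as a crude illumination estimate."""
    h, w = len(image), len(image[0])
    return [
        [
            sum(
                image[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            ) / 9.0
            for x in range(w)
        ]
        for y in range(h)
    ]

def single_scale_retinex(image):
    """log(I) - log(blurred I): a uniform illumination change cancels out."""
    blurred = box_blur3(image)
    return [
        [math.log(p) - math.log(b) for p, b in zip(row, brow)]
        for row, brow in zip(image, blurred)
    ]

scene = [[10.0, 20.0], [30.0, 40.0]]
brighter = [[2.0 * p for p in row] for row in scene]  # same scene, doubled illumination
r1 = single_scale_retinex(scene)
r2 = single_scale_retinex(brighter)
# r1 and r2 are (numerically) identical: the illumination change cancels.
```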
In step S207, feature points are extracted from the obtained image frame.
Extracting feature points may further include the following processing:
1. Contour features are extracted from the image frame.
2. Among the extracted contour features, edge contours whose length is below a first threshold are deleted, as are image corner points whose response is below a second threshold. The first and second thresholds can be set dynamically according to the actual situation.
3. The contour features and corner points that survive the deletion are taken as feature points, and the feature points are screened until they are evenly distributed, i.e. the number of feature points per unit area should not differ too much across the whole image, to prevent over-aggregation in some region of the image, which would bias the matching toward densely textured regions.
4. The edge direction in the neighborhood of each feature point is computed.
In step S209, the extracted feature points are saved, and the current image frame is set as the template image described above.
After the image frame undergoes the above processing, noise is effectively removed, the processing complexity is reduced, and the influence of illumination on the image is lessened. In addition, screening the feature points to balance their distribution prevents over-aggregation in some region of the image, which would bias the matching toward densely textured regions. The above processing therefore makes the subsequent image matching more effective.
Fig. 3 is a flow chart of an auxiliary photo-taking method according to Embodiment 2 of the present invention. As shown in Fig. 3, every capture of an image frame other than the terminal's first mainly includes the following processing:
In step S301, after the terminal photographs the scene, an image frame is obtained.
In step S303, the obtained image frame is blurred and down-sampled.
In step S305, the illumination effect is removed from the obtained image frame.
In step S307, feature points are extracted from the obtained image frame. For details, refer to the description of step S207 above, which is not repeated here.
In step S309, image matching is performed using the feature points extracted from the image frame and the feature points extracted from the template image, yielding a matching score.
In step S311, guided by the matching score, the user adjusts the terminal to the optimal position (i.e. the position where the matching score is maximal) and shoots the image.
In step S313, the average image of the sequence is computed (all photos taken so far are averaged) and used to update the template image; the feature points of the sequence average image are then extracted according to steps S303, S305 and S307 and saved.
Through the processing of Fig. 2 and Fig. 3, after the terminal has shot the scene multiple times, a sequence of images in which the position and posture of the subject are substantially consistent is obtained. This sequence can easily be compared, and besides tiling the whole sequence out, it can also be presented dynamically in a more suitable form, for example saved as a GIF or a short video.
Fig. 4 is a structural block diagram of an auxiliary photo-taking apparatus according to an embodiment of the present invention. As shown in Fig. 4, the auxiliary photo-taking apparatus mainly includes: a first acquisition module 40 for acquiring image frames in real time when a scene is being shot; a matching module 42, connected to the first acquisition module 40, for matching the currently acquired image frame in real time against the template image corresponding to the scene; and an output module 44, connected to the matching module 42, for outputting the resulting matching degree in real time.
In the apparatus shown in Fig. 4, when a scene is being shot, the matching module 42 matches the currently acquired image frame in real time against the template image corresponding to the scene, and the output module 44 outputs the resulting matching degree in real time, which assists the user in obtaining multiple photos in which the position and posture of the subject are substantially consistent, thereby effectively improving the user experience.
As shown in Fig. 5, the matching module 42 may further include: a first acquisition unit 420 for extracting the first feature points of the template image; a second acquisition unit 422 for acquiring the second feature points of the image frame; and a first matching unit 424, connected to the first acquisition unit 420 and the second acquisition unit 422, for matching the first feature points against the second feature points in real time to obtain the matching degree.
As shown in Fig. 6, the matching module 42 may further include: a texture analysis module 426 for dividing, by analyzing image texture, the image frame and the template image respectively into multiple regions of uniform texture; and a second matching unit 428, connected to the texture analysis module 426, for matching the divided regions of the image frame and the template image block by block.
As shown in Fig. 5 and Fig. 6, the apparatus may also include a setup module 46, connected to the matching module 42 and configured to set, in response to a user operation, the matching weight of each selected region in the template image; the matching module 42 may then also include a determining unit 430, configured to determine the matching degree according to the matching weights.
As shown in Fig. 5 and Fig. 6, the apparatus may also include: a first determining module 48, connected to the matching module 42 and configured to, after the scene is photographed for the first time, determine the resulting photo as the template image; and a second determining module 50, connected to the matching module 42 and configured to, from the second photographing of the scene onward, average all photos taken so far after each shot and determine the averaged photo as the template image.
As shown in Fig. 5 and Fig. 6, the apparatus may also include: a second acquisition module 52, connected to the output module 44 and configured to process the template image to obtain a shooting reference image, where the shooting reference image contains the first feature points obtained from the template image, or is a semi-transparent version of the template image; and a superposition module, configured to superimpose the shooting reference image on the image captured by the terminal in real time.
As shown in Fig. 5 and Fig. 6, the apparatus may also include: a third acquisition module 54, connected to the matching module 42 and configured to obtain, through feature point matching, the transformation matrix that converts the image frame into the template image; and a processing module 56, connected to the third acquisition module 54 and configured to apply the transformation matrix to the current image frame so as to transform it into the posture of the template image.
For specific embodiments of how the modules and units in the above apparatus are combined with one another, reference may be made to the descriptions of Fig. 1 to Fig. 3, which are not repeated here.
Fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention. As shown in Fig. 7, the terminal can be used to implement the auxiliary photo-taking method provided in the above embodiments. The terminal may be a mobile phone, a digital camera, a tablet computer, or a wearable mobile device (such as smart glasses).
The terminal may include a communication unit 710, a memory 720 including one or more computer-readable storage media, an input unit 730, a display unit 740, a sensor 750, an audio circuit 760, a Wireless Fidelity (WiFi) module 770, a processor 780 including one or more processing cores, a power supply 790, and other components. Those skilled in the art will understand that the terminal structure shown in Fig. 7 does not limit the terminal, which may include more or fewer components than illustrated, combine some components, or adopt a different component arrangement. Specifically:
The communication unit 710 is used to receive and send signals during messaging or calls, and may be a radio frequency (RF) circuit, a router, a modem, or other network communication equipment. In particular, when the communication unit 710 is an RF circuit, it passes the downlink information received from a base station to one or more processors 780 for processing, and sends uplink data to the base station. An RF circuit serving as the communication unit typically includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low-noise amplifier (LNA), a duplexer, and the like. In addition, the communication unit 710 may communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and so on.
The memory 720 is used to store software programs and modules; the processor 780 runs the software programs and modules stored in the memory 720 to execute various functional applications and process data. The memory 720 may mainly include a program storage area and a data storage area, where the program storage area can store the operating system and the application required by at least one function (such as a sound playback function or an image playback function), and the data storage area can store data created according to the use of the terminal (such as audio data or a phone book). In addition, the memory 720 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage component. Correspondingly, the memory 720 may also include a memory controller to provide the processor 780 and the input unit 730 with access to the memory 720.
The input unit 730 is used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input unit 730 may include a touch-sensitive surface 731 and other input devices 732. The touch-sensitive surface 731, also called a touch display screen or a touchpad, collects the user's touch operations on or near it (such as operations performed by the user on or near the touch-sensitive surface 731 with a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connected device according to a preset program.
Optionally, the touch-sensitive surface 731 may include a touch detection device and a touch controller. The touch detection device detects the user's touch orientation and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 780, and can receive and execute commands sent by the processor 780. The touch-sensitive surface 731 may be implemented in multiple types, such as resistive, capacitive, infrared, or surface acoustic wave. Besides the touch-sensitive surface 731, the input unit 730 may also include other input devices 732.
The other input devices 732 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys or a power switch), a trackball, a mouse, a joystick, and the like.
The display unit 740 is used to display information input by the user or provided to the user, as well as the various graphical user interfaces of the terminal, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 740 may include a display panel 741, which may optionally be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. Further, the touch-sensitive surface 731 may cover the display panel 741; when the touch-sensitive surface 731 detects a touch operation on or near it, it passes the operation to the processor 780 to determine the type of the touch event, and the processor 780 then provides the corresponding visual output on the display panel 741 according to the type of the touch event. Although in Fig. 7 the touch-sensitive surface 731 and the display panel 741 are shown as two independent components realizing the input and output functions, in some embodiments the touch-sensitive surface 731 and the display panel 741 may be integrated to realize the input and output functions.
The terminal may also include at least one sensor 750, such as an optical sensor, a motion sensor, and other sensors. The optical sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 741 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 741 and/or its backlight when the terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the phone's posture (such as landscape/portrait switching, related games, and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The terminal may also be configured with a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and other sensors, which are not detailed here.
The audio circuit 760, a loudspeaker 761, and a microphone 762 provide an audio interface between the user and the terminal. The audio circuit 760 can transmit the electrical signal converted from received audio data to the loudspeaker 761, which converts it into a sound signal for output; on the other hand, the microphone 762 converts the collected sound signal into an electrical signal, which the audio circuit 760 receives and converts into audio data. After the audio data is processed by the processor 780, it may be sent through the RF circuit 710 to, for example, another terminal device, or output to the memory 720 for further processing. The audio circuit 760 may also include an earphone jack to provide communication between a peripheral earphone and the terminal.
To realize wireless communication, a wireless communication unit 770 may be configured on the terminal device; the wireless communication unit 770 may be a WiFi module. WiFi is a short-range wireless transmission technology; through the wireless communication unit 770 the terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although Fig. 7 shows the wireless communication unit 770, it is understood that it is not an essential part of the terminal and may be omitted as needed without changing the essence of the invention.
The processor 780 is the control center of the terminal. It connects the various parts of the whole phone through various interfaces and lines, and executes the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 720 and invoking the data stored in the memory 720, thereby monitoring the phone as a whole. Optionally, the processor 780 may include one or more processing cores; preferably, the processor 780 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and so on, while the modem processor mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the processor 780.
The terminal also includes a power supply 790 (such as a battery) that powers the various components. Preferably, the power supply may be logically connected to the processor 780 through a power management system, so that functions such as charging, discharging, and power consumption management are realized through the power management system. The power supply 790 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
Although not shown, the terminal may also include a camera, a Bluetooth module, and so on, which are not detailed here.
In this embodiment, the display unit of the terminal device is a touch-screen display, and the terminal device also includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors. The one or more programs contain instructions for performing the following operations:
when shooting a scene, acquiring image frames in real time;
matching, in real time, the currently acquired image frame against the template image corresponding to the scene; and
outputting the resulting matching degree in real time.
Matching the image frame against the template image in real time includes: obtaining the first feature points of the template image; obtaining the second feature points of the image frame; and matching the first feature points against the second feature points in real time to obtain the matching degree.
Obtaining the first feature points and the second feature points includes: extracting contour features from the image frame and the template image respectively; among the extracted features, deleting contour features whose length is less than a first threshold and image corner points whose response is less than a second threshold; determining the contour features and corner points remaining after the deletion as feature points; and screening the feature points until they are evenly distributed, and determining the edge direction in the neighborhood of each feature point, to obtain the final feature points.
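The thresholding steps above can be sketched as follows. The concrete feature records and the values of the two thresholds are illustrative assumptions, since the embodiment leaves them unspecified:

```python
def screen_features(contours, corners, min_len=20.0, min_resp=0.01):
    """Keep contour features at least min_len long and corner points whose
    response is at least min_resp; both thresholds stand in for the
    embodiment's first and second thresholds, which it does not fix."""
    kept_contours = [c for c in contours if c["length"] >= min_len]
    kept_corners = [p for p in corners if p["response"] >= min_resp]
    # the surviving contours and corners together form the feature points
    return kept_contours + kept_corners

contours = [{"length": 35.0}, {"length": 5.0}]       # second one is too short
corners = [{"response": 0.2}, {"response": 0.001}]   # second one is too weak
print(len(screen_features(contours, corners)))  # → 2
```

The subsequent balancing step (dropping points from over-dense areas until the distribution is even) would operate on the list this function returns.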
Matching the image frame against the template image in real time includes: dividing the image frame and the template image, respectively, into multiple regions of uniform texture through image texture analysis; and matching, block by block, each divided region of the image frame against the corresponding region of the template image.
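A minimal sketch of such block-by-block matching, assuming a uniform grid in place of the texture-based segmentation (which the embodiment does not detail) and normalised cross-correlation as the per-block score:

```python
import numpy as np

def blockwise_match(frame, template, block=4):
    """Split both images into block x block tiles and score each tile with
    normalised cross-correlation; the overall matching degree is the mean
    tile score. The uniform grid is an illustrative stand-in for the
    texture-based region division."""
    h, w = frame.shape
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = frame[y:y + block, x:x + block].ravel().astype(float)
            b = template[y:y + block, x:x + block].ravel().astype(float)
            a -= a.mean()
            b -= b.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            scores.append(a @ b / denom if denom > 0 else 1.0)
    return float(np.mean(scores))

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
print(blockwise_match(img, img))  # about 1.0 for identical images
```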
Before the image frame is matched against the template image in real time, the method also includes: setting, in response to a user operation, the matching weight of each selected region of the template image. Matching the image frame against the template image in real time then includes: determining the matching degree according to the matching weights.
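The weighted determination of the matching degree might look as follows; representing regions by plain score/weight lists is an assumption made for illustration:

```python
def weighted_matching_degree(region_scores, region_weights):
    """Combine per-region matching scores into one matching degree using
    the user-assigned weights; weights are normalised so the result stays
    in [0, 1] when the scores do."""
    total = sum(region_weights)
    return sum(s * w for s, w in zip(region_scores, region_weights)) / total

# the user weighted the subject region (score 0.9) three times as heavily
# as the background region (score 0.5)
print(weighted_matching_degree([0.9, 0.5], [3.0, 1.0]))  # → 0.8
```

Weighting lets a region the user cares about (e.g. the photographed subject) dominate the matching degree even if the background differs between shots.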
The instructions may also include: after the terminal photographs the scene for the first time, determining the resulting photo as the template image; and from the terminal's second photographing of the scene onward, averaging all photos taken so far after each shot and determining the averaged photo as the template image.
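The template update rule described above is a running average over all photos taken so far; a minimal sketch, assuming the photos are arrays of identical shape:

```python
import numpy as np

def update_template(photos):
    """After each shot, average all photos taken so far to form the new
    template image, as the second determining step describes."""
    return np.mean(photos, axis=0)

first = np.full((2, 2), 100.0)   # first shot: used directly as the template
second = np.full((2, 2), 50.0)   # second shot triggers the first averaging
template = update_template([first, second])
print(template[0, 0])  # → 75.0
```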
Before image frames are acquired in real time, the method may also include: processing the template image to obtain a shooting reference image, where the shooting reference image is the first feature points obtained from the template image, or a semi-transparent version of the template image; and superimposing the shooting reference image on the image captured by the terminal in real time.
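Superimposing a semi-transparent template on the live frame is ordinary alpha blending; a sketch, with the 0.5 opacity chosen arbitrarily since the embodiment does not fix one:

```python
import numpy as np

def overlay(live, template, alpha=0.5):
    """Superimpose a semi-transparent template image on the live viewfinder
    frame; alpha is the template's opacity (an illustrative choice)."""
    return ((1.0 - alpha) * live + alpha * template).astype(np.uint8)

live = np.full((2, 2), 200, dtype=np.uint8)
template = np.full((2, 2), 0, dtype=np.uint8)
print(overlay(live, template)[0, 0])  # → 100
```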
When the resulting matching degree is output, the method also includes: obtaining, through feature point matching, the transformation matrix that converts the image frame into the template image; and applying the transformation matrix to the current image frame so as to transform it into the posture of the template image.
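The embodiment does not specify the form of this transformation matrix; one simple choice is an affine model estimated from matched point pairs by least squares (a full homography would need at least four correspondences and a different solver):

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Estimate the 2x3 affine matrix mapping matched frame points to
    template points by least squares; the affine model is an illustrative
    choice, not the patent's prescribed transform."""
    A = np.hstack([src_pts, np.ones((len(src_pts), 1))])  # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return M.T  # dst ≈ M @ [x, y, 1]

# frame points are the template points shifted by (5, -2)
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
dst = src + np.array([5.0, -2.0])
M = estimate_affine(src, dst)
print(np.round(M, 6))
```

Applying M with a per-pixel warp would then transform the current frame into the template's posture.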
In summary, with the embodiments provided by the present invention, when the terminal shoots the same scene repeatedly at different times, there is no need to fix the terminal in place: the matching degree with the template image is provided in real time to guide the user in adjusting the terminal to the optimal position, or feature point matching is used to find the transformation matrix from the current image to the template image and transform the current image into the template posture. This assists the user in obtaining multiple photos in which the position and posture of the photographed object are basically consistent. In addition, to better assist the user in obtaining photos in which the position and posture of the photographed object are closer to those in the template image, the shooting reference image is superimposed on the image captured in real time, guiding the user in adjusting the terminal to the optimal position and further improving the user experience.
One of ordinary skill in the art will appreciate that all or part of the processing in the above method embodiments can be completed by hardware instructed by a program, and the program can be stored in a computer-readable storage medium. When executed, the program performs the steps of the foregoing method embodiments. The storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent substitution, improvement, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (12)
- 1. An auxiliary photo-taking method, characterized by comprising:
when shooting a scene, acquiring image frames in real time;
matching, in real time, the currently acquired image frame against the template image corresponding to the scene;
outputting the resulting matching degree in real time;
wherein matching the image frame against the template image corresponding to the scene in real time comprises:
for each block region divided in the image frame and the template image, determining the coordinate position of each texture region in the template image;
searching the image frame for the texture region in the local area corresponding to the coordinate position;
calculating the disparity values of the two mutually corresponding texture regions to obtain a dense disparity map of the uniform-texture regions;
determining the matching degree of the two texture regions according to the dense disparity map;
matching the image frame against the template image according to the matching degree;
wherein, after the scene is photographed for the first time, the resulting photo is determined as the template image;
and from the second photographing of the scene onward, after each shot, all photos taken so far are averaged and the averaged photo is determined as the template image.
- 2. The method according to claim 1, characterized in that matching the image frame against the template image in real time may also be:
obtaining the first feature points of the template image;
obtaining the second feature points of the image frame;
matching the first feature points against the second feature points in real time to obtain the matching degree.
- 3. The method according to claim 2, characterized in that obtaining the first feature points and the second feature points comprises:
extracting contour features from the image frame and the template image respectively;
among the extracted contour features, deleting contour features whose length is less than a first threshold;
deleting image corner points whose response is less than a second threshold;
determining the contour features and corner points remaining after the deletion as feature points;
screening the feature points until they are evenly distributed, and determining the edge direction in the neighborhood of each feature point, to obtain the final feature points.
- 4. The method according to claim 1, characterized in that before the image frame is matched against the template image in real time, the method further comprises: setting, in response to a user operation, the matching weight of each selected region of the template image; and matching the image frame against the template image in real time comprises: determining the matching degree according to the matching weights.
- 5. The method according to claim 1, characterized in that before image frames are acquired in real time, the method further comprises:
processing the template image to obtain a shooting reference image, wherein the shooting reference image is the first feature points obtained from the template image, or is the semi-transparent template image;
superimposing the shooting reference image on the image captured by the terminal in real time.
- 6. The method according to any one of claims 1 to 5, characterized in that when the resulting matching degree is output, the method further comprises:
obtaining, through feature point matching, the transformation matrix that converts the image frame into the template image;
applying the transformation matrix to the current image frame so as to transform it into the posture of the template image.
- 7. An auxiliary photo-taking apparatus, characterized by comprising:
a first acquisition module, configured to acquire image frames in real time when a scene is shot;
a matching module, configured to match, in real time, the currently acquired image frame against the template image corresponding to the scene;
an output module, configured to output the resulting matching degree in real time;
wherein the matching module comprises a texture analysis module and a second matching unit;
the texture analysis module is configured to divide the image frame and the template image, respectively, into multiple regions of uniform texture through image texture analysis;
the second matching unit is configured to, for each block region divided in the image frame and the template image, determine the coordinate position of each texture region in the template image; search the image frame for the texture region in the local area corresponding to the coordinate position; calculate the disparity values of the two mutually corresponding texture regions to obtain a dense disparity map of the uniform-texture regions; determine the matching degree of the two texture regions according to the dense disparity map; and match the image frame against the template image according to the matching degree;
wherein the apparatus further comprises:
a first determining module, configured to, after the scene is photographed for the first time, determine the resulting photo as the template image;
a second determining module, configured to, from the second photographing of the scene onward, average all photos taken so far after each shot and determine the averaged photo as the template image.
- 8. The apparatus according to claim 7, characterized in that the matching module may also comprise:
a first acquisition unit, configured to extract the first feature points of the template image;
a second acquisition unit, configured to obtain the second feature points of the image frame;
a first matching unit, configured to match the first feature points against the second feature points in real time to obtain the matching degree.
- 9. The apparatus according to claim 7, characterized in that the apparatus further comprises: a setup module, configured to set, in response to a user operation, the matching weight of each selected region in the template image; and the matching module further comprises: a determining unit, configured to determine the matching degree according to the matching weights.
- 10. The apparatus according to claim 7, characterized by further comprising:
a second acquisition module, configured to process the template image to obtain a shooting reference image, wherein the shooting reference image contains the first feature points obtained from the template image, or is the semi-transparent template image;
a superposition module, configured to superimpose the shooting reference image on the image captured by the terminal in real time.
- 11. The apparatus according to any one of claims 7 to 10, characterized by further comprising:
a third acquisition module, configured to obtain, through feature point matching, the transformation matrix that converts the image frame into the template image;
a processing module, configured to apply the transformation matrix to the current image frame so as to transform it into the posture of the template image.
- 12. A terminal, characterized by comprising: one or more processors; a memory; and one or more modules, wherein the one or more modules are stored in the memory and configured to be executed by the one or more processors, and the one or more modules are used to:
when shooting a scene, acquire image frames in real time;
match, in real time, the currently acquired image frame against the template image corresponding to the scene;
output the resulting matching degree in real time;
wherein, after the scene is photographed for the first time, the resulting photo is determined as the template image;
wherein matching the image frame against the template image corresponding to the scene in real time comprises:
for each block region divided in the image frame and the template image, determining the coordinate position of each texture region in the template image;
searching the image frame for the texture region in the local area corresponding to the coordinate position;
calculating the disparity values of the two mutually corresponding texture regions to obtain a dense disparity map of the uniform-texture regions;
determining the matching degree of the two texture regions according to the dense disparity map;
matching the image frame against the template image according to the matching degree;
and from the second photographing of the scene onward, after each shot, all photos taken so far are averaged and the averaged photo is determined as the template image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410302630.4A CN104135609B (en) | 2014-06-27 | 2014-06-27 | Auxiliary photo-taking method, apparatus and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104135609A CN104135609A (en) | 2014-11-05 |
CN104135609B true CN104135609B (en) | 2018-02-23 |
Family
ID=51808123
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410302630.4A Active CN104135609B (en) | 2014-06-27 | 2014-06-27 | Auxiliary photo-taking method, apparatus and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104135609B (en) |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104333696A (en) * | 2014-11-19 | 2015-02-04 | 北京奇虎科技有限公司 | View-finding processing method, view-finding processing device and client |
CN105827930A (en) * | 2015-05-27 | 2016-08-03 | 广东维沃软件技术有限公司 | Method and device of auxiliary photographing |
CN105488756B (en) * | 2015-11-26 | 2019-03-29 | 努比亚技术有限公司 | Picture synthetic method and device |
WO2018000299A1 (en) * | 2016-06-30 | 2018-01-04 | Orange | Method for assisting acquisition of picture by device |
CN107018333A (en) * | 2017-05-27 | 2017-08-04 | 北京小米移动软件有限公司 | Shoot template and recommend method, device and capture apparatus |
CN107197153B (en) * | 2017-06-19 | 2024-03-15 | 上海传英信息技术有限公司 | Shooting method and shooting device for photo |
CN107580182B (en) * | 2017-08-28 | 2020-02-18 | 维沃移动通信有限公司 | Snapshot method, mobile terminal and computer readable storage medium |
CN108108268B (en) * | 2017-11-28 | 2021-08-27 | 北京密境和风科技有限公司 | Method and device for processing quit restart of video recording application |
CN108111752A (en) * | 2017-12-12 | 2018-06-01 | 北京达佳互联信息技术有限公司 | video capture method, device and mobile terminal |
US10574881B2 (en) * | 2018-02-15 | 2020-02-25 | Adobe Inc. | Smart guide to capture digital images that align with a target image model |
CN109257541A (en) * | 2018-11-20 | 2019-01-22 | 厦门盈趣科技股份有限公司 | Householder method of photographing and device |
WO2020133204A1 (en) * | 2018-12-28 | 2020-07-02 | Qualcomm Incorporated | Apparatus and method to correct the angle and location of the camera |
CN110113523A (en) * | 2019-03-15 | 2019-08-09 | 深圳壹账通智能科技有限公司 | Intelligent photographing method, device, computer equipment and storage medium |
CN110223366A (en) * | 2019-04-28 | 2019-09-10 | 深圳传音控股股份有限公司 | Image processing method, picture processing unit and readable storage medium storing program for executing |
CN112199547A (en) * | 2019-07-08 | 2021-01-08 | Oppo广东移动通信有限公司 | Image processing method and device, storage medium and electronic equipment |
CN110266958B (en) * | 2019-07-12 | 2022-03-01 | 北京小米移动软件有限公司 | Shooting method, device and system |
CN110493517A (en) * | 2019-08-14 | 2019-11-22 | 广州三星通信技术研究有限公司 | The auxiliary shooting method and image capture apparatus of image capture apparatus |
CN111131702A (en) * | 2019-12-25 | 2020-05-08 | 航天信息股份有限公司 | Method and device for acquiring image, storage medium and electronic equipment |
CN111770276A (en) * | 2020-07-07 | 2020-10-13 | 上海掌门科技有限公司 | Camera AI intelligent auxiliary photographing method, equipment and computer readable medium |
CN113784039B (en) * | 2021-08-03 | 2023-07-11 | 北京达佳互联信息技术有限公司 | Head portrait processing method, head portrait processing device, electronic equipment and computer readable storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101690164A (en) * | 2007-07-11 | 2010-03-31 | Sony Ericsson Mobile Communications AB | Enhanced image capturing functionality |
CN101996308A (en) * | 2009-08-19 | 2011-03-30 | Vimicro Corporation | Face recognition method and system, and face model training method and system |
CN102074001A (en) * | 2010-11-25 | 2011-05-25 | Shanghai Hehe Information Technology Development Co., Ltd. | Method and system for stitching text images |
CN102814006A (en) * | 2011-06-10 | 2012-12-12 | Mitsubishi Electric Corporation | Image comparison device, patient positioning device and image comparison method |
CN103366374A (en) * | 2013-07-12 | 2013-10-23 | Chongqing University | Fire fighting access obstacle detection method based on image matching |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5040760B2 (en) * | 2008-03-24 | 2012-10-03 | Sony Corporation | Image processing apparatus, imaging apparatus, display control method, and program |
2014
- 2014-06-27: application CN201410302630.4A filed in China; granted as CN104135609B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN104135609A (en) | 2014-11-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104135609B (en) | Auxiliary photo-taking method, apparatus and terminal | |
CN107665697B (en) | Screen brightness adjusting method and mobile terminal | |
CN106296617B (en) | Facial image processing method and device | |
CN107589963B (en) | Image processing method, mobile terminal and computer readable storage medium | |
CN107592471A (en) | High dynamic range image shooting method and mobile terminal | |
CN104143078A (en) | Living-body face recognition method, device and equipment | |
CN107770451A (en) | Photo processing method, apparatus, terminal and storage medium | |
CN107483836B (en) | Shooting method and mobile terminal | |
CN104463105B (en) | Road sign recognition method and device | |
CN107592459A (en) | Photographing method and mobile terminal | |
CN111857793B (en) | Training method, device, equipment and storage medium of network model | |
CN108989665 (en) | Image processing method, device, mobile terminal and computer-readable medium | |
CN108259758 (en) | Image processing method, device, storage medium and electronic equipment | |
CN108038431 (en) | Image processing method, device, computer equipment and computer-readable recording medium | |
CN109144361 (en) | Image processing method and terminal device | |
CN110166691 (en) | Shooting method and terminal device | |
CN109218626 (en) | Photographing method and terminal | |
CN108628568 (en) | Information display method, device and terminal device | |
CN108259743 (en) | Panoramic image shooting method and electronic equipment | |
CN108960179 (en) | Image processing method and mobile terminal | |
CN107864336B (en) | Image processing method and mobile terminal | |
CN108174109 (en) | Photographing method and mobile terminal | |
CN108881544 (en) | Photographing method and mobile terminal | |
CN107124556 (en) | Focusing method, device, computer-readable recording medium and mobile terminal | |
CN108462826 (en) | Auxiliary photographing method and mobile terminal | |
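The abstract above describes matching each frame acquired in real time against the scene's template image and outputting the resulting matching degree to the user. As a minimal sketch of how such a matching degree could be computed — using sliding-window normalized cross-correlation, an assumption for illustration only, since the patent does not specify a particular matching algorithm — in pure NumPy:

```python
import numpy as np

def matching_degree(frame: np.ndarray, template: np.ndarray) -> float:
    """Slide `template` over `frame` (both 2-D grayscale arrays) and
    return the best normalized cross-correlation score, where 1.0
    means a perfect match and 0.0 means no correlated region found."""
    fh, fw = frame.shape
    th, tw = template.shape
    t = template - template.mean()              # zero-mean template
    t_norm = np.sqrt((t * t).sum())
    best = 0.0
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            window = frame[y:y + th, x:x + tw]
            w = window - window.mean()          # zero-mean window
            denom = np.sqrt((w * w).sum()) * t_norm
            if denom == 0:                      # flat region: correlation undefined
                continue
            best = max(best, float((w * t).sum() / denom))
    return best
```

In a camera application this score would be recomputed for each preview frame and displayed, so the user can adjust position and orientation until the degree is high enough to match the earlier shot.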
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||