CN105488756B - Picture synthetic method and device - Google Patents
Picture synthetic method and device
- Publication number
- CN105488756B CN201510845403.0A
- Authority
- CN
- China
- Prior art keywords
- picture
- registration
- pictures
- frame
- reference frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000010189 synthetic method Methods 0.000 title claims abstract description 12
- 210000000746 body region Anatomy 0.000 claims abstract description 60
- 238000000605 extraction Methods 0.000 claims abstract description 30
- 230000015572 biosynthetic process Effects 0.000 claims abstract description 17
- 238000003786 synthesis reaction Methods 0.000 claims abstract description 17
- 230000002194 synthesizing effect Effects 0.000 claims abstract description 6
- 239000000284 extract Substances 0.000 claims abstract description 5
- 238000000034 method Methods 0.000 claims description 36
- 230000009466 transformation Effects 0.000 claims description 26
- 238000006243 chemical reaction Methods 0.000 claims description 7
- 238000004891 communication Methods 0.000 description 23
- 230000008569 process Effects 0.000 description 12
- 238000010586 diagram Methods 0.000 description 10
- 230000006870 function Effects 0.000 description 7
- 238000011156 evaluation Methods 0.000 description 6
- 238000010295 mobile communication Methods 0.000 description 6
- 238000004364 calculation method Methods 0.000 description 5
- 230000008859 change Effects 0.000 description 5
- 238000005516 engineering process Methods 0.000 description 4
- 230000006835 compression Effects 0.000 description 3
- 238000007906 compression Methods 0.000 description 3
- 238000007726 management method Methods 0.000 description 3
- 238000004321 preservation Methods 0.000 description 3
- 230000005540 biological transmission Effects 0.000 description 2
- 230000001413 cellular effect Effects 0.000 description 2
- 238000010276 construction Methods 0.000 description 2
- 230000001186 cumulative effect Effects 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000011478 gradient descent method Methods 0.000 description 2
- 230000001404 mediated effect Effects 0.000 description 2
- 238000005457 optimization Methods 0.000 description 2
- 230000003068 static effect Effects 0.000 description 2
- 230000002123 temporal effect Effects 0.000 description 2
- 230000001133 acceleration Effects 0.000 description 1
- 230000000712 assembly Effects 0.000 description 1
- 238000000429 assembly Methods 0.000 description 1
- 239000003990 capacitor Substances 0.000 description 1
- 238000010367 cloning Methods 0.000 description 1
- 238000005314 correlation function Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 230000005611 electricity Effects 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 230000007774 longterm Effects 0.000 description 1
- 239000002184 metal Substances 0.000 description 1
- 229910052751 metal Inorganic materials 0.000 description 1
- 230000036544 posture Effects 0.000 description 1
- 238000013468 resource allocation Methods 0.000 description 1
- 230000002441 reversible effect Effects 0.000 description 1
- 230000005236 sound signal Effects 0.000 description 1
- 238000001228 spectrum Methods 0.000 description 1
- 239000010409 thin film Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/14—Transformations for image registration, e.g. adjusting or mapping for alignment of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a picture synthesis method and device. The device includes: a picture acquisition module for obtaining multiple pictures; a picture registration module for performing feature registration on the multiple pictures to obtain their common region; a subject extraction module for extracting the subject region of each picture from the common region of the multiple pictures; and a picture synthesis module for compositing the subject regions of all the pictures. The invention can automatically extract the subject regions from multiple pictures and composite them into one photo: the user only needs to shoot several photos of the subject in different positions in the same scene, and the terminal system automatically completes the multi-subject synthesis, saving a great deal of manual operation.
Description
Technical field
The present invention relates to image processing technology, and more particularly to a picture synthesis method and device.
Background technique
As smart devices capable of taking pictures become more and more common, making photography more fun and simpler has become one of the development directions of camera software. Clone camera is a kind of software that has appeared in recent years: multiple pictures are shot in the same scene with the subject striking different poses in different positions, and the subjects from the repeated shots are finally composited onto one photo.
However, existing clone camera software requires the photographer to manually select the subject region in each photo after shooting in order to guide the synthesis, which is cumbersome and time-consuming.
Therefore, it is necessary to provide a method that automatically compares the pictures, locates the subject regions, and completes the synthesis automatically, eliminating a large amount of manual work for the photographer.
Summary of the invention
The primary object of the present invention is to propose a picture synthesis method and device that automatically composite the subject regions of multiple pictures, thereby simplifying the user's operation.
To achieve the above object, an embodiment of the present invention provides a picture synthesis device, comprising:
a picture acquisition module, for obtaining multiple pictures;
a picture registration module, for performing feature registration on the multiple pictures to obtain the common region of the multiple pictures;
a subject extraction module, for extracting the subject region of each picture from the common region of the multiple pictures; and
a picture synthesis module, for compositing the subject regions of all the pictures.
Optionally, the picture registration module includes:
a reference frame selection unit, for choosing one of the multiple pictures as the reference frame picture, the other pictures serving as frames to be registered;
a registration parameter calculation unit, for performing feature registration on each frame to be registered against the reference frame picture to calculate the registration parameters of each frame to be registered;
a picture transformation unit, for transforming each frame to be registered according to its registration parameters so that it matches the reference frame picture, obtaining a registered picture; and
a common region extraction unit, for extracting the intersection of the regions of all the registered pictures to obtain the common region of all the registered pictures.
Optionally, the registration parameter calculation unit is further configured to choose a transformation model and registration features, and, according to the chosen transformation model and registration features, perform feature registration on each frame to be registered against the reference frame picture to calculate the registration parameters of each frame to be registered.
Optionally, the subject extraction module includes:
a frame difference unit, for comparing, within the common region of the registered pictures, each registered picture against the reference frame picture to obtain a frame difference image of each registered picture, and binarizing the frame difference image to obtain a frame difference binary image;
a reference frame subject extraction unit, for extracting the intersection of all the frame difference binary images to obtain the subject region of the reference frame picture; and
a connected region extraction unit, for extracting, according to the frame difference binary image of each registered picture, the connected region of that registered picture to obtain the subject region of each registered picture.
Optionally, the picture synthesis module is further configured to composite the subject regions of all the registered pictures onto the subject region of the reference frame picture.
Optionally, the device further includes:
a picture output module, for processing the composite picture and/or sending it out.
An embodiment of the present invention also proposes a picture synthesis method, comprising:
obtaining multiple pictures;
performing feature registration on the multiple pictures to obtain the common region of the multiple pictures;
extracting the subject region of each picture from the common region of the multiple pictures; and
compositing the subject regions of all the pictures.
Optionally, the step of performing feature registration on the multiple pictures to obtain the common region of the multiple pictures includes:
choosing one of the multiple pictures as the reference frame picture, the other pictures serving as frames to be registered;
performing feature registration on each frame to be registered against the reference frame picture to calculate the registration parameters of each frame to be registered;
transforming each frame to be registered according to its registration parameters so that it matches the reference frame picture, obtaining a registered picture; and
extracting the intersection of the regions of all the registered pictures to obtain the common region of all the registered pictures.
Optionally, the step of performing feature registration on each frame to be registered against the reference frame picture to calculate the registration parameters of each frame to be registered includes:
choosing a transformation model and registration features; and
according to the chosen transformation model and registration features, performing feature registration on each frame to be registered against the reference frame picture to calculate the registration parameters of each frame to be registered.
Optionally, the step of extracting the subject region of each picture from the common region of the multiple pictures includes:
comparing, within the common region of the registered pictures, each registered picture against the reference frame picture to obtain a frame difference image of each registered picture;
binarizing the frame difference image to obtain a frame difference binary image;
extracting the intersection of all the frame difference binary images to obtain the subject region of the reference frame picture; and
extracting, according to the frame difference binary image of each registered picture, the connected region of that registered picture to obtain the subject region of each registered picture.
Optionally, the step of compositing the subject regions of all the pictures includes:
compositing the subject regions of all the registered pictures onto the subject region of the reference frame picture.
The picture synthesis method and device proposed by the embodiments of the present invention can automatically extract the subject regions from multiple pictures and composite them into one photo. The user only needs to shoot several photos of the subject in different positions in the same scene, and the terminal system automatically completes the multi-subject synthesis, saving a great deal of manual operation.
Detailed description of the invention
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for implementing the embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a functional block diagram of the first embodiment of the picture synthesis device of the present invention;
Fig. 4 is a schematic structural diagram of the picture registration module in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the subject extraction module in an embodiment of the present invention;
Fig. 6 is a schematic diagram of a picture synthesis result of an embodiment of the present invention;
Fig. 7 is a functional block diagram of the second embodiment of the picture synthesis device of the present invention;
Fig. 8 is a schematic flowchart of a preferred embodiment of the picture synthesis method of the present invention.
The realization of the object, the functions, and the advantages of the present invention will be further described with reference to the accompanying drawings and embodiments.
Specific embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
The terminal device involved in the embodiments of the present invention mainly refers to a mobile terminal.
The mobile terminals of the embodiments of the present invention will now be described with reference to the drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are only intended to facilitate the description of the invention and have no specific meaning in themselves; therefore, "module" and "component" may be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. Hereinafter it is assumed that the terminal is a mobile terminal; however, those skilled in the art will understand that, except for elements used specifically for mobile purposes, the configurations according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for implementing the embodiments of the present invention.
Mobile terminal 100 may include wireless communication unit 110, A/V (audio/video) input unit 120, user's input
Unit 130, sensing unit 140, output unit 150, memory 160, interface unit 170, controller 180 and power supply unit 190
Etc..Fig. 1 shows the mobile terminal with various assemblies, it should be understood that being not required for implementing all groups shown
Part.More or fewer components can alternatively be implemented.The element of mobile terminal will be discussed in more detail below.
Wireless communication unit 110 generally includes one or more components, allows mobile terminal 100 and wireless communication system
Or the radio communication between network.For example, wireless communication unit may include broadcasting reception module 111, mobile communication module
112, at least one of wireless Internet module 113, short range communication module 114 and location information module 115.
Broadcasting reception module 111 receives broadcast singal and/or broadcast from external broadcast management server via broadcast channel
Relevant information.Broadcast channel may include satellite channel and/or terrestrial channel.Broadcast management server, which can be, to be generated and sent
The broadcast singal and/or broadcast related information generated before the server or reception of broadcast singal and/or broadcast related information
And send it to the server of terminal.Broadcast singal may include TV broadcast singal, radio signals, data broadcasting
Signal etc..Moreover, broadcast singal may further include the broadcast singal combined with TV or radio signals.Broadcast phase
Closing information can also provide via mobile communications network, and in this case, broadcast related information can be by mobile communication mould
Block 112 receives.Broadcast singal can exist in a variety of manners, for example, it can be with the electronics of digital multimedia broadcasting (DMB)
Program guide (EPG), digital video broadcast-handheld (DVB-H) electronic service guidebooks (ESG) etc. form and exist.Broadcast
Receiving module 111 can receive signal broadcast by using various types of broadcast systems.Particularly, broadcasting reception module 111
It can be wide by using such as multimedia broadcasting-ground (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video
It broadcasts-holds (DVB-H), forward link media (MediaFLO@) Radio Data System, received terrestrial digital broadcasting integrated service
(ISDB-T) etc. digit broadcasting system receives digital broadcasting.Broadcasting reception module 111, which may be constructed such that, to be adapted to provide for extensively
Broadcast the various broadcast systems and above-mentioned digit broadcasting system of signal.Via the received broadcast singal of broadcasting reception module 111 and/
Or broadcast related information can store in memory 160 (or other types of storage medium).
Mobile communication module 112 sends radio signals to base station (for example, access point, node B etc.), exterior terminal
And at least one of server and/or receive from it radio signal.Such radio signal may include that voice is logical
Talk about signal, video calling signal or according to text and/or Multimedia Message transmission and/or received various types of data.
The wireless Internet module 113 supports wireless Internet access for the mobile terminal. The module may be internally or externally coupled to the terminal. The wireless Internet access technologies involved in the module may include WLAN (wireless LAN, Wi-Fi), Wibro (wireless broadband), Wimax (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and so on.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technology include Bluetooth™, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra wideband (UWB), ZigBee™, and so on.
The location information module 115 is a module for checking or obtaining the location information of the mobile terminal. A typical example of the location information module is a GPS (global positioning system) module. According to current technology, the GPS module calculates distance information from three or more satellites together with accurate time information, and applies triangulation to the calculated information so as to accurately calculate three-dimensional current location information in terms of longitude, latitude, and altitude. A current method of calculating position and time information uses three satellites and corrects the error of the calculated position and time information by using another satellite. In addition, the GPS module 115 can calculate speed information by continuously calculating the current location in real time.
The A/V input unit 120 is for receiving audio or video signals. The A/V input unit 120 may include a camera 121, which processes the image data of still pictures or video obtained by an image capture apparatus in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the construction of the mobile terminal.
The order that user input unit 130 can be inputted according to user generates key input data to control each of mobile terminal
Kind operation.User input unit 130 allows user to input various types of information, and may include keyboard, metal dome, touch
Plate (for example, the sensitive component of detection due to the variation of resistance, pressure, capacitor etc. caused by being contacted), idler wheel, rocking bar etc.
Deng.Particularly, when touch tablet is superimposed upon in the form of layer on display unit 151, touch screen can be formed.
Sensing unit 140 detects the current state of mobile terminal 100, (for example, mobile terminal 100 opens or closes shape
State), the position of mobile terminal 100, user is for the presence or absence of contact (that is, touch input) of mobile terminal 100, mobile terminal
100 orientation, the acceleration or deceleration movement of mobile terminal 100 and direction etc., and generate for controlling mobile terminal 100
The order of operation or signal.For example, sensing unit 140 can sense when mobile terminal 100 is embodied as sliding-type mobile phone
The sliding-type phone is to open or close.In addition, sensing unit 140 be able to detect power supply unit 190 whether provide electric power or
Whether person's interface unit 170 couples with external device (ED).Interface unit 170 is used as at least one external device (ED) and mobile terminal 100
Connection can by interface.For example, external device (ED) may include wired or wireless headphone port, external power supply (or
Battery charger) port, wired or wireless data port, memory card port, the end for connecting the device with identification module
Mouth, the port audio input/output (I/O), video i/o port, ear port etc..Identification module can be storage for verifying
User using mobile terminal 100 various information and may include subscriber identification module (UIM), client identification module (SIM),
Universal Subscriber identification module (USIM) etc..In addition, the device (hereinafter referred to as " identification device ") with identification module can be adopted
The form of smart card is taken, therefore, identification device can be connect via port or other attachment devices with mobile terminal 100.Interface
Unit 170 can be used for receiving the input (for example, data information, electric power etc.) from external device (ED) and defeated by what is received
Enter to be transferred to one or more elements in mobile terminal 100 or can be used for transmitting between mobile terminal and external device (ED)
Data.
In addition, when mobile terminal 100 is connect with external base, interface unit 170 may be used as allowing will be electric by it
Power, which is provided from pedestal to the path or may be used as of mobile terminal 100, allows the various command signals inputted from pedestal to pass through it
It is transferred to the path of mobile terminal.The various command signals or electric power inputted from pedestal, which may be used as mobile terminal for identification, is
The no signal being accurately fitted on pedestal.
Output unit 150 is configured to provide output signal with vision, audio and/or tactile manner (for example, audio is believed
Number, vision signal, alarm signal, vibration signal etc.).Output unit 150 may include display unit 151 etc..
Display unit 151 may be displayed on the information handled in mobile terminal 100.For example, when mobile terminal 100 is in electricity
When talking about call mode, display unit 151 can show and converse or other communicate (for example, text messaging, multimedia file
Downloading etc.) relevant user interface (UI) or graphic user interface (GUI).When mobile terminal 100 is in video calling mode
Or when image capture mode, display unit 151 can show captured image and/or received image, show video or figure
Picture and the UI or GUI of correlation function etc..
Meanwhile when display unit 151 and touch tablet in the form of layer it is superposed on one another to form touch screen when, display unit
151 may be used as input unit and output device.Display unit 151 may include liquid crystal display (LCD), thin film transistor (TFT)
In LCD (TFT-LCD), Organic Light Emitting Diode (OLED) display, flexible display, three-dimensional (3D) display etc. at least
It is a kind of.Some in these displays may be constructed such that transparence to allow user to watch from outside, this is properly termed as transparent
Display, typical transparent display can be, for example, TOLED (transparent organic light emitting diode) display etc..According to specific
Desired embodiment, mobile terminal 100 may include two or more display units (or other display devices), for example, moving
Dynamic terminal may include outernal display unit (not shown) and inner display unit (not shown).Touch screen can be used for detecting touch
Input pressure and touch input position and touch input area.
The memory 160 may store software programs for the processing and control operations executed by the controller 180, or may temporarily store data that has been output or is to be output (for example, a phone book, messages, still images, video, etc.). Moreover, the memory 160 may store data about the vibrations of various modes and the audio signals output when a touch is applied to the touch screen.
Memory 160 may include the storage medium of at least one type, and the storage medium includes flash memory, hard disk, more
Media card, card-type memory (for example, SD or DX memory etc.), random access storage device (RAM), static random-access storage
Device (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read only memory
(PROM), magnetic storage, disk, CD etc..Moreover, mobile terminal 100 can execute memory with by network connection
The network storage device of 160 store function cooperates.
The overall operation of the usually control mobile terminal of controller 180.For example, controller 180 executes and voice communication, data
Communication, video calling etc. relevant control and processing.In addition, controller 180 may include for reproducing (or playback) more matchmakers
The multi-media module 1810 of volume data, multi-media module 1810 can construct in controller 180, or can be structured as and control
Device 180 processed separates.Controller 180 can be with execution pattern identifying processing, by the handwriting input executed on the touchscreen or figure
Piece draws input and is identified as character or picture.
Power supply unit 190 receives external power or internal power under the control of controller 180 and provides operation each member
Electric power appropriate needed for part and component.
Various embodiments described herein can be to use the calculating of such as computer software, hardware or any combination thereof
Machine readable medium is implemented.Hardware is implemented, embodiment described herein can be by using application-specific IC
(ASIC), digital signal processor (DSP), digital signal processing device (DSPD), programmable logic device (PLD), scene can
Programming gate array (FPGA), controller, microcontroller, microprocessor, is designed to execute function described herein processor
At least one of electronic unit is implemented, and in some cases, such embodiment can be implemented in controller 180.
For software implementation, the embodiment of such as process or function can with allow to execute the individual of at least one functions or operations
Software module is implemented.Software code can by the software application (or program) write with any programming language appropriate Lai
Implement, software code can store in memory 160 and be executed by controller 180.
So far, the mobile terminal has been described in terms of its functions. In the following, for the sake of brevity, a slide-type mobile terminal will be taken as an example among the various types of mobile terminals such as folder-type, bar-type, swing-type, and slide-type mobile terminals. However, the present invention can be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
Mobile terminal 100 as shown in Figure 1 may be constructed such that using via frame or grouping send data it is all if any
Line and wireless communication system and satellite-based communication system operate.
Referring now to Fig. 2 description communication system that wherein mobile terminal according to the present invention can operate.
Different air interface and/or physical layer can be used in such communication system.For example, used by communication system
Air interface includes such as frequency division multiple access (FDMA), time division multiple acess (TDMA), CDMA (CDMA) and universal mobile communications system
System (UMTS) (particularly, long term evolution (LTE)), global system for mobile communications (GSM) etc..As non-limiting example, under
The description in face is related to cdma communication system, but such introduction is equally applicable to other types of system.
With reference to Fig. 2, cdma wireless communication system may include multiple mobile terminals 100, multiple base stations (BS) 270, base station
Controller (BSC) 275 and mobile switching centre (MSC) 280.MSC280 is configured to and Public Switched Telephony Network (PSTN)
290 form interface.MSC280 is also structured to form interface with the BSC275 that can be couple to base station 270 via back haul link.
Back haul link can be constructed according to any in several known interfaces, and the interface includes such as E1/T1, ATM, IP,
PPP, frame relay, HDSL, ADSL or xDSL.It will be appreciated that system may include multiple BSC2750 as shown in Figure 2.
Each BS270 can service one or more subregions (or region), by multidirectional antenna or the day of direction specific direction
Each subregion of line covering is radially far from BS270.Alternatively, each subregion can be by two or more for diversity reception
Antenna covering.Each BS270, which may be constructed such that, supports multiple frequency distribution, and the distribution of each frequency has specific frequency spectrum
(for example, 1.25MHz, 5MHz etc.).
What subregion and frequency were distributed, which intersects, can be referred to as CDMA Channel.BS270 can also be referred to as base station transceiver
System (BTS) or other equivalent terms.In this case, term " base station " can be used for broadly indicating single
BSC275 and at least one BS270.Base station can also be referred to as " cellular station ".Alternatively, each subregion of specific BS270 can be claimed
For multiple cellular stations.
As shown in Figure 2, broadcast singal is sent to the mobile terminal operated in system by broadcsting transmitter (BT) 295
100.Broadcasting reception module 111 as shown in Figure 1 is arranged at mobile terminal 100 to receive the broadcast sent by BT295
Signal.In fig. 2 it is shown that several global positioning system (GPS) satellites 300.The help of satellite 300 positions multiple mobile terminals
At least one of 100.
In Fig. 2, multiple satellites 300 are depicted, it is understood that, it is useful to can use any number of satellite acquisition
Location information.GPS module as shown in Figure 1 is generally configured to cooperate with satellite 300 to obtain desired location information.
It substitutes GPS tracking technique or except GPS tracking technique, the other skills for the position that can track mobile terminal can be used
Art.In addition, at least one 300 property of can choose of GPS satellite or extraly processing satellite dmb transmission.
As a typical operation of wireless communication system, BS270 receives the reverse link from various mobile terminals 100
Signal.Mobile terminal 100 usually participates in call, information receiving and transmitting and other types of communication.Certain base station 270 is received each anti-
It is handled in specific BS270 to link signal.The data of acquisition are forwarded to relevant BSC275.BSC provides call
The mobile management function of resource allocation and the coordination including the soft switching process between BS270.The number that BSC275 will also be received
According to MSC280 is routed to, the additional route service for forming interface with PSTN290 is provided.Similarly, PSTN290 with
MSC280 forms interface, and MSC and BSC275 form interface, and BSC275 controls BS270 correspondingly with by forward link signals
It is sent to mobile terminal 100.
Based on the above mobile terminal hardware structure and communication system, the embodiments of the present invention are proposed.
Existing clone camera software requires the photographer to manually select the subject region in each photo after shooting in order to composite the photos, which makes the operation cumbersome and time-consuming.
For this purpose, the present invention proposes a solution that effectively composites the subject regions of multiple pictures automatically and simplifies the user's operation.
Specifically, as shown in Fig. 3, the first embodiment of the present invention proposes a picture synthesis device, comprising: a picture acquisition module 201, a picture registration module 202, a subject extraction module 203, and a picture synthesis module 204, wherein:
the picture acquisition module 201 is for obtaining multiple pictures;
the picture registration module 202 is for performing feature registration on the multiple pictures to obtain the common region of the multiple pictures;
the subject extraction module 203 is for extracting the subject region of each picture from the common region of the multiple pictures; and
the picture synthesis module 204 is for compositing the subject regions of all the pictures.
Specifically, the picture synthesis device of this embodiment may be provided on a mobile terminal such as the above mobile phone, and the device automatically composites the subject regions of multiple pictures, thereby simplifying the user's operation.
First, multiple pictures are obtained by the picture acquisition module 201. The multiple pictures may be pictures shot in the same scene or, of course, pictures shot in different scenes; the subject person in them may be the same person, different people, several people, or the pictures may be pure scenery.
The following description takes pictures shot in the same scene as an example. For instance, the picture acquisition module feeds into the picture synthesis device three or more photos that the user shoots in the same scene, denoted I_1, I_2, ..., I_n, n ≥ 3.
Then, the picture registration module 202 aligns the input pictures to the same background and obtains the common region of the multiple pictures.
A specific implementation is as follows:
First, one of the multiple pictures is chosen as the reference frame picture, and the other pictures serve as frames to be registered;
then, feature registration is performed on each frame to be registered against the reference frame picture, and the registration parameters of each frame to be registered are calculated;
after that, each frame to be registered is transformed according to its registration parameters so that it matches the reference frame picture, giving a registered picture;
finally, the intersection of the regions of all the registered pictures is extracted to obtain the common region of all the registered pictures.
In a specific application, as shown in Fig. 4, the picture registration module 202 may include: a reference frame selection unit 2021, a registration parameter calculation unit 2022, a picture transformation unit 2023, and a common region extraction unit 2024, wherein:
the reference frame selection unit 2021 is for choosing one of the multiple pictures as the reference frame picture, the other pictures serving as frames to be registered;
the registration parameter calculation unit 2022 is for performing feature registration on each frame to be registered against the reference frame picture to calculate the registration parameters of each frame to be registered;
the picture transformation unit 2023 is for transforming each frame to be registered according to its registration parameters so that it matches the reference frame picture, obtaining a registered picture; and
the common region extraction unit 2024 is for extracting the intersection of the regions of all the registered pictures to obtain the common region of all the registered pictures.
Based on the above structure, the specific working process of the picture registration module 202 is as follows:
Step 21: the reference frame selection unit 2021 selects one frame as the reference frame from the n photos input by the picture acquisition module. The reference frame may be chosen randomly or deterministically; deterministic selection includes, but is not limited to, the following:
1. Select the first frame (the first photo) as the reference frame.
2. Select the last frame as the reference frame.
3. Evaluate the sharpness of the n photos and select the sharpest one as the reference frame.
A sharpness evaluation algorithm may take the second derivative of the picture along the x and y directions and accumulate the absolute values of the derivatives: the larger the accumulated sum, the sharper the picture.
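By way of illustration only, a minimal sketch of such a sharpness score is given below, assuming OpenCV and NumPy are available; the function name and file names are hypothetical.

```python
import cv2
import numpy as np

def sharpness(image_path):
    """Return a sharpness score: the accumulated absolute second derivative along x and y."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    dxx = cv2.Sobel(gray, cv2.CV_64F, 2, 0)   # second derivative along x
    dyy = cv2.Sobel(gray, cv2.CV_64F, 0, 2)   # second derivative along y
    return np.abs(dxx).sum() + np.abs(dyy).sum()

# Choose the sharpest photo as the reference frame.
photos = ["I1.jpg", "I2.jpg", "I3.jpg"]        # hypothetical file names
reference = max(photos, key=sharpness)
```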
Thus, after selection, the n photos input by the picture acquisition module yield a reference frame F_r and the other frames F_1, F_2, ..., F_{n-1}.
Step 22: the registration parameter calculation unit registers each of the other frames to the reference frame and calculates the registration parameters.
The registration algorithm may be implemented in many ways; examples are given below.
First, a transformation model is selected as the hypothesis. For example, a global transformation of the picture may be selected, such as a geometric transformation, a similarity transformation, an affine transformation, or a projective transformation; alternatively, a local transformation may divide the picture into different parts and calculate separate registration parameters for each part.
Then, the registration features are selected; candidate features include feature points, cross-correlation, mutual information, and so on.
For feature points, the registration parameters are obtained by extracting several feature points from the reference frame F_r, extracting or searching for the corresponding feature points in the frame to be registered F_i, and solving for the registration parameters using the positions of the feature points as data.
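By way of illustration of the feature-point option only, the following sketch assumes OpenCV's ORB detector and a projective (homography) transformation model; the description above does not prescribe a particular detector or model.

```python
import cv2
import numpy as np

def estimate_homography(ref_gray, frame_gray):
    """Estimate registration parameters (a 3x3 homography) from matched feature points."""
    orb = cv2.ORB_create(2000)
    kp_r, des_r = orb.detectAndCompute(ref_gray, None)
    kp_i, des_i = orb.detectAndCompute(frame_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_i, des_r), key=lambda m: m.distance)[:200]
    src = np.float32([kp_i[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects mismatched points (including points on the moving subject).
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```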
For cross-correlation, the registration parameters are obtained by transforming the pictures to the frequency domain with the Fourier transform, computing with the cross-correlation formula the correlation of the frame to be registered F_i at each position in the spatial domain, and taking the position of the maximum as the registration result.
For mutual information, the registration parameters are obtained as follows: mutual information is an evaluation of picture similarity; an optimization algorithm (such as gradient descent) searches the registration parameter space for the parameters at which the mutual information reaches its extremum, and these are the parameters that best register the frame to be registered F_i to the reference frame F_r.
Step 23: after the registration parameters have been calculated, the picture transformation unit transforms each frame to be registered F_i, i = 1, ..., n-1, so that it matches the reference picture, giving the registered pictures W_1, W_2, ..., W_{n-1}.
Step 24: the common region extraction unit calculates the common region of all the transformed pictures from the registered pictures W_1, W_2, ..., W_{n-1}.
The common region is the intersection of the regions of these registered pictures. After the common region is obtained, the subsequent extraction and synthesis operate only on the common region part of all the photos.
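A minimal sketch of steps 23 and 24 under the same assumptions (homography model, OpenCV) is given below; warping a mask of ones is one way to track which pixels of each registered picture are valid, and the common region is then the intersection of those valid areas. The helper name is illustrative.

```python
import cv2
import numpy as np

def register_and_intersect(ref_bgr, frames_bgr, homographies):
    """Warp each frame to the reference and intersect the valid areas into the common region."""
    h, w = ref_bgr.shape[:2]
    registered, common = [], np.ones((h, w), np.uint8)
    for frame, H in zip(frames_bgr, homographies):
        warped = cv2.warpPerspective(frame, H, (w, h))
        valid = cv2.warpPerspective(np.ones(frame.shape[:2], np.uint8), H, (w, h))
        registered.append(warped)
        common &= valid            # common region = intersection of all valid areas
    return registered, common      # common is 1 inside the shared background, 0 elsewhere
```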
Next, the subject region of each picture is extracted from the common region of the multiple pictures.
A specific implementation process is as follows:
First, within the common region of the registered pictures, each registered picture is compared against the reference frame picture to obtain a frame difference image;
then, the frame difference image is binarized to obtain a frame difference binary image;
after that, the intersection of all the frame difference binary images is extracted to obtain the subject region of the reference frame picture;
finally, for each registered picture, the connected region is extracted according to its frame difference binary image to obtain the subject region of that registered picture.
In a specific application, as shown in Fig. 5, the subject extraction module 203 includes: a frame difference unit 2031, a reference frame subject extraction unit 2032, and a connected region extraction unit 2033, wherein:
the frame difference unit 2031 is for comparing, within the common region of the registered pictures, each registered picture against the reference frame picture to obtain a frame difference image, and binarizing the frame difference image to obtain a frame difference binary image;
the reference frame subject extraction unit 2032 is for extracting the intersection of all the frame difference binary images to obtain the subject region of the reference frame picture; and
the connected region extraction unit 2033 is for extracting, according to the frame difference binary image of each registered picture, the connected region of that registered picture to obtain the subject region of each registered picture.
Based on the above structure of the subject extraction module 203, the specific working process is as follows:
Step 31: the frame difference unit compares each registered picture against the reference frame. For example, the color difference between a registered picture W_i and the reference frame F_r is calculated to obtain a frame difference image; the calculation can be expressed as:
DIFF_i(x, y) = abs(W_i(x, y) - F_r(x, y));
where DIFF_i(x, y) denotes the pixel value at coordinates (x, y) of the frame difference image. The magnitude of a pixel value in the frame difference image indicates the magnitude of the color difference between the registered picture and the reference frame.
Then, the frame difference image is binarized to obtain the frame difference binary image; the calculation is:
T_i(x, y) = 1 if DIFF_i(x, y) > θ, otherwise T_i(x, y) = 0;
where θ is a preset threshold and T_i(x, y) denotes the pixel value at (x, y) of the frame difference binary image: 1 indicates that the registered picture differs from the reference frame at that pixel, and 0 indicates no difference.
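A minimal sketch of step 31 under the same assumptions (OpenCV/NumPy) is given below; the threshold theta and the per-channel summation of the color difference are illustrative choices rather than values prescribed by the text.

```python
import cv2
import numpy as np

def frame_diff_binary(registered_bgr, ref_bgr, common_mask, theta=30):
    """DIFF_i = |W_i - F_r| summed over channels; T_i = 1 where the difference exceeds theta."""
    diff = cv2.absdiff(registered_bgr, ref_bgr).sum(axis=2)   # color difference per pixel
    binary = (diff > theta).astype(np.uint8)
    return binary & common_mask                               # only the common region matters
```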
Step 32: the reference frame subject extraction unit obtains the subject region of the reference frame F_r from the frame difference binary images T_1, T_2, ..., T_{n-1}, for example (writing the reference frame subject mask as M_r) as:
M_r(x, y) = T_1(x, y) · T_2(x, y) · ... · T_{n-1}(x, y);
because the subject is in a different position in every photo, the subject region of the reference frame necessarily differs from every other frame when the frame differences are taken, so the intersection of all the frame difference binary images is taken here as the reference frame subject.
Step 33: the connected region extraction unit is responsible for extracting the subject region of every frame other than the reference frame, obtaining the connected regions.
First, the subject region of the reference frame is removed from the frame difference binary images T_1, T_2, ..., T_{n-1}, for example as:
T'_i(x, y) = T_i(x, y) · (1 - M_r(x, y));
the purpose is that, after this processing, the frame difference binary images T'_1, T'_2, ..., T'_{n-1} retain only the subject part of the registered pictures.
Then, the binary images T'_1, T'_2, ..., T'_{n-1} are labeled: adjacent pixels with value 1 are labeled as one region, and each region gets its own number to distinguish it from the other regions.
Connected region labeling algorithms include the two-pass algorithm, seeded region growing, and so on. Labeling yields a label map L_i, in which the value of each pixel indicates which connected region of T'_i that pixel belongs to; for example, if pixel (x, y) is in the j-th region of T'_i, then L_i(x, y) = j.
Next, a template (mask) image is generated for each connected region, for example as:
Mask_ij(x, y) = 1 if L_i(x, y) = j, otherwise Mask_ij(x, y) = 0.
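A minimal sketch of steps 32 and 33 combined is given below, assuming the binary maps from the previous sketch; OpenCV's connected-component labeling stands in for the two-pass or region-growing labeling described above.

```python
import cv2
import numpy as np

def subject_masks(binary_maps):
    """Return the reference-frame subject mask and, per registered picture, one mask per connected region."""
    # Step 32: reference frame subject = intersection of all frame difference binary maps.
    ref_subject = np.minimum.reduce(binary_maps)
    masks = []
    for T in binary_maps:
        # Step 33: remove the reference subject, then label the remaining connected regions.
        T_prime = (T * (1 - ref_subject)).astype(np.uint8)
        count, labels = cv2.connectedComponents(T_prime)
        masks.append([(labels == j).astype(np.uint8) for j in range(1, count)])
    return ref_subject, masks
```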
Finally, the picture synthesis module 204 composites the subject regions of all the registered pictures onto the subject region of the reference frame picture.
Specifically, having obtained the subject of every frame, the picture synthesis module 204 composites the subject regions of the registered pictures W_1, W_2, ..., W_{n-1} into the reference picture.
The synthesis order may be W_1, W_2, ..., W_{n-1}, or W_{n-1}, W_{n-2}, ..., W_1. Different synthesis orders give different occlusion relationships between the subjects, because a subject composited later covers a subject composited earlier at the same position.
Let the composite picture be I_fusion, initialized to F_r. Assume the subject template currently to be composited is Mask_ij; then after this compositing step,
I_fusion(x, y) = Mask_ij(x, y) · W_i(x, y) + (1 - Mask_ij(x, y)) · I_fusion(x, y);
after the subjects of all the frames have been composited in turn, the composite picture I_fusion is obtained.
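A minimal sketch of this compositing loop is given below, assuming the per-region masks from the previous sketch; broadcasting the mask over the three color channels is handled explicitly.

```python
import numpy as np

def composite(reference_bgr, registered_bgr, masks):
    """I_fusion starts as F_r; each subject mask pastes the corresponding W_i pixels over it."""
    fusion = reference_bgr.astype(np.float32)
    for W, region_masks in zip(registered_bgr, masks):
        for mask in region_masks:
            m = mask[:, :, None].astype(np.float32)     # broadcast mask to 3 channels
            fusion = m * W.astype(np.float32) + (1.0 - m) * fusion
    return fusion.astype(np.uint8)
```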
Subsequently, according to the user's needs, the composite picture is compressed, saved, displayed, or sent to the network.
An example based on this embodiment is given below.
As shown in Fig. 6a, Fig. 6b, and Fig. 6c, three pictures are shot in the same scene; after picture synthesis by the above scheme, the composite result shown in Fig. 6d is obtained.
Through the above scheme, this embodiment automatically finds the subject regions in multiple pictures and composites them onto one photo, so that the user only needs to shoot several photos of the subject in different positions in the same scene; the device automatically completes the multi-subject synthesis, saving a great deal of manual operation.
As shown in Fig. 7, the second embodiment of the present invention proposes a picture synthesis device. Based on the embodiment shown in Fig. 3 above, the device further includes:
a picture output module 205, for processing the composite picture and/or sending it out.
According to the user's needs, the composite picture is compressed, saved, displayed, or sent to the network.
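A minimal sketch of such an output step is given below, assuming OpenCV for JPEG compression; the file name and quality value are illustrative, and the encoded buffer could equally be displayed or transmitted over the network.

```python
import cv2

def output_composite(fusion_bgr, path="fusion.jpg", quality=90):
    """Compress and save the composite picture; the encoded bytes may also be sent out."""
    ok, encoded = cv2.imencode(".jpg", fusion_bgr, [cv2.IMWRITE_JPEG_QUALITY, quality])
    if ok:
        with open(path, "wb") as f:
            f.write(encoded.tobytes())
    return encoded   # JPEG byte buffer, ready for display or transmission
```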
Through the above scheme, this embodiment automatically finds the subject regions in multiple pictures and composites them onto one photo, so that the user only needs to shoot several photos of the subject in different positions in the same scene; the device automatically completes the multi-subject synthesis, saving a great deal of manual operation.
Correspondingly, embodiments of the picture synthesis method of the present invention are proposed.
As shown in Fig. 8, a preferred embodiment of the present invention proposes a picture synthesis method, comprising:
Step S101: obtain multiple pictures.
The picture synthesis method of this embodiment may be executed by a picture synthesis device, which may be provided on a mobile terminal such as the above mobile phone; the device automatically composites the subject regions of multiple pictures, thereby simplifying the user's operation.
First, multiple pictures are obtained. The multiple pictures may be pictures shot in the same scene or, of course, pictures shot in different scenes; the subject person in them may be the same person, different people, several people, or the pictures may be pure scenery.
The following description takes pictures shot in the same scene as an example. For instance, the picture acquisition module feeds into the picture synthesis device three or more photos that the user shoots in the same scene, denoted I_1, I_2, ..., I_n, n ≥ 3.
Step S102: perform feature registration on the multiple pictures to obtain the common region of the multiple pictures.
Then, the picture registration module aligns the input pictures to the same background and obtains the common region of the multiple pictures.
A specific implementation is as follows:
one of the multiple pictures is chosen as the reference frame picture, and the other pictures serve as frames to be registered;
feature registration is performed on each frame to be registered against the reference frame picture, and the registration parameters of each frame to be registered are calculated;
each frame to be registered is transformed according to its registration parameters so that it matches the reference frame picture, giving a registered picture;
the intersection of the regions of all the registered pictures is extracted to obtain the common region of all the registered pictures.
An example implementation process in a specific application is as follows:
Step 21: the reference frame selection unit selects one frame as the reference frame from the n photos input by the picture acquisition module. The reference frame may be chosen randomly or deterministically; deterministic selection includes, but is not limited to, the following:
1. Select the first frame (the first photo) as the reference frame.
2. Select the last frame as the reference frame.
3. Evaluate the sharpness of the n photos and select the sharpest one as the reference frame.
A sharpness evaluation algorithm may take the second derivative of the picture along the x and y directions and accumulate the absolute values of the derivatives: the larger the accumulated sum, the sharper the picture.
Thus, after selection, the n photos input by the picture acquisition module yield a reference frame F_r and the other frames F_1, F_2, ..., F_{n-1}.
Step 22: the registration parameter calculation unit registers each of the other frames to the reference frame and calculates the registration parameters.
The registration algorithm may be implemented in many ways; examples are given below.
First, a transformation model is selected as the hypothesis. For example, a global transformation of the picture may be selected, such as a geometric transformation, a similarity transformation, an affine transformation, or a projective transformation; alternatively, a local transformation may divide the picture into different parts and calculate separate registration parameters for each part.
Then, the registration features are selected; candidate features include feature points, cross-correlation, mutual information, and so on.
For feature points, the registration parameters are obtained by extracting several feature points from the reference frame F_r, extracting or searching for the corresponding feature points in the frame to be registered F_i, and solving for the registration parameters using the positions of the feature points as data.
For cross-correlation, the registration parameters are obtained by transforming the pictures to the frequency domain with the Fourier transform, computing with the cross-correlation formula the correlation of the frame to be registered F_i at each position in the spatial domain, and taking the position of the maximum as the registration result.
For mutual information, the registration parameters are obtained as follows: mutual information is an evaluation of picture similarity; an optimization algorithm (such as gradient descent) searches the registration parameter space for the parameters at which the mutual information reaches its extremum, and these are the parameters that best register the frame to be registered F_i to the reference frame F_r.
Step 23: after the registration parameters have been calculated, the picture transformation unit transforms each frame to be registered F_i, i = 1, ..., n-1, so that it matches the reference picture, giving the registered pictures W_1, W_2, ..., W_{n-1}.
Step 24: the common region extraction unit calculates the common region of all the transformed pictures from the registered pictures W_1, W_2, ..., W_{n-1}.
The common region is the intersection of the regions of these registered pictures. After the common region is obtained, the subsequent extraction and synthesis operate only on the common region part of all the photos.
Step S103: extract the subject region of each picture from the common region of the multiple pictures.
Next, the subject region of each picture is extracted from the common region of the multiple pictures.
A specific implementation process is as follows:
First, within the common region of the registered pictures, each registered picture is compared against the reference frame picture to obtain a frame difference image;
then, the frame difference image is binarized to obtain a frame difference binary image;
after that, the intersection of all the frame difference binary images is extracted to obtain the subject region of the reference frame picture;
finally, for each registered picture, the connected region is extracted according to its frame difference binary image to obtain the subject region of that registered picture.
In a specific application, the subject extraction module may be responsible for extracting the subject part of each frame from the registered pictures; the structural block diagram of this module is shown in Fig. 5.
An example implementation process for a specific application is as follows:
Step 31, the frame difference unit compares each registered picture with the reference frame. For example, the color difference between the registered picture Wi and the reference frame Fr is calculated to obtain a frame difference image, and the calculation formula can be expressed as follows:
DIFFi(x, y) = abs(Wi(x, y) - Fr(x, y));
where DIFFi(x, y) is the pixel value of the frame difference image at coordinate (x, y). The magnitude of a pixel value in the frame difference image indicates the magnitude of the color difference between the registered picture and the reference frame at that position.
Then, the frame difference image is binarized to obtain the frame difference binary map: with a preset threshold θ, Ti(x, y) = 1 when DIFFi(x, y) exceeds θ, and Ti(x, y) = 0 otherwise. Here Ti(x, y) denotes the pixel value of the frame difference binary map at coordinate (x, y); a value of 1 indicates that the registered picture differs from the reference frame at that position, and 0 indicates no difference.
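A minimal sketch of Step 31 and the binarization could be as follows; grayscale pictures and the threshold value are illustrative assumptions:

```python
# Minimal sketch of the frame difference against the reference frame and its
# binarization with a preset threshold θ.
import numpy as np

def frame_difference_binary(ref_gray, registered_gray, theta=25):
    diff = np.abs(registered_gray.astype(np.int16) - ref_gray.astype(np.int16))
    # 1 where the registered picture differs from the reference frame, 0 otherwise.
    return (diff > theta).astype(np.uint8)
```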
Step 32, the reference frame body extraction unit obtains the body region of the reference frame Fr from the frame difference binary maps T1, T2, ..., Tn-1.
Because the body position differs in all the photos, when the reference frame is differenced against the other frames, the body region of the reference frame certainly differs from every other frame. The intersection of all the frame difference binary images is therefore taken here as the reference frame body.
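A sketch of Step 32, taking the intersection (logical AND) of all frame difference binary maps, could be:

```python
# Sketch of Step 32: the reference frame body is the intersection of all
# frame difference binary maps T1 ... Tn-1.
import numpy as np

def reference_body(binary_maps):
    body = np.ones_like(binary_maps[0])
    for t in binary_maps:
        body &= t
    return body  # 1 inside the body region of the reference frame
```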
Step 33, the connected region extraction unit is responsible for extracting the body region of every other frame apart from the reference frame, obtaining connected regions.
First, the body region of the reference frame is removed from the frame difference binary maps T1, T2, ..., Tn-1, i.e. pixels belonging to the reference frame body are set to 0. The purpose is that the processed frame difference binary maps T′1, T′2, ..., T′n-1 retain only the body part belonging to the registered picture.
Then, the binary maps T′1, T′2, ..., T′n-1 are labeled: adjacent pixels with value 1 are labeled as one region, and each region is given an independent number to distinguish it from the other regions.
Algorithms for connected region labeling include the two-pass traversal method and the seed growing method. After labeling, a label map Li is obtained; the value of each pixel in Li indicates which connected region of T′i the pixel belongs to. For example, if pixel (x, y) lies in the j-th region of T′i, then Li(x, y) = j.
A template is then generated for each connected region: the template Maskij takes the value 1 at pixels where Li(x, y) = j and 0 elsewhere.
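A sketch of Step 33, assuming OpenCV's connected-component labeling as one possible two-pass style implementation, could be:

```python
# Sketch of Step 33, assuming OpenCV: remove the reference frame body from a
# frame difference binary map, label the remaining connected regions, and
# build one template (mask) per region.
import cv2
import numpy as np

def body_templates(binary_map, reference_body_mask):
    # T'i: keep only pixels that do not belong to the reference frame body.
    t_prime = binary_map.copy()
    t_prime[reference_body_mask == 1] = 0
    num_labels, label_map = cv2.connectedComponents(t_prime)
    masks = [(label_map == j).astype(np.uint8)
             for j in range(1, num_labels)]   # label 0 is the background
    return label_map, masks
```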
Step S104, the body regions of all the pictures are synthesized.
Finally, the body regions of all the registered pictures are synthesized to the body region of the reference frame picture.
Specifically, once the body of every frame has been obtained, the picture synthesis module synthesizes the body regions of the registered pictures W1, W2, ..., Wn-1 into the reference picture.
The synthesis order may be W1, W2, ..., Wn-1, or Wn-1, Wn-2, ..., W1. Different synthesis orders give different occlusion relationships between the bodies, because a body synthesized later covers a body synthesized earlier at the same position.
Let the composite picture be Ifusion, initialized to Fr, and let the current body template to be synthesized be Maskij; then after this synthesis step,
Ifusion(x, y) = Maskij(x, y) · Wi(x, y) + (1 - Maskij(x, y)) · Ifusion(x, y);
After the bodies of all the frames have been synthesized in turn, the composite picture Ifusion is obtained.
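A sketch of this compositing loop, directly following the formula above (color pictures and a broadcastable single-channel mask are assumed), could be:

```python
# Sketch of Step S104: composite each body region into the reference frame
# using its template mask, following the formula above.
import numpy as np

def composite_bodies(reference, registered_pics, templates_per_pic):
    fused = reference.astype(np.float32)            # Ifusion initialized to Fr
    for w, templates in zip(registered_pics, templates_per_pic):
        w = w.astype(np.float32)
        for mask in templates:
            m = mask[..., None].astype(np.float32)  # Maskij, broadcast over channels
            fused = m * w + (1.0 - m) * fused
    return fused.astype(np.uint8)
```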
Subsequently, according to the user's needs, the composite picture may be compressed, saved, displayed, or sent to the network.
Through the above scheme, the present embodiment automatically finds the body regions in multiple pictures and synthesizes them into one photo, so that the user only needs to shoot multiple photos of the same scene with the body in different positions; the apparatus automatically completes the synthesis of the multiple bodies, saving a large amount of manual operation.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further restrictions, an element qualified by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.
The serial numbers of the above embodiments of the invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, or alternatively by hardware, although in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes to the prior art, can be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and including instructions that cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The above is only a preferred embodiment of the present invention and is not intended to limit the scope of the invention; any equivalent structure or equivalent process transformation made using the contents of the specification and accompanying drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of protection of the present invention.
Claims (8)
1. A picture synthesis apparatus, characterized by comprising:
a picture acquisition module, configured to acquire a plurality of pictures;
a picture registration module, configured to perform feature registration on the plurality of pictures to obtain the public domain of the plurality of pictures;
a body extraction module, configured to extract the body region of each picture from the public domain of the plurality of pictures;
a picture synthesis module, configured to synthesize the body regions of all the pictures;
wherein the body extraction module comprises:
a frame difference unit, configured to, based on the public domain of the registered pictures, compare each registered picture with the reference frame picture to obtain the frame difference image of each registered picture, and to binarize the frame difference image to obtain a frame difference binary image;
a reference frame body extraction unit, configured to extract the intersection of all the frame difference binary images to obtain the body region of the reference frame picture;
a connected region extraction unit, configured to extract, according to the frame difference binary image of each registered picture, the corresponding connected regions of each registered picture to obtain the body region of each registered picture.
2. The apparatus according to claim 1, wherein the picture registration module comprises:
a reference frame selection unit, configured to select one picture from the plurality of pictures as the reference frame picture, with the other pictures as frame pictures to be registered;
a registration parameter computing unit, configured to perform feature registration on each frame picture to be registered with respect to the reference frame picture, and calculate the registration parameters of each frame picture to be registered;
a picture conversion unit, configured to transform, according to the registration parameters, the frame picture to be registered corresponding to each registration parameter so that it matches the reference frame picture, obtaining a registered picture;
a public domain extraction unit, configured to extract the intersection of the regions of all the registered pictures to obtain the public domain of all the registered pictures.
3. The apparatus according to claim 2, wherein
the registration parameter computing unit is further configured to select a transformation model and registration features, and, according to the selected transformation model and registration features, perform feature registration on each frame picture to be registered with respect to the reference frame picture to calculate the registration parameters of each frame picture to be registered.
4. The apparatus according to claim 1, wherein
the picture synthesis module is further configured to synthesize the body regions of all the registered pictures to the body region of the reference frame picture.
5. The apparatus according to any one of claims 1 to 4, wherein the apparatus further comprises:
a picture output module, configured to process the composite picture and/or send it externally.
6. A picture synthesis method, characterized by comprising:
acquiring a plurality of pictures;
performing feature registration on the plurality of pictures to obtain the public domain of the plurality of pictures;
extracting the body region of each picture respectively from the public domain of the plurality of pictures;
synthesizing the body regions of all the pictures;
wherein the step of extracting the body region of each picture respectively from the public domain of the plurality of pictures comprises:
based on the public domain of the registered pictures, comparing each registered picture with the reference frame picture to obtain the frame difference image of each registered picture;
binarizing the frame difference image to obtain a frame difference binary image;
extracting the intersection of all the frame difference binary images to obtain the body region of the reference frame picture;
extracting, according to the frame difference binary image of each registered picture, the corresponding connected regions of each registered picture to obtain the body region of each registered picture.
7. The method according to claim 6, wherein the step of performing feature registration on the plurality of pictures to obtain the public domain of the plurality of pictures comprises:
selecting one picture from the plurality of pictures as the reference frame picture, with the other pictures as frame pictures to be registered;
performing feature registration on each frame picture to be registered with respect to the reference frame picture, and calculating the registration parameters of each frame picture to be registered;
transforming, according to the registration parameters, the frame picture to be registered corresponding to each registration parameter so that it matches the reference frame picture, obtaining a registered picture;
extracting the intersection of the regions of all the registered pictures to obtain the public domain of all the registered pictures.
8. The method according to claim 7, wherein the step of performing feature registration on each frame picture to be registered with respect to the reference frame picture and calculating the registration parameters of each frame picture to be registered comprises:
selecting a transformation model and registration features;
performing, according to the selected transformation model and registration features, feature registration on each frame picture to be registered with respect to the reference frame picture to calculate the registration parameters of each frame picture to be registered.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510845403.0A CN105488756B (en) | 2015-11-26 | 2015-11-26 | Picture synthetic method and device |
PCT/CN2016/102847 WO2017088618A1 (en) | 2015-11-26 | 2016-10-21 | Picture synthesis method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510845403.0A CN105488756B (en) | 2015-11-26 | 2015-11-26 | Picture synthetic method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105488756A CN105488756A (en) | 2016-04-13 |
CN105488756B (en) | 2019-03-29
Family
ID=55675721
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510845403.0A Active CN105488756B (en) | 2015-11-26 | 2015-11-26 | Picture synthetic method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105488756B (en) |
WO (1) | WO2017088618A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105488756B (en) * | 2015-11-26 | 2019-03-29 | 努比亚技术有限公司 | Picture synthetic method and device |
WO2017206656A1 (en) * | 2016-05-31 | 2017-12-07 | 努比亚技术有限公司 | Image processing method, terminal, and computer storage medium |
CN105915796A (en) * | 2016-05-31 | 2016-08-31 | 努比亚技术有限公司 | Electronic aperture shooting method and terminal |
CN106097284B (en) * | 2016-07-29 | 2019-08-30 | 努比亚技术有限公司 | A kind of processing method and mobile terminal of night scene image |
CN109544519B (en) * | 2018-11-08 | 2020-09-25 | 顺德职业技术学院 | Picture synthesis method based on detection device |
CN109767397B (en) | 2019-01-09 | 2022-07-12 | 三星电子(中国)研发中心 | Image optimization method and system based on artificial intelligence |
CN110070569B (en) * | 2019-04-29 | 2023-11-10 | 西藏兆讯科技工程有限公司 | Registration method and device of terminal image, mobile terminal and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2429204A2 (en) * | 2010-09-13 | 2012-03-14 | LG Electronics | Mobile terminal and 3D image composing method thereof |
CN104135609A (en) * | 2014-06-27 | 2014-11-05 | 小米科技有限责任公司 | A method and a device for assisting in photographing, and a terminal |
CN104243819A (en) * | 2014-08-29 | 2014-12-24 | 小米科技有限责任公司 | Photo acquiring method and device |
CN105100642A (en) * | 2015-07-30 | 2015-11-25 | 努比亚技术有限公司 | Image processing method and apparatus |
CN105100775A (en) * | 2015-07-29 | 2015-11-25 | 努比亚技术有限公司 | Image processing method and apparatus, and terminal |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101954192B1 (en) * | 2012-11-15 | 2019-03-05 | 엘지전자 주식회사 | Array camera, Moblie terminal, and method for operating the same |
KR20140122054A (en) * | 2013-04-09 | 2014-10-17 | 삼성전자주식회사 | converting device for converting 2-dimensional image to 3-dimensional image and method for controlling thereof |
CN104796625A (en) * | 2015-04-21 | 2015-07-22 | 努比亚技术有限公司 | Picture synthesizing method and device |
CN105488756B (en) * | 2015-11-26 | 2019-03-29 | 努比亚技术有限公司 | Picture synthetic method and device |
Also Published As
Publication number | Publication date |
---|---|
WO2017088618A1 (en) | 2017-06-01 |
CN105488756A (en) | 2016-04-13 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||