
CN104935822A - Method and device for processing images - Google Patents

Method and device for processing images

Info

Publication number
CN104935822A
CN104935822A
Authority
CN
China
Prior art keywords
infrared image
visible images
area
target area
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510328942.7A
Other languages
Chinese (zh)
Inventor
李嵩
王彦文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201510328942.7A priority Critical patent/CN104935822A/en
Publication of CN104935822A publication Critical patent/CN104935822A/en
Pending legal-status Critical Current

Landscapes

  • Studio Devices (AREA)

Abstract

The invention discloses a method and device for processing images. The method comprises the following steps: simultaneously acquiring an infrared image and a visible light image; partitioning the infrared image and the visible light image respectively into a background region and one or more target regions; and synthesizing the background region of the infrared image and the target region of the visible light image after parallax conversion into a new image, or synthesizing the background region of the infrared image after parallax conversion and the target region of the visible light image into a new image. Because the visible light image is close to what the human eye sees, synthesizing the background region of the infrared image with the target region of the visible light image makes it simple to obtain an infrared image that accords with people's aesthetic taste.

Description

Method and apparatus for processing an image
Technical field
The present invention relates to camera technology, and in particular to a method and apparatus for processing an image.
Background technology
Infrared photography is an unconventional style of shooting that uses infrared lighting equipment together with an infrared filter. It differs from traditional black-and-white and color photography: because materials reflect infrared light differently from visible light, the colors in an infrared photograph are very different from what the human eye sees. When shooting landscapes, infrared photography can often capture dreamlike infrared images that the human eye has never seen. However, when shooting certain subjects (such as people), infrared imaging changes the appearance of human skin tone and does not meet people's aesthetic expectations. Such infrared images usually require subsequent processing with image-editing software, but to finally obtain an infrared image that meets people's aesthetic expectations, the image-editing software often needs very complicated algorithms, and the processed infrared image is still often unnatural.
Summary of the invention
To solve the above problem, the present invention proposes a method and apparatus for processing an image, which can simply obtain an infrared image that meets people's aesthetic expectations.
To achieve the above object, the present invention proposes a method for processing an image, comprising:
acquiring an infrared image and a visible light image simultaneously;
dividing the infrared image and the visible light image respectively into a background area and one or more target areas;
synthesizing the background area of the infrared image and the target area of the visible light image after parallax conversion into a new image; or synthesizing the background area of the infrared image after parallax conversion and the target area of the visible light image into a new image.
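As a concrete illustration, the synthesis step above can be sketched with NumPy as a simple mask-based composite. This is a hypothetical sketch, not the patent's implementation: it assumes the two images have already been aligned (i.e., the parallax conversion has been applied), that the target area is given as a boolean mask, and all function and variable names are invented for illustration.

```python
import numpy as np

def synthesize(ir_img, vis_img, target_mask):
    """Compose a new image from the infrared background and the
    visible-light target region. Assumes vis_img is already in the
    infrared image's coordinate frame and target_mask is a boolean
    array marking target pixels (illustrative assumptions)."""
    out = ir_img.copy()                      # start from the infrared image (background)
    out[target_mask] = vis_img[target_mask]  # paste in the visible-light target region
    return out

# Toy 4x4 single-channel example: IR is all 0, visible is all 255,
# and the target area is the central 2x2 block.
ir = np.zeros((4, 4), dtype=np.uint8)
vis = np.full((4, 4), 255, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
result = synthesize(ir, vis, mask)
```

The same function covers both variants of the claim: which image is parallax-converted before the call does not change the composite itself.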
Preferably, dividing the infrared image and the visible light image respectively into a background area and one or more target areas comprises:
obtaining a preset area of the infrared image and of the visible light image respectively, and obtaining the target area of the infrared image and of the visible light image respectively according to the obtained preset areas;
obtaining the background area of the infrared image and of the visible light image respectively according to the obtained target areas.
Preferably, obtaining the target area of the infrared image and of the visible light image according to the obtained preset areas comprises:
using a region-growing method to obtain the target area of the infrared image and of the visible light image respectively according to the obtained preset areas.
Preferably, using a region-growing method to obtain the target area of the infrared image and of the visible light image according to the obtained preset areas comprises:
obtaining the center point of the preset area of the infrared image and of the visible light image respectively;
taking the obtained center points as the seed pixels of the target area of the infrared image and of the visible light image respectively;
merging into the target area of the infrared image and of the visible light image, respectively, those pixels whose depth of field differs from the depth of field of the seed pixel by an absolute value less than or equal to a preset threshold.
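The seed-based region growing described above can be sketched as follows. This is a minimal illustration under assumptions the patent does not state (4-connected neighbours, a single seed per preset area, and a depth map given as a 2-D array); all names are invented.

```python
from collections import deque

import numpy as np

def grow_region(depth_map, seed, threshold):
    """Grow a region from the seed pixel (the preset area's center
    point), absorbing 4-connected neighbours whose depth differs from
    the seed's depth by at most `threshold` in absolute value."""
    h, w = depth_map.shape
    seed_depth = depth_map[seed]
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(depth_map[ny, nx] - seed_depth) <= threshold):
                region[ny, nx] = True
                queue.append((ny, nx))
    return region

# Toy depth map: a near object (depths 1-2) against a far wall (depth 9).
depth = np.array([[1., 1., 9.],
                  [1., 2., 9.],
                  [9., 9., 9.]])
mask = grow_region(depth, seed=(0, 0), threshold=1.5)
```

With this threshold, the four near pixels are merged into the target area and the far pixels remain background.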
Preferably, obtaining the preset area of the infrared image and of the visible light image respectively comprises:
detecting the area where a person is located in the infrared image and in the visible light image respectively;
or receiving a preset area from a user, and obtaining in the infrared image and in the visible light image, respectively, the preset area corresponding to the preset area received from the user.
Preferably, dividing the infrared image and the visible light image respectively into a background area and one or more target areas comprises:
obtaining a depth map of the infrared image and of the visible light image respectively;
taking, according to the depth map of the infrared image, the pixels of the infrared image whose depth of field lies within a preset range as the target area of the infrared image, and taking, according to the depth map of the visible light image, the pixels of the visible light image whose depth of field lies within the preset range as the target area of the visible light image;
obtaining the background area of the infrared image and of the visible light image respectively according to their target areas.
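The depth-range variant can be sketched even more directly: every pixel whose depth falls inside the preset range becomes part of the target area, and the background area is its complement. This is a minimal sketch assuming the depth map is a 2-D array; the parameter names `near` and `far` are illustrative, not from the patent.

```python
import numpy as np

def split_by_depth(depth_map, near, far):
    """Segment an image by depth: pixels with depth in [near, far]
    form the target area; all other pixels form the background area."""
    target = (depth_map >= near) & (depth_map <= far)
    background = ~target
    return target, background

# Toy depth map with a subject at roughly 1-2 m and background beyond.
depth = np.array([[0.5, 1.2, 3.0],
                  [1.0, 1.8, 4.0]])
target, background = split_by_depth(depth, near=1.0, far=2.0)
```

Applied to both the infrared and the visible-light depth maps, this yields the two target/background partitions that the synthesis step then combines.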
The invention also proposes an apparatus for processing an image, at least comprising:
an acquisition module, for acquiring an infrared image and a visible light image simultaneously;
a segmentation module, for dividing the infrared image and the visible light image respectively into a background area and one or more target areas;
a synthesis module, for synthesizing the background area of the infrared image and the target area of the visible light image after parallax conversion into a new image, or synthesizing the background area of the infrared image after parallax conversion and the target area of the visible light image into a new image.
Preferably, the segmentation module is specifically configured to:
obtain a preset area of the infrared image and of the visible light image respectively; obtain the target area of the infrared image and of the visible light image respectively according to the obtained preset areas; and obtain the background area of the infrared image and of the visible light image respectively according to the obtained target areas.
Preferably, the segmentation module is specifically configured to:
obtain a preset area of the infrared image and of the visible light image respectively; use a region-growing method to obtain the target area of the infrared image and of the visible light image respectively according to the obtained preset areas; and obtain the background area of the infrared image and of the visible light image respectively according to the obtained target areas.
Preferably, the segmentation module is specifically configured to:
obtain a preset area of the infrared image and of the visible light image respectively; obtain the center point of each preset area; take the obtained center points as the seed pixels of the target area of the infrared image and of the visible light image respectively; merge into the target area of each image those pixels whose depth of field differs from the depth of field of the seed pixel by an absolute value less than or equal to a preset threshold; and obtain the background area of the infrared image and of the visible light image respectively according to the obtained target areas.
Preferably, the segmentation module is specifically configured to:
detect the area where a person is located in the infrared image and in the visible light image respectively, or receive a preset area from a user and obtain in the infrared image and in the visible light image, respectively, the preset area corresponding to the preset area received from the user; obtain the target area of the infrared image and of the visible light image respectively according to the obtained preset areas; and obtain the background area of the infrared image and of the visible light image respectively according to the obtained target areas.
Preferably, the segmentation module is specifically configured to:
obtain a depth map of the infrared image and of the visible light image respectively; take, according to the depth map of the infrared image, the pixels of the infrared image whose depth of field lies within a preset range as the target area of the infrared image, and take, according to the depth map of the visible light image, the pixels of the visible light image whose depth of field lies within the preset range as the target area of the visible light image; and obtain the background area of the infrared image and of the visible light image respectively according to their target areas.
Compared with the prior art, the present invention comprises: acquiring an infrared image and a visible light image simultaneously; dividing the infrared image and the visible light image respectively into a background area and one or more target areas; and synthesizing the background area of the infrared image and the target area of the visible light image after parallax conversion into a new image, or synthesizing the background area of the infrared image after parallax conversion and the target area of the visible light image into a new image. With the solution of the present invention, because the visible light image is close to what the human eye sees, synthesizing the background area of the infrared image and the target area of the visible light image into a new image makes it simple to obtain an infrared image that meets people's aesthetic expectations.
Brief description of the drawings
The accompanying drawings in the embodiments of the present invention are described below. The drawings in the embodiments are intended to provide a further understanding of the present invention and, together with the specification, serve to explain the present invention; they do not limit the scope of the present invention.
Fig. 1 is a schematic diagram of the hardware configuration of a mobile terminal implementing embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a flowchart of the method for processing an image according to the first embodiment of the present invention;
Fig. 4 is a schematic diagram of the structure of the apparatus for processing an image according to the fourth embodiment of the present invention;
Fig. 5(a) is a top view of the dual camera of the present invention;
Fig. 5(b) is a front view of the dual camera of the present invention.
The realization of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
To facilitate understanding by those skilled in the art, the present invention is further described below with reference to the accompanying drawings; this description is not intended to limit the scope of the present invention. It should be noted that, as long as they do not conflict, the embodiments in this application and the features in the embodiments may be combined with one another.
It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
A mobile terminal implementing embodiments of the present invention is now described with reference to the accompanying drawings. In the following description, suffixes such as "module", "part" or "unit" used to denote elements are used only to aid the description of the present invention and have no specific meaning in themselves; "module" and "part" may therefore be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal. However, those skilled in the art will appreciate that, except for elements used specifically for mobile purposes, the structure according to the embodiments of the present invention can also be applied to fixed-type terminals.
Fig. 1 is a schematic diagram of the hardware configuration of a mobile terminal implementing embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may alternatively be implemented. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast reception module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114 and a position information module 115.
The broadcast reception module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and sends broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and sends them to the terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like, and may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast-related information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist in the form of an electronic program guide (EPG) of digital multimedia broadcasting (DMB), an electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H), and the like. The broadcast reception module 111 can receive signal broadcasts using various types of broadcast systems. In particular, it can receive digital broadcasts using digital broadcast systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the MediaFLO (forward link only media) data broadcast system, and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast reception module 111 can be constructed to be suitable for the various broadcast systems providing broadcast signals as well as the above-mentioned digital broadcast systems. The broadcast signals and/or broadcast-related information received via the broadcast reception module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 sends radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal and a server. Such radio signals may include voice call signals, video call signals, or various types of data sent and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access for the mobile terminal. This module may be internally or externally coupled to the terminal. The wireless Internet access technologies involved in this module may include WLAN (wireless local area network, Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and the like.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, radio-frequency identification (RFID), the Infrared Data Association (IrDA) standard, ultra-wideband (UWB) and ZigBee™.
The position information module 115 is a module for checking or obtaining the position information of the mobile terminal. A typical example of the position information module is a GPS (global positioning system) module. According to current technology, the GPS module 115 calculates distance information from three or more satellites along with accurate time information and applies triangulation to the calculated information, thereby calculating three-dimensional current position information with high accuracy in terms of longitude, latitude and altitude. Currently, the method for calculating position and time information uses three satellites and corrects the error of the calculated position and time by using one additional satellite. In addition, the GPS module 115 can calculate speed information by continuously computing the current position in real time.
The A/V input unit 120 is for receiving audio or video signals. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode, and the processed picture frames may be displayed on the display unit 151. The picture frames processed by the camera 121 may be stored in the memory 160 (or another storage medium) or sent via the wireless communication unit 110, and two or more cameras 121 may be provided according to the structure of the mobile terminal. The microphone 122 can receive sound (audio data) in an operation mode such as a phone call mode, a recording mode or a voice recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format that can be sent to a mobile communication base station via the mobile communication module 112. The microphone 122 may implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) noise or interference produced in the process of receiving and sending audio signals.
The user input unit 130 can generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by being touched), a jog wheel, a joystick, and the like. In particular, when the touch pad is superimposed on the display unit 151 as a layer, a touch screen can be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the acceleration or deceleration movement and direction of the mobile terminal 100, etc., and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 can sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power and whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a proximity sensor 141, which will be described below in connection with the touch screen.
The interface unit 170 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for authenticating the user of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and the like. In addition, the device with the identification module (hereinafter referred to as the "identification device") may take the form of a smart card, so that the identification device can be connected to the mobile terminal 100 via a port or other connecting means. The interface unit 170 may be used to receive input (e.g., data, information, power, etc.) from an external device and transfer the received input to one or more elements in the mobile terminal 100, or may be used to transfer data between the mobile terminal and an external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 can serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. The various command signals or power input from the cradle can serve as a signal for identifying whether the mobile terminal is correctly mounted on the cradle. The output unit 150 is constructed to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
The display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in the phone call mode, the display unit 151 can display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in the video call mode or the image capture mode, the display unit 151 can display captured and/or received images, a UI or GUI showing the video or image and related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are superimposed on each other as layers to form a touch screen, the display unit 151 can serve both as an input device and as an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film-transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays can be constructed to be transparent to allow the user to view from the outside; these can be called transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. According to the particular implementation desired, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 152 can, when the mobile terminal is in a mode such as a call signal reception mode, a call mode, a recording mode, a voice recognition mode or a broadcast reception mode, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 can provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and the like.
The alarm unit 153 can provide output to notify the occurrence of an event of the mobile terminal 100. Typical events may include call reception, message reception, key signal input, touch input, etc. In addition to audio or video output, the alarm unit 153 can provide output in different ways to notify the occurrence of an event. For example, the alarm unit 153 can provide output in the form of vibration; when a call, a message or some other incoming communication is received, the alarm unit 153 can provide a tactile output (i.e., vibration) to notify the user. By providing such tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 can store software programs for the processing and control operations performed by the controller 180, and can temporarily store data that has been output or is to be output (e.g., a phone book, messages, still images, video, etc.). Moreover, the memory 160 can store data about the various modes of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc. Moreover, the mobile terminal 100 can cooperate over a network connection with a network storage device that performs the storage function of the memory 160.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls, and the like. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be configured within the controller 180 or configured separately from it. The controller 180 can perform pattern recognition processing to recognize handwriting input or drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power and, under the control of the controller 180, provides the appropriate power needed to operate each element and component.
The various implementations described herein can be realized using, for example, computer software, hardware, or any combination thereof, with a computer-readable medium. For hardware implementation, the implementations described herein can be realized using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such implementations can be realized in the controller 180. For software implementation, implementations such as processes or functions can be realized with separate software modules that allow at least one function or operation to be performed. The software code can be implemented by a software application (or program) written in any suitable programming language, and the software code can be stored in the memory 160 and executed by the controller 180.
The mobile terminal has thus far been described in terms of its functions. In the following, for the sake of brevity, a slide-type mobile terminal will be described as an example among the various types of mobile terminals, such as folder-type, bar-type, swing-type and slide-type mobile terminals. Therefore, the present invention can be applied to any type of mobile terminal and is not limited to slide-type mobile terminals.
Mobile terminal 100 as shown in Figure 1 can be constructed to utilize and send the such as wired and wireless communication system of data via frame or grouping and satellite-based communication system operates.
Describe wherein according to the communication system that mobile terminal of the present invention can operate referring now to Fig. 2.
Such communication system can use different air interfaces and/or physical layer.Such as, the air interface used by communication system comprises such as frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA) and universal mobile telecommunications system (UMTS) (especially, Long Term Evolution (LTE)), global system for mobile communications (GSM) etc.As non-limiting example, description below relates to cdma communication system, but such instruction is equally applicable to the system of other type.
With reference to figure 2, cdma wireless communication system can comprise multiple mobile terminal 100, multiple base station (BS) 270, base station controller (BSC) 275 and mobile switching centre (MSC) 280.MSC280 is constructed to form interface with Public Switched Telephony Network (PSTN) 290.MSC280 is also constructed to form interface with the BSC275 that can be couple to base station 270 via back haul link.Back haul link can construct according to any one in some interfaces that oneself knows, described interface comprises such as E1/T1, ATM, IP, PPP, frame relay, HDSL, ADSL or xDSL.Will be appreciated that system as shown in Figure 2 can comprise multiple BSC2750.
Each BS 270 may serve one or more sectors (or regions), each sector covered by an omnidirectional antenna or by an antenna pointed in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be configured to support a plurality of frequency assignments, with each frequency assignment having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BS 270 may also be referred to as a base transceiver subsystem (BTS) or by other equivalent terms. In such a case, the term "base station" may be used to refer broadly to a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site". Alternatively, individual sectors of a particular BS 270 may be referred to as a plurality of cell sites.
As shown in Fig. 2, a broadcasting transmitter (BT) 295 transmits a broadcast signal to the mobile terminals 100 operating within the system. The broadcast receiving module 111 as shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signal transmitted by the BT 295. In Fig. 2, several global positioning system (GPS) satellites 300 are shown. The satellites 300 help locate at least one of the plurality of mobile terminals 100.
In Fig. 2, a plurality of satellites 300 are depicted, but it is understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 as shown in Fig. 1 is typically configured to cooperate with the satellites 300 to obtain the desired positioning information. Instead of or in addition to GPS tracking techniques, other technologies that can track the position of a mobile terminal may be used. In addition, at least one of the GPS satellites 300 may alternatively or additionally handle satellite DMB transmission.
As one typical operation of the wireless communication system, the BSs 270 receive reverse-link signals from various mobile terminals 100. The mobile terminals 100 typically engage in calls, messaging, and other types of communication. Each reverse-link signal received by a given BS 270 is processed within that BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC 275 provides call resource allocation and mobility management functionality, including coordination of soft handoff procedures between the BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC interfaces with the BSCs 275, and the BSCs 275 in turn control the BSs 270 to transmit forward-link signals to the mobile terminals 100.
Based on the above mobile terminal hardware structure and communication system, various embodiments of the method of the present invention are proposed.
As shown in Fig. 3, a first embodiment of the present invention proposes a method for processing an image, comprising the following steps.
Step 300: acquiring an infrared image and a visible light image simultaneously.
In this step, an infrared camera may be used to acquire the infrared image, and a visible light camera may be used to acquire the visible light image. The imaging planes of the infrared camera and the visible light camera are parallel.
Step 301: dividing the infrared image and the visible light image into a background region and one or more target regions, respectively. Specifically, this comprises:
acquiring a preset region of the infrared image and a preset region of the visible light image respectively; acquiring the target regions of the infrared image and the visible light image respectively according to the acquired preset regions of the infrared image and the visible light image; and acquiring the background regions of the infrared image and the visible light image respectively according to the acquired target regions of the infrared image and the visible light image.
Alternatively: acquiring a depth map of the infrared image and a depth map of the visible light image respectively; according to the depth map of the infrared image, taking the pixels in the infrared image whose depth falls within a preset range as the target region of the infrared image, and according to the depth map of the visible light image, taking the pixels in the visible light image whose depth falls within a preset range as the target region of the visible light image; and acquiring the background regions of the infrared image and the visible light image respectively according to the target regions of the infrared image and the visible light image.
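The depth-map alternative above can be illustrated with a short sketch. The following is a hypothetical NumPy implementation, not part of the patent (the function name, toy depth values, and preset range are all assumptions for illustration): pixels whose depth falls inside the preset range form the target region, and the remaining pixels form the background region.

```python
import numpy as np

def split_by_depth(depth_map, depth_min, depth_max):
    """Split an image into target/background masks by a preset depth range.

    depth_map: 2-D array of per-pixel depth values (the "depth map" above).
    Pixels with depth in [depth_min, depth_max] form the target region;
    all remaining pixels form the background region.
    """
    target_mask = (depth_map >= depth_min) & (depth_map <= depth_max)
    background_mask = ~target_mask
    return target_mask, background_mask

# Toy 3x3 depth map: the centre pixel is "near" (depth 1.0), the rest far.
depth = np.array([[5.0, 5.0, 5.0],
                  [5.0, 1.0, 5.0],
                  [5.0, 5.0, 5.0]])
target, background = split_by_depth(depth, 0.5, 2.0)
# target is True only at the centre pixel; background everywhere else.
```

The same call would be made once per image (infrared and visible light), each with its own depth map.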
Wherein, acquiring the target regions of the infrared image and the visible light image respectively according to the acquired preset regions of the infrared image and the visible light image comprises:
using a region-growing method to acquire the target regions of the infrared image and the visible light image respectively from the acquired preset regions of the infrared image and the visible light image.
That is, a region-growing method is used to obtain the target region of the infrared image from the acquired preset region of the infrared image, and a region-growing method is used to obtain the target region of the visible light image from the acquired preset region of the visible light image.
Wherein, using a region-growing method to acquire the target regions of the infrared image and the visible light image according to the acquired preset regions of the infrared image and the visible light image comprises:
acquiring the center points of the preset regions of the infrared image and the visible light image respectively; taking the acquired center points of the infrared image and the visible light image as the seed pixels of the target regions of the infrared image and the visible light image respectively; and merging into the target regions of the infrared image and the visible light image those pixels whose depth differs from the depth of the corresponding seed pixel by an absolute value less than or equal to a preset threshold.
That is, the center point of the preset region of the infrared image is acquired and taken as the seed pixel of the target region of the infrared image, and the pixels in the infrared image whose depth differs from the depth of the seed pixel by an absolute value less than or equal to the preset threshold are merged into the target region of the infrared image.
Likewise, the center point of the preset region of the visible light image is acquired and taken as the seed pixel of the target region of the visible light image, and the pixels in the visible light image whose depth differs from the depth of the seed pixel by an absolute value less than or equal to the preset threshold are merged into the target region of the visible light image.
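The seed-and-threshold procedure above can be sketched as a conventional region-growing pass over a depth map. The following is a minimal illustrative implementation under stated assumptions (4-connected growth from the seed, a per-image depth map already available, hypothetical names and values); it is not asserted to be the patent's exact algorithm:

```python
import numpy as np
from collections import deque

def region_grow(depth_map, seed, threshold):
    """Grow a target region from a seed pixel over a depth map.

    Starting from the seed (the centre of the preset region), 4-connected
    neighbours are merged into the region whenever the absolute difference
    between their depth and the seed's depth is <= threshold.
    """
    h, w = depth_map.shape
    seed_depth = depth_map[seed]
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                if abs(depth_map[ny, nx] - seed_depth) <= threshold:
                    region[ny, nx] = True
                    queue.append((ny, nx))
    return region

# Toy depth map: a near 2x2 block (depths around 1.0) against a far scene.
depth = np.array([[1.0, 1.1, 9.0],
                  [1.2, 1.0, 9.0],
                  [9.0, 9.0, 9.0]])
target = region_grow(depth, seed=(0, 0), threshold=0.5)
# The four near pixels are merged into the target region; the far ones are not.
```

The same pass is run once on the infrared image's depth map and once on the visible light image's depth map, each with its own seed.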
Wherein, how to obtain the center point of the preset region of the infrared image or the visible light image is a technique known to those skilled in the art, is not intended to limit the protection scope of the present invention, and is not described further here. For example, the mean of the coordinates of all pixels in the preset region of the infrared image or the visible light image may be taken as the pixel coordinates of the center point of that preset region.
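The mean-of-coordinates example just mentioned can be written directly. The helper below is an illustrative sketch (its name and the rounding choice are assumptions; the patent leaves the exact method open):

```python
import numpy as np

def region_center(mask):
    """Centre point of a preset region given as a boolean mask:
    the mean of its pixel coordinates, rounded to the nearest pixel."""
    ys, xs = np.nonzero(mask)
    return int(round(ys.mean())), int(round(xs.mean()))

mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True          # a 3x3 preset region
center = region_center(mask)   # the centre pixel of that block
```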
Wherein, how to obtain the depth values of the infrared image and the visible light image is a technique known to those skilled in the art, is not intended to limit the protection scope of the present invention, and is not described further here.
Wherein, acquiring the preset regions of the infrared image and the visible light image respectively comprises:
detecting the region where a person is located in the infrared image and in the visible light image respectively; or, receiving a preset region from a user, and acquiring, in the infrared image and in the visible light image respectively, the preset regions corresponding to the preset region received from the user.
Wherein, how to detect the region where a person is located in the infrared image or the visible light image is a technique known to those skilled in the art, is not intended to limit the protection scope of the present invention, and is not described further here.
Wherein, the disparity map of the infrared image and the visible light image may be used to obtain, in the infrared image and the visible light image, the preset regions corresponding to the preset region received from the user; the specific implementation is a technique known to those skilled in the art, is not intended to limit the protection scope of the present invention, and is not described further here.
Wherein, the depth map of the infrared image is the map formed by the depth values of all pixels of the infrared image, and the depth map of the visible light image is the map formed by the depth values of all pixels of the visible light image.
In this step, the background region of the infrared image consists of the pixels in the infrared image other than the target region, and the background region of the visible light image consists of the pixels in the visible light image other than the target region.
Step 302: synthesizing the background region of the infrared image and the parallax-converted target region of the visible light image into a new image; or synthesizing the parallax-converted background region of the infrared image and the target region of the visible light image into a new image.
In this step, how to perform parallax conversion on the target region of the visible light image, or on the background region of the infrared image, is a technique known to those skilled in the art, is not intended to limit the protection scope of the present invention, and is not described further here.
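Although the patent leaves parallax conversion to known art, for parallel imaging planes such as the dual camera described later, one common simplified model shifts each pixel horizontally by its disparity. The sketch below is an assumption-laden illustration only (integer disparities, a precomputed disparity map, no occlusion handling, hypothetical names), not the patent's method:

```python
import numpy as np

def shift_by_disparity(image, disparity):
    """Warp an image from one camera's viewpoint toward the other's by
    shifting each pixel horizontally by its (integer) disparity value.

    Deliberately simplified: assumes parallel imaging planes and a known
    per-pixel disparity map; pixels shifted out of frame are dropped and
    unfilled positions remain zero.
    """
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            nx = x + int(disparity[y, x])
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out

img = np.arange(12, dtype=np.uint8).reshape(3, 4)
disp = np.ones((3, 4), dtype=np.int32)   # shift everything one pixel right
warped = shift_by_disparity(img, disp)
```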
In this step, in the process of synthesizing the background region of the infrared image and the parallax-converted target region of the visible light image into a new image, or of synthesizing the parallax-converted background region of the infrared image and the target region of the visible light image into a new image, rendering effects may be applied to the corresponding target region and background region of the new image respectively, and the junction between the target region and the background region may be beautified, for example by displaying the junction as a gradual transition. Specifically, the color of each pixel at the junction may be multiplied by a coefficient: the closer a junction pixel is to the interface, the smaller the coefficient; the farther it is from the interface, the larger the coefficient.
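The coefficient scheme just described, with smaller multipliers near the interface and larger ones farther away, applied from both sides, amounts to a linear cross-fade across the junction. A minimal sketch under assumptions (row-aligned strips, a seam at the strip centre, a hypothetical band width):

```python
import numpy as np

def blend_across_seam(a, b, band):
    """Blend row-aligned strips a (target-region side) and b (background
    side) over a transition band of `band` columns centred on the seam.

    Each pixel of `a` is multiplied by a coefficient that is larger the
    farther the pixel is from the interface and smaller near it (the
    gradual transition described above); `b` gets the complement.
    """
    h, w = a.shape
    x = np.arange(w, dtype=np.float64)
    seam = (w - 1) / 2.0
    # coefficient for a: 1 on a's side of the band, fading to 0 on b's side
    coeff = np.clip(0.5 - (x - seam) / band, 0.0, 1.0)
    return a * coeff + b * (1.0 - coeff)

a = np.full((2, 5), 100.0)   # constant target-region values
b = np.full((2, 5), 0.0)     # constant background values
out = blend_across_seam(a, b, band=4)
# Each row fades linearly from 100 down to 0 across the band.
```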
The method of the present invention is described in detail below through several examples.
In a second embodiment, an infrared camera is used to acquire an infrared image while a visible light camera simultaneously acquires a visible light image. Person detection is performed on the infrared image and the visible light image respectively, to obtain the region where a person is located in the infrared image and the region where a person is located in the visible light image.
The center point of the region where the person is located in the infrared image is acquired and taken as the seed pixel of the target region of the infrared image, and the pixels in the infrared image whose depth differs from the depth of the seed pixel by an absolute value less than or equal to a preset threshold are merged into the target region of the infrared image.
The center point of the region where the person is located in the visible light image is acquired and taken as the seed pixel of the target region of the visible light image, and the pixels in the visible light image whose depth differs from the depth of the seed pixel by an absolute value less than or equal to the preset threshold are merged into the target region of the visible light image.
The pixels in the infrared image other than the target region form the background region of the infrared image, and the pixels in the visible light image other than the target region form the background region of the visible light image.
Synthesizing the background region of the infrared image with the parallax-converted target region of the visible light image, or synthesizing the parallax-converted background region of the infrared image with the target region of the visible light image, yields a new image that accords with human aesthetics. During synthesis, rendering effects may be applied to the background region and the target region of the new image respectively, and the junction between the background region and the target region may be beautified.
In a third embodiment, an infrared camera is used to acquire an infrared image while a visible light camera simultaneously acquires a visible light image. A preset region is received from a user, and the preset regions corresponding to the user's preset region are acquired in the infrared image and the visible light image respectively.
The center point of the preset region of the infrared image is acquired and taken as the seed pixel of the target region of the infrared image, and the pixels in the infrared image whose depth differs from the depth of the seed pixel by an absolute value less than or equal to a preset threshold are merged into the target region of the infrared image.
The center point of the preset region of the visible light image is acquired and taken as the seed pixel of the target region of the visible light image, and the pixels in the visible light image whose depth differs from the depth of the seed pixel by an absolute value less than or equal to the preset threshold are merged into the target region of the visible light image.
The pixels in the infrared image other than the target region form the background region of the infrared image, and the pixels in the visible light image other than the target region form the background region of the visible light image.
Synthesizing the background region of the infrared image with the parallax-converted target region of the visible light image, or synthesizing the parallax-converted background region of the infrared image with the target region of the visible light image, yields a new image that accords with human aesthetics. During synthesis, rendering effects may be applied to the background region and the target region of the new image respectively, and the junction between the background region and the target region may be beautified.
In a fourth embodiment, an infrared camera is used to acquire an infrared image while a visible light camera simultaneously acquires a visible light image. A depth map of the infrared image and a depth map of the visible light image are acquired.
According to the depth map of the infrared image, the pixels in the infrared image whose depth falls within a preset range are taken as the target region of the infrared image; according to the depth map of the visible light image, the pixels in the visible light image whose depth falls within a preset range are taken as the target region of the visible light image.
The pixels in the infrared image other than the target region form the background region of the infrared image, and the pixels in the visible light image other than the target region form the background region of the visible light image.
Synthesizing the background region of the infrared image with the parallax-converted target region of the visible light image, or synthesizing the parallax-converted background region of the infrared image with the target region of the visible light image, yields a new image that accords with human aesthetics. During synthesis, rendering effects may be applied to the background region and the target region of the new image respectively, and the junction between the background region and the target region may be beautified.
Referring to Fig. 5, a fifth embodiment of the present invention also proposes a device for processing an image, at least comprising:
an acquisition module, configured to acquire an infrared image and a visible light image simultaneously;
a segmentation module, configured to divide the infrared image and the visible light image into a background region and one or more target regions respectively; and
a synthesis module, configured to synthesize the background region of the infrared image and the parallax-converted target region of the visible light image into a new image, or to synthesize the parallax-converted background region of the infrared image and the target region of the visible light image into a new image.
Wherein, the acquisition module may be implemented with a dual camera. Fig. 5(a) is a top view of the dual camera, and Fig. 5(b) is a front view of the dual camera. As shown in Fig. 5(a) and Fig. 5(b), the dual camera comprises an infrared camera 11, a visible light camera 12, and a connecting piece 13 for connecting the two cameras; or it comprises an infrared camera 12, a visible light camera 11, and a connecting piece 13 for connecting the two cameras.
In the device of the present invention, the segmentation module is specifically configured to:
acquire a preset region of the infrared image and a preset region of the visible light image respectively; acquire the target regions of the infrared image and the visible light image respectively according to the acquired preset regions of the infrared image and the visible light image; and acquire the background regions of the infrared image and the visible light image respectively according to the acquired target regions of the infrared image and the visible light image.
In the device of the present invention, the segmentation module is specifically configured to:
acquire a preset region of the infrared image and a preset region of the visible light image respectively; use a region-growing method to acquire the target regions of the infrared image and the visible light image respectively according to the acquired preset regions of the infrared image and the visible light image; and acquire the background regions of the infrared image and the visible light image respectively according to the acquired target regions of the infrared image and the visible light image.
In the device of the present invention, the segmentation module is specifically configured to:
acquire a preset region of the infrared image and a preset region of the visible light image respectively; acquire the center points of the preset regions of the infrared image and the visible light image respectively; take the acquired center points of the preset regions of the infrared image and the visible light image as the seed pixels of the target regions of the infrared image and the visible light image respectively; merge into the target regions of the infrared image and the visible light image those pixels whose depth differs from the depth of the corresponding seed pixel by an absolute value less than or equal to a preset threshold; and acquire the background regions of the infrared image and the visible light image respectively according to the acquired target regions of the infrared image and the visible light image.
In the device of the present invention, the segmentation module is specifically configured to:
detect the region where a person is located in the infrared image and in the visible light image respectively, or receive a preset region from a user and acquire, in the infrared image and the visible light image respectively, the preset regions corresponding to the preset region received from the user; acquire the target regions of the infrared image and the visible light image respectively according to the acquired preset regions of the infrared image and the visible light image; and acquire the background regions of the infrared image and the visible light image respectively according to the acquired target regions of the infrared image and the visible light image.
In the device of the present invention, the segmentation module is specifically configured to:
acquire a depth map of the infrared image and a depth map of the visible light image respectively; according to the depth map of the infrared image, take the pixels in the infrared image whose depth falls within a preset range as the target region of the infrared image, and according to the depth map of the visible light image, take the pixels in the visible light image whose depth falls within a preset range as the target region of the visible light image; and acquire the background regions of the infrared image and the visible light image respectively according to the target regions of the infrared image and the visible light image.
It should be noted that, as used herein, the terms "comprise", "include", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device that comprises that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the claims of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (10)

1. A method for processing an image, characterized by comprising:
acquiring an infrared image and a visible light image simultaneously;
dividing the infrared image and the visible light image into a background region and one or more target regions respectively; and
synthesizing the background region of the infrared image and the parallax-converted target region of the visible light image into a new image; or synthesizing the parallax-converted background region of the infrared image and the target region of the visible light image into a new image.
2. The method according to claim 1, characterized in that dividing the infrared image and the visible light image into a background region and one or more target regions respectively comprises:
acquiring a preset region of the infrared image and a preset region of the visible light image respectively, and acquiring the target regions of the infrared image and the visible light image respectively according to the acquired preset regions of the infrared image and the visible light image; and
acquiring the background regions of the infrared image and the visible light image respectively according to the acquired target regions of the infrared image and the visible light image.
3. The method according to claim 2, characterized in that acquiring the target regions of the infrared image and the visible light image respectively according to the acquired preset regions of the infrared image and the visible light image comprises:
using a region-growing method to acquire the target regions of the infrared image and the visible light image respectively according to the acquired preset regions of the infrared image and the visible light image.
4. The method according to claim 3, characterized in that using a region-growing method to acquire the target regions of the infrared image and the visible light image respectively according to the acquired preset regions of the infrared image and the visible light image comprises:
acquiring the center points of the preset regions of the infrared image and the visible light image respectively;
taking the acquired center points of the preset regions of the infrared image and the visible light image as the seed pixels of the target regions of the infrared image and the visible light image respectively; and
merging into the target regions of the infrared image and the visible light image those pixels whose depth differs from the depth of the corresponding seed pixel by an absolute value less than or equal to a preset threshold.
5. The method according to any one of claims 2 to 4, characterized in that acquiring the preset regions of the infrared image and the visible light image respectively comprises:
detecting the region where a person is located in the infrared image and in the visible light image respectively;
or, receiving a preset region from a user, and acquiring, in the infrared image and the visible light image respectively, the preset regions corresponding to the preset region received from the user.
6. The method according to any one of claims 1 to 5, characterized in that dividing the infrared image and the visible light image into a background region and one or more target regions respectively comprises:
acquiring a depth map of the infrared image and a depth map of the visible light image respectively;
according to the depth map of the infrared image, taking the pixels in the infrared image whose depth falls within a preset range as the target region of the infrared image, and according to the depth map of the visible light image, taking the pixels in the visible light image whose depth falls within a preset range as the target region of the visible light image; and
acquiring the background regions of the infrared image and the visible light image respectively according to the target regions of the infrared image and the visible light image.
7. A device for processing an image, characterized by at least comprising:
an acquisition module, configured to acquire an infrared image and a visible light image simultaneously;
a segmentation module, configured to divide the infrared image and the visible light image into a background region and one or more target regions respectively; and
a synthesis module, configured to synthesize the background region of the infrared image and the parallax-converted target region of the visible light image into a new image; or to synthesize the parallax-converted background region of the infrared image and the target region of the visible light image into a new image.
8. The device according to claim 7, characterized in that the segmentation module is specifically configured to:
acquire a preset region of the infrared image and a preset region of the visible light image respectively, and acquire the target regions of the infrared image and the visible light image respectively according to the acquired preset regions of the infrared image and the visible light image; and acquire the background regions of the infrared image and the visible light image respectively according to the acquired target regions of the infrared image and the visible light image.
9. The device according to claim 7, characterized in that the segmentation module is specifically configured to:
acquire a preset region of the infrared image and a preset region of the visible light image respectively; use a region-growing method to acquire the target regions of the infrared image and the visible light image respectively according to the acquired preset regions of the infrared image and the visible light image; and acquire the background regions of the infrared image and the visible light image respectively according to the acquired target regions of the infrared image and the visible light image.
10. The device according to claim 7, characterized in that the segmentation module is specifically configured to:
acquire a preset region of the infrared image and a preset region of the visible light image respectively; acquire the center points of the preset regions of the infrared image and the visible light image respectively; take the acquired center points of the preset regions of the infrared image and the visible light image as the seed pixels of the target regions of the infrared image and the visible light image respectively; merge into the target regions of the infrared image and the visible light image those pixels whose depth differs from the depth of the corresponding seed pixel by an absolute value less than or equal to a preset threshold; and acquire the background regions of the infrared image and the visible light image respectively according to the acquired target regions of the infrared image and the visible light image.
CN201510328942.7A 2015-06-15 2015-06-15 Method and device for processing images Pending CN104935822A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510328942.7A CN104935822A (en) 2015-06-15 2015-06-15 Method and device for processing images


Publications (1)

Publication Number Publication Date
CN104935822A true CN104935822A (en) 2015-09-23

Family

ID=54122764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510328942.7A Pending CN104935822A (en) 2015-06-15 2015-06-15 Method and device for processing images

Country Status (1)

Country Link
CN (1) CN104935822A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101610421A (en) * 2008-06-17 2009-12-23 深圳华为通信技术有限公司 Video communication method, Apparatus and system
CN101727665A (en) * 2008-10-27 2010-06-09 广州飒特电力红外技术有限公司 Method and device for fusing infrared images and visible light images
CN104052992A (en) * 2014-06-09 2014-09-17 联想(北京)有限公司 Image processing method and electronic equipment


Cited By (9)

* Cited by examiner, † Cited by third party
| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN107862688A * | 2017-11-07 | 2018-03-30 | Shandong Inspur Cloud Service Information Technology Co., Ltd. | Method and device for computer-aided diagnosis of medical images |
| CN111386701A * | 2017-12-04 | 2020-07-07 | Sony Corporation | Image processing apparatus, image processing method, and program |
| US11641492B2 | 2017-12-04 | 2023-05-02 | Sony Corporation | Image processing apparatus and image processing method |
| CN109274923A * | 2018-11-21 | 2019-01-25 | Nanjing Wencai Industrial Intelligence Research Institute Co., Ltd. | Intelligent sensing device for industrial equipment |
| CN110213501A * | 2019-06-25 | 2019-09-06 | Zhejiang Dahua Technology Co., Ltd. | Snapshot capturing method and device, electronic device and storage medium |
| WO2020258816A1 * | 2019-06-25 | 2020-12-30 | Zhejiang Dahua Technology Co., Ltd. | Systems and methods for image processing |
| US11967052B2 | 2019-06-25 | 2024-04-23 | Zhejiang Dahua Technology Co., Ltd. | Systems and methods for image processing |
| CN111210450A * | 2019-12-25 | 2020-05-29 | Beijing Dongyu Hongda Technology Co., Ltd. | Method for processing infrared images of a sea-sky background |
| CN111210450B * | 2019-12-25 | 2022-08-09 | Beijing Dongyu Hongda Technology Co., Ltd. | Method and system for processing infrared images of a sea-sky background |

Similar Documents

| Publication | Title |
| --- | --- |
| CN105227837A | Image synthesis method and device |
| CN105404484A | Terminal screen splitting device and method |
| CN104954689A | Method and shooting device for acquiring photos with dual cameras |
| CN104902212A | Video communication method and apparatus |
| CN105100482A | Mobile terminal and system for sign language recognition, and call realization method of the mobile terminal |
| CN105224925A | Video processing apparatus, method and mobile terminal |
| CN104735255A | Split-screen display method and system |
| CN105183308A | Picture display method and apparatus |
| CN104657482A | Method for displaying application interface, and terminal |
| CN104811532A | Method and device for adjusting terminal screen display parameters |
| CN105141833A | Terminal photographing method and device |
| CN104967802A | Mobile terminal, and method and device for recording multiple screen areas |
| CN105138261A | Shooting parameter adjustment apparatus and method |
| CN104967744A | Method and device for adjusting terminal parameters |
| CN104967717A | Noise reduction method and apparatus for terminal voice interaction mode |
| CN105263049A | Frame-coordinate-based video cropping device and method, and mobile terminal |
| CN104968033A | Terminal network processing method and apparatus |
| CN105227865A | Image processing method and terminal |
| CN104751517A | Graphic processing method and graphic processing device |
| CN104917965A | Shooting method and device |
| CN105100673A | Voice over LTE (VoLTE) based desktop sharing method and device |
| CN104850325A | Mobile terminal application processing method and device |
| CN105160628A | Method and device for acquiring RGB data |
| CN104935822A | Method and device for processing images |
| CN105245938A | Device and method for playing multimedia files |

Legal Events

| Code | Title | Description |
| --- | --- | --- |
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20150923 |