
CN107707834A - Image processing method and apparatus, electronic device and computer-readable storage medium - Google Patents

Image processing method and apparatus, electronic device and computer-readable storage medium Download PDF

Info

Publication number
CN107707834A
CN107707834A (application CN201710812524.4A; granted publication CN107707834B)
Authority
CN
China
Prior art keywords
image
scene
colour temperature
depth
light source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710812524.4A
Other languages
Chinese (zh)
Other versions
CN107707834B (en)
Inventor
张学勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710812524.4A priority Critical patent/CN107707834B/en
Publication of CN107707834A publication Critical patent/CN107707834A/en
Priority to EP18852861.6A priority patent/EP3680853A4/en
Priority to PCT/CN2018/105121 priority patent/WO2019047985A1/en
Priority to US16/815,177 priority patent/US11516412B2/en
Priority to US16/815,179 priority patent/US11503228B2/en
Application granted granted Critical
Publication of CN107707834B publication Critical patent/CN107707834B/en
Expired - Fee Related
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides an image processing method for an electronic device. The method includes: detecting the color temperature of the scene in which the current user is located; activating a virtual light source matched to the color temperature of the scene, so as to adjust the color temperature of the background image to be fused such that the color temperature of the scene matches that of the background image; acquiring a first video image of the current user; acquiring a plurality of depth images of the current user; processing each frame of scene image of the first scene video according to the plurality of depth images, so as to obtain the person region of the current user in each frame of scene image and the corresponding person region image; and fusing each person region image with the corresponding background image to obtain a merged image. By matching the color temperature of the scene with that of the background image to be fused, a merged image with a better fusion effect can be obtained. A prompt is issued when the virtual light source is activated, making the fusion process perceptible to the user.

Description

Image processing method and apparatus, electronic device, and computer-readable storage medium
Technical field
The present invention relates to the technical field of image processing, and more particularly to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background technology
In the existing technology for fusing a person with a virtual background, the person to be fused is generally extracted from an image in which the person appears, and the extracted person is then added to the virtual background. In practice, there may be differences between the two images being fused, and adjustments such as shifting position or resizing alone cannot produce a fused image with a good fusion effect.
Summary of the invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art.
Accordingly, a first objective of the present invention is to provide an image processing method that adjusts the color temperature of the objects to be fused according to the ambient color temperature, so that the color temperatures of the fused objects match, thereby solving the problem of poor fusion quality in existing fusion techniques.
A second objective of the present invention is to provide an image processing apparatus.
A third objective of the present invention is to provide an electronic device.
A fourth objective of the present invention is to provide a computer-readable storage medium.
To achieve the above objectives, an embodiment of the first aspect of the present invention provides an image processing method, including:
detecting the color temperature of the scene in which the current user is located;
activating a virtual light source matched to the color temperature of the scene, so as to adjust the color temperature of the background image to be fused such that the color temperature of the scene matches that of the background image;
issuing a prompt message when the matched virtual light source is activated;
acquiring a first video image of the current user;
acquiring a plurality of depth images of the current user;
processing each frame of scene image of the first scene video according to the plurality of depth images, so as to obtain the person region of the current user in each frame of scene image and the corresponding person region image; and
fusing each person region image with the corresponding background image to obtain a merged image.
In the image processing method of the embodiment of the present invention, the color temperature of the scene in which the user is located is detected, a virtual light source for adjusting the background image is activated according to the color temperature of the scene, and the color temperature of the background image is adjusted by the virtual light source, so that the color temperature of the scene matches that of the background image to be fused. This ensures that, after the person region image extracted from the scene image is fused with the background image, a merged image with a better fusion effect is obtained. Because the color temperature factor is taken into account during fusion, the person blends with the background image more naturally, improving the fusion effect and the user experience. Furthermore, a prompt is provided when the virtual light source is turned on, making the fusion process perceptible to the user.
To achieve the above objectives, an embodiment of the second aspect of the present invention provides an image processing apparatus, including:
a visible-light camera, configured to detect the color temperature of the scene in which the current user is located and to acquire a first video image of the current user;
a depth image acquisition component, configured to acquire a plurality of depth images of the current user; and
a processor, configured to:
activate a virtual light source matched to the color temperature of the scene, so as to adjust the color temperature of the background image to be fused such that the color temperature of the scene matches that of the background image;
issue a prompt message when the matched virtual light source is activated;
process each frame of scene image of the first scene video according to the plurality of depth images, so as to obtain the person region of the current user in each frame of scene image and the corresponding person region image; and
fuse each person region image with the corresponding background image to obtain a merged image.
In the image processing apparatus of the embodiment of the present invention, the color temperature of the scene in which the user is located is detected, a virtual light source for adjusting the background image is activated according to the color temperature of the scene, and the color temperature of the background image is adjusted by the virtual light source, so that the color temperature of the scene matches that of the background image to be fused. This ensures that, after the person region image extracted from the scene image is fused with the background image, a merged image with a better fusion effect is obtained. Because the color temperature factor is taken into account during fusion, the person blends with the background image more naturally, improving the fusion effect and the user experience. Furthermore, a prompt is provided when the virtual light source is turned on, making the fusion process perceptible to the user.
To achieve the above objectives, an embodiment of the third aspect of the present invention provides an electronic device, including:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including instructions for performing the image processing method according to the first aspect of the present invention.
To achieve the above objectives, an embodiment of the fourth aspect of the present invention provides a computer-readable storage medium including a computer program for use in combination with an electronic device capable of image capture, the computer program being executable by a processor to perform the image processing method according to the first aspect of the present invention.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will in part become apparent from that description, or may be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present invention.
Fig. 2 is a schematic block diagram of an image processing apparatus provided by an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
Fig. 4 is a schematic flowchart of another image processing method provided by an embodiment of the present invention.
Fig. 5 is a schematic flowchart of another image processing method provided by an embodiment of the present invention.
Fig. 6(a) to Fig. 6(e) are schematic scene diagrams of structured-light measurement according to an embodiment of the present invention.
Fig. 7(a) and Fig. 7(b) are schematic scene diagrams of structured-light measurement according to an embodiment of the present invention.
Fig. 8 is a schematic flowchart of another image processing method provided by an embodiment of the present invention.
Fig. 9 is a schematic flowchart of another image processing method provided by an embodiment of the present invention.
Fig. 10 is a schematic flowchart of another image processing method provided by an embodiment of the present invention.
Fig. 11 is a schematic flowchart of another image processing method provided by an embodiment of the present invention.
Fig. 12 is a schematic flowchart of another image processing method provided by an embodiment of the present invention.
Fig. 13 is a schematic flowchart of another image processing method provided by an embodiment of the present invention.
Fig. 14 is a schematic flowchart of another image processing method provided by an embodiment of the present invention.
Fig. 15 is a schematic flowchart of another image processing method provided by an embodiment of the present invention.
Fig. 16 is a schematic flowchart of another image processing method provided by an embodiment of the present invention.
Fig. 17 is a schematic flowchart of another image processing method provided by an embodiment of the present invention.
Fig. 18 is a schematic flowchart of another image processing method provided by an embodiment of the present invention.
Fig. 19 is a schematic block diagram of another electronic device provided by an embodiment of the present invention.
Fig. 20 is a schematic block diagram of another electronic device provided by an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present invention, and should not be construed as limiting the present invention.
Referring to Figs. 1 to 3, the image processing method of the embodiment of the present invention is used in an electronic device 1000. The image processing method includes:
S101: detecting the color temperature of the scene in which the current user is located.
S102: adjusting the objects to be fused according to the color temperature of the scene, so that the color temperatures of the objects to be fused match; the objects to be fused are the selected background image and/or the person region image of the current user.
S103: acquiring a first video image of the current user.
S104: acquiring a plurality of depth images of the current user.
S105: processing each frame of scene image of the first scene video according to the plurality of depth images, so as to obtain the person region of the current user in each frame of scene image and the corresponding person region image.
S106: fusing each person region image with the corresponding background image to obtain a merged image.
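The final fusion step S106 can be sketched as a simple masked composite: given the person mask produced by the depth-based segmentation of S105, scene pixels are kept inside the mask and background pixels everywhere else. This is an illustrative sketch, not the patent's implementation; the function name and toy data are hypothetical.

```python
import numpy as np

def fuse(scene_frame, person_mask, background):
    # S106: keep scene pixels where the person mask is set,
    # background pixels everywhere else
    return np.where(person_mask[..., None], scene_frame, background)

# toy 2x2 RGB example: the person occupies the left column
frame = np.full((2, 2, 3), 200, dtype=np.uint8)
bg = np.full((2, 2, 3), 50, dtype=np.uint8)
mask = np.array([[True, False], [True, False]])
merged = fuse(frame, mask, bg)
# left column comes from the scene frame, right column from the background
```

The `[..., None]` broadcast lets a single 2-D boolean mask select whole RGB pixels at once.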
In some application scenarios, for example, the current user may wish to hide the current background during a video call with another party. In that case, the image processing method of the embodiment of the present invention can be used to fuse the person region image corresponding to the current user with a predetermined background, and the merged image after fusion is then displayed to the other party.
Referring to Fig. 3, the image processing method of the embodiment of the present invention can be implemented by the image processing apparatus 100 of the embodiment of the present invention. The image processing apparatus 100 is used in the electronic device 1000 and includes a visible-light camera 11, a depth image acquisition component 12, and a processor 20. S101 and S103 can be implemented by the visible-light camera 11, S104 can be implemented by the depth image acquisition component 12, and S102, S105, and S106 can be implemented by the processor 20.
In the embodiment of the present invention, the visible-light camera 11 can detect the color temperature of the scene in which the current user is located by means of its internal photosensitive chip while capturing the scene. After the scene color temperature is obtained, in order that the person can be better fused into the background image to be fused, the processor 20 can adjust the color temperature of the scene and/or of the background image to be fused according to the color temperature of the scene, so that the two color temperatures match. The color temperature difference perceived by the human eye in the fused image is then smaller, yielding a better fusion effect and improving the user experience.
After the colour temperature of scene is got, processor 20 can further get the color of background image to be fused Temperature, two colour temperatures are compared, to judge whether two colour temperatures match, specifically, processor 20 is getting the color of scene After the colour temperature of gentle background image, by the colour temperature of scene compared with the colour temperature of background image to be fused, two are obtained The difference of colour temperature, when difference exceeds default colour temperature threshold value, illustrate that the colour temperature of scene and the colour temperature of background image mismatch, need Object to be fused is carried out to be adjusted.In some embodiments, when two colour temperatures mismatch, one of them can be selected It is adjusted, or adjusts simultaneously.On the process specifically adjusted, reference can be made to the introduction in subsequent embodiment, no longer superfluous herein State.
After the color temperature of the scene and/or of the background image to be fused has been adjusted according to the color temperature of the scene so that the two match, the visible-light camera 11 continuously captures the scene and acquires the first video image of the current user. In this embodiment, the first video image includes multiple frames of scene images.
Further, the depth image acquisition component 12 can be used to acquire a plurality of depth images of the current user. The scene image may be a grayscale image or a color image, and the depth image characterizes the depth information of each person or object in the scene containing the current user. The scene range of the scene image is consistent with that of the depth image, and for each pixel in the scene image the corresponding depth information can be found in the depth image.
The user is usually in motion during video capture and may change position constantly; in that case each frame of scene image in the first video image corresponds to its own depth image. Alternatively, the user may remain still across multiple frames of scene images, in which case those frames of the first video image may share a single depth image.
Further, the processor 20 can process the scene images of the first scene video according to the plurality of depth images, so as to obtain the person region of the current user in each scene image and the corresponding person region image, and fuse each person region image with the corresponding background image to obtain a merged image.
Existing methods for segmenting a person from the background mainly rely on the similarity and discontinuity of adjacent pixels in terms of pixel values, but such segmentation is easily affected by environmental factors such as ambient lighting. In the embodiment of the present invention, the image processing apparatus 100 and the electronic device 1000 extract the person region from the scene image by acquiring a depth image of the current user. Because the acquisition of the depth image is not easily affected by factors such as lighting or the color distribution in the scene, the person region extracted from the depth image is more accurate; in particular, the boundary of the person region can be located accurately. Further, the more accurate person region image produces a better merged image when fused with the background.
In the image processing method provided by the embodiment of the present invention, the color temperature of the scene in which the user is located is detected, and the color temperature of the objects to be fused is adjusted according to it, so that the color temperature of the scene matches that of the background image to be fused. This ensures that, after the person region image extracted from the scene image is fused with the background image, a merged image with a better fusion effect is obtained. Because the color temperature factor is taken into account during fusion, the person blends with the background image more naturally, improving the fusion effect and the user experience.
The image processing apparatus 100 of the embodiment of the present invention can be applied to the electronic device 1000 of the embodiment of the present invention. In other words, the electronic device 1000 includes the image processing apparatus 100.
In some embodiments, the electronic device 1000 includes a mobile phone, a tablet computer, a notebook computer, a smart bracelet, a smart watch, a smart helmet, smart glasses, and the like.
Referring to Fig. 4, in some embodiments, step S104 of acquiring a plurality of depth images of the current user includes:
S201: projecting structured light onto the current user;
S202: capturing the structured-light image modulated by the current user;
S203: demodulating the phase information corresponding to each pixel of the structured-light image to obtain one of the plurality of depth images.
Referring again to Fig. 3, in some embodiments, the depth image acquisition component 12 includes a structured-light projector 121 and a structured-light camera 122. S201 can be implemented by the structured-light projector 121, and S202 and S203 can be implemented by the structured-light camera 122. In other words, the structured-light projector 121 projects structured light onto the current user; the structured-light camera 122 captures the structured-light image modulated by the current user, and demodulates the phase information corresponding to each pixel of the structured-light image to obtain the depth image.
Specifically, after the structured-light projector 121 projects structured light of a certain pattern onto the face and body of the current user, a structured-light image modulated by the current user is formed on the surface of the user's face and body. The structured-light camera 122 captures the modulated structured-light image and demodulates it to obtain the depth image. The pattern of the structured light may be laser stripes, Gray codes, sinusoidal fringes, non-uniform speckle, or the like.
Referring to Fig. 5, in some embodiments, step S203 of demodulating the phase information corresponding to each pixel of the structured-light image to obtain one of the plurality of depth images includes:
S301: demodulating the phase information corresponding to each pixel in the structured-light image;
S302: converting the phase information into depth information;
S303: generating the depth image according to the depth information.
Referring again to Fig. 2, in some embodiments, S301, S302, and S303 can be implemented by the structured-light camera 122. In other words, the structured-light camera 122 can further demodulate the phase information corresponding to each pixel in the structured-light image, convert the phase information into depth information, and generate the depth image according to the depth information.
Specifically, compared with the unmodulated structured light, the phase information of the modulated structured light is changed, so the structured light appearing in the structured-light image is distorted, and the change in phase information characterizes the depth information of the object. Therefore, the structured-light camera 122 first demodulates the phase information corresponding to each pixel in the structured-light image, and then calculates the depth information from the phase information, thereby obtaining the final depth image.
In order that those skilled in the art may more clearly understand the process of acquiring the depth image of the face and body of the current user from structured light, the concrete principle is illustrated below using the widely applied fringe projection technique as an example. The fringe projection technique belongs to area structured light in the broad sense.
As shown in Fig. 6(a), when area structured light is used for projection, sinusoidal fringes are first generated by computer programming and projected onto the measured object by the structured-light projector 121. The structured-light camera 122 then captures the degree to which the fringes are bent after modulation by the object; the curved fringes are demodulated to obtain the phase, which is then converted into depth information to obtain the depth image. To avoid errors or error coupling, the depth image acquisition component 12 must be calibrated before depth information is collected with structured light. Calibration includes calibration of the geometric parameters (for example, the relative position between the structured-light camera 122 and the structured-light projector 121), of the internal parameters of the structured-light camera 122, of the internal parameters of the structured-light projector 121, and so on.
Specifically, in the first step, sinusoidal fringes are generated by computer programming. Since the distorted fringes must later be used to recover the phase, for example by the four-step phase-shifting method, four fringe patterns with a phase difference of π/2 are generated. The structured-light projector 121 projects these four patterns onto the measured object (the mask shown in Fig. 6(a)) in a time-multiplexed manner, and the structured-light camera 122 collects the image on the left of Fig. 6(b) while reading the fringes of the reference plane shown on the right of Fig. 6(b).
In the second step, phase recovery is performed. The structured-light camera 122 calculates the modulated phase map from the four collected fringe patterns (i.e., the structured-light images); the result at this point is a wrapped phase map. Because the result of the four-step phase-shifting algorithm is computed by an arctangent function, the phase of the modulated structured light is limited to the range (-π, π]; that is, whenever the modulated phase exceeds this range it wraps around and starts again. The resulting principal phase values are shown in Fig. 6(c).
During phase recovery, jump-removal (unwrapping) processing is required to recover the wrapped phase into a continuous phase. As shown in Fig. 6(d), the left side is the modulated continuous phase map, and the right side is the reference continuous phase map.
In the third step, the phase difference (i.e., the phase information) is obtained by subtracting the reference continuous phase from the modulated continuous phase; this phase difference characterizes the depth information of the measured object relative to the reference plane. The phase difference is then substituted into the phase-depth conversion formula (whose parameters are obtained by calibration), yielding the three-dimensional model of the object under test shown in Fig. 6(e).
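The phase-recovery step can be sketched numerically. For four fringe images I_k = A + B·cos(φ + kπ/2), k = 0…3, the wrapped phase is φ = atan2(I3 − I1, I0 − I2), since I3 − I1 = 2B·sin φ and I0 − I2 = 2B·cos φ. The synthetic ramp below is an illustration only; in a real system unwrapping and subtraction of the reference phase would follow, as described above.

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Four-step phase shifting: recover the wrapped phase in (-pi, pi]."""
    return np.arctan2(i3 - i1, i0 - i2)

# synthetic test: a known phase ramp modulated into four fringe patterns
x = np.linspace(0, 1, 64)
true_phase = 2.0 * x - 1.0          # stays inside (-pi, pi], so no unwrapping needed
A, B = 0.5, 0.4                     # fringe bias and amplitude
frames = [A + B * np.cos(true_phase + k * np.pi / 2) for k in range(4)]
phi = wrapped_phase(*frames)
# the recovered phase matches the ground truth to floating-point accuracy
```

Note that the bias A and amplitude B cancel in the arctangent, which is why the four-step method is insensitive to uneven illumination.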
It should be understood that, in practical applications, depending on the specific application scenario, the structured light employed in the embodiments of the present invention may be any other pattern besides the above grating.
As a possible implementation, the present invention may also use speckle structured light to collect the depth information of the current user.
Specifically, the method of obtaining depth information with speckle structured light uses a diffractive element that is essentially a flat plate. The diffractive element has a relief diffraction structure with a particular phase distribution, and its cross section is a stepped relief structure with two or more levels. The thickness of the substrate in the diffractive element is approximately 1 micron, the height of each step is non-uniform, and the heights may range from 0.7 to 0.9 micron. The structure shown in Fig. 7(a) is the local diffraction structure of the collimating beam-splitting element of this embodiment, and Fig. 7(b) is a cross-sectional side view along section A-A, with both abscissa and ordinate in microns. The speckle pattern generated by speckle structured light is highly random, and the pattern changes with distance. Therefore, before depth information is acquired with speckle structured light, the speckle patterns in space must first be calibrated. For example, within a range of 0 to 4 meters from the structured-light camera 122, a reference plane is taken every 1 centimeter, so 400 speckle images are saved after calibration; the smaller the calibration spacing, the higher the precision of the acquired depth information. Then, the structured-light projector 121 projects the speckle structured light onto the measured object (i.e., the current user), and the height differences on the surface of the measured object change the speckle pattern of the projected speckle structured light. After the structured-light camera 122 captures the speckle pattern (i.e., the structured-light image) projected onto the measured object, the speckle pattern is cross-correlated one by one with the 400 speckle images saved during calibration, yielding 400 correlation images. The position of the measured object in space shows a peak in the correlation images; superimposing these peaks and performing interpolation yields the depth information of the measured object.
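The per-reference-plane correlation search can be sketched as follows, with a tiny 1-D analogue standing in for the calibrated speckle images (a real system correlates full 2-D patterns against all 400 reference planes; the data and function name here are hypothetical):

```python
import numpy as np

def best_reference(captured, references):
    """Normalized cross-correlation of the captured speckle pattern against
    each calibrated reference; the peak index indicates which reference
    plane (and hence which distance) the object surface is closest to."""
    c = (captured - captured.mean()) / (captured.std() + 1e-12)
    scores = []
    for ref in references:
        r = (ref - ref.mean()) / (ref.std() + 1e-12)
        scores.append(float(np.mean(c * r)))
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(0)
references = [rng.standard_normal(256) for _ in range(5)]  # stand-ins for calibrated planes
captured = references[3] + 0.1 * rng.standard_normal(256)  # object near plane 3, plus noise
idx, scores = best_reference(captured, references)
# the captured pattern correlates best with reference plane 3
```

Interpolating between the correlation scores of adjacent planes, as the text describes, refines the depth estimate below the 1 cm calibration spacing.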
An ordinary diffractive element diffracts a beam into many diffracted beams, but the intensities of the beams differ greatly, so the risk of harm to the human eye is large; and even if the diffracted light is diffracted again, the uniformity of the resulting beams remains low. The projection effect onto the measured object using beams diffracted by an ordinary diffractive element is therefore poor. In this embodiment a collimating beam-splitting element is used. This element not only collimates non-collimated light but also splits the beam: the non-collimated light reflected by the mirror is emitted, after passing through the collimating beam-splitting element, as multiple collimated beams at different angles, and the cross-sectional areas and energy fluxes of these emitted beams are approximately equal. As a result, the projection effect of the speckle pattern obtained by diffracting these beams is better. At the same time, the laser output is dispersed into multiple beams, further reducing the risk of harm to the human eye; and compared with other uniformly arranged structured light, speckle structured light consumes less power while achieving the same collection effect.
Referring to Fig. 8, in some embodiments, S105, processing each frame of scene image of the first scene video according to the multiple depth images to obtain the person region of the current user in each frame of scene image and obtain the corresponding person region image, includes the following steps:
S401, for each frame of scene image, identifying the face region in the scene image.
S402, obtaining depth information corresponding to the face region from the depth image corresponding to the scene image.
S403, determining the depth range of the person region according to the depth information of the face region.
S404, determining, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, to obtain the person region image.
Referring again to Fig. 2, in some embodiments, S401, S402, S403 and S404 can be implemented by the processor 20.
In other words, the processor 20 can be further used to identify the face region in the scene image, obtain depth information corresponding to the face region from the depth image, determine the depth range of the person region according to the depth information of the face region, and determine, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, to obtain the person region image.
Specifically, a trained deep learning model may first be used to identify the face region in the scene image, and then the depth information of the face region can be determined according to the correspondence between the scene image and the depth image. Since the face region includes features such as the nose, eyes, ears and lips, the depth data corresponding to each feature of the face region in the depth image differ; for example, when the face faces the depth image acquisition component 12, in the depth image captured by the depth image acquisition component 12, the depth data corresponding to the nose may be smaller, while the depth data corresponding to the ears may be larger. Therefore, the depth information of the face region may be a single value or a numerical range. When the depth information of the face region is a single value, the value can be obtained by averaging the depth data of the face region, or by taking their median.
Since the person region includes the face region, in other words, the person region and the face region lie within a certain depth range, after the processor 20 determines the depth information of the face region, it can set the depth range of the person region according to the depth information of the face region, and then extract, according to the depth range of the person region, the person region that falls within that depth range and is connected with the face region, to obtain the person region image.
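As a rough sketch of S401 to S404, assuming the face region is already available as a boolean mask registered to the depth image (the tolerance value and 4-connectivity flood fill below are illustrative choices, not specified by the embodiment):

```python
import numpy as np

def extract_person_region(depth: np.ndarray, face_mask: np.ndarray,
                          tolerance: float = 0.5) -> np.ndarray:
    """Grow a person mask from the face region: take the depth range of the
    face pixels, widen it by `tolerance`, and flood-fill from the face over
    connected pixels whose depth falls inside that range."""
    face_depths = depth[face_mask]
    lo, hi = face_depths.min() - tolerance, face_depths.max() + tolerance
    in_range = (depth >= lo) & (depth <= hi)
    person = np.zeros_like(face_mask)
    stack = list(zip(*np.nonzero(face_mask)))  # seed from face pixels
    while stack:
        y, x = stack.pop()
        if person[y, x] or not in_range[y, x]:
            continue
        person[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < depth.shape[0] and 0 <= nx < depth.shape[1]:
                stack.append((ny, nx))
    return person
```

The flood fill enforces the "connected with the face region" condition, so a separate object that happens to lie at the same depth but does not touch the person is excluded.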
In this way, the person region image can be extracted from the scene image according to the depth information. Since the acquisition of the depth information is not affected by factors such as illumination and color temperature in the environment, the extracted person region image is more accurate.
Referring to Fig. 9, in some embodiments, the image processing method further includes the following steps:
S501, processing the scene image to obtain a full-field edge image of the scene image.
S502, correcting the person region image according to the full-field edge image.
Referring again to Fig. 2, in some embodiments, S501 and S502 can be implemented by the processor 20.
In other words, the processor 20 can also be used to process the scene image to obtain the full-field edge image of the scene image, and to correct the person region image according to the full-field edge image.
The processor 20 first performs edge extraction on the scene image to obtain the full-field edge image, in which the edge lines include edge lines of the current user and of background objects in the scene where the current user is located. Specifically, edge extraction may be performed on the scene image by a Canny operator. The core of the Canny edge-extraction algorithm mainly includes the following steps: first, the scene image is convolved with a 2D Gaussian filter template to eliminate noise; then, a differential operator is used to obtain the gradient value of the gray level of each pixel, and the gradient direction of the gray level of each pixel is calculated from the gradient values; through the gradient direction, the neighboring pixels of each pixel along its gradient direction can be found; then, each pixel is traversed, and if the gray value of a pixel is not the largest compared with the gray values of the two neighboring pixels along its gradient direction, the pixel is not considered an edge point. In this way, the pixels at edge positions in the scene image can be determined, so that the full-field edge image after edge extraction is obtained.
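The Gaussian smoothing, gradient and non-maximum-suppression steps of the Canny operator described above can be sketched as follows. This is a simplified illustration without the hysteresis-thresholding stage; the kernel size and sigma are illustrative choices.

```python
import numpy as np

def gaussian_kernel(size: int = 5, sigma: float = 1.4) -> np.ndarray:
    """2D Gaussian filter template used to eliminate noise."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def correlate2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide the kernel over the edge-padded image; for the symmetric
    Gaussian this equals convolution, and for Sobel only the sign differs."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
    return out

def edge_map(img: np.ndarray) -> np.ndarray:
    """Gaussian smoothing, gray-level gradient, and non-maximum
    suppression along the gradient direction (hysteresis omitted)."""
    smoothed = correlate2d(img.astype(float), gaussian_kernel())
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx = correlate2d(smoothed, sobel_x)
    gy = correlate2d(smoothed, sobel_x.T)
    mag = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    edges = np.zeros_like(mag)
    for y in range(1, mag.shape[0] - 1):
        for x in range(1, mag.shape[1] - 1):
            a = angle[y, x]
            if a < 22.5 or a >= 157.5:
                n1, n2 = mag[y, x - 1], mag[y, x + 1]
            elif a < 67.5:
                n1, n2 = mag[y - 1, x + 1], mag[y + 1, x - 1]
            elif a < 112.5:
                n1, n2 = mag[y - 1, x], mag[y + 1, x]
            else:
                n1, n2 = mag[y - 1, x - 1], mag[y + 1, x + 1]
            # keep the pixel only if its gradient magnitude is the local
            # maximum along its gradient direction
            if mag[y, x] >= n1 and mag[y, x] >= n2:
                edges[y, x] = mag[y, x]
    return edges
```

A production implementation would use an optimized library routine (e.g. an OpenCV-style Canny) rather than these Python loops; the sketch only mirrors the steps named in the text.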
After obtaining the full-field edge image, the processor 20 corrects the person region image according to the full-field edge image. It can be understood that the person region image is obtained by merging all pixels in the scene image that are connected with the face region and fall within the set depth range; in some scenes, there may be objects other than the person that are connected with the face region and fall within the depth range. Therefore, in order to make the extracted person region image more accurate, the full-field edge image can be used to correct the person region image.
Further, the processor 20 may also perform a second correction on the corrected person region image, for example, performing dilation on the corrected person region image to expand it so as to retain the edge details of the person region image.
After the processor 20 obtains the person region image, the person region image can be fused with the predetermined two-dimensional background to obtain the merged image. In some embodiments, the predetermined two-dimensional background may be selected randomly by the processor 20, or selected by the current user. The fused merged image may be displayed on the display screen of the electronic apparatus 1000, or printed by a printer connected to the electronic apparatus 1000.
Since the current user is in a video call with the other party, the visible light camera 11 needs to capture the scene image of the current user in real time, the depth image acquisition component 12 also needs to collect the depth image corresponding to the current user in real time, and the processor 20 processes the scene images and depth images collected in real time in a timely manner, so that the other party can see a smooth video picture composed of multiple frames of merged images.
When the color temperature of the scene does not match the color temperature of the background image to be fused, one of them can be selected for adjustment, or both can be adjusted simultaneously. In some embodiments, fill light is applied to the scene according to the color temperature of the scene, so that the supplemental light irradiates the scene, thereby adjusting or changing the color temperature of the scene so that the color temperature of the scene matches the color temperature of the background image.
Referring to Fig. 10, in the case where the color temperature of the scene is adjusted so that the color temperature of the scene matches the color temperature of the background image, the image processing method specifically includes the following steps:
S601, detecting the color temperature of the scene where the current user is located.
S602, applying fill light to the scene of the current user according to the color temperature of the scene, so that the color temperature of the scene matches the color temperature of the background image to be fused.
S603, obtaining the first scene video of the current user.
S604, obtaining multiple depth images of the current user.
S605, processing each frame of scene image of the first scene video according to the multiple depth images, to obtain the person region of the current user in each frame of scene image and obtain the corresponding person region image.
S606, fusing each person region image with the corresponding background image to obtain a merged image.
Referring to Fig. 3, the image processing method of the embodiments of the present invention can be implemented by the image processing apparatus 100 of the embodiments of the present invention. Specifically, S601 and S603 can be implemented by the visible light camera 11, S604 can be implemented by the depth image acquisition component 12, and S602, S605 and S606 can be implemented by the processor 20.
For the detailed processes of S601, S603, S605 and S606, reference can be made to the relevant description in the above embodiments, which is not repeated here.
Referring to Figure 11, in some embodiments, S602, applying fill light to the scene of the current user according to the color temperature of the scene so that the color temperature of the scene matches the color temperature of the background image to be fused, specifically includes the following steps:
S701, determining the target color of the light source that needs to be supplemented according to the color temperature of the scene;
S702, controlling the fill-light array to emit light according to the target color, so that the fill-light array emits a light source of the target color that irradiates the scene.
Referring again to Fig. 2, in some embodiments, S701 and S702 can be realized by processor 20.
After obtaining the color temperature of the scene, the processor 20 can further obtain the color temperature of the background image to be fused. In order to make the color temperature of the scene match the color temperature of the background image, the target color of the light source to be supplemented can be determined according to the color temperature of the scene and the color temperature of the background image. After the target color is determined, a light source of that target color can be supplemented into the scene.
In the embodiments of the present invention, a fill-light array is further provided in the image processing apparatus 100, and fill light can be applied to the scene through the fill-light array. Specifically, the processor 20 controls the fill-light array to emit a light source of the target color according to the target color. In some embodiments, the fill-light array is formed by multiple fill lamps. Multiple lighting strategies are preset for the fill-light array, with different lighting strategies used to emit light sources of different colors, wherein a lighting strategy includes the on/off state of each fill lamp. The lighting strategies are stored in the processor 20. After the target color is determined, the processor 20 can match the target color against the stored lighting strategies to obtain the lighting strategy corresponding to the target color, and then control the on/off state of each fill lamp according to that strategy, so that the fill-light array emits a light source of the target color. The light source can irradiate the scene and change the color temperature of the scene, so that the color temperature of the scene matches the color temperature of the background image.
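The lookup from target color to lighting strategy can be sketched as follows. The strategy table, lamp count and color names below are purely hypothetical, since the embodiment only states that per-lamp on/off strategies are stored in the processor 20.

```python
from typing import Dict, Tuple

# Hypothetical strategy table: True means the fill lamp is switched on.
LIGHTING_STRATEGIES: Dict[str, Tuple[bool, ...]] = {
    "warm_orange": (True, True, False, False),
    "neutral_white": (True, False, True, False),
    "cool_blue": (False, False, True, True),
}

def fill_light_switch_states(target_color: str) -> Tuple[bool, ...]:
    """Match the target color against the stored lighting strategies and
    return the on/off state of each fill lamp in the array."""
    strategy = LIGHTING_STRATEGIES.get(target_color)
    if strategy is None:
        raise ValueError(f"no lighting strategy stored for {target_color!r}")
    return strategy
```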
In some embodiments, before applying fill light to the scene according to the color temperature of the scene, the processor 20 may also judge whether the color temperature of the scene matches the color temperature of the background image. Specifically, after obtaining the color temperature of the scene and the color temperature of the background image, the color temperature of the scene is compared with the color temperature of the background image to be fused to obtain the difference between the two color temperatures; when the difference exceeds a preset color temperature threshold, it indicates that the color temperature of the scene does not match the color temperature of the background image, and the color temperature of the scene can be adjusted.
When the color temperature of the scene does not match the color temperature of the background image to be fused, one of them can be selected for adjustment, or both can be adjusted simultaneously. In some embodiments, the color temperature of the background image to be fused is adjusted according to the color temperature of the scene, thereby adjusting or changing the color temperature of the background image so that the color temperature of the scene matches the color temperature of the background image.
Referring to Figure 12, in the case where the background image is adjusted according to the color temperature of the scene so that the color temperature of the scene matches the color temperature of the background image, the image processing method specifically includes the following steps:
S801, detecting the color temperature of the scene where the current user is located.
S802, starting the virtual light source matching the color temperature of the scene, to adjust the color temperature of the background image to be fused so that the color temperature of the scene matches the color temperature of the background image.
S803, obtaining the first scene video of the current user.
S804, obtaining multiple depth images of the current user.
S805, processing each frame of scene image of the first scene video according to the multiple depth images, to obtain the person region of the current user in each frame of scene image and obtain the corresponding person region image.
S806, fusing each person region image with the corresponding background image to obtain a merged image.
Referring to Fig. 3, the image processing method of the embodiments of the present invention can be implemented by the image processing apparatus 100 of the embodiments of the present invention. Specifically, S801 and S803 can be implemented by the visible light camera 11, S804 can be implemented by the depth image acquisition component 12, and S802, S805 and S806 can be implemented by the processor 20.
For the detailed processes of S801, S803, S804, S805 and S806, reference can be made to the relevant description in the above embodiments, which is not repeated here.
Referring to Figure 13, in some embodiments, S802, starting the virtual light source matching the color temperature of the scene to adjust the color temperature of the background image to be fused so that the color temperature of the scene matches the color temperature of the background image, specifically includes the following steps:
S901, determining, according to the color temperature of the scene, the target virtual light source that needs to be turned on from multiple virtual light sources.
In some embodiments, multiple virtual light sources are set in advance for each background image, and each virtual light source corresponds to a different color temperature. An identifier is set for each color temperature, so that a correspondence between color temperatures and virtual light sources is established, and the correspondence is stored in the processor 20 in advance. After the color temperature of the scene is obtained, the processor can query the correspondence according to the identifier of that color temperature, determine the virtual light source matching the color temperature, and take that virtual light source as the target virtual light source that needs to be turned on.
S902, starting the target virtual light source to irradiate the background image, and adjusting the color temperature of the background image so that the color temperature of the scene matches the color temperature of the background image.
After the target virtual light source is determined, the processor 20 can start the target virtual light source so that it emits a light source. After that light source irradiates the background image, the color temperature of the background image changes under its influence, so that the color temperature of the background image matches the color temperature of the scene.
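A minimal sketch of the correspondence lookup in S901. The color-temperature ranges and source identifiers below are hypothetical, since the embodiment states that such a correspondence is stored in the processor 20 but does not specify its form.

```python
# Hypothetical correspondence between color-temperature ranges (in kelvin)
# and virtual light source identifiers.
VIRTUAL_LIGHT_SOURCES = (
    ((2000.0, 4000.0), "vls_warm"),
    ((4000.0, 5500.0), "vls_neutral"),
    ((5500.0, 8000.0), "vls_cool"),
)

def select_target_virtual_light_source(scene_ct: float) -> str:
    """Query the stored correspondence with the scene's color temperature
    and return the target virtual light source that needs to be turned on."""
    for (lo, hi), source_id in VIRTUAL_LIGHT_SOURCES:
        if lo <= scene_ct < hi:
            return source_id
    raise ValueError(f"no virtual light source matches {scene_ct} K")
```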
In some embodiments, before starting the virtual light source matching the color temperature of the scene, the processor 20 may also judge whether the color temperature of the scene matches the color temperature of the background image. Specifically, after obtaining the color temperature of the scene and the color temperature of the background image, the color temperature of the scene is compared with the color temperature of the background image to be fused to obtain the difference between the two color temperatures; when the difference exceeds the preset color temperature threshold, it indicates that the color temperature of the scene does not match the color temperature of the background image, and the color temperature of the background image can be adjusted.
Referring to Figure 14, in the case where the background image is adjusted according to the color temperature of the scene so that the color temperature of the scene matches the color temperature of the background image, the image processing method specifically includes the following steps:
S1001, detecting the color temperature of the scene where the current user is located.
S1002, adjusting the color temperature of the background image to be fused using the color temperature of the scene, so that the color temperature of the scene matches the color temperature of the background image.
S1003, obtaining the first scene video of the current user.
S1004, obtaining multiple depth images of the current user.
S1005, processing each frame of scene image of the first scene video according to the multiple depth images, to obtain the person region of the current user in each frame of scene image and obtain the corresponding person region image.
S1006, fusing each person region image with the corresponding background image to obtain a merged image.
Referring to Fig. 3, the image processing method of the embodiments of the present invention can be implemented by the image processing apparatus 100 of the embodiments of the present invention. Specifically, S1001 and S1003 can be implemented by the visible light camera 11, S1004 can be implemented by the depth image acquisition component 12, and S1002, S1005 and S1006 can be implemented by the processor 20.
For the detailed processes of S1001, S1003, S1004, S1005 and S1006, reference can be made to the relevant description in the above embodiments, which is not repeated here.
Referring to Figure 15, in some embodiments, S1002, adjusting the color temperature of the background image to be fused using the color temperature of the scene so that the color temperature of the scene matches the color temperature of the background image, specifically includes the following steps:
S1101, weighting the color temperature of the scene and the color temperature of the background image to obtain a target color temperature.
Weights can be set in advance for the color temperatures of the scene and the background image, and the corresponding weights are saved in the processor 20. After obtaining the color temperature of the scene and the color temperature of the background image, the processor 20 can obtain the weight corresponding to the scene and the weight corresponding to the background image respectively, multiply the color temperature of the scene by the weight of the scene to obtain a first value, multiply the color temperature of the background image by the weight of the background image to obtain a second value, and add the first value and the second value to obtain a color temperature value, which is the target color temperature.
S1102, adjusting the color temperature of the background image to the target color temperature.
After the target color temperature is obtained, the color temperature of the background image can be directly adjusted to the target color temperature. Since the difference between the color temperature of the scene and the color temperature of the background image is taken into account in the weighting process, the target color temperature is closer to the color temperature of the scene while retaining the color characteristics of the original color temperature of the background image, so that the background image does not change sharply. Thus, after the person region image of the current user is fused into the background image, the fusion effect is better and more natural, and the user's needs can be satisfied.
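The weighting in S1101 can be expressed as follows; the default weight values are illustrative, since the embodiment only says the weights are preset and stored in the processor 20.

```python
def target_color_temperature(scene_ct: float, background_ct: float,
                             scene_weight: float = 0.6,
                             background_weight: float = 0.4) -> float:
    """First value = scene color temperature x scene weight; second value =
    background color temperature x background weight; the target color
    temperature is their sum."""
    return scene_ct * scene_weight + background_ct * background_weight
```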
As a possible implementation, the color temperature of the scene may be taken directly as the target color temperature, and the color temperature of the background image to be fused is then replaced with the target color temperature. In this way, the color temperature of the background image is completely consistent with the color temperature of the scene, and when the person region image of the current user is fused into the background image, the fusion effect is more natural, avoiding a result that looks visually abrupt to the user after fusion.
In some embodiments, referring to Figure 16, in the case where the background image is adjusted according to the color temperature of the scene so that the color temperature of the scene matches the color temperature of the background image, the image processing method specifically includes the following steps:
S1201, detecting the color temperature of the scene where the current user is located.
S1202, starting the virtual light source matching the color temperature of the scene, to adjust the color temperature of the background image to be fused so that the color temperature of the scene matches the color temperature of the background image.
S1203, sending a reminder message when the matching virtual light source is started.
S1204, obtaining the first scene video of the current user.
S1205, obtaining multiple depth images of the current user.
S1206, processing each frame of scene image of the first scene video according to the multiple depth images, to obtain the person region of the current user in each frame of scene image and obtain the corresponding person region image.
S1207, fusing each person region image with the corresponding background image to obtain a merged image.
Referring to Fig. 3, the image processing method of the embodiments of the present invention can be implemented by the image processing apparatus 100 of the embodiments of the present invention. Specifically, S1201 and S1204 can be implemented by the visible light camera 11, S1205 can be implemented by the depth image acquisition component 12, and S1202, S1203, S1206 and S1207 can be implemented by the processor 20.
For the detailed processes of S1201, S1202 and S1204 to S1207, reference can be made to the relevant description in the above embodiments, which is not repeated here.
In the embodiments of the present invention, in order to add sound to the image fusion, the turning on of the virtual light source can be taken as a monitored event. When the processor 20 detects that the virtual light source is turned on, that is, at the moment the virtual light source is turned on, a reminder message is triggered; the reminder message may be a sound, a vibration, or the like.
Referring to Figure 17, in some embodiments, S1203, sending a reminder message when the matching virtual light source is started, specifically includes the following steps:
S1301, monitoring the turn-on event of the virtual light source.
In the embodiments of the present invention, a turn-on event is set in advance for turning on the virtual light source, and the turn-on event is stored in the processor 20. The notification component of the processor 20 performs monitoring; when a virtual light source is turned on, a turn-on event occurs, and the processor 20 can detect the turn-on event.
S1302, when the turn-on event is detected, obtaining the reminder message matching the target virtual light source according to the identifier of the target virtual light source.
In the embodiments of the present invention, different virtual light sources may correspond to different types of reminder messages. An association is established in advance between virtual light sources and reminder messages and stored in the processor 20. The processor 20 can query the association according to the identifier of the target virtual light source that needs to be turned on, and then obtain the reminder message matching the target virtual light source. The reminder message may include a sound, a vibration, a tone, and the like.
S1303, sending the reminder message in the form corresponding to the target virtual light source.
Different virtual light sources may correspond to different reminder forms; for example, when the reminder message is a sound, different volumes and/or reminder durations can be set. After the reminder message is determined, the processor 20 can send the reminder message in the form corresponding to the target virtual light source.
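S1301 to S1303 can be sketched as an event handler over a stored association table. The reminder entries below are hypothetical, as the embodiment only states that virtual light sources and reminder forms are associated and stored in the processor 20.

```python
# Hypothetical association between virtual light source identifiers and
# reminder messages with their forms (kind, volume, duration).
REMINDERS = {
    "vls_warm": {"kind": "sound", "volume": 3, "duration_s": 1.0},
    "vls_cool": {"kind": "vibration", "duration_s": 0.5},
}

def on_virtual_light_source_opened(source_id: str) -> dict:
    """When the turn-on event of a virtual light source is detected, look up
    the reminder matching its identifier so it can be sent in the
    corresponding form."""
    reminder = REMINDERS.get(source_id)
    if reminder is None:
        raise KeyError(f"no reminder associated with {source_id!r}")
    return reminder
```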
In the image processing method provided in this embodiment, a reminder message is added when the virtual light source is turned on, thereby constructing a fusion scene with sound. This increases interest and allows the user to perceive that the fusion is occurring, improving the user experience.
In some embodiments, before starting the virtual light source matching the color temperature of the scene, the processor 20 may also judge whether the color temperature of the scene matches the color temperature of the background image. Specifically, after obtaining the color temperature of the scene and the color temperature of the background image, the color temperature of the scene is compared with the color temperature of the background image to be fused to obtain the difference between the two color temperatures; when the difference exceeds the preset color temperature threshold, it indicates that the color temperature of the scene does not match the color temperature of the background image, and the color temperature of the scene and the color temperature of the background image can be adjusted simultaneously so that the color temperature of the scene matches the color temperature of the background image.
Referring to Figure 18, adjusting the color temperature of the scene and the color temperature of the background image simultaneously so that the color temperature of the scene matches the color temperature of the background image specifically includes the following steps:
S1401, obtaining the difference between the color temperature of the scene and the color temperature of the background image.
S1402, adjusting the color temperature of the scene and the color temperature of the background image according to the difference, to eliminate the difference between the color temperature of the scene and the color temperature of the background image, so that the color temperature of the scene matches the color temperature of the background image.
Specifically, the processor 20 can compare the magnitudes of the color temperature of the scene and the color temperature of the background image, and then reduce the color temperature of the adjustment object with the larger color temperature according to the difference, for example, lowering the color temperature of that adjustment object from its current value by half of the difference. Further, the color temperature of the adjustment object with the smaller color temperature is raised according to the difference, for example, raising the color temperature of that adjustment object from its current value by half of the difference.
For example, if the color temperature of the scene is 2000 K higher than the color temperature of the background image, the processor 20 can lower the color temperature of the scene by 1000 K and raise the color temperature of the background image by 1000 K.
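The simultaneous adjustment of S1401 and S1402, moving each color temperature half of the difference toward the other, can be sketched as:

```python
def equalize_color_temperatures(scene_ct: float,
                                background_ct: float) -> tuple:
    """Lower the higher color temperature and raise the lower one, each by
    half of their difference, so the two end up equal."""
    diff = scene_ct - background_ct
    return scene_ct - diff / 2.0, background_ct + diff / 2.0
```

With the example in the text, a 7000 K scene and a 5000 K background both end up at 6000 K; the same formula also handles the case where the background is the warmer of the two.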
In this embodiment, since the difference between the color temperature of the scene and the color temperature of the background image is taken into account during adjustment, the target color temperature is closer to the color temperature of the scene while retaining the color characteristics of the original color temperature of the background image, so that the background image does not change sharply. Thus, after the person region image of the current user is fused into the background image, the fusion effect is better and more natural, and the user's needs can be satisfied.
Referring to Fig. 3 and Figure 19 together, embodiments of the present invention also propose an electronic apparatus 1000. The electronic apparatus 1000 includes the image processing apparatus 100. The image processing apparatus 100 can be implemented using hardware and/or software. The image processing apparatus 100 includes an imaging device 10 and a processor 20.
The imaging device 10 includes a visible light camera 11 and a depth image acquisition component 12.
Specifically, the visible light camera 11 includes an image sensor 111 and a lens 112, and the visible light camera 11 can be used to capture color information of the current user to obtain the scene image, wherein the image sensor 111 includes a color filter array (such as a Bayer filter array), and the number of lenses 112 may be one or more. In the process of obtaining the scene image, each imaging pixel in the image sensor 111 senses light intensity and wavelength information in the photographed scene to generate a set of raw image data; the image sensor 111 sends this set of raw image data to the processor 20, and the processor 20 obtains a color scene image after performing operations such as denoising and interpolation on the raw image data. The processor 20 can process each image pixel in the raw image data one by one in various formats; for example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits, and the processor 20 can process each image pixel with the same or different bit depths.
The depth image acquisition component 12 includes a structured light projector 121 and a structured light camera 122, and the depth image acquisition component 12 can be used to capture depth information of the current user to obtain the depth image. The structured light projector 121 is used to project structured light onto the current user, wherein the structured light pattern may be a laser stripe, a Gray code, a sinusoidal stripe, a randomly arranged speckle pattern, or the like. The structured light camera 122 includes an image sensor 1221 and a lens 1222, and the number of lenses 1222 may be one or more. The image sensor 1221 is used to capture the structured light image projected onto the current user by the structured light projector 121. The structured light image can be sent by the depth image acquisition component 12 to the processor 20 for processing such as demodulation, phase recovery and phase information calculation to obtain the depth information of the current user.
In some embodiments, the functions of the visible light camera 11 and the structured light camera 122 can be realized by a single camera; in other words, the imaging device 10 includes only one camera and one structured light projector 121, and that camera can capture not only the scene image but also the structured light image.
In addition to using structured light, the depth image of the current user can also be obtained by depth image acquisition methods such as binocular vision or time of flight (TOF).
The processor 20 is further used to fuse the person region image, extracted from the scene image and the depth image, with the predetermined two-dimensional background image. When extracting the person region image, the processor 20 can extract a two-dimensional person region image from the scene image in combination with the depth information in the depth image, or can establish a three-dimensional map of the person region according to the depth information in the depth image and perform color filling on the three-dimensional person region in combination with the color information in the scene image to obtain a three-dimensional color person region image. Therefore, the fusion processing of the person region image and the predetermined two-dimensional background image may be fusing the two-dimensional person region image with the predetermined two-dimensional background image to obtain the merged image, or fusing the three-dimensional color person region image with the predetermined two-dimensional background image to obtain the merged image.
In addition, the image processing apparatus 100 further includes an image memory 30. The image memory 30 may be embedded in the electronic apparatus 1000 or may be a memory independent of the electronic apparatus 1000, and may include a direct memory access (DMA) feature. The raw image data collected by the visible light camera 11 or the structured-light image data collected by the depth image acquisition component 12 may be transferred to the image memory 30 for storage or caching. The processor 20 may read the raw image data from the image memory 30 for processing to obtain the scene image, and may also read the structured-light image data from the image memory 30 for processing to obtain the depth image. In addition, the scene image and the depth image may also be stored in the image memory 30 to be called by the processor 20 for processing at any time; for example, the processor 20 calls the scene image and the depth image to perform person region extraction, and the extracted person region image is fused with the predetermined two-dimensional background image to obtain the merged image. The predetermined two-dimensional background image and the merged image may also be stored in the image memory 30.
The image processing apparatus 100 may further include a display 50. The display 50 may obtain the merged image directly from the processor 20 or from the image memory 30. The display 50 displays the merged image for the user to watch, or for further processing by a graphics engine or a graphics processing unit (GPU). The image processing apparatus 100 further includes an encoder/decoder 60, which may encode and decode image data such as the scene image, the depth image, and the merged image. The encoded image data may be stored in the image memory 30 and decompressed by the decoder before the image is shown on the display 50. The encoder/decoder 60 may be implemented by a central processing unit (CPU), a GPU, or a coprocessor; in other words, the encoder/decoder 60 may be any one or more of a CPU, a GPU, and a coprocessor.
The image processing apparatus 100 further includes a control logic device 40. When the imaging device 10 is imaging, the processor 20 analyses the data obtained by the imaging device to determine image statistics for one or more control parameters of the imaging device 10 (for example, the exposure time). The processor 20 sends the image statistics to the control logic device 40, and the control logic device 40 controls the imaging device 10 to image with the determined control parameters. The control logic device 40 may include a processor and/or a microcontroller that executes one or more routines (such as firmware). The one or more routines may determine the control parameters of the imaging device 10 according to the received image statistics.
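A minimal, hypothetical sketch of the feedback loop just described (the target brightness, exposure bounds, and proportional rule are invented for illustration; real firmware uses tuned auto-exposure statistics):

```python
def next_exposure_time(current_exposure_us, mean_brightness,
                       target_brightness=118.0, min_us=100, max_us=33000):
    """Scale the exposure time so that the mean frame brightness approaches
    the target, clamped to the sensor's supported range (all values assumed)."""
    if mean_brightness <= 0:
        return max_us                 # black frame: expose as long as allowed
    scaled = current_exposure_us * (target_brightness / mean_brightness)
    return int(min(max(scaled, min_us), max_us))

# An underexposed frame (mean brightness 59) doubles the exposure time.
new_exposure = next_exposure_time(10000, 59.0)
```

Here `new_exposure` comes out to 20000 microseconds; a control logic device like 40 would then programme such a value into the imaging device for the next frame.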
Referring to FIG. 20, the electronic apparatus 1000 of the embodiments of the present invention includes one or more processors 200, a memory 300, and one or more programs 310. The one or more programs 310 are stored in the memory 300 and are configured to be executed by the one or more processors 200. The programs 310 include instructions for performing the image processing method of any one of the above embodiments.
For example, the programs 310 include instructions for performing the image processing method described in the above embodiments.
The computer-readable storage medium of the embodiments of the present invention includes a computer program used in combination with the electronic apparatus 1000 capable of imaging. The computer program can be executed by the processor 200 to complete the image processing method of any one of the above embodiments.
For example, the computer program can be executed by the processor 200 to complete the image processing method described in the above embodiments.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials, or characteristics may be combined in a suitable manner in any one or more embodiments or examples. In addition, where no conflict arises, those skilled in the art may combine the different embodiments or examples described in this specification, as well as the features of those different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only, and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality of" means at least two, such as two, three, and so on, unless specifically defined otherwise.
Any process or method description in a flow chart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process. The scope of the preferred embodiments of the present invention includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flow charts or otherwise described herein may be considered, for example, an ordered list of executable instructions for implementing logic functions, and may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from the instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection portion (electronic apparatus) with one or more wirings, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise suitably processing it if necessary, and then stored in a computer memory.
It should be understood that each part of the present invention may be implemented by hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented by hardware, as in another embodiment, any one of the following techniques well known in the art, or a combination thereof, may be used: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those skilled in the art will appreciate that all or part of the steps carried by the method of the above embodiments may be completed by instructing the related hardware through a program, and the program may be stored in a computer-readable storage medium. When executed, the program performs one of the steps of the method embodiments or a combination thereof.
In addition, each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. When the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and cannot be construed as limiting the present invention; those of ordinary skill in the art may change, modify, replace, and vary the above embodiments within the scope of the present invention.

Claims (12)

1. An image processing method for an electronic apparatus, characterized in that the image processing method comprises:
detecting the colour temperature of the scene in which the current user is located;
starting a virtual light source matching the colour temperature of the scene, to adjust the colour temperature of a background image to be fused so that the colour temperature of the scene matches the colour temperature of the background image;
sending a prompting message when the matching virtual light source is started;
obtaining a first video image of the current user;
obtaining a plurality of depth images of the current user;
processing each frame of scene image of the first scene video according to the plurality of depth images, to obtain the person region of the current user in each frame of scene image and to obtain the corresponding person region image; and
fusing each of the person region images with the corresponding background image to obtain a merged image.
2. The image processing method according to claim 1, characterized in that the step of obtaining the plurality of depth images of the current user comprises:
projecting structured light onto the current user;
capturing the structured light image modulated by the current user; and
demodulating the phase information corresponding to each pixel of the structured light image to obtain one depth image of the plurality of depth images.
3. The image processing method according to claim 2, characterized in that the step of demodulating the phase information corresponding to each pixel of the structured light image to obtain one depth image of the plurality of depth images comprises:
demodulating the phase information corresponding to each pixel in the structured light image;
converting the phase information into depth information; and
generating the depth image according to the depth information.
4. The image processing method according to claim 1, characterized in that the step of processing each frame of scene image of the first scene video according to the plurality of depth images, to obtain the person region of the current user in each frame of scene image and to obtain the corresponding person region image, comprises:
for each frame of scene image, identifying the face region in the scene image;
obtaining depth information corresponding to the face region from the depth image corresponding to the scene image;
determining the depth range of the person region according to the depth information of the face region; and
determining, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, to obtain the person region image.
5. The image processing method according to claim 4, characterized in that the image processing method further comprises:
processing the scene image to obtain a full-field edge image of the scene image; and
correcting the person region image according to the full-field edge image.
6. The image processing method according to claim 1, characterized in that the starting of the virtual light source matching the colour temperature of the scene, to adjust the colour temperature of the background image to be fused so that the colour temperature of the scene matches the colour temperature of the background image, comprises:
determining, from a plurality of virtual light sources according to the colour temperature of the scene, a target virtual light source to be turned on; and
starting the target virtual light source to illuminate the background image, and adjusting the colour temperature of the background image so that the colour temperature of the scene matches the colour temperature of the background image.
7. The image processing method according to claim 6, characterized in that the determining, from the plurality of virtual light sources according to the colour temperature of the scene, of the target virtual light source to be turned on comprises:
querying a preset correspondence between scene colour temperatures and virtual light sources according to an identifier of the colour temperature of the scene, and obtaining the virtual light source consistent with the identifier as the target virtual light source.
8. The image processing method according to claim 6 or 7, characterized in that the sending of the prompting message when the matching virtual light source is started comprises:
monitoring a turn-on event of the virtual light source;
when the turn-on event is detected, obtaining the prompting message matching the target virtual light source according to the identifier of the target virtual light source; and
sending the prompting message in a form corresponding to the target virtual light source.
9. The image processing method according to any one of claims 1 to 7, characterized in that, before the starting of the virtual light source matching the colour temperature of the scene, the method further comprises:
comparing the colour temperature of the scene with the colour temperature of the background image to obtain the difference between the two colour temperatures, and determining that the difference exceeds a preset colour temperature threshold.
10. An image processing apparatus for an electronic apparatus, characterized in that the image processing apparatus comprises:
a visible light camera, configured to detect the colour temperature of the scene in which the current user is located and to obtain a first video image of the current user;
a depth image acquisition component, configured to obtain a plurality of depth images of the current user; and
a processor, configured to:
start a virtual light source matching the colour temperature of the scene, to adjust the colour temperature of a background image to be fused so that the colour temperature of the scene matches the colour temperature of the background image;
send a prompting message when the matching virtual light source is started;
process each frame of scene image of the first scene video according to the plurality of depth images, to obtain the person region of the current user in each frame of scene image and to obtain the corresponding person region image; and
fuse each of the person region images with the corresponding background image to obtain a merged image.
11. An electronic apparatus, characterized in that the electronic apparatus comprises:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by the one or more processors, and the programs include instructions for performing the image processing method according to any one of claims 1 to 9.
12. A computer-readable storage medium, characterized in that it comprises a computer program used in combination with an electronic apparatus capable of imaging, the computer program being executable by a processor to complete the image processing method according to any one of claims 1 to 9.
CN201710812524.4A 2017-09-11 2017-09-11 Image processing method and apparatus, electronic apparatus, and computer-readable storage medium Expired - Fee Related CN107707834B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201710812524.4A CN107707834B (en) 2017-09-11 2017-09-11 Image processing method and apparatus, electronic apparatus, and computer-readable storage medium
EP18852861.6A EP3680853A4 (en) 2017-09-11 2018-09-11 Image processing method and device, electronic device, and computer-readable storage medium
PCT/CN2018/105121 WO2019047985A1 (en) 2017-09-11 2018-09-11 Image processing method and device, electronic device, and computer-readable storage medium
US16/815,177 US11516412B2 (en) 2017-09-11 2020-03-11 Image processing method, image processing apparatus and electronic device
US16/815,179 US11503228B2 (en) 2017-09-11 2020-03-11 Image processing method, image processing apparatus and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710812524.4A CN107707834B (en) 2017-09-11 2017-09-11 Image processing method and apparatus, electronic apparatus, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN107707834A true CN107707834A (en) 2018-02-16
CN107707834B CN107707834B (en) 2020-07-17

Family

ID=61172512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710812524.4A Expired - Fee Related CN107707834B (en) 2017-09-11 2017-09-11 Image processing method and apparatus, electronic apparatus, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN107707834B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103104839A (en) * 2013-01-17 2013-05-15 中国科学院半导体研究所 Light supplemental device of plant and light supplemental method thereof
CN205430735U (en) * 2016-03-16 2016-08-03 京东方科技集团股份有限公司 Lighting device
CN106303250A (en) * 2016-08-26 2017-01-04 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN106340025A (en) * 2016-08-28 2017-01-18 张朝阳 Background replacement visual communication method based on chromatic adaptation transformation
CN106612393A (en) * 2015-10-22 2017-05-03 努比亚技术有限公司 Image processing method, image processing device and mobile terminal
CN106791473A (en) * 2017-01-20 2017-05-31 深圳市金立通信设备有限公司 A kind of imaging method and terminal
CN106803888A (en) * 2017-03-06 2017-06-06 联想(北京)有限公司 The method and electronic equipment of composograph
CN107025635A (en) * 2017-03-09 2017-08-08 广东欧珀移动通信有限公司 Processing method, processing unit and the electronic installation of image saturation based on the depth of field

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019047985A1 (en) * 2017-09-11 2019-03-14 Oppo广东移动通信有限公司 Image processing method and device, electronic device, and computer-readable storage medium
US11503228B2 (en) 2017-09-11 2022-11-15 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, image processing apparatus and computer readable storage medium
US11516412B2 (en) 2017-09-11 2022-11-29 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, image processing apparatus and electronic device
CN113206971A (en) * 2021-04-13 2021-08-03 聚好看科技股份有限公司 Image processing method and display device
CN113206971B (en) * 2021-04-13 2023-10-24 聚好看科技股份有限公司 Image processing method and display device

Also Published As

Publication number Publication date
CN107707834B (en) 2020-07-17

Similar Documents

Publication Publication Date Title
CN107610080A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107610077A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107742296A (en) Dynamic image generation method and electronic installation
CN107707839A (en) Image processing method and device
CN107509045A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107734267A (en) Image processing method and device
CN107734264A (en) Image processing method and device
CN107707831A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107707835A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107707838A (en) Image processing method and device
CN107509043A (en) Image processing method and device
CN107610078A (en) Image processing method and device
CN107705277A (en) Image processing method and device
CN107527335A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107705278A (en) The adding method and terminal device of dynamic effect
CN107644440A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107613224A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107610076A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107592491A (en) Video communication background display methods and device
CN107705243A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107613223A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107680034A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107613228A (en) The adding method and terminal device of virtual dress ornament
CN107730509A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107454336A (en) Image processing method and device, electronic installation and computer-readable recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200717