
CN107483845A - Photographic method and its device - Google Patents

Photographic method and its device

Info

Publication number
CN107483845A
Authority
CN
China
Prior art keywords
image
fused
scene
depth image
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710643861.5A
Other languages
Chinese (zh)
Other versions
CN107483845B (en)
Inventor
唐城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710643861.5A priority Critical patent/CN107483845B/en
Publication of CN107483845A publication Critical patent/CN107483845A/en
Application granted granted Critical
Publication of CN107483845B publication Critical patent/CN107483845B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention proposes a photographing method and a photographing device. The method includes: acquiring a first depth image of a photographic subject through structured light; acquiring a second depth image of a scene to be fused; and acquiring a first image of the photographic subject and a second image of the scene to be fused, where the first image and the second image carry the RGB value of each pixel. The photographic subject is fused into the scene to be fused according to the first depth image and the second depth image to obtain a target depth image, and a target image is formed according to the first image, the second image and the target depth image, where the target image includes the photographic subject and the scene to be fused. In this embodiment, the depth images of the photographic subject and of the scene to be fused are acquired based on structured light, and the subject is fused into the scene based on the depth images, so that when the shooting scene is switched for the subject, the subject can be perfectly fitted to the shooting scene; the image processing effect is thus more natural and the user experience is improved.

Description

Photographing method and device
Technical Field
The invention relates to the field of terminal equipment, in particular to a photographing method and a photographing device.
Background
With the popularization of terminal devices, users increasingly prefer to take pictures or record their lives using the shooting function of terminal devices. A terminal device may further provide application programs for processing images, and images may be processed through such application programs.
For example, the same photographic subject may be placed into a plurality of different shooting scenes, making photos more varied and interesting. Suppose a captured image shows the user on a beach, but the user would rather appear on a grassland; the image can then be matted to extract the user, and the extracted user can be placed into the grassland shooting scene. Because current application programs basically process images as two-dimensional images, the photographic subject cannot be perfectly fitted to the shooting scene in the image, and the image processing effect is poor.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first objective of the present invention is to provide a photographing method that allows a photographic subject to fit perfectly into a shooting scene when the same subject is placed into different shooting scenes, so that the image processing effect is better. This solves the problem that, because existing application programs basically process images as two-dimensional images, the photographic subject cannot perfectly fit the shooting scene in the image and the image processing effect is poor.
The second objective of the present invention is to provide a photographing apparatus.
A third object of the present invention is to provide a terminal device.
A fourth object of the invention is to propose one or more non-volatile computer-readable storage media containing computer-executable instructions.
To achieve the above object, an embodiment of a first aspect of the present invention provides a photographing method, including:
acquiring a first depth image of a photographic object through structured light;
acquiring a second depth image of a scene to be fused formed by the structured light;
acquiring a first image of the shooting object and a second image of the scene to be fused; the first image and the second image carry the RGB value of each pixel point;
according to the first depth image and the second depth image, the shooting object is fused into the scene to be fused, and a target depth image is obtained;
and forming a target image according to the first image, the second image and the target depth image, wherein the target image comprises the shooting object and the scene to be fused.
According to the photographing method, a first depth image of the photographic subject is acquired through structured light, a second depth image of the scene to be fused is acquired, and a first image of the photographic subject and a second image of the scene to be fused are acquired, where the first image and the second image carry the RGB value of each pixel. The photographic subject is fused into the scene to be fused according to the first depth image and the second depth image to obtain a target depth image, and a target image is formed according to the first image, the second image and the target depth image, where the target image includes the photographic subject and the scene to be fused. In this embodiment, the depth images of the photographic subject and of the scene to be fused are acquired based on structured light, and the subject is fused into the scene based on the depth images, so that when the shooting scene is switched for the subject, the subject can be perfectly fitted to the shooting scene; the image processing effect is thus more natural and the user experience is improved.
To achieve the above object, a second embodiment of the present invention provides a photographing apparatus, including:
the device comprises a first acquisition module, a second acquisition module and a control module, wherein the first acquisition module is used for acquiring a first depth image of a shooting object through structured light;
the second acquisition module is used for acquiring a second depth image of the scene to be fused formed by the structured light;
the third acquisition module is used for acquiring the first image of the shot object and the second image of the scene to be fused; the first image and the second image carry the RGB value of each pixel point;
the fourth acquisition module is used for fusing the shooting object into the scene to be fused according to the first depth image and the second depth image to obtain a target depth image;
a fifth obtaining module, configured to form a target image according to the first image, the second image, and the target depth image, where the target image includes the photographic object and the scene to be fused.
According to the photographing device, a first depth image of the photographic subject is acquired through structured light, a second depth image of the scene to be fused is acquired, and a first image of the photographic subject and a second image of the scene to be fused are acquired, where the first image and the second image carry the RGB value of each pixel. The photographic subject is fused into the scene to be fused according to the first depth image and the second depth image to obtain a target depth image, and a target image is formed according to the first image, the second image and the target depth image, where the target image includes the photographic subject and the scene to be fused. In this embodiment, the depth images of the photographic subject and of the scene to be fused are acquired based on structured light, and the subject is fused into the scene based on the depth images, so that when the shooting scene is switched for the subject, the subject can be perfectly fitted to the shooting scene; the image processing effect is thus more natural and the user experience is improved.
To achieve the above object, a terminal device according to a third embodiment of the present invention includes a memory and a processor, where the memory stores computer-readable instructions, and the instructions, when executed by the processor, cause the processor to execute the photographing method according to the first embodiment of the present invention.
To achieve the above object, a fourth embodiment of the present invention provides one or more non-volatile computer-readable storage media containing computer-executable instructions, which when executed by one or more processors, cause the processors to perform the photographing method according to the first embodiment.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of a photographing method according to an embodiment of the present invention;
FIG. 2 is a schematic view of an apparatus assembly for projecting structured light;
FIG. 3 is a schematic diagram of a uniform arrangement of structured light;
fig. 4 is a schematic flowchart of another photographing method according to an embodiment of the present invention;
FIG. 5 is a schematic view of a projection set of non-uniform structured light in an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another photographing apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an image processing circuit in a terminal device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The following describes a photographing method and apparatus, and a terminal device according to an embodiment of the present invention with reference to the drawings.
Because current application programs basically process images as two-dimensional images, the photographic subject cannot be perfectly fitted to the shooting scene in the image, and the image processing effect is poor.
To solve this problem, embodiments of the present invention provide a photographing method, so that when the same photographic subject is placed into different shooting scenes, the subject can be perfectly fitted to the scene and the image processing effect is better.
Fig. 1 is a schematic flow chart of a photographing method according to an embodiment of the present invention.
As shown in fig. 1, the photographing method includes the steps of:
in step 101, a first depth image of a photographic subject is acquired through structured light.
Here, a projection set of light beams with known spatial directions is called structured light.
As an example, FIG. 2 is a schematic diagram of an apparatus assembly for projecting structured light. In fig. 2 the projection set of structured light is illustrated as a set of lines merely for simplicity; the principle is similar when the projection set of the structured light is a speckle pattern. As shown in fig. 2, the apparatus may include an optical projector and a camera. The optical projector projects a pattern of structured light into the space where the photographic subject (the user) is located, forming on the user's surface a three-dimensional image of light bars modulated by the surface shape. The three-dimensional image is detected by a camera at another position to obtain a distorted two-dimensional image of the light bars. The degree of distortion of the light bars depends on the relative position between the optical projector and the camera and on the profile of the user's surface. Intuitively, the displacement (or offset) along a light bar is proportional to the height of the user's surface, a kink in the bar represents a change of plane, and a discontinuity shows a physical gap on the user's surface. When the relative position between the optical projector and the camera is fixed, the three-dimensional profile of the user can be reproduced from the two-dimensional image coordinates of the distorted light bars, i.e. a 3D model of the user is obtained.
As an example, the 3D model of the user can be obtained by calculation using formula (1), where formula (1) is as follows:
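The published equation is rendered as an image and is not reproduced in this text. As a hedged reconstruction only, the structured-light triangulation relation commonly used with the variables defined below (and not necessarily the exact published form of formula (1)) is:

$$
\begin{cases}
x = \dfrac{b\,x'}{F\cot\theta - x'} \\[4pt]
y = \dfrac{b\,y'}{F\cot\theta - x'} \\[4pt]
z = \dfrac{b\,F}{F\cot\theta - x'}
\end{cases}
$$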
wherein (x, y, z) is the coordinates of the acquired 3D model of the user, b is the baseline distance between the projection device and the camera, F is the focal length of the camera, θ is the projection angle when the projection device projects the preset structured light to the space where the user is located, and (x', y') is the coordinates of the two-dimensional distorted image with the user depth information.
As an example, the types of structured light include grating, light spot and speckle (including circular speckle and cross speckle); a uniformly arranged structured light pattern is shown in fig. 3. Correspondingly, the device for generating structured light may be a projection device or instrument, such as an optical projector, that projects a light spot, line, grating, grid or speckle pattern onto the object to be examined, or a laser that generates a laser beam.
As an example, when the shooting object is a user of the terminal device, the structured light may be irradiated onto the user, and the structured light may be reflected by the user's body, so as to obtain a first depth image of the user. Since the formed first depth image includes depth information of each feature point in the structured light, a 3D model of the user can be reconstructed from the first depth image.
Step 102, a second depth image of a scene to be fused formed by the structured light is acquired.
As an example, a user may select an image to be fused from pre-stored images such as photos. The depth image of the shooting scene corresponding to the image to be fused was stored synchronously when that image was shot, so after the user selects a photo, the depth image corresponding to the photo can be determined from the name or identifier of the selected photo and used as the second depth image.
As an example, one of a plurality of shooting scenes stored in advance may be selected as the scene to be fused; such a shooting scene may have been stored on the terminal device when a picture was taken there earlier. For example, if the user has previously taken pictures in scenes such as a beach, a grassland or an amusement park, a second depth image of the shooting scene can be acquired by the structured light at the time the picture is taken. The second depth image may include depth information of the various objects in the shooting scene. As an example, the first depth image of the photographic subject may be acquired by structured light first, and the second depth image of the scene to be fused may then be acquired by structured light.
As an example, a second depth image formed by the structured light may be acquired from a network or a friend, and the second depth image is a depth image corresponding to the scene to be fused.
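As a small illustration of the pre-stored-photo example above, a hedged sketch in Python follows; the directory layout, file naming and helper name are illustrative assumptions rather than anything specified by this disclosure.

```python
# Hedged sketch (illustrative names): look up the depth image that was stored
# synchronously with a pre-stored photo, using the photo's identifier.
from pathlib import Path

import numpy as np

DEPTH_DIR = Path("depth_maps")  # assumed location of synchronously stored depth images

def load_second_depth_image(selected_photo: str) -> np.ndarray:
    """Return the stored depth image corresponding to the selected photo."""
    depth_path = DEPTH_DIR / (Path(selected_photo).stem + ".npy")
    if not depth_path.exists():
        raise FileNotFoundError(f"no stored depth image for {selected_photo}")
    return np.load(depth_path)  # H x W array of depth values

# e.g. second_depth = load_second_depth_image("beach_2016.jpg")
```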
Step 103, acquiring a first image of a shooting object and a second image of a scene to be fused.
The first image and the second image carry the RGB value of each pixel point.
In order to reconstruct the photographic subject in the scene to be fused, another camera is needed to acquire a first image of the photographic subject and a second image of the scene to be fused. The first image and the second image include the RGB value of each pixel, which is obtained by imaging natural light on an image sensor in the camera device; that is, the first image and the second image are each a color image. The second image may be acquired by the user of the terminal device when photographing the scene to be fused.
It should be noted here that the second image of the scene to be fused may be acquired by another camera on the terminal device, or may be acquired by a camera on another device, and is determined by the usage scene, which is not limited herein.
And step 104, fusing the shot object into a scene to be fused according to the first depth image and the second depth image to obtain a target depth image.
After the first depth image and the second depth image are obtained, 3D construction can be performed on the photographic subject and the scene to be fused to form a 3D model of the photographic subject and a 3D model of the scene to be fused. In order to fuse the photographic subject into the scene to be fused, a fusion position of the subject in the scene to be fused first needs to be obtained in the 3D model of the scene to be fused. A selection operation in which the user selects the fusion position in the 3D model of the scene to be fused can be monitored, and when the selection operation is detected, the fusion position can be determined according to the trajectory of the user's finger in the selection operation.
For example, when the photographic subject is a person and the photographic scene is a group photo, a position in the 3D model of the photographic scene may be selected as the position of the photographic subject in the group photo, which is called a fusion position.
Further, the 3D model of the shot object is placed at the fusion position in the 3D model of the scene to be fused, so that the shot object can be fused in the scene to be fused, and the target depth image is obtained.
And 105, forming a target image according to the first image, the second image and the target depth image.
The target image comprises a shooting object and a scene to be fused.
In this embodiment, after the target depth image including the photographic subject and the scene to be fused is acquired, the target image may be formed according to the first image, the second image and the target depth image. Specifically, the RGB values of the pixels corresponding to the photographic subject in the target depth image may be filled using the RGB value of each pixel in the first image, and the RGB values of the pixels corresponding to the scene to be fused in the target depth image may be filled using the RGB value of each pixel in the second image. When the color filling is completed, a target image that has color and carries depth information is obtained; the target image is a 3D image.
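A hedged sketch of this color-filling step follows; the provenance mask and coordinate maps are assumed bookkeeping produced during fusion, and this disclosure does not prescribe any particular data layout.

```python
# Hedged sketch: fill the target depth image with color from the two source images.
# "origin", "src_y" and "src_x" are assumed bookkeeping from the fusion step.
import numpy as np

def fill_colors(target_depth, origin, src_y, src_x, first_rgb, second_rgb):
    """origin[i, j] is 0 where a target pixel came from the photographic subject
    (first image) and 1 where it came from the scene to be fused (second image);
    (src_y, src_x) give the corresponding pixel in that source image."""
    h, w = target_depth.shape
    target_rgb = np.zeros((h, w, 3), dtype=np.uint8)
    subj = origin == 0
    scene = origin == 1
    target_rgb[subj] = first_rgb[src_y[subj], src_x[subj]]      # subject pixels
    target_rgb[scene] = second_rgb[src_y[scene], src_x[scene]]  # scene pixels
    # Per-pixel RGB paired with per-pixel depth: a colored 3D target image.
    return target_rgb, target_depth
```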
In the photographing method provided by this embodiment, a first depth image of the photographic subject is acquired through structured light, a second depth image of the scene to be fused is acquired, and a first image of the photographic subject and a second image of the scene to be fused are acquired, where the first image and the second image carry the RGB value of each pixel. The photographic subject is fused into the scene to be fused according to the first depth image and the second depth image to obtain a target depth image, and a target image is formed according to the first image, the second image and the target depth image, where the target image includes the photographic subject and the scene to be fused. In this embodiment, the depth images of the photographic subject and of the scene to be fused are acquired based on structured light, and the subject is fused into the scene based on the depth images, so that when the shooting scene is switched for the subject, the subject can be perfectly fitted to the shooting scene; the image processing effect is thus more natural and the user experience is improved.
Fig. 4 is a schematic flowchart of another photographing method according to an embodiment of the present invention. As shown in fig. 4, the photographing method includes the steps of:
step 401, emitting structured light to a photographic subject.
In this embodiment, a projection device may be provided in the terminal for emitting the structured light toward the photographic subject. When a user faces the terminal toward a photographic subject, a projection device provided in the terminal may emit structured light toward the photographic subject.
Step 402, collecting reflected light formed by the structured light on the photographic subject, where the reflected light carries depth information of the photographic subject.
In step 403, an image formed by the reflected light on the image sensor is used as a first depth image.
After the structured light emitted toward the photographic subject reaches it, each facial organ of the subject obstructs the structured light, and the structured light is reflected at the subject. At this point, the reflected light of the structured light on the subject can be collected by a camera arranged in the terminal, and the depth image of the subject can be formed from the collected reflected light.
Step 404, reconstructing a 3D model of the photographic subject based on the first depth image.
Specifically, the first depth image may include both the photographic subject and the background. The first depth image is first denoised and smoothed to obtain an image of the region where the subject is located, and the subject is then separated from the background image through processing such as foreground-background segmentation.
After the shooting object is extracted from the depth image, feature point data can be extracted from the depth image of the shooting object, and the feature points are connected into a network according to the extracted feature point data. For example, points on the same plane or points with a distance within a threshold range are connected into a triangular network according to the spatial distance relationship of the points, and the networks are spliced to generate the 3D model of the photographic subject.
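A hedged sketch of the meshing idea above follows; Delaunay triangulation in the image plane stands in for the threshold-based point linking, and the 3D edge-length threshold is an illustrative assumption.

```python
# Hedged sketch: connect nearby subject depth points into a triangular mesh.
# Delaunay triangulation plus a 3D edge-length threshold approximates
# "connect points within a distance threshold into a triangular network".
import numpy as np
from scipy.spatial import Delaunay

def depth_to_mesh(depth, subject_mask, fx, fy, cx, cy, max_edge=0.05):
    ys, xs = np.nonzero(subject_mask)
    z = depth[ys, xs]
    # Back-project pixels to 3D feature points with assumed pinhole intrinsics.
    pts3d = np.stack([(xs - cx) * z / fx, (ys - cy) * z / fy, z], axis=1)
    tri = Delaunay(np.stack([xs, ys], axis=1))  # triangulate in the image plane
    keep = []
    for simplex in tri.simplices:
        a, b, c = pts3d[simplex]
        # Keep triangles whose 3D edges are short, i.e. points close in space.
        if max(np.linalg.norm(a - b), np.linalg.norm(b - c),
               np.linalg.norm(c - a)) < max_edge:
            keep.append(simplex)
    return pts3d, np.array(keep)  # vertices and triangles of the subject's 3D model
```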
Step 405, receiving a photographing request sent by an opposite-end device, where the photographing request includes a second depth image of the scene to be fused, and the second depth image is formed by the opposite-end device emitting structured light to the scene to be fused.
In this embodiment, when the opposite-end device is in the scene to be fused, the structured light is projected to the scene to be fused by the projection device of the structured light on the opposite-end device, and the second depth image of the scene to be fused can be acquired.
In the process of taking a photo with the terminal device, the user may receive a photographing request sent by an opposite-end device that is in communication with the terminal device, and the photographing request may include the second depth image of the scene to be fused. For example, when a group photo is taken and some students are absent, the students who are present may send the absent students a photographing request that includes the second depth image of the scene to be fused, i.e., the group photo scene.
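A hedged sketch of such a photographing request follows; the field names and transport are illustrative assumptions, since no wire format is specified here.

```python
# Hedged sketch: a photographing request carrying the scene's depth (and color) data.
# Field names are illustrative assumptions; no message format is defined here.
from dataclasses import dataclass

import numpy as np

@dataclass
class PhotographingRequest:
    sender_id: str
    scene_name: str                 # e.g. "graduation_group_photo"
    second_depth_image: np.ndarray  # depth of the scene to be fused (H x W)
    second_image: np.ndarray        # RGB image of the scene (H x W x 3); may also
                                    # be sent separately, as noted later

def handle_request(req: PhotographingRequest):
    # The receiving terminal reconstructs the scene's 3D model from the second
    # depth image (step 406) and keeps the second image for color filling.
    return req.second_depth_image, req.second_image
```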
Step 406, reconstructing a 3D model of the scene to be fused based on the second depth image.
Similar to step 404, the detailed process can refer to the above-mentioned related contents, and is not described herein again.
Step 407, acquiring a fusion position of the shooting object in the scene to be fused in the 3D model of the scene to be fused.
For the determination process of the fusion position, reference may be made to the description of relevant contents in the above embodiments, and details are not described here.
Step 408, placing the 3D model of the photographic subject at the fusion position in the 3D model of the scene to be fused, so that the subject is fused into the scene to be fused and the target depth image is obtained.
As an example, in order to improve the fusion effect and make the result more attractive, in this embodiment at least one object located at the fusion position may first be obtained from the 3D model of the scene to be fused. For example, the object may be a tree or another person, and the photographic subject may partially overlap that tree or person at the fusion position.
In order to avoid the problem that the imaging effect is poor due to the partial overlapping, the depth information of the object can be extracted from the second depth image, and the relative relationship between the shooting object and the object can be determined according to the depth information of the shooting object and the depth information of the object in the first depth image.
For example, the size of the photographic subject can be determined from its depth information and the size of the object from the object's depth information, and the sizes of the two, or the distance between them, can then be adjusted so that a certain distance exists between the photographic subject and the object and the partial-overlap problem is avoided. When the object is of the same kind as the photographic subject, the size of the subject can be adjusted according to the size of the object in the scene to be fused. For instance, if both the subject and the object are people and the size of the subject differs significantly from the size of the people in the scene to be fused, the subject can be scaled according to the people in the scene so that the size difference between them falls within a reasonable range.
Further, the front-back positional relationship between the photographic subject and the object may be adjusted based on the depth information of the subject and the depth information of the object, so that this relationship is made explicit. For example, the photographic subject is a person and the object is a seat in front of which the person is to stand. Because the person and the seat occupy the same fusion position, the person can be placed in front of the seat, which makes the front-back positional relationship between them clear.
After the relative relationship between the shooting object and the object is determined, the shooting object can be fused into the scene to be fused according to the relative relationship to form a target depth image, namely, the shooting object is placed at the fusion position in the scene to be fused to form a target depth image.
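A hedged sketch of the size and front-back adjustment described above follows; the height-matching and median-depth heuristics are assumptions, as only a "reasonable range" for the size difference is required.

```python
# Hedged sketch: adjust the subject's scale and depth ordering before fusion.
# The height-matching and median-depth heuristics are illustrative assumptions.
import numpy as np

def adjust_relative_relationship(subject_pts, object_pts, min_gap=0.05):
    """subject_pts / object_pts: (N, 3) vertices of the subject's and object's 3D models."""
    # Size: scale the subject so its height matches a comparable object in the scene.
    subj_height = subject_pts[:, 1].max() - subject_pts[:, 1].min()
    obj_height = object_pts[:, 1].max() - object_pts[:, 1].min()
    scale = obj_height / subj_height if subj_height > 0 else 1.0
    subject_pts = subject_pts * scale

    # Front-back order: keep the subject at least min_gap in front of the object
    # (smaller z means closer to the camera in this sketch's convention).
    subj_z = np.median(subject_pts[:, 2])
    obj_z = np.median(object_pts[:, 2])
    if subj_z > obj_z - min_gap:
        subject_pts[:, 2] -= (subj_z - obj_z) + min_gap
    return subject_pts
```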
Step 409, acquiring a first image of the shooting object and a second image of the scene to be fused.
Further, the first image of the photographic subject is acquired by the terminal device, and the second image of the scene to be fused is acquired from the opposite-end device. The first image and the second image include the RGB value of each pixel and are obtained by imaging natural light on an image sensor in the camera.
It should be noted here that the second image may be transmitted to the terminal device along with the second depth image in the shooting request, or may be separately transmitted to the terminal device.
Step 410, forming a target image according to the first image, the second image and the target depth image.
The target image comprises a shooting object and a scene to be fused.
Specifically, the RGB values of the pixels corresponding to the photographic subject in the target depth image may be filled using the RGB value of each pixel in the first image, and the RGB values of the pixels corresponding to the scene to be fused in the target depth image may be filled using the RGB value of each pixel in the second image. When the color filling is completed, a target image that has color and carries depth information is obtained; the target image is a 3D image.
As an example, the first image and the second image may be fused to obtain a fused RGB value for each pixel in the first image, and the color of the photographic subject in the target depth image is then filled using the fused RGB values to obtain the target image. Specifically, the RGB value of each pixel of the second image is obtained, and the RGB value of each pixel in the first image is corrected using the RGB values of the second image, so as to obtain the fused RGB value of each pixel in the first image. For example, the average RGB value over the pixels of the second image may be obtained and used to correct the RGB values of the first image: the average RGB value over the pixels of the first image is also obtained, the ratio of the two averages is computed, and the RGB value of each pixel in the first image is corrected with this ratio to obtain the fused RGB value. Optionally, the RGB value of each pixel in the first image may be multiplied or divided by the ratio to obtain the fused RGB value.
Optionally, the difference between the RGB value of each pixel in the first image and the average value may be calculated, the ratio of that difference to the average value may then be calculated, and the ratio may be multiplied by the RGB value of the pixel to obtain the fused RGB value.
Alternatively, a highlight region may be identified in the second image and image processing such as Gaussian filtering and smoothing may be applied to it to obtain a processed second image; the average RGB value of the pixels of the second image is then computed on the basis of the processed second image.
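A minimal sketch of the ratio-of-averages correction described above follows; correcting each RGB channel independently is an illustrative assumption.

```python
# Hedged sketch of the ratio-of-averages correction; per-channel handling is assumed.
import numpy as np

def fuse_rgb(first_rgb: np.ndarray, second_rgb: np.ndarray) -> np.ndarray:
    """Return fused RGB values for the first image, corrected toward the second image."""
    first = first_rgb.astype(np.float32)
    second = second_rgb.astype(np.float32)
    mean_first = first.reshape(-1, 3).mean(axis=0)    # average RGB of the first image
    mean_second = second.reshape(-1, 3).mean(axis=0)  # average RGB of the second image
    ratio = mean_second / np.maximum(mean_first, 1e-6)
    fused = first * ratio                             # correct each pixel with the ratio
    return np.clip(fused, 0, 255).astype(np.uint8)
```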
In the photographing method provided by this embodiment, a first depth image of the photographic subject is acquired through structured light, a second depth image of the scene to be fused is acquired, and a first image of the photographic subject and a second image of the scene to be fused are acquired, where the first image and the second image carry the RGB value of each pixel. The photographic subject is fused into the scene to be fused according to the first depth image and the second depth image to obtain a target depth image, and a target image is formed according to the first image, the second image and the target depth image, where the target image includes the photographic subject and the scene to be fused. In this embodiment, the depth images of the photographic subject and of the scene to be fused are acquired based on structured light, and the subject is fused into the scene based on the depth images, so that when the shooting scene is switched for the subject, the subject can be perfectly fitted to the shooting scene; the image processing effect is thus more natural and the user experience is improved.
It should be noted here that, as an example, the structured light adopted in the above embodiment may be non-uniform structured light, and the non-uniform structured light is a speckle pattern or a random dot pattern formed by a set of a plurality of light spots.
FIG. 5 is a schematic diagram of a projection set of non-uniform structured light according to an embodiment of the present invention. As shown in fig. 5, the non-uniform structured light adopted in the embodiment of the present invention is a randomly arranged, non-uniform speckle pattern; that is, the non-uniform structured light is a set of a plurality of light spots arranged in a non-uniform, scattered manner, forming a speckle pattern. Because the storage space occupied by the speckle pattern is small, the operating efficiency of the terminal is not greatly affected while the projection device is running, and the storage space of the terminal can be saved.
In addition, compared with other existing types of structured light, the scattered arrangement of the speckle pattern adopted in the embodiment of the present invention can reduce energy consumption, save power and improve the endurance of the terminal.
In the embodiment of the invention, the projection device and the camera can be arranged in the terminals such as a computer, a mobile phone, a palm computer and the like. The projection device emits non-uniform structured light, i.e., a speckle pattern, toward the photographic subject. In particular, a speckle pattern may be formed using a diffractive optical element in the projection device, wherein a certain number of reliefs are provided on the diffractive optical element, and an irregular speckle pattern is generated by an irregular relief on the diffractive optical element. In embodiments of the present invention, the depth and number of relief grooves may be set by an algorithm.
The projection device can be used for projecting a preset speckle pattern to the space where the shooting object is located. The camera can be used for collecting the shot object with the projected speckle pattern so as to obtain a two-dimensional distorted image of the measured object with the speckle pattern.
In the embodiment of the invention, when the camera of the terminal is aligned with the photographic object, the projection device in the terminal can project a preset speckle pattern to the space where the photographic object is located, the speckle pattern has a plurality of scattered spots, and when the speckle pattern is projected onto the surface of the photographic object, the scattered spots in the speckle pattern can be shifted due to the elements contained in the surface of the photographic object. And acquiring the shot object through a camera at the terminal to obtain a two-dimensional distorted image of the shot object with the speckle pattern.
Further, image data calculation is performed on the acquired speckle image of the photographic subject and a reference speckle image according to a predetermined algorithm, so as to obtain the movement distance of each scattered spot (feature point) of the subject's speckle image relative to the corresponding reference scattered spot (reference feature point). Finally, according to this movement distance, the distance between the reference speckle image and the camera on the terminal, and the relative spacing between the projection device and the camera, the depth value of each scattered spot of the speckle image is obtained by triangulation, the depth image of the photographic subject is obtained from these depth values, and the 3D model of the subject is further obtained from the depth image.
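A hedged sketch of this triangulation step follows; the baseline-times-focal-length-over-disparity relation is the standard reference-plane form and is an assumption about the "predetermined algorithm", which is not spelled out here.

```python
# Hedged sketch: depth of each scattered spot from its displacement, by triangulation.
# The baseline * focal / disparity relation is the standard reference-plane form
# and stands in for the unspecified "predetermined algorithm".
import numpy as np

def speckle_depth(displacement_px, ref_depth_m, baseline_m, focal_px):
    """displacement_px: per-spot shift (pixels) of the captured speckle image
    relative to the reference speckle image; ref_depth_m: distance at which the
    reference speckle image was recorded."""
    ref_disparity = baseline_m * focal_px / ref_depth_m   # disparity of the reference plane
    disparity = ref_disparity + displacement_px           # sign convention assumed
    return baseline_m * focal_px / np.maximum(disparity, 1e-6)  # depth per spot
```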
Fig. 6 is a schematic structural diagram of a photographing device according to an embodiment of the present invention. As shown in fig. 6, the photographing apparatus includes: a first acquisition module 61, a second acquisition module 62, a third acquisition module 63, a fourth acquisition module 64 and a fifth acquisition module 65.
The first acquiring module 61 is configured to acquire a first depth image of the photographic subject through the structured light.
And a second obtaining module 62, configured to obtain a second depth image of the scene to be fused formed by the structured light.
A third obtaining module 63, configured to obtain the first image of the photographic object and the second image of the scene to be fused; the first image and the second image carry the RGB value of each pixel point.
And a fourth obtaining module 64, configured to fuse the shooting object into the scene to be fused according to the first depth image and the second depth image, so as to obtain a target depth image.
A fifth obtaining module 65, configured to form a target image according to the first image, the second image, and the target depth image, where the target image includes the shooting object and the scene to be fused.
Based on fig. 6, fig. 7 is a schematic structural diagram of another photographing apparatus according to an embodiment of the present invention. As shown in fig. 7, the fourth obtaining module 64 includes: a modeling unit 641, a position acquisition unit 642 and a fusion unit 643.
The modeling unit 641 is configured to construct a 3D model of the photographic object according to the first depth image, and construct a 3D model of the scene to be fused according to the second depth image.
And a position obtaining unit 642, configured to determine, in the 3D model of the scene to be fused, a fusion position for obtaining the photographic subject in the scene to be fused.
A fusion unit 643, configured to place the 3D model of the photographic object at the fusion position in the 3D model of the scene to be fused, so that the photographic object is fused into the scene to be fused and the target depth image is obtained.
The fusion unit 643 includes: the system comprises an acquisition subunit, an extraction subunit, a relative relationship determination unit and a fusion subunit.
The obtaining subunit is used for obtaining at least one object located at the fusion position in the scene to be fused;
an extraction subunit, configured to extract depth information of the object from the second depth image;
a relative relationship determination unit configured to determine a relative relationship between the photographic subject and the object based on the depth information of the photographic subject and the depth information of the object in the first depth image;
and the fusion subunit is used for fusing the shooting object into the scene to be fused to form the target depth image according to the relative relationship.
Further, the relative relationship determining unit is specifically configured to adjust the size of the photographic subject, the size of the object, and/or a front-back position relationship according to the depth information of the photographic subject and the depth information of the object, so as to form a relative relationship between the photographic subject and the object.
Further, the fifth obtaining module 65 includes: an image fusion unit 651 and a filling unit 652.
And an image fusion unit 651, configured to fuse the first image and the second image to obtain a fused RGB value of each pixel in the first image.
The filling unit 652 is configured to fill the color of the photographic object in the target depth image with the fused RGB value of each pixel in the first image, so as to obtain the target image.
The image fusion unit 651 is specifically configured to obtain an RGB value of each pixel of the second image, and correct the RGB value of each pixel in the first image using the RGB values of the second image, so as to obtain a fused RGB value of each pixel in the first image.
Further, the first obtaining module 61 is specifically configured to emit structured light to the photographic subject and collect the reflected light formed by the structured light on the subject, where the reflected light carries depth information of the photographic subject, and an image formed by the reflected light on an image sensor is taken as the first depth image.
Further, the structured light is non-uniform structured light which is a speckle pattern or a random dot pattern formed by a plurality of light spots, and is formed by a diffractive optical element arranged in a projection device on the terminal, wherein a certain number of embossments are arranged on the diffractive optical element, and the groove depths of the embossments are different.
Further, the second obtaining module 62 is specifically configured to receive a photographing request sent by the peer device; the shooting request comprises the second depth image of the scene to be fused; the second depth image is formed by emitting structural light to the scene to be fused by the opposite-end equipment; or,
and selecting an image to be fused from images prestored on the terminal equipment, and acquiring a depth image of a scene corresponding to the image to be fused as the second depth image.
According to the photographing device, a first depth image of the photographic subject is acquired through structured light, a second depth image of the scene to be fused is acquired, and a first image of the photographic subject and a second image of the scene to be fused are acquired, where the first image and the second image carry the RGB value of each pixel. The photographic subject is fused into the scene to be fused according to the first depth image and the second depth image to obtain a target depth image, and a target image is formed according to the first image, the second image and the target depth image, where the target image includes the photographic subject and the scene to be fused. In this embodiment, the depth images of the photographic subject and of the scene to be fused are acquired based on structured light, and the subject is fused into the scene based on the depth images, so that when the shooting scene is switched for the subject, the subject can be perfectly fitted to the shooting scene; the image processing effect is thus more natural and the user experience is improved.
The division of each module in the photographing apparatus is only for illustration, and in other embodiments, the photographing apparatus may be divided into different modules as needed to complete all or part of the functions of the photographing apparatus.
The embodiment of the invention also provides a computer readable storage medium. One or more non-transitory computer-readable storage media embodying computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of:
acquiring a first depth image of a photographic object through structured light;
acquiring a second depth image of a scene to be fused formed by the structured light;
acquiring a first image of the shooting object and a second image of the scene to be fused; the first image and the second image carry the RGB value of each pixel point;
according to the first depth image and the second depth image, the shooting object is fused into the scene to be fused, and a target depth image is obtained;
and forming a target image according to the first image, the second image and the target depth image, wherein the target image comprises the shooting object and the scene to be fused.
The embodiment of the invention also provides the terminal equipment. The terminal device includes therein an Image Processing circuit, which may be implemented by hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 8 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 8, for ease of explanation, only aspects of the image processing techniques related to embodiments of the present invention are shown.
As shown in fig. 8, image processing circuit 80 includes an imaging device 810, an ISP processor 830, and control logic 840. The imaging device 810 may include a camera with one or more lenses 812, an image sensor 814, and a structured light projector 816. The structured light projector 816 projects structured light to the object to be measured. The structured light pattern may be a laser stripe, a gray code, a sinusoidal stripe, or a randomly arranged speckle pattern. The image sensor 814 captures a structured light image projected onto the object to be measured, and transmits the structured light image to the ISP processor 830, and the ISP processor 830 demodulates the structured light image to obtain depth information of the object to be measured. Meanwhile, the image sensor 814 may also capture color information of the measured object. Of course, the two image sensors 814 may capture the structured light image and the color information of the measured object, respectively.
Taking speckle structured light as an example, the ISP processor 830 demodulates the structured light image. Specifically, a speckle image of the measured object is acquired from the structured light image, image data calculation is performed on the speckle image of the measured object and a reference speckle image according to a predetermined algorithm, and the movement distance of each scattered spot of the speckle image on the measured object relative to the corresponding reference scattered spot in the reference speckle image is obtained. The depth value of each scattered spot of the speckle image is then obtained by triangulation, and the depth information of the measured object is obtained from the depth values.
Of course, the depth image information and the like may also be acquired by a binocular vision method or a time-of-flight (TOF) based method. The method is not limited here: as long as the depth information of the object to be measured can be acquired or obtained by calculation, the method falls within the scope of this embodiment.
After the ISP processor 830 receives the color information of the object to be measured captured by the image sensor 814, the image data corresponding to the color information of the object to be measured may be processed. ISP processor 830 analyzes the image data to obtain image statistics that may be used to determine one or more control parameters of imaging device 810. The image sensor 814 may include an array of color filters (e.g., Bayer filters), and the image sensor 814 may acquire light intensity and wavelength information captured with each imaging pixel of the image sensor 814 and provide a set of raw image data that may be processed by the ISP processor 830.
The ISP processor 830 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 830 may perform one or more image processing operations on the raw image data, collecting image statistics about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
ISP processor 830 may also receive pixel data from image memory 820. The image memory 820 may be a portion of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include a DMA (Direct memory access) feature.
Upon receiving the raw image data, ISP processor 830 may perform one or more image processing operations.
After the ISP processor 830 obtains the color information and the depth information of the object to be measured, the color information and the depth information can be fused to obtain a three-dimensional image. The feature of the corresponding object to be measured can be extracted by at least one of an appearance contour extraction method or a contour feature extraction method. For example, the features of the object to be measured are extracted by methods such as an active shape model method ASM, an active appearance model method AAM, a principal component analysis method PCA, and a discrete cosine transform method DCT, which are not limited herein. And then the characteristics of the measured object extracted from the depth information and the characteristics of the measured object extracted from the color information are subjected to registration and characteristic fusion processing. The fusion processing may be a process of directly combining the features extracted from the depth information and the color information, a process of combining the same features in different images after weight setting, or a process of generating a three-dimensional image based on the features after fusion in other fusion modes.
The image data for the three-dimensional image may be sent to the image memory 820 for additional processing before being displayed. ISP processor 830 receives processed data from image memory 820 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. Image data for a three-dimensional image may be output to a display 860 for viewing by a user and/or for further Processing by a Graphics Processing Unit (GPU). Further, the output of the ISP processor 830 may also be sent to the image memory 820, and the display 860 may read image data from the image memory 820. In one embodiment, image memory 820 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 830 may be transmitted to the encoder/decoder 850 for encoding/decoding the image data. The encoded image data may be saved and decompressed before being displayed on the display 860 device. The encoder/decoder 850 may be implemented by a CPU or GPU or coprocessor.
The image statistics determined by ISP processor 830 may be sent to control logic 840 unit. Control logic 840 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of imaging device 810 based on received image statistics.
The following steps are implemented by using the image processing technology in fig. 8:
acquiring a first depth image of a photographic object through structured light;
acquiring a second depth image of a scene to be fused formed by the structured light;
acquiring a first image of the shooting object and a second image of the scene to be fused; the first image and the second image carry the RGB value of each pixel point;
according to the first depth image and the second depth image, the shooting object is fused into the scene to be fused, and a target depth image is obtained;
and forming a target image according to the first image, the second image and the target depth image, wherein the target image comprises the shooting object and the scene to be fused.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (12)

1. A method of taking a picture, comprising:
acquiring a first depth image of a photographic object through structured light;
acquiring a second depth image, formed by the structured light, of a scene to be fused;
acquiring a first image of the photographic object and a second image of the scene to be fused, wherein the first image and the second image each carry the RGB value of each pixel point;
fusing the photographic object into the scene to be fused according to the first depth image and the second depth image to obtain a target depth image;
and forming a target image according to the first image, the second image and the target depth image, wherein the target image comprises the photographic object and the scene to be fused.
2. The method according to claim 1, wherein the fusing the photographic object into the scene to be fused according to the first depth image and the second depth image to obtain a target depth image comprises:
constructing a 3D model of the photographic object according to the first depth image;
constructing a 3D model of the scene to be fused according to the second depth image;
acquiring a fusion position of the shooting object in the scene to be fused in the 3D model of the scene to be fused;
and placing the 3D model of the photographic object at the fusion position in the 3D model of the scene to be fused, so that the photographic object is fused into the scene to be fused and the target depth image is obtained.
3. The method of claim 2, wherein the placing the 3D model of the photographic object at the fusion position in the 3D model of the scene to be fused to fuse the photographic object into the scene to be fused and obtain the target depth image comprises:
acquiring at least one object located at the fusion position in the scene to be fused;
extracting depth information of the object from the second depth image;
determining a relative relationship between the photographic object and the object according to the depth information of the photographic object in the first depth image and the depth information of the object;
and fusing the photographic object into the scene to be fused according to the relative relationship, so as to form the target depth image.
4. The method according to claim 3, wherein determining the relative relationship between the photographic object and the object according to the depth information of the photographic object in the first depth image and the depth information of the object comprises:
and adjusting the size of the photographic object, the size of the object, and/or their front-to-back positional relationship according to the depth information of the photographic object and the depth information of the object, so as to form the relative relationship between the photographic object and the object.
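One possible reading of this adjustment, sketched below in Python, scales the photographic object according to the ratio between the depth at which it was captured and the depth of the fusion position (under a pinhole model, apparent size is inversely proportional to depth), and decides the front-to-back order by comparing the fusion-position depth with the depth of the object already in the scene. The scaling rule and all names are assumptions for illustration only, not the patent's prescribed method.

```python
def relative_relationship(captured_depth_m, fusion_depth_m,
                          scene_object_depth_m, object_size_px):
    # captured_depth_m     : depth at which the photographic object was captured
    # fusion_depth_m       : depth assigned to the object at the fusion position
    # scene_object_depth_m : depth of the object already located at that position
    # object_size_px       : (height, width) of the photographic object in pixels
    scale = captured_depth_m / fusion_depth_m          # size ~ 1 / depth
    new_size = tuple(max(1, round(s * scale)) for s in object_size_px)
    in_front = fusion_depth_m < scene_object_depth_m   # front-to-back relation
    return new_size, in_front

# Example: an object captured at 1.5 m and placed at 3.0 m next to a sofa at
# 2.5 m is drawn at half its original size and is occluded by the sofa.
new_size, in_front = relative_relationship(1.5, 3.0, 2.5, (400, 200))
```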
5. The method of claim 1, wherein forming a target image from the first image, the second image, and the target depth image comprises:
fusing the first image and the second image to obtain a fused RGB value of each pixel point in the first image;
and filling in the color of the photographic object in the target depth image using the fused RGB value of each pixel point in the first image, so as to obtain the target image.
6. The method of claim 5, wherein the fusing the first image with the second image to obtain a fused RGB value of each pixel point in the first image comprises:
acquiring an RGB value of each pixel point of the second image;
and correcting the RGB value of each pixel point in the first image by using the RGB value of each pixel point of the second image, so as to obtain the fused RGB value of each pixel point in the first image.
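The claim leaves the form of the correction open; one simple and common choice is to match the per-channel color statistics of the first image to those of the second image, so that the photographic object does not look lit differently from its new surroundings. The sketch below implements that choice and is only one possible realization, not the method the patent mandates.

```python
import numpy as np

def fuse_rgb(first_image, second_image):
    # Both inputs are HxWx3 uint8 arrays. Each channel of the first image is
    # shifted and scaled so that its mean and standard deviation match those of
    # the corresponding channel of the second image (a basic color transfer).
    first = first_image.astype(np.float32)
    second = second_image.astype(np.float32)
    fused = np.empty_like(first)
    for c in range(3):
        f_mean, f_std = first[..., c].mean(), first[..., c].std() + 1e-6
        s_mean, s_std = second[..., c].mean(), second[..., c].std()
        fused[..., c] = (first[..., c] - f_mean) * (s_std / f_std) + s_mean
    return np.clip(fused, 0, 255).astype(np.uint8)
```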
7. The method of any one of claims 1-6, wherein the acquiring a first depth image of a photographic object through structured light comprises:
emitting structured light to the photographic object;
collecting reflected light formed by the structured light on the photographic object, wherein the reflected light carries depth information of the photographic object;
and taking an image formed by the reflected light on an image sensor as the first depth image.
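For context, structured-light depth is commonly recovered by comparing the captured pattern with a stored reference pattern and triangulating: the horizontal shift (disparity) of each speckle between the two patterns, together with the camera focal length and the projector-to-camera baseline, determines the depth at that pixel. The sketch below shows only that final triangulation step, assumes the disparity map has already been computed, and is not the patent's own reconstruction procedure.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m, min_disparity=1e-3):
    # disparity_px : HxW shift, in pixels, of each speckle between the captured
    #                pattern and the reference pattern
    # focal_px     : camera focal length expressed in pixels
    # baseline_m   : projector-to-camera distance in metres
    # Returns depth in metres; pixels with (near-)zero disparity are set to 0.
    disparity = np.asarray(disparity_px, dtype=np.float32)
    depth = np.zeros_like(disparity)
    valid = disparity > min_disparity
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example: with a 600-pixel focal length and a 7.5 cm baseline, a 30-pixel
# shift corresponds to a depth of 600 * 0.075 / 30 = 1.5 m.
```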
8. The method according to claim 7, wherein the structured light is non-uniform structured light, the non-uniform structured light being a speckle pattern or a random dot pattern formed by a collection of a plurality of light spots and being formed by a diffractive optical element provided in a projection device on the terminal, wherein the diffractive optical element is provided with a plurality of reliefs having different groove depths.
9. The method of claim 8, wherein obtaining the second depth image of the scene to be fused formed by the structured light comprises:
receiving a photographing request sent by an opposite terminal device, wherein the photographing request comprises the second depth image of the scene to be fused, the second depth image being formed by the opposite terminal device emitting structured light onto the scene to be fused; or,
selecting an image to be fused from images prestored on the terminal device, and acquiring a depth image of the scene corresponding to the image to be fused as the second depth image.
10. A photographing apparatus, comprising:
a first acquisition module, configured to acquire a first depth image of a photographic object through structured light;
a second acquisition module, configured to acquire a second depth image, formed by the structured light, of a scene to be fused;
a third acquisition module, configured to acquire a first image of the photographic object and a second image of the scene to be fused, wherein the first image and the second image each carry the RGB value of each pixel point;
a fourth acquisition module, configured to fuse the photographic object into the scene to be fused according to the first depth image and the second depth image to obtain a target depth image;
and a fifth acquisition module, configured to form a target image according to the first image, the second image and the target depth image, wherein the target image comprises the photographic object and the scene to be fused.
11. A terminal device comprising a memory and a processor, the memory having stored therein computer-readable instructions, which when executed by the processor, cause the processor to execute the photographing method according to any one of claims 1 to 9.
12. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the photographing method according to any one of claims 1-8.
CN201710643861.5A 2017-07-31 2017-07-31 Photographic method and its device Active CN107483845B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710643861.5A CN107483845B (en) 2017-07-31 2017-07-31 Photographic method and its device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710643861.5A CN107483845B (en) 2017-07-31 2017-07-31 Photographic method and its device

Publications (2)

Publication Number Publication Date
CN107483845A true CN107483845A (en) 2017-12-15
CN107483845B CN107483845B (en) 2019-09-06

Family

ID=60596922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710643861.5A Active CN107483845B (en) 2017-07-31 2017-07-31 Photographic method and its device

Country Status (1)

Country Link
CN (1) CN107483845B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741405A (en) * 2019-01-21 2019-05-10 同济大学 A kind of depth information acquisition system based on dual structure light RGB-D camera
CN109862276A (en) * 2019-03-31 2019-06-07 联想(北京)有限公司 A kind of information processing method and device
CN109993831A (en) * 2019-05-13 2019-07-09 浙江舜宇光学有限公司 The construction method and system of depth image
CN111683239A (en) * 2020-06-22 2020-09-18 贝壳技术有限公司 Control method and device of three-dimensional camera and computer readable storage medium
CN111815695A (en) * 2020-07-09 2020-10-23 Oppo广东移动通信有限公司 Depth image acquisition method and device, mobile terminal and storage medium
CN112312113A (en) * 2020-10-29 2021-02-02 贝壳技术有限公司 Method, device and system for generating three-dimensional model
CN112560698A (en) * 2020-12-18 2021-03-26 北京百度网讯科技有限公司 Image processing method, apparatus, device and medium
CN114170349A (en) * 2020-09-10 2022-03-11 北京达佳互联信息技术有限公司 Image generation method, image generation device, electronic equipment and storage medium
WO2022052782A1 (en) * 2020-09-10 2022-03-17 华为技术有限公司 Image processing method and related device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103337079A (en) * 2013-07-09 2013-10-02 广州新节奏智能科技有限公司 Virtual augmented reality teaching method and device
CN103348386A (en) * 2010-11-29 2013-10-09 马普科技促进协会 Computer-implemented method and apparatus for tracking and re-shaping human shaped figure in digital video
EP2869263A1 (en) * 2013-10-29 2015-05-06 Thomson Licensing Method and apparatus for generating depth map of a scene
CN105357515A (en) * 2015-12-18 2016-02-24 天津中科智能识别产业技术研究院有限公司 Color and depth imaging method and device based on structured light and light-field imaging
CN105913499A (en) * 2016-04-12 2016-08-31 郭栋 Three-dimensional conversion synthesis method and three-dimensional conversion synthesis system
CN106937059A (en) * 2017-02-09 2017-07-07 北京理工大学 Image synthesis method and system based on Kinect

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103348386A (en) * 2010-11-29 2013-10-09 马普科技促进协会 Computer-implemented method and apparatus for tracking and re-shaping human shaped figure in digital video
CN103337079A (en) * 2013-07-09 2013-10-02 广州新节奏智能科技有限公司 Virtual augmented reality teaching method and device
EP2869263A1 (en) * 2013-10-29 2015-05-06 Thomson Licensing Method and apparatus for generating depth map of a scene
CN105357515A (en) * 2015-12-18 2016-02-24 天津中科智能识别产业技术研究院有限公司 Color and depth imaging method and device based on structured light and light-field imaging
CN105913499A (en) * 2016-04-12 2016-08-31 郭栋 Three-dimensional conversion synthesis method and three-dimensional conversion synthesis system
CN106937059A (en) * 2017-02-09 2017-07-07 北京理工大学 Image synthesis method and system based on Kinect

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741405B (en) * 2019-01-21 2021-02-02 同济大学 Depth information acquisition system based on dual structured light RGB-D camera
CN109741405A (en) * 2019-01-21 2019-05-10 同济大学 A kind of depth information acquisition system based on dual structure light RGB-D camera
CN109862276A (en) * 2019-03-31 2019-06-07 联想(北京)有限公司 A kind of information processing method and device
CN109862276B (en) * 2019-03-31 2020-11-20 联想(北京)有限公司 Information processing method and device
CN109993831A (en) * 2019-05-13 2019-07-09 浙江舜宇光学有限公司 The construction method and system of depth image
CN109993831B (en) * 2019-05-13 2023-09-26 浙江舜宇光学有限公司 Depth image construction method and system
CN111683239A (en) * 2020-06-22 2020-09-18 贝壳技术有限公司 Control method and device of three-dimensional camera and computer readable storage medium
CN111683239B (en) * 2020-06-22 2022-11-01 贝壳技术有限公司 Control method and device of three-dimensional camera and computer readable storage medium
CN111815695A (en) * 2020-07-09 2020-10-23 Oppo广东移动通信有限公司 Depth image acquisition method and device, mobile terminal and storage medium
CN111815695B (en) * 2020-07-09 2024-03-15 Oppo广东移动通信有限公司 Depth image acquisition method and device, mobile terminal and storage medium
CN114170349A (en) * 2020-09-10 2022-03-11 北京达佳互联信息技术有限公司 Image generation method, image generation device, electronic equipment and storage medium
WO2022052782A1 (en) * 2020-09-10 2022-03-17 华为技术有限公司 Image processing method and related device
CN112312113A (en) * 2020-10-29 2021-02-02 贝壳技术有限公司 Method, device and system for generating three-dimensional model
CN112560698A (en) * 2020-12-18 2021-03-26 北京百度网讯科技有限公司 Image processing method, apparatus, device and medium
CN112560698B (en) * 2020-12-18 2024-01-16 北京百度网讯科技有限公司 Image processing method, device, equipment and medium

Also Published As

Publication number Publication date
CN107483845B (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN107483845B (en) Photographic method and its device
CN107465906B (en) Panorama shooting method, device and the terminal device of scene
CN107481304B (en) Method and device for constructing virtual image in game scene
CN107480613B (en) Face recognition method and device, mobile terminal and computer readable storage medium
CN107452034B (en) Image processing method and device
CN107610171B (en) Image processing method and device
CN107481317A (en) The facial method of adjustment and its device of face 3D models
CN107734267B (en) Image processing method and device
CN107370951B (en) Image processing system and method
CN107517346B (en) Photographing method and device based on structured light and mobile device
CN107610080B (en) Image processing method and apparatus, electronic apparatus, and computer-readable storage medium
CN107491744B (en) Human body identity recognition method and device, mobile terminal and storage medium
CN107734264B (en) Image processing method and device
CN107592449B (en) Three-dimensional model establishing method and device and mobile terminal
CN107493427A (en) Focusing method, device and the mobile terminal of mobile terminal
CN107479801A (en) Displaying method of terminal, device and terminal based on user's expression
CN107392874B (en) Beauty treatment method and device and mobile equipment
CN107820019B (en) Blurred image acquisition method, blurred image acquisition device and blurred image acquisition equipment
CN107480615B (en) Beauty treatment method and device and mobile equipment
CN107463659B (en) Object searching method and device
CN107590828B (en) Blurring processing method and device for shot image
CN107659985B (en) Method and device for reducing power consumption of mobile terminal, storage medium and mobile terminal
CN107370950A (en) Focusing process method, apparatus and mobile terminal
CN107592491B (en) Video communication background display method and device
CN107613239B (en) Video communication background display method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.

GR01 Patent grant
GR01 Patent grant