CN110012229B - Image processing method and terminal - Google Patents
- Publication number: CN110012229B (application number CN201910294697.0A)
- Authority: CN (China)
- Prior art keywords: image, camera, target, terminal, preset
- Prior art date: 2019-04-12
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N23/10: Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
- H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
- H04N23/62: Control of parameters via user interfaces
- H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
- H04N23/80: Camera processing pipelines; Components thereof

(All codes fall under H04N: Pictorial communication, e.g. television.)
Abstract
The invention provides an image processing method and a terminal. The method comprises the following steps: acquiring a first image of a shot object through a first camera, and acquiring a second image of the shot object through a second camera; acquiring depth information and/or angle information in the first image, and identifying a background object in the shot object according to the depth information and/or the angle information; and removing the background object included in the second image to generate a target image. The image processing method provided by the embodiments of the invention can improve the efficiency of background elimination in images.
Description
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an image processing method and a terminal.
Background
With the rapid development of terminals, their functions have become increasingly diversified, and users increasingly take pictures with the photographing function of the terminal. In practical applications, after an image is captured by a camera of the terminal, the image generally needs to undergo background elimination in a separate image processing application, and this background elimination takes a long time. The background elimination efficiency of current terminals is therefore low.
Disclosure of Invention
The embodiment of the invention provides an image processing method and a terminal, and aims to solve the problem that the background elimination efficiency of the current terminal on an image is low.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, which is applied to a terminal including a first camera and a second camera, where the first camera and the second camera are located on a same side of the terminal, and the method includes:
acquiring a first image of a shot object through the first camera, and acquiring a second image of the shot object through the second camera;
acquiring depth information and/or angle information in the first image, and identifying a background object in the shot object according to the depth information and/or the angle information;
and removing the background object included in the second image to generate a target image.
In a second aspect, an embodiment of the present invention further provides a terminal, including: a first camera and a second camera, where the first camera and the second camera are located on the same side of the terminal, and the terminal further includes:
the first acquisition module is used for acquiring a first image of a shot object through the first camera and acquiring a second image of the shot object through the second camera;
the second acquisition module is used for acquiring depth information and/or angle information in the first image and identifying a background object in the shot object according to the depth information and/or the angle information;
and the removing module is used for removing the background object included in the second image to generate a target image.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including: a memory, a processor, and a computer program stored on the memory and capable of running on the processor, where the processor implements the steps of the image processing method when executing the computer program.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps in the image processing method.
In the embodiment of the invention, a first image of a shot object is acquired through the first camera, and a second image of the shot object is acquired through the second camera; acquiring depth information and/or angle information in the first image, and identifying a background object in the shot object according to the depth information and/or the angle information; and removing the background object included in the second image to generate a target image. Therefore, the background object can be directly identified through the depth information and/or the angle information in the first image, and then the background object in the second image is removed, so that the efficiency of removing the background object in the second image is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a first example diagram provided by an embodiment of the present invention;
FIG. 3 is a flow chart of another image processing method provided by the embodiment of the invention;
FIG. 4 is a second example diagram provided by an embodiment of the present invention;
fig. 5 is a structural diagram of a terminal according to an embodiment of the present invention;
fig. 6 is a block diagram of another terminal provided in an embodiment of the present invention;
fig. 7 is a block diagram of another terminal according to an embodiment of the present invention;
fig. 8 is a block diagram of another terminal provided in an embodiment of the present invention;
fig. 9 is a block diagram of another terminal provided in an embodiment of the present invention;
fig. 10 is a block diagram of another terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of an image processing method provided in an embodiment of the present invention. The method is applied to a terminal including a first camera and a second camera, where the first camera and the second camera are located on the same side of the terminal. As shown in fig. 1, the method includes the following steps:

Step 101: acquiring a first image of a shot object through the first camera, and acquiring a second image of the shot object through the second camera.

The first camera may be a Time Of Flight (TOF) camera, and the second camera may be a color camera, which may also be referred to as an RGB camera.
The shot object may include a background object and a target object, for example: the target object may be a human face and the background object may be a wall surface and a landscape painting hung on the wall surface.
The first image is an image in a video acquired by the first camera with respect to the subject, and the second image is an image in a video acquired by the second camera with respect to the subject.
When the first image is an image in a video acquired by the first camera with respect to the object, and the second image is an image in a video acquired by the second camera with respect to the object, the first image and the second image may be images acquired at the same time with respect to the same object, for example: the first image is an image obtained by the terminal for the first scene through the first camera at the first time, and the second image may be an image obtained by the terminal for the first scene through the second camera at the first time.
In addition, the first image and the second image are both images in the video, so that the background object of each frame of the second image in the video can be removed, and then the second image can be synthesized with other videos comprising scenes to be synthesized.
In this embodiment, the first image is an image in a video acquired by the first camera with respect to the photographed object, and the second image is an image in a video acquired by the second camera with respect to the photographed object, which can be applied to removing a background object of each frame of image in the video, so that the application range of this embodiment is wider, and the use is more flexible.
Step 102: obtaining depth information and/or angle information in the first image, and identifying a background object in the shot object according to the depth information and/or the angle information, where the angle information is: angle information of each position in the shot object relative to the first camera.
The background object is formed by the positions whose depth information in the first image is not within a preset range and/or whose angle information is larger than a preset angle threshold. For example: the preset range may be 1 meter to 10 meters, and the preset angle threshold may be 45 degrees.
It should be noted that the preset range may refer to a range between a value greater than or equal to a first value and a value less than or equal to a second value, where a value of the second value is related to the accuracy of the first camera, and the higher the accuracy of the first camera is, the higher the value of the second value is.
In addition, the angle information of each position in the shot object may specifically be: the included angle between the line connecting that position with the first camera and a reference straight line, where the reference straight line is the line that passes through the first camera and is perpendicular to the surface of the terminal on which the first camera is arranged. For example, referring to fig. 2, point A is the position of the first camera, B is one position in the shot object, and C is another position in the shot object; the straight line through A and D is the reference straight line. The angle information of point B is the included angle between line AB and line AD, the angle information of point C is the included angle between line AC and line AD, the included angle between line AE and line AD may represent the preset angle threshold, and both the length of FH and the length of AG may represent the second value of the preset range.
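This geometry can be made concrete with a short sketch. It is only an illustration, assuming the first (TOF) camera yields a per-pixel depth map and that pinhole intrinsics (fx, fy, cx, cy) of that camera are known; the patent itself does not specify how the angle is computed.

```python
import numpy as np

def per_pixel_angle(depth, fx, fy, cx, cy):
    """Angle (in degrees) between each pixel's viewing ray and the camera
    axis, i.e. the reference straight line AD of fig. 2.

    depth: HxW array of distances along the camera axis, in meters.
    fx, fy, cx, cy: assumed pinhole intrinsics of the first camera.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project each pixel to camera coordinates (Z along the axis).
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    radial = np.sqrt(x ** 2 + y ** 2)
    return np.degrees(np.arctan2(radial, depth))
```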
In addition, the background object can be identified from the angle information alone: for each position in the shot object, judge whether the included angle between the line connecting that position with the first camera and the reference straight line exceeds the preset angle threshold; the portion formed by the positions whose included angles are larger than the preset angle threshold is the background object. Of course, the background object may also be identified from the depth information alone, in which case the portion formed by the positions whose depth information is not within the preset range is the background object.
It should be noted that the background object may be identified by simultaneously combining the angle information of each position in the subject with respect to the first camera and the depth information of each position in the subject. For example: the partial image of the first image within the preset angle threshold range may be determined, and then a partial image of the partial image with the depth information within the preset range is determined as a target object, and an image of the first image other than the target object is determined as a background object.
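A minimal sketch of this identification step is given below. The function name and the default thresholds are illustrative only and follow the 1 to 10 meter range and 45-degree threshold mentioned above; the per-pixel angle map can be obtained as in the previous sketch.

```python
import numpy as np

def background_mask(depth, angle, depth_range=(1.0, 10.0), angle_thresh=45.0):
    """Boolean HxW mask marking the background object in the first image.

    A position belongs to the background when its depth is outside the
    preset range and/or its angle exceeds the preset angle threshold;
    the remaining positions form the target object.
    """
    near, far = depth_range
    out_of_range = (depth < near) | (depth > far)
    too_oblique = angle > angle_thresh
    return out_of_range | too_oblique
```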
Step 103: removing the background object included in the second image to generate a target image.

After the background object in the first image is identified, the portion corresponding to the background object in the second image may be removed, so as to generate a target image including only the target object.
For example: the shot object comprises a face and a wall surface, the background object in the first image can be identified as the wall surface according to the depth information and/or the angle information in the first image, and the wall surface in the second image can be directly removed, so that a target image comprising the face only is generated.
In the embodiment of the present invention, the terminal may be a Mobile phone, a Tablet Personal Computer (Tablet Personal Computer), a Laptop Computer (Laptop Computer), a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), a Wearable Device (Wearable Device), or the like.
In the embodiment of the invention, a first image of a shot object is acquired through the first camera, and a second image of the shot object is acquired through the second camera; acquiring depth information and/or angle information in the first image, and identifying a background object in the shot object according to the depth information and/or the angle information; and removing the background object included in the second image to generate a target image. Therefore, the background object can be directly identified through the depth information and/or the angle information in the first image, and then the background object in the second image is removed, so that the efficiency of removing the background object in the second image is improved.
Referring to fig. 3, fig. 3 is a flowchart of another image processing method according to an embodiment of the present invention. The main differences between this embodiment and the previous embodiment are: the preset range or the numerical value of the preset angle threshold value can be preset according to an operation instruction of a user. As shown in fig. 3, the method comprises the following steps:
Step 301: displaying a setting interface.
When the user opens the camera application program of the terminal and selects the background object removing mode, the setting interface can be directly displayed. Of course, when the user opens the camera application of the terminal and selects the background object removal mode, the setting interface may not be directly displayed, and when the display instruction input by the user is received, the setting interface is displayed. The specific manner is not limited herein.
Step 302: receiving an operation instruction of a user, and setting the depth information and/or the angle information on the setting interface according to the operation instruction.

The operation instruction of the user may be a pressing instruction, a touch instruction, a voice instruction, or the like; the specific type is not limited here.
Wherein, depth information and/or angle information can be set on the setting interface, for example: the depth information may include a range of depth information, i.e., a preset range, and the angle information may include a preset angle threshold. Of course, a pre-stored picture can also be directly acquired, and the depth information or the angle information of the picture can be modified.
For example: at least one of a line corresponding to the preset range and a line corresponding to the preset angle threshold can be displayed on the setting interface, and the user can adjust the preset range or the preset angle threshold by dragging the corresponding line.
In addition, when the lines corresponding to the preset range and the lines corresponding to the preset angle threshold are displayed on the setting interface at the same time, the two types of lines can enclose a preset graph. Referring to fig. 4, the preset graph is enclosed by the two types of lines AE and AG. The specific type of the preset graph is not limited here, for example: the preset graph can be a cone, a pyramid or a hemisphere, and the user can drag a line of the preset graph, thereby setting the preset range or the preset angle threshold. In this way, the portion of the first image within the preset graph is the target object, and the portion of the first image not within the preset graph is the background object, so that the recognition rate of the background object can be improved.
In addition, the outlines of the target object and the background object in the first image can be marked, the first image and the second image can be superimposed, and the portion of the background object in the second image can then be removed. Of course, it is also possible to store the depth information of the target object in the first image, superimpose the depth information of the target object on the color image information included in the second image, and find the contour formed by the depth information of the target object according to a convex hull algorithm (e.g., the Graham scan algorithm, the Melkman algorithm, etc.). The pixel points corresponding to the color image information inside the contour are then retained in the second image, and the pixel points corresponding to the color image information outside the contour are replaced with transparent pixel points, thereby achieving the purpose of eliminating the background object in the second image.
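A rough sketch of this contour-based variant is shown below. It assumes the first and second images are already registered pixel-to-pixel, and it uses OpenCV's convex hull routine as a stand-in for the convex hull algorithms named above; the function name and the four-channel output format are illustrative assumptions.

```python
import numpy as np
import cv2

def remove_background_by_hull(second_image, target_mask):
    """Keep only the pixels inside the convex hull of the target object.

    second_image: HxWx3 color image from the second (RGB) camera.
    target_mask: boolean HxW mask of the target object from the first image.
    Returns an HxWx4 image whose background pixels are transparent.
    """
    ys, xs = np.nonzero(target_mask)
    points = np.stack([xs, ys], axis=1).astype(np.int32)
    hull = cv2.convexHull(points)              # contour of the target object
    inside = np.zeros(target_mask.shape, dtype=np.uint8)
    cv2.fillConvexPoly(inside, hull, 1)        # rasterize the contour
    bgra = cv2.cvtColor(second_image, cv2.COLOR_BGR2BGRA)
    bgra[inside == 0] = (0, 0, 0, 0)           # transparent background pixels
    return bgra
```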
In addition, default numerical values of at least one of the preset range and the preset angle threshold value can be displayed on the setting interface, and a user can directly input the preset range or the setting numerical values of the preset angle threshold value.
It should be noted that the specific setting manner of the preset range and the preset angle threshold is not limited herein.
It should be noted that steps 301 and 302 are optional.
Step 303: acquiring a first image of the shot object through the first camera, and acquiring a second image of the shot object through the second camera.
The shot object may include a background object and a target object, for example: the target object may be a human face and the background object may be a wall surface and a landscape painting hung on the wall surface.
The first image is an image in a video acquired by the first camera with respect to the subject, and the second image is an image in a video acquired by the second camera with respect to the subject.
In addition, the first image is an image in a video acquired by the first camera for the photographed object, and the second image is an image in a video acquired by the second camera for the photographed object, and the specific expression may refer to the expression in the previous embodiment, and the same beneficial technical effects in the previous embodiment may be achieved, which is not described herein again.
Step 304: obtaining depth information and/or angle information in the first image, and identifying a background object in the shot object according to the depth information and/or the angle information.

The background object is formed by the positions whose depth information in the first image is not within the preset range and/or whose angle information is larger than the preset angle threshold. For example: the preset range may be 1 meter to 10 meters, and the preset angle threshold may be 45 degrees.
It should be noted that the preset range may refer to a range between a value larger than the first value and a value smaller than the second value, where the value of the second value is related to the accuracy of the first camera, and the higher the accuracy of the first camera is, the higher the value of the second value is.
Step 305: removing the background object included in the second image to generate a target image.

In one implementation, the pixel points of the background object in the second image are directly deleted.
Optionally, the removing the background object included in the second image to generate a target image includes:
identifying pixel points of the background object in the second image;
and setting pixel values corresponding to the pixel points of the background object in the second image as first pixel values, and generating the target image.
The first pixel value is smaller than a preset threshold. It should be noted that the pixel value may also be referred to as an RGB value, and the value of the preset threshold is not limited here, for example: the preset threshold may be 0.1. Preferably, the pixel value corresponding to a pixel point of the background object in the second image may be set to 0, and a pixel point whose pixel value is 0 may be referred to as a transparent pixel point.
In this embodiment, the pixel value corresponding to the pixel point of the background object in the second image is set as the first pixel value, which can also achieve the effect of removing the background object, so that the step of removing the background object is simpler and more convenient, and the efficiency is higher.
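A minimal sketch of this pixel-replacement step follows. It assumes the background mask has already been identified from the first image as described in step 304; the function name and the default value of 0 (the "transparent" pixel point mentioned above) are illustrative.

```python
import numpy as np

def remove_background(second_image, bg_mask, first_pixel_value=0):
    """Set background pixels of the second image to the first pixel value.

    second_image: HxWx3 color image from the second camera.
    bg_mask: boolean HxW background mask identified from the first image.
    first_pixel_value: value below the preset threshold; 0 yields the
        transparent pixel points described above.
    """
    target_image = second_image.copy()
    target_image[bg_mask] = first_pixel_value
    return target_image
```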
Optionally, after setting pixel values corresponding to pixel points of the background object in the second image as first pixel values and generating the target image, the method further includes:
synthesizing a pixel point of a target object in the target image and depth information of the target object in the first image into a three-dimensional model of the target object, wherein the target object is an object except the background object in the shot object;
and running a virtual reality application program, and constructing a virtual reality application scene of the three-dimensional model through the virtual reality application program.
Therefore, by constructing the virtual reality application scene of the three-dimensional model, the application scene is more diversified and flexible.
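As an illustration of how the target image and the depth information might be synthesized into a three-dimensional model, the sketch below builds a colored point cloud, the simplest such model. The pinhole intrinsics and the assumption that the two images are registered are not specified in the patent, and the construction of the virtual reality application scene itself depends on the VR framework used and is omitted here.

```python
import numpy as np

def target_point_cloud(target_image, depth, target_mask, fx, fy, cx, cy):
    """Combine the target object's color pixels with its depth information
    into a colored 3D point cloud (a simple three-dimensional model).

    target_image: HxWx3 color image with the background object removed.
    depth: HxW depth map from the first image, in meters.
    target_mask: boolean HxW mask of the target object.
    fx, fy, cx, cy: assumed pinhole intrinsics of the first camera.
    """
    ys, xs = np.nonzero(target_mask)
    z = depth[ys, xs]
    x = (xs - cx) / fx * z
    y = (ys - cy) / fy * z
    points = np.stack([x, y, z], axis=1)        # Nx3 positions in meters
    colors = target_image[ys, xs] / 255.0       # Nx3 colors in [0, 1]
    return points, colors
```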
Optionally, after setting pixel values corresponding to pixel points of the background object in the second image as first pixel values and generating the target image, the method further includes:
acquiring a third image comprising a scene to be synthesized;
and synthesizing the target image and the third image.
The scene to be synthesized may be a landscape image of a certain place or an image of a certain known person.
Therefore, the target image can be directly synthesized with the third image without using an image synthesis application program, the operation of a user is simplified, and the efficiency of image synthesis is improved.
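The synthesis of the target image with the third image can be sketched as a simple alpha blend. The sketch assumes the target image carries an alpha channel (transparent background pixels, as produced in the earlier sketches) and that the third image has the same resolution; both assumptions are illustrative rather than stated in the patent.

```python
import numpy as np

def synthesize(target_image_bgra, third_image):
    """Composite the target image over the third image (the scene to be
    synthesized) using the target image's alpha channel."""
    alpha = target_image_bgra[..., 3:4] / 255.0
    blended = alpha * target_image_bgra[..., :3] + (1.0 - alpha) * third_image
    return blended.astype(np.uint8)
```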
In the embodiment of the present invention, through steps 301 to 305, a user may preset a corresponding preset range or a numerical value of a preset angle threshold according to a different usage scenario, so that flexibility of removing a background object may be improved.
Referring to fig. 5, fig. 5 is a structural diagram of a terminal according to an embodiment of the present invention, which can implement details of an image processing method in the foregoing embodiment and achieve the same effect. The terminal 500 includes: a first camera and a second camera, where the first camera and the second camera are located on the same side of the terminal, as shown in fig. 5, the terminal 500 further includes:
a first obtaining module 501, configured to obtain a first image of a subject through the first camera, and obtain a second image of the subject through the second camera;
a second obtaining module 502, configured to obtain depth information and/or angle information in the first image, and identify a background object in the shot object according to the depth information and/or the angle information, where the angle information is: angle information of each position in the shot object relative to the first camera, and the background object is the portion whose depth information is not within a preset range and/or whose angle information is larger than a preset angle threshold;
a removing module 503, configured to remove the background object included in the second image, so as to generate a target image.
Optionally, referring to fig. 6, the removing module 503 includes:
an identifier module 5031, configured to identify a pixel point of the background object in the second image;
the replacing sub-module 5032 is configured to set a pixel value corresponding to a pixel point of the background object in the second image as a first pixel value, and generate the target image, where the first pixel value is smaller than a preset threshold.
Optionally, referring to fig. 7, the terminal 500 further includes:
a display module 504 for displaying a setting interface;
a setting module 505, configured to receive an operation instruction of a user, and set the depth information and/or the angle information on the setting interface according to the operation instruction.
Optionally, referring to fig. 8, the terminal 500 further includes:
a first synthesizing module 506, configured to synthesize a three-dimensional model of a target object with pixel points of the target object in the target image and depth information of the target object in the first image, where the target object is an object other than the background object in the captured object;
the building module 507 is configured to run a virtual reality application program, and build a virtual reality application scene of the three-dimensional model through the virtual reality application program.
Optionally, referring to fig. 9, the terminal 500 further includes:
a third obtaining module 508, configured to obtain a third image including a scene to be synthesized;
a second synthesizing module 509, configured to synthesize the target image and the third image.
Optionally, the first image is an image in a video acquired by the first camera with respect to the object, and the second image is an image in a video acquired by the second camera with respect to the object.
The terminal 500 can implement each process implemented by the terminal in the method embodiments of fig. 1 and fig. 3, and is not described herein again to avoid repetition.
Fig. 10 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention.
The mobile terminal 1000 includes, but is not limited to: the mobile terminal comprises a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, a power supply 1011, a first camera and a second camera, and the like, wherein the first camera and the second camera are located on the same side of the mobile terminal 1000. Those skilled in the art will appreciate that the mobile terminal architecture illustrated in fig. 10 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 1010 is configured to acquire a first image of a subject through the first camera, and acquire a second image of the subject through the second camera;
acquiring depth information and/or angle information in the first image, and identifying a background object in the shot object according to the depth information and/or the angle information, wherein the angle information is as follows: the angle information of each position in the shot object relative to the first camera, the depth information of the background object are not in a preset range, and/or the angle information is larger than a preset angle threshold;
and removing the background object included in the second image to obtain a target image.
Optionally, the removing the background object included in the second image and generating a target image by the processor 1010 includes:
identifying pixel points of the background object in the second image;
setting a pixel value corresponding to a pixel point of the background object in the second image as a first pixel value, and generating the target image, wherein the first pixel value is smaller than a preset threshold value.
Optionally, the processor 1010 is further configured to, before the first image of the shot object is acquired through the first camera and the second image of the shot object is acquired through the second camera:
displaying a setting interface;
and receiving an operation instruction of a user, and setting the depth information and/or the angle information on the setting interface according to the operation instruction.
Optionally, the processor 1010 is further configured to, after setting the pixel values corresponding to the pixel points of the background object in the second image as first pixel values and generating the target image:
synthesizing a pixel point of a target object in the target image and depth information of the target object in the first image into a three-dimensional model of the target object, wherein the target object is an object except the background object in the shot object;
running a virtual reality application program, and constructing a virtual reality application scene of the three-dimensional model through the virtual reality application program;
or,
acquiring a third image comprising a scene to be synthesized;
and synthesizing the target image and the third image.
Optionally, the first image is an image in a video acquired by the first camera with respect to the object, and the second image is an image in a video acquired by the second camera with respect to the object.
The mobile terminal provided by the embodiment of the invention has better background elimination efficiency on the image.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 1001 may be used for receiving and sending signals during a message transmission or a call, and specifically, receives downlink data from a base station and then processes the received downlink data to the processor 1010; in addition, the uplink data is transmitted to the base station. In general, radio frequency unit 1001 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 1001 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 1002, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 1003 may convert audio data received by the radio frequency unit 1001 or the network module 1002 or stored in the memory 1009 into an audio signal and output as sound. Also, the audio output unit 1003 may also provide audio output related to a specific function performed by the mobile terminal 1000 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 1003 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1004 is used to receive an audio or video signal. The input unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, the graphics processor 10041 processing image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 1006. The image frames processed by the graphics processor 10041 may be stored in the memory 1009 (or other storage medium) or transmitted via the radio frequency unit 1001 or the network module 1002. The microphone 10042 can receive sound and process it into audio data. In the case of a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 1001.
The mobile terminal 1000 can also include at least one sensor 1005, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 10061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 10061 and/or the backlight when the mobile terminal 1000 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 1005 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 1006 is used to display information input by the user or information provided to the user. The Display unit 1006 may include a Display panel 10061, and the Display panel 10061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 1007 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 10071 (e.g., operations by a user on or near the touch panel 10071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 10071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 1010, and receives and executes commands sent by the processor 1010. In addition, the touch panel 10071 may be implemented by various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 10071, the user input unit 1007 can include other input devices 10072. Specifically, the other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 10071 can be overlaid on the display panel 10061, and when the touch panel 10071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 1010 to determine the type of the touch event, and then the processor 1010 provides a corresponding visual output on the display panel 10061 according to the type of the touch event. Although in fig. 10, the touch panel 10071 and the display panel 10061 are two independent components for implementing the input and output functions of the mobile terminal, in some embodiments, the touch panel 10071 and the display panel 10061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 1008 is an interface through which an external device is connected to the mobile terminal 1000. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1008 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 1000 or may be used to transmit data between the mobile terminal 1000 and external devices.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, and the like), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 1009 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 1010 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 1009 and calling data stored in the memory 1009, thereby integrally monitoring the mobile terminal. Processor 1010 may include one or more processing units; preferably, the processor 1010 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The mobile terminal 1000 may also include a power supply 1011 (e.g., a battery) for powering the various components, and the power supply 1011 may be logically coupled to the processor 1010 via a power management system that may be configured to manage charging, discharging, and power consumption.
In addition, the mobile terminal 1000 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 1010, a memory 1009, and a computer program stored in the memory 1009 and capable of running on the processor 1010, where the computer program is executed by the processor 1010 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (10)
1. An image processing method applied to a terminal comprising a first camera and a second camera, wherein the first camera and the second camera are located on the same side of the terminal, the method comprising:
acquiring a first image of a shot object through the first camera, and acquiring a second image of the shot object through the second camera;
acquiring depth information and/or angle information in the first image, and identifying a background object in the shot object according to the depth information and/or the angle information;
removing the background object included in the second image to generate a target image;
before the first image of the object to be shot is acquired by the first camera and the second image of the object to be shot is acquired by the second camera, the method further comprises:
displaying a setting interface;
receiving an operation instruction of a user, and setting the depth information and/or the angle information on the setting interface according to the operation instruction;
the setting interface displays lines corresponding to a preset range and lines corresponding to a preset angle threshold, and the lines corresponding to the preset range and the lines corresponding to the preset angle threshold enclose a preset graph;
the identifying a background object in the photographed object according to the depth information and/or the angle information includes:
determining that the part of the first image which is not in the preset graph is the background object.
2. The method of claim 1, wherein the removing the background objects included in the second image to generate a target image comprises:
identifying pixel points of the background object in the second image;
setting a pixel value corresponding to a pixel point of the background object in the second image as a first pixel value, and generating the target image, wherein the first pixel value is smaller than a preset threshold value.
3. The method of claim 1, wherein after setting pixel values corresponding to pixel points of the background object in the second image to first pixel values and generating the target image, the method further comprises:
synthesizing a pixel point of a target object in the target image and depth information of the target object in the first image into a three-dimensional model of the target object, wherein the target object is an object except the background object in the shot object;
running a virtual reality application program, and constructing a virtual reality application scene of the three-dimensional model through the virtual reality application program;
or,
acquiring a third image comprising a scene to be synthesized;
and synthesizing the target image and the third image.
4. The method according to any one of claims 1 to 3, wherein the first image is an image in a video acquired for the subject by the first camera, and the second image is an image in a video acquired for the subject by the second camera.
5. A terminal, comprising: a first camera and a second camera, wherein the first camera and the second camera are located on the same side of the terminal, and the terminal further comprises:
the first acquisition module is used for acquiring a first image of a shot object through the first camera and acquiring a second image of the shot object through the second camera;
the second acquisition module is used for acquiring depth information and/or angle information in the first image and identifying a background object in the shot object according to the depth information and/or the angle information;
the removing module is used for removing the background object included in the second image to generate a target image;
the terminal further comprises:
the display module is used for displaying a setting interface;
the setting module is used for receiving an operation instruction of a user and setting the depth information and/or the angle information on the setting interface according to the operation instruction;
the setting interface displays lines corresponding to a preset range and lines corresponding to a preset angle threshold, and the lines corresponding to the preset range and the lines corresponding to the preset angle threshold enclose a preset graph;
the setting module is further configured to determine that a portion of the first image that is not within the preset graph is the background object.
6. The terminal of claim 5, wherein the removing module comprises:
the identification submodule is used for identifying pixel points of the background object in the second image;
and the replacing sub-module is used for setting a pixel value corresponding to the pixel point of the background object in the second image as a first pixel value and generating the target image, wherein the first pixel value is smaller than a preset threshold value.
7. The terminal of claim 5, wherein the terminal further comprises:
a first synthesizing module, configured to synthesize a three-dimensional model of a target object with pixel points of the target object in the target image and depth information of the target object in the first image, where the target object is an object other than the background object in the captured object;
the building module is used for running a virtual reality application program and building a virtual reality application scene of the three-dimensional model through the virtual reality application program;
or,
the third acquisition module is used for acquiring a third image comprising a scene to be synthesized;
and the second synthesis module is used for synthesizing the target image and the third image.
8. The terminal according to any one of claims 5 to 7, wherein the first image is an image in a video acquired for the subject by the first camera, and the second image is an image in a video acquired for the subject by the second camera.
9. A mobile terminal, comprising: memory, processor and computer program stored on the memory and executable on the processor, the processor implementing the steps in the image processing method according to any of claims 1-4 when executing the computer program.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps in the image processing method according to any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910294697.0A CN110012229B (en) | 2019-04-12 | 2019-04-12 | Image processing method and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110012229A CN110012229A (en) | 2019-07-12 |
CN110012229B true CN110012229B (en) | 2021-01-08 |
Family
ID=67171466
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910294697.0A Active CN110012229B (en) | 2019-04-12 | 2019-04-12 | Image processing method and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110012229B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113470138B (en) * | 2021-06-30 | 2024-05-24 | 维沃移动通信有限公司 | Image generation method and device, electronic equipment and readable storage medium |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102467661A (en) * | 2010-11-11 | 2012-05-23 | Lg电子株式会社 | Multimedia device and method for controlling the same |
CN105447895A (en) * | 2014-09-22 | 2016-03-30 | 酷派软件技术(深圳)有限公司 | Hierarchical picture pasting method, device and terminal equipment |
CN106327445A (en) * | 2016-08-24 | 2017-01-11 | 王忠民 | Image processing method and device, photographic equipment and use method thereof |
CN106375662A (en) * | 2016-09-22 | 2017-02-01 | 宇龙计算机通信科技(深圳)有限公司 | Photographing method and device based on double cameras, and mobile terminal |
CN107197169A (en) * | 2017-06-22 | 2017-09-22 | 维沃移动通信有限公司 | A kind of high dynamic range images image pickup method and mobile terminal |
CN107194963A (en) * | 2017-04-28 | 2017-09-22 | 努比亚技术有限公司 | A kind of dual camera image processing method and terminal |
CN107396084A (en) * | 2017-07-20 | 2017-11-24 | 广州励丰文化科技股份有限公司 | A kind of MR implementation methods and equipment based on dual camera |
CN108111748A (en) * | 2017-11-30 | 2018-06-01 | 维沃移动通信有限公司 | A kind of method and apparatus for generating dynamic image |
CN108322644A (en) * | 2018-01-18 | 2018-07-24 | 努比亚技术有限公司 | A kind of image processing method, mobile terminal and computer readable storage medium |
CN108881730A (en) * | 2018-08-06 | 2018-11-23 | 成都西纬科技有限公司 | Image interfusion method, device, electronic equipment and computer readable storage medium |
CN109035288A (en) * | 2018-07-27 | 2018-12-18 | 北京市商汤科技开发有限公司 | A kind of image processing method and device, equipment and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130235223A1 (en) * | 2012-03-09 | 2013-09-12 | Minwoo Park | Composite video sequence with inserted facial region |
- 2019-04-12: Application CN201910294697.0A filed in China (CN), later granted as patent CN110012229B; current status: Active
Legal Events
Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |