CN107172346B - Blurring method and mobile terminal - Google Patents
Blurring method and mobile terminal
- Publication number
- CN107172346B CN107172346B CN201710297532.XA CN201710297532A CN107172346B CN 107172346 B CN107172346 B CN 107172346B CN 201710297532 A CN201710297532 A CN 201710297532A CN 107172346 B CN107172346 B CN 107172346B
- Authority
- CN
- China
- Prior art keywords
- blurring
- target
- image
- area
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2621—Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
An embodiment of the invention provides a blurring method and a mobile terminal. The blurring method comprises the following steps: identifying a foreground area and a background area in a preview image, wherein the foreground area comprises at least two target areas; performing blurring processing on the basis of each target area to generate a corresponding depth-of-field image; and synthesizing the preview image according to the depth-of-field image corresponding to each target area to generate a blurred image in which the background area is blurred. In this way, a preview image containing two or more target areas can be blurred, which effectively improves the accuracy of multi-target blurring and, in turn, the user experience.
Description
Technical Field
Embodiments of the invention relate to the field of communications, and in particular to a blurring method and a mobile terminal.
Background
With the continuous development of mobile-terminal technology, the shooting functions of mobile terminals have become increasingly rich. In the prior art, a user can choose to generate a picture with a blurred background while photographing. The blurring method mainly proceeds as follows: the mobile terminal performs blurring according to a foreground region selected by the user. In the prior art, the foreground region is generally selected by the user with a circular or rectangular frame that encloses the target, and the mobile terminal then blurs the parts outside the frame.
However, when multiple targets exist in the image, the prior art cannot blur around all of them. For example, if some of the targets are small, or the targets are scattered across the frame, a single selection frame cannot enclose them all, so some small targets end up blurred.
Therefore, no effective solution has yet been provided for the low accuracy and poor user experience of blurring images with multiple targets in the prior art.
Disclosure of Invention
An embodiment of the invention provides a blurring method to solve the low accuracy and poor user experience of blurring images with multiple targets in the prior art.
In a first aspect, a blurring method is provided, which is applied to a mobile terminal, and the method includes:
identifying a foreground area and a background area in a preview image, wherein the foreground area comprises at least two target areas;
performing blurring processing on the basis of each target area to generate a corresponding depth-of-field image;
and synthesizing the preview image according to the depth-of-field image corresponding to each target area to generate a blurred image in which the background area is blurred.
In another aspect, an embodiment of the present invention further provides a mobile terminal, comprising:
an identification module, configured to identify a foreground area and a background area in a preview image, wherein the foreground area comprises at least two target areas;
a blurring processing module, configured to perform blurring processing on the basis of each target area to generate a corresponding depth-of-field image;
and a synthesis module, configured to synthesize the preview image according to the depth-of-field image corresponding to each target area to generate a blurred image in which the background area is blurred.
In this way, according to the technical scheme of the embodiment of the present invention, a foreground area and a background area in a preview image are identified, the foreground area comprising at least two target areas; blurring processing is performed on the basis of each target area to generate a corresponding depth-of-field image; and the preview image is synthesized according to the depth-of-field image corresponding to each target area to generate a blurred image in which the background area is blurred. A preview image with at least two target areas can thus be blurred, which effectively improves the accuracy of multi-target blurring and, in turn, the user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
Fig. 1 is a flow chart of a blurring method in a first embodiment of the present invention;
Fig. 2 is a flow chart of a blurring method in a second embodiment of the present invention;
Fig. 3 is a block diagram of a mobile terminal in a third embodiment of the present invention;
Fig. 4 is another block diagram of the mobile terminal in the third embodiment of the present invention;
Fig. 5 is a block diagram of a mobile terminal in a fourth embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a mobile terminal in a fifth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, a flow chart of a blurring method in the first embodiment of the present invention is shown. The method comprises the following steps.
Step 101, identifying a foreground region and a background region in a preview image, wherein the foreground region comprises at least two target regions.
Specifically, in the embodiment of the present invention, the mobile terminal generates a preview image during shooting and identifies the foreground region and the background region in it. In the embodiment of the present invention, the foreground region comprises two or more target regions, a target region being the region corresponding to one or more objects that share the same depth of field and do not need to be blurred.
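The identification in step 101 can be sketched as follows (a minimal, hypothetical simplification: the depth map is a list of rows of integer depths, pixels nearer than an assumed background threshold are foreground, and pixels sharing a depth value form one target region, matching the embodiment in which objects with the same depth of field share a target region):

```python
# Hypothetical sketch of step 101: split a depth map into target regions
# (foreground) and a background set. Pixels nearer than `background_depth`
# are foreground; pixels sharing a depth value form one target region.
# `identify_regions` is an invented name for this sketch.
def identify_regions(depth_map, background_depth):
    """Return ({depth: set of (row, col)}, set of background (row, col))."""
    targets, background = {}, set()
    for r, row in enumerate(depth_map):
        for c, depth in enumerate(row):
            if depth < background_depth:
                targets.setdefault(depth, set()).add((r, c))
            else:
                background.add((r, c))
    return targets, background

# Two near objects (depths 1 and 2) in front of a far background (depth 9).
depth_map = [
    [1, 1, 9],
    [2, 9, 9],
]
targets, background = identify_regions(depth_map, background_depth=5)
# targets holds two target regions; background holds three pixels.
```

A real implementation would segment by object recognition rather than raw depth equality, but the grouping rule is the one the embodiment describes.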
Step 102, performing blurring processing on the basis of each target region to generate a corresponding depth-of-field image.
Specifically, in the embodiment of the present invention, the mobile terminal performs blurring processing on the basis of each target region, that is, it blurs everything outside that target region (including the other target regions and the background) to generate the corresponding depth-of-field image. In one embodiment of the invention, the mobile terminal may generate the depth-of-field image from pictures taken of the same scene as the preview image at slightly different times (with an extremely small interval).
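Step 102 can be illustrated with a toy box blur (an assumption: the patent does not name a blur algorithm, and `blur_except`, `box_blur_pixel`, and the grayscale list-of-rows representation are invented here for the sketch):

```python
# Hypothetical sketch of step 102: keep the target region sharp and
# box-blur every other pixel of a grayscale image (list of rows).
def box_blur_pixel(image, r, c, radius):
    """Average the pixels in a (2*radius+1)^2 window, clipped to the image."""
    total, count = 0, 0
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(image) and 0 <= cc < len(image[0]):
                total += image[rr][cc]
                count += 1
    return total // count

def blur_except(image, keep, radius=1):
    """Blur every pixel whose coordinate is not in `keep` (the target
    region), producing one depth-of-field image for that region."""
    return [
        [image[r][c] if (r, c) in keep else box_blur_pixel(image, r, c, radius)
         for c in range(len(image[0]))]
        for r in range(len(image))
    ]

image = [[10, 10],
         [10, 100]]
dof = blur_except(image, keep={(1, 1)})
# dof keeps pixel (1, 1) at 100 and averages every other pixel with its
# neighbours.
```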
Step 103, synthesizing the preview image according to the depth-of-field image corresponding to each target region to generate a blurred image in which the background region is blurred.
Specifically, in the embodiment of the present invention, the mobile terminal may synthesize the depth-of-field image corresponding to each target region with the preview image, thereby blurring the background region of the preview image and obtaining the blurred image.
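Step 103 might be sketched like this (hypothetical: each depth-of-field image is paired with the pixel set of its sharp target region, and the sharp pixels are overlaid one after another onto the first image, whose other targets and background are already blurred):

```python
# Hypothetical sketch of step 103: sequentially overlay each depth-of-field
# image's sharp target region onto the first one. `composite` and the
# (sharp_region, image) pairing are invented for this sketch.
def composite(depth_images):
    """depth_images: list of (sharp_region, image) pairs; the images are
    equally sized lists of rows."""
    _, base = depth_images[0]
    result = [row[:] for row in base]          # start from the first image
    for region, image in depth_images[1:]:
        for (r, c) in region:                  # copy only the sharp pixels
            result[r][c] = image[r][c]
    return result

# Two 1x3 depth-of-field images: each keeps one target sharp (value 9)
# and blurs the rest (value 1).
blurred = composite([
    ({(0, 0)}, [[9, 1, 1]]),
    ({(0, 2)}, [[1, 1, 9]]),
])
# Both targets end up sharp while the middle (background) pixel stays
# blurred.
```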
In summary, according to the technical scheme of this embodiment, a foreground region and a background region in a preview image are identified, the foreground region comprising at least two target regions; blurring processing is performed on the basis of each target region to generate a corresponding depth-of-field image; and the preview image is synthesized according to the depth-of-field image corresponding to each target region to generate a blurred image in which the background region is blurred. A preview image with two or more target regions can thus be blurred, which effectively improves the accuracy of multi-target blurring and, in turn, the user experience.
Example two
Referring to fig. 2, a flow chart of a blurring method in the second embodiment of the present invention is shown.
Step 201, identifying a foreground region and a background region in a preview image.
Specifically, in the embodiment of the present invention, the mobile terminal identifies feature information of the two or more target objects present in the preview image and determines the corresponding target regions. The feature information includes, but is not limited to: scale, color, shape, and depth of field. The identification itself can be implemented with an object-recognition algorithm from the prior art and is not repeated here. In one embodiment of the present invention, identified objects with the same depth of field correspond to the same target region. In another embodiment of the present invention, each identified object may correspond to its own target region. The invention is not limited in this regard.
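The grouping rule described above can be sketched as follows (hypothetical `(name, depth)` records; objects reporting the same depth of field merge into one target area):

```python
# Hypothetical sketch: merge recognized objects into target areas by their
# depth-of-field value, as in the embodiment where objects with the same
# depth correspond to one target area. `group_by_depth` is an invented name.
def group_by_depth(objects):
    """objects: iterable of (name, depth) pairs -> {depth: [names]}."""
    areas = {}
    for name, depth in objects:
        areas.setdefault(depth, []).append(name)
    return areas

areas = group_by_depth([("cup", 2), ("book", 2), ("person", 5)])
# "cup" and "book" share one target area; "person" gets its own.
```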
In addition, in an embodiment of the present invention, after the mobile terminal has identified the target regions, if it receives a user instruction it identifies further target regions according to that instruction. By way of example: if the mobile terminal makes errors while identifying multiple targets on its own, i.e. a few regions go unrecognized, the user can select those regions manually. Specifically, the user can tap such a region to instruct the mobile terminal to re-identify the unrecognized target in it, which further improves the accuracy of multi-target blurring.
Step 202, obtaining a blurring value corresponding to the background area.
Specifically, in the embodiment of the present invention, step 202 includes the following sub-steps.
Sub-step 2021, detecting the background type corresponding to the background region. Specifically, in the embodiment of the present invention, the background types include, but are not limited to: scenery, night scene, city street, indoor, and so on. Scene detection can be implemented with the prior art and is not described here.
Sub-step 2022, querying the blurring value corresponding to the background type.
Specifically, in the embodiment of the present invention, the mobile terminal records in advance a blurring value for each background type and can query the value corresponding to the detected background type. The preset blurring values may be obtained by repeatedly training on images shot in different scenes; other training approaches may also be used, and the invention is not limited in this respect.
In addition, in another embodiment of the present invention, after the blurring value has been queried, the user may fine-tune it manually if the preset value is not satisfactory.
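Sub-steps 2021-2022, together with the manual fine-tuning just mentioned, could look like the following sketch (the table entries, the 0.5 default, and the [0, 1] scale are all assumptions, not values from the patent):

```python
# Hypothetical preset table of blurring values per background type
# (sub-steps 2021-2022); the names and numbers are illustrative only.
PRESET_BLUR = {"scenery": 0.6, "night": 0.8, "city street": 0.5, "indoor": 0.4}

def query_blur_value(background_type, user_adjust=0.0):
    """Look up the preset blurring value for the detected background type,
    then apply the user's optional manual fine-tuning, clamped to [0, 1]."""
    value = PRESET_BLUR.get(background_type, 0.5)  # assumed fallback default
    return min(1.0, max(0.0, value + user_adjust))

query_blur_value("night")        # preset value for a night scene
query_blur_value("night", -0.1)  # user fine-tunes the preset downwards
```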
Step 203, acquiring an original image corresponding to each target area.
Specifically, in the embodiment of the present invention, the mobile terminal needs a corresponding original image for each target region; the original image is the same as the preview image. In one embodiment of the present invention, the original image may also be a picture the mobile terminal takes of the same scene as the preview image at a slightly different time (with an extremely small interval).
Step 204, performing blurring processing on the original image corresponding to each target region according to the blurring value, to obtain the corresponding depth-of-field image.
Specifically, in the embodiment of the present invention, the mobile terminal blurs, according to the obtained blurring value, the region of the original image outside the target region, thereby obtaining the corresponding depth-of-field image.
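The patent does not specify how a blurring value controls blur strength; one plausible mapping (purely an assumption, including the scale and `max_radius`) converts the value into an integer blur-kernel radius:

```python
# Hypothetical mapping from a [0, 1] blurring value to an integer blur
# radius; the patent does not fix this relation, so the scale is assumed.
def radius_from_blur_value(blur_value, max_radius=10):
    """Stronger blurring values give larger radii, never below 1."""
    return max(1, round(blur_value * max_radius))

radius_from_blur_value(0.8)   # stronger blurring -> larger radius
radius_from_blur_value(0.05)  # very weak blurring still blurs slightly
```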
By way of example: if the image contains three persons standing apart at different distances from the camera, the mobile terminal generates a corresponding depth-of-field image for the target region of each person. In the depth-of-field image for a given person, only that person's target region is kept sharp; the other persons and the background region are blurred.
Step 205, synthesizing the preview image according to the depth image corresponding to each target area to generate a blurred image with blurred background areas.
Specifically, in the embodiment of the present invention, the mobile terminal synthesizes the depth-of-field images corresponding to the target regions with the preview image in turn, so that the preview image is progressively blurred, and obtains the blurred image in which the background region is blurred.
In summary, according to the technical scheme of this embodiment, a foreground region and a background region in a preview image are identified, the foreground region comprising at least two target regions; blurring processing is performed on the basis of each target region to generate a corresponding depth-of-field image; and the preview image is synthesized according to the depth-of-field image corresponding to each target region to generate a blurred image in which the background region is blurred. A preview image with two or more target regions can thus be blurred, which effectively improves the accuracy of multi-target blurring and, in turn, the user experience.
Example three
Fig. 3 is a block diagram of a mobile terminal according to one embodiment of the present invention. The mobile terminal 300 shown in fig. 3 includes an identification module 301, a blurring processing module 302, and a synthesizing module 303.
The identifying module 301 is configured to identify a foreground region and a background region in the preview image, where the foreground region includes at least two target regions.
Specifically, in the embodiment of the present invention, the mobile terminal generates a preview image during the shooting process, and the identifying module 301 identifies a foreground area and a background area in the preview image. In the embodiment of the present invention, the foreground region includes two or more target regions, and the target regions are regions corresponding to objects that do not need to be blurred.
The blurring processing module 302 is configured to perform blurring processing on the basis of each target region to generate a corresponding depth-of-field image.
Specifically, in the embodiment of the present invention, the blurring processing module 302 blurs, for each target region, everything outside that region to generate the corresponding depth-of-field image. In one embodiment of the invention, the mobile terminal may generate the depth-of-field image from pictures taken of the same scene as the preview image at slightly different times (with an extremely small interval).
The synthesizing module 303 is configured to perform synthesizing processing on the preview image according to the depth-of-field image corresponding to each target area, so as to generate a blurred image with a blurred background area.
Specifically, in the embodiment of the present invention, the synthesizing module 303 may synthesize the depth-of-field image corresponding to each target region with the preview image, thereby blurring the background region of the preview image and obtaining the blurred image.
In summary, the mobile terminal of the embodiment of the present invention identifies a foreground region and a background region in a preview image, the foreground region comprising at least two target regions; performs blurring processing on the basis of each target region to generate a corresponding depth-of-field image; and synthesizes the preview image according to the depth-of-field image corresponding to each target region to generate a blurred image in which the background region is blurred. A preview image with two or more target regions can thus be blurred, which effectively improves the accuracy of multi-target blurring and, in turn, the user experience.
In addition, in a preferred embodiment of the present invention, the identification module 301 may be further configured to determine the corresponding target regions by identifying feature information of the two or more target objects present in the preview image, and, if a user instruction is received, to identify the corresponding target region according to that instruction.
Optionally, in an embodiment, the blurring processing module 302 may further include: a first obtaining sub-module 401, a second obtaining sub-module 402, and a blurring processing sub-module 403, as shown in fig. 4.
The first obtaining submodule 401 is configured to obtain a blurring value corresponding to a background area.
And a second obtaining sub-module 402, configured to obtain an original image corresponding to each target region.
The blurring processing sub-module 403 is configured to perform blurring processing on the original image corresponding to each target area according to the blurring value, so as to obtain a corresponding depth-of-field image.
The first obtaining sub-module 401 may further include:
and a detecting unit (not shown in the figure) for detecting the background type corresponding to the background area.
And a query unit (not shown in the figure) for querying the blurring value corresponding to the background type.
Accordingly, in one embodiment, the blurring processing sub-module 403 may be further configured to: and performing blurring processing on the region outside the target region contained in the original image according to the blurring value to acquire a corresponding depth image.
Example four
Fig. 5 is a block diagram of a mobile terminal according to another embodiment of the present invention. The mobile terminal 500 shown in fig. 5 includes: at least one processor 501, memory 502, at least one network interface 504, and other user interfaces 503. The various components in the mobile terminal 500 are coupled together by a bus system 505. It is understood that the bus system 505 is used to enable connection communications between these components. The bus system 505 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 505 in FIG. 5.
The user interface 503 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, trackball, touch pad, or touch screen).
It is to be understood that the memory 502 in embodiments of the present invention may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which functions as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory 502 of the systems and methods described in connection with the embodiments of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 502 stores elements, executable modules or data structures, or a subset thereof, or an expanded set thereof as follows: an operating system 5021 and application programs 5022.
The operating system 5021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application 5022 includes various applications, such as a media player (MediaPlayer), a Browser (Browser), and the like, for implementing various application services. The program for implementing the method according to the embodiment of the present invention may be included in the application program 5022.
In the embodiment of the present invention, by calling programs or instructions stored in the memory 502, specifically programs or instructions stored in the application 5022, the processor 501 is configured to: identify a foreground region and a background region in a preview image, where the foreground region comprises at least two target regions; perform blurring processing on the basis of each target region to generate a corresponding depth-of-field image; and synthesize the preview image according to the depth-of-field image corresponding to each target region to generate a blurred image in which the background region is blurred.
The method disclosed in the above embodiments of the present invention may be applied to, or implemented by, the processor 501. The processor 501 may be an integrated circuit chip with signal-processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits in hardware or by instructions in the form of software in the processor 501. The processor 501 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 502; the processor 501 reads the information in the memory 502 and completes the steps of the method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described in this disclosure may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described in this disclosure. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, the processor 501 is configured to determine the corresponding target regions by identifying feature information of the two or more target objects present in the preview image, and, if a user instruction is received, to identify the corresponding target region according to that instruction.
Optionally, the processor 501 is further configured to: acquiring a blurring value corresponding to the background area; acquiring an original image corresponding to each target area; and performing blurring processing on the original image corresponding to each target area according to the blurring value to acquire a corresponding depth-of-field image.
Optionally, as another embodiment, the processor 501 is further configured to: detecting a background type corresponding to the background area; and querying a blurring value corresponding to the background type.
Optionally, the processor 501 is further configured to: and performing blurring processing on the region outside the target region contained in the original image according to the blurring value to acquire a corresponding depth image.
The mobile terminal 500 can implement the processes implemented by the mobile terminal in the foregoing embodiments, and in order to avoid repetition, the detailed description is omitted here.
In summary, in the mobile terminal in the embodiment of the present invention, a foreground region and a background region in a preview image are identified, where the foreground region includes at least two target regions; performing blurring processing on each target area to generate a corresponding depth-of-field image; and synthesizing the preview image according to the depth image corresponding to each target area to generate a blurred image with a blurred background area. Therefore, the preview image with two or more target areas can be subjected to blurring treatment, the accuracy of multi-target blurring treatment is effectively improved, and the user experience is further improved.
Example five
Fig. 6 is a schematic structural diagram of a mobile terminal according to another embodiment of the present invention. Specifically, the mobile terminal 600 in fig. 6 may be a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), or a vehicle-mounted computer.
The mobile terminal 600 in fig. 6 includes a radio frequency (RF) circuit 610, a memory 620, an input unit 630, a display unit 640, a processor 660, an audio circuit 670, a WiFi (Wireless Fidelity) module 680, and a power supply 690.
The input unit 630 may be used, among other things, to receive numeric or character information input by a user and to generate signal inputs related to user settings and function control of the mobile terminal 600. Specifically, in the embodiment of the present invention, the input unit 630 may include a touch panel 631. The touch panel 631, also referred to as a touch screen, may collect touch operations of a user (e.g., operations of the user on the touch panel 631 by using a finger, a stylus, or any other suitable object or accessory) thereon or nearby, and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 631 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 660, and can receive and execute commands sent by the processor 660. In addition, the touch panel 631 may be implemented using various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 631, the input unit 630 may also include other input devices 632, and the other input devices 632 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
Among other things, the display unit 640 may be used to display information input by a user or information provided to the user and various menu interfaces of the mobile terminal 600. The display unit 640 may include a display panel 641, and optionally, the display panel 641 may be configured in the form of an LCD or an organic light-emitting diode (OLED).
It should be noted that the touch panel 631 may cover the display panel 641 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 660 to determine the type of the touch event, and the processor 660 then provides a corresponding visual output on the touch display screen according to that type.
The touch display screen comprises an application program interface display area and a common control display area. The arrangement modes of the application program interface display area and the common control display area are not limited, and can be an arrangement mode which can distinguish two display areas, such as vertical arrangement, left-right arrangement and the like. The application interface display area may be used to display an interface of an application. Each interface may contain at least one interface element such as an icon and/or widget desktop control for an application. The application interface display area may also be an empty interface that does not contain any content. The common control display area is used for displaying controls with high utilization rate, such as application icons like setting buttons, interface numbers, scroll bars, phone book icons and the like.
The processor 660 is the control center of the mobile terminal 600. It connects the various parts of the entire mobile phone by using various interfaces and lines, and performs the various functions of the mobile terminal 600 and processes data by running or executing the software programs and/or modules stored in the first memory 621 and calling the data stored in the second memory 622, thereby performing overall monitoring of the mobile terminal 600. Optionally, the processor 660 may include one or more processing units.
In the embodiment of the present invention, the processor 660 is configured, by calling a software program and/or a module stored in the first memory 621 and/or data stored in the second memory 622, to identify a foreground region and a background region in the preview image, where the foreground region includes at least two target regions; perform blurring processing on each target area to generate a corresponding depth-of-field image; and synthesize the preview image according to the depth-of-field image corresponding to each target area to generate a blurred image with a blurred background area.
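The processing flow above can be sketched as follows. This is a minimal illustration in plain Python, not the patented implementation: the image representation (2D lists of grayscale values), the box blur, the helper names (`box_blur`, `depth_of_field_image`, `synthesize`), and the compositing rule are all assumptions made for demonstration.

```python
def box_blur(img, radius):
    """Naive box blur over a 2D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = []
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        vals.append(img[ny][nx])
            out[y][x] = sum(vals) / len(vals)
    return out

def depth_of_field_image(img, target_mask, blur_radius):
    """Blur everything outside one target region; keep target pixels sharp."""
    blurred = box_blur(img, blur_radius)
    return [[img[y][x] if target_mask[y][x] else blurred[y][x]
             for x in range(len(img[0]))] for y in range(len(img))]

def synthesize(img, target_masks, blur_radius):
    """Composite the per-target depth-of-field images: a pixel stays
    sharp if it belongs to any of the target regions."""
    dof_images = [depth_of_field_image(img, m, blur_radius)
                  for m in target_masks]
    out = [row[:] for row in dof_images[0]]
    for y in range(len(img)):
        for x in range(len(img[0])):
            for m, d in zip(target_masks, dof_images):
                if m[y][x]:
                    out[y][x] = d[y][x]
    return out
```

In this sketch each target region produces its own depth-of-field image, and the synthesis step preserves every target's sharp pixels while the background stays blurred, mirroring the multi-target idea of the embodiment.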
Optionally, the processor 660 is configured to determine the corresponding target areas by identifying feature information of at least two target objects existing in the preview image, and, if a user instruction is received, to identify the corresponding target area according to the user instruction.
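One way to combine automatic detection with an optional user instruction, as described above, might look like the following sketch; the region representation, the tap-based instruction, and the name `select_target_regions` are hypothetical, not taken from the patent.

```python
def select_target_regions(detected_regions, user_taps=None):
    """Prefer regions explicitly indicated by the user (as tap points);
    otherwise fall back to all automatically detected target regions."""
    if user_taps:
        # keep only detected regions that contain at least one tap point
        return [r for r in detected_regions
                if any(r["x0"] <= x <= r["x1"] and r["y0"] <= y <= r["y1"]
                       for (x, y) in user_taps)]
    return detected_regions
```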
Optionally, the processor 660 is further configured to: acquiring a blurring value corresponding to the background area; acquiring an original image corresponding to each target area; and performing blurring processing on the original image corresponding to each target area according to the blurring value to acquire a corresponding depth-of-field image.
Optionally, as another embodiment, the processor 660 is further configured to: detecting a background type corresponding to the background area; and querying a blurring value corresponding to the background type.
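The background-type lookup described here could be as simple as a table query; the background types and blurring values below are illustrative placeholders, not values from the patent.

```python
# Hypothetical mapping from a detected background type to a blurring value.
BLUR_VALUES = {
    "sky": 2,
    "indoor": 4,
    "crowd": 6,
}
DEFAULT_BLUR = 3  # fallback when the background type is not recognized

def query_blur_value(background_type):
    """Return the blurring value registered for the detected background
    type, or a default value for unknown types."""
    return BLUR_VALUES.get(background_type, DEFAULT_BLUR)
```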
Optionally, the processor 660 is further configured to: perform blurring processing on the region outside the target region contained in the original image according to the blurring value to acquire a corresponding depth-of-field image.
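The per-pixel rule here — keep pixels inside the target region sharp and take blurred pixels everywhere else — can be expressed compactly. `apply_blur_outside_mask` is a hypothetical helper name, and the sketch assumes the blurred version of the image has already been computed with the queried blurring value.

```python
def apply_blur_outside_mask(original, blurred, mask):
    """Keep original pixels inside the target mask; take blurred
    pixels elsewhere. All three arguments are 2D lists of equal size."""
    return [[original[y][x] if mask[y][x] else blurred[y][x]
             for x in range(len(original[0]))]
            for y in range(len(original))]
```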
As can be seen, the mobile terminal in the embodiment of the present invention identifies a foreground region and a background region in a preview image, where the foreground region includes at least two target regions; performs blurring processing on each target area to generate a corresponding depth-of-field image; and synthesizes the preview image according to the depth-of-field image corresponding to each target area to generate a blurred image with a blurred background area. Therefore, a preview image with two or more target areas can be subjected to blurring processing, the accuracy of multi-target blurring processing is effectively improved, and the user experience is further improved.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (8)
1. A blurring method applied to a mobile terminal is characterized by comprising the following steps:
identifying a foreground area and a background area in a preview image, wherein the foreground area comprises at least two target areas;
performing blurring processing on each target area to generate a corresponding depth-of-field image;
synthesizing the preview image according to the depth-of-field image corresponding to each target area to generate a blurred image with the blurred background area;
the step of performing blurring processing based on each target region to generate a corresponding depth-of-field image specifically includes:
acquiring a blurring value corresponding to the background area;
acquiring an original image corresponding to each target area;
performing blurring processing on the original image corresponding to each target area according to the blurring value to acquire a corresponding depth-of-field image;
performing blurring processing on the original image corresponding to each target area according to the blurring value to obtain a corresponding depth-of-field image, including:
blurring the regions outside the target regions included in the original image according to the blurring value to obtain corresponding depth-of-field images;
the target area is an area corresponding to one or more objects having the same depth of field and not being blurred.
2. The method according to claim 1, wherein the step of identifying the foreground region and the background region in the preview image specifically comprises:
determining corresponding target areas by identifying characteristic information of at least two target objects existing in the preview image;
and if a user instruction is received, identifying a corresponding target area according to the user instruction.
3. The method according to claim 1, wherein the step of obtaining the blurring value corresponding to the background region specifically includes:
detecting a background type corresponding to the background area;
and querying a blurring value corresponding to the background type.
4. The method according to claim 1, wherein the step of blurring the original image corresponding to each target region according to the blurring value specifically includes:
and performing blurring processing on the region outside the target region included in the original image according to the blurring value to acquire a corresponding depth-of-field image.
5. A mobile terminal, comprising:
the device comprises an identification module, a display module and a display module, wherein the identification module is used for identifying a foreground area and a background area in a preview image, and the foreground area comprises at least two target areas;
the blurring processing module is used for performing blurring processing on the basis of each target area to generate a corresponding depth-of-field image;
the synthesis module is used for synthesizing the preview image according to the depth-of-field image corresponding to each target area to generate a blurred image with the blurred background area;
the blurring processing module further comprises:
the first obtaining submodule is used for obtaining a blurring value corresponding to the background area;
the second acquisition submodule is used for acquiring an original image corresponding to each target area;
the blurring processing submodule is used for blurring the original image corresponding to each target area according to the blurring value so as to obtain a corresponding depth-of-field image;
performing blurring processing on the original image corresponding to each target area according to the blurring value to obtain a corresponding depth-of-field image, including:
blurring the regions outside the target regions included in the original image according to the blurring value to obtain corresponding depth-of-field images;
the target area is an area corresponding to one or more objects having the same depth of field and not being blurred.
6. The mobile terminal of claim 5, wherein the identification module is further configured to:
determining corresponding target areas by identifying characteristic information of at least two target objects existing in the preview image;
and if a user instruction is received, identifying a corresponding target area according to the user instruction.
7. The mobile terminal of claim 5, wherein the first obtaining sub-module further comprises:
the detection unit is used for detecting the background type corresponding to the background area;
and the query unit is used for querying the blurring value corresponding to the background type.
8. The mobile terminal of claim 5, wherein the blurring processing sub-module is further configured to:
and performing blurring processing on the region outside the target region included in the original image according to the blurring value to acquire a corresponding depth-of-field image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710297532.XA CN107172346B (en) | 2017-04-28 | 2017-04-28 | Virtualization method and mobile terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710297532.XA CN107172346B (en) | 2017-04-28 | 2017-04-28 | Virtualization method and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107172346A CN107172346A (en) | 2017-09-15 |
CN107172346B true CN107172346B (en) | 2020-02-07 |
Family
ID=59812893
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710297532.XA Active CN107172346B (en) | 2017-04-28 | 2017-04-28 | Virtualization method and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107172346B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11893668B2 (en) | 2021-03-31 | 2024-02-06 | Leica Camera Ag | Imaging system and method for generating a final digital image via applying a profile to image information |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107613203B (en) * | 2017-09-22 | 2020-01-14 | 维沃移动通信有限公司 | Image processing method and mobile terminal |
CN107592466B (en) * | 2017-10-13 | 2020-04-24 | 维沃移动通信有限公司 | Photographing method and mobile terminal |
CN108230333B (en) * | 2017-11-28 | 2021-01-26 | 深圳市商汤科技有限公司 | Image processing method, image processing apparatus, computer program, storage medium, and electronic device |
CN108109100B (en) * | 2017-12-19 | 2019-11-22 | 维沃移动通信有限公司 | A kind of image processing method, mobile terminal |
CN108154465B (en) * | 2017-12-19 | 2022-03-01 | 北京小米移动软件有限公司 | Image processing method and device |
CN108776800B (en) * | 2018-06-05 | 2021-03-12 | Oppo广东移动通信有限公司 | Image processing method, mobile terminal and computer readable storage medium |
EP3873083A4 (en) | 2018-11-02 | 2021-12-01 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Depth image processing method, depth image processing apparatus and electronic apparatus |
WO2020107186A1 (en) * | 2018-11-26 | 2020-06-04 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Systems and methods for taking telephoto-like images |
CN112785487B (en) * | 2019-11-06 | 2023-08-04 | RealMe重庆移动通信有限公司 | Image processing method and device, storage medium and electronic equipment |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101426093A (en) * | 2007-10-29 | 2009-05-06 | 株式会社理光 | Image processing device, image processing method, and computer program product |
CN101587586A (en) * | 2008-05-20 | 2009-11-25 | 株式会社理光 | Device and method for processing images |
CN102572262A (en) * | 2010-10-28 | 2012-07-11 | 三洋电机株式会社 | Electronic equipment |
CN103297699A (en) * | 2013-05-31 | 2013-09-11 | 北京小米科技有限责任公司 | Method and terminal for shooting images |
CN103533244A (en) * | 2013-10-21 | 2014-01-22 | 深圳市中兴移动通信有限公司 | Shooting device and automatic visual effect processing shooting method thereof |
CN104219445A (en) * | 2014-08-26 | 2014-12-17 | 小米科技有限责任公司 | Method and device for adjusting shooting modes |
CN105100615A (en) * | 2015-07-24 | 2015-11-25 | 青岛海信移动通信技术股份有限公司 | Image preview method, apparatus and terminal |
CN105141858A (en) * | 2015-08-13 | 2015-12-09 | 上海斐讯数据通信技术有限公司 | Photo background blurring system and photo background blurring method |
CN105204624A (en) * | 2015-08-28 | 2015-12-30 | 努比亚技术有限公司 | Fuzzy processing method and device for shooting |
CN105979165A (en) * | 2016-06-02 | 2016-09-28 | 广东欧珀移动通信有限公司 | Blurred photos generation method, blurred photos generation device and mobile terminal |
CN106454086A (en) * | 2016-09-30 | 2017-02-22 | 维沃移动通信有限公司 | Image processing method and mobile terminal |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170006212A1 (en) * | 2015-07-01 | 2017-01-05 | Hon Hai Precision Industry Co., Ltd. | Device, system and method for multi-point focus |
- 2017-04-28: CN application CN201710297532.XA filed; patent CN107172346B granted, status Active
Also Published As
Publication number | Publication date |
---|---|
CN107172346A (en) | 2017-09-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107172346B (en) | Virtualization method and mobile terminal | |
CN105959553B (en) | A kind of switching method and terminal of camera | |
CN106060406B (en) | Photographing method and mobile terminal | |
CN105827952B (en) | A kind of photographic method and mobile terminal removing specified object | |
CN107197169B (en) | high dynamic range image shooting method and mobile terminal | |
CN107509030B (en) | focusing method and mobile terminal | |
CN107678644B (en) | Image processing method and mobile terminal | |
CN106657793B (en) | A kind of image processing method and mobile terminal | |
CN106454086B (en) | Image processing method and mobile terminal | |
CN107613203B (en) | Image processing method and mobile terminal | |
CN106791437B (en) | Panoramic image shooting method and mobile terminal | |
CN106331484B (en) | Focusing method and mobile terminal | |
CN106060422B (en) | A kind of image exposure method and mobile terminal | |
CN106648382B (en) | A kind of picture browsing method and mobile terminal | |
CN107659722B (en) | Image selection method and mobile terminal | |
CN106937055A (en) | A kind of image processing method and mobile terminal | |
CN106993091B (en) | Image blurring method and mobile terminal | |
CN105843501B (en) | A kind of method of adjustment and mobile terminal of parameter of taking pictures | |
CN107172347B (en) | Photographing method and terminal | |
CN107483821B (en) | Image processing method and mobile terminal | |
CN106201196A (en) | The method for sorting of a kind of desktop icons and mobile terminal | |
CN105824496A (en) | Method for setting icon brightness based on use of users and mobile terminal | |
CN107221347B (en) | Audio playing method and terminal | |
CN106168894B (en) | Content display method and mobile terminal | |
CN107592458B (en) | Shooting method and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||