
CN107395958B - Image processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN107395958B
CN107395958B (application CN201710527387.XA)
Authority
CN
China
Prior art keywords
image
coordinate
region
processed
subregion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710527387.XA
Other languages
Chinese (zh)
Other versions
CN107395958A (en)
Inventor
张启峰
曹莎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jupiter Technology Co ltd
Original Assignee
Beijing Kingsoft Internet Security Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingsoft Internet Security Software Co Ltd
Priority to CN201710527387.XA
Publication of CN107395958A
Application granted
Publication of CN107395958B
Legal status: Active


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the invention provides an image processing method and apparatus, an electronic device, and a storage medium. The method comprises: when a face region is detected in a target image, obtaining coordinate parameters of the face region from the coordinates of its pixels in a preset coordinate system; calculating, from those coordinate parameters, the action point and action range of the image region to be processed in the target image; determining the image region to be processed from the action point and action range; and performing image processing on that region with a preset action intensity, according to a preset image processing mode. In the scheme provided by the embodiment, the face region in the image serves as the reference for determining the action point and action range, and the region to be processed is then processed with the preset action intensity. The user therefore no longer has to fix the action point, action range, and action intensity through cumbersome manual operations; the image processing operation is simplified and the user experience improved.

Description

Image processing method and apparatus, electronic device, and storage medium
Technical field
The present invention relates to the field of electronic technology, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background technique
Photo retouching has become more and more popular in daily life, especially among young women. Whereas retouching used to be the preserve of professionals performing complex operations in PhotoShop, today's retouching increasingly aims for simplicity: with retouching software, an image can be beautified through a few simple operations. This simple style of retouching is increasingly welcomed, and as a result a variety of applications with retouching functions have appeared.
There are many specific retouching operations, such as adding filter effects, removing red-eye, removing noise, and local enlargement or deformation. Local deformation is mainly used to beautify portraits, for example fine-tuning the eyes or adjusting the chest. At present, when adjusting the chest in a portrait image, the user can choose the position of the adjustment region in the image, the desired extent of the region, and the intensity of the effect the adjustment should achieve.
However, the user must first manually determine the position of the adjustment region in the portrait image, the extent of the desired region, and the intensity of the effect before the chest adjustment can be carried out at all. Chest adjustment in the prior art is therefore cumbersome, which harms the user experience.
Summary of the invention
The embodiments of the present invention aim to provide an image processing method and apparatus, an electronic device, and a storage medium, so as to solve the problem that the user can determine the action point, action range, and action intensity only through cumbersome manual operations. The specific technical solutions are as follows.
In a first aspect, an embodiment of the invention provides an image processing method, comprising:
detecting whether a face region exists in a target image;
if it exists, obtaining coordinate parameters of the face region from the coordinates of the pixels of the face region in a preset coordinate system;
calculating, from the coordinate parameters, the action points and action range of the image region to be processed in the target image;
determining the image region to be processed from the action points and action range;
performing image processing on the image region to be processed with a preset action intensity, according to a preset image processing mode, to obtain a processed target image.
Optionally, the step of calculating, from the coordinate parameters, the action points and action range of the image region to be processed in the target image comprises:
determining the action points of the image region to be processed from a first, second, third, and fourth coordinate, where the first coordinate is the coordinate with the smallest abscissa among the coordinate parameters, the second coordinate the one with the largest abscissa, the third coordinate the one with the largest ordinate among the coordinates identifying the eyebrows, and the fourth coordinate the one with the smallest ordinate among the coordinates identifying the chin;
determining the action range of the image region to be processed from the first and second coordinates.
Optionally, the image region to be processed comprises a first subregion and a second subregion;
and the step of determining the action points of the image region to be processed from the first, second, third, and fourth coordinates comprises:
taking the abscissa of the first coordinate as the abscissa of the first action point, the action point of the first subregion;
taking the abscissa of the second coordinate as the abscissa of the second action point, the action point of the second subregion;
obtaining the ordinates of the first and second action points using the following formula:
yZ1 = yZ2 = y4 - (y3 - y4)
where yZ1 is the ordinate of the first action point, yZ2 the ordinate of the second action point, y3 the ordinate of the third coordinate, and y4 the ordinate of the fourth coordinate.
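A tiny numeric instance of the ordinate formula above may help: the shared ordinate places the action points as far below the chin coordinate y4 as the eyebrow coordinate y3 lies above it. The values are purely illustrative, not from the patent.

```python
# Illustrative values only: y3 = largest eyebrow ordinate, y4 = smallest chin ordinate.
y3 = 620.0
y4 = 480.0

# yZ1 = yZ2 = y4 - (y3 - y4): mirror the eyebrow-to-chin distance below the chin.
y_z = y4 - (y3 - y4)

print(y_z)  # 340.0
```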
Optionally, the step of determining the action range of the image region to be processed from the first and second coordinates comprises:
obtaining the distance along the x-axis between the first and second coordinates using the following formula:
L = |x1 - x2|
where x1 is the abscissa of the first coordinate, x2 the abscissa of the second coordinate, and L the distance between the two coordinates along the x-axis;
determining the action range of both the first subregion and the second subregion as a circular area of diameter L.
Optionally, the step of determining the image region to be processed from the action points and action range comprises:
taking the circular area of diameter L centred on the first action point as the first subregion;
taking the circular area of diameter L centred on the second action point as the second subregion.
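As a sketch of the two circular subregions just described, the following pure-Python check tests whether a pixel falls inside a circle of diameter L centred on an action point; the helper name is an assumption for illustration, not from the patent.

```python
import math

def in_subregion(px, py, cx, cy, L):
    """True when pixel (px, py) lies within the circle of diameter L centred at (cx, cy)."""
    return math.hypot(px - cx, py - cy) <= L / 2

# With L = |x1 - x2| = 4 and the first action point at (0, 4):
print(in_subregion(1, 4, 0, 4, 4))  # True  (distance 1 <= radius 2)
print(in_subregion(3, 4, 0, 4, 4))  # False (distance 3 >  radius 2)
```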
Optionally, after the step of determining the image region to be processed from the action points and action range, the method further comprises:
adjusting the image region to be processed;
and the step of performing image processing on the image region to be processed with the preset action intensity, according to the preset image processing mode, to obtain the processed target image comprises:
performing image processing on the adjusted image region according to the preset image processing mode, to obtain the processed target image.
Optionally, the step of adjusting the image region to be processed comprises at least one of the following adjustments:
moving the image region to be processed to a target position;
adjusting the action range of the image region to be processed to a target action range;
adjusting the preset action intensity of the image region to be processed to a target action intensity.
Optionally, before determining the image region to be processed from the action points and action range, the method further comprises:
if no face region exists in the target image, obtaining the width and height of the target image;
calculating the action points and action range of the image region to be processed from the width and height of the target image.
Optionally, the image region to be processed comprises a first subregion and a second subregion;
and the step of determining the action points and action range of the image region to be processed from the width and height of the target image comprises:
obtaining the abscissa xZ1 of the first action point of the first subregion from the width W of the target image and a preset first ratio value Q1;
obtaining the abscissa xZ2 of the second action point of the second subregion from W and a preset second ratio value Q2;
obtaining the shared ordinate yZ1 = yZ2 of the first and second action points from the height H of the target image and a preset third ratio value Q3;
obtaining a calculation length D from W and a preset fourth ratio value Q4;
determining the action range of both the first subregion and the second subregion as a circular area of diameter D.
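The exact fallback formulas for xZ1, xZ2, yZ1/yZ2, and D are not reproduced in this text (they appeared as figures in the original patent); a plausible reading is that each quantity is the corresponding image dimension scaled by its ratio value, which is what the sketch below assumes. The concrete ratio values are also illustrative.

```python
# ASSUMED forms: x_z1 = W*Q1, x_z2 = W*Q2, y_z = H*Q3, D = W*Q4 -- the patent's
# exact fallback formulas are not given in this text, only the variable names.
def fallback_regions(W, H, Q1=0.25, Q2=0.75, Q3=0.5, Q4=0.25):
    x_z1 = W * Q1   # abscissa of the first action point
    x_z2 = W * Q2   # abscissa of the second action point
    y_z = H * Q3    # shared ordinate of both action points
    D = W * Q4      # diameter of both circular subregions
    return (x_z1, y_z), (x_z2, y_z), D

p1, p2, D = fallback_regions(1000, 1600)
print(p1, p2, D)  # (250.0, 800.0) (750.0, 800.0) 250.0
```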
Optionally, the step of performing image processing on the image region to be processed with the preset action intensity, according to the preset image processing mode, to obtain the processed target image comprises:
performing image processing on the image region to be processed with a preset minimum action intensity, according to the preset image processing mode, to obtain the processed target image.
Optionally, the method further comprises:
receiving an instruction to apply shadow processing to the target image;
selecting a target shadow image from a preset shadow image set;
determining the placement location of the target shadow image in the target image and the transparency of the target shadow image;
superimposing the target shadow image onto the determined placement location with the determined transparency.
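Superimposing the shadow image with a chosen transparency is ordinary alpha blending; the per-pixel sketch below is illustrative only, not the patent's implementation, and the transparency convention (1.0 = fully transparent shadow) is an assumption.

```python
def blend_pixel(base, shadow, transparency):
    """Blend one RGB shadow pixel over one RGB base pixel.
    transparency = 1.0 leaves the base untouched; 0.0 shows only the shadow."""
    alpha = 1.0 - transparency
    return tuple(round((1 - alpha) * b + alpha * s) for b, s in zip(base, shadow))

# A half-transparent black shadow darkens the base pixel by half:
print(blend_pixel((200, 100, 50), (0, 0, 0), transparency=0.5))  # (100, 50, 25)
```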
In a second aspect, an embodiment of the invention provides an image processing apparatus, comprising:
a detection module, for detecting whether a face region exists in a target image;
a first obtaining module, for obtaining coordinate parameters of the face region from the coordinates of its pixels in a preset coordinate system when the detection module detects a face region in the target image;
a first calculation module, for calculating, from the coordinate parameters, the action points and action range of the image region to be processed in the target image;
a first determination module, for determining the image region to be processed from the action points and action range;
a processing module, for performing image processing on the image region to be processed with a preset action intensity, according to a preset image processing mode, to obtain a processed target image.
Optionally, the first calculation module comprises:
a first determination submodule, for determining the action points of the image region to be processed from a first, second, third, and fourth coordinate, where the first coordinate is the coordinate with the smallest abscissa among the coordinate parameters, the second coordinate the one with the largest abscissa, the third coordinate the one with the largest ordinate among the coordinates identifying the eyebrows, and the fourth coordinate the one with the smallest ordinate among the coordinates identifying the chin;
a second determination submodule, for determining the action range of the image region to be processed from the first and second coordinates.
Optionally, the image region to be processed comprises a first subregion and a second subregion;
and the first determination submodule comprises:
a first determination unit, for taking the abscissa of the first coordinate as the abscissa of the first action point of the first subregion;
a second determination unit, for taking the abscissa of the second coordinate as the abscissa of the second action point of the second subregion;
a first calculation unit, for obtaining the ordinates of the first and second action points using the following formula:
yZ1 = yZ2 = y4 - (y3 - y4)
where yZ1 is the ordinate of the first action point, yZ2 the ordinate of the second action point, y3 the ordinate of the third coordinate, and y4 the ordinate of the fourth coordinate.
Optionally, the second determination submodule comprises:
a second calculation unit, for obtaining the distance along the x-axis between the first and second coordinates using the following formula:
L = |x1 - x2|
where x1 is the abscissa of the first coordinate, x2 the abscissa of the second coordinate, and L the distance between the two coordinates along the x-axis;
a third determination unit, for determining the action range of both the first subregion and the second subregion as a circular area of diameter L.
Optionally, the first determination module comprises:
a third determination submodule, for taking the circular area of diameter L centred on the first action point as the first subregion;
a fourth determination submodule, for taking the circular area of diameter L centred on the second action point as the second subregion.
Optionally, the apparatus further comprises:
an adjustment module, for adjusting the image region to be processed;
and the processing module comprises:
a first processing submodule, for performing image processing on the adjusted image region according to the preset image processing mode, to obtain the processed target image.
Optionally, the adjustment module is specifically configured to perform at least one of the following adjustments:
moving the image region to be processed to a target position;
adjusting the action range of the image region to be processed to a target action range;
adjusting the preset action intensity of the image region to be processed to a target action intensity.
Optionally, the apparatus further comprises:
a second obtaining module, for obtaining the width and height of the target image when the detection module detects that no face region exists in the target image;
a second calculation module, for calculating the action points and action range of the image region to be processed from the width and height of the target image.
Optionally, the image region to be processed comprises a first subregion and a second subregion;
and the second calculation module comprises:
a first calculation submodule, for obtaining the abscissa xZ1 of the first action point of the first subregion from the width W of the target image and a preset first ratio value Q1;
a second calculation submodule, for obtaining the abscissa xZ2 of the second action point of the second subregion from W and a preset second ratio value Q2;
a third calculation submodule, for obtaining the shared ordinate yZ1 = yZ2 of the first and second action points from the height H of the target image and a preset third ratio value Q3;
a fourth calculation submodule, for obtaining a calculation length D from W and a preset fourth ratio value Q4;
a fifth determination submodule, for determining the action range of both the first subregion and the second subregion as a circular area of diameter D.
Optionally, the processing module comprises:
a processing submodule, for performing image processing on the image region to be processed with a preset minimum action intensity, according to the preset image processing mode, to obtain the processed target image.
Optionally, the apparatus further comprises:
a receiving module, for receiving an instruction to apply shadow processing to the target image;
a selection module, for selecting a target shadow image from a preset shadow image set;
a second determination module, for determining the placement location of the target shadow image in the target image and the transparency of the target shadow image;
a superimposing module, for superimposing the target shadow image onto the determined placement location with the determined transparency.
In a third aspect, an embodiment of the invention provides an electronic device, comprising a processor, a communication interface, a memory, and a communication bus, the processor, communication interface, and memory communicating with one another over the communication bus;
the memory, for storing a computer program;
the processor, for executing any of the above image processing methods when running the program stored in the memory.
In a fourth aspect, an embodiment of the invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, carries out any of the above image processing methods.
In a fifth aspect, an embodiment of the invention provides a computer application program which, when run on a computer, causes the computer to carry out any of the image processing methods of the above embodiments.
In the technical solutions provided by the embodiments of the invention, when a face region is detected in the target image, coordinate parameters of the face region are obtained from the coordinates of its pixels in a preset coordinate system; the action points and action range of the image region to be processed in the target image are calculated from the coordinate parameters; the image region to be processed is determined from the action points and action range; and image processing is performed on it with a preset action intensity, according to a preset image processing mode, to obtain a processed target image. Because the face region in the image serves as the reference for determining the action points and action range, and the region is then processed with the preset action intensity, the user no longer has to fix the action points, action range, and action intensity through cumbersome manual operations; the image processing operation is simplified and the user experience improved.
Detailed description of the invention
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings needed for the description of the embodiments or of the prior art are briefly introduced below. Evidently, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an image processing method provided by an embodiment of the present invention;
Fig. 2 is another flowchart of an image processing method provided by an embodiment of the present invention;
Fig. 3 is another flowchart of an image processing method provided by an embodiment of the present invention;
Fig. 4 is a structural diagram of an image processing apparatus provided by an embodiment of the present invention;
Fig. 5 is another structural diagram of an image processing apparatus provided by an embodiment of the present invention;
Fig. 6 is another structural diagram of an image processing apparatus provided by an embodiment of the present invention;
Fig. 7 is a structural diagram of an electronic device provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the invention without creative effort fall within the scope of protection of the invention.
To solve the problem that the user can determine the action points, action range, and action intensity only through cumbersome manual operations, and thereby simplify the image processing operation and improve the user experience, the embodiments of the invention provide an image processing method and apparatus, an electronic device, and a storage medium.
The image processing method provided by the embodiments of the invention can be used in application software on an electronic device, such as application software on a phone, tablet, or smart TV, where the application software may be any kind of retouching software, for example PhotoGrid or Meitu Xiu Xiu.
The image processing in the embodiments of the invention may be of a type such as breast-enhancement or buttock-enhancement processing; breast-enhancement processing is used below as the example when describing the method.
The image processing method provided by an embodiment of the invention is introduced first.
As shown in Fig. 1, the image processing method provided by an embodiment of the invention comprises the following steps.
S101: detect whether a face region exists in the target image; if it exists, execute S102.
The target image may be a photo taken by the electronic device, a picture downloaded from the network, and so on, where the electronic device may be a phone, tablet, camera, etc. The format of the target image includes, but is not limited to: JPEG (Joint Photographic Experts Group), BMP (bitmap image file format), PNG (Portable Network Graphics), GIF (Graphics Interchange Format), TIFF (Tag Image File Format), etc.
In general, images can be divided into landscape images and portrait images, and in most cases, to make the people in an image look their best, the user applies image processing to portrait images. When image processing is applied to a portrait image, i.e. when the target image is a portrait image, the target image contains at least one portrait. When the target image contains only one portrait, that portrait can be processed; when it contains several portraits, each portrait can be processed in turn according to a preset rule, for example from the left of the target image to the right, or from the right of the target image to the left. It is understood, of course, that the preset rule is not limited to these two.
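The left-to-right rule just described can be realised by sorting the detected face boxes on their x coordinate; the (x, y, w, h) box format below is an assumption for illustration, not prescribed by the patent.

```python
# Hypothetical detected face boxes as (x, y, w, h); sort left to right on x.
faces = [(420, 50, 80, 80), (30, 60, 75, 75), (210, 40, 90, 90)]
left_to_right = sorted(faces, key=lambda box: box[0])
print([box[0] for box in left_to_right])  # [30, 210, 420]
```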
The face region is the region of the target image where a person's face is located; its extent can be identified and extracted by face recognition technology.
S102: obtain the coordinate parameters of the face region from the coordinates of the pixels of the face region in a preset coordinate system.
The preset coordinate system may be a coordinate system that takes the target image as its reference, for example one whose X-axis is the lower edge of the target image and whose Y-axis is its left edge.
In the preset coordinate system, the pixels of the target image and the coordinates correspond one to one: each pixel corresponds to one coordinate point. For example, the pixel at the very bottom-left corner of the target image is at the coordinate origin, with coordinate (0, 0).
Since the pixels of the face region correspond one to one with coordinates in the preset coordinate system, the facial contour, the contours of the facial features, and the parts each contour encloses can each be represented in the preset coordinate system by a corresponding group of coordinates.
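Most image libraries index pixels from the top-left corner by (row, column), while the preset coordinate system described here has its origin at the bottom-left; a small conversion bridges the two. The function name below is illustrative.

```python
def pixel_to_coord(row, col, height):
    """Map a top-left (row, col) pixel index to the bottom-left (x, y) system."""
    return (col, height - 1 - row)

H = 100  # image height in pixels
print(pixel_to_coord(99, 0, H))  # (0, 0)  -- bottom-left pixel is the origin
print(pixel_to_coord(0, 0, H))   # (0, 99) -- top-left pixel
```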
In one embodiment, the coordinates corresponding to the pixels of the entire face region can be obtained, including the coordinates of the pixels of the facial contour and of all pixels inside it. This yields the coordinates of a more complete face region, so that the action points and action range can be computed more accurately in the subsequent steps.
Further, only the coordinates corresponding to the pixels of the facial contour and of the contours of the facial features in the face region may be obtained, where the facial-feature contours include the eyebrow, eye, nose, mouth, and ear contours. The facial features are the representative characteristic parts of a face region, so the coordinates of the facial contour and the facial-feature contours alone can also represent the face region accurately.
Further still, because the extent of the face region can be determined from the eyebrow contour and the facial contour alone, only the coordinates of the pixels of the facial contour and the eyebrow contour may be obtained; optionally, for the eyebrow contour, the coordinates of the contour pixels of either one of the two eyebrows can be used.
It should be noted that the facial contour includes at least the left and right contours of the face and the chin contour.
S103: calculate, from the coordinate parameters, the action points and action range of the image region to be processed in the target image.
An action point is the centre point of the region the user wishes to process, and the action range is the extent of that region; together they determine the image region to be processed. For example, with the action point as the centre of a circle and the action range as its diameter, the image region they determine is a circular area; with the action point as the intersection of the two diagonals of a square and the action range as its side length, the region they determine is a square area.
Image-region to be processed is the region of selected pending image procossing on target image, also, for not The image procossing of same type, image-region to be processed included separate independent region quantity can be it is different, for example, For the image procossing of chest enlarge, image-region to be processed can be two separated independent regions.
When image-region to be processed is two separated independent regions, the position of image-region to be processed is two works With point, this two separated independent regions are respectively corresponded;The sphere of action of image-region to be processed is also two sphere of actions, point This two separated independent regions are not corresponded to, wherein two sphere of actions can be set to the same, may be set to be different Sample.
In one embodiment, from the obtained coordinate parameters of the face region, the coordinate with the smallest abscissa value is determined as a first coordinate, the coordinate with the largest abscissa value is determined as a second coordinate, the coordinate with the largest ordinate value is determined as a third coordinate, and the coordinate with the smallest ordinate value is determined as a fourth coordinate.
The first coordinate and the second coordinate may be determined from the coordinates identifying the face contour, the third coordinate may be determined from the coordinates identifying the eyebrow contour, and the fourth coordinate may be determined from the coordinates of the chin contour within the face contour.
Specifically, the action point of the to-be-processed image region of the target image may be determined according to the first coordinate, the second coordinate, the third coordinate, and the fourth coordinate; the action range of the to-be-processed image region of the target image may be determined according to the first coordinate and the second coordinate.
In a specific embodiment, the to-be-processed image region is two separate independent regions: a first subregion and a second subregion, where the action point of the first subregion is a first action point and the action point of the second subregion is a second action point.
Let the first coordinate be (x1, y1), the second coordinate be (x2, y2), the third coordinate be (x3, y3), the fourth coordinate be (x4, y4), the first action point be (xZ1, yZ1), and the second action point be (xZ2, yZ2).
The abscissa of the first coordinate is determined as the abscissa of the first action point, i.e. xZ1 = x1; the abscissa of the second coordinate is determined as the abscissa of the second action point, i.e. xZ2 = x2.
The ordinate of the first action point and the ordinate of the second action point may be the same, and can be obtained by the following formula:
yZ1 = yZ2 = y4 - (y3 - y4)
Illustratively, the first coordinate is (1, 40), the second coordinate is (21, 40), the third coordinate is (15, 45), and the fourth coordinate is (11, 30).
Then, according to the above implementation, the abscissa of the first action point is xZ1 = 1 and the abscissa of the second action point is xZ2 = 21; the ordinate of the first action point is yZ1 = y4 - (y3 - y4) = 30 - (45 - 30) = 15, and the ordinate of the second action point is yZ2 = yZ1 = 15. In summary, the coordinate of the first action point is (1, 15) and the coordinate of the second action point is (21, 15).
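For illustration only, the action-point computation described above can be sketched in Python as follows; the function name and the list-of-tuples coordinate representation are assumptions for this sketch, not part of the patent:

```python
def action_points(face_coords):
    """Compute the two action points from the face-region coordinate parameters.

    face_coords: iterable of (x, y) pixel coordinates of the face region.
    Returns ((xZ1, yZ1), (xZ2, yZ2)) per the formulas in the text.
    """
    x1 = min(x for x, _ in face_coords)   # first coordinate: smallest abscissa
    x2 = max(x for x, _ in face_coords)   # second coordinate: largest abscissa
    y3 = max(y for _, y in face_coords)   # third coordinate: largest ordinate (eyebrow)
    y4 = min(y for _, y in face_coords)   # fourth coordinate: smallest ordinate (chin)
    yz = y4 - (y3 - y4)                   # shared ordinate of both action points
    return (x1, yz), (x2, yz)

# Reproducing the worked example from the text:
pts = [(1, 40), (21, 40), (15, 45), (11, 30)]
print(action_points(pts))  # ((1, 15), (21, 15))
```

In a real implementation the coordinates would come from a face-detection step; here the example points are simply the four extremal coordinates of the worked example.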
Further, in a specific embodiment of determining the action range, where the first subregion and the second subregion are circular regions, the distance between the first coordinate and the second coordinate along the x axis is obtained by the following formula:
L = |x1 - x2|
and the action range of the first subregion and the action range of the second subregion are both determined as: a circular region with L as the diameter.
Illustratively, if the first coordinate is (1, 40) and the second coordinate is (21, 40), the distance L between the first coordinate and the second coordinate along the x axis is 20, so the action range of the first subregion and the action range of the second subregion can be determined as circular regions with 20 as the diameter.
S104: determine the to-be-processed image region according to the action point and the action range.
The to-be-processed image region includes two separate independent regions, the first subregion and the second subregion, whose action ranges are both circular regions with L as the diameter. Therefore, the circular region determined with L as the diameter and the first action point as the center may be taken as the first subregion, and the circular region determined with L as the diameter and the second action point as the center may be taken as the second subregion.
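The circular subregions above can be characterized by a simple membership test; this is a minimal sketch under the assumption of a plain Euclidean distance check (the patent does not prescribe a particular test), with illustrative names:

```python
import math

def in_subregion(px, py, action_point, diameter):
    """Return True if pixel (px, py) lies inside the circular subregion
    centered at action_point with the given diameter."""
    cx, cy = action_point
    return math.hypot(px - cx, py - cy) <= diameter / 2

# With the worked example: first action point (1, 15), diameter L = 20
print(in_subregion(5, 15, (1, 15), 20))   # True: distance 4 <= radius 10
print(in_subregion(1, 30, (1, 15), 20))   # False: distance 15 > radius 10
```

Running this test over the pixels of the target image would enumerate exactly the pixels belonging to one subregion.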
In one embodiment, after the to-be-processed image region is determined, it may be displayed; specifically, it may be displayed on the screen of a corresponding electronic device, such as a mobile phone screen, a tablet screen, or a television screen.
In one embodiment, after the to-be-processed image region is determined, it may also be adjusted, so that when the to-be-processed image region determined from the action point and the action range is inaccurate, the user can still adjust it as needed.
One manner of adjusting the to-be-processed image region may be: moving the to-be-processed image region to a target position. Specifically, the user may long-press the to-be-processed image region on the screen of the electronic device with a finger; once the press has lasted a preset fixed duration, the user can drag the to-be-processed image region and move it to the target position.
Another adjustment manner may be: adjusting the action range of the to-be-processed image region to a target action range. Specifically, the user may long-press the edge of the to-be-processed image region on the screen of the electronic device with a finger; once the press has lasted a preset fixed duration, the user can drag the edge to zoom the to-be-processed image region, thereby adjusting its action range.
Illustratively, when the to-be-processed image region is a circular region, the user long-presses the edge of the circle: dragging toward the center shrinks the action range of the to-be-processed image region, while dragging away from the center enlarges it.
Yet another adjustment manner may be: adjusting the preset action intensity of the to-be-processed image region to a target action intensity. Moreover, in one embodiment, after the adjustment to the target action intensity, image processing is performed on the to-be-processed image region with the target action intensity, and the processed target image is displayed.
Specifically, when the action intensity is adjusted, a function area appears on the screen, containing a progress bar for adjusting the action intensity; the action intensity is adjusted by moving the progress bar. Optionally, when the action intensity is at its minimum, no processing is performed on the image.
It can be understood that the above three adjustment manners may each be applied individually, any two of them may be combined to adjust the to-be-processed image region, and, of course, all three may be applied to the to-be-processed image region at the same time.
After the to-be-processed image region is adjusted, image processing is performed on the adjusted to-be-processed image region according to the preset image processing manner, to obtain the processed target image. For example, if the action intensity of the to-be-processed image region is adjusted to a target action intensity, image processing is performed on the to-be-processed image region with the target action intensity, to obtain the processed target image.
S105: perform image processing on the to-be-processed image region with a preset action intensity according to a preset image processing manner, to obtain a processed target image.
The preset image processing manner may be breast-enhancement processing, and the preset action intensity may be set in a user-defined way; specifically, the preset action intensity may be set to the action intensity most commonly used by users, as determined by statistics.
After the to-be-processed image region is determined, image processing is performed on it with the preset action intensity; specifically, breast-enhancement processing is performed on the to-be-processed image region with the preset action intensity, to achieve the enhancement effect corresponding to the preset action intensity.
After the image processing is completed, the processed target image is obtained and displayed.
In the technical solution provided by this embodiment of the present invention, whether a face region exists in a target image is detected; if so, coordinate parameters of the face region are obtained according to coordinates of pixels in the face region in a preset coordinate system; an action point and an action range of a to-be-processed image region in the target image are calculated according to the coordinate parameters; the to-be-processed image region is determined according to the action point and the action range; and image processing is performed on the to-be-processed image region with a preset action intensity according to a preset image processing manner, to obtain a processed target image. In this solution, the face region in the image serves as a reference for determining the action point and the action range of the to-be-processed image region, and image processing is then performed on the to-be-processed image region with the preset action intensity, so that the user does not need cumbersome manual operations to determine the action point, the action range, and the action intensity, thereby simplifying the image processing operation and improving user experience.
An image processing method provided by the present invention is described below with reference to another specific embodiment.
As shown in Fig. 2, an image processing method provided by this embodiment of the present invention includes the following steps:
S201: detect whether a face region exists in the target image; if it exists, execute S202; if it does not exist, execute S204.
S202: obtain the coordinate parameters of the face region according to the coordinates of the pixels in the face region in a preset coordinate system.
S203: calculate, according to the coordinate parameters, the action point and the action range of the to-be-processed image region in the target image.
In this embodiment, S201-S203 are the same as S101-S103 of the above embodiment and are not repeated here.
S204: obtain the width and height of the target image.
When the target image is square, the width and height of the target image are both the side length of the square; in this case, what is obtained is the side length of the target image.
When no face region exists in the target image, the width and height of the target image are obtained, where the obtained width and height are measured in the preset coordinate system.
S205: calculate, according to the width and height of the target image, the action point and the action range of the to-be-processed image region in the target image.
The to-be-processed image region may include two separate independent regions: a first subregion and a second subregion, where the action point of the first subregion is the first action point and the action point of the second subregion is the second action point; the action ranges corresponding to the first subregion and the second subregion may be set to be the same or different.
In one embodiment, determining the action point of the to-be-processed image region in the target image may be as follows: let the first action point be (xZ1, yZ1) and the second action point be (xZ2, yZ2); the abscissa of the first action point is obtained by the following formula:
xZ1 = W / Q1
where W is the width of the target image and Q1 is a preset first ratio value;
the abscissa of the second action point of the second subregion is obtained by the following formula:
xZ2 = W / Q2
where Q2 is a preset second ratio value;
the ordinate of the first action point of the first subregion and the ordinate of the second action point of the second subregion are obtained by the following formula:
yZ1 = yZ2 = H / Q3
where H is the height of the target image and Q3 is a preset third ratio value.
The preset first ratio value, second ratio value, and third ratio value are user-defined settings, and the first ratio value, second ratio value, and third ratio value may be set to different values.
Illustratively, the width of the obtained target image is 12 and its height is 15; the preset first ratio value is 4, the second ratio value is 1, and the third ratio value is 3. Then the abscissa of the first action point is xZ1 = W / Q1 = 12 / 4 = 3; the abscissa of the second action point is xZ2 = W / Q2 = 12 / 1 = 12; and the ordinate of the first action point and of the second action point is yZ1 = yZ2 = H / Q3 = 15 / 3 = 5.
In summary, the coordinate of the first action point is (3, 5) and the coordinate of the second action point is (12, 5).
In one embodiment, determining the action range of the to-be-processed image region in the target image may be as follows:
where the first subregion and the second subregion are circular regions, a calculation length is obtained by the following formula:
D = W / Q4
where Q4 is a preset fourth ratio value, and the preset fourth ratio value may be set in a user-defined way.
The action range of the first subregion and the action range of the second subregion are both determined as: a circular region with D as the diameter.
Illustratively, the width of the obtained target image is 12 and the preset fourth ratio value is 3, so the calculation length is obtained as D = W / Q4 = 12 / 3 = 4.
The action range of the first subregion and the action range of the second subregion can thus be determined as circular regions with 4 as the diameter.
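The default action points and action range used when no face is detected can be sketched as below; the function name and the default ratio values are assumptions taken from the worked example in the text, not values mandated by the patent:

```python
def default_region_params(width, height, q1=4.0, q2=1.0, q3=3.0, q4=3.0):
    """Default action points and circle diameter when no face region is detected.

    q1..q4 are the preset first..fourth ratio values; the defaults here
    are merely the example values from the text.
    """
    p1 = (width / q1, height / q3)   # first action point (xZ1, yZ1)
    p2 = (width / q2, height / q3)   # second action point (xZ2, yZ2)
    d = width / q4                   # diameter D of both circular action ranges
    return p1, p2, d

# Worked example: W=12, H=15, Q1=4, Q2=1, Q3=3, Q4=3
print(default_region_params(12, 15))  # ((3.0, 5.0), (12.0, 5.0), 4.0)
```

Exposing the ratio values as parameters mirrors the text's statement that they are user-defined settings.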
After the first action point, the second action point, and the action range are determined, the circular region determined with D as the diameter and the first action point as the center is taken as the first subregion, and the circular region determined with D as the diameter and the second action point as the center is taken as the second subregion; the determined first subregion and second subregion are then displayed.
In one embodiment, when no face region exists in the target image, the action point and action range of the to-be-processed image region are determined from the width and height of the target image. Compared with the implementation that determines the action point and action range from the face region, this method may be slightly less accurate; therefore, to reduce the number of user operations, the preset action intensity may be adjusted to the minimum action intensity, i.e. the to-be-processed image region is processed without any action intensity.
Then, image processing is performed on the to-be-processed image region with the minimum action intensity according to the preset image processing manner, to obtain the processed target image.
S206: determine the to-be-processed image region according to the action point and the action range.
S207: perform image processing on the to-be-processed image region with a preset action intensity according to a preset image processing manner, to obtain a processed target image.
In this embodiment, S206-S207 are the same as S104-S105 of the above embodiment and are not repeated here.
In the technical solution provided by this embodiment of the present invention, whether a face region exists in a target image is detected; if so, coordinate parameters of the face region are obtained according to coordinates of pixels in the face region in a preset coordinate system; an action point and an action range of a to-be-processed image region in the target image are calculated according to the coordinate parameters; the to-be-processed image region is determined according to the action point and the action range; and image processing is performed on the to-be-processed image region with a preset action intensity according to a preset image processing manner, to obtain a processed target image. In this solution, the face region in the image serves as a reference for determining the action point and the action range of the to-be-processed image region, and image processing is then performed on the to-be-processed image region with the preset action intensity, so that the user does not need cumbersome manual operations to determine the action point, the action range, and the action intensity, thereby simplifying the image processing operation and improving user experience.
An image processing method provided by the present invention is described below with reference to another specific embodiment.
As shown in Fig. 3, an image processing method provided by this embodiment of the present invention may further include the following steps:
S301: receive an instruction to perform shadow processing on the target image.
Shadow processing may act on the chest: by setting the transparency of a shadow, the chest appears visually fuller.
It should be noted that shadow processing and the image processing of the above embodiments, such as breast enhancement or hip enhancement, can each be performed individually; that is, when image processing is performed on the target image, only shadow processing may be done, or only breast- or hip-enhancement processing may be done. Of course, both kinds of image processing may also be performed on the target image, for example, shadow processing after breast-enhancement processing.
S302: select a target shadow image from a preset shadow image set.
The shadow image set is preset and stores multiple different types of shadow images; the user can select a shadow image from the set as the target shadow image as needed. For example, if the shadow image set stores six types of shadow images, the user may select shadow image No. 1 as the target shadow image for shadow processing, and of course may also switch to another shadow image.
S303: determine a placement position of the target shadow image in the target image and a transparency of the target shadow image.
When a face region exists in the target image, the placement position may be determined according to the coordinate parameters of the face region; this implementation is similar to the above implementation of calculating the action point and action range of the to-be-processed image region in the target image according to the coordinate parameters, and is not detailed here.
When no face region exists in the target image, the placement position may be determined according to the width and height of the target image; this implementation is similar to the above implementation of calculating the action point and action range of the to-be-processed image region in the target image according to the width and height of the target image, and is not detailed here.
In addition, the transparency of the target shadow image may be preset in a user-defined way, and the user may also adjust it again as needed.
S304: superimpose the target shadow image onto the determined placement position with the determined transparency.
The target shadow image is displayed with the determined transparency and superimposed onto the determined placement position, and the target image with the superimposed target shadow image is then displayed.
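The superimposition step can be sketched as per-pixel alpha blending; the blending rule is an assumption for this sketch (the patent does not specify one), and both images are represented here as plain 2-D lists of grayscale values for simplicity:

```python
def overlay_shadow(target, shadow, top_left, alpha):
    """Blend `shadow` onto `target` at `top_left` with opacity `alpha` in [0, 1].

    target, shadow: 2-D lists of grayscale pixel values; target is modified
    in place. Shadow pixels falling outside the target are skipped.
    """
    ox, oy = top_left
    for y, row in enumerate(shadow):
        for x, s in enumerate(row):
            ty, tx = oy + y, ox + x
            if 0 <= ty < len(target) and 0 <= tx < len(target[0]):
                target[ty][tx] = round((1 - alpha) * target[ty][tx] + alpha * s)
    return target

img = [[100, 100], [100, 100]]
shadow = [[0]]                                   # a single dark shadow pixel
print(overlay_shadow(img, shadow, (1, 0), 0.5))  # [[100, 50], [100, 100]]
```

Here `alpha` plays the role of the determined transparency and `top_left` the role of the determined placement position; a production implementation would operate on full-color image buffers instead.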
In the technical solution provided by this embodiment of the present invention, whether a face region exists in a target image is detected; if so, coordinate parameters of the face region are obtained according to coordinates of pixels in the face region in a preset coordinate system; an action point and an action range of a to-be-processed image region in the target image are calculated according to the coordinate parameters; the to-be-processed image region is determined according to the action point and the action range; and image processing is performed on the to-be-processed image region with a preset action intensity according to a preset image processing manner, to obtain a processed target image. In this solution, the face region in the image serves as a reference for determining the action point and the action range of the to-be-processed image region, and image processing is then performed on the to-be-processed image region with the preset action intensity, so that the user does not need cumbersome manual operations to determine the action point, the action range, and the action intensity, thereby simplifying the image processing operation and improving user experience.
Corresponding to the above method embodiments, an embodiment of the present invention further provides an image processing apparatus. As shown in Fig. 4, the apparatus includes:
a detection module 410, configured to detect whether a face region exists in a target image;
a first obtaining module 420, configured to, when the detection module detects that a face region exists in the target image, obtain coordinate parameters of the face region according to coordinates of pixels in the face region in a preset coordinate system;
a first calculation module 430, configured to calculate, according to the coordinate parameters, an action point and an action range of a to-be-processed image region in the target image;
a first determining module 440, configured to determine the to-be-processed image region according to the action point and the action range; and
a processing module 450, configured to perform image processing on the to-be-processed image region with a preset action intensity according to a preset image processing manner, to obtain a processed target image.
Optionally, in one embodiment, the first calculation module 430 includes:
a first determining submodule, configured to determine the action point of the to-be-processed image region of the target image according to a first coordinate, a second coordinate, a third coordinate, and a fourth coordinate, where the first coordinate is the coordinate with the smallest abscissa value among the coordinate parameters, the second coordinate is the coordinate with the largest abscissa value among the coordinate parameters, the third coordinate is the coordinate with the largest ordinate value among the coordinates in the coordinate parameters that identify the eyebrow, and the fourth coordinate is the coordinate with the smallest ordinate value among the coordinates in the coordinate parameters that identify the chin; and
a second determining submodule, configured to determine the action range of the to-be-processed image region of the target image according to the first coordinate and the second coordinate.
Optionally, in one embodiment, the to-be-processed image region includes a first subregion and a second subregion;
the first determining submodule includes:
a first determining unit, configured to determine the abscissa of the first coordinate as the abscissa of the first action point of the first subregion;
a second determining unit, configured to determine the abscissa of the second coordinate as the abscissa of the second action point of the second subregion; and
a first calculation unit, configured to obtain the ordinate of the first action point of the first subregion and the ordinate of the second action point of the second subregion by the following formula:
yZ1 = yZ2 = y4 - (y3 - y4)
where yZ1 is the ordinate of the first action point, yZ2 is the ordinate of the second action point, y3 is the ordinate of the third coordinate, and y4 is the ordinate of the fourth coordinate.
Optionally, in one embodiment, the second determining submodule includes:
a second calculation unit, configured to obtain the distance between the first coordinate and the second coordinate along the x axis by the following formula:
L = |x1 - x2|
where x1 is the abscissa of the first coordinate, x2 is the abscissa of the second coordinate, and L is the distance between the first coordinate and the second coordinate along the x axis; and
a third determining unit, configured to determine the action range of the first subregion and the action range of the second subregion as: a circular region with L as the diameter.
Optionally, in one embodiment, the first determining module 440 includes:
a third determining submodule, configured to take the circular region determined with L as the diameter and the first action point as the center as the first subregion; and
a fourth determining submodule, configured to take the circular region determined with L as the diameter and the second action point as the center as the second subregion.
Optionally, in one embodiment, the apparatus further includes:
an adjustment module, configured to adjust the to-be-processed image region;
and the processing module 450 includes:
a first processing submodule, configured to perform image processing on the adjusted to-be-processed image region according to the preset image processing manner, to obtain the processed target image.
Optionally, in one embodiment, the adjustment module is specifically configured to perform at least one of the following adjustment manners:
moving the to-be-processed image region to a target position;
adjusting the action range of the to-be-processed image region to a target action range; and
adjusting the preset action intensity of the to-be-processed image region to a target action intensity.
On the basis of the above Fig. 4, the embodiment of the present invention further provides another embodiment. As shown in Fig. 5, the apparatus further includes:
a second obtaining module 510, configured to obtain the width and height of the target image when the detection module detects that no face region exists in the target image; and
a second calculation module 520, configured to calculate, according to the width and height of the target image, the action point and the action range of the to-be-processed image region in the target image.
Optionally, in one embodiment, the to-be-processed image region includes a first subregion and a second subregion;
the second calculation module 520 includes:
a first calculation submodule, configured to obtain the abscissa of the first action point of the first subregion by the following formula:
xZ1 = W / Q1
where xZ1 is the abscissa of the first action point, W is the width of the target image, and Q1 is a preset first ratio value;
a second calculation submodule, configured to obtain the abscissa of the second action point of the second subregion by the following formula:
xZ2 = W / Q2
where xZ2 is the abscissa of the second action point and Q2 is a preset second ratio value;
a third calculation submodule, configured to obtain the ordinate of the first action point of the first subregion and the ordinate of the second action point of the second subregion by the following formula:
yZ1 = yZ2 = H / Q3
where yZ1 is the ordinate of the first action point, yZ2 is the ordinate of the second action point, H is the height of the target image, and Q3 is a preset third ratio value;
a fourth calculation submodule, configured to obtain a calculation length by the following formula:
D = W / Q4
where W is the width of the target image and Q4 is a preset fourth ratio value; and
a fifth determining submodule, configured to determine the action range of the first subregion and the action range of the second subregion as: a circular region with D as the diameter.
Optionally, in one embodiment, the processing module 450 includes:
a processing submodule, configured to perform image processing on the to-be-processed image region with a preset minimum action intensity according to the preset image processing manner, to obtain the processed target image.
The embodiment of the present invention further provides another embodiment. As shown in Fig. 6, the apparatus further includes:
a receiving module 610, configured to receive an instruction to perform shadow processing on the target image;
a selection module 620, configured to select a target shadow image from a preset shadow image set;
a second determining module 630, configured to determine a placement position of the target shadow image in the target image and a transparency of the target shadow image; and
a superimposing module 640, configured to superimpose the target shadow image onto the determined placement position with the determined transparency.
In the technical solution provided by this embodiment of the present invention, whether a face region exists in a target image is detected; if so, coordinate parameters of the face region are obtained according to coordinates of pixels in the face region in a preset coordinate system; an action point and an action range of a to-be-processed image region in the target image are calculated according to the coordinate parameters; the to-be-processed image region is determined according to the action point and the action range; and image processing is performed on the to-be-processed image region with a preset action intensity according to a preset image processing manner, to obtain a processed target image. In this solution, the face region in the image serves as a reference for determining the action point and the action range of the to-be-processed image region, and image processing is then performed on the to-be-processed image region with the preset action intensity, so that the user does not need cumbersome manual operations to determine the action point, the action range, and the action intensity, thereby simplifying the image processing operation and improving user experience.
As for the apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively brief; for related details, refer to the corresponding parts of the method embodiments.
An embodiment of the present invention further provides an electronic device. As shown in Fig. 7, the device includes a processor 710, a communication interface 720, a memory 730, and a communication bus 740, where the processor 710, the communication interface 720, and the memory 730 communicate with one another through the communication bus 740.
The memory 730 is configured to store a computer program.
The processor 710 is configured to implement the following steps when executing the program stored in the memory 730:
detecting whether a face region exists in a target image;
if it exists, obtaining coordinate parameters of the face region according to coordinates of pixels in the face region in a preset coordinate system;
calculating, according to the coordinate parameters, an action point and an action range of a to-be-processed image region in the target image;
determining the to-be-processed image region according to the action point and the action range; and
performing image processing on the to-be-processed image region with a preset action intensity according to a preset image processing manner, to obtain a processed target image.
It can be understood that the electronic device may also execute any of the image processing methods in the above embodiments, which are not repeated here.
The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in the figure, but this does not mean there is only one bus or only one type of bus.
The communication interface is used for communication between the above electronic device and other devices.
The memory may include a random access memory (RAM), and may also include a non-volatile memory (NVM), for example, at least one disk storage. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), or the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs any of the above image processing methods.
An embodiment of the present invention further provides a computer application program which, when run on a computer, causes the computer to perform any of the image processing methods in the above embodiments.
The terms used in the embodiments of the present application are for the purpose of describing particular embodiments only and are not intended to limit the application. The singular forms "a", "an", "the" and "said" used in the embodiments of the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first", "second", "third", etc. may be used in the embodiments of the present application to describe various connection ports, identification information and the like, these connection ports and identification information should not be limited by these terms, which are only used to distinguish them from one another. For example, without departing from the scope of the embodiments of the present application, a first connection port may also be referred to as a second connection port, and similarly, a second connection port may also be referred to as a first connection port.
Depending on the context, the word "if" as used herein may be interpreted as "when", "while", "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
Through the above description of the embodiments, those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional modules is given as an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the systems, devices and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division of the modules or units is only a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed in the present application, and these should all be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (22)

1. An image processing method, characterized in that the method comprises:
detecting whether a human face region exists in a target image;
if so, obtaining coordinate parameters of the human face region according to the coordinates, in a preset coordinate system, of the pixels in the human face region;
calculating, according to the coordinate parameters, an action position and an action range of a to-be-processed image region in the target image;
determining the to-be-processed image region according to the action position and the action range;
performing image processing on the to-be-processed image region in a preset image processing manner with a preset action intensity to obtain a processed target image;
receiving an instruction to perform shadow processing on the target image;
selecting a target shadow image from a preset shadow image set;
determining a placement position of the target shadow image in the target image and a transparency of the target shadow image;
superimposing the target shadow image onto the determined placement position with the determined transparency.
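As an illustrative aside (not claim language), the shadow-superimposition steps of claim 1 — placing a target shadow image at a determined position with a determined transparency — correspond to standard alpha blending, which is one plausible realization. The sketch below assumes grayscale images stored as 2-D lists; nothing about the data layout is specified by the claim.

```python
def blend(base, shadow, top_left, transparency):
    """Blend `shadow` over `base` (2-D lists of grayscale values) at top_left.

    transparency: 0.0 = shadow fully opaque, 1.0 = shadow invisible.
    """
    ox, oy = top_left
    out = [row[:] for row in base]          # leave the input image untouched
    alpha = 1.0 - transparency
    for y, row in enumerate(shadow):
        for x, s in enumerate(row):
            # Skip shadow pixels that fall outside the target image.
            if 0 <= y + oy < len(out) and 0 <= x + ox < len(out[0]):
                out[y + oy][x + ox] = round(alpha * s + (1 - alpha) * out[y + oy][x + ox])
    return out
```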
2. The method according to claim 1, characterized in that the step of calculating, according to the coordinate parameters, the action position and the action range of the to-be-processed image region in the target image comprises:
determining the action position of the to-be-processed image region of the target image according to a first coordinate, a second coordinate, a third coordinate and a fourth coordinate, wherein the first coordinate is the coordinate with the smallest abscissa value among the coordinate parameters; the second coordinate is the coordinate with the largest abscissa value among the coordinate parameters; the third coordinate is the coordinate with the largest ordinate value among the coordinates, in the coordinate parameters, that identify the eyebrows; and the fourth coordinate is the coordinate with the smallest ordinate value among the coordinates, in the coordinate parameters, that identify the chin;
determining the action range of the to-be-processed image region of the target image according to the first coordinate and the second coordinate.
3. The method according to claim 2, characterized in that the to-be-processed image region comprises a first subregion and a second subregion;
the step of determining, according to the first coordinate, the second coordinate, the third coordinate and the fourth coordinate, the action position of the to-be-processed image region of the target image comprises:
determining the abscissa of the first coordinate as the abscissa of a first action position of the first subregion;
determining the abscissa of the second coordinate as the abscissa of a second action position of the second subregion;
obtaining the ordinate of the first action position of the first subregion and the ordinate of the second action position of the second subregion using the following formula:
yZ1 = yZ2 = y4 - (y3 - y4)
wherein yZ1 is the ordinate of the first action position, yZ2 is the ordinate of the second action position, y3 is the ordinate of the third coordinate, and y4 is the ordinate of the fourth coordinate.
4. The method according to claim 3, characterized in that the step of determining, according to the first coordinate and the second coordinate, the action range of the to-be-processed image region of the target image comprises:
obtaining the distance between the first coordinate and the second coordinate on the x-axis using the following formula:
L = |x1 - x2|
wherein x1 is the abscissa of the first coordinate, x2 is the abscissa of the second coordinate, and L is the distance between the first coordinate and the second coordinate on the x-axis;
determining that the action range of the first subregion and the action range of the second subregion are both a circular region with L as its diameter.
5. The method according to claim 4, characterized in that the step of determining the to-be-processed image region according to the action position and the action range comprises:
taking the circular region determined with L as its diameter and the first action position as its center as the first subregion;
taking the circular region determined with L as its diameter and the second action position as its center as the second subregion.
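Read together, claims 2 through 5 fully determine the two subregions from four landmark coordinates. The sketch below is a direct transcription into Python, offered as illustration only: the landmark-labelling scheme (which ids mark the eyebrows and chin) is an assumption, while the arithmetic follows the claimed formulas yZ1 = yZ2 = y4 - (y3 - y4) and L = |x1 - x2|.

```python
def region_from_landmarks(landmarks, eyebrow_ids, chin_ids):
    """Compute the two action positions and the shared action-range diameter.

    landmarks: dict mapping a landmark id to its (x, y) in the preset
    coordinate system; eyebrow_ids / chin_ids name the eyebrow and chin
    landmarks (an assumed labelling scheme, not specified by the claims).
    """
    x1 = min(p[0] for p in landmarks.values())      # first coordinate: smallest abscissa
    x2 = max(p[0] for p in landmarks.values())      # second coordinate: largest abscissa
    y3 = max(landmarks[i][1] for i in eyebrow_ids)  # third: largest eyebrow ordinate
    y4 = min(landmarks[i][1] for i in chin_ids)     # fourth: smallest chin ordinate
    y_z = y4 - (y3 - y4)    # claim 3: shared ordinate of both action positions
    L = abs(x1 - x2)        # claim 4: diameter of both circular action ranges
    return (x1, y_z), (x2, y_z), L
```

Per claim 5, each subregion is then the circle of diameter L centered at the corresponding returned action position.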
6. The method according to claim 1, characterized in that after the step of determining the to-be-processed image region according to the action position and the action range, the method further comprises:
adjusting the to-be-processed image region;
and the step of performing image processing on the to-be-processed image region in a preset image processing manner with a preset action intensity to obtain a processed target image comprises:
performing image processing on the adjusted to-be-processed image region in the preset image processing manner to obtain the processed target image.
7. The method according to claim 6, characterized in that the step of adjusting the to-be-processed image region comprises at least one of the following adjustment manners:
moving the to-be-processed image region to a target position;
adjusting the action range of the to-be-processed image region to a target action range;
adjusting the preset action intensity of the to-be-processed image region to a target action intensity.
8. The method according to claim 1, characterized in that before the step of determining the to-be-processed image region according to the action position and the action range, the method further comprises:
if no human face region exists in the target image, obtaining the width and the height of the target image;
calculating, according to the width and the height of the target image, the action position and the action range of the to-be-processed image region in the target image.
9. The method according to claim 8, characterized in that the to-be-processed image region comprises a first subregion and a second subregion;
the step of calculating, according to the width and the height of the target image, the action position and the action range of the to-be-processed image region in the target image comprises:
obtaining the abscissa of the first action position of the first subregion using the following formula:
wherein xZ1 is the abscissa of the first action position, W is the width of the target image, and Q1 is a preset first ratio value;
obtaining the abscissa of the second action position of the second subregion using the following formula:
wherein xZ2 is the abscissa of the second action position, and Q2 is a preset second ratio value;
obtaining the ordinate of the first action position of the first subregion and the ordinate of the second action position of the second subregion using the following formula:
wherein yZ1 is the ordinate of the first action position, yZ2 is the ordinate of the second action position, H is the height of the target image, and Q3 is a preset third ratio value;
obtaining a calculation length using the following formula:
wherein W is the width of the target image, and Q4 is a preset fourth ratio value;
determining that the action range of the first subregion and the action range of the second subregion are both a circular region with D as its diameter.
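The formula images of claim 9 are not reproduced in this text. A plausible reading, consistent with the variables defined around them — each action-position coordinate and the calculation length D taken as a fixed fraction (Q1 through Q4) of the image width or height — is sketched below. This interpretation is an assumption made for illustration, not the claim text itself.

```python
def fallback_region(width, height, q1, q2, q3, q4):
    """Ratio-based region when no face is detected (an assumed reading of claim 9).

    Each Q is treated as a preset fraction of the image width or height.
    """
    x_z1 = width * q1    # abscissa of the first action position
    x_z2 = width * q2    # abscissa of the second action position
    y_z = height * q3    # shared ordinate of both action positions
    d = width * q4       # diameter D of both circular action ranges
    return (x_z1, y_z), (x_z2, y_z), d
```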
10. The method according to claim 8, characterized in that the step of performing image processing on the to-be-processed image region in a preset image processing manner with a preset action intensity to obtain a processed target image comprises:
performing image processing on the to-be-processed image region in the preset image processing manner with a preset minimum action intensity to obtain the processed target image.
11. An image processing apparatus, characterized in that the apparatus comprises:
a detection module, configured to detect whether a human face region exists in a target image;
a first obtaining module, configured to, when the detection module detects that a human face region exists in the target image, obtain coordinate parameters of the human face region according to the coordinates, in a preset coordinate system, of the pixels in the human face region;
a first calculating module, configured to calculate, according to the coordinate parameters, an action position and an action range of a to-be-processed image region in the target image;
a first determining module, configured to determine the to-be-processed image region according to the action position and the action range;
a processing module, configured to perform image processing on the to-be-processed image region in a preset image processing manner with a preset action intensity to obtain a processed target image;
a receiving module, configured to receive an instruction to perform shadow processing on the target image;
a selecting module, configured to select a target shadow image from a preset shadow image set;
a second determining module, configured to determine a placement position of the target shadow image in the target image and a transparency of the target shadow image;
a superimposing module, configured to superimpose the target shadow image onto the determined placement position with the determined transparency.
12. The apparatus according to claim 11, characterized in that the first calculating module comprises:
a first determining submodule, configured to determine the action position of the to-be-processed image region of the target image according to a first coordinate, a second coordinate, a third coordinate and a fourth coordinate, wherein the first coordinate is the coordinate with the smallest abscissa value among the coordinate parameters; the second coordinate is the coordinate with the largest abscissa value among the coordinate parameters; the third coordinate is the coordinate with the largest ordinate value among the coordinates, in the coordinate parameters, that identify the eyebrows; and the fourth coordinate is the coordinate with the smallest ordinate value among the coordinates, in the coordinate parameters, that identify the chin;
a second determining submodule, configured to determine the action range of the to-be-processed image region of the target image according to the first coordinate and the second coordinate.
13. The apparatus according to claim 12, characterized in that the to-be-processed image region comprises a first subregion and a second subregion;
the first determining submodule comprises:
a first determining unit, configured to determine the abscissa of the first coordinate as the abscissa of a first action position of the first subregion;
a second determining unit, configured to determine the abscissa of the second coordinate as the abscissa of a second action position of the second subregion;
a first calculating unit, configured to obtain the ordinate of the first action position of the first subregion and the ordinate of the second action position of the second subregion using the following formula:
yZ1 = yZ2 = y4 - (y3 - y4)
wherein yZ1 is the ordinate of the first action position, yZ2 is the ordinate of the second action position, y3 is the ordinate of the third coordinate, and y4 is the ordinate of the fourth coordinate.
14. The apparatus according to claim 13, characterized in that the second determining submodule comprises:
a second calculating unit, configured to obtain the distance between the first coordinate and the second coordinate on the x-axis using the following formula:
L = |x1 - x2|
wherein x1 is the abscissa of the first coordinate, x2 is the abscissa of the second coordinate, and L is the distance between the first coordinate and the second coordinate on the x-axis;
a third determining unit, configured to determine that the action range of the first subregion and the action range of the second subregion are both a circular region with L as its diameter.
15. The apparatus according to claim 14, characterized in that the first determining module comprises:
a third determining submodule, configured to take the circular region determined with L as its diameter and the first action position as its center as the first subregion;
a fourth determining submodule, configured to take the circular region determined with L as its diameter and the second action position as its center as the second subregion.
16. The apparatus according to claim 11, characterized in that the apparatus further comprises:
an adjusting module, configured to adjust the to-be-processed image region;
and the processing module comprises:
a first processing submodule, configured to perform image processing on the adjusted to-be-processed image region in the preset image processing manner to obtain the processed target image.
17. The apparatus according to claim 16, characterized in that the adjusting module is specifically configured to perform at least one of the following adjustment manners:
moving the to-be-processed image region to a target position;
adjusting the action range of the to-be-processed image region to a target action range;
adjusting the preset action intensity of the to-be-processed image region to a target action intensity.
18. The apparatus according to claim 11, characterized in that the apparatus further comprises:
a second obtaining module, configured to obtain the width and the height of the target image when the detection module detects that no human face region exists in the target image;
a second calculating module, configured to calculate, according to the width and the height of the target image, the action position and the action range of the to-be-processed image region in the target image.
19. The apparatus according to claim 18, characterized in that the to-be-processed image region comprises a first subregion and a second subregion;
the second calculating module comprises:
a first calculating submodule, configured to obtain the abscissa of the first action position of the first subregion using the following formula:
wherein xZ1 is the abscissa of the first action position, W is the width of the target image, and Q1 is a preset first ratio value;
a second calculating submodule, configured to obtain the abscissa of the second action position of the second subregion using the following formula:
wherein xZ2 is the abscissa of the second action position, and Q2 is a preset second ratio value;
a third calculating submodule, configured to obtain the ordinate of the first action position of the first subregion and the ordinate of the second action position of the second subregion using the following formula:
wherein yZ1 is the ordinate of the first action position, yZ2 is the ordinate of the second action position, H is the height of the target image, and Q3 is a preset third ratio value;
a fourth calculating submodule, configured to obtain a calculation length using the following formula:
wherein W is the width of the target image, and Q4 is a preset fourth ratio value;
a fifth determining submodule, configured to determine that the action range of the first subregion and the action range of the second subregion are both a circular region with D as its diameter.
20. The apparatus according to claim 18, characterized in that the processing module comprises:
a processing submodule, configured to perform image processing on the to-be-processed image region in the preset image processing manner with a preset minimum action intensity to obtain the processed target image.
21. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another via the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the method steps of any one of claims 1-10 when executing the program stored in the memory.
22. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the method steps of any one of claims 1-10.
CN201710527387.XA 2017-06-30 2017-06-30 Image processing method and device, electronic equipment and storage medium Active CN107395958B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710527387.XA CN107395958B (en) 2017-06-30 2017-06-30 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710527387.XA CN107395958B (en) 2017-06-30 2017-06-30 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN107395958A CN107395958A (en) 2017-11-24
CN107395958B true CN107395958B (en) 2019-11-15

Family

ID=60335015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710527387.XA Active CN107395958B (en) 2017-06-30 2017-06-30 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN107395958B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108364254B (en) * 2018-03-20 2021-07-23 北京奇虎科技有限公司 Image processing method and device and electronic equipment
CN108389155B (en) * 2018-03-20 2021-10-01 北京奇虎科技有限公司 Image processing method and device and electronic equipment
CN108399599B (en) * 2018-03-20 2021-11-26 北京奇虎科技有限公司 Image processing method and device and electronic equipment
CN108346130B (en) * 2018-03-20 2021-07-23 北京奇虎科技有限公司 Image processing method and device and electronic equipment
CN108447023B (en) * 2018-03-20 2021-08-24 北京奇虎科技有限公司 Image processing method and device and electronic equipment
CN109214317B (en) * 2018-08-22 2021-11-12 北京慕华信息科技有限公司 Information quantity determination method and device
CN111105348A (en) * 2019-12-25 2020-05-05 北京市商汤科技开发有限公司 Image processing method and apparatus, image processing device, and storage medium
CN111476201A (en) * 2020-04-29 2020-07-31 Oppo广东移动通信有限公司 Certificate photo manufacturing method, terminal and storage medium
CN113297641A (en) * 2020-11-26 2021-08-24 阿里巴巴集团控股有限公司 Stamp processing method, content element processing method, device, equipment and medium
CN112966578A (en) * 2021-02-23 2021-06-15 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113591710A (en) * 2021-07-30 2021-11-02 康佳集团股份有限公司 Image processing method, device, terminal and computer readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632165A (en) * 2013-11-28 2014-03-12 小米科技有限责任公司 Picture processing method, device and terminal equipment
CN105512605A (en) * 2015-11-23 2016-04-20 小米科技有限责任公司 Face image processing method and device
CN106067167A (en) * 2016-06-06 2016-11-02 广东欧珀移动通信有限公司 Image processing method and device
CN106210522A (en) * 2016-07-15 2016-12-07 广东欧珀移动通信有限公司 A kind of image processing method, device and mobile terminal
CN106558040A (en) * 2015-09-23 2017-04-05 腾讯科技(深圳)有限公司 Character image treating method and apparatus
CN106846240A (en) * 2015-12-03 2017-06-13 阿里巴巴集团控股有限公司 A kind of method for adjusting fusion material, device and equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4600448B2 (en) * 2007-08-31 2010-12-15 カシオ計算機株式会社 Gradation correction apparatus, gradation correction method, and program

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632165A (en) * 2013-11-28 2014-03-12 小米科技有限责任公司 Picture processing method, device and terminal equipment
CN106558040A (en) * 2015-09-23 2017-04-05 腾讯科技(深圳)有限公司 Character image treating method and apparatus
CN105512605A (en) * 2015-11-23 2016-04-20 小米科技有限责任公司 Face image processing method and device
CN106846240A (en) * 2015-12-03 2017-06-13 阿里巴巴集团控股有限公司 A kind of method for adjusting fusion material, device and equipment
CN106067167A (en) * 2016-06-06 2016-11-02 广东欧珀移动通信有限公司 Image processing method and device
CN106210522A (en) * 2016-07-15 2016-12-07 广东欧珀移动通信有限公司 A kind of image processing method, device and mobile terminal

Also Published As

Publication number Publication date
CN107395958A (en) 2017-11-24

Similar Documents

Publication Publication Date Title
CN107395958B (en) Image processing method and device, electronic equipment and storage medium
JP6951400B2 (en) Equipment and methods for supplying content recognition photo filters
CN110032271B (en) Contrast adjusting device and method, virtual reality equipment and storage medium
US10373244B2 (en) System and method for virtual clothes fitting based on video augmented reality in mobile phone
EP3786892A1 (en) Method, device and apparatus for repositioning in camera orientation tracking process, and storage medium
EP3779883A1 (en) Method and device for repositioning in camera orientation tracking process, and storage medium
CN104331168B (en) Display adjusting method and electronic equipment
CN106971165B (en) A kind of implementation method and device of filter
EP3316080B1 (en) Virtual reality interaction method, apparatus and system
JP6458371B2 (en) Method for obtaining texture data for a three-dimensional model, portable electronic device, and program
CN105094734B (en) A kind of control method and electronic equipment of flexible screen
CN109684980A (en) Automatic marking method and device
US11308655B2 (en) Image synthesis method and apparatus
CN111324250B (en) Three-dimensional image adjusting method, device and equipment and readable storage medium
JP6500355B2 (en) Display device, display program, and display method
CN109064390A (en) A kind of image processing method, image processing apparatus and mobile terminal
CN108830186B (en) Text image content extraction method, device, equipment and storage medium
CN110113534A (en) A kind of image processing method, image processing apparatus and mobile terminal
CN102930278A (en) Human eye sight estimation method and device
CN104050634B (en) Abandon the texture address mode of filter taps
CN111062981A (en) Image processing method, device and storage medium
CN107426490A (en) A kind of photographic method and terminal
CN110503704A (en) Building method, device and the electronic equipment of three components
CN108111747A (en) A kind of image processing method, terminal device and computer-readable medium
CN105590294B (en) A kind of image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201124

Address after: Room 115, area C, 1 / F, building 8, yard 1, yaojiayuan South Road, Chaoyang District, Beijing 100123

Patentee after: Beijing LEMI Technology Co.,Ltd.

Address before: 100085 Beijing City, Haidian District Road 33, two floor East Xiaoying

Patentee before: BEIJING KINGSOFT INTERNET SECURITY SOFTWARE Co.,Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20230831

Address after: 3870A, 3rd Floor, Building 4, Courtyard 49, Badachu Road, Shijingshan District, Beijing, 100144

Patentee after: Beijing Jupiter Technology Co.,Ltd.

Address before: 100123 room 115, area C, 1st floor, building 8, yard 1, yaojiayuan South Road, Chaoyang District, Beijing

Patentee before: Beijing LEMI Technology Co.,Ltd.

TR01 Transfer of patent right