Summary of the invention
Embodiments of the present invention provide an image processing method, an image processing apparatus, an electronic device, and a storage medium, to solve the problem that a user can determine an action point, an action range, and an action intensity only through cumbersome manual operations. The specific technical solutions are as follows:
In a first aspect, an embodiment of the present invention provides an image processing method, the method comprising:
detecting whether a face region exists in a target image;
if the face region exists, obtaining coordinate parameters of the face region according to coordinates, in a preset coordinate system, of pixels in the face region;
calculating, according to the coordinate parameters, an action point and an action range of a to-be-processed image region in the target image;
determining the to-be-processed image region according to the action point and the action range; and
performing, in a preset image processing manner and with a preset action intensity, image processing on the to-be-processed image region to obtain a processed target image.
Optionally, the step of calculating, according to the coordinate parameters, the action point and the action range of the to-be-processed image region in the target image comprises:
determining the action point of the to-be-processed image region of the target image according to a first coordinate, a second coordinate, a third coordinate, and a fourth coordinate, wherein the first coordinate is the coordinate with the smallest abscissa value among the coordinate parameters, the second coordinate is the coordinate with the largest abscissa value among the coordinate parameters, the third coordinate is the coordinate with the largest ordinate value among the coordinates, in the coordinate parameters, that identify the eyebrows, and the fourth coordinate is the coordinate with the smallest ordinate value among the coordinates, in the coordinate parameters, that identify the chin; and
determining the action range of the to-be-processed image region of the target image according to the first coordinate and the second coordinate.
Optionally, the to-be-processed image region comprises a first subregion and a second subregion;
the step of determining the action point of the to-be-processed image region of the target image according to the first coordinate, the second coordinate, the third coordinate, and the fourth coordinate comprises:
determining the abscissa of the first coordinate as the abscissa of a first action point of the first subregion;
determining the abscissa of the second coordinate as the abscissa of a second action point of the second subregion; and
obtaining the ordinate of the first action point of the first subregion and the ordinate of the second action point of the second subregion using the following formula:

y_Z1 = y_Z2 = y_4 - (y_3 - y_4)

wherein y_Z1 is the ordinate of the first action point, y_Z2 is the ordinate of the second action point, y_3 is the ordinate of the third coordinate, and y_4 is the ordinate of the fourth coordinate.
Optionally, the step of determining the action range of the to-be-processed image region of the target image according to the first coordinate and the second coordinate comprises:
obtaining the distance along the x-axis between the first coordinate and the second coordinate using the following formula:

L = |x_1 - x_2|

wherein x_1 is the abscissa of the first coordinate, x_2 is the abscissa of the second coordinate, and L is the distance along the x-axis between the first coordinate and the second coordinate; and
determining that the action range of the first subregion and the action range of the second subregion are both a circular region with L as its diameter.
Optionally, the step of determining the to-be-processed image region according to the action point and the action range comprises:
taking the circular region determined with L as its diameter and the first action point as its center as the first subregion; and
taking the circular region determined with L as its diameter and the second action point as its center as the second subregion.
Optionally, after the step of determining the to-be-processed image region according to the action point and the action range, the method further comprises:
adjusting the to-be-processed image region;
and the step of performing, in the preset image processing manner and with the preset action intensity, image processing on the to-be-processed image region to obtain the processed target image comprises:
performing, in the preset image processing manner, image processing on the adjusted to-be-processed image region to obtain the processed target image.
Optionally, the step of adjusting the to-be-processed image region comprises at least one of the following adjustment manners:
moving the to-be-processed image region to a target position;
adjusting the action range of the to-be-processed image region to a target action range; and
adjusting the preset action intensity of the to-be-processed image region to a target action intensity.
Optionally, before determining the to-be-processed image region according to the action point and the action range, the method further comprises:
if no face region exists in the target image, obtaining the width and the height of the target image; and
calculating the action point and the action range of the to-be-processed image region in the target image according to the width and the height of the target image.
Optionally, the to-be-processed image region comprises a first subregion and a second subregion;
the step of calculating the action point and the action range of the to-be-processed image region in the target image according to the width and the height of the target image comprises:
obtaining the abscissa of a first action point of the first subregion using the following formula:

x_Z1 = W × Q_1

wherein x_Z1 is the abscissa of the first action point, W is the width of the target image, and Q_1 is a preset first ratio value;
obtaining the abscissa of a second action point of the second subregion using the following formula:

x_Z2 = W × Q_2

wherein x_Z2 is the abscissa of the second action point and Q_2 is a preset second ratio value;
obtaining the ordinate of the first action point of the first subregion and the ordinate of the second action point of the second subregion using the following formula:

y_Z1 = y_Z2 = H × Q_3

wherein y_Z1 is the ordinate of the first action point, y_Z2 is the ordinate of the second action point, H is the height of the target image, and Q_3 is a preset third ratio value;
obtaining a calculated length D using the following formula:

D = W × Q_4

wherein W is the width of the target image and Q_4 is a preset fourth ratio value; and
determining that the action range of the first subregion and the action range of the second subregion are both a circular region with D as its diameter.
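A minimal sketch of this no-face default layout, assuming the ratio formulas above are plain products of the image dimensions with the preset ratios (the function name and the sample Q values are illustrative assumptions, not from the source):

```python
def default_layout(width, height, q1=0.25, q2=0.75, q3=0.5, q4=0.25):
    """Default action points and circle diameter when no face region exists.
    q1..q4 stand in for the preset ratio values Q_1..Q_4."""
    x_z1 = width * q1          # abscissa of the first action point
    x_z2 = width * q2          # abscissa of the second action point
    y = height * q3            # shared ordinate of both action points
    d = width * q4             # diameter of both circular action ranges
    return (x_z1, y), (x_z2, y), d
```

For a 200×100 image with the sample ratios, this places the action points at (50, 50) and (150, 50) with a diameter of 50.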
Optionally, the step of performing, in the preset image processing manner and with the preset action intensity, image processing on the to-be-processed image region to obtain the processed target image comprises:
performing, in the preset image processing manner and with a preset minimum action intensity, image processing on the to-be-processed image region to obtain the processed target image.
Optionally, the method further comprises:
receiving an instruction to perform shadow processing on the target image;
selecting a target shadow image from a preset shadow image set;
determining a placement position of the target shadow image in the target image and a transparency of the target shadow image; and
superimposing the target shadow image onto the determined placement position with the determined transparency.
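Superimposing the shadow with a given transparency amounts to ordinary alpha blending; the sketch below (pure Python, illustrative names, images held as lists of RGB rows) shows one way the superimposing step could work:

```python
def blend_pixel(base, shadow, alpha):
    """Alpha-blend an RGB shadow pixel onto an RGB base pixel.
    alpha is the shadow's opacity in [0, 1], i.e. 1 - transparency."""
    return tuple(round(s * alpha + b * (1 - alpha)) for s, b in zip(shadow, base))

def superimpose(target, shadow, top_left, transparency):
    """Blend `shadow` onto `target` in place at `top_left` = (x, y)."""
    alpha = 1.0 - transparency
    x0, y0 = top_left
    for dy, row in enumerate(shadow):
        for dx, px in enumerate(row):
            target[y0 + dy][x0 + dx] = blend_pixel(target[y0 + dy][x0 + dx], px, alpha)
    return target
```

With transparency 0.5, a black shadow pixel over a mid-grey pixel (100, 100, 100) yields (50, 50, 50), while pixels outside the shadow's footprint are untouched.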
In a second aspect, an embodiment of the present invention provides an image processing apparatus, the apparatus comprising:
a detection module, configured to detect whether a face region exists in a target image;
a first obtaining module, configured to, when the detection module detects that a face region exists in the target image, obtain coordinate parameters of the face region according to coordinates, in a preset coordinate system, of pixels in the face region;
a first calculation module, configured to calculate, according to the coordinate parameters, an action point and an action range of a to-be-processed image region in the target image;
a first determining module, configured to determine the to-be-processed image region according to the action point and the action range; and
a processing module, configured to perform, in a preset image processing manner and with a preset action intensity, image processing on the to-be-processed image region to obtain a processed target image.
Optionally, the first calculation module comprises:
a first determining submodule, configured to determine the action point of the to-be-processed image region of the target image according to a first coordinate, a second coordinate, a third coordinate, and a fourth coordinate, wherein the first coordinate is the coordinate with the smallest abscissa value among the coordinate parameters, the second coordinate is the coordinate with the largest abscissa value among the coordinate parameters, the third coordinate is the coordinate with the largest ordinate value among the coordinates, in the coordinate parameters, that identify the eyebrows, and the fourth coordinate is the coordinate with the smallest ordinate value among the coordinates, in the coordinate parameters, that identify the chin; and
a second determining submodule, configured to determine, according to the first coordinate and the second coordinate, the action range of the to-be-processed image region of the target image.
Optionally, the to-be-processed image region comprises a first subregion and a second subregion;
the first determining submodule comprises:
a first determining unit, configured to determine the abscissa of the first coordinate as the abscissa of a first action point of the first subregion;
a second determining unit, configured to determine the abscissa of the second coordinate as the abscissa of a second action point of the second subregion; and
a first calculation unit, configured to obtain the ordinate of the first action point of the first subregion and the ordinate of the second action point of the second subregion using the following formula:

y_Z1 = y_Z2 = y_4 - (y_3 - y_4)

wherein y_Z1 is the ordinate of the first action point, y_Z2 is the ordinate of the second action point, y_3 is the ordinate of the third coordinate, and y_4 is the ordinate of the fourth coordinate.
Optionally, the second determining submodule comprises:
a second calculation unit, configured to obtain the distance along the x-axis between the first coordinate and the second coordinate using the following formula:

L = |x_1 - x_2|

wherein x_1 is the abscissa of the first coordinate, x_2 is the abscissa of the second coordinate, and L is the distance along the x-axis between the first coordinate and the second coordinate; and
a third determining unit, configured to determine that the action range of the first subregion and the action range of the second subregion are both a circular region with L as its diameter.
Optionally, the first determining module comprises:
a third determining submodule, configured to take the circular region determined with L as its diameter and the first action point as its center as the first subregion; and
a fourth determining submodule, configured to take the circular region determined with L as its diameter and the second action point as its center as the second subregion.
Optionally, the apparatus further comprises:
an adjustment module, configured to adjust the to-be-processed image region;
and the processing module comprises:
a first processing submodule, configured to perform, in the preset image processing manner, image processing on the adjusted to-be-processed image region to obtain the processed target image.
Optionally, the adjustment module is specifically configured to perform at least one of the following adjustment manners:
moving the to-be-processed image region to a target position;
adjusting the action range of the to-be-processed image region to a target action range; and
adjusting the preset action intensity of the to-be-processed image region to a target action intensity.
Optionally, the apparatus further comprises:
a second obtaining module, configured to obtain the width and the height of the target image when the detection module detects that no face region exists in the target image; and
a second calculation module, configured to calculate the action point and the action range of the to-be-processed image region in the target image according to the width and the height of the target image.
Optionally, the to-be-processed image region comprises a first subregion and a second subregion;
the second calculation module comprises:
a first calculation submodule, configured to obtain the abscissa of a first action point of the first subregion using the following formula:

x_Z1 = W × Q_1

wherein x_Z1 is the abscissa of the first action point, W is the width of the target image, and Q_1 is a preset first ratio value;
a second calculation submodule, configured to obtain the abscissa of a second action point of the second subregion using the following formula:

x_Z2 = W × Q_2

wherein x_Z2 is the abscissa of the second action point and Q_2 is a preset second ratio value;
a third calculation submodule, configured to obtain the ordinate of the first action point of the first subregion and the ordinate of the second action point of the second subregion using the following formula:

y_Z1 = y_Z2 = H × Q_3

wherein y_Z1 is the ordinate of the first action point, y_Z2 is the ordinate of the second action point, H is the height of the target image, and Q_3 is a preset third ratio value;
a fourth calculation submodule, configured to obtain a calculated length D using the following formula:

D = W × Q_4

wherein W is the width of the target image and Q_4 is a preset fourth ratio value; and
a fifth determining submodule, configured to determine that the action range of the first subregion and the action range of the second subregion are both a circular region with D as its diameter.
Optionally, the processing module comprises:
a processing submodule, configured to perform, in the preset image processing manner and with a preset minimum action intensity, image processing on the to-be-processed image region to obtain the processed target image.
Optionally, the apparatus further comprises:
a receiving module, configured to receive an instruction to perform shadow processing on the target image;
a selection module, configured to select a target shadow image from a preset shadow image set;
a second determining module, configured to determine a placement position of the target shadow image in the target image and a transparency of the target shadow image; and
a superimposing module, configured to superimpose the target shadow image onto the determined placement position with the determined transparency.
In a third aspect, an embodiment of the present invention provides an electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus;
the memory is configured to store a computer program; and
the processor is configured to perform any one of the above image processing methods when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, performs any one of the above image processing methods.
In a fifth aspect, an embodiment of the present invention provides a computer application program which, when run on a computer, causes the computer to perform any one of the image processing methods in the above embodiments.
In the technical solutions provided by the embodiments of the present invention, when a face region is detected in a target image, coordinate parameters of the face region are obtained according to coordinates, in a preset coordinate system, of pixels in the face region; an action point and an action range of a to-be-processed image region in the target image are calculated according to the coordinate parameters; the to-be-processed image region is determined according to the action point and the action range; and image processing is performed on the to-be-processed image region in a preset image processing manner and with a preset action intensity to obtain a processed target image. In the solutions provided by the embodiments of the present invention, the face region in the image serves as a reference for determining the action point and the action range of the to-be-processed image region, and image processing is then performed on the to-be-processed image region with the preset action intensity. This spares the user from determining the action point, the action range, and the action intensity through cumbersome manual operations, thereby simplifying image processing operations and improving user experience.
Specific embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
To solve the problem that a user can determine an action point, an action range, and an action intensity only through cumbersome manual operations, and thereby to simplify image processing operations and improve user experience, the embodiments of the present invention provide an image processing method, an apparatus, an electronic device, and a storage medium.
The image processing method provided by the embodiments of the present invention can be applied to application software on an electronic device, such as application software on a mobile phone, a tablet, or a smart television, where the application software may be photo-retouching software of various types, for example PhotoGrid or Meitu XiuXiu.
The image processing in the embodiments of the present invention may be image processing of types such as breast enhancement or buttock enhancement; the image processing method provided by the embodiments of the present invention is described herein by taking breast-enhancement image processing as an example.
The image processing method provided by an embodiment of the present invention is introduced first below.
As shown in Figure 1, an image processing method provided by an embodiment of the present invention includes the following steps:
S101: detect whether a face region exists in a target image; if so, perform S102.
The target image may be a photo taken by an electronic device, a picture downloaded from the network, or the like, where the electronic device may be a mobile phone, a tablet, a camera, etc. The format of the target image includes, but is not limited to: JPEG (Joint Photographic Experts Group), BMP (Bitmap), PNG (Portable Network Graphics), GIF (Graphics Interchange Format), TIFF (Tag Image File Format), and so on.
In general, images can be classified into landscape images and character images, and in most cases, to make the person in an image look better, a user performs corresponding image processing on a character image. When image processing is performed on a character image, i.e., when the target image is a character image, the target image includes at least one portrait. When the target image includes only one portrait, image processing can be performed on that portrait; when the target image includes multiple portraits, image processing can be performed on each portrait in turn according to a preset rule, where the preset rule may be to process the portraits in order from the left side of the target image to the right, or alternatively from the right side to the left. It can be understood, of course, that the preset rule is not limited to these two.
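The left-to-right ordering rule above can be sketched as sorting the detected face regions by their horizontal position (pure Python; representing each region as an (x, y, w, h) rectangle is my own assumption):

```python
def processing_order(face_regions, left_to_right=True):
    """Order detected face regions for per-portrait processing.
    Each region is assumed to be an (x, y, w, h) rectangle; sorting on x
    gives the left-to-right rule, reverse sorting gives right-to-left."""
    return sorted(face_regions, key=lambda r: r[0], reverse=not left_to_right)
```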
The face region is the region in the target image where a person's face is located; the extent of this region can be identified and extracted by face recognition technology.
S102: obtain coordinate parameters of the face region according to coordinates, in a preset coordinate system, of pixels in the face region.
The preset coordinate system may be a coordinate system that takes the target image as its reference, for example, one that takes the lower edge of the target image as the x-axis and the left edge of the target image as the y-axis.
In the preset coordinate system, the pixels of the target image correspond one-to-one with coordinates; each pixel corresponds to one coordinate point. For example, the pixel at the lower-left corner of the target image is at the coordinate origin, with coordinate (0, 0). Since the pixels of the face region correspond one-to-one with coordinates in the preset coordinate system, the facial contour of the face region, the contours of the facial features it contains, and the parts within each contour can each be represented in the preset coordinate system by corresponding groups of coordinates.
In one implementation, the coordinates corresponding to the pixels of the entire face region can be obtained, including the coordinates corresponding to the pixels of the facial contour and the coordinates corresponding to all pixels within the facial contour. With this implementation, relatively complete coordinates of the face region are available, so that the action point and the action range can be obtained more accurately in subsequent steps.
Further, the coordinates corresponding to the pixels of the facial contour and of the facial-feature contours in the face region may be obtained, where the facial-feature contours include the eyebrow contours, eye contours, nose contour, mouth contour, and ear contours. For a face region, the facial features are representative characteristic parts, so the coordinates corresponding to the pixels of the facial contour and the facial-feature contours can also accurately represent the face region.
Further still, because the extent of the face region can be determined from the eyebrow contours and the facial contour, only the coordinates corresponding to the pixels of the facial contour and the eyebrow contours in the face region may be obtained; optionally, for the eyebrow contours, the coordinates corresponding to the pixels of the contour of either of the two eyebrows may be obtained.
It should be noted that the facial contour includes at least the left and right contours of the face and the chin contour.
S103: calculate, according to the coordinate parameters, the action point and the action range of the to-be-processed image region in the target image.
The action point is the center point of the region on which the user expects to perform image processing, and the action range is the extent of that region; the action point and the action range together determine the to-be-processed image region. For example, with the action point as the center and the action range as the diameter, the to-be-processed image region determined by the action point and the action range is a circular region; with the action point as the intersection of the two diagonals of a square region and the action range as the side length, the to-be-processed image region determined by the action point and the action range is a square region.
The to-be-processed image region is the region on the target image selected for image processing. Moreover, for different types of image processing, the number of separate independent regions included in the to-be-processed image region can differ; for example, for breast-enhancement image processing, the to-be-processed image region may be two separate independent regions.
When the to-be-processed image region is two separate independent regions, the action point of the to-be-processed image region consists of two action points, corresponding respectively to the two separate independent regions; the action range of the to-be-processed image region likewise consists of two action ranges, corresponding respectively to the two regions, where the two action ranges may be set to be the same or different.
In one implementation, from the obtained coordinate parameters of the face region, the coordinate with the smallest abscissa value is determined as the first coordinate, the coordinate with the largest abscissa value as the second coordinate, the coordinate with the largest ordinate value as the third coordinate, and the coordinate with the smallest ordinate value as the fourth coordinate.
Here, the first and second coordinates can be determined from the coordinates identifying the facial contour, the third coordinate from the coordinates identifying the eyebrow contours, and the fourth coordinate from the coordinates of the chin contour within the facial contour.
Specifically, the action point of the to-be-processed image region of the target image can be determined according to the first, second, third, and fourth coordinates, and the action range of the to-be-processed image region of the target image can be determined according to the first and second coordinates.
In one specific implementation, the to-be-processed image region is two separate independent regions: a first subregion and a second subregion, where the action point of the first subregion is the first action point and the action point of the second subregion is the second action point.
Let the first coordinate be (x_1, y_1), the second coordinate (x_2, y_2), the third coordinate (x_3, y_3), the fourth coordinate (x_4, y_4), the first action point (x_Z1, y_Z1), and the second action point (x_Z2, y_Z2).
The abscissa of the first coordinate is determined as the abscissa of the first action point, i.e. x_Z1 = x_1; the abscissa of the second coordinate is determined as the abscissa of the second action point, i.e. x_Z2 = x_2.
The ordinate of the first action point and the ordinate of the second action point can be the same, obtained according to the following formula:

y_Z1 = y_Z2 = y_4 - (y_3 - y_4)

Illustratively, let the first coordinate be (1, 40), the second coordinate (21, 40), the third coordinate (15, 45), and the fourth coordinate (11, 30). Then, according to the above implementation, the abscissa of the first action point is x_Z1 = 1 and the abscissa of the second action point is x_Z2 = 21; the ordinate of the first action point is y_Z1 = y_4 - (y_3 - y_4) = 30 - (45 - 30) = 15, and the ordinate of the second action point is y_Z2 = y_Z1 = 15. In sum, the coordinate of the first action point is (1, 15) and the coordinate of the second action point is (21, 15).
Further, for the specific implementation in which the first subregion and the second subregion are circular regions, the distance along the x-axis between the first coordinate and the second coordinate is obtained using the following formula:

L = |x_1 - x_2|

and the action range of the first subregion and the action range of the second subregion are both determined to be a circular region with L as its diameter.
Illustratively, with the first coordinate (1, 40) and the second coordinate (21, 40), the distance L along the x-axis between the first and second coordinates is 20, so the action range of the first subregion and the action range of the second subregion can both be determined to be a circular region with 20 as its diameter.
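The worked example above can be reproduced with a short sketch (pure Python; the function name is my own):

```python
def action_points_and_diameter(first, second, third, fourth):
    """first/second: leftmost/rightmost face coordinates (x, y);
    third: highest eyebrow coordinate; fourth: lowest chin coordinate.
    Returns the two action points and the shared circle diameter."""
    y = fourth[1] - (third[1] - fourth[1])   # y_Z1 = y_Z2 = y_4 - (y_3 - y_4)
    diameter = abs(first[0] - second[0])     # L = |x_1 - x_2|
    return (first[0], y), (second[0], y), diameter
```

With the coordinates from the example, (1, 40), (21, 40), (15, 45), and (11, 30), this returns the action points (1, 15) and (21, 15) and the diameter 20, matching the text.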
S104: determine the to-be-processed image region according to the action point and the action range.
The to-be-processed image region includes two separate independent regions, the first subregion and the second subregion, and the action range of both the first subregion and the second subregion is a circle with L as its diameter. Therefore, the circular region determined with L as its diameter and the first action point as its center can be taken as the first subregion, and the circular region determined with L as its diameter and the second action point as its center can be taken as the second subregion.
In one implementation, after the to-be-processed image region is determined, the determined to-be-processed image region can be displayed, specifically on the screen of the corresponding electronic device, such as a mobile phone screen, a tablet screen, or a television screen.
In one implementation, after the to-be-processed image region is determined, the to-be-processed image region can also be adjusted, so that if the to-be-processed image region determined from the action point and the action range is inaccurate, the user can further adjust it as needed.
One manner of adjusting the to-be-processed image region is to move it to a target position. Specifically, the user can long-press the to-be-processed image region with a finger on the electronic device's screen; once the press has lasted a preset fixed duration, the user can drag the to-be-processed image region and move it to the target position.
Another manner of adjustment is to adjust the action range of the to-be-processed image region to a target action range. Specifically, the user can long-press the edge of the to-be-processed image region with a finger on the electronic device's screen; once the press has lasted a preset fixed duration, the user can drag the edge to scale the to-be-processed image region, thereby adjusting its action range.
Illustratively, when the to-be-processed image region is a circular region, the user long-presses the circle's edge; dragging toward the center shrinks the action range of the to-be-processed image region, while dragging away from the center enlarges it.
Yet another manner of adjustment is to adjust the preset action intensity of the to-be-processed image region to a target action intensity. Moreover, in one implementation, once the intensity is adjusted to the target action intensity, image processing is performed on the to-be-processed image region with the target action intensity and the processed target image is displayed.
Specifically, when the action intensity is adjusted, a function area appears on the screen containing a progress bar for adjusting the action intensity; adjusting the progress bar adjusts the action intensity. Optionally, when the action intensity is at its minimum, no processing is performed on the image.
It can be understood that the above three adjustment manners can each be applied individually, any two of them can be combined to adjust the to-be-processed image region at the same time, and, of course, all three can be applied to the to-be-processed image region at once.
After the to-be-processed image region is adjusted, image processing is performed on the adjusted to-be-processed image region in the preset image processing manner to obtain the processed target image. For example, if the action intensity of the to-be-processed image region is adjusted to a target action intensity, image processing is performed on the to-be-processed image region with the target action intensity to obtain the processed target image.
S105: according to the preset image processing mode, perform image processing on the image-region to be processed with the preset action intensity, to obtain the processed target image.
The preset image processing mode may be breast-enhancement processing, and the preset action intensity may be custom-set; specifically, the preset action intensity may be set to the action intensity most commonly used by the user, as determined statistically.
After the image-region to be processed is determined, image processing is performed on it with the preset action intensity; specifically, breast-enhancement processing is performed on the image-region to be processed with the preset action intensity, so as to achieve the enhancement effect corresponding to the preset action intensity.
After the image processing is completed, the processed target image is obtained and displayed.
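One common way to realize "processing with a preset action intensity" (a sketch under that assumption, not the disclosed algorithm) is to compute the fully-processed pixel values and blend them with the originals in proportion to the intensity, so that the minimum intensity of 0 leaves the image unchanged:

```python
# Sketch: apply an effect at a given action intensity by blending the
# fully-processed result with the original, per pixel. intensity = 0
# yields the original image; intensity = 1 yields the full effect.

def apply_with_intensity(original, processed, intensity):
    """Blend two equal-length pixel sequences; intensity in [0, 1]."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be in [0, 1]")
    return [o + (p - o) * intensity for o, p in zip(original, processed)]
```

This matches the behavior described for the minimum action intensity, under which no visible processing is applied.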
In the technical solution provided by the embodiment of the present invention, whether a human face region exists in the target image is detected; if so, the coordinate parameters of the human face region are obtained according to the coordinates of the pixels in the human face region in a preset coordinate system; the position and sphere of action of the image-region to be processed in the target image are calculated according to the coordinate parameters; the image-region to be processed is determined according to the position and sphere of action; and image processing is performed on the image-region to be processed with the preset action intensity according to the preset image processing mode, to obtain the processed target image. In the scheme provided by the embodiment of the present invention, the human face region in the image serves as a reference for determining the position and sphere of action of the image-region to be processed, and image processing is then performed on the image-region to be processed with the preset action intensity; the user is thus spared the cumbersome manual operations otherwise needed to determine the position, sphere of action and action intensity, which simplifies the operation of image processing and improves the user experience.
An image processing method provided by the present invention is introduced below with reference to another specific embodiment.
As shown in Fig. 2, an image processing method provided by an embodiment of the present invention includes the following steps:
S201: detect whether a human face region exists in the target image; if so, execute S202; if not, execute S204.
S202: obtain the coordinate parameters of the human face region according to the coordinates of the pixels in the human face region in the preset coordinate system.
S203: calculate the position and sphere of action of the image-region to be processed in the target image according to the coordinate parameters.
In this embodiment, S201–S203 are identical to S101–S103 of the above embodiment and are not repeated here.
S204: obtain the width and height of the target image.
When the target image is square, its width and height are both equal to the side length of the square; in this case, what is acquired is the side length of the target image.
When no human face region exists in the target image, the width and height of the target image are obtained, the acquired width and height being expressed in the preset coordinate system.
S205: calculate the position and sphere of action of the image-region to be processed in the target image according to the width and height of the target image.
The image-region to be processed may include two separate independent regions: a first subregion and a second subregion, wherein the position of the first subregion is the first position and the position of the second subregion is the second position; the spheres of action corresponding to the first subregion and the second subregion may be the same, or may be set to be different.
In one embodiment, the position of the image-region to be processed in the target image may be determined as follows: denote the first position as (xZ1, yZ1) and the second position as (xZ2, yZ2); the abscissa of the first position is then obtained using the following formula:

xZ1 = W / Q1

wherein W is the width of the target image and Q1 is a preset first ratio value.
The abscissa of the second position of the second subregion is obtained using the following formula:

xZ2 = W / Q2

wherein Q2 is a preset second ratio value.
The ordinate of the first position of the first subregion and the ordinate of the second position of the second subregion are obtained using the following formula:

yZ1 = yZ2 = H / Q3

wherein H is the height of the target image and Q3 is a preset third ratio value.
The preset first ratio value, second ratio value and third ratio value are custom-set, and may be set to be different from one another.
Illustratively, the width of the acquired target image is 12 and its height is 15; the preset first ratio value is 4, the second ratio value is 1 and the third ratio value is 3. Then the abscissa of the first position is xZ1 = 12 / 4 = 3, the abscissa of the second position is xZ2 = 12 / 1 = 12, and the ordinate of the first position and of the second position is yZ1 = yZ2 = 15 / 3 = 5. In summary, the coordinate of the first position is (3, 5) and the coordinate of the second position is (12, 5).
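The position formulas above can be reproduced in a few lines of code and checked against the worked example (the function name is illustrative):

```python
# The action-point formulas xZ1 = W/Q1, xZ2 = W/Q2, yZ1 = yZ2 = H/Q3,
# as used when no face region is available.

def action_points(w, h, q1, q2, q3):
    x_z1 = w / q1            # abscissa of the first position
    x_z2 = w / q2            # abscissa of the second position
    y = h / q3               # shared ordinate of both positions
    return (x_z1, y), (x_z2, y)
```

With W = 12, H = 15 and ratio values 4, 1, 3, this yields (3, 5) and (12, 5), matching the example.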
In one embodiment, the sphere of action of the image-region to be processed in the target image may be determined as follows: in the case where the first subregion and the second subregion are circular areas, a calculated length is obtained using the following formula:

D = W / Q4

wherein Q4 is a preset fourth ratio value, which may be custom-set.
The sphere of action of the first subregion and the sphere of action of the second subregion are both determined as: a circular area with D as diameter.
Illustratively, the width of the acquired target image is 12 and the preset fourth ratio value is 3; the calculated length is then D = 12 / 3 = 4, so the sphere of action of the first subregion and of the second subregion is a circular area with 4 as diameter.
After the first position, the second position and the sphere of action are determined, the circular area determined with D as diameter and the first position as center is taken as the first subregion, and the circular area determined with D as diameter and the second position as center is taken as the second subregion. The determined first subregion and second subregion are then displayed.
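Constructing the two circular subregions can be sketched as follows, using the example values above (the helper name `make_circle` is illustrative):

```python
import math

# Sketch: build the two circular subregions with diameter D = W / Q4,
# centered on the first and second positions, with a membership test.

def make_circle(center, diameter):
    cx, cy = center
    radius = diameter / 2.0
    def contains(x, y):
        # A point belongs to the subregion if it lies within the radius.
        return math.hypot(x - cx, y - cy) <= radius
    return contains

d = 12 / 3                          # W = 12, Q4 = 3 -> D = 4
first = make_circle((3, 5), d)      # first subregion, center (3, 5)
second = make_circle((12, 5), d)    # second subregion, center (12, 5)
```

Each circle has radius D / 2 = 2 around its action point.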
In one embodiment, when no human face region exists in the target image, the position and sphere of action of the image-region to be processed are determined from the width and height of the target image. Compared with the embodiment that determines the position and sphere of action from the human face region, the method that determines them from the width and height of the target image may be slightly less accurate; therefore, to reduce the number of user operations, the preset action intensity may be adjusted to the minimum action intensity, i.e., the image-region to be processed is processed with no action intensity at all.
Then, according to the preset image processing mode, image processing is performed on the image-region to be processed with the minimum action intensity, to obtain the processed target image.
S206: determine the image-region to be processed according to the position and sphere of action.
S207: according to the preset image processing mode, perform image processing on the image-region to be processed with the preset action intensity, to obtain the processed target image.
In this embodiment, S206–S207 are identical to S104–S105 of the above embodiment and are not repeated here.
In the technical solution provided by the embodiment of the present invention, whether a human face region exists in the target image is detected; if so, the coordinate parameters of the human face region are obtained according to the coordinates of the pixels in the human face region in a preset coordinate system; the position and sphere of action of the image-region to be processed in the target image are calculated according to the coordinate parameters; the image-region to be processed is determined according to the position and sphere of action; and image processing is performed on the image-region to be processed with the preset action intensity according to the preset image processing mode, to obtain the processed target image. In the scheme provided by the embodiment of the present invention, the human face region in the image serves as a reference for determining the position and sphere of action of the image-region to be processed, and image processing is then performed on the image-region to be processed with the preset action intensity; the user is thus spared the cumbersome manual operations otherwise needed to determine the position, sphere of action and action intensity, which simplifies the operation of image processing and improves the user experience.
An image processing method provided by the present invention is introduced below with reference to another specific embodiment.
As shown in Fig. 3, an image processing method provided by an embodiment of the present invention may further include the following steps:
S301: receive an instruction to perform shadow processing on the target image.
The shadow processing may act on the chest: by setting the transparency of the shadow, the chest is made to appear visually fuller.
It should be noted that the shadow processing and the image processing of the above embodiments (such as breast enhancement and buttock enhancement) can each be performed independently; that is, when image processing is performed on the target image, only shadow processing may be done, or only image processing such as breast or buttock enhancement. Of course, both kinds of image processing may also be performed on the target image, for example performing shadow processing after the breast-enhancement processing.
S302: select a target shadow image from a preset shadow image set.
The shadow image set is preset, and multiple shadow images of different types are stored in the shadow image set; the user can select a shadow image from the set according to demand as the target shadow image. For example, if the shadow image set stores six types of shadow images, the user may select shadow image No. 1 as the target shadow image for shadow processing; of course, the user may also switch to another shadow image.
S303: determine the placement location of the target shadow image in the target image and the transparency of the target shadow image.
When a human face region exists in the target image, the placement location may be determined according to the coordinate parameters of the human face region; this embodiment is similar to the above-described calculation of the position and sphere of action of the image-region to be processed in the target image according to the coordinate parameters, and is not detailed here.
When no human face region exists in the target image, the placement location may be determined according to the width and height of the target image; this embodiment is similar to the above-described calculation of the position and sphere of action of the image-region to be processed in the target image according to the width and height of the target image, and is not detailed here.
In addition, the transparency of the target shadow image may be custom-set in advance, and the user may readjust it according to demand.
S304: superimpose the target shadow image onto the determined placement location with the determined transparency.
The target shadow image is displayed with the determined transparency and superimposed onto the determined placement location, and the target image with the superimposed target shadow image is then displayed.
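Superimposing a shadow with a given transparency is, in effect, standard alpha compositing with opacity alpha = 1 − transparency. The following is a minimal sketch under that assumption (grayscale values, illustrative function name), not the disclosed implementation:

```python
# Sketch of S304: composite a shadow patch onto the target image at the
# determined placement location with the determined transparency.
# alpha = 1 - transparency, so a fully transparent shadow changes nothing.

def superimpose(base, shadow, top, left, transparency):
    """base, shadow: 2-D lists of gray values; returns a new 2-D list."""
    alpha = 1.0 - transparency
    out = [row[:] for row in base]          # leave the input untouched
    for i, srow in enumerate(shadow):
        for j, s in enumerate(srow):
            y, x = top + i, left + j
            out[y][x] = out[y][x] * (1 - alpha) + s * alpha
    return out
```

At transparency 0.5, a dark shadow pixel darkens the underlying pixel halfway, which is the "visually fuller" shading effect described above.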
In the technical solution provided by the embodiment of the present invention, whether a human face region exists in the target image is detected; if so, the coordinate parameters of the human face region are obtained according to the coordinates of the pixels in the human face region in a preset coordinate system; the position and sphere of action of the image-region to be processed in the target image are calculated according to the coordinate parameters; the image-region to be processed is determined according to the position and sphere of action; and image processing is performed on the image-region to be processed with the preset action intensity according to the preset image processing mode, to obtain the processed target image. In the scheme provided by the embodiment of the present invention, the human face region in the image serves as a reference for determining the position and sphere of action of the image-region to be processed, and image processing is then performed on the image-region to be processed with the preset action intensity; the user is thus spared the cumbersome manual operations otherwise needed to determine the position, sphere of action and action intensity, which simplifies the operation of image processing and improves the user experience.
Relative to the above method embodiments, an embodiment of the present invention further provides an image processing apparatus; as shown in Fig. 4, the apparatus includes:
a detection module 410, configured to detect whether a human face region exists in a target image;
a first obtaining module 420, configured to, when the detection module detects that a human face region exists in the target image, obtain the coordinate parameters of the human face region according to the coordinates of the pixels in the human face region in a preset coordinate system;
a first computing module 430, configured to calculate the position and sphere of action of the image-region to be processed in the target image according to the coordinate parameters;
a first determining module 440, configured to determine the image-region to be processed according to the position and sphere of action;
a processing module 450, configured to perform image processing on the image-region to be processed with a preset action intensity according to a preset image processing mode, to obtain the processed target image.
Optionally, in one embodiment, the first computing module 430 includes:
a first determining submodule, configured to determine the position of the image-region to be processed of the target image according to a first coordinate, a second coordinate, a third coordinate and a fourth coordinate, wherein the first coordinate is the coordinate with the smallest abscissa value among the coordinate parameters, the second coordinate is the coordinate with the largest abscissa value among the coordinate parameters, the third coordinate is the coordinate with the largest ordinate value among the coordinate parameters and is used to identify the eyebrow, and the fourth coordinate is the coordinate with the smallest ordinate value among the coordinate parameters and is used to identify the chin;
a second determining submodule, configured to determine the sphere of action of the image-region to be processed of the target image according to the first coordinate and the second coordinate.
Optionally, in one embodiment, the image-region to be processed includes a first subregion and a second subregion, and the first determining submodule includes:
a first determination unit, configured to determine the abscissa of the first coordinate as the abscissa of the first position of the first subregion;
a second determination unit, configured to determine the abscissa of the second coordinate as the abscissa of the second position of the second subregion;
a first computing unit, configured to obtain the ordinate of the first position of the first subregion and the ordinate of the second position of the second subregion using the following formula:

yZ1 = yZ2 = y4 - (y3 - y4)

wherein yZ1 is the ordinate of the first position, yZ2 is the ordinate of the second position, y3 is the ordinate of the third coordinate, and y4 is the ordinate of the fourth coordinate.
Optionally, in one embodiment, the second determining submodule includes:
a second computing unit, configured to obtain the distance between the first coordinate and the second coordinate along the x-axis using the following formula:

L = |x1 - x2|

wherein x1 is the abscissa of the first coordinate, x2 is the abscissa of the second coordinate, and L is the distance between the first coordinate and the second coordinate along the x-axis;
a third determination unit, configured to determine the sphere of action of the first subregion and the sphere of action of the second subregion as: a circular area with L as diameter.
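The two face-based formulas can be combined into one small function (the function name is illustrative): the shared ordinate yZ1 = yZ2 = y4 − (y3 − y4) mirrors the eyebrow-to-chin distance below the chin, and L = |x1 − x2| gives the diameter of both spheres of action.

```python
# Sketch of the first computing module's face-based formulas.

def face_based_positions(x1, x2, y3, y4):
    """x1/x2: smallest/largest abscissa among the coordinate parameters;
    y3: eyebrow ordinate (largest); y4: chin ordinate (smallest)."""
    y = y4 - (y3 - y4)          # shared ordinate of both positions
    diameter = abs(x1 - x2)     # L, diameter of each sphere of action
    return (x1, y), (x2, y), diameter
```

For example, with face extremes x1 = 2, x2 = 6, eyebrow y3 = 9 and chin y4 = 1, both positions lie at ordinate −7 (one face height below the chin) with spheres of action of diameter 4.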
Optionally, in one embodiment, the first determining module 440 includes:
a third determining submodule, configured to take the circular area determined with L as diameter and the first position as center as the first subregion;
a fourth determining submodule, configured to take the circular area determined with L as diameter and the second position as center as the second subregion.
Optionally, in one embodiment, the apparatus further includes:
an adjustment module, configured to adjust the image-region to be processed;
and the processing module 450 includes:
a first processing submodule, configured to perform image processing on the adjusted image-region to be processed according to the preset image processing mode, to obtain the processed target image.
Optionally, in one embodiment, the adjustment module is specifically configured for at least one of the following adjustment modes:
moving the image-region to be processed to a target position;
adjusting the sphere of action of the image-region to be processed to a target action range;
adjusting the preset action intensity of the image-region to be processed to a target action intensity.
On the basis of the above Fig. 4, an embodiment of the present invention further provides another embodiment; as shown in Fig. 5, the apparatus further includes:
a second obtaining module 510, configured to obtain the width and height of the target image when the detection module detects that no human face region exists in the target image;
a second computing module 520, configured to calculate the position and sphere of action of the image-region to be processed in the target image according to the width and height of the target image.
Optionally, in one embodiment, the image-region to be processed includes a first subregion and a second subregion, and the second computing module 520 includes:
a first computational submodule, configured to obtain the abscissa of the first position of the first subregion using the following formula:

xZ1 = W / Q1

wherein xZ1 is the abscissa of the first position, W is the width of the target image, and Q1 is a preset first ratio value;
a second computational submodule, configured to obtain the abscissa of the second position of the second subregion using the following formula:

xZ2 = W / Q2

wherein xZ2 is the abscissa of the second position, and Q2 is a preset second ratio value;
a third computational submodule, configured to obtain the ordinate of the first position of the first subregion and the ordinate of the second position of the second subregion using the following formula:

yZ1 = yZ2 = H / Q3

wherein yZ1 is the ordinate of the first position, yZ2 is the ordinate of the second position, H is the height of the target image, and Q3 is a preset third ratio value;
a fourth computational submodule, configured to obtain a calculated length using the following formula:

D = W / Q4

wherein W is the width of the target image, and Q4 is a preset fourth ratio value;
a fifth determining submodule, configured to determine the sphere of action of the first subregion and the sphere of action of the second subregion as both being: a circular area with D as diameter.
Optionally, in one embodiment, the processing module 450 includes:
a processing submodule, configured to perform image processing on the image-region to be processed with a preset minimum action intensity according to the preset image processing mode, to obtain the processed target image.
An embodiment of the present invention further provides another embodiment; as shown in Fig. 6, the apparatus further includes:
a receiving module 610, configured to receive an instruction to perform shadow processing on the target image;
a selecting module 620, configured to select a target shadow image from a preset shadow image set;
a second determining module 630, configured to determine the placement location of the target shadow image in the target image and the transparency of the target shadow image;
a superimposing module 640, configured to superimpose the target shadow image onto the determined placement location with the determined transparency.
In the technical solution provided by the embodiment of the present invention, whether a human face region exists in the target image is detected; if so, the coordinate parameters of the human face region are obtained according to the coordinates of the pixels in the human face region in a preset coordinate system; the position and sphere of action of the image-region to be processed in the target image are calculated according to the coordinate parameters; the image-region to be processed is determined according to the position and sphere of action; and image processing is performed on the image-region to be processed with the preset action intensity according to the preset image processing mode, to obtain the processed target image. In the scheme provided by the embodiment of the present invention, the human face region in the image serves as a reference for determining the position and sphere of action of the image-region to be processed, and image processing is then performed on the image-region to be processed with the preset action intensity; the user is thus spared the cumbersome manual operations otherwise needed to determine the position, sphere of action and action intensity, which simplifies the operation of image processing and improves the user experience.
As the apparatus embodiments are substantially similar to the method embodiments, their description is relatively simple; for related matters, refer to the corresponding parts of the description of the method embodiments.
An embodiment of the present invention further provides an electronic device; as shown in Fig. 7, it includes a processor 710, a communication interface 720, a memory 730 and a communication bus 740, wherein the processor 710, the communication interface 720 and the memory 730 communicate with each other through the communication bus 740.
The memory 730 is configured to store a computer program.
The processor 710 is configured to, when executing the program stored in the memory 730, realize the following steps:
detecting whether a human face region exists in a target image;
if so, obtaining the coordinate parameters of the human face region according to the coordinates of the pixels in the human face region in a preset coordinate system;
calculating the position and sphere of action of the image-region to be processed in the target image according to the coordinate parameters;
determining the image-region to be processed according to the position and sphere of action;
performing image processing on the image-region to be processed with a preset action intensity according to a preset image processing mode, to obtain the processed target image.
It can be understood that the electronic device can also perform any of the image processing methods of the above embodiments, which are not repeated here.
The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc.; for convenience of representation, only one thick line is used in the figure, which does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the above electronic device and other devices.
The memory may include Random Access Memory (RAM) and may also include Non-Volatile Memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The above processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
An embodiment of the present invention further provides a computer-readable storage medium in which a computer program is stored; when executed by a processor, the computer program performs any of the above image processing methods.
An embodiment of the present invention further provides a computer application program which, when run on a computer, causes the computer to execute any of the image processing methods of the above embodiments.
In the technical solution provided by the embodiment of the present invention, whether a human face region exists in the target image is detected; if so, the coordinate parameters of the human face region are obtained according to the coordinates of the pixels in the human face region in a preset coordinate system; the position and sphere of action of the image-region to be processed in the target image are calculated according to the coordinate parameters; the image-region to be processed is determined according to the position and sphere of action; and image processing is performed on the image-region to be processed with the preset action intensity according to the preset image processing mode, to obtain the processed target image. In the scheme provided by the embodiment of the present invention, the human face region in the image serves as a reference for determining the position and sphere of action of the image-region to be processed, and image processing is then performed on the image-region to be processed with the preset action intensity; the user is thus spared the cumbersome manual operations otherwise needed to determine the position, sphere of action and action intensity, which simplifies the operation of image processing and improves the user experience.
The terms used in the embodiments of the present application are merely for the purpose of describing particular embodiments and are not intended to limit the application. The singular forms "a", "said" and "the" used in the embodiments of the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used herein refers to and includes any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first", "second", "third", etc. may be used in the embodiments of the present application to describe various connection ports, identification information and the like, these connection ports and identification information should not be limited by these terms; the terms are only used to distinguish the connection ports, identification information and the like from one another. For example, without departing from the scope of the embodiments of the present application, a first connection port may also be referred to as a second connection port, and similarly, a second connection port may also be referred to as a first connection port.
Depending on the context, the word "if" as used herein may be construed as "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be construed as "upon determining" or "in response to determining" or "upon detecting (the stated condition or event)" or "in response to detecting (the stated condition or event)".
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and brevity of description, only the division of the above functional modules is given as an example; in practical applications, the above functions may be allocated to different functional modules as needed, i.e., the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the systems, devices and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices and methods may be realized in other ways. For example, the device embodiments described above are merely exemplary; the division into modules or units is only a division by logical function, and there may be other division manners in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Moreover, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The above integrated unit may be realized either in the form of hardware or in the form of a software functional unit.
If the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any changes or substitutions that those familiar with the art can readily conceive within the technical scope disclosed by the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.