
CN108629781B - Hair drawing method - Google Patents


Info

Publication number
CN108629781B
CN108629781B (application CN201810374586.6A)
Authority
CN
China
Prior art keywords
hair
layer
image
gradient
path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810374586.6A
Other languages
Chinese (zh)
Other versions
CN108629781A (en)
Inventor
黄亮
徐滢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Pinguo Technology Co Ltd
Original Assignee
Chengdu Pinguo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Pinguo Technology Co Ltd filed Critical Chengdu Pinguo Technology Co Ltd
Priority to CN201810374586.6A priority Critical patent/CN108629781B/en
Publication of CN108629781A publication Critical patent/CN108629781A/en
Application granted granted Critical
Publication of CN108629781B publication Critical patent/CN108629781B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20 Drawing from basic elements, e.g. lines or circles
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a hair drawing method, which comprises the following steps: performing hair region segmentation on the original image to obtain a hair region probability map; the hair region probability map is a single-channel gray scale map; extracting the trend of the hair from the original image; extracting a hair path from the hair region probability map according to the trend of the hair; drawing hair according to the hair path. The technical scheme provided by the invention can automatically and quickly perform fine hair drawing on the mobile terminal, and meets the requirements of users.

Description

Hair drawing method
Technical Field
The invention relates to the technical field of image processing, in particular to a hair drawing method.
Background
In recent years, mobile phone photography applications have diversified. These applications offer beautification, makeup and similar functions, greatly enrich people's enjoyment of life, and are deeply loved by users. Among them are apps directed specifically at processing portrait hair to achieve further beauty and makeup effects. However, the existing methods for processing portrait hair are simple: either new hair material directly replaces the original hair, or the color of the original hair is simply changed. Such processing is clearly unsatisfactory in terms of beauty and makeup effect and cannot meet the requirements of users.
The way to solve these problems is to draw the hair of the original portrait finely, as required. Algorithms with good hair drawing effect exist in the prior art, for example "Single-View Hair Modeling for Portrait Manipulation", M. Chai, L. Wang, Y. Weng, Y. Yu, B. Guo, ACM Transactions on Graphics, 2012, 31(4): 1-8. However, the algorithm provided by that article is complex, with modeling and processing times measured in minutes, and is not suitable for use on a mobile terminal (such as a mobile phone app).
Disclosure of Invention
The invention aims to provide a hair drawing method which can automatically and quickly draw fine hair on a mobile terminal and meet the requirements of users.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a hair drawing method comprising: performing hair region segmentation on the original image to obtain a hair region probability map; the hair region probability map is a single-channel gray scale map; extracting the trend of the hair from the original image; extracting a hair path from the hair region probability map according to the trend of the hair; drawing hair according to the hair path.
Preferably, the method for extracting the trend of the hair from the original image comprises the following steps: converting the original image into a gray scale map; establishing a Gaussian pyramid for the gray scale image; calculating the gradient of each layer image of the Gaussian pyramid; correcting the gradient of each layer image to obtain a gradient information map of each layer image; fusing the gradient information maps of the layer images layer by layer along the Gaussian pyramid from top to bottom to obtain a fused gradient information map; acquiring a hair direction map and a hair direction confidence map from the fused gradient information map; and, according to the hair direction map and the hair direction confidence map, locally smoothing the hair direction map to obtain the trend of the hair.
Preferably, the gradient of each layer image of the Gaussian pyramid is calculated using a Sobel operator.
Preferably, the method for correcting the gradient of each layer image and obtaining the gradient information map of each layer image comprises: partitioning the gradient of each layer image into blocks; performing singular value decomposition on each block of each layer image to obtain the main direction of each block and the confidence of that main direction; and forming a two-dimensional vector from the main direction and main direction confidence of each block, filling the two-dimensional vector into the corresponding block, and obtaining the gradient information map of each layer image.
Preferably, the step of fusing the gradient information maps of the layer images layer by layer along the Gaussian pyramid from top to bottom to obtain the fused gradient information map comprises: up-sampling the gradient information map of each layer image to obtain an up-sampled gradient information map of each layer image, the up-sampled gradient information map of each layer having the same resolution as the gradient information map of the adjacent lower layer; and, along the Gaussian pyramid from top to bottom, updating the gradient information map of the adjacent lower layer from the up-sampled gradient information map and the gradient information map of the current layer, until the gradient information map of the bottom layer image of the Gaussian pyramid has been updated, thereby acquiring the fused gradient information map.
Preferably, the method for extracting the hair path in the hair region probability map according to the trend of the hair comprises the following steps: for each point in the hair region probability map, extracting path points along the trend of the hair; and forming the path points into a first hair path.
Further, the method also comprises: performing curve fitting on the first hair path to obtain a smooth hair path; and drawing the hair according to the smooth hair path.
Preferably, for each point in the hair region probability map, the method for extracting path points along the trend of the hair is as follows: starting from a preset point, extracting points along the positive direction of the trend of the hair, stopping after a predetermined condition is reached, and acquiring the forward path points; extracting points from the preset point along the opposite direction of the trend of the hair, stopping after the predetermined condition is reached, and acquiring the backward path points; and combining the forward and backward path points to obtain the first hair path. The predetermined condition is: the extracted point is located outside the hair region probability map; or the angle between the direction vectors at two adjacent points exceeds 90 degrees; or the vertical coordinates of two adjacent extracted points have opposite signs.
Preferably, the hair region segmentation is performed on the original image by adopting a deep learning method.
Preferably, the hair is drawn point by point according to the hair path.
According to the hair drawing method provided by the embodiment of the invention, the direction of the hair is extracted at multiple scales using a Gaussian pyramid, so that the hair direction can be extracted effectively even in areas with weak hair texture, which improves the fineness of the hair drawing; meanwhile, a singular value decomposition (SVD) algorithm over non-overlapping blocks is used in the hair direction extraction to extract the main direction of each block of each layer image, which effectively reduces gradient noise and improves the operation speed; in addition, the hair path extraction can quickly obtain an approximate hair path while preventing the tracking from entering a "circle" dead loop; and the further fitting of the hair path dynamically calculates the degree of the fitted curve, so that various hair types are fitted adaptively. Therefore, the hair drawing method provided by the invention can automatically and quickly draw fine hair on a mobile terminal, and meets the requirements of users.
Drawings
FIG. 1 is a flow chart of a method of an embodiment of the present invention;
fig. 2 is a schematic diagram of curve fitting of a path according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings. As shown in FIG. 1, the method of this embodiment comprises the following steps.
Step 101, performing hair region segmentation on an original image to obtain a hair region probability map; the hair region probability map is a single-channel gray scale map;
in this embodiment, the original image is an RGB image IrgbAnd the width of the RGB image is set as width and the height is set as height. Performing hair region segmentation on the RGB image by an image segmentation method to obtain a hair regionThe domain probability map HairMask is a single-channel gray scale map. The image segmentation is a mature technology, and the invention adopts a deep learning method to segment the hair region of the original image. The deep learning based segmentation algorithm can be referred to as follows: long, Jonathan, Evan Shell, and Trevor Darrell, "full compliance networks for magnetic segmentation" Proceedings of the IEEE Conference on Computer Vision and Pattern recognition.2015.
Step 102, extracting the trend of the hair from the original image;
the part is mainly used for modeling the extracted hair area and extracting the specific trend of the hair, and the specific steps are as follows:
a) Convert the original image I_rgb into a gray scale image I_gray;
b) Establish a Gaussian pyramid for the gray scale image I_gray. The Gaussian pyramid is composed of a series of images of different resolutions, which make up the layers of the pyramid. Each layer is denoted PyrI_i; in this embodiment, the Gaussian pyramid has 4 layers, i = 0, 1, 2, 3, where PyrI_0 = I_gray.
c) Calculate the gradient of each layer image of the Gaussian pyramid. In this embodiment, the Sobel operator is applied to each layer PyrI_i of the Gaussian pyramid to calculate the gradient, obtaining GxyPyrI_i, i = 0, 1, 2, 3. Here, GxyPyrI_i(x, y) represents the gradient of the image PyrI_i at point (x, y), which is a two-dimensional vector: GxyPyrI_i(x, y)[0] represents the gradient in the horizontal direction at point (x, y), and GxyPyrI_i(x, y)[1] represents the gradient in the vertical direction at point (x, y).
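As an illustration of steps b) and c), the following numpy-only sketch builds a small pyramid and computes per-layer Sobel gradients. It is not the patent's implementation: plain 2x2 averaging stands in for Gaussian down-sampling, and the function names are ours.

```python
import numpy as np

def sobel_gradients(img):
    """Per-pixel Sobel gradient; returns (h, w, 2) holding (gx, gy) per point,
    mirroring the 2-vector GxyPyrI_i(x, y) of the text."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    g = np.zeros((h, w, 2))
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 3, x:x + 3]
            g[y, x, 0] = np.sum(win * kx)  # horizontal gradient
            g[y, x, 1] = np.sum(win * ky)  # vertical gradient
    return g

def build_pyramid(img, levels=4):
    """Crude pyramid by 2x2 averaging (a stand-in for Gaussian down-sampling)."""
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        prev = pyr[-1]
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        pyr.append(prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr

gray = np.tile(np.arange(16, dtype=np.float64), (16, 1))  # horizontal intensity ramp
pyr = build_pyramid(gray, levels=4)            # PyrI_0 .. PyrI_3
grads = [sobel_gradients(p) for p in pyr]      # GxyPyrI_0 .. GxyPyrI_3
```

On the horizontal ramp, an interior pixel has horizontal gradient 8 (the Sobel weights summed against a step of 1) and vertical gradient 0, as expected for texture running along one axis.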
d) Correct the gradient of each layer image to obtain the gradient information map of each layer image;
in this embodiment, because the hair texture intensities of different hair regions are different, and the gradient noise of the obtained hair region is large due to factors such as illumination, shielding of hair gradation, and the like, and cannot be directly used for judging the hair direction, therefore,the embodiment of the invention adopts the mode of calculating the main direction in blocks to carry out gradient GxyPyrI on each layer of imageiThe correction is carried out by the following specific correction method:
(1) Partition the gradient GxyPyrI_i of each layer image into blocks. Let the width of a block be B_w and the height of a block be B_h; each block is formulated as:

BGxyPyrI_i(j, k) = { GxyPyrI_i(x, y) | B_w*j ≤ x < min(B_w*(j+1), width), B_h*k ≤ y < min(B_h*(k+1), height) }
where width represents the width of the original image, height represents the height of the original image, j is the block index in the horizontal direction, k is the block index in the vertical direction, and

0 ≤ j < ceil(width / B_w), 0 ≤ k < ceil(height / B_h)

ceil is the rounding-up function: ceil(x) represents the smallest integer not less than x.
(2) Perform singular value decomposition (SVD) on each block of each layer image to acquire the main direction of each block and the confidence of that main direction;
in this embodiment, BGxyPyrI is performed for each blocki(j, k) carrying out SVD to obtain a characteristic value S of the blocki(j, k) and a feature vector V corresponding to the feature valuei(j, k) wherein Si(j,k)[0]Representing a first characteristic value, Si(j,k)[1]Represents a second characteristic value, and Si(j,k)[0]≥Si(j,k)[1], Vi(j,k)[0]For the principal direction of the block (principal direction being defined as the eigenvector corresponding to the larger one of the eigenvalues), the principal direction V of the block is calculatedi(j,k)[0]Degree of repose of
Figure BDA0001639571260000062
(3) Form a two-dimensional vector from the main direction and the main direction confidence of each block, fill the two-dimensional vector into the corresponding block, and obtain the gradient information map of each layer image.
In this embodiment, the vector (V_i(j, k)[0], R_i(j, k)) is filled into the block to obtain the corrected gradient information map, i.e. the gradient information map of each layer image:

VPyrI_i(x, y)[0] = V_i(floor(x / B_w), floor(y / B_h))[0],
VPyrI_i(x, y)[1] = R_i(floor(x / B_w), floor(y / B_h))

where floor denotes the rounding-down function (floor(x) represents the largest integer not greater than x), VPyrI_i(x, y)[0] represents the corrected gradient direction at coordinate (x, y), and VPyrI_i(x, y)[1] represents the confidence of the gradient at coordinate (x, y). To improve computational efficiency, the width and height of each block are taken as B_w = B_h = 16; for example, if GxyPyrI_i has a resolution of 640x640, it is divided into 40x40 blocks.
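The block-wise SVD of step d) can be sketched as follows. Note that the confidence formula (S0 - S1) / (S0 + S1) is an assumption on our part: the patent's own expression for R_i(j, k) appears only as an unrendered image, and this is the usual coherence measure for orientation fields.

```python
import numpy as np

def block_direction(grad_block):
    """Main direction and confidence of one gradient block.

    grad_block: (h, w, 2) array of (gx, gy) vectors.  Stacking them as the
    rows of a matrix and taking its SVD gives singular values S[0] >= S[1]
    and right singular vectors; the first right singular vector is the
    block's main direction V_i(j, k)[0].
    """
    G = grad_block.reshape(-1, 2)
    _, S, Vt = np.linalg.svd(G, full_matrices=False)
    main_dir = Vt[0]                                   # V_i(j, k)[0]
    conf = (S[0] - S[1]) / (S[0] + S[1] + 1e-12)       # R_i(j, k), assumed formula
    return main_dir, conf

# Fully coherent block (all gradients along +x): confidence close to 1.
coherent = np.zeros((16, 16, 2))
coherent[..., 0] = 1.0
d1, r1 = block_direction(coherent)

# Mixed block (half the gradients along +x, half along +y): confidence near 0.
mixed = np.zeros((16, 16, 2))
mixed[::2, :, 0] = 1.0
mixed[1::2, :, 1] = 1.0
d2, r2 = block_direction(mixed)
```

The coherent block yields confidence near 1 and the mixed block near 0, which is the behaviour the correction step relies on to down-weight noisy regions.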
e) Fuse the gradient information maps of the layer images layer by layer along the Gaussian pyramid from top to bottom to obtain a fused gradient information map;
Specifically, the following operations are performed sequentially in the order i = 3, 2, 1:
(1) Up-sample the gradient information map VPyrI_i of each layer image to obtain the up-sampled gradient information map VPyrUpI_i of each layer image. The up-sampled gradient information map of each layer has the same resolution as the gradient information map of the adjacent lower layer, i.e. the resolution of VPyrUpI_i is the same as that of VPyrI_{i-1};
(2) Along the Gaussian pyramid from top to bottom, update the gradient information map of the adjacent lower layer from the up-sampled gradient information map and the gradient information map of the current layer, until the gradient information map of the bottom layer image of the Gaussian pyramid has been updated, and acquire the fused gradient information map.
In this embodiment, the following formulas are used for updating and fusing:
(2a) Correct the gradient between adjacent layers of the Gaussian pyramid:

VPyrI_i(x, y)[0] = VPyrI_i(x, y)[0] * sign(VPyrI_i(x, y)[0] · VPyrUpI_i(x, y)[0])

where · represents the dot product, and sign is the sign function defined as sign(t) = 1 if t ≥ 0, and -1 otherwise.
(2b) Updating gradient information:
Calculate the weighting coefficient alpha(x, y) between the pyramid layers:

[equation image: definition of the weighting coefficient alpha(x, y)]
And linearly weight the information of the two pyramid layers with the weighting coefficient:

VPyrI_{i-1}(x, y)[0] = (1 - alpha(x, y)) * VPyrUpI_i(x, y)[0] + alpha(x, y) * VPyrI_i(x, y)[0]
Then normalize the result to obtain a unit vector:

VPyrI_{i-1}(x, y)[0] = VPyrI_{i-1}(x, y)[0] / |VPyrI_{i-1}(x, y)[0]|
This layer-by-layer pyramid fusion takes gradient information at different scales into account, and the gradient of areas with weak hair texture can be effectively recovered.
(2c) Update the confidence information:

VPyrI_{i-1}(x, y)[1] = 0.5 * VPyrUpI_i(x, y)[1] + 0.5 * VPyrI_i(x, y)[1]
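The sub-steps (2a)-(2c) can be sketched in one function. The weighting coefficient alpha is assumed here to be the confidence ratio of the finer layer, since the patent's defining formula for alpha is an unrendered image; the sign correction, linear blend, normalisation and confidence average follow the formulas above.

```python
import numpy as np

def fuse_layers(v_coarse_up, v_fine):
    """One top-down fusion step between adjacent pyramid layers.

    Both inputs are (h, w, 3) maps holding (dir_x, dir_y, confidence),
    with v_coarse_up already up-sampled to the finer layer's resolution.
    """
    d_up, r_up = v_coarse_up[..., :2], v_coarse_up[..., 2]
    d_fi, r_fi = v_fine[..., :2], v_fine[..., 2]

    # (2a) sign correction: flip the finer direction where it opposes the coarser one
    dot = np.sum(d_fi * d_up, axis=-1, keepdims=True)
    d_fi = d_fi * np.where(dot >= 0, 1.0, -1.0)

    # (2b) linear blend with an assumed confidence-based alpha, then normalise
    alpha = (r_fi / (r_fi + r_up + 1e-12))[..., None]
    d = (1.0 - alpha) * d_up + alpha * d_fi
    d = d / (np.linalg.norm(d, axis=-1, keepdims=True) + 1e-12)

    # (2c) confidence update: plain average of the two layers
    r = 0.5 * r_up + 0.5 * r_fi
    return np.concatenate([d, r[..., None]], axis=-1)

coarse = np.zeros((4, 4, 3)); coarse[..., 0] = 1.0; coarse[..., 2] = 0.8
fine = np.zeros((4, 4, 3)); fine[..., 0] = -1.0; fine[..., 2] = 0.4  # opposite sign
fused = fuse_layers(coarse, fine)
```

With the fine layer pointing exactly opposite the coarse one, the sign correction flips it before blending, so the fused direction is (1, 0) and the fused confidence is the average 0.6.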
f) According to the fused gradient information map VPyrI_0, acquire the hair direction map V_hair and the hair direction confidence map R_hair:

V_hair(x, y) = ⊥ VPyrI_0(x, y)[0],
R_hair(x, y) = VPyrI_0(x, y)[1]

where the operator ⊥ denotes taking the perpendicular vector, e.g. ⊥ (a, b) = (-b, a), the vector perpendicular to (a, b).
g) According to the hair direction map and the hair direction confidence map, locally smooth the hair direction map to obtain the trend of the hair.
That is, a confidence-weighted average of the vectors within the range of radius r is taken; the hair direction at coordinate (x, y) is updated as:

V_hair(x, y) = normalize( sum over m, n in [-r, r] of R_hair(x+m, y+n) * V_hair(x+m, y+n) )

where r is the smoothing radius (5 in this embodiment), and the value ranges of m and n are [-r, r].
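A brute-force reading of this smoothing step, assuming the weights in the (unrendered) averaging formula are the confidences R_hair:

```python
import numpy as np

def smooth_directions(dirs, conf, r=5):
    """Local smoothing of the hair direction map.

    dirs: (h, w, 2) unit direction vectors; conf: (h, w) confidences.
    Each pixel's direction becomes the normalised sum of conf * dirs over
    the (2r+1) x (2r+1) window, clipped at the image border.
    """
    h, w, _ = dirs.shape
    out = np.zeros_like(dirs)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            acc = np.sum(conf[y0:y1, x0:x1, None] * dirs[y0:y1, x0:x1], axis=(0, 1))
            n = np.linalg.norm(acc)
            out[y, x] = acc / n if n > 1e-12 else dirs[y, x]
    return out

field = np.zeros((8, 8, 2)); field[..., 1] = 1.0   # uniform vertical field
sm = smooth_directions(field, np.ones((8, 8)), r=2)
```

A uniform field is a fixed point of the smoothing, which is a quick sanity check that the windowed averaging and normalisation are consistent.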
Step 103, extracting hair paths in the hair region probability map HairMask according to the trend V_hair of the hair. Specifically, for each point in the hair region probability map, path points are extracted along the trend of the hair with a preset step; the path points form a first hair path. The specific steps are described in detail below:
a) Starting from a preset point, extract points with a preset step length step along the positive direction of the trend of the hair, stop after a predetermined condition is reached, and acquire the forward path points; from the same preset point, extract points with the preset step length along the opposite direction of the trend of the hair, stop after the predetermined condition is reached, and acquire the backward path points; combine the forward and backward path points to obtain the first hair path;
Scan each position (x, y) of the hair region probability map HairMask and start path extraction with a predetermined probability p. Let the current point be L_i = (x_i, y_i); in particular, L_0 = (x_0, y_0) = (x, y). Move forward along V_hair(x_i, y_i) with step length step, i.e. L_{i+1} = L_i + step * V_hair(x_i, y_i); repeat until a predetermined condition is reached, and stop at a point L_n. Then, starting again from L_0, move along the opposite direction of V_hair(x_0, y_0) with step length step, i.e. L_{i-1} = L_i - step * V_hair(x_i, y_i); stop after the predetermined condition is reached, and let the point at which tracing stops be L_m. Combining the path points of the forward and backward directions forms a complete path L, i.e. L_m, L_{m+1}, L_{m+2}, ..., L_0, L_1, ..., L_n. Here the "predetermined condition" means that one of the following is satisfied: the extracted point L_i is located outside the hair region probability map, which keeps the hair path inside the hair region; or the angle between the direction vectors V_hair(x_i, y_i) and V_hair(x_{i+1}, y_{i+1}) at two adjacent points exceeds 90 degrees, a condition which assumes that the trend of the hair cannot change abruptly (abrupt changes mostly occur where hairs of different layers meet); or the vertical coordinates of two adjacent extracted points L_i and L_{i+1} have opposite signs, a condition which prevents the tracking algorithm from entering a "circle" dead loop. The probability p is a settable value used to control the density of the extracted paths, with p in the range [0, 1]. It should be noted that p applies to each preset point: at a given preset point, a path either is extracted along the hair trend, which happens with probability p, or is not, and the algorithm skips directly to the next preset point.
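The bidirectional tracing with its stopping conditions can be sketched as follows. This is illustrative only: the step length is folded into the unit direction vectors (step = 1), and the third, somewhat ambiguous guard is read here as a sign flip of the vertical component of the direction between consecutive points.

```python
import numpy as np

def trace_path(mask, vhair, start, max_steps=10000):
    """Bidirectional tracing of one hair path.

    mask: (h, w) probability map, nonzero = inside the hair region;
    vhair: (h, w, 2) unit direction field indexed [y, x] -> (dx, dy);
    start: (x, y) preset point.
    """
    h, w = mask.shape

    def walk(sgn):
        pts, p, prev_v = [], np.array(start, dtype=np.float64), None
        for _ in range(max_steps):
            xi, yi = int(round(p[0])), int(round(p[1]))
            if not (0 <= xi < w and 0 <= yi < h) or mask[yi, xi] == 0:
                break                      # left the hair region
            v = sgn * vhair[yi, xi]
            if prev_v is not None:
                if np.dot(v, prev_v) < 0:  # turn of more than 90 degrees
                    break
                if prev_v[1] * v[1] < 0:   # vertical component flips sign
                    break
            pts.append(p.copy())
            p = p + v
            prev_v = v
        return pts

    fwd, bwd = walk(+1.0), walk(-1.0)
    # reverse the backward half and drop its duplicate of the start point
    return (bwd[::-1][:-1] + fwd) if bwd else fwd

mask = np.ones((10, 10))
vhair = np.zeros((10, 10, 2)); vhair[..., 0] = 1.0   # hair runs along +x
path = trace_path(mask, vhair, (5, 5))
```

On a uniform horizontal field the trace spans the whole row through the start point, which matches the L_m, ..., L_0, ..., L_n composition of the text.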
b) To make the drawn hair as smooth as possible, curve fitting must be performed on the extracted path. The curve fitting process is as follows:
1) Let a path be L with n points in total, and let L_i, i = 0, 1, 2, ..., n-1, denote the points on the path. Set the highest degree Q of the curve to be fitted: let the initial Q be 3, and traverse the L_i starting from i = 0; whenever the traversal crosses the straight line L_0 L_{n-1} from one side to the other, increase Q by 1, as shown in fig. 2.
2) Calculate the position parameter of each point on the curve to be fitted:

T_0 = 0, T_i = T_{i-1} + |L_i - L_{i-1}|, i = 1, ..., n-1

Obviously, T_{n-1} can be regarded approximately as the total length of the path. After normalization, the following is obtained:

t_i = T_i / T_{n-1}, i = 0, 1, ..., n-1
3) Fit with a polynomial curve, the equation of the polynomial curve being

P(t) = a_0 + a_1 * t + a_2 * t^2 + ... + a_Q * t^Q

where each a_j is a two-dimensional vector representing the coefficients to be found, j = 0, 1, ..., Q.
The fitting error is

E = sum over i = 0, ..., n-1 of |P(t_i) - L_i|^2

and the parameters a_j are obtained by minimizing E.
c) Obtaining the smooth hair path

Set the number of points of the new hair path to m = alpha * T_{n-1}, alpha in [1, 10]. The larger alpha is, the more points there are and the smoother the finally drawn curve; but more points also require more computation. As a trade-off, alpha = 1.5 is used here, and the smooth hair path is
L_new(s) = P(s / (m - 1)), s = 0, 1, ..., m - 1
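The arc-length parameterisation, least-squares fit and resampling can be sketched with numpy's polyfit. The adaptive increase of Q from the line-crossing test is omitted for brevity, and the uniform resampling of the fitted polynomial is our reading of the (unrendered) final formula.

```python
import numpy as np

def fit_and_resample(path, Q=3, alpha=1.5):
    """Polynomial fit of a traced path over normalised arc length, then resampling.

    path: (n, 2) points.  Each coordinate is fitted with a degree-Q
    polynomial by least squares (np.polyfit minimises the squared error E),
    and the curve is resampled at m = alpha * T_{n-1} points.
    """
    path = np.asarray(path, dtype=np.float64)
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    T = np.concatenate([[0.0], np.cumsum(seg)])  # T_i: cumulative length up to point i
    t = T / T[-1]                                # normalised parameter t_i
    cx = np.polyfit(t, path[:, 0], Q)            # coefficients a_j, x components
    cy = np.polyfit(t, path[:, 1], Q)            # coefficients a_j, y components
    m = max(2, int(alpha * T[-1]))               # number of resampled points
    ts = np.linspace(0.0, 1.0, m)
    return np.stack([np.polyval(cx, ts), np.polyval(cy, ts)], axis=1)

# Straight 10-pixel segment: the fitted curve should reproduce the endpoints.
pts = np.stack([np.arange(11, dtype=np.float64), np.zeros(11)], axis=1)
smooth = fit_and_resample(pts)
```

For this straight segment the total length is 10, so alpha = 1.5 yields 15 resampled points, and the cubic fit of linear data is exact up to numerical error.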
Step 104, drawing the hair according to the smooth hair path.
In this step, the hair is drawn point by point along the smooth hair path L_new: for each point in L_new, a brush is used to draw a point at the corresponding position in the original image I_rgb, so that one smooth hair is formed. All hairs are drawn in this way.
According to the hair drawing method provided by the embodiment of the invention, the direction of the hair is extracted at multiple scales using a Gaussian pyramid, so that the hair direction can be extracted effectively even in areas with weak hair texture, which improves the fineness of the hair drawing; meanwhile, a singular value decomposition (SVD) algorithm over non-overlapping blocks is used in the hair direction extraction to extract the main direction of each block of each layer image, which effectively reduces gradient noise and improves the operation speed; in addition, the hair path extraction can quickly obtain an approximate hair path while preventing the tracking from entering a "circle" dead loop; and the further fitting of the hair path dynamically calculates the degree of the fitted curve, so that various hair types are fitted adaptively. In conclusion, the hair drawing method provided by the invention can automatically and quickly draw fine hair on a mobile terminal, and meets the requirements of users. Experiments show that the technical scheme provided by the invention can finish the automatic drawing of the hair in less than 3 seconds on a mobile phone.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention.

Claims (4)

1. A method of hair drawing, comprising the steps of:
firstly, performing hair region segmentation on an original image to obtain a hair region probability map; the hair region probability map is a single-channel gray scale map;
step two, extracting the trend of the hair from the original image:
(a) converting the original image into a grey-scale map;
(b) establishing a Gaussian pyramid for the gray level image;
(c) calculating the gradient GxyPyrI_i, i = 0, 1, 2, 3, of each layer image of the Gaussian pyramid by adopting a Sobel operator;
wherein GxyPyrI_i(x, y) represents the gradient of the image PyrI_i at point (x, y), which is a two-dimensional vector, GxyPyrI_i(x, y)[0] representing the gradient in the horizontal direction at point (x, y) and GxyPyrI_i(x, y)[1] representing the gradient in the vertical direction at point (x, y);
(d) correcting the gradient of each layer of image to obtain a gradient information map of each layer of image, wherein the specific correction method comprises the following steps:
(1) partitioning the gradient GxyPyrI_i of each layer image into blocks; letting the width of a block be B_w and the height of a block be B_h, each block is formulated as:
BGxyPyrI_i(j, k) = { GxyPyrI_i(x, y) | B_w*j ≤ x < min(B_w*(j+1), width), B_h*k ≤ y < min(B_h*(k+1), height) }
wherein width represents the width of the original image, height represents the height of the original image, j is the block index in the horizontal direction, k is the block index in the vertical direction, and
0 ≤ j < ceil(width / B_w), 0 ≤ k < ceil(height / B_h),
ceil being the rounding-up function;
(2) performing singular value decomposition on each block of each layer image to obtain the singular values S_i(j, k) of the block and the corresponding singular vectors V_i(j, k), wherein S_i(j, k)[0] represents the first singular value, S_i(j, k)[1] represents the second singular value, S_i(j, k)[0] ≥ S_i(j, k)[1], and V_i(j, k)[0] is the main direction of the block; and calculating the confidence of the main direction V_i(j, k)[0]:
R_i(j, k) = (S_i(j, k)[0] - S_i(j, k)[1]) / (S_i(j, k)[0] + S_i(j, k)[1]);
(3) forming the main direction V_i(j, k)[0] and the confidence R_i(j, k) of each block into a two-dimensional vector, and filling the two-dimensional vector into the corresponding block to obtain the corrected gradient information map:
VPyrI_i(x, y)[0] = V_i(floor(x / B_w), floor(y / B_h))[0],
VPyrI_i(x, y)[1] = R_i(floor(x / B_w), floor(y / B_h)),
wherein floor denotes the rounding-down function, VPyrI_i(x, y)[0] represents the gradient direction at coordinate (x, y), VPyrI_i(x, y)[1] represents the confidence of the gradient at coordinate (x, y), and the width and height of each block are taken as B_w = B_h = 16;
(e) And fusing the gradient information graph of each layer of image layer by layer along the direction of the Gaussian pyramid from top to bottom to obtain a fused gradient information graph, wherein the method comprises the following steps:
(1) up-sampling the gradient information map VPyrI_i of each layer image to obtain an up-sampled gradient information map VPyrUpI_i of each layer image; the up-sampled gradient information map of each layer has the same resolution as the gradient information map of the adjacent lower layer, i.e. the resolution of VPyrUpI_i is the same as that of VPyrI_{i-1};
(2) updating the gradient information graph of the next adjacent layer of images according to the up-sampling gradient information graph and the gradient information graph of one layer of images from top to bottom along the direction of the Gaussian pyramid until the gradient information graph of the bottom layer of images of the Gaussian pyramid is updated, acquiring the fused gradient information graph, and updating and fusing by adopting the following formula:
(2a) correcting the gradient between adjacent layers of the Gaussian pyramid: VPyrI_i(x, y)[0] = VPyrI_i(x, y)[0] * sign(VPyrI_i(x, y)[0] · VPyrUpI_i(x, y)[0]), wherein · denotes the dot product, and sign is the sign function defined as
sign(t) = 1 if t ≥ 0, and -1 otherwise;
(2b) Updating gradient information:
calculating the weighting coefficient alpha(x, y) between the pyramid layers:
[equation image: definition of the weighting coefficient alpha(x, y)]
and linearly weighting the information of the two pyramid layers with the weighting coefficient:
VPyrI_{i-1}(x, y)[0] = (1 - alpha(x, y)) * VPyrUpI_i(x, y)[0] + alpha(x, y) * VPyrI_i(x, y)[0]
then normalizing the result to obtain a unit vector:
VPyrI_{i-1}(x, y)[0] = VPyrI_{i-1}(x, y)[0] / |VPyrI_{i-1}(x, y)[0]|;
(2c) updating the confidence information:
VPyrI_{i-1}(x, y)[1] = 0.5 * VPyrUpI_i(x, y)[1] + 0.5 * VPyrI_i(x, y)[1];
(f) according to the fused gradient information map VPyrI_0, acquiring a hair direction map and a hair direction confidence map:
V_hair(x, y) = ⊥ VPyrI_0(x, y)[0],
R_hair(x, y) = VPyrI_0(x, y)[1],
wherein the operator ⊥ denotes taking the perpendicular vector;
(g) according to the hair direction map and the hair direction confidence map, locally smoothing the hair direction map to obtain the trend of the hair, wherein the method comprises:
performing a confidence-weighted average of the vectors within the range of radius r, the hair direction at coordinate (x, y) being updated as:
V_hair(x, y) = normalize( sum over m, n in [-r, r] of R_hair(x+m, y+n) * V_hair(x+m, y+n) ),
wherein r is the smoothing radius, and the value ranges of m and n are [-r, r];
step three, extracting a hair path in the hair region probability map according to the trend of the hair, wherein the step comprises the following steps of:
for each point in the hair region probability map, extracting path points along the trend of the hair:
(a) starting from a preset point, extracting points along the positive direction of the trend of the hair, stopping after a preset condition is reached, and acquiring positive direction path points;
(b) extracting points along the opposite direction of the trend of the hair from the preset point, stopping after the preset condition is reached, and acquiring path points in the opposite direction;
(c) combining the forward direction path points and the reverse direction path points to obtain a first hair path;
the predetermined condition is: the extracted point is located outside the hair region probability map; or the angle between the direction vectors at two adjacent points exceeds 90 degrees; or the vertical coordinates of two adjacent extracted points have opposite signs;
and step four, drawing the hair according to the hair path.
2. The hair drawing method according to claim 1, further comprising:
performing curve fitting on the first hair path to obtain a smooth hair path;
drawing hair according to the smoothed hair path.
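Claim 2's curve-fitting step could look like the following sketch. Fitting each coordinate against the point index with a low-degree polynomial is one illustrative choice of curve model; the claim does not fix which fitting method is used.

```python
import numpy as np

def fit_path(points, degree=3):
    """Fit each coordinate of a traced path against the point index
    with a low-degree polynomial, yielding a smoothed path (claim 2).
    An exactly polynomial path (e.g. a straight line) is recovered
    unchanged up to numerical precision."""
    pts = np.asarray(points, dtype=float)
    t = np.arange(len(pts))
    fitted = []
    for dim in range(pts.shape[1]):
        coeffs = np.polyfit(t, pts[:, dim], deg=min(degree, len(pts) - 1))
        fitted.append(np.polyval(coeffs, t))
    return np.stack(fitted, axis=1)
```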
3. The hair drawing method according to claim 1, wherein the hair region segmentation is performed on the original image by a deep learning method.
4. The hair drawing method according to claim 1, wherein the hair is drawn in a dotted manner according to the hair path.
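Claim 4's dotted drawing can be sketched as stamping every n-th path point onto a raster canvas; the `spacing` and `color` parameters are illustrative and not part of the claim.

```python
import numpy as np

def draw_dotted(canvas, path, color=255, spacing=2):
    """Render a hair path as dots on a grayscale canvas (a sketch of
    claim 4): stamp every `spacing`-th path point, skipping points
    that fall outside the canvas."""
    h, w = canvas.shape
    for i, (y, x) in enumerate(path):
        if i % spacing:
            continue  # skip between-dot points
        yi, xi = int(round(y)), int(round(x))
        if 0 <= yi < h and 0 <= xi < w:
            canvas[yi, xi] = color
    return canvas
```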
CN201810374586.6A 2018-04-24 2018-04-24 Hair drawing method Active CN108629781B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810374586.6A CN108629781B (en) 2018-04-24 2018-04-24 Hair drawing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810374586.6A CN108629781B (en) 2018-04-24 2018-04-24 Hair drawing method

Publications (2)

Publication Number Publication Date
CN108629781A CN108629781A (en) 2018-10-09
CN108629781B true CN108629781B (en) 2022-04-22

Family

ID=63694599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810374586.6A Active CN108629781B (en) 2018-04-24 2018-04-24 Hair drawing method

Country Status (1)

Country Link
CN (1) CN108629781B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112598610B (en) * 2020-12-11 2024-08-02 杭州海康机器人股份有限公司 Depth image obtaining method and device, electronic equipment and storage medium
CN114565507A (en) * 2022-01-17 2022-05-31 北京新氧科技有限公司 Hair processing method and device, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800129A (en) * 2012-06-20 2012-11-28 浙江大学 Hair modeling and portrait editing method based on single image
CN103035030A (en) * 2012-12-10 2013-04-10 西北大学 Hair model modeling method
CN103093488A (en) * 2013-02-02 2013-05-08 浙江大学 Virtual haircut interpolation and tweening animation producing method
CN103606186A (en) * 2013-02-02 2014-02-26 浙江大学 Virtual hair style modeling method of images and videos
CN103927526A (en) * 2014-04-30 2014-07-16 长安大学 Vehicle detecting method based on Gauss difference multi-scale edge fusion
CN105844706A (en) * 2016-04-19 2016-08-10 浙江大学 Full-automatic three-dimensional hair modeling method based on single image
CN106611160A (en) * 2016-12-15 2017-05-03 中山大学 CNN (Convolutional Neural Network) based image hair identification method and device
CN107451555A (en) * 2017-07-27 2017-12-08 安徽慧视金瞳科技有限公司 A kind of hair based on gradient direction divides to determination methods
CN107886516A (en) * 2017-11-30 2018-04-06 厦门美图之家科技有限公司 The method and computing device that hair moves towards in a kind of calculating portrait

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9262857B2 (en) * 2013-01-16 2016-02-16 Disney Enterprises, Inc. Multi-linear dynamic hair or clothing model with efficient collision handling
US9117279B2 (en) * 2013-03-13 2015-08-25 Microsoft Technology Licensing, Llc Hair surface reconstruction from wide-baseline camera arrays

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
AutoHair: Fully Automatic Hair Modeling from A Single Image; Menglei Chai et al.; ACM Transactions on Graphics; 2016-07-31; Vol. 35, No. 4; pp. 1-12 *
Multiscale Principal Components Analysis for Image Local Orientation Estimation; XiaoGuang Feng et al.; Conference Record of the Thirty-Sixth Asilomar Conference on Signals, Systems and Computers; 2003-05-07; pp. 478-482 *
Image Local Orientation Estimation Algorithm Based on Multiscale Principal Component Analysis; Liao Yu; Journal of Computer Applications; 2012-05-01; Vol. 32, No. 5; pp. 1296-1299, sections 1-3 *

Also Published As

Publication number Publication date
CN108629781A (en) 2018-10-09

Similar Documents

Publication Publication Date Title
EP4083904A1 (en) Method and apparatus for beautifying selfies
KR101198322B1 (en) Method and system for recognizing facial expressions
CN108932536A (en) Human face posture method for reconstructing based on deep neural network
US20150278589A1 (en) Image Processor with Static Hand Pose Recognition Utilizing Contour Triangulation and Flattening
CN104463777B (en) A method of the real time field depth based on face
WO2020177434A1 (en) Image processing method and apparatus, image device, and storage medium
CN107301626B (en) Buffing algorithm suitable for shooting images by mobile equipment
CN108764143B (en) Image processing method, image processing device, computer equipment and storage medium
CN108629781B (en) Hair drawing method
CN114821750A (en) Face dynamic capturing method and system based on three-dimensional face reconstruction
KR20160144699A (en) the automatic 3D modeliing method using 2D facial image
CN107516302A (en) A kind of method of the mixed image enhancing based on OpenCV
CN113327191A (en) Face image synthesis method and device
CN108596992B (en) Rapid real-time lip gloss makeup method
CN114187166A (en) Image processing method, intelligent terminal and storage medium
CN112633288A (en) Face sketch generation method based on drawing stroke guidance
CN109035268A (en) A kind of self-adaptive projection method method
JP2023082065A (en) Method of discriminating objet in image having biometric characteristics of user to verify id of the user by separating portion of image with biometric characteristic from other portion
CN113379623B (en) Image processing method, device, electronic equipment and storage medium
CN115619933A (en) Three-dimensional face reconstruction method and system based on occlusion segmentation
CN114581979A (en) Image processing method and device
CN113989295A (en) Scar and keloid image cutting and surface area calculating method and system
CN112862712A (en) Beautifying processing method, system, storage medium and terminal equipment
CN114863030B (en) Method for generating custom 3D model based on face recognition and image processing technology
CN115311403A (en) Deep learning network training method, virtual image generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant