
CN113554623B - Intelligent quantitative analysis method and analysis system for facial skin - Google Patents

Intelligent quantitative analysis method and analysis system for facial skin

Info

Publication number
CN113554623B
CN113554623B
Authority
CN
China
Prior art keywords
image
face
images
makeup
quality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110836616.2A
Other languages
Chinese (zh)
Other versions
CN113554623A (en)
Inventor
包勇
戴烨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Yixiang Information Technology Co ltd
Original Assignee
Jiangsu Yixiang Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Yixiang Information Technology Co ltd filed Critical Jiangsu Yixiang Information Technology Co ltd
Priority to CN202110836616.2A priority Critical patent/CN113554623B/en
Publication of CN113554623A publication Critical patent/CN113554623A/en
Application granted granted Critical
Publication of CN113554623B publication Critical patent/CN113554623B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/0014 Biomedical image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30088 Skin; Dermal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention belongs to the technical field of makeup-removal detection, and in particular relates to an intelligent quantitative analysis method and system for facial skin. The method comprises the following steps: collecting face images and performing face recognition; constructing an image comparison model; and detecting the makeup-removal effect from the face images according to the image comparison model. Accurate detection of the facial makeup-removal effect is thereby achieved, avoiding the harm to the skin and to human health caused by cosmetics remaining on the face after makeup removal.

Description

Intelligent quantitative analysis method and analysis system for facial skin
Technical Field
The invention belongs to the technical field of makeup-removal detection, and in particular relates to an intelligent quantitative analysis method and system for facial skin.
Background
At present, more and more people wear makeup. Most cosmetics are chemical products; if makeup is not removed completely, the residue adheres to the skin for a long time, harming the skin and, in serious cases, overall health. The makeup-removal effect currently cannot be judged accurately: people can only inspect with the naked eye, which cannot reliably determine whether makeup has been removed completely, so residual cosmetics readily harm the skin and body.
Therefore, based on the above technical problems, a new intelligent quantitative analysis method and analysis system for facial skin need to be designed.
Disclosure of Invention
The invention aims to provide an intelligent quantitative analysis method and an intelligent quantitative analysis system for facial skin.
To solve the above technical problems, the invention provides an intelligent quantitative analysis method for facial skin, which comprises the following steps:
Collecting a face image and carrying out face recognition;
Constructing an image comparison model; and
Detecting the makeup-removal effect from the face image according to the image comparison model.
Further, the method for acquiring the face image and carrying out face recognition comprises the following steps:
Collecting an image of the face without makeup;
Collecting an image of the face after makeup is applied; and
Collecting an image of the face after makeup removal.
Further, the method for collecting the face image and carrying out face recognition further comprises the following steps:
evaluating and screening the acquired images, i.e.
evaluating the quality of each image according to image pixels, according to information theory, and according to structural similarity.
Further, the method for evaluating the quality of each image according to the image pixels comprises the following steps:
quality evaluation is carried out on each image according to peak signal-to-noise ratio and mean square error, namely
The image to be evaluated is y and the reference image is x, both of size M x N. The image quality expressed by the peak signal-to-noise ratio is calculated as:
PSNR = 10 * log10(255² / MSE)
The image quality expressed by the mean square error is calculated as:
MSE = (1 / (M * N)) * Σ_{i=1..M} Σ_{j=1..N} (x(i, j) - y(i, j))²
The larger the PSNR value, the smaller the distortion between the image to be evaluated and the reference image, and the better the image quality;
the smaller the MSE value, the better the quality of the image to be evaluated.
Further, the method for evaluating the quality of each image according to the information theory comprises the following steps:
Calculating mutual information between the image to be evaluated and the reference image using the two algorithms of the information fidelity criterion and visual information fidelity, so as to measure the quality of the image to be evaluated.
Further, the method for evaluating the quality of each image according to the structural similarity comprises the following steps:
determining a reference image and an image to be evaluated;
A reference image x and an image to be evaluated y, both of size M x N, have means u_x and u_y, standard deviations σ_x and σ_y, variances σ_x² and σ_y², and covariance σ_xy;
the comparison functions of brightness, contrast and structure are, respectively:
l(x, y) = (2 * u_x * u_y + c1) / (u_x² + u_y² + c1)
c(x, y) = (2 * σ_x * σ_y + c2) / (σ_x² + σ_y² + c2)
s(x, y) = (σ_xy + c3) / (σ_x * σ_y + c3)
wherein c1, c2, c3 are positive constants;
the structural similarity index is:
SSIM(x, y) = [l(x, y)]^α * [c(x, y)]^β * [s(x, y)]^γ
when the structural similarity index is larger, the quality of the corresponding image to be evaluated is better;
Evaluating and screening the acquired images after the quality of each image has been evaluated according to image pixels, according to information theory and according to structural similarity.
Further, the method for acquiring the face image further comprises the following steps:
identifying faces in images, i.e.
performing facial-feature localization on the screened images and calculating the 88 points that form the facial contour;
carrying out full-scope feature recognition on the screened bare-face images, locating 50,000 feature points, meshing the face structure, marking position labels, and constructing a 3D face model;
dividing a preset number of ROI areas in the 3D face model;
performing spatial calibration, and establishing a correspondence between the pixels of each screened image and physical dimensions according to the 3D face model, so that, after the calibration parameters are set, the actual physical size of a measurement target on the screened images can be measured;
and/or
marking the images according to the preset positions of the ROI areas, and training so that the ROI areas on the screened images are identified automatically.
Further, the method for constructing the image comparison model comprises the following steps:
constructing the image comparison model based on the CIE L*a*b* color space, i.e.
comparing the color differences of the ROI areas in the images to be compared, so as to compare those images;
ΔE* = √((ΔL*)² + (Δa*)² + (Δb*)²)
wherein ΔL* is the brightness difference of the same ROI area in the images to be compared; Δa* is the red-green difference of the same ROI area; Δb* is the yellow-blue difference of the same ROI area; and ΔE* is the overall color difference of the same ROI area in the images to be compared; and/or
constructing the image comparison model from the gray scale, i.e.
comparing the gray differences of the ROI areas in the images to be compared, so as to compare those images;
Gray_bare,i = R_bare,i * 0.3 + G_bare,i * 0.59 + B_bare,i * 0.11;
Gray_removed,i = R_removed,i * 0.3 + G_removed,i * 0.59 + B_removed,i * 0.11;
ΔGray_i = |Gray_removed,i - Gray_bare,i|;
wherein Gray_bare,i, R_bare,i, G_bare,i and B_bare,i are the gray level and the R, G and B values of the i-th ROI area of the screened bare-face image; Gray_removed,i, R_removed,i, G_removed,i and B_removed,i are the gray level and the R, G and B values of the i-th ROI area of the screened after-makeup-removal image; and ΔGray_i is the gray-level difference of the i-th ROI area between the screened bare-face image and the screened after-makeup-removal image.
Further, the method for detecting the makeup removal effect through the face image according to the image comparison model comprises the following steps:
comparing the color differences of the ROI areas between the screened bare-face images and the screened after-makeup-removal images through the image comparison model, so as to judge the makeup-removal effect of each ROI area, wherein the smaller the color difference of the same ROI area, the better the makeup-removal effect of that area;
comparing the color differences of the ROI areas between the screened after-makeup images and the screened after-makeup-removal images through the image comparison model, so as to judge the makeup-removal effect of each ROI area, wherein the larger the color difference of the same ROI area, the better the makeup-removal effect of that area;
comparing the gray differences of the ROI areas between the screened bare-face images and the screened after-makeup-removal images through the image comparison model, so as to judge the makeup-removal effect of each ROI area, wherein the smaller the gray difference of the same ROI area, the better the makeup-removal effect of that area;
presetting classification levels of the color difference or the gray difference; after the color difference or gray difference is obtained from the corresponding image comparison model, determining its classification level and judging the makeup-removal effect according to that level.
In another aspect, the invention also provides an intelligent quantitative analysis system for facial skin, which includes:
an acquisition module, which acquires face images and performs face recognition;
a model building module, which builds the image comparison model; and
a comparison detection module, which detects the makeup-removal effect from the face images according to the image comparison model.
The invention has the beneficial effects that a face image is collected and face recognition is carried out; an image comparison model is constructed; and the makeup-removal effect is detected from the face image according to the image comparison model, so that accurate detection of the facial makeup-removal effect is realized, and harm to the skin and human health caused by cosmetics remaining on the face after makeup removal is avoided.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the intelligent quantitative analysis method for facial skin according to the invention;
FIG. 2 is a gray-scale comparison plot of foundation under the different light sources;
FIG. 3 is a gray-scale comparison plot of blush under the different light sources;
FIG. 4 is a comparison plot of the L values under the five light sources;
FIG. 5 is a comparison plot of the a values under the five light sources;
FIG. 6 is a comparison plot of the b values under the five light sources;
FIG. 7 compares the gray value of foundation under ultraviolet light with the L, a and b values;
FIG. 8 compares the gray value of blush under ultraviolet light with the L, a and b values;
FIG. 9 is a ΔE comparison of the five light sources;
FIG. 10 compares the gray value and ΔE under ultraviolet light;
FIG. 11 is a flow chart of face contour point location in accordance with the present invention;
FIG. 12 is a schematic view of a face contour point in accordance with the present invention;
FIG. 13 is a schematic view of the cool-warm color display of the makeup-removal effect according to the present invention;
FIG. 14 is a comparison of old and new foundation data;
FIG. 15 is a comparison of old and new blush data;
FIG. 16 is comparison view A of the makeup-removal effect according to the present invention;
FIG. 17 is comparison view B of the makeup-removal effect according to the present invention;
FIG. 18 is a schematic block diagram of the intelligent quantitative analysis system for facial skin according to the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
FIG. 1 is a flow chart of the intelligent quantitative analysis method for facial skin according to the present invention.
As shown in FIG. 1, the present embodiment provides an intelligent quantitative analysis method for facial skin, which includes: collecting face images and performing face recognition; constructing an image comparison model; and detecting the makeup-removal effect from the face images according to the image comparison model, so that accurate detection of the facial makeup-removal effect is realized and harm to the skin and human health from cosmetics remaining on the face after makeup removal is avoided.
FIG. 2 is a gray-scale comparison plot of foundation under the different light sources;
FIG. 3 is a gray-scale comparison plot of blush under the different light sources;
FIG. 4 is a comparison plot of the L values under the five light sources;
FIG. 5 is a comparison plot of the a values under the five light sources;
FIG. 6 is a comparison plot of the b values under the five light sources;
FIG. 7 compares the gray value of foundation under ultraviolet light with the L, a and b values;
FIG. 8 compares the gray value of blush under ultraviolet light with the L, a and b values;
FIG. 9 is a ΔE comparison of the five light sources;
FIG. 10 compares the gray value and ΔE under ultraviolet light.
In this embodiment, the method for acquiring face images and performing face recognition includes: collecting an image of the face without makeup; collecting an image of the face after makeup is applied; collecting images of the face after makeup removal; obtaining images under different light sources (the light sources may be white light, standard diffuse reflection, cross-polarized light, ultraviolet light and parallel-polarized light); and selecting the most suitable light source and parameters by comparing the parameters of different cosmetics under the different light sources. As shown in FIG. 2 and FIG. 3, among the five light sources the difference in gray values is most obvious under ultraviolet light; as shown in FIG. 4, FIG. 5 and FIG. 6, among the five light sources the difference in b values is most obvious under ultraviolet light; as shown in FIG. 7 and FIG. 8, under ultraviolet light the difference in gray values is obvious compared with the differences in the L, a and b values; as shown in FIG. 9, the ΔE law of the normalized ultraviolet light is most pronounced; and as shown in FIG. 10, under ultraviolet light the gray value does not differ greatly from ΔE. Therefore, the gray value under ultraviolet light, or the normalized ΔE under ultraviolet light, can be selected as the optimal quantity for quantitative analysis. In this embodiment, the method for acquiring face images and performing face recognition further includes: evaluating and screening the collected images, namely evaluating the quality of each image according to image pixels, according to information theory, and according to structural similarity. The research needs to simulate the visual perception characteristics of the human eye by machine-learning methods, which helps to set shooting parameters and to capture skin images that are meaningful for makeup-removal detection; the quality evaluation system helps the user judge the distortion types of the captured images, including brightness and color distortion, blurring distortion and contrast distortion. Basic characteristics of the human visual system, including multi-channel perception, resolvable blur perception and the contrast sensitivity function response, are studied; a three-dimensional feature vector is generated for each image, and a support vector machine (SVM) and a binary decision tree (DT) are used, together with professional subjective evaluation, to perform pattern recognition on the images, so as to distinguish images that can be used for makeup-removal detection from images unfavorable for it.
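By way of illustration only, the screening classifiers described above can be sketched in Python with scikit-learn; the feature vectors, labels and model settings below are hypothetical placeholders, not the patent's trained models:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Hypothetical data: one 3-D feature vector per image (multi-channel,
# blur and contrast-sensitivity responses), labelled by expert raters
# as usable (1) or unusable (0) for makeup-removal detection.
features = np.random.rand(200, 3)
labels = np.random.randint(0, 2, size=200)

svm = SVC(kernel="rbf").fit(features, labels)            # SVM screener
tree = DecisionTreeClassifier(max_depth=3).fit(features, labels)  # binary DT

candidate = np.random.rand(1, 3)           # feature vector of a new image
usable = bool(svm.predict(candidate)[0])   # screen the image with the SVM
```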
In this embodiment, the method for evaluating the quality of each image according to image pixels (i.e. based on image pixel statistics) includes: carrying out quality evaluation on each image according to the peak signal-to-noise ratio and the mean square error. The Peak Signal-to-Noise Ratio (PSNR) and the Mean Square Error (MSE) are two commonly used quality evaluation methods based on image pixel statistics; they measure the quality of the image to be evaluated from a statistical angle by calculating the differences between the gray values of corresponding pixels of the image to be evaluated and the reference image.
The image to be evaluated is y and the reference image is x, both of size M x N. The image quality expressed by the peak signal-to-noise ratio is calculated as:
PSNR = 10 * log10(255² / MSE)
The image quality expressed by the mean square error is calculated as:
MSE = (1 / (M * N)) * Σ_{i=1..M} Σ_{j=1..N} (x(i, j) - y(i, j))²
PSNR and MSE measure image quality by calculating the global size of the pixel errors between the image to be evaluated and the reference image. The larger the PSNR value, the smaller the distortion between the image to be evaluated and the reference image, and the better the image quality; the smaller the MSE value, the better the quality of the image to be evaluated. However, these measures are based on global statistics of the image pixel values and do not take the local visual factors of the human eye into account, so they reveal nothing about the local quality of the image.
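A minimal Python sketch of this pixel-statistics evaluation follows; it assumes 8-bit images (peak value 255) and NumPy, and is illustrative rather than the patented implementation:

```python
import numpy as np

def mse(x: np.ndarray, y: np.ndarray) -> float:
    """Mean square error between reference image x and evaluated image y,
    both of size M x N."""
    return float(np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2))

def psnr(x: np.ndarray, y: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio; larger values mean less distortion."""
    e = mse(x, y)
    return float("inf") if e == 0.0 else 10.0 * np.log10(peak ** 2 / e)
```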
In this embodiment, the method for evaluating the quality of each image according to information theory includes: calculating the mutual information between the image to be evaluated and the reference image so as to measure the quality of the image to be evaluated. On the basis of information entropy in information theory, mutual information is widely used to evaluate image quality. The quality of the image to be evaluated is measured by calculating the mutual information between it and the reference image using two algorithms, the Information Fidelity Criterion (IFC) and Visual Information Fidelity (VIF). Both methods have theoretical support and extend the relationship between the image and the human eye in terms of information fidelity, but they do not respond to the structural information of the image.
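The full IFC and VIF algorithms model natural-scene statistics in the wavelet domain and are not reproduced here; as a crude stand-in, the mutual information between two images can be estimated directly from their joint gray-level histogram, as in the following sketch (the 64-bin histogram is an arbitrary assumption):

```python
import numpy as np

def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 64) -> float:
    """Mutual information (bits) estimated from the joint gray-level
    histogram of images x and y; a simplified proxy, not IFC or VIF."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint distribution
    px = pxy.sum(axis=1, keepdims=True)  # marginal of x
    py = pxy.sum(axis=0, keepdims=True)  # marginal of y
    nz = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```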
In this embodiment, the method for evaluating the quality of each image according to structural similarity is as follows. The main function of human vision is to extract structural information from a scene, and the human visual system is highly adapted to this goal, so a measure of the structural distortion of an image should be a good approximation of its perceived quality. On this basis, an objective image quality criterion that accords with the characteristics of the human visual system, Structural Similarity (SSIM), has been proposed; SSIM constructs structural similarity from the correlations between image pixels.
A reference image x and an image to be evaluated y are determined, both of size M x N, with means u_x and u_y, standard deviations σ_x and σ_y, variances σ_x² and σ_y², and covariance σ_xy.
The comparison functions of brightness, contrast and structure are, respectively:
l(x, y) = (2 * u_x * u_y + c1) / (u_x² + u_y² + c1)
c(x, y) = (2 * σ_x * σ_y + c2) / (σ_x² + σ_y² + c2)
s(x, y) = (σ_xy + c3) / (σ_x * σ_y + c3)
wherein c1, c2, c3 are positive constants introduced to avoid instability when the denominators are close to zero.
The structural similarity index is:
SSIM(x, y) = [l(x, y)]^α * [c(x, y)]^β * [s(x, y)]^γ
wherein α, β and γ are weighting parameters, each of which may be taken as 1.
when the structural similarity index is larger, the quality of the corresponding image to be evaluated is better;
After the quality of each image has been evaluated according to image pixels, according to information theory and according to structural similarity, the acquired images are evaluated and screened, so that images convenient for makeup-removal detection are selected from all the acquired images. All the reference images in this embodiment may be preselected images (for example, a preset bare-face image, a preset after-makeup-removal image, and the like).
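A sketch of the SSIM computation above is given below in Python. It computes a single-window (whole-image) SSIM with α = β = γ = 1 and c3 = c2/2, so the three comparison functions collapse to the familiar two-factor form; the c1 and c2 defaults follow the common 0.01/0.03 convention for 8-bit images and are assumptions, and production code would normally evaluate SSIM over a sliding window instead:

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray,
                c1: float = (0.01 * 255) ** 2,
                c2: float = (0.03 * 255) ** 2) -> float:
    """Whole-image SSIM with alpha = beta = gamma = 1 and c3 = c2 / 2."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    ux, uy = x.mean(), y.mean()          # means
    vx, vy = x.var(), y.var()            # variances
    cov = ((x - ux) * (y - uy)).mean()   # covariance
    return float(((2 * ux * uy + c1) * (2 * cov + c2)) /
                 ((ux ** 2 + uy ** 2 + c1) * (vx + vy + c2)))
```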
FIG. 11 is a flow chart of face contour point location in accordance with the present invention;
FIG. 12 is a schematic view of the face contour points according to the present invention.
In this embodiment, the method for acquiring a face image further includes:
Faces in each image are recognized: the face region is located and registered, the face is divided into a number of gridded regions (small image patches), and a refined evaluation range is compiled. Registration is performed in several dimensional spaces on the basis of image pixel statistics, information theory, structural information and the like; features are extracted, and a cleansing-quality evaluation model is constructed. The feature spaces include first-order characteristics, shape characteristics and texture characteristics (GLCM, GLSZM, GLRLM, GLDM, NGTDM), and corresponding feature-parameter extraction can be provided for the practical application problem. Various feature-space values are tried in the following five respects: (1) boundary feature methods; (2) the Fourier shape descriptor method; (3) geometric parameter methods; (4) shape invariant moment methods; (5) other methods, where work on the representation and matching of shapes also includes the finite element method (FEM), turning functions, wavelet descriptors and the like.
Information about specific parts of the face has an important influence on facial expression recognition. The selected facial feature points should contain the feature points that need to be extracted; at the same time they should describe well how each expression evolves, and their number should reach a certain density around the nominal facial features so that the changes of those features during an expression can be described completely.
In addition to the features of specific regions, the overall features of the face also affect how facial expressions are recognized. The influence of the overall features may be manifested in a number of ways, including the facial structure and the morphology and gender of the face. The facial structure refers to the spatial relationships and layout information between the various parts of the face.
The representation of facial features can be broadly divided into three categories. The first represents facial features by points, i.e. feature points defined according to our understanding of facial features; in ASM and AAM, for example, facial features are defined by points, which we call landmarks. These fall into three kinds: (a) extreme points, which usually have a single definition in a local area, such as the pupil of the eye, the tip of the nose and the nostrils; (b) boundary points, which lie on a local boundary and are generally extracted uniformly over the whole edge, such as face contour points, eyebrow contour points and lip contour points, taking into account the positions of and distances between adjacent boundary points; (c) interpolation points, which lie where texture features are locally not obvious and are calculated by interpolating other boundary points, such as the center point of the mouth or eyebrow and points that are occluded or invisible. The second defines facial features by lines or boundaries, for example defining the outline of the face as a parabola in a deformable template, or the boundary of the eyeball as a circle. The third defines facial features by regions, as in some methods that segment by color or gray value: the region of lip-colored pixels is statistically segmented as the mouth, the eyes and eyebrows are likewise separated from other areas of the face by differences in color and brightness, and the area composed of the pixels meeting the conditions is then taken as the corresponding target region.
As shown in FIG. 11 and FIG. 12, the screened images are subjected to facial-feature localization and the 88 points forming the facial contour are calculated, comprising the eyebrows (8 points on each side), the eyes (8 points on each side), the nose (13 points), the mouth (22 points) and the facial outline (21 points). Full-scope feature recognition is carried out on the screened bare-face images: 50,000 feature points are located, the face structure is meshed, position labels are marked, and a 3D face model is constructed (PRNet can reconstruct about 50,000 3D face key points from a single face image). A preset number of ROI areas are divided in the 3D face model. Spatial calibration is performed, and a correspondence between the pixels of each screened image and physical dimensions is established according to the 3D face model; after the calibration parameters are set, the actual physical size of a target on the screened images can be measured; and/or
a large number of face images are marked according to the positions of the preset ROI areas (the feature points in each ROI area are marked), and training is performed so that the ROI areas on the screened images are identified automatically; the positions of the ROI areas can be marked within the face regions on the images and trained with a U-Net, so that the ROI areas on the screened images are identified automatically.
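The spatial calibration step can be illustrated with a minimal sketch: assuming a calibration marker of known physical length is visible in the image (the marker, the millimetre units and the function names are hypothetical, not taken from the patent), a scale factor is derived and applied to ROI measurements:

```python
import numpy as np

def mm_per_pixel(marker_len_px: float, marker_len_mm: float) -> float:
    """Scale factor from a calibration marker of known physical length."""
    return marker_len_mm / marker_len_px

def roi_area_mm2(roi_mask: np.ndarray, scale: float) -> float:
    """Physical area of an ROI, given its boolean pixel mask and the
    mm-per-pixel scale returned by mm_per_pixel()."""
    return float(roi_mask.sum()) * scale ** 2
```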
In this embodiment, the method for constructing the image comparison model includes constructing the image comparison model based on the CIE L*a*b* color space, i.e.
the three basic coordinates represent the lightness of the color (L*, where L* = 0 denotes black and L* = 100 denotes white), its position between red/magenta and green (a*, negative values indicating green, positive values magenta) and its position between yellow and blue (b*, negative values indicating blue, positive values yellow); L* is lightness, ranging from 0 to 100, i.e. from dark (black) to light (white); a* represents red-green, its value running from positive to negative as the color runs from red to green; b* represents yellow-blue, its value running from positive to negative as the color runs from yellow to blue;
comparing the color differences of the ROI areas in the images to be compared, so as to compare those images;
ΔE* = √((ΔL*)² + (Δa*)² + (Δb*)²)
wherein ΔL* is the brightness difference of the same ROI area in the images to be compared; Δa* is the red-green difference of the same ROI area; Δb* is the yellow-blue difference of the same ROI area; and ΔE* is the overall color difference of the same ROI area in the images to be compared; and/or
constructing the image comparison model from the gray scale, i.e.
comparing the gray differences of the ROI areas in the images to be compared, so as to compare those images;
Gray_bare,i = R_bare,i * 0.3 + G_bare,i * 0.59 + B_bare,i * 0.11;
Gray_removed,i = R_removed,i * 0.3 + G_removed,i * 0.59 + B_removed,i * 0.11;
ΔGray_i = |Gray_removed,i - Gray_bare,i|;
wherein Gray_bare,i, R_bare,i, G_bare,i and B_bare,i are the gray level and the R, G and B values of the i-th ROI area of the screened bare-face image; Gray_removed,i, R_removed,i, G_removed,i and B_removed,i are the gray level and the R, G and B values of the i-th ROI area of the screened after-makeup-removal image; and ΔGray_i is the gray-level difference of the i-th ROI area between the screened bare-face image and the screened after-makeup-removal image.
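Both comparison models reduce to a few vector operations. The following sketch computes the ΔE* of the CIE L*a*b* model and the ΔGray of the gray-scale model, using the 0.3/0.59/0.11 weights given above; the (number of ROIs, 3) input shapes holding per-ROI mean channel values are an assumption for illustration:

```python
import numpy as np

GRAY_WEIGHTS = np.array([0.3, 0.59, 0.11])  # R, G, B weights from the description

def delta_e(lab_a: np.ndarray, lab_b: np.ndarray) -> np.ndarray:
    """Euclidean colour difference dE* per ROI between two images,
    from (n_rois, 3) arrays of mean (L*, a*, b*) values."""
    d = lab_b.astype(np.float64) - lab_a.astype(np.float64)
    return np.sqrt((d ** 2).sum(axis=1))

def delta_gray(rgb_bare: np.ndarray, rgb_removed: np.ndarray) -> np.ndarray:
    """|Gray_removed,i - Gray_bare,i| per ROI, from (n_rois, 3)
    arrays of mean (R, G, B) values."""
    gray_bare = rgb_bare.astype(np.float64) @ GRAY_WEIGHTS
    gray_removed = rgb_removed.astype(np.float64) @ GRAY_WEIGHTS
    return np.abs(gray_removed - gray_bare)
```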
FIG. 13 is a schematic view showing the cool-warm color display of the makeup-removal effect according to the present invention.
In this embodiment, the method for detecting the makeup-removal effect from the face images according to the image comparison model includes:
dividing the screened bare-face images, after-makeup images and after-makeup-removal images into a plurality of gridded patches according to the ROI areas, and comparing the color differences of the same ROI areas (the L, a and b values of a patch being the mean values over that patch) through the image comparison model, so as to judge the makeup-removal effect of each ROI area;
comparing the color differences of the ROI areas between the screened bare-face images and the screened after-makeup-removal images through the image comparison model, so as to judge the makeup-removal effect of each ROI area, wherein the smaller the color difference of the same ROI area, the better the makeup-removal effect of that area;
comparing the color differences of the ROI areas between the screened after-makeup images and the screened after-makeup-removal images through the image comparison model, so as to judge the makeup-removal effect of each ROI area, wherein the larger the color difference of the same ROI area, the better the makeup-removal effect of that area;
The makeup-residue percentage of each ROI area can be obtained from the color difference of the corresponding ROI area and marked in that area, which makes it convenient to see the makeup-removal effect of the area. As shown in FIG. 13, cool and warm colors may be assigned to the ROI areas according to their color differences and displayed in the corresponding areas (a cool color indicates a good makeup-removal effect, and a warm color indicates a poor one);
comparing the gray differences of the ROI areas between the screened bare-face images and the screened after-makeup-removal images through the image comparison model, so as to judge the makeup-removal effect of each ROI area, wherein the smaller the gray difference of the same ROI area, the better the makeup-removal effect of that area;
classification levels of the color difference or the gray difference are preset; after the color difference or gray difference is obtained from the corresponding image comparison model, its classification level is determined, and the makeup-removal effect is judged according to that level.
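Putting the bare-face versus after-removal comparison into code, the sketch below flags each ROI whose ΔE* falls below an acceptance threshold; the threshold value is purely illustrative and is not taken from the patent:

```python
import numpy as np

def removal_flags(lab_bare: np.ndarray, lab_removed: np.ndarray,
                  threshold: float = 1.0) -> np.ndarray:
    """Per-ROI verdict: True where the bare-face vs. after-removal colour
    difference is small, i.e. makeup was removed well in that ROI.
    The threshold is a hypothetical example value."""
    d = lab_removed.astype(np.float64) - lab_bare.astype(np.float64)
    de = np.sqrt((d ** 2).sum(axis=1))  # dE* per ROI
    return de < threshold
```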
FIG. 14 is a comparison of old and new foundation data;
FIG. 15 is a comparison of old and new blush data.
As shown in FIG. 14 and FIG. 15, a specific way of dividing the classification levels in this embodiment may be as follows. Comprehensively comparing the data of 20 groups, each made up 5 times, shows that the more times makeup is applied, the smaller the gray-value differences between the groups become and the harder they are to distinguish; therefore purely numerical quantization is not used, and a hierarchical quantization is adopted instead. Quantization uses the six levels 0, 1, 2, 3, 4 and 5, which may be divided as follows: level 0: 0-1% (no makeup); level 1: 1-15%; level 2: 15-25%; level 3: 25-35%; level 4: 35-45%; level 5: 45-100%. In the line graph of the makeup-removal-effect percentage, the abscissa is makeup removal, the ordinate is makeup, and the origin is the bare face; the specific percentage formula is expressed in terms of δ, the percentage of the makeup-removal effect, ΔE_removed, the color difference between the images during makeup removal, ΔE_bare, the color difference between the bare-face images, and ΔE_makeup, the color difference between the after-makeup images. The conventional calibration method is not favorable for distinction, so a new calibration method is adopted in this embodiment: 10 mg of cosmetic (e.g. foundation or blush) is applied over a 5×5 area, 10 mg over a 4×5 area, 10 mg over a 4×4 area, 10 mg over a 3×4 area, and 10 mg over a 3×3 area; the total amount a used for the calibration method and each demarcation point are determined from these applications. The data of the new calibration method are more favorable for accurate classification.
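The six-level quantization can be expressed as a small lookup; the sketch below maps the percentage δ to levels 0 through 5 using the bands quoted in this embodiment (the handling of values exactly on a band edge is an assumption):

```python
def residue_level(delta_pct: float) -> int:
    """Map the percentage delta to the six quantisation levels:
    0-1% -> 0, 1-15% -> 1, 15-25% -> 2, 25-35% -> 3,
    35-45% -> 4, 45-100% -> 5."""
    upper_bounds = [1.0, 15.0, 25.0, 35.0, 45.0]  # upper edge of levels 0..4
    for level, upper in enumerate(upper_bounds):
        if delta_pct <= upper:
            return level
    return 5
```

For example, a δ of 20% falls in level 2 under this division.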
FIG. 16 is comparison view A of the makeup-removal effect according to the present invention;
FIG. 17 is comparison view B of the makeup-removal effect according to the present invention.
In this embodiment, makeup-removal detection is illustrated with a specific after-makeup-removal face image. As shown in FIG. 16, the values before makeup removal are L = 86.59, a = 3.02, b = 10.80, and after makeup removal they are L = 86.82, a = 3.27, b = 11.09, giving a color difference ΔE* of 0.45. As shown in FIG. 17, the values before makeup removal are L = 87.84, a = 2.41, b = 11.89, and after makeup removal they are L = 83.80, a = 6.23, b = 13.95, giving a color difference ΔE* of 5.93.
Example 2
FIG. 18 is a schematic block diagram of the intelligent quantitative analysis system for facial skin according to the present invention.
As shown in FIG. 18, on the basis of Example 1, Example 2 further provides an intelligent quantitative analysis system for facial skin, which includes: an acquisition module, which acquires face images and performs face recognition; a model building module, which builds the image comparison model; and a comparison detection module, which detects the makeup-removal effect from the face images according to the image comparison model.
In this embodiment, the intelligent quantitative analysis system for facial skin may further include a display module, which displays a 3D model of the facial skin, divides the ROI areas in the 3D model, and displays the makeup residue and the corresponding cool or warm color in each ROI area.
In this embodiment, the intelligent quantitative analysis system for facial skin detects the makeup-removal effect by adopting the intelligent quantitative analysis method for facial skin of Example 1.
In summary, the invention collects face images and performs face recognition; constructs an image comparison model; and detects the makeup-removal effect from the face images according to the image comparison model, thereby realizing accurate detection of the facial makeup-removal effect and avoiding harm to the skin and human health from cosmetics remaining on the face after makeup removal.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present invention may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
Taking the preferred embodiments described above as illustration, persons skilled in the relevant art can make various changes and modifications without departing from the technical idea of the present invention. The technical scope of the present invention is not limited to the description, but must be determined according to the scope of the claims.

Claims (7)

1. An intelligent quantitative analysis method for facial skin, characterized by comprising the following steps:
Collecting a face image and carrying out face recognition;
Constructing an image comparison model; and
Detecting the makeup removal effect through the face image according to the image comparison model;
The method for acquiring the face image further comprises the following steps:
identifying faces in images, i.e.
performing facial-feature localization on the screened images and calculating the 88 points that form the facial contour;
carrying out full-scope feature recognition on the screened bare-face images, locating 50,000 feature points, meshing the face structure, marking position labels, and constructing a 3D face model;
dividing a preset number of ROI areas in the 3D face model;
performing spatial calibration, and establishing a correspondence between the pixels of each screened image and physical dimensions according to the 3D face model; after the calibration parameters are set, measuring the actual physical size of a target on the screened images; and/or
Marking the images according to the positions of preset ROI areas, and training to automatically identify the ROI areas on the screened images;
The method for constructing the image comparison model comprises the following steps:
constructing the image comparison model based on the CIE L*a*b* color space, i.e.
comparing the color differences of the ROI areas in the images to be compared, so as to compare those images;
ΔE* = √((ΔL*)² + (Δa*)² + (Δb*)²)
wherein ΔL* is the brightness difference of the same ROI area in the images to be compared; Δa* is the red-green difference of the same ROI area; Δb* is the yellow-blue difference of the same ROI area; and ΔE* is the overall color difference of the same ROI area in the images to be compared; and/or
constructing the image comparison model from the gray scale, i.e.
comparing the gray differences of the ROI areas in the images to be compared, so as to compare those images;
Gray_bare,i = R_bare,i * 0.3 + G_bare,i * 0.59 + B_bare,i * 0.11;
Gray_removed,i = R_removed,i * 0.3 + G_removed,i * 0.59 + B_removed,i * 0.11;
ΔGray_i = |Gray_removed,i - Gray_bare,i|;
wherein Gray_bare,i, R_bare,i, G_bare,i and B_bare,i are the gray level and the R, G and B values of the i-th ROI area of the screened bare-face image; Gray_removed,i, R_removed,i, G_removed,i and B_removed,i are the gray level and the R, G and B values of the i-th ROI area of the screened after-makeup-removal image; and ΔGray_i is the gray-level difference of the i-th ROI area between the screened bare-face image and the screened after-makeup-removal image;
the method for detecting the makeup removal effect through the face image according to the image comparison model comprises the following steps:
comparing the color differences of the ROI areas between the screened bare-face images and the screened after-makeup-removal images through the image comparison model, so as to judge the makeup-removal effect of each ROI area, wherein the smaller the color difference of the same ROI area, the better the makeup-removal effect of that area;
comparing the color differences of the ROI areas between the screened after-makeup images and the screened after-makeup-removal images through the image comparison model, so as to judge the makeup-removal effect of each ROI area, wherein the larger the color difference of the same ROI area, the better the makeup-removal effect of that area;
comparing the gray differences of the ROI areas between the screened bare-face images and the screened after-makeup-removal images through the image comparison model, so as to judge the makeup-removal effect of each ROI area, wherein the smaller the gray difference of the same ROI area, the better the makeup-removal effect of that area;
presetting classification levels of the color difference or the gray difference; after the color difference or gray difference is obtained from the corresponding image comparison model, determining its classification level and judging the makeup-removal effect according to that level.
2. The intelligent quantitative analysis method for facial skin according to claim 1, wherein
The method for collecting the face image and carrying out face recognition comprises the following steps:
collecting an image of the face without makeup;
collecting an image of the face after makeup is applied; and
collecting an image of the face after makeup removal.
3. The intelligent quantitative analysis method for facial skin according to claim 2, wherein
The method for collecting the face image and carrying out face recognition further comprises the following steps:
evaluating and screening the acquired images, i.e.
evaluating the quality of each image according to image pixels, according to information theory, and according to structural similarity.
4. The intelligent quantitative analysis method for facial skin according to claim 3, wherein
The method for evaluating the quality of each image according to the image pixels comprises the following steps:
quality evaluation is carried out on each image according to peak signal-to-noise ratio and mean square error, namely
The image to be evaluated is y and the reference image is x, both of size M x N; the image quality expressed by the peak signal-to-noise ratio is calculated as:
PSNR = 10 * log10(255² / MSE)
The image quality expressed by the mean square error is calculated as:
MSE = (1 / (M * N)) * Σ_{i=1..M} Σ_{j=1..N} (x(i, j) - y(i, j))²
The larger the PSNR value is, the smaller the distortion between the image to be evaluated and the reference image is, and the better the image quality is;
The smaller the value of MSE, the better the image quality to be evaluated.
5. The intelligent quantitative analysis method for facial skin according to claim 4, wherein
The method for evaluating the quality of each image according to the information theory comprises the following steps:
calculating mutual information between the image to be evaluated and the reference image using the two algorithms of the information fidelity criterion and visual information fidelity, so as to measure the quality of the image to be evaluated.
6. The intelligent quantitative analysis method for facial skin according to claim 5, wherein
The method for evaluating the quality of each image according to the structural similarity comprises the following steps:
determining a reference image and an image to be evaluated;
a reference image x and an image to be evaluated y, both of size M x N, have means u_x and u_y, standard deviations σ_x and σ_y, variances σ_x² and σ_y², and covariance σ_xy;
the comparison functions of brightness, contrast and structure are, respectively:
l(x, y) = (2 * u_x * u_y + c1) / (u_x² + u_y² + c1)
c(x, y) = (2 * σ_x * σ_y + c2) / (σ_x² + σ_y² + c2)
s(x, y) = (σ_xy + c3) / (σ_x * σ_y + c3)
wherein c1, c2, c3 are positive constants;
the structural similarity index is:
SSIM(x, y) = [l(x, y)]^α * [c(x, y)]^β * [s(x, y)]^γ
when the structural similarity index is larger, the quality of the corresponding image to be evaluated is better;
evaluating and screening the acquired images after the quality of each image has been evaluated according to image pixels, according to information theory and according to structural similarity.
7. An intelligent quantitative analysis system for facial skin, adopting the intelligent quantitative analysis method for facial skin according to any one of claims 1 to 6, and comprising:
an acquisition module, which acquires face images and performs face recognition;
a model building module, which builds the image comparison model; and
a comparison detection module, which detects the makeup-removal effect from the face images according to the image comparison model.
CN202110836616.2A 2021-07-23 2021-07-23 Intelligent quantitative analysis method and analysis system for facial skin Active CN113554623B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110836616.2A CN113554623B (en) 2021-07-23 2021-07-23 Intelligent quantitative analysis method and analysis system for facial skin

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110836616.2A CN113554623B (en) 2021-07-23 2021-07-23 Intelligent quantitative analysis method and analysis system for facial skin

Publications (2)

Publication Number Publication Date
CN113554623A (en) 2021-10-26
CN113554623B (en) 2024-10-22

Family

ID=78132634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110836616.2A Active CN113554623B (en) 2021-07-23 2021-07-23 Intelligent quantitative analysis method and analysis system for facial skin

Country Status (1)

Country Link
CN (1) CN113554623B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119482A (en) * 2021-10-27 2022-03-01 宁波智能技术研究院有限公司 Method and system for detecting skin makeup residues based on neural network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930806A (en) * 2016-04-25 2016-09-07 上海斐讯数据通信技术有限公司 Facial cleanliness detection method and mobile terminal
CN112396573A (en) * 2019-07-30 2021-02-23 纵横在线(广州)网络科技有限公司 Facial skin analysis method and system based on image recognition

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014102567A1 (en) * 2012-12-27 2014-07-03 L'oréal Method for determining make-up removal efficiency
CN108308833A (en) * 2018-01-29 2018-07-24 上海康斐信息技术有限公司 A kind of the makeup removing detection method and system of intelligence makeup removing instrument
CN109961426B (en) * 2019-03-11 2021-07-06 西安电子科技大学 Method for detecting skin of human face
CN111524080A (en) * 2020-04-22 2020-08-11 杭州夭灵夭智能科技有限公司 Face skin feature identification method, terminal and computer equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930806A (en) * 2016-04-25 2016-09-07 上海斐讯数据通信技术有限公司 Facial cleanliness detection method and mobile terminal
CN112396573A (en) * 2019-07-30 2021-02-23 纵横在线(广州)网络科技有限公司 Facial skin analysis method and system based on image recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Survey of image blur assessment and its applications; Mei Jiaxiang; Liu Zhanning; Zhang Zhijia; Wang Zitao; Zhang Xuanyi; Xue Qing; Software Engineering; 2018-04-05; vol. 21, no. 4; pp. 23-36 *

Also Published As

Publication number Publication date
CN113554623A (en) 2021-10-26

Similar Documents

Publication Publication Date Title
CN106778788B (en) The multiple features fusion method of aesthetic evaluation is carried out to image
US12056883B2 (en) Method for testing skin texture, method for classifying skin texture and device for testing skin texture
US7715596B2 (en) Method for controlling photographs of people
US8548257B2 (en) Distinguishing between faces and non-faces
CN106295124B (en) The method of a variety of image detecting technique comprehensive analysis gene subgraph likelihood probability amounts
EP1229493B1 (en) Multi-mode digital image processing method for detecting eyes
CN110210448B (en) Intelligent face skin aging degree identification and evaluation method
CN108549886A (en) A kind of human face in-vivo detection method and device
CN110363088B (en) Self-adaptive skin inflammation area detection method based on multi-feature fusion
US20080285856A1 (en) Method for Automatic Detection and Classification of Objects and Patterns in Low Resolution Environments
CN110110637A (en) A kind of method of face wrinkle of skin automatic identification and wrinkle severity automatic classification
CN112396573A (en) Facial skin analysis method and system based on image recognition
CN101833654B (en) Sparse representation face identification method based on constrained sampling
CN101576953A (en) Classification method and device of human body posture
CN112528939B (en) Quality evaluation method and device for face image
Chakravarty et al. Coupled sparse dictionary for depth-based cup segmentation from single color fundus image
CN113436734A (en) Tooth health assessment method and device based on face structure positioning and storage medium
CN111709305B (en) Face age identification method based on local image block
Paul et al. PCA based geometric modeling for automatic face detection
CN113554623B (en) Intelligent quantitative analysis method and analysis system for facial skin
KR101436988B1 (en) Method and Apparatus of Skin Pigmentation Detection Using Projection Transformed Block Coefficient
Bogo et al. Automated detection of new or evolving melanocytic lesions using a 3D body model
CN113436735A (en) Body weight index prediction method, device and storage medium based on face structure measurement
CN108629771B (en) A kind of blind evaluation method of picture quality with scale robustness
CN110796638B (en) Pore detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant