
CN101950356B - Smiling face detecting method and system - Google Patents

Smiling face detecting method and system Download PDF

Info

Publication number
CN101950356B
CN101950356B, CN201010276313A
Authority
CN
China
Prior art keywords
value
smiling face
submodule
weak classifier
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN 201010276313
Other languages
Chinese (zh)
Other versions
CN101950356A (en)
Inventor
方发明
罗小伟
林福辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Communications Shanghai Co Ltd
Original Assignee
Spreadtrum Communications Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Communications Shanghai Co Ltd filed Critical Spreadtrum Communications Shanghai Co Ltd
Priority to CN 201010276313 priority Critical patent/CN101950356B/en
Publication of CN101950356A publication Critical patent/CN101950356A/en
Application granted granted Critical
Publication of CN101950356B publication Critical patent/CN101950356B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to computer image processing and discloses a smiling face detection method and a smiling face detection system. An Adaboost voting principle is introduced into the smiling face detection algorithm to considerably reduce the computational complexity of the detection process, so that real-time smiling face detection can be realized on portable devices with limited computing and storage capacity. In addition, before the Adaboost algorithm is used for smiling face detection, the original face image is preprocessed by denoising and brightness adjustment, which overcomes the drawbacks of using Adaboost for smiling face detection; the computational complexity of the detection process is thus reduced while the accuracy of the detection is ensured.

Description

Smiling face detection method and system thereof
Technical Field
The present invention relates to computer image processing, and more particularly, to a technique for detecting smiling faces in images.
Background
In recent years, with the development of multimedia technology, portable devices such as mobile phones and digital cameras have come to include more and more multimedia software, such as image denoising and face recognition, and these applications are very popular with consumers. As portable devices continue to improve, smiling face detection has likewise become a technology in urgent demand.
Smiling face detection refers to the process of determining, by a specific method and after face detection has been completed, whether a given face region is a smiling face, and of scoring the smile. Smiling face detection has very important applications, such as automatic snapshot of smiling faces by a digital camera and smile-related services in the modern service industry, all of which can use smiling face recognition technology. Smiling face detection is not easy, however, and its implementation faces many challenges. Because human expressions are inherently fuzzy, different people may sometimes give widely different judgments of the same fixed expression. In addition, detection is made very difficult by factors such as face pose, appearance, skin color, occlusions such as glasses, and the optical imaging environment. The two limiting factors that affect the practical application of smiling face detection are its accuracy and its speed.
With the development of computer image processing technology, the accuracy of current smiling face detection methods has improved markedly. However, the inventors have found that the existing smiling face detection methods are highly complex, while portable devices generally have limited computing and storage capability, so the existing methods cannot be applied well on such devices. The existing Adaboost algorithm, by contrast, has low complexity and solves the speed problem well, but applying it to smile detection still faces many problems: the Adaboost algorithm is sensitive to noise, and the small difference between a smiling face and a non-smiling face is difficult for the method to identify, so Adaboost has not yet been applied to smile detection.
Disclosure of Invention
The invention aims to provide a smiling face detection method and a system thereof, which can realize real-time and efficient smiling face detection on portable devices with limited computing and storage capability.
In order to solve the above technical problem, an embodiment of the present invention provides a smiling face detection method, including:
preprocessing an original face image, wherein the preprocessing comprises denoising processing and brightness adjusting processing;
carrying out smiling face detection on the preprocessed face image by using an Adaboost algorithm of a voting mechanism;
outputting the result of the smiling face detection.
An embodiment of the present invention also provides a smiling face detection system, including:
the preprocessing module is used for preprocessing an original face image, and the preprocessing comprises denoising processing and brightness adjusting processing;
the detection module is used for carrying out smiling face detection on the face image preprocessed by the preprocessing module by using an Adaboost algorithm;
and the output module is used for outputting the result of smiling face detection performed by the detection module.
Compared with the prior art, the main differences and effects of the embodiments of the present invention are as follows:
The original face image is preprocessed by denoising and brightness adjustment, smiling face detection is then performed on the preprocessed face image with an Adaboost algorithm based on a voting mechanism, and the smiling face detection result is output. Because the complexity of the Adaboost algorithm is low, introducing the Adaboost voting principle into the smiling face detection algorithm greatly reduces the computational complexity of the detection process, so that real-time smiling face detection can be realized on portable devices with limited computing and storage capability. Moreover, because the voting-based Adaboost algorithm is sensitive to noise and has difficulty identifying the small differences between smiling and non-smiling faces, the original face image is preprocessed by denoising and brightness adjustment before the Adaboost algorithm is applied; this remedies the weaknesses of Adaboost when applied to smile recognition, so that detection accuracy is ensured while the computational complexity of the detection process is reduced.
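For illustration only, the overall flow described above can be sketched in a few lines of Python; `denoise`, `adjust_brightness` and `adaboost_smile_detect` are hypothetical stand-ins for the preprocessing and detection steps detailed in the embodiments below, not functions defined by the patent.

```python
import numpy as np

def detect_smile(face_image, denoise, adjust_brightness, adaboost_smile_detect):
    """Illustrative pipeline: denoise, adjust brightness, then run Adaboost voting."""
    u = denoise(face_image.astype(np.float64))   # denoising is performed first
    u = adjust_brightness(u)                     # then brightness adjustment
    return adaboost_smile_detect(u)              # smile score, or None for non-smiling
```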
Furthermore, during the preprocessing of the original face image, the denoising is performed first and the brightness adjustment second, so that the factors unfavorable to the algorithm are reduced as much as possible and the accuracy of smiling face detection is further ensured.
Further, the face image is denoised iteratively, one pass at a time, until the difference between the face image after a denoising pass and the face image before it satisfies a preset condition. Processing the original face image with this variational denoising method effectively ensures the denoising effect.
Further, in the Adaboost algorithm, the weak classifier used for voting is h(x, f, p, θ):
h(x, f, p, θ) = 1 if p·f(x) < p·θ, and 0 otherwise,
where f(x) is the feature value of the weak classifier and takes a haar feature value, θ is a preset threshold used to judge whether the feature value of the weak classifier meets the smiling face condition, and p is used to adjust the direction of the inequality: when f(x) is smaller than the threshold θ, p is 1, and when f(x) is larger than the threshold θ, p is -1. When h is 1, the weak classifier votes smiling face, and when h is 0, it votes non-smiling face. The method is simple to implement and further guarantees the low complexity of smiling face detection.
Drawings
Fig. 1 is a flowchart of a smiling face detection method according to a first embodiment of the present invention;
Fig. 2 is a flowchart of the denoising process in the first embodiment of the present invention;
Fig. 3 is a flowchart of the brightness adjustment process in the first embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a smiling face detection system according to a third embodiment of the present invention.
Detailed Description
In the following description, numerous technical details are set forth in order to provide a better understanding of the present application. However, it will be understood by those skilled in the art that the technical solutions claimed in the present application can be implemented without these technical details and with various changes and modifications based on the following embodiments.
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
A first embodiment of the present invention relates to a smiling face detection method. Fig. 1 is a flowchart illustrating the smiling face detection method.
In step 101, the original face image is preprocessed; the preprocessing includes denoising and brightness adjustment. In this step the original face image is denoised first, and the denoised face image is then brightness-adjusted. Performing the denoising before the brightness adjustment further ensures the accuracy of smiling face detection.
Specifically, the flow of denoising the original face image is shown in Fig. 2. In step 201, the counter i is reset to zero and the image to be denoised is initialized as u_i = u_0, where u_0 is the original face image.
Next, in step 202, the derivative of each pixel of the face image in the row direction is calculated. Specifically, the derivative u_i^x(x, y) of each pixel of u_i in the row direction is calculated by the following formula:
u_i^x(x, y) = ( u_i(x+1, y) − u_i(x−1, y) ) / 2
where x and y represent the location coordinates of the pixel and h represents the height of the image.
Next, in step 203, the derivative of each pixel of the face image in the column direction is calculated. Specifically, the derivative u_i^y(x, y) of each pixel of u_i in the column direction is calculated by the following formula:
u_i^y(x, y) = ( u_i(x, y+1) − u_i(x, y−1) ) / 2
where x and y represent the location coordinates of the pixel and w represents the width of the image.
Next, in step 204, the gradient value of each pixel is calculated from the derivatives of each pixel in the row and column directions, i.e. the gradient value of each pixel of u_i is calculated by the following formula:
∇u_i(x, y) = u_i^x(x, y) i + u_i^y(x, y) j
where i and j represent the unit vectors in the row and column directions respectively, u_i^x(x, y) was obtained in step 202 and u_i^y(x, y) was obtained in step 203.
Next, in step 205, the gradient modulus of each pixel is calculated from the derivatives of each pixel in the row and column directions, i.e. the gradient modulus of each pixel of u_i is calculated by the following formula:
|∇u_i(x, y)| = sqrt( u_i^x(x, y)^2 + u_i^y(x, y)^2 )
where u_i^x(x, y) was obtained in step 202 and u_i^y(x, y) was obtained in step 203.
Next, in step 206, the divergence of the ratio of the gradient value to the gradient modulus of each pixel is obtained from the gradient value and the gradient modulus of each pixel, i.e. the divergence of ∇u_i(x, y) / |∇u_i(x, y)| is calculated by the following formula:
div( ∇u_i(x, y) / |∇u_i(x, y)| ) = ( u_i^x(x, y) / |∇u_i(x, y)| )_x + ( u_i^y(x, y) / |∇u_i(x, y)| )_y
where the subscripts x and y on the right-hand side denote partial derivatives in the x and y directions.
Next, in step 207, one denoising pass is applied to the face image u_i according to the following formula:
u_{i+1} = u_i + t ( div( ∇u_i(x, y) / |∇u_i(x, y)| ) − λ ( u_i − u_0 ) )
where u_{i+1} represents the face image obtained from u_i after one denoising pass, div( ∇u_i(x, y) / |∇u_i(x, y)| ) represents the divergence of the ratio of the gradient value to the gradient modulus of each pixel, t is the algorithm step size used to adjust the algorithm speed, and λ is a preset adjustable parameter; the smaller λ is, the smoother the resulting image is required to be. By default t = λ = 0.01. It should be understood that in practical applications t and λ may also be set to other values, and they may be equal or different; the possibilities are not enumerated here.
Next, in step 208, for a given threshold ε (for example ε = 0.1), it is determined whether ||u_{i+1} − u_i|| < ε holds. If it holds, u_{i+1} is taken as the face image after the denoising processing and the denoising flow exits; if ||u_{i+1} − u_i|| < ε does not hold, the flow proceeds to step 209.
In step 209, the counter is incremented (i = i + 1) and the process returns to step 202.
The face image is thus denoised iteratively, pass after pass, until the difference between the face image after a denoising pass and the face image before it satisfies the preset condition (||u_{i+1} − u_i|| < ε). Processing the original face image with this variational denoising method effectively ensures the denoising effect.
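As a rough illustration of steps 201 to 209, the iteration can be sketched in Python with NumPy as below. The central differences, divergence and update follow the formulas above; t, λ and ε default to the values given in the text (t = λ = 0.01, ε = 0.1), while the small constant added to the gradient modulus and the `max_iter` cap are assumptions added to keep the sketch numerically safe, not part of the patent.

```python
import numpy as np

def variational_denoise(u0, t=0.01, lam=0.01, eps=0.1, max_iter=200):
    """Sketch of the iterative denoising of steps 201-209 (variational, TV-style flow)."""
    u0 = u0.astype(np.float64)
    u = u0.copy()
    for _ in range(max_iter):
        # Steps 202-203: derivatives in the row and column directions
        # (np.gradient uses central differences in the interior).
        ux, uy = np.gradient(u)
        # Step 205: gradient modulus (small constant avoids division by zero).
        mag = np.sqrt(ux**2 + uy**2) + 1e-8
        # Step 206: divergence of the normalized gradient.
        div = np.gradient(ux / mag, axis=0) + np.gradient(uy / mag, axis=1)
        # Step 207: one denoising pass.
        u_next = u + t * (div - lam * (u - u0))
        # Step 208: stop when successive images are close enough.
        if np.linalg.norm(u_next - u) < eps:
            return u_next
        u = u_next
    return u
```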
The flow of the brightness adjustment processing on the denoised face image is shown in fig. 3.
In step 301, the mean gray level of all pixels in the face image to be adjusted is calculated; the face image to be adjusted is the denoised face image and is denoted u. That is, the mean of all gray levels in u is computed (w and h represent the width and height of the face image, respectively):
mean = sum(u) / (w·h)
Next, in step 302, the gray levels of all pixels in the face image to be adjusted are rescaled according to the computed mean so that the mean gray level becomes a fixed value a, for example a = 136:
u(x, y) = (a / mean) · u(x, y)
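A minimal sketch of steps 301 and 302, assuming the face image is a NumPy array of gray levels and using a = 136 as in the example above:

```python
import numpy as np

def adjust_brightness(u, a=136.0):
    """Rescale gray levels so the mean gray level of the image becomes the fixed value a."""
    u = u.astype(np.float64)
    mean = u.sum() / u.size          # mean = sum(u) / (w * h)
    return (a / mean) * u            # u(x, y) <- (a / mean) * u(x, y)
```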
after the preprocessing of the original face image is completed, step 102 is entered. In step 102, an integrogram of the preprocessed face image is calculated. The calculation of the integral map of the face image is well known in the art and will not be described herein, and can be found in the documents "Paul Viola, Michel Jones, Rapid Object Detection using a Boosted case of Simple Features" and "Yubo WANG, Haizou Al, Bo WU, Chang HUANG, Real time facial Expression Recognition with Adaboost" in the documents 1.
Next, in step 103, the count value of the weak classifiers is reset to zero. A weak classifier h(x, f, p, θ) is composed of a feature f, a threshold θ, and a sign p indicating the direction of the inequality:
h(x, f, p, θ) = 1 if p·f(x) < p·θ, and 0 otherwise.
Here f(x) is the feature value of the weak classifier; f(x) takes a haar feature value, so the feature value of the weak classifier is a haar feature value, whose detailed calculation can also be found in documents 1 and 2 above. θ is a preset threshold used to judge whether the feature value of the weak classifier meets the smiling face condition, and its value can be obtained by training on a large number of training samples. p is used to adjust the direction of the inequality: when f(x) is smaller than the threshold θ, p is 1, and when f(x) is larger than the threshold θ, p is -1. When h is 1, the weak classifier votes smiling face, and when h is 0, it votes non-smiling face. In general a plurality of weak classifiers participate in smiling face recognition, for example 300.
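A weak classifier of this form reduces to a one-line decision; in the sketch below, `f_value`, `theta` and `p` correspond to f(x), θ and p in the formula, with f(x) assumed to be computed elsewhere from the integral image.

```python
def weak_classifier_vote(f_value, theta, p):
    """h(x, f, p, theta): vote 1 (smiling face) if p * f(x) < p * theta, otherwise 0."""
    return 1 if p * f_value < p * theta else 0
```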
Then, in step 104, it is determined whether the count value of the weak classifiers is less than the total number of weak classifiers; if so, the procedure goes to step 105, and if it is greater than or equal to the total number, the procedure goes to step 108.
In step 105, the feature value of the current weak classifier is calculated, and it is determined whether this feature value is smaller than the threshold θ of the current weak classifier; the count value of the weak classifiers serves as the index of the current weak classifier among all weak classifiers. If the feature value is smaller than the threshold θ, the flow proceeds to step 106; if it is greater than or equal to the threshold θ, the flow proceeds to step 107.
In step 106, the alpha value of the current weak classifier is accumulated. The alpha value is the weight of the corresponding weak classifier, and it can be obtained by training on training samples.
In step 107, the count value of the weak classifiers is incremented, after which the flow returns to step 104.
When the count value of the weak classifiers is greater than or equal to the total number of weak classifiers, step 108 is entered. In step 108, it is determined whether the accumulated weight value is greater than a preset threshold n. If the cumulative sum of the alpha values is greater than the threshold n, step 109 is entered; if it is less than or equal to the threshold n, step 110 is entered. In this embodiment, the threshold n is half of the sum of the weight values (alpha values) of all weak classifiers.
In step 109, a smiling face score is calculated; the smiling face detection result is the calculated smiling face score.
In step 110, the smiling face detection result is determined to be a non-smiling face.
In step 111, the result of smile detection is output. For example, if this step is reached from step 109, the output smiling face detection result is the calculated smiling face score; if it is reached from step 110, the output result is that the image is not a smiling face.
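Putting steps 103 to 111 together, the voting loop might look like the sketch below. Each weak classifier is assumed to be a (feature_fn, theta, p, alpha) tuple whose feature_fn computes the haar feature value from the integral image, and the threshold n is half of the sum of all alpha values, as in this embodiment; `score_fn` is a hypothetical placeholder, since the text does not specify how the smiling face score is computed from the accumulated votes.

```python
def adaboost_smile_vote(integral_img, weak_classifiers, score_fn):
    """Sketch of steps 103-110: accumulate the weights (alpha) of the weak classifiers
    that vote "smiling face" and compare the sum with the threshold n."""
    alpha_sum = 0.0
    for feature_fn, theta, p, alpha in weak_classifiers:         # steps 104-107
        f_value = feature_fn(integral_img)                       # haar feature value f(x)
        if p * f_value < p * theta:                              # weak classifier votes smiling face
            alpha_sum += alpha                                   # step 106: accumulate its weight
    n = 0.5 * sum(alpha for _, _, _, alpha in weak_classifiers)  # threshold n (step 108)
    if alpha_sum > n:                                            # step 109: smiling face, compute score
        return score_fn(alpha_sum, n)
    return None                                                  # step 110: non-smiling face
```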
It should be noted that steps 102 to 110 in this embodiment perform smiling face detection with the Adaboost algorithm of the voting mechanism. Introducing the low-complexity Adaboost voting principle into the smiling face detection algorithm greatly reduces the computational complexity of the detection process, so that real-time smiling face detection can be realized on portable devices with limited computing and storage capability. Moreover, because the voting-based Adaboost algorithm is sensitive to noise and has difficulty identifying the small differences between smiling and non-smiling faces, the original face image is preprocessed by denoising and brightness adjustment before the Adaboost algorithm is applied; this remedies the weaknesses of Adaboost in smiling face recognition, so that detection accuracy is ensured while the computational complexity of the detection process is reduced.
A second embodiment of the present invention relates to a smiling face detection method. The second embodiment is substantially the same as the first embodiment, and differs therefrom in that:
in the first embodiment, the original face image is denoised, and then the denoised face image is subjected to brightness adjustment. In the embodiment, firstly, the brightness adjustment is performed, then the denoising processing is performed on the face image after the brightness adjustment, and the smiling face detection is performed on the face image after the denoising processing by using an Adaboost algorithm of a voting mechanism.
Each method embodiment of the present invention can be implemented by software, hardware, firmware, or the like. Whether the present invention is implemented as software, hardware, or firmware, the instruction code may be stored in any type of computer-accessible memory (e.g., permanent or modifiable, volatile or non-volatile, solid or non-solid, fixed or removable media, etc.). Also, the Memory may be, for example, Programmable Array Logic (PAL), Random Access Memory (RAM), Programmable Read Only Memory (PROM), Read-Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), a magnetic disk, an optical disk, a Digital Versatile Disk (DVD), or the like.
A third embodiment of the present invention relates to a smiling face detection system. Fig. 4 is a schematic structural diagram of the smiling face detection system. The smiling face detection system includes:
and the preprocessing module is used for preprocessing the original face image, and the preprocessing comprises denoising processing and brightness adjusting processing.
And the detection module is used for carrying out smiling face detection on the face image preprocessed by the preprocessing module by using an Adaboost algorithm.
And the output module is used for outputting the result of smiling face detection performed by the detection module.
Wherein, the preprocessing module comprises the following sub-modules:
and the denoising processing submodule is used for denoising the original face image.
And the brightness adjusting processing submodule is used for adjusting the brightness of the face image processed by the denoising processing submodule.
Specifically, the denoising processing sub-module further includes:
and the first calculation submodule is used for calculating the derivatives of each pixel in the face image in the row direction and the column direction and triggering the second calculation submodule.
And the second calculation submodule is used for calculating the gradient value and the gradient modulus of each pixel according to the calculated derivatives of each pixel in the row direction and the column direction, and triggering the third calculation submodule.
And the third calculation submodule is used for obtaining the divergence of the ratio of the gradient value to the gradient modulus of each pixel according to the gradient value and the gradient modulus of each pixel and triggering the noise reduction processing submodule.
The noise reduction processing submodule is used for performing one denoising pass on the face image u_i according to the following formula and triggering the iteration submodule:
u_{i+1} = u_i + t ( div( ∇u_i(x, y) / |∇u_i(x, y)| ) − λ ( u_i − u_0 ) )
where i is initialized to 0, u_0 represents the original face image, u_{i+1} represents the face image obtained from u_i after one denoising pass, div( ∇u_i(x, y) / |∇u_i(x, y)| ) represents the divergence of the ratio of the gradient value to the gradient modulus of each pixel, x and y represent the location coordinates of the pixel, t is the algorithm step size used to adjust the algorithm speed, and λ is a preset adjustable parameter; the smaller λ is, the smoother the resulting image is required to be.
The iteration submodule is used for judging whether ||u_{i+1} − u_i|| < ε holds, where ε is a given threshold. If ||u_{i+1} − u_i|| < ε holds, u_{i+1} is taken as the face image after the denoising processing; if it does not hold, i is set to i + 1 and the first calculation submodule is triggered again.
The brightness adjustment processing sub-module further comprises:
and the gray mean value calculating submodule is used for calculating the mean value of the gray levels of all pixels in the face image to be adjusted.
And the adjusting submodule is used for adjusting the gray levels of all pixels in the face image to be adjusted to a fixed value according to the average value of the gray levels of all pixels calculated by the gray level average value calculating submodule.
In the Adaboost algorithm of this embodiment, the weak classifier used for voting is h(x, f, p, θ):
h(x, f, p, θ) = 1 if p·f(x) < p·θ, and 0 otherwise,
where f(x) is the feature value of the weak classifier and takes a haar feature value, θ is a preset threshold used to judge whether the feature value of the weak classifier meets the smiling face condition and can be obtained by training on training samples, and p is used to adjust the direction of the inequality: when f(x) is smaller than the threshold θ, p is 1, and when f(x) is larger than the threshold θ, p is -1. When h is 1, the weak classifier votes smiling face, and when h is 0, it votes non-smiling face.
Specifically, the detection module includes the following sub-modules:
and the integral image calculation submodule is used for calculating the integral image of the preprocessed face image.
And the zeroing sub-module is used for zeroing the count value of the weak classifier.
The count value judgment submodule is used for judging whether the count value of the weak classifiers is smaller than the total number of the weak classifiers; if so, the feature value calculation submodule is triggered, and if not, the threshold judgment submodule is triggered;
The feature value calculation submodule is used for calculating the feature value of the current weak classifier and judging whether this feature value is smaller than the preset threshold θ, wherein the count value of the weak classifiers is the index value of the current weak classifier among all weak classifiers; when it judges that the feature value of the current weak classifier is smaller than the threshold θ, the feature value calculation submodule triggers the weight value accumulation submodule, and when it judges that the feature value of the currently calculated weak classifier is greater than or equal to the threshold θ, it triggers the count value accumulation submodule;
The weight value accumulation submodule is used for accumulating the weight value of the current weak classifier and triggering the count value accumulation submodule;
The count value accumulation submodule is used for accumulating the count value of the weak classifiers and triggering the count value judgment submodule;
the threshold judgment submodule is used for judging whether the weight value accumulated by the weight value accumulation submodule is greater than a preset threshold value n or not, and if the weight value accumulated by the weight value accumulation submodule is greater than the threshold value n, the smiling face detection result is the calculated smiling face score; if the value is less than or equal to the threshold value n, the smiling face detection result is a non-smiling face image. Wherein the threshold n is half of the sum of the weight values of all weak classifiers.
It is to be understood that the first embodiment is a method embodiment corresponding to the present embodiment, and the present embodiment can be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the first embodiment.
A fourth embodiment of the present invention relates to a smiling face detection system. The fourth embodiment is substantially the same as the third embodiment, and differs from it in that:
in a third embodiment, the preprocessing module performs denoising processing on the original face image, and then performs brightness adjustment processing on the denoised face image. In the embodiment, the preprocessing module performs brightness adjustment processing first, and then performs denoising processing on the face image after the brightness adjustment processing. That is, the brightness adjustment processing submodule performs brightness adjustment processing on the original face image, and then the denoising processing submodule performs denoising processing on the face image processed by the brightness adjustment processing submodule.
It is to be understood that the second embodiment is a method embodiment corresponding to the present embodiment, and the present embodiment can be implemented in cooperation with the second embodiment. The related technical details mentioned in the second embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the second embodiment.
It should be noted that each unit mentioned in the device embodiments of the present invention is a logical unit. Physically, one logical unit may be one physical unit, part of one physical unit, or a combination of several physical units; the physical implementation of these logical units is not itself the most important point, and the combination of the functions they implement is the key to solving the technical problem addressed by the present invention. Furthermore, in order to highlight the innovative part of the present invention, the above device embodiments do not introduce units that are less relevant to solving the technical problem addressed by the present invention; this does not mean that no other units exist in the above device embodiments.
While the invention has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (11)

1. A smiling face detection method, comprising the steps of:
preprocessing an original face image, wherein the preprocessing comprises denoising processing and brightness adjusting processing;
carrying out smiling face detection on the preprocessed face image by using an Adaboost algorithm of a voting mechanism;
outputting a result of the smiling face detection;
wherein, in the Adaboost algorithm, the weak classifier used for voting is h(x, f, p, θ):
h(x, f, p, θ) = 1 if p·f(x) < p·θ, and 0 otherwise,
where f(x) is the feature value of the weak classifier, f(x) takes a haar feature value, θ is a preset threshold used for judging whether the feature value of the weak classifier meets the smiling face condition, and p is used for adjusting the direction of the inequality: when f(x) is smaller than the threshold θ, p is 1, and when f(x) is larger than the threshold θ, p is -1; when h is 1, the vote of the weak classifier is smiling face, and when h is 0, the vote of the weak classifier is non-smiling face;
the step of smiling face detection by the Adaboost algorithm includes the following substeps:
B1, calculating an integral image of the preprocessed face image;
B2, resetting the count value of the weak classifiers to zero;
B3, judging whether the count value of the weak classifiers is less than the total number of the weak classifiers, and if so, entering step B4; if it is greater than or equal to the total number of weak classifiers, entering step B7;
B4, calculating the feature value of the current weak classifier, and judging whether the feature value of the currently calculated weak classifier is smaller than the threshold θ, wherein the count value of the weak classifiers is the index value of the current weak classifier among all weak classifiers; when the feature value of the currently calculated weak classifier is judged to be smaller than the threshold θ, entering step B5; when the feature value of the currently calculated weak classifier is judged to be greater than or equal to the threshold θ, entering step B6;
B5, accumulating the weight value of the current weak classifier;
B6, accumulating the count value of the weak classifiers and returning to step B3;
B7, judging whether the accumulated weight value is larger than a preset threshold n; if it is larger than the threshold n, the smiling face detection result is the calculated smiling face score; and if it is less than or equal to the threshold n, the smiling face detection result is a non-smiling face image.
2. The smiling face detection method of claim 1, wherein in the preprocessing of the original face image, the denoising process is performed first, and then the brightness adjustment process is performed.
3. The smiling face detection method according to claim 1, wherein the denoising process is a variational-based face image denoising process, comprising the sub-steps of:
A1, calculating the derivative of each pixel in the face image in the row direction and the column direction;
A2, calculating the gradient value and the gradient modulus of each pixel according to the calculated derivatives of each pixel in the row direction and the column direction;
A3, obtaining the divergence of the ratio of the gradient value to the gradient modulus of each pixel according to the gradient value and the gradient modulus of each pixel;
A4, performing one denoising pass on the face image u_i according to the following formula:
u_{i+1} = u_i + t ( div( ∇u_i(x, y) / |∇u_i(x, y)| ) − λ ( u_i − u_0 ) )
where i is initialized to 0, u_0 represents the original face image, u_{i+1} represents the face image obtained from u_i after one denoising pass, div( ∇u_i(x, y) / |∇u_i(x, y)| ) represents the divergence of the ratio of the gradient value to the gradient modulus of each pixel, x and y represent the location coordinates of the pixel, t is the algorithm step size used to adjust the algorithm speed, and λ is a preset adjustable parameter, a smaller λ requiring a smoother image;
A5, judging whether ||u_{i+1} − u_i|| < ε holds, where ε is a given threshold; if ||u_{i+1} − u_i|| < ε holds, taking u_{i+1} as the face image after the denoising processing; if ||u_{i+1} − u_i|| < ε does not hold, setting i = i + 1 and repeating step A1 through step A5.
4. The smiling face detection method of claim 3, wherein both the t and the λ are 0.01.
5. The smiling face detection method of claim 1, wherein the brightness adjustment process comprises the sub-steps of:
calculating the mean value of the gray levels of all pixels in the face image to be adjusted;
and adjusting the gray levels of all pixels in the face image to be adjusted to a fixed value according to the calculated average value of the gray levels of all pixels.
6. The smiling face detection method of claim 1, wherein the threshold θ is obtained by training a training sample, and the threshold n is half of a sum of weight values of all weak classifiers.
7. A smiling face detection system, comprising:
the system comprises a preprocessing module, a display module and a display module, wherein the preprocessing module is used for preprocessing an original face image, and the preprocessing comprises denoising processing and brightness adjusting processing;
the detection module is used for carrying out smiling face detection on the face image preprocessed by the preprocessing module by using an Adaboost algorithm of a voting mechanism;
an output module for outputting a result of smiling face detection by the detection module;
wherein, in the Adaboost algorithm, the weak classifier used for voting is h(x, f, p, θ):
h(x, f, p, θ) = 1 if p·f(x) < p·θ, and 0 otherwise,
where f(x) is the feature value of the weak classifier, f(x) takes a haar feature value, θ is a preset threshold used for judging whether the feature value of the weak classifier meets the smiling face condition, and p is used for adjusting the direction of the inequality: when f(x) is smaller than the threshold θ, p is 1, and when f(x) is larger than the threshold θ, p is -1; when h is 1, the vote of the weak classifier is smiling face, and when h is 0, the vote of the weak classifier is non-smiling face;
the detection module comprises the following sub-modules:
the integral image calculation submodule is used for calculating the integral image of the preprocessed face image;
the zero-setting sub-module is used for setting the count value of the weak classifier to zero;
the counting value judgment submodule is used for judging whether the counting value of the weak classifiers is smaller than the total number of the weak classifiers or not, if so, the characteristic value calculation operator module is triggered, and if not, the threshold judgment submodule is triggered;
the characteristic value operator module is used for calculating the characteristic value of the current weak classifier and judging whether the characteristic value of the current weak classifier is smaller than the threshold value theta, wherein the counting value of the weak classifier is the index value of the current weak classifier in all the weak classifiers, and the characteristic value operator module triggers the weight value accumulation sub-module when judging that the characteristic value of the current weak classifier is smaller than the threshold value theta; the characteristic value operator module triggers a count value accumulation submodule when judging that the characteristic value of the current calculated weak classifier is larger than or equal to the threshold value theta;
the weight value accumulation submodule is used for accumulating the weight value of the current weak classifier and triggering the count value accumulation submodule;
the count value accumulation submodule is used for accumulating the count value of the weak classifier and triggering the count value judgment submodule;
the threshold judgment submodule is used for judging whether the weight value accumulated by the weight value accumulation submodule is greater than a preset threshold value n or not, and if the weight value accumulated by the weight value accumulation submodule is greater than the threshold value n, the smiling face detection result is the calculated smiling face score; and if the value is less than or equal to the threshold value n, the smiling face detection result is a non-smiling face image.
8. The smiling face detection system of claim 7, wherein the pre-processing module comprises the following sub-modules:
the denoising processing submodule is used for denoising the original face image;
and the brightness adjusting processing submodule is used for adjusting the brightness of the face image processed by the denoising processing submodule.
9. The smiling face detection system of claim 7, wherein the de-noising processing sub-module comprises:
the first calculation submodule is used for calculating the derivatives of each pixel in the face image in the row direction and the column direction and triggering the second calculation submodule;
the second calculation submodule is used for calculating the gradient value and the gradient modulus of each pixel according to the calculated derivatives of each pixel in the row direction and the column direction, and triggering the third calculation submodule;
the third calculation submodule is used for obtaining the divergence of the ratio of the gradient value to the gradient modulus of each pixel according to the gradient value and the gradient modulus of each pixel, and triggering the noise reduction processing submodule;
the noise reduction processing submodule is used for performing one denoising pass on the face image u_i according to the following formula and triggering the iteration submodule:
u_{i+1} = u_i + t ( div( ∇u_i(x, y) / |∇u_i(x, y)| ) − λ ( u_i − u_0 ) )
where i is initialized to 0, u_0 represents the original face image, u_{i+1} represents the face image obtained from u_i after one denoising pass, div( ∇u_i(x, y) / |∇u_i(x, y)| ) represents the divergence of the ratio of the gradient value to the gradient modulus of each pixel, x and y represent the location coordinates of the pixel, t is the algorithm step size used to adjust the algorithm speed, and λ is a preset adjustable parameter, a smaller λ requiring a smoother image;
the iteration submodule is used for judging whether ||u_{i+1} − u_i|| < ε holds, where ε is a given threshold; if ||u_{i+1} − u_i|| < ε holds, taking u_{i+1} as the face image after the denoising processing; if ||u_{i+1} − u_i|| < ε does not hold, setting i = i + 1 and triggering the first calculation submodule again.
10. The smiling face detection system of claim 7, wherein the brightness adjustment processing sub-module comprises:
the gray mean value calculating submodule is used for calculating the mean value of the gray levels of all pixels in the face image to be adjusted;
and the adjusting submodule is used for adjusting the gray levels of all the pixels in the face image to be adjusted to a fixed value according to the average value of the gray levels of all the pixels calculated by the gray level average value calculating submodule.
11. The smiling face detection system of claim 7, wherein the threshold θ is trained by training samples, and the threshold n is half of the sum of the weight values of all weak classifiers.
CN 201010276313 2010-09-09 2010-09-09 Smiling face detecting method and system Active CN101950356B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010276313 CN101950356B (en) 2010-09-09 2010-09-09 Smiling face detecting method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201010276313 CN101950356B (en) 2010-09-09 2010-09-09 Smiling face detecting method and system

Publications (2)

Publication Number Publication Date
CN101950356A CN101950356A (en) 2011-01-19
CN101950356B true CN101950356B (en) 2013-08-28

Family

ID=43453851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010276313 Active CN101950356B (en) 2010-09-09 2010-09-09 Smiling face detecting method and system

Country Status (1)

Country Link
CN (1) CN101950356B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012139271A1 (en) * 2011-04-11 2012-10-18 Intel Corporation Smile detection techniques

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080025576A1 (en) * 2006-07-25 2008-01-31 Arcsoft, Inc. Method for detecting facial expressions of a portrait photo by an image capturing electronic device
CN101699470A (en) * 2009-10-30 2010-04-28 华南理工大学 Extracting method for smiling face identification on picture of human face
CN101702199A (en) * 2009-11-13 2010-05-05 深圳华为通信技术有限公司 Smiling face detection method and device and mobile terminal

Also Published As

Publication number Publication date
CN101950356A (en) 2011-01-19

Similar Documents

Publication Publication Date Title
US9619708B2 (en) Method of detecting a main subject in an image
US8463049B2 (en) Image processing apparatus and image processing method
CN103530599B (en) The detection method and system of a kind of real human face and picture face
WO2020018359A1 (en) Three-dimensional living-body face detection method, face authentication recognition method, and apparatuses
US20160364849A1 (en) Defect detection method for display panel based on histogram of oriented gradient
US8577099B2 (en) Method, apparatus, and program for detecting facial characteristic points
JP6688277B2 (en) Program, learning processing method, learning model, data structure, learning device, and object recognition device
CN108230292B (en) Object detection method, neural network training method, device and electronic equipment
US8111877B2 (en) Image processing device and storage medium storing image processing program
AU2017201281B2 (en) Identifying matching images
US9489566B2 (en) Image recognition apparatus and image recognition method for identifying object
US10861128B2 (en) Method of cropping an image, an apparatus for cropping an image, a program and a storage medium
US11475707B2 (en) Method for extracting image of face detection and device thereof
US20130272575A1 (en) Object detection using extended surf features
EP2234388B1 (en) Object detection apparatus and method
US8094971B2 (en) Method and system for automatically determining the orientation of a digital image
CN111104830A (en) Deep learning model for image recognition, training device and method of deep learning model
CN105046278A (en) Optimization method of Adaboost detection algorithm on basis of Haar features
US20120076418A1 (en) Face attribute estimating apparatus and method
CN111814659B (en) Living body detection method and system
CN101950356B (en) Smiling face detecting method and system
CN113610071B (en) Face living body detection method and device, electronic equipment and storage medium
CN112560557A (en) People number detection method, face detection device and electronic equipment
Zhou et al. On contrast combinations for visual saliency detection
Robinson et al. Foreground segmentation in atmospheric turbulence degraded video sequences to aid in background stabilization

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20170210

Address after: Room 32, building 3205F, No. 707, Zhang Yang Road, free trade zone,, China (Shanghai)

Patentee after: Xin Xin Finance Leasing Co.,Ltd.

Address before: 201203 Shanghai city Zuchongzhi road Pudong New Area Zhangjiang hi tech park, Spreadtrum Center Building 1, Lane 2288

Patentee before: SPREADTRUM COMMUNICATIONS (SHANGHAI) Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20170712

Address after: 100033 room 2062, Wenstin Executive Apartment, 9 Financial Street, Beijing, Xicheng District

Patentee after: Xin Xin finance leasing (Beijing) Co.,Ltd.

Address before: Room 32, building 707, Zhang Yang Road, China (Shanghai) free trade zone, 3205F

Patentee before: Xin Xin Finance Leasing Co.,Ltd.

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20110119

Assignee: SPREADTRUM COMMUNICATIONS (SHANGHAI) Co.,Ltd.

Assignor: Xin Xin finance leasing (Beijing) Co.,Ltd.

Contract record no.: 2018990000163

Denomination of invention: Smiling face detecting method and system

Granted publication date: 20130828

License type: Exclusive License

Record date: 20180626

EE01 Entry into force of recordation of patent licensing contract
TR01 Transfer of patent right

Effective date of registration: 20200306

Address after: 201203 Zuchongzhi Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai 2288

Patentee after: SPREADTRUM COMMUNICATIONS (SHANGHAI) Co.,Ltd.

Address before: 100033 room 2062, Wenstin administrative apartments, 9 Financial Street B, Xicheng District, Beijing.

Patentee before: Xin Xin finance leasing (Beijing) Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20200603

Address after: 361012 unit 05, 8th floor, building D, Xiamen international shipping center, No.97 Xiangyu Road, Xiamen area, Xiamen pilot Free Trade Zone, Xiamen City, Fujian Province

Patentee after: Xinxin Finance Leasing (Xiamen) Co.,Ltd.

Address before: 201203 Zuchongzhi Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai 2288

Patentee before: SPREADTRUM COMMUNICATIONS (SHANGHAI) Co.,Ltd.

EC01 Cancellation of recordation of patent licensing contract

Assignee: SPREADTRUM COMMUNICATIONS (SHANGHAI) Co.,Ltd.

Assignor: Xin Xin finance leasing (Beijing) Co.,Ltd.

Contract record no.: 2018990000163

Date of cancellation: 20210301

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20110119

Assignee: SPREADTRUM COMMUNICATIONS (SHANGHAI) Co.,Ltd.

Assignor: Xinxin Finance Leasing (Xiamen) Co.,Ltd.

Contract record no.: X2021110000010

Denomination of invention: Smiling face detection method and system

Granted publication date: 20130828

License type: Exclusive License

Record date: 20210317

TR01 Transfer of patent right

Effective date of registration: 20230714

Address after: 201203 Shanghai city Zuchongzhi road Pudong New Area Zhangjiang hi tech park, Spreadtrum Center Building 1, Lane 2288

Patentee after: SPREADTRUM COMMUNICATIONS (SHANGHAI) Co.,Ltd.

Address before: 361012 unit 05, 8 / F, building D, Xiamen international shipping center, 97 Xiangyu Road, Xiamen area, Xiamen pilot Free Trade Zone, Fujian Province

Patentee before: Xinxin Finance Leasing (Xiamen) Co.,Ltd.