CN116473520B - Electronic equipment and skin analysis method and device thereof - Google Patents
- Publication number
- CN116473520B (application CN202310566310.9A)
- Authority
- CN
- China
- Prior art keywords
- frame
- image
- image quality
- skin analysis
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/44—Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
- A61B5/441—Skin evaluation, e.g. for skin disorder diagnosis
- A61B5/444—Evaluating skin marks, e.g. mole, nevi, tumour, scar
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7203—Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
- A61B2576/00—Medical imaging apparatus involving image processing or analysis
Abstract
The application provides an electronic device and a skin analysis method and device thereof. The skin analysis method comprises the following steps: controlling a camera device to capture a video stream of the target skin, the video stream comprising multiple frames of images within a preset time period; selecting at least one frame that meets an image quality requirement from the frames of the video stream as at least one target image; performing skin analysis according to the at least one target image to obtain a skin analysis result; and outputting the skin analysis result. In this way, the video stream is captured in real time in an open environment, rather than requiring the user to pose for still pictures, which achieves efficient, unobtrusive capture and improves the user experience; moreover, all images used for skin analysis can be held above the same quality level, which improves the stability and comparability of skin analysis results, removes noise caused by poor image quality, and improves the accuracy of the skin analysis.
Description
Technical Field
The application relates to the field of cosmetology, and in particular to an electronic device and a skin analysis method and device thereof.
Background
In existing skin analysis methods, an input face image is processed and analyzed directly by an AI algorithm to obtain various skin parameters (roughness, spots, pores, wrinkles, acne, and the like). In this process, image quality has a large influence on the analysis result: even when the same person is measured and analyzed within the same time period, the results can differ greatly due to image-quality factors such as sharpness, distance from the camera, deflection angle, and illumination. The fluctuation caused by uneven image quality weakens the validity of the skin analysis, as well as the comparability between measurements of different people and between repeated measurements of the same person.
Disclosure of Invention
The present application aims to solve at least one of the technical problems in the prior art.
To solve these technical problems, the technical solution of the application is as follows:
In a first aspect, the present application provides a skin analysis method applied to an electronic device, the skin analysis method comprising the following steps:
controlling a camera device to capture a video stream of the target skin, the video stream comprising multiple frames of images within a preset time period;
selecting at least one frame that meets an image quality requirement from the frames of the video stream as at least one target image;
performing skin analysis according to the at least one target image to obtain a skin analysis result; and
outputting the skin analysis result.
In a second aspect, the present application provides an electronic device comprising a processor, a memory, and a camera device, the camera device and the memory each being electrically connected to the processor; the memory stores a computer program, and the processor runs the computer program to perform the steps of the skin analysis method according to the first aspect.
In a third aspect, the present application further provides a computer-readable storage medium storing a computer program executable by a processor to perform the steps of the skin analysis method according to the first aspect.
In a fourth aspect, the present application further provides a skin measurement device, comprising:
a capture control module, configured to control the camera device to capture a video stream of the target skin, the video stream comprising multiple frames of images within a preset time period;
an image selection module, configured to select at least one frame that meets the image quality requirement from the frames of the video stream as at least one target image;
a skin analysis module, configured to perform skin analysis according to the at least one target image to obtain a skin analysis result; and
an output control module, configured to output the skin analysis result.
Compared with the prior art, the application has the following beneficial effects:
the video stream is captured in real time in an open environment, rather than requiring the user to pose for still pictures, which achieves efficient, unobtrusive capture and improves the user experience; moreover, all images used for skin analysis can be held above the same quality level, which improves the stability and comparability of skin analysis results, removes noise caused by poor image quality, and improves the accuracy of the skin analysis.
Drawings
Fig. 1 is a schematic block diagram of an electronic device according to an embodiment of the application.
Fig. 2 is a flow chart of a skin analysis method according to an embodiment of the application.
Fig. 3 is a statistical example of various cases of marking a frame image in an embodiment of the present application.
Fig. 4 is a flow chart of a skin analysis method according to another embodiment of the application.
Fig. 5 is a schematic block diagram of a skin analysis device according to an embodiment of the application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present application and should not be construed as limiting the application.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the present application, unless explicitly specified and limited otherwise, the terms "connected," "fixed," and the like are to be construed broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium; it may also denote internal communication between two elements or an interaction between them. The specific meaning of these terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
Referring to fig. 1, fig. 1 is a schematic block diagram of an electronic device 1 according to an embodiment of the application. The electronic device 1 is configured to perform skin analysis. The electronic device 1 may be, but is not limited to, a skin analyzer, a beauty instrument with a skin detection function, an intelligent terminal, a skin analysis system composed of a camera device with a video capture function and a skin analysis device with a data analysis function, or any other device capable of implementing video capture and data analysis, which is not limited herein. Specifically, the electronic device 1 comprises a memory 11 and a processor 12. The memory 11 is electrically connected to the processor 12. The memory 11 stores a computer program, and the processor 12 runs the computer program to perform the steps of the skin analysis method. The electronic device 1 further comprises a camera device 13 electrically connected to the processor 12. The camera device 13 may comprise a single camera or a camera system composed of a plurality of cameras, as long as it can face the user's skin during measurement and capture image data in a timely and accurate manner; this is not limited herein. In this embodiment, when the electronic device 1 integrates both the video capture function and the data analysis function, the camera device 13 comprises a front camera. It will be appreciated that in other embodiments the camera device 13 may comprise both a front camera and a rear camera, and the camera used to capture the video stream of the target skin may be switched between them as the situation requires.
Referring to fig. 2, fig. 2 is a flow chart of a skin analysis method according to an embodiment of the application. The skin analysis method may be applied to the electronic device 1 described above to realize skin analysis. It will be appreciated that steps of the skin analysis method may be added, deleted, or reordered, which is not limited herein. Specifically, the skin analysis method comprises the following steps:
step S210: the camera device 13 is controlled to acquire a video stream at the target skin, the video stream including a number of frame images within a preset period of time.
Step S220: and selecting at least one frame of image meeting the image quality requirement from a plurality of frames of images of the video stream as at least one frame of target image.
Step S230: and performing skin analysis according to the at least one frame of target image to obtain a skin analysis result.
Step S240: and outputting the skin analysis result.
In the application, the skin analysis method proceeds as follows: first, the camera device 13 is controlled to capture a video stream of the target skin; then multiple frames of images are extracted from the captured video stream; and finally, frames meeting the image quality requirement are selected from these frames for skin analysis. In this way, the video stream is captured in real time in an open environment, rather than requiring the user to pose for still pictures, which achieves efficient, unobtrusive capture and improves the user experience; moreover, all images used for skin analysis can be held above the same quality level, which improves the stability and comparability of skin analysis results, removes noise caused by poor image quality, and improves the accuracy of the skin analysis.
The processor 12 controls the camera device 13 to capture a video stream of the target skin within a preset time period. The duration of the preset time period may be set according to actual needs, for example 5 seconds, 10 seconds, or 15 seconds, and is not limited herein. It will be appreciated that the longer the video stream, the more frames it contains and therefore the higher the probability of finding a high-quality image, at the cost of a larger amount of computation. In practical applications, a trade-off therefore has to be made between image quality and computational load.
Because the frames of the video stream are generated at different moments, the shooting parameters may change slightly from frame to frame, so frames shot under different parameters differ in image quality: some frames are relatively better, others relatively worse. Selecting from the frames of the video stream at least one frame that meets the image quality requirement, each such frame serving as a target image, therefore removes the noise that frames of insufficient quality would introduce into the skin analysis, and further improves its accuracy.
In some embodiments, there are various ways to select at least one frame meeting the image quality requirement from the frames of the video stream; this is not limited herein.
Two of these ways are described in detail below.
The first way:
Selecting at least one frame that meets the image quality requirement from the frames of the video stream as at least one target image comprises the following steps:
extracting multiple frames of images from the video stream;
scoring each of the extracted frames to obtain an image quality score for each frame; and
selecting, according to the image quality score of each frame, at least one frame that meets the image quality requirement from the extracted frames as the at least one target image.
Thus, in the first way, the image quality of every extracted frame must be scored; once the image quality score of each frame is available, at least one frame can be selected as the at least one target image.
In some embodiments, selecting, according to the image quality score of each frame, at least one frame that meets the image quality requirement as the at least one target image comprises:
selecting, according to the image quality scores, the single frame with the highest score or the two or more frames with the highest scores as the at least one target image; or
randomly selecting, from the frames whose image quality scores meet the image quality requirement, at least one frame as the at least one target image.
In this way, the frame with the best (or relatively best) image quality among the frames of the video stream is preferentially selected as the target image, which further improves the accuracy of the skin analysis.
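The first way can be sketched as follows. This is a minimal illustration, not the patent's implementation; `score_frame` is a hypothetical stand-in for whatever scoring model is used, and its name and signature are assumptions of this sketch.

```python
from typing import Callable, List, Sequence, Tuple

def select_best_frames(
    frames: Sequence,
    score_frame: Callable[[object], float],  # hypothetical scoring-model stand-in
    k: int = 1,
) -> List[Tuple[int, float]]:
    """Score every frame, then return the (index, score) pairs of the k
    highest-scoring frames: the 'highest or relatively highest' selection."""
    scored = [(i, score_frame(f)) for i, f in enumerate(frames)]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

# Illustration with dummy scores standing in for a real model:
scores = {"frame0": 0.2, "frame1": 0.9, "frame2": 0.5}
best = select_best_frames(list(scores), scores.get, k=2)
print(best)  # [(1, 0.9), (2, 0.5)]
```

Note that every frame is scored before any selection happens, which is exactly the cost the second way below avoids.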
The second way:
Selecting at least one frame that meets the image quality requirement from the frames of the video stream as at least one target image comprises the following steps:
extracting multiple frames of images from the video stream;
scoring the extracted frames one by one, starting from the first frame: after each frame is scored, judging whether its image quality score meets the image quality requirement, and if it does, determining that frame as a target image;
if the image quality score of the frame does not meet the requirement, scoring the next frame, and so on, until a target image whose score meets the image quality requirement is determined.
For example: first, the first frame of the extracted frames is scored to obtain its image quality score;
whether the image quality score of the first frame meets the image quality requirement is judged, and if so, the first frame is determined as a target image;
if the image quality score of the first frame does not meet the requirement, the second frame is scored to obtain its image quality score;
whether the image quality score of the second frame meets the requirement is judged again, and if so, the second frame is determined as a target image;
if the image quality score of the second frame does not meet the requirement, the third frame is scored; and so on, until a target image whose score meets the image quality requirement is determined.
In the second way, therefore, scoring starts from the first frame, and whether the first frame meets the image quality requirement is judged; if it does, it is taken as the target image. If only one target image is needed for skin analysis, the selection process can end there, without scoring the second, third, and subsequent frames or judging whether they meet the requirement, which greatly reduces the amount of computation. If a second, third, or further target image is needed after the first is determined, the same judging process continues until those target images are selected, and the selection process then ends.
In some embodiments, if no frame meeting the image quality requirement is found after all of the extracted frames have been scored and judged, a further group of frames, different from those already examined, is extracted from the video stream and scored in the same way, and this repeats until a target image meeting the image quality requirement is selected.
In some embodiments, a group of frames is extracted from the video stream; for example, the group may be 10 consecutive frames taken from the start of the video stream, and the next group is extracted starting from the end of the previous group (for example, 10 consecutive frames starting from the 11th frame), and so on, until a target image whose score meets the image quality requirement is determined. In other embodiments, a group of frames may be taken from any position of the video stream, and the next group is taken from another position that does not overlap the previous group, as long as no frame is examined twice. For example, one group may be 10 consecutive frames starting from the 21st frame, and the next group 10 consecutive frames starting from the 41st frame.
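The second way's early-exit scan, including extraction in successive non-overlapping groups, can be sketched as below. The group size of 10 follows the example above; `score_frame` and `threshold` are illustrative assumptions, not values fixed by the patent.

```python
from typing import Callable, List, Sequence

def select_first_qualified(
    frames: Sequence,
    score_frame: Callable[[object], float],  # hypothetical scoring model
    threshold: float,
    needed: int = 1,
    group_size: int = 10,
) -> List:
    """Scan the video-stream frames in non-overlapping groups of
    `group_size`, scoring frames one by one and stopping as soon as
    `needed` frames meet the threshold; no later frame is ever scored."""
    selected: List = []
    for start in range(0, len(frames), group_size):
        for frame in frames[start:start + group_size]:
            if score_frame(frame) >= threshold:
                selected.append(frame)
                if len(selected) == needed:
                    return selected
    return selected  # may be shorter than `needed` if the stream runs out

# Frames 0..24 with score frame/100; the first score >= 0.05 is frame 5.
print(select_first_qualified(list(range(25)), lambda f: f / 100, 0.05))  # [5]
```

Compared with the first way, this trades the guarantee of picking the globally best frame for a much smaller number of scoring-model invocations.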
In some embodiments, scoring the image quality of each frame comprises:
marking the quality of each frame along at least one dimension to obtain at least one marking result, the at least one dimension comprising at least one of sharpness, frontality, and illuminance; and
obtaining the image quality score of each frame according to its marking result.
It is to be understood that in other embodiments, the at least one dimension may also include other dimensions, not limited herein.
Referring to fig. 3, fig. 3 is a statistical example of various cases of marking a frame image according to an embodiment of the present application.
Fig. 3 shows three dimensions along which each frame image is marked: sharpness, frontality, and illuminance. Each dimension has a pass case and a fail case: the frame is marked with a first label for that dimension when it passes, and with a second label when it fails, the two labels having different meanings. For example, in the sharpness dimension, the first label is 1 (sharp) and the second label is 0 (not sharp). A frame passes on sharpness if the face is sharp and skin features are distinguishable; it fails in cases such as obvious blur, fine lines or nevi being indistinguishable, or local blurring of the face. In the frontality dimension, the first label is 1 (frontal) and the second label is 0 (not frontal). A frame passes on frontality if the face is imaged frontally without deflection or occlusion; it fails in cases such as the head being lowered or raised, deviation to the left or right, the face being incomplete or occluded, or an obvious facial expression. In the illuminance dimension, the first label is 1 (good illumination) and the second label is 0 (poor illumination). A frame passes on illuminance if the lighting is sufficient and uniform, there are no strong highlights, and any color cast does not affect the judgment of nevi and acne; it fails in cases such as lighting that is too strong or too dark, uneven lighting, a reddish or greenish cast, or strong highlights.
In some embodiments, outputting the marking result for each frame comprises:
outputting a marking result for each of the at least one dimension, the marking result for each dimension indicating the probability that the frame performs well in that dimension.
For example, the image quality evaluation of a target image may proceed as follows:
the output of the image quality evaluation model for one frame is On = (Pqo, Pzo, Pgo), where Pqo, Pzo, and Pgo respectively represent the probability that the frame is sharp, frontal, and well lit, each in the range [0, 1]. For example, Pqo = 0.7 indicates that the probability that the frame image is sharp is 0.7; Pzo = 0.3 indicates that the probability that it is frontal is 0.3; and Pgo = 0.5 indicates that the probability that the illumination is good is 0.5 (between good and poor).
In some embodiments, obtaining the image quality score of each frame according to its marking result comprises:
comprehensively scoring the evaluated probabilities of the frame's dimensions.
For example, in the example above, the composite score of the frame image = (Pqo + Pzo + Pgo) / 3 × 100%. The composite score is the image quality score of the frame. It will be appreciated that, in other embodiments, once the probability that the frame performs well in each dimension is obtained, the comprehensive scoring need not be the simple average above; each dimension may instead be given a weighting coefficient before averaging, or other methods may be used, which is not limited herein.
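Using the example probabilities above, the composite score can be computed as follows; equal weights reproduce the document's simple average, and the weighting parameter only illustrates the hedged variant the paragraph mentions.

```python
def composite_score(pqo: float, pzo: float, pgo: float,
                    weights: tuple = (1.0, 1.0, 1.0)) -> float:
    """Weighted average of the sharpness (Pqo), frontality (Pzo), and
    illuminance (Pgo) probabilities, expressed as a percentage.
    Equal weights reproduce (Pqo + Pzo + Pgo) / 3 x 100%."""
    wq, wz, wg = weights
    return (wq * pqo + wz * pzo + wg * pgo) / (wq + wz + wg) * 100.0

# The worked example frame: Pqo = 0.7, Pzo = 0.3, Pgo = 0.5
print(round(composite_score(0.7, 0.3, 0.5), 2))  # 50.0
```

A frame is then qualified or not by comparing this percentage with the system threshold described next.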
In some embodiments, selecting at least one frame that meets the image quality requirement comprises:
comparing the image quality score of each frame with a threshold set by the system: a frame whose score exceeds the threshold is qualified, and a frame whose score is below the threshold is unqualified, in which case selection continues among the remaining frames, until at least one frame meeting the image quality requirement is selected as the at least one target image.
In some embodiments, before marking the quality of each frame along at least one dimension to obtain at least one marking result, the method further comprises:
inputting a number of training images covering the different cases of the at least one dimension;
training an intelligent scoring model on these training images; and
after the intelligent scoring model is trained, scoring the image quality of each frame by inputting the frame into the intelligent scoring model.
In some embodiments, the training process of the intelligent scoring model is as follows:
1. Data marking. All training images are marked and classified as described above and divided into training samples and test samples. Each image is marked with a label Ln = (Pq, Pz, Pg), where Ln is the label of the n-th image and Pq, Pz, and Pg respectively take the value 0 or 1 in the sharpness, frontality, and illuminance dimensions: 0 denotes not sharp, not frontal, or poorly lit, and 1 denotes sharp, frontal, or well lit. For example, an image that is sharp, not frontal, and poorly lit has the label (1, 0, 0).
2. CNN model construction. A CNN (convolutional neural network) model with three binary outputs (sharpness, frontality, illuminance) is constructed for face image evaluation; the model loss function and optimizer are set, and the model parameters are initialized.
Model training: the training samples are input into the CNN model, and the model parameters undergo multiple rounds of learning iterations according to the loss function and optimizer settings until the model converges.
Model evaluation: the test samples are input into the trained CNN model, the classification results predicted by the model are compared with the labels, and the model prediction precision is evaluated. If the precision is greater than a control threshold (for example, 99%), model training is complete; otherwise, the learning parameters or the number of iterations in model training and model evaluation are adjusted and the model is retrained.
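The train-then-evaluate loop above can be sketched as follows; `train_fn` and `evaluate_fn` are hypothetical stand-ins for CNN training and test-set evaluation, which the patent does not specify in code form.

```python
# Hedged sketch of the accept/retrain loop: train, evaluate precision on the
# test samples, and accept the model only once precision exceeds the control
# threshold; otherwise adjust parameters and retrain.

def train_until_accepted(train_fn, evaluate_fn, threshold=0.99, max_attempts=5):
    """train_fn() -> model; evaluate_fn(model) -> precision in [0, 1].
    Returns (model, precision) once precision > threshold, else None."""
    for _ in range(max_attempts):
        model = train_fn()
        precision = evaluate_fn(model)
        if precision > threshold:
            return model, precision
        # in a real system, learning parameters or iteration counts would
        # be adjusted here before retraining
    return None

# toy usage with stubs standing in for CNN training/evaluation:
# first attempt falls short of the 99% threshold, second passes
attempts = iter([0.97, 0.995])
out = train_until_accepted(lambda: "cnn", lambda m: next(attempts))
print(out)  # ('cnn', 0.995)
```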
3. Model precision definition. Assume that the samples predicted by the model to be positive (clear, frontal and well lit) number n in total, of which m are predicted correctly; the model precision is then m/n.
Precision is selected as the model evaluation index so that the error rate among the screened positive-sample images is smaller, ensuring that the screened images are of higher quality.
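One reading of the m/n measure above (precision over the screened positive predictions, which matches the stated goal of a low error rate among the screened images) can be written out as code; the function name and data are illustrative only.

```python
# Precision as described: of the samples predicted positive, how many are
# truly positive. y_true/y_pred hold 0/1 values per sample.

def precision(y_true, y_pred):
    """Fraction of predicted-positive samples that are truly positive (m/n)."""
    predicted_positive = [t for t, p in zip(y_true, y_pred) if p == 1]
    if not predicted_positive:
        return 0.0
    return sum(predicted_positive) / len(predicted_positive)

y_true = [1, 1, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0]  # n = 3 predicted positive, m = 2 of them correct
print(precision(y_true, y_pred))  # 2/3
```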
4. Model storage and deployment. The trained model is stored and deployed in the actually operated system.
It will be appreciated that in still other embodiments, the above-described smart scoring model may be trained and derived by other algorithms, without limitation.
In some embodiments, the performing the skin analysis according to the at least one frame of target image to obtain a skin analysis result includes:
extracting key point information in each frame of target image, wherein the key point information comprises at least one of nasolabial folds, dark circles, pores, wrinkles, blackheads and freckles; and
And evaluating the quality of the skin according to the key point information in the at least one frame of target image to obtain a skin analysis result.
In some embodiments, the at least one frame of target image may be a single frame of target image or at least two frames of target images. When the target image is a single frame of target image, skin analysis is performed on the target image to obtain the skin analysis result. When the target image is a multi-frame target image, skin analysis is performed on each of the multi-frame target images to obtain a plurality of analysis results, and a comprehensive analysis is performed on the plurality of analysis results to obtain the skin analysis result.
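A sketch of this single-frame/multi-frame branch follows. The averaging rule for the comprehensive analysis is an assumption (the patent only says the results are analyzed comprehensively), and `analyze_frame` is a hypothetical per-frame analyzer.

```python
# One frame: analyze it directly. Several frames: analyze each, then combine
# the per-frame results; averaging each numeric metric is an assumed rule.

def analyze(frames, analyze_frame):
    """Return a skin analysis result from one or more target frames."""
    results = [analyze_frame(f) for f in frames]
    if len(results) == 1:
        return results[0]
    # comprehensive analysis (assumed): average each metric across frames
    keys = results[0].keys()
    return {k: sum(r[k] for r in results) / len(results) for k in keys}

# toy per-frame analyzer returning metric dicts keyed by skin feature
fake = lambda f: {"pores": f, "wrinkles": f * 2}
print(analyze([2], fake))     # {'pores': 2, 'wrinkles': 4}
print(analyze([1, 3], fake))  # {'pores': 2.0, 'wrinkles': 4.0}
```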
In some embodiments, the method further comprises:
And outputting the image quality score of the at least one frame of target image.
In some embodiments, when the at least one frame of target image is a frame of target image, an image quality score of the frame of target image is output.
In other embodiments, when the at least one target image is two or more target images, a composite image quality score of the two or more target images may be output.
Referring to fig. 4, fig. 4 is a flow chart of a skin analysis method according to another embodiment of the application.
Step S410: the camera device 13 is controlled to acquire a video stream at the target skin, the video stream including a number of frame images within a preset period of time.
Step S420: and extracting images from the video stream.
Step S430: the extracted image is scored for image quality.
Step S440: it is determined whether the image quality score is greater than the system setting threshold, if so, step S450 is entered, otherwise, step S410 is entered.
Step S450: and performing skin analysis according to the at least one frame of target image to obtain a skin analysis result.
Step S460: and outputting the skin analysis result and outputting the image quality score.
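Steps S410 to S460 can be sketched end to end as follows; `capture`, `score` and `analyze` are hypothetical stand-ins for the camera control, scoring model and skin-analysis modules, since the patent defines the flow but not an API.

```python
# Hedged end-to-end sketch of steps S410-S460: capture frames, score each,
# re-capture until one frame passes the threshold, then analyze and output.

def skin_analysis_pipeline(capture, score, analyze, threshold, max_rounds=10):
    for _ in range(max_rounds):           # S410: acquire a video stream
        for frame in capture():           # S420: extract images
            s = score(frame)              # S430: image quality scoring
            if s > threshold:             # S440: compare with the threshold
                result = analyze(frame)   # S450: skin analysis
                return result, s          # S460: output result and score
    return None

# toy usage: the first round yields no qualifying frame, the second does
rounds = iter([[0.2, 0.3], [0.4, 0.9]])
out = skin_analysis_pipeline(lambda: next(rounds), lambda f: f,
                             lambda f: {"ok": True}, threshold=0.8)
print(out)  # ({'ok': True}, 0.9)
```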
The general idea of the application for skin analysis is as follows: continuous multi-frame images are extracted from the video stream, face image labels of the continuous multi-frame photos are extracted according to an AI algorithm, the images are scored, and the image quality scores are stored and output. Images that do not meet the image quality requirement are eliminated. If no image meeting the quality requirement is found among the multi-frame images, continuous multi-frame images are extracted from the video stream again, and image labels are extracted and scored until an image meeting the image quality requirement is screened out as the target image. Then, skin analysis is performed on the target image using a skin analysis AI algorithm. By connecting the image quality detection AI algorithm and the skin analysis AI algorithm in series, skin analysis is performed on the basis of a high-quality image, so that the analysis result is more stable and the measurement results of different times are more comparable.
The present application also provides a computer readable storage medium storing a computer program executable by a processor to perform the steps of the skin analysis method:
controlling the camera device 13 to acquire a video stream at the target skin, wherein the video stream comprises a plurality of frames of images in a preset time period;
Selecting at least one frame of image meeting the image quality requirement from a plurality of frames of images of the video stream as at least one frame of target image;
performing skin analysis according to the at least one frame of target image to obtain a skin analysis result; and
And outputting the skin analysis result.
In addition, the memory 11 may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk. The processor 12 may be a general purpose processor, a digital signal processor, an application specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The processor may implement or perform the methods, steps and logic blocks disclosed in the embodiments of the present invention. The processor may be an image processor or a microprocessor, or any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in hardware in a decoding processor, or in a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, or other storage media well known in the art. The storage medium is located in the memory; the processor 12 reads the application, computer instructions or data in the memory and, in combination with its hardware, performs the steps of the above-described method performed by the electronic device 1.
Referring to fig. 5, the present application further provides a skin analysis device 500, where the skin analysis device 500 includes:
the acquisition control module 510 is configured to control the camera to acquire a video stream at the target skin, where the video stream includes a plurality of frames of images within a preset time period;
An image selecting module 520, configured to select at least one frame image meeting an image quality requirement from a plurality of frame images of the video stream as at least one frame target image;
the skin analysis module 530 is configured to perform skin analysis according to the at least one frame of target image to obtain a skin analysis result; and
And an output control module 540, configured to output the skin analysis result.
In the several embodiments provided herein, it should be understood that the disclosed skin analysis device 500 may be implemented in other ways. The above-described embodiments of the skin analysis device 500 are merely illustrative; for example, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation, such as multiple units or components being combined or integrated into another system, or some features being omitted or not performed. In addition, the coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via some interfaces, devices or units, and may be electrical or in other forms.
In view of the above, the present application has the above-mentioned excellent characteristics, so that it can be used to improve the performance and practicality of the prior art, and is a product with great practical value.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but any modifications, equivalents, or improvements made within the spirit and principles of the present application should be included in the scope of the present application.
Claims (8)
1. The skin analysis method is applied to an electronic device and is characterized by comprising the following steps:
Controlling a camera device to acquire a video stream at a target skin, wherein the video stream comprises a plurality of frames of images in a preset time period;
Selecting at least one frame image meeting the image quality requirement from a plurality of frame images of the video stream as at least one frame target image, wherein selecting at least one frame image meeting the image quality requirement from the plurality of frame images of the video stream as at least one frame target image comprises the following steps:
extracting multi-frame images from the video stream;
Sequentially scoring the multi-frame images from a first frame image, after scoring each frame image to obtain an image quality score, judging whether the image quality score of the frame image meets the image quality requirement, and if the image quality score of the frame image meets the image quality requirement, determining the frame image as a target image;
If the image quality score of the frame image does not meet the image quality requirement, scoring the next frame image in the multi-frame image until determining a target image of which the image quality score meets the image quality requirement;
When the target image meeting the image quality requirement is not found from the multi-frame images, extracting a next group of multi-frame images different from the multi-frame images from the video stream, and grading the image quality of the next group of multi-frame images again until the target image meeting the image quality requirement is selected;
Wherein the position of each group of multi-frame images in the video stream is random, and the frame images at the same position only appear in one group of multi-frame images;
performing skin analysis according to the at least one frame of target image to obtain a skin analysis result; and
And outputting the skin analysis result.
2. The method for analyzing skin conditions according to claim 1, characterized in that the method further comprises:
And outputting the image quality score of the at least one frame of target image.
3. The method according to claim 1, wherein the performing the skin analysis according to the target image to obtain the skin analysis result comprises:
when the target image is a frame of target image, performing skin analysis on the target image to obtain a skin analysis result; or alternatively
When the target image is a multi-frame target image, respectively performing skin analysis on the multi-frame target image to obtain a plurality of analysis results, and comprehensively analyzing according to the plurality of analysis results to obtain the skin analysis results.
4. The method for analyzing skin conditions according to claim 1, characterized in that the method further comprises:
inputting a number of training images covering different situations of the at least one dimension;
Training an intelligent scoring model according to the training images;
and after the intelligent scoring model is trained, performing image quality scoring on each frame of image, namely inputting each frame of image into the intelligent scoring model to perform image quality scoring.
5. The method according to claim 1, wherein the performing the skin analysis according to the at least one frame of the target image to obtain the skin analysis result comprises:
Extracting key point information in each frame of target image, wherein the key point information comprises at least one of nasolabial folds, dark circles, pores, wrinkles, blackheads and freckles;
and evaluating the quality of the skin according to the key point information in the at least one frame of target image to obtain the skin analysis result.
6. An electronic device, comprising a processor, a memory and an image pickup device, wherein the image pickup device and the memory are respectively electrically connected with the processor, a computer program is stored in the memory, and the processor runs the computer program to execute the steps of the skin analysis method according to any one of claims 1-5.
7. A computer readable storage medium storing a computer program executable by a processor to perform the steps of the skin analysis method of any one of claims 1-5.
8. A skin analysis device comprising:
the acquisition control module is used for controlling the camera device to acquire a video stream at the target skin, wherein the video stream comprises a plurality of frames of images in a preset time period;
An image selection module for:
Selecting at least one frame image meeting the image quality requirement from a plurality of frame images of the video stream as at least one frame target image, wherein selecting at least one frame image meeting the image quality requirement from the plurality of frame images of the video stream as at least one frame target image comprises the following steps:
extracting multi-frame images from the video stream;
Sequentially scoring the multi-frame images from a first frame image, after scoring each frame image to obtain an image quality score, judging whether the image quality score of the frame image meets the image quality requirement, and if the image quality score of the frame image meets the image quality requirement, determining the frame image as a target image;
If the image quality score of the frame image does not meet the image quality requirement, scoring the next frame image in the multi-frame image until determining a target image of which the image quality score meets the image quality requirement;
When the target image meeting the image quality requirement is not found from the multi-frame images, extracting a next group of multi-frame images different from the multi-frame images from the video stream, and grading the image quality of the next group of multi-frame images again until the target image meeting the image quality requirement is selected;
Wherein the position of each group of multi-frame images in the video stream is random, and the frame images at the same position only appear in one group of multi-frame images;
the skin analysis module is used for carrying out skin analysis according to the at least one frame of target image to obtain a skin analysis result; and
And the output control module is used for outputting the skin analysis result.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310566310.9A CN116473520B (en) | 2023-05-18 | 2023-05-18 | Electronic equipment and skin analysis method and device thereof |
JP2023219108A JP7492793B1 (en) | 2023-05-18 | 2023-12-26 | Electronic device, skin quality analysis method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310566310.9A CN116473520B (en) | 2023-05-18 | 2023-05-18 | Electronic equipment and skin analysis method and device thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116473520A CN116473520A (en) | 2023-07-25 |
CN116473520B true CN116473520B (en) | 2024-10-29 |
Family
ID=87223306
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310566310.9A Active CN116473520B (en) | 2023-05-18 | 2023-05-18 | Electronic equipment and skin analysis method and device thereof |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP7492793B1 (en) |
CN (1) | CN116473520B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109831680A (en) * | 2019-03-18 | 2019-05-31 | 北京奇艺世纪科技有限公司 | A kind of evaluation method and device of video definition |
CN110858879A (en) * | 2018-08-23 | 2020-03-03 | 浙江宇视科技有限公司 | Video stream processing method, device and computer readable storage medium |
CN113313050A (en) * | 2021-06-15 | 2021-08-27 | 北京美丽年华文化有限公司 | Skin intelligent detection system based on video streaming |
CN113780212A (en) * | 2021-09-16 | 2021-12-10 | 平安科技(深圳)有限公司 | User identity verification method, device, equipment and storage medium |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7697026B2 (en) * | 2004-03-16 | 2010-04-13 | 3Vr Security, Inc. | Pipeline architecture for analyzing multiple video streams |
US10517521B2 (en) * | 2010-06-07 | 2019-12-31 | Affectiva, Inc. | Mental state mood analysis using heart rate collection based on video imagery |
GB2519620B (en) * | 2013-10-23 | 2015-12-30 | Imagination Tech Ltd | Skin colour probability map |
JP6616331B2 (en) * | 2014-06-06 | 2019-12-04 | コーニンクレッカ フィリップス エヌ ヴェ | Apparatus, system and method for detecting apnea in a subject |
JP6729572B2 (en) | 2015-06-08 | 2020-07-22 | ソニー株式会社 | Image processing device, image processing method, and program |
CN105426850B (en) * | 2015-11-23 | 2021-08-31 | 深圳市商汤科技有限公司 | Associated information pushing device and method based on face recognition |
CN107040795A (en) * | 2017-04-27 | 2017-08-11 | 北京奇虎科技有限公司 | The monitoring method and device of a kind of live video |
WO2019126470A1 (en) * | 2017-12-22 | 2019-06-27 | Google Llc | Non-invasive detection of infant bilirubin levels in a smart home environment |
CN110378228A (en) * | 2019-06-17 | 2019-10-25 | 深圳壹账通智能科技有限公司 | Video data handling procedure, device, computer equipment and storage medium are examined in face |
CN110390263A (en) * | 2019-06-17 | 2019-10-29 | 宁波江丰智能科技有限公司 | A kind of method of video image processing and system |
CN110570366A (en) * | 2019-08-16 | 2019-12-13 | 西安理工大学 | Image restoration method based on double-discrimination depth convolution generation type countermeasure network |
CN113674224B (en) * | 2021-07-29 | 2024-11-08 | 浙江大华技术股份有限公司 | Monitoring point position treatment method and device |
JP2023053733A (en) | 2021-10-01 | 2023-04-13 | パナソニックIpマネジメント株式会社 | Imaging guidance device, imaging guidance method, and program |
CN114298295A (en) | 2021-12-30 | 2022-04-08 | 上海阵量智能科技有限公司 | Chip, accelerator card, electronic device and data processing method |
CN116129013A (en) * | 2023-02-20 | 2023-05-16 | 上海科技大学 | Method, device and storage medium for generating virtual person animation video |
2023
- 2023-05-18: CN application CN202310566310.9A patent/CN116473520B/en active Active
- 2023-12-26: JP application JP2023219108A patent/JP7492793B1/en active Active
Also Published As
Publication number | Publication date |
---|---|
JP7492793B1 (en) | 2024-05-30 |
CN116473520A (en) | 2023-07-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109697726B (en) | Event camera-based end-to-end target motion estimation method | |
CN108229526B (en) | Network training method, network training device, image processing method, image processing device, storage medium and electronic equipment | |
KR102449841B1 (en) | Method and apparatus for detecting target | |
US20040239762A1 (en) | Adaptive background image updating | |
CN111401246B (en) | Smoke concentration detection method, device, equipment and storage medium | |
KR101330636B1 (en) | Face view determining apparatus and method and face detection apparatus and method employing the same | |
CN111401374A (en) | Model training method based on multiple tasks, character recognition method and device | |
CN113361513B (en) | Mobile terminal tongue picture acquisition method, device and equipment | |
CN109815936A (en) | A kind of target object analysis method and device, computer equipment and storage medium | |
CN102314591B (en) | Method and equipment for detecting static foreground object | |
CN106993188A (en) | A kind of HEVC compaction coding methods based on plurality of human faces saliency | |
CN112348762A (en) | Single image rain removing method for generating confrontation network based on multi-scale fusion | |
CN112804464A (en) | HDR image generation method and device, electronic equipment and readable storage medium | |
CN113298764B (en) | High-speed camera imaging quality analysis method based on image noise analysis | |
CN108769543B (en) | Method and device for determining exposure time | |
CN116473520B (en) | Electronic equipment and skin analysis method and device thereof | |
CN108257117B (en) | Image exposure evaluation method and device | |
CN112348011B (en) | Vehicle damage assessment method and device and storage medium | |
CN113067980A (en) | Image acquisition method and device, electronic equipment and storage medium | |
CN112488985A (en) | Image quality determination method, device and equipment | |
Nguyen et al. | Gaze tracking for region of interest coding in JPEG 2000 | |
CN116246200A (en) | Screen display information candid photographing detection method and system based on visual identification | |
CN116612355A (en) | Training method and device for face fake recognition model, face recognition method and device | |
CN113554685A (en) | Method and device for detecting moving target of remote sensing satellite, electronic equipment and storage medium | |
CN113706402A (en) | Neural network training method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |