CN107423694A - A kind of artificial intelligence user image management method and system based on machine vision - Google Patents
A kind of artificial intelligence user image management method and system based on machine vision
- Publication number
- CN107423694A (Application CN201710543022.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- color
- region
- human body
- human
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an artificial intelligence user image management method and system based on machine vision, comprising the following steps. Step 1: acquire an image of the human body whose image is to be managed, and store the image. Step 2: identify the body region to be managed in the image. Step 3: identify the color information in the body region to be managed and compute statistics over the color areas. Step 4: perform an image evaluation of the body region to be managed, based on the area occupied by each color in the region and the color depth within that area. Step 5: give image-shaping suggestions and output an evaluation report. The present invention can manage a person's personal image quickly, simply, conveniently and accurately, can be applied on mobile phones, tablets, computers and similar devices, involves simple computation, and greatly reduces the cost of managing one's personal image.
Description
Technical field
The invention belongs to the field of computer vision, and in particular relates to an artificial intelligence user image management method and system based on machine vision.
Background technology
With the development of the times and the progress of society, a person's personal image plays an increasingly important role, and people pay more and more attention to their own image. In particular, on occasions such as business negotiations, interviews, blind dates and dinner parties, people hope to present the personal image that is best suited to the occasion. Personal image here refers to a person's appearance.
At present, people manage their personal image mainly by dressing themselves up according to their own opinions on attire, hairstyle and make-up. This approach depends largely on subjective judgment, and the resulting image may not be appropriate for the occasion to be attended. Alternatively, people may ask a professional stylist to help shape a suitable personal image, but finding a reliable stylist is expensive and inconvenient.
Therefore, a simple, convenient, fast and reliable personal image management method for users is highly desirable and has a large market.
Summary of the invention
The present invention proposes an artificial intelligence user image management method and system based on machine vision, which can manage a person's personal image quickly, simply, conveniently and accurately, can be applied on mobile phones, tablets, computers and similar devices, involves simple computation, and greatly reduces the cost of managing one's personal image.
The artificial intelligence user image management method based on machine vision proposed by the present invention mainly comprises the following steps:
Step 1: acquire the image of the human body whose image is to be managed using the personal image management apparatus, and store the image;
Step 2: identify the body region to be managed in the image;
Step 3: identify the color information in the body region to be managed and compute statistics over the color areas;
Step 4: perform an image evaluation of the body region to be managed, based on the area occupied by each color in the region and the color depth within that area;
Step 5: give image-shaping suggestions and output an evaluation report.
Further, the human body refers to individual parts of the body or the whole body, specifically including: facial make-up, hairstyle and clothing.
Further, step 2 includes: identifying the body region to be managed using a machine learning method.
Further, step 3 includes the following sub-steps:
Step 301: obtain the position information of each color region of the body part;
Step 302: calculate the area S_k of each color region of the body part.
Further, step 4 includes the following sub-steps:
Step 401: compute the color depth D_k within the area S_k of each color region of the body part;
Step 402: perform the image evaluation of the body region according to the color depth D_k within the area S_k of each color region.
The personal image management apparatus includes an image collector, a memory, a body region identifier, a color identifier, a color processor and a display;
The image collector is used to collect images and is connected to the image memory through a data transmission link;
The memory is used to store data, including image data, and is connected to the image processor and the display through data transmission links;
The body region identifier is used to identify the body region and is connected to the color processor through a data connection link;
The color identifier is mainly used to identify the colors in the image and is connected to the color processor through a data connection link;
The color processor is mainly used to calculate the area of each color, the color depth and similar quantities, and is connected to the memory and the display through data transmission links;
The display is used to display images and/or output the evaluation report, and to display certain fault messages.
Further, the image collector includes one or more cameras;
The image memory is an SD card or a removable hard disk;
The body region identifier and the color processor may be the CPU, GPU and image processing components of a mobile terminal;
The mobile terminal includes a mobile phone, a tablet, a portable computer, or a mobile camera with processing capability;
The data transmission links are data cables that do not cross one another.
The present invention also proposes an artificial intelligence user image management system based on machine vision, comprising:
Image acquisition module: used to collect and store images;
Body image management region identification module: used to identify the body region to be managed in the image;
Personal image management module: used to manage the personal image of the human body, mainly to evaluate the image of the part that requires image management;
Output module: used to output the evaluation report and images;
The body image management region identification module obtains image data from the image acquisition module and identifies the body image management region, then passes the recognition result to the personal image management module for personal image management; the image management feedback result is passed to the output module for output.
Further, the output module is used to output the body image evaluation report and the image-shaping suggestions;
The body image management region is divided into facial make-up, hairstyle and clothing;
The clothing includes the jacket, the trousers or skirt, and the shoes worn.
Further, the image acquisition module includes an image acquisition unit and an image storage unit;
The personal image management module includes a color information recognition unit, an area calculation unit, a color depth calculation unit and an evaluation generation unit;
The color information recognition unit is used to identify colors;
The area calculation unit is used to calculate areas, in particular the area occupied by each color in the image;
The color depth calculation unit is used to calculate the color depth;
The evaluation generation unit is used to give, according to the values of the color areas and color depths, suggestions for adjusting the color areas and color depths towards the image standard.
The beneficial effects of the artificial intelligence user image management method and system based on machine vision of the present invention are:
1. A person's personal image can be managed quickly, simply, conveniently and accurately, improving quality of life and saving the cost of personal image management;
2. The color areas and color depths of the clothing, make-up or hairstyle on the body are used as the evaluation indices of the image standard, which is simple, intuitive and reliable;
3. The method has a small computational load and can be applied on a variety of platforms.
Brief description of the drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and therefore should not be regarded as limiting its scope. For those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow chart of the user image management method of the artificial intelligence user image management system based on machine vision;
Fig. 2 is a schematic diagram of the body-region-recognition convolutional neural network structure of the artificial intelligence user image management system based on machine vision;
Fig. 3 is a schematic diagram of the personal image management apparatus of the artificial intelligence user image management system based on machine vision;
Fig. 4 is a schematic diagram of the system structure of the artificial intelligence user image management system based on machine vision.
Detailed description of the embodiments
To make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Embodiment 1:
The invention provides an artificial intelligence user image management method and system based on machine vision which, as shown in Fig. 1, mainly comprises the following steps:
Step 1: acquire the image of the human body whose image is to be managed using the personal image management apparatus, and store the image. If only the make-up is to be managed, only a face image needs to be taken; if only the hairstyle is to be managed, only a hair image needs to be taken; if the personal image of the whole person is to be managed, a full-body image needs to be taken. The part to be managed must be contained in the image. The human body refers to individual parts of the body or the whole body, specifically including: facial make-up, hairstyle and clothing. The clothing includes upper-body clothing, lower-body clothing and shoes. The upper-body clothing refers to the jacket worn, and the lower-body clothing refers to the lower garments worn, including trousers and skirts.
Step 2: identify the body region to be managed in the image, using a machine learning method. A CNN (convolutional neural network) is used to perform feature-extraction training on the images, finally yielding a body-part model. A convolutional neural network is a multi-layer neural network; each layer is composed of multiple two-dimensional planes, and each plane is composed of multiple independent neurons. For the input data, the CNN establishes the connection between the network layers and the spatial domain, and through the convolution and pooling operations of each layer finally derives useful, characteristic object features. Specifically, the method comprises the following steps:
Step 201: select training samples and add labels, i.e. select the input data X and set the ideal output Y_p — the initialization of the data. The samples used for training and their labels are obtained as follows: images of body parts are used as training samples, the texture features of the body parts are used as the recognition criterion, and the body-part images are labelled based on experience; this labelling is performed manually. The typical label classes C are: facial make-up, hairstyle and clothing, and the corresponding ideal output matrix is
Y_p = {a, b, c}
where a, b and c are real numbers.
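As a concrete illustration of the label set-up in step 201, the following sketch encodes the three classes as ideal output vectors Y_p; the one-hot choice and the use of Python with PyTorch are assumptions for illustration only, since the patent merely requires a, b and c to be real numbers.

```python
# Hypothetical label encoding for step 201: one ideal output vector per class
# (facial make-up, hairstyle, clothing). One-hot vectors for Yp = {a, b, c} are an
# assumed choice; the patent only requires a, b and c to be real numbers.
import torch

CLASSES = ["facial make-up", "hairstyle", "clothing"]
IDEAL_OUTPUTS = torch.eye(len(CLASSES))   # row m is the ideal output Yp for class m

def label_for(class_name: str) -> torch.Tensor:
    return IDEAL_OUTPUTS[CLASSES.index(class_name)]
```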
Step 202: design the CNN network structure. The specific CNN network structure of the present invention is designed as follows:
Step 2021: perform convolution in the first hidden layer to obtain layer C1. Specifically: it consists of 8 feature maps, each feature map consists of 28 × 28 neurons, and each neuron has a 5 × 5 receptive field. In a CNN, each output feature map x_j of a convolutional layer satisfies:

$$x_j^l = f\Big(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l\Big)$$

where M_j denotes the selected combination of input feature maps, k_{ij}^l is the convolution kernel used in the connection between the i-th input feature map and the j-th output feature map, b_j^l is the bias corresponding to the j-th feature map, f is the activation function, and W^l is the weight matrix of layer l.
Step 2022: the second hidden layer performs sub-sampling and pooling to obtain layer S2. Specifically: it likewise consists of 8 feature maps, but each feature map consists of 14 × 14 neurons. Each neuron has a 2 × 2 receptive field, a trainable coefficient, a trainable bias and a sigmoid activation function; the trainable coefficient and the bias control the operating point of the neuron. The squared-error cost function is first defined as:

$$E^N = \frac{1}{2}\sum_{n=1}^{N}\sum_{k=1}^{C}\big(t_k^n - y_k^n\big)^2$$

where N is the number of samples, C is the number of sample classes, t_k^n is the k-th dimension of the label of the n-th sample x_n, and y_k^n is the k-th dimension of the network output for the n-th sample.

The trainable coefficient is expressed through the per-sample error function:

$$E^n = \frac{1}{2}\sum_{k=1}^{C}\big(t_k^n - y_k^n\big)^2$$
In a CNN, each output feature map x_j of a sub-sampling layer satisfies:

$$x_j^l = f\big(W^l \cdot \mathrm{down}(x_j^{l-1}) + b^l\big)$$

where down(·) denotes down-sampling, f(·) is the activation function, b^l is the bias of layer l, and W^l is the weight matrix of layer l.
Step 2023: the third hidden layer performs a second convolution to obtain layer C3. Specifically: it consists of 20 feature maps, each of which consists of 10 × 10 neurons. Each neuron in this hidden layer may have connections to several feature maps of the next hidden layer, and it operates in a manner similar to the first convolutional layer.
Step 2024: the fourth hidden layer performs a second sub-sampling and pooling computation to obtain layer S4. Specifically: it consists of 20 feature maps, but each map consists of 5 × 5 neurons, and it operates in a manner similar to the first sub-sampling layer.
Step 2025: the fifth hidden layer performs convolution to obtain layer C5. Specifically: it consists of 120 neurons, and each neuron has a 5 × 5 receptive field.
Step 2026: fully connect layer C5 to obtain the feature vector X, and then calculate the output vector O_p for the C typical classes from the feature vector. Here X = {x_j}, j = 1, 2, …, N is the fully connected output vector of dimension N, and the output vector obtained from the feature vector through the full connection is described as:

$$O_p = \{f(y_j)\},\quad j = 1, 2, \ldots, k,\qquad f(y_j) = B y_j$$

where B is an N × k matrix and k is the number of output types, i.e. the dimension of the output vector.
The structure of the body-region-recognition convolutional neural network model designed in steps 2021 to 2026 is shown in Fig. 2.
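For clarity, the following is a minimal sketch of the layer sizes described in steps 2021 to 2026 (C1: 8 maps of 28 × 28 with 5 × 5 kernels, S2: 2 × 2 sub-sampling, C3: 20 maps of 10 × 10, S4: 5 × 5, C5: 120 neurons, full connection to k = 3 outputs). The PyTorch framework, the 32 × 32 single-channel input and the placement of the sigmoid activations are assumptions, not details stated in the patent.

```python
# Hypothetical PyTorch sketch of the C1-S2-C3-S4-C5 network described in steps 2021-2026.
# Layer sizes follow the text (8@28x28 -> 8@14x14 -> 20@10x10 -> 20@5x5 -> 120 -> k=3);
# the 32x32 single-channel input and the framework are assumptions.
import torch
import torch.nn as nn

class BodyRegionCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.c1 = nn.Conv2d(1, 8, kernel_size=5)        # C1: 8 feature maps, 28x28, 5x5 receptive field
        self.s2 = nn.AvgPool2d(kernel_size=2)           # S2: 2x2 sub-sampling -> 8 maps of 14x14
        self.c3 = nn.Conv2d(8, 20, kernel_size=5)       # C3: 20 feature maps, 10x10
        self.s4 = nn.AvgPool2d(kernel_size=2)           # S4: 20 maps of 5x5
        self.c5 = nn.Conv2d(20, 120, kernel_size=5)     # C5: 120 neurons, 5x5 receptive field
        self.fc = nn.Linear(120, num_classes)           # full connection to k = 3 class scores
        self.act = nn.Sigmoid()                         # sigmoid activation, as in the sub-sampling layers

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(self.c1(x))
        x = self.act(self.s2(x))
        x = self.act(self.c3(x))
        x = self.act(self.s4(x))
        x = self.act(self.c5(x)).flatten(1)             # feature vector X of dimension 120
        return self.fc(x)                               # class scores y_j, j = 1..k

# Example: a 32x32 grayscale crop yields 3 class scores (make-up / hairstyle / clothing).
scores = BodyRegionCNN()(torch.randn(1, 1, 32, 32))
print(scores.shape)  # torch.Size([1, 3])
```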
Step 203: train the CNN network model. CNN training is divided into two stages: the first stage is forward propagation and the second stage is back-propagation.
Step 2031: forward propagation stage: first, a sample (X, Y_p) is taken from the sample image set, X is fed into the network as the input data, and, using the network structure of step 202, the corresponding actual output O_p of X is calculated.
Step 2032: back-propagation stage:
a: calculate the difference between the actual output O_p and the corresponding ideal output Y_p, i.e. the loss function L_clc(O_p, Y_p);
b: propagate the error and adjust the weight matrices using the gradient descent method, specifically:

$$W^l \leftarrow W^l - \eta\,\frac{\partial L_{clc}(O_p, Y_p)}{\partial W^l}$$

where η is the gradient-descent learning rate, taken as the difference between the actual and ideal outputs, η = L_clc(O_p, Y_p).
Step 2033: decide when training ends. On the one hand this is determined by the gradient-descent learning rate and the number of training iterations, and on the other hand by the trainable coefficient. Specifically:
(1) If the gradient-descent learning rate η calculated in step 2032 is very small, the actual output is already close to the ideal output and training may be stopped; if the number of training iterations has reached a preset value at this point, training can be terminated.
(2) By the trainable coefficient described in step 2032: the trainable coefficient serves as the criterion for training, i.e. while the trainable coefficient is within a specified range the training result is valid and the model may continue training; when it is not within this range, overfitting may occur and training should be stopped.
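As a sketch of the training procedure of steps 2031 to 2033, the loop below performs forward propagation, computes the squared-error cost of step 2022 and adjusts the weights by gradient descent; the SGD optimizer, the stopping thresholds eta_min and max_iters, and the fixed learning rate are illustrative assumptions rather than values prescribed by the patent.

```python
# Hypothetical training loop for steps 2031-2033 (forward pass, squared-error loss,
# gradient-descent weight update, and a simple stopping check). Thresholds are assumed.
import torch
import torch.nn as nn

def train(model: nn.Module, samples: torch.Tensor, targets: torch.Tensor,
          lr: float = 0.01, max_iters: int = 1000, eta_min: float = 1e-4) -> None:
    criterion = nn.MSELoss()                       # squared-error cost function of step 2022
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for it in range(max_iters):                    # step 2033: bounded number of iterations
        outputs = model(samples)                   # step 2031: forward propagation, actual output Op
        loss = criterion(outputs, targets)         # step 2032a: difference between Op and ideal Yp
        optimizer.zero_grad()
        loss.backward()                            # step 2032b: back-propagate the error
        optimizer.step()                           # gradient-descent weight update
        if loss.item() < eta_min:                  # step 2033(1): output close enough to ideal
            break
```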
Step 204: feed the collected body images into the trained convolutional neural network model for forward propagation. From each body image a group of feature vectors X = {x_j}, j = 1, 2, …, N is extracted, and the output function corresponding to the feature vector, i.e. the body-part recognition function, is described as:

$$f_{m,\,m\in[1,k]}(y_j) = B y_j$$

where B is an N × k matrix and k is the number of output types, i.e. the dimension of the output vector. The softmax regression is then used to characterize the output result of the body-part recognition. Specifically, the softmax function is

$$\sigma(y)_m = \frac{e^{y_m}}{\sum_{j=1}^{k} e^{y_j}},\qquad m = 1, \ldots, k,$$

and the output result is

$$Output = F_m,\qquad m = \arg\max_{m\in[1,k]} \sigma(y)_m,$$

where k is the number of output types; here k = 3.
According to the typical class division C of step 201, if the output result corresponds to Output = F1, i.e. a, the image represents facial make-up; if the output result corresponds to Output = F2, i.e. b, the image represents a hairstyle; if the output result corresponds to Output = F3, i.e. c, the image represents clothing.
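A minimal inference sketch for step 204, assuming a trained model such as the one above: the class scores are passed through softmax and the winning index is mapped back to the label classes of step 201.

```python
# Hypothetical inference step for step 204: forward-propagate a body image, apply softmax
# and map the winning class index F_m to the label classes of step 201.
import torch

CLASSES = ["facial make-up", "hairstyle", "clothing"]   # C typical classes, k = 3

def recognize(model: torch.nn.Module, image: torch.Tensor) -> str:
    with torch.no_grad():
        scores = model(image)                            # class scores y_j from the trained CNN
        probs = torch.softmax(scores, dim=1)             # softmax over the k = 3 outputs
        m = int(probs.argmax(dim=1))                     # index of F_m with the highest probability
    return CLASSES[m]
```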
Step 3: identify the color information in the body region to be managed and compute statistics over the color areas. Based on the body-part recognition result of step 2, the body part is segmented from the image, and the area occupied by each color of the body part is then calculated. Specifically, this includes the following sub-steps:
Step 301: obtain the position information of each color region of the body part, i.e. the coordinates (x_i, y_i), i = 1, 2, …, N of its pixels, where N denotes the number of pixels contained in the region.
Step 302: calculate the area S_k of each color region of the body part from these coordinates, where S_k is the area of the k-th color, (x_1, y_1) denotes the coordinate of the first pixel in the color region, and (x_i, y_i), i = 2, 3, …, N denote the coordinates of the other N − 1 pixels of the region.
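The patent does not give the area formula explicitly; one simple reading, sketched below, is to group the pixels of the segmented body region by (quantized) color and take the pixel count of each group as the area S_k. The quantization step and the pixel-count interpretation are assumptions.

```python
# Hypothetical color-area statistics for steps 301-302: group the pixels of the segmented
# body region by (quantized) color and use the pixel count of each group as the area S_k.
# Quantization step and pixel-count-as-area are assumptions, not stated in the patent.
import numpy as np

def color_areas(region_pixels: np.ndarray, step: int = 32) -> dict:
    """region_pixels: (N, 3) uint8 RGB values of the pixels inside the body region."""
    quantized = (region_pixels // step) * step            # coarse color bins
    colors, counts = np.unique(quantized, axis=0, return_counts=True)
    return {tuple(int(c) for c in col): int(n)            # color k -> area S_k (pixel count)
            for col, n in zip(colors, counts)}
```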
Step 4: perform an image evaluation of the body region to be managed. If the area S_k of each color region in the body region and its color depth conform to the standard, the evaluation is "suitable"; otherwise the evaluation is "unsuitable". The "suitable" evaluation may further include favorable grades such as "good" and "excellent"; the "unsuitable" evaluation may further include unfavorable grades such as "poor" and "bad". Further, this includes the following sub-steps:
Step 401: compute the color depth within the area S_k of each color region of the body part. The color depth of the body region is characterized by the RGB three-channel color information of region S_k; optionally, the accumulated value of the color information of the three RGB color channels is used as the color depth value of the region:

$$D_k = \sum_{i=1}^{N}\big(R_i + G_i + B_i\big)$$

where D_k is the color depth value of the region, and R_i, G_i and B_i denote respectively the R-channel, G-channel and B-channel color values of pixel (x_i, y_i).
Step 402: perform the image evaluation of the body region. Specifically, if the area S_k occupied by a color k is greater or smaller than a certain area range, the image evaluation of that body part is "unsuitable"; otherwise it is "suitable". If the color depth value D_k of the area S_k occupied by any color of the body part is greater or smaller than a certain color-depth range, the image evaluation of that body part is "unsuitable"; otherwise it is "suitable".
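A sketch of steps 401 and 402 under the accumulated-RGB reading of the color depth: D_k is the summed R + G + B value over the region's pixels, and the evaluation is "suitable" only if both S_k and D_k fall inside their ranges. The range values reused here (20, 60, 356) come from the worked example in step 5 and are placeholders, not thresholds fixed by the method.

```python
# Hypothetical image evaluation for steps 401-402: sum the RGB channels of each color region
# as its depth D_k, then check both S_k and D_k against per-color ranges. The range values
# are illustrative placeholders taken from the worked example, not parameters of the method.
import numpy as np

def color_depth(region_pixels: np.ndarray) -> int:
    """Accumulated R + G + B value over the pixels of one color region (D_k)."""
    return int(region_pixels.astype(np.int64).sum())

def evaluate(area: int, depth: int,
             area_range: tuple = (20, 60), depth_range: tuple = (0, 356)) -> str:
    """Return 'suitable' only if both the area S_k and the depth D_k lie inside their ranges."""
    if not (area_range[0] <= area <= area_range[1]):
        return "unsuitable"
    if not (depth_range[0] <= depth <= depth_range[1]):
        return "unsuitable"
    return "suitable"
```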
Step 5: give image-shaping suggestions and output the evaluation report. The evaluation report contains the image evaluation of the body region and the image-shaping suggestions. An image-shaping suggestion is a suggestion to increase or decrease a color. For example, if the image evaluation result for a user's body region is that the area S_k occupied by the k-th color exceeds 60, the evaluation report is: the area S_k occupied by the k-th color exceeds the allowed range; it is suggested to reduce the area of color k. If the image evaluation result for a user's body region is that the area S_k occupied by the k-th color is less than 20 and the color depth value D_j of the area S_j occupied by the j-th color exceeds 356, the evaluation report is: the area S_k occupied by the k-th color is less than 20 and the color depth value D_j of the area S_j occupied by the j-th color exceeds 356; it is suggested to increase the area S_k occupied by the k-th color and to reduce the color depth value D_j of the area S_j occupied by the j-th color.
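As an illustration of step 5, the following sketch turns out-of-range areas and depths into the textual suggestions of the evaluation report; the function name and the threshold values (taken from the example above) are hypothetical.

```python
# Hypothetical report generation for step 5: turn out-of-range areas and depths into the
# textual suggestions described above. Threshold values follow the worked example (60, 20, 356)
# and are placeholders rather than fixed parameters of the method.
def build_report(areas: dict, depths: dict,
                 area_range: tuple = (20, 60), depth_max: int = 356) -> list:
    report = []
    for color, area in areas.items():
        if area > area_range[1]:
            report.append(f"Area of color {color} exceeds {area_range[1]}: reduce this color.")
        elif area < area_range[0]:
            report.append(f"Area of color {color} is below {area_range[0]}: increase this color.")
    for color, depth in depths.items():
        if depth > depth_max:
            report.append(f"Color depth of color {color} exceeds {depth_max}: reduce its depth.")
    return report
```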
Embodiment 2:
The invention provides an artificial intelligence user image management system based on machine vision. As shown in Fig. 3, the personal image management apparatus includes an image collector, a memory, a body region identifier, a color identifier, a color processor and a display;
The image collector is used to collect images and is connected to the image memory through a data transmission link;
The memory is used to store data, including image data, and is connected to the image processor and the display through data transmission links;
The body region identifier is used to identify the body region and is connected to the color processor through a data connection link;
The color identifier is mainly used to identify the colors in the image and is connected to the color processor through a data connection link;
The color processor is mainly used to calculate the area of each color, the color depth and similar quantities, and is connected to the memory and the display through data transmission links;
The display is used to display images and/or output the evaluation report, and to display certain fault messages.
Further, the image collector includes one or more cameras;
The image memory is an SD card or a removable hard disk;
The body region identifier and the color processor may be the CPU, GPU and image processing components of a mobile terminal;
The mobile terminal includes a mobile phone, a tablet, a portable computer, or a mobile camera with processing capability;
The data transmission links are data cables that do not cross one another.
Embodiment 3:
The invention provides an artificial intelligence user image management system based on machine vision which, as shown in Fig. 4, includes:
Image acquisition module: used to collect and store images;
Body image management region identification module: used to identify the body region to be managed in the image;
Personal image management module: used to manage the personal image of the human body, mainly to evaluate the image of the part that requires image management;
Output module: used to output the evaluation report and images;
The body image management region identification module obtains image data from the image acquisition module and identifies the body image management region, then passes the recognition result to the personal image management module for personal image management; the image management feedback result is passed to the output module for output.
Further, the output module is used to output the body image evaluation report and the image-shaping suggestions;
The body image management region is divided into facial make-up, hairstyle and clothing;
The clothing includes the jacket, the trousers or skirt, and the shoes worn.
Further, the image acquisition module includes an image acquisition unit and an image storage unit;
The personal image management module includes a color information recognition unit, an area calculation unit, a color depth calculation unit and an evaluation generation unit;
The color information recognition unit is used to identify colors;
The area calculation unit is used to calculate areas, in particular the area occupied by each color in the image;
The color depth calculation unit is used to calculate the color depth;
The evaluation generation unit is used to give, according to the values of the color areas and color depths, suggestions for adjusting the color areas and color depths towards the image standard. The module pipeline is sketched below.
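A minimal sketch, under assumed class and method names, of how the four modules of embodiment 3 might be composed in software; it only mirrors the data flow from acquisition through region recognition and personal image management to report output.

```python
# Hypothetical composition of the modules of embodiment 3: acquisition -> region recognition ->
# personal image management (color areas, color depths, evaluation) -> output. All class and
# method names are illustrative; they only mirror the data flow described in the text.
class ImageAcquisitionModule:
    def acquire(self):            # collect and store an image (acquisition + storage units)
        ...

class RegionIdentificationModule:
    def identify(self, image):    # return the body image management region (make-up / hair / clothing)
        ...

class PersonalImageManagementModule:
    def manage(self, region):     # color recognition, area and depth calculation, evaluation
        ...

class OutputModule:
    def output(self, report):     # output the evaluation report and suggestions
        print(report)

def run(acq, rec, mgr, out):
    image = acq.acquire()
    region = rec.identify(image)
    report = mgr.manage(region)
    out.output(report)
```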
Although the present disclosure is as described above, the present invention is not limited thereto. Any person skilled in the art may make various changes and modifications without departing from the spirit and scope of the present invention, and therefore the protection scope of the present invention shall be subject to the scope defined by the claims.
Claims (8)
1. An artificial intelligence user image management method based on machine vision, characterized in that it mainly comprises the following steps:
Step 1: acquiring the image of the human body whose image is to be managed using the personal image management apparatus, and storing the image;
Step 2: identifying the body region to be managed in the image;
Step 3: identifying the color information in the body region to be managed and computing statistics over the color areas, including:
Step 301: obtaining the position information of each color region of the body part;
Step 302: calculating the area S_k of each color region of the body part;
Step 4: performing an image evaluation of the body region to be managed, based on the area occupied by each color in the region and the color depth within that area, including:
Step 401: computing the color depth D_k within the area S_k of each color region of the body part;
Step 402: performing the image evaluation of the body region according to the color depth D_k within the area S_k of each color region;
Step 5: giving image-shaping suggestions and outputting an evaluation report.
2. The artificial intelligence user image management method based on machine vision according to claim 1, characterized in that the human body refers to individual parts of the body or the whole body, specifically including: facial make-up, hairstyle and clothing.
3. The artificial intelligence user image management method based on machine vision according to claim 1, characterized in that step 2 includes: identifying the body region to be managed using a machine learning method.
4. The artificial intelligence user image management method based on machine vision according to claim 1, characterized in that the personal image management apparatus includes an image collector, a memory, a body region identifier, a color identifier, a color processor and a display;
the image collector is used to collect images and is connected to the image memory through a data transmission link;
the memory is used to store data, including image data, and is connected to the image processor and the display through data transmission links;
the body region identifier is used to identify the body region and is connected to the color processor through a data connection link;
the color identifier is mainly used to identify the colors in the image and is connected to the color processor through a data connection link;
the color processor is mainly used to calculate the area of each color, the color depth and similar quantities, and is connected to the memory and the display through data transmission links;
the display is used to display images and/or output the evaluation report, and to display certain fault messages.
5. The artificial intelligence user image management apparatus based on machine vision according to claim 4, characterized in that
the image collector includes one or more cameras;
the image memory is an SD card or a removable hard disk;
the body region identifier and the color processor may be the CPU, GPU and image processing components of a mobile terminal;
the mobile terminal includes a mobile phone, a tablet, a portable computer, or a mobile camera with processing capability;
the data transmission links are data cables that do not cross one another.
6. An artificial intelligence user image management system based on machine vision, characterized in that it includes:
an image acquisition module, used to collect and store images;
a body image management region identification module, used to identify the body region to be managed in the image;
a personal image management module, used to manage the personal image of the human body, mainly to evaluate the image of the part that requires image management;
an output module, used to output the evaluation report and images;
wherein the body image management region identification module obtains image data from the image acquisition module and identifies the body image management region, then passes the recognition result to the personal image management module for personal image management, and the image management feedback result is passed to the output module for output.
7. The artificial intelligence user image management system based on machine vision according to claim 6, characterized in that the output module is used to output the body image evaluation report and the image-shaping suggestions;
the body image management region is divided into facial make-up, hairstyle and clothing;
the clothing includes the jacket, the trousers or skirt, and the shoes worn.
8. The artificial intelligence user image management system based on machine vision according to claim 7, characterized in that the image acquisition module includes an image acquisition unit and an image storage unit;
the personal image management module includes a color information recognition unit, an area calculation unit, a color depth calculation unit and an evaluation generation unit;
the color information recognition unit is used to identify colors;
the area calculation unit is used to calculate areas, in particular the area occupied by each color in the image;
the color depth calculation unit is used to calculate the color depth;
the evaluation generation unit is used to give, according to the values of the color areas and color depths, suggestions for adjusting the color areas and color depths towards the image standard.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710543022.6A CN107423694A (en) | 2017-07-05 | 2017-07-05 | A kind of artificial intelligence user image management method and system based on machine vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710543022.6A CN107423694A (en) | 2017-07-05 | 2017-07-05 | A kind of artificial intelligence user image management method and system based on machine vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107423694A true CN107423694A (en) | 2017-12-01 |
Family
ID=60426962
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710543022.6A Pending CN107423694A (en) | 2017-07-05 | 2017-07-05 | A kind of artificial intelligence user image management method and system based on machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107423694A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102054175A (en) * | 2009-10-28 | 2011-05-11 | 索尼公司 | Color-unevenness inspection apparatus and method |
CN102214303A (en) * | 2010-04-05 | 2011-10-12 | 索尼公司 | Information processing device, information processing method and program |
CN104203042A (en) * | 2013-02-01 | 2014-12-10 | 松下电器产业株式会社 | Makeup application assistance device, makeup application assistance method, and makeup application assistance program |
CN106529429A (en) * | 2016-10-27 | 2017-03-22 | 中国计量大学 | Image recognition-based facial skin analysis system |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110766502A (en) * | 2018-07-27 | 2020-02-07 | 北京京东尚科信息技术有限公司 | Commodity evaluation method and system |
CN112466086A (en) * | 2020-10-26 | 2021-03-09 | 福州微猪信息科技有限公司 | Visual identification early warning device and method for farm work clothes |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20171201 |