CN107481218A - Image aesthetic feeling appraisal procedure and device - Google Patents
Image aesthetic feeling appraisal procedure and device
- Publication number
- CN107481218A CN107481218A CN201710564852.7A CN201710564852A CN107481218A CN 107481218 A CN107481218 A CN 107481218A CN 201710564852 A CN201710564852 A CN 201710564852A CN 107481218 A CN107481218 A CN 107481218A
- Authority
- CN
- China
- Prior art keywords
- model
- parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 80
- 238000012549 training Methods 0.000 claims abstract description 60
- 238000004364 calculation method Methods 0.000 claims description 14
- 230000017105 transposition Effects 0.000 claims description 5
- 230000006870 function Effects 0.000 description 29
- 238000011156 evaluation Methods 0.000 description 7
- 238000013508 migration Methods 0.000 description 7
- 230000005012 migration Effects 0.000 description 7
- 238000000926 separation method Methods 0.000 description 5
- 230000008569 process Effects 0.000 description 4
- 238000013528 artificial neural network Methods 0.000 description 3
- 238000012360 testing method Methods 0.000 description 3
- 238000013135 deep learning Methods 0.000 description 2
- 238000013461 design Methods 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000013526 transfer learning Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 230000035945 sensitivity Effects 0.000 description 1
- 238000012706 support-vector machine Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Software Systems (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
- Analysing Materials By The Use Of Radiation (AREA)
Abstract
The present invention relates to the technical field of computer vision and image recognition, and specifically provides an image aesthetic assessment method and device, aiming to solve the technical problem of low efficiency in quantitative aesthetic assessment methods. To this end, the image aesthetic assessment method provided by the invention includes: calculating a model auxiliary parameter according to a preset constraint condition and the model parameters of a trained aesthetic-grade classification model and a trained aesthetic-score regression model; adjusting the model parameters of the trained aesthetic-grade classification model and aesthetic-score regression model according to the model auxiliary parameter; and recalculating the model auxiliary parameter according to the adjusted model parameters, until the model auxiliary parameter satisfies a preset iteration condition. Meanwhile, the image aesthetic assessment device provided by the invention can perform each step of the above method. The technical solution of the invention can significantly improve the efficiency and accuracy of quantitative aesthetic assessment.
Description
Technical field
The present invention relates to the technical field of computer vision and image recognition, and in particular to an image aesthetic assessment method and device.
Background technology
As the creation and acquisition of digital images becomes ever more convenient, the number of digital images has grown explosively; countless images are shared on the network every day, and this sharp increase makes image management time-consuming and burdensome. People tend to acquire and keep high-quality pictures, and the problem of evaluating image aesthetics arises in tasks such as image retrieval, graphic design, artistic style analysis and human-computer interaction.
At present, image aesthetic assessment methods mainly include qualitative and quantitative aesthetic assessment. Qualitative aesthetic assessment divides images into high-quality and low-quality images according to image quality, and its accuracy is relatively low. Quantitative aesthetic assessment evaluates image quality with a fine-grained score, but it requires technical staff with expertise in photography and aesthetics to annotate a large number of images over a long period, and is therefore less efficient.
Summary of the invention
In order to solve the above problem in the prior art, namely the technical problem of low efficiency in quantitative aesthetic assessment, the invention provides an image aesthetic assessment method and device.
In a first aspect, the image aesthetic assessment method in the present invention includes:
carrying out model training on a preset aesthetic-grade classification model and a preset aesthetic-score regression model;
calculating a model auxiliary parameter according to a preset constraint condition and the model parameters of the trained aesthetic-grade classification model and aesthetic-score regression model;
adjusting the model parameters of the trained aesthetic-grade classification model and aesthetic-score regression model according to the model auxiliary parameter; and recalculating the model auxiliary parameter according to the adjusted model parameters, until the model auxiliary parameter satisfies a preset iteration condition.
Further, in a preferred technical solution provided by the invention:
the trained aesthetic-grade classification model f_s(x) is shown in the following formula:
f_s(x) = sgn(w_s^T x + b_s)
where w_s and b_s are the model parameters of the aesthetic-grade classification model and T is the transposition symbol; sgn(t) is the sign function, with sgn(t) = +1 if t > 0 and sgn(t) = -1 if t < 0, t being the argument of the sign function;
the trained aesthetic-score regression model f_t(x) is shown in the following formula:
f_t(x) = w_t^T x + b_t
where w_t and b_t are the model parameters of the aesthetic-score regression model.
Further, in a preferred technical solution provided by the invention, before calculating the model auxiliary parameter, the method includes:
calculating the model parameters of the aesthetic-grade classification model according to the following formula:
min_{w_s, b_s} (1/2)||w_s||^2 + λ Σ_{i=1}^{n_s} ℓ(y_i^s, w_s^T x_i^s + b_s)
where n_s is the number of sample images used to train the aesthetic-grade classification model; ℓ is the quadratic loss function, with ℓ(y, f) = max(0, 1 - y·f)^2; x_i^s is the image feature of the i-th sample image; y_i^s is the label of the i-th sample image; and λ is the balance factor between the quadratic loss function and the regularization term;
calculating the model parameters of the aesthetic-score regression model according to the following formula:
min_{w_t, b_t} (1/2)||w_t||^2 + μ Σ_{i=1}^{n_t} ℓ_ε(y_i^t, w_t^T x_i^t + b_t)
where n_t is the number of sample images used to train the aesthetic-score regression model; ℓ_ε is the quadratic loss function, with ℓ_ε(y, f) = max(0, |y - f| - ε)^2; x_i^t is the image feature of the i-th sample image; y_i^t is the aesthetic-score label of the i-th sample image; ε is a positive real number smaller than a preset threshold; and μ is the balance factor between the quadratic loss function and the regularization term.
Further, in a preferred technical solution provided by the invention, the preset constraint condition is shown in the following formula:
min_{w, γ_s, γ_t} ||w - γ_s w_s||^2 + ||w - γ_t w_t||^2, subject to ||w||_2 = 1
where w is the model auxiliary parameter; w_s is the model parameter of the aesthetic-grade classification model; w_t is the model parameter of the aesthetic-score regression model; γ_s is the auxiliary parameter associating the model auxiliary parameter w with the model parameter w_s; and γ_t is the auxiliary parameter associating the model auxiliary parameter w with the model parameter w_t.
Further, in a preferred technical solution provided by the invention, adjusting the model parameters of the trained aesthetic-grade classification model and aesthetic-score regression model according to the model auxiliary parameter includes:
adjusting the model parameters of the aesthetic-grade classification model according to the following formula:
min_{w_s, b_s} Σ_{i=1}^{n_s} ℓ(y_i^s, w_s^T x_i^s + b_s) + λ'||w - τ_s w_s||^2
where w is the model auxiliary parameter; λ' is the balance factor between the quadratic loss function and the parallel constraint term ||w - τ_s w_s||^2; and τ_s is the auxiliary parameter associating the model auxiliary parameter w with the model parameter w_s;
adjusting the model parameters of the aesthetic-score regression model according to the following formula:
min_{w_t, b_t} Σ_{i=1}^{n_t} ℓ_ε(y_i^t, w_t^T x_i^t + b_t) + μ'||w - τ_t w_t||^2
where μ' is the balance factor between the quadratic loss function and the parallel constraint term, and τ_t is the auxiliary parameter associating the model auxiliary parameter w with the model parameter w_t.
In a second aspect, the image aesthetic assessment device in the present invention includes:
a model training module, configured to carry out model training on a preset aesthetic-grade classification model and a preset aesthetic-score regression model;
a model auxiliary parameter calculation module, configured to calculate a model auxiliary parameter according to a preset constraint condition and the model parameters of the trained aesthetic-grade classification model and aesthetic-score regression model;
a model parameter adjustment module, configured to adjust the model parameters of the trained aesthetic-grade classification model and aesthetic-score regression model according to the model auxiliary parameter; and
an iteration module, configured to recalculate the model auxiliary parameter according to the adjusted model parameters, until the model auxiliary parameter satisfies a preset iteration condition.
Further, in a preferred technical solution provided by the invention:
the trained aesthetic-grade classification model f_s(x) is shown in the following formula:
f_s(x) = sgn(w_s^T x + b_s)
where w_s and b_s are the model parameters of the aesthetic-grade classification model and T is the transposition symbol; sgn(t) is the sign function, with sgn(t) = +1 if t > 0 and sgn(t) = -1 if t < 0, t being the argument of the sign function;
the trained aesthetic-score regression model f_t(x) is shown in the following formula:
f_t(x) = w_t^T x + b_t
where w_t and b_t are the model parameters of the aesthetic-score regression model.
Further, in a preferred technical solution provided by the invention:
the model auxiliary parameter calculation module includes a first model parameter calculation unit and a second model parameter calculation unit;
the first model parameter calculation unit is configured to calculate the model parameters of the aesthetic-grade classification model according to the following formula:
min_{w_s, b_s} (1/2)||w_s||^2 + λ Σ_{i=1}^{n_s} ℓ(y_i^s, w_s^T x_i^s + b_s)
where n_s is the number of sample images used to train the aesthetic-grade classification model; ℓ is the quadratic loss function, with ℓ(y, f) = max(0, 1 - y·f)^2; x_i^s is the image feature of the i-th sample image; y_i^s is the label of the i-th sample image; and λ is the balance factor between the quadratic loss function and the regularization term;
the second model parameter calculation unit is configured to calculate the model parameters of the aesthetic-score regression model according to the following formula:
min_{w_t, b_t} (1/2)||w_t||^2 + μ Σ_{i=1}^{n_t} ℓ_ε(y_i^t, w_t^T x_i^t + b_t)
where n_t is the number of sample images used to train the aesthetic-score regression model; ℓ_ε is the quadratic loss function, with ℓ_ε(y, f) = max(0, |y - f| - ε)^2; x_i^t is the image feature of the i-th sample image; y_i^t is the aesthetic-score label of the i-th sample image; ε is a positive real number smaller than a preset threshold; and μ is the balance factor between the quadratic loss function and the regularization term.
Further, in a preferred technical solution provided by the invention:
the model parameter adjustment module includes a first adjustment unit and a second adjustment unit;
the first adjustment unit is configured to adjust the model parameters of the aesthetic-grade classification model according to the following formula:
min_{w_s, b_s} Σ_{i=1}^{n_s} ℓ(y_i^s, w_s^T x_i^s + b_s) + λ'||w - τ_s w_s||^2
where w is the model auxiliary parameter, λ' is the balance factor between the quadratic loss function and the parallel constraint term, and τ_s is the auxiliary parameter associating the model auxiliary parameter w with the model parameter w_s;
the second adjustment unit is configured to adjust the model parameters of the aesthetic-score regression model according to the following formula:
min_{w_t, b_t} Σ_{i=1}^{n_t} ℓ_ε(y_i^t, w_t^T x_i^t + b_t) + μ'||w - τ_t w_t||^2
where μ' is the balance factor between the quadratic loss function and the parallel constraint term, and τ_t is the auxiliary parameter associating the model auxiliary parameter w with the model parameter w_t.
Further, in a preferred technical solution provided by the invention, the preset constraint condition is shown in the following formula:
min_{w, γ_s, γ_t} ||w - γ_s w_s||^2 + ||w - γ_t w_t||^2, subject to ||w||_2 = 1
where w is the model auxiliary parameter; w_s is the model parameter of the aesthetic-grade classification model; w_t is the model parameter of the aesthetic-score regression model; γ_s is the auxiliary parameter associating the model auxiliary parameter w with the model parameter w_s; and γ_t is the auxiliary parameter associating the model auxiliary parameter w with the model parameter w_t.
Compared with the closest prior art, the above technical solution has at least the following beneficial effects:
1. The image aesthetic assessment method in the present invention can calculate a model auxiliary parameter according to a preset constraint condition and the model parameters of the trained aesthetic-grade classification model and aesthetic-score regression model, adjust the model parameters of the two trained models according to the model auxiliary parameter, and recalculate the model auxiliary parameter according to the adjusted model parameters until it satisfies a preset iteration condition. This method realizes knowledge transfer between the aesthetic-grade classification model and the aesthetic-score regression model: the model auxiliary parameter regulates the model parameters of both models, and through repeated iterative calculation the aesthetic-score assessment accuracy of the aesthetic-score regression model can be significantly improved.
2. The image aesthetic assessment device in the present invention mainly includes a model auxiliary parameter calculation module, a model parameter adjustment module and an iteration module. The model auxiliary parameter calculation module may be configured to calculate the model auxiliary parameter according to the preset constraint condition and the model parameters of the trained aesthetic-grade classification model and aesthetic-score regression model; the model parameter adjustment module may be configured to adjust the model parameters of the two trained models according to the model auxiliary parameter; and the iteration module may be configured to recalculate the model auxiliary parameter according to the adjusted model parameters until it satisfies the preset iteration condition. This structure likewise realizes knowledge transfer between the aesthetic-grade classification model and the aesthetic-score regression model, and can significantly improve the aesthetic-score assessment accuracy of the aesthetic-score regression model.
Brief description of the drawings
Fig. 1 is a flow chart of the main steps of an image aesthetic assessment method in an embodiment of the present invention;
Fig. 2 is a schematic diagram of the images to be tested in an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of an image aesthetic assessment device in an embodiment of the present invention.
Detailed description of the embodiments
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. Those skilled in the art will appreciate that these embodiments are only used to explain the technical principle of the present invention and are not intended to limit the scope of the invention.
At present, image aesthetics can be assessed with deep-learning-based methods, but a quantitative assessment method requires lengthy annotation of the deep-learning training samples, whereas a qualitative assessment method allows the training samples to be annotated quickly. Meanwhile, qualitative and quantitative aesthetic assessment are related as follows: an image judged high-quality by the qualitative method will, compared with an image judged low-quality, also receive a higher aesthetic score from the quantitative method. Based on this, the invention provides an image aesthetic assessment method that links the aesthetic-grade classification model and the aesthetic-score regression model through a parameter-based transfer learning method. Specifically, the aesthetic-grade classification model is trained on an image database annotated with "high/low quality" aesthetic labels, and the parameter-based transfer learning method is then used to assist the training of the aesthetic-score regression model on a small number of images annotated with aesthetic scores, thereby reducing the annotation burden when training the aesthetic-score regression model.
An image aesthetic assessment method in an embodiment of the present invention is described below with reference to the accompanying drawings.
Referring to Fig. 1, Fig. 1 exemplarily shows the main steps of the image aesthetic assessment method in this embodiment. As shown in Fig. 1, the method mainly includes the following steps:
Step S101: carry out model training on the preset aesthetic-grade classification model and the preset aesthetic-score regression model.
In this embodiment, an image feature extraction method based on a deep neural network can be used to obtain the image features of the sample images, and the preset aesthetic-grade classification model and the preset aesthetic-score regression model are then trained on the sample images and the extracted image features. In a preferred technical solution of this embodiment, an image feature extraction method based on the deep convolutional neural network AlexNet can be used to obtain the image features of the sample images.
Specifically, suppose the training database used for the aesthetic-grade classification model in this embodiment includes n_s sample images. The image feature of each sample image is obtained with the aforementioned deep-neural-network-based feature extraction method and denoted x^s, e.g. the image feature of the i-th sample image is x_i^s. Meanwhile a label y^s is attached to each sample image: if the label information of the i-th sample image is "high quality", the corresponding label y_i^s is +1; if it is "low quality", the corresponding label y_i^s is -1. Based on this sample data, the trained aesthetic-grade classification model f_s(x) shown in formula (1) can be obtained:
f_s(x) = sgn(w_s^T x + b_s) (1)
The parameters in formula (1) have the following meanings: w_s and b_s are the model parameters of the aesthetic-grade classification model and T is the transposition symbol; sgn(t) is the sign function, with sgn(t) = +1 if t > 0 and sgn(t) = -1 if t < 0, t being the argument of the sign function.
Suppose the training database used for the aesthetic-score regression model in this embodiment includes n_t sample images. The image feature of each sample image is obtained with the aforementioned deep-neural-network-based feature extraction method and denoted x^t, e.g. the image feature of the i-th sample image is x_i^t. Meanwhile the aesthetic score of each sample image is taken as its label y^t, e.g. the label of the i-th sample image is y_i^t. Based on this sample data, the trained aesthetic-score regression model f_t(x) shown in formula (2) can be obtained:
f_t(x) = w_t^T x + b_t (2)
The parameters in formula (2) have the following meanings: w_t and b_t are the model parameters of the aesthetic-score regression model.
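As a concrete illustration, formulas (1) and (2) are simply a sign-thresholded and a plain linear function of the image feature vector. The following minimal Python sketch shows both; the feature vector and weights are hypothetical placeholders for illustration, not trained values.

```python
def f_s(x, w_s, b_s):
    """Aesthetic-grade classifier of formula (1): sgn(w_s^T x + b_s)."""
    t = sum(wi * xi for wi, xi in zip(w_s, x)) + b_s
    return 1 if t > 0 else -1

def f_t(x, w_t, b_t):
    """Aesthetic-score regressor of formula (2): w_t^T x + b_t."""
    return sum(wi * xi for wi, xi in zip(w_t, x)) + b_t

# Hypothetical 2-D image feature and weights, for illustration only.
x = [0.8, 0.2]
print(f_s(x, [1.0, -1.0], 0.0))   # 1  (classified as high quality)
print(f_t(x, [5.0, 5.0], 2.0))    # 7.0 (predicted aesthetic score)
```

In practice x would be the AlexNet feature of an image and the weights would come from the training procedures described below.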
Continuing with Fig. 1, the image aesthetic assessment method in this embodiment further includes step S102: calculating the model auxiliary parameter according to the preset constraint condition and the model parameters of the trained aesthetic-grade classification model f_s(x) and aesthetic-score regression model f_t(x).
In this embodiment, an L2-SVM (L2-loss Support Vector Machine) model can be used to calculate the model parameters of the aesthetic-grade classification model; specifically, the model parameters of the aesthetic-grade classification model can be calculated according to formula (3):
min_{w_s, b_s} (1/2)||w_s||^2 + λ Σ_{i=1}^{n_s} ℓ(y_i^s, w_s^T x_i^s + b_s) (3)
The parameters in formula (3) have the following meanings: ℓ is the quadratic loss function, and λ is the balance factor between the quadratic loss function and the regularization term. The quadratic loss function is shown in formula (4):
ℓ(y, f) = max(0, 1 - y·f)^2 (4)
In this embodiment, formula (3) can be solved with Newton's method.
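The embodiment solves formula (3) with Newton's method; as a simpler hedged sketch, the same squared-hinge objective can also be minimized by plain gradient descent, as below. The toy 1-D "features", learning rate and epoch count are invented for illustration only.

```python
def train_l2svm(xs, ys, lam=1.0, lr=0.01, epochs=300):
    """Minimize 0.5*||w||^2 + lam * sum_i max(0, 1 - y_i*(w.x_i + b))^2
    by gradient descent (a stand-in for the Newton solve of formula (3))."""
    d = len(xs[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = list(w), 0.0          # gradient of the 0.5*||w||^2 term
        for x, y in zip(xs, ys):
            m = max(0.0, 1.0 - y * (sum(wi * xi for wi, xi in zip(w, x)) + b))
            if m > 0.0:                # squared hinge is differentiable here
                for j in range(d):
                    gw[j] -= 2.0 * lam * m * y * x[j]
                gb -= 2.0 * lam * m * y
        w = [wi - lr * gi for wi, gi in zip(w, gw)]
        b -= lr * gb
    return w, b

# Toy data: positive feature value stands for a high-quality image (+1).
w, b = train_l2svm([[2.0], [3.0], [-2.0], [-3.0]], [1, 1, -1, -1])
print(w[0] > 0)   # True: the learned weight points toward high quality
```

The design choice mirrors formula (4): the loss is zero once the margin y·f exceeds 1, and grows quadratically otherwise, so only margin-violating samples contribute to the gradient.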
Further, in this embodiment an L2-SVR (L2-loss Support Vector Regression) model can be used to calculate the model parameters of the aesthetic-score regression model; specifically, the model parameters of the aesthetic-score regression model can be calculated according to formula (5):
min_{w_t, b_t} (1/2)||w_t||^2 + μ Σ_{i=1}^{n_t} ℓ_ε(y_i^t, w_t^T x_i^t + b_t) (5)
The parameters in formula (5) have the following meanings: ℓ_ε is the quadratic loss function, and μ is the balance factor between the quadratic loss function and the regularization term. The quadratic loss function is shown in formula (6):
ℓ_ε(y, f) = max(0, |y - f| - ε)^2 (6)
In formula (6), the parameter ε is a positive real number smaller than a preset threshold, and characterizes the sensitivity of the quadratic loss function to error.
In this embodiment, formula (5) can be solved with Newton's method.
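A matching hedged sketch for formula (5): gradient descent on the ε-insensitive squared loss of formula (6), again as a stand-in for the Newton solve. The toy scores (roughly following score = 2·feature + 1) and hyperparameters are invented for illustration.

```python
def train_l2svr(xs, ys, mu=1.0, eps=0.1, lr=0.01, epochs=500):
    """Minimize 0.5*||w||^2 + mu * sum_i max(0, |y_i - (w.x_i + b)| - eps)^2
    by gradient descent (a stand-in for the Newton solve of formula (5))."""
    d = len(xs[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = list(w), 0.0          # gradient of the 0.5*||w||^2 term
        for x, y in zip(xs, ys):
            r = y - (sum(wi * xi for wi, xi in zip(w, x)) + b)   # residual
            m = max(0.0, abs(r) - eps)
            if m > 0.0:                # outside the eps-insensitive tube
                s = 1.0 if r > 0 else -1.0
                for j in range(d):
                    gw[j] -= 2.0 * mu * m * s * x[j]
                gb -= 2.0 * mu * m * s
        w = [wi - lr * gi for wi, gi in zip(w, gw)]
        b -= lr * gb
    return w, b

w, b = train_l2svr([[0.0], [1.0], [2.0], [3.0]], [1.0, 3.0, 5.0, 7.0])
```

Residuals smaller than ε cost nothing, which is exactly the error-sensitivity role the text assigns to ε in formula (6).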
Further, in this embodiment, if noise and the aesthetic differences between the annotators of the sample images are ignored, the predictions of the aesthetic-grade classification model and the aesthetic-score regression model are consistent: the aesthetic score of a sample image classified as +1 should be higher than that of a sample image classified as -1. A constraint condition can therefore be constructed from the parallel relation between the model parameter w_s of the aesthetic-grade classification model and the model parameter w_t of the aesthetic-score regression model. Here, the parallel relation between w_s and w_t means that they satisfy the following parallel constraint:
w_s = α w_t (7)
The parameters in formula (7) have the following meanings: α is a preset constraint factor with α ∈ ℝ+, where ℝ+ is the set of positive real numbers.
Specifically, the preset constraint condition in this embodiment is shown in formula (8):
min_{w, γ_s, γ_t} ||w - γ_s w_s||^2 + ||w - γ_t w_t||^2, subject to ||w||_2 = 1 (8)
The parameters in formula (8) have the following meanings: w is the model auxiliary parameter; γ_s is the auxiliary parameter associating the model auxiliary parameter w with the model parameter w_s; γ_t is the auxiliary parameter associating the model auxiliary parameter w with the model parameter w_t.
In this embodiment, the above constraint condition only restricts the directions of the model parameters w_s and w_t, without affecting their norms. Specifically, the analytic solution of formula (8) is shown in formula (9):
w = (w_s/||w_s|| + w_t/||w_t||) / ||w_s/||w_s|| + w_t/||w_t||||, γ_s = w^T w_s / ||w_s||^2, γ_t = w^T w_t / ||w_t||^2 (9)
From formula (9) it can be determined that the model auxiliary parameter w is in fact the direction of the angular bisector of w_s and w_t. Since the directions of w_s and w_t can both be regarded as directions along which the aesthetic evaluation rises, the model auxiliary parameter w is a more generalized direction of rising aesthetic quality. Substituting the solutions of formula (9) into formula (8), the minimum value attained is shown in formula (10):
1 - cos θ (10)
where the parameter cos θ = w_s^T w_t / (||w_s||·||w_t||) in formula (10) is the cosine of the angle between w_s and w_t.
From formula (10) it can be determined that the above constraint condition is only related to the angle between the model parameters w_s and w_t and is independent of their norms; hence it only restricts the directions of w_s and w_t without affecting their norms.
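Per the analysis above, the model auxiliary parameter is just the unit angle-bisector direction of the two model parameters. A minimal sketch, assuming only that w is the normalized sum of the unit vectors of w_s and w_t (the function name is ours, not the patent's):

```python
import math

def unit(v):
    """Normalize a vector to unit length."""
    n = math.sqrt(sum(vi * vi for vi in v))
    return [vi / n for vi in v]

def auxiliary_direction(w_s, w_t):
    """Model auxiliary parameter w: the unit angle-bisector of w_s and w_t.
    It depends only on the directions of w_s and w_t, not on their norms."""
    return unit([a + b for a, b in zip(unit(w_s), unit(w_t))])

w = auxiliary_direction([3.0, 0.0], [0.0, 0.5])
print(w)   # ~[0.7071, 0.7071]: the bisector of the two axis directions
```

Note that scaling either input (here norms 3.0 and 0.5) leaves w unchanged, matching the statement that the constraint is independent of the norms.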
Continuing with Fig. 1, the image aesthetic assessment method in this embodiment further includes step S103: adjusting the model parameters of the trained aesthetic-grade classification model and aesthetic-score regression model according to the model auxiliary parameter.
In this embodiment, the model auxiliary parameter is used to adjust the model parameter w_s of the aesthetic-grade classification model, so that while the quadratic loss function is minimized, the direction of w_s is kept as parallel as possible to the direction of the model auxiliary parameter. Specifically, the model parameters of the aesthetic-grade classification model can be adjusted according to formula (11):
min_{w_s, b_s} Σ_{i=1}^{n_s} ℓ(y_i^s, w_s^T x_i^s + b_s) + λ'||w - τ_s w_s||^2 (11)
The parameters in formula (11) have the following meanings: w is the model auxiliary parameter; λ' is the balance factor between the quadratic loss function and the parallel constraint term; τ_s is the auxiliary parameter associating the model auxiliary parameter w with the model parameter w_s.
In this embodiment, the analytic solution of the parameter τ_s in formula (11) is shown in formula (12):
τ_s = w^T w_s / ||w_s||^2 (12)
Substituting the analytic solution of τ_s into formula (11) yields formula (13):
min_{w_s, b_s} Σ_{i=1}^{n_s} ℓ(y_i^s, w_s^T x_i^s + b_s) + λ'(1 - (w^T w_s)^2 / ||w_s||^2) (13)
In this embodiment, formula (13) can be solved with Newton's method.
Further, in this embodiment the model auxiliary parameter is used to adjust the model parameter w_t of the aesthetic-score regression model, so that while the quadratic loss function is minimized, the direction of w_t is kept as parallel as possible to the model auxiliary parameter. Specifically, the model parameters of the aesthetic-score regression model can be adjusted according to formula (14):
min_{w_t, b_t} Σ_{i=1}^{n_t} ℓ_ε(y_i^t, w_t^T x_i^t + b_t) + μ'||w - τ_t w_t||^2 (14)
The parameters in formula (14) have the following meanings: μ' is the balance factor between the quadratic loss function and the parallel constraint term; τ_t is the auxiliary parameter associating the model auxiliary parameter w with the model parameter w_t.
In this embodiment, the analytic solution of the parameter τ_t in formula (14) is shown in formula (15):
τ_t = w^T w_t / ||w_t||^2 (15)
Substituting the analytic solution of τ_t into formula (14) yields an objective depending only on w_t and b_t, which in this embodiment can be solved with Newton's method.
With continued reference to Fig. 1, the aesthetic-qualitative level disaggregated model after model training is being adjusted in step S103 in the present embodiment
After the model parameter of aesthetic feeling fraction regression model, in addition to:Model-aided is recalculated according to the model parameter after adjustment
Parameter, until model-aided parameter meets default iterated conditional.
In this embodiment, adjusting the model parameters of the aesthetic grade classification model and the aesthetic score regression model according to the model auxiliary parameter improves the accuracy of those model parameters. A new model auxiliary parameter is then computed using the more accurate model parameters, and this new auxiliary parameter is in turn used to continue adjusting the model parameters of the two models. The calculation iterates in this way until the model auxiliary parameter meets the iteration condition, finally yielding accurate model parameters.
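The alternating procedure just described can be sketched end to end. Since the patent text here does not reproduce formulas (3), (5), (8), (11), (12), (14) or (15), the sketch fills them in with plausible stand-ins: a squared loss for both models, closed-form ridge-style solves, a least-squares scaling for the τ parameters, and a leading-eigenvector solution of the unit-norm constraint. All of these, and the synthetic data, are assumptions for illustration.

```python
import numpy as np

def solve_model(X, y, w_aux, lam, lam_p, tau):
    """Closed-form minimizer of ||w||^2 + lam*sum(y - Xw - b)^2 + lam_p*||w_aux - tau*w||^2."""
    n, d = X.shape
    Xa = np.hstack([X, np.ones((n, 1))])          # absorb the bias b
    A = lam * Xa.T @ Xa
    A[:d, :d] += (1.0 + lam_p * tau**2) * np.eye(d)
    rhs = lam * Xa.T @ y
    rhs[:d] += lam_p * tau * w_aux
    sol = np.linalg.solve(A, rhs)
    return sol[:d], sol[d]

def solve_auxiliary(w_s, w_t):
    """Unit vector closest (after optimal scaling) to both model parameter directions."""
    u = lambda v: v / np.linalg.norm(v)
    M = np.outer(u(w_s), u(w_s)) + np.outer(u(w_t), u(w_t))
    return np.linalg.eigh(M)[1][:, -1]            # top eigenvector, unit norm

def train(Xs, ys, Xt, yt, lam=1.0, mu=1.0, lam_p=0.5, mu_p=0.5, tol=1e-6, iters=100):
    d = Xs.shape[1]
    # independent model training (parallel term disabled by zero weight)
    w_s, b_s = solve_model(Xs, ys, np.zeros(d), lam, 0.0, 0.0)
    w_t, b_t = solve_model(Xt, yt, np.zeros(d), mu, 0.0, 0.0)
    w = solve_auxiliary(w_s, w_t)                 # auxiliary parameter from constraint
    for _ in range(iters):                        # adjust, then recalculate, iterate
        tau_s = w @ w_s / (w_s @ w_s)             # assumed analytic tau solutions
        tau_t = w @ w_t / (w_t @ w_t)
        w_s, b_s = solve_model(Xs, ys, w, lam, lam_p, tau_s)
        w_t, b_t = solve_model(Xt, yt, w, mu, mu_p, tau_t)
        w_new = solve_auxiliary(w_s, w_t)
        if min(np.linalg.norm(w_new - w), np.linalg.norm(w_new + w)) < tol:
            break                                 # iteration condition on w (sign-safe)
        w = w_new
    return w_s, b_s, w_t, b_t, w

# tiny synthetic run (made-up data sharing one underlying direction)
rng = np.random.default_rng(0)
Xs, Xt = rng.normal(size=(40, 3)), rng.normal(size=(40, 3))
w_true = np.array([1.0, 2.0, -1.0])
ys = np.sign(Xs @ w_true)
yt = Xt @ w_true + 0.1 * rng.normal(size=40)
w_s, b_s, w_t, b_t, w = train(Xs, ys, Xt, yt)
```

On data like this, where classification and regression share a direction, the recovered auxiliary parameter w aligns closely with that shared direction.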
Referring next to Fig. 2, Fig. 2 shows a schematic diagram of testing images. As shown in Fig. 2, the aesthetic scores of testing images 11 to 18 in this embodiment are listed in Table 1 below:
Table 1
Specifically, in this embodiment the pre-migration aesthetic score regression model refers to the aesthetic regression model before data migration is performed on it using the image aesthetic assessment method shown in Fig. 1, and the post-migration aesthetic score regression model refers to the aesthetic regression model after data migration is performed on it using the image aesthetic assessment method shown in Fig. 1.
As can be determined from Table 1, compared with the pre-migration aesthetic score regression model, the assessment results of the aesthetic score regression model on which data migration has been performed using the image aesthetic assessment method shown in Fig. 1 are more accurate.
Although the steps in the above embodiment are described in the order given above, those skilled in the art will appreciate that, to achieve the effect of this embodiment, the steps need not be performed in that order; they may be performed simultaneously (in parallel) or in reverse order, and all such simple variations fall within the protection scope of the present invention.
Based on the same technical concept as the method embodiment, an embodiment of the present invention further provides an image aesthetic assessment apparatus. The image aesthetic assessment apparatus is described in detail below with reference to the accompanying drawings.
Referring to Fig. 3, Fig. 3 shows the structure of the image aesthetic assessment apparatus in this embodiment. As shown in Fig. 3, the image aesthetic assessment apparatus in this embodiment may include a model training module 21, a model auxiliary parameter calculation module 22, a model parameter adjustment module 23 and an iteration module 24. The model training module 21 may be configured to perform model training on a preset aesthetic grade classification model and a preset aesthetic score regression model. The model auxiliary parameter calculation module 22 may be configured to calculate a model auxiliary parameter according to a preset constraint condition and the model parameters of the trained aesthetic grade classification model and aesthetic score regression model. The model parameter adjustment module 23 may be configured to adjust, according to the model auxiliary parameter, the model parameters of the trained aesthetic grade classification model and aesthetic score regression model. The iteration module 24 may be configured to recalculate the model auxiliary parameter according to the adjusted model parameters until the model auxiliary parameter meets a preset iteration condition.
Further, in this embodiment the trained aesthetic grade classification model f_s(x) in the model training module 21 is shown in formula (1), and the trained aesthetic score regression model f_t(x) is shown in formula (2).
Further, in this embodiment the model auxiliary parameter calculation module 22 may include a first model parameter calculation unit and a second model parameter calculation unit. The first model parameter calculation unit may be configured to calculate the model parameter of the aesthetic grade classification model according to the method shown in formula (3). The second model parameter calculation unit may be configured to calculate the model parameter of the aesthetic score regression model according to the method shown in formula (5).
Further, in this embodiment the model parameter adjustment module 23 may include a first adjustment unit and a second adjustment unit. The first adjustment unit may be configured to adjust the model parameter of the aesthetic grade classification model according to the method shown in formula (11). The second adjustment unit may be configured to adjust the model parameter of the aesthetic score regression model according to the method shown in formula (14).
Further, the constraint condition used by the model auxiliary parameter calculation module 22 in this embodiment is the constraint condition shown in formula (8).
The above image aesthetic assessment apparatus embodiment may be used to perform the above image aesthetic assessment method embodiment; its technical principle, the technical problem it solves and the technical effect it produces are similar. Those skilled in the art will clearly recognize that, for convenience and brevity of description, the specific working process and related explanation of the image aesthetic assessment apparatus described above may refer to the corresponding process in the foregoing image aesthetic assessment method embodiment, which will not be repeated here.
Those skilled in the art will understand that the above image aesthetic assessment apparatus also includes other known structures, such as a processor, a controller and a memory, where the memory includes but is not limited to random access memory, flash memory, read-only memory, programmable read-only memory, volatile memory, non-volatile memory, serial memory, parallel memory or registers, and the processor includes but is not limited to a CPLD/FPGA, a DSP, an ARM processor, a MIPS processor, etc. In order not to unnecessarily obscure the embodiments of the present disclosure, these known structures are not shown in Fig. 3.
It should be understood that the number of each module in Fig. 3 is merely schematic. Each module may be present in any quantity according to actual needs.
Those skilled in the art will understand that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units or components of an embodiment may be combined into one module, unit or component, and may furthermore be divided into multiple sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
In addition, those skilled in the art will appreciate that although some embodiments described herein include certain features that are included in other embodiments and not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims of the present invention, any of the claimed embodiments may be used in any combination.
The component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of a server or client according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the method described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such a signal may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a properly programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any order; these words may be interpreted as names.
The technical solutions of the present invention have thus far been described with reference to the preferred embodiments shown in the accompanying drawings; however, those skilled in the art will readily understand that the protection scope of the present invention is obviously not limited to these specific embodiments. Without departing from the principle of the present invention, those skilled in the art can make equivalent changes or substitutions to the relevant technical features, and the technical solutions after such changes or substitutions will fall within the protection scope of the present invention.
Claims (10)
1. An image aesthetic assessment method, characterized in that the method comprises:
performing model training on a preset aesthetic grade classification model and a preset aesthetic score regression model;
calculating a model auxiliary parameter according to a preset constraint condition and the model parameters of the trained aesthetic grade classification model and aesthetic score regression model;
adjusting, according to the model auxiliary parameter, the model parameters of the trained aesthetic grade classification model and aesthetic score regression model; and
recalculating the model auxiliary parameter according to the adjusted model parameters, until the model auxiliary parameter meets a preset iteration condition.
2. The method according to claim 1, characterized in that
the trained aesthetic grade classification model f_s(x) is shown in the following formula:
f_s(x) = sgn(w_s^T x + b_s)
where w_s and b_s are the model parameters of the aesthetic grade classification model, T is the transposition symbol, and sgn(t) is the sign function, with sgn(t) = +1 if t > 0 and sgn(t) = -1 if t < 0, t being the variable of the sign function;
the trained aesthetic score regression model f_t(x) is shown in the following formula:
f_t(x) = w_t^T x + b_t
where w_t and b_t are the model parameters of the aesthetic score regression model.
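Read literally, the two model forms in claim 2 are a linear classifier and a linear regressor over the same image feature vector. A minimal numeric illustration (the feature vector and parameter values below are invented for the example, not taken from the patent):

```python
import numpy as np

def f_s(x, w_s, b_s):
    # aesthetic grade classifier: sgn(w_s^T x + b_s), returning +1 or -1
    return 1 if w_s @ x + b_s > 0 else -1

def f_t(x, w_t, b_t):
    # aesthetic score regressor: w_t^T x + b_t
    return w_t @ x + b_t

x = np.array([0.2, 0.5, 0.3])                    # hypothetical image feature
w_s, b_s = np.array([1.0, -0.5, 2.0]), -0.1      # hypothetical classifier parameters
w_t, b_t = np.array([3.0, 1.0, 0.5]), 2.0        # hypothetical regressor parameters

grade = f_s(x, w_s, b_s)   # 0.2 - 0.25 + 0.6 - 0.1 = 0.45 > 0, so grade is +1
score = f_t(x, w_t, b_t)   # 0.6 + 0.5 + 0.15 + 2.0 = 3.25
```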
3. The method according to claim 2, characterized in that, before calculating the model auxiliary parameter, the method comprises:
calculating the model parameter of the aesthetic grade classification model according to the method shown in the following formula:
$$\min_{w_s,\, b_s} \|w_s\|^2 + \lambda \sum_{i=1}^{n_s} l^s\left(x_i^s, y_i^s; w_s, b_s\right)$$
where n_s is the number of sample images used for model training of the aesthetic grade classification model, l^s(x_i^s, y_i^s; w_s, b_s) is a quadratic loss function, x_i^s is the image feature of the i-th sample image, y_i^s is the label corresponding to the i-th sample image, and λ is the balance factor between the quadratic loss function and the regularization term;
calculating the model parameter of the aesthetic score regression model according to the method shown in the following formula:
$$\min_{w_t,\, b_t} \|w_t\|^2 + \mu \sum_{i=1}^{n_t} l^t\left(x_i^t, y_i^t; w_t, b_t\right)$$
where n_t is the number of sample images used for model training of the aesthetic score regression model, l^t(x_i^t, y_i^t; w_t, b_t) is a quadratic loss function, x_i^t is the image feature of the i-th sample image, y_i^t is the label corresponding to the i-th sample image, ε is a positive real number smaller than a preset threshold, and μ is the balance factor between the quadratic loss function and the regularization term.
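If l^s and l^t are taken to be the squared loss (the claims call them quadratic loss functions without reproducing their exact form in this text), both objectives in claim 3 are ridge-style and admit a closed-form minimizer. A sketch under that assumption, with the bias b absorbed into the design matrix and made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_s, d = 30, 4
X, y = rng.normal(size=(n_s, d)), rng.normal(size=n_s)
lam = 2.0

# min_{w,b} ||w||^2 + lam * sum_i (y_i - w.x_i - b)^2   (assumed form of l^s)
Xa = np.hstack([X, np.ones((n_s, 1))])   # last column carries the bias b
A = lam * Xa.T @ Xa
A[:d, :d] += np.eye(d)                   # ||w||^2 regularizes w only, not b
sol = np.linalg.solve(A, lam * Xa.T @ y)
w_s, b_s = sol[:d], sol[d]
```

Setting the gradient of the objective to zero gives exactly the linear system solved above; the regression objective with μ has the same shape.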
4. The method according to claim 1, characterized in that
the preset constraint condition is shown in the following formula:
$$\begin{cases} \min\limits_{\gamma_s,\, \gamma_t,\, w} \|w - \gamma_s w_s\|^2 + \|w - \gamma_t w_t\|^2 \\ \text{s.t. } \|w\| = 1 \end{cases}$$
where w is the model auxiliary parameter, w_s is the model parameter of the aesthetic grade classification model, and w_t is the model parameter of the aesthetic score regression model; γ_s is the auxiliary parameter corresponding to the model auxiliary parameter w and the model parameter w_s, and γ_t is the auxiliary parameter corresponding to the model auxiliary parameter w and the model parameter w_t.
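The constrained problem in claim 4 can be solved in closed form: for fixed unit-norm w the optimal γ_s and γ_t are least-squares scalings, and substituting them back turns the problem into maximizing (w·u_s)^2 + (w·u_t)^2 over unit w, a leading-eigenvector computation. This reduction is our own reading of the constraint, not spelled out in the patent text:

```python
import numpy as np

def solve_auxiliary(w_s, w_t):
    # For fixed w: gamma_k* = (w . w_k) / ||w_k||^2, and the objective becomes
    # 2 - (w.u_s)^2 - (w.u_t)^2 with u_k = w_k / ||w_k||, so the minimizer is
    # the top eigenvector of M = u_s u_s^T + u_t u_t^T (unit norm by construction).
    u_s = w_s / np.linalg.norm(w_s)
    u_t = w_t / np.linalg.norm(w_t)
    M = np.outer(u_s, u_s) + np.outer(u_t, u_t)
    vals, vecs = np.linalg.eigh(M)       # eigenvalues in ascending order
    return vecs[:, -1]

w = solve_auxiliary(np.array([2.0, 0.0]), np.array([1.0, 1.0]))
# w lies "between" the two parameter directions (here the 22.5-degree bisector)
```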
5. The method according to claim 3, characterized in that
adjusting, according to the model auxiliary parameter, the model parameters of the trained aesthetic grade classification model and aesthetic score regression model comprises:
adjusting the model parameter of the aesthetic grade classification model according to the method shown in the following formula:
$$\min_{w_s,\, b_s,\, \tau_s} \|w_s\|^2 + \lambda \sum_{i=1}^{n_s} l^s\left(x_i^s, y_i^s; w_s, b_s\right) + \lambda' \|w - \tau_s w_s\|^2$$
where w is the model auxiliary parameter, λ' is the balance factor between the quadratic loss function and the parallel constraint term ||w - τ_s w_s||^2, and τ_s is the auxiliary parameter corresponding to the model auxiliary parameter w and the model parameter w_s;
adjusting the model parameter of the aesthetic score regression model according to the method shown in the following formula:
$$\min_{w_t,\, b_t,\, \tau_t} \|w_t\|^2 + \mu \sum_{i=1}^{n_t} l^t\left(x_i^t, y_i^t; w_t, b_t\right) + \mu' \|w - \tau_t w_t\|^2$$
where μ' is the balance factor between the loss function and the parallel constraint term, and τ_t is the auxiliary parameter corresponding to the model auxiliary parameter w and the model parameter w_t.
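The analytic solutions for τ_s and τ_t referenced in the description (formulas (12) and (15), which are not reproduced in this text) plausibly come from minimizing the parallel-constraint penalty alone over τ, which gives the least-squares scaling τ_k = (w · w_k) / ||w_k||^2. That derivation is our assumption; a quick numeric check with made-up vectors:

```python
import numpy as np

def tau_star(w, w_k):
    # argmin_tau ||w - tau * w_k||^2: project w onto the direction of w_k
    return float(w @ w_k) / float(w_k @ w_k)

rng = np.random.default_rng(2)
w, w_k = rng.normal(size=5), rng.normal(size=5)
t = tau_star(w, w_k)
residual = w - t * w_k        # orthogonal to w_k at the optimum

def penalty(tau):
    return float(np.sum((w - tau * w_k) ** 2))
```

The first-order condition w_k · (w − τ w_k) = 0 is exactly the orthogonality of the residual, and any perturbation of τ increases the penalty.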
6. An image aesthetic assessment apparatus, characterized in that the apparatus comprises:
a model training module configured to perform model training on a preset aesthetic grade classification model and a preset aesthetic score regression model;
a model auxiliary parameter calculation module configured to calculate a model auxiliary parameter according to a preset constraint condition and the model parameters of the trained aesthetic grade classification model and aesthetic score regression model;
a model parameter adjustment module configured to adjust, according to the model auxiliary parameter, the model parameters of the trained aesthetic grade classification model and aesthetic score regression model; and
an iteration module configured to recalculate the model auxiliary parameter according to the adjusted model parameters, until the model auxiliary parameter meets a preset iteration condition.
7. The apparatus according to claim 6, characterized in that
the trained aesthetic grade classification model f_s(x) is shown in the following formula:
f_s(x) = sgn(w_s^T x + b_s)
where w_s and b_s are the model parameters of the aesthetic grade classification model, T is the transposition symbol, and sgn(t) is the sign function, with sgn(t) = +1 if t > 0 and sgn(t) = -1 if t < 0, t being the variable of the sign function;
the trained aesthetic score regression model f_t(x) is shown in the following formula:
f_t(x) = w_t^T x + b_t
where w_t and b_t are the model parameters of the aesthetic score regression model.
8. The apparatus according to claim 7, characterized in that the model auxiliary parameter calculation module comprises a first model parameter calculation unit and a second model parameter calculation unit;
the first model parameter calculation unit is configured to calculate the model parameter of the aesthetic grade classification model according to the method shown in the following formula:
$$\min_{w_s,\, b_s} \|w_s\|^2 + \lambda \sum_{i=1}^{n_s} l^s\left(x_i^s, y_i^s; w_s, b_s\right)$$
where n_s is the number of sample images used for model training of the aesthetic grade classification model, l^s(x_i^s, y_i^s; w_s, b_s) is a quadratic loss function, x_i^s is the image feature of the i-th sample image, y_i^s is the label corresponding to the i-th sample image, and λ is the balance factor between the quadratic loss function and the regularization term;
the second model parameter calculation unit is configured to calculate the model parameter of the aesthetic score regression model according to the method shown in the following formula:
$$\min_{w_t,\, b_t} \|w_t\|^2 + \mu \sum_{i=1}^{n_t} l^t\left(x_i^t, y_i^t; w_t, b_t\right)$$
where n_t is the number of sample images used for model training of the aesthetic score regression model, l^t(x_i^t, y_i^t; w_t, b_t) is a quadratic loss function, x_i^t is the image feature of the i-th sample image, y_i^t is the label corresponding to the i-th sample image, ε is a positive real number smaller than a preset threshold, and μ is the balance factor between the quadratic loss function and the regularization term.
9. The apparatus according to claim 8, characterized in that the model parameter adjustment module comprises a first adjustment unit and a second adjustment unit;
the first adjustment unit is configured to adjust the model parameter of the aesthetic grade classification model according to the method shown in the following formula:
$$\min_{w_s,\, b_s,\, \tau_s} \|w_s\|^2 + \lambda \sum_{i=1}^{n_s} l^s\left(x_i^s, y_i^s; w_s, b_s\right) + \lambda' \|w - \tau_s w_s\|^2$$
where w is the model auxiliary parameter, λ' is the balance factor between the loss function and the parallel constraint term, and τ_s is the auxiliary parameter corresponding to the model auxiliary parameter w and the model parameter w_s;
the second adjustment unit is configured to adjust the model parameter of the aesthetic score regression model according to the method shown in the following formula:
$$\min_{w_t,\, b_t,\, \tau_t} \|w_t\|^2 + \mu \sum_{i=1}^{n_t} l^t\left(x_i^t, y_i^t; w_t, b_t\right) + \mu' \|w - \tau_t w_t\|^2$$
where μ' is the balance factor between the loss function and the parallel constraint term, and τ_t is the auxiliary parameter corresponding to the model auxiliary parameter w and the model parameter w_t.
10. The apparatus according to claim 6, characterized in that
the preset constraint condition is shown in the following formula:
$$\begin{cases} \min\limits_{\gamma_s,\, \gamma_t,\, w} \|w - \gamma_s w_s\|^2 + \|w - \gamma_t w_t\|^2 \\ \text{s.t. } \|w\| = 1 \end{cases}$$
where w is the model auxiliary parameter, w_s is the model parameter of the aesthetic grade classification model, and w_t is the model parameter of the aesthetic score regression model; γ_s is the auxiliary parameter corresponding to the model auxiliary parameter w and the model parameter w_s, and γ_t is the auxiliary parameter corresponding to the model auxiliary parameter w and the model parameter w_t.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710564852.7A CN107481218B (en) | 2017-07-12 | 2017-07-12 | Image aesthetic feeling evaluation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107481218A true CN107481218A (en) | 2017-12-15 |
CN107481218B CN107481218B (en) | 2020-03-27 |
Family
ID=60595571
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710564852.7A Active CN107481218B (en) | 2017-07-12 | 2017-07-12 | Image aesthetic feeling evaluation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107481218B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108090902A (en) * | 2017-12-30 | 2018-05-29 | 中国传媒大学 | A kind of non-reference picture assessment method for encoding quality based on multiple dimensioned generation confrontation network |
CN109544503A (en) * | 2018-10-15 | 2019-03-29 | 北京达佳互联信息技术有限公司 | Image processing method, device, electronic equipment and storage medium |
CN110473164A (en) * | 2019-05-31 | 2019-11-19 | 北京理工大学 | A kind of image aesthetic quality evaluation method based on attention mechanism |
CN111008971A (en) * | 2019-12-24 | 2020-04-14 | 天津工业大学 | Aesthetic quality evaluation method of group photo image and real-time shooting guidance system |
CN111860039A (en) * | 2019-04-26 | 2020-10-30 | 四川大学 | Cross-connection CNN + SVR-based street space quality quantification method |
CN114186497A (en) * | 2021-12-15 | 2022-03-15 | 湖北工业大学 | Intelligent analysis method, system, equipment and medium for value of art work |
CN109522950B (en) * | 2018-11-09 | 2022-04-22 | 网易传媒科技(北京)有限公司 | Image scoring model training method and device and image scoring method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080050014A1 (en) * | 2006-08-22 | 2008-02-28 | Gary Bradski | Training and using classification components on multiple processing units |
CN102982373A (en) * | 2012-12-31 | 2013-03-20 | 山东大学 | OIN (Optimal Input Normalization) neural network training method for mixed SVM (Support Vector Machine) regression algorithm |
CN103218619A (en) * | 2013-03-15 | 2013-07-24 | 华南理工大学 | Image aesthetics evaluating method |
CN105894025A (en) * | 2016-03-30 | 2016-08-24 | 中国科学院自动化研究所 | Natural image aesthetic feeling quality assessment method based on multitask deep learning |
Non-Patent Citations (5)
Title |
---|
KEUNWOO CHOI 等: "Transfer learning for music classification and regression tasks", 《COMPUTER VISION AND PATTERN RECOGNITION》 * |
LIXIN DUAN 等: "Domain Transfer Multiple Kernel Learning", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 * |
OLIVIER CHAPELLE: "Training a Support Vector Machine in the Primal", 《NEURAL COMPUT》 * |
YUEYING KAO 等: "Deep Aesthetic Quality Assessment With Semantic Information", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 * |
CAI DONG: "Research on computer image aesthetic classification and evaluation system", 《CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY》 *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108090902A (en) * | 2017-12-30 | 2018-05-29 | 中国传媒大学 | A kind of non-reference picture assessment method for encoding quality based on multiple dimensioned generation confrontation network |
CN108090902B (en) * | 2017-12-30 | 2021-12-31 | 中国传媒大学 | Non-reference image quality objective evaluation method based on multi-scale generation countermeasure network |
CN109544503A (en) * | 2018-10-15 | 2019-03-29 | 北京达佳互联信息技术有限公司 | Image processing method, device, electronic equipment and storage medium |
CN109544503B (en) * | 2018-10-15 | 2020-12-01 | 北京达佳互联信息技术有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN109522950B (en) * | 2018-11-09 | 2022-04-22 | 网易传媒科技(北京)有限公司 | Image scoring model training method and device and image scoring method and device |
CN111860039A (en) * | 2019-04-26 | 2020-10-30 | 四川大学 | Cross-connection CNN + SVR-based street space quality quantification method |
CN110473164A (en) * | 2019-05-31 | 2019-11-19 | 北京理工大学 | Image aesthetic quality evaluation method based on attention mechanism |
CN110473164B (en) * | 2019-05-31 | 2021-10-15 | 北京理工大学 | Image aesthetic quality evaluation method based on attention mechanism |
CN111008971A (en) * | 2019-12-24 | 2020-04-14 | 天津工业大学 | Aesthetic quality evaluation method of group photo image and real-time shooting guidance system |
CN114186497A (en) * | 2021-12-15 | 2022-03-15 | 湖北工业大学 | Intelligent analysis method, system, equipment and medium for value of art work |
Also Published As
Publication number | Publication date |
---|---|
CN107481218B (en) | 2020-03-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107481218A (en) | Image aesthetic feeling appraisal procedure and device | |
CN109212617B (en) | Automatic identification method and device for electrical imaging logging facies | |
CN104899298B (en) | Microblog sentiment analysis method based on large-scale corpus feature learning | |
CN105955962B (en) | Method and device for calculating similarity of questions | |
CN107358293A (en) | Neural network training method and device | |
CN103353872B (en) | Personalized teaching resource recommendation method based on neural networks | |
CN107273490A (en) | Knowledge-graph-based wrong-question combination recommendation method | |
CN110211173A (en) | Paleontological fossil localization and recognition method based on deep learning | |
CN107220231A (en) | Electronic equipment and method and training method for natural language processing | |
CN106777402B (en) | Image-text retrieval method based on a sparse neural network | |
CN103942749B (en) | Hyperspectral terrain classification method based on a modified cluster assumption and a semi-supervised extreme learning machine | |
CN110288007A (en) | Data annotation method, apparatus, and electronic device | |
CN106203625A (en) | Deep neural network training method based on multiple pre-training | |
CN107368613A (en) | Short text sentiment analysis method and device | |
CN106022954A (en) | Multiple BP neural network load prediction method based on grey correlation degree | |
McCormack et al. | Deep learning of individual aesthetics | |
CN112614552B (en) | BP neural network-based soil heavy metal content prediction method and system | |
CN105701512A (en) | Image classification method based on BBO-MLP and texture characteristic | |
CN109740072A (en) | POI-based hotel ranking method and system for OTA platforms | |
CN107945534A (en) | Customized bus passenger flow prediction method based on GMDH neural networks | |
CN110308658A (en) | PID parameter tuning method, device, system and readable storage medium | |
Londhe et al. | Infilling of missing daily rainfall records using artificial neural network | |
CN107819810A (en) | adaptive planning system | |
CN108416483A (en) | PSO-optimized RBF-based teaching quality evaluation and prediction method | |
CN114969528A (en) | User portrait and learning path recommendation method, device and equipment based on capability evaluation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||