CN107392865A - Restoration method of face image - Google Patents
Restoration method of face image
- Publication number
- CN107392865A (application CN201710528727.0A; granted as CN107392865B)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1347—Preprocessing; Feature extraction
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides a method for restoring face images, comprising the following steps: S1, obtain a pair of face images; S2, take the blurred image as the initial input image and feed it into a policy network; S3, use the policy network to select a region of the input image; S4, use an enhancement network to restore the region selected in S3; S5, iterate S3 to S4 several times; S6, train the policy network and the enhancement network; S7, initialize the policy network and the enhancement network with the trained parameters; S8, take the face image to be restored as the initial input image, feed it into the policy network, and repeat S3 to S5 to obtain the restored face image. The restoration method provided by the invention can autonomously give priority to the less distorted regions of a blurred face image, restore those regions first, and then use the extra information recovered there to help restore the remaining distorted regions, achieving better restoration than the prior art.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a method for restoring face images.
Background technology
Low-resolution face image restoration refers to recovering a clear, high-resolution face image from one or several low-resolution face images. In many images and videos, clear faces carry important information and value. In recent years especially, with the wide deployment of road monitoring, dashboard cameras, and security surveillance, clear faces in surveillance video and images have received increasing attention. In many applications, such as identity verification, crowd analysis, and human tracking, face images play an extremely important role. In practice, the demand for high-resolution face images often conflicts with the low resolution of surveillance video, so blurred, low-definition faces in surveillance video bring many obstacles and inconveniences to practical video-monitoring applications. Under current technical constraints, high-resolution optical sensors are not ubiquitous. Although upgrading equipment such as optical sensors can alleviate face blur in video, this increases purchase and maintenance costs and cannot improve the definition of already recorded video. Moreover, during use there are many sources of interference, such as motion and long distance, that degrade recording quality. Therefore, obtaining the desired information by restoring high-resolution images through technical means is in great practical demand.
At present, when analyzing video, people often check the surveillance footage repeatedly and observe the key parts over and over, and face images are often one of the key pieces of information in a video. Because faces in surveillance video are usually far away and occupy a small fraction of the frame, the resolution of a face image is often low when the camera is distant. For faces whose definition is insufficient, the common approach is to apply interpolation-based magnification directly and then analyze. Interpolation is fast and widely used, but its magnification quality is poor: it damages the high-frequency information of the image and blurs it, making it much harder to identify faces in video.
With the development of computer vision, many computer vision techniques have been applied to low-resolution face image restoration. The relatively mature techniques include interpolation, dictionary learning, and deep convolutional neural networks. Dictionary learning builds two dictionaries, one for low-resolution and one for high-resolution images, and learns the mapping relations between them so as to map from low resolution to high resolution. Interpolation builds an optimized up-sampling function model and enlarges the image under the condition that high-frequency information is preserved. Deep learning obtains the high-resolution image through a "sparse representation - mapping - reconstruction" process from low resolution to high resolution carried out by a neural network. Although many methods exist for low-resolution face restoration, most of them target face images under controlled conditions, i.e., the face must be captured under strict angle, illumination, and expression constraints.
The content of the invention
It is an object of the present invention to it is directed to problems of the prior art, there is provided a kind of restored method of facial image,
To realize fuzzy face image restoration under uncontrolled environment into picture rich in detail.
To achieve the above object, the present invention adopts the following technical scheme:
A method for restoring face images, comprising the following steps:
S1, obtain a pair of face images, the pair comprising a clear image and a blurred image of the same face;
S2, take the blurred image as the initial input image and feed it into a policy network;
S3, use the policy network to select a region of the input image;
S4, use an enhancement network to restore the region of the input image selected in S3;
S5, take the whole image obtained after the region restoration of S4 as the input image of S3, and iterate S3 to S4 several times; the image obtained by the last repetition of S4 is the restored image;
S6, compute the similarity between the restored image obtained in S5 and the clear image obtained in S1, train the policy network of S3 with a reinforcement learning algorithm, and train the enhancement network of S4 with gradient back-propagation and gradient descent;
S7, initialize the policy network and the enhancement network with the parameters trained in S6;
S8, take the face image to be restored as the initial input image, feed it into the policy network, and repeat S3 to S5 to obtain the restored face image.
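For illustration only, the S2 to S8 loop can be sketched with stub networks standing in for the policy and enhancement networks. The stubs (a uniform probability map and an identity "restoration") are placeholders, not the networks the invention describes; the 60*45 region size is taken from the detailed description, and its row/column orientation is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
RH, RW = 60, 45  # fixed region size from the description (orientation assumed)

def policy_network(image, train=True):
    """Stub for S3: return a region-center point drawn from a probability map.
    A uniform map is used here; the real policy network is FC + LSTM."""
    pmap = np.full(image.shape, 1.0 / image.size)
    if train:                      # training: sample a point from the map
        idx = rng.choice(image.size, p=pmap.reshape(-1))
    else:                          # inference: take the argmax
        idx = int(np.argmax(pmap))
    return divmod(int(idx), image.shape[1])

def enhancement_network(region):
    """Stub for S4: 'restore' the selected region. Identity placeholder;
    the real enhancement network is 3 FC layers plus an 8-layer CNN."""
    return region

def restore(blurred, iterations=25, train=False):
    image = blurred.copy()                       # S2: initial input image
    for _ in range(iterations):                  # S5: iterate S3 to S4
        cy, cx = policy_network(image, train)    # S3: choose a region
        y0 = int(np.clip(cy - RH // 2, 0, image.shape[0] - RH))
        x0 = int(np.clip(cx - RW // 2, 0, image.shape[1] - RW))
        patch = image[y0:y0 + RH, x0:x0 + RW]
        image[y0:y0 + RH, x0:x0 + RW] = enhancement_network(patch)  # S4
    return image                                 # the restored image

restored_image = restore(rng.random((128, 128)))
```

With trained networks, each pass through the loop restores one region and feeds the updated whole image back to the policy network, which is the core of the method.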
Further, the policy network comprises a fully connected layer and a long short-term memory (LSTM) network; the LSTM network records and encodes the regions chosen in previous iterations of S3 and passes them to the next iteration in the form of a hidden vector.
Further, the input image of the policy network in step S3 is the blurred image or the image obtained by step S4 of the previous iteration, and its output is a probability map of the same size as the input image. In S8, when S3 is executed, a rectangular region of fixed size is cut out of the input image at the position corresponding to the point of highest probability in the probability map; that rectangle, centered on the highest-probability point, is the region selected by step S3.
Further, before S8, when S3 is executed, a point randomly sampled from the probability map is taken as the center, and the rectangular region of fixed size cut out of the input image at the corresponding position is the region selected by step S3.
Further, the enhancement network comprises a convolutional neural network and several fully connected layers; the convolutional neural network consists of 8 convolutional layers.
Further, in S6, the similarity between the restored image obtained in S5 and the clear image obtained in S1 is computed as the mean squared error between the two, i.e., the squared differences of the pixels at corresponding positions of the two images are computed and all resulting values are summed.
Further, in S6, the policy network is trained with a reinforcement learning algorithm as follows: the image similarity obtained in step S6 is negated and used as the reward signal of the reinforcement learning method; the gradient of the reward signal with respect to the policy network is obtained with the REINFORCE algorithm; and the parameters of the policy network are updated with gradient back-propagation and gradient descent.
Further, S7 also includes obtaining several pairs of face images and performing S2 to S7 iteratively on each pair in turn.
Compared with the prior art, the beneficial effect of the present invention is: the face image restoration method provided by the invention can autonomously give priority to the less distorted regions of a blurred face image, restore those regions first, and use the extra information recovered there to help restore the remaining distorted regions, achieving better restoration than the prior art.
Brief description of the drawings
Fig. 1 is a flow diagram of the face image restoration method provided by the invention.
Fig. 2 shows a face image pair used in the invention.
Fig. 3 is a flow diagram of S3 to S4 in the invention.
Fig. 4 is a flow diagram of S5 in the invention.
Fig. 5 shows an example of face image restoration performed with the method of the invention.
Detailed description of the embodiments
The technical scheme of the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
The face image restoration method provided by the invention can restore blurred face images into clear images and mainly comprises two parts: neural network training and face image restoration.
Specifically, as shown in Fig. 1, the face image restoration method provided by the invention comprises the following steps:
S1, obtain a pair of face images; as shown in Fig. 2, the pair comprises a clear image and a blurred image of the same face;
S2, take the blurred image as the initial input image and feed it into a policy network;
S3, use the policy network to select a region of the input image;
S4, use an enhancement network to restore the region of the input image selected in S3;
S5, take the whole image obtained after the region restoration of S4 as the input image of S3, and iterate S3 to S4 several times; the image obtained by the last repetition of S4 is the restored image;
S6, compute the similarity between the restored image obtained in S5 and the clear image obtained in S1, train the policy network of S3 with a reinforcement learning algorithm, and train the enhancement network of S4 with gradient back-propagation and gradient descent;
S7, initialize the policy network and the enhancement network with the parameters trained in S6;
S8, take the face image to be restored as the initial input image, feed it into the policy network, and repeat S3 to S5 to obtain the restored face image.
Here, S1 to S7 form the neural network training process, and S8 is the face image restoration process.
Before training the neural networks, the parameters of the policy network and the enhancement network can first be randomly initialized from a normal distribution with mean 0 and variance 0.01. The policy network comprises a fully connected layer and an LSTM network; the enhancement network comprises a convolutional neural network and several fully connected layers, the convolutional neural network consisting of 8 convolutional layers.
Further, the processing of steps S3 to S4 is shown in Fig. 3, and that of S5 in Fig. 4. Specifically: in S5, each iteration of S3 to S4 outputs a new "state". A "state" consists of two parts. One part is the image output by S4 after region restoration; this image contains the restoration results of the regions of all previous "states", so the policy network can learn which regions of the image are already clear and which remain blurred, and can decide, based on the regions already restored, which region should be restored next. The other part is the hidden vector produced by the LSTM of the policy network in S3; an LSTM can memorize long-term information, and the LSTM here records and encodes the locations of the regions selected in previous iterations and passes them to the next iteration in the form of a hidden vector.
From the second iteration on, the input of the policy network in S3 is the "state" produced by the previous iteration (i.e., the whole image obtained after the region restoration of the previous S4). The first layer of the policy network is a fully connected layer whose input is the image. Assuming the image size is 128*128, this fully connected layer flattens the input image into a 16384-dimensional vector and outputs a 256-dimensional vector. This 256-dimensional vector, together with the hidden vector of the previous iteration, is fed into the LSTM. The LSTM then outputs a 512-dimensional hidden vector, from which a fully connected layer outputs a probability map of size 128*128. Each point of the probability map represents the probability that the policy network selects a region of fixed size centered on that point of the input image. Because during training we do not need to select the region of maximum probability, a point is randomly sampled from the probability map, and the rectangular region of size 60*45 centered on that point is output as the region selected by step S3.
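A shape-level NumPy sketch of this forward pass, assuming randomly initialized weights and a hand-written single LSTM cell; the weight values, gate ordering, and the softmax normalization of the probability map are assumptions, since the patent only fixes the dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, FC_DIM, HID = 128, 128, 256, 512

# Fully connected layer: flattened 128*128 image -> 256-dim vector.
W_fc1 = rng.normal(0, 0.01, (FC_DIM, H * W))
# LSTM cell weights, gates stacked in order i, f, o, g (an assumption).
W_x = rng.normal(0, 0.01, (4 * HID, FC_DIM))
W_h = rng.normal(0, 0.01, (4 * HID, HID))
# Output layer: 512-dim hidden vector -> 128*128 probability map.
W_fc2 = rng.normal(0, 0.01, (H * W, HID))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def policy_step(image, h, c):
    """One S3 step: image plus previous hidden state -> probability map."""
    x = W_fc1 @ image.reshape(-1)              # (256,)
    gates = W_x @ x + W_h @ h                  # (2048,)
    i, f, o = (sigmoid(gates[k * HID:(k + 1) * HID]) for k in range(3))
    g = np.tanh(gates[3 * HID:])
    c = f * c + i * g                          # new cell state
    h = o * np.tanh(c)                         # new 512-dim hidden vector
    logits = W_fc2 @ h                         # (16384,)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                       # softmax over all points
    return probs.reshape(H, W), h, c

image = rng.random((H, W))
pmap, h1, c1 = policy_step(image, np.zeros(HID), np.zeros(HID))
# Training-time S3: sample a 60*45 region center from the probability map.
idx = rng.choice(H * W, p=pmap.reshape(-1))
cy, cx = divmod(int(idx), W)
```

Passing `h1, c1` into the next `policy_step` call is what carries the record of previously selected regions across iterations.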
Assume the image to be restored has size 128*128 and the extracted region has size 60*45. In step S4, the whole image is flattened into a 16384-dimensional vector, passed through a first fully connected layer to obtain a 256-dimensional vector, through a second fully connected layer to obtain another 256-dimensional vector, and finally through a third fully connected layer to obtain a feature map of size 60*45. This 60*45 feature map is merged with the extracted image region to form a feature map of size 2*60*45, which is passed through the convolutional neural network to obtain the restored 60*45 region image. The convolutional neural network consists of 8 convolutional layers: the first and second layers use 5*5 kernels with output size 60*45*16; the third, fourth, and fifth layers use 7*7 kernels with output size 60*45*64; the sixth and seventh layers use 7*7 kernels with output size 60*45*32; and the eighth layer uses a 5*5 kernel with output size 60*45*1, which is the restored region image. The restored region replaces the corresponding region of the image obtained in the previous iteration, and the whole image formed after the replacement serves as the input of the next iteration.
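A sketch of the 8-layer stack under stated assumptions: the kernel sizes and channel counts are taken from the paragraph above (the placement of the 32-channel layers at positions 6 and 7 is inferred), "same" zero-padding is assumed since every output keeps the region's spatial size, ReLU activations are assumed, and the spatial size is scaled down so the naive loops finish quickly:

```python
import numpy as np

# (kernel, out_channels) for the 8 conv layers, as read from the text.
LAYERS = [(5, 16), (5, 16), (7, 64), (7, 64), (7, 64), (7, 32), (7, 32), (5, 1)]

def conv_same(x, weight):
    """Naive 'same'-padded 2-D convolution. x: (C_in, H, W); weight:
    (C_out, C_in, k, k). Spatial size is preserved, matching the 60*45 outputs."""
    c_in, h, w = x.shape
    c_out, _, k, _ = weight.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros((c_out, h, w))
    for o in range(c_out):
        for i in range(h):
            for j in range(w):
                out[o, i, j] = np.sum(xp[:, i:i + k, j:j + k] * weight[o])
    return out

rng = np.random.default_rng(0)
# Input: the cropped region stacked with the FC feature map gives 2 channels.
# Spatial size scaled down from 60*45 to 12*9 so the naive loops run quickly;
# 'same' padding preserves whatever spatial size is used.
x = rng.random((2, 12, 9))
for k, c_out in LAYERS:
    w = rng.normal(0, 0.01, (c_out, x.shape[0], k, k))
    x = np.maximum(conv_same(x, w), 0)  # ReLU between layers (an assumption)
print(x.shape)  # single-channel restored region
```

The single-channel output of the eighth layer is the restored region that gets pasted back into the image.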
By iterating S3 to S4 several times in S5, an image restored from the blurred image, here called the restored image, is finally obtained. By comparing the restored image with the clear image, the policy network and the enhancement network can then be trained.
Specifically, in S6, the similarity between the restored image obtained in S5 and the clear image obtained in S1 is computed as the mean squared error between the two: the squared differences of the pixels at corresponding positions of the two images are computed and all resulting values are summed.
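A concrete illustration of this similarity measure; note that the text names it mean squared error but describes a sum of squared differences, and the sketch follows the described sum:

```python
import numpy as np

def similarity_error(restored, clear):
    """Squared difference at each corresponding pixel, summed over the image."""
    diff = restored.astype(float) - clear.astype(float)
    return np.sum(diff ** 2)

restored = np.array([[0.0, 1.0], [2.0, 3.0]])
clear = np.array([[0.0, 0.0], [0.0, 0.0]])
print(similarity_error(restored, clear))  # 0 + 1 + 4 + 9 = 14.0
```

The reward signal used for reinforcement learning in S6 is the negation of this value.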
Further, the enhancement network is trained with the usual neural network training procedure: the mean squared error is used as the loss function, and the network parameters are updated with gradient back-propagation and gradient descent. The policy network, in contrast, uses a reinforcement learning algorithm: it tries a different region each time and, depending on the quality of the final reward signal, the selections of the whole sequence are encouraged or suppressed.
In S6, the policy network is trained with the reinforcement learning algorithm as follows: the mean squared error computed above is negated and used as the reward signal of the reinforcement learning method; the gradient of the reward signal with respect to the policy network is obtained with the REINFORCE algorithm; and the parameters of the policy network are updated with gradient back-propagation and gradient descent. In this embodiment, let the value of the reward signal be R and the probability of the randomly sampled point of one iteration be P; then the gradient of the policy network at this step is R/P at that point and 0 at every unselected point, and this gradient is used with gradient back-propagation and gradient descent to update the parameters of the policy network.
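A minimal NumPy sketch of this gradient rule, assuming a softmax-normalized probability map and the negated summed-squared-error reward described above:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 128, 128

# A probability map as output by the policy network (softmax-normalized here).
logits = rng.normal(size=(H, W))
pmap = np.exp(logits) / np.exp(logits).sum()

# Sample one region center, as done during training.
idx = rng.choice(H * W, p=pmap.reshape(-1))
cy, cx = divmod(int(idx), W)

# Reward R: negated summed squared error between restored and clear images.
restored, clear = rng.random((H, W)), rng.random((H, W))
R = -np.sum((restored - clear) ** 2)

# Gradient over the probability map: R / P at the sampled point, 0 elsewhere.
grad = np.zeros((H, W))
grad[cy, cx] = R / pmap[cy, cx]
```

Because R is always negative here, a poor restoration (large error) pushes the probability of the chosen point down more strongly than a good one, which is the encourage/suppress behavior described above.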
As an improvement, S7 also includes obtaining several pairs of face images and performing S2 to S7 iteratively on each pair in turn. Using several pairs of face images as training samples and training the policy network and the enhancement network iteratively improves the training effect; the more sample pairs, the better the effect. Within each pair, the blurred image can be obtained by shrinking the clear image with bilinear interpolation and then enlarging it back to full size, which simplifies sample acquisition.
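A sketch of this sample-synthesis step with a hand-written bilinear resampler; the shrink factor of 4 is an assumption, as the patent does not fix one:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Minimal bilinear resampling (align-corners style) for a 2-D array."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

rng = np.random.default_rng(0)
clear = rng.random((128, 128))
# Shrink, then enlarge back to full size: the round trip discards
# high-frequency detail and yields the blurred counterpart.
small = bilinear_resize(clear, 32, 32)
blurred = bilinear_resize(small, 128, 128)
pair = (clear, blurred)  # one training face-image pair
```

Each such pair supplies both the network input (the blurred image) and the training target (the clear image) without any manual labeling.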
After the training of the policy network and the enhancement network is completed and their parameters initialized, the face image to be restored can be used as the initial input image to carry out face image restoration. In S8, after S3 to S4 have been iterated 25 times or some other fixed number of times, the final image is a face image whose regions have been restored many times. As shown in Fig. 5, each of the 25 face images is the output image of the current iteration of steps S3 to S4, and above each face image is the face region selected by S3 of the current iteration. The output image of the last iteration is the output image of the method.
It should be noted that in S8, when S3 is executed, the selection of the image region from the probability map output by the policy network differs slightly from that during training: when actually restoring a single face image, the point of highest probability in the probability map is taken as the center, and the rectangular region of fixed size cut out of the input image at the corresponding position is the region selected by step S3.
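A sketch of this inference-time selection, assuming a 60-row by 45-column region and clamping at the image borders; the patent fixes the 60*45 size but not the orientation or the boundary handling:

```python
import numpy as np

def select_region(image, pmap, rh=60, rw=45):
    """Inference-time S3: center a fixed-size crop on the argmax of the
    probability map, clamped so the rectangle stays inside the image."""
    cy, cx = np.unravel_index(np.argmax(pmap), pmap.shape)
    y0 = int(np.clip(cy - rh // 2, 0, image.shape[0] - rh))
    x0 = int(np.clip(cx - rw // 2, 0, image.shape[1] - rw))
    return image[y0:y0 + rh, x0:x0 + rw], (y0, x0)

rng = np.random.default_rng(0)
image = rng.random((128, 128))
pmap = rng.random((128, 128))  # stand-in for the policy network's output
region, (y0, x0) = select_region(image, pmap)
print(region.shape)  # (60, 45)
```

Replacing the argmax with a random draw weighted by the probability map recovers the training-time behavior of S3.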
The embodiments described above express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent claims. It should be pointed out that a person of ordinary skill in the art can make various modifications and improvements without departing from the concept of the invention, and these all fall within the scope of protection of the invention. Therefore, the scope of protection of this patent shall be determined by the appended claims.
Claims (8)
1. A method for restoring face images, characterized in that it comprises the following steps:
S1, obtaining a pair of face images, the pair comprising a clear image and a blurred image of the same face;
S2, taking the blurred image as the initial input image and feeding it into a policy network;
S3, using the policy network to select a region of the input image;
S4, using an enhancement network to restore the region of the input image selected in S3;
S5, taking the whole image obtained after the region restoration of S4 as the input image of S3 and iterating S3 to S4 several times, the image obtained by the last repetition of S4 being the restored image;
S6, computing the similarity between the restored image obtained in S5 and the clear image obtained in S1, training the policy network of S3 with a reinforcement learning algorithm, and training the enhancement network of S4 with gradient back-propagation and gradient descent;
S7, initializing the policy network and the enhancement network with the parameters trained in S6;
S8, taking the face image to be restored as the initial input image, feeding it into the policy network, and repeating S3 to S5 to obtain the restored face image.
2. The method according to claim 1, characterized in that the policy network comprises a fully connected layer and a long short-term memory (LSTM) network; the LSTM network records and encodes the regions chosen in previous iterations of S3 and passes them to the next iteration in the form of a hidden vector.
3. The method according to claim 1, characterized in that the input image of the policy network in step S3 is the blurred image or the image obtained by step S4 of the previous iteration, and its output is a probability map of the same size as the input image; in S8, when S3 is executed, the rectangular region of fixed size cut out of the input image at the position corresponding to the point of highest probability in the probability map, centered on that point, is the region selected by step S3.
4. The method according to claim 3, characterized in that before S8, when S3 is executed, a point randomly sampled from the probability map is taken as the center, and the rectangular region of fixed size cut out of the input image at the corresponding position is the region selected by step S3.
5. The method according to claim 1, characterized in that the enhancement network comprises a convolutional neural network and several fully connected layers, the convolutional neural network consisting of 8 convolutional layers.
6. The method according to claim 1, characterized in that in S6 the similarity between the restored image obtained in S5 and the clear image obtained in S1 is computed as the mean squared error between the two, i.e., the squared differences of the pixels at corresponding positions of the two images are computed and all resulting values are summed.
7. The method according to claim 1, characterized in that in S6 the policy network is trained with a reinforcement learning algorithm as follows: the image similarity obtained in step S6 is negated and used as the reward signal of the reinforcement learning method; the gradient of the reward signal with respect to the policy network is obtained with the REINFORCE algorithm; and the parameters of the policy network are updated with gradient back-propagation and gradient descent.
8. The method according to claim 1, characterized in that S7 further comprises obtaining several pairs of face images and performing S2 to S7 iteratively on each pair in turn.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710528727.0A CN107392865B (en) | 2017-07-01 | 2017-07-01 | Restoration method of face image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710528727.0A CN107392865B (en) | 2017-07-01 | 2017-07-01 | Restoration method of face image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107392865A true CN107392865A (en) | 2017-11-24 |
CN107392865B CN107392865B (en) | 2020-08-07 |
Family
ID=60335138
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710528727.0A Active CN107392865B (en) | 2017-07-01 | 2017-07-01 | Restoration method of face image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107392865B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108280058A (en) * | 2018-01-02 | 2018-07-13 | 中国科学院自动化研究所 | Relation extraction method and apparatus based on intensified learning |
CN108305214A (en) * | 2017-12-28 | 2018-07-20 | 腾讯科技(深圳)有限公司 | Image processing method, device, storage medium and computer equipment |
CN108364262A (en) * | 2018-01-11 | 2018-08-03 | 深圳大学 | A kind of restored method of blurred picture, device, equipment and storage medium |
CN108510451A (en) * | 2018-02-09 | 2018-09-07 | 杭州雄迈集成电路技术有限公司 | A method of the reconstruction car plate based on the double-deck convolutional neural networks |
CN108830801A (en) * | 2018-05-10 | 2018-11-16 | 湖南丹尼尔智能科技有限公司 | A kind of deep learning image recovery method of automatic identification vague category identifier |
CN109886891A (en) * | 2019-02-15 | 2019-06-14 | 北京市商汤科技开发有限公司 | A kind of image recovery method and device, electronic equipment, storage medium |
CN110858279A (en) * | 2018-08-22 | 2020-03-03 | 格力电器(武汉)有限公司 | Food material identification method and device |
CN112200226A (en) * | 2020-09-27 | 2021-01-08 | 北京达佳互联信息技术有限公司 | Image processing method based on reinforcement learning, image processing method and related device |
CN112634158A (en) * | 2020-12-22 | 2021-04-09 | 平安普惠企业管理有限公司 | Face image recovery method and device, computer equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110305404A1 (en) * | 2010-06-14 | 2011-12-15 | Chia-Wen Lin | Method And System For Example-Based Face Hallucination |
CN104680491A (en) * | 2015-02-28 | 2015-06-03 | 西安交通大学 | Non-uniform image motion blur removing method based on deep neural network |
CN106127684A (en) * | 2016-06-22 | 2016-11-16 | 中国科学院自动化研究所 | Image super-resolution Enhancement Method based on forward-backward recutrnce convolutional neural networks |
CN106600538A (en) * | 2016-12-15 | 2017-04-26 | 武汉工程大学 | Human face super-resolution algorithm based on regional depth convolution neural network |
- 2017-07-01: application CN201710528727.0A filed (CN); granted as patent CN107392865B, status Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110305404A1 (en) * | 2010-06-14 | 2011-12-15 | Chia-Wen Lin | Method And System For Example-Based Face Hallucination |
CN104680491A (en) * | 2015-02-28 | 2015-06-03 | 西安交通大学 | Non-uniform image motion blur removing method based on deep neural network |
CN106127684A (en) * | 2016-06-22 | 2016-11-16 | 中国科学院自动化研究所 | Image super-resolution Enhancement Method based on forward-backward recutrnce convolutional neural networks |
CN106600538A (en) * | 2016-12-15 | 2017-04-26 | 武汉工程大学 | Human face super-resolution algorithm based on regional depth convolution neural network |
Non-Patent Citations (2)
Title |
---|
ONCEL TUZEL ET AL: "Global-Local Face Upsampling Network", arXiv preprint arXiv:1603.07235 * |
VOLODYMYR MNIH ET AL: "Recurrent Models of Visual Attention", Advances in Neural Information Processing Systems * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108305214A (en) * | 2017-12-28 | 2018-07-20 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method, device, storage medium and computer equipment |
CN108280058A (en) * | 2018-01-02 | 2018-07-13 | Institute of Automation, Chinese Academy of Sciences | Relation extraction method and apparatus based on reinforcement learning |
CN108364262A (en) * | 2018-01-11 | 2018-08-03 | Shenzhen University | Blurred image restoration method, apparatus, device and storage medium |
CN108510451A (en) * | 2018-02-09 | 2018-09-07 | Hangzhou Xiongmai Integrated Circuit Technology Co., Ltd. | License plate reconstruction method based on a two-layer convolutional neural network |
CN108510451B (en) * | 2018-02-09 | 2021-02-12 | Hangzhou Xiongmai Integrated Circuit Technology Co., Ltd. | Method for reconstructing license plate based on double-layer convolutional neural network |
CN108830801A (en) * | 2018-05-10 | 2018-11-16 | Hunan Daniel Intelligent Technology Co., Ltd. | Deep learning image restoration method with automatic identification of blur type |
CN110858279A (en) * | 2018-08-22 | 2020-03-03 | Gree Electric Appliances (Wuhan) Co., Ltd. | Food material identification method and device |
CN109886891A (en) * | 2019-02-15 | 2019-06-14 | Beijing SenseTime Technology Development Co., Ltd. | Image restoration method and device, electronic equipment, storage medium |
CN109886891B (en) * | 2019-02-15 | 2022-01-11 | Beijing SenseTime Technology Development Co., Ltd. | Image restoration method and device, electronic equipment and storage medium |
CN112200226A (en) * | 2020-09-27 | 2021-01-08 | Beijing Dajia Internet Information Technology Co., Ltd. | Image processing method based on reinforcement learning, and related device |
CN112634158A (en) * | 2020-12-22 | 2021-04-09 | Ping An Puhui Enterprise Management Co., Ltd. | Face image restoration method and device, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN107392865B (en) | 2020-08-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107392865A (en) | A kind of restored method of facial image | |
Shi et al. | Normalised gamma transformation‐based contrast‐limited adaptive histogram equalisation with colour correction for sand–dust image enhancement | |
CN109035149B (en) | License plate image motion blur removing method based on deep learning | |
CN111275643B (en) | Real noise blind denoising network system and method based on channel and spatial attention | |
CN105657402B (en) | Depth map restoration method | |
CN112800876B (en) | Hypersphere feature embedding method and system for re-identification | |
CN110287846A (en) | Face key point detection method based on attention mechanism | |
CN108629753A (en) | Face image restoration method and device based on recurrent neural network | |
CN111861906B (en) | Pavement crack image virtual augmentation model establishment and image virtual augmentation method | |
CN109214973A (en) | Adversarial security barrier generation method for steganalysis neural networks | |
CN108596818B (en) | Image steganalysis method based on multitask learning convolutional neural network | |
CN110276389B (en) | Mine mobile inspection image reconstruction method based on edge correction | |
CN107729820A (en) | Finger vein recognition method based on multi-scale HOG | |
CN106295501A (en) | Identity recognition method based on deep learning of lip movement | |
CN104537381B (en) | Blurred image recognition method based on blur-invariant features | |
Zhang et al. | Single image dehazing based on bright channel prior model and saliency analysis strategy | |
CN111476727B (en) | Video motion enhancement method for face-changing video detection | |
CN113378672A (en) | Multi-target detection method for defects of power transmission line based on improved YOLOv3 | |
CN103679645A (en) | Signal processing apparatus, signal processing method, output apparatus, output method, and program | |
CN112001785A (en) | Network credit fraud identification method and system based on image identification | |
Liang et al. | Deep convolution neural networks for automatic eyeglasses removal | |
CN117372413A (en) | Wafer defect detection method based on generation countermeasure network | |
CN110175509A (en) | All-weather periocular recognition method based on cascaded super-resolution | |
CN111507185B (en) | Fall detection method based on stacked dilated convolution network | |
CN116668068A (en) | Industrial control abnormal flow detection method based on joint federal learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 20220310
Address after: 511455 No. 106, Fengze East Road, Nansha District, Guangzhou City, Guangdong Province (self compiled Building 1) x1301-b013290
Patentee after: Guangzhou wisdom Technology (Guangzhou) Co.,Ltd.
Address before: 510000 210-5, Chuangqi Building 1, 63 Chuangqi Road, Shilou Town, Panyu District, Guangzhou City, Guangdong Province
Patentee before: GUANGZHOU SHENYU INFORMATION TECHNOLOGY CO.,LTD.