
CN106327448A - Picture stylization processing method based on deep learning - Google Patents


Info

Publication number: CN106327448A
Authority: CN (China)
Prior art keywords: picture, pixel, super, processing method, method based
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201610789762.3A
Other languages: Chinese (zh)
Inventors: 盛斌, 常柯
Current Assignee: Shanghai Jiaotong University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Shanghai Jiaotong University
Priority date: 2016-08-31 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date: 2016-08-31
Publication date: 2017-01-11
Application filed by Shanghai Jiaotong University
Priority to CN201610789762.3A
Publication of CN106327448A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/90: Dynamic range modification of images or parts thereof
    • G06T 5/94: Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a picture stylization processing method based on deep learning. The method includes the steps of: S1, building and training a neural network; S2, segmenting the picture to be processed into a plurality of super-pixels; S3, analyzing each super-pixel with the trained neural network and labeling it with the best-fitting environment category; S4, defining the environment category labeled most frequently among all super-pixels as the environment category of the picture; S5, extracting the objects in the picture and determining the target leading-role object according to each object's position in the picture; and S6, sharpening or blurring the background according to the stylization requirement, where the stylization requirement includes background strengthening and background weakening. Compared with the prior art, the method offers fast processing and good results.

Description

Picture stylization processing method based on deep learning
Technical field
The present invention relates to an image processing method, and in particular to a picture stylization processing method based on deep learning.
Background technology
With the popularity of digital imaging devices and social networks, sharing pictures through social networks has become very common. A common practice when sharing pictures is to apply stylization using social applications such as Instagram. Traditional high-quality processing is typically done manually by experienced artists. In this work, the system learns a computational model from a set of example images containing pictures of a specific style before and after processing, and the model can then perform automatic picture conversion.
Traditional image processing is largely empirical. There is much software that automatically processes color and style, such as Adobe Photoshop, Google Auto Awesome and Microsoft Office Picture Manager. In addition, there is a considerable amount of related research in this area.
Summary of the invention
The purpose of the present invention is to overcome the defects of the above prior art and to provide a picture stylization processing method based on deep learning.
The purpose of the present invention can be achieved through the following technical solutions:
A picture stylization processing method based on deep learning, including the steps of:
S1: building a neural network and training it;
S2: segmenting the picture to be processed into a plurality of super-pixels;
S3: analyzing each super-pixel with the trained neural network and labeling it with the best-fitting environment category;
S4: defining the environment category labeled most frequently among all super-pixels as the environment category of the picture;
S5: extracting the objects in the picture and determining the target leading-role object according to each object's position in the picture;
S6: sharpening or blurring the background according to the stylization requirement, where the stylization requirement includes background strengthening and background weakening.
The neural network includes an input layer, two hidden layers and an output layer, where each of the two hidden layers has 192 neurons.
In step S2, the picture to be processed is divided into 7000 super-pixels.
The environment categories with which any super-pixel may be labeled in step S3 include sky, road, river, field and grass, and the analysis of a single super-pixel in step S3 specifically includes:
S31: randomly selecting a set number of pixels in the super-pixel;
S32: using the trained neural network to obtain, based on analysis of the selected pixels, the environment category that best fits the super-pixel, and labeling the super-pixel with it.
In step S5, the object closest to the center of the picture is defined as the target leading-role object.
Step S5 specifically includes the steps of:
S51: extracting the objects in the picture, and defining the object closest to the center of the picture as the target leading-role object;
S52: judging whether there are objects of the same category as the target leading-role object; if so, performing step S53, and if not, performing step S54, where the object categories include person, train, bus and building;
S53: defining objects of the same category whose distance to the target leading-role object is less than a threshold as supporting-role objects, and performing step S54;
S54: defining the remaining objects as background objects.
Step S31 is specifically: randomly selecting 10 pixels in the super-pixel.
Compared with the prior art, the invention has the following advantages:
1) A neural network performs the stylization of the picture on the basis of environment and object recognition, so details are handled more precisely and the result is more realistic.
2) The picture to be processed is divided into 7000 super-pixels, and this large training set reduces the risk of over-fitting.
3) 10 pixels are randomly selected from each super-pixel, a moderate number that preserves accuracy without placing an excessive load on the system.
Brief description of the drawings
Fig. 1 is a schematic flow chart of the main steps of the method of the invention.
Detailed description of the invention
The present invention is described in detail below with reference to the accompanying drawings and a specific embodiment. The embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation and concrete operating process are given, but the protection scope of the present invention is not limited to the following embodiment.
A picture stylization processing method based on deep learning, as shown in Fig. 1, includes the steps of:
S1: building a neural network and training it. The neural network includes an input layer, two hidden layers and an output layer, where each of the two hidden layers has 192 neurons; the number of neurons in the output layer equals the number of intended color conversion coefficients, namely 30, with 10 for each of the three color channels;
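The patent gives no reference implementation for this network; the following is a minimal sketch of such a multilayer perceptron in PyTorch, assuming per-pixel color values as input. The input dimension and the activation function are illustrative choices, not specified in the patent.

```python
# Minimal sketch (assumption): a multilayer perceptron with two hidden layers of
# 192 neurons and 30 outputs (10 color-conversion coefficients per color channel),
# following the architecture described in step S1. The input dimension and the
# ReLU activations are illustrative choices, not taken from the patent.
import torch
import torch.nn as nn

class StyleCoefficientNet(nn.Module):
    def __init__(self, input_dim: int = 3):  # e.g. one RGB pixel; placeholder size
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(input_dim, 192),  # first hidden layer, 192 neurons
            nn.ReLU(),
            nn.Linear(192, 192),        # second hidden layer, 192 neurons
            nn.ReLU(),
            nn.Linear(192, 30),         # output layer: 30 color conversion coefficients
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)
```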
S2: segmenting the picture to be processed into a plurality of super-pixels; here the picture to be processed is divided into 7000 super-pixels. A super-pixel is a polygonal segment of a digital image, larger than an ordinary pixel, that is rendered with the same color and brightness. The image is over-segmented into a series of sub-regions, and the pixels within each sub-region share certain features with strong consistency. Since super-pixels can be obtained in several existing ways, this is not described further in this application; in this embodiment, super-pixels are obtained with the Matlab Superpixel extraction tool;
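The embodiment obtains super-pixels with the Matlab Superpixel tool; as a hedged illustration, an equivalent over-segmentation can be sketched in Python with the SLIC implementation from scikit-image. The file name and compactness value below are placeholders, not taken from the patent.

```python
# Minimal sketch (assumption): SLIC over-segmentation into roughly 7000 super-pixels,
# standing in for the Matlab Superpixel extraction tool used in the embodiment.
from skimage import io
from skimage.segmentation import slic

image = io.imread("input.jpg")            # picture to be processed (placeholder path)
segments = slic(image, n_segments=7000,   # about 7000 super-pixels, as in step S2
                compactness=10, start_label=0)
# `segments` assigns a super-pixel label to every pixel of the image.
```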
S3: analyzing each super-pixel with the trained neural network and labeling it with the best-fitting environment category. The environment categories with which any super-pixel may be labeled include sky, road, river, field and grass, and the analysis of a single super-pixel in step S3 specifically includes:
S31: randomly selecting 10 pixels in the super-pixel;
S32: using the trained neural network to obtain, based on analysis of the selected pixels, the environment category that best fits the super-pixel, and labeling the super-pixel with it. Specifically, the environment category analysis uses the algorithm of [Tighe and Lazebnik 2010]; this technique is mature and open source code is available, so it is easy to apply.
S4: defining the environment category labeled most frequently among all super-pixels as the environment category of the picture;
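A minimal sketch of steps S31, S32 and S4, assuming the segmentation labels from step S2 are available: 10 pixels are sampled from each super-pixel, a classifier assigns the best-fitting environment category, and the picture-level category is taken by majority vote. `classify_environment` is a hypothetical stand-in for the trained network combined with the [Tighe and Lazebnik 2010] analysis; it is not defined in the patent.

```python
# Minimal sketch (assumption) of steps S31, S32 and S4. The environment categories
# are sky, road, river, field and grass, as listed in step S3.
from collections import Counter
import numpy as np

def label_picture(image: np.ndarray, segments: np.ndarray, classify_environment) -> str:
    labels = []
    for sp_id in np.unique(segments):
        ys, xs = np.nonzero(segments == sp_id)                  # pixels of this super-pixel
        idx = np.random.choice(len(ys), size=min(10, len(ys)), replace=False)
        pixels = image[ys[idx], xs[idx]]                        # S31: 10 random pixels
        labels.append(classify_environment(pixels))             # S32: best-fitting category
    return Counter(labels).most_common(1)[0][0]                 # S4: majority vote over all super-pixels
```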
S5: extracting the objects in the picture and determining the target leading-role object according to each object's position in the picture; the object closest to the center of the picture is defined as the target leading-role object. This specifically includes the steps of:
S51: extracting the objects in the picture. Object recognition uses the state-of-the-art object detection method mentioned in [Wang et al. 2013] to detect the predefined object categories Od and the sets of pixels they contain; this detection method has a fairly complete open-source SDK and is easy to apply. The object closest to the center of the picture is defined as the target leading-role object;
S52: judging whether there are objects of the same category as the target leading-role object; if so, performing step S53, and if not, performing step S54, where the object categories include person, train, bus and building;
S53: defining objects of the same category whose distance to the target leading-role object is less than a threshold as supporting-role objects, and performing step S54;
S54: defining the remaining objects as background objects.
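A minimal sketch of steps S51 to S54, assuming the detector of [Wang et al. 2013] has already produced object categories and center positions; the distance threshold below is an illustrative value, not specified in the patent.

```python
# Minimal sketch (assumption) of steps S51-S54: the object closest to the picture
# center becomes the leading-role object, same-category objects within a distance
# threshold become supporting-role objects, and all remaining objects stay background.
from dataclasses import dataclass
import math

@dataclass
class DetectedObject:
    category: str          # e.g. "person", "train", "bus", "building"
    cx: float              # object center, x
    cy: float              # object center, y
    role: str = "background"

def assign_roles(objects: list[DetectedObject], img_w: int, img_h: int,
                 threshold: float = 100.0) -> None:
    if not objects:
        return
    pic_cx, pic_cy = img_w / 2, img_h / 2
    lead = min(objects, key=lambda o: math.hypot(o.cx - pic_cx, o.cy - pic_cy))
    lead.role = "leading"                                          # S51
    for obj in objects:
        if obj is lead:
            continue
        if obj.category == lead.category and \
           math.hypot(obj.cx - lead.cx, obj.cy - lead.cy) < threshold:
            obj.role = "supporting"                                # S52-S53
        # otherwise the default "background" role applies          # S54
```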
S6: sharpening or blurring the background according to the stylization requirement, where the stylization requirement includes background strengthening and background weakening. When the requirement is background strengthening, the background is sharpened; when the requirement is background weakening, the background is blurred. Sharpening the background means increasing its contrast, and blurring the background means reducing its contrast.
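A minimal sketch of step S6, following the patent's own definition that sharpening the background means raising its contrast and blurring it means lowering its contrast; the background mask is assumed to come from step S5, and the gain values are illustrative.

```python
# Minimal sketch (assumption) of step S6: the contrast of the background pixels is
# raised (background strengthening) or lowered (background weakening) around the
# background's mean color. The gains 1.3 and 0.7 are illustrative, not from the patent.
import numpy as np

def stylize_background(image: np.ndarray, background_mask: np.ndarray,
                       strengthen: bool) -> np.ndarray:
    out = image.astype(np.float32)
    gain = 1.3 if strengthen else 0.7                # sharpen vs. blur the background
    bg = out[background_mask]                        # pixels of the background objects (step S5)
    mean = bg.mean(axis=0)
    out[background_mask] = np.clip(mean + gain * (bg - mean), 0, 255)
    return out.astype(np.uint8)
```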

Claims (7)

1. A picture stylization processing method based on deep learning, characterized in that it includes the steps of:
S1: building a neural network and training it;
S2: segmenting the picture to be processed into a plurality of super-pixels;
S3: analyzing each super-pixel with the trained neural network and labeling it with the best-fitting environment category;
S4: defining the environment category labeled most frequently among all super-pixels as the environment category of the picture;
S5: extracting the objects in the picture and determining the target leading-role object according to each object's position in the picture;
S6: sharpening or blurring the background according to the stylization requirement, where the stylization requirement includes background strengthening and background weakening.
2. The picture stylization processing method based on deep learning according to claim 1, characterized in that the neural network includes an input layer, two hidden layers and an output layer, where each of the two hidden layers has 192 neurons.
3. The picture stylization processing method based on deep learning according to claim 1, characterized in that in step S2 the picture to be processed is divided into 7000 super-pixels.
4. The picture stylization processing method based on deep learning according to claim 1, characterized in that the environment categories with which any super-pixel may be labeled in step S3 include sky, road, river, field and grass, and the analysis of a single super-pixel in step S3 specifically includes:
S31: randomly selecting a set number of pixels in the super-pixel;
S32: using the trained neural network to obtain, based on analysis of the selected pixels, the environment category that best fits the super-pixel, and labeling the super-pixel with it.
5. The picture stylization processing method based on deep learning according to claim 1, characterized in that in step S5 the object closest to the center of the picture is defined as the target leading-role object.
6. The picture stylization processing method based on deep learning according to claim 5, characterized in that step S5 specifically includes the steps of:
S51: extracting the objects in the picture, and defining the object closest to the center of the picture as the target leading-role object;
S52: judging whether there are objects of the same category as the target leading-role object; if so, performing step S53, and if not, performing step S54, where the object categories include person, train, bus and building;
S53: defining objects of the same category whose distance to the target leading-role object is less than a threshold as supporting-role objects, and performing step S54;
S54: defining the remaining objects as background objects.
7. The picture stylization processing method based on deep learning according to claim 1, characterized in that step S31 is specifically: randomly selecting 10 pixels in the super-pixel.
CN201610789762.3A (priority date 2016-08-31, filing date 2016-08-31): Picture stylization processing method based on deep learning. Status: Pending. Publication: CN106327448A (en).

Priority Applications (1)

Application Number: CN201610789762.3A (CN106327448A)
Priority Date: 2016-08-31
Filing Date: 2016-08-31
Title: Picture stylization processing method based on deep learning

Applications Claiming Priority (1)

Application Number: CN201610789762.3A (CN106327448A)
Priority Date: 2016-08-31
Filing Date: 2016-08-31
Title: Picture stylization processing method based on deep learning

Publications (1)

Publication Number: CN106327448A
Publication Date: 2017-01-11

Family

ID=57789612

Family Applications (1)

Application Number: CN201610789762.3A (CN106327448A, Pending)
Priority Date: 2016-08-31
Filing Date: 2016-08-31
Title: Picture stylization processing method based on deep learning

Country Status (1)

Country Link
CN (1) CN106327448A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140270536A1 (en) * 2013-03-13 2014-09-18 Kofax, Inc. Systems and methods for classifying objects in digital images captured using mobile devices
CN103236042A (en) * 2013-04-27 2013-08-07 崔红保 Self-adaptive picture processing method and device
CN105303514A (en) * 2014-06-17 2016-02-03 腾讯科技(深圳)有限公司 Image processing method and apparatus
CN105160695A (en) * 2015-06-30 2015-12-16 广东欧珀移动通信有限公司 Picture processing method and mobile terminal
CN105389584A (en) * 2015-10-13 2016-03-09 西北工业大学 Streetscape semantic annotation method based on convolutional neural network and semantic transfer conjunctive model
CN105631803A (en) * 2015-12-17 2016-06-01 小米科技有限责任公司 Method and device for filter processing

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
WEN JUN et al.: "Outdoor Scene Labeling Using Deep Convolutional Neural Networks", Proceedings of the 34th Chinese Control Conference *
尹蕊: "Scene labeling based on multi-scale convolutional neural networks" (基于多尺度卷积神经网络的场景标记), 《现代计算机》 (Modern Computer) *
杨亚威 et al.: "Motion deblurring of objects in a static background" (静态背景中目标运动去模糊), 《微电子学与计算机》 (Microelectronics & Computer) *
潘锋 et al.: "A new moving object detection and tracking algorithm" (一种新的运动目标检测与跟踪算法), 《光电工程》 (Opto-Electronic Engineering) *
蒋应锋 et al.: "Research on a new multi-scale deep learning method for image semantic understanding" (一种新的多尺度深度学习图像语义理解方法研究), 《光电子·激光》 (Journal of Optoelectronics·Laser) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10147459B2 (en) 2016-09-22 2018-12-04 Apple Inc. Artistic style transfer for videos
US10198839B2 (en) 2016-09-22 2019-02-05 Apple Inc. Style transfer-based image content correction
US10664963B1 (en) 2017-09-11 2020-05-26 Apple Inc. Real-time selection of DNN style transfer networks from DNN sets
US10664718B1 (en) 2017-09-11 2020-05-26 Apple Inc. Real-time adjustment of hybrid DNN style transfer networks
US10789694B1 (en) 2017-09-11 2020-09-29 Apple Inc. Real-time adjustment of temporal consistency constraints for video style
US10909657B1 (en) 2017-09-11 2021-02-02 Apple Inc. Flexible resolution support for image and video style transfer
CN108154465B (en) * 2017-12-19 2022-03-01 北京小米移动软件有限公司 Image processing method and device
CN108154465A (en) * 2017-12-19 2018-06-12 北京小米移动软件有限公司 Image processing method and device
US11367163B2 (en) 2019-05-31 2022-06-21 Apple Inc. Enhanced image processing techniques for deep neural networks
CN110225389A (en) * 2019-06-20 2019-09-10 北京小度互娱科技有限公司 The method for being inserted into advertisement in video, device and medium
CN110266960A (en) * 2019-07-19 2019-09-20 Oppo广东移动通信有限公司 Preview screen processing method, processing unit, photographic device and readable storage medium storing program for executing
WO2021057463A1 (en) * 2019-09-25 2021-04-01 北京字节跳动网络技术有限公司 Image stylization processing method and apparatus, and electronic device and readable medium
CN112528072B (en) * 2020-12-02 2021-06-22 深圳市三希软件科技有限公司 Object type analysis platform and method applying big data storage
CN112528072A (en) * 2020-12-02 2021-03-19 泰州市朗嘉馨网络科技有限公司 Object type analysis platform and method applying big data storage

Similar Documents

Publication Publication Date Title
CN106327448A (en) Picture stylization processing method based on deep learning
CN109859171B (en) Automatic floor defect detection method based on computer vision and deep learning
CN103810503B (en) Depth study based method for detecting salient regions in natural image
CN104966085B (en) A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features
CN104156693B (en) A kind of action identification method based on the fusion of multi-modal sequence
CN105354599B (en) A kind of color identification method based on improved SLIC super-pixel segmentation algorithm
CN105513105B (en) Image background weakening method based on notable figure
JP2018195293A5 (en)
CN103699900B (en) Building horizontal vector profile automatic batch extracting method in satellite image
CN106228138A (en) A kind of Road Detection algorithm of integration region and marginal information
WO2016000331A1 (en) Image enhancement method, image enhancement device and display device
CN105989334B (en) Road detection method based on monocular vision
CN103177259A (en) Color block identification method and device
CN108564549A (en) A kind of image defogging method based on multiple dimensioned dense connection network
CN104463816A (en) Image processing method and device
CN106295645B (en) A kind of license plate character recognition method and device
US20190019041A1 (en) Method and apparatus for detecting a vehicle in a driving assisting system
CN111127360B (en) Gray image transfer learning method based on automatic encoder
CN103745468A (en) Significant object detecting method based on graph structure and boundary apriority
CN104134198A (en) Method for carrying out local processing on image
CN103218833B (en) The color space the most steady extremal region detection method of Edge Enhancement type
CN106778785A (en) Build the method for image characteristics extraction model and method, the device of image recognition
CN106204597B (en) A kind of video object dividing method based on from the step Weakly supervised study of formula
CN106339984A (en) Distributed image super-resolution method based on K-means driven convolutional neural network
CN104966054A (en) Weak and small object detection method in visible image of unmanned plane

Legal Events

PB01: Publication
C10: Entry into substantive examination
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication
Application publication date: 2017-01-11