CN110148102A - Image synthesis method, advertisement material synthesis method and apparatus - Google Patents
Image synthesis method, advertisement material synthesis method and apparatus
- Publication number
- CN110148102A (application number CN201810146131.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- original image
- matting
- target material
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0276—Advertisement creation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- Strategic Management (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Accounting & Taxation (AREA)
- Development Economics (AREA)
- Finance (AREA)
- Entrepreneurship & Innovation (AREA)
- Game Theory and Decision Science (AREA)
- Economics (AREA)
- Marketing (AREA)
- General Business, Economics & Management (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an image synthesis method, an advertisement material synthesis method, and corresponding apparatus, belonging to the field of digital image processing. The method includes: obtaining an original image to be matted; performing saliency detection on the original image to generate a saliency map; computing an alpha matte from the saliency map using a deep matting model; extracting the target material from the original image using the alpha matte; and compositing the target material with a template to be synthesized to obtain a target image. By using saliency detection and a deep matting model to matte the original image automatically, the invention avoids the repeated manual trimap calibration that the related art requires of the user, simplifies the matting operation, makes matting the original image and compositing the extracted target material with the template a fully automatic process, and improves the production efficiency of image synthesis.
Description
Technical field
Embodiments of the present invention relate to the field of digital image processing, and in particular to an image synthesis method, an advertisement material synthesis method, and corresponding apparatus.
Background art
An image synthesis method typically composites the target material in an original image with a template to be synthesized, producing a target image. Obtaining the target material from the original image is the digital matting step. Digital matting amounts to solving the following matting equation:
I_i = α_i · F_i + (1 − α_i) · B_i
where I_i is the colour value of pixel i, and α_i is a number between 0 and 1 called the transparency value of the pixel; the matrix of all α_i values is the alpha matte (the unknown matte to be recovered in matte estimation). F_i is the foreground colour of pixel i and B_i is the background colour of pixel i. The α matrix of the original image represents its matting result: when α_i = 1, pixel i belongs to the foreground; when α_i = 0, pixel i belongs to the background; and when α_i lies between 0 and 1, pixel i belongs to a mixed foreground/background region.
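The equation can be sanity-checked numerically. A minimal sketch in NumPy with made-up pixel values (not taken from the patent):

```python
import numpy as np

# Matting equation, per pixel: I_i = alpha_i * F_i + (1 - alpha_i) * B_i
F = np.full((2, 2), 200.0)              # foreground colour of each pixel
B = np.full((2, 2), 50.0)               # background colour of each pixel
alpha = np.array([[1.0, 0.0],
                  [0.5, 0.25]])         # alpha matte (transparency values)

I = alpha * F + (1.0 - alpha) * B       # observed (composited) image

# alpha = 1 gives pure foreground (200), alpha = 0 pure background (50),
# and intermediate alphas give the mixed-region blend (125 and 87.5).
```

Matting inverts this relation: given only I, recover α (and F, B), which is what makes the problem under-determined and a trimap or learned prior necessary.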
In the related art, the α_i values of most pixels in the digital image are calibrated by hand by the user, i.e. as a trimap. As shown in Figure 1, for an original image 100, the calibrated image contains: a foreground region 12 where the user has set α_i = 1, a background region 14 where the user has set α_i = 0, and an unknown region 16 whose values are undetermined; the unknown region 16 is the region the matting algorithm must estimate. After the user has calibrated the original image by hand, a closed-form matting algorithm uses the user-specified foreground region 12 and background region 14 to estimate the foreground and background pixels in the unknown region 16, yielding the α_i value of each pixel in the unknown region.
Because it is difficult for the user to specify accurately the trimap that closed-form matting requires, obtaining an accurate matting result forces the user to repeatedly re-calibrate the trimap based on the previous result and run the matting again; only after several rounds of digital matting is an accurate result obtained. This process is very time-consuming and depends heavily on the user's expertise, making the matting operation complex and reducing the production efficiency of subsequent image synthesis.
Summary of the invention
Embodiments of the present invention provide an image synthesis method, an advertisement material synthesis method, and corresponding apparatus, which can solve the problem of low production efficiency of image synthesis in the related art. The technical solution is as follows:
In a first aspect, an image synthesis method is provided, the method comprising:
obtaining an original image to be matted;
performing saliency detection on the original image to generate a saliency map;
computing an alpha matte from the saliency map using a deep matting model, the deep matting model representing matting rules learned from training on sample images;
extracting the target material from the original image using the alpha matte;
compositing the target material with a template to be synthesized to obtain a target image.
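The five steps of the first aspect can be sketched end to end. Everything below is a stand-in (mean-contrast saliency, a hard threshold in place of the trained deep matting model), meant only to show how the stages hand data to each other, not the claimed implementation:

```python
import numpy as np

def detect_saliency(image):
    # Stand-in for step 2: contrast against the global mean, scaled to [0, 1].
    diff = np.abs(image - image.mean())
    return diff / (diff.max() + 1e-8)

def deep_matting(saliency):
    # Stand-in for step 3: a real deep matting model would be trained on
    # sample images; here a hard threshold produces the alpha matte.
    return (saliency > 0.5).astype(float)

def synthesize(image, template):
    saliency = detect_saliency(image)            # step 2: saliency map
    matte = deep_matting(saliency)               # step 3: alpha matte
    material = matte * image                     # step 4: extract target material
    return material + (1.0 - matte) * template   # step 5: composite onto template

image = np.full((4, 4), 30.0)
image[1:3, 1:3] = 220.0                          # bright "target material"
template = np.full((4, 4), 100.0)
out = synthesize(image, template)
# Target pixels survive (220); the template fills the rest (100).
```

In the claimed method the thresholding stand-in is replaced by a model trained on sample images, so the matte can take fractional values in mixed foreground/background regions.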
In a second aspect, an advertisement material synthesis method is provided, the method comprising:
obtaining a first trigger action on an image upload entry in a target application, the target application being an application with advertisement material processing functions;
obtaining the uploaded original image according to the first trigger action;
enabling the matting function;
performing automatic matting on the original image to obtain the target material in the original image, the target material being at least one of a plant, animal, or still-life element;
compositing the target material with an advertisement template to be synthesized to obtain a target advertisement image.
In a third aspect, an image synthesis apparatus is provided, the apparatus comprising:
an obtaining module, for obtaining an original image to be matted;
a generation module, for performing saliency detection on the original image to generate a saliency map;
a computing module, for computing an alpha matte from the saliency map using a deep matting model, the deep matting model representing matting rules learned from training on sample images;
an extraction module, for extracting the target material from the original image using the alpha matte;
a synthesis module, for compositing the target material with a template to be synthesized to obtain a target image.
In a fourth aspect, an advertisement material synthesis apparatus is provided, the apparatus comprising:
a first obtaining module, for obtaining a first trigger action on an image upload entry in a target application, the target application being an application with advertisement material processing functions;
a second obtaining module, for obtaining the uploaded original image according to the first trigger action;
an enabling module, for enabling the matting function;
a matting module, for performing automatic matting on the original image to obtain the target material in the original image, the target material being at least one of a plant, animal, or still-life element;
a synthesis module, for compositing the target material with an advertisement template to be synthesized to obtain a target advertisement image.
In a fifth aspect, a terminal is provided, the terminal comprising a processor and a memory, the memory storing at least one instruction, at least one program segment, code set, or instruction set, which is loaded and executed by the processor to implement the image synthesis method provided in the first aspect.
In a sixth aspect, a terminal is provided, the terminal comprising a processor and a memory, the memory storing at least one instruction, at least one program segment, code set, or instruction set, which is loaded and executed by the processor to implement the advertisement material synthesis method provided in the second aspect.
In a seventh aspect, a computer-readable storage medium is provided, the storage medium storing at least one instruction, at least one program segment, code set, or instruction set, which is loaded and executed by a processor to implement the image synthesis method provided in the first aspect.
In an eighth aspect, a computer-readable storage medium is provided, the storage medium storing at least one instruction, at least one program segment, code set, or instruction set, which is loaded and executed by a processor to implement the advertisement material synthesis method provided in the second aspect.
The technical solutions provided by the embodiments of the present invention have the following benefits:
By obtaining an original image to be matted, performing saliency detection on it to generate a saliency map, computing an alpha matte from the saliency map with a deep matting model, extracting the target material from the original image with the alpha matte, and compositing the target material with a template to be synthesized, a target image is obtained. This enables the terminal to matte the original image automatically using saliency detection and a deep matting model, avoids the repeated trimap calibration that the related art requires of the user, simplifies the matting operation, makes matting the original image and compositing the extracted target material with the template a fully automatic process, and improves the production efficiency of image synthesis.
Brief description of the drawings
Fig. 1 is a schematic diagram of a calibrated original image in the related art;
Fig. 2 is a structural diagram of an image synthesis system provided by an embodiment of the present invention;
Fig. 3 is a flowchart of an image synthesis method provided by one embodiment of the present invention;
Fig. 4 is a flowchart of an image synthesis method provided by another embodiment of the present invention;
Fig. 5 is an interface diagram for an image synthesis method provided by one embodiment of the present invention;
Fig. 6 is an interface diagram for an image synthesis method provided by another embodiment of the present invention;
Fig. 7 is an interface diagram for an image synthesis method provided by another embodiment of the present invention;
Fig. 8 is a schematic diagram of the filtering process involved in an image synthesis method provided by one embodiment of the present invention;
Fig. 9 is a schematic diagram of the principle of an image synthesis method provided by one embodiment of the present invention;
Fig. 10 is a schematic diagram of the saliency detection algorithm involved in an image synthesis method provided by one embodiment of the present invention;
Fig. 11 is a schematic diagram of the training process of the deep matting model involved in an image synthesis method provided by one embodiment of the present invention;
Fig. 12 is a flowchart of an advertisement material synthesis method provided by one embodiment of the present invention;
Fig. 13 is a structural diagram of an image synthesis apparatus provided by one embodiment of the present invention;
Fig. 14 is a structural diagram of an advertisement material synthesis apparatus provided by one embodiment of the present invention;
Fig. 15 is a structural block diagram of a terminal provided by an illustrative embodiment of the present invention;
Fig. 16 is a structural diagram of a server provided by an illustrative embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
First, some terms involved in the embodiments of the present invention are explained:
Saliency detection: in the embodiments of the present invention, saliency detection is vision-based, also called visual saliency detection. Visual saliency detection is a technique that uses intelligent algorithms to simulate the human visual attention mechanism (Visual Attention, VA) and predict the regions a human eye attends to. Under the visual attention mechanism, when facing a scene, humans automatically process regions of interest and selectively ignore regions of no interest; the human regions of interest are called salient regions.
Matting technique: an image processing technique that separates the target material in an original image to be matted from the other elements.
Trimap: a rough partition of a digital image, i.e. the image obtained by dividing the digital image into a foreground region, a background region, and an unknown region.
Super-resolution processing: a technique that reconstructs a high-resolution digital image from an observed low-resolution one; it is used to raise the resolution of a digital image by hardware or software methods.
Deep matting model: a mathematical model for determining an alpha matte from input data. Optionally, the deep matting model includes, but is not limited to, at least one of: a convolutional neural network (Convolutional Neural Network, CNN) model, a deep neural network (Deep Neural Network, DNN) model, a recurrent neural network (Recurrent Neural Network, RNN) model, an embedding model, a gradient boosting decision tree (Gradient Boosting Decision Tree, GBDT) model, or a logistic regression (Logistic Regression, LR) model.
A CNN model is a feed-forward neural network, generally composed of two or more convolutional layers topped by fully connected layers. Optionally, a convolutional neural network model also includes associated weights and pooling layers. In practice, a CNN model can be trained with the backpropagation algorithm.
A DNN model is a deep learning architecture. A DNN model includes an input layer, at least one hidden (or intermediate) layer, and an output layer. Optionally, the input layer, the hidden layer(s), and the output layer each contain at least one neuron, and the neurons process the data they receive. Optionally, the number of neurons in different layers may be the same, or may differ.
An RNN model is a neural network with a feedback structure. In an RNN model, a neuron's output at one time step is fed directly back to itself at the next: the input of a layer-i neuron at time m includes not only the output of the layer-(i−1) neurons at that time, but also its own output at time m−1.
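The feedback recurrence can be written out directly; a single-neuron sketch with made-up weights:

```python
import math

# h_t = tanh(w_x * x_t + w_h * h_{t-1}): the neuron's own previous
# output is part of its next input (single neuron, illustrative weights).
w_x, w_h = 0.5, 0.8
h = 0.0
outputs = []
for x in [1.0, 0.0, 0.0]:
    h = math.tanh(w_x * x + w_h * h)
    outputs.append(h)

# Inputs after t=0 are zero, yet the t=0 input still echoes through
# the feedback at t=1 and t=2 (outputs decay but stay positive).
```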
An embedding model represents entities and relations as distributed vectors, treating the relation in each triple instance as a translation from the head entity to the tail entity. A triple instance consists of a subject, a relation, and an object, and can be written as (subject, relation, object); the subject is the head entity and the object is the tail entity. For example, "Xiao Zhang's father is Da Zhang" is expressed as the triple (Xiao Zhang, father, Da Zhang).
A GBDT model is an iterative decision tree algorithm composed of multiple decision trees; the results of all trees are summed to give the final result. Each node of a decision tree yields a predicted value; taking age as an example, the predicted value is the average age of all the people that fall into the corresponding node.
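The two properties just described — summing the results of all trees, and predicting the mean of the samples at a leaf — can be shown with a toy boosting loop over two-leaf stumps (toy "age" data and a fixed split, purely illustrative):

```python
# Toy data: one feature x per person, target = age.
xs = [1.0, 2.0, 10.0, 11.0]
ages = [12.0, 14.0, 40.0, 46.0]

def fit_stump(xs, residuals, split=5.0):
    """Two-leaf 'tree': each leaf predicts the MEAN residual on its side."""
    left = [r for x, r in zip(xs, residuals) if x < split]
    right = [r for x, r in zip(xs, residuals) if x >= split]
    lv, rv = sum(left) / len(left), sum(right) / len(right)
    return lambda x: lv if x < split else rv

stumps = []
pred = [0.0] * len(ages)
for _ in range(3):                      # three boosting rounds
    residuals = [a - p for a, p in zip(ages, pred)]
    stump = fit_stump(xs, residuals)    # each round fits the residuals
    stumps.append(stump)
    # The final prediction is the SUM of every tree's output so far.
    pred = [p + stump(x) for p, x in zip(pred, xs)]
```

After the first round the leaf means already equal the group averages (13 and 43), so later stumps fit zero residuals; on real data each round would keep correcting the previous sum.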
An LR model is a model built by applying a logistic function on top of linear regression.
In the related art, the α values of most pixels in the digital image are determined by manual calibration. Because it is difficult for the user to specify accurately the trimap required by the closed-form matting algorithm, obtaining an accurate matting result forces the user to repeatedly re-calibrate the trimap based on the previous result and run the matting again; only after several rounds of digital matting is an accurate result obtained. This process is very time-consuming and depends heavily on the user's expertise, making the matting operation complex and reducing the production efficiency of subsequent image synthesis. To address this, the embodiments of the present invention provide an image synthesis method, an advertisement material synthesis method, and corresponding apparatus. By obtaining an original image to be matted, performing saliency detection on it to generate a saliency map, computing an alpha matte from the saliency map with a deep matting model, extracting the target material from the original image with the alpha matte, and compositing the target material with a template to be synthesized, a target image is obtained. This enables the terminal to matte the original image automatically using saliency detection and a deep matting model, avoids the repeated trimap calibration the related art requires of the user, simplifies the matting operation, makes matting the original image and compositing the extracted target material with the template into a target image a fully automatic process, and improves the production efficiency of image synthesis.
Referring to Fig. 2, which shows the structural diagram of the image synthesis system provided by an embodiment of the present invention. The image synthesis system includes a user terminal 220, a server cluster 230, and a designer terminal 240.
The user terminal 220 may be a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a desktop computer, or the like.
A target application runs on the user terminal 220; the target application is an application with matting and image synthesis functions. Optionally, the target application is a recommendation information production application, where recommendation information is information with recommendation value such as advertising information, multimedia information, or news information. For example, the target application is an advertisement production application.
The user terminal 220 and the server cluster 230 are connected through a communication network. Optionally, the communication network is a wired network or a wireless network.
The server cluster 230 is one server, several servers, a virtualization platform, or a cloud computing service center. The server cluster 230 includes an external access layer, a business layer, a data layer, and an internal access layer.
Optionally, a first access server 231 and an Nginx reverse proxy server 232 are deployed in the external access layer. The first access server 231 receives the original image uploaded by the user terminal 220.
A PHP (PHP: Hypertext Preprocessor) 7 + Nginx server cluster 233, a Go server cluster 234, and a Python server cluster 235 are deployed in the business layer. The TensorFlow software library is stored in the Python server cluster 235. TensorFlow is an open-source software library for numerical computation using data flow graphs. Optionally, the TensorFlow software library in the Python server cluster 235 performs automatic matting on the received original image to obtain the target material in the original image.
A CommonDataBase (CDB) cluster 236a, a Redis database cluster 236b, a Cloud File Storage (CFS) server cluster 236c, and an ElasticSearch server cluster 236d are deployed in the data layer; a second access server 237 is deployed in the internal access layer.
The designer terminal 240 may be a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a desktop computer, or the like.
The designer terminal 240 and the server cluster 230 are connected through a communication network. Optionally, the communication network is a wired network or a wireless network.
The designer terminal 240 includes an information delivery device, which is used to produce the templates to be synthesized and to deliver the produced templates to the user terminal 220. When the template is an advertisement template, the information delivery device in the designer terminal 240 is an advertisement template delivery device.
Referring to Fig. 3, which shows the flowchart of an image synthesis method provided by one embodiment of the present invention. This embodiment is illustrated with the image synthesis method applied in the image synthesis system shown in Fig. 2. The image synthesis method includes:
Step 301: obtain the original image to be matted.
The original image is a frame of digital image. Generally, the original image is an image containing a background region and a foreground region — for example, a static advertisement image.
Optionally, the original image is a digital image using the RGB (Red Green Blue) colour standard; it contains M*N pixels, each represented by the three RGB colour components. Note that the embodiments of the present invention apply equally to black-and-white images and images in other colour standards, which is not limited here.
The original image contains N image elements, N being a positive integer, where an image element is a component of the original image. Optionally, the image elements include picture elements and/or text elements.
Step 302: perform saliency detection on the original image to generate a saliency map.
After obtaining the original image, the user terminal performs saliency detection on it and generates the corresponding saliency map. The saliency map is the image output after saliency detection of the original image to be matted; the value of each pixel in the saliency map indicates the predicted degree of human visual attention to that pixel.
Note that the process by which the user terminal performs saliency detection and generates the saliency map is detailed in the following embodiments and is not introduced here.
Step 303: calculate a matting mask from the saliency map using a deep matting model, where the deep matting model represents matting rules learned by training on sample images.
The user terminal obtains a deep matting model and calculates a matting mask from the saliency map using the deep matting model.
Optionally, the deep matting model is a model obtained by training a neural network on sample images.
The user terminal obtains a deep matting model stored locally, or obtains the deep matting model from the server cluster. This embodiment does not limit this.
The matting mask indicates the transparency value of each pixel of the original image. Optionally, the matting mask is the transparency matrix of the original image, that is, the alpha (α) matrix; the transparency matrix is the matrix used to mat the input image.
Step 304: extract the target material from the original image using the matting mask.
The user terminal extracts the target material from the original image using the matting mask. The target material is at least one of the N image elements that make up the original image.
It should be noted that the process by which the user terminal extracts the target material from the original image using the matting mask may refer to the relevant details in the following embodiments, and is not introduced here first.
Step 305: synthesize the target material with an original template to be synthesized, to obtain a target image.
Multiple original templates are stored in the user terminal. When the user terminal detects a selection operation corresponding to an original template, it synthesizes the target material obtained by matting with the original template to obtain a target image. The target image includes the target material.
It should be noted that the process by which the user terminal synthesizes the target material with the original template to be synthesized to obtain the target image may refer to the relevant details in the following embodiments, and is not introduced here first.
On the other hand, it should be noted that steps 301 to 304 may be implemented separately as an automatic matting method, which is used to extract the target material from the original image; step 305 is an image composition method, which synthesizes the target material extracted by steps 301 to 304 with the original template to be synthesized to obtain the target image.
Optionally, steps 301 to 305 are typically performed by a user terminal or a server cluster with image processing functions. This embodiment is not limited in this respect. For convenience of explanation, this embodiment is illustrated only with the user terminal as the executing entity.
In conclusion the embodiment of the present invention passes through the original image for obtaining figure to be scratched, conspicuousness inspection is carried out to original image
It surveys and generates notable figure, graph model is scratched using depth according to notable figure, stingy figure masking-out is calculated, using stingy figure masking-out, take out original
Target material and primary template to be synthesized are synthesized, obtain target image by the target material in beginning image;So that user
Terminal can scratch scratch automatic to original image progress of graph model with depth using conspicuousness detection and scheme, and avoid and use in the related technology
Family needs the case where carrying out the calibration of three components to original image repeatedly, simplifies stingy graphic operation, and realization carries out original image stingy
Figure is synthesized target material and archetype that stingy figure obtains to obtain the full-automatic realization process of target image, be improved
The producing efficiency of image synthesis.
Referring to FIG. 4, which illustrates a flowchart of an image composition method provided by another embodiment of the present invention. This embodiment is illustrated with the image composition method applied to the image synthesis system shown in FIG. 2. The image composition method includes:
Step 401: obtain an original template to be synthesized and an original image to be matted.
Optionally, when the user terminal receives a second trigger operation corresponding to a template selection entry in a target application, the M candidate templates stored in the user terminal are displayed; when the user terminal receives a third trigger operation corresponding to one of the M candidate templates, that candidate template is determined as the original template to be synthesized, and the original template to be synthesized is obtained and displayed. M is a positive integer.
Optionally, the template selection entry is an operable control for displaying the M candidate templates. Schematically, the type of the template selection entry includes at least one of a button, an operable entry, and a slider.
In a schematic example, as shown in FIG. 5, the user interface is the advertisement production interface 51 of the target application "XX Production". The advertisement production interface 51 includes a template selection entry 52. When the user terminal receives a click operation corresponding to the template selection entry 52, the M candidate templates stored in the user terminal are displayed as thumbnails on the left side of the advertisement production interface 51. When the user terminal receives a click operation corresponding to a candidate template 53 among the M candidate templates, the candidate template 53 is determined as the original template to be synthesized, and the original template 53 to be synthesized is displayed in tiled form on the right side of the advertisement production interface 51.
Optionally, after obtaining the original template selected by the user, the user terminal obtains the original image to be matted and enables the matting function when it receives a first trigger operation corresponding to an image upload entry in the target application.
Optionally, the image upload entry is an operable control for uploading the original image to be matted. Schematically, the type of the image upload entry includes at least one of a button, an operable entry, and a slider.
It should be noted that the above trigger operations (such as the first trigger operation, the second trigger operation, or the third trigger operation) include any one or a combination of a click operation, a slide operation, a press operation, and a long-press operation.
In other possible implementations, the above trigger operations may also be realized in speech form. Taking the first trigger operation as an example, the user inputs a speech signal into the user terminal; after obtaining the speech signal, the user terminal parses the speech signal to obtain the speech content; when the speech content contains a keyword matching the preset information of the image upload entry, the user terminal determines that the image upload entry has been triggered.
In a schematic example, based on the advertisement production interface 51 of the target application "XX Production" provided in FIG. 5, as shown in FIG. 6, the advertisement production interface 51 further includes an image upload entry 62. When the user terminal receives a click operation corresponding to the image upload entry 62, the uploaded original image 64 is obtained and the matting function is enabled.
Step 402: perform saliency detection on the original image to generate a saliency map.
After enabling the matting function, the user terminal performs saliency detection on the obtained original image to generate a saliency map.
It should be noted that the saliency detection process may refer to the relevant details in the following embodiments, and is not introduced here first.
Step 403: perform edge detection on the saliency map to obtain a corresponding trimap, where the trimap includes the foreground region, background region, and unknown region of the saliency map.
Optionally, the user terminal uses spatial-domain filtering and a threshold segmentation algorithm to separate the foreground region and the background region, and obtains the trimap in combination with morphological operations.
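The thresholding-plus-morphology step can be sketched as follows; the concrete thresholds (0.3/0.7) and erosion depth are illustrative assumptions, not values from the embodiment:

```python
import numpy as np
from scipy import ndimage

def saliency_to_trimap(saliency, lo=0.3, hi=0.7, erode_iters=3):
    """Turn a saliency map into a trimap: 255 = foreground, 0 = background,
    128 = unknown. Thresholds and erosion depth are illustrative choices."""
    fg = saliency >= hi          # threshold segmentation: likely foreground
    bg = saliency <= lo          # likely background
    # Morphological erosion shrinks both certain regions, so a band of
    # "unknown" pixels is left around every foreground/background boundary.
    fg_sure = ndimage.binary_erosion(fg, iterations=erode_iters)
    bg_sure = ndimage.binary_erosion(bg, iterations=erode_iters,
                                     border_value=1)
    trimap = np.full(saliency.shape, 128, dtype=np.uint8)
    trimap[fg_sure] = 255
    trimap[bg_sure] = 0
    return trimap
```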
Step 404: calculate the matting mask of the original image using the deep matting model, according to the original image and the trimap.
Optionally, the user terminal obtains the deep matting model, inputs the original image and the trimap into the deep matting model, and calculates the matting mask of the original image.
The deep matting model is obtained by training on at least one sample data group, and each sample data group includes a sample image, a sample trimap, and a pre-labeled correct matting mask.
It should be noted that the training process of the deep matting model may refer to the relevant description in the following embodiments, and is not introduced here first.
Optionally, the user terminal adds the original image, its trimap, and the matting mask to the training sample set to obtain an updated training sample set, and trains the deep matting model on the updated training sample set to obtain an updated deep matting model.
The process of training the deep matting model on the updated training sample set to obtain the updated deep matting model may refer, by analogy, to the training process of the deep matting model, and is not described again here.
Step 405: extract the target material from the original image using the matting mask.
Optionally, for each pixel of the original image using the RGB color standard, the user terminal multiplies the brightness value of each of its color components by the transparency value of the corresponding position indicated by the matting mask, thereby obtaining the matting result of the original image, from which the target material is separated.
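The per-pixel multiplication described above can be sketched directly in NumPy; the function name and array conventions are illustrative:

```python
import numpy as np

def extract_material(rgb, alpha):
    """Multiply each RGB component by the per-pixel transparency (alpha)
    from the matting mask, as described for step 405.

    rgb: H x W x 3 uint8 original image; alpha: H x W float in [0, 1].
    Returns the matting result; fully transparent pixels become black.
    """
    out = rgb.astype(np.float64) * alpha[..., None]
    return np.clip(out + 0.5, 0, 255).astype(np.uint8)  # round and clamp
```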
Continuing to refer to FIG. 6, based on the original image 64 provided in FIG. 6, after enabling the matting function the user terminal extracts the target material 66 from the original image 64 according to the above matting method; the target material 66 is the picture material of a person.
Step 406: obtain the subject element pre-labeled in the original template.
Optionally, the original template is a structured, layered template. The element hierarchy of the original template includes a subject layer and other layers; the other layers include at least one of a background layer, a decoration layer, a text layer, and an interaction layer. The subject layer of the original template corresponds to the subject element, and the subject element is the pre-labeled element in the original template that is to be replaced.
Optionally, the user terminal obtains the subject element pre-labeled in the original template as follows: after obtaining the element hierarchy of the original template, the user terminal obtains the subject element in the subject layer of the original template.
Optionally, step 406 may be executed in parallel with step 401, or before step 401; this embodiment does not limit this.
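Under the layered-template structure described for step 406, a minimal sketch of the element hierarchy and the subject-element lookup might look like this (the class names, layer kinds, and content types are assumptions for illustration):

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class Layer:
    kind: str            # "subject", "background", "decoration", "text", "interaction"
    content: Any = None

@dataclass
class Template:
    layers: List[Layer] = field(default_factory=list)

    def subject_element(self) -> Optional[Layer]:
        # Step 406: walk the element hierarchy and return the pre-labeled
        # subject layer, i.e. the element that will be replaced.
        for layer in self.layers:
            if layer.kind == "subject":
                return layer
        return None

    def replace_subject(self, material: Any) -> None:
        # Step 407: place the target material into the subject layer,
        # leaving the other layers untouched.
        subject = self.subject_element()
        if subject is not None:
            subject.content = material
```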
Step 407: replace the subject element in the original template according to the target material, to obtain a candidate image.
Optionally, step 407 is executed after step 405 and step 406. After the user terminal extracts the target material from the original image, the target material is saved into a database, and the subject element in the original template selected by the user is replaced; that is, the target material is automatically placed in the original template.
Optionally, the user terminal replaces the subject element in the original template with the target material in, but not limited to, the following two possible implementations:
In the first possible implementation, the terminal replaces the subject element in the original template with the target material to obtain the replaced candidate image.
In the second possible implementation, the terminal scales the target material to obtain the scaled target material, where the absolute difference between the sizes of the scaled target material and the subject element is less than a preset threshold; the subject element in the original template is replaced with the scaled target material to obtain the candidate image.
Optionally, the user terminal compares the first size of the target material with the second size of the subject element: if the first size of the target material is less than the second size of the subject element, the target material is enlarged; if the first size of the target material is greater than the second size of the subject element, the target material is reduced.
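The size comparison above amounts to fitting the material to the subject element. A minimal sketch, assuming sizes are (width, height) pairs and that the material's aspect ratio is preserved:

```python
def scale_to_subject(material_size, subject_size):
    """Scale the target material so it fits the subject element's box.

    A single scale factor enlarges the material when it is smaller than the
    subject element and shrinks it when it is larger, matching the size
    comparison described for step 407.
    """
    mw, mh = material_size
    sw, sh = subject_size
    factor = min(sw / mw, sh / mh)   # > 1 enlarges, < 1 reduces
    return (round(mw * factor), round(mh * factor))
```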
Step 408: post-process the candidate image to generate the target image.
The post-processing includes super-resolution processing and/or style filter processing.
In a schematic example, based on the original template 53 provided in FIG. 5 and the target material 66 provided in FIG. 6, as shown in FIG. 7, the user terminal obtains the subject element 72 pre-labeled in the original template 53, replaces the subject element 72 in the original template 53 with the target material 66 to obtain a candidate image, and post-processes the candidate image to generate the target image 74.
Optionally, if the target material is enlarged in the candidate image, the resolution of the target material may be reduced. In order to improve the resolution of the enlarged target material, super-resolution processing is performed on it, which alleviates to some extent the blurring caused by enlarging the target material.
Optionally, the user terminal applies style filter processing to the candidate image using a three-dimensional lookup table (3D look-up table, 3D LUT) algorithm to obtain the target image.
Optionally, the style of the target image includes one of a lively style, a bright style, a fresh style, a morning style, a warm style, and a cool style.
Optionally, before the user terminal applies style filter processing to the candidate image using the 3D LUT algorithm, the designer terminal needs to design a LUT template. Referring to FIG. 8, the style filter design process includes: the designer terminal performs visual design and color grading, exports the LUT template, and uploads the LUT template, thereby obtaining the designed LUT template. The style filter processing includes: the user terminal obtains the candidate image and the designed LUT template, applies the LUT template to the candidate image using the 3D LUT algorithm, and outputs the target image.
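A 3D LUT maps each input color to a graded output color by indexing a precomputed cube. A minimal nearest-neighbour sketch follows (a production implementation would typically interpolate trilinearly between LUT bins; the cube layout here is an assumption):

```python
import numpy as np

def apply_3d_lut(img, lut):
    """Apply a 3D LUT by nearest-neighbour lookup.

    img: H x W x 3 uint8 candidate image.
    lut: n x n x n x 3 uint8 cube; lut[r, g, b] is the graded color for the
    (r, g, b)-th bin of the input color space.
    """
    n = lut.shape[0]
    # Map each 0..255 component to the nearest of the n LUT bins.
    idx = (img.astype(np.int32) * (n - 1) + 127) // 255
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]
```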
In a schematic example, as shown in FIG. 9, the image composition method provided by the embodiment of the present invention includes, but is not limited to, three major steps. 1. Obtain the target material: the user uploads a captured photo 91 to the advertisement production application of the user terminal, the user terminal activates the matting function and performs automatic matting on the uploaded photo 91, obtaining the target material 92 in the photo 91. 2. Disassemble and replace the original template: the user terminal obtains the subject element 94 in the subject layer of the original template 93 and replaces the subject element 94 with the target material 92. 3. Post-process the candidate image to obtain the target image: the user terminal applies filter processing and the like to the candidate image to obtain the target image 95.
In conclusion the embodiment of the present invention also passes through the main element for obtaining and marking in advance in primary template, by original mould
Main element in plate replaces with target material, obtains candidate image, according to candidate image, generates target image;Avoid phase
User needs the case where synthesizing manually to image in the technology of pass, realizes the automation of image synthesis, further improves
The producing efficiency of image synthesis.
It should be noted that, in the embodiments of the present invention, the visual attention mechanism of the saliency detection may adopt a bottom-up visual attention model, the basic structure of which is shown in FIG. 10.
For an original image, the visual attention model performs linear filtering on the original image to extract primary visual features, namely color (RGBY), luminance, and orientation. Center-surround operations at multiple scales generate multiple feature maps indicating saliency measures; these feature maps are merged across scales and normalized to obtain per-feature saliency maps, and the per-feature saliency maps are linearly combined to obtain a merged saliency map. The winner-take-all competition mechanism from biology is then used to obtain the most salient attention region in the original image, and finally the inhibition-of-return method is used to shift the focus of attention, yielding the final saliency map.
On the other hand, it should be noted that before the user terminal obtains the deep matting model, the deep matting model needs to be trained. Optionally, the training process of the deep matting model includes: obtaining a training sample set, where the training sample set includes at least one sample data group; and training an initial parameter model with an error backpropagation algorithm according to the at least one sample data group, to obtain the deep matting model.
Each sample data group includes a sample image, a sample trimap, and a pre-labeled correct matting mask.
The user terminal trains the initial parameter model with the error backpropagation algorithm according to the at least one sample data group to obtain the deep matting model, including, but not limited to, the following steps:
1. For each sample data group in the at least one sample data group, input the sample image and the sample trimap into the initial parameter model to obtain a training result.
Optionally, the initial parameter model is built according to a neural network model; for example, the initial parameter model is built according to a DNN model or an RNN model.
Schematically, for each sample data group, the user terminal creates the input-output pair corresponding to the sample data group. The input parameter of the input-output pair is the sample image and the sample trimap in the sample data group, and the output parameter is the correct matting mask in the sample data group. The user terminal inputs the input parameter into the prediction model to obtain the training result.
For example, a sample data group includes a sample image A1, a sample trimap A2, and a pre-labeled correct matting mask X1. The input-output pair created by the user terminal is: (sample image A1, sample trimap A2) -> (correct matting mask X1), where (sample image A1, sample trimap A2) is the input parameter and (correct matting mask X1) is the output parameter.
Optionally, the input-output pair is represented by feature vectors.
2. Compare the training result with the correct matting mask to obtain a computed loss, which indicates the error between the training result and the correct matting mask.
Optionally, the computed loss is represented by cross-entropy.
Optionally, the user terminal calculates the computed loss H(p, q) by the following formula:
H(p, q) = -Σx p(x)·log q(x)
where p(x) and q(x) are discrete distribution vectors of equal length, p(x) represents the training result, q(x) represents the output parameter, and x is an element of the training result or output parameter.
3. Train the deep matting model with the error backpropagation algorithm, according to the computed losses corresponding to the at least one sample data group.
Optionally, the user terminal determines the gradient direction of the deep matting model from the computed loss by the backpropagation algorithm, and updates the model parameters in the deep matting model layer by layer, working forward from the output layer of the deep matting model.
Schematically, as shown in FIG. 11, the process by which the user terminal trains the deep matting model includes: the user terminal obtains a training sample set, which includes at least one sample data group, each sample data group including a sample image, a sample trimap, and a pre-labeled correct matting mask. For each sample data group, the user terminal inputs the sample image and the sample trimap into the initial parameter model to output a training result, compares the training result with the correct matting mask to obtain a computed loss, and trains the deep matting model with the error backpropagation algorithm according to the computed losses corresponding to the at least one sample data group. After the deep matting model is trained, the user terminal stores the trained deep matting model. When the matting function of the user terminal is enabled, the user terminal obtains the original image and its trimap, obtains the trained deep matting model, inputs the original image and its trimap into the deep matting model, outputs the matting mask of the original image, and extracts the target material from the original image using the matting mask.
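The forward-pass / loss / backpropagated-update cycle of FIG. 11 can be illustrated with a deliberately tiny stand-in model: a single sigmoid unit predicting a per-pixel alpha from four features (three color channels plus the trimap value). The real model is a neural network (e.g. a DNN); everything here, from the feature layout to the learning rate, is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for the "initial parameter model": one sigmoid unit.
w = rng.normal(0.0, 0.1, 4)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(feats, alpha_true, lr=1.0):
    """feats: P x 4 per-pixel features; alpha_true: P ground-truth alphas."""
    global w, b
    pred = sigmoid(feats @ w + b)                      # forward pass
    eps = 1e-12
    loss = -np.mean(alpha_true * np.log(pred + eps)    # cross-entropy loss
                    + (1 - alpha_true) * np.log(1 - pred + eps))
    grad = (pred - alpha_true) / len(pred)             # dLoss/dz for sigmoid + CE
    w -= lr * (feats.T @ grad)                         # backpropagated update
    b -= lr * grad.sum()
    return loss

# Pixels whose trimap feature (last column) is 1 should get alpha 1.
feats = np.array([[0.2, 0.4, 0.6, 1.0],
                  [0.8, 0.1, 0.3, 0.0],
                  [0.5, 0.5, 0.5, 1.0],
                  [0.1, 0.9, 0.2, 0.0]])
alpha = np.array([1.0, 0.0, 1.0, 0.0])
losses = [train_step(feats, alpha) for _ in range(200)]
```

With repeated updates the computed loss falls toward zero, which is the behavior the embodiment's backpropagation training relies on.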
It should be noted that when the image composition method is applied to the field of advertisement material production, the image composition method may also be referred to as an advertisement material synthesis method. In the advertisement material synthesis method, the above target application is an application with advertisement material processing functions, the target material in the original image may be at least one of a plant element, an animal element, and a still-life element, the original template to be synthesized is an advertisement template, and the synthesized target image is a target advertisement image.
Optionally, the advertisement material synthesis method includes, but is not limited to, the following steps, as shown in FIG. 12:
Step 1201: obtain a first trigger operation corresponding to an image upload entry in a target application, where the target application is an application with advertisement material processing functions.
Optionally, when the user terminal receives a click operation corresponding to a template selection entry in the advertisement material production application, the M stored candidate advertisement templates are displayed; when the user terminal receives a click operation corresponding to one of the M candidate advertisement templates, that candidate advertisement template is determined as the advertisement template to be synthesized, and the advertisement template is obtained and displayed.
Optionally, after obtaining the advertisement template selected by the user, the user terminal detects whether there is a click operation corresponding to the image upload entry, and if so, obtains the click operation corresponding to the image upload entry.
Step 1202: obtain the uploaded original image according to the first trigger operation.
Optionally, the terminal obtains the original image to be matted according to the click operation corresponding to the image upload entry.
Step 1203: enable the matting function.
Optionally, after obtaining the original image to be matted, the terminal enables the matting function of the target application.
Step 1204: perform automatic matting on the original image to obtain the target material in the original image, where the target material is at least one of a plant element, an animal element, and a still-life element.
The user terminal performs saliency detection on the original image to generate a saliency map, and calculates a matting mask from the saliency map using the deep matting model, where the deep matting model indicates the matting rules learned by training on sample images; the target material in the original image is then extracted using the matting mask.
It should be noted that the process by which the user terminal calculates the matting mask using saliency detection and the deep matting model may refer to the relevant details in the above embodiments, and is not described again here.
Step 1205: synthesize the target material with the advertisement template to be synthesized, to obtain the target advertisement image.
The user terminal obtains the subject element pre-labeled in the advertisement template, replaces the subject element in the advertisement template according to the target material to obtain a candidate advertisement image, and post-processes the candidate advertisement image to generate the target advertisement image.
Optionally, the user terminal scales the target material to obtain the scaled target material, where the absolute difference between the sizes of the scaled target material and the subject element is less than a preset threshold; the subject element in the advertisement template is replaced with the scaled target material to obtain the candidate advertisement image.
Optionally, the user terminal applies style filter processing to the candidate advertisement image using the three-dimensional lookup table algorithm to obtain the target advertisement image.
It should be noted that the advertisement material synthesis method may refer, by analogy, to the relevant details of the image composition method provided by the above method embodiments, and is not described again here.
The following are apparatus embodiments of the present invention, which may be used to execute the method embodiments of the present invention. For details not disclosed in the apparatus embodiments of the present invention, please refer to the method embodiments of the present invention.
Please refer to FIG. 13, which illustrates a structural schematic diagram of an image synthesis apparatus provided by one embodiment of the present invention. The image synthesis apparatus may be implemented as all or part of the image synthesis system by a dedicated hardware circuit or a combination of software and hardware. The image synthesis apparatus includes: an obtaining module 1310, a generation module 1320, a computing module 1330, an extraction module 1340, and a synthesis module 1350.
The obtaining module 1310 is configured to implement the above step 301 and/or step 401.
The generation module 1320 is configured to implement the above step 302 and/or step 402.
The computing module 1330 is configured to implement the above step 303.
The extraction module 1340 is configured to implement the above step 304 and/or step 405.
The synthesis module 1350 is configured to implement the above step 305.
Optionally, the computing module 1330 includes a detection unit and a computing unit.
The detection unit is configured to implement the above step 403.
The computing unit is configured to implement the above step 404.
Optionally, the computing unit is further configured to obtain the deep matting model, where the deep matting model is obtained by training on at least one sample data group, each sample data group including a sample image, a sample trimap, and a pre-labeled correct matting mask; and to input the original image and the trimap into the deep matting model to calculate the matting mask of the original image.
Optionally, the computing unit is further configured to obtain a training sample set, the training sample set including at least one sample data group; and to train the initial parameter model with the error backpropagation algorithm according to the at least one sample data group, to obtain the deep matting model.
Optionally, the computing unit is further configured to, for each sample data group in the at least one sample data group, input the sample image and the sample trimap into the initial parameter model to obtain a training result; to compare the training result with the correct matting mask to obtain a computed loss indicating the error between the training result and the correct matting mask; and to train the deep matting model with the error backpropagation algorithm according to the computed losses corresponding to the at least one sample data group.
Optionally, the obtaining module 1310 includes a first obtaining unit and a second obtaining unit.
The first obtaining unit is configured to obtain the first trigger operation corresponding to the image upload entry in the target application, where the target application is an application with a matting function.
The second obtaining unit is configured to obtain the original image to be matted according to the first trigger operation, and to enable the matting function.
Optionally, the synthesis module 1350 includes a third obtaining unit, a replacement unit, and a generation unit.
The third obtaining unit is configured to obtain the subject element pre-labeled in the original template.
The replacement unit is configured to replace the subject element in the original template according to the target material, to obtain the candidate image.
The generation unit is configured to post-process the candidate image to generate the target image.
Optionally, the replacement unit is further configured to scale the target material to obtain the scaled target material, where the absolute difference between the sizes of the scaled target material and the subject element is less than a preset threshold; and to replace the subject element in the original template with the scaled target material to obtain the candidate image.
Optionally, the replacement unit is further configured to apply style filter processing to the candidate image using the three-dimensional lookup table algorithm, to obtain the target image.
The relevant details may be combined with the method embodiments shown in FIG. 3 to FIG. 12. The obtaining module 1310 is further configured to implement any other implicit or disclosed function related to the obtaining steps in the above method embodiments; the generation module 1320 is further configured to implement any other implicit or disclosed function related to the generation steps in the above method embodiments; the computing module 1330 is further configured to implement any other implicit or disclosed function related to the computing steps in the above method embodiments; the extraction module 1340 is further configured to implement any other implicit or disclosed function related to the extraction steps in the above method embodiments; and the synthesis module 1350 is further configured to implement any other implicit or disclosed function related to the synthesis steps in the above method embodiments.
Referring to FIG. 14, it shows a schematic structural diagram of an advertisement material synthesis apparatus provided by an embodiment of the present invention. The apparatus may be implemented, by a dedicated hardware circuit or by a combination of software and hardware, as all or part of an image synthesis system. The apparatus includes: a first obtaining module 1410, a second obtaining module 1420, an opening module 1430, a matting module 1440, and a synthesis module 1450.
The first obtaining module 1410 is configured to implement step 1201 above.
The second obtaining module 1420 is configured to implement step 1202 above.
The opening module 1430 is configured to implement step 1203 above.
The matting module 1440 is configured to implement step 1204 above.
The synthesis module 1450 is configured to implement step 1205 above.
Optionally, the matting module 1440 includes: a first generation unit, a computing unit, and an extraction unit.
The first generation unit is configured to perform saliency detection on the original image to generate a saliency map.
The computing unit is configured to compute a matting mask from the saliency map using a deep matting model, the deep matting model indicating matting rules learned from training on sample images.
The extraction unit is configured to extract the target material from the original image using the matting mask.
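The extraction step described above can be sketched as alpha-weighting the original image with the matting mask. The mask here is a hypothetical 0-1 alpha matte; in the patent's pipeline it would be produced by the deep matting model.

```python
import numpy as np

def extract_material(original, matte):
    """original: HxWx3 float image in [0, 1]; matte: HxW alpha matte in
    [0, 1] produced by the matting model. Returns an HxWx4 RGBA cut-out
    in which background pixels are fully transparent."""
    rgba = np.dstack([original, matte])
    # Premultiply RGB by alpha so background pixels go to black/transparent.
    rgba[..., :3] *= matte[..., None]
    return rgba
```

The resulting RGBA cut-out can then be composited directly onto a template background.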
Optionally, the synthesis module 1450 includes: an acquiring unit, a replacement unit, and a second generation unit.
The acquiring unit is configured to obtain the main element marked in advance in the advertisement template.
The replacement unit is configured to replace the main element in the advertisement template with the target material to obtain a candidate advertisement image.
The second generation unit is configured to post-process the candidate advertisement image to generate the target advertisement image.
Optionally, the replacement unit is further configured to scale the target material to obtain scaled target material, where the absolute difference between the size of the scaled target material and the size of the main element is less than a preset threshold, and to replace the main element in the advertisement template with the scaled target material to obtain the candidate advertisement image.
Optionally, the second generation unit is further configured to apply a style filter to the candidate advertisement image using a three-dimensional lookup table algorithm to obtain the target advertisement image.
For related details, reference may be made to the method embodiments shown in FIG. 3 to FIG. 12. The first obtaining module 1410 and the second obtaining module 1420 are further configured to implement any other implicit or disclosed function related to the obtaining steps in the foregoing method embodiments; the opening module 1430 is further configured to implement any other implicit or disclosed function related to the opening steps in the foregoing method embodiments; the matting module 1440 is further configured to implement any other implicit or disclosed function related to the matting steps in the foregoing method embodiments; and the synthesis module 1450 is further configured to implement any other implicit or disclosed function related to the synthesis steps in the foregoing method embodiments.
It should be noted that when the apparatus provided in the foregoing embodiments implements its functions, the division into the above functional modules is merely an example. In practical applications, the above functions may be allocated to different functional modules as required; that is, the internal structure of the device may be divided into different functional modules to complete all or some of the functions described above. In addition, the apparatus embodiments and method embodiments provided above belong to the same conception; for details of their implementation process, refer to the method embodiments, which are not repeated here.
An embodiment of the present invention provides an image synthesis system. The image synthesis system includes a processor and a memory, the memory storing at least one instruction, and the at least one instruction being loaded and executed by the processor to implement the image composition method or the advertisement material synthesis method provided by each of the foregoing method embodiments.
FIG. 15 shows a structural block diagram of a terminal 1500 provided by an illustrative embodiment of the present invention. The terminal 1500 may be the user terminal 220 in the image synthesis system of FIG. 2, or may be the designer terminal 240.
In general, terminal 1500 includes: processor 1501 and memory 1502.
The processor 1501 may include one or more processing cores, for example, a 4-core processor or an 8-core processor. The processor 1501 may be implemented in hardware in at least one of the following forms: DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 1501 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also referred to as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit), the GPU being responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1501 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 1502 may include one or more computer-readable storage media, which may be non-transitory. The memory 1502 may also include high-speed random access memory and nonvolatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1502 is used to store at least one instruction, the at least one instruction being executed by the processor 1501 to implement the image composition method or the advertisement material synthesis method provided by each of the foregoing embodiments.
In some embodiments, the terminal 1500 optionally further includes: a peripheral device interface 1503 and at least one peripheral device. The processor 1501, the memory 1502, and the peripheral device interface 1503 may be connected by buses or signal lines. Each peripheral device may be connected to the peripheral device interface 1503 by a bus, a signal line, or a circuit board. Specifically, the peripheral devices include at least one of: a radio frequency circuit 1504, a touch display screen 1505, a camera 1506, an audio circuit 1507, a positioning component 1508, and a power supply 1509.
The peripheral device interface 1503 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, the memory 1502, and the peripheral device interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1501, the memory 1502, and the peripheral device interface 1503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1504 is configured to receive and transmit RF (Radio Frequency) signals, also referred to as electromagnetic signals. The radio frequency circuit 1504 communicates with communication networks and other communication devices through electromagnetic signals. The radio frequency circuit 1504 converts an electric signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 1504 can communicate with other terminals through at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the World Wide Web, metropolitan area networks, intranets, the generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may further include an NFC (Near Field Communication)-related circuit, which is not limited in the present invention.
The display screen 1505 is configured to display a UI (User Interface). The UI may include graphics, text, icons, videos, and any combination thereof. When the display screen 1505 is a touch display screen, the display screen 1505 also has the capability of acquiring touch signals on or above the surface of the display screen 1505. The touch signal may be input to the processor 1501 as a control signal for processing. In this case, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1505, disposed on the front panel of the terminal 1500; in other embodiments, there may be at least two display screens 1505, respectively disposed on different surfaces of the terminal 1500 or in a folded design; in still other embodiments, the display screen 1505 may be a flexible display screen, disposed on a curved surface or folded surface of the terminal 1500. The display screen 1505 may even be set to a non-rectangular irregular shape, that is, a shaped screen. The display screen 1505 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 1506 is configured to capture images or videos. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to implement a background-blurring function through fusion of the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting functions through fusion of the main camera and the wide-angle camera, or other fused shooting functions. In some embodiments, the camera assembly 1506 may further include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 1507 may include a microphone and a speaker. The microphone is configured to acquire sound waves from the user and the environment, convert the sound waves into electric signals, and input them to the processor 1501 for processing, or input them to the radio frequency circuit 1504 for voice communication. For stereo acquisition or noise reduction purposes, there may be multiple microphones, respectively disposed at different parts of the terminal 1500. The microphone may also be an array microphone or an omnidirectional acquisition microphone. The speaker is configured to convert electric signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The speaker may be a traditional film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert electric signals into sound waves audible to humans, but also convert electric signals into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 1507 may further include a headphone jack.
The positioning component 1508 is configured to locate the current geographic position of the terminal 1500 to implement navigation or LBS (Location Based Service). The positioning component 1508 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the GLONASS system of Russia.
The power supply 1509 is configured to supply power to the various components in the terminal 1500. The power supply 1509 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 1509 includes a rechargeable battery, the rechargeable battery may be a wired charging battery or a wireless charging battery. A wired charging battery is charged through a wired line, and a wireless charging battery is charged through a wireless coil. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, the terminal 1500 further includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to: an acceleration sensor 1511, a gyroscope sensor 1512, a pressure sensor 1513, a fingerprint sensor 1514, an optical sensor 1515, and a proximity sensor 1516.
The acceleration sensor 1511 can detect the magnitudes of accelerations on the three coordinate axes of a coordinate system established with respect to the terminal 1500. For example, the acceleration sensor 1511 can be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1501 can control the touch display screen 1505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 1511. The acceleration sensor 1511 can also be used for the acquisition of game or user motion data.
The gyroscope sensor 1512 can detect the body orientation and rotation angle of the terminal 1500, and can cooperate with the acceleration sensor 1511 to acquire the user's 3D actions on the terminal 1500. Based on the data acquired by the gyroscope sensor 1512, the processor 1501 can implement the following functions: motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1513 may be disposed at a side frame of the terminal 1500 and/or under the touch display screen 1505. When the pressure sensor 1513 is disposed at the side frame of the terminal 1500, the user's grip signal on the terminal 1500 can be detected, and the processor 1501 performs left/right hand recognition or shortcut operations according to the grip signal acquired by the pressure sensor 1513. When the pressure sensor 1513 is disposed under the touch display screen 1505, the processor 1501 controls operability controls on the UI according to the user's pressure operation on the touch display screen 1505. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1514 is configured to acquire the user's fingerprint, and the processor 1501 identifies the user's identity according to the fingerprint acquired by the fingerprint sensor 1514, or the fingerprint sensor 1514 identifies the user's identity according to the acquired fingerprint. When the identified identity is a trusted identity, the processor 1501 authorizes the user to perform relevant sensitive operations, the sensitive operations including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1514 may be disposed on the front, back, or side of the terminal 1500. When a physical button or a manufacturer logo is disposed on the terminal 1500, the fingerprint sensor 1514 may be integrated with the physical button or the manufacturer logo.
The optical sensor 1515 is configured to acquire the ambient light intensity. In one embodiment, the processor 1501 can control the display brightness of the touch display screen 1505 according to the ambient light intensity acquired by the optical sensor 1515. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1505 is decreased. In another embodiment, the processor 1501 can also dynamically adjust the shooting parameters of the camera assembly 1506 according to the ambient light intensity acquired by the optical sensor 1515.
The proximity sensor 1516, also referred to as a distance sensor, is generally disposed on the front panel of the terminal 1500. The proximity sensor 1516 is configured to acquire the distance between the user and the front of the terminal 1500. In one embodiment, when the proximity sensor 1516 detects that the distance between the user and the front of the terminal 1500 gradually decreases, the processor 1501 controls the touch display screen 1505 to switch from the screen-on state to the screen-off state; when the proximity sensor 1516 detects that the distance between the user and the front of the terminal 1500 gradually increases, the processor 1501 controls the touch display screen 1505 to switch from the screen-off state to the screen-on state.
Those skilled in the art will understand that the structure shown in FIG. 15 does not constitute a limitation on the terminal 1500, which may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement.
Referring to FIG. 16, it shows a schematic structural diagram of a server 1600 provided by an illustrative embodiment of the present invention. The server 1600 may be any one of the servers in the server cluster 230 of the image synthesis system shown in FIG. 2. Specifically, the server 1600 includes a central processing unit (CPU) 1601, a system memory 1604 including a random access memory (RAM) 1602 and a read-only memory (ROM) 1603, and a system bus 1605 connecting the system memory 1604 and the central processing unit 1601. The server 1600 further includes a basic input/output system (I/O system) 1606 that helps transmit information between the devices in the computer, and a mass storage device 1607 for storing an operating system 1613, application programs 1614, and other program modules 1615.
The basic input/output system 1606 includes a display 1608 for displaying information and an input device 1609, such as a mouse or keyboard, for the user to input information. The display 1608 and the input device 1609 are both connected to the central processing unit 1601 through an input/output controller 1610 connected to the system bus 1605. The basic input/output system 1606 may further include the input/output controller 1610 for receiving and processing input from multiple other devices such as a keyboard, a mouse, or an electronic stylus. Similarly, the input/output controller 1610 also provides output to a display screen, a printer, or another type of output device.
The mass storage device 1607 is connected to the central processing unit 1601 through a mass storage controller (not shown) connected to the system bus 1605. The mass storage device 1607 and its associated computer-readable medium provide non-volatile storage for the server 1600. That is, the mass storage device 1607 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM drive.
Without loss of generality, the computer-readable medium may include computer storage media and communication media. Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state storage technologies; CD-ROM, DVD, or other optical storage; and magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will know that the computer storage medium is not limited to the above. The system memory 1604 and the mass storage device 1607 may be collectively referred to as memory.
According to various embodiments of the present invention, the server 1600 may also run by means of a remote computer connected through a network such as the Internet. That is, the server 1600 may be connected to a network 1612 through a network interface unit 1611 connected to the system bus 1605; in other words, the network interface unit 1611 may also be used to connect to other types of networks or remote computer systems (not shown).
Optionally, the memory stores at least one instruction, at least one program segment, a code set, or an instruction set, which is loaded and executed by the processor to implement the steps performed by the server in the image composition method or the advertisement material synthesis method provided by each of the foregoing method embodiments.
The serial numbers of the foregoing embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the methods of the above embodiments may be completed by hardware, or by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely preferred embodiments of the present invention, and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.
Claims (14)
1. An image composition method, characterized in that the method comprises:
obtaining an original image to be matted;
performing saliency detection on the original image to generate a saliency map;
computing a matting mask from the saliency map using a deep matting model, the deep matting model indicating matting rules learned from training on sample images;
extracting target material from the original image using the matting mask; and
synthesizing the target material with a primary template to be synthesized to obtain a target image.
2. The method according to claim 1, characterized in that computing the matting mask from the saliency map using the deep matting model comprises:
performing edge detection on the saliency map to obtain a corresponding trimap, the trimap comprising a foreground region, a background region, and an unknown region of the saliency map; and
computing the matting mask of the original image from the original image and the trimap using the deep matting model.
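A trimap of the kind described in claim 2 partitions the image into definite foreground, definite background, and an unknown band that the matting model resolves. The sketch below derives one from the saliency map by double thresholding, a common approximation; the patent itself specifies edge detection, and the threshold values are illustrative assumptions.

```python
import numpy as np

def saliency_to_trimap(saliency, lo=0.2, hi=0.8):
    """saliency: HxW float map in [0, 1]. Returns a trimap with
    255 = foreground, 0 = background, 128 = unknown region.
    The lo/hi thresholds are illustrative, not from the patent."""
    trimap = np.full(saliency.shape, 128, dtype=np.uint8)
    trimap[saliency >= hi] = 255   # confidently salient -> foreground
    trimap[saliency <= lo] = 0     # confidently non-salient -> background
    return trimap
```

Pixels left at 128 form the unknown region whose alpha values the deep matting model must estimate.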
3. The method according to claim 2, characterized in that computing the matting mask of the original image from the original image and the trimap using the deep matting model comprises:
obtaining the deep matting model, the deep matting model being trained on at least one sample data group, each sample data group comprising: a sample image, a sample trimap, and a pre-labeled correct matting mask; and
inputting the original image and the trimap into the deep matting model to compute the matting mask of the original image.
4. The method according to claim 3, characterized in that obtaining the deep matting model comprises:
obtaining a training sample set, the training sample set comprising the at least one sample data group; and
training an initial parameter model with an error backpropagation algorithm according to the at least one sample data group to obtain the deep matting model.
5. The method according to claim 4, characterized in that training the initial parameter model with the error backpropagation algorithm according to the at least one sample data group to obtain the deep matting model comprises:
for each sample data group in the at least one sample data group, inputting the sample image and the sample trimap into the initial parameter model to obtain a training result;
comparing the training result with the correct matting mask to obtain a computed loss, the computed loss indicating the error between the training result and the correct matting mask; and
training with the error backpropagation algorithm according to the computed losses corresponding to the at least one sample data group to obtain the deep matting model.
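The forward-pass / loss / backpropagation cycle of claim 5 can be illustrated with a toy stand-in: a single linear layer mapping per-pixel features (image plus trimap channels) to a matte value, trained by gradient descent on a mean-squared loss. Everything below (feature layout, model size, learning rate) is an assumption for illustration; the patent's actual model is a deep matting network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the deep matting model: a single linear layer
# mapping a 4-dim (image + trimap) feature vector to a matte value.
X = rng.random((32, 4))                 # sample features
true_w = np.array([0.4, 0.3, 0.2, 0.1])
y = X @ true_w                          # "correct matting mask" values

w = np.zeros(4)                         # initial parameter model
lr = 0.1
for _ in range(2000):
    pred = X @ w                        # training result (forward pass)
    loss = np.mean((pred - y) ** 2)     # computed loss vs. correct mask
    grad = 2 * X.T @ (pred - y) / len(y)  # backpropagated gradient
    w -= lr * grad                      # parameter update
```

After training, `w` approximates `true_w`, which is the sense in which the computed loss drives the model toward the pre-labeled masks.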
6. The method according to any one of claims 1 to 5, characterized in that obtaining the original image to be matted comprises:
obtaining a first trigger action on an image upload entry in a target application, the target application being an application with a matting function; and
obtaining the original image to be matted according to the first trigger action, and enabling the matting function.
7. The method according to any one of claims 1 to 5, characterized in that synthesizing the target material with the primary template to be synthesized to obtain the target image comprises:
obtaining a main element marked in advance in the primary template;
replacing the main element in the primary template with the target material to obtain a candidate image; and
post-processing the candidate image to generate the target image.
8. The method according to claim 7, characterized in that replacing the main element in the primary template with the target material to obtain the candidate image comprises:
scaling the target material to obtain scaled target material, the absolute difference between the size of the scaled target material and the size of the main element being less than a preset threshold; and
replacing the main element in the primary template with the scaled target material to obtain the candidate image.
9. The method according to claim 7, characterized in that post-processing the candidate image to generate the target image comprises:
applying a style filter to the candidate image using a three-dimensional lookup table algorithm to obtain the target image.
10. An advertisement material synthesis method, characterized in that the method comprises:
obtaining a first trigger action on an image upload entry in a target application, the target application being an application with an advertisement material processing function;
obtaining an uploaded original image according to the first trigger action;
enabling a matting function;
performing automated matting on the original image to obtain target material in the original image, the target material being at least one of a plant, an animal, and a still-life element; and
synthesizing the target material with an advertisement template to be synthesized to obtain a target advertisement image.
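The final synthesis step of the method above — placing the extracted material where the template's main element was — amounts to alpha compositing the RGBA cut-out over the template at the element's location. The box convention and RGBA layout below are illustrative assumptions, not details fixed by the patent.

```python
import numpy as np

def composite(template, material_rgba, box):
    """template: HxWx3 float image; material_rgba: hxwx4 float cut-out
    already scaled to the main element's box; box: (top, left) corner
    of the main element within the template."""
    t, l = box
    h, w = material_rgba.shape[:2]
    region = template[t:t + h, l:l + w]
    alpha = material_rgba[..., 3:4]
    # Standard "over" compositing: material over the template background.
    template[t:t + h, l:l + w] = (alpha * material_rgba[..., :3]
                                  + (1 - alpha) * region)
    return template
```

Post-processing (e.g. the 3D LUT style filter of the other claims) would then be applied to the composited result.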
11. The method according to claim 10, characterized in that performing automated matting on the original image to obtain the target material in the original image comprises:
performing saliency detection on the original image to generate a saliency map;
computing a matting mask from the saliency map using a deep matting model, the deep matting model indicating matting rules learned from training on sample images; and
extracting the target material from the original image using the matting mask.
12. The method according to claim 10, characterized in that synthesizing the target material with the advertisement template to be synthesized to obtain the target advertisement image comprises:
obtaining a main element marked in advance in the advertisement template;
replacing the main element in the advertisement template with the target material to obtain a candidate advertisement image; and
post-processing the candidate advertisement image to generate the target advertisement image.
13. A terminal, characterized in that the terminal comprises a processor and a memory, the memory storing at least one instruction, at least one program segment, a code set, or an instruction set, the at least one instruction, the at least one program segment, the code set, or the instruction set being loaded and executed by the processor to implement the image composition method according to any one of claims 1 to 9 or the advertisement material synthesis method according to any one of claims 10 to 12.
14. A computer-readable storage medium, characterized in that the storage medium stores at least one instruction, at least one program segment, a code set, or an instruction set, the at least one instruction, the at least one program segment, the code set, or the instruction set being loaded and executed by a processor to implement the image composition method according to any one of claims 1 to 9 or the advertisement material synthesis method according to any one of claims 10 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810146131.9A CN110148102B (en) | 2018-02-12 | 2018-02-12 | Image synthesis method, advertisement material synthesis method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810146131.9A CN110148102B (en) | 2018-02-12 | 2018-02-12 | Image synthesis method, advertisement material synthesis method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110148102A true CN110148102A (en) | 2019-08-20 |
CN110148102B CN110148102B (en) | 2022-07-15 |
Family
ID=67588140
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810146131.9A Active CN110148102B (en) | 2018-02-12 | 2018-02-12 | Image synthesis method, advertisement material synthesis method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110148102B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103714540A (en) * | 2013-12-21 | 2014-04-09 | 浙江传媒学院 | SVM-based transparency estimation method in digital image matting processing |
CN105488784A (en) * | 2015-11-23 | 2016-04-13 | 广州一刻影像科技有限公司 | Automatic portrait matting method |
CN105809666A (en) * | 2014-12-30 | 2016-07-27 | 联芯科技有限公司 | Image matting method and device |
CN106355583A (en) * | 2016-08-30 | 2017-01-25 | 成都丘钛微电子科技有限公司 | Image processing method and device |
CN107481261A (en) * | 2017-07-31 | 2017-12-15 | 中国科学院长春光学精密机械与物理研究所 | A kind of color video based on the tracking of depth prospect scratches drawing method |
2018
- 2018-02-12 CN CN201810146131.9A patent/CN110148102B/en active Active
Non-Patent Citations (2)
Title |
---|
NING XU et al.: "Deep Image Matting", Computer Vision and Pattern Recognition 2017 * |
CHENG Jun: "Research on Image Matting Algorithms Based on Optimized Global Sampling" * |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111784726A (en) * | 2019-09-25 | 2020-10-16 | 北京沃东天骏信息技术有限公司 | Image matting method and device |
CN110675356A (en) * | 2019-09-30 | 2020-01-10 | 中国科学院软件研究所 | Embedded image synthesis method based on user intention inference |
CN110675356B (en) * | 2019-09-30 | 2022-02-22 | 中国科学院软件研究所 | Embedded image synthesis method based on user intention inference |
CN112700513A (en) * | 2019-10-22 | 2021-04-23 | 阿里巴巴集团控股有限公司 | Image processing method and device |
CN110852942A (en) * | 2019-11-19 | 2020-02-28 | 腾讯科技(深圳)有限公司 | Model training method, and media information synthesis method and device |
CN111161288B (en) * | 2019-12-26 | 2023-04-14 | 郑州阿帕斯数云信息科技有限公司 | Image processing method and device |
CN111161288A (en) * | 2019-12-26 | 2020-05-15 | 郑州阿帕斯数云信息科技有限公司 | Image processing method and device |
CN111179159A (en) * | 2019-12-31 | 2020-05-19 | 北京金山云网络技术有限公司 | Method and device for eliminating target image in video, electronic equipment and storage medium |
CN111179159B (en) * | 2019-12-31 | 2024-02-20 | 北京金山云网络技术有限公司 | Method and device for eliminating target image in video, electronic equipment and storage medium |
WO2021164534A1 (en) * | 2020-02-18 | 2021-08-26 | Oppo广东移动通信有限公司 | Image processing method and apparatus, device, and storage medium |
CN111507889A (en) * | 2020-04-13 | 2020-08-07 | 北京字节跳动网络技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN113628105A (en) * | 2020-05-07 | 2021-11-09 | 阿里巴巴集团控股有限公司 | Image processing method, device, storage medium and processor |
CN111640123A (en) * | 2020-05-22 | 2020-09-08 | 北京百度网讯科技有限公司 | Background-free image generation method, device, equipment and medium |
CN111640123B (en) * | 2020-05-22 | 2023-08-11 | 北京百度网讯科技有限公司 | Method, device, equipment and medium for generating background-free image |
US11704811B2 (en) | 2020-05-22 | 2023-07-18 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and apparatus for generating background-free image, device, and medium |
CN111724407A (en) * | 2020-05-25 | 2020-09-29 | 北京市商汤科技开发有限公司 | Image processing method and related product |
CN111784564A (en) * | 2020-06-30 | 2020-10-16 | 稿定(厦门)科技有限公司 | Automatic cutout method and system |
CN111857515B (en) * | 2020-07-24 | 2024-03-19 | 深圳市欢太科技有限公司 | Image processing method, device, storage medium and electronic equipment |
WO2022016981A1 (en) * | 2020-07-24 | 2022-01-27 | 深圳市欢太科技有限公司 | Image processing methods and apparatus, storage medium, and electronic device |
CN111857515A (en) * | 2020-07-24 | 2020-10-30 | 深圳市欢太科技有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN113379665A (en) * | 2021-06-28 | 2021-09-10 | 展讯通信(天津)有限公司 | Matting correction method and apparatus |
CN113743281A (en) * | 2021-08-30 | 2021-12-03 | 上海明略人工智能(集团)有限公司 | Program advertisement material identification method, system, computer device and storage medium |
CN113902754A (en) * | 2021-09-27 | 2022-01-07 | 四川新网银行股份有限公司 | Method for generating standardized electronic data |
CN114253451A (en) * | 2021-12-21 | 2022-03-29 | 咪咕音乐有限公司 | Screenshot method and device, electronic equipment and storage medium |
CN114615520A (en) * | 2022-03-08 | 2022-06-10 | 北京达佳互联信息技术有限公司 | Subtitle positioning method, subtitle positioning device, computer equipment and medium |
CN114615520B (en) * | 2022-03-08 | 2024-01-02 | 北京达佳互联信息技术有限公司 | Subtitle positioning method, subtitle positioning device, computer equipment and medium |
CN115543161A (en) * | 2022-11-04 | 2022-12-30 | 广州市保伦电子有限公司 | Matting method and device suitable for whiteboard all-in-one machine |
CN115543161B (en) * | 2022-11-04 | 2023-08-15 | 广东保伦电子股份有限公司 | Image matting method and device suitable for whiteboard integrated machine |
Also Published As
Publication number | Publication date |
---|---|
CN110148102B (en) | 2022-07-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110148102A (en) | Image synthesis method, advertisement material synthesis method and device | |
CN108549863B (en) | Human body gesture prediction method, apparatus, equipment and storage medium | |
CN110163048B (en) | Hand key point recognition model training method, hand key point recognition method and hand key point recognition equipment | |
CN109255769A (en) | Training method, training model and image enhancement method for image enhancement network |
CN110136136A (en) | Scene Segmentation, device, computer equipment and storage medium | |
CN110135336A (en) | Training method, device and the storage medium of pedestrian's generation model | |
CN109034102A (en) | Human face in-vivo detection method, device, equipment and storage medium | |
CN110059661A (en) | Action identification method, man-machine interaction method, device and storage medium | |
CN110070072A (en) | A method of generating object detection model | |
CN109978989A (en) | Three-dimensional face model generation method, device, computer equipment and storage medium | |
CN110084313A (en) | A method of generating object detection model | |
CN110111418A (en) | Create the method, apparatus and electronic equipment of facial model | |
CN108594997A (en) | Gesture framework construction method, apparatus, equipment and storage medium | |
CN110064200A (en) | Virtual-environment-based object construction method, device and readable storage medium |
CN110222551A (en) | Method, apparatus, electronic equipment and the storage medium of identification maneuver classification | |
CN109978936A (en) | Parallax picture capturing method, device, storage medium and equipment | |
CN110059652A (en) | Face image processing process, device and storage medium | |
CN108810538A (en) | Method for video coding, device, terminal and storage medium | |
CN110084253A (en) | A method of generating object detection model | |
CN110263213A (en) | Video pushing method, device, computer equipment and storage medium | |
CN109949390A (en) | Image generating method, dynamic expression image generating method and device | |
CN107833219A (en) | Image-recognizing method and device | |
CN108830186A (en) | Method for extracting content, device, equipment and the storage medium of text image | |
CN108701355A (en) | GPU optimizes and the skin possibility predication based on single Gauss online | |
CN112036331A (en) | Training method, device and equipment of living body detection model and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||