CN110533740A - Image coloring method, device, system and storage medium - Google Patents
Image coloring method, device, system and storage medium
- Publication number: CN110533740A
- Application number: CN201910702526.7A
- Authority
- CN
- China
- Prior art keywords
- image
- training
- channel value
- colored
- colorant
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
Abstract
The present invention provides an image coloring method, device, system, and storage medium. The method comprises: converting an image to be colored into the Lab color space format and obtaining the L channel values of the image to be colored; obtaining the ab channel values of an image used for coloring or of the image to be colored, wherein the image used for coloring includes content at least partly identical to the image to be colored; inputting the L channel values and the ab channel values into an image coloring neural network to obtain ab channel values for coloring; and fusing the ab channel values for coloring with the L channel values to obtain the image coloring result of the image to be colored. According to the method, device, system, and storage medium of the present invention, the color channel values produced by the neural network are fused with the luminance channel of the image to be colored to color the image, which alleviates problems of existing image coloring such as color cast, color bleeding, and missing color, and improves the efficiency and quality of image coloring.
Description
Technical field
The present invention relates to the technical field of image processing, and more specifically to image coloring.
Background art
Image coloring aims to complete the color of an existing image so that it becomes a color image with no visually obvious defects. For example, coloring based on a small amount of original color information means that, in the Lab color space, most pixels of an image carry no ab color information while a small number of pixels do, and the colorless pixels must be colored according to the few colored ones, that is, their chrominance channels must be restored. Existing coloring methods rely on information such as gray-scale pixel values and the distance between two pixels, and on the assumption that "for two adjacent pixels, if their brightness is similar, their colors should also be similar" to design an objective function and solve for sparse vectors, thereby deducing the color of a given pixel from this assumption and prior information. Such methods have certain defects: the assumption is empirical and not fully reliable, so problems such as color bleeding, missing color, and color cast readily appear in some regions of the image.
In addition, in existing coloring methods, if the ab color information cannot be strictly aligned, pixel by pixel, with the L channel of the original image, color is easily filled into the wrong positions, and the relationship between color and object contour information cannot be exploited to fill the correct color into the correct physical contour. For example, consider two images shot by the two cameras of a dual-camera device: the first possesses ab channel color information while the second has only L channel information, and the color information of the first is needed to color the second. Because the two cameras are not at the same position and have different parameters, the two pictures differ in depth of field, offset, and so on, so coloring with existing techniques leads to errors, poor coloring results, and a degraded user experience.
Therefore, image coloring in the prior art suffers from poor coloring quality.
Summary of the invention
The present invention is proposed in view of the above problems. The present invention provides an image coloring method, device, system, and computer storage medium in which the color channel values produced by a neural network are fused with the luminance channel of the image to be colored to color the image, alleviating problems of existing image coloring such as color cast, color bleeding, and missing color, and improving the efficiency and quality of image coloring.
According to a first aspect of the present invention, an image coloring method is provided, comprising:
converting an image to be colored into the Lab color space format and obtaining the L channel values of the image to be colored;
obtaining the ab channel values of an image used for coloring or of the image to be colored, wherein the image used for coloring includes content at least partly identical to the image to be colored;
inputting the L channel values and the ab channel values into an image coloring neural network to obtain ab channel values for coloring;
fusing the ab channel values for coloring with the L channel values to obtain an image coloring result of the image to be colored.
According to a second aspect of the present invention, an image coloring device is provided, comprising:
a to-be-colored image module, configured to convert an image to be colored into the Lab color space format and obtain the L channel values of the image to be colored;
a coloring-image module, configured to obtain the ab channel values of an image used for coloring or of the image to be colored, wherein the image used for coloring includes content at least partly identical to the image to be colored;
a computing module, configured to input the L channel values and the ab channel values into an image coloring neural network to obtain ab channel values for coloring;
a coloring module, configured to fuse the ab channel values for coloring with the L channel values to obtain an image coloring result of the image to be colored.
According to a third aspect of the present invention, an image coloring system is provided, comprising a memory, a processor, and a computer program stored on the memory and running on the processor, wherein the processor implements the steps of the method of the first aspect when executing the computer program.
According to a fourth aspect of the present invention, a computer storage medium is provided, on which a computer program is stored, wherein the steps of the method of the first aspect are implemented when the computer program is executed by a computer.
According to the image coloring method, device, system, and computer storage medium of the embodiments of the present invention, the color channel values produced by the neural network are fused with the luminance channel of the image to be colored to color the image, alleviating problems of existing image coloring such as color cast, color bleeding, and missing color, and improving the efficiency and quality of image coloring.
Brief description of the drawings
The above and other objects, features, and advantages of the present invention will become more apparent from the following detailed description of embodiments of the present invention in conjunction with the accompanying drawings. The drawings are provided for a further understanding of the embodiments of the present invention, constitute a part of the specification, and together with the embodiments serve to explain the present invention; they are not to be construed as limiting the invention. In the drawings, identical reference labels generally represent the same components or steps.
Fig. 1 is a schematic block diagram of an exemplary electronic device for implementing the image coloring method and device according to an embodiment of the present invention;
Fig. 2 is a schematic flow chart of an image coloring method according to an embodiment of the present invention;
Fig. 3 is an example of generating a training image pair according to an embodiment of the present invention;
Fig. 4 is an example of training an image coloring neural network according to an embodiment of the present invention;
Fig. 5 is a schematic block diagram of an image coloring device according to an embodiment of the present invention;
Fig. 6 is a schematic block diagram of an image coloring system according to an embodiment of the present invention.
Detailed description of embodiments
In order to make the objects, technical solutions, and advantages of the present invention more apparent, example embodiments according to the present invention are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described herein. Based on the embodiments of the present invention described herein, all other embodiments obtained by those skilled in the art without creative labor should fall within the protection scope of the present invention.
First, an exemplary electronic device 100 for implementing the image coloring method and device of an embodiment of the present invention is described with reference to Fig. 1.
As shown in Fig. 1, the electronic device 100 includes one or more processors 101, one or more storage devices 102, an input device 103, an output device 104, and an image sensor 105, which are interconnected through a bus system 106 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structure of the electronic device 100 shown in Fig. 1 are only exemplary and not limiting; the electronic device may also have other components and structures as needed.
The processor 101 may be a central processing unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 100 to perform desired functions.
The storage device 102 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 101 may run the program instructions to realize the client functionality (implemented by the processor) of the embodiments of the present invention described below and/or other desired functions. Various application programs and various data, such as data used and/or generated by the application programs, may also be stored in the computer-readable storage medium.
The input device 103 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 104 may output various kinds of information (such as images or sounds) to the outside (such as a user), and may include one or more of a display, a loudspeaker, and the like.
The image sensor 105 may capture images desired by the user (such as photos, videos, and the like) and store the captured images in the storage device 102 for use by other components.
Illustratively, the exemplary electronic device for implementing the image coloring method and device according to an embodiment of the present invention may be implemented as a smart phone, a tablet computer, an image acquisition terminal, and the like.
An image coloring method 200 according to an embodiment of the present invention is described next with reference to Fig. 2. As shown in Fig. 2, the image coloring method 200 comprises:
first, in step S210, converting an image to be colored into the Lab color space format and obtaining the L channel values of the image to be colored;
in step S220, obtaining the ab channel values of an image used for coloring or of the image to be colored, wherein the image used for coloring includes content at least partly identical to the image to be colored;
in step S230, inputting the L channel values and the ab channel values into an image coloring neural network to obtain ab channel values for coloring;
finally, in step S240, fusing the ab channel values for coloring with the L channel values to obtain an image coloring result of the image to be colored.
Here, the image to be colored refers to an image that needs coloring (for example, an image with little color information, specifically an image whose color information is below a predetermined proportion), and the image used for coloring refers to an image that provides color information for coloring the image to be colored. Color information such as ab channel values is obtained from the image used for coloring and input, together with the L channel values of the image to be colored, into a trained deep-learning-based image coloring neural network, which outputs the corresponding ab channel values used to color the image to be colored; that is, the ab channel values output by the image coloring neural network are fused with the L channel values of the image to be colored to obtain the colored image as the image coloring result. On the basis of an image used for coloring that is partly identical to the image to be colored, the color information used to color the image to be colored is obtained by the deep-learning-based image coloring neural network; compared with traditional image coloring methods, this alleviates problems such as color cast, color bleeding, and missing color, and makes the colored image look more natural to the naked eye.
Further, the image coloring method according to the embodiment of the present invention can color the image to be colored based on the image itself, or based on a partly identical image; that is, the image to be colored and the image used for coloring may be two pictures that differ somewhat or whose pixels are not aligned. In other words, the image coloring method according to the embodiment of the present invention can solve not only the coloring problem of a single image but also the coloring problem between two different images, is suitable for wide use in various scenarios, avoids color cast, improves the coloring quality, and reduces cost.
It will be appreciated that the above order of step S210 and step S220 is merely exemplary and does not mean that step S210 and step S220 must be performed in that order; step S220 may be performed before, simultaneously with, or after step S210, and no restriction is imposed here.
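As an illustration only, the flow of steps S210 to S240 can be sketched in Python; the use of OpenCV for the Lab conversion and the `coloring_net` callable are assumptions made for the sketch, not part of the claimed method:

```python
# A minimal sketch of steps S210-S240; "coloring_net" stands in for the trained
# image coloring neural network and is assumed to return ab channels aligned with
# the image to be colored.
import cv2
import numpy as np

def color_image(image_to_color_bgr, image_for_coloring_bgr, coloring_net):
    # S210: convert the image to be colored to Lab and take its L channel.
    lab_target = cv2.cvtColor(image_to_color_bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2Lab)
    L = lab_target[..., 0]                      # L in [0, 100]

    # S220: take the ab channels of the image used for coloring (or of the image itself).
    lab_ref = cv2.cvtColor(image_for_coloring_bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2Lab)
    ab_ref = lab_ref[..., 1:]                   # a and b, roughly in [-127, 127]

    # S230: feed L and the reference ab values to the image coloring neural network.
    ab_pred = coloring_net(L, ab_ref)           # shape (H, W, 2)

    # S240: fuse the predicted ab channels with the original L channel.
    lab_out = np.dstack([L, ab_pred[..., 0], ab_pred[..., 1]])
    return cv2.cvtColor(lab_out, cv2.COLOR_Lab2BGR)   # float BGR in [0, 1]
```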
Illustratively, the image coloring method according to the embodiment of the present invention may be implemented in a device, apparatus, or system having a memory and a processor.
The image coloring method according to the embodiment of the present invention may be deployed at an image acquisition terminal; for example, it may be deployed at a camera, or at a personal terminal with a camera (such as a smart phone, a tablet computer, or a personal computer). For example, image data may be acquired at the image acquisition terminal or the personal terminal with a camera as the image to be colored; based on the acquired image or an image used for coloring obtained from another source, the ab channel values used to color the acquired image are obtained by the image coloring neural network, the acquired image is colored using the ab channel values, and the corresponding coloring result is obtained.
Alternatively, the image coloring method according to the embodiment of the present invention may also be deployed in a distributed manner at an image acquisition terminal and a personal terminal, the personal terminal being, for example, a smart phone, a tablet computer, or a personal computer. For example, image data may be acquired at the image acquisition terminal as the image to be colored and sent to the personal terminal; based on the acquired image or an image used for coloring obtained from another source, the personal terminal obtains, via the image coloring neural network, the ab channel values used to color the acquired image, colors the acquired image using the ab channel values, and obtains the corresponding coloring result.
Alternatively, the image coloring method according to the embodiment of the present invention may also be deployed in a distributed manner at a server (or cloud) and an image acquisition terminal. For example, image data may be acquired at the image acquisition terminal as the image to be colored and transmitted to the server (or cloud); then, based on the acquired image or an image used for coloring obtained from another source, the server (or cloud) obtains, via the image coloring neural network, the ab channel values used to color the acquired image, colors the acquired image using the ab channel values, and obtains the corresponding coloring result.
According to the image coloring method of the embodiment of the present invention, the color channel values produced by the neural network are fused with the luminance channel of the image to be colored to color the image, alleviating problems of existing image coloring such as color cast, color bleeding, and missing color, and improving the efficiency and quality of image coloring.
According to an embodiment of the present invention, in step S210, the Lab color space format includes an L channel, an a channel, and a b channel. The L channel value denotes the lightness of a pixel in the image, with a value range of [0, 100] from black to pure white; the a channel value denotes the color information of a pixel in the range from red to green, with a value range of [127, -128] from red to green; and the b channel value denotes the color information of a pixel in the range from yellow to blue, with a value range of [127, -128] from yellow to blue. The L channel values of the image to be colored denote the lightness of the pixels in the image to be colored.
Illustratively, the image to be colored may be a real-time image directly collected by an image acquisition device, or an image acquired from a local or remote data source.
Illustratively, the image to be colored may also be each frame of real-time or non-real-time video data. When the video data needs to be colored, the colored video data can be obtained after each frame of the video data is colored by the image coloring method of the embodiment of the present invention.
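As a hedged illustration of coloring video data frame by frame, the sketch below reuses the `color_image` helper from the earlier sketch; the codec, file handling, and the fixed reference image are assumptions:

```python
# A sketch of per-frame video coloring; output format and codec are illustrative only.
import cv2

def color_video(in_path, out_path, image_for_coloring_bgr, coloring_net):
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        colored = color_image(frame, image_for_coloring_bgr, coloring_net)  # float BGR in [0, 1]
        writer.write((colored * 255).clip(0, 255).astype("uint8"))
    cap.release()
    writer.release()
```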
According to an embodiment of the present invention, in step S220, obtaining the ab channel values of the image used for coloring may further comprise: converting the image used for coloring into the Lab color space format to obtain the ab channel values of the image used for coloring.
It will be appreciated that the conversion of the image used for coloring into the Lab color space format may be performed locally, with the ab channel values of the image used for coloring obtained directly, or it may be performed at a remote end, with the ab channel values of the image used for coloring acquired remotely; no restriction is imposed here.
Illustratively, the image used for coloring includes a part that is identical to at least a preset proportion of the image to be colored. The image used for coloring and the image to be colored may differ, but an identical part must exist to guarantee the effectiveness and accuracy of the coloring.
In one embodiment, in step S220, the ab channel values of the image to be colored are obtained when the image to be colored is colored based on itself, that is, in the coloring of a single image. In that case, obtaining the ab channel values of the image to be colored may be performed simultaneously with step S210: converting the image to be colored into the Lab color space format yields both the ab channel values and the L channel values of the image to be colored.
According to an embodiment of the present invention, in step S230, the image coloring neural network may be a neural network trained as follows: two images that differ somewhat, or that do not differ, are simulated by random cropping; the ab channel values of one image and the L channel values of the other image are used as input layer data, the ab channel values of that other image are used as output layer data, and the neural network is trained, yielding the image coloring neural network. Using multi-scale random cropping, the image coloring neural network can directly realize end-to-end coloring of images of different resolutions, without data annotation during training and without dedicated data acquisition; the training process is realized through simulated data, and the coloring quality can be significantly improved without increasing cost.
Optionally, the image coloring neural network includes a convolutional neural network. Since the image coloring method according to the embodiment of the present invention is for image processing, the image coloring neural network may adopt a neural network structure with image input and image output. Convolutional neural networks work well for image processing, so the image coloring neural network may be obtained based on different convolutional neural network (CNN) architectures, including but not limited to LeNet-5, AlexNet, ZF Net, VGGNet, Inception, ResNet, and the like.
Optionally, the image coloring neural network may include an encoder (encode) neural network module and a decoder (decode) neural network module. Any number and any combination of the convolutional neural network architectures that can realize an encoding function may be combined into the encoder neural network module, and any number and any combination of the neural network architectures that can realize a decoding function may be combined into the decoder neural network module.
In one embodiment, the image coloring neural network may include an encoder neural network module and a decoder neural network module, where the encoder neural network module includes five connected ResNet neural network modules, the decoder neural network module includes a deconvolution neural network, and the five connected ResNet neural network modules are connected to the deconvolution neural network. In one embodiment, the image coloring neural network includes a fully convolutional neural network.
It will be appreciated that the present invention is not limited by the specific convolutional neural network architecture used; whether an existing convolutional neural network architecture or one developed in the future, it can be applied in the image coloring method according to the embodiment of the present invention when used for image processing, and should also be included within the protection scope of the present invention.
According to an embodiment of the present invention, the method 200 further includes:
performing random-size cropping on a training image and then performing offset cropping to obtain a training image pair, the training image pair including an at least partly identical first training image and second training image;
training a neural network using the L channel values of the first training image and the sampled ab channel values of the second training image, together with the ab channel values of the first training image, to obtain the image coloring neural network.
The training images may come from a conventional image data set, such as the ImageNet data set, or from image data collected on the Internet. The training images are preferably chosen to cover as many scenes and object categories as possible to guarantee the richness of the data, which helps improve the accuracy of the image coloring neural network. Specific images may also be selected for training according to the application scenario, such as natural landscapes or buildings; for example, if images of a specific scene need to be colored, neural network training may be carried out specifically for that scene, and the resulting image coloring neural network is better suited to coloring images of that scene.
Illustratively, performing random-size cropping on a training image and then performing offset cropping to obtain a training image pair, the training image pair including an at least partly identical first training image and second training image, comprises:
randomly obtaining a crop size based on a predetermined size range;
cropping the training image with the crop size to obtain a randomly cropped training image;
randomly selecting an offset position in the cropped training image, and performing offset cropping on the cropped training image based on the offset position, to obtain the first training image and the second training image.
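The random-size cropping step can be sketched as follows; the lower bound of the predetermined size range is an illustrative assumption:

```python
# A sketch of random-size cropping: draw a crop size at random within a predetermined range
# (never exceeding the original image) and cut a random window of that size.
import numpy as np

def random_size_crop(img, min_size=128):
    h, w = img.shape[:2]
    crop_h = np.random.randint(min_size, h + 1)   # predetermined size range: [min_size, image size]
    crop_w = np.random.randint(min_size, w + 1)
    top = np.random.randint(0, h - crop_h + 1)
    left = np.random.randint(0, w - crop_w + 1)
    return img[top:top + crop_h, left:left + crop_w]
```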
The predetermined size range may be set as needed, but should not exceed the size of the original training data so as to avoid introducing unnecessary noise: if the selected crop size is larger than the original training data, the cropped data will have extra blank borders and similar noise compared with the original training data, which affects the precision of the image coloring neural network. Randomly selecting the crop size within a certain size range allows the neural network to recognize object features of different scales during training, so that stable results are obtained for input pictures of different resolutions.
For the case of a specific scene, the target scene or target object in the training image can be detected with an object detection method to obtain a target region, and the predetermined size range is set to at least include the target region but not exceed the size of the original training image. The training image is then cropped with a random size centered on the target region to obtain the cropped training image, which then contains the specific scene; this helps improve the effect of the image coloring neural network on the coloring of the specific scene.
Illustratively, randomly obtaining a crop size based on a predetermined size range may include:
detecting the specific scene region in the training image by an object detection method, the predetermined size range being larger than the size of the specific scene region and smaller than or equal to the size of the training image;
randomly selecting the crop size within the predetermined size range.
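For the specific-scene case, the crop-size selection described above can be sketched as follows; the detector is abstracted away, and the (x, y, w, h) bounding-box format is an assumption:

```python
# A sketch of scene-centered random cropping: the crop size is drawn between the size of the
# detected scene region and the size of the training image, and the crop is centered on that region.
import numpy as np

def scene_centered_crop(img, scene_box):
    img_h, img_w = img.shape[:2]
    x, y, box_w, box_h = scene_box                    # detected region, assumed (x, y, w, h)
    crop_h = np.random.randint(box_h, img_h + 1)      # predetermined range: [region size, image size]
    crop_w = np.random.randint(box_w, img_w + 1)
    cy, cx = y + box_h // 2, x + box_w // 2           # center the crop on the detected region
    top = int(np.clip(cy - crop_h // 2, 0, img_h - crop_h))
    left = int(np.clip(cx - crop_w // 2, 0, img_w - crop_w))
    return img[top:top + crop_h, left:left + crop_w]
```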
It should be appreciated that the present invention is not limited by the specific object detection method used; whether an existing object detection method or one developed in the future, it can be applied in the image coloring method according to the embodiment of the present invention, and should also be included within the protection scope of the present invention.
Illustratively, the offset position includes the position of the first training image and the position of the second training image.
Illustratively, the offset position includes the position of the first training image and the relative position of the second training image with respect to the first training image.
Illustratively, randomly selecting an offset position in the cropped training image, and performing offset cropping on the cropped training image based on the offset position to obtain the first training image and the second training image, comprises:
randomly selecting a first region and a second region based on the cropped training image, the first region and the second region having an overlapping region of at least a predefined size;
cropping the cropped training image according to the first region and the second region respectively, to obtain the first training image and the second training image.
Illustratively, randomly selecting an offset position in the cropped training image, and performing offset cropping on the cropped training image based on the offset position to obtain the first training image and the second training image, comprises:
randomly selecting a first region based on the cropped training image and cropping it to obtain the first training image;
randomly translating the first region to obtain a second region, the first region and the second region having an overlapping region of at least a predefined size;
cropping the cropped training image according to the second region to obtain the second training image.
Illustratively, the random translation includes random movement in the horizontal and/or vertical direction.
It will be appreciated that the predefined size can be set as needed; no restriction is imposed here.
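The offset-cropping variant based on random translation can be sketched as follows; the window size and the maximum shift are illustrative assumptions:

```python
# A sketch of offset cropping by random translation: a first region is cut at random, then shifted
# by a random horizontal/vertical offset to produce the second region, keeping a minimum overlap.
import numpy as np

def offset_crop_pair(cropped_img, win=96, max_shift=24):
    h, w = cropped_img.shape[:2]
    top = np.random.randint(0, h - win + 1)
    left = np.random.randint(0, w - win + 1)
    first = cropped_img[top:top + win, left:left + win]

    # Limiting the shift guarantees an overlapping region of at least (win - max_shift)^2 pixels.
    dy = np.random.randint(-max_shift, max_shift + 1)
    dx = np.random.randint(-max_shift, max_shift + 1)
    top2 = int(np.clip(top + dy, 0, h - win))
    left2 = int(np.clip(left + dx, 0, w - win))
    second = cropped_img[top2:top2 + win, left2:left2 + win]
    return first, second
```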
After the random-size cropping of the training image, the greater the randomness of the two crops in the offset cropping, the more robust the trained image coloring neural network. In addition, through the two random crops performed in different ways, the acquired training data is cropped into training images of smaller random sizes, which reduces the computational cost of training while still giving good coloring results on images of normal resolution.
Since both of the above crops use random sizes, the sizes of the first and second training images obtained from different training images may differ. In that case, the training images obtained from different training images can only be used to train the neural network separately, updating the weight coefficients of the neural network after each training pass. To increase the training speed of the image coloring neural network, the training images obtained from different training images can be processed so that their sizes are all the same, allowing mini-batch training and improving the convergence speed and training efficiency of the image coloring neural network.
Illustratively, the method further includes: scaling the first training image and the second training image to a preset size.
Referring to Fig. 3, Fig. 3 shows an example of generating a training image pair according to an embodiment of the present invention. As shown in Fig. 3, performing random-size cropping on a training image and then performing offset cropping to obtain a training image pair, the training image pair including an at least partly identical first training image and second training image, comprises:
in step S310, obtaining a training image;
in step S320, randomly selecting a crop size based on a predetermined size range;
in step S330, cropping the training image with the crop size to obtain a randomly cropped training image;
in step S340, randomly selecting an offset position in the cropped training image, and performing offset cropping on the cropped training image based on the offset position, to obtain the first training image and the second training image;
in step S350, scaling the first training image and the second training image to a preset size.
Illustratively, training the neural network using the L channel values of the first training image and the sampled ab channel values of the second training image, together with the ab channel values of the first training image, includes:
selecting a preset number of training image pairs to train the neural network in batches, updating the weight coefficients of the neural network after each training pass.
Illustratively, the method further includes: down-sampling the ab channel values of the second training image to obtain the sampled ab channel values of the second training image. The down-sampling may be based on a certain sampling rule, including but not limited to uniform sampling; the down-sampling rule should simulate, as closely as possible, a distribution similar to that of the (small amount of) color information available at application time.
Referring to Fig. 4, Fig. 4 shows an example of training the image coloring neural network according to an embodiment of the present invention. As shown in Fig. 4, training the neural network using the L channel values of the first training image and the sampled ab channel values of the second training image, together with the ab channel values of the first training image, includes:
converting the first training image 410 and the second training image 420 into the Lab color space format;
extracting the L channel values of the first training image 410, and extracting the ab channel values of the second training image 420 and down-sampling them to obtain the sampled ab channel values of the second training image 420;
training the neural network with the L channel values of the first training image 410 and the sampled ab channel values of the second training image 420 as input layer data, and the ab channel values of the first training image 410 as output layer data.
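One training iteration of this scheme might be sketched in PyTorch as follows; the concatenation of the inputs, the L1 loss, and the optimizer interface are assumptions, as the patent does not specify them:

```python
# A hedged sketch of one training step: the L channel of the first training image and the sampled
# ab channels of the second are the input layer data, and the full ab channels of the first
# training image are the regression target (output layer data).
import torch
import torch.nn.functional as F

def train_step(net, optimizer, L_first, ab_second_sparse, ab_first_target):
    """L_first: (N,1,H,W); ab_second_sparse: (N,2,H,W); ab_first_target: (N,2,H,W)."""
    optimizer.zero_grad()
    inputs = torch.cat([L_first, ab_second_sparse], dim=1)   # input layer data
    ab_pred = net(inputs)                                     # predicted ab channel values for coloring
    loss = F.l1_loss(ab_pred, ab_first_target)                # compare against the target ab channels
    loss.backward()
    optimizer.step()                                          # update the weight coefficients
    return loss.item()
```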
In one embodiment, the image coloring method is further illustrated by taking the coloring of a single image as an example. The method includes:
first, converting the image to be colored into the Lab color space format to obtain the L channel values and the ab channel values of the image to be colored;
second, inputting the L channel values and the ab channel values into the image coloring neural network to obtain the ab channel values for coloring;
finally, fusing the ab channel values for coloring with the L channel values to obtain the image coloring result of the image to be colored.
In one embodiment, the image coloring method is further illustrated by taking the coloring between two somewhat different images A and B as an example. The method includes:
first, converting the image A to be colored into the Lab color space format to obtain the L channel values of the image A to be colored;
second, obtaining the ab channel values of the image B used for coloring, the image B used for coloring including content at least partly identical to the image A to be colored;
then, inputting the L channel values of the image A to be colored and the ab channel values of the image B used for coloring into the image coloring neural network to obtain the ab channel values for coloring;
finally, fusing the ab channel values for coloring with the L channel values of the image A to be colored to obtain the image coloring result of the image A to be colored.
It follows that, according to the image coloring method of the embodiment of the present invention, the color channel values produced by the neural network are fused with the luminance channel of the image to be colored to color the image, alleviating problems of existing image coloring such as color cast, color bleeding, and missing color, and improving the efficiency and quality of image coloring.
Fig. 5 shows a schematic block diagram of an image coloring device 500 according to an embodiment of the present invention. As shown in Fig. 5, the image coloring device 500 according to an embodiment of the present invention includes:
a to-be-colored image module 510, configured to convert an image to be colored into the Lab color space format and obtain the L channel values of the image to be colored;
a coloring-image module 520, configured to obtain the ab channel values of an image used for coloring or of the image to be colored, wherein the image used for coloring includes content at least partly identical to the image to be colored;
a computing module 530, configured to input the L channel values and the ab channel values into an image coloring neural network to obtain ab channel values for coloring;
a coloring module 540, configured to fuse the ab channel values for coloring with the L channel values to obtain an image coloring result of the image to be colored.
Here, the image to be colored refers to an image that needs coloring (for example, an image with little color information, specifically an image whose color information is below a predetermined proportion), and the image used for coloring refers to an image that provides color information for coloring the image to be colored. Color information such as ab channel values is obtained from the image used for coloring and input, together with the L channel values of the image to be colored, into the trained deep-learning-based image coloring neural network, which outputs the corresponding ab channel values used to color the image to be colored; that is, the ab channel values output by the image coloring neural network are fused with the L channel values of the image to be colored to obtain the colored image as the image coloring result. On the basis of an image used for coloring that is partly identical to the image to be colored, the color information used to color the image to be colored is obtained by the deep-learning-based image coloring neural network; the image coloring device 500 of the embodiment of the present invention can thereby alleviate problems such as color cast, color bleeding, and missing color, and make the colored image look more natural to the naked eye.
Further, the image coloring device 500 of the embodiment of the present invention can color the image to be colored based on the image itself, or based on a partly identical image; that is, the image to be colored and the image used for coloring may be two pictures that differ somewhat or whose pixels are not aligned. In other words, the image coloring device 500 according to the embodiment of the present invention can solve not only the coloring problem of a single image but also the coloring problem between two different images, is suitable for wide use in various scenarios, avoids color cast, improves the coloring quality, and reduces cost.
According to an embodiment of the present invention, the to-be-colored image module 510 can convert the image to be colored into the Lab color space format to obtain the L channel values of the image to be colored. The Lab color space format includes an L channel, an a channel, and a b channel, where the L channel value denotes the lightness of a pixel in the image, with a value range of [0, 100] from black to pure white; the a channel value denotes the color information of a pixel in the range from red to green, with a value range of [127, -128] from red to green; and the b channel value denotes the color information of a pixel in the range from yellow to blue, with a value range of [127, -128] from yellow to blue. The L channel values of the image to be colored denote the lightness of the pixels in the image to be colored.
Illustratively, the image to be colored may be a real-time image directly collected by an image acquisition device, or an image acquired from a local or remote data source.
Illustratively, the image to be colored may also be each frame of real-time or non-real-time video data. When the video data needs to be colored, the colored video data can be obtained after each frame of the video data is colored by the image coloring method of the embodiment of the present invention.
According to an embodiment of the present invention, the coloring-image module 520 can be further configured to convert the image used for coloring into the Lab color space format to obtain the ab channel values of the image used for coloring.
It will be appreciated that the conversion of the image used for coloring into the Lab color space format may be performed locally, with the coloring-image module 520 obtaining the ab channel values of the image used for coloring directly, or it may be performed at a remote end, with the coloring-image module 520 acquiring the ab channel values of the image used for coloring remotely; no restriction is imposed here.
Illustratively, the image used for coloring includes a part that is identical to at least a preset proportion of the image to be colored. The image used for coloring and the image to be colored may differ, but an identical part must exist to guarantee the effectiveness and accuracy of the coloring.
In one embodiment, the coloring-image module 520 is used to color the image to be colored based on the image itself, that is, for the coloring of a single image. In that case, the coloring-image module 520 may be omitted, or the coloring-image module 520 and the to-be-colored image module 510 may be the same module.
In one embodiment, the image coloring device 500 is further illustrated by taking the coloring of a single image as an example:
first, the to-be-colored image module 510 converts the image to be colored into the Lab color space format to obtain the L channel values and the ab channel values of the image to be colored;
second, the computing module 530 inputs the L channel values and the ab channel values into the image coloring neural network to obtain the ab channel values for coloring;
finally, the coloring module 540 fuses the ab channel values for coloring with the L channel values to obtain the image coloring result of the image to be colored.
In one embodiment, the image coloring device 500 is further illustrated by taking the coloring between two somewhat different images A and B as an example:
first, the to-be-colored image module 510 converts the image A to be colored into the Lab color space format to obtain the L channel values of the image A to be colored;
second, the coloring-image module 520 obtains the ab channel values of the image B used for coloring, the image B used for coloring including content at least partly identical to the image A to be colored;
then, the computing module 530 inputs the L channel values of the image A to be colored and the ab channel values of the image B used for coloring into the image coloring neural network to obtain the ab channel values for coloring;
finally, the coloring module 540 fuses the ab channel values for coloring with the L channel values of the image A to be colored to obtain the image coloring result of the image A to be colored.
According to an embodiment of the present invention, the computing module 530 may further include a model module 531, and the model module 531 includes the image coloring neural network.
The image coloring neural network is a neural network trained as follows: two images that differ somewhat, or that do not differ, are simulated by random cropping; the ab channel values of one image and the L channel values of the other image are used as input layer data, the ab channel values of that other image are used as output layer data, and the neural network is trained. Using multi-scale random cropping, the image coloring neural network can directly realize end-to-end coloring of images of different resolutions, without data annotation during training and without dedicated data acquisition; the training process is realized through simulated data, and the coloring quality can be significantly improved without increasing cost.
Optionally, the image coloring neural network includes a convolutional neural network. In one embodiment, the image coloring neural network includes a fully convolutional neural network.
According to an embodiment of the present invention, the model module 531 is configured to:
perform random-size cropping on a training image and then perform offset cropping to obtain a training image pair, the training image pair including an at least partly identical first training image and second training image;
train a neural network using the L channel values of the first training image and the sampled ab channel values of the second training image, together with the ab channel values of the first training image, to obtain the image coloring neural network.
The training images may come from a conventional image data set, such as the ImageNet data set, or from image data collected on the Internet. The training images are preferably chosen to cover as many scenes and object categories as possible to guarantee the richness of the data, which helps improve the accuracy of the image coloring neural network. Specific images may also be selected for training according to the application scenario, such as natural landscapes or buildings; for example, if images of a specific scene need to be colored, neural network training may be carried out specifically for that scene, and the resulting image coloring neural network is better suited to coloring images of that scene.
Illustratively, the model module 531 further includes:
a random cropping module 5311, configured to randomly obtain a crop size based on a predetermined size range and crop the training image with the crop size to obtain a randomly cropped training image;
an offset cropping module 5312, configured to randomly select an offset position in the cropped training image and perform offset cropping on the cropped training image based on the offset position, to obtain the first training image and the second training image.
The predetermined size range may be set as needed, but should not exceed the size of the original training data so as to avoid introducing unnecessary noise: if the selected crop size is larger than the original training data, the cropped data will have extra blank borders and similar noise compared with the original training data, which affects the precision of the image coloring neural network. Randomly selecting the crop size within a certain size range allows the neural network to recognize object features of different scales during training, so that stable results are obtained for input pictures of different resolutions.
For the case of a specific scene, the model module 531 can detect the target scene or target object in the training image with an object detection method to obtain a target region, and the predetermined size range is set to at least include the target region but not exceed the size of the original training image. The training image is then cropped with a random size centered on the target region to obtain the cropped training image, which then contains the specific scene; this helps improve the effect of the image coloring neural network on the coloring of the specific scene.
Illustratively, the random cropping module 5311 can be configured to:
detect the specific scene region in the training image by an object detection method, the predetermined size range being larger than the size of the specific scene region and smaller than or equal to the size of the training image;
randomly select the crop size within the predetermined size range.
It should be appreciated that the present invention is not limited by the specific object detection method used; whether an existing object detection method or one developed in the future, it can be applied in the image coloring method according to the embodiment of the present invention, and should also be included within the protection scope of the present invention.
Illustratively, the offset position includes the position of the first training image and the position of the second training image.
Illustratively, the offset position includes the position of the first training image and the relative position of the second training image with respect to the first training image.
Illustratively, the offset cropping module 5312 is also configured to:
randomly select a first region and a second region based on the cropped training image, the first region and the second region having an overlapping region of at least a predefined size;
crop the cropped training image according to the first region and the second region respectively, to obtain the first training image and the second training image.
Illustratively, the offset cropping module 5312 is also configured to:
randomly select a first region based on the cropped training image and crop it to obtain the first training image;
randomly translate the first region to obtain a second region, the first region and the second region having an overlapping region of at least a predefined size;
crop the cropped training image according to the second region to obtain the second training image.
Illustratively, the random translation includes random movement in the horizontal and/or vertical direction.
It will be appreciated that the predefined size can be set as needed; no restriction is imposed here.
After the random-size cropping of the training image, the greater the randomness of the two crops in the offset cropping, the more robust the trained image coloring neural network. In addition, through the two random crops performed in different ways, the acquired training data is cropped into training images of smaller random sizes, which reduces the computational cost of training while still giving good coloring results on images of normal resolution.
Since both of the above crops use random sizes, the sizes of the first and second training images obtained from different training images may differ. In that case, the training images obtained from different training images can only be used to train the neural network separately, updating the weight coefficients of the neural network after each training pass. To increase the training speed of the image coloring neural network, the training images obtained from different training images can be processed so that their sizes are all the same, allowing mini-batch training and improving the convergence speed and training efficiency of the image coloring neural network.
Illustratively, the model module 531 further includes a scaling module 5313, configured to scale the first training image and the second training image to a preset size.
Illustratively, the model module 531 can also be configured to:
select a preset number of training image pairs to train the neural network in batches, updating the weight coefficients of the neural network after each training pass.
Illustratively, the model module 531 further includes a sampling module 5314, configured to down-sample the ab channel values of the second training image to obtain the sampled ab channel values of the second training image. The down-sampling may be based on a certain sampling rule, including but not limited to uniform sampling; the down-sampling rule should simulate, as closely as possible, a distribution similar to that of the (small amount of) color information available at application time.
It follows that, according to the image coloring device of the embodiment of the present invention, the color channel values produced by the neural network are fused with the luminance channel of the image to be colored to color the image, alleviating problems of existing image coloring such as color cast, color bleeding, and missing color, and improving the efficiency and quality of image coloring.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the scope of the present invention.
Fig. 6 shows a schematic block diagram of an image coloring system 600 according to an embodiment of the present invention. The image coloring system 600 includes an image sensor 610, a storage device 620 and a processor 630.
The image sensor 610 is used to acquire image data.
The storage device 620 stores program code for realizing the corresponding steps of the image coloring method according to an embodiment of the present invention.
The processor 630 is used to run the program code stored in the storage device 620, so as to execute the corresponding steps of the image coloring method according to an embodiment of the present invention, and to realize the image-to-be-colored module 510, the coloring-image module 520, the computing module 530 and the coloring module 540 of the image coloring device according to an embodiment of the present invention.
In addition, according to an embodiment of the present invention, a storage medium is further provided, on which program instructions are stored. When the program instructions are run by a computer or a processor, they are used to execute the corresponding steps of the image coloring method of the embodiment of the present invention, and to realize the corresponding modules of the image coloring device according to an embodiment of the present invention. The storage medium may include, for example, the memory card of a smart phone, the storage unit of a tablet computer, the hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media; for example, one computer-readable storage medium contains computer-readable program code for randomly generating action instruction sequences, and another computer-readable storage medium contains computer-readable program code for performing image coloring.
In one embodiment, the computer program instructions, when run by a computer, may realize the functional modules of the image coloring device according to an embodiment of the present invention and/or may execute the image coloring method according to an embodiment of the present invention.
Each module of the image coloring system according to an embodiment of the present invention may be realized by the processor of an electronic device for image coloring according to an embodiment of the present invention running computer program instructions stored in a memory, or may be realized when computer instructions stored in the computer-readable storage medium of a computer program product according to an embodiment of the present invention are run by a computer.
With the image coloring method, device, system and storage medium according to the embodiments of the present invention, the color channel values obtained through the neural network are merged with the luminance channel of the image to be colored to realize image coloring, which alleviates the color cast, color leakage and missing color problems of existing image coloring and improves the efficiency and the coloring effect of image coloring.
Although the example embodiments have been described herein with reference to the accompanying drawings, it should be understood that the above example embodiments are merely exemplary and are not intended to limit the scope of the present invention thereto. Those of ordinary skill in the art may make various changes and modifications therein without departing from the scope and spirit of the present invention. All such changes and modifications are intended to fall within the scope of the present invention as claimed in the appended claims.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division of the units is only a division by logical function, and other division manners are possible in actual implementation, for example multiple units or components may be combined or integrated into another device, or some features may be ignored or not executed.
Numerous specific details are set forth in the description provided here. It is to be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the present disclosure and to aid the understanding of one or more of the various inventive aspects, in the description of exemplary embodiments of the present invention the various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. However, the disclosed method is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the corresponding claims reflect, the inventive aspect lies in solving the corresponding technical problem with fewer than all features of any single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will understand that, except where features are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings), and all processes or units of any method or device so disclosed, may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, an equivalent or a similar purpose.
In addition, those skilled in the art will appreciate that, although some embodiments described herein include certain features that are included in other embodiments and not others, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that, in practice, a microprocessor or a digital signal processor (DSP) may be used to realize some or all of the functions of some modules of the image coloring device according to an embodiment of the present invention. The present invention may also be implemented as a program of a device (for example, a computer program and a computer program product) for executing part or all of the method described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such a signal may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-described embodiments illustrate rather than limit the present invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any ordering; these words may be interpreted as names.
The above is merely a specific embodiment or an explanation of specific embodiments, and the protection scope of the present invention is not limited thereto. Any changes or substitutions that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. An image coloring method, characterized in that the method comprises:
converting an image to be colored into the Lab color space format to obtain the L channel value of the image to be colored;
obtaining the ab channel value of an image for coloring or of the image to be colored, wherein the image for coloring includes an image at least partly identical to the image to be colored;
inputting the L channel value and the ab channel value into an image coloring neural network to obtain an ab channel value for coloring;
merging the ab channel value for coloring with the L channel value to obtain the image coloring result of the image to be colored.
2. The method according to claim 1, characterized in that the method further comprises:
performing random-size cutting on a training image and then performing offset cutting to obtain a training image pair, the training image pair including a first training image and a second training image that are at least partly identical;
training the neural network with the L channel value of the first training image, the ab channel sample value of the second training image and the ab channel value of the first training image to obtain the image coloring neural network.
3. The method according to claim 2, characterized in that performing random-size cutting on the training image and then performing offset cutting to obtain the training image pair, the training image pair including the first training image and the second training image that are at least partly identical, comprises:
obtaining a cut size at random based on a predetermined size range;
randomly cropping the training image with the cut size to obtain a cut training image;
randomly selecting an offset position in the cut training image, and performing offset cutting on the cut training image based on the offset position to obtain the first training image and the second training image.
4. The method according to claim 3, characterized in that the method further comprises: scaling the first training image and the second training image to a preset size.
5. The method according to claim 4, characterized in that training the neural network with the L channel value of the first training image, the ab channel sample value of the second training image and the ab channel value of the first training image comprises:
selecting a preset number of training images to train the neural network in batches, and updating the weight coefficients of the neural network after each training pass.
6. The method according to claim 3, characterized in that the method further comprises: down-sampling the ab channel value of the second training image to obtain the ab channel sample value of the second training image.
7. The method according to claim 1, characterized in that the image coloring neural network comprises a convolutional neural network.
8. An image coloring device, characterized in that the device comprises:
an image-to-be-colored module, configured to convert an image to be colored into the Lab color space format to obtain the L channel value of the image to be colored;
a coloring-image module, configured to obtain the ab channel value of an image for coloring or of the image to be colored, wherein the image for coloring includes an image at least partly identical to the image to be colored;
a computing module, configured to input the L channel value and the ab channel value into an image coloring neural network to obtain an ab channel value for coloring;
a coloring module, configured to merge the ab channel value for coloring with the L channel value to obtain the image coloring result of the image to be colored.
9. An image coloring system, comprising a memory, a processor and a computer program stored on the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
10. A computer storage medium on which a computer program is stored, characterized in that the computer program, when executed by a computer, implements the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910702526.7A CN110533740A (en) | 2019-07-31 | 2019-07-31 | A kind of image rendering methods, device, system and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910702526.7A CN110533740A (en) | 2019-07-31 | 2019-07-31 | A kind of image rendering methods, device, system and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110533740A true CN110533740A (en) | 2019-12-03 |
Family
ID=68661731
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910702526.7A Pending CN110533740A (en) | 2019-07-31 | 2019-07-31 | A kind of image rendering methods, device, system and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110533740A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106855996A (en) * | 2016-12-13 | 2017-06-16 | 中山大学 | A kind of gray scale image color method and its device based on convolutional neural networks |
CN109754444A (en) * | 2018-02-07 | 2019-05-14 | 京东方科技集团股份有限公司 | Image rendering methods and device |
CN108830912A (en) * | 2018-05-04 | 2018-11-16 | 北京航空航天大学 | A kind of interactive grayscale image color method of depth characteristic confrontation type study |
CN108921932A (en) * | 2018-06-28 | 2018-11-30 | 福州大学 | A method of the black and white personage picture based on convolutional neural networks generates various reasonable coloring in real time |
Non-Patent Citations (2)
Title |
---|
Zhang Na et al.: "Grayscale image colorization algorithm based on dense neural networks", Journal of Computer Applications *
Lin Jiajun et al.: "Colorization of grayscale images of complex scenes based on pixel-level generative adversarial networks", Journal of Computer-Aided Design & Computer Graphics *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111738186A (en) * | 2020-06-28 | 2020-10-02 | 香港中文大学(深圳) | Target positioning method and device, electronic equipment and readable storage medium |
CN111738186B (en) * | 2020-06-28 | 2024-02-02 | 香港中文大学(深圳) | Target positioning method, target positioning device, electronic equipment and readable storage medium |
CN112907497A (en) * | 2021-03-19 | 2021-06-04 | 苏州科达科技股份有限公司 | Image fusion method and image fusion device |
CN113313843A (en) * | 2021-06-18 | 2021-08-27 | 熵基科技股份有限公司 | Security check image coloring method and device, storage medium and computer equipment |
CN115131447A (en) * | 2022-01-14 | 2022-09-30 | 长城汽车股份有限公司 | Image coloring method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20191203