
CN111553961B - Method and device for acquiring line manuscript corresponding color map, storage medium and electronic device - Google Patents

Method and device for acquiring line manuscript corresponding color map, storage medium and electronic device

Info

Publication number
CN111553961B
CN111553961B (application CN202010347093.0A)
Authority
CN
China
Prior art keywords
feature
image
matching network
characteristic
feature image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010347093.0A
Other languages
Chinese (zh)
Other versions
CN111553961A (en)
Inventor
张骞
王波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202010347093.0A priority Critical patent/CN111553961B/en
Publication of CN111553961A publication Critical patent/CN111553961A/en
Application granted granted Critical
Publication of CN111553961B publication Critical patent/CN111553961B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method and a device for acquiring a color map corresponding to a line manuscript, a storage medium and an electronic device. The method includes: respectively extracting a first feature image of a first line manuscript, a second feature image of a second line manuscript and a third feature image of a first color map, wherein the first line manuscript and the second line manuscript have an association relationship, and the first color map is the color map of the first line manuscript; acquiring a fourth feature image according to the correlation degree between the first feature image and the second feature image, and the third feature image, wherein the fourth feature image is the feature image corresponding to the color map of the second line manuscript; and converting the fourth feature image into a second color map corresponding to the second line manuscript according to the second feature image. The application solves the problem in the related art that line manuscript coloring methods cannot color a line manuscript with large shape transformation or newly appearing shapes.

Description

Method and device for acquiring line manuscript corresponding color map, storage medium and electronic device
Technical Field
The present application relates to the field of computers, and in particular, to a method and an apparatus for acquiring a color chart corresponding to a line manuscript, a storage medium, and an electronic device.
Background
A producer may make a multimedia resource (e.g., a cartoon) having multiple continuous frames through a professional production tool. When producing a multimedia resource, the producer needs to spend a lot of time drawing and coloring intermediate line manuscripts, in particular when coloring continuous frames. Continuous frame coloring refers to completing the coloring of the next frame line manuscript according to the colored picture of the previous frame (or frames) and the line manuscript of the next frame.
At present, an automatic color filling method based on shape matching is generally adopted to automatically color the line manuscripts of a multimedia resource: the line manuscript is divided into different closed blocks by shape segmentation; the closed blocks of the previous frame line manuscript are matched with those of the next frame line manuscript, and the matched blocks in the next frame line manuscript are colored according to the colors of the corresponding closed blocks in the previous frame line manuscript.
However, this way of coloring a line manuscript can only fill shapes, requires that the line manuscript can be divided into different closed blocks, and cannot adapt to situations where the shape transformation is large or a new shape appears.
Therefore, the line manuscript coloring methods in the related art cannot color a line manuscript having large shape transformation or newly appearing shapes.
Disclosure of Invention
The embodiment of the application provides a method and a device for acquiring a color map corresponding to a line manuscript, a storage medium and an electronic device, so as to at least solve the problem in the related art that line manuscript coloring methods cannot color a line manuscript with large shape transformation or newly appearing shapes.
According to an aspect of the embodiment of the present application, there is provided a method for acquiring a line manuscript corresponding color chart, including: respectively extracting a first characteristic image of a first line manuscript, a second characteristic image of a second line manuscript and a third characteristic image of a first color chart, wherein the first line manuscript and the second line manuscript have an association relationship, and the first color chart is the color chart of the first line manuscript; acquiring a fourth characteristic image according to the correlation degree of the first characteristic image and the second characteristic image and the third characteristic image, wherein the fourth characteristic image is a characteristic image corresponding to the color image of the second line manuscript; and converting the fourth characteristic image into a second color chart corresponding to the second line manuscript according to the second characteristic image.
According to another aspect of the embodiment of the present application, there is provided an apparatus for acquiring a color map corresponding to a line manuscript, including: a first extraction unit, used for respectively extracting a first feature image of a first line manuscript, a second feature image of a second line manuscript and a third feature image of a first color map, wherein the first line manuscript and the second line manuscript have an association relationship, and the first color map is the color map of the first line manuscript; an acquisition unit, used for acquiring a fourth feature image according to the correlation degree between the first feature image and the second feature image, and the third feature image, wherein the fourth feature image is the feature image corresponding to the color map of the second line manuscript; and a conversion unit, used for converting the fourth feature image into a second color map corresponding to the second line manuscript according to the second feature image.
Optionally, the acquiring unit includes: the normalization module is used for normalizing the correlation between a first pixel point in the second characteristic image and each pixel point in the first characteristic image to obtain a target correlation, wherein the first pixel point is any pixel point in the second characteristic image; and the determining module is used for carrying out weighted summation on the target correlation degree by using the pixel points in the third characteristic image and determining the value of a second pixel point in the fourth characteristic image, wherein the second pixel point is the pixel point corresponding to the first pixel point in the fourth characteristic image.
Optionally, the acquiring unit includes: the input module is used for inputting the first feature image, the second feature image and the third feature image into the target decoding model to obtain a fourth feature image output by the target decoding model, wherein the target decoding model is obtained by training the initial decoding model by using a plurality of training line drafts with association relations and a plurality of training color maps corresponding to the training line drafts one by one, and the target decoding model is used for weighting the correlation degree between the second feature image and the first feature image according to the third feature image to obtain the fourth feature image.
Optionally, the target decoding model includes a multi-layer feature matching network and a multi-layer convolution network. The feature matching network is used for weighting the correlation degree between the second feature parameter and the third feature parameter according to the first feature parameter and outputting an intermediate feature image; the convolution network is used for connecting the fourth feature parameter and the fifth feature parameter in parallel and then passing them sequentially through a convolution layer and an up-sampling layer of the convolution network to output a reference feature image. The input module includes: a first input sub-module, used for inputting, into the current feature matching network, the sub-feature image of the third feature image corresponding to the current feature matching network as the first feature parameter, the sub-feature image of the second feature image corresponding to the current feature matching network as the second feature parameter, and the sub-feature image of the first feature image corresponding to the current feature matching network as the third feature parameter, to obtain the intermediate feature image output by the current feature matching network; a second input sub-module, used for, when the current feature matching network is a feature matching network other than the first layer feature matching network, inputting the sub-feature image of the second feature image corresponding to the previous layer feature matching network as the fourth feature parameter, and the intermediate feature image output by the previous layer feature matching network as the fifth feature parameter, into the current convolution network corresponding to the current feature matching network, to obtain the reference feature image output by the current convolution network; and a third input sub-module, used for inputting the sub-feature image of the third feature image corresponding to the current feature matching network as the first feature parameter and the third feature parameter, and the reference feature image output by the current convolution network as the second feature parameter, into the current feature matching network, to obtain the intermediate feature image output by the current feature matching network. The fourth feature image is the intermediate feature image output by the last layer feature matching network of the multi-layer feature matching network.
Optionally, the conversion unit includes: the parallel module is used for connecting the second characteristic image and the fourth characteristic image in parallel to obtain a fifth characteristic image obtained after the second characteristic image and the fourth characteristic image are connected in parallel; the first acquisition module is used for convoluting the fifth characteristic image, and up-sampling the characteristic image obtained after convolution to acquire a second color map corresponding to the second line manuscript.
Optionally, the apparatus further includes: the second extraction unit is used for extracting semantic features of the first line manuscript before acquiring a fourth feature image according to the correlation degree of the first feature image and the second feature image and the third feature image to obtain a sixth feature image; the third extraction unit is used for extracting semantic features of the second line manuscript to obtain a seventh feature image; the first updating unit is used for updating the first characteristic image by using the sixth characteristic image to obtain an updated first characteristic image; and a second updating unit, configured to update the second feature image using the seventh feature image, and obtain an updated second feature image.
Optionally, the first updating unit includes: the second acquisition module is used for connecting or adding the sixth characteristic image and the first characteristic image in parallel to acquire an updated first characteristic image; the second updating unit includes: and the third acquisition module is used for connecting or adding the seventh characteristic image and the second characteristic image in parallel to acquire an updated second characteristic image.
According to a further aspect of embodiments of the present application there is also provided a computer readable storage medium having stored therein a computer program, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
According to a further aspect of embodiments of the present application there is also provided an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to the application, the current line manuscript is colored according to the feature correlation with an associated line manuscript and the color map of the associated line manuscript: a first feature image of a first line manuscript, a second feature image of a second line manuscript and a third feature image of a first color map are respectively extracted, wherein the first line manuscript and the second line manuscript have an association relationship, and the first color map is the color map of the first line manuscript; a fourth feature image is acquired according to the correlation degree between the first feature image and the second feature image, and the third feature image, wherein the fourth feature image is the feature image corresponding to the color map of the second line manuscript; and the fourth feature image is converted into a second color map corresponding to the second line manuscript according to the second feature image. Because the color map of the current line manuscript is reconstructed according to the matching degree (that is, the correlation degree, such as a non-local feature matching degree) between the feature images and the color map of the associated line manuscript, the method is suitable for coloring continuous frames with large motion and large deformation, which improves the applicability of line manuscript coloring and the coloring accuracy under large motion and large deformation, and solves the problem in the related art that line manuscript coloring methods cannot color a line manuscript with large shape transformation or newly appearing shapes.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a hardware environment of a method for acquiring a line manuscript corresponding color map according to an embodiment of the application;
FIG. 2 is a flow chart of an alternative method for obtaining a line-manuscript correspondence color map, in accordance with an embodiment of the application;
FIG. 3 is a schematic diagram of an alternative method for obtaining a line-manuscript correspondence color map, according to an embodiment of the application;
FIG. 4 is a flow chart of another alternative method for obtaining a line-manuscript correspondence color map, in accordance with an embodiment of the application;
FIG. 5 is a block diagram of an alternative device for acquiring a line-manuscript correspondence color map, according to an embodiment of the application; and
fig. 6 is a block diagram of a server according to an embodiment of the present application.
Detailed Description
The application will be described in detail hereinafter with reference to the drawings in conjunction with embodiments. It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
According to an aspect of the embodiment of the application, a method embodiment of a method for acquiring a line manuscript corresponding color chart is provided.
Alternatively, in this embodiment, the above method of acquiring a color map corresponding to a line manuscript may be applied to a hardware environment constituted by a terminal 101 and a server 103 as shown in fig. 1. As shown in fig. 1, the server 103 is connected to the terminal 101 through a network and may be used to provide services (such as game services and application services) to the terminal or to clients installed on the terminal. A database may be provided on the server, or independently of the server, to provide data storage services to the server 103. The network includes, but is not limited to, a wired or wireless network, and the terminal 101 may be, but is not limited to, a PC, a mobile phone, a tablet computer, or the like. The method for acquiring the color map corresponding to a line manuscript in the embodiment of the application may be executed by the server 103, by the terminal 101, or by both the server 103 and the terminal 101 jointly. When the method is executed by the terminal 101, it may be performed by a client installed on the terminal.
Fig. 2 is a flowchart of an alternative method for obtaining a line-manuscript corresponding color map according to an embodiment of the present application, as shown in fig. 2, the method may include the following steps:
step S202, respectively extracting a first characteristic image of a first line manuscript, a second characteristic image of a second line manuscript and a third characteristic image of a first color chart, wherein the first line manuscript and the second line manuscript have an association relationship, and the first color chart is the color chart of the first line manuscript;
step S204, obtaining a fourth characteristic image according to the correlation degree of the first characteristic image and the second characteristic image and the third characteristic image, wherein the fourth characteristic image is a characteristic image corresponding to the color image of the second line manuscript;
step S206, converting the fourth characteristic image into a second color chart corresponding to the second line manuscript according to the second characteristic image.
Alternatively, the execution body of the above steps may be a server, a terminal device, or the like, but is not limited thereto; other devices capable of acquiring the color map corresponding to a line manuscript may also be used to execute the method in the embodiment of the present application.
Through the steps S202 to S206, the first feature image of the first line manuscript, the second feature image of the second line manuscript and the third feature image of the first color map are extracted, wherein the first line manuscript and the second line manuscript have an association relationship, and the first color map is the color map of the first line manuscript; the fourth feature image is acquired according to the correlation degree between the first feature image and the second feature image, and the third feature image, wherein the fourth feature image is the feature image corresponding to the color map of the second line manuscript; and the fourth feature image is converted into the second color map corresponding to the second line manuscript according to the second feature image. This solves the problem in the related art that line manuscript coloring methods cannot color a line manuscript with large shape transformation or newly appearing shapes, improves the applicability of line manuscript coloring, and improves the coloring accuracy under large motion and large deformation.
In the technical scheme provided in step S202, a first feature image of a first line manuscript, a second feature image of a second line manuscript, and a third feature image of a first color chart are extracted respectively, wherein the first line manuscript and the second line manuscript have an association relationship, and the first color chart is a color chart of the first line manuscript.
A user may upload a multimedia resource (e.g., a cartoon) having a plurality of continuous line manuscripts to the server through a client running on a terminal device. The client may be a client for making multimedia resources, that is, a line manuscript may be drawn on and uploaded from the client; or the client may be software for coloring line manuscripts, that is, the client may not support drawing and may only upload already drawn line manuscripts. In addition, the coloring processing of the line manuscripts may also be performed by the terminal device.
For example, the terminal device may color the non-colored line manuscripts among the plurality of continuous line manuscripts according to the colored line manuscripts, or the terminal device may send the uploaded multimedia resource to the server, and the server colors the non-colored line manuscripts according to the colored line manuscripts.
For a first line manuscript of the multimedia resource, the first line manuscript has been colored, and the first color map is obtained after the first line manuscript is colored. The second line manuscript has an association relationship with the first line manuscript; the association relationship may be that the first line manuscript is a line manuscript N frames before the second line manuscript, where N is a positive integer greater than or equal to 1 (for example, the immediately previous frame), but is not limited thereto, and other association relationships may be used in this embodiment.
Taking the server for coloring the line manuscript as an example, after the server acquires the first line manuscript, the second line manuscript and the first color chart, the server can extract the image features of the first line manuscript, the second line manuscript and the first color chart respectively to obtain a first feature image of the first line manuscript, a second feature image of the second line manuscript and a third feature image of the first color chart.
The feature image may be used to represent low-dimensional or high-dimensional image features of the image, and may be a feature map suitable for feature matching, so as to adapt to multimedia resources of different styles.
For example, the first line manuscript may be the 1st line manuscript of the multimedia resource, the first color map is the color map obtained after the user colors the first line manuscript (that is, the 1st color map), and the second line manuscript is the 2nd line manuscript of the multimedia resource. The server may respectively acquire the feature map of the 1st line manuscript, the feature map of the 2nd line manuscript, and the feature map of the 1st color map. If the extracted feature maps are multidimensional, the number of feature maps extracted from each image (any one of the first line manuscript, the second line manuscript and the first color map) may be plural.
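For illustration, a minimal sketch of this feature extraction step in PyTorch follows; the network shape, channel counts, and names (SketchEncoder, E_s, E_c) are assumptions for illustration, since the description does not fix a specific encoder architecture.

```python
# Minimal sketch, assuming a plain convolutional encoder (illustrative only).
import torch
import torch.nn as nn

class SketchEncoder(nn.Module):
    """Extracts a multi-level feature pyramid from a line manuscript or a color map."""
    def __init__(self, in_channels: int, base_channels: int = 64, levels: int = 3):
        super().__init__()
        blocks, ch_in = [], in_channels
        for i in range(levels):
            ch_out = base_channels * (2 ** i)
            blocks.append(nn.Sequential(
                nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True)))
            ch_in = ch_out
        self.blocks = nn.ModuleList(blocks)

    def forward(self, x: torch.Tensor) -> list:
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)          # feats[-1] is the innermost (coarsest) level
        return feats

E_s = SketchEncoder(in_channels=1)   # shared encoder for both line manuscripts
E_c = SketchEncoder(in_channels=3)   # encoder for the first color map
first_feature_image = E_s(torch.randn(1, 1, 256, 256))   # features of the 1st line manuscript
```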
In the technical solution provided in step S204, a fourth feature image is obtained according to the correlation between the first feature image and the second feature image, and the third feature image.
After extracting the feature map of the first line manuscript (the first feature image), the feature map of the second line manuscript (the second feature image) and the feature map of the first color map (the third feature image), the server may automatically generate the feature map corresponding to the color map of the second line manuscript (the second color map), that is, the fourth feature image, according to the correlation degree between the feature maps of the two line manuscripts in combination with the feature map of the first color map.
As an alternative embodiment, acquiring the fourth feature image according to the correlation degree of the first feature image and the second feature image, and the third feature image includes: normalizing the correlation between the first pixel point in the second characteristic image and each pixel point in the first characteristic image to obtain a target correlation; and carrying out weighted summation on the target correlation degree by using the pixel points in the third characteristic image, and determining the value of the second pixel point in the fourth characteristic image.
The correlation of the feature images may be determined by feature matching; in order to accommodate large motion, large deformation and the like of the line manuscript, the correlation of two feature images may be determined by non-local feature matching (global feature matching). The feature matching process described above may be implemented by a learnable feature matching network structure module, for example a CM (Correlation Matching) module, which may be a software unit or a program unit running on a processor, or a separately integrated chip. The CM module may be combined with a generation network to design a network structure for line manuscript coloring.
For any pixel point (first pixel point) in the second feature image, the correlation between the pixel point and each pixel point in the first feature image can be calculated, and the obtained correlation is normalized to obtain the target correlation. The obtained target correlation may comprise a set of correlations corresponding to the number of pixels on the first feature image.
After the target correlation is obtained, the target correlation may be weighted and summed using pixels on the feature image of the first color chart: and multiplying the values of the pixel points in the third characteristic image with the correlation degrees at the corresponding positions respectively, and summing the multiplied values to obtain the pixel value of the pixel point (namely, the second pixel point) corresponding to the first pixel point on the fourth characteristic image.
For example, assume that the A domain represents line manuscripts and the B domain represents color maps, x denotes the subsequent frame (the second line manuscript), and y denotes a frame preceding x (the first line manuscript; y may be the immediately previous frame of x). Then x_A is a feature map of the A domain, y_A is another feature map of the A domain, y_B is the feature map on the B domain whose spatial content is consistent with y_A, and x_B is the reconstructed feature map on the B domain corresponding to x_A. The CM module may be implemented as shown in equation (1):

x_B(i,j) = Σ_(i',j') softmax( f( x_A(i,j), y_A(i',j') ) ) · y_B(i',j')    (1)

where (i,j) and (i',j') denote pixel coordinates, f(·,·) is a correlation function, and the softmax normalizes over all positions (i',j'). Unlike a local matching operation, the CM module computes the correlation over all positions, and its inputs are different feature maps (x_A, y_A and y_B rather than a single one).
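A minimal sketch of the computation in equation (1) follows, assuming the dot-product correlation function used later in this description; the tensor layout and function name are illustrative assumptions.

```python
# Sketch of the CM module for dot-product correlation (illustrative assumptions).
import torch

def correlation_match(x_a: torch.Tensor, y_a: torch.Tensor, y_b: torch.Tensor) -> torch.Tensor:
    """CM(x_A, y_A, y_B): softmax-normalize the correlation of every pixel of x_A
    with every pixel of y_A, then use those weights to sum the pixels of y_B.

    x_a: (B, C, H, W); y_a: (B, C, H', W'); y_b: (B, C_b, H', W').
    """
    b, _, h, w = x_a.shape
    q = x_a.flatten(2).transpose(1, 2)        # (B, H*W, C): one query per x_A pixel
    k = y_a.flatten(2)                        # (B, C, H'*W')
    v = y_b.flatten(2).transpose(1, 2)        # (B, H'*W', C_b)
    attn = torch.softmax(q @ k, dim=-1)       # target correlation, normalized over y_A pixels
    x_b = attn @ v                            # weighted sum of y_B pixel values
    return x_b.transpose(1, 2).reshape(b, -1, h, w)
```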
Through the embodiment, the pixel values of all the pixel points on the fourth characteristic image are determined in a global characteristic matching mode, so that the method can adapt to large movement and large deformation of a line manuscript, and the coloring accuracy under the large movement and large deformation is improved.
As an alternative embodiment, before the fourth feature image is acquired according to the correlation degree between the first feature image and the second feature image and the third feature image, the semantic feature of the first manuscript may be extracted to obtain a sixth feature image; extracting semantic features of the second line manuscript to obtain a seventh feature image; updating the first characteristic image by using the sixth characteristic image to obtain an updated first characteristic image; and updating the second characteristic image by using the seventh characteristic image to obtain an updated second characteristic image.
The line manuscript semantic network can be used for carrying out image semantic understanding in the continuous frame coloring task, and is introduced into the continuous frame coloring network, so that the network can understand complex line manuscript semantic conversion relations, adapt to large deformation and improve the coloring accuracy. The line manuscript semantic network can be any network capable of performing image semantic analysis, and a semantic extraction network (semantic extractor) in the related art can be used for semantic feature extraction in the embodiment.
Alternatively, in the present embodiment, semantic features of the first line manuscript and the second line manuscript may be extracted using a semantic extractor, respectively, to obtain a sixth feature image and a seventh feature image. The first feature image may be updated using the sixth feature image to obtain an updated first feature image, and the second feature image may be updated using the seventh feature image to obtain an updated second feature image. Therefore, the semantic feature can be used for assisting in calculating the correlation degree, and the coloring accuracy of the line manuscript is improved.
According to the embodiment, the line manuscript semantic network is introduced into the continuous frame coloring network, so that the network can understand complex line manuscript semantic conversion relations, thereby adapting to large deformation and improving the coloring accuracy.
As an alternative embodiment, updating the first feature image using the sixth feature image, the obtaining the updated first feature image includes: the sixth characteristic image and the first characteristic image are connected in parallel or added to obtain an updated first characteristic image; updating the second feature image using the seventh feature image, the obtaining the updated second feature image comprising: and connecting or adding the seventh characteristic image and the second characteristic image in parallel to acquire an updated second characteristic image.
The existing feature map may be updated by connecting the semantic feature map with it in parallel or by adding the two. When the feature map is updated, the semantic feature map and the existing feature map may be connected in parallel, that is, processed as data of different channels, or they may be added, that is, their data are summed before subsequent processing.
It should be noted that, since features of different dimensions are extracted, the same line manuscript may correspond to a plurality of feature maps of the same or different sizes, and the extracted semantic feature maps may be matched with the sizes and/or numbers of those feature maps, so as to ensure that the semantic feature maps and the feature maps can be connected in parallel or added.
For example, as shown in fig. 3, the preceding frame line manuscript S_1 is input into the encoder E_s to obtain F_1, and the subsequent frame line manuscript S_2 is input into the encoder E_s to obtain F_2; the preceding frame color map C_1 is input into the encoder E_c to obtain F_c1; the preceding frame line manuscript S_1 is input into the line manuscript semantic extractor I to obtain Se_1, and the subsequent frame line manuscript S_2 is input into the semantic extractor I to obtain Se_2. F_1 and Se_1 are connected in parallel or added to obtain F̃_1, and F_2 and Se_2 are connected in parallel or added to obtain F̃_2.
It should be noted that, if the number of convolution layers of the encoder is n, then F_1, F_2 and F_c1 each have n layers of feature maps, denoted 0 to (n-1), where 0 denotes the output of the innermost layer of the encoder. Connecting or adding F_1 and Se_1 may mean connecting or adding only the innermost-layer output of F_1 with Se_1 (that is, using only one layer of the semantic network output, connected to the encoder output), or connecting or adding each layer of F_1 with the output of the corresponding layer of Se_1; the specific manner of parallel connection or addition may be set as needed, which is not particularly limited in this embodiment.
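A short sketch of the two update options follows; the requirement that shapes already match for the addition variant is an assumption (in practice the semantic map would first be resized or projected to match).

```python
# Sketch of updating an encoder feature map with a semantic feature map.
import torch

def fuse_semantic(feature: torch.Tensor, semantic: torch.Tensor, mode: str = "concat") -> torch.Tensor:
    if mode == "concat":
        return torch.cat([feature, semantic], dim=1)   # parallel connection: extra channels
    if mode == "add":
        return feature + semantic                      # element-wise addition: shapes must match
    raise ValueError(f"unknown mode: {mode}")

F1 = torch.randn(1, 64, 32, 32)     # innermost encoder feature of S_1
Se1 = torch.randn(1, 64, 32, 32)    # semantic feature of S_1, assumed already matched in size
F1_tilde = fuse_semantic(F1, Se1, mode="add")
```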
As an alternative embodiment, acquiring the fourth feature image according to the correlation degree of the first feature image and the second feature image, and the third feature image includes: inputting the first feature image, the second feature image and the third feature image into a target decoding model to obtain a fourth feature image output by the target decoding model, wherein the target decoding model is obtained by training an initial decoding model by using a plurality of training line drafts with association relations and a plurality of training color maps corresponding to the training line drafts one by one, and the target decoding model is used for weighting the correlation degree of the second feature image and the first feature image according to the third feature image to obtain the fourth feature image.
The fourth feature image may be acquired using an already trained target decoder (e.g., a target decoding model). The target decoding model may be part of a target draft coloring model including an encoding model and a decoding model, and may implement a complete process from inputting the first draft, the second draft, and the first color map to outputting the second color map, for example, an encoding process (feature extraction process), a decoding process (correlation calculation and color map determination process).
The target decoding model is obtained by training the initial decoding model using a plurality of training line manuscripts having association relationships and a plurality of training color maps corresponding to the training line manuscripts one by one. In the training process, a preceding frame line manuscript, the color map of the preceding frame line manuscript and a subsequent frame line manuscript among the training line manuscripts may be input into an initial line manuscript coloring model (including the initial decoding model) to obtain an initial color map, output by the model, corresponding to the subsequent frame line manuscript; by comparing the initial color map with the color map of the subsequent frame line manuscript, model parameters are adjusted according to a loss function, so that the difference between the color map output by the model and the color map corresponding to the subsequent frame line manuscript in the training sample becomes smaller than or equal to a set threshold.
It should be noted that if the semantic network is introduced into the model, parameters of the semantic network may be fixed, parameters of the encoding portion and the decoding portion may be adjusted during training, and parameters of the semantic network may also be adjustable, and parameters of the encoding portion, the semantic network portion and the decoding portion may be adjusted during training.
For example, the generator shown in fig. 3 may be trained using a GAN training scheme. Let the generator be G and the training data set be (S_1, C_1, S_2, C_2) with picture size W × H, where S_1 and S_2 are respectively the preceding and subsequent frame line manuscripts and C_1 and C_2 are respectively the preceding and subsequent frame color maps. The generated color map is then Ĉ_2 = G(S_1, C_1, S_2).
The generator may be trained with L = L_content + λ · L_adv, where L_content is the content loss between Ĉ_2 and C_2, L_adv is the GAN loss, and λ is a parameter that adjusts the ratio of the two.
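A sketch of this objective follows; the L1 form of the content loss and the cross-entropy form of the GAN loss are assumptions, since the description only names the two terms.

```python
# Sketch of the generator objective L = L_content + λ·L_adv (loss forms assumed).
import torch
import torch.nn.functional as F

def generator_loss(c2_pred: torch.Tensor, c2_gt: torch.Tensor,
                   disc_logits_fake: torch.Tensor, lam: float = 0.01) -> torch.Tensor:
    l_content = F.l1_loss(c2_pred, c2_gt)          # content loss between Ĉ_2 and C_2
    l_adv = F.binary_cross_entropy_with_logits(    # GAN loss on the discriminator output for Ĉ_2
        disc_logits_fake, torch.ones_like(disc_logits_fake))
    return l_content + lam * l_adv                 # λ balances the two terms
```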
For the target decoding model obtained by training, the first feature image, the second feature image, and the third feature image may be input to the target decoding model. The target decoding model may weight the correlation between the second feature image and the first feature image according to the third feature image to obtain a fourth feature image.
By the embodiment, the fourth feature image is acquired by using the trained decoding model, so that the acquisition efficiency of the fourth feature image can be improved.
Optionally, in this embodiment, the target decoding model includes a multi-layer feature matching network and a multi-layer convolution network, where the feature matching network is configured to weight the correlation between the second feature parameter and the third feature parameter according to the first feature parameter, output an intermediate feature image, and after the convolution network is configured to connect the fourth feature parameter and the fifth feature parameter in parallel, sequentially pass through a convolution layer and an up-sampling layer of the convolution network, and output a reference feature image.
The CM module and the generator may be combined to obtain a coarse-to-fine generation-matching fusion network structure, which has the advantages of both generation and matching and can generate unmatched areas of the line manuscript while ensuring matching accuracy.
In order to realize the matching from thick to thin, a multi-layer feature matching network can be arranged to perform feature matching with different sizes and different dimensions. In order to ensure that the matching result of the previous layer can be used for the feature matching of the present layer, the output result of the previous layer can be up-sampled to ensure that the sizes of the processed feature graphs are consistent.
The target decoding model comprises a multi-layer feature matching network and a multi-layer convolution network, wherein the feature matching network is used for weighting the correlation degree of the second feature parameter and the third feature parameter according to the first feature parameter, outputting an intermediate feature image, and the convolution network is used for connecting the fourth feature parameter and the fifth feature parameter in parallel and sequentially passing through a convolution layer and an up-sampling layer of the convolution network to output a reference feature image.
As an alternative embodiment, the data may be processed in the same or different manners in different convolutional networks of the target decoding model.
If the current feature matching network is the first layer feature matching network, the sub-feature image of the third feature image corresponding to the current feature matching network may be used as the first feature parameter, the sub-feature image of the second feature image corresponding to the current feature matching network as the second feature parameter, and the sub-feature image of the first feature image corresponding to the current feature matching network as the third feature parameter; these are input into the current feature matching network to obtain the intermediate feature image output by the current feature matching network.
For example, for the first layer feature matching network (the innermost feature matching network) in the decoder of the generator shown in fig. 3, the correlation between F̃_2^0 and F̃_1^0 (the innermost-layer outputs of the encoder) is calculated. Taking the correlation function as f(x, y) = x^T y, the original formula is then equivalent to CM(x_A, y_A, y_B) = softmax((x_A)^T y_A) y_B. F̃_2^0, F̃_1^0 and F_c1^0 are input into the CM module to calculate the first intermediate feature image M_0 = CM(F̃_2^0, F̃_1^0, F_c1^0).
If the current feature matching network is a feature matching network other than the first layer feature matching network, the sub-feature image of the second feature image corresponding to the previous layer feature matching network may be used as the fourth feature parameter, and the intermediate feature image output by the previous layer feature matching network may be input as the fifth feature parameter into the current convolution network corresponding to the current feature matching network, to obtain the reference feature image output by the current convolution network.
For example, for the feature matching networks other than the first layer in the decoder of the generator shown in fig. 3, the output of the previous layer feature matching network may first be processed through the convolution network. After the first layer feature matching network outputs M_0, M_0 and F̃_2^0 are connected in parallel, and the parallel result is convolved and up-sampled to obtain the reference feature image R_1, that is, R_1 = up_0(conv_0(cat(M_0, F̃_2^0))), where conv_0 is a convolution operation, up_0 is an up-sampling operation, and cat is the parallel (concatenation) operation.
When the reference feature image is obtained, the sub-feature image of the third feature image corresponding to the current feature matching network may be used as the first feature parameter and the third feature parameter, and the reference feature image output by the current convolution network may be input as the second feature parameter into the current feature matching network, to obtain the intermediate feature image output by the current feature matching network.
For example, as shown in fig. 3, after R_1 is obtained, R_1 and F_c1^1 may be input into the CM module to obtain M_1 = CM(R_1, F_c1^1, F_c1^1), and R_2 and M_2 are then obtained in the same manner.
By analogy, the relationship between the computation of the nth layer convolution network and the output of the (n-1)th layer feature matching network is shown in equation (2):

R_n = up_(n-1)( conv_(n-1)( cat( M_(n-1), F̃_2^(n-1) ) ) )    (2)

The output of the nth layer feature matching network may then be as shown in equation (3):

M_n = CM( R_n, F_c1^n, F_c1^n )    (3)

By analogy with the second layer, the network structure of each subsequent layer can be obtained, and the final output of the generation network is M_(N-1), where N is the number of layers of the CM module structure.
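Combining equations (2) and (3), the decoder loop can be sketched as follows; the list ordering (index 0 = innermost level) and the channel compatibility between R_n and F_c1^n are assumptions of this sketch.

```python
# Sketch of the coarse-to-fine matching decoder built from equations (2) and (3).
import torch

def decode(cm, convs, ups, f2_tilde, f1_tilde, fc1):
    """cm: the CM module; convs/ups: per-layer conv and up-sampling modules;
    f2_tilde, f1_tilde, fc1: per-level feature lists, index 0 = innermost level.
    Returns M_(N-1), the intermediate feature image of the last CM layer."""
    m = cm(f2_tilde[0], f1_tilde[0], fc1[0])       # first layer: M_0, per equation (1)
    for n in range(1, len(fc1)):
        # equation (2): R_n from the previous intermediate feature image
        r = ups[n - 1](convs[n - 1](torch.cat([m, f2_tilde[n - 1]], dim=1)))
        # equation (3): M_n, matching R_n against the color-map feature of this level
        m = cm(r, fc1[n], fc1[n])
    return m
```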
In the technical solution provided in step S206, the fourth feature image is converted into a second color chart corresponding to the second line manuscript according to the second feature image.
The fourth feature image is calculated based on the correlation between the feature map of the first line manuscript and the feature map of the second line manuscript, together with the feature map of the first color map, and can be regarded as the feature map of the second color map. Therefore, after the fourth feature image is obtained, the second color map corresponding to the second line manuscript can be obtained from it.
The second color map may be obtained from a feature map of the second line manuscript (for example, may be a first layer feature map output by the encoder) and a fourth feature image. The feature map of the second line manuscript can adjust the conversion process from the fourth feature image to the second color map, so that the matching degree of the obtained second color map and the second line manuscript is improved.
As an alternative embodiment, converting the fourth feature image into a second color map corresponding to the second line manuscript based on the second feature image includes: the second characteristic image and the fourth characteristic image are connected in parallel, and a fifth characteristic image obtained after the parallel connection is obtained; and convolving the fifth characteristic image, and up-sampling the characteristic image obtained after convolution to obtain a second color chart corresponding to the second line manuscript.
The second feature image and the fourth feature image may first be connected in parallel to obtain the fifth feature image. The fifth feature image is convolved and up-sampled to obtain the color map corresponding to the second line manuscript, that is, the second color map.
For example, as shown in fig. 3, after the generator obtains M_(N-1), M_(N-1) and F̃_2^(N-1) are connected in parallel, the parallel result is convolved and then up-sampled, and Ĉ_2, the generated color map of the second line manuscript, is obtained.
According to the embodiment, the color map of the second line manuscript is obtained by carrying out parallel connection, convolution and up-sampling on the second characteristic image and the fourth characteristic image, so that the matching degree of the obtained second color map and the second line manuscript can be improved.
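A sketch of this final conversion step follows; the kernel size, up-sampling factor, and class name are illustrative assumptions.

```python
# Sketch of converting the fourth feature image into the second color map.
import torch
import torch.nn as nn

class ColorHead(nn.Module):
    def __init__(self, ch_feat: int, ch_match: int):
        super().__init__()
        self.conv = nn.Conv2d(ch_feat + ch_match, 3, kernel_size=3, padding=1)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, f2_tilde: torch.Tensor, m_last: torch.Tensor) -> torch.Tensor:
        fifth = torch.cat([f2_tilde, m_last], dim=1)   # fifth feature image (parallel connection)
        return self.up(self.conv(fifth))               # convolve, then up-sample -> Ĉ_2
```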
The method for acquiring the color map corresponding to a line manuscript is described below with reference to an alternative example. In this example, the multimedia resource is a cartoon, which includes a plurality of continuous line manuscripts; the first line manuscript is a preceding frame line manuscript of the cartoon, the second line manuscript is a subsequent frame line manuscript, and the server uses the generator shown in fig. 3 to automatically generate the color map corresponding to the subsequent frame line manuscript.
In this example, a CM module is added to the generator. The CM module can conveniently match and reconstruct features, so that the network can learn, from big data, feature maps suitable for feature matching and thus adapt to cartoons of different styles. In addition, the CM module performs matching calculation over the whole domain, so that the model can adapt to large motion and large deformation of the line manuscript. Combining the CM module with the generation network can improve the quality of the color map on the premise of ensuring matching accuracy.
Meanwhile, fusing the CM module into the generator forms a coarse-to-fine generation-matching fusion network structure, which reduces the matching difficulty and improves the coloring accuracy; the generator can generate unmatched areas of the line manuscript while ensuring matching.
In addition, the line manuscript semantic network is introduced into the continuous frame coloring network, so that the network can understand complex line manuscript semantic conversion relationships, adapt to large deformation, and improve the coloring accuracy.
As shown in fig. 4, the flow of the acquisition method of the line-manuscript correspondence color chart in the present example may include the steps of:
step S402, obtaining a color chart corresponding to the cartoon line manuscript and at least one line manuscript.
The server can receive the cartoon draft uploaded by the client on the terminal equipment and the color chart corresponding to at least one draft.
Step S404, inputting the cartoon line manuscript and the color chart corresponding to the at least one line manuscript into a generator to obtain the color chart corresponding to the at least one line manuscript output by the generator.
The server may input the color map and the post-frame draft corresponding to the animation draft front-frame draft to the generator. The encoder and decoder of the generator process the input data and finally output a color map corresponding to the post-frame draft.
The preceding frame line manuscript S_1 is input into the encoder E_s to obtain F_1, and the subsequent frame line manuscript S_2 is input into the encoder E_s to obtain F_2; the preceding frame color map C_1 is input into the encoder E_c to obtain F_c1; S_1 is input into the line manuscript semantic extractor I to obtain Se_1, and S_2 is input into the semantic extractor I to obtain Se_2. F_1 and Se_1 are connected in parallel or added to obtain F̃_1, and F_2 and Se_2 are connected in parallel or added to obtain F̃_2.
The CM modules and the convolution up-sampling modules in the decoder process the data according to equation (3) and equation (2) respectively, and Ĉ_2, the color map corresponding to the subsequent frame line manuscript, is finally obtained.
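As a hypothetical wiring of the sketches above (all names are illustrative, and the earlier sketches are assumed to be in scope), the inference of step S404 might look like this:

```python
# Hypothetical end-to-end inference for step S404; relies on the sketches above.
import torch

def color_next_frame(E_s, E_c, I_sem, convs, ups, head, s1, c1, s2):
    # Encoder and semantic features, reversed so that index 0 is the innermost level.
    F1 = [fuse_semantic(f, se, "add") for f, se in zip(E_s(s1)[::-1], I_sem(s1)[::-1])]
    F2 = [fuse_semantic(f, se, "add") for f, se in zip(E_s(s2)[::-1], I_sem(s2)[::-1])]
    Fc1 = E_c(c1)[::-1]
    m_last = decode(correlation_match, convs, ups, F2, F1, Fc1)   # M_(N-1)
    return head(F2[-1], m_last)                                   # Ĉ_2, color map of S_2
```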
In step S406, the generated color chart is displayed on the client.
The server may send the generated color map to the client on the terminal device for display, either after each color map is generated or after all color maps are generated.
By means of the method, the CM module is integrated into the generator, adaptability of a continuous frame coloring model is improved, and coloring accuracy under large movement and large deformation is improved; the line manuscript semantic network is introduced into the continuous frame coloring task, so that the line manuscript semantic network can be suitable for complex semantic changes, and the accuracy of semantic matching is improved.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
According to another aspect of the embodiment of the present application, there is provided an acquisition apparatus for a line-manuscript-corresponding color chart for implementing the acquisition method of a line-manuscript-corresponding color chart in the above embodiment. Optionally, the device is used to implement the foregoing embodiments and preferred embodiments, which have been described and will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram of an alternative device for acquiring a line-manuscript corresponding color chart according to an embodiment of the present application, as shown in fig. 5, the device includes:
(1) A first extracting unit 52, configured to extract a first feature image of a first line document, a second feature image of a second line document, and a third feature image of a first color chart, where the first line document and the second line document have an association relationship, and the first color chart is a color chart of the first line document;
(2) An obtaining unit 54, connected to the first extracting unit 52, for obtaining a fourth feature image according to the correlation degree between the first feature image and the second feature image, and the third feature image, where the fourth feature image is a feature image corresponding to the color chart of the second line manuscript;
(3) And a conversion unit 56 connected to the acquisition unit 54 for converting the fourth feature image into a second color chart corresponding to the second line manuscript based on the second feature image.
Alternatively, the first extraction unit 52 may be used to perform step S202 in the above embodiment, the acquisition unit 54 may be used to perform step S204, and the conversion unit 56 may be used to perform step S206.
Through the above modules, the first feature image of the first line manuscript, the second feature image of the second line manuscript and the third feature image of the first color map are respectively extracted, wherein the first line manuscript and the second line manuscript have an association relationship, and the first color map is the color map of the first line manuscript; the fourth feature image is acquired according to the correlation degree between the first feature image and the second feature image, and the third feature image, wherein the fourth feature image is the feature image corresponding to the color map of the second line manuscript; and the fourth feature image is converted into the second color map corresponding to the second line manuscript according to the second feature image. This solves the problem in the related art that line manuscript coloring methods cannot color a line manuscript with large shape transformation or newly appearing shapes, improves the applicability of line manuscript coloring, and improves the coloring accuracy under large motion and large deformation.
As an alternative embodiment, the acquisition unit 54 includes:
(1) The normalization module is configured to normalize the correlation between a first pixel point in the second feature image and each pixel point in the first feature image to obtain a target correlation, where the first pixel point is any pixel point in the second feature image;
(2) The determining module is configured to perform a weighted summation of the pixel points in the third feature image using the target correlation as weights, determining the value of a second pixel point in the fourth feature image, where the second pixel point is the pixel point in the fourth feature image corresponding to the first pixel point; a sketch of this computation follows.
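Taken together, the normalization module and the determining module behave like a cross-attention step: correlations between line-manuscript features select which color-chart pixels to blend. The following minimal Python sketch illustrates one plausible reading of that computation; the function name attention_colorize, the single-scale flattened-pixel formulation, and the equal spatial sizes of the three feature images are assumptions made for illustration, not the patent's actual implementation.

import torch

def attention_colorize(f1, f2, f3):
    # f1: feature image of the first line manuscript, shape (C, H, W)  [assumed]
    # f2: feature image of the second line manuscript, shape (C, H, W)
    # f3: feature image of the first color chart, shape (C, H, W)
    c, h, w = f1.shape
    a = f1.reshape(c, h * w)              # each column is a pixel of f1
    b = f2.reshape(c, h * w)              # each column is a pixel of f2
    v = f3.reshape(c, h * w)              # color-chart pixels used as values
    corr = b.t() @ a                      # correlation of every f2 pixel with every f1 pixel
    weights = torch.softmax(corr, dim=1)  # normalization step: the "target correlation"
    f4 = v @ weights.t()                  # weighted sum of color-chart pixels
    return f4.reshape(c, h, w)            # feature image of the second color chart

In this reading the normalization is a softmax over all pixels of the first feature image, which is one common choice; the patent text only requires that the correlations be normalized.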
As an alternative embodiment, the acquisition unit 54 includes:
(1) The input module is configured to input the first feature image, the second feature image and the third feature image into a target decoding model to obtain the fourth feature image output by the target decoding model, where the target decoding model is obtained by training an initial decoding model using a plurality of training line manuscripts having association relationships and a plurality of training color charts in one-to-one correspondence with those training line manuscripts, and the target decoding model is used for weighting the correlation between the second feature image and the first feature image according to the third feature image to obtain the fourth feature image.
Optionally, in this embodiment, the target decoding model includes a multi-layer feature matching network and a multi-layer convolution network. The feature matching network is configured to weight the correlation between a second feature parameter and a third feature parameter according to a first feature parameter and output an intermediate feature image; the convolution network is configured to connect a fourth feature parameter and a fifth feature parameter in parallel, pass the result through a convolution layer and an upsampling layer of the convolution network in sequence, and output a reference feature image.
As an alternative embodiment, the input module includes:
(1) The first input sub-module is configured, when the current feature matching network is the first-layer feature matching network, to input into the current feature matching network the sub-feature image of the third feature image corresponding to the current feature matching network as the first feature parameter, the sub-feature image of the second feature image corresponding to the current feature matching network as the second feature parameter, and the sub-feature image of the first feature image corresponding to the current feature matching network as the third feature parameter, obtaining the intermediate feature image output by the current feature matching network;
(2) The second input sub-module is configured, when the current feature matching network is a feature matching network other than the first-layer feature matching network, to input into the current convolution network corresponding to the current feature matching network the sub-feature image of the second feature image corresponding to the previous-layer feature matching network as the fourth feature parameter and the intermediate feature image output by the previous-layer feature matching network as the fifth feature parameter, obtaining the reference feature image output by the current convolution network;
(3) The third input sub-module is configured to input into the current feature matching network the sub-feature images of the third feature image and the first feature image corresponding to the current feature matching network as the first feature parameter and the third feature parameter respectively, and the reference feature image output by the current convolution network as the second feature parameter, obtaining the intermediate feature image output by the current feature matching network;
The fourth feature image is the intermediate feature image output by the last-layer feature matching network of the multi-layer feature matching network; a minimal sketch of this coarse-to-fine loop is given after this list.
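The following Python sketch shows one way such a coarse-to-fine decoding loop could be wired together. The class ConvUp, the decode driver, the pyramid naming, and the 2x bilinear upsampling are illustrative assumptions; the patent does not specify layer counts, channel widths, or the internals of the feature matching network, so each match_nets[k] is treated here as an opaque callable taking the three feature parameters.

import torch
import torch.nn as nn

class ConvUp(nn.Module):
    # Hypothetical single-layer convolution network: parallel-connect the
    # fourth and fifth feature parameters, convolve, then upsample.
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)

    def forward(self, fourth, fifth):
        x = torch.cat([fourth, fifth], dim=1)  # parallel connection along channels
        return self.up(self.conv(x))           # reference feature image

def decode(match_nets, conv_nets, f1_pyramid, f2_pyramid, f3_pyramid):
    # *_pyramid[k] is the (N, C, H, W) sub-feature image for layer k,
    # coarsest resolution first; len(conv_nets) == len(match_nets) - 1.
    inter = None
    for k, match in enumerate(match_nets):
        if k == 0:
            second = f2_pyramid[0]             # first layer: f2's own sub-feature
        else:
            # later layers: fuse the previous-layer f2 sub-feature with the
            # previous intermediate feature image to form the second parameter
            second = conv_nets[k - 1](f2_pyramid[k - 1], inter)
        inter = match(f3_pyramid[k], second, f1_pyramid[k])
    return inter                               # the fourth feature image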
As an alternative embodiment, the conversion unit 56 includes:
(1) The parallel module is configured to connect the second feature image and the fourth feature image in parallel to obtain a fifth feature image;
(2) The first acquisition module is configured to convolve the fifth feature image and upsample the convolved feature image to obtain the second color chart corresponding to the second line manuscript, as sketched below.
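A minimal sketch of this conversion step follows; the class name ToColor, the three-channel RGB output, the single convolution layer, and the 2x bilinear upsampling are assumptions made purely for illustration.

import torch
import torch.nn as nn

class ToColor(nn.Module):
    # Hypothetical conversion head: parallel-connect the second and fourth
    # feature images, convolve the result, then upsample to the color chart.
    def __init__(self, in_channels, image_channels=3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, image_channels, kernel_size=3, padding=1)
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)

    def forward(self, f2, f4):
        f5 = torch.cat([f2, f4], dim=1)  # fifth feature image
        return self.up(self.conv(f5))    # second color chart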
As an alternative embodiment, the above device further comprises:
(1) The second extraction unit is configured to extract semantic features of the first line manuscript to obtain a sixth feature image, before the fourth feature image is acquired according to the correlation between the first feature image and the second feature image, together with the third feature image;
(2) The third extraction unit is configured to extract semantic features of the second line manuscript to obtain a seventh feature image;
(3) The first updating unit is configured to update the first feature image using the sixth feature image to obtain an updated first feature image;
(4) The second updating unit is configured to update the second feature image using the seventh feature image to obtain an updated second feature image.
As an alternative embodiment, the first updating unit includes a second acquisition module, and the second updating unit includes a third acquisition module, where:
(1) The second acquisition module is configured to connect the sixth feature image and the first feature image in parallel, or to add them, obtaining the updated first feature image;
(2) The third acquisition module is configured to connect the seventh feature image and the second feature image in parallel, or to add them, obtaining the updated second feature image; a sketch of both options follows.
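Both update options reduce to a one-line tensor operation. The helper below, fuse, is a hypothetical illustration of the two choices; element-wise addition assumes the semantic feature image and the line-manuscript feature image share the same shape, while parallel connection (channel-wise concatenation) only requires matching spatial sizes.

import torch

def fuse(base, semantic, mode="concat"):
    # base: the first or second feature image, shape (N, C, H, W)
    # semantic: the sixth or seventh feature image
    if mode == "concat":
        return torch.cat([base, semantic], dim=1)  # parallel connection
    return base + semantic                          # element-wise addition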
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
According to another aspect of the embodiment of the present application, there is further provided an electronic device for implementing the method for obtaining a line-manuscript corresponding color chart, where the electronic device may be a server, a terminal, or a combination thereof.
Taking a server as an example, Fig. 6 is a block diagram of the structure of a server according to an embodiment of the present application. As shown in Fig. 6, the server may include: one or more processors 601 (only one is shown in the figure), a memory 603, and a transmission device 605; the server may further include an input-output device 607.
The memory 603 may be configured to store software programs and modules, such as program instructions/modules corresponding to the method and apparatus for acquiring a color chart corresponding to a line manuscript in the embodiment of the present application, and the processor 601 executes the software programs and modules stored in the memory 603 to perform various functional applications and data processing, that is, to implement the method for acquiring a color chart corresponding to a line manuscript. Memory 603 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, memory 603 may further include memory located remotely from processor 601, which may be connected to a server via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 605 is used to receive or transmit data via a network, and may also be used for data transmission between the processor and the memory. Specific examples of the network described above may include wired networks and wireless networks. In one example, the transmission device 605 includes a network adapter (Network Interface Controller, NIC) that may be connected to other network devices and routers via a network cable to communicate with the internet or a local area network. In one example, the transmission device 605 is a Radio Frequency (RF) module that is configured to communicate wirelessly with the internet.
In particular, the memory 603 is used to store applications.
The processor 601 may call the application program stored in the memory 603 through the transmission device 605 to perform the following steps:
S1: respectively extracting a first feature image of a first line manuscript, a second feature image of a second line manuscript, and a third feature image of a first color chart, where the first line manuscript and the second line manuscript have an association relationship, and the first color chart is the color chart of the first line manuscript;
S2: acquiring a fourth feature image according to the correlation between the first feature image and the second feature image, together with the third feature image, where the fourth feature image is the feature image corresponding to the color chart of the second line manuscript;
S3: converting the fourth feature image into a second color chart corresponding to the second line manuscript according to the second feature image.
By adopting the embodiments of the present application, a scheme for acquiring the color chart corresponding to a line manuscript is provided: the first feature image of the first line manuscript, the second feature image of the second line manuscript, and the third feature image of the first color chart are extracted respectively, where the first line manuscript and the second line manuscript have an association relationship and the first color chart is the color chart of the first line manuscript; the fourth feature image, i.e. the feature image corresponding to the color chart of the second line manuscript, is acquired according to the correlation between the first feature image and the second feature image, together with the third feature image; and the fourth feature image is converted into the second color chart corresponding to the second line manuscript according to the second feature image. This solves the problem in the related art that line manuscripts with large shape changes or entirely new shapes cannot be colored, improves the applicability of line-manuscript coloring, and improves coloring accuracy under large motion and large deformation.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments, which are not repeated here.
It will be understood by those skilled in the art that the structure shown in Fig. 6 is only schematic, and the device implementing the method for acquiring the line-manuscript-corresponding color chart may also be a terminal, where the terminal may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, or the like. Fig. 6 does not limit the structure of the electronic device. For example, the terminal may also include more or fewer components than shown in Fig. 6 (e.g., a network interface or a display device), or have a different configuration from that shown in Fig. 6.
Those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with a terminal device; the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
The embodiments of the present application also provide a storage medium. Optionally, in this embodiment, the storage medium may be configured to store program code for performing the method for acquiring a line-manuscript-corresponding color chart.
Alternatively, in this embodiment, the storage medium may be located on at least one network device of the plurality of network devices in the network shown in the above embodiment.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of:
S1: respectively extracting a first feature image of a first line manuscript, a second feature image of a second line manuscript, and a third feature image of a first color chart, where the first line manuscript and the second line manuscript have an association relationship, and the first color chart is the color chart of the first line manuscript;
S2: acquiring a fourth feature image according to the correlation between the first feature image and the second feature image, together with the third feature image, where the fourth feature image is the feature image corresponding to the color chart of the second line manuscript;
S3: converting the fourth feature image into a second color chart corresponding to the second line manuscript according to the second feature image.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments, which are not repeated here.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a ROM, a RAM, a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application.
In the foregoing embodiments of the present application, the description of each embodiment has its own emphasis; for a part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described apparatus embodiments are merely exemplary; the division of the units is merely a logical function division, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.

Claims (8)

1. A method for acquiring a line-manuscript-corresponding color chart, characterized by comprising the following steps:
respectively extracting a first characteristic image of a first line manuscript, a second characteristic image of a second line manuscript and a third characteristic image of a first color chart, wherein the first line manuscript and the second line manuscript have an association relation, and the first color chart is the color chart of the first line manuscript;
Acquiring a fourth characteristic image according to the correlation degree of the first characteristic image and the second characteristic image and the third characteristic image, wherein the fourth characteristic image is a characteristic image corresponding to the color map of the second line manuscript;
converting the fourth characteristic image into a second color chart corresponding to the second line manuscript according to the second characteristic image;
wherein acquiring the fourth feature image according to the correlation between the first feature image and the second feature image, together with the third feature image, comprises: inputting the first feature image, the second feature image and the third feature image into a target decoding model to obtain the fourth feature image output by the target decoding model, wherein the target decoding model is obtained by training an initial decoding model using a plurality of training line manuscripts having association relationships and a plurality of training color charts in one-to-one correspondence with the plurality of training line manuscripts, and the target decoding model is used for weighting the correlation between the second feature image and the first feature image according to the third feature image to obtain the fourth feature image;
wherein the target decoding model comprises a multi-layer feature matching network and a multi-layer convolution network, the feature matching network is used for weighting the correlation between a second feature parameter and a third feature parameter according to a first feature parameter and outputting an intermediate feature image, and the convolution network is used for connecting a fourth feature parameter and a fifth feature parameter in parallel, passing them through a convolution layer and an up-sampling layer of the convolution network in sequence, and outputting a reference feature image; inputting the first feature image, the second feature image and the third feature image into the target decoding model to obtain the fourth feature image output by the target decoding model comprises: under the condition that the current feature matching network is a first-layer feature matching network, inputting into the current feature matching network the sub-feature image of the third feature image corresponding to the current feature matching network as the first feature parameter, the sub-feature image of the second feature image corresponding to the current feature matching network as the second feature parameter, and the sub-feature image of the first feature image corresponding to the current feature matching network as the third feature parameter, to obtain the intermediate feature image output by the current feature matching network; under the condition that the current feature matching network is a feature matching network other than the first-layer feature matching network, inputting into the current convolution network corresponding to the current feature matching network the sub-feature image of the second feature image corresponding to the previous-layer feature matching network as the fourth feature parameter and the intermediate feature image output by the previous-layer feature matching network as the fifth feature parameter, to obtain the reference feature image output by the current convolution network; and inputting into the current feature matching network the sub-feature images of the third feature image and the first feature image corresponding to the current feature matching network as the first feature parameter and the third feature parameter respectively, and the reference feature image output by the current convolution network as the second feature parameter, to obtain the intermediate feature image output by the current feature matching network; wherein the fourth feature image is the intermediate feature image output by a last-layer feature matching network of the multi-layer feature matching network.
2. The method of claim 1, wherein acquiring the fourth feature image based on the correlation of the first feature image and the second feature image, and the third feature image comprises:
normalizing the correlation degree between a first pixel point in the second characteristic image and each pixel point in the first characteristic image to obtain a target correlation degree, wherein the first pixel point is any pixel point in the second characteristic image;
and performing a weighted summation of the pixel points in the third feature image using the target correlation as weights, and determining the value of a second pixel point in the fourth feature image, wherein the second pixel point is the pixel point in the fourth feature image corresponding to the first pixel point.
3. The method of claim 1, wherein converting the fourth feature image into the second color map corresponding to the second line manuscript based on the second feature image comprises:
connecting the second feature image and the fourth feature image in parallel to obtain a fifth feature image;
and convolving the fifth characteristic image, and up-sampling the characteristic image obtained after convolution to obtain the second color map corresponding to the second line manuscript.
4. A method according to any one of claims 1 to 3, wherein before acquiring the fourth feature image from the correlation of the first feature image and the second feature image, and the third feature image, the method further comprises:
extracting semantic features of the first line manuscript to obtain a sixth feature image;
extracting semantic features of the second line manuscript to obtain a seventh feature image;
updating the first characteristic image by using the sixth characteristic image to obtain the updated first characteristic image;
and updating the second characteristic image by using the seventh characteristic image to obtain the updated second characteristic image.
5. The method according to claim 4, wherein:
updating the first feature image using the sixth feature image to obtain the updated first feature image comprises: connecting the sixth feature image and the first feature image in parallel, or adding them, to obtain the updated first feature image;
updating the second feature image using the seventh feature image to obtain the updated second feature image comprises: connecting the seventh feature image and the second feature image in parallel, or adding them, to obtain the updated second feature image.
6. An apparatus for acquiring a line manuscript corresponding color chart, comprising:
the first extraction unit is used for respectively extracting a first characteristic image of a first line manuscript, a second characteristic image of a second line manuscript and a third characteristic image of a first color chart, wherein the first line manuscript and the second line manuscript have an association relationship, and the first color chart is the color chart of the first line manuscript;
an obtaining unit, configured to obtain a fourth feature image according to a correlation degree between the first feature image and the second feature image, and the third feature image, where the fourth feature image is a feature image corresponding to a color chart of the second line manuscript;
the conversion unit is used for converting the fourth characteristic image into a second color chart corresponding to the second line manuscript according to the second characteristic image;
the obtaining unit inputs the first feature image, the second feature image and the third feature image into a target decoding model to obtain the fourth feature image output by the target decoding model, wherein the target decoding model is obtained by training an initial decoding model by using a plurality of training line drafts with association relations and a plurality of training color maps which are in one-to-one correspondence with the plurality of training line drafts, and the target decoding model is used for weighting the correlation degree of the second feature image and the first feature image according to the third feature image to obtain the fourth feature image; the target decoding model comprises a multi-layer feature matching network and a multi-layer convolution network, the feature matching network is used for weighting the correlation degree of a second feature parameter and a third feature parameter according to a first feature parameter and outputting an intermediate feature image, the convolution network is used for enabling a fourth feature parameter and a fifth feature parameter to pass through a convolution layer and an up-sampling layer of the convolution network in sequence after being connected in parallel and outputting a reference feature image, and the first feature image, the second feature image and the third feature image are input into the target decoding model, so that the fourth feature image output by the target decoding model is obtained, and the method comprises the following steps: under the condition that the current feature matching network is a first layer feature matching network, taking a sub-feature image corresponding to the third feature image and the current feature matching network as the first feature parameter, taking a sub-feature image corresponding to the second feature image and the current feature matching network as the second feature parameter, and taking a sub-feature image corresponding to the first feature image and the current feature matching network as the third feature parameter to be input into the current feature matching network to obtain an intermediate feature image output by the current feature matching network; when the current feature matching network is a feature matching network except the first layer feature matching network, taking a sub-feature image corresponding to a feature matching network of a previous layer of the second feature image and the current feature matching network as the fourth feature parameter, and taking an intermediate feature image output by the previous layer of the feature matching network as the fifth feature parameter to be input into a current convolution network corresponding to the current feature matching network to obtain a reference feature image output by the current convolution network; inputting the sub-feature image corresponding to the third feature image and the current feature matching network as the first feature parameter and the third feature parameter, and inputting the reference feature image output by the current convolution network as the second feature parameter to the current feature matching network to obtain an intermediate feature image output by the current feature matching network; the fourth characteristic image is an intermediate characteristic image output by the last layer of characteristic matching network of the multi-layer characteristic matching network.
7. A computer-readable storage medium, characterized in that the storage medium has stored therein a computer program, wherein the computer program is arranged to perform the method of any of claims 1 to 5 when run.
8. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, the processor being arranged to perform the method of any of claims 1 to 5 by means of the computer program.
CN202010347093.0A 2020-04-27 2020-04-27 Method and device for acquiring line manuscript corresponding color map, storage medium and electronic device Active CN111553961B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010347093.0A CN111553961B (en) 2020-04-27 2020-04-27 Method and device for acquiring line manuscript corresponding color map, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010347093.0A CN111553961B (en) 2020-04-27 2020-04-27 Method and device for acquiring line manuscript corresponding color map, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN111553961A CN111553961A (en) 2020-08-18
CN111553961B true CN111553961B (en) 2023-09-08

Family

ID=72005858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010347093.0A Active CN111553961B (en) 2020-04-27 2020-04-27 Method and device for acquiring line manuscript corresponding color map, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN111553961B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937356A (en) * 2022-04-25 2023-04-07 Beijing Zitiao Network Technology Co., Ltd. Image processing method, apparatus, device and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014044595A (en) * 2012-08-27 2014-03-13 Utsunomiya Univ Line drawing coloring system
CN108615252A (en) * 2018-05-03 2018-10-02 苏州大学 The training method and device of color model on line original text based on reference picture
CN109147003A (en) * 2018-08-01 2019-01-04 北京东方畅享科技有限公司 Method, equipment and the storage medium painted to line manuscript base picture
CN109859288A (en) * 2018-12-25 2019-06-07 北京飞搜科技有限公司 Based on the image painting methods and device for generating confrontation network
JP6676744B1 (en) * 2018-12-28 2020-04-08 株式会社Cygames Image processing method, image processing system and program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7252701B2 * 2017-05-23 2023-04-05 Preferred Networks, Inc. Systems, Programs and Methods


Also Published As

Publication number Publication date
CN111553961A (en) 2020-08-18

Similar Documents

Publication Publication Date Title
US11551333B2 (en) Image reconstruction method and device
CN108022212B (en) High-resolution picture generation method, generation device and storage medium
US10192163B2 (en) Audio processing method and apparatus based on artificial intelligence
US20200349680A1 (en) Image processing method and device, storage medium and electronic device
US20210160556A1 (en) Method for enhancing resolution of streaming file
CN110163801B (en) Image super-resolution and coloring method, system and electronic equipment
CN111476708B (en) Model generation method, model acquisition method, device, equipment and storage medium
CN110751649B (en) Video quality evaluation method and device, electronic equipment and storage medium
CN110599395A (en) Target image generation method, device, server and storage medium
CN112950471A (en) Video super-resolution processing method and device, super-resolution reconstruction model and medium
CN107464217B (en) Image processing method and device
CN113066017A (en) Image enhancement method, model training method and equipment
CN113222855A (en) Image recovery method, device and equipment
CN110874575A (en) Face image processing method and related equipment
CN110852980A (en) Interactive image filling method and system, server, device and medium
US11887277B2 (en) Removing compression artifacts from digital images and videos utilizing generative machine-learning models
US10445921B1 (en) Transferring motion between consecutive frames to a digital image
CN113822790A (en) Image processing method, device, equipment and computer readable storage medium
CN111553961B (en) Method and device for acquiring line manuscript corresponding color map, storage medium and electronic device
CN115713462A (en) Super-resolution model training method, image recognition method, device and equipment
WO2022173814A1 (en) System and method for photorealistic image synthesis using unsupervised semantic feature disentanglement
WO2024174645A1 (en) Deep learning-based distorted image reconstruction method and related apparatus
US20230060988A1 (en) Image processing device and method
CN115272667B (en) Farmland image segmentation model training method and device, electronic equipment and medium
US20240161327A1 (en) Diffusion models having continuous scaling through patch-wise image generation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant