
WO2018050001A1 - Method and device for generating animation data - Google Patents


Info

Publication number
WO2018050001A1
WO2018050001A1 PCT/CN2017/100472 CN2017100472W
Authority
WO
WIPO (PCT)
Prior art keywords
animation
segment
sample
target
bone
Prior art date
Application number
PCT/CN2017/100472
Other languages
French (fr)
Chinese (zh)
Inventor
方小致
吴松城
陈军宏
Original Assignee
厦门幻世网络科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 厦门幻世网络科技有限公司
Publication of WO2018050001A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation

Definitions

  • the present application relates to the field of computer technology, in particular to multimedia technology, and more particularly to a method and apparatus for generating animation data.
  • the purpose of the present application is to propose a method and apparatus for generating animation data to solve the technical problems mentioned in the background section above.
  • the present application provides a method for generating animation data, the method comprising: acquiring a target motion parameter of at least one target animation segment in a target animation to be generated; mapping the target motion parameter to a vector matching the input of a pre-trained radial basis function neural network model, and inputting the vector into the radial basis function neural network model, wherein the radial basis function neural network model is obtained by training on each sample animation segment in a sample animation segment sequence; determining each component of the vector output by the radial basis function neural network model as a fusion weight coefficient of the corresponding sample animation segment in the sample animation segment sequence; fusing the animation data of the sample animation segments in the sample animation segment sequence according to the fusion weight coefficients to obtain animation data of the target animation segment; and generating animation data of the target animation based on the animation data of the at least one target animation segment.
  • each of the sample animation segments in the sequence of sample animation segments is a skeletal animation.
  • the method includes a step of training the radial basis function neural network model, the step comprising: for each sample animation segment in the sample animation segment sequence, mapping the motion parameters of the sample animation segment to a first vector, and generating a second vector according to the order of the sample animation segment in the sequence, wherein the dimension of the second vector is the number of sample animation segments in the sequence, the component corresponding to the order of the sample animation segment is set to 1, and the other components are set to 0; determining the dimension of the first vector as the number of input-layer nodes of the radial basis function neural network model to be trained, and determining the number of sample animation segments in the sequence as the number of nodes in the intermediate hidden layer and in the output layer of the model; and using the first vector corresponding to each sample animation segment as the input of the radial basis function neural network model and the second vector corresponding to the same sample animation segment as its output to train the model.
  • each sample animation segment in the sample animation segment sequence has been divided in advance into at least one time segment according to key time points; and the fusing of the animation data of the sample animation segments according to the fusion weight coefficients comprises: performing a weighted average of the durations of the time segments of the sample animation segments according to the fusion weight coefficients to determine the duration of each time segment in the target animation segment; adjusting the duration of each time segment of each sample animation segment to be consistent with the corresponding time segment in the target animation segment; and, for each adjusted sample animation segment, performing fusion interpolation on the skeletal parameters of the animation frames in each time segment according to the fusion weight coefficients to obtain the skeletal parameters of the animation frames within the corresponding time segments of the target animation segment.
  • the performing of fusion interpolation on the skeletal parameters of the animation frames in each time segment according to the fusion weight coefficients includes at least one of: performing spherical interpolation on the rotation parameters of each bone in the animation frame; and performing linear interpolation on the position parameters of the root bone in the animation frame.
  • before the fusion interpolation is performed on the skeletal parameters of the animation frames in each time segment, the method further comprises: obtaining a horizontal orientation difference and/or a horizontal position difference between the root bone of each sample animation segment and that of the target animation segment; and adjusting the horizontal orientation and/or horizontal position of the root bone of each sample animation segment to eliminate the horizontal orientation difference and/or horizontal position difference.
  • the generating of the animation data of the target animation based on the animation data of the at least one target animation segment comprises: acquiring the skeletal parameters of the first animation frame of the current target animation segment and the skeletal parameters of the last animation frame of the previous target animation segment; calculating, by interpolation from the skeletal parameters of these two animation frames, the skeletal parameters of intermediate animation frames to be inserted between them; and inserting the intermediate animation frames between the two animation frames to generate the animation data of the target animation.
  • the method further includes: for each animation frame in a time segment of the target animation segment, obtaining a target position parameter of a bone to be corrected in the animation frame; determining the difference between the current position parameter and the target position parameter of the bone to be corrected; and using inverse kinematics iteration to adjust the rotation parameters of the bone to be corrected and its associated bones so as to correct the difference.
  • the using of inverse kinematics iteration to adjust the rotation parameters of the bone to be corrected and its associated bones comprises: acquiring a pre-saved adjustment value that the inverse kinematics iteration applied to the rotation parameter of the bone to be corrected in the previous animation frame; and setting that adjustment value as the initial adjustment value when the rotation parameter of the bone is adjusted by inverse kinematics iteration for the current animation frame.
  • before setting the adjustment value as the initial adjustment value for the inverse kinematics iteration of the current animation frame, the method further includes: attenuating the adjustment value.
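As an illustration of this warm-start-with-attenuation scheme, the sketch below runs a minimal planar cyclic-coordinate-descent (CCD) inverse kinematics solver per frame, seeding each frame with the attenuated adjustment values saved from the previous frame. The planar two-bone chain, the CCD algorithm, and all function names are assumptions for illustration; the patent does not specify a particular IK solver.

```python
import math

def fk(angles, lengths):
    """Forward kinematics: positions of every joint of a planar bone chain,
    plus the end effector, starting from the origin."""
    x = y = a = 0.0
    pts = [(0.0, 0.0)]
    for ang, ln in zip(angles, lengths):
        a += ang
        x += ln * math.cos(a)
        y += ln * math.sin(a)
        pts.append((x, y))
    return pts

def ccd_pass(angles, lengths, target):
    """One cyclic-coordinate-descent pass: rotate each joint, from the tip
    inward, so that the end effector swings toward the target position."""
    for i in reversed(range(len(angles))):
        pts = fk(angles, lengths)
        jx, jy = pts[i]
        ex, ey = pts[-1]
        cur = math.atan2(ey - jy, ex - jx)                # joint -> effector
        des = math.atan2(target[1] - jy, target[0] - jx)  # joint -> target
        angles[i] += des - cur
    return angles

def solve_frame(angles, lengths, target, prev_adjust, decay=0.5, passes=50):
    """Solve the IK for one animation frame, warm-started with the attenuated
    adjustment values saved from the previous frame; returns the new angles
    and the adjustment values to save for the next frame."""
    angles = [a + decay * d for a, d in zip(angles, prev_adjust)]
    start = list(angles)
    for _ in range(passes):
        angles = ccd_pass(angles, lengths, target)
    adjust = [a - s for a, s in zip(angles, start)]
    return angles, adjust
```

Warm-starting from the previous frame's (decayed) adjustment exploits frame-to-frame coherence: consecutive frames usually need similar corrections, so the iteration converges in fewer passes.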
  • the present application provides an apparatus for generating animation data, the apparatus comprising: an acquiring unit, configured to acquire target motion parameters of at least one target animation segment in a target animation to be generated; an input unit, configured to map the target motion parameters to a vector matching the input of a pre-trained radial basis function neural network model and input the vector into the radial basis function neural network model, wherein the radial basis function neural network model is obtained by training on each sample animation segment in a sample animation segment sequence; a determining unit, configured to determine each component of the vector output by the radial basis function neural network model as a fusion weight coefficient of the corresponding sample animation segment in the sample animation segment sequence; a fusion unit, configured to fuse the animation data of the sample animation segments in the sample animation segment sequence according to the fusion weight coefficients to obtain animation data of the target animation segment; and a generating unit, configured to generate the animation data of the target animation based on the animation data of the at least one target animation segment.
  • each of the sample animation segments in the sequence of sample animation segments is a skeletal animation.
  • the apparatus further includes a training unit for training the radial basis function neural network model, the training unit being configured to: for each sample animation segment in the sample animation segment sequence, map the motion parameters of the sample animation segment to a first vector, and generate a second vector according to the order of the sample animation segment in the sequence, wherein the dimension of the second vector is the number of sample animation segments in the sequence, the component corresponding to the order of the sample animation segment is set to 1, and the other components are set to 0; determine the dimension of the first vector as the number of input-layer nodes of the radial basis function neural network model to be trained, and determine the number of sample animation segments in the sequence as the number of nodes in the intermediate hidden layer and in the output layer of the model; and use the first vector corresponding to each sample animation segment as the input of the radial basis function neural network model and the second vector corresponding to the same sample animation segment as its output to train the model.
  • each sample animation segment in the sample animation segment sequence has been divided in advance into at least one time segment according to key time points; and the fusion unit includes: a duration determining subunit, configured to perform a weighted average of the durations of the time segments of the sample animation segments according to the fusion weight coefficients to determine the duration of each time segment in the target animation segment; a duration adjusting subunit, configured to adjust the duration of each time segment of each sample animation segment to be consistent with the corresponding time segment in the target animation segment; and a fusion subunit, configured to perform, for the adjusted sample animation segments, fusion interpolation on the skeletal parameters of the animation frames in each time segment according to the fusion weight coefficients, to obtain the skeletal parameters of the animation frames in the corresponding time segments of the target animation segment.
  • the fusion subunit is further configured to perform at least one of: spherical interpolation on the rotation parameters of each bone in the animation frame; and linear interpolation on the position parameters of the root bone in the animation frame.
  • the fusion unit further includes a registration subunit, configured to: acquire a horizontal orientation difference and/or a horizontal position difference between the root bone of each sample animation segment and that of the target animation segment; and adjust the horizontal orientation and/or horizontal position of the root bone of each sample animation segment to eliminate the horizontal orientation difference and/or horizontal position difference.
  • the generating unit includes: an obtaining subunit, configured to acquire the skeletal parameters of the first animation frame of the current target animation segment and the skeletal parameters of the last animation frame of the previous target animation segment; a calculating subunit, configured to calculate, by interpolation from the skeletal parameters of the two animation frames, the skeletal parameters of intermediate animation frames to be inserted between them; and an inserting subunit, configured to insert the intermediate animation frames between the two animation frames to generate the animation data of the target animation.
  • the apparatus further includes: a position parameter obtaining unit, configured to acquire, for each animation frame in a time segment of the target animation segment, a target position parameter of the bone to be corrected in the animation frame; a difference determining unit, configured to determine the difference between the current position parameter and the target position parameter of the bone to be corrected; and an adjusting unit, configured to iteratively adjust the rotation parameters of the bone to be corrected and its associated bones using inverse kinematics to correct the difference.
  • the adjusting unit includes: an adjustment value acquiring subunit, configured to acquire a pre-saved adjustment value that the inverse kinematics iteration applied to the rotation parameter of the bone to be corrected in the previous animation frame; and a setting subunit, configured to set that adjustment value as the initial adjustment value when the rotation parameter of the bone is adjusted by inverse kinematics iteration for the current animation frame.
  • the adjusting unit further includes an attenuation subunit, configured to attenuate the adjustment value before it is set as the initial adjustment value for the inverse kinematics iteration of the current animation frame.
  • the method and apparatus for generating animation data provided by the present application can obtain a target animation segment from target motion parameters by using a sample animation segment sequence together with a radial basis function neural network model trained on that sequence, and finally assemble the target animation.
  • this realizes the automatic generation of animation data, which reduces both the burden of designing animations manually and the pressure on data storage.
  • FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
  • FIG. 2 is a flow diagram of one embodiment of a method for generating animation data in accordance with the present application;
  • FIG. 3 is a flow chart of still another embodiment of a method for generating animation data in accordance with the present application.
  • FIG. 4 is a schematic structural diagram of an embodiment of an apparatus for generating animation data according to the present application.
  • FIG. 5 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server of an embodiment of the present application.
  • FIG. 1 illustrates an exemplary system architecture 100 of an embodiment of a method and apparatus for generating animation data to which the present application may be applied.
  • system architecture 100 can include terminal devices 101, 102, 103, network 104, and server 105.
  • the network 104 is used to provide a medium for communication links between the terminal devices 101, 102, 103 and the server 105.
  • Network 104 may include various types of connections, such as wired, wireless communication links, fiber optic cables, and the like.
  • the user can interact with the server 105 over the network 104 using the terminal devices 101, 102, 103 to receive or transmit messages and the like.
  • Various communication client applications such as animation design software, animation playing software, etc., can be installed on the terminal devices 101, 102, and 103.
  • the terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting animation display or animation design, including but not limited to smart phones, tablets, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like.
  • the server 105 may be a server that provides various services, such as a background server that provides support for animation displayed on the terminal devices 101, 102, 103.
  • the background server can analyze and process data such as the received animation generation request, and feed back the processing result (for example, animation data) to the terminal device.
  • the method for generating an animation provided by the embodiments of the present application may be performed by the server 105, by the terminal devices 101, 102, 103, or jointly by the server 105 and the terminal devices 101, 102, 103, each performing different steps; accordingly, the apparatus for generating the animation may be provided in the server 105, in the terminal devices 101, 102, 103, or with different units provided in the server 105 and the terminal devices 101, 102, 103 respectively.
  • the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. Depending on implementation needs, there can be any number of terminal devices, networks, and servers.
  • a flow 200 of one embodiment of a method for generating animation data in accordance with the present application is illustrated.
  • the method for generating animation data includes the following steps:
  • Step 201: Acquire a target motion parameter of at least one target animation segment in the target animation to be generated.
  • the target animation to be generated is an animation that the user wants, and the target animation may be composed of at least one target animation segment.
  • an electronic device on which the method for generating animation data runs (such as the server or a terminal device shown in FIG. 1) can acquire the target motion parameters of the target animation segment in various ways.
  • when the electronic device is the server, it may acquire the target motion parameter sent by a terminal device, acquire a target motion parameter pre-stored locally on the server, or obtain the parameter from other servers; when the electronic device is a terminal device, the target motion parameter can usually be obtained from the user, or it can be obtained from other devices.
  • the target motion parameter of a target animation segment reflects the motion state of the moving object in the target animation segment.
  • the target motion parameter can be a single parameter or multiple parameters.
  • for example, the target motion parameters of a target animation segment of a walking animation may include the forward speed, lateral speed, and turning speed.
  • Step 202: The target motion parameter is mapped to a vector matching the input of the pre-trained radial basis function neural network model, which is then input into the radial basis function neural network model.
  • after obtaining the target motion parameter, the electronic device may map it to a vector matching the input of the pre-trained radial basis function neural network model, and then input the vector into the radial basis function neural network model. The radial basis function neural network model is obtained by training on each sample animation segment in the sample animation segment sequence.
  • the vector formed by the mapping may be bounded in advance before it is input into the radial basis function neural network model; that is, when the vector exceeds the range of the training data, it is constrained so that the vector input into the radial basis function neural network model does not exceed the range of the training data.
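A minimal sketch of this bounding step, assuming the per-component minima and maxima of the training motion parameters are known (the function and parameter names below are illustrative):

```python
import numpy as np

def clamp_to_training_range(vec, train_min, train_max):
    """Constrain each component of the mapped parameter vector to the range
    covered by the training data, so the RBF model is never queried outside
    the region it was fitted on."""
    return np.clip(vec, train_min, train_max)
```

For example, clamping the vector `[5.0, -1.0]` against training bounds `[0.0, 0.0]` and `[3.0, 3.0]` yields `[3.0, 0.0]`.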
  • Step 203: Determine each component of the vector output by the radial basis function neural network model as a fusion weight coefficient of the corresponding sample animation segment in the sample animation segment sequence.
  • after the vector is input, the radial basis function neural network model outputs a corresponding vector, and the electronic device determines each component of the output vector as a fusion weight coefficient. The fusion weight coefficients determine which sample animation segments in the sample animation segment sequence are used in the subsequent fusion and in what proportions.
  • Step 204: According to the fusion weight coefficients, fuse the animation data of the sample animation segments in the sample animation segment sequence to obtain the animation data of the target animation segment.
  • according to the fusion weight coefficients, the electronic device can fuse the animation data of the sample animation segments in the sample animation segment sequence, and the fused animation data can be used as the animation data of the corresponding target animation segment.
  • Step 205: Generate the animation data of the target animation based on the animation data of the at least one target animation segment.
  • the electronic device may splice the target animation segments together in order, thereby obtaining the animation data of the entire target animation.
  • each sample animation segment in the sequence of sample animation segments is a skeletal animation.
  • a skeletal animation consists of interconnected "bones". By controlling the position, orientation, and size of the bones, and attaching skin data to them, the desired visible animated image can be rendered.
  • the bones form a hierarchy, the skeletal structure, according to the characteristics of the animated character.
  • the skeletal structure is a set of bones organized in a tree according to parent-child relationships, forming the entire skeleton of the character model.
  • the bone at the root of the tree is called the root bone, which is the key point in the formation of the bone structure.
  • each bone is the parent bone of the bones at the next level below it.
  • each bone has two matrices: the initial matrix, which represents the initial position of the bone, and the transformation matrix, which reflects the transformation applied to the bone. Multiplying the initial matrix by the transformation matrix yields the final matrix of the bone, which is mainly used to transform the bone.
  • the initial matrix, the transformation matrix, and the final matrix can be characterized either by a relative matrix or by an absolute matrix.
  • the absolute matrix is the matrix of the current bone relative to the world, and the relative matrix is the matrix of the current bone relative to its parent bone.
  • the absolute matrix of the current bone can be obtained by multiplying its relative matrix by the absolute matrix of its parent bone; the absolute matrix of the parent bone can in turn be obtained by multiplying the parent's relative matrix by the absolute matrix of the bone above it. By iterating this multiplication up to the root bone, the absolute matrix of the current bone can be calculated.
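The iterative multiplication up the parent chain can be sketched as follows. The 4x4 homogeneous matrices, the column-vector convention (parent transform applied on the left), and the dictionary-based skeleton layout are assumptions for illustration:

```python
import numpy as np

def absolute_matrix(name, parent, relative):
    """World (absolute) matrix of bone `name`, obtained by multiplying
    relative matrices up the parent chain until the root bone is reached.
    `parent[name]` is the parent bone's name (None for the root bone);
    `relative[name]` is the bone's 4x4 matrix relative to its parent."""
    m = relative[name]
    p = parent[name]
    while p is not None:
        m = relative[p] @ m   # apply the parent's transform on the left
        p = parent[p]
    return m
```

For instance, if the root bone is translated by 1 unit along x and a child bone by a further 2 units relative to the root, the child's absolute matrix carries a translation of 3 units.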
  • adjacent bones are joined together by joints, which allow relative movement. When rotations occur between the bones, the bones that make up the animated character can perform different actions, achieving different animation effects.
  • skeletal animation only needs to store bone transformation data, and does not need to store data of each vertex in every frame, so using skeletal animation can save a lot of storage space.
  • the method further includes the step of training the radial basis function neural network model.
  • the step of training the radial basis function neural network model specifically includes the following process:
  • for each sample animation segment in the sample animation segment sequence, the motion parameters of the sample animation segment are mapped to a first vector, and a second vector is generated according to the order of the sample animation segment in the sequence.
  • the sample animation segment sequence includes at least one sample animation segment, and each sample animation segment may have a corresponding serial number.
  • for example, the sample animation segment sequence includes sample animation segment 1, sample animation segment 2, ..., sample animation segment n.
  • the motion parameter may include at least one physical quantity, and the value of each physical quantity of the current motion is used as a component to form the first vector.
  • for example, the motion parameters of a walking animation may include three physical quantities, such as the forward speed, the lateral speed, and the turning speed; the first vector to which such a sample animation segment is mapped then contains three components corresponding to the values of the forward speed, lateral speed, and turning speed, i.e. the dimension of the first vector is 3. It should be noted that the motion parameters of the sample animation segments usually need to be of the same kind as the target motion parameters of the target animation.
  • the dimension of the second vector is the number of sample animation segments in the sample animation segment sequence; the component corresponding to the order of the sample animation segment in the second vector is set to 1, and the other components are set to 0.
  • for example, suppose the sample animation segment sequence includes sample animation segment 1, sample animation segment 2, ..., sample animation segment n, so that the number of sample animation segments is n.
  • the dimension of the second vector corresponding to each sample animation segment is then n; that is, the second vector can be expressed in the form (X1, X2, ..., Xn).
  • for sample animation segment 1, since its serial number is 1, the X1 component is set to 1 and X2, ..., Xn are set to 0; that is, the second vector corresponding to sample animation segment 1 is (1, 0, ..., 0).
  • similarly, the second vector corresponding to sample animation segment 2 is (0, 1, 0, ..., 0), and the second vector corresponding to sample animation segment n is (0, 0, ..., 0, 1).
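The construction of these second vectors amounts to one-hot encoding of each segment's serial number; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def second_vector(order, n):
    """One-hot 'second vector' for the sample animation segment whose serial
    number is `order` (1-based) in a sequence of n sample animation segments."""
    v = np.zeros(n)
    v[order - 1] = 1.0
    return v
```

For example, with n = 4, `second_vector(1, 4)` is `[1, 0, 0, 0]` and `second_vector(4, 4)` is `[0, 0, 0, 1]`.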
  • the dimension of the first vector is determined as the number of input-layer nodes of the radial basis function neural network model to be trained, and the number of sample animation segments in the sample animation segment sequence is determined as both the number of nodes in the intermediate hidden layer and the number of nodes in the output layer of the model.
  • because the first vector corresponding to a sample animation segment is used as the input of the radial basis function neural network model and the second vector as its output, sizing the layers in this way makes the scale of the radial basis function neural network model match the dimensions of the first vector and the second vector.
  • the intermediate hidden layer of the radial basis function neural network model may adopt a Gaussian kernel function.
  • the first vector corresponding to each sample animation segment is used as the input of the radial basis function neural network model, and the second vector corresponding to the same sample animation segment is used as its output, and the radial basis function neural network model is trained on these pairs. Since the scale of the model matches the dimensions of the first and second vectors, the training can proceed successfully.
  • that is, training consists of feeding the first vector and the second vector corresponding to the same sample animation segment to the model as its input and output, and using a gradient-based method to learn the connection weights between the intermediate hidden layer and the output layer as well as the widths of the hidden-layer units; since the inputs and outputs are fixed during training, the function realized by the intermediate hidden layer is continuously adjusted to fit them.
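A minimal sketch of such an RBF network. As a simplification of the training procedure described above, it uses Gaussian hidden units centred on the first vectors with a fixed width, and solves the output weights by least squares rather than by gradient training; all names are illustrative:

```python
import numpy as np

def train_rbf(X, Y, width=1.0):
    """Fit a minimal RBF network with one Gaussian hidden unit per sample
    animation segment (centres = the first vectors) and output weights
    solved by least squares.
    X: (n, d) first vectors; Y: (n, n) one-hot second vectors."""
    centres = X.copy()
    d2 = np.sum((X[:, None, :] - centres[None, :, :]) ** 2, axis=2)
    H = np.exp(-d2 / (2.0 * width ** 2))       # hidden-layer activations
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)  # output-layer weights
    return centres, W, width

def predict_rbf(x, centres, W, width):
    """Fusion weight coefficients for a new motion-parameter vector x."""
    h = np.exp(-np.sum((centres - x) ** 2, axis=1) / (2.0 * width ** 2))
    return h @ W
```

Queried at a training input, such a network reproduces the corresponding one-hot second vector; queried between training inputs, it yields smoothly blended fusion weight coefficients over the sample animation segments.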
  • each sample animation segment in the sample animation segment sequence has been divided in advance into at least one time segment according to key time points; step 204 then specifically includes: first, performing a weighted average of the durations of the time segments of the sample animation segments according to the fusion weight coefficients of the sample animation segments to determine the duration of each time segment in the target animation segment; second, adjusting the duration of each time segment of each sample animation segment to be consistent with the corresponding time segment in the target animation segment; and finally, for the adjusted sample animation segments, performing fusion interpolation on the skeletal parameters of the animation frames in each time segment according to the fusion weight coefficients to obtain the skeletal parameters of the animation frames in the corresponding time segments of the target animation segment.
  • the key time points can be determined based on the actions represented by the sample animation segments. Taking the walking animation as an example: since the posture changes significantly before and after each change of foot, while the posture between two changes of foot varies gradually, the key time points of the walking animation may be the time points of each change of foot.
  • the sample animation segment is then divided at the foot-change time points into time segments whose corresponding actions are, for example, the first step, the second step, the third step, and the fourth step.
  • in this way, the skeletal parameters of the animation frames in each time segment of the target animation segment are calculated using the skeletal parameters of the corresponding time segments of the respective sample animation segments.
  • Since the durations of the time segments in the sample animation segments are not necessarily the same, the duration of each time segment in the target animation segment must first be determined in some manner. In this implementation, the durations of the time segments in the sample animation segments are weighted and averaged using the fusion weight coefficients described above to determine the durations of the time segments in the target animation segment.
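The weighted-average rule just described can be sketched as follows; the function name and the sample durations are purely illustrative, not part of the disclosure:

```python
def blend_durations(sample_durations, weights):
    """sample_durations: per-sample lists of time-slice durations (seconds).
    weights: fusion weight coefficients, one per sample animation segment."""
    n_slices = len(sample_durations[0])
    total = sum(weights)
    return [
        sum(w * durs[i] for w, durs in zip(weights, sample_durations)) / total
        for i in range(n_slices)
    ]

# Two sample walk cycles, each divided into two time slices:
samples = [[0.5, 0.7], [0.9, 1.1]]
target = blend_durations(samples, weights=[0.25, 0.75])
```

With weights [0.25, 0.75], each blended slice duration lands between those of the two samples, closer to the more heavily weighted one.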
  • Because the subsequent fusion operates on the bone parameters of the corresponding animation frames within each time segment, and the durations of the time segments in the sample animation segments are not necessarily the same, the duration of each time segment of every sample animation segment must first be adjusted to match the corresponding time segment of the target animation segment.
  • After this adjustment, the bone parameters of the corresponding animation frames in the time segments of the adjusted sample animation segments can be fused to obtain the bone parameters of the animation frames within the corresponding time segments of the target animation segment.
  • Performing fusion interpolation on the bone parameters of the animation frames in each time segment according to the fusion weight coefficients may include: performing spherical interpolation on the rotation parameters of each bone in the animation frame, and performing linear interpolation on the position parameters of the root bone in the animation frame.
  • The bone parameters of each animation frame may include the position parameters of the root bone and the rotation parameters of each bone.
  • The rotation parameter of the root bone can be represented by the absolute matrix of the root bone, while the motion of a non-root bone is mostly rotation relative to its parent bone, so its rotation parameter can be represented by the bone's relative matrix.
  • The position parameter can be represented by a three-dimensional vector, and the rotation parameter by a four-dimensional vector. Therefore, when fusing and interpolating, linear interpolation can be used for the position parameters, and spherical interpolation for the rotation parameters.
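As a minimal sketch of the two interpolation kinds just named, the following assumes rotations are stored as unit quaternions (w, x, y, z) — a common choice for a four-dimensional rotation parameter — and root positions as 3-D vectors; the function names are illustrative:

```python
import math

def lerp(p, q, t):
    """Linear interpolation for position parameters."""
    return [(1 - t) * a + t * b for a, b in zip(p, q)]

def slerp(q0, q1, t):
    """Spherical interpolation between two unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                      # take the shorter arc
        q1 = [-c for c in q1]
        dot = -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)
    if theta < 1e-6:                   # nearly identical: fall back to lerp
        return lerp(q0, q1, t)
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(q0, q1)]

identity = [1.0, 0.0, 0.0, 0.0]
quarter_turn = [math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4)]  # 90° about z
half = slerp(identity, quarter_turn, 0.5)                                # 45° about z
root_pos = lerp([0.0, 0.0, 0.0], [2.0, 0.0, 4.0], 0.5)
```

Spherical interpolation keeps the blended rotation on the unit sphere at constant angular speed, which a plain component-wise lerp of quaternions does not.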
  • Before performing the fusion interpolation on the bone parameters of the animation frames in each time segment according to the fusion weight coefficients, step 204 may further include: first, acquiring the horizontal orientation difference and/or horizontal position difference between the root bone of each sample animation segment and that of the target animation segment; then, adjusting the horizontal orientation and/or horizontal position of the root bone of each sample animation segment to eliminate the horizontal orientation difference and/or horizontal position difference.
  • Because the horizontal orientation and/or horizontal position of the root bone may differ between the sample animation segments and the target animation segment, the character would otherwise appear offset; adjusting the horizontal orientation and/or horizontal position of the root bone in each sample animation segment eliminates this difference before fusion.
  • The horizontal orientation difference and/or horizontal position difference may be calculated from the horizontal orientation and/or horizontal position of the root bone in the starting frame of each sample animation segment, and the root bone in every animation frame of the sample animation segment can then be adjusted as a whole according to the calculated adjustment parameters.
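A minimal sketch of removing the horizontal position difference, assuming root positions are [x, y, z] with y vertical and using the starting frame as the reference; eliminating the horizontal orientation difference would similarly rotate each frame about the vertical axis (not shown). Names are illustrative:

```python
def align_horizontal(frames, target_start):
    """frames: per-frame root positions [x, y, z] of one sample segment.
    target_start: the target segment's root position in its starting frame."""
    dx = frames[0][0] - target_start[0]
    dz = frames[0][2] - target_start[2]
    # Shift every frame by the same offset so the starting frames coincide;
    # the vertical (y) component is left untouched.
    return [[x - dx, y, z - dz] for x, y, z in frames]

aligned = align_horizontal([[3.0, 1.0, 5.0], [4.0, 1.2, 6.0]],
                           target_start=[0.0, 0.0, 0.0])
```

Applying one rigid offset to the whole segment preserves the motion within the segment while removing the mismatch between segments.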
  • Step 205 specifically includes: acquiring the bone parameters of the first animation frame of the current target animation segment and the bone parameters of the last animation frame of the previous target animation segment; calculating, by interpolation from the bone parameters of these two animation frames, the bone parameters of intermediate animation frames to be inserted between them; and inserting the intermediate animation frames between the two animation frames to generate the animation data of the target animation.
  • By computing intermediate frames through interpolation and inserting them between the two animation frames, this implementation enhances the smoothness of the change between animation frames and weakens any jump in the presented action.
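The stitching step can be sketched as follows for the root-position part of the bone parameters (rotations would use spherical rather than linear interpolation); the frame count and names are illustrative:

```python
def insert_transition(prev_last, curr_first, n_intermediate):
    """Linearly interpolate n_intermediate frames between the last frame of the
    previous segment (prev_last) and the first frame of the current one
    (curr_first), both given as root positions [x, y, z]."""
    frames = []
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)
        frames.append([(1 - t) * a + t * b for a, b in zip(prev_last, curr_first)])
    return frames

bridge = insert_transition([0.0, 0.0, 0.0], [3.0, 0.0, 3.0], n_intermediate=2)
```

The two inserted frames step evenly from the end of one segment to the start of the next, which is what weakens the visible jump.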
  • The method provided by the above embodiment of the present application uses the sequence of sample animation segments and the radial basis function neural network model generated from it to obtain target animation segments from target motion parameters and finally form the target animation. This realizes automatic generation of animation data, reducing both the burden of manually designing animations and the pressure of data storage.
  • Referring to FIG. 3, a flow 300 of yet another embodiment of a method for generating animation data is shown. The flow 300 of the method for generating animation data includes the following steps:
  • Step 301: Acquire target motion parameters of at least one target animation segment in the target animation to be generated. For the specific processing of step 301, refer to step 201 of the embodiment corresponding to FIG. 2.
  • Step 302: Map the target motion parameters to a vector matching the input of the pre-trained radial basis function neural network model, and input it to the model. For the specific processing of step 302, refer to step 202 of the embodiment corresponding to FIG. 2.
  • Step 303: Determine each component of the vector output by the radial basis function neural network model as the fusion weight coefficient of the corresponding sample animation segment in the sequence of sample animation segments. For the specific processing of step 303, refer to step 203 of the embodiment corresponding to FIG. 2.
  • Step 304: Fuse the animation data of the sample animation segments in the sequence according to the fusion weight coefficients to obtain the animation data of the target animation segment. For the specific processing of step 304, refer to step 204 of the embodiment corresponding to FIG. 2.
  • Step 305: Generate the animation data of the target animation based on the animation data of the at least one target animation segment. For the specific processing of step 305, refer to step 205 of the embodiment corresponding to FIG. 2.
  • Step 306: For each animation frame in a time segment of the target animation segment, acquire the target position parameter of the bone to be corrected in that animation frame.
  • For example, in a walking animation one of the character's feet should stay fixed at the ground position where it first touches down. Because the target animation segment is produced by the fusion operation, the foot may drift within the time segment, producing a foot-sliding artifact that harms the realism of the action presented by the target animation; it therefore needs to be corrected.
  • In this embodiment, the electronic device may acquire the target position parameter to which the bone to be corrected should be moved. The target position parameter can be set by the user or determined automatically according to pre-configured rules.
  • Step 307: Determine the difference between the current position parameter of the bone to be corrected and the target position parameter.
  • In this embodiment, based on the target position parameter acquired in step 306 and the current position parameter of the bone to be corrected in the target animation segment generated by the foregoing steps, the electronic device may calculate the difference between the two position parameters, to be used as a parameter in the subsequent correction process.
  • Step 308: Use inverse kinematics iteration to adjust the rotation parameters of the bone to be corrected and of the associated bones, so as to correct the difference.
  • In this embodiment, the electronic device can iteratively adjust the rotation parameters of the bone to be corrected and of the associated bones by inverse kinematics, correcting the difference so that the bone to be corrected gradually approaches the target position during the process.
  • The associated bones may be set by the user, or may be determined from the bone to be corrected according to certain rules. For example, when correcting the foot-sliding phenomenon above, the position of the foot must be adjusted while the torso (root bone) is kept fixed, so the bones whose rotation parameters need adjusting include the bone corresponding to the foot (the bone to be corrected) and the two bones corresponding to the calf and the thigh (the associated bones).
  • Since the final target of adjusting the bone rotation parameters is the position of the bone to be corrected, the inverse kinematics iteration can proceed from the bone to be corrected, which is most closely related to the correction target, toward the parent bones of the current bone, until the difference has been corrected.
  • The specific inverse kinematics algorithms are well established and are not described here.
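The iteration described — repeatedly adjusting rotations, starting from the bone to be corrected and moving toward its parent bones, until the effector reaches the target — matches cyclic coordinate descent (CCD). Below is a minimal planar two-bone sketch (e.g. thigh and calf driving a foot); a real implementation would operate on 3-D rotation parameters, and all names and values are illustrative:

```python
import math

def fk(lengths, angles):
    """Forward kinematics: joint positions of a planar bone chain at the origin."""
    x = y = theta = 0.0
    points = [(0.0, 0.0)]
    for l, a in zip(lengths, angles):
        theta += a
        x += l * math.cos(theta)
        y += l * math.sin(theta)
        points.append((x, y))
    return points

def ccd_step(lengths, angles, target):
    """One CCD pass: effector-side joints first, then their parents."""
    for i in reversed(range(len(angles))):
        pts = fk(lengths, angles)
        jx, jy = pts[i]          # joint being adjusted
        ex, ey = pts[-1]         # end effector (e.g. the foot)
        cur = math.atan2(ey - jy, ex - jx)
        want = math.atan2(target[1] - jy, target[0] - jx)
        angles[i] += want - cur  # swing the effector toward the target
    return angles

lengths = [1.0, 1.0]   # e.g. thigh and calf
angles = [0.1, 0.1]    # each bone's rotation relative to its parent
target = (1.0, 1.0)    # desired foot position
for _ in range(20):    # iterate until the difference is corrected
    angles = ccd_step(lengths, angles, target)
err = math.dist(fk(lengths, angles)[-1], target)
```

Each pass shrinks the remaining position difference, so a modest number of iterations drives the effector onto the target for reachable positions.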
  • It should be noted that steps 306 to 308 are generally performed after step 304, and may also be performed after step 305.
  • In some optional implementations, step 308 may include: acquiring a pre-stored adjustment value of the rotation parameter of the bone, obtained when inverse kinematics iteration was applied to the bone to be corrected in the previous animation frame; and setting this adjustment value as the initial adjustment value when inverse kinematics iteration is used to adjust the rotation parameter of the bone for the current animation frame.
  • In this implementation, because the difference in bone parameters between adjacent animation frames within the time segment needing correction is small, starting the iteration from the previous frame's adjustment value instead of from scratch allows a small number of iterations to correct the position parameter of the bone to be corrected in the current frame, which improves computational efficiency and reduces the time spent.
  • In some optional implementations, before the adjustment value is set as the initial adjustment value for the inverse kinematics iteration of the current animation frame, the method further includes: attenuating the adjustment value.
  • In this implementation, the adjustment value from the previous animation frame can be attenuated according to a certain law before being used, thereby further improving the stability of the animation.
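The warm start with attenuation can be sketched as follows; the decay factor and bone names are illustrative assumptions, not values from the disclosure:

```python
DECAY = 0.8  # illustrative attenuation factor

def initial_adjustment(prev_adjustment, decay=DECAY):
    """Attenuate the previous frame's per-bone rotation adjustments and use the
    result as the starting point of the current frame's iteration."""
    return {bone: angle * decay for bone, angle in prev_adjustment.items()}

prev = {"foot": 0.30, "calf": -0.10}   # adjustments found for the previous frame
init = initial_adjustment(prev)        # warm start for the current frame
```

Starting near (but slightly inside) the previous solution keeps the per-frame corrections small and the resulting motion stable.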
  • Compared with the embodiment corresponding to FIG. 2, the flow 300 of the method for generating animation data in this embodiment adds a step of correcting the positions of bones in the animation frames, further improving the fidelity of the generated animation data.
  • As an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for generating animation data; this apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus can be applied to various electronic devices.
  • As shown in FIG. 4, the apparatus 400 for generating animation data includes an acquisition unit 401, an input unit 402, a determination unit 403, a fusion unit 404, and a generation unit 405.
  • The acquisition unit 401 is configured to acquire target motion parameters of at least one target animation segment in the target animation to be generated.
  • The input unit 402 is configured to map the target motion parameters to a vector matching the input of the pre-trained radial basis function neural network model and input it to the model.
  • The determination unit 403 is configured to determine each component of the vector output by the radial basis function neural network model as the fusion weight coefficient of the corresponding sample animation segment in the sequence of sample animation segments.
  • The fusion unit 404 is configured to fuse the animation data of the sample animation segments in the sequence according to the fusion weight coefficients to obtain the animation data of the target animation segment.
  • The generation unit 405 is configured to generate the animation data of the target animation based on the animation data of the at least one target animation segment.
  • For the specific processing of the acquisition unit 401, the input unit 402, the determination unit 403, the fusion unit 404, and the generation unit 405, refer respectively to step 201, step 202, step 203, step 204, and step 205 in the embodiment corresponding to FIG. 2, which will not be described again here.
  • each sample animation segment in the sequence of sample animation segments is a skeletal animation.
  • The apparatus 400 further includes a training unit (not shown) for training the radial basis function neural network model. The training unit is specifically configured to: for each sample animation segment in the sequence of sample animation segments, map the motion parameters of the sample animation segment to a first vector, and generate a second vector according to the order of the sample animation segment in the sequence, where the dimension of the second vector equals the number of sample animation segments in the sequence, the component corresponding to the segment's order is set to 1, and the other components are set to 0; determine the dimension of the first vector as the number of input-layer nodes of the radial basis function neural network model to be trained, and the number of sample animation segments in the sequence as both the number of nodes of the intermediate hidden layer and the number of nodes of the output layer; and train the radial basis function neural network model using the first vector corresponding to each sample animation segment as its input and the corresponding second vector as its output.
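The network shape the training unit sets up can be sketched as follows: one input node per motion-parameter dimension, one Gaussian hidden node and one output node per sample segment, and a one-hot second vector as the target. The centers and widths below are illustrative, and the output layer is left as an untrained identity just to show the shapes:

```python
import math

def one_hot(index, size):
    v = [0.0] * size
    v[index] = 1.0
    return v

def rbf_forward(x, centers, widths, out_weights):
    """Gaussian hidden layer followed by a linear output layer.
    out_weights[k] holds the weights from all hidden nodes to output node k."""
    hidden = [
        math.exp(-sum((a - b) ** 2 for a, b in zip(x, c)) / (2.0 * w * w))
        for c, w in zip(centers, widths)
    ]
    return [sum(h * wt for h, wt in zip(hidden, out_weights[k]))
            for k in range(len(out_weights))]

# Three sample segments with 2-D motion parameters; hidden centers sit at the
# samples' motion parameters, and the targets are the one-hot second vectors.
centers = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
widths = [0.5, 0.5, 0.5]
out_weights = [one_hot(i, 3) for i in range(3)]  # untrained identity output layer
weights_vec = rbf_forward([0.0, 0.0], centers, widths, out_weights)
# weights_vec[0] is largest: the query matches the first sample segment best.
```

At inference time the output vector plays the role of the fusion weight coefficients: components are large for sample segments whose motion parameters resemble the query.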
  • Each sample animation segment in the sequence of sample animation segments has been divided in advance into at least one time segment according to key time points; and the fusion unit 404 includes: a duration determination subunit (not shown), configured to perform a weighted average of the durations of the time segments in each sample animation segment according to the fusion weight coefficients, to determine the durations of the time segments in the target animation segment; a duration adjustment subunit (not shown), configured to adjust the duration of each time segment of every sample animation segment to match the corresponding time segment in the target animation segment; and a fusion subunit (not shown), configured to perform, for each adjusted sample animation segment, fusion interpolation on the bone parameters of the animation frames in each time segment according to the fusion weight coefficients, to obtain the bone parameters of the animation frames in the corresponding time segments of the target animation segment.
  • The fusion subunit is further configured for at least one of the following: performing spherical interpolation on the rotation parameters of the respective bones in an animation frame; and performing linear interpolation on the position parameters of the root bone in an animation frame.
  • The fusion unit 404 further includes a registration subunit (not shown), configured to: acquire the horizontal orientation difference and/or horizontal position difference between the root bone of each sample animation segment and that of the target animation segment; and adjust the horizontal orientation and/or horizontal position of the root bone of each sample animation segment to eliminate the horizontal orientation difference and/or horizontal position difference.
  • The generation unit 405 includes: an acquisition subunit (not shown), configured to acquire the bone parameters of the first animation frame of the current target animation segment and of the last animation frame of the previous target animation segment; a calculation subunit (not shown), configured to calculate, by interpolation from the bone parameters of the two animation frames, the bone parameters of intermediate animation frames to be inserted between them; and an insertion subunit (not shown), configured to insert the intermediate animation frames between the two animation frames to generate the animation data of the target animation.
  • The apparatus 400 further includes: a position parameter acquisition unit (not shown), configured to acquire, for each animation frame in a time segment of the target animation segment, the target position parameter of the bone to be corrected in the animation frame; a difference determination unit (not shown), configured to determine the difference between the current position parameter of the bone to be corrected and the target position parameter; and an adjustment unit (not shown), configured to use inverse kinematics iteration to adjust the rotation parameters of the bone to be corrected and of the associated bones to correct the difference.
  • The adjustment unit includes: an adjustment value acquisition subunit (not shown), configured to acquire a pre-stored adjustment value of the rotation parameter of the bone, obtained by inverse kinematics iteration for the bone to be corrected in the previous animation frame; and a setting subunit (not shown), configured to set the adjustment value as the initial adjustment value when inverse kinematics iteration is used to adjust the rotation parameter of the bone for the current animation frame.
  • The adjustment unit further includes: an attenuation subunit (not shown), configured to attenuate the adjustment value before it is set as the initial adjustment value for the inverse kinematics iteration of the current animation frame.
  • Referring now to FIG. 5, a block diagram of a computer system 500 suitable for implementing a terminal device or a server of an embodiment of the present application is shown.
  • The computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from the storage portion 508 into a random access memory (RAM) 503.
  • The RAM 503 also stores various programs and data required for the operation of the system 500.
  • the CPU 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504.
  • An input/output (I/O) interface 505 is also coupled to bus 504.
  • The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including, for example, a cathode ray tube (CRT) or a liquid crystal display (LCD); a storage portion 508 including a hard disk and the like; and a communication portion 509 including a network interface card such as a LAN card or a modem. The communication portion 509 performs communication processing via a network such as the Internet.
  • A drive 510 is also coupled to the I/O interface 505 as needed.
  • A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read therefrom is installed into the storage portion 508 as needed.
  • In particular, an embodiment of the present disclosure includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for executing the method illustrated in the flowchart.
  • In such an embodiment, the computer program can be downloaded and installed from a network via the communication portion 509, and/or installed from the removable medium 511.
  • Each block of the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical functions.
  • In some alternative implementations, the functions noted in the blocks may also occur in an order different from that shown in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present application may be implemented by software or by hardware.
  • The described units may also be provided in a processor, which may, for example, be described as a processor including an acquisition unit, an input unit, a determination unit, a fusion unit, and a generation unit.
  • The names of these units do not, in some cases, constitute a limitation on the units themselves. For example, the acquisition unit may also be described as "a unit that acquires target motion parameters of at least one target animation segment in the target animation to be generated".
  • In another aspect, the present application further provides a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the apparatus described in the foregoing embodiments, or a non-volatile computer storage medium that exists independently and is not assembled into a terminal.
  • The non-volatile computer storage medium stores one or more programs which, when executed by a device, cause the device to: acquire target motion parameters of at least one target animation segment in the target animation to be generated; map the target motion parameters to a vector matching the input of a pre-trained radial basis function neural network model and input it to the model, where the radial basis function neural network model is obtained by training on each sample animation segment in a sequence of sample animation segments; determine each component of the vector output by the radial basis function neural network model as the fusion weight coefficient of the corresponding sample animation segment in the sequence; fuse the animation data of the sample animation segments in the sequence according to the fusion weight coefficients to obtain the animation data of the target animation segment; and generate the animation data of the target animation based on the animation data of the at least one target animation segment.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method and device for generating animation data. The method comprises: obtaining a target motion parameter of at least one target animation clip in a target animation to be generated (201); mapping the target motion parameter as a vector matching the input of a pre-trained radial basis function neural network model, and then inputting the vector into the radial basis function neural network model (202); determining components in the vector outputted by the radial basis function neural network model, as fusion weight coefficients of sample animation clips in a sample animation clip sequence (203); according to the fusion weight coefficient, performing fusion by using animation data of the sample animation clips in the sample animation clip sequence, so as to obtain animation data of target animation clips (204); and generating animation data of a target animation according to animation data of the at least one target animation clip (205). Animation data is automatically generated.

Description

Method and device for generating animation data

Cross-reference to related applications

This application claims priority to Chinese Patent Application No. 201610822607.7, filed on September 14, 2016, the entire contents of which are incorporated herein by reference.

Technical field

The present application relates to the field of computer technologies, specifically to the field of multimedia technologies, and in particular to a method and apparatus for generating animation data.
Background

In the field of animation technology, animation data of various animated characters must be obtained before the corresponding animations can be rendered. In the prior art, an animator is usually required to provide the animation data of an animated character so that the resulting character looks realistic.

However, relying solely on animators to provide animation data requires a large amount of manual work. Moreover, the animation data an animator provides is limited to particular usage scenes and adapts poorly to varied situations, while providing animation data for every scene bloats the data produced during animation production and limits the application of the software. It is therefore necessary to reduce the manual production of animation data and to obtain animation data that automatically adapts to changes in scene and purpose.
Summary

The purpose of the present application is to propose a method and apparatus for generating animation data that solve the technical problems mentioned in the background section above.

In a first aspect, the present application provides a method for generating animation data, the method comprising: acquiring target motion parameters of at least one target animation segment in a target animation to be generated; mapping the target motion parameters to a vector matching the input of a pre-trained radial basis function neural network model and inputting it to the model, where the radial basis function neural network model is obtained by training on each sample animation segment in a sequence of sample animation segments; determining each component of the vector output by the model as the fusion weight coefficient of the corresponding sample animation segment in the sequence; fusing the animation data of the sample animation segments in the sequence according to the fusion weight coefficients to obtain the animation data of the target animation segment; and generating the animation data of the target animation based on the animation data of the at least one target animation segment.
In some embodiments, each sample animation segment in the sequence of sample animation segments is a skeletal animation.
In some embodiments, the method includes a step of training the radial basis function neural network model, which includes: for each sample animation segment in the sequence of sample animation segments, mapping the motion parameters of the sample animation segment to a first vector, and generating a second vector according to the order of the sample animation segment in the sequence, where the dimension of the second vector equals the number of sample animation segments in the sequence, the component corresponding to the segment's order is set to 1, and the other components are set to 0; determining the dimension of the first vector as the number of input-layer nodes of the radial basis function neural network model to be trained, and the number of sample animation segments in the sequence as both the number of nodes of the intermediate hidden layer and the number of nodes of the output layer; and training the model using the first vector corresponding to each sample animation segment as its input and the corresponding second vector as its output.
In some embodiments, each sample animation segment in the sequence has been divided in advance into at least one time segment according to key time points; and fusing the animation data of the sample animation segments according to the fusion weight coefficients includes: performing a weighted average of the durations of the time segments in each sample animation segment according to the fusion weight coefficients, to determine the durations of the time segments in the target animation segment; adjusting the duration of each time segment of every sample animation segment to match the corresponding time segment in the target animation segment; and, for each adjusted sample animation segment, performing fusion interpolation on the bone parameters of the animation frames in each time segment according to the fusion weight coefficients, to obtain the bone parameters of the animation frames in the corresponding time segments of the target animation segment.
In some embodiments, performing fusion interpolation on the bone parameters of the animation frames in each time segment according to the fusion weight coefficients includes at least one of: performing spherical interpolation on the rotation parameters of each bone in an animation frame; and performing linear interpolation on the position parameters of the root bone in an animation frame.
In some embodiments, before the interpolation operation is performed on the bone parameters of the animation frames within each time segment of the adjusted sample animation segments according to the fusion weight coefficients, the fusion interpolation further includes: obtaining the horizontal orientation difference and/or horizontal position difference between the root bone of each sample animation segment and the root bone of the target animation segment; and adjusting the horizontal orientation and/or horizontal position of the root bone of each sample animation segment so as to eliminate that horizontal orientation difference and/or horizontal position difference.
In some embodiments, generating the animation data of the target animation based on the animation data of the at least one target animation segment includes: obtaining the bone parameters of the first animation frame of the current target animation segment and the bone parameters of the last animation frame of the preceding target animation segment; computing, by interpolation based on the bone parameters of the two animation frames, the bone parameters of intermediate animation frames to be inserted between them; and inserting the intermediate animation frames between the two animation frames to generate the animation data of the target animation.
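A rough sketch of the boundary smoothing just described, assuming for illustration that a frame is a dict mapping bone names to lists of position parameters and that the inserted frames blend linearly between the two boundary frames (rotation parameters would use spherical interpolation instead):

```python
def transition_frames(prev_last, cur_first, n):
    """Generate n intermediate frames whose parameters blend from the
    last frame of the previous segment (prev_last) to the first frame
    of the current segment (cur_first).

    Both arguments map a bone name to a list of position parameters;
    t is kept strictly between 0 and 1 so the two boundary frames
    themselves are not duplicated.
    """
    frames = []
    for i in range(1, n + 1):
        t = i / (n + 1)
        frame = {}
        for bone, p0 in prev_last.items():
            p1 = cur_first[bone]
            frame[bone] = [a + t * (b - a) for a, b in zip(p0, p1)]
        frames.append(frame)
    return frames
```

With two inserted frames between a root position of 0 and one of 3, the transition passes through 1 and 2, removing the visible jump at the segment boundary.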
In some embodiments, the method further includes: for each animation frame in a time segment of the target animation segment, obtaining the target position parameters of a bone to be corrected in the frame; determining the difference between the current position parameters of the bone to be corrected and its target position parameters; and iteratively adjusting the rotation parameters of the bone to be corrected and its associated bones using inverse kinematics, so as to correct the difference.
In some embodiments, iteratively adjusting the rotation parameters of the bone to be corrected and its associated bones using inverse kinematics to correct the difference includes: obtaining the previously saved adjustment values of the bone rotation parameters produced by the inverse kinematics iteration for the bone to be corrected in the previous animation frame; and setting those adjustment values as the initial adjustment values when inverse kinematics iteration is applied to adjust the bone rotation parameters for the current animation frame.
In some embodiments, before the adjustment values are set as the initial adjustment values for the inverse kinematics iteration of the current animation frame, the method further includes: attenuating the adjustment values.
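The warm-start described in the last two paragraphs can be sketched as follows. The decay factor of 0.9 and the per-bone dictionary of scalar adjustment values (in radians) are illustrative assumptions; the patent only states that the previous frame's values are attenuated and reused as the initial values.

```python
class IKWarmStart:
    """Carry the previous frame's inverse-kinematics adjustment values
    into the next frame's iteration, attenuated so that a stale
    correction cannot dominate the new solve."""

    def __init__(self, decay=0.9):
        self.decay = decay
        self.saved = {}  # bone name -> last converged adjustment value

    def initial_adjustments(self):
        """Attenuated copy of the previous frame's adjustments, used to
        seed the current frame's iteration."""
        return {bone: v * self.decay for bone, v in self.saved.items()}

    def save(self, adjustments):
        """Store the converged adjustments after this frame's solve."""
        self.saved = dict(adjustments)
```

Because consecutive frames pose nearly the same problem, seeding the iteration this way typically reduces the number of iterations needed per frame and keeps the corrections temporally coherent.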
In a second aspect, the present application provides an apparatus for generating animation data, the apparatus comprising: an acquiring unit, configured to acquire target motion parameters of at least one target animation segment in a target animation to be generated; an input unit, configured to map the target motion parameters to a vector matching the input of a pre-trained radial basis function neural network model and input the vector to the model, where the model has been trained on the sample animation segments in a sample animation segment sequence; a determining unit, configured to determine each component of the vector output by the model as the fusion weight coefficient of the corresponding sample animation segment in the sequence; a fusion unit, configured to fuse the animation data of the sample animation segments in the sequence according to the fusion weight coefficients to obtain the animation data of the target animation segment; and a generating unit, configured to generate the animation data of the target animation based on the animation data of the at least one target animation segment.
In some embodiments, each sample animation segment in the sample animation segment sequence is a skeletal animation.
In some embodiments, the apparatus further includes a training unit for training the radial basis function neural network model. The training unit is specifically configured to: for each sample animation segment in the sample animation segment sequence, map the motion parameters of the sample animation segment to a first vector and generate a second vector according to the position of the sample animation segment in the sequence, where the dimension of the second vector equals the number of sample animation segments in the sequence, the component corresponding to the segment's position is set to 1, and all other components are set to 0; determine the dimension of the first vector as the number of input-layer nodes of the model to be trained, and determine the number of sample animation segments in the sequence as the number of nodes in both the intermediate hidden layer and the output layer of the model; and train the model by using the first vector corresponding to each sample animation segment as its input and the corresponding second vector as its output.
In some embodiments, each sample animation segment in the sample animation segment sequence has been divided in advance into at least one time segment according to key time points; and the fusion unit includes: a duration determining subunit, configured to compute a weighted average of the durations of the corresponding time segments across the sample animation segments, using the fusion weight coefficients as weights, to determine the duration of each time segment of the target animation segment; a duration adjusting subunit, configured to adjust the duration of each sample animation segment's time segments to match the corresponding time segments of the target animation segment; and a fusion subunit, configured to perform, for each adjusted sample animation segment, fusion interpolation on the bone parameters of the animation frames within each time segment according to the fusion weight coefficients, to obtain the bone parameters of the animation frames in the corresponding time segment of the target animation segment.
In some embodiments, the fusion subunit is further configured for at least one of: performing spherical interpolation on the rotation parameters of each bone in an animation frame; and performing linear interpolation on the position parameters of the root bone in an animation frame.
In some embodiments, the fusion unit further includes a registration subunit, configured to: obtain the horizontal orientation difference and/or horizontal position difference between the root bone of each sample animation segment and the root bone of the target animation segment; and adjust the horizontal orientation and/or horizontal position of the root bone of each sample animation segment so as to eliminate that horizontal orientation difference and/or horizontal position difference.
In some embodiments, the generating unit includes: an obtaining subunit, configured to obtain the bone parameters of the first animation frame of the current target animation segment and the bone parameters of the last animation frame of the preceding target animation segment; a calculating subunit, configured to compute, by interpolation based on the bone parameters of the two animation frames, the bone parameters of intermediate animation frames to be inserted between them; and an inserting subunit, configured to insert the intermediate animation frames between the two animation frames to generate the animation data of the target animation.
In some embodiments, the apparatus further includes: a position parameter obtaining unit, configured to obtain, for each animation frame in a time segment of the target animation segment, the target position parameters of a bone to be corrected in the frame; a difference determining unit, configured to determine the difference between the current position parameters of the bone to be corrected and its target position parameters; and an adjusting unit, configured to iteratively adjust the rotation parameters of the bone to be corrected and its associated bones using inverse kinematics, so as to correct the difference.
In some embodiments, the adjusting unit includes: an adjustment value acquiring subunit, configured to obtain the previously saved adjustment values of the bone rotation parameters produced by the inverse kinematics iteration for the bone to be corrected in the previous animation frame; and a setting subunit, configured to set those adjustment values as the initial adjustment values when inverse kinematics iteration is applied to adjust the bone rotation parameters for the current animation frame.
In some embodiments, the adjusting unit further includes an attenuation subunit, configured to attenuate the adjustment values before they are set as the initial adjustment values for the inverse kinematics iteration of the current animation frame.
With the method and apparatus for generating animation data provided by the present application, a target animation segment can be obtained from target motion parameters using only a sample animation segment sequence and a radial basis function neural network model trained on that sequence, and the target animation is then assembled from such segments. Animation data is thus generated automatically, reducing both the burden of designing animations by hand and the pressure on data storage.
BRIEF DESCRIPTION OF THE DRAWINGS
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is a diagram of an exemplary system architecture to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for generating animation data according to the present application;
Fig. 3 is a flowchart of another embodiment of the method for generating animation data according to the present application;
Fig. 4 is a schematic structural diagram of one embodiment of the apparatus for generating animation data according to the present application;
Fig. 5 is a schematic structural diagram of a computer system suitable for implementing the terminal device or server of the embodiments of the present application.
DETAILED DESCRIPTION
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention and do not limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
It should be noted that the embodiments of the present application and the features of the embodiments may be combined with one another in the absence of conflict. The present application is described in detail below with reference to the accompanying drawings and in connection with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which embodiments of the method and apparatus for generating animation data of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 over the network 104 to receive or send messages and the like. Various communication client applications, such as animation design software and animation playback software, may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices that have a display screen and support animation display or animation design, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and so on.
The server 105 may be a server providing various services, for example a backend server that supports the animations displayed on the terminal devices 101, 102, 103. The backend server may analyze and otherwise process received data such as animation generation requests, and feed the processing results (for example, animation data) back to the terminal devices.
It should be noted that the method for generating animation data provided by the embodiments of the present application may be performed by the server 105, by the terminal devices 101, 102, 103, or with different steps performed by the server 105 and the terminal devices 101, 102, 103 respectively. Accordingly, the apparatus for generating animation data may be provided in the server 105, in the terminal devices 101, 102, 103, or with different units provided in the server 105 and in the terminal devices 101, 102, 103.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating animation data according to the present application is shown. The method for generating animation data includes the following steps:
Step 201: acquire target motion parameters of at least one target animation segment in a target animation to be generated.
In this embodiment, the target animation to be generated is the animation the user wants to obtain, and it may be composed of at least one target animation segment. For each target animation segment, the electronic device on which the method runs (for example, the server or terminal device shown in Fig. 1) may acquire the segment's target motion parameters in various ways. When the electronic device is a server, the server may receive the target motion parameters from a terminal device, read target motion parameters stored locally in advance, or obtain them from another server; when the electronic device is a terminal device, it may typically obtain the target motion parameters from the user, or from another device. The target motion parameters of a target animation segment describe the motion state of the object moving in that segment. The target motion parameters may be a single parameter or multiple parameters. For example, the target motion parameters of a target animation segment of a walking animation may include a forward speed, a lateral speed, and a turning speed.
Step 202: map the target motion parameters to a vector matching the input of a pre-trained radial basis function neural network model, and input the vector to the radial basis function neural network model.
In this embodiment, based on the target motion parameters obtained in step 201, the electronic device may map them to a vector matching the input of the pre-trained radial basis function neural network model and then feed that vector to the model. The radial basis function neural network model has been trained on the sample animation segments in a sample animation segment sequence. In practice, before the mapped vector is fed to the model, it may first be range-constrained: when the vector falls outside the range covered by the training data, it is constrained so that the input to the model never exceeds that range.
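The range constraint mentioned above amounts to per-component clamping. A minimal sketch, assuming the per-component minima and maxima observed in the training data are available (how the bounds are stored is not specified in the text):

```python
def clamp_to_training_range(vec, mins, maxs):
    """Constrain each component of the mapped input vector to the range
    spanned by the training data, so the RBF network is never queried
    outside the region it was trained on."""
    return [min(max(v, lo), hi) for v, lo, hi in zip(vec, mins, maxs)]
```

For a walking animation whose training data covered forward speeds in [0, 3], a requested forward speed of 5 would be clamped to 3 while in-range components pass through unchanged.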
Step 203: determine each component of the vector output by the radial basis function neural network model as the fusion weight coefficient of the corresponding sample animation segment in the sample animation segment sequence.
In this embodiment, given the input of step 202, the radial basis function neural network model outputs a corresponding vector, and the electronic device may take each component of that vector as a fusion weight coefficient. These fusion weight coefficients determine which sample animation segments are used, and in what proportions, when the sample animation segments in the sequence are subsequently fused into the target animation segment.
Step 204: fuse the animation data of the sample animation segments in the sample animation segment sequence according to the fusion weight coefficients, to obtain the animation data of the target animation segment.
In this embodiment, based on the fusion weight coefficients obtained in step 203, the electronic device may fuse the animation data of the sample animation segments in the sequence; the fused animation data then serves as the animation data of the corresponding target animation segment.
Step 205: generate the animation data of the target animation based on the animation data of the at least one target animation segment.
In this embodiment, based on the animation data of the target animation segments obtained in step 204, the electronic device may splice the target animation segments together in order, thereby obtaining the animation data of the entire target animation.
In some optional implementations of this embodiment, each sample animation segment in the sample animation segment sequence is a skeletal animation. A skeletal animation is composed of interconnected, interacting "bones"; by controlling the position, rotation, and size of these bones and attaching skin data to them, the desired visible animated figure can be rendered. In a skeletal animation, the bones form a hierarchy, the skeleton structure, according to the characteristics of the animated character. The skeleton structure is a bone hierarchy formed by a series of connected bones, organized in a tree by parent-child relationships and constituting the complete skeletal frame of the character model. The bone at the root of the tree is called the root bone and is the key point from which the skeleton structure is formed. All other bones, called child bones or sibling bones, belong to the root bone and are connected to it directly or indirectly. In other words, every bone is the parent bone of the bones one level below it.
In a skeletal animation, each bone generally carries two matrices: an initial matrix, which represents the bone's initial position, and a transformation matrix, which reflects how the bone is transformed. Multiplying the initial matrix by the transformation matrix yields the bone's final matrix, which is mainly used to transform the bone. The initial, transformation, and final matrices can each be expressed either as a relative matrix or as an absolute matrix. An absolute matrix is the matrix of the current bone relative to the world; a relative matrix is the matrix of the current bone relative to its parent bone. The absolute matrix of the current bone is obtained by multiplying its relative matrix with the absolute matrix of its parent bone, and the parent's absolute matrix is in turn obtained by multiplying the parent's relative matrix with the absolute matrix of the bone one level above. Iterating this multiplication up to the root bone therefore yields the absolute matrix of the current bone. Adjacent bones are joined by joints and can move relative to one another. When rotations occur between bones, the bones making up the animated character can perform different actions, producing different animation effects. Compared with vertex animation, a skeletal animation only needs to store bone transformation data rather than per-vertex data for every frame, so using skeletal animation saves considerable storage space.
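The parent-chain multiplication described above can be sketched as follows, assuming 4x4 row-major matrices and the column-vector convention (world = parent_world · relative). The actual multiplication order in an engine depends on its vector convention, so the ordering here is an assumption:

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices stored as row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def absolute_matrix(bone, relative, parent):
    """Absolute (world) matrix of `bone`: its relative matrix composed
    with each ancestor's relative matrix up to the root bone.

    `relative` maps bone name -> relative 4x4 matrix; `parent` maps
    bone name -> parent bone name, with the root bone mapping to None.
    """
    m = relative[bone]
    p = parent[bone]
    while p is not None:
        m = mat_mul(relative[p], m)  # compose the ancestor on the left
        p = parent[p]
    return m
```

With a root translated by (1, 0, 0) and a child translated by (0, 2, 0) relative to it, the child's absolute matrix carries the combined translation (1, 2, 0), matching the iterate-to-root description in the text.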
In some optional implementations of this embodiment, the method further includes a step of training the radial basis function neural network model. This training step specifically includes the following process:
First, for each sample animation segment in the sample animation segment sequence, the motion parameters of the sample animation segment are mapped to a first vector, and a second vector is generated according to the position of the sample animation segment in the sequence. The sample animation segment sequence includes at least one sample animation segment, and each sample animation segment may have a corresponding sequence number; for example, the sequence may include sample animation segment 1, sample animation segment 2, ..., sample animation segment n. When the motion parameters of a sample animation segment are mapped to the first vector, the motion parameters may include at least one physical quantity, and the values of the physical quantities of the current motion are taken as the components of the first vector. Taking a walking animation as an example, its motion parameters may include three physical quantities, namely forward speed, lateral speed, and turning speed, so the first vector of a sample animation segment has three components corresponding to those values, i.e. the dimension of the first vector is 3. It should be noted that the motion parameters of the sample animation segments generally need to be consistent with the target motion parameters of the target animation.
The second vector is generated according to the position of the sample animation segment in the sequence: its dimension equals the number of sample animation segments in the sequence, the component corresponding to the segment's position is set to 1, and all other components are set to 0. For example, when the sequence consists of sample animation segments 1 through n, the number of sample animation segments is n and each second vector has n components, written as (X1, X2, ..., Xn). For sample animation segment 1, whose sequence number is 1, X1 is set to 1 and X2 through Xn are set to 0, so its second vector is (1, 0, ..., 0). Correspondingly, the second vector of sample animation segment 2 is (0, 1, 0, ..., 0), and the second vector of sample animation segment n is (0, 0, ..., 0, 1).
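The second-vector construction just described is a one-hot encoding of the segment's position; a minimal sketch (0-based indexing is an implementation choice, whereas the text numbers segments from 1):

```python
def second_vector(index, n_segments):
    """One-hot target vector for the sample animation segment at
    position `index` (0-based) in a sequence of n_segments segments."""
    v = [0.0] * n_segments
    v[index] = 1.0
    return v
```

After training against such targets, the network's output components can be read directly as per-segment fusion weight coefficients, since each output node was taught to fire for exactly one sample segment.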
Second, the dimension of the first vector is determined as the number of input-layer nodes of the radial basis function neural network model to be trained, and the number of sample animation segments in the sequence is determined as the number of nodes in both the intermediate hidden layer and the output layer of the model. Because the subsequent training uses the first vector of each sample animation segment as the model's input and the corresponding second vector as the model's output, sizing the model this way makes its scale match the sizes of the first and second vectors. Optionally, the intermediate hidden layer of the radial basis function neural network model may use a Gaussian kernel function.
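The resulting network shape can be illustrated by its forward pass: one Gaussian hidden node per sample animation segment and a linear output layer of the same size. The centers, widths, and weights below are placeholders for values that would come from training, and are assumptions for illustration:

```python
import math

def rbf_forward(x, centers, widths, weights):
    """Forward pass of the RBF network described above.

    centers[i] is hidden node i's center (a first vector), widths[i]
    its Gaussian width, and weights[i][j] the connection weight from
    hidden node i to output node j; both layers have one node per
    sample animation segment.
    """
    hidden = []
    for c, w in zip(centers, widths):
        d2 = sum((a - b) ** 2 for a, b in zip(x, c))
        hidden.append(math.exp(-d2 / (2.0 * w * w)))
    n_out = len(weights[0])
    return [sum(hidden[i] * weights[i][j] for i in range(len(hidden)))
            for j in range(n_out)]
```

Querying at one segment's own center makes that segment's output component dominate, which is the behavior the fusion weight coefficients rely on: inputs near a training sample weight that sample most heavily.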
Finally, the first vector corresponding to each sample animation segment is used as the input of the radial basis function neural network model and the corresponding second vector as its output, and the model is trained. Because the model's scale matches the sizes of the first and second vectors, training can proceed directly with the sample vectors; in each training step, the first vector and the second vector corresponding to the same sample animation segment serve as the network's input and output. During training, gradient descent is used to learn the connection weights between the intermediate hidden layer and the output layer as well as the widths of the hidden layer. That is, since the inputs and outputs are fixed during training, the functions of the intermediate hidden layer are continuously adjusted to fit the given input-output pairs.
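The sizing and training procedure above can be sketched as a minimal RBF network. This is an illustrative reconstruction, not the patent's implementation: the hidden-unit centers are placed at the training inputs so the hidden-layer size equals the number of sample segments, and plain gradient descent on mean squared error updates the hidden-to-output weights and the Gaussian kernel widths.

```python
import numpy as np

class RBFNet:
    """Minimal RBF network sized as described in the text: input nodes =
    dimension of the first vector; hidden and output nodes = number of
    sample segments. Hidden units use a Gaussian kernel; gradient descent
    trains the hidden-to-output weights and the kernel widths."""

    def __init__(self, centers, n_out, width=1.0):
        self.c = np.asarray(centers, dtype=float)      # (n_hidden, n_in)
        self.sigma = np.full(len(self.c), width)       # one width per unit
        self.W = np.zeros((n_out, len(self.c)))        # hidden -> output

    def _phi(self, X):
        # Gaussian kernel activations, shape (n_samples, n_hidden)
        d2 = ((X[:, None, :] - self.c[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def forward(self, X):
        return self._phi(np.asarray(X, dtype=float)) @ self.W.T

    def fit(self, X, Y, lr_w=0.5, lr_s=0.05, epochs=3000):
        X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
        for _ in range(epochs):
            phi = self._phi(X)
            err = phi @ self.W.T - Y                   # (n, n_out)
            self.W -= lr_w * err.T @ phi / len(X)      # MSE gradient, weights
            d2 = ((X[:, None, :] - self.c[None, :, :]) ** 2).sum(-1)
            dphi = phi * d2 / self.sigma ** 3          # d phi / d sigma
            self.sigma -= lr_s * ((err @ self.W) * dphi).mean(0)
        return self
```

At generation time, `forward(x)` on a mapped motion-parameter vector would yield the per-segment fusion weight coefficients.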
In some optional implementations of this embodiment, each sample animation segment in the sample animation segment sequence has been divided in advance into at least one time segment according to key time points; and step 204 specifically includes: first, taking a weighted average of the durations of the time segments of the sample animation segments, according to each segment's fusion weight coefficient, to determine the durations of the time segments of the target animation segment; second, adjusting the durations of the time segments of each sample animation segment to match the corresponding time segments of the target animation segment; finally, for each adjusted sample animation segment, performing fusion interpolation on the bone parameters of the animation frames within each time segment according to the fusion weight coefficients, to obtain the bone parameters of the animation frames in the corresponding time segments of the target animation segment.
In this implementation, the key time points can be determined according to the action represented by the sample animation segment. Taking a walking animation as an example: because the posture changes markedly around each change of foot, while the posture between two foot changes varies gradually, the key time points of a walking animation may be the animation frames at each change of foot. Suppose the walking action corresponding to the walking animation includes a first step on the left foot, a second step on the right foot, a third step on the left foot, and a fourth step on the right foot; then the time segments into which each sample animation segment is divided correspond to the first, second, third, and fourth steps, split at the foot-change time points.
Correspondingly, when fusion is performed in step 204, the bone parameters of the time segments of each animation segment are used to compute the bone parameters of the animation frames in the corresponding time segments of the target animation segment. First, because the durations of the time segments in different sample animation segments are not necessarily the same, the durations of the time segments in the target animation segment must first be determined in some manner; in this implementation, a weighted average of the durations of the time segments of the sample animation segments is taken using the fusion weight coefficients described above. Next, because the subsequent fusion performs interpolation between corresponding animation frames within each time segment, while the durations of those time segments differ across sample segments, the durations of the time segments of each sample animation segment must first be adjusted to match those of the target animation segment. Finally, since the adjusted time segments of the sample animation segments have the same durations as those of the target animation segment, the bone parameters of the corresponding animation frames of the adjusted sample segments can be fused to obtain the bone parameters of the animation frames in the corresponding time segments of the target animation segment.
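The two timing steps above can be sketched minimally: a fusion-weighted average of per-segment durations, and a uniform time warp that resamples each clip to a common length (the linear per-channel resampling and the function names are illustrative assumptions):

```python
import numpy as np

def target_segment_durations(sample_durations, weights):
    """Weighted average of each time segment's duration across sample clips.
    sample_durations: (n_samples, n_time_segments) durations (e.g. frames).
    weights: fusion weight coefficients, one per sample clip."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise so durations stay in range
    return np.asarray(sample_durations, dtype=float).T @ w

def time_warp(frames, new_len):
    """Resample a clip's frames (n_frames, n_channels) to new_len frames by
    linear interpolation over a normalised time axis, so all clips align."""
    frames = np.asarray(frames, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(frames))
    t_new = np.linspace(0.0, 1.0, new_len)
    return np.stack([np.interp(t_new, t_old, frames[:, d])
                     for d in range(frames.shape[1])], axis=1)
```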
In some optional implementations of the previous implementation, performing fusion interpolation on the bone parameters of the animation frames within each time segment according to the fusion weight coefficients may include: performing spherical interpolation on the rotation parameters of each bone in an animation frame; and performing linear interpolation on the position parameter of the root bone in an animation frame. For skeletal animation, the bone parameters of each animation frame may include the position parameter of the root bone and the rotation parameters of each bone. Among the rotation parameters, that of the root bone can be represented by the root bone's absolute matrix, whereas the motion of a non-root bone is mostly a rotation relative to its parent bone, so its rotation parameter can be represented by the bone's relative matrix. Typically, in the animation frames of a skeletal animation, the position parameter can be represented by a three-dimensional vector and the rotation parameter by a four-dimensional vector; therefore, when performing fusion interpolation, linear interpolation can be used for the position parameters and spherical interpolation for the rotation parameters.
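The two interpolation primitives named above can be sketched as follows, with rotations as unit quaternions in (w, x, y, z) order. The helper names are assumptions; for blending more than two clips by fusion weights, repeated pairwise application with accumulated weights is a common approximation:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = np.asarray(q0, dtype=float), np.asarray(q1, dtype=float)
    dot = np.dot(q0, q1)
    if dot < 0.0:              # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:           # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def lerp(p0, p1, t):
    """Linear interpolation of a 3-D root-bone position."""
    return (1 - t) * np.asarray(p0, dtype=float) + t * np.asarray(p1, dtype=float)
```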
In some optional implementations of this embodiment, before the interpolation operation is performed on the bone parameters of the animation frames within each time segment of the adjusted sample animation segments according to the fusion weight coefficients, step 204 further includes: first, acquiring the horizontal heading difference and/or horizontal position difference between the root bone of each sample animation segment and that of the target animation segment; then, adjusting the horizontal heading and/or horizontal position of the root bone of each sample animation segment to eliminate the difference. In the sample animation segments, the character's root bone may deviate in horizontal heading and/or horizontal position both between samples and from the target animation segment, so the horizontal heading and/or position of the root bone in each segment can be adjusted to keep it consistent with the target animation segment. Typically, the horizontal heading difference and/or horizontal position difference can be computed from the horizontal heading and/or position of the root bone in the first frame of each sample animation segment, and the root bone in every animation frame of that segment is then adjusted as a whole by the computed adjustment parameters.
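A minimal sketch of the first-frame alignment described above, assuming each frame's root state is reduced to (x, z, yaw). Subtracting the first-frame offset uniformly is the simplest form; a fuller version would also rotate subsequent positions by the heading offset:

```python
import numpy as np

def align_root(sample_frames, target_first):
    """Measure the horizontal position/heading difference on the sample
    clip's first frame and subtract it from every frame, so the clip
    starts with the same root state as the target animation segment.
    Each row of sample_frames is (x, z, yaw) of the root bone."""
    sample_frames = np.asarray(sample_frames, dtype=float)
    offset = sample_frames[0] - np.asarray(target_first, dtype=float)
    return sample_frames - offset
```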
In some optional implementations of this embodiment, step 205 specifically includes: acquiring the bone parameters of the first animation frame of the current target animation segment and of the last animation frame of the previous target animation segment; computing, by interpolation from the bone parameters of these two frames, the bone parameters of intermediate animation frames to be inserted between them; and inserting the intermediate frames between the two animation frames to generate the animation data of the target animation.
Because the animation data of each target animation segment is generated largely independently of the others, for two adjacent target animation segments the change in bone parameters between the last frame of the former and the first frame of the latter is likely to lack smoothness, causing a visible jump in the target animation's motion at the corresponding time point and reducing the realism of the presented action. For this reason, this implementation computes, by interpolation, the bone parameters of intermediate animation frames to be inserted between the two frames and inserts them, enhancing the smoothness of the change between animation frames and attenuating the jump in the presented action.
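The transition smoothing above can be sketched by generating k intermediate frames between the last frame of one target segment and the first frame of the next via per-channel linear interpolation (rotation channels could instead use spherical interpolation; the function name is an assumption):

```python
import numpy as np

def intermediate_frames(prev_last, next_first, k):
    """Generate k intermediate frames between the last frame of the
    previous target segment and the first frame of the next, by linear
    interpolation of each bone-parameter channel."""
    a = np.asarray(prev_last, dtype=float)
    b = np.asarray(next_first, dtype=float)
    ts = np.linspace(0.0, 1.0, k + 2)[1:-1]      # exclude both endpoints
    return [(1 - t) * a + t * b for t in ts]
```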
The method provided by the above embodiment of the present application can obtain target animation segments from target motion parameters using a sequence of sample animation segments and a radial basis function neural network model trained on that sequence, and finally form the target animation, realizing automatic generation of animation data, reducing the burden of manually designing animations, and reducing data storage pressure.
With further reference to FIG. 3, a flow 300 of yet another embodiment of the method for generating animation data is shown. The flow 300 of the method for generating animation data includes the following steps:
Step 301: acquire target motion parameters of at least one target animation segment in the target animation to be generated.
In this embodiment, for the specific processing of step 301, reference may be made to step 201 of the embodiment corresponding to FIG. 2.
Step 302: map the target motion parameters into a vector matching the input of the pre-trained radial basis function neural network model, and input it into the radial basis function neural network model.
In this embodiment, for the specific processing of step 302, reference may be made to step 202 of the embodiment corresponding to FIG. 2.
Step 303: determine each component of the vector output by the radial basis function neural network model as the fusion weight coefficient of the corresponding sample animation segment in the sample animation segment sequence.
In this embodiment, for the specific processing of step 303, reference may be made to step 203 of the embodiment corresponding to FIG. 2.
Step 304: fuse the animation data of the sample animation segments in the sample animation segment sequence according to the fusion weight coefficients, to obtain the animation data of the target animation segment.
In this embodiment, for the specific processing of step 304, reference may be made to step 204 of the embodiment corresponding to FIG. 2.
Step 305: generate the animation data of the target animation based on the animation data of the at least one target animation segment.
In this embodiment, for the specific processing of step 305, reference may be made to step 205 of the embodiment corresponding to FIG. 2.
Step 306: for each animation frame in a time segment of the target animation segment, acquire the target position parameter of the bone to be corrected in the animation frame.
Taking a walking animation as an example: within the time segment between two foot changes, one of the character's feet should stay fixed at the ground position where it first landed. Because the target animation segment is formed by a fusion operation, that foot may drift within the time segment in the presented action, producing a foot-sliding artifact that harms the realism of the action presented by the target animation. It is therefore necessary to correct it.
In this embodiment, after determining, according to a user operation, the bones that need correction in each animation frame of the animation segment, the electronic device may acquire the target position parameter to which each bone to be corrected should be moved. The target position parameter may be set by the user, or obtained automatically according to pre-configured rules.
Step 307: determine the difference between the current position parameter of the bone to be corrected and the target position parameter.
In this embodiment, for the bone to be corrected, the electronic device may compute the difference between the target position parameter acquired in step 306 and the bone's current position parameter in the target animation segment generated by the preceding steps, for use as a parameter of the subsequent correction process.
Step 308: iteratively adjust the rotation parameters of the bone to be corrected and of the associated bones using inverse kinematics, so as to correct the difference.
In this embodiment, based on the difference obtained in step 307, the electronic device may iteratively adjust the rotation parameters of the bone to be corrected and of the associated bones through inverse kinematics, correcting the difference so that the bone to be corrected gradually approaches the target position. The associated bones may be set by the user, or determined from the bone to be corrected according to certain rules. For example, when correcting the foot-sliding artifact described above, the torso (root bone) must be kept still while the foot position is adjusted, so the bones whose rotation parameters need adjusting include the bone corresponding to the foot (the bone to be corrected) and the two bones corresponding to the lower leg and the thigh (the associated bones). Since the ultimate goal of adjusting the bone rotation parameters is the position of the bone to be corrected, the inverse kinematics method can start from the bone to be corrected, which is most closely related to the correction target, and iterate toward the parent bones of the current bone until the difference is corrected. The specific inverse kinematics algorithm is not described in detail here.
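The child-to-parent iterative adjustment described above matches the cyclic coordinate descent (CCD) family of inverse kinematics solvers. Below is an illustrative 2-D sketch, a planar chain rather than the patent's full 3-D skeleton; the angle representation and function names are assumptions:

```python
import numpy as np

def ccd_ik_2d(lengths, angles, target, iters=300, tol=1e-4):
    """Cyclic Coordinate Descent IK on a 2-D chain rooted at the origin.
    Starting from the joint closest to the end effector (the bone to be
    corrected) and walking toward the parent bones, each joint is rotated
    so the end effector moves toward the target position.
    lengths: bone lengths; angles: per-joint rotations (radians), relative
    to the parent bone; target: desired end-effector position."""
    angles = list(angles)
    target = np.asarray(target, dtype=float)

    def fk(angles):
        # forward kinematics: positions of every joint plus the end effector
        pos, total, pts = np.zeros(2), 0.0, [np.zeros(2)]
        for length, a in zip(lengths, angles):
            total += a
            pos = pos + length * np.array([np.cos(total), np.sin(total)])
            pts.append(pos)
        return pts

    for _ in range(iters):
        if np.linalg.norm(fk(angles)[-1] - target) < tol:
            break
        for j in reversed(range(len(angles))):     # child -> parent order
            pts = fk(angles)
            to_end = pts[-1] - pts[j]
            to_tgt = target - pts[j]
            # rotate joint j by the angle between the two directions
            angles[j] += (np.arctan2(to_tgt[1], to_tgt[0])
                          - np.arctan2(to_end[1], to_end[0]))
    return angles
```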
It should be noted that steps 306 to 308 may generally be performed after step 304, or alternatively after step 305.
In some optional implementations of this embodiment, step 308 may include: acquiring the pre-saved adjustment values of the bone rotation parameters obtained when the inverse kinematics iteration was applied to the bone to be corrected in the previous animation frame; and setting these adjustment values as the initial adjustment values when the inverse kinematics iteration is applied to adjust the bone rotation parameters for the current animation frame.
In this implementation, the adjustment values of the bone rotation parameters obtained by inverse kinematics for the previous animation frame can be used as the initial adjustment values when iterating on the current frame. Because the bone parameters of adjacent animation frames within the time segment to be corrected differ only slightly in the animation segment obtained by the preceding steps, starting the inverse kinematics iteration from the previous frame's adjustment values allows the position parameter of the bone to be corrected in the current frame to be corrected in fewer iterations, improving computational efficiency and reducing the time consumed.
In some optional implementations of the previous implementation, before the adjustment values are set as the initial adjustment values for the inverse kinematics iteration on the current animation frame, the method further includes: attenuating the adjustment values. Typically, the bone rotation parameters of adjacent animation frames within a time segment differ somewhat, so the previous frame's adjustment values can be attenuated by a certain ratio according to this regularity, which can further improve the stability of the animation.
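The warm start with attenuation can be sketched in one line; the additive combination and the 0.9 decay ratio are illustrative assumptions, not values from the patent:

```python
def warm_started_angles(base_angles, prev_adjustments, decay=0.9):
    """Initialise the current frame's IK iteration by adding the previous
    frame's converged rotation adjustments, attenuated by a decay ratio,
    to the current frame's initial joint angles."""
    return [a + decay * d for a, d in zip(base_angles, prev_adjustments)]
```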
As can be seen from FIG. 3, compared with the embodiment corresponding to FIG. 2, the flow 300 of the method for generating animation data in this embodiment adds the step of correcting the positions of bones in the animation frames, further improving the realism of the generated animation data.
With further reference to FIG. 4, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for generating animation data. This apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus may specifically be applied to various electronic devices.
As shown in FIG. 4, the apparatus 400 for generating animation data in this embodiment includes: an acquisition unit 401, an input unit 402, a determination unit 403, a fusion unit 404, and a generation unit 405. The acquisition unit 401 is configured to acquire target motion parameters of at least one target animation segment in the target animation to be generated; the input unit 402 is configured to map the target motion parameters into a vector matching the input of the pre-trained radial basis function neural network model and input it into the radial basis function neural network model, where the model is trained on the sample animation segments in a sample animation segment sequence; the determination unit 403 is configured to determine each component of the vector output by the model as the fusion weight coefficient of the corresponding sample animation segment in the sequence; the fusion unit 404 is configured to fuse the animation data of the sample animation segments according to the fusion weight coefficients to obtain the animation data of the target animation segment; and the generation unit 405 is configured to generate the animation data of the target animation based on the animation data of the at least one target animation segment.
In this embodiment, for the specific processing of the acquisition unit 401, the input unit 402, the determination unit 403, the fusion unit 404, and the generation unit 405, reference may be made to steps 201, 202, 203, 204, and 205 of the embodiment corresponding to FIG. 2, respectively; details are not repeated here.
In some optional implementations of this embodiment, each sample animation segment in the sample animation segment sequence is a skeletal animation.
In some optional implementations of this embodiment, the apparatus 400 further includes a training unit (not shown) for training the radial basis function neural network model, the training unit being specifically configured to: for each sample animation segment in the sample animation segment sequence, map the motion parameters of the sample segment into a first vector, and generate a second vector according to the segment's position in the sequence, where the dimension of the second vector is the number of sample animation segments in the sequence, the component corresponding to the segment's position is set to 1, and the other components are set to 0; determine the dimension of the first vector as the number of input-layer nodes of the radial basis function neural network model to be trained, and determine the number of sample animation segments in the sequence as both the number of nodes of the intermediate hidden layer and the number of nodes of the output layer; and train the model using the first vector of each sample animation segment as its input and the corresponding second vector as its output.
In some optional implementations of this embodiment, each sample animation segment in the sample animation segment sequence has been divided in advance into at least one time segment according to key time points; and the fusion unit 404 includes: a duration determination subunit (not shown), configured to take a weighted average of the durations of the time segments of the sample animation segments according to their fusion weight coefficients, to determine the durations of the time segments of the target animation segment; a duration adjustment subunit (not shown), configured to adjust the durations of the time segments of each sample animation segment to match those of the target animation segment; and a fusion subunit (not shown), configured to perform, for each adjusted sample animation segment, fusion interpolation on the bone parameters of the animation frames within each time segment according to the fusion weight coefficients, to obtain the bone parameters of the animation frames in the corresponding time segments of the target animation segment.
In some optional implementations of this embodiment, the fusion subunit is further configured for at least one of: performing spherical interpolation on the rotation parameters of each bone in an animation frame; and performing linear interpolation on the position parameter of the root bone in an animation frame.
In some optional implementations of this embodiment, the fusion unit 404 further includes a registration subunit (not shown), configured to: acquire the horizontal heading difference and/or horizontal position difference between the root bone of each sample animation segment and that of the target animation segment; and adjust the horizontal heading and/or horizontal position of the root bone of each sample animation segment to eliminate the difference.
In some optional implementations of this embodiment, the generation unit 405 includes: an acquisition subunit (not shown), configured to acquire the bone parameters of the first animation frame of the current target animation segment and of the last animation frame of the previous target animation segment; a calculation subunit (not shown), configured to compute, by interpolation from the bone parameters of the two frames, the bone parameters of intermediate animation frames to be inserted between them; and an insertion subunit (not shown), configured to insert the intermediate frames between the two animation frames to generate the animation data of the target animation.
In some optional implementations of this embodiment, the apparatus 400 further includes: a position parameter acquisition unit (not shown), configured to acquire, for each animation frame in a time segment of the target animation segment, the target position parameter of the bone to be corrected in the frame; a difference determination unit (not shown), configured to determine the difference between the current position parameter of the bone to be corrected and the target position parameter; and an adjustment unit (not shown), configured to iteratively adjust the rotation parameters of the bone to be corrected and of the associated bones using inverse kinematics, so as to correct the difference.
In some optional implementations of this embodiment, the adjustment unit includes: an adjustment value acquisition subunit (not shown), configured to acquire the pre-saved adjustment values of the bone rotation parameters obtained when the inverse kinematics iteration was applied to the bone to be corrected in the previous animation frame; and a setting subunit (not shown), configured to set these adjustment values as the initial adjustment values when the inverse kinematics iteration is applied to adjust the bone rotation parameters for the current animation frame.
In some optional implementations of this embodiment, the adjustment unit further includes: an attenuation subunit (not shown), configured to attenuate the adjustment values before they are set as the initial adjustment values for the inverse kinematics iteration on the current animation frame.
Referring now to FIG. 5, a schematic structural diagram of a computer system 500 suitable for implementing the terminal device or server of the embodiments of the present application is shown.
As shown in FIG. 5, the computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage portion 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the system 500. The CPU 501, the ROM 502, and the RAM 503 are connected to one another through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 508 including a hard disk and the like; and a communication portion 509 including a network interface card such as a LAN card or a modem. The communication portion 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read from it can be installed into the storage portion 508 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded from a network and installed through the communication portion 509, and/or installed from the removable medium 511.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including an acquisition unit, an input unit, a determination unit, a fusion unit, and a generation unit. The names of these units do not in some cases limit the units themselves; for example, the acquisition unit may also be described as "a unit for acquiring target motion parameters of at least one target animation segment in a target animation to be generated".
As another aspect, the present application further provides a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the apparatus of the foregoing embodiments, or a non-volatile computer storage medium that exists separately and is not assembled into a terminal. The non-volatile computer storage medium stores one or more programs which, when executed by a device, cause the device to: acquire target motion parameters of at least one target animation segment in a target animation to be generated; map the target motion parameters into a vector matching the input of a pre-trained radial basis function neural network model, and input the vector into the radial basis function neural network model, the radial basis function neural network model having been trained on the sample animation segments in a sequence of sample animation segments; determine each component of the vector output by the radial basis function neural network model as the fusion weight coefficient of the corresponding sample animation segment in the sequence; fuse the animation data of the sample animation segments in the sequence according to the fusion weight coefficients to obtain animation data of a target animation segment; and generate animation data of the target animation based on the animation data of the at least one target animation segment.
The above description is merely a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by substituting the above features with technical features of similar function disclosed in (but not limited to) the present application.

Claims (22)

  1. A method for generating animation data, wherein the method comprises:
    acquiring target motion parameters of at least one target animation segment in a target animation to be generated;
    mapping the target motion parameters into a vector matching the input of a pre-trained radial basis function neural network model, and inputting the vector into the radial basis function neural network model, wherein the radial basis function neural network model is trained on the sample animation segments in a sequence of sample animation segments;
    determining each component of the vector output by the radial basis function neural network model as the fusion weight coefficient of the corresponding sample animation segment in the sequence of sample animation segments;
    fusing the animation data of the sample animation segments in the sequence according to the fusion weight coefficients, to obtain animation data of a target animation segment; and
    generating animation data of the target animation based on the animation data of the at least one target animation segment.
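The pipeline of claim 1 can be illustrated with a minimal Python sketch. This is not the claimed implementation: the Gaussian kernel, the normalization of the output weights, and every function and variable name here are assumptions added for illustration only.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Stand-in radial basis function network: a Gaussian hidden layer
    followed by a linear output layer (the kernel choice is an assumption;
    the claims do not specify it)."""
    h = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * widths ** 2))
    return h @ weights

def blend_segments(motion_param, centers, widths, weights, sample_anims):
    """Claim 1, steps 2-4: map a target motion parameter to fusion weight
    coefficients and blend the animation data of the sample segments."""
    coeffs = rbf_forward(motion_param, centers, widths, weights)
    coeffs = coeffs / coeffs.sum()  # assumed normalization so the blend is convex
    # each sample animation: an array of per-frame bone parameters, same shape
    return sum(c * a for c, a in zip(coeffs, sample_anims))
```

In this sketch each output component of the network weighs one sample segment, so a motion parameter close to a sample's own parameter yields a blend dominated by that sample.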
  2. The method according to claim 1, wherein each sample animation segment in the sequence of sample animation segments is a skeletal animation.
  3. The method according to claim 1 or 2, wherein the method comprises a step of training the radial basis function neural network model, the step comprising:
    for each sample animation segment in the sequence, mapping the motion parameters of the sample animation segment into a first vector, and generating a second vector according to the position of the sample animation segment in the sequence, wherein the dimension of the second vector is the number of sample animation segments in the sequence, the component of the second vector corresponding to the position of the sample animation segment is set to 1, and the other components are set to 0;
    determining the dimension of the first vector as the number of input-layer nodes of the radial basis function neural network model to be trained, and determining the number of sample animation segments in the sequence as both the number of nodes of the intermediate hidden layer and the number of nodes of the output layer of the radial basis function neural network model; and
    training the radial basis function neural network model with the first vector of each sample animation segment as its input and the corresponding second vector as its output.
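The training pairs described in claim 3 can be sketched as follows: one input vector per sample segment, and a one-hot target vector over the segment order. The helper name and array layout are assumptions; the claim itself prescribes only the vector contents.

```python
import numpy as np

def build_training_pairs(motion_params):
    """Claim 3: one (input, target) pair per sample animation segment.
    Input: the segment's motion-parameter vector (the 'first vector').
    Target: a one-hot 'second vector' whose dimension is the number of
    sample segments, with 1 at the segment's position in the sequence
    and 0 elsewhere."""
    n = len(motion_params)
    inputs = np.asarray(motion_params, dtype=float)
    targets = np.eye(n)  # row i is the one-hot vector for segment i
    return inputs, targets
```

With targets built this way, the trained network's output components directly correspond to the sample segments, which is why claim 1 can read them off as fusion weight coefficients.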
  4. The method according to claim 2, wherein each sample animation segment in the sequence of sample animation segments has been divided in advance into at least one time segment according to key time points; and
    the fusing the animation data of the sample animation segments in the sequence according to the fusion weight coefficients comprises:
    computing a weighted average of the durations of the time segments of the sample animation segments according to the fusion weight coefficients, to determine the durations of the time segments of the target animation segment;
    adjusting the durations of the time segments of each sample animation segment to match the time segments of the target animation segment; and
    for the adjusted sample animation segments, performing fusion interpolation on the bone parameters of the animation frames within each time segment according to the fusion weight coefficients, to obtain the bone parameters of the animation frames in the corresponding time segment of the target animation segment.
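The first step of claim 4, the weighted averaging of time-segment durations, reduces to a few lines. This is an illustrative sketch only; the function name and the list-of-lists layout are assumptions.

```python
def blend_durations(durations_per_sample, fusion_weights):
    """Claim 4, step 1: each time segment of the target clip gets the
    fusion-weighted average of the corresponding segment durations in
    the sample clips."""
    n_segments = len(durations_per_sample[0])
    return [
        sum(w * d[i] for w, d in zip(fusion_weights, durations_per_sample))
        for i in range(n_segments)
    ]
```

The subsequent time-warp step of the claim then rescales each sample clip's segments to these blended durations before the per-frame bone parameters are interpolated.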
  5. The method according to claim 4, wherein the performing fusion interpolation on the bone parameters of the animation frames within each time segment according to the fusion weight coefficients comprises at least one of:
    performing spherical interpolation on the rotation parameters of each bone in an animation frame; and
    performing linear interpolation on the position parameters of the root bone in an animation frame.
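The two interpolation modes of claim 5 can be sketched for the common quaternion representation of bone rotations. The quaternion encoding and the short-arc/fallback details are assumptions; the claim names only "spherical interpolation" and "linear interpolation".

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions
    (claim 5: used for per-bone rotation parameters)."""
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def lerp(p0, p1, t):
    """Linear interpolation (claim 5: used for the root-bone position)."""
    return (1 - t) * p0 + t * p1
```

Slerp keeps the interpolated rotation on the unit sphere at constant angular speed, which is why it is preferred over plain lerp for rotations.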
  6. The method according to claim 4, wherein the performing fusion interpolation on the bone parameters of the animation frames within each time segment according to the fusion weight coefficients further comprises, before the interpolation operation is performed on the adjusted sample animation segments:
    obtaining the horizontal orientation difference and/or the horizontal position difference between the root bone of each sample animation segment and that of the target animation segment; and
    adjusting the horizontal orientation and/or the horizontal position of the root bone of each sample animation segment to eliminate the horizontal orientation difference and/or the horizontal position difference.
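The root-bone alignment of claim 6 amounts to removing a heading (yaw) and horizontal-position offset before blending. The 2-D position array, per-frame yaw array, and alignment about the first frame are all assumptions made for this sketch; the claim only requires that the differences be eliminated.

```python
import numpy as np

def align_root_horizontal(root_positions, root_yaws, target_position, target_yaw):
    """Claim 6 sketch: remove the horizontal position and heading (yaw)
    difference between a sample clip's root bone and the target clip's
    root bone. root_positions: (n_frames, 2) horizontal positions;
    root_yaws: (n_frames,) headings about the vertical axis."""
    dyaw = target_yaw - root_yaws[0]
    dpos = target_position - root_positions[0]
    c, s = np.cos(dyaw), np.sin(dyaw)
    rot = np.array([[c, -s], [s, c]])  # rotation about the vertical axis
    # rotate each frame's horizontal offset about the first frame, then shift
    rel = root_positions - root_positions[0]
    aligned = root_positions[0] + rel @ rot.T + dpos
    return aligned, root_yaws + dyaw
```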
  7. The method according to claim 2, wherein the generating animation data of the target animation based on the animation data of the at least one target animation segment comprises:
    acquiring the bone parameters of the first animation frame of the current target animation segment and the bone parameters of the last animation frame of the preceding target animation segment;
    calculating, by interpolation from the bone parameters of the two animation frames, the bone parameters of intermediate animation frames to be inserted between the two animation frames; and
    inserting the intermediate animation frames between the two animation frames to generate the animation data of the target animation.
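The segment-stitching step of claim 7 can be sketched as generating evenly spaced in-between frames. Plain linear interpolation of flat bone-parameter lists is assumed here; the claim leaves the interpolation method open (rotations would typically use the spherical interpolation of claim 5).

```python
def stitch_segments(prev_last_frame, next_first_frame, n_insert):
    """Claim 7: generate n_insert intermediate frames between the last
    frame of the previous target segment and the first frame of the
    current one by interpolating their bone parameters."""
    frames = []
    for k in range(1, n_insert + 1):
        t = k / (n_insert + 1)  # evenly spaced blend factors in (0, 1)
        frames.append([(1 - t) * a + t * b
                       for a, b in zip(prev_last_frame, next_first_frame)])
    return frames
```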
  8. The method according to claim 2, wherein the method further comprises:
    for each animation frame in a time segment of the target animation segment, acquiring the target position parameters of a bone to be corrected in the animation frame;
    determining the difference between the current position parameters and the target position parameters of the bone to be corrected; and
    iteratively adjusting the rotation parameters of the bone to be corrected and of its associated bones using inverse kinematics, to correct the difference.
  9. The method according to claim 8, wherein the iteratively adjusting the rotation parameters of the bone to be corrected and of its associated bones using inverse kinematics to correct the difference comprises:
    acquiring a previously saved adjustment value of the rotation parameters, obtained when inverse kinematics was applied to the bone to be corrected in the preceding animation frame; and
    setting the adjustment value as the initial adjustment value for the inverse kinematics iteration that adjusts the rotation parameters of the bone in the current animation frame.
  10. The method according to claim 9, wherein before setting the adjustment value as the initial adjustment value for the inverse kinematics iteration in the current animation frame, the method further comprises:
    attenuating the adjustment value.
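The warm-start-with-decay scheme of claims 8-10 can be sketched in one dimension. Real inverse kinematics adjusts a chain of 3-D joint rotations; the single angle, the decay and gain constants, and all names here are assumptions for illustration.

```python
def ik_adjust(current_angle, target_angle, prev_adjustment,
              decay=0.5, gain=0.5, iters=30, tol=1e-6):
    """Claims 8-10 sketch: iteratively adjust a bone rotation to reduce
    the difference to the target, warm-starting from the previous
    frame's adjustment attenuated by a decay factor (claim 10)."""
    adjustment = decay * prev_adjustment      # claim 10: attenuate before reuse
    for _ in range(iters):                    # claims 8-9: iterate from warm start
        error = target_angle - (current_angle + adjustment)
        if abs(error) < tol:
            break
        adjustment += gain * error            # shrink the remaining difference
    return adjustment
```

Reusing the previous frame's attenuated adjustment keeps successive frames' corrections close to each other, which suppresses visible popping between frames.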
  11. An apparatus for generating animation data, wherein the apparatus comprises:
    an acquisition unit, configured to acquire target motion parameters of at least one target animation segment in a target animation to be generated;
    an input unit, configured to map the target motion parameters into a vector matching the input of a pre-trained radial basis function neural network model and input the vector into the radial basis function neural network model, wherein the radial basis function neural network model is trained on the sample animation segments in a sequence of sample animation segments;
    a determination unit, configured to determine each component of the vector output by the radial basis function neural network model as the fusion weight coefficient of the corresponding sample animation segment in the sequence of sample animation segments;
    a fusion unit, configured to fuse the animation data of the sample animation segments in the sequence according to the fusion weight coefficients, to obtain animation data of a target animation segment; and
    a generation unit, configured to generate animation data of the target animation based on the animation data of the at least one target animation segment.
  12. The apparatus according to claim 11, wherein each sample animation segment in the sequence of sample animation segments is a skeletal animation.
  13. The apparatus according to claim 11 or 12, further comprising a training unit for training the radial basis function neural network model, the training unit being configured to:
    for each sample animation segment in the sequence, map the motion parameters of the sample animation segment into a first vector, and generate a second vector according to the position of the sample animation segment in the sequence, wherein the dimension of the second vector is the number of sample animation segments in the sequence, the component of the second vector corresponding to the position of the sample animation segment is set to 1, and the other components are set to 0;
    determine the dimension of the first vector as the number of input-layer nodes of the radial basis function neural network model to be trained, and determine the number of sample animation segments in the sequence as both the number of nodes of the intermediate hidden layer and the number of nodes of the output layer of the radial basis function neural network model; and
    train the radial basis function neural network model with the first vector of each sample animation segment as its input and the corresponding second vector as its output.
  14. The apparatus according to claim 12, wherein each sample animation segment in the sequence of sample animation segments has been divided in advance into at least one time segment according to key time points; and
    the fusion unit comprises:
    a duration determination subunit, configured to compute a weighted average of the durations of the time segments of the sample animation segments according to the fusion weight coefficients, to determine the durations of the time segments of the target animation segment;
    a duration adjustment subunit, configured to adjust the durations of the time segments of each sample animation segment to match the time segments of the target animation segment; and
    a fusion subunit, configured to perform, for the adjusted sample animation segments, fusion interpolation on the bone parameters of the animation frames within each time segment according to the fusion weight coefficients, to obtain the bone parameters of the animation frames in the corresponding time segment of the target animation segment.
  15. The apparatus according to claim 14, wherein the fusion subunit is further configured for at least one of:
    performing spherical interpolation on the rotation parameters of each bone in an animation frame; and
    performing linear interpolation on the position parameters of the root bone in an animation frame.
  16. The apparatus according to claim 14, wherein the fusion unit further comprises a registration subunit configured to:
    obtain the horizontal orientation difference and/or the horizontal position difference between the root bone of each sample animation segment and that of the target animation segment; and
    adjust the horizontal orientation and/or the horizontal position of the root bone of each sample animation segment to eliminate the horizontal orientation difference and/or the horizontal position difference.
  17. The apparatus according to claim 12, wherein the generation unit comprises:
    an acquisition subunit, configured to acquire the bone parameters of the first animation frame of the current target animation segment and the bone parameters of the last animation frame of the preceding target animation segment;
    a calculation subunit, configured to calculate, by interpolation from the bone parameters of the two animation frames, the bone parameters of intermediate animation frames to be inserted between the two animation frames; and
    an insertion subunit, configured to insert the intermediate animation frames between the two animation frames to generate the animation data of the target animation.
  18. The apparatus according to claim 12, further comprising:
    a position parameter acquisition unit, configured to acquire, for each animation frame in a time segment of the target animation segment, the target position parameters of a bone to be corrected in the animation frame;
    a difference determination unit, configured to determine the difference between the current position parameters and the target position parameters of the bone to be corrected; and
    an adjustment unit, configured to iteratively adjust the rotation parameters of the bone to be corrected and of its associated bones using inverse kinematics, to correct the difference.
  19. The apparatus according to claim 18, wherein the adjustment unit comprises:
    an adjustment value acquisition subunit, configured to acquire a previously saved adjustment value of the rotation parameters, obtained when inverse kinematics was applied to the bone to be corrected in the preceding animation frame; and
    a setting subunit, configured to set the adjustment value as the initial adjustment value for the inverse kinematics iteration that adjusts the rotation parameters of the bone in the current animation frame.
  20. The apparatus according to claim 19, wherein the adjustment unit further comprises:
    an attenuation subunit, configured to attenuate the adjustment value before it is set as the initial adjustment value for the inverse kinematics iteration in the current animation frame.
  21. A device, comprising:
    a processor; and
    a memory,
    wherein the memory stores computer-readable instructions executable by the processor, and when the computer-readable instructions are executed, the processor performs the method according to any one of claims 1-10.
  22. A non-volatile computer storage medium storing computer-readable instructions executable by a processor, wherein when the computer-readable instructions are executed by the processor, the processor performs the method according to any one of claims 1-10.
PCT/CN2017/100472 2016-09-14 2017-09-05 Method and device for generating animation data WO2018050001A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610822607.7A CN106447748B (en) 2016-09-14 2016-09-14 A kind of method and apparatus for generating animation data
CN201610822607.7 2016-09-14

Publications (1)

Publication Number Publication Date
WO2018050001A1 true WO2018050001A1 (en) 2018-03-22

Family

ID=58167743

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/100472 WO2018050001A1 (en) 2016-09-14 2017-09-05 Method and device for generating animation data

Country Status (2)

Country Link
CN (1) CN106447748B (en)
WO (1) WO2018050001A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109872375A (en) * 2019-01-10 2019-06-11 珠海金山网络游戏科技有限公司 A kind of skeleton cartoon key frame compression method and device
CN110246208A (en) * 2019-06-20 2019-09-17 武汉两点十分文化传播有限公司 A kind of plug-in unit that cartoon making flow path efficiency can be improved
CN111340920A (en) * 2020-03-02 2020-06-26 长沙千博信息技术有限公司 Semantic-driven two-dimensional animation automatic generation method
CN111968206A (en) * 2020-08-18 2020-11-20 网易(杭州)网络有限公司 Animation object processing method, device, equipment and storage medium
CN112165630A (en) * 2020-10-16 2021-01-01 广州虎牙科技有限公司 Image rendering method and device, electronic equipment and storage medium
CN112184863A (en) * 2020-10-21 2021-01-05 网易(杭州)网络有限公司 Animation data processing method and device
CN112270734A (en) * 2020-10-19 2021-01-26 北京大米科技有限公司 Animation generation method, readable storage medium and electronic device
CN112418279A (en) * 2020-11-05 2021-02-26 北京迈格威科技有限公司 Image fusion method and device, electronic equipment and readable storage medium
CN113034650A (en) * 2021-03-17 2021-06-25 深圳市大富网络技术有限公司 Mode switching method, system, device and storage medium for manufacturing skeleton animation
US20210287415A1 (en) * 2019-04-30 2021-09-16 Tencent Technology (Shenzhen) Company Limited Virtual object display method and apparatus, electronic device, and storage medium
CN111080755B (en) * 2019-12-31 2023-11-14 上海米哈游天命科技有限公司 Motion calculation method and device, storage medium and electronic equipment

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447748B (en) * 2016-09-14 2019-09-24 厦门黑镜科技有限公司 A kind of method and apparatus for generating animation data
CN106780683A (en) * 2017-02-23 2017-05-31 网易(杭州)网络有限公司 The processing method and processing device of bone animation data
CN106971414B (en) * 2017-03-10 2021-02-23 华东交通大学 Three-dimensional animation generation method based on deep cycle neural network algorithm
CN106952325B (en) * 2017-03-27 2020-07-21 厦门黑镜科技有限公司 Method and apparatus for manipulating three-dimensional animated characters
CN106981099B (en) * 2017-03-27 2020-04-14 厦门黑镜科技有限公司 Method and apparatus for manipulating three-dimensional animated characters
CN108665518B (en) * 2017-04-01 2021-10-22 Tcl科技集团股份有限公司 Control method and system for adjusting animation speed
CN108122266B (en) * 2017-12-20 2021-07-27 成都卓杭网络科技股份有限公司 Method, device and storage medium for caching rendering textures of skeleton animation
CN109300179B (en) * 2018-09-28 2023-08-22 南京蜜宝信息科技有限公司 Animation production method, device, terminal and medium
CN109785413B (en) * 2018-12-06 2023-03-14 珠海西山居互动娱乐科技有限公司 Unity-based tree compression method and device for configurable animation file
CN111462284B (en) * 2020-03-31 2023-09-05 北京小米移动软件有限公司 Animation generation method, animation generation device and electronic equipment
CN112156462B (en) * 2020-10-14 2024-06-25 网易(杭州)网络有限公司 Animation processing method and device for game skills
CN118154732A (en) * 2020-12-22 2024-06-07 完美世界(北京)软件科技发展有限公司 Animation data processing method and device, storage medium and computer equipment
CN113658300B (en) * 2021-08-18 2023-05-30 北京百度网讯科技有限公司 Animation playing method and device, electronic equipment and storage medium
CN114998491B (en) * 2022-08-01 2022-11-18 阿里巴巴(中国)有限公司 Digital human driving method, device, equipment and storage medium
CN116051699B (en) * 2023-03-29 2023-06-02 腾讯科技(深圳)有限公司 Dynamic capture data processing method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100428281C (en) * 2006-09-14 2008-10-22 浙江大学 Automatic generation method for 3D human body animation based on moving script
CN102157009A (en) * 2011-05-24 2011-08-17 中国科学院自动化研究所 Method for compiling three-dimensional human skeleton motion based on motion capture data
US20160232698A1 (en) * 2015-02-06 2016-08-11 Electronics And Telecommunications Research Institute Apparatus and method for generating animation
CN106447748A (en) * 2016-09-14 2017-02-22 厦门幻世网络科技有限公司 Method and device for generating animation data

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101241600B (en) * 2008-02-19 2010-09-29 深圳先进技术研究院 Chain-shaped bone matching method in movement capturing technology
CN101364309B (en) * 2008-10-09 2011-05-04 中国科学院计算技术研究所 Cartoon generating method for mouth shape of source virtual characters
CN102945561B (en) * 2012-10-16 2015-11-18 北京航空航天大学 Based on the motion synthesis of motion capture data and edit methods in a kind of computing machine skeleton cartoon
CN103279970B (en) * 2013-05-10 2016-12-28 中国科学技术大学 A kind of method of real-time voice-driven human face animation


Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109872375A (en) * 2019-01-10 2019-06-11 珠海金山网络游戏科技有限公司 A kind of skeleton cartoon key frame compression method and device
CN109872375B (en) * 2019-01-10 2023-04-14 珠海金山数字网络科技有限公司 Skeleton animation key frame compression method and device
US11615570B2 (en) 2019-04-30 2023-03-28 Tencent Technology (Shenzhen) Company Limited Virtual object display method and apparatus, electronic device, and storage medium
US20210287415A1 (en) * 2019-04-30 2021-09-16 Tencent Technology (Shenzhen) Company Limited Virtual object display method and apparatus, electronic device, and storage medium
CN110246208A (en) * 2019-06-20 2019-09-17 武汉两点十分文化传播有限公司 A kind of plug-in unit that cartoon making flow path efficiency can be improved
CN111080755B (en) * 2019-12-31 2023-11-14 上海米哈游天命科技有限公司 Motion calculation method and device, storage medium and electronic equipment
CN111340920A (en) * 2020-03-02 2020-06-26 长沙千博信息技术有限公司 Semantic-driven two-dimensional animation automatic generation method
CN111340920B (en) * 2020-03-02 2024-04-09 长沙千博信息技术有限公司 Semantic-driven two-dimensional animation automatic generation method
CN111968206A (en) * 2020-08-18 2020-11-20 网易(杭州)网络有限公司 Animation object processing method, device, equipment and storage medium
CN111968206B (en) * 2020-08-18 2024-04-30 网易(杭州)网络有限公司 Method, device, equipment and storage medium for processing animation object
CN112165630A (en) * 2020-10-16 2021-01-01 广州虎牙科技有限公司 Image rendering method and device, electronic equipment and storage medium
CN112270734B (en) * 2020-10-19 2024-01-26 北京大米科技有限公司 Animation generation method, readable storage medium and electronic equipment
CN112270734A (en) * 2020-10-19 2021-01-26 北京大米科技有限公司 Animation generation method, readable storage medium and electronic device
CN112184863B (en) * 2020-10-21 2024-03-15 网易(杭州)网络有限公司 Animation data processing method and device
CN112184863A (en) * 2020-10-21 2021-01-05 网易(杭州)网络有限公司 Animation data processing method and device
CN112418279A (en) * 2020-11-05 2021-02-26 北京迈格威科技有限公司 Image fusion method and device, electronic equipment and readable storage medium
CN113034650A (en) * 2021-03-17 2021-06-25 深圳市大富网络技术有限公司 Mode switching method, system, device and storage medium for skeletal animation production

Also Published As

Publication number Publication date
CN106447748A (en) 2017-02-22
CN106447748B (en) 2019-09-24

Similar Documents

Publication Publication Date Title
WO2018050001A1 (en) Method and device for generating animation data
JP7198332B2 (en) Image regularization and retargeting system
CN106910247B (en) Method and apparatus for generating three-dimensional avatar model
AU2017228685B2 (en) Sketch2painting: an interactive system that transforms hand-drawn sketch to painting
CN113838176B (en) Model training method, three-dimensional face image generation method and three-dimensional face image generation equipment
CN109961507A (en) Face image synthesis method, apparatus, device and storage medium
US11893717B2 (en) Initializing a learned latent vector for neural-network projections of diverse images
CN110288705B (en) Method and device for generating three-dimensional model
CN111476871A (en) Method and apparatus for generating video
CN111951372A (en) Three-dimensional face model generation method and equipment
CN115908109A (en) Facial image stylized model training method, equipment and storage medium
CN113610989B (en) Method and device for training style migration model and method and device for style migration
CN112766215A (en) Face fusion method and device, electronic equipment and storage medium
CN111696034B (en) Image processing method and device and electronic equipment
US10445921B1 (en) Transferring motion between consecutive frames to a digital image
CN117336527A (en) Video editing method and device
CN112329752A (en) Training method of human eye image processing model, image processing method and device
US20240062495A1 (en) Deformable neural radiance field for editing facial pose and facial expression in neural 3d scenes
US20240331330A1 (en) System and Method for Dynamically Improving the Performance of Real-Time Rendering Systems via an Optimized Data Set
CN114452646A (en) Virtual object perspective processing method and device and computer equipment
WO2023246403A1 (en) Model training method, watermark restoration method, and related device
US20180089882A1 (en) Blend shape system with texture coordinate blending
CN113223128B (en) Method and apparatus for generating image
CN113408452A (en) Expression redirection training method and device, electronic equipment and readable storage medium
CN114758374A (en) Expression generation method, computing device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17850203

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.09.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17850203

Country of ref document: EP

Kind code of ref document: A1