CN110136230A - Animation display method and apparatus, electronic device and storage medium - Google Patents
- Publication number
- CN110136230A (application CN201910253191.5A)
- Authority
- CN
- China
- Prior art keywords
- animation
- target
- parameter information
- types
- different
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present disclosure relates to an animation display method and apparatus, an electronic device, and a storage medium. The animation display method includes: obtaining parameter information of a target animation to be displayed, where the target animation is an animation drawn by an animation application installed on a second terminal; obtaining model data and rendering information of the target animation according to the parameter information and pre-stored animation classes matched with different animation types; converting the data types of the model data and the rendering information from a first data type supported by a preset programming interface to a second data type supported by a target programming interface of a first terminal, according to preset mapping relations between the preset programming interface and the programming interfaces of different development platforms; and rendering the converted model data according to the converted rendering information to display the target animation. The present disclosure can display the target animation across platforms without modifying code.
Description
Technical field
The present disclosure relates to the field of graphics processing technology, and in particular to an animation display method and apparatus, an electronic device, and a storage medium.
Background
At present, there are many types of animation software on the computer side. A designer can use computer-side animation software (for example, After Effects, AE for short, a graphics and video processing application released by Adobe) to design and produce various animation effects.
If an animation produced with the computer-side animation software needs to be displayed on a mobile terminal, a developer has to rewrite the animation APP (application) according to the API (Application Programming Interface) supported by the operating system of that mobile terminal and install it on the mobile terminal for use.
However, different operating systems use different programming languages and support different APIs. Therefore, for the same animation special effect, multiple sets of animation APPs have to be developed for the terminals of each platform by different developers who are familiar with the development language of each operating system, so that the same animation effect can be displayed on terminals of different platforms.
Obviously, the animation implementations in the related art suffer from the problems that the animation code is difficult to reuse, the animation code is difficult to maintain, and the animation code is costly to develop.
Summary of the invention
To overcome the problems in the related art that animation code is difficult to reuse, difficult to maintain, and costly to develop, the present disclosure provides an animation display method and apparatus, an electronic device, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, an animation display method is provided, applied to a first terminal having a target programming interface. The method includes:
obtaining parameter information of a target animation to be displayed, where the target animation is an animation drawn by an animation application installed on a second terminal;
obtaining model data and rendering information of the target animation according to the parameter information and pre-stored animation classes matched with different animation types;
converting the data types of the model data and the rendering information from a first data type supported by a preset programming interface to a second data type supported by the target programming interface of the first terminal, according to preset mapping relations between the preset programming interface and the programming interfaces of different development platforms; and
rendering the converted model data according to the converted rendering information to display the target animation.
In a possible implementation, obtaining the parameter information of the target animation to be displayed includes:
obtaining a configuration file of the target animation, where the configuration file is a configuration file, corresponding to the target animation, of the animation application installed on the second terminal; and
obtaining the parameter information of the target animation according to the configuration file.
In a possible implementation, obtaining the model data and the rendering information of the target animation according to the parameter information and the pre-stored animation classes matched with different animation types includes:
obtaining multiple animation objects matched with the target animation according to the parameter information and the pre-stored animation classes matched with different animation types; and
obtaining, according to the parameter information, model data and first rendering information of each animation object among the multiple animation objects, and second rendering information for describing the animation relationship between different animation objects among the multiple animation objects.
In a possible implementation, the target animation consists of one or more atom animations, and the parameter information includes the animation type of each atom animation corresponding to the target animation, the animation parameters of each atom animation, the model identifier of each atom animation, and the association parameters between different atom animations.
Obtaining the multiple animation objects matched with the target animation according to the parameter information and the pre-stored animation classes matched with different animation types includes:
creating, according to the pre-stored animation classes matched with different animation types and according to the animation type and the animation parameters of each atom animation in the parameter information, the multiple animation objects matched with the target animation, where each animation object is configured with its corresponding animation parameters.
Obtaining, according to the parameter information, the model data and the first rendering information of each animation object among the multiple animation objects, and the second rendering information for describing the animation relationship between different animation objects among the multiple animation objects includes:
obtaining the model data of each animation object among the multiple animation objects according to the model identifier of each atom animation corresponding to the target animation in the parameter information;
generating the first rendering information of each animation object according to the animation parameters configured for that animation object;
creating association information between different animation objects among the multiple animation objects according to the association parameters between different atom animations corresponding to the target animation; and
generating, according to the association information, the second rendering information for describing the animation relationship between different animation objects among the multiple animation objects.
In a possible implementation, obtaining the model data of each animation object among the multiple animation objects according to the model identifier of each atom animation corresponding to the target animation in the parameter information includes:
obtaining a model file; and
obtaining the model data of each animation object among the multiple animation objects from the model file according to the model identifier of each atom animation corresponding to the target animation in the parameter information.
In a possible implementation, the animation types corresponding to the multiple animation classes are the animation types supported by the animation application.
According to a second aspect of the embodiments of the present disclosure, an animation display apparatus is provided, applied to a first terminal having a target programming interface, and including:
a first obtaining module configured to obtain parameter information of a target animation to be displayed, where the target animation is an animation drawn by an animation application installed on a second terminal;
a second obtaining module configured to obtain model data and rendering information of the target animation according to the parameter information and pre-stored animation classes matched with different animation types;
a mapping module configured to convert the data types of the model data and the rendering information from a first data type supported by a preset programming interface to a second data type supported by the target programming interface of the first terminal, according to preset mapping relations between the preset programming interface and the programming interfaces of different development platforms; and
a rendering module configured to render the converted model data according to the converted rendering information to display the target animation.
In a possible implementation, the first obtaining module includes:
a first obtaining sub-module configured to obtain a configuration file of the target animation, where the configuration file is a configuration file, corresponding to the target animation, of the animation application installed on the second terminal; and
a second obtaining sub-module configured to obtain the parameter information of the target animation according to the configuration file.
In a possible implementation, the second obtaining module includes:
a third obtaining sub-module configured to obtain multiple animation objects matched with the target animation according to the parameter information and the pre-stored animation classes matched with different animation types; and
a fourth obtaining sub-module configured to obtain, according to the parameter information, model data and first rendering information of each animation object among the multiple animation objects, and second rendering information for describing the animation relationship between different animation objects among the multiple animation objects.
In a possible implementation, the target animation consists of one or more atom animations, and the parameter information includes the animation type of each atom animation corresponding to the target animation, the animation parameters of each atom animation, the model identifier of each atom animation, and the association parameters between different atom animations.
The third obtaining sub-module includes:
a first creating unit configured to create, according to the pre-stored animation classes matched with different animation types and according to the animation type and the animation parameters of each atom animation in the parameter information, the multiple animation objects matched with the target animation, where each animation object is configured with its corresponding animation parameters.
The fourth obtaining sub-module includes:
a first obtaining unit configured to obtain the model data of each animation object among the multiple animation objects according to the model identifier of each atom animation corresponding to the target animation in the parameter information;
a first generating unit configured to generate the first rendering information of each animation object according to the animation parameters configured for that animation object;
a second creating unit configured to create association information between different animation objects among the multiple animation objects according to the association parameters between different atom animations corresponding to the target animation; and
a second generating unit configured to generate, according to the association information, the second rendering information for describing the animation relationship between different animation objects among the multiple animation objects.
In a possible implementation, the first obtaining unit includes:
a first obtaining sub-unit configured to obtain a model file; and
a second obtaining sub-unit configured to obtain the model data of each animation object among the multiple animation objects from the model file according to the model identifier of each atom animation corresponding to the target animation in the parameter information.
In a possible implementation, the animation types corresponding to the multiple animation classes are the animation types supported by the animation application.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, including:
a processor; and
a memory for storing instructions executable by the processor;
where the processor is configured to perform the operations performed by the animation display method described in any one of the above.
According to a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided. When instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the operations performed by the animation display method described in any one of the above.
According to a fifth aspect of the embodiments of the present disclosure, an application program is provided. When the application program is executed by a processor of an electronic device, the electronic device is enabled to perform the operations performed by the animation display method described in any one of the above.
The technical solutions provided by the embodiments of the present disclosure can include the following beneficial effects:
The embodiments of the present disclosure obtain the parameter information of the target animation to be displayed and, in combination with the pre-stored animation classes matched with different animation types, obtain the model data and rendering information of the target animation. Then, according to the preset mapping relations between the preset programming interface and the programming interfaces of different development platforms, the data types of the model data and the rendering information are converted from the first data type supported by the preset programming interface to the second data type supported by the target programming interface of the first terminal, and the display of the target animation is completed with the converted data. The method of the embodiments of the present disclosure can be applied, without modifying the animation code, to each first terminal whose programming interface differs from the preset programming interface. The animation code is thus reused, its maintenance difficulty and development cost are reduced, and there is no need to implement multiple sets of code for the same animation separately for different development platforms.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
Fig. 1 is a flowchart of an animation display method according to an exemplary embodiment;
Fig. 2 is a flowchart of an animation display method according to an exemplary embodiment;
Fig. 3 is a flowchart of an animation display method according to an exemplary embodiment;
Fig. 4 is a structural block diagram of an animation display apparatus according to an exemplary embodiment;
Fig. 5 is a block diagram of a device for animation display according to an exemplary embodiment;
Fig. 6 is a block diagram of a device for animation display according to an exemplary embodiment.
Detailed description of the embodiments
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. In the following description that refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. On the contrary, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Fig. 1 is a flowchart of an animation display method according to an exemplary embodiment. The animation display method is applied to a first terminal having a target programming interface and may specifically include the following steps:
Step 101: obtain parameter information of a target animation to be displayed.
The target animation is an animation drawn by an animation application (for example, AE) installed on a second terminal (for example, a computer terminal). In order to display the target animation on terminals whose programming interfaces differ from that of the second terminal, the embodiments of the present disclosure develop a target APP (that is, a target animation application) common to all programming interfaces. The target APP can obtain the parameter information of the target animation to be displayed, where the parameter information is the parameter information of the target animation in the existing animation application (for example, the AE installed on the computer terminal), and the parameter information can describe the target animation.
It should be noted that the present disclosure does not limit the already developed animation application installed on the second terminal; it may be any developed animation application, and a designer can use the animation application to produce the target animation.
The purpose of the method of the embodiments of the present disclosure is to avoid re-developing every animation supported by such an already developed animation application: the newly developed target animation application can be installed and used on mobile terminals having any programming interface, so that every animation supported by the developed animation application can be drawn on mobile terminals having any programming interface, avoiding repeated development of the same set of animation code.
In a possible implementation, when performing step 101, a configuration file of the target animation may be obtained; then the parameter information of the target animation is obtained according to the configuration file.
The configuration file is a configuration file, corresponding to the target animation, of the animation application installed on the second terminal.
In the following description, the second terminal is a computer terminal and the first terminal is a mobile terminal, by way of example. Depending on the application scenario, however, the present disclosure does not specifically limit the terminal types of the first terminal and the second terminal: both may be mobile terminals, both may be computer terminals, or one may be a computer terminal while the other is a mobile terminal. What matters is that the programming interfaces supported by the first terminal and the second terminal are different, in other words, their operating systems are different.
To obtain the parameter information of the target animation, the configuration file of the target animation in the animation application of the computer terminal (for example, a json file) can be read into memory. The target APP of the embodiments of the present disclosure is provided with a parser, which can parse the data of the configuration file in memory to obtain the parameter information of the target animation.
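As an illustration of the kind of parameter information such a parser could produce, the following C++ sketch models it as plain data structures. All names and fields are assumptions for illustration; the actual layout of the json configuration file and of the parameter information is defined by the animation application and the target APP.

```cpp
// Illustrative sketch only: assumed shape of the parameter information extracted
// from the configuration file; field names are hypothetical, not from the patent.
#include <map>
#include <string>
#include <vector>

struct AtomAnimationInfo {
    std::string animationType;               // e.g. "rotate", "displace", "scale"
    std::string modelId;                     // model identifier of the object the animation acts on
    std::map<std::string, double> params;    // e.g. {"fromDeg",45.0}, {"toDeg",90.0}, {"durationSec",2.0}
};

struct AssociationInfo {                     // association parameters between two atom animations
    std::string firstAtom, secondAtom;       // which atom animations are related
    std::string combination;                 // e.g. "sequential" or "parallel"
    double intervalSec = 0.0;                // interval duration between the two animations
};

struct TargetAnimationParams {               // parameter information of the target animation
    std::vector<AtomAnimationInfo> atoms;
    std::vector<AssociationInfo> associations;
};
// The parser would fill a TargetAnimationParams from the json data read into memory.
```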
In this way, the embodiments of the present disclosure only need to obtain the configuration file of the target animation to be displayed from the animation application of the second terminal, and the parameter information of the target animation can be obtained from that configuration file, requiring little external data. Based on the parameter information and the pre-stored animation classes, the model data and rendering information of the target animation can be obtained, the model data and rendering information can be mapped, and the target animation can be displayed on the first terminal with the mapped data. The operation is simple, little external data is required, repeated development for different platforms is unnecessary, and development cost is saved.
Step 102: obtain the model data and rendering information of the target animation according to the parameter information and the pre-stored animation classes matched with different animation types.
In a possible implementation, the animation types corresponding to the multiple animation classes are the animation types supported by the animation application installed on the second terminal.
Since AE can produce a wide range of animation effects, in order to enable the target APP installed on the first terminal to also display animations of various animation types, the target APP of the embodiments of the present disclosure includes an animation system in which multiple animation classes, that is, class (Class) files each describing an animation, are stored in advance. Each class file describes one type of animation (the atom animation described below) and is therefore called an animation class. Different animation classes correspond to different animation types, and the multiple animation types corresponding to the multiple animation classes may each be, for example, an animation type supported by AE.
Animation types include, but are not limited to, rotation, displacement, and so on. The animation class of the rotation type, for example, can describe the animation parameters, rendering information, and so on involved in a rotation animation. In this way, no matter which animation types AE combines to produce the target animation, the target APP of the embodiments of the present disclosure can realize the animation function, so that the animations achievable by the target APP installed on the first terminal are the same as those achievable by the animation application installed on the second terminal.
A pre-stored animation class can store default values of the animation parameters of that animation class. For example, the animation class for rotation animations stores default animation parameter values such as rotating 30 degrees to the left with a rotation duration of 2 seconds.
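A minimal sketch of such a pre-stored animation class follows, assuming it simply bundles an animation type with default parameter values; the class layout is an assumption, since the patent only states that defaults are stored.

```cpp
// Sketch under stated assumptions: one pre-stored animation class per animation type,
// carrying default values for that type's animation parameters.
#include <map>
#include <string>
#include <utility>

class AnimationClass {
public:
    AnimationClass(std::string type, std::map<std::string, double> defaults)
        : type_(std::move(type)), defaults_(std::move(defaults)) {}
    const std::string& type() const { return type_; }
    const std::map<std::string, double>& defaults() const { return defaults_; }
private:
    std::string type_;
    std::map<std::string, double> defaults_;
};

// e.g. the rotation class pre-stores "rotate 30 degrees to the left over 2 seconds" as defaults.
const AnimationClass kRotateClass{"rotate", {{"angleDeg", -30.0}, {"durationSec", 2.0}}};
```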
In this step, the animation system in the target APP of the embodiments of the present disclosure can obtain the model data and rendering information of the target animation according to the parameter information of the target animation and the pre-stored animation classes.
The so-called model data is the object on which the animation acts. Taking a rotation animation as an example, the rotation motion needs an object to act on; that object is the model data, for example model 1, and the generated animation is model 1 being rotated.
The so-called rendering information can be understood as render instructions. After the render instructions are input to hardware (for example, a GPU (Graphics Processing Unit)), the GPU can render the model data on the screen of the first terminal according to the render instructions.
In a possible implementation, referring to Fig. 2, step 102 can be implemented through S201 and S202:
S201: obtain multiple animation objects matched with the target animation according to the parameter information and the pre-stored animation classes matched with different animation types.
The multiple animation classes pre-stored in the target APP define the animation parameters of each animation type, the default values of those animation parameters, and so on. However, the values of the animation parameters of the target animation are not necessarily the default values, and the target animation may include a combination of animations of multiple animation types. Therefore, in this step, multiple animation objects can be created according to the pre-stored animation classes and the parameter information, where the animation types corresponding to the multiple animation objects are the animation types included in the parameter information of the target animation, and the created animation objects are configured with custom parameter values according to the parameter information.
S202: obtain, according to the parameter information, the model data and first rendering information of each animation object among the multiple animation objects, and the second rendering information for describing the animation relationship between different animation objects among the multiple animation objects.
The parameter information of the target animation can carry data indicating the animation models involved in the target animation; the parameter information can therefore be used to obtain the model data of each animation object among the multiple animation objects. In addition, the first rendering information of each animation object corresponding to the target animation, and the second rendering information for describing the animation relationship between different animation objects among the multiple animation objects, can also be obtained according to the parameter information of the target animation.
In this way, when obtaining the model data and rendering information of the target animation, the embodiments of the present disclosure use the parameter information of the target animation and the pre-stored animation classes to create the multiple animation objects matched with the target animation, and use the parameter information to obtain the model data and first rendering information of each animation object and the second rendering information for describing the animation relationship between different animation objects. The data needed to draw the target animation is thus obtained simply and quickly, improving the efficiency of generating the target animation on the first terminal.
In a possible implementation, the target animation consists of one or more atom animations, and the parameter information includes the animation type of each atom animation corresponding to the target animation, the animation parameters of each atom animation, the model identifier of each atom animation, and the association parameters between different atom animations.
Any designed target animation is composed of sub-animations, each of which cannot be split further; an animation that constitutes the target animation and cannot be split further is therefore referred to here as an atom animation. A target animation may include one or more atom animations, and the types of atom animations include, but are not limited to, rotation, displacement, scaling, and so on.
The configuration file of the target animation can record which types of atom animations the target animation is composed of, the animation parameters of each atom animation constituting the target animation, and the association parameters between different atom animations. In addition, the configuration file of the target animation can also record the object on which each atom animation acts, that is, the model identifier.
Therefore, by parsing the parameter information of the target animation, the method of the embodiments of the present disclosure can obtain the items of parameter information listed above.
As for the animation parameters, taking an atom animation of the rotation type as an example, they can include, but are not limited to, the angular velocity, the linear velocity, which axis the rotation is around, the duration of the rotation, the direction of rotation, and so on; taking an atom animation of the displacement type as an example, they can include, but are not limited to, the initial velocity, the acceleration, the motion trajectory, and so on.
As for the association parameters between different atom animations, taking a rotation animation and a displacement animation as an example, they describe the display order of the two animations, how the two are combined (for example, displacing while rotating, or rotating first and then displacing), the interval duration between the different animations, and so on. The association parameters between different atom animations are embodied in the second rendering information described below.
In the embodiments of the present disclosure, referring to Fig. 3, S201 can be implemented through S301:
S301: create, according to the pre-stored animation classes matched with different animation types and according to the animation type and the animation parameters of each atom animation in the parameter information, the multiple animation objects matched with the target animation, where each animation object is configured with its corresponding animation parameters.
Specifically, the pre-stored animation classes cover, for example, all animation types supported by the animation application, whereas the target animation may be composed of only some atom animations. Therefore, according to the animation types of the atom animations in the parameter information of the target animation (for example, rotation and displacement), animation objects are created from only the rotation animation class and the displacement animation class among the pre-stored animation classes, and the corresponding animation parameters are configured for each created animation object according to the animation parameters of each atom animation in the parameter information. Each pre-stored animation class corresponds to the animation type of one kind of atom animation, and different animation classes correspond to the animation types of different atom animations.
For example, the parameter information of the target animation includes three atom animations: displacement animation 1 with animation parameters 1, displacement animation 2 with animation parameters 2, and rotation animation 3 with animation parameters 3. In this step, two displacement animation objects need to be created from the displacement animation class among the pre-stored animation classes, namely displacement animation object 1 and displacement animation object 2, where displacement animation object 1 is configured with animation parameters 1 and displacement animation object 2 is configured with animation parameters 2; and one rotation animation object 3 is created from the rotation animation class and configured with animation parameters 3.
In this way, the animation parameters configured for each created animation object, and their values, all match the target animation rather than the default values of the animation parameters in the pre-stored animation classes.
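Reusing the hypothetical structures from the earlier sketches, S301 could look roughly as follows: the matched animation class supplies the defaults, and the animation parameters carried in the parameter information are then configured on each created object. This is a sketch under those assumptions, not the patent's implementation.

```cpp
// Sketch of S301: create one animation object per atom animation from the matched
// pre-stored class, then override the class defaults with the configured parameters.
// Reuses AtomAnimationInfo and AnimationClass from the sketches above (assumed types).
#include <map>
#include <string>
#include <utility>
#include <vector>

struct AnimationObject {
    std::string animationType;
    std::string modelId;
    std::map<std::string, double> params;    // class defaults overridden by the target animation's values
};

std::vector<AnimationObject> createAnimationObjects(
        const std::vector<AtomAnimationInfo>& atoms,
        const std::map<std::string, AnimationClass>& classesByType) {
    std::vector<AnimationObject> objects;
    for (const AtomAnimationInfo& atom : atoms) {
        const AnimationClass& cls = classesByType.at(atom.animationType);  // matched animation class
        AnimationObject obj{atom.animationType, atom.modelId, cls.defaults()};
        for (const auto& kv : atom.params)       // configure the object's own parameter values
            obj.params[kv.first] = kv.second;
        objects.push_back(std::move(obj));
    }
    return objects;
}
```

Note that two atom animations of the same type (displacement animations 1 and 2 above) would yield two separate objects, each with its own parameter values.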
When performing S202, referring to Fig. 3, it can be implemented through S302 to S305:
S302: obtain the model data of each animation object among the multiple animation objects according to the model identifier of each atom animation corresponding to the target animation in the parameter information.
The parameter information of the target animation can record the model identifier of each atom animation included in the target animation. The method of the embodiments of the present disclosure can then use the model identifier to obtain the model data matched with that identifier and use it as the model data of the corresponding atom animation. Since atom animations correspond one-to-one with the animation objects of the above embodiments, the model data of each animation object corresponding to the target animation can be obtained here. For example, the model data of displacement animation object 1 is model 1, the model data of displacement animation object 2 is model 2, and the model data of rotation animation object 3 is model 1.
It should be noted that the model data corresponding to different animation objects may be identical or different, which is not limited in the present disclosure.
In a possible implementation, when performing S302, a model file is first obtained; then the model data of each animation object among the multiple animation objects is obtained from the model file according to the model identifier of each atom animation corresponding to the target animation in the parameter information.
Specifically, when obtaining the model file, the model file exported from the animation application can be obtained directly, where that model file is the model file corresponding to the target animation produced by the animation application. The configuration file described above can also be used: for example, the configuration file can record path information of the model file, and by reading the path information the model file can be read from the corresponding path. The types of model data in the model file can include two-dimensional model data, three-dimensional model data, and so on, and the model file stores the correspondence between model identifiers and model data.
Since the parameter information of the target animation includes the model identifier of each atom animation corresponding to the target animation, and the model file stores the correspondence between model identifiers and model data, the model data corresponding to the model identifier of each atom animation can be read from the model file. As a corresponding animation object has been created for each atom animation, the model data of each animation object corresponding to the target animation can be obtained here.
In this way, the embodiments of the present disclosure also introduce the model file of the target animation from the animation application into the target APP, so that the model data corresponding to each atom animation can be obtained from the model file by means of the model identifier of each atom animation in the parameter information. Because a corresponding animation object is created in advance for each atom animation, the model data of each animation object is obtained, and the target animation displayed by the target APP of the embodiments of the present disclosure is consistent with the effect of the target animation displayed on the computer terminal.
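A sketch of the lookup in S302 follows, assuming the model file is (or can be loaded into) a simple mapping from model identifier to model data; the concrete structures are illustrative assumptions.

```cpp
// Sketch under stated assumptions: the model file stores the correspondence between
// model identifiers and model data, so each animation object's model is found by id.
#include <map>
#include <string>
#include <vector>

struct ModelData {
    std::vector<float>    vertices;   // 2D or 3D geometry exported with the target animation
    std::vector<unsigned> indices;
};

using ModelFile = std::map<std::string, ModelData>;   // model identifier -> model data

const ModelData& lookupModel(const ModelFile& file, const std::string& modelId) {
    // different animation objects may share one model (e.g. objects 1 and 3 both use model 1)
    return file.at(modelId);
}
```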
S303: generate the first rendering information of each animation object according to the animation parameters configured for that animation object.
Taking rotation animation object 3 as an example, its animation parameters include: rotating from 45 degrees to 90 degrees, a rotation duration of 2 s, a clockwise rotation direction, rotation around the Y axis, and 60 repetitions.
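As an illustration of turning such animation parameters into per-frame drawing information (the analysis described next), consider the minimal sketch below; the structure and the formula are assumptions, not the patent's actual calculation.

```cpp
// Illustrative sketch: derive the rotation angle to apply at time t from the example
// parameters above; the real first rendering information would also encode the axis,
// the direction and the drawing commands themselves.
#include <algorithm>
#include <cmath>

struct RotationParams {
    float fromDeg = 45.f, toDeg = 90.f;   // rotate from 45 degrees to 90 degrees
    float durationSec = 2.f;              // rotation duration of 2 s
    int   repeatCount = 60;               // 60 repetitions
};

float rotationAngleAt(const RotationParams& p, float tSec) {
    float clamped = std::min(tSec, p.durationSec * p.repeatCount);        // stop after the last repeat
    float progress = std::fmod(clamped, p.durationSec) / p.durationSec;   // 0..1 within one repeat
    return p.fromDeg + progress * (p.toDeg - p.fromDeg);
}
```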
The animation system of the embodiments of the present disclosure can analyze and calculate the above animation parameters of the rotation animation object, thereby determining the render instructions of rotation animation object 3. For example, one render instruction is the information for drawing a point in the upper left corner of the screen, 10 centimeters from the horizontal axis and 2 centimeters from the vertical axis.
Since render instructions are instructions handed to the GPU for execution, a render instruction only describes how to draw graphics on the screen of the first terminal. The animation system of the embodiments of the present disclosure calculates this drawing information, that is, the render instructions, from the animation parameters of the animation object.
S304: create, according to the association parameters between different atom animations corresponding to the target animation, the association information between different animation objects among the multiple animation objects.
Since the parameter information of the target animation may include the association parameters between different atom animations corresponding to the target animation, and the above steps create a corresponding animation object for each atom animation, the association parameters also need to be used to establish the association information between different animation objects among the multiple animation objects corresponding to the target animation.
The substantive content of the association parameters and the association information can be the same, but their data structures differ: the association information belongs to the attributes of the animation objects and must satisfy the definition requirements of those attributes in the animation classes, whereas the association parameters are part of the parameter information of the target animation.
S305: generate, according to the association information, the second rendering information for describing the animation relationship between different animation objects among the multiple animation objects.
Step S303 generates the first rendering information for each animation object corresponding to the target animation, but a target animation consists of multiple atom animations and there are also associations between those atom animations. The animation system of the embodiments of the present disclosure can therefore process and calculate the association relationships between different animation objects among the multiple animation objects corresponding to the target animation, so as to generate the render instructions between different animation objects in the target animation.
Since render instructions are instructions handed to the GPU for execution, a render instruction only describes how to draw graphics on the screen of the first terminal.
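As a rough illustration (the internal form of the association information and the second rendering information is not specified in the patent, so this is an assumption), sequencing two animation objects could reduce to per-object start times:

```cpp
// Assumed structure for illustration: second rendering information modelled as a
// start-time schedule derived from the association information between two objects.
#include <map>
#include <string>

struct SecondRenderInfo {
    std::map<std::string, float> startTimeSec;   // when each animation object begins
};

// e.g. "rotate first, then displace after an interval"
SecondRenderInfo sequenceAfter(const std::string& firstObjectId, float firstDurationSec,
                               const std::string& secondObjectId, float intervalSec) {
    SecondRenderInfo info;
    info.startTimeSec[firstObjectId]  = 0.f;
    info.startTimeSec[secondObjectId] = firstDurationSec + intervalSec;
    return info;
}
```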
In this way, the embodiments of the present disclosure obtain parameter information such as the animation type, animation parameters, and model identifier of each atom animation corresponding to the target animation and the association parameters between different atom animations. By means of the animation types and animation parameters, the multiple animation objects matched with the target animation can be created from the pre-stored animation classes; even when the animation types are identical but the animation parameters differ, two different animation objects are created. The model identifiers in the parameter information are used to obtain the model data corresponding to each animation object, the first rendering information of each animation object is generated according to that object's animation parameters, and the second rendering information for describing the animation relationship between different animation objects is generated according to the association relationships between different animation objects. Thus the rendering information of each atom animation in the target animation, the rendering information between different atom animations, and the object on which each atom animation acts, that is, the model data, are all generated, avoiding the development of multiple sets of code.
Step 103: convert the data types of the model data and the rendering information from the first data type supported by the preset programming interface to the second data type supported by the target programming interface of the first terminal, according to the preset mapping relations between the preset programming interface and the programming interfaces of different development platforms.
Specifically, the target APP of the embodiments of the present disclosure is written in the language of a preset programming interface (for example, OpenGL (Open Graphics Library)), where the preset programming interface is a graphics API; any graphics API can be used when developing the target APP. In order to apply the written target APP to each development platform without replacing code, the target APP of the embodiments of the present disclosure includes a middle layer, which stores the preset mapping relations between the preset programming interface and the programming interfaces of different development platforms (for example, Vulkan (a cross-platform 2D and 3D drawing API), DirectX (Direct eXtension, DX for short, a multimedia programming interface created by Microsoft), Metal (a low-level rendering application programming interface), and so on).
In these preset mapping relations, the preset programming interface has a one-to-one mapping relation with the programming interface of each development platform. For any one of these one-to-one mapping relations, the mapped content can include, but is not limited to, at least one of the following: memory information (corresponding to the byte length in the example below), coordinate-axis information, and clip-space information (that is, the range of depth information). Through the mapping of the above content, the purpose of data type conversion can be achieved.
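A minimal sketch of what such preset mapping relations could store per platform API, assuming the preset interface follows OpenGL conventions (Y axis up, clip-space depth in [-1, 1]); the entries are simplified and illustrative.

```cpp
// Sketch of the preset mapping relations: one record per development-platform API,
// covering memory information, coordinate-axis information and clip-space information.
#include <map>
#include <string>

struct ApiMapping {
    float byteLengthScale;      // memory information: scale applied to buffer byte lengths (illustrative)
    bool  flipYAxis;            // coordinate-axis information: Y convention differs from the preset API
    float depthMin, depthMax;   // clip-space information: depth range of the target API
};

// Preset interface assumed OpenGL-style (depth in [-1, 1], Y up in clip space).
const std::map<std::string, ApiMapping> kPresetMappings = {
    {"Vulkan",  {1.0f, true,  0.0f, 1.0f}},   // Vulkan clip space: Y down, depth 0..1
    {"Metal",   {1.0f, false, 0.0f, 1.0f}},   // Metal clip space: depth 0..1
    {"DirectX", {1.0f, false, 0.0f, 1.0f}},   // Direct3D clip space: depth 0..1
};
```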
For example, the preset programming interface used by the target APP is OpenGL, and the target programming interface supported by the first terminal on which the target APP is installed is Vulkan. The middle layer of the embodiments of the present disclosure can then, according to the mapping relation between OpenGL and Vulkan in the preset mapping relations, convert the data types of the model data and the rendering information from the data types required by OpenGL into the data types supported by Vulkan, so that the driver and GPU of the first terminal can recognize the converted model data and rendering information and complete the display of the target animation.
In other words, different programming interfaces support different data types. Therefore, the data types of the model data and the rendering information corresponding to the preset programming interface need to be converted into the data types supported by the programming interface of the first terminal on which the target APP is currently installed. For example, rendering function 1 of programming interface 1 supports rendering information of type A, while rendering function 2 of programming interface 2 supports rendering information of type B; the data type of the rendering information therefore needs to be converted from type A to type B, for example converting the byte length of the rendering information from the 1 k supported by type A to the 0.5 k supported by type B.
Specifically, when converting the data type, the rendering information can be subjected to memory-space conversion, coordinate-value conversion, and clip-space conversion, so that the rendering information is converted from data of type A to data of type B and the converted rendering information belonging to type B can be called by rendering function 2 of programming interface 2.
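Reusing the ApiMapping sketch above, the clip-space part of such a conversion could, for example, re-express a depth value produced under the preset interface's [-1, 1] convention in the target interface's [0, 1] convention. This is a hedged example of one conversion step, not the complete mapping.

```cpp
// Example conversion step (illustrative): map a clip-space depth value from the preset
// interface's [-1, 1] range onto the target interface's [depthMin, depthMax] range.
float convertDepth(float presetDepth, const ApiMapping& m) {
    return m.depthMin + (presetDepth + 1.0f) * 0.5f * (m.depthMax - m.depthMin);
}
```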
Step 104: render the converted model data according to the converted rendering information to display the target animation.
The middle layer of the embodiments of the present disclosure delivers both the converted rendering information and the converted model data to the hardware driver layer, and the GPU can then render the converted model data according to the converted rendering information, thereby displaying the target animation on the screen of the first terminal.
In a possible implementation, the model data corresponding to each animation object can be rendered according to the converted first rendering information, and the animation relationship between the different model data corresponding to different animation objects can be rendered according to the converted second rendering information, realizing real-time display of the target animation.
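Combining the assumed pieces sketched earlier, a per-frame pass could look roughly like the following; the draw call itself is platform specific and is stood in for by a print.

```cpp
// Sketch combining the assumed types above: an object whose start time (second rendering
// information) has passed is drawn with the transform derived from its first rendering
// information; the actual GPU submission through the converted interface is omitted.
#include <cstdio>
#include <string>

void renderFrame(float tSec, const SecondRenderInfo& schedule,
                 const RotationParams& rotation, const std::string& rotatingObjectId) {
    auto it = schedule.startTimeSec.find(rotatingObjectId);
    if (it == schedule.startTimeSec.end() || tSec < it->second)
        return;                                              // object has not started yet
    float angle = rotationAngleAt(rotation, tSec - it->second);
    std::printf("draw %s rotated %.1f deg\n", rotatingObjectId.c_str(), angle);  // stand-in for GPU submit
}
```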
In this way, the embodiments of the present disclosure obtain the parameter information of the target animation to be displayed and, in combination with the pre-stored animation classes matched with different animation types, obtain the model data and rendering information of the target animation. Then, according to the preset mapping relations between the preset programming interface and the programming interfaces of different development platforms, the data types of the model data and the rendering information are converted from the first data type supported by the preset programming interface to the second data type supported by the target programming interface of the first terminal, and the display of the target animation is completed with the converted data. The method of the embodiments of the present disclosure can be applied, without modifying the animation code, to each first terminal whose programming interface differs from the preset programming interface. The animation code is thus reused, its maintenance difficulty and development cost are reduced, and there is no need to implement multiple sets of code for the same animation separately for different development platforms.
Fig. 4 is a structural block diagram of an animation display apparatus according to an exemplary embodiment, applied to a first terminal having a target programming interface. Referring to Fig. 4, the apparatus includes:
a first obtaining module 41 configured to obtain parameter information of a target animation to be displayed, where the target animation is an animation drawn by an animation application installed on a second terminal;
a second obtaining module 42 configured to obtain model data and rendering information of the target animation according to the parameter information and pre-stored animation classes matched with different animation types;
a mapping module 43 configured to convert the data types of the model data and the rendering information from a first data type supported by a preset programming interface to a second data type supported by the target programming interface of the first terminal, according to preset mapping relations between the preset programming interface and the programming interfaces of different development platforms; and
a rendering module 44 configured to render the converted model data according to the converted rendering information to display the target animation.
In a possible implementation, the first obtaining module 41 includes:
a first obtaining sub-module configured to obtain a configuration file of the target animation, where the configuration file is a configuration file, corresponding to the target animation, of the animation application installed on the second terminal; and
a second obtaining sub-module configured to obtain the parameter information of the target animation according to the configuration file.
In a possible implementation, the second obtaining module 42 includes:
a third obtaining sub-module configured to obtain multiple animation objects matched with the target animation according to the parameter information and the pre-stored animation classes matched with different animation types; and
a fourth obtaining sub-module configured to obtain, according to the parameter information, model data and first rendering information of each animation object among the multiple animation objects, and second rendering information for describing the animation relationship between different animation objects among the multiple animation objects.
In a possible implementation, the target animation consists of one or more atom animations, and the parameter information includes the animation type of each atom animation corresponding to the target animation, the animation parameters of each atom animation, the model identifier of each atom animation, and the association parameters between different atom animations.
The third obtaining sub-module includes:
a first creating unit configured to create, according to the pre-stored animation classes matched with different animation types and according to the animation type and the animation parameters of each atom animation in the parameter information, the multiple animation objects matched with the target animation, where each animation object is configured with its corresponding animation parameters.
The fourth obtaining sub-module includes:
a first obtaining unit configured to obtain the model data of each animation object among the multiple animation objects according to the model identifier of each atom animation corresponding to the target animation in the parameter information;
a first generating unit configured to generate the first rendering information of each animation object according to the animation parameters configured for that animation object;
a second creating unit configured to create association information between different animation objects among the multiple animation objects according to the association parameters between different atom animations corresponding to the target animation; and
a second generating unit configured to generate, according to the association information, the second rendering information for describing the animation relationship between different animation objects among the multiple animation objects.
In a possible implementation, the first obtaining unit includes:
a first obtaining sub-unit configured to obtain a model file; and
a second obtaining sub-unit configured to obtain the model data of each animation object among the multiple animation objects from the model file according to the model identifier of each atom animation corresponding to the target animation in the parameter information.
In a possible implementation, the animation types corresponding to the multiple animation classes are the animation types supported by the animation application.
With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method, and will not be elaborated here.
Fig. 5 is a block diagram of a device 800 for animation display according to an exemplary embodiment. For example, the device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 5, the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the device 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the above method. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support the operation of the device 800. Examples of such data include instructions for any application or method operated on the device 800, contact data, phonebook data, messages, pictures, video, and so on. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disc.
The power component 806 provides power to the various components of the device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen providing an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes the touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC) configured to receive external audio signals when the device 800 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 814 includes one or more sensors to provide status assessments of various aspects of the device 800. For example, the sensor component 814 may detect the open/closed status of the device 800 and the relative positioning of components, e.g., the display and the keypad of the device 800; the sensor component 814 may also detect a change in position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an accelerometer, a gyroscope, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices. The device 800 can access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above-described methods.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 804 including instructions, which are executable by the processor 820 of the device 800 to perform the above-described methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, there is also provided an application program including instructions, such as the instructions included in the memory 804, which are executable by the processor 820 of the device 800 to perform the above-described methods. For example, the instructions may be stored on a non-transitory computer-readable storage medium such as a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Fig. 6 is a block diagram of a device 1900 for displaying animation, according to an exemplary embodiment. For example, the device 1900 may be provided as a server. Referring to Fig. 6, the device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as an application program. The application program stored in the memory 1932 may include one or more modules, each of which corresponds to a set of instructions. In addition, the processing component 1922 is configured to execute the instructions so as to perform the above-described methods.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, or the like.
It should be noted that the executing subject of the disclosure may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like; it may also be a server. When the electronic equipment is, for example, a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, or a personal digital assistant, it is as shown in Fig. 5. When the electronic equipment is a server, it is as shown in Fig. 6.
Other embodiments of the disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. The disclosure is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include such departures from the present disclosure as come within common knowledge or customary technical means in the art. The specification and examples are to be regarded as illustrative only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.
Claims (10)
1. An animation display method, applied to a first terminal having a target programming interface, the method comprising:
obtaining parameter information of a target animation to be displayed, wherein the target animation is an animation drawn by an animation application program installed on a second terminal;
obtaining model data and rendering information of the target animation according to the parameter information and a plurality of pre-stored animation classes matched with different animation types;
converting the model data and the rendering information from a first data type supported by a default programming interface into a second data type supported by the target programming interface of the first terminal, according to a preset mapping relationship between the default programming interface and programming interfaces of different development platforms; and
rendering the converted model data according to the converted rendering information, so as to display the target animation.
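A minimal, non-authoritative sketch of the flow in claim 1 above, written in Java purely for illustration; every class, method, and type name (ParameterInfo, DefaultModel, TargetModel, and so on) is an assumption of this sketch rather than anything defined by the patent, and the float[] to List<Float> conversion merely stands in for the preset mapping between a default programming interface and a target programming interface.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class AnimationDisplayPipeline {

    /** Parameter information exported by the animation application on the second terminal (assumed shape). */
    public record ParameterInfo(String animationType, String modelId, Map<String, Float> animationParams) {}

    /** Model data and rendering information expressed in the default interface's (first) data type. */
    public record DefaultModel(float[] vertices) {}
    public record DefaultRenderingInfo(float[] transform) {}

    /** The same data re-expressed in the target interface's (second) data type. */
    public record TargetModel(List<Float> vertices) {}
    public record TargetRenderingInfo(List<Float> transform) {}

    public void display(ParameterInfo info) {
        System.out.println("Displaying animation of type " + info.animationType()
                + " for model " + info.modelId());

        // Step 1: match the animation type against pre-stored animation classes and
        // obtain the model data and rendering information of the target animation.
        DefaultModel model = new DefaultModel(new float[]{0f, 0f, 1f, 0f, 0f, 1f});
        DefaultRenderingInfo rendering = new DefaultRenderingInfo(new float[]{1f, 0f, 0f, 1f});

        // Step 2: convert from the first data type (default interface) into the second
        // data type (target interface); here the float[] -> List<Float> conversion is
        // only a stand-in for the real preset mapping between platform APIs.
        TargetModel targetModel = new TargetModel(box(model.vertices()));
        TargetRenderingInfo targetRendering = new TargetRenderingInfo(box(rendering.transform()));

        // Step 3: render the converted model data according to the converted rendering info.
        render(targetModel, targetRendering);
    }

    private static List<Float> box(float[] values) {
        List<Float> boxed = new ArrayList<>();
        for (float v : values) {
            boxed.add(v);
        }
        return boxed;
    }

    private void render(TargetModel model, TargetRenderingInfo rendering) {
        System.out.println("Rendering " + model.vertices().size() / 2
                + " vertices with transform " + rendering.transform());
    }

    public static void main(String[] args) {
        new AnimationDisplayPipeline().display(
                new ParameterInfo("fade", "model-001", Map.of("duration", 1.5f)));
    }
}
```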
2. The animation display method according to claim 1, wherein the obtaining of the model data and the rendering information of the target animation according to the parameter information and the plurality of pre-stored animation classes matched with different animation types comprises:
obtaining a plurality of animation objects matched with the target animation according to the parameter information and the plurality of pre-stored animation classes matched with different animation types; and
obtaining, according to the parameter information, model data and first rendering information of each animation object in the plurality of animation objects, and second rendering information describing an animation relationship between different animation objects in the plurality of animation objects.
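The data shapes implied by claim 2 might look like the following hedged sketch; the record names and fields are illustrative assumptions only, showing per-object model data with first rendering information, plus second rendering information describing relations between animation objects.

```java
import java.util.List;
import java.util.Map;

public class RenderingInfoSketch {

    /** Per-object data: model geometry plus the object's own (first) rendering information. */
    public record AnimationObject(String id, float[] modelData, Map<String, Float> firstRenderingInfo) {}

    /** Cross-object data: the (second) rendering information describing relations between objects. */
    public record ObjectRelation(String fromId, String toId, String relation, float startOffsetSeconds) {}

    public static void main(String[] args) {
        List<AnimationObject> objects = List.of(
                new AnimationObject("star", new float[]{0f, 0f, 1f, 1f}, Map.of("opacity", 1.0f)),
                new AnimationObject("glow", new float[]{0f, 0f, 2f, 2f}, Map.of("opacity", 0.5f)));

        // Second rendering information: "glow" follows "star" with a 0.2 s delay.
        List<ObjectRelation> secondRenderingInfo = List.of(
                new ObjectRelation("star", "glow", "follows", 0.2f));

        System.out.println(objects.size() + " objects, " + secondRenderingInfo.size() + " relation(s)");
    }
}
```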
3. The animation display method according to claim 2, wherein the target animation is composed of one or more atom animations, and the parameter information includes an animation type of each atom animation corresponding to the target animation, animation parameters of each atom animation, a model identifier of each atom animation, and association parameters between different atom animations;
the obtaining of the plurality of animation objects matched with the target animation according to the parameter information and the plurality of pre-stored animation classes matched with different animation types comprises:
creating the plurality of animation objects matched with the target animation according to the pre-stored animation classes matched with different animation types and according to the animation type and the animation parameters of each atom animation in the parameter information, wherein each animation object is configured with its corresponding animation parameters;
the obtaining, according to the parameter information, of the model data and the first rendering information of each animation object in the plurality of animation objects, and the second rendering information describing the animation relationship between different animation objects in the plurality of animation objects comprises:
obtaining the model data of each animation object in the plurality of animation objects according to the model identifier of each atom animation corresponding to the target animation in the parameter information;
generating the first rendering information of each animation object according to the animation parameters configured for that animation object;
creating association information between different animation objects in the plurality of animation objects according to the association parameters between the different atom animations corresponding to the target animation; and
generating, according to the association information, the second rendering information describing the animation relationship between different animation objects in the plurality of animation objects.
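One way to picture claim 3, as an assumed sketch only: a registry of pre-stored animation classes keyed by animation type creates one animation object per atom animation, while the association parameters between atom animations yield the association information behind the second rendering information. The names and the factory-per-type structure are illustrative, not taken from the patent.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class AtomAnimationAssembly {

    public record AtomAnimation(String type, String modelId, Map<String, Float> params) {}
    public record Association(String fromModelId, String toModelId, float delaySeconds) {}
    public record AnimationObject(String modelId, String type, Map<String, Float> params) {}

    /** Pre-stored "animation classes": one factory per supported animation type (assumed). */
    private static final Map<String, Function<AtomAnimation, AnimationObject>> ANIMATION_CLASSES = Map.of(
            "translate", a -> new AnimationObject(a.modelId(), "translate", a.params()),
            "rotate",    a -> new AnimationObject(a.modelId(), "rotate", a.params()),
            "fade",      a -> new AnimationObject(a.modelId(), "fade", a.params()));

    public static void main(String[] args) {
        List<AtomAnimation> atoms = List.of(
                new AtomAnimation("translate", "m1", Map.of("dx", 10f, "duration", 1f)),
                new AtomAnimation("fade", "m2", Map.of("to", 0f, "duration", 0.5f)));
        List<Association> associations = List.of(new Association("m1", "m2", 0.3f));

        // Create one animation object per atom animation via the matching animation class.
        List<AnimationObject> objects = new ArrayList<>();
        for (AtomAnimation atom : atoms) {
            objects.add(ANIMATION_CLASSES.get(atom.type()).apply(atom));
        }

        // First rendering information: derived from each object's own animation parameters.
        objects.forEach(o -> System.out.println("first rendering info for " + o.modelId() + ": " + o.params()));

        // Second rendering information: derived from the associations between animation objects.
        associations.forEach(a -> System.out.println("second rendering info: " + a.toModelId()
                + " starts " + a.delaySeconds() + "s after " + a.fromModelId()));
    }
}
```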
4. The animation display method according to claim 3, wherein the obtaining of the model data of each animation object in the plurality of animation objects according to the model identifier of each atom animation corresponding to the target animation in the parameter information comprises:
obtaining a model file; and
obtaining the model data of each animation object in the plurality of animation objects from the model file according to the model identifier of each atom animation corresponding to the target animation in the parameter information.
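Claim 4's model-file lookup could be sketched as below, with an in-memory map standing in for whatever model file format the animation application actually exports; the identifiers and format are assumptions.

```java
import java.util.Map;
import java.util.Optional;

public class ModelFileLookup {

    /** Stand-in for a parsed model file: model identifier -> vertex data (assumed format). */
    private static final Map<String, float[]> MODEL_FILE = Map.of(
            "m1", new float[]{0f, 0f, 1f, 0f, 0f, 1f},
            "m2", new float[]{0f, 0f, 2f, 0f, 0f, 2f});

    /** Select the model data entry named by an atom animation's model identifier. */
    public static Optional<float[]> modelDataFor(String modelId) {
        return Optional.ofNullable(MODEL_FILE.get(modelId));
    }

    public static void main(String[] args) {
        // The parameter information names model "m1" for one of the atom animations.
        modelDataFor("m1").ifPresentOrElse(
                data -> System.out.println("loaded " + data.length / 2 + " vertices for m1"),
                () -> System.out.println("no model data for m1"));
    }
}
```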
5. An animation display device, applied to a first terminal having a target programming interface, comprising:
a first obtaining module, configured to obtain parameter information of a target animation to be displayed, wherein the target animation is an animation drawn by an animation application program installed on a second terminal;
a second obtaining module, configured to obtain model data and rendering information of the target animation according to the parameter information and a plurality of pre-stored animation classes matched with different animation types;
a mapping module, configured to convert the model data and the rendering information from a first data type supported by a default programming interface into a second data type supported by the target programming interface of the first terminal, according to a preset mapping relationship between the default programming interface and programming interfaces of different development platforms; and
a rendering module, configured to render the converted model data according to the converted rendering information, so as to display the target animation.
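The module split of the device in claim 5 can be pictured as four cooperating interfaces, as in the following assumed sketch; the interface and method names are illustrative only and the identity mapping merely marks where the preset type conversion would go.

```java
public class AnimationDisplayDeviceSketch {

    interface FirstObtainingModule {            // obtains the parameter information
        String obtainParameterInfo();
    }
    interface SecondObtainingModule {           // obtains model data and rendering information
        float[] obtainModelData(String parameterInfo);
        float[] obtainRenderingInfo(String parameterInfo);
    }
    interface MappingModule {                   // converts between interface data types
        float[] toTargetType(float[] defaultTypeData);
    }
    interface RenderingModule {                 // renders with the target programming interface
        void render(float[] modelData, float[] renderingInfo);
    }

    public static void main(String[] args) {
        FirstObtainingModule first = () -> "type=fade;modelId=m1";
        SecondObtainingModule second = new SecondObtainingModule() {
            public float[] obtainModelData(String p) { return new float[]{0f, 0f, 1f, 1f}; }
            public float[] obtainRenderingInfo(String p) { return new float[]{1f, 0f, 0f, 1f}; }
        };
        MappingModule mapping = data -> data.clone();   // identity stand-in for the preset mapping
        RenderingModule renderer = (m, r) -> System.out.println("rendering " + m.length / 2 + " vertices");

        String params = first.obtainParameterInfo();
        renderer.render(mapping.toTargetType(second.obtainModelData(params)),
                        mapping.toTargetType(second.obtainRenderingInfo(params)));
    }
}
```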
6. The animation display device according to claim 5, wherein the second obtaining module comprises:
a third obtaining submodule, configured to obtain a plurality of animation objects matched with the target animation according to the parameter information and the plurality of pre-stored animation classes matched with different animation types; and
a fourth obtaining submodule, configured to obtain, according to the parameter information, model data and first rendering information of each animation object in the plurality of animation objects, and second rendering information describing an animation relationship between different animation objects in the plurality of animation objects.
7. The animation display device according to claim 6, wherein the target animation is composed of one or more atom animations, and the parameter information includes an animation type of each atom animation corresponding to the target animation, animation parameters of each atom animation, a model identifier of each atom animation, and association parameters between different atom animations;
the third obtaining submodule comprises:
a first creating unit, configured to create the plurality of animation objects matched with the target animation according to the pre-stored animation classes matched with different animation types and according to the animation type and the animation parameters of each atom animation in the parameter information, wherein each animation object is configured with its corresponding animation parameters;
the fourth obtaining submodule comprises:
a first obtaining unit, configured to obtain the model data of each animation object in the plurality of animation objects according to the model identifier of each atom animation corresponding to the target animation in the parameter information;
a first generating unit, configured to generate the first rendering information of each animation object according to the animation parameters configured for that animation object;
a second creating unit, configured to create association information between different animation objects in the plurality of animation objects according to the association parameters between the different atom animations corresponding to the target animation; and
a second generating unit, configured to generate, according to the association information, the second rendering information describing the animation relationship between different animation objects in the plurality of animation objects.
8. The animation display device according to claim 7, wherein the first obtaining unit comprises:
a first obtaining subunit, configured to obtain a model file; and
a second obtaining subunit, configured to obtain the model data of each animation object in the plurality of animation objects from the model file according to the model identifier of each atom animation corresponding to the target animation in the parameter information.
9. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions so as to perform the operations performed by the animation display method according to any one of claims 1 to 5.
10. A non-transitory computer-readable storage medium, wherein, when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the operations performed by the animation display method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910253191.5A CN110136230B (en) | 2019-03-29 | 2019-03-29 | Animation display method, device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110136230A true CN110136230A (en) | 2019-08-16 |
CN110136230B CN110136230B (en) | 2023-09-05 |
Family
ID=67568848
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910253191.5A Active CN110136230B (en) | 2019-03-29 | 2019-03-29 | Animation display method, device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110136230B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000011199A (en) * | 1998-06-18 | 2000-01-14 | Sony Corp | Automatic generating method for animation |
CN101484921A (en) * | 2006-03-28 | 2009-07-15 | 斯特里米泽公司 | Method for calculating animation parameters of objects of a multimedia scene |
US20150370444A1 (en) * | 2014-06-24 | 2015-12-24 | Google Inc. | Computerized systems and methods for rendering an animation of an object in response to user input |
CN109359262A (en) * | 2018-10-11 | 2019-02-19 | 广州酷狗计算机科技有限公司 | Animation playing method, device, terminal and storage medium |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110647325A (en) * | 2019-08-21 | 2020-01-03 | 北京达佳互联信息技术有限公司 | Graphic resource conversion method, apparatus, electronic device and storage medium |
CN110659024A (en) * | 2019-08-21 | 2020-01-07 | 北京达佳互联信息技术有限公司 | Graphic resource conversion method, apparatus, electronic device and storage medium |
CN110751592A (en) * | 2019-08-21 | 2020-02-04 | 北京达佳互联信息技术有限公司 | Graphic resource conversion method, apparatus, electronic device and storage medium |
CN110659024B (en) * | 2019-08-21 | 2023-12-26 | 北京达佳互联信息技术有限公司 | Graphics resource conversion method and device, electronic equipment and storage medium |
CN112750182A (en) * | 2019-10-29 | 2021-05-04 | 腾讯科技(深圳)有限公司 | Dynamic effect implementation method and device and computer readable storage medium |
CN110865800A (en) * | 2019-11-01 | 2020-03-06 | 浙江大学 | Full-platform three-dimensional reconstruction code processing method based on engine modularization |
CN110865800B (en) * | 2019-11-01 | 2021-03-09 | 浙江大学 | Full-platform three-dimensional reconstruction code processing method based on engine modularization |
CN111951355A (en) * | 2020-08-04 | 2020-11-17 | 北京字节跳动网络技术有限公司 | Animation processing method and device, computer equipment and storage medium |
CN112560397A (en) * | 2020-12-24 | 2021-03-26 | 成都极米科技股份有限公司 | Drawing method, drawing device, terminal equipment and storage medium |
CN114049416A (en) * | 2021-11-16 | 2022-02-15 | 珠海金山数字网络科技有限公司 | Animation data acquisition method and device |
TWI837972B (en) * | 2022-11-29 | 2024-04-01 | 宏碁股份有限公司 | Graphics processing device and method |
Also Published As
Publication number | Publication date |
---|---|
CN110136230B (en) | 2023-09-05 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |