
CN114862996A - Animation rendering method and device, electronic equipment and storage medium - Google Patents

Animation rendering method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN114862996A
Authority
CN
China
Prior art keywords
animation
view
target
style attribute
view element
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210343845.5A
Other languages
Chinese (zh)
Inventor
孙弘法
鞠达豪
杨小刚
胡方正
杨凯丽
朱彤
李伟鹏
蔡晓华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210343845.5A
Publication of CN114862996A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The method comprises the steps of parsing, from animation configuration information, a starting style attribute of at least one first view element in an animation starting view, an ending style attribute of at least one second view element in an animation ending view, and a target animation duration; determining a start-end style attribute of a target view element based on the starting style attribute and the ending style attribute, wherein the target view element is the element union of the at least one first view element and the at least one second view element; and rendering the target animation based on the start-end style attribute and the target animation duration. By utilizing the embodiments of the disclosure, the data volume of animation configuration parsing can be reduced, and the simplicity of animation view description, the operation convenience of animation rendering, the rendering efficiency, and the fluency of the animation can be improved.

Description

Animation rendering method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of animation production technologies, and in particular, to an animation rendering method and apparatus, an electronic device, and a storage medium.
Background
With the development of animation technology, animations are widely used in many fields. On the web, an animation is often placed in a certain area of a page to emphasize particular content, which can markedly improve the user experience.
In the related art, an animation is generally produced using multiple layers, and the rendering information of the multiple layers must be configured in advance for every frame of the animation view; the rendering attribute information is then parsed from the rendering configuration information layer by layer and frame by frame, and the parsed rendering attribute information is combined to render the animation layer by layer and frame by frame. Because the rendering configuration information is complex and its data volume is large, the rendering attribute information required for animation rendering has to be obtained through layer-by-layer, frame-by-frame parsing, making the animation rendering process complex to operate, inefficient, and poor in animation smoothness.
Disclosure of Invention
The present disclosure provides an animation rendering method, an apparatus, an electronic device, and a storage medium, to at least solve the problems in the related art of a large data volume in animation rendering processing, complex operation, low efficiency, and poor animation smoothness. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided an animation rendering method, including:
parsing, from animation configuration information, the starting style attribute of at least one first view element in an animation starting view, the ending style attribute of at least one second view element in an animation ending view, and a target animation duration;
determining a start-end style attribute of a target view element based on the start style attribute and the end style attribute, the target view element being an element union of the at least one first view element and the at least one second view element; the start-end style attribute of the target view element comprises a style attribute of the target view element in the animation start view and a style attribute of the target view element in the animation end view;
and rendering the target animation based on the start-end style attributes and the target animation duration.
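The first two claimed steps can be illustrated with a short sketch (TypeScript is used purely for illustration; the configuration shape, names such as `buildStartEndStyles`, and the keying of elements by id are assumptions, not part of the disclosure):

```typescript
// Illustrative sketch only; names and configuration shape are assumptions.
interface StyleAttribute {
  width?: number;
  height?: number;
  opacity?: number;
  x?: number;
  y?: number;
}

interface AnimationConfig {
  startView: Record<string, StyleAttribute>; // element id -> starting style
  endView: Record<string, StyleAttribute>;   // element id -> ending style
  durationMs: number;                        // target animation duration
}

interface StartEndStyle {
  start: StyleAttribute;
  end: StyleAttribute;
}

// Determine the start-end style attribute for the element union of the
// start-view and end-view elements (steps 1 and 2 of the claimed method).
function buildStartEndStyles(cfg: AnimationConfig): Map<string, StartEndStyle> {
  const ids = new Set([
    ...Object.keys(cfg.startView),
    ...Object.keys(cfg.endView),
  ]);
  const result = new Map<string, StartEndStyle>();
  for (const id of ids) {
    result.set(id, {
      start: cfg.startView[id] ?? {}, // element may be absent from one view
      end: cfg.endView[id] ?? {},
    });
  }
  return result;
}
```

The resulting map, together with `durationMs`, is all that step 3 (rendering) needs, which is why only two views rather than per-frame, per-layer data must be configured.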
In an alternative embodiment, the rendering the target animation based on the start-end style attributes and the target animation duration includes:
creating an animation rendering executor matched with a local operating system;
and rendering the target animation based on the animation rendering executor, the start-end style attributes, and the target animation duration.
In an alternative embodiment, the rendering the target animation based on the animation rendering executor, the start-end style property, and the target animation duration comprises:
controlling the animation rendering executor to generate style attribute variation information of the target view element within the target animation duration based on the start-end style attributes;
and controlling the animation rendering executor to render the target animation based on the style attribute variation information.
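Generating the style attribute variation information can be pictured as interpolating each numeric attribute between its start and end values over the animation duration. The linear interpolation below is a hedged sketch: the disclosure does not specify the interpolation curve, and all names here are illustrative:

```typescript
// Illustrative sketch: compute the per-frame value of each numeric style
// attribute at a given elapsed time within the target animation duration.
function interpolateStyle(
  start: Record<string, number>,
  end: Record<string, number>,
  durationMs: number,
  elapsedMs: number,
): Record<string, number> {
  // Clamp progress to [0, 1] so frames before/after the duration are stable.
  const t = Math.min(Math.max(elapsedMs / durationMs, 0), 1);
  const frame: Record<string, number> = {};
  for (const key of new Set([...Object.keys(start), ...Object.keys(end)])) {
    const a = start[key] ?? end[key]; // fall back if one side lacks the attribute
    const b = end[key] ?? start[key];
    frame[key] = a + (b - a) * t;     // linear interpolation (assumed curve)
  }
  return frame;
}
```

An executor matched to the local operating system would evaluate this at each vsync tick and apply the resulting frame to the target view element.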
In an optional embodiment, the style attribute variation information includes variation information of a plurality of attributes within the target animation duration; the controlling the animation rendering executor, rendering the target animation based on the style attribute variation information, includes:
and controlling the animation rendering executor to render the view corresponding to the target view element within the target animation duration based on the variation information of the multiple attributes within the target animation duration, so as to obtain the target animation.
In an optional embodiment, the method further comprises:
comparing the at least one first view element with the at least one second view element to obtain a comparison result;
in case the comparison result indicates that the at least one first view element and the at least one second view element are not identical, the determining, based on the start style attribute and the end style attribute, a start-end style attribute of the target view element comprises:
determining shared view elements and non-shared view elements in the target view elements;
determining a start-end style attribute corresponding to the non-shared view element based on a preset style attribute and a target style attribute of the non-shared view element, wherein the target style attribute is a start style attribute or an end style attribute of the non-shared view element;
and determining the start-end style attribute of the shared view element according to the starting style attribute of the shared view element and the ending style attribute of the shared view element.
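The treatment of shared versus non-shared view elements above can be sketched as follows. The preset style attribute (here assumed to be full transparency, so non-shared elements fade in or out) and the function name are illustrative assumptions:

```typescript
type Style = { opacity?: number; width?: number; height?: number };

// Assumed preset style attribute used for the side on which a non-shared
// element does not appear; the disclosure leaves the preset unspecified.
const PRESET: Style = { opacity: 0 };

function startEndFor(
  id: string,
  startView: Record<string, Style>,
  endView: Record<string, Style>,
): { start: Style; end: Style } {
  const inStart = id in startView;
  const inEnd = id in endView;
  if (inStart && inEnd) {
    // Shared element: its own starting and ending style attributes.
    return { start: startView[id], end: endView[id] };
  }
  if (inStart) {
    // Non-shared element present only at the start: pair with the preset.
    return { start: startView[id], end: PRESET };
  }
  // Non-shared element present only at the end: pair preset with its style.
  return { start: PRESET, end: endView[id] };
}
```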
In an optional embodiment, in a case that the comparison result indicates that the at least one first view element and the at least one second view element are consistent, the determining the start-end style attribute of the target view element based on the starting style attribute and the ending style attribute comprises:
and taking the starting style attribute of the target view element and the ending style attribute of the target view element as the start-end style attributes of the target view element.
In an optional embodiment, the parsing, from the animation configuration information, of the starting style attribute of at least one first view element in the animation starting view, the ending style attribute of at least one second view element in the animation ending view, and the target animation duration includes:
parsing the animation configuration information to obtain a starting view element tree corresponding to the animation starting view, an ending view element tree corresponding to the animation ending view, and the target animation duration; the starting view element tree is a structure tree that takes the view elements carrying style attributes in the animation starting view as nodes and the view-layer hierarchical relationship among the view elements in the animation starting view as the node hierarchical relationship, and the ending view element tree is a structure tree that takes the view elements carrying style attributes in the animation ending view as nodes and the view-layer hierarchical relationship among the view elements in the animation ending view as the node hierarchical relationship;
obtaining a starting style attribute of the at least one first view element from the starting view element tree;
and acquiring the ending style attribute of the at least one second view element from the ending view element tree.
In an optional embodiment, said obtaining the starting style attribute of the at least one first view element from the starting view element tree comprises:
traversing nodes in the starting view element tree to obtain at least one first view element;
in the process of traversing the nodes in the initial view element tree, acquiring a style attribute corresponding to a first current traversal node;
and taking the style attribute corresponding to the first current traversal node as the starting style attribute of the view element corresponding to the first current traversal node in the at least one first view element.
In an optional embodiment, said obtaining the ending style attribute of the at least one second view element from the ending view element tree comprises:
traversing nodes in the end view element tree to obtain the at least one second view element;
acquiring a style attribute corresponding to a second current traversal node in the process of traversing the nodes in the end view element tree;
and taking the style attribute corresponding to the second current traversal node as the ending style attribute of the view element corresponding to the second current traversal node in the at least one second view element.
According to a second aspect of the embodiments of the present disclosure, there is provided an animation rendering apparatus including:
the data acquisition module is configured to parse, from the animation configuration information, the starting style attribute of at least one first view element in the animation starting view, the ending style attribute of at least one second view element in the animation ending view, and the target animation duration;
a start-end style attribute determination module configured to perform determining a start-end style attribute of a target view element based on the start style attribute and the end style attribute, the target view element being an element union of the at least one first view element and the at least one second view element; the start-end style attribute of the target view element comprises a style attribute of the target view element in the animation start view and a style attribute of the target view element in the animation end view;
and the target animation rendering module is configured to render the target animation based on the start-end style attributes and the target animation duration.
In an alternative embodiment, the target animation rendering module includes:
an animation rendering executor creating unit configured to perform creating an animation rendering executor matched with the local operating system;
and the target animation rendering unit is configured to perform rendering of the target animation based on the animation rendering executor, the start-end style attributes, and the target animation duration.
In an alternative embodiment, the target animation rendering unit includes:
a style attribute variation information generation unit configured to perform control of the animation rendering executor to generate style attribute variation information of the target view element within the target animation duration based on the start-end style attributes;
and the target animation rendering subunit is configured to control the animation rendering executor to render the target animation based on the style attribute variation information.
In an optional embodiment, the style attribute variation information includes variation information of a plurality of attributes within the target animation duration; the target animation rendering subunit is specifically configured to: control the animation rendering executor to render the view corresponding to the target view element within the target animation duration based on the variation information of the multiple attributes within the target animation duration, so as to obtain the target animation.
In an optional embodiment, the apparatus further comprises:
a view element comparison module configured to perform comparison between the at least one first view element and the at least one second view element to obtain a comparison result;
in a case that the comparison result indicates that the at least one first view element and the at least one second view element are not identical, the start-end style attribute determination module includes:
a view element determination unit configured to perform determining a shared view element and a non-shared view element of the target view elements;
a first start-end style attribute determining unit, configured to perform determining a start-end style attribute corresponding to the non-shared view element based on a preset style attribute and a target style attribute of the non-shared view element, where the target style attribute is a start style attribute or an end style attribute of the non-shared view element;
a second start-end style attribute determination unit configured to perform determining a start-end style attribute of the shared view element according to the starting style attribute of the shared view element and the ending style attribute of the shared view element.
In an optional embodiment, in a case that the comparison result indicates that the at least one first view element and the at least one second view element are identical, the start-end style attribute determination module includes:
a third start-end style attribute determination unit configured to perform setting the start style attribute of the target view element and the end style attribute of the target view element as start-end style attributes of the target view element.
In an optional embodiment, the data acquisition module comprises:
the parsing unit is configured to parse the animation configuration information to obtain a starting view element tree corresponding to the animation starting view, an ending view element tree corresponding to the animation ending view, and the target animation duration; the starting view element tree is a structure tree that takes the view elements carrying style attributes in the animation starting view as nodes and the view-layer hierarchical relationship among the view elements in the animation starting view as the node hierarchical relationship, and the ending view element tree is a structure tree that takes the view elements carrying style attributes in the animation ending view as nodes and the view-layer hierarchical relationship among the view elements in the animation ending view as the node hierarchical relationship;
a starting style attribute acquiring unit configured to acquire a starting style attribute of the at least one first view element from the starting view element tree;
an ending style attribute obtaining unit configured to perform obtaining an ending style attribute of the at least one second view element from the ending view element tree.
In an optional embodiment, the starting style attribute obtaining unit includes:
a first node traversal unit configured to perform traversal of nodes in the starting view element tree to obtain the at least one first view element;
the first style attribute acquisition unit is configured to acquire a style attribute corresponding to a first current traversal node in the process of traversing the nodes in the starting view element tree;
a starting style attribute determination unit configured to take the style attribute corresponding to the first current traversal node as the starting style attribute of the view element corresponding to the first current traversal node among the at least one first view element.
In an optional embodiment, the ending style attribute obtaining unit includes:
a second node traversal unit configured to perform traversal of nodes in the end view element tree to obtain the at least one second view element;
the second style attribute acquisition unit is configured to acquire a style attribute corresponding to a second current traversal node in the process of traversing the nodes in the end view element tree;
an ending style attribute determination unit configured to take the style attribute corresponding to the second current traversal node as the ending style attribute of the view element corresponding to the second current traversal node among the at least one second view element.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method of any one of the first aspects described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the method of any one of the first aspects of the embodiments of the present disclosure.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of any one of the first aspects of the embodiments of the present disclosure.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
in the animation rendering process, the animation is rendered from the starting style attribute of at least one first view element in the animation starting view, the ending style attribute of at least one second view element in the animation ending view, and the target animation duration, all parsed from the animation configuration information; this effectively reduces the data volume of animation configuration parsing and improves the convenience and efficiency of that parsing. Moreover, describing the animation rendering data through the style attributes of the view elements in the animation start and end views greatly improves the simplicity of the animation view description, and better improves the operation convenience of animation rendering, the rendering efficiency, and the fluency of the animation.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a schematic diagram illustrating an application environment in accordance with an illustrative embodiment;
FIG. 2 is a flow diagram illustrating a method of rendering an animation according to an exemplary embodiment;
FIG. 3 is a flowchart illustrating parsing, from animation configuration information, of a starting style attribute of at least one first view element in an animation start view, an ending style attribute of at least one second view element in an animation end view, and a target animation duration, according to an exemplary embodiment;
FIG. 4 is a flowchart illustrating a method for determining a start-end style attribute of a target view element based on a start style attribute and an end style attribute in accordance with an illustrative embodiment;
FIG. 5 is a flowchart illustrating rendering of a target animation based on a start-end style property and a target animation duration, according to an exemplary embodiment;
FIG. 6 is a schematic illustration of a partial view of a target animation provided in accordance with an exemplary embodiment;
FIG. 7 is a block diagram of an animation rendering device, shown in accordance with an exemplary embodiment;
FIG. 8 is a block diagram illustrating an electronic device for animation rendering according to an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) referred to in the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application environment according to an exemplary embodiment, and as shown in fig. 1, the application environment may include a terminal 100 and a server 200.
The terminal 100 can be used to provide business services to any user and to render animations related to those services. Specifically, the terminal 100 may include, but is not limited to, mobile electronic devices such as smartphones, Augmented Reality (AR)/Virtual Reality (VR) devices, and smart wearable devices, and may also be software running on such a device, such as an application program. Alternatively, the operating system running on the electronic device may include, but is not limited to, the Android system, the iOS system, and the like.
In an alternative embodiment, the server 200 may provide a background service for the terminal 100. Specifically, the server 200 may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers.
In addition, it should be noted that fig. 1 shows only one application environment provided by the present disclosure, and in practical applications, other application environments may also be included, for example, more terminals may be included.
In the embodiment of the present specification, the terminal 100 and the server 200 may be directly or indirectly connected through wired or wireless communication, and the disclosure is not limited herein.
Fig. 2 is a flowchart illustrating an animation rendering method according to an exemplary embodiment. The method is used in a terminal electronic device and, as shown in fig. 2, includes the following steps.
In step S201, parsing, from animation configuration information, a starting style attribute of at least one first view element in an animation starting view, an ending style attribute of at least one second view element in an animation ending view, and a target animation duration;
in a specific embodiment, the animation configuration information may be rendering configuration information of an animation to be rendered. Alternatively, the animation configuration information may be acquired from a server. Specifically, the animation configuration information may be configured based on a preset universal parsing protocol. Optionally, the preset general parsing protocol may be a preset parsing protocol adapted to multiple operating systems. Specifically, the various operating systems may include an android system, an IOS system, and the like.
In particular, the at least one first view element may be a view element in an animation starting view, and the starting style attribute of the at least one first view element may be a style attribute of the at least one first view element in the animation starting view. The at least one second view element may be a view element in an animation end view, and the end style attribute of the at least one second view element may be a style attribute of the at least one second view element in the animation end view. Specifically, the target animation duration may be a preset playing duration of the animation to be rendered.
In a specific embodiment, the style attribute of the view element may be an attribute capable of characterizing a presentation state of the view element in the page, and optionally, the style attribute may include multiple attributes, such as a width, a height, a transparency, and coordinates of the view element.
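As an illustration, the multiple attributes named above (width, height, transparency, and coordinates) could be modeled as a record such as the following; the field names and units are assumptions, not part of the disclosure:

```typescript
// Hypothetical shape of one view element's style attributes, matching the
// attributes named in the text: width, height, transparency, coordinates.
interface ViewElementStyle {
  width: number;   // in px (assumed unit)
  height: number;
  opacity: number; // 0 (fully transparent) .. 1 (fully opaque)
  x: number;       // page coordinates of the element
  y: number;
}

// Example: a badge that starts invisible at the top-left of its area.
const badgeStart: ViewElementStyle = { width: 120, height: 40, opacity: 0, x: 16, y: 8 };
```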
In an optional embodiment, as shown in fig. 3, the parsing, from the animation configuration information, of the starting style attribute of at least one first view element in the animation starting view, the ending style attribute of at least one second view element in the animation ending view, and the target animation duration may include:
in step S2011, the animation configuration information is parsed to obtain a starting view element tree corresponding to the animation starting view, an ending view element tree corresponding to the animation ending view, and a target animation duration.
In a specific embodiment, the starting view element tree may be a structure tree in which the view elements of the animation starting view, each carrying its style attributes, serve as nodes, and the view-layer hierarchical relationship between the view elements in the animation starting view serves as the node hierarchical relationship. The ending view element tree may be a structure tree in which the view elements of the animation ending view, each carrying its style attributes, serve as nodes, and the view-layer hierarchical relationship between the view elements in the animation ending view serves as the node hierarchical relationship.
In a specific embodiment, the terminal may be provided with a preset parser, and the parser may parse the animation configuration information in combination with the preset general parsing protocol to obtain the starting view element tree corresponding to the animation starting view, the ending view element tree corresponding to the animation ending view, and the target animation duration.
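One way to picture the parser is as deserializing a configuration document into the two element trees plus the duration. The JSON shape below is an assumption made for illustration; the preset general parsing protocol itself is not specified in this excerpt:

```typescript
// Illustrative node shape: each node carries its style attributes and its
// children, so the view-layer hierarchy maps directly onto the tree.
interface ElementNode {
  id: string;
  style: Record<string, number>;
  children: ElementNode[];
}

interface ParsedConfig {
  startTree: ElementNode; // starting view element tree
  endTree: ElementNode;   // ending view element tree
  durationMs: number;     // target animation duration
}

// Hypothetical parser sketch: the config is assumed to be JSON; only the
// shape is validated here.
function parseAnimationConfig(json: string): ParsedConfig {
  const raw = JSON.parse(json);
  if (!raw.startTree || !raw.endTree || typeof raw.durationMs !== "number") {
    throw new Error("malformed animation configuration");
  }
  return raw as ParsedConfig;
}
```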
In step S2013, a start style attribute of at least one first view element is obtained from the start view element tree.
In an optional embodiment, the obtaining the starting style attribute of the at least one first view element from the starting view element tree may include: traversing nodes in the starting view element tree to obtain the at least one first view element; in the process of traversing the nodes in the starting view element tree, acquiring the style attribute corresponding to a first current traversal node; and taking the style attribute corresponding to the first current traversal node as the starting style attribute of the view element corresponding to the first current traversal node among the at least one first view element.
In a particular embodiment, the first current traversal node may be the node currently traversed from the starting view element tree. Specifically, nodes in the starting view element tree may be traversed in sequence from the root node to determine view elements in the animation starting view, and in the traversing process, a style attribute corresponding to the currently traversed node may be used as a starting style attribute of the view element corresponding to the node.
In the above embodiment, by traversing the nodes in the starting view element tree, the starting style attribute of each view element can be obtained at the same time as the view elements in the animation starting view are determined, which greatly improves the efficiency and accuracy of obtaining the style attributes of the view elements in the animation starting view.
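The traversal described above can be sketched as a depth-first walk from the root node, recording each visited node's style attributes as the starting style attribute of its view element. The `ElementNode` shape and all names are illustrative assumptions:

```typescript
interface ElementNode {
  id: string;
  style: Record<string, number>;
  children: ElementNode[];
}

// Depth-first traversal from the root: every node visited yields one view
// element, and its style attributes become that element's starting style.
function collectStyles(root: ElementNode): Map<string, Record<string, number>> {
  const styles = new Map<string, Record<string, number>>();
  const stack: ElementNode[] = [root];
  while (stack.length > 0) {
    const node = stack.pop()!;       // current traversal node
    styles.set(node.id, node.style); // node's style -> element's start style
    // Push children in reverse so the left-most child is visited first.
    for (let i = node.children.length - 1; i >= 0; i--) {
      stack.push(node.children[i]);
    }
  }
  return styles;
}
```

The same walk over the ending view element tree yields the ending style attributes, which is why a single traversal pass per tree suffices.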
In step S2015, an end style attribute of at least one second view element is obtained from the end view element tree.
In an optional embodiment, the obtaining, from the ending view element tree, an ending style attribute of at least one second view element may include: traversing nodes in the ending view element tree to obtain at least one second view element; acquiring a style attribute corresponding to a second current traversal node in the process of traversing nodes in the end view element tree; and taking the style attribute corresponding to the second current traversal node as the ending style attribute of the view element corresponding to the second current traversal node in the at least one second view element.
In a particular embodiment, the second current traversal node may be the node currently traversed from the ending view element tree. Specifically, nodes in the ending view element tree may be traversed in sequence from the root node to determine the view elements in the animation ending view, and in the traversal process, the style attribute corresponding to the currently traversed node may be used as the ending style attribute of the view element corresponding to that node.
In the above embodiment, by traversing the nodes in the end view element tree, the start style attribute of the view element can be acquired while the view element in the animation end view is determined, and the efficiency and accuracy of acquiring the style attribute of the view element in the animation end view are greatly improved.
In an optional embodiment, in the case that the view elements in the animation starting view and the animation ending view are the same, the at least one first view element and the at least one second view element may be pushed into corresponding stack structures in traversal order, thereby improving the orderliness of the data.
In the above embodiment, the animation configuration information is parsed into structure trees whose nodes are view elements carrying style attributes, with the view layer hierarchy among the view elements serving as the node hierarchy. This greatly improves the simplicity of the animation view description, reduces the consumption of computing resources during animation rendering, and improves device performance.
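One hedged way to picture the parsing step — assuming a JSON-like configuration layout that the patent does not specify — is to build each view element tree recursively, so that the node hierarchy mirrors the view layer hierarchy:

```python
def build_tree(config):
    """Recursively convert one configuration entry into a
    (name, style, children) node; child order follows view layer order."""
    return (
        config["name"],
        config.get("style", {}),
        [build_tree(child) for child in config.get("children", [])],
    )

# Hypothetical starting-view configuration fragment
start_config = {
    "name": "root",
    "style": {"background": "#fff"},
    "children": [{"name": "avatar", "style": {"opacity": 1.0}}],
}
root = build_tree(start_config)
```

The ending view element tree would be built the same way from the ending-view portion of the configuration.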
In step S203, the start-end style attribute of the target view element is determined based on the start style attribute and the end style attribute.
In a specific embodiment, the target view element may be a union of at least one first view element and at least one second view element; the beginning and end style properties of the target view element may include a style property of the target view element in the animation beginning view and a style property of the target view element in the animation ending view.
In an optional embodiment, the view elements in the animation start view and the animation end view may be the same or different, and accordingly, the method may further include:
comparing at least one first view element with at least one second view element to obtain a comparison result;
optionally, in a case that the comparison result indicates that the at least one first view element and the at least one second view element are not consistent, as shown in fig. 4, the determining the beginning-end style attribute of the target view element based on the beginning style attribute and the ending style attribute may include the following steps:
in step S2031, shared view elements and non-shared view elements among the target view elements are determined;
in step S2033, based on the preset style attribute and the target style attribute of the non-shared view element, determining a start-end style attribute corresponding to the non-shared view element;
in step S2035, the start and end style attributes of the shared view element are determined according to the start style attribute of the shared view element and the end style attribute of the shared view element.
In a specific embodiment, the shared view element may be a view element included in both the animation start view and the animation end view; the non-shared view elements may be view elements contained in an animation start view or an animation end view. Specifically, the target style attribute is a start style attribute or an end style attribute of the non-shared view element. Specifically, the preset style attribute may be a transparency of 0.
In a specific embodiment, the non-shared view elements may include one or more view elements. Optionally, when a view element among the non-shared view elements belongs to the animation starting view, its target style attribute may be its starting style attribute; accordingly, the preset style attribute may be used as its ending style attribute. Conversely, when a view element among the non-shared view elements belongs to the animation ending view, its target style attribute may be its ending style attribute; accordingly, the preset style attribute may be used as its starting style attribute.
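A minimal sketch of this pairing logic follows. The element names and the dict representation are illustrative assumptions; only the "transparency of 0" preset comes from the description above:

```python
PRESET_STYLE = {"opacity": 0.0}  # the preset style attribute: transparency of 0

def start_end_styles(start_styles, end_styles):
    """Shared elements (present in both views) pair their own start and
    end styles; a non-shared element uses the preset style on the side of
    the animation in which it does not appear."""
    pairs = {}
    for name in start_styles.keys() | end_styles.keys():
        pairs[name] = (
            start_styles.get(name, PRESET_STYLE),  # fades in if end-only
            end_styles.get(name, PRESET_STYLE),    # fades out if start-only
        )
    return pairs

start_styles = {"avatar": {"opacity": 1.0}}
end_styles = {"avatar": {"opacity": 0.5}, "text605": {"opacity": 1.0}}
pairs = start_end_styles(start_styles, end_styles)
```

Here `text605` is absent from the starting view, so its start side is the preset style and it fades in; `avatar` is shared and simply pairs its own styles.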
In an optional embodiment, in a case that the comparison result indicates that the at least one first view element and the at least one second view element are consistent, the determining the start-end style attribute of the target view element based on the start style attribute and the end style attribute includes: taking the starting style attribute of the target view element and the ending style attribute of the target view element as the start-end style attributes of the target view element.
In a specific embodiment, in the case that the comparison result indicates that at least one first view element and at least one second view element are consistent, it may be determined that the view elements in the animation start view and the animation end view are consistent, and accordingly, the start style attribute of the target view element and the end style attribute of the target view element may be used as the start and end style attributes of the target view element.
In the above embodiment, when the comparison result indicates that the at least one first view element and the at least one second view element are inconsistent, the start-end style attributes of the target view elements can be determined in a targeted manner by identifying the shared and non-shared view elements among the target view elements and combining the preset style attribute. When the comparison result indicates that they are consistent, the starting and ending style attributes of the target view element are used directly as its start-end style attributes. This improves the accuracy and effectiveness of the determined start-end style attributes and better handles different animation rendering scenarios.
In step S205, a target animation is rendered based on the start-end style attributes and the target animation duration.
In a particular embodiment, the target animation may be an animation that needs to be rendered.
In an alternative embodiment, as shown in FIG. 5, rendering the target animation based on the beginning-end style property and the target animation duration may include the following steps:
in step S2051, an animation rendering executor matching with the local operating system is created;
in step S2053, the target animation is rendered based on the animation rendering executor, the start-end style attribute, and the target animation duration.
In a specific embodiment, in the case of creating an animation rendering executor matched with a local operating system, the animation rendering executor may be invoked, and a target animation is rendered by the animation rendering executor by combining the start-end style attribute and the target animation duration of the target view element.
In the embodiment, the target animation is rendered by creating the animation rendering executor matched with the local operating system, so that the local operating system can be better adapted, and the adaptability and the efficiency of the animation rendering operation can be further improved.
In an alternative embodiment, the rendering the target animation based on the animation rendering executor, the start-end style attribute and the target animation duration may include: controlling the animation rendering executor to generate style attribute variation information of the target view element within the target animation duration based on the start-end style attributes; and controlling the animation rendering executor to render the target animation based on the style attribute variation information.
In an alternative embodiment, the style attribute variation information of the target view element in the target animation duration may be a style attribute variation curve of the target view element in the target animation duration. The style attribute variation curve may record a style attribute corresponding to the target view element at each time.
In a specific embodiment, a linear relationship between time and style attributes may be set in the animation rendering executor, and accordingly, style attribute variation information of the target view element within the target animation duration may be generated by combining the linear relationship and the start and end attributes of the target view element, and the target animation may be rendered by combining the style attribute variation information.
In the above embodiment, the style attribute variation information of the target view element within the target animation duration is generated in combination with the start-end style attributes, so that the style attribute of the target view element within the target animation duration can be clearly and intuitively represented, and the target animation can be rapidly rendered in combination with the style attribute variation information.
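The linear time/style relationship mentioned above can be sketched as follows. This is a minimal illustration — real executors may substitute easing curves, and the attribute names are assumptions:

```python
def style_at(t, duration, start, end):
    """Interpolate every numeric style attribute linearly between its
    start and end values at time t, clamped to [0, duration]."""
    f = max(0.0, min(1.0, t / duration))
    return {k: start[k] + (end[k] - start[k]) * f for k in start}

# Style of a hypothetical target view element halfway through a 1-second animation
mid = style_at(0.5, 1.0, {"opacity": 0.0, "x": 0.0}, {"opacity": 1.0, "x": 100.0})
```

Evaluating `style_at` at each frame time yields the style attribute variation curve; with multiple attributes, each is interpolated independently, matching the multi-attribute case discussed next.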
In an alternative embodiment, the style attribute may include multiple attributes, and accordingly, the variation information of the multiple attributes within the target animation duration may be determined by combining the linear relationship between each attribute and time. Optionally, in a case that the style attribute variation information includes variation information of multiple attributes within the target animation duration, the controlling the animation rendering executor to render the target animation based on the style attribute variation information may include: controlling the animation rendering executor to render, based on the variation information of the multiple attributes within the target animation duration, the view corresponding to the target view element within the target animation duration to obtain the target animation.
In a specific embodiment, the view corresponding to the target view element may be a view of the target view element in the page.
In the above embodiment, under the condition that the style attributes include multiple attributes, the multiple attributes of the target view element in the target animation duration can be clearly and intuitively represented by combining the variation information of the multiple attributes in the target animation duration, so that the target animation can be rapidly rendered.
In a particular embodiment, as shown in FIG. 6, FIG. 6 is a diagram of partial views of a target animation provided according to an exemplary embodiment. Optionally, when the animation playing instruction is triggered, the starting style attributes of the view elements in the animation starting view, the ending style attributes of the view elements in the animation ending view, and the target animation duration may be parsed from the animation configuration information. In the animation starting view of the target animation shown in fig. 6a, the view elements may include a background element 601, an avatar element 602, a text element 603, and a button element 604; in the animation ending view shown in fig. 6c, the view elements may include the background element 601, the avatar element 602, the text element 603, the button element 604, and a text element 605. Accordingly, it may be determined that the target view elements include the background element 601, the avatar element 602, the text element 603, the button element 604, and the text element 605. Because the text element 605 is added in the animation ending view relative to the animation starting view, a preset style attribute may be used as the style attribute of the text element 605 in the animation starting view, and the respective start-end style attributes of the background element 601, the avatar element 602, the text element 603, the button element 604, and the text element 605 are generated. The style attribute variation information of these five elements within the target animation duration can then be determined from their respective start-end style attributes. Further, the views corresponding to these target view elements within the target animation duration may be rendered based on that style attribute variation information. Optionally, as shown in fig. 6b, the view elements of an animation middle view of the target animation (an animation view between the animation starting view and the animation ending view) are consistent with those in the animation starting view, but with changed style attributes.
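Applying the preceding steps to the newly added text element 605 gives a concrete (though hypothetical) trace — the figure does not supply numeric style values, so the opacity curve below is an assumption: element 605 starts from the preset transparency of 0 and fades in linearly over the target animation duration:

```python
def opacity_605(t, duration=1.0):
    """Opacity of text element 605 at time t: linear fade-in from the
    preset transparency of 0 to fully visible, clamped to [0, 1]."""
    return min(1.0, max(0.0, t / duration))

# Opacities at the start, middle, and end of a 1-second animation
frames = [opacity_605(t / 10) for t in (0, 5, 10)]
```

The shared elements 601-604 would be interpolated the same way between their own start and end styles instead of the preset style.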
As can be seen from the technical solutions provided by the embodiments of the present specification, in the animation rendering process of the present specification, rendering of an animation is implemented through the starting style attribute of at least one first view element in the animation starting view, the ending style attribute of at least one second view element in the animation ending view, and the target animation duration, all parsed from the animation configuration information, so that the data volume of animation configuration parsing can be effectively reduced, and the convenience and efficiency of animation configuration parsing are improved; and describing the animation rendering data in combination with the style attributes of the view elements in the animation start and end views greatly improves the simplicity of the animation view description, and better improves the operational convenience, rendering efficiency, and animation fluency of animation rendering.
FIG. 7 is a block diagram illustrating an animation rendering apparatus according to an example embodiment. Referring to fig. 7, the apparatus includes:
the data acquisition module 710 is configured to parse, from the animation configuration information, a starting style attribute of at least one first view element in the animation starting view, an ending style attribute of at least one second view element in the animation ending view, and a target animation duration;
a start-end style attribute determination module 720 configured to perform determining a start-end style attribute of a target view element based on a start style attribute and an end style attribute, the target view element being a union of elements of at least one first view element and at least one second view element; the start and end style attributes of the target view element comprise a style attribute of the target view element in the animation start view and a style attribute of the target view element in the animation end view;
and a target animation rendering module 730 configured to perform rendering the target animation based on the start-end style property and the target animation duration.
In an alternative embodiment, the target animation rendering module 730 includes:
an animation rendering executor creating unit configured to perform creating an animation rendering executor matched with the local operating system;
and the target animation rendering unit is configured to render the target animation based on the animation rendering executor, the start-end style attributes and the target animation duration.
In an alternative embodiment, the target animation rendering unit includes:
the style attribute variation information generation unit is configured to control the animation rendering executor to generate style attribute variation information of the target view element within the target animation duration based on the start-end style attributes;
and the target animation rendering subunit is configured to control the animation rendering executor to render the target animation based on the style attribute variation information.
In an optional embodiment, the style attribute variation information includes variation information of multiple attributes within the target animation duration; the target animation rendering subunit is specifically configured to: control the animation rendering executor to render, based on the variation information of the multiple attributes within the target animation duration, the view corresponding to the target view element within the target animation duration to obtain the target animation.
In an optional embodiment, the apparatus further comprises:
the view element comparison module is configured to compare at least one first view element with at least one second view element to obtain a comparison result;
in the case that the comparison result indicates that the at least one first view element and the at least one second view element are not identical, the start-end style attribute determination module 720 includes:
a view element determination unit configured to perform determining a shared view element and a non-shared view element in the target view element;
the first starting and ending style attribute determining unit is configured to execute determining starting and ending style attributes corresponding to the non-shared view elements based on a preset style attribute and a target style attribute of the non-shared view elements, wherein the target style attribute is a starting style attribute or an ending style attribute of the non-shared view elements;
a second start and end style attribute determination unit configured to perform determining a start and end style attribute of the shared view element according to a start style attribute of the shared view element and an end style attribute of the shared view element.
In an alternative embodiment, in the case that the comparison result indicates that the at least one first view element and the at least one second view element are consistent, the start-end style attribute determination module 720 includes:
a third start-end style attribute determination unit configured to perform setting the start style attribute of the target view element and the end style attribute of the target view element as start-end style attributes of the target view element.
In an alternative embodiment, the data acquisition module 710 includes:
the analysis processing unit is configured to parse the animation configuration information to obtain a starting view element tree corresponding to the animation starting view, an ending view element tree corresponding to the animation ending view, and the target animation duration; the starting view element tree is a structure tree that takes the view elements carrying style attributes in the animation starting view as nodes and the view layer hierarchy among those view elements as the node hierarchy; the ending view element tree is a structure tree that takes the view elements carrying style attributes in the animation ending view as nodes and the view layer hierarchy among those view elements as the node hierarchy;
a starting style attribute acquiring unit configured to acquire a starting style attribute of at least one first view element from a starting view element tree;
and the ending style attribute acquiring unit is configured to acquire the ending style attribute of at least one second view element from the ending view element tree.
In an alternative embodiment, the start style property acquisition unit includes:
a first node traversal unit configured to perform traversal of nodes in the starting view element tree to obtain at least one first view element;
the first pattern attribute acquisition unit is configured to acquire a pattern attribute corresponding to a first current traversal node in the process of traversing nodes in the initial view element tree;
a starting style attribute determining unit configured to perform the style attribute corresponding to the first current traversal node as a starting style attribute of a view element corresponding to the first current traversal node among the at least one first view element.
In an optional embodiment, the ending style attribute acquiring unit includes:
a second node traversal unit configured to perform traversal of a node in the end-view element tree to obtain at least one second view element;
the second style attribute acquisition unit is configured to acquire a style attribute corresponding to a second current traversal node in the process of traversing the nodes in the end view element tree;
and an ending style attribute determining unit configured to perform the style attribute corresponding to the second current traversal node as an ending style attribute of the view element corresponding to the second current traversal node in the at least one second view element.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 8 is a block diagram illustrating an electronic device for animation rendering, which may be a terminal, according to an example embodiment, and an internal structure thereof may be as shown in fig. 8. The electronic device comprises a processor, a memory, a network interface, a display screen and an input device which are connected through a system bus. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic equipment comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operating system and the computer program to run on the non-volatile storage medium. The network interface of the electronic device is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement an animation rendering method. The display screen of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the electronic equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and does not constitute a limitation on the electronic devices to which the disclosed aspects apply, as a particular electronic device may include more or fewer components than those shown, or combine certain components, or have a different arrangement of components.
In an exemplary embodiment, there is also provided an electronic device including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement an animation rendering method as in embodiments of the disclosure.
In an exemplary embodiment, there is also provided a computer-readable storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform an animation rendering method in an embodiment of the present disclosure.
In an exemplary embodiment, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the animation rendering method in the embodiments of the present disclosure.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An animation rendering method, comprising:
analyzing, from the animation configuration information, a starting style attribute of at least one first view element in an animation starting view, an ending style attribute of at least one second view element in an animation ending view, and a target animation duration;
determining a start-end style attribute of a target view element based on the start style attribute and the end style attribute, the target view element being an element union of the at least one first view element and the at least one second view element; the start-end style attribute of the target view element comprises a style attribute of the target view element in the animation start view and a style attribute of the target view element in the animation end view;
and rendering the target animation based on the start-end style attributes and the target animation duration.
2. The animation rendering method of claim 1, wherein rendering the target animation based on the start-end style property and the target animation duration comprises:
creating an animation rendering executor matched with a local operating system;
and rendering the target animation based on the animation rendering executor, the start-end style attributes and the target animation duration.
3. The animation rendering method of claim 2, wherein the rendering the target animation based on the animation rendering executor, the start-end style attribute, and the target animation duration comprises:
controlling the animation rendering executor to generate style attribute variation information of the target view element in the target animation duration based on the start-end style attributes;
and controlling the animation rendering executor to render the target animation based on the style attribute variation information.
4. The animation rendering method according to claim 3, wherein the style attribute variation information includes variation information of a plurality of attributes within the target animation duration; the controlling the animation rendering executor, rendering the target animation based on the style attribute variation information, includes:
and controlling the animation rendering executor to render, based on the variation information of the multiple attributes within the target animation duration, the view corresponding to the target view element within the target animation duration to obtain the target animation.
5. The animation rendering method according to any one of claims 1 to 4, further comprising:
comparing the at least one first view element with the at least one second view element to obtain a comparison result;
in case the comparison result indicates that the at least one first view element and the at least one second view element are inconsistent, the determining, based on the start style attribute and the end style attribute, a start-end style attribute of the target view element comprises:
determining shared view elements and non-shared view elements in the target view elements;
determining a start-end style attribute corresponding to the non-shared view element based on a preset style attribute and a target style attribute of the non-shared view element, wherein the target style attribute is a start style attribute or an end style attribute of the non-shared view element;
and determining the start and end style attributes of the shared view elements according to the start style attribute of the shared view elements and the end style attribute of the shared view elements.
6. The animation rendering method of claim 5, wherein, in the case that the comparison result indicates that the at least one first view element and the at least one second view element are consistent, the determining, based on the start style attribute and the end style attribute, the start-end style attribute of the target view element comprises:
and taking the starting style attribute of the target view element and the ending style attribute of the target view element as the start-end style attributes of the target view element.
7. An animation rendering apparatus, comprising:
the data acquisition module is configured to parse, from the animation configuration information, a starting style attribute of at least one first view element in an animation starting view, an ending style attribute of at least one second view element in an animation ending view, and a target animation duration;
a start-end style attribute determination module configured to perform determining a start-end style attribute of a target view element based on the start style attribute and the end style attribute, the target view element being an element union of the at least one first view element and the at least one second view element; the start-end style attribute of the target view element comprises a style attribute of the target view element in the animation start view and a style attribute of the target view element in the animation end view;
and the target animation rendering module is configured to render the target animation based on the start-end style attributes and the target animation duration.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the animation rendering method of any of claims 1 to 6.
9. A computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the animation rendering method of any of claims 1 to 6.
10. A computer program product comprising computer instructions, wherein the computer instructions, when executed by a processor, implement the animation rendering method of any of claims 1 to 6.
CN202210343845.5A 2022-03-31 2022-03-31 Animation rendering method and device, electronic equipment and storage medium Pending CN114862996A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210343845.5A CN114862996A (en) 2022-03-31 2022-03-31 Animation rendering method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210343845.5A CN114862996A (en) 2022-03-31 2022-03-31 Animation rendering method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114862996A true CN114862996A (en) 2022-08-05

Family

ID=82629836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210343845.5A Pending CN114862996A (en) 2022-03-31 2022-03-31 Animation rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114862996A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060103655A1 (en) * 2004-11-18 2006-05-18 Microsoft Corporation Coordinating animations and media in computer display output
CN111899322A (en) * 2020-06-29 2020-11-06 腾讯科技(深圳)有限公司 Video processing method, animation rendering SDK, device and computer storage medium
CN113409427A (en) * 2021-07-21 2021-09-17 北京达佳互联信息技术有限公司 Animation playing method and device, electronic equipment and computer readable storage medium
CN114139083A (en) * 2022-01-06 2022-03-04 北京百度网讯科技有限公司 Webpage rendering method and device and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KONG, Suran; YIN, Junping: "Design of a real-time texture rendering system for three-dimensional animation images", Modern Electronics Technique, no. 05, 27 February 2018 (2018-02-27) *

Similar Documents

Publication Publication Date Title
CN109343851B (en) Page generation method, page generation device, computer equipment and storage medium
CN106250104B (en) A kind of remote operating system for server, method and device
CN113727039B (en) Video generation method and device, electronic equipment and storage medium
EP4075299A1 (en) Method and apparatus for recommending multimedia resource
CN114924815B (en) Page rendering method and device, electronic equipment and storage medium
CN109542962B (en) Data processing method, data processing device, computer equipment and storage medium
CN114637935A (en) Page information display method and device, electronic equipment and storage medium
US20230030729A1 (en) Method and apparatus for displaying page
CN114327435A (en) Technical document generation method and device and computer readable storage medium
CN115145545A (en) Method and device for generating small program code, computer equipment and storage medium
CN113610558A (en) Resource distribution method and device, electronic equipment and storage medium
CN114862996A (en) Animation rendering method and device, electronic equipment and storage medium
CN113672829B (en) Page display method and device, electronic equipment and storage medium
CN114817585A (en) Multimedia resource processing method and device, electronic equipment and storage medium
CN114491093B (en) Multimedia resource recommendation and object representation network generation method and device
CN113730917A (en) Game script generation method and device, computer equipment and storage medium
CN109614188A (en) A kind of page online help method, apparatus, computer equipment and storage medium
CN114463474A (en) Page display method and device, electronic equipment, storage medium and product
CN115269529A (en) Document processing method and device, electronic equipment and storage medium
CN113868516A (en) Object recommendation method and device, electronic equipment and storage medium
CN114924782B (en) Service update processing method and device, electronic equipment and storage medium
CN112700522A (en) Method and system for displaying spine animation file in unity
CN114996249B (en) Data processing method, device, electronic equipment, storage medium and product
CN113204477B (en) Application testing method and device, electronic equipment and storage medium
CN114118033B (en) Report generation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination