
CN116310013A - Animation rendering method, device, computer equipment and computer readable storage medium - Google Patents


Info

Publication number
CN116310013A
CN116310013A (application CN202310099492.3A)
Authority
CN
China
Prior art keywords
fluid
model
fluid particles
particles
attribute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310099492.3A
Other languages
Chinese (zh)
Inventor
潘俊澎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202310099492.3A priority Critical patent/CN116310013A/en
Publication of CN116310013A publication Critical patent/CN116310013A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/603D [Three Dimensional] animation of natural phenomena, e.g. rain, snow, water or plants
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/66Methods for processing data by generating or executing the game program for rendering three dimensional images
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/66Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/663Methods for processing data by generating or executing the game program for rendering three dimensional images for simulating liquid objects, e.g. water, gas, fog, snow, clouds
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides an animation rendering method, an animation rendering device, computer equipment and a computer readable storage medium. The method includes: acquiring a virtual paper model and an ink fluid model covering the virtual paper model, wherein the virtual paper model is filled with first fluid particles having a first attribute and the ink fluid model is filled with second fluid particles having a second attribute; performing hydrodynamic analysis on the first fluid particles and the second fluid particles based on preset velocity field information to obtain third fluid particles having a third attribute; and rendering and displaying, based on the first fluid particles, the second fluid particles and the third fluid particles, a fluid halation animation in which the ink fluid model halates the virtual paper model. With this method, the effect quality of the fluid halation animation can be effectively improved.

Description

Animation rendering method, device, computer equipment and computer readable storage medium
Technical Field
The present application relates to the field of game technologies, and in particular, to an animation rendering method, an animation rendering device, a computer device, and a computer readable storage medium.
Background
In the artistic expression of Chinese-style visual effects, ink halation (ink-wash bloom) is a highly expressive form. It is often applied in nostalgic scenes, evokes the traditional painting technique of contrasting yin and yang, and reflects the profound rhythm and beauty of Chinese culture. In particular, when related characters or scenes are introduced in an animation, or when certain characters need emphasis, applying an ink halation effect makes the visual impact more prominent, attracts the viewer's attention, and strengthens engagement.
At present, ink halation effects are generally produced by dedicated high-definition camera shoots, by using fluid smoke to drive particles in three-dimensional software, or by masking with materials in compositing software, but each approach has its limitations. Live shooting with a high-definition camera involves complicated procedures and an unpredictable success rate. Driving particles with fluid smoke in three-dimensional software can reproduce some of the dynamics of ink movement, but its expressive form falls short of the ideal halation appearance. Masking with stock materials in post-production compositing software requires searching a large library for suitable footage, which is labor-intensive, and splicing materials together reduces the consistency of the effect.
Therefore, existing ink halation methods suffer from the technical problem of poor picture quality caused by unsuitable production approaches.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an animation rendering method, apparatus, computer device, and computer-readable storage medium for improving the quality of animation effects of ink bloom on paper.
In a first aspect, the present application provides an animation rendering method, including:
acquiring a virtual paper model and an ink fluid model covering the virtual paper model, wherein the virtual paper model is filled with first fluid particles having a first attribute and the ink fluid model is filled with second fluid particles having a second attribute;
performing hydrodynamic analysis on the first fluid particles and the second fluid particles based on preset velocity field information, to obtain third fluid particles having a third attribute; and
rendering and displaying, based on the first fluid particles, the second fluid particles, and the third fluid particles, a fluid halation animation in which the ink fluid model halates the virtual paper model.
In a second aspect, the present application provides an animation rendering device, including:
a model acquisition module, configured to acquire a virtual paper model and an ink fluid model covering the virtual paper model, wherein the virtual paper model is filled with first fluid particles having a first attribute and the ink fluid model is filled with second fluid particles having a second attribute;
a fluid analysis module, configured to perform hydrodynamic analysis on the first fluid particles and the second fluid particles based on preset velocity field information, to obtain third fluid particles having a third attribute; and
an animation rendering module, configured to render and display, based on the first fluid particles, the second fluid particles, and the third fluid particles, a fluid halation animation in which the ink fluid model halates the virtual paper model.
In a third aspect, the present application also provides a computer device comprising:
one or more processors;
a memory; and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the processor to implement the animation rendering method described above.
In a fourth aspect, the present application also provides a computer readable storage medium having stored thereon a computer program that is loaded by a processor to perform the above-described animation rendering method.
In a fifth aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the animation rendering method provided in the first aspect.
According to the animation rendering method, device, computer equipment and computer readable storage medium described above, a virtual paper model filled with first fluid particles having a first attribute is obtained, together with an ink fluid model that covers the virtual paper model and is filled with second fluid particles having a second attribute. Hydrodynamic analysis can then be performed on the first and second fluid particles based on preset velocity field information to obtain third fluid particles having a third attribute. Finally, a fluid halation animation in which the ink fluid model halates the virtual paper model is rendered and displayed based on the first, second, and third fluid particles. In this way, the simulation of the ink fluid state is carried out at the particle level, the special effect of ink halating paper is achieved, and the effect quality of the fluid halation animation is thereby improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is an application scene graph of an animation rendering method in an embodiment of the present application;
FIG. 2 is a flow chart of an animation rendering method in an embodiment of the present application;
FIG. 3 is a schematic illustration of an interface of a fluid particle in an embodiment of the present application;
FIG. 4 is a schematic illustration of a fluid particle motion interface under the influence of a velocity field in an embodiment of the present application;
FIG. 5 is a schematic illustration of a fluid particle motion interface under the influence of a collision boundary in an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an animation rendering device in an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a computer device in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In the description of the present application, the term "for example" is used to mean "serving as an example, instance, or illustration. Any embodiment described herein as "for example" is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the invention. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and processes have not been described in detail so as not to obscure the description of the invention with unnecessary detail. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
Referring to fig. 1, the animation rendering method may be implemented and executed based on a cloud interaction system, wherein the cloud interaction system includes a terminal device 102 and a server 104. The terminal device 102 may be a device that includes both receive and transmit hardware, i.e., a device having receive and transmit hardware capable of performing bi-directional communications over a bi-directional communication link. Such a device may include: a cellular or other communication device having a single-line display or a multi-line display. The terminal device 102 may be a desktop terminal or a mobile terminal, and the terminal device 102 may be one of a mobile phone, a tablet computer, and a notebook computer. The server 104 may be a stand-alone server, or may be a server network or a server cluster of servers, including but not limited to a computer, a network host, a single network server, an edge server, a set of multiple network servers, or a cloud server of multiple servers. Wherein the Cloud server is composed of a large number of computers or web servers based on Cloud Computing (Cloud Computing). In addition, the terminal device 102 and the server 104 establish a communication connection through a network, and the network may specifically be any one of a wide area network, a local area network, and a metropolitan area network.
In some embodiments of the present application, various cloud applications may run on the cloud interaction system, for example cloud games. Taking cloud games as an example, a cloud game is a game mode based on cloud computing. In the cloud game operation mode, the entity that runs the game program is separated from the entity that presents the game picture: the storage and execution of the animation rendering method are completed on the cloud game server, while the terminal device is used to receive and send data and to present the game picture. For example, the terminal device may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a handheld computer, while the device that performs the information processing is the cloud game server in the cloud. When playing, the player operates the terminal device to send operation instructions to the cloud game server; the cloud game server runs the game according to the instructions, encodes and compresses data such as game pictures, and returns the data to the terminal device through the network; finally the terminal device decodes the data and outputs the game pictures.
In some embodiments of the present application, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores a game program and is used to present a game screen. The local terminal device is used for interacting with the player through the graphical user interface, namely, conventionally downloading and installing the game program through the electronic device and running. The manner in which the local terminal device provides the graphical user interface to the player may include a variety of ways, for example, it may be rendered on a display screen of the terminal, or provided to the player by holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including game visuals, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
A game scene (or referred to as a virtual scene) is a virtual scene that an application program displays (or provides) when running on a terminal or a server. Optionally, the virtual scene is a simulation environment for the real world, or a semi-simulated semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene is any one of a two-dimensional virtual scene and a three-dimensional virtual scene, and the virtual environment can be sky, land, ocean and the like, wherein the land comprises environmental elements such as deserts, cities and the like. The virtual scene is a scene of a complete game logic of a virtual object such as user control, for example, in a sandbox 3D shooting game, the virtual scene is a 3D game world for a player to control the virtual object to fight, and an exemplary virtual scene may include: at least one element selected from mountains, flat lands, rivers, lakes, oceans, deserts, sky, plants, buildings and vehicles; for example, in a 2D card game, the virtual scene is a scene for showing a released card or a virtual object corresponding to the released card, and an exemplary virtual scene may include: arenas, battle fields, or other "field" elements or other elements that can display the status of card play; for a 2D or 3D multiplayer online tactical game, the virtual scene is a 2D or 3D terrain scene for virtual objects to fight, an exemplary virtual scene may include: mountain, line, river, classroom, table and chair, podium, etc.
The game Interface refers to an Interface corresponding to an application program provided or displayed through a graphical User Interface, and the Interface comprises a User Interface (UI) and a game screen for a player to interact. In alternative embodiments, game controls (e.g., skill controls, movement controls, functionality controls, etc.), indication identifiers (e.g., direction indication identifiers, character indication identifiers, etc.), information presentation areas (e.g., number of clicks, time of play, etc.), or game setting controls (e.g., system settings, stores, gold coins, etc.) may be included in the UI interface. In an alternative embodiment, the game screen is a display screen corresponding to the virtual scene displayed by the terminal device, and the game screen may include virtual objects such as a game character, a Non-player character (NPC), and an artificial intelligence character (Artificial Intelligence, AI) that execute game logic in the virtual scene.
It should be noted that the game scenarios described in the embodiments of the present application serve to describe the technical solution of the embodiments more clearly and do not limit the technical solution provided herein. Those skilled in the art will appreciate that, as new service scenarios emerge, the technical solution provided in the embodiments is equally applicable to similar technical problems.
Referring to fig. 2, an embodiment of the present application provides an animation rendering method, and the following embodiment will be exemplified by applying the method to the terminal 102 in fig. 1, where the method includes steps S201 to S203, specifically as follows:
s201, acquiring a virtual paper model and an ink fluid model covered on the virtual paper model; wherein, the virtual paper model is filled with first fluid particles with first attributes, and the ink fluid model is filled with second fluid particles with second attributes.
The virtual paper model and the ink fluid model may each be a geometric model of any material or shape, for example a cube, a sphere, or a column. Because the virtual paper model simulates the surface that supports the halation fluid (such as a sheet of paper or another plane), a cube is generally selected, although other geometries are not excluded. The main view area of the virtual paper model is larger than that of the ink fluid model (taking paper lying on a table as an example, the main view may be the top-down angle perpendicular to the paper); this constraint mainly avoids the paper area being too small to fully present the ink halation effect. In addition, the ink fluid model acts as a simulated fluid emitter, so its shape can be set according to actual service requirements, but differently shaped ink fluid models affect the fluid emission state: an ink fluid model containing a cone emits second fluid particles in a beam shape, while an ink fluid model containing a cube emits second fluid particles in a planar shape.
The main view area refers to the area of the main view surface of the geometric model, and the area unit can be square millimeter, square centimeter, square decimeter, and the like.
The first attribute comprises a first color, a first density, a first viscosity value, and a first cooling value, and the second attribute comprises a second color, a second density, a second viscosity value, and a second cooling value. The first and second colors may be represented numerically; for example, a first color of "1" indicates that the corresponding particle is white, and a second color of "0" indicates that the corresponding particle is black. The first and second densities may also be numeric; for example, a first density of "500" indicates 500 particles per unit area, and a second density of "1000" indicates 1000 particles per unit area. The first and second viscosity values may likewise be numeric; for example, the first viscosity value may be a random value in "0-40" and the second viscosity value may be "10", where a larger viscosity value means the particles experience stronger resistance. The first and second cooling values may also be numeric; for example, a first cooling value of "0" indicates that the corresponding particles do not fuse at all, and a second cooling value of "1" indicates that the corresponding particles fuse completely.
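The attribute sets above can be sketched as plain data records. The following is a minimal illustration, not the patent's implementation; the class and field names are chosen here for readability and the sample values follow the examples in the paragraph above.

```python
from dataclasses import dataclass

@dataclass
class FluidAttributes:
    """Per-particle attribute set as described above (illustrative names)."""
    color: float      # 1.0 = white (paper particle), 0.0 = black (ink particle)
    density: float    # particles per unit area
    viscosity: float  # larger value = stronger resistance felt by the particle
    cooling: float    # 0.0 = never fuses, 1.0 = fuses completely

# First attribute set: the white paper particles
paper = FluidAttributes(color=1.0, density=500.0, viscosity=20.0, cooling=0.0)
# Second attribute set: the black ink particles
ink = FluidAttributes(color=0.0, density=1000.0, viscosity=10.0, cooling=1.0)
```

With a record like this, the later hydrodynamic analysis only needs to read and blend numeric fields rather than track named constants per particle.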
In a specific implementation, in order to enhance the effect of the fluid halation image including ink, the embodiment of the application proposes that two geometric models, such as a virtual paper model and a fluid ink model, can be created first, so as to simulate "paper" in an actual application scene by using the virtual paper model filled with first fluid particles, and simulate fluid (such as ink) dropped on the "paper" by using the fluid ink model filled with second fluid particles, so as to facilitate subsequent fluid dynamics analysis thereof, and simulate and render an effect animation of halation of the fluid ink on the paper.
In this regard, in order to display the ink-bearing fluid intuitively, the embodiments of the present application propose that the animation effect of ink spreading on different carriers (such as paper or other planes) is achieved by placing the ink fluid model, filled with the second fluid particles, over the virtual paper model rather than against its side or rear face, which would impair the display of the effect. The scheme can be realized with computer graphics software (such as Houdini) by building a node workflow in the software, achieving the visual effect of colored fluid halation including ink. This solves the problems of inefficient live-action ink shooting and unsatisfactory three-dimensional software simulation, as well as the inefficiency of post-production compositing and its poor expression of artistic intent, thereby obtaining a fluid halation animation with a better picture effect.
In one embodiment, the step includes: acquiring an initial virtual paper model and an initial ink fluid model, wherein the model heights of the initial virtual paper model and the initial ink fluid model meet a preset height condition; covering the initial virtual paper model with the initial ink fluid model along the direction of the model height; acquiring first fluid particles having a first attribute and filling them into the initial virtual paper model to obtain the virtual paper model; and acquiring second fluid particles having a second attribute and filling them into the initial ink fluid model to obtain the ink fluid model.
Wherein the first attribute comprises one or more of a first color, a first density, a first viscosity value, and a first cooling value, and the second attribute comprises one or more of a second color, a second density, a second viscosity value, and a second cooling value.
The model height can be a numerical value of the geometric model on a space Z axis, and the numerical unit of the model height can be the number of voxels or measurement units such as millimeters, centimeters and the like; the preset height condition may be preset by taking the actual application scene as a reference, and the height range accords with the actual situation of the scene, for example, "0-0.02", and the specific range is not limited in the embodiment of the present application.
In a specific implementation, to obtain the virtual paper model, the terminal 102 may obtain, by using "houdini" software or other computer graphics software, a square geometric body with a model height satisfying a preset height condition, as an initial virtual paper model. Then, taking "houdini" software as an example, the input square geometry can be converted into fluid particles by taking the square geometry as a boundary through the "flip source" node, or the input square geometry is filled with fluid particles, and finally the fluid particles in the square geometry are adjusted based on the preset attribute through the "attribvop" node so as to become the first fluid particles with the first attribute.
Of course, the terminal 102 may also adjust the first attribute of the first fluid particle by the specific node according to the user instruction based on the actual service requirement, that is, the color, density, viscosity value and cooling value described above may be determined by one or more adjustments, and the specific adjustment times are not limited in the embodiments of the present application.
Further, the terminal 102 may also acquire a sphere by means of "houdini" software or other computer graphics software to deform the sphere by using a "mountain" node as an initial ink fluid model, adjust the deformed model height by using an "attribrange" node to match the model height of the virtual paper model, convert the deformed model into fluid particles by using a "flip source" node, or fill the deformed model with fluid particles, and finally adjust the fluid particles in the deformed model by using an "attribvop" node based on a preset attribute to form second fluid particles with a second attribute.
It should be noted that, since the virtual paper model is used to simulate paper or other planes, the height of the model can be set as small as possible, for example, 0.01 cm, that is, the model can be as close to the actual application scene as possible. Similarly, the ink fluid model is used to simulate the ink emitter, so the height of the model can be set as small as possible, for example, greater than 0 and less than or equal to 0.02, but the practical values are not limited in this embodiment.
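The filling step described in this embodiment — take a thin box whose height satisfies the preset height condition and populate it with particles — can be sketched outside of Houdini as follows. This is a hedged illustration only: the function name, the grid-based fill, and the `max_height=0.02` default are assumptions standing in for the "flip source" node's behavior, not the patent's actual procedure.

```python
def fill_box_with_particles(size_x, size_y, height, spacing, max_height=0.02):
    """Fill a thin axis-aligned box (e.g. the initial virtual paper model)
    with fluid particles on a regular grid.

    Enforces the preset height condition described above: the model height
    must be greater than 0 and at most max_height.
    """
    if not 0 < height <= max_height:
        raise ValueError("model height must satisfy the preset height condition")
    particles = []
    nx = round(size_x / spacing)
    ny = round(size_y / spacing)
    for i in range(nx):
        for j in range(ny):
            # one particle per grid cell; z sits at the middle of the thin box
            particles.append((i * spacing, j * spacing, height / 2))
    return particles

# A 1.0 x 1.0 "paper" of height 0.01 at spacing 0.1 yields a 10 x 10 grid.
paper_particles = fill_box_with_particles(1.0, 1.0, 0.01, 0.1)
```

In Houdini itself this corresponds to feeding the geometry through the "flip source" node and then adjusting the resulting particles with an "attribvop" node, as described above.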
S202, performing hydrodynamic analysis on the first fluid particles and the second fluid particles based on preset velocity field information, to obtain third fluid particles having a third attribute.
The velocity field information (velocity field) is a physical field composed of the velocity vector at each point at each time. Taking fluid as an example, the velocity field refers to the vector velocity distribution across the fluid flow, i.e. the distribution of fluid velocity vectors at all points in space at the same moment.
In a specific implementation, after the virtual paper model and the ink fluid model are acquired, the terminal 102 may use the dynamics module "DOP" in the Houdini software, inputting the virtual paper model, the ink fluid model, and the velocity field information into it, so that the dynamics module applies a Lagrangian equation set to perform the hydrodynamic solution analysis on the first and second fluid particles. The analysis covers how the second fluid particles, driven by the velocity field information, move in the velocity field direction and fuse with the first fluid particles, whereby the color and density of the second fluid particles are affected by the fusion and the particles are converted into third fluid particles.
For example, referring to FIG. 3, the screen of the terminal 102 shows the front view of a virtual paper model, which presents a rectangle; the first fluid particles filling it appear white. The front view of the ink fluid model is circular, the second fluid particles filling it appear black, and the ink fluid model covers the virtual paper model. At this point the virtual paper model can be regarded as a piece of paper and the ink fluid model as a drop of ink, and the arrow direction represents the velocity vector direction of the velocity field information. The dynamics module "DOP" can simulate the diffusion of the second fluid particles along the velocity field direction at the velocity field speed, so that the second fluid particles fuse with the surrounding first fluid particles, generating third fluid particles in a fused state that display gray.
It should be noted that the velocity field information may be set manually according to service requirements. As shown in FIG. 4, continuous velocity field information may be smoothed to influence the movement of the second fluid particles and thereby enhance the fluid halation effect.
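The driving of particles by a velocity field, as in FIG. 4, can be sketched with a simple explicit-Euler advection step. This is an illustrative toy, not the DOP solver's method: the radial field below is an assumed stand-in for whatever velocity field information an artist would author, and the function names are invented here.

```python
def radial_field(x, y, cx=0.0, cy=0.0, strength=1.0):
    """A toy velocity field pushing particles outward from (cx, cy),
    mimicking ink spreading from the drop centre (illustrative only)."""
    dx, dy = x - cx, y - cy
    r = max((dx * dx + dy * dy) ** 0.5, 1e-9)  # avoid division by zero
    return strength * dx / r, strength * dy / r

def advect(positions, field, dt):
    """Move each particle one explicit-Euler step along the velocity field."""
    return [(x + field(x, y)[0] * dt, y + field(x, y)[1] * dt)
            for x, y in positions]

# Two ink particles drift outward from the origin over one half-second step.
moved = advect([(1.0, 0.0), (0.0, 2.0)], radial_field, 0.5)
```

A production solver would integrate this together with viscosity and collision boundaries (cf. FIG. 5), but the core "move along the field" step is the same shape.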
In one embodiment, the step includes: carrying out the hydrodynamic analysis on the first fluid particles and the second fluid particles based on the preset velocity field information so as to obtain the particle fusion attribute value of the second fluid particles and the first fluid particles in a fused state; and acquiring, based on the particle fusion attribute value, the attribute information required for rendering each frame of animation image to obtain the third fluid particles with the third attribute.
The particle fusion attribute value may include a particle fusion color value and a particle fusion density value, mainly because the fusion of fluid particles with different attributes produces fusion transformations in two aspects: density and color.
In a specific implementation, the fusion of the first fluid particles and the second fluid particles produces fusion in both color and density, and the terminal 102 can calculate the fusion result of color and density under the limitation of the second cooling value through the dynamics module "DOP" (Dynamics Operations, the dynamics editing module), so as to obtain the particle fusion attribute value. The function of the dynamics module "DOP" in the Houdini software is to provide dynamics nodes for setting the conditions and rules of the dynamics simulation. In this embodiment of the present application, the dynamics module "DOP" is configured to obtain the particle fusion attribute value, mainly by analyzing the second color and the second density of the second fluid particles and how the color and density are neutralized under the influence of the color average value and the density average value of the surrounding first fluid particles. In order to render the fluid halation animation, each frame of animation image should be analyzed separately, that is, the particle fusion attribute values corresponding to the N frames of images contained in the fluid halation animation are obtained, so as to obtain the particle state of each frame of animation image, from which the third fluid particles with the third attribute can be obtained. The particle fusion attribute value acquisition step and the third fluid particle acquisition step involved in this embodiment are described in detail below.
In one embodiment, performing the hydrodynamic analysis on the first fluid particles and the second fluid particles based on the preset velocity field information to obtain the particle fusion attribute value of the second fluid particles and the first fluid particles in the fused state includes: acquiring a fusion critical value corresponding to each frame of animation image based on a preset cooling decrement value; acquiring a particle fusion color value and/or a particle fusion density value after the second fluid particles in each frame of animation image are fused with the first fluid particles to the fusion critical value under the influence of the velocity field information; and taking the particle fusion color value and/or the particle fusion density value as the particle fusion attribute value.
The cooling decrement value may be the per-frame decreasing step of the second cooling value. For example, if the cooling decrement value is "0.01" and the input second cooling value is "1", the cooling value participating in the particle fusion attribute value analysis of the first frame of animation image is "1", that of the second frame is "1-0.01", that of the third frame is "1-0.01-0.01", and so on.
The fusion critical value may be the difference between the second cooling value and the cooling decrement value from the second frame of animation image onward; for example, when the second cooling value is "1" and the cooling decrement value is "0.01", the fusion critical value is "0.99". The fusion critical value may also apply a certain multiple "M" to the second cooling value before subtracting the cooling decrement value; for example, with "M=0.5", a second cooling value of "1" and a cooling decrement value of "0.01", the fusion critical value is "0.49". It can be understood that for the first frame of animation image the fusion critical value equals "M" times the second cooling value: for example, if the second cooling value is "1", the fusion critical value corresponding to the first frame of animation image is "1"; for another example, if the second cooling value is "1" and "M=0.5", the fusion critical value corresponding to the first frame of animation image is "0.5".
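The per-frame cooling schedule and fusion critical value described above can be sketched as follows. The function names, and the way the multiplier "M" enters the formula, are interpretations chosen to reproduce the worked numbers in this section:

```python
# Illustrative sketch of the cooling schedule and fusion critical value.
def cooling_value(frame, second_cooling=1.0, decrement=0.01):
    """Cooling value used for frame N: one decrement step per elapsed frame."""
    return second_cooling - decrement * (frame - 1)

def fusion_threshold(frame, second_cooling=1.0, decrement=0.01, m=1.0):
    """Fusion critical value: M times the second cooling value, minus the elapsed steps."""
    return m * second_cooling - decrement * (frame - 1)

print(cooling_value(3))            # third frame: 1 - 0.01 - 0.01
print(fusion_threshold(1, m=0.5))  # first frame with M = 0.5
print(fusion_threshold(2, m=0.5))  # second frame with M = 0.5
```

With these definitions, `fusion_threshold(2)` gives the "0.99" example and `fusion_threshold(2, m=0.5)` gives the "0.49" example from the text.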
In a specific implementation, to obtain the particle fusion attribute value that the second fluid particles and the first fluid particles in the fused state can present in each frame of animation image, the terminal 102 first determines the fusion critical value corresponding to each frame of animation image, then analyzes the particle fusion color value and/or particle fusion density value after the second fluid particles fuse with the first fluid particles to the fusion critical value under the influence of the motion speed and motion direction contained in the velocity field information, thereby obtaining the particle fusion attribute value. That is, the fusion critical value controls the mixing intensity of the fluid particles.
For example, if the second cooling value is "1" and "M=0.5", the fusion critical value corresponding to the first frame of animation image is "0.5"; if the first color of the first fluid particles is "1" (representing white) and the second color of the second fluid particles is "0" (representing black), the particle fusion color value should be "0.5".
For example, if the second cooling value is "1" and "M=0.5", the fusion critical value corresponding to the first frame of animation image is "0.5"; if the first density of the first fluid particles is "500" and the second density of the second fluid particles is "1000", the particle fusion density value should be "500".
For another example, if the second cooling value is "1", "M=0.5", and the cooling decrement value is "0.01", the fusion critical value corresponding to the second frame of animation image is "0.49"; with the first color of the first fluid particles being "1" (representing white) and the second color of the second fluid particles being "0" (representing black), the particle fusion color value is "0.49". It can be understood that during the rendering of the fluid halation animation, the terminal 102 fuses the second fluid particles faster in the second frame of animation image than in the first frame.
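The color examples above are consistent with treating the particle fusion color value as a linear blend from the second (ink) color toward the first (paper) color, weighted by the frame's fusion critical value. The sketch below reproduces those numbers under that assumption; this blend rule is an interpretation, not the patent's exact formula (the density example does not follow the same blend, so it is omitted here):

```python
# Assumed blend rule reproducing the worked color examples.
def fused_color(first_color, second_color, threshold):
    """Blend from the second color toward the first, weighted by the fusion threshold."""
    return second_color + threshold * (first_color - second_color)

# Frame 1, threshold 0.5: white paper (1) and black ink (0) blend to mid-gray.
print(fused_color(1.0, 0.0, 0.5))   # 0.5
# Frame 2, threshold 0.49: the gray value tracks the per-frame threshold.
print(fused_color(1.0, 0.0, 0.49))  # 0.49
```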
In one embodiment, acquiring, based on the particle fusion attribute value, the attribute information required for rendering each frame of animation image to obtain the third fluid particles with the third attribute includes: acquiring the particle fusion attribute value, and the second viscosity value and the second cooling value included in the second attribute, as the attribute information required for rendering each frame of animation image; and taking the attribute information as the third attribute to replace the second attribute of the second fluid particles, so as to obtain the third fluid particles with the third attribute.
In a specific implementation, in order to obtain the third fluid particles corresponding to each frame of animation image, the terminal 102 needs to combine, in addition to the particle fusion attribute value, the second viscosity value and the second cooling value included in the second attribute, so as to determine the third attribute and then obtain the third fluid particles in the fused state.
For example, the cooling value of the third fluid particles is the second cooling value "1", and the viscosity value of the third fluid particles is the second viscosity value "10".
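Assembling the third attribute as described — fused color and density from the analysis, with the second viscosity and cooling values carried over unchanged — might look like the following sketch; the dictionary layout and field names are illustrative assumptions:

```python
# Hypothetical sketch of building the third attribute for one frame.
def third_attribute(fusion_color, fusion_density, second_attr):
    """Combine the particle fusion attribute values with the retained second-attribute fields."""
    return {
        "color": fusion_color,                  # particle fusion color value
        "density": fusion_density,              # particle fusion density value
        "viscosity": second_attr["viscosity"],  # carried over from the second attribute
        "cooling": second_attr["cooling"],      # carried over from the second attribute
    }

# Second attribute of the ink particles, using the section's example values.
second = {"color": 0.0, "density": 1000.0, "viscosity": 10.0, "cooling": 1.0}
attr = third_attribute(0.5, 500.0, second)
print(attr)
```

Replacing each second fluid particle's attribute with such a record yields the third fluid particles for that frame.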
S203, rendering and displaying the fluid halation animation of the ink fluid model halating the virtual paper model based on the first fluid particles, the second fluid particles and the third fluid particles.
The fluid halation animation is a special effect presented in the form of an animation.
In a specific implementation, after acquiring the particle attributes corresponding to each frame of animation image required for rendering the fluid halation animation, the terminal 102 may render the fluid halation animation, such as an ink halation effect animation, frame by frame based on the first fluid particles, the second fluid particles and the third fluid particles, and display the fluid halation animation through the interface.
In one embodiment, the step includes: acquiring collision boundary information; if the collision boundary information is not empty, simulating the motion process in which the first fluid particles and the second fluid particles fuse with each other under the influence of the collision boundary information to obtain the third fluid particle motion process, and rendering a fluid halation animation of the ink fluid model halating the virtual paper model; and displaying the fluid halation animation of the ink fluid model halating the virtual paper model.
The collision boundary information may be the boundary coordinate information within which fluid particles are effectively fused, and the boundary coordinate information may be three-dimensional coordinates.
In a specific implementation, the fluid halation animation scheme described in the above embodiment actually takes the case where no collision boundary information is set as an example. In an actual application scenario, some service requirements call for an animation that presents a fluid halation boundary. Therefore, to improve the quality of the fluid halation animation, the terminal 102 may obtain collision boundary information in the animation rendering stage and, once it is obtained, simulate the particle motion effect of the first fluid particles and the second fluid particles encountering the collision boundary during fusion, i.e., presenting a rebound tendency. In this way, the first fluid particles and the second fluid particles fuse with each other within the boundary to obtain the third fluid particles, so that the fluid halation animation is rendered.
For example, referring to fig. 5, taking the collision boundary information as the "baishen" characters as an example, the second fluid particles presenting black and the first fluid particles presenting white are halation-fused within the "baishen" boundary, and the resulting third fluid particles are confined within the "baishen" boundary.
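The rebound tendency at a collision boundary can be sketched with a simple reflect-and-clamp step. The rectangular boundary and all names here are stand-ins for the actual glyph-shaped boundary of fig. 5, chosen only to illustrate the confinement behavior:

```python
# Hypothetical sketch: a particle whose next position leaves the boundary is
# clamped back onto it and its velocity component is reversed (rebound).
def step_with_boundary(pos, vel, dt, xmin, xmax, ymin, ymax):
    x, y = pos[0] + vel[0] * dt, pos[1] + vel[1] * dt
    vx, vy = vel
    if not (xmin <= x <= xmax):           # reflect off a vertical wall
        x = max(xmin, min(x, xmax))
        vx = -vx
    if not (ymin <= y <= ymax):           # reflect off a horizontal wall
        y = max(ymin, min(y, ymax))
        vy = -vy
    return (x, y), (vx, vy)

pos, vel = (0.9, 0.0), (1.0, 0.0)         # particle near the right wall, moving right
pos, vel = step_with_boundary(pos, vel, 0.2, -1.0, 1.0, -1.0, 1.0)
print(pos, vel)                           # clamped to the boundary, velocity reversed
```

In the patent's pipeline this confinement is handled by the simulation against the VDB collision boundary rather than by explicit clamping.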
In one embodiment, acquiring collision boundary information includes: obtaining a target map; the target map comprises characters and/or patterns; converting the target map into a preset screen coordinate system so as to enable each screen coordinate in the screen coordinate system to be associated with corresponding pixel information, and obtaining coordinate pixel information; acquiring volume coordinate information of characters and/or patterns based on the coordinate pixel information; and taking the difference between the volume coordinate information and the volume cloud coordinate of the virtual paper model as collision boundary information.
The target map may be a map including any graphic or word, and as shown in fig. 5, the target map may be a map including a "Baishen" word.
The screen coordinate system is a coordinate system based on the entire hardware screen and is related to the screen resolution. The lower left corner of the screen is (0, 0) and the upper right corner is (screen.width, screen.height), where screen.width represents the screen width and screen.height represents the screen height.
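Under this convention, mapping a normalized UV position to screen coordinates is a direct scaling; a minimal sketch (function name is illustrative):

```python
# Lower-left corner is (0, 0); upper-right is (width, height).
def uv_to_screen(u, v, width, height):
    """Map a normalized (u, v) in [0, 1]^2 to screen coordinates."""
    return (u * width, v * height)

print(uv_to_screen(0.5, 0.5, 1920, 1080))  # (960.0, 540.0), the screen center
```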
The volume cloud may be a volume storing signed distances, and the volume cloud coordinates may be the model coordinates associated with the volume cloud.
In a specific implementation, to obtain the collision boundary information, the terminal 102 may first obtain the target map. If the target map contains characters and/or patterns, the target map may be converted into the preset screen coordinate system through the "uvtexture" and "attribfrommap1" nodes of the Houdini software, so that a UV set is obtained on a plane. Taking the color information of the map as a reference, the UV set maps the character and/or pattern pixels one by one to the corresponding spatial positions on the plane, thereby obtaining the coordinate pixel information.
Further, the terminal 102 may analyze the coordinate pixel information through the "delete" node to remove unnecessary coordinate pixel information, such as coordinates with black pixels, and thereby obtain the target coordinate information of the characters and/or patterns; the geometry is then extruded to give it thickness, yielding the volume coordinate information. Finally, the volume coordinate information is analyzed through the "vdbfrompolygons" node, so that the geometry of the characters and/or patterns is converted into a "VDB volume" (i.e., a volume cloud in the "VDB" format); the volume cloud coordinates of the virtual paper model are then obtained, and the two volume clouds in the VDB format are subtracted, yielding the collision boundary information.
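Conceptually, the final subtraction of the two volume clouds behaves like a cell-wise boolean difference between volumes. The sketch below uses plain occupancy grids as a stand-in for signed-distance VDB grids, so it illustrates the set operation rather than the actual "vdbfrompolygons" pipeline; the grids and operand order are illustrative:

```python
# Stand-in for the VDB subtraction: keep cells occupied in vol_a but not in vol_b.
def volume_difference(vol_a, vol_b):
    """Cell-wise boolean difference of two occupancy grids."""
    return [[a and not b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(vol_a, vol_b)]

text_volume  = [[True, True], [False, True]]    # glyph volume (illustrative 2x2 grid)
paper_volume = [[True, False], [False, False]]  # paper volume cloud (illustrative)
diff = volume_difference(text_volume, paper_volume)
print(diff)  # [[False, True], [False, True]]
```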
In the above embodiment, the terminal obtains the virtual paper model filled with the first fluid particles with the first attribute and the ink fluid model covering the virtual paper model, performs the hydrodynamic analysis on the first fluid particles and the second fluid particles based on the preset velocity field information to obtain the third fluid particles with the third attribute, and finally renders, based on the first fluid particles, the second fluid particles and the third fluid particles, the fluid halation animation displaying the ink fluid model halating the virtual paper model. The simulation of the ink liquid state is thus completed at the particle level, realizing the animation special effect of ink halating paper, without real shooting with a camera and without complex processing such as masking and splicing based on a large amount of material; the loss of continuity in the picture effect is avoided, and the quality of the fluid halation animation is further improved.
It should be understood that, although the steps in the flowchart of fig. 2 are shown in the sequence indicated by the arrows, the steps are not necessarily performed in that sequence. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times; the order in which these sub-steps or stages are performed is not necessarily sequential, and they may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
In order to better implement the animation rendering method provided in the embodiment of the present application, on the basis of the animation rendering method provided in the embodiment of the present application, an animation rendering device is further provided in the embodiment of the present application, as shown in fig. 6, where the animation rendering device 600 includes:
a model acquisition module 610 for acquiring a virtual paper model and an ink fluid model overlaid on the virtual paper model; wherein the virtual paper model is filled with first fluid particles with first attributes, and the ink fluid model is filled with second fluid particles with second attributes;
the fluid analysis module 620 is configured to perform a fluid dynamic analysis on the first fluid particle and the second fluid particle based on the preset velocity field information, so as to obtain a third fluid particle with a third attribute;
the animation rendering module 630 is configured to render and display, based on the first fluid particles, the second fluid particles, and the third fluid particles, a fluid halation animation of the ink fluid model halating the virtual paper model.
In one embodiment, the model acquisition module 610 is further configured to acquire an initial virtual paper model and an initial ink fluid model with a model height that meets a preset height condition; the initial ink fluid model is covered on the initial virtual paper model along the direction of the model height; acquire first fluid particles with first attributes, so as to fill the first fluid particles into the initial virtual paper model to obtain the virtual paper model; and acquire second fluid particles with second attributes, so as to fill the second fluid particles into the initial ink fluid model to obtain the ink fluid model; wherein the first attribute comprises one or more of a first color, a first density, a first viscosity value, and a first cooling value, and the second attribute comprises one or more of a second color, a second density, a second viscosity value, and a second cooling value.
In one embodiment, the fluid analysis module 620 is further configured to perform a fluid dynamic analysis on the first fluid particle and the second fluid particle based on the preset velocity field information to obtain a third fluid particle with a third attribute, including: carrying out hydrodynamic analysis on the first fluid particles and the second fluid particles based on preset speed field information so as to obtain particle fusion attribute values of the second fluid particles and the first fluid particles in a fusion state; and acquiring attribute information required for rendering each frame of animation image based on the particle fusion attribute value to obtain third fluid particles with third attributes.
In one embodiment, the fluid analysis module 620 is further configured to obtain a fusion threshold value corresponding to each frame of animated image based on a preset cooling decrement value; acquiring a particle fusion color value and/or a particle fusion density value after the second fluid particles in each frame of animation image are fused with the first fluid particles to reach a fusion critical value under the influence of speed field information; and taking the particle fusion color value and/or the particle fusion density value as the particle fusion attribute value.
In one embodiment, the fluid analysis module 620 is further configured to obtain the particle fusion attribute value, and the second viscosity value and the second cooling value included in the second attribute, as the attribute information required for rendering each frame of animation image; and take the attribute information as the third attribute to replace the second attribute of the second fluid particles, so as to obtain the third fluid particles with the third attribute.
In one embodiment, the animation rendering module 630 is further configured to obtain collision boundary information; if the collision boundary information is not empty, simulate the motion process in which the first fluid particles and the second fluid particles fuse with each other under the influence of the collision boundary information to obtain the third fluid particle motion process, and render a fluid halation animation of the ink fluid model halating the virtual paper model; and display the fluid halation animation of the ink fluid model halating the virtual paper model.
In one embodiment, animation rendering module 630 is further configured to obtain a target map; the target map comprises characters and/or patterns; converting the target map into a preset screen coordinate system so as to enable each screen coordinate in the screen coordinate system to be associated with corresponding pixel information, and obtaining coordinate pixel information; acquiring volume coordinate information of characters and/or patterns based on the coordinate pixel information; and taking the difference between the volume coordinate information and the volume cloud coordinate of the virtual paper model as collision boundary information.
In the above embodiment, by acquiring the virtual paper model filled with the first fluid particles with the first attribute and the ink fluid model that covers the virtual paper model and is filled with the second fluid particles with the second attribute, the first fluid particles and the second fluid particles can be subjected to the hydrodynamic analysis based on the preset velocity field information to obtain the third fluid particles with the third attribute; finally, based on the first fluid particles, the second fluid particles and the third fluid particles, the fluid halation animation of the ink fluid model halating the virtual paper model is rendered and displayed. The simulation of the ink liquid state is thus completed at the particle level, realizing the animation special effect of ink halating paper and further improving the quality of the fluid halation animation.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for communication with an external terminal in a wired or wireless manner, which may be implemented by wireless network communication technology (Wireless Fidelity, WI-FI), carrier network, near field communication (Near Field Communication, NFC) or other technologies. The computer program is executed by a processor to implement an animation rendering method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 7 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring a virtual paper model and an ink fluid model covered on the virtual paper model; wherein, the virtual paper model is filled with first fluid particles with first attributes, and the ink fluid model is filled with second fluid particles with second attributes;
based on preset speed field information, carrying out hydrodynamic analysis on the first fluid particles and the second fluid particles to obtain third fluid particles with third properties;
based on the first fluid particles, the second fluid particles, and the third fluid particles, rendering and displaying a fluid halation animation of the ink fluid model halating the virtual paper model.
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring an initial virtual paper model and an initial ink fluid model, wherein the model height of the initial virtual paper model and the initial ink fluid model meets the preset height condition; the initial ink fluid model is covered on the initial virtual paper model according to the direction of the model height; acquiring first fluid particles with first attributes, so as to fill the first fluid particles into an initial virtual paper model, and obtaining the virtual paper model; obtaining second fluid particles with second attributes, so as to fill the second fluid particles into the initial ink fluid model to obtain the ink fluid model; wherein the first attribute comprises one or more of a first color, a first density, a first viscosity value, and a first cooling value, and the second attribute comprises one or more of a second color, a second density, a second viscosity value, and a second cooling value.
In one embodiment, the processor when executing the computer program further performs the steps of:
based on preset velocity field information, performing hydrodynamic analysis on the first fluid particles and the second fluid particles to obtain third fluid particles with third properties, including: carrying out hydrodynamic analysis on the first fluid particles and the second fluid particles based on preset speed field information so as to obtain particle fusion attribute values of the second fluid particles and the first fluid particles in a fusion state; and acquiring attribute information required for rendering each frame of animation image based on the particle fusion attribute value to obtain third fluid particles with third attributes.
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring a fusion critical value corresponding to each frame of animation image based on a preset cooling decrement value; acquiring a particle fusion color value and/or a particle fusion density value after the second fluid particles in each frame of animation image are fused with the first fluid particles to the fusion critical value under the influence of the velocity field information; and taking the particle fusion color value and/or the particle fusion density value as the particle fusion attribute value.
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring the particle fusion attribute value, and the second viscosity value and the second cooling value included in the second attribute, as the attribute information required for rendering each frame of animation image; and taking the attribute information as the third attribute to replace the second attribute of the second fluid particles, so as to obtain the third fluid particles with the third attribute.
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring collision boundary information; if the collision boundary information is not empty, simulating the motion process in which the first fluid particles and the second fluid particles fuse with each other under the influence of the collision boundary information to obtain the third fluid particle motion process, and rendering a fluid halation animation of the ink fluid model halating the virtual paper model; and displaying the fluid halation animation of the ink fluid model halating the virtual paper model.
In one embodiment, the processor when executing the computer program further performs the steps of:
obtaining a target map; the target map comprises characters and/or patterns; converting the target map into a preset screen coordinate system so as to enable each screen coordinate in the screen coordinate system to be associated with corresponding pixel information, and obtaining coordinate pixel information; acquiring volume coordinate information of characters and/or patterns based on the coordinate pixel information; and taking the difference between the volume coordinate information and the volume cloud coordinate of the virtual paper model as collision boundary information.
In the above embodiment, by acquiring the virtual paper model filled with the first fluid particles with the first attribute and the ink fluid model that covers the virtual paper model and is filled with the second fluid particles with the second attribute, the first fluid particles and the second fluid particles can be subjected to the hydrodynamic analysis based on the preset velocity field information to obtain the third fluid particles with the third attribute; finally, based on the first fluid particles, the second fluid particles and the third fluid particles, the fluid halation animation of the ink fluid model halating the virtual paper model is rendered and displayed. The simulation of the ink liquid state is thus completed at the particle level, realizing the animation special effect of ink halating paper and further improving the quality of the fluid halation animation.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a virtual paper model and an ink fluid model covered on the virtual paper model; wherein, the virtual paper model is filled with first fluid particles with first attributes, and the ink fluid model is filled with second fluid particles with second attributes;
based on preset speed field information, carrying out hydrodynamic analysis on the first fluid particles and the second fluid particles to obtain third fluid particles with third properties;
based on the first fluid particles, the second fluid particles, and the third fluid particles, rendering and displaying a fluid halation animation of the ink fluid model halating the virtual paper model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring an initial virtual paper model and an initial ink fluid model, wherein the model height of the initial virtual paper model and the initial ink fluid model meets the preset height condition; the initial ink fluid model is covered on the initial virtual paper model according to the direction of the model height; acquiring first fluid particles with first attributes, so as to fill the first fluid particles into an initial virtual paper model, and obtaining the virtual paper model; obtaining second fluid particles with second attributes, so as to fill the second fluid particles into the initial ink fluid model to obtain the ink fluid model; wherein the first attribute comprises one or more of a first color, a first density, a first viscosity value, and a first cooling value, and the second attribute comprises one or more of a second color, a second density, a second viscosity value, and a second cooling value.
In one embodiment, the computer program when executed by the processor further performs the steps of:
based on preset velocity field information, performing hydrodynamic analysis on the first fluid particles and the second fluid particles to obtain third fluid particles with third properties, including: carrying out hydrodynamic analysis on the first fluid particles and the second fluid particles based on preset speed field information so as to obtain particle fusion attribute values of the second fluid particles and the first fluid particles in a fusion state; and acquiring attribute information required for rendering each frame of animation image based on the particle fusion attribute value to obtain third fluid particles with third attributes.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a fusion critical value corresponding to each frame of the animation image based on a preset cooling reduction value; acquiring a particle fusion color value and/or a particle fusion density value after the second fluid particles in each frame of the animation image, under the influence of the velocity field information, fuse with the first fluid particles to reach the fusion critical value; and taking the particle fusion color value and/or the particle fusion density value as the particle fusion attribute value.
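One plausible reading of this cooling-driven fusion is a per-frame threshold that decays by the preset cooling reduction value, with the color/density blend weight tied to it. The linear scheme below is an assumption for illustration, not the patent's formula:

```python
def fuse_attributes(ink, paper, frame, cooling_drop, threshold0=1.0):
    """Blend ink and paper color/density once the per-frame fusion
    threshold -- lowered by the preset cooling reduction value each
    frame -- is reached. Hypothetical linear blend."""
    threshold = max(threshold0 - frame * cooling_drop, 0.0)
    w = 1.0 - threshold          # blend weight grows as the threshold decays
    fused_color = tuple(w * pc + (1.0 - w) * ic
                        for ic, pc in zip(ink["color"], paper["color"]))
    fused_density = w * paper["density"] + (1.0 - w) * ink["density"]
    return {"color": fused_color, "density": fused_density}

ink = {"color": (0.0, 0.0, 0.0), "density": 2.0}
paper = {"color": (1.0, 1.0, 1.0), "density": 1.0}
frame5 = fuse_attributes(ink, paper, frame=5, cooling_drop=0.1)
# At frame 5 the threshold has dropped to 0.5: a 50/50 grey, density 1.5.
```

The point of the per-frame threshold is that ink darkens the paper gradually, one frame at a time, rather than snapping to the fully fused color.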
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring the particle fusion attribute value together with the second viscosity value and the second cooling value included in the second attribute as the attribute information required for rendering each frame of the animation image; and taking the attribute information as the third attribute to replace the second attribute of the second fluid particles, thereby obtaining the third fluid particles with the third attribute.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring collision boundary information; if the collision boundary information is not empty, simulating the motion process in which the first fluid particles and the second fluid particles fuse with each other under the influence of the collision boundary information to obtain the motion process of the third fluid particles, and rendering a fluid halation animation of the ink fluid model halation of the virtual paper model; and displaying the fluid halation animation of the ink fluid model halation of the virtual paper model.
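A toy version of collision-constrained motion might cancel any particle move that would leave the allowed region. The stick-at-the-boundary rule below is a hypothetical stand-in for the patent's collision handling, in 2-D for brevity:

```python
import numpy as np

def constrained_step(positions, velocity_field, dt, allowed):
    """Advance particles but cancel any move that would leave the allowed
    (non-pattern) voxels -- a simple stick-at-the-boundary rule."""
    new = positions + dt * velocity_field(positions)
    for i, p in enumerate(new):
        if (int(p[0]), int(p[1])) not in allowed:
            new[i] = positions[i]     # blocked by the collision boundary
    return new

allowed = {(0, 0), (0, 1)}            # voxels the fluid may occupy
pts = np.array([[0.5, 0.5], [0.5, 0.5]])
field = lambda p: np.array([[0.0, 1.0], [1.0, 0.0]])   # right vs. down
moved = constrained_step(pts, field, dt=1.0, allowed=allowed)
# The first particle enters voxel (0, 1); the second would enter (1, 0),
# which lies outside the boundary, so it stays in place.
```

This is what makes the halation stop at character edges: moves into blocked voxels are simply rejected, so ink pools along the pattern outline.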
In one embodiment, the computer program when executed by the processor further performs the steps of:
obtaining a target map, wherein the target map comprises characters and/or patterns; converting the target map into a preset screen coordinate system so that each screen coordinate in the screen coordinate system is associated with its corresponding pixel information, obtaining coordinate pixel information; acquiring volume coordinate information of the characters and/or patterns based on the coordinate pixel information; and taking the difference between the volume coordinate information and the volume cloud coordinates of the virtual paper model as the collision boundary information.
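The map-to-boundary conversion amounts to rasterizing the characters/patterns into coordinates and subtracting them from the paper model's coordinates. The 2-D sketch below illustrates the set difference; the function name and shapes are assumptions, and the patent works in volume-cloud coordinates rather than a flat grid.

```python
import numpy as np

def collision_boundary_from_map(mask, paper_shape):
    """Lift the character/pattern pixels of a target map into voxel
    coordinates on the paper slab, then take the difference against the
    paper model's voxels; the remainder is the collision boundary."""
    h, w = mask.shape
    pattern_voxels = {(y, x) for y in range(h) for x in range(w) if mask[y, x]}
    paper_voxels = {(y, x) for y in range(paper_shape[0])
                    for x in range(paper_shape[1])}
    return paper_voxels - pattern_voxels

mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                 # a 2x2 "pattern" in the middle
boundary = collision_boundary_from_map(mask, (4, 4))
# 16 paper voxels minus the 4 pattern voxels leaves 12 boundary voxels.
```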
In the above embodiments, a virtual paper model filled with first fluid particles having the first attribute is acquired, together with an ink fluid model that covers the virtual paper model and is filled with second fluid particles having the second attribute. Hydrodynamic analysis is then performed on the first fluid particles and the second fluid particles based on the preset velocity field information to obtain third fluid particles with the third attribute. Finally, based on the first, second, and third fluid particles, a fluid halation animation of the ink fluid model halation of the virtual paper model is rendered and displayed. The simulation of the ink's liquid state is thus carried out at the particle level, realizing the animated special effect of ink halation on paper and improving the visual quality of the fluid halation animation.
Those skilled in the art will appreciate that implementing all or part of the above-described embodiment methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein can include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, or the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory. By way of illustration, and not limitation, RAM can take many forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The foregoing describes in detail the animation rendering method, apparatus, computer device, and computer-readable storage medium provided by the embodiments of the present application. Specific examples are used herein to illustrate the principles and implementations of the present invention, and the above descriptions are only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope in light of the ideas of the present invention; therefore, the content of this description should not be construed as limiting the present invention.

Claims (10)

1. An animation rendering method, comprising:
acquiring a virtual paper model and an ink fluid model covering the virtual paper model; wherein the virtual paper model is filled with first fluid particles with a first attribute, and the ink fluid model is filled with second fluid particles with a second attribute;
based on preset velocity field information, performing hydrodynamic analysis on the first fluid particles and the second fluid particles to obtain third fluid particles with a third attribute; and
rendering and displaying a fluid halation animation of the ink fluid model halation of the virtual paper model based on the first fluid particles, the second fluid particles, and the third fluid particles.
2. The method of claim 1, wherein the acquiring a virtual paper model and an ink fluid model covering the virtual paper model comprises:
acquiring an initial virtual paper model and an initial ink fluid model, wherein the model heights of the initial virtual paper model and the initial ink fluid model meet a preset height condition, and the initial ink fluid model covers the initial virtual paper model along the direction of the model height;
acquiring first fluid particles with the first attribute, and filling the first fluid particles into the initial virtual paper model to obtain the virtual paper model; and
acquiring second fluid particles with the second attribute, and filling the second fluid particles into the initial ink fluid model to obtain the ink fluid model;
wherein the first attribute comprises one or more of a first color, a first density, a first viscosity value, and a first cooling value, and the second attribute comprises one or more of a second color, a second density, a second viscosity value, and a second cooling value.
3. The method of claim 1, wherein the performing hydrodynamic analysis on the first fluid particles and the second fluid particles based on the preset velocity field information to obtain third fluid particles with a third attribute comprises:
performing hydrodynamic analysis on the first fluid particles and the second fluid particles based on the preset velocity field information to obtain a particle fusion attribute value of the second fluid particles and the first fluid particles in a fused state; and
acquiring, based on the particle fusion attribute value, attribute information required for rendering each frame of the animation image, to obtain the third fluid particles with the third attribute.
4. The method of claim 3, wherein the performing hydrodynamic analysis on the first fluid particles and the second fluid particles based on the preset velocity field information to obtain a particle fusion attribute value of the second fluid particles and the first fluid particles in a fused state comprises:
acquiring a fusion critical value corresponding to each frame of the animation image based on a preset cooling reduction value;
acquiring a particle fusion color value and/or a particle fusion density value after the second fluid particles in each frame of the animation image, under the influence of the velocity field information, fuse with the first fluid particles to reach the fusion critical value; and
taking the particle fusion color value and/or the particle fusion density value as the particle fusion attribute value.
5. The method of claim 3, wherein the acquiring attribute information required for rendering each frame of the animation image based on the particle fusion attribute value to obtain the third fluid particles with the third attribute comprises:
acquiring the particle fusion attribute value together with the second viscosity value and the second cooling value included in the second attribute as the attribute information required for rendering each frame of the animation image; and
taking the attribute information as the third attribute to replace the second attribute of the second fluid particles, thereby obtaining the third fluid particles with the third attribute.
6. The method of claim 1, wherein the rendering and displaying a fluid halation animation of the ink fluid model halation of the virtual paper model based on the first fluid particles, the second fluid particles, and the third fluid particles comprises:
acquiring collision boundary information;
if the collision boundary information is not empty, simulating the motion process in which the first fluid particles and the second fluid particles fuse with each other into the third fluid particles under the influence of the collision boundary information, and rendering a fluid halation animation of the ink fluid model halation of the virtual paper model; and
displaying the fluid halation animation of the ink fluid model halation of the virtual paper model.
7. The method of claim 6, wherein the acquiring collision boundary information comprises:
obtaining a target map, wherein the target map comprises characters and/or patterns;
converting the target map into a preset screen coordinate system so that each screen coordinate in the screen coordinate system is associated with its corresponding pixel information, obtaining coordinate pixel information;
acquiring volume coordinate information of the characters and/or patterns based on the coordinate pixel information; and
taking the difference between the volume coordinate information and the volume cloud coordinates of the virtual paper model as the collision boundary information.
8. An animation rendering device, comprising:
a model acquisition module, configured to acquire a virtual paper model and an ink fluid model covering the virtual paper model; wherein the virtual paper model is filled with first fluid particles with a first attribute, and the ink fluid model is filled with second fluid particles with a second attribute;
a fluid analysis module, configured to perform hydrodynamic analysis on the first fluid particles and the second fluid particles based on preset velocity field information to obtain third fluid particles with a third attribute; and
an animation rendering module, configured to render and display a fluid halation animation of the ink fluid model halation of the virtual paper model based on the first fluid particles, the second fluid particles, and the third fluid particles.
9. A computer device, comprising:
one or more processors;
a memory; and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors to implement the animation rendering method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon a computer program, wherein the computer program is loaded by a processor to perform the animation rendering method of any one of claims 1 to 7.
CN202310099492.3A 2023-01-30 2023-01-30 Animation rendering method, device, computer equipment and computer readable storage medium Pending CN116310013A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310099492.3A CN116310013A (en) 2023-01-30 2023-01-30 Animation rendering method, device, computer equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310099492.3A CN116310013A (en) 2023-01-30 2023-01-30 Animation rendering method, device, computer equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116310013A true CN116310013A (en) 2023-06-23

Family

ID=86823099

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310099492.3A Pending CN116310013A (en) 2023-01-30 2023-01-30 Animation rendering method, device, computer equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116310013A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993877A (en) * 2023-08-09 2023-11-03 深圳市闪剪智能科技有限公司 Method, device and storage medium for simulating special effect of object drifting


Similar Documents

Publication Publication Date Title
US7450124B2 (en) Generating 2D transitions using a 3D model
KR101623288B1 (en) Rendering system, rendering server, control method thereof, program, and recording medium
CN109771951B (en) Game map generation method, device, storage medium and electronic equipment
CN108984169B (en) Cross-platform multi-element integrated development system
US10974459B2 (en) Parallel method of flood filling, and apparatus
US9588651B1 (en) Multiple virtual environments
CN112184873B (en) Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium
US11097486B2 (en) System and method of 3D print modeling utilizing a point cloud representation and generating a voxel representation of the point cloud
KR20080018404A (en) Computer readable recording medium having background making program for making game
US20220067244A1 (en) Method for Generating Simulations of Fluid Interfaces for Improved Animation of Fluid Interactions
CN117390322A (en) Virtual space construction method and device, electronic equipment and nonvolatile storage medium
CN117788689A (en) Interactive virtual cloud exhibition hall construction method and system based on three-dimensional modeling
CN116310013A (en) Animation rendering method, device, computer equipment and computer readable storage medium
CN115082607B (en) Virtual character hair rendering method, device, electronic equipment and storage medium
Trapp et al. Colonia 3D communication of virtual 3D reconstructions in public spaces
KR102108244B1 (en) Image processing method and device
CN113546410B (en) Terrain model rendering method, apparatus, electronic device and storage medium
CN115311395A (en) Three-dimensional scene rendering method, device and equipment
CN108986228B (en) Method and device for displaying interface in virtual reality
US11417043B2 (en) Method for generating simulations of thin film interfaces for improved animation
CN112396683B (en) Shadow rendering method, device, equipment and storage medium for virtual scene
CN114219888A (en) Method and device for generating dynamic silhouette effect of three-dimensional character and storage medium
CN117409127B (en) Real-time ink fluid rendering method and device based on artificial intelligence
CN113509731B (en) Fluid model processing method and device, electronic equipment and storage medium
WO2023216771A1 (en) Virtual weather interaction method and apparatus, and electronic device, computer-readable storage medium and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination