CN112396683B - Shadow rendering method, device, equipment and storage medium for virtual scene
- Publication number
- CN112396683B (application number CN202011379577.XA)
- Authority
- CN
- China
- Prior art keywords
- range
- virtual object
- shooting range
- shadow
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T15/005: General purpose rendering architectures (G06T15/00, 3D [Three Dimensional] image rendering)
- G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings (G06T13/00, Animation; G06T13/20, 3D [Three Dimensional] animation)
- G06T15/60: Shadow generation (G06T15/00, 3D [Three Dimensional] image rendering; G06T15/50, Lighting effects)
Abstract
The application provides a shadow rendering method, apparatus, and device for a virtual scene, and a computer readable storage medium; the method comprises the following steps: acquiring a shooting range of a main camera of a virtual scene; determining a portion of a virtual object within the shooting range according to the shooting range; determining a shooting range of a shadow camera according to the portion of the virtual object within the shooting range; and performing shadow rendering on the portion of the virtual object within the shooting range based on the shooting range of the shadow camera. The method and the device can improve shadow rendering efficiency and shadow quality.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a computer readable storage medium for shadow rendering of a virtual scene.
Background
With the development of image rendering technology, in order to simulate a more realistic virtual scene, a terminal generally renders shadows of virtual objects in the virtual scene in real time.
In the related art, when a virtual object in a virtual scene is rendered, the world-space bounding box covering the virtual object or a manually set fixed bounding sphere is generally used as the imaging range of the shadow camera, and the virtual object is then shadow-rendered according to that imaging range. In this way, the whole virtual object is shadow-rendered regardless of whether the main camera captures the whole body, half of the body, or only a local part of the virtual object, so that when the main camera captures only a local part of the virtual object, shadow rendering efficiency and shadow quality are low.
Disclosure of Invention
The embodiment of the application provides a shadow rendering method, device and equipment of a virtual scene and a computer readable storage medium, which can improve the shadow rendering efficiency and shadow quality.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a shadow rendering method of a virtual scene, which comprises the following steps:
acquiring a shooting range of a main camera of a virtual scene;
determining a portion of a virtual object within the shooting range according to the shooting range;
determining a shooting range of a shadow camera according to the portion of the virtual object within the shooting range;
and performing shadow rendering on the portion of the virtual object within the shooting range based on the shooting range of the shadow camera.
The embodiment of the application provides a shadow rendering device of a virtual scene, which comprises the following components:
The acquisition module is used for acquiring the shooting range of the main camera of the virtual scene;
The first determining module is used for determining a portion of the virtual object within the shooting range according to the shooting range;
The second determining module is used for determining the shooting range of the shadow camera according to the portion of the virtual object within the shooting range;
And the rendering module is used for performing shadow rendering on the part of the virtual object in the shooting range based on the shooting range of the shadow camera.
In the above scheme, the acquiring module is further configured to acquire a camera parameter of a main camera and a position of the main camera in a virtual scene, where the camera parameter includes a viewing angle, a near-plane distance, and a far-plane distance of the main camera;
and acquiring a cone viewing range corresponding to the main camera based on the camera parameters and the positions, and taking the cone viewing range as a shooting range of the main camera.
In the above aspect, the first determining module is further configured to determine at least one intersection line where the boundary surface of the shooting range intersects the virtual object when the boundary surface of the shooting range intersects the virtual object;
And dividing the virtual object based on the at least one intersection line to obtain a part of the virtual object in the shooting range.
In the above aspect, the first determining module is further configured to determine a straight line passing through a center point of the virtual object and being perpendicular to a horizontal plane;
acquiring a first intersection point and a second intersection point of the straight line intersecting with a boundary surface of the shooting range;
Determining a height range corresponding to the shooting range according to the positions of the first intersection point and the second intersection point;
And determining the portion of the virtual object within the shooting range based on the height range corresponding to the shooting range and the height range of the virtual object.
In the above solution, the first determining module is further configured to obtain an intersection of a height range corresponding to the shooting range and a height range of the virtual object;
And taking the part of the virtual object corresponding to the intersection as the part of the virtual object in the shooting range.
In the above aspect, the first determining module is further configured to obtain a side view of a shape corresponding to the shooting range;
Two points at which the straight line intersects the upper and lower boundary lines of the side view are determined as the first intersection point and the second intersection point at which the straight line intersects the boundary surfaces of the shooting range.
In the above scheme, the first determining module is further configured to obtain a position of a pelvic bone of the virtual object and an offset value of a bone space coordinate system;
Superposing the position of the pelvic bone and the offset value of the bone space coordinate system to obtain the center position of the virtual object;
Taking the central position as a sphere center, and constructing an initial surrounding sphere containing the virtual object;
the second determining module is further configured to determine, according to the shooting range, a portion of the initial bounding sphere that is within the shooting range, so as to determine the portion of the virtual object within the shooting range.
In the above aspect, the first determining module is further configured to determine a straight line passing through the center of the initial enclosing ball and being perpendicular to a horizontal plane;
acquiring a line segment of the straight line between an upper boundary surface and a lower boundary surface of the shooting range;
comparing the highest point of the line segment with the highest point of the initial enclosing ball, and the lowest point of the line segment with the lowest point of the initial enclosing ball;
Selecting, as an upper boundary, the lower of the highest point of the line segment and the highest point of the initial enclosing sphere, and selecting, as a lower boundary, the higher of the lowest point of the line segment and the lowest point of the initial enclosing sphere;
Determining a portion of the initial bounding sphere between the upper boundary and the lower boundary as a portion of the initial bounding sphere within the photographing range.
In the above scheme, the second determining module is further configured to construct a target bounding sphere by using the upper boundary and the lower boundary as a boundary of the bounding sphere when the initial bounding sphere is not completely within the shooting range;
and acquiring the shooting range of the shadow camera based on the target enclosing ball.
In the above solution, the second determining module is further configured to obtain, when the initial bounding sphere is completely within the shooting range, the shooting range of the shadow camera based on the initial bounding sphere.
In the above scheme, the second determining module is further configured to construct a bounding sphere including a portion of the virtual object within the shooting range;
acquiring an illumination direction in a virtual scene;
Constructing a bounding box tangent to the bounding sphere based on the illumination direction and the bounding sphere, and determining a range corresponding to the bounding box as a shooting range of a shadow camera;
Wherein at least one face of the bounding box is perpendicular to the illumination direction.
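As an illustration of what such a light-aligned bounding box can look like, the following Python sketch builds a cube of edge length twice the sphere radius, centered on the bounding sphere and with one axis aligned to the illumination direction, so that two of its faces are perpendicular to the light; the use of numpy, the choice of the auxiliary up vector, and the cube shape are assumptions made for this sketch rather than details taken from the claims.

```python
import numpy as np

def light_aligned_bounding_box(sphere_center, sphere_radius, light_dir):
    # Normalize the illumination direction.
    d = np.asarray(light_dir, dtype=float)
    d = d / np.linalg.norm(d)
    # Pick an auxiliary up vector that is not parallel to the light direction.
    up = np.array([0.0, 1.0, 0.0])
    if abs(np.dot(up, d)) > 0.99:
        up = np.array([1.0, 0.0, 0.0])
    right = np.cross(up, d)
    right = right / np.linalg.norm(right)
    up = np.cross(d, right)
    # Cube of edge 2 * radius tangent to the sphere; its front and back faces
    # are perpendicular to the light direction d.
    center = np.asarray(sphere_center, dtype=float)
    r = float(sphere_radius)
    corners = [center + sx * r * right + sy * r * up + sz * r * d
               for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
    return np.array(corners)

# Example: unit bounding sphere at the origin, light arriving at 45 degrees.
print(light_aligned_bounding_box((0.0, 0.0, 0.0), 1.0, (1.0, -1.0, 0.0)))
```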
In the above scheme, the rendering module is further configured to obtain an illumination direction in the virtual scene;
Generating a shadow map based on a shooting range of the shadow camera and an illumination direction in the virtual scene, wherein the shadow map is obtained by performing shadow casting on a part of the virtual object in the shooting range from the illumination direction;
and based on the shadow map, shadow rendering is carried out on the part of the virtual object in the shooting range.
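The rendering module's use of the shadow map follows the standard shadow-mapping comparison: a point is shadowed when its depth as seen from the light exceeds the depth stored in the map. A minimal, generic Python sketch of that test (the bias term is a conventional anti-acne device, not something specified here) is shown below.

```python
def in_shadow(fragment_depth_from_light, shadow_map_depth, bias=1e-3):
    """Classic shadow-map test: the fragment is in shadow when it lies farther
    from the light than the occluder depth recorded in the shadow map."""
    return fragment_depth_from_light - bias > shadow_map_depth

# Example: the stored occluder is at depth 0.42, the fragment at depth 0.6.
print(in_shadow(0.6, 0.42))  # True, so the fragment is rendered in shadow
```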
An embodiment of the present application provides a computer apparatus including:
A memory for storing executable instructions;
and the processor is used for realizing the shadow rendering method of the virtual scene when executing the executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium which stores executable instructions for causing a processor to execute the method for shadow rendering of a virtual scene.
The embodiment of the application has the following beneficial effects:
By applying this embodiment, the shooting range of the main camera of the virtual scene is acquired; the portion of the virtual object within the shooting range is determined according to the shooting range; the shooting range of the shadow camera is determined according to the portion of the virtual object within the shooting range; and shadow rendering is performed on the portion of the virtual object within the shooting range based on the shooting range of the shadow camera. Therefore, the shooting range of the shadow camera can be dynamically adjusted according to the shooting range of the main camera, so that shadow rendering is performed only on the portion of the virtual object within the shooting range, which improves shadow rendering efficiency and shadow quality relative to shadow rendering of the whole virtual object.
Drawings
FIG. 1 is a schematic diagram of an alternative implementation scenario of a shadow rendering method for a virtual scenario according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a computer device 500 according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a man-machine interaction engine installed in a shadow rendering device of a virtual scene according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating a shadow rendering method of a virtual scene according to an embodiment of the present application;
FIG. 5 is a schematic view of a shooting range of a main camera according to an embodiment of the present application;
FIGS. 6A-6B are schematic diagrams illustrating intersection of a boundary surface of a photographing range and a virtual object according to an embodiment of the present application;
FIGS. 7A to 7C are three views of a photographing range provided by an embodiment of the present application;
FIG. 8 is a schematic view of a straight line intersecting upper and lower boundary lines of a side view provided by an embodiment of the present application;
FIGS. 9A-9D are schematic illustrations of a comparison of the height range of a bounding sphere with the height range of a line segment provided by an embodiment of the present application;
FIGS. 10A-10C are schematic illustrations of the construction of a target bounding sphere provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of the construction of a bounding box provided by an embodiment of the present application;
Fig. 12A is a schematic diagram of a bounding box provided in the related art;
FIG. 12B is a schematic view of a surrounding sphere provided in the related art;
FIG. 13 is a flowchart illustrating a method for shadow rendering of a virtual scene according to an embodiment of the present application;
FIG. 14 is a schematic view of an initial enclosing ball provided by an embodiment of the present application;
FIG. 15A is a schematic view of a shooting range of a shadow camera in the related art;
FIG. 15B is a schematic view of a shooting range of a shadow camera provided in an embodiment of the present application;
FIG. 16A is a schematic diagram of a shadow map rendered by a shadow camera in the related art;
FIG. 16B is a schematic diagram of a shadow map rendered by a shadow camera provided by an embodiment of the present application;
FIG. 17A is a schematic diagram of the effect of shadow rendering in the related art;
FIG. 17B is a schematic diagram of an effect of shadow rendering provided by an embodiment of the present application.
Detailed Description
The present application will be further described in detail with reference to the accompanying drawings, for the purpose of making the objects, technical solutions and advantages of the present application more apparent, and the described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without making any inventive effort are within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third" and the like are merely used to distinguish similar objects and do not represent a particular ordering of the objects, it being understood that the "first", "second", "third" may be interchanged with a particular order or sequence, as permitted, to enable embodiments of the application described herein to be practiced otherwise than as illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Before describing embodiments of the present application in further detail, the terms and terminology involved in the embodiments of the present application will be described, and the terms and terminology involved in the embodiments of the present application will be used in the following explanation.
1) Client, an application program running in the terminal for providing various services, such as a video playing client or a game client.
2) The virtual scene is a virtual scene that an application program displays (or provides) when running on a terminal. The virtual scene may be a simulation environment for the real world, a semi-simulation and semi-fictional virtual environment, or a pure fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application. For example, a virtual scene may include sky, land, sea, etc., the land may include environmental elements of a desert, city, etc., and a user may control a virtual object to move in the virtual scene.
3) Virtual objects, images of various people and objects in a virtual scene that can interact, or movable objects in a virtual scene. The movable object may be a virtual character, a virtual animal, a cartoon character, etc., such as: characters, animals, plants, oil drums, walls, stones, etc. displayed in the virtual scene. The virtual object may be an avatar in the virtual scene for representing a user. A virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene, occupying a portion of space in the virtual scene.
Alternatively, the virtual object may be a user character controlled by operations on the client, an artificial intelligence (AI) character configured through training to fight in the virtual scene, or a non-player character (NPC) configured to interact in the virtual scene. Alternatively, the virtual object may be a virtual character that performs adversarial interaction in the virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients joining the interaction.
4) The main camera, conventionally a camera that outputs colors onto a display screen, uses perspective projection.
5) The shadow camera, a camera used for capturing the shadow depth of the virtual object; it uses orthographic projection, its orientation is consistent with the direction of the sunlight that generates the projection, and its rendering result is output to a shadow map.
Referring to fig. 1, fig. 1 is a schematic diagram of an alternative implementation scenario of a virtual scene shadow rendering method according to an embodiment of the present application, in order to support an exemplary application, a terminal 400 (a terminal 400-1 and a terminal 400-2 are shown in an exemplary manner) is connected to a server 200 through a network 300, where the network 300 may be a wide area network or a local area network, or a combination of the two, and data transmission is implemented using a wireless link.
In some embodiments, the server 200 may be a stand-alone physical server, a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDNs, content Delivery Network), and basic cloud computing services such as big data and artificial intelligence platforms. The terminal may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiment of the present application.
In actual implementation, the virtual scene may be a game scene, an indoor design simulation scene, or the like, and the terminal installs and runs an application program supporting the virtual scene. The application may be any one of a first-person shooting game (FPS), a third-person shooting game, a multiplayer online battle arena game (MOBA), a virtual reality application, a three-dimensional map program, or a three-dimensional design program. The terminal can render the virtual objects and the shadows of the virtual objects in the virtual scene through the application program.
In an exemplary scenario, a virtual scene application is provided in the terminal 400-1, and a user may start the virtual scene application through the terminal 400-1; the terminal 400-1 obtains virtual scene data from the server 200 through the network 300, and renders the virtual object in the virtual scene and the shadow of the virtual object based on the obtained virtual scene data. When rendering the shadow of the virtual object, the terminal can acquire the shooting range of the main camera of the virtual scene; determine the portion of the virtual object within the shooting range according to the shooting range; determine the shooting range of the shadow camera according to the portion of the virtual object within the shooting range; and perform shadow rendering on the portion of the virtual object within the shooting range based on the shooting range of the shadow camera.
Here, the user adjusts the picture of the virtual scene in real time through the client, thereby adjusting the shooting range of the main camera, and the terminal 400-1 re-determines the portion of the virtual object within the shooting range according to the adjustment of the shooting range of the main camera, thereby dynamically adjusting the shooting range of the shadow camera; shadow rendering is then performed on the portion of the virtual object within the shooting range based on the shooting range of the shadow camera.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a computer device 500 provided in an embodiment of the present application, in practical application, the computer device 500 may be a terminal (e.g. 400-1) or a server 200 in fig. 1, and the computer device is taken as an example of the server 200 shown in fig. 1, to describe a computer device implementing a shadow rendering method of a virtual scene in an embodiment of the present application. The computer device 500 shown in fig. 2 includes: at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The various components in computer device 500 are coupled together by bus system 540. It is appreciated that the bus system 540 is used to enable connected communications between these components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to the data bus. The various buses are labeled as bus system 540 in fig. 2 for clarity of illustration.
The processor 510 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor (for example, a microprocessor or any conventional processor), a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The user interface 530 includes one or more output devices 531 that enable presentation of media content, including one or more speakers and/or one or more visual displays. The user interface 530 also includes one or more input devices 532, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 550 may optionally include one or more storage devices physically located remote from processor 510.
Memory 550 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The non-volatile Memory may be a Read Only Memory (ROM) and the volatile Memory may be a random access Memory (RAM, random Access Memory). The memory 550 described in embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, memory 550 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
Network communication module 552 is used to reach other computing devices via one or more (wired or wireless) network interfaces 520; exemplary network interfaces 520 include Bluetooth, wireless fidelity (Wi-Fi), universal serial bus (USB), and the like;
A presentation module 553 for enabling presentation of information (e.g., a user interface for operating a peripheral device and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
The input processing module 554 is configured to detect one or more user inputs or interactions from one of the one or more input devices 532 and translate the detected inputs or interactions.
In some embodiments, the shadow rendering device for a virtual scene provided by the embodiments of the present application may be implemented in software, and fig. 2 shows a shadow rendering device 555 for a virtual scene stored in a memory 550, which may be software in the form of a program, a plug-in, or the like, and includes the following software modules: the acquisition module 5551, the first determination module 5552, the second determination module 5553, and the rendering module 5554 are logical, and thus may be arbitrarily combined or further split according to the implemented functions.
The functions of the respective modules will be described hereinafter.
In other embodiments, the shadow rendering device for a virtual scene provided in the embodiments of the present application may be implemented in hardware. By way of example, the shadow rendering device for a virtual scene provided in the embodiments of the present application may be a processor in the form of a hardware decoding processor that is programmed to perform the shadow rendering method for a virtual scene provided in the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may employ one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic components.
In some embodiments, a man-machine interaction engine for implementing a shadow rendering method of a virtual scene is installed in a shadow rendering device of the virtual scene, where the man-machine interaction engine includes a functional module, a component or a plug-in for implementing the shadow rendering method of the virtual scene, and fig. 3 is a schematic diagram of the man-machine interaction engine installed in the shadow rendering device of the virtual scene according to the embodiment of the present application, and referring to fig. 3, taking the virtual scene as an example of a game scene, and correspondingly, the man-machine interaction engine is the game engine.
The game engine is a set of codes or instructions designed for an electronic device running a certain type of game and capable of being recognized by the electronic device (computer device), and is used for controlling the running of the game, and a game program can be divided into two major parts, namely: game = game engine (program code) +game resources (images, sounds, animations, etc.); wherein, the game resources comprise image, sound, animation and the like, and the game engine calls the resources sequentially according to the requirements of game design.
The shadow rendering method of the virtual scene provided by the embodiment of the application can be implemented by each module in the shadow rendering device of the virtual scene shown in fig. 3 by calling the relevant module, component or plug-in of the game engine shown in fig. 3, and the module, component or plug-in included in the game engine shown in fig. 3 is exemplified below.
1) The main camera, a necessary component for presenting the game scene picture. A game scene corresponds to at least one main camera, and there may be two or more according to actual needs. As the window of game rendering, it captures and presents the picture content of the game world for the player, and the viewing angle from which the player watches the game world, such as a first-person viewing angle or a third-person viewing angle, can be adjusted by setting the parameters of the main camera.
2) Scene organization, used for game scene management, such as collision detection and visibility culling; collision detection can be implemented by colliders which, according to actual needs, may be realized with an axis-aligned bounding box (AABB) or an oriented bounding box (OBB); visibility culling can be implemented based on the view frustum, a three-dimensional volume generated according to the virtual camera and used to clip objects outside the camera's visible range: objects inside the view frustum are projected onto the view plane, while objects outside it are discarded and not processed.
3) The terrain component, used for creating and editing game terrain, such as mountains, canyons, and caves in game scenes.
4) An editor, an auxiliary tool in a game design, comprising:
The scene editor is used for editing the content of the game scene, such as changing the topography, customizing vegetation distribution, lamplight layout and the like;
a model editor for creating and editing a model in a game (character model in a game scene);
the special effect editor is used for editing special effects in the game picture;
And the action editor is used for defining and editing actions of the characters in the game screen.
5) The special effect component, used for creating and editing game special effects in the game picture; in practical applications, particle effects and texture UV animations may be adopted. A particle effect combines countless single particles so that they take on a fixed form, and a controller and a script control the overall or individual movement of the particles to simulate effects such as water, fire, fog, and gas in reality; UV animation is a texture animation achieved by dynamically modifying the UV coordinates of a map.
6) Bone animation, which is realized by using built-in bones to drive objects to generate motion, can be understood as two concepts as follows:
Bone: an abstract concept for controlling skin, such as human skeleton control skin;
Covering: factors controlled by bones and displayed outside, such as the skin of the human body, are affected by bones.
7) Morph animation: i.e., a morphing animation, an animation achieved by adjusting the vertices of the base model.
8) And the UI control is used for realizing the control of game picture display.
9) The bottom-layer algorithms, the algorithms that need to be invoked to implement the functions in the game engine, such as the graphics algorithms required by scene organization, and the matrix and vector transformations required by skeletal animation.
10) The rendering component, a component necessary for presenting game picture effects; a rendering component converts a scene described by three-dimensional vectors into a scene described by two-dimensional pixels, and includes model rendering and scene rendering.
11) Path finding, an algorithm for finding the shortest path, used for path planning, path finding, and graph traversal in game design.
The shadow rendering method of the virtual scene provided by the embodiment of the application will be described in connection with the exemplary application and implementation of the terminal provided by the embodiment of the application.
Referring to fig. 4, fig. 4 is a flowchart of a shadow rendering method of a virtual scene according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 4.
Step 401: the terminal acquires the shooting range of the main camera of the virtual scene.
In practical implementation, a shadow rendering service is provided on the terminal, which can render virtual objects in virtual scenes and the shadows of the virtual objects, wherein a virtual object can be a character or an object, and the virtual scene can be any virtual scene displayed on the terminal. The data of the virtual scene can be stored locally or obtained from the cloud.
In practical application, the virtual scene may be a game scene, an indoor design simulation scene, or the like, and the terminal installs and runs an application program supporting the virtual scene. The application may be any one of a first-person shooting game (FPS), a third-person shooting game, a multiplayer online battle arena game (MOBA), a virtual reality application, a three-dimensional map program, or a three-dimensional design program.
The main camera is used for presenting the virtual scene picture; a virtual scene corresponds to at least one main camera, and there may be two or more according to actual needs. As the rendering window of the virtual scene, it captures and presents the picture content of the virtual scene for the user, and the viewing angle from which the player views the virtual scene, such as a first-person viewing angle or a third-person viewing angle, can be adjusted by setting the parameters of the main camera. The shooting range of the main camera is the range corresponding to the virtual scene picture presented by the terminal.
In some embodiments, the terminal may acquire the shooting range of the main camera of the virtual scene by: acquiring camera parameters of a main camera and the position of the main camera in a virtual scene, wherein the camera parameters comprise the view angle, the near-plane distance and the far-plane distance of the main camera; and acquiring a cone viewing range corresponding to the main camera based on the camera parameters and the positions, and taking the cone viewing range as a shooting range of the main camera.
In practical implementation, when a three-dimensional body is placed in the world space coordinate system, since the display (screen) can only present the three-dimensional body as a two-dimensional image, it is necessary to reduce the dimension of the three-dimensional body by projection, which includes perspective projection and orthographic projection; perspective projection is used for the main camera. The terminal can determine the shooting range of the main camera according to the camera parameters of the main camera, where the camera parameters of the main camera include the view angle, the near-plane distance, and the far-plane distance of the main camera.
Here, the view angle of the main camera is the angle that controls the field of view in the XY plane and ranges from 0 to 180 degrees; the larger the view angle, the larger the presented field of view. The near-plane distance refers to the distance between the near plane and the main camera, and the far-plane distance refers to the distance between the far plane and the main camera. Based on these parameters, a viewing cone (view frustum) can be determined, and only the content located inside it is visible, i.e., the cone viewing range is the shooting range of the main camera.
Fig. 5 is a schematic diagram of the shooting range of the main camera provided in an embodiment of the present application. Referring to fig. 5, the perspective projection takes the shape of a pyramid with the main camera located at its vertex; the pyramid is truncated by a near plane 501 and a far plane 502 to form a frustum 503, and the range corresponding to the frustum (the cone viewing range) is taken as the shooting range of the main camera.
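To make the relation between these parameters and the visible range concrete, the short Python sketch below computes the vertical extent of the shooting range at a given depth in front of the camera; the horizontally oriented camera and the convention that the view angle is the full vertical angle are assumptions made only for this example, not details of the embodiment.

```python
import math

def frustum_height_range_at_depth(camera_height, view_angle_deg, depth):
    """Vertical extent [y_min, y_max] of the shooting range at a given depth
    in front of a horizontally oriented perspective camera (y is up)."""
    half_height = depth * math.tan(math.radians(view_angle_deg) / 2.0)
    return camera_height - half_height, camera_height + half_height

# A 60-degree view angle, sampled 10 units in front of a camera at height 1.7:
print(frustum_height_range_at_depth(1.7, 60.0, 10.0))  # roughly (-4.07, 7.47)
```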
Step 402: and determining the part of the virtual object in the shooting range according to the shooting range.
Here, only the content that is within the shooting range can be presented by the display, that is, the portion of the virtual object that is within the shooting range can be presented, based on which the portion of the virtual object that is within the shooting range is acquired. Here, the portion of the virtual object within the imaging range may be a part of the virtual object or may be the entire portion of the virtual object.
In some embodiments, the portion of the virtual object within the shooting range may be determined from the shooting range by: when the boundary surface of the shooting range intersects the virtual object, determining at least one intersection line where the boundary surface of the shooting range intersects the virtual object; and dividing the virtual object based on the at least one intersection line to obtain the portion of the virtual object within the shooting range.
Here, when the boundary surface of the photographing range intersects the virtual object, it means that only a part of the virtual object is within the photographing range of the main camera. In actual implementation, the boundary surfaces of the shooting range include four boundary surfaces of up, down, left and right, and one or more of the boundary surfaces may intersect with the virtual object.
As an example, when there is a boundary surface intersecting with a virtual object, fig. 6A is a schematic diagram of the intersection of the boundary surface of a shooting range and the virtual object provided in the embodiment of the present application, referring to fig. 6A, the lower boundary surface of the shooting range intersects with the virtual object, according to the intersection line of the boundary surface of the shooting range and the virtual object, a portion of the virtual object in the shooting range may be obtained as an upper body of the virtual object, and a portion of the dashed box 601 is a portion of the virtual object in the shooting range.
As an example, when there are two boundary surfaces intersecting with a virtual object, fig. 6B is a schematic diagram of the intersection of the boundary surface of the shooting range and the virtual object provided in the embodiment of the present application, referring to fig. 6B, the lower boundary surface of the shooting range and the left boundary surface of the shooting range intersect with the virtual object, according to two intersecting lines of the boundary surface of the shooting range and the virtual object, a portion of the virtual object in the shooting range may be obtained as an upper right portion of the virtual object, and a portion of the dashed frame 602 is a portion of the virtual object in the shooting range.
In some embodiments, when the boundary surface of the photographing range does not intersect the virtual object, it is indicated that all portions of the virtual object are within the photographing range.
In some embodiments, the portion of the virtual object within the shooting range may be determined from the shooting range by: determining a straight line passing through the center point of the virtual object and perpendicular to the horizontal plane; acquiring a first intersection point and a second intersection point at which the straight line intersects the boundary surfaces of the shooting range; determining a height range corresponding to the shooting range according to the positions of the first intersection point and the second intersection point; and determining the portion of the virtual object within the shooting range based on the height range corresponding to the shooting range and the height range of the virtual object.
Here, when the aspect ratio of the virtual object is less than 1, only the height range of the virtual object may be considered; that is, the height range of the virtual object is compared with the displayable height range to determine the portion within the shooting range.
In actual implementation, as the main camera is perspective projection, the height range corresponding to the shooting range is related to the position of the virtual object, and the closer the main camera is, the smaller the height range corresponding to the shooting range is, the less content can be presented; based on the above, determining the position of the center point of the virtual object, representing the position of the virtual object by the center point, and then determining a straight line which passes through the center point of the virtual object and is perpendicular to the horizontal plane, wherein the straight line intersects with the upper boundary and the lower boundary surface of the shooting range to obtain two intersecting points, namely a first intersecting point and a second intersecting point; the range from the height of the first intersection point to the height of the second intersection point is taken as the height range corresponding to the shooting range. For example, if the position coordinates of the first intersection point in the world coordinate system are (1, 3) and the position coordinates of the second intersection point in the world coordinate system are (1, 1), the height range corresponding to the photographing range may be obtained as [1,3].
In some embodiments, the height range of the virtual object may be determined by acquiring a model of the virtual object, then determining the position coordinates of the highest point and the lowest point of the virtual object by combining the position of the virtual object in the virtual scene with the model of the virtual object, and further determining the height range of the virtual object from these coordinates. The model of the virtual object is obtained by modeling the virtual object. For example, if the position coordinate of the highest point of the virtual object in the world coordinate system is (1, 1, 3.5) and the position coordinate of the lowest point of the virtual object in the world coordinate system is (0.5, 1, 1), the height range of the virtual object is [1, 3.5].
In some embodiments, the portion of the virtual object within the shooting range may be determined by: acquiring an intersection of the height range corresponding to the shooting range and the height range of the virtual object; and taking the part of the virtual object corresponding to the intersection as the portion of the virtual object within the shooting range.
In actual implementation, the maximum value of the height range corresponding to the shooting range is compared with the maximum value of the height range of the virtual object to obtain the smaller of the two, and the minimum value of the height range corresponding to the shooting range is compared with the minimum value of the height range of the virtual object to obtain the larger of the two; the range between the obtained smaller value and larger value is determined as the intersection of the height range corresponding to the shooting range and the height range of the virtual object. For example, if the height range corresponding to the shooting range is [1, 3] and the height range of the virtual object is [1, 3.5], the intersection of the two ranges is [1, 3]. After the intersection is obtained, the part of the virtual object within the area range corresponding to the intersection is taken as the portion of the virtual object within the shooting range.
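The comparison of maxima and minima described above reduces to a simple interval intersection; the minimal Python sketch below reproduces the figures used in this example (the tuple-based range representation is an assumption made for illustration).

```python
def height_range_intersection(range_a, range_b):
    """Intersection of two [min, max] height ranges; None if they are disjoint."""
    low = max(range_a[0], range_b[0])    # larger of the two minimum values
    high = min(range_a[1], range_b[1])   # smaller of the two maximum values
    return (low, high) if low <= high else None

# Shooting-range heights [1, 3] against object heights [1, 3.5]:
print(height_range_intersection((1.0, 3.0), (1.0, 3.5)))  # (1.0, 3.0)
```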
In some embodiments, the first intersection point and the second intersection point may be determined by: acquiring a side view of a shape corresponding to the shooting range; two points at which the straight line intersects with the side of the side view are determined as a first intersection point and a second intersection point at which the straight line intersects with a boundary surface of the photographing range.
In practical implementation, the shooting range of the main camera may be abstracted into three views. For example, figs. 7A-7C are three views of the shooting range provided by the embodiment of the present application; referring to figs. 7A-7C, fig. 7A is a front view of the view frustum, where 701 represents the near plane of the main camera, 702 represents the far plane of the main camera, and 703 represents the boundary surfaces (including the upper, lower, left, and right boundaries) of the shooting range of the main camera; fig. 7B is a top view of the view frustum, and fig. 7C is a side view of the view frustum.
Since the height range corresponding to the imaging range is calculated, only the upper and lower boundary surfaces of the imaging range need be considered, and thus the subsequent calculation can be performed using a side view. Here, upper and lower boundary lines of the side view correspond to upper and lower boundary surfaces of the photographing range, a straight line passing through the center of the virtual object and perpendicular to the horizontal plane is determined, the straight line intersects with the upper and lower boundary lines of the side view at two points, and the two intersecting points are determined as a first intersecting point and a second intersecting point at which the straight line intersects with the boundary surfaces of the photographing range.
Fig. 8 is a schematic diagram of a straight line intersecting with upper and lower boundary lines of a side view, referring to fig. 8, in which a trapezoid represents a side view of a photographing range, a straight line passing through a center 801 of a virtual object and perpendicular to a horizontal plane intersects with upper and lower boundary lines of the side view at a first intersection point 802 and a second intersection point 803.
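In the side view, finding the first and second intersection points reduces to intersecting a vertical line with the two boundary lines of the trapezoid. The Python sketch below does this for a camera that may be pitched up or down; the sign conventions (pitch measured from the horizontal, the view angle taken as the full vertical angle) are assumptions of the sketch rather than statements of the method.

```python
import math

def side_view_intersections(camera_xy, pitch_deg, view_angle_deg, x_center):
    """Heights of the first and second intersection points where the vertical
    line x = x_center meets the upper and lower boundary lines of the side view."""
    cam_x, cam_y = camera_xy
    dx = x_center - cam_x
    upper = cam_y + dx * math.tan(math.radians(pitch_deg + view_angle_deg / 2.0))
    lower = cam_y + dx * math.tan(math.radians(pitch_deg - view_angle_deg / 2.0))
    return upper, lower

# Camera at (0, 2) looking horizontally with a 60-degree view angle; the
# virtual object's center lies 5 units away:
print(side_view_intersections((0.0, 2.0), 0.0, 60.0, 5.0))  # about (4.89, -0.89)
```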
Step 403: and determining the shooting range of the shadow camera according to the position of the virtual object in the shooting range.
Here, the shooting range of the shadow camera refers to the range in which depth information is to be acquired. In actual implementation, the shooting range of the shadow camera is determined according to the portion of the virtual object within the shooting range of the main camera, so that the shooting range of the shadow camera corresponds to that portion of the virtual object.
In some embodiments, before determining the portion of the virtual object within the shooting range according to the shooting range, the terminal may acquire the position of a pelvic bone of the virtual object and an offset value in the bone space coordinate system; superimpose the position of the pelvic bone and the offset value in the bone space coordinate system to obtain the center position of the virtual object; and construct, with the center position as the sphere center, an initial bounding sphere containing the virtual object. Accordingly, the portion of the virtual object within the shooting range may be determined by: determining, according to the shooting range, the part of the initial bounding sphere within the shooting range, so as to determine the portion of the virtual object within the shooting range.
In practical implementation, when the virtual object is a humanoid virtual object, the pelvis is generally located near the vertical center of the humanoid virtual object, which makes it convenient to determine the true center position of the virtual object. The pelvic bone position of the virtual object is first determined, and an offset value is then set based on the bone space coordinate system; the offset value is used to move the anchor point to the true center position of the humanoid virtual object with respect to its height, and the true center position of the virtual object can be obtained by superimposing the position of the pelvic bone and the offset value in the bone space coordinate system. A radius is then set for the initial bounding sphere, which can be set manually or determined according to a bounding box, that is, the radius derived from the bounding box is taken as the radius of the initial bounding sphere. Finally, the initial bounding sphere is constructed with the center position as the sphere center and the set radius as the radius, such that the initial bounding sphere can completely enclose the virtual object.
When the radius of the initial bounding sphere is determined based on the bounding box, it needs to be determined according to the size of the world-space bounding box of the skinned virtual object, and is half of the side length of the world-space bounding box.
In practical application, after the initial enclosing ball is determined, a portion of the initial enclosing ball within the shooting range may be acquired, and then a portion of the virtual object within the portion is taken as a portion of the virtual object within the shooting range.
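A minimal sketch of this construction in Python, assuming the pelvic-bone position and the bone-space offset are given as (x, y, z) tuples and that the world-space bounding-box edge length after skinning is known, could look like this:

```python
def build_initial_bounding_sphere(pelvis_position, bone_space_offset, box_edge_length):
    """Initial bounding sphere: center = pelvic-bone position plus the bone-space
    offset, radius = half of the world-space bounding-box edge after skinning."""
    center = tuple(p + o for p, o in zip(pelvis_position, bone_space_offset))
    radius = box_edge_length / 2.0
    return center, radius

# Pelvis at (0, 0.9, 0), a +0.1 offset along the up axis, a 2.0-unit bounding box:
print(build_initial_bounding_sphere((0.0, 0.9, 0.0), (0.0, 0.1, 0.0), 2.0))
# ((0.0, 1.0, 0.0), 1.0)
```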
In some embodiments, the terminal may determine the portion of the initial bounding sphere that is within the shooting range by: determining a straight line passing through the center of the initial bounding sphere and perpendicular to the horizontal plane; acquiring the line segment of the straight line between the upper and lower boundary surfaces of the shooting range; comparing the highest point of the line segment with the highest point of the initial bounding sphere, and the lowest point of the line segment with the lowest point of the initial bounding sphere; selecting, as the upper boundary, the lower of the highest point of the line segment and the highest point of the initial bounding sphere, and selecting, as the lower boundary, the higher of the lowest point of the line segment and the lowest point of the initial bounding sphere; and determining the portion of the initial bounding sphere between the upper boundary and the lower boundary as the portion of the initial bounding sphere within the shooting range.
In actual implementation, since the initial bounding sphere is constructed in advance and its center is actually the center point of the virtual object, a straight line passing through the sphere center and perpendicular to the horizontal plane can be determined, and the line segment of this straight line lying between the upper and lower boundary surfaces of the shooting range can be obtained. The height range of this line segment represents the height range corresponding to the shooting range, and the height range of the initial bounding sphere represents the height range of the virtual object; the height range of the line segment is then compared with the height range of the initial bounding sphere to compare the height range of the shooting range with the height range of the virtual object.
In practical application, the highest point of the line segment is compared with the highest point of the initial bounding sphere, and the lowest point of the line segment with the lowest point of the initial bounding sphere. When the highest point of the initial bounding sphere is above the highest point of the line segment and its lowest point is above the lowest point of the line segment, the highest point of the line segment is taken as the upper boundary and the lowest point of the initial bounding sphere as the lower boundary. When the highest point of the initial bounding sphere is below the highest point of the line segment and its lowest point is below the lowest point of the line segment, the lowest point of the line segment is taken as the lower boundary and the highest point of the initial bounding sphere as the upper boundary. When the highest point of the initial bounding sphere is above the highest point of the line segment and its lowest point is below the lowest point of the line segment, the highest point of the line segment is taken as the upper boundary and the lowest point of the line segment as the lower boundary.
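A minimal sketch of the boundary selection just described, assuming heights are scalar world-space values; the function and parameter names are illustrative:

```python
def select_vertical_bounds(segment_top, segment_bottom, sphere_center_y, sphere_radius):
    """Clip the vertical extent of the initial bounding sphere against the line segment.

    Returns (upper, lower) boundary heights, or None when the sphere lies entirely
    outside the vertical range of the shooting range.
    """
    sphere_top = sphere_center_y + sphere_radius
    sphere_bottom = sphere_center_y - sphere_radius
    upper = min(segment_top, sphere_top)        # the lower of the two highest points
    lower = max(segment_bottom, sphere_bottom)  # the higher of the two lowest points
    if upper <= lower:
        return None
    return upper, lower
```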
Figs. 9A-9D are schematic diagrams comparing the height range of the initial bounding sphere with the height range of the line segment. Referring to fig. 9A, the initial bounding sphere is completely within the height range of the line segment, which means the height range of the sphere is completely within the screen height range, so the highest point and the lowest point of the initial bounding sphere are taken as the upper boundary and the lower boundary respectively. Referring to fig. 9B, the highest point of the initial bounding sphere exceeds the height range of the line segment, so the highest point of the line segment is taken as the upper boundary and the lowest point of the initial bounding sphere as the lower boundary. Referring to fig. 9C, the lowest point of the initial bounding sphere exceeds the line segment height range, meaning it falls outside the screen height range, so the lowest point of the line segment is taken as the lower boundary and the highest point of the initial bounding sphere as the upper boundary. Referring to fig. 9D, both the highest point and the lowest point of the initial bounding sphere are outside the line segment height range, so the highest point of the line segment is taken as the upper boundary and the lowest point of the line segment as the lower boundary.
In some embodiments, the terminal may determine the shooting range of the shadow camera by: when the initial bounding sphere is not completely in the shooting range, constructing a target bounding sphere by taking the upper boundary and the lower boundary as the boundaries of the bounding sphere; and acquiring the shooting range of the shadow camera based on the target enclosing ball.
In actual practice, when the initial bounding sphere is not fully within the shooting range, a new bounding sphere, the target bounding sphere, has to be constructed so that the shooting range of the shadow camera can be calculated from it. When constructing this sphere, the upper boundary and the lower boundary are taken as the boundary of the target bounding sphere, that is, each of them is a point on the target bounding sphere.
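A minimal sketch of this reconstruction, under the assumption (consistent with figs. 10A-10C) that the upper and lower boundaries become the topmost and bottommost points of the target bounding sphere on the vertical line through the original sphere center:

```python
import numpy as np


def build_target_bounding_sphere(line_x, line_z, upper, lower):
    """Return (center, radius) of the target bounding sphere whose topmost and
    bottommost points are the selected upper and lower boundaries."""
    radius = 0.5 * (upper - lower)
    center = np.array([line_x, 0.5 * (upper + lower), line_z])
    return center, radius
```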
Figs. 10A-10C are schematic diagrams of constructing a target bounding sphere according to an embodiment of the present application. Referring to fig. 10A, the target bounding sphere 1001 is constructed with the highest point of the line segment as the upper boundary and the lowest point of the initial bounding sphere as the lower boundary; referring to fig. 10B, the target bounding sphere 1002 is constructed with the lowest point of the line segment as the lower boundary and the highest point of the initial bounding sphere as the upper boundary; referring to fig. 10C, the target bounding sphere 1003 is constructed with the highest point of the line segment as the upper boundary and the lowest point of the line segment as the lower boundary.
Here, after the target bounding sphere is constructed, a bounding box tangent to the target bounding sphere is determined according to the illumination direction in the virtual scene: two faces of the bounding box are perpendicular to the illumination direction, the remaining faces are parallel to it, and the bounding box is tangent to the target bounding sphere.
In some embodiments, the terminal may determine the shooting range of the shadow camera by: and acquiring the shooting range of the shadow camera based on the initial enclosing sphere when the initial enclosing sphere is completely in the shooting range.
In practical implementation, when the initial bounding sphere is completely within the shooting range, which means that the whole virtual object can be presented, the shooting range of the shadow camera is acquired directly based on the initial bounding sphere, without reconstructing the target bounding sphere.
Here, according to the illumination direction in the virtual scene, a bounding box tangent to the initial bounding sphere is determined: two of its faces are perpendicular to the illumination direction and the remaining faces are parallel to it.
In some embodiments, the terminal may determine the shooting range of the shadow camera by: constructing a surrounding sphere containing a part of the virtual object in the shooting range; acquiring an illumination direction in a virtual scene; constructing a bounding box tangent to the bounding sphere based on the illumination direction and the bounding sphere, and determining a range corresponding to the bounding box as a shooting range of a shadow camera; wherein at least one face of the bounding box is perpendicular to the illumination direction.
In actual implementation, the center position of the part of the virtual object within the shooting range is determined, and a bounding sphere is constructed with this position as the sphere center so that it just encloses that part of the virtual object. Then, according to the illumination direction in the virtual scene, a bounding box tangent to the bounding sphere is determined: two of its faces are perpendicular to the illumination direction, the remaining faces are parallel to it, and the bounding box is tangent to the bounding sphere.
Fig. 11 is a schematic diagram of construction of a bounding box according to an embodiment of the present application, referring to fig. 11, two surfaces of the bounding box are perpendicular to an illumination direction 1101, other surfaces of the bounding box are parallel to the illumination direction, and the bounding box is tangent to a bounding sphere. The range corresponding to the bounding box is referred to herein as the shooting range of the shadow camera.
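A minimal sketch of one way such a light-aligned, sphere-tangent box can be derived; the frame construction and the returned tuple layout are illustrative assumptions rather than an interface prescribed by the embodiment:

```python
import numpy as np


def light_aligned_box(sphere_center, sphere_radius, light_dir):
    """Return (eye, forward, right, up, half_extent) describing the shadow camera range.

    Two faces of the box are perpendicular to the light direction, the others are
    parallel to it, and every face is tangent to the sphere, so the box is a cube
    of side 2 * sphere_radius centered on the sphere center.
    """
    forward = np.asarray(light_dir, float)
    forward = forward / np.linalg.norm(forward)
    # Any vector not parallel to the light direction works as a helper for the frame.
    helper = np.array([0.0, 1.0, 0.0]) if abs(forward[1]) < 0.99 else np.array([1.0, 0.0, 0.0])
    right = np.cross(helper, forward)
    right = right / np.linalg.norm(right)
    up = np.cross(forward, right)
    eye = np.asarray(sphere_center, float) - forward * sphere_radius  # center of the near face
    return eye, forward, right, up, sphere_radius
```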
Step 404: perform shadow rendering on the part of the virtual object within the shooting range, based on the shooting range of the shadow camera.
The shadow rendering of the part of the virtual object within the shooting range based on the shooting range of the shadow camera comprises the following steps: acquiring the illumination direction in the virtual scene; generating a shadow map based on a shooting range of the shadow camera and an illumination direction in the virtual scene, wherein the shadow map is obtained by performing shadow casting on a part of the virtual object in the shooting range from the illumination direction; and based on the shadow map, shadow rendering is carried out on the part of the virtual object in the shooting range.
In practical implementation, rendering generally involves coordinate transformations between multiple coordinate systems. The world coordinate system is the coordinate system in which the virtual scene itself is located; there is typically exactly one world coordinate system per virtual scene, it takes the scene base point of the virtual scene as its coordinate origin, and the positions of the pixel points of each virtual object in this system are called world coordinates.
The shadow map is equivalent to a map obtained by the light source casting shadows of the virtual objects in the virtual scene from the illumination direction; it is associated with a transformation matrix mapping from the world coordinate system into the model coordinate system of the light source.
In the above process, the terminal transforms the world coordinates of the part of the virtual object within the shooting range by the view matrix of the illumination direction, thereby transforming that part from the current view angle to the view angle of the illumination direction, and acquires the real-time image of that part under the illumination-direction view angle as the shadow map, where the shadow map provides the texture (UV) information of the shadow of the part of the virtual object within the shooting range. After the shadow map is obtained, shadow rendering is performed on the part of the virtual object within the shooting range based on the shadow map.
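A minimal sketch of the matrix math behind this shadow-map pass, with engine-specific rendering calls omitted; look_at, orthographic_projection, and shadow_map.sample are assumed helper routines, not APIs named in the embodiment:

```python
import numpy as np


def light_view_projection(eye, forward, up, half_extent):
    """Matrix taking world coordinates into the shadow camera's clip space, i.e. the
    space in which the shadow map is rendered and later sampled."""
    view = look_at(eye, eye + forward, up)                     # world -> light view space (assumed helper)
    proj = orthographic_projection(-half_extent, half_extent,  # left, right
                                   -half_extent, half_extent,  # bottom, top
                                   0.0, 2.0 * half_extent)     # near, far (assumed helper)
    return proj @ view


def in_shadow(world_pos, light_vp, shadow_map, bias=1e-3):
    """Shadow test for one world-space point against the rendered depth map."""
    clip = light_vp @ np.append(np.asarray(world_pos, float), 1.0)
    ndc = clip[:3] / clip[3]
    u, v = 0.5 * ndc[0] + 0.5, 0.5 * ndc[1] + 0.5  # shadow-map UV of the point
    stored_depth = shadow_map.sample(u, v)          # assumed sampler over the depth map
    return ndc[2] - bias > stored_depth
```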
By applying this embodiment, the shooting range of the main camera of the virtual scene is acquired; the part of the virtual object within the shooting range is determined from that shooting range; the shooting range of the shadow camera is determined from that part; and shadow rendering is performed on that part based on the shooting range of the shadow camera. The shooting range of the shadow camera can thus be dynamically adjusted according to the shooting range of the main camera, so that shadow rendering is performed only on the part of the virtual object within the shooting range; compared with rendering the shadow of the whole virtual object, this improves both shadow rendering efficiency and shadow quality.
In the following, an exemplary application of the embodiment of the present application in a practical application scenario is described, taking a humanoid virtual object in a virtual scene as an example. Here, a humanoid virtual object is a virtual object with a human form or human skeleton; such objects have in common that the body stands upright and is slim (aspect ratio smaller than 1), with a wide front and a narrow side.
In the related art, shadow rendering schemes for virtual objects generally calculate the shooting range of the shadow camera from the world-space bounding box of the skinned virtual object or from a manually set, fixed bounding sphere. Fig. 12A is a schematic view of a bounding box provided in the related art, and fig. 12B is a schematic view of a bounding sphere provided in the related art; referring to figs. 12A and 12B, the bounding box and the bounding sphere just enclose the virtual object.
Fig. 13 is a flowchart of a shadow rendering method of a virtual scene according to an embodiment of the present application, referring to fig. 13, the shadow rendering method of a virtual scene according to an embodiment of the present application includes:
step 1301: an initial bounding sphere is constructed.
In actual implementation, the pelvic bone position of the humanoid virtual object is first acquired. Because the pelvis is typically located close to the vertical center of the humanoid virtual object, it is convenient for determining the object's true center position. An offset value (Offset) in the skeleton space coordinate system is then set to move the anchor point to the true center of the humanoid virtual object's height. Finally, an initial radius is set according to the bounding box of the humanoid virtual object, and the initial bounding sphere is constructed with the center of the humanoid virtual object's height as the sphere center and the bounding box radius as the radius, so that it completely encloses the humanoid virtual object.
Fig. 14 is a schematic diagram of an initial bounding sphere provided by an embodiment of the present application. Referring to fig. 14, the circle in the figure represents the bounding sphere, which is constructed from the pelvic bone position (target body), the offset value, and the bounding box radius; the humanoid virtual object is located at the center of the initial bounding sphere, and the sphere center coincides with the center of the humanoid virtual object.
Step 1302: the height range of the initial enclosing sphere is compared to the screen height range.
In practical implementation, the shooting range of the main camera is first acquired; since the main camera uses perspective projection, referring to fig. 7, its shooting range has the shape of a cone. Here, the cone is abstracted into three views; referring to figs. 8A-8C, fig. 8A is a front view of the cone, where 801 denotes the near plane of the main camera, 802 denotes the far plane of the main camera, and 803 denotes the boundary lines (upper, lower, left, and right boundaries) of the shooting range of the main camera; fig. 8B is a top view of the cone and fig. 8C is a side view of the cone.
Since only the upper and lower boundaries of the screen are considered, the side view is used for the subsequent calculations. A straight line passing through the center of the initial bounding sphere and perpendicular to the horizontal plane is determined, the line segment between its intersections with the upper and lower boundary surfaces of the shooting range is obtained, and this line segment is then used to compare the height range of the bounding sphere with the screen height range.
Referring to fig. 8, the trapezoid represents the side view of the shooting range of the main camera; the straight line passing through the center position of the virtual object (the center of the initial bounding sphere) 801 intersects the upper and lower boundary surfaces of the shooting range at a first intersection point 802 and a second intersection point 803, and the line segment with the first intersection point 802 and the second intersection point 803 as end points is determined.
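A minimal 2D sketch of this side-view computation, assuming the main camera has no roll and ignoring near/far clipping; the parameter names are illustrative:

```python
import math


def frustum_segment_heights(cam_height, cam_pitch_rad, vertical_fov_rad, horizontal_distance):
    """Heights at which the vertical line through the sphere center meets the upper and
    lower boundary lines of the main camera's frustum, in the side view.

    cam_height          : height of the main camera.
    cam_pitch_rad       : pitch of the camera's forward direction (0 = horizontal).
    vertical_fov_rad    : full vertical field of view of the main camera.
    horizontal_distance : horizontal distance from the camera to the vertical line.
    """
    top = cam_height + horizontal_distance * math.tan(cam_pitch_rad + 0.5 * vertical_fov_rad)
    bottom = cam_height + horizontal_distance * math.tan(cam_pitch_rad - 0.5 * vertical_fov_rad)
    return top, bottom
```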
Step 1303a: when the initial bounding sphere is completely within the line segment height, no processing is performed.
Step 1303b: when the highest point of the initial enclosing ball exceeds the height range of the line segment, the highest point of the line segment is taken as an upper boundary, and the target enclosing ball is constructed.
Step 1303c: when the lowest point of the initial enclosing ball exceeds the height range of the line segment, the lowest point of the line segment is taken as a lower boundary, and the target enclosing ball is constructed.
Step 1303d: when the highest point and the lowest point of the initial enclosing ball exceed the height range of the line segment, the highest point of the line segment is taken as an upper boundary, and the lowest point of the line segment is taken as a lower boundary, so that the target enclosing ball is constructed.
Referring to fig. 9A, the initial bounding sphere is entirely within the line segment height range, meaning the height range of the bounding sphere is entirely within the screen height range, i.e., the humanoid virtual object is entirely within the shooting range of the main camera, so no processing is required.
Referring to fig. 9B, when the highest point of the initial bounding sphere exceeds the line segment height range, the highest point of the bounding sphere is outside the screen height range, that is, only the lower body of the humanoid virtual object is within the shooting range of the main camera. In this case the target bounding sphere needs to be constructed with the highest point of the line segment as the upper boundary; referring to fig. 10A, the target bounding sphere 1001 is constructed with the highest point of the line segment as the upper boundary and the lowest point of the initial bounding sphere as the lower boundary.
Referring to fig. 9C, when the lowest point of the initial bounding sphere exceeds the line segment height range, the lowest point of the bounding sphere is outside the screen height range, that is, only the upper body of the humanoid virtual object is within the shooting range of the main camera. In this case the target bounding sphere needs to be constructed with the lowest point of the line segment as the lower boundary; referring to fig. 10B, the target bounding sphere 1002 is constructed with the lowest point of the line segment as the lower boundary and the highest point of the initial bounding sphere as the upper boundary.
Referring to fig. 9D, when both the highest point and the lowest point of the initial bounding sphere exceed the line segment height range, both fall outside the screen height range, that is, only the middle part of the humanoid virtual object is within the shooting range of the main camera. In this case the target bounding sphere needs to be constructed with the highest point of the line segment as the upper boundary and the lowest point of the line segment as the lower boundary; referring to fig. 10C, the target bounding sphere 1003 is constructed accordingly.
Step 1304: the shooting range of the shadow camera is determined.
Here, if the target bounding sphere is not constructed, determining a photographing range of the shadow camera according to the initial bounding sphere; if the target bounding sphere is constructed, determining the shooting range of the shadow camera according to the target bounding sphere.
Determining the shooting range of the shadow camera from the initial bounding sphere is taken as an example. According to the illumination direction in the virtual scene, a bounding box tangent to the initial bounding sphere is determined: two of its faces are perpendicular to the illumination direction, the remaining faces are parallel to it, and the box is tangent to the bounding sphere. The range corresponding to this bounding box is taken as the shooting range of the shadow camera.
The method of determining the imaging range of the shadow camera from the target bounding sphere is the same as the method of determining the imaging range of the shadow camera from the initial bounding sphere.
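A minimal sketch tying steps 1303a to 1304 together; spheres are represented here as (center, radius) pairs, which is an assumption made purely for illustration:

```python
def shadow_camera_extents(initial_sphere, target_sphere=None):
    """Pick whichever sphere was produced and size the shadow camera's range from it."""
    center, radius = target_sphere if target_sphere is not None else initial_sphere
    # The light-aligned box tangent to the sphere is a cube of side 2 * radius, so the
    # orthographic width, height and depth of the shadow camera all equal 2 * radius.
    return center, 2.0 * radius
```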
Step 1305: shadow rendering is performed based on the shooting range of the shadow camera.
By applying the embodiment of the present application, when the main camera captures only part of the virtual object, the effective area of a 1024 x 1024 shadow map is significantly increased, and the final shadow quality is noticeably improved.
Take the case where the main camera captures only the upper half of the virtual object as an example. Fig. 15A is a schematic view of the shooting range of a shadow camera in the related art, and fig. 15B is a schematic view of the shooting range of a shadow camera provided in an embodiment of the present application; referring to figs. 15A and 15B, in the related art the shooting range of the shadow camera includes the entire virtual object, whereas in the present application it includes only the upper body of the virtual object.
Correspondingly, fig. 15A also illustrates the shadow map rendered by the shadow camera in the related art: when the main camera captures only the upper half of the humanoid virtual object, the shadow camera still renders the whole humanoid virtual object, and the effectively used area of the shadow map is only the part inside the dashed frame. Fig. 15B illustrates the shadow map rendered by the shadow camera according to an embodiment of the present application: the shadow camera renders only the upper half of the humanoid virtual object, and the effectively used area of the shadow map approaches 100%.
Fig. 16A is a schematic view of the shadow rendering effect in the related art, and fig. 16B is a schematic view of the shadow rendering effect provided by an embodiment of the present application. Referring to figs. 16A and 16B, in the related art the projection of the humanoid virtual object onto the neck and the projection of the bullets on the chest onto the skin show obvious jagged, shaking edges; in the present application, these projections have clear edges without jaggies.
The following continues the description of an exemplary structure of the shadow rendering apparatus 555 for a virtual scene provided by the embodiments of the present application implemented as software modules. In some embodiments, as shown in fig. 2, the software modules of the shadow rendering apparatus 555 for a virtual scene stored in the memory 550 may include:
an acquisition module 5551, configured to acquire a shooting range of a main camera of a virtual scene;
a first determining module 5552, configured to determine, according to the shooting range, the part of the virtual object within the shooting range;
a second determining module 5553, configured to determine the shooting range of a shadow camera according to the part of the virtual object within the shooting range;
and a rendering module 5554, configured to perform shadow rendering on the part of the virtual object within the shooting range based on the shooting range of the shadow camera.
In some embodiments, the acquisition module is further configured to acquire camera parameters of the main camera and the position of the main camera in the virtual scene, where the camera parameters include the view angle, the near-plane distance, and the far-plane distance of the main camera;
and acquiring a cone viewing range corresponding to the main camera based on the camera parameters and the positions, and taking the cone viewing range as a shooting range of the main camera.
In some embodiments, the first determining module is further configured to determine at least one intersection line where the boundary surface of the shooting range intersects the virtual object when the boundary surface of the shooting range intersects the virtual object;
And dividing the virtual object based on the at least one intersection line to obtain a part of the virtual object in the shooting range.
In some embodiments, the first determining module is further configured to determine a straight line passing through a center point of the virtual object and perpendicular to a horizontal plane;
acquiring a first intersection point and a second intersection point of the straight line intersecting with a boundary surface of the shooting range;
Determining a height range corresponding to the shooting range according to the positions of the first intersection point and the second intersection point;
And determining the position of the virtual object in the shooting range based on the height range corresponding to the shooting range and the height range of the virtual object.
In some embodiments, the first determining module is further configured to obtain an intersection of a height range corresponding to the shooting range and a height range of the virtual object;
And taking the part of the virtual object corresponding to the intersection as the part of the virtual object in the shooting range.
In some embodiments, the first determining module is further configured to obtain a side view of a shape corresponding to the shooting range;
Two points at which the straight line intersects the side of the side view are determined as a first intersection point and a second intersection point at which the straight line intersects the boundary surface of the photographing range.
In some embodiments, the first determining module is further configured to obtain a position of a pelvic bone of the virtual object, and an offset value of a bone space coordinate system;
Superposing the position of the pelvic bone and the offset value of the bone space coordinate system to obtain the center position of the virtual object;
Taking the central position as a sphere center, and constructing an initial surrounding sphere containing the virtual object;
the second determining module is further configured to determine, according to the shooting range, the portion of the initial bounding sphere within the shooting range, so as to determine the part of the virtual object within the shooting range.
In some embodiments, the first determining module is further configured to determine a straight line passing through the center of the initial enclosing ball and perpendicular to a horizontal plane;
acquiring a line segment of the straight line between an upper boundary surface and a lower boundary surface of the shooting range;
comparing the highest point of the line segment with the highest point of the initial enclosing ball, and the lowest point of the line segment with the lowest point of the initial enclosing ball;
selecting, as an upper boundary, the lower of the highest point of the line segment and the highest point of the initial bounding sphere, and selecting, as a lower boundary, the higher of the lowest point of the line segment and the lowest point of the initial bounding sphere;
Determining a portion of the initial bounding sphere between the upper boundary and the lower boundary as a portion of the initial bounding sphere within the photographing range.
In some embodiments, the second determining module is further configured to construct a target bounding sphere with the upper boundary and the lower boundary as boundaries of the bounding sphere when the initial bounding sphere is not fully within the shooting range;
and acquiring the shooting range of the shadow camera based on the target enclosing ball.
In some embodiments, the second determining module is further configured to obtain a shooting range of the shadow camera based on the initial bounding sphere when the initial bounding sphere is completely within the shooting range.
In some embodiments, the second determining module is further configured to construct a bounding sphere containing the part of the virtual object within the shooting range;
acquiring an illumination direction in a virtual scene;
Constructing a bounding box tangent to the bounding sphere based on the illumination direction and the bounding sphere, and determining a range corresponding to the bounding box as a shooting range of a shadow camera;
Wherein at least one face of the bounding box is perpendicular to the illumination direction.
In some embodiments, the rendering module is further configured to obtain an illumination direction in the virtual scene;
Generating a shadow map based on a shooting range of the shadow camera and an illumination direction in the virtual scene, wherein the shadow map is obtained by performing shadow casting on a part of the virtual object in the shooting range from the illumination direction;
and based on the shadow map, shadow rendering is carried out on the part of the virtual object in the shooting range.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device executes the shadow rendering method of the virtual scene according to the embodiment of the application.
Embodiments of the present application provide a computer readable storage medium having stored therein executable instructions which, when executed by a processor, cause the processor to perform a method provided by embodiments of the present application, for example, as shown in fig. 3.
In some embodiments, the computer readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; but may be a variety of devices including one or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, executable instructions may, but need not, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example in one or more scripts in a hypertext markup language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or distributed across multiple sites and interconnected by a communication network.
The foregoing descriptions are merely exemplary embodiments of the present application and are not intended to limit the protection scope of the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and scope of the present application shall fall within the protection scope of the present application.
Claims (14)
1. A shadow rendering method for a virtual scene, the method comprising:
acquiring camera parameters of a main camera and the position of the main camera in a virtual scene, wherein the camera parameters comprise the view angle, the near-plane distance and the far-plane distance of the main camera;
Acquiring a cone viewing range corresponding to the main camera based on the camera parameters and the position, and taking the cone viewing range as a shooting range of the main camera;
determining the part of the virtual object within the shooting range according to the shooting range;
constructing a surrounding sphere containing a part of the virtual object in the shooting range;
acquiring an illumination direction in a virtual scene;
Constructing a bounding box tangent to the bounding sphere based on the illumination direction and the bounding sphere, and determining a range corresponding to the bounding box as a shooting range of a shadow camera, wherein at least one surface of the bounding box is perpendicular to the illumination direction;
And based on the shooting range of the shadow camera, shadow rendering is carried out on the part of the virtual object in the shooting range.
2. The method of claim 1, wherein the determining, according to the shooting range, the part of the virtual object within the shooting range comprises:
When the boundary surface of the shooting range intersects with the virtual object, determining at least one intersection line of the boundary surface of the shooting range and the virtual object;
And dividing the virtual object based on the at least one intersection line to obtain a part of the virtual object in the shooting range.
3. The method of claim 1, wherein the determining, according to the shooting range, the part of the virtual object within the shooting range comprises:
Determining a straight line passing through a center point of the virtual object and perpendicular to a horizontal plane;
acquiring a first intersection point and a second intersection point of the straight line intersecting with a boundary surface of the shooting range;
Determining a height range corresponding to the shooting range according to the positions of the first intersection point and the second intersection point;
And determining the part of the virtual object within the shooting range based on the height range corresponding to the shooting range and the height range of the virtual object.
4. The method of claim 3, wherein the determining the part of the virtual object within the shooting range based on the height range corresponding to the shooting range and the height range of the virtual object comprises:
Acquiring an intersection of a height range corresponding to the shooting range and the height range of the virtual object;
And taking the part of the virtual object corresponding to the intersection as the part of the virtual object in the shooting range.
5. The method of claim 3, wherein the acquiring a first intersection point and a second intersection point at which the straight line intersects a boundary surface of the photographing range comprises:
acquiring a side view of a shape corresponding to the shooting range;
Two points at which the straight line intersects with the side of the side view are determined as a first intersection point and a second intersection point at which the straight line intersects with the boundary surface of the photographing range.
6. The method of claim 1, wherein before the determining, according to the shooting range, the part of the virtual object within the shooting range, the method further comprises:
Acquiring the position of the pelvic bone of the virtual object and an offset value of a bone space coordinate system;
Superposing the position of the pelvic bone and the offset value of the bone space coordinate system to obtain the center position of the virtual object;
Taking the central position as a sphere center, and constructing an initial surrounding sphere containing the virtual object;
correspondingly, the determining the part of the virtual object within the shooting range comprises:
And determining the part of the initial enclosing sphere in the shooting range according to the shooting range so as to determine the part of the virtual object in the shooting range.
7. The method of claim 6, wherein the determining the portion of the initial bounding sphere that is within the capture range based on the capture range comprises:
determining a straight line passing through the center of the initial enclosing ball and perpendicular to a horizontal plane;
acquiring a line segment of the straight line between an upper boundary surface and a lower boundary surface of the shooting range;
comparing the highest point of the line segment with the highest point of the initial enclosing ball, and the lowest point of the line segment with the lowest point of the initial enclosing ball;
selecting, as an upper boundary, the lower of the highest point of the line segment and the highest point of the initial bounding sphere, and selecting, as a lower boundary, the higher of the lowest point of the line segment and the lowest point of the initial bounding sphere;
Determining a portion of the initial bounding sphere between the upper boundary and the lower boundary as a portion of the initial bounding sphere within the photographing range.
8. The method of claim 7, wherein the determining the shooting range of the shadow camera according to the part of the virtual object within the shooting range comprises:
When the initial bounding sphere is not completely in the shooting range, constructing a target bounding sphere by taking the upper boundary and the lower boundary as the boundaries of the bounding sphere;
and acquiring the shooting range of the shadow camera based on the target enclosing ball.
9. The method of claim 6, wherein the determining the shooting range of the shadow camera according to the part of the virtual object within the shooting range comprises:
And acquiring the shooting range of the shadow camera based on the initial enclosing sphere when the initial enclosing sphere is completely in the shooting range.
10. The method of claim 1, wherein the shadow rendering of the portion of the virtual object within the capture range based on the capture range of the shadow camera comprises:
Acquiring the illumination direction in the virtual scene;
Generating a shadow map based on a shooting range of the shadow camera and an illumination direction in the virtual scene, wherein the shadow map is obtained by performing shadow casting on a part of the virtual object in the shooting range from the illumination direction;
and based on the shadow map, shadow rendering is carried out on the part of the virtual object in the shooting range.
11. A shadow rendering apparatus for a virtual scene, the apparatus comprising:
an acquisition module, configured to acquire camera parameters of a main camera and a position of the main camera in a virtual scene, wherein the camera parameters comprise a view angle, a near-plane distance and a far-plane distance of the main camera; and acquire a cone viewing range corresponding to the main camera based on the camera parameters and the position, the cone viewing range being taken as a shooting range of the main camera;
a first determining module, configured to determine the part of the virtual object within the shooting range according to the shooting range;
a second determining module, configured to construct a bounding sphere containing the part of the virtual object within the shooting range, acquire an illumination direction in the virtual scene, construct a bounding box tangent to the bounding sphere based on the illumination direction and the bounding sphere, and determine a range corresponding to the bounding box as a shooting range of a shadow camera, wherein at least one face of the bounding box is perpendicular to the illumination direction; and
a rendering module, configured to perform shadow rendering on the part of the virtual object within the shooting range based on the shooting range of the shadow camera.
12. A computer device, comprising:
A memory for storing executable instructions;
a processor for implementing the shadow rendering method of a virtual scene according to any one of claims 1 to 10 when executing executable instructions stored in said memory.
13. A computer readable storage medium storing executable instructions for implementing the shadow rendering method of a virtual scene according to any one of claims 1 to 10 when executed by a processor.
14. A computer program product comprising computer instructions which, when executed by a processor, implement the shadow rendering method of a virtual scene of any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011379577.XA CN112396683B (en) | 2020-11-30 | 2020-11-30 | Shadow rendering method, device, equipment and storage medium for virtual scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112396683A CN112396683A (en) | 2021-02-23 |
CN112396683B true CN112396683B (en) | 2024-06-04 |
Family
ID=74604832
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011379577.XA Active CN112396683B (en) | 2020-11-30 | 2020-11-30 | Shadow rendering method, device, equipment and storage medium for virtual scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112396683B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113769382A (en) * | 2021-09-10 | 2021-12-10 | 网易(杭州)网络有限公司 | Method, device and equipment for eliminating model in game scene and storage medium |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102663807A (en) * | 2012-04-19 | 2012-09-12 | 北京天下图数据技术有限公司 | Visibility analysis rendering method base on principle of stencil shadow |
CN102768765A (en) * | 2012-06-25 | 2012-11-07 | 南京安讯网络服务有限公司 | Real-time soft shadow rendering method for point light sources |
GB201413145D0 (en) * | 2014-07-24 | 2014-09-10 | Advanced Risc Mach Ltd | Graphics Processing System |
WO2015070618A1 (en) * | 2013-11-18 | 2015-05-21 | 华为技术有限公司 | Method and device for global illumination rendering under multiple light sources |
CN105913478A (en) * | 2015-12-28 | 2016-08-31 | 乐视致新电子科技(天津)有限公司 | 360-degree panorama display method and display module, and mobile terminal |
CN107952241A (en) * | 2017-12-05 | 2018-04-24 | 北京像素软件科技股份有限公司 | Render control method, device and readable storage medium storing program for executing |
CN108154548A (en) * | 2017-12-06 | 2018-06-12 | 北京像素软件科技股份有限公司 | Image rendering method and device |
CN110152291A (en) * | 2018-12-13 | 2019-08-23 | 腾讯科技(深圳)有限公司 | Rendering method, device, terminal and the storage medium of game picture |
CN110585713A (en) * | 2019-09-06 | 2019-12-20 | 腾讯科技(深圳)有限公司 | Method and device for realizing shadow of game scene, electronic equipment and readable medium |
CN111080798A (en) * | 2019-12-02 | 2020-04-28 | 网易(杭州)网络有限公司 | Visibility data processing method of virtual scene and rendering method of virtual scene |
CN111105491A (en) * | 2019-11-25 | 2020-05-05 | 腾讯科技(深圳)有限公司 | Scene rendering method and device, computer readable storage medium and computer equipment |
CN111476877A (en) * | 2020-04-16 | 2020-07-31 | 网易(杭州)网络有限公司 | Shadow rendering method and device, electronic equipment and storage medium |
CN111589114A (en) * | 2020-05-12 | 2020-08-28 | 腾讯科技(深圳)有限公司 | Virtual object selection method, device, terminal and storage medium |
CN111790150A (en) * | 2020-06-18 | 2020-10-20 | 完美世界(北京)软件科技发展有限公司 | Shadow data determination method, device, equipment and readable medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4948218B2 (en) * | 2007-03-22 | 2012-06-06 | キヤノン株式会社 | Image processing apparatus and control method thereof |
CN106412556B (en) * | 2016-10-21 | 2018-07-17 | 京东方科技集团股份有限公司 | A kind of image generating method and device |
JP6470356B2 (en) * | 2017-07-21 | 2019-02-13 | 株式会社コロプラ | Program and method executed by computer for providing virtual space, and information processing apparatus for executing the program |
Non-Patent Citations (1)
- Tan Tongde et al., "An Improved Shadow Generation Algorithm Based on Shadow Mapping" (一种基于Shadow Mapping的阴影生成改进算法), Computer Engineering and Applications (《计算机工程与应用》), 2008-11-11, No. 32, pp. 169-172 *
Also Published As
Publication number | Publication date |
---|---|
CN112396683A (en) | 2021-02-23 |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- REG: Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40038847)
- GR01: Patent grant