CN115430153A - Collision detection method, device, apparatus, medium, and program in virtual environment
- Publication number: CN115430153A (application CN202211054142.7A)
- Authority: CN (China)
- Prior art keywords: virtual, virtual object, line segment, position point, collision
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A63F13/577 — Simulating properties, behaviour or motion of objects in the game world using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars (A: Human necessities; A63: Sports, games, amusements; A63F: Card, board, or roulette games; indoor games using small moving playing bodies; video games; games not otherwise provided for; A63F13/00: Video games; A63F13/55: Controlling game characters or game objects based on the game progress; A63F13/57: Simulating properties, behaviour or motion of objects in the game world)
- A63F13/52 — Controlling the output signals based on the game progress involving aspects of the displayed game scene (A63F13/00: Video games; A63F13/50: Controlling the output signals based on the game progress)
Abstract
The application discloses a collision detection method, device, apparatus, medium, and program in a virtual environment, relating to the field of computer technology. The method comprises the following steps: acquiring a first position point of a first virtual object in a virtual scene; determining a second position point of the first virtual object in the virtual scene based on the moving speed and moving direction of the first virtual object, wherein the second position point is the position point the first virtual object reaches after moving in the moving direction at the moving speed for a preset time interval after the first position point; acquiring a connecting line segment between the first position point and the second position point; and, in response to an intersection relationship between the connecting line segment and a second virtual object in the virtual scene, displaying a collision effect corresponding to the second virtual object. In this way, even when the connecting line segment is long, the collision effect of a second virtual object that intersects the segment can still be displayed, which improves human-computer interaction efficiency. The method and device can be applied to various scenes such as cloud technology and artificial intelligence.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, a medium, and a program for collision detection in a virtual environment.
Background
With the improvement of cultural and entertainment living standards, people's expectations of and demands on virtual worlds keep rising, and games, as one expression of the virtual world, have become a channel of stress relief for many people. In current game applications, a player can control a virtual object to walk, run, fight, and perform other actions in a virtual environment, and can take part in various game tasks, for example by picking up virtual props.
In the related art, the relative position between a first virtual object and a second virtual object is usually detected in real time, and when the second virtual object is located within a certain preset distance of the first virtual object, an interaction effect between the two is displayed. For example: when the second virtual object is a virtual prop, a pickup control for the virtual prop is displayed, or the virtual prop is displayed in a glowing state.
However, when the first virtual object moves quickly, the second virtual object may fail to interact with it in time even while located within the interactable preset distance, so the first virtual object cannot interact with the second virtual object within the limited time, which harms the player's game experience.
Disclosure of Invention
Embodiments of the present application provide a collision detection method, apparatus, device, medium, and program in a virtual environment that allow a first virtual object to interact, as far as possible, with any second virtual object that intersects the connecting line segment even when that segment is long, improving both the player's game experience and human-computer interaction efficiency. The technical scheme is as follows.
In one aspect, a method for collision detection in a virtual environment is provided, the method comprising:
acquiring a first position point of a first virtual object in a virtual scene, wherein the first position point is the current position point of the first virtual object in the virtual scene;
determining a second position point of the first virtual object in the virtual scene based on the moving speed and the moving direction of the first virtual object, wherein the second position point is a position point reached by the first virtual object moving to the moving direction at the moving speed for a preset time interval after the first position point;
acquiring a connecting line segment between the first position point and the second position point;
and in response to an intersection relationship between the connecting line segment and a second virtual object in the virtual scene, displaying a collision effect corresponding to the second virtual object.
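Illustratively, the four steps above can be sketched as a single per-frame routine. The following C++ sketch is an illustration only under stated assumptions: the Vec3 type, the function names, and the stubbed intersection test are not defined by this application.

```cpp
// A minimal 3D vector type; assumed for this sketch, not defined by the application.
struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
};

struct Segment { Vec3 a, b; };  // connecting line segment between two position points

// Second position point: where the first virtual object arrives after moving
// in its moving direction at its moving speed for the preset time interval.
Vec3 secondPositionPoint(const Vec3& firstPoint, const Vec3& unitDirection,
                         float movingSpeed, float presetInterval) {
    return firstPoint + unitDirection * (movingSpeed * presetInterval);
}

// Stub; a concrete segment-versus-collision-box test is sketched under step 450 below.
bool segmentIntersectsSecondObject(const Segment&) { return false; }

void collisionDetectionTick(const Vec3& firstPoint, const Vec3& unitDirection,
                            float movingSpeed, float presetInterval) {
    const Segment line{firstPoint, secondPositionPoint(firstPoint, unitDirection,
                                                       movingSpeed, presetInterval)};
    if (segmentIntersectsSecondObject(line)) {
        // Display the collision effect corresponding to the second virtual
        // object, e.g. show a pickup control or a glowing state.
    }
}
```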
In another aspect, there is provided a collision detection apparatus in a virtual environment, the apparatus comprising:
a position point obtaining module, configured to obtain a first position point of a first virtual object in a virtual scene, where the first position point is a position point where the first virtual object is currently located in the virtual scene;
a position point determining module, configured to determine, based on a moving speed and a moving direction of the first virtual object, a second position point of the first virtual object in a virtual scene, where the second position point is a position point reached by the first virtual object moving in the moving direction at the moving speed for a preset time interval after the first position point;
the line segment acquisition module is used for acquiring a connecting line segment between the first position point and the second position point;
and the effect display module is used for responding to the intersection relation between the connecting line segment and a second virtual object in the virtual scene and displaying the collision effect corresponding to the second virtual object.
In another aspect, a computer device is provided, the computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement a collision detection method in a virtual environment as described in any of the embodiments of the present application.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by a processor to implement the collision detection method in a virtual environment as described in any of the embodiments of the present application.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the collision detection method in the virtual environment according to any of the above embodiments.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
and when an intersection relation exists between the connecting line segment and the second virtual object, displaying the pickup-allowed effect of the second virtual object. The effect of allowing picking up of the second virtual object can be displayed only by depending on the distance between the first virtual object and the second virtual object, and even if the moving speed of the first virtual object is high and the distance between the second position point reached by the first virtual object and the second virtual object is large, whether the collision effect corresponding to the second virtual object is displayed or not can be determined through the intersection relation between the line segment and the second virtual object, so that a player is assisted in controlling the first virtual object, a more efficient interaction process is performed on the second virtual object passed by the line segment, the experience of the player on a game is improved, and the human-computer interaction efficiency is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a block diagram of a terminal according to an exemplary embodiment of the present application;
FIG. 2 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 3 is a flow chart of a method for collision detection in a virtual environment provided by an exemplary embodiment of the present application;
FIG. 4 is a flow chart of a method for collision detection in a virtual environment provided by another exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of a collision detection box provided by an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of determining an intersection relationship provided by an exemplary embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a determination of an intersection relationship provided by another exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of determining an intersection relationship provided by yet another exemplary embodiment of the present application;
FIG. 9 is a flow chart of a method for collision detection in a virtual environment provided by yet another exemplary embodiment of the present application;
FIG. 10 is a flow chart of a method for collision detection in a virtual environment provided by yet another exemplary embodiment of the present application;
FIG. 11 is a flowchart illustrating a method for collision detection in a virtual environment according to an exemplary embodiment of the present application;
FIG. 12 is an interface schematic diagram of a collision detection method in a virtual environment provided by an exemplary embodiment of the present application;
FIG. 13 is an interface schematic diagram of a collision detection method in a virtual environment provided by another exemplary embodiment of the present application;
FIG. 14 is a flowchart illustrating a method for collision detection in a virtual environment according to another exemplary embodiment of the present application;
FIG. 15 is a block diagram of a collision detection device in a virtual environment provided by an exemplary embodiment of the present application;
FIG. 16 is a block diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, embodiments of the present application are described in further detail below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments herein without creative effort fall within the protection scope of the present application.
First, terms referred to in the embodiments of the present application will be briefly described.
Virtual world: the virtual scene displayed (or provided) by an application program when it runs on a terminal. The virtual scene may be a semi-simulated, semi-fictional virtual environment or a purely fictional virtual environment, and may be any one of a two-dimensional, 2.5-dimensional, or three-dimensional virtual scene; this is not limited in the present application. For example, the virtual scene may include sky, land, and sea, and the land may include environmental elements such as deserts and cities; the user can control a virtual object to move within the scene. The virtual scene may also include virtual objects, for example throwable items, buildings, vehicles, and props used for equipping oneself or interacting with other virtual objects, and can also simulate real environments in different weather, such as sunny, rainy, foggy, or night conditions. This variety of scene elements enhances the diversity and realism of the virtual scene.
Virtual object: a movable object in a virtual scene, such as a virtual character, a virtual animal, or an animation character. The virtual object is an avatar representing the user in the virtual scene. The virtual scene may contain multiple virtual objects, each with its own shape and volume occupying part of the space in the scene. Optionally, a virtual object is a character controlled through a client, an Artificial Intelligence (AI) placed in the virtual environment through training, or a Non-Player Character (NPC) set in the virtual scene. Optionally, the virtual object is a virtual character competing in the virtual scene. Optionally, the number of virtual objects in the virtual scene is preset or dynamically determined by the number of clients joining the scene; this is not limited in the embodiments of the present application.
Virtual prop: a prop that a virtual object in a virtual scene can use. Taking shooting games as an example, such games provide throwable props, virtual launchers, and other shooting props, where a virtual projectile is the object a virtual launcher fires when a launch operation is executed; throwable and shooting props can injure a hit virtual object. There are also skill props, such as a virtual chip, which grant the equipped virtual object corresponding operation permissions. Virtual props can also help a virtual object achieve a particular goal; for example, a virtual smoke bomb can help conceal the virtual object's figure. The type of virtual prop is not limited in the embodiments of the present application.
In the related art, the relative position between the first virtual object and the second virtual object is usually detected in real time, and when the second virtual object is located within a certain preset distance of the first virtual object, the interaction effect between them is displayed; for example, when the second virtual object is a virtual prop, a pickup control for the virtual prop is displayed, or the virtual prop is displayed in a glowing state. However, when the first virtual object moves quickly, the second virtual object may fail to interact with it in time even while located within the interactable preset distance, so the first virtual object cannot interact with the second virtual object within the limited time, which harms the player's game experience.
In the embodiments of the present application, a collision detection method in a virtual environment is provided, so that even when the connecting line segment is long, the collision effect of a second virtual object intersecting the segment can be displayed as far as possible, improving the player's game experience and human-computer interaction efficiency. The collision detection method in the virtual environment can be applied to various game scenes, such as mobile game scenes and PC (client) game scenes, which is not limited in the embodiments of the present application.
It should be noted that information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals referred to in this application are authorized by the user or sufficiently authorized by various parties, and the collection, use, and processing of the relevant data is required to comply with relevant laws and regulations and standards in relevant countries and regions. For example, the location point data and the like referred to in the present application are obtained with sufficient authority.
Fig. 1 shows a block diagram of an electronic device according to an exemplary embodiment of the present application. The electronic device 100 includes: an operating system 120 and application programs 122.
Operating system 120 is the base software that provides applications 122 with secure access to computer hardware.
Application 122 is an application that supports a virtual environment. Optionally, application 122 is an application that supports a three-dimensional virtual environment. The application 122 may be any one of a virtual reality application, a three-dimensional map program, an auto chess game, an educational game, a Third-Person Shooting game (TPS), a First-Person Shooting game (FPS), a Multiplayer Online Battle Arena game (MOBA), and a multiplayer gunfight survival game. The application 122 may be a stand-alone application, such as a stand-alone three-dimensional game program, or a network-connected application.
Next, an implementation environment related to the embodiment of the present application is described, and please refer to fig. 2 schematically, in which a terminal 210, a server 220, and a communication network 230 are related, where the terminal 210 and the server 220 are connected through the communication network 230.
The terminal 210 runs a target application that supports a virtual environment. Illustratively, the terminal 210 displays a virtual scene through the target application, the virtual scene including a first virtual object, a second virtual object, and so on. After acquiring a first position point of the first virtual object in the virtual scene, the terminal acquires the second position point the first virtual object reaches after moving in its moving direction at its moving speed for a preset time interval, determines the connecting line segment between the two points, and transmits the segment data to the server 220. The server 220 determines, from the intersection relationship between the connecting line segment and the second virtual object in the virtual scene, a display result indicating whether to display the collision effect corresponding to the second virtual object, and feeds the result back to the terminal 210. When the display result indicates that the collision effect should be displayed, the terminal 210 displays the collision effect corresponding to the second virtual object on its interface, indicating that the first virtual object interacts with the second virtual object.
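Illustratively, the terminal-server exchange described above might carry data shaped as follows; all type and field names here are assumptions for illustration and are not defined by this application.

```cpp
// Hypothetical message layouts for the terminal/server split described above.
struct SegmentMessage {        // terminal -> server
    int   firstObjectId;       // which first virtual object the segment belongs to
    float firstPoint[3];       // first position point (current position)
    float secondPoint[3];      // predicted point after the preset time interval
};

struct DisplayResult {         // server -> terminal
    int  secondObjectId;       // the intersected second virtual object, if any
    bool showCollisionEffect;  // whether the terminal displays the collision effect
};
```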
It should be noted that the above terminals include, but are not limited to, mobile terminals such as mobile phones, tablet computers, portable laptop computers, intelligent voice interaction devices, smart home appliances, and vehicle-mounted terminals, and can also be implemented as desktop computers. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, Content Delivery Network (CDN), big data, and artificial intelligence platforms.
The Cloud technology (Cloud technology) is a hosting technology for unifying series resources such as hardware, application programs, networks and the like in a wide area network or a local area network to realize calculation, storage, processing and sharing of data.
In some embodiments, the servers described above may also be implemented as nodes in a blockchain system.
With reference to the above noun introduction and application scenario, the method for detecting a collision in a virtual environment provided by the present application is described, taking the application of the method to a terminal as an example, as shown in fig. 3, the method includes the following steps 310 to 340.
In step 310, a first position point of a first virtual object in a virtual scene is obtained.
Wherein the first position point is a position point where the first virtual object is currently located in the virtual scene.
Illustratively, the first virtual object is a virtual character controlled by a player, for example: a virtual mage controlled by the player; or a virtual pet controlled by the player; or a combat hero that the player controls while it rides a virtual mount, and so on.
Optionally, after the game application is opened, the game application running on the terminal displays the virtual scene corresponding to the game, for example a virtual battlefield scene, a virtual town scene, or a desert scene. The virtual scene includes the first virtual object controlled by the player and at least one of various virtual elements, such as virtual ground on which the first virtual object moves, virtual sky that improves the realism of the scene, virtual city walls, virtual sea, and virtual buildings.
Schematically, in the virtual scene, a world coordinate system is established with a preset point as its origin and mutually perpendicular straight lines as its axes. The position point of the player-controlled first virtual object at the current moment in this world coordinate system is determined and taken as the first position point. For example: the position coordinates of the first virtual object in the world coordinate system corresponding to the virtual scene are determined and used as the coordinates of the first position point; that is, the position information of the first position point in the virtual scene is determined from the position coordinates of the first virtual object at the current moment.
Alternatively, in the virtual scene, an object coordinate system is established with the current position of the player-controlled first virtual object as its origin, so that the origin of the object coordinate system is the first position point of the first virtual object in the virtual scene. Alternatively, different screen coordinate systems are determined based on pixel differences between the terminals displaying the virtual scene interface. For example: the terminal displaying the virtual scene interface is a computer; using the display pixels of the computer screen as the unit, the lower-left corner of the screen is taken as the origin, and the screen's length, width, and the camera direction in the virtual scene are taken as the axes to establish a screen coordinate system; the position point of the first virtual object at the current moment in this screen coordinate system is then determined and taken as the first position point.
In an alternative embodiment, the speed and direction of movement of the first virtual object is determined based on the player's control of the first virtual object.
Illustratively, in a PC game scene, the player controls the first virtual object through external devices such as a keyboard, mouse, or gamepad. For example: the player controls the first virtual object to move in the virtual scene through designated keys on the keyboard (such as the A key, the → key, and so on); or the player controls the first virtual object to perform a normal attack by clicking the left mouse button; or the player changes the orientation of the first virtual object by moving the mouse; or the player controls the first virtual object to move in the corresponding direction (that is, move in the moving direction) through the direction control on the gamepad. Alternatively, the player may also control the moving speed, movement mode, and so on of the first virtual object by long-pressing designated keys on the keyboard, for example: long-pressing the A key to control the first virtual object to sprint to the left, or long-pressing the space key to control the first virtual object to jump.
Illustratively, in a mobile game scene, the player controls the first virtual object through the terminal screen, a gamepad, or other devices. Optionally, the game screen displayed by the terminal includes a direction control, and the player controls the first virtual object to move in the corresponding direction in the virtual scene (that is, move in the moving direction) by sliding on the direction control, where the moving speed is usually the default moving speed of the first virtual object. Optionally, the player may also control the moving speed of the first virtual object through the direction control, for example: long-pressing the direction control along the moving direction to make the first virtual object sprint in that direction. The sprint speed is greater than the default moving speed used when the direction control is not long-pressed; the default speed and the sprint speed are both moving speeds of the first virtual object under different conditions.
Optionally, a "flash control" is displayed on a screen corresponding to the terminal, and based on a trigger operation of the "flash control" by a player through a mouse, a keyboard, a trigger, and the like, the first virtual object is controlled to run at an accelerated speed in the direction of the first virtual object. For example, a player clicks the 'flash control' through a mouse to realize the trigger operation of the 'flash control'; or, the player realizes the triggering operation of the 'flash control' by clicking a preset shortcut key position (such as a shortcut function of realizing flash through an O key position) on the keyboard; or the player realizes the trigger operation of the 'flash control' and the like through the click operation of the 'flash control' on the screen.
Or the virtual shoe in the virtual scene is a virtual prop for improving the moving speed of the first virtual object, and the first virtual object accelerates to run in the virtual scene in response to the player equipping the first virtual object with the virtual shoe; or, the virtual backpack in the virtual scene is a virtual item that slows down the moving speed of the first virtual object, and the first virtual object moves in the virtual scene in a deceleration manner in response to the player equipping the first virtual object with the virtual backpack, and the like. Here, the moving speed of the accelerated running, the moving speed of the decelerated movement, and the like are moving speeds of the first virtual object in different cases.
It should be noted that the above is only an illustrative example, and the present invention is not limited to this.
In an alternative embodiment, the speed and direction of movement of the first virtual object is determined based on default settings of the system.
Illustratively, the default configuration of the system is "automatically control the first virtual object to move in the virtual scene when the first virtual object does not move within a preset time period". For example: the preset time is 5 minutes, when the terminal runs the game application program and detects that the first virtual object does not move within 5 minutes, the first virtual object is automatically controlled to move towards the teammate direction at the default moving speed; alternatively, when it is detected that the first virtual object has not moved within 5 minutes, the first virtual object is automatically controlled to move in a right direction in a running acceleration manner, and so on.
In step 320, a second position point of the first virtual object in the virtual scene is determined based on the moving speed and the moving direction of the first virtual object.
The second position point is the position point the first virtual object reaches after moving in the moving direction at the moving speed for a preset time interval after the first position point.
Optionally, after the first position point, the first virtual object moves under the player's control in the player-controlled moving direction at the player-controlled moving speed, and the position point it reaches after moving for the preset time interval is taken as the second position point.
For example: the preset time interval is 0.1 second, the player controls the first virtual object to accelerate in the southeast direction, and the position point reached after the first virtual object moves in the southeast direction (moving direction) for 0.1 second at the moving speed of the accelerated running is taken as the second position point.
Optionally, after the first location point, the control process for the first virtual object is implemented based on a default configuration of the system, so that the first virtual object moves to a default moving direction of the system based on a default moving speed of the system, and a location point reached by the first virtual object after moving for a preset time interval is taken as the second location point.
For example: the preset time interval is 1 second, when the terminal detects that the player does not control the first virtual object within 3 minutes (preset duration configured by default of the system), the terminal automatically controls the first virtual object to move towards the direction of the teammate at the default moving speed, and a position point which is reached after the first virtual object moves towards the direction of the teammate (moving direction) for 1 second at the default moving speed is used as a second position point.
In an alternative embodiment, the position information of the second location point is determined by the position coordinates of the first virtual object in the virtual scene.
Schematically, the central point of the first virtual object is determined, and the position coordinates of that central point in the virtual scene are used as the position coordinates of the second position point; alternatively, the position coordinates of the first virtual object on a virtual plane in the virtual scene are determined and used as the position coordinates of the second position point.
Optionally, in the virtual scene, a world coordinate system is established with a preset point as its origin and mutually perpendicular straight lines as its axes; the position coordinates reached by the player-controlled first virtual object after moving from the first position point in the moving direction at the moving speed for the preset time interval are determined in this coordinate system and used as the second position point of the first virtual object in the virtual scene.
Alternatively, in the virtual scene, an object coordinate system is established with the first position point of the player-controlled first virtual object as its origin, and the position coordinates reached after moving from the first position point in the moving direction at the moving speed for the preset time interval are determined and used as the second position point.
Alternatively, a screen coordinate system is established based on the pixel information of the terminal displaying the virtual scene interface, and the position coordinates reached by the first virtual object after moving from the first position point in the moving direction at the moving speed for the preset time interval are determined in this coordinate system and used as the second position point.
It should be noted that the above is only an illustrative example, and the present invention is not limited to this.
In step 330, a connecting line segment between the first position point and the second position point is acquired.
Optionally, after the first position point and the second position point of the first virtual object in the virtual scene are determined, the two points are connected to obtain the connecting line segment between the first position point and the second position point.
Illustratively, the virtual scene is implemented as a three-dimensional (3D) space, in which the first position point is a point M; after the first virtual object moves northeast (the moving direction) for 0.1 second at a moving speed of 400 yards/second, it reaches the second position point, represented as point N, and points M and N are connected to obtain the connecting line segment between the first position point and the second position point.
Optionally, a world coordinate system is established according to the virtual scene, and the position information of the first position point and the second position point is represented by position coordinates.
Illustratively, when the virtual scene is implemented as a two-dimensional (2D) space, a two-dimensional world coordinate system is established, in which the first position point and the second position point are represented by x-axis and y-axis positions, for example: the first position point is denoted (2,3), the second position point (3,5), and so on. Alternatively, when the virtual scene is implemented as a three-dimensional (3D) space, the first position point and the second position point are represented by x-axis, y-axis, and z-axis positions, for example: the first position point is denoted (2,3,1), the second position point (3,6,5), and so on.
The coordinate values of the first position point and the second position point on the x-axis, y-axis, and z-axis may each be the same or different; that is, the connecting line segment between the two points may be parallel to the x-axis, y-axis, or z-axis, or may cross them. The above is merely exemplary, and the present disclosure is not limited thereto.
Optionally, based on the first position point and the second position point, the length of the connecting line segment between them is determined, that is: the distance between the first position point and the second position point in the virtual scene is determined.
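Illustratively, this distance is the Euclidean length of the connecting line segment; a minimal sketch, assuming 3D world coordinates:

```cpp
#include <cmath>

// Length of the connecting line segment, i.e. the distance between the first
// position point a and the second position point b in world coordinates.
float segmentLength(const float a[3], const float b[3]) {
    const float dx = b[0] - a[0], dy = b[1] - a[1], dz = b[2] - a[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}
```

With the figures used elsewhere in this description (a moving speed of 400 yards/second and a 0.1-second interval), the connecting line segment is 40 yards long.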
In step 340, in response to an intersection relationship between the connecting line segment and a second virtual object in the virtual scene, a collision effect corresponding to the second virtual object is displayed.
Optionally, the second virtual object includes at least one of a target virtual character and a target virtual prop in the virtual scene. Illustratively, the collision effect is used to indicate the interaction between the first virtual object and the second virtual object.
In an optional embodiment, when the second virtual object is implemented as a target virtual character, in response to the intersection relationship between the connecting line segment and the target virtual character in the virtual scene, the interactive effect between the first virtual object and the target virtual character is displayed, and the interactive effect is used as a collision effect.
Optionally, when the connecting line segment and the target virtual character in the virtual scene have an intersection relationship, the first virtual object interacts with the target virtual character in different interaction modes based on the faction (camp) to which the target virtual character belongs.
Illustratively, when the connecting line segment and a target virtual character in the virtual scene have an intersection relationship, if the target virtual character is an enemy virtual character, the first virtual object starts an attack on the target virtual character; or, if the target virtual character is a friend virtual character, the first virtual object initiates treatment on the second virtual object, and the like.
Optionally, different ways of interaction are determined based on the player's settings for the game. For example: when the connection line segment is intersected with a target virtual character in a virtual scene, a first virtual object initiates a chat request to the target virtual character; alternatively, the first virtual object sends a synchronization request or the like to the target avatar.
The attack effect generated by initiating an attack, the treatment effect generated by initiating a treatment, the request effect generated by initiating a chat request, the synchronization request, and the like can all be used as the interaction effect, which is not limited in the embodiment of the present application.
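Illustratively, this faction-dependent branching can be summarized in a short sketch; the enum, function name, and commented-out actions are illustrative assumptions, not part of this application.

```cpp
// Hypothetical dispatch on the target virtual character's faction.
enum class Faction { Enemy, Friendly };

void interactWithTarget(Faction faction) {
    switch (faction) {
        case Faction::Enemy:
            // startAttack(target);   // attack effect as the collision effect
            break;
        case Faction::Friendly:
            // startHealing(target);  // treatment effect as the collision effect
            break;
    }
}
```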
In an optional embodiment, when the second virtual object is implemented as a target virtual item, in response to the intersection relationship between the connecting line segment and the target virtual item in the virtual scene, a collision effect corresponding to the target virtual item is displayed.
Illustratively, when the connecting line segment and a target virtual prop in the virtual scene have an intersection relationship, the first virtual object picks up the target virtual prop; or the target virtual prop is displayed with a pickup-allowed effect, which indicates that the first virtual object can pick up the target virtual prop.
In an optional embodiment, displaying the collision effect corresponding to the second virtual object is described by taking the second virtual object as a target virtual prop as an example.
A plurality of virtual props that assist the first virtual object in the game are also displayed in the virtual scene, for example: virtual weapons such as virtual knives, virtual firearms, and virtual smoke bombs; and virtual accessories such as virtual hats and virtual shoes. Through these virtual props, players can participate in the game more effectively.
Illustratively, the player controls the first virtual object to pick up virtual props in the virtual scene so as to equip it with the props the player needs, for example: the player controls the first virtual object to pick up a virtual smoke bomb to resist other virtual objects (such as NPC monsters in the virtual scene, or virtual objects belonging to other teams); or the player controls the first virtual object to pick up virtual shoes in the virtual scene, so that functions such as faster running are realized by replacing the virtual shoes.
Optionally, when the second virtual object is implemented as a target virtual prop in the virtual scene, in response to the intersection relationship between the connecting line segment and the target virtual prop, a pickup-allowed effect corresponding to the target virtual prop is displayed, and this pickup-allowed effect serves as the collision effect.
The pickup-allowed effect indicates that the target virtual prop can be picked up.
Illustratively, the intersection relationship indicates that the connecting line segment intersects the space corresponding to the second virtual object. For example: when the virtual scene is implemented as a 3D space, the space H of the second virtual object in the 3D space is determined, and whether the connecting line segment along which the first virtual object moves from the first position point to the second position point intersects the space H is judged; or, when the virtual scene is implemented as a 2D space, the region G of the second virtual object in the 2D space is determined, and whether the connecting line segment intersects the region G is judged.
Schematically, the intersection relationship is judged by the intersection points between the connecting line segment and the second virtual object: when one intersection point exists between the connecting line segment and the second virtual object, it is determined that an intersection relationship exists; likewise, when two intersection points exist, it is determined that an intersection relationship exists between the connecting line segment and the second virtual object.
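Illustratively, for a spherical second virtual object the intersection points can be counted by solving a quadratic along the segment. The following sketch (with assumed types and names) returns 0, 1, or 2 intersection points; under the rule above, a count of at least 1 establishes the intersection relationship.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& u, const Vec3& v) { return u.x * v.x + u.y * v.y + u.z * v.z; }
static Vec3  sub(const Vec3& u, const Vec3& v) { return {u.x - v.x, u.y - v.y, u.z - v.z}; }

// Returns how many points of segment AB lie on the sphere (0, 1, or 2).
// Solves |A + t(B - A) - C|^2 = r^2 for t in [0, 1].
int segmentSphereIntersections(const Vec3& a, const Vec3& b,
                               const Vec3& center, float radius) {
    const Vec3 d = sub(b, a), m = sub(a, center);
    const float qa = dot(d, d);                    // quadratic coefficients
    const float qb = 2.0f * dot(m, d);
    const float qc = dot(m, m) - radius * radius;
    if (qa == 0.0f) return 0;                      // degenerate (zero-length) segment
    const float disc = qb * qb - 4.0f * qa * qc;
    if (disc < 0.0f) return 0;                     // the segment's line misses the sphere
    const float s = std::sqrt(disc);
    const float t1 = (-qb - s) / (2.0f * qa);
    const float t2 = (-qb + s) / (2.0f * qa);
    int count = 0;
    if (t1 >= 0.0f && t1 <= 1.0f) ++count;                  // first crossing on the segment
    if (disc > 0.0f && t2 >= 0.0f && t2 <= 1.0f) ++count;   // second crossing, if distinct
    return count;
}
```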
In an alternative embodiment, the pick-up effect is allowed to be implemented as a pick-up control corresponding to the target virtual prop. Namely: and responding to the intersection relationship between the connecting line segment and the target virtual prop in the virtual scene, and displaying a pickup control corresponding to the target virtual prop.
Illustratively, the pickup control is implemented as a triggerable control, and the target virtual prop is picked up based on the player's trigger operation on the pickup control in the game interface. For example: when the connecting line segment and the target virtual prop are in an intersecting relationship, the trigger control is displayed beside the target virtual prop, and the target virtual prop is picked up when the control is triggered.
In an optional embodiment, the item name of the target virtual item is marked in the pick-up control, and the target virtual item is picked up in response to receiving a voice trigger operation on the item name.
Illustratively, after the player speaks the prop name displayed in the pickup control, the terminal performs the voice trigger operation on the prop name based on that voice instruction, thereby controlling the virtual object to pick up the target virtual prop. For example: based on the intersection relationship between the connecting line segment and a target virtual prop T, a pickup control corresponding to prop T is displayed in the virtual scene, the pickup control containing the prop name "T"; when the terminal receives the player's voice instruction "pick up prop T", it controls the virtual object to pick up the target virtual prop T.
In an alternative embodiment, the pickup-allowed effect is implemented as a virtual special effect corresponding to the target virtual prop.
Schematically, when the connecting line segment and the target virtual prop have an intersection relationship, the target virtual prop is highlighted, the highlighted state being one illustrative virtual special effect; or the target virtual prop is displayed in a glowing state, another illustrative virtual special effect. Optionally, after the player triggers the target virtual prop bearing the virtual special effect, the target virtual prop can be picked up.
It should be noted that the above is only an illustrative example, and the present invention is not limited to this.
In summary, the first position point of the first virtual object in the virtual scene and the second position point determined from the moving speed and moving direction are obtained, the two points are connected to obtain the connecting line segment, and when the connecting line segment and the second virtual object have an intersection relationship, the collision effect of the second virtual object is displayed. Compared with displaying the pickup-allowed effect only based on the distance between the first virtual object and the second virtual object, even when the first virtual object moves quickly and the second position point it reaches is far from the second virtual object, whether to display the collision effect can still be determined through the intersection relationship between the connecting line segment and the second virtual object. This assists the player in controlling the first virtual object, enables an efficient interaction with the second virtual objects the segment passes through, improves the player's game experience, and improves human-computer interaction efficiency.
In an alternative embodiment, the intersection relationship between the connecting line segment and the second virtual object is determined through the relationship between the segment and a collision detection box surrounding the second virtual object. Illustratively, as shown in FIG. 4, the embodiment shown in FIG. 3 can also be implemented as the following steps 410 to 450.
At step 410, a first location point of a first virtual object in a virtual scene is obtained.
The first position point is the position point where the first virtual object is located currently in the virtual scene.
Illustratively, in the virtual scene, the first position point is determined based on the position information of the position point where the first virtual object is currently located. For example: and establishing a rectangular coordinate system based on the virtual scene, determining the current position coordinate of the first virtual object in the virtual scene, and taking the position coordinate as the position information of the first position point.
Step 410 has already been described above in step 310, and therefore will not be described herein.
In step 420, a second location point of the first virtual object in the virtual scene is determined based on the moving speed and the moving direction of the first virtual object.
And the second position point is a position point which is reached by the first virtual object moving to the moving direction at the moving speed for a preset time interval after the first position point.
Illustratively, the preset time interval is the sampling interval used to determine the second position point. For example: the position information of the first virtual object in the virtual scene is determined once every 0.1 second, with 0.1 second taken as the preset time interval (or an interval of 0.2 second, 0.3 second, and so on); or the position information of the first virtual object is determined once every time a frame of the virtual scene is refreshed, with the time corresponding to 1 frame taken as the preset time interval (or 2 frames, 3 frames, and so on).
In an alternative embodiment, a time interval corresponding to 1 frame is taken as the preset time interval, and after the first position point of the first virtual object in the nth frame of the virtual scene is determined, the second position point of the first virtual object in the n +1 th frame is determined based on the moving direction and the moving speed of the first virtual object.
Step 420 is already described in step 320, and thus is not described herein again.
In step 430, a connecting line segment between the first position point and the second position point is acquired.
Illustratively, after the first position point and the second position point of the first virtual object in the virtual scene are determined, the two points are connected to obtain the connecting line segment between the first position point and the second position point.
Optionally, taking a time interval corresponding to 1 frame as a preset time interval as an example, after determining a first position point of a first virtual object when the virtual scene is in the nth frame, determining a second position point of the first virtual object when the virtual scene is in the (n + 1) th frame based on a moving direction and a moving speed of the first virtual object, and connecting the first position point and the second position point to obtain a connecting line segment between the first position point and the second position point, where the connecting line segment is used to indicate a moving distance of the first virtual object from the nth frame to the (n + 1) th frame.
Step 430 has already been described in step 330, and thus is not described herein again.
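Illustratively, the per-frame sampling described above can be sketched as a small tracker that remembers the frame-n position and pairs it with the frame-(n+1) position; the type and member names are assumptions for illustration.

```cpp
struct Vec3 { float x, y, z; };

// Remembers the position sampled in frame n and pairs it with the position in
// frame n+1 to form the connecting line segment for that frame interval.
struct FrameSegmentTracker {
    Vec3 previousPoint{};
    bool hasPrevious = false;

    // Call once per frame with the first virtual object's current position.
    // Returns true and writes the segment endpoints once two samples exist.
    bool sampleFrame(const Vec3& currentPoint, Vec3& first, Vec3& second) {
        const bool ready = hasPrevious;
        if (ready) {
            first = previousPoint;    // first position point (frame n)
            second = currentPoint;    // second position point (frame n+1)
        }
        previousPoint = currentPoint;
        hasPrevious = true;
        return ready;
    }
};
```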
In step 440, a collision detection box surrounding the second virtual object is determined based on the position information of the second virtual object in the virtual scene.
The collision detection box of the second virtual object is the detection volume that encloses the second virtual object.
In an optional embodiment, the second virtual object is implemented as a target virtual prop, in which case the collision detection box of the second virtual object is the collision detection box of the target virtual prop to be picked up.
Illustratively, in a virtual scene, a plurality of virtual items are displayed, and a player finishes a game task, promotes a game level and the like by controlling a first virtual object to pick up, use and the like the virtual items. The second virtual object is used for indicating a virtual item to be picked up.
Optionally, the plurality of virtual props are distributed at different positions of the virtual scene, for example: virtual cutter K 1 In a virtual building, a virtual tool K 2 On a virtual table, a virtual smoke bomb B 1 On a virtual tool K 1 And side and the like. And determining a collision detection box surrounding the virtual props according to the position information of the different virtual props in the virtual scene.
Illustratively, different virtual props correspond to have different prop shapes, and according to the prop shape of virtual prop, confirm the collision detection box who surrounds virtual prop, for example: virtual cutter K 1 For a long virtual prop, according to a virtual tool K 1 Determining the edge points of the surrounding virtual tool K 1 The crash detection cartridge of (1); or, a virtual cartridge B 1 Is a circular virtual prop and is based on a virtual smoke bomb B 1 To determine the surrounding virtual cartridge B 1 The crash detection cartridge of (1), and the like.
In an alternative embodiment, the shape of the collision detection box of the second virtual object is the same as the prop shape of the second virtual object. Illustratively, the second virtual object is a virtual gun G; a plurality of edge points of the virtual gun G are connected to obtain an area having the same shape as the prop shape of the virtual gun G, and this area is used as the collision detection box corresponding to the second virtual object.
In an alternative embodiment, at least one regular shape surrounding the second virtual object is determined based on the object shape of the second virtual object, the regular shape being the collision detection box for the second virtual object.
Illustratively, different regular shapes are determined according to the dimension of the virtual scene. For example: when the virtual scene is implemented as a 2D plane, the regular shapes include squares, rectangles, circles, diamonds, and the like; when the virtual scene is implemented as a 3D space, the regular shapes include cubes, cuboids, spheres, and the like.
Schematically, as shown in fig. 5, taking the virtual scene implemented as a 3D space for illustration, the second virtual object is virtual smoke bomb B1 510, and the prop shape of virtual smoke bomb B1 510 is shown in fig. 5. Based on the prop shape of virtual smoke bomb B1, at least one regular shape surrounding virtual smoke bomb B1 is determined, for example: the cuboid 520 surrounding virtual smoke bomb B1, or the sphere 530 surrounding virtual smoke bomb B1, and the like. Optionally, the cuboid 520 surrounding virtual smoke bomb B1 is used as the collision detection box corresponding to the second virtual object; or, the sphere 530 surrounding virtual smoke bomb B1 is used as the collision detection box corresponding to the second virtual object.
Optionally, when determining the collision detection box enclosing the second virtual object, a regular shape that just encloses the second virtual object is determined according to the shape of the second virtual object, for example: when the second virtual object is implemented as a sphere and the regular shape of the collision detection box that just encloses the second virtual object is a cube, the diameter of the second virtual object is the side length of the collision detection box.
Alternatively, when determining the collision detection box enclosing the second virtual object, a regular shape that can enclose the second virtual object is determined flexibly within a certain range, that is: the diameter, side length, and similar parameters of the collision detection box need not be derived strictly from the prop shape of the second virtual object; instead, based on the position information of the second virtual object, any regular shape that can enclose the second virtual object is chosen within a certain range, for example: the collision detection box is required to enclose the second virtual object and to have a volume less than 1.5 times the volume of the second virtual object, and the like.
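A small sketch of the two sizing strategies above is given below (Python): a tight axis-aligned box built from a prop's edge points, and the looser acceptance condition that the box volume stay under 1.5 times the prop volume. The helper names and sample coordinates are assumptions for the example.

```python
# Minimal sketch, assuming the prop's edge points are 3D coordinates and a
# box is stored as a (min corner, max corner) pair.

def tight_box(points):
    """Smallest axis-aligned box that just encloses the given edge points."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def box_volume(box):
    (x0, y0, z0), (x1, y1, z1) = box
    return (x1 - x0) * (y1 - y0) * (z1 - z0)

def loose_box_acceptable(box, prop_volume, factor=1.5):
    """Looser strategy: an enclosing box whose volume is < factor x prop's."""
    return box_volume(box) < factor * prop_volume

edge_points = [(0, 0, 0), (2, 0, 0), (2, 1, 0), (0, 1, 3)]
box = tight_box(edge_points)
print(box)                                          # ((0, 0, 0), (2, 1, 3))
print(loose_box_acceptable(box, prop_volume=5.0))   # 6 < 7.5 -> True
```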
It should be noted that the above is only an illustrative example, and the present invention is not limited to this.
And step 450, responding to the intersection relation between the connecting line segment and the collision detection box, and displaying the collision effect corresponding to the second virtual object.
Wherein the collision effect is used for indicating the interaction condition between the first virtual object and the second virtual object.
Illustratively, after determining a line segment connecting the first position point and the second position point, and a collision detection box corresponding to the second virtual object, it is determined whether there is an intersection relationship between the line segment and the collision detection box.
Optionally, when there is an intersection relationship between the connecting line segment and the collision detection box, it represents that the moving trajectory of the first virtual object in the preset time interval intersects the collision detection box corresponding to the second virtual object, for example: the first virtual object passes through the second virtual object within a preset time interval, or the first virtual object reaches the position of the second virtual object after the preset time interval, and the like.
In an alternative embodiment, in response to the connecting line segment intersecting at least one face of the collision detection box, the collision effect corresponding to the second virtual object is displayed.
Schematically, as shown in fig. 6, a connecting line segment 630 between the first position point P1 610 and the second position point P2 620 is determined based on the first position point P1 610 and the second position point P2 620, and the sphere area is the collision detection box 640 corresponding to a second virtual object (not shown in the figure) in the virtual scene. After the connecting line segment 630 and the collision detection box 640 are obtained, the intersection relationship between the connecting line segment 630 and the collision detection box 640 is determined; as can be seen from fig. 6, an intersection point exists between the connecting line segment 630 and the surface of the collision detection box 640, and the intersection relationship between the connecting line segment 630 and the collision detection box 640 is determined based on this intersection point.
Schematically, as shown in fig. 7, a connecting line segment 730 between the first position point P1 710 and the second position point P2 720 is determined based on the first position point P1 710 and the second position point P2 720, and the cube region is the collision detection box 740 corresponding to a second virtual object (not shown) in the virtual scene. After the connecting line segment 730 and the collision detection box 740 are obtained, the intersection relationship between the connecting line segment 730 and the collision detection box 740 is determined: as can be seen from fig. 7, an intersection point exists between the connecting line segment 730 and the surface 741 of the collision detection box 740, and the intersection relationship between the connecting line segment 730 and the collision detection box 740 is determined based on this intersection point; alternatively, as can be seen from fig. 8, the connecting line segment 730 intersects both the surface 741 and the surface 742 of the collision detection box 740, and the intersection relationship between the connecting line segment 730 and the collision detection box 740 is determined based on the two intersection points.
In an optional embodiment, when the second virtual object is implemented as the target virtual item, the allowable pickup effect corresponding to the target virtual item is displayed based on the intersection relationship between the line segment and the collision detection box.
Schematically, a pickup control corresponding to the target virtual prop is displayed beside the target virtual prop, and the pickup control serves as the pickup-allowed effect; or, the target virtual prop is highlighted, and the highlighted state serves as the pickup-allowed effect.
In an optional embodiment, a point sampling operation is performed on the connecting line segment to obtain a plurality of line segment points in the connecting line segment.
Illustratively, after a connecting line segment is obtained, point sampling operation is performed on the connecting line segment at fixed intervals, so as to obtain a plurality of segment points in the connecting line segment, wherein the segment lengths between any two adjacent segment points are the same; or after obtaining the line segment, performing point sampling operation on the line segment at random intervals to obtain a plurality of line segment points in the line segment, wherein the line segment lengths between any two adjacent line segment points may be the same or different.
Optionally, in response to that at least one of the plurality of line segment points is located in the collision detection box, displaying a collision state corresponding to the second virtual object.
Illustratively, after obtaining the plurality of line segment points, the position relationship between the plurality of line segment points and the collision detection box corresponding to the second virtual object is determined, wherein the position relationship includes at least one of the following conditions.
(1) At least one of the plurality of line segment points is located within the collision detection box.
Illustratively, after the point sampling operation is performed on the connecting line segment, line segment points L1, L2, and L3 are obtained. The position relationship between these line segment points and the collision detection box corresponding to the second virtual object is then judged, and it is determined that line segment point L1 is inside the collision detection box, line segment point L2 is on the edge of the collision detection box, and line segment point L3 is outside the collision detection box. Optionally, the line segment point L2 located on the edge of the collision detection box is treated as a line segment point located within the collision detection box.
(2) The plurality of line segment points are all located outside the collision detection box.
Illustratively, after the point sampling operation is performed on the connecting line segment, line segment points L1, L2, and L3 are obtained. The position relationship between these line segment points and the collision detection box corresponding to the second virtual object is then judged, and it is determined that line segment points L1, L2, and L3 are all outside the collision detection box.
In an alternative embodiment, when at least one of the plurality of line segment points is located in the collision detection box, it is determined that the connecting line segment has an intersecting relationship with the collision detection box.
For example: when a line segment point is located within the collision detection box, it is determined that the connecting line segment and the collision detection box have an intersection relationship; or, when a line segment point lies on the collision detection box, it is determined that the connecting line segment has an intersecting relationship with the collision detection box, and the like.
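As a sketch of the sampling-based judgment above (Python; the names and the sample count are assumptions): sample points along the connecting line segment, count edge points as inside, and report an intersection when any sample lands in the box.

```python
# Minimal sketch, assuming an axis-aligned collision detection box given as
# min/max corners and a segment given by its two position points.

def sample_points(start, end, count=8):
    """Evenly spaced line segment points, endpoints included."""
    return [tuple(s + (e - s) * i / (count - 1) for s, e in zip(start, end))
            for i in range(count)]

def in_box(point, box_min, box_max):
    """Points on the edge of the box count as inside, as described above."""
    return all(lo <= c <= hi for c, lo, hi in zip(point, box_min, box_max))

def segment_hits_box(start, end, box_min, box_max, count=8):
    return any(in_box(p, box_min, box_max)
               for p in sample_points(start, end, count))

print(segment_hits_box((0, 0, 0), (10, 0, 0), (4, -1, -1), (6, 1, 1)))  # True
```

Note that a fixed sample count can step over a box thinner than the sampling interval, so the interval (or count) would have to be chosen against the smallest expected collision detection box.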
In an optional embodiment, based on a preset interpolation weight, obtaining an interpolation point corresponding to at least one point on the connecting line segment; and responding to the interpolation point being positioned in the collision detection box corresponding to the second virtual object, and displaying the pickup-allowed effect corresponding to the second virtual object.
Illustratively, the preset interpolation weight is 0.8; when the interpolation point corresponding to the first position point on the connecting line segment is determined, the coordinates of the first position point are determined first. For example: the coordinates of the first position point are (3, 6, 5); when the interpolation point corresponding to the first position point is determined, the interpolation weight is multiplied by each coordinate value, so that the coordinates of the interpolation point corresponding to the first position point are (2.4, 4.8, 4).
Illustratively, when the coordinates of the interpolation point are located in the area surrounded by the collision detection box corresponding to the second virtual object, the collision effect corresponding to the second virtual object is displayed.
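The weight-times-coordinate computation in the example above could be sketched as follows (Python; names assumed). Note that multiplying the weight into the coordinates directly, as the example does, interpolates the point toward the coordinate origin rather than between the two position points; the sketch follows the example's convention.

```python
# Minimal sketch of the interpolation-point computation described above.

def interpolation_point(point, weight=0.8):
    """Multiply the preset interpolation weight by each coordinate value."""
    return tuple(round(weight * c, 6) for c in point)

print(interpolation_point((3, 6, 5)))  # (2.4, 4.8, 4.0)
```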
Optionally, when the second virtual object is implemented as the target virtual item, based on the intersection relationship, a pickup-allowed effect corresponding to the target virtual item is displayed.
Illustratively, based on the intersection relationship between the connecting line segment and the collision detection box, it is determined that the movement track of the virtual object within the preset time interval intersects the target virtual prop, and the pickup-allowed effect corresponding to the target virtual prop is displayed in the virtual scene, where the pickup-allowed effect is used to indicate that the target virtual prop can be picked up. For example: when the pickup-allowed effect is presented as a pickup control corresponding to the target virtual prop, the pickup process of the target virtual prop is realized through a trigger operation on the pickup control; or, when the pickup-allowed effect is presented as a virtual special-effect state of the target virtual prop, the target virtual prop is picked up by triggering it.
In an optional embodiment, the second virtual object is implemented as a target virtual prop, and the target virtual prop corresponding to the collision detection box is automatically picked up in response to the intersection relationship between the line segment and the collision detection box.
Illustratively, after the intersection between the connecting line segment and the collision detection box is determined, the terminal determines a target virtual prop corresponding to the collision detection box based on the intersection relation, and automatically picks up the target virtual prop, so as to realize a process of quickly picking up the target virtual prop on a moving track of a virtual object.
Optionally, in response to the capacity of the virtual backpack equipped by the virtual object not reaching a preset capacity threshold, the target virtual item is automatically picked up; in response to the capacity of the virtual backpack equipped by the virtual object reaching the preset capacity threshold, the pickup-allowed effect corresponding to the target virtual prop is displayed so that the player can selectively perform a pickup operation on the target virtual prop.
It should be noted that the above is only an illustrative example, and the present invention is not limited to this.
In summary, the first position point of the first virtual object in the virtual scene and the second position point determined according to the moving speed and the moving direction are obtained, the first position point and the second position point are connected to obtain a connecting line segment, and when the connecting line segment and the second virtual object have an intersection relationship, the collision effect of the second virtual object is displayed. Even if the moving speed of the first virtual object is high, and the distance between the second position point reached by the first virtual object and the second virtual object is large, whether the collision effect corresponding to the second virtual object is displayed or not can be determined through the intersection relation between the connecting line segment and the second virtual object, so that the player is assisted in controlling the first virtual object, a more efficient interaction process is carried out on the second virtual object passed by the connecting line segment, the experience of the player on games is improved, and the human-computer interaction efficiency is improved.
In the embodiment of the present application, a method of determining the intersection relationship between the connecting line segment of the first virtual object and the collision detection box of the second virtual object is described. A collision detection box surrounding the second virtual object is determined based on the position information of the second virtual object in the virtual scene; after the connecting line segment between the first position point and the second position point is obtained, the intersection relationship between the connecting line segment and the collision detection box corresponding to the second virtual object is judged, and when it is determined that the connecting line segment intersects the collision detection box, the collision effect corresponding to the second virtual object is displayed, so that the player can control the first virtual object to interact with the second virtual object. The collision detection box serves as a detection box surrounding the second virtual object; it represents the second virtual object while reducing the complexity of judging the intersection relationship, so that the terminal can determine the intersection relationship more quickly and accurately and thereby decide whether to display the interaction effect.
In an optional embodiment, when the first virtual object moves rapidly in the virtual scene, the first virtual object and the second virtual object perform an interaction process by using the collision detection method in the virtual environment. Illustratively, as shown in fig. 9, the embodiment shown in fig. 3 described above can also be implemented as the following steps 910 to 940.
Wherein the first position point is a position point where the first virtual object is currently located in the virtual scene.
Illustratively, in the virtual scene, the first location point is determined based on the location information of the location point where the first virtual object is currently located. For example: and establishing a rectangular coordinate system based on the virtual scene, determining the current position coordinate of the first virtual object in the virtual scene, and taking the position coordinate as the position information of the first position point.
Step 910 is already described in step 310, and thus is not described herein again.
And step 921, in response to the moving speed of the first virtual object reaching the preset speed threshold, determining a second position point of the first virtual object in the virtual scene along the moving direction.
Illustratively, the preset speed threshold is a preset moving speed value of the first virtual object, and when the first virtual object moves in the virtual scene from the first position point, the moving speed of the first virtual object in the virtual scene is determined.
For example: the moving speed of the first virtual object in the virtual scene is determined in real time; or, the moving speed of the first virtual object in the virtual scene is determined at fixed time intervals, such as: the terminal determines the moving speed of the first virtual object in the virtual scene once every 0.1 second, and the like.
Namely: when the first virtual object moves at a high speed in the virtual scene, the collision detection method in the virtual environment is adopted.
Illustratively, the virtual scene is displayed through the sequence of picture frames corresponding to the game. High-speed motion means that the first virtual object passes through more picture frames in a shorter time, and when position points are acquired in the related art, the relationship between the second virtual object and the first virtual object is usually determined by sampling the positions of the first virtual object and the second virtual object at regular times.
However, in the above process, the position relationship between the second virtual object and the first virtual object is usually determined only once every several picture frames. During high-speed motion of the first virtual object, the multiple intermediate picture frames that the first virtual object passes through therefore cannot be correctly analyzed; even if a second virtual object is present in those intermediate picture frames, the collision effect between the first virtual object and the second virtual object cannot be displayed in time, so the first virtual object cannot interact with the second virtual object in time, which affects the detection effect of the first virtual object on the second virtual object.
By adopting the collision detection method in the virtual environment, the connecting line segment of the first virtual object in the virtual scene is determined, and the passing situation of the connecting line segment across the multiple intermediate picture frames is further determined. The multiple picture frames that the connecting line segment passes through can thus be taken into account, the presence of a second virtual object in those picture frames under high-speed motion can be determined, and omissions in the collision analysis of the second virtual object are avoided.
Optionally, starting from the first location point, a moving speed of the first virtual object in the virtual scene is determined and compared with a preset speed threshold. Illustratively, when the moving speed reaches a preset speed threshold value, a second position point reached by the first virtual object is determined along the moving direction of the first virtual object in the virtual scene.
And the second position point is a position point which is reached by the first virtual object moving to the moving direction at the moving speed for a preset time interval after the first position point.
Schematically, when the moving speed of the first virtual object moving to the moving direction after starting from the first position point is greater than or equal to a preset speed threshold, determining a second position point reached by the first virtual object after a preset time interval, so as to perform a subsequent item pickup judgment process based on the second position point and the first position point; when the moving speed of the first virtual object moving in the moving direction is smaller than a preset speed threshold, the second position point reached by the first virtual object after a preset time interval is not determined.
For example: the preset speed threshold value is 3m/s, the moving speed of the first virtual object in the virtual scene is determined to be 5m/s from the first position point, the moving speed is compared with the preset speed threshold value, and the second position point reached by the first virtual object is determined along the moving direction of the first virtual object in the virtual scene based on the moving speed reaching the preset speed threshold value; or, starting from the first position point, determining the moving speed of the first virtual object in the virtual scene to be 2m/s, comparing the moving speed with a preset speed threshold, and not performing the process of determining the second position point based on the moving speed being less than the preset speed threshold.
Namely: when the movement mode of the first virtual object after the first position point is accelerated movement, the second position point is determined based on the moving speed and the moving direction of the accelerated movement.
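A sketch of this speed-threshold gate might look as follows (Python; the 3 m/s threshold is the value from the example above, everything else is assumed):

```python
# Minimal sketch: build the second position point only when the first virtual
# object's moving speed reaches the preset speed threshold.

SPEED_THRESHOLD = 3.0  # m/s, the preset speed threshold from the example

def maybe_second_point(first_point, move_dir, speed, dt):
    if speed < SPEED_THRESHOLD:
        return None  # slow movement: segment-based detection is skipped
    return tuple(p + d * speed * dt for p, d in zip(first_point, move_dir))

print(maybe_second_point((0, 0, 0), (1, 0, 0), speed=5.0, dt=0.1))  # (0.5, 0.0, 0.0)
print(maybe_second_point((0, 0, 0), (1, 0, 0), speed=2.0, dt=0.1))  # None
```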
And step 922, in response to receiving the triggering operation of the acceleration control, determining a second position point of the first virtual object in the virtual scene along the moving direction.
Illustratively, in the virtual screen presented by the terminal, an acceleration control is included, and the acceleration control is used for controlling the first virtual object to make an accelerated motion in the virtual scene, for example: and when the player triggers the acceleration control, the first virtual object moves in the virtual scene in an accelerated manner. Illustratively, the acceleration movement has a movement speed greater than the default speed.
Illustratively, the acceleration control includes at least one of a "flash control", an "acceleration running control", and the like, which is not limited in this embodiment of the present application.
Optionally, based on the triggering operation on the acceleration control, a position point where the first virtual object is located after a preset time interval is determined along the moving direction of the first virtual object, and the position point is taken as the second position point.
In an alternative embodiment, the moving direction and the moving speed may change during the process of moving the first virtual object from the first position point to the second position point.
Schematically, the description will be made taking the change in the moving direction as an example. For example: the first virtual object moves first in the right direction after starting from the first position point, then moves diagonally upward, and reaches the second position point.
Schematically, the description will be made taking the change in the moving speed as an example. For example: the first virtual object moves at a moving speed of 3m/s first after moving from the first position point, and then moves at a moving speed of 5m/s and reaches the second position point.
At step 930, a line segment between the first location point and the second location point is obtained.
Illustratively, after the first position point and the second position point are obtained through the above process, the first position point and the second position point are connected, so as to obtain a connecting line segment between the first position point and the second position point.
In an alternative embodiment, in response to the distance from the first position point to the second position point meeting a preset distance threshold, a connecting line segment between the first position point and the second position point is obtained.
Illustratively, after the first position point and the second position point are determined, the distance between them in the virtual scene is determined and compared with the preset distance threshold. For example: the distance between the first position point and the second position point is 5, and the preset distance threshold is 3; based on 5 > 3, the connecting line segment between the first position point and the second position point is obtained. Or, the distance between the first position point and the second position point is 2, and the preset distance threshold is 3; based on 2 < 3, the process of acquiring the connecting line segment is not performed.
Since the first position point and the second position point are the starting point and the end point of the movement of the first virtual object over the preset time interval, the moving speed of the first virtual object can be roughly determined from the distance between the first position point and the second position point and the preset time interval. Comparing this distance with the preset distance threshold therefore has the following meaning: when the distance between the first position point and the second position point meets the preset distance threshold, the moving speed of the first virtual object is greater than a certain moving speed, and the connecting line segment between the first position point and the second position point is then obtained.
It should be noted that the above are only exemplary, and the embodiments of the present application are not limited thereto.
And step 940, in response to the intersection relationship between the connecting line segment and a second virtual object in the virtual scene, displaying the collision effect corresponding to the second virtual object.
And the collision effect is used for indicating the interaction condition of the first virtual object and the second virtual object.
Schematically, when judging whether the connection line segment and a second virtual object in the virtual scene have an intersection relationship, judging whether the connection line segment and the second virtual object have an intersection point; or judging whether the connecting line segment and the collision detection box corresponding to the second virtual object have an intersection point; or judging the position relation between the line segment point on the connecting line segment and the second virtual object; or, the positional relationship between the line segment point on the connecting line segment and the collision detecting box corresponding to the second virtual object is determined.
Optionally, in response to the connecting line segment having at least one of the above intersection relationships with a second virtual object in the virtual scene, the collision effect corresponding to the second virtual object is displayed on the virtual screen corresponding to the terminal.
Schematically, when at least one intersection point exists between the connecting line segment and the second virtual object, determining that the connecting line segment and the second virtual object have an intersection relation, and displaying a collision effect corresponding to the second virtual object; or when at least one intersection point exists between the connecting line segment and the collision detection box corresponding to the second virtual object, determining that the connecting line segment and the second virtual object have an intersection relation, and displaying a collision effect corresponding to the second virtual object; or when the line segment point on the connecting line segment is positioned in the second virtual object or on the edge of the second virtual object, determining that the connecting line segment and the second virtual object have an intersection relation, and displaying the collision effect corresponding to the second virtual object; or when the line segment point on the connecting line segment is located in the collision detection box corresponding to the second virtual object or on the edge of the collision detection box, determining that the connecting line segment and the second virtual object have an intersection relation, and displaying the collision effect corresponding to the second virtual object, and the like.
It should be noted that the above is only an illustrative example, and the present invention is not limited to this.
In an alternative embodiment, the second virtual object is implemented as a target virtual item. And automatically picking up the target virtual prop in response to the intersection relationship between the connecting line segment and the target virtual prop in the virtual scene.
Illustratively, when the line segment intersects with the target virtual prop in the virtual scene, the terminal controls the virtual object to automatically pick up the target virtual prop.
In an optional embodiment, in response to the connecting line segment having no intersection relationship with the target virtual prop, a spacing distance between the connecting line segment and the target virtual prop is determined; and in response to the spacing distance being smaller than the preset spacing distance, the pickup-allowed effect corresponding to the target virtual prop is displayed.
Illustratively, the connecting line segment along which the virtual object moves from the first position point to the second position point may lie within the area corresponding to the target virtual prop, for example: the connecting line segment along which the virtual object moves from the first position point to the second position point lies entirely inside the collision detection box corresponding to the target virtual prop, so the connecting line segment produces no intersection point with the surfaces of the collision detection box and no intersection relationship with the target virtual prop is detected.
Optionally, when determining the spacing distance between the connecting line segment and the target virtual prop, the spacing distance between the connecting line segment and the center point of the target virtual prop is determined, for example: the spacing distance between the connecting line segment and the center point of the collision detection box corresponding to the target virtual prop is determined, and when the spacing distance is smaller than the preset spacing distance, the pickup-allowed effect corresponding to the target virtual prop is displayed.
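The fallback check above relies on a point-to-segment distance; a sketch (Python, names assumed) is:

```python
# Minimal sketch: distance from the prop's center point to the connecting
# line segment, used when the segment lies inside the collision detection box
# and therefore yields no surface intersection.

def point_segment_distance(point, seg_start, seg_end):
    d = tuple(e - s for s, e in zip(seg_start, seg_end))
    length_sq = sum(c * c for c in d)
    if length_sq == 0.0:
        closest = seg_start  # degenerate segment: start and end coincide
    else:
        t = sum((p - s) * c for p, s, c in zip(point, seg_start, d)) / length_sq
        t = max(0.0, min(1.0, t))  # clamp the projection onto the segment
        closest = tuple(s + t * c for s, c in zip(seg_start, d))
    return sum((p - c) ** 2 for p, c in zip(point, closest)) ** 0.5

center = (5.0, 1.0, 0.0)  # center point of the collision detection box
print(point_segment_distance(center, (0.0, 0.0, 0.0), (10.0, 0.0, 0.0)))  # 1.0
```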
In summary, the first position point of the first virtual object in the virtual scene and the second position point determined according to the moving speed and the moving direction are obtained, the first position point and the second position point are connected to obtain a connecting line segment, and when the connecting line segment and the second virtual object have an intersection relationship, the collision effect of the second virtual object is displayed. Even if the moving speed of the first virtual object is high, and the distance between the second position point reached by the first virtual object and the second virtual object is large, whether the interaction effect is displayed or not can be determined through the intersection relation between the connecting line segment and the second virtual object, so that a player is assisted to control the first virtual object, a more efficient interaction process is carried out on the second virtual object passed by the connecting line segment, the experience of the player on games is improved, and the human-computer interaction efficiency is improved.
In the embodiments of the present application, an interaction process of a first virtual object with a second virtual object under a fast motion in a virtual scene is described. After the first position point is determined, when the moving speed of the first virtual object reaches a preset speed threshold or a trigger operation on an acceleration control is received, a second position point of the first virtual object in the virtual scene is determined along the moving direction, and whether a collision effect corresponding to the second virtual object is displayed or not is determined based on a connecting line segment between the first position point and the second position point. Namely: even if the moving speed of the first virtual object is high, and the distance between the second position point reached by the first virtual object and the second virtual object is large, whether the collision effect corresponding to the second virtual object is displayed or not can be determined through the intersection relation between the connecting line segment and the second virtual object, the use experience of a player is improved, and the human-computer interaction efficiency is improved.
In an optional embodiment, when the line segment along which the first virtual object moves has an intersection relationship with the plurality of second virtual objects, the first virtual object can perform an interactive operation with the plurality of second virtual objects by using the collision detection method in the virtual environment. Illustratively, as shown in fig. 10, step 340 in the embodiment shown in fig. 3 can also be implemented as the following steps 1010 to 1030.
Illustratively, in the virtual environment, a plurality of second virtual objects to be picked up are included, and the collision detection boxes corresponding to different second virtual objects are determined according to the position information of the different second virtual objects in the virtual environment.
Wherein the ith second virtual object is surrounded by the ith collision detection box, and i is a positive integer.
Optionally, collision detection boxes of different shapes are determined for different second virtual objects according to the shapes of props corresponding to the different second virtual objects.
Illustratively, after determining a line segment connecting the first position point and the second position point, the intersection relationship between the line segment and each of the plurality of collision detection boxes is determined, so as to determine the second virtual object according to the intersection relationship.
For example: in the virtual scene, the collision detection box 1 corresponding to the second virtual object 1, the collision detection box 2 corresponding to the second virtual object 2, and the collision detection box 3 corresponding to the second virtual object 3 are included, and the intersection relationships between the connecting line segment and the collision detection boxes 1, 2, and 3 are determined respectively. If an intersection point exists between the connecting line segment and one surface of the collision detection box 1, the connecting line segment has an intersection relationship with the collision detection box 1, and the second virtual object 1 corresponding to the collision detection box 1 is thereby determined; if the connecting line segment has no intersection point with the collision detection box 2, the connecting line segment has no intersection relationship with the collision detection box 2; if an endpoint of the connecting line segment (i.e., the first position point or the second position point) lies on the edge of the collision detection box 3, the connecting line segment intersects the collision detection box 3, and the second virtual object 3 corresponding to the collision detection box 3 is determined.
Namely: after the intersection relationships between the connecting line segment and the plurality of collision detection boxes are determined, it is determined that intersection relationships exist between the connecting line segment and the collision detection boxes 1 and 3, and the second virtual object 1 corresponding to the collision detection box 1 and the second virtual object 3 corresponding to the collision detection box 3 are determined.
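The multi-box case above amounts to running one segment/box test per collision detection box and collecting the hits. A sketch using the standard slab test is given below (Python); the box layout mirrors the example, and all names are assumptions.

```python
# Minimal sketch: check one connecting line segment against the collision
# detection boxes of several second virtual objects and collect every hit.
# Boxes are (min corner, max corner) pairs; the hit test is the slab method.

def segment_intersects_aabb(p0, p1, box_min, box_max):
    """Slab test: clip the segment's parameter range against each axis."""
    t_enter, t_exit = 0.0, 1.0
    for a in range(3):
        d = p1[a] - p0[a]
        if abs(d) < 1e-12:
            if not (box_min[a] <= p0[a] <= box_max[a]):
                return False  # parallel to this slab and outside it
            continue
        t0 = (box_min[a] - p0[a]) / d
        t1 = (box_max[a] - p0[a]) / d
        t_lo, t_hi = min(t0, t1), max(t0, t1)
        t_enter, t_exit = max(t_enter, t_lo), min(t_exit, t_hi)
        if t_enter > t_exit:
            return False  # the per-axis intervals no longer overlap
    return True

boxes = {
    "second virtual object 1": ((4, -1, -1), (6, 1, 1)),    # crossed by segment
    "second virtual object 2": ((20, 20, 20), (22, 22, 22)),  # far away
    "second virtual object 3": ((9, -1, -1), (11, 1, 1)),   # endpoint inside
}
hits = [name for name, (lo, hi) in boxes.items()
        if segment_intersects_aabb((0, 0, 0), (10, 0, 0), lo, hi)]
print(hits)  # ['second virtual object 1', 'second virtual object 3']
```

Any segment/box intersection predicate could be substituted for the slab test here, including the face-counting or point-sampling judgments described earlier.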
And step 1030, displaying the collision effect corresponding to the at least one second virtual object.
In an optional embodiment, when the second virtual object is implemented as the target virtual item, the pickup controls corresponding to the at least one second virtual object are displayed, the pickup controls are used as the allowable pickup effect, and the allowable pickup effect is used as the collision effect.
Illustratively, after determining the at least one second virtual object, the pick-up controls corresponding to the at least one second virtual object are displayed in the virtual screen displayed by the terminal. For example: and determining a second virtual object 1 corresponding to the collision detection box 1 and a second virtual object 3 corresponding to the collision detection box 3 based on the intersection relation between the connecting line segment and the collision detection box, and displaying the pickup control 1 corresponding to the second virtual object 1 and the pickup control 3 corresponding to the second virtual object 3 in a virtual picture displayed by the terminal.
And the picking control is used for picking the second virtual object. Illustratively, after a player performs a trigger operation on the pickup control 1 corresponding to the second virtual object 1, the first virtual object is controlled to pick up the second virtual object 1; or, after the player performs a trigger operation on the pickup control 3 corresponding to the second virtual object 3, the first virtual object is controlled to pick up the second virtual object 3.
Optionally, the at least one second virtual object is respectively subjected to virtual special effect display, and the virtual special effect display is taken as a pickup-allowed effect.
In an alternative embodiment, an object view list is displayed.
Wherein the object view list is used for indicating that at least one second virtual object is viewed.
Optionally, when the second virtual object is implemented as a target virtual item, a prop pickup list is displayed, and the prop pickup list is used as the object viewing list.
Wherein the item picking list is used for indicating that at least one target virtual item is viewed.
Illustratively, after a plurality of target virtual props are determined, a prop pickup list is displayed on a virtual screen displayed by the terminal, wherein the prop pickup list includes at least one target virtual prop determined by the intersection relationship.
In an alternative embodiment, in response to receiving an expansion operation on the object viewing list, collision effects respectively corresponding to the at least one second virtual object are displayed.
Optionally, when the second virtual object is implemented as a target virtual item, in response to receiving an expansion operation on the item pickup list, pickup controls respectively corresponding to at least one target virtual item are displayed.
Optionally, a trigger operation on the prop pickup list expands the prop pickup list, so that the pickup controls respectively corresponding to the determined at least one target virtual prop are displayed on the virtual screen displayed by the terminal.
Illustratively, in the expanded item pickup list, after a pickup control corresponding to any one target virtual item is triggered, the virtual object is controlled to pick up the target virtual item, so as to perform a subsequent game process through the picked-up target virtual item.
In summary, the first position point of the first virtual object in the virtual scene and the second position point determined according to the moving speed and the moving direction are obtained, the first position point and the second position point are connected to obtain a connecting line segment, and when the connecting line segment and the second virtual object have an intersection relationship, the collision effect of the second virtual object is displayed. Even if the moving speed of the first virtual object is high, and the distance between the second position point reached by the first virtual object and the second virtual object is large, whether the collision effect is displayed or not can be determined through the intersection relation between the connecting line segment and the second virtual object, so that a player is assisted to control the first virtual object, a more efficient interaction process is carried out on the second virtual object passed by the connecting line segment, the experience of the player on games is improved, and the human-computer interaction efficiency is improved.
In the embodiment of the present application, a process of interacting a plurality of second virtual objects by using the collision detection method in the virtual environment is described. Determining a plurality of collision detection boxes respectively surrounding the plurality of second virtual objects based on the position information of the plurality of second virtual objects in the virtual environment, determining at least one second virtual object corresponding to the at least one collision detection box in response to the intersection relationship between the connecting line segment and at least one collision detection box in the plurality of collision detection boxes, and displaying the collision effect corresponding to the at least one second virtual object, thereby facilitating the player to control the first virtual object and performing a more efficient interaction process on the plurality of second virtual objects on the connecting line segment, for example: in a longer distance from the first position point to the second position point, the first virtual object can perform a selective interaction process on a plurality of second virtual objects which are intersected with the connecting line segment at the second position point, and the human-computer interaction efficiency is effectively improved.
In the related art, the second virtual object and the first virtual object are generally regarded as points moving in the virtual scene. For example: when the second virtual object is implemented as a target virtual item, the first virtual object is controlled to pick up the target virtual item according to the distance in the virtual scene between the item point corresponding to the target virtual item and the object point corresponding to the first virtual object, such as: picking up any target virtual prop within a certain area around the first virtual object. This related art is referred to as the "dot detection logic method".
In an optional embodiment, the collision detection method in the virtual environment is implemented through the intersection relationship between the connecting line segment and the second virtual object, and is referred to as the "linear detection logic method", that is: the linear detection logic method, instead of the dot detection logic method, is adopted to pick up the target virtual prop in the virtual scene.
Illustratively, as shown in fig. 11, taking the second virtual object implemented as a target virtual prop as an example, the determination process of the collision detection method in the virtual environment is implemented as the following steps 1110 to 1132.
Step 1110, taking the previous frame position of the first virtual object as a starting point, and taking the next frame position of the first virtual object as an end point, making a line segment, and determining the number of intersecting surfaces of the line segment and the three-dimensional space cube.
The three-dimensional space cube is used for indicating a collision detection box surrounding the target virtual prop.
Illustratively, with a scene frame of the virtual scene change as a preset time interval, at an nth scene frame of the virtual scene, a position of the first virtual object in the virtual scene is determined, the position is taken as a starting point, and then, at an n +1 th scene frame of the virtual scene, the position of the first virtual object in the virtual scene is determined, the position is taken as an end point, where n is a positive integer.
Optionally, the starting point and the end point are connected to obtain a line segment. When the first virtual object moves from the starting point to the end point, the moving direction does not change, and the line segment is the moving track of the first virtual object.
And after the line segment and the three-dimensional space cube corresponding to the target virtual prop are determined, judging the number of intersecting surfaces of the line segment and the three-dimensional space cube.
Optionally, based on the line segment being a straight line segment, the number of intersecting surfaces is used to indicate the number of intersections existing between the line segment and 6 surfaces of the three-dimensional space cube.
Schematically, the general formula Ax + By + Cz + D = 0 is used to represent the plane equation of a face of the three-dimensional space cube; equivalently, the point-normal formula A(x - x0) + B(y - y0) + C(z - z0) = 0 represents the same plane through a known point (x0, y0, z0). The line segment is expressed in parametric form as x = x2 + t(x1 - x2), y = y2 + t(y1 - y2), z = z2 + t(z1 - z2), and the expression is solved for t by combining the above equations.
Wherein A, B, C are used to indicate the normal (direction) vector of the plane; D is used to indicate the spatial intercept; x, y, z are used to indicate arbitrary point coordinates; x0, y0, z0 are used to indicate the coordinates of the known point on the plane; x1, y1, z1 are used to indicate the end-point coordinates of the first virtual object after moving; x2, y2, z2 are used to indicate the starting-point coordinates of the first virtual object before moving; t is used to indicate the judgment parameter obtained by solving. When t is between 0 and 1, the line segment has an intersection point with the face plane of the three-dimensional space cube, that is: an intersection relationship exists; otherwise, the line segment has no intersection point with the face plane, that is: no intersection relationship exists.
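A sketch of this solve-for-t face test is given below (Python). For an axis-aligned face such as x = c, the plane coefficients reduce to A = 1, B = C = 0, D = -c, so the code works directly with the face's axis and plane coordinate; all names are assumptions for the example.

```python
# Minimal sketch: substitute the segment's parametric form
# P(t) = P2 + t * (P1 - P2) into each face plane of the cube, solve for t,
# and count the faces hit with t in [0, 1] at a point lying on the face.

def count_intersecting_faces(p2_start, p1_end, box_min, box_max):
    faces = [(axis, plane)
             for axis in range(3)
             for plane in (box_min[axis], box_max[axis])]
    count = 0
    for axis, plane in faces:
        d = p1_end[axis] - p2_start[axis]
        if abs(d) < 1e-12:
            continue  # segment parallel to this face plane: no single t
        t = (plane - p2_start[axis]) / d
        if not (0.0 <= t <= 1.0):
            continue  # the solved t lies outside the segment
        hit = [s + t * (e - s) for s, e in zip(p2_start, p1_end)]
        if all(box_min[a] <= hit[a] <= box_max[a]
               for a in range(3) if a != axis):
            count += 1  # the plane intersection lies on the actual face
    return count

print(count_intersecting_faces((0, 0, 0), (10, 0, 0), (4, -1, -1), (6, 1, 1)))
# 2: the segment enters through the x = 4 face and exits through x = 6
```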
In an optional embodiment, the position of the first virtual object in the previous frame of the virtual scene and the position of the first virtual object in the current frame of the virtual scene are determined, a line segment is made between the two positions, and whether the first virtual object picks up the target virtual prop is judged through the line segment.
Schematically, assuming that the position coordinate of the previous frame is A (5, 5, 6) and the position coordinate of the current frame is B (10, 5, 8), interpolation (lerp) is performed between points A and B to obtain a plurality of interpolation points, and it is judged whether the interpolation points are located within the three-dimensional space cube of the prop. For example: a plurality of interpolation points are determined according to preset percentage values, and it is judged for each interpolation point whether it is located within the three-dimensional space cube of the prop.
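The lerp check from this example could be sketched as follows (Python). A and B are the coordinates from the example above; the percentage values and the prop cube are assumptions.

```python
# Minimal sketch: interpolate between the previous-frame position A and the
# current-frame position B, then test each interpolation point against the
# prop's three-dimensional space cube.

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def in_box(point, box_min, box_max):
    return all(lo <= c <= hi for c, lo, hi in zip(point, box_min, box_max))

A, B = (5, 5, 6), (10, 5, 8)       # positions from the example above
weights = [0.25, 0.5, 0.75]        # preset percentage values (assumed)
samples = [lerp(A, B, t) for t in weights]
print(samples)  # [(6.25, 5.0, 6.5), (7.5, 5.0, 7.0), (8.75, 5.0, 7.5)]

box = ((7, 4, 6), (9, 6, 8))       # illustrative prop cube
print(any(in_box(p, *box) for p in samples))  # True, e.g. (7.5, 5.0, 7.0)
```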
Illustratively, when the number of intersecting surfaces of the line segment and the three-dimensional space cube is not more than 0, the step 1131 is implemented; step 1132 is implemented when the number of intersecting faces of the line segment and the three-dimensional space cube is greater than 0.
Step 1131, the target virtual prop is not in the prop picking range or is completely located in the three-dimensional space cube of the picking detection.
Illustratively, when the number of intersecting surfaces of the line segment and the three-dimensional space cube is not more than 0, that is: no intersecting surface exists between the line segment and the three-dimensional space cube (the number of intersecting surfaces is equal to 0), it is determined that the target virtual prop is not within the prop pickup range, so the pickup-allowed effect corresponding to the target virtual prop is not displayed.
Or, when the number of intersecting surfaces of the line segment and the three-dimensional space cube is not more than 0, it is determined that the first virtual object is completely located within the three-dimensional space cube corresponding to the target virtual prop, that is, within the preset time interval, the line segment along which the first virtual object moves lies inside the three-dimensional space cube corresponding to the target virtual prop.
Optionally, to avoid the situation where the target virtual prop cannot be picked up because the first virtual object is completely located within the three-dimensional space cube corresponding to the target virtual prop, when no intersecting surface exists between the line segment and the three-dimensional space cube, the spacing distance between the target virtual prop and the first virtual object in the virtual scene is determined, and when the spacing distance meets the pickup condition, the pickup-allowed effect corresponding to the target virtual prop is displayed.
Illustratively, the pickup condition is: the spacing distance between the target virtual prop and the first virtual object is smaller than the preset spacing distance; when this condition is met, the pickup-allowed effect corresponding to the target virtual prop is displayed.
Step 1132, the target virtual prop is in the prop pickup range.
Illustratively, when the number of intersecting surfaces of the line segment and the three-dimensional space cube is greater than 0, that is: intersecting surfaces exist between the line segment and the three-dimensional space cube, it is determined that the target virtual prop is within the prop pickup range.
Optionally, in response to that the target virtual item is located within the item pickup range, displaying an allowed pickup effect corresponding to the target virtual item; or, in response to the target virtual item being within the item pickup range, automatically picking up the target virtual item.
Schematically, fig. 12 shows a schematic view of a virtual scene in a game application, including a first virtual object 1210 and a prop (smoke bomb 1220). Fig. 12 is a schematic view of the virtual scene when the first virtual object 1210 is located at the starting point, where the first virtual object is a virtual character riding a mount under the control of the player. After the player controls the first virtual object 1210 to move in an accelerated manner, the terminal displays the virtual scene diagram shown in fig. 13, at which time the first virtual object 1210 has moved to the end position.
Optionally, as shown in fig. 13, in response to the connecting line from the starting point to the ending point of the first virtual object 1210 having an intersection relationship with the smoke bomb 1220, a pickup control 1230 corresponding to the smoke bomb 1220 is displayed (for example, the name of the smoke bomb is displayed), and the first virtual object 1210 is controlled to pick up the smoke bomb 1220 based on a trigger operation of the player on the pickup control 1230, where the pickup control 1230 is an exemplary pickup-allowed effect.
It should be noted that the above is only an illustrative example, and the present invention is not limited to this.
In an alternative embodiment, the second virtual object and the first virtual object are represented by different bounding boxes, for example: when the second virtual object is implemented as a target virtual item, the bounding box surrounding the target virtual item is called the "prop bounding box", the bounding box surrounding the first virtual object is called the "object bounding box", and the item pickup condition is determined based on the intersection relationship between the "prop bounding box" and the "object bounding box". The method of realizing prop pickup through bounding boxes is called the "bounding box detection logic method", namely: the bounding box detection logic method, instead of the dot detection logic method, is adopted to pick up the target virtual prop in the virtual scene.
Illustratively, as shown in fig. 14, the above-mentioned item pickup method determining process is implemented as the following steps 1410 to 1432.
The AABB detection method analyzes the collision relationship between two objects by using circumscribed boxes whose edges are parallel to the coordinate axes.
Illustratively, the virtual scene is implemented as a 3D space, and when an "object bounding box" that encloses the first virtual object is determined, a cube that is capable of enclosing the first virtual object is determined in the 3D space, and 12 sides of the cube are each parallel to a corresponding coordinate axis. For example: from the edges of the first virtual object, a cube is determined that happens to enclose the first virtual object, the length of the cube being parallel to the x-axis of the 3D space, the width of the cube being parallel to the y-axis of the 3D space, and the height of the cube being parallel to the z-axis of the 3D space.
Similarly, when determining a "prop bounding box" surrounding the target virtual prop, in the 3D space, a cube capable of surrounding the target virtual prop is determined, and 12 sides of the cube are parallel to corresponding coordinate axes, and the cube is used as the prop bounding box. For example: according to a plurality of edge points of the target virtual prop, a cube which just can surround the target virtual prop is determined, the length of the cube is parallel to the x axis of the 3D space, the width of the cube is parallel to the y axis of the 3D space, and the height of the cube is parallel to the z axis of the 3D space.
Optionally, as the first virtual object moves from a starting point to an end point in the virtual scene, a starting point position and an end point position of the "object bounding box" corresponding to the first virtual object in the virtual scene are determined, and when the intersection condition of the "object bounding box" and the "prop bounding box" is judged, the intersection condition of the "object bounding box" and the "prop bounding box" at the end point position is determined.
In an optional embodiment, an area range through which an "object bounding box" corresponding to the first virtual object passes when moving in the virtual scene is determined, and when the intersection condition of the "object bounding box" and the "prop bounding box" is judged, the intersection condition of the area range and the "prop bounding box" is determined.
Alternatively, the logic for determining whether to intersect is as follows.
The description is given taking an example in which the "object bounding box" and the "prop bounding box" are implemented as rectangular bounding boxes. The position relationship between the eight vertices of the "object bounding box" and the eight vertices of the "prop bounding box" is judged: when one of the eight vertices of the "object bounding box" is located within the range of the "prop bounding box", it is determined that the "object bounding box" and the "prop bounding box" have an intersecting relationship; similarly, when one of the eight vertices of the "prop bounding box" is located within the range of the "object bounding box", it is determined that the "object bounding box" and the "prop bounding box" have an intersecting relationship, and the like.
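A sketch of this vertex-containment judgment (Python; names and coordinates assumed) is given below. Note that vertex containment is the test as described here; it can miss configurations where two boxes cross without either containing a vertex of the other, for which the usual general check is per-axis interval overlap.

```python
# Minimal sketch: report an intersection when any vertex of one rectangular
# bounding box lies within the other, checked in both directions.

from itertools import product

def vertices(box_min, box_max):
    """The eight corner points of an axis-aligned rectangular bounding box."""
    return list(product(*zip(box_min, box_max)))

def contains(box_min, box_max, point):
    return all(lo <= c <= hi for c, lo, hi in zip(point, box_min, box_max))

def bounding_boxes_intersect(obj_box, prop_box):
    (o_min, o_max), (p_min, p_max) = obj_box, prop_box
    return (any(contains(p_min, p_max, v) for v in vertices(o_min, o_max)) or
            any(contains(o_min, o_max, v) for v in vertices(p_min, p_max)))

object_box = ((0, 0, 0), (2, 2, 2))   # "object bounding box"
prop_box = ((1, 1, 1), (3, 3, 3))     # "prop bounding box"
print(bounding_boxes_intersect(object_box, prop_box))  # True
```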
Illustratively, when the "object bounding box" intersects the "prop bounding box," this is accomplished as step 1431; when the "object bounding box" does not intersect the "prop bounding box," this is accomplished as step 1432.
In step 1431, when the object bounding box intersects the prop bounding box, it is determined that the target virtual prop is within the prop pickup range.
Optionally, in response to the target virtual item being located within the item pickup range, displaying an allowable pickup effect corresponding to the target virtual item; or, in response to the target virtual item being within the item pickup range, automatically picking up the target virtual item.
In step 1432, the target virtual prop is not within the prop pickup range, or the first virtual object is completely contained within the prop bounding box used for pickup detection.
Illustratively, when the "object bounding box" and the "prop bounding box" do not intersect, it is determined that the target virtual prop is not within the prop pickup range, so the allowable pickup effect corresponding to the target virtual prop is not displayed.
Alternatively, when the "object bounding box" and the "prop bounding box" do not intersect, it may be that the first virtual object is completely located inside the "prop bounding box" corresponding to the target virtual prop; that is, within the preset time interval, the movement range of the first virtual object is smaller than the range of the "prop bounding box" corresponding to the target virtual prop.
Optionally, to avoid the situation where the target virtual prop cannot be picked up because the first virtual object is completely located inside the "prop bounding box" corresponding to the target virtual prop, when the "object bounding box" and the "prop bounding box" do not intersect, the separation distance between the target virtual prop and the first virtual object in the virtual scene is determined, and when this separation distance satisfies the pickup condition, the allowable pickup effect corresponding to the target virtual prop is displayed.
Illustratively, the pickup condition is: if the distance between the center point of the "prop bounding box" and the center point of the "object bounding box" is smaller than a preset distance, the allowable pickup effect corresponding to the target virtual prop is displayed.
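A sketch of this fallback distance check follows; the preset distance constant and all names are hypothetical, introduced only for illustration.

```python
import math

PICKUP_DISTANCE = 2.0  # hypothetical preset distance, in scene units

def center(box_min, box_max):
    """Center point of an AABB given as (min corner, max corner)."""
    return tuple((lo + hi) / 2 for lo, hi in zip(box_min, box_max))

def allow_pickup_by_distance(prop_box, object_box, max_dist=PICKUP_DISTANCE):
    """Fallback when the boxes do not intersect: allow pickup if the
    distance between the two box centers is below the preset distance."""
    c1, c2 = center(*prop_box), center(*object_box)
    return math.dist(c1, c2) < max_dist
```

This check handles the case above where the first virtual object sits entirely inside the prop bounding box, since the two centers are then necessarily close.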
It should be noted that the above is only an illustrative example, and the present application is not limited thereto.
In summary, the starting point of the first virtual object in the virtual scene is obtained, the end point is determined according to the moving speed and the moving direction, and the intersection relationship between the line segment connecting the starting and end points and the second virtual object is determined, so as to implement the interaction process with the second virtual object. Even if the first virtual object moves at a high speed and the distance between its end position and the second virtual object is large, the interaction with the second virtual object can be carried out more efficiently through the intersection relationship between the line segment and the second virtual object. When this collision detection method in a virtual environment is applied to the prop pickup process, with the second virtual object implemented as the target virtual prop, a new prop pickup approach is adopted, which solves the problem of low pickup accuracy for target virtual props in the related art, improves the player's game experience, and improves human-computer interaction efficiency.
Fig. 15 is a block diagram of a collision detection apparatus in a virtual environment according to an exemplary embodiment of the present application, and as shown in fig. 15, the apparatus includes the following components:
a position point obtaining module 1510, configured to obtain a first position point of a first virtual object in a virtual scene, where the first position point is the position point where the first virtual object is currently located in the virtual scene;
a position point determining module 1520, configured to determine a second position point of the first virtual object in the virtual scene based on the moving speed and the moving direction of the first virtual object, where the second position point is the position point reached by the first virtual object after moving in the moving direction at the moving speed for a preset time interval from the first position point;
a line segment obtaining module 1530, configured to obtain a connecting line segment between the first position point and the second position point;
an effect display module 1540, configured to display a collision effect corresponding to a second virtual object in the virtual scene in response to the connecting line segment having an intersecting relationship with the second virtual object.
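As a rough sketch of how the position point determining module and the line segment obtaining module might cooperate, the following assumes a unit direction vector and a fixed time step; all names and the time-step value are illustrative assumptions, not the application's code.

```python
def second_position(p1, speed, direction, dt=0.1):
    """Second position point: advance the first position point along the
    (unit) moving direction at the moving speed for a preset interval dt."""
    return tuple(c + speed * d * dt for c, d in zip(p1, direction))

def connecting_segment(p1, speed, direction, dt=0.1):
    """The connecting line segment is the pair of its two endpoints."""
    return p1, second_position(p1, speed, direction, dt)
```

For example, connecting_segment((0.0, 0.0, 0.0), 10.0, (1.0, 0.0, 0.0)) returns the segment from (0, 0, 0) to (1, 0, 0), which is then tested against the second virtual object.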
In an alternative embodiment, the effect display module 1540 is further configured to determine a collision detection box surrounding the second virtual object based on the position information of the second virtual object in the virtual scene, and to display the collision effect corresponding to the second virtual object in response to the connecting line segment having an intersecting relationship with the collision detection box.
In an optional embodiment, the effect display module 1540 is further configured to display the collision effect corresponding to the second virtual object in response to the connecting line segment having an intersection point with at least one face of the collision detection box.
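One standard way to test a segment against an axis-aligned box is the "slab" method sketched below; this is a generic textbook formulation offered for illustration, not the application's implementation. Note that it also reports True when the segment lies entirely inside the box; if a strict face crossing is required, one can additionally check that at least one endpoint lies outside the box.

```python
def segment_intersects_aabb(p1, p2, box_min, box_max):
    """Slab test: clip the segment's parameter range [0, 1] against the
    three axis-aligned slabs of the box; a non-empty remaining range
    means the segment touches or passes through the box."""
    t_enter, t_exit = 0.0, 1.0
    for a, b, lo, hi in zip(p1, p2, box_min, box_max):
        d = b - a
        if abs(d) < 1e-12:            # segment parallel to this slab
            if a < lo or a > hi:      # and entirely outside it
                return False
            continue
        t0, t1 = (lo - a) / d, (hi - a) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_enter, t_exit = max(t_enter, t0), min(t_exit, t1)
        if t_enter > t_exit:
            return False
    return True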
In an optional embodiment, the effect display module 1540 is further configured to perform a point sampling operation on the connecting line segment to obtain a plurality of segment points in the connecting line segment, and to display the collision effect corresponding to the second virtual object in response to at least one of the plurality of segment points being located within the collision detection box.
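The point-sampling variant might look like the following sketch; the sample count of 16 is an arbitrary assumption, and finer sampling trades computation for accuracy.

```python
def sample_points(p1, p2, n=16):
    """Sample n evenly spaced points along the segment, endpoints
    included (n must be at least 2)."""
    return [tuple(a + (b - a) * i / (n - 1) for a, b in zip(p1, p2))
            for i in range(n)]

def segment_hits_box_by_sampling(p1, p2, box_min, box_max, n=16):
    """Report a collision if at least one sampled point lies in the box."""
    return any(all(lo <= c <= hi for lo, c, hi in zip(box_min, p, box_max))
               for p in sample_points(p1, p2, n))
```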
In an alternative embodiment, the position point determining module 1520 is further configured to determine the second position point of the first virtual object in the virtual scene along the moving direction in response to the moving speed of the first virtual object reaching a preset speed threshold; or, in response to receiving a triggering operation on an acceleration control, determine the second position point of the first virtual object in the virtual scene along the moving direction.
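A minimal sketch of this gating logic follows, assuming a hypothetical preset speed threshold; the names are illustrative only.

```python
SPEED_THRESHOLD = 5.0  # hypothetical preset speed threshold, units/second

def should_predict_second_point(speed, accel_control_triggered):
    """Run the second-point prediction only when the first virtual object
    is fast enough or the player has triggered the acceleration control."""
    return speed >= SPEED_THRESHOLD or accel_control_triggered
```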
In an optional embodiment, the line segment obtaining module 1530 is further configured to obtain the connecting line segment between the first position point and the second position point in response to the distance between the first position point and the second position point meeting a preset distance threshold.
In an alternative embodiment, the second virtual object comprises a target virtual prop;
the effect display module 1540 is further configured to display an allowable pickup effect of the target virtual prop in response to the connecting line segment having an intersecting relationship with the target virtual prop in the virtual scene, and to use the allowable pickup effect as the collision effect, where the allowable pickup effect is used to indicate that the target virtual prop is allowed to be picked up.
In an alternative embodiment, the second virtual object comprises a target virtual character;
the effect display module 1540 is further configured to display an interaction effect of the first virtual object and the target virtual character in response to the connecting line segment having an intersecting relationship with the target virtual character in the virtual scene, and to use the interaction effect as the collision effect.
In an alternative embodiment, the effect display module 1540 is further configured to: determine a plurality of collision detection boxes respectively surrounding a plurality of second virtual objects based on the position information of the plurality of second virtual objects in the virtual environment, where the ith second virtual object is surrounded by the ith collision detection box and i is a positive integer; determine at least one second virtual object corresponding to at least one collision detection box among the plurality of collision detection boxes in response to the connecting line segment having an intersecting relationship with the at least one collision detection box; and display the collision effect corresponding to the at least one second virtual object.
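With several second virtual objects, the check simply loops over the per-object boxes, reusing the segment_intersects_aabb sketch given earlier; the pair-list format is an assumption for this illustration.

```python
def intersected_objects(p1, p2, object_boxes):
    """object_boxes is a list of (virtual_object, (box_min, box_max))
    pairs; return every object whose collision detection box the
    connecting line segment from p1 to p2 intersects."""
    return [obj for obj, (bmin, bmax) in object_boxes
            if segment_intersects_aabb(p1, p2, bmin, bmax)]
```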
In an alternative embodiment, the effect display module 1540 is further configured to display the collision effects respectively corresponding to the at least one second virtual object; or, to display an object viewing list indicating that the at least one second virtual object can be viewed, and to display the collision effects respectively corresponding to the at least one second virtual object in response to receiving an expansion operation on the object viewing list.
In summary, a first position point of the first virtual object in the virtual scene is obtained, a second position point is determined according to the moving speed and the moving direction, and the two points are connected to obtain a connecting line segment; when the connecting line segment has an intersecting relationship with the second virtual object, the collision effect of the second virtual object is displayed. With this apparatus, even if the first virtual object moves at a high speed and the second position point it reaches is far from the second virtual object, whether to display the collision effect can still be determined through the intersection relationship between the connecting line segment and the second virtual object. This assists the player in controlling the first virtual object, enables a more efficient interaction process with the second virtual objects the connecting line segment passes through, improves the player's game experience, and improves human-computer interaction efficiency.
It should be noted that: the collision detection apparatus in a virtual environment provided in the foregoing embodiment is only illustrated by the division of the functional modules, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the collision detection apparatus in a virtual environment provided by the above embodiments and the collision detection method in a virtual environment belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments, and are not described herein again.
Fig. 16 shows a block diagram of an electronic device 1600 provided in an exemplary embodiment of the present application. The electronic device 1600 may be a portable mobile terminal, such as a smart phone, a vehicle-mounted terminal, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The electronic device 1600 may also be referred to by other names, such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, the electronic device 1600 includes: a processor 1601, and a memory 1602.
In some embodiments, the electronic device 1600 also includes one or more sensors. The one or more sensors include, but are not limited to: proximity sensors, gyroscope sensors, pressure sensors.
Proximity sensors, also known as distance sensors, are typically provided on the front panel of the electronic device 1600. The proximity sensor is used to capture the distance between the user and the front of the electronic device 1600.
The gyroscope sensor can detect the body direction and the rotation angle of the electronic device 1600, and the gyroscope sensor and the acceleration sensor can cooperatively acquire the 3D action of the user on the electronic device 1600. The processor 1601 may implement the following functions according to the data collected by the gyroscope sensor: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor may be disposed on a side frame of the electronic device 1600 and/or at a lower layer of the display screen. When the pressure sensor is disposed on the side frame of the electronic device 1600, a user's holding signal on the electronic device 1600 can be detected, and the processor 1601 performs left/right-hand recognition or shortcut operations according to the holding signal collected by the pressure sensor. When the pressure sensor is disposed at the lower layer of the display screen, the processor 1601 controls operability controls on the UI interface according to the user's pressure operation on the display screen. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
In some embodiments, the electronic device 1600 also includes other components. Those skilled in the art will appreciate that the structure shown in FIG. 16 does not constitute a limitation of the electronic device 1600; more or fewer components than shown may be included, some components may be combined, or a different arrangement of components may be employed.
Embodiments of the present application further provide a computer device, which may be implemented as a terminal or a server as shown in fig. 2. The computer device comprises a processor and a memory, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the collision detection method in a virtual environment provided by the above-mentioned method embodiments.
Embodiments of the present application further provide a computer-readable storage medium, on which at least one instruction, at least one program, a code set, or a set of instructions is stored, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the collision detection method in a virtual environment provided by the above method embodiments.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the collision detection method in the virtual environment according to any of the above embodiments.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a Resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM).
The serial numbers of the above embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (14)
1. A method of collision detection in a virtual environment, the method comprising:
acquiring a first position point of a first virtual object in a virtual scene, wherein the first position point is a position point where the first virtual object is currently located in the virtual scene;
determining a second position point of the first virtual object in the virtual scene based on the moving speed and the moving direction of the first virtual object, wherein the second position point is a position point reached by the first virtual object moving to the moving direction at the moving speed for a preset time interval after the first position point;
acquiring a connecting line segment between the first position point and the second position point;
and responding to the intersection relation between the connecting line segment and a second virtual object in the virtual scene, and displaying a collision effect corresponding to the second virtual object.
2. The method of claim 1, wherein the displaying, in response to the line segment intersecting a second virtual object in the virtual scene, a collision effect corresponding to the second virtual object comprises:
determining a collision detection box surrounding the second virtual object based on the position information of the second virtual object in the virtual scene;
and responding to the intersection relation between the connecting line segment and the collision detection box, and displaying the collision effect corresponding to the second virtual object.
3. The method according to claim 2, wherein the displaying the collision effect corresponding to the second virtual object in response to the intersection relationship between the line segment and the collision detection box comprises:
and displaying the collision effect corresponding to the second virtual object in response to the connecting line segment and at least one surface of the collision detection box having an intersection.
4. The method according to claim 2, wherein the displaying the collision effect corresponding to the second virtual object in response to the intersection relationship between the line segment and the collision detection box comprises:
performing point sampling operation on the connecting line segment to obtain a plurality of segment points in the connecting line segment;
and displaying the collision effect corresponding to the second virtual object in response to the fact that at least one line segment point exists in the plurality of line segment points and is located in the collision detection box.
5. The method of any of claims 1 to 4, wherein determining the second position point of the first virtual object in the virtual scene based on the moving speed and the moving direction of the first virtual object comprises:
in response to the moving speed of the first virtual object reaching a preset speed threshold, determining the second position point of the first virtual object in the virtual scene along the moving direction;
or,
in response to receiving a triggering operation on an acceleration control, determining the second position point of the first virtual object in the virtual scene along the moving direction.
6. The method of any of claims 1 to 4, further comprising:
and acquiring the connecting line segment between the first position point and the second position point in response to the fact that the distance between the first position point and the second position point meets a preset distance threshold.
7. The method of any of claims 1 to 4, wherein the second virtual object comprises a target virtual prop;
the displaying a collision effect corresponding to a second virtual object in the virtual scene in response to the connection line segment having an intersection relationship with the second virtual object includes:
responding to the intersection relationship between the connecting line segment and the target virtual prop in the virtual scene, displaying an allowed picking effect of the target virtual prop, and taking the allowed picking effect as the collision effect, wherein the allowed picking effect is used for indicating to pick the target virtual prop.
8. The method of any of claims 1 to 4, wherein the second virtual object comprises a target virtual character;
the displaying a collision effect corresponding to a second virtual object in the virtual scene in response to the connection line segment having an intersection relationship with the second virtual object includes:
and responding to the intersection relationship between the connecting line segment and the target virtual character in the virtual scene, displaying the interactive effect of the first virtual object and the target virtual character, and taking the interactive effect as the collision effect.
9. The method according to any one of claims 1 to 4, wherein the displaying, in response to the line segment intersecting with a second virtual object in the virtual scene, a collision effect corresponding to the second virtual object comprises:
determining a plurality of collision detection boxes respectively surrounding a plurality of second virtual objects based on position information of the plurality of second virtual objects in the virtual environment, wherein the ith second virtual object is surrounded by the ith collision detection box, and i is a positive integer;
determining at least one second virtual object corresponding to at least one collision detection box in the plurality of collision detection boxes in response to the intersection relationship between the line segment and the at least one collision detection box;
and displaying the collision effect corresponding to the at least one second virtual object.
10. The method of claim 9, wherein the displaying the collision effect corresponding to the at least one second virtual object comprises:
displaying the collision effects respectively corresponding to the at least one second virtual object;
or,
displaying an object viewing list indicating viewing of the at least one second virtual object; displaying the collision effects respectively corresponding to at least one second virtual object in response to receiving an expansion operation on the object viewing list.
11. An apparatus for collision detection in a virtual environment, the apparatus comprising:
a position point obtaining module, configured to obtain a first position point of a first virtual object in a virtual scene, where the first position point is a position point where the first virtual object is currently located in the virtual scene;
a position point determining module, configured to determine a second position point of the first virtual object in the virtual scene based on a moving speed and a moving direction of the first virtual object, where the second position point is a position point reached by the first virtual object moving in the moving direction at the moving speed for a preset time interval after the first position point;
the line segment acquisition module is used for acquiring a connecting line segment between the first position point and the second position point;
and the effect display module is used for responding to the intersection relation between the connecting line segment and a second virtual object in the virtual scene and displaying the collision effect corresponding to the second virtual object.
12. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction that is loaded and executed by the processor to implement a method of collision detection in a virtual environment according to any of claims 1 to 10.
13. A computer-readable storage medium having stored therein at least one instruction which is loaded and executed by a processor to implement a method of collision detection in a virtual environment according to any one of claims 1 to 10.
14. A computer program product comprising computer instructions which, when executed by a processor, implement a method of collision detection in a virtual environment as claimed in any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211054142.7A CN115430153A (en) | 2022-08-30 | 2022-08-30 | Collision detection method, device, apparatus, medium, and program in virtual environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115430153A true CN115430153A (en) | 2022-12-06 |
Family
ID=84244222
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211054142.7A Pending CN115430153A (en) | 2022-08-30 | 2022-08-30 | Collision detection method, device, apparatus, medium, and program in virtual environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115430153A (en) |
- 2022-08-30 CN CN202211054142.7A patent/CN115430153A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012133701A (en) * | 2010-12-24 | 2012-07-12 | Shift:Kk | Three-dimensional space data processing apparatus and program |
CN104606887A (en) * | 2014-12-30 | 2015-05-13 | 北京像素软件科技股份有限公司 | Collision judgment method |
CN108629847A (en) * | 2018-05-07 | 2018-10-09 | 网易(杭州)网络有限公司 | Virtual objects mobile route generation method, device, storage medium and electronic equipment |
CN111167120A (en) * | 2019-12-31 | 2020-05-19 | 网易(杭州)网络有限公司 | Method and device for processing virtual model in game |
CN114377397A (en) * | 2022-01-21 | 2022-04-22 | 腾讯科技(深圳)有限公司 | Path finding method, device, equipment and storage medium in virtual scene |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024199200A1 (en) * | 2023-03-30 | 2024-10-03 | 腾讯科技(深圳)有限公司 | Method and apparatus for determining collision event, and storage medium, electronic device and program product |
CN117392358A (en) * | 2023-12-04 | 2024-01-12 | 腾讯科技(深圳)有限公司 | Collision detection method, collision detection device, computer device and storage medium |
CN117392358B (en) * | 2023-12-04 | 2024-04-09 | 腾讯科技(深圳)有限公司 | Collision detection method, collision detection device, computer device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112090069B (en) | Information prompting method and device in virtual scene, electronic equipment and storage medium | |
CN110665230B (en) | Virtual role control method, device, equipment and medium in virtual world | |
CN110585731B (en) | Method, device, terminal and medium for throwing virtual article in virtual environment | |
CN109529356B (en) | Battle result determining method, device and storage medium | |
CN110465087B (en) | Virtual article control method, device, terminal and storage medium | |
CN110732135B (en) | Virtual scene display method and device, electronic equipment and storage medium | |
CN111714886B (en) | Virtual object control method, device, equipment and storage medium | |
US20240070974A1 (en) | Method and apparatus for displaying virtual environment picture, device, and storage medium | |
CN113144597B (en) | Virtual vehicle display method, device, equipment and storage medium | |
CN110585706B (en) | Interactive property control method, device, terminal and storage medium | |
CN113398601A (en) | Information transmission method, information transmission device, computer-readable medium, and apparatus | |
CN115430153A (en) | Collision detection method, device, apparatus, medium, and program in virtual environment | |
CN110801629B (en) | Method, device, terminal and medium for displaying virtual object life value prompt graph | |
CN113680060B (en) | Virtual picture display method, apparatus, device, medium and computer program product | |
TW202224739A (en) | Method for data processing in virtual scene, device, apparatus, storage medium and program product | |
US20230033530A1 (en) | Method and apparatus for acquiring position in virtual scene, device, medium and program product | |
CN112843682B (en) | Data synchronization method, device, equipment and storage medium | |
CN112316429A (en) | Virtual object control method, device, terminal and storage medium | |
KR20210141971A (en) | Method, apparatus, terminal, and storage medium for selecting virtual objects | |
CN111659122A (en) | Virtual resource display method and device, electronic equipment and storage medium | |
CN112316423A (en) | Method, device, equipment and medium for displaying state change of virtual object | |
WO2023071808A1 (en) | Virtual scene-based graphic display method and apparatus, device, and medium | |
CN115634449A (en) | Method, device, equipment and product for controlling virtual object in virtual scene | |
CN113680061A (en) | Control method, device, terminal and storage medium of virtual prop | |
CN111905380A (en) | Virtual object control method, device, terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||