CN112717397B - Virtual object control method, device, equipment and storage medium - Google Patents
Virtual object control method, device, equipment and storage medium
- Publication number
- CN112717397B (application CN202011613689A)
- Authority
- CN
- China
- Prior art keywords
- virtual
- rocker
- controlled
- virtual object
- obstacle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
- A63F13/42—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/80—Special adaptations for executing a specific game genre or game mode
- A63F13/837—Shooting of targets
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/80—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
- A63F2300/8076—Shooting
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application discloses a virtual object control method, device, equipment and storage medium, belonging to the field of computer technology. According to the technical solution provided by the embodiments of the application, when the user drags the virtual rocker during a game in a manner that meets a target condition, the virtual rocker can be fixed at a second position. The controlled virtual object can then be controlled to move continuously in the virtual scene based on the fixed virtual rocker, without the user having to keep dragging the rocker manually, which reduces the user's burden, lets the user complete more tasks and operations, and improves the user's game experience. In addition, the user can choose the locking position of the virtual rocker, and thereby control the moving direction of the controlled virtual object, which can be any direction; this improves the flexibility of direction locking and, overall, the efficiency of man-machine interaction.
Description
Technical Field
The present application relates to the field of computer technology, and in particular, to a virtual object control method, device, equipment, and storage medium.
Background
With the development of multimedia technology and the diversification of terminal functions, more and more kinds of games can be played on terminals. Shooting games are a popular genre; during a game, the user can control a virtual object to fight, in a virtual scene, against virtual objects controlled by other users.
In the related art, if a user wants to control a virtual object to move in a virtual scene, the user has to drag a virtual rocker with a finger of the left or right hand and keep dragging the rocker for as long as the virtual object is moving.
Taking the case where the user drags the virtual rocker with a finger of the left hand as an example, the user's left hand cannot perform any other game-related operation while the virtual object is moving, which results in low efficiency of man-machine interaction.
Disclosure of Invention
The embodiments of the application provide a virtual object control method, device, equipment and storage medium, which can improve the efficiency of man-machine interaction. The technical solution is as follows:
in one aspect, a virtual object control method is provided, the method including:
displaying a visual field picture of a controlled virtual object, wherein a virtual rocker is displayed at a first position of the visual field picture and is used for controlling the moving direction of the controlled virtual object in a virtual scene;
In response to detecting that a first dragging operation of the virtual rocker meets a target condition, fixing the virtual rocker at a second position corresponding to the first dragging operation;
and controlling the controlled virtual object to continuously move in the virtual scene according to the direction indicated by the position difference between the second position and the first position.
In a possible implementation manner, the virtual rocker is further used for controlling a moving direction of the virtual vehicle in the virtual scene, and after the virtual rocker is fixed at the second position corresponding to the first drag operation, the method further includes:
and controlling the virtual vehicle driven by the controlled virtual object to continuously move in the virtual scene according to the direction indicated by the position difference between the second position and the first position.
In one aspect, there is provided a virtual object control apparatus, the apparatus comprising:
the display module is used for displaying a visual field picture of the controlled virtual object, a virtual rocker is displayed on a first position of the visual field picture, and the virtual rocker is used for controlling the moving direction of the controlled virtual object in a virtual scene;
The fixing module is used for responding to the fact that the first dragging operation of the virtual rocker meets the target condition, and fixing the virtual rocker at a second position corresponding to the first dragging operation;
and the control module is used for controlling the controlled virtual object to continuously move in the virtual scene according to the direction indicated by the position difference between the second position and the first position.
In one possible implementation, the first drag operation meeting the target condition refers to either of the following:
the holding time of the first dragging operation at the second position is longer than the target duration;
the first drag operation has a force at the second location that is greater than a target force.
In one possible implementation manner, a rocker limit graph is further displayed on the view field picture, the virtual rocker is displayed on the rocker limit graph, and the rocker limit graph is used for limiting the movement range of the virtual rocker.
In one possible embodiment, the second position is one edge of the rocker limit pattern.
In one possible embodiment, the first position is a geometric center of the rocker limit pattern.
In a possible implementation manner, the control module is used for determining a target moving direction of the controlled virtual object in the virtual scene according to a direction indicated by a position difference between the second position and the first position; and controlling the controlled virtual object to continuously move in the virtual scene in the target moving direction.
In one possible embodiment, the apparatus further comprises at least one of the following modules:
the locking graphic display module is used for displaying a locking graphic on the virtual rocker, and the locking graphic is used for indicating that the virtual rocker is fixed;
the prompt information display module is used for displaying locking prompt information on the visual field picture, wherein the locking prompt information is used for prompting that the virtual rocker is fixed;
the vibration module is used for sending out a first type of vibration prompt, wherein the first type of vibration prompt is used for reminding that the virtual rocker is fixed;
the playing module is used for playing a first type of sound prompt, and the first type of sound prompt is used for reminding that the virtual rocker is fixed.
In one possible implementation, the control module is further configured to, in response to detecting a second drag operation on the virtual rocker, release the fixation of the virtual rocker; and controlling the controlled virtual object to move in the virtual scene based on the moving direction indicated by the second dragging operation.
In a possible implementation manner, the display module is further configured to highlight, on the view field screen, an outline of the virtual rocker in response to the controlled virtual object being attacked by another virtual object in the virtual scene, so as to remind of releasing the fixation of the virtual rocker.
In a possible implementation manner, the control module is further configured to control the controlled virtual object to move towards a virtual obstacle closest to the controlled virtual object in response to the controlled virtual object being attacked by other virtual objects in the virtual scene.
In one possible implementation manner, the display module is further configured to display, on the view screen, obstacle prompting information in response to detecting that a virtual obstacle exists in a movement direction of the controlled virtual object, where the obstacle prompting information is used to prompt a type of the virtual obstacle and a distance between the virtual obstacle and the controlled virtual object.
In one possible implementation manner, the control module is further configured to control the controlled virtual object to execute an auxiliary action corresponding to the virtual obstacle in response to detecting that a virtual obstacle exists in front of the controlled virtual object, where the auxiliary action is used to assist the controlled virtual object to pass through the virtual obstacle.
In a possible implementation manner, the virtual rocker is further used for controlling a moving direction of the virtual vehicle in the virtual scene, and the control module is further used for controlling the virtual vehicle driven by the controlled virtual object to continuously move in the virtual scene according to the direction indicated by the position difference between the second position and the first position.
In one aspect, a computer device is provided that includes one or more processors and one or more memories having at least one computer program stored therein, the computer program loaded and executed by the one or more processors to implement the virtual object control method.
In one aspect, a computer readable storage medium having at least one computer program stored therein is provided, the computer program being loaded and executed by a processor to implement the virtual object control method.
In one aspect, a computer program product or a computer program is provided, the computer program product or computer program comprising program code stored in a computer-readable storage medium; a processor of a computer device reads the program code from the computer-readable storage medium and executes it, causing the computer device to perform the above virtual object control method.
According to the technical solution provided by the embodiments of the application, when the user drags the virtual rocker during a game in a manner that meets the target condition, the virtual rocker can be fixed at the second position. The controlled virtual object can then be controlled to move continuously in the virtual scene based on the fixed virtual rocker, without the user having to keep dragging the rocker manually, which reduces the user's burden, lets the user complete more tasks and operations, and improves the user's game experience. In addition, the user can choose the locking position of the virtual rocker, and thereby control the moving direction of the controlled virtual object, which can be any direction; this improves the flexibility of direction locking and, overall, the efficiency of man-machine interaction.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art may obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic diagram of an implementation environment of a virtual object control method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a view field picture according to an embodiment of the present application;
FIG. 3 is a schematic illustration of an interface provided by an embodiment of the present application;
FIG. 4 is a flowchart of a virtual object control method according to an embodiment of the present application;
FIG. 5 is a flowchart of a virtual object control method according to an embodiment of the present application;
FIG. 6 is a schematic illustration of an interface provided by an embodiment of the present application;
FIG. 7 is a schematic illustration of an interface provided by an embodiment of the present application;
FIG. 8 is a schematic illustration of an interface provided by an embodiment of the present application;
FIG. 9 is a schematic illustration of an interface provided by an embodiment of the present application;
FIG. 10 is a logic block diagram of a virtual object control method provided in an embodiment of the present application;
fig. 11 is a schematic structural diagram of a virtual object control device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms "first," "second," and the like in this application are used to distinguish between identical or similar items that have substantially the same function and function, and it should be understood that there is no logical or chronological dependency between the "first," "second," and "nth" terms, nor is it limited to the number or order of execution.
The term "at least one" in this application means one or more, and the meaning of "a plurality of" means two or more, for example, a plurality of reference face images means two or more reference face images.
Virtual scene: is a virtual scene that an application program displays (or provides) while running on a terminal. The virtual scene may be a simulation environment for the real world, a semi-simulation and semi-fictional virtual environment, or a pure fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application. For example, a virtual scene may include sky, land, sea, etc., the land may include environmental elements of a desert, city, etc., and a user may control a virtual object to move in the virtual scene.
Virtual object: refers to movable objects in a virtual scene. The movable object may be a virtual character, a virtual animal, a cartoon character, etc., such as: characters, animals, plants, oil drums, walls, stones, etc. displayed in the virtual scene. The virtual object may be an avatar in the virtual scene for representing a user. A virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene, occupying a portion of space in the virtual scene.
Optionally, the virtual object is a user character controlled by operations on the client, an artificial intelligence (AI) set in the virtual-scene battle through training, or a non-player character (NPC) set in the virtual scene. Optionally, the virtual object is a virtual character competing in the virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene is preset or dynamically determined according to the number of clients joining the interaction.
Taking a shooting game as an example, the user can control the virtual object to fall freely, glide or open a parachute in the sky of the virtual scene, to run, jump, crawl or bend forward on land, or to swim, float or dive in the ocean; of course, the user can also control the virtual object to move in the virtual scene by riding a virtual vehicle, for example a virtual car, a virtual aircraft or a virtual yacht, the above scenes being merely examples. The user can also control the virtual object to battle other virtual objects with an interactive prop, for example a throwing-type interactive prop such as a grenade, a cluster grenade or a sticky grenade, or a shooting-type interactive prop such as a machine gun, a pistol or a rifle; the type of interactive prop is not specifically limited in this application.
Fig. 1 is a schematic diagram of an implementation environment of a virtual object control method according to an embodiment of the present application, referring to fig. 1, the implementation environment includes: a first terminal 120, a second terminal 140, and a server 160.
The first terminal 120 installs and runs an application program that supports the display of virtual scenes. Optionally, the application is any one of a first-person shooter (FPS) game, a third-person shooter game, a virtual reality application, a three-dimensional map program, or a multiplayer gunfight survival game. The first terminal 120 is a terminal used by a first user, who uses the first terminal 120 to operate a controlled virtual object located in the virtual scene to perform activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the controlled virtual object is a first virtual character, such as a simulated character or a cartoon character.
The first terminal 120 and the second terminal 140 are connected to the server 160 through a wireless network or a wired network.
The second terminal 140 installs and runs an application program that supports the display of virtual scenes. Optionally, the application is any one of an FPS, a third-person shooter game, a virtual reality application, a three-dimensional map program, or a multiplayer gunfight survival game. The second terminal 140 is a terminal used by a second user, who uses the second terminal 140 to operate another virtual object located in the virtual scene to perform activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the virtual object controlled by the second terminal 140 is a second virtual character, such as a simulated character or a cartoon character.
Optionally, the virtual object controlled by the first terminal 120 and the virtual object controlled by the second terminal 140 are in the same virtual scene; in this case, the virtual object controlled by the first terminal 120 can interact with the virtual object controlled by the second terminal 140 in the virtual scene. In some embodiments, the two virtual objects are in a hostile relationship, for example they belong to different teams and organizations, and can battle each other on land by shooting.
Alternatively, the applications installed on the first terminal 120 and the second terminal 140 are the same, or the applications installed on the two terminals are the same type of application of different operating system platforms. Wherein the first terminal 120 refers broadly to one of the plurality of terminals and the second terminal 140 refers broadly to one of the plurality of terminals, the present embodiment is exemplified by only the first terminal 120 and the second terminal 140. The device types of the first terminal 120 and the second terminal 140 are the same or different, and the device types include: at least one of a smart phone, a tablet computer, a laptop portable computer, and a desktop computer. For example, the first terminal 120 and the second terminal 140 may be smart phones, or other handheld portable game devices, but are not limited thereto. The technical solution provided in the embodiment of the present application can be applied to the first terminal 120 and the second terminal 140, which is not limited in this embodiment of the present application. For the sake of clarity and brevity, in the following description, a terminal is used to refer to a first terminal or a second terminal.
Optionally, the server 160 is an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (CDN), big data, and artificial intelligence platforms. The number and device types of servers are not limited in the embodiments of the present application.
To describe the technical solution provided in the embodiments of the present application more clearly, the field-of-view picture of the controlled virtual object is first introduced. Referring to fig. 2, to make a shooting game more realistic, the game designer models the way the controlled virtual object observes the virtual scene on the way a human observes the real world. The controlled virtual object 201 can observe the part of the virtual scene within the region 202, and the picture obtained by observing the region 202 from the perspective of the controlled virtual object 201 is the field-of-view picture of the controlled virtual object. The user can adjust which part of the virtual scene the controlled virtual object 201 observes by adjusting the orientation of the controlled virtual object 201; the way the controlled virtual object 201 observes the virtual scene is also the way the user observes it. The terminal projects the virtual scene within the region 202 onto the screen, so that the user can see, through the screen, what the controlled virtual object 201 sees in the virtual scene.
Taking a smartphone as an example of the terminal, controls for causing the controlled virtual object to perform different actions are also displayed on the visual field picture of the controlled virtual object. Referring to fig. 3, a virtual rocker 302, a posture adjustment control 303, a shooting control 304, and a prop switching control 305 are displayed on the visual field picture 301 of the controlled virtual object. The virtual rocker 302 is used to control the moving direction of the controlled virtual object. The posture adjustment control 303 is used to adjust the posture of the controlled virtual object, for example to make it squat or crawl. The shooting control 304 is used to control the interactive prop held by the controlled virtual object to fire virtual ammunition. The prop switching control 305 is used to switch to a target virtual prop; in the embodiments of the present application, the user can control the controlled virtual object to throw the target virtual prop through the shooting control 304. Element 306 is a minimap, through which the user can view the positions of teammates and enemies in the virtual scene.
It should be noted that, in the following description of the technical solution provided in the present application, the terminal is taken as the execution body by way of example. In other possible implementations, the technical solution provided in the present application may also be executed through cooperation between the terminal and the server; the embodiments of the present application do not limit the type of execution body.
Fig. 4 is a flowchart of a virtual object control method provided in an embodiment of the present application, referring to fig. 4, the method includes:
401. the terminal displays a visual field picture of the controlled virtual object, and a virtual rocker is displayed at a first position of the visual field picture and is used for controlling the moving direction of the controlled virtual object in the virtual scene.
For the description of the visual field picture of the controlled virtual object, refer to the description of fig. 3, which is not repeated here. The first position may also be referred to as the initial display position of the virtual rocker, that is, the position at which the virtual rocker is displayed when the user is not dragging it. Here, "not dragging the virtual rocker" covers two cases: the user performs no operation on the virtual rocker at all, or the user's drag operation on the virtual rocker has ended. During a game, the user can make the controlled virtual object move in different directions in the virtual scene through the virtual rocker.
402. And in response to detecting that the first dragging operation of the virtual rocker meets the target condition, the terminal fixes the virtual rocker at a second position corresponding to the first dragging operation.
The second position corresponding to the first drag operation is the position where the virtual rocker is located under the first drag operation.
403. And the terminal controls the controlled virtual object to continuously move in the virtual scene according to the direction indicated by the position difference between the second position and the first position.
In some embodiments, the terminal can employ a vector pointing from the first position to the second position to represent a direction indicated by a difference in position between the second position and the first position, and can subsequently adjust a direction of movement of the controlled virtual object in the virtual scene based on the vector.
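As an informal illustration of steps 402 and 403, the following sketch derives a movement direction from the difference between the second and first positions and applies it every frame; the type names, the normalization step, and the frame-update helper are assumptions made here for illustration and are not part of the claimed solution.

```typescript
// Minimal sketch: derive a movement direction from the rocker's locked position.
interface Vec2 { x: number; y: number; }

// Direction indicated by the position difference between the second and first positions.
function lockedDirection(first: Vec2, second: Vec2): Vec2 {
  const dx = second.x - first.x;
  const dy = second.y - first.y;
  const len = Math.hypot(dx, dy);
  if (len === 0) return { x: 0, y: 0 }; // rocker has not moved: no direction to lock
  return { x: dx / len, y: dy / len };  // unit vector pointing from first to second
}

// Each frame, the controlled virtual object keeps moving along the locked direction.
function stepContinuousMove(position: Vec2, dir: Vec2, speed: number, dt: number): Vec2 {
  return { x: position.x + dir.x * speed * dt, y: position.y + dir.y * speed * dt };
}
```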
Through the technical solution provided by the embodiments of the application, when the user drags the virtual rocker during a game in a manner that meets the target condition, the terminal can fix the virtual rocker at the second position. The terminal can then control the controlled virtual object to move continuously in the virtual scene based on the fixed virtual rocker, without the user having to keep dragging the rocker manually, which reduces the user's burden, lets the user complete more tasks and operations, and improves the user's game experience. In addition, the user can choose the locking position of the virtual rocker, and thereby control the moving direction of the controlled virtual object, which can be any direction; this improves the flexibility of direction locking and, overall, the efficiency of man-machine interaction.
The steps 401 to 403 are simple descriptions of the virtual object control method provided in the embodiments of the present application, and the technical solutions provided in the embodiments of the present application will be described in detail below with reference to some examples.
Fig. 5 is a flowchart of a virtual object control method provided in an embodiment of the present application, referring to fig. 5, the method includes:
501. the terminal displays a visual field picture of the controlled virtual object, and a virtual rocker is displayed at a first position of the visual field picture and is used for controlling the moving direction of the controlled virtual object in the virtual scene.
The controlled virtual object is a virtual object controlled by the terminal, and a user can control the controlled virtual object by operating the terminal. In some embodiments, the virtual rocker is circular.
In one possible implementation, the terminal can display a visual field screen 301 of the controlled virtual object as shown in fig. 3, where a virtual rocker 302 is displayed on a first position of the visual field screen 301. After the user touches the virtual rocker 302 displayed on the terminal with a finger, dragging the finger on the screen can change the position of the virtual rocker 302. The position of the virtual rocker 302 corresponds to the movement direction of the controlled virtual object, and the user can change the movement direction of the controlled virtual object by changing the position of the virtual rocker 302. For the terminal, in response to detecting the drag operation on the virtual rocker 302, the terminal determines the position of the drag operation in real time, and displays the virtual rocker 302 at the position corresponding to the drag operation.
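As an informal illustration of this behaviour, the virtual rocker simply follows the current drag position; the names below are assumptions for illustration.

```typescript
// Minimal sketch: the virtual rocker is redrawn at the position of the current drag sample.
interface Point { x: number; y: number; }

let rockerPosition: Point = { x: 0, y: 0 }; // first position (initial display position)

function redrawRocker(pos: Point): void {
  // Rendering is engine-specific; this stub only records where the rocker should be drawn.
  rockerPosition = pos;
}

function onRockerDrag(fingerX: number, fingerY: number): void {
  // The terminal determines the drag position in real time and displays the
  // virtual rocker at the position corresponding to the drag operation.
  redrawRocker({ x: fingerX, y: fingerY });
}
```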
In one possible implementation, if the terminal is a touch-screen terminal, then in response to detecting a touch operation on the visual field picture 301 of the controlled virtual object shown in fig. 3, the terminal can display the virtual rocker at the first position corresponding to the touch operation.
In the embodiment, the terminal does not fixedly display the virtual rocker on the visual field picture of the controlled virtual object, but displays the virtual rocker on the position where the user touches the screen, so that the user can determine the display position of the virtual rocker by himself, and the efficiency of man-machine interaction is high.
In one possible implementation manner, a rocker limit graph is further displayed on the visual field picture of the controlled virtual object, the virtual rocker is displayed on the rocker limit graph, and the rocker limit graph is used for limiting the movement range of the virtual rocker.
The shape of the rocker limiting graph can be circular, elliptical or square, and the embodiment of the application is not limited to the shape.
In this implementation, the user can see the allowed movement range of the virtual rocker by observing the rocker limit graphic. Limiting the virtual rocker to a certain range reduces redundant operations when the user drags the virtual rocker and improves the efficiency of man-machine interaction.
For example, referring to fig. 6, a field-of-view screen 601 of the controlled virtual object is displayed, a rocker limit graphic 602 is displayed on the screen 601, and a virtual rocker 603 is displayed on the rocker limit graphic 602. The range of the rocker limit graphic 602 is the range within which the virtual rocker 603 is allowed to move: no matter how the user moves a finger on the screen while dragging the virtual rocker 603, the virtual rocker 603 cannot be dragged out of the rocker limit graphic 602. In some embodiments, if the user drags the virtual rocker 603 to the edge of the rocker limit graphic 602 and the finger keeps moving in the same direction, the virtual rocker 603 is still displayed at that edge of the rocker limit graphic. In some embodiments, the first position is the geometric center of the rocker limit graphic 602; that is, when the user is not dragging the virtual rocker 603, the terminal displays the virtual rocker 603 at the geometric center of the rocker limit graphic 602, and if the virtual rocker 603 and the rocker limit graphic 602 are both circular, the terminal displays them concentrically.
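One way to keep the rocker inside a circular rocker limit graphic, as described above, is to clamp the drag position to the limit circle; this informal sketch assumes both shapes are circles and that the radius value is illustrative.

```typescript
// Minimal sketch: clamp the rocker's drag position to a circular limit graphic.
interface Vec2 { x: number; y: number; }

function clampToLimit(center: Vec2, drag: Vec2, limitRadius: number): Vec2 {
  const dx = drag.x - center.x;
  const dy = drag.y - center.y;
  const dist = Math.hypot(dx, dy);
  if (dist <= limitRadius) return drag;   // inside the limit graphic: follow the finger
  const scale = limitRadius / dist;       // outside: stay on the edge, same direction
  return { x: center.x + dx * scale, y: center.y + dy * scale };
}
```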
502. And in response to detecting that the first dragging operation of the virtual rocker meets the target condition, the terminal fixes the virtual rocker at a second position corresponding to the first dragging operation.
The first drag operation meeting the target condition means either of the following: the holding time of the first drag operation at the second position is longer than a target duration, or the force of the first drag operation at the second position is greater than a target force. After the terminal fixes the virtual rocker, no operation other than an operation on the virtual rocker itself interrupts the fixing. For example, if the user originally controlled the virtual rocker with the left hand, then after the virtual rocker is fixed the left hand can perform other game operations, such as checking the virtual backpack of the controlled virtual object or enlarging the map.
In one possible implementation, in response to detecting that the hold time of the first drag operation of the virtual rocker in the second position is greater than the target duration, the terminal fixes the virtual rocker in the second position corresponding to the first drag operation. The target duration is set by a technician according to actual situations, for example, set to 0.5s, 1s, or 1.5s, which is not limited in the embodiment of the present application.
In this embodiment, keeping the virtual rocker at the second position for a long time through the first drag operation indicates that the user wants the controlled virtual object to keep moving in one direction. The terminal therefore fixes the virtual rocker at the second position, and the user no longer needs to keep dragging it for the controlled virtual object to move in that direction in the virtual scene, which reduces the user's burden and improves the efficiency of man-machine interaction.
The above embodiments are described below by way of two examples.
In example 1, after the virtual rocker has been dragged away from the first position, the terminal starts timing when it detects that the first drag operation on the virtual rocker has stopped at the second position, that is, when the user's finger stays at the second position. If the target duration is 1.5 s, the terminal fixes the virtual rocker at the second position in response to detecting that the first drag operation has stayed at the second position for longer than 1.5 s. After the terminal fixes the virtual rocker at the second position and the user ends the first drag operation, that is, lifts the finger, the virtual rocker is still displayed at the second position, and the controlled virtual object still moves in the virtual scene in the direction indicated by the virtual rocker.
In example 2, after the virtual rocker has been dragged away from the first position, the terminal starts timing when it detects that the virtual rocker stays at the second position. If the target duration is 2 s, the terminal fixes the virtual rocker at the second position in response to detecting that the virtual rocker has stayed at the second position for longer than 2 s. After the terminal fixes the virtual rocker at the second position and the user ends the first drag operation, that is, lifts the finger, the virtual rocker is still displayed at the second position, and the controlled virtual object still moves in the virtual scene in the direction indicated by the virtual rocker.
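The hold-time condition in examples 1 and 2 above could be checked with a simple timer, for instance as in the following informal sketch; the threshold, tolerance, and variable names are assumptions for illustration and are not part of the claimed solution.

```typescript
// Minimal sketch: fix the rocker when the drag has stayed at one position long enough.
const TARGET_DURATION_MS = 1500;   // e.g. 1.5 s, set by the developer
const MOVE_TOLERANCE_PX = 4;       // how far the finger may drift and still count as "staying"

let holdStart: number | null = null;
let holdPosition: { x: number; y: number } | null = null;
let rockerLocked = false;

function onDragSample(x: number, y: number, nowMs: number): void {
  if (holdPosition === null ||
      Math.hypot(x - holdPosition.x, y - holdPosition.y) > MOVE_TOLERANCE_PX) {
    // The finger moved: restart the timer at the new position.
    holdPosition = { x, y };
    holdStart = nowMs;
    return;
  }
  if (!rockerLocked && holdStart !== null && nowMs - holdStart > TARGET_DURATION_MS) {
    rockerLocked = true; // fix the rocker at the second position
  }
}
```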
In one possible implementation, in response to detecting that the force of the first drag operation on the virtual rocker at the second position is greater than the target force, the terminal secures the virtual rocker at the second position corresponding to the first drag operation. The target force is set by a technician according to actual situations, for example, set to 0.5N, 1N, or 1.5N, which is not limited in the embodiment of the present application.
In this embodiment, the user can control the terminal to fix the virtual rocker at the second position in a manner of pressing the second position, and since the second position is a position determined by the user, the user can also control the terminal to fix the virtual rocker at the corresponding position according to the moving direction of the controlled virtual object in the virtual scene, so that the flexibility of operation is high and the efficiency of man-machine interaction is also high.
This implementation is described below by way of examples.
Example 1, a pressure sensor is provided below a screen of a terminal, and the terminal can detect pressing force to the screen through the pressure sensor. Taking the target strength of 0.5N as an example, in the process of dragging the virtual rocker through the first dragging operation, the terminal can fix the virtual rocker at the second position in response to the fact that the pressing strength of the screen at the second position is detected to be greater than 0.5N through the pressure sensor. After the terminal fixes the virtual rocker at the second position, the user finishes the first dragging operation of the virtual rocker, namely, when the user lifts the finger, the virtual rocker is still displayed at the second position, and the controlled virtual object can still move in the virtual scene according to the direction indicated by the virtual rocker.
In example 2, the screen of the terminal is a capacitive screen. The capacitive screen includes an upper electrode and a lower electrode, and a capacitance is formed between them. The capacitance is given by C = εS/(4πkd), where C is the capacitance, ε is the dielectric constant, S is the facing area of the upper and lower electrodes, k is the electrostatic force constant, and d is the vertical distance between the upper and lower electrodes. With the other parameters unchanged, pressing down on the screen reduces the vertical distance d between the electrodes, so the terminal can determine the changed capacitance C_c between the electrodes from the reduced d. From the capacitance before the change, the changed capacitance C_c, and the physical coefficients of the screen (such as its hardness and elastic modulus), the terminal can work backwards to the force pressing on the screen. Taking a target force of 1.5 N as an example, while the virtual rocker is being dragged by the first drag operation, the terminal can fix the virtual rocker at the second position in response to detecting that the pressing force on the screen at the second position is greater than 1.5 N. After the terminal fixes the virtual rocker at the second position and the user ends the first drag operation, that is, lifts the finger, the virtual rocker is still displayed at the second position, and the controlled virtual object still moves in the virtual scene in the direction indicated by the virtual rocker.
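The relationship used in example 2 can be written compactly as follows. Treating the screen stack as a simple spring with an effective stiffness k_screen is an assumption made here only for illustration; the source states only that the force is derived from the capacitance change and the physical coefficients of the screen.

```latex
C = \frac{\varepsilon S}{4\pi k d}
\quad\Longrightarrow\quad
d_c = \frac{\varepsilon S}{4\pi k C_c},
\qquad
F \approx k_{\text{screen}}\,(d - d_c)
```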
In example 3, take a touch operation performed by a user's finger on the screen of the terminal as an example. When the finger presses the screen lightly, the contact area between the finger and the screen is A; when the finger presses the screen firmly, the contact area is B. Since the finger deforms more than the screen, the contact area B of a firm press is larger than the contact area A of a light press (both A and B are positive numbers), so the terminal can tell a firm press from a light press by the contact area. A plurality of touch sensors are arranged below the screen of the terminal to detect touch operations of the finger on the screen, and the terminal can represent the contact area between the finger and the screen by the number of touch sensors that detect the touch operation. For example, when the finger presses the screen lightly, the contact area is small, and M touch sensors under the screen detect the touch operation; when the finger presses the screen firmly, the contact area is large, and N touch sensors under the screen detect the touch operation, where M and N are positive integers and N > M. Taking as an example that a touch operation detected by no more than K touch sensors is treated as a light press and one detected by more than K touch sensors is treated as a firm press, while the virtual rocker is being dragged by the first drag operation, the terminal can fix the virtual rocker at the second position when the number of touch sensors detecting the touch operation is greater than K. After the terminal fixes the virtual rocker at the second position and the user ends the first drag operation, that is, lifts the finger, the virtual rocker is still displayed at the second position, and the controlled virtual object still moves in the virtual scene in the direction indicated by the virtual rocker.
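The sensor-count heuristic in example 3 could be implemented along the following lines; the threshold K and the function names are assumptions for illustration and are not part of the claimed solution.

```typescript
// Minimal sketch: treat a touch detected by many touch sensors as a "firm" press.
const K_SENSOR_THRESHOLD = 12; // illustrative value for K

function isFirmPress(activeSensorCount: number): boolean {
  // A light press touches few sensors (<= K); a firm press touches more (> K).
  return activeSensorCount > K_SENSOR_THRESHOLD;
}

function onPressSample(activeSensorCount: number, lockAtCurrentPosition: () => void): void {
  if (isFirmPress(activeSensorCount)) {
    lockAtCurrentPosition(); // fix the rocker at the second position
  }
}
```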
In one possible implementation, in response to detecting that the holding time of the first drag operation on the virtual rocker at the second position is longer than the target duration and that the force of the first drag operation at the second position is greater than the target force, the terminal fixes the virtual rocker at the second position corresponding to the first drag operation. The target duration and the target force are set by a technician according to the actual situation and are not limited in the embodiments of the present application.
In this embodiment, the terminal fixes the virtual rocker at the second position only when the first drag operation has been held at the second position for longer than the target duration and its force at the second position is greater than the target force. This reduces the probability that the virtual rocker is fixed by an accidental operation and improves the efficiency of man-machine interaction.
For example, in response to detecting that the first drag operation stays at the second position while the virtual rocker is being dragged by the first drag operation, the terminal starts timing. If the target duration is 1.5 s and the target force is 1.5 N, then in response to detecting that the force of the first drag operation at the second position is greater than 1.5 N and that the holding time of the first drag operation at the second position is longer than 1.5 s, the terminal fixes the virtual rocker at the second position. After the terminal fixes the virtual rocker at the second position and the user ends the first drag operation, that is, lifts the finger, the virtual rocker is still displayed at the second position, and the controlled virtual object still moves in the virtual scene in the direction indicated by the virtual rocker.
Optionally, on the basis of the foregoing possible embodiments, if the view field picture of the controlled virtual object displays a rocker limit graph, the second position is one edge of the rocker limit graph.
In this embodiment, the terminal only fixes the virtual rocker when it has been dragged to the edge of the rocker limit graphic; the user can freely control the virtual rocker inside the rocker limit graphic, which prevents the virtual rocker from being locked by an accidental operation and improves the efficiency of man-machine interaction.
For example, referring to fig. 6, if the rocker limit graphic 602 and the virtual rocker 603 are both circular, then the virtual rocker 603 being located at the edge of the rocker limit graphic 602 may mean that the virtual rocker 603 is inscribed in the rocker limit graphic 602, or that the center of the virtual rocker 603 lies on the edge of the rocker limit graphic 602; this is not limited in the embodiments of the present application.
Optionally, after the virtual rocker is fixed at the second position corresponding to the first drag operation, the terminal is further capable of performing at least one of the following steps:
and step 1, the terminal displays a locking graph on the virtual rocker, wherein the locking graph is used for indicating that the virtual rocker is fixed.
In the embodiment, after the virtual rocker is fixed, the terminal can display the locking graph on the virtual rocker, and a user can know that the virtual rocker is fixed by observing the locking graph, so that the efficiency of man-machine interaction is high.
For example, referring to fig. 7, after the terminal secures the virtual rocker 701 in the second position, a lock graphic 702 can be displayed on the virtual rocker 701, the lock graphic 702 being a lock in some embodiments.
And 2, displaying locking prompt information on the visual field picture by the terminal, wherein the locking prompt information is used for prompting that the virtual rocker is fixed.
In the embodiment, after the virtual rocker is fixed, the terminal can display the locking prompt information on the visual field picture of the controlled virtual object, and a user can know that the virtual rocker is fixed through the locking prompt information, so that the efficiency of man-machine interaction is high.
For example, referring to fig. 8, after the terminal fixes the virtual rocker 801 at the second position, lock prompt information 803 can be displayed on the visual field picture 802. In some embodiments, the lock prompt information 803 is the text "locked traveling".
And 3, the terminal sends out a first type of vibration prompt, wherein the first type of vibration prompt is used for reminding the virtual rocker to be fixed.
In this embodiment, the terminal can issue the first type of vibration prompt after fixing the virtual rocker, and the user can learn from the vibration prompt that the virtual rocker has been fixed, so the efficiency of man-machine interaction is high.
For example, after the virtual rocker is fixed at the second position corresponding to the first drag operation, the terminal can send a vibration control signal to the vibration controller. In response to receiving the vibration control signal, the vibration controller vibrates. In some embodiments, the vibration controller is a linear motor.
And 4, the terminal plays a first type of sound prompt, wherein the first type of sound prompt is used for reminding the virtual rocker to be fixed.
In this embodiment, the terminal can send out the first type of voice prompt after fixing the virtual rocker, and the user can know that the virtual rocker is fixed by listening to the first type of voice prompt, so that the efficiency of man-machine interaction is higher.
For example, after the virtual joystick is fixed at the second position corresponding to the first drag operation, the terminal can send a sound control signal to the sound controller. In response to receiving the sound control signal, the sound controller plays the audio. In some embodiments, the sound controller is a speaker and the first type of sound cue is "virtual rocker locked".
It should be noted that, the terminal can perform any one, any two, any three or all the four steps, which is not limited in this embodiment of the present application.
503. And the terminal controls the controlled virtual object to continuously move in the virtual scene according to the direction indicated by the position difference between the second position and the first position.
In one possible implementation, the terminal determines a target moving direction of the controlled virtual object in the virtual scene according to a direction indicated by a position difference between the second position and the first position. And the terminal controls the controlled virtual object to continuously move in the virtual scene along the target moving direction.
In this embodiment, the terminal can determine the target moving direction of the controlled virtual object in the virtual scene according to the direction indicated by the position difference between the second position and the first position of the virtual rocker, and after determining the target moving direction, the terminal can control the controlled virtual object to move continuously without user operation, thereby reducing the operation burden of the user. Because the second position is determined by the user, the user can determine the target moving direction by the virtual rocker, and the target moving direction can cover all moving directions, so that the efficiency of man-machine interaction is higher.
For example, the terminal can acquire the second screen coordinates of the virtual rocker at the second position and the first screen coordinates of the virtual rocker at the first position. In some embodiments, the origin of the screen coordinate system in which the first and second screen coordinates are expressed is the first screen coordinate, the x-axis is parallel to the horizontal line of the screen, and the y-axis is perpendicular to it. In this case, suppose the terminal acquires the second screen coordinates (-1, 1) and the first screen coordinates (0, 0); the terminal can then obtain the first vector (-1, 1) pointing from the first screen coordinates to the second screen coordinates. The terminal determines that the first vector (-1, 1) is at -45° from the positive direction of the y-axis, where the minus sign indicates that the first vector is to the left of the y-axis. In some embodiments, the positive direction of the y-axis corresponds to the facing direction of the controlled virtual object. With the included angle between the first vector and the positive y-axis being -45°, the terminal can control the controlled virtual object to keep moving in the virtual scene in the direction 45° to the left of its facing direction. During this continuous movement the user does not need to operate; that is, even if the user lifts the finger from the screen, the controlled virtual object keeps moving in the virtual scene in the target moving direction, and the lifted finger can perform other game operations. In other words, after the virtual rocker is fixed at the second position, the terminal can control the controlled virtual object to keep moving in the virtual scene according to the second position of the virtual rocker even if the user performs no operation at all, which reduces the user's operating burden and improves the efficiency of man-machine interaction.
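The worked example above (second screen coordinates (-1, 1) giving an included angle of -45° with the positive y-axis) can be reproduced with a small calculation, sketched informally below; the function name and the sign convention (negative meaning "to the left of the facing direction") follow the example.

```typescript
// Minimal sketch: angle between the rocker offset and the facing direction (screen y-axis).
function rockerAngleDegrees(first: [number, number], second: [number, number]): number {
  const dx = second[0] - first[0];
  const dy = second[1] - first[1];
  // atan2(dx, dy) measures the angle from the positive y-axis; negative means "to the left".
  return (Math.atan2(dx, dy) * 180) / Math.PI;
}

// rockerAngleDegrees([0, 0], [-1, 1]) === -45, matching the example above.
```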
Optionally, the terminal can also perform the following steps 504 and 505 after step 503.
504. And in response to detecting the second dragging operation of the virtual rocker, the terminal releases the fixation of the virtual rocker.
In one possible implementation, in response to detecting the second drag operation on the virtual rocker, the terminal releases the fixation of the virtual rocker, and the user can drag the virtual rocker by himself to change the moving direction of the controlled virtual object in the virtual scene.
Optionally, after the terminal releases the fixation of the virtual rocker, the terminal is further capable of performing at least one of the following steps:
and 1, canceling the locking graph displayed on the virtual rocker by the terminal.
In the embodiment, after the fixation of the virtual rocker is released, the terminal cancels the locking graph displayed on the virtual rocker, so that confusion of a user caused by the locking graph is avoided, and the efficiency of man-machine interaction is higher.
For example, referring to fig. 7, after the terminal cancels the fixation of the virtual rocker 701, the display of the lock graphic 702 on the virtual rocker 701 can be canceled.
And 2, the terminal cancels the locking prompt information displayed on the visual field picture.
In the embodiment, after the fixation of the virtual rocker is released, the terminal can cancel the display of the locking prompt information on the visual field picture of the controlled virtual object, so that confusion of the locking prompt information to a user is avoided, and the efficiency of man-machine interaction is higher.
For example, referring to fig. 8, after the terminal releases the fixation of the virtual rocker 801, it can cancel the display of the lock prompt information 803 on the visual field picture 802.
And 3, the terminal sends out a second type of vibration prompt, wherein the second type of vibration prompt is used for reminding the fixing of the virtual rocker to be released.
In this embodiment, the terminal can issue the second type of vibration prompt after releasing the fixation of the virtual rocker, and the user can learn from the vibration prompt that the fixation of the virtual rocker has been released, so the efficiency of man-machine interaction is high.
For example, after releasing the fixation of the virtual rocker, the terminal can send a vibration control signal to the vibration controller. In response to receiving the vibration control signal, the vibration controller vibrates.
And 4, the terminal plays a second type of sound prompt, wherein the second type of sound prompt is used for reminding that the fixation of the virtual rocker has been released.
In the embodiment, the terminal can send out the second type of sound prompt after releasing the fixation of the virtual rocker, and the user can learn by listening to the second type of sound prompt that the fixation of the virtual rocker has been released, so that the efficiency of man-machine interaction is higher.
For example, after the fixation of the virtual rocker is released, the terminal can send a sound control signal to the sound controller. In response to receiving the sound control signal, the sound controller plays the audio. In some embodiments, the second type of sound prompt is "the virtual rocker is unlocked".
It should be noted that the terminal can perform any one, any two, any three, or all four of the above steps, which is not limited in the embodiments of the present application.
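As an illustration only, the following sketch shows one way a terminal might dispatch any subset of the four unlock-feedback steps above. The ui, vibration_controller, and sound_controller objects, their method names, and the audio asset name are assumptions introduced here for readability; the embodiments do not prescribe these interfaces.

```python
def on_rocker_unlocked(ui, vibration_controller, sound_controller,
                       feedback=("lock_graphic", "hint", "vibration", "sound")):
    # Any one, two, three, or all four of the steps may be enabled.
    if "lock_graphic" in feedback:
        ui.hide_lock_graphic()                         # step 1: cancel the locking graph on the rocker
    if "hint" in feedback:
        ui.hide_lock_hint()                            # step 2: cancel the locking prompt on the view picture
    if "vibration" in feedback:
        vibration_controller.vibrate()                 # step 3: second type of vibration prompt
    if "sound" in feedback:
        sound_controller.play("rocker_unlocked.wav")   # step 4: second type of sound prompt (assumed asset)
```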
505. And the terminal controls the controlled virtual object to move in the virtual scene based on the moving direction indicated by the second dragging operation.
For example, referring to fig. 9, a user can control the moving direction of a game character 902, that is, the controlled virtual object, by dragging the virtual rocker 901.
For example, the terminal can obtain the coordinates of the virtual rocker in the screen coordinate system under the action of the second drag operation; for the description of the screen coordinate system, refer to step 503, which is not repeated herein. For example, at a first moment, the second drag operation drags the virtual rocker to a third position indicated by the screen coordinates (1, 1). The terminal can determine a second vector (1, 1) pointing from the first position (0, 0) of the virtual rocker to the third position (1, 1), and obtain an included angle of +45° between the second vector (1, 1) and the positive direction of the y-axis of the screen coordinate system, so the terminal can control the controlled virtual object to move in the virtual scene in the direction 45° to the right of its facing direction. If, at a second moment, the second drag operation drags the virtual rocker to a fourth position indicated by the screen coordinates (1, 0), the terminal can determine a third vector (1, 0) pointing from the first position (0, 0) of the virtual rocker to the fourth position (1, 0), and obtain an included angle of +90° between the third vector (1, 0) and the positive direction of the y-axis of the screen coordinate system, so the terminal can control the controlled virtual object to move in the virtual scene in the direction 90° to the right of its facing direction. If the second moment is later than the first moment, the facing direction of the controlled virtual object at the second moment has rotated 45+90=135° to the right compared with its facing direction before the first moment. That is, each time the terminal adjusts the moving direction of the controlled virtual object, the reference direction for the adjustment is the current facing direction of the controlled virtual object, so that the direction change of the controlled virtual object corresponds to the position change of the virtual rocker while the user drags it. For example, during the game, when the user drags the virtual rocker to the left, the controlled virtual object moves toward the left front; correspondingly, when the user drags the virtual rocker to the upper right, the controlled virtual object moves toward the front right. In this way, user operations correspond one-to-one with the adjustments of the moving direction of the controlled virtual object, so that the user controls the controlled virtual object more smoothly, and the efficiency of man-machine interaction is improved.
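The relative-direction update in the example above can be summarized in a short sketch. The function and variable names below, and the reuse of atan2 from the earlier sketch, are assumptions for illustration only; the point is that each drag sample is applied on top of the current facing direction.

```python
import math

def drag_angle(first_pos, rocker_pos):
    dx = rocker_pos[0] - first_pos[0]
    dy = rocker_pos[1] - first_pos[1]
    return math.degrees(math.atan2(dx, dy))  # signed angle from +y; right side is positive

facing = 0.0                          # current facing, in degrees, relative to the initial facing
for rocker_pos in [(1, 1), (1, 0)]:   # third position at the first moment, fourth position at the second moment
    facing += drag_angle((0, 0), rocker_pos)
print(facing)                         # 45 + 90 = 135° to the right of the facing before the first moment
```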
Optionally, after step 503, in addition to the above step 504, the terminal can also perform any of the following steps.
And step A, in response to the controlled virtual object being attacked by other virtual objects in the virtual scene, the terminal highlights the outline of the virtual rocker on the visual field picture, so as to remind the user to release the fixation of the virtual rocker.
In this implementation, when the controlled virtual object is attacked by other virtual objects, the terminal can highlight the outline of the virtual rocker to remind the user to release the fixation of the virtual rocker and manually control the controlled virtual object to take evasive action. This reduces the probability of the controlled virtual object being killed during the continuous movement, improves the game experience of the user, and improves the efficiency of man-machine interaction.
For example, in response to detecting that the controlled virtual object is hit by a virtual prop emitted by another virtual object, the terminal highlights the outline of the virtual rocker in red. Of course, in other possible embodiments, the terminal can also highlight the outline of the virtual rocker in another color, which is not limited in the embodiments of the present application.
And B, in response to the controlled virtual object being attacked by other virtual objects in the virtual scene, the terminal controls the controlled virtual object to move toward the virtual obstacle closest to the controlled virtual object.
In the embodiment, when the controlled virtual object is attacked by other virtual objects, the terminal can control the controlled virtual object to take evasive action in the virtual scene, which reduces the probability of the controlled virtual object being killed during the movement, improves the game experience of the user, and improves the efficiency of man-machine interaction.
For example, in response to detecting that the controlled virtual object is hit by a virtual prop emitted by another virtual object, the terminal determines the virtual obstacle closest to the controlled virtual object. The terminal then determines a path along which the controlled virtual object moves from its current position to the virtual obstacle, and controls the controlled virtual object to move along the path.
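A minimal sketch of this evasive response follows. The scene API used here (virtual_obstacles, distance, find_path, follow_path) is an assumption introduced for illustration; any nearest-obstacle search and pathfinding scheme would serve the same purpose.

```python
def evade_to_cover(controlled, scene):
    """On being hit, move the controlled virtual object toward the nearest virtual obstacle."""
    obstacles = scene.virtual_obstacles()
    if not obstacles:
        return
    nearest = min(obstacles,
                  key=lambda o: scene.distance(controlled.position, o.position))
    path = scene.find_path(controlled.position, nearest.position)  # e.g. a navmesh or grid search
    controlled.follow_path(path)
```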
And C, in response to detecting that a virtual obstacle exists in the moving direction of the controlled virtual object, the terminal displays obstacle prompt information on the visual field picture, wherein the obstacle prompt information is used for prompting the type of the virtual obstacle and the distance between the virtual obstacle and the controlled virtual object.
In this embodiment, when a virtual obstacle exists in the moving direction of the controlled virtual object, the terminal can display the obstacle prompt information of the virtual obstacle on the visual field picture. From the obstacle prompt information, the user can learn in advance the type of the virtual obstacle in front of the controlled virtual object and the distance between the controlled virtual object and the virtual obstacle, so that the user can react in advance and control the controlled virtual object to avoid it, and the efficiency of man-machine interaction is higher.
For example, the terminal can emit a detection ray with the controlled virtual object as an origin, and in response to the detection ray contacting any virtual obstacle, the terminal obtains obstacle prompt information of the virtual obstacle, and displays the obstacle prompt information on the view field picture.
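The detection-ray step can be pictured with the following sketch. The raycast call, the hit fields (target, distance), the look-ahead range, and the UI methods are assumed names used only to make the step concrete; the embodiments only require that a ray emitted from the controlled virtual object report the obstacle it touches.

```python
def update_obstacle_prompt(controlled, scene, ui, max_range=50.0):
    """Cast a ray along the movement direction and show the type/distance of any obstacle hit."""
    hit = scene.raycast(origin=controlled.position,
                        direction=controlled.move_direction,
                        max_distance=max_range)
    if hit is not None and hit.target.is_virtual_obstacle:
        ui.show_obstacle_hint(obstacle_type=hit.target.type,   # e.g. "stone"
                              distance=hit.distance)
    else:
        ui.hide_obstacle_hint()
```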
And D, in response to detecting that a virtual obstacle exists in front of the controlled virtual object, the terminal controls the controlled virtual object to execute an auxiliary action corresponding to the virtual obstacle, wherein the auxiliary action is used for assisting the controlled virtual object to pass through the virtual obstacle.
In the embodiment, when a virtual obstacle is detected in front of the controlled virtual object, the terminal can control the controlled virtual object to execute the auxiliary action corresponding to the virtual obstacle so as to assist the controlled virtual object to pass through the virtual obstacle. The user does not need to manually control the controlled virtual object to pass through the virtual obstacle, and the efficiency of man-machine interaction is higher.
For example, the terminal can emit a detection ray with the controlled virtual object as an origin, and in response to the detection ray contacting any virtual obstacle, obtain an auxiliary action corresponding to the virtual obstacle; the terminal then controls the controlled virtual object to execute the auxiliary action to pass through the virtual obstacle. Taking the virtual obstacle being a virtual stone as an example, where the auxiliary action corresponding to the virtual stone is jumping, the terminal can control the controlled virtual object to jump in front of the virtual stone.
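As an illustrative sketch only, an obstacle-type-to-action lookup such as the one below could implement this step. Only the stone-to-jump pairing comes from the example above; the other entries, the raycast API, the look-ahead range, and the perform method are hypothetical additions for illustration.

```python
AUXILIARY_ACTIONS = {
    "stone": "jump",    # from the example above: jump in front of a virtual stone
    "fence": "vault",   # hypothetical extra mapping
    "window": "climb",  # hypothetical extra mapping
}

def auto_traverse(controlled, scene):
    """If an obstacle is just ahead, perform the auxiliary action that carries the object across it."""
    hit = scene.raycast(origin=controlled.position,
                        direction=controlled.move_direction,
                        max_distance=2.0)  # assumed short look-ahead range
    if hit is not None and hit.target.is_virtual_obstacle:
        action = AUXILIARY_ACTIONS.get(hit.target.type)
        if action is not None:
            controlled.perform(action)
```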
And E, the virtual rocker is also used for controlling the moving direction of a virtual vehicle in the virtual scene. After fixing the virtual rocker at the second position corresponding to the first dragging operation, the terminal can also control the virtual vehicle driven by the controlled virtual object to continuously move in the virtual scene according to the direction indicated by the position difference between the second position and the first position.
In the embodiment, the terminal can lock not only the moving direction of the controlled virtual object in the virtual scene, but also the moving direction of the virtual vehicle when the controlled virtual object is driving the virtual vehicle, thereby reducing the operation burden of the user and improving the efficiency of man-machine interaction.
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present application, which is not described herein in detail.
The technical solutions provided in the embodiments of the present application will be described below with reference to fig. 10 and the foregoing various alternative embodiments.
In one possible implementation manner, a rocker limit graph is displayed on the visual field picture of the controlled virtual object. When the first dragging operation is detected to drag the virtual rocker to a second position at the edge of the rocker limit graph, the terminal determines whether the force of the first dragging operation is greater than the target force. When the force of the first dragging operation is detected to be greater than the target force, the terminal fixes the virtual rocker at the second position, and controls the controlled virtual object to move in the virtual scene according to the direction indicated by the position difference between the second position and the first position. When a second dragging operation on the virtual rocker is detected, the terminal releases the fixation of the virtual rocker and controls the controlled virtual object to move according to the direction indicated by the second dragging operation.
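The lock/release flow just summarized can be condensed into the following sketch. The class name, the force and radius thresholds, and the returned mode strings are assumptions for illustration; only the overall state transitions (dragging to the edge with sufficient force locks the rocker, a later drag releases it) reflect the flow described above.

```python
class VirtualRocker:
    def __init__(self, first_pos, limit_radius, target_force):
        self.first_pos = first_pos          # first position (center of the rocker limit graph)
        self.limit_radius = limit_radius    # radius of the rocker limit graph
        self.target_force = target_force    # force threshold of the target condition
        self.locked_pos = None              # second position once the rocker is fixed

    def on_drag(self, pos, force):
        """Return the control mode and the position the movement direction is derived from."""
        if self.locked_pos is not None:
            self.locked_pos = None          # second drag operation: release the fixation
            return ("follow_drag", pos)
        at_edge = self._distance(pos, self.first_pos) >= self.limit_radius
        if at_edge and force > self.target_force:
            self.locked_pos = pos           # first drag operation meets the target condition
            return ("auto_move", pos)       # keep moving according to this position afterwards
        return ("follow_drag", pos)

    @staticmethod
    def _distance(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
```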
Through the technical scheme provided by the embodiment of the application, in the game process, when the user drags the virtual rocker in a mode meeting the target condition, the terminal can fix the virtual rocker at the second position. The terminal can control the controlled virtual object to continuously move in the virtual scene based on the fixed virtual rocker, the virtual rocker is not required to be manually dragged by a user, the burden of the user is reduced, the user can complete more tasks and operations, and the game operation experience of the user is improved. In addition, the user can determine the locking position of the virtual rocker by himself, so that the moving direction of the controlled virtual object is controlled, the moving direction of the controlled virtual object can be any direction, the flexibility of direction locking is improved, and the efficiency of man-machine interaction is improved as a whole.
Fig. 11 is a schematic structural diagram of a virtual object control device provided in an embodiment of the present application, referring to fig. 11, the device includes: a display module 1101, a fixing module 1102, and a control module 1103.
The display module 1101 is configured to display a view field picture of the controlled virtual object, where a virtual rocker is displayed on a first position of the view field picture, and the virtual rocker is configured to control a moving direction of the controlled virtual object in the virtual scene.
And the fixing module 1102 is configured to fix the virtual rocker at a second position corresponding to the first drag operation in response to detecting that the first drag operation on the virtual rocker meets the target condition.
The control module 1103 is configured to control the controlled virtual object to move continuously in the virtual scene according to the direction indicated by the position difference between the second position and the first position.
In one possible implementation, the first drag operation meeting the target condition refers to any one of the following:
the hold time of the first drag operation at the second position is greater than the target duration.
The first drag operation has a force at the second location that is greater than the target force.
In one possible implementation, a rocker limit graph is further displayed on the view field picture, and the virtual rocker is displayed on the rocker limit graph, wherein the rocker limit graph is used for limiting the movement range of the virtual rocker.
In one possible embodiment, the second position is one edge of the rocker limit pattern.
In one possible embodiment, the first position is the geometric center of the rocker limit pattern.
In one possible implementation, the control module is configured to determine a target moving direction of the controlled virtual object in the virtual scene according to the direction indicated by the position difference between the second position and the first position, and to control the controlled virtual object to continuously move in the virtual scene in the target moving direction.
In one possible embodiment, the apparatus further comprises at least one of the following modules:
and the locking graphic display module is used for displaying a locking graphic on the virtual rocker, and the locking graphic is used for indicating that the virtual rocker is fixed.
And the prompt information display module is used for displaying locking prompt information on the visual field picture, wherein the locking prompt information is used for prompting that the virtual rocker is fixed.
The vibration module is used for sending out a first type of vibration prompt, and the first type of vibration prompt is used for reminding that the virtual rocker is fixed.
The playing module is used for playing a first type of sound prompt, and the first type of sound prompt is used for reminding that the virtual rocker is fixed.
In one possible implementation, the control module is further configured to release the fixation of the virtual rocker in response to detecting the second drag operation on the virtual rocker, and to control the controlled virtual object to move in the virtual scene based on the moving direction indicated by the second dragging operation.
In one possible implementation, the display module is further configured to highlight the outline of the virtual rocker on the view field screen in response to the controlled virtual object being attacked by another virtual object in the virtual scene, so as to remind of releasing the fixation of the virtual rocker.
In one possible implementation, the control module is further configured to control the controlled virtual object to move toward a virtual obstacle closest to the controlled virtual object in response to the controlled virtual object being attacked by other virtual objects in the virtual scene.
In one possible implementation manner, the display module is further configured to display, on the view field screen, obstacle prompting information in response to detecting that the virtual obstacle exists in the moving direction of the controlled virtual object, where the obstacle prompting information is used to prompt the type of the virtual obstacle and the distance between the virtual obstacle and the controlled virtual object.
In one possible implementation, the control module is further configured to control the controlled virtual object to perform an auxiliary action corresponding to the virtual obstacle in response to detecting that the virtual obstacle exists in front of the controlled virtual object, where the auxiliary action is configured to assist the controlled virtual object through the virtual obstacle.
In one possible implementation manner, the virtual rocker is further used for controlling a moving direction of the virtual vehicle in the virtual scene, and the control module is further used for controlling the virtual vehicle driven by the controlled virtual object to continuously move in the virtual scene according to the direction indicated by the position difference between the second position and the first position.
Through the technical scheme provided by the embodiment of the application, in the game process, when the user drags the virtual rocker in a mode meeting the target condition, the terminal can fix the virtual rocker at the second position. The terminal can control the controlled virtual object to continuously move in the virtual scene based on the fixed virtual rocker, the virtual rocker is not required to be manually dragged by a user, the burden of the user is reduced, the user can complete more tasks and operations, and the game operation experience of the user is improved. In addition, the user can determine the locking position of the virtual rocker by himself, so that the moving direction of the controlled virtual object is controlled, the moving direction of the controlled virtual object can be any direction, the flexibility of direction locking is improved, and the efficiency of man-machine interaction is improved as a whole.
An embodiment of the present application provides a computer device for executing the above method. The computer device can be implemented as a terminal or a server; the structure of the terminal is described first:
Fig. 12 is a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal 1200 may be a smartphone, a tablet computer, a notebook computer, or a desktop computer. The terminal 1200 may also be referred to as a user device, a portable terminal, a laptop terminal, a desktop terminal, etc.
In general, the terminal 1200 includes: one or more processors 1201 and one or more memories 1202.
In some embodiments, the terminal 1200 may further optionally include: a peripheral interface 1203, and at least one peripheral. The processor 1201, the memory 1202, and the peripheral interface 1203 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 1203 via buses, signal lines, or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1204, a display 1205, a camera assembly 1206, audio circuitry 1207, and a power supply 1209.
The peripheral interface 1203 may be used to connect at least one peripheral device associated with I/O (Input/Output) to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, the memory 1202, and the peripheral interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202, and the peripheral interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 1204 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1204 includes: antenna systems, RF transceivers, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth.
The display 1205 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 1205 is a touch display, the display 1205 also has the ability to collect touch signals at or above the surface of the display 1205. The touch signal may be input as a control signal to the processor 1201 for processing. At this time, the display 1205 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard.
The camera assembly 1206 is used to capture images or video. Optionally, camera assembly 1206 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal.
The audio circuitry 1207 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1201 for processing, or inputting the electric signals to the radio frequency circuit 1204 for voice communication.
The power supply 1209 is used to power the various components in the terminal 1200. The power source 1209 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
In some embodiments, terminal 1200 also includes one or more sensors 1210. The one or more sensors 1210 include, but are not limited to: an acceleration sensor 1211, a gyro sensor 1212, a pressure sensor 1213, an optical sensor 1215, and a proximity sensor 1216.
The acceleration sensor 1211 may detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1200.
The gyro sensor 1212 may capture a 3D motion of the user on the terminal 1200 in cooperation with the acceleration sensor 1211, and the body direction and the rotation angle of the terminal 1200 may be determined by the gyro sensor 1212.
The pressure sensor 1213 may be disposed at a side frame of the terminal 1200 and/or at a lower layer of the display 1205. When the pressure sensor 1213 is provided at a side frame of the terminal 1200, a grip signal of the terminal 1200 by a user may be detected, and the processor 1201 performs a left-right hand recognition or a shortcut operation according to the grip signal collected by the pressure sensor 1213. When the pressure sensor 1213 is disposed at the lower layer of the display 1205, the processor 1201 controls the operability control on the UI interface according to the pressure operation of the user on the display 1205.
The optical sensor 1215 is used to collect the ambient light intensity. In one embodiment, processor 1201 may control the display brightness of display 1205 based on the intensity of ambient light collected by optical sensor 1215.
The proximity sensor 1216 is used to collect the distance between the user and the front of the terminal 1200.
It will be appreciated by those skilled in the art that the structure shown in fig. 12 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
The computer device may also be implemented as a server, and the following describes the structure of the server:
Fig. 13 is a schematic structural diagram of a server provided in an embodiment of the present application, where the server 1300 may include one or more processors (Central Processing Units, CPU) 1301 and one or more memories 1302, where the one or more memories 1302 store at least one computer program, and the at least one computer program is loaded and executed by the one or more processors 1301 to implement the methods provided in the foregoing method embodiments. Of course, the server 1300 may also have a wired or wireless network interface, a keyboard, an input/output interface, etc. for performing input/output, and the server 1300 may also include other components for implementing the functions of the device, which are not described herein.
In an exemplary embodiment, a computer-readable storage medium is also provided, for example a memory comprising a computer program, where the computer program can be executed by a processor to perform the virtual object control method of the above embodiments. For example, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, there is also provided a computer program product or a computer program comprising program code stored in a computer readable storage medium, the program code being read from the computer readable storage medium by a processor of a computer device, the program code being executed by the processor, causing the computer device to perform the above-described virtual object control method.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the above storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments is merely exemplary in nature and is not intended to limit the invention, but is intended to cover various modifications, substitutions, improvements, and alternatives falling within the spirit and principles of the invention.
Claims (18)
1. A virtual object control method, the method comprising:
displaying a visual field picture of a controlled virtual object, wherein a virtual rocker and a rocker limiting graph are displayed at a first position of the visual field picture, the virtual rocker is displayed on the rocker limiting graph, the virtual rocker is used for controlling the moving direction of the controlled virtual object in a virtual scene, and the rocker limiting graph is used for limiting the moving range of the virtual rocker;
when a second position corresponding to a first dragging operation of the virtual rocker is one edge of the rocker limiting graph and the number of touch sensors for detecting touch operations on the second position is larger than a preset number, fixing the virtual rocker at the second position corresponding to the first dragging operation;
according to the direction indicated by the position difference between the second position and the first position, the controlled virtual object is controlled to continuously move in the virtual scene until the virtual rocker is unfixed;
In the continuous moving process of the controlled virtual object, responding to the detection that the controlled virtual object is attacked by other virtual objects in the virtual scene, and determining a virtual barrier closest to the controlled virtual object;
determining a path of the controlled virtual object moving from the current position to the virtual obstacle, and controlling the controlled virtual object to move along the path so as to avoid attacks of other virtual objects;
in the continuous moving process of the controlled virtual object, responding to the fact that the controlled virtual object is not detected to be attacked by other virtual objects, and transmitting detection rays taking the controlled virtual object as an origin;
responding to the contact of the detection ray and any virtual obstacle, and acquiring obstacle prompt information of the virtual obstacle, wherein the obstacle prompt information is used for prompting the type of the virtual obstacle and the distance between the virtual obstacle and the controlled virtual object;
and displaying the obstacle prompt information on the visual field picture so that a user can release the fixation of the virtual rocker in advance based on the obstacle prompt information to control the controlled virtual object to avoid the virtual obstacle.
2. The method of claim 1, wherein the first position is a geometric center of the rocker limit pattern.
3. The method of claim 1, wherein controlling the controlled virtual object to move continuously in the virtual scene according to the direction indicated by the difference in position between the second position and the first position comprises:
determining a target moving direction of the controlled virtual object in the virtual scene according to the direction indicated by the position difference between the second position and the first position;
and controlling the controlled virtual object to continuously move in the virtual scene in the target moving direction.
4. The method of claim 1, wherein the securing the virtual rocker after the second position corresponding to the first drag operation further comprises at least one of:
displaying a locking graph on the virtual rocker, wherein the locking graph is used for indicating that the virtual rocker is fixed;
displaying locking prompt information on the visual field picture, wherein the locking prompt information is used for prompting that the virtual rocker is fixed;
Sending out a first type vibration prompt, wherein the first type vibration prompt is used for reminding the virtual rocker to be fixed;
and playing a first type of sound prompt, wherein the first type of sound prompt is used for reminding the virtual rocker to be fixed.
5. The method of claim 1, wherein the controlling the controlled virtual object after the continuously moving in the virtual scene according to the direction indicated by the position difference between the second position and the first position further comprises:
releasing the fixing of the virtual rocker in response to detecting a second drag operation on the virtual rocker;
and controlling the controlled virtual object to move in the virtual scene based on the moving direction indicated by the second dragging operation.
6. The method of claim 1, wherein the controlling the controlled virtual object after the continuously moving in the virtual scene according to the direction indicated by the position difference between the second position and the first position further comprises:
and responding to the controlled virtual object being attacked by other virtual objects in the virtual scene, highlighting the outline of the virtual rocker on the visual field picture so as to remind the fixation of the virtual rocker to be released.
7. The method of claim 1, wherein the controlling the controlled virtual object after the continuously moving in the virtual scene according to the direction indicated by the position difference between the second position and the first position further comprises:
and in response to detecting that a virtual obstacle exists in front of the controlled virtual object, controlling the controlled virtual object to execute an auxiliary action corresponding to the virtual obstacle, wherein the auxiliary action is used for assisting the controlled virtual object to pass through the virtual obstacle.
8. The method of claim 1, wherein the virtual rocker is further used to control a direction of movement of a virtual vehicle in the virtual scene;
the method further comprises the steps of:
and controlling the virtual vehicle driven by the controlled virtual object to continuously move in the virtual scene according to the direction indicated by the position difference between the second position and the first position.
9. A virtual object control apparatus, the apparatus comprising:
the display module is used for displaying a visual field picture of the controlled virtual object, a virtual rocker and a rocker limiting graph are displayed on a first position of the visual field picture, the virtual rocker is displayed on the rocker limiting graph, the virtual rocker is used for controlling the moving direction of the controlled virtual object in a virtual scene, and the rocker limiting graph is used for limiting the moving range of the virtual rocker;
The fixing module is used for fixing the virtual rocker at the second position corresponding to the first dragging operation when the second position corresponding to the first dragging operation of the virtual rocker is one edge of the rocker limiting graph and the number of touch sensors for detecting the touch operation of the second position is larger than the preset number;
the control module is used for controlling the controlled virtual object to continuously move in the virtual scene according to the direction indicated by the position difference between the second position and the first position until the virtual rocker is unfixed;
the control module is further used for determining a virtual barrier closest to the controlled virtual object in response to detecting that the controlled virtual object is attacked by other virtual objects in the virtual scene in the continuous moving process of the controlled virtual object;
the control module is further used for determining a path of the controlled virtual object moving from the current position to the virtual obstacle, and controlling the controlled virtual object to move along the path so as to avoid attacks of other virtual objects;
the control module is further used for transmitting detection rays taking the controlled virtual object as an origin in response to the fact that the controlled virtual object is not detected to be attacked by other virtual objects in the continuous moving process of the controlled virtual object;
The control module is further used for responding to the contact of the detection ray and any virtual obstacle to obtain obstacle prompt information of the virtual obstacle, wherein the obstacle prompt information is used for prompting the type of the virtual obstacle and the distance between the virtual obstacle and the controlled virtual object;
the control module is further used for displaying the obstacle prompt information on the view field picture so that a user can release the fixation of the virtual rocker in advance based on the obstacle prompt information to control the controlled virtual object to avoid the virtual obstacle.
10. The device of claim 9, wherein the first position is a geometric center of the rocker limit pattern.
11. The apparatus of claim 9, wherein the control module is configured to determine a target movement direction of the controlled virtual object in the virtual scene based on a direction indicated by a difference in position between the second position and the first position; and controlling the controlled virtual object to continuously move in the virtual scene in the target moving direction.
12. The apparatus of claim 9, further comprising at least one of the following modules:
The locking graphic display module is used for displaying a locking graphic on the virtual rocker, and the locking graphic is used for indicating that the virtual rocker is fixed;
the prompt information display module is used for displaying locking prompt information on the visual field picture, wherein the locking prompt information is used for prompting that the virtual rocker is fixed;
the vibration module is used for sending out a first type of vibration prompt, wherein the first type of vibration prompt is used for reminding that the virtual rocker is fixed;
the playing module is used for playing a first type of sound prompt, and the first type of sound prompt is used for reminding that the virtual rocker is fixed.
13. The apparatus of claim 9, wherein the control module is further to un-secure the virtual rocker in response to detecting a second drag operation on the virtual rocker; and controlling the controlled virtual object to move in the virtual scene based on the moving direction indicated by the second dragging operation.
14. The apparatus of claim 9, wherein the display module is further configured to highlight an outline of the virtual rocker on the view screen to alert to release of the fixation of the virtual rocker in response to the controlled virtual object being attacked by other virtual objects in the virtual scene.
15. The apparatus of claim 9, wherein the control module is further configured to control the controlled virtual object to perform an auxiliary action corresponding to the virtual obstacle in response to detecting that a virtual obstacle exists in front of the controlled virtual object, the auxiliary action being configured to assist the controlled virtual object through the virtual obstacle.
16. The apparatus of claim 9, wherein the virtual rocker is further configured to control a direction of movement of a virtual vehicle in the virtual scene, and the control module is further configured to control a virtual vehicle driven by the controlled virtual object to move continuously in the virtual scene according to a direction indicated by a difference in position between the second position and the first position.
17. A computer device comprising one or more processors and one or more memories, the one or more memories having at least one computer program stored therein, the computer program being loaded and executed by the one or more processors to implement the virtual object control method of any of claims 1 to 8.
18. A computer readable storage medium having stored therein at least one computer program loaded and executed by a processor to implement the virtual object control method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011613689.7A CN112717397B (en) | 2020-12-30 | 2020-12-30 | Virtual object control method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011613689.7A CN112717397B (en) | 2020-12-30 | 2020-12-30 | Virtual object control method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112717397A CN112717397A (en) | 2021-04-30 |
CN112717397B (en) | 2023-05-12 |
Family
ID=75611248
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011613689.7A Active CN112717397B (en) | 2020-12-30 | 2020-12-30 | Virtual object control method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112717397B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113559512A (en) * | 2021-08-04 | 2021-10-29 | 网易(杭州)网络有限公司 | Sound source sound effect processing method and device, computer equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013131119A (en) * | 2011-12-22 | 2013-07-04 | Konami Digital Entertainment Co Ltd | Game device, game device control method, and program |
JP6244445B1 (en) * | 2016-12-22 | 2017-12-06 | 株式会社コロプラ | Information processing method, apparatus, and program for causing computer to execute information processing method |
WO2018120489A1 (en) * | 2016-12-29 | 2018-07-05 | 珠海市一微半导体有限公司 | Route planning method for intelligent robot |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108459750B (en) * | 2017-02-17 | 2020-05-12 | 京东方科技集团股份有限公司 | Pressure touch detection method, touch panel and electronic device |
JP7058034B2 (en) * | 2017-09-29 | 2022-04-21 | グリー株式会社 | Game processing program, game processing method, and game processing device |
CN109621411B (en) * | 2017-09-30 | 2022-05-06 | 网易(杭州)网络有限公司 | Information processing method, information processing device, electronic equipment and storage medium |
CN108515909B (en) * | 2018-04-04 | 2021-04-20 | 京东方科技集团股份有限公司 | Automobile head-up display system and obstacle prompting method thereof |
CN111714886B (en) * | 2020-07-24 | 2021-09-28 | 腾讯科技(深圳)有限公司 | Virtual object control method, device, equipment and storage medium |
2020-12-30: CN application CN202011613689.7A filed (granted as CN112717397B, legal status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN112717397A (en) | 2021-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7419382B2 (en) | Method and apparatus and computer program for controlling a virtual object to mark a virtual item | |
CN109126129B (en) | Method, device and terminal for picking up virtual article in virtual environment | |
EP3943173A1 (en) | Virtual object controlling method, apparatus and device and medium | |
CN111035918B (en) | Reconnaissance interface display method and device based on virtual environment and readable storage medium | |
CN110917619B (en) | Interactive property control method, device, terminal and storage medium | |
CN111013142B (en) | Interactive effect display method and device, computer equipment and storage medium | |
CN111589140B (en) | Virtual object control method, device, terminal and storage medium | |
JP2022551069A (en) | Virtual object control method, its device, equipment, and computer program | |
CN111282266B (en) | Skill aiming method, device, terminal and storage medium in three-dimensional virtual environment | |
CN113117330B (en) | Skill release method, device, equipment and medium for virtual object | |
CN111760285B (en) | Virtual scene display method, device, equipment and medium | |
CN113289331B (en) | Display method and device of virtual prop, electronic equipment and storage medium | |
CN112569607B (en) | Display method, device, equipment and medium for pre-purchased prop | |
CN113101656B (en) | Virtual object control method, device, terminal and storage medium | |
CN111659122B (en) | Virtual resource display method and device, electronic equipment and storage medium | |
JPWO2021143259A5 (en) | ||
CN112717397B (en) | Virtual object control method, device, equipment and storage medium | |
CN113134232B (en) | Virtual object control method, device, equipment and computer readable storage medium | |
CN112494958B (en) | Method, system, equipment and medium for converting words by voice | |
CN114130023A (en) | Virtual object switching method, device, equipment, medium and program product | |
CN111672115B (en) | Virtual object control method and device, computer equipment and storage medium | |
CN111589102B (en) | Auxiliary tool detection method, device, equipment and storage medium | |
CN113713385A (en) | Virtual item control method, device, equipment, medium and computer program product | |
CN114042315B (en) | Virtual scene-based graphic display method, device, equipment and medium | |
CN113144600B (en) | Virtual object control method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40042618; Country of ref document: HK |
 | GR01 | Patent grant | |