CN115328354A - Interactive processing method and device in game, electronic equipment and storage medium - Google Patents
Info
- Publication number
- CN115328354A (application CN202210986090.0A)
- Authority
- CN
- China
- Prior art keywords
- node
- target
- candidate
- tree
- guide
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application provides an interactive processing method and apparatus in a game, an electronic device, and a storage medium. The method includes: receiving a trigger operation performed by a user on a target node, where the target node is a node in a target policy tree in the game; in response to the trigger operation, determining a guide node of the target node and entering a first operation interface for the guide node; and displaying, on the first operation interface, a first trigger control for the guide node, where the first trigger control is a control for executing a predetermined function on the virtual element corresponding to the guide node in the game. The method and apparatus reduce the complexity of the user's operations on the nodes of the target policy tree and improve interaction efficiency.
Description
Technical Field
The present application relates to the field of computer application technologies, and in particular, to an interactive processing method and apparatus in a game, an electronic device, and a storage medium.
Background
In a game, certain gameplay systems are often presented in the form of a tree structure, such as a technology tree, a talent tree, a weapon tree, or a prop tree in the game.
At present, when a player views a tree structure interface, all tree nodes are shown in the interface. When the player wants to operate the tree nodes, the corresponding node must first be found, then the nodes are opened one by one and each is operated, which makes the player's operation process on the tree nodes cumbersome and inefficient.
Disclosure of Invention
In view of this, the embodiments of the present application at least provide an interaction processing method and apparatus in a game, an electronic device, and a storage medium, which can reduce the complexity of operations performed by a user on each node in a target policy tree and improve interaction efficiency.
In a first aspect, an exemplary embodiment of the present application provides an interaction processing method in a game, where the method includes: receiving a trigger operation performed by a user on a target node, where the target node is a node in a target policy tree in the game; in response to the trigger operation, determining a guide node of the target node and entering a first operation interface for the guide node; and displaying, on the first operation interface, a first trigger control for the guide node, where the first trigger control is a control for executing a predetermined function on the virtual element corresponding to the guide node in the game.
In a possible implementation manner, the trigger operation includes a selection operation performed by the user on a second trigger control on a second operation interface for the target node, where the second trigger control is a control for executing a predetermined function on the virtual element corresponding to the target node in the game.
In one possible implementation, the target node is a node in the target policy tree that satisfies the following conditions: every node on a designated node path in the target policy tree has been triggered, where the designated node path is the path from the parent node of the target node to the root node, and a triggered node is a node for which the predetermined function has been executed on its corresponding virtual element in the game; and the trigger condition corresponding to the target node belongs to the target constraint condition.
In a possible embodiment, the method further includes: displaying a tree structure interface for the target policy tree, where the target node of the target policy tree is displayed in the tree structure interface; and in response to a selection operation performed on the target node on the tree structure interface, entering a second operation interface for the target node.
In a possible embodiment, each node in the target policy tree has a corresponding constraint condition, and the step of determining the guide node of the target node includes: screening out first candidate nodes from the nodes of the target policy tree according to a target constraint condition; determining a guide order based on the first candidate nodes; and determining the guide node of the target node from the nodes of the target policy tree based on the guide order, where the guide node is the node next to the target node in the guide order under the target policy tree.
In a possible embodiment, the step of determining the guide order based on the first candidate nodes includes: determining a first sub-order of the first candidate nodes; and determining the first sub-order as the guide order.
In a possible implementation, the target constraint condition includes a first time constraint condition, and the step of determining the first sub-order of the first candidate nodes includes: determining a traversal order for the target policy tree according to a preset traversal rule; and traversing each first candidate node in the target policy tree in the determined traversal order to form the first sub-order corresponding to all first candidate nodes, where a first candidate node is a node in the target policy tree whose trigger condition belongs to the first time constraint condition.
In one possible implementation, the first candidate nodes precede the other nodes in the guide order.
In a possible embodiment, the step of determining the guide order based on the first candidate nodes includes: determining a first sub-order of the first candidate nodes and a second sub-order of second candidate nodes, where a second candidate node is a node of the target policy tree whose trigger condition belongs to a second time constraint condition; and constructing the guide order from the first sub-order and the second sub-order, where the second sub-order follows the first sub-order.
In a possible embodiment, the constraint condition includes the second time constraint condition, and the step of determining the second sub-order of the second candidate nodes includes: determining a traversal order for the target policy tree according to a preset traversal rule; and traversing each second candidate node in the target policy tree in the determined traversal order to form the second sub-order corresponding to all second candidate nodes.
In a possible embodiment, the step of entering the first operation interface for the guide node includes: automatically selecting the guide node to enter the first operation interface for the guide node.
In a possible embodiment, the first operation interface is entered as follows: in response to the trigger operation, switching the interface to exit the second operation interface for the target node and display the tree structure interface; and automatically selecting the guide node on the displayed tree structure interface to enter the first operation interface for the guide node.
In a possible embodiment, the method further includes: after the interface is switched, controlling the tree structure interface to move on the screen so that the guide node of the target policy tree is displayed on the screen; and/or creating a window after the guide node is automatically selected, and displaying the first operation interface for the guide node in the created window.
In a possible embodiment, the method further includes: receiving a selection operation performed by the user on the first trigger control; in response to the selection operation on the first trigger control, determining the trigger condition corresponding to the guide node; and when the trigger condition is met, executing the predetermined function on the virtual element corresponding to the guide node.
In a possible embodiment, the guide node is a node in the target policy tree whose trigger condition belongs to the first time constraint condition, and the first operation interface is a simplified interface including the first trigger control for the guide node, where the method further includes: in response to the selection operation on the first trigger control, determining the node next to the guide node in the guide order under the target policy tree, and entering a third operation interface for that node.
In a possible embodiment, the guide node is a node in the target policy tree whose trigger condition belongs to the second time constraint condition, and the first operation interface is an editing interface including the first trigger control for the guide node and a plurality of candidate virtual elements, where the method further includes: receiving, on the first operation interface, a selection operation on any one of the plurality of candidate virtual elements; and in response to the selection operation on the first trigger control, determining the selected candidate virtual element as the virtual element corresponding to the guide node, so that the predetermined function is executed on the virtual element corresponding to the guide node.
In a second aspect, an embodiment of the present application further provides an interaction processing apparatus in a game, where the apparatus includes: a receiving module, configured to receive a trigger operation performed by a user on a target node, where the target node is a node in a target policy tree in the game; a conversion module, configured to, in response to the trigger operation, determine a guide node of the target node and enter a first operation interface for the guide node; and a display control module, configured to display, on the first operation interface, a first trigger control for the guide node, where the first trigger control is a control for executing a predetermined function on the virtual element corresponding to the guide node in the game.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor and the memory communicate with each other through the bus when the electronic device runs, and the machine-readable instructions are executed by the processor to perform the steps of the method for interactive processing in a game in any one of the above-mentioned first aspect or any one of the above-mentioned possible embodiments of the first aspect.
In a fourth aspect, the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the method for processing interaction in a game in the first aspect or any one of the possible implementation manners of the first aspect.
The interaction processing method and apparatus, electronic device, and storage medium in a game provided by the present application help reduce the complexity of the user's operations on the nodes of the target policy tree, thereby improving interaction efficiency.
In order to make the aforementioned objects, features and advantages of the present application comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; those skilled in the art can obtain other related drawings based on these drawings without inventive effort.
FIG. 1 is a flow chart illustrating a method for interactive processing in a game provided by an exemplary embodiment of the present application;
FIG. 2 shows a flowchart illustrating steps provided by an exemplary embodiment of the present application to enter a second operation interface;
FIG. 3 is a flowchart illustrating steps provided by an exemplary embodiment of the present application to determine a guide node for a target node;
FIG. 4 is a diagram illustrating a traversal rule for a target policy tree provided by an exemplary embodiment of the present application;
FIG. 5 shows a flowchart of the steps provided by an exemplary embodiment of the present application to form a guide order;
FIG. 6 illustrates one of the diagrams provided by exemplary embodiments of the present application for determining a guide order;
FIG. 7 illustrates a second schematic diagram for determining a guide order provided by an exemplary embodiment of the present application;
FIG. 8 illustrates a third schematic diagram for determining a guide order provided by an exemplary embodiment of the present application;
FIG. 9 is a flowchart illustrating steps provided by an exemplary embodiment of the present application to display a first operation interface;
FIG. 10 is a schematic diagram illustrating a first operation interface provided by an exemplary embodiment of the present application;
FIG. 11 is a flowchart illustrating steps provided by an exemplary embodiment of the present application to trigger a guide node;
FIG. 12 is a diagram illustrating an exemplary architecture of an interactive processing device in a game, according to an exemplary embodiment of the present application;
FIG. 13 shows a schematic structural diagram of an electronic device provided in an exemplary embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not intended to limit the scope of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and that steps without logical context may be performed in reverse order or concurrently. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
The terms "a", "an", "the" and "said" are used in this specification to denote the presence of one or more elements/components/parts/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. other than the listed elements/components/etc.; the terms "first" and "second", etc. are used merely as labels, and are not limiting on the number of their objects.
It should be understood that in the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" merely describes an association between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "Comprising A, B and/or C" means comprising any one, any two, or all three of A, B and C.
It should be understood that in the embodiments of the present application, "B corresponding to A", "A corresponds to B", or "B corresponds to A" means that B is associated with A and that B can be determined according to A. Determining B according to A does not mean determining B according to A alone; B may also be determined according to A and/or other information.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
In a game, certain gameplay systems are often displayed in the form of a tree structure, such as a technology tree, a talent tree, a weapon tree, or a prop tree in the game.
At present, when a player views a tree structure interface, all tree nodes are displayed in the interface. When the player wants to operate the tree nodes, the corresponding node must first be found, then the nodes are opened one by one and each is operated, which makes the player's operation process on the tree nodes cumbersome and inefficient.
To address at least one aspect of the above problem, the present application provides an interaction processing method and apparatus in a game, an electronic device, and a storage medium, which simplify the user's operation process on the nodes of a target policy tree by introducing an automatic guidance mechanism, thereby improving interaction efficiency.
First, the terms used in the embodiments of the present application are briefly described.
In an embodiment of the present application, a graphical user interface may be provided by a terminal device, where:
The terminal device:
the terminal device related in the embodiment of the present application mainly refers to an intelligent device that is used for providing a game interface (a game scene is presented in the game interface) and can control and operate a virtual character, and the terminal device may include, but is not limited to, any one of the following devices: smart phones, tablet computers, portable computers, desktop computers, game machines, personal Digital Assistants (PDAs), e-book readers, MP4 (Moving Picture Experts Group Audio Layer IV) players, and the like. The terminal device is installed and operated with an application program supporting a game scene, such as an application program supporting a three-dimensional game scene. The application program may include, but is not limited to, any one of a virtual reality application program, a three-dimensional map program, a military simulation program, a MOBA Game, a multi-player gunfight type survival Game, a Third-person Shooting Game (TPS). Alternatively, the application may be a stand-alone application, such as a stand-alone 3D game program, or may be a network online application.
A graphical user interface:
is an interface display format for human-computer interaction, which allows a user to manipulate icons, signs, or menu options on the screen using an input device such as a mouse or keyboard, or by performing touch operations on the touch screen of a touch terminal, in order to select a command, start a program, or perform other tasks.
A game scene:
is a virtual scene that the application displays (or provides) when running on the terminal or server. Optionally, the virtual scene is a simulated environment of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene may be a two-dimensional or a three-dimensional virtual scene, and the virtual environment may be sky, land, ocean, and the like, where the land includes environmental elements such as deserts and cities. The game scene is the scene in which the user controls the virtual character to carry out the complete game logic.
Virtual characters:
refer to virtual characters in the virtual environment, which may be characters manipulated by a player, including but not limited to at least one of a virtual person, a virtual animal, and a cartoon character, as well as non-player characters (NPCs) not manipulated by a player. Optionally, when the virtual environment is a three-dimensional virtual environment, the virtual characters may be three-dimensional virtual models, each having its own shape and volume and occupying part of the space in the three-dimensional virtual environment. Optionally, a virtual character is a three-dimensional character constructed based on three-dimensional human skeleton technology, and achieves different external appearances by wearing different skins. In some implementations, the virtual character may also be implemented using a 2.5-dimensional or 2-dimensional model, which is not limited in this application.
There may be multiple virtual characters in the virtual scene, which are either virtual characters manipulated by players (i.e., characters controlled through input devices) or artificial intelligence (AI) characters obtained through training and placed in the virtual-environment battle. Optionally, the virtual character is a character playing the game in the game scene. Optionally, the number of virtual characters in a game-scene match is preset, or is dynamically determined according to the number of terminal devices joining the virtual match, which is not limited in the embodiments of the present application. In one possible implementation, the user can control the virtual character to move within the virtual scene, e.g., run, jump, or crawl, and can also control the virtual character to fight other virtual characters using the virtual skills, virtual props, and the like provided by the application.
In an alternative embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores the game program and is used for presenting the game screen. The local terminal device interacts with the player through a graphical user interface; that is, the game program is conventionally downloaded, installed, and run on the electronic device. The local terminal device may provide the graphical user interface to the player in various ways; for example, the interface may be rendered and displayed on a display screen of the terminal device, or provided to the player through holographic projection. By way of example, the local terminal device may include a display screen for presenting a graphical user interface including a game scene screen, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
An application scenario to which the present application is applicable is introduced below. The method and apparatus can be applied to the technical field of games, in which multiple players participating in a game join the same virtual game match.
Before entering the virtual game match, a player can select different character attributes, such as identity attributes, for his or her own virtual character, and different camps are determined by allocating the different character attributes. The player then wins the game match by performing the tasks assigned by the game at different stages of the virtual match; for example, a plurality of virtual characters having character attribute A "eliminate" the virtual characters having character attribute B in a game stage in order to win the match. Here, when the virtual game match is entered, a character attribute may be randomly assigned to each virtual character participating in the match.
An implementation environment provided in one embodiment of the present application may include a first terminal device, a server, and a second terminal device. The first terminal device and the second terminal device each communicate with the server to realize data communication. In this embodiment, the first terminal device and the second terminal device each have installed an application program for executing the interaction processing method in a game provided by the present application, and the server is the server side for executing the interaction processing method in a game provided by the present application. Through the application program, the first terminal device and the second terminal device can each communicate with the server.
Taking the first terminal device as an example, the first terminal device establishes communication with the server by running the application program. In an alternative embodiment, the server establishes the virtual game-play based on the game request of the application. The parameters of the virtual game play can be determined according to the parameters in the received game request, for example, the parameters of the virtual game play can include the number of people participating in the virtual game play, the character level participating in the virtual game play, and the like. When the first terminal device receives a response of the game server, a game scene corresponding to the virtual game is displayed through a graphical user interface of the first terminal device, the first terminal device is a device controlled by a first user, a virtual character displayed in the graphical user interface of the first terminal device is a player character (namely, the first virtual character) controlled by the first user, and the first user inputs an operation instruction through the graphical user interface so as to control the player character to execute corresponding operation in the game scene.
Taking the second terminal device as an example, the second terminal device establishes communication with the server by running the application program. In an alternative embodiment, the server establishes the virtual game-play based on the game request of the application. The parameters of the virtual game may be determined according to parameters in the received game request, for example, the parameters of the virtual game may include the number of people participating in the virtual game, the level of characters participating in the virtual game, and the like. And when the second terminal equipment receives the response of the server, displaying the game scene corresponding to the virtual game through the graphical user interface of the second terminal equipment. The second terminal device is a device controlled by a second user, the virtual character displayed in the graphical user interface of the second terminal device is a player character (i.e., a second virtual character) controlled by the second user, and the second user inputs an operation instruction through the graphical user interface to control the player character to execute corresponding operation in the virtual scene.
The server performs data calculation according to the game data reported by the first terminal device and the second terminal device, and synchronizes the calculated game data to the first terminal device and the second terminal device, so that the first terminal device and the second terminal device control the graphical user interface to render a corresponding game scene and/or virtual character according to the synchronization data issued by the game server.
In this embodiment, the first virtual character controlled by the first terminal device and the second virtual character controlled by the second terminal device are virtual characters in the same virtual game match. The first virtual character and the second virtual character may have the same character attribute or different character attributes, and the first virtual character controlled by the first terminal device and the second virtual character controlled by the second terminal device belong to the same camp.
It should be noted that, in the virtual game, two or more virtual characters may be included, and different virtual characters may correspond to different terminal devices, that is, in the virtual game, there are two or more terminal devices that respectively perform transmission and synchronization of game data with the game server.
For the convenience of understanding of the present application, the following detailed descriptions are provided for an interaction processing method, an interaction processing apparatus, an electronic device, and a storage medium in a game provided in an embodiment of the present application.
Referring to fig. 1, a flowchart of an interaction processing method in a game according to an exemplary embodiment of the present application is shown, where the interaction processing method specifically includes:
Step S101: receiving a trigger operation performed by a user on a target node.
Here, at least one policy tree exists in the game. Each policy tree has at least one branch, and each branch has at least one node. Nodes that are associated with each other are connected by a connecting line, and two nodes connected by a connecting line may or may not have a sequential relationship.
Illustratively, each node in the policy tree corresponds to a virtual element in the game scene, and the state of the game world can be changed by operating on the virtual element.
By way of example, a policy tree may include, but is not limited to: a technology tree, a talent tree, a weapon tree, a prop tree, a building tree (which may refer to a tree structure used for creating buildings in a game scene), and so forth. That is, technologies, weapons, props, and the like in the game can all be presented in the form of a tree structure, and by operating the nodes in the tree structure, the virtual elements corresponding to the operated nodes can be applied in the game scene to change the state of the game world.
In the exemplary embodiment of the present application, the tree structure interface corresponding to the policy tree may be entered in various ways to operate on each node in the policy tree.
In one example, a player in the game may obtain a game box in various ways, such as at least one of: paying virtual electronic resources, picking up the box from the game scene after defeating an enemy-controlled virtual character, and completing virtual tasks in the game. When an opening operation on the game box is received, a policy tree corresponding to the game box is generated, and the nodes of the policy tree are displayed on the tree structure interface. It should be understood that the above-listed manner of generating the policy tree is merely an example, and the present application is not limited thereto.
In an embodiment of the present application, the target node is a node in a target policy tree in the game, and the target policy tree is one of at least one policy tree existing in the game.
As an example, the trigger operation is an operation for triggering the predetermined function corresponding to the target node. For example, the trigger operation may include a selection operation performed by the user on a second trigger control on a second operation interface for the target node, where the second operation interface is an interface that shows description information of the target node and allows operations on the target node, and the second trigger control is a control for executing the predetermined function on the virtual element corresponding to the target node in the game.
For example, each node in the policy tree corresponds to a virtual element in the game. Taking the target policy tree being a building tree as an example, each node in the building tree corresponds to a building, and in this case the second trigger control for the target node is a control for creating that building in the game scene. Taking the target policy tree being a prop tree as an example, each node in the prop tree corresponds to a prop, and the second trigger control for the target node is a control for equipping the virtual character with that prop in the game.
For example, the virtual elements may include virtual items and virtual skills used to enhance the combat ability of the virtual character, where a virtual item is an in-game item used to increase the virtual abilities of the virtual character.
As an example, the virtual item may include, but is not limited to, at least one of: a virtual weapon held by the virtual character, and a virtual accessory worn by the virtual character or attached to a virtual weapon, where a virtual accessory may refer to a gem, a gunstock, or the like that can be attached to a weapon.
The process of entering the second operation interface for the target node is described below with reference to fig. 2, and it should be understood that the manner shown in fig. 2 is only an example, and the application is not limited thereto.
Fig. 2 shows a flowchart of steps provided by an exemplary embodiment of the present application to enter a second operation interface.
In step S201, a tree structure interface for the target policy tree is displayed.
Here, the target node of the target policy tree is displayed in the tree structure interface. For example, all nodes in the target policy tree may be displayed in the tree structure interface so that the target node is shown, or the tree structure interface may be zoomed in so that only a subset of the nodes of the target policy tree, including the target node, is displayed.
In step S202, in response to the selection operation performed on the target node on the tree structure interface, a second operation interface for the target node is entered.
In an optional embodiment of the present application, the interaction processing method may further include: determining whether the target node is the first node triggered after entering the tree structure interface. If the target node is the first node triggered after entering the tree structure interface, the selection operation is a manual operation, for example, a click operation performed by the user on the target node; if the target node is not the first node triggered after entering the tree structure interface, the selection operation is an automatic operation, for example, an operation of automatically selecting the target node after entering the tree structure interface.
In a preferred embodiment of the present application, the target node is a node in the target policy tree that satisfies the following conditions: every node on the designated node path in the target policy tree has been triggered, and the trigger condition corresponding to the target node belongs to the target constraint condition.
Here, the above-mentioned specified node path may refer to a path from a parent node of the target node to the root node, the triggered node means that a predetermined function has been executed on the virtual element corresponding to the node in the game, and for example, the triggered node may mean that all the trigger controls corresponding to the nodes have been selected.
In the embodiment of the application, each node in the target policy tree corresponds to a respective trigger condition. When the trigger condition is met, the predetermined function is executed on the virtual element corresponding to the node in the game; when it is not met, the predetermined function is not executed on the virtual element corresponding to the node.
The trigger condition corresponding to a node may be any of various constraints on triggering the node. For example, the trigger condition corresponding to each node in the target policy tree may include at least one of the following: a constraint on the trigger time, a constraint on the fabrication materials of the virtual element, and a task constraint on the virtual element (e.g., associating the degree of completion of at least one task in the game with the virtual element corresponding to the node).
When the trigger condition corresponding to a node is met, this indicates that the predetermined function can be executed on the virtual element corresponding to the node in the game. Here, the trigger condition corresponding to the node further includes the condition that execution of the predetermined function on the virtual element corresponding to the node has been completed.
In this embodiment of the application, for the case where the trigger condition corresponding to a node is a constraint on the trigger time, the time constraint corresponding to each node may include a first time constraint condition and a second time constraint condition. As an example, the first time constraint condition indicates that the trigger time corresponding to the node is 0, and the second time constraint condition indicates that the trigger time corresponding to the node is not 0. For example, for a node with a trigger time of 0, when the trigger control corresponding to the node is selected, the predetermined function is immediately executed on the corresponding virtual element in the game; for a node with a non-zero trigger time, when the trigger control corresponding to the node is selected, the predetermined function is executed on the corresponding virtual element only after the trigger time has elapsed.
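For illustration only, the node model described above could be sketched in Python as follows; the names PolicyNode, trigger_time, unlocked, and TimeConstraint are assumptions introduced here and are not part of the disclosed embodiments.

```python
# Illustrative sketch only; class and field names are assumptions, not part of the disclosure.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class TimeConstraint(Enum):
    FIRST = "first"    # first time constraint: trigger time of 0, function executes immediately
    SECOND = "second"  # second time constraint: non-zero trigger time, function executes after it elapses


@dataclass
class PolicyNode:
    """One node of a target policy tree (e.g. a technology tree or prop tree)."""
    name: str
    virtual_element: str                      # the in-game virtual element the node corresponds to
    trigger_time: float = 0.0                 # seconds until the predetermined function executes
    unlocked: bool = True                     # only unlocked nodes are candidate (selectable) nodes
    triggered: bool = False                   # True once the predetermined function has been executed
    children: List["PolicyNode"] = field(default_factory=list)
    parent: Optional["PolicyNode"] = None

    @property
    def time_constraint(self) -> TimeConstraint:
        # A zero trigger time maps to the first time constraint, otherwise to the second.
        return TimeConstraint.FIRST if self.trigger_time == 0 else TimeConstraint.SECOND
```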
Returning to FIG. 1, step S102: in response to the trigger operation performed by the user on the target node, determining a guide node of the target node and entering a first operation interface for the guide node.
In an embodiment of the present application, one manner of entering the first operation interface for the guide node is to automatically select the guide node so as to enter the first operation interface for the guide node.
When the target node is triggered, an automatic guidance mechanism automatically selects the guide node that needs to be operated next, so the user does not need to click it, which simplifies the user's operation process on the nodes of the target policy tree to a certain extent.
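As a minimal sketch of the flow of steps S101 and S102, assuming a precomputed guide order and using print statements to stand in for the real interface layer, the automatic selection of the guide node could look as follows; the function names are hypothetical.

```python
# Minimal sketch of steps S101-S102; print statements stand in for the real UI layer.
from typing import Optional, Sequence


def next_guide_node(target: str, guide_order: Sequence[str]) -> Optional[str]:
    """Return the node that follows `target` in the guide order, if any."""
    try:
        idx = guide_order.index(target)
    except ValueError:
        return None
    return guide_order[idx + 1] if idx + 1 < len(guide_order) else None


def on_node_triggered(target: str, guide_order: Sequence[str]) -> None:
    """After the user triggers `target`, automatically select its guide node
    and enter the first operation interface for that guide node."""
    guide = next_guide_node(target, guide_order)
    if guide is None:
        print("No further node to guide the user to.")
        return
    print(f"Auto-selecting guide node {guide} and entering its first operation interface.")
    print(f"Showing the first trigger control for the virtual element of {guide}.")


# Usage: with guide order B -> D -> E and target node B, the guide node is D.
on_node_triggered("B", ["B", "D", "E"])
```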
In one case, the guide node is a first candidate node in the target policy tree. Here, a first candidate node is a node in the target policy tree whose trigger condition belongs to the first time constraint condition.
FIG. 3 is a flowchart illustrating the steps of determining the guide node of the target node according to an exemplary embodiment of the present application.
In step S301, a first candidate node is screened from nodes of the target policy tree according to the target constraint condition.
Here, the triggering condition corresponding to the node includes at least one constraint condition triggered for the node, the target constraint condition is one of the at least one constraint condition, and node screening may be performed on the target policy tree based on the determined target constraint condition to obtain the first candidate node. As an example embodiment, the target constraint condition may be a first time constraint condition that the trigger time corresponding to the node is 0, or may be a first material constraint condition that the material loss value corresponding to the node is lower than a preset loss value, or may be a first task constraint condition that the task association degree of the node is greater than a second preset threshold.
That is, the first candidate node is a node in the target policy tree that satisfies the target constraint.
In step S302, a guide order is determined based on the first candidate nodes.
Here, the guide order is a predetermined triggering order for the nodes in the target policy tree; for example, it is the order, determined after traversing each first candidate node in the target policy tree, in which the user is automatically guided to trigger the functions of the nodes.
In this embodiment, the trigger order of the first candidate nodes is determined as the guide order for the target policy tree. For example, a first sub-order of the first candidate nodes may be determined, and the first sub-order is determined as the guide order.
For the case where the trigger condition corresponding to a node is a constraint on the trigger time, the target constraint condition includes the first time constraint condition.
In this case, the first sub-order of the first candidate nodes is determined as follows: a traversal order for the target policy tree is determined according to a preset traversal rule, and each first candidate node in the target policy tree is traversed in the determined traversal order to form the first sub-order corresponding to all first candidate nodes. Here, a first candidate node is a node in the target policy tree whose trigger condition belongs to the first time constraint condition.
The preset traversal rule may be any of various traversal manners, which may include, for example and without limitation, pre-order traversal, in-order traversal, and post-order traversal of the target policy tree, as well as other customized traversal manners.
Fig. 4 is a schematic diagram illustrating a traversal rule for a target policy tree according to an exemplary embodiment of the present application.
In this example, a representation of a target policy tree containing ten nodes labeled A to K is shown.
The following describes the process of determining the traversal order for the target policy tree based on pre-order, in-order, and post-order traversal, respectively.
Taking pre-order traversal as an example, each node in the target policy tree is visited in root-left-right order, and the traversal order of the target policy tree shown in FIG. 4 under this traversal manner is: ABDHECFJKG.
Taking in-order traversal as an example, each node in the target policy tree is visited in left-root-right order, and the traversal order of the target policy tree shown in FIG. 4 under this traversal manner is: DHBEAJFKCG.
Taking post-order traversal as an example, each node in the target policy tree is visited in left-right-root order, and the traversal order of the target policy tree shown in FIG. 4 under this traversal manner is: HDEBJKFGCA.
With the determined traversal manner, every candidate node in the target policy tree can be traversed without omission.
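A minimal Python sketch that reproduces the three traversal orders for the tree of FIG. 4 is given below; the left/right child assignments are inferred from the listed sequences and are otherwise an assumption.

```python
from typing import Dict, List, Optional, Tuple

# Children of each node in the FIG. 4 tree, written as (left, right);
# the exact shape is inferred from the traversal orders listed above.
TREE: Dict[str, Tuple[Optional[str], Optional[str]]] = {
    "A": ("B", "C"),
    "B": ("D", "E"),
    "D": (None, "H"),
    "C": ("F", "G"),
    "F": ("J", "K"),
}


def preorder(node: Optional[str]) -> List[str]:
    if node is None:
        return []
    left, right = TREE.get(node, (None, None))
    return [node] + preorder(left) + preorder(right)


def inorder(node: Optional[str]) -> List[str]:
    if node is None:
        return []
    left, right = TREE.get(node, (None, None))
    return inorder(left) + [node] + inorder(right)


def postorder(node: Optional[str]) -> List[str]:
    if node is None:
        return []
    left, right = TREE.get(node, (None, None))
    return postorder(left) + postorder(right) + [node]


print("".join(preorder("A")))   # ABDHECFJKG
print("".join(inorder("A")))    # DHBEAJFKCG
print("".join(postorder("A")))  # HDEBJKFGCA
```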
In the embodiment of the application, the target policy tree includes candidate nodes and non-candidate nodes, the candidate nodes are selectable nodes in the target policy tree, and the non-candidate nodes are non-selectable nodes in the target policy tree. Illustratively, the node state of each node includes an unlocked state and a locked state, and the node in the unlocked state may be selected and the node in the locked state may not be selected.
In the embodiment of the application, only the node in the target policy tree in the unlocked state is determined as the candidate node for traversing.
Here, the candidate nodes include a first candidate node and a second candidate node, the first candidate node is a node in the target policy tree where the trigger condition belongs to the first time constraint, and the second candidate node is a node in the target policy tree where the trigger condition belongs to the second time constraint.
In the embodiment of the present application, traversal may be performed only for the first candidate nodes under the first time constraint condition to form the guide order.
In step S303, the guide node of the target node is determined from the nodes of the target policy tree based on the guide order.
Here, the guide node is the node next to the target node in the guide order under the target policy tree; that is, the node located immediately after the target node in the guide order is the guide node.
For example, when the target node is not at the last level, the next guide node may be the node immediately following the target node. When the target node is a leaf node at the last level of branch A in the target policy tree, the next guide node may fall back to the root node of branch B, that is, another node is taken as the next guide node.
In the embodiment of the present application, the first candidate nodes precede the other nodes in the guide order.
Through this embodiment, the first candidate nodes are screened out from the target policy tree, guidance is performed only for the first candidate nodes, and no guidance may be performed for the second candidate nodes. Since the first candidate nodes, whose trigger conditions belong to the first time constraint condition, can be triggered immediately, automatically guiding the user to such nodes makes subsequent operations on them more convenient, further improves the efficiency of operating the nodes, and helps improve the fluency of the game operation.
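Steps S301 and S302 can be illustrated with the following sketch, which filters a traversal order down to the first candidate nodes; step S303 then reduces to the next-node lookup shown in the earlier sketch. The trigger times and lock states used here are invented purely for illustration.

```python
from typing import Dict, List


def first_sub_order(traversal_order: List[str],
                    trigger_times: Dict[str, float],
                    unlocked: Dict[str, bool]) -> List[str]:
    """Sketch of steps S301-S302: keep, in traversal order, only the unlocked
    nodes whose trigger condition belongs to the first time constraint
    (a trigger time of 0); the resulting first sub-order is used as the guide order."""
    return [n for n in traversal_order
            if unlocked.get(n, False) and trigger_times.get(n, 0.0) == 0.0]


# Usage with the FIG. 4 tree in pre-order (ABDHECFJKG); the trigger times and
# lock states below are invented for illustration only.
traversal = list("ABDHECFJKG")
trigger_times = {"A": 0, "B": 0, "D": 30, "H": 0, "E": 0, "C": 0, "F": 10, "J": 0, "K": 0, "G": 0}
unlocked = {n: True for n in traversal}

print(first_sub_order(traversal, trigger_times, unlocked))
# ['A', 'B', 'H', 'E', 'C', 'J', 'K', 'G'] -- D and F are excluded (non-zero trigger time)
```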
In another case, the guide node is any candidate node in the target policy tree. Here, the candidate node serving as the guide node may be either a first candidate node or a second candidate node.
In this embodiment, the guide node is the node next to the target node in the guide order under the target policy tree; that is, the node located immediately after the target node in the guide order is the guide node. Here, the guide order is a predetermined triggering order for the nodes in the target policy tree; for example, it is the order, determined after traversing each candidate node in the target policy tree, in which the user is automatically guided to trigger the functions of the nodes.
FIG. 5 shows a flowchart of the steps of forming the guide order provided by an exemplary embodiment of the present application.
In step S310, a first sub-order of the first candidate node and a second sub-order of the second candidate node are determined.
Here, the first sub-order of the first candidate nodes is determined in the same manner as in step S302, and the details are not repeated here.
For the case where the trigger condition corresponding to a node is a constraint on the trigger time, the constraint condition includes the second time constraint condition. In this case, each second candidate node in the target policy tree may be traversed in the determined traversal order to form the second sub-order corresponding to all second candidate nodes. The first sub-order and the second sub-order may be determined by the same traversal rule or by different traversal rules, which is not limited in this example embodiment.
In step S320, the guide order is constructed from the first sub-order and the second sub-order. Here, the guide order is formed after all selectable nodes in the target policy tree have been traversed.
For example, the arrangement order between the first sub-order and the second sub-order may be determined according to actual requirements.
In a preferred embodiment, the arrangement between the first sub-order and the second sub-order may be determined based on historical operation habits.
In the embodiment of the application, the traversal form for the candidate nodes is determined according to the historical operation data of each candidate node of the policy tree in the game. For example, if the historical operation data show that the first candidate nodes and the second candidate nodes in the policy tree are usually processed in a concentrated manner (for example, all the second candidate nodes are triggered only after triggering has been completed for all the first candidate nodes in the policy tree), it is determined that the first candidate nodes and the second candidate nodes are traversed separately.
In the embodiment of the application, candidate nodes with different trigger conditions are traversed separately, and the corresponding sub-orders are determined, so as to match the personalized operation habits of users.
For example, the selection orders of different users in the game for the first candidate nodes and the second candidate nodes can be collected, and the arrangement order between the first sub-order and the second sub-order is determined according to these selection orders. If, based on the historical operation habits, the number of users who select a first candidate node first is larger than the number of users who select a second candidate node first, the first sub-order is placed before the second sub-order; if the number of users who select a second candidate node first is larger than the number of users who select a first candidate node first, the first sub-order is placed after the second sub-order.
Here, based on the collected selection orders of different users in the game for the first candidate node and the second candidate node, when it is determined that the number of users who first selected the first candidate node is greater than the number of users who first selected the second candidate node, a scheme for determining the guidance order only for the first candidate node may be selected.
In addition, only the historical operation habits of the current user (i.e., the user who performs the trigger operation of S101) may be obtained, and the arrangement order between the first sub-order and the second sub-order may be determined according to that user's selection order for the first candidate nodes and the second candidate nodes.
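A count-based decision of this kind can be sketched as follows (Python; the record format, function name, and tie-breaking rule are assumptions of this sketch rather than requirements of the application):

```python
from typing import Iterable, List

def arrange_sub_orders(first_sub_order: List[str],
                       second_sub_order: List[str],
                       first_picked_kind: Iterable[str]) -> List[str]:
    """Arrange the two sub-orders according to historical operation habits.

    first_picked_kind holds one entry per user: "first" if that user
    historically selected a first candidate node before any second candidate
    node, and "second" otherwise. Ties favour the first sub-order here,
    which is an arbitrary choice of this sketch.
    """
    votes = list(first_picked_kind)
    if votes.count("first") >= votes.count("second"):
        return first_sub_order + second_sub_order
    return second_sub_order + first_sub_order

# Usage: three of four users picked a first candidate node first, so the
# first sub-order is placed before the second sub-order.
print(arrange_sub_orders(["T", "T1"], ["T3"], ["first", "first", "second", "first"]))
# ['T', 'T1', 'T3']
```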
In a preferred embodiment of the present application, the second sub-order in the guide order is located after the first sub-order, that is, the user is guided to operate the first candidate nodes first and then the second candidate nodes. This simplifies the operation process, improves operation efficiency, and takes the user's operation habits into account, which helps to improve the user's operation experience and increase user stickiness.
The process of determining the guide order is described below in connection with the examples of fig. 6 to fig. 8.
Fig. 6 shows one of the schematic diagrams for determining a guide order provided by the exemplary embodiments of the present application.
In this example, assuming that the preset traversal rule is a left-to-right and top-to-bottom traversal order of the target policy tree, in the target policy tree shown in fig. 6, the nodes T, T1, T2, T3, T5, T6, and T7 are candidate nodes, i.e., selectable nodes, and the node T4 is a non-candidate node, i.e., a non-selectable node.
Specifically, the nodes T, T1, T2, T5, T6, and T7 are nodes whose trigger conditions belong to the first time constraint condition in the target policy tree, and the node T3 is a node whose trigger condition belongs to the second time constraint condition in the target policy tree. Taking the target policy tree as an example, the virtual element corresponding to a node belonging to the first time constraint condition may be a game object whose research time is 0 in the game, and the virtual element corresponding to a node belonging to the second time constraint condition may be a game object that requires research time in the game, such as a ship technology value. The game objects may include game items, game skills, virtual buildings, and the like.
In this embodiment, the first candidate nodes T, T1, T2, T5, T6, and T7 may be traversed according to the above traversal order to determine a first sub-order (T → T1 → T2 → T5 → T6 → T7), and the second candidate node T3 may be traversed according to the above traversal order to obtain a second sub-order (T3). The guide order formed from the first sub-order and the second sub-order is T → T1 → T2 → T5 → T6 → T7 → T3.
After the trigger operation on the node T of the target policy tree is received, the guide node of the node T is determined to be T1 according to the guide order; the operation interface for the node T is exited, the tree structure interface is returned to, the guide node T1 is automatically selected, and the operation interface for the node T1 is entered.
After that, if a trigger operation on the node T1 is received, the above process is repeated, that is, the guide node of the node T1 is determined to be T2 according to the above guide order; at this time, the operation interface for the node T1 is exited, the tree structure interface is returned to, the guide node T2 is automatically selected, and the operation interface for the node T2 is entered.
By analogy, the above process is subsequently repeated for each node in the order of T5 → T6 → T7 → T3.
That is, after the guide order for the target policy tree is determined, the nodes are automatically selected in sequence according to the guide order, so as to guide the user to operate on the selected nodes.
Fig. 7 shows a second schematic diagram for determining a guide order provided by an exemplary embodiment of the present application.
In the present example, assuming that the preset traversal rule is still the left-to-right and top-to-bottom traversal order of the target policy tree, in the target policy tree shown in fig. 7, the nodes W, W1, W2, W3, and W4 are candidate nodes. Among them, W, W1, W2, and W4 belong to the first candidate nodes, and W3 belongs to the second candidate nodes.
In this embodiment, the first candidate nodes W, W1, W2, and W4 and the second candidate node W3 may be traversed in the above traversal order, and the resulting guide order is W → W1 → W2 → W4 → W3.
According to the guide order, the processes of automatic selection and manual triggering are then performed on each node in sequence.
Fig. 8 shows a third schematic diagram for determining a guide order provided by an exemplary embodiment of the present application.
In the present example, assuming that the preset traversal rule is still the left-to-right and top-to-bottom traversal order of the target policy tree, in the target policy tree shown in fig. 8, the nodes Y, Y1, Y2, Y3, Y4, and Y5 are candidate nodes. Among them, Y, Y1, Y2, Y3, and Y4 belong to the first candidate nodes, and Y5 belongs to the second candidate nodes.
In this embodiment, the first candidate nodes Y, Y1, Y2, Y3, and Y4 and the second candidate node Y5 may be traversed in the above traversal order, and the resulting guide order is Y → Y1 → Y2 → Y3 → Y4 → Y5.
According to the guide order, the processes of automatic selection and manual triggering are then performed on each node in sequence.
It should be understood that the traversal order may also be from parent node to child node, or may be determined randomly, which is not limited in this application.
In this embodiment of the present application, the processing procedure shown in fig. 1 may be performed node by node according to the guide order, that is, each node is taken as the target node in turn so as to trigger the automatic selection mechanism for its guide node. Based on this combination of automatic guidance and manual triggering, frequent operations by the user on each node can be avoided, and nodes in the target policy tree are prevented from being omitted.
In addition, based on this combination of automatic guidance and manual triggering, the complexity of the user's operations on each node can be reduced while the interactive experience of obtaining virtual elements from the game is retained, which helps to increase user stickiness. Interaction breakpoints caused by frequent sliding and clicking by the user can also be reduced, improving the smoothness of play.
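A non-limiting sketch of this "automatic selection plus manual triggering" loop is given below (Python; the callback names stand in for engine/UI code and are not an API of the application):

```python
from typing import Callable, Sequence

def run_guided_flow(guide_order: Sequence[str],
                    auto_select: Callable[[str], None],
                    wait_for_manual_trigger: Callable[[str], bool]) -> None:
    """Walk the guide order: selection is automatic, triggering stays manual.

    auto_select stands in for entering the node's operation interface, and
    wait_for_manual_trigger stands in for the user's decision on the trigger
    control; both are placeholders for engine/UI code.
    """
    for node in guide_order:
        auto_select(node)                      # automatic selection of the guide node
        if not wait_for_manual_trigger(node):  # the user may stop the guidance here
            break

# Usage with stub callbacks: the stub "user" triggers every node.
run_guided_flow(
    ["T", "T1", "T2", "T5", "T6", "T7", "T3"],
    auto_select=lambda n: print(f"auto-selected {n}"),
    wait_for_manual_trigger=lambda n: True,
)
```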
The process of entering the first operation interface for the bootstrap node is described below with reference to fig. 9, and it should be understood that the process shown in fig. 9 is merely an example, and the present application is not limited thereto.
Fig. 9 is a flowchart illustrating a step of displaying a first operation interface according to an exemplary embodiment of the present application.
In step S401, in response to a trigger operation performed on the target node by the user, the interface is switched.
Here, the trigger operation may be a selection operation performed by the user on the second operation interface for the target node on the second trigger control, in which case, in response to the selection operation, the second operation interface for the target node is exited, and the tree structure interface is displayed to perform interface switching.
In step S402, the tree structure interface is controlled to move on the screen so as to display the guide node of the target policy tree in the screen.
Here, the above tree structure interface may be displayed in a graphical user interface provided by the terminal device, and the tree structure interface is controlled to slide relative to the screen of the terminal device to present the guide node on the screen. This is needed because, in order to make it easier for the user to operate the nodes in the target policy tree, the tree structure interface is enlarged, so that not all nodes of the target policy tree can be displayed on the tree structure interface at the same time; after one node has been operated, the tree structure interface therefore needs to be moved to display another node on the screen.
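As a rough, non-limiting illustration of the scroll computation (the coordinate convention and the centring policy are assumptions of this sketch, not requirements of the application):

```python
from typing import Tuple

def scroll_offset_to_show(node_pos: Tuple[float, float],
                          viewport_size: Tuple[float, float]) -> Tuple[float, float]:
    """Scroll offset that brings a node to the centre of the visible screen.

    node_pos is the node's position in the coordinates of the enlarged tree
    structure interface; centring the node is one possible policy among many.
    """
    node_x, node_y = node_pos
    view_w, view_h = viewport_size
    return (node_x - view_w / 2.0, node_y - view_h / 2.0)

# Usage: bring the guide node at (900, 400) on the enlarged tree interface
# into view on a 1280 x 720 screen.
print(scroll_offset_to_show((900, 400), (1280, 720)))  # (260.0, 40.0)
```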
In step S403, a guide node is automatically selected on the displayed tree structure interface.
Here, the operation of selecting the guidance node is automatically performed, and the user does not need to manually click the guidance node.
In step S404, a window is created, and a first operation interface for the guidance node is displayed in the created window.
In an example, the first operation interface may be displayed directly in the graphical user interface after the guide node is automatically selected.
In another example, the first operation interface may be displayed in the created window after the guide node is automatically selected.
Preferably, the window may be displayed floating above the tree structure interface; a movement operation performed by the user on the window may be received, and the display position of the window on the tree structure interface may be changed accordingly.
For example, the above interaction processing method according to the embodiment of the present application may further include: receiving a positioning operation performed on the window; in response to the positioning operation, determining a positioning node corresponding to the window from the target policy tree; if the positioning node belongs to the first candidate nodes, adjusting the position of the positioning node in the guide order; and if the positioning node belongs to the second candidate nodes or is a non-candidate node, leaving the guide order unchanged.
For example, a positioning control may be disposed within the window or at an edge of the window, and the positioning operation may include, for example, a selection operation of the positioning control of the window. In addition, the positioning node may refer to a node closest to the window in the target policy tree, or may refer to a node covered by an area where the window is located.
As an example, adjusting the position of the positioning node in the guide order may include: determining the positioning node as the node located next to the guide node in the guide order.
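A minimal sketch of this adjustment is given below (Python; the list-based guide order and the function name are illustrative assumptions); a positioning node that is not a first candidate node leaves the guide order unchanged:

```python
from typing import List, Set

def adjust_for_positioning(guide_order: List[str],
                           guide_node: str,
                           positioning_node: str,
                           first_candidates: Set[str]) -> List[str]:
    """Move a first-candidate positioning node to just after the guide node.

    Second candidate nodes and non-candidate nodes leave the guide order
    unchanged, as described above.
    """
    if (positioning_node not in first_candidates
            or positioning_node == guide_node
            or guide_node not in guide_order):
        return guide_order
    order = [n for n in guide_order if n != positioning_node]
    insert_at = order.index(guide_node) + 1
    return order[:insert_at] + [positioning_node] + order[insert_at:]

# Usage: the window is dragged onto node T6 while T1 is the current guide node.
print(adjust_for_positioning(
    ["T", "T1", "T2", "T5", "T6", "T7", "T3"],
    guide_node="T1", positioning_node="T6",
    first_candidates={"T", "T1", "T2", "T5", "T6", "T7"}))
# ['T', 'T1', 'T6', 'T2', 'T5', 'T7', 'T3']
```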
Here, the above steps S401 to S404 are a series of actions performed after receiving a trigger operation performed by the user on the target node, so as to implement automatic guidance for the user.
Returning to fig. 1, step S103: and displaying a first trigger control aiming at the guide node on the first operation interface.
In an embodiment of the present application, the first trigger control is a control for executing a predetermined function on a virtual element corresponding to the guidance node in the game. For example, introduction information of the virtual element corresponding to the guidance node may be further displayed in the first operation interface, so that the user can confirm the virtual element.
Based on the interactive processing scheme, the automatic guide mechanism for the guide node of the target node in the strategy tree can be triggered aiming at the target node, so that frequent operation of a user on each node can be avoided, the operation process is simplified, and the operation fluency can be improved.
In one case, the guide node is a node in the target policy tree whose trigger condition belongs to the first time constraint condition.
At this time, the first operation interface is a simplified interface including the first trigger control for the guidance node, that is, the virtual element corresponding to the guidance node is unique, and the first operation interface may include the first trigger control for the guidance node and introduction information for the corresponding virtual element.
In this case, in response to a selection operation of the first trigger control, a node next to the guidance node in the guidance order under the target policy tree is determined, and a third operation interface for the node is entered.
Here, this is equivalent to taking the guide node as the target node, and the above step S101 is executed again until triggering has been completed for each node in the guide order.
In another case, the guide node is a node in the target policy tree whose trigger condition belongs to the second time constraint condition.
At this time, the first operation interface is an editing interface including a first trigger control for the guidance node and a plurality of candidate virtual elements, that is, the virtual element corresponding to the guidance node is not unique, and the virtual element corresponding to the guidance node needs to be determined through an operation performed on the first operation interface.
In this case, a selection operation on any one of the candidate virtual elements on the first operation interface is received, and in response to a selection operation on the first trigger control, the selected candidate virtual element is determined as the virtual element corresponding to the guide node, so that the predetermined function is executed for the virtual element corresponding to the guide node.
Fig. 10 is a schematic diagram illustrating a first operation interface provided in an exemplary embodiment of the present application.
As shown in fig. 10, a candidate virtual element display area is provided below the first operation interface 10, at least a part of candidate virtual elements are displayed in the candidate virtual element display area, and when all the candidate virtual elements cannot be displayed in the candidate virtual element display area, the rightmost arrow is clicked to perform scroll display.
When the candidate virtual element 11 is selected, displaying the selected candidate virtual element 11 in a preview area of the first operation interface 10, and if a selection operation on the first trigger control 12 on the first operation interface 10 is received, determining the candidate virtual element 11 as a virtual element corresponding to the guidance node, and executing a predetermined function for the virtual element corresponding to the guidance node.
Illustratively, after the selection operation on the first trigger control 12 on the first operation interface 10 is received, a virtual element confirmation interface is entered. In the case where the trigger conditions of the virtual element corresponding to the guide node include both a first time constraint condition and a second time constraint condition, a first condition option and a second condition option are displayed in the virtual element confirmation interface, the first condition option indicating the first time constraint condition and the second condition option indicating the second time constraint condition. A selection operation of the user on one of the first condition option and the second condition option is received, the selected time constraint condition is determined as the trigger condition of the virtual element, and the predetermined function is executed for the virtual element corresponding to the guide node in the game based on that trigger condition.
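As a simple, non-limiting illustration of binding the chosen candidate virtual element and condition option to the guide node (the data structure and names are assumptions of this sketch, not part of the application):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GuideNodeState:
    """Illustrative editing-interface state for one guide node."""
    node_id: str
    virtual_element: Optional[str] = None
    trigger_condition: Optional[str] = None  # "first" or "second" time constraint

def confirm_virtual_element(state: GuideNodeState,
                            selected_element: str,
                            selected_condition_option: str) -> GuideNodeState:
    """Bind the chosen candidate element and condition option to the guide node."""
    state.virtual_element = selected_element
    state.trigger_condition = selected_condition_option
    return state

# Usage: the candidate virtual element previewed on the editing interface is
# confirmed with the first condition option.
print(confirm_virtual_element(GuideNodeState("T3"), "candidate_element_11", "first"))
```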
Fig. 11 is a flowchart illustrating a step of triggering a bootstrap node according to an exemplary embodiment of the present application.
In step S501, a selection operation of a first trigger control by a user is received.
Illustratively, the selection operation may include a click operation on a first trigger control.
In step S502, in response to a selection operation of the first trigger control, a trigger condition corresponding to the guidance node is determined.
Here, it may be determined whether the trigger condition corresponding to the guidance node belongs to the first time constraint or the second time constraint.
In step S503, when the trigger condition is satisfied, a predetermined function is executed for the virtual element corresponding to the guidance node.
For example, if the trigger condition corresponding to the guidance node belongs to the first time constraint condition, a predetermined function is directly executed in the game for the virtual element corresponding to the guidance node in response to the selection operation of the first trigger control. And if the trigger condition corresponding to the guide node belongs to the second time constraint condition, responding to the selection operation of the first trigger control, and executing a preset function for the virtual element corresponding to the guide node in the game after waiting for the trigger time corresponding to the second time constraint condition.
Illustratively, the predetermined function may include, but is not limited to, at least one of: unlocking (or activating) the corresponding virtual element in the game, applying (or assembling, using) the corresponding game element in the game, and promoting the attribute parameter of the corresponding game element in the game.
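Putting steps S501 to S503 together, the following non-limiting sketch (Python; the timer call and the `execute_predetermined_function` callback are stand-ins, not an interface defined by the application) executes the predetermined function immediately under a first time constraint condition and defers it by the trigger time under a second time constraint condition:

```python
import threading
from typing import Callable

def on_first_trigger_control_selected(guide_node: str,
                                      constraint: str,
                                      trigger_delay_s: float,
                                      execute_predetermined_function: Callable[[str], None]) -> None:
    """Handle selection of the first trigger control for the guide node.

    constraint is "first" or "second"; trigger_delay_s is the waiting time
    associated with a second time constraint condition.
    """
    if constraint == "first":
        # First time constraint: execute the predetermined function right away.
        execute_predetermined_function(guide_node)
    else:
        # Second time constraint: execute only after the trigger time has elapsed.
        threading.Timer(trigger_delay_s,
                        execute_predetermined_function,
                        args=(guide_node,)).start()

# Usage with a stub standing in for unlocking or upgrading a virtual element.
on_first_trigger_control_selected(
    "T3", "second", 1.5, lambda n: print(f"predetermined function executed for {n}"))
```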
Based on the interaction processing scheme of the embodiment of the present application, the user can be assisted in operating each node in the policy tree; that is, only the selection of the node is automated, and whether the node is triggered is still decided by the user.
Based on the same application concept, the embodiment of the present application further provides an interactive processing device in a game corresponding to the method provided by the above embodiment, and as the principle of solving the problem of the device in the embodiment of the present application is similar to the interactive processing method in the game in the above embodiment of the present application, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Fig. 12 is a schematic structural diagram of an interaction processing apparatus in a game according to an exemplary embodiment of the present application, and as shown in fig. 12, the interaction processing apparatus 200 in a game includes:
a receiving module 210, configured to receive a trigger operation performed by a user on a target node, where the target node is a node in a target policy tree in a game;
the conversion module 220 is used for responding to the trigger operation, determining a guide node of a target node, and entering a first operation interface aiming at the guide node;
and the display control module 230 displays a first trigger control for the guide node on the first operation interface, where the first trigger control is a control for executing a predetermined function for a virtual element corresponding to the guide node in the game.
In one possible implementation manner of the present application, the trigger operation includes a selection operation performed by a user on a second operation interface for the target node on a second trigger control, where the second trigger control is a control for performing a predetermined function on a virtual element corresponding to the target node in a game.
In one possible embodiment of the present application, the target node is a node in the target policy tree that satisfies the following condition: each node on a designated node path in the target strategy tree is triggered, the designated node path is a path from a father node of the target node to a root node, and the triggered node means that a predetermined function is executed aiming at a virtual element corresponding to the node in a game; the trigger condition corresponding to the target node belongs to the target constraint condition.
In one possible embodiment of the present application, the display control module 230 displays a tree structure interface for the target policy tree, in which a target node in the target policy tree is displayed; the receiving module 210 enters a second operation interface for the target node in response to the selection operation performed on the target node on the tree structure interface.
In one possible implementation of the present application, each node in the target policy tree has a corresponding constraint, wherein the transformation module 220 determines the leading node of the target node by: screening out a first candidate node from the nodes of the target strategy tree according to a target constraint condition; determining a boot order based on the first candidate node; determining a bootstrap node of the target node from the nodes of the target policy tree based on a bootstrap sequence, wherein the bootstrap node is a next node of the target node in the bootstrap sequence under the target policy tree.
In one possible implementation of the present application, the conversion module 220 determines the boot order based on the first candidate node by: determining a first sub-order of the first candidate node, the first sub-order being determined as a leading order.
In one possible implementation of the present application, the target constraint includes a first temporal constraint, wherein the transformation module 220 determines the first subsequence of the first candidate node by: determining a traversal order aiming at the target strategy tree according to a preset traversal rule; and traversing each first candidate node in the target strategy tree according to the determined traversal sequence to form a first subsequence corresponding to all the first candidate nodes, wherein the first candidate nodes are nodes of which the trigger conditions belong to first time constraint conditions in the target strategy tree.
In one possible embodiment of the present application, the first candidate node precedes other nodes in the boot order.
In one possible implementation of the present application, the conversion module 220 determines the boot order based on the first candidate node by: determining a first subsequence of a first candidate node and a second subsequence of a second candidate node, wherein the second candidate node is a node of a target strategy tree, and a trigger condition of the second candidate node belongs to a second time constraint condition; the boot-up sequence is built according to a first sub-sequence and a second sub-sequence, wherein the second sub-sequence follows the first sub-sequence.
In one possible implementation of the present application, the constraint includes a second temporal constraint, wherein the transformation module 220 determines the second subsequence of the second candidate node by: determining a traversal order aiming at the target strategy tree according to a preset traversal rule; and traversing each second candidate node in the target strategy tree according to the determined traversal sequence to form a second subsequence corresponding to all the second candidate nodes.
In one possible implementation of the present application, the conversion module 220 enters the first operation interface for the bootstrap node by: and automatically selecting the guide node to enter a first operation interface aiming at the guide node.
In one possible implementation of the present application, the conversion module 220 enters the first operation interface by: responding to the trigger operation, switching the interface to quit a second operation interface aiming at the target node, and displaying a tree structure interface; automatically selecting the guide node on the displayed tree structure interface to enter a first operation interface aiming at the guide node.
In one possible embodiment of the present application, the conversion module 220 is further configured to perform the following processes: after the interface is switched, controlling the tree structure interface to move on a screen so as to display a guide node in the target strategy tree in the screen; and/or after the guide node is automatically selected, creating a window, and displaying a first operation interface aiming at the guide node in the created window.
In one possible implementation manner of the present application, the receiving module 210 receives a selection operation of a first trigger control by a user; responding to the selection operation of a first trigger control, and determining a trigger condition corresponding to the guide node; and when the trigger condition is met, executing the predetermined function aiming at the virtual element corresponding to the guide node.
In a possible implementation manner of the present application, the guidance node is a node in the target policy tree whose trigger condition belongs to the first time constraint condition, and the first operation interface is a simplified interface that includes a first trigger control for the guidance node, where the conversion module 220 determines, in response to a selection operation on the first trigger control, a node next to the guidance node in the guidance order under the target policy tree, and enters a third operation interface for the node.
In a possible implementation manner of the present application, the guidance node is a node in the target policy tree where the trigger condition belongs to the second time constraint condition, the first operation interface is an editing interface including a first trigger control for the guidance node and a plurality of candidate virtual elements, wherein the receiving module 210 receives a selection operation of any one of the candidate virtual elements on the first operation interface, and the apparatus further includes: and the triggering module is used for responding to the selection operation of the first triggering control, determining any selected candidate virtual element as a virtual element corresponding to the guide node, and executing the predetermined function aiming at the virtual element corresponding to the guide node.
According to the scheme of the embodiment of the application, frequent operation of the user on each node can be avoided, the operation complexity of the user on each node is reduced, and the interaction efficiency is improved.
Referring to fig. 13, fig. 13 is a schematic structural diagram of an electronic device provided by an exemplary embodiment of the present application. As shown in fig. 13, the electronic device 300 includes a processor 310, a memory 320, and a bus 330.
The memory 320 stores machine-readable instructions executable by the processor 310, when the electronic device 300 runs, the processor 310 communicates with the memory 320 through the bus 330, and when the machine-readable instructions are executed by the processor 310, the steps of the interactive processing method in the game in any of the embodiments described above can be executed, specifically as follows:
receiving a trigger operation executed by a user on a target node, wherein the target node is one node in a target strategy tree in a game;
responding to the trigger operation, determining a guide node of a target node, and entering a first operation interface aiming at the guide node;
and displaying a first trigger control aiming at the guide node on the first operation interface, wherein the first trigger control is a control for executing a predetermined function aiming at a virtual element corresponding to the guide node in a game.
In one possible implementation manner of the present application, the trigger operation includes a selection operation performed by a user on a second operation interface for the target node on a second trigger control, where the second trigger control is a control for performing a predetermined function on a virtual element corresponding to the target node in a game.
In one possible embodiment of the present application, the target node is a node in the target policy tree that satisfies the following condition: each node on a designated node path in the target strategy tree is triggered, the designated node path is a path from a father node of the target node to a root node, and the triggered node means that a predetermined function is executed aiming at a virtual element corresponding to the node in a game; the trigger condition corresponding to the target node belongs to the target constraint condition.
In one possible embodiment of the present application, the processor 310 is further configured to perform the following: displaying a tree structure interface aiming at a target strategy tree, wherein target nodes in the target strategy tree are displayed in the tree structure interface; and responding to the selected operation executed on the target node on the tree structure interface, and entering a second operation interface aiming at the target node.
In one possible embodiment of the present application, each node in the target policy tree has a corresponding constraint, wherein the processor 310 is further configured to perform the following processing to determine a leading node of the target node: screening out a first candidate node from the nodes of the target strategy tree according to a target constraint condition; determining a boot-up order based on the first candidate node; determining a bootstrap node of the target node from the nodes of the target policy tree based on a bootstrap sequence, wherein the bootstrap node is a next node of the target node in the bootstrap sequence under the target policy tree.
In one possible embodiment of the present application, the processor 310 is further configured to perform the following processing to determine a boot order based on the first candidate node: determining a first sub-order of the first candidate node, the first sub-order being determined as a leading order.
In one possible embodiment of the present application, the target constraint comprises a first temporal constraint, wherein the processor 310 is further configured to perform the following processing to determine a first sub-order of the first candidate node: determining a traversal order aiming at the target strategy tree according to a preset traversal rule; traversing each first candidate node in the target strategy tree according to the determined traversal order to form a first subsequence corresponding to all the first candidate nodes, wherein the first candidate nodes are nodes of which the trigger conditions belong to first time constraint conditions in the target strategy tree.
In one possible embodiment of the present application, the first candidate node precedes other nodes in the boot order.
In one possible embodiment of the present application, the processor 310 is further configured to perform the following processing to determine a boot order based on the first candidate node: determining a first subsequence of a first candidate node and a second subsequence of a second candidate node, wherein the second candidate node is a node of a target strategy tree, and a trigger condition of the second candidate node belongs to a second time constraint condition; the boot-up sequence is built according to a first sub-sequence and a second sub-sequence, wherein the second sub-sequence follows the first sub-sequence.
In one possible embodiment of the present application, the constraint includes a second time constraint, wherein the processor 310 is further configured to perform the following process to determine a second subsequence of the second candidate node: determining a traversal order aiming at the target strategy tree according to a preset traversal rule; and traversing each second candidate node in the target strategy tree according to the determined traversal sequence to form a second subsequence corresponding to all the second candidate nodes.
In one possible embodiment of the present application, the processor 310 is further configured to perform the following processing to enter the first operation interface for the guidance node: and automatically selecting the guide node to enter a first operation interface aiming at the guide node.
In one possible embodiment of the present application, the processor 310 is further configured to enter the first operation interface by: responding to the trigger operation, switching the interface to quit a second operation interface aiming at the target node, and displaying a tree structure interface; automatically selecting the guide node on the displayed tree structure interface to enter a first operation interface aiming at the guide node.
In one possible embodiment of the present application, the processor 310 is further configured to perform the following: after the interface is switched, controlling the tree structure interface to move on a screen so as to display a guide node in the target strategy tree in the screen; and/or after the guide node is automatically selected, creating a window, and displaying a first operation interface aiming at the guide node in the created window.
In one possible embodiment of the present application, the processor 310 is further configured to perform the following: receiving a selection operation of a user on a first trigger control; responding to the selection operation of a first trigger control, and determining a trigger condition corresponding to the guide node; and when the trigger condition is met, executing the predetermined function aiming at the virtual element corresponding to the guide node.
In a possible embodiment of the present application, the guidance node is a node in the target policy tree where the trigger condition belongs to a first time constraint condition, and the first operation interface is a simplified interface including a first trigger control for the guidance node, where the processor 310 is further configured to perform the following processing: and responding to the selection operation of the first trigger control, determining a node next to the guide node in the guide sequence under the target strategy tree, and entering a third operation interface aiming at the node.
In a possible embodiment of the present application, the guidance node is a node in the target policy tree where the trigger condition belongs to the second time constraint, and the first operation interface is an editing interface including a first trigger control for the guidance node and a plurality of candidate virtual elements, where the processor 310 is further configured to perform the following processing: receiving a selection operation of any candidate virtual element in the plurality of candidate virtual elements on a first operation interface; in response to the selection operation of the first trigger control, any selected candidate virtual element is determined as a virtual element corresponding to the guide node, so that the predetermined function is executed for the virtual element corresponding to the guide node.
According to the scheme of the embodiment of the application, frequent operation of the user on each node can be avoided, the operation complexity of the user on each node is reduced, and the interaction efficiency is improved.
An embodiment of the present application further provides a computer-readable storage medium, where the storage medium stores a computer program, and when the computer program is executed by a processor, the computer program may perform the steps of the interaction processing method in the game in any of the above embodiments, specifically as follows:
receiving a trigger operation executed by a user on a target node, wherein the target node is one node in a target strategy tree in a game;
responding to the trigger operation, determining a guide node of a target node, and entering a first operation interface aiming at the guide node;
and displaying a first trigger control aiming at the guide node on the first operation interface, wherein the first trigger control is a control for executing a predetermined function aiming at a virtual element corresponding to the guide node in a game.
In one possible implementation manner of the present application, the trigger operation includes a selection operation performed by a user on a second operation interface for the target node on a second trigger control, where the second trigger control is a control for performing a predetermined function on a virtual element corresponding to the target node in a game.
In one possible implementation manner of the present application, the target node is a node in the target policy tree that satisfies the following condition: each node on a designated node path in the target strategy tree is triggered, the designated node path is a path from a father node of the target node to a root node, and the triggered node means that a predetermined function is executed aiming at a virtual element corresponding to the node in a game; the trigger condition corresponding to the target node belongs to the target constraint condition.
In one possible embodiment of the application, the processor is further configured to perform the following: displaying a tree structure interface aiming at a target strategy tree, wherein target nodes in the target strategy tree are displayed in the tree structure interface; and responding to the selected operation executed on the target node on the tree structure interface, and entering a second operation interface aiming at the target node.
In one possible embodiment of the present application, each node in the target policy tree has a corresponding constraint, and the processor is further configured to perform the following processing to determine a leading node of the target node: screening out a first candidate node from the nodes of the target strategy tree according to a target constraint condition; determining a boot order based on the first candidate node; determining a bootstrap node of the target node from the nodes of the target policy tree based on a bootstrap sequence, wherein the bootstrap node is a next node of the target node in the bootstrap sequence under the target policy tree.
In one possible embodiment of the present application, the processor is further configured to perform the following process to determine a boot-up order based on the first candidate node: determining a first sub-order of the first candidate node, the first sub-order being determined as a leading order.
In one possible embodiment of the present application, the target constraint comprises a first temporal constraint, wherein the processor is further configured to perform the following processing to determine a first sub-order of the first candidate node: determining a traversal order aiming at the target strategy tree according to a preset traversal rule; traversing each first candidate node in the target strategy tree according to the determined traversal order to form a first subsequence corresponding to all the first candidate nodes, wherein the first candidate nodes are nodes of which the trigger conditions belong to first time constraint conditions in the target strategy tree.
In one possible embodiment of the present application, the first candidate node precedes other nodes in the boot order.
In one possible embodiment of the present application, the processor is further configured to perform the following processing to determine a boot order based on the first candidate node: determining a first subsequence of a first candidate node and a second subsequence of a second candidate node, wherein the second candidate node is a node of a target strategy tree, and a trigger condition of the second candidate node belongs to a second time constraint condition; the boot-sequence is built according to a first sub-sequence and a second sub-sequence, wherein the second sub-sequence is located after the first sub-sequence.
In one possible embodiment of the present application, the constraint includes a second time constraint, wherein the processor is further configured to perform the following process to determine a second sub-order of the second candidate node: determining a traversal order aiming at the target strategy tree according to a preset traversal rule; and traversing each second candidate node in the target strategy tree according to the determined traversal sequence to form a second subsequence corresponding to all the second candidate nodes.
In one possible embodiment of the present application, the processor is further configured to perform the following processing to enter a first operation interface for the bootstrap node: and automatically selecting the guide node to enter a first operation interface aiming at the guide node.
In one possible embodiment of the present application, the processor is further configured to enter the first operation interface by: responding to the trigger operation, switching the interface to quit a second operation interface aiming at the target node, and displaying a tree structure interface; automatically selecting the guide node on the displayed tree structure interface to enter a first operation interface aiming at the guide node.
In one possible embodiment of the application, the processor is further configured to perform the following: after the interface is switched, controlling the tree structure interface to move on a screen so as to display a guide node in the target strategy tree in the screen; and/or after the guide node is automatically selected, creating a window, and displaying a first operation interface aiming at the guide node in the created window.
In one possible embodiment of the application, the processor is further configured to perform the following: receiving a selection operation of a user on a first trigger control; responding to the selection operation of a first trigger control, and determining a trigger condition corresponding to the guide node; and when the trigger condition is met, executing the predetermined function aiming at the virtual element corresponding to the guide node.
In a possible implementation manner of the present application, the guidance node is a node in the target policy tree where the trigger condition belongs to a first time constraint condition, and the first operation interface is a simplified interface including a first trigger control for the guidance node, where the processor is further configured to perform the following processing: and responding to the selection operation of the first trigger control, determining a node next to the guide node in the guide sequence under the target strategy tree, and entering a third operation interface aiming at the node.
In a possible implementation manner of the present application, the guidance node is a node in the target policy tree where the trigger condition belongs to the second time constraint condition, and the first operation interface is an editing interface including a first trigger control for the guidance node and a plurality of candidate virtual elements, where the processor is further configured to perform the following processing: receiving a selection operation of any candidate virtual element in the plurality of candidate virtual elements on a first operation interface; in response to the selection operation of the first trigger control, any selected candidate virtual element is determined as a virtual element corresponding to the guide node, so that the predetermined function is executed for the virtual element corresponding to the guide node.
According to the scheme of the embodiment of the application, frequent operation of the user on each node can be avoided, the operation complexity of the user on each node is reduced, and the interaction efficiency is improved.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in software functional units and sold or used as a stand-alone product, may be stored in a non-transitory computer-readable storage medium executable by a processor. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (19)
1. An interaction processing method in a game, the method comprising:
receiving a trigger operation executed by a user on a target node, wherein the target node is one node in a target strategy tree in a game;
responding to the trigger operation, determining a guide node of the target node, and entering a first operation interface aiming at the guide node;
and displaying a first trigger control aiming at the guide node on the first operation interface, wherein the first trigger control is a control for executing a predetermined function aiming at a virtual element corresponding to the guide node in a game.
2. The method according to claim 1, wherein the trigger operation comprises a selection operation performed by a user on a second operation interface for the target node on a second trigger control, and the second trigger control is a control for performing a predetermined function on a virtual element corresponding to the target node in the game.
3. The method of claim 1 or 2, wherein the target node is a node in the target policy tree that satisfies the following condition:
each node on a designated node path in the target strategy tree is triggered, the designated node path is a path from a father node of the target node to a root node, and the triggered node means that a predetermined function is executed aiming at a virtual element corresponding to the node in a game;
the trigger condition corresponding to the target node belongs to the target constraint condition.
4. The method of claim 2, further comprising:
displaying a tree structure interface aiming at a target strategy tree, wherein target nodes in the target strategy tree are displayed in the tree structure interface;
and responding to the selected operation executed on the target node on the tree structure interface, and entering a second operation interface aiming at the target node.
5. The method of claim 1, wherein each node in the target policy tree has a corresponding constraint,
wherein the step of determining a guide node of the target node comprises:
screening out a first candidate node from the nodes of the target strategy tree according to a target constraint condition;
determining a guide order based on the first candidate node;
determining the guide node of the target node from the nodes of the target policy tree based on the guide order, wherein the guide node is a node next to the target node in the guide order under the target policy tree.
6. The method of claim 5, wherein determining a guide order based on the first candidate node comprises:
determining a first sub-order of the first candidate node;
determining the first sub-order as the guide order.
7. The method of claim 6, wherein the target constraint comprises a first time constraint,
wherein the step of determining a first sub-order of the first candidate node comprises:
determining a traversal order aiming at the target strategy tree according to a preset traversal rule;
traversing each first candidate node in the target strategy tree according to the determined traversal order to form a first sub-order corresponding to all the first candidate nodes, wherein the first candidate nodes are nodes of which the trigger conditions belong to first time constraint conditions in the target strategy tree.
8. The method of claim 5, wherein the first candidate node is positioned prior to other nodes in the guide order.
9. The method of claim 8, wherein determining a guide order based on the first candidate node comprises:
determining a first sub-order of a first candidate node and a second sub-order of a second candidate node, wherein the second candidate node is a node of the target strategy tree, and a trigger condition of the second candidate node belongs to a second time constraint condition;
constructing the guide order according to the first sub-order and the second sub-order, wherein the second sub-order is located after the first sub-order.
10. The method of claim 9, wherein the constraint comprises a second time constraint,
wherein the step of determining a second sub-order of the second candidate node comprises:
determining a traversal order aiming at the target strategy tree according to a preset traversal rule;
and traversing each second candidate node in the target strategy tree according to the determined traversal order to form a second sub-order corresponding to all the second candidate nodes.
11. The method of claim 1, wherein the step of entering a first operator interface for the bootstrap node comprises:
and automatically selecting the guide node to enter a first operation interface aiming at the guide node.
12. The method of claim 4, wherein the first operator interface is accessed by:
responding to the trigger operation, switching the interface to quit a second operation interface aiming at the target node, and displaying a tree structure interface;
automatically selecting the guide node on the displayed tree structure interface to enter a first operation interface aiming at the guide node.
13. The method of claim 12, further comprising:
after the interface is switched, controlling the tree structure interface to move on a screen so as to display a guide node in the target strategy tree in the screen;
and/or after the guide node is automatically selected, creating a window, and displaying a first operation interface aiming at the guide node in the created window.
14. The method of claim 1, further comprising:
receiving a selection operation of a user on a first trigger control;
responding to the selection operation of a first trigger control, and determining a trigger condition corresponding to the guide node;
and when the trigger condition is met, executing the predetermined function aiming at the virtual element corresponding to the guide node.
15. The method of claim 1, wherein the guide node is a node in the target policy tree where the trigger condition belongs to a first time constraint, wherein the first operation interface is a simplified interface comprising a first trigger control for the guide node,
wherein, still include:
and in response to the selection operation of the first trigger control, determining a node next to the guide node in the guide order under the target strategy tree, and entering a third operation interface aiming at the node.
16. The method according to claim 1, wherein the guide node is a node in the target policy tree whose trigger condition belongs to a second time constraint, the first operation interface is an editing interface including a first trigger control for the guide node and a plurality of candidate virtual elements,
wherein, still include:
receiving a selection operation of any candidate virtual element in the plurality of candidate virtual elements on a first operation interface;
in response to the selection operation of the first trigger control, any selected candidate virtual element is determined as a virtual element corresponding to the guide node, so that the predetermined function is executed for the virtual element corresponding to the guide node.
17. An interaction processing apparatus in a game, the apparatus comprising:
the receiving module is used for receiving a trigger operation executed by a user on a target node, wherein the target node is one node in a target strategy tree in a game;
the conversion module is used for responding to the trigger operation, determining a guide node of the target node and entering a first operation interface aiming at the guide node;
and the display control module is used for displaying a first trigger control aiming at the guide node on the first operation interface, and the first trigger control is a control used for executing a predetermined function aiming at a virtual element corresponding to the guide node in a game.
18. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, wherein, when the electronic device runs, the processor and the storage medium communicate via the bus, and the processor executes the machine-readable instructions to perform the steps of the method according to any one of claims 1 to 16.
19. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method according to any one of claims 1 to 16.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210986090.0A CN115328354B (en) | 2022-08-16 | 2022-08-16 | Interactive processing method and device in game, electronic equipment and storage medium |
PCT/CN2023/079118 WO2024036915A1 (en) | 2022-08-16 | 2023-03-01 | Interaction processing method and apparatus in game, and electronic device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210986090.0A CN115328354B (en) | 2022-08-16 | 2022-08-16 | Interactive processing method and device in game, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115328354A (en) | 2022-11-11 |
CN115328354B (en) | 2024-05-10 |
Family
ID=83924260
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210986090.0A Active CN115328354B (en) | 2022-08-16 | 2022-08-16 | Interactive processing method and device in game, electronic equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115328354B (en) |
WO (1) | WO2024036915A1 (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110221733A (en) * | 2018-03-01 | 2019-09-10 | 阿里巴巴集团控股有限公司 | Methods of exhibiting and device |
CN114247141B (en) * | 2021-11-09 | 2023-07-25 | 腾讯科技(深圳)有限公司 | Method, device, equipment, medium and program product for guiding tasks in virtual scene |
CN115328354B (en) * | 2022-08-16 | 2024-05-10 | 网易(杭州)网络有限公司 | Interactive processing method and device in game, electronic equipment and storage medium |
- 2022-08-16: CN application CN202210986090.0A, granted as CN115328354B (status: active)
- 2023-03-01: PCT application PCT/CN2023/079118, published as WO2024036915A1 (status: unknown)
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110021263A1 (en) * | 2009-07-24 | 2011-01-27 | Wms Gaming, Inc. | Controlling event-driven behavior of wagering game objects |
KR20140023638A (en) * | 2012-08-16 | 2014-02-27 | (주)네오위즈게임즈 | Game guide method, server performing the same and storage media storing the same |
CN108664287A (en) * | 2018-05-11 | 2018-10-16 | 腾讯科技(深圳)有限公司 | Export method, apparatus, terminal and the storage medium of operation guide |
US20200097848A1 (en) * | 2018-09-24 | 2020-03-26 | International Business Machines Corporation | Stochastic control with a quantum computer |
CN109876445A (en) * | 2019-01-11 | 2019-06-14 | 珠海金山网络游戏科技有限公司 | A kind of high decoupling bootstrap technique and system of Behavior-based control tree |
CN110478893A (en) * | 2019-08-23 | 2019-11-22 | 网易(杭州)网络有限公司 | Game events execute method and device |
CN111111202A (en) * | 2019-12-26 | 2020-05-08 | 北京像素软件科技股份有限公司 | Game AI behavior logic control method and system |
CN111481937A (en) * | 2020-04-09 | 2020-08-04 | 网易(杭州)网络有限公司 | Game task testing method and device, testing terminal and server |
WO2022041663A1 (en) * | 2020-08-31 | 2022-03-03 | 网易(杭州)网络有限公司 | Method for recommending and purchasing virtual prop of game, and electronic device |
CN112245928A (en) * | 2020-10-23 | 2021-01-22 | 网易(杭州)网络有限公司 | Guiding method and device in game, electronic equipment and storage medium |
CN113926203A (en) * | 2021-09-29 | 2022-01-14 | 杭州电魂网络科技股份有限公司 | Game item package setting method and system based on frequency association rule |
CN113886393A (en) * | 2021-10-12 | 2022-01-04 | 网易(杭州)网络有限公司 | Data processing method, data processing apparatus, storage medium, and electronic apparatus |
CN114011068A (en) * | 2021-11-25 | 2022-02-08 | 腾讯科技(深圳)有限公司 | Method and related device for processing virtual prop |
CN114404984A (en) * | 2021-12-30 | 2022-04-29 | 北京像素软件科技股份有限公司 | Data processing method and device for game scene, computer equipment and medium |
CN114470791A (en) * | 2022-02-14 | 2022-05-13 | 腾讯科技(深圳)有限公司 | Game item recommendation method and device, computer equipment, storage medium and product |
CN114797111A (en) * | 2022-04-24 | 2022-07-29 | 网易(杭州)网络有限公司 | Linkage triggering method and device for secret room office, terminal and storage medium |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024036915A1 (en) * | 2022-08-16 | 2024-02-22 | 网易(杭州)网络有限公司 | Interaction processing method and apparatus in game, and electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN115328354B (en) | 2024-05-10 |
WO2024036915A1 (en) | 2024-02-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5887458B1 (en) | A game system for searching for a route of a non-player character based on a player's movement history | |
US10010793B2 (en) | Techniques for improved user interface helping super guides | |
KR102258278B1 (en) | Seasonal reward distribution system | |
KR102610422B1 (en) | Method and apparatus, device, and storage medium for processing avatar usage data | |
JP7447299B2 (en) | Adaptive display method and device for virtual scenes, electronic equipment, and computer program | |
US11305193B2 (en) | Systems and methods for multi-user editing of virtual content | |
CN112121417B (en) | Event processing method, device, equipment and storage medium in virtual scene | |
US11925872B2 (en) | Dynamic modifications of single player and multiplayer mode in a video game | |
CN105630160A (en) | Virtual reality using interface system | |
Hu et al. | Deep learning applications in games: a survey from a data perspective | |
CN115328354B (en) | Interactive processing method and device in game, electronic equipment and storage medium | |
WO2023061133A1 (en) | Virtual scene display method and apparatus, device, and storage medium | |
JP6713525B2 (en) | Information processing system, information processing apparatus, information processing program, and information processing method | |
KR20240033087A (en) | Control methods, devices, devices, storage media and program products of opening operations in hypothetical scenarios | |
CN111569428A (en) | Information processing method and device in game | |
Aliaga et al. | Level building sidekick: an AI-assisted level editor package for unity | |
CN112800252B (en) | Method, device, equipment and storage medium for playing media files in virtual scene | |
Riaz et al. | User Interface Designing Principles for Real-time Games Strategies | |
US20240375002A1 (en) | Method and apparatus for displaying game skill cooldown prompt in virtual scene, device, and medium | |
CN118384504A (en) | Game program generation method and device and electronic equipment | |
AUXTERO | GAME ENVIRONMENT DESIGN CREATOR USING ARTIFICIAL INTELLIGENCE PROCEDURAL GENERATION | |
CN117726728A (en) | Avatar generation method, device, electronic equipment and storage medium | |
CN116764214A (en) | Virtual prop processing method, device, equipment, storage medium and program product | |
Tolinsson et al. | To make sense of dungeons | |
CN114210046A (en) | Virtual skill control method, device, equipment, storage medium and program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||