CN118161857A - Task display method, device, storage medium and equipment - Google Patents
- Publication number
- CN118161857A (application number CN202211575273.XA)
- Authority
- CN
- China
- Prior art keywords
- task
- target
- determining
- historical
- target task
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
- A63F13/56—Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/45—Controlling the progress of the video game
- A63F13/47—Controlling the progress of the video game involving branching, e.g. choosing one of several possible scenarios at a given point in time
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application discloses a task display method, device, storage medium, and equipment, wherein the method comprises the following steps: determining a first position from a plurality of candidate positions within a target area; determining a target task type corresponding to the target area according to historical task completion data; determining a target task corresponding to the target task type; and displaying a virtual prop of the target task at the first position. The method and the device can flexibly set the first position for displaying the target task and can determine the target task according to the historical task completion data, so that a user enjoys a fresh task experience each time the user enters the same task checkpoint to execute a task, and the replayability of tasks is improved.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a task display method, device, storage medium, and apparatus.
Background
With the rapid development of computer technology and the popularization of networks, the variety of tasks that people can perform through terminal devices is becoming increasingly rich, such as game tasks, teaching tasks, and training tasks. However, current task presentation modes are relatively fixed and monotonous, which causes tasks to lose their freshness when replayed. How to reasonably determine the target task so as to improve replayability has therefore become one of the main research subjects.
Disclosure of Invention
The embodiments of the application provide a task display method, device, storage medium, and equipment, which can flexibly set a first position for displaying a target task and can determine the target task according to historical task completion data, so that a user enjoys a fresh task experience every time the user enters the same task checkpoint to execute a task, and the replayability of tasks is improved.
In one aspect, an embodiment of the present application provides a task display method, applied to a terminal device, where the method includes:
Determining a first location from a plurality of candidate locations within the target area;
determining a target task type corresponding to the target area according to the historical task completion data;
Determining a target task corresponding to the target task type;
and displaying the virtual prop of the target task at the first position.
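The four claimed steps can be sketched as follows. This is a minimal illustration, not the patent's implementation: all function and field names are invented, and the per-position random weights described in the later embodiments are omitted here for brevity (an unweighted draw is used instead).

```python
import random

def show_tasks(region, completed, task_pool):
    """Minimal sketch of the four claimed steps; all names are illustrative.

    region: {"task_count": X, "candidates": [position, ...]}
    completed: set of historical task types already finished
    task_pool: {task_type: [candidate task, ...]}
    """
    x = region["task_count"]
    # Determine X target task types: random types excluding completed history.
    fresh = [t for t in task_pool if t not in completed]
    target_types = random.sample(fresh, x)
    # Determine a distinct first position for each target task type
    # (the claimed per-position random weights are omitted in this sketch).
    firsts = random.sample(region["candidates"], x)
    # Determine one concrete target task per type; pair it with its first
    # position so the caller can display the task's virtual prop there.
    return [(t, random.choice(task_pool[t]), p)
            for t, p in zip(target_types, firsts)]
```

A caller would then render each returned prop at its position in the virtual scene.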
In some embodiments, the determining, according to the historical task completion data, the target task type corresponding to the target area includes:
acquiring the number of tasks corresponding to the target area as X, wherein X is a positive integer;
And determining X target task types from a task pool according to the historical task completion data, wherein the X target task types are X random task types except the completed historical task types in the historical task completion data in the task pool.
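The exclusion step above can be expressed compactly. The sketch below assumes the task pool is iterable by task type and that at least X types remain unplayed; the function name is illustrative.

```python
import random

def pick_target_types(task_pool, history_types, x):
    # X random target task types from the pool, excluding the completed
    # historical task types recorded in the historical completion data.
    fresh = [t for t in task_pool if t not in history_types]
    return random.sample(fresh, x)
```

`random.sample` draws without replacement, so the X returned types are guaranteed distinct.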
In some embodiments, the determining a target task corresponding to the target task type includes:
And determining a target task corresponding to each target task type from Y candidate tasks corresponding to each target task type according to the historical task completion data, wherein Y is a positive integer, and the target task is a random task except the completed historical task in the historical task completion data in the Y candidate tasks corresponding to each target task type.
In some embodiments, the determining the first location from the plurality of candidate locations within the target area includes:
And determining a first position corresponding to each of the X target task types according to the plurality of candidate positions within the target area, the position random weights of the plurality of candidate positions, and the task number, wherein the first positions corresponding to the X target task types are all different from one another.
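The constraint that every target task type receives a different position, while still honouring the per-position random weights, amounts to a weighted draw without replacement. A possible sketch (names are illustrative):

```python
import random

def assign_first_positions(position_weights, x):
    """One distinct first position per target task type: a weighted draw
    without replacement over the region's candidate positions."""
    remaining = dict(position_weights)  # position -> position random weight
    firsts = []
    for _ in range(x):
        positions = list(remaining)
        weights = [remaining[p] for p in positions]
        pick = random.choices(positions, weights=weights, k=1)[0]
        firsts.append(pick)
        del remaining[pick]  # guarantees the X chosen positions all differ
    return firsts
```

Removing each chosen position before the next draw is what enforces the "first positions are all different" condition.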
In some embodiments, the displaying the virtual prop of the target task at the first location includes:
And displaying the virtual prop of the corresponding target task at a first position corresponding to each target task type in the target area in the current virtual scene.
In some embodiments, the target region is one of a plurality of regions in the current virtual scene.
In some embodiments, the historical task completion data is historical task completion data corresponding to a current user and/or a plurality of users within the target area.
In some embodiments, the terminal device comprises any one of an extended reality device, a virtual reality device, an augmented reality device, and a mixed reality device.
In another aspect, an embodiment of the present application provides a task display device, applied to a terminal device, where the device includes:
a first determining unit configured to determine a first position from among a plurality of candidate positions within a target area;
The second determining unit is used for determining a target task type corresponding to the target area according to the historical task completion data;
The third determining unit is used for determining a target task corresponding to the target task type;
And the display unit is used for displaying the virtual prop of the target task at the first position.
In another aspect, embodiments of the present application provide a computer readable storage medium storing a computer program adapted to be loaded by a processor to perform the task display method according to any of the embodiments above.
In another aspect, an embodiment of the present application provides a terminal device, where the terminal device includes a processor and a memory, where the memory stores a computer program, and the processor is configured to execute the task display method according to any one of the embodiments above by calling the computer program stored in the memory.
The embodiment of the application determines a first position from a plurality of candidate positions within a target area; determines a target task type corresponding to the target area according to historical task completion data; determines a target task corresponding to the target task type; and displays a virtual prop of the target task at the first position. The embodiment of the application can flexibly set the first position for displaying the target task and can determine the target task according to the historical task completion data, so that a user enjoys a fresh task experience every time the user enters the same task checkpoint to execute a task, and the replayability of tasks is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a task display method according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a first application scenario of a task display method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a second application scenario of the task display method according to the embodiment of the present application.
Fig. 4 is a schematic structural diagram of a task display device according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a first structure of a terminal device according to an embodiment of the present application.
Fig. 6 is a second schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
The embodiments of the application provide a task display method, a task display device, a computer-readable storage medium, and a device. Specifically, the task display method of the embodiments of the present application may be executed by a terminal device or by a server. The terminal device may be a head-mounted device, an extended reality device, a virtual reality device, an augmented reality device, a mixed reality device, a smart phone, a tablet computer, a notebook computer, a touch screen, a game machine, a personal computer (PC), a personal digital assistant (PDA), or the like, and the terminal device may further include a client, which may be a task application client, a browser client carrying a task program, or an instant messaging client. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms.
For example, when the task display method is run on the terminal device, the terminal device stores a task application and is used to present a virtual scene picture. The terminal device is used for interacting with a user through a graphical user interface, for example, the terminal device downloads and installs a task application program and operates the task application program. The way in which the terminal device presents the graphical user interface to the user may include a variety of ways, for example, the graphical user interface may be rendered for display on a display screen of the terminal device, or presented by holographic projection. For example, the terminal device may include a touch display screen for presenting a graphical user interface including a virtual scene screen and receiving operation instructions generated by a user acting on the graphical user interface, and a processor for running the task, generating the graphical user interface, responding to the operation instructions, and controlling the display of the graphical user interface on the touch display screen.
For example, when the task display method is run on a server, it may be a cloud task such as a cloud game. Cloud tasks refer to task modes based on cloud computing. In the running mode of the cloud task, a running main body of the task application program and a virtual scene picture presentation main body are separated, and the storage and the running of the task display method are completed on a cloud task server. The virtual scene image presentation is completed at a cloud task client, which is mainly used for receiving and sending task data and presenting the virtual scene image, for example, the cloud task client may be a display device with a data transmission function, such as a head-mounted device, a mobile terminal, a television, a computer, a palm computer, a personal digital assistant, etc., near a user side, but the computer device performing task data processing is a cloud task server of a cloud end. When a task is carried out, a user operates the cloud task client to send an operation instruction to the cloud task server, the cloud task server runs the task according to the operation instruction, data such as a virtual scene picture and the like are encoded and compressed, the data are returned to the cloud task client through a network, and finally, the cloud task client decodes and outputs the virtual scene picture.
The embodiments of the application can be applied to various application scenes such as extended reality (XR), virtual reality (VR), augmented reality (AR), and mixed reality (MR).
The embodiment of the application can be applied to application scenes such as games, offices, teaching, training and the like.
First, partial terms or terminology appearing in the course of describing the embodiments of the application are explained as follows:
A virtual scene is a scene that an application program displays (or provides) when running on a terminal or server. Optionally, the virtual scene is a simulation of the real world, a semi-simulated semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene may be either a two-dimensional or a three-dimensional virtual scene, and the virtual environment may be sky, land, ocean, and the like, where the land includes environmental elements such as deserts and cities. The virtual scene is the scene in which virtual objects, such as those controlled by the user, carry out the complete game logic.
A virtual object refers to a dynamic object that can be controlled in a virtual scene. Optionally, the dynamic object may be a virtual character, a virtual animal, a cartoon character, or the like. The virtual object may be a character controlled by a player through an input device, an artificial intelligence (AI) trained for combat in the virtual environment, or a non-player character (NPC) set in the virtual scene battle. Optionally, the virtual object is a virtual character competing in the virtual scene. Optionally, the number of virtual objects in the virtual scene battle is preset or dynamically determined according to the number of clients joining the battle, which is not limited by the embodiments of the present application. In one possible implementation, a user can control a virtual object to move in the virtual scene, e.g., control it to run, jump, or crawl, and can also control it to fight other virtual objects using the skills, virtual props, and the like provided by the application.
Extended reality (XR) is an umbrella term covering virtual reality (VR), augmented reality (AR), and mixed reality (MR), representing an environment in which the virtual world is connected to the real world and with which a user can interact in real time.
Virtual reality (VR) is a technology for creating and experiencing a virtual world, generating a virtual environment by computation. It is a form of multi-source information simulation (the virtual reality mentioned herein at least comprises visual perception, and may further comprise auditory, tactile, and motion perception, and even gustatory and olfactory perception), realizes a fused, interactive three-dimensional dynamic view and simulation of entity behavior of the virtual environment, and immerses the user in a simulated virtual reality environment, enabling applications in various virtual environments such as maps, games, videos, education, medical treatment, simulation, collaborative training, sales, manufacturing assistance, and maintenance and repair.
Augmented reality (AR) is a technique of calculating the camera pose parameters of a camera in the real world (or three-dimensional world) in real time while the camera captures images, and adding virtual elements to the captured image according to those camera pose parameters. Virtual elements include, but are not limited to: images, videos, and three-dimensional models. The goal of AR technology is to superimpose the virtual world onto the real world on the screen for interaction.
Mixed Reality (MR) integrates computer-created sensory input (e.g., virtual objects) with sensory input from a physical scenery or a representation thereof into a simulated scenery, and in some MR sceneries, the computer-created sensory input may be adapted to changes in sensory input from the physical scenery. In addition, some electronic systems for rendering MR scenes may monitor orientation and/or position relative to the physical scene to enable virtual objects to interact with real objects (i.e., physical elements from the physical scene or representations thereof). For example, the system may monitor movement such that the virtual plants appear to be stationary relative to the physical building.
Augmented virtuality (AV): an AV scenery refers to a simulated scenery in which a computer-created or virtual scenery incorporates at least one sensory input from a physical scenery. The one or more sensory inputs from the physical scenery may be representations of at least one feature of the physical scenery. For example, a virtual object may present the color of a physical element captured by one or more imaging sensors. As another example, a virtual object may exhibit characteristics consistent with actual weather conditions in the physical scenery, as identified via weather-related imaging sensors and/or online weather data. In another example, an augmented virtuality forest may have virtual trees and structures, while an animal may have features accurately reproduced from images taken of a physical animal.
The virtual field of view (FOV) is the area of the virtual environment that a user can perceive through the lens of a virtual reality device.
The virtual reality device, i.e., the terminal for realizing the virtual reality effect, may be provided in the form of glasses, a head-mounted display (HMD), or contact lenses for realizing visual perception and other forms of perception; the form of the virtual reality device is not limited to these, and it may be further miniaturized or enlarged as needed.
The virtual reality device described in the embodiments of the present application may include, but is not limited to, the following types:
A computer-side virtual reality (PCVR) device uses the PC side to perform the calculations related to virtual reality functions and to output data; the externally connected PCVR device uses the data output by the PC side to realize the virtual reality effect.
The mobile virtual reality device supports setting up a mobile terminal (such as a smart phone) in various manners (such as a head-mounted display provided with a special card slot), performing related calculation of a virtual reality function by the mobile terminal through connection with the mobile terminal in a wired or wireless manner, and outputting data to the mobile virtual reality device, for example, watching a virtual reality video through an APP of the mobile terminal.
The integrated virtual reality device has its own processor for performing the calculations related to virtual reality functions, and thus has independent virtual reality input and output capabilities; it does not need to be connected to a PC or a mobile terminal, and offers a high degree of freedom in use.
The following will describe in detail. It should be noted that the following description order of embodiments is not a limitation of the priority order of embodiments.
The application provides a task display method, which can be executed by a terminal or a server or can be executed by the terminal and the server together; the embodiment of the present application will be described by taking a task display method executed by a terminal (terminal device) as an example.
Referring to fig. 1 to 3, fig. 1 is a flow chart of a task display method according to an embodiment of the present application, and fig. 2 and fig. 3 are application scenario diagrams of the task display method according to an embodiment of the present application. The method is applied to a terminal device, which may include any one of an extended reality device, a virtual reality device, an augmented reality device, and a mixed reality device. The method comprises the following steps:
step 110, determining a first location from a plurality of candidate locations within the target area.
In some embodiments, the method further comprises: the target area is one of a plurality of areas in the current virtual scene.
For example, a virtual scene picture may be provided by the terminal device, where the displayed picture may be a three-dimensional virtual scene picture in a game scene. The three-dimensional virtual scene is a virtual environment provided when the application program runs on the terminal device, and may be a simulation of the real world, a semi-simulated semi-fictional scene, or a purely fictional scene. The displayed scene picture is the picture presented when a virtual object observes the three-dimensional virtual scene.
For example, the virtual scene screen may include a virtual scene, and a plurality of regions may be provided in the virtual scene.
As shown in fig. 2, a plurality of regions, such as a rectangular region as shown in fig. 2, of a region a, a region B, a region C, and the like, are included in the current virtual scene. A plurality of candidate locations may be provided within each region, such as the circular location points shown in fig. 2.
For example, the target area is determined according to the current task checkpoint; for instance, if the current task checkpoint corresponds to area A, area A may be determined to be the target area.
For example, if the current task checkpoint corresponds to a plurality of regions, the target region may be determined according to the position of the virtual object controlled by the current user in the current task checkpoint, e.g., if the virtual object controlled by the current user is located in the B region in the current task checkpoint, the B region may be determined to be the target region.
For example, the virtual scene may include, but is not limited to, the following: game virtual scenes, teaching virtual scenes, training virtual scenes, and the like.
For example, a virtual scene screen including a target area may be displayed by the terminal device, and the corresponding target task may be performed within the target area. For example, the task may be any one of a game task, a teaching task, and a training task.
In some embodiments, the terminal device comprises any one of an extended reality device, a virtual reality device, an augmented reality device, and a mixed reality device.
In some embodiments, if the number of tasks corresponding to the target area is X, the target task types include X target task types;
the determining a first location from a plurality of candidate locations within the target area includes:
And determining a first position corresponding to each of the X target task types according to the plurality of candidate positions within the target area, the position random weights of the plurality of candidate positions, and the task number, wherein the first positions corresponding to the X target task types are all different from one another.
For example, before determining the first location from the plurality of candidate locations in the target area, a first configuration information table may be preset, where the first configuration information table may include the number of tasks corresponding to each area under each task checkpoint, the candidate locations, and the location random weights of each candidate location. For example, the first configuration information table shown in table 1 below may configure, for each of a plurality of areas under different task checkpoints, the number of tasks existing in the area, and candidate positions in the respective areas, and the position random weights of the respective candidate positions.
TABLE 1
For example, a plurality of candidate positions corresponding to the target area and the number of tasks corresponding to the target area are obtained according to the first configuration information table, and then the first positions corresponding to the target task types are determined according to the plurality of candidate positions in the target area, the number of tasks, and the position random weights of the plurality of candidate positions. For example, in combination with Table 1, if the target area is area C and the number of tasks in area C is 3, then 3 target task types correspondingly exist (for example, target task type a, target task type b, and target task type c). According to the plurality of candidate positions of area C configured in the first configuration information table, a first position corresponding to each of the 3 target task types is determined: for example, the first position corresponding to target task type a is the (X7, Y7, Z7) coordinate point, the first position corresponding to target task type b is the (X9, Y9, Z9) coordinate point, and the first position corresponding to target task type c is the (X10, Y10, Z10) coordinate point, where the first position corresponding to each of the 3 target task types is a different coordinate point.
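The area C example above can be worked through in code. Since the Table 1 entries are not reproduced in this text, the candidate coordinates and their position random weights below are assumed purely for illustration:

```python
import random

# Assumed region C configuration: candidate position -> position random weight.
# The integer triples stand in for the (X7, Y7, Z7)-style coordinate points.
region_c = {
    (7, 7, 7): 3,
    (9, 9, 9): 2,
    (10, 10, 10): 1,
    (8, 8, 8): 1,
}
task_types = ["type a", "type b", "type c"]  # X = 3 tasks in region C

remaining = dict(region_c)
first_positions = {}
for task_type in task_types:
    pts = list(remaining)
    pick = random.choices(pts, weights=[remaining[p] for p in pts], k=1)[0]
    first_positions[task_type] = pick
    del remaining[pick]  # each task type gets a different coordinate point
```

After the loop, `first_positions` maps each of the three target task types to a distinct coordinate point drawn according to the configured weights.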
And 120, determining the target task type corresponding to the target area according to the historical task completion data.
In some embodiments, the determining, according to the historical task completion data, the target task type corresponding to the target area includes:
acquiring the number of tasks corresponding to the target area as X, wherein X is a positive integer;
And determining X target task types from a task pool according to the historical task completion data, wherein the X target task types are X random task types except the completed historical task types in the historical task completion data in the task pool.
For example, when x=1, there is one target task type within the target area. For example, when X > 1, multiple target task types may exist simultaneously within the target area.
For example, a task pool may be preset, where the task pool contains various task types, and each task type has a corresponding task checkpoint and checkpoint position, which define the checkpoint and position at which the task type occurs. In the task pool, a plurality of candidate answers are set for each task type, and each candidate answer corresponds to a plurality of candidate tasks. For example, in the task pool information table shown in Table 2 below, each task type may be configured with a corresponding task checkpoint, checkpoint position, main prop blueprint, candidate answers, and, for each candidate answer, its candidate tasks and answer random weight.
TABLE 2
For example, the task number X corresponding to the target area is obtained from the first configuration information table; for example, as illustrated in Table 1, if the target area is area C, the task number in area C is X = 3.
According to the historical task completion data, 3 target task types are determined from the task pool shown in Table 2, where the 3 target task types are 3 random task types in the task pool other than the completed historical task types in the historical task completion data. The determined target task types are unplayed task types; a target task type may be a task type not yet played by the current user, or a task type not yet played by a plurality of users in the target area at the current task checkpoint, where the plurality of users in the target area at the current task checkpoint include the current user.
For example, by randomly selecting, as the target task types, X task types in the task pool other than the completed historical task types in the historical task completion data, task types within the same task checkpoint are kept from repeating, so that the user enjoys a fresh task experience each time the user enters the same task checkpoint to execute tasks.
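The selection of X target task types excluding completed historical task types can be sketched as below; `pick_target_task_types`, the pool entries, and the history set are hypothetical names, and `random.sample` stands in for the random selection.

```python
import random

def pick_target_task_types(task_pool, completed_types, num_tasks):
    """Randomly pick `num_tasks` task types from the pool, excluding task
    types already completed according to the historical task completion data."""
    available = [t for t in task_pool if t not in completed_types]
    # random.sample draws without replacement, so the X types are distinct.
    return random.sample(available, num_tasks)

# Illustrative pool and history (names are made up, not from Table 2).
pool = ["type_1", "type_2", "type_3", "type_4", "type_5"]
history = {"type_2", "type_5"}
targets = pick_target_task_types(pool, history, 3)
```

Filtering before sampling is what prevents a checkpoint from re-issuing a task type the user has already completed.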
In some embodiments, the historical task completion data is historical task completion data corresponding to a current user and/or a plurality of users within the target area.
For example, the historical task completion data may be historical task completion data corresponding to the current user, and may specifically be the task types and tasks that the current user has not played at any task checkpoint, or the task types and tasks that the current user has not played in any region at the current task checkpoint.
For example, the historical task completion data may be historical task completion data corresponding to a plurality of users in the target area, and may specifically be the task types and tasks that the plurality of users in the target area have not played in any region at the current task checkpoint or in the target area; for example, the plurality of users in the target area may include the current user.
For example, the historical task completion data may be historical task completion data corresponding to the current user and a plurality of users within the target area.
And 130, determining a target task corresponding to the target task type.
For example, different target tasks may correspond to the same answer, or different target tasks may correspond to different answers.
For example, there is one target task for each target task type. When X = 1, there is one target task within the target area; when X > 1, multiple target tasks may exist simultaneously within the target area.
In some embodiments, the determining a target task corresponding to the target task type includes:
And determining a target task corresponding to each target task type from Y candidate tasks corresponding to each target task type according to the historical task completion data, wherein Y is a positive integer, and the target task is a random task except the completed historical task in the historical task completion data in the Y candidate tasks corresponding to each target task type.
For example, each target task type may correspond to Y candidate tasks, where different target task types may correspond to different numbers of candidate tasks, or each target task type may correspond to the same number of candidate tasks. By selecting an unplayed task from the candidate tasks as the target task, tasks within the same task checkpoint are kept from repeating, so that the user enjoys a fresh task experience each time the user enters the same task checkpoint to execute tasks.
For example, the determining, according to the historical task completion data, the target task corresponding to each target task type from the Y candidate tasks corresponding to each target task type includes:
determining a target answer corresponding to each target task type in the X target task types from a plurality of candidate answers corresponding to each target task type;
Y candidate tasks corresponding to each target answer in the X target answers are obtained, wherein Y is a positive integer;
Determining a target task corresponding to each target answer from Y candidate tasks corresponding to each target answer according to the historical task completion data, wherein the target task is a random task except for the completed historical task in the historical task completion data in the Y candidate tasks corresponding to each target answer;
and determining the target task corresponding to each target task type according to one target task corresponding to each target answer.
For example, the determining, from a plurality of candidate answers corresponding to each target task type, one target answer corresponding to each target task type in the X target task types includes:
And determining one target answer corresponding to each target task type in the X target task types from the plurality of candidate answers corresponding to each target task type according to the random weights of the answers of the plurality of candidate answers corresponding to each target task type.
For example, referring to Table 2, if the number X of target task types is 1 and the target task type corresponds to task type 1, it is determined from the plurality of candidate answers corresponding to task type 1 that the target answer is candidate answer 2. The candidate tasks corresponding to candidate answer 2 include 4 candidate tasks, specifically candidate task 1, candidate task 5, candidate task 2, and candidate task 4. If the completed historical task in the historical task completion data is candidate task 1, one candidate task is randomly selected as the target task from the 3 candidate tasks other than candidate task 1 among the 4 candidate tasks.
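The two-level selection (first a target answer by answer random weight, then a target task excluding completed historical tasks) might look like the following sketch; the mapping layout, the weights, and the task names are illustrative assumptions modeled loosely on the Table 2 example.

```python
import random

def pick_target_task(answers, completed_tasks):
    """answers: mapping answer name -> (answer random weight, candidate task list).

    First draw one target answer by answer random weight, then draw a target
    task from that answer's candidates, excluding already-played tasks."""
    names = list(answers)
    weights = [answers[a][0] for a in names]
    answer = random.choices(names, weights=weights, k=1)[0]
    candidates = [t for t in answers[answer][1] if t not in completed_tasks]
    return answer, random.choice(candidates)

# Table-2-style data for task type 1 (weights and task names are made up).
task_type_1 = {
    "answer_1": (1, ["task_3", "task_6"]),
    "answer_2": (2, ["task_1", "task_5", "task_2", "task_4"]),
}
answer, task = pick_target_task(task_type_1, {"task_1"})
```

Excluding the played task after the answer is drawn is what yields non-repeated answers and non-repeated tasks under the same task type.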
For example, there may be multiple candidate answers under the same target task type, each candidate answer corresponding to multiple candidate tasks. By randomly selecting a target answer and then selecting the target task from the multiple candidate tasks corresponding to that target answer, both played task types and played tasks are excluded, so that the user experiences non-repeated task types, non-repeated answers, and non-repeated tasks under the same task type. This improves the user's task experience, increases user stickiness, and improves task replayability.
And 140, displaying the virtual prop of the target task at the first position.
In some embodiments, the displaying the virtual prop of the target task at the first location includes:
And displaying the virtual prop of the corresponding target task at a first position corresponding to each target task type in the target area in the current virtual scene.
For example, the target task may have a corresponding virtual prop, as shown in fig. 4, and the virtual prop of the target task may be displayed in the oval position shown in fig. 4, for example, the virtual prop is a virtual game machine.
For example, each target task type corresponds to a main prop, each target task corresponds to a virtual prop, the virtual props may be sub-props, and the main props of the target task type and the virtual props of the corresponding target tasks may be displayed at a first position corresponding to each target task type in a target area in the current virtual scene.
All the above technical solutions may be combined to form an optional embodiment of the present application, and will not be described in detail herein.
The embodiment of the application determines a first position from a plurality of candidate positions in a target area; determines a target task type corresponding to the target area according to historical task completion data; determines a target task corresponding to the target task type; and displays the virtual prop of the target task at the first position. The embodiment of the application can flexibly set the first position at which the target task is displayed, and can determine the target task according to the historical task completion data, so that the user enjoys a fresh task experience each time the user enters the same task checkpoint to execute tasks, improving task replayability.
In order to facilitate better implementation of the task display method of the embodiment of the application, the embodiment of the application also provides a task display device. Referring to fig. 4, fig. 4 is a schematic structural diagram of a task display device according to an embodiment of the application. Wherein the task display device 200 is applied to a terminal apparatus, the task display device 200 may include:
A first determining unit 210, configured to determine a first location from a plurality of candidate locations in the target area;
A second determining unit 220, configured to determine, according to the historical task completion data, a target task type corresponding to the target area;
a third determining unit 230, configured to determine a target task corresponding to the target task type;
and the display unit 240 is configured to display the virtual prop of the target task at the first position.
In some embodiments, the second determining unit 220 may be configured to:
acquiring the number of tasks corresponding to the target area as X, wherein X is a positive integer;
And determining X target task types from a task pool according to the historical task completion data, wherein the X target task types are X random task types except the completed historical task types in the historical task completion data in the task pool.
In some embodiments, the third determining unit 230 may be configured to:
And determining a target task corresponding to each target task type from Y candidate tasks corresponding to each target task type according to the historical task completion data, wherein Y is a positive integer, and the target task is a random task except the completed historical task in the historical task completion data in the Y candidate tasks corresponding to each target task type.
In some embodiments, the first determining unit 210 may be configured to:
And determining a first position corresponding to each target task type in the X target task types according to the plurality of candidate positions in the target area, the random weights of the positions of the plurality of candidate positions and the task number, wherein the first positions corresponding to each target task type in the X target task types are different.
In some embodiments, the first determining unit 210 may be configured to:
And displaying the virtual prop of the corresponding target task at a first position corresponding to each target task type in the target area in the current virtual scene.
In some embodiments, the target region is one of a plurality of regions in the current virtual scene.
In some embodiments, the historical task completion data is historical task completion data corresponding to a current user and/or a plurality of users within the target area.
In some embodiments, the terminal device comprises any one of an augmented reality device, a virtual reality device, an extended reality device, and a mixed reality device.
The various units of the task display device 200 described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above units may be embedded in, or independent of, a processor of the terminal device in hardware form, or may be stored in a memory of the terminal device in software form, so that the processor can invoke and execute the operations corresponding to the above units.
The task display device 200 may be integrated in a terminal or a server having a memory and a processor mounted therein and having an arithmetic capability, or the task display device 200 may be the terminal or the server.
In some embodiments, the present application further provides a terminal device, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps in the above method embodiments when executing the computer program.
As shown in fig. 5, fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application. The terminal device 300 may generally be provided in the form of glasses, a head-mounted display (Head Mounted Display, HMD), or a contact lens for realizing visual perception and other forms of perception, but the form of the terminal device is not limited thereto, and it may be further miniaturized or enlarged as required. The terminal device 300 may include, but is not limited to, the following:
The detection module 301: uses various sensors to detect user operation commands and act on the virtual environment, for example updating the images displayed on the display screen to follow the user's line of sight so as to achieve user interaction with the virtual scene, such as updating the displayed content based on the detected direction of rotation of the user's head.
Feedback module 302: receives data from the sensors and provides real-time feedback to the user. The feedback module 302 may be used to display a graphical user interface, for example displaying a virtual environment on the graphical user interface. For example, the feedback module 302 may include a display screen or the like.
Sensor 303: on the one hand, accepts operation commands from the user and applies them to the virtual environment; on the other hand, provides the results generated by the operations to the user in the form of various feedback.
Control module 304: controls the sensors and various input/output devices, including obtaining user data (e.g., motion, speech) and outputting sensory data such as images, vibrations, temperature, and sound to affect the user, the virtual environment, and the real world.
Modeling module 305: constructs a three-dimensional model of the virtual environment; the three-dimensional model may also include various feedback mechanisms, such as sound and touch.
In an embodiment of the present application, a virtual scene may be constructed by the modeling module 305; displaying a graphical user interface through the feedback module 302, wherein the graphical user interface comprises a virtual scene, a target space region contained in the virtual scene and a virtual prop of a target task displayed in the target space region; determining a first position from a plurality of candidate positions in a target area through a control module 304, determining a target task type corresponding to the target area according to historical task completion data, and determining a target task corresponding to the target task type; and display the virtual prop of the target task at the first location via feedback module 302.
In some embodiments, as shown in fig. 6, fig. 6 is another schematic structural diagram of a terminal device according to an embodiment of the present application. The terminal device 300 further includes a processor 310 with one or more processing cores, a memory 320 with one or more computer-readable storage media, and a computer program stored in the memory 320 and capable of running on the processor. The processor 310 is electrically connected to the memory 320. It will be appreciated by those skilled in the art that the terminal device structure shown in the figures does not constitute a limitation of the terminal device, and the terminal device may include more or fewer components than those illustrated, combine certain components, or arrange the components differently.
The processor 310 is a control center of the terminal device 300, connects respective parts of the entire terminal device 300 using various interfaces and lines, and performs various functions of the terminal device 300 and processes data by running or loading software programs and/or modules stored in the memory 320 and calling data stored in the memory 320, thereby performing overall monitoring of the terminal device 300.
In the embodiment of the present application, the processor 310 in the terminal device 300 loads the instructions corresponding to the processes of one or more application programs into the memory 320 according to the following steps, and the processor 310 executes the application programs stored in the memory 320, so as to implement various functions:
Determining a first location from a plurality of candidate locations within the target area;
determining a target task type corresponding to the target area according to the historical task completion data;
Determining a target task corresponding to the target task type;
and displaying the virtual prop of the target task at the first position.
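The four functions listed above can be combined into one minimal end-to-end sketch; all names and data structures here are illustrative assumptions, and the position and answer random weights described in the disclosure are omitted for brevity (uniform random choice is used instead).

```python
import random

def show_tasks(area):
    """Run steps 110-140 for one target area described by an illustrative dict."""
    num = area["task_count"]  # X, the number of tasks in the target area
    # Step 110: determine X distinct first positions from the candidate positions.
    positions = random.sample(area["candidate_positions"], num)
    # Step 120: determine X target task types not in the completed history.
    types = random.sample(
        [t for t in area["task_pool"] if t not in area["completed_types"]], num)
    # Step 130: determine one target task per type, excluding completed tasks.
    tasks = [random.choice([c for c in area["candidates"][t]
                            if c not in area["completed_tasks"]])
             for t in types]
    # Step 140: pair each target task with the first position where its
    # virtual prop would be displayed.
    return list(zip(types, tasks, positions))

# Illustrative configuration for one area (names are made up).
area_c = {
    "task_count": 2,
    "candidate_positions": [(7, 7, 7), (9, 9, 9), (10, 10, 10)],
    "task_pool": ["type_1", "type_2", "type_3"],
    "completed_types": {"type_2"},
    "candidates": {"type_1": ["task_a", "task_b"],
                   "type_2": ["task_c"],
                   "type_3": ["task_d", "task_e"]},
    "completed_tasks": {"task_a"},
}
placements = show_tasks(area_c)
```

In a real terminal device the returned placements would drive the rendering of each virtual prop rather than being returned as a list.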
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
In some embodiments, the processor 310 may include a detection module 301, a control module 304, and a modeling module 305.
In some embodiments, as shown in fig. 6, the terminal device 300 further includes: radio frequency circuitry 306, audio circuitry 307, and a power supply 308. The processor 310 is electrically connected to the memory 320, the feedback module 302, the sensor 303, the radio frequency circuit 306, the audio circuit 307, and the power supply 308, respectively. It will be appreciated by those skilled in the art that the terminal device structure shown in fig. 5 or fig. 6 does not constitute a limitation of the terminal device, and the terminal device may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
The radio frequency circuitry 306 may be configured to receive and transmit radio frequency signals, so as to communicate with a network device or another terminal device via wireless communication.
The audio circuit 307 may be used to provide an audio interface between the user and the terminal device via a speaker and a microphone. On one hand, the audio circuit 307 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 307 and converted into audio data. The audio data is processed by the processor 310 and sent, for example, to another terminal device via the radio frequency circuit 306, or output to the memory for further processing. The audio circuit 307 may also include an earbud jack to provide communication between a peripheral earbud and the terminal device.
The power supply 308 is used to power the various components of the terminal device 300.
Although not shown in fig. 5 or fig. 6, the terminal device 300 may further include a camera, a wireless fidelity module, a bluetooth module, an input module, and the like, which are not described herein.
In some embodiments, the present application also provides a computer-readable storage medium storing a computer program. The computer readable storage medium may be applied to a terminal device or a server, and the computer program causes the terminal device or the server to execute the corresponding flow in the task display method in the embodiment of the present application, which is not described herein for brevity.
In some embodiments, the present application also provides a computer program product comprising a computer program stored in a computer readable storage medium. The processor of the terminal device reads the computer program from the computer readable storage medium, and the processor executes the computer program, so that the terminal device executes a corresponding flow in the task display method in the embodiment of the present application, which is not described herein for brevity.
The present application also provides a computer program, the computer program being stored in a computer readable storage medium. The processor of the terminal device reads the computer program from the computer readable storage medium, and the processor executes the computer program, so that the terminal device executes the corresponding flow in the task display method in the embodiment of the present application, which is not described herein for brevity.
It should be appreciated that the processor of an embodiment of the present application may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be implemented by integrated logic circuits of hardware in the processor or by instructions in software form. The processor may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, or registers. The storage medium is located in the memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method.
It will be appreciated that the memory in embodiments of the application may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a terminal device (which may be a personal computer or a server) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disk, or the like.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (11)
1. A task display method, characterized in that it is applied to a terminal device, the method comprising:
Determining a first location from a plurality of candidate locations within the target area;
determining a target task type corresponding to the target area according to the historical task completion data;
Determining a target task corresponding to the target task type;
and displaying the virtual prop of the target task at the first position.
2. The task display method according to claim 1, wherein the determining the target task type corresponding to the target area according to the historical task completion data includes:
acquiring the number of tasks corresponding to the target area as X, wherein X is a positive integer;
And determining X target task types from a task pool according to the historical task completion data, wherein the X target task types are X random task types except the completed historical task types in the historical task completion data in the task pool.
3. The task display method according to claim 1, wherein the determining a target task corresponding to the target task type includes:
And determining a target task corresponding to each target task type from Y candidate tasks corresponding to each target task type according to the historical task completion data, wherein Y is a positive integer, and the target task is a random task except the completed historical task in the historical task completion data in the Y candidate tasks corresponding to each target task type.
4. The task display method of claim 2, wherein the determining a first location from a plurality of candidate locations within the target area comprises:
And determining a first position corresponding to each target task type in the X target task types according to the plurality of candidate positions in the target area, the random weights of the positions of the plurality of candidate positions and the task number, wherein the first positions corresponding to each target task type in the X target task types are different.
5. The task display method of claim 4, wherein displaying the virtual prop of the target task at the first location comprises:
And displaying the virtual prop of the corresponding target task at a first position corresponding to each target task type in the target area in the current virtual scene.
6. The task display method of any one of claims 1 to 5, wherein the target area is one of a plurality of areas in a current virtual scene.
7. A task display method according to any one of claims 1 to 5, wherein the historical task completion data is historical task completion data corresponding to a current user and/or a plurality of users within the target area.
8. The task display method according to any one of claims 1 to 5, wherein the terminal device includes any one of an augmented reality device, a virtual reality device, an extended reality device, and a mixed reality device.
9. A task display device, characterized by being applied to a terminal apparatus, the device comprising:
a first determining unit configured to determine a first position from among a plurality of candidate positions within a target area;
The second determining unit is used for determining a target task type corresponding to the target area according to the historical task completion data;
The third determining unit is used for determining a target task corresponding to the target task type;
And the display unit is used for displaying the virtual prop of the target task at the first position.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program adapted to be loaded by a processor for performing the task display method according to any of the claims 1-8.
11. A terminal device, characterized in that the terminal device comprises a processor and a memory, the memory storing a computer program, the processor being adapted to execute the task display method according to any one of claims 1-8 by calling the computer program stored in the memory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211575273.XA CN118161857A (en) | 2022-12-08 | 2022-12-08 | Task display method, device, storage medium and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118161857A true CN118161857A (en) | 2024-06-11 |
Family
ID=91357422
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211575273.XA Pending CN118161857A (en) | 2022-12-08 | 2022-12-08 | Task display method, device, storage medium and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118161857A (en) |
- 2022-12-08: CN application CN202211575273.XA filed (published as CN118161857A); status Pending
Similar Documents
Publication | Title
---|---
CN106383587B (en) | Augmented reality scene generation method, device and equipment
CN112156464B (en) | Two-dimensional image display method, device and equipment of virtual object and storage medium
JP5987060B2 (en) | Game system, game device, control method, program, and recording medium
JP6576245B2 (en) | Information processing apparatus, control method, and program
CN112933606B (en) | Game scene conversion method and device, storage medium and computer equipment
CN115883853B (en) | Video frame playing method, device, equipment and storage medium
KR20180013892A (en) | Reactive animation for virtual reality
CN111603771A (en) | Animation generation method, device, equipment and medium
CN111729307A (en) | Virtual scene display method, device, equipment and storage medium
US20190295324A1 (en) | Optimized content sharing interaction using a mixed reality environment
CN113194329B (en) | Live interaction method, device, terminal and storage medium
JP6200062B2 (en) | Information processing apparatus, control method, program, and recording medium
CN114288654A (en) | Live broadcast interaction method, device, equipment, storage medium and computer program product
CN112316423A (en) | Method, device, equipment and medium for displaying state change of virtual object
CN118161857A (en) | Task display method, device, storage medium and equipment
US11684852B2 (en) | Create and remaster computer simulation skyboxes
CN115068929A (en) | Game information acquisition method and device, electronic equipment and storage medium
CN118161858A (en) | Task adjusting method, device, storage medium and equipment
CN114053693A (en) | Object control method and device in virtual scene and terminal equipment
CN118229934A (en) | Virtual object display method, device, storage medium, equipment and program product
CN112973116A (en) | Virtual scene picture display method and device, computer equipment and storage medium
CN117671201A (en) | Information refreshing method, device, storage medium and equipment
CN118142168A (en) | Virtual object control method, device, storage medium and equipment
US20240001239A1 (en) | Use of machine learning to transform screen renders from the player viewpoint
US20240193894A1 (en) | Data processing method and apparatus, electronic device and storage medium
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||