
CN112099713B - Virtual element display method and related device - Google Patents

Virtual element display method and related device

Info

Publication number
CN112099713B
Authority
CN
China
Prior art keywords
target
virtual element
interface
interactive
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010989042.8A
Other languages
Chinese (zh)
Other versions
CN112099713A (en)
Inventor
叶金添
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010989042.8A priority Critical patent/CN112099713B/en
Publication of CN112099713A publication Critical patent/CN112099713A/en
Application granted granted Critical
Publication of CN112099713B publication Critical patent/CN112099713B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/74 Browsing; Visualisation therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0485 Scrolling or panning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0241 Advertisements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Multimedia (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a virtual element display method and a related device. In response to a target operation in a target interface, an interaction event involving relative displacement between a first virtual element and a second virtual element is executed; the relative positional relationship between the two elements is monitored during the displacement; and when that relationship satisfies a trigger condition, the execution of a target event is displayed in the interactive interface. Because the positional relationship between the virtual elements is dynamic during interactive display, the interaction between them can be conveyed, and the display of the target event makes that interaction clearer, improving the comprehensiveness and accuracy of virtual element display.

Description

Virtual element display method and related device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and a related apparatus for displaying a virtual element.
Background
With the rapid development of Internet technology, ever more media content appears in users' terminal interfaces, and displaying virtual elements within these limited interfaces has become a difficult problem.
Typically, virtual elements are displayed as static images, and the elements are highlighted by the way the images are composed.
However, in some scenarios there are interactive relationships among multiple virtual elements, for example in advertisement images for interactive narrative works. A static-image scheme cannot convey the interactive experience specific to such works, which compromises the comprehensiveness and accuracy of virtual element display.
Disclosure of Invention
In view of this, the present application provides a method for displaying virtual elements that can effectively improve the comprehensiveness and accuracy of their display.
A first aspect of the present application provides a method for displaying a virtual element, which can be applied to a system or program having a virtual-element display function in a terminal device, and which specifically includes:
in response to a target operation in a target interface, executing an interaction event involving a first virtual element and a second virtual element, wherein the interaction event instructs the first virtual element and the second virtual element to undergo relative displacement in an interactive interface, and the target interface contains the interactive interface;
monitoring the relative positional relationship between the first virtual element and the second virtual element during the relative displacement;
and if the relative positional relationship satisfies a trigger condition, displaying the execution of a target event in the interactive interface.
Optionally, in some possible implementations of the present application, the executing an interaction event including a first virtual element and a second virtual element in response to a target operation in a target interface includes:
determining a first layer corresponding to the first virtual element and a second layer corresponding to the second virtual element;
and responding to the target operation in the target interface, and executing the interaction event based on the first layer and the second layer.
Optionally, in some possible implementation manners of the present application, the executing the interaction event based on the first layer and the second layer in response to the target operation in the target interface includes:
acquiring an operation starting point of the target operation in the target interface to trigger the first layer and the second layer to perform relative displacement indicated by the interaction event;
and acquiring an operation end point of the target operation in the target interface to trigger the first layer and the second layer to stop the relative displacement.
Optionally, in some possible implementation manners of the present application, the executing the interaction event based on the first layer and the second layer in response to the target operation in the target interface includes:
responding to the target operation in the target interface, and determining an operation displacement distance corresponding to the target operation;
and determining layer displacement distances corresponding to the first layer and the second layer based on the operation displacement distance so as to execute the interaction event.
Optionally, in some possible implementation manners of the present application, the executing the interaction event based on the first layer and the second layer in response to the target operation in the target interface includes:
responding to the target operation in the target interface, and determining display elements of the first layer and the second layer in the interactive interface;
executing the interaction event based on the presentation element.
Optionally, in some possible implementations of the present application, the executing an interaction event including a first virtual element and a second virtual element in response to a target operation in a target interface includes:
responding to the target operation in the target interface, calling a target video, wherein the target video is used for indicating the process of relative displacement of the first virtual element and the second virtual element, and the target video is played based on the interactive interface;
and executing the interactive event based on the playing of the target video.
Optionally, in some possible implementations of the present application, the executing the interactive event based on the playing of the target video includes:
determining a playing progress bar of the target video;
and adjusting the playing progress bar according to the component of the target operation along a target direction, so as to execute the interactive event.
Optionally, in some possible implementations of the present application, the method further includes:
acquiring a target frame image of the target video;
and updating the initial state of the interactive interface based on the target frame image.
Optionally, in some possible implementation manners of the present application, if the relative position relationship satisfies a trigger condition, displaying an execution process of the target event in the interactive interface includes:
determining a key element in the first virtual element and a key element in the second virtual element;
and if the relative position relationship indicates that the distance between the key element in the first virtual element and the key element in the second virtual element is smaller than a preset value, triggering the execution of the target event in the interactive interface.
Optionally, in some possible implementations of the present application, the triggering, in the interactive interface, the execution of the target event includes:
calling a jump virtual element corresponding to the target event, wherein the jump virtual element is used for indicating the display of target media content in the target interface;
intercepting an interface display when the distance between the key element in the first virtual element and the key element in the second virtual element is equal to the preset value;
and displaying the jump virtual element in the interactive interface by taking the interface display as a background.
Optionally, in some possible implementation manners of the present application, the method further includes:
and responding to a rollback operation in the target interface, and executing the interaction events in a reverse order, wherein the rollback operation is opposite to the operation direction of the target operation.
Optionally, in some possible implementations of the present application, the target interface is a media content display interface, the target operation is a slide-down operation, the interactive interface is an advertisement display frame, the first virtual element and the second virtual element are advertisement materials in the advertisement display frame, and the target event is the playing of an advertisement video.
A second aspect of the present application provides a display apparatus for a virtual element, including: the response unit is used for responding to a target operation in a target interface and executing an interaction event containing a first virtual element and a second virtual element, wherein the interaction event is used for indicating the first virtual element and the second virtual element to perform relative displacement in the interaction interface, and the target interface contains the interaction interface;
the monitoring unit is used for monitoring the relative position relation of the first virtual element and the second virtual element in the relative displacement process;
and the display unit is used for displaying the execution process of the target event in the interactive interface if the relative position relation meets the triggering condition.
Optionally, in some possible implementation manners of the present application, the response unit is specifically configured to determine a first layer corresponding to the first virtual element and a second layer corresponding to the second virtual element;
the response unit is specifically configured to execute the interaction event based on the first layer and the second layer in response to the target operation in the target interface.
Optionally, in some possible implementation manners of the present application, the response unit is specifically configured to obtain an operation starting point of the target operation in the target interface, so as to trigger the first layer and the second layer to perform the relative displacement indicated by the interaction event;
the response unit is specifically configured to obtain an operation end point of the target operation in the target interface, so as to trigger the first layer and the second layer to stop the relative displacement.
Optionally, in some possible implementation manners of the present application, the response unit is specifically configured to determine, in response to the target operation in the target interface, an operation displacement distance corresponding to the target operation;
the response unit is specifically configured to determine layer displacement distances corresponding to the first layer and the second layer based on the operation displacement distance, so as to execute the interaction event.
Optionally, in some possible implementation manners of the present application, the response unit is specifically configured to determine, in response to the target operation in the target interface, display elements of the first layer and the second layer in the interactive interface;
the response unit is specifically configured to execute the interaction event based on the presentation element.
Optionally, in some possible implementation manners of the present application, the response unit is specifically configured to respond to the target operation in the target interface, and invoke a target video, where the target video is used to indicate a process of performing relative displacement on the first virtual element and the second virtual element, and the target video is played based on the interactive interface;
the response unit is specifically configured to execute the interactive event based on the playing of the target video.
Optionally, in some possible implementations of the present application, the response unit is specifically configured to determine a playing progress bar of the target video;
the response unit is specifically configured to adjust the playing progress bar according to the component of the target operation along the target direction, so as to execute the interactive event.
Optionally, in some possible implementations of the present application, the response unit is specifically configured to obtain a target frame image of the target video;
the response unit is specifically configured to update the initial state of the interactive interface based on the target frame image.
Optionally, in some possible implementations of the present application, the presentation unit is specifically configured to determine a key element in the first virtual element and a key element in the second virtual element;
the display unit is specifically configured to trigger execution of the target event in the interactive interface if the relative position relationship indicates that a distance between a key element in the first virtual element and a key element in the second virtual element is smaller than a preset value.
Optionally, in some possible implementation manners of the present application, the presentation unit is specifically configured to invoke a jump virtual element corresponding to the target event, where the jump virtual element is used to indicate presentation of target media content in the target interface;
the display unit is specifically configured to intercept an interface display when a distance between a key element in the first virtual element and a key element in the second virtual element is equal to the preset value;
the display unit is specifically configured to display the jump virtual element in the interactive interface with the interface display as a background.
Optionally, in some possible implementations of the present application, the response unit is specifically configured to execute the interaction events in a reverse order in response to a rollback operation in the target interface, where the rollback operation is opposite to an operation direction of the target operation.
A third aspect of the present application provides a computer device comprising: a memory, a processor, and a bus system; the memory is used for storing program codes; the processor is configured to execute the method for presenting a virtual element according to any one of the first aspect and the first aspect according to instructions in the program code.
A fourth aspect of the present application provides a computer-readable storage medium, which stores instructions that, when executed on a computer, cause the computer to perform the method for presenting a virtual element according to any one of the first aspect or the first aspect.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method for presenting a virtual element provided in the first aspect or the various alternative implementations of the first aspect.
According to the technical scheme, the embodiment of the application has the following advantages:
By responding to a target operation in a target interface, an interaction event involving a first virtual element and a second virtual element is executed, the interaction event instructing the two elements to undergo relative displacement in an interactive interface contained in the target interface; the relative positional relationship of the two elements is then monitored during the displacement; and when that relationship satisfies a trigger condition, the execution of a target event is displayed in the interactive interface. Because the positional relationship between the virtual elements is dynamic during interactive display, the interaction between them can be conveyed; the display of the target event makes that interaction clearer, improving the comprehensiveness and accuracy of virtual element display.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram of a network architecture for a presentation system of virtual elements;
FIG. 2 is a process framework diagram for displaying a virtual element according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a method for displaying a virtual element according to an embodiment of the present disclosure;
fig. 4 is a scene schematic diagram of a method for displaying a virtual element according to an embodiment of the present disclosure;
fig. 5 is a schematic view of a scene of another method for displaying a virtual element according to an embodiment of the present disclosure;
fig. 6 is a schematic view of a scene of another method for displaying a virtual element according to an embodiment of the present application;
fig. 7 is a schematic view of a scene of another method for displaying a virtual element according to an embodiment of the present application;
fig. 8 is a flowchart of another method for displaying a virtual element according to an embodiment of the present disclosure;
fig. 9 is a schematic view of a scene of another method for displaying a virtual element according to an embodiment of the present application;
fig. 10 is a schematic view of a scene of another method for displaying a virtual element according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a display apparatus for virtual elements according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
The embodiments of the present application provide a virtual element display method and a related device, which can be applied to a system or program having a virtual-element display function in a terminal device. In response to a target operation in a target interface, an interaction event involving a first virtual element and a second virtual element is executed, the interaction event instructing the two elements to undergo relative displacement in an interactive interface contained in the target interface; the relative positional relationship of the two elements is monitored during the displacement; and when that relationship satisfies a trigger condition, the execution of a target event is displayed in the interactive interface. Because the positional relationship between the virtual elements is dynamic during interactive display, the interaction between them can be conveyed; the display of the target event makes that interaction clearer, improving the comprehensiveness and accuracy of virtual element display.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "corresponding" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms that may appear in the embodiments of the present application are explained.
Interactive narrative works: works with which users can interact according to their preferences and reasoning (typically by selecting options), thereby advancing one of several plot lines and even reaching different endings. The carrier of an interactive narrative may be text, pictures, video, and so on.
An interactive narrative platform: a platform-class product that aggregates various interactive narrative works and on which those works are displayed.
It should be understood that the virtual element display method provided by the present application may be applied to a system or program having a virtual-element display function in a terminal device, for example an interactive narrative platform. Specifically, the virtual element display system may run in the network architecture shown in fig. 1, which is a network architecture diagram of the virtual element display system. As can be seen from the figure, the system can provide virtual element display for multiple information sources: the terminal side triggers a target event in response to an operation on a virtual element and then fetches the relevant data for that event from the server, enabling the terminal device to display the virtual element. It can be understood that fig. 1 shows various terminal devices, which may be computer devices; in an actual scene, more or fewer types of terminal devices may participate in the display of the virtual element, with the specific number and types depending on the actual scene, which is not limited here. In addition, fig. 1 shows one server, but in an actual scene multiple servers may participate, especially where multiple recommendation platforms interact; the specific number of servers depends on the actual scene.
In this embodiment, the server may be an independent physical server, a server cluster or distributed system formed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The terminal may be, but is not limited to, a smartphone, tablet computer, laptop computer, desktop computer, smart speaker, smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and may be connected to form a blockchain network, which is not limited here.
It is understood that the virtual element display system described above may run on a personal mobile terminal, for example as an interactive narrative platform application; it may also run on a server, or on third-party equipment, to provide virtual element display and obtain the display processing result for an information source's virtual elements. The display system may run on these devices in the form of a program, as a system component, or as one of several cloud service programs; the specific mode of operation depends on the actual scene and is not limited here.
With the rapid development of Internet technology, ever more media content appears in users' terminal interfaces, and displaying virtual elements within these limited interfaces has become a difficult problem.
Typically, virtual elements are displayed as static images, and the elements are highlighted by the way the images are composed.
However, in some scenarios there are interactive relationships among multiple virtual elements, for example in advertisement images for interactive narrative works. A static-image scheme cannot convey the interactive experience specific to such works, which compromises the comprehensiveness and accuracy of virtual element display.
To solve the above problem, the present application provides a virtual element display method applied to the process framework shown in fig. 2, a process framework diagram for displaying a virtual element provided in an embodiment of the present application. A user performs a sliding operation (the target operation) on the target interface through the interface layer, causing different interface content to be displayed; correspondingly, the application layer monitors the relative displacement between the first virtual element and the second virtual element contained in the interface content, so as to trigger the display of a target event and thereby achieve a highlighted display.
It can be understood that the method provided by the present application may be implemented as processing logic written as a program in a hardware system, or as a virtual element display apparatus in which the processing logic is implemented in an integrated or external manner. As one implementation, the virtual element display apparatus responds to a target operation in a target interface by executing an interaction event involving a first virtual element and a second virtual element, where the interaction event instructs the two elements to undergo relative displacement in an interactive interface contained in the target interface; it then monitors the relative positional relationship of the two elements during the displacement; and when that relationship satisfies a trigger condition, it displays the execution of a target event in the interactive interface. Because the positional relationship between the virtual elements is dynamic during interactive display, the interaction between them can be conveyed; the display of the target event makes that interaction clearer, improving the comprehensiveness and accuracy of virtual element display.
With reference to the above process architecture, the virtual element display method of the present application is described below. Referring to fig. 3, fig. 3 is a flowchart of a virtual element display method provided in an embodiment of the present application. The method may be executed by a terminal device, by a server, or by both; the following description takes execution by a terminal device as an example. The embodiment of the application includes at least the following steps:
301. In response to a target operation in the target interface, execute an interaction event involving the first virtual element and the second virtual element.
In this embodiment, the interaction event instructs the first virtual element and the second virtual element to undergo relative displacement in the interactive interface, and the target interface contains the interactive interface. The target operation may be a sliding operation, such as sliding the page upward; a click operation, such as clicking a page-scroll button; or an action obtained through other contactless input, such as scrolling the page upward by voice control. The specific operation mode depends on the actual scene and is not limited here.
Correspondingly, the relative displacement may be produced by establishing a logical relationship between the sliding distance of the target operation and the resulting displacement, for example making the two equal, as in the sketch below. The relative displacement may be a transverse motion, a longitudinal motion, or a motion combining other directions. The interactive interface is the portion of the target interface used to display the first virtual element and the second virtual element; for example, if the two elements belong to the advertising material of advertisement A, the interactive interface is the advertisement's display slot.
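The snippet below is a minimal sketch of one such mapping, assuming a web-style implementation in TypeScript; the element handles and the 1:1 distance mapping are illustrative assumptions, not details fixed by this application.

```typescript
// Sketch (assumed implementation): map the swipe distance of the target
// operation to a symmetric relative displacement of the two virtual elements.
function applyRelativeDisplacement(
  first: HTMLElement,
  second: HTMLElement,
  swipeDelta: number // pixels covered by the target operation
): void {
  // 1:1 mapping: each element moves half the swipe distance toward the other,
  // so their relative displacement equals the swipe distance.
  const half = swipeDelta / 2;
  first.style.transform = `translateX(${half}px)`;
  second.style.transform = `translateX(${-half}px)`;
}
```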
In one possible scenario, the relative displacement produced by the target operation is shown in fig. 4, a scene schematic diagram of a virtual element display method provided in an embodiment of the present application. The figure shows a target interface A1, an interactive interface A2, a first virtual element A3, a second virtual element A4, and a displacement-completed virtual element A5. The first virtual element A3 and the second virtual element A4 move relative to each other in response to the user's upward sliding operation, that is, they approach each other, yielding the displacement-completed virtual element A5. It will be appreciated that the interactive interface A2 may move in response to the user's swipe-up operation, or may not (for example, in a fixed-advertisement-slot scene). While the first virtual element A3 and the second virtual element A4 are being displayed, only the portion within the interactive interface A2 may be shown. Furthermore, the interaction event is the process of relative displacement between A3 and A4, while the target event corresponding to the displaced virtual element A5 may be the playing of media content; for example, the heart shape in the figure becomes a dynamic element and a heartbeat animation is presented.
Optionally, in the above scenario, the displacement of the virtual elements may also occur in multiple directions. As shown in fig. 5, a scene schematic diagram of another virtual element display method provided in this embodiment, the figure shows a sliding trajectory B1, the longitudinal component B2 of the trajectory, and the transverse component B3 of the trajectory. The relative displacement of the virtual elements may be driven by these components, that is, longitudinal displacement is based on the longitudinal component B2 and transverse displacement is based on the transverse component B3, which enriches the way the virtual objects move.
In another possible scenario, the displacement of the virtual elements may be carried out on layers corresponding to the elements: a first layer corresponding to the first virtual element and a second layer corresponding to the second virtual element are determined first, and the interaction event is then executed on those layers in response to the target operation in the target interface. Specifically, as shown in fig. 6, a scene schematic diagram of another virtual element display method provided in this embodiment, the display in the interactive interface is composed of a background layer, a first layer (containing the first virtual element), and a second layer (containing the second virtual element). While the interaction event executes, the first and second layers move relative to each other over the background layer within the interactive interface region; the specific form of motion depends on the actual scene and is not limited here. By operating on layers, the virtual elements are expressed through a uniform unit of measure (the layer frame), which simplifies their management and improves the accuracy of the relative-motion process. A minimal layer-stack sketch follows.
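Below is one such layer stack in TypeScript; the z-order, absolute positioning, and class names are assumptions about one reasonable implementation rather than anything this application prescribes.

```typescript
// Sketch (assumed implementation): compose a background layer plus two
// element layers inside the interactive interface region.
function buildLayerStack(interactiveInterface: HTMLElement): {
  background: HTMLElement;
  first: HTMLElement;
  second: HTMLElement;
} {
  const makeLayer = (z: number, className: string): HTMLElement => {
    const layer = document.createElement('div');
    layer.className = className;
    layer.style.position = 'absolute'; // layers move relative to the background
    layer.style.inset = '0';
    layer.style.zIndex = String(z);
    interactiveInterface.appendChild(layer);
    return layer;
  };
  return {
    background: makeLayer(0, 'background-layer'),
    first: makeLayer(1, 'first-layer'),   // holds the first virtual element
    second: makeLayer(2, 'second-layer'), // holds the second virtual element
  };
}
```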
It can be understood that the relative movement of the first and second layers may track the target operation in real time: an operation starting point of the target operation in the target interface is first obtained, triggering the relative displacement of the two layers indicated by the interaction event; then an operation end point is obtained, triggering the two layers to stop the relative displacement. This is a real-time response process and improves the immediacy of feedback to the user's operation.
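A sketch of this real-time wiring with touch events is shown below; the container id and the moveLayers helper are hypothetical names introduced only for illustration.

```typescript
// Sketch (assumed implementation): begin displacement at the operation's
// starting point (touchstart) and stop it at the end point (touchend).
declare function moveLayers(delta: number): void; // hypothetical layer mover, provided elsewhere

const container = document.getElementById('interactive-interface')!;
let startY: number | null = null;

container.addEventListener('touchstart', (e: TouchEvent) => {
  startY = e.touches[0].clientY; // operation starting point: displacement begins
});

container.addEventListener('touchmove', (e: TouchEvent) => {
  if (startY === null) return;
  const delta = startY - e.touches[0].clientY;
  moveLayers(delta); // displace the first/second layers as the finger moves
});

container.addEventListener('touchend', () => {
  startY = null; // operation end point: the relative displacement stops
});
```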
Alternatively, the relative movement of the first and second layers may be a displacement performed after the target operation has determined the relevant displacement parameter: in response to the target operation in the target interface, the operation displacement distance corresponding to the operation is determined, and the layer displacement distances of the first and second layers are then derived from that operation displacement distance in order to execute the interaction event. For example, the layer displacement distance may be twice the operation displacement distance, amplifying the displayed displacement of the two layers.
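The following sketch derives per-layer offsets from the operation distance; the 2x ratio mirrors the example in the text and is a tunable design parameter, not a mandated value.

```typescript
// Sketch (assumed implementation): derive layer displacement from the
// operation displacement via a fixed ratio.
const DISPLACEMENT_RATIO = 2; // example from the text: layers move twice as far

function layerOffsets(operationDistance: number): { first: number; second: number } {
  const d = operationDistance * DISPLACEMENT_RATIO;
  // The two layers move in opposite directions so that they approach each other.
  return { first: d, second: -d };
}
```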
Optionally, while the first and second layers are being displaced, they may be clipped to the interactive interface: in response to the target operation in the target interface, the display elements of the two layers within the interactive interface are determined, and the interaction event is executed based on those display elements. This reduces the image-processing workload and saves system resources.
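One simple way to realize this clipping on the web is CSS overflow, sketched below under the same assumptions as the earlier snippets.

```typescript
// Sketch (assumed implementation): only the portion of each layer inside the
// interactive interface is painted; parts pushed outside by the relative
// displacement are not rendered.
function clipToInterface(interactiveInterface: HTMLElement): void {
  interactiveInterface.style.position = 'relative';
  interactiveInterface.style.overflow = 'hidden'; // clip child layers to the region
}
```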
302. Monitor the relative positional relationship between the first virtual element and the second virtual element during the relative displacement.
In this embodiment, the relative positional relationship between the first and second virtual elements during the displacement may be the relationship between specific points or specific regions of the elements, or the intersection of the two elements; the specific positions depend on the actual scene.
303. If the relative positional relationship satisfies the trigger condition, display the execution of the target event in the interactive interface.
In this embodiment, the trigger condition is tied to the relative positional relationship. For example, when the relationship is between points, the trigger condition may be that the distance between the corresponding points is less than 10 pixels; when the relationship is between specific regions, the trigger condition may be satisfied, for example, when the part of the first virtual element indicating the "head" and the part of the second virtual element indicating the "head" lie on the same horizontal line.
Specifically, a key element in the first virtual element and a key element in the second virtual element may be determined first; if the relative positional relationship indicates that the distance between the two key elements is smaller than a preset value, execution of the target event is triggered in the interactive interface. This improves the accuracy of the trigger-condition judgment; a sketch of the check follows.
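Below is a minimal sketch of the distance check; the Point type, the Euclidean metric, and the 10-pixel threshold are illustrative assumptions.

```typescript
// Sketch (assumed implementation): trigger the target event when the key
// elements' positions come within a preset distance of each other.
interface Point {
  x: number;
  y: number;
}

const PRESET_DISTANCE = 10; // pixels; placeholder threshold

function shouldTriggerTargetEvent(firstKey: Point, secondKey: Point): boolean {
  const dx = firstKey.x - secondKey.x;
  const dy = firstKey.y - secondKey.y;
  return Math.hypot(dx, dy) < PRESET_DISTANCE; // Euclidean distance check
}
```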
In one possible scenario, as shown in fig. 7, a scene schematic diagram of another virtual element display method provided in this embodiment, the figure shows a key element C1 in the first virtual element, a key element C2 in the second virtual element, and the horizontal line C3 of the key elements. In response to the target operation, the first and second virtual elements move relative to each other, and correspondingly C1 and C2 undergo relative displacement; when the center points of C1 and C2 lie on the horizontal line C3, the trigger condition is judged to be met, and the execution of the target event is displayed in the interactive interface. The target event may be a dialogue between the first and second virtual elements in the figure, or the generation of an entrance to the related interactive narrative story; that is, the two virtual elements are characters in the interactive narrative story, and the user can learn the related content of that story, improving the completeness of the information conveyed while displaying the virtual elements.
Specifically, for the jump into the interactive narrative story, a jump virtual element corresponding to the target event may be invoked, the jump virtual element indicating the display of target media content in the target interface. The interface display at the moment when the distance between the key elements of the two virtual elements equals the preset value is then captured, and the jump virtual element is displayed in the interactive interface with that capture as its background. In other words, the background of the interactive interface is frozen at the moment the first and second virtual elements meet, a jump button (the jump virtual element) for the interactive narrative story containing the two elements is displayed over it, and the corresponding resources are loaded in the background; if the user clicks the button, the jump happens immediately, improving the interaction efficiency of the virtual elements.
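A minimal sketch of this freeze-and-overlay step is shown below; the class names, button label, and the idea of "freezing" by simply halting further displacement are assumptions, and resource preloading is left to a callback.

```typescript
// Sketch (assumed implementation): freeze the interactive interface as a
// background and overlay the jump virtual element on top of it.
function showJumpElement(
  interactiveInterface: HTMLElement,
  onJump: () => void // navigates to the interactive narrative; resources preloaded elsewhere
): void {
  // Halt further displacement so the current display becomes a static background.
  interactiveInterface.classList.add('frozen');

  const jumpButton = document.createElement('button');
  jumpButton.className = 'jump-virtual-element';
  jumpButton.textContent = 'Enter the story'; // illustrative label
  jumpButton.addEventListener('click', onJump);
  interactiveInterface.appendChild(jumpButton);
}
```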
Optionally, in the above embodiment, the user may also operate in the direction opposite to the target operation to restore the relative positions of the first and second virtual elements: in response to a rollback operation in the target interface, the interaction events are executed in reverse order, where the rollback operation is opposite in direction to the target operation. This improves the controllability of the interactive interface.
In one possible scenario, the target interface in the above embodiment is a media content display interface in a mobile terminal, the target operation is a slide-down operation, the interactive interface is an advertisement display frame for an interactive narrative story, the first and second virtual elements are advertisement materials in that frame, and the target event is the playing of an advertisement video associated with the interactive narrative story, such as a content outline of the story.
Summarizing the above embodiments: in response to a target operation in a target interface, an interaction event involving a first virtual element and a second virtual element is executed, the interaction event instructing the two elements to undergo relative displacement in an interactive interface contained in the target interface; the relative positional relationship of the two elements is monitored during the displacement; and when that relationship satisfies a trigger condition, the execution of a target event is displayed in the interactive interface. Because the positional relationship between the virtual elements is dynamic during interactive display, the interaction between them can be conveyed; the display of the target event makes that interaction clearer, improving the comprehensiveness and accuracy of virtual element display.
In another possible scenario, the first and second virtual elements may be virtual elements within the same video, which is explained below. Referring to fig. 8, fig. 8 is a flowchart of another virtual element display method provided in an embodiment of the present disclosure; this embodiment includes at least the following steps:
801. Invoke a target video that includes the first virtual element and the second virtual element.
In this embodiment, the target video may be a recording of the layer-based displacement of the first and second virtual elements, or a custom video containing the two elements.
Specifically, the target video is invoked in response to the target operation in the target interface, and the interaction event is executed through the playing of the target video. The target video depicts the process of relative displacement between the first and second virtual elements and is played within the interactive interface, for example as a dynamic video played in an advertisement slot in response to the target operation.
802. Play the target video in response to the target operation in the target interface.
In this embodiment, the target video may play automatically once the target operation is received, or it may play in accompaniment to the operation. For accompanied playing, the playing progress bar of the target video is determined first, and the progress bar is then adjusted according to the component of the target operation along the target direction so as to execute the interactive event. This improves the precision of the target video's playback.
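Below is a minimal sketch of gesture-driven scrubbing, assuming an HTML video element; the vertical target direction and the pixels-to-seconds ratio are illustrative tuning assumptions.

```typescript
// Sketch (assumed implementation): drive the video's playback position with
// the component of the gesture along the target direction (vertical here).
const SECONDS_PER_PIXEL = 0.01; // assumed scrub sensitivity

function scrubWithGesture(video: HTMLVideoElement, verticalDelta: number): void {
  const t = video.currentTime + verticalDelta * SECONDS_PER_PIXEL;
  // Clamp so the progress bar stays within the video's duration.
  video.currentTime = Math.min(Math.max(t, 0), video.duration);
}
```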
In one possible scenario, as shown in fig. 9, a scene schematic diagram of another virtual element display method provided in an embodiment of the present application: scene (1) is the initial interface state; in scene (2), after the user begins the sliding operation (the target operation), the progress bar D1 of the target video is adjusted in response to the sliding distance D2, realizing the accompanied playing; scene (3) is the interface at some moment during the accompanied playing. When the target video finishes playing, or reaches a certain playback point, the target event D3 is displayed in the interactive interface so that the user can interact further, improving the interactivity of the virtual element display process.
Optionally, before the target video is played, the interactive interface may display content drawn from the target video: a target frame image of the video, for example the first frame, is obtained, and the initial state of the interactive interface is then updated based on that frame image. This improves the recognizability of the interactive interface, as in the sketch below.
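The snippet below grabs a frame of a loaded video via a canvas and uses it as the interface's initial background; it assumes the video's metadata and target frame have already loaded, and all names are illustrative.

```typescript
// Sketch (assumed implementation): draw the current (e.g. first) frame of the
// target video onto a canvas and set it as the interactive interface's
// initial background image.
function setInitialFrame(video: HTMLVideoElement, targetInterface: HTMLElement): void {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d')!.drawImage(video, 0, 0); // capture the target frame
  targetInterface.style.backgroundImage = `url(${canvas.toDataURL()})`;
}
```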
In another possible scenario, the target operation controls the progress bar of the target video directly, as shown in fig. 10, a scene schematic diagram of another virtual element display method provided in this embodiment. The figure shows the trajectory E1 of the target operation, its longitudinal component E2 and transverse component E3, the progress bar E4 of the target video, and the interface sliding distance E5. By decomposing the trajectory E1, the longitudinal component E2 and the transverse component E3 are obtained; the longitudinal component E2 is then associated with the interface sliding distance E5, and the transverse component E3 with the progress bar E4, so that video playback and interface sliding can proceed independently.
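A minimal sketch of this routing is shown below; splitting the gesture into window scrolling plus the scrubbing logic from the earlier snippet is an assumed decomposition, with the scrub ratio again a tuning parameter.

```typescript
// Sketch (assumed implementation): route one gesture to two targets, with the
// longitudinal component driving page sliding and the transverse component
// driving the target video's progress bar.
const SCRUB_SECONDS_PER_PIXEL = 0.01; // assumed sensitivity

function routeGesture(dx: number, dy: number, video: HTMLVideoElement): void {
  window.scrollBy(0, dy); // longitudinal component -> interface sliding distance
  const t = video.currentTime + dx * SCRUB_SECONDS_PER_PIXEL;
  video.currentTime = Math.min(Math.max(t, 0), video.duration); // -> progress bar
}
```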
803. If the relative positional relationship of the first and second virtual elements during playback satisfies the trigger condition, display the execution of the target event in the interactive interface.
In this embodiment, the trigger condition may be that the target video finishes playing, or that the relative positional relationship between the first and second virtual elements within the video satisfies the trigger condition; the specifics are similar to step 303 of the embodiment shown in fig. 3 and are not repeated here.
It can be understood that, by playing the target video, dynamic display of the virtual elements is achieved without on-the-fly background processing (for example, layer processing); only the two processing threads of page sliding and video playing are needed, saving the resources that dynamic display of virtual elements would otherwise consume in the interactive narrative story scene.
To better implement the above aspects of the embodiments of the present application, related apparatuses for implementing them are provided below. Referring to fig. 11, fig. 11 is a schematic structural diagram of a virtual element display apparatus according to an embodiment of the present application; the display apparatus 1100 includes:
a response unit 1101, configured to execute, in response to a target operation in a target interface, an interaction event including a first virtual element and a second virtual element, where the interaction event is used to indicate that the first virtual element and the second virtual element perform a relative displacement in the interaction interface, and the target interface includes the interaction interface;
a monitoring unit 1102, configured to monitor a relative position relationship between the first virtual element and the second virtual element in the relative displacement process;
a displaying unit 1103, configured to display an execution process of the target event in the interactive interface if the relative position relationship meets a trigger condition.
Optionally, in some possible implementations of the present application, the response unit 1101 is specifically configured to determine a first layer corresponding to the first virtual element and a second layer corresponding to the second virtual element;
the response unit 1101 is specifically configured to, in response to the target operation in the target interface, execute the interaction event based on the first layer and the second layer.
Optionally, in some possible implementation manners of the present application, the response unit 1101 is specifically configured to obtain an operation starting point of the target operation in the target interface, so as to trigger the first layer and the second layer to perform the relative displacement indicated by the interaction event;
the response unit 1101 is specifically configured to obtain an operation end point of the target operation in the target interface, so as to trigger the first layer and the second layer to stop the relative displacement.
Optionally, in some possible implementations of the present application, the response unit 1101 is specifically configured to determine, in response to the target operation in the target interface, an operation displacement distance corresponding to the target operation;
the response unit 1101 is specifically configured to determine layer displacement distances corresponding to the first layer and the second layer based on the operation displacement distance, so as to execute the interaction event.
Optionally, in some possible implementations of the present application, the response unit 1101 is specifically configured to determine, in response to the target operation in the target interface, display elements of the first layer and the second layer in the interactive interface;
the response unit 1101 is specifically configured to execute the interaction event based on the presentation element.
Optionally, in some possible implementations of the present application, the response unit 1101 is specifically configured to invoke, in response to the target operation in the target interface, a target video, where the target video depicts the process of relative displacement between the first virtual element and the second virtual element and is played based on the interactive interface;
the response unit 1101 is specifically configured to execute the interactive event based on the playing of the target video.
Optionally, in some possible implementations of the present application, the response unit 1101 is specifically configured to determine a playing progress bar of the target video;
the response unit 1101 is specifically configured to adjust the playback progress bar according to the component of the target operation along the target direction, so as to execute the interactive event.
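One way to realize this in a browser is to scrub the video's current time with the operation's component along the target (here, vertical) direction; the pixel-to-seconds scale and element id below are assumptions of this sketch.

```typescript
// Sketch: scrub the target video with the vertical component of the target
// operation. PX_PER_SECOND and the element id are hypothetical.
const vid = document.querySelector<HTMLVideoElement>("#target-video")!;
const PX_PER_SECOND = 200; // 200 px of swipe advances playback by one second

let anchorY = 0;
let anchorTime = 0;

window.addEventListener("touchstart", (e) => {
  anchorY = e.touches[0].clientY;
  anchorTime = vid.currentTime;
});

window.addEventListener("touchmove", (e) => {
  // only the component along the target (vertical) direction is used
  const dy = anchorY - e.touches[0].clientY;
  const t = anchorTime + dy / PX_PER_SECOND;
  vid.currentTime = Math.min(Math.max(t, 0), vid.duration || 0);
});
```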
Optionally, in some possible implementations of the present application, the response unit 1101 is specifically configured to acquire a target frame image of the target video;
the response unit 1101 is specifically configured to update the initial state of the interactive interface based on the target frame image.
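Drawing a paused video frame onto a canvas is a standard browser technique for obtaining a target frame image; the sketch below assumes hypothetical element ids and uses the frame as the interactive interface's initial background.

```typescript
// Sketch: capture the target frame of the video and use it to update the
// interactive interface's initial state. Element ids are hypothetical.
const source = document.querySelector<HTMLVideoElement>("#target-video")!;
const frame = document.createElement("canvas");

function applyTargetFrame(timeS: number): void {
  source.currentTime = timeS; // seek to the target frame
  source.addEventListener("seeked", function once() {
    source.removeEventListener("seeked", once);
    frame.width = source.videoWidth;
    frame.height = source.videoHeight;
    frame.getContext("2d")!.drawImage(source, 0, 0);
    const ui = document.querySelector<HTMLElement>("#interactive-interface")!;
    ui.style.backgroundImage = `url(${frame.toDataURL("image/png")})`;
  });
}
```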
Optionally, in some possible implementations of the present application, the display unit 1103 is specifically configured to determine a key element in the first virtual element and a key element in the second virtual element;
the display unit 1103 is specifically configured to trigger execution of the target event in the interactive interface if the relative position relationship indicates that the distance between the key element in the first virtual element and the key element in the second virtual element is smaller than a preset value.
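The trigger check itself reduces to a distance predicate on the key elements' positions; the preset value below is an illustrative assumption.

```typescript
// Sketch: the trigger condition as a distance check between the key elements'
// centers. PRESET_DISTANCE_PX is an illustrative assumption.
const PRESET_DISTANCE_PX = 8;

function shouldTrigger(keyA: HTMLElement, keyB: HTMLElement): boolean {
  const a = keyA.getBoundingClientRect();
  const b = keyB.getBoundingClientRect();
  const dx = a.left + a.width / 2 - (b.left + b.width / 2);
  const dy = a.top + a.height / 2 - (b.top + b.height / 2);
  return Math.hypot(dx, dy) < PRESET_DISTANCE_PX;
}
```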
Optionally, in some possible implementations of the present application, the display unit 1103 is specifically configured to invoke a jump virtual element corresponding to the target event, where the jump virtual element is used to indicate presentation of target media content in the target interface;
the display unit 1103 is specifically configured to capture the interface display at the moment when the distance between the key element in the first virtual element and the key element in the second virtual element equals the preset value;
the display unit 1103 is specifically configured to present the jump virtual element in the interactive interface with the captured interface display as a background.
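One way to approximate "capturing the interface display and using it as a background" in a web page is to rasterize the DOM; the sketch below uses the third-party html2canvas library as an assumed stand-in for whatever capture mechanism the embodiment actually employs.

```typescript
// Sketch: freeze the interface as a screenshot, then present the jump
// virtual element over it. html2canvas is an assumed capture mechanism.
import html2canvas from "html2canvas";

async function showJumpElement(ui: HTMLElement, jump: HTMLElement): Promise<void> {
  // capture the interface display at the moment the preset distance is reached
  const shot = await html2canvas(ui);
  ui.style.backgroundImage = `url(${shot.toDataURL("image/png")})`;
  // present the jump virtual element, which leads to the target media content
  ui.appendChild(jump);
  jump.style.display = "block";
}
```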
Optionally, in some possible implementations of the present application, the response unit 1101 is specifically configured to execute the interaction events in reverse order in response to a rollback operation in the target interface, where the direction of the rollback operation is opposite to that of the target operation.
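Reverse-order execution can be kept trivial by logging each forward displacement and replaying the log backwards; the bookkeeping below is an assumption of this sketch.

```typescript
// Sketch: reverse-order execution of the interaction event for the rollback
// operation. The displacement log is assumed bookkeeping.
const displacementLog: number[] = [];

function forwardStep(distancePx: number): void {
  displacementLog.push(distancePx);
  moveLayers(distancePx); // target operation: forward displacement
}

function rollbackStep(): void {
  const d = displacementLog.pop();
  if (d !== undefined) moveLayers(-d); // opposite direction, reverse order
}

function moveLayers(distancePx: number): void {
  // translate the first and second layers by ±distancePx, as sketched above
}
```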
In summary, an interaction event containing a first virtual element and a second virtual element is executed in response to a target operation in a target interface, where the interaction event indicates that the first virtual element and the second virtual element perform a relative displacement in the interactive interface, and the target interface contains the interactive interface; the relative position relationship between the first virtual element and the second virtual element is monitored during the relative displacement; and when the relative position relationship meets the trigger condition, the execution process of the target event is displayed in the interactive interface. Therefore, during the interactive display of the virtual elements, the position relationship between the virtual elements is dynamic and reflects the interactive relationship between them; the display of the target event makes this interactive relationship clearer, improving the comprehensiveness and accuracy of the virtual element display.
An embodiment of the present application further provides a terminal device. Fig. 12 is a schematic structural diagram of another terminal device provided in an embodiment of the present application; for convenience of description, only the portions related to this embodiment are shown, and for specific technical details not disclosed here, refer to the method portions of the embodiments of the present application. The terminal may be any terminal device, including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point of sale (POS) terminal, a vehicle-mounted computer, and the like; a mobile phone is taken as an example below:
fig. 12 is a block diagram illustrating a partial structure of a mobile phone related to the terminal provided in an embodiment of the present application. Referring to fig. 12, the mobile phone includes: a radio frequency (RF) circuit 1210, a memory 1220, an input unit 1230, a display unit 1240, sensors 1250, an audio circuit 1260, a wireless fidelity (WiFi) module 1270, a processor 1280, and a power supply 1290. Those skilled in the art will appreciate that the configuration shown in fig. 12 is not limiting: the mobile phone may include more or fewer components than shown, combine certain components, or arrange the components differently.
The following describes each component of the mobile phone in detail with reference to fig. 12:
the RF circuit 1210 is configured to receive and transmit signals during information transmission and reception or during a call; in particular, after receiving downlink information from a base station, it delivers the information to the processor 1280 for processing, and it transmits uplink data to the base station. In general, the RF circuit 1210 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1210 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), email, short message service (SMS), etc.
The memory 1220 may be used to store software programs and modules, and the processor 1280 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1220. The memory 1220 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data or a phonebook), and the like. Further, the memory 1220 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 1230 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 1230 may include a touch panel 1231 and other input devices 1232. The touch panel 1231, also referred to as a touch screen, can collect touch operations of a user (e.g., operations of the user on or near the touch panel 1231 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 1231 may include two portions: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 1280, and it can also receive and execute commands sent by the processor 1280. In addition, the touch panel 1231 may be implemented as a resistive, capacitive, infrared, or surface-acoustic-wave type. Besides the touch panel 1231, the input unit 1230 may include other input devices 1232, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 1240 may be used to display information input by the user or information provided to the user and various menus of the cellular phone. The display unit 1240 may include a display panel 1241, and optionally, the display panel 1241 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like. Further, touch panel 1231 can overlay display panel 1241, and when touch panel 1231 detects a touch operation thereon or nearby, the touch panel 1231 can transmit the touch operation to processor 1280 to determine the type of the touch event, and then processor 1280 can provide a corresponding visual output on display panel 1241 according to the type of the touch event. Although in fig. 12, the touch panel 1231 and the display panel 1241 are implemented as two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 1231 and the display panel 1241 may be integrated to implement the input and output functions of the mobile phone.
The cell phone may also include at least one sensor 1250, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 1241 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1241 and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 1260, a speaker 1261, and a microphone 1262 provide an audio interface between the user and the mobile phone. The audio circuit 1260 can convert received audio data into an electrical signal and transmit it to the speaker 1261, which converts it into a sound signal for output; conversely, the microphone 1262 converts collected sound signals into electrical signals, which the audio circuit 1260 receives and converts into audio data; the audio data is output to the processor 1280 for processing and then transmitted via the RF circuit 1210 to, for example, another mobile phone, or output to the memory 1220 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1270, the mobile phone can help the user send and receive e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 12 shows the WiFi module 1270, it can be understood that it is not an essential component of the mobile phone and may be omitted as needed without changing the essence of the invention.
The processor 1280 is a control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1220 and calling data stored in the memory 1220, thereby performing overall monitoring of the mobile phone. Optionally, processor 1280 may include one or more processing units; optionally, the processor 1280 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into the processor 1280.
The mobile phone further includes a power supply 1290 (e.g., a battery) for supplying power to each component, and optionally, the power supply may be logically connected to the processor 1280 through a power management system, so that the power management system may manage functions such as charging, discharging, and power consumption management.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In this embodiment, the processor 1280 included in the terminal further has the function of executing each step of the virtual element display method described above.
Referring to fig. 13, fig. 13 is a schematic structural diagram of a server provided in this embodiment. The server 1300 may vary considerably in configuration or performance, and may include one or more central processing units (CPUs) 1322 (e.g., one or more processors), a memory 1332, and one or more storage media 1330 (e.g., one or more mass storage devices) storing an application 1342 or data 1344. The memory 1332 and the storage medium 1330 may be transitory or persistent storage. The program stored on the storage medium 1330 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Furthermore, the central processing unit 1322 may be configured to communicate with the storage medium 1330 and execute, on the server 1300, the series of instruction operations in the storage medium 1330.
The server 1300 may also include one or more power supplies 1326, one or more wired or wireless network interfaces 1350, one or more input/output interfaces 1358, and/or one or more operating systems 1341, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed in the above-described embodiments may be based on the server structure shown in fig. 13.
An embodiment of the present application further provides a computer-readable storage medium, in which instructions for displaying a virtual element are stored, and when the instructions are executed on a computer, the instructions cause the computer to perform the steps performed by the display apparatus for virtual elements in the methods described in the foregoing embodiments shown in fig. 2 to 10.
Also provided in an embodiment of the present application is a computer program product including instructions for displaying a virtual element, which, when run on a computer, causes the computer to perform the steps performed by the apparatus for displaying a virtual element in the method described in the foregoing embodiments shown in fig. 2 to 10.
The embodiment of the present application further provides a system for displaying a virtual element, where the system for displaying a virtual element may include a display apparatus of a virtual element in the embodiment described in fig. 11, a terminal device in the embodiment described in fig. 12, or a server described in fig. 13.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a virtual element display apparatus, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (15)

1. A method for displaying virtual elements, applied to an interactive narrative platform, the method comprising:
responding to a target operation in a target interface, executing an interaction event containing a first virtual element and a second virtual element, wherein the interaction event is used for indicating the first virtual element and the second virtual element to perform relative displacement in the interaction interface, and the target interface contains the interaction interface; the interactive interface is an advertisement display frame of an interactive narrative story; the first virtual element and the second virtual element are virtual elements in an interactive narrative scene;
monitoring the relative position relation of the first virtual element and the second virtual element in the relative displacement process;
if the relative position relation meets a trigger condition, displaying an execution process of a target event in the interactive interface, wherein displaying the execution process of the target event comprises: generating an entry for the interactive narrative story, or playing a video associated with the interactive narrative story.
2. The method of claim 1, wherein executing an interaction event comprising a first virtual element and a second virtual element in response to a target operation in a target interface comprises:
determining a first layer corresponding to the first virtual element and a second layer corresponding to the second virtual element;
and responding to the target operation in the target interface, and executing the interaction event based on the first layer and the second layer.
3. The method of claim 2, wherein the performing the interaction event based on the first layer and the second layer in response to the target operation in the target interface comprises:
acquiring an operation starting point of the target operation in the target interface to trigger the first layer and the second layer to perform relative displacement indicated by the interaction event;
and acquiring an operation end point of the target operation in the target interface to trigger the first layer and the second layer to stop the relative displacement.
4. The method of claim 2, wherein the performing the interaction event based on the first layer and the second layer in response to the target operation in the target interface comprises:
responding to the target operation in the target interface, and determining an operation displacement distance corresponding to the target operation;
and determining layer displacement distances corresponding to the first layer and the second layer based on the operation displacement distance so as to execute the interaction event.
5. The method of claim 2, wherein the performing the interaction event based on the first layer and the second layer in response to the target operation in the target interface comprises:
responding to the target operation in the target interface, and determining display elements of the first layer and the second layer in the interactive interface;
executing the interaction event based on the presentation element.
6. The method of claim 1, wherein executing an interaction event comprising a first virtual element and a second virtual element in response to a target operation in a target interface comprises:
responding to the target operation in the target interface, calling a target video, wherein the target video is used for indicating the process of relative displacement of the first virtual element and the second virtual element, and the target video is played based on the interactive interface;
and executing the interactive event based on the playing of the target video.
7. The method of claim 6, wherein the performing the interactivity event based on the playing of the target video comprises:
determining a playing progress bar of the target video;
and regulating and controlling the playing progress bar according to the operation component of the target operation in the target direction so as to execute the interactive event.
8. The method of claim 6, further comprising:
acquiring a target frame image of the target video;
and updating the initial state of the interactive interface based on the target frame image.
9. The method according to claim 1, wherein if the relative position relationship satisfies a trigger condition, displaying an execution process of a target event in the interactive interface includes:
determining a key element in the first virtual element and a key element in the second virtual element;
and if the relative position relationship indicates that the distance between the key element in the first virtual element and the key element in the second virtual element is smaller than a preset value, triggering the execution of the target event in the interactive interface.
10. The method of claim 9, wherein triggering execution of the target event in the interactive interface comprises:
calling a jump virtual element corresponding to the target event, wherein the jump virtual element is used for indicating the display of target media content in the target interface;
capturing the interface display when the distance between the key element in the first virtual element and the key element in the second virtual element is equal to the preset value;
and displaying the jump virtual element in the interactive interface by taking the interface display as a background.
11. The method according to any one of claims 1-10, further comprising:
and responding to a rollback operation in the target interface, and executing the interaction events in a reverse order, wherein the rollback operation is opposite to the operation direction of the target operation.
12. The method according to claim 1, wherein the target interface is a media content display interface, the target operation is a slide-down operation, the interactive interface is an advertisement display frame, the first virtual element and the second virtual element are advertisement materials in the advertisement display frame, and the target event is playing of an advertisement video.
13. A virtual element display device, applied to an interactive narrative platform, the device comprising:
the response unit is used for responding to a target operation in a target interface and executing an interaction event containing a first virtual element and a second virtual element, wherein the interaction event is used for indicating the first virtual element and the second virtual element to perform relative displacement in the interaction interface, and the target interface contains the interaction interface; the interactive interface is an advertisement display frame of an interactive narrative story; the first virtual element and the second virtual element are virtual elements in an interactive narrative scene;
the monitoring unit is used for monitoring the relative position relation of the first virtual element and the second virtual element in the relative displacement process;
a display unit, configured to display an execution process of the target event in the interactive interface if the relative position relationship satisfies a trigger condition, where the execution process of the target event includes: generating an entry for the interactive narrative story, or playing a video associated with the interactive narrative story.
14. A computer device, comprising a processor and a memory, wherein:
the memory is configured to store program code; and the processor is configured to execute, according to instructions in the program code, the method for displaying a virtual element according to any one of claims 1 to 12.
15. A computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to execute the method for displaying a virtual element according to any one of claims 1 to 12.
CN202010989042.8A 2020-09-18 2020-09-18 Virtual element display method and related device Active CN112099713B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010989042.8A CN112099713B (en) 2020-09-18 2020-09-18 Virtual element display method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010989042.8A CN112099713B (en) 2020-09-18 2020-09-18 Virtual element display method and related device

Publications (2)

Publication Number Publication Date
CN112099713A CN112099713A (en) 2020-12-18
CN112099713B true CN112099713B (en) 2022-02-01

Family

ID=73758898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010989042.8A Active CN112099713B (en) 2020-09-18 2020-09-18 Virtual element display method and related device

Country Status (1)

Country Link
CN (1) CN112099713B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327214A (en) * 2022-01-05 2022-04-12 北京有竹居网络技术有限公司 Interaction method, interaction device, electronic equipment, storage medium and computer program product

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108984087A (en) * 2017-06-02 2018-12-11 腾讯科技(深圳)有限公司 Social interaction method and device based on three-dimensional avatars
CN110377861A (en) * 2019-07-23 2019-10-25 腾讯科技(深圳)有限公司 Element interactive approach, device, storage medium and computer equipment between scene
CN110913261A (en) * 2019-11-19 2020-03-24 维沃移动通信有限公司 Multimedia file generation method and electronic equipment
CN110908757A (en) * 2019-11-18 2020-03-24 腾讯科技(深圳)有限公司 Method and related device for displaying media content
CN111265869A (en) * 2020-01-14 2020-06-12 腾讯科技(深圳)有限公司 Virtual object detection method, device, terminal and storage medium
CN111275797A (en) * 2020-02-26 2020-06-12 腾讯科技(深圳)有限公司 Animation display method, device, equipment and storage medium
CN111324253A (en) * 2020-02-12 2020-06-23 腾讯科技(深圳)有限公司 Virtual article interaction method and device, computer equipment and storage medium
CN111659120A (en) * 2020-07-16 2020-09-15 网易(杭州)网络有限公司 Virtual role position synchronization method, device, medium and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102955651A (en) * 2011-08-24 2013-03-06 宏碁股份有限公司 Advertisement and multimedia video interaction system and advertisement and multimedia movie interaction method
CN108671543A (en) * 2018-05-18 2018-10-19 腾讯科技(深圳)有限公司 Labelled element display methods, computer equipment and storage medium in virtual scene
CN110162667A (en) * 2019-05-29 2019-08-23 北京三快在线科技有限公司 Video generation method, device and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108984087A (en) * 2017-06-02 2018-12-11 腾讯科技(深圳)有限公司 Social interaction method and device based on three-dimensional avatars
CN110377861A (en) * 2019-07-23 2019-10-25 腾讯科技(深圳)有限公司 Element interactive approach, device, storage medium and computer equipment between scene
CN110908757A (en) * 2019-11-18 2020-03-24 腾讯科技(深圳)有限公司 Method and related device for displaying media content
CN110913261A (en) * 2019-11-19 2020-03-24 维沃移动通信有限公司 Multimedia file generation method and electronic equipment
CN111265869A (en) * 2020-01-14 2020-06-12 腾讯科技(深圳)有限公司 Virtual object detection method, device, terminal and storage medium
CN111324253A (en) * 2020-02-12 2020-06-23 腾讯科技(深圳)有限公司 Virtual article interaction method and device, computer equipment and storage medium
CN111275797A (en) * 2020-02-26 2020-06-12 腾讯科技(深圳)有限公司 Animation display method, device, equipment and storage medium
CN111659120A (en) * 2020-07-16 2020-09-15 网易(杭州)网络有限公司 Virtual role position synchronization method, device, medium and electronic equipment

Also Published As

Publication number Publication date
CN112099713A (en) 2020-12-18

Similar Documents

Publication Publication Date Title
CN111061574B (en) Object sharing method and electronic device
US10659844B2 (en) Interaction method and system based on recommended content
CN108055408B (en) Application program control method and mobile terminal
CN109525874B (en) Screen capturing method and terminal equipment
US10506292B2 (en) Video player calling method, apparatus, and storage medium
CN109189300B (en) View circulation display method and device
CN108646961B (en) Management method and device for tasks to be handled and storage medium
CN109885373B (en) Rendering method and device of user interface
WO2019149028A1 (en) Application download method and terminal
CN110569078B (en) Method and device for preloading webpage-level program
CN108958629B (en) Split screen quitting method and device, storage medium and electronic equipment
WO2017193496A1 (en) Application data processing method and apparatus, and terminal device
US20160292946A1 (en) Method and apparatus for collecting statistics on network information
CN104571979A (en) Method and device for realizing split-screen views
CN112817501A (en) Method and related device for displaying media content
CN110908757B (en) Method and related device for displaying media content
CN112099713B (en) Virtual element display method and related device
CN107193551B (en) Method and device for generating image frame
WO2015135457A1 (en) Method, apparatus, and system for sending and playing multimedia information
WO2018145539A1 (en) Streaming media data processing method and mobile terminal
CN115373577A (en) Image processing method and device and computer readable storage medium
CN108920086B (en) Split screen quitting method and device, storage medium and electronic equipment
CN108170362B (en) Application running state control method and mobile terminal
CN110865743A (en) Task management method and terminal equipment
CN111427496B (en) Parameter adjusting method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40035765

Country of ref document: HK

GR01 Patent grant