CN117519874A - Picture updating method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN117519874A (application number CN202311498391.XA)
- Authority
- CN
- China
- Prior art keywords
- interaction
- target
- picture
- target layer
- rendering data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/958—Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
Abstract
The disclosure provides a picture updating method and apparatus, an electronic device, and a storage medium. The method includes: displaying a target picture on a display interface, where the target picture includes a target layer and a non-target layer, the target layer contains objects that respond to an interactive operation, and the non-target layer contains objects that do not; in response to the interactive operation, sending a picture interaction request to a server; acquiring, in a first acquisition mode, interaction response information fed back by the server for the picture interaction request; acquiring rendering data of the non-target layer in a second acquisition mode; and updating the target picture based on the interaction response information and the rendering data of the non-target layer. With this method, the rendering data of the entire target picture need not be transmitted after an interactive operation, which reduces network transmission pressure, lowers the probability of network delay, packet loss, network jitter, and similar phenomena, and improves picture smoothness and user experience.
Description
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a method and device for updating a picture, an electronic device and a storage medium.
Background
A cloud game is a game mode based on cloud computing. In a cloud game scenario, the game runs not on the player's terminal but on a cloud server; the cloud server renders the game scene into an audio and video stream and transmits it to the player's terminal over the network.
In the related art, in response to a triggering operation on an interactable object, the terminal sends an interaction request to the cloud server, and the cloud server, based on the interaction request, transmits the re-rendered game picture to the terminal for display in the form of a video stream. During video stream transmission, however, network transmission pressure is high, and network delay, packet loss, and network jitter easily occur, degrading the gaming experience.
Disclosure of Invention
The disclosure provides a picture updating method and apparatus, and an electronic device, aiming to solve the problems of time-consuming rendering and high network transmission pressure in cloud terminal games.
According to an aspect of the present disclosure, there is provided a picture update method including:
displaying a target picture on a display interface, where the target picture includes a target layer and a non-target layer, the target layer contains objects that respond to an interactive operation, and the non-target layer contains objects that do not respond to the interactive operation;
in response to the interactive operation, sending a picture interaction request to a server;
acquiring, in a first acquisition mode, interaction response information fed back by the server for the picture interaction request;
acquiring rendering data of the non-target layer in a second acquisition mode; and
updating the target picture based on the interaction response information and the rendering data of the non-target layer.
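The terminal-side steps above can be sketched in code. This is a minimal illustrative sketch only: all names (`update_target_picture`, `server.handle`, `layer_cache`) are assumptions made for illustration, not identifiers from the disclosure.

```python
def update_target_picture(interaction, server, layer_cache):
    """Update the displayed picture after an interactive operation.

    Hypothetical sketch of the claimed terminal-side flow: only the
    interaction response (the re-rendered target layer) is fetched from
    the server; the non-target layer is obtained separately.
    """
    # Step 1: the interactive operation produces a picture interaction request.
    request = {"object_id": interaction["object_id"],
               "action": interaction["action"]}

    # Step 2 (first acquisition mode): ask the server only for the
    # interaction response information, not the whole re-rendered picture.
    target_layer = server.handle(request)

    # Step 3 (second acquisition mode): reuse locally held rendering data
    # for the non-target layer, which the interaction does not change.
    non_target_layer = layer_cache["non_target"]

    # Step 4: compose both layers into the updated target picture
    # (bottom layer first, target layer on top).
    return {"layers": [non_target_layer, target_layer]}
```

The point of the split is that only the target-layer payload crosses the network after the interaction, rather than a video stream of the full picture.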
According to another aspect of the present disclosure, there is provided a picture update method including:
receiving a picture interaction request for a target picture sent by a terminal, where the target picture includes a target layer and a non-target layer, the target layer contains objects that respond to an interactive operation, the non-target layer contains objects that do not, and the interaction request is generated based on an interactive operation acting on the target layer; and
sending interaction response information to the terminal based on the picture interaction request, where the interaction response information and the rendering data of the non-target layer are used to update the target picture, and the first acquisition mode used by the terminal for the interaction response information differs from the second acquisition mode used for the rendering data of the non-target layer.
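The server-side behavior described above, returning only the interaction response information rather than a full re-rendered picture, can be sketched as follows. The function name and request/response fields are hypothetical assumptions, not part of the disclosure.

```python
def handle_picture_interaction(request):
    """Hypothetical server-side handler for a picture interaction request.

    Re-renders only the target layer (the layer containing the object
    that responds to the interaction) and returns it as interaction
    response information; the non-target layer is never sent.
    """
    object_id = request["object_id"]
    action = request["action"]
    # Illustrative "re-rendering": record the object's post-interaction state.
    new_state = {"object_id": object_id, "state": f"after_{action}"}
    return {"target_layer_update": new_state}
```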
According to another aspect of the present disclosure, there is provided a picture updating apparatus including:
a display module configured to display a target picture on a display interface, where the target picture includes a target layer and a non-target layer, the target layer contains objects that respond to an interactive operation, and the non-target layer contains objects that do not;
a sending module configured to send a picture interaction request to a server in response to the interactive operation;
a first acquisition module configured to acquire, in a first acquisition mode, interaction response information fed back by the server for the picture interaction request;
a second acquisition module configured to acquire rendering data of the non-target layer in a second acquisition mode; and
an updating module configured to update the target picture based on the interaction response information and the rendering data of the non-target layer.
According to another aspect of the present disclosure, there is provided a picture updating apparatus including:
a receiving module configured to receive a picture interaction request for a target picture sent by a terminal, where the target picture includes a target layer and a non-target layer, the target layer contains objects that respond to an interactive operation, the non-target layer contains objects that do not, and the interaction request is generated based on an interactive operation acting on the target layer; and
a sending module configured to send interaction response information to the terminal based on the picture interaction request, where the interaction response information and the rendering data of the non-target layer are used to update the target picture, and the first acquisition mode used by the terminal for the interaction response information differs from the second acquisition mode used for the rendering data of the non-target layer.
According to another aspect of the present disclosure, there is provided an electronic device, which is a terminal, including:
a processor; and
a memory storing a program;
where the program comprises instructions which, when executed by the processor, cause the processor to perform the method executed by the terminal according to the exemplary embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided an electronic device, which is a server, including:
a processor; and
a memory storing a program;
where the program comprises instructions which, when executed by the processor, cause the processor to perform the method executed by the server according to the exemplary embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a method according to an exemplary embodiment of the present disclosure.
According to one or more technical solutions provided in the exemplary embodiments of the present disclosure, a target picture is displayed on a display interface, the target picture including a target layer containing objects that respond to an interactive operation and a non-target layer containing objects that do not; in response to the interactive operation, a picture interaction request is sent to a server; interaction response information fed back by the server for the picture interaction request is acquired in a first acquisition mode; rendering data of the non-target layer is acquired in a second acquisition mode; and the target picture is updated based on the interaction response information and the rendering data of the non-target layer. Because the interactive operation acts on an object that responds to it, and the layer to which that object belongs is the target layer, the rendering data of the target layer changes after the interaction while the rendering data of the non-target layer does not, and the updated rendering data of the target layer can be determined from the interaction response information fed back by the server for the picture interaction request. Therefore, in the data transmission after the interactive operation, the exemplary embodiments of the present disclosure can obtain the interaction response information and the rendering data of the non-target layer over two separate transmission paths, so the server does not need to transmit the rendering data of the entire re-rendered target picture to the terminal. This improves data processing efficiency, reduces network transmission pressure, lowers the probability of network delay, packet loss, network jitter, and similar phenomena, and improves picture smoothness and user experience.
Drawings
Further details, features and advantages of the present disclosure are disclosed in the following description of exemplary embodiments, with reference to the following drawings, wherein:
FIG. 1 illustrates a schematic diagram of an example system in which various methods described herein may be implemented, according to an example embodiment of the present disclosure;
FIG. 2 illustrates a flowchart of a picture updating method according to an exemplary embodiment of the present disclosure;
FIG. 3 illustrates a schematic diagram of an update process of a target picture according to an exemplary embodiment of the present disclosure;
FIG. 4 shows a block diagram of a picture updating apparatus according to an exemplary embodiment of the present disclosure;
FIG. 5 shows another block diagram of a picture updating apparatus according to an exemplary embodiment of the present disclosure;
FIG. 6 shows a schematic block diagram of a chip according to an exemplary embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to." The term "based on" means "based at least in part on." The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments." Related definitions of other terms are given in the description below. It should be noted that the terms "first," "second," and the like in this disclosure merely distinguish different devices, modules, or units, and do not define an order or interdependence of the functions they perform.
It should be noted that references to "one" and "a plurality" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Before describing the embodiments of the present disclosure, definitions are first provided for the related terms involved in the embodiments of the present disclosure:
the terminal game refers to a game directly running on a terminal device such as a mobile phone or a computer.
The cloud terminal game is a terminal game whose game package, developed for a terminal architecture, is moved to a cloud server to run: the user's operating instructions on the terminal are transmitted to the cloud server, and the responding picture is returned to the terminal as a video stream. The terminal therefore does not need to install the game package, and the user can experience the cloud game much like watching a video.
The cloud primary game (i.e., a cloud-native game) is a game born in the cloud: available cloud resources are fully considered from the beginning of design and development, and the game is designed and developed for a cloud architecture.
A terminal game can be likened to a potted plant: the pot is the terminal, the plant in the pot is the terminal game, and the plant's roots are the game's ability to acquire terminal resources. The pot size limits the size of the plant; that is, the terminal's capabilities limit the terminal game experience. Conversely, the larger the plant, the larger the pot needed: the better the desired game experience, the more terminal performance is required. At the same time, the roots are confined to the pot, so terminal games can only be designed and developed under a limited architecture.
A cloud terminal game can be likened to a potted plant buried in a forest: the forest is a cloud server with vast resources, the pot is a terminal constructed on the cloud server through virtualization, containers, and similar means, and the plant is still a terminal game designed and developed for a terminal architecture. Because the cloud terminal game is constrained by the terminal architecture from the beginning of design and development, even a pot buried in a vast forest can only use the resources within the pot, and cannot fully exploit the resource advantage of the cloud server.
A cloud primary game can be likened to a plant growing directly in the forest: the forest is a cloud server with vast resources, and the plant is the cloud primary game. The plant's growth is no longer limited by a pot, and it can fully acquire and use the resources of the forest; that is, the cloud primary game can fully utilize the resources of the cloud server.
In the related art, a cloud terminal game is developed with a terminal-oriented game engine or development method (such as Cocos or Unity) and is therefore limited by the terminal. Even if the developed terminal game is moved to run on a cloud server, this is merely a clouded version of the game rather than a true cloud primary game, and cloud server resources cannot be fully utilized.
When a cloud terminal game renders a picture, the layers in the game scene canvas are generally stacked and rendered into a video stream that is transmitted to the terminal over the network for display. When a user interacts with the cloud terminal game, the entire post-interaction game scene picture usually has to be transmitted to the terminal as a video stream; network transmission pressure is high, and network delay, packet loss, and network jitter easily occur when displaying on the terminal, affecting the user experience and game effect.
In view of the above problems, exemplary embodiments of the present disclosure provide a picture updating method that, after an interactive operation is performed on a target interactable object, updates the target picture using the interaction response result for that object, thereby improving data processing efficiency, reducing network transmission pressure, lowering the probability of network delay, packet loss, and network jitter, improving picture smoothness, and improving user experience.
Fig. 1 illustrates a schematic diagram of an example system in which various methods described herein may be implemented, according to an example embodiment of the present disclosure. As shown in fig. 1, an application scenario 100 of an exemplary embodiment of the present disclosure includes a terminal 110 and a server 120.
As shown in fig. 1, the device types of the terminal 110 include: at least one of a cell phone, a tablet (portable Android device, PAD), a wearable device, a vehicle-mounted device, a notebook or laptop computer, a desktop computer, a mobile internet device (MID), an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a smart television, and a wearable device based on augmented reality (AR) and/or virtual reality (VR) technology, etc.
For example, when the terminal is a wearable device, the wearable device may be a general term for everyday wearables, such as glasses, gloves, watches, clothing, and shoes, that are intelligently designed and developed using wearable technology. A wearable device is a portable device worn directly on the body or integrated into the user's clothing or accessories. It is not merely a hardware device: it can realize powerful functions through software support, data interaction, and cloud interaction. In a broad sense, wearable smart devices include full-featured, large-sized devices that can realize complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on a particular application function and must be used with another device such as a smartphone, for example various smart bracelets and smart jewelry for physical sign monitoring.
The terminal 110 may communicate with the server 120 through a communication network. The communication network may be a wireless communication network, such as satellite or microwave communication, or a wired communication network, such as optical fiber or power line carrier communication; it may be a local area network, such as a Wi-Fi or ZigBee network, or a wide area network, such as the Internet.
As shown in fig. 1, the server 120 may be one server or may be a server cluster formed by a plurality of servers. The server 120 may include one or more (only one shown in fig. 1) processors, which may include, but are not limited to, a microprocessor MCU or a programmable logic device FPGA or the like, and a memory for storing data; the memory may be used to store a computer program, such as a software program of application software and a module, such as a computer program corresponding to a big data processing method applied to a cloud primary game service in an exemplary embodiment of the present disclosure.
In an alternative way, the server 120 may further comprise transmission means for communication functions. The processor executes the computer program stored in the memory to perform various functional applications and data processing, i.e., to implement the methods described above. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
In one example, the memory may further include memory remotely located relative to the processor, which may be connected to the server 120 through a communication network. Examples of such communication networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means are for receiving or transmitting data via a communication network. Specific examples of the communication network described above may include a wireless network provided by a communication provider of the server 120. For example, the transmission means comprises a network adapter (Network Interface Controller, NIC) which can be connected to other network devices via the base station so as to communicate with the internet. For another example, the transmission device may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
In practical applications, a user may, through the terminal 110, perform an interactive operation on an object in the current picture, where the picture includes a target layer and a non-target layer and the object belongs to the target layer. Based on the interactive operation, the terminal 110 may send a corresponding picture interaction request to the server 120 through the communication network. Based on this request, the server 120 does not transmit a video stream of the entire re-rendered picture to the terminal 110, but instead transmits the interaction response information of the object, which the terminal 110 acquires in the first acquisition mode. Meanwhile, the terminal 110 may acquire the rendering information of the non-target layer in the second acquisition mode and update the current picture based on the interaction response information and the rendering information of the non-target layer. That is, the exemplary embodiments of the present disclosure may use different data transmission modes for different layers, reducing the amount of layer rendering information transmitted and the network transmission pressure.
The picture updating method provided by the exemplary embodiment of the present disclosure may be applied to an electronic device or a chip in the electronic device, where the electronic device may be a terminal or a server. The method of exemplary embodiments of the present disclosure is described in detail below with reference to the attached drawing figures.
For example, the electronic device of the exemplary embodiments of the present disclosure may be a terminal having a display function, and the type thereof may be the same as that of the terminal in the foregoing.
The steps executed by the terminal in the picture updating method according to the exemplary embodiments of the present disclosure may be executed by a chip applied in the terminal, and the steps executed by the server may be executed by a chip applied in the server. The following embodiments take the terminal and the server as the respective execution bodies and describe the method of the exemplary embodiments of the present disclosure in an interactive manner with reference to the accompanying drawings. Fig. 2 shows a flowchart of a picture updating method according to an exemplary embodiment of the present disclosure. As shown in fig. 2, the picture updating method of the exemplary embodiment of the present disclosure may include:
step 201: the terminal displays a target picture on a display interface, wherein the target picture comprises a target layer and a non-target layer, the target layer comprises an object responding to the interactive operation, and the non-target layer comprises an object not responding to the interactive operation.
In a cloud game scene, the target picture may be a game scene picture displayed on the display interface. The target picture may be a simulated environment of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment, determined according to the actual application scene and not specifically limited here.
The target picture may include a plurality of objects, which may include one or more of virtual characters, game props, replaceable background pictures, and the like, but are not limited thereto. In picture rendering, a game scene picture may be composed by stacking a plurality of layers; therefore, the rendering data of the target picture may include the rendering data of the target layer and the rendering data of the non-target layer. In terms of the number of layers, the target picture may thus have a plurality of layers. In terms of the number of objects, the target picture may include a plurality of objects, and each object may belong to its own layer, or several objects may share the same layer.
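The stacking of layers into a composed picture described above can be illustrated with a minimal sketch. Representing a layer as a mapping from positions to pixel values is an assumption made purely for illustration, not a structure from the disclosure.

```python
def compose_picture(layers):
    """Stack layers bottom-to-top into a single picture.

    Each layer is modeled (for illustration only) as a dict mapping a
    (x, y) position to a pixel value; later layers are drawn on top, so
    their values overwrite those of the layers beneath them.
    """
    picture = {}
    for layer in layers:        # iterate bottom layer first
        picture.update(layer)   # upper layers overwrite lower ones
    return picture
```

A terminal could recompose the picture this way after swapping in only the updated target layer, leaving the cached non-target layer untouched.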
For the layers, the exemplary embodiments of the present disclosure divide them into a target layer and a non-target layer according to whether an interaction occurs on them. For the objects, the exemplary embodiments of the present disclosure divide them into objects that respond to the interactive operation and objects that do not, according to whether they respond to it. In the method of the exemplary embodiments of the present disclosure, the target layer contains the objects that respond to the interactive operation, and the non-target layer contains the objects that do not.
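The partition of objects into a target layer and a non-target layer can be sketched as follows; the `interactive` flag and the object structure are illustrative assumptions, not fields defined by the disclosure.

```python
def split_layers(objects):
    """Partition scene objects by whether they respond to interaction.

    Objects that respond to the interactive operation go to the target
    layer; all others go to the non-target layer, matching the scheme
    described above. The 'interactive' key is a hypothetical marker.
    """
    target = [o for o in objects if o.get("interactive")]
    non_target = [o for o in objects if not o.get("interactive")]
    return {"target": target, "non_target": non_target}
```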
An object that responds to an interactive operation can engage in human-computer interaction with the user and undergoes some change before and after the interaction, where the change includes one or more of changes in position, state, gesture, action, color, special effect, and the like, but is not limited thereto.
An object that does not respond to the interactive operation does not interact with the user when the interactive operation occurs; it either remains unchanged in the display interface or changes adaptively (e.g., an adaptive switch of the background picture) as the game scene changes.
In the related art, when an interactive operation requires the target picture to be updated, the terminal acquires the rendering data of the updated target picture from the server, and the rendering data is transmitted to the terminal as a video stream over the network; because the data volume is large, phenomena such as stuttering and packet loss easily occur. In the method of the exemplary embodiment of the present disclosure, since the target layer includes the objects that respond to the interactive operation and the non-target layer includes the objects that do not, only the rendering data of the target layer changes after the interactive operation occurs, while the rendering data of the non-target layer remains unchanged before and after the interactive operation. Based on this, the exemplary embodiments of the present disclosure may acquire the updated rendering data of the target layer (i.e., the interaction response information hereinafter) by performing steps 202 to 204, and acquire the rendering data of the non-target layer by performing step 205.
Step 202: the terminal sends a picture interaction request to the server in response to the interactive operation.
When the user performs an interactive operation on an object that responds to the interactive operation, the terminal monitors the interactive instruction triggered by the interactive operation and may send a picture interaction request to the server based on that instruction. That is, the picture interaction request is generated based on the interactive operation. The interactive operation may be input to the terminal through an input device such as a mouse or a keyboard; when the terminal has a touch screen, it may also be input through a touch, click, or similar operation on the touch screen.
Step 203: the server receives the picture interaction request for the target picture sent by the terminal.
The picture interaction request may carry the interactive operation performed on the object that responds to it, and is used to determine the picture generated after that object performs the corresponding interactive operation.
Step 204: the server sends interaction response information to the terminal based on the picture interaction request. The interaction response information and the rendering data of the non-target layer are used to update the target picture, and the first acquisition manner in which the terminal acquires the interaction response information differs from the second acquisition manner in which it acquires the rendering data of the non-target layer. At this time, the terminal may acquire, based on the first acquisition manner, the interaction response information fed back by the server for the picture interaction request.
When the interactive operation occurs, the server may send the interaction response information to the terminal in real time, and the terminal may acquire it from the server in real time according to the first acquisition manner. The interaction response information may be understood as the change information (which may be referred to as the interaction response result) produced by the object responding to the interactive operation after it performs man-machine interaction. In essence, the interaction response information is the updated rendering data of the layer to which that object belongs (i.e., the target layer).
In practical application, the objects in the non-target layer do not respond to the interactive operation; that is, the rendering data of the non-target layer does not change before and after the object responding to the interactive operation executes it. Therefore, the exemplary embodiments of the present disclosure may store the rendering data of the non-target layer in advance. When the terminal needs to update the target picture based on the rendering data of the non-target layer and the interaction response information, it may take the rendering data of the non-target layer directly out of storage, mix the rendering data of the non-target layer with the interaction response information at the terminal, and display the synthesized, updated target picture on the display interface.
Based on this, the second acquisition manner in which the terminal acquires the rendering data of the non-target layer may be to acquire it from a cache device. Here, the cache device may be a cache server or the local cache of the terminal. The cache server may be the same server as the above server or a different one, which is determined according to the actual application scenario and is not specifically limited herein.
When the cache device is a cache server and that cache server is the same server as the above server, the method of the exemplary embodiment of the present disclosure may further include: sending the rendering data of the non-target layer to the terminal. In this case, the second acquisition manner may be that the terminal acquires the rendering data of the non-target layer from the server.
When the cache device is the local cache of the terminal, the second acquisition manner may be that the terminal directly retrieves the rendering data of the non-target layer from its local cache to update the target picture. This shortens the data transmission path and thus avoids picture stuttering caused by network delay.
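The two cache-device variants described above can be sketched as a terminal-side lookup that tries the local cache first and falls back to a cache server only on a miss. This is a minimal Python sketch; the class name, method names, and the remote-fetch callable are illustrative assumptions, not part of the disclosed embodiment.

```python
# Minimal sketch of the second acquisition manner: non-target-layer rendering
# data is served from the terminal's local cache when possible, otherwise
# fetched once from a cache server and kept locally. All names are assumptions.
class NonTargetLayerCache:
    def __init__(self, remote_fetch):
        self._local = {}                   # layer id -> rendering data (local cache)
        self._remote_fetch = remote_fetch  # callable that queries the cache server

    def get(self, layer_id):
        # A local hit shortens the data transmission path, avoiding picture
        # stuttering caused by network delay.
        if layer_id in self._local:
            return self._local[layer_id]
        data = self._remote_fetch(layer_id)  # miss: fall back to the cache server
        self._local[layer_id] = data         # cache for later interactions
        return data
```

Because the non-target layer does not change across interactions, one remote fetch suffices; every later update of the target picture reuses the local copy.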
Step 205: and the terminal acquires rendering data of the non-target layer based on the second acquisition mode.
Step 206: the terminal updates the target picture based on the interaction response information and the rendering data of the non-target layer. Because the interaction response information is the updated rendering data of the target layer, after receiving it the terminal may replace the original rendering data of the target layer with the updated rendering data to determine the new content of the object responding to the interactive operation, thereby updating the target picture.
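Step 206 can be sketched as a substitution followed by compositing: the terminal swaps the target layer's rendering data for the interaction response information and then stacks all layers bottom-to-top. The data shapes ((layer id, rendering data) pairs) are assumptions for illustration; a real "mixed-flow" synthesis would blend pixels rather than collect strings.

```python
def update_target_picture(layers, target_layer_id, interaction_response):
    """Replace the target layer's rendering data with the interaction response
    information, then composite all layers in bottom-to-top order.

    `layers` is a list of (layer_id, rendering_data) pairs; the mixed-flow
    synthesis is stubbed as simple stacking for illustration."""
    composited = []
    for layer_id, rendering_data in layers:
        if layer_id == target_layer_id:
            composited.append(interaction_response)  # updated target layer
        else:
            composited.append(rendering_data)        # unchanged non-target layer
    return composited
```

Only the entry belonging to the target layer changes; every non-target entry passes through untouched, which is exactly why its rendering data can come from a cache.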
As can be seen, in the method of the exemplary embodiment of the present disclosure, because the interactive operation occurs on an object that responds to it, the layer to which that object belongs is the target layer: the rendering data of the target layer changes before and after the interactive operation, while the rendering data of the non-target layer remains unchanged, and the updated rendering data of the target layer can be determined from the interaction response information fed back by the server for the picture interaction request. Therefore, in the data transmission after the interactive operation occurs, the exemplary embodiments of the present disclosure may acquire the interaction response information and the rendering data of the non-target layer through two separate data transmission paths.
Compared with the related art, in which the rendering data of the whole target picture is obtained through one video stream, the method of the embodiment of the present disclosure does not require the server to transmit the re-rendered rendering data of the whole target picture to the terminal. Instead, the terminal obtains the interaction response information (the updated rendering data of the target layer) from the server based on the first acquisition manner and obtains the unchanged rendering data of the non-target layer based on the second acquisition manner, so that when the interactive operation occurs, only the rendering data of the target layer is updated to the interaction response information, which is then combined with the rendering data of the non-target layer to re-determine the updated target picture. Because the first and second acquisition manners use different transmission modes and thus different transmission paths, that is, the interaction response information and the rendering data of the non-target layer are streamed over different transmission channels, the data transmission amount is reduced, the data processing efficiency is improved, the network transmission pressure is relieved, the occurrence probability of phenomena such as network delay, packet loss, and network jitter is reduced, the smoothness of the picture is improved, and the user experience is improved.
The method of the exemplary embodiments of the present disclosure is illustrated below in conjunction with the figures.
Fig. 3 illustrates a schematic diagram of an update process of a target picture according to an exemplary embodiment of the present disclosure. As shown in Fig. 3, the current first target picture 310 includes a square, a triangle, and a five-pointed star stacked together, and includes a target layer 301 and a non-target layer 302. The five-pointed star is an object that responds to the interactive operation, and the layer to which it belongs is the target layer 301; the square and the triangle are both objects that do not respond to the interactive operation and belong to the same non-target layer 302.
As shown in Fig. 3, if the user issues an interactive operation that changes the five-pointed star into a circle, the terminal sends a picture interaction request to the server in response, and the server receives the request and sends the interaction response information 303 for the five-pointed star to the terminal. The terminal may thus obtain the interaction response information 303 from the server through path I based on the picture interaction request, and obtain the rendering data of the non-target layer 302 from the cache device through path II. The terminal then updates the current first target picture 310 to the second target picture 320 based on the interaction response information 303 and the rendering data of the non-target layer 302.
Therefore, in the method of the exemplary embodiment of the present disclosure, the rendering data of the non-target layer and the interaction response information are independent rendering data. When receiving the interaction response information fed back by the server, the terminal may additionally obtain the rendering data of the non-target layer from the cache device, so there is no need to recombine the rendering data of the non-target layer with the interaction response information at the server, which reduces the data processing pressure on the server and the overall rendering time of the target picture. Only the mixing of the rendering data of the non-target layer with the interaction response information to synthesize the updated target picture is performed at the terminal, which gives the terminal the capability of synthesizing complex pictures, reduces the network transmission pressure in the data transmission process, avoids picture stuttering, ensures the smoothness of the picture, and further improves the user experience.
In practical applications, the target layer may include one or more objects. When the target layer includes one object, the target layer includes a target interactable object, which is an object that responds to the interactive operation, and the interaction response information may include the interaction result of the target interactable object.
When the target layer includes a plurality of objects, the target layer may include a target interactable object and a non-target interactable object, the target interactable object is an object responding to the interaction operation, the non-target interactable object is an object not responding to the interaction operation, and the interaction response information includes an interaction result of the target interactable object and rendering data of the non-target interactable object.
Here, the non-target interactable object may be an object that never responds to any interactive operation (such as a background object), or an object that merely does not respond to the current interactive operation but does respond to its own corresponding interactive operations.
For example, exemplary embodiments of the present disclosure may store rendering data of a target layer in a server, where the rendering data of the target layer may include rendering data of a target interactable object and rendering data of a non-target interactable object, and the server may update the rendering data of the target interactable object based on a screen interaction request to obtain an interaction result of the target interactable object; then, interaction response information is determined based on the interaction result of the target interactable object and the rendering data of the non-target interactable object.
Based on this, the exemplary embodiment of the present disclosure may determine, by the server, the updated rendering data of the target layer when the interactive operation occurs, and feed back the updated rendering data of the target layer to the terminal in the form of the interactive response information, thereby avoiding transmitting the rendering data of the entire target frame to the terminal after the interactive operation occurs, reducing the data transmission amount, reducing the network transmission pressure, and reducing the occurrence probability of phenomena such as network delay, packet loss, network jitter, and the like.
In an alternative manner, the picture interaction request of the exemplary embodiments of the present disclosure may include the identity parameter and the interaction operation parameter of the object that responds to the interactive operation; the interaction response information is determined by the server based on the identity parameter and the interaction operation parameter of that object, and the rendering data of the non-target layer is determined by the server based on the identity parameter of that object.
The identity parameter may include any parameter usable to determine the identity of the object that responds to the interactive operation, which may be, but is not limited to, an identification parameter, an identity index, and the like. The server in the exemplary embodiments of the present disclosure may determine, based on this identity parameter, the object in the target picture on which the interactive operation is performed. The identity parameter of the object may also carry the identity parameter of the layer to which the object belongs (i.e., the target layer), so the server in the exemplary embodiment of the present disclosure may also determine the target layer based on the identity parameter of the object that responds to the interactive operation.
The above-mentioned interaction operation parameter may include any parameter usable to drive a change in the object that responds to the interactive operation, which may be, but is not limited to, a position parameter, an action parameter, a color parameter, and the like. The server in the exemplary embodiments of the present disclosure may apply the corresponding interactive operation to the object based on its interaction operation parameter, so as to change one or more of the state, posture, action, color, special effect, etc. of the object.
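Taken together, a picture interaction request carrying the identity parameter and the interaction operation parameters might be structured roughly as below. Every field name here is a hypothetical illustration; the disclosure does not fix a concrete wire format.

```python
from dataclasses import dataclass, field

@dataclass
class PictureInteractionRequest:
    # Identity parameter of the object that responds to the interactive
    # operation; assumed to also carry the target layer's identity.
    object_id: str
    target_layer_id: str
    # Interaction operation parameters driving the change, e.g. position,
    # action, or color parameters.
    interaction_params: dict = field(default_factory=dict)

# The Fig. 3 interaction expressed in this hypothetical format:
request = PictureInteractionRequest(
    object_id="pentagram",
    target_layer_id="layer-301",
    interaction_params={"shape": "circle"},
)
```

From such a request the server can recover both the responding object (via `object_id`) and the target layer (via the carried layer identity), which is all the subsequent steps need.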
Based on this, when the target layer contains a plurality of interactable objects, the step in which the server of the exemplary embodiment of the present disclosure sends interaction response information to the terminal for the picture interaction request based on the first sending manner may include:
the server determines the interaction result of the object that responds to the interactive operation based on the identity parameter and the interaction operation parameter of that object; the server determines the rendering data of the objects that do not respond to the interactive operation based on the identity parameter of the responding object, where the non-responding objects and the responding object belong to the same layer; the server determines the interaction response information based on the interaction result of the responding object and the rendering data of the non-responding objects, and sends the interaction response information to the terminal based on the first sending manner.
Exemplary embodiments of the present disclosure may have the server determine the interaction result of the object that responds to the interactive operation based on that object's identity parameter and interaction operation parameter. After determining the responding object from its identity parameter, the server determines the non-responding objects by elimination, and further determines the rendering data of the non-responding objects.
Here, an object that does not respond to the interactive operation and the object that responds to it may belong to the same layer or to different layers. The exemplary embodiments of the present disclosure may pre-store in the server the identity parameters of both the responding and the non-responding objects, and the identity parameter of a non-responding object may likewise carry the identity parameter of the layer to which it belongs. If the layer identity carried by the identity parameter of a non-responding object is identical to the target-layer identity carried by the identity parameter of the responding object, the two objects belong to the same layer; otherwise, they belong to different layers.
When the non-responding objects and the responding object belong to the same layer, the server may determine the updated rendering data of the target layer based on the interaction result of the responding object and the rendering data of the non-responding objects, and send the updated rendering data of the target layer to the terminal as the interaction response information.
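For the same-layer case just described, the server-side assembly of the interaction response information can be sketched as follows: apply the interaction to the responding object, keep the stored rendering data of every other object in the layer (found by elimination), and return the whole updated layer. Function and parameter names are illustrative assumptions, not the disclosed implementation.

```python
def build_interaction_response(target_layer_objects, responding_id,
                               interaction_params, apply_interaction):
    """Return the updated rendering data of the target layer.

    `target_layer_objects` maps object id -> stored rendering data;
    `apply_interaction` computes the interaction result for the responding
    object; every other object in the layer is kept unchanged (elimination)."""
    response = {}
    for object_id, rendering_data in target_layer_objects.items():
        if object_id == responding_id:
            response[object_id] = apply_interaction(rendering_data,
                                                    interaction_params)
        else:
            response[object_id] = rendering_data  # non-responding, unchanged
    return response
```

The returned mapping is the interaction response information: updated target-layer rendering data, never the whole picture.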
In practical applications, the exemplary embodiments of the present disclosure may divide the rendering data of the target picture into a plurality of groups of rendering data and store them separately. Before and after the object that responds to the interactive operation executes it, only the rendering data of the target layer corresponding to that object (the interaction response result) changes; the other groups of rendering data remain unchanged. Accordingly, the plurality of groups of rendering data contained in the target picture in the exemplary embodiments of the present disclosure may be grouped in units of layers, in which case the number of groups equals the number of layers contained in the target picture.
Based on this, the method of the exemplary embodiment of the present disclosure may further include: the server obtains rendering data of a non-target layer from a plurality of groups of rendering data of the target picture based on the identity parameters of the object responding to the interactive operation; the server transmits rendering data of the non-target layer to the terminal based on the second transmission mode.
Before and after the object that responds to the interactive operation executes it, the rendering data of the non-target layer is unchanged; therefore, the server may determine the non-target layer by elimination based on the target-layer identity carried by the identity parameter of the responding object, and then determine the rendering data corresponding to the non-target layer from the pre-stored groups of rendering data.
As can be seen, the exemplary embodiments of the present disclosure may have the server determine the interaction response information based on the identity parameter and interaction operation parameter of the responding object carried in the picture interaction request, and determine the rendering data of the non-target layer based on the identity parameter of the responding object alone, so that the server can determine both the rendering data of the non-target layer and the interaction response information even though the picture interaction request carries relatively few parameters.
In an alternative manner, the picture interaction request of the exemplary embodiments of the present disclosure may include the identity parameter and the interaction operation parameter of the object that responds to the interactive operation, as well as the identity parameters of the objects that do not; the interaction response information is determined by the server based on the identity parameter and the interaction operation parameter of the responding object, and the rendering data of the non-target layer is determined by the server based on the identity parameters of the non-responding objects. Here, the relevant content of the identity parameter and the interaction operation parameter is as described above and is not repeated.
The identity parameter of the object that responds to the interactive operation may be used by the server to determine that object in the target picture, and the interaction response information may be determined by the server based on that object's identity parameter and interaction operation parameter; here, the interaction response information is the interaction result of the responding object.
The identity parameter of an object that does not respond to the interactive operation may be used by the server to determine that object in the target picture, and its rendering data is determined by the server based on that identity parameter. Here, the non-responding object and the responding object may belong to the same layer or to different layers.
For example, when the non-responding objects and the responding object belong to different layers, if the picture interaction request includes the identity parameters of the non-responding objects, the method of the exemplary embodiment of the present disclosure may further include: the server obtains the rendering data of the non-target layer from the plurality of groups of rendering data of the target picture based on the identity parameters of the objects that do not respond to the interactive operation; the server sends the rendering data of the non-target layer to the terminal. Here, the grouping units of the plurality of groups of rendering data are as described above and are not repeated.
When the plurality of groups of rendering data are grouped in units of layers, the number of groups equals the number of layers contained in the target picture. When the non-responding objects and the responding object belong to different layers, they necessarily correspond to different groups of rendering data. Before and after the responding object executes the interactive operation, the server may therefore directly determine the non-target layer from the non-target-layer identity carried by the identity parameters of the non-responding objects, and determine the rendering data of the non-target layer from the pre-stored groups of layer rendering data.
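The direct variant can be sketched as an index lookup instead of elimination: the request already names the non-responding objects, each identity parameter carries its layer identity, and the server reads those groups straight out of storage. The identity-to-layer index below is an assumption for illustration.

```python
def non_target_rendering_data_direct(grouped_rendering_data,
                                     object_layer_index,
                                     non_responding_object_ids):
    """Resolve each non-responding object's layer id from its identity
    parameter, then fetch those layer groups directly (no elimination)."""
    layer_ids = {object_layer_index[object_id]
                 for object_id in non_responding_object_ids}
    return {layer_id: grouped_rendering_data[layer_id]
            for layer_id in layer_ids}
```

Compared with elimination, this trades a slightly larger request for a direct lookup on the server.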
As can be seen, the exemplary embodiments of the present disclosure may have the server determine the interaction response information based on the identity parameter and interaction operation parameter of the responding object carried in the picture interaction request, and determine the rendering data of the non-target layer based on the identity parameters of the non-responding objects carried in the same request, so that the server can determine both the rendering data of the non-target layer and the interaction response information directly from the picture interaction request.
According to one or more technical solutions provided in the exemplary embodiments of the present disclosure, a target picture is displayed on a display interface, the target picture including a target layer and a non-target layer, where the target layer includes objects that respond to an interactive operation and the non-target layer includes objects that do not; in response to the interactive operation, a picture interaction request is sent to a server; interaction response information fed back by the server for the picture interaction request is acquired based on a first acquisition manner; rendering data of the non-target layer is acquired based on a second acquisition manner; and the target picture is updated based on the interaction response information and the rendering data of the non-target layer. In the method of the exemplary embodiment of the present disclosure, because the interactive operation occurs on an object that responds to it, the layer to which that object belongs is the target layer: the rendering data of the target layer changes before and after the interactive operation, while the rendering data of the non-target layer remains unchanged, and the updated rendering data of the target layer can be determined from the interaction response information fed back by the server for the picture interaction request. Therefore, in the data transmission after the interactive operation, the exemplary embodiments of the present disclosure may acquire the interaction response information and the rendering data of the non-target layer through two separate data transmission paths, eliminating the need for the server to transmit the re-rendered rendering data of the whole target picture to the terminal, improving data processing efficiency and reducing network transmission pressure, thereby reducing the occurrence probability of phenomena such as network delay, packet loss, and network jitter, improving the smoothness of the picture, and improving the user experience.
The foregoing has mainly described the solutions of the embodiments of the present disclosure. It will be appreciated that, in order to achieve the above-described functions, the electronic device includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer-software-driven hardware depends upon the particular application and design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The embodiment of the disclosure may divide the functional units of the electronic device according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present disclosure, the division of the modules is merely a logic function division, and other division manners may be implemented in actual practice.
In the case of dividing each functional module according to its corresponding function, exemplary embodiments of the present disclosure provide a picture updating apparatus, which may be an electronic device or a chip applied to the electronic device. The electronic device may be a terminal or a server.
When the picture updating apparatus of the exemplary embodiment of the present disclosure is applied to a terminal or is a chip applied to the terminal, Fig. 4 shows a schematic block diagram of a picture updating apparatus of the exemplary embodiment of the present disclosure. As shown in Fig. 4, the apparatus 400 includes:
the display module 401, configured to display a target picture on a display interface, where the target picture includes a target layer and a non-target layer, the target layer includes objects that respond to an interactive operation, and the non-target layer includes objects that do not respond to the interactive operation;
a sending module 402, configured to send a screen interaction request to a server in response to an interaction operation;
a first obtaining module 403, configured to obtain, based on a first obtaining manner, interaction response information that is fed back by a server for a picture interaction request;
a second obtaining module 404, configured to obtain rendering data of the non-target layer based on a second obtaining manner;
and an updating module 405 for updating the target picture based on the interactive response information and the rendering data of the non-target layer.
As one possible implementation manner, the first obtaining manner is to obtain the interaction response information from the server in real time, and the second obtaining manner is to obtain the rendering data of the non-target layer from the cache device.
As one possible implementation, the target layer includes a target interactable object, where the target interactable object is an object responsive to an interaction operation, and the interaction response information includes an interaction result of the target interactable object.
As one possible implementation, the target layer includes a target interactable object and a non-target interactable object, the target interactable object is an object responding to the interaction operation, the non-target interactable object is an object not responding to the interaction operation, and the interaction response information includes an interaction result of the target interactable object and rendering data of the non-target interactable object.
As one possible implementation, the picture interaction request includes an identity parameter and an interaction parameter of the object that responds to the interactive operation;
the interaction response information is determined by the server based on the identity parameter and the interaction parameter of the object that responds to the interactive operation, and the rendering data of the non-target layer is determined by the server based on the identity parameter of the object that responds to the interactive operation.
As one possible implementation, the picture interaction request includes an identity parameter and an interaction parameter of the object that responds to the interactive operation, and an identity parameter of the object that does not respond to the interactive operation;
the interaction response information is determined by the server based on the identity parameter and the interaction parameter of the object that responds to the interactive operation, and the rendering data of the non-target layer is determined by the server based on the identity parameter of the object that does not respond to the interactive operation.
When the picture updating apparatus of the exemplary embodiments of the present disclosure is applied to a server, or to a chip applied to the server, fig. 5 shows another schematic block diagram of the picture updating apparatus. As shown in fig. 5, the apparatus 500 includes:
a receiving module 501, configured to receive a picture interaction request of a target picture sent by a terminal, where the target picture includes a target layer and a non-target layer, the target layer includes an object that responds to an interaction operation, the non-target layer includes an object that does not respond to the interaction operation, and the picture interaction request is generated based on the interaction operation;
and a sending module 502, configured to send interaction response information to the terminal based on the picture interaction request, where the interaction response information and the rendering data of the non-target layer are used to update the target picture, and a first acquisition mode of the terminal for the interaction response information is different from a second acquisition mode of the terminal for the rendering data of the non-target layer.
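A hedged server-side sketch of modules 501 and 502 follows. The lookup tables, the request field names (`responsive_id`, `interaction`), and the handler signature are illustrative assumptions only, not the disclosed implementation.

```python
# Hypothetical server-side counterpart of modules 501-502.

# Interaction results keyed by (object identity, interaction parameter).
INTERACTION_RESULTS = {("btn-1", "click"): "button-highlighted"}

# Pre-rendered non-target-layer data, keyed by the requesting object identity.
NON_TARGET_RENDERING = {"btn-1": "static-background-v1"}

def handle_picture_interaction(request):
    """Receive a picture interaction request and produce the response."""
    key = (request["responsive_id"], request["interaction"])
    return {
        # delivered in real time (the terminal's first acquisition mode)
        "interaction_response": INTERACTION_RESULTS[key],
        # may instead be served from a cache device (second acquisition mode)
        "non_target_rendering": NON_TARGET_RENDERING[request["responsive_id"]],
    }
```

The point of the design is that the two values in the response need not travel over the same channel: only `interaction_response` must be fresh, so the static rendering data can be cached close to the terminal.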
Fig. 6 shows a schematic block diagram of a chip of an exemplary embodiment of the present disclosure. As shown in fig. 6, the chip 600 includes one or more (including two) processors 601 and a communication interface 602. The communication interface 602 may support the server in performing the data transceiving steps of the method described above, and the processor 601 may support the server in performing the data processing steps of the method described above.
Optionally, as shown in fig. 6, the chip 600 further includes a memory 603, and the memory 603 may include a read only memory and a random access memory, and provides operation instructions and data to the processor. A portion of the memory may also include non-volatile random access memory (non-volatile random access memory, NVRAM).
In some embodiments, as shown in fig. 6, the processor 601 performs the corresponding operation by invoking an operation instruction stored in the memory (the instruction may be stored in an operating system). The processor 601 controls the processing operations of any one of the terminal devices and may also be referred to as a central processing unit (CPU). The memory 603 may include a read-only memory and a random access memory, and provides instructions and data to the processor 601. A portion of the memory 603 may also include NVRAM. In application, the memory, the communication interface, and the processor are coupled together by a bus system, and the bus system may include a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as the bus system 604 in fig. 6.
The method disclosed in the embodiments of the present disclosure may be applied to a processor or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (DSP), an ASIC, a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present disclosure. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present disclosure may be embodied as being performed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method.
Exemplary embodiments of the present disclosure also provide an electronic device that is a terminal, including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor, and the computer program, when executed by the at least one processor, causes the electronic device to perform the method whose execution subject is a terminal according to the embodiments of the present disclosure.
Exemplary embodiments of the present disclosure also provide an electronic device that is a server, including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor, and the computer program, when executed by the at least one processor, causes the electronic device to perform the method whose execution subject is a server according to the embodiments of the present disclosure.
The present disclosure also provides a non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a method according to an embodiment of the present disclosure.
The present disclosure also provides a computer program product comprising a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a method according to embodiments of the disclosure.
Although the present disclosure has been described in connection with specific features and embodiments thereof, it will be apparent that various modifications and combinations can be made without departing from the spirit and scope of the disclosure. Accordingly, the specification and drawings are merely exemplary illustrations of the present disclosure as defined in the appended claims, and the disclosure is intended to cover any and all modifications, variations, combinations, or equivalents that come within the scope of the appended claims or their equivalents.
Claims (10)
1. A picture updating method, comprising:
displaying a target picture on a display interface, wherein the target picture comprises a target layer and a non-target layer, the target layer comprises an object responding to interactive operation, and the non-target layer comprises an object not responding to the interactive operation;
in response to the interactive operation, sending a picture interaction request to a server;
acquiring interaction response information fed back by the server aiming at the picture interaction request based on a first acquisition mode;
acquiring rendering data of the non-target layer based on a second acquisition mode;
and updating the target picture based on the interaction response information and the rendering data of the non-target layer.
2. The method of claim 1, wherein the first obtaining manner is to obtain the interactive response information from the server in real time, and the second obtaining manner is to obtain the rendering data of the non-target layer from a cache device.
3. The method of claim 1, wherein the target layer comprises a target interactable object, the target interactable object being an object responsive to the interaction, the interaction response information comprising an interaction result of the target interactable object.
4. The method of claim 1, wherein the target layer comprises a target interactable object and a non-target interactable object, the target interactable object being an object responsive to the interaction, the non-target interactable object being an object not responsive to the interaction, the interaction response information comprising an interaction result of the target interactable object and rendering data of the non-target interactable object.
5. The method of any one of claims 1 to 4, wherein the screen interaction request includes an identity parameter and an interaction parameter of an object responsive to the interaction;
the interactive response information is determined by the server based on the identity parameters of the object responsive to the interaction and the interaction parameters, and the rendering data of the non-target layer is determined by the server based on the identity parameters of the object responsive to the interaction.
6. The method of any one of claims 1 to 4, wherein the screen interaction request includes an identity parameter of an object that is responsive to the interaction, an interaction parameter, and an identity parameter of an object that is not responsive to the interaction;
the interactive response information is determined by the server based on the identity parameters of the object that responded to the interactive operation and the interactive operation parameters, and the rendering data of the non-target layer is determined by the server based on the identity parameters of the object that did not respond to the interactive operation.
7. A picture updating method, comprising:
receiving a picture interaction request of a target picture sent by a terminal, wherein the target picture comprises a target layer and a non-target layer, the target layer comprises an object responding to an interaction operation, the non-target layer comprises an object not responding to the interaction operation, and the picture interaction request is generated based on the interaction operation;
and sending interaction response information to the terminal based on the picture interaction request, wherein the interaction response information and the rendering data of the non-target layer are used for updating the target picture, and a first acquisition mode of the terminal for the interaction response information is different from a second acquisition mode of the terminal for the rendering data of the non-target layer.
8. A picture updating apparatus, comprising:
the display module is used for displaying a target picture on a display interface, wherein the target picture comprises a target layer and a non-target layer, the target layer comprises an object responding to interactive operation, and the non-target layer comprises an object not responding to the interactive operation;
the sending module is used for responding to the interactive operation and sending a picture interactive request to the server;
the first acquisition module is used for acquiring interaction response information fed back by the server aiming at the picture interaction request based on a first acquisition mode;
the second acquisition module is used for acquiring the rendering data of the non-target layer based on a second acquisition mode;
and the updating module is used for updating the target picture based on the interaction response information and the rendering data of the non-target layer.
9. A picture updating apparatus, comprising:
the receiving module is used for receiving a picture interaction request of a target picture sent by the terminal, wherein the target picture comprises a target layer and a non-target layer, the target layer comprises an object responding to interaction operation, the non-target layer comprises an object not responding to the interaction operation, and the picture interaction request is generated based on the interaction operation;
the sending module is used for sending interaction response information to the terminal based on the picture interaction request, the interaction response information and the rendering data of the non-target layer are used for updating the target picture, and a first acquisition mode of the terminal for the interaction response information and a second acquisition mode of the terminal for the rendering data of the non-target layer are different.
10. An electronic device, characterized in that the electronic device is a terminal or a server, comprising:
a processor; the method comprises the steps of,
a memory storing a program;
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the method according to any one of claims 1 to 6 or claim 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311498391.XA CN117519874A (en) | 2023-11-10 | 2023-11-10 | Picture updating method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117519874A true CN117519874A (en) | 2024-02-06 |
Family
ID=89756167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311498391.XA Pending CN117519874A (en) | 2023-11-10 | 2023-11-10 | Picture updating method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117519874A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118227069A (en) * | 2024-05-23 | 2024-06-21 | 鼎道智芯(上海)半导体有限公司 | Display control method and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105263050B (en) | Mobile terminal real-time rendering system and method based on cloud platform | |
Li et al. | MUVR: Supporting multi-user mobile virtual reality with resource constrained edge cloud | |
EP3264370B1 (en) | Media content rendering method, user equipment, and system | |
US7830388B1 (en) | Methods and apparatus of sharing graphics data of multiple instances of interactive application | |
US20220241689A1 (en) | Game Character Rendering Method And Apparatus, Electronic Device, And Computer-Readable Medium | |
CN111433743A (en) | APP remote control method and related equipment | |
CN109460233A (en) | Primary interface display update method, device, terminal device and the medium of the page | |
GB2491819A (en) | Server for remote viewing and interaction with a virtual 3-D scene | |
CN104012059A (en) | Direct link synchronization cummuication between co-processors | |
CN105677265A (en) | Display method and terminal | |
CN113079216B (en) | Cloud application implementation method and device, electronic equipment and readable storage medium | |
WO2022095708A1 (en) | Wireless communication method and apparatus, device, storage medium, and computer program product | |
US20230077904A1 (en) | Wireless programmable media processing system | |
CN115065684B (en) | Data processing method, apparatus, device and medium | |
CN106797398A (en) | Method and system for providing from virtual desktop serve to client | |
CN117519874A (en) | Picture updating method and device, electronic equipment and storage medium | |
CN103632337A (en) | Real-time order-independent transparent rendering | |
de Paiva Guimarães et al. | Immersive and interactive virtual reality applications based on 3D web browsers | |
US9614900B1 (en) | Multi-process architecture for a split browser | |
CN115661011A (en) | Rendering method, device, equipment and storage medium | |
CN109587118B (en) | Distributed multi-terminal and multi-network supporting system for Android online game | |
CN114570020A (en) | Data processing method and system | |
CN112565869A (en) | Window fusion method, device and equipment for video redirection | |
DE102019122181A1 (en) | GENERALIZED LOW-Latency USER INTERACTION WITH VIDEO ON VARIOUS TRANSPORT UNITS | |
CN114003139B (en) | Vehicle-mounted equipment operation method and device, storage medium and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||