

Expression package based live broadcast room interaction method and device, computer equipment and medium

Info

Publication number
CN115550677A
CN115550677A
Authority
CN
China
Prior art keywords
expression package
expression
virtual
package
live broadcast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211142077.3A
Other languages
Chinese (zh)
Inventor
许英俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202211142077.3A priority Critical patent/CN115550677A/en
Publication of CN115550677A publication Critical patent/CN115550677A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application relates to the technical field of network live broadcast, and provides an expression package based live broadcast room interaction method and device, computer equipment and a storage medium. The client responds to an expression list loading instruction of the live broadcast room to acquire expression list data, and loads an expression list according to the expression list data. The client responds to a triggering operation on a first expression package image, and displays an expression package editing page according to a first expression package identifier corresponding to the first expression package image. The client responds to a triggering operation on first interactive content, and obtains expression package interactive data according to the first interactive content and the first expression package identifier. The client responds to a sending operation on the expression package interactive data, and sends the expression package interactive data to the live broadcast room for display. According to the embodiment of the application, combining the expression package image with interactive content enhances the interaction between the audience and the anchor, and improves audience retention in the live broadcast room.

Description

Expression package based live broadcast room interaction method and device, computer equipment and medium
Technical Field
The embodiment of the application relates to the technical field of network live broadcast, in particular to a live broadcast room interaction method and device based on an expression package, computer equipment and a storage medium.
Background
Live webcasting refers to a technology in which an anchor shares live audio and video streams with an audience on the internet through a live webcast platform. Live webcasting is a new network business that embodies the openness and sharing of the internet, and gives ordinary people an opportunity to showcase their talents online.
During a live talent performance, some viewers send virtual gifts in the live broadcast room to interact and to express their approval of the live content. This creates income for the anchor and lets the anchor work from home, which is especially valuable for people in remote areas, people with limited mobility, or people unable to commute because of an epidemic; it opens a more convenient path to employment and drives social employment.
To select more talented anchors, or to encourage anchors to provide better live content, webcast platforms allow two or more anchors to broadcast simultaneously in the same live broadcast room, so that the more talented anchor, or the anchor who provides the better content, can be recognized by more viewers.
When watching a webcast, a user can interact with the anchor by sending expression packages. At present, however, this interaction takes a single form, which greatly limits interaction between the user and the anchor and degrades the user experience.
Disclosure of Invention
The embodiment of the application provides an expression package based live broadcast room interaction method and device, computer equipment and a storage medium, which can enhance interaction between the audience and the anchor and improve audience retention in the live broadcast room. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides an expression package based live broadcast room interaction method, including the steps of:
the client responds to an expression list loading instruction of the live broadcast room to acquire expression list data; loading an expression list according to the expression list data; the expression list comprises expression package images corresponding to a plurality of expression package identifications;
the client responds to a triggering operation on a first expression package image and displays an expression package editing page according to a first expression package identifier corresponding to the first expression package image; the expression package editing page displays a plurality of interactive contents; the first expression package image is one expression package image selected from the plurality of expression package images;
the client responds to a triggering operation on first interactive content, and expression package interactive data is obtained according to the first interactive content and the first expression package identifier; the first interactive content is selected from the plurality of interactive contents;
and the client responds to a sending operation on the expression package interactive data and sends the expression package interactive data to the live broadcast room for display.
In a second aspect, an embodiment of the present application provides an expression package based live broadcast room interaction device, including:
the expression list loading module, used for responding to an expression list loading instruction of the live broadcast room and acquiring expression list data; loading an expression list according to the expression list data; the expression list comprises expression package images corresponding to a plurality of expression package identifications;
the editing page display module, used for responding to a triggering operation on a first expression package image and displaying an expression package editing page according to a first expression package identifier corresponding to the first expression package image; the expression package editing page displays a plurality of interactive contents; the first expression package image is one expression package image selected from the plurality of expression package images;
the interactive data acquisition module, used for responding to a triggering operation on first interactive content and acquiring expression package interactive data according to the first interactive content and the first expression package identifier; the first interactive content is selected from the plurality of interactive contents;
and the interactive data display module, used for responding to a sending operation on the expression package interactive data and sending the expression package interactive data to the live broadcast room for display.
In a third aspect, an embodiment of the present application provides a computer device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program, which when executed by a processor implements the steps of the method according to the first aspect.
The method comprises the steps that a client responds to an expression list loading instruction of a live broadcast room to obtain expression list data; loading an expression list according to the expression list data; the expression list comprises expression package images corresponding to a plurality of expression package identifications; the client responds to a triggering operation on a first expression package image and displays an expression package editing page according to a first expression package identifier corresponding to the first expression package image; the expression package editing page displays a plurality of interactive contents; the first expression package image is one expression package image selected from the plurality of expression package images; the client responds to a triggering operation on first interactive content, and obtains expression package interactive data according to the first interactive content and the first expression package identifier; the first interactive content is selected from the plurality of interactive contents; and the client responds to a sending operation on the expression package interactive data and sends the expression package interactive data to the live broadcast room for display. According to the embodiment of the application, combining the expression package image with the interactive content enhances the interaction between the audience and the anchor, and improves audience retention in the live broadcast room.
For a better understanding and implementation, the technical solutions of the present application are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic view of an application scenario of an expression package based live broadcast room interaction method according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of an expression package based live broadcast room interaction method according to an embodiment of the present application;
Fig. 3 is a schematic flowchart of S30 in an expression package based live broadcast room interaction method according to an embodiment of the present application;
Fig. 4 is a schematic flowchart of S40 in an expression package based live broadcast room interaction method according to an embodiment of the present application;
Fig. 5 is a schematic display diagram of an avatar expression package in a live broadcast room interface according to an embodiment of the present application;
Fig. 6 is a schematic flowchart of S403 in an expression package based live broadcast room interaction method according to an embodiment of the present application;
Fig. 7 is a schematic flowchart of S30 in an expression package based live broadcast room interaction method according to another embodiment of the present application;
Fig. 8 is a schematic flowchart of S40 in an expression package based live broadcast room interaction method according to another embodiment of the present application;
Fig. 9 is a schematic display diagram of a virtual gift expression package in a live broadcast room interface according to an embodiment of the present application;
Fig. 10 is a schematic flowchart of S406 in an expression package based live broadcast room interaction method according to an embodiment of the present application;
Fig. 11 is a schematic view of a virtual special effect display of a first virtual gift in a virtual gift expression package according to an embodiment of the present application;
Fig. 12 is a schematic flowchart of S30 in an expression package based live broadcast room interaction method according to another embodiment of the present application;
Fig. 13 is a schematic flowchart of S40 in an expression package based live broadcast room interaction method according to another embodiment of the present application;
Fig. 14 is a schematic display diagram of a virtual treasure box expression package in a live broadcast room interface according to an embodiment of the present application;
Fig. 15 is a schematic display diagram of opening a virtual treasure box in a virtual treasure box expression package according to an embodiment of the present application;
Fig. 16 is a schematic structural diagram of an expression package based live broadcast room interaction device according to an embodiment of the present application;
Fig. 17 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
As will be appreciated by those skilled in the art, the terms "client" and "terminal device" as used herein include both wireless signal receiver devices, which have only receive capability and no transmit capability, and devices with receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such a device may include: cellular or other communication devices, such as personal computers and tablets, with or without a single-line or multi-line display; a PCS (Personal Communications Service) device, which may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other device having and/or including a radio frequency receiver. As used herein, a "client" or "terminal device" can be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. The "client" or "terminal device" used herein may also be a communication terminal, a web terminal, or a music/video playing terminal, such as a PDA, an MID (Mobile Internet Device) and/or a mobile phone with a music/video playing function, and may also be a smart TV, a set-top box, and the like.
The hardware referred to by the names "server", "client", "service node", and the like is essentially a computer device with the capabilities of a personal computer: a hardware device having the components required by the von Neumann architecture, such as a central processing unit (including an arithmetic unit and a controller), a memory, an input device, and an output device. A computer program is stored in the memory; the central processing unit loads a program stored in external memory into internal memory to run it, executes the instructions in the program, and interacts with the input and output devices, thereby completing a specific function.
It should be noted that the concept of "server" in this application can be extended to the case of server cluster. According to the network deployment principle understood by those skilled in the art, the servers should be logically divided, and in physical space, the servers may be independent from each other but can be called through an interface, or may be integrated into one physical computer or a set of computer clusters. Those skilled in the art will appreciate this variation and should not be so limited as to restrict the implementation of the network deployment of the present application.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of a live broadcast room interaction method based on emoticons according to an embodiment of the present application, where the application scenario includes an anchor client 101, a server 102 and a viewer client 103, and the anchor client 101 and the viewer client 103 interact with each other through the server 102.
The anchor client 101 is one end that sends a webcast video, and is typically a client used by an anchor (i.e., a webcast anchor user) in webcast.
The viewer client 103 refers to an end that receives and views webcast video, and is typically a client employed by a viewer viewing video in webcast (i.e., a live viewer user).
The hardware at which the anchor client 101 and viewer client 103 are directed is essentially a computer device, and in particular, as shown in fig. 1, it may be a type of computer device such as a smart phone, smart interactive tablet, and personal computer. Both the anchor client 101 and the viewer client 103 may access the internet via a known network access method to establish a data communication link with the server 102.
The server 102 is a business server, and may be responsible for further connecting related audio data servers, video streaming servers, and other servers providing related support, etc., so as to form a logically associated server cluster for providing services to related terminal devices, such as the anchor client 101 and the viewer client 103 shown in fig. 1.
In the embodiment of the present application, the anchor client 101 and the audience client 103 may join in the same live broadcast room (i.e., a live broadcast channel), where the live broadcast room is a chat room implemented by means of an internet technology, and generally has an audio/video broadcast control function. The anchor user carries out live broadcast in the live broadcast room through the anchor client 101, and audiences of the audience client 103 can log in the server 102 to enter the live broadcast room to watch the live broadcast.
In the live broadcast room, interaction between the anchor and the audience can be realized through known online interaction modes such as voice, video, text and the like, generally, the anchor user performs programs for the audience in the form of audio and video streams, and resource interaction behaviors can also be generated in the interaction process, for example, the audience client 103 presents virtual gifts to the anchor client 101 in the same live broadcast room. Of course, the application form of the live broadcast room is not limited to online entertainment, and can also be popularized to other relevant scenes, such as: user pairing interaction scenes, video conference scenes, online teaching scenes, product recommendation and sales scenes, and any other scene requiring similar interaction.
Specifically, the viewer watches live broadcast as follows: the audience can click and access the live application program installed on the audience client-side 103, select to enter any one live broadcast room, and trigger the audience client-side 103 to load a live broadcast room interface for the audience, wherein the live broadcast room interface comprises a plurality of interaction components, and the audience can watch live broadcast in the live broadcast room by loading the interaction components and carry out various online interactions.
A live broadcast viewer can trigger the expression interaction component at the viewer client 103 to open the expression list; the viewer selects an expression package from the expression list and sends it to the server 102, and the server 102 sends the expression package to all the clients that have joined the live broadcast room for display, thereby realizing interaction between the live broadcast viewer and the anchor. However, at present this interaction takes a single form, which greatly limits interaction between the live broadcast viewer and the anchor and degrades the user experience.
Referring to fig. 2, fig. 2 is a schematic flowchart of an expression package based live broadcast room interaction method according to an embodiment of the present application, where the method includes the following steps:
S10: the client responds to an expression list loading instruction of the live broadcast room to obtain expression list data; loads an expression list according to the expression list data; the expression list comprises expression package images corresponding to a plurality of expression package identifications.
The user triggers the expression list loading instruction by clicking an expression control of the live broadcast room interface, and the expression list data is obtained. The expression control may be an icon, a button, or any other control that the user can touch or click.
The expression list data comprises a plurality of expression package identifiers and the expression package image data corresponding to each identifier. An expression package identifier is the unique identifier of an expression package image, and may be a number or the name of the expression package. The expression list comprises a plurality of expression package images. The user can browse and select an expression package image in the expression list to carry out expression package interaction with the anchor.
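For concreteness, the expression list data described above can be pictured as follows. This is a minimal sketch in Python; the field names and values are illustrative assumptions, not the actual schema of the application:

expression_list_data = [
    # Each entry pairs an expression package identifier with its image data
    # and its type (see the icon types described below).
    {"emote_id": "emote_001", "name": "smile", "type": "avatar",
     "image_url": "https://cdn.example.com/emotes/smile.png"},
    {"emote_id": "emote_002", "name": "rocket", "type": "gift",
     "image_url": "https://cdn.example.com/emotes/rocket.png"},
    {"emote_id": "emote_003", "name": "chest", "type": "treasure_box",
     "image_url": "https://cdn.example.com/emotes/chest.png"},
]

# The client renders one image per entry and keeps the identifier so that
# later steps (editing page, interactive data) can refer back to it.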
In an optional embodiment, an expression package icon is displayed on each expression package image. The icon identifies the type of the expression package and may carry characters, letters or numbers; the expression package types may include an avatar expression package, a gift expression package and a treasure box expression package. In the embodiment of the application, the icon is displayed at the upper right corner of each expression package image and carries text, so that the user can clearly confirm which type of expression package an image belongs to. For example, for an avatar expression package, the corresponding icon reads "avatar"; for a gift expression package, the corresponding icon reads "gift"; and for a treasure box expression package, the corresponding icon is in the shape of a treasure box.
S20: the client responds to a triggering operation on the first expression package image and displays an expression package editing page according to the first expression package identifier corresponding to the first expression package image; the expression package editing page displays a plurality of interactive contents; the first expression package image is one expression package image selected from the plurality of expression package images.
The user can select one expression package image from the expression list; correspondingly, the client receives or detects the expression package image selected by the user, who can trigger it by clicking, touching, long-pressing or another triggering operation. After the user triggers the expression package image, an expression package editing page is displayed. The expression package editing page comprises a plurality of interactive contents, which may include avatars, virtual gifts and virtual treasure boxes.
In the embodiment of the application, the user selects a first expression package image in the expression list, the client detects the triggering operation on the first expression package image, and an expression package editing page is displayed. Specifically, the corresponding editing page is displayed according to the expression package identifier of the expression package image. For example, if the expression package identifier indicates an avatar expression package, the displayed editing page is an avatar selection popup, which includes a plurality of avatars; an avatar may be the user's avatar, the anchor's avatar, or a custom avatar. If the expression package identifier indicates a gift expression package, the displayed editing page is a virtual gift bar, which includes a plurality of virtual gifts. If the expression package identifier indicates a treasure box expression package, the displayed editing page is a treasure box selection popup, which includes a plurality of virtual treasure boxes.
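The mapping from the expression package type to the editing page can be sketched as follows (Python; the type strings and page names are illustrative assumptions):

def editing_page_for(emote: dict) -> str:
    # Dispatch on the expression package type, as described above.
    if emote["type"] == "avatar":
        return "avatar_selection_popup"      # user, anchor or custom avatars
    if emote["type"] == "gift":
        return "virtual_gift_bar"            # a bar of virtual gifts
    if emote["type"] == "treasure_box":
        return "treasure_box_selection_popup"
    raise ValueError(f"unknown expression package type: {emote['type']}")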
S30: the client responds to the triggering operation of the first interactive content and obtains expression package interactive data according to the first interactive content and the first expression package identification; the first interactive content is selected from a plurality of interactive contents.
The user can select one interactive content in the expression editing page, correspondingly, the client receives or detects the interactive content selected by the user, and the user can trigger the selected interactive content through clicking, touch control, long pressing and other triggering operations.
In the embodiment of the application, the user selects the first interactive content in the expression editing page, and the client detects the first interactive content selected by the user. The first interactive content and the first expression package identifier are synthesized to obtain the expression package interactive data.
Specifically, the first interactive content may be a user avatar, an anchor avatar, or a custom avatar selected by the user, or a virtual gift selected by the user, or a virtual treasure box selected by the user.
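Under the same assumptions, synthesizing the first interactive content with the first expression package identifier could produce an interactive-data record like the following; one shape covers the three content types (avatar, virtual gift, virtual treasure box):

from dataclasses import dataclass
from typing import Optional

@dataclass
class EmoteInteraction:
    emote_id: str                     # first expression package identifier
    avatar_url: Optional[str] = None  # set for avatar expression packages
    gift_id: Optional[str] = None     # set for gift expression packages
    gift_count: int = 0               # number of virtual gifts presented
    treasure_value: int = 0           # total virtual value of a treasure box

# Example: the user picked a gift expression package and 3 units of a gift.
interaction = EmoteInteraction(emote_id="emote_002",
                               gift_id="gift_rocket", gift_count=3)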
S40: the client responds to a sending operation on the expression package interactive data and sends the expression package interactive data to the live broadcast room for display.
The user can click the sending control of the expression editing page to send the expression package interaction data. The sending control may be an icon, a button, or any other control that the user can touch or click.
In the embodiment of the application, the client responds to the sending operation on the expression package interactive data and sends the expression package interactive data to the server; the server receives the expression package interactive data and sends it to the clients that have joined the live broadcast room, so that the expression package interactive data is displayed on the live broadcast room interface of each client that has joined the live broadcast room.
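The send-and-relay step can be sketched as a broadcast on the server side (a schematic under the same assumptions; client.send stands in for whatever transport the platform actually uses):

class Room:
    def __init__(self):
        self.clients = []  # connections of all clients joined to this room

    def broadcast(self, message: dict) -> None:
        # Forward the message to every client that has joined the room.
        for client in self.clients:
            client.send(message)

def handle_emote_interaction(room: Room, sender_id: str, payload: dict) -> None:
    # Relay the expression package interactive data so that each client
    # can render it in its own live broadcast room interface.
    room.broadcast({"type": "emote_interaction", "from": sender_id,
                    "payload": payload})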
In an optional embodiment, the expression control of the live broadcast room interface can serve as an independent control: the user clicks the expression control to display the expression editing page, and after the expression page is edited, the client directly sends the edited expression package interactive data to the live broadcast room for display.
In another optional embodiment, the expression control of the live broadcast room interface can also serve as an additional control for public screen chat: when a user speaks on the public screen, the expression control can be triggered to display the expression editing page, and after the expression page is edited, the expression package interactive data is displayed in the public screen speech input box. After receiving the user's public screen speech sending instruction, the client splices the expression package interactive data with the text data input by the user and sends the result to the live broadcast room for display.
Specifically, the speech content of a user in the live broadcast room comprises one or more selected expression packages and the entered text; the expression package interactive data and the text data are correspondingly displayed in the public screen speech input box. When the user clicks to send the speech, the client splices the expression package interactive data and the text data and sends them to the server; the server receives the spliced data and sends it to the clients that have joined the live broadcast room, so that the live broadcast room interface of each such client displays the spliced expression packages and text.
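A spliced public-screen message can be pictured as an ordered list of segments (illustrative):

# A public-screen speech spliced from text and expression package data.
spliced_message = [
    {"kind": "text", "value": "good game "},
    {"kind": "emote", "value": {"emote_id": "emote_001"}},
    {"kind": "text", "value": " well played"},
]
# The receiving client walks the segments in order, rendering text inline
# and replacing each emote segment with the corresponding image.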
By applying the embodiment of the application, the client responds to an expression list loading instruction of the live broadcast room to obtain expression list data; loads an expression list according to the expression list data, the expression list comprising expression package images corresponding to a plurality of expression package identifications; responds to a triggering operation on a first expression package image and displays an expression package editing page according to a first expression package identifier corresponding to the first expression package image, the expression package editing page displaying a plurality of interactive contents, the first expression package image being one expression package image selected from the plurality of expression package images; responds to a triggering operation on first interactive content and obtains expression package interactive data according to the first interactive content and the first expression package identifier, the first interactive content being selected from the plurality of interactive contents; and responds to a sending operation on the expression package interactive data and sends the expression package interactive data to the live broadcast room for display. According to the embodiment of the application, combining the expression package image with the interactive content enhances the interaction between the audience and the anchor, and improves audience retention in the live broadcast room.
In an alternative embodiment, referring to fig. 3, if the first expression package identifier indicates the avatar expression package type, the interactive contents are avatars, and step S30 includes steps S301 to S302, as follows:
S301: the client responds to a triggering operation on a first head portrait and obtains a picture address of the first head portrait and the first expression package identifier; wherein the first head portrait is the selected one of the plurality of head portraits;
S302: the client obtains a first head portrait picture according to the picture address; acquires the first expression package image according to the first expression package identifier; synthesizes the first head portrait picture and the first expression package image to obtain an avatar expression package; and obtains the expression package interactive data according to the avatar expression package.
In the embodiment of the application, the head portrait picture is cached locally at the client, and the client responds to the triggering operation of the first head portrait and can directly acquire the corresponding first head portrait picture from the local cache according to the picture address of the first head portrait.
The first head portrait picture and the first expression package image are synthesized into one picture. Specifically, the first head portrait picture is converted into a first bitmap, the first expression package image is converted into a second bitmap, and the pixels in a preset position area of the second bitmap are replaced with the pixels of the first bitmap, thereby obtaining the avatar expression package. For example, if the size of the first bitmap is 50 x 50 and the size of the second bitmap is 50 x 120, the 50 x 50 upper-left corner area of the second bitmap is used as the preset position area and is replaced by the first bitmap, thereby obtaining the avatar expression package.
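The pixel replacement described above amounts to pasting the avatar into the preset region of the expression package image. A minimal sketch using Pillow (an assumed choice; the embodiment does not name an image library):

from PIL import Image

def compose_avatar_emote(avatar_path: str, emote_path: str) -> Image.Image:
    emote = Image.open(emote_path).convert("RGBA")   # e.g. 50 x 120
    avatar = Image.open(avatar_path).convert("RGBA").resize((50, 50))
    # Replace the pixels of the preset position area (here the top-left
    # 50 x 50 corner, matching the example above) with the avatar's pixels.
    emote.paste(avatar, (0, 0))
    return emote

combined = compose_avatar_emote("avatar.png", "emote.png")
combined.save("avatar_emote.png")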
Combining the head portrait with the expression package enriches the variety of expression package play methods, improves interactivity between audience users and the anchor, and increases retention in the live broadcast room.
In an alternative embodiment, referring to fig. 4, step S40 includes steps S401 to S403, as follows:
S401: the client responds to the sending operation on the expression package interactive data and sends the expression package interactive data to the server;
S402: the server receives the expression package interactive data and sends it to the clients that have joined the live broadcast room;
S403: each client that has joined the live broadcast room parses the expression package interactive data to obtain the avatar expression package and displays the avatar expression package in the live broadcast room interface.
Referring to fig. 5, fig. 5 is a schematic display diagram of an avatar expression package in a live broadcast room interface according to an embodiment of the present application. In the live broadcast room interface shown in fig. 5, an avatar expression package 51 is displayed, and a first avatar 52 is displayed within it.
In the embodiment of the application, the client responds to the sending operation on the expression package interactive data, converts the expression package interactive data into a picture in a compressible format, and sends the picture to the public screen server. The public screen server issues the picture to the clients that have joined the live broadcast room, so that it is displayed in the live broadcast room interface of each such client. The compressible format may be PNG or JPG. Converting the expression package interactive data into a compressible-format picture saves the memory it occupies and improves its transmission efficiency.
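Serializing the composed picture into a compressible format before upload might look like this (again a Pillow-based sketch under the same assumptions):

import io
from PIL import Image

def to_compressed_bytes(picture: Image.Image, fmt: str = "PNG") -> bytes:
    # Encode as PNG (lossless) or JPG (smaller, lossy) before sending the
    # picture to the public screen server.
    buffer = io.BytesIO()
    if fmt == "JPG":
        picture.convert("RGB").save(buffer, format="JPEG", quality=85)
    else:
        picture.save(buffer, format="PNG", optimize=True)
    return buffer.getvalue()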
In an optional embodiment, referring to fig. 6, after the step in which the client that has joined the live broadcast room parses the expression package interactive data to obtain the avatar expression package and displays it in the live broadcast room interface, the method includes steps S4031 to S4032, as follows:
S4031: the client that has joined the live broadcast room responds to a triggering operation on the avatar expression package and acquires special effect data corresponding to the avatar expression package;
S4032: the special effect corresponding to the avatar expression package is displayed according to the special effect data.
In the embodiment of the application, the special effect data corresponding to the avatar expression package may be preset animation special effect data, which can be pre-stored in the local cache of the client that has joined the live broadcast room; in response to the triggering operation on the avatar expression package, the preset animation special effect data is obtained directly from the local cache. The special effect corresponding to the avatar expression package can be a picture-shaking animation effect.
By clicking the avatar expression package in the live broadcast room, the user triggers the display of the corresponding special effect, which improves the user experience and the retention rate of users in the live broadcast room.
In an alternative embodiment, referring to fig. 7, if the first expression package identifier indicates the gift expression package type, the interactive contents are virtual gifts; step S30 includes steps S303 to S304, as follows:
s303: the client side responds to the triggering operation of the first virtual gift and acquires the virtual gift identification, the virtual gift number and the first expression package identification of the first virtual gift; the first virtual gift is one virtual gift selected from a plurality of virtual gifts;
s304: and the client acquires the expression package interaction data according to the virtual gift identification, the virtual gift number and the first expression package identification.
The virtual gift identifier is the unique identifier of a virtual gift, and may be a number or the name of the virtual gift. The number of virtual gifts refers to how many of the gift the audience presents to the anchor. In the embodiment of the application, the virtual gift identifier, the number of virtual gifts and the first expression package identifier of the first virtual gift are synthesized into data, so that the expression package interactive data can be obtained automatically and quickly.
Combining the virtual gift with the expression package enriches the variety of expression package play methods, improves interactivity between audience users and the anchor, and increases retention in the live broadcast room.
In an alternative embodiment, referring to fig. 8, step S40 includes steps S404 to S406, as follows:
S404: the client responds to the sending operation on the expression package interactive data and sends the expression package interactive data to the server;
S405: the server parses the expression package interactive data to obtain the virtual gift identifier, the number of virtual gifts and the first expression package identifier, and presents the corresponding first virtual gift to the anchor according to the virtual gift identifier and the number of virtual gifts; the server then sends the expression package interactive data to the clients that have joined the live broadcast room;
S406: each client that has joined the live broadcast room parses the expression package interactive data to obtain the virtual gift identifier and the first expression package identifier; obtains a virtual gift expression package according to the virtual gift identifier and the first expression package identifier; and displays the virtual gift expression package in the live broadcast room interface.
Referring to fig. 9, fig. 9 is a schematic display diagram of a virtual gift expression package in a live broadcast room interface according to an embodiment of the present application. The live broadcast room interface shown in fig. 9 displays a virtual gift expression package 91, in which a first virtual gift 92 is displayed.
In the embodiment of the application, the client responds to the sending operation on the expression package interactive data and sends the expression package interactive data to the expression package server. The expression package server parses the expression package interactive data to obtain the virtual gift identifier and the number of virtual gifts, generates a virtual gift presenting request from them, and sends the request to the gift server; the gift server sends gift-giving information for the virtual gift to the anchor side according to the request. The gift-giving information comprises the virtual gift identifier and the number of virtual gifts.
The client also responds to the sending operation on the expression package interactive data by sending the expression package interactive data to the public screen server, and the public screen server sends it to the clients that have joined the live broadcast room.
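Putting the two server paths together, the expression package server's handling of a gift expression package can be sketched as follows (server names and interfaces are assumptions based on the description above):

def handle_gift_emote(payload: dict, gift_server, room) -> None:
    # Parse the interactive data into its three fields.
    gift_id = payload["gift_id"]
    gift_count = payload["gift_count"]
    emote_id = payload["emote_id"]
    # Path 1: ask the gift server to present the gifts to the anchor.
    gift_server.present(room.anchor_id, gift_id, gift_count)
    # Path 2: relay the data so every client in the room can render the
    # virtual gift expression package.
    room.broadcast({"type": "gift_emote",
                    "gift_id": gift_id, "emote_id": emote_id})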
When the virtual gift expression package is displayed in the live broadcast room interface, if the first virtual gift is a virtual gift with a virtual special effect, the virtual special effect of the first virtual gift can be displayed superimposed on a preset position area of the first expression package image. If the first virtual gift has no virtual special effect, the virtual picture of the first virtual gift and the first expression package image can be synthesized, and the synthesized picture is displayed.
A user in the live broadcast room can present a virtual gift to the anchor through an expression package, which adds a new way of giving virtual gifts, improves the user experience, and increases the retention rate of users in the live broadcast room.
In an alternative embodiment, referring to fig. 10, step S406 includes steps S4061 to S4063, which are as follows:
s4061: the client side joining the live broadcast room judges whether the first virtual gift has a virtual special effect or not according to the virtual gift identification;
s4062: if the first virtual gift does not have a virtual special effect, the client side added into the live broadcast room acquires a virtual gift picture corresponding to the first virtual gift according to the virtual gift identification; acquiring a first expression package image according to the first expression package identifier; and synthesizing the virtual gift picture and the first expression package image to obtain a virtual gift expression package, and displaying the virtual gift expression package in a live broadcasting room interface.
In the embodiment of the application, the virtual gift picture is converted into a third bitmap, the first expression package image is converted into a fourth bitmap, and the pixels in a preset position area of the fourth bitmap are replaced with the pixels of the third bitmap, so that the virtual gift expression package can be obtained automatically and quickly.
S4063: if the first virtual gift has a virtual special effect, the client-side added into the live broadcast room acquires a virtual gift picture and virtual gift special effect data corresponding to the first virtual gift according to the virtual gift identification; acquiring a first expression package image according to the first expression package identifier; and synthesizing the virtual gift picture and the first expression package image to obtain a virtual gift expression package, and displaying the virtual gift expression package and a virtual special effect corresponding to the virtual gift special effect data in a live broadcasting room interface.
Referring to fig. 11, fig. 11 is a schematic view illustrating a virtual special effect display of a first virtual gift in a virtual gift emoticon according to an embodiment of the present application. The virtual special effect of the first virtual gift in the virtual gift facial expression package 111 shown in fig. 11 is changed from the first virtual special effect presentation effect map 112 to the second virtual special effect presentation effect map 113.
In the embodiment of the application, a special effect component for playing animation is added in a preset position area of the first expression package image, virtual gift special effect data corresponding to the first virtual gift is rendered through the special effect component, and therefore a virtual special effect corresponding to the virtual gift special effect data is displayed in the virtual gift expression package.
Displaying the special effect corresponding to the virtual gift in the expression package increases the variety of expression package play methods, enhances the user experience, and improves the retention rate of users in the live broadcast room.
In an alternative embodiment, referring to fig. 12, if the first expression package identifier indicates the treasure box expression package type, the interactive contents are virtual treasure boxes; step S30 includes steps S305 to S306, as follows:
s305: the client side responds to the triggering operation of the first virtual treasure box, and obtains a virtual value total amount and a first expression package identification corresponding to the first virtual treasure box; the first virtual treasure box is a virtual treasure box selected from a plurality of virtual treasure boxes;
s306: and the client acquires the expression package interaction data according to the total virtual value and the first expression package identifier.
A plurality of virtual treasure boxes are displayed in the virtual treasure box popup, and each virtual treasure box corresponds to a total virtual value. The total virtual value of each virtual treasure box can be preset or user-defined; specifically, a virtual value input box can be displayed in the virtual treasure box popup, and the user can enter any total virtual value in it.
In the embodiment of the application, the user selects the first virtual treasure box, the client obtains the total virtual value and the first expression package identifier corresponding to the first virtual treasure box, and synthesizes the total virtual value and the first expression package identifier into data, so that the expression package interactive data can be obtained automatically and quickly.
Combining the virtual treasure box with the expression package enriches the variety of expression package play methods, improves interactivity between audience users and the anchor, and increases the retention rate of the live broadcast room.
In an alternative embodiment, referring to fig. 13, step S40 includes steps S407 to S409, as follows:
S407: the client responds to the sending operation on the expression package interactive data and sends the expression package interactive data to the server;
S408: the server receives the expression package interactive data and sends it to the clients that have joined the live broadcast room;
S409: each client that has joined the live broadcast room parses the expression package interactive data to obtain the first expression package identifier; acquires the first expression package image according to the first expression package identifier; acquires a preset virtual treasure box picture and virtual treasure box special effect data; synthesizes the virtual treasure box picture and the first expression package image to obtain a virtual treasure box expression package; and displays the virtual treasure box expression package, together with the virtual special effect corresponding to the virtual treasure box special effect data, in the live broadcast room interface.
Referring to fig. 14, fig. 14 is a schematic display diagram of a virtual treasure box expression package in a live broadcast room interface according to an embodiment of the present application. In the live broadcast room interface shown in fig. 14, a virtual treasure box expression package 141 is displayed, and a virtual treasure box 142 is displayed within it.
In the embodiment of the application, a special effect component for playing animation is added in the preset position area of the first expression package image, and the special effect data of the virtual treasure box is rendered through the special effect component, so that the virtual special effect corresponding to the special effect data of the virtual treasure box is displayed in the expression package of the virtual treasure box.
Displaying the special effect corresponding to the virtual treasure box in the expression package increases the variety of expression package play methods, enhances the user experience, and improves the retention rate of users in the live broadcast room.
In an optional embodiment, after the step in which the client responds to the sending operation on the expression package interactive data and sends the expression package interactive data to the server, the method includes:
S4071: the server parses the expression package interactive data to obtain the total virtual value; decomposes the total virtual value into a plurality of different virtual values; determines a corresponding virtual gift for each virtual value; and thereby obtains a plurality of virtual gifts corresponding to the total virtual value.
In the embodiment of the application, after the server obtains the total virtual value of the first virtual treasure box, it splits the total into a plurality of different virtual values and obtains the corresponding virtual gifts. For example, if the total virtual value is 900 Y coins, the 900 Y coins are split into a combination of gifts: 450 Y coins go to large gifts of high virtual value and 450 Y coins to small gifts of low virtual value. The large gifts may be nine "Lavender Garden" virtual gifts at 50 Y coins each, and the small gifts may be 450 "True Love Certificate" virtual gifts at 1 Y coin each.
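The decomposition can be sketched directly from the 900 Y-coin example (the 50/50 split and the gift prices come from the example; the function itself is illustrative):

def decompose_treasure_value(total: int) -> list:
    # Split the total evenly between large and small gifts, as in the
    # example: 450 -> 9 x 50-coin gifts, 450 -> 450 x 1-coin gifts.
    large_budget = total // 2
    small_budget = total - large_budget
    large_price, small_price = 50, 1
    return [
        ("Lavender Garden", large_price, large_budget // large_price),
        ("True Love Certificate", small_price, small_budget // small_price),
    ]

print(decompose_treasure_value(900))
# [('Lavender Garden', 50, 9), ('True Love Certificate', 1, 450)]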
After the step in which the client responds to the sending operation on the expression package interactive data and sends the expression package interactive data to the live broadcast room for display, the method includes the following steps:
S410: the client that has joined the live broadcast room responds to a triggering operation on the virtual treasure box expression package, generates a virtual treasure box opening request, and sends the request to the server;
S411: the server receives the virtual treasure box opening request, selects one virtual gift from the plurality of virtual gifts, and sends the selected virtual gift to the client that has joined the live broadcast room.
Referring to fig. 15, fig. 15 is a schematic display diagram of opening a virtual treasure box in a virtual treasure box expression package according to an embodiment of the present application. The virtual treasure box expression package 151 shown in fig. 15 displays an opened virtual treasure box 152 and a received virtual gift 153.
In the embodiment of the application, a user in the live broadcast room can click the virtual treasure box in the virtual treasure box expression package to open it and obtain a corresponding virtual gift. Specifically, after the user clicks the virtual treasure box, a virtual treasure box opening request is sent to the server, and the server feeds the opening result back to the user's client. The client may present the result in the form of a popup. The result may be that a virtual gift was successfully received, e.g., a popup reading "Congratulations, you received 2 True Love Certificates". The result may also be that no virtual gift was received: if all the virtual gifts on the server have already been claimed, the popup reminds the user that the treasure box has been fully opened and no further gifts can be obtained. A user in the live broadcast room can thus open a virtual treasure box by clicking the virtual treasure box expression package and obtain a corresponding virtual gift, which improves the user experience and the retention rate of users in the live broadcast room.
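On the server side, the box-opening step reduces to drawing one gift from the remaining pool, or reporting that the box is empty (an illustrative sketch):

import random

def open_treasure_box(remaining_gifts: list) -> str:
    # Pick one virtual gift at random from the pool decomposed earlier;
    # if every gift has been claimed, report that the box is empty.
    if not remaining_gifts:
        return "The treasure box has been fully opened; no gifts are left."
    gift = remaining_gifts.pop(random.randrange(len(remaining_gifts)))
    return f"Congratulations, you received: {gift}"

pool = ["True Love Certificate"] * 3
print(open_treasure_box(pool))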
Optionally, if the user successfully receives a virtual gift by opening the virtual treasure box, a message that the user has received the virtual gift can be displayed on the public screen, which further improves the user experience and the retention rate of users in the live broadcast room.
Please refer to fig. 16, which is a schematic structural diagram of an expression package based live broadcast room interaction device according to a second embodiment of the present application. The apparatus may be implemented as all or part of a client, in software, hardware, or a combination of both.
This live broadcast room interactive installation 5 based on expression package includes:
the expression list loading module 51 is configured to respond to an expression list loading instruction of the live broadcast room and acquire expression list data, and to load an expression list according to the expression list data; the expression list comprises expression package images corresponding to a plurality of expression package identifiers;
the edit page display module 52 is configured to respond to a triggering operation on a first expression package image and display an expression package editing page according to a first expression package identifier corresponding to the first expression package image; the expression package editing page displays a plurality of interactive contents; the first expression package image is an expression package image selected from the plurality of expression package images;
the interaction data acquisition module 53 is configured to respond to a triggering operation on first interactive content and obtain expression package interaction data according to the first interactive content and the first expression package identifier; the first interactive content is interactive content selected from the plurality of interactive contents;
and the interaction data display module 54 is configured to respond to a sending operation on the expression package interaction data and send the expression package interaction data to the live broadcast room for display.
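The following is a minimal TypeScript sketch of these four modules as interfaces. All names and signatures are illustrative assumptions, not the actual implementation of the apparatus; in practice the modules may be realized as classes, functions, or hardware as noted below.

```typescript
// Sketch only: signatures are assumptions mirroring modules 51-54 above.
interface ExpressionListLoadingModule {
  // Fetch expression list data on a loading instruction and build the
  // list mapping expression package identifiers to image URLs.
  loadList(roomId: string): Promise<Map<string, string>>;
}

interface EditPageDisplayModule {
  // Show the editing page, with its interactive contents, for the
  // selected (first) expression package identifier.
  showEditPage(expressionPackageId: string): void;
}

interface InteractionDataAcquisitionModule {
  // Combine the chosen interactive content with the expression package
  // identifier into expression package interaction data.
  buildInteractionData(contentId: string, expressionPackageId: string): unknown;
}

interface InteractionDataDisplayModule {
  // Send the interaction data to the live broadcast room for display.
  send(interactionData: unknown): Promise<void>;
}
```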
It should be noted that, when the expression package based live broadcast room interaction apparatus provided in the foregoing embodiment executes the expression package based live broadcast room interaction method, the division of the functional modules above is merely illustrative; in practical applications, the functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the expression package based live broadcast room interaction apparatus and the expression package based live broadcast room interaction method provided by the foregoing embodiments belong to the same concept; details of the implementation process are given in the method embodiments and are not repeated here.
Fig. 17 is a schematic structural diagram of a computer device according to a fifth embodiment of the present application. As shown in fig. 17, the computer device 21 may include: a processor 210, a memory 211, and a computer program 212 stored in the memory 211 and executable on the processor 210, such as an expression package based live broadcast room interaction program; the processor 210 implements the steps in the above embodiments when executing the computer program 212.
The processor 210 may include one or more processing cores. The processor 210 is connected to the various parts of the computer device 21 through various interfaces and lines, and executes the various functions of the computer device 21 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 211 and by calling data in the memory 211. Optionally, the processor 210 may be implemented in at least one hardware form among Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 210 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed on the touch display screen; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 210 and may instead be implemented by a separate chip.
The memory 211 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 211 includes a non-transitory computer-readable medium. The memory 211 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 211 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function), instructions for implementing the above method embodiments, and the like; the data storage area may store the data involved in the above method embodiments. Optionally, the memory 211 may also be at least one storage device located remotely from the processor 210.
An embodiment of the present application further provides a computer storage medium. The computer storage medium may store a plurality of instructions suitable for being loaded by a processor to execute the method steps of the foregoing embodiments; for the specific execution process, refer to the specific description of the foregoing embodiments, which is not repeated here.
It should be clear to those skilled in the art that, for convenience and brevity of description, the foregoing division of functional units and modules is merely illustrative; in practical applications, the above functions may be assigned to different functional units and modules as needed, that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from one another and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described or illustrated in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative: the division of modules or units is merely a division by logical function, and in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the above embodiments may also be implemented by a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like.
The present application is not limited to the above-described embodiments; any modifications and variations that do not depart from the spirit and scope of the present application are intended to fall within the scope of the claims of the present application and their technical equivalents.

Claims (13)

1. An expression package based live broadcast room interaction method, characterized by comprising the following steps:
a client responds to an expression list loading instruction of a live broadcast room and acquires expression list data, and loads an expression list according to the expression list data; the expression list comprises expression package images corresponding to a plurality of expression package identifiers;
the client responds to a triggering operation on a first expression package image and displays an expression package editing page according to a first expression package identifier corresponding to the first expression package image; the expression package editing page displays a plurality of interactive contents; the first expression package image is an expression package image selected from the plurality of expression package images;
the client responds to a triggering operation on first interactive content and obtains expression package interaction data according to the first interactive content and the first expression package identifier; the first interactive content is interactive content selected from the plurality of interactive contents;
and the client responds to a sending operation on the expression package interaction data and sends the expression package interaction data to the live broadcast room for display.
2. The expression package based live broadcast room interaction method according to claim 1, characterized in that:
if the first expression package identifier indicates an avatar expression package type, the plurality of interactive contents are a plurality of avatars; the step in which the client responds to the triggering operation on the first interactive content and obtains expression package interaction data according to the first interactive content and the first expression package identifier comprises:
the client responds to a triggering operation on a first avatar and acquires a picture address of the first avatar and the first expression package identifier; wherein the first avatar is an avatar selected from the plurality of avatars;
and the client obtains a first avatar picture according to the picture address, obtains the first expression package image according to the first expression package identifier, synthesizes the first avatar picture and the first expression package image to obtain an avatar expression package, and obtains the expression package interaction data according to the avatar expression package.
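As a non-limiting illustration of the synthesis step in claim 2, the following TypeScript sketch composites an avatar picture onto an expression package image using the browser canvas API. The placement and sizing of the avatar are assumptions, since the claim does not specify a layout.

```typescript
// Sketch only: assumes the avatar is drawn into the top-left quarter of the emoticon.
async function composeAvatarEmoticon(
  avatarUrl: string,        // picture address of the selected (first) avatar
  emoticonImageUrl: string, // image for the first expression package identifier
): Promise<string> {
  const load = (src: string) =>
    new Promise<HTMLImageElement>((resolve, reject) => {
      const img = new Image();
      img.crossOrigin = "anonymous";
      img.onload = () => resolve(img);
      img.onerror = reject;
      img.src = src;
    });
  const [avatar, base] = await Promise.all([load(avatarUrl), load(emoticonImageUrl)]);
  const canvas = document.createElement("canvas");
  canvas.width = base.width;
  canvas.height = base.height;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(base, 0, 0);
  // Assumed placement: avatar scaled into the top-left quarter of the emoticon.
  ctx.drawImage(avatar, 0, 0, base.width / 2, base.height / 2);
  return canvas.toDataURL("image/png"); // the synthesized avatar expression package
}
```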
3. The expression package based live broadcast room interaction method according to claim 2, characterized in that:
the step in which the client responds to the sending operation on the expression package interaction data and sends the expression package interaction data to the live broadcast room for display comprises:
the client responds to the sending operation on the expression package interaction data and sends the expression package interaction data to a server;
the server receives the expression package interaction data and sends the expression package interaction data to a client that has joined the live broadcast room;
and the client that has joined the live broadcast room parses the expression package interaction data to obtain the avatar expression package, and displays the avatar expression package in a live broadcast room interface.
4. The expression package based live broadcast room interaction method according to claim 2, characterized in that:
after the step in which the client that has joined the live broadcast room parses the expression package interaction data to obtain the avatar expression package and displays the avatar expression package in a live broadcast room interface, the method further comprises:
in response to a triggering operation, by the client that has joined the live broadcast room, on the avatar expression package, acquiring special effect data corresponding to the avatar expression package;
and displaying the special effect corresponding to the avatar expression package according to the special effect data.
5. The expression package based live broadcast room interaction method according to claim 1, characterized in that:
if the first expression package identifier indicates a gift expression package type, the plurality of interactive contents are a plurality of virtual gifts; the step in which the client responds to the triggering operation on the first interactive content and obtains expression package interaction data according to the first interactive content and the first expression package identifier comprises:
the client responds to a triggering operation on a first virtual gift and acquires a virtual gift identifier and a virtual gift quantity of the first virtual gift, and the first expression package identifier; wherein the first virtual gift is a virtual gift selected from the plurality of virtual gifts;
and the client obtains the expression package interaction data according to the virtual gift identifier, the virtual gift quantity and the first expression package identifier.
6. The expression package based live broadcast room interaction method according to claim 5, characterized in that:
the step in which the client responds to the sending operation on the expression package interaction data and sends the expression package interaction data to the live broadcast room for display comprises:
the client responds to the sending operation on the expression package interaction data and sends the expression package interaction data to a server;
the server parses the expression package interaction data to obtain the virtual gift identifier, the virtual gift quantity and the first expression package identifier, presents the corresponding first virtual gift to an anchor according to the virtual gift identifier and the virtual gift quantity, and sends the expression package interaction data to a client that has joined the live broadcast room;
and the client that has joined the live broadcast room parses the expression package interaction data to obtain the virtual gift identifier and the first expression package identifier, obtains a virtual gift expression package according to the virtual gift identifier and the first expression package identifier, and displays the virtual gift expression package in a live broadcast room interface.
7. The expression package based live broadcast room interaction method according to claim 6, characterized in that:
the step of obtaining a virtual gift expression package according to the virtual gift identifier and the first expression package identifier and displaying the virtual gift expression package in a live broadcast room interface comprises:
the client that has joined the live broadcast room judges, according to the virtual gift identifier, whether the first virtual gift has a virtual special effect;
if the first virtual gift does not have a virtual special effect, the client that has joined the live broadcast room obtains a virtual gift picture corresponding to the first virtual gift according to the virtual gift identifier, obtains the first expression package image according to the first expression package identifier, synthesizes the virtual gift picture and the first expression package image to obtain the virtual gift expression package, and displays the virtual gift expression package in the live broadcast room interface;
and if the first virtual gift has a virtual special effect, the client that has joined the live broadcast room obtains a virtual gift picture and virtual gift special effect data corresponding to the first virtual gift according to the virtual gift identifier, obtains the first expression package image according to the first expression package identifier, synthesizes the virtual gift picture and the first expression package image to obtain the virtual gift expression package, and displays, in the live broadcast room interface, the virtual gift expression package and the virtual special effect corresponding to the virtual gift special effect data.
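As a non-limiting illustration of the branch in claim 7, the following TypeScript sketch shows a client that checks for a virtual special effect before compositing and displaying the gift expression package. The helpers lookupGift, composeEmoticon, displayInRoom and playEffect are hypothetical stand-ins for the client's internal logic, not part of the patent.

```typescript
// Sketch only: helper functions are assumed, declared here without bodies.
interface GiftInfo {
  pictureUrl: string;
  effectData?: string; // present only when the gift carries a virtual special effect
}

declare function lookupGift(giftId: string): GiftInfo;                      // assumed
declare function composeEmoticon(pictureUrl: string, emoticonId: string): string; // assumed
declare function displayInRoom(emoticon: string): void;                     // assumed
declare function playEffect(effectData: string): void;                      // assumed

function showGiftEmoticon(giftId: string, emoticonId: string): void {
  const gift = lookupGift(giftId);
  // Both branches composite the gift picture with the expression package image.
  const emoticon = composeEmoticon(gift.pictureUrl, emoticonId);
  displayInRoom(emoticon);
  if (gift.effectData !== undefined) {
    playEffect(gift.effectData); // extra virtual effect for effect-bearing gifts
  }
}
```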
8. The expression package based live broadcast room interaction method according to claim 1, characterized in that:
if the first expression package identifier indicates a treasure box expression package type, the plurality of interactive contents are a plurality of virtual treasure boxes; the step in which the client responds to the triggering operation on the first interactive content and obtains expression package interaction data according to the first interactive content and the first expression package identifier comprises:
the client responds to a triggering operation on a first virtual treasure box and acquires a total virtual value corresponding to the first virtual treasure box and the first expression package identifier; wherein the first virtual treasure box is a virtual treasure box selected from the plurality of virtual treasure boxes;
and the client obtains the expression package interaction data according to the total virtual value and the first expression package identifier.
9. The expression package based live broadcast room interaction method according to claim 8, characterized in that:
the step in which the client responds to the sending operation on the expression package interaction data and sends the expression package interaction data to the live broadcast room for display comprises:
the client responds to the sending operation on the expression package interaction data and sends the expression package interaction data to a server;
the server receives the expression package interaction data and sends the expression package interaction data to a client that has joined the live broadcast room;
and the client that has joined the live broadcast room parses the expression package interaction data to obtain the first expression package identifier, obtains the first expression package image according to the first expression package identifier, acquires a preset virtual treasure box picture and virtual treasure box special effect data, synthesizes the virtual treasure box picture and the first expression package image to obtain a virtual treasure box expression package, and displays, in a live broadcast room interface, the virtual treasure box expression package and the virtual special effect corresponding to the virtual treasure box special effect data.
10. The expression package based live broadcast room interaction method according to claim 9, characterized in that:
after the step in which the client responds to the sending operation on the expression package interaction data and sends the expression package interaction data to the server, the method further comprises:
the server parses the expression package interaction data to obtain the total virtual value, decomposes the total virtual value into a plurality of different virtual values, determines a corresponding virtual gift according to each virtual value, and obtains a plurality of virtual gifts corresponding to the total virtual value;
and after the step in which the client responds to the sending operation on the expression package interaction data and sends the expression package interaction data to the live broadcast room for display, the method further comprises:
in response to a triggering operation, by the client that has joined the live broadcast room, on the virtual treasure box expression package, generating a virtual treasure box opening request and sending the virtual treasure box opening request to the server;
and the server receives the virtual treasure box opening request, selects one virtual gift from the plurality of virtual gifts, and sends the selected virtual gift to the client that has joined the live broadcast room.
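As a non-limiting illustration of the decomposition step in claim 10, the following TypeScript sketch splits a total virtual value into random shares and maps each share to a gift tier. The tier table and the splitting rule are assumptions, since the claim leaves the decomposition strategy open.

```typescript
// Sketch only: gift tiers and the random-share rule are illustrative assumptions.
const GIFT_TIERS: { minValue: number; name: string }[] = [
  { minValue: 100, name: "Rocket" },
  { minValue: 10, name: "Heart" },
  { minValue: 1, name: "Like" },
];

function decomposeIntoGifts(totalValue: number): string[] {
  const gifts: string[] = [];
  let remaining = totalValue;
  while (remaining > 0) {
    // Draw a random share between 1 and the remaining value.
    const share = 1 + Math.floor(Math.random() * remaining);
    // Map the share to the highest tier it can afford; the 1-value tier
    // guarantees a match, so the non-null assertion is safe.
    const tier = GIFT_TIERS.find((t) => share >= t.minValue)!;
    gifts.push(tier.name);
    remaining -= share;
  }
  return gifts; // one entry per virtual gift placed in the treasure box
}
```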
11. An expression package based live broadcast room interaction apparatus, characterized by comprising:
an expression list loading module, configured to respond to an expression list loading instruction of a live broadcast room and acquire expression list data, and to load an expression list according to the expression list data; the expression list comprises expression package images corresponding to a plurality of expression package identifiers;
an edit page display module, configured to respond to a triggering operation on a first expression package image and display an expression package editing page according to a first expression package identifier corresponding to the first expression package image; the expression package editing page displays a plurality of interactive contents; the first expression package image is an expression package image selected from the plurality of expression package images;
an interaction data acquisition module, configured to respond to a triggering operation on first interactive content and obtain expression package interaction data according to the first interactive content and the first expression package identifier; the first interactive content is interactive content selected from the plurality of interactive contents;
and an interaction data display module, configured to respond to a sending operation on the expression package interaction data and send the expression package interaction data to the live broadcast room for display.
12. A computer device, comprising: a processor, a memory, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 10 when executing the computer program.
13. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 10.
CN202211142077.3A 2022-09-20 2022-09-20 Expression package based live broadcast room interaction method and device, computer equipment and medium Pending CN115550677A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211142077.3A 2022-09-20 2022-09-20 Expression package based live broadcast room interaction method and device, computer equipment and medium

Publications (1)

Publication Number Publication Date
CN115550677A (en) 2022-12-30

Family ID=84726860

Country Status (1)

Country Link
CN (1) CN115550677A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination