
CN109788364B - Video call interaction method and device and electronic equipment - Google Patents


Info

Publication number: CN109788364B
Application number: CN201711108000.3A
Authority: CN (China)
Other versions: CN109788364A (Chinese)
Prior art keywords: video, terminal, call, interaction, played
Legal status: Active (assumed; not a legal conclusion)
Inventors: 韦宏杰, 沈姗姗, 段虞峰, 金晶, 田洋菥
Current Assignee: Alibaba Group Holding Ltd
Original Assignee: Alibaba Group Holding Ltd
Application filed by Alibaba Group Holding Ltd
Priority to CN201711108000.3A
Publication of application: CN109788364A
Publication of grant: CN109788364B (application granted)

Landscapes

  • Telephonic Communication Services (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of the present application disclose a video call interaction method and apparatus, and an electronic device. The method includes: a first terminal determines progress information of a first video being played on a second terminal; when content that invites interaction through a video call is playing in the first video, the first terminal simulates a video call interface by displaying a second video obtained from a first server, the second video having been shot at the filming site of the first video. With the embodiments of the present application, the user obtains a more realistic experience, which helps increase user participation.

Description

Video call interaction method and device and electronic equipment
Technical Field
The application relates to the technical field of multi-screen interaction, in particular to a video call interaction method, a video call interaction device and electronic equipment.
Background
With the increasing maturity and diversity of online merchants' marketing methods, a single online shopping format can no longer satisfy user demand. How to stand out in the current market environment, become more competitive, and promote sales more strongly has therefore become a pressing problem for online sellers.
To this end, some online selling platforms have turned ordinary days into fixed dates for large-scale promotional events by creating "shopping festivals," for example the annual "Double 11" on November 11. Because a peak of concentrated ordering usually occurs at midnight on the day of the event, when users' enthusiasm for ordering is at its highest, platforms such as Tmall ("Tianmao") have also launched gala events such as the "Double 11 Carnival Night" to build a stronger shopping atmosphere: a multi-channel live gala broadcast lasting several hours immediately before "Double 11" further stokes users' shopping enthusiasm. In addition, during the live broadcast of the gala, the platform can interact with users throughout via the Tmall client, giving users the feeling of being present at the gala and thereby increasing user participation.
Therefore, how to provide new and diversified interaction modes has become a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The present application provides a video call interaction method and apparatus, and an electronic device, which give the user a more realistic experience and help increase user participation.
The application provides the following scheme:
a video call interaction method comprises the following steps:
the method comprises the steps that a first terminal determines progress information of a first video played in a second terminal;
when the content related to interaction through a video call mode is played in the first video, a video call interface is simulated through a mode of displaying a second video obtained from a first server, wherein the second video is obtained by shooting at a shooting site of the first video.
A video call interaction method, comprising:
a first server receives a second video submitted by a third terminal, the second video having been shot at the filming site of a first video, the first video being intended for playback on a second terminal;
after receiving requests to participate in video call interaction submitted by a plurality of first terminals, the first server provides the second video to the plurality of first terminals, so that when a first terminal determines that the first video is playing content that invites interaction through a video call, it simulates a video call interface by displaying the second video.
A video call interaction method, comprising:
a third terminal captures a second video through an image acquisition device, the second video being shot at the filming site of a first video, the first video being intended for playback on a second terminal device;
the third terminal submits the second video to a first server and a second server, so that the first server can provide the second video to a first terminal, which simulates a video call interface by displaying the second video, and so that, during the simulated video call, the second server uses the second video as the content played on the second terminal.
A video call method, comprising:
a third server obtains a video stream shot in real time at a physical venue associated with a target event;
determines at least one target user associated with the target event;
and provides the video stream to a third client associated with the target user, for simulating a video call interface by playing the video stream.
A video call method, comprising:
a third client receives a video call request and presents a simulated interface, the simulated interface imitating the interface state when a video call request arrives and including an operation option for answering;
after receiving an answer operation through the operation option, the third client obtains a video stream from a server, the video stream being shot in real time at a physical venue associated with a target event;
and simulates a video call interface by playing the video stream.
A video call method, comprising:
a fourth client obtains a video stream shot in real time at a physical venue associated with a target event;
and submits the video stream to a third server, which determines at least one target user associated with the target event and provides the video stream to a third client associated with the target user, the third client simulating a video call interface by playing the video stream.
A voice call interaction method, comprising:
a first terminal determines progress information of a video being played on a second terminal;
when content that invites interaction through a voice call is playing in the video, the first terminal simulates a voice call signal using an audio signal obtained from a first server, the audio signal having been captured at the filming site of the video.
A voice call interaction method, comprising:
a first server receives an audio signal submitted by a third terminal, the audio signal having been captured at the filming site of a target video, the target video being intended for playback on a second terminal;
after receiving requests to participate in voice call interaction submitted by a plurality of first terminals, the first server provides the audio signal to the plurality of first terminals, so that when a first terminal determines that the target video is playing content that invites interaction through a voice call, it simulates a voice call signal using the audio signal.
A voice call interaction method, comprising:
a first terminal determines progress information of a first audio being played on a fourth terminal;
when content that invites interaction through a voice call is playing in the first audio, the first terminal simulates a voice call signal using a second audio signal obtained from a first server, the second audio signal having been captured at the site where the first audio was recorded.
A voice call interaction method, comprising:
a first server receives a second audio signal submitted by a fifth terminal, the second audio signal having been captured at the recording site of a first audio, the first audio being intended for playback on a fourth terminal;
after receiving requests to participate in voice call interaction submitted by a plurality of first terminals, the first server provides the second audio signal to the plurality of first terminals, so that when a first terminal determines that the first audio is playing content that invites interaction through a voice call, it simulates a voice call signal using the second audio signal.
A video call interaction device is applied to a first terminal and comprises:
a first progress information determining unit for determining progress information of a first video played in a second terminal;
the first call simulation unit is used for simulating a video call interface in a mode of displaying a second video obtained from a first server when the content related to the interaction in the video call mode is played in the first video, wherein the second video is obtained by shooting at the shooting site of the first video.
A video call interaction device is applied to a first service end and comprises:
the second video receiving unit is used for receiving a second video submitted by a third terminal, wherein the second video is obtained by shooting at the shooting site of a first video, and the first video is used for playing through the second terminal;
and the second video providing unit is used for providing the second video to the plurality of first terminals after receiving the requests submitted by the plurality of first terminals for participating in the video call interaction so as to simulate a video call interface by displaying the second video when the first terminals determine that the first video plays the content related to the interaction in the video call mode.
A video call interaction device is applied to a third terminal and comprises:
the second video acquisition unit is used for acquiring a second video through the image acquisition equipment, wherein the second video is acquired by shooting at the shooting site of the first video; the first video is used for playing through second terminal equipment;
the second video submitting unit is used for submitting the second video to a first server and a second server so that the first server can provide the second video to a first terminal, and the first terminal simulates a video call interface in a mode of displaying the second video; and in the process of simulating the video call, the second server takes the second video as the content played in the second terminal.
A video call device is applied to a third server and comprises:
the video stream obtaining unit is used for obtaining a video stream shot in real time in a target event related entity site;
the target user determining unit is used for determining at least one target user associated with the target event;
and the video stream providing unit is used for providing the video stream to a third client associated with the target user so as to simulate a video call interface in a mode of playing the video stream.
A video call device applied to a third client comprises:
the device comprises a video call request receiving unit, a video call request receiving unit and a simulation interface, wherein the video call request receiving unit is used for receiving a video call request and providing a simulation interface, and the simulation interface is used for simulating an interface state when the video call request is received and comprises operation options for answering;
the video stream obtaining unit is used for obtaining a video stream from a server after receiving an answering operation request through the operation option, wherein the video stream is obtained by shooting in real time at an entity place associated with a target event;
and the call simulation unit is used for simulating a video call interface in a mode of playing the video stream.
A video call device applied to a fourth client comprises:
the video stream obtaining unit is used for obtaining a video stream, and the video stream is obtained by shooting in real time at an entity place related to a target event;
and the video stream submitting unit is used for submitting the video stream to a third server so as to determine at least one target user associated with the target event, providing the video stream to a third client associated with the target user, and simulating a video call interface in a mode of playing the video stream.
A voice call interaction device is applied to a first terminal and comprises:
a second progress information determining unit for determining progress information of a video played in the second terminal;
and the second communication simulation unit is used for simulating a voice communication signal through an audio signal obtained from the first server when the content related to the interaction in the voice communication mode is played in the video, wherein the audio signal is obtained by carrying out audio acquisition on the video shooting site.
A voice call interaction device is applied to a first service end and comprises:
the audio signal receiving unit is used for receiving an audio signal submitted by a third terminal, wherein the audio signal is obtained by carrying out audio acquisition on a shooting site of a target video, and the target video is used for playing through a second terminal;
the audio signal providing unit is used for providing the audio signals to the plurality of first terminals after receiving the requests submitted by the plurality of first terminals for participating in the voice call interaction, so that when the first terminals determine that the target video plays the content related to the interaction in the voice call mode, the audio signals are used for simulating the voice call signals.
A voice call interaction device is applied to a first terminal and comprises:
a third progress information determining unit, configured to determine progress information of the first audio played in the fourth terminal;
and the third communication simulation unit is used for simulating a voice communication signal through a second audio signal obtained from the first server when the content related to the interaction in the voice communication mode is played in the first audio, wherein the second audio signal is obtained by performing audio acquisition on the acquisition site of the first audio.
A voice call interaction device is applied to a first service end and comprises:
the second audio signal receiving unit is used for receiving a second audio signal submitted by a fifth terminal, wherein the second audio signal is obtained by carrying out audio acquisition on a first audio acquisition site, and the first audio is used for playing through a fourth terminal;
and the second audio signal providing unit is used for providing the second audio signals to the plurality of first terminals after receiving the requests submitted by the plurality of first terminals for participating in the voice call interaction, so that when the first terminals determine that the first audio plays the content related to the interaction in the voice call mode, the voice call signals are simulated through the second audio signals.
An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
determining progress information of a first video being played on a second terminal;
when content that invites interaction through a video call is playing in the first video, simulating a video call interface by displaying a second video obtained from a first server, the second video having been shot at the filming site of the first video.
According to the specific embodiments provided herein, the present application discloses the following technical effects:
according to the method and the device, when the video is played in the terminal equipment, the scene of the corresponding interactive interface can be provided for the user through the first terminal, the first terminal can determine the progress information of the first video played in the second terminal, and if the content related to interaction in a video call mode is played, the video call interface can be simulated in a mode of displaying the second video obtained from the first server, wherein the second video is obtained by shooting the target person on the shooting site of the first video. In this way, one-to-many video call can be realized, and because the second video used for simulating the video call interface is obtained by shooting the target person on the shooting site of the first video, and the user usually participates in related interaction in the process of watching the video through the terminal device, the situation of the video call interface obtained by the user through the first terminal is basically consistent with the situation seen in the video played by the second terminal, so that the user obtains more real experience, and the participation degree of the user is improved.
Of course, a product practicing the present application need not achieve all of the above advantages at the same time.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application;
FIG. 2 is a flow chart of a first method provided by an embodiment of the present application;
FIG. 3 is a flow chart of a second method provided by embodiments of the present application;
FIG. 4 is a flow chart of a third method provided by embodiments of the present application;
FIG. 5 is a flow chart of a fourth method provided by embodiments of the present application;
FIG. 6 is a flow chart of a fifth method provided by embodiments of the present application;
FIG. 7 is a flow chart of a sixth method provided by embodiments of the present application;
FIG. 8 is a flow chart of a seventh method provided by embodiments of the present application;
FIG. 9 is a flowchart of an eighth method provided by an embodiment of the present application;
FIG. 10 is a flow chart of a ninth method provided by embodiments of the present application;
FIG. 11 is a flow chart of a tenth method provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of a first apparatus provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of a second apparatus provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of a third apparatus provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of a fourth apparatus provided by an embodiment of the present application;
FIG. 16 is a schematic diagram of a fifth apparatus provided by an embodiment of the present application;
FIG. 17 is a schematic view of a sixth apparatus provided by an embodiment of the present application;
FIG. 18 is a schematic view of a seventh apparatus provided by an embodiment of the present application;
FIG. 19 is a schematic diagram of an eighth apparatus provided by an embodiment of the present application;
FIG. 20 is a schematic view of a ninth apparatus provided by an embodiment of the present application;
FIG. 21 is a schematic diagram of a tenth apparatus provided in an embodiment of the present application;
FIG. 22 is a schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments that a person of ordinary skill in the art can derive from the embodiments given herein fall within the scope of the present application.
The embodiments of the present application provide a cross-screen interaction method based on a simulated video call. A typical application scenario is a live broadcast such as the "Double 11" gala. In this scenario, the gala program is broadcast live over multiple channels, such as broadcast television and the Internet, while a corresponding interactive interface is provided through a client such as Tmall ("Tianmao"). A user watching the gala can thus interact with it through the client installed on a first terminal such as a mobile phone. Specifically, a designated person at the filming site of the gala can play the role of the video call requester, and a "video call" segment can be scheduled into the program; for example, the host or that person can introduce the activity and give a spoken cue such as "a certain star is calling you, get ready to answer." The user watches this live on a second terminal such as a television, while the interface of the first terminal, such as a mobile phone, presents a simulated incoming video call, so that the user feels a video call is about to connect.
Then, if the user chooses to answer, a corresponding video call picture is presented. This picture is obtained by filming the person at the gala site, so the background the user sees in the call on the mobile phone, such as the stage, is the same as the background seen on the television. The user thus gets a more realistic experience and feels that the person is actually in a video call with them. Of course, since the embodiments of the present application mainly let the user see the effect of a video call by simulation, the user cannot actually converse with the person; they can only listen to the person speaking at the gala site.
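The ring-then-answer flow just described could be sketched as a small client-side state machine. All class, method, and URL names below are illustrative assumptions for the sketch, not from the patent:

```python
class SimulatedVideoCall:
    """Client-side state machine for a simulated incoming video call."""

    def __init__(self, fetch_stream):
        # fetch_stream: callable returning the pre-shot "second video"
        # (e.g. a stream URL obtained from the first server).
        self._fetch_stream = fetch_stream
        self.state = "idle"
        self.stream = None

    def on_call_cue(self):
        # Triggered when the first video reaches the interaction segment:
        # show a fake incoming-call screen with an "answer" option.
        self.state = "ringing"

    def answer(self):
        if self.state != "ringing":
            raise RuntimeError("no incoming call to answer")
        # Fetch the second video shot at the gala site and play it
        # full-screen to mimic a live video call.
        self.stream = self._fetch_stream()
        self.state = "in_call"

    def hang_up(self):
        self.state = "idle"
        self.stream = None
```

Note that, consistent with the text above, the "call" is one-way: answering only starts playback of a video that was already being captured at the site.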
In terms of system architecture, referring to FIG. 1, an embodiment of the present application may involve a first server, a first terminal, and a third terminal. The first server may be the server of the system that provides the interactive interface for live videos such as the gala. The first terminal may be a device such as a mobile phone on which the client that the system provides to a first user, such as a consumer, is installed; through it the user can open a dedicated gala page and interact with specific activities in the gala. The third terminal may be a terminal the system provides to on-site staff, such as camera operators at the gala. When the video call interaction segment takes place, the third terminal submits the captured second video to the first server, and the interface displayed by the first terminal during the simulated call is generated by displaying this second video.
To give the user a still more realistic experience, the system architecture may further include a second server corresponding to the outside broadcast van of the broadcast television system, which selects among the multiple camera feeds shot on site and determines the picture finally presented on a terminal device such as a television. When the third terminal submits the captured second video of the person "dialing" the video call to the first server, it simultaneously provides it to the second server, so that the video call interface the user sees on the mobile phone is essentially consistent with the television program seen on the second terminal, further improving realism.
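The fan-out between these components can be sketched as follows; the class names and the idea of modeling the servers as plain objects are assumptions for illustration, not from the patent:

```python
class FirstServer:
    """Serves the second video to phone clients (first terminals)."""

    def __init__(self):
        self.second_video = None

    def receive(self, video):
        self.second_video = video

    def provide(self):
        # Returned to each first terminal that requested to participate.
        return self.second_video


class SecondServer:
    """Broadcast-side director (outside broadcast van): picks the
    picture shown on the TV (second terminal)."""

    def __init__(self):
        self.on_air = None

    def receive(self, video):
        # During the simulated call the same second video goes on air,
        # keeping the phone and the TV pictures consistent.
        self.on_air = video


def third_terminal_submit(video, first_server, second_server):
    """The camera operator's terminal fans the captured second video
    out to both servers simultaneously."""
    first_server.receive(video)
    second_server.receive(video)
```

Submitting one clip through `third_terminal_submit` leaves both the phone-facing server and the broadcast feed holding the same content, which is exactly the consistency property the paragraph above describes.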
Specific implementations are described in detail below.
Example one
First, the first embodiment provides a video call interaction method from the perspective of the first terminal. Referring to FIG. 2, the method may specifically include:
s201: the method comprises the steps that a first terminal determines progress information of a first video played in a second terminal;
during specific implementation, a portal for entering a specific interactive interface can be provided through a home page of the first terminal, for example, a portal for "double 11 evening parties" is provided on a "tianmao" client home page. Of course, the access entry of the interface may also be provided through other positions, for example, the access entry of the interface may also be provided in a video content live interface of the first terminal, so that the user can switch from the live interface to an interactive interface, and so on. The user can enter the interactive interface corresponding to the evening through the entrance.
The first video played on the second terminal may refer to the live gala content. Specifically, the second terminal may be a device that receives and plays a video signal transmitted by a broadcast station, and the first video is that signal. Since the second terminal may be a television, that is, the first video is played and the video call interface is displayed on different devices, how the first terminal learns the playback progress on the second terminal, including which program segment is currently playing, must be considered. This problem can be solved in several ways. In a simple approach, the progress information of the video content played on the second terminal is determined from a preset timetable: even a live gala usually runs to a schedule, so the time of each program segment can be preset, the actual broadcast times roughly match the preset ones, and the progress, including which segment is currently playing or about to play, can be determined from the preset timetable.
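The timetable-based approach above amounts to a lookup from the current clock time to the latest segment that has started. The schedule entries below are invented examples; the patent only says that a preset timetable is used:

```python
from datetime import time

# Hypothetical gala timetable: (start time, segment name), sorted by start.
SCHEDULE = [
    (time(20, 0), "opening"),
    (time(21, 0), "video_call_segment"),
    (time(21, 30), "closing"),
]


def current_program(now):
    """Return the segment assumed to be playing at clock time `now`,
    i.e. the latest scheduled segment that has already started."""
    playing = None
    for start, name in SCHEDULE:
        if now >= start:
            playing = name
        else:
            break
    return playing
```

When `current_program` returns `"video_call_segment"`, the first terminal would present the simulated incoming call.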
However, since the first video may be live, a timetable preset in the first terminal may fail to stay synchronized with the program actually playing on the second terminal. Moreover, because the second terminal usually plays a broadcast television signal, the signal may reach users in different locations at different times even though it is transmitted at the same moment. That is, for the same event at the gala site, a user in Beijing might see it on the second terminal at 21:00:00 while a user in Guangzhou sees it at 21:00:02. Therefore, even if every segment of the gala runs strictly to the preset timetable, users in different regions may experience it differently. For interactive program segments, though, the television and the mobile phone must cooperate seamlessly for the user to obtain a good, realistic experience; under the conditions above, the interface on some users' phones would be synchronized with the television program while others' would not.
For this reason, the embodiments of the present application also offer a preferable approach: the first terminal senses the playback progress of the first video, including which program segment is currently playing, by detecting information from the second terminal. Since a user typically interacts on a mobile terminal such as a phone while watching television, the first terminal and the second terminal are usually in the same room, a short distance apart. In this case, the first terminal can perceive the target program segment played on the second terminal as follows: it performs voiceprint detection on the audio played by the second terminal and determines the progress information, which may include the name of the target program segment, from detected keywords. The audio played on the second terminal may be speech from a host or a guest, and the spoken content may contain keywords such as the name of a program segment.
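The keyword step above can be sketched once the speech has been recognized; real speech recognition is abstracted away here, and the keyword table is an invented example, not from the patent:

```python
# Hypothetical mapping from announced keywords to program segments.
PROGRAM_KEYWORDS = {
    "star call": "video_call_segment",
    "lucky draw": "lottery_segment",
}


def detect_program(transcript):
    """Map a recognized speech snippet from the TV's audio to the
    program segment it announces, or None if nothing matches."""
    text = transcript.lower()
    for keyword, program in PROGRAM_KEYWORDS.items():
        if keyword in text:
            return program
    return None
```

In a real client the transcript would come from an on-device speech recognizer listening to the room; only the matching step is shown here.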
Alternatively, the second server corresponding to the broadcast station may add a sound wave signal of a preset frequency to the video signal at the position where each program segment starts or is about to start. As the video signal is delivered to the user's second terminal, the sound wave signal is delivered along with it. The frequency of the sound wave signal may lie outside the range of human hearing, so the user does not notice it, but the first terminal can detect it and use it as the basis for determining that the target program content has started or is about to start. In this way, a prompting sound wave signal is carried in the video signal and relayed to the first terminal through the second terminal, so that what the user sees on the second terminal can be seamlessly connected with what the user sees on the first terminal, yielding a better experience.
The specific frequency of the sound wave signal may be determined by the service end and provided to the server corresponding to the second terminal (the second server), which inserts the sound wave signal into the video signal at the position where the target program content starts or is about to start. The service end may also inform the first terminal of the frequency in some way, so that the first terminal and the second terminal can be linked through the sound wave signal. It should be noted that, since the same video content contains multiple program segments, sound wave signals of different frequencies may be assigned to different segments. The correspondence between program content and sound wave frequency may be provided to the second server, which adds the sound wave signals accordingly; the same correspondence is also provided to the first terminal, which can then determine which program content is playing or about to play according to the frequency of the detected sound wave signal, and so on.
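One way the first terminal could detect such a near-ultrasonic marker is the Goertzel algorithm, which measures the energy of a single frequency bin in a block of microphone samples. The sketch below is an assumed implementation; the frequencies, block size, and program names are invented for illustration and are not specified by the disclosure.

```python
import math

# Illustrative frequency-to-program mapping (assumed, not from the patent);
# the frequencies sit near the upper edge of human hearing.
PROGRAM_TONES = {
    18500.0: "opening",
    19000.0: "video_call_segment",
    19500.0: "closing",
}

def goertzel_power(samples, sample_rate, target_freq):
    """Energy of a single frequency bin, via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)  # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def detect_program(samples, sample_rate=44100):
    """Return the program whose marker tone dominates the block, or None."""
    powers = {f: goertzel_power(samples, sample_rate, f) for f in PROGRAM_TONES}
    best = max(powers, key=powers.get)
    others = sum(p for f, p in powers.items() if f != best)
    # Require an absolute floor and clear dominance, so that silence or
    # broadband noise does not trigger a false detection.
    if powers[best] < 1.0 or powers[best] < 10.0 * others:
        return None
    return PROGRAM_TONES[best]
```

A real detector would also need windowing and debouncing across successive blocks; the dominance check here is the simplest stand-in for that.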
In addition, it should be noted that, from the video played in the terminal device, the first terminal may determine not only which program is currently playing but also which segment or key node of that program has been reached; that is, identification may happen at the level of program names and also at the level of key nodes within a single program. For example, for a video call activity, the key nodes may include: the program starting, the on-stage character acting as the video call initiator placing the call, the on-stage picture indicating that the call is connected, and so on. A specific sound wave signal may be added at each key node, or a host or guest may announce the name of the node; in short, the first terminal can also determine, through sound wave detection, voiceprint recognition, and similar means, which key node the current program has reached.
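The key-node tracking described above can be sketched as a small monotone state machine: detections (whether from sound wave markers or voiceprint recognition) only ever move progress forward, so a late or repeated detection cannot rewind the interface. The node names are illustrative assumptions.

```python
# Illustrative ordered key nodes within one "video call" program.
KEY_NODES = ["program_start", "call_initiated", "call_connected"]

class ProgressTracker:
    """Tracks the furthest key node detected so far; out-of-order or
    duplicate detections never move progress backwards."""

    def __init__(self):
        self._index = -1

    def observe(self, node):
        i = KEY_NODES.index(node)
        if i > self._index:
            self._index = i
        return self.current()

    def current(self):
        return KEY_NODES[self._index] if self._index >= 0 else None
```

Forward-only progress matters here because acoustic detections can arrive out of order when the same marker is picked up more than once.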
S202: when the content related to interaction through a video call mode is played in the first video, a video call interface is simulated through a mode of displaying a second video obtained from a first server, wherein the second video is obtained by shooting at a shooting site of the first video.
Since it can be determined which program content the second terminal has played to, the interface state can be changed in step with it. For example, if the evening party is about to reach the "video call" segment, a prompt such as "a certain celebrity is about to call" may be displayed. Then, in a preferred embodiment, once the "video call" segment begins to play, or more precisely, once it is determined that the related person has placed the video call, a simulation interface may be provided in the interface of the first terminal. The simulation interface may display a first operation option for "answering" and a second operation option for "rejecting"; of course, in a preferred embodiment, information such as the related person's head portrait may also be obtained from the server and shown in the simulation interface, so that through the simulation interface the user experiences the video call request as if actually receiving one.
After an answering operation request is received through the first operation option, the second video may be obtained from the first service end; the second video may be submitted to the first service end by the second client and shot at the shooting site of the first video. In specific implementation, the second video may be transmitted as a video stream; that is, the first terminal need not wait for the entire second video file to finish transmitting before playing, but may play while transmitting. In other words, in the embodiment of the present application, the material used to simulate the video call is not a pre-recorded clip but a second video shot at the first video's shooting site while the first video (the evening party, etc.) is being played. In specific implementation, the first video may be played live, in which case the second video is shot at the live broadcast site of the first video, giving the user a more realistic experience. It should be noted that, in practical applications, a video call interaction usually involves an associated target person who is at the shooting site of the first video and plays the role of the call initiator, so the target person may be the particular subject when shooting the second video. Of course, the lens may also move away from the target person during shooting; for example, under the target person's guidance, the camera may show the user other parts of the first video's shooting site, and so on.
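The "play while transmitting" behavior amounts to consuming the second video as a chunked stream rather than a complete file. A minimal sketch, where the chunk size and the file-like source are assumptions:

```python
import io

def stream_chunks(source, chunk_size=4096):
    """Yield fixed-size chunks from a file-like source so playback can
    begin before the whole second video has arrived."""
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Usage sketch: feed each chunk to the player as it arrives, e.g.
#   for chunk in stream_chunks(network_response):
#       player.feed(chunk)
# where `network_response` and `player` are hypothetical objects.
```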
In specific implementation, a video player can be embedded in the interface; after the user triggers the answering operation option, the second video can be requested from the first server and played through the player. Because the shooting background of the second video matches the background of the video playing on a terminal device such as a television, the user feels that the target person is calling from the shooting site of the evening party. It should be noted that, since many users may participate in the video call interaction, the first service end provides the second video to many different first terminals, thereby implementing a one-to-many video call.
Certainly, in specific implementation, if an operation request for refusing the call is received through the second operation option, the simulation interface may first be closed and a third operation option for initiating a reconnection provided; for example, prompt information such as "video call from a certain celebrity" may be displayed together with an operation option such as "reconnect with the celebrity". After the user operates this option, the first terminal may likewise obtain the second video from the first server and, by playing it, give the user the experience of a video call with that person.
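The answer/reject/reconnect flow on the first terminal can be modeled as a small state machine. This is only a sketch of the control logic; the state and event names are invented here, not taken from the disclosure.

```python
class SimulatedCallUI:
    """Control logic for the simulated incoming-call interface:
    idle -> ringing -> playing (answer) or declined (reject),
    with a reconnect path from declined back to playing."""

    TRANSITIONS = {
        ("idle", "incoming_call"): "ringing",
        ("ringing", "answer"): "playing",      # fetch second video and play it
        ("ringing", "reject"): "declined",     # show the reconnect option
        ("declined", "reconnect"): "playing",  # fetch second video and play it
        ("playing", "call_ended"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        key = (self.state, event)
        if key in self.TRANSITIONS:
            self.state = self.TRANSITIONS[key]
        return self.state
```

Unknown (state, event) pairs are ignored, which mirrors a UI that simply does nothing for inapplicable taps.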
In addition, after the user chooses to answer through the first operation option, in order to keep the content displayed on the first terminal synchronized with the second terminal (for example, a broadcast television), it can be judged whether the first video played by the second terminal has reached the content indicating that the video call is connected. For example, an audio signal in the first video played by the second terminal may be collected, and it may be detected whether the video includes an alert tone simulating the "connected" state (features of the alert tone may be preconfigured or obtained from the first service end). If so, closing the simulation interface is triggered, and the video call interface is simulated by displaying the second video provided by the first service end. In this way, the consistency of the interaction process is improved and the user obtains a more realistic experience.
Moreover, to make the simulated video call interface more realistic, the user's image information may be collected through the image collecting device of the first terminal; a sub-window is then provided on the upper layer of the simulated video call interface, and the user's image information is displayed in it. For example, the sub-window may be shown at the upper right of the window playing the second video, displaying the current user's image, so that the interface display looks more like a real call.
It should be noted that, in specific implementation, the number of users who choose to answer the video call is usually very large, so the embodiment of the present application implements a one-to-many video call: after the target person initiates the call on stage, many users can all answer it at the same time, and each gets the feeling that the target person, at the evening-party shooting site, is calling them personally. Of course, since the call is one-to-many, each user can usually only hear the target person's voice; the target person cannot hear each user, and in fact cannot converse with a large number of users simultaneously. To allow at least some users to actually converse with the target person, that is, to let the target person hear at least one user's voice, a one-to-one call segment may be set up after the one-to-many call ends. At this point, the first service end may select one of the users who previously participated in the one-to-many call, so that this user can actually converse with the target person.
In this case, if the user associated with a first terminal is selected, the first terminal may provide the user with a corresponding prompt indicating that the user can speak. The user's image and sound information is submitted to the first service end, which forwards it to the second service end; the second service end plays the user's image and sound through the playback device at the evening-party site, so that the target person on site can hear the user's voice and see the user's image. Meanwhile, the first terminal continues to play the second video, so that both parties to the call can see and hear each other, thereby realizing a one-to-one two-way call. In addition, other users can watch the selected user conversing with the target person on the television screen.
In summary, according to the embodiment of the present application, while a video is played on a terminal device, a corresponding interactive interface can be provided to the user through the first terminal. The first terminal determines the progress information of the first video played on the second terminal, and if content related to interaction in the video call manner is being played, a video call interface can be simulated by displaying a second video obtained from a first server, the second video being shot at the shooting site of the first video. In this way, a one-to-many video call can be realized; and because the second video used to simulate the call interface is shot of the target person at the first video's shooting site, and the user typically participates in the interaction while watching the video on the terminal device, what the user sees on the first terminal's call interface is essentially consistent with what is seen in the video played by the terminal device. The user thus obtains a more realistic experience, which helps increase user participation.
Example two
The second embodiment corresponds to the first embodiment and provides a video call interaction method from the perspective of the first service end. Referring to fig. 3, the method may specifically include:
S401: the first service end receives a second video submitted by a third terminal, where the second video is shot at the shooting site of a first video, and the first video is for playing through the second terminal;
S402: after receiving requests submitted by a plurality of first terminals to participate in the video call interaction, providing the second video to the plurality of first terminals, so that when a first terminal determines that the first video has played to the content related to the interaction in the video call manner, a video call interface is simulated by displaying the second video.
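On the first service end, serving the same second video to many first terminals is essentially a one-to-many fan-out. The sketch below, with class and method names assumed for illustration, buffers stream chunks per subscribed terminal:

```python
class SecondVideoBroadcaster:
    """Fans out second-video chunks to every first terminal that has
    requested to participate in the video call interaction."""

    def __init__(self):
        self._queues = {}  # terminal_id -> list of pending chunks

    def subscribe(self, terminal_id):
        self._queues.setdefault(terminal_id, [])

    def push_chunk(self, chunk):
        # The same chunk goes to every subscriber: a one-to-many call.
        for queue in self._queues.values():
            queue.append(chunk)

    def drain(self, terminal_id):
        """Return and clear the chunks pending for one terminal."""
        chunks, self._queues[terminal_id] = self._queues[terminal_id], []
        return chunks
```

A production service would bound the per-terminal queues and drop slow consumers; this sketch shows only the fan-out itself.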
In specific implementation, the head portrait information of the target person associated with the video call interaction can be provided to the first terminal for display in a simulation interface, the simulation interface being used to simulate the interface state when a video call request is received.
In addition, after the one-to-many call ends, the first service end may select a target call user from the users associated with the plurality of first terminals, then receive the target call user's image and sound information and provide it to a second server, which plays the user's image and sound through the playback device at the video shooting site.
Example three
The third embodiment also corresponds to the first embodiment and provides a video call interaction method from the perspective of the second client. Referring to fig. 4, the method specifically includes:
S501: the third terminal obtains a second video through an image acquisition device, where the second video is shot at the shooting site of the first video, and the first video is for playing through the second terminal device;
S502: submitting the second video to a first server and a second server, so that the first server provides the second video to a first terminal, which simulates a video call interface by displaying the second video; and during the simulated video call, the second server uses the second video as the content played in the second terminal.
For the parts of the second and third embodiments that are not described in detail, reference may be made to the description of the first embodiment; details are not repeated here.
Example four
In the foregoing first to third embodiments, the specific application scenario was limited to live scenarios such as a "Double 11 evening party", but in practical applications the scheme of the embodiment of the present application may also be applied to other scenarios. For example, activities such as new product release events of various sizes may be held at physical locations every day, and merchants naturally want more online users to learn about such offline release events and to feel as close to being on site as possible, so as to achieve a higher page conversion rate or sales volume. In this scenario, the solution provided in the embodiment of the present application may be used: the release event may be shot on site, and the resulting video stream provided to a third server (called the "third" server only to distinguish it from the first server in the foregoing embodiments; in practice, the same first server may provide the corresponding function). The server may select specific target users from among the online users and provide the video stream to them by initiating video call requests, so that the target users can watch the release event site through their own terminal devices. Of course, to better fit the video call scenario, a related person may be arranged to appear in the picture when the video stream is shot, with the video stream presented as that person introducing the release event site, and so on.
In this way, the user can be made to obtain information relating to activities such as offline new product releases in a more novel manner.
Alternatively, in another scenario, users such as large teams often need to gather at a certain physical location, for example for a meeting or a dinner party. To notify the participating members of information such as the specific location, the prior art usually requires sending mail, instant messages, short messages, phone calls, and the like. With the method provided by the embodiment of the present application, the organizer of the gathering can go to the physical location, for example a conference room or restaurant, and use a mobile phone to shoot identifiers such as the door number or room name, obtaining a video stream that is submitted to the third server. A list of the members who need to participate can be provided to the third server in advance, so that the third server can send a video call request to each member separately; after a member answers, the video stream shot on site by the organizer is provided to that member, so that each member learns the name of the specific physical location en route and at the same time gets to know the environment of the place, and so on.
In a fourth embodiment, a video call method is provided from the perspective of the third server. Referring to fig. 5, the method may specifically include:
S601: the third server obtains a video stream shot in real time at the physical location associated with a target event;
S602: determining at least one target user associated with the target event;
S603: providing the video stream to a third client associated with the target user, for simulating a video call interface by playing the video stream.
In a specific implementation, before the video stream is provided to the third client associated with the target user, a video call request may be sent to that third client. The third client may provide a simulation interface that includes an operation option for answering, through which the user decides whether to answer the video call; the third server may then determine the target users who answered.
In a specific implementation, as described above, the target event may include a new product release event at a preset physical location. In this case, when determining at least one target user associated with the target event, users' search behavior may be monitored while the target event is taking place, and if the keywords a user searches for are related to the target event, that user is determined to be a target user.
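The search-based selection can be sketched as matching users' recent queries against keywords associated with the event; the log format and keyword set below are assumptions made for illustration.

```python
def match_target_users(search_logs, event_keywords):
    """search_logs: iterable of (user_id, query) pairs recorded while the
    target event is taking place. Returns the set of user ids whose
    queries mention any keyword associated with the event."""
    keywords = {k.lower() for k in event_keywords}
    return {
        user_id
        for user_id, query in search_logs
        if any(k in query.lower() for k in keywords)
    }
```

A real system would likely use tokenized or semantic matching rather than raw substring containment; the substring check keeps the sketch self-contained.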
Alternatively, the target event may include a gathering held at a preset physical location. In this case, when determining at least one target user associated with the target event, a list of the users who need to participate in the gathering may be obtained, and the target users determined according to the list.
Example five
The fifth embodiment corresponds to the fourth embodiment and provides a video call method from the perspective of the third client, which may specifically be a client provided for ordinary users. Referring to fig. 6, the method may specifically include:
S701: the third client receives a video call request and provides a simulation interface, where the simulation interface is used to simulate the interface state when a video call request is received and includes an operation option for answering;
S702: after receiving an answering operation request through the operation option, obtaining a video stream from the server, where the video stream is shot in real time at the physical location associated with a target event;
S703: simulating a video call interface by playing the video stream.
Example six
The sixth embodiment corresponds to the fourth embodiment and provides a video call method from the perspective of a fourth client, which may specifically be a client provided for users such as the organizer of the target event. Referring to fig. 7, the method may specifically include:
S801: the fourth client obtains a video stream, where the video stream is shot in real time at the physical location associated with the target event;
S802: submitting the video stream to a third server, which determines at least one target user associated with the target event and provides the video stream to a third client associated with the target user, for simulating a video call interface by playing the video stream.
For parts which are not described in detail in the fourth to sixth embodiments, reference may be made to the foregoing description, and details are not described here.
Example seven
In the seventh embodiment, the interaction can take the form of an audio call; that is, through the first terminal the user only listens to sound from the video shooting site, without watching a specific picture.
Specifically, referring to fig. 8, the seventh embodiment provides a voice call interaction method from the perspective of the first terminal, where the method may include:
S901: the first terminal determines progress information of a video played in a second terminal;
S902: when content related to interaction in the voice call manner is played in the video, a voice call signal is simulated through an audio signal obtained from the first server, where the audio signal is obtained by audio acquisition at the video shooting site.
Example eight
The eighth embodiment corresponds to the seventh embodiment and provides a voice call interaction method from the perspective of the first service end. Referring to fig. 9, the method may include:
S1001: the first service end receives an audio signal submitted by a third terminal, where the audio signal is obtained by audio acquisition at the shooting site of a target video, and the target video is for playing through a second terminal;
S1002: after receiving requests submitted by a plurality of first terminals to participate in the voice call interaction, providing the audio signal to the plurality of first terminals, so that when a first terminal determines that the target video has played to the content related to the interaction in the voice call manner, the voice call signal is simulated through the audio signal.
For the parts of the seventh embodiment and the eighth embodiment that are not described in detail, reference may be made to the descriptions of the first and second embodiments, and details thereof are not repeated here.
Example nine
In the foregoing embodiments, video or audio call interaction is provided to the user while the second terminal plays the first video; in the ninth embodiment, audio call interaction may also be provided while a fourth terminal plays first audio. The fourth terminal may specifically include a terminal for receiving audio signals transmitted by a radio station, for example a radio or an associated device with a radio function, and so on; the first audio may be the audio signal transmitted by the broadcast station.
Specifically, referring to fig. 10, the ninth embodiment provides a voice call interaction method from the perspective of a first terminal, where the method specifically includes:
S1101: the first terminal determines progress information of first audio played in a fourth terminal;
S1102: when content related to interaction in the voice call manner is played in the first audio, a voice call signal is simulated through a second audio signal obtained from a first server, where the second audio signal is obtained by audio acquisition at the site where the first audio is collected.
Example ten
Corresponding to the ninth embodiment, in the tenth embodiment, from the perspective of the first service end, there is provided a voice call interaction method, referring to fig. 11, the method may include:
S1201: the first service end receives a second audio signal submitted by a fifth terminal, where the second audio signal is obtained by audio acquisition at the collection site of the first audio, and the first audio is for playing through a fourth terminal;
S1202: after receiving requests submitted by a plurality of first terminals to participate in the voice call interaction, providing the second audio signal to the plurality of first terminals, for simulating a voice call signal through the second audio signal when a first terminal determines that the first audio has played to the content related to the interaction in the voice call manner.
For the parts of the ninth embodiment and the tenth embodiment that are not described in detail, the descriptions in the foregoing embodiments can be referred to, and the details are not repeated herein.
Corresponding to the first embodiment, an embodiment of the present application further provides a video call interaction apparatus, referring to fig. 12, where the apparatus is applied to a first terminal, and includes:
a first progress information determining unit 1301, configured to determine progress information of a first video played in a second terminal;
a first call simulation unit 1302, configured to simulate a video call interface in a manner of displaying a second video obtained from a first server when content related to interaction in a video call manner is played in the first video, where the second video is obtained by shooting at a shooting site of the first video.
In a specific implementation, the second video may be transmitted in the form of a video stream.
The second video is obtained by shooting a target person at the shooting site of the first video, where the target person is the initiator of the video call in the interaction and is located at the shooting site of the first video.
The shooting site of the first video may be a live broadcast site of the first video.
In a specific implementation, the apparatus may further include:
the simulation interface providing unit is used for providing a simulation interface, the simulation interface is used for simulating the interface state when a video call request is received, and the simulation interface comprises a first operation option for answering;
and the second video obtaining unit is used for closing the simulation interface after receiving an answering operation request through the first operation option, and obtaining the second video from the first server for displaying.
In addition, the simulation interface may further include a second operation option for rejection, and in this case, the apparatus may further include:
and the third operation option providing unit is used for closing the simulation interface and providing a third operation option for initiating reconnection if a refused operation request is received through the second operation option.
Specifically, the apparatus may further include:
an avatar information obtaining unit, configured to obtain avatar information of the target person associated with the video call request, where the target person is the initiator of the video call in the interaction and is at the shooting site of the first video;
and the head portrait information providing unit is used for providing the head portrait information of the target person in the simulation interface.
In addition, the method can also comprise the following steps:
and a judging unit, configured to judge, after the answering operation request is received through the first operation option, whether the first video played in the second terminal has played to the content indicating that the video call is connected; if so, to trigger closing of the simulation interface and simulate the video call interface by displaying the second video provided by the first service end.
Specifically, the determining unit may determine whether the first video played by the second terminal is played to the content indicating that the video call is connected by:
collecting an audio signal in a first video played by the second terminal;
and judging whether a prompt tone for indicating that the video call is connected is generated in the first video played by the second terminal or not in a mode of identifying the audio signal.
In a specific implementation, the apparatus may further include:
the user image information acquisition unit is used for acquiring the image information of a user through the image acquisition device of the first terminal;
and the user image information display unit is used for providing a sub-window on the upper layer of the simulated video call interface and displaying the image information of the user in the sub-window.
A prompt information providing unit, configured to provide prompt information upon receiving information that the user has been selected as the object of a one-to-one call;
and the user information submitting unit is used for acquiring image and sound information of a user and submitting the image and sound information to the first server for providing to the second server, and the second server plays the image and sound of the user through the playing equipment of the video shooting site.
The second terminal may be a terminal device for receiving and playing a video signal transmitted by a broadcast station, and the first video is a video signal transmitted by the broadcast station.
Corresponding to the second embodiment, an embodiment of the present application further provides a video call interaction apparatus, referring to fig. 13, where the apparatus is applied to a first server and includes:
a second video receiving unit 1401, configured to receive a second video submitted by a third terminal, where the second video is obtained by shooting at the shooting site of a first video, and the first video is played through a second terminal;
a second video providing unit 1402, configured to provide the second video to a plurality of first terminals after receiving requests submitted by the plurality of first terminals to participate in the video call interaction, so that a first terminal simulates a video call interface by displaying the second video when it determines that the first video has played content related to interaction in the video call manner.
Specifically, the apparatus may further include:
an avatar information providing unit, configured to provide the first terminal with avatar information of the target person associated with the video call interaction, for display in a simulation interface, where the simulation interface is used to simulate the interface state when a video call request is received;
a user selection unit, configured to select a target call user from the users associated with the plurality of first terminals;
and a user information receiving unit, configured to receive image and sound information of the target call user and provide it to a second server, so that the image and sound of the user are played through the playback device at the video shooting site.
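The one-to-one relay path described above — the first server selects a target call user and forwards that user's image and sound to the second server for playback at the shooting site — could be sketched as follows. All class and method names here are hypothetical stand-ins, not part of this application.

```python
import random

class SecondServer:
    """Stand-in for the server driving the playback device at the shooting site."""
    def __init__(self):
        self.played = []

    def play_on_site(self, frame):
        # In a real system this would render the user's image/sound on site.
        self.played.append(frame)

class FirstServer:
    """Selects the target call user and relays only that user's media."""
    def __init__(self, second_server):
        self.second_server = second_server
        self.participants = []
        self.target = None

    def join(self, user_id):
        self.participants.append(user_id)

    def pick_target_call_user(self, rng=random.Random(0)):
        # Select one target call user from the participating terminals.
        self.target = rng.choice(self.participants)
        return self.target

    def receive_user_media(self, user_id, frame):
        # Forward media to the shooting site only for the selected user.
        if user_id == self.target:
            self.second_server.play_on_site(frame)

site = SecondServer()
server = FirstServer(site)
for uid in ("u1", "u2", "u3"):
    server.join(uid)
chosen = server.pick_target_call_user()
server.receive_user_media(chosen, "frame-0")
```

The filtering in `receive_user_media` reflects the one-to-one nature of the call: frames from non-selected participants are simply dropped rather than relayed.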
Corresponding to the third embodiment, an embodiment of the present application further provides a video call interaction apparatus, referring to fig. 14, where the apparatus is applied to a third terminal and includes:
a second video capture unit 1501, configured to obtain a second video through an image capture device, where the second video is obtained by shooting at the shooting site of the first video, and the first video is played through a second terminal device;
a second video submitting unit 1502, configured to submit the second video to a first server and a second server, so that the first server provides the second video to a first terminal, the first terminal simulates a video call interface by displaying the second video, and, during the simulated video call, the second server uses the second video as the content played in the second terminal.
Corresponding to the fourth embodiment, an embodiment of the present application further provides a video call interaction apparatus, referring to fig. 15, where the apparatus is applied to a third server, and includes:
a video stream obtaining unit 1601, configured to obtain a video stream shot in real time at the entity location associated with a target event;
a target user determining unit 1602, configured to determine at least one target user associated with the target event;
a video stream providing unit 1603, configured to provide the video stream to a third client associated with the target user, so as to simulate a video call interface by playing the video stream.
Specifically, the apparatus may further include:
a call request initiating unit, configured to send a video call request to a third client associated with the target user;
and the user determining unit is used for determining a target user for receiving the video call.
Wherein, the target event comprises an event of a new product release meeting held at a preset physical location;
the target user determining unit may be specifically configured to:
detect the search behavior of users while the target event is occurring, and determine a user as the target user if the keywords used in the user's search are related to the target event.
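The keyword-based selection could be sketched as below; the event keywords and the query log are made-up examples, and matching by substring is an illustrative simplification of "related to the target event".

```python
# Illustrative event keywords (assumed, not from the specification)
EVENT_KEYWORDS = {"new product", "release", "launch event"}

def is_target_user(search_queries, event_keywords=EVENT_KEYWORDS):
    """A user qualifies if any query issued during the event contains an event keyword."""
    return any(
        kw in query.lower()
        for query in search_queries
        for kw in event_keywords
    )

def select_target_users(query_log):
    """query_log maps user id -> list of search queries issued during the event."""
    return {uid for uid, queries in query_log.items() if is_target_user(queries)}

# Hypothetical query log recorded while the release meeting is in progress
log = {
    "u1": ["phone new product release time"],
    "u2": ["weather tomorrow"],
    "u3": ["Launch event live stream"],
}
```

A production system would presumably use proper query understanding rather than substring matching, but the selection logic — filter users by event-related search activity during the event window — is the same.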
Alternatively, the target event comprises a notification event of a meeting held at a preset physical location;
the target user determining unit may be specifically configured to:
obtain information on a list of users who need to participate in the meeting, and determine the target users according to the user list.
Corresponding to the fifth embodiment, an embodiment of the present application further provides a video call interaction apparatus, referring to fig. 16, where the apparatus is applied to a third client, and includes:
a video call request receiving unit 1701, configured to receive a video call request and provide a simulation interface, where the simulation interface is used to simulate an interface state when the video call request is received, and includes an operation option for answering;
a video stream obtaining unit 1702, configured to obtain a video stream from a server after receiving an answering operation request through the operation option, where the video stream is obtained by performing real-time shooting at an entity location associated with a target event;
and a call simulation unit 1703, configured to simulate a video call interface in a manner of playing the video stream.
Corresponding to the sixth embodiment, an embodiment of the present application further provides a video call interaction apparatus, referring to fig. 17, where the apparatus is applied to a fourth client, and includes:
a video stream obtaining unit 1801, configured to obtain a video stream, where the video stream is obtained by performing real-time shooting at an entity location associated with a target event;
a video stream submitting unit 1802, configured to submit the video stream to a third server, so as to determine at least one target user associated with the target event, provide the video stream to a third client associated with the target user, and simulate a video call interface by playing the video stream.
Corresponding to the seventh embodiment, an embodiment of the present application further provides a voice call interaction apparatus, referring to fig. 18, where the apparatus is applied to a first terminal, and includes:
a second progress information determining unit 1901 for determining progress information of a video played in the second terminal;
the second call simulation unit 1902 is configured to simulate, when content related to interaction performed in a voice call manner is played in the video, a voice call signal through an audio signal obtained from the first server, where the audio signal is obtained by performing audio acquisition on a shooting site of the video.
Corresponding to the eighth embodiment, an embodiment of the present application further provides a voice call interaction apparatus, referring to fig. 19, where the apparatus is applied to a first server and includes:
an audio signal receiving unit 2001, configured to receive an audio signal submitted by a third terminal, where the audio signal is obtained by performing audio acquisition on a shooting site of a target video, and the target video is used for playing through a second terminal;
and an audio signal providing unit 2002, configured to provide the audio signal to a plurality of first terminals after receiving requests submitted by the plurality of first terminals to participate in voice call interaction, so that a first terminal simulates a voice call signal through the audio signal when it determines that the target video has played content related to interaction in the voice call manner.
Corresponding to the ninth embodiment, an embodiment of the present application further provides a voice call interaction apparatus, referring to fig. 20, where the apparatus is applied to a first terminal, and includes:
a third progress information determining unit 2101, configured to determine progress information of the first audio played in the fourth terminal;
and a third call simulation unit 2102, configured to simulate a voice call signal through a second audio signal obtained from a first server when content related to interaction in a voice call manner is played in the first audio, where the second audio signal is obtained by performing audio capture at the capture site of the first audio.
Corresponding to the tenth embodiment, an embodiment of the present application further provides a voice call interaction apparatus, referring to fig. 21, where the apparatus is applied to a first server and includes:
a second audio signal receiving unit 2201, configured to receive a second audio signal submitted by a fifth terminal, where the second audio signal is obtained by performing audio capture at the capture site of a first audio, and the first audio is played through a fourth terminal;
and a second audio signal providing unit 2202, configured to provide the second audio signal to a plurality of first terminals after receiving requests submitted by the plurality of first terminals to participate in voice call interaction, so that a first terminal simulates a voice call signal through the second audio signal when it determines that the first audio has played content related to interaction in the voice call manner.
In addition, an embodiment of the present application further provides an electronic device, including:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
determining progress information of a first video played in a second terminal;
when the content related to interaction through a video call mode is played in the first video, a video call interface is simulated through a mode of displaying a second video obtained from a first server, wherein the second video is obtained by shooting at a shooting site of the first video.
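The two operations above can be illustrated with a minimal client-side sketch: the first terminal tracks the playback progress of the first video and, once an (assumed) cue point marking the interaction segment is reached, switches its display source to the second video fetched from the first server. The cue time and the fetch callback are hypothetical.

```python
# Assumed cue point (seconds into the first video) at which the
# video-call interaction segment begins; illustrative only.
CALL_SEGMENT_START = 120.0

class FirstTerminal:
    """Sketch of the first terminal's progress-driven call simulation."""
    def __init__(self, fetch_second_video):
        # fetch_second_video stands in for obtaining the second video
        # (shot at the first video's shooting site) from the first server.
        self.fetch_second_video = fetch_second_video
        self.display_source = "idle"
        self.second_video = None

    def on_progress(self, position_s):
        """Called with the current playback position of the first video."""
        if position_s >= CALL_SEGMENT_START and self.display_source != "second_video":
            # Interaction segment reached: simulate the video call
            # interface by displaying the second video instead.
            self.second_video = self.fetch_second_video()
            self.display_source = "second_video"

terminal = FirstTerminal(fetch_second_video=lambda: "live-stream-from-shooting-site")
terminal.on_progress(60.0)   # still in the normal programme
terminal.on_progress(121.5)  # cue point passed: switch to the simulated call
```

Because the second video is shot live at the same site as the first video, the switch is what makes the simulated call feel continuous with the programme the user was just watching.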
Fig. 22 exemplarily shows the architecture of such an electronic device. For example, the device 2300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, an aircraft, or the like.
Referring to fig. 22, device 2300 may include one or more of the following components: processing components 2302, memory 2304, power components 2306, multimedia components 2308, audio components 2310, input/output (I/O) interfaces 2312, sensor components 2314, and communication components 2316.
The processing component 2302 generally controls overall operation of the device 2300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 2302 may include one or more processors 2320 to execute instructions so as to complete all or part of the steps of the video call interaction method provided in the technical solution of the present application, for example: determining progress information of a first video played in a second terminal; and when content related to interaction in a video call manner is played in the first video, simulating a video call interface by displaying a second video obtained from a first server, where the second video is obtained by shooting at the shooting site of the first video. Further, the processing component 2302 can include one or more modules that facilitate interaction between the processing component 2302 and other components. For example, the processing component 2302 can include a multimedia module to facilitate interaction between the multimedia component 2308 and the processing component 2302.
Memory 2304 is configured to store various types of data to support operations at device 2300. Examples of such data include instructions for any application or method operating on device 2300, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 2304 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 2306 provides power to the various components of the device 2300. The power components 2306 can include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 2300.
The multimedia component 2308 includes a screen that provides an output interface between the device 2300 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 2308 includes a front camera and/or a rear camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when device 2300 is in an operational mode, such as a capture mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 2310 is configured to output and/or input audio signals. For example, audio component 2310 includes a Microphone (MIC) configured to receive external audio signals when device 2300 is in an operational mode, such as a call mode, a record mode, and a voice recognition mode. The received audio signals may further be stored in the memory 2304 or transmitted via the communication component 2316. In some embodiments, the audio assembly 2310 further includes a speaker for outputting audio signals.
The I/O interface 2312 provides an interface between the processing component 2302 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
Sensor assembly 2314 includes one or more sensors for providing status assessment of various aspects of device 2300. For example, sensor assembly 2314 can detect the open/closed state of device 2300, the relative positioning of components, such as a display and keypad of device 2300, the change in position of device 2300 or a component of device 2300, the presence or absence of user contact with device 2300, the orientation or acceleration/deceleration of device 2300, and the change in temperature of device 2300. The sensor assembly 2314 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 2314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 2314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 2316 is configured to facilitate wired or wireless communication between the device 2300 and other devices. The device 2300 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 2316 receives a broadcast signal or broadcast-associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 2316 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 2300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, for example, the memory 2304 including instructions, where the instructions are executable by the processor 2320 of the device 2300 to complete the video call interaction method provided in the technical solution of the present application, for example: determining progress information of a first video played in a second terminal; and when content related to interaction in a video call manner is played in the first video, simulating a video call interface by displaying a second video obtained from a first server. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus a necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially embodied in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments or some parts of the embodiments of the present application.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments, being substantially similar to the method embodiments, are described relatively simply, and reference may be made to the descriptions of the method embodiments for relevant points. The system embodiments described above are only illustrative: the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement the embodiments without creative effort.
The video call interaction method, the video call interaction apparatus, and the electronic device provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core ideas of the present application. Meanwhile, for those skilled in the art, changes may be made to the specific embodiments and the application scope according to the ideas of the present application. In view of the above, the content of this specification should not be construed as limiting the present application.

Claims (38)

1. A video call interaction method is characterized by comprising the following steps:
the method comprises the steps that a first terminal determines progress information of a first video played in a second terminal;
when the content related to interaction through a video call mode is played in the first video, a video call interface is simulated through a mode of displaying a second video obtained from a first server, wherein the second video is obtained by shooting at a shooting site of the first video in the playing process of the first video.
2. The method of claim 1, wherein the second video is transmitted as a video stream.
3. The method of claim 1, wherein the second video is obtained by capturing a target person at a capture site of the first video, the target person being a video call initiator in the video call mode interaction and being at the capture site of the first video.
4. The method of claim 1, wherein the capture site of the first video is a live broadcast site of the first video.
5. The method of claim 1, further comprising:
providing a simulation interface, wherein the simulation interface is used for simulating the interface state when a video call request is received, and comprises a first operation option for answering;
and after receiving an answering operation request through the first operation option, closing the simulation interface and obtaining the second video from the first server for display.
6. The method of claim 5, further comprising a second operational option for rejection in the simulation interface, the method further comprising:
and if a refused operation request is received through the second operation option, closing the simulation interface, and providing a third operation option for initiating reconnection.
7. The method of claim 5, wherein in providing the simulation interface, further comprising:
acquiring avatar information of a target person associated with the video call request, wherein the target person is the video call initiator in the interaction in the video call manner and is at the shooting site of the first video;
providing avatar information of the target person in the simulation interface.
8. The method of claim 5, wherein receiving a listen operation request via the first operation option further comprises:
and determining whether the first video played in the second terminal has played content indicating that the video call is connected, and if so, triggering closing of the simulation interface and simulating the video call interface by displaying the second video provided by the first server.
9. The method according to claim 8, wherein it is determined whether the first video played by the second terminal is played to the content indicating that the video call is connected by:
collecting an audio signal in a first video played by the second terminal;
and judging whether a prompt tone for indicating that the video call is connected is generated in the first video played by the second terminal or not in a mode of identifying the audio signal.
10. The method of claim 1, wherein when simulating a video call interface by displaying the second video obtained from the first server, further comprising:
acquiring image information of a user through an image acquisition device of the first terminal;
and providing a sub-window on the upper layer of the simulated video call interface, and displaying the image information of the user in the sub-window.
11. The method of claim 1, further comprising:
providing prompt information upon receiving information indicating that the user has been selected as the object of a one-to-one call;
and collecting image and sound information of the user and submitting it to the first server for provision to a second server, the second server playing the image and sound of the user through the playback device at the video shooting site.
12. The method according to any one of claims 1 to 11, wherein the second terminal is a terminal device for receiving and playing a video signal transmitted by a broadcast station, and the first video is a video signal transmitted by the broadcast station.
13. A video call interaction method is characterized by comprising the following steps:
a first server receives a second video submitted by a third terminal, wherein the second video is obtained by shooting at the shooting site of a first video during playing of the first video, and the first video is played through a second terminal;
and after receiving requests submitted by a plurality of first terminals to participate in video call interaction, providing the second video to the plurality of first terminals, so that a first terminal simulates a video call interface by displaying the second video when it determines that the first video has played content related to interaction in the video call manner.
14. The method of claim 13, further comprising, prior to the method:
and providing avatar information of the target person associated with the video call interaction to a first terminal for display in a simulation interface, wherein the simulation interface is used for simulating the interface state when a video call request is received.
15. The method of claim 13, further comprising:
selecting a target call user from the users associated with the plurality of first terminals;
and receiving the image and sound information of the target call user, and providing the image and sound information to a second server for playing the image and sound of the user through the playing equipment of the video shooting site.
16. A video call interaction method is characterized by comprising the following steps:
the third terminal obtains a second video through image acquisition equipment, wherein the second video is obtained by shooting in the shooting site of the first video in the playing process of the first video; the first video is used for playing through second terminal equipment;
submitting the second video to a first server and a second server so that the first server can provide the second video to a first terminal, and the first terminal simulates a video call interface in a mode of displaying the second video; and in the process of simulating the video call, the second server takes the second video as the content played in the second terminal.
17. A video call method, comprising:
the third server side obtains a video stream shot in real time in the target event related entity place;
determining at least one target user associated with the target event;
and providing the video stream to a third client associated with the target user, so that the third client, when determining, according to progress information of a video played in another terminal, that content related to interaction in a video call manner has been played in the video, simulates a video call interface by playing the video stream.
18. The method of claim 17, wherein prior to providing the video stream to the third client associated with the target user, further comprising:
sending the video call request to a third client associated with the target user;
and determining a target user for receiving the video call.
19. The method of claim 17, wherein the target event comprises an event of a new product release meeting held at a preset physical location;
the determining at least one target user associated with the target event comprises:
detecting the searching behavior of the user in the occurrence process of the target event;
and determining a user as the target user if the keywords used by the user in searching are related to the target event.
20. The method of claim 17, wherein the target event comprises a notification event of a meeting at a preset physical location;
the determining at least one target user associated with the target event comprises:
and obtaining the information of the user list needing to participate in the meeting, and determining the target user according to the user list.
21. A video call method, comprising:
the third client receives the video call request and provides a simulation interface, wherein the simulation interface is used for simulating the interface state when the video call request is received, and comprises operation options for answering;
after receiving an answering operation request through the operation option, obtaining a video stream from a server, wherein the video stream is obtained by shooting in real time at an entity place associated with a target event;
and when determining, according to progress information of a video played in another terminal, that content related to interaction in the video call manner has been played, simulating a video call interface by playing the video stream.
22. A video call method, comprising:
the fourth client side obtains a video stream, wherein the video stream is obtained by shooting in real time at an entity place associated with the target event;
and submitting the video stream to a third server for determining at least one target user associated with the target event and providing the video stream to a third client associated with the target user, wherein the third client, when determining, according to progress information of a video played in another terminal, that content related to interaction in a video call manner has been played in the video, simulates a video call interface by playing the video stream.
23. A voice call interaction method is characterized by comprising the following steps:
the method comprises the steps that a first terminal determines progress information of a video played in a second terminal;
when the content related to interaction in a voice call mode is played in the video, a voice call signal is simulated through an audio signal obtained from a first server, wherein the audio signal is obtained by performing audio acquisition on a video shooting site in the video playing process.
24. A voice call interaction method is characterized by comprising the following steps:
the method comprises the steps that a first server receives an audio signal submitted by a third terminal, wherein the audio signal is obtained by performing audio acquisition at the shooting site of a target video during playing of the target video, and the target video is played through a second terminal;
and after receiving requests submitted by a plurality of first terminals to participate in voice call interaction, providing the audio signal to the plurality of first terminals, so that a first terminal simulates a voice call signal through the audio signal when it determines that the target video has played content related to interaction in the voice call manner.
25. A voice call interaction method is characterized by comprising the following steps:
the first terminal determines progress information of a first audio played in a fourth terminal;
when content related to interaction in a voice call manner is played in the first audio, simulating a voice call signal through a second audio signal obtained from a first server, wherein the second audio signal is obtained by performing audio capture at the capture site of the first audio during playing of the first audio.
26. The method of claim 25, wherein the fourth terminal comprises a terminal for listening to an audio signal transmitted by a broadcast station, and wherein the first audio is an audio signal transmitted by the broadcast station.
27. A voice call interaction method is characterized by comprising the following steps:
the method comprises the steps that a first server receives a second audio signal submitted by a fifth terminal, wherein the second audio signal is obtained by performing audio acquisition at the acquisition site of a first audio while the first audio is playing, and the first audio is to be played through a fourth terminal;
and after receiving requests to participate in the voice call interaction submitted by a plurality of first terminals, providing the second audio signal to the plurality of first terminals, so that when a first terminal determines that the first audio is playing content related to interaction in the voice call mode, it simulates a voice call signal through the second audio signal.
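As an illustration only (not part of the claims), the first-terminal behavior recited in the method claims above — track the playback progress of content playing on another terminal and, once the interaction segment is reached, fetch pre-recorded material from the first server to simulate a call — can be sketched as follows. All names, the segment boundaries, and the progress-reporting mechanism are assumptions:

```python
# Hypothetical sketch of the claimed first-terminal logic. The interaction
# window and helper names below are illustrative assumptions, not taken
# from the patent.

INTERACTION_START = 95.0   # seconds into the first video where interaction begins
INTERACTION_END = 140.0    # end of the interaction segment

def fetch_second_video(server_url: str) -> str:
    """Placeholder for downloading the second video from the first server."""
    return f"{server_url}/second_video.mp4"

def should_simulate_call(progress_seconds: float) -> bool:
    """True while playback of the first video is inside the interaction segment."""
    return INTERACTION_START <= progress_seconds < INTERACTION_END

def run_client(progress_source, server_url: str) -> str:
    """Poll playback progress; simulate a call once the interaction segment starts."""
    for progress in progress_source:  # e.g. progress reported by the second terminal
        if should_simulate_call(progress):
            clip = fetch_second_video(server_url)
            return f"simulating call with {clip}"  # hand off to the call UI
    return "no interaction segment reached"
```

The same loop applies to the audio variant: substitute an audio segment for the second video and a simulated voice call signal for the call interface.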
28. A video call interaction device is applied to a first terminal and comprises:
a first progress information determining unit for determining progress information of a first video played in a second terminal;
the first call simulation unit is used for simulating a video call interface by displaying a second video obtained from a first server when content related to interaction through a video call mode is played in the first video, wherein the second video is obtained by shooting at the shooting site of the first video while the first video is playing.
29. A video call interaction device, applied to a first server, comprises:
the second video receiving unit is used for receiving a second video submitted by a third terminal, wherein the second video is obtained by shooting at the shooting site of a first video while the first video is playing, and the first video is to be played through the second terminal;
and the second video providing unit is used for providing the second video to the plurality of first terminals after receiving requests to participate in the video call interaction submitted by the plurality of first terminals, so that when a first terminal determines that the first video is playing content related to interaction in the video call mode, it simulates a video call interface by displaying the second video.
30. A video call interaction device is applied to a third terminal and comprises:
the second video acquisition unit is used for acquiring a second video through an image acquisition device, wherein the second video is obtained by shooting at the shooting site of a first video while the first video is playing, and the first video is to be played through a second terminal;
the second video submitting unit is used for submitting the second video to a first server and a second server, so that the first server provides the second video to a first terminal, the first terminal simulates a video call interface by displaying the second video, and, in the process of simulating the video call, the second server takes the second video as the content played in the second terminal.
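Purely as an illustration (assumed names throughout), the third-terminal submission flow in the claim above can be sketched as a dual submission: the same second video goes to the first server, which forwards it to first terminals for the simulated call, and to the second server, which plays it as the second terminal's broadcast content:

```python
# Hypothetical sketch of the claimed third-terminal role. Plain lists stand
# in for the two servers' upload endpoints; no real API is implied.

class ThirdTerminal:
    def __init__(self, first_server, second_server):
        self.first_server = first_server    # stand-in for the first server
        self.second_server = second_server  # stand-in for the second server

    def submit_second_video(self, video_bytes: bytes) -> None:
        """Submit the on-set second video to both servers."""
        self.first_server.append(video_bytes)   # for first terminals' simulated call
        self.second_server.append(video_bytes)  # becomes the second terminal's content
```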
31. A video call device, applied to a third server, comprises:
the video stream obtaining unit is used for obtaining a video stream shot in real time at a physical venue associated with a target event;
the target user determining unit is used for determining at least one target user associated with the target event;
and the video stream providing unit is used for providing the video stream to a third client associated with the target user, so that when the third client determines, according to progress information of a video played on another terminal, that content related to interaction through a video call mode is being played in that video, it simulates a video call interface by playing the video stream.
32. A video call device applied to a third client comprises:
the video call request receiving unit is used for receiving a video call request and providing a simulation interface, wherein the simulation interface simulates the interface state at the time a video call request is received and comprises an operation option for answering;
the video stream obtaining unit is used for obtaining a video stream from a server after an answering operation is received through the operation option, wherein the video stream is shot in real time at a physical venue associated with a target event;
and the call simulation unit is used for simulating a video call interface by playing the video stream when it is determined, according to progress information of a video played on another terminal, that content related to interaction through a video call mode is being played.
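The third-client interface described in the claim above is a small state machine: a call request first shows a simulated ringing screen with an answer option, and only after the user answers does the client obtain the live stream and switch to the simulated in-call interface. A minimal sketch, with all state and method names assumed for illustration:

```python
from enum import Enum, auto

class CallState(Enum):
    IDLE = auto()
    RINGING = auto()   # simulated incoming-call screen with an answer option
    IN_CALL = auto()   # playing the real-time video stream as the call picture

class SimulatedCall:
    def __init__(self):
        self.state = CallState.IDLE
        self.stream_url = None

    def receive_call_request(self) -> None:
        """A video call request arrives: show the simulated ringing interface."""
        self.state = CallState.RINGING

    def answer(self, fetch_stream) -> None:
        """User taps the answer option: fetch the live stream and go in-call."""
        if self.state is not CallState.RINGING:
            raise RuntimeError("no incoming call to answer")
        self.stream_url = fetch_stream()  # e.g. obtained from the server
        self.state = CallState.IN_CALL
```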
33. A video call device applied to a fourth client comprises:
the video stream obtaining unit is used for obtaining a video stream shot in real time at a physical venue associated with a target event;
and the video stream submitting unit is used for submitting the video stream to a third server, so that the third server determines at least one target user associated with the target event and provides the video stream to a third client associated with the target user, and the third client simulates a video call interface by playing the video stream when it determines, according to progress information of a video played on another terminal, that content related to interaction through a video call mode is being played in that video.
34. A voice call interaction device is applied to a first terminal and comprises:
a second progress information determining unit for determining progress information of a video played in the second terminal;
and the second call simulation unit is used for simulating a voice call signal through an audio signal obtained from the first server when content related to interaction in a voice call mode is played in the video, wherein the audio signal is obtained by performing audio acquisition at the shooting site of the video while the video is playing.
35. A voice call interaction device, applied to a first server, comprises:
the audio signal receiving unit is used for receiving an audio signal submitted by a third terminal, wherein the audio signal is obtained by performing audio acquisition at the shooting site of a target video while the target video is playing, and the target video is to be played through a second terminal;
and the audio signal providing unit is used for providing the audio signal to the plurality of first terminals after receiving requests to participate in the voice call interaction submitted by the plurality of first terminals, so that when a first terminal determines that the target video is playing content related to interaction in the voice call mode, it simulates a voice call signal through the audio signal.
36. A voice call interaction device is applied to a first terminal and comprises:
a third progress information determining unit, configured to determine progress information of the first audio played in the fourth terminal;
and the third call simulation unit is used for simulating a voice call signal through a second audio signal obtained from the first server when content related to interaction in a voice call mode is played in the first audio, wherein the second audio signal is obtained by performing audio acquisition at the acquisition site of the first audio while the first audio is playing.
37. A voice call interaction device, applied to a first server, comprises:
the second audio signal receiving unit is used for receiving a second audio signal submitted by a fifth terminal, wherein the second audio signal is obtained by performing audio acquisition at the acquisition site of a first audio while the first audio is playing, and the first audio is to be played through a fourth terminal;
and the second audio signal providing unit is used for providing the second audio signal to the plurality of first terminals after receiving requests to participate in the voice call interaction submitted by the plurality of first terminals, so that when a first terminal determines that the first audio is playing content related to interaction in the voice call mode, it simulates a voice call signal through the second audio signal.
38. An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
determining progress information of a first video played in a second terminal;
when content related to interaction through a video call mode is played in the first video, simulating a video call interface by displaying a second video obtained from a first server, wherein the second video is obtained by shooting at the shooting site of the first video while the first video is playing.
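For illustration of the first-server role that the claims above rely on (all class and method names are assumptions, not from the patent): the server accepts the second video uploaded by the third terminal and then hands it to every first terminal that asks to join the video call interaction:

```python
# Hypothetical sketch of the claimed first-server distribution logic.

class FirstServer:
    def __init__(self):
        self._second_video = None
        self._participants = set()

    def receive_upload(self, terminal_id: str, video_bytes: bytes) -> None:
        """Third terminal submits the second video shot at the first video's set.

        terminal_id is kept only to mirror the claim's third-terminal role.
        """
        self._second_video = video_bytes

    def join_interaction(self, first_terminal_id: str) -> bytes:
        """A first terminal requests to participate; return the second video."""
        if self._second_video is None:
            raise LookupError("second video not yet uploaded")
        self._participants.add(first_terminal_id)
        return self._second_video
```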
CN201711108000.3A 2017-11-10 2017-11-10 Video call interaction method and device and electronic equipment Active CN109788364B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711108000.3A CN109788364B (en) 2017-11-10 2017-11-10 Video call interaction method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN109788364A CN109788364A (en) 2019-05-21
CN109788364B true CN109788364B (en) 2021-10-26

Family

ID=66485349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711108000.3A Active CN109788364B (en) 2017-11-10 2017-11-10 Video call interaction method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN109788364B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110675875B (en) * 2019-09-30 2022-02-18 思必驰科技股份有限公司 Intelligent voice conversation technology telephone experience method and device
CN111935084B (en) * 2020-06-29 2023-06-06 五八到家有限公司 Communication processing method and device
CN112565657B (en) * 2020-11-30 2023-09-15 百果园技术(新加坡)有限公司 Call interaction method, device, equipment and storage medium
CN114554009A (en) * 2022-02-15 2022-05-27 支付宝(杭州)信息技术有限公司 Incoming call risk guiding method and device
CN117082461B (en) * 2023-08-09 2024-10-29 中移互联网有限公司 Method, device and storage medium for transmitting 5G message in audio/video call

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104917995A (en) * 2015-06-04 2015-09-16 小米科技有限责任公司 Realization method and device of off-line video communication
CN104954726A (en) * 2015-06-29 2015-09-30 上海卓易科技股份有限公司 Video calling device and method
CN105898604A (en) * 2016-04-28 2016-08-24 乐视控股(北京)有限公司 Live broadcast video interaction information configuration method and device based on mobile terminal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Large-scale application of Web3D & AR in Tmall Double 11 interactions; Wang Cheng, Xiao Zhu; D2 Frontend Technology Forum; 2017-01-05; video, 27:36-28:04 *
Three dimensions of content construction for mainstream media websites; Meng Wei; People's Tribune; 2016-07-01; p. 2 *

Similar Documents

Publication Publication Date Title
CN109788364B (en) Video call interaction method and device and electronic equipment
KR101659674B1 (en) Voice link system
CN106105246B (en) Live broadcast display method, apparatus and system
CN106534953B (en) Video rebroadcasting method in live broadcast application and control terminal
CN106162230A (en) Live information processing method, device, anchor terminal, server and system
CN111343476A (en) Video sharing method and device, electronic equipment and storage medium
CN109151565B (en) Method and device for playing voice, electronic equipment and storage medium
CN105630353A (en) Comment information issuing method and device
CN113411538B (en) Video session processing method and device and electronic equipment
CN113490005B (en) Information interaction method and device for live broadcasting room, electronic equipment and storage medium
CN108449605B (en) Information synchronous playing method, device, equipment, system and storage medium
CN105635846B (en) Apparatus control method and device
CN114051170A (en) Live broadcast processing method and device, electronic equipment and computer readable storage medium
CN109729367B (en) Method and device for providing live media content information and electronic equipment
CN115802068A (en) Live broadcast information processing method and device and electronic equipment
CN114025180A (en) Game operation synchronization system, method, device, equipment and storage medium
CN106131291B (en) Information screen-extension display method and device
CN105427443A (en) Voting message sending method and device
CN110191367B (en) Information synchronization processing method and device and electronic equipment
CN113986414A (en) Information sharing method and electronic equipment
WO2019076202A1 (en) Multi-screen interaction method and apparatus, and electronic device
CN112532931A (en) Video processing method and device and electronic equipment
US20220210501A1 (en) Method and apparatus for playing data
CN109788327B (en) Multi-screen interaction method and device and electronic equipment
CN111739538B (en) Translation method and device, earphone and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant