CN112188269A - Video playing method and device and video generating method and device - Google Patents
- Publication number
- CN112188269A (Application CN202011043068.XA)
- Authority
- CN
- China
- Prior art keywords
- video
- local
- global
- scene
- playing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/485—End-user interface for client configuration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4882—Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders
Abstract
The disclosure relates to a video playing method and device and a video generating method and device. The video playing method comprises the following steps: receiving an encoded video, wherein the encoded video is obtained by multiplexing a global video and at least one local video shot in the same scene, the global video is a video of the overall scene, each of the at least one local video is a video of one local scene, and the encoded video comprises position information indicating, for each local video, the position of the corresponding local scene in the overall scene; decoding, from the encoded video, the global video and the position information of each local video; playing the global video; and displaying, based on the position information of each local video, prompt information on the user interface playing the global video, wherein the prompt information prompts that a local video exists at the corresponding local scene position in the global video picture.
Description
Technical Field
The present disclosure relates to the field of audio and video technologies, and in particular, to a video playing method and apparatus and a video generating method and apparatus.
Background
When a client plays a video and the user wants to magnify part of the video picture, the conventional approach is to stretch the picture directly or to enlarge it with a super-resolution algorithm. At high magnification, the enlarged local picture is very blurred and details cannot be seen clearly.
Furthermore, when the client plays a video in which different objects lie at different focal distances, part of the picture may be out of focus: if the video is shot with a near object in focus, distant objects are displayed unclearly, and if it is shot with a distant object in focus, near objects are displayed unclearly.
For example, in a live-streaming sales scenario, a streamer may display three items while the camera captures all three in one picture. If any item is locally enlarged, the enlarged picture may be blurred; or, if the three items lie at different focal distances, some items may be in sharp focus while others are blurred.
Disclosure of Invention
The present disclosure provides a video playing method and apparatus and a video generating method and apparatus to solve at least the problems of the related art described above, and may not solve any of the problems described above.
According to a first aspect of the embodiments of the present disclosure, there is provided a video playing method, including: receiving an encoded video, wherein the encoded video is obtained by multiplexing a global video and at least one local video shot in the same scene, the global video is a video shot of the overall scene, each of the at least one local video is a video shot of one local scene, and the encoded video comprises position information indicating, for each local video, the position of the corresponding local scene in the overall scene; decoding, from the encoded video, the global video and the position information of each local video; playing the global video; and displaying, based on the position information of each local video, prompt information on the user interface playing the global video, wherein the prompt information prompts that a local video exists at the corresponding local scene position in the global video picture.
Optionally, the video playing method may further include: in response to receiving, on the user interface playing the global video, an input of the user selecting to play a first local video, decoding the first local video and playing the first local video.
Optionally, the decoding and playing of the first local video may include: decoding the first local video and terminating decoding of the global video; and playing the first local video and terminating playing of the global video.
Optionally, the decoding and playing of the first local video may include: decoding the first local video while continuing to decode the global video; and playing the first local video while continuing to play the global video.
Optionally, the video playing method may further include: displaying a close interface while the first local video is played; and in response to receiving an input of the user selecting the close interface, terminating the decoding and playing of the first local video and continuing the decoding and playing of the global video.
Optionally, the displaying of the prompt information on the user interface playing the global video based on the position information of each of the at least one local video may include: determining the position of the corresponding local scene in the global video picture based on the position information of each local video; and displaying, based on the determined position, the prompt information near the corresponding local scene in the global video picture.
Optionally, the displaying, on the user interface for playing the global video, the prompt information based on the location information of each of the at least one local video may include: and setting a local scene position area and/or an area displaying the prompt message as an interactive area on a user interface playing the global video, wherein the interactive area is used for receiving input of selecting the corresponding local video by a user.
According to a second aspect of the embodiments of the present disclosure, there is provided a video generation method, including: acquiring a global video and at least one local video shot in the same scene, wherein the global video is a video shot of an overall scene, and each local video in the at least one local video is a video shot of one local scene; determining a position of a corresponding local scene of each of the at least one local video in the overall scene to generate position information of each of the at least one local video; generating an encoded video by multi-path encoding the global video and the at least one local video and embedding the position information of each of the at least one local video.
Optionally, the determining of the position of the corresponding local scene of each of the at least one local video in the overall scene may include: determining the position by performing image matching between the global video and each local video, or by receiving a user-provided position mark for each local video.
According to a third aspect of the embodiments of the present disclosure, there is provided a video playback apparatus including: a video receiving unit configured to receive an encoded video, wherein the encoded video is obtained by multiplexing a global video and at least one local video captured in the same scene, the global video is a video capturing the overall scene, each of the at least one local video is a video capturing one local scene, and the encoded video includes position information indicating, for each local video, the position of the corresponding local scene in the overall scene; a video decoding unit configured to decode, from the encoded video, the global video and the position information of each local video; a video playing unit configured to play the global video; and an information display unit configured to display, based on the position information of each local video, prompt information on the user interface playing the global video, wherein the prompt information prompts that a local video exists at the corresponding local scene position in the global video picture.
Optionally, the video playing apparatus may further include: a user interface unit; wherein, in response to receiving an input through the user interface unit that a user selects to play the first partial video on the user interface that plays the global video, the video decoding unit may decode the first partial video, and the video playing unit may play the first partial video.
Alternatively, the video decoding unit may decode the first local video and terminate decoding the global video; the video playing unit may play the first local video and terminate playing the global video.
Alternatively, the video decoding unit may decode the first local video while continuing to decode the global video; the video playing unit may play the first local video while continuing to play the global video.
Optionally, the information display unit may be further configured to display a close interface while the first local video is played; wherein, in response to the user interface unit receiving an input of the user selecting the close interface, the video decoding unit may terminate decoding the first local video and continue decoding the global video, and the video playing unit may terminate playing the first local video and continue playing the global video.
Alternatively, the information display unit may determine the position of the corresponding local scene in the global video picture based on the position information of each of the at least one local video, and display the prompt information near the corresponding local scene in the global video picture based on the determined position.
Alternatively, the information display unit may set the local scene position area and/or the area displaying the prompt information as an interaction area on the user interface playing the global video, wherein an input of the user selecting the corresponding local video may be received through the interaction area.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a video generating apparatus including: a video acquisition unit configured to acquire a global video and at least one local video shot in the same scene, wherein the global video is a video shot of an overall scene, and each of the at least one local video is a video shot of one local scene; a position determination unit configured to determine a position of a corresponding local scene of each of the at least one local video in the overall scene to generate position information of each of the at least one local video; a video encoding unit configured to generate an encoded video by multi-path encoding the global video and the at least one local video and embedding the position information of each of the at least one local video.
Alternatively, the position determination unit may determine the position of the corresponding local scene of each local video of the at least one local video in the overall scene by performing image matching on the global video and each local video or receiving a position mark of each local video by a user.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic apparatus including: at least one processor; at least one memory storing computer-executable instructions, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform a video playback method according to the present disclosure.
According to a sixth aspect of embodiments of the present disclosure, there is provided an apparatus comprising: at least one processor; at least one memory storing computer-executable instructions, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform a video generation method according to the present disclosure.
According to a seventh aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions, which when executed by at least one processor, cause the at least one processor to perform a video playing method according to the present disclosure or a video generating method according to the present disclosure.
According to an eighth aspect of embodiments of the present disclosure, there is provided a computer program product, instructions in which are executable by a processor of a computer device to perform a video playing method according to the present disclosure or a video generating method according to the present disclosure.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
according to the video playing method and device and the video generating method and device of the present disclosure, multiple videos shot of different local objects in the same scene (for example, a global video of the overall scene and at least one local video of at least one local scene) can be multiplexed into one encoded video. When the encoded video is played, the user is prompted as to which local scenes can be locally magnified, so that the user can select one and view an enlarged picture of that local scene. In addition, for an object in the global picture that lies outside the effective focal range and is therefore out of focus, the user is prompted during playback that the object can be displayed clearly, so that the user can select it and view a sharp picture of it. When the user chooses to magnify a local scene, the corresponding local video shot by an independent camera is decoded and played, which achieves a high-definition local magnification effect and improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a schematic diagram illustrating implementation scenarios of a video playing method and a video generating method according to an exemplary embodiment of the present disclosure.
Fig. 2 is a flowchart illustrating a video playing method according to an exemplary embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a video generation method according to an exemplary embodiment of the present disclosure.
Fig. 4 is a block diagram illustrating a video playback apparatus 400 according to an exemplary embodiment of the present disclosure.
Fig. 5 is a block diagram illustrating a video generation apparatus 500 according to an exemplary embodiment of the present disclosure.
Fig. 6 is a block diagram of an electronic device 600 according to an example embodiment of the present disclosure.
Fig. 7 is a block diagram of an apparatus 700 according to an example embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The embodiments described in the following examples do not represent all embodiments consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Herein, the expression "at least one of the items" covers three parallel cases: "any one of the items", "a combination of any plural ones of the items", and "all of the items". For example, "including at least one of A and B" covers three parallel cases: (1) including A; (2) including B; (3) including A and B. Likewise, "performing at least one of step one and step two" covers three parallel cases: (1) performing step one; (2) performing step two; (3) performing step one and step two.
In order to solve the problem that conventional local magnification during video playback (for example, stretch-to-zoom by a user gesture) is unclear, or that objects at different focal distances in a video differ in sharpness, the present disclosure provides a new video playing method and a new video generating method: objects at different focal distances in the same scene are shot separately, alongside a shot of the overall scene, to obtain a global video and at least one local video of the same scene, which are sent to the user. When the user, while watching the global video, wants to magnify a certain object in it, the corresponding local video can be played, satisfying the user's desire to view the local scene clearly and enlarged. Hereinafter, a video playing method and apparatus and a video generating method and apparatus according to exemplary embodiments of the present disclosure will be described in detail with reference to Figs. 1 to 7.
Fig. 1 is a schematic diagram illustrating implementation scenarios of a video playing method and a video generating method according to an exemplary embodiment of the present disclosure.
Referring to Fig. 1, in a live broadcast room 100 according to an exemplary embodiment of the present disclosure, assuming that the shooting scene of the live broadcast room 100 includes a plurality of local objects (e.g., an object A, an object B, and an object C), one camera (e.g., a camera S; here, "camera" is a generic term for a device with a shooting function) may be provided to shoot the overall scene (e.g., the overall scene including the objects A, B, and C), and a plurality of cameras may be provided to shoot each local scene respectively: for example, camera 1 shoots the local scene including the object A, camera 2 shoots the local scene including the object B, and camera 3 shoots the local scene including the object C.
During live broadcasting, the camera S and the cameras 1, 2, and 3 may transmit their respective captured videos to the server 110, and the server 110 may encode (e.g., multiplex-encode) the received videos to generate one video. Alternatively, a main camera (for example, but not limited to, the camera S) may be designated among the camera S and the cameras 1, 2, and 3; the other cameras transmit their captured videos to the main camera, which encodes (e.g., multiplex-encodes) its own video together with the received videos to generate one video and transmits the generated video to the server 110. Still alternatively, the camera S and the cameras 1, 2, and 3 may transmit their captured videos to a separate encoder device (not shown) or to a server providing an encoding service (e.g., cloud encoding, not shown), which encodes (e.g., multiplex-encodes) the received videos to generate one video and transmits the generated video to the server 110.
In addition, the clients in user terminals 120 and 130 may also play on-demand programs. At this point, the live programming generated by the server 110 may be saved to a memory (e.g., cloud storage) associated with the server 110. When user terminals 120 and 130 perform an on-demand operation, server 110 may transmit the on-demand program to user terminals 120 and 130.
Specifically, when the camera S and the cameras 1, 2, and 3 capture their respective scenes in the live broadcast room 100, the videos they capture may be multiplexed and encoded. For example, the video frames captured by the camera S and the cameras 1, 2, and 3 are encoded as different paths of video frames sharing the same timestamp, forming the frame sequence S1, A1, B1, C1, S2, A2, B2, C2, S3, A3, B3, C3, ..., where S, A, B, and C denote video frames taken by the camera S and the cameras 1, 2, and 3, respectively, and the subscripts 1, 2, 3 denote frame numbers. When multiple videos are encoded into one video, the video frames of the different videos can be tagged with different IDs, and during decoding the video to which each frame belongs can be distinguished by its ID. Furthermore, position information indicating the positions, in the overall scene, of the local scenes shot by the cameras 1, 2, and 3 may be determined and embedded in the generated video; for example, the position information of each local video may be embedded in the side information of the corresponding encoded frames.
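The interleaving described above can be sketched as follows. The `EncodedFrame` structure, the byte payloads, and the string stream IDs are illustrative assumptions; the patent does not fix a container format:

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class EncodedFrame:
    stream_id: str   # "S" for the global video, "A"/"B"/"C" for local videos
    frame_no: int    # frame number shared across streams for the same timestamp
    payload: bytes   # encoded picture data (placeholder here)


def multiplex(streams: Dict[str, List[bytes]], num_frames: int) -> List[EncodedFrame]:
    """Interleave frames from all streams per timestamp, global stream first,
    producing the sequence S1, A1, B1, C1, S2, A2, B2, C2, ..."""
    order = ["S"] + sorted(k for k in streams if k != "S")
    sequence = []
    for n in range(1, num_frames + 1):
        for sid in order:
            sequence.append(EncodedFrame(sid, n, streams[sid][n - 1]))
    return sequence


# Example: two frames per stream.
streams = {sid: [f"{sid}{n}".encode() for n in (1, 2)] for sid in "SABC"}
seq = multiplex(streams, 2)
```

The ID tag on each frame is what lets the decoder later separate the streams again.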
For example, the positions in the overall scene of the objects (i.e., local scenes) captured by the cameras 1, 2, and 3 may be marked by image matching the video images captured by the camera S against those captured by the cameras 1, 2, and 3, respectively, to generate the position information: e.g., the object captured by camera 1 is at position A in the upper left corner of the global video image, the object captured by camera 2 is at position B in the center, and the object captured by camera 3 is at position C in the lower right corner. Alternatively, the user may manually mark, on the video images, the positions in the overall scene of the objects captured by the cameras 1, 2, and 3 to generate the position information. Here, the position of a local scene in the overall scene may refer to the position of the main shooting object of the local scene in the global video image, or to any position that can represent the local scene's location in the overall scene, for example the position of the local video image as a whole within the global video image; the disclosure is not limited thereto.
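The image-matching route can be illustrated with a minimal brute-force search using sum of absolute differences on toy grayscale grids; this is a stand-in for whatever matcher a real encoder would use, since the patent does not name a specific algorithm:

```python
def locate_patch(global_img, patch):
    """Find the (row, col) offset where `patch` best matches inside
    `global_img`, scoring candidates by sum of absolute differences (SAD)
    over grayscale pixel values. Lower SAD means a better match."""
    gh, gw = len(global_img), len(global_img[0])
    ph, pw = len(patch), len(patch[0])
    best, best_pos = None, (0, 0)
    for r in range(gh - ph + 1):
        for c in range(gw - pw + 1):
            sad = sum(abs(global_img[r + i][c + j] - patch[i][j])
                      for i in range(ph) for j in range(pw))
            if best is None or sad < best:
                best, best_pos = sad, (r, c)
    return best_pos


# Toy 4x4 global frame; the 2x2 local patch sits at the lower-right corner,
# analogous to camera 3's object C in the example above.
g = [[0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 9, 8],
     [0, 0, 7, 6]]
p = [[9, 8],
     [7, 6]]
```

The returned offset (here the lower-right corner) is what gets embedded as the local video's position information.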
Further, it is understood that the operations of determining the positions of the local scenes to generate the position information, multiplex-encoding the global video and the local videos, and embedding the position information may be performed by any one of the camera S and the cameras 1, 2, and 3, by the server 110, or by another external device to generate the encoded video; the present disclosure does not limit this.
After any one of the camera S, the cameras 1, 2, and 3, or another external device transmits the generated encoded video to the server 110, or after the server 110 itself generates the encoded video, the server 110 may process the encoded video into a live program and transmit it to the user terminal 120 or 130, or the server 110 may store the program and transmit it when the user terminal 120 or 130 requests it.
When the user terminal 120 or 130 receives a live or on-demand program, i.e., receives the encoded video, it may decode the encoded video. For example, the global video in the encoded video is decoded by default: the video frames of the global video may be identified by their ID information and decoded to obtain the decoded global video. The position information of each local video is also decoded.
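Selecting the default global stream from the multiplexed sequence amounts to filtering by the embedded ID tag. In this sketch, frames are modeled simply as `(stream_id, frame_no)` tuples:

```python
def demultiplex(sequence, stream_id):
    """Pick out the frames of one stream from the multiplexed sequence by its
    ID tag. A player would do this for the global stream "S" by default, and
    again for a local stream once the user selects it."""
    return [frame for frame in sequence if frame[0] == stream_id]


# Multiplexed sequence S1, A1, B1, C1, S2, A2, ... as (stream_id, frame_no).
seq = [(sid, n) for n in (1, 2, 3) for sid in "SABC"]
global_frames = demultiplex(seq, "S")
```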
After decoding, the user terminal 120 or 130 may play the decoded global video. That is, when the user has just started watching a live or on-demand program, the global video is played by default and the user sees the picture captured by the camera S. In addition, based on the decoded position information, the user terminal 120 or 130 may display, on the user interface playing the global video, prompt information indicating that a local video exists at the corresponding local scene position in the global video picture. The prompt information tells the user that the video image at that local scene position can be viewed enlarged in high definition. The prompt information may be displayed as a button, an icon, text, or the like; the present disclosure is not limited in this respect. Specifically, the user terminal 120 or 130 may determine the position of each corresponding local scene in the global video picture based on the position information of each local video, and display the prompt information near the corresponding local scene based on the determined position: for example, near the object A, near the object B, and near the object C in the global video picture. Of course, the present disclosure does not limit the display position or manner of the prompt information; for example, the prompt information about the objects A, B, and C may instead be displayed in a predetermined area.
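Placing the prompts can be sketched as a coordinate mapping. The normalized-coordinate convention and the pixel offset are assumptions for illustration; the patent only says that position information is embedded with each local video:

```python
def prompt_anchor(position, screen_w, screen_h, offset=(8, -8)):
    """Convert a local video's position, given as normalized (x, y) in [0, 1]
    relative to the global picture, into pixel coordinates for its prompt
    icon, nudged by `offset` so the icon sits near the object rather than
    directly on it."""
    nx, ny = position
    return (int(nx * screen_w) + offset[0], int(ny * screen_h) + offset[1])


# Object A upper-left, B center, C lower-right, on a 1920x1080 player surface.
positions = {"A": (0.1, 0.1), "B": (0.5, 0.5), "C": (0.9, 0.9)}
anchors = {k: prompt_anchor(v, 1920, 1080) for k, v in positions.items()}
```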
When the user terminal 120 or 130 displays the prompt information, the local scene position area and/or the area displaying the prompt information may be set as an interaction area on the user interface playing the global video, and the user may choose to enlarge the corresponding local scene, that is, play the corresponding local video, by selecting the interaction area. For example, the areas where the objects A, B, and C are located in the global video picture may be set as interaction areas selectable by the user (e.g., by clicking or touching), or the areas of the user interface displaying the prompt information about the objects A, B, and C may be set as such interaction areas, or both kinds of areas may be set as interaction areas to facilitate user operation.
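Resolving which local video a click selects is a hit test over the interaction rectangles; the rectangle representation below is an assumed simplification of whatever region shape a real UI would use:

```python
def hit_test(click, regions):
    """Return the ID of the local video whose interaction rectangle contains
    the click, or None if the click lands outside every interaction area.
    Each region is (x, y, w, h) in screen pixels, covering the local scene
    area and/or its prompt label."""
    cx, cy = click
    for video_id, (x, y, w, h) in regions.items():
        if x <= cx < x + w and y <= cy < y + h:
            return video_id
    return None


regions = {"A": (100, 50, 200, 150), "B": (800, 400, 300, 250)}
```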
When the user selects to play a certain local video, the user terminal 120 or 130 may decode and play that local video. Here, the local video may be played synchronously according to a video-timestamp drop-frame synchronization method or a low-latency transmission protocol. For example, when the object A is far away from the object B in the overall scene and the camera S is focused on the object B, the object A may not be displayed clearly in the video image of the camera S because it lies outside the effective range of the focal distance and is therefore out of focus. Since the camera 1 is focused on the object A, the video image of the object A taken by the camera 1 is sharp and at full resolution, and is therefore clearer than a magnified image of the object A in the video image taken by the camera S. Accordingly, when the user selects the object A, a clear, magnified video image of the object A is presented to the user by playing the video image of the object A captured by the camera 1.
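The video-timestamp drop-frame synchronization mentioned above can be sketched minimally as follows: when switching to a local stream, frames whose timestamps lag the current playback position are dropped so the local video starts in sync with the global video. The frame representation and function names here are illustrative assumptions, not part of the disclosure.

```python
def sync_to_playhead(frames, playhead_ts):
    """Drop frames whose timestamp is behind the current playback position.

    frames      -- iterable of (timestamp_ms, payload) tuples in decode order
    playhead_ts -- current playback timestamp in milliseconds
    Returns the remaining frames, starting at or after the playhead.
    """
    return [(ts, payload) for ts, payload in frames if ts >= playhead_ts]

# Example: the global stream is at t=120 ms when the user selects a local
# video; earlier local frames are discarded rather than played late.
local_frames = [(0, "f0"), (40, "f1"), (80, "f2"), (120, "f3"), (160, "f4")]
synced = sync_to_playhead(local_frames, 120)   # → [(120, "f3"), (160, "f4")]
```

A low-latency transmission protocol, the alternative the disclosure names, would instead keep end-to-end delay small enough that no frames need dropping.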
According to an exemplary embodiment of the present disclosure, the local video may be decoded and played while decoding and playing of the global video are terminated. According to another exemplary embodiment of the present disclosure, the local video may be decoded and played while the global video continues to be decoded and played. For example, the user interface may be divided into two areas, one displaying the global video and the other displaying the local video; or the user interface may display the global video in full screen and display the local video in a predetermined area of the global video display; other display manners are also possible, which are not limited by the present disclosure. According to another exemplary embodiment of the present disclosure, in the case where the global video and a local video are played simultaneously, the user may also select to play another local video from the picture in which the global video is played; the user terminal 120 or 130 may then display the global video and the two local videos simultaneously, the display layout not being limited by the present disclosure. And so on.
In addition, the user terminal 120 or 130 displays a closing interface when playing the local video; when the user selects the closing interface, decoding and playing of the local video are terminated, and decoding and playing of the global video continue. Here, the global video may be played synchronously according to a video-timestamp drop-frame synchronization method or a low-latency transmission protocol. In the case where the global video and the local video are displayed simultaneously, the window of the local video may be closed and the global video displayed in full screen. In addition, in the case where the global video and a plurality of local videos are displayed simultaneously, a closing interface may be displayed on the window of each local video, and the user may close any local video window by selecting its closing interface. The present disclosure does not limit the display position and display manner of the closing interface; for example, an icon "x" may be displayed at the upper right of each local video window as the closing interface, and so on.
Fig. 2 is a flowchart illustrating a video playing method according to an exemplary embodiment of the present disclosure. The video playing method according to the exemplary embodiment of the present disclosure may be performed in the user terminal 120 or 130, for example, by a client in the user terminal 120 or 130 or a video playing device, etc., which is not limited by the present disclosure.
Referring to fig. 2, at step 201, an encoded video may be received. Here, the encoded video is obtained by multiplex-encoding a global video and at least one local video captured in the same scene, the global video being a video capturing an overall scene, and each of the at least one local video being a video capturing one local scene. For example, as shown in fig. 1, the global video may be a video in which the overall scene of the live broadcast room 100, including the objects A, B, and C, is photographed by the camera S, and the at least one local video may include a video in which a local scene including the object A of the live broadcast room 100 is photographed by the camera 1, a video in which a local scene including the object B of the live broadcast room 100 is photographed by the camera 2, and a video in which a local scene including the object C of the live broadcast room 100 is photographed by the camera 3. Furthermore, the encoded video may further include position information of each of the at least one local video regarding the position of the corresponding local scene in the overall scene. Here, the position of the local scene in the overall scene may refer to the position of the main shooting object of the local scene in the global video image, or to any position that can represent the position of the local scene in the overall scene, for example, the position of the local video image as a whole within the global video image, and the disclosure is not limited thereto.
In step 202, the global video in the encoded video and the position information of each local video of the at least one local video may be decoded. For example, the video frames of the global video may be identified according to their ID information and decoded to obtain the decoded global video.
In step 203, a global video may be played.
In step 204, prompt information for prompting that a local video exists at the corresponding local scene position in the global video picture may be displayed on the user interface for playing the global video, based on the position information of each local video of the at least one local video. Here, the prompt information may indicate that the user can magnify and view a high-definition video image of the local scene position, and may be displayed as a button, an icon, text, or the like, which is not limited by the present disclosure. According to an exemplary embodiment of the present disclosure, the position of the corresponding local scene in the global video picture may be determined based on the position information of each local video, and the prompt information may be displayed near the corresponding local scene in the global video picture based on the determined position. Of course, the display position and display manner of the prompt information are not limited by the present disclosure; for example, the prompt information may be displayed in a predetermined area, and so on. Further, according to an exemplary embodiment of the present disclosure, the local scene position area and/or the area displaying the prompt information may be set as an interactive area on the user interface playing the global video, so that an input of the user selecting the corresponding local video may be received through the interactive area. That is, the user may select to magnify the corresponding local scene, i.e., play the corresponding local video, by selecting the interactive area.
Specifically, each local scene position area may be set as an interactive area for magnifying the corresponding local scene (i.e., playing the corresponding local video), or the area in which the prompt information of each local video is displayed may be set as such an interactive area, or both may be set as interactive areas to facilitate user operation.
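The interactive areas described above can be sketched as a simple hit test: each local scene position (from the decoded position information) becomes a clickable rectangle on the global-video user interface, and a click is resolved to the local video whose rectangle contains it. The rectangle format and names are assumptions for illustration, not part of the disclosure.

```python
def build_interaction_areas(position_info):
    """position_info maps a local-video ID to its (x, y, w, h) rectangle
    in global-picture coordinates."""
    return {vid: rect for vid, rect in position_info.items()}

def hit_test(areas, x, y):
    """Return the ID of the local video whose area contains (x, y), or None."""
    for vid, (ax, ay, aw, ah) in areas.items():
        if ax <= x < ax + aw and ay <= y < ay + ah:
            return vid
    return None

# Example: objects A, B, and C occupy three regions of the global picture.
areas = build_interaction_areas({
    "local_A": (100, 200, 80, 120),
    "local_B": (400, 180, 90, 130),
    "local_C": (700, 220, 70, 110),
})
selected = hit_test(areas, 120, 250)   # a click inside object A's region
```

The same lookup serves whether the clickable region is the local scene itself or the area where its prompt information is drawn; only the rectangles differ.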
According to an exemplary embodiment of the present disclosure, the video playing method may further include: in response to receiving an input that the user selects to play a first local video on the user interface for playing the global video, decoding the first local video and playing the first local video. For example, the user may select to play the first local video by selecting the interaction area for the first local video on the user interface playing the global video, for example, by clicking or touching the area of the local scene corresponding to the first local video or the area displaying the prompt information about the first local video.
Further, according to an exemplary embodiment of the present disclosure, the first local video may be decoded and played while decoding and playing of the global video are terminated. According to another exemplary embodiment of the present disclosure, the first local video may be decoded and played while the global video continues to be decoded and played. For example, the user interface may be divided into two areas, one displaying the global video and the other displaying the first local video; or the user interface may display the global video in full screen and display the first local video in a predetermined area of the global video display; other display manners are also possible, which are not limited by the present disclosure. According to another exemplary embodiment of the present disclosure, in the case where the global video and the first local video are played simultaneously, the user may further select to play another local video (for example, a second local video) from the picture in which the global video is played; the global video and the first and second local videos may then be displayed simultaneously so that the user can view the plurality of local videos at once, which is not limited by the present disclosure. And so on.
According to an exemplary embodiment of the present disclosure, the video playing method may further include: and displaying a closing interface when the first local video is played, terminating decoding and playing the first local video in response to receiving an input of a user for selecting the closing interface, and continuing decoding and playing the global video. In the case that the global video and the first local video are played simultaneously, the window of the first local video may be closed, and the global video may be displayed in a full screen. Further, in the case where the global video and the plurality of local videos (e.g., the first local video and the second local video) are simultaneously played, a closing interface may be displayed on a window of each local video, and a user may close a window of any local video by selecting any closing interface. The present disclosure does not limit the display position and the display manner of the closing interface, and for example, an icon "x" may be displayed at the upper right of the window of each partial video as the closing interface, and so on.
In addition, the present disclosure does not limit the execution order of the above steps 202 to 204, which may be executed in any feasible order. For example, after step 202, steps 203 and 204 may be performed simultaneously; or the global video may first be decoded in step 202 and played in step 203, and the position information then decoded in step 202 and the prompt information displayed in step 204; and so on.
In the following, a video playing method according to an exemplary embodiment of the present disclosure is illustrated by an exemplary example.
When an encoded video (obtained by multiplex-encoding a global video and at least one local video shot in the same scene, and including position information of each local video regarding the position of the corresponding local scene in the overall scene) is received, the global video in the encoded video is decoded and played by default. In addition, the position information of each local video of the at least one local video in the encoded video is decoded, and prompt information is displayed near the corresponding local scene on the user interface for playing the global video based on the decoded position information. When the user selects to magnify a certain local scene through the at least one local scene area and/or prompt information display area on the user interface, the local video corresponding to that local scene in the encoded video can be decoded and played, and playing of the global video can be continued or stopped depending on the situation, which may be determined by a preset. A closing interface is displayed on the window playing the local video; when an input of the user selecting the closing interface is received, playing of the local video can be stopped and playing of the global video continued.
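The playback flow walked through above can be summarized as a small state sketch: the global video plays by default, selecting an interaction area switches to (or adds) the corresponding local video, and closing its window returns to the global video. Class and method names are hypothetical, chosen only for this illustration.

```python
class Player:
    """Tracks which of the multiplexed streams are currently playing."""

    def __init__(self, local_ids):
        self.local_ids = set(local_ids)
        self.playing = ["global"]          # global video plays by default

    def select_local(self, vid, keep_global=True):
        """Play a local video; optionally keep the global video on screen."""
        if vid not in self.local_ids:
            raise KeyError(f"unknown local video: {vid}")
        if not keep_global:
            self.playing = [vid]           # terminate global playback
        elif vid not in self.playing:
            self.playing.append(vid)       # split-screen / picture-in-picture

    def close_local(self, vid):
        """Close a local video window and resume the global video if needed."""
        if vid in self.playing:
            self.playing.remove(vid)
        if "global" not in self.playing:
            self.playing.insert(0, "global")

p = Player(["local_A", "local_B", "local_C"])
p.select_local("local_A")                  # global + local A shown together
p.select_local("local_B")                  # global + two local videos
p.close_local("local_A")                   # local A window closed
```

Whether `keep_global` defaults to continuing or terminating the global video corresponds to the preset the example above says determines that behavior.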
Fig. 3 is a flowchart illustrating a video generation method according to an exemplary embodiment of the present disclosure. The video generation method according to an exemplary embodiment of the present disclosure may be performed by one of the camera S, the plurality of cameras 1, 2, and 3, and the server 110 shown in fig. 1, or by another external device (not shown).
Referring to fig. 3, in step 301, a global video and at least one local video photographed in the same scene may be acquired. Here, the global video is a video capturing an overall scene, and each of the at least one local video is a video capturing one local scene. For example, as shown in fig. 1, the global video may be a video in which the overall scene of the live broadcast room 100, including the objects A, B, and C, is photographed by the camera S, and the at least one local video may include a video in which a local scene including the object A of the live broadcast room 100 is photographed by the camera 1, a video in which a local scene including the object B of the live broadcast room 100 is photographed by the camera 2, and a video in which a local scene including the object C of the live broadcast room 100 is photographed by the camera 3.
In step 302, a position of a respective local scene of each of the at least one local video in the overall scene may be determined to generate position information of each of the at least one local video. Here, the position of the local scene in the overall scene may refer to a position of a main shooting object in the local scene in the global video image, and may also be any position that may represent the position of the local scene in the overall scene, for example, a position of the local video image in the global video image as a whole, and the disclosure is not limited thereto.
According to an exemplary embodiment of the present disclosure, a position of a corresponding local scene of each local video of the at least one local video in the overall scene may be determined by image matching the global video and each local video or receiving a position mark of each local video by a user.
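The image-matching option mentioned above can be sketched by sliding the local frame over the global frame and minimizing the sum of squared differences; the best offset gives the local scene's position in the overall scene. Frames are modeled here as small 2-D grayscale lists purely for illustration; a production system would match real decoded frames (e.g., with an optimized template matcher), and none of these names come from the disclosure.

```python
def locate_local_in_global(global_img, local_img):
    """Return the (row, col) offset where local_img best matches global_img."""
    gh, gw = len(global_img), len(global_img[0])
    lh, lw = len(local_img), len(local_img[0])
    best, best_pos = None, None
    for r in range(gh - lh + 1):
        for c in range(gw - lw + 1):
            # Sum of squared differences between the patch and this window.
            ssd = sum(
                (global_img[r + i][c + j] - local_img[i][j]) ** 2
                for i in range(lh) for j in range(lw)
            )
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Example: a 2x2 local patch embedded at offset (1, 2) of a 4x5 global frame.
g = [[0, 0, 0, 0, 0],
     [0, 0, 9, 8, 0],
     [0, 0, 7, 6, 0],
     [0, 0, 0, 0, 0]]
patch = [[9, 8], [7, 6]]
pos = locate_local_in_global(g, patch)     # → (1, 2)
```

The other option the disclosure names, a user-supplied position mark, skips this search entirely and records the marked coordinates directly.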
In step 303, an encoded video may be generated by multi-path encoding the global video and the at least one local video and embedding the position information of each of the at least one local video. For example, the video images of the global video and the at least one local video may be encoded into different paths of video frames corresponding to the same timestamp, with the video frames of different videos labeled with different IDs, so that when decoding, the video to which a video frame belongs can be distinguished according to its ID. Furthermore, the position information of each of the at least one local video may also be embedded; for example, the position information of each local video may be embedded in the auxiliary information of the corresponding encoded frame.
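A minimal sketch of this multiplexing step: frames of the global video and each local video that share a capture timestamp are tagged with per-stream IDs, and each local stream's position information rides along as auxiliary data. The record format here is invented for the sketch; a real encoder would carry this metadata in container- or codec-level auxiliary fields.

```python
def mux_frames(timestamp, global_frame, local_frames, positions):
    """Multiplex one timestamp's frames from all streams into tagged records.

    local_frames -- {stream_id: frame_payload}
    positions    -- {stream_id: (x, y, w, h)} position in the global picture
    """
    records = [{"id": "global", "ts": timestamp, "frame": global_frame}]
    for sid, frame in local_frames.items():
        records.append({
            "id": sid,
            "ts": timestamp,
            "frame": frame,
            "aux": {"position": positions[sid]},   # embedded position info
        })
    return records

def demux(records, wanted_id):
    """At decode time, pick out one stream's frames by its ID."""
    return [r for r in records if r["id"] == wanted_id]

# One timestamp's worth of frames: the global camera plus camera 1's stream.
muxed = mux_frames(40, "G0", {"cam1": "L0"}, {"cam1": (100, 200, 80, 120)})
cam1 = demux(muxed, "cam1")
```

This is exactly the ID-based separation the decoding side (step 202) relies on: the player decodes the `global` stream by default and the position metadata from each local stream's auxiliary data.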
Fig. 4 is a block diagram illustrating a video playback apparatus 400 according to an exemplary embodiment of the present disclosure. The video playback apparatus 400 according to an exemplary embodiment of the present disclosure may be included in the user terminal 120 or 130 or a client of the user terminal 120 or 130.
Referring to fig. 4, the video playback device 400 may include a video receiving unit 401, a video decoding unit 402, a video playback unit 403, and an information display unit 404.
The video receiving unit 401 may receive encoded video. Here, the encoded video is obtained by multiplexing a global video and at least one local video captured in the same scene, the global video being a video captured of an overall scene, and each of the at least one local video being a video captured of one local scene. Furthermore, the encoded video may further include location information for each of the at least one partial video regarding a location of the respective partial scene in the overall scene.
The video decoding unit 402 may decode the global video of the encoded videos and the position information of each local video of the at least one local video.
The video playing unit 403 can play the global video.
The information display unit 404 may display, on the user interface that plays the global video, cue information for cueing that the local video exists at a corresponding local scene position in the global video screen, based on the position information of each of the at least one local video.
The present disclosure does not limit the execution order of the above operations performed by the video decoding unit 402, the video playing unit 403, and the information display unit 404, respectively, and the above operations may be performed in any possible order. For example, after the video decoding unit 402 performs the operation, the video playing unit 403 and the information display unit 404 may perform the operation at the same time, or the video decoding unit 402 may perform global video decoding first and then the video playing unit 403 may play the global video, and then the video decoding unit 402 may perform position information decoding again and then the information display unit 404 displays the hint information, and so on.
According to an exemplary embodiment of the present disclosure, the video playback apparatus 400 may further include a user interface unit (not shown) for interacting with a user. In response to receiving, through the user interface unit, an input that the user selects to play a first local video on the user interface playing the global video, the video decoding unit 402 decodes the first local video, and the video playing unit 403 plays the first local video. According to an exemplary embodiment of the present disclosure, the video decoding unit 402 may decode the first local video and terminate decoding the global video, and the video playing unit 403 may play the first local video and terminate playing the global video. According to another exemplary embodiment of the present disclosure, the video decoding unit 402 may decode the first local video while continuing to decode the global video, and the video playing unit 403 may play the first local video while continuing to play the global video.
According to an exemplary embodiment of the present disclosure, the information display unit 404 may be further configured to display a close interface while the first partial video is played. In response to a user interface unit (not shown) receiving an input selecting to close the interface by a user, the video decoding unit 402 terminates decoding the first local video, the video playing unit 403 terminates playing the first local video, and the video decoding unit 402 continues decoding the global video, and the video playing unit 403 continues playing the global video.
According to an exemplary embodiment of the present disclosure, the information display unit 404 may determine a position of a corresponding local scene in the global video picture based on the position information of each local video, and display cue information near the corresponding local scene in the global video picture based on the determined position. Of course, the display position and display mode of the prompt message are not limited in the present disclosure, and for example, the prompt message may be displayed in a predetermined area, and the like. Further, according to an exemplary embodiment of the present disclosure, the information display unit 404 may set a local scene position area and/or an area displaying the hint information as an interactive area on a user interface playing the global video, so that an input of a user selecting a corresponding local video may be received through the interactive area.
Fig. 5 is a block diagram illustrating a video generation apparatus 500 according to an exemplary embodiment of the present disclosure. The video generation apparatus 500 according to an exemplary embodiment of the present disclosure may be included in one device among the camera S, the plurality of cameras 1,2, and 3, the server 110, as shown in fig. 1, or in other external devices (not shown).
Referring to fig. 5, the video generating apparatus 500 may include a video acquiring unit 501, a position determining unit 502, and a video encoding unit 503.
The video acquisition unit 501 may acquire a global video and at least one local video photographed in the same scene. Here, the global video is a video capturing an overall scene, and each of the at least one local video is a video capturing one local scene.
The position determination unit 502 may determine a position of a corresponding local scene of each of the at least one local video in the overall scene to generate position information of each of the at least one local video. According to an exemplary embodiment of the present disclosure, the position determination unit 502 may determine a position of a corresponding local scene of each local video of the at least one local video in the overall scene by image matching the global video and each local video or receiving a position marker of each local video by a user.
The video encoding unit 503 may generate an encoded video by multiplexing the global video and the at least one local video and embedding the position information of each of the at least one local video.
Fig. 6 is a block diagram of an electronic device 600 according to an example embodiment of the present disclosure.
Referring to fig. 6, the electronic device 600 includes at least one memory 601 and at least one processor 602, the at least one memory 601 having stored therein a set of computer-executable instructions that, when executed by the at least one processor 602, perform a video playback method according to an exemplary embodiment of the present disclosure.
By way of example, the electronic device 600 may be a PC, a tablet device, a personal digital assistant, a smartphone, or any other device capable of executing the above set of instructions. Here, the electronic device 600 need not be a single electronic device, but can be any arrangement or collection of circuits capable of executing the above instructions (or instruction sets), either individually or in combination. The electronic device 600 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces with a local or remote system (e.g., via wireless transmission).
In the electronic device 600, the processor 602 may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a programmable logic device, a special purpose processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, processors may also include analog processors, digital processors, microprocessors, multi-core processors, processor arrays, network processors, and the like.
The processor 602 may execute instructions or code stored in the memory 601, wherein the memory 601 may also store data. The instructions and data may also be transmitted or received over a network via a network interface device, which may employ any known transmission protocol.
The memory 601 may be integrated with the processor 602, for example, with RAM or flash memory disposed within an integrated circuit microprocessor or the like. Further, memory 601 may comprise a stand-alone device, such as an external disk drive, storage array, or any other storage device usable by a database system. The memory 601 and the processor 602 may be operatively coupled or may communicate with each other, e.g., through I/O ports, network connections, etc., such that the processor 602 can read files stored in the memory.
Further, the electronic device 600 may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of the electronic device 600 may be connected to each other via a bus and/or a network.
Fig. 7 is a block diagram of a device 700 according to an exemplary embodiment of the present disclosure. The device 700 according to an exemplary embodiment of the present disclosure may be the camera S, one of the plurality of cameras 1, 2, and 3, or the server 110 shown in fig. 1, or may be another external device (not shown) for generating encoded video.
Referring to fig. 7, the device 700 includes at least one memory 701 and at least one processor 702, the at least one memory 701 having stored therein a set of computer-executable instructions that, when executed by the at least one processor 702, perform a video generation method according to an exemplary embodiment of the present disclosure.
According to an exemplary embodiment of the present disclosure, there may also be provided a computer-readable storage medium storing instructions which, when executed by at least one processor, cause the at least one processor to perform the video playing method or the video generating method according to the present disclosure. Examples of the computer-readable storage medium here include: read-only memory (ROM), random-access programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, non-volatile memory, CD-ROM, CD-R, CD+R, CD-RW, CD+RW, DVD-ROM, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD-RAM, BD-ROM, BD-R, BD-R LTH, BD-RE, Blu-ray or optical disc storage, hard disk drive (HDD), solid-state drive (SSD), card-type memory (such as a multimedia card, a Secure Digital (SD) card, or an eXtreme Digital (XD) card), magnetic tape, a floppy disk, a magneto-optical data storage device, an optical data storage device, a hard disk, a solid-state disk, and any other device configured to store a computer program and any associated data, data files, and data structures in a non-transitory manner and provide them to a processor or computer so that the processor or computer can execute the computer program. The computer program in the computer-readable storage medium described above can run in an environment deployed in computer equipment such as a client, a host, a proxy device, or a server; furthermore, in one example, the computer program and any associated data, data files, and data structures are distributed across networked computer systems such that they are stored, accessed, and executed in a distributed fashion by one or more processors or computers.
According to an exemplary embodiment of the present disclosure, there may also be provided a computer program product, in which instructions are executable by a processor of a computer device to perform a video playing method or a video generating method according to an exemplary embodiment of the present disclosure.
The video playing method and the video generating method according to the present disclosure are not limited to be applied to live or on-demand scenes, but may be applied to any other scenes where video is played.
According to the video playing method and apparatus and the video generating method and apparatus of the present disclosure, a plurality of videos shot of different local objects in the same scene (for example, a global video shot of the overall scene and at least one local video shot of at least one local scene) can be multi-path encoded into one encoded video, and when the encoded video is played, the user is prompted as to which local scenes can be locally magnified, so that the user can select and view magnified local scene pictures. In addition, for an object among objects at different focal distances in the global picture that is out of focus because it lies outside the effective focal range, the user is prompted during playback that the object can be displayed clearly, so that the user can select it and view a picture in which the object is clear. When the user selects to magnify a certain local scene as needed, the corresponding local video captured by an independent camera can be decoded and played, thereby achieving a high-definition local magnification effect and improving the user experience.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A video playback method, comprising:
receiving encoded video, wherein the encoded video is obtained by multiplexing global video and at least one local video shot in the same scene, the global video is a video shot in an overall scene, each local video in the at least one local video is a video shot in a local scene, and the encoded video comprises position information of each local video in the at least one local video about the position of the corresponding local scene in the overall scene;
decoding the global video of the encoded video and the position information of each local video of the at least one local video;
playing the global video;
and displaying prompt information on a user interface playing the global video based on the position information of each local video in the at least one local video, wherein the prompt information is used for prompting that the local video exists at the corresponding local scene position in the global video picture.
2. The video playback method of claim 1, further comprising:
in response to receiving an input that a user selects to play the first local video on a user interface for playing the global video, decoding the first local video and playing the first local video.
3. The video playback method of claim 2, wherein said decoding the first partial video to play back the first partial video comprises:
decoding the first local video and terminating decoding the global video;
and playing the first local video and terminating playing the global video.
4. The video playback method of claim 2, wherein said decoding the first partial video to play back the first partial video comprises:
decoding the first local video while continuing to decode the global video;
the first local video is played while the global video continues to be played.
5. A method of video generation, comprising:
acquiring a global video and at least one local video shot in the same scene, wherein the global video is a video shot of an overall scene, and each local video in the at least one local video is a video shot of one local scene;
determining a position of a corresponding local scene of each of the at least one local video in the overall scene to generate position information of each of the at least one local video;
generating an encoded video by multiplex encoding the global video and the at least one local video and embedding the position information of each local video of the at least one local video.
6. A video playback apparatus, comprising:
a video receiving unit configured to receive an encoded video, wherein the encoded video is obtained by multiplex encoding a global video and at least one local video shot in the same scene, the global video is a video shot of the overall scene, each local video of the at least one local video is a video shot of one local scene, and the encoded video includes, for each local video of the at least one local video, position information indicating the position of the corresponding local scene in the overall scene;
a video decoding unit configured to decode the global video of the encoded video and the position information of each local video of the at least one local video;
a video playing unit configured to play the global video;
an information display unit configured to display prompt information on a user interface playing the global video based on the position information of each local video of the at least one local video, wherein the prompt information prompts that a local video exists at the corresponding local scene position in the global video picture.
7. A video generation apparatus, comprising:
a video acquisition unit configured to acquire a global video and at least one local video shot in the same scene, wherein the global video is a video shot of an overall scene, and each of the at least one local video is a video shot of one local scene;
a position determination unit configured to determine a position of a corresponding local scene of each of the at least one local video in the overall scene to generate position information of each of the at least one local video;
a video encoding unit configured to generate an encoded video by multiplex encoding the global video and the at least one local video and embedding the position information of each local video of the at least one local video.
8. An electronic device, comprising:
at least one processor;
at least one memory storing computer-executable instructions,
wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform the video playback method of any of claims 1 to 4.
9. An apparatus, comprising:
at least one processor;
at least one memory storing computer-executable instructions,
wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform the video generation method of claim 5.
10. A computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform the video playback method of any one of claims 1 to 4 or the video generation method of claim 5.
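The prompt display of claim 1 can be illustrated with a minimal sketch. All names and the coordinate format are assumptions (the claims do not prescribe one); here each local video's position information is taken to be a normalized rectangle in the global frame, from which a prompt badge is placed at the local scene's centre:

```python
# Hypothetical sketch: map normalized local-scene rectangles (0..1) carried in
# the encoded video's position information to pixel-space prompt badges on the
# user interface that is playing the global video.

def prompt_badges(frame_w, frame_h, local_positions):
    """local_positions: {local_id: (x, y, w, h)} with normalized coordinates."""
    badges = {}
    for local_id, (x, y, w, h) in local_positions.items():
        # Place a badge at the centre of the local scene's region.
        badges[local_id] = (int((x + w / 2) * frame_w),
                           int((y + h / 2) * frame_h))
    return badges

# Example: a 1920x1080 global frame with one local scene in its top-left quarter.
print(prompt_badges(1920, 1080, {"cam1": (0.0, 0.0, 0.5, 0.5)}))
# → {'cam1': (480, 270)}
```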
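Claims 3 and 4 describe two playback variants after the user selects a first local video. A sketch of that distinction, with hypothetical names and a simplified state set standing in for actual decode/play pipelines:

```python
# Hypothetical sketch of the two playback variants in claims 3 and 4:
# "switch" terminates decoding/playing the global video (claim 3), while
# "pip" keeps the global video playing alongside the local one (claim 4).

class Player:
    def __init__(self, local_ids):
        self.playing = {"global"}
        self.local_ids = set(local_ids)

    def select_local(self, local_id, mode):
        if local_id not in self.local_ids:
            raise ValueError(f"unknown local video: {local_id}")
        if mode == "switch":          # claim 3: terminate the global video
            self.playing.discard("global")
        elif mode != "pip":           # claim 4: continue the global video
            raise ValueError(f"unknown mode: {mode}")
        self.playing.add(local_id)
        return sorted(self.playing)

p = Player(["cam1", "cam2"])
print(p.select_local("cam1", "pip"))     # global keeps playing
print(p.select_local("cam2", "switch"))  # global is terminated
```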
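The generation side (claim 5) and the receiving side (claim 1) can be paired in one sketch. This is not the patent's encoding scheme — a JSON container with hex-encoded streams stands in for a real multiplexed bitstream — but it shows the shape of the idea: the global stream, the local streams, and each local video's position information travel together, and the player can extract only the position information first to draw its prompts:

```python
# Hypothetical sketch: pack the global stream, the local streams, and each
# local video's position information into one container, and unpack just the
# position information on the receiving side.
import json

def mux(global_stream: bytes, locals_: dict) -> bytes:
    """locals_: {local_id: (stream_bytes, (x, y, w, h))}"""
    container = {
        "global": global_stream.hex(),
        "locals": {lid: {"stream": s.hex(), "position": pos}
                   for lid, (s, pos) in locals_.items()},
    }
    return json.dumps(container).encode()

def demux_positions(encoded: bytes) -> dict:
    # The player first needs only the position info to display its prompts.
    container = json.loads(encoded.decode())
    return {lid: tuple(entry["position"])
            for lid, entry in container["locals"].items()}

encoded = mux(b"GLOBAL", {"cam1": (b"LOCAL1", (0.1, 0.2, 0.3, 0.3))})
print(demux_positions(encoded))  # {'cam1': (0.1, 0.2, 0.3, 0.3)}
```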
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011043068.XA CN112188269B (en) | 2020-09-28 | 2020-09-28 | Video playing method and device and video generating method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112188269A (en) | 2021-01-05 |
CN112188269B CN112188269B (en) | 2023-01-20 |
Family
ID=73946675
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011043068.XA Active CN112188269B (en) | 2020-09-28 | 2020-09-28 | Video playing method and device and video generating method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112188269B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113891105A (en) * | 2021-09-28 | 2022-01-04 | 广州繁星互娱信息科技有限公司 | Picture display method and device, storage medium and electronic equipment |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103595954A (en) * | 2012-08-16 | 2014-02-19 | 北京中电华远科技有限公司 | Method and system for multi-video-image fusion processing based on position information |
US20170178289A1 (en) * | 2015-12-16 | 2017-06-22 | Xiaomi Inc. | Method, device and computer-readable storage medium for video display |
CN106888169A (en) * | 2017-01-06 | 2017-06-23 | 腾讯科技(深圳)有限公司 | Video broadcasting method and device |
CN108171723A (en) * | 2017-12-22 | 2018-06-15 | 湖南源信光电科技股份有限公司 | Based on more focal length lens of Vibe and BP neural network algorithm linkage imaging camera machine system |
CN208063332U (en) * | 2018-03-06 | 2018-11-06 | 北京伟开赛德科技发展有限公司 | Panoramic video plays the linkage photographic device being combined with local detail amplification display |
CN109121000A (en) * | 2018-08-27 | 2019-01-01 | 北京优酷科技有限公司 | A kind of method for processing video frequency and client |
CN109963200A (en) * | 2017-12-25 | 2019-07-02 | 上海全土豆文化传播有限公司 | Video broadcasting method and device |
CN111314769A (en) * | 2020-02-27 | 2020-06-19 | 北京金和网络股份有限公司 | Video processing method, device, storage medium and server |
Also Published As
Publication number | Publication date |
---|---|
CN112188269B (en) | 2023-01-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10425679B2 (en) | Method and device for displaying information on video image | |
TWI496459B (en) | Facilitating placeshifting using matrix code | |
US10305957B2 (en) | Video production system with DVE feature | |
US8253794B2 (en) | Image processing apparatus and image display method | |
CN107786905B (en) | Video sharing method and device | |
US20220210516A1 (en) | Methods, systems, and media for providing media guidance | |
KR102133207B1 (en) | Communication apparatus, communication control method, and communication system | |
US9635079B1 (en) | Social media sharing based on video content | |
US11166084B2 (en) | Display overlays for prioritization of video subjects | |
US10021433B1 (en) | Video-production system with social-media features | |
KR20150011943A (en) | Broadcasting providing apparatus, Broadcasting providing system, and Method for providing broadcasting thereof | |
US11211097B2 (en) | Generating method and playing method of multimedia file, multimedia file generation apparatus and multimedia file playback apparatus | |
US20150256690A1 (en) | Image processing system and image capturing apparatus | |
CN112188269B (en) | Video playing method and device and video generating method and device | |
CN106331891A (en) | Information interaction method and electronic device | |
CN112188219B (en) | Video receiving method and device and video transmitting method and device | |
US10375456B2 (en) | Providing highlights of an event recording | |
EP2942949A1 (en) | System for providing complex-dimensional content service using complex 2d-3d content file, method for providing said service, and complex-dimensional content file therefor | |
US10382824B2 (en) | Video production system with content extraction feature | |
CN113315987A (en) | Video live broadcast method and video live broadcast device | |
CN110198457B (en) | Video playing method and device, system, storage medium, terminal and server thereof | |
EP3522525B1 (en) | Method and apparatus for processing video playing | |
KR102139331B1 (en) | Apparatus, server, and method for playing moving picture contents | |
JP2008090526A (en) | Conference information storage device, system, conference information display device, and program | |
JP2017046162A (en) | Synthetic moving image creation system, synthetic moving image creation support system and synthetic moving image creation program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||