
WO2017181777A1 - Panoramic video live broadcast method, apparatus and system, and video source control device - Google Patents

Panoramic video live broadcast method, apparatus and system, and video source control device

Info

Publication number
WO2017181777A1
WO2017181777A1 (application PCT/CN2017/075573)
Authority
WO
WIPO (PCT)
Prior art keywords
video data
panoramic video
time interval
preset time
panoramic
Prior art date
Application number
PCT/CN2017/075573
Other languages
English (en)
French (fr)
Inventor
胡镇杰
Original Assignee
北京金山安全软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京金山安全软件有限公司
Publication of WO2017181777A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/21805 Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/23424 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8455 Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Definitions

  • the present application relates to the field of mobile Internet technologies, and in particular, to a panoramic video live broadcast method, apparatus, and system, and a video source control device.
  • live video broadcasting is increasingly popular with the public.
  • live video broadcasting evolved from video on demand; delivered over the Internet with streaming media technology, it allows video content to be broadcast in real time and in full.
  • the current problem is that when watching a live video through a mobile terminal, the user can only see the area captured by the broadcaster's phone camera, and the viewed picture changes only as that camera moves, so it cannot provide the user with a rich visual experience.
  • the present application aims to solve at least one of the technical problems in the related art to some extent.
  • the first object of the present application is to provide a method for broadcasting a panoramic video.
  • the method for broadcasting a panoramic video enables a client user to view the entire panorama around the other party, understand the real environment information around the other party, and improves the visual experience of watching the live video.
  • a second object of the present application is to provide a panoramic video live broadcast device.
  • the third object of the present application is to propose a panoramic video live broadcast system.
  • a fourth object of the present application is to propose a video source control device.
  • a fifth object of the present application is to propose a storage medium.
  • a sixth object of the present application is to propose an application.
  • the first aspect of the present application provides a method for broadcasting a panoramic video, comprising: receiving, in real time, video data of different angles collected by multiple video capture devices; splicing the video data of the different angles to generate panoramic video data; and sending, at every preset time interval, the panoramic video data within that interval to the cloud server.
  • the video data collected in real time by the multiple video capture devices is spliced, and the spliced panoramic video data is sent to the server at preset time intervals, so that the server can provide a panoramic video live broadcast to the client based on the panoramic video data.
  • the client user can thus see the entire panorama around the other party, understand the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
  • sending the panoramic video data in the preset time interval to the cloud server at every preset time interval includes: generating an index file corresponding to the panoramic video data in the preset time interval; establishing a mapping relationship between the index file and the panoramic video data in the preset time interval; and sending the panoramic video data in the preset time interval and the index file to the cloud server.
  • the splicing of the video data of the different angles to generate panoramic video data includes: splicing the video data of the different angles in order to synthesize a panoramic video; collecting the synthesized panoramic video by using dynamic bit-rate adaptive technology; and encoding the collected panoramic video data to obtain encoded panoramic video data.
  • the method further includes: sending, at every preset time interval, the video data of the different angles within that interval to the cloud server.
  • the current state information of the multiple video capture devices is acquired; if it is detected that at least one of the multiple video capture devices has stopped collecting video data, the other video capture devices are controlled to stop collecting video data as well, and error prompt information is generated.
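the state-monitoring step above can be sketched as follows; the `CaptureController` class, its field names, and the dictionary-based camera representation are illustrative assumptions, not part of the application:

```python
class CaptureController:
    """Sketch: if any one camera stops capturing, stop them all.

    Cameras are modeled as dicts with hypothetical "id" and "capturing"
    keys; a real controller would query the devices over the network.
    """

    def __init__(self, cameras):
        self.cameras = cameras
        self.error = None

    def poll(self):
        # Find cameras that have stopped collecting video data.
        stopped = [c for c in self.cameras if not c["capturing"]]
        if stopped:
            # Control the remaining devices to stop as well.
            for c in self.cameras:
                c["capturing"] = False
            # Generate error prompt information.
            self.error = "capture stopped on: " + ", ".join(
                c["id"] for c in stopped
            )
        return self.error
```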
  • the second aspect of the present application provides a panoramic video live broadcast apparatus, including: a receiving module, configured to receive, in real time, video data of different angles collected by multiple video capture devices; a processing module, configured to splice the video data of the different angles to generate panoramic video data; and a sending module, configured to send, at every preset time interval, the panoramic video data within that interval to the cloud server.
  • the panoramic video live broadcast apparatus of the embodiment of the present application splices the video data collected in real time by the multiple video capture devices and sends the spliced panoramic video data to the server at preset time intervals, so that the server can provide a panoramic video live broadcast to the client based on the panoramic video data.
  • the client user can thus see the entire panorama around the other party, understand the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
  • the panoramic video live broadcast apparatus further includes: a first generation module, configured to generate an index file corresponding to the panoramic video data in the preset time interval; and an establishing module, configured to establish a mapping relationship between the index file and the panoramic video data in the preset time interval; the sending module is further configured to send the panoramic video data in the preset time interval and the index file to the cloud server, so that the cloud server saves the index file and the panoramic video data according to the mapping relationship.
  • the processing module is further configured to: splice the video data of the different angles in order to synthesize a panoramic video, collect the synthesized panoramic video by using dynamic bit-rate adaptive technology, and encode the collected panoramic video data to obtain encoded panoramic video data.
  • the sending module is further configured to send, at every preset time interval, the video data of the different angles within that interval to the cloud server.
  • the panoramic video live broadcast apparatus further includes: an acquiring module, configured to acquire current state information of the multiple video capture devices; a control module, configured to control the other video capture devices to stop collecting video data when it is detected that at least one of the multiple video capture devices has stopped collecting video data; and a second generation module, configured to generate error prompt information.
  • the third aspect of the present application provides a panoramic video live broadcast system, including multiple video capture devices, a processing device, and a cloud server.
  • the multiple video capture devices are used to collect video data of different angles in real time; the processing device is configured to receive the video data of different angles collected in real time, splice it to generate panoramic video data, and send, at every preset time interval, the panoramic video data within that interval to the cloud server; the cloud server is configured to provide a panoramic video live broadcast to the client according to the panoramic video data in the preset time interval.
  • the video data collected in real time by the multiple video capture devices is spliced by the processing device, and the spliced panoramic video data is sent to the server at preset time intervals, so that the server can provide a panoramic video live broadcast to the client based on the panoramic video data.
  • the client user can thus see the entire panorama around the other party, understand the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
  • the processing apparatus is further configured to: generate an index file corresponding to the panoramic video data in the preset time interval, establish a mapping relationship between the index file and the panoramic video data in the preset time interval, and send the panoramic video data in the preset time interval and the index file to the cloud server; the cloud server is further configured to save the index file and the panoramic video data in the preset time interval according to the mapping relationship.
  • the processing device is further configured to: splice the video data of the different angles in order to synthesize a panoramic video, collect the synthesized panoramic video by using dynamic bit-rate adaptive technology, and encode the collected panoramic video data to obtain encoded panoramic video data.
  • the processing apparatus is further configured to: send, at every preset time interval, the video data of the different angles within that interval to the cloud server;
  • the cloud server is further configured to splice the video data of the different angles within the preset time interval to generate panoramic video data.
  • the cloud server is further configured to: generate an index file corresponding to the panoramic video data in the preset time interval, establish a mapping relationship between the index file and the panoramic video data in the preset time interval, and save the index file and the panoramic video data in the preset time interval according to the mapping relationship.
  • the processing device is further configured to: acquire current state information of the multiple video capture devices; if it is detected that at least one of the multiple video capture devices has stopped collecting video data, control the other video capture devices to stop collecting video data and generate error prompt information.
  • the cloud server is further configured to: receive a download request sent by the client, and sequentially send the panoramic video data in the preset time interval to the client.
  • the fourth aspect of the present application provides a video source control device including one or more of the following components: a processor, a memory, a power supply circuit, an input/output (I/O) interface, and a communication assembly, wherein the processor and the memory are disposed on a circuit board; the power supply circuit is configured to supply power to each circuit or device of the video source control device; the memory is configured to store executable program code; and the processor, by reading the executable program code stored in the memory, runs a program corresponding to the executable program code to perform the panoramic video live broadcast method according to the first aspect of the present application.
  • the video source control device of the embodiment of the present application splices the video data collected in real time by the multiple video capture devices and sends the spliced panoramic video data to the server at preset time intervals, so that the server can provide a panoramic video live broadcast to the client based on the panoramic video data.
  • the client user can thus see the entire panorama around the other party, understand the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
  • the fifth aspect of the present application provides a storage medium storing one or more programs which, when executed, perform the panoramic video live broadcast method according to the first aspect of the present application.
  • the program splices the video data collected in real time by the multiple video capture devices and sends the spliced panoramic video data to the server at preset time intervals, so that the server can provide a panoramic video live broadcast to the client according to the panoramic video data, and the client user can see the entire panorama around the other party.
  • the sixth aspect of the present application provides an application program which, when run, executes the panoramic video live broadcast method according to the embodiments of the present application.
  • the application program of the embodiment of the present application splices the video data collected in real time by the multiple video capture devices and sends the spliced panoramic video data to the server at preset time intervals, so that the server can provide a panoramic video live broadcast to the client according to the panoramic video data.
  • the client user can thus see the entire panorama around the other party, understand the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
  • FIG. 1 is a flowchart of a method for broadcasting a panoramic video according to an embodiment of the present application.
  • FIG. 2 is a flowchart of a panoramic video live broadcast method according to an embodiment of the present application.
  • FIG. 3 is a flowchart of a method for broadcasting a panoramic video according to another embodiment of the present application.
  • FIG. 4 is a flowchart of a method for broadcasting a panoramic video according to another embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a panoramic video live broadcast apparatus according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a panoramic video live broadcast apparatus according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a panoramic video live broadcast apparatus according to another embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a panoramic video live broadcast system according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a video source control device according to an embodiment of the present application.
  • the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • features defined with "first" and "second" may explicitly or implicitly include one or more of those features.
  • the meaning of "a plurality" is two or more, unless clearly and specifically defined otherwise.
  • FIG. 1 is a flowchart of a method for broadcasting a panoramic video according to an embodiment of the present application.
  • the panoramic video live broadcast method includes:
  • S101 Receive video data of different angles collected by multiple video collection devices in real time.
  • the video recording is performed by a panoramic shooting device that includes multiple cameras, each of which is used to record video from a different angle.
  • for example, multiple GoPro motion cameras are attached to a panoramic shooting bracket, each camera is powered on and switched to video recording mode, and each GoPro motion camera establishes a Wi-Fi connection with the control device, through which all the GoPro motion cameras are controlled in unison.
  • each GoPro motion camera is controlled by the control device to start recording video.
  • each GoPro motion camera generates video data during recording, and the video data is transmitted to the control device over UDP (User Datagram Protocol).
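as a rough illustration of the UDP transport mentioned above, the sketch below splits a byte buffer into datagrams and sends them to a receiver; the chunk size, addressing, and the absence of sequencing or retransmission are simplifying assumptions (real camera firmware frames its stream differently):

```python
import socket

def send_chunks_udp(data: bytes, addr, chunk_size=1400):
    """Send a byte buffer as a sequence of UDP datagrams.

    chunk_size=1400 keeps each datagram under a typical Ethernet MTU.
    UDP gives no delivery or ordering guarantees, so a real sender
    would add sequence numbers; this sketch omits them.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    for off in range(0, len(data), chunk_size):
        sent += sock.sendto(data[off:off + chunk_size], addr)
    sock.close()
    return sent
```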
  • the number of video capture devices is, for example, six or eight, and can be set according to requirements; this is not limited in this application.
  • the control device splices the video data received from the GoPro motion cameras in order to synthesize the panoramic video, collects the synthesized panoramic video by using HLS (HTTP Live Streaming) technology, and encodes the panoramic video data with H.264 to obtain the encoded panoramic video data.
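the "splice in order" step can be illustrated with a deliberately naive sketch. Real panoramic synthesis involves lens correction, warping, and blending of overlapping views, none of which is attempted here; the frame representation (lists of pixel rows) is purely illustrative:

```python
def stitch_frames(frames):
    """Naively splice same-height frames left-to-right.

    Each frame is a list of rows (lists of pixel values). This only
    shows the ordered concatenation idea; it does no warping or
    blending of overlapping views.
    """
    heights = {len(f) for f in frames}
    if len(heights) != 1:
        raise ValueError("all frames must share the same height")
    # Concatenate row y of every frame, in camera order.
    return [sum((f[y] for f in frames), []) for y in range(len(frames[0]))]
```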
  • S103 Send the panoramic video data in the preset time interval to the cloud server at every preset time interval, so that the cloud server provides the panoramic video live broadcast to the client according to the panoramic video data in the preset time interval.
  • the panoramic video data is cut according to the preset time interval; for example, a stream cutter cuts the panoramic video data into 5-second small files, and the cut small files are sent to the cloud server.
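a minimal stand-in for the stream cutter described above, assuming frames arrive as (timestamp, payload) pairs (an illustrative representation, not the application's actual stream format):

```python
def segment_stream(frames, segment_seconds=5.0):
    """Group (timestamp, payload) frames into fixed-duration segments.

    Every frame whose timestamp falls in the same 5-second window goes
    into one segment, mirroring the stream cutter's 5-second files.
    """
    segments = {}
    for ts, payload in frames:
        index = int(ts // segment_seconds)
        segments.setdefault(index, []).append(payload)
    # Return segments ordered by their start time.
    return [segments[i] for i in sorted(segments)]
```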
  • the synthesized panoramic video data is uploaded to the cloud server every preset time interval, so that the cloud server can provide a panoramic video live broadcast service for the client users who watch the live broadcast.
  • the video data collected in real time by the multiple video capture devices is spliced, and the spliced panoramic video data is sent to the server at preset time intervals, so that the server can provide a panoramic video live broadcast to the client based on the panoramic video data.
  • the client user can thus see the entire panorama around the other party, understand the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
  • FIG. 2 is a flow chart of a method for broadcasting a panoramic video according to an embodiment of the present application.
  • the panoramic video live broadcast method includes:
  • S201 Receive video data of different angles collected by multiple video capture devices in real time.
  • the video recording is performed by a panoramic shooting device that includes multiple cameras, each of which is used to record video from a different angle.
  • for example, multiple GoPro motion cameras are attached to a panoramic shooting bracket, each camera is powered on and switched to video recording mode, and each GoPro motion camera establishes a Wi-Fi connection with the control device, through which all the GoPro motion cameras are controlled in unison.
  • each GoPro motion camera is controlled by the control device to start recording video.
  • each GoPro motion camera generates video data during recording, and the video data is transmitted to the control device over UDP (User Datagram Protocol).
  • the control device splices the video data received from the GoPro motion cameras in order to synthesize the panoramic video, collects the synthesized panoramic video by using HLS (HTTP Live Streaming) technology, and encodes the panoramic video data with H.264 to obtain the encoded panoramic video data.
  • the panoramic video data is cut according to the preset time interval; for example, a stream cutter cuts the panoramic video data into 5-second small files, and at the same time an index file containing pointers to each of the cut small files is generated, where the index file includes index information such as the identifier of each cut small file, its video start time, and its video end time.
  • S204 Establish a mapping relationship between the index file and the panoramic video data in the preset time interval, and send the panoramic video data in the preset time interval and the index file to the cloud server, so that the cloud server saves the index file and the panoramic video data according to the mapping relationship.
  • the index file is mapped to the cut small files, the index information of the cut small files is saved in an extended M3U playlist format file, and then the cut small files, the index file, and the M3U playlist format file recording the mapping relationship between them are sent to the cloud server.
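the extended M3U playlist mentioned above follows the public HLS playlist format; the sketch below emits a minimal playlist mapping segment durations to segment files (the file names and the fixed target duration are illustrative, not the application's actual naming):

```python
def build_playlist(segments, target_duration=5):
    """Build a minimal HLS-style extended M3U playlist.

    Each entry pairs an #EXTINF line (segment duration in seconds)
    with a segment file name, mirroring the index-to-segment mapping
    described above. Only the basic public playlist tags are used.
    """
    lines = ["#EXTM3U", f"#EXT-X-TARGETDURATION:{target_duration}"]
    for name, duration in segments:
        lines.append(f"#EXTINF:{duration:.1f},")
        lines.append(name)
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)
```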
  • the cloud server receives the download request sent by the client, and sequentially sends the panoramic video data in the preset time interval to the client according to the index file.
  • the cloud server may look up the panoramic video data in the corresponding preset time intervals according to the index file and send it to the client in order.
  • the client uses the index file to download the small files of the segmented panoramic video data; after downloading, the small files can be converted into VR video by using, for example, the Android 3D engine Rajawali or the Google Cardboard SDK, so that the user can watch the panoramic video live broadcast with a VR device.
  • the client may include, but is not limited to, a PC, a mobile phone, a tablet, a wearable device, and the like.
  • the panoramic video live broadcast method of the embodiment of the present application generates an index file corresponding to the panoramic video data in each preset time interval and sends the index file together with that panoramic video data to the cloud server, so that the cloud server can sequentially send the panoramic video data of multiple preset time intervals to the client for playback according to the index file.
  • FIG. 3 is a flowchart of a method for broadcasting a panoramic video according to another embodiment of the present application.
  • the panoramic video live broadcast method includes:
  • S301 Receive video data of different angles collected by multiple video capture devices in real time.
  • the video recording is performed by a panoramic shooting device that includes multiple cameras, each of which is used to record video from a different angle.
  • for example, multiple GoPro motion cameras are attached to a panoramic shooting bracket, each camera is powered on and switched to video recording mode, and each GoPro motion camera establishes a Wi-Fi connection with the control device, through which all the GoPro motion cameras are controlled in unison.
  • each GoPro motion camera is controlled by the control device to start recording video.
  • each GoPro motion camera generates video data during recording, and the video data is transmitted to the control device over UDP (User Datagram Protocol).
  • S302 Send the video data of the different angles in the preset time interval to the cloud server at every preset time interval, so that the cloud server splices the video data of the different angles in the preset time interval to generate panoramic video data, and provides the panoramic video live broadcast to the client according to the panoramic video data in the preset time interval.
  • the captured video data of the different angles is cut according to the preset time interval; for example, a stream cutter cuts the video data of each angle into 5-second small files, and the cut small files are sent to the cloud server.
  • the cloud server splices the cut small files in order to generate the cut panoramic video data, collects the generated panoramic video by using HLS (HTTP Live Streaming) technology, and encodes the panoramic video data with H.264 to obtain the encoded panoramic video data.
  • the cloud server can provide a panoramic video live broadcast service for client users watching the live broadcast.
  • the cloud server generates an index file corresponding to the panoramic video data in the preset time interval, establishes a mapping relationship between the index file and the panoramic video data in the preset time interval, and saves the index file and the panoramic video data according to the mapping relationship.
  • specifically, while the small files of the cut panoramic video data are generated, an index file containing pointers to each small file is generated, where the index file includes index information such as the identifier of each cut small file, its video start time, and its video end time; the index file is then mapped to the small files of the cut panoramic video data, the index information of the cut small files is saved in an extended M3U playlist format file, and the small files and the index file are saved according to the mapping relationship.
  • In an embodiment of the present application, the cloud server receives a download request sent by the client and sequentially sends the panoramic video data within the preset time intervals to the client according to the index file.
  • Specifically, the cloud server can look up the panoramic video data of the corresponding preset time intervals according to the index file and send it to the client in order.
  • The client then downloads the cut files of the panoramic video data using the index file; after downloading, the files can be converted into VR video using, for example, the Android 3D engine Rajawali or the Google Cardboard SDK, so that the user can watch the panoramic video live broadcast with a VR device.
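The client's sequential download step can be illustrated by parsing such a playlist: every non-comment line is a segment URI, and their order in the file is the playback (and download) order. This is a minimal sketch, not a full HLS client.

```python
# Sketch: derive the ordered list of segment files to download from an
# M3U8 index file. Lines starting with '#' are tags, all others are URIs.

def segment_download_order(m3u8_text):
    return [line for line in m3u8_text.splitlines()
            if line and not line.startswith("#")]

playlist = "#EXTM3U\n#EXTINF:5.0,\npano_000.ts\n#EXTINF:5.0,\npano_001.ts"
print(segment_download_order(playlist))  # ['pano_000.ts', 'pano_001.ts']
```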
  • With the panoramic video live broadcast method of this embodiment, the video data collected by the multiple video capture devices is sent to the cloud server at the preset time interval, so that the cloud server splices the video data and provides a panoramic video live broadcast to the client according to the spliced panoramic video data. The work of splicing the panoramic video is thus completed by the cloud server, which makes full use of the cloud server's resources and improves the efficiency of processing the video data.
  • At the same time, the client user can see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
  • FIG. 4 is a flowchart of a method for broadcasting a panoramic video according to another embodiment of the present application.
  • the panoramic video live broadcast method includes:
  • S401 Receive video data of different angles collected by multiple video capture devices in real time.
  • Specifically, video is recorded by a panoramic shooting device that includes a plurality of cameras, each used to record video from a different angle. For example, a plurality of Gopro action cameras are fixed on a panoramic shooting bracket; the cameras are powered on and switched to video recording mode; each Gopro camera establishes a Wi-Fi connection with a control device, through which all Gopro cameras are controlled in unison.
  • The control device then instructs all Gopro cameras to start recording. Each Gopro camera produces video data during recording, and the video data is transmitted to the control device in the form of UDP (User Datagram Protocol) datagrams.
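The UDP transport between a capture device and the control device can be sketched with a loopback datagram exchange. This is only an illustration of the transport mechanism: a real camera streams encoded video frames over the Wi-Fi link, and the addresses and payload here are placeholders.

```python
# Minimal sketch of the UDP transport described above: a capture device
# sends a datagram and the control device receives it on a local socket.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))           # control device; OS picks a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"frame-bytes", ("127.0.0.1", port))  # capture device sends a frame

data, addr = receiver.recvfrom(65535)     # buffer large enough for one datagram
print(data)                               # b'frame-bytes'
sender.close()
receiver.close()
```

UDP is connectionless and does not retransmit lost datagrams, which suits low-latency live capture where a dropped frame matters less than added delay.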
  • It should be understood that the number of video capture devices is, for example, six or eight, and can be set as required; this is not limited in the present application.
  • S402: Acquire current state information of the multiple video capture devices; if it is detected that at least one of the multiple video capture devices has stopped collecting video data, control the other video capture devices among them to stop collecting video data and generate error prompt information.
  • Specifically, if one of the video capture devices stops collecting video data, for example because a line fault cut its power, the other video capture devices can be promptly controlled to stop recording as well.
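The fail-stop control logic above can be sketched as follows. The `CaptureDevice` class and its methods are hypothetical stand-ins for the real camera control API, which in the described setup would operate over the Wi-Fi link.

```python
# Sketch of the fail-stop logic: if any capture device has stopped,
# stop all the others and produce an error prompt message.
# The device API here is hypothetical.

class CaptureDevice:
    def __init__(self, name):
        self.name = name
        self.recording = True

    def stop(self):
        self.recording = False

def monitor(devices):
    """If any device has stopped, stop the rest and return an error message."""
    if any(not d.recording for d in devices):
        for d in devices:
            d.stop()
        return f"capture halted; all {len(devices)} devices stopped"
    return None  # all devices still recording

cams = [CaptureDevice(f"cam{i}") for i in range(6)]
cams[2].stop()                              # one camera fails
print(monitor(cams))                        # capture halted; all 6 devices stopped
print(all(not c.recording for c in cams))   # True
```

Stopping every device together guarantees that no segment is produced with one angle missing, which is what would otherwise cause the splicing error described in the text.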
  • S403: Splice the video data of different angles to generate panoramic video data.
  • Specifically, the control device splices the video data received from the Gopro cameras in order to synthesize a panoramic video, captures the synthesized panoramic video using HLS (HTTP Live Streaming) adaptive-bitrate technology, and performs H.264 encoding on the panoramic video data to obtain encoded panoramic video data.
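The in-order splicing of per-angle frames can be illustrated with a toy example in which each frame is a 2D pixel grid and frames are joined side by side. This is a deliberate simplification: real panoramic stitching also aligns, warps, and blends overlapping edges between adjacent cameras.

```python
# Toy illustration of splicing per-angle frames "in order" into one
# panoramic frame: frames are pixel grids of equal height, joined
# left to right in camera order.

def splice_frames(frames):
    height = len(frames[0])
    assert all(len(f) == height for f in frames), "frames must share height"
    # Concatenate each row across all camera frames, left to right.
    return [sum((f[row] for f in frames), []) for row in range(height)]

left  = [[1, 1], [1, 1]]   # 2x2 "frame" from camera A
right = [[2, 2], [2, 2]]   # 2x2 "frame" from camera B
print(splice_frames([left, right]))  # [[1, 1, 2, 2], [1, 1, 2, 2]]
```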
  • S404: Send the panoramic video data within each preset time interval to the cloud server at the preset time interval, so that the cloud server provides a panoramic video live broadcast to the client according to the panoramic video data within the preset time interval.
  • the panoramic video data is cut according to a preset time interval, for example, the panoramic video data is cut into small files of 5 seconds by a stream cutter, and the cut small files are sent to the cloud server.
  • the synthesized panoramic video data is uploaded to the cloud server every preset time interval, so that the cloud server can provide a panoramic video live broadcast service for the client users who watch the live broadcast.
  • With the panoramic video live broadcast method of this embodiment, the current state information of the multiple video capture devices is acquired, and when one of the capture devices is determined not to be working, the other capture devices are controlled to stop as well.
  • This avoids splicing errors caused by missing video data of a certain angle when the collected video data is spliced into panoramic video data.
  • the present application also provides a panoramic video live broadcast device.
  • FIG. 5 is a schematic structural diagram of a panoramic video live broadcast apparatus according to an embodiment of the present application.
  • the panoramic video broadcast device includes: a receiving module 110, a processing module 120, and a sending module 130.
  • the receiving module 110 is configured to receive video data of different angles collected by multiple video collection devices in real time.
  • The processing module 120 is configured to splice the video data of different angles to generate panoramic video data.
  • the sending module 130 is configured to send the panoramic video data in the preset time interval to the cloud server at a preset time interval, so that the cloud server provides the panoramic video live broadcast to the client according to the panoramic video data in the preset time interval.
  • With the panoramic video live broadcast apparatus of this embodiment of the present application, the video data collected in real time by multiple video capture devices is spliced, and the spliced panoramic video data is sent to the server at the preset time interval, so that the server can provide a live panoramic video broadcast to the client according to the panoramic video data, allowing the client user to see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
  • FIG. 6 is a schematic structural diagram of a panoramic video live broadcast apparatus according to an embodiment of the present application.
  • the panoramic video broadcast device includes: a receiving module 110, a processing module 120, a sending module 130, a first generating module 140, and an establishing module 150.
  • the first generation module 140 is configured to generate an index file corresponding to the panoramic video data in the preset time interval.
  • the establishing module 150 is configured to establish a mapping relationship between the index file and the panoramic video data in the preset time interval.
  • the sending module 130 is further configured to send the panoramic video data and the index file in the preset time interval to the cloud server, so that the cloud server saves the index file and the panoramic video data in the preset time interval according to the mapping relationship.
  • With the panoramic video live broadcast apparatus of this embodiment of the present application, an index file corresponding to the panoramic video data within the preset time interval is generated and sent to the cloud server together with the panoramic video data, so that when providing the live panoramic broadcast to the client according to the panoramic video data, the cloud server can send the panoramic video data of multiple preset time intervals to the client in order for playback according to the index file.
  • FIG. 7 is a schematic structural diagram of a panoramic video live broadcast apparatus according to another embodiment of the present application.
  • the panoramic video broadcast device includes: a receiving module 110 , a processing module 120 , a sending module 130 , a first generating module 140 , an establishing module 150 , an obtaining module 160 , a control module 170 , and a second generating module 180 .
  • the obtaining module 160 is configured to acquire current state information of multiple video collection devices.
  • the control module 170 is configured to control other video capture devices of the plurality of video capture devices to stop collecting video data when detecting that at least one of the plurality of video capture devices stops collecting video data.
  • the second generation module 180 is configured to generate error prompt information.
  • With the panoramic video live broadcast apparatus of this embodiment of the present application, the current state information of the multiple video capture devices is acquired, and when one of the capture devices is determined not to be working, the other capture devices are controlled to stop as well, which avoids splicing errors caused by missing video data of a certain angle when the collected video data is spliced into panoramic video data.
  • the present application also proposes a panoramic video live broadcast system.
  • FIG. 8 is a schematic structural diagram of a panoramic video live broadcast system according to an embodiment of the present application.
  • the panoramic video live broadcast system includes: a processing device 100, a plurality of video capture devices 200, and a cloud server 300.
  • The plurality of video capture devices 200 are configured to collect video data of different angles in real time.
  • The processing device 100 is configured to receive the video data of different angles collected in real time, splice the video data of different angles to generate panoramic video data, and send the panoramic video data within each preset time interval to the cloud server 300 at the preset time interval.
  • the cloud server 300 is configured to provide a panoramic video live broadcast to the client according to the panoramic video data in the preset time interval.
  • With the panoramic video live broadcast system of this embodiment, the processing device splices the video data collected in real time by the multiple video capture devices and sends the spliced panoramic video data to the server at the preset time interval, so that the server can provide a live panoramic video broadcast to the client according to the panoramic video data, allowing the client user to see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
  • In an embodiment of the present application, the processing device 100 is further configured to generate an index file corresponding to the panoramic video data within the preset time interval, establish a mapping relationship between the index file and the panoramic video data within the preset time interval, and send the panoramic video data within the preset time interval together with the index file to the cloud server 300.
  • the cloud server 300 is further configured to save the index file and the panoramic video data in the preset time interval according to the mapping relationship. Therefore, when the cloud server 300 provides the live video of the panoramic video to the client according to the panoramic video data, the cloud server 300 can sequentially send the panoramic video data in multiple preset time intervals to the client for playing according to the index file.
  • the processing apparatus 100 is further configured to send video data of different angles within a preset time interval to the cloud server 300 every preset time interval.
  • The cloud server 300 is further configured to splice the video data of different angles within the preset time interval to generate panoramic video data. The video data is thus spliced by the cloud server 300, which provides the live panoramic broadcast to the client according to the spliced panoramic video data; completing the splicing work on the cloud server makes full use of the cloud server 300's resources and improves the efficiency of processing the video data.
  • the cloud server 300 is further configured to generate an index file corresponding to the panoramic video data in the preset time interval, and establish a mapping relationship between the index file and the panoramic video data in the preset time interval, and according to The mapping relationship saves the index file and the panoramic video data within a preset time interval.
  • In an embodiment of the present application, the processing device 100 is further configured to acquire current state information of the multiple video capture devices and, if it is detected that at least one of the multiple video capture devices has stopped collecting video data, control the other video capture devices among them to stop collecting video data and generate error prompt information.
  • Thus, by acquiring the current state information of the multiple video capture devices 200 and controlling the other capture devices 200 to stop when one is determined not to be working, splicing errors caused by missing video data of a certain angle are avoided when the collected video data is spliced into a panoramic video.
  • the cloud server 300 is further configured to receive a download request sent by the client, and sequentially send the panoramic video data in the preset time interval to the client according to the index file.
  • the present application also proposes a video source control device.
  • FIG. 9 is a schematic structural diagram of a video source control device according to an embodiment of the present application.
  • As shown in FIG. 9, the video source control device 1000 includes a processor 1001, a memory 1002, a power supply circuit 1003, an input/output (I/O) interface 1004, and a communication component 1005.
  • the processor 1001 and the memory 1002 are disposed on a circuit board.
  • the power circuit 1003 is used to power various circuits or devices of the video source control device 1000.
  • the memory 1002 is for storing executable program code.
  • The processor 1001 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 1002, so as to perform the following steps:
  • receive video data of different angles collected in real time by multiple video capture devices;
  • splice the video data of different angles to generate panoramic video data; and
  • send the panoramic video data within each preset time interval to the cloud server at the preset time interval, so that the cloud server provides a panoramic video live broadcast to the client according to the panoramic video data within the preset time interval.
  • With the video source control device of this embodiment of the present application, the video data collected in real time by multiple video capture devices is spliced, and the spliced panoramic video data is sent to the server at the preset time interval, so that the server can provide a live panoramic video broadcast to the client according to the panoramic video data, allowing the client user to see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
  • the present application also proposes a storage medium.
  • The storage medium is used to store an application program, where the application program is used to perform the panoramic video live broadcast method of the embodiments of the present application. The panoramic video live broadcast method includes:
  • receiving video data of different angles collected in real time by multiple video capture devices;
  • splicing the video data of different angles to generate panoramic video data; and
  • sending the panoramic video data within each preset time interval to the cloud server at the preset time interval.
  • With the storage medium of this embodiment, the application program splices the video data collected in real time by the multiple video capture devices and sends the spliced panoramic video data to the server at the preset time interval, so that the server can provide a live panoramic video broadcast to the client according to the panoramic video data, allowing the client user to see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
  • The present application further provides an application program, where the application program is used to perform the panoramic video live broadcast method of the embodiments of the present application. The panoramic video live broadcast method includes:
  • receiving video data of different angles collected in real time by multiple video capture devices;
  • splicing the video data of different angles to generate panoramic video data; and
  • sending the panoramic video data within each preset time interval to the cloud server at the preset time interval.
  • With the application program of this embodiment of the present application, the video data collected in real time by the multiple video capture devices is spliced, and the spliced panoramic video data is sent to the server at the preset time interval, so that the server can provide a live panoramic video broadcast to the client according to the panoramic video data, allowing the client user to see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
  • It should be noted that the descriptions of the apparatus, system, and device embodiments are relatively brief; for relevant details, reference may be made to the description of the method embodiments.
  • portions of the application can be implemented in hardware, software, firmware, or a combination thereof.
  • In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they can be implemented by any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
  • It should be noted that the apparatus disclosed in the embodiments may be implemented in other ways.
  • The device embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical or in other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • The technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Studio Devices (AREA)

Abstract

The present application provides a panoramic video live broadcast method, apparatus and system, and a video source control device. The panoramic video live broadcast method includes: receiving video data of different angles collected in real time by a plurality of video capture devices; splicing the video data of different angles to generate panoramic video data; and sending the panoramic video data within each preset time interval to a cloud server at the preset time interval, so that the cloud server provides a panoramic video live broadcast to a client according to the panoramic video data within the preset time interval. With the panoramic video live broadcast method of the embodiments of the present application, the client user can see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.

Description

Panoramic video live broadcast method, apparatus and system, and video source control device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 201610245526.5, entitled "Panoramic video live broadcast method, apparatus and system, and video source control device", filed by Beijing Kingsoft Internet Security Software Co., Ltd. (北京金山安全软件有限公司) on April 19, 2016.
Technical field
The present application relates to the field of mobile Internet technologies, and in particular to a panoramic video live broadcast method, apparatus and system, and a video source control device.
Background
Live video streaming has become increasingly popular. Looking back at its history, live video broadcasting evolved from video on demand: broadcasting over the Internet with streaming media technology makes it possible to distribute video content in real time and comprehensively.
At present, as live broadcast technology spreads, more and more users broadcast live video from mobile terminals. For example, a user films scenery encountered while traveling with a mobile phone camera, transmits the video data to a server over the mobile network, and the server processes it and shares it as a live broadcast with other users' mobile terminals.
However, a problem at present is that when watching a live broadcast on a mobile terminal, a user can only see the area filmed by the broadcaster's phone camera, and the viewed picture changes only as that camera moves, which does not provide the user with a good visual experience.
Summary
The present application aims to solve, at least to some extent, one of the technical problems in the related art.
To this end, a first objective of the present application is to propose a panoramic video live broadcast method that allows the client user to see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
A second objective of the present application is to propose a panoramic video live broadcast apparatus.
A third objective of the present application is to propose a panoramic video live broadcast system.
A fourth objective of the present application is to propose a video source control device.
A fifth objective of the present application is to propose a storage medium.
A sixth objective of the present application is to propose an application program.
To achieve the above objectives, an embodiment of the first aspect of the present application proposes a panoramic video live broadcast method, including: receiving video data of different angles collected in real time by a plurality of video capture devices; splicing the video data of different angles to generate panoramic video data; and sending the panoramic video data within each preset time interval to a cloud server at the preset time interval.
With the panoramic video live broadcast method of the embodiments of the present application, the video data collected in real time by the plurality of video capture devices is spliced, and the spliced panoramic video data is sent to the server at the preset time interval, so that the server can provide a live panoramic video broadcast to the client according to the panoramic video data, allowing the client user to see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
In an embodiment of the present application, sending the panoramic video data within the preset time interval to the cloud server at each preset time interval includes: generating an index file corresponding to the panoramic video data within the preset time interval; establishing a mapping relationship between the index file and the panoramic video data within the preset time interval; and sending the panoramic video data within the preset time interval together with the index file to the cloud server.
In an embodiment of the present application, splicing the video data of different angles to generate the panoramic video data includes: splicing the video data of different angles in order to synthesize a panoramic video; capturing the synthesized panoramic video using adaptive-bitrate technology; and encoding the captured panoramic video data to obtain encoded panoramic video data.
Further, in an embodiment of the present application, after receiving the video data of different angles collected in real time by the plurality of video capture devices, the method further includes: sending the video data of different angles within each preset time interval to the cloud server at the preset time interval.
Further, in an embodiment of the present application, the method includes acquiring current state information of the plurality of video capture devices and, if it is detected that at least one of the plurality of video capture devices has stopped collecting video data, controlling the other video capture devices among the plurality to stop collecting video data and generating error prompt information.
To achieve the above objectives, an embodiment of the second aspect of the present application proposes a panoramic video live broadcast apparatus, including: a receiving module configured to receive video data of different angles collected in real time by a plurality of video capture devices; a processing module configured to splice the video data of different angles to generate panoramic video data; and a sending module configured to send the panoramic video data within each preset time interval to a cloud server at the preset time interval.
With the panoramic video live broadcast apparatus of the embodiments of the present application, the video data collected in real time by the plurality of video capture devices is spliced, and the spliced panoramic video data is sent to the server at the preset time interval, so that the server can provide a live panoramic video broadcast to the client according to the panoramic video data, allowing the client user to see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
Further, in an embodiment of the present application, the panoramic video live broadcast apparatus further includes: a first generating module configured to generate an index file corresponding to the panoramic video data within the preset time interval; and an establishing module configured to establish a mapping relationship between the index file and the panoramic video data within the preset time interval. The sending module is further configured to send the panoramic video data within the preset time interval together with the index file to the cloud server, so that the cloud server saves the index file and the panoramic video data within the preset time interval according to the mapping relationship.
In an embodiment of the present application, the processing module is further configured to splice the video data of different angles in order to synthesize a panoramic video, capture the synthesized panoramic video using adaptive-bitrate technology, and encode the captured panoramic video data to obtain encoded panoramic video data.
In an embodiment of the present application, the sending module is further configured to send the video data of different angles within each preset time interval to the cloud server at the preset time interval.
Further, in an embodiment of the present application, the panoramic video live broadcast apparatus further includes: an obtaining module configured to acquire current state information of the plurality of video capture devices; a control module configured to, upon detecting that at least one of the plurality of video capture devices has stopped collecting video data, control the other video capture devices among the plurality to stop collecting video data; and a second generating module configured to generate error prompt information.
To achieve the above objectives, an embodiment of the third aspect of the present application proposes a panoramic video live broadcast system, including a plurality of video capture devices, a processing device, and a cloud server, wherein the plurality of video capture devices are configured to collect video data of different angles in real time; the processing device is configured to receive the video data of different angles collected in real time, splice the video data of different angles to generate panoramic video data, and send the panoramic video data within each preset time interval to the cloud server at the preset time interval; and the cloud server is configured to provide a panoramic video live broadcast to a client according to the panoramic video data within the preset time interval.
With the panoramic video live broadcast system of the embodiments of the present application, the processing device splices the video data collected in real time by the plurality of video capture devices and sends the spliced panoramic video data to the server at the preset time interval, so that the server can provide a live panoramic video broadcast to the client according to the panoramic video data, allowing the client user to see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
In an embodiment of the present application, the processing device is further configured to generate an index file corresponding to the panoramic video data within the preset time interval, establish a mapping relationship between the index file and the panoramic video data within the preset time interval, and send the panoramic video data within the preset time interval together with the index file to the cloud server; the cloud server is further configured to save the index file and the panoramic video data within the preset time interval according to the mapping relationship.
In an embodiment of the present application, the processing device is further configured to splice the video data of different angles in order to synthesize a panoramic video, capture the synthesized panoramic video using adaptive-bitrate technology, and encode the captured panoramic video data to obtain encoded panoramic video data.
In an embodiment of the present application, the processing device is further configured to send the video data of different angles within each preset time interval to the cloud server at the preset time interval; the cloud server is further configured to splice the video data of different angles within the preset time interval to generate panoramic video data.
In an embodiment of the present application, the cloud server is further configured to generate an index file corresponding to the panoramic video data within the preset time interval, establish a mapping relationship between the index file and the panoramic video data within the preset time interval, and save the index file and the panoramic video data within the preset time interval according to the mapping relationship.
In an embodiment of the present application, the processing device is further configured to acquire current state information of the plurality of video capture devices and, if it is detected that at least one of the plurality of video capture devices has stopped collecting video data, control the other video capture devices among the plurality to stop collecting video data and generate error prompt information.
In an embodiment of the present application, the cloud server is further configured to receive a download request sent by the client and sequentially send the panoramic video data within the preset time intervals to the client according to the index file.
To achieve the above objectives, an embodiment of the fourth aspect of the present application proposes a video source control device, including one or more of the following components: a processor, a memory, a power supply circuit, an input/output (I/O) interface, and a communication component, wherein the processor and the memory are disposed on a circuit board; the power supply circuit is configured to supply power to the circuits or devices of the video source control device; the memory is configured to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the panoramic video live broadcast method of the embodiments of the first aspect of the present application.
With the video source control device of the embodiments of the present application, the video data collected in real time by the plurality of video capture devices is spliced, and the spliced panoramic video data is sent to the server at the preset time interval, so that the server can provide a live panoramic video broadcast to the client according to the panoramic video data, allowing the client user to see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
To achieve the above objectives, an embodiment of the fifth aspect of the present application proposes a storage medium, wherein the storage medium stores one or more programs that perform the panoramic video live broadcast method of the embodiments of the first aspect of the present application.
With the storage medium of the embodiments of the present application, the program splices the video data collected in real time by the plurality of video capture devices and sends the spliced panoramic video data to the server at the preset time interval, so that the server can provide a live panoramic video broadcast to the client according to the panoramic video data, allowing the client user to see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
To achieve the above objectives, an embodiment of the sixth aspect of the present application proposes an application program, wherein the application program is configured to perform, when run, the panoramic video live broadcast method of the embodiments of the present application.
With the application program of the embodiments of the present application, the video data collected in real time by the plurality of video capture devices is spliced, and the spliced panoramic video data is sent to the server at the preset time interval, so that the server can provide a live panoramic video broadcast to the client according to the panoramic video data, allowing the client user to see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
Brief description of the drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a flowchart of a panoramic video live broadcast method according to an embodiment of the present application;
FIG. 2 is a flowchart of a panoramic video live broadcast method according to a specific embodiment of the present application;
FIG. 3 is a flowchart of a panoramic video live broadcast method according to another embodiment of the present application;
FIG. 4 is a flowchart of a panoramic video live broadcast method according to another embodiment of the present application;
FIG. 5 is a schematic structural diagram of a panoramic video live broadcast apparatus according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a panoramic video live broadcast apparatus according to a specific embodiment of the present application;
FIG. 7 is a schematic structural diagram of a panoramic video live broadcast apparatus according to another embodiment of the present application;
FIG. 8 is a schematic structural diagram of a panoramic video live broadcast system according to an embodiment of the present application; and
FIG. 9 is a schematic structural diagram of a video source control device according to an embodiment of the present application.
Detailed description
Embodiments of the present application are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, intended to explain the present application, and are not to be construed as limiting it.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality of" means two or more, unless explicitly and specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code including one or more executable instructions for implementing specific logical functions or steps of the process; and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
FIG. 1 is a flowchart of a panoramic video live broadcast method according to an embodiment of the present application.
As shown in FIG. 1, the panoramic video live broadcast method includes:
S101: Receive video data of different angles collected in real time by a plurality of video capture devices.
Specifically, video is recorded by a panoramic shooting device that includes a plurality of cameras, each used to record video from a different angle. For example, a plurality of Gopro action cameras are fixed on a panoramic shooting bracket; the cameras are powered on and switched to video recording mode; each Gopro camera establishes a Wi-Fi connection with a control device, through which all Gopro cameras are controlled in unison.
The control device then instructs all Gopro cameras to start recording. Each Gopro camera produces video data during recording, and the video data is transmitted to the control device in the form of UDP (User Datagram Protocol) datagrams.
It should be understood that the number of video capture devices is, for example, six or eight, and can be set as required; this is not limited in the present application.
S102: Splice the video data of different angles to generate panoramic video data.
Specifically, the control device splices the video data received from the Gopro cameras in order to synthesize a panoramic video, captures the synthesized panoramic video using HLS (HTTP Live Streaming) adaptive-bitrate technology, and performs H.264 encoding on the panoramic video data to obtain encoded panoramic video data.
S103: Send the panoramic video data within each preset time interval to a cloud server at the preset time interval, so that the cloud server provides a panoramic video live broadcast to a client according to the panoramic video data within the preset time interval.
Specifically, the panoramic video data is cut according to the preset time interval; for example, a stream cutter splits the panoramic video data into 5-second files, and the cut files are sent to the cloud server.
In other words, the synthesized panoramic video data is uploaded to the cloud server once every preset time interval, so that the cloud server can provide a panoramic live broadcast service to client users watching the live broadcast.
With the panoramic video live broadcast method of the embodiments of the present application, the video data collected in real time by the plurality of video capture devices is spliced, and the spliced panoramic video data is sent to the server at the preset time interval, so that the server can provide a live panoramic video broadcast to the client according to the panoramic video data, allowing the client user to see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
FIG. 2 is a flowchart of a panoramic video live broadcast method according to a specific embodiment of the present application.
As shown in FIG. 2, the panoramic video live broadcast method includes:
S201: Receive video data of different angles collected in real time by a plurality of video capture devices.
Specifically, video is recorded by a panoramic shooting device that includes a plurality of cameras, each used to record video from a different angle. For example, a plurality of Gopro action cameras are fixed on a panoramic shooting bracket; the cameras are powered on and switched to video recording mode; each Gopro camera establishes a Wi-Fi connection with a control device, through which all Gopro cameras are controlled in unison.
The control device then instructs all Gopro cameras to start recording. Each Gopro camera produces video data during recording, and the video data is transmitted to the control device in the form of UDP (User Datagram Protocol) datagrams.
S202: Splice the video data of different angles to generate panoramic video data.
Specifically, the control device splices the video data received from the Gopro cameras in order to synthesize a panoramic video, captures the synthesized panoramic video using HLS (HTTP Live Streaming) adaptive-bitrate technology, and performs H.264 encoding on the panoramic video data to obtain encoded panoramic video data.
S203: Generate an index file corresponding to the panoramic video data within the preset time interval.
Specifically, the panoramic video data is cut according to the preset time interval; for example, a stream cutter splits the panoramic video data into 5-second files, and while the cut files are generated, an index file containing pointers to these files is generated, where the index file includes index information such as the identifiers of the cut files, the video start time, and the video end time.
S204: Establish a mapping relationship between the index file and the panoramic video data within the preset time interval, and send the panoramic video data within the preset time interval together with the index file to the cloud server, so that the cloud server saves the index file and the panoramic video data within the preset time interval according to the mapping relationship.
Specifically, a mapping relationship is established between the index file and the cut files, the index information of the cut files is saved in an extended M3U playlist format file, and the cut files, the index file, and the M3U playlist file that records their mapping relationship are sent to the cloud server.
In an embodiment of the present application, the cloud server receives a download request sent by the client and sequentially sends the panoramic video data within the preset time intervals to the client according to the index file. Specifically, the cloud server can look up the panoramic video data of the corresponding preset time intervals according to the index file and send it to the client in order.
The client then downloads the cut files of the panoramic video data using the index file; after downloading, the files can be converted into VR video using, for example, the Android 3D engine Rajawali or the Google Cardboard SDK, so that the user can watch the panoramic live broadcast with a VR device.
It should be understood that the client may include, but is not limited to, one of a PC, a mobile phone, a tablet computer, a wearable device, and the like.
With the panoramic video live broadcast method of the embodiments of the present application, an index file corresponding to the panoramic video data within the preset time interval is generated and sent to the cloud server together with the panoramic video data, so that when providing the live panoramic broadcast to the client according to the panoramic video data, the cloud server can send the panoramic video data of multiple preset time intervals to the client in order for playback according to the index file.
FIG. 3 is a flowchart of a panoramic video live broadcast method according to another embodiment of the present application.
As shown in FIG. 3, the panoramic video live broadcast method includes:
S301: Receive video data of different angles collected in real time by a plurality of video capture devices.
Specifically, video is recorded by a panoramic shooting device that includes a plurality of cameras, each used to record video from a different angle. For example, a plurality of Gopro action cameras are fixed on a panoramic shooting bracket; the cameras are powered on and switched to video recording mode; each Gopro camera establishes a Wi-Fi connection with a control device, through which all Gopro cameras are controlled in unison.
The control device then instructs all Gopro cameras to start recording. Each Gopro camera produces video data during recording, and the video data is transmitted to the control device in the form of UDP (User Datagram Protocol) datagrams.
S302: Send the video data of different angles within each preset time interval to the cloud server at the preset time interval, so that the cloud server splices the video data of different angles within the preset time interval to generate panoramic video data and provides a panoramic video live broadcast to the client according to the panoramic video data within the preset time interval.
Specifically, the collected video data of the different angles is cut according to the preset time interval; for example, a stream cutter splits each angle's video data into 5-second files, and the cut files are sent to the cloud server.
The cloud server then splices the cut files in order to generate the cut panoramic video data, captures the synthesized panoramic video using HLS (HTTP Live Streaming) adaptive-bitrate technology, and performs H.264 encoding on the panoramic video data to obtain encoded panoramic video data.
The cloud server can then provide a panoramic live broadcast service to client users watching the live broadcast.
In an embodiment of the present application, the cloud server generates an index file corresponding to the panoramic video data within the preset time interval, establishes a mapping relationship between the index file and the panoramic video data within the preset time interval, and saves the index file and the panoramic video data within the preset time interval according to the mapping relationship. Specifically, while the cut files of the panoramic video data are generated, an index file containing pointers to these files is generated, where the index file includes index information such as the identifiers of the cut files, the video start time, and the video end time. A mapping relationship is then established between the index file and the cut files of the panoramic video data, the index information of the cut files is saved in an extended M3U playlist format file, and the cut files and the index file are sent to the cloud server.
In an embodiment of the present application, the cloud server receives a download request sent by the client and sequentially sends the panoramic video data within the preset time intervals to the client according to the index file. Specifically, the cloud server can look up the panoramic video data of the corresponding preset time intervals according to the index file and send it to the client in order.
The client then downloads the cut files of the panoramic video data using the index file; after downloading, the files can be converted into VR video using, for example, the Android 3D engine Rajawali or the Google Cardboard SDK, so that the user can watch the panoramic live broadcast with a VR device.
With the panoramic video live broadcast method of the embodiments of the present application, the video data collected in real time by the multiple video capture devices is sent to the cloud server at the preset time interval, so that the cloud server splices the video data and provides the live panoramic broadcast to the client according to the spliced panoramic video data. The work of splicing the panoramic video is thus completed by the cloud server, which makes full use of the cloud server's resources and improves the efficiency of processing the video data, while the client user can see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
FIG. 4 is a flowchart of a panoramic video live broadcast method according to another embodiment of the present application.
As shown in FIG. 4, the panoramic video live broadcast method includes:
S401: Receive video data of different angles collected in real time by a plurality of video capture devices.
Specifically, video is recorded by a panoramic shooting device that includes a plurality of cameras, each used to record video from a different angle. For example, a plurality of Gopro action cameras are fixed on a panoramic shooting bracket; the cameras are powered on and switched to video recording mode; each Gopro camera establishes a Wi-Fi connection with a control device, through which all Gopro cameras are controlled in unison.
The control device then instructs all Gopro cameras to start recording. Each Gopro camera produces video data during recording, and the video data is transmitted to the control device in the form of UDP (User Datagram Protocol) datagrams.
It should be understood that the number of video capture devices is, for example, six or eight, and can be set as required; this is not limited in the present application.
S402: Acquire current state information of the multiple video capture devices; if it is detected that at least one of the multiple video capture devices has stopped collecting video data, control the other video capture devices among them to stop collecting video data and generate error prompt information.
Specifically, if it is detected that one of the video capture devices has stopped collecting video data, for example because a line fault cut its power, the other video capture devices can be promptly controlled to stop recording as well.
S403: Splice the video data of different angles to generate panoramic video data.
Specifically, the control device splices the video data received from the Gopro cameras in order to synthesize a panoramic video, captures the synthesized panoramic video using HLS (HTTP Live Streaming) adaptive-bitrate technology, and performs H.264 encoding on the panoramic video data to obtain encoded panoramic video data.
S404: Send the panoramic video data within each preset time interval to the cloud server at the preset time interval, so that the cloud server provides a panoramic video live broadcast to the client according to the panoramic video data within the preset time interval.
Specifically, the panoramic video data is cut according to the preset time interval; for example, a stream cutter splits the panoramic video data into 5-second files, and the cut files are sent to the cloud server.
In other words, the synthesized panoramic video data is uploaded to the cloud server once every preset time interval, so that the cloud server can provide a panoramic live broadcast service to client users watching the live broadcast.
With the panoramic video live broadcast method of the embodiments of the present application, the current state information of the multiple video capture devices is acquired, and when one of the capture devices is determined not to be working, the other capture devices are controlled to stop as well, which avoids splicing errors caused by missing video data of a certain angle when the collected video data is spliced into panoramic video data.
To implement the above embodiments, the present application further proposes a panoramic video live broadcast apparatus.
FIG. 5 is a schematic structural diagram of a panoramic video live broadcast apparatus according to an embodiment of the present application.
As shown in FIG. 5, the panoramic video live broadcast apparatus includes a receiving module 110, a processing module 120, and a sending module 130.
Specifically, the receiving module 110 is configured to receive video data of different angles collected in real time by a plurality of video capture devices.
The processing module 120 is configured to splice the video data of different angles to generate panoramic video data.
The sending module 130 is configured to send the panoramic video data within each preset time interval to the cloud server at the preset time interval, so that the cloud server provides a panoramic video live broadcast to the client according to the panoramic video data within the preset time interval.
It should be noted that the foregoing explanation of the embodiments of the panoramic video live broadcast method also applies to the panoramic video live broadcast apparatus of this embodiment; the implementation principle is similar and is not repeated here.
With the panoramic video live broadcast apparatus of the embodiments of the present application, the video data collected in real time by the multiple video capture devices is spliced, and the spliced panoramic video data is sent to the server at the preset time interval, so that the server can provide a live panoramic video broadcast to the client according to the panoramic video data, allowing the client user to see the entire panorama around the other party, learn the real environment information around the other party, and enjoy an enhanced visual experience when watching the live video.
FIG. 6 is a schematic structural diagram of a panoramic video live broadcast apparatus according to a specific embodiment of the present application.
As shown in FIG. 6, the panoramic video live broadcast apparatus includes a receiving module 110, a processing module 120, a sending module 130, a first generating module 140, and an establishing module 150.
Specifically, the first generating module 140 is configured to generate an index file corresponding to the panoramic video data within the preset time interval.
The establishing module 150 is configured to establish a mapping relationship between the index file and the panoramic video data within the preset time interval.
The sending module 130 is further configured to send the panoramic video data within the preset time interval together with the index file to the cloud server, so that the cloud server saves the index file and the panoramic video data within the preset time interval according to the mapping relationship.
It should be noted that the foregoing explanation of the embodiments of the panoramic video live broadcast method also applies to the panoramic video live broadcast apparatus of this embodiment; the implementation principle is similar and is not repeated here.
With the panoramic video live broadcast apparatus of the embodiments of the present application, an index file corresponding to the panoramic video data within the preset time interval is generated and sent to the cloud server together with the panoramic video data, so that when providing the live panoramic broadcast to the client according to the panoramic video data, the cloud server can send the panoramic video data of multiple preset time intervals to the client in order for playback according to the index file.
FIG. 7 is a schematic structural diagram of a panoramic video live broadcast apparatus according to another embodiment of the present application.
As shown in FIG. 7, the panoramic video live broadcast apparatus includes a receiving module 110, a processing module 120, a sending module 130, a first generating module 140, an establishing module 150, an obtaining module 160, a control module 170, and a second generating module 180.
Specifically, the obtaining module 160 is configured to acquire current state information of the multiple video capture devices.
The control module 170 is configured to, upon detecting that at least one of the multiple video capture devices has stopped collecting video data, control the other video capture devices among them to stop collecting video data.
The second generating module 180 is configured to generate error prompt information.
It should be noted that the foregoing explanation of the embodiments of the panoramic video live broadcast method also applies to the panoramic video live broadcast apparatus of this embodiment; the implementation principle is similar and is not repeated here.
With the panoramic video live broadcast apparatus of the embodiments of the present application, the current state information of the multiple video capture devices is acquired, and when one of the capture devices is determined not to be working, the other capture devices are controlled to stop as well, which avoids splicing errors caused by missing video data of a certain angle when the collected video data is spliced into panoramic video data.
To implement the above embodiments, this application further proposes a panoramic live streaming system.
Fig. 8 is a schematic structural diagram of a panoramic live streaming system according to an embodiment of this application.
As shown in Fig. 8, the panoramic live streaming system includes: a processing device 100, multiple video capture devices 200, and a cloud server 300.
The multiple video capture devices 200 are configured to capture video data from different angles in real time.
The processing device 100 is configured to receive the video data captured in real time from different angles, stitch the video data from the different angles to generate panoramic video data, and send, at every preset time interval, the panoramic video data within that interval to the cloud server 300.
The cloud server 300 is configured to provide a panoramic live video stream to clients based on the panoramic video data within the preset time interval.
It should be noted that the foregoing explanation of the panoramic live streaming method embodiments also applies to the panoramic live streaming system of this embodiment; the implementation principle is similar and is not repeated here.
In the panoramic live streaming system of this embodiment of the application, the processing device stitches the video data captured in real time by the multiple video capture devices and sends the stitched panoramic video data to the server at a preset time interval, so that the server can provide panoramic live video to clients based on the panoramic video data. Client users can thus see the entire panorama around the other party and understand the real environment around them, improving the visual experience of watching the live broadcast.
In an embodiment of this application, the processing device 100 is further configured to generate an index file corresponding to the panoramic video data within the preset time interval, establish a mapping relationship between the index file and the panoramic video data within the preset time interval, and send the panoramic video data within the preset time interval together with the index file to the cloud server 300. The cloud server 300 is further configured to store the index file and the panoramic video data within the preset time interval according to the mapping relationship. Thus, when the cloud server 300 provides panoramic live video to clients based on the panoramic video data, it can send the panoramic video data of multiple preset time intervals to the client in order, according to the index file, for playback.
In an embodiment of this application, the processing device 100 is further configured to send, at every preset time interval, the video data from the different angles within that interval to the cloud server 300. The cloud server 300 is further configured to stitch the video data from the different angles within the preset time interval to generate panoramic video data. In this way, the cloud server 300 performs the video stitching and provides panoramic live video to clients based on the stitched panoramic video data; offloading the panoramic stitching to the cloud server makes full use of the cloud server 300's resources and improves the efficiency of processing the video data.
In an embodiment of this application, the cloud server 300 is further configured to generate an index file corresponding to the panoramic video data within the preset time interval, establish a mapping relationship between the index file and the panoramic video data within the preset time interval, and store the index file and the panoramic video data within the preset time interval according to the mapping relationship.
In an embodiment of this application, the processing device 100 is further configured to obtain current status information of the multiple video capture devices; if it is detected that at least one of the multiple video capture devices has stopped capturing video data, the other video capture devices among the multiple video capture devices are controlled to stop capturing video data, and an error prompt message is generated. Thus, by obtaining the current status information of the multiple video capture devices 200 and controlling the other video capture devices 200 to stop working when one video capture device 200 is determined not to be working, stitching errors caused by missing video data from a certain angle can be avoided when the captured video data is stitched into panoramic video data.
In an embodiment of this application, the cloud server 300 is further configured to receive a download request sent by a client and send the panoramic video data within the preset time intervals to the client in sequence according to the index file.
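The server-side ordered delivery can be sketched as follows (a toy model: the function name, the in-memory `store` mapping, and reading order from an M3U8-style index are illustrative assumptions, not the patent's server):

```python
def segments_in_order(index_text, store):
    """Given an index file and a mapping of segment name -> stored data,
    yield the panoramic video segments in playback order, as the server
    would when answering a client's download request."""
    for line in index_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):  # non-tag lines name segments
            yield line, store[line]
```

Because the segment order comes from the index rather than from the storage layer, the client always receives the intervals in the sequence in which they were produced.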
To implement the above embodiments, this application further proposes a video source control device.
Fig. 9 is a schematic structural diagram of a video source control device according to an embodiment of this application.
As shown in Fig. 9, the video source control device 1000 includes a processor 1001, a memory 1002, a power supply circuit 1003, an input/output (I/O) interface 1004, and a communication component 1005.
The processor 1001 and the memory 1002 are arranged on a circuit board. The power supply circuit 1003 supplies power to the circuits or components of the video source control device 1000. The memory 1002 stores executable program code. The processor 1001 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 1002, so as to perform the following steps:
receiving video data captured in real time from different angles by multiple video capture devices;
stitching the video data from the different angles to generate panoramic video data; and
sending, at every preset time interval, the panoramic video data within that interval to a cloud server, so that the cloud server provides a panoramic live video stream to clients based on the panoramic video data within the preset time interval.
It should be noted that the foregoing explanation of the panoramic live streaming method embodiments also applies to the video source control device of this embodiment; the implementation principle is similar and is not repeated here.
In the video source control device of this embodiment of the application, the video data captured in real time by the multiple video capture devices is stitched, and the stitched panoramic video data is sent to the server at a preset time interval, so that the server can provide panoramic live video to clients based on the panoramic video data. Client users can thus see the entire panorama around the other party and understand the real environment around them, improving the visual experience of watching the live broadcast.
To implement the above embodiments, this application further proposes a storage medium. The storage medium is configured to store an application program which, when run, performs the panoramic live streaming method of the embodiments of this application, the method including:
receiving video data captured in real time from different angles by multiple video capture devices;
stitching the video data from the different angles to generate panoramic video data; and
sending, at every preset time interval, the panoramic video data within that interval to a cloud server.
It should be noted that the principle and implementation of the panoramic live streaming method performed by the application program of this embodiment are similar to those of the panoramic live streaming method of the above embodiments; to avoid redundancy, they are not repeated here.
With the storage medium of this embodiment of the application, the application program stitches the video data captured in real time by the multiple video capture devices and sends the stitched panoramic video data to the server at a preset time interval, so that the server can provide panoramic live video to clients based on the panoramic video data. Client users can thus see the entire panorama around the other party and understand the real environment around them, improving the visual experience of watching the live broadcast.
To implement the above embodiments, this application further proposes an application program, which, when run, performs the panoramic live streaming method of the embodiments of this application, the method including:
receiving video data captured in real time from different angles by multiple video capture devices;
stitching the video data from the different angles to generate panoramic video data; and
sending, at every preset time interval, the panoramic video data within that interval to a cloud server.
It should be noted that the principle and implementation of the panoramic live streaming method performed by the application program of this embodiment are similar to those of the panoramic live streaming method of the above embodiments; to avoid redundancy, they are not repeated here.
With the application program of this embodiment of the application, the video data captured in real time by the multiple video capture devices is stitched, and the stitched panoramic video data is sent to the server at a preset time interval, so that the server can provide panoramic live video to clients based on the panoramic video data. Client users can thus see the entire panorama around the other party and understand the real environment around them, improving the visual experience of watching the live broadcast.
For the apparatus, electronic device, storage medium, and application program embodiments, since they are substantially similar to the method embodiments, the description is relatively brief; for relevant details, refer to the description of the method embodiments.
It should be understood that the parts of this application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, a hardware implementation, as in another embodiment, may use any one of, or a combination of, the following techniques known in the art: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of this application. In this specification, such schematic expressions do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of the different embodiments or examples, provided they do not contradict each other.
Although the embodiments of this application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting this application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of this application.
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", or any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes that element.
It is worth noting that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should know that this application is not limited by the described order of actions, because according to this application, some steps may be performed in another order or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by this application.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed in a particular embodiment, refer to the relevant descriptions of the other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are schematic; the division of the units is only a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The above embodiments are only intended to illustrate the technical solutions of this application, not to limit them; although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or make equivalent substitutions for some of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of this application.
The above are only preferred embodiments of this application and are not intended to limit its scope of protection. Any modification, equivalent substitution, improvement, and the like made within the spirit and principles of this application shall be included within the scope of protection of this application.

Claims (20)

  1. A panoramic video live streaming method, comprising the following steps:
    receiving video data captured in real time from different angles by a plurality of video capture devices;
    stitching the video data from the different angles to generate panoramic video data; and
    sending, at every preset time interval, the panoramic video data within the preset time interval to a cloud server.
  2. The panoramic video live streaming method according to claim 1, wherein the sending, at every preset time interval, the panoramic video data within the preset time interval to the cloud server comprises:
    generating an index file corresponding to the panoramic video data within the preset time interval; and
    establishing a mapping relationship between the index file and the panoramic video data within the preset time interval, and sending the panoramic video data within the preset time interval together with the index file to the cloud server.
  3. The panoramic video live streaming method according to claim 1 or 2, wherein the stitching the video data from the different angles to generate panoramic video data comprises:
    stitching the video data from the different angles in order to synthesize a panoramic video;
    capturing the synthesized panoramic video using an adaptive-bitrate technology; and
    encoding the captured panoramic video data to obtain encoded panoramic video data.
  4. The panoramic video live streaming method according to any one of claims 1-3, further comprising, after the receiving video data captured in real time from different angles by the plurality of video capture devices:
    sending, at every preset time interval, the video data from the different angles within the preset time interval to the cloud server.
  5. The panoramic video live streaming method according to any one of claims 1-4, further comprising:
    obtaining current status information of the plurality of video capture devices, and, if it is detected that at least one of the plurality of video capture devices has stopped capturing video data, controlling the other video capture devices among the plurality of video capture devices to stop capturing video data, and generating an error prompt message.
  6. A panoramic video live streaming apparatus, comprising:
    a receiving module, configured to receive video data captured in real time from different angles by a plurality of video capture devices;
    a processing module, configured to stitch the video data from the different angles to generate panoramic video data; and
    a sending module, configured to send, at every preset time interval, the panoramic video data within the preset time interval to a cloud server.
  7. The panoramic video live streaming apparatus according to claim 6, further comprising:
    a first generating module, configured to generate an index file corresponding to the panoramic video data within the preset time interval; and
    an establishing module, configured to establish a mapping relationship between the index file and the panoramic video data within the preset time interval;
    wherein the sending module is further configured to send the panoramic video data within the preset time interval together with the index file to the cloud server.
  8. The panoramic video live streaming apparatus according to claim 6 or 7, wherein the processing module is further configured to:
    stitch the video data from the different angles in order to synthesize a panoramic video, capture the synthesized panoramic video using an adaptive-bitrate technology, and encode the captured panoramic video data to obtain encoded panoramic video data.
  9. The panoramic video live streaming apparatus according to any one of claims 6-8, wherein the sending module is further configured to:
    send, at every preset time interval, the video data from the different angles within the preset time interval to the cloud server.
  10. The panoramic video live streaming apparatus according to any one of claims 6-9, further comprising:
    an obtaining module, configured to obtain current status information of the plurality of video capture devices;
    a control module, configured to, upon detecting that at least one of the plurality of video capture devices has stopped capturing video data, control the other video capture devices among the plurality of video capture devices to stop capturing video data; and
    a second generating module, configured to generate an error prompt message.
  11. A panoramic video live streaming system, comprising a plurality of video capture devices, a processing device, and a cloud server, wherein:
    the plurality of video capture devices are configured to capture video data from different angles in real time;
    the processing device is configured to receive the video data captured in real time from the different angles, stitch the video data from the different angles to generate panoramic video data, and send, at every preset time interval, the panoramic video data within the preset time interval to the cloud server; and
    the cloud server is configured to provide a panoramic live video stream to a client based on the panoramic video data within the preset time interval.
  12. The panoramic video live streaming system according to claim 11, wherein:
    the processing device is further configured to generate an index file corresponding to the panoramic video data within the preset time interval, establish a mapping relationship between the index file and the panoramic video data within the preset time interval, and send the panoramic video data within the preset time interval together with the index file to the cloud server; and
    the cloud server is further configured to store the index file and the panoramic video data within the preset time interval according to the mapping relationship.
  13. The panoramic video live streaming system according to claim 11 or 12, wherein the processing device is further configured to:
    stitch the video data from the different angles in order to synthesize a panoramic video, capture the synthesized panoramic video using an adaptive-bitrate technology, and encode the captured panoramic video data to obtain encoded panoramic video data.
  14. The panoramic video live streaming system according to any one of claims 11-13, wherein:
    the processing device is further configured to send, at every preset time interval, the video data from the different angles within the preset time interval to the cloud server; and
    the cloud server is further configured to stitch the video data from the different angles within the preset time interval to generate panoramic video data.
  15. The panoramic video live streaming system according to any one of claims 11-14, wherein the cloud server is further configured to:
    generate an index file corresponding to the panoramic video data within the preset time interval, establish a mapping relationship between the index file and the panoramic video data within the preset time interval, and store the index file and the panoramic video data within the preset time interval according to the mapping relationship.
  16. The panoramic video live streaming system according to any one of claims 11-15, wherein the processing device is further configured to:
    obtain current status information of the plurality of video capture devices, and, if it is detected that at least one of the plurality of video capture devices has stopped capturing video data, control the other video capture devices among the plurality of video capture devices to stop capturing video data, and generate an error prompt message.
  17. The panoramic video live streaming system according to claim 12 or 15, wherein the cloud server is further configured to:
    receive a download request sent by the client, and send the panoramic video data within the preset time intervals to the client in sequence according to the index file.
  18. A video source control device, comprising one or more of the following components: a processor, a memory, a power supply circuit, an input/output (I/O) interface, and a communication component; wherein the processor and the memory are arranged on a circuit board; the power supply circuit is configured to supply power to the circuits or components of the video source control device; the memory is configured to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the panoramic video live streaming method according to any one of claims 1-5.
  19. A storage medium, wherein the storage medium stores one or more programs which, when executed by a device, cause the device to perform the panoramic video live streaming method according to any one of claims 1-5.
  20. An application program, configured to perform, when run, the panoramic video live streaming method according to any one of claims 1-5.
PCT/CN2017/075573 2016-04-19 2017-03-03 全景视频直播方法、装置和系统以及视频源控制设备 WO2017181777A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610245526.5 2016-04-19
CN201610245526.5A CN105847851A (zh) 2016-04-19 2016-04-19 全景视频直播方法、装置和系统以及视频源控制设备

Publications (1)

Publication Number Publication Date
WO2017181777A1 true WO2017181777A1 (zh) 2017-10-26

Family

ID=56588896

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/075573 WO2017181777A1 (zh) 2016-04-19 2017-03-03 全景视频直播方法、装置和系统以及视频源控制设备

Country Status (2)

Country Link
CN (1) CN105847851A (zh)
WO (1) WO2017181777A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109756683A (zh) * 2017-11-02 2019-05-14 深圳市裂石影音科技有限公司 全景音视频录制方法、装置、存储介质和计算机设备
CN110019367A (zh) * 2017-12-28 2019-07-16 北京京东尚科信息技术有限公司 一种统计数据特征的方法和装置
CN111010599A (zh) * 2019-12-18 2020-04-14 浙江大华技术股份有限公司 一种处理多场景视频流的方法、装置及计算机设备
CN111901628A (zh) * 2020-08-03 2020-11-06 江西科骏实业有限公司 基于zSpace桌面VR一体机的云端渲染方法
CN113473165A (zh) * 2021-06-30 2021-10-01 中国电信股份有限公司 直播控制系统、直播控制方法、装置、介质与设备
US11323754B2 (en) 2018-11-20 2022-05-03 At&T Intellectual Property I, L.P. Methods, devices, and systems for updating streaming panoramic video content due to a change in user viewpoint
CN114979799A (zh) * 2022-05-20 2022-08-30 北京字节跳动网络技术有限公司 一种全景视频处理方法、装置、设备和存储介质

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105847851A (zh) * 2016-04-19 2016-08-10 北京金山安全软件有限公司 全景视频直播方法、装置和系统以及视频源控制设备
CN107800946A (zh) * 2016-09-02 2018-03-13 丰唐物联技术(深圳)有限公司 一种直播方法及系统
CN106303554A (zh) * 2016-09-07 2017-01-04 四川天辰智创科技有限公司 一种视频直播系统及方法
CN106210703B (zh) * 2016-09-08 2018-06-08 北京美吉克科技发展有限公司 Vr环境中特写镜头的运用及显示方法和系统
CN106657963B (zh) * 2016-09-14 2019-03-01 深圳岚锋创视网络科技有限公司 一种数据处理装置及方法
CN106572344A (zh) * 2016-09-29 2017-04-19 宇龙计算机通信科技(深圳)有限公司 虚拟现实直播方法、系统以及云服务器
CN106507119A (zh) * 2016-10-21 2017-03-15 安徽协创物联网技术有限公司 一种移动视频直播系统
CN108012073B (zh) * 2016-10-28 2020-05-19 努比亚技术有限公司 一种实现全景拍摄的方法及装置
CN106375347A (zh) * 2016-11-18 2017-02-01 上海悦野健康科技有限公司 虚拟现实的旅游直播平台
CN106604042A (zh) * 2016-12-22 2017-04-26 Tcl集团股份有限公司 一种基于云端服务器的全景直播系统及全景直播方法
CN106851242A (zh) * 2016-12-30 2017-06-13 成都西纬科技有限公司 一种实现运动相机3d视频直播的方法及系统
CN106791906B (zh) * 2016-12-31 2020-06-23 北京星辰美豆文化传播有限公司 一种多人网络直播方法、装置及其电子设备
CN106993156A (zh) * 2017-03-23 2017-07-28 余仁集 一种无盲区vr视频采集装置
CN107071500A (zh) * 2017-04-13 2017-08-18 深圳电航空技术有限公司 直播系统
CN107071499A (zh) * 2017-04-13 2017-08-18 深圳电航空技术有限公司 直播系统
CN107027043A (zh) * 2017-04-26 2017-08-08 上海翌创网络科技股份有限公司 虚拟现实场景直播方法
CN107197316A (zh) * 2017-04-28 2017-09-22 北京传视慧眸科技有限公司 全景直播系统及方法
CN107426487A (zh) * 2017-05-04 2017-12-01 深圳市酷开网络科技有限公司 一种全景图像录播方法及系统
CN107835433B (zh) * 2017-06-09 2020-11-20 越野一族(北京)传媒科技有限公司 一种赛事宽视角直播系统、相关联的设备和直播方法
CN107147918A (zh) * 2017-06-12 2017-09-08 北京佰才邦技术有限公司 一种数据处理方法、系统、设备和服务器
CN107197172A (zh) * 2017-06-21 2017-09-22 北京小米移动软件有限公司 视频直播方法、装置和系统
CN107451248A (zh) * 2017-07-28 2017-12-08 福建中金在线信息科技有限公司 一种数据存储方法、装置及电子设备
CN108769788A (zh) * 2018-05-30 2018-11-06 根尖体育科技(北京)有限公司 一种同一场景中不同角度摄像的视频片段截取方法
CN111343415A (zh) * 2018-12-18 2020-06-26 杭州海康威视数字技术股份有限公司 数据传输方法及装置
CN109889855A (zh) * 2019-01-31 2019-06-14 南京理工大学 基于移动app的智能全景视频直播网络购物系统及方法
CN111669604A (zh) * 2019-03-07 2020-09-15 阿里巴巴集团控股有限公司 一种采集设备设置方法及装置、终端、采集系统、设备
WO2020181112A1 (en) 2019-03-07 2020-09-10 Alibaba Group Holding Limited Video generating method, apparatus, medium, and terminal
CN111355967A (zh) * 2020-03-11 2020-06-30 叠境数字科技(上海)有限公司 基于自由视点的视频直播处理方法、系统、装置及介质
CN111416989A (zh) * 2020-04-28 2020-07-14 北京金山云网络技术有限公司 视频直播方法、系统及电子设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101951412A (zh) * 2010-10-15 2011-01-19 上海交通大学 基于http协议的多子流流媒体传输系统及其传输方法
US20140270684A1 (en) * 2013-03-15 2014-09-18 3D-4U, Inc. Apparatus and Method for Playback of Multiple Panoramic Videos with Control Codes
CN105196925A (zh) * 2014-06-25 2015-12-30 比亚迪股份有限公司 行车辅助装置以及具有其的车辆
CN105847851A (zh) * 2016-04-19 2016-08-10 北京金山安全软件有限公司 全景视频直播方法、装置和系统以及视频源控制设备

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102355545A (zh) * 2011-09-20 2012-02-15 中国科学院宁波材料技术与工程研究所 一种360度实时全景摄像机
CN103716578A (zh) * 2012-09-28 2014-04-09 华为技术有限公司 一种视频数据发送、存储及检索方法和视频监控系统
CN103177615B (zh) * 2013-03-26 2016-03-02 北京新学道教育科技有限公司 一种基于云计算技术的录播系统及方法
CN103813213B (zh) * 2014-02-25 2019-02-12 南京工业大学 基于移动云计算的实时视频分享平台和方法
CN103944888B (zh) * 2014-04-02 2018-03-06 天脉聚源(北京)传媒科技有限公司 一种资源共享的方法、装置及系统
CN104168434A (zh) * 2014-08-28 2014-11-26 深圳市银翔科技有限公司 视频文件的存储、播放和管理方法
CN204926578U (zh) * 2015-08-05 2015-12-30 外星人(北京)科技有限公司 基于云计算的在线家庭教育设备
CN105120193A (zh) * 2015-08-06 2015-12-02 佛山六滴电子科技有限公司 一种录制全景视频的设备及方法
CN105245909A (zh) * 2015-10-10 2016-01-13 上海慧体网络科技有限公司 结合智能硬件、云计算、互联网进行比赛直播的方法
CN105282526A (zh) * 2015-12-01 2016-01-27 北京时代拓灵科技有限公司 一种全景视频拼接的处理方法及系统



Also Published As

Publication number Publication date
CN105847851A (zh) 2016-08-10

Similar Documents

Publication Publication Date Title
WO2017181777A1 (zh) 全景视频直播方法、装置和系统以及视频源控制设备
CN105991962B (zh) 连接方法、信息展示方法、装置及系统
KR101467430B1 (ko) 클라우드 컴퓨팅 기반 어플리케이션 제공 방법 및 시스템
US20150222815A1 (en) Aligning videos representing different viewpoints
EP2876885A1 (en) Method and apparatus in a motion video capturing system
JP2022536182A (ja) データストリームを同期させるシステム及び方法
CN105872453A (zh) 网络摄像头监控方法、服务器及系统
WO2017092338A1 (zh) 一种数据传输的方法和装置
RU2015143011A (ru) Устройство серверного узла и способ
CN111092898B (zh) 报文传输方法及相关设备
US9445142B2 (en) Information processing apparatus and control method thereof
JP6564884B2 (ja) マルチメディア情報再生方法及びシステム、ならびに標準化サーバ及びライブストリーミング端末
WO2014075413A1 (zh) 一种确定待共享的终端的方法、装置和系统
US20160029053A1 (en) Method for transmitting media data and virtual desktop server
US11089073B2 (en) Method and device for sharing multimedia content
WO2018145572A1 (zh) 实现vr直播的方法、装置和ott业务系统以及存储介质
CN112822435A (zh) 一种用户可轻松接入的安防方法、装置及系统
JP6530820B2 (ja) マルチメディア情報再生方法及びシステム、採集デバイス、標準サーバ
CN108141564B (zh) 用于视频广播的系统和方法
CN106412492B (zh) 视频数据处理方法和装置
EP3513546B1 (en) Systems and methods for segmented data transmission
WO2019227426A1 (zh) 多媒体数据处理方法、装置和设备/终端/服务器
CN114143616A (zh) 目标视频的处理方法和系统、存储介质及电子装置
CN114466145A (zh) 视频处理方法、装置、设备和存储介质
US9584572B2 (en) Cloud service device, multi-image preview method and cloud service system

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17785263

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 31.01.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17785263

Country of ref document: EP

Kind code of ref document: A1