
US20130336379A1 - System and Methods for Encoding Live Multimedia Content with Synchronized Resampled Audio Data - Google Patents

System and Methods for Encoding Live Multimedia Content with Synchronized Resampled Audio Data Download PDF

Info

Publication number
US20130336379A1
US20130336379A1 (U.S. application Ser. No. 13/629,292)
Authority
US
United States
Prior art keywords
audio
timeline
video
data
duration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/629,292
Inventor
Kirill Erofeev
Galina Petrova
Dmitry Sahno
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonic IP LLC
Original Assignee
Divx LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Divx LLC filed Critical Divx LLC
Priority to US13/629,292 priority Critical patent/US20130336379A1/en
Assigned to DIVX, LLC reassignment DIVX, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: EROFEEV, Kirill, PETROVA, Galina, SAHNO, Dmitry
Priority to PCT/US2013/042105 priority patent/WO2013188065A2/en
Assigned to SONIC IP, INC. reassignment SONIC IP, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DIVX, LLC
Publication of US20130336379A1 publication Critical patent/US20130336379A1/en
Current legal status: Abandoned

Links

Images

Classifications

    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/02: Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B 27/031: Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10: Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21: Server components or server architectures
    • H04N 21/218: Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187: Live feed
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/242: Synchronization processes, e.g. processing of PCR [Program Clock References]

Definitions

  • an encoding system includes live multimedia content storage configured to store live multimedia content, where the live multimedia content includes audio data and video data, where the audio data includes a plurality of audio samples having an audio sample duration and the video data includes a plurality of video frames, a processor, and a multimedia encoder, wherein the multimedia encoder configures the processor to receive live multimedia content, generate a timeline using the video data, where the timeline contains a plurality of timestamps, where at least one timestamp in the plurality of timestamps is determined using at least one video frame in the plurality of video frames, compute a first time window, where the first time window includes a first time window duration corresponding to the difference in time between a first timestamp in the timeline and a second timestamp in the timeline, align the audio data to the video data using the audio samples and the timeline by assigning at least one audio sample to the first time window based upon the number of audio sample durations that occur within the first time window duration, measure a synchronization value of the aligned audio data to the video data using the timeline, resample at least one audio sample in the aligned audio data when the synchronization value exceeds a threshold value, and multiplex the audio data and video data into a container file.
  • the audio sample duration is a fixed duration.
  • the audio sample duration is a variable duration.
  • the synchronization value is measured by subtracting the duration of at least one audio sample from the first time window duration.
  • the threshold value is pre-determined.
  • the threshold value is determined dynamically.
  • the at least one audio sample includes a sampling rate and the audio sample is resampled by increasing the sampling rate.
  • the at least one audio sample includes a sampling rate and the audio sample is resampled by decreasing the sampling rate.
  • the multimedia encoder further configures the processor to perform pitch compensation of the resampled audio sample.
  • At least one video frame in the plurality of video frames includes a video frame timestamp and at least one timestamp in the plurality of timestamps is determined using the video frame timestamp.
  • At least one video frame in the plurality of video frames includes a video frame duration and at least one timestamp in the plurality of timestamps is determined using the video frame duration.
  • Still another embodiment of the invention includes a method for encoding live multimedia content that includes receiving live multimedia content using an encoding system, generating a timeline using the video data and the encoding system, where the timeline contains a plurality of timestamps, where at least one timestamp in the plurality of timestamps is determined using at least one video frame in the plurality of video frames, computing a first time window using the encoding system, where the first time window includes a first time window duration corresponding to the difference in time between a first timestamp in the timeline and a second timestamp in the timeline, aligning the audio data to the video data using the audio samples and the timeline by assigning at least one audio sample to the first time window based upon the number of audio sample durations that occur within the first time window duration using the encoding system, measuring a synchronization value of the aligned audio data to the video data using the timeline and the encoding system, resampling at least one audio sample in the aligned audio data when the synchronization value exceeds a threshold value using the encoding system, and multiplexing the audio data and video data into a container file using the encoding system.
  • the audio sample duration is a fixed duration.
  • the audio sample duration is a variable duration.
  • measuring the synchronization value includes subtracting the duration of at least one audio sample from the first time window duration using the encoding system.
  • the threshold value is pre-determined.
  • the threshold value is determined dynamically.
  • the at least one audio sample includes a sampling rate and resampling an audio sample includes increasing the sampling rate using the encoding system.
  • the at least one audio sample includes a sampling rate and resampling an audio sample includes decreasing the sampling rate using the encoding system.
  • encoding live multimedia content includes performing pitch compensation of at least one resampled audio sample using the encoding system.
  • At least one video frame in the plurality of video frames includes a video frame timestamp and determining at least one timestamp in the plurality of timestamps utilizes the video frame timestamp and the encoding system.
  • At least one video frame in the plurality of video frames includes a video frame duration and determining at least one timestamp in the plurality of timestamps utilizes the video frame duration and the encoding system.
  • Yet another embodiment of the invention includes a machine readable medium containing processor instructions, where execution of the instructions by a processor causes the processor to perform a process including receiving live multimedia content, generating a timeline using the video data, where the timeline contains a plurality of timestamps, where at least one timestamp in the plurality of timestamps is determined using at least one video frame in the plurality of video frames, computing a first time window, where the first time window includes a first time window duration corresponding to the difference in time between a first timestamp in the timeline and a second timestamp in the timeline, aligning the audio data to the video data using the audio samples and the timeline by assigning at least one audio sample to the first time window based upon the number of audio sample durations that occur within the first time window duration, measuring a synchronization value of the aligned audio data to the video data using the timeline, resampling at least one audio sample in the aligned audio data when the synchronization value exceeds a threshold value, and multiplexing the audio data and video data into a container file.
  • FIG. 1 is a system diagram of a system for encoding and delivering live multimedia content in accordance with an embodiment of the invention.
  • FIG. 2 conceptually illustrates a media server configured to encode live video data with synchronized resampled audio data in accordance with an embodiment of the invention.
  • FIG. 3 is a flow chart illustrating a process for encoding live multimedia content with audio data synchronized with video data in accordance with an embodiment of the invention.
  • FIG. 4 is a flow chart illustrating a process for encoding live multimedia content with resampled audio data synchronized with video data in accordance with an embodiment of the invention.
  • Multimedia content typically includes audio data and video data.
  • video data is encoded using one of a variety of video compression schemes and the audio data is encoded using pulse code modulation (PCM).
  • the audio data can then be multiplexed with the frames of encoded video data and stored in a container file.
  • the audio data is synchronized quickly in order to facilitate the encoding and delivery of the live multimedia content.
  • Container files are composed of blocks of content (e.g. fragments, elements, or chunks), where each block of content includes audio data and/or video data.
  • a number of container files have restrictions as to how timestamps are applied to the data stored in the container file and/or the container files may have blocks of content of a fixed size.
  • the generation of timestamps for live sources of audio and video data may contain errors or other issues. For example, the difference between adjacent timestamps present in the audio data may not be equal to the actual duration of the audio data contained between the adjacent timestamps. These restrictions and errors can cause the audio and video data to become desynchronized when the audio and video data is multiplexed in a container file.
  • the likelihood of desynchronization is reduced by constructing a timeline using the timestamps of the encoded video data and synchronizing the audio data to the video data based upon the sampling rate of the audio data.
  • the audio data is synchronized to the timeline by adjusting the number of PCM samples assigned to a particular time interval.
  • the container file format permits a specific number of audio samples in each frame interval and the audio data is synchronized to the timeline by resampling the audio data to obtain an appropriate number of samples. The audio data and video data can then be multiplexed into a container file.
  • Media servers in accordance with embodiments of the invention are configured to encode live multimedia content to be stored and/or streamed to network clients.
  • a media streaming network including a media server configured to encode live multimedia content in accordance with an embodiment of the invention is illustrated in FIG. 1 .
  • the illustrated media streaming network 10 includes a media source 100 configured to encode multimedia content in real time.
  • the media source 100 is configured to capture and/or receive streams of live audio data from an audio source and streams of live video data from a video source.
  • the audio source and video source generate and apply a timestamp to their respective captured data.
  • the media source 100 contains pre-encoded multimedia content.
  • the media source 100 is connected to a network renderer 102 .
  • the network renderer 102 synchronizes the encoded audio data with the encoded video data by using the encoded video data to create a timeline and synchronizing the encoded audio data to the timeline based upon the sampling rate of the encoded audio data.
  • the initial synchronization of the video data and the audio data may be obtained using an initial synchronization sequence.
  • the media source 100 and the network renderer 102 are implemented using a media server.
  • the network renderer 102 is connected to a plurality of network clients 104 utilizing a network 108 .
  • the network renderer 102 is implemented using a single machine. In several embodiments of the invention, the network renderer 102 is implemented using a plurality of machines. In many embodiments, the network 108 is the Internet. In several embodiments, the network 108 is any IP network.
  • the network clients 104 contain a media decoder 106 .
  • the network client 104 is configured to receive and decode received multimedia content using the media decoder 106 .
  • a media server where the media server includes a media source 100 and a network renderer 102 , is implemented using a machine capable of receiving live multimedia content and multiplexing the received live multimedia content into a container file.
  • the media server is also capable of encoding the received live multimedia content.
  • The basic architecture of a media server in accordance with an embodiment of the invention is illustrated in FIG. 2 .
  • the media server 200 includes a processor 210 in communication with non-volatile memory 230 , volatile memory 220 , and a network interface 240 .
  • the non-volatile memory includes a media encoder 232 that configures the processor to encode live multimedia content by creating a timeline using the video data in the live multimedia content, synchronizing samples of the audio data with the video data using the timeline, and multiplexing the audio data and video data in a container file.
  • the container file contains blocks of content with a fixed number of audio samples; in accordance with embodiments of the invention, the audio data is resampled to obtain the appropriate number of audio samples prior to multiplexing the audio data and the video data in a container file.
  • the network interface 240 may be in communication with the processor 210 , the volatile memory 220 , and/or the non-volatile memory 230 .
  • any of a variety of architectures including architectures where the media encoder 232 is located on disk or some other form of storage and is loaded into volatile memory 220 at runtime can be utilized to implement media servers in accordance with embodiments of the invention.
  • Although specific architectures for a media streaming network and a media server configured to encode live multimedia content are described with respect to FIGS. 1 and 2 , other implementations appropriate to a specific application can be utilized in accordance with embodiments of the invention. Methods for encoding live multimedia content with synchronized audio data in accordance with embodiments of the invention are discussed below.
  • Live multimedia content can be encoded for a variety of purposes including, but not limited to, streaming the live multimedia content to a number of network clients.
  • the video data and the audio data should be closely synchronized so that the encoded multimedia content provides an experience similar to viewing the content live.
  • the audio will correspond with relevant visual cues such as (but not limited to) lip motion associated with a person talking or singing.
  • this synchronization is performed using the timestamps associated with the audio data and the video data; these timestamps are created by the hardware and/or software capturing the audio data and video data.
  • the timestamps generated during the capture of the audio data and the video data may not be aligned and/or differences in the hardware and/or software may result in timestamps generated by the hardware recording the audio data and the hardware recording the video data being inconsistent with each other over the course of the recording of the live multimedia content. Further compounding the problem, the timestamps generated when recording the live audio data and video data may not accurately represent the real world elapsed time between the recorded timestamps. Moreover, direct multiplexing of the audio data and the video data may result in the loss of timestamps captured by the recording hardware, potentially causing synchronization problems as additional live multimedia content is encoded. However, these issues may be minimized, or even avoided, by constructing a new timeline using the video data and synchronizing the audio data using the timeline based upon the sampling rate of the audio data in accordance with embodiments of the invention.
  • A process for encoding live multimedia content in which audio data is synchronized with encoded video data is illustrated in FIG. 3 .
  • the process 300 includes receiving ( 310 ) live multimedia content, where the multimedia content includes (but is not limited to) audio data and video data.
  • a timeline containing one or more timestamps is generated ( 312 ) using the timestamps associated with the frames of video data and/or the known frame rate of the video data.
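As an illustrative sketch of the timeline generation step (the function name and parameters are our own, not from the patent), timestamps marking frame boundaries can be derived from a known constant frame rate:

```python
def video_timeline(frame_count, frame_rate, start=0.0):
    """Timestamps (seconds) marking the boundaries of each video frame,
    derived from a known constant frame rate."""
    return [start + n / frame_rate for n in range(frame_count + 1)]
```

When the capture hardware supplies per-frame timestamps, those can be used in place of (or to correct) this frame-rate-derived timeline, as the step above notes.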
  • the audio data is aligned ( 314 ) to the video data using the timeline.
  • the synchronization of the audio data is measured ( 316 ) using a synchronization threshold. If the audio data is de-synchronized ( 318 ) beyond the synchronization threshold, the audio data is adjusted ( 320 ).
  • the synchronized audio data and video data are multiplexed ( 322 ) into a container file.
  • the video data is encoded in accordance with video encoding standards including (but not limited to) MPEG2, MPEG4, H.264, or Scalable Video Coding.
  • the audio data is encoded using PCM.
  • the audio data is aligned ( 314 ) to the video data using the timeline by assigning fragments of PCM samples to the timeline without any gaps or overlaps.
  • PCM samples have a fixed duration and the audio data is aligned ( 314 ) to the timestamps by assigning enough PCM samples so that the duration fills the difference between the current timestamp and the adjacent timestamp in the timeline.
  • the difference between the current timestamp and the adjacent timestamp in the timeline is a time window; the time window has a duration which is the difference between the timestamps.
  • the synchronization of the PCM samples is measured ( 316 ) at each timestamp in the timeline using a variety of methods, including, but not limited to, subtracting the total length of the PCM samples from the difference in time between the timestamp and the adjacent timestamp.
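The subtraction-based measurement described in this step can be sketched as follows (an illustrative helper; the names are not from the patent, and durations are taken to be in seconds):

```python
def sync_drift(window_start, window_end, num_samples, sample_rate):
    """Synchronization value for one time window: the window duration minus
    the total duration of the PCM samples assigned to that window.
    Positive drift means the audio underfills the window (audio lags);
    negative drift means the audio overfills it (audio runs ahead)."""
    return (window_end - window_start) - num_samples / sample_rate
```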
  • the audio data and video data are multiplexed ( 322 ) into a container file that includes a block of content (e.g. a fragment, element, or chunk) that stores one or more frames of video data and the audio data played back during the display of the one or more frames of video data.
  • a timestamp can be associated with the predetermined portions of the container file containing video frames and associated audio data and the timestamps can be utilized by decoders to time the display of individual frames of video and the playback of the accompanying audio.
  • the audio data is adjusted ( 320 ) by moving PCM samples from one timestamp to the next. For example, if the measured ( 316 ) synchronization indicates that the audio data is falling behind the video data at timestamp X, the audio data is adjusted ( 320 ) by pulling PCM samples of audio data from the block of content in the container file associated with adjacent timestamp X+1. Likewise, if the audio data is ahead of the video data, the audio data is adjusted ( 320 ) by pushing PCM samples of audio data from the portion of the container file associated with timestamp X to the block of content associated with adjacent timestamp X+1. Other adjustments may be utilized in accordance with embodiments of the invention.
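A minimal sketch of this push/pull adjustment, using Python lists as stand-ins for the PCM sample blocks (the helper and its names are illustrative only):

```python
def rebalance(blocks, i, drift, sample_rate):
    """Cancel a measured drift (seconds) at timestamp i by moving PCM samples
    between the content block for timestamp i and the adjacent block i+1.
    drift > 0: block i underfills its window, so pull samples from block i+1.
    drift < 0: block i overfills its window, so push the excess into block i+1."""
    n = round(abs(drift) * sample_rate)  # number of samples to move
    if n == 0:
        return blocks
    if drift > 0:                        # audio falling behind: pull forward
        blocks[i].extend(blocks[i + 1][:n])
        del blocks[i + 1][:n]
    else:                                # audio running ahead: push back
        blocks[i + 1][:0] = blocks[i][-n:]
        del blocks[i][-n:]
    return blocks
```

Note that moving samples preserves the total sample count, so a deficit corrected at one timestamp surfaces as a (smaller) adjustment at the next, which matches the per-timestamp measurement loop described above.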
  • a specific process for encoding live media with audio data synchronized to the video data in accordance with embodiments of the invention is described above with respect to FIG. 3 ; however, a variety of processes may be utilized in accordance with embodiments of the invention. Methods for encoding live media with synchronized resampled audio using containers with fixed size blocks of content in accordance with embodiments of the invention are discussed below.
  • a number of container file formats utilized in accordance with embodiments of the invention fix the size of the predetermined portions that contain one or more frames of video data and the audio data that accompanies the one or more frames of video. Fixing the size of each predetermined portion utilized to store video and audio data typically means that each of the predetermined portions contains the same number of audio samples. Depending upon the sampling rate of the audio relative to the frame rate of the video, a different number of samples may fall within each frame interval.
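The mismatch can be seen numerically. In this sketch (illustrative names only), snapping sample boundaries to frame boundaries yields per-frame counts that vary whenever the ratio of audio sample rate to video frame rate is not an integer:

```python
def per_frame_sample_counts(audio_rate, frame_rate, frames):
    """Integer count of audio samples landing in each frame interval when
    sample boundaries are snapped to frame boundaries; the counts vary from
    frame to frame unless audio_rate/frame_rate is an integer, even though
    the running total stays exact."""
    counts, consumed = [], 0
    for n in range(1, frames + 1):
        total = round(n * audio_rate / frame_rate)
        counts.append(total - consumed)
        consumed = total
    return counts
```

For example, 44.1 kHz audio against 24 fps video averages 1837.5 samples per frame, so integer per-frame counts must alternate; 48 kHz against 25 fps divides evenly at 1920 samples per frame.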
  • the audio samples are resampled to obtain the appropriate number of samples.
  • filters and/or other appropriate adjustments can be applied to the samples to minimize the audio distortion resulting from playback of the resampled audio.
  • A process for encoding live multimedia content in which audio data is synchronized with video data and the audio and video data are multiplexed into a container file having a fixed number of audio samples per frame interval is illustrated in FIG. 4 .
  • the process 400 includes receiving ( 410 ) live multimedia content, where the multimedia content includes, but is not limited to, audio data and video data.
  • a timeline containing one or more timestamps is generated ( 412 ) using the video data.
  • the audio data is aligned ( 414 ) to the video data using the timeline.
  • the synchronization of the audio data is measured ( 416 ) using a synchronization threshold.
  • If the audio data is de-synchronized ( 418 ) beyond the synchronization threshold, the audio data is resampled ( 420 ) and, if necessary, corrections are applied ( 422 ) to the resampled audio data.
  • the audio data and video data are multiplexed ( 424 ) into the container file.
  • a process similar to the one described above with respect to FIG. 3 may be utilized for building a timeline and detecting audio de-synchronization ( 410 )-( 418 ).
  • the synchronization threshold is measured ( 416 ) by counting the number of samples of audio data assigned to a timestamp in the timeline and comparing that number to the number of audio samples allowed for a block of content in the container file. If the number of samples of audio data differs from the number of audio samples allowed for the block of content, the audio data is resampled ( 420 ).
  • resampling ( 420 ) the PCM samples contained in the audio data results in the resampled audio data having a sample rate that is higher or lower than the original sample rate.
  • the difference between the resampled sample rate and the original sample rate is kept within a threshold value, such as (but not limited to) 500 Hz.
  • the resampled audio data is corrected ( 422 ) using one or more of a variety of techniques, including, but not limited to, pitch compensation, in order to mask changes in the sound resulting from the resampling process.
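As a hedged sketch of the resampling step ( 420 ), linear interpolation is used here as a simple stand-in; a production encoder would typically use a proper polyphase resampling filter, followed by pitch compensation as noted above:

```python
def resample_to_count(samples, target_count):
    """Linearly interpolate a PCM block to exactly target_count samples,
    effectively raising or lowering its sample rate (assumes the block and
    the target both contain at least two samples)."""
    if target_count == len(samples):
        return list(samples)
    step = (len(samples) - 1) / (target_count - 1)
    out = []
    for k in range(target_count):
        pos = k * step
        i = min(int(pos), len(samples) - 2)   # index of left neighbour
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
    return out
```

Keeping the implied sample-rate change small, such as within the 500 Hz bound mentioned above (e.g. resampling 48,000 Hz audio no lower than 47,500 Hz), limits audible artifacts before any pitch compensation is applied.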

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Systems and methods for encoding live multimedia content with audio data synchronized with other streams of data within the multimedia content, including video data in accordance with embodiments of the invention are disclosed. In one embodiment of the invention, an encoding system includes live multimedia content storage configured to store live multimedia content including audio data and video data, a processor, and a multimedia encoder, wherein the multimedia encoder configures the processor to receive live multimedia content, generate a timeline using the video data, compute a first time window, align the audio data to the video data using the audio samples and the timeline, measure a synchronization value of the aligned audio data to the video data, resample at least one audio sample in the aligned audio data when the synchronization value exceeds a threshold value, and multiplex the audio data and video data into a container file.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Patent Application No. 61/659,111, filed on Jun. 13, 2012, the disclosure of which is hereby incorporated by reference in its entirety.
  • FIELD OF THE INVENTION
  • The present invention is directed, in general, to systems and methods for encoding multimedia content and more specifically to systems and methods for encoding live multimedia content with synchronized resampled audio data.
  • BACKGROUND
  • Streaming video over the Internet has become a phenomenon in modern times. Many popular websites, such as YouTube, a service of Google, Inc. of Mountain View, Calif., and WatchESPN, a service of ESPN of Bristol, Conn., utilize streaming video in order to provide video and television programming to consumers via the Internet.
  • Video data is often compressed to facilitate the storage and transmission of video data, particularly over networks such as the Internet. A number of video compression standards (codecs) exist, including MPEG2 by the Moving Picture Experts Group (MPEG) of the International Organization for Standardization (ISO) of Geneva, Switzerland, with the International Electrotechnical Commission (IEC) of Geneva, Switzerland, MPEG4 by the ISO/IEC MPEG, and H.264/MPEG4 AVC by the International Telecommunication Union Telecommunication Standardization Sector of Geneva, Switzerland. Video data is compressed, also known as encoded, using an encoder. Encoded video data is decompressed using a decoder corresponding to the encoder used to encode the video data.
  • Scalable Video Coding (SVC) is an extension of the H.264/MPEG-4 AVC video compression standard, which is specified by the ITU-T H.264 standard by the International Telecommunication Union Telecommunication Standardization Sector of Geneva, Switzerland. SVC enables the encoding of a video bitstream that additionally contains one or more sub-bitstreams. The sub-bitstreams are derived from the video bitstream by dropping packets of data from the video bitstream, resulting in a sub-bitstream of lower quality and lower bandwidth than the original video bitstream. SVC supports three forms of scaling a video bitstream into sub-bitstreams: temporal scaling, spatial scaling, and quality scaling. Each of these scaling techniques can be used individually or combined depending on the specific video system.
  • Pulse Code Modulation (PCM) is a method used to create a digital representation of analog signals, including analog audio data. A PCM stream is a digital representation of an analog signal where the magnitude of the analog signal is sampled at uniform intervals, known as the sample rate, and quantized to a value within a range of digital steps. PCM streams are commonly created using analog to digital converters, and are decoded using digital to analog converters. Systems and methods for performing pulse code modulation of analog signals are described in U.S. Pat. No. 2,801,281, entitled “Communication System Employing Pulse Code Modulation” to Oliver et al., dated Jul. 30, 1957, the entirety of which is incorporated by reference.
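The sampling-and-quantization scheme described above can be sketched as follows (an illustrative toy encoder: the analog signal is modeled as a Python function of time with values in [-1.0, 1.0], and all names are our own):

```python
def pcm_encode(signal, sample_rate, duration, bits=8):
    """Sample an analog signal (a function of time, in seconds) at uniform
    intervals and quantize each sample to one of 2**bits signed integer
    levels, clipping out-of-range values."""
    half = 2 ** (bits - 1)
    samples = []
    for n in range(int(sample_rate * duration)):
        x = signal(n / sample_rate)                       # uniform sampling
        samples.append(max(-half, min(half - 1, round(x * (half - 1)))))
    return samples
```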
  • A variety of multimedia containers may be used to store encoded multimedia content, including the Matroska container. The Matroska container is a media container developed as an open standard project by the Matroska non-profit organization of Aussonne, France. The Matroska container is based upon Extensible Binary Meta Language (EBML), which is a binary derivative of the Extensible Markup Language (XML). Decoding of the Matroska container is supported by many consumer electronics (CE) devices. The DivX Plus file format developed by DivX, LLC of San Diego, Calif. utilizes an extension of the Matroska container format, including elements that are not specified within the Matroska format.
  • In adaptive streaming systems, multimedia content is typically stored on a media server as a top level index file pointing to a number of alternate streams that contain the actual video and audio data. Each stream is typically stored in one or more container files. A variety of container files, including the Matroska container, may be utilized in adaptive streaming systems.
  • SUMMARY OF THE INVENTION
  • Systems and methods for encoding live multimedia content with audio data synchronized with other streams of data within the multimedia content, including video data in accordance with embodiments of the invention are disclosed. In one embodiment of the invention, an encoding system includes live multimedia content storage configured to store live multimedia content, where the live multimedia content includes audio data and video data, where the audio data includes a plurality of audio samples having an audio sample duration and the video data includes a plurality of video frames, a processor, and a multimedia encoder, wherein the multimedia encoder configures the processor to receive live multimedia content, generate a timeline using the video data, where the timeline contains a plurality of timestamps, where at least one timestamp in the plurality of timestamps is determined using at least one video frame in the plurality of video frames, compute a first time window, where the first time window includes a first time window duration corresponding to the difference in time between a first timestamp in the timeline and a second timestamp in the timeline, align the audio data to the video data using the audio samples and the timeline by assigning at least one audio sample to the first time window based upon the number of audio sample durations that occur within the first time window duration, measure a synchronization value of the aligned audio data to the video data using the timeline, resample at least one audio sample in the aligned audio data when the synchronization value exceeds a threshold value, and multiplex the audio data and video data into a container file.
  • In another embodiment of the invention, the audio sample duration is a fixed duration.
  • In an additional embodiment of the invention, the audio sample duration is a variable duration.
  • In yet another additional embodiment of the invention, the synchronization value is measured by subtracting the duration of at least one audio sample from the first time window duration.
  • In still another additional embodiment of the invention, the threshold value is pre-determined.
  • In yet still another additional embodiment of the invention, the threshold value is determined dynamically.
  • In yet another embodiment of the invention, the at least one audio sample includes a sampling rate and the audio sample is resampled by increasing the sampling rate.
  • In still another embodiment of the invention, the at least one audio sample includes a sampling rate and the audio sample is resampled by decreasing the sampling rate.
  • In yet still another embodiment of the invention, the multimedia encoder further configures the processor to perform pitch compensation of the resampled audio sample.
  • In yet another additional embodiment of the invention, at least one video frame in the plurality of video frames includes a video frame timestamp and at least one timestamp in the plurality of timestamps is determined using the video frame timestamp.
  • In still another additional embodiment of the invention, at least one video frame in the plurality of video frames includes a video frame duration and at least one timestamp in the plurality of timestamps is determined using the video frame duration.
  • Still another embodiment of the invention includes a method for encoding live multimedia content includes receiving live multimedia content using an encoding system, generating a timeline using the video data and the encoding system, where the timeline contains a plurality of timestamps, where at least one timestamp in the plurality of timestamps is determined using at least one video frame in the plurality of video frames, computing a first time window using the encoding system, where the first time window includes a first time window duration corresponding to the difference in time between a first timestamp in the timeline and a second timestamp in the timeline, aligning the audio data to the video data using the audio samples and the timeline by assigning at least one audio sample to the first time window based upon the number of audio sample durations that occur within the first time window duration using the encoding system, measuring a synchronization value of the aligned audio data to the video data using the timeline and the encoding system, resampling at least one audio sample in the aligned audio data when the synchronization value exceeds a threshold value using the encoding system, and multiplexing the audio data and video data into a container file using the encoding system.
  • In yet another additional embodiment of the invention, the audio sample duration is a fixed duration.
  • In still another additional embodiment of the invention, the audio sample duration is a variable duration.
  • In yet still another additional embodiment of the invention, measuring the synchronization value includes subtracting the duration of at least one audio sample from the first time window duration using the encoding system.
  • In yet another embodiment of the invention, the threshold value is pre-determined.
  • In still another embodiment of the invention, the threshold value is determined dynamically.
  • In yet still another embodiment of the invention, the at least one audio sample includes a sampling rate and resampling an audio sample includes increasing the sampling rate using the encoding system.
  • In yet another additional embodiment of the invention, the at least one audio sample includes a sampling rate and resampling an audio sample includes decreasing the sampling rate using the encoding system.
  • In still another additional embodiment of the invention, encoding live multimedia content includes performing pitch compensation of at least one resampled audio sample using the encoding system.
  • In yet still another additional embodiment of the invention, at least one video frame in the plurality of video frames includes a video frame timestamp and determining at least one timestamp in the plurality of timestamps utilizes the video frame timestamp and the encoding system.
  • In yet another embodiment of the invention, at least one video frame in the plurality of video frames includes a video frame duration and determining at least one timestamp in the plurality of timestamps utilizes the video frame duration and the encoding system.
  • Yet another embodiment of the invention includes a machine readable medium containing processor instructions, where execution of the instructions by a processor causes the processor to perform a process including receiving live multimedia content, generating a timeline using the video data, where the timeline contains a plurality of timestamps, where at least one timestamp in the plurality of timestamps is determined using at least one video frame in the plurality of video frames, computing a first time window, where the first time window includes a first time window duration corresponding to the difference in time between a first timestamp in the timeline and a second timestamp in the timeline, aligning the audio data to the video data using the audio samples and the timeline by assigning at least one audio sample to the first time window based upon the number of audio sample durations that occur within the first time window duration, measuring a synchronization value of the aligned audio data to the video data using the timeline, resampling at least one audio sample in the aligned audio data when the synchronization value exceeds a threshold value, and multiplexing the audio data and video data into a container file.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a system diagram of a system for encoding and delivering live multimedia content in accordance with an embodiment of the invention.
  • FIG. 2 conceptually illustrates a media server configured to encode live video data with synchronized resampled audio data in accordance with an embodiment of the invention.
  • FIG. 3 is a flow chart illustrating a process for encoding live multimedia content with audio data synchronized with video data in accordance with an embodiment of the invention.
  • FIG. 4 is a flow chart illustrating a process for encoding live multimedia content with resampled audio data synchronized with video data in accordance with an embodiment of the invention.
  • DETAILED DESCRIPTION
  • Turning now to the drawings, systems and methods for encoding live multimedia content with synchronized resampled audio data in accordance with embodiments of the invention are disclosed. Multimedia content typically includes audio data and video data. In many embodiments, the video data is encoded using one of a variety of video compression schemes and the audio data is encoded using pulse code modulation (PCM). The audio data can then be multiplexed with the frames of encoded video data and stored in a container file. When encoding live multimedia content, such as for live streaming over the Internet, the audio data must be synchronized quickly in order to facilitate the encoding and delivery of the live multimedia content.
  • Container files are composed of blocks of content (e.g. fragments, elements, or chunks), where each block of content includes audio data and/or video data. A number of container files have restrictions as to how timestamps are applied to the data stored in the container file and/or the container files may have blocks of content of a fixed size. Furthermore, the generation of timestamps for live sources of audio and video data may contain errors or other issues. For example, the difference between adjacent timestamps present in the audio data may not be equal to the actual duration of the audio data contained between the adjacent timestamps. These restrictions and errors can cause the audio and video data to become desynchronized when the audio and video data is multiplexed in a container file.
  • In many embodiments, the likelihood of desynchronization is reduced by constructing a timeline using the timestamps of the encoded video data and synchronizing the audio data to the video data based upon the sampling rate of the audio data. In a number of embodiments, the audio data is synchronized to the timeline by adjusting the number of PCM samples assigned to a particular time interval. In several embodiments, the container file format permits a specific number of audio samples in each frame interval and the audio data is synchronized to the timeline by resampling the audio data to obtain an appropriate number of samples. The audio data and video data can then be multiplexed into a container file. Systems and methods for encoding live multimedia content with synchronized resampled audio data in accordance with embodiments of the invention are discussed further below.
  • System Overview
  • Media servers in accordance with embodiments of the invention are configured to encode live multimedia content to be stored and/or streamed to network clients. A media streaming network including a media server configured to encode live multimedia content in accordance with an embodiment of the invention is illustrated in FIG. 1. The illustrated media streaming network 10 includes a media source 100 configured to encode multimedia content in real time. The media source 100 is configured to capture and/or receive streams of live audio data from an audio source and streams of live video data from a video source. In accordance with embodiments of the invention, the audio source and video source generate and apply a timestamp to their respective captured data. However, the timestamps applied by the audio source and the video source are unlikely to be synchronized with each other and cannot be relied upon by the media source 100 to synchronize the encoded audio data and the encoded video data. In a number of embodiments of the invention, the media source 100 contains pre-encoded multimedia content. The media source 100 is connected to a network renderer 102. In accordance with embodiments of the invention, the network renderer 102 synchronizes the encoded audio data and the encoded video data by using the encoded video data to create a timeline and synchronizing the encoded audio data to the timeline based upon the sampling rate of the encoded audio data. In several embodiments, the initial synchronization of the video data and the audio data may be obtained using an initial synchronization sequence. In many embodiments, the media source 100 and the network renderer 102 are implemented using a media server. In accordance with embodiments of the invention, the network renderer 102 is connected to a plurality of network clients 104 utilizing a network 108.
  • In many embodiments of the invention, the network renderer 102 is implemented using a single machine. In several embodiments of the invention, the network renderer 102 is implemented using a plurality of machines. In many embodiments, the network 108 is the Internet. In several embodiments, the network 108 is any IP network.
  • The network clients 104 each contain a media decoder 106. In several embodiments of the invention, a network client 104 is configured to receive and decode multimedia content using the media decoder 106.
  • In many embodiments of the invention, a media server, where the media server includes a media source 100 and a network renderer 102, is implemented using a machine capable of receiving live multimedia content and multiplexing the received live multimedia content into a container file. In accordance with embodiments of the invention, the media server is also capable of encoding the received live multimedia content. The basic architecture of a media server in accordance with an embodiment of the invention is illustrated in FIG. 2. The media server 200 includes a processor 210 in communication with non-volatile memory 230, volatile memory 220, and a network interface 240. In the illustrated embodiment, the non-volatile memory includes a media encoder 232 that configures the processor to encode live multimedia content by creating a timeline using the video data in the live multimedia content, synchronizing samples of the audio data with the video data using the timeline, and multiplexing the audio data and video data in a container file. In many embodiments, the container file contains blocks of content with a fixed number of audio samples; in accordance with embodiments of the invention, the audio data is resampled to obtain the appropriate number of audio samples prior to multiplexing the audio data and the video data in a container file. In several embodiments, the network interface 240 may be in communication with the processor 210, the volatile memory 220, and/or the non-volatile memory 230. Although a specific media server architecture is illustrated in FIG. 2, any of a variety of architectures including architectures where the media encoder 232 is located on disk or some other form of storage and is loaded into volatile memory 220 at runtime can be utilized to implement media servers in accordance with embodiments of the invention.
  • Although specific architectures for a media streaming network and a media server configured to encode live multimedia content are described with respect to FIGS. 1 and 2, other implementations appropriate to a specific application can be utilized in accordance with embodiments of the invention. Methods for encoding live multimedia content with synchronized audio data in accordance with embodiments of the invention are discussed below.
  • Encoding Live Multimedia Content with Synchronized Audio Data
  • Live multimedia content can be encoded for a variety of purposes including, but not limited to, streaming the live multimedia content to a number of network clients. In order to provide a high quality viewing experience, the video data and the audio data should be closely synchronized so that the encoded multimedia content provides an experience similar to viewing the content live. In this way, the audio will correspond with relevant visual cues such as (but not limited to) lip motion associated with a person talking or singing. Traditionally, this synchronization is performed using the timestamps associated with the audio data and the video data; these timestamps are created by the hardware and/or software capturing the audio data and video data. However, the timestamps generated during the capture of the audio data and the video data may not be aligned and/or differences in the hardware and/or software may result in timestamps generated by the hardware recording the audio data and the hardware recording the video data being inconsistent with each other over the course of the recording of the live multimedia content. Further compounding the problem, the timestamps generated when recording the live audio data and video data may not accurately represent the real world elapsed time between the recorded timestamps. Moreover, direct multiplexing of the audio data and the video data may result in the loss of timestamps captured by the recording hardware, potentially causing synchronization problems as additional live multimedia content is encoded. However, these issues may be minimized, or even avoided, by constructing a new timeline using the video data and synchronizing the audio data using the timeline based upon the sampling rate of the audio data in accordance with embodiments of the invention.
  • A process for encoding live multimedia content in which audio data is synchronized with encoded video data is illustrated in FIG. 3. The process 300 includes receiving (310) live multimedia content, where the multimedia content includes (but is not limited to) audio data and video data. A timeline containing one or more timestamps is generated (312) using the timestamps associated with the frames of video data and/or the known frame rate of the video data. The audio data is aligned (314) to the video data using the timeline. The synchronization of the audio data is measured (316) using a synchronization threshold. If the audio data is de-synchronized (318) beyond the synchronization threshold, the audio data is adjusted (320). The synchronized audio data and video data are multiplexed (322) into a container file.
  • In several embodiments, the video data is encoded in accordance with video encoding standards including (but not limited to) MPEG2, MPEG4, H.264, or Scalable Video Coding. In a number of embodiments, the audio data is encoded using PCM. In many embodiments, the audio data is aligned (314) to the video data using the timeline by assigning fragments of PCM samples to the timeline without any gaps or overlaps. In a number of embodiments, PCM samples have a fixed duration and the audio data is aligned (314) to the timestamps by assigning enough PCM samples so that their total duration fills the difference between the current timestamp and the adjacent timestamp in the timeline. In many embodiments, the difference between the current timestamp and the adjacent timestamp in the timeline is a time window; the time window has a duration equal to the difference between the timestamps. In several embodiments, the synchronization of the PCM samples is measured (316) at each timestamp in the timeline using a variety of methods, including, but not limited to, subtracting the total duration of the PCM samples from the difference in time between the timestamp and the adjacent timestamp. In many embodiments, the audio data and video data are multiplexed (322) into a container file that includes a block of content (e.g. a fragment, element, or chunk) that stores one or more frames of video data and the audio data played back during the display of the one or more frames of video data. These blocks of content can have a variable size depending on the data stored in the block of content. In accordance with embodiments of the invention, a timestamp can be associated with the blocks of content of the container file containing video frames and associated audio data, and the timestamps can be utilized by decoders to time the display of individual frames of video and the playback of the accompanying audio.
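  • The alignment and measurement steps above can be sketched as follows. This is an illustrative simplification, not the patented method itself: the video timestamps, the 2.5 ms fixed PCM fragment duration, and the carrying of the fractional remainder into the next window are all assumed values and choices for the sake of the example.

```python
AUDIO_SAMPLE_MS = 2.5  # fixed PCM fragment duration (illustrative assumption)

def align_audio(timeline_ms):
    """Assign whole PCM fragments to each time window in the video-derived
    timeline and return (counts, sync_values), where each sync value is the
    window duration minus the total duration of the fragments assigned to it."""
    counts, sync = [], []
    carry = 0.0  # audio time owed to later windows (simplification)
    for start, end in zip(timeline_ms, timeline_ms[1:]):
        window = (end - start) + carry
        n = int(window // AUDIO_SAMPLE_MS)     # whole fragments that fit
        counts.append(n)
        drift = window - n * AUDIO_SAMPLE_MS   # measured synchronization value
        sync.append(drift)
        carry = drift                          # remainder carried forward
    return counts, sync

# Video frames at roughly 30 fps give timestamps about 33 ms apart.
counts, sync = align_audio([0, 33, 67, 100])
print(counts)  # [13, 13, 14]
```

Note that the total assigned audio duration (40 fragments x 2.5 ms) exactly fills the 100 ms timeline, so no audio time is lost across windows.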
  • In many embodiments, if the measured (316) synchronization of the audio data exceeds (318) the synchronization threshold, the audio data is adjusted (320) by moving PCM samples from one timestamp to the next. For example, if the measured (316) synchronization indicates that the audio data is falling behind the video data at timestamp X, the audio data is adjusted (320) by pulling PCM samples of audio data from the block of content in the container file associated with adjacent timestamp X+1. Likewise, if the audio data is ahead of the video data, the audio data is adjusted (320) by pushing PCM samples of audio data from the portion of the container file associated with timestamp X to the block of content associated with adjacent timestamp X+1. Other adjustments may be utilized in accordance with embodiments of the invention.
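  • The pull/push adjustment described above can be sketched as operations on per-timestamp blocks of samples. The function name, threshold, and sample values are illustrative assumptions, not elements of the specification; moving whole samples between adjacent blocks is the only technique taken from the text.

```python
def adjust_blocks(blocks, sync_values, sample_ms, threshold_ms):
    """blocks: list of per-timestamp lists of PCM samples.
    Where the measured drift exceeds the threshold, pull samples from the
    next block (audio falling behind) or push samples into it (audio ahead)."""
    for i, drift in enumerate(sync_values):
        if abs(drift) <= threshold_ms or i + 1 >= len(blocks):
            continue
        n = int(abs(drift) // sample_ms)       # whole samples to move
        if drift > 0:                          # audio behind: pull from X+1
            blocks[i].extend(blocks[i + 1][:n])
            del blocks[i + 1][:n]
        else:                                  # audio ahead: push to X+1
            blocks[i + 1][:0] = blocks[i][len(blocks[i]) - n:]
            del blocks[i][len(blocks[i]) - n:]
    return blocks

# A 5 ms lag at the first timestamp pulls two 2.5 ms samples forward.
blocks = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
adjust_blocks(blocks, [5.0, 0.0, 0.0], sample_ms=2.5, threshold_ms=2.5)
print(blocks)  # [[1, 2, 3, 4, 5], [6], [7, 8, 9]]
```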
  • A specific process for encoding live media with audio data synchronized to the video data in accordance with embodiments of the invention is described above with respect to FIG. 3; however, a variety of processes may be utilized in accordance with embodiments of the invention. Methods for encoding live media with synchronized resampled audio using containers with fixed size blocks of content in accordance with embodiments of the invention are discussed below.
  • Encoding Live Multimedia Content with Resampled Audio Data
  • As noted above, creating a timeline using video data and using that timeline to synchronize audio data to the video data enables the encoding of live multimedia content which will provide a high quality viewing experience. However, a number of container file formats utilized in accordance with embodiments of the invention fix the size of the predetermined portions that contain one or more frames of video data and the audio data that accompanies the one or more frames of video. Fixing the size of each predetermined portion utilized to store video and audio data typically means that each of the predetermined portions contains the same number of audio samples. Depending upon the sampling rate of the audio relative to the frame rate of the video, a different number of samples may fall within each frame interval. In a number of embodiments of the invention, the audio samples are resampled to obtain the appropriate number of samples. In several embodiments, filters and/or other appropriate adjustments can be applied to the samples to minimize the audio distortion resulting from playback of the resampled audio.
  • A process for encoding live multimedia content in which audio data is synchronized with video data and the audio and video data are multiplexed into a container file having a fixed number of audio samples per frame interval is illustrated in FIG. 4. The process 400 includes receiving (410) live multimedia content, where the multimedia content includes, but is not limited to, audio data and video data. A timeline containing one or more timestamps is generated (412) using the video data. The audio data is aligned (414) to the video data using the timeline. The synchronization of the audio data is measured (416) using a synchronization threshold. If the audio data is de-synchronized (418) beyond the synchronization threshold, the audio data is resampled (420), and, if necessary, corrections are applied (422) to the resampled audio data. The audio data and video data are multiplexed (424) into the container file.
  • In accordance with embodiments of the invention, a process similar to the one described above with respect to FIG. 3 may be utilized for building a timeline and detecting audio de-synchronization (410)-(418). In several embodiments, synchronization is measured (416) by counting the number of samples of audio data assigned to a timestamp in the timeline and comparing that number to the number of audio samples allowed for a block of content in the container file. If the number of samples of audio data differs from the number of audio samples allowed for the block of content, the audio data is resampled (420). In a number of embodiments, resampling (420) the PCM samples contained in the audio data results in the resampled audio data having a sample rate that is higher or lower than the original sample rate. In several embodiments, the difference between the resampled sample rate and the original sample rate is kept within a threshold value, such as (but not limited to) 500 Hz. In many embodiments, the resampled audio data is corrected (422) using one or more of a variety of techniques, including, but not limited to, pitch compensation, in order to mask changes in the sound resulting from the resampling process.
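  • The resampling step above can be sketched with a simple linear interpolator and a guard on the implied sample-rate change. The 500 Hz threshold is the figure given above; the linear interpolation itself and the 48 kHz example rate are illustrative assumptions, since the specification does not prescribe a particular resampling filter.

```python
def resample_linear(samples, target_count):
    """Linearly interpolate `samples` to exactly `target_count` samples,
    so a block carries the number of samples the container format allows."""
    if len(samples) == target_count or target_count < 2:
        return list(samples[:target_count])
    out = []
    step = (len(samples) - 1) / (target_count - 1)
    for i in range(target_count):
        pos = i * step
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

def rate_change_ok(n_in, n_out, sample_rate_hz, max_delta_hz=500):
    """True when the sample-rate change implied by resampling n_in samples
    to n_out samples stays within the threshold (e.g. 500 Hz)."""
    return abs(sample_rate_hz * n_out / n_in - sample_rate_hz) <= max_delta_hz

stretched = resample_linear([0.0, 1.0, 2.0, 3.0], 7)
print(stretched)  # [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
print(rate_change_ok(48000, 48480, 48000))  # True: implied change is 480 Hz
```

A correction stage such as pitch compensation would then be applied to the interpolated samples to mask the audible effect of the rate change.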
  • A specific process for encoding live media with synchronized audio using containers with fixed size blocks of content is described above with respect to FIG. 4; however, a variety of processes may be utilized in accordance with embodiments of the invention.
  • Although the present invention has been described in certain specific aspects, many additional modifications and variations would be apparent to those skilled in the art. It is therefore to be understood that the present invention may be practiced otherwise than specifically described without departing from the scope and spirit of the present invention. Thus, embodiments of the present invention should be considered in all respects as illustrative and not restrictive. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents.

Claims (23)

What is claimed is:
1. An encoding system, comprising:
live multimedia content storage configured to store live multimedia content, where the live multimedia content comprises audio data and video data, where the audio data comprises a plurality of audio samples having an audio sample duration and the video data comprises a plurality of video frames;
a processor; and
a multimedia encoder;
wherein the multimedia encoder configures the processor to:
receive live multimedia content;
generate a timeline using the video data, where the timeline contains a plurality of timestamps, where at least one timestamp in the plurality of timestamps is determined using at least one video frame in the plurality of video frames;
compute a first time window, where the first time window comprises a first time window duration corresponding to the difference in time between a first timestamp in the timeline and a second timestamp in the timeline;
align the audio data to the video data using the audio samples and the timeline by assigning at least one audio sample to the first time window based upon the number of audio sample durations that occur within the first time window duration;
measure a synchronization value of the aligned audio data to the video data using the timeline;
resample at least one audio sample in the aligned audio data when the synchronization value exceeds a threshold value; and
multiplex the audio data and video data into a container file.
2. The encoding system of claim 1, wherein the audio sample duration is a fixed duration.
3. The encoding system of claim 1, wherein the audio sample duration is a variable duration.
4. The encoding system of claim 1, wherein the synchronization value is measured by subtracting the duration of at least one audio sample from the first time window duration.
5. The encoding system of claim 1, wherein the threshold value is pre-determined.
6. The encoding system of claim 1, wherein the threshold value is determined dynamically.
7. The encoding system of claim 1, wherein the at least one audio sample comprises a sampling rate and the audio sample is resampled by increasing the sampling rate.
8. The encoding system of claim 1, wherein the at least one audio sample comprises a sampling rate and the audio sample is resampled by decreasing the sampling rate.
9. The encoding system of claim 1, wherein the multimedia encoder further configures the processor to perform pitch compensation of the resampled audio sample.
10. The encoding system of claim 1, wherein:
at least one video frame in the plurality of video frames comprises a video frame timestamp; and
at least one timestamp in the plurality of timestamps is determined using the video frame timestamp.
11. The encoding system of claim 1, wherein:
at least one video frame in the plurality of video frames comprises a video frame duration; and
at least one timestamp in the plurality of timestamps is determined using the video frame duration.
12. A method for encoding live multimedia content, comprising:
receiving live multimedia content using an encoding system;
generating a timeline using the video data and the encoding system, where the timeline contains a plurality of timestamps, where at least one timestamp in the plurality of timestamps is determined using at least one video frame in the plurality of video frames;
computing a first time window using the encoding system, where the first time window comprises a first time window duration corresponding to the difference in time between a first timestamp in the timeline and a second timestamp in the timeline;
aligning the audio data to the video data using the audio samples and the timeline by assigning at least one audio sample to the first time window based upon the number of audio sample durations that occur within the first time window duration using the encoding system;
measuring a synchronization value of the aligned audio data to the video data using the timeline and the encoding system;
resampling at least one audio sample in the aligned audio data when the synchronization value exceeds a threshold value using the encoding system; and
multiplexing the audio data and video data into a container file using the encoding system.
13. The method of claim 12, wherein the audio sample duration is a fixed duration.
14. The method of claim 12, wherein the audio sample duration is a variable duration.
15. The method of claim 12, wherein measuring the synchronization value comprises subtracting the duration of at least one audio sample from the first time window duration using the encoding system.
16. The method of claim 12, wherein the threshold value is pre-determined.
17. The method of claim 12, wherein the threshold value is determined dynamically.
18. The method of claim 12, wherein the at least one audio sample comprises a sampling rate and resampling an audio sample comprises increasing the sampling rate using the encoding system.
19. The method of claim 12, wherein the at least one audio sample comprises a sampling rate and resampling an audio sample comprises decreasing the sampling rate using the encoding system.
20. The method of claim 12, further comprising performing pitch compensation of at least one resampled audio sample using the encoding system.
21. The method of claim 12, wherein:
at least one video frame in the plurality of video frames comprises a video frame timestamp; and
determining at least one timestamp in the plurality of timestamps utilizes the video frame timestamp and the encoding system.
22. The method of claim 12, wherein:
at least one video frame in the plurality of video frames comprises a video frame duration; and
determining at least one timestamp in the plurality of timestamps utilizes the video frame duration and the encoding system.
23. A machine readable medium containing processor instructions, where execution of the instructions by a processor causes the processor to perform a process comprising:
receiving live multimedia content;
generating a timeline using the video data, where the timeline contains a plurality of timestamps, where at least one timestamp in the plurality of timestamps is determined using at least one video frame in the plurality of video frames;
computing a first time window, where the first time window comprises a first time window duration corresponding to the difference in time between a first timestamp in the timeline and a second timestamp in the timeline;
aligning the audio data to the video data using the audio samples and the timeline by assigning at least one audio sample to the first time window based upon the number of audio sample durations that occur within the first time window duration;
measuring a synchronization value of the aligned audio data to the video data using the timeline;
resampling at least one audio sample in the aligned audio data when the synchronization value exceeds a threshold value; and
multiplexing the audio data and video data into a container file.
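The alignment and synchronization steps recited in claims 12–23 can be illustrated with a small sketch. The code below is not the patented implementation; it is a hypothetical Python model in which the function names, the `Window` type, and the `threshold` parameter are all my own. It assigns audio samples to the window between consecutive video timestamps based on how many whole audio sample durations fit (claim 12), measures the synchronization value by subtracting the assigned audio duration from the window duration (claim 15), and flags windows whose residual exceeds a threshold for resampling at a stretched rate (claims 18–19).

```python
from dataclasses import dataclass


@dataclass
class Window:
    """One time window between two consecutive video timestamps."""
    start: float          # first timestamp in the timeline (seconds)
    end: float            # second timestamp in the timeline (seconds)
    sample_count: int = 0  # audio samples assigned to this window


def align_audio(timeline, sample_duration, threshold):
    """Align fixed-duration audio samples to a video-derived timeline.

    Returns, for each window: the Window, the synchronization value
    (window duration minus assigned audio duration), whether that value
    exceeds the threshold (i.e. the window's samples would be resampled),
    and the resampling ratio that would stretch the assigned samples to
    fill the window exactly.
    """
    results = []
    for t0, t1 in zip(timeline, timeline[1:]):
        duration = t1 - t0
        w = Window(t0, t1)
        # Assign as many whole audio sample durations as fit in the window.
        w.sample_count = int(duration // sample_duration)
        assigned = w.sample_count * sample_duration
        # Synchronization value: leftover time not covered by audio.
        sync = duration - assigned
        needs_resample = abs(sync) > threshold
        # Ratio by which the sampling rate would change to close the gap.
        ratio = duration / assigned if w.sample_count else 1.0
        results.append((w, sync, needs_resample, ratio))
    return results
```

As a usage example, with a timeline of one-second windows and AAC-like audio frames of 1024 samples at 48 kHz (about 21.3 ms each), 46 whole frames fit per window, leaving roughly 18.7 ms uncovered; with a 10 ms threshold that window would be marked for resampling. A variable-duration variant (claim 14) would replace the single `sample_duration` with a per-sample list.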
US13/629,292 2012-06-13 2012-09-27 System and Methods for Encoding Live Multimedia Content with Synchronized Resampled Audio Data Abandoned US20130336379A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/629,292 US20130336379A1 (en) 2012-06-13 2012-09-27 System and Methods for Encoding Live Multimedia Content with Synchronized Resampled Audio Data
PCT/US2013/042105 WO2013188065A2 (en) 2012-06-13 2013-05-21 System and methods for encoding live multimedia content with synchronized resampled audio data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261659111P 2012-06-13 2012-06-13
US13/629,292 US20130336379A1 (en) 2012-06-13 2012-09-27 System and Methods for Encoding Live Multimedia Content with Synchronized Resampled Audio Data

Publications (1)

Publication Number Publication Date
US20130336379A1 true US20130336379A1 (en) 2013-12-19

Family

ID=49755885

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/629,292 Abandoned US20130336379A1 (en) 2012-06-13 2012-09-27 System and Methods for Encoding Live Multimedia Content with Synchronized Resampled Audio Data
US13/629,306 Active 2034-03-31 US9281011B2 (en) 2012-06-13 2012-09-27 System and methods for encoding live multimedia content with synchronized audio data

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/629,306 Active 2034-03-31 US9281011B2 (en) 2012-06-13 2012-09-27 System and methods for encoding live multimedia content with synchronized audio data

Country Status (2)

Country Link
US (2) US20130336379A1 (en)
WO (1) WO2013188065A2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015108672A1 (en) * 2014-01-15 2015-07-23 Avigilon Corporation Streaming multiple encodings encoded using different encoding parameters
CN109089130A (en) * 2018-09-18 2018-12-25 网宿科技股份有限公司 Method and apparatus for adjusting the timestamps of a live video
US10701417B2 (en) 2016-05-24 2020-06-30 Divx, Llc Systems and methods for providing audio content during trick-play playback

Families Citing this family (15)

Publication number Priority date Publication date Assignee Title
TWI557727B (en) * 2013-04-05 2016-11-11 杜比國際公司 An audio processing system, a multimedia processing system, a method of processing an audio bitstream and a computer program product
US9930086B2 (en) * 2013-10-28 2018-03-27 Samsung Electronics Co., Ltd. Content presentation for MPEG media transport
WO2015131934A1 (en) * 2014-03-05 2015-09-11 2Kb Beteiligungs Gmbh System and method for live video streaming
US20170118501A1 (en) * 2014-07-13 2017-04-27 Aniview Ltd. A system and methods thereof for generating a synchronized audio with an imagized video clip respective of a video clip
US9807336B2 (en) * 2014-11-12 2017-10-31 Mediatek Inc. Dynamic adjustment of video frame sampling rate
CN104410894B (en) * 2014-11-19 2018-05-01 大唐移动通信设备有限公司 Method and apparatus for audio-video synchronization in a wireless environment
US10034036B2 (en) 2015-10-09 2018-07-24 Microsoft Technology Licensing, Llc Media synchronization for real-time streaming
GB2547442B (en) * 2016-02-17 2022-01-12 V Nova Int Ltd Physical adapter, signal processing equipment, methods and computer programs
US10441885B2 (en) * 2017-06-12 2019-10-15 Microsoft Technology Licensing, Llc Audio balancing for multi-source audiovisual streaming
US11051050B2 (en) * 2018-08-17 2021-06-29 Kiswe Mobile Inc. Live streaming with live video production and commentary
US10887646B2 (en) * 2018-08-17 2021-01-05 Kiswe Mobile Inc. Live streaming with multiple remote commentators
CN112985583B (en) * 2021-05-20 2021-08-03 杭州兆华电子有限公司 Acoustic imaging method and system combined with short-time pulse detection
US11930189B2 (en) * 2021-09-30 2024-03-12 Samsung Electronics Co., Ltd. Parallel metadata generation based on a window of overlapped frames
US11943125B2 (en) * 2022-01-26 2024-03-26 Dish Network Technologies India Private Limited Discontinuity detection in transport streams
CN115052178B (en) * 2022-04-15 2024-01-26 武汉微科中芯电子技术有限公司 Audio/video encoding/decoding system, encoding/decoding method, and medium

Citations (5)

Publication number Priority date Publication date Assignee Title
US20020003948A1 (en) * 2000-04-26 2002-01-10 Takuji Himeno Recording apparatus and method, playback apparatus and method, and recording medium therefor
US6792047B1 (en) * 2000-01-04 2004-09-14 Emc Corporation Real time processing and streaming of spliced encoded MPEG video and associated audio
US20070208571A1 (en) * 2004-04-21 2007-09-06 Pierre-Anthony Stivell Lemieux Audio Bitstream Format In Which The Bitstream Syntax Is Described By An Ordered Transversal of A Tree Hierarchy Data Structure
US20090326930A1 (en) * 2006-07-12 2009-12-31 Panasonic Corporation Speech decoding apparatus and speech encoding apparatus
US20120140018A1 (en) * 2010-06-04 2012-06-07 Alexey Pikin Server-Assisted Video Conversation

Family Cites Families (22)

Publication number Priority date Publication date Assignee Title
US5751280A (en) * 1995-12-11 1998-05-12 Silicon Graphics, Inc. System and method for media stream synchronization with a base atom index file and an auxiliary atom index file
WO1997046027A1 (en) * 1996-05-29 1997-12-04 Sarnoff Corporation Preserving synchronization of audio and video presentation
US5893062A (en) * 1996-12-05 1999-04-06 Interval Research Corporation Variable rate video playback with synchronized audio
EP1099350A2 (en) * 1999-05-14 2001-05-16 Koninklijke Philips Electronics N.V. Method of converting a packetized stream of information signals into a stream of information signals with time stamps and vice versa
KR20040041082A (en) * 2000-07-24 2004-05-13 비브콤 인코포레이티드 System and method for indexing, searching, identifying, and editing portions of electronic multimedia files
US6920181B1 (en) * 2000-09-19 2005-07-19 Todd Porter Method for synchronizing audio and video streams
US7130316B2 (en) * 2001-04-11 2006-10-31 Ati Technologies, Inc. System for frame based audio synchronization and method thereof
US20030044166A1 (en) * 2001-08-31 2003-03-06 Stmicroelectronics, Inc. System for multiplexing video data streams in a digital video recorder and method of operating the same
US6931071B2 (en) * 2001-08-31 2005-08-16 Stmicroelectronics, Inc. Apparatus and method for synchronizing video and audio MPEG streams in a video playback device
US7116894B1 (en) * 2002-05-24 2006-10-03 Digeo, Inc. System and method for digital multimedia stream conversion
US6995311B2 (en) 2003-03-31 2006-02-07 Stevenson Alexander J Automatic pitch processing for electric stringed instruments
JP4902935B2 (en) * 2003-05-08 2012-03-21 ソニー株式会社 Information processing apparatus, information processing method, program, and recording medium
EP1873775A4 (en) * 2005-04-07 2009-10-14 Panasonic Corp Recording medium, reproducing device, recording method, and reproducing method
US8799757B2 (en) * 2005-07-01 2014-08-05 Microsoft Corporation Synchronization aspects of interactive multimedia presentation management
EP1908303A4 (en) * 2005-07-01 2011-04-06 Sonic Solutions Method, apparatus and system for use in multimedia signal encoding
US7414550B1 (en) * 2006-06-30 2008-08-19 Nvidia Corporation Methods and systems for sample rate conversion and sample clock synchronization
US8856371B2 (en) * 2006-08-07 2014-10-07 Oovoo Llc Video conferencing over IP networks
EP2201707A4 (en) * 2007-09-20 2011-09-21 Visible World Corp Systems and methods for media packaging
US8788079B2 (en) * 2010-11-09 2014-07-22 Vmware, Inc. Monitoring audio fidelity and audio-video synchronization
JP5400009B2 (en) * 2010-09-27 2014-01-29 ルネサスエレクトロニクス株式会社 Transcoding device, transcoding method and program
US8736700B2 (en) * 2010-09-30 2014-05-27 Apple Inc. Techniques for synchronizing audio and video data in an image signal processing system
US20130141643A1 (en) * 2011-12-06 2013-06-06 Doug Carson & Associates, Inc. Audio-Video Frame Synchronization in a Multimedia Stream

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US6792047B1 (en) * 2000-01-04 2004-09-14 Emc Corporation Real time processing and streaming of spliced encoded MPEG video and associated audio
US20020003948A1 (en) * 2000-04-26 2002-01-10 Takuji Himeno Recording apparatus and method, playback apparatus and method, and recording medium therefor
US20070208571A1 (en) * 2004-04-21 2007-09-06 Pierre-Anthony Stivell Lemieux Audio Bitstream Format In Which The Bitstream Syntax Is Described By An Ordered Transversal of A Tree Hierarchy Data Structure
US20090326930A1 (en) * 2006-07-12 2009-12-31 Panasonic Corporation Speech decoding apparatus and speech encoding apparatus
US20120140018A1 (en) * 2010-06-04 2012-06-07 Alexey Pikin Server-Assisted Video Conversation

Cited By (7)

Publication number Priority date Publication date Assignee Title
WO2015108672A1 (en) * 2014-01-15 2015-07-23 Avigilon Corporation Streaming multiple encodings encoded using different encoding parameters
US10567765B2 (en) 2014-01-15 2020-02-18 Avigilon Corporation Streaming multiple encodings with virtual stream identifiers
US11228764B2 (en) 2014-01-15 2022-01-18 Avigilon Corporation Streaming multiple encodings encoded using different encoding parameters
US10701417B2 (en) 2016-05-24 2020-06-30 Divx, Llc Systems and methods for providing audio content during trick-play playback
US11044502B2 (en) 2016-05-24 2021-06-22 Divx, Llc Systems and methods for providing audio content during trick-play playback
US11546643B2 (en) 2016-05-24 2023-01-03 Divx, Llc Systems and methods for providing audio content during trick-play playback
CN109089130A (en) * 2018-09-18 2018-12-25 网宿科技股份有限公司 Method and apparatus for adjusting the timestamps of a live video

Also Published As

Publication number Publication date
WO2013188065A2 (en) 2013-12-19
US9281011B2 (en) 2016-03-08
WO2013188065A3 (en) 2014-03-06
US20130336412A1 (en) 2013-12-19

Similar Documents

Publication Publication Date Title
US9281011B2 (en) System and methods for encoding live multimedia content with synchronized audio data
KR101689616B1 (en) Method for transmitting/receiving media segment and transmitting/receiving apparatus thereof
EP3136732B1 (en) Converting adaptive bitrate chunks to a streaming format
CN109314784B (en) System and method for encoding video content
US8743906B2 (en) Scalable seamless digital video stream splicing
JP2023083353A (en) Regeneration method and regeneration device
US8613013B2 (en) Ad splicing using re-quantization variants
CN112369042B (en) Frame conversion for adaptive streaming alignment
KR20120084252A (en) Receiver for receiving a plurality of transport stream, transmitter for transmitting each of transport stream, and reproducing method thereof
JP2001513606A (en) Processing coded video
JP5972616B2 (en) Reception device, clock restoration method, and program
US8170401B2 (en) Optimizing ad insertion by removing low information frames
US20180338168A1 (en) Splicing in adaptive bit rate (abr) video streams
US12010367B2 (en) Broadcast in-home streaming
WO2015162226A2 (en) Digital media splicing system and method
US20210168472A1 (en) Audio visual time base correction in adaptive bit rate applications
US10700799B2 (en) Method and apparatus for broadcast signal transmission
WO2011086350A1 (en) Method and apparatus for processing transport streams
CN115668955A (en) System for recovering presentation time stamps from a transcoder
GB2543080A (en) Digital media splicing system and method
JP7569195B2 (en) Transmitting device and receiving device
Yun et al. A hybrid architecture based on TS and HTTP for real-time 3D video transmission
EP2150066A1 (en) 2010-02-03 Method for measuring channel-change time on digital television

Legal Events

Date Code Title Description
AS Assignment

Owner name: DIVX, LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:EROFEEV, KIRILL;PETROVA, GALINA;SAHNO, DMITRY;REEL/FRAME:029218/0869

Effective date: 20121031

AS Assignment

Owner name: SONIC IP, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DIVX, LLC;REEL/FRAME:031713/0032

Effective date: 20131121

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION