US20100275229A1 - Systems, methods and computer readable media for instant multi-channel video content browsing in digital video distribution systems - Google Patents
- Publication number
- US20100275229A1
- Authority
- US
- United States
- Prior art keywords
- channel
- base layer
- channels
- video
- viewing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04N21/64322—Communication protocols for video distribution control signaling; IP
- G11B27/105—Programmed access in sequence to addressed parts of tracks of operating discs
- H04N19/162—Adaptive coding of digital video controlled by user input
- H04N19/31—Hierarchical coding (scalability) in the temporal domain
- H04N19/33—Hierarchical coding (scalability) in the spatial domain
- H04N19/40—Video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
- H04N19/61—Transform coding in combination with predictive coding
- H04N21/234327—Reformatting video streams by decomposing into layers, e.g. base layer and one or more enhancement layers
- H04N21/234363—Reformatting video streams by altering the spatial resolution, e.g. for clients with a lower screen resolution
- H04N21/234381—Reformatting video streams by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
- H04N21/431—Generation of visual interfaces for content selection or interaction
- H04N21/4314—Visual interfaces fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
- H04N21/4383—Accessing a communication channel
- H04N21/4384—Operations to reduce the access time, e.g. fast-tuning for reducing channel switching latency
- H04N21/440227—Client-side reformatting by decomposing into layers, e.g. base layer and one or more enhancement layers
- H04N21/4621—Controlling the complexity of the content stream, e.g. lowering the resolution or bit-rate of the video stream for a mobile client with a small screen
- H04N21/8549—Creating video summaries, e.g. movie trailer
- H04N5/783—Adaptations for reproducing at a rate different from the recording rate
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
- H04N5/772—Recording apparatus and television camera placed in the same enclosure
- H04N5/775—Interface circuits between a recording apparatus and a television receiver
- H04N5/85—Television signal recording using optical recording on discs or drums
- H04N9/8042—Pulse code modulation of the colour picture signal components involving data reduction
- H04N9/8205—Multiplexing of an additional signal and the colour video signal
Definitions
- The disclosed invention relates to compressed digital video distribution systems such as cable television (CATV), satellite television, Internet protocol television (IPTV), and Internet-based video distribution systems.
- In particular, it enables digital video distribution systems to support fast browsing of the video content of multiple TV channels or video files while simultaneously watching one or more selected TV channels or video files.
- It is also concerned with the technology used in the endpoints of a digital video distribution system, such as a set-top-box or game console.
- CATV is one of the most popular broadband digital video distribution technologies in Europe, Australia, America, and Asia.
- In a CATV system, many video channels are multiplexed on a single cable medium with very high bandwidth and distributed through dispersed cable head-end offices, each serving a geographical area.
- the cable head-end of the CATV infrastructure simultaneously carries the digitized and encoded video of each and every channel, regardless of whether the user watches a channel or not.
- IPTV transmits TV programs over packet networks.
- IPTV has gained significant momentum due to its advantage in delivering new services with ease.
- One of the drawbacks of IPTV is the relatively narrow bandwidth of the user's access line.
- a user's access line may be a telephone line employing asymmetric digital subscriber line (ADSL) or similar technologies, which have limited bandwidth available to deliver high quality video content.
- Sending a large number of programs at the same time is not practical in an IPTV system due to the aforementioned lack of bandwidth.
- IPTV may rely on public Internet or a private IP network, which may have notable transport delays.
- VOD video on demand
- PPV pay per view
- Endpoints designed for video conferencing have been disclosed, amongst other things, in co-pending U.S. patent application Ser. No. 12/015,956, incorporated herein by reference.
- Video distribution endpoints, e.g., IPTV endpoints, share many commonalities with the video conferencing endpoints relevant to this invention.
- a typical endpoint ( 101 ) includes a set of devices and/or software that is located at the user's premises.
- One typical endpoint includes a network interface ( 102 ) (for example, a DSL modem, a cable modem, or an ISDN T1 interface) connected to a network ( 103 ) (for example, the Internet or another private or public IP network), a computer ( 104 ) (for example, a set-top box, game console, personal computer or another type of computer) that connects via a local area network ( 105 ) (for example, Ethernet) to the network interface ( 102 ), a video display ( 106 ) (for example, a TV or computer monitor), and an audio output (for example, a set of loudspeakers).
- The set-top-box translates the data received from the Internet into a signal format the TV understands; traditionally, a combination of analog audio and video signals is used, but recently all-digital interfaces (such as HDMI) have become common.
- the set-top-box therefore typically includes analog or digital audio/video outputs and interfaces.
- Both TV monitor and set-top-box device are typically controlled by an input device ( 107 ), alternatively known as a pointing device (for example, a remote control, computer mouse, keyboard, or another input device).
- Most prior art set-top-boxes lack media input devices, such as a camera or microphone, that are common in videoconferencing endpoints.
- a set-top-box ( 200 ) has a hardware architecture similar to a general purpose computer: a central processing unit (CPU) ( 201 ) executes instructions stored in Random Access Memory (RAM) ( 202 ) and/or read-only-memory (ROM) ( 203 ), and utilizes interface hardware to connect to the network interface ( 204 ), the audio/video output interface ( 205 ), and the user interface ( 206 ) (which is connected to a user input device ( 207 ), for example, a remote control). All these components are under the control of the CPU.
- a typical set-top-box also includes an accelerator unit ( 208 ) (for example, a dedicated Digital Signal Processor (DSP)) that helps the CPU ( 201 ) with computationally complex tasks, such as video decoding and video processing.
- An accelerator unit ( 208 ) is typically present for reasons of cost efficiency rather than technical necessity: a much faster CPU can often substitute for an accelerator or DSP, but such CPUs (and their required infrastructure, such as power supplies and faster memory) may be more expensive than dedicated accelerator units.
- General purpose computers, such as Personal Computers (PCs), can also serve as endpoints: additional hardware can be added to provide the interfaces a typical set-top-box contains, and/or additional accelerator hardware can be added to augment the CPU for video decoding and processing.
- the operating system controlling the set-top-box typically offers services that can be used (for example, receivers and transmitters according to certain protocols).
- the protocols of most interest here are those for the transmission of real-time application data: Internet Protocol (IP), User Datagram Protocol (UDP) and/or Transmission Control Protocol (TCP), and Real-time Transport Protocol (RTP).
- RTP receivers and transmitters can also be implemented in the application, rather than in the operating system.
- Most operating systems support the parallel or quasi-parallel use of more than one protocol receiver and/or transmitter.
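As a sketch of the parallel protocol receivers described above, the following opens two UDP sockets in one process and services whichever has a datagram pending. The ports and loopback address are illustrative assumptions, not values from the patent.

```python
import selectors
import socket

# Two UDP "protocol receivers" multiplexed quasi-parallel in one process.
# Ports 5004/5006 are illustrative RTP-style defaults, not from the patent.
sel = selectors.DefaultSelector()

def make_receiver(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", port))
    s.setblocking(False)
    sel.register(s, selectors.EVENT_READ, data=port)
    return s

receivers = [make_receiver(p) for p in (5004, 5006)]

def poll_once(timeout=0.1):
    """Service every receiver that has a datagram pending."""
    packets = []
    for key, _events in sel.select(timeout):
        payload, _addr = key.fileobj.recvfrom(2048)
        packets.append((key.data, payload))
    return packets
```

In a real endpoint the payloads would be RTP packets handed to per-channel depacketizers; here they are treated as opaque bytes.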
- The term codec is used equally to describe techniques for encoding and decoding and implementations of those techniques.
- a (media) encoder converts input media data into a bitstream or a packet stream
- a (media) decoder converts an input bitstream or packet stream into a media representation suitable for presentation to a user (for example, digital or analog video for presentation on a video display, or digital or analog audio for presentation through loudspeakers).
- Encoders and decoders can be dedicated hardware devices or building blocks of a software-based implementation running on a general purpose CPU and/or an associated accelerator unit.
- Set-top-boxes can be constructed such that many encoders or decoders run in parallel or quasi-parallel.
- one easy way to support multiple encoders/decoders is to integrate multiple instances in the set-top-box.
- similar mechanisms can be employed. For example, in a multi-process operating system, multiple instances of encoder/decoder code can be run quasi-simultaneously.
- Digital video codecs alternatively known as digital video coding/decoding techniques (e.g., MPEG-2, H-series codecs such as H.263 and H.264), in conjunction with packet network delivery, have increased channel-change times to several hundred milliseconds or even seconds in many cases, for at least the following two reasons:
- Transport delays: these result from buffering by the decoder at the receiving end, i.e., the endpoint, which is necessary to alleviate the effects of: (a) bandwidth changes in the transport network (such as the variable link bandwidths experienced in wireless networks); (b) delay jitter caused by varying queuing delays in transport network switches; and/or (c) packet loss in the network.
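A minimal receiver-side jitter buffer of the kind implied by (b) above might look like this; the fixed depth and the interface names are illustrative assumptions, not taken from the patent.

```python
import heapq

# Reorders packets by sequence number and releases them in order only
# once a fixed depth has accumulated, absorbing network delay jitter.
class JitterBuffer:
    def __init__(self, depth=3):
        self.depth = depth
        self.heap = []                  # min-heap of (seq, payload)

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self):
        """Release in-order packets beyond the buffering depth."""
        out = []
        while len(self.heap) > self.depth:
            out.append(heapq.heappop(self.heap))
        return out
```

The buffering depth is exactly the trade-off the text describes: a deeper buffer tolerates more jitter but adds to the channel-change delay.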
- Encoding delays: the decoder at the endpoint must receive an I-frame, alternatively known as an intra-coded frame, from the encoder before the video can be decoded.
- The temporal interval between I-frames is fixed in most prior art systems (for example, 0.25 sec or more in most CATV systems) to reduce the required coding bandwidth. Therefore, when a user changes a channel, it can take 0.5 seconds or more before the receiver can decode the video. Furthermore, it is well known that increasing the interval between I-frames improves coding efficiency. As a result, many IPTV service providers trade channel-change time for better picture quality, and channel-change times of several seconds are not uncommon in deployed IPTV systems.
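The arithmetic behind this trade-off can be made explicit. A receiver tuning in at a random instant waits, on average, half the I-frame interval (and in the worst case the full interval) before the next decodable I-frame; the buffering figure below is an illustrative assumption.

```python
# Contribution of the I-frame interval to channel-change delay.
def channel_change_delay(iframe_interval_s, buffering_s=0.0):
    avg = buffering_s + iframe_interval_s / 2
    worst = buffering_s + iframe_interval_s
    return avg, worst

avg, worst = channel_change_delay(iframe_interval_s=2.0, buffering_s=0.5)
# With a 2 s I-frame interval and 0.5 s of receiver buffering, the
# average tune-in delay is 1.5 s and the worst case is 2.5 s.
```

This is why lengthening the I-frame interval for coding efficiency directly lengthens channel-change time.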
- CATV and satellite TV systems suffer only from encoding delays.
- IPTV and other packet network-based video distribution systems also suffer from transport delays, which can be significantly longer.
- In the evolving IPTV environment, channel change time has become significantly longer due to transport delays, particularly when video channels are delivered over a best-effort network such as the public Internet, where network conditions are unpredictable.
- An encoder is needed that: (a) generates a synchronization frame (i.e., the I-frame of prior systems) only when needed, rather than at a fixed time interval; (b) employs no or only a small number of future frames, to minimize algorithmic delay; and (c) compensates for possible packet loss or insurmountable delay, rather than relying on receiving-end buffering and error mitigation as the sole mechanisms for error resilience. Because transport delays can significantly lengthen channel-change time, even a generic video teleconferencing codec (which normally implements all of the aforementioned features) cannot completely eliminate the delay problem.
- H.261 and H.263 are used for person-to-person communication purposes such as videoconferencing.
- MPEG-1 and MPEG-2 Main Profile are used in Video CDs and DVDs, respectively.
- The bitrate can be either fixed, or variable and dictated by the media content: the more complex a scene becomes, the higher the generated bitrate.
- A limitation of single-layer coding arises when the final rendering on the screen requires a lower spatial resolution than the one typically utilized for full-screen video reproduction (such as in TV).
- In that case, the full-resolution signal must still be sent and decoded at the receiving end and then downscaled to the low required spatial resolution, wasting both bandwidth and computational resources.
- support for lower resolutions is essential in a channel surfing application displaying several channels simultaneously, as one goal is to fit as many channels displayed in mini browsing windows (MBWs) as possible into a specific screen area—which results in the MBWs being naturally of lower resolution than the main video program.
- Layered video codecs are video compression techniques that have been developed explicitly for heterogeneous environments.
- two or more layers are generated for a given source video signal: a base layer and at least one enhancement layer.
- the base layer offers a basic representation of the source signal at a reduced quality, which can be achieved, for example, by reducing the Signal-to-Noise Ratio (SNR) through coarse quantization, using a reduced spatial and/or temporal resolution, or a combination of these techniques.
- the base layer can advantageously be transmitted using a reliable channel, i.e., a channel with guaranteed or enhanced quality of service (QoS).
- QoS quality of service
- Each enhancement layer increases the quality by increasing the SNR, spatial resolution, or temporal resolution, and can often be transmitted with reduced QoS or best effort. In effect, a user is guaranteed to receive a signal with at least a minimum level of quality of the base layer signal.
- An endpoint is configured to receive a first channel in layered bitstream format, including a base layer and optionally a plurality of enhancement layers.
- the base and optional enhancement layers of the first channel can be decoded and displayed in a main window of a video display. Further, the endpoint can be configured to receive at least one second channel in the form of a base layer.
- This second channel can also be decoded, and can be displayed in a Mini Browsing Window (MBW).
- MBW Mini Browsing Window
- the decoding of the enhancement layer of the first channel terminates.
- the display of the first channel in the main window terminates.
- the decoded second channel is zoomed to fit the size of the main window, and can be displayed.
- the decoded base layer of the first channel may be displayed in a MBW.
- the server is instructed to stop sending enhancement layers of the first channel and/or commence sending at least one enhancement layer for the second channel.
- FIG. 1 is a block diagram illustrating an exemplary endpoint in accordance with the present invention.
- FIG. 2 is a block diagram illustrating an exemplary endpoint in accordance with the present invention.
- FIG. 3 is a block diagram illustrating an exemplary endpoint in accordance with the present invention.
- FIG. 4 is an exemplary video display screen in accordance with the present invention.
- FIG. 5 is an exemplary video display screen in accordance with the present invention.
- FIG. 7 is an exemplary server in accordance with the present invention.
- FIG. 8 is an exemplary message flow between an endpoint and a server in accordance with the present invention.
- the present invention provides techniques for the distribution and display of video content, for example, live/on-air (e.g., TV channel), online, or pre-stored video files, in a way that provides for effective video content browsing, alternatively known as “channel surfing,” and is well suited for any generic digital video distribution system, including those that use packet networks (e.g., IPTV) or public Internet (e.g., video services available on the Internet).
- a “channel” denotes not only live/on-air video content, but also any online or pre-stored video content. Channels may be represented by, for example, video signals, compressed video signals, or audio-visual signals.
- the techniques provide for a digital video distribution system that allows for display of channels using a plurality of mini browsing windows, alternatively known as MBWs, of different sizes and numbers that simultaneously display several channels or video programs.
- MBWs can be displayed independently or as an overlay on a main window, alternatively known as the full screen, which displays a single channel.
- a rapid switching feature provides a user, alternatively known as a TV viewer, with the ability to browse a set of channels while watching one specific channel, and instantly switch to a different set of channels for browsing.
- the disclosed techniques provide a significantly enhanced channel surfing experience.
- an exemplary digital video distribution system advantageously uses a layered codec, for example, as described in co-pending U.S. patent application Ser. Nos. 12/015,956, 11/608,776, and 11/682,263 and U.S. Pat. No. 7,593,032.
- the present invention avoids the buffering and inherent encoding delays of a classical digital video distribution system, and permits fast switching of channels in MBWs.
- the present invention improves bandwidth usage by generating multiple layers of video, i.e., the channels are coded in layered bitstream format, and uses only the lower layers to display channels in the MBWs.
- These lower layers represent lower resolutions, lower frame rate, or lower SNR, using much less bandwidth and enabling a low processing complexity.
- These techniques eliminate the need for receiver buffering, at the cost of a slight performance degradation in the event of packet loss or excessive packet delay.
- layered codec provides rate matching to account for the fact that different channels may be using IP network connections with different bandwidths, which requires different data rates from the encoder.
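- As a hedged illustration of the rate matching described above, the following Python sketch picks how many layers of a channel's layered bitstream fit a connection's bandwidth budget. The function name and the layer bitrates are assumptions for illustration, not taken from the disclosure.

```python
# Hypothetical sketch: select the largest prefix of layers (base layer
# first) whose cumulative bitrate fits the available bandwidth.

def select_layers(layer_bitrates_kbps, available_kbps):
    """Return how many layers (base first) fit in the bandwidth budget.

    The base layer is always sent, even if it exceeds the budget, since
    without it the channel cannot be decoded at all.
    """
    total = 0
    count = 0
    for rate in layer_bitrates_kbps:
        if count > 0 and total + rate > available_kbps:
            break
        total += rate
        count += 1
    return count

# Example: base layer at 200 kbps plus three enhancement layers.
layers = [200, 300, 500, 1000]
print(select_layers(layers, 1100))  # base + two enhancement layers -> 3
print(select_layers(layers, 150))   # base layer is always sent -> 1
```

A server applying this per endpoint would naturally send fewer enhancement layers over a constrained access line, matching the rate-matching behavior described above.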
- FIG. 3 illustrates several components of an IPTV endpoint according to one embodiment of the present invention.
- the endpoint includes more than one receiver ( 301 , 302 ), each able to receive a channel in layered bitstream format, including at least a base layer, and possibly also one or more enhancement layers associated with the base layer.
- a receiver may also be configured to receive only an enhancement layer.
- Each receiver translates incoming packet data ( 303 , 304 ) from the network interface ( 305 ), advantageously using the IP/UDP/RTP protocol hierarchy, into layered video bitstreams ( 306 , 307 ) and side information ( 308 , 309 ), such as timing information.
- a given decoder can be coupled to more than one receiver (decoder 310 can be coupled to receiver 302 by connection 314 ) to receive, for example, a base layer from a first receiver and one or more enhancement layers from a second receiver.
- the receivers are coupled with at least four decoders ( 310 , 311 ) to generate four sequences of video images ( 312 , 313 ), which are assembled by the GUI ( 315 ) into a single screen layout, and displayed through the video output interface ( 316 ) to the TV screen ( 317 ).
- FIG. 5 depicts another exemplary screen layout where four MBWs ( 501 ) are in use, and overlap a main window ( 502 ).
- the main window displays a channel selected by the user.
- the MBWs display other channels which the user could easily select to view in the main window by using an input device to select or “click on” the MBW representing the desired channel.
- the user can set MBW display configuration preferences through the GUI.
- the GUI is typically implemented as a software application similar to a Windows-based user interface.
- the number of MBWs is only limited by the number of receivers and decoders.
- If decoders and/or receivers are implemented in hardware, the number of MBWs is limited by the number of available decoders and/or receivers. If decoders and receivers are implemented in software, then in most implementations there is no practical limit on the number of MBWs other than the performance of the CPU.
- the user can fit as many MBWs as he/she desires so long as the total size of all MBWs does not exceed the available display size. There is no minimum limit for MBW size.
- the user can set the desired size on an MBW by dragging the edges of an MBW window, and/or by setting MBW display configuration preferences which specify size. Depending on the GUI, it may also be possible to have overlapping MBWs.
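- A minimal sketch of the sizing rules just described: each MBW must lie within the display, and a GUI that disallows overlap can reject overlapping rectangles. The rectangle representation and function names are illustrative assumptions.

```python
# Illustrative sketch: validate a set of MBW rectangles against the
# display area. Rectangles are (x, y, width, height); touching edges
# do not count as overlap.

def rects_overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def layout_is_valid(mbws, display_w, display_h, allow_overlap=False):
    # Every MBW must fit entirely on the display.
    for x, y, w, h in mbws:
        if x < 0 or y < 0 or x + w > display_w or y + h > display_h:
            return False
    # Depending on the GUI, overlapping MBWs may or may not be permitted.
    if not allow_overlap:
        for i in range(len(mbws)):
            for j in range(i + 1, len(mbws)):
                if rects_overlap(mbws[i], mbws[j]):
                    return False
    return True

# Four quarter-width MBWs tiled in the corner of a 1920x1080 display.
tiles = [(0, 0, 480, 270), (480, 0, 480, 270),
         (0, 270, 480, 270), (480, 270, 480, 270)]
print(layout_is_valid(tiles, 1920, 1080))  # True
```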
- the server, when in side channel mode, sends at least the "current channel" (i.e., the channel the user is most interested in, and which is typically displayed in the main window), and one side channel. In most cases, more than one side channel is sent.
- the “current channel” i.e., the channel the user is most interested in, and which is typically displayed in the main window
- the channel would be available at the server ( 701 ) in the form of a layered bitstream ( 711 ), which contains a base layer and, for example, four enhancement layers, and the video extractor ( 704 ) would remove all the enhancement layers and create a layered bitstream ( 709 ) that contains only the base layer.
- the newly created layered bitstream ( 303 ) represents one channel, and is sent through the network to the endpoint, wherein the layered bitstream ( 303 ) is typically fed by the network interface ( 305 ) into one receiver ( 301 ).
- the GUI ( 315 ) assembles the sequences of video images ( 312 , 313 ) into the screen layout illustrated in FIG. 5 .
- all five channels are served by the same server, as illustrated in FIG. 7 .
- All five channels may, for example, have been originally stored in the video database ( 706 ) at a spatio-temporal resolution high enough for being displayed in the main window.
- the video extractor ( 704 ) is aware, by instructions received from the MBW control logic ( 702 ), that only the channel to be displayed in the main window is required at full resolution; all secondary channels are required only at MBW resolution.
- the video extractor ( 704 ) creates a layered bitstream ( 709 ) containing at least one enhancement layer for the channel to be displayed in the main window, and, in this example, base-layer only bitstreams for the four secondary channels (only one bitstream ( 710 ) is illustrated in FIG. 7 ). These are the five layered bitstreams ( 710 ) ultimately received at the endpoint ( 703 ).
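- The video extractor's layer-dropping step can be sketched as a filter over tagged bitstream units. The representation below (a list of units carrying a layer id, with 0 denoting the base layer) is an assumption for illustration; real scalable bitstreams, such as SVC NAL unit streams, differ in detail.

```python
# Hedged sketch of the video extractor: keep only the units of a
# layered bitstream whose layer id does not exceed the requested
# maximum layer.

def extract_layers(units, max_layer):
    """Return a bitstream containing only layers 0..max_layer."""
    return [u for u in units if u["layer"] <= max_layer]

stream = [{"layer": 0, "data": b"base"},
          {"layer": 1, "data": b"enh1"},
          {"layer": 2, "data": b"enh2"}]

# Base-layer-only bitstream for a channel destined for an MBW:
print([u["layer"] for u in extract_layers(stream, 0)])  # [0]
# Bitstream with enhancement layers for the main-window channel:
print([u["layer"] for u in extract_layers(stream, 2)])  # [0, 1, 2]
```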
- an endpoint sends ( 803 ) to the server information that the user has requested to change channels from, for example, channel 1 to channel 2 .
- this information is processed ( 805 ) by the MBW control logic.
- the MBW control logic instructs the video extractor to a) stop ( 806 ) including those enhancement layers of the layered bitstream of channel 1 into the outgoing layered bitstream of channel 1 that are not required to achieve the spatial/temporal/quality resolution required for display of channel 1 in an MBW, and b) commence ( 807 ) including into the outgoing layered bitstream, for channel 2 , enhancement layers required to achieve the spatio/temporal/quality resolution for display of channel 2 in the main window.
- although sub-activities ( 806 ) and ( 807 ) are described and depicted as being executed sequentially, they can also occur in parallel, depending on the server implementation.
- the selection of the correct enhancement layers may be based on other factors such as the connectivity of the endpoint and server, screen size of the endpoint video display, size of the main window, and user preference on the spatial/temporal/quality tradeoff.
- the endpoint receives ( 809 ), among others, packets that belong to the enhancement layers of channel 2 rather than channel 1 .
- the second ( 810 ) and third ( 811 ) activities mitigate this delay factor by briefly trading quality of the main window display for a fast visible reaction to user input. Both activities ( 810 , 811 ) are executed locally in an endpoint and are, therefore, independent of any transmission delay.
- the endpoint stops processing ( 812 ), i.e., receiving and decoding, the enhancement layers of channel 1 not required for display in a MBW.
- the sequence of video images switches—typically with a single picture's duration delay, for example 1/30th of a second—from the high resolution previously used in the main window, to a resolution suitable for a MBW.
- the GUI starts displaying ( 813 ) the newly created sequence of video images of channel 1 in the MBW that was previously displaying channel 2 .
- the endpoint prepares ( 814 ) to process the enhancement layers related to channel 2 . That is, the receiver preparing the layered video bitstream for channel 2 is instructed not to discard any enhancement layers useful to achieve the spatial/temporal/quality resolution required for display of channel 2 in the main window. However, until those enhancement layers are present in the layered bitstream received by that receiver, the receiver and its coupled decoder continue to decode the layered bitstream at the resolution required for display in a MBW. Until such time when enhancement layer information arrives ( 814 ), the decoder performs the additional function of "zooming up" ( 815 ) the typically low spatial resolution picture sequence to a resolution suitable for display in the main window.
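- The "zooming up" step can be illustrated with the simplest possible scaler: nearest-neighbor pixel replication of the low-resolution decoded picture to the main-window size. This is a sketch only; a real endpoint would use a better interpolation filter, and the function name is an assumption.

```python
# Minimal sketch: upscale a decoded low-resolution picture (a 2-D list
# of pixel values) by an integer factor using pixel replication, so it
# can fill the main window until enhancement layers arrive.

def zoom_up(picture, factor):
    zoomed = []
    for row in picture:
        wide_row = []
        for pixel in row:
            wide_row.extend([pixel] * factor)   # replicate horizontally
        zoomed.extend([wide_row[:] for _ in range(factor)])  # and vertically
    return zoomed

small = [[1, 2],
         [3, 4]]
print(zoom_up(small, 2))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```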
- the GUI takes this up-zoomed sequence of video images, and displays ( 816 ) it in the main window.
- the enhancement information for channel 2 becomes available.
- the enhancement layer(s), together with the base layer are received, decoded, and displayed ( 817 ) in full resolution in the main window.
- decoding and rendering of audio corresponding to channel 1 is stopped ( 819 ), and decoding and rendering of the audio corresponding to channel 2 commences ( 820 ).
- the audio component of all channels can always be sent from the server to the endpoint; this is possible as compressed audio takes only a fraction of the bandwidth of compressed video.
- the server can also serve only the audio of the current channel, for example, the channel displayed in the main window. In this case, the bandwidth for the MBW-associated audio channels can be saved, but audio is not immediately available after a channel switch.
- Alternatively, the server can send low quality audio for the channels displayed in the MBWs (using, for example, a telephony-band speech codec at very low bitrate) and high quality, possibly multi-channel, audio for the channel displayed in the main window. In that case, the user experience on the audio side would be comparable to the video user experience: immediately after the channel switch, low quality audio is audible, which is replaced by high quality audio after the channel switch delay (e.g., hundreds of milliseconds to a few seconds).
- Using a layered audio codec, an audio distribution mechanism similar to the one disclosed for video could be employed.
- the decoded picture sequences of the virtual MBWs are available for immediate zooming up in the event that the user initiates a channel switch, allowing for fast channel switches while still enabling the use of the full video display screen for the current channel.
- the MBW control logic can typically assign channels to those virtual MBWs according to a strategy that reflects closely the user's typical surfing behavior, as discussed below.
- Channels are assigned to receiver-decoder chains in ascending or descending order according to the direction of the user's channel surfing behavior and in the natural or user-selected channel order. That means that, for example, whenever the user presses the channel-up button on the remote control and thereby selects the “next” channel, the server MBW control logic instructs the video extractor to stop sending the layered bitstream that represents the “lowest” channel that is being sent, and switch instead to sending the layered bitstream corresponding to the “next” channel in either natural or user-selected channel order.
- the result is a sliding window of available channels around the current channel, that is being updated every time a user hits channel-up or channel-down.
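- The sliding-window behavior just described can be sketched as follows: the server keeps base layers flowing for a window of channels centered on the current channel, and the window slides on every channel-up or channel-down. The window radius and channel numbering are illustrative assumptions.

```python
# Sketch of the sliding window of available side channels around the
# current channel, recomputed each time the user changes channel.

def side_channel_window(channels, current, radius):
    """Return the channels whose base layers the server should send."""
    idx = channels.index(current)
    lo = max(0, idx - radius)
    hi = min(len(channels), idx + radius + 1)
    return channels[lo:hi]

channels = list(range(1, 11))               # channels 1..10
print(side_channel_window(channels, 5, 2))  # [3, 4, 5, 6, 7]
# After the user presses channel-up (current becomes 6), the lowest
# channel (3) drops out of the window and channel 8 enters it:
print(side_channel_window(channels, 6, 2))  # [4, 5, 6, 7, 8]
```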
- the endpoint can display a fixed number of channels in MBWs for a fixed period of time, and then display the “next” set of channels in the MBWs, and so forth.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Computer Security & Cryptography (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Description
- This application claims the benefit of priority to U.S. Provisional Application Ser. No. 61/172,355, filed Apr. 24, 2009, which is hereby incorporated by reference herein in its entirety.
- 1. Technical Field
- The disclosed invention relates to compressed digital video distribution systems such as cable television (CATV), satellite television, Internet protocol television (IPTV) and Internet-based video distribution systems. In particular, it relates to digital video distribution systems to enable fast browsing of video content of multiple TV channels or video files while simultaneously watching one or more selected TV channels or video files. It is also concerned with the technology used in the endpoints of a digital video distribution system, such as a set-top-box or game console.
- 2. Background Art
- Subject matter related to the present application can be found in co-pending U.S. patent application Ser. Nos. 12/015,956, filed Jan. 17, 2008 and entitled “System And Method For Scalable And Low-Delay Videoconferencing Using Scalable Video Coding,” 11/608,776, filed Dec. 8, 2006 and entitled “Systems And Methods For Error Resilience And Random Access In Video Communication Systems,” and 11/682,263, filed Mar. 5, 2007 and entitled “System And Method For Providing Error Resilience, Random Access And Rate Control In Scalable Video Communications,” and U.S. Pat. No. 7,593,032, filed Jan. 17, 2008 and entitled “System And Method For A Conference Server Architecture For Low Delay And Distributed Conferencing Applications,” each of which is hereby incorporated by reference herein in their entireties.
- Traditionally, TV programs have been carried over CATV networks. CATV is one of the most popular broadband digital cable networks in Europe, Australia, America, and Asia. With a CATV system, many video channels are multiplexed on a single cable medium with very high bandwidth and distributed through dispersed cable head-end offices, each serving a geographical area. The cable head-end of the CATV infrastructure simultaneously carries the digitized and encoded video of each and every channel, regardless of whether the user watches a channel or not.
- Recently, IPTV, which transmits TV programs over packet networks, has gained significant momentum due to its advantage in delivering new services with ease. One of the drawbacks of IPTV is the relatively narrow bandwidth of the user's access line. For example, a user's access line may be a telephone line employing asymmetric digital subscriber line (ADSL) or similar technologies, which have limited bandwidth available to deliver high quality video content. Sending a large number of programs at the same time is not practical in an IPTV system due to the aforementioned lack of bandwidth. Furthermore, given the vast amount of video material available over the Internet, it is practically impossible to deliver all video content of interest to the user simultaneously. In addition, IPTV may rely on the public Internet or a private IP network, which may have notable transport delays. Finally, while the CATV infrastructure is designed for broadcast TV systems, video on demand (VoD) and pay per view (PPV) services, which require a unicast transmission to a user's TV for "personalized TV" services, are an ideal fit for IPTV.
- Endpoints designed for video conferencing have been disclosed, amongst other things, in co-pending U.S. patent application Ser. No. 12/015,956, incorporated herein by reference. Video distribution (e.g., IPTV) endpoints share many commonalities, relevant to this invention, with video conferencing endpoints.
- Referring to FIG. 1, a typical endpoint (101) includes a set of devices and/or software that is located at the user's premises. One typical endpoint includes a network interface (102) (for example, a DSL modem, a cable modem, or an ISDN T1 interface) connected to a network (103) (for example, the Internet or another private or public IP network), a computer (104) (for example, a set-top box, game console, personal computer or another type of computer) that connects via a local area network (105) (for example, Ethernet) to the network interface (102), a video display (106) (for example, a TV or computer monitor), and an audio output (for example, a set of loudspeakers). The set-top-box translates the data received from the Internet into a signal format the TV understands; traditionally, a combination of analog audio and video signals was used, but recently all-digital interfaces (such as HDMI) have become common. The set-top-box therefore typically includes analog or digital audio/video outputs and interfaces. Both the TV monitor and the set-top-box are typically controlled by an input device (107), alternatively known as a pointing device (for example, a remote control, computer mouse, keyboard, or another input device). However, most prior art set-top-boxes lack media input devices, such as a camera or microphone, that are common to videoconference endpoints.
- As depicted in FIG. 2, a set-top-box (200) has a hardware architecture similar to a general purpose computer: a central processing unit (CPU) (201) executes instructions stored in Random Access Memory (RAM) (202) and/or read-only memory (ROM) (203), and utilizes interface hardware to connect to the network interface (204), the audio/video output interface (205), and the user interface (206) (which is connected to a user input device (207), for example, a remote control). All these components are under the control of the CPU. A typical set-top-box also includes an accelerator unit (208) (for example, a dedicated Digital Signal Processor (DSP)) that helps the CPU (201) with computationally complex tasks, such as video decoding and video processing. An accelerator unit (208) is typically present for reasons of cost efficiency, rather than technical necessity. That is, a much faster CPU can often substitute for an accelerator or DSP, but those much faster CPUs (and their required infrastructure, such as power supplies and faster memory) may be more expensive than dedicated accelerator units.
- General purpose computers, such as Personal Computers (PCs), can often be configured to act like a set-top-box. In some cases, additional hardware can be added to the general purpose computer to provide the interfaces a typical set-top-box contains, and/or additional accelerator hardware can be added to augment the CPU for video decoding and processing.
- The operating system controlling the set-top-box typically offers services that can be used (for example, receivers and transmitters according to certain protocols). The protocols of most interest here are those for the transmission of real-time application data: Internet Protocol (IP), User Datagram Protocol (UDP) and/or Transmission Control Protocol (TCP), and Real-time Transport Protocol (RTP). RTP receivers and transmitters can also be implemented in the application, rather than in the operating system. Most operating systems support the parallel or quasi-parallel use of more than one protocol receiver and/or transmitter.
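- The IP/UDP/RTP hierarchy mentioned above frames each media packet with a fixed 12-byte RTP header (RFC 3550), from which a receiver extracts the side information—sequence number, timestamp, payload type—before handing the payload to the decoder. The sketch below parses only the fixed header; header extensions and CSRC lists are ignored for brevity, and the function name is an assumption.

```python
# Illustrative parser for the fixed 12-byte RTP header (RFC 3550).

import struct

def parse_rtp_header(packet):
    if len(packet) < 12:
        raise ValueError("packet shorter than the fixed RTP header")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,            # 2 bits: RTP version (2)
        "marker": bool(b1 & 0x80),     # 1 bit: marker
        "payload_type": b1 & 0x7F,     # 7 bits: payload type
        "sequence": seq,               # 16-bit sequence number
        "timestamp": timestamp,        # 32-bit media timestamp
        "ssrc": ssrc,                  # 32-bit synchronization source
        "payload": packet[12:],
    }

# A minimal RTP packet: version 2, payload type 96, sequence 1000.
pkt = struct.pack("!BBHII", 0x80, 96, 1000, 90000, 0xDEADBEEF) + b"frame"
hdr = parse_rtp_header(pkt)
print(hdr["version"], hdr["payload_type"], hdr["sequence"])  # 2 96 1000
```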
- The term “codec” is equally used to describe techniques for encoding and decoding and for implementations of these techniques. A (media) encoder converts input media data into a bitstream or a packet stream, and a (media) decoder converts an input bitstream or packet stream into a media representation suitable for presentation to a user (for example, digital or analog video for presentation on a video display, or digital or analog audio for presentation through loudspeakers. Encoders and decoders can be dedicated hardware devices or building blocks of a software-based implementation running on a general purpose CPU and/or an associated accelerator unit.
- Set-top-boxes can be constructed such that many encoders or decoders run in parallel or quasi-parallel. For hardware encoders or decoders, one easy way to support multiple encoders/decoders is to integrate multiple instances in the set-top-box. For software implementations, similar mechanisms can be employed. For example, in a multi-process operating system, multiple instances of encoder/decoder code can be run quasi-simultaneously.
- A basic approach to program navigation, i.e., successive channel skipping or “channel surfing,” was suitable in the early days of broadcast TV systems, where there were only a few channels. As the number of broadcasting channels increased to many hundreds, successive channel skipping has become more cumbersome and time consuming. Although several proposed solutions, such as text-based electronic program guides, have been offered to alleviate this problem, they are not substitutes for the easy-to-use channel surfing experience of the older systems.
- Increases in channel-change times have made channel surfing more difficult. Digital video codecs, alternatively known as digital video coding/decoding techniques (e.g., MPEG-2, H-series codecs such as H.263 and H.264), in conjunction with packet network delivery, have increased channel-change times to several hundred milliseconds or even seconds in many cases, for at least the following two reasons:
- (1) Transport Delays: These delays result from buffering by the decoder at the receiving end, i.e., the endpoint, which is necessary to alleviate the effects of: (a) bandwidth changes in the transport network (such as variable link bandwidths experienced in wireless networks); (b) delay jitter caused by varying queuing delays in transport network switches; and/or (c) packet loss in the network.
- (2) Encoding Delays: To display a video, the decoder at the endpoint, alternatively known as the receiver, receiver/receiving end, or receiver/receiving application, must first receive an I-frame, alternatively known as an intra-coded frame, from the encoder. The temporal interval between I-frames is, in most prior art systems, fixed (for example, 0.25 sec or more in most CATV systems) to reduce the required coding bandwidth. Therefore, when a user changes a channel, it can take 0.5 seconds or more before the receiver can decode the video. Furthermore, it is well known that increasing the interval between I-frames improves coding efficiency. As a result, many IPTV service providers trade channel change times for better picture quality, with the result that channel change times of several seconds are not uncommon in deployed IPTV systems.
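- The encoding-delay component above follows from simple arithmetic: with a fixed I-frame interval, a channel switch lands at a random point within the interval, so the mean wait for the next I-frame is half the interval and the worst case is the full interval. The sketch below adds an illustrative buffering term; the numbers are assumptions, not figures from the disclosure.

```python
# Back-of-the-envelope sketch of channel-change delay under a fixed
# I-frame interval, plus an assumed receiver buffering delay.

def channel_change_delay(i_frame_interval_s, buffering_s=0.0):
    mean = i_frame_interval_s / 2 + buffering_s   # average wait for next I-frame
    worst = i_frame_interval_s + buffering_s      # switch just missed an I-frame
    return mean, worst

# A 2-second I-frame interval with 0.5 s of receiver buffering:
mean, worst = channel_change_delay(2.0, 0.5)
print(mean, worst)  # 1.5 2.5
```

This is why providers that lengthen the I-frame interval for coding efficiency see channel-change times grow into the seconds, as the text notes.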
- While CATV and satellite TV systems suffer only from encoding delays, IPTV and other packet network-based video distribution systems also suffer from transport delays, which can involve a significantly longer delay. In the evolving IPTV environment, the channel change time has become significantly longer, particularly when video channels are delivered over a best effort network such as the public Internet, where the network conditions are completely unpredictable.
- In order to improve the channel surfing experience, significant changes are needed. In particular, an encoder is needed that: (a) generates a synchronization frame (i.e., the I-frame of the prior systems) only when needed (that is, not necessarily in a fixed time interval); (b) employs no or only a small number of future frames to minimize algorithmic delay; and (c) compensates for possible packet loss or insurmountable delay, rather than relying on receiving end buffering and error mitigation as the sole mechanism for error resilience. Because transport delays can cause significant impact to channel-change time, even a generic video teleconferencing codec (which normally implements all aforementioned features) cannot completely eliminate the delay problems.
- Traditional video codecs, for example H.261 and H.263 (used for person-to-person communication purposes such as videoconferencing) or MPEG-1 and MPEG-2 Main Profile (used in Video CDs and DVDs, respectively), are designed with single layer coding, which provides a single bitstream. Depending on the application, that bitrate can be either fixed, or variable and dictated by the media content. That is, the more complex a scene becomes, the higher a bitrate is generated.
- A limitation of single layer coding exists where, in the final rendering on the screen, a lower spatial resolution is required compared to the one typically utilized for full-screen video reproduction (such as in TV). The full resolution signal must be sent and decoded at the receiving end, but the spatial resolution needs to be reduced to fit the low required spatial resolution, thus wasting both bandwidth and computational resources. However, support for lower resolutions is essential in a channel surfing application displaying several channels simultaneously, as one goal is to fit as many channels displayed in mini browsing windows (MBWs) as possible into a specific screen area—which results in the MBWs being naturally of lower resolution than the main video program.
- Layered video codecs, alternatively known as layered or scalable codecs/coding, are video compression techniques that have been developed explicitly for heterogeneous environments. In such codecs, two or more layers are generated for a given source video signal: a base layer and at least one enhancement layer. The base layer offers a basic representation of the source signal at a reduced quality, which can be achieved, for example, by reducing the Signal-to-Noise Ratio (SNR) through coarse quantization, using a reduced spatial and/or temporal resolution, or a combination of these techniques. The base layer can advantageously be transmitted using a reliable channel, i.e., a channel with guaranteed or enhanced quality of service (QoS). Each enhancement layer increases the quality by increasing the SNR, spatial resolution, or temporal resolution, and can often be transmitted with reduced QoS or best effort. In effect, a user is guaranteed to receive a signal with at least a minimum level of quality of the base layer signal.
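- The base/enhancement split can be made concrete with a toy SNR-scalability example: the base layer is a coarsely quantized version of the signal, and the enhancement layer carries the quantization residual. Decoding the base alone gives reduced quality; adding the enhancement restores the original values. Real codecs operate on transform coefficients rather than raw samples, and all names here are illustrative.

```python
# Toy sketch of SNR scalability via coarse quantization plus residual.

def encode_layered(samples, step):
    base = [round(s / step) * step for s in samples]      # coarse quantization
    enhancement = [s - b for s, b in zip(samples, base)]  # quantization residual
    return base, enhancement

def decode(base, enhancement=None):
    if enhancement is None:
        return base                                       # base-layer quality
    return [b + e for b, e in zip(base, enhancement)]     # full quality

samples = [17, 23, 41, 8]
base, enh = encode_layered(samples, step=10)
print(decode(base))       # [20, 20, 40, 10]  (coarse, base layer only)
print(decode(base, enh))  # [17, 23, 41, 8]   (base + enhancement)
```

Spatial or temporal scalability works analogously, with the enhancement layer carrying the information needed to restore resolution or frame rate rather than sample precision.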
- Disclosed are techniques including a method, apparatus, system, and computer-readable media containing instructions for processing a plurality of channels in a digital video distribution system (e.g., IPTV), which enables fast channel switching between channels. In U.S. provisional patent application Ser. No. 61/172,355, some of the techniques have been introduced as “side channel mode.” An endpoint is configured to receive a first channel in layered bitstream format, including a base layer and optionally a plurality of enhancement layers. The base and optional enhancement layers of the first channel can be decoded and displayed in a main window of a video display. Further, the endpoint can be configured to receive at least one second channel in the form of a base layer. This second channel can also be decoded, and can be displayed in a Mini Browsing Window (MBW). Upon request by a user for a channel switch from the first channel to the second channel, in one exemplary embodiment, the decoding of the enhancement layer of the first channel terminates. In the same or another embodiment, the display of the first channel in the main window terminates. In the same or another embodiment, the decoded second channel is zoomed to fit the size of the main window, and can be displayed. In the same or another embodiment, the decoded base layer of the first channel may be displayed in a MBW. In the same or another embodiment, the server is instructed to stop sending enhancement layers of the first channel and/or commence sending at least one enhancement layer for the second channel.
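- The switch sequence described above separates delay-free local actions from server messages that take effect only after transport delay. The following sketch captures that split; the class and message names are assumptions for illustration.

```python
# Hedged sketch of the endpoint's fast channel switch: locally, the old
# main channel drops into the MBW slot and the new channel's base layer
# is zoomed into the main window; the server is then asked to swap
# which channel receives enhancement layers.

class Endpoint:
    def __init__(self, main_channel, mbw_channels):
        self.main = main_channel
        self.mbws = list(mbw_channels)

    def switch_to(self, channel):
        if channel not in self.mbws:
            raise ValueError("can only fast-switch to a channel with a live base layer")
        # Local, delay-free actions:
        slot = self.mbws.index(channel)
        self.mbws[slot] = self.main
        old, self.main = self.main, channel
        # Messages to the server (subject to transport delay):
        return [("stop_enhancement", old), ("start_enhancement", channel)]

ep = Endpoint(main_channel=1, mbw_channels=[2, 3, 4, 5])
print(ep.switch_to(2))   # [('stop_enhancement', 1), ('start_enhancement', 2)]
print(ep.main, ep.mbws)  # 2 [1, 3, 4, 5]
```

Until the server's enhancement layers for the new channel arrive, the main window shows the zoomed-up base layer, which is what makes the switch appear instant.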
- An exemplary endpoint includes at least one receiver configured to receive channels coded in layered bitstream format, at least one decoder configured to decode channels coded in layered bitstream format, and a graphical user interface for receiving user input.
- The accompanying drawings, which are incorporated and constitute part of this disclosure, illustrate exemplary embodiments of the disclosed invention and serve to explain the principles of the disclosed invention.
-
FIG. 1 is a block diagram illustrating an exemplary endpoint in accordance with the present invention. -
FIG. 2 is a block diagram illustrating an exemplary endpoint in accordance with the present invention. -
FIG. 3 is a block diagram illustrating an exemplary endpoint in accordance with the present invention. -
FIG. 4 is an exemplary video display screen in accordance with the present invention. -
FIG. 5 is an exemplary video display screen in accordance with the present invention. -
FIG. 6 is a block diagram illustrating an exemplary system for the distribution and display of audio-visual signals in accordance with the present invention. -
FIG. 7 is an exemplary server in accordance with the present invention. -
FIG. 8 is an exemplary message flow between an endpoint and a server in accordance with the present invention. - Throughout the drawings, the same reference numerals and characters, unless otherwise stated, are used to denote like features, elements, components or portions of the illustrated embodiments. Moreover, while the disclosed invention will now be described in detail with reference to the figures, it is done so in connection with the illustrative embodiments.
- The present invention provides techniques for the distribution and display of video content, for example, live/on-air (e.g., TV channel), online, or pre-stored video files, in a way that provides for effective video content browsing, alternatively known as “channel surfing,” and is well suited for any generic digital video distribution system, including those that use packet networks (e.g., IPTV) or public Internet (e.g., video services available on the Internet). A “channel” denotes not only live/on-air video content, but also any online or pre-stored video content. Channels may be represented by, for example, video signals, compressed video signals, or audio-visual signals. Specifically, the techniques provide for a digital video distribution system that allows for display of channels using a plurality of mini browsing windows, alternatively known as MBWs, of different sizes and numbers that simultaneously display several channels or video programs. The MBWs can be displayed independently or as an overlay on a main window, alternatively known as the full screen, which displays a single channel.
- A rapid switching feature provides a user, alternatively known as a TV viewer, with the ability to browse a set of channels while watching one specific channel, and instantly switch to a different set of channels for browsing. Thus, the disclosed techniques provide a significantly enhanced channel surfing experience.
- In order to achieve instant switching of channels displayed in MBWs, an exemplary digital video distribution system advantageously uses a layered codec, for example, as described in co-pending U.S. patent application Ser. Nos. 12/015,956, 11/608,776, and 11/682,263 and U.S. Pat. No. 7,593,032. The present invention avoids the buffering and inherent encoding delays of a classical digital video distribution system, and permits fast switching of channels in MBWs.
- In addition, the present invention improves bandwidth usage by generating multiple layers of video, i.e., the channels are coded in layered bitstream format, and by using only the lower layers to display channels in the MBWs. These lower layers represent lower resolution, lower frame rate, or lower SNR, use much less bandwidth, and enable low processing complexity. These techniques eliminate the need for receiver buffering, at the cost of slight performance degradation in the event of packet loss or excessive packet delay. Furthermore, a layered codec provides rate matching to account for the fact that different channels may be using IP network connections with different bandwidths, which require different data rates from the encoder.
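A rough illustration of the bandwidth saving (the per-layer rates are assumptions, not figures from the disclosure): one full-quality channel plus four base-layer side channels costs far less than the five full-quality channels a single-layer system would have to send.

```python
# Assumed rates: full quality (base + all enhancement layers) at 4 Mbit/s,
# a base layer alone at 400 kbit/s.
FULL_KBPS = 4000
BASE_KBPS = 400

layered = FULL_KBPS + 4 * BASE_KBPS   # main window + four MBW side channels
single_layer = 5 * FULL_KBPS          # five full-quality channels

assert layered == 5600
assert single_layer == 20000          # more than 3x the layered figure
```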
-
FIG. 3 illustrates several components of an IPTV endpoint according to one embodiment of the present invention. Specifically, the endpoint includes more than one receiver (301, 302), each able to receive a channel in layered bitstream format, including at least a base layer and possibly one or more enhancement layers associated with the base layer. A receiver may also be configured to receive only an enhancement layer. Each receiver translates incoming packet data (303, 304) from the network interface (305), advantageously using the IP/UDP/RTP protocol hierarchy, into layered video bitstreams (306, 307) and side information (308, 309), such as timing information. Coupled to the receivers (301, 302) are decoders (310, 311) for layered video bitstreams. Each decoder translates the input layered bitstream (i.e., the base layer and one or more, if any, enhancement layers) into a sequence of video images (312, 313) for display. The aforementioned translation process may include operations not commonly associated with video decoding, such as: a) on the input side, dropping individual layers from a multi-layered bitstream; and b) on the output side, post processing, including post-filtering, temporal interpolation to increase the frame rate, and up- or down-zooming to a desired spatial output resolution. In the same or another embodiment, a given decoder can be coupled to more than one receiver (decoder 310 can be coupled to receiver 302 by connection 314) to receive, for example, a base layer from a first receiver and one or more enhancement layers from a second receiver. - In most cases, the receivers and decoders will be implemented in the form of independent processes running under a common operating system on a given CPU, possibly augmented by accelerator units. However, they could also be implemented in other ways, including dedicated hardware implementations.
- The sequences of video images (312, 313) are assembled by means of a Graphical User Interface (GUI) (315), taking into account the side information (308, 309), into a screen layout, which is sent through a video output interface (316) to a TV screen (317).
-
FIG. 4 depicts an exemplary screen layout where four MBWs (401) are depicted on a single TV screen (402). Conceptually, each of the MBWs presents to a user a motion video (e.g., a TV channel or stored video file). Returning to FIG. 3, to implement the screen layout with four MBWs as depicted in FIG. 4, at least four receivers (301, 302) are required, receiving at least four channels and at least four base layers. The receivers are coupled with at least four decoders (310, 311) to generate four sequences of video images (312, 313), which are assembled by the GUI (315) into a single screen layout, and displayed through the video output interface (316) on the TV screen (317). -
FIG. 5 depicts another exemplary screen layout where four MBWs (501) are in use, and overlap a main window (502). The main window displays a channel selected by the user. The MBWs display other channels, which the user can easily select to view in the main window by using an input device to select or “click on” the MBW representing the desired channel. - In the same or another embodiment, the user can set MBW display configuration preferences through the GUI. The GUI is typically implemented as a software application similar to a Windows-based user interface. The user can control the GUI, for example, by using an input device, such as a TV remote control, computer mouse, keyboard, or other pointing device, and can select the number of MBWs (e.g., 2, 4 or more), the window size for each MBW (e.g., first MBW=QCIF, second MBW=QCIF, third MBW=CIF), or the location of the MBWs on the TV screen (e.g., aligned with the top, bottom, or side of the screen). The number of MBWs is limited only by the number of receivers and decoders. If decoders and/or receivers are implemented in hardware, then the number of MBWs is limited by the number of available decoders and/or receivers. If decoders and receivers are implemented in software, then in most implementations there is no practical limit on the number of MBWs except the performance of the CPU. Within the mentioned constraints in numbers, and possibly constraints in the implementation of the GUI (potentially set by a service provider/operator to accommodate their business model), the user can fit as many MBWs as he/she desires, so long as the total size of all MBWs does not exceed the available display size. There is no minimum limit for MBW size. The user can set the desired size of an MBW by dragging the edges of the MBW window, and/or by setting MBW display configuration preferences which specify size. Depending on the GUI, it may also be possible to have overlapping MBWs.
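The constraints just stated can be checked mechanically; the following sketch (hypothetical helper, pixel sizes for QCIF/CIF from the standard picture formats) validates a requested MBW configuration:

```python
def mbw_layout_valid(mbws, screen, n_decoders):
    """Check the constraints stated above: at most one MBW per available
    decoder, and (for non-overlapping MBWs) a total size that does not
    exceed the available display size. Sizes are (width, height) in pixels."""
    if len(mbws) > n_decoders:
        return False
    total_area = sum(w * h for (w, h) in mbws)
    return total_area <= screen[0] * screen[1]

QCIF, CIF = (176, 144), (352, 288)
assert mbw_layout_valid([QCIF, QCIF, CIF], (1920, 1080), n_decoders=4)
assert not mbw_layout_valid([CIF] * 5, (1920, 1080), n_decoders=4)  # too few decoders
```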
- In the same or another embodiment, a channel selection mechanism henceforth called “side-channel mode” is implemented. Side-channel mode can advantageously be employed when the roundtrip network delay is so large that after the user's request, the delay in changing the displayed video is annoying or unacceptable to the user. Note that in this mode, the side channels (i.e., the next sequential channels to be displayed when a user is channel surfing) are sent even though the user is not necessarily actively surfing channels. In the side channel mode, the channel order becomes important. There are two types of channel orders:
- (1) Natural order: The order of the channels as defined by the video service provider; and
- (2) User-selected order: The order of channels of interest to the user, which may be defined by the user through the GUI. In this scenario, the channel order can be completely different than that of natural order. There may be a much smaller set of channels of interest in the user-selected order.
- So far, this disclosure has been concerned mostly with the endpoint and its operations. The video server is now briefly introduced.
- Referring to
FIG. 6, an endpoint (601) is connected through an IP network (602), for example the Internet, to at least one server (603), which is normally operated by a video service provider or operator. In side channel mode, typically, the following virtual connections exist between the server (603) and at least one endpoint (601): a) at least one control channel (604), which is typically bi-directional and runs application layer protocols such as SIP, RTSP, and/or similar protocols; and b) for each channel that is being sent by the server (603), at least one RTP packet stream (605, 606), which is typically unidirectional and point-to-point. The RTP packet streams (605, 606) may each be formed as an RTP session, or more than one RTP packet stream may be multiplexed into one or more RTP sessions using techniques such as SSRC multiplexing. An RTP session may have an associated bi-directional RTCP control channel (not shown). All virtual channels terminate, in most practical cases, at the same network interface (607) of the endpoint (601). The server (603) is depicted here as one physical device, but can be distributed and/or decomposed. The details of the server architecture are not relevant to the invention presented. - As previously mentioned, most network interfaces relevant for IPTV systems have capacity limitations, so that it is impractical to send more than the absolute minimum of channels to an endpoint. This is in contrast to CATV systems, where an endpoint, at least at the physical layer, receives in most cases all offered channels and discards those that are not displayed. As the network interface capacity is limited, in an IPTV system the server sends only those channel(s) that the endpoint is interested in receiving.
In most current IPTV endpoints, the number of those channels is one; however, according to the invention presented, that number can be considerably higher and depends on factors such as the number of available and/or used receivers, decoders, MBWs, endpoint CPU load, endpoint connectivity, and so forth.
- According to the invention, when in side channel mode, the server sends at least the “current channel” (i.e., the channel the user is most interested in, and which is typically displayed in the main window), and one side channel. In most cases, more than one side channel is sent.
-
FIG. 7 illustrates an exemplary video server (701) in accordance with the disclosed invention. The server may be distributed or centralized; for simplicity, a centralized server is shown. An exemplary video server (701) contains MBW control logic (702), which processes user control information (708) received from an endpoint (703), such as the desired number or size of the MBWs or the channels or other video content (e.g., pay-per-view or video-on-demand content stored in a video database) to be mapped to each MBW. The video server (701) can also contain one or more video extractors (704), which can extract base and/or enhancement layers from layered bitstreams (711) stemming from live/on-air video content from one or more layered encoders (705) or a video database (706). - The functionalities of a video extractor have been disclosed, for example, in co-pending U.S. patent application entitled “Systems, Methods and Computer Readable Media for Instant Multi-Channel Video Content Browsing in Digital Video Distribution Systems,” filed herewith. In short, and only in the context of the invention presented, the main function of the video extractor is to receive a layered bitstream (711) and remove zero or more enhancement layers according to control information (712) received from the MBW control logic (702), create another layered bitstream (709, 710), which may contain fewer layers, and forward the layered bitstream (709, 710), typically as one or more RTP packet streams, to an endpoint (703). For example, assuming the user requested a certain TV channel in a small MBW, the channel would be available at the server (701) in the form of a layered bitstream (711), which contains a base layer and, for example, four enhancement layers, and the video extractor (704) would remove all the enhancement layers and create a layered bitstream (709) that contains only the base layer. Returning to
FIG. 3 , the newly created layered bitstream (303) represents one channel, and is sent through the network to the endpoint, wherein the layered bitstream (303) is typically fed by the network interface (305) into one receiver (301). - Referring again to
FIG. 7, the layered encoder (705) takes input from a video source (713), such as a camera, a satellite downfeed, or similar video source, and converts it into a layered bitstream (711) comprising a base layer and zero or more enhancement layers. - The video database (706), which can be internal or external, contains at least one, but typically many, layered bitstreams (711) comprising a base layer and typically one or more enhancement layers. The storage format for the layered bitstreams (711) may conform to one of the many file formats defined for stored video. Each of the layered bitstreams (711) may represent an episode of a TV show, a movie, or similar content. When requested, the video database (706) forwards the selected layered bitstream (711) to the video extractor (704), possibly obeying timing rules (a process commonly known as “streaming”). It is equally possible that the buffering and timing logic required for “streaming” is implemented in the video extractor (704), in which case the video database (706) makes the complete layered bitstream (711) available as a unit.
- The details of the interworking between the video extractor (704), video database (706), and layered encoders (705) are not relevant for the invention presented, and, therefore, are not discussed further.
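The layer-dropping core of the video extractor described above can be sketched as a simple filter. This is a simplification: a real extractor operates on RTP packet streams and must respect inter-layer prediction dependencies; the packet model here is illustrative only.

```python
def extract(packets, keep_enh):
    """Pass the base layer (layer 0) through and keep at most keep_enh
    enhancement layers; everything above is dropped. Packets are modeled
    as (layer_id, payload) tuples."""
    return [(lid, p) for (lid, p) in packets if lid <= keep_enh]

# One base layer (0) and two enhancement layers (1, 2), interleaved:
stream = [(0, "b0"), (1, "e1"), (2, "e2"), (0, "b1"), (1, "e1b")]
assert extract(stream, 0) == [(0, "b0"), (0, "b1")]                    # MBW: base only
assert extract(stream, 1) == [(0, "b0"), (1, "e1"), (0, "b1"), (1, "e1b")]
```

Switching a channel between MBW and main-window quality then amounts to changing `keep_enh` for that channel's outgoing stream, rather than re-encoding anything.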
- The following paragraphs disclose an exemplary embodiment of the digital video distribution system.
- For this description it is assumed that the system is already up and running; that is, a user has authenticated himself/herself into the system (endpoint and, through the endpoint, server). Further, the system has brought up, as depicted in
FIG. 5, one initial current channel (“channel 1”) displayed in the main window (502) and secondary channels displayed in a plurality of MBWs, for example, four MBWs (501) (“channels 2 through 5”). - Returning to
FIG. 3, as a result of this setup, the layered video bitstream (303) representing channel 1 displayed in the main window is received through the network interface (305) by a receiver (301), whereby a layered video bitstream (306) containing at least one enhancement layer, in conjunction with the base layer, is forwarded to a decoder (310). The decoding process results in a sequence of video images (312) of high spatio-temporal resolution suitable for a pleasant viewing experience in the main window. - The secondary channels, channels 2 through 5, are received and decoded in the same way, but as base layers only. FIG. 3 illustrates one of the four receive-decode chains, depicted by a second receiver (302), a second decoder (311), and a second decoded sequence of video images (313). - The GUI (315) assembles the sequences of video images (312, 313) into the screen layout illustrated in
FIG. 5 . - In this example, all five channels are served by the same server, as illustrated in
FIG. 7. All five channels may, for example, have been originally stored in the video database (706) at a spatio-temporal resolution high enough to be displayed in the main window. The video extractor (704) is aware, by instructions received from the MBW control logic (702), that only the channel to be displayed in the main window is required at full resolution; all secondary channels are required only at MBW resolution. As a result, the video extractor (704) creates a layered bitstream (709) containing at least one enhancement layer for the channel to be displayed in the main window, and, in this example, base-layer-only bitstreams for the four secondary channels (only one bitstream (710) is illustrated in FIG. 7). These are the five layered bitstreams (710) ultimately received at the endpoint (703). - Returning to
FIG. 5, assume that the user has clicked on the topmost MBW (501), thereby selecting to view channel 2 in the main window instead of the current channel 1. This user activity invokes at least four activities in the endpoint, as illustrated by the flow chart in FIG. 8. - The vertical timeline (801) is not drawn to scale, as events executed locally in the server or endpoint complete on the order of microseconds or milliseconds, whereas a one-way transmission delay can be hundreds of milliseconds.
FIG. 8 assumes a one-way transmission delay of 300 milliseconds, and local execution at an endpoint lasting a user-imperceptible amount of time. - In a first activity (802), an endpoint sends (803) to the server information that the user has requested to change channels from, for example,
channel 1 to channel 2. After the transmission delay (804), this information is processed (805) by the MBW control logic. As a result, the MBW control logic instructs the video extractor to a) stop (806) including in the outgoing layered bitstream of channel 1 those enhancement layers of channel 1 that are not required to achieve the spatial/temporal/quality resolution required for display of channel 1 in an MBW, and b) commence (807) including in the outgoing layered bitstream for channel 2 the enhancement layers required to achieve the spatial/temporal/quality resolution for display of channel 2 in the main window. Although sub-activities (806) and (807) are described and depicted as being executed sequentially, they can also occur in parallel, depending on the server implementation. The selection of the correct enhancement layers may be based on other factors, such as the connectivity of the endpoint and server, the screen size of the endpoint video display, the size of the main window, and user preference on the spatial/temporal/quality tradeoff. After a one-way transmission delay (808), the endpoint receives (809), among other packets, packets that belong to the enhancement layers of channel 2 rather than channel 1. - The delay between the user request and the reception at the endpoint of the modified layered bitstreams, when taking the first activity in isolation, can be considerable and annoying. It is mostly the result of the two-way transmission delay (804, 808) (which can be, for example, several hundred milliseconds, depending on the geospatial locations of server and endpoint), as well as of constraints in the video extractor. For example, simple video extractors may need to wait for an Intra frame before they can commence including enhancement layers representing a higher spatio-temporal resolution. This wait time is included in sub-activity (807).
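Taking the first activity in isolation, the delay can be tallied from the FIG. 8 assumptions; the Intra-frame wait below is a hypothetical value for a simple extractor, not a number from the disclosure.

```python
# One-way delay of 300 ms, as assumed in FIG. 8; an assumed 500 ms wait
# for the next Intra frame in a simple video extractor.
ONE_WAY_MS = 300
INTRA_WAIT_MS = 500

# request -> server (one way), extractor waits for an Intra frame,
# modified bitstream -> endpoint (one way):
delay_until_enhancement_ms = ONE_WAY_MS + INTRA_WAIT_MS + ONE_WAY_MS
assert delay_until_enhancement_ms == 1100
```

A gap of over a second is what the locally executed second and third activities are designed to hide behind an immediately displayed, up-zoomed base-layer picture.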
- The second (810) and third (811) activities mitigate this delay factor by briefly trading quality of the main window display for a fast visible reaction to user input. Both activities (810, 811) are executed locally in an endpoint and are, therefore, independent of any transmission delay.
- In the second activity (810), the endpoint stops processing (812), i.e., receiving and decoding, the enhancement layers of
channel 1 not required for display in a MBW. As a result, the sequence of video images switches—typically with a single picture's duration of delay, for example 1/30th of a second—from the high resolution previously used in the main window to a resolution suitable for a MBW. Further, the GUI starts displaying (813) the newly created sequence of video images of channel 1 in the MBW that was previously displaying channel 2. - In the third activity (811), the endpoint prepares (814) to process the enhancement layers related to
channel 2. That is, the receiver preparing the layered video bitstream for channel 2 is instructed not to discard any enhancement layers useful to achieve the spatial/temporal/quality resolution required for display of channel 2 in the main window. However, until those enhancement layers are present in the layered bitstream received by the receiver preparing the layered video bitstream for channel 2, the receiver and its coupled decoder continue to decode the layered bitstream at the resolution required for display in a MBW. Until such time as enhancement layer information arrives (814), the decoder performs the additional function of “zooming up” (815) the typically low spatial resolution picture sequence to a resolution suitable for display in the main window. The GUI takes this up-zoomed sequence of video images and displays (816) it in the main window. After the duration of two transmission delays and the delay introduced by the video extractor, the enhancement information for channel 2 becomes available. At this point, the enhancement layer(s), together with the base layer, are received, decoded, and displayed (817) in full resolution in the main window. - Finally, in the fourth activity (818), decoding and rendering of audio corresponding to
channel 1 is stopped (819), and decoding and rendering of the audio corresponding to channel 2 commences (820). The audio component of all channels (displayed in either a MBW or in the main window) can always be sent from the server to the endpoint; this is possible because compressed audio takes only a fraction of the bandwidth of compressed video. However, alternatively, the server can also serve only the audio of the current channel, for example, the channel displayed in the main window. In this case, the bandwidth for the MBW-associated audio channels is saved, but audio is not immediately available after a channel switch. Alternatively, it is also possible to carry different qualities of audio, for example, low quality audio for channels displayed in the MBWs (using, for example, a telephony-band speech codec at very low bitrate), and high quality, possibly multi-channel, audio for the channel displayed in the main window. In that case, the user experience on the audio side would be comparable to the video user experience: immediately after the channel switch, low quality audio is audible, which is replaced by high quality audio after the channel switch delay (e.g., hundreds of milliseconds to a few seconds). Finally, assuming the use of a layered audio codec, an audio distribution mechanism similar to the one disclosed for video could be employed. - A number of further improvements are disclosed.
- First, there are cases where it is both possible and reasonable to receive and decode channels for MBWs, but not display those MBWs. These non-displayed MBWs are henceforth called “virtual MBWs.” In one embodiment, the decoded picture sequences of the virtual MBWs are available for immediate zooming up in the event that the user initiates a channel switch, allowing for fast channel switches while still enabling the use of the full video display screen for the current channel. In order to enable this embodiment in a meaningful way, the MBW control logic can typically assign channels to those virtual MBWs according to a strategy that closely reflects the user's typical surfing behavior, as discussed below.
- Second, it has already been mentioned that many different mechanisms for the assignment of channels to receiver-decoder chains are possible. For example, an operator, or the user, may opt to make a fixed assignment between channel and receiver-decoder. In this case, very fast surfing between the channels with receiver-decoder assignment is possible, but changing to other channels would be time-consuming and annoying. However, depending on the number of available receivers and decoders in the endpoint (which, in the case of software implementations, can be virtually unlimited and depend mostly on available processing resources), and the available bandwidth between server and endpoint, it is conceivable that many receiver-decoder chains are active at the same time, probably serving the needs of most users. However, for more channel-hungry users, or (more likely) fewer available computational and/or bandwidth resources, the channel-to-receiver-decoder chain assignment can be dynamic to achieve the best possible user experience. One way to implement such dynamic assignment is as follows:
- Channels are assigned to receiver-decoder chains in ascending or descending order according to the direction of the user's channel surfing behavior and in the natural or user-selected channel order. That means that, for example, whenever the user presses the channel-up button on the remote control and thereby selects the “next” channel, the server MBW control logic instructs the video extractor to stop sending the layered bitstream that represents the “lowest” channel that is being sent, and switch instead to sending the layered bitstream corresponding to the “next” channel in either natural or user-selected channel order. The result is a sliding window of available channels around the current channel, that is being updated every time a user hits channel-up or channel-down.
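The sliding-window assignment just described can be sketched as follows; the function and its parameters are illustrative, not an interface from the disclosure:

```python
def sliding_window(current, n_chains, order):
    """Center a window of n_chains channels (the current channel plus its
    side channels) on the current channel, within the natural or
    user-selected channel order, clamping at either end of the order."""
    i = order.index(current)
    half = (n_chains - 1) // 2
    start = max(0, min(i - half, len(order) - n_chains))
    return order[start:start + n_chains]

order = [1, 2, 3, 4, 5, 6, 7, 8]
assert sliding_window(4, 5, order) == [2, 3, 4, 5, 6]
# Channel-up: the lowest channel (2) is dropped and channel 7 is added.
assert sliding_window(5, 5, order) == [3, 4, 5, 6, 7]
```

Each channel-up or channel-down press thus changes at most one receiver-decoder chain's assignment, while the neighboring channels the user is most likely to select next are already being received.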
- Other forms of assignment are also possible. For example, in the same or another embodiment, it is possible to automatically rotate the available channels in the available receiver-decoder chains for display in MBWs—as a result, the endpoint can display a fixed number of channels in MBWs for a fixed period of time, and then display the “next” set of channels in the MBWs, and so forth.
Claims (25)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/765,767 US8341672B2 (en) | 2009-04-24 | 2010-04-22 | Systems, methods and computer readable media for instant multi-channel video content browsing in digital video distribution systems |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17235509P | 2009-04-24 | 2009-04-24 | |
US12/765,767 US8341672B2 (en) | 2009-04-24 | 2010-04-22 | Systems, methods and computer readable media for instant multi-channel video content browsing in digital video distribution systems |
Publications (2)
Publication Number | Publication Date |
---|---|
US20100275229A1 true US20100275229A1 (en) | 2010-10-28 |
US8341672B2 US8341672B2 (en) | 2012-12-25 |
Family
ID=42992120
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/765,767 Active 2031-04-07 US8341672B2 (en) | 2009-04-24 | 2010-04-22 | Systems, methods and computer readable media for instant multi-channel video content browsing in digital video distribution systems |
US12/765,815 Abandoned US20100272187A1 (en) | 2009-04-24 | 2010-04-22 | Efficient video skimmer |
US12/765,793 Expired - Fee Related US8607283B2 (en) | 2009-04-24 | 2010-04-22 | Systems, methods and computer readable media for instant multi-channel video content browsing in digital video distribution systems |
US13/895,131 Expired - Fee Related US9426536B2 (en) | 2009-04-24 | 2013-05-15 | Systems, methods and computer readable media for instant multi-channel video content browsing in digital video distribution systems |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/765,815 Abandoned US20100272187A1 (en) | 2009-04-24 | 2010-04-22 | Efficient video skimmer |
US12/765,793 Expired - Fee Related US8607283B2 (en) | 2009-04-24 | 2010-04-22 | Systems, methods and computer readable media for instant multi-channel video content browsing in digital video distribution systems |
US13/895,131 Expired - Fee Related US9426536B2 (en) | 2009-04-24 | 2013-05-15 | Systems, methods and computer readable media for instant multi-channel video content browsing in digital video distribution systems |
Country Status (7)
Country | Link |
---|---|
US (4) | US8341672B2 (en) |
EP (1) | EP2422469A4 (en) |
JP (1) | JP2012525076A (en) |
CN (1) | CN102422577A (en) |
AU (1) | AU2010238757A1 (en) |
CA (1) | CA2759729A1 (en) |
WO (3) | WO2010124140A1 (en) |
Families Citing this family (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW201106639A (en) * | 2009-08-05 | 2011-02-16 | Jian-Meng Yang | Auto-transmission method of multiple transmission interfaces and electronic product performing the same |
CN103210642B (en) * | 2010-10-06 | 2017-03-29 | 数码士有限公司 | Occur during expression switching, to transmit the method for the scalable HTTP streams for reproducing naturally during HTTP streamings |
US8891935B2 (en) | 2011-01-04 | 2014-11-18 | Samsung Electronics Co., Ltd. | Multi-video rendering for enhancing user interface usability and user experience |
EP2472519B1 (en) * | 2011-01-04 | 2017-12-13 | Samsung Electronics Co., Ltd. | Multi-video rendering for enhancing user interface usability and user experience |
US8689269B2 (en) * | 2011-01-27 | 2014-04-01 | Netflix, Inc. | Insertion points for streaming video autoplay |
JP6026443B2 (en) | 2011-03-10 | 2016-11-16 | ヴィディオ・インコーポレーテッド | Drawing direction information in video bitstream |
WO2012121744A1 (en) * | 2011-03-10 | 2012-09-13 | Vidyo, Inc | Adaptive picture rotation |
US8589996B2 (en) * | 2011-03-16 | 2013-11-19 | Azuki Systems, Inc. | Method and system for federated over-the-top content delivery |
EP2692131A4 (en) * | 2011-03-29 | 2015-10-07 | Lyrical Labs LLC | Video encoding system and method |
US8953044B2 (en) * | 2011-10-05 | 2015-02-10 | Xerox Corporation | Multi-resolution video analysis and key feature preserving video reduction strategy for (real-time) vehicle tracking and speed enforcement systems |
GB2500245B (en) * | 2012-03-15 | 2014-05-14 | Toshiba Res Europ Ltd | Rate optimisation for scalable video transmission |
US20140328570A1 (en) * | 2013-01-09 | 2014-11-06 | Sri International | Identifying, describing, and sharing salient events in images and videos |
US9407961B2 (en) * | 2012-09-14 | 2016-08-02 | Intel Corporation | Media stream selective decode based on window visibility state |
WO2014046916A1 (en) | 2012-09-21 | 2014-03-27 | Dolby Laboratories Licensing Corporation | Layered approach to spatial audio coding |
WO2014056546A1 (en) * | 2012-10-12 | 2014-04-17 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | File system, computer and method |
EP2741518A1 (en) * | 2012-12-04 | 2014-06-11 | Alcatel Lucent | Method for rendering a multimedia asset, a related system, media client and related media server |
US8804042B2 (en) | 2013-01-14 | 2014-08-12 | International Business Machines Corporation | Preemptive preloading of television program data |
US10631019B2 (en) | 2013-06-18 | 2020-04-21 | Vecima Networks Inc. | Remote storage digital video recording optimization method and system |
US9530422B2 (en) | 2013-06-27 | 2016-12-27 | Dolby Laboratories Licensing Corporation | Bitstream syntax for spatial voice coding |
US10284858B2 (en) * | 2013-10-15 | 2019-05-07 | Qualcomm Incorporated | Support of multi-mode extraction for multi-layer video codecs |
DE102014216511A1 (en) * | 2014-08-20 | 2016-02-25 | Carl Zeiss Meditec Ag | Create chapter structures for video data with images from a surgical microscope object area |
US11205305B2 (en) | 2014-09-22 | 2021-12-21 | Samsung Electronics Company, Ltd. | Presentation of three-dimensional video |
US10547825B2 (en) * | 2014-09-22 | 2020-01-28 | Samsung Electronics Company, Ltd. | Transmission of three-dimensional video |
FR3026260B1 (en) * | 2014-09-22 | 2018-03-23 | Airbus Ds Sas | METHOD FOR TRANSMITTING VIDEO SURVEILLANCE IMAGES |
JP2016092837A (en) * | 2014-10-30 | 2016-05-23 | 株式会社東芝 | Video compression apparatus, video reproduction apparatus and video distribution system |
WO2016079276A1 (en) * | 2014-11-21 | 2016-05-26 | Takeda Gmbh | Use of an anti-gm-csf antagonist and an anti-ccr2 antagonist in the treatment of an infectious disease |
US10984248B2 (en) * | 2014-12-15 | 2021-04-20 | Sony Corporation | Setting of input images based on input music |
US9716735B2 (en) | 2015-02-18 | 2017-07-25 | Viasat, Inc. | In-transport multi-channel media delivery |
US9961004B2 (en) | 2015-02-18 | 2018-05-01 | Viasat, Inc. | Popularity-aware bitrate adaptation of linear programming for mobile communications |
US20160345009A1 (en) * | 2015-05-19 | 2016-11-24 | ScaleFlux | Accelerating image analysis and machine learning through in-flash image preparation and pre-processing |
US10474745B1 (en) | 2016-04-27 | 2019-11-12 | Google Llc | Systems and methods for a knowledge-based form creation platform |
US11039181B1 (en) | 2016-05-09 | 2021-06-15 | Google Llc | Method and apparatus for secure video manifest/playlist generation and playback |
US10771824B1 (en) | 2016-05-10 | 2020-09-08 | Google Llc | System for managing video playback using a server generated manifest/playlist |
US10785508B2 (en) | 2016-05-10 | 2020-09-22 | Google Llc | System for measuring video playback events using a server generated manifest/playlist |
US10750216B1 (en) | 2016-05-10 | 2020-08-18 | Google Llc | Method and apparatus for providing peer-to-peer content delivery |
US10595054B2 (en) | 2016-05-10 | 2020-03-17 | Google Llc | Method and apparatus for a virtual online video channel |
US10750248B1 (en) | 2016-05-10 | 2020-08-18 | Google Llc | Method and apparatus for server-side content delivery network switching |
US11069378B1 (en) | 2016-05-10 | 2021-07-20 | Google Llc | Method and apparatus for frame accurate high resolution video editing in cloud using live video streams |
US11032588B2 (en) | 2016-05-16 | 2021-06-08 | Google Llc | Method and apparatus for spatial enhanced adaptive bitrate live streaming for 360 degree video playback |
KR20180020452A (en) * | 2016-08-18 | 2018-02-28 | 엘지전자 주식회사 | Terminal and method for controlling the same |
CN106534879B (en) * | 2016-11-08 | 2020-02-07 | 天脉聚源(北京)传媒科技有限公司 | Live broadcast switching method and system based on attention |
RU170238U1 (en) * | 2016-11-14 | 2017-04-18 | Закрытое акционерное общество "Региональный научно-исследовательский экспертный центр" | Complex for research of radio electronic devices |
CN108269222A (en) * | 2016-12-30 | 2018-07-10 | 华为技术有限公司 | Window rendering method and terminal |
US11599263B2 (en) * | 2017-05-18 | 2023-03-07 | Sony Group Corporation | Information processing device, method, and program for generating a proxy image from a proxy file representing a moving image |
US11049218B2 (en) | 2017-08-11 | 2021-06-29 | Samsung Electronics Company, Ltd. | Seamless image stitching |
CN109195010B (en) * | 2018-08-15 | 2021-08-06 | 咪咕视讯科技有限公司 | Code rate adjusting method and device |
US20200322656A1 (en) * | 2019-04-02 | 2020-10-08 | Nbcuniversal Media, Llc | Systems and methods for fast channel changing |
CN110556032A (en) * | 2019-09-06 | 2019-12-10 | 深圳市艾肯麦客科技有限公司 | 3D model teaching system based on internet |
JP7164856B1 (en) | 2022-01-21 | 2022-11-02 | 17Live株式会社 | Server and method |
JP7388650B1 (en) * | 2023-06-09 | 2023-11-29 | 17Live株式会社 | Servers and computer programs |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020129374A1 (en) * | 1991-11-25 | 2002-09-12 | Michael J. Freeman | Compressed digital-data seamless video switching system |
US6510553B1 (en) * | 1998-10-26 | 2003-01-21 | Intel Corporation | Method of streaming video from multiple sources over a network |
US20050275752A1 (en) * | 2002-10-15 | 2005-12-15 | Koninklijke Philips Electronics N.V. | System and method for transmitting scalable coded video over an ip network |
US20060184966A1 (en) * | 2005-02-14 | 2006-08-17 | Hillcrest Laboratories, Inc. | Methods and systems for enhancing television applications using 3D pointing |
US20070195203A1 (en) * | 2006-02-21 | 2007-08-23 | Qualcomm Incorporated | Multi-program viewing in a wireless apparatus |
US20090185618A1 (en) * | 2002-04-11 | 2009-07-23 | Microsoft Corporation | Streaming Methods and Systems |
Family Cites Families (61)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5555244A (en) | 1994-05-19 | 1996-09-10 | Integrated Network Corporation | Scalable multimedia network |
US6055012A (en) * | 1995-12-29 | 2000-04-25 | Lucent Technologies Inc. | Digital multi-view video compression with complexity and compatibility constraints |
US6141053A (en) | 1997-01-03 | 2000-10-31 | Saukkonen; Jukka I. | Method of optimizing bandwidth for transmitting compressed video data streams |
US6643496B1 (en) | 1998-03-31 | 2003-11-04 | Canon Kabushiki Kaisha | System, method, and apparatus for adjusting packet transmission rates based on dynamic evaluation of network characteristics |
US6292512B1 (en) * | 1998-07-06 | 2001-09-18 | U.S. Philips Corporation | Scalable video coding system |
US6275531B1 (en) | 1998-07-23 | 2001-08-14 | Optivision, Inc. | Scalable video coding method and apparatus |
US6167084A (en) | 1998-08-27 | 2000-12-26 | Motorola, Inc. | Dynamic bit allocation for statistical multiplexing of compressed and uncompressed digital video signals |
US6498865B1 (en) | 1999-02-11 | 2002-12-24 | Packetvideo Corp. | Method and device for control and compatible delivery of digitally compressed visual data in a heterogeneous communication network |
US6499060B1 (en) * | 1999-03-12 | 2002-12-24 | Microsoft Corporation | Media coding for loss recovery with remotely predicted data units |
US6633725B2 (en) * | 2000-05-05 | 2003-10-14 | Microsoft Corporation | Layered coding of image data using separate data storage tracks on a storage medium |
US6816194B2 (en) * | 2000-07-11 | 2004-11-09 | Microsoft Corporation | Systems and methods with error resilience in enhancement layer bitstream of scalable video coding |
US6973622B1 (en) | 2000-09-25 | 2005-12-06 | Wireless Valley Communications, Inc. | System and method for design, tracking, measurement, prediction and optimization of data communication networks |
KR20020064932A (en) * | 2000-10-11 | 2002-08-10 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | Spatial scalability for fine granular video encoding |
US6907070B2 (en) | 2000-12-15 | 2005-06-14 | Microsoft Corporation | Drifting reduction and macroblock-based control in progressive fine granularity scalable video coding |
JP3769468B2 (en) | 2001-03-21 | 2006-04-26 | 株式会社エヌ・ティ・ティ・ドコモ | Communication quality control method, communication quality control system, packet analysis device, and data transmission terminal device |
US7272153B2 (en) | 2001-05-04 | 2007-09-18 | Brooktree Broadband Holding, Inc. | System and method for distributed processing of packet data containing audio information |
US6496217B1 (en) * | 2001-06-12 | 2002-12-17 | Koninklijke Philips Electronics N.V. | Video communication system using model-based coding and prioritization techniques |
US7012893B2 (en) * | 2001-06-12 | 2006-03-14 | Smartpackets, Inc. | Adaptive control of data packet size in networks |
TW550507B (en) * | 2001-10-16 | 2003-09-01 | Ulead Systems Inc | System and method for establishing interactive video disk playing menu |
US7225459B2 (en) * | 2001-10-17 | 2007-05-29 | Numerex Investment Corporation | Method and system for dynamically adjusting video bit rates |
US6789123B2 (en) | 2001-12-28 | 2004-09-07 | Microsoft Corporation | System and method for delivery of dynamically scalable audio/video content over a network |
US20030156824A1 (en) * | 2002-02-21 | 2003-08-21 | Koninklijke Philips Electronics N.V. | Simultaneous viewing of time divided segments of a tv program |
US7404001B2 (en) | 2002-03-27 | 2008-07-22 | Ericsson Ab | Videophone and method for a video call |
US7899915B2 (en) | 2002-05-10 | 2011-03-01 | Richard Reisman | Method and apparatus for browsing using multiple coordinated device sets |
US7706359B2 (en) | 2002-07-01 | 2010-04-27 | Converged Data Solutions, Inc. | Systems and methods for voice and data communications including a network drop and insert interface for an external data routing resource |
US7072394B2 (en) | 2002-08-27 | 2006-07-04 | National Chiao Tung University | Architecture and method for fine granularity scalable video coding |
JP3513148B1 (en) | 2002-10-11 | 2004-03-31 | 株式会社エヌ・ティ・ティ・ドコモ | Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, moving picture coding program, and moving picture decoding program |
US7403660B2 (en) | 2003-04-30 | 2008-07-22 | Nokia Corporation | Encoding picture arrangement parameter in picture bitstream |
BRPI0411433B1 (en) * | 2003-06-16 | 2018-10-16 | Thomson Licensing | decoding method and apparatus allowing fast change of compressed video channel |
US8327411B2 (en) * | 2003-12-01 | 2012-12-04 | Sharp Laboratories Of America, Inc. | Low-latency random access to compressed video |
US20070195878A1 (en) | 2004-04-06 | 2007-08-23 | Koninklijke Philips Electronics, N.V. | Device and method for receiving video data |
JP4805915B2 (en) * | 2004-05-04 | 2011-11-02 | クゥアルコム・インコーポレイテッド | Method and apparatus for assembling bi-directionally predicted frames for temporal scalability |
US20050254575A1 (en) | 2004-05-12 | 2005-11-17 | Nokia Corporation | Multiple interoperability points for scalable media coding and transmission |
US8484308B2 (en) | 2004-07-02 | 2013-07-09 | MatrixStream Technologies, Inc. | System and method for transferring content via a network |
JP2008523688A (en) * | 2004-12-10 | 2008-07-03 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Hierarchical digital video encoding system and method in a digital video recorder |
US7675661B2 (en) * | 2005-02-04 | 2010-03-09 | Seiko Epson Corporation | Printing based on motion picture |
KR100643291B1 (en) | 2005-04-14 | 2006-11-10 | 삼성전자주식회사 | Apparatus and method of video encoding and decoding for minimizing random access delay |
KR100667806B1 (en) * | 2005-07-07 | 2007-01-12 | 삼성전자주식회사 | Method and apparatus for video encoding and decoding |
CN101248668A (en) * | 2005-08-26 | 2008-08-20 | 汤姆森特许公司 | Trick broadcast using time demixing |
US8229983B2 (en) * | 2005-09-27 | 2012-07-24 | Qualcomm Incorporated | Channel switch frame |
US9008293B2 (en) | 2005-11-25 | 2015-04-14 | At&T Intellectual Property I, L.P. | Caller ID information to internet protocol television displays |
CA2633366C (en) * | 2005-12-22 | 2015-04-28 | Vidyo, Inc. | System and method for videoconferencing using scalable video coding and compositing scalable video conferencing servers |
JP5431733B2 (en) * | 2006-01-05 | 2014-03-05 | テレフオンアクチーボラゲット エル エム エリクソン(パブル) | Media content management |
US7421455B2 (en) * | 2006-02-27 | 2008-09-02 | Microsoft Corporation | Video search and services |
FR2898236A1 (en) * | 2006-03-03 | 2007-09-07 | Thomson Licensing Sas | Method of transmitting audiovisual flows by anticipating controls of the user, receiver and transmitter for implementing the method |
US8396134B2 (en) * | 2006-07-21 | 2013-03-12 | Vidyo, Inc. | System and method for scalable video coding using telescopic mode flags |
CN100588249C (en) * | 2006-07-27 | 2010-02-03 | 腾讯科技(深圳)有限公司 | Method, system and terminal for adjusting video quality |
US8773494B2 (en) * | 2006-08-29 | 2014-07-08 | Microsoft Corporation | Techniques for managing visual compositions for a multimedia conference call |
CN101202906A (en) * | 2006-12-11 | 2008-06-18 | 国际商业机器公司 | Method and equipment for processing video stream in digital video broadcast system |
US20080148322A1 (en) * | 2006-12-18 | 2008-06-19 | At&T Knowledge Ventures, Lp | System and method of providing video-on-demand content |
WO2008082441A1 (en) * | 2006-12-29 | 2008-07-10 | Prodea Systems, Inc. | Display inserts, overlays, and graphical user interfaces for multimedia systems |
US7912098B2 (en) * | 2007-03-29 | 2011-03-22 | Alcatel Lucent | System, method, and device using a singly encapsulated bundle and a tagger for re-encapsulation |
JP4935551B2 (en) | 2007-07-17 | 2012-05-23 | ソニー株式会社 | Display control apparatus, display control method, and program |
US20090025028A1 (en) | 2007-07-20 | 2009-01-22 | At&T Intellectual Property, Inc. | Systems, methods and computer products for internet protocol television voicemail monitoring |
US8230100B2 (en) * | 2007-07-26 | 2012-07-24 | Realnetworks, Inc. | Variable fidelity media provision system and method |
CA2650151C (en) * | 2008-01-17 | 2013-04-02 | Lg Electronics Inc. | An iptv receiving system and data processing method |
WO2009155963A1 (en) * | 2008-06-23 | 2009-12-30 | Ericsson Hungary Ltd | Improving transmission of media streams of broadcast services in a multimedia broadcast transmission system |
US8706863B2 (en) * | 2008-07-18 | 2014-04-22 | Apple Inc. | Systems and methods for monitoring data and bandwidth usage |
US20100161716A1 (en) * | 2008-12-22 | 2010-06-24 | General Instrument Corporation | Method and apparatus for streaming multiple scalable coded video content to client devices at different encoding rates |
US20100259595A1 (en) * | 2009-04-10 | 2010-10-14 | Nokia Corporation | Methods and Apparatuses for Efficient Streaming of Free View Point Video |
US8341672B2 (en) * | 2009-04-24 | 2012-12-25 | Delta Vidyo, Inc | Systems, methods and computer readable media for instant multi-channel video content browsing in digital video distribution systems |
2010
- 2010-04-22 US US12/765,767 patent/US8341672B2/en active Active
- 2010-04-22 WO PCT/US2010/032126 patent/WO2010124140A1/en active Application Filing
- 2010-04-22 CA CA2759729A patent/CA2759729A1/en not_active Abandoned
- 2010-04-22 EP EP10767793A patent/EP2422469A4/en not_active Withdrawn
- 2010-04-22 US US12/765,815 patent/US20100272187A1/en not_active Abandoned
- 2010-04-22 US US12/765,793 patent/US8607283B2/en not_active Expired - Fee Related
- 2010-04-22 JP JP2012507397A patent/JP2012525076A/en active Pending
- 2010-04-22 WO PCT/US2010/032118 patent/WO2010124133A1/en active Application Filing
- 2010-04-22 AU AU2010238757A patent/AU2010238757A1/en not_active Abandoned
- 2010-04-22 CN CN2010800183337A patent/CN102422577A/en active Pending
- 2010-04-22 WO PCT/US2010/032121 patent/WO2010124136A1/en active Application Filing
2013
- 2013-05-15 US US13/895,131 patent/US9426536B2/en not_active Expired - Fee Related
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9426536B2 (en) | 2009-04-24 | 2016-08-23 | Vidyo, Inc. | Systems, methods and computer readable media for instant multi-channel video content browsing in digital video distribution systems |
US8607283B2 (en) | 2009-04-24 | 2013-12-10 | Delta Vidyo, Inc. | Systems, methods and computer readable media for instant multi-channel video content browsing in digital video distribution systems |
US20100272187A1 (en) * | 2009-04-24 | 2010-10-28 | Delta Vidyo, Inc. | Efficient video skimmer |
US8341672B2 (en) * | 2009-04-24 | 2012-12-25 | Delta Vidyo, Inc | Systems, methods and computer readable media for instant multi-channel video content browsing in digital video distribution systems |
US20100293584A1 (en) * | 2009-04-24 | 2010-11-18 | Delta Vidyo, Inc. | Systems, methods and computer readable media for instant multi-channel video content browsing in digital video distribution systems |
US20120090006A1 (en) * | 2009-06-19 | 2012-04-12 | Shenzhen Tcl New Technology Co., Ltd. | Television and generating method of electronic program guide menu thereof |
US20100325663A1 (en) * | 2009-06-22 | 2010-12-23 | Samsung Electronics Co. Ltd. | Broadcast receiving apparatus and method for switching channels thereof |
US8473998B1 (en) * | 2009-07-29 | 2013-06-25 | Massachusetts Institute Of Technology | Network coding for multi-resolution multicast |
US9762957B2 (en) * | 2009-07-29 | 2017-09-12 | Massachusetts Institute Of Technology | Network coding for multi-resolution multicast |
US20130259041A1 (en) * | 2009-07-29 | 2013-10-03 | Massachusetts Institute Of Technology | Network coding for multi-resolution multicast |
US20150365724A1 (en) * | 2009-07-29 | 2015-12-17 | Massachusetts Institute Of Technology | Network Coding for Multi-Resolution Multicast |
US9148291B2 (en) * | 2009-07-29 | 2015-09-29 | Massachusetts Institute Of Technology | Network coding for multi-resolution multicast |
CN102097081A (en) * | 2011-01-27 | 2011-06-15 | 明基电通有限公司 | Display device as well as driving method and displaying method thereof |
US20140341543A1 (en) * | 2011-09-12 | 2014-11-20 | Alcatel Lucent | Method for playing multimedia content, a related system and related playback module |
US20160021165A1 (en) * | 2012-07-30 | 2016-01-21 | Shivendra Panwar | Streamloading content, such as video content for example, by both downloading enhancement layers of the content and streaming a base layer of the content |
US9661051B2 (en) * | 2012-07-30 | 2017-05-23 | New York University | Streamloading content, such as video content for example, by both downloading enhancement layers of the content and streaming a base layer of the content |
US20140074911A1 (en) * | 2012-09-12 | 2014-03-13 | Samsung Electronics Co., Ltd. | Method and apparatus for managing multi-session |
US20140096167A1 (en) * | 2012-09-28 | 2014-04-03 | Vringo Labs, Inc. | Video reaction group messaging with group viewing |
US20210213359A1 (en) * | 2012-10-03 | 2021-07-15 | Gree, Inc. | Method of synchronizing online game, and server device |
US11878251B2 (en) * | 2012-10-03 | 2024-01-23 | Gree, Inc. | Method of synchronizing online game, and server device |
US11601712B2 (en) * | 2016-02-24 | 2023-03-07 | Nos Inovacao, S.A. | Predictive tuning system |
Also Published As
Publication number | Publication date |
---|---|
WO2010124133A1 (en) | 2010-10-28 |
US8341672B2 (en) | 2012-12-25 |
EP2422469A4 (en) | 2012-10-31 |
US20100272187A1 (en) | 2010-10-28 |
CN102422577A (en) | 2012-04-18 |
US9426536B2 (en) | 2016-08-23 |
AU2010238757A1 (en) | 2011-11-03 |
US8607283B2 (en) | 2013-12-10 |
JP2012525076A (en) | 2012-10-18 |
EP2422469A1 (en) | 2012-02-29 |
US20100293584A1 (en) | 2010-11-18 |
WO2010124136A1 (en) | 2010-10-28 |
US20130254817A1 (en) | 2013-09-26 |
WO2010124140A1 (en) | 2010-10-28 |
CA2759729A1 (en) | 2010-10-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8341672B2 (en) | Systems, methods and computer readable media for instant multi-channel video content browsing in digital video distribution systems | |
US8135040B2 (en) | Accelerated channel change | |
US9055312B2 (en) | System and method for interactive synchronized video watching | |
US6941575B2 (en) | Webcam-based interface for initiating two-way video communication and providing access to cached video | |
US7003795B2 (en) | Webcam-based interface for initiating two-way video communication | |
EP1842337B1 (en) | Multicast distribution of streaming multimedia content | |
CN107241564B (en) | Multi-stream video conference method, device and system based on IMS network architecture | |
KR101377952B1 (en) | Method for transmitting a broadcasting signal, method for receiveing a broadcasting signal and apparatus for the same | |
US9467706B2 (en) | Distributed encoding of a video stream | |
US8139607B2 (en) | Subscriber controllable bandwidth allocation | |
KR101356502B1 (en) | Method for transmitting a broadcasting signal, method for receiveing a broadcasting signal and apparatus for the same | |
US20100333143A1 (en) | System and method for an active video electronic programming guide | |
US20080092184A1 (en) | Apparatus for receiving adaptive broadcast signal and method thereof | |
US20120284421A1 (en) | Picture in picture for mobile tv | |
JP2011511554A (en) | Method for streaming video data | |
WO2009080114A1 (en) | Method and apparatus for distributing media over a communications network | |
JP2002084238A (en) | Multiplex broadcast method and apparatus | |
KR101242478B1 (en) | Real time personal broadcasting system using media jockey based on multi-angle |
Legal Events
Code | Title | Description |
---|---|---|
AS | Assignment | Owner name: DELTA VIDYO, INC., NEW JERSEY. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CIVANLAR, REHA;SHAPIRO, OFER;SHALOM, TAL;SIGNING DATES FROM 20100507 TO 20100512;REEL/FRAME:024451/0113 |
STCF | Information on status: patent grant | Free format text: PATENTED CASE |
AS | Assignment | Owner name: VIDYO, INC., NEW JERSEY. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DELTA VIDYO, INC.;REEL/FRAME:030985/0144. Effective date: 20130731 |
AS | Assignment | Owner name: VENTURE LENDING & LEASING VI, INC., CALIFORNIA. Free format text: SECURITY AGREEMENT;ASSIGNOR:VIDYO, INC.;REEL/FRAME:031123/0712. Effective date: 20130813 |
FEPP | Fee payment procedure | Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
FPAY | Fee payment | Year of fee payment: 4 |
AS | Assignment | Owner name: ESW HOLDINGS, INC., TEXAS. Free format text: SECURITY INTEREST;ASSIGNOR:VIDYO, INC.;REEL/FRAME:046737/0421. Effective date: 20180807 |
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 8 |
FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |