US20010011374A1 - Statistical disk scheduling for video servers - Google Patents
- Publication number
- US20010011374A1 (application Ser. No. 09/268,512)
- Authority
- US
- United States
- Prior art keywords
- disk
- queue
- requests
- request
- steady
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/232—Content retrieval operation locally within server, e.g. reading video streams from disk arrays
- H04N21/2326—Scheduling disk or memory reading operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0613—Improving I/O performance in relation to throughput
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
Definitions
- the invention relates to methods of scheduling disk access requests in a video server, and, more particularly, to statistical scheduling methods that improve the effective disk bandwidth provided by video servers.
- Video-on-demand systems allow subscribers to request video programs from a video library at any time for immediate viewing in their homes. Subscribers submit requests to a video service provider via a communication channel (e.g., telephone lines or a back channel through the distribution network that carries the video to the subscriber's home), and the requested video program is routed to the subscriber's home via telephone or coaxial television lines.
- video service providers use a video server to process subscriber requests, retrieve the requested programs from storage, and distribute the programs to the appropriate subscriber(s).
- One exemplary system for providing video-on-demand services is described in commonly assigned U.S. patent application Ser. No. 08/984,710, filed Dec. 3, 1997, which is incorporated herein by reference.
- disk bandwidth in a video server is typically also required for operations such as loading content, disk maintenance and file system meta-data syncing. Disk bandwidth may also be reserved for reducing latency in data transfer to subscribers. The number of subscribers that can be properly served concurrently by a video server therefore depends on effective disk bandwidth, which in turn depends on how disk access requests are scheduled.
- Statistical Disk Scheduling (SDS) improves video server functionality by increasing the bandwidth utilization of the storage medium in the following manner: worst case performance is used for priority operations (e.g., user read operations), but the bandwidth created by better-than-worst-case performance is used for non-priority operations such as loading content onto the disk drives and disk maintenance.
- As a result, bandwidth for loading content, disk maintenance, or file system meta-data syncing does not have to be specifically reserved, thus increasing the number of users that can be served simultaneously by the video server.
- SDS maintains at least two queues and a queue selector.
- The first queue is an access request queue for requests from current users who are presently viewing a program, and the second queue is for all other forms of access requests.
- the second queue may comprise multiple queues to provide a queuing hierarchy.
- the requests are ordered in each of the queues to optimize the bandwidth and ensure that the data to the current users is not interrupted such that a display anomaly occurs.
- the queue selector identifies the queue that will supply the next access request to a disk queue.
- the selected requests are sent to the disk queues for execution.
- The disk queues are generally located on the disk drives and are generally not accessible except to place a request in the queue for each disk drive. The requests are then executed in a first-in, first-out manner.
- the invention defers disk use to the latest possible moment because once the request is in the disk queue it is more difficult to change.
- the inventive queue structure provides opportunities to alter the disk access requests and their execution order prior to sending the requests to the disk queue. If a disk queue is not used, i.e., the disk drive does not have an internal queue, then the access requests are sent one at a time from the SDS to the disk drive for execution.
- the preferred embodiment of the SDS maintains three queues for each disk based on the type and priority of disk access requests, and a queue selector for managing queue selection. Selected requests are forwarded from the three queues to the disk such that bandwidth utilization is maximized, while giving highest priority to subscribers currently viewing a program so that their program streams are generally not interrupted. (Subscribers currently viewing a program are referred to as “steady-state” subscribers.) SDS dynamically monitors bandwidth utilization to determine when lower-priority requests can be scheduled without affecting on-time completion of the higher priority steady-state subscriber requests. In order to keep the disks busy and maximize disk bandwidth utilization, disk command queuing may be employed to ensure that the disk can begin seeking for the next access immediately after it finishes the data transfer for the current disk access.
- FIG. 1 depicts a high-level block diagram of a video-on-demand system that includes a generic video server incorporating the present invention
- FIG. 2 depicts the queuing architecture of the Statistical Disk Scheduler used to perform the method of the present invention
- FIG. 3 depicts a flowchart specification of the SDS Selection Procedure
- FIG. 4 depicts a flowchart specification of the Scheduling Interval Procedure
- FIG. 5 depicts a round-robin version of the Scheduling Interval Procedure
- FIG. 6 depicts a flowchart specification of the Command Completion Procedure
- FIG. 7 depicts a flowchart specification of the method of the present invention.
- FIG. 8 shows the software process architecture for a preferred multi-threaded implementation of the method of the present invention.
- FIG. 1 depicts a video-on-demand system that utilizes a generic video server incorporating the teachings of the present invention.
- video-on-demand system 100 contains a video server 110 that communicates with a plurality of disks 120 via a Statistical Disk Scheduler (SDS) 170 .
- video server 110 contains a CPU 114 and memory element 117 .
- SDS 170 is coupled to disks 120 by paths 130 (e.g., fiber channel), and memory 117 by data path 177 .
- the video server sends access requests along paths 130 to disks 120 , and each disk 120 has its own internal queue 125 for buffering access requests.
- Data read from the disks are transmitted back to the video server along paths 130 n (where n is an integer greater than zero).
- the paths 130 n are “daisy chained” to form a data transfer loop 131 , e.g., a fiber channel loop. Although one loop is depicted, multiple loops may be employed to interconnect subsets of the disk drives such that the data transfer rate amongst the disk drives and the video server is increased over that of a single loop system.
- the video server contains a Distribution Manager 180 that receives the data transmitted along paths 130 n and loop 131 and distributes this data to subscribers 160 via a transport network 140 . Additionally, disks 120 send messages called command completion messages (to be discussed later) to the SDS 170 along paths 130 .
- the transport network 140 is typically, but not exclusively, a conventional bi-directional hybrid fiber-coaxial cable network. Subscribers 160 are coupled to the transport network 140 by paths 150 (e.g., coaxial cable). Additionally, transport network 140 forwards subscriber access requests along path 175 to the SDS 170 , and receives video data from Distribution Manager 180 via path 185 .
- paths 150 e.g., coaxial cable
- the SDS 170 performs the method of the present invention.
- a logical representation of the SDS data architecture is shown in FIG. 2.
- the outputs of each queue are connected to the data loop ( 131 of FIG. 1).
- the SDS queuing architecture contains three queues for each disk 120 and a queue selector 205 for managing queue selection, i.e., the queue selector determines which queue is to transfer the next access request to a disk drive.
- the logical representation is more easily understandable.
- Although FIG. 2 depicts three queues for each disk drive, a greater or lesser number of queues may be used to fulfill the invention, i.e., at least two queues should be used: one for the “steady-state” requests and one for all other requests.
- a steady-state subscriber queue (SSQ) 221 is used for “steady-state” subscriber disk reads for active streams (i.e., continuous content retrieval for distribution to subscribers currently watching a program.) Disk access requests in SSQ 221 are assigned the highest priority.
- a new subscriber queue (NSQ) 222 is for subscriber requests to begin viewing a program or perform other program related commands, i.e., non-steady state commands such as fast forward or rewind that in essence are a request for a new data stream.
- Disk access requests in NSQ 222 are assigned medium priority.
- the other request queue (ORQ) 223 is for all non-subscriber operations, such as loading content, disk maintenance, and file system meta-data syncing. Disk access requests in ORQ 223 are assigned the lowest priority.
- Queues 221 n , 222 n , and 223 n are collectively called the SDS queues 200 n , where n is an integer greater than zero that represents a disk drive 120 n , in an array of disk drives 120 .
- the queue selector 205 selects requests from the three SDS queues 221 n , 222 n , and 223 n and forwards the requests to the corresponding disk queue 125 n .
- Each request has an associated worst-case access time based on the type of request and data transfer size. The worst-case access time can be fixed, or dynamically computed based on prior access time statistics.
- each steady-state subscriber request has a time deadline for when the request must complete in order to guarantee continuous video for that subscriber. Disk requests in the NSQ and ORQ generally do not have time deadlines.
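- A minimal sketch of one way to compute the worst-case access time dynamically from prior access-time statistics is given below: track a sliding window of measured service times per disk and report a high percentile plus a margin. The class name, the percentile, the window size, and the fixed fallback value are illustrative assumptions, not details taken from this disclosure.

```python
from collections import deque

class WorstCaseEstimator:
    """Tracks recent access times for one disk and reports a conservative
    worst-case figure (a high percentile of the samples plus a margin)."""

    def __init__(self, window=1024, percentile=0.999, margin_ms=2.0, fixed_floor_ms=25.0):
        self.samples = deque(maxlen=window)   # sliding window of measured access times (ms)
        self.percentile = percentile
        self.margin_ms = margin_ms
        self.fixed_floor_ms = fixed_floor_ms  # fixed worst case used until enough history exists

    def record(self, access_time_ms):
        self.samples.append(access_time_ms)

    def worst_case_ms(self):
        if len(self.samples) < 100:           # not enough statistics yet: use the fixed value
            return self.fixed_floor_ms
        ordered = sorted(self.samples)
        idx = min(len(ordered) - 1, int(self.percentile * len(ordered)))
        return ordered[idx] + self.margin_ms
```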
- Requests in the SSQ 221 n are ordered by time deadline so that the request at the front of the queue has the earliest deadline.
- Consecutive SSQ requests with the same time deadline are ordered by logical disk block address according to an elevator algorithm.
- the elevator algorithm is a disk scheduling algorithm well-known in the art in which the disk head travels in one direction over the disk cylinders until there are no more requests that can be serviced by continuing in that direction. At this point, the disk head changes direction and repeats the process, thus traveling back and forth over the disk cylinders as it services requests. Since requests in the NSQ and ORQ do not generally have deadlines, they may be ordered on a first come first serve basis, or according to some other desired priority scheme.
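- As a concrete illustration of this ordering rule, the sketch below sorts steady-state requests by deadline and breaks ties among equal deadlines with a single elevator sweep over logical block addresses, starting from the current head position. The request fields and the sweep-direction handling are assumptions made for the example.

```python
from dataclasses import dataclass
from itertools import groupby

@dataclass
class SSQRequest:
    deadline: float      # absolute time by which the transfer must complete
    block_address: int   # starting logical disk block address

def order_ssq(requests, head_position, ascending=True):
    """Order SSQ entries by deadline; runs of equal deadlines are served in a
    single elevator sweep: continue in the current direction, then reverse."""
    ordered = []
    by_deadline = sorted(requests, key=lambda r: r.deadline)
    for _, group in groupby(by_deadline, key=lambda r: r.deadline):
        group = list(group)
        same_dir = [r for r in group if (r.block_address >= head_position) == ascending]
        other_dir = [r for r in group if (r.block_address >= head_position) != ascending]
        same_dir.sort(key=lambda r: r.block_address, reverse=not ascending)
        other_dir.sort(key=lambda r: r.block_address, reverse=ascending)
        ordered.extend(same_dir + other_dir)
    return ordered
```

- For example, with the head at block 300 and two requests sharing a deadline at blocks 500 and 120, this sweep serves block 500 first and block 120 on the return pass.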
- disk command queuing may be employed to ensure that the disk can begin the seek for the next access immediately after it finishes the data transfer for the current disk access.
- the request is initially added to the SSQ 221 1 of the first disk 120 1 .
- the request is added to the second disk's SSQ 221 2 as soon as the video server begins sending the data that was recalled from the first disk 120 1 to the subscriber.
- Steady-state requests are similarly added to the SSQ 221 n of each successive disk 120 n .
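- A minimal sketch of this hand-off follows: when the server begins sending the data read from one disk, the stream's next steady-state read is queued on the following disk's SSQ. The round-robin striping, the deadline arithmetic, and the attribute names are illustrative assumptions only.

```python
from collections import namedtuple

SSQEntry = namedtuple("SSQEntry", "deadline block_address")

def on_transfer_to_subscriber_started(stream, disks, now, period):
    """Called when the video server begins sending the data recalled from the
    current disk: queue the stream's next read on the next disk's SSQ so that
    disk can start seeking without waiting for the play-out to finish."""
    next_disk = (stream.current_disk + 1) % len(disks)       # assumes round-robin striping
    entry = SSQEntry(deadline=now + period,                  # must finish before the next play-out point
                     block_address=stream.next_extent_block(next_disk))
    disks[next_disk].ssq.append(entry)
    disks[next_disk].ssq.sort(key=lambda e: e.deadline)      # keep the SSQ in deadline order
    stream.current_disk = next_disk
```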
- the queue selector 205 employs an SDS Selection Procedure to select requests from the three SDS queues 200 n and forward the requests to an associated disk queue 125 n located within each of the disk drives 120 n .
- the SDS Selection Procedure uses worst-case access times, request priorities, and time deadlines in determining which request to forward to the disk queue.
- the general strategy of the SDS Selection Procedure is to select a non-SSQ request only when such a selection will not cause any of the SSQ 221 n requests to miss their time deadlines, even if the non-SSQ request and all requests in the SSQ 221 n were to take their worst-case access times. If such a guarantee cannot be made, then the first request in the SSQ is always selected.
- the SDS Selection Procedure checks whether the data for the selected read request is already in cache (if caching is used). If this is the case, the disk access can be discarded and the Selection Procedure is repeated. Otherwise, the selected request is removed from the SDS queue 221 n and forwarded to an associated disk queue 125 n .
- FIG. 3 depicts a flow diagram of the SDS Selection Procedure 300 .
- the Selection Procedure checks whether the first entry in the NSQ can be selected while guaranteeing that all SSQ requests will meet their time deadlines in the worst case (step 320 ), where worst case is defined by the system.
- Generally, the worst case value is the access time that yields an acceptable per-user error rate.
- Each queue maintains a “sum of the worst case values”; the selector performs a worst case analysis over these sums and selects the queue that will be used (i.e., steps 320 and 330) to send the next command to the disk drive.
- the following pseudocode represents the operation of such a selector.
- ORQ.head.worstcase and NSQ.head.worstcase are the respective worstcase access times to fulfill the next request in the ORQ and NSQ.
- the “remaining time” value is computed as follows:
- the worst case time value may be dynamically computed or empirically measured to be a cut off time that defines a period in which accesses have an acceptable error rate. If the first entry fulfills the requirement, then this first entry is selected (step 340 ); otherwise, the Selection Procedure checks whether the first entry in the ORQ can be selected while guaranteeing that all SSQ requests will meet their time deadlines in the worst case (step 330 ). If so, then this first entry is selected (step 350 ); otherwise, the procedure proceeds to step 315 , wherein the procedure queries whether the first entry in the SSQ can be executed within its time deadline assuming the worst case access. If the request cannot be executed in time, the request is discarded at step 325 and the procedure returns to step 320 .
- the first entry of the SSQ is selected at step 360 .
- the selected request is then removed from its queue (step 370 ).
- the Selection Procedure checks whether data for the selected request is already in cache (step 380 ) (the caching step 380 is shown in phantom to represent that it is an optional step). If the request is cached, the selected request is discarded and the Selection Procedure is repeated. Otherwise, the selected request is forwarded to the associated disk queue (step 390 ).
- the SDS executes the Selection Procedure during two scheduling events, called the scheduling interval and the command completion event.
- the scheduling interval is a fixed, periodic interval, while a command completion event occurs every time one of the disks completes a command. (Note that it is possible, although highly unlikely, that multiple disks complete a command simultaneously at a command completion event.)
- a procedure called the Scheduling Interval Procedure is executed, and at each command completion event, a procedure called the Command Completion Procedure is executed.
- the Command Completion Procedure is executed first (i.e., the Command Completion Procedure is given priority over the Scheduling Interval Procedure).
- the execution priority of these routines is reversed. Such reversal leaves more time available to do other operations.
- steady-state requests are added to the next SSQ, if possible. (Recall that a steady-state request can be added to the next SSQ as soon as the data is output from the video server to the subscriber), and all SSQs are reordered to maintain correct time deadline order. The first entries in each of the SSQs are then sorted based on time deadlines, which determines the order with which the disks are serviced.
- The Selection Procedure 300 is repeatedly executed as long as the associated disk queue is not full, at least one of the three SDS queues (SSQ, NSQ, ORQ) is not empty, and there is a request in one of the three SDS queues that satisfies the Selection Procedure criteria. For example, in a three-disk system in which the disk queues are not full, if the first entry in Disk 1's SSQ has a time deadline of 35, the first entry in Disk 2's SSQ has a time deadline of 28, and the first entry in Disk 3's SSQ has a time deadline of 39, then the disks would be serviced in the following order: Disk 2, Disk 1, Disk 3. Once the disk order has been established, the SDS Selection Procedure is performed for each disk in that order.
- the extents for the data are very long (e.g., hundreds of kilobytes) such that the disk queues have a depth of one.
- the disk queues may have various depths, e.g., five requests could be stored and executed in a first-in, first-out (FIFO) manner.
- The extent size is inversely proportional to disk queue depth, where data delivery latency is the driving force that dictates the use of a large extent size for video server applications.
- the disk queue depth is dictated by the desire to reduce disk drive idle time.
- FIG. 4 shows a formal specification of the Scheduling Interval Procedure 400 in flowchart form.
- the Scheduling Interval Procedure adds steady-state requests to the appropriate SSQs, if possible (step 420 ), and reorders all the SSQs by time deadlines (step 430 ).
- the disk that has the earliest deadline for the first entry in its SSQ is then selected (step 450 ).
- the Selection Procedure is performed for the selected disk (step 300 ), and then the Scheduling Interval Procedure checks whether a request satisfying the Selection Procedure criteria was selected (step 460 ).
- the Scheduling Interval Procedure checks whether the selected disk's queue is full, or if all three SDS queues for the selected disk are empty. If either of these conditions are true, then the disk with the next earliest deadline for the first entry in its SSQ is selected (steps 475 , 480 , 450 ) and the Selection Procedure is repeated for this disk (step 300 ). If, however, both conditions are false, the Selection Procedure is repeated for the same selected disk.
- the disks are processed sequentially, ordered by the corresponding SSQ's first deadline, where “processing” means that the Selection Procedure is invoked repeatedly until the disk queue is full or there are no more requests for that disk.
- the Scheduling Interval Procedure fills each of the disk queues one at a time, which is most efficient for small disk queues.
- a small disk queue is used, as it facilitates the latency reduction.
- the request is aborted by the SDS, i.e., the SDS “times-out” waiting for the request to be serviced and then moves on to the next procedural step.
- the server maintains a disk mimic queue that mimics the content of the disk queue of each of the disk drives. As such, the server can poll the mimic queue to determine the nature of the errant request and send an “abort” command to the disk drive for that request. The disk drive will then process the next request in the disk queue and the server updates the mimic queue.
- A round-robin version of the Scheduling Interval Procedure for large disk queues is shown in FIG. 5.
- steady-state requests are first added to the appropriate SSQs (step 520 ), and disks are ordered by the deadlines of the first entry in each disk's SSQ (step 530 ).
- the Selection Procedure is executed only once for a disk, and then the next disk is selected. Once all disks have been selected, the round-robin Scheduling Interval Procedure goes through each of the disks once again in the same order, executing the Selection Procedure once per disk. This process is continued until no more requests can be added to any of the disk queues.
- a vector D is defined as an ordered list of all the disks, where the order is based on the time deadlines of the first entry in each disk's SSQ (step 530 ).
- a Boolean variable SELECT is initialized to false, and an integer variable i is initialized to 1 (step 540 ).
- i is set to 1 (start again with the first disk). If disk D i 's disk queue is full (step 560 ), or all three of D i 's SDS queues are empty (step 570 ), then the next disk is selected (step 585 ).
- the Selection Procedure is performed for D i (step 300 ), and if a request satisfying the Selection Procedure criteria was found, SELECT is set to true (step 580 ), and the next disk is selected (step 585 ).
- the SELECT variable indicates whether a request was added to one of the disk queues during a pass over the vector of disks.
- The Command Completion Procedure is executed, on a first-in, first-out basis, every time a disk completes a command. Thus, for each completed command, the Command Completion Procedure executes in the order in which the commands are completed, i.e., using the FIFO command handling step 605. As such, the Command Completion Procedure begins at step 610, proceeds to step 605 and ends at step 690.
- the procedure can be adapted to handle simultaneous command events.
- it is first determined if multiple disks have completed a command simultaneously at the command completion event. (Most likely only one disk will have completed a command at the command completion event, but the multiple-disk situation is possible.) If more than one disk has completed a command, then the first entries in the SSQs of these disks are sorted based on time deadlines, determining the order in which the disks are serviced. Once the disk order has been established, the SDS Selection Procedure is performed for each disk in order in the same manner as the Scheduling Interval Procedure.
- the Selection Procedure is repeatedly executed as long as the associated disk queue is not full, at least one of the three SDS queues (SSQ, NSQ, ORQ) is not empty, and there is a request in one of the three SDS queues that satisfies the Selection Procedure criteria.
- Step 605 represents the standard FIFO command handling procedure
- the dashed box 615 represents an alternative procedure capable of handling simultaneous command occurrences.
- the Command Completion Procedure determines which disks have just completed a command, and the disk that has the earliest deadline for the first entry in its SSQ is then selected (step 650 ).
- the Selection Procedure is performed for the selected disk (step 300 ), and then the Command Completion Procedure checks whether a request satisfying the Selection Procedure criteria was selected (step 660 ).
- the disk with the next earliest deadline for the first entry in its SSQ is selected (steps 675 , 680 , 650 ) and the Selection Procedure is repeated for this disk (step 300 ). Otherwise, the Command Completion Procedure checks whether the selected disk's queue is full, or if all three SDS queues for the selected disk are empty. If either of these conditions are true, then the disk with the next earliest deadline for the first entry in its SSQ is selected (steps 675 , 680 , 650 ) and the Selection Procedure is repeated for this disk (step 300 ). If, however, both conditions are false, the Selection Procedure is repeated for the same selected disk.
- the Command Completion Procedure fills each of the disk queues one at a time, i.e., the disk with a complete event is refilled. Note that since it is highly unlikely that more than one disk is serviced on a command completion event, the choice of whether to employ round-robin or sequential filling of the disk queues in the Command Completion Procedure has essentially no impact on performance.
- a formal specification of the method of the present invention is shown in flowchart form in FIG. 7.
- the Command Completion Procedure is invoked ( 600 )
- the Scheduling Interval Procedure is invoked ( 400 ).
- the command completion is given priority and the Command Completion Procedure is executed first.
- the execution priority for these procedures is reversed.
- FIG. 8 shows the software process architecture 800 for the preferred embodiment.
- the media control thread 810 receives new-subscriber request messages from the transport network 140 and path 175 , and forwards these requests through message queues 815 to the T s thread 820 .
- the T s thread 820 is a top level scheduler responsible for two primary functions: first, it maintains all state information necessary to communicate with the disk interfaces 835 and video server memory 840 ; second, it performs the Scheduling Interval Procedure using a period of, for example, 100 ms.
- The T s Loop thread allocates the commands to the SDS queues 825, where each disk drive is associated with a set of queues (e.g., SSQ, NSQ and other queues) generally shown as queues 825 0 , 825 1 , . . . 825 N .
- the initial commands (startup commands) from the T s loop thread 820 are sent from the SDS queues 825 directly to the disk interfaces 835 .
- a response thread 830 communicates the commands from the SDS queues 825 to the disk drive interfaces 835 .
- Each interface 835 communicates to individual disk drives through a fiber channel loop.
- Response thread 830 also receives command completion messages from the disk interfaces 835. Upon receiving these messages, the response thread performs the Command Completion Procedure.
- Media control thread 810 , T s loop thread 820 , and response thread 830 are all executed by video server CPU 114 of FIG. 1.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
Description
- The invention relates to methods of scheduling disk access requests in a video server, and, more particularly, to statistical scheduling methods that improve the effective disk bandwidth provided by video servers.
- Video-on-demand systems allow subscribers to request video programs from a video library at any time for immediate viewing in their homes. Subscribers submit requests to a video service provider via a communication channel (e.g., telephone lines or a back channel through the distribution network that carries the video to the subscriber's home), and the requested video program is routed to the subscriber's home via telephone or coaxial television lines. In order to provide such movie-on-demand services, video service providers use a video server to process subscriber requests, retrieve the requested programs from storage, and distribute the programs to the appropriate subscriber(s). One exemplary system for providing video-on-demand services is described in commonly assigned U.S. patent application Ser. No. 08/984,710, filed Dec. 3, 1997, which is incorporated herein by reference.
- In order for video servers to provide good performance, it is crucial to schedule video storage (disk) access requests such that disk bandwidth is maximized. Also, once a subscriber is watching a program, it is imperative to continuously deliver program content to the subscriber without interruption. In addition to distributing content to subscribers, disk bandwidth in a video server is typically also required for operations such as loading content, disk maintenance and file system meta-data syncing. Disk bandwidth may also be reserved for reducing latency in data transfer to subscribers. The number of subscribers that can be properly served concurrently by a video server therefore depends on effective disk bandwidth, which in turn depends on how disk access requests are scheduled.
- One of the problems facing current disk scheduling methods is the potential variation in time required to service disk accesses. For example, the internal transfer rate of a Seagate Cheetah disk varies from 152 Mbps on inner tracks to 231 Mbps on outer tracks, and the seek time can vary from 0 ms to 13 ms depending on how far apart the segments of data are from one another. Given these variations in seek and transfer times and the fact that the server may contain sixteen or more disk drives, it is difficult to determine the effective disk bandwidth of a video server. As a result, current disk scheduling methods allocate a fixed amount of time for every disk access request, regardless of whether the access finishes early. This results in a deterministic system in which the available disk bandwidth is known, but since the fixed amount of time must be large enough to accommodate a worst-case disk access, disk bandwidth is wasted.
- Therefore, there is a need in the art for a method and apparatus for scheduling disk access requests in a video server without allocating worst-case access times, thus improving disk bandwidth utilization.
- The disadvantages associated with the prior art are overcome by a method of the present invention, called Statistical Disk Scheduling (SDS), which exploits the fact that disk access times are on average significantly less than the worst case access time. The SDS finds use in improving video server functionality by increasing the bandwidth utilization of the storage medium in the following manner: worst case performance is used for priority operations (e.g., user read operations), but the bandwidth created by better-than-worst-case performance is used for non-priority operations such as loading content onto the disk drives and disk maintenance. As a result, bandwidth for loading content, disk maintenance, or file system meta-data syncing does not have to be specifically reserved, thus increasing the number of users that can be served simultaneously by the video server.
- SDS maintains at least two queues and a queue selector. The first queue is an access request queue for requests from current users who are presently viewing a program, and the second queue is for all other forms of access requests. The second queue may comprise multiple queues to provide a queuing hierarchy. The requests are ordered in each of the queues to optimize the bandwidth and ensure that the data to the current users is not interrupted such that a display anomaly occurs. The queue selector identifies the queue that will supply the next access request to a disk queue. The selected requests are sent to the disk queues for execution. The disk queues are generally located on the disk drives and are generally not accessible except to place a request in the queue for each disk drive. The requests are then executed in a first-in, first-out manner. In effect, the invention defers disk use to the latest possible moment because once a request is in the disk queue it is more difficult to change. The inventive queue structure provides opportunities to alter the disk access requests and their execution order prior to sending the requests to the disk queue. If a disk queue is not used, i.e., the disk drive does not have an internal queue, then the access requests are sent one at a time from the SDS to the disk drive for execution.
- More specifically, the preferred embodiment of the SDS maintains three queues for each disk based on the type and priority of disk access requests, and a queue selector for managing queue selection. Selected requests are forwarded from the three queues to the disk such that bandwidth utilization is maximized, while giving highest priority to subscribers currently viewing a program so that their program streams are generally not interrupted. (Subscribers currently viewing a program are referred to as “steady-state” subscribers.) SDS dynamically monitors bandwidth utilization to determine when lower-priority requests can be scheduled without affecting on-time completion of the higher priority steady-state subscriber requests. In order to keep the disks busy and maximize disk bandwidth utilization, disk command queuing may be employed to ensure that the disk can begin seeking for the next access immediately after it finishes the data transfer for the current disk access.
- Furthermore, popular content is migrated to the faster (outer) tracks of the disk drives to reduce the average access time and improve performance.
- The teachings of the present invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
- FIG. 1 depicts a high-level block diagram of a video-on-demand system that includes a generic video server incorporating the present invention;
- FIG. 2 depicts the queuing architecture of the Statistical Disk Scheduler used to perform the method of the present invention;
- FIG. 3 depicts a flowchart specification of the SDS Selection Procedure;
- FIG. 4 depicts a flowchart specification of the Scheduling Interval Procedure;
- FIG. 5 depicts a round-robin version of the Scheduling Interval Procedure;
- FIG. 6 depicts a flowchart specification of the Command Completion Procedure;
- FIG. 7 depicts a flowchart specification of the method of the present invention; and
- FIG. 8 shows the software process architecture for a preferred multi-threaded implementation of the method of the present invention.
- To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.
- FIG. 1 depicts a video-on-demand system that utilizes a generic video server incorporating the teachings of the present invention. Specifically, video-on-demand system 100 contains a video server 110 that communicates with a plurality of disks 120 via a Statistical Disk Scheduler (SDS) 170. In addition to the SDS 170, video server 110 contains a CPU 114 and memory element 117. SDS 170 is coupled to disks 120 by paths 130 (e.g., fiber channel), and to memory 117 by data path 177. The video server sends access requests along paths 130 to disks 120, and each disk 120 has its own internal queue 125 for buffering access requests. Data read from the disks are transmitted back to the video server along paths 130 n (where n is an integer greater than zero). The paths 130 n are “daisy chained” to form a data transfer loop 131, e.g., a fiber channel loop. Although one loop is depicted, multiple loops may be employed to interconnect subsets of the disk drives such that the data transfer rate amongst the disk drives and the video server is increased over that of a single loop system. The video server contains a Distribution Manager 180 that receives the data transmitted along paths 130 n and loop 131 and distributes this data to subscribers 160 via a transport network 140. Additionally, disks 120 send messages called command completion messages (to be discussed later) to the SDS 170 along paths 130.
- The transport network 140 is typically, but not exclusively, a conventional bi-directional hybrid fiber-coaxial cable network. Subscribers 160 are coupled to the transport network 140 by paths 150 (e.g., coaxial cable). Additionally, transport network 140 forwards subscriber access requests along path 175 to the SDS 170, and receives video data from Distribution Manager 180 via path 185.
- Commonly assigned U.S. patent application Ser. No. 08/984,710, filed Dec. 3, 1997, which is incorporated herein by reference, describes an information distribution system, known as the OnSet™ system, that uses a video server that may benefit from the present invention. Additionally, the video server of the OnSet system is described in U.S. Pat. Nos. 5,671,377 and 5,581,778, which are both herein incorporated by reference.
- The SDS 170 performs the method of the present invention. A logical representation of the SDS data architecture is shown in FIG. 2. In a physical representation, the outputs of each queue are connected to the data loop (131 of FIG. 1); the logical representation is used here because it is more easily understandable. In the depicted embodiment, the SDS queuing architecture contains three queues for each disk 120 and a queue selector 205 for managing queue selection, i.e., the queue selector determines which queue is to transfer the next access request to a disk drive. Although FIG. 2 depicts three queues for each disk drive, a greater or lesser number of queues may be used to fulfill the invention, i.e., at least two queues should be used: one for the “steady-state” requests and one for all other requests.
- In the three queue embodiment of the SDS 170, a steady-state subscriber queue (SSQ) 221 is used for “steady-state” subscriber disk reads for active streams (i.e., continuous content retrieval for distribution to subscribers currently watching a program). Disk access requests in SSQ 221 are assigned the highest priority. A new subscriber queue (NSQ) 222 is for subscriber requests to begin viewing a program or perform other program related commands, i.e., non-steady state commands such as fast forward or rewind that in essence are a request for a new data stream. Disk access requests in NSQ 222 are assigned medium priority. The other request queue (ORQ) 223 is for all non-subscriber operations, such as loading content, disk maintenance, and file system meta-data syncing. Disk access requests in ORQ 223 are assigned the lowest priority.
- Queues 221 n, 222 n, and 223 n are collectively called the SDS queues 200 n, where n is an integer greater than zero that represents a disk drive 120 n in an array of disk drives 120. For each disk 120 n, the queue selector 205 selects requests from the three SDS queues 221 n, 222 n, and 223 n and forwards the requests to the corresponding disk queue 125 n. Each request has an associated worst-case access time based on the type of request and data transfer size. The worst-case access time can be fixed, or dynamically computed based on prior access time statistics. Additionally, each steady-state subscriber request has a time deadline for when the request must complete in order to guarantee continuous video for that subscriber. Disk requests in the NSQ and ORQ generally do not have time deadlines.
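- The per-disk queue set just described can be captured in a small data structure holding the three SDS queues plus a bound on, and a mirror of, the drive's own command queue. The following is a minimal sketch; the class and field names and the default queue depth are assumptions for illustration, not part of this disclosure.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class DiskQueueSet:
    """SDS-side queues for one disk drive (the queues 200 n of FIG. 2)."""
    ssq: list = field(default_factory=list)        # steady-state reads, kept in deadline order
    nsq: deque = field(default_factory=deque)      # new-subscriber requests, FIFO
    orq: deque = field(default_factory=deque)      # content-loading / maintenance requests, FIFO
    disk_queue_depth: int = 1                      # depth of the drive's internal queue (125 n)
    in_flight: list = field(default_factory=list)  # server-side mirror of what was forwarded to the drive

    def disk_queue_full(self):
        return len(self.in_flight) >= self.disk_queue_depth

    def all_empty(self):
        return not (self.ssq or self.nsq or self.orq)
```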
- Requests in the SSQ 221 n are ordered by time deadline so that the request at the front of the queue has the earliest deadline. Consecutive SSQ requests with the same time deadline are ordered by logical disk block address according to an elevator algorithm. The elevator algorithm is a disk scheduling algorithm well-known in the art in which the disk head travels in one direction over the disk cylinders until there are no more requests that can be serviced by continuing in that direction. At this point, the disk head changes direction and repeats the process, thus traveling back and forth over the disk cylinders as it services requests. Since requests in the NSQ and ORQ do not generally have deadlines, they may be ordered on a first-come, first-served basis, or according to some other desired priority scheme.
- In order to keep the disks 120 busy and maximize disk bandwidth utilization, disk command queuing may be employed to ensure that the disk can begin the seek for the next access immediately after it finishes the data transfer for the current disk access. When a steady-state request needs to access a sequence of multiple disks, the request is initially added to the SSQ 221 1 of the first disk 120 1. After this request is selected for servicing by the first disk 120 1, the request is added to the second disk's SSQ 221 2 as soon as the video server begins sending the data that was recalled from the first disk 120 1 to the subscriber. Steady-state requests are similarly added to the SSQ 221 n of each successive disk 120 n.
- The queue selector 205 employs an SDS Selection Procedure to select requests from the three SDS queues 200 n and forward the requests to an associated disk queue 125 n located within each of the disk drives 120 n. The SDS Selection Procedure uses worst-case access times, request priorities, and time deadlines in determining which request to forward to the disk queue. The general strategy of the SDS Selection Procedure is to select a non-SSQ request only when such a selection will not cause any of the SSQ 221 n requests to miss their time deadlines, even if the non-SSQ request and all requests in the SSQ 221 n were to take their worst-case access times. If such a guarantee cannot be made, then the first request in the SSQ is always selected. As an optional step, once a request is selected, the SDS Selection Procedure checks whether the data for the selected read request is already in cache (if caching is used). If this is the case, the disk access can be discarded and the Selection Procedure is repeated. Otherwise, the selected request is removed from the SDS queue 221 n and forwarded to an associated disk queue 125 n.
- FIG. 3 depicts a flow diagram of the SDS Selection Procedure 300. First, the Selection Procedure checks whether the first entry in the NSQ can be selected while guaranteeing that all SSQ requests will meet their time deadlines in the worst case (step 320), where worst case is defined by the system. Generally, the worst case value is the access time that yields an acceptable per-user error rate.
- Each queue maintains a “sum of the worst case values”; the selector performs a worst case analysis over these sums and selects the queue that will be used (i.e., steps 320 and 330) to send the next command to the disk drive. The following pseudocode represents the operation of such a selector.
- 1) Perform a worst-case analysis: this returns the remaining time (the amount of time left on the SSQ if all commands take their worst-case time to execute; if the SSQ is empty, the remaining time is infinity).
- 2) Select a queue:
      if NSQ is !empty && NSQ.head.worstcase < remaining time
          take request off NSQ
      else if NSQ is empty && ORQ is !empty && ORQ.head.worstcase < remaining time
          take request off ORQ
      else if SSQ is !empty
          take request off SSQ
          if request.deadline − request.worstcase < current time
              request missed deadline, terminate request, try selector again
      else
          no requests pending
- Preference is given to the NSQ over the ORQ; requests are taken off the ORQ only when the NSQ is empty.
- The ORQ.head.worstcase and NSQ.head.worstcase are the respective worstcase access times to fulfill the next request in the ORQ and NSQ. The “remaining time” value is computed as follows:
- remaining time = diskQRemainingTime(SSQ n) − diskQworstcase(PQ n)
- diskQRemainingTime(Q, now) {
      sum = 0
      min = MAX
      for each entry in Q {
          sum += entry→worstcase
          left = entry→deadline − sum − now;
          if (left <= 0 || entry→deadline < now) { /* out of time */
              min = 0;
              break;
          }
          if (min > left)
              min = left;   /* there is now less time remaining */
      }
      return min;
  }
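- One possible Python rendering of the selector pseudocode above is given below. It assumes the slack convention implied by the surrounding text (slack = deadline − accumulated worst case − now), reuses the DiskQueueSet sketch from earlier, and omits the diskQworstcase(PQ n) term and the optional cache check for brevity; it is an illustration, not the implementation of record.

```python
import math

def ssq_remaining_time(ssq, now):
    """diskQRemainingTime: minimum slack over the SSQ if every entry takes its
    worst-case time; infinity for an empty SSQ, zero if a deadline is already lost."""
    slack_min, accumulated = math.inf, 0.0
    for entry in ssq:                                  # entries are assumed to be in deadline order
        accumulated += entry.worstcase
        slack = entry.deadline - accumulated - now
        if slack <= 0 or entry.deadline < now:         # out of time
            return 0.0
        slack_min = min(slack_min, slack)
    return slack_min

def select_request(qs, now):
    """SDS Selection Procedure (FIG. 3): take an NSQ or ORQ request only when its
    worst case fits in the SSQ slack; otherwise serve (or discard) the SSQ head."""
    remaining = ssq_remaining_time(qs.ssq, now)
    if qs.nsq and qs.nsq[0].worstcase < remaining:
        return qs.nsq.popleft()                        # new-subscriber request fits in the slack
    if not qs.nsq and qs.orq and qs.orq[0].worstcase < remaining:
        return qs.orq.popleft()                        # other request fits in the slack
    if qs.ssq:
        request = qs.ssq.pop(0)
        if request.deadline - request.worstcase >= now:
            return request                             # can still finish by its deadline
        return select_request(qs, now)                 # missed deadline: terminate it, try again
    return None                                        # no requests pending
```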
- The worst case time value may be dynamically computed or empirically measured to be a cut off time that defines a period in which accesses have an acceptable error rate. If the first entry fulfills the requirement, then this first entry is selected (step 340); otherwise, the Selection Procedure checks whether the first entry in the ORQ can be selected while guaranteeing that all SSQ requests will meet their time deadlines in the worst case (step 330). If so, then this first entry is selected (step 350); otherwise, the procedure proceeds to step 315, wherein the procedure queries whether the first entry in the SSQ can be executed within its time deadline assuming the worst case access. If the request cannot be executed in time, the request is discarded at step 325 and the procedure returns to step 320.
- If, however, the request can be executed in the allotted time, the first entry of the SSQ is selected at step 360. The selected request is then removed from its queue (step 370). Alternatively, if caching is used, the Selection Procedure checks whether data for the selected request is already in cache (step 380) (the caching step 380 is shown in phantom to represent that it is an optional step). If the request is cached, the selected request is discarded and the Selection Procedure is repeated. Otherwise, the selected request is forwarded to the associated disk queue (step 390).
- In the Scheduling Interval Procedure, steady-state requests are added to the next SSQ, if possible. (Recall that a steady-state request can be added to the next SSQ as soon as the data is output from the video server to the subscriber), and all SSQs are reordered to maintain correct time deadline order. The first entries in each of the SSQs are then sorted based on time deadlines, which determines the order with which the disks are serviced. For each disk, the
Selection Procedure 300 is repeatedly executed as long as the associated disk queue is not full, at least one of the three SDS queues (SSQ, NSQ, ORQ) is not empty, and there is a request in one of the three SDS queues that satisfies the Selection Procedure criteria. For example, if in a three-Disk system when the disk queues are not full the first entry inDisk 1's SSQ has a time deadline of 35, the first entry inDisk 2's SSQ has a time deadline of 28, and the first entry inDisk 3's SSQ has a time deadline of 39, then the disks would be serviced in the following order:Disk 2,Disk 1,Disk 3. Once the disk order has been established, then the SDS Selection Procedure is performed for each disk in that order. - Generally, in a video server application, the extents for the data are very long (e.g., hundreds of kilobytes) such that the disk queues have a depth of one. In other applications using shorter data extents, the disk queues may have various depths, e.g., five requests could be stored and executed in a first-in, first-out (FIFO) manner. The extent size is inversely proportioned to disk queue depth where data delivery latency is the driving force that dictates the use of a large extent size for video server applications. For other applications where the extent size is relatively small, the disk queue depth is dictated by the desire to reduce disk drive idle time.
- FIG. 4 shows a formal specification of the
Scheduling Interval Procedure 400 in flowchart form. First, the Scheduling Interval Procedure adds steady-state requests to the appropriate SSQs, if possible (step 420), and reorders all the SSQs by time deadlines (step 430). The disk that has the earliest deadline for the first entry in its SSQ is then selected (step 450). The Selection Procedure is performed for the selected disk (step 300), and then the Scheduling Interval Procedure checks whether a request satisfying the Selection Procedure criteria was selected (step 460). If not, the disk with the next earliest deadline for the first entry in its SSQ is selected (steps steps - As disclosed in FIG. 4, the Scheduling Interval Procedure fills each of the disk queues one at a time, which is most efficient for small disk queues. In the preferred embodiment, a small disk queue is used, as it facilitates the latency reduction. In particular, as soon as the servicing of a request extends past its worst-case access time, the request is aborted by the SDS, i.e., the SDS “times-out” waiting for the request to be serviced and then moves on the next procedural step. To assist in error handling when using a disk queue with a depth that is greater than one such that the server may determine which request was not fulfilled within a predefined time period, the server maintains a disk mimic queue that mimics the content of the disk queue of each of the disk drives. As such, the server can poll the mimic queue to determine the nature of the errant request and send an “abort” command to the disk drive for that request. The disk drive will then process the next request in the disk queue and the server updates the mimic queue.
- In the case of large disk queues, however, filling the disk queues in a round-robin fashion may be more efficient. A round-robin version of the Scheduling Interval Procedure for large disk queues is shown in FIG. 5. As in the previous embodiment of the Scheduling Interval Procedure, steady-state requests are first added to the appropriate SSQs (step520), and disks are ordered by the deadlines of the first entry in each disk's SSQ (step 530). In this round-robin version, however, the Selection Procedure is executed only once for a disk, and then the next disk is selected. Once all disks have been selected, the round-robin Scheduling Interval Procedure goes through each of the disks once again in the same order, executing the Selection Procedure once per disk. This process is continued until no more requests can be added to any of the disk queues.
- Specifically, a vector D is defined as an ordered list of all the disks, where the order is based on the time deadlines of the first entry in each disk's SSQ (step530). A Boolean variable SELECT is initialized to false, and an integer variable i is initialized to 1 (step 540). The following condition is then tested: if i=n+1 and SELECT=false (step 550). As will be seen shortly, this condition will only be true when all of the disks have been selected and no requests could be added to any of the disk's queues. Next (555), if i=n+1 (i.e., the last disk had been selected in the previous iteration), then i is set to 1 (start again with the first disk). If disk Di's disk queue is full (step 560), or all three of Di's SDS queues are empty (step 570), then the next disk is selected (step 585). The Selection Procedure is performed for Di (step 300), and if a request satisfying the Selection Procedure criteria was found, SELECT is set to true (step 580), and the next disk is selected (step 585). Thus the SELECT variable indicates whether a request was added to one of the disk queues during a pass over the vector of disks.
- The Command Completion Procedure is executed, on a first-in, first-out basis, every time a disk completes a command. Thus, for each completed command, the Command Completion Procedure executes in the order in which the commands are completed, i.e., using the FIFO
command handling step 605. As such, the Command Handling Procedure begins atstep 610, proceeds to step 605 and ends at step 690. - Alternatively, the procedure can be adapted to handle simultaneous command events. In this procedure, it is first determined if multiple disks have completed a command simultaneously at the command completion event. (Most likely only one disk will have completed a command at the command completion event, but the multiple-disk situation is possible.) If more than one disk has completed a command, then the first entries in the SSQs of these disks are sorted based on time deadlines, determining the order in which the disks are serviced. Once the disk order has been established, the SDS Selection Procedure is performed for each disk in order in the same manner as the Scheduling Interval Procedure. That is, for each disk, the Selection Procedure is repeatedly executed as long as the associated disk queue is not full, at least one of the three SDS queues (SSQ, NSQ, ORQ) is not empty, and there is a request in one of the three SDS queues that satisfies the Selection Procedure criteria.
- A formal specification of both forms of the Command Completion Procedure is shown in flowchart form in FIG. 6. Step605 represents the standard FIFO command handling procedure, while the dashed box 615 represents an alternative procedure capable of handling simultaneous command occurrences. In this alternative version, the Command Completion Procedure determines which disks have just completed a command, and the disk that has the earliest deadline for the first entry in its SSQ is then selected (step 650). Just as in the Scheduling Interval Procedure, the Selection Procedure is performed for the selected disk (step 300), and then the Command Completion Procedure checks whether a request satisfying the Selection Procedure criteria was selected (step 660). If not, the disk with the next earliest deadline for the first entry in its SSQ is selected (
steps steps - As disclosed in FIG. 6, the Command Completion Procedure fills each of the disk queues one at a time, i.e., the disk with a complete event is refilled. Note that since it is highly unlikely that more than one disk is serviced on a command completion event, the choice of whether to employ round-robin or sequential filling of the disk queues in the Command Completion Procedure has essentially no impact on performance.
- In both the Scheduling Interval and Command Completion Procedures, the ordering of requests within the disk queues are managed by the video server CPU, and not the disks themselves. (Any reordering operations normally performed by the disk must be disabled.) While reordering by the disks would improve the average seek time, managing the disk queues by the CPU is required to preserve the time deadlines of the user requests.
- A formal specification of the method of the present invention is shown in flowchart form in FIG. 7. Whenever a command completion event occurs (720), the Command Completion Procedure is invoked (600), and whenever a scheduling interval occurs (730), the Scheduling Interval Procedure is invoked (400). As shown in the figure, if both a scheduling interval and a command completion event occur simultaneously, the command completion is given priority and the Command Completion Procedure is executed first. Alternatively, as discussed above, when a disk queue having a depth that is greater than one is used, the execution priority for these procedures is reversed.
- In a preferred embodiment, the method of the present invention is implemented as a multi-threaded process. FIG. 8 shows the
software process architecture 800 for the preferred embodiment. The media control thread 810 receives new-subscriber request messages from the transport network 140 and path 175, and forwards these requests through message queues 815 to the Ts thread 820. The Ts thread 820 is a top-level scheduler responsible for two primary functions: first, it maintains all state information necessary to communicate with the disk interfaces 835 and video server memory 840; second, it performs the Scheduling Interval Procedure using a period of, for example, 100 ms. The Ts loop thread allocates the commands to the SDS queues 875, where each disk drive is associated with a set of queues (e.g., SSQ, NSQ and other queues), generally shown as SDS queues 825, from which commands are passed directly to the disk interfaces 835. Under steady-state operation, a response thread 830 communicates the commands from the SDS queues 825 to the disk drive interfaces 835. Each interface 835 communicates with individual disk drives through a fiber channel loop. Response thread 830 also receives command completion messages from the disk interfaces 835; upon receiving these messages, the response thread performs the Command Completion Procedure. Media control thread 810, Ts loop thread 820, and response thread 830 are all executed by video server CPU 114 of FIG. 1.
- While this invention has been particularly shown and described with references to a preferred embodiment thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
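To make the process architecture 800 of FIG. 8 concrete, the following sketch shows the three loops, one per thread, communicating through message queues. The 100 ms period matches the example given above, but the function names, queue objects, and `admit` callback are illustrative assumptions rather than the patent's code.

```python
import queue
import time

SCHEDULING_PERIOD_S = 0.100   # e.g. a 100 ms scheduling interval

def media_control_loop(incoming, to_scheduler):
    """Media control thread: forward new-subscriber request messages."""
    while True:
        to_scheduler.put(incoming.get())

def ts_loop(to_scheduler, admit, scheduling_interval_procedure):
    """Top-level scheduler thread: admit new requests into the SDS queues,
    then run the Scheduling Interval Procedure once per period."""
    while True:
        deadline = time.monotonic() + SCHEDULING_PERIOD_S
        while True:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                admit(to_scheduler.get(timeout=remaining))
            except queue.Empty:
                break
        scheduling_interval_procedure()

def response_loop(completions, command_completion_procedure):
    """Response thread: run the Command Completion Procedure for every
    completion message reported by a disk interface."""
    while True:
        command_completion_procedure(completions.get())

# Each loop would run on its own thread, for example:
#   threading.Thread(target=ts_loop, args=(msgs, admit, run_interval), daemon=True).start()
```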
Claims (21)
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/268,512 US6378036B2 (en) | 1999-03-12 | 1999-03-12 | Queuing architecture including a plurality of queues and associated method for scheduling disk access requests for video content |
PCT/US2000/006093 WO2000054161A1 (en) | 1999-03-12 | 2000-03-09 | Queuing architecture with multiple queues and method for statistical disk scheduling for video servers |
GB0121692A GB2365181B (en) | 1999-03-12 | 2000-03-09 | Queuing architecture with multiple queues and method for statistical disk scheduling for video servers |
CA002362727A CA2362727C (en) | 1999-03-12 | 2000-03-09 | Queuing architecture with multiple queues and method for statistical disk scheduling for video servers |
AU37323/00A AU3732300A (en) | 1999-03-12 | 2000-03-09 | Queuing architecture with multiple queues and method for statistical disk scheduling for video servers |
US09/801,021 US6691208B2 (en) | 1999-03-12 | 2001-03-07 | Queuing architecture including a plurality of queues and associated method for controlling admission for disk access requests for video content |
US10/663,237 US7165140B2 (en) | 1999-03-12 | 2003-09-16 | Queuing architecture including a plurality of queues and associated method for controlling admission for disk access requests for video content |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/268,512 US6378036B2 (en) | 1999-03-12 | 1999-03-12 | Queuing architecture including a plurality of queues and associated method for scheduling disk access requests for video content |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/801,021 Continuation-In-Part US6691208B2 (en) | 1999-03-12 | 2001-03-07 | Queuing architecture including a plurality of queues and associated method for controlling admission for disk access requests for video content |
Publications (2)
Publication Number | Publication Date |
---|---|
US20010011374A1 (en) | 2001-08-02 |
US6378036B2 (en) | 2002-04-23 |
Family
ID=23023342
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/268,512 Expired - Lifetime US6378036B2 (en) | 1999-03-12 | 1999-03-12 | Queuing architecture including a plurality of queues and associated method for scheduling disk access requests for video content |
Country Status (5)
Country | Link |
---|---|
US (1) | US6378036B2 (en) |
AU (1) | AU3732300A (en) |
CA (1) | CA2362727C (en) |
GB (1) | GB2365181B (en) |
WO (1) | WO2000054161A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030084089A1 (en) * | 2001-10-30 | 2003-05-01 | Yoshitoshi Kurose | Data transfer apparatus |
US7434242B1 (en) * | 2000-08-07 | 2008-10-07 | Sedna Patent Services, Llc | Multiple content supplier video asset scheduling |
US20090006877A1 (en) * | 2007-06-28 | 2009-01-01 | Seagate Technology Llc | Power management in a storage array |
US20100162305A1 (en) * | 2008-12-23 | 2010-06-24 | Verizon Corporate Services Group Inc | System and method for extending recording time for a digital video record (dvr) |
CN104714836A (en) * | 2013-12-12 | 2015-06-17 | 国际商业机器公司 | Method and system for coalescing memory transactions |
Families Citing this family (65)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6182197B1 (en) * | 1998-07-10 | 2001-01-30 | International Business Machines Corporation | Real-time shared disk system for computer clusters |
US6691208B2 (en) * | 1999-03-12 | 2004-02-10 | Diva Systems Corp. | Queuing architecture including a plurality of queues and associated method for controlling admission for disk access requests for video content |
JP3382176B2 (en) * | 1999-03-26 | 2003-03-04 | 株式会社東芝 | Request processing method and request processing device |
US6721789B1 (en) * | 1999-10-06 | 2004-04-13 | Sun Microsystems, Inc. | Scheduling storage accesses for rate-guaranteed and non-rate-guaranteed requests |
US9066113B1 (en) | 1999-10-19 | 2015-06-23 | International Business Machines Corporation | Method for ensuring reliable playout in a DMD system |
JP3904781B2 (en) * | 1999-11-17 | 2007-04-11 | パイオニア株式会社 | Program transmission / reception system and method |
US6748441B1 (en) * | 1999-12-02 | 2004-06-08 | Microsoft Corporation | Data carousel receiving and caching |
US6678855B1 (en) | 1999-12-02 | 2004-01-13 | Microsoft Corporation | Selecting K in a data transmission carousel using (N,K) forward error correction |
US7113998B1 (en) * | 2000-02-03 | 2006-09-26 | International Business Machines Corporation | System and method for grouping recipients of streaming data |
US20010034558A1 (en) * | 2000-02-08 | 2001-10-25 | Seagate Technology Llc | Dynamically adaptive scheduler |
US7284064B1 (en) | 2000-03-21 | 2007-10-16 | Intel Corporation | Method and apparatus to determine broadcast content and scheduling in a broadcast system |
US7167895B1 (en) | 2000-03-22 | 2007-01-23 | Intel Corporation | Signaling method and apparatus to provide content on demand in a broadcast system |
US6937611B1 (en) * | 2000-04-21 | 2005-08-30 | Sun Microsystems, Inc. | Mechanism for efficient scheduling of communication flows |
US7406554B1 (en) * | 2000-07-20 | 2008-07-29 | Silicon Graphics, Inc. | Queue circuit and method for memory arbitration employing same |
US7392291B2 (en) * | 2000-08-11 | 2008-06-24 | Applied Micro Circuits Corporation | Architecture for providing block-level storage access over a computer network |
TW486626B (en) * | 2000-09-14 | 2002-05-11 | Acer Ipull Inc | Real-time information scheduling system and its method |
US6871011B1 (en) * | 2000-09-28 | 2005-03-22 | Matsushita Electric Industrial Co., Ltd. | Providing quality of service for disks I/O sub-system with simultaneous deadlines and priority |
US7124424B2 (en) * | 2000-11-27 | 2006-10-17 | Sedna Patent Services, Llc | Method and apparatus for providing interactive program guide (IPG) and video-on-demand (VOD) user interfaces |
US7260785B2 (en) | 2001-01-29 | 2007-08-21 | International Business Machines Corporation | Method and system for object retransmission without a continuous network connection in a digital media distribution system |
US7689598B2 (en) * | 2001-02-15 | 2010-03-30 | International Business Machines Corporation | Method and system for file system synchronization between a central site and a plurality of remote sites |
US6920447B2 (en) * | 2001-02-15 | 2005-07-19 | Microsoft Corporation | Concurrent data recall in a hierarchical storage environment using plural queues |
US20020144265A1 (en) * | 2001-03-29 | 2002-10-03 | Connelly Jay H. | System and method for merging streaming and stored content information in an electronic program guide |
US20020144269A1 (en) * | 2001-03-30 | 2002-10-03 | Connelly Jay H. | Apparatus and method for a dynamic electronic program guide enabling billing broadcast services per EPG line item |
US20020143591A1 (en) * | 2001-03-30 | 2002-10-03 | Connelly Jay H. | Method and apparatus for a hybrid content on demand broadcast system |
US6940865B2 (en) * | 2001-04-17 | 2005-09-06 | Atheros Communications, Inc. | System and method for interleaving frames with different priorities |
US7185352B2 (en) * | 2001-05-11 | 2007-02-27 | Intel Corporation | Method and apparatus for combining broadcast schedules and content on a digital broadcast-enabled client platform |
US20020194585A1 (en) * | 2001-06-15 | 2002-12-19 | Connelly Jay H. | Methods and apparatus for providing ranking feedback for content in a broadcast system |
US7055165B2 (en) * | 2001-06-15 | 2006-05-30 | Intel Corporation | Method and apparatus for periodically delivering an optimal batch broadcast schedule based on distributed client feedback |
US7328455B2 (en) * | 2001-06-28 | 2008-02-05 | Intel Corporation | Apparatus and method for enabling secure content decryption within a set-top box |
US7363569B2 (en) * | 2001-06-29 | 2008-04-22 | Intel Corporation | Correcting for data losses with feedback and response |
JP3975703B2 (en) * | 2001-08-16 | 2007-09-12 | 日本電気株式会社 | Preferential execution control method, apparatus and program for information processing system |
US20030046683A1 (en) * | 2001-08-28 | 2003-03-06 | Jutzi Curtis E. | Server-side preference prediction based on customer billing information to generate a broadcast schedule |
US7047456B2 (en) * | 2001-08-28 | 2006-05-16 | Intel Corporation | Error correction for regional and dynamic factors in communications |
US7904931B2 (en) * | 2001-09-12 | 2011-03-08 | Cox Communications, Inc. | Efficient software bitstream rate generator for video server |
US7231653B2 (en) | 2001-09-24 | 2007-06-12 | Intel Corporation | Method for delivering transport stream data |
US20030061611A1 (en) * | 2001-09-26 | 2003-03-27 | Ramesh Pendakur | Notifying users of available content and content reception based on user profiles |
US7257649B2 (en) * | 2001-09-28 | 2007-08-14 | Siebel Systems, Inc. | Method and system for transferring information during server synchronization with a computing device |
US20030066090A1 (en) * | 2001-09-28 | 2003-04-03 | Brendan Traw | Method and apparatus to provide a personalized channel |
US8943540B2 (en) | 2001-09-28 | 2015-01-27 | Intel Corporation | Method and apparatus to provide a personalized channel |
US7415539B2 (en) * | 2001-09-28 | 2008-08-19 | Siebel Systems, Inc. | Method and apparatus for detecting insufficient memory for data extraction processes |
AU2002365752A1 (en) * | 2001-11-30 | 2003-06-17 | Prediwave Corp. | Fast memory access to digital data |
US20030135857A1 (en) * | 2002-01-11 | 2003-07-17 | Ramesh Pendakur | Content discovery in a digital broadcast data service |
US20030135605A1 (en) * | 2002-01-11 | 2003-07-17 | Ramesh Pendakur | User rating feedback loop to modify virtual channel content and/or schedules |
US6925539B2 (en) | 2002-02-06 | 2005-08-02 | Seagate Technology Llc | Data transfer performance through resource allocation |
US20030182464A1 (en) * | 2002-02-15 | 2003-09-25 | Hamilton Thomas E. | Management of message queues |
US20050226320A1 (en) * | 2002-04-22 | 2005-10-13 | Koninklijke Philips Electronics N.V. | Circuit, apparatus and method for storing audiovisual data |
US6986019B1 (en) | 2003-04-21 | 2006-01-10 | Maxtor Corporation | Method and apparatus for detection and management of data streams |
US20040225707A1 (en) * | 2003-05-09 | 2004-11-11 | Chong Huai-Ter Victor | Systems and methods for combining a slow data stream and a fast data stream into a single fast data stream |
US7877468B2 (en) * | 2004-01-23 | 2011-01-25 | Concurrent Computer Corporation | Systems and methods for vertically integrated data distribution and access management |
JP4345559B2 (en) * | 2004-04-15 | 2009-10-14 | ソニー株式会社 | Information processing apparatus, information processing method, program, and program recording medium |
US7254685B1 (en) | 2004-06-15 | 2007-08-07 | Emc Corporation | Method for maintaining high performance while preserving relative write I/O ordering for a semi-synchronous remote replication solution |
US7386692B1 (en) * | 2004-08-20 | 2008-06-10 | Sun Microsystems, Inc. | Method and apparatus for quantized deadline I/O scheduling |
US7281086B1 (en) | 2005-06-02 | 2007-10-09 | Emc Corporation | Disk queue management for quality of service |
US7293136B1 (en) | 2005-08-19 | 2007-11-06 | Emc Corporation | Management of two-queue request structure for quality of service in disk storage systems |
US7657671B2 (en) * | 2005-11-04 | 2010-02-02 | Sun Microsystems, Inc. | Adaptive resilvering I/O scheduling |
US20070106849A1 (en) * | 2005-11-04 | 2007-05-10 | Sun Microsystems, Inc. | Method and system for adaptive intelligent prefetch |
US7478179B2 (en) * | 2005-11-04 | 2009-01-13 | Sun Microsystems, Inc. | Input/output priority inheritance wherein first I/O request is executed based on higher priority |
US20080201736A1 (en) * | 2007-01-12 | 2008-08-21 | Ictv, Inc. | Using Triggers with Video for Interactive Content Identification |
US9826197B2 (en) * | 2007-01-12 | 2017-11-21 | Activevideo Networks, Inc. | Providing television broadcasts over a managed network and interactive content over an unmanaged network to a client device |
US7779175B2 (en) * | 2007-05-04 | 2010-08-17 | Blackwave, Inc. | System and method for rendezvous in a communications network |
US20080282245A1 (en) * | 2007-05-08 | 2008-11-13 | International Business Machines Corporation | Media Operational Queue Management in Storage Systems |
US8554941B2 (en) * | 2007-08-30 | 2013-10-08 | At&T Intellectual Property I, Lp | Systems and methods for distributing video on demand |
BRPI0914564A2 (en) * | 2008-06-25 | 2015-12-15 | Active Video Networks Inc | provide television broadcasts over a managed network and interactive content over an unmanaged network to a client device |
US20090328115A1 (en) * | 2008-06-27 | 2009-12-31 | At&T Delaware Intellectual Property, Inc. | Systems and Methods for Distributing Digital Content |
EP2438513B1 (en) | 2009-06-03 | 2015-03-18 | Hewlett Packard Development Company, L.P. | Scheduling realtime information storage system access requests |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5220653A (en) * | 1990-10-26 | 1993-06-15 | International Business Machines Corporation | Scheduling input/output operations in multitasking systems |
US5644786A (en) * | 1990-11-08 | 1997-07-01 | At&T Global Information Solutions Company | Method for scheduling the execution of disk I/O operations |
US5802394A (en) * | 1994-06-06 | 1998-09-01 | Starlight Networks, Inc. | Method for accessing one or more streams in a video storage system using multiple queues and maintaining continuity thereof |
US5561456A (en) * | 1994-08-08 | 1996-10-01 | International Business Machines Corporation | Return based scheduling to support video-on-demand applications |
US5721956A (en) * | 1995-05-15 | 1998-02-24 | Lucent Technologies Inc. | Method and apparatus for selective buffering of pages to provide continuous media data to multiple users |
US5787482A (en) * | 1995-07-31 | 1998-07-28 | Hewlett-Packard Company | Deadline driven disk scheduler method and apparatus with thresholded most urgent request queue scan window |
US6061504A (en) * | 1995-10-27 | 2000-05-09 | Emc Corporation | Video file server using an integrated cached disk array and stream server computers |
US5687390A (en) * | 1995-11-14 | 1997-11-11 | Eccs, Inc. | Hierarchical queues within a storage array (RAID) controller |
US5870629A (en) * | 1996-03-21 | 1999-02-09 | Bay Networks, Inc. | System for servicing plurality of queues responsive to queue service policy on a service sequence ordered to provide uniform and minimal queue interservice times |
US5928327A (en) * | 1996-08-08 | 1999-07-27 | Wang; Pong-Sheng | System and process for delivering digital data on demand |
US5926649A (en) * | 1996-10-23 | 1999-07-20 | Industrial Technology Research Institute | Media server for storage and retrieval of voluminous multimedia data |
US6023720A (en) * | 1998-02-09 | 2000-02-08 | Matsushita Electric Industrial Co., Ltd. | Simultaneous processing of read and write requests using optimized storage partitions for read and write request deadlines |
- 1999
- 1999-03-12 US US09/268,512 patent/US6378036B2/en not_active Expired - Lifetime
- 2000
- 2000-03-09 WO PCT/US2000/006093 patent/WO2000054161A1/en active Application Filing
- 2000-03-09 GB GB0121692A patent/GB2365181B/en not_active Expired - Fee Related
- 2000-03-09 AU AU37323/00A patent/AU3732300A/en not_active Abandoned
- 2000-03-09 CA CA002362727A patent/CA2362727C/en not_active Expired - Fee Related
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7434242B1 (en) * | 2000-08-07 | 2008-10-07 | Sedna Patent Services, Llc | Multiple content supplier video asset scheduling |
US20030084089A1 (en) * | 2001-10-30 | 2003-05-01 | Yoshitoshi Kurose | Data transfer apparatus |
US20090006877A1 (en) * | 2007-06-28 | 2009-01-01 | Seagate Technology Llc | Power management in a storage array |
US7814351B2 (en) * | 2007-06-28 | 2010-10-12 | Seagate Technology Llc | Power management in a storage array |
US20100162305A1 (en) * | 2008-12-23 | 2010-06-24 | Verizon Corporate Services Group Inc | System and method for extending recording time for a digital video record (dvr) |
US8726314B2 (en) * | 2008-12-23 | 2014-05-13 | Verizon Patent And Licensing Inc. | System and method for extending recording time for a digital video record (DVR) |
CN104714836A (en) * | 2013-12-12 | 2015-06-17 | 国际商业机器公司 | Method and system for coalescing memory transactions |
US9146774B2 (en) * | 2013-12-12 | 2015-09-29 | International Business Machines Corporation | Coalescing memory transactions |
Also Published As
Publication number | Publication date |
---|---|
AU3732300A (en) | 2000-09-28 |
GB2365181B (en) | 2003-11-26 |
CA2362727C (en) | 2006-08-15 |
GB0121692D0 (en) | 2001-10-31 |
GB2365181A (en) | 2002-02-13 |
WO2000054161A9 (en) | 2001-11-08 |
US6378036B2 (en) | 2002-04-23 |
CA2362727A1 (en) | 2000-09-14 |
GB2365181A8 (en) | 2002-03-13 |
WO2000054161A1 (en) | 2000-09-14 |
Similar Documents
Publication | Title
---|---
US6378036B2 (en) | Queuing architecture including a plurality of queues and associated method for scheduling disk access requests for video content
US6691208B2 (en) | Queuing architecture including a plurality of queues and associated method for controlling admission for disk access requests for video content
CA2142381C (en) | Scheduling policies with grouping for providing vcr control functions in a video server
US5768681A (en) | Channel conservation for anticipated load surge in video servers
US6453316B1 (en) | Scheduling unit for scheduling service requests to cyclically provide services
US5854887A (en) | System for the management of multiple time-critical data streams
JP2742390B2 (en) | Method and system for supporting pause resume in a video system
Chen et al. | A scalable video-on-demand service for the provision of VCR-like functions
US5815662A (en) | Predictive memory caching for media-on-demand systems
US7882260B2 (en) | Method of data management for efficiently storing and retrieving data to respond to user access requests
US5926649A (en) | Media server for storage and retrieval of voluminous multimedia data
US5572645A (en) | Buffer management policy for an on-demand video server
EP1390840B1 (en) | System and method for scheduling the distribution of assets from multiple asset providers to multiple receivers
US7073021B2 (en) | Semantically-aware, dynamic, window-based disc scheduling method and apparatus for better fulfilling application requirements
JP3575862B2 (en) | Stream scheduling method and apparatus
KR100229683B1 (en) | Video server system and its control method with active class decision function
WO2004095254A1 (en) | Semantically-aware, dynamic, window-based disc scheduling method and apparatus for better fulfilling application requirements
Najafian Razavi | Performance issues in an interactive video-on-demand server
Legal Events
Code | Title | Description
---|---|---
AS | Assignment | Owner name: DIVA SYSTEMS CORPORATION, CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHIN, DANNY;LERMAN, JESSE S.;TAYLOR, CLEMENT G.;AND OTHERS;REEL/FRAME:009829/0495. Effective date: 19990305
STCF | Information on status: patent grant | Free format text: PATENTED CASE
CC | Certificate of correction |
AS | Assignment | Owner name: TVGATEWAY, LLC, PENNSYLVANIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DIVA SYSTEMS CORPORATION BY HOWARD B. GROBSTEIN, CHAPTER 11 TRUSTEE;REEL/FRAME:014567/0512. Effective date: 20040421
AS | Assignment | Owner name: SEDNA PATENT SERVICES, LLC, PENNSYLVANIA. Free format text: CHANGE OF NAME;ASSIGNOR:TVGATEWAY, LLC;REEL/FRAME:015177/0980. Effective date: 20040824
FEPP | Fee payment procedure | Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
REFU | Refund | Free format text: REFUND - SURCHARGE, PETITION TO ACCEPT PYMT AFTER EXP, UNINTENTIONAL (ORIGINAL EVENT CODE: R2551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
FPAY | Fee payment | Year of fee payment: 4
AS | Assignment | Owner name: COX COMMUNICATIONS, INC., GEORGIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SEDNA PATENT SERVICES, LLC;REEL/FRAME:021817/0486. Effective date: 20080913
FPAY | Fee payment | Year of fee payment: 8
FPAY | Fee payment | Year of fee payment: 12