US20080208861A1 - Data Sorting Method And System - Google Patents
- Publication number: US20080208861A1
- Application number: US12/115,607
- Authority: US (United States)
- Prior art keywords: data, floating, buffer, data blocks, buffers
- Legal status: Abandoned (assumed status; not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/22—Arrangements for sorting or merging computer data on continuous record carriers, e.g. tape, drum, disc
- G06F7/36—Combined merging and sorting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2207/00—Indexing scheme relating to methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F2207/22—Indexing scheme relating to groups G06F7/22 - G06F7/36
- G06F2207/224—External sorting
Definitions
- The technology described in this patent document relates generally to data sorting. More specifically, this document describes systems and methods for sorting data using a multi-block floating buffer.
- Sorting is the process of ordering items based on specified criteria. In data processing, sorting indicates the sequencing of records using a key value determined for each record. If a group of records is too large to be sorted within available random access memory, then a two-phase process, referred to as external sorting, may be used. In the first phase of the external sorting process, a portion of the records is typically sorted and the partial result, referred to as a sorted run, is stored to temporary external storage. Sorted runs are generated until the entire group of records is exhausted. Then, in the second phase of the external sorting process, the sorted runs are merged, typically to a final output record group.
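The two-phase process described above can be sketched in miniature. The following simplified in-memory simulation (function and parameter names are illustrative, not from the patent) stands in for sorting portions to temporary storage and then merging the sorted runs:

```python
import heapq

def external_sort(records, run_capacity):
    """Simplified external sort: phase one produces sorted runs,
    phase two merges them into a single sorted output sequence."""
    # Phase one: sort fixed-size portions of the input into runs
    # (a real implementation would write each run to external storage).
    runs = []
    for start in range(0, len(records), run_capacity):
        runs.append(sorted(records[start:start + run_capacity]))
    # Phase two: merge the sorted runs into the final output.
    return list(heapq.merge(*runs))

output = external_sort([35, 3, 80, 12, 7, 55, 1, 42], run_capacity=3)
```

In a real external sort the runs would not fit in memory at once; this sketch only mirrors the control flow of the two phases.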
- If all of the sorted runs cannot be merged in one pass, then the second phase may be executed multiple times in a process commonly referred to as a multi-pass or multi-phase merge.
- In a multi-phase merge, existing runs are merged to create a new, smaller replacement set of runs.
- The records within a sorted run are typically written to external storage in sequential blocks of data, such that each block includes an integral number of records.
- The performance of typical merging and forecasting algorithms can be greatly affected by the size of the record block. For example, when sorting randomly ordered records, poor merge performance may result from the selection of a small block size because disk latency, which may be orders of magnitude larger than any other delay (e.g., memory access latency) encountered during merging, can dominate processing time.
- One method of increasing merge performance is to establish a large block size so that access costs (i.e., time spent locating the blocks) are insignificant compared to transfer costs (i.e., time spent reading the blocks).
- However, a large block size may also decrease performance by resulting in a multi-pass merge and, consequently, increased processing time and increased temporary storage requirements.
- Another method for increasing performance during the merge phase is to eliminate time spent stalled on input (i.e., waiting for a record block to be retrieved from external storage) by reading blocks from storage in advance of their need while the merge is in progress.
- One algorithm used to achieve such parallelism is referred to as forecasting with floating buffers.
- This forecasting algorithm, designed to execute concurrently with the merge algorithm, reads blocks in the same sequence that the merge algorithm requires them.
- A typical forecasting algorithm determines which run to read next by comparing the largest key value of the last block read from each run being merged. The run associated with the smallest such key is the run from which the next block is read.
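This selection rule can be expressed compactly. The sketch below (the function and run names are hypothetical) picks the run whose last-read block has the smallest maximum key:

```python
def next_run_to_read(last_block_max_keys):
    """Given the largest key value of the last block read from each
    run being merged, return the run whose next block is needed first."""
    # The run whose buffered data will be exhausted soonest is the one
    # whose last buffered key is smallest.
    return min(last_block_max_keys, key=last_block_max_keys.get)

# Hypothetical state: run identifiers mapped to the largest buffered key.
run = next_run_to_read({"R-1": 188, "R-2": 240, "R-3": 150})
```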
- The buffers into which blocks are read may be used to read data from any run, and are thus said to float among the runs.
- A method for use with one or more processing devices in order to merge sorted runs of data may include the steps of: defining a plurality of floating buffers; calculating a number of data blocks for each floating buffer; configuring the floating buffers to store the number of data blocks; and using the floating buffers to perform an external data sorting operation.
- A data sorting system may include one or more programs, and may be used with a plurality of floating buffers and a data storage device for storing a plurality of sorted runs of data blocks, each data block including a plurality of data records.
- The one or more programs in a data sorting system may be operable to calculate a number of data blocks for each floating buffer and configure the plurality of floating buffers to store the number of data blocks.
- The one or more programs in a data sorting system may be further operable to sort the plurality of data records into a single sorted output run using the plurality of floating buffers.
- The number of data blocks in the plurality of floating buffers may be pre-determined or may be calculated by a different system using the techniques described herein.
- FIG. 1 is a block diagram of an example data sorting system for performing an external sorting operation.
- FIG. 2 is a block diagram of an example data sorting system showing sorted runs.
- FIGS. 3A-3C illustrate an example operation of a data sorting system.
- FIG. 4 is a flow diagram illustrating an example operational scenario for merging sorted runs.
- FIG. 5 is a flow diagram illustrating an example operational scenario involving forecasting.
- FIG. 6 is a block diagram illustrating an example data structure for a multi-block floating buffer.
- FIG. 1 is a block diagram of an example data sorting system for performing an external sorting operation.
- The system includes a computing machine 101, a temporary data storage device 102 and a permanent data storage device 103.
- The computing machine 101 may include a multiprocessor computer architecture having two independent central processing units (CPUs) 105, 106 that share access to a common random access memory (RAM) 104.
- The temporary and permanent data storage devices 102, 103 are direct access storage devices accessible by the computing machine 101.
- The temporary data storage device 102 is used to store a plurality of sorted runs of data blocks, for example in a single computer file.
- The sorted runs are merged (e.g., into a single sorted computer file) and the sorted output is stored in the permanent data storage device 103.
- The external sorting process may be performed by a forecasting program and a merge program, which may, for example, be independently executed as two separate threads of execution that are respectively executed by the multiple CPUs 105, 106 of the computing machine 101. It should be understood, however, that the forecasting and merge programs may be implemented in different ways, such as by using a single processing device.
- The sorted runs and sorted output may be stored in other types of memory devices and/or memory configurations, which may be either external or internal to the computing machine 101.
- FIG. 2 depicts multiple sorted runs 220 for processing by a data sorting system.
- The data sorting system includes a computing machine 201, a first data storage device 202 for storing sorted runs 220 of data blocks 222, and a second data storage device 203 for storing sorted output.
- The sorted runs 220 and the sorted output may be stored in a single data storage device.
- The sorted runs 220 can be stored in a single file within the first data storage device 202.
- The sorted runs 220 include a plurality of data blocks 222, and the data blocks 222 store a plurality of data records 224.
- The data records 224 may include both record data and key values.
- The key value associated with a particular data record 224 may be extracted from or included with the data record 224.
- The data records 224 within each sorted run 220 are sorted according to their keys (e.g., in ascending or descending order). An example of a sorted run showing sorted record keys is described below with reference to FIG. 3A.
- The computing machine 201 includes a forecasting program 226, a merge program 228, and a plurality of floating buffers 230.
- The forecasting program 226 and merge program 228 are stored in a memory location in the computing machine 201 and are executed by one or more processing devices in the computing machine 201 to perform an external sorting operation.
- The forecasting program 226 and merge program 228 may be two separate threads of execution that are respectively executed by multiple CPUs 105, 106 of the computing machine 101, as described above with reference to FIG. 1.
- The floating buffers 230 may, for example, be memory locations defined in one or more memory devices on the computing machine 201.
- Each floating buffer 231 is configurable to store a plurality of data blocks 232 .
- The number of data blocks 232 in a floating buffer 231 may, for example, be configured by the merge program 228, as described below. Other approaches can also be used; for example, the number of data blocks 232 in the floating buffers 231 may be determined by a program other than the merge program 228 or by user input.
- The data blocks 222 from the sorted runs 220 are read into the floating buffers 230 and are merged by the merge program 228 to generate the sorted output 203.
- Depleted buffers 230 are passed to the forecasting program 226 to be repopulated from the sorted runs 220.
- The forecasting program 226 begins its operation by attempting to acquire one or more empty buffers 230 but, failing that, may either allocate a new floating buffer 231, if memory limits permit, or wait until a buffer 231 becomes available from the merge program 228.
- The forecasting program 226 then reads data blocks 222 from the sorted runs 220 into one or more floating buffers 230 in advance of their need by the concurrently executing merge program 228.
- For the forecasting program 226 to read ahead, one or more floating buffers 230 should be available beyond the number required to perform the merge operation. If no extra buffers 230 are available, then the forecasting program 226 waits for the merge program 228 to deplete a buffer 231 and pass the empty buffer 231 to the forecasting program 226 so that it may initiate a read request. With one extra floating buffer 231, the forecasting program 226 can determine which data block 222 will be needed next by the merge program 228 and initiate a read request while the runs are being merged. Upon completing the read request, the forecasting program 226 passes the newly populated floating buffer to the merge program 228 as needed, and waits for another empty buffer 231.
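The buffer exchange described above resembles a producer/consumer handoff. The following sketch is a loose analogy using Python threads and queues, not the patent's implementation; it shows how one extra buffer in circulation lets a read-ahead overlap the merge:

```python
import queue
import threading

# Hypothetical buffer exchange: the forecaster fills empty buffers ahead
# of time and passes them on; the merger returns each buffer once depleted.
empty_buffers = queue.Queue()
filled_buffers = queue.Queue()

def forecaster(blocks_to_read):
    for block in blocks_to_read:
        buf = empty_buffers.get()        # wait for a depleted buffer
        buf["data"] = block              # simulate the read request
        filled_buffers.put(buf)          # hand the filled buffer over

def merger(n_blocks, consumed):
    for _ in range(n_blocks):
        buf = filled_buffers.get()       # wait for a filled buffer
        consumed.append(buf["data"])     # simulate merging the block
        empty_buffers.put(buf)           # return the empty buffer

empty_buffers.put({"data": None})        # one buffer in active use
empty_buffers.put({"data": None})        # one extra buffer enables read-ahead
consumed = []
t = threading.Thread(target=forecaster, args=([1, 2, 3, 4],))
t.start()
merger(4, consumed)
t.join()
```

With only one buffer in circulation the two loops would strictly alternate; the extra buffer is what allows the next read to be in flight while merging proceeds.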
- Two or more floating buffers may near depletion around the same time during a merge.
- In that case, a single extra floating buffer 231 may not suffice for full parallelism because the merge program 228 will attempt to retrieve two or more filled replacement buffers in quick succession from the forecasting program 226.
- The first attempt by the merge program 228 to retrieve a replacement buffer will succeed if the forecasting program 226 has already filled the one extra buffer 231.
- Subsequent attempts to retrieve a replacement buffer may stall because, without additional buffers, replacements cannot be produced in advance. The likelihood of this occurrence depends upon the initial order of the input records to be sorted.
- To achieve full parallelism, the total number of floating buffers 230 allocated to the forecasting and merge programs 226, 228 should be twice the number of runs 220 being merged.
- Additional floating buffers 230 beyond this number may be useful in maintaining a steady flow of data to the merge program 228 and in smoothing the burst of input demands on the operating system and hardware in a multi-user or multi-tasking system.
- The number of blocks 232 in each floating buffer 231 will affect the access costs of the sorting system.
- The following mathematical process may be used to determine the optimum number of blocks 232 for each floating buffer 231.
- This mathematical process may also be used to determine the amount of memory needed for the floating buffers 230 in order to achieve a desired reduction in disk access costs and full parallelism between the forecasting program 226 and the merge program 228.
- Once a block is located within a run, the retrieval of additional, logically successive blocks from the run can reduce access costs by avoiding the disk latency often associated with locating the additional blocks. Access costs can be reduced, therefore, by effectively increasing the number of blocks read per access instead of increasing the block size.
- The performance benefits afforded by forecasting can be attained by allowing the forecasting algorithm to support input into multi-block floating buffers and ensuring that access costs are acceptably reduced by adjusting the number of blocks per floating buffer.
- This scheme may also provide flexibility in the merge phase by allowing some control over the number of passes required to complete the external sort because the number of blocks per floating buffer can be adjusted to allow for a greater or fewer number of buffers and, consequently, directly affect the number of runs that may be merged in a single pass.
- The time required to read data from storage 202 dictates the speed at which sorted runs 220 may be merged, assuming no disk latency costs. This time is equivalent to the time required to read the entire file sequentially and can be calculated as follows:

  t = F / ρ

  where t is the I/O time (seconds), F is the file size (bytes), and ρ is the disk transfer rate (bytes/second).
- Disk latency (L) is the sum of the positional latency (s) and the rotational latency (r). Assuming a non-zero disk latency (L) and that access costs are encountered for every block, then the time required to read the file may be expressed as follows:

  t = F / ρ + (F / B) · L

  where B is the block size (bytes), so that F / B is the number of blocks in the file.
- Because each block read incurs the latency L in addition to the transfer time B / ρ, the percentage of time spent locating blocks is 100 · L / (L + B / ρ). This percentage is not dependent upon the number of blocks in the file and, therefore, is independent of the file size.
- The total time spent locating blocks may be reduced by reading not only the next block that is required, but also one or more subsequent blocks from the same run 220. Because the additional blocks are likely to be adjacent to the first located block, there should be no location costs incurred to read the additional blocks. If the number of blocks read is increased by a factor of N for each disk access, where N is the number of blocks in each buffer 231, the disk latency percentage may be reduced to:

  100 · L / (L + N · B / ρ)
- The above equation may be used to establish a cap (Q) on the percentage of time spent locating blocks, as follows:

  100 · L / (L + N · B / ρ) ≤ Q
- Solving for N yields the smallest number of blocks per floating buffer (N) that will produce the desired reduction of latency:

  N ≥ ⌈ρ · L · (100 / Q - 1) / B⌉
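The formula above can be evaluated directly; the helper names in the sketch below are illustrative:

```python
import math

def blocks_per_buffer(rho, latency, block_size, q_percent):
    """Smallest number of blocks per floating buffer (N) that keeps the
    percentage of time spent locating blocks at or below the cap Q."""
    return math.ceil(rho * latency * (100.0 / q_percent - 1) / block_size)

def latency_percentage(rho, latency, block_size, n):
    """Percentage of read time spent locating blocks when n blocks are
    transferred per disk access."""
    return 100.0 * latency / (latency + n * block_size / rho)
```

For example, with ρ = 1,000,000 bytes/s, L = 10 ms, B = 10,000 bytes and Q = 10%, `blocks_per_buffer` returns the smallest N for which `latency_percentage` drops to the cap.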
- The number of available floating buffers 230 needed to achieve full parallelism between the forecasting program 226 and the merge program 228 is equal to twice the number of runs to be merged (n_runs).
- The memory requirements (M) to satisfy latency reduction and to provide full parallelism may therefore be expressed as:

  M = 2 · n_runs · N · B
- The latency average in a hard drive's specifications refers to average rotational latency, which is the amount of time required for the disk to complete half a rotation.
- The Hitachi Deskstar 180GXP rotates at 7,200 rpm, yielding an average rotational latency (r) of 4.17 ms.
- The average seek time is also referred to as the average positional latency (s).
- Using the smaller value of the specified sustained data rate range, 29 MB/s, yields a transfer rate (ρ) of 30,408,704 bytes per second (with 1 MB = 2^20 bytes).
- The derived parameters of the Hitachi Deskstar 180GXP hard drive are as follows:
- The sorting dimensions for this example are as follows:
  - File size (F): 134,217,728 bytes
  - Block size (B): 65,536 bytes
  - Number of sorted runs (n_runs): 4
  - Percentage latency cap (Q): 10%
- The number of blocks 232 per floating buffer 231 that will satisfy a 10% latency cap (Q) may then be determined as follows:

  N ≥ ⌈ρ · L · (100 / Q - 1) / B⌉

  where L = s + r is taken from the derived drive parameters.
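As a worked illustration of this arithmetic: the drive's average seek time is not reproduced in the text above, so a value of 8.5 ms is assumed here purely for the sake of the calculation; with a different seek time the resulting N would differ.

```python
import math

# Drive parameters: ρ and r come from the text; the average seek
# time s is NOT given above, so 8.5 ms is an assumed stand-in.
rho = 30_408_704        # transfer rate, bytes/second
r = 0.00417             # average rotational latency, seconds
s = 0.0085              # ASSUMED average positional latency, seconds
L = s + r               # total disk latency per access

# Sorting dimensions from the example.
B = 65_536              # block size, bytes
n_runs = 4              # number of sorted runs
Q = 10                  # latency cap, percent

N = math.ceil(rho * L * (100 / Q - 1) / B)   # blocks per floating buffer
memory = 2 * n_runs * N * B                  # bytes for full parallelism
```

Under the assumed seek time this yields N = 53 blocks per buffer and roughly 26.5 MB of buffer memory, illustrating how a large latency relative to per-block transfer time drives N upward.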
- Referring now to FIGS. 3A-3C, a block diagram is provided to illustrate an example operation of a data sorting system.
- FIG. 3A illustrates an example of sorted runs stored in a data storage device.
- FIG. 3B illustrates the operation of a forecasting program and a merge program during a merge.
- FIG. 3C illustrates the operation of a forecasting program to replace floating buffers depleted by a merge program during the merge.
- This example illustrates just one of many ways to implement a merge program and a forecast program.
- In this example, merge operations and forecast operations are respectively performed by a merge thread 302 and a forecast thread 300 in a multi-thread environment.
- In FIG. 3A, a stored file is illustrated that includes five sorted runs 320 (R-1 through R-5).
- The sorted runs 320 include a plurality of data blocks 322 that store the sorted data records.
- The range of key values for the data records 324 stored in each of the data blocks 322 is shown in FIG. 3A.
- For example, the first data block 322 in the first sorted run (R-1) includes sorted data records 324 having key values starting at 35 and ending at 80.
- In FIG. 3B, a merge program loads blocks 322 from each run 320 into an initial set of floating buffers 330.
- The floating buffers 330 in this example each include three blocks 332.
- The first three blocks 322 from each run 320 are loaded into an initial set of buffers (labeled a-e).
- The blocks of data 332 within a buffer are referred to herein as a block tuple. For convenience, only the last key value in each block tuple is shown in the diagram.
- The forecasting thread 300 acquires floating buffers (f and g) that are not in use by the merge thread 302.
- For full parallelism between the merge thread 302 and forecasting thread 300, there would be an equal number of floating buffers available to the forecasting thread 300. For the purposes of this example, however, only two additional floating buffers (f and g) are available.
- The forecasting thread 300 determines the order in which the buffers (a-e) being used by the merge thread 302 will be depleted, and assigns the additional buffers (f and g) based on this determination.
- The order of buffer depletion by the merge program may be determined by comparing the last key in each block tuple.
- In this example, the forecasting thread 300 determines that floating buffer "c" will be depleted first and assigns the additional buffer "f" to the third sorted run (R-3). The forecasting thread 300 then initiates a read request to fill the additional buffer "f" from the next three data blocks 322 of sorted run R-3, having a key value range of 232 through 362. Similarly, the additional buffer "g" is next assigned to and filled from the first sorted run (R-1), which has the second lowest value (188) in its last key position.
- When the merge thread 302 depletes a buffer, the depleted buffer is passed to the forecasting thread 300 and is replaced with a populated buffer 330, as illustrated in FIG. 3C.
- In FIG. 3C, the forecasting thread 300 has replaced depleted floating buffer "c" with populated buffer "f."
- The forecasting thread 300 has then reassigned the depleted buffer "c" to sorted run R-4, which has the next lowest last key value (199), and repopulated buffer "c" with the next three data blocks 322 from that run 320.
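The depletion-order comparison used in this example can be mimicked as follows. The last-key values 188 (buffer a, run R-1) and 199 (buffer d, run R-4) come from the example above; the remaining values are hypothetical stand-ins chosen to be consistent with the narrative:

```python
def depletion_order(last_keys):
    """Order in which the merge thread will deplete its buffers,
    determined by comparing the last key in each block tuple."""
    return sorted(last_keys, key=last_keys.get)

# Last key of each buffer's block tuple; values for buffers b, c and e
# are hypothetical.
order = depletion_order({"a": 188, "b": 305, "c": 150, "d": 199, "e": 260})
```

The resulting order starts with buffer "c", then "a" (run R-1, key 188), then "d" (run R-4, key 199), matching the assignment sequence described for buffers f, g and the reassigned c.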
- FIG. 4 is a flow diagram illustrating an example method of merging sorted runs, which may be performed by a merge program.
- the method begins at step 400 .
- At step 401, the method determines the number of data blocks in the floating buffers. This step may, for example, be performed using the calculations described above with reference to FIG. 2.
- The initial set of buffers (one buffer for each run being merged) is allocated and filled from the stored sorted runs at step 402.
- A forecasting method is initialized at step 403. An example forecasting method is described below with reference to FIG. 5.
- The method then proceeds into its primary loop of execution.
- At step 404, the method determines which record should next be merged from the buffers to the sorted output. When the next record to be merged is identified, the record is copied from the buffer to the sorted output file at step 405.
- At step 406, the method determines if there are more records to be merged from the same buffer. If the buffer includes more records to be merged, then the method advances to the next record in the buffer at step 408 and proceeds to step 412. Otherwise, if there are no more records in the buffer, then the method proceeds to step 407.
- At step 407, the method determines if there are more blocks to be merged from the same sorted run. If not, then the run is removed from the merge process (step 410) and the method proceeds to step 412. If there are more blocks to be merged from the run, however, then the depleted buffer is released to the forecasting process at step 409, a filled buffer for the same sorted run is obtained from the forecasting process at step 411, and the method proceeds to step 412.
- At step 412, the method determines if the records in all of the sorted runs have been merged. If not, then the method returns to step 404 to repeat the primary loop of execution. Otherwise, if there are no additional records to be merged, then the method waits for the forecasting method to terminate at step 413, and the method ends at step 414.
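The primary loop of such a merge method can be approximated with a priority queue over small in-memory runs; buffer refilling and the forecasting handshake (steps 407-411) are elided in this sketch:

```python
import heapq

def merge_runs(runs):
    """Simplified merge loop: repeatedly pick the run whose current
    record has the smallest key, emit it, and advance that run."""
    output = []
    # Seed a priority queue with (current key, run index, position).
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    while heap:
        key, i, pos = heapq.heappop(heap)
        output.append(key)                  # copy the record to the output
        if pos + 1 < len(runs[i]):          # more records in this run?
            heapq.heappush(heap, (runs[i][pos + 1], i, pos + 1))
        # else: this run is exhausted and drops out of the merge
    return output

merged = merge_runs([[35, 80, 188], [7, 91], [150, 232]])
```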
- FIG. 5 is a flow diagram illustrating an example forecasting method.
- the method begins at step 500 .
- At step 501, the method determines if there are any further blocks to read in the sorted runs. If not, then the method ends at step 511. Otherwise, if there are additional blocks to be read, then the method determines at step 502 whether there are any empty floating buffers available. If there are no floating buffers available, then the method proceeds to step 503. Otherwise, if an empty floating buffer is available, then the method proceeds to step 507.
- At step 503, the method determines if the total amount of memory allocated for the floating buffers is below a selected limit. If the allocated memory is below the selected limit, then the method attempts to allocate an additional floating buffer at step 504. If successful in allocating the additional floating buffer (step 505), then the method proceeds to step 507. However, if either the allocation limit has been reached (step 503) or the method is not successful in allocating an additional floating buffer (step 505), then the method waits for a depleted buffer to be passed from the merging method at step 506, and proceeds to step 507 when a buffer is available.
- At step 507, the method forecasts which data blocks from the stored sorted runs will next be required by the merging method and reads those blocks into the free buffer.
- This forecasting step may be performed by inspecting the last record of each sorted run that is currently buffered. The run associated with the record having the smallest key value for its last buffered record is the sorted run from which a new range of data blocks should be read next.
- After the read, the method determines at step 508 if there are any unread blocks remaining in the same sorted run. If unread blocks remain in the sorted run, then the method proceeds to step 510.
- At step 510, the newly filled buffer is transferred to the merging method, and the method returns to step 501.
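The forecasting choice at step 507 can be simulated for small in-memory runs. The sketch below (illustrative names; one block tuple buffered per run, buffer management elided) reports the order in which the runs would be read:

```python
def forecast_reads(runs, blocks_per_buffer):
    """Sequence of runs from which block tuples would be read: always
    refill from the run whose last buffered record has the smallest key."""
    # Each run is a list of blocks; each block is a list of keys.
    # Assume one block tuple per run is buffered initially.
    next_block = {r: blocks_per_buffer for r in runs}
    reads = []
    while any(next_block[r] < len(runs[r]) for r in runs):
        # Among runs with unread blocks, pick the smallest last buffered key.
        target = min(
            (r for r in runs if next_block[r] < len(runs[r])),
            key=lambda r: runs[r][next_block[r] - 1][-1],
        )
        reads.append(target)
        next_block[target] += blocks_per_buffer
    return reads

runs = {"R-1": [[35, 80], [90, 188], [200, 210]], "R-2": [[7, 91], [95, 240]]}
reads = forecast_reads(runs, blocks_per_buffer=1)
```

Here R-1's buffered keys run out first (last key 80 versus 91), so R-1 is read first, then R-2 (91 versus 188), then R-1 again.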
- FIG. 6 is a block diagram illustrating an example data structure for a multi-block floating buffer 600 .
- The floating buffer 600 includes a buffer data structure 602 and two separately allocated memory spaces 604, 606: a first memory space 604 to store pairs of record keys 603 and associated record pointers 605 (referred to herein as the "key space" or "key memory location"); and a second memory space 606 to store the record blocks (referred to herein as the "record space" or "record memory location"). Also illustrated are a plurality of run descriptor data structures 608.
- After a sorted run is created during the first stage of an external sorting process, the attributes of the sorted run are recorded in a run descriptor data structure 608. Run descriptor data structures 608 are chained together (via the "next" field) to form a list, referred to herein as a "run list." The run list is carried into the second (merge) phase of the external sorting process and is used by the merging and forecasting programs to track which data blocks have been retrieved from storage.
- The record space 606 is sized to hold one or more record blocks 610, and the key space 604 is sized to hold a corresponding number of keys 603.
- Both the record and key spaces 604, 606 are monolithic allocations (i.e., the memory within a space is contiguous).
- After a buffer is loaded, values are set in the buffer data structure 602 for a current key pointer 614, a start block identifier 616, a total blocks identifier 618, a total records identifier 620, a disk run pointer 622 and a more blocks identifier 624.
- The current key pointer 614 indicates the smallest (top) key 603 in the buffer.
- The start block identifier 616 indicates the first block loaded into the buffer.
- The total blocks identifier 618 indicates the number of blocks loaded into the buffer.
- The total records identifier 620 indicates the number of records loaded into the buffer.
- The disk run pointer 622 identifies the run descriptor data structure 608 for the sorted run from which the blocks were read.
- The more blocks identifier 624 indicates whether there are more blocks remaining to be read from the sorted run.
- The run descriptor data structure 608 is also updated after a buffer is loaded to set values for a high key pointer 626, a total records identifier 628, a total blocks identifier 630 and a next block identifier 632.
- The high key pointer 626 indicates the largest (bottom) key within the key space 604 of the floating buffer 600.
- The total records identifier 628 indicates the number of records remaining in the sorted run.
- The total blocks identifier 630 indicates the number of blocks remaining in the sorted run.
- The next block identifier 632 indicates the next block in the sorted run to be loaded into the floating buffer 600.
- A merge program may maintain a priority queue of floating buffers 600, containing one floating buffer for each sorted run being merged.
- The current key pointer 614 in the buffer data structure 602 is used to order buffers within the queue by comparing the current key value for all of the buffers. Ties may be resolved using the run ordinal 633 within the corresponding run descriptor data structure 608.
- The priority queue may provide immediate access to the floating buffer containing the smallest key value 603 (indicated by the current key pointer value 614 for that buffer). The record from which the smallest key value originated is then emitted, the current key pointer 614 is updated to point to the next key 603 in the key space 604, and the priority queue is adjusted to locate the buffer with the next smallest key value.
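Using Python's heapq module as a stand-in for such a priority queue, the (current key, run ordinal) ordering with its tie-break might look like this; the entries and labels are illustrative:

```python
import heapq

# Each entry orders a buffer by its current key, with the run ordinal as
# a tie-breaker (fields are illustrative, after FIG. 6).
buffers = [
    (35, 1, "buffer for run R-1"),
    (7, 2, "buffer for run R-2"),
    (7, 3, "buffer for run R-3"),   # same key: ordinal 3 loses to ordinal 2
]
heapq.heapify(buffers)
first = heapq.heappop(buffers)      # buffer with the smallest (key, ordinal)
```

Because tuples compare element by element, equal keys fall back to the run ordinal, which keeps the pop order deterministic.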
- A forecasting program may maintain a priority queue of run descriptors 608, containing one run descriptor for each run being merged.
- The high key pointer 626 is used to order descriptors within the queue by comparing the highest buffered key value for every participating run. Ties may be resolved using the run ordinal, ensuring that the forecasting read sequence is the same as the merge consumption sequence.
- The priority queue may provide immediate access to the run descriptor 608 pointing to the smallest high key value 626.
- The sorted run associated with this run descriptor is the sorted run from which the next record block or blocks should be read. After a buffer is obtained and filled with the record blocks from the sorted run, the run descriptor data structure 608 is updated and the priority queue is adjusted to locate the run descriptor that points to the next smallest high key value 626.
- The systems and methods described herein may be implemented on various types of computer architectures, such as, for example, on a single general purpose computer or workstation, on a networked system, in a client-server configuration, or in an application service provider configuration.
- The systems and methods may include data signals conveyed via networks (e.g., local area network, wide area network, internet, etc.), fiber optic media, carrier waves, wireless networks, etc. for communication with one or more data processing devices.
- The data signals can carry any or all of the data disclosed herein that is provided to or from a device.
- The methods and systems described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by the device processing subsystem.
- The software program instructions may include source code, object code, machine code, or any other stored data that is operable to cause a processing system to perform the methods described herein.
- Other implementations may also be used, however, such as firmware or even appropriately designed hardware configured to carry out the methods and systems described herein.
- The systems' and methods' data may be stored and implemented in one or more different types of computer-implemented ways, such as different types of storage devices and programming constructs (e.g., data stores, RAM, ROM, Flash memory, flat files, databases, programming data structures, programming variables, IF-THEN (or similar type) statement constructs, etc.).
- Data structures describe formats for use in organizing and storing data in databases, programs, memory, or other computer-readable media for use by a computer program.
- The systems and methods may be provided on many different types of computer-readable media, including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer's hard drive, etc.) that contain instructions for use in execution by a processor to perform the methods' operations and implement the systems described herein.
- A module or processor includes, but is not limited to, a unit of code that performs a software operation, and can be implemented, for example, as a subroutine unit of code, as a software function unit of code, as an object (as in an object-oriented paradigm), as an applet, in a computer script language, or as another type of computer code.
- The software components and/or functionality may be located on a single computer or distributed across multiple computers depending upon the situation at hand.
Abstract
In accordance with the teachings described herein, systems and methods are provided for data sorting. A method for use with one or more processing devices in order to merge sorted runs of data may include the steps of: defining a plurality of floating buffers; calculating a number of data blocks for each floating buffer; configuring the floating buffers to store the number of data blocks; and using the floating buffers to perform an external data sorting operation. A data sorting system may include one or more programs, and may be used with a plurality of floating buffers and a data storage device for storing a plurality of sorted runs of data blocks, each data block including a plurality of data records. The one or more programs in a data sorting system may be operable to calculate a number of data blocks for each floating buffer and configure the plurality of floating buffers to store the number of data blocks. In addition, the one or more programs in a data sorting system may be further operable to sort the plurality of data records into a single sorted output using the plurality of floating buffers.
Description
- This is a continuation of U.S. patent application Ser. No. 10/983,745, filed on Nov. 8, 2004, the entirety of which is incorporated herein by reference.
- The technology described in this patent document relates generally to data sorting. More specifically, this document describes systems and methods for sorting data using a multi-block floating buffer.
- Sorting is the process of ordering items based on specified criteria. In data processing, sorting indicates the sequencing of records using a key value determined for each record. If a group of records is too large to be sorted within available random access memory, then a two-phase process, referred to as external sorting, may be used. In the first phase of the external sorting process, a portion of the records is typically sorted and the partial result, referred to as a sorted run, is stored to temporary external storage. Sorted runs are generated until the entire group of records is exhausted. Then, in the second phase of the external sorting process, the sorted runs are merged, typically to a final output record group. If all of the sorted runs cannot be merged in one pass, then the second phase may be executed multiple times in a process commonly referred to as a multi-pass or multi-phase merge. In a multi-phase merge, existing runs are merged to create a new, smaller replacement set of runs.
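The two-phase process described above can be sketched in miniature (an illustrative sketch only: real external sorting writes each sorted run to temporary storage rather than keeping it in memory, and the function and parameter names here are my own):

```python
import heapq

def external_sort(records, memory_limit=4):
    """Two-phase external sort sketch: memory_limit caps how many
    records are sorted in RAM at once (phase 1); heapq.merge then
    merges the sorted runs into a single output (phase 2)."""
    # Phase 1: sort memory-sized portions of the input into "runs".
    runs = []
    for i in range(0, len(records), memory_limit):
        run = sorted(records[i:i + memory_limit])
        runs.append(run)  # a real implementation writes the run to disk
    # Phase 2: merge the sorted runs into the final sorted output.
    return list(heapq.merge(*runs))

print(external_sort([9, 3, 7, 1, 8, 2, 6, 4], memory_limit=3))
# [1, 2, 3, 4, 6, 7, 8, 9]
```

A multi-pass merge would replace phase 2 with repeated merges of subsets of the runs until a single run remains.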
- The records within a sorted run are typically written to external storage in sequential blocks of data, such that each block includes an integral number of records. The performance of typical merging and forecasting algorithms can be greatly affected by the size of the record block. For example, when sorting randomly ordered records, poor merge performance may result from the selection of a small block size because disk latency, which may be orders of magnitude larger than any other delay (e.g., memory access latency) encountered during merging, can dominate processing time. One method of increasing merge performance is to establish a large block size so that access costs (i.e., time spent locating the blocks) are insignificant compared to transfer costs (i.e., time spent reading the blocks). However, a large block size may also decrease performance by resulting in a multi-pass merge and, consequently, increased processing time and increased temporary storage requirements.
- Another method for increasing performance during the merge phase is to eliminate time spent stalled on input (i.e., waiting for a record block to be retrieved from external storage) by reading blocks from storage in advance of their need while the merge is in progress. One algorithm used to achieve such parallelism is referred to as forecasting with floating buffers. This forecasting algorithm, designed to execute concurrently with the merge algorithm, reads blocks in the same sequence that the merge algorithm requires them. A typical forecasting algorithm determines which run to read next by comparing the largest key value of the last block read from each run being merged. The run associated with the smallest such key is the run from which the next block is read. The buffers, into which blocks are read, may be used to read data from any run, and are thus said to float among the runs.
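The run-selection rule used by such a forecasting algorithm can be sketched as follows (run names and key values here are illustrative, not taken from the patent's figures):

```python
def next_run_to_read(last_keys):
    """Forecasting rule sketch: last_keys maps each run still being
    merged to the largest key value of the last block read from that
    run. The run with the smallest such key is the run whose next
    block the merge algorithm will need first."""
    return min(last_keys, key=last_keys.get)

# Hypothetical largest-key values of the last block read from each run:
print(next_run_to_read({"run1": 188, "run2": 108, "run3": 199}))  # run2
```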
- In accordance with the teachings described herein, systems and methods are provided for data sorting. A method for use with one or more processing devices in order to merge sorted runs of data may include the steps of: defining a plurality of floating buffers; calculating a number of data blocks for each floating buffer; configuring the floating buffers to store the number of data blocks; and using the floating buffers to perform an external data sorting operation. A data sorting system may include one or more programs, and may be used with a plurality of floating buffers and a data storage device for storing a plurality of sorted runs of data blocks, each data block including a plurality of data records. The one or more programs in a data sorting system may be operable to calculate a number of data blocks for each floating buffer and configure the plurality of floating buffers to store the number of data blocks. In addition, the one or more programs in a data sorting system may be further operable to sort the plurality of data records into a single sorted output run using the plurality of floating buffers. In other examples, the number of data blocks in the plurality of floating buffers may be pre-determined or may be calculated by a different system using the techniques described herein.
- FIG. 1 is a block diagram of an example data sorting system for performing an external sorting operation.
- FIG. 2 is a block diagram of an example data sorting system showing sorted runs.
- FIGS. 3A-3C illustrate an example operation of a data sorting system.
- FIG. 4 is a flow diagram illustrating an example operational scenario for merging sorted runs.
- FIG. 5 is a flow diagram illustrating an example operational scenario involving forecasting.
- FIG. 6 is a block diagram illustrating an example data structure for a multi-block floating buffer.
-
FIG. 1 is a block diagram of an example data sorting system for performing an external sorting operation. The system includes a computing machine 101, a temporary data storage device 102 and a permanent data storage device 103. The computing machine 101 may include a multiprocessor computer architecture having two independent central processing units (CPUs) 105, 106 that share access to a common random access memory (RAM) 104. However, it should be understood that different computing devices may be used, such as computing devices that have a single processing device.
- The temporary and permanent data storage devices 102, 103 are coupled to the computing machine 101. The temporary data storage device 102 is used to store a plurality of sorted runs of data blocks, such as, but not limited to, in a single computer file. The sorted runs are merged (e.g., into a single sorted computer file) and the sorted output is stored in the permanent data storage device 103. The external sorting process may be performed by a forecasting program and a merge program, which may, for example, be independently executed as two separate threads of execution that are respectively executed by the multiple CPUs 105, 106 of the computing machine 101. It should be understood, however, that the forecasting and merge programs may be implemented in different ways, such as using a single processing device. In addition, the sorted runs and sorted output may be stored in other types of memory devices and/or memory configurations, which may be either external or internal to the computing machine 101. -
FIG. 2 depicts multiple sorted runs 220 for processing by a data sorting system. The data sorting system includes a computing machine 201, a first data storage device 202 for storing sorted runs 220 of data blocks 222, and a second data storage device 203 for storing sorted output. However, it should be understood that other configurations can be used; for example, the sorted runs 220 and the sorted output may be stored in a single data storage device.
- The sorted runs 220 can be stored in a single file within the first data storage device 202. The sorted runs 220 include a plurality of data blocks 222, and the data blocks 222 store a plurality of data records 224. The data records 224 may include both record data and key values. The key value associated with a particular data record 224 may be extracted from or included with the data record 224. The data records 224 within each sorted run 220 are sorted according to their keys (e.g., in ascending or descending order). An example of a sorted run showing sorted record keys is described below with reference to FIG. 3A. - The
computing machine 201 includes a forecasting program 226, a merge program 228, and a plurality of floating buffers 230. The forecasting program 226 and merge program 228 are stored in a memory location in the computing machine 201 and are executed by one or more processing devices in the computing machine 201 to perform an external sorting operation. For example, the forecasting program 226 and merge program 228 may be two separate threads of execution that are respectively executed by the multiple CPUs 105, 106 of the computing machine 101, as described above with reference to FIG. 1.
- The floating buffers 230 may, for example, be memory locations defined in one or more memory devices on the computing machine 201. Each floating buffer 231 is configurable to store a plurality of data blocks 232. The number of data blocks 232 in a floating buffer 231 may, for example, be configured by the merge program 228, as described below. However, other approaches can be used; for example, the number of data blocks 232 in the floating buffers 231 may be determined by a program other than the merge program 228 or by user input. - In operation, the data blocks 222 from the sorted runs 220 are read into the floating
buffers 230 and are merged by the merge program 228 to generate the sorted output 203. As the merge program 228 progresses, depleted buffers 230 are passed to the forecasting program 226 to be repopulated from the sorted runs 220. The forecasting program 226 begins its operation by attempting to acquire one or more empty buffers 230 but, failing that, may either allocate a new floating buffer 231, if memory limits permit, or wait until a buffer 231 becomes available from the merge program 228. The forecasting program 226 then reads data blocks 222 from the sorted runs 220 into one or more floating buffers 230 in advance of their need by the concurrently executing merge program 228.
- In order for the forecasting program 226 to operate concurrently (if desired) with the merge program 228, one or more floating buffers 230 should be available beyond the number required to perform the merge operation. If no extra buffers 230 are available, then the forecasting program 226 waits for the merge program 228 to deplete a buffer 231 and pass the empty buffer 231 to the forecasting program 226 so that it may initiate a read request. With one extra floating buffer 231, the forecasting program 226 can determine which data block 222 will be needed next by the merge program 228 and initiate a read request while the runs are being merged. Upon completing the read request, the forecasting program 226 passes the newly populated floating buffer to the merge program 228 as needed, and waits for another empty buffer 231. - In some instances, two or more floating buffers may near depletion around the same time during a merge. In this case, a single extra floating
buffer 231 may not suffice for full parallelism because the merge program 228 will attempt to retrieve two or more filled replacement buffers in quick succession from the forecasting program 226. The first attempt by the merge program 228 to retrieve a replacement buffer will succeed if the forecasting program 226 has already filled the one extra buffer 231. However, subsequent attempts to retrieve a replacement buffer may stall because, without additional buffers, replacements cannot be produced in advance. The likelihood of this occurrence depends upon the initial order of the input records to be sorted. For records that are already ordered, there is usually no chance that two or more buffers will near depletion together because the merge program 228 will draw records sequentially from each buffer 231, read blocks sequentially from each run 220, and tap the runs 220 in sequence. However, for records that are randomly ordered, it may occur that all of the buffers will near depletion together because the merge program 228 will draw records alternately and with roughly equal probability from among the buffers 230. This latter situation results in a near immediate need for replacement buffers by the merge program 228 for every run 220 being merged. Thus, to facilitate full parallelism between the forecasting program 226 and the merge program 228, the total number of floating buffers 230 allocated to the forecasting and merge programs 226, 228 may be set to twice the number of runs 220 being merged. Though not necessary to achieve full parallelism, additional floating buffers 230 beyond this number may be useful in maintaining a steady flow of data to the merge program 228 and smoothing the burst of input demands on the operating system and hardware in a multi-user or multi-tasking system. - In addition, as noted above, the number of
blocks 232 available in each floating buffer 231 will affect the access costs of the sorting system. The following mathematical process may be used to determine the optimum number of blocks 232 for each floating buffer 231. This mathematical process may also be used to determine the amount of memory needed for the floating buffers 230 in order to achieve a desired reduction in disk access costs and full parallelism between the forecasting program 226 and the merge program 228. - As shown by the example mathematical process below, when retrieving a block from a sorted run, the retrieval of additional, logically successive blocks from the run can reduce access costs by avoiding the disk latency often associated with locating the additional blocks. Access costs can be reduced, therefore, by effectively increasing the number of blocks read per access instead of increasing the block size. For arbitrary block sizes, the performance benefits afforded by forecasting can be attained by allowing the forecasting algorithm to support input into multi-block floating buffers and ensuring that access costs are acceptably reduced by adjusting the number of blocks per floating buffer. This scheme may also provide flexibility in the merge phase by allowing some control over the number of passes required to complete the external sort, because the number of blocks per floating buffer can be adjusted to allow for a greater or fewer number of buffers and, consequently, directly affect the number of runs that may be merged in a single pass.
- The time required to read data from storage 202 dictates the speed at which sorted runs 220 may be merged, assuming no disk latency costs. This time is equivalent to the time required to read the entire file sequentially and can be calculated as follows:
- t = F / ρ
- where t is the I/O time, F is the file size (bytes), and ρ is the disk transfer rate (bytes/second).
- If we assume that the file size (F) is some integer multiple of the block size (B), such that F = nblocks × B, where B is the block 222 size (bytes) and nblocks is the number of blocks in the file, then the equation becomes:
- t = (nblocks × B) / ρ
- Disk latency (L) is the sum of the positional latency (s) and the rotational latency (r). Assuming a non-zero disk latency (L) and that access costs are encountered for every block, the time required to read the file may be expressed as follows:
- t = nblocks × (L + B / ρ)
- Thus, the percentage of time spent locating blocks may be expressed as follows:
- P = (nblocks × L) / (nblocks × (L + B / ρ))
- which simplifies to:
- P = L / (L + B / ρ)
- The percentage of time spent locating blocks is not dependent upon the number of blocks in the file and, therefore, is independent of the file size. The total time spent locating blocks may be reduced by reading not only the next block that is required, but also one or more subsequent blocks from the same run 220. Because the additional blocks are likely to be adjacent to the first located block, there should be no location costs incurred to read the additional blocks. If the number of blocks read is increased by a factor of N for each disk access, where N is the number of blocks in each buffer 231, the disk latency percentage may be reduced to:
- P = L / (L + (N × B) / ρ)
- The above equation may be used to establish a cap (Q) on the amount of time spent locating blocks, as follows:
- L / (L + (N × B) / ρ) ≤ Q
- where Q is the desired maximum percentage of total time consumed by disk latency. The number of blocks per floating buffer (N) that will satisfy this relation may then be established, as follows:
- N ≥ (ρ × L × (1 - Q)) / (Q × B)
- That is, any number of blocks per floating buffer (N) that is greater than or equal to
- (ρ × L × (1 - Q)) / (Q × B)
- will cause the latency, as a percentage of total input time, to be reduced to Q or below. Thus, the smallest number of blocks per floating buffer (N) that will produce the desired reduction of latency may be expressed as:
- N = ⌈(ρ × L × (1 - Q)) / (Q × B)⌉
- As discussed above, the number of available floating buffers 230 needed to achieve full parallelism between the forecasting program 226 and the merge program 228 is equal to twice the number of runs to be merged (nruns). Thus, the memory requirements (M) to satisfy latency reduction and to provide full parallelism may be expressed as:
M = (2 × nruns) × (N × B). - The following example examines the Hitachi Deskstar 180GXP hard disk drive (Model IC35L180AVV207-I), which has the following specifications.
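The two sizing formulas can be checked with a short script (a sketch; the function and variable names are mine, and the numeric values are taken from the Hitachi Deskstar 180GXP example that follows):

```python
import math

def blocks_per_buffer(rho, latency, cap, block_size):
    """Smallest N satisfying latency / (latency + N*B/rho) <= Q,
    i.e. N = ceil(rho * L * (1 - Q) / (Q * B))."""
    return math.ceil(rho * latency * (1.0 - cap) / (cap * block_size))

def buffer_memory(n_runs, blocks, block_size):
    """M = (2 * nruns) * (N * B): twice the number of runs being
    merged, to allow full parallelism between merge and forecasting."""
    return (2 * n_runs) * (blocks * block_size)

# Parameters from the Hitachi Deskstar 180GXP example below:
N = blocks_per_buffer(rho=30_408_704, latency=0.01297, cap=0.10,
                      block_size=65_536)
M = buffer_memory(n_runs=4, blocks=N, block_size=65_536)
print(N, M)  # 55 blocks per buffer, 28835840 bytes (about 27.5 MB)
```

Note that the resulting 27.5 MB of buffer memory fits within the 32 MB of random access memory assumed in the example.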
- Hitachi Deskstar 180GXP Hard Drive Specifications:
- Latency average (ms): 4.17
- Average seek time (ms): 8.80
- Sustained data rate (MB/sec): 29 to 56
- The latency average in the above hard drive specifications refers to average rotational latency, which is the amount of time required for the disk to complete half a rotation. The Hitachi Deskstar 180GXP rotates at 7,200 rpm, yielding an average rotational latency (r) of 4.17 ms. The average seek time is also referred to as average positional latency (s). Using the smaller value of the specified sustained data rate range, 29 MB/s, yields a transfer rate (ρ) of 30,408,704 bytes per second (with 1 MB = 2^20 bytes). The derived parameters of the Hitachi Deskstar 180GXP hard drive are as follows:
- Hitachi Deskstar 180GXP Derived Hardware Parameters:
- Positional latency (s): 0.00880 seconds
- Rotational latency (r): 0.00417 seconds
- Disk latency (L = s + r): 0.01297 seconds
- Disk transfer rate (ρ): 30,408,704 bytes/second
- For the purposes of this example, assume that a 128 MB data set is being sorted using 32 MB of random access memory. Further assume that the first phase of the external sorting process (e.g., the creation of the sorted runs) resulted in 4 sorted runs of 64 KB blocks. Then, if a percentage latency cap, Q, of 10% is desired, the sorting dimensions are as follows.
- Sorting Dimensions:
- File size (bytes) (F): 134,217,728
- Block size (bytes) (B): 65,536
- Number of sorted runs (nruns): 4
- Percentage latency cap (Q): 10%
- Using these parameters, the number of blocks 232 per floating buffer 231 that will satisfy a 10% latency cap (Q) may be determined, as follows:
- N = ⌈(30,408,704 × 0.01297 × (1 - 0.10)) / (0.10 × 65,536)⌉ = ⌈54.16⌉ = 55
- With reference now to
FIGS. 3A-3C, a block diagram is provided to illustrate an example operation of a data sorting system. FIG. 3A illustrates an example of sorted runs stored in a data storage device. FIG. 3B illustrates the operation of a forecasting program and a merge program during a merge. FIG. 3C illustrates the operation of a forecasting program to replace floating buffers depleted by a merge program during the merge. This example illustrates just one of many ways to implement a merge program and a forecast program. In this example, merge operations and forecast operations are respectively performed by a merge thread 302 and a forecast thread 300 in a multi-thread environment. - With reference first to
FIG. 3A, a stored file is illustrated that includes five sorted runs 320 (R-1 through R-5). The sorted runs 320 include a plurality of data blocks 322 that store the sorted data records. The ranges of key values for the data records 324 stored in each of the data blocks 322 are shown in FIG. 3A. For example, the first data block 322 in the first sorted run (R-1) includes sorted data records 324 having key values starting at 35 and ending at 80. - With reference now to
FIG. 3B, a merge program loads blocks 322 from each run 320 into an initial set of floating buffers 330. The floating buffers 330 in this example each include three blocks 332. Thus, the first three blocks 322 from each run 320 are loaded into an initial set of buffers (labeled a-e). The blocks of data 332 within a buffer are referred to herein as a block tuple. For convenience, only the last key value in each block tuple is shown in the diagram. Once the initial set of buffers (a-e) is loaded, the merge thread 302 begins comparing the data records in the buffers (a-e) to generate the sorted output 303. - Concurrent with the operation of the merge program, the
forecasting thread 300 acquires floating buffers (g and f) that are not in use by the merge thread 302. As discussed above, for full parallelism between the merge thread 302 and forecasting thread 300, there can be an equal number of floating buffers available to the forecasting thread 300. However, for the purposes of this example, only two additional floating buffers (g and f) are available. The forecasting thread 300 determines the order in which the buffers (a-e) being used by the merge thread 302 will be depleted, and assigns the additional buffers (g and f) based on this determination. The order of buffer depletion by the merge program may be determined by comparing the last key in each block tuple. - In the illustrated example, the last key in floating buffer "c" has the lowest value (108). Thus, the
forecasting thread 300 determines that floating buffer "c" will be depleted first and assigns the additional buffer "f" to the third sorted run (R-3). The forecasting thread 300 then initiates a read request to fill the additional buffer "f" from the next three data blocks 322 from sorted run "R-3," having a key value range of 232 through 362. Similarly, the additional buffer "g" is next assigned to and filled from the first sorted run (R-1), which has the second lowest value (188) in its last key position. - Once the
merge thread 302 has depleted a buffer 330, the depleted buffer is passed to the forecasting thread 300 and is replaced with a populated buffer 330, as illustrated in FIG. 3C. In FIG. 3C, the forecasting thread 300 has replaced depleted floating buffer "c" with populated buffer "f." The forecasting thread 300 has then reassigned the depleted buffer "c" to sorted run "R-4," which has the next lowest last key value (199), and repopulated buffer "c" with the next three data blocks 322 from the run 320. -
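The depletion-order decision in this example can be sketched as follows. The 108, 188 and 199 last-key values come from the figures described above; their assignment to buffer letters, and the values for buffers "b" and "e", are hypothetical stand-ins:

```python
# Last key value in each initial buffer's block tuple. 108 (R-3),
# 188 (R-1) and 199 (R-4) appear in the example; "b" and "e" are
# hypothetical stand-in values for the remaining runs.
last_keys = {"a": 188, "b": 247, "c": 108, "d": 199, "e": 251}

# The buffer whose block tuple ends with the smallest key will be
# depleted first, so the forecasting thread refills its run first.
depletion_order = sorted(last_keys, key=last_keys.get)
print(depletion_order)  # ['c', 'a', 'd', 'b', 'e']
```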
FIG. 4 is a flow diagram illustrating an example method of merging sorted runs, which may be performed by a merge program. The method begins at step 400. At step 401, the method determines the number of data blocks in the floating buffers. This step 401 may, for example, be performed using the calculations described above with reference to FIG. 2. After the buffers have been sized, the initial set of buffers (one buffer for each run being merged) is allocated and filled from the stored sorted runs at step 402. Once the initial buffers are filled, a forecasting method is initialized at step 403. An example forecasting method is described below with reference to FIG. 5. - Starting at
step 404, the method proceeds into its primary loop of execution. At step 404, the method determines which record should next be merged from the buffers to the sorted output. When the next record to be merged is identified, the record is copied from the buffer to the sorted output file at step 405. Then, at step 406, the method determines if there are more records to be merged from the same buffer. If the buffer includes more records to be merged, then the method advances to the next record in the buffer at step 408 and proceeds to step 412. Else, if there are no more records in the buffer, then the method proceeds to step 407. - At
step 407, the method determines if there are more blocks to be merged from the same sorted run. If not, then the run is removed from the merge process (step 410) and the method proceeds to step 412. If there are more blocks to be merged from the run, however, then the depleted buffer is released to the forecasting process at step 409, a filled buffer for the same sorted run is obtained from the forecasting process at step 411, and the method proceeds to step 412. - At
step 412, the method determines if the records in all of the sorted runs have been merged. If not, then the method returns to step 404 to repeat the primary loop of execution. Otherwise, if there are no additional records to be merged, then the method waits for the forecasting method to terminate at step 413, and the method ends at step 414. -
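The primary loop of this method amounts to a k-way merge. A minimal sketch (buffer refilling from the forecasting process, steps 409-411, is omitted, and each "buffer" is simply a sorted list):

```python
import heapq

def merge_runs(buffers):
    """K-way merge sketch of the merge loop: repeatedly pick the
    smallest current record across buffers and emit it."""
    # Seed the heap with each buffer's first record.
    heap = [(buf[0], i, 0) for i, buf in enumerate(buffers) if buf]
    heapq.heapify(heap)
    output = []
    while heap:
        record, i, j = heapq.heappop(heap)
        output.append(record)                # copy record to the output
        if j + 1 < len(buffers[i]):          # more records in this buffer?
            heapq.heappush(heap, (buffers[i][j + 1], i, j + 1))
    return output

print(merge_runs([[35, 80, 120], [12, 90], [44, 108]]))
# [12, 35, 44, 80, 90, 108, 120]
```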
FIG. 5 is a flow diagram illustrating an example forecasting method. The method begins at step 500. At step 501, the method determines if there are any further blocks to read in the sorted runs. If not, then the method ends at step 511. Otherwise, if there are additional blocks to be read, then the method determines at step 502 whether there are any empty floating buffers available. If there are no floating buffers available, then the method proceeds to step 503. Else, if an empty floating buffer is available, then the method proceeds to step 507. - At
step 503, the method determines if the total amount of memory allocated for the floating buffers is below a selected limit. If the allocated memory is below the selected limit, then the method attempts to allocate an additional floating buffer at step 504. If successful in allocating the additional floating buffer (step 505), then the method proceeds to step 507. However, if either the allocation limit has been reached (step 503) or the method is not successful in allocating an additional floating buffer (step 505), then the method waits for a depleted buffer to be passed from the merging method at step 506, and proceeds to step 507 when a buffer is available. - Once a free floating buffer is acquired, the method forecasts which data blocks from the stored sorted runs will next be required by the merging method and reads those blocks into the free buffer at
step 507. This forecasting step (step 507) may be performed by inspecting the last record of each sorted run that is currently buffered. The run associated with the record having the smallest key value for its last buffered record is the sorted run from which a new range of data blocks should be read next. Once the next block tuple is read into a floating buffer at step 507, the method determines at step 508 if there are any unread blocks remaining in the same sorted run. If unread blocks remain in the sorted run, then the method proceeds to step 510. Else, if there are no additional blocks to be read from the sorted run, then the run is removed from the forecasting process at step 509, and the method proceeds to step 510. At step 510, the newly filled buffer is transferred to the merging method, and the method returns to step 501. - It should be understood that, similar to the other processing flows described herein, the steps and the order of the steps in the flowcharts described herein may be altered, modified, deleted and/or augmented and still achieve the desired outcome.
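The buffer-acquisition logic of steps 502-506 can be sketched as follows (a simplified single-threaded sketch; in the method described above, the wait at step 506 blocks until the merge side releases a depleted buffer):

```python
def acquire_buffer(free_buffers, allocated, limit):
    """Sketch of steps 502-506: take an empty floating buffer if one
    is available; otherwise allocate a new one while under the memory
    limit; otherwise signal the caller to wait for the merge side to
    release a depleted buffer."""
    if free_buffers:                  # step 502: empty buffer available
        return free_buffers.pop(), allocated
    if allocated < limit:             # steps 503-505: allocate another
        return "new-buffer", allocated + 1
    return None, allocated            # step 506: must wait for merge

buf, count = acquire_buffer([], allocated=3, limit=4)
print(buf, count)  # new-buffer 4
```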
- This written description uses examples to disclose the invention, including the best mode, and also to enable a person skilled in the art to make and use the invention. The patentable scope of the invention may include other examples that occur to those skilled in the art. As an example of the wide scope of the systems and methods disclosed herein, various data structures can be used.
FIG. 6 is a block diagram illustrating an example data structure for a multi-block floating buffer 600. The floating buffer 600 includes a buffer data structure 602 and two separately allocated memory spaces 604, 606: a first memory space 604 to store pairs of record keys 603 and associated record pointers 605 (referred to herein as the "key space" or a "key memory location"); and a second memory space 606 to store the record blocks (referred to herein as the "record space" or "record memory location"). Also illustrated are a plurality of run descriptor data structures 608. - After a sorted run is created during the first stage of an external sorting process, the attributes of the sorted run are recorded in a run
descriptor data structure 608. Run descriptor data structures 608 are chained together (via the "next" field) to form a list, referred to herein as a "run list." The run list is carried into the second (merge) phase of the external sorting process and is used by the merging and forecasting programs to track which data blocks have been retrieved from storage. - With reference to the floating
buffer 600, the record space 606 is sized to hold one or more record blocks 610 and the key space 604 is sized to hold a corresponding number of keys 603. In this particular example, both the record and key spaces 604, 606 are populated as blocks are loaded: the keys 603 are stored in the key space 604 and, for each key 603, a record pointer 605 is set to indicate the source record 612. - After a buffer is loaded, values are set in the
buffer data structure 602 for a current key pointer 614, a start block identifier 616, a total block identifier 618, a total records identifier 620, a disk run pointer 622 and a more blocks identifier 624. The current key pointer 614 indicates the smallest (top) key 603 in the buffer. The start block identifier 616 indicates the first block loaded into the buffer. The total block identifier 618 indicates the number of blocks loaded into the buffer. The total records identifier 620 indicates the number of records loaded into the buffer. The disk run pointer 622 identifies the run descriptor data structure 608 for the sorted run from which the blocks were read. The more blocks identifier 624 indicates whether there are more blocks remaining to be read from the sorted run. - In addition, the run
descriptor data structure 608 is also updated after a buffer is loaded to set values for a high key pointer 626, a total records identifier 628, a total blocks identifier 630 and a next block identifier 632. The high key pointer 626 indicates the largest (bottom) key within the key space 604 of the floating buffer 600. The total records identifier 628 indicates the number of records remaining in the sorted run. The total blocks identifier 630 indicates the number of blocks remaining in the sorted run. The next block identifier 632 indicates the next block in the sorted run to be loaded into the floating buffer 600. - A merge program may maintain a priority queue of floating
buffers 600, containing one floating buffer for each sorted run being merged. The current key pointer 614 in the buffer data structure 602 is used to order buffers within the queue by comparing the current key values of all of the buffers. Ties may be resolved using the run ordinal 633 within the corresponding run descriptor data structure 608. The priority queue may provide immediate access to the floating buffer containing the smallest key value 603 (indicated by the current key pointer value 614 for that buffer). The record from which the smallest key value originated is then emitted, the current key pointer 614 is updated to point to the next key 603 in the key space 604, and the priority queue is adjusted to locate the buffer with the next smallest key value. - A forecasting program may maintain a priority queue of run
descriptors 608, containing one run descriptor for each run being merged. The high key pointer 626 is used to order descriptors within the queue by comparing the highest key value for every participating run that has been buffered. Ties may be resolved using the run ordinal, ensuring that the forecasting read sequence is the same as the merge consumption sequence. The priority queue may provide immediate access to the run descriptor pointing to the smallest high key value 626. The sorted run associated with this run descriptor is the sorted run from which the next record block or blocks should be read. After a buffer is obtained and filled with the record blocks from the sorted run, the run descriptor data structure 608 is updated and the priority queue is adjusted to locate the run descriptor which points to the next smallest high key value 626. - It is further noted that the systems and methods described herein may be implemented on various types of computer architectures, such as for example on a single general purpose computer or workstation, or on a networked system, or in a client-server configuration, or in an application service provider configuration.
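For illustration only (the patent text contains no code), the merge priority queue described above can be sketched in Python using the standard-library heap. Buffer contents are simplified to sorted key lists, and the tuple shapes and function name are invented; ties between equal current keys fall back to the run ordinal, as in the description.

```python
import heapq

def merge_buffers(buffers):
    """Emit keys in sorted order from one floating buffer per sorted run.

    Each buffer is modeled as a (sorted key list, run ordinal) pair; a real
    floating buffer would also carry the record blocks themselves. The heap
    is ordered by (current key, run ordinal), so equal keys are emitted in
    run-ordinal order.
    """
    heap = []
    for keys, ordinal in buffers:
        if keys:
            # (current key, run ordinal for tie-breaks, position, key list)
            heapq.heappush(heap, (keys[0], ordinal, 0, keys))
    while heap:
        key, ordinal, pos, keys = heapq.heappop(heap)
        yield key                  # emit the record with the smallest key
        pos += 1                   # advance the current-key pointer
        if pos < len(keys):        # re-enter the queue with the next key
            heapq.heappush(heap, (keys[pos], ordinal, pos, keys))

# Example: three buffered runs merged into one ordered stream.
merged = list(merge_buffers([([1, 4, 7], 0), ([2, 5, 8], 1), ([3, 6, 9], 2)]))
```

In this sketch `merged` is `[1, 2, 3, 4, 5, 6, 7, 8, 9]`; depleted buffers simply drop out of the heap, whereas the patented system refills them from a second set of pre-loaded floating buffers.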
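The forecasting queue can be sketched the same way. This is a simplified, hypothetical model: each run is a list of blocks, each block a sorted list of keys, and the first block of every run is assumed to be buffered already. The run whose buffered high key is smallest will be exhausted first by the merge, so its next block is read first, with ties broken by run ordinal so the read sequence matches the merge consumption sequence.

```python
import heapq

def forecast_reads(runs):
    """Yield run ordinals in the order their next blocks should be read.

    `runs` maps a run ordinal to its list of blocks (sorted key lists).
    The heap holds (high key of the last block read, run ordinal); popping
    the smallest high key selects the run to refill next, after which the
    run's high key is updated to that of the newly read block.
    """
    heap = []
    next_block = {}
    for ordinal, blocks in runs.items():
        # High key of the initially buffered block orders the queue.
        heapq.heappush(heap, (blocks[0][-1], ordinal))
        next_block[ordinal] = 1
    while heap:
        high_key, ordinal = heapq.heappop(heap)
        i = next_block[ordinal]
        if i < len(runs[ordinal]):
            yield ordinal          # read block i of this run next
            heapq.heappush(heap, (runs[ordinal][i][-1], ordinal))
            next_block[ordinal] = i + 1
```

For example, `forecast_reads({0: [[1, 4], [9, 12]], 1: [[2, 3], [5, 6], [7, 8]]})` yields `1, 0, 1`: run 1 (high key 3) is refilled first, then run 0 (high key 4), then run 1 again.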
- It is further noted that the systems and methods may include data signals conveyed via networks (e.g., local area network, wide area network, internet, etc.), fiber optic medium, carrier waves, wireless networks, etc. for communication with one or more data processing devices. The data signals can carry any or all of the data disclosed herein that is provided to or from a device.
- Additionally, the methods and systems described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by the device processing subsystem. The software program instructions may include source code, object code, machine code, or any other stored data that is operable to cause a processing system to perform methods described herein. Other implementations may also be used, however, such as firmware or even appropriately designed hardware configured to carry out the methods and systems described herein.
- The systems' and methods' data (e.g., associations, mappings, etc.) may be stored and implemented in one or more different types of computer-implemented ways, such as different types of storage devices and programming constructs (e.g., data stores, RAM, ROM, Flash memory, flat files, databases, programming data structures, programming variables, IF-THEN (or similar type) statement constructs, etc.). It is noted that data structures describe formats for use in organizing and storing data in databases, programs, memory, or other computer-readable media for use by a computer program.
- The systems and methods may be provided on many different types of computer-readable media including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer's hard drive, etc.) that contain instructions for use in execution by a processor to perform the methods' operations and implement the systems described herein.
- The computer components, software modules, functions, data stores and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations. It is also noted that a module or processor includes but is not limited to a unit of code that performs a software operation, and can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm), or as an applet, or in a computer script language, or as another type of computer code. The software components and/or functionality may be located on a single computer or distributed across multiple computers depending upon the situation at hand.
Claims (14)
1. A computer-implemented method for merging sorted runs of data, comprising:
comparing data records in a first set of floating buffers to generate a sorted output;
storing the sorted output in a computer readable medium;
copying additional data blocks from a plurality of sorted runs into a second set of floating buffers before the additional data blocks are needed by the comparing step; and
replacing depleted floating buffers from the first set with floating buffers from the second set containing the additional data blocks.
2. The computer-implemented method of claim 1, further comprising:
prior to the comparing step, configuring the first and second sets of floating buffers to store a predetermined number of data blocks, wherein the predetermined number of data blocks is calculated to achieve an optimal number of data blocks for each floating buffer.
3. The computer-implemented method of claim 2, wherein the optimal number of data blocks is the number for achieving a selected reduction in disk latency costs.
4. A data sorting system, comprising:
a data store for storing a plurality of sorted runs of data blocks, each data block including a plurality of data records;
a computing device configured with a plurality of floating buffers, the plurality of floating buffers including a first set of floating buffers and a second set of floating buffers; and
one or more programs stored in a memory location on the computing device and configured to compare data records in the first set of floating buffers to generate a sorted output, copy additional data blocks from the plurality of sorted runs into the second set of floating buffers before the additional data blocks are needed in the first set of floating buffers, and replace depleted floating buffers from the first set of floating buffers with floating buffers from the second set of floating buffers containing the additional data blocks.
5. The data sorting system of claim 4, wherein the one or more programs are further configured to define the first and second sets of floating buffers to store a predetermined number of data blocks, wherein the predetermined number of data blocks is calculated to achieve an optimal number of data blocks for each floating buffer.
6. The data sorting system of claim 5, wherein the optimal number of data blocks is the number needed to achieve a desired reduction in disk latency costs.
7. A floating buffer for use with a data sorting system having a computing device and one or more programs stored in a memory location on the computing device, the one or more programs when executed by the computing device being operable to sort data records from a plurality of sorted runs of data blocks into a single sorted output, the floating buffer comprising:
a record memory location configured to store a plurality of data blocks;
wherein the one or more programs calculate a number of data blocks stored in the record memory location and configure the record memory location to store the number of data blocks.
8. The floating buffer of claim 7, further comprising:
a key memory location configured to store record key values for associating the data blocks stored in the record memory location with a location of the data blocks in the sorted runs of data blocks.
9. The floating buffer of claim 8, further comprising:
a buffer data structure configured to identify the data blocks stored in the record memory location.
10. The floating buffer of claim 9, wherein the buffer data structure includes a current key pointer for identifying a smallest record key value stored in the key memory location.
11. The floating buffer of claim 9, wherein the buffer data structure includes a start block identifier for identifying a first data block loaded into the record memory location.
12. The floating buffer of claim 9, wherein the buffer data structure includes a total block identifier for indicating a total number of data blocks stored in the record memory location.
13. The floating buffer of claim 9, wherein the buffer data structure includes a total records identifier for indicating a total number of data records stored in the record memory location.
14. The floating buffer of claim 9, wherein a plurality of run descriptor data structures are stored in a memory location on the computing device, each run descriptor data structure identifying the data blocks included in one of the sorted runs of data blocks, and wherein the buffer data structure includes a disk run pointer for identifying one of the run descriptor data structures associated with the data blocks stored in the record memory location.
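The double-buffered merge recited in claims 1-3 can be illustrated with a minimal Python sketch. All names are invented, and each buffer is simplified to hold a single block (a sorted key list) rather than a calculated number of blocks: a "first set" of active buffers feeds the comparison step while a "second set" of standby buffers holds prefetched blocks, and a depleted active buffer is replaced by its standby counterpart.

```python
import heapq

def double_buffered_merge(runs):
    """Merge sorted runs using one active and one standby buffer per run.

    Each run is a list of sorted blocks. The standby buffer for a run is
    filled before the merge needs it; when the active buffer is depleted,
    the standby buffer is swapped in and a new standby block is fetched.
    """
    active, standby, iters = {}, {}, {}
    heap = []
    for ordinal, run in enumerate(runs):
        it = iter(run)
        iters[ordinal] = it
        active[ordinal] = next(it, [])     # first set: feeds the comparison
        standby[ordinal] = next(it, None)  # second set: prefetched block
        if active[ordinal]:
            heapq.heappush(heap, (active[ordinal][0], ordinal, 0))
    out = []
    while heap:
        key, ordinal, pos = heapq.heappop(heap)
        out.append(key)                    # sorted output
        pos += 1
        if pos == len(active[ordinal]):    # active buffer depleted:
            active[ordinal] = standby[ordinal]          # swap standby in
            standby[ordinal] = next(iters[ordinal], None)  # refill standby
            pos = 0
        if active[ordinal]:
            heapq.heappush(heap, (active[ordinal][pos], ordinal, pos))
    return out
```

For example, `double_buffered_merge([[[1, 5], [9]], [[2, 6], [10]]])` returns `[1, 2, 5, 6, 9, 10]`. In a real system the prefetch would run asynchronously and would be ordered by the forecasting queue described in the specification.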
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/115,607 US20080208861A1 (en) | 2004-11-08 | 2008-05-06 | Data Sorting Method And System |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/983,745 US7454420B2 (en) | 2004-11-08 | 2004-11-08 | Data sorting method and system |
US12/115,607 US20080208861A1 (en) | 2004-11-08 | 2008-05-06 | Data Sorting Method And System |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/983,745 Continuation US7454420B2 (en) | 2004-11-08 | 2004-11-08 | Data sorting method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080208861A1 true US20080208861A1 (en) | 2008-08-28 |
Family
ID=36317609
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/983,745 Active 2026-02-11 US7454420B2 (en) | 2004-11-08 | 2004-11-08 | Data sorting method and system |
US12/115,607 Abandoned US20080208861A1 (en) | 2004-11-08 | 2008-05-06 | Data Sorting Method And System |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/983,745 Active 2026-02-11 US7454420B2 (en) | 2004-11-08 | 2004-11-08 | Data sorting method and system |
Country Status (1)
Country | Link |
---|---|
US (2) | US7454420B2 (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7792825B2 (en) * | 2005-09-08 | 2010-09-07 | International Business Machines Corporation | Fast select for fetch first N rows with order by |
US20090259617A1 (en) * | 2008-04-15 | 2009-10-15 | Richard Charles Cownie | Method And System For Data Management |
US10089379B2 (en) * | 2008-08-18 | 2018-10-02 | International Business Machines Corporation | Method for sorting data |
US20100191717A1 (en) * | 2009-01-28 | 2010-07-29 | Goetz Graefe | Optimization of query processing with top operations |
US9274950B2 (en) * | 2009-08-26 | 2016-03-01 | Hewlett Packard Enterprise Development Lp | Data restructuring in multi-level memory hierarchies |
US8805857B2 (en) * | 2009-10-14 | 2014-08-12 | Oracle International Corporation | Merging of items from different data sources |
US8407404B2 (en) * | 2009-12-31 | 2013-03-26 | International Business Machines Corporation | Record sorting |
US8843502B2 (en) | 2011-06-24 | 2014-09-23 | Microsoft Corporation | Sorting a dataset of incrementally received data |
US9220977B1 (en) | 2011-06-30 | 2015-12-29 | Zynga Inc. | Friend recommendation system |
US9582529B2 (en) * | 2012-06-06 | 2017-02-28 | Spiral Genetics, Inc. | Method and system for sorting data in a cloud-computing environment and other distributed computing environments |
US10169720B2 (en) | 2014-04-17 | 2019-01-01 | Sas Institute Inc. | Systems and methods for machine learning using classifying, clustering, and grouping time series data |
US9892370B2 (en) | 2014-06-12 | 2018-02-13 | Sas Institute Inc. | Systems and methods for resolving over multiple hierarchies |
US9208209B1 (en) | 2014-10-02 | 2015-12-08 | Sas Institute Inc. | Techniques for monitoring transformation techniques using control charts |
US9418339B1 (en) | 2015-01-26 | 2016-08-16 | Sas Institute, Inc. | Systems and methods for time series analysis techniques utilizing count data sets |
US10560313B2 (en) | 2018-06-26 | 2020-02-11 | Sas Institute Inc. | Pipeline system for time-series data forecasting |
US10685283B2 (en) | 2018-06-26 | 2020-06-16 | Sas Institute Inc. | Demand classification based pipeline system for time-series data forecasting |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5613085A (en) * | 1994-12-27 | 1997-03-18 | International Business Machines Corporation | System for parallel striping of multiple ordered data strings onto a multi-unit DASD array for improved read and write parallelism |
US5852826A (en) * | 1996-01-26 | 1998-12-22 | Sequent Computer Systems, Inc. | Parallel merge sort method and apparatus |
US6105024A (en) * | 1998-02-12 | 2000-08-15 | Microsoft Corporation | System for memory management during run formation for external sorting in database system |
US20020032683A1 (en) * | 2000-07-31 | 2002-03-14 | Isao Namba | Method and device for sorting data, and a computer product |
US20020065793A1 (en) * | 1998-08-03 | 2002-05-30 | Hiroshi Arakawa | Sorting system and method executed by plural computers for sorting and distributing data to selected output nodes |
US20020091691A1 (en) * | 2001-01-11 | 2002-07-11 | International Business Machines Corporation | Sorting multiple-typed data |
US6519593B1 (en) * | 1998-12-15 | 2003-02-11 | Yossi Matias | Efficient bundle sorting |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7216124B2 (en) * | 2001-03-23 | 2007-05-08 | International Business Machines Corporation | Method for generic list sorting |
- 2004
  - 2004-11-08 US US10/983,745 patent/US7454420B2/en active Active
- 2008
  - 2008-05-06 US US12/115,607 patent/US20080208861A1/en not_active Abandoned
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9436389B2 (en) | 2007-03-08 | 2016-09-06 | Oracle International Corporation | Management of shared storage I/O resources |
US8521923B2 (en) | 2008-09-19 | 2013-08-27 | Oracle International Corporation | Storage-side storage request management |
US9336275B2 (en) | 2008-09-19 | 2016-05-10 | Oracle International Corporation | Hash join using collaborative parallel filtering in intelligent storage with offloaded bloom filters |
US8874807B2 (en) | 2008-09-19 | 2014-10-28 | Oracle International Corporation | Storage-side storage request management |
US9405694B2 (en) | 2009-09-14 | 2016-08-02 | Oracle International Corporation | Caching data between a database server and a storage system |
US20110099179A1 (en) * | 2009-10-26 | 2011-04-28 | Oracle International Corporation | Performance boost for sort operations |
US8204892B2 (en) * | 2009-10-26 | 2012-06-19 | Oracle International Corporation | Performance boost for sort operations |
WO2011133302A3 (en) * | 2010-04-23 | 2012-01-19 | Microsoft Corporation | Multi-threaded sort of data items in spreadsheet tables |
US20110264993A1 (en) * | 2010-04-23 | 2011-10-27 | Microsoft Corporation | Multi-Threaded Sort of Data Items in Spreadsheet Tables |
WO2011133302A2 (en) * | 2010-04-23 | 2011-10-27 | Microsoft Corporation | Multi-threaded sort of data items in spreadsheet tables |
US8527866B2 (en) | 2010-04-30 | 2013-09-03 | Microsoft Corporation | Multi-threaded sort of data items in spreadsheet tables |
US8316195B2 (en) * | 2010-09-10 | 2012-11-20 | Hitachi, Ltd. | Storage system and data transfer method of storage system |
US8782337B2 (en) | 2010-09-10 | 2014-07-15 | Hitachi, Ltd. | Storage system and data transfer method of storage system |
US20120066458A1 (en) * | 2010-09-10 | 2012-03-15 | Takeru Chiba | Storage system and data transfer method of storage system |
US9304710B2 (en) | 2010-09-10 | 2016-04-05 | Hitachi, Ltd. | Storage system and data transfer method of storage system |
CN102968496A (en) * | 2012-12-04 | 2013-03-13 | 天津神舟通用数据技术有限公司 | Parallel sequencing method based on task derivation and double buffering mechanism |
US10002147B2 (en) | 2013-05-13 | 2018-06-19 | Microsoft Technology Licensing, Llc | Merging of sorted lists using array pair |
US9418089B2 (en) | 2013-05-13 | 2016-08-16 | Microsoft Technology Licensing, Llc | Merging of sorted lists using array pair |
US10552397B2 (en) | 2013-05-13 | 2020-02-04 | Microsoft Technology Licensing, Llc | Merging of sorted lists using array pair |
US20150278299A1 (en) * | 2014-03-31 | 2015-10-01 | Research & Business Foundation Sungkyunkwan University | External merge sort method and device, and distributed processing device for external merge sort |
US10523596B1 (en) * | 2015-02-06 | 2019-12-31 | Xilinx, Inc. | Circuits for and methods of merging streams of data to generate sorted output data |
US10334011B2 (en) | 2016-06-13 | 2019-06-25 | Microsoft Technology Licensing, Llc | Efficient sorting for a stream processing engine |
CN109002467A (en) * | 2018-06-08 | 2018-12-14 | 中国科学院计算技术研究所 | A kind of database sort method and system executed based on vectorization |
US11803509B1 (en) * | 2022-05-23 | 2023-10-31 | Apple Inc. | Parallel merge sorter circuit |
US20230376448A1 (en) * | 2022-05-23 | 2023-11-23 | Apple Inc. | Parallel merge sorter circuit |
Also Published As
Publication number | Publication date |
---|---|
US7454420B2 (en) | 2008-11-18 |
US20060101086A1 (en) | 2006-05-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7454420B2 (en) | Data sorting method and system | |
US10768987B2 (en) | Data storage resource allocation list updating for data storage operations | |
US20200371879A1 (en) | Data storage resource allocation by performing abbreviated resource checks of certain data storage resources to detrmine whether data storage requests would fail | |
US6477535B1 (en) | Method and apparatus for concurrent DBMS table operations | |
CN100487659C (en) | Method and device for optimizing fractional resource allocation | |
US8156304B2 (en) | Dynamic data storage repartitioning | |
JPH05189281A (en) | File assigning system for storage device | |
US11403224B2 (en) | Method and system for managing buffer device in storage system | |
EP3186760B1 (en) | Dynamic load-based merging | |
JP6172649B2 (en) | Information processing apparatus, program, and information processing method | |
JP2007523412A (en) | Memory allocation | |
CN111639044B (en) | Method and device for supporting interrupt priority polling arbitration dispatching | |
US10049034B2 (en) | Information processing apparatus | |
CN100458792C (en) | Method and data processing system for managing a mass storage system | |
WO2013032436A1 (en) | Parallel operation on b+ trees | |
US20200285510A1 (en) | High precision load distribution among processors | |
US5678024A (en) | Method and system for dynamic performance resource management within a computer based system | |
CN109271236A (en) | A kind of method, apparatus of traffic scheduling, computer storage medium and terminal | |
US5918243A (en) | Computer mechanism for reducing DASD arm contention during parallel processing | |
JP2924725B2 (en) | Buffer allocation control system | |
JP2010061604A (en) | Consistency verification system, verification method, and program | |
WO2016032803A1 (en) | Dynamic load-based merging | |
JPH06266619A (en) | Page saving/restoring device | |
CN114168306A (en) | Scheduling method and scheduling device | |
JPH09231053A (en) | Parallel sort device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |