
EP1236087A1 - Self-stabilizing, portable and efficient computer arithmetic using mappings of d scale points - Google Patents

Self-stabilizing, portable and efficient computer arithmetic using mappings of d scale points

Info

Publication number
EP1236087A1
Authority
EP
European Patent Office
Prior art keywords
scale
arithmetic
values
noise
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP00970970A
Other languages
German (de)
French (fr)
Inventor
Philip Druck
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of EP1236087A1 publication Critical patent/EP1236087A1/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/483Computations with numbers represented by a non-linear combination of denominational numbers, e.g. rational numbers, logarithmic number system or floating-point numbers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/499Denomination or exception handling, e.g. rounding or overflow
    • G06F7/49942Significance control
    • G06F7/49947Rounding

Definitions

  • the invention relates to a method of computer calculations that avoids round-off error erosion. It further relates to optimizing noise-processing operations, such as spectral analysis, filtering, and cancellation, in telecommunications, medical, or instrumentation systems. It performs the usual arithmetic (+, −, *, /) and
  • Boolean algebra: this is a self-consistent set of rules specifying the results (or mappings) of logical operations on bits.
  • a bit is the basic unit of computation, having values of 0 or 1 (i.e., true or false).
  • the rules are often implemented, or regarded, as logical
  • Such operations include negation and basic arithmetic (addition, subtraction, multiplication, division).
  • a real number is either rational, expressible as a fraction, or irrational. And a real number is represented in a computer as a sequence of 0/1 bits. Many algorithms are used to perform the
  • a floating point number is conveniently expressed in scientific notation
  • s indicates the number of significant digits.
  • a significant digit is one that has
  • these special packagings are referred to as "data types". Examples of universally used data types include "int", "float", and "double". Thus, the double type is a packaged 64-bit number on many architectures, while a float is typically 32 bits. Most often, these types are implemented in hardware, or even firmware, to greatly improve the efficiency of calculations. That is why data types are also referred to as "native", to distinguish them from other objects that are customized and not implemented in hardware.
  • the Java language packaged an infinite precision arithmetic in its Big Decimal object.
  • Another commercial example is the packaging of a Money object
  • Infinite precision arithmetic also serves as a useful benchmark against which the other restricted-precision calculations can be compared. Because the infinite precision calculations do not have inherent rounding errors, they provide true values.
  • s again indicates the number of significant digits maintained from the actual original data measured.
  • the first s digits are significant.
  • the second set is performed on the infinite precision representation.
  • the second sequence serves as a genuine benchmark against which
  • implementations generally require the physical movement of data between a
  • a third problem is that floating point calculations are performed somewhat differently across multiple computer architectures. As a result, arithmetic
  • one technique is to use the "double" data type for calculations involving
  • ALU units tend to use embedded tables for efficient computations of more complex transcendental basic arithmetic operations.
  • the IEEE 754 [2] standard has contributed enormously to increasing the portability of floating point computations across many disparate computer architectures. But it still cannot guarantee portability, due to flexibility and ambiguity in the interpretation of the standard.
  • the invention employs D Arithmetic which has previously been disclosed
  • D arithmetic has two components. The first is its set of mappings
  • Arithmetic multiplication mappings are created by applying the usual, floating point, multiplication to each pair of points in the cross-product of a D Scale with itself. Then each decimal result is mapped to a point in the same D Scale, using the D Scale mapping algorithm described in [ 2].
  • this algorithm generates a set of pre-assigned, or pre- wired, arithmetic operation mappings (e.g., one for multiplication, another for addition). These mappings are generated once, hence pre-wired, and then used forever onwards. This contrasts with the usual notion of arithmetic "calculations" whereby an unknown quantity is discovered for each pair of input values by applying Boolean Algebra to the bits comprising the numeric input values.
  • the second component is an algorithm to map actual, measured values, into D Scale points.
  • the accuracy and usefulness of the D Arithmetic mappings depends on the precision of the D Scale used.
  • a D Arithmetic that was constructed from a high resolution D Scale can more accurately map sampled data with a high number of significant digits. That is, the result of iterative operations will yield a D Scale value that is within a tighter tolerance of the actual value.
  • the minimum resolution of a D Arithmetic is determined by the reciprocal of the maximum prime number in the D Scale. Most important, however, the average resolution can be orders of magnitude higher because of the irregular pattern of points in a D Scale.
  • a tighter variance of the resolution is achieved by using more prime sub-scales to fill in the D Scale. This amplifies the effect of far fewer sample points to attain the same mean resolution, with a tight variance, as the naïve, high uniform sampling rate.
  • the D Arithmetic does not need to use standard Boolean Algebra, which is at the heart of virtually all mainstream commercial arithmetic implementations.
  • mappings are most readily envisioned as straightforward tables (e.g., multiplication and sine tables). This relies on the availability of ever-increasing and inexpensive computer memory and persistent storage, as well as on ever-expanding and inexpensive communications bandwidth.
  • the most useful property of the D Arithmetic is its numeric stability, even as the number of calculations on data set values increases. This contrasts sharply
  • a second property is its portability across diverse computer platforms. That is, the same calculations are guaranteed to produce identical results regardless of computer platform used to actually perform the calculations.
  • the third property is its relative efficiency. Its efficiency derives from its
  • DFT Discrete Fourier Transform
  • the D Arithmetic provides a way for computers to iteratively compute numbers such as in signal processing in a self-stabilizing, efficient and portable
  • the D Arithmetic can calculate the Discrete Fourier Transform of a signal's potentially huge set of sample points, without losing stability by the usual erosion of significant digits from round off errors.
  • the present invention involves a novel arithmetic method whereby the results of operations on pairs of numbers are pre-mapped or pre-wired. This pre-wiring automatically ensures portability
  • mappings are invariant. And these mappings necessarily are efficient because there is no data movement between CPU components to manipulate bits.
  • mapping is
  • p and p' are primes, n/p and n'/p' are members of the measurement scale, and 0 < n < p, 0 < n' < p'.
  • the methodology then provides a logical or physical lookup table mapping for f(n/p, n'/p').
  • for x and y, one looks up f(n/p, n'/p'), where x ≈ n/p, y ≈ n'/p', and ≈ denotes the D Scale mapping.
  • the set of points in the measurement scale constitutes a closed set with regard to the operation of function f.
  • the functions f may be simple arithmetic operations, such as x+y, x−y, x*y, x/y,
  • This arithmetic is self-stabilizing such that round-off errors tend, with very high probability, to cancel one another and not accumulate as in the usual arithmetic. That is, the difference between a D Arithmetic determined value and
  • mappings are invariant and independent of computing platforms or architectures.
  • ALU arithmetic logic unit
  • the D Arithmetic tm uses a pre-defined set of mappings. Each mapping maps a pair of real numbers.
  • the arithmetic operation might be multiplication, addition, subtraction or division or trigonometric operations like sine and cosine.
  • using unconstrained pre-defined mappings would be theoretically, and therefore practically, infeasible because of the uncountable number of mappings required.
  • the pre-defined arithmetic mappings have the logical structure of a two dimensional table, one table per arithmetic operation.
  • a table's cell then contains
  • the logical tables can be implemented in several ways.
  • the mappings can be in an in-memory table, a file, database tables, or distributed over a network of like objects. Or the mappings can be dynamically calculated as needed.
  • the invention's practical feasibility stems from the ongoing mega-trends in increasing capacity vs. decreasing price for CPU memory, disk storage and communications bandwidth.
  • CPU memory capacity is ever increasing, along with falling prices, to the extent that 250Mbytes of memory is well within the capability of a basic PC today, and 500Mbyte- 1Gbyte is well within the capability of a small server.
  • the D Arithmetic leverages off of the increased main memory available at ever lower prices, to perform calculations without data movement in the
  • resolution might have an average resolution of 6 significant digits.
  • This storage requirement is well within the capability of a moderately sized server.
  • a raw data set is first created from the actual measurements of a process. Each raw data point is then mapped to one of its two nearest points on the D Scale.
  • x is a measured sample point.
  • ni/pk and nj/pl are the nearest D Scale tm point neighbors of x.
  • ni/pk is the i-th point on the unit prime sub-scale pk
  • nj/pl is the j-th point on the unit prime sub-scale pl
  • D ik, jl is the distance between these two D Scale tm points.
  • d xjl is the fraction of the distance D ik, jl between x and its
  • d xjk is the fraction of the distance D ik, jl between x and its neighbor ni/pk
  • the next step is to decide which of the neighboring scale points to map x
  • each mapping is based on the outcome of a pseudo-random 0/1 bit number generator (i.e., a coin toss).
  • the point's left neighbor is assigned for a 0 outcome, and the right neighbor for a 1 outcome, or vice versa, as long as this is applied consistently. It will be shown that the quality of the random number generator does not need to be optimal.
  • mapping type corresponds to a different arithmetic operation: multiplication, addition, division, subtraction.
  • the algorithm is described for multiplication. The other arithmetic operations use the same procedures.
  • ni/pk represent the i-th normalized point on the k-th prime sub-scale of a D
  • na/pm is the a-th point on the m-th prime sub-scale of the same D Scale
  • na/pm is obtained by mapping the actual multiplied value of ni/pk * nj/pl to the nearest of its two neighboring D Scaletm points, na/pm and some nn/pn.
  • the rule, again, is to pick the nearest Druck Scale tm point neighbor.
  • the mapping is: ni/pk * nj/pl → nb/pn
  • D Scale tm yields a point in the D Scale tm.
  • the mapped point is normalized and
  • the D Arithmetic tm is shown to be accurate and correct. The proof is based on comparisons between intermediate and final results of D Arithmetic tm mappings against corresponding calculations using "infinite” precision numbers as well as against corresponding finite precision of built-in "double” or “double precision” types.
  • mapping error distribution is similar to a Bernoulli distribution, which quantifies the probability of a certain number of
  • heads or tails in a sequence of random coin tosses.
  • Corresponding to a head is a random mapping to the left nearest neighbor, and tail to the mapping to the right nearest neighbor.
  • each sequential trial corresponds to an iterative
  • x1 ≈ n1/pk ± (d_x,1k · D_1k,i1j1)
  • x2 ≈ n2/pl ± (d_x,2l · D_2l,i2j2)
  • D_1k,i1j1 is the distance between the two nearest Druck Scale tm neighbors of x1
  • d_x,1k is the fraction of the interval D_1k,i1j1 between x1 and n1/pk
  • D_2l,i2j2 is the distance between the two nearest Druck Scale tm neighbors of x2
  • d_x,2l is the fraction of the interval D_2l,i2j2 between x2 and n2/pl
  • n3/pm ≈ xv ± (d_xv,3m · D_3m,i3j3)
  • D_3m,i3j3 is the distance between the two nearest D Scale tm neighbors of xv
  • d_xv,3m is the fraction of the interval D_3m,i3j3 between x3 and n3/pm. Therefore, substituting this into the above leads to:
  • WorstCaseSum(e3) ≤ 1/pmax + ¼/pmax − ½/pmax² + ¼/pmax²
  • xv must necessarily map into some point on the scale: n3/pm ≈ xv
  • n3/pm ≈ xv ± (d_xv,3m · D_3m,i3j3)
  • D_3m,i3j3 is the distance between the two nearest D Scale tm neighbors of xv
  • d_xv,3m is the fraction of the interval D_3m,i3j3 between x3 and n3/pm
  • the measurement point x is related to its corresponding
  • D ab, ikjk is the distance between the two nearest D Scale tm neighbors ofx
  • d_x,ab is the fraction of the interval D_ab,ikjk between x and na/pb
  • xn+1 ≈ (nn/pm ± en) * (na/pb ± (d_x,ab · D_ab,ikjk))
  • D_nm,injn is the distance between the two nearest D Scale tm neighbors of (nn/pm * na/pb): nc/pp and its more distant neighbor, nin/pjn.
  • d_xv,nm is the fraction of the interval D_nm,injn between xv and nn/pm
  • xn+1 ≈ nc/pp ± (d_xv,nm · D_nm,injn) ± ...
  • xn+1 ≈ nc/pp ± en+1
  • en+1 ≈ (d_xv,nm · D_nm,injn) ± ...
  • WorstCaseSum(en+1) ≤ 1/pmax + ...
  • a random coin is tossed to determine which mapping to use.
  • mapping error stabilization is at work to counteract any tendency for the mapping errors to accumulate during iterative mappings (or "calculations").
  • the third theorem quantifies the variance of the error distribution. This mapping error distribution is
  • each sequential trial corresponds to an iterative, intermediate, mapping along the way to the result.
  • K is set such that Probability(ek) ≈ 0
  • a computer might not have enough memory to allocate an in-memory table of that size.
  • Each number is represented as a
  • b, c, b', c' are elements of a D Arithmetic table.
  • a * a' ≈ (b + c·10^k) * (b' + c'·10^k')
  • mappings into arithmetic values are decoupled
  • the D Arithmetic accommodates this tight coupling between order-of-magnitude and normalized value, by using a
  • mapping sets with each set applicable to a set of normalized values and a specific order-of-magnitude range.
  • one 1-dimensional D Arithmetic table would map normalized values with order-of-magnitude of 10^0, while another table
  • transcendental and trigonometric functions of large measurement numbers can be decomposed into smaller components using the standard operation decompositions for transcendental and trigonometric functions.
  • the validation procedure is to compare the intermediate result after each iterative step against intermediate results using the benchmark of infinite, or at least high, precision numbers. More specifically, a given set of measurements is
  • the D Arithmetic tm mappings can be implemented in several ways, depending on the local and networked systems resources available. Three approaches are presented that are feasible with current technology, in order of
  • each implementation is to map any two entries in a pair of D Scales into another entry in a D Scale tm.
  • Megabytes of memory would have to be allocated. This is well within the
  • D Arithmetic tm table would require a minimum of 1 GigaByte of main memory, clearly only feasible today on very high-end computers.
  • An efficient and easily implemented alternative is the use of memory mapped I/O.
  • the operating system provides the memory as needed by automatically mapping pages from a disk-resident mapping. This is done transparently to the application software. This is a powerful yet commonly available technique, used extensively on Microsoft
  • the network is structured logically as a tree. At each branch is the direction indicator for the subsets of digits [ to be continued].
  • noisy data requires that all of its points be sampled to ensure a faithful
  • a commercially viable and attractive approach to noise processing is provided by the combination of the D Arithmetic tm, D Scale tm's irregular
  • Noise processing typically includes the determination of its spectral power distribution, noise reduction and cancellation, noise filtering and signal extraction from noise.
  • the D Arithmetic can be used in industry standard noise processing algorithm implementations. It can provide the following benefits that are generally limited in current mainstream industry practice:
  • Uniform sampling of noisy data can produce biases or artifacts not present in the actual data.
  • a second bias occurs because it is generally not practical, cost effective or
  • Arithmetic based error bias must be either bounded or managed to prevent additional bias in noise calculations.
  • each set is to create more artifacts due to the abrupt discontinuity at the partition
  • the noise spectrum consists of its amplitude and, where meaningful, its phase, both as a function of frequency.
  • the amplitude squared is the noise power, which is useful for filtering and signal detection.
  • Adaptive equalization is a process by which the communications characteristics (the transfer function, in communications parlance) of a channel are dynamically changed. This is typically done by changing the coefficients corresponding to the
  • the signals can be reapplied periodically, to dynamically readjust the
  • Signal detection involves the determination, within a pre-specified high
  • a ramification of the combined D Arithmetic tm and D Scale tm, to include the Fourier transform decomposition, is that signals and noise can be generated using its signal/noise decomposition equations.
  • processing includes filtering, noise reduction as well as edge determination.
  • D Arithmetic combined with the D Scale
  • a set of prime numbers must first be selected for their corresponding sub-scales
  • the maximum possible resolution is determined by
  • p37 → { 1/37, 2/37, ..., 35/37, 36/37 }
  • a D Scale tm on the unit interval is constructed by concatenating, or merging, all of the points in all of these sets associated with the prime scales comprising the D Scale tm. The resulting set of points comprising the D
  • the resolution of the scale is determined as the minimum distance between adjacent scale points.
  • the Mathematica vector "delta" below contains these resolutions as a function of position in the D Scale tm.
  • it is a property of the D Arithmetic tm that far fewer points than a uniform sampling scale can lead to higher resolution.
  • each point in this D Scale tm is multiplied with each point in the D Scale tm (including itself).
  • each result, now in decimal format, is normalized such that each number is greater than 0.1. If a number is not, like 0.002345, then its order of magnitude is saved along with its normalized value: 0.2345 and 10^-3.
  • each normalized number is mapped to its left or right nearest D Scale tm neighbors based on a random boolean value.
  • the implementation of this mapping is shown in the following section.
  • the D Scale tm values can be part of the same table, if a table data structure is used to implement the mappings.
  • Each sample point is mapped to a D Scale tm point using the algorithm described, and
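The construction steps above (select a set of primes, merge their prime sub-scales on the unit interval, compute the resolutions between adjacent points, and map each measured sample to one of its two nearest scale points by a coin toss) can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the helper names `build_d_scale` and `map_to_scale` are invented for the sketch, and plain linear scans stand in for whatever lookup structure a real implementation would use.

```python
from fractions import Fraction
import random

def build_d_scale(primes):
    """Union of the prime sub-scales {n/p : 0 < n < p} on the unit interval."""
    points = {Fraction(n, p) for p in primes for n in range(1, p)}
    return sorted(points)

def map_to_scale(x, scale, rng):
    """Map a measured value to one of its two nearest scale points,
    choosing left or right neighbor by a pseudo-random coin toss."""
    lo = max((s for s in scale if s <= x), default=scale[0])
    hi = min((s for s in scale if s >= x), default=scale[-1])
    if lo == hi:          # x falls exactly on a scale point
        return lo
    return lo if rng.random() < 0.5 else hi

primes = [2, 3, 5, 7, 11, 13]
scale = build_d_scale(primes)
deltas = [b - a for a, b in zip(scale, scale[1:])]  # point-to-point resolutions

print(len(scale))                 # number of distinct D Scale points
print(min(deltas), max(deltas))   # the irregular (non-uniform) resolution
rng = random.Random(0)
print(map_to_scale(0.5, scale, rng))  # 1/2 is itself a scale point
```

Note how the gaps between adjacent points vary widely, which is the source of the "average resolution can be orders of magnitude higher than the minimum" property claimed above.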

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Complex Calculations (AREA)

Abstract

A signal and noise processor and method for signal and noise processing that employs a D Scale, which assigns the results of arithmetic operations to a set of values of controlled density and value. The signal processor that employs repetitive mapping calculations based on a D Arithmetic has two components. The first is its set of mappings from each pair of points in a D Scale into a point in a D Scale. Thus, the processor generates a set of predetermined arithmetic operation mappings, stored in a computer database or its equivalent. The second component is a map of measured values into D Scale points. The minimum resolution of a D Arithmetic is determined by the reciprocal of the maximum prime number in the D Scale. Most important, however, the average resolution can be orders of magnitude higher because of the irregular pattern of points in a D Scale. A tighter variance of the resolution is achieved by using more prime sub-scales to fill in the D Scale. This amplifies the effect of far fewer sample points to attain the same mean resolution, with a tight variance, as the naive, high uniform sampling rate. Further, its self-stabilizing property controllably bounds the round-off errors of sequential calculations, thereby decoupling actual noise from the artifact created by calculation round-off errors. The invention provides a Discrete Fourier Transform (DFT) of a signal or image as an example of any continuous-to-discrete transformation.

Description

SELF-STABILIZING, PORTABLE AND EFFICIENT COMPUTER ARITHMETIC USING MAPPINGS OF D SCALE POINTS
FIELD OF THE INVENTION
The invention relates to a method of computer calculations that avoids round-off error erosion. It further relates to optimizing noise-processing operations, such as spectral analysis, filtering, and cancellation, in telecommunications, medical, or instrumentation systems. It performs the usual arithmetic (+, −, *, /) and
transcendental/trigonometric operations on floating point numbers, the most
prevalent representation of real numbers on diverse computer architectures. These properties stem directly from the behavior of the D Scale.
BACKGROUND OF THE INVENTION
Digital numerical calculation involves round-off errors that accumulate when repetitive operations are performed. Where a limited number of calculations are involved, the calculation may be carried out to 16 significant digits and results displayed to 12 digits. However, where billions of calculations are performed, this may be insufficient. It is also known that simple non-linear models of physical systems create such problems and may appear to be chaotic, thereby limiting the ability to forecast.
Current State-of-the-Art
Review of Current Floating Point Computer Arithmetic
The mathematical foundation of computer arithmetic is Boolean algebra [2] . This is a self-consistent set of rules specifying the results (or mappings) of logical operations on bits. A bit is the basic unit of computation having values of 0 or 1 (i.e, true or false). The rules are often implemented, or regarded, as logical
truth tables. These operations also have corresponding counterparts in the
mathematical theory of sets (e.g., union corresponding to + and intersection
corresponding to *). Such operations include negation and basic arithmetic (addition, subtraction, multiplication, division).
From that foundation, many approaches evolved to perform
calculations on groups of bits, in the form of real numbers. A real number is either rational, expressible as a fraction, or irrational. And a real number is represented in a computer as a sequence of 0/1 bits. Many algorithms are used to perform the
calculations or computations on these blocks of bits.
Out of the many arithmetic techniques that evolved over the years, floating point arithmetic emerged as the clear winner. Therefore, this presentation will focus exclusively on floating point computations as the benchmark from
which to measure the merits of the Druck Arithmetic tm. Integer arithmetic will also be mentioned because of its frequent use in real-time calculations.
A floating point number is conveniently expressed in scientific notation,
having three components: the sign, the mantissa, and the exponent. Or, in the usual base-10 notation:
x = 0.n1...ns × 10^s
s indicates the number of significant digits. A significant digit is one that has
physical significance because it was actually measured during sampling. Over the years, standard packaging of real numbers evolved to improve computational efficiency. These special packagings are referred to as "data types". Examples of data types that are universally used include "int", "float", "double". Thus, the double type is a packaged 64-bit number on many architectures, while a float is typically 32 bits. Most often, these types are implemented in hardware, or even firmware, to greatly improve the efficiency of calculations. That is why data types are also referred to as "native", to distinguish them from other objects that are customized and not implemented in hardware.
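As a quick illustration of the sizes mentioned above, Python's standard `struct` module reports the native sizes of the C "float" and "double" types; on virtually all current platforms these are the IEEE 754 binary32 and binary64 formats:

```python
import struct

# Native sizes of the C "float" and "double" data types on this platform.
print(struct.calcsize("f") * 8)  # 32 bits (IEEE 754 binary32)
print(struct.calcsize("d") * 8)  # 64 bits (IEEE 754 binary64)
```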
There is also an "infinite precision" number which uses so many bits to represent a real number, that the precision is effectively "infinite". Then the roundoff errors effectively disappear, to provide the truest representation of multiple calculations on measured numbers. Until recent advances in computer CPU processing speed and memory capacity/price, it was not commercially feasible to work with infinite precision numbers. They require much more memory and processing time to implement, vis-a-vis the native data types.
Over time, the native data types were considered the golden compromise between the conflicting constraints of calculation efficiency vs. accuracy. But infinite precision numbers were increasingly used for more exacting calculations in scientific and even in financial applications. Indeed, packaging of indefinite- and otherwise custom-precision numbers is occurring more frequently.
For example, the Java language packaged an infinite precision arithmetic in its Big Decimal object. Another commercial example is the packaging of a Money object
to precisely handle arithmetic on currency sub-units (like cents). Infinite precision arithmetic also serves as a useful benchmark against which the other restricted-precision calculations can be compared. Because the infinite precision calculations do not have inherent rounding errors, they provide true values.
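The role of an exact-precision benchmark can be illustrated with Python's `fractions` module, which plays the part of the "true value" trace here (the benchmark could equally be an arbitrary-precision decimal type such as Java's BigDecimal, mentioned above):

```python
from fractions import Fraction

# Compare a long chain of floating point additions against an exact
# rational benchmark, standing in for the "infinite precision" trace.
f_sum = 0.0
r_sum = Fraction(0)
for _ in range(1000):
    f_sum += 0.1              # binary floating point: each step rounds
    r_sum += Fraction(1, 10)  # exact rational arithmetic: no rounding

print(r_sum)           # the exact benchmark value, 100
print(f_sum == 100.0)  # False: round-off error has accumulated
```

The small residual in `f_sum` is exactly the kind of indeterminate, accumulating error the text describes.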
The following floating point representation will be used to show why the current standard floating point arithmetic is unstable. A "double" data type is
indicated, resulting from n calculations. In scientific notation, it is:
0.n1...ns...nb...nc × 10^s
Here, s again indicates the number of significant digits maintained from the actual original data measured. The first s digits are significant. The digits from s+1 until b are the buffer zone that will absorb the effect of round-off errors. This buffer, in effect, shields the significant digits from the ripple effect of round-off. The digits from b+1 until c are the digits already corrupted by the rippling effect of previous round-off operations.
Note that the round-off error is generally indeterminate, and not
calculable to a definite value. That increases, yet further, the uncertainty in the resultant calculations.
Numerical Instability of Floating Point Calculations
One problem is the inherent instability of the arithmetic due to the ever-present rounding errors. Indeed, it does not even obey the basic associative law [ ]. That is, if A, B, C are floating point representations of real numbers, then
it is not guaranteed that (A * B) * C = A * (B * C). These rounding errors
accumulate until the significant digits are corrupted by the rippling round-off errors. Thus, the floating point arithmetic can only be useful for repeated calculations on a limited number of sample points, after which instability sets in.
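The associativity failure described above is easy to reproduce. The following uses addition rather than multiplication, but the rounding mechanism that breaks (A * B) * C = A * (B * C) is the same one that breaks (A + B) + C = A + (B + C):

```python
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # rounds after a + b, then again after + c
right = a + (b + c)  # b + c happens to round differently

print(left == right)  # False: floating point addition is not associative
```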
The numerical stability can be vividly illustrated by following these two
sets of (* or +) calculations on the same sequence of real numbers. Each number is expressed in scientific notation, with the significant, buffer and corrupted (or
rounded) digits indicated, as described in the previous section.
The first set of calculations is performed on the standard "double"
representation. And the second set is performed on the infinite precision representation. The second sequence serves as a genuine benchmark against which
the double precision calculations are compared, because it does not have inherent rounding errors.
Both traces start with the same digit, having si significant digits, and a
buffer of bl digits. The first case already starts with a corrupted digit because of the round-off at its end. The second digit does not have a round-off error because
of its infinite precision.
Standard type (double)                  "Infinite" Precision
0.n11...ns1..nb1.nc1 * 10^s             0.n11...ns1...nb1.... * 10^s
*/+                                     */+
*/+                                     */+
0.n1k...nsk..nbk.nck * 10^s             0.n1k...nsk...nbk.... * 10^s
Note at this point, the buffer digits have become corrupted in the first case. Subsequent calculations will corrupt the significant digits.
0.n1r...nsr.ncr * 10^s                  0.n1r...nsr...nbr.... * 10^s
*/+                                     */+
*/+                                     */+
0.n1m..ncm * 10^s                       0.n1m...nsm...nbm.... * 10^s
Note, in the first case, that more digits become corrupted as the number of calculations increases, until the significant digits become meaningless as well. Further, the round-off process typically is not readily quantified for multiple calculations. Therefore, it is not possible to determine how a calculated result compares to its infinite precision counterpart. In a sense, each intermediate result contains the history or memory of its previous calculations through the impact on its buffer bits.
Data Communications Overhead in Microprocessor
There is a second problem with the usual implementations of floating point computer arithmetic, involving the communications costs of moving data from memory to the calculating unit. More specifically, floating point arithmetic implementations generally require the physical movement of data between a computer's main memory, through a cache, then through CPU registers, and then through an Arithmetic Logic Unit (ALU) or the more specialized arithmetic unit needed for transcendental function calculations.
Limited Portability of Floating Point Arithmetic
A third problem is that floating point calculations are performed somewhat differently across multiple computer architectures. As a result, arithmetic
operations on the same sequence of real numbers could produce different results on different computer architectures. Therefore, the arithmetic is not completely
portable from one computer manufacturer to another. The problem is most acute with transcendental functions.
Description of Known Solutions
Numerical Instability of Standard Floating Point Calculations
Many solutions have been proposed over the years, and several are actually used in commercial applications. They generally include, from most to least used:
a. Accept the inherent inaccuracy of floating point calculations. Add buffer bits.
Then estimate the number of buffer digits needed to prevent the cumulative rounding errors from corrupting the good, significant digits. For
example, one technique is to use the "double" data type for calculations involving
"float" data type accuracy. Or the Java language's BigDecimal object is used to
conveniently construct numbers with even more digits. Finally, vendors have provided specialized packaging of certain number types like 'Money' [6], along with an accompanying algebra to perform meaningful and accurate calculations.
As shocking as it may sound, this ad-hoc technique is very commonly used, along with an "ignorance is bliss" attitude toward the impact of round-off error propagation on ongoing calculations. That is, a self-consistent step is not generally taken to verify that the result is accurate.
b. Accept the inherent inaccuracy of floating point calculations. Analyze. Then use numerical analysis techniques to quantify error bounds and to provide guidelines on when the calculations become unstable. This technique tends not to be used because the mathematics involved is not well understood or appreciated by users; because, even if understood, the analyses are difficult to implement; and because the quantitative methods are time consuming and not proven effective.
c. Use new models to drive floating point implementations.
This includes "interval analysis". Many such solutions have been proposed over the years, and some are actually used in specialized, well controlled situations.
Data Communications Overhead in CPU
ALU units tend to use embedded tables for efficient computation of transcendental functions and of the more complex basic arithmetic operations.
Non-Portability of Floating Point Calculations
The IEEE 754 [2] standard delineates rules on handling floating point calculations. It is now adhered to by virtually all computer manufacturers. But there is still enough room for interpretation in these standards that different results can still arise from executing the same sequence of calculations on different computer architectures.
An alternate approach is to use constructs in a higher level computer language to blanket over the differences from one architecture to the next. In particular, the Java language might be used for that purpose.
Deficiencies of Existing "Prior Art"
Numerical Instability of Floating Point Calculations
Data Communications Overhead in CPU
Even with the use of optimized and tuned embedded CPU components, there is still data movement between memory and CPU. The communications overhead is most pronounced when transcendental calculations are performed.
Non-Portability of Floating Point Calculations
The IEEE 754 [2] standard has contributed enormously to increasing the portability of floating point computations over many disparate computer architectures. But it still cannot guarantee portability, due to flexibility and ambiguity in the interpretation of the standards.
Also, the Java data types are still not as portable as originally intended or expected. A recent article in EETimes [ ] showed flaws in Java as a fully portable computation engine.
BRIEF DESCRIPTION OF THE INVENTION
The invention employs D Arithmetic, which has previously been disclosed in patent application Serial No. 60/115,752 in connection with signal processing and modeling. D Arithmetic has two components. The first is its set of mappings from each pair of points in a D Scale into a point in a D Scale. For example, D Arithmetic multiplication mappings are created by applying the usual floating point multiplication to each pair of points in the cross-product of a D Scale with itself. Then each decimal result is mapped to a point in the same D Scale, using the D Scale mapping algorithm described in [2]. Thus, this algorithm generates a set of pre-assigned, or pre-wired, arithmetic operation mappings (e.g., one for multiplication, another for addition). These mappings are generated once, hence pre-wired, and then used forever onwards. This contrasts with the usual notion of arithmetic "calculations", whereby an unknown quantity is discovered for each pair of input values by applying Boolean Algebra to the bits comprising the numeric input values. The second component is an algorithm to map actual, measured values into D Scale points.
The accuracy and usefulness of the D Arithmetic mappings depends on the precision of the D Scale used. A D Arithmetic that was constructed from a high resolution D Scale can more accurately map sampled data with a high number of significant digits. That is, the result of iterative operations will yield a D Scale value that is within a tighter tolerance of the actual value. The minimum resolution of a D Arithmetic is determined by the reciprocal of the maximum prime number in the D Scale. Most important, however, the average resolution can be orders of magnitude higher because of the irregular pattern of points in a D Scale. A tighter variance of the resolution is achieved by using more prime sub-scales to fill in the D Scale. It amplifies the effect of far fewer sample points to attain the same mean resolution, with a tight variance, as the naïve, high uniform sampling rate.
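The construction of the pre-wired mappings can be sketched in a few lines of Python. The primes chosen, the nearest-point rule and the table layout below are illustrative assumptions for the sketch, not the patent's reference implementation:

```python
from fractions import Fraction

# Build a small D Scale as the union of prime sub-scales {n/p : 0 < n < p}.
# The primes below are illustrative; a real scale would use many more.
PRIMES = [7, 11, 13]
scale = sorted({Fraction(n, p) for p in PRIMES for n in range(1, p)})

def nearest(x):
    """Map a value in (0, 1) to the nearest D Scale point."""
    return min(scale, key=lambda s: abs(s - x))

# Pre-wired multiplication mapping: apply ordinary multiplication to every
# pair in the cross-product of the scale with itself, then map each exact
# result back onto the scale. Generated once, looked up forever after.
mul_table = {(a, b): nearest(a * b) for a in scale for b in scale}

x, y = nearest(Fraction(1, 3)), nearest(Fraction(1, 2))
z = mul_table[(x, y)]                       # "multiplication" is a lookup
assert abs(z - x * y) <= Fraction(1, max(PRIMES))
```

Because every table entry is itself a scale point, the output of one lookup can feed directly into the next, which is what makes iterated operations possible without bit-level calculation.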
The D Arithmetic does not need to use standard Boolean Algebra, which is at the heart of virtually all mainstream commercial arithmetic implementations. Instead, its pre-wired maps embody what standard arithmetic would calculate using Boolean Algebra or other operations on a number's bits. Its pre-wired mappings are most readily envisioned as straightforward tables (e.g., multiplication and sine tables). It relies on the availability of ever increasing and inexpensive computer memory and persistent storage, as well as on ever expanding and inexpensive communications bandwidth.
The most useful property of the D Arithmetic is its numeric stability, even as the number of calculations on data set values increases. This contrasts sharply
with standard floating point arithmetic techniques, whereby accuracy is rapidly
lost due to significant digit erosion from cumulative round-off errors, as the number of calculations increases.
A second property is its portability across diverse computer platforms. That is, the same calculations are guaranteed to produce identical results regardless of the computer platform used to actually perform the calculations. This
also contrasts with the industry standard arithmetic (although the IEEE754
standard appreciably minimized differences across diverse computing platforms).
The third property is its relative efficiency. Its efficiency derives from its
use of pre-calculated table values that are accessed in memory or on a networked
node. This avoids the communications overhead in moving data between the CPU
arithmetic logic unit (ALU) and memory. Again, this contrasts with standard
industry practice whereby data must move around to CPU ALU and back for
calculated results. Finally, D Arithmetic results' deviations or errors are bounded, quantifiable and readily verifiable.
At the heart of the processing is the industry standard calculation of the Discrete Fourier Transform (DFT) of a signal or image using the full set of sampled data points. Typically, one-dimensional signal sample data is partitioned into smaller subsets, called periodograms, and then stitched together with the aid of "windowing". And two-dimensional images are typically partitioned into blocks [2]. Each partition is then used in a DFT calculation to ensure that accurate results are still obtainable before the cumulative round-off overwhelms the original significant digits. For, as the number of data points used in an iterative DFT calculation increases, the likelihood of intermediate calculations being corrupted by round-off errors increases, rendering the final result highly suspect or completely useless. But this partitioning also has a cost in introducing extra steps to the overall data processing flow. It also requires an extra statistical analysis step to smooth the resulting data. And it introduces artifacts. Indeed, the use of blocks generates visible artifacts in the form of subtle block boundaries across a processed image.
CONTENTS
1. Introduction to the D Arithmetic tm
1.1 Summary of Internals
1.2 Benefits
2. Current State-of-the-Art
2.1 Review of Current Floating Point Computer Arithmetic
2.2 Description of Problem
2.3 Description of Known Solutions
2.4 Deficiencies of Existing "Prior Art"
3. The D Arithmetic tm
3.1 Commercial Feasibility
3.2 Outline of D Arithmetic tm Process Flow
3.3 Mapping Actual Measurements into D Scaletm Points
3.4 Random Mapping to Right/Left D Scale Neighbors
3.5 Construction of Static D Arithmetic tm Operation Mappings
3.5.1 Multiplication
3.6 Factoring Scaling into D Arithmetic tm Mappings
3.7 Proof of Accuracy and Stability of D Arithmetic tm Operations
3.8 D Arithmetic tm on Big Numbers
3.9 D Arithmetic tm for Transcendental/Trigonometric Functions
4. Validating the D Arithmetic tm
5. Sensitivity of D Arithmetic tm to Random Number Generators
6. Implementing the D Arithmetic tm
6.1 In-memory Tables
6.2 Memory Mapped IO
6.3 Networked Tree of Tables
7. Benefits of D Arithmetic tm/ D Scale tm to Process Noisy Data
8. Commercial Noise Processing Applications of the D Arithmetic tm/ D Scale tm
9. Other Commercial Applications of the D Arithmetic tm
10. Example of Construction of a D Arithmetic
11. Examples of D Arithmetic tm Applications
12. Performance Analysis
13. Advantages of the D Arithmetic tm
DETAILED DESCRIPTION
The D Arithmetic provides a way for computers to iteratively compute numbers, such as in signal processing, in a self-stabilizing, efficient and portable manner. That is useful for any data processing involving measured quantities. For example, the D Arithmetic can calculate the Discrete Fourier Transform of a signal's potentially huge set of sample points without losing stability through the usual erosion of significant digits from round-off errors. The present invention involves a novel arithmetic method whereby the results of operations on pairs of numbers are pre-mapped, or pre-wired. This pre-wiring automatically ensures portability across computing platforms because the mappings are invariant. And these mappings necessarily are efficient because there is no data movement between CPU components to manipulate bits.
Summary of Internals
The D Arithmetic tm relies on the properties of the D Scale. This measurement scale is described briefly: it relates to a method for sampling data by selecting values at discrete points x = n/p, where p is prime and 0 < n < p. The concatenation of the uniformly distributed points, x, associated with each prime p used constitutes a measurement scale with irregularly spaced points.
The methodology, with its associated algorithms, proceeds by first mapping two arbitrary measured floating point numbers, x and y, into two unique, corresponding points in the measurement scale noted above. The mapping is implemented using an algorithm that can always find a correspondence between a measured point and a unique point on the scale. Those mapped values therefore take the form n/p and n'/p' respectively, where p and p' are primes and members of the measurement scale, and 0 < n < p, 0 < n' < p'. The methodology then provides a logical or physical lookup table mapping for f(n/p, n'/p'). Thus, for the arbitrary values of x and y, one looks up f(n/p, n'/p'), where x ≈ n/p, y ≈ n'/p' and ≈ denotes the closest approximate equality within the set of points in the measurement scale, in this case n/p and n'/p'. (The difference between each of the original values x and y, and their corresponding approximated ones, n/p and n'/p', is proved to always be less than or equal to 1/Pmax, where Pmax is the largest prime in the measurement scale used.) The methodology requires that the values of f be previously mapped, which they are, in such a way that the result f ∈ [measurement scale]. By requiring (or guaranteeing) that f ∈ [measurement scale], the value of f may be used iteratively in subsequent calculations using the same technique just described. Thus, the set of points in the measurement scale constitutes a closed set with regard to the operation of the function f.
The functions f may be simple arithmetic operations, such as x+y, x-y, x*y, x/y, or may be transcendental, or a transcendental function of a single variable. F(x,y) = sin x is a simple example.
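The closed-set lookup just described can be sketched for the f(x) = sin x example. The primes, the nearest-point rule and the sample measurement 0.42 are illustrative assumptions, not values taken from the patent:

```python
import math
from fractions import Fraction

# A tiny D Scale from a few illustrative primes (not the patent's choice).
PRIMES = [7, 11, 13]
scale = sorted({Fraction(n, p) for p in PRIMES for n in range(1, p)})
PMAX = max(PRIMES)

def to_scale(x):
    """Approximate a measured value in (0, 1) by its nearest scale point."""
    return min(scale, key=lambda s: abs(float(s) - float(x)))

# Pre-wired table for f(x) = sin x: each result is mapped back into the
# scale, so the scale is closed under f and lookups can be iterated.
sin_table = {s: to_scale(math.sin(s)) for s in scale}

x = 0.42                                  # an arbitrary measurement
sx = to_scale(x)                          # x ≈ n/p
assert abs(float(sx) - x) <= 1.0 / PMAX   # mapping error bound (1/Pmax)
result = sin_table[sx]                    # "computing" sin x by lookup
assert result in scale                    # closed set property
```

Because the result is again a scale point, it can be used directly as an input to any other pre-wired table, which is the closure property the methodology depends on.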
Benefits
D Arithmetic tm provides significant improvements over current
computation techniques. In particular, the following commercially attractive
properties will be proved:
1. Probabilistically Bounded, Tolerable Mapping Errors
This arithmetic will be shown to exhibit probabilistically bounded mapping errors, due to the inherent error cancellations that occur during iterative computations. Note this error is not caused by the round-off policy. Rather, it is caused by the mapping of a measured value into a D Scale point. Further, the maximum error bound, which is so improbable as to be virtually impossible to attain, is itself of reasonable size, and tolerable.
2. Self-Stabilizing Round-off Errors
This arithmetic is self-stabilizing such that round-off errors tend, with very high probability, to cancel one another and not accumulate as in the usual arithmetic. That is, the difference between a D Arithmetic determined value and
the exact value will be shown to be bounded in a strong probabilistic sense, with an acceptable tolerance. Further, this tolerance is independent of the number of
calculations involved. This behavior stems directly from the D Scale tm.
3. Portability
This arithmetic is fully portable across computer architectures, because
it relies on pre-defined D Scale tm mappings. These mappings are invariant and independent of computing platforms or architectures.
4. Efficiency
This D Arithmetic is fast, especially for transcendental and trigonometric function calculations. Its efficiency stems from its use of pre-defined, or pre-wired, memory resident mappings. There is no need for (relatively) time consuming data communications within a microprocessor, between memory, cache, registers and arithmetic logic units (ALU).
5. Quantifiable Error Tolerance
The exact error resulting from multiple calculations can also be calculated. Thus, not only is the error guaranteed to be bounded, but it can be determined as well. That is not the case with the usual floating point arithmetic.
3. The D Arithmetic tm
The D Arithmetic tm uses a pre-defined set of mappings. Each mapping
defines the "calculated" result when an arithmetic operation is performed on a
pair of real numbers. The arithmetic operation might be multiplication, addition, subtraction or division, or a trigonometric operation like sine or cosine. However, using unconstrained pre-defined mappings would be theoretically, and therefore practically, infeasible because of the uncountable number of mappings required.
The feasibility of the D Arithmetic tm mapping stems directly from its use of D Scale tm points. That is, a finite, yet manageable, set of mappings is defined between pairs of points in the D Scale tm into results from the same set of points. For example:
x * y → z where x, y, z are points in a D Scale tm
The pre-defined arithmetic mappings have the logical structure of a two-dimensional table, one table per arithmetic operation. A table's cell then contains the result of the mapping, or the intersection of the two coordinate numbers in a pair, where all cell and coordinate numbers are from a D Scale tm. Therefore, the D Arithmetic tm does not actually "calculate" results. Rather, it effectively behaves like a routing table, directing a pair of originating values to a destination value.
The logical tables can be implemented in several ways. The mappings can be in an in-memory table, a file, database tables, or distributed over a network of like objects. Or the mappings can be dynamically calculated as needed.
Commercial Feasibility
The invention's practical feasibility stems from the ongoing mega-trends in increasing capacity vs. decreasing price for CPU memory, disk storage and communications bandwidth.
Memory Capacity/Price
CPU memory capacity is ever increasing, along with falling prices, to the extent that 250Mbytes of memory is well within the capability of a basic PC today, and 500Mbytes-1Gbyte is well within the capability of a small server.
The D Arithmetic leverages off of the increased main memory available at ever lower prices, to perform calculations without data movement in the
microprocessor. Actually, the term "calculation" is a misnomer in this context. For, a result is obtained by following memory references of the inputs to their mapped result. Consider a D Arithmetic that is generated from a D Scale with 1000 points through the union of the D Scale's prime sub-scales. This D Scale might have an average resolution of 6 significant digits. The multiplication table would then consume 1000^2 table cells if the D Arithmetic multiplication mappings were implemented in one CPU's memory. If each cell consumed on the order of 10 bytes for the result and order-of-magnitude values, then about 10Mbytes would be consumed by this multiplication table. This resource consumption is well within the capability of a plain PC. Or consider a D table built from a D Scale of 10000 points, with an average resolution of 8 significant digits. It would consume 1Gbyte of memory. This is clearly out of range of current plain stand-alone PCs. But the multiplication table can readily be implemented on a small server. Thus, as the number of significant digits required of a D Arithmetic increases, the memory required to implement the arithmetic table increases such that implementations are forced up the line from stand-alone PC to small server to large server. That is, each additional significant digit costs in terms of the computing platform needed to implement the D Arithmetic mappings. But the costs are well within current capabilities and will improve substantially over time. Yet measurement instruments on average might still be required to yield anywhere from 5-20 significant digits.
3.1.2 Memory Mapped I/O
Continuing the reasoning from the previous section, consider that a multiplication table might easily require 100Gbytes for an average resolution of 15 digits. This is beyond the capacity of small servers. But it can be implemented using the standard operating system virtual memory management technique of memory mapped IO. This technique enables the table to reside in a 100Gbyte file, yet appear as internal memory when used by the D Arithmetic processing. This storage requirement is well within the capability of a moderately sized server. Memory mapped IO capacity also continues to improve along with ever increasing storage capacity.
Bandwidth Capacity/Cost
Continuing again the reasoning from the previous section, consider that a multiplication table might require 1,000 TeraBytes (10^15 bytes) to support an average resolution of 20 digits. This is beyond the memory mapped IO of typical servers today. But the ever increasing bandwidth capability enables the D Arithmetic implementation to extend over a set of PCs or servers. Each computer would contain a subset of the full D Arithmetic mappings. They could be interconnected as a binary tree of nodes using standard techniques. The particular table used would be determined by the pair of values to be mapped, using their values as an address to the computer in the tree.
Outline of D Arithmetic tm Computing Flow
Figure 1 below depicts the computation process flow of a set of
measured, raw, data as it is transformed into a result set, using the D Arithmetic tm.
A raw data set is first created from the actual measurements of a process. Each raw data point is then mapped to one of its two nearest points on the D Scale tm used. The mapping into either the left or right neighbor is determined by a random coin toss. Then the D Arithmetic tm points that correspond to the original measurements are iteratively computed for an arithmetic or trigonometric operation. Each step of this computation is just a mapping or memory reference operation, without data movement.
The result of the iterative computations is then a D Scale tm point. The actual
result is shown to be within a small and acceptable neighborhood of this D Scale
tm result.
Each of these steps is detailed in the following sections.
Figure 1: D Arithmetic tm "Calculation" Process Flow
[Flow diagram: a Large Data Set of Measurements is mapped to Data Set Values as D Scale tm Points; the D Arithmetic tm then performs stable calculations on the full data set of D Scale tm points, yielding Accurate Results in D Scale tm Values.]
Tables with D Scale tm Entries
[Number line from 0 to 1 showing a measured point x between its two nearest D Scale tm neighbors, ni/pk and nj/pl.]
x is a measured sample point.
ni/pk and nj/pl are the nearest D Scale tm point neighbors of x.
ni/pk is the i-th point on the unit prime sub-scale pk.
nj/pl is the j-th point on the unit prime sub-scale pl.
D ik,jl is the distance between these two D Scale tm points.
d x,jl is the fraction of the distance D ik,jl between x and its neighbor nj/pl.
d x,ik is the fraction of the distance D ik,jl between x and its neighbor ni/pk.
The next step is to decide which of the neighboring scale points to map x into. That is:
x → ni/pk * 10^s where 0 < ni < pk
Or:
x → nj/pl * 10^s where 0 < nj < pl
3.4 Random Mappings to Right/Left D Scale Neighbors
The straightforward approach would be to map the normalized point x to the nearest of its two neighbors. Instead, an element of randomness is introduced to ensure that the arithmetic is self-stabilizing. The effect of randomness is to prevent any accumulated bias towards one direction of accumulating mapping errors. Thus, each mapping is based on the outcome of a pseudo-random 0/1 bit number generator (i.e., a coin toss). The point's left neighbor is assigned for a 0 outcome, or the right neighbor for a 1 outcome, or vice-versa as long as the rule is applied consistently. It will be shown that the quality of the random number generator does not need to be optimal.
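The coin-toss mapping step can be sketched as follows. The scale primes and the sample point 0.40 are illustrative assumptions made for the sketch:

```python
import random
from fractions import Fraction

# Illustrative scale; the primes and the point x below are assumptions
# made for the sake of the sketch.
PRIMES = [7, 11, 13]
scale = sorted({Fraction(n, p) for p in PRIMES for n in range(1, p)})

def random_map(x, rng=random):
    """Map x in (0, 1) to its left or right scale neighbor by coin toss."""
    left = max(s for s in scale if s <= x)    # nearest neighbor below x
    right = min(s for s in scale if s >= x)   # nearest neighbor above x
    # 0 -> left neighbor, 1 -> right neighbor (applied consistently).
    return left if rng.getrandbits(1) == 0 else right

random.seed(1)                                # reproducible toss sequence
picks = {random_map(0.40) for _ in range(100)}
# Both neighbors of 0.40 on this scale (5/13 and 3/7) get selected.
assert picks == {Fraction(5, 13), Fraction(3, 7)}
```

Always choosing the nearest neighbor would bias every error in the same direction; the toss makes the per-step error sign a fair coin, which is what the later stability proof relies on.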
Construction of Static D Arithmetic tm Operation Mappings
An algorithm is presented to generate a static set of mappings between pairs of D Scaletm numbers into D Scaletm numbers. Each mapping type corresponds to a different arithmetic operation: multiplication, addition, division, subtraction. The algorithm is described for multiplication. The other arithmetic operations use the same procedures.
3.5.1 Multiplication Mapping
The multiplication mapping between any two normalized D Scale tm points, ni/pk and nj/pl, is determined as follows:
Let ni/pk represent the i-th normalized point on the k-th prime sub-scale of a D Scale tm.
Let nj/pl represent the j-th normalized point on the l-th prime sub-scale of the same D Scale tm.
Then:
na/pm ≈ ni/pk * nj/pl
na/pm is the a-th point on the m-th prime sub-scale of the same D Scale tm. na/pm is obtained by mapping the actual multiplied value of ni/pk * nj/pl to the nearest of its two neighboring D Scale tm points, na/pm and some nb/pn.
NOTE: It is an open issue whether additional randomness is needed in the mapping of a pair of D Scale tm points to another point. The randomness of the D Scale tm mappings of measurement points should implicitly provide enough randomness. But if additional randomness is desired, then the multiplied D Scale tm value would be obtained by randomly mapping the actual multiplied value of ni/pk * nj/pl to one of its two nearest D Scale tm points, na/pm and nb/pn.
Consider the example depicted in Figure 3 below. Let x represent the
actual value of ni/pk * nj/pl. Then:
x = na/pm + (d x,am * D am,bn) where 1/2 <= d x,am <= 1
Or:
x = nb/pn - (d x,bn * D am,bn) where 0 <= d x,bn <= 1/2
The rule, again, is to pick the nearest Druck Scale tm point neighbor. Therefore the mapping is:
ni/pk * nj/pl → nb/pn
D ik,jl <= 1/pmax
Figure 3: Result of Multiplying Two Normalized D Scale tm Points
ni/pk * nj/pl = nb/pn - (d x,bn * D am,bn) where 0 <= d x,bn <= 1/2
Therefore:
ni/pk * nj/pl → nb/pn
[Number line from 0 to 1 showing the points ni/pk, na/pm, x, nb/pn and nj/pl, with the fractional distances d x,am and d x,bn and the inter-point distance D am,bn marked.]
3.6 Factoring Scaling into D Arithmetic tm Mappings
Each arithmetic (e.g., multiplication) mapping of a pair of normalized points in the D Scale tm yields a point in the D Scale tm. The mapped point is normalized and thereby saved as two values, its normalized fraction and its order-of-magnitude. Its full value (fraction times its order-of-magnitude) is used with other D Scale tm points to generate other mappings with co-pairs.
3.7 Proof of Accuracy and Self-Stability of the D Arithmetic tm
The D Arithmetic tm is shown to be accurate and correct. The proof is based on comparisons between intermediate and final results of D Arithmetic tm mappings against corresponding calculations using "infinite" precision numbers, as well as against corresponding finite precision calculations using built-in "double" or "double precision" types.
3.7.1 Multiplication Operation
The following two theorems are proved. The first quantifies the theoretical worst case mapping error. But it also indicates that that worst case scenario is so highly unlikely as to be virtually impossible to achieve. The second theorem quantifies the realistic error from the iterative multiplication mappings. It demonstrates that self-stabilization is at work to counteract any tendency for the mapping errors to accumulate during iterative calculations. This mapping error distribution is similar to a Bernoulli distribution, which quantifies the probability of a certain number of heads or tails in a sequence of random coin tosses. Corresponding to a head is a random mapping to the left nearest neighbor, and to a tail a mapping to the right nearest neighbor. And each sequential trial corresponds to an iterative, intermediate, mapping along the way to the result.
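The coin-toss behavior of the mapping errors can be illustrated with a small Monte Carlo sketch. The prime, step size, and trial counts below are illustrative assumptions, not parameters from the patent:

```python
import random

# Each iterative mapping contributes an error of bounded size whose sign
# is set by a fair coin toss (left vs right neighbor), so the cumulative
# error behaves like a random walk rather than a steady drift.
random.seed(0)
PMAX = 997                    # largest prime of a hypothetical scale
STEP = 1.0 / (2 * PMAX)       # worst-case size of a single mapping error

def accumulated_error(n_ops, rng):
    """Sum n_ops mapping errors of random sign."""
    return sum(rng.choice((-STEP, STEP)) for _ in range(n_ops))

trials = [accumulated_error(1000, random) for _ in range(200)]
mean_abs = sum(abs(e) for e in trials) / len(trials)

# A one-directional bias would drift toward n_ops * STEP ~ 0.5; random
# signs keep the typical magnitude near STEP * sqrt(n_ops), about 0.016.
assert mean_abs < 0.05
```

The contrast between the worst-case sum n_ops * STEP and the observed random-walk magnitude is exactly the gap between Theorem 1's maximum bound and Theorem 2's probabilistic tolerance.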
D Arithmetic tm - Theorem on Bounded Maximum Error
D Arithmetic tm - Central Theorem on Probabilistic Error Tolerance
The error, en, from any number, n, of D Arithmetic tm multiplication "calculation" mappings is self-stabilizing such that:
Probability(en > 1/pmax) → 0 as n → ∞
decreasing rapidly at the rate of 1/2^n
Proof of Theorem 1:
Mathematical induction will be applied on the number of floating point numbers used in a series of serial calculations. That is, the assertion in Theorem 1 must be proved to be true for the first case, where n=3. Then, assuming that the assertion is true for the n-th case, the assertion must be proven for the (n+1)-st case.
Case 1: x1 * x2
Theorem:
If x3 = x1 * x2, then x3 = n/p +/- e3 and e3 <= 1/pmax, where n/p is some point on the D Scale tm
Proof:
Consider the mappings of any two measurement points x1 and x2 into their respective D Scale tm points, n1/pk and n2/pl respectively, as described in section 3.3. That is:
x1 = 0.n1...ns...nb....nc * 10^s1 → n1/pk * 10^s1 where 0 < n1 < pk
x2 = 0.m1...ms...mb....mc * 10^s2 → n2/pl * 10^s2 where 0 < n2 < pl
Then, again as discussed in section 3.3, the corresponding measurement and D Scale tm points are related as follows:
x1 = (n1/pk +/- (d x1,1k * D 1k,i1j1))
x2 = (n2/pl +/- (d x2,2l * D 2l,i2j2))
where:
D 1k,i1j1 is the distance between the two nearest Druck Scale tm neighbors of x1: n1/pk and its more distant neighbor, some scale point ni1/pj1.
d x1,1k is the fraction of the interval D 1k,i1j1 between x1 and n1/pk.
Similarly:
D 2l,i2j2 is the distance between the two nearest Druck Scale tm neighbors of x2: n2/pl and its more distant neighbor, some scale point ni2/pj2.
d x2,2l is the fraction of the interval D 2l,i2j2 between x2 and n2/pl.
And:
0 <= d x1,1k <= 1/2
0 <= d x2,2l <= 1/2
for otherwise, x would not be associated with the scale point above, but rather with the other neighbor.
Consider the multiplication operation on the two measurements using "infinite" precision arithmetic calculations. This is one of three approaches: D Arithmetic tm mappings, standard precision arithmetic and "infinite" precision arithmetic calculations. But results from the other two approaches can be readily determined from this trace.
"Infinite" Precision Calculation:
y3 = x1 * x2
= (n1/pk +/- (d x1,1k * D 1k,i1j1)) * (n2/pl +/- (d x2,2l * D 2l,i2j2))
= (n1/pk * n2/pl) +/-
((n1/pk * (d x2,2l * D 2l,i2j2)) +/- (n2/pl * (d x1,1k * D 1k,i1j1))) +/-
((d x1,1k * D 1k,i1j1) * (d x2,2l * D 2l,i2j2))
Then consider the relevant D Arithmetic tm mapping of the first term in the multiplication above, as follows:
If xv = n1/pk * n2/pl
then xv must necessarily map into some point on the scale: n3/pm ≈ xv
Therefore from Section 3.5:
n3/pm = xv +/- (d xv,3m * D 3m,i3j3)
Or,
n3/pm = (n1/pk * n2/pl) +/- (d xv,3m * D 3m,i3j3) where 0 <= d xv,3m <= 1/2
where:
D 3m,i3j3 is the distance between the two nearest D Scale tm neighbors of xv: n3/pm and its more distant neighbor, ni3/pj3.
d xv,3m is the fraction of the interval D 3m,i3j3 between xv and n3/pm.
Therefore, substituting this into the above leads to:
y3 = x1 * x2
= n3/pm +/-
(d xv,3m * D 3m,i3j3) +/-
((n1/pk * (d x2,2l * D 2l,i2j2)) +/- (n2/pl * (d x1,1k * D 1k,i1j1))) +/-
((d x1,1k * D 1k,i1j1) * (d x2,2l * D 2l,i2j2))
Or: y3 = n3/pm +/- e3
where:
e3 = (d xv,3m * D 3m,i3j3) +/-
((n1/pk * (d x2,2l * D 2l,i2j2)) +/- (n2/pl * (d x1,1k * D 1k,i1j1))) +/-
((d x1,1k * D 1k,i1j1) * (d x2,2l * D 2l,i2j2))
Thus, the infinite precision, "true", calculation yields what the D Arithmetic tm maps to, n3/pm, plus an extra amount e3.
It will now be proved that e3 <= 1/pmax.
a. Consider each of the terms above:
(d xv,3m * D 3m,i3j3) <= 1/2 * D 3m,i3j3 <= 1/2 * 1/pmax
Because:
0 <= d xv,3m <= 1/2, and D 3m,i3j3 <= 1/pmax as a Druck Scale tm property.
(n1/pk * (d x2,2l * D 2l,i2j2)) < ((pmax-1)/pmax) * 1/2 * D 2l,i2j2 <= ((pmax-1)/pmax) * 1/2 * 1/pmax
Because:
n1/pk <= (pmax-1)/pmax since n1 < pk and pk <= pmax by Druck Scale tm properties.
d x2,2l * D 2l,i2j2 <= 1/2 * D 2l,i2j2 <= 1/2 * 1/pmax
D 2l,i2j2 is the distance between two adjacent Druck Scale tm points, which must be <= 1/pmax. Also, 0 <= d x2,2l <= 1/2.
Similarly:
(n2/pl * (d x1,1k * D 1k,i1j1)) < ((pmax-1)/pmax) * 1/2 * D 1k,i1j1 <= ((pmax-1)/pmax) * 1/2 * 1/pmax
where 0 <= d x1,1k <= 1/2.
((d x1,1k * D 1k,i1j1) * (d x2,2l * D 2l,i2j2)) <= 1/2 * D 1k,i1j1 * 1/2 * D 2l,i2j2 <= 1/4 * (1/pmax) * (1/pmax) = 1/4 * 1/pmax^2
a. Next, consider the worst case where the components are all additive. Then the
worst case sum is: WorstCaseSum e3 =< V21 pmax +
(pmax-1/ pmax) 1/2/ pmax +
(pmax-1/ pmax) !^/ pmax + /p2max Or, after collecting terms;
WorstCaseSum e3 = V2 1 pmax +
V2 (pmax-1)/ p2max +
After collecting these terms:
WorstCaseSum e3 = V /pmax + XA /pmax - !/2 / p2max + % /p2max
< 1/ pmax
Therefore: e3 =< 1/pmax
Identical reasoning can be applied to the worst case when all the
components are subtractive to yield the same result.
Case 2.
x3 = 0.m1...ms...mb...mc * 10^s2 → n3/pl * 10^s3, where 0 < n3 < pl
Then, again as discussed in Section 3.3, the corresponding measurement and D Scale tm points are related as follows:
x3 = n3/pk ± (d_x3,1k * D_1k,i1j1)
x4 = x1 * x2 * x3 = y3 * x3
= ((n1/pk * n2/pl) ± ((n1/pk * (d_x2,2l * D_2l,i2j2)) ± (n2/pl * (d_x1,1k * D_1k,i1j1))) ± ((d_x1,1k * D_1k,i1j1) * (d_x2,2l * D_2l,i2j2))) * (n3/pk ± (d_x3,1k * D_1k,i1j1))
Then consider the relevant D Arithmetic tm mapping of the first term in the multiplication above, as follows:
If xv = n1/pk * n2/pl
then xv must necessarily map into some point on the scale: n3/pm → xv.
Therefore, from Section 3.5:
n3/pm = xv ± (d_xv,3m * D_3m,i3j3)
Or,
n3/pm = (n1/pk * n2/pl) ± (d_xv,3m * D_3m,i3j3), 0 ≤ d_xv,3m ≤ ½
where:
D_3m,i3j3 is the distance between the two nearest D Scale tm neighbors of xv, n3/pm and its more distant neighbor, ni3/pj3.
d_xv,3m is the fraction of the interval D_3m,i3j3 between xv and n3/pm.
Therefore, substituting this into the above leads to:
y3 = x1 * x2 = n3/pm ±
(d_xv,3m * D_3m,i3j3) ±
((n1/pk * (d_x2,2l * D_2l,i2j2)) ± (n2/pl * (d_x1,1k * D_1k,i1j1))) ±
((d_x1,1k * D_1k,i1j1) * (d_x2,2l * D_2l,i2j2))
Or: y3 = n3/pm ± e3
where:
e3 = (d_xv,3m * D_3m,i3j3) ± ((n1/pk * (d_x2,2l * D_2l,i2j2)) ± (n2/pl * (d_x1,1k * D_1k,i1j1))) ± ((d_x1,1k * D_1k,i1j1) * (d_x2,2l * D_2l,i2j2))
2. Case n
Let xn result from multiplying (n−1) floating point numbers. Or:
xn = ∏_{i=1}^{n−1} xi
Then assume that: xn = n/p ± en and en ≤ 2/pmax
where n/p is some point on the D Scale tm.
3. Case (n+1)
Let x be a floating point number. The result of multiplying n floating point numbers is identically the result of multiplying x and xn from case n above. Or: x(n+1) = xn * x
Need to prove that:
x(n+1) = n/p ± e(n+1) and e(n+1) ≤ 2/pmax, where n/p is a point on the D Scale tm.
Proof:
The details are very similar to the first case.
As discussed in Section 3.3, the measurement point x is related to its corresponding D Scale tm point na/pb by:
na/pb → x
x = (na/pb) ± (d_x,ab * D_ab,ikjk)
where: D_ab,ikjk is the distance between the two nearest D Scale tm neighbors of x, na/pb and its more distant neighbor, nik/pjk.
d_x,ab is the fraction of the interval D_ab,ikjk between x and na/pb.
Similarly, from case n above: nn/pm → xn, and xn = nn/pm ± en, where en ≤ 2/pmax.
Therefore:
x(n+1) = (nn/pm ± en) * ((na/pb) ± (d_x,ab * D_ab,ikjk))
= (nn/pm * na/pb) ± (en * na/pb) ± (nn/pm * (d_x,ab * D_ab,ikjk)) ± (en * (d_x,ab * D_ab,ikjk))
But, using the relevant D Arithmetic tm mapping:
If xv = nn/pm * na/pb
Then
nc/pp → xv
or, from Section 3.5: nc/pp = (nn/pm * na/pb) ± (d_xv,nm * D_nm,injn), 0 ≤ d_xv,nm ≤ ½
where:
D_nm,injn is the distance between the two nearest D Scale tm neighbors of (nn/pm * na/pb): nc/pp and its more distant neighbor, nin/pjn.
d_xv,nm is the fraction of the interval D_nm,injn between xv and nc/pp.
Therefore, substituting this into the above leads to:
x(n+1) = nc/pp ± (d_xv,nm * D_nm,injn) ±
(en * na/pb) ± (nn/pm * (d_x,ab * D_ab,ikjk)) ±
(en * (d_x,ab * D_ab,ikjk))
Or: x(n+1) = nc/pp ± e(n+1)
where: e(n+1) = (d_xv,nm * D_nm,injn) ±
(en * na/pb) ±
(nn/pm * (d_x,ab * D_ab,ikjk)) ±
(en * (d_x,ab * D_ab,ikjk))
It will now be shown that e(n+1) ≤ 2/pmax.
a. Consider each of the components above:
(d_xv,nm * D_nm,injn) ≤ ½ * D_nm,injn = ½ * dnm'/pmax < 1/pmax
D_nm,injn is the distance between two adjacent Druck Scale points. Therefore D_nm,injn ≤ 1/pmax, or D_nm,injn = dnm' * (1/pmax), where 0 < dnm' ≤ 1.
(en * na/pb) ≤ (2/pmax) * (pmax−1)/pmax = 2(pmax−1)/pmax², since na < pb and en ≤ 2/pmax by case n above.
na/pb ≤ (pmax−1)/pmax, since na < pb and pb ≤ pmax by Druck Scale properties.
(nn/pm * (d_x,ab * D_ab,ikjk)) ≤ ½ * d_x,ab'/pmax * (pmax−1)/pmax = ½ * d_x,ab' * (pmax−1)/pmax²
where 0 ≤ d_x,ab' ≤ 1.
(en * (d_x,ab * D_ab,ikjk)) ≤ (2/pmax) * ½ * d_x,ab'/pmax = d_x,ab'/pmax²
because en ≤ 2/pmax (from case n) and 0 ≤ d_x,ab' ≤ 1.
b. Next, consider the worst case where the components are all additive. Then the worst case sum is:
WorstCaseSum e(n+1) ≤ 1/pmax + 2(pmax−1)/pmax² + ½ * d_x,ab' * (pmax−1)/pmax² + d_x,ab'/pmax²
Or, finally, assuming d_x,ab' = 1: WorstCaseSum < 2/pmax
Therefore:
e(n+1) ≤ 2/pmax
3.7 Realistic vs. Worst Case Error - Theorem #2
To select either nm/pm or nn/pn, a random coin is tossed to determine which mapping to use; alternatively, "natural" randomness can be relied upon.
Note that this is the absolute worst bound on accumulated errors. The probability of all components becoming additive decreases appreciably as more calculations are performed, to the extent of being practically impossible.
3.7.2 Addition Operation
The following theorems are proved. The first quantifies the theoretical worst case mapping error, but also indicates that the worst case scenario is so highly unlikely as to be practically impossible to achieve. The second theorem quantifies the realistic error from the iterative addition mappings. It demonstrates that self-stabilization is at work to counteract any tendency for the mapping errors to accumulate during iterative mappings (or "calculations"). The third theorem quantifies the variance of the error distribution. This mapping error distribution is similar to a Bernoulli distribution, which quantifies the probability of a certain number of heads or tails in a sequence of random coin tosses. Corresponding to a head is a random mapping to the left nearest neighbor, and to a tail a mapping to the right nearest neighbor. Each sequential trial corresponds to an iterative, intermediate mapping along the way to the result.
D Arithmetic tm Addition - Theorem on Bounded Maximum Error
The worst case error, en, from any number, n, of iterative D Arithmetic tm addition "calculation" mappings is:
en ≤ ± Σ_{k=0}^{n} ak * (d_xk,1k * D_1k,i1j1)
where:
|ak| < pmax
D_1k,i1j1 ≤ 1/pmax
d_xk,1k ≤ ½
with Probability(en ≤ ½/pmax + n/pmax ± Σ_{k=2}^{n} ak/pmax^k) → 0 as n → K
where K is set such that Probability(ek) ≈ 0.
D Arithmetic tm - Central Theorem on Probabilistic Error Tolerance
The error, en, from any number, n, of D Arithmetic tm addition "calculation" mappings is self-stabilizing such that:
Probability(en > 1/pmax) → 0 as n → ∞
decreasing rapidly at the rate of 1/2^n.
D Arithmetic tm - Bounded Variance of Iterative Calculation
The variance, σn², of the error distribution, en, resulting from many iterative D Arithmetic tm additions of n numbers, is bounded such that:
σn² ≤ 1/pmax
"Infinite" Precision Addition Calculations:
y3 = xl + x2
= (nl/pl +/- (d xl.lk D Ik, iljl) ) + (n2/p2 +/- (dx2,21 D 21, i2j2)
= (nl/pl + n2/p2) +/- (d xl.lk D Ik, iljl) +/- (dx2,21 D 21, i2j2)
= n3/p3 +/- (d x3,13 D 13, i3j3) +/- (d xl.lk D Ik, iljl) +/- (dx2,21
D 21, i2j2)
That is, the true value is related to the D Arithmetic tm mapping by:
exactSum = D Arithmetic tm mapping +/- e3 y3 = n3/p3 +/- (dx3,13 D13,i3j3)+/-(dxl,lk
Diljl)+/-(dx2,21 D21,i2j2)
e3 =< +/- / /pmax +/- 'Λ/pmax +/- 'Λ/pmax
Expected value or average = 0
Variance =
The worst case error would occur when the additional parts are additive or subtractive....
Therefore:
Prob(y3 = n3/p3 +'/2/pmax + '/i/pmax + ^/pmax) = (!/2)3
Prob(y3 = n3/p3 -'/2/pmax - !/2/pmax - '/./pmax) = (Y)3
k-1
yk= a xi
i=0
k-1
a (ni/pi+/- (dxl,lk D Ik, iljl))
i=0 = a (ni/pi) +- a (d xl,lk D Ik, iljl)
en =< a (d xl,lk D Ik, iljl)
Prob(en >= ) = combinations of m aligned in sequence length n =
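The self-stabilization argument above can be illustrated with a small simulation (a sketch in Python, not part of the patent; the uniform grid with spacing `step` stands in for the 1/pmax neighbor spacing, and all names are mine). Each intermediate sum is mapped to a randomly chosen adjacent grid point, as in the coin-toss mapping rule:

```python
import random

def map_to_neighbor(x, step, rng):
    """Round x to one of its two nearest grid points, chosen at random."""
    lo = (x // step) * step
    return lo if rng.random() < 0.5 else lo + step

rng = random.Random(42)
step = 1e-3                    # stand-in for the 1/pmax neighbor spacing
terms = [rng.uniform(0, 1) for _ in range(1000)]

exact = sum(terms)             # "infinite" precision reference
mapped = 0.0                   # iteratively mapped running sum
for t in terms:
    mapped = map_to_neighbor(mapped + t, step, rng)

err = abs(mapped - exact)
worst_case = len(terms) * step  # every rounding pushed the same direction
```

Because the left/right choices behave like fair coin tosses, the accumulated error stays orders of magnitude below the all-additive worst case of n/pmax, consistent with the claim that the worst case is practically unreachable.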
3.8 D Arithmetic tm on Big Numbers
The number of D Arithmetic mappings that can be implemented is necessarily limited. Limitations occur because of physical constraints of the computing system and/or network used to implement the D Arithmetic. For example, consider a D Arithmetic tm table with 10^8 cells (100 megabytes) that was generated from a D Scale tm with a four-digit maximum prime number. A computer might not have enough memory to allocate an in-memory table of that size.
A set of numbers that is bigger than the constraint of any particular D Arithmetic can still use D Arithmetic operations. Each number is represented as a composition of smaller numbers that do have representations in the D Arithmetic table. For example,
0.123456789 = 0.12345 + 0.6789 * 10^-5
More generally, if:
A = b + c*10^k
A' = b' + c'*10^k'
and b, c, b', c' are elements of a D Arithmetic table, then:
A * A' = (b + c*10^k) * (b' + c'*10^k')
= b*b' + b*c'*10^k' + b'*c*10^k + c*c'*10^(k+k')
Therefore, A * A' is mapped into the result of the four D Arithmetic multiplication and three addition operations above. The same technique is applicable to other arithmetic operations on big numbers. There is a slight performance penalty vis-a-vis the simpler single mapping, because a sequence of consecutive mappings is required.
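The decomposition identity can be checked numerically. This sketch (Python, not from the patent; the 5-digit split and all names are illustrative) splits each factor as b + c*10^-5 and reassembles the product from the four smaller multiplications and three additions:

```python
def decompose(x, split):
    """Split x = b + c * 10**(-split), with b holding the leading digits."""
    scale = 10 ** split
    b = int(x * scale) / scale          # leading digits of x
    c = (x - b) * scale                 # trailing digits, renormalized
    return b, c

# A = b + c*10**-5,  A' = b' + c'*10**-5
A, Ap = 0.123456789, 0.987654321
b, c = decompose(A, 5)
bp, cp = decompose(Ap, 5)

# Product reassembled from four smaller multiplications (three additions);
# in the patent's scheme each piece would be a table mapping instead.
prod = b*bp + b*cp*1e-5 + bp*c*1e-5 + c*cp*1e-10
```

Here each partial product is small enough to live in a modest table, at the cost of several consecutive mappings per big-number operation.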
3.9 D Arithmetic tm for Transcendental/Trigonometric Functions
Calculations involving trigonometric or transcendental functions can also be performed using D Arithmetic mappings. The key difference between arithmetic and these operations is that the mappings into arithmetic values are decoupled from their orders-of-magnitude. That is, the same pair of normalized measurements will map to one normalized value, regardless of the orders-of-magnitude that the pair may have. In contrast, trigonometric and transcendental functions map to different values depending on the order of magnitude of the input. For instance, sin(q) and sin(10*q) do not map proportionally to a value, and to 10 times that value, respectively. The D Arithmetic accommodates this tight coupling between order-of-magnitude and normalized value by using a series of mapping sets, with each set applicable to a set of normalized values and a specific order-of-magnitude range. Thus, one 1-dimensional D Arithmetic table would map normalized values with an order-of-magnitude of 10^0, while another table would map the same set of normalized values, but with an order-of-magnitude of 10^1.
Further, transcendental and trigonometric functions of large measurement numbers can be decomposed into smaller components using the standard operation decompositions for transcendental and trigonometric functions. For example,
sin(q1 + q2*10^k) = sin(q1)*cos(q2*10^k) + cos(q1)*sin(q2*10^k)
Then, the D Arithmetic sine and cosine tables for sin(q1) and cos(q2*10^k) can be mapped into their D Scale values. That pair is then mapped into a D Scale value using the D Arithmetic multiplication table. The same procedure is applied to the second term, cos(q1)*sin(q2*10^k). Finally, the mapping results of each term are input to a D Arithmetic addition mapping set, to yield the final result.
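The decomposition above is the standard angle-addition identity, so a table-driven result can be validated against direct evaluation (illustrative Python; `sin_decomposed` is a hypothetical name standing in for the chain of table lookups):

```python
import math

def sin_decomposed(q1, q2, k):
    """Evaluate sin(q1 + q2*10**k) via the angle-addition identity, so each
    factor could in principle come from a per-magnitude D Arithmetic table."""
    big = q2 * 10**k
    return math.sin(q1) * math.cos(big) + math.cos(q1) * math.sin(big)

q1, q2, k = 0.1234, 0.4321, 1
direct = math.sin(q1 + q2 * 10**k)
composed = sin_decomposed(q1, q2, k)
```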
4. Validating the D Arithmetic tm
A methodology is presented to validate the result of iterative mapping of measurements using the D Arithmetic. This validation provides assurance that the intermediate results are indeed self-stabilizing, and that the end result is accurate for any number of iterative "calculation" mappings.
The validation procedure is to compare the intermediate result after each iterative step against intermediate results using the benchmark of infinite, or at least high, precision numbers. More specifically, a given set of measurements is processed using the D Arithmetic, the usual infinite (or at least high) precision arithmetic, and the usual arithmetic using the built-in "double" or "float" data types. Three traces are then generated of the intermediate results from iteratively applying the same arithmetic operations on the data set, using the three techniques. That is, one trace is generated by the iterative mappings of the D Arithmetic, the second using the infinite precision arithmetic, and the third using the finite precision arithmetic.
Iterations    D Arithmetic tm      Standard (e.g., double)       "Infinite" Precision
1             true value#1 ± D     true value#1 ± D' roundoff    true value#1
m             true value#m ± kD    true value#m ± 10^k D'        true value#m
Note that after m iterations, the D Arithmetic results remain within a tight range about the true value. In contrast, the usual finite precision arithmetic continues to lose precision. The D Arithmetic interval about the actual value can increase in width as more iterations are performed, but this occurs at a slow pace and is highly improbable. Empirical results as provided in section [ ] confirm this analysis.
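Two of the benchmark traces can be generated with ordinary doubles and exact rational arithmetic. A minimal sketch in Python (not from the patent; `fractions.Fraction` plays the role of the "infinite" precision benchmark, and the data values are illustrative):

```python
from fractions import Fraction

# Iteratively apply the same operation at two precisions and trace the drift.
data = [Fraction(1, 3), Fraction(1, 7), Fraction(1, 11)] * 30

exact = Fraction(1)        # exact rational ("infinite" precision) trace
approx = 1.0               # standard double precision trace
trace = []                 # absolute drift of the double at each step
for x in data:
    exact *= (1 + x)
    approx *= (1 + float(x))
    trace.append(abs(approx - float(exact)))
```

The trace records how far the double-precision intermediate drifts from the exact value after each step; the drift stays within the accumulated round-off bound, which grows with the iteration count.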
5. Sensitivity of D Arithmetic tm to Random Number Generators
The randomness quality of numbers generated by various pseudo-random number
generators can vary wildly. The statistical impact on iterative calculations is to
create more of an imbalance or a bias in the mapping of each sample point to its left or right neighbor. But this bias can be mitigated by increasing the precision of the D Scale used for the D Arithmetic, by using higher numbered primes.
6. Implementing D Arithmetic Mappings
The D Arithmetic tm mappings can be implemented in several ways, depending on the local and networked systems resources available. Three approaches are presented that are feasible with current technology, in order of
increasing memory capacity at the cost of a more involved implementation. As technology advances, other techniques should be possible as well. This list does not preclude those cases. The goal of each implementation is to map any two entries in a pair of D Scales into another entry in a D Scale tm.
6.1 In-Memory Tables
This is the most straightforward approach. On a single CPU, the operating
system is instructed to allocate memory within a running process's address space
for the required D Arithmetic table(s). This memory is then used as arrays to store
the normalized values and the order-of-magnitude offsets. For example, a 4x4 table could contain a D Scale with maximum prime 1001. Then at least 100 megabytes of memory would have to be allocated. This is well within the capabilities of current computer servers, in the range of $5k and up.
6.2 Memory Mapped IO Tables
An extension of the technique above is required if the table size increases to the extent that the operating system cannot allocate more memory. For example, a 6x6 D Arithmetic tm table would require a minimum of 1 gigabyte of main memory, clearly only feasible today on very high-end computers. An efficient and easily implemented alternative is the use of memory mapped I/O. The operating system provides the memory as needed by automatically mapping pages from a disk-resident mapping file. This is done transparently to the application software. This is a powerful yet commonly available technique, used extensively on Microsoft NT and Sun Solaris operating systems.
6.3 Networked Tree of Tables
When the precision of a D Arithmetic mapping is greater than what one computer host can provide, then a network of computers might be considered.
The network is structured logically as a tree. At each branch is the direction indicator for the subsets of digits [ to be continued].
6.4 Parallel Processing
The arithmetic mappings are decoupled from one another, each dependent on only two inputs. That lends itself well to parallel processing. Mainstream, industry standard parallel processing techniques can be applied to the implementation techniques described above.
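As a sketch of the table implementations in sections 6.1 and 6.2 (illustrative Python, not from the patent; the table contents, sizes and file name are hypothetical), a one-dimensional table of doubles can be written to disk once and then accessed through memory-mapped I/O, so the operating system pages entries in on demand instead of the process allocating the whole table:

```python
import mmap
import os
import struct
import tempfile

# Build a toy table of doubles on disk; a real D Arithmetic table would hold
# the normalized mapping values and order-of-magnitude offsets.
entries = [i / 997.0 for i in range(100000)]
path = os.path.join(tempfile.mkdtemp(), "dtable.bin")
with open(path, "wb") as f:
    for v in entries:
        f.write(struct.pack("<d", v))    # 8-byte little-endian double

# Memory-map the file read-only; pages are faulted in only when touched.
f = open(path, "rb")
table = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

def lookup(i):
    """Fetch entry i: an 8-byte little-endian double at offset 8*i."""
    return struct.unpack_from("<d", table, 8 * i)[0]
```

The same `lookup` interface would sit in front of an in-memory array (6.1), a memory-mapped file (6.2), or a networked tree of tables (6.3).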
7. Benefits of D Arithmetic tm / D Scale tm to Process Noisy Data
Noise sampling differs fundamentally from deterministic signal sampling. Whereas signal sampling has a relaxed constraint on the number and location of sample points used, noisy data requires that all of its points be sampled to ensure a faithful representation. A commercially viable and attractive approach to noise processing is provided by the combination of the D Arithmetic tm, the D Scale tm's irregular sampling, and the mathematical kernel decompositions made possible by the D Scale. Noise processing typically includes the determination of its spectral power distribution, noise reduction and cancellation, noise filtering and signal extraction from noise. The D Arithmetic can be used in industry standard noise processing algorithm implementations. It can provide the following benefits that are generally limited in current mainstream industry practice:
7.1 Minimal Bias from Quasi-random Sampling
Irregular, non-uniform, quasi-random sampling of noise is inherently
preferable to uniform sampling of the noise. Uniform sampling of noisy data can produce biases or artifacts not present in the actual data.
One bias occurs because the points between the uniformly sampled points
are implicitly assigned values that are extrapolations of the uniformly sampled
values. But random data, by definition, does not have predictable, deterministic,
values derived from correlation to a neighboring point. Thus, an implicit
correlation leads to an artifact.
A second bias occurs because it is generally not practical, cost-effective or feasible to uniformly sample noisy data at the high rate (or bandwidth) required to authentically replicate a noise signal. Therefore a bias occurs because the sampling rate is generally not high enough. In contrast, irregular sampling using the D Scale tm has an effective bandwidth orders of magnitude higher than the maximum possible sampling rate of the D Scale tm used.
7.2 Reduced Round-off Error Artifact in Noise Processing
The round-off errors inherent in standard arithmetic based noise calculations
effectively add new noise to the measurements. Further, more accurate noise processing requires the use of as large a data set of raw samples as is possible. But that only adds more effective noise from the unbounded round-off errors.
Arithmetic based error bias must be either bounded or managed to prevent additional bias in noise calculations.
7.3 Enables Processing of Complete Noise Data Set
Typically, large sample data sets are partitioned into smaller sets for noise processing and analysis. The main reason is that this is the only way to bound the explosion of round-off errors. But this introduces distortions in the analysis, because the sets of data results must then be somehow combined in a manner that never existed in the original data. Further, the impact of bounding
each set is to create more artifacts due to the abrupt discontinuity at the partition
boundaries. The industry standard use of "windows" to mitigate this partition
boundary effect can only go so far. 8. Commercial Noise Processing Applications of the D Arithmetic tm/ D Scale tm
8.1 Noise Cancellation
The noise spectrum consists of its amplitude and whatever meaningful coherent phase, both as a function of frequency. The amplitude squared is the noise power, which is useful for filtering and signal detection. And the phase, or degree of correlation to other noisy signals, is useful for adaptive equalization and noise cancellation. The combined use of the D Arithmetic tm, D Scale tm, and the D Scale tm derived Fourier and wavelet kernel transformation decompositions enables the noise spectrum to be authentically and precisely calculated. Authenticity is provided by the D Scale derived Fourier and wavelet kernel transformation decompositions. And precision stability is provided by the D Arithmetic tm mapping. The decomposition indicated below is at least as useful as the standard Karhunen-Loève decomposition [ ].
Noise cancellation injects anti-noise into a communications channel to interfere destructively with the ambient noise in the channel and thereby reduce the quiescent noise level of the channel. While there are many patented noise cancellation techniques, this one relies on the D Scale tm and D Arithmetic tm to provide a true replica of the noise that will be canceled.
8.2 Adaptive Equalization
Adaptive equalization is a process by which the communications characteristics (the transfer function, in communications parlance) of a channel are dynamically changed. This is typically done by changing the coefficients corresponding to the taps in a transverse filter. Its benefit is in equalizing the uneven amplitude and phase distortions due to a communication channel's typically nonlinear transfer function characteristic.
As in the case above, the D Arithmetic tm and D Scale tm provide accurate modeling of the communications channel. That can be used to increase the accuracy of changes to the taps in existing, industry standard and practiced, adaptive equalization implementations.
A technique to characterize a communication channel is described. A set of signals corresponding to the D Scale tm primes is applied to the channel. Within each prime scale, the phase, frequency and amplitude are varied. The outputs for each input signal are then merged to form a realistic transfer function, or spectrum, of that channel. Further, the signals can be reapplied periodically, to dynamically readjust the channel's spectrum. This works because any signal can be decomposed into these prime signals. Therefore, it can be made yet more accurate by using the signal's decomposition, if it is known a priori.
8.3 Signal Detection
Signal detection involves the determination, within a pre-specified high probability, that a signal is present in noisy data. Practical implementations of signal detection devices rely on mathematical models of the expected noise. Typically, the model is simplified by assuming that the noise has a Gaussian distribution of amplitude values. In contrast, the D Arithmetic tm and D Scale tm require no such simplifying assumption.
8.4 Synthesis of Noisy Signals
A ramification of the combined D Arithmetic tm and D Scale tm, including the Fourier transform decomposition, is that signals and noise can be generated using their signal/noise decomposition equations.
9. Other Commercial Applications of the D Arithmetic tm
Several immediately applicable commercial uses of the D Arithmetic tm and
D Scale tm sampling are noted.
9.1 2D/3D Image Processing
2D and 3D images are processed similarly to 1D signals. The spectrum calculated is of space as a function of spatial frequency, instead of time as a function of temporal frequency. As with 1D signals, processing includes filtering and noise reduction as well as edge determination. In particular, the D Arithmetic combined with the D Scale enables the calculation of a Fourier Transform of an image without block partitioning.
The industry standard image processing practice is to partition an image into many blocks [ ]. A primary reason is to minimize the impact of round-off errors when performing many FFT calculations from 2D or 3D samples. But this leads to degradation in the generated image due to the appearance of artifacts in the image at the boundaries of the blocks.
10. Example of Construction of a D Arithmetic tm
10.1 Construction of a D Scale tm
The following steps are taken to create a D Scale tm with a specified
measurement resolution for an application domain. 10.1.1 Select Primes for the D Scale
1. A set of prime numbers must first be selected for their corresponding sub-scales
to comprise the D Scale tm. This decision is based on the needed maximum and
average resolution or precision. The maximum possible resolution is determined by
the reciprocal of the highest prime used. And the average resolution is determined
by the density of points resulting from the union of these prime scales. A sparse set of primes can achieve the desired maximum possible resolution, but it then misses the main benefit of a well populated D Scale tm, which is leverage in creating an effective resolution orders of magnitude greater than is possible using the standard uniform decimal scale.
In this case, 11 sub-scales are used, with these prime numbers:
// Mathematica notation.
10.1.2 D Scale tm Construction
For each prime number selected above, there is a corresponding set of points on the unit interval of the form k/p, where p is a prime and k = 1, 2, ..., (p−1). Thus,
p2 = {0, 1/2, 1}
p3 = {1/3, 2/3}
...
p37 = {1/37, 2/37, ..., 35/37, 36/37}
Note that the endpoints, 0 and 1, are included in the D Scale tm and assigned by convention to the first prime. A D Scale tm on the unit interval is constructed by concatenating, or merging, all of the points in all of these sets associated with the prime scales comprising the D Scale tm. The resulting set of points comprising the D Scale tm is:
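The merge described above can be sketched as follows (Python rather than the patent's Mathematica, not from the patent itself; exact rationals avoid any round-off in the scale points, and the prime list 2 through 37 is assumed from the sub-scales shown):

```python
from fractions import Fraction

def build_d_scale(primes):
    """Merge the sub-scales {k/p : k = 1..p-1} for each prime; by convention
    the endpoints 0 and 1 are assigned to the first prime's sub-scale."""
    points = {Fraction(0), Fraction(1)}
    for p in primes:
        for k in range(1, p):
            points.add(Fraction(k, p))
    return sorted(points)

scale = build_d_scale([2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37])
```

Because each interior point k/p is already in lowest terms, the prime sub-scales share no interior points, so the merged scale has 2 + Σ(p−1) distinct points.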
10.1.3 Determination of Measurement Resolution
The resolution of the scale is determined as the minimum distance between adjacent scale points. The Mathematica vector "delta" below contains these resolutions as a function of position in the D Scale tm.
Its graph of resolution vs. position in the D Scale tm is as follows:
The maximum possible resolution of points with this scale is 1/37 (0.027027), where 37 is the highest prime in this D Scale tm. But as this graph shows, the effective resolution, determined by the mean and variance of the distribution of deltas, is finer, at 0.0075. This graph highlights one of the key features of the D Arithmetic tm: far fewer points than a uniform sampling scale requires can yield higher resolution.
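The delta vector and its mean can be computed directly from the merged scale (a sketch in Python rather than the patent's Mathematica; the 2-through-37 prime list is an assumption based on the sub-scales shown earlier):

```python
from fractions import Fraction

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
points = sorted({Fraction(0), Fraction(1)}
                | {Fraction(k, p) for p in primes for k in range(1, p)})

# Resolution profile: distance between each pair of adjacent scale points.
delta = [b - a for a, b in zip(points, points[1:])]
mean_delta = sum(delta, Fraction(0)) / len(delta)
```

The mean spacing of the merged scale is well below the 1/37 spacing of the densest single sub-scale, which is the leverage effect described above.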
10.2 Construction of a D Arithmetic tm Table from the D Scale tm
10.2.1 Cross Product of D Scale tm
First, the cross product of the just constructed D Scale tm with itself is calculated, using standard decimal multiplication. That is, each point in this D Scale tm is multiplied with each point in the D Scale tm (including itself).
10.2.2 Normalize Results
Then each result, now in decimal format, is normalized such that each number is greater than 0.1. If a number is not, like 0.002345, then its order of magnitude is saved along with its normalized value: 0.2345 and 10^-3.
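The normalization step can be sketched as follows (illustrative Python, not from the patent; the convention here returns m with 0.1 <= m < 1 and the matching power-of-ten exponent e, so that m * 10^e reproduces the input — the patent's example records the order of magnitude under a slightly different convention):

```python
import math

def normalize(x):
    """Return (m, e) with x = m * 10**e and 0.1 <= m < 1 (x > 0 assumed)."""
    e = math.floor(math.log10(x)) + 1
    m = x / 10**e
    return m, e

m, e = normalize(0.002345)   # mantissa 0.2345 plus its exponent
```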
10.2.3 Map Normalized Values to D Scale tm
Finally, each normalized number is mapped to its left or right nearest D Scale tm neighbor based on a random boolean value. The implementation of this mapping is shown in the following section.
A sample is shown below of rows 2 and 3 of the D Arithmetic tm tables containing the D Scale tm mappings of normalized cross products, and of the associated orders-of-magnitude. Note that in other implementations, both the order of magnitude and the D Scale tm values can be part of the same table, if a table data structure is used to implement the mappings.
10.3 Mapping Sample Points to the D Scale tm
Each sample point is mapped to a D Scale tm point using the algorithm described, and proved, in the D Scale tm patent application [ ]. The full implementation of that algorithm is presented below, in Mathematica. Note the use of the Random function to decide which of the right or left D Scale tm neighbors of a sample point to map to.
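A Python transcription of that mapping rule (illustrative only, not the patent's Mathematica implementation; a small 2-3-5-7 scale and all names are mine):

```python
import bisect
import random
from fractions import Fraction

def map_to_scale(x, scale, rng):
    """Map sample x to one of its two nearest scale neighbors, chosen at
    random, mirroring the random left/right mapping rule described above."""
    i = bisect.bisect_left(scale, x)
    if i == 0:
        return scale[0]
    if i == len(scale):
        return scale[-1]
    if scale[i] == x:          # x is already a scale point
        return x
    # otherwise x lies strictly between scale[i-1] and scale[i]
    return scale[i - 1] if rng.random() < 0.5 else scale[i]

primes = [2, 3, 5, 7]
scale = sorted({Fraction(0), Fraction(1)}
               | {Fraction(k, p) for p in primes for k in range(1, p)})
rng = random.Random(7)
y = map_to_scale(Fraction(29, 100), scale, rng)   # lands on 2/7 or 1/3
```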

Claims

What is claimed is:
1. A method for signal processing comprising performing calculations comprising mappings from the domain of sets of points in a D scale to a range of
points in a D scale, and an algorithm for mapping measured or sampled values into
a D scale.
2. The method of claim 1 further comprising controlling the variance of resolution by using a plurality of D sub-scales to fill the D scale.
3. The method of claim 2 wherein such mappings are multidimensional arithmetical operations.
4. The method of claim 2 wherein such mappings are generated from floating point arithmetical operations.
5. The method of claim 2 wherein such mappings are stored in a read only device.
6. The method of claim 2, wherein discrete kernel transforms of sample points
are calculated.
7. The method of claim 2 wherein an interactive calculation with decomposition of kernel integrals is performed.
8. The method of claim 2 wherein said D sub-scale provides discrete points of the form n/p < 1, where p is prime.
9. The method of claim 2 wherein said sub-scale points comprise a table
whose cells contain the values from a D scale.
10. The method of claim 8, wherein said table is in a memory, a file, a database table, or distributed over a network.
11. The method of claim 8, wherein said table is distributed over an optical fiber interactive network.
12. The method of claim 8, wherein such mappings are calculated when
needed.
13. The method of claim 7 further comprising mapping measurements by selecting a D scale point adjacent to an actual data value.
14. The method of claim 11, wherein the method of selecting a D scale point is accomplished by randomly selecting an adjacent higher point or an adjacent lower
point.
15. The method of claim 13, wherein the method of selecting a D scale point is
accomplished by selecting the nearest D scale value.
16. The method of claim 13 wherein the D scale points corresponding to the measurements are determined by a D scale measurement sampling algorithm.
17. Apparatus for processing a digital signal, comprising a computer network
having a memory storing a table of D scale values and an algorithm for mapping computed numerical values into values in said table said apparatus lacking an arithmetic logical unit during creation, said algorithm lacking the step of moving
data from a memory to registers to an arithmetic logical unit.
18. Apparatus for processing an n-dimensional digital signal, comprising a
computer network having D scale arithmetic instance tables generated to cover a
range of actual or expected data and a predetermined required precision.
19. Apparatus for characterizing n-dimensional noise, including its energy and phase distributions over frequency bands comprising a computer network having a
table of D scale values and an algorithm for mapping computed numerical values into values in said table said apparatus lacking an arithmetic logical unit.
20. The apparatus of claim 17, wherein said algorithm alleviates the need to
move data from a memory to registers to an arithmetic logical unit.
21. Apparatus for characterizing noise and its identifying signature, including its energy and phase distributions over frequency bands comprising a computer
network having a memory storing a table of D scale values and an algorithm for mapping computed numerical values into values in said table said apparatus lacking an arithmetic logical unit.
22. Apparatus for dynamically generating anti-noise from original noise and canceling said noise comprising a computer network storing a table of D scale values and an algorithm for mapping computed numerical values into values in
said table said apparatus lacking an arithmetic logical unit.
23. Apparatus for determining noise signature for adaptive equalization and
signal separation comprising a computer network storing a table of D scale values and an algorithm for mapping computed numerical values into values in said table said apparatus lacking an arithmetic logical unit.
24. Apparatus for sampling noise while preserving its correlation signature
comprising a computer network storing a table of D scale values and an algorithm
for mapping computed numerical values into values in said table said apparatus
lacking an arithmetic logical unit.
25. Apparatus for noise processing comprising D scale arithmetic with D Scale irregular sampling and mathematical kernel decomposition comprising a computer network storing a table of D scale values and an algorithm for mapping computed numerical values into values in said table said apparatus lacking an arithmetic logical unit.
26. Apparatus for determining the spectral power dissipation of noise and its temporal or spatial correlations, and applying such determination to noise reduction, comprising a computer network storing a table of D scale values and an algorithm for mapping computed numerical values into values in said table, said apparatus lacking an arithmetic logical unit.
27. Apparatus employing D Arithmetic and a D Scale derived Fourier, wavelet or other general mathematical kernel transformation to decompose a noise spectrum, comprising a computer network storing a table of D scale values and an algorithm for mapping computed numerical values into values in said table, said apparatus lacking an arithmetic logical unit.
28. Apparatus for detection of a signal from noisy data using a D scale.
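Claim 28 covers detecting a signal in noisy data using a D scale. One standard approach to such detection — offered here only as an illustrative stand-in, since the claim does not fix a particular detector — is to correlate the samples against a known template and compare the score to a threshold; in the claimed apparatus the products would be realized via D scale table lookups rather than an ALU:

```python
def detect(samples, template, threshold):
    """Illustrative matched-filter style detection: correlate noisy samples
    with a known template and report a detection when the correlation
    score meets or exceeds the threshold."""
    score = sum(s * t for s, t in zip(samples, template))
    return score >= threshold
```

With a matching template the score is large; with uncorrelated samples it stays near zero and falls below the threshold.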
29. Apparatus for digital filtering to dynamically change the transfer function of a communications channel by changing the coefficients corresponding to the taps in a transverse filter, comprising a computer network storing a table of D scale values and an algorithm for mapping computed numerical values into values in said table, said apparatus lacking an arithmetic logical unit.
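Claim 29 describes changing a channel's transfer function by updating tap coefficients in a transverse (tapped delay-line, i.e. FIR) filter. The sketch below is a generic transversal filter, not the patented apparatus: it shows only how swapping the coefficient vector changes the filter's response while the delay line is untouched.

```python
from collections import deque

class TransversalFilter:
    """Tapped delay-line (FIR) filter whose transfer function is changed
    at run time simply by replacing the tap coefficients."""
    def __init__(self, taps):
        self.taps = list(taps)
        # Delay line holds the most recent inputs, newest first.
        self.delay = deque([0.0] * len(taps), maxlen=len(taps))

    def set_taps(self, taps):
        # Replacing the coefficients replaces the transfer function.
        assert len(taps) == len(self.taps)
        self.taps = list(taps)

    def step(self, x):
        self.delay.appendleft(x)
        return sum(c * d for c, d in zip(self.taps, self.delay))
```

Feeding an impulse through the filter reproduces the tap vector as its impulse response; after `set_taps`, the same input yields the new response.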
30. The apparatus of claim 26, wherein uneven amplitude and phase distortions
due to a communication channel's nonlinear transfer function characteristic are
changed.
31. The method of generating signals and noise from signal/noise
decomposition equations by including a kernel decomposition based upon D Arithmetic and D Scale algorithms.
32. The method of claim 31 wherein the kernel is a Fourier kernel, wavelet or other general mathematical kernel.
33. The method of generating a model for signals and noise from signal/noise decomposition equations by including a Fourier decomposition based upon D Arithmetic and D Scale algorithms.
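Claims 31–33 recite kernel (e.g. Fourier) decomposition of signal/noise equations built on D Arithmetic and D Scale algorithms. As a point of reference only — the claimed method additionally maps every intermediate value onto D scale points, which this sketch omits — a naive discrete Fourier decomposition looks like:

```python
import cmath

def dft(signal):
    """Naive discrete Fourier (kernel) decomposition of a real signal.
    In the claimed method, each intermediate product and sum would be
    mapped onto D scale table values instead of computed in an ALU."""
    N = len(signal)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                for n, x in enumerate(signal))
            for k in range(N)]
```

A constant signal decomposes entirely into the zero-frequency (DC) coefficient, with all other kernel coefficients vanishing.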
34. A method for signal processing wherein the kernel transform of a 2D or 3D or ND image is processed by utilizing D Arithmetic and a D Scale without block partitioning.
EP00970970A 1999-10-28 2000-10-17 Self-stabilizing, portable and efficient computer arithmetic using mappings of d scale points Withdrawn EP1236087A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US16258899P 1999-10-28 1999-10-28
US162588P 1999-10-28
PCT/US2000/028721 WO2001033335A1 (en) 1999-10-28 2000-10-17 Self-stabilizing, portable and efficient computer arithmetic using mappings of d scale points

Publications (1)

Publication Number Publication Date
EP1236087A1 true EP1236087A1 (en) 2002-09-04

Family

ID=22586285

Family Applications (1)

Application Number Title Priority Date Filing Date
EP00970970A Withdrawn EP1236087A1 (en) 1999-10-28 2000-10-17 Self-stabilizing, portable and efficient computer arithmetic using mappings of d scale points

Country Status (5)

Country Link
EP (1) EP1236087A1 (en)
AU (1) AU8027400A (en)
CA (1) CA2390478A1 (en)
HK (1) HK1048531A1 (en)
WO (1) WO2001033335A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2012238001B2 (en) 2011-03-28 2015-09-17 Dolby Laboratories Licensing Corporation Reduced complexity transform for a low-frequency-effects channel

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US4506340A (en) * 1983-04-04 1985-03-19 Honeywell Information Systems Inc. Method and apparatus for producing the residue of the product of two residues
US4910699A (en) * 1988-08-18 1990-03-20 The Boeing Company Optical computer including parallel residue to binary conversion
GB9627069D0 (en) * 1996-12-30 1997-02-19 Certicom Corp A method and apparatus for finite field multiplication

Non-Patent Citations (1)

Title
See references of WO0133335A1 *

Also Published As

Publication number Publication date
AU8027400A (en) 2001-05-14
CA2390478A1 (en) 2001-05-10
WO2001033335A9 (en) 2002-08-01
WO2001033335A1 (en) 2001-05-10
HK1048531A1 (en) 2003-04-04

Similar Documents

Publication Publication Date Title
Revol et al. Taylor models and floating-point arithmetic: proof that arithmetic operations are validated in COSY
Mayo et al. Fast parallel iterative solution of Poisson’s and the biharmonic equations on irregular regions
Lodwick Interval and fuzzy analysis: A unified approach
US10534576B2 (en) Optimization apparatus and control method thereof
Chonev et al. On the Skolem problem for continuous linear dynamical systems
US20190361675A1 (en) Evaluating quantum computing circuits in view of the resource costs of a quantum algorithm
Hoeven et al. Modular SIMD arithmetic in Mathemagix
Arjevani et al. On lower and upper bounds for smooth and strongly convex optimization problems
Faranda et al. Analysis of round off errors with reversibility test as a dynamical indicator
Neugebauer et al. On the limits of stochastic computing
Graça et al. Computability of differential equations
US6757700B1 (en) Self-stabilizing, portable and efficient computer arithmetic using mappings of D Scale points
Li et al. Performance of the multiscale sparse fast Fourier transform algorithm
Heikkola et al. Fast direct solution of the Helmholtz equation with a perfectly matched layer or an absorbing boundary condition
Dayar et al. On the effects of using the Grassmann–Taksar–Heyman method in iterative aggregation–disaggregation
Kolev et al. Conservative and accurate solution transfer between high-order and low-order refined finite element spaces
WO2001033335A1 (en) Self-stabilizing, portable and efficient computer arithmetic using mappings of d scale points
Mantica Direct and inverse computation of Jacobi matrices of infinite iterated function systems
Mitchell et al. An empirical study of moment estimators for quantile approximation
John et al. On the analysis of block smoothers for saddle point problems
Johansson Numerical evaluation of elliptic functions, elliptic integrals and modular forms
US6832233B2 (en) Apparatus and method for computing multiple integral, and recording medium
Lanzara et al. Numerical solution of the Lippmann–Schwinger equation by approximate approximations
Zhang et al. A DISCRETIZING LEVENBERG-MARQUARDT SCHEME FOR SOLVING NONLINEAR ILL-POSED INTEGRAL EQUATIONS
Takouk et al. Improved composite methods using radial basis functions for solving nonlinear Volterra-Fredholm integral equations

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20020515

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20060503

REG Reference to a national code

Ref country code: HK

Ref legal event code: WD

Ref document number: 1048531

Country of ref document: HK