How To Calculate Runs/Inning Distribution


tangotiger posted September 7th, 2001 10:50 PM

In response to a request, I have added a second file

http://www.geocities.com/tmasc/winsrpg2.txt

This file assumes you already know how many runs your opposition has scored. This chart would be useful to answer the question "given that the Yankees scored 3 runs, how many games would a pitcher with a 4.50 ERA have won?"


BenV-L posted October 15th, 2001 04:47 AM

Here is some math for Tango's runs/inning distribution that clarifies (IMO) some of the issues raised here. Before I start that, though, a disclaimer: an accurate runs/inning distribution that accounts for variation in average runs/inning is an interesting tool. But it does not have an obvious advantage over other methods in determining runs/game distribution or in forecasting the probabilities in a game. There are problems with two steps:

* just because you can count events and make a distribution doesn't imply that the events are random variables governed by this distribution. Run/inn could be far from random. This means runs/inn doesn't obviously turn into runs/game.

* same applies for runs/game. We could in principle get the "exact" distributions by fitting all the data for runs/game binned by the average runs, and then use this to calculate probabilities for game outcomes. But again the run scoring may or may not be well modeled as a random variable.

I'm not saying that runs/inning won't give you a good prediction for runs/game, or that either of those won't predict game probabilities, but I am saying that it isn't obvious that they should. The data have to be consulted.

Okay, on with some math. Both Tango's and Woolner's distributions can be categorized as exponential for n >= 1, with n=0 taking the left over probability. That is

f_n = A x^n, for n >= 1

f_0 = 1 - sum(n=1 to inf) f_n

'x' here is Tango's dropoff rate. Woolner actually truncates his f_n to be zero for n > n_max for some n_max. There are good reasons not to do this, which I won't bother with here, but suffice it to say that it won't amount to a hill of molecules anyway.

Before we get specific to Tango or Woolner, let's derive some properties of the model just as defined above. Two sum rules are useful (the second follows from the first by differentiating with respect to x and then multiplying by x):

sum(n=1 to inf) x^n = x/(1-x)

sum(n=1 to inf) n*x^n = x/(1-x)^2

Now, we have three parameters, f_0, A, and x, and we have two constraints: the sum of f_n is 1 and the sum of n*f_n is r (which I use for runs/inn). So we are guaranteed to have one parameter free to set as we wish (within limits), and it's with this last degree of freedom that Tango and Woolner differ.

There are many equivalent ways to proceed. Let's use the normalization condition to eliminate 'A':

sum(n=0 to inf) f_n = f_0 + A*x/(1-x) = 1 ---> A = (1 - f_0)*(1-x)/x

Subbing this into our original expression gives

f_n = (1 - f_0) * (1 - x) * x^(n-1), for n >= 1

and f_0. Notice for n=1 we get Patriot's formula, so this follows for any distribution of this type (exponential with leftover probability shoved into f_0).

Now we use the constraint for the average runs to derive a relation between f_0 and x:

r = sum(n=1 to inf) n A x^n = A x/(1-x)^2 ---> A = r(1-x)^2/x

Matching the values of A gives us

f_0 = 1 - r*(1-x)

So as long as we pick x and f_0 to satisfy this equation, we are guaranteed to get the correct average. We could pick one of these ourselves, and then the other would follow. If we take 'x' to be our choice, then we can write everything in terms of 'x', giving

f_n = r (1-x)^2 x^(n-1), for n >= 1

f_0 = 1 - r (1-x)

Or we could take f_0 to be our choice and write everything in terms of it:

f_n = [(1 - f_0)^2/r] [(r - 1 + f_0)/r]^(n-1), for n >= 1

and f_0 as chosen.
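
To make these formulas concrete, here is a minimal Python sketch of the x-parametrized version (the naming is mine, not from the thread; n_max is just a cutoff for the infinite tail):

def f_dist(r, x, n_max=20):
    # f_0 = 1 - r*(1-x); f_n = r*(1-x)^2 * x^(n-1) for n >= 1
    f = [1 - r * (1 - x)]
    f += [r * (1 - x)**2 * x**(n - 1) for n in range(1, n_max + 1)]
    return f

# quick check: the probabilities sum to ~1 and the mean is ~r
f = f_dist(r=0.5, x=0.45, n_max=200)
print(sum(f), sum(n * p for n, p in enumerate(f)))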

Now what do we do with this last degree of freedom? We should try to fit it to the data, which would then tell us how x (or f_0) should vary with r.

Here's what Tango gets:

x = (1 - c + c*r)/(1 + c*r)

f_0 = 1/(1 + c*r)

where c is the number he was using as a parameter (originally 0.73). You can see that this satisfies the condition above necessary for the average to be r. Now, Tango's 'x' as a function of 'r' may capture some of the 'r' dependence, but from his own fits he found a little variation in 'c' with 'r', which means the total 'r' dependence in 'x' is not explicit.

Here's what Woolner gets:

x = e^(a + b/(9 r) )

with a=-0.3865, b=-1.813, which translates to

x = 0.6794 e^(-0.2014/r)
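
To get a feel for how the two fits compare, a small sketch (using the constants quoted above; the function names are mine):

import math

def x_tango(r, c=0.73):
    # Tango: x = (1 - c + c*r) / (1 + c*r)
    return (1 - c + c * r) / (1 + c * r)

def x_woolner(r, a=0.3865, b=0.2014):
    # Woolner: x = exp(-a - b/r) = 0.6794 * exp(-0.2014/r)
    return math.exp(-a - b / r)

for r in (0.40, 0.50, 0.60):
    print(r, round(x_tango(r), 4), round(x_woolner(r), 4))

Either way, f_0 then follows from f_0 = 1 - r(1-x).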

Personally, I think the best thing to do would be to take Woolner's binned data and fit an 'x' to each range of runs (best is to simultaneously fit f_0 and x). This gives a small set of points for x as a function of r, and then this can probably be fit well with one or two parameters and the appropriate functional dependence on r (be it linear, quadratic, exponential). I think you could do no better with this class of functions.

Now a final comment: it appears that both Tango and Woolner use simulations to get a run/game distribution from their run/inn distribution (which involves the assumption that runs/inn is a random variable, as I mentioned above). Simulations are the harder and less accurate way to do this - I wrote Woolner an email about this but he didn't seem to notice it. Simulations are useful when the number of cases is too large to enumerate - then you just randomly pick a bunch of cases and hope they represent a typical set. But it's easy to enumerate all the various ways to score, say, 5 runs in 9 innings. Once you've done this, you just calculate the probability for each possibility and sum them up. It's child's play for a computer and the numbers are exact (given the assumptions of the model).

In fact, you can do some combinatorics by hand to do a lot of the computer's work for it, and get a more or less analytic expression. Writing things in terms of the parameters f_0, A, and x for simplicity, the chance for 0 runs in 9 innings is

p_0 = f_0^9

and the chance for n runs is

p_n = x^n f_0^9 sum(j=1 to min(9,n) ) B(n,j)*(A/f_0)^j

where min(9,n) means the minimum of 9 and n, and the B(n,j) are just numbers resulting from the combinatorics. Specifically

B(n,j) = (9 choose j)*(n-1 choose j-1) = 9! (n-1)!/[(9-j)! j! (n-j)! (j-1)!]

The value of j in the sum above is the number of innings out of 9 where at least 1 run is scored. The (9 choose j) = 9!/[(9-j)! j!] is just the number of ways to order j scoring innings and 9-j shutout innings. When there are j innings with run scoring, each of them must have at least 1 run, and there are n-j runs left over to allocate among the j innings. This gives the (n-1 choose j-1) factor. And that's it.
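
That formula is easy to code up directly. A sketch (the naming is mine; math.comb supplies the binomial coefficients):

from math import comb

def prob_n_runs(n, f0, A, x, innings=9):
    # p_n = x^n * f0^9 * sum_j B(n,j) * (A/f0)^j, with j = number of scoring innings
    if n == 0:
        return f0 ** innings
    total = 0.0
    for j in range(1, min(innings, n) + 1):
        B = comb(innings, j) * comb(n - 1, j - 1)
        total += B * (A / f0) ** j
    return x ** n * f0 ** innings * total

With A = r(1-x)^2/x and f0 = 1 - r(1-x) from before, the p_n should sum to 1 over all n.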




David Smyth posted October 15th, 2001 09:43 AM

Even though I hardly understand any of it, I know it must be good work, Ben!


tangotiger posted October 15th, 2001 10:13 AM

I concur with David.

First off, runs/inning as I present it is far simpler than anything I have seen. While Ben echoed my hesitation that there is some additional relationship involved with the dropoff rate, it is not worth the extra effort in my opinion.

Both my method and Woolner's were verified against ACTUAL data, not simulated. I first ran mine against simulated data because this gave me the "extremes" so often sought in validating my kooky formulae. When I stumbled upon Keith's article, I decided to use his actual data, which is why my dropoff rate changed.

Runs/inning is randomly distributed using league numbers, but different types of hitting teams will have different runs/inn distributions. An all-HR team will be different from an all-SB team. I'm working on creating an RE/WE/LWTS matrix given all the hitting events as variables (i.e., it'll generate the RE for, say, 1912 without play-by-play data). It's not pretty, but it gets the job done. Within the context in which we need runs/inn, though, using league numbers is sufficient.

Runs/inn does not necessarily give runs/game. But again, within the context of league numbers, it comes pretty darn close. For example, runs/inn is high at the top of the order and low at the bottom of the order. However, after the first 2 innings, as it works out, the r/i IS randomly distributed over the league.

Let me study Ben's algebra to see what he is actually saying.


BenV-L posted October 16th, 2001 02:49 AM

I'll await Tango's verdict on my algebra ;-). Seriously, it's dense and takes some digesting, and in my short time here on the board I've already learned to appreciate Tango's tenacity.

I would add a comment that simplicity is in the eye of the beholder. The math above shows how Tango's and Woolner's distributions are two peas in a pod, and offers a full description of the pod and the possibilities within it. From a mathematical point of view, that's a simplicity that you don't have yet with just one case (or pea, to belabor the metaphor).

And a question: Tango, how did you get the runs/game distribution from the runs/inn distribution? I was under the impression you did a simulation, and that was the source of my comment. Woolner explicitly says he does a simulation.

And while I don't want to rush things, we should probably continue the discussion about whether runs/inn really is a random variable. But maybe it's best to wait until we converge (or not) on the runs/inn distribution. Too many broths confuse the cooks. Or something.




tangotiger posted October 16th, 2001 10:02 AM

Yes, the r/i to r/g conversion was based on a simulator, but not a "baseball" simulator. Rather, using the r/i distribution, I simply ran a simulator to calculate what I could have done with algebra. It was easier for me to run 1 million games, and it gives me an excellent approximation.

(If I were to simulate 1 million baseball games instead, I'd have too many random numbers and calculations to generate, and my program would take forever. This way, by simply treating the r/i distribution as a random variable, I bypass all that. You can do something similar for RE or WE.)
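
A minimal sketch of this kind of non-baseball simulator (my own illustration, not the actual program; f is a runs/inning distribution like the ones above):

import random

def simulate_runs_per_game(f, games=1_000_000, innings=9):
    # draw each inning's runs from f and tally the game totals
    runs = range(len(f))
    tally = {}
    for _ in range(games):
        g = sum(random.choices(runs, weights=f, k=innings))
        tally[g] = tally.get(g, 0) + 1
    return {g: c / games for g, c in sorted(tally.items())}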


BenV-L posted October 16th, 2001 10:50 AM

There are basically 3 ways to get from r/i to r/g: one is to do simulations, but this takes a while to converge and has to be redone for each new 'r' value. The other two involve more analytic work. One is the formula I give above, which has the advantage that the r/g distribution is given directly as a function of r. The other method is to use the computer to count (not simulate). This has to be repeated for each new r, like the simulations, but it's much faster and more accurate. Here's the idea:

Suppose a game linescore went: 0 0 1 0 2 0 0 0 1. That's 1 way to score 4 runs. Given an r/i distribution and the assumption that r/i is a random variable (with f_n the probability of scoring n runs in an inning), the probability of scoring 4 runs exactly this way is f0*f0*f1*f0*f2*f0*f0*f0*f1. Of course there are many more ways to score 4 runs. So let the computer count them up and sum each individual probability, and you will get the total probability of scoring 4 runs. In pseudocode (Python-flavored) it would look like this:


# calculate the probability of scoring n runs
# r1 = number of runs scored in inning 1, etc.; f[k] = P(k runs in an inning)

prob = 0
for r1 in range(n + 1):
 for r2 in range(n - r1 + 1):
  for r3 in range(n - r1 - r2 + 1):
   for r4 in range(n - r1 - r2 - r3 + 1):
    for r5 in range(n - r1 - r2 - r3 - r4 + 1):
     for r6 in range(n - r1 - r2 - r3 - r4 - r5 + 1):
      for r7 in range(n - r1 - r2 - r3 - r4 - r5 - r6 + 1):
       for r8 in range(n - r1 - r2 - r3 - r4 - r5 - r6 - r7 + 1):
        r9 = n - r1 - r2 - r3 - r4 - r5 - r6 - r7 - r8
        prob += f[r1]*f[r2]*f[r3]*f[r4]*f[r5]*f[r6]*f[r7]*f[r8]*f[r9]

and that's it. That mess of for-loops will cycle through each possible set of r_i that sums to n and add the probabilities to get the exact (given the assumptions) probability for scoring n runs.

The analytic result I gave before basically just takes advantage of some of the properties of the exponential distribution and factors all the terms in this sum as much as possible.
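
The same counting can be written as a short recursive function, so the number of innings isn't hard-coded (a sketch of mine; memoization keeps it fast):

from functools import lru_cache

def runs_per_game(n, f, innings=9):
    # exact P(n runs), enumerating inning-by-inning splits as in the loops above
    @lru_cache(maxsize=None)
    def p(runs, left):
        if left == 1:
            return f[runs]  # the last inning takes whatever remains
        return sum(f[r] * p(runs - r, left - 1) for r in range(runs + 1))
    return p(n, innings)

Given the same f_n, the answers agree with the nested loops and with the analytic expression.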


tangotiger posted October 16th, 2001 12:24 PM

Very interesting!

(I think you used the PRE tag again. What you can do is simply put a small character, like a period, and then hit enter after it. All your other text is being lost. Otherwise, we have to "cut/paste" your line to see it.)



BenV-L posted October 17th, 2001 04:59 AM

I did some testing of the r/i distribution. I used Woolner's data, which is
binned by values of r/g. I just used his bins but calculated from his data
what the average r/i was for each bin, that is


r/g bin avg r/i
3.0-3.5 .36447
3.5-4.0 .40879
4.0-4.5 .46042
4.5-5.0 .51456
5.0-5.5 .56936
5.5-6.0 .62748
6.0-6.5 .66490


Then I took the model of an r/i distribution that is exponential for 1 or more
runs with the leftover probability given to 0 runs, which covers both Tango's
and Woolner's models. This model can be written in terms of the probabilities
f_n as

f_n = A x^n for n >= 1 [same as x = (dropoff rate) = f2/f1 = f3/f2 = ...]

and

f0 = 1 - f1 - f2 - f3 - ....

I showed before how all possible models of this type, when subject to the
condition that the average value of r/i is equal to r, can be written as

f_n = r (1-x)^2 x^(n-1) for n >= 1

f0 = 1 - r (1-x)

where I have chosen the dropoff rate x as the leftover parameter that is free
to vary (you could choose f0 instead). So now I can take these functions and
fit them to Woolner's data to determine the best x for each value of r. What
I get is the following:


r/g avg r/i best x x std-dev
3.0-3.5 .36447 .4013 .0031
3.5-4.0 .40879 .4126 .0029
4.0-4.5 .46042 .4318 .0022
4.5-5.0 .51456 .4542 .0036
5.0-5.5 .56936 .4682 .0027
5.5-6.0 .62748 .4902 .0049
6.0-6.5 .66490 .4978 .0073


A few technical details about the fitting: I set r exactly to its
average value for that particular bin. Then for each bin I calculated
the standard deviation for each f_n via

std-dev = sqrt( f_n*(1-f_n) / Npoints ),

where Npoints is the number of innings in that particular bin. I only
counted f_n where the bin contained at least 10 occurrences of n runs
scored, since otherwise the std-dev estimate is too poor to be
useful. Finally, I did a non-linear fit of the function of 'x'
given above to the data, taking into account the std-dev of the f_n.
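
For anyone who wants to reproduce this kind of fit, here is a sketch using scipy (my own code, not the original fitting script; ns and f_obs are numpy arrays of the observed n values and frequencies for one bin):

import numpy as np
from scipy.optimize import curve_fit

def fit_dropoff(ns, f_obs, npoints, r):
    # weighted fit of x in f_n = r*(1-x)^2 * x^(n-1), n >= 1, for one r/g bin
    sigma = np.sqrt(f_obs * (1 - f_obs) / npoints)  # binomial std-dev of each f_n
    model = lambda n, x: r * (1 - x)**2 * x**(n - 1)
    (x_best,), cov = curve_fit(model, ns, f_obs, p0=[0.45], sigma=sigma)
    return x_best, np.sqrt(cov[0, 0])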

Okay, now we're ready to compare formulas for how x should depend on
r. First, I just took the 7 points above and fit a linear function
and found

x = a + b*r, with a = 0.2767 and b = 0.3378

to give a very good fit (RMS of residuals weighted by std dev is
0.682). I also did a quadratic fit and found no improvement. The
data are quite linear. Then I took Tango's formula

x = (1 - c + c*r)/(1 + c*r)

and found the best fit to give c = 0.7663, with RMS = 1.079. That's a
decent fit, and Tango does it with 1 parameter instead of 2 (so he's
fitting it with one hand tied behind his back, so to speak).

Finally, Woolner's formula amounts to

x = exp(-a - b/r)

where he gets a=0.3865 and b=0.2014, but when I make a best fit with
that function I get a=0.4496 and b=0.1754, with RMS 1.63. The
difference between my fit of Woolner's function and his fit might seem
like a lot, but if you plot them you will see they are pretty close.
As to why there is a difference, well, with all due respect to Woolner
(which is plenty - his pinky knows more about baseball than I'll ever
forget, or something), he's an expert on baseball but he is less than
an expert on data analysis.

The long and short of it is that if you limit yourself to exponential
distributions and determine the dependence of the dropoff rate x on
the runs per inning average r from Woolner's binned data, the plot of
x vs r looks pretty linear and is best fit with a linear function.
Tango's function does well, but if you plot it you can see that it has
some curvature that the data just don't show. Same for Woolner's.

If you want to use this r/i distribution that I got from the linear x
vs r fit, here's a recipe along the lines of Tango's:

f0 = 1 - 0.7233*r + 0.3378*r^2

x = 0.2767 + 0.3378*r

f1 = (1-f0)*(1-x)

f2 = x*f1, f3 = x*f2, ...
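
In code, the recipe might read (a sketch; n_max is just my tail cutoff):

def fitted_dist(r, n_max=20):
    x = 0.2767 + 0.3378 * r              # linear fit of dropoff rate vs r
    f0 = 1 - 0.7233 * r + 0.3378 * r**2  # same as 1 - r*(1-x)
    f = [f0, (1 - f0) * (1 - x)]         # f0 and f1
    for _ in range(2, n_max + 1):
        f.append(x * f[-1])              # f2 = x*f1, f3 = x*f2, ...
    return f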

I would claim this is the best you can do with an exponential
distribution. Having said that, I will add that the f_n data show a
systematic trend of falling off slightly faster than exponential with n.

Oh well.



BenV-L posted October 17th, 2001 05:03 AM

Tango: the problem with the PRE tag appears to be browser-specific, because I have no trouble seeing the text with my browser. That makes it hard for me to figure out what I need to do to make it work in your browser. I made the last post with plenty of carriage returns - does that help? Or is there another way to get preformatted text without the PRE tag? If you want me to add a line like

.

should it be in the regular text field or in the PRE text field? Does it need to be repeated each time I use the PRE tags? I'm just shooting in the dark trying to fix this, so I need some help.


tangotiger posted October 17th, 2001 10:10 AM

Ben, interesting work. The one extra thing that I did over and above Woolner is that I also produced simulated data, since his sample data (extensive as it is) is limited to the 3-6 run range, while my simulator (a "real" baseball simulator) generated Pedro- to Ruth-type seasons. As I might have pointed out, and as you found, while there is the extra dependence, it isn't worth the extra complexity.

As for the PRE tag, you can also try the XMP tag. (Oops, just tried XMP, and fanhome doesn't like it.)

I would do the PRE tag as
PRE-tag
data
/PRE-tag
.
hit enter
start typing


tangotiger posted February 28th, 2002 03:11 PM

Here is the long-promised executable file that will generate a win% given 2 teams' runs/game.

http://www.geocities.com/tmasc/GameDistr.zip

It uses the "Tango Distribution".

Note that I only calculate the probability of up to 20 runs in any game, so I wouldn't try to put in a Babe Ruth type of runs-scored level. Up to 8 RPG should be sufficient.

Let me know if I can make this better....

It takes about 4 seconds to run on an 800 MHz machine.


DividedSky posted February 28th, 2002 03:32 PM

In typical Goeshi**ies fashion, if clicking the link doesn't allow you to d/l the file (which is what happened to me), copy and paste the URL into your browser address window and that should work.


tangotiger posted February 28th, 2002 03:52 PM

I should have said that you can simply right-click and "Save Target As".


DividedSky posted February 28th, 2002 04:19 PM

Oh yeah that works too


tangotiger posted March 1st, 2002 10:35 AM

File has been updated. Sorry for any problems.


tangotiger posted March 1st, 2002 10:47 AM

I've just run through that 20-bin sample I provided earlier. If you remember, what I did was take all the extreme teams and group them into 20 different bins based on run environment, etc.

Here are the results of going through my Tango Distribution program.

The r-squared is 99.8%

As well, it seems that runs per inning is probably not a random variable, as Ben surmised. To compensate for this, you can alter the control variable to 0.852 to give you better results.

These are the results using the control variable as .760 (which is optimal for runs/inning distribution)



RS/G RA/G actual-win% TangoDistr
6.361 4.721 0.638 0.633
5.634 4.178 0.637 0.629
4.583 3.435 0.624 0.617
4.991 3.826 0.621 0.612
3.961 3.134 0.591 0.592
5.302 4.543 0.575 0.568
5.921 5.046 0.572 0.572
4.744 4.108 0.568 0.561
4.283 3.681 0.567 0.562
3.791 3.392 0.550 0.544
3.742 4.278 0.438 0.445
4.114 4.754 0.433 0.439
3.330 3.940 0.428 0.433
4.550 5.283 0.426 0.435
4.985 5.941 0.419 0.421
3.482 4.553 0.379 0.391
3.093 4.234 0.376 0.376
4.213 5.699 0.361 0.370
3.745 5.115 0.356 0.370
4.679 6.393 0.349 0.361


tangotiger posted March 1st, 2002 12:22 PM

I've updated the file once more to allow multiple input records. So, you can cut/paste say 20 records into the input file, and let the program loop through. It takes about 4 seconds/matchup.

I guarantee this is far far better than pythag.


David Smyth posted March 1st, 2002 08:51 PM

How do you know that the W% distribution that you are attempting to replicate is indeed correct? How do you know that the sample sizes in each individual bin are adequate?


tangotiger posted March 2nd, 2002 12:03 AM

The other option is that I can run each of the 1668 seasons through my mathematical model. At 4 seconds each, that's about 6,700 seconds, or roughly 110 minutes. I suppose I can run this overnight.

I'll get back to you tomorrow...


tangotiger posted March 4th, 2002 11:08 AM

I ran through the whole 1668 actuals from 1919-2000, using .852 as the control value. I then compared against the Ben91 measure, the diff/10 measure, and pythag.

Because it takes 1 hr to run through 1668 computations in my program, I didn't try to "best-fit" to see whether, say, .849 would do better instead.

On the other hand, I did compensate for the other measures. Ben's formula, (RS-RA)/(RS+RA), works better if multiplied by 0.92 rather than 0.91. If simply doing (RS-RA)/x, x works best at 9.8. Pythag works best with an exponent of 1.88.

I calculated the error as ABS(win%actual - win%estimate) and simply averaged out over the 1668 records. The average error / GP was:
Tango - .02004
Pythag - .02010
Ben - .02017
diff/10 - .02096

If we combine the last 2 methods using the following formula:
.72 * Ben + .28 * diff/10 we get
BenCombo - .02002
(This is the one where we do .75 * RPG + 3.4 = RPW, or some such)
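
For reference, here is how those estimators look in code (a sketch; the 0.5 baseline is my reading of how these ratings turn into win% estimates, and rs/ra are per game):

def ben91(rs, ra, k=0.92):
    # (RS-RA)/(RS+RA) rating, with the refit multiplier 0.92
    return 0.5 + k * (rs - ra) / (rs + ra)

def diff_over_x(rs, ra, x=9.8):
    # run differential per game over x, refit to 9.8
    return 0.5 + (rs - ra) / x

def pythag(rs, ra, e=1.88):
    # Pythagorean record with the refit exponent
    return rs**e / (rs**e + ra**e)

def ben_combo(rs, ra):
    # the .72 / .28 blend
    return 0.72 * ben91(rs, ra) + 0.28 * diff_over_x(rs, ra)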

Of course, this is the whole population, not broken down by run environment. The extra accuracy you would gain is virtually nothing over even a simple measure like diff/10.

Of course, we haven't talked about extremes. In my next post, I will show you the error rates for the 20 "bin" sample of extreme teams that I always use.


tangotiger posted March 4th, 2002 11:20 AM

Here are the results from the bin data. The Tango .852 measure works out the best, just slightly beating out the BenCombo version. (If there are any enterprising people out there, perhaps you can use the Tango Distribution exe file and run other numbers like .845 or .857 to see if you can get something more accurate.)

The average errors are:
Tango - .0032
BenCombo - .0034
Ben91 - .0041
Pythag - .0042
diff/10 - .0095

Now remember, this is from the extreme teams. This is about as extreme as you'll get. I see no reason to use Pythag. I highly recommend the BenCombo method as the most accurate, and thankfully, very easy to compute.

RPG = RS/gp + RA/gp
diff = RS/gp - RA/gp
win% = actual win %

The other columns show, for each binned group, each formula's average estimate and its error relative to the overall actual win%.



RPG  diff  Win%  Tango  err  Ben91  err  diff/10  err  BenCombo  err  (RS/RA)^1.88  Pythag  err
7.09 0.83 0.591 0.598 0.008 0.607 0.017 0.584 0.006 0.601 0.010 1.564 0.608 0.017
7.18 0.40 0.550 0.547 0.003 0.551 0.001 0.541 0.009 0.548 0.002 1.234 0.552 0.002
7.27 -0.61 0.428 0.428 0.001 0.423 0.005 0.438 0.010 0.427 0.000 0.730 0.422 0.006
7.33 -1.14 0.376 0.369 0.007 0.357 0.019 0.384 0.008 0.364 0.012 0.561 0.357 0.019
8.02 1.15 0.624 0.625 0.000 0.632 0.007 0.617 0.007 0.627 0.003 1.733 0.632 0.007
7.96 0.60 0.567 0.567 0.001 0.570 0.003 0.561 0.006 0.567 0.000 1.332 0.571 0.004
8.02 -0.54 0.438 0.441 0.003 0.439 0.001 0.445 0.008 0.440 0.003 0.779 0.438 0.000
8.04 -1.07 0.379 0.383 0.005 0.377 0.001 0.391 0.012 0.381 0.002 0.607 0.377 0.002
8.82 1.16 0.621 0.619 0.003 0.622 0.000 0.619 0.003 0.621 0.001 1.657 0.622 0.001
8.85 0.64 0.568 0.565 0.003 0.566 0.002 0.565 0.003 0.566 0.002 1.314 0.567 0.001
8.87 -0.64 0.433 0.434 0.002 0.434 0.001 0.435 0.002 0.434 0.001 0.764 0.433 0.000
8.86 -1.37 0.356 0.362 0.006 0.358 0.002 0.360 0.004 0.358 0.003 0.561 0.358 0.002
9.81 1.46 0.637 0.637 0.001 0.636 0.001 0.649 0.011 0.640 0.002 1.769 0.636 0.001
9.85 0.76 0.575 0.572 0.003 0.571 0.004 0.577 0.002 0.573 0.003 1.341 0.572 0.003
9.83 -0.73 0.426 0.430 0.004 0.431 0.006 0.425 0.001 0.430 0.004 0.757 0.430 0.005
9.91 -1.49 0.361 0.361 0.001 0.362 0.001 0.348 0.012 0.358 0.003 0.571 0.362 0.001
11.08 1.64 0.638 0.641 0.003 0.636 0.002 0.667 0.029 0.645 0.006 1.758 0.636 0.002
10.97 0.88 0.572 0.577 0.005 0.574 0.001 0.589 0.017 0.578 0.006 1.356 0.575 0.002
10.93 -0.96 0.419 0.416 0.003 0.419 0.000 0.402 0.017 0.415 0.004 0.722 0.418 0.001
11.07 -1.71 0.349 0.353 0.003 0.358 0.009 0.325 0.024 0.349 0.001 0.558 0.358 0.009


