
Wednesday, May 12, 2021

Rate Stat Series, pt. 3: Teams

If I tell you that three teams in the same league-season played the same number of games (113), and that one of them scored 679 runs, another scored 670, and the third scored 633, how confident would you be in using this limited data to rank the productivity of their offenses? As usual in this series, we are ignoring park factors and other contextual factors (like quality of opposition/not having to face one’s own pitching staff); since they are from the same league-season, you don’t need to worry about whether the win value of each team’s runs was the same. Assume also that runs will be distributed across games by a known distribution like Enby, so the distribution is also not a differentiator. Assume that we don’t care about any “luck”; the actual total is what matters, not what a run estimator came up with. What else do you need to know?

I would contend that given the (admittedly restrictive) parameters I’ve placed on the exercise, you now know almost everything you need to know. In a small number of cases, and to a small extent, you are missing valuable information – but for most situations, you should need no additional information.

Now suppose I told you something similar about three players: same league season, same number of games played (111), and three runs created estimates: one player created 106 runs, one 92, and one 88. Do you feel like you need any additional information to put these players in the proper order of offensive productivity?

I hope that your answer here is yes, and a lot of it. I’ve told you how many games each has played, but that doesn’t tell you how many opportunities they’ve had at the plate. Sure enough, in this case one of the players had substantially fewer plate appearances than the others (489, 490, 451 respectively). Given that the player who created 92 runs had 39 more plate appearances than the player who created 88, it seems likely that the latter player was actually more productive on a rate basis.

I did not tell you how many plate appearances each of the three teams had in their 113 games; I don’t think it’s relevant to the question at hand, but the answer is 4493, 4611, and 4556 respectively. Why do we need to know plate appearances (or something) in the case of players, but not in the case of teams? Understanding this gets to the heart of the reason this series needs to exist at all, why applying the same rate stat to team offenses and player offense may not work as intended.

In the previous installment, I asked the question: “Where do plate appearances come from?” The answer is that every inning (excluding walkoff situations) starts with three PAs guaranteed, and only by avoiding outs (reaching base and not being subsequently retired on the bases) can a team generate additional plate appearances.

From a team perspective, then, plate appearances are not an appropriate denominator for a rate stat, because differences in team plate appearances are the result of differences in performance between the teams. To return to the three teams discussed above, they are the 1994 Indians, Yankees, and White Sox respectively. The Indians had the fewest PA of the three yet scored the most runs. Does this mean that their offense, which already scored more runs than the other two clubs, was even more superior than the raw numbers would suggest?

An offense does not set out to maximize its plate appearances, nor does it set out to score the maximum number of runs it can in the minimum number of plate appearances. An offense sets out to maximize its total runs scored. Plate appearances are a function of the rate at which a team makes outs. At this point it might be helpful to consider the three teams:



New York’s OBA was 22 points higher than Cleveland’s and thus they generated an extra plate appearance per game. When ranking team offenses, it wouldn’t make sense to penalize the Yankees for this, which would be the case if we used R/PA. The difference in plate appearances simply reflects the different manner in which New York and Cleveland went about creating runs. For a team, plate appearances are inextricably linked with their OBA. Each inning, a team attempts to score as many runs as it possibly can before making three outs. It’s possible to score one run in a complete inning with as few as four or as many as seven plate appearances. Whether a team uses four, five, six, or seven plate appearances to score a single run is irrelevant in terms of that run’s impact on them winning or losing the game (*). Thus outs or an equivalent like innings are the correct choice for the denominator of a team rate stat.

(*) I am speaking here simply about the direct impact of the runs scored and not any downstream effects or the predictive value of team performance. Perhaps the team that uses seven PA to score one run benefits by wearing down the opposing pitcher or is more likely to have success in the future because they had four of seven batters reach base compared to one in four for the team that only needed four PA. Here we’re just focused on the win value directly attributable to the run scored and not any secondary or predictive effects.

The fact that outs are fixed for each team each inning (ignoring walkoffs) means that outs are also fixed for each team each game (ignoring walkoffs, rainouts, extra innings, and foregone bottoms of the ninth). Which means that outs are also fixed for each team each season (ignoring those factors and cases in which teams don’t play out their full schedules, or have to play tiebreakers), which means that R/G and raw seasonal runs scored total are essentially equivalent to looking at R/O for a team. So for the question I asked at the beginning of the article, just knowing that the three teams had played an equal number of games, we had a pretty good idea how they would “truly” rank using R/O.
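To put numbers on this, here is a small Python sketch (the variable names are mine and purely illustrative) using the totals quoted above for the 1994 Indians, Yankees, and White Sox. Ranking by R/PA would widen Cleveland’s apparent edge over New York, precisely because it penalizes the Yankees for the extra plate appearances their higher OBA generated; R/G, which for teams is essentially interchangeable with R/O, does not.

# Team totals quoted in this post: 113 games each, plus runs and plate appearances
teams = {
    "CLE": {"R": 679, "PA": 4493, "G": 113},
    "NYA": {"R": 670, "PA": 4611, "G": 113},
    "CHA": {"R": 633, "PA": 4556, "G": 113},
}

for name, t in teams.items():
    print(name,
          "R/G = %.2f" % (t["R"] / t["G"]),
          "R/PA = %.3f" % (t["R"] / t["PA"]))
# Cleveland leads New York by about 1% in R/G but by about 4% in R/PA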

For players, this is not at all the case, since even in an equal number of games, players will get different numbers of plate appearances for a variety of reasons (batting order position, the team’s OBA (remember, higher OBA teams will generate more PA), whether or not they play the full game), a fact that is intuitive to most baseball fans. What is less intuitive, though, is that even in the same number of plate appearances, players can make very different numbers of outs. Since we’ve already accepted that team OBA defines how many plate appearances a team will generate, it isn’t much of a leap to conclude that if we have two players who create the same number of runs (using a formula that doesn’t explicitly account for their impact on the team’s OBA) in the same number of plate appearances, the player who makes fewer outs was more productive when we consider the totality of their offensive contribution. Even though the two players were equally productive in their plate appearances, the player who made fewer outs generated more plate appearances for his teammates, a second-order effect that needs to be considered when evaluating individual offensive contribution. For teams, the runs scored total already reflects this effect.
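To make the second-order effect concrete, here is a hypothetical sketch (the players and numbers are invented for illustration, not taken from real seasons). Both batters create the same number of runs in the same number of plate appearances, but because a completed inning’s plate appearances equal three outs plus the number of batters who reach base, the batter who makes fewer outs hands his teammates roughly that many additional plate appearances, and he also rates better per out.

players = {
    "Player A": {"RC": 90, "PA": 500, "outs": 330},
    "Player B": {"RC": 90, "PA": 500, "outs": 350},
}

for name, p in players.items():
    print(name,
          "runs/out = %.3f" % (p["RC"] / p["outs"]),
          "times on base = %d" % (p["PA"] - p["outs"]))

# Player A's 20 fewer outs mean roughly 20 extra PA generated for his teammates
# (ignoring baserunning outs), on top of his better runs/out rate.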

This would be an appropriate time to note that this series is focused on evaluating offenses, but of course every offensive metric can be reviewed in reverse as a defensive metric. However, since the obvious denominator for teams is outs, it is also the obvious denominator for individual pitchers. We don’t need to worry about a pitcher’s impact on his team’s plate appearances – when he is in the game, he is solely responsible (setting aside the question of how the team’s performance should be allocated between the pitcher and his fielders) for the number of plate appearances the opponent generates, and his goal is to record three outs while minimizing the number of runs he allows, regardless of how many opponents come to the plate. Outs are clearly the correct denominator for the rate stat, and innings pitched are nothing more than outs/3 (and even better, IP account for all outs, including many that don’t show up in the standard statistical categories).

In thinking about the development of early baseball statistics and the legacy of those standard statistics on how the overwhelming majority of fans thought about baseball before the sabermetric revolution took hold, it is striking that the early statisticians understood these concepts as they applied to pitchers. When pitchers were completing almost all their starts, simple averages of earned runs allowed sufficed, for the same reason that team R/G tells you most everything you need to know. As complete games became rarer, ERA took hold, properly using innings in the denominator. For most of the twentieth century, and even post-sabermetric revolution, baseball fans have been conditioned to think about innings pitched as the denominator for all manner of pitching metrics – even those like strikeout and walk frequency for which plate appearances would make a much more logical denominator. (Of course, present day sabermetrics has embraced metrics like K% and W% for pitchers, but the per inning versions remain in use as well).

The parallel development of offensive statistics resulted in the opposite phenomenon. While early box scores tracked “hands out” (essentially outs made) for individual batters, batting average eventually became the dominant statistic. Setting aside the issues with “at bats” and how they distort people’s thinking and saddled us with the mouthful of “plate appearances” to describe the more fundamental quantity of the two, the standard batting statistics have conditioned fans to think about batting rates (walk rate, home run rate, etc.) in the correct manner (or one adjacent to being correct, depending on whether at bats or plate appearances are the denominator), but leave people struggling with how to properly express a batter’s overall productivity. Again, this is the opposite problem of how pitching statistics were traditionally constructed. One can imagine that it all might be very different had the Batting Average taken the form of a hit/out ratio rather than hits/at bats.

Tuesday, March 14, 2017

Win Value of Pitcher Adjusted Run Averages

The most common class of metrics used in sabermetrics for cross-era comparisons uses relative measures of actual or estimated runs per out or some other similar denominator. These include ERA+ for pitchers and OPS+ or wRC+ for batters (OPS+ being an estimate of relative runs per out, wRC+ using plate appearances in the denominator but accounting for the impact of avoiding outs). While these metrics provide an estimate of runs relative to the league average, they implicitly assume that the resulting relative scoring level is equally valuable across all run environments.

This is in fact not the case, as it is well-established that the relationship between run ratio and winning percentage depends on the overall level of run scoring. A team with a run ratio of 1.25 will have a different expected winning percentage if they play in a 9 RPG environment than if they play in a 10 RPG environment. Metrics like ERA+ and OPS+ do not translate relative runs into relative wins, but presumably the users of such metrics are ultimately interested in what they tell us about player contribution to wins.

There are two key points that should be acknowledged upfront. One is that the difference in win value based on scoring level is usually quite small. If it wasn’t, winning percentage estimators that don’t take scoring level into account would not be able to accurately estimate W% across the spectrum of major league teams. While methods that do consider scoring level are more accurate estimators of W% than similar methods that don’t, a method like fixed exponent Pythagorean can still produce useful estimates despite maintaining a fixed relationship between runs and wins.

The second is that players are not teams. The natural temptation (and one I will knowingly succumb to in what follows) is to simply plug the player’s run ratio into the formula and convert to a W%. This approach ignores the fact that an individual player’s run rate does not lead directly to wins, as the performance of his teammates must be included as well. Pitchers are close, because while they are in the game they are the team (more accurately, their runs allowed figures reflect the totality of the defense, which includes contributions from the fielders), but even ignoring fielding, non-complete games include innings pitched by teammates as well.

For the moment I will set that aside and instead pretend (in the tradition of Bill James’ Offensive Winning %) that a player or pitcher’s run ratio can or should be converted directly to wins, without weighting the rest of the team. This makes the figures that follow something of a freak show stat, but the approach could be applied directly to team run ratios as well. Individuals are generally more interesting and obviously more extreme, which means that the impact of considering run environment will be overstated.

I will focus on pitchers here and will use Bob Gibson’s 1968 season as an example. Gibson allowed 49 runs in 304.2 innings, which works out to a run average of 1.45 (there will be some rounding discrepancies in the figures). In 1968 the NL average RA was 3.42, so Gibson’s adjusted RA (aRA for the sake of this post) is RA/LgRA = .423 (ideally you would park-adjust as well, but I am ignoring park factors for this post). As an aside, please resist the temptation to cite his RA+ of 236 instead. Please.

.423 is a run ratio; Gibson allowed runs at 42.3% of the league average. Since wins are the ultimate unit of measurement, it is tempting to convert this run ratio to a win ratio. We could simply square it, which reflects a Pythagorean relationship. Ideally, though, we should consider the run environment. The 1968 NL was an extremely low scoring league. Pythagenpat suggests that the ideal exponent is around 1.746. Let’s define the Pythagenpat exponent to use as:

x = (2*LgRA)^.29

Note that this simply uses the league scoring level to convert to wins; it does not take into account Gibson’s own performance. That would be an additional enhancement, but it would also strongly increase the distortion that comes from viewing a player as his own team, albeit less so for pitchers and especially those who basically were pitching nine innings/start as in the case of Gibson.

So we could calculate a loss ratio as aRA^x, or .223 for Gibson. This means that a team with Gibson’s aRA in this environment would be expected to have .223 losses for every win (basic ratio transformations apply; the reciprocal would be the win ratio, the loss ratio divided by (1 + itself) would be the losing percentage, and the complement of that the W%, etc.)

At this point, many people would like to convert it to a W% and stop there, but I’d like to preserve the scale of a run average while reflecting the win impact. In order to do so, I need to select a Pythagorean exponent corresponding to a reference run environment to convert Gibson’s loss ratio back to an equivalent aRA for that run environment. For 1901-2015, the major league average RA was 4.427, which I’ll use as the reference environment, which corresponds to a 1.882 Pythagenpat exponent (there are actually 8.94 IP/G over this span, so the actual RPG is 8.937 which would be a 1.887 exponent--I'll stick with RA rather than RPG for this example since we are already using it to calculate aRA).

If we call that 1.882 exponent r, then the loss ratio can be converted back to an equivalent aRA by raising it to the (1/r) power. Of course, the loss ratio is just an interim step, and this is equivalent to:

aRA^(x*(1/r)) = aRA^(x/r) = waRA

waRA (excuse the acronyms, which I don’t intend to survive beyond this post) is win-Adjusted Run Average. For Gibson, it works out to .450, which illustrates how small the impact is. Pitching in one of the most extreme run environments in history, Gibson’s waRA is only 6.4% higher than his aRA.

In 1994, Greg Maddux allowed 44 runs in 202 innings for a run average of 1.96. Pitching in a league with a RA of 4.65, his aRA was .421, basically equal to Gibson. But his waRA was better, at .416, since the same run ratio leads to more wins in a higher scoring environment.
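For anyone who wants to follow along, here is a quick numerical check of those figures (the code and names are mine; 304.2 innings is read as 304 and two-thirds):

REF_RA = 4.427                    # 1901-2015 ML average RA (reference environment)
r = (2 * REF_RA) ** 0.29          # ~1.882

def waRA(runs, ip, lg_ra):
    ra = runs * 9 / ip            # run average
    aRA = ra / lg_ra              # adjusted RA (no park adjustment here)
    x = (2 * lg_ra) ** 0.29       # Pythagenpat exponent for the league environment
    return aRA, aRA ** (x / r)    # waRA = aRA^(x/r)

print(waRA(49, 304 + 2/3, 3.42)) # Gibson 1968: aRA ~.423, waRA ~.450
print(waRA(44, 202, 4.65))       # Maddux 1994: aRA ~.421, waRA ~.416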

It is my guess that consumers of sabermetrics will generally find this result unsatisfactory. There seems to be a commonly-held belief that it is easier to achieve a high ERA+ in a higher run scoring environment, but the result of this approach is the opposite--as RPG increases, the win impact of the same aRA increases as well. Of course, this approach says nothing about how “easy” it is to achieve a given aRA--it converts aRA to a win-value equivalent aRA in a reference run environment. It is possible that it could simultaneously be “easier” to achieve a low aRA in a higher scoring environment and that the value of a low aRA is enhanced in a higher scoring environment. I am making no claim regarding the impressiveness or aesthetic value, etc. of any pitcher’s performance, only attempting to frame it in terms of win value.

Of course, the comparison between Gibson and Maddux need not stop there. I do believe that waRA shows us that Maddux’ rate of allowing runs was more valuable in context than Gibson’s, but there is more to value than the rate of allowing runs. Of course we could calculate a baselined metric like WAR to value the two seasons, but even if we limit ourselves to looking at rates, there is an additional consideration that can be added.

So far, I’ve simply used the league average to represent the run environment, but a pitcher has a large impact on the run environment through his own performance. If we want to take this into account, it would be inappropriate to simply use LgRA + pitcher’s RA as the new RPG to plug into Pythagenpat; we definitely need to consider the extent to which the pitcher’s teammates influence the run environment, since ultimately Gibson’s performance was converted into wins in the context of games played by the Cardinals, not a hypothetical all-Gibson team. So I will calculate a new RPG instead by assuming that the 18 innings in a game (to be more precise for a given context, two times the league average IP/G) is filled in by the pitcher’s RA for his IP/G, and the league’s RA for the remainder.

In the 1968 NL, the average IP/G was 9.03 and Gibson’s 304.2 IP were over 34 appearances (8.96 IP/G), so the new RPG is 8.96*1.45/9 + (2*9.03 - 8.96)*3.42/9 = 4.90 (rather than 6.84 previously). This converts to a Pythagenpat exponent of 1.59, and a pwaRA (personal win-Adjusted Run Average?) of .485. To spell that all out in a formula:

px = ((IP/G)*RA/9 + (2*Lg(IP/G) - IP/G)*LgRA/9) ^ .29
pwaRA = aRA^(px/r)
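A minimal sketch of that calculation, using the Gibson figures quoted above (the function and parameter names are mine):

r = (2 * 4.427) ** 0.29           # reference environment exponent, ~1.882

def pwaRA(ra, lg_ra, ip_per_g, lg_ip_per_g):
    aRA = ra / lg_ra
    # personal run environment: the pitcher's RA for his share of the game,
    # the league's RA for the remainder
    rpg = ip_per_g * ra / 9 + (2 * lg_ip_per_g - ip_per_g) * lg_ra / 9
    px = rpg ** 0.29
    return rpg, aRA ** (px / r)

# Gibson 1968: RA 1.45 over 8.96 IP/G in a league at 3.42 RA and 9.03 IP/G
print(pwaRA(1.45, 3.42, 8.96, 9.03))   # RPG ~4.90, pwaRA ~.485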

Note that adjusting for the pitcher’s impact on the scoring context reduces the win impact of effective pitchers, because as discussed earlier, lowering the RPG lowers the Pythagenpat exponent and makes the same run ratio convert to fewer wins. In fact, considering the pitcher’s effect on the run environment in which he operates actually brings most starting pitchers’ pwaRA closer to league average than their aRA is.

pwaRA is divorced from any real sort of baseball meaning, though, because pitchers aren’t by themselves a team. Suppose we calculated pwaRA for two teammates in a 4.5 RA league. The starter pitches 6 innings and allows 2 runs; the reliever pitches 3 innings and allows 1. Both pitchers have a RA of 3.00, and thus identical aRA (.667) or waRA (.665). Furthermore, their team also has a RA of 3.00 for this game, and whether figured as a whole or as the weighted average of the two individuals, the team also has the same aRA and waRA.

However, if we calculate the starter’s pwaRA, we get .675, while the reliever is at .667. Meanwhile, the team has a pwaRA of .679, which makes this all seem quite counterintuitive. But since all three entities have the same RA, the one operating in the lower run environment gets less win value out of that RA on a per inning basis.

I hope this post serves as a demonstration of the difficulty of divorcing a pitcher’s value from the number of innings he pitched. Of course, the effects discussed here are very small, much smaller than the impact of other related differences, like the inherent statistical advantage pitchers gain from working in shorter stints, attempts to model differences in replacement level between starters and relievers, and attempts to detect/value any beneficial side effects of starters working deep into games.

One of my long-standing interests has been the proper rate stat to use to express a batter’s run contribution (I have been promising myself for almost as long as this blog has been in existence that I will write a series of posts explaining the various options for such a metric and the rationale for each, yet have failed to do so). I’ve never had the same pull to the question for pitchers, in part because the building block seems obvious: runs/out (which depending on how one defines terms can manifest itself as RA, ERA, component ERA, FIP-type metrics, etc.)

But while there are a few adjustments that can theoretically be made between a hitter’s overall performance expressed as a rate and a final value metric (like WAR), the adjustments (such as the hitter’s impact on his team’s run scoring beyond what the metric captures itself, and the secondary effect that follows on the run/win conversion) are quite minor in scale compared to similar adjustments for pitchers. While the pitcher (along with his fielders) can be thought of as embodying the entire team while he is in the game, that also means that said unit’s impact on the run/win conversion is significant. And while there are certainly cases of batters whose rates may be deceiving because of how they are deployed by their managers (particularly platooning), the additional playing time over which a rate is spread increases value in a WAR-like metric without any special adjustment. Pitchers’ roles and secondary effects thereof (like any potential value generated by “eating” innings) have a more significant (and more difficult to model) impact on value than the comparable effects for position players.

Tuesday, September 01, 2015

End of Season Stats Update

The end of season statistics I post have always walked a fine line between exhibiting a reasonable balance of accuracy and simplicity and veering too far into being needlessly inaccurate. That fundamental tension will not be removed with the changes I am making, but they will clean up a couple of shortcuts that were too glaring for me to ignore any longer. I'm writing these changes up about a month in advance so that I don't have to devote any more space to them in the already bloated post that explains all the stats.

Runs Created

I've modified the knockoff version of Paul Johnson's Estimated Runs Produced that I use to calculate runs created to consider intentional walks and hit batters. The idea behind using ERP, which is what I refer to as a "skeleton" form of linear weights, rather than using some other construct, is that I don't want to recalculate each and every weight every season. Instead, using fixed relationships between the various offensive events that hold fairly well across average modern major league environments is easy to work with from year-to-year and avoids giving the appearance of customization that explicit weights for each event would convey. Mind you, such a formula can still be expressed as x*S + y*D + z*T + ...

The previous version I was using was:

(TB + .8H + W + .7SB - CS - .3AB)*x, where x is around .322

To this formula I need to add IW and HB. This is all fairly straightforward--hit batters can count the same as walks, intentional walks will be half of a standard walk:

TB + .8H + W + HB - .5IW + .7SB - CS - .3AB

The rationale for counting intentional walks as half of a standard walk is that it is fairly close to correct (Tom Ruane's work suggests the ratio of IW value to standard walk value was .57 for 1960-2004). There are other possible approaches, such as removing IW altogether but assigning them the value of the batter's average plate appearance. There is certainly logic behind such a method; just doing it the simple way is a bit more conservative in terms of recognizing differences between batters.

Hit batters are actually slightly more valuable on average than walks due to the more random situations in which they occur, but such a distinction would be overkill given the approximate nature of the other coefficients.

I considered making adjustments for the other events included in the standard statistics (strikeouts, sacrifice hits, sacrifice flies, double plays) but ultimately chose to forego including each. The difference between a strikeout and a non-strikeout out is around .01 runs; given that there are numerous shortcuts already being taken, this is simply not enough for me to worry about in this context. Sacrifices and double plays are problematic due to the heavy contextual influences, although I came very close to just counting sacrifice flies the same as any other out. I would include K, SH, and SF if I was trying to do this precisely, but I would still leave double plays alone.

This was also a good opportunity to update the multipliers for all versions of the skeleton, which I did with the 2005-2014 major league totals to get these formulas:

ERP = (TB + .8H + W - .3AB)*.319 (had been using .324)
= .478S + .796D + 1.115T + 1.434HR + .319W - .096(AB - H)

ERP = (TB + .8H + W + .7SB - CS - .3AB)*.314 (had been using .322)
= .471S + .786D + 1.100T + 1.414HR + .314W + .220SB - .314CS - .094(AB - H)

ERP = (TB + .8H + W + HB - .5IW + .7SB - CS - .3AB)*.310
= .464S + .774D + 1.084T + 1.393HR + .310(W - IW + HB) + .155IW + .217SB - .310CS - .093(AB - H)

The expanded versions illustrate one of the weaknesses of the skeleton approach, or perhaps more precisely using total bases and hits rather than splitting out the hit types, as it results in the relationship between hit types being a bit off, particularly in the case of the triple. Still, I find the accuracy tradeoff acceptable for the purposes for which I use the end of season statistics.

For the batters who appeared in the 2014 end of season statistics, the biggest change switching to the version including HB and IW was five runs. Jon Jay, Carlos Gomez, and Mike Zunino gain five runs while Victor Martinez loses five runs. Of the 312 hitters, 263 (84%) change by no more than a run in either direction. So the differences are usually not material, another reason why I personally didn't mind the inaccuracy. But Carlos Gomez might disagree.

Along with the RC change are some necessary changes to other statistics. PA is now defined as AB + W + HB. OBA is (H + W + HB)/(AB + W + HB), and Secondary Average is (TB - H + W + HB)/AB, which is equal to SLG - BA + (OBA - BA)/(1 - OBA).
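For reference, here is a small sketch of the updated formulas (the function names are mine, and the sample line is invented for illustration):

def erp(ab, h, tb, w, iw, hb, sb, cs):
    # (TB + .8H + W + HB - .5IW + .7SB - CS - .3AB) * .310
    return (tb + .8*h + w + hb - .5*iw + .7*sb - cs - .3*ab) * .310

def rates(ab, h, tb, w, hb):
    pa = ab + w + hb                 # PA = AB + W + HB
    oba = (h + w + hb) / pa          # OBA = (H + W + HB)/(AB + W + HB)
    sec = (tb - h + w + hb) / ab     # SEC = (TB - H + W + HB)/AB
    return pa, oba, sec

# A hypothetical 600 AB, 170 H, 280 TB, 60 W (5 IW), 8 HB, 10 SB, 4 CS season
print(erp(600, 170, 280, 60, 5, 8, 10, 4))   # ~94 runs created
print(rates(600, 170, 280, 60, 8))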

RAA for Pitchers

For as long as I've been running these reports, I've used the league run average as the baseline for Runs Above Average for both starters and relievers. This despite using very different replacement levels (128% of league average for starters and 111% for relievers). I've rationalized this somewhere, I'm sure, but the fundamental flaw is apparent when you look at my reliever reports and see three or four run gaps between RAA and RAR for many pitchers.

I want to avoid using the actual league average split for any given season, since it can bounce around and I'd rather use the league overall average in some manner. So my approach instead will be to look at the starter/reliever split in eRA (Run Average estimated based on component statistics, including actual hits allowed, so akin to Component ERA rather than FIP) for the last five league-seasons and see what makes sense.

The resulting difference in baseline between starters and relievers will not be as large as that exhibited in the replacement levels. The replacement level split attempts to estimate the true talent difference between the two roles, recognizing that most relievers would not be anywhere near as effective in a starting role. This adjustment is simply trying to compare an individual pitcher to what the composite league average pitcher would be estimated to have in his role (SP/RP) and does not account for our belief that the average starter is a better pitcher than the average reliever.

Additionally, using eRA rather than actual RA makes the adjustment more conservative than it otherwise might be, because it considers component performance rather than actual runs allowed. Part of the reliever advantage in RA is that the scoring rules benefit them. Why did I not then take this into account? I actually don't use RA in calculating pitcher RAA or RAR, I use a version of Relief RA, which was created by Sky Andrecheck and makes an adjustment for inherited runners (a simple one that doesn't consider base/out state, simply the percentage that score). The version I use considers bequeathed runners as well, so as to adjust starters' run averages for bullpen support. But the statistics on inherited and bequeathed runners by role for the league are not readily available, so I based the adjustment on eRA, which I already have calculated for each league-season broken out for starters and relievers.



This chart should be fairly self-explanatory: seRA is starter eRA, reRA is reliever eRA, eRA is the league eRA, s ratio = seRA/Lg(eRA), r ratio = reRA/Lg(eRA), and S IP% is the percentage of league innings thrown by starters. The relationships are fairly stable for the last five years, and so I have just used the simple average of the league-season s and r ratios to figure the adjustments.

RAA (for SP) = (LgRA*1.025 - RRA)*IP/9

RAA (for RP) = (LgRA*.951 - RRA)*IP/9

You can check my math: the weighted average of the adjustments is 1.025*(.665) + .951*(1 - .665) = 1.0002, where .665 is the starters' share of league innings (S IP%).
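In code form, the new baselines look something like this (a sketch with invented inputs; the function name is mine, and RRA is the pitcher's Relief Run Average):

def pitcher_raa(rra, ip, lg_ra, role):
    # role-specific baseline: 102.5% of league RA for starters, 95.1% for relievers
    baseline = lg_ra * (1.025 if role == "SP" else 0.951)
    return (baseline - rra) * ip / 9

# Hypothetical pitchers in a 4.30 RA league
print(pitcher_raa(3.60, 200, 4.30, "SP"))
print(pitcher_raa(3.60, 70, 4.30, "RP"))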

Wednesday, October 10, 2012

End of Season Statistics 2012

The spreadsheets are published as Google Spreadsheets, which you can download in Excel format by changing the extension in the address from "=html" to "=xls". That way you can download them and manipulate things however you see fit. The player spreadsheets are not ready yet, but I want to get the team stuff posted.

The data comes from a number of different sources. Most of the basic data comes from Doug's Stats, which is a very handy site, or Baseball-Reference. KJOK's park database provided some of the data used in the park factors, but for recent seasons park data comes from B-R. Data on pitcher's batted ball types allowed, doubles/triples allowed, and inherited/bequeathed runners comes from Baseball Prospectus.

The basic philosophy behind these stats is to use the simplest methods that have acceptable accuracy. Of course, "acceptable" is in the eye of the beholder, namely me. I use Pythagenpat not because other run/win converters, like a constant RPW or a fixed exponent are not accurate enough for this purpose, but because it's mine and it would be kind of odd if I didn't use it.

If I seem to be a stickler for purity in my critiques of others' methods, I'd contend it is usually in a theoretical sense, not an input sense. So when I exclude hit batters, I'm not saying that hit batters are worthless or that they *should* be ignored; it's just easier not to mess with them and not that much less accurate.

I also don't really have a problem with people using sub-standard methods (say, Basic RC) as long as they acknowledge that they are sub-standard. If someone pretends that Basic RC doesn't undervalue walks or cause problems when applied to extreme individuals, I'll call them on it; if they explain its shortcomings but use it regardless, I accept that. Take these last three paragraphs as my acknowledgment that some of the statistics displayed here have shortcomings as well.

The League spreadsheet is pretty straightforward--it includes league totals and averages for a number of categories, most or all of which are explained at appropriate junctures throughout this piece. The advent of interleague play has created two different sets of league totals--one for the offense of league teams and one for the defense of league teams. Before interleague play, these two were identical. I do not present both sets of totals (you can figure the defensive ones yourself from the team spreadsheet, if you desire), just those for the offenses. The exception is for the defense-specific statistics, like innings pitched and quality starts. The figures for those categories in the league report are for the defenses of the league's teams. However, I do include each league's breakdown of basic pitching stats between starters and relievers (denoted by "s" or "r" prefixes), and so summing those will yield the totals from the pitching side. The one abbreviation you might not recognize is "N"--this is the league average of runs/game for one team, and it will pop up again.

The Team spreadsheet focuses on overall team performance--wins, losses, runs scored, runs allowed. The columns included are: Park Factor (PF), Home Run Park Factor (PFhr), Winning Percentage (W%), Expected W% (EW%), Predicted W% (PW%), wins, losses, runs, runs allowed, Runs Created (RC), Runs Created Allowed (RCA), Home Winning Percentage (HW%), Road Winning Percentage (RW%) [exactly what they sound like--W% at home and on the road], Runs/Game (R/G), Runs Allowed/Game (RA/G), Runs Created/Game (RCG), Runs Created Allowed/Game (RCAG), and Runs Per Game (the average number of runs scored and allowed per game). Ideally, I would use outs as the denominator, but for teams, outs and games are so closely related that I don’t think it’s worth the extra effort.

The runs and Runs Created figures are unadjusted, but the per-game averages are park-adjusted, except for RPG which is also raw. Runs Created and Runs Created Allowed are both based on a simple Base Runs formula. The formula is:

A = H + W - HR - CS
B = (2TB - H - 4HR + .05W + 1.5SB)*.76
C = AB - H
D = HR
Naturally, A*B/(B + C) + D.
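As a sketch (the function name and the sample team line are mine; the coefficients are the ones above):

def base_runs_rc(ab, h, tb, w, hr, sb, cs):
    a = h + w - hr - cs
    b = (2*tb - h - 4*hr + .05*w + 1.5*sb) * .76
    c = ab - h
    d = hr
    return a * b / (b + c) + d

# An invented team line, just to show the shape of the output
print(base_runs_rc(ab=5500, h=1450, tb=2300, w=520, hr=165, sb=90, cs=40))  # ~751 runs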

I have explained the methodology used to figure the PFs before, but the cliff’s notes version is that they are based on five years of data when applicable, include both runs scored and allowed, and they are regressed towards average (PF = 1), with the amount of regression varying based on the number of years of data used. There are factors for both runs and home runs. The initial PF (not shown) is:

iPF = (H*T/(R*(T - 1) + H) + 1)/2
where H = RPG in home games, R = RPG in road games, T = # teams in league (14 for AL and 16 for NL). Then the iPF is converted to the PF by taking x*iPF + (1-x), where x = .6 if one year of data is used, .7 for 2, .8 for 3, and .9 for 4+.
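Here is the same calculation as a sketch (the names are mine; the home/road RPG inputs are illustrative):

def park_factor(home_rpg, road_rpg, teams_in_league, years_of_data):
    ipf = (home_rpg * teams_in_league /
           (road_rpg * (teams_in_league - 1) + home_rpg) + 1) / 2
    x = {1: .6, 2: .7, 3: .8}.get(years_of_data, .9)   # .9 for 4+ years of data
    return x * ipf + (1 - x)

# A park with 10.0 RPG at home and 9.0 on the road, 16-team league, 5 years of data
print(park_factor(10.0, 9.0, 16, 5))   # ~1.05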

It is important to note, since there always seems to be confusion about this, that these park factors already incorporate the fact that the average player plays 50% on the road and 50% at home. That is what the adding one and dividing by 2 in the iPF is all about. So if I list Fenway Park with a 1.02 PF, that means that it actually increases RPG by 4%.

In the calculation of the PFs, I did not get picky and take out “home” games that were actually at neutral sites, like the Astros/Cubs series that was moved to Milwaukee in 2008. I have also reset the NYN park factor due to park modifications; only 2012 data for the Mets is being considered.

There are also Team Offense and Defense spreadsheets. These include the following categories:

Team offense: Plate Appearances, Batting Average (BA), On Base Average (OBA), Slugging Average (SLG), Secondary Average (SEC), Walks per At Bat (WAB), Isolated Power (SLG - BA), R/G at home (hR/G), and R/G on the road (rR/G). BA, OBA, SLG, WAB, and ISO are park-adjusted by dividing by the square root of park factor (or the equivalent; WAB = (OBA - BA)/(1 - OBA) and ISO = SLG - BA).

Team defense: Innings Pitched, BA, OBA, SLG, Innings per Start (IP/S), Starter's eRA (seRA), Reliever's eRA (reRA), Quality Start Percentage (QS%), RA/G at home (hRA/G), RA/G on the road (rRA/G), Battery Mishap Rate (BMR), Modified Fielding Average (mFA), and Defensive Efficiency Record (DER). BA, OBA, and SLG are park-adjusted by dividing by the square root of PF; seRA and reRA are divided by PF.

The three fielding metrics I've included are limited to metrics that a) I can calculate myself and b) are based on the basic available data, not specialized PBP data. The three metrics are explained in this post, but here are quick descriptions of each:

1) BMR--wild pitches and passed balls per 100 baserunners = (WP + PB)/(H + W - HR)*100

2) mFA--fielding average removing strikeouts and assists = (PO - K)/(PO - K + E)

3) DER--the Bill James classic, using only the PA-based estimate of plays made. Based on a suggestion by Terpsfan101, I've tweaked the error coefficient. Plays Made = PA - K - H - W - HR - HB - .64E and DER = PM/(PM + H - HR + .64E)
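In code, the three metrics are simply (a sketch; the names are mine and the inputs are the standard category totals):

def bmr(wp, pb, h, w, hr):
    # wild pitches + passed balls per 100 baserunners
    return (wp + pb) / (h + w - hr) * 100

def mfa(po, k, e):
    # fielding average with strikeouts removed from putouts and assists dropped
    return (po - k) / (po - k + e)

def der(pa, k, h, w, hr, hb, e):
    # Defensive Efficiency Record from the PA-based estimate of plays made
    pm = pa - k - h - w - hr - hb - .64 * e
    return pm / (pm + h - hr + .64 * e)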

Next are the individual player reports. I defined a starting pitcher as one with 15 or more starts. All other pitchers are eligible to be included as a reliever. If a pitcher has 40 appearances, then they are included. Additionally, if a pitcher has 50 innings and less than 50% of his appearances are starts, he is also included as a reliever (this allows some swingmen type pitchers who wouldn’t meet either the minimum start or appearance standards to get in).

For all of the player reports, ages are based on simply subtracting their year of birth from 2012. I realize that this is not compatible with how ages are usually listed and so “Age 27” doesn’t necessarily correspond to age 27 as I list it, but it makes everything a heckuva lot easier, and I am more interested in comparing the ages of the players to their contemporaries, in which case it makes very little difference. The "R" category records rookie status with an "R" for rookies and a blank for everyone else; I've trusted Baseball Prospectus on this. Also, all players are counted as being on the team with whom they played/pitched (IP or PA as appropriate) the most.

For relievers, the categories listed are: Games, Innings Pitched, estimated Plate Appearances (PA), Run Average (RA), Relief Run Average (RRA), Earned Run Average (ERA), Estimated Run Average (eRA), DIPS Run Average (dRA), Strikeouts per Game (KG), Walks per Game (WG), Guess-Future (G-F), Inherited Runners per Game (IR/G), Batting Average on Balls in Play (%H), Runs Above Average (RAA), and Runs Above Replacement (RAR).

IR/G is per relief appearance (G - GS); it is an interesting thing to look at, I think, in lieu of actual leverage data. You can see which closers come in with runners on base, and which are used nearly exclusively to start innings. Of course, you can’t infer too much; there are bad relievers who come in with a lot of people on base, not because they are being used in high leverage situations, but because they are long men being used in low-leverage situations already out of hand.

For starting pitchers, the columns are: Wins, Losses, Innings Pitched, Estimated Plate Appearances (PA), RA, RRA, ERA, eRA, dRA, KG, WG, G-F, %H, Pitches/Start (P/S), Quality Start Percentage (QS%), RAA, and RAR. RA and ERA you know--R*9/IP or ER*9/IP, park-adjusted by dividing by PF. The formulas for eRA and dRA are based on the same Base Runs equation and they estimate RA, not ERA.

* eRA is based on the actual results allowed by the pitcher (hits, doubles, home runs, walks, strikeouts, etc.). It is park-adjusted by dividing by PF.

* dRA is the classic DIPS-style RA, assuming that the pitcher allows a league average %H, and that his hits in play have a league-average S/D/T split. It is park-adjusted by dividing by PF.

The formula for eRA is:

A = H + W - HR
B = (2*TB - H - 4*HR + .05*W)*.78
C = AB - H = K + (3*IP - K)*x (where x is figured as described below for PA estimation and is typically around .93) = PA (from below) - H - W
eRA = (A*B/(B + C) + HR)*9/IP

To figure dRA, you first need the estimate of PA described below. Then you calculate W, K, and HR per PA (call these %W, %K, and %HR). Percentage of balls in play (BIP%) = 1 - %W - %K - %HR. This is used to calculate the DIPS-friendly estimate of %H (H per PA) as e%H = Lg%H*BIP%.

Now everything has a common denominator of PA, so we can plug into Base Runs:

A = e%H + %W
B = (2*(z*e%H + 4*%HR) - e%H - 5*%HR + .05*%W)*.78
C = 1 - e%H - %W - %HR
dRA = (A*B/(B + C) + %HR)/C*a

z is the league average of total bases per non-HR hit (TB - 4*HR)/(H - HR), and a is the league average of (AB - H) per game.
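Here is a sketch of both estimators (the parameter names are mine: x is the out-estimation factor from the plate appearance section below, lg_h_pct the league %H, z and a_per_g the league constants just defined, and pf the park factor; the sample inputs are invented):

def e_ra(h, tb, hr, w, k, ip, x, pf=1.0):
    a = h + w - hr
    b = (2*tb - h - 4*hr + .05*w) * .78
    c = k + (3*ip - k) * x                 # estimate of AB - H
    return (a*b/(b + c) + hr) * 9 / ip / pf

def d_ra(h, w, k, hr, ip, x, lg_h_pct, z, a_per_g, pf=1.0):
    pa = k + (3*ip - k)*x + h + w          # estimated PA
    pw, pk, phr = w/pa, k/pa, hr/pa
    bip = 1 - pw - pk - phr                # balls in play per PA
    eh = lg_h_pct * bip                    # DIPS estimate of (H - HR) per PA
    bsr_a = eh + pw
    bsr_b = (2*(z*eh + 4*phr) - eh - 5*phr + .05*pw) * .78
    bsr_c = 1 - eh - pw - phr
    return (bsr_a*bsr_b/(bsr_b + bsr_c) + phr) / bsr_c * a_per_g / pf

# Invented example: 180 H, 280 TB, 20 HR, 60 W, 150 K in 200 IP, league x = .93
print(e_ra(180, 280, 20, 60, 150, 200, x=.93))
print(d_ra(180, 60, 150, 20, 200, x=.93, lg_h_pct=.29, z=1.55, a_per_g=25.2))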

In the past couple years I’ve presented a couple of batted ball RA estimates. I’ve removed these this year, not just because batted ball data exhibits questionable reliability but because these metrics were complicated to figure, required me to collate the batted ball data, and were not personally useful to me. I figure these stats for my own enjoyment and have in some form or another going back to 1997. I share them here only because I would do it anyway, so if I’m not interested in certain categories, there’s no reason to keep presenting them.

Instead, I’m showing strikeout and walk rate, both expressed as per game. By game I mean not 9 innings but rather the league average of PA/G. I have always been a proponent of using PA and not IP as the denominator for non-run pitching rates, and now the use of per PA rates is widespread. Usually these are expressed as K/PA and W/PA, or equivalently, percentage of PA with a strikeout or walk. I don’t believe that any site publishes these as K and W per equivalent game as I am here. This is not better than K%--it’s simply applying a scalar multiplier. I like it because it generally follows the same scale as the familiar K/9.

To facilitate this, I’ve finally corrected a flaw in the formula I use to estimate plate appearances for pitchers. Previously, I’ve done it the lazy way by not splitting strikeouts out from other outs. I am now using this formula to estimate PA (where PA = AB + W):

PA = K + (3*IP - K)*x + H + W
Where x = league average of (AB - H - K)/(3*IP - K)

Then KG = K/PA*Lg(PA/G) and WG = W/PA*Lg(PA/G).

G-F is a junk stat, included here out of habit because I've been including it for years. It was intended to give a quick read of a pitcher's expected performance in the next season, based on eRA and strikeout rate. Although the numbers vaguely resemble RAs, it's actually unitless. As a rule of thumb, anything under four is pretty good for a starter. G-F = 4.46 + .095(eRA) - .113(K*9/IP). It is a junk stat. JUNK STAT JUNK STAT JUNK STAT. Got it?

%H is BABIP, more or less--%H = (H - HR)/(PA - HR - K - W), where PA was estimated above. Pitches/Start includes all appearances, so I've counted relief appearances as one-half of a start (P/S = Pitches/(.5*G + .5*GS)). QS% is just QS/GS; I don't think it's particularly useful, but Doug's Stats include QS so I include it.

I've used a stat called Relief Run Average (RRA) in the past, based on Sky Andrecheck's article in the August 1999 By the Numbers; that one only used inherited runners, but I've revised it to include bequeathed runners as well, making it equally applicable to starters and relievers. I am using RRA as the building block for baselined value estimates for all pitchers this year. I explained RRA in a previous article, but the bottom line formulas are:

BRSV = BRS - BR*i*sqrt(PF)
IRSV = IR*i*sqrt(PF) - IRS
RRA = ((R - (BRSV + IRSV))*9/IP)/PF

The two baselined stats are Runs Above Average (RAA) and Runs Above Replacement (RAR). RAA uses the league average runs/game (N) for both starters and relievers, while RAR uses separate replacement levels for starters and relievers. Thus, RAA and RAR will be pretty close for relievers:

RAA = (N - RRA)*IP/9
RAR (relievers) = (1.11*N - RRA)*IP/9
RAR (starters) = (1.28*N - RRA)*IP/9
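As a sketch (the names are mine; N is the league average runs per game and RRA the pitcher's Relief Run Average):

def pitcher_raa_rar(rra, ip, n, starter=True):
    raa = (n - rra) * ip / 9
    rar = ((1.28 if starter else 1.11) * n - rra) * ip / 9
    return raa, rar

# Invented examples in a 4.40 N league
print(pitcher_raa_rar(3.80, 210, 4.40, starter=True))
print(pitcher_raa_rar(3.00, 65, 4.40, starter=False))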

All players with 300 or more plate appearances are included in the Hitters spreadsheets (along with some players close to the cutoff point who I was interested in). Each is assigned one position, the one at which they appeared in the most games. The statistics presented are: Games played (G), Plate Appearances (PA), Outs (O), Batting Average (BA), On Base Average (OBA), Slugging Average (SLG), Secondary Average (SEC), Runs Created (RC), Runs Created per Game (RG), Speed Score (SS), Hitting Runs Above Average (HRAA), Runs Above Average (RAA), Hitting Runs Above Replacement (HRAR), and Runs Above Replacement (RAR).

I do not bother to include hit batters, so take note of that for players who do get plunked a lot. Therefore, PA are simply AB + W. Outs are AB - H + CS. BA and SLG you know, but remember that without HB and SF, OBA is just (H + W)/(AB + W). Secondary Average = (TB - H + W)/AB = SLG - BA + (OBA - BA)/(1 - OBA). I have not included net steals as many people (and Bill James himself) do--it is solely hitting events.

BA, OBA, and SLG are park-adjusted by dividing by the square root of PF. This is an approximation, of course, but I'm satisfied that it works well. The goal here is to adjust for the win value of offensive events, not to quantify the exact park effect on the given rate. I use the BA/OBA/SLG-based formula to figure SEC, so it is park-adjusted as well.

Runs Created is actually Paul Johnson's ERP, more or less. Ideally, I would use a custom linear weights formula for the given league, but ERP is just so darn simple and close to the mark that it’s hard to pass up. I still use the term “RC” partially as a homage to Bill James (seriously, I really like and respect him even if I’ve said negative things about RC and Win Shares), and also because it is just a good term. I like the thought put in your head when you hear “creating” a run better than “producing”, “manufacturing”, “generating”, etc. to say nothing of names like “equivalent” or “extrapolated” runs. None of that is said to put down the creators of those methods--there just aren’t a lot of good, unique names available. Anyway, RC = (TB + .8H + W + .7SB - CS - .3AB)*.322.

RC is park adjusted by dividing by PF, making all of the value stats that follow park adjusted as well. RG, the rate, is RC/O*25.5. I do not believe that outs are the proper denominator for an individual rate stat, but I also do not believe that the distortions caused are that bad. (I still intend to finish my rate stat series and discuss all of the options in excruciating detail, but alas you’ll have to take my word for it now).

I have decided to switch to a watered-down version of Bill James' Speed Score this year; I only use four of his categories. Previously I used my own knockoff version called Speed Unit, but trying to keep it from breaking down every few years was a wasted effort.

Speed Score is the average of four components, which I'll call a, b, c, and d:

a = ((SB + 3)/(SB + CS + 7) - .4)*20
b = sqrt((SB + CS)/(S + W))*14.3
c = ((R - HR)/(H + W - HR) - .1)*25
d = T/(AB - HR - K)*450

James actually uses a sliding scale for the triples component, but it strikes me as needlessly complex and so I've streamlined it. I also changed some of his division to mathematically equivalent multiplications.
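Read literally (with S as singles and T as triples, which is how I intend the formulas above), the streamlined version is just the average of the four components; here is a sketch with names of my own choosing and an invented stat line:

def speed_score(sb, cs, s, w, r, hr, h, t, ab, k):
    a = ((sb + 3) / (sb + cs + 7) - .4) * 20
    b = ((sb + cs) / (s + w)) ** 0.5 * 14.3
    c = ((r - hr) / (h + w - hr) - .1) * 25
    d = t / (ab - hr - k) * 450
    return (a + b + c + d) / 4

# A hypothetical speedster: 30 SB, 8 CS, 120 singles, 50 W, 95 R, 8 HR, 165 H, 7 T, 600 AB, 110 K
print(speed_score(30, 8, 120, 50, 95, 8, 165, 7, 600, 110))   # ~7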

There are a whopping four categories that compare to a baseline; two for average, two for replacement. Hitting RAA compares to a league average hitter; it is in the vein of Pete Palmer’s Batting Runs. RAA compares to an average hitter at the player’s primary position. Hitting RAR compares to a “replacement level” hitter; RAR compares to a replacement level hitter at the player’s primary position. The formulas are:

HRAA = (RG - N)*O/25.5
RAA = (RG - N*PADJ)*O/25.5
HRAR = (RG - .73*N)*O/25.5
RAR = (RG - .73*N*PADJ)*O/25.5

PADJ is the position adjustment, and it is based on 2002-2011 offensive data. For catchers it is .89; for 1B/DH, 1.17; for 2B, .97; for 3B, 1.03; for SS, .93; for LF/RF, 1.13; and for CF, 1.02. I had been using the 1992-2001 data as a basis for the last ten years, but finally have done an update. I’m a little hesitant about this update, as the middle infield positions are the biggest movers (higher positional adjustments, meaning less positional credit). I have no qualms for second base, but the shortstop PADJ is out of line with the other position adjustments widely in use and feels a bit high to me. But there are some decent points to be made in favor of offensive adjustments, and I’ll have a bit more on this topic in general below.
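Putting the four baselines and the PADJ together in a sketch (the names are mine; RG and O are the rate and outs defined above, N the league runs per game):

def hitter_value(rg, outs, n, padj):
    hraa = (rg - n) * outs / 25.5
    raa  = (rg - n * padj) * outs / 25.5
    hrar = (rg - .73 * n) * outs / 25.5
    rar  = (rg - .73 * n * padj) * outs / 25.5
    return hraa, raa, hrar, rar

# An invented shortstop (PADJ .93): 6.0 RG over 400 outs in a 4.5 N league
print(hitter_value(6.0, 400, 4.5, .93))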

That was the mechanics of the calculations; now I'll twist myself into knots trying to justify them. If you only care about the how and not the why, stop reading now.

The first thing that should be covered is the philosophical position behind the statistics posted here. They fall on the continuum of ability and value in what I have called "performance". Performance is a technical-sounding way of saying "Whatever arbitrary combination of ability and value I prefer".

With respect to park adjustments, I am not interested in how any particular player is affected, so there is no separate adjustment for lefties and righties for instance. The park factor is an attempt to determine how the park affects run scoring rates, and thus the win value of runs.

I apply the park factor directly to the player's statistics, but it could also be applied to the league context. The advantage to doing it my way is that it allows you to compare the component statistics (like Runs Created or OBA) on a park-adjusted basis. The drawback is that it creates a new theoretical universe, one in which all parks are equal, rather than leaving the player grounded in the actual context in which he played and evaluating how that context (and not the player's statistics) was altered by the park.

The good news is that the two approaches are essentially equivalent; in fact, they are equivalent if you assume that the Runs Per Win factor is equal to the RPG. Suppose that we have a player in an extreme park (PF = 1.15, approximately like Coors Field pre-humidor) who has an 8 RG before adjusting for park, while making 350 outs in a 4.5 N league. The first method of park adjustment, the one I use, converts his value into a neutral park, so his RG is now 8/1.15 = 6.957. We can now compare him directly to the league average:

RAA = (6.957 - 4.5)*350/25.5 = +33.72

The second method would be to adjust the league context. If N = 4.5, then the average player in this park will create 4.5*1.15 = 5.175 runs. Now, to figure RAA, we can use the unadjusted RG of 8:

RAA = (8 - 5.175)*350/25.5 = +38.77

These are not the same, as you can obviously see. The reason for this is that they take place in two different contexts. The first figure is in a 9 RPG (2*4.5) context; the second figure is in a 10.35 RPG (2*4.5*1.15) context. Runs have different values in different contexts; that is why we have RPW converters in the first place. If we convert to WAA (using RPW = RPG, which is only an approximation, so it's usually not as tidy as it appears below), then we have:

WAA = 33.72/9 = +3.75
WAA = 38.77/10.35 = +3.75
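The arithmetic above, in a few lines of code (using RPW = RPG, as in the example):

pf, rg, outs, n = 1.15, 8.0, 350, 4.5

# Method 1: park-adjust the player's RG, compare to the unadjusted league (9 RPG context)
raa1 = (rg / pf - n) * outs / 25.5
waa1 = raa1 / (2 * n)

# Method 2: park-adjust the league context, compare the raw RG to it (10.35 RPG context)
raa2 = (rg - n * pf) * outs / 25.5
waa2 = raa2 / (2 * n * pf)

print(round(raa1, 2), round(waa1, 2))   # ~33.72, ~3.75
print(round(raa2, 2), round(waa2, 2))   # ~38.77, ~3.75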

Once you convert to wins, the two approaches are equivalent. The other nice thing about the first approach is that once you park-adjust, everyone in the league is in the same context, and you can dispense with the need for converting to wins at all. You still might want to convert to wins, and you'll need to do so if you are comparing the 2012 players to players from other league-seasons (including between the AL and NL in the same year), but if you are only looking to compare Jose Bautista to Miguel Cabrera, it's not necessary. WAR is somewhat ubiquitous now, but personally I prefer runs when possible--why mess with decimal points if you don't have to?

The park factors used to adjust player stats here are run-based. Thus, they make no effort to project what a player "would have done" in a neutral park, or account for the differing effects parks have on specific events (walks, home runs, BA) or types of players. They simply account for the difference in run environment that is caused by the park (as best I can measure it). As such, they don't evaluate a player within the actual run context of his team's games; they attempt to restate the player's performance as an equivalent performance in a neutral park.

I suppose I should also justify the use of sqrt(PF) for adjusting component statistics. The classic defense given for this approach relies on basic Runs Created--runs are proportional to OBA*SLG, and OBA*SLG/PF = OBA/sqrt(PF)*SLG/sqrt(PF). While RC may be an antiquated tool, you will find that the square root adjustment is fairly compatible with linear weights or Base Runs as well. I am not going to take the space to demonstrate this claim here, but I will some time in the future.

Many value figures published around the sabersphere adjust for the difference in quality level between the AL and NL. I don't, but this is a thorny area where there is no right or wrong answer as far as I'm concerned. I also do not make an adjustment in the league averages for the fact that the overall NL averages include pitcher batting and the AL does not (not quite true in the era of interleague play, but you get my drift).

The difference between the leagues may not be precisely calculable, and it certainly is not constant, but it is real. If the average player in the AL is better than the average player in the NL, it is perfectly reasonable to expect the average AL player to have more RAR than the average NL player, and that will not happen without some type of adjustment. On the other hand, if you are only interested in evaluating a player relative to his own league, such an adjustment is not necessarily welcome.

The league argument only applies cleanly to metrics baselined to average. Since replacement level compares the given player to a theoretical player that can be acquired on the cheap, the same pool of potential replacement players should by definition be available to the teams of each league. One could argue that if the two leagues don't have equal talent at the major league level, they might not have equal access to replacement level talent--except such an argument is at odds with the notion that replacement level represents talent that is truly "freely available".

So it's hard to justify the approach I take, which is to set replacement level relative to the average runs scored in each league, with no adjustment for the difference in the leagues. The best justification is that it's simple and it treats each league as its own universe, even if in reality they are connected.

The replacement levels I have used here are very much in line with the values used by other sabermetricians. This is based on my own "research", my interpretation of other people's research, and a desire to not stray from consensus and make the values unhelpful to the majority of people who may encounter them.

Replacement level is certainly not settled science. There is always going to be room to disagree on what the baseline should be. Even if you agree it should be "replacement level", any estimate of where it should be set is just that--an estimate. Average is clean and fairly straightforward, even if its utility is questionable; replacement level is inherently messy. So I offer the average baseline as well.

For position players, replacement level is set at 73% of the positional average RG (since there's a history of discussing replacement level in terms of winning percentages, this is roughly equivalent to .350). For starting pitchers, it is set at 128% of the league average RA (.380), and for relievers it is set at 111% (.450).

I am still using an analytical structure that makes the comparison to replacement level for a position player by applying it to his hitting statistics. This is the approach taken by Keith Woolner in VORP (and some other earlier replacement level implementations), but the newer metrics (among them Rally and Fangraphs' WAR) handle replacement level by subtracting a set number of runs from the player's total runs above average in a number of different areas (batting, fielding, baserunning, positional value, etc.), which for lack of a better term I will call the subtraction approach.

The offensive positional adjustment makes the inherent assumption that the average player at each position is equally valuable. I think that this is close to being true, but it is not quite true. The ideal approach would be to use a defensive positional adjustment, since the real difference between a first baseman and a shortstop is their defensive value. When you bat, all runs count the same, whether you create them as a first baseman or as a shortstop.

That being said, using "replacement hitter at position" does not cause too many distortions. It is not theoretically correct, but it is practically powerful. For one thing, most players, even those at key defensive positions, are chosen first and foremost for their offense. Empirical work by Keith Woolner has shown that the replacement level hitting performance is about the same for every position, relative to the positional average.

Figuring what the defensive positional adjustment should be, though, is easier said than done. Therefore, I use the offensive positional adjustment. So if you want to criticize that choice, or criticize the numbers that result, be my guest. But do not claim that I am holding this up as the correct analytical structure. I am holding it up as the most simple and straightforward structure that conforms to reality reasonably well, and because while the numbers may be flawed, they are at least based on an objective formula that I can figure myself. If you feel comfortable with some other assumptions, please feel free to ignore mine.

That still does not justify the use of HRAR--hitting runs above replacement--which compares each hitter, regardless of position, to 73% of the league average. Basically, this is just a way to give an overall measure of offensive production without regard for position with a low baseline. It doesn't have any real baseball meaning.

A player who creates runs at 90% of the league average could be above-average (if he's a shortstop or catcher, or a great fielder at a less important fielding position), or sub-replacement level (DHs that create 4 runs a game are not valuable properties). Every player is chosen because his total value, both hitting and fielding, is sufficient to justify his inclusion on the team. HRAR fails even if you try to justify it with a thought experiment about a world in which defense doesn't matter, because in that case the absolute replacement level (in terms of RG, without accounting for the league average) would be much higher than it is currently.

The specific positional adjustments I use are based on 1992-2001 data. There's no particular reason for not updating them; at the time I started using them, they represented the ten most recent years. I have stuck with them because I have not seen compelling evidence of a change in the degree of difficulty or scarcity between the positions between now and then, and because I think they are fairly reasonable. The positions for which they diverge the most from the defensive position adjustments in common use are 2B, 3B, and CF. Second base is considered a premium position by the offensive PADJ (.94), while third base and center field are both neutral (1.01 and 1.02).

Another flaw is that the PADJ is applied to the overall league average RG, which is artificially low for the NL because of pitcher batting. When using the actual league average runs/game, it's tough to just remove pitchers--any adjustment would be an estimate. If you use the league total of runs created instead, it is a much easier fix.

One other note on this topic is that since the offensive PADJ is a proxy for average defensive value by position, ideally it would be applied by tying it to defensive playing time. I have done it by outs, though.

The reason I have taken this flawed path is that 1) it ties the position adjustment directly into the RAR formula rather than leaving it as something to subtract on the outside and, more importantly, 2) there's no straightforward way to do it. The best would be to use defensive innings--set the full-time player to X defensive innings, figure how Derek Jeter's innings compared to X, and adjust his PADJ accordingly. Games in the field or games played are dicey because they can cause distortion for defensive replacements. Plate appearances avoid the problem that outs have of being highly related to player quality, but they still carry the illogic of basing a defensive adjustment on offensive playing time. And of course the differences here are going to be fairly small (a few runs). That is not to say that this way is preferable, but it's not horrible either, at least as far as I can tell.

To compare this approach to the subtraction approach, start by assuming that a replacement level shortstop would create .86*.73*4.5 = 2.825 RG (or would perform at an overall level of equivalent value to being an average fielder at shortstop while creating 2.825 runs per game). Suppose that we are comparing two shortstops, each of whom compiled 600 PA and played an equal number of defensive games and innings (and thus would have the same positional adjustment using the subtraction approach). Alpha made 380 outs and Bravo made 410 outs, and each ranked as dead-on average in the field.

The difference in overall RAR between the two using the subtraction approach would be equal to the difference between their offensive RAA compared to the league average. Assuming the league average is 4.5 runs, and that both Alpha and Bravo created 75 runs, their offensive RAAs are:

Alpha = (75*25.5/380 - 4.5)*380/25.5 = +7.94

Similarly, Bravo is at +2.65, and so the difference between them will be 5.29 RAR.

Using the flawed approach, Alpha's RAR will be:

(75*25.5/380 - 4.5*.73*.86)*380/25.5 = +32.90

Bravo's RAR will be +29.58, a difference of 3.32 RAR, which is two runs off of the difference using the subtraction approach.
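
If you want to replicate those figures, here is a minimal Python sketch of the two calculations, using the values from the example (league average 4.5 R/G, shortstop PADJ of .86, replacement at 73% of the positional average):

N, PADJ, REPL = 4.5, .86, .73

def off_raa(rc, outs):
    # offensive runs above the league average hitter
    return (rc * 25.5 / outs - N) * outs / 25.5

def rar(rc, outs):
    # runs above a replacement-level hitter at the position
    return (rc * 25.5 / outs - N * REPL * PADJ) * outs / 25.5

print(round(off_raa(75, 380), 2), round(off_raa(75, 410), 2))   # ~ +7.94, +2.65
print(round(rar(75, 380), 2), round(rar(75, 410), 2))           # ~ +32.9, +29.58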

The downside to using PA is that you really need to consider park effects if you do, whereas outs allow you to sidestep park effects. Outs are constant; plate appearances are linked to OBA. Thus, plate appearances not only depend on the offensive context (including park factor), but also on the quality of one's team. Of course, attempting to adjust for team PA differences opens a huge can of worms which is not really relevant here; for now, the point is that using outs for individual players causes distortions, sometimes trivial and sometimes bothersome, but almost always makes one's life easier.

I do not include fielding (or baserunning outside of steals, although that is a trivial consideration in comparison) in the RAR figures--they cover offense and positional value only. This in no way means that I do not believe that fielding is an important consideration in player valuation. However, two of the key principles of these stat reports are 1) not incorporating any data that is not readily available and 2) not simply including other people's results (of course I borrow heavily from other people's methods, but only adapting methodology that I can apply myself).

Any fielding metric worth its salt will fail to meet either criterion--they use zone data or play-by-play data which I do not have easy access to. I do not have a fielding metric that I have stapled together myself, and so I would have to simply lift other analysts' figures.

Setting the practical reason for not including fielding aside, I do have some reservations about lumping fielding and hitting value together in one number because of the obvious differences in reliability between offensive and fielding metrics. In theory, they absolutely should be put together. But in practice, I believe it would be better to regress the fielding metric to a point at which it would be roughly equivalent in reliability to the offensive metric.

Offensive metrics have error bars associated with them, too, of course, and in evaluating a single season's value, I don't care about the vagaries that we often lump together as "luck". Still, there are errors in our assessment of linear weight values and players that collect an unusual proportion of infield hits or hits to the left side, errors in estimation of park factor, and any number of other factors that make their events more or less valuable than an average event of that type.

Fielding metrics offer up all of that and more, as we cannot be nearly as certain of true successes and failures as we are when analyzing offense. Recent investigations, particularly by Colin Wyers, have raised even more questions about the level of uncertainty. So, even if I was including a fielding value, my approach would be to assume that the offensive value was 100% reliable (which it isn't), and regress the fielding metric relative to that (so if the offensive metric was actually 70% reliable, and the fielding metric 40% reliable, I'd treat the fielding metric as .4/.7 = 57% reliable when tacking it on, to illustrate with a simplified and completely made up example presuming that one could have a precise estimate of nebulous "reliability").

Given the inherent assumption of the offensive PADJ that all positions are equally valuable, once RAR has been figured for a player, fielding value can be accounted for by adding on his runs above average relative to a player at his own position. If there is a shortstop that is -2 runs defensively versus an average shortstop, he is without a doubt a plus defensive player, and a more valuable defensive player than a first baseman who was +1 run better than an average first baseman. Regardless, since it was implicitly assumed that they are both average defensively for their position when RAR was calculated, the shortstop will see his value docked two runs. This DOES NOT MEAN that the shortstop has been penalized for his defense. The whole process of accounting for positional differences, going from hitting RAR to positional RAR, has benefited him.

I've found that there is often confusion about the treatment of first basemen and designated hitters in my PADJ methodology, since I consider DHs to be in the same pool as first basemen. The fact of the matter is that first basemen outhit DHs. There are any number of potential explanations for this: DHs are often old or injured, players hit worse when DHing than they do when playing the field, etc. This actually helps first basemen, since the DHs drag the average production of the pool down, thus resulting in a lower replacement level than I would get if I considered first basemen alone.

However, this method does assume that a 1B and a DH have equal defensive value. Obviously, a DH has no defensive value. What I advocate to correct this is to treat a DH as a bad defensive first baseman, and thus knock another five or ten runs off of his RAR for a full-time player. I do not incorporate this into the published numbers, but you should keep it in mind. However, there is no need to adjust the figures for first basemen upwards--the only necessary adjustment is to take the DHs down a notch.

Finally, I consider each player at his primary defensive position (defined as where he appears in the most games), and do not weight the PADJ by playing time. This does shortchange a player like Ben Zobrist (who saw significant time at a tougher position than his primary position), and unduly boost a player like Joe Mauer (who logged a lot of games at a much easier position than his primary position). For most players, though, it doesn't matter much. I find it preferable to make manual adjustments for the unusual cases rather than add another layer of complexity to the whole endeavor.

2012 Leagues

2012 Park Factors

2012 Teams

2012 Team Offense

2012 Team Defense

2012 AL Relievers

2012 NL Relievers

2012 AL Starters

2012 NL Starters

2012 AL Hitters

2012 NL Hitters

Monday, August 06, 2012

All Models Are Wrong

The statistician George Box once wrote that “Essentially, all models are wrong, but some are useful.” Whether the context in which this was written is identical or even particularly close to the sabermetric issues I’m going to touch on isn’t really the point. Perfect models only occur when severe constraints can be imposed.

Since you can’t have a perfect model, the designer and user must decide what level of accuracy is acceptable for the purposes for which the model will be used. This is a question on which the designers and evaluators of sabermetric methods and the end users often disagree. In my stuff (I’ll use “stuff” because “research” is too pretentious and “work” is too sterile), my checklist of ideal properties would go something like this:

1. Does the method work under normal conditions? (i.e. does the run estimator make accurate predictions for teams at normal major league levels of offense)--This is the first hurdle that any sabermetric method must clear. If you can’t even do this, then your model truly is useless.

2. Does the method work for unusual conditions?--I'm going to draw a distinction here between “unusual” and “extreme”. Unusual conditions are those that represent the tails of the usual distributions we observe in baseball. A 70-92 team is not unusual, but a 54-108 team is (and keep in mind that individuals will exhibit a wider range of performance than teams). If the method fails under unusual conditions, it may still be useful, but extreme caution has to be taken when using it. Teams and players that threaten to break it will occur often enough that one will quickly tire of repeating the caveats.

3. Does the method work for extreme conditions?--Extreme conditions are the most unusual of cases, ones that will never occur at high levels of baseball over large samples. A player that hits a home run in every at bat will obviously never exist (although a player could have a game in which he hits a home run in every at bat). A method that can’t handle these extremes can still be quite useful. However, a method that can handle the extremes is much more likely to be a faithful representation of the underlying process. Furthermore, methods that are not accurate at extremes must begin to break down somewhere along the way, so if a model truly works better at the extremes, there’s a good chance it will also produce marginally better results than an alternative model for some of the unusual cases which will actually be observed.

And simply as a matter of advancing our knowledge of baseball, a model that works at the extremes helps us understand the true relationship between the inputs and outputs. Take W% estimators as an example. We know that, as a rule of thumb, 10 runs = 1 win. This rule of thumb holds very well for teams in the usual range of run differentials in run-of-the-mill major league scoring conditions. But why does it work? Anyone with a spreadsheet can run a regression and demonstrate the rule of thumb, but a model that works at the extremes can demonstrate why it is true and how the relationship differs as we move away from those typical conditions.

4. How simple is the method?--I differ from many other people in the relative weight given to the question of simplicity. There are some people who would rank it #1 or #2 on their own list, and would be willing to accept a much less accurate estimator if it was also much simpler.

Obviously, if two approaches are equally accurate (or very close to being equally accurate), it makes sense to use the simpler one. But I’m not a fan of sacrificing any accuracy if I don’t have to. Limits to this principle are much more relevant in fields in which more advanced models are used. However, most sabermetric models (particularly for the type of questions that I’ve always focused on) really are not complex at all and do not tax computer systems or trigger any other practical constraints on complexity. People might say that Base Runs is more complex than Runs Created, but the differences between common sabermetric methods are on the level of adding one more step or one more operation.

Now that I've attempted to define where I am coming from on this general question, let me get to the specific trigger for this post: Aroldis Chapman's FIP. Trent Rosencrans of CBS pointed out that Chapman's FIP for July was negative. It goes without saying that this is a breakdown in the model, as a negative number of runs scored makes no sense.

I have always been prone to overreact to a few comments on a site, and run off and compose a blog to respond not to a strawman, but to an extreme minority opinion. That’s probably the case here.

Still, it is interesting which metrics and which results set off this sort of reaction, and which generally don’t. A number of us tried unsuccessfully for years to argue for the replacement of Runs Created with Base Runs, but a number of very intelligent people resisted the idea. Arguments were advanced that Base Runs was too complex or that given the errors inherent to the exercise, Runs Created was good enough.

OPS+ also remains in use as a go-to quick and dirty stat, due largely to its prominence on Baseball-Reference. Yet OPS+ clearly undervalues OBA and can also, in extreme situations, return a negative number of implied runs. ERA+ distorts the true difference in runs allowed rate and makes errors in aggregation across pitchers and seasons seem natural, but B-R had to backtrack quickly when they replaced it with something better because of the backlash.

It should be noted that some of the reaction to any flaws in FIP are likely related to general disagreement and distrust of DIPS theory as a whole (Colin Wyers pointed this possibility out to me). DIPS has always engendered a somewhat visceral reaction and remains the most controversial piece of the standard sabermetric toolkit. An obviously flawed result from the most ubiquitous member of the DIPS family is the perfect opportunity to lash out.

It’s no mystery why FIP fails for Chapman’s July. FIP is based on linear weights, and any linear weight estimator of absolute runs will return a negative estimate at sufficiently low levels of offense. For example, a game in which there are three singles, one double, one walk, and 27 outs will result in a run estimate of around -.1 runs [3*.5 + 1*.8 + 1*.3 - 27*.1]. Run scoring is not a linear process, but there are many advantages to using a linear approximation (I won’t recap those here but this should be familiar territory). However, the weights are tailored for a normal environment, and they become less reliable as the environment becomes more extreme.

In July, Chapman’s line looked like this:

IP H HR W K
14.1 6 0 2 31

There is no linear run estimator that will be accurate for a normal context that will survive something this extreme. Fortunately, it is an extreme context observed over a very small sample size. A month may superficially seem like a significant split, but fourteen innings is roughly equivalent to two starts.
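
For the curious, here is roughly how the negative figure comes about, using the standard FIP formula with a typical constant of about 3.10 (the constant is set each season so that league FIP matches league ERA, so treat this as an illustration rather than the exact published number):

ip = 14 + 1/3          # the "14.1" in the line above is baseball notation for 14 1/3 innings
hr, bb, k = 0, 2, 31
fip = (13*hr + 3*bb - 2*k) / ip + 3.10
print(round(fip, 2))   # about -0.81 runs per nine innings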

Going back to my checklist for a metric above, I do place a high weight on accuracy at the extremes. While I am personally comfortable with the use of FIP for quick and dirty situations, I happen to agree with some of the critics that it isn't particularly appropriate for use in a WAR calculation (presuming that you want to include a defense-independent metric as the input for WAR at all). It doesn't make a lot of sense to sweat the small stuff, as WAR does, except when it comes to the main driver (FIP). While the issue of negative runs will not be present over the long haul, using a linear run estimator for individual pitchers is needlessly imprecise by my criteria.

There are at least a couple different Base Runs-based DIPS formulas out there--Voros McCracken himself has one, I have one (see “dRA” here), and there could be others that have slipped my mind. Using dRA, Chapman’s July checks in at .68, which seems pretty reasonable.

The moral of the story is that our methods will always be flawed in one manner or another. Sometimes the designer or user has a tradeoff to make between simplicity and theoretical accuracy. Depending on the question the metric is being used to answer, there may be a reason to change one’s priorities in choosing a metric. Ideally, one should be as consistent as possible in making those choices. At the risk of painting with a broad brush, it is that consistency that appears to be lacking in some of the reaction in this case.

Sunday, October 02, 2011

End of Season Statistics 2011

The spreadsheets are published as Google Spreadsheets, which you can download in Excel format by changing the extension in the address from "=html" to "=xls". That way you can download them and manipulate things however you see fit.

The data comes from a number of different sources. Most of the basic data comes from Doug's Stats, which is a very handy site. KJOK's park database provided some of the data used in the park factors, but for recent seasons park data comes from anywhere that has it--Doug's Stats, or Baseball-Reference, or ESPN.com, or MLB.com. Data on pitcher's batted ball types allowed, doubles/triples allowed, and inherited/bequeathed runners comes from Baseball Prospectus.

The basic philosophy behind these stats is to use the simplest methods that have acceptable accuracy. Of course, "acceptable" is in the eye of the beholder, namely me. I use Pythagenpat not because other run/win converters, like a constant RPW or a fixed exponent, are not accurate enough for this purpose, but because it's mine and it would be kind of odd if I didn't use it.

If I seem to be a stickler for purity in my critiques of others' methods, I'd contend it is usually in a theoretical sense, not an input sense. So when I exclude hit batters, I'm not saying that hit batters are worthless or that they *should* be ignored; it's just easier not to mess with them and not that much less accurate.

I also don't really have a problem with people using sub-standard methods (say, Basic RC) as long as they acknowledge that they are sub-standard. If someone pretends that Basic RC doesn't undervalue walks or cause problems when applied to extreme individuals, I'll call them on it; if they explain its shortcomings but use it regardless, I accept that. Take these last three paragraphs as my acknowledgment that some of the statistics displayed here have shortcomings as well.

The League spreadsheet is pretty straightforward--it includes league totals and averages for a number of categories, most or all of which are explained at appropriate junctures throughout this piece. The advent of interleague play has created two different sets of league totals--one for the offense of league teams and one for the defense of league teams. Before interleague play, these two were identical. I do not present both sets of totals (you can figure the defensive ones yourself from the team spreadsheet, if you desire), just those for the offenses. The exception is for the defense-specific statistics, like innings pitched and quality starts. The figures for those categories in the league report are for the defenses of the league's teams. However, I do include each league's breakdown of basic pitching stats between starters and relievers (denoted by "s" or "r" prefixes), and so summing those will yield the totals from the pitching side. The one abbreviation you might not recognize is "N"--this is the league average of runs/game for one team, and it will pop up again.

The Team spreadsheet focuses on overall team performance--wins, losses, runs scored, runs allowed. The columns included are: Park Factor (PF), Home Run Park Factor (PFhr), Winning Percentage (W%), Expected W% (EW%), Predicted W% (PW%), wins, losses, runs, runs allowed, Runs Created (RC), Runs Created Allowed (RCA), Home Winning Percentage (HW%), Road Winning Percentage (RW%) [exactly what they sound like--W% at home and on the road], Runs/Game (R/G), Runs Allowed/Game (RA/G), Runs Created/Game (RCG), Runs Created Allowed/Game (RCAG), and Runs Per Game (the average number of runs scored and allowed per game). Ideally, I would use outs as the denominator, but for teams, outs and games are so closely related that I don't think it's worth the extra effort.

The runs and Runs Created figures are unadjusted, but the per-game averages are park-adjusted, except for RPG which is also raw. Runs Created and Runs Created Allowed are both based on a simple Base Runs formula. The formula is:

A = H + W - HR - CS
B = (2TB - H - 4HR + .05W + 1.5SB)*.76
C = AB - H
D = HR
Naturally, A*B/(B + C) + D.
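
In code, that is the whole formula (the team line in the example call is made up, just to show the scale of the output):

def team_rc(ab, h, tb, hr, w, sb, cs):
    a = h + w - hr - cs
    b = (2*tb - h - 4*hr + .05*w + 1.5*sb) * .76
    c = ab - h
    d = hr
    return a * b / (b + c) + d

# hypothetical team totals: 5550 AB, 1450 H, 2300 TB, 160 HR, 520 W, 100 SB, 45 CS
print(round(team_rc(5550, 1450, 2300, 160, 520, 100, 45)))   # about 747 runs created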

I have explained the methodology used to figure the PFs before, but the cliff’s notes version is that they are based on five years of data when applicable, include both runs scored and allowed, and they are regressed towards average (PF = 1), with the amount of regression varying based on the number of years of data used. There are factors for both runs and home runs. The initial PF (not shown) is:

iPF = (H*T/(R*(T - 1) + H) + 1)/2
where H = RPG in home games, R = RPG in road games, T = # teams in league (14 for AL and 16 for NL). Then the iPF is converted to the PF by taking x*iPF + (1-x), where x = .6 if one year of data is used, .7 for 2, .8 for 3, and .9 for 4+.
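
A short sketch of the whole park factor calculation as described above, with made-up home and road RPG figures for illustration:

def park_factor(home_rpg, road_rpg, teams, years):
    ipf = (home_rpg * teams / (road_rpg * (teams - 1) + home_rpg) + 1) / 2
    x = {1: .6, 2: .7, 3: .8}.get(years, .9)   # .9 when 4+ years of data are used
    return x * ipf + (1 - x)

# e.g. a park with 9.5 RPG at home and 9.0 on the road, five years of data, 14-team league
print(round(park_factor(9.5, 9.0, 14, 5), 2))   # about 1.02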

It is important to note, since there always seems to be confusion about this, that these park factors already incorporate the fact that the average player plays 50% on the road and 50% at home. That is what the adding one and dividing by 2 in the iPF is all about. So if I list Fenway Park with a 1.02 PF, that means that it actually increases RPG by 4%.

In the calculation of the PFs, I did not get picky and take out “home” games that were actually at neutral sites, like the Astros/Cubs series that was moved to Milwaukee in 2008.

There are also Team Offense and Defense spreadsheets. These include the following categories:

Team offense: Plate Appearances, Batting Average (BA), On Base Average (OBA), Slugging Average (SLG), Secondary Average (SEC), Walks per At Bat (WAB), Isolated Power (SLG - BA), R/G at home (hR/G), and R/G on the road (rR/G). BA, OBA, SLG, WAB, and ISO are park-adjusted by dividing by the square root of park factor (or the equivalent; WAB = (OBA - BA)/(1 - OBA) and ISO = SLG - BA).

Team defense: Innings Pitched, BA, OBA, SLG, Innings per Start (IP/S), Starter's eRA (seRA), Reliever's eRA (reRA), RA/G at home (hRA/G), RA/G on the road (rRA/G), Battery Mishap Rate (BMR), Modified Fielding Average (mFA), and Defensive Efficiency Record (DER). BA, OBA, and SLG are park-adjusted by dividing by the square root of PF; seRA and reRA are divided by PF.

The three fielding metrics I've included are limited to metrics that a) I can calculate myself and b) are based on the basic available data, not specialized PBP data. The three metrics are explained in this post, but here are quick descriptions of each:

1) BMR--wild pitches and passed balls per 100 baserunners = (WP + PB)/(H + W - HR)*100

2) mFA--fielding average removing strikeouts and assists = (PO - K)/(PO - K + E)

3) DER--the Bill James classic, using only the PA-based estimate of plays made. Based on a suggestion by Terpsfan101, I've tweaked the error coefficient. Plays Made = PA - K - H - W - HR - HB - .64E and DER = PM/(PM + H - HR + .64E)
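
For anyone who wants to compute these from a standard team stat line, a minimal sketch of the three formulas above:

def bmr(wp, pb, h, w, hr):
    # wild pitches and passed balls per 100 baserunners
    return (wp + pb) / (h + w - hr) * 100

def mfa(po, k, e):
    # fielding average with strikeouts and assists removed
    return (po - k) / (po - k + e)

def der(pa, k, h, w, hr, hb, e):
    pm = pa - k - h - w - hr - hb - .64 * e   # PA-based estimate of plays made
    return pm / (pm + h - hr + .64 * e)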

Next are the individual player reports. I defined a starting pitcher as one with 15 or more starts. All other pitchers are eligible to be included as a reliever. If a pitcher has 40 appearances, then they are included. Additionally, if a pitcher has 50 innings and less than 50% of his appearances are starts, he is also included as a reliever (this allows some swingmen type pitchers who wouldn’t meet either the minimum start or appearance standards to get in).

For all of the player reports, ages are based on simply subtracting their year of birth from 2011. I realize that this is not compatible with how ages are usually listed and so “Age 27” doesn’t necessarily correspond to age 27 as I list it, but it makes everything a heckuva lot easier, and I am more interested in comparing the ages of the players to their contemporaries, for which case it makes very little difference. The "R" category records rookie status with a "R" for rookies and a blank for everyone else; I've trusted Baseball Prospectus on this. Also, all players are counted as being on the team with whom they played/pitched (IP or PA as appropriate) the most.

For relievers, the categories listed are: Games, Innings Pitched, Run Average (RA), Relief Run Average (RRA), Earned Run Average (ERA), Estimated Run Average (eRA), DIPS Run Average (dRA), Batted Ball Run Average (cRA), SIERA-style Run Average (sRA), Guess-Future (G-F), Inherited Runners per Game (IR/G), Batting Average on Balls in Play (%H), Runs Above Average (RAA), and Runs Above Replacement (RAR).

IR/G is per relief appearance (G - GS); it is an interesting thing to look at, I think, in lieu of actual leverage data. You can see which closers come in with runners on base, and which are used nearly exclusively to start innings. Of course, you can’t infer too much; there are bad relievers who come in with a lot of people on base, not because they are being used in high leverage situations, but because they are long men being used in low-leverage situations already out of hand.

For starting pitchers, the columns are: Wins, Losses, Innings Pitched, RA, RRA, ERA, eRA, dRA, cRA, sRA, G-F, %H, Pitches/Start (P/S), Quality Start Percentage (QS%), RAA, and RAR. RA and ERA you know--R*9/IP or ER*9/IP, park-adjusted by dividing by PF. The formulas for eRA, dRA, cRA, and sRA are in this article; I'm not going to copy them here, but all of them are based on the same Base Runs equation and they all estimate RA, not ERA:

* eRA is based on the actual results allowed by the pitcher (hits, doubles, home runs, walks, strikeouts, etc.). It is park-adjusted by dividing by PF.

* dRA is the classic DIPS-style RA, assuming that the pitcher allows a league average %H, and that his hits in play have a league-average S/D/T split. It is park-adjusted by dividing by PF.

* cRA is based on batted ball type (FB, GB, POP, LD) allowed, using the actual estimated linear weight value for each batted ball type. It is not park-adjusted.

* sRA is a SIERA-style RA, based on batted balls but broken down into just groundballs and non-groundballs. It is not park-adjusted either.

Both cRA and sRA are running a little high when compared to actual RA for 2010. Both measures are very sensitive and need to be recalibrated in order to overcome batted ball-type definition differences, frequencies of hit types on each kind of batted ball, and other factors, so keep in mind that they may not perfectly track RA without those adjustments (which I have not made in this case). I’ll let you make your own determination as to whether you find this data useful at all. Personally, I prefer to look at RRA, eRA, and dRA.

G-F is a junk stat, included here out of habit because I've been including it for years. It was intended to give a quick read of a pitcher's expected performance in the next season, based on eRA and strikeout rate. Although the numbers vaguely resemble RAs, it's actually unitless. As a rule of thumb, anything under four is pretty good for a starter. G-F = 4.46 + .095(eRA) - .113(K*9/IP). It is a junk stat. JUNK STAT JUNK STAT JUNK STAT. Got it?

%H is BABIP, more or less; I use an estimate of PA (IP*x + H + W, where x is the league average of (AB - H)/IP). %H = (H - HR)/(IP*x + H - HR - K). Pitches/Start includes all appearances, so I've counted relief appearances as one-half of a start (P/S = Pitches/(.5*G + .5*GS)). QS% is just QS/GS; I don't think it's particularly useful, but Doug's Stats include QS so I include it.

I've used a stat called Relief Run Average (RRA) in the past, based on Sky Andrecheck's article in the August 1999 By the Numbers; that one only used inherited runners, but I've revised it to include bequeathed runners as well, making it equally applicable to starters and relievers. I am using RRA as the building block for baselined value estimates for all pitchers this year. I explained RRA in this article, but the bottom line formulas are:

BRSV = BRS - BR*i*sqrt(PF)
IRSV = IR*i*sqrt(PF) - IRS
RRA = ((R - (BRSV + IRSV))*9/IP)/PF

The two baselined stats are Runs Above Average (RAA) and Runs Above Replacement (RAR). RAA uses the league average runs/game (N) for both starters and relievers, while RAR uses separate replacement levels for starters and relievers. Thus, RAA and RAR will be pretty close for relievers:

RAA = (N - RRA)*IP/9
RAR (relievers) = (1.11*N - RRA)*IP/9
RAR (starters) = (1.28*N - RRA)*IP/9
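
Putting the pitcher value chain together in one short sketch--here BR/BRS are bequeathed runners and bequeathed runners scored, IR/IRS are inherited runners and inherited runners scored, and i is the expected scoring rate for such runners (that is my shorthand reading of the abbreviations; the details are in the linked article):

from math import sqrt

def rra(r, ip, br, brs, ir, irs, i, pf):
    brsv = brs - br * i * sqrt(pf)
    irsv = ir * i * sqrt(pf) - irs
    return ((r - (brsv + irsv)) * 9 / ip) / pf

def raa(rra_val, ip, n):
    return (n - rra_val) * ip / 9

def rar(rra_val, ip, n, starter=True):
    repl = 1.28 if starter else 1.11   # replacement multiplier for starters/relievers
    return (repl * n - rra_val) * ip / 9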

All players with 285 or more plate appearances are included in the Hitters spreadsheets. (I usually use 300 as a cutoff, but this year when I had the list sorted there were a number of players just below 300 that I was interested in, so I chose an arbitrarily lower threshold). Each is assigned one position, the one at which they appeared in the most games. The statistics presented are: Games played (G), Plate Appearances (PA), Outs (O), Batting Average (BA), On Base Average (OBA), Slugging Average (SLG), Secondary Average (SEC), Runs Created (RC), Runs Created per Game (RG), Speed Score (SS), Hitting Runs Above Average (HRAA), Runs Above Average (RAA), Hitting Runs Above Replacement (HRAR), and Runs Above Replacement (RAR).

I do not bother to include hit batters, so take note of that for players who do get plunked a lot. Therefore, PA are simply AB + W. Outs are AB - H + CS. BA and SLG you know, but remember that without HB and SF, OBA is just (H + W)/(AB + W). Secondary Average = (TB - H + W)/AB = SLG - BA + (OBA - BA)/(1 - OBA). I have not included net steals as many people (and Bill James himself) do--it is solely hitting events.

BA, OBA, and SLG are park-adjusted by dividing by the square root of PF. This is an approximation, of course, but I'm satisfied that it works well. The goal here is to adjust for the win value of offensive events, not to quantify the exact park effect on the given rate. I use the BA/OBA/SLG-based formula to figure SEC, so it is park-adjusted as well.

Runs Created is actually Paul Johnson's ERP, more or less. Ideally, I would use a custom linear weights formula for the given league, but ERP is just so darn simple and close to the mark that it’s hard to pass up. I still use the term “RC” partially as a homage to Bill James (seriously, I really like and respect him even if I’ve said negative things about RC and Win Shares), and also because it is just a good term. I like the thought put in your head when you hear “creating” a run better than “producing”, “manufacturing”, “generating”, etc. to say nothing of names like “equivalent” or “extrapolated” runs. None of that is said to put down the creators of those methods--there just aren’t a lot of good, unique names available. Anyway, RC = (TB + .8H + W + .7SB - CS - .3AB)*.322.

RC is park adjusted by dividing by PF, making all of the value stats that follow park adjusted as well. RG, the rate, is RC/O*25.5. I do not believe that outs are the proper denominator for an individual rate stat, but I also do not believe that the distortions caused are that bad. (I still intend to finish my rate stat series and discuss all of the options in excruciating detail, but alas you’ll have to take my word for it now).
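
In code, the chain from the raw batting line to RG is just:

def runs_created(tb, h, w, sb, cs, ab, pf):
    # Paul Johnson's ERP variant described above, park-adjusted by dividing by PF
    return (tb + .8*h + w + .7*sb - cs - .3*ab) * .322 / pf

def rg(rc, outs):
    # runs created per 25.5 outs
    return rc / outs * 25.5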

I have decided to switch to a watered-down version of Bill James' Speed Score this year; I only use four of his categories. Previously I used my own knockoff version called Speed Unit, but trying to keep it from breaking down every few years was a wasted effort.

Speed Score is the average of four components, which I'll call a, b, c, and d:

a = ((SB + 3)/(SB + CS + 7) - .4)*20
b = sqrt((SB + CS)/(S + W))*14.3
c = ((R - HR)/(H + W - HR) - .1)*25
d = T/(AB - HR - K)*450

James actually uses a sliding scale for the triples component, but it strikes me as needlessly complex and so I've streamlined it. I also changed some of his division to mathematically equivalent multiplications.
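
A compact sketch of the four components and their average (S is singles, T is triples):

def speed_score(sb, cs, s, w, r, hr, h, t, ab, k):
    a = ((sb + 3) / (sb + cs + 7) - .4) * 20
    b = ((sb + cs) / (s + w)) ** .5 * 14.3
    c = ((r - hr) / (h + w - hr) - .1) * 25
    d = t / (ab - hr - k) * 450
    return (a + b + c + d) / 4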

There are a whopping four categories that compare to a baseline; two for average, two for replacement. Hitting RAA compares to a league average hitter; it is in the vein of Pete Palmer’s Batting Runs. RAA compares to an average hitter at the player’s primary position. Hitting RAR compares to a “replacement level” hitter; RAR compares to a replacement level hitter at the player’s primary position. The formulas are:

HRAA = (RG - N)*O/25.5
RAA = (RG - N*PADJ)*O/25.5
HRAR = (RG - .73*N)*O/25.5
RAR = (RG - .73*N*PADJ)*O/25.5
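
Or, as one small function (n is the league average runs/game and padj is the positional adjustment discussed below):

def hitter_values(rg, outs, n, padj):
    hraa = (rg - n) * outs / 25.5
    raa  = (rg - n * padj) * outs / 25.5
    hrar = (rg - .73 * n) * outs / 25.5
    rar  = (rg - .73 * n * padj) * outs / 25.5
    return hraa, raa, hrar, rar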

PADJ is the position adjustment, and it is based on 1992-2001 offensive data. For catchers it is .89; for 1B/DH, 1.19; for 2B, .93; for 3B, 1.01; for SS, .86; for LF/RF, 1.12; and for CF, 1.02. It dawned on me when re-reading this before posting that the timeframe means that I’ve been using the same PADJ for ten years--which means two things:

1) I’m getting old
2) It's probably time for an update. I'll look at 2002-2011 in my forthcoming annual "Offense by Position" post.

That was the mechanics of the calculations; now I'll twist myself into knots trying to justify them. If you only care about the how and not the why, stop reading now.

The first thing that should be covered is the philosophical position behind the statistics posted here. They fall on the continuum of ability and value in what I have called "performance". Performance is a technical-sounding way of saying "Whatever arbitrary combination of ability and value I prefer".

With respect to park adjustments, I am not interested in how any particular player is affected, so there is no separate adjustment for lefties and righties for instance. The park factor is an attempt to determine how the park affects run scoring rates, and thus the win value of runs.

I apply the park factor directly to the player's statistics, but it could also be applied to the league context. The advantage to doing it my way is that it allows you to compare the component statistics (like Runs Created or OBA) on a park-adjusted basis. The drawback is that it creates a new theoretical universe, one in which all parks are equal, rather than leaving the player grounded in the actual context in which he played and evaluating how that context (and not the player's statistics) was altered by the park.

The good news is that the two approaches are essentially equivalent; in fact, they are equivalent if you assume that the Runs Per Win factor is equal to the RPG. Suppose that we have a player in an extreme park (PF = 1.15, approximately like Coors Field pre-humidor) who has an 8 RG before adjusting for park, while making 350 outs in a 4.5 N league. The first method of park adjustment, the one I use, converts his value into a neutral park, so his RG is now 8/1.15 = 6.957. We can now compare him directly to the league average:

RAA = (6.957 - 4.5)*350/25.5 = +33.72

The second method would be to adjust the league context. If N = 4.5, then the average player in this park will create 4.5*1.15 = 5.175 runs. Now, to figure RAA, we can use the unadjusted RG of 8:

RAA = (8 - 5.175)*350/25.5 = +38.77

These are not the same, as you can obviously see. The reason for this is that they take place in two different contexts. The first figure is in a 9 RPG (2*4.5) context; the second figure is in a 10.35 RPG (2*4.5*1.15) context. Runs have different values in different contexts; that is why we have RPW converters in the first place. If we convert to WAA (using RPW = RPG), then we have:

WAA = 33.72/9 = +3.75
WAA = 38.77/10.35 = +3.75
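
A quick numerical check of that equivalence, using the example values above:

n, pf, rg, outs = 4.5, 1.15, 8.0, 350

raa_adjust_player = (rg / pf - n) * outs / 25.5    # about +33.72 in a 9.00 RPG context
raa_adjust_league = (rg - n * pf) * outs / 25.5    # about +38.77 in a 10.35 RPG context

print(round(raa_adjust_player / (2 * n), 2))       # 3.75 WAA
print(round(raa_adjust_league / (2 * n * pf), 2))  # 3.75 WAA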

Once you convert to wins, the two approaches are equivalent. The other nice thing about the first approach is that once you park-adjust, everyone in the league is in the same context, and you can dispense with the need for converting to wins at all. You still might want to convert to wins, and you'll need to do so if you are comparing the 2011 players to players from other league-seasons (including between the AL and NL in the same year), but if you are only looking to compare Jose Bautista to Miguel Cabrera, it's not necessary. WAR is somewhat ubiquitous now, but personally I prefer runs when possible--why mess with decimal points if you don't have to?

The park factors used to adjust player stats here are run-based. Thus, they make no effort to project what a player "would have done" in a neutral park, or account for the different effects parks have on specific events (walks, home runs, BA) or types of players. They simply account for the difference in run environment that is caused by the park (as best I can measure it). As such, they don't evaluate a player within the actual run context of his team's games; they attempt to restate the player's performance as an equivalent performance in a neutral park.

I suppose I should also justify the use of sqrt(PF) for adjusting component statistics. The classic defense given for this approach relies on basic Runs Created--runs are proportional to OBA*SLG, and OBA*SLG/PF = OBA/sqrt(PF)*SLG/sqrt(PF). While RC may be an antiquated tool, you will find that the square root adjustment is fairly compatible with linear weights or Base Runs as well. I am not going to take the space to demonstrate this claim here, but I will some time in the future.

Many value figures published around the sabersphere adjust for the difference in quality level between the AL and NL. I don't, but this is a thorny area where there is no right or wrong answer as far as I'm concerned. I also do not make an adjustment in the league averages for the fact that the overall NL averages include pitcher batting and the AL does not (not quite true in the era of interleague play, but you get my drift).

The difference between the leagues may not be precisely calculable, and it certainly is not constant, but it is real. If the average player in the AL is better than the average player in the NL, it is perfectly reasonable to expect the average AL player to have more RAR than the average NL player, and that will not happen without some type of adjustment. On the other hand, if you are only interested in evaluating a player relative to his own league, such an adjustment is not necessarily welcome.

The league argument only applies cleanly to metrics baselined to average. Since replacement level compares the given player to a theoretical player that can be acquired on the cheap, the same pool of potential replacement players should by definition be available to the teams of each league. One could argue that if the two leagues don't have equal talent at the major league level, they might not have equal access to replacement level talent--except such an argument is at odds with the notion that replacement level represents talent that is truly "freely available".

So it's hard to justify the approach I take, which is to set replacement level relative to the average runs scored in each league, with no adjustment for the difference in the leagues. The best justification is that it's simple and it treats each league as its own universe, even if in reality they are connected.

The replacement levels I have used here are very much in line with the values used by other sabermetricians. This is based on my own "research", my interpretation of other people's research, and a desire not to stray so far from the consensus that the values become unhelpful to the majority of people who may encounter them.

Replacement level is certainly not settled science. There is always going to be room to disagree on what the baseline should be. Even if you agree it should be "replacement level", any estimate of where it should be set is just that--an estimate. Average is clean and fairly straightforward, even if its utility is questionable; replacement level is inherently messy. So I offer the average baseline as well.

For position players, replacement level is set at 73% of the positional average RG (since there's a history of discussing replacement level in terms of winning percentages, this is roughly equivalent to .350). For starting pitchers, it is set at 128% of the league average RA (.380), and for relievers it is set at 111% (.450).

I am still using an analytical structure that makes the comparison to replacement level for a position player by applying it to his hitting statistics. This is the approach taken by Keith Woolner in VORP (and some other earlier replacement level implementations), but the newer metrics (among them Rally and Fangraphs' WAR) handle replacement level by subtracting a set number of runs from the player's total runs above average in a number of different areas (batting, fielding, baserunning, positional value, etc.), which for lack of a better term I will call the subtraction approach.

The offensive positional adjustment makes the inherent assumption that the average player at each position is equally valuable. I think that this is close to being true, but it is not quite true. The ideal approach would be to use a defensive positional adjustment, since the real difference between a first baseman and a shortstop is their defensive value. When you bat, all runs count the same, whether you create them as a first baseman or as a shortstop.

That being said, using “replacement hitter at position” does not cause too many distortions. It is not theoretically correct, but it is practically powerful. For one thing, most players, even those at key defensive positions, are chosen first and foremost for their offense. Empirical work by Keith Woolner has shown that the replacement level hitting performance is about the same for every position, relative to the positional average.

Figuring what the defensive positional adjustment should be, though, is easier said than done. Therefore, I use the offensive positional adjustment. So if you want to criticize that choice, or criticize the numbers that result, be my guest. But do not claim that I am holding this up as the correct analytical structure. I am holding it up as the most simple and straightforward structure that conforms to reality reasonably well, and because while the numbers may be flawed, they are at least based on an objective formula that I can figure myself. If you feel comfortable with some other assumptions, please feel free to ignore mine.

That still does not justify the use of HRAR--hitting runs above replacement--which compares each hitter, regardless of position, to 73% of the league average. Basically, this is just a way to give an overall measure of offensive production without regard for position with a low baseline. It doesn't have any real baseball meaning.

A player who creates runs at 90% of the league average could be above-average (if he's a shortstop or catcher, or a great fielder at a less important fielding position), or sub-replacement level (DHs that create 4 runs a game are not valuable properties). Every player is chosen because his total value, both hitting and fielding, is sufficient to justify his inclusion on the team. HRAR fails even if you try to justify it with a thought experiment about a world in which defense doesn't matter, because in that case the absolute replacement level (in terms of RG, without accounting for the league average) would be much higher than it is currently.

The specific positional adjustments I use are based on 1992-2001 data. There's no particular reason for not updating them; at the time I started using them, they represented the ten most recent years. I have stuck with them because I have not seen compelling evidence of a change in the degree of difficulty or scarcity between the positions between now and then, and because I think they are fairly reasonable. The positions for which they diverge the most from the defensive position adjustments in common use are 2B, 3B, and CF. Second base is considered a premium position by the offensive PADJ (.94), while third base and center field are both neutral (1.01 and 1.02).

Another flaw is that the PADJ is applied to the overall league average RG, which is artificially low for the NL because of pitcher batting. When using the actual league average runs/game, it's tough to just remove pitchers--any adjustment would be an estimate. If you use the league total of runs created instead, it is a much easier fix.

One other note on this topic is that since the offensive PADJ is a proxy for average defensive value by position, ideally it would be applied by tying it to defensive playing time. I have done it by outs, though.

The reason I have taken this flawed path is that 1) it ties the position adjustment directly into the RAR formula rather than leaving it as something to subtract on the outside and, more importantly, 2) there's no straightforward way to do it. The best would be to use defensive innings--set the full-time player to X defensive innings, figure how Derek Jeter's innings compared to X, and adjust his PADJ accordingly. Games in the field or games played are dicey because they can cause distortion for defensive replacements. Plate appearances avoid the problem that outs have of being highly related to player quality, but they still carry the illogic of basing a defensive adjustment on offensive playing time. And of course the differences here are going to be fairly small (a few runs). That is not to say that this way is preferable, but it's not horrible either, at least as far as I can tell.

To compare this approach to the subtraction approach, start by assuming that a replacement level shortstop would create .86*.73*4.5 = 2.825 RG (or would perform at an overall level of equivalent value to being an average fielder at shortstop while creating 2.825 runs per game). Suppose that we are comparing two shortstops, each of whom compiled 600 PA and played an equal number of defensive games and innings (and thus would have the same positional adjustment using the subtraction approach). Alpha made 380 outs and Bravo made 410 outs, and each ranked as dead-on average in the field.

The difference in overall RAR between the two using the subtraction approach would be equal to the difference between their offensive RAA compared to the league average. Assuming the league average is 4.5 runs, and that both Alpha and Bravo created 75 runs, their offensive RAAs are:

Alpha = (75*25.5/380 - 4.5)*380/25.5 = +7.94

Similarly, Bravo is at +2.65, and so the difference between them will be 5.29 RAR.

Using the flawed approach, Alpha's RAR will be:

(75*25.5/380 - 4.5*.73*.86)*380/25.5 = +32.90

Bravo's RAR will be +29.58, a difference of 3.32 RAR, which is two runs off of the difference using the subtraction approach.

The downside to using PA is that you really need to consider park effects if you do, whereas outs allow you to sidestep park effects. Outs are constant; plate appearances are linked to OBA. Thus, plate appearances not only depend on the offensive context (including park factor), but also on the quality of one's team. Of course, attempting to adjust for team PA differences opens a huge can of worms which is not really relevant here; for now, the point is that using outs for individual players causes distortions, sometimes trivial and sometimes bothersome, but almost always makes one's life easier.

I do not include fielding (or baserunning outside of steals, although that is a trivial consideration in comparison) in the RAR figures--they cover offense and positional value only. This in no way means that I do not believe that fielding is an important consideration in player valuation. However, two of the key principles of these stat reports are 1) not incorporating any data that is not readily available and 2) not simply including other people's results (of course I borrow heavily from other people's methods, but only adapting methodology that I can apply myself).

Any fielding metric worth its salt will fail to meet either criterion--they use zone data or play-by-play data which I do not have easy access to. I do not have a fielding metric that I have stapled together myself, and so I would have to simply lift other analysts' figures.

Setting the practical reason for not including fielding aside, I do have some reservations about lumping fielding and hitting value together in one number because of the obvious differences in reliability between offensive and fielding metrics. In theory, they absolutely should be put together. But in practice, I believe it would be better to regress the fielding metric to a point at which it would be roughly equivalent in reliability to the offensive metric.

Offensive metrics have error bars associated with them, too, of course, and in evaluating a single season's value, I don't care about the vagaries that we often lump together as "luck". Still, there are errors in our assessment of linear weight values and players that collect an unusual proportion of infield hits or hits to the left side, errors in estimation of park factor, and any number of other factors that make their events more or less valuable than an average event of that type.

Fielding metrics offer up all of that and more, as we cannot be nearly as certain of true successes and failures as we are when analyzing offense. Recent investigations, particularly by Colin Wyers, have raised even more questions about the level of uncertainty. So, even if I was including a fielding value, my approach would be to assume that the offensive value was 100% reliable (which it isn't), and regress the fielding metric relative to that (so if the offensive metric was actually 70% reliable, and the fielding metric 40% reliable, I'd treat the fielding metric as .4/.7 = 57% reliable when tacking it on, to illustrate with a simplified and completely made up example presuming that one could have a precise estimate of nebulous "reliability").

Given the inherent assumption of the offensive PADJ that all positions are equally valuable, once RAR has been figured for a player, fielding value can be accounted for by adding on his runs above average relative to a player at his own position. If there is a shortstop that is -2 runs defensively versus an average shortstop, he is without a doubt a plus defensive player, and a more valuable defensive player than a first baseman who was +1 run better than an average first baseman. Regardless, since it was implicitly assumed that they are both average defensively for their position when RAR was calculated, the shortstop will see his value docked two runs. This DOES NOT MEAN that the shortstop has been penalized for his defense. The whole process of accounting for positional differences, going from hitting RAR to positional RAR, has benefited him.

I've found that there is often confusion about the treatment of first basemen and designated hitters in my PADJ methodology, since I consider DHs to be in the same pool as first basemen. The fact of the matter is that first basemen outhit DHs. There are any number of potential explanations for this: DHs are often old or injured, players hit worse when DHing than they do when playing the field, etc. This actually helps first basemen, since the DHs drag the average production of the pool down, thus resulting in a lower replacement level than I would get if I considered first basemen alone.

However, this method does assume that a 1B and a DH have equal defensive value. Obviously, a DH has no defensive value. What I advocate to correct this is to treat a DH as a bad defensive first baseman, and thus knock another five or ten runs off of his RAR for a full-time player. I do not incorporate this into the published numbers, but you should keep it in mind. However, there is no need to adjust the figures for first basemen upwards--the only necessary adjustment is to take the DHs down a notch.

Finally, I consider each player at his primary defensive position (defined as where he appears in the most games), and do not weight the PADJ by playing time. This does shortchange a player like Ben Zobrist (who saw significant time at a tougher position than his primary position), and unduly boost a player like Buster Posey (who logged a lot of games at a much easier position than his primary position). For most players, though, it doesn't matter much. I find it preferable to make manual adjustments for the unusual cases rather than add another layer of complexity to the whole endeavor.

Player spreadsheets should be coming by the middle of the week.

2011 Park Factors

2011 Leagues

2011 Teams

2011 Team Offense

2011 Team Defense

2011 AL Relievers

2011 NL Relievers

2011 AL Starters

2011 NL Starters

2011 AL Hitters

2011 NL Hitters