What do you do with Deacon White?
Last week, the attention of the baseball world was focused keenly on the Winter Meetings. They’re reliably one of the high points of the offseason, featuring free agent speculation, wild trade rumors, and Hall of Fame selections from the Veterans’ Committee.
Unfortunately, “keen” is a very generous description for the level of emphasis given to the last item on that list. The Veterans’ vote didn’t even get top headline status on ESPN’s MLB page when it was announced, even though it was an easier group to publicize than you’d expect from the fact that it’s composed of people whose careers were finished at least a decade before Jamie Moyer was born. You have a longtime umpire who made one of the most famous and controversial calls in baseball history. You have the Yankee owner who acquired Babe Ruth and built Yankee Stadium. And you have Deacon White, who hit 24 home runs in his career, never scored or drove in 100 runs in a season, and registered 500 plate appearances in a year for the first time at age 38. At a glance, he seems like a rather odd selection.
Rob Neyer is one of the few writers who bothered to address the Veterans’ ballot this year. He penned the following in reference to White:
“Deacon White - 19th-century catcher, and it’s always hard to know what to do with 19th-century catchers, because the demands of the position at that time—no real mitt, no shin guards, no mask—meant that catchers didn’t play many games, or last many seasons. In fact, White shifted to other positions (mostly third base) in the latter half of his career. At 42, he was still playing every day, which probably says as much about baseball at that time than about his talents.”
Indeed, White’s games played totals during his career as a catcher (1871-79) don’t look terribly impressive:
29, 22, 60, 70, 80, 66, 59, 61, 78
But games played totals have to be put in context. The numbers of games played by White’s teams in those seasons are:
29, 22, 60, 71, 82, 66, 61, 61, 81
White wasn’t missing time due to the strains of primitive catching. In fact, he was barely missing any time at all – a total of 8 games skipped in 9 years. His game totals were low because his teams weren’t playing anything approaching a modern schedule – professional baseball was in its infancy, long-distance travel was a highly challenging endeavor, teams frequently played large numbers of non-league games, and franchises would sometimes fold midseason.
The question then becomes: How do you adjust for this schedule discrepancy?
The easiest answer to this question is not to adjust at all. White played the games he played, amassed the hits and doubles and RBI that he amassed, and should be compared to other players on that basis. By this logic, White’s career totals of 2067 hits, 1140 runs, 988 RBI, and 44 Wins Above Replacement (as estimated by Baseball-Reference) are nice enough, but not terribly impressive in a historic context, especially considering the fact that he doesn’t have a single season exceeding 160 hits, 210 total bases, or 5 WAR.
There are a couple of easily identifiable problems with this method. First, it doesn’t account for opportunity. White played in nearly every game he possibly could have, while a modern player who appears in 80 games is not only missing half of the season, but also forcing his team to find someone else to fill his spot for the other 82. The other issue is the impact on team results. In an 82-game schedule, a literally-interpreted 5-WAR player (White’s 1875 total was 4.9) should turn a .500 team into a .561 team (46-36); if you double the length of the schedule, you correspondingly reduce the impact of those wins (86-76, or .531).
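To make that arithmetic concrete, here’s a minimal sketch of the win-percentage calculation (the function name is mine, not anything official):

```python
# A minimal sketch of the win-impact arithmetic: the same 5 wins move an
# otherwise-.500 team's winning percentage further over a short schedule
# than over a long one.
def record_with_war(war, games):
    """Record of an otherwise-.500 team after adding `war` wins."""
    wins = games / 2 + war
    losses = games - wins
    return wins, losses, wins / games

for games in (82, 162):
    w, l, pct = record_with_war(5, games)
    print(f"{games}-game schedule: {w:.0f}-{l:.0f} ({pct:.3f})")
```

Running this reproduces the 46-36 (.561) and 86-76 (.531) records above.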
The real issue at hand is not White’s raw contributions on the baseball field, but rather how those contributions affected his teams’ position in the standings. In an attempt to take the most direct approach possible to that question, this analysis is going to take WAR at face value as an estimate of wins added, with all the applicable caveats.
The most intuitive method is to simply pro-rate each season to 162 games, multiplying the player’s statistics by (162/team games played). This makes a 2-WAR season through 20 games equivalent to a 16-WAR season through 160. Applied to our sample player, we see that this adjustment turns White’s 1875 from a 4.9-win season into a 9.7-win season, and he also picks up 8-win campaigns in ’72, ’76, and ’77. His career WAR more than doubles, to 93.0, and he moves firmly into Hall of Fame territory – at least, if this is a fair adjustment.
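The pro-rating itself is a one-liner; here’s a small sketch (function name is mine, numbers as given above):

```python
# Pro-rating a season's WAR to a 162-game schedule, as described above.
def prorate_war(war, team_games, target=162):
    return war * target / team_games

# White's 1875: 4.9 WAR over Boston's 82-game season
print(round(prorate_war(4.9, 82), 1))  # 9.7
```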
The trouble is, it’s not a fair adjustment. White’s 1872 Cleveland Forest Citys played in 22 games, so let’s look at the 2012 standings after 22 games for comparison. You find the Dodgers and Rangers at 16-6, the Twins and Royals at 6-16, and the Padres and Angels at 7-15. All of those teams had winning percentages further from .500 than the 55-107 Astros did at the end of the year – and this edition of the Astros had baseball’s most extreme end-of-season record since 2004. It’s no surprise that smaller samples of games are inherently prone to wider variation in team performance. Because of this phenomenon, a player who contributes 2 wins in a 20-game schedule (which would be expected to give an otherwise-average team a 12-8 record) would be far less likely to propel his team to a pennant than one who adds 16 wins over a 162-game schedule (97-65).
We can account for this by simply comparing the variance in team performance through different points in the season. Of course, we’ll want to use a sample larger than one season of baseball to do so.
I took every 162-game season that has been played to completion. That’s 1962-2012, plus the 1961 AL, and leaving out 1972, 1981, and 1994-95 due to labor disputes – a total of 1242 team seasons. I split the seasons into not-quite-but-almost equal increments of around 10 games (there are two 11-game samples, taken to be games 41-51 and 122-132, and I also used game 154 as a breakpoint instead of 152 in a vague attempt at a tribute to baseball’s old schedule length). The results, where S% is standard deviation of winning percentage and S is standard deviation of wins (calculated simply as S% * N), are as follows:
Games S% S S(162)/S
10 .1652 1.65 6.95
20 .1234 2.47 4.65
30 .1052 3.16 3.64
40 .0952 3.81 3.01
51 .0878 4.48 2.57
61 .0830 5.06 2.27
71 .0795 5.65 2.03
81 .0771 6.24 1.84
91 .0749 6.81 1.69
101 .0733 7.40 1.55
111 .0725 8.04 1.43
121 .0720 8.71 1.32
132 .0710 9.37 1.23
142 .0711 10.10 1.14
154 .0711 10.95 1.05
162 .0709 11.48 1.00
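For the curious, here’s a sketch of how those S% figures could be computed, assuming a hypothetical list of per-team game logs (the data structure is mine for illustration, not the actual dataset I used):

```python
import statistics

def sd_win_pct(team_logs, n_games):
    """SD of winning percentage across teams through game n_games.

    team_logs: hypothetical list of per-team results in schedule order,
    each a list of 1s (wins) and 0s (losses).
    """
    pcts = [sum(log[:n_games]) / n_games for log in team_logs]
    return statistics.pstdev(pcts)

# S, the SD of wins, is then simply S% * N:
# s_wins = sd_win_pct(logs, 81) * 81
```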
The spread in team winning percentage becomes smaller throughout the year, as expected. Adjusting for the variance in team performance, it’s evident that a 2-win increase in a 20-game season is not, in fact, equivalent to a 16-win improvement over 162; it’s closer in impact to a 9-win enhancement over a full season. That looks about right; you’d expect a 12-8 team to be in contention to either win a weak division or take the second wild card slot, and you’d expect the same from a 90-72 team.
Accounting for this takes a bit of the air out of White’s production – his 1877 season, in which he was the best hitter in baseball but played mostly first base rather than catcher, is now equivalent to a 7.3 WAR season rather than 8.5. That’s still an excellent year, but it doesn’t look as impressive as it did under the simpler pro-rating adjustment.
Note, however, that because I lacked the stamina to enter records after each of 162 games for each of 1242 teams, we’re left with substantial gaps in the table. For maximum utility, we should try to find a curve that can be applied to any season length, up to and surpassing 162 games.
As it happens, there is just such a curve; its origins will be explained in further detail shortly. The equation is as follows (with apologies for the awkward formatting):
S%(N) = (.25/N + .0554^2)^(1/2)

where N is the number of games played. As before, multiplying S%(N) by N gives the standard deviation of team wins. For comparison, here are the results when this curve is applied to the same season lengths listed in the table above:
Games S%(N) S(N) S(162)/S(N)
10 .1675 1.68 6.57
20 .1248 2.50 4.41
30 .1068 3.20 3.43
40 .0965 3.86 2.85
51 .0893 4.55 2.42
61 .0847 5.16 2.13
71 .0812 5.76 1.91
81 .0785 6.36 1.73
91 .0763 6.94 1.59
101 .0745 7.52 1.46
111 .0729 8.10 1.36
121 .0717 8.67 1.27
132 .0704 9.30 1.18
142 .0695 9.87 1.11
154 .0685 10.55 1.04
162 .0679 11.00 1.00
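Here’s the curve expressed as code, reproducing a couple of rows of the table above (the function names are mine):

```python
import math

TALENT_SD = 0.0554  # the constant talent term from the curve

def s_pct(n):
    """S%(N) = sqrt(.25/N + .0554^2): SD of winning pct after n games."""
    return math.sqrt(0.25 / n + TALENT_SD ** 2)

def s_wins(n):
    """SD of team wins after n games."""
    return s_pct(n) * n

# Reproduces rows of the table above:
print(round(s_pct(20), 4), round(s_wins(20), 2))    # 0.1248 2.5
print(round(s_pct(162), 4), round(s_wins(162), 2))  # 0.0679 11.0
```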
The two tables aren’t identical; small but noticeable differences remain, and their source is the model I selected to build the curve. (This is the math section. It may be very slightly more intensive than the most common sabermetric tools, but it contains nothing that can’t be found in the early chapters of a college statistics textbook, or in the Wikipedia articles on standard deviation and the binomial distribution. Even though I still have my college statistics textbooks, guess where I went to confirm my memory of the formulas…)
The basic assumption I used is that there are two reasons for variance in team performance: talent and luck. The overall variance can then be expressed as a function of the variance due to talent and the variance due to luck. Assuming that there is no relationship between talent and luck (which is pretty much true by definition; if your luck is somehow based in talent, it’s not actually luck), this function should be:
S(T+L)^2 = S(T)^2 + S(L)^2
The standard deviation due to luck can be calculated using the binomial probability distribution, which describes large numbers of repeated identical yes-or-no trials such as “did the coin come up heads?” or “did you win the baseball game?”. Over a sample of N games, the standard deviation of the number of wins for a .500-talent team is:
S(L, N) = (.5^2 * N)^(1/2)
This makes the standard deviation of winning percentage due to luck
S%(L, N) = (.25/N)^(1/2)
If that looks familiar, that’s because it’s the first half of the equation for the curve proposed earlier. The second half is the value for the other source of variance, talent. Using the equation given above for S(T+L), we can back out an observed value for the standard deviation due to talent at each of the sample points, as S%(T) = (S%(T+L)^2 - .25/N)^(1/2).
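Here’s a quick sketch of that back-calculation, using a few S% values from the empirical table above (the function name is mine):

```python
import math

def talent_sd(observed_s_pct, n):
    """Back out S%(T) from S%(T+L)^2 = S%(T)^2 + S%(L)^2,
    where the luck variance is .25/N."""
    return math.sqrt(observed_s_pct ** 2 - 0.25 / n)

# S% values taken from the empirical table above
for n, s in [(20, .1234), (101, .0733), (162, .0709)]:
    print(n, round(talent_sd(s, n), 4))
# roughly 0.0522, 0.0538, 0.0590: stable early, rising late
```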
We can quickly observe two things. First, the value remains rather stable from game 20 through game 101, then increases steadily for the remainder of the season. This makes sense, because game 101 is roughly the timing of the trade deadline, when good teams get better and bad teams get worse; you’d expect the variation in talent to increase, and to continue to do so as rosters expand in September.
Second, the changes in the measurement for talent variance are not terribly large. The curve I’m using treats talent variance as a constant (I went through a few methods in selecting it, none of which change the value of the constant or the overall curve in a noteworthy way). It would be possible to modify the projection to account for the trade deadline and September callups, but I think it would be inadvisable to do so, because if the season is shorter, bad teams will give up earlier, and the aforementioned increase in variance will occur sooner.
All right, the math is done; let’s get back to the baseball side of things. What does all of this mean for Deacon White? Here’s his career WAR in three forms: raw, pro-rated, and adjusted using the proposed model.
Year Team Lg Tm G WAR PR WAR Mod WAR
1871 CLE NA 29 0.8 4.5 2.8
1872 CLE NA 22 1.1 8.1 4.6
1873 BOS NA 60 2.9 7.8 6.3
1874 BOS NA 71 1.9 4.3 3.6
1875 BOS NA 82 4.9 9.7 8.4
1876 CHC NL 66 3.5 8.6 7.0
1877 BSN NL 61 3.2 8.5 6.8
1878 CIN NL 61 2.3 6.1 4.9
1879 CIN NL 81 3.6 7.2 6.2
1880 CIN NL 83 0.5 1.0 0.8
1881 BUF NL 83 1.0 2.0 1.7
1882 BUF NL 84 1.0 1.9 1.7
1883 BUF NL 98 1.0 1.7 1.5
1884 BUF NL 115 4.8 6.8 6.3
1885 BUF NL 112 1.9 2.7 2.6
1886 DTN NL 126 2.5 3.2 3.1
1887 DTN NL 127 2.1 2.7 2.6
1888 DTN NL 134 3.2 3.9 3.7
1889 PIT NL 134 0.3 0.4 0.4
1890 BUF PL 134 1.7 2.1 2.0
Total 44.2 93.0 77.0
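For reference, the “Mod WAR” column is just raw WAR scaled by the ratio of full-season to short-season win spread under the proposed curve; here’s a sketch (function names mine):

```python
import math

def s_wins(n, talent_sd=0.0554):
    """SD of team wins after n games, per the proposed curve."""
    return math.sqrt(0.25 / n + talent_sd ** 2) * n

def adjusted_war(raw_war, team_games, full_season=162):
    """Scale WAR by the ratio of full-season to short-season win SD."""
    return raw_war * s_wins(full_season) / s_wins(team_games)

# White's 1875: 4.9 raw WAR over an 82-game schedule
print(round(adjusted_war(4.9, 82), 1))  # 8.4
# White's 1872: 1.1 raw WAR over a 22-game schedule
print(round(adjusted_war(1.1, 22), 1))  # 4.6
```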
An adjusted career total of 77 WAR, with a peak season of 8.4 and five other seasons exceeding 6.0. That’s not an inner-circle player, but it is a really strong candidate – roughly Jeff Bagwell level (albeit before applying a length adjustment to Bagwell’s 1994 season) in the context of his own time. The differences between that context and the modern one are perhaps worth exploring, but that’s a subject that’s been tackled so thoroughly in so many forms and venues that I doubt there’s much to be gained from my taking it on here.
One final, bright red warning light about this adjustment: Since it was derived around team wins and the spread thereof, it’s not really safe to apply to non-win-based measurements. So while it might be fun (at least if you’re me) to find out that Deacon White had equivalent career totals of 3457 hits and 1957 runs, or that his 1873 season features 263 equivalent hits, exceeding Ichiro’s single-season record, that’s not an exercise in any danger of drowning in rigor.
Even with that caveat, the adjustment proposed here is still quite useful, not only for White and the other stars of the earliest era of baseball, but also for more recent players such as Heinie Groh, Bobby Grich, and Bagwell, who peaked during shortened seasons. It strikes a balance between no adjustment, which penalizes these players for missing games that never occurred, and pro-rating, which attempts to address that issue but overcompensates. By accounting for evolution in the standings over the course of the year, it gives us a better chance to answer the question we’re really trying to ask: How much did this player improve his team’s odds of winning the pennant?