Baseball for the Thinking Fan

Hall of Merit
— A Look at Baseball's All-Time Best

Monday, February 05, 2007

Dan Rosenheck’s WARP Data

WARP Methodology and Results

Thanks, Dan!

EDIT: Link updated 2/23/2009

John (You Can Call Me Grandma) Murphy Posted: February 05, 2007 at 08:59 PM | 763 comment(s)

Reader Comments and Retorts

   201. DavidFoss Posted: April 09, 2007 at 02:58 PM (#2329815)
bump
   202. TomH Posted: April 09, 2007 at 03:12 PM (#2329827)
League strength may not EQUAL ease of domination, but isn't it both logical and consistent with empirical data? Don't have time for a long post right now, but it seems intuitively very obvious, and it's something that's been discussed oodles of times.

Population per team was low. But it was higher than pop/team in 1901-1915 :)

Run scoring being high means more variation in runs per game, but also far fewer runs per win. Which means WS were harder to accumulate. Which means less variation for those who use WS.

James' research showed that run scoring was highly influenced by speed in pre-1910 ball. So the RC formulae reflect that. Don't know about WARP. Yes, we don't have CS data, but it seems to me that this might actually DECREASE variability. Suppose two guys stole 40 bags. One had 10 (unknown) CS, one had 25. From our ignorance of CS, we actually lose the true variation that exists.
   203. David Concepcion de la Desviacion Estandar (Dan R) Posted: April 09, 2007 at 05:13 PM (#2329957)
It depends on the context. When I say "ease of domination," all I'm talking about is the standard deviation, so I'll stay away from semantics and stick to the numbers which I'm comfortable with.

Higher league quality does *not* always correlate to lower standard deviation. Any time you have a "star glut," you will see the stronger league with a higher standard deviation. This is true at many points in baseball history. In 1901, Nap Lajoie was the only big NL star to jump to the AL, but a lot of middling and weaker players did. As a result, the high number of remaining stars in the NL (Flick, Wagner, Sheckard etc.) got to obliterate the weaker competition, while aside from Lajoie most of the 1901 AL players were really about the same. Lajoie's monster year looks even *better* when you take into account the standard deviation of the league, because he was the *only* star in it. In this case, you would need to make an extremely large league quality adjustment to properly evaluate Lajoie's season, because the "star drought" in the 1901 AL reduces the standard deviation and makes him look better than he actually was.

The same is true in the teens, where all the top position players (late Lajoie, Cobb, Speaker, Collins, early Ruth) were in the AL. AL standard deviations in the teens are *far* higher than NL ones, because it had the star glut and the NL had the star drought. And in World War II, where standard deviations in the AL went down as Williams, DiMaggio etc. all went to fight, creating another star drought.

Note that this is *not* a problem with my WARP2, since I use a regression-projected rather than actual standard deviation. If you plot the actual versus regression-projected standard deviations through the teens, you'll see the AL and NL regression-projected lines running right next to each other, with the actual AL standard deviation well above them and the actual NL standard deviation well below them. But in fact, AL players should be credited for playing in a tougher league, and NL players penalized for playing in an easier one, and my WARP2 does not take that step.

Well, my regression equation has one variable for population and another for expansion. As I said, I adjust the 1900's almost as much as the 1890's.

I have no idea how Pythagorean variables affect Win Shares. But higher run scoring most definitely correlates to higher standard deviations, since everyone has more plate appearances to distinguish themselves.

No, the lack of CS data DEFINITELY increases standard deviations, substantially so (I can show you the correlations if you want). The true range of SB/CS ability is probably on the order of 1 win per season, while without CS the range of just SB is more like 2 wins per season. Guys like Hamilton, Kelley, a Jimmy Sheckard season or two wind up being 10 WARP1 on the strength of their 75-100 SB/0 CS marks. You have to correct for that.
   204. TomH Posted: April 09, 2007 at 07:21 PM (#2330259)
Higher league quality does *not* always correlate to lower standard deviation. Any time you have a "star glut," you will see the stronger league with a higher standard deviation.

*** okay, agree, but a higher 'floor' league almost always means lower std dev. And when you contract, you raise the floor. When you expand, you lower it.

But higher run scoring most definitely correlates to higher standard deviations, since everyone has more plate appearances to distinguish themselves.

*** sure, for raw stats, but again, for stats that relate to wins like OPS+, ERA+, and WS, this cancels out. Some WARPy expert maybe can chime in to let us know how WARP reacts to high-run and low-run environs.

No, the lack of CS data DEFINITELY increases standard deviations, substantially so

*** I don't mean to sound so picky, Dan; you've been very upfront with the huge amount of ##-crunching you've done, for which I and others ought to be thankful, and your brief history in this regard causes me to trust what you say here. I don't know how WARP treats leagues without CS, but again, with WS/RCAA/RCAP/OWP, because the formulae have been specifically adjusted to fit leagues without CS data, I don't believe this is an issue. I'll try to run some OWP data tonight on the non-CS times versus others to check my thesis on this.
   205. David Concepcion de la Desviacion Estandar (Dan R) Posted: April 09, 2007 at 08:05 PM (#2330353)
Contraction most definitely reduces stdevs, as expansion increases them. I suspect stdevs were even higher in the 1880's than in the 1890's; I just don't have the data. Despite the expansion, stdevs were *slightly* lower in the 1900's than in the 1890's on the whole (although the highest single-season stdev I have on record is the 1901 NL, which makes sense), thanks to higher population, fewer SB, and lower run scoring.

My run estimation formulae are "specifically adjusted to fit leagues without CS data" as well. But factoring in SB but not CS into your run estimator means that guys who are good hitters *and* steal bases will be farther from the mean than they would if you had CS data, which increases the stdev. I don't see how it could be any other way without a specific correction for it. Wouldn't Billy Hamilton have had a lower % of his team's offensive production, and therefore its batting Win Shares, if CS were included?
   206. TomH Posted: April 09, 2007 at 08:35 PM (#2330435)
A good RC formula (and I hope we can accept that the RC formulae for non-CS leagues are 'good') will re-work the value of a SB when CS are not available. For example, if CS data disappeared from the NL 2006 log, I would assume that if I were to create a RC formula it would significantly lower the value of one SB, since even among the best baserunners, there are more CS. It might even show that the value of SBs (without knowing the CS) was negligible.

But circa 1900, the data show that SB were worth a lot, even knowing that more SB lead to more CS. It isn't true after 1920 or so, but it is true then. If we DID have CS data for 1900, we would assuredly find out that SBs were worth far MORE than the current RC formula gives; their value has been discounted, assuming that some CS come with SBs. I'd have to go back to the original Abstract (don't own a copy, think it was printed in the late 80s) where the formulae were given for each era to do a direct comparison of some AL/NL times where only one league had the data.

Again, I can't speak for how WARP / EqA does it.

As to Billy Hamilton, the premise and problem hold only if the best hitters are also the best runners. Not sure this is so.
   207. David Concepcion de la Desviacion Estandar (Dan R) Posted: April 09, 2007 at 09:33 PM (#2330490)
Ah, then this comes down to run estimation. I use BaseRuns for pre-integration seasons, where the runner-advancement value of a SB relative to the runner-advancement value of hits, total bases, walks, and home runs is fixed. (The overall runner-advancement rate floats so that estimated runs equal actual runs for each league-season).

Well, I can only tell you that I get a very statistically significant correlation between SB runs per game and standard deviation. This is, of course, a product of my run estimation method which does not discount the value of a SB once we lose CS data.
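
For the curious, a minimal sketch of that BaseRuns setup in Python. The advancement weights below are illustrative placeholders (a commonly published B factor), not the actual coefficients used here; the faithful part is that the SB weight is fixed relative to the other events while one league-level multiplier floats so estimated runs match actual runs.

def baseruns(s, b_mult=1.0):
    # BaseRuns: R = A*B/(B + C) + D
    a = s["H"] + s["BB"] - s["HR"]                  # baserunners (HR score themselves)
    b = b_mult * (1.4 * s["TB"] - 0.6 * s["H"]      # illustrative advancement weights;
                  - 3.0 * s["HR"] + 0.1 * s["BB"]   # the SB weight is fixed relative
                  + 0.9 * s["SB"])                  # to the other events
    c = s["AB"] - s["H"]                            # outs
    d = s["HR"]
    return a * b / (b + c) + d

def fit_b_mult(league_totals, actual_runs):
    # Bisection: baseruns() increases in b_mult, so find the multiplier that
    # makes estimated league runs equal actual league runs for the season.
    lo, hi = 0.1, 5.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if baseruns(league_totals, mid) < actual_runs:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2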
   208. jimd Posted: April 09, 2007 at 09:34 PM (#2330491)
Note that the modern definition of "Stolen Base" dates from 1898. Before then, IIRC, "Stolen Base" also includes some other base-running advances such as going from first-to-third on a single, etc.
   209. David Concepcion de la Desviacion Estandar (Dan R) Posted: April 09, 2007 at 09:39 PM (#2330499)
But those had to have been removed from the baseball-reference/Lahman data, no? I find it hard to believe that Billy Hamilton took only 98 extra bases (between SB, first to third, second to home on a single, and first to home on a double) in 1894--he was the fastest player in baseball, and he was on base 355 times.
   210. DL from MN Posted: April 09, 2007 at 10:01 PM (#2330525)
Crossposting since you seem to be paying more attention here:

> I need to think this through some more, but I may adjust my WARP to use 0.22 wins instead of
> 0.6 for the DH adjustment.

Would I prorate the 5.4 BRAA/BRAR per 675 PA for post-1973 AL players that you gave me previously to 2 per 675 PA?
   211. jimd Posted: April 09, 2007 at 10:03 PM (#2330528)
But those had to have been removed from the baseball-reference/Lahman data, no?

I don't think they have the PBP data to do that with any degree of certainty. I also don't remember the exact definition before 1898 and don't have a reference handy. Someone else may be able to clarify further.
   212. DCW3 Posted: April 09, 2007 at 10:10 PM (#2330535)
But those had to have been removed from the baseball-reference/Lahman data, no?

Nope. Hugh Nicol is still credited on BB-Ref with holding the single-season record with 138 SBs in 1887, even though a significant number of those were surely advancements on hits.
   214. David Concepcion de la Desviacion Estandar (Dan R) Posted: April 09, 2007 at 11:18 PM (#2330596)
DL from MN, sorry I missed that. I'm working out the details of it this week--I get 2 by the method in the ballot discussion thread but 5/6 by my original method. Plus there is another counteracting factor here which is that I just stuck the DH into my stdev regression and it had a very major effect, so once I input that it's going to make AL players look a lot better...I'd definitely stick with 5.4 for now. When I have a conclusive answer (before the end of the 1997 voting), I'll post it.

DCW3--oh dear. Well how the heck am I supposed to know how many bases were stolen for 1893-7 then? Can anyone suggest a percentage? This could really screw with a lot of things....I thought those were real SB.
   215. TomH Posted: April 09, 2007 at 11:50 PM (#2330621)
The stolen base leaders do not make a marked shift from 1891 to 1905; at least, not when you adjust for times on base.
1894 NL
1 Billy Hamilton 98
2 John McGraw 78
3 Walt Wilmot 74
4 Tom Brown 66
5 Bill Lange 65
6 Jake Stenzel 61
7 Arlie Latham 59
8 Tom Daly 51
9 Hugh Duffy 48
10 Jimmy Bannon 47

1900 NL
T1 Patsy Donovan 45
T1 George Van Haltren 45
3 Jimmy Barrett 44
4 Willie Keeler 41
T5 Sam Mertes 38
T5 Honus Wagner 38
7 Roy Thomas 37
8 Kip Selbach 36
9 Elmer Flick 35
10 Jack Doyle 34

even though the guys in 1894 (THE hitter's year) got to first base a lot more often. I doubt the definition switched during this period.
   216. Paul Wendt Posted: April 10, 2007 at 03:15 AM (#2330800)
Steals
(DanR) But those had to have been removed from the baseball-reference/Lahman data, no?
(jimd) I don't think they have the PBP data to do that with any degree of certainty.

: 1871-1875 alone has been rescored using reasonably good pbp.
: prospects for 1876 are better than those for 1897.

... I also don't remember the exact definition before 1898 and don't have a reference handy.
(DanR) DCW3--oh dear. Well how the heck am I supposed to know how many bases were stolen for 1893-7 then? Can anyone suggest a percentage?

: I don't know that anyone anywhere can do that. David Nemec's 2nd ed. Great Encyclopedia is probably worth checking for his judgment; he sometimes comments insightfully on scoring & statistics. Jon Frankel has worked a lot on 1899-98, maybe he will acquire a good sense when he gets to 1897. Cliff Blau has worked on the late 80s rather than 90s, I believe . . .

(TomH) The stolen base leaders do not make a marked shift from 1891 to 1905; at least, not when you adjust for times on base. . . . I doubt the definition switched during this period.

The definition changed for 1898 but we don't know about the scoring practice. That first to third was a stolen base is a myth; we don't know how often scorers used their discretion. I suppose the 1898 rule change reflected both sentiment and practice --a tide that had turned elsewhere, not one that originated with the rules committee(*). So I think Stovey and Nicol and Welch were credited with discretionary steals at a greater rate than Hamilton and Van Haltren. But the SB numbers for Lange, Davis, Keeler, Jennings (among 1897 leaders) suggest a big decrease. Check out the league data, as I have not.
* presumably not a change in sentiment and practice in every city. Indeed, my underlying theory says such a rules change is more likely if there is variation in the practice. So teasing out truth is daunting. But someone will make significant progress someday. At least we will someday have home/visitor splits that may reveal city/scorer to city/scorer variation.
   217. Paul Wendt Posted: April 10, 2007 at 03:16 AM (#2330801)
TomH
we have elected FEWER HoMers who played in the latter half of the 1890s than any other period between 1885 and the present, excluding WWII. That says to me that we have perceived very few men putting up 'dominating' stats in that period.

"No one" was in the majors at age 19 and few at age 21, which continued to be true early in the aughts. Sheckard age 19.10 and Crawford 19.5, second tier stars, but Lajoie 21.11, Wagner 23.5, Flick 22.4 (the dominant batters), Chance 21.7, Donlin 21.10 (some claim to dominance). Then 5-10 years later, Magee 19.10, Cobb 18.8, Collins 19.4, Speaker 19.5.


Dan R #204
I have no idea how Pythagorean variables affect Win Shares. But higher run scoring most definitely correlates to higher standard deviations, since everyone has more plate appearances to distinguish themselves.
Despite the expansion, stdevs were *slightly* lower in the 1900's than in the 1890's on the whole (although the single highest single season stdev I have on record is the 1901 NL, which makes sense), thanks to higher population, fewer SB, and lower run scoring.

Granting the SD effect of high scoring that you have discovered, I doubt that it works via plate appearances (or, when you get to pitchers, BFP). Via some regression to individual means, variance should decline with plate appearances as with games (at 132 games scheduled, 1893-97 should be slightly higher variance on this ground alone).
   218. Paul Wendt Posted: April 10, 2007 at 04:54 AM (#2330877)
By eye and mind, quite error-prone:
In ratio to singles plus walks, I get SB at 15% in 1897, 11% in 1898 (1898 NL at bb-ref)
   219. DL from MN Posted: April 10, 2007 at 02:48 PM (#2331043)
One other thing, why does it add the same number to BRAR and BRAA?
   220. David Concepcion de la Desviacion Estandar (Dan R) Posted: April 10, 2007 at 03:29 PM (#2331074)
Why wouldn't it? It's a linear adjustment, not a percentage one. Let's say a guy produces 100 RC and his replacement would produce 50 RC. The NL league average is 75 RC and the AL is 80, and BP replacement is 25 runs below average (these are not real numbers). He's actually 50 BRAR, but BP would have him at 25 BRAA/50 BRAR in the NL and just 20 BRAA/45 BRAR in the AL.
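
To make the arithmetic concrete, a minimal sketch using those made-up numbers:

# Made-up numbers from the example above.
player_rc = 100
rep_below_avg = 25                       # BP replacement: 25 runs below average

for league, lg_avg in (("NL", 75), ("AL", 80)):
    braa = player_rc - lg_avg            # batting runs above average
    brar = braa + rep_below_avg          # batting runs above replacement
    print(league, braa, "BRAA /", brar, "BRAR")

# NL: 25 BRAA / 50 BRAR; AL: 20 BRAA / 45 BRAR. Shifting the baseline by
# 5 runs moves BRAA and BRAR by the same 5, which is why a linear (rather
# than percentage) correction adds one constant to both.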
   221. DL from MN Posted: April 10, 2007 at 04:01 PM (#2331093)
The difference between 2 and 5.4 is worth 12 ballot slots for Evans, who would barely make the ballot, and moves Graig Nettles from 7th to off ballot, behind Bob Elliott, for me.
   222. David Concepcion de la Desviacion Estandar (Dan R) Posted: April 10, 2007 at 04:08 PM (#2331094)
I know, it's a major, major difference. I'm almost positive that two is too low. I'll have a definite response and analysis to you before the end of the election.
   223. David Concepcion de la Desviacion Estandar (Dan R) Posted: April 18, 2007 at 01:29 AM (#2337656)
I've just uploaded a revised version of my WARP to the Yahoo group, with some minor changes:

1. I completely redid my accounting of stolen bases for years without CS data. First, I reduced SB from the 1893-7 period when some advances on hits were counted as SB. I saw that the SB per time on base (not counting HR and 3B) rate was extremely steady, almost always at *exactly* 11%, from 1898-1907. It's definitely higher from 1893-7. So I reduced SB totals for each year from 1893-7 so that the league SB/time on base rate was 11%.
Second, I introduced estimated CS totals. I did a study of every position player-season from 1951 to the present, and found that a player's SB success rate is given by the equation .0842 ln X + .8703, where X is the percentage of BB+HBP+1B+2B where he stole a base. (For guys with full playing time and average OBP, it's 50% for players with 5 steal attempts, 60% for guys with 15 steal attempts, 65% for guys with 25-30 steal attempts, 70% for guys with 45 steal attempts, 75% for guys with 75 steal attempts, and 80% for guys with 120 steal attempts). I used those as estimated CS values for years where CS is not available. This will definitely take a bite out of the value of guys like Billy Hamilton (as it was intended to). I know that, in fact, SB success rates in the early days were much lower than these estimates, but I also know that SB were more important to run scoring then, so I am hoping that those two factors cancel each other out. (A sketch of this estimator appears at the end of this list.)

2. I re-did my regression on standard deviations. This eliminated the weird blips in the 1955-56 NL and 1987, and I was able to get a statistically significant result for the war by using a war variable of 1 for 1944, 0.5 for 1943 and 1945, and 0.25 for 1918. One interesting finding is that the DH dramatically decreases standard deviations. I removed 1B and DH from the AL sample, and I *still* found that the DH reduced standard deviations by 0.15, or about 5%. I don't immediately see why the presence of the DH would cause the performances of C/2B/3B/SS/OF to be bunched more closely together, but the result is extremely robust and significant. I'd be interested to hear thoughts on why this might be the case. This will cause my system to be even friendlier to post-1973 AL bats.

3. I double-checked my math on my DH replacement level adjustment, and 0.6 wins is right (DL from MN, this is in my system--the number I gave you for BP is the right one for BP). Just to take one example: Nate Silver has a freely available 3B hitting .249/.315/.391 in a .270/.340/.440 league with -0.5 FRAA per 162 games. .249/.315/.391 is 4.14 runs per 25.5 outs, while .270/.340/.440 is a 5.08 R/25.5 out league. AL run scoring is 7.7% higher than NL run scoring. So if we call the 5.08 R/G league neutral, then the replacement 3B would produce 4.14 R/25.5 with a .315 OBP in a 5.27 R/G AL, and 4.14 R/25.5 with a .315 OBP in a 4.89 R/G NL. In full playing time, that 3B would be 2.0 wins below league average in the AL and 1.4 wins below league average in the NL. The gap is 0.6 wins, and it's the same for every position. Note that the replacement levels shown on the rep level graph are for non-DH leagues--to get post-'73 AL values, subtract 0.6 from them.

4. The graph comparing the standard deviations of the leagues is extremely difficult to look at, I know, with four lines zigzagging all over the place. But the things that stand out are:

a. The gigantic gap in stdev between the 1901 NL, which has the 2nd highest stdev of any league-season, and the 1901 AL. This is because Nap Lajoie was the only star to switch to the AL in 1901. So you had all the remaining stars in the NL putting up big OPS+ scores, since the league average was dragged down by the mediocrities who were brought into the league, while *everyone* except Lajoie was mediocre in the AL, leading to a rather low stdev. (Lajoie's WARP2 score is based on the regression-projected stdev, which is higher than the real one, but it should still be docked for weak league quality).
b. The even larger gap in stdev between the teens AL and NL. This is clearly due to the star glut, where Cobb, Speaker, Lajoie, and Collins tore up the AL, and the NL was left with Zack Wheat and George Burns. You can see that the regression lines for both leagues run right down the middle between the extremely high AL stdevs and the extremely low NL ones, showing that the AL was not *actually* any easier to dominate than the NL in the teens, it simply had more dominant players.
c. The fact that AL stdevs in the 20's were often way higher than the regression line is due entirely to one George Herman Ruth. If you remove him from the sample, the stdev falls exactly in line with the regression equation.
d. I have no idea why the stdev in the AL was so low from 1948-53. There were plenty of All-Star performances. You would certainly expect the NL stdevs to be higher, since the black stars were going to the NL, but you wouldn't expect the AL ones to drop.
e. The early to mid 1960's are the reverse of the teens--NL stdevs are much higher because all the stars (Mays, Aaron, F. Robinson, Santo, Clemente) are in the NL, while the AL really only has an aging Mantle (and Killebrew who my system hates). I definitely think NL players from the teens and AL players from the early to mid 60s should be penalized for playing in weaker leagues. (Note that in these cases higher stdev = stronger league, contrary to Gould's formulation).
f. In 2001-2004, real stdevs were consistently higher than the regression-predicted values, while in 2005 (and 2006, which is not shown), they are substantially lower. This certainly seems like compelling evidence of the presence of steroids in the league, which you would expect to increase stdevs since some players use and others don't.

5. Replacement levels at every position besides SS and C show a big tumble from 1893-1900. This is presumably because Louisville, Washington, and St. Louis were really minor league franchises that couldn't fill their teams but still played in MLB. I definitely think this causes my system to overrate 1B, 2B, 3B, and OF from the period, and I would definitely subjectively reduce them all by about 0.3 wins per full 132-game season.

6. Depth at 2B has really gone all over the map--it was as easy as 1B around 1910, then the second-toughest position from 1930-80, then got a lot easier from 1985 to the present. No idea why. Thoughts?

7. Catcher is also interesting. It was definitely more abundant than 2B from 1930-80, almost as easy as 3B and CF at some points (1960, 1980), but has gotten A LOT tougher since the mid-80's. Now it's virtually as scarce as SS. Maybe this is because in the 70s, the other IF positions were all tougher due to turf, so their offensive production went down, so C just looks better by comparison? And then once the game returns to high offense and grass, teams can give up defense for offense, so the other positions improve and C looks worse by comparison?

8. 3B didn't catch up to CF difficulty until WWII, and they've run very closely together ever since.
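
As promised in item 1, a minimal sketch in Python of both pieces of the SB/CS accounting. The 11% league rate and the .0842 ln X + .8703 equation are from the post; reading X as steal *attempts* per opportunity and solving by fixed-point iteration are assumptions about how to apply the equation when only SB survive.

import math

def deadball_sb_correction(player_sb, league_sb, league_tob):
    # Scale 1893-97 SB down so the league SB-per-time-on-base rate
    # (not counting HR and 3B) equals the steady 11% seen in 1898-1907.
    return player_sb * (0.11 * league_tob) / league_sb

def success_rate(x):
    # Estimated SB success rate from the 1951-present study:
    # 0.0842 * ln(X) + 0.8703, with X = attempts per opportunity
    # (opportunities roughly BB + HBP + 1B + 2B).
    return 0.0842 * math.log(x) + 0.8703

def estimated_cs(sb, opportunities):
    # Back out estimated CS when only SB are known (assumption: solve
    # attempts = SB / success_rate(attempts) by fixed-point iteration).
    attempts = float(sb)
    for _ in range(50):
        attempts = sb / success_rate(attempts / opportunities)
    return attempts - sb

# A 50-SB season with ~300 times on first converges to about 17 estimated CS.
print(round(estimated_cs(50, 300)))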

I look forward to hearing feedback on these new results and hope voters find them helpful!
   224. Chris Cobb Posted: June 23, 2007 at 04:26 PM (#2413979)
To keep the methodological discussions of Dan's work all in one place, I'm asking some questions here that pertain to the data he is posting in the 2001 ballot discussion thread and that I don't think have received discussion here before:

SFrac: Percentage of the league average plate appearances per lineup slot

In the notes on Johnny Pesky, Dan mentions that he needs to be discounted somewhat for the huge number of PAs that he racked up as part of the Red Sox's "high octane" offense.

How much of a discount?

How exactly are the components of SFrac derived?

Is it in any way normalized for either team offense or park factors?

We're at the point of making such fine distinctions that any adjustment may shift players quite significantly in the rankings, so I'd like to see more exactly what adjustments might be requisite here. How much is a player on a bad offensive team in a pitcher's park going to lose here when compared to a player on a good offensive team in a hitter's park?

I apologize if I am asking questions that were answered earlier, but in a scan of the thread I did not observe them.
   225. David Concepcion de la Desviacion Estandar (Dan R) Posted: June 23, 2007 at 06:28 PM (#2414040)
SFrac is calculated very simply--it's just the player's PA divided by 1/9 of the league PA per team. It is most definitely not normalized for team offense and park factors, and probably should be in a subsequent version.

There are really two factors to correct for here. The first is the team offense, and the second is the lineup slot. If you just want to correct for team, all you have to do is multiply the player in question's WARP (either mine or Baseball Prospectus's) by the ratio of the league PA per team to the team's PA (e.g., the 1950 Boston Red Sox had 6322 PA, while the league had 6092 PA per team, so players on that team should have their WARP multiplied by 6092/6322 = 0.96). You could do the same for lineup slot by calculating the ratio of the overall league PA per lineup slot to the league average PA for the player's lineup slot--this would decrease leadoff hitters' WARP and increase #8/#9 hitters' WARP. However, I personally have no problem giving leadoff hitters credit for their extra PA--that's what their manager decided to do, and it created real value for their teams.
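
A minimal sketch of both calculations (the Red Sox figures are from the example above; the function names are illustrative):

def sfrac(player_pa, league_pa_per_team):
    # SFrac: player PA divided by 1/9 of the league-average PA per team.
    return player_pa / (league_pa_per_team / 9)

def team_offense_correction(league_pa_per_team, team_pa):
    # Multiplier that strips out the extra PA a high-octane offense creates.
    return league_pa_per_team / team_pa

# 1950 Red Sox: 6322 team PA vs. a league average of 6092 PA per team.
print(round(team_offense_correction(6092, 6322), 2))   # 0.96
# Multiply a player's WARP (mine or BP's) by this factor to remove the team effect.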
   226. David Concepcion de la Desviacion Estandar (Dan R) Posted: June 23, 2007 at 06:37 PM (#2414048)
If you want to apply this correction (and you definitely should), it certainly is necessary for Baseball Prospectus WARP, but I do not think it is appropriate for Win Shares. I'm no WS expert, but since WS is calculated at the team level I would think that it would automatically account for this factor.
   227. Chris Cobb Posted: June 23, 2007 at 09:48 PM (#2414100)
Thanks, Dan.

I've been thinking about applying this adjustment, and it seems to me that it should not be applied to the fielding portion of the player's WAR, because his playing time in the field isn't actually affected by additional or reduced PA opportunities created by his team's offense.

In fact, your system may distort players' true fielding rates a little bit, in that you are dividing by a larger than appropriate share of playing time for players hitting high in the lineup and on high-offense teams and by a smaller than appropriate share of PT for the reverse. Since you multiply by that PT share on the way to WAR, I think that the distortion would then be removed (since WARP and WS use estimated fielding innings, so the original numbers you are starting from aren't tied to offensive opportunities).

But it's because the relationship between fielding PT and plate appearances isn't direct that the adjustment should not be applied to fWAR.

Does that reasoning seem correct?
   228. David Concepcion de la Desviacion Estandar (Dan R) Posted: June 23, 2007 at 10:01 PM (#2414105)
Chris, that is entirely correct. But I don't have estimated fielding innings. Any chance you can send me a spreadsheet with them for every player-season over 50 PA since 1893? :)
   229. Chris Cobb Posted: June 23, 2007 at 10:24 PM (#2414127)
I don't happen to have that data handy . . .

WARP at least provides the data: it's harvesting it that's the problem :-( .

Fortunately, it's not necessary for the end results of your system, and if one wanted to correct the rates for a particular player season, it would be fairly easy to do.
   230. Dr. Chaleeko Posted: June 24, 2007 at 03:06 PM (#2415016)
The WS book also contains estimated def. innings in the appendix materials.
   231. David Concepcion de la Desviacion Estandar (Dan R) Posted: June 24, 2007 at 03:40 PM (#2415042)
Sure. But I need them in a spreadsheet! I have WARP totals for over 33,000 player-seasons and I don't plan on inputting these defensive innings all by hand.
   232. Paul Wendt Posted: June 26, 2007 at 03:55 AM (#2417928)
Mike Webber may know whether Bill James or STATS holds the copyright.
It may be educational to ask STATS the price.
   233. KJOK Posted: June 28, 2007 at 11:25 PM (#2421619)
Dan - I think I have those estimated innings totals in my 'super duper spreadsheet' I use for the HOM...I'll send them to you....
   234. David Concepcion de la Desviacion Estandar (Dan R) Posted: June 28, 2007 at 11:37 PM (#2421626)
ooh, that would be dandy. Thanks, KJOK.
   235. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 11, 2007 at 10:50 PM (#2437766)
I've just updated the file in the Yahoo! group to include the breakdown for each player-season of batting, baserunning, fielding, replacement level, and standard deviation. Hopefully this will help to make my data more transparent and enable voters to pick out the parts they find useful and discard those that they don't. Take a look if you haven't already!
   236. Paul Wendt Posted: July 12, 2007 at 04:15 AM (#2438031)
7. Catcher is also interesting. It was definitely more abundant than 2B from 1930-80, almost as easy as 3B and CF at some points (1960, 1980), but has gotten A LOT tougher since the mid-80's. Now it's virtually as scarce as SS. Maybe this is because in the 70s, the other IF positions were all tougher due to turf, so their offensive production went down, so C just looks better by comparison? And then once the game returns to high offense and grass, teams can give up defense for offense, so the other positions improve and C looks worse by comparison?

The running game diminished during the 1920s with no one but George Case running much during the 30s-40s-50s and (60s still low?). The running game returned during the 60s and 70s. DanR says essentially that catchers were "bats" during the 1930s-70s, which fits if the selection and training of catchers lags the running game by about a decade.

Or more than a decade. "A LOT tougher since the mid-80s" implies a turning point only half-way through the 1980s. Not only Robinson, Aparicio, and Wills, not only Lou Brock and Ron LeFlore (70s), but even Tim Raines, Vince Coleman, and Eric Davis (early and mid-80s) were running on catchers selected more for batting than was true for Max Carey or would be true in the Ivan Rodriguez era. Only when Raines, Coleman, and Davis took things to the heights of ridiculousness did a response in training and selecting catchers seem imperative.
1986: Raines 70-9, Coleman 107-14, Davis 80-11 = 257 sb, 34 cs
   237. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 12, 2007 at 12:59 PM (#2438178)
That's a terrific and totally logical analysis, Paul. I would just also add that if the defensive demands of infield positions were higher in the 1970's and early 80's (as they seem to have been) than at other points in the game's history, that led teams to run Rob Picciolos out there, which in turn lowered the overall league average offense and made catchers look stronger by comparison.
   238. Jim Sp Posted: July 12, 2007 at 04:23 PM (#2438387)
Thanks Dan R, it's great to have the breakout.
   239. jimd Posted: July 13, 2007 at 12:54 AM (#2439237)
The graph of average OPS by position by decade (X means Catcher):
1870's ..................LC..321S.XR.....................
1880's 1.............L.C......!R3...2.S.........X........
1890's ........L...RC1........!...3..2S.......X..........
1900's ........L.R.C...1......!.2...3.S...............X..
1910's .........CR...L.1......!..23.........S.....X......
1920's ........RL1.C..........!...2.......3.X.....S......
1930's ..1.....R...L.....C....!.......3..2X.S............
1940's ......L...R.1.C........!.3.........2XS............
1950's ........L.1...RC.....3.!.......X...2...S..........
1960's ......1R..L...C......3.!...........X...2.S........
1970's ......1...RL....C....3.!.......X.....2...........S
1980's ........1...RL.....3C..!........X2.......S........
1990's ......1.....R...L.....3C.......2.X.....S..........

.Mean. ........1.L.R.C........!.3.....2...X.S............
   240. Dr. Chaleeko Posted: July 13, 2007 at 01:16 AM (#2439336)
That's a terrific and totally logical analysis, Paul. I would just also add that if the defensive demands of infield positions were higher in the 1970's and early 80's (as they seem to have been) than at other points in the game's history, that led teams to run Rob Picciolos out there, which in turn lowered the overall league average offense and made catchers look stronger by comparison.

Dan, I understand why you're saying this, but what about shortstop in particular might have changed? Looking at jimd's chart, 2B remains pretty close to its traditional location batting-wise, but in the 1970s, SS is off the chart bad at hitting. What about the 1970s made SS go off the deep end?

One answer could be turf, but I think it's a red herring. There were as many turf parks in the 1980s (or more, perhaps?) as in the 1970s. And even so, if the turf parks were so defensively challenging, how come SS returned to its normal spot in relative batting prowess in the 1980s? Sure Ripmellountkin came around in the 1980s, but why would teams suddenly decide big SS are OK, and that turf isn't such a big deal after all? Smells fishy.

What else? Runs were scarcer in the late 1960s and into the 1970s--in fact, into the 1980s in the NL; the AL's a little spottier, DH and all. So maybe GMs thought that defense was more important for that reason? But was it so much more important that it needed to be sacrificed completely at SS? Seems a bit unlikely.

Is it indeed possible that in an industry with so much good-old-boying and inbreeding it was just faddish to have a weak-hitting shortstop? That teams started thinking that shortstops shouldn't have to contribute with the bat? I certainly remember this very idea growing up, listening to games, and hearing phrases like this: "he's good with the glove, and anything he gives you at the plate is gravy." Is it possible that conventional wisdom just took a ten-to-twenty year swim in the irrational pond?

I agree in principle with zop when he says the wisdom of crowds is typically smarter than individuals, but in this case, there simply are no easy-to-discern ideas about why this downturn in production should have occurred...the pool of decision makers is very small...the industry is very dogmatic...and the problem "self-corrected" as offensive levels turned upward, suggesting a possible collective coming to the senses in the wake of Yount/Ripken.
   241. J. Lowenstein Apathy Club Posted: July 13, 2007 at 01:25 AM (#2439389)
Dan, I understand why you're saying this, but what about shortstop in particular might have changed?

Maybe artificial turf. The all-around athletic demands on a shortstop on turf are uniquely challenging. A turf SS has to play deeper to counteract the quicker grounders; while the first step isn't as crucial on turf as on grass, where a SS plays more shallowly, playing deeper means a turf SS needs better straight-line speed and a stronger arm. Plus, the hitters run faster on turf, meaning that the fielder has less time to make the play.

At second base, for example, while the players would need to be faster, the arm strength doesn't come into the equation (a deeper-lying 2B doesn't have a longer throw, but a deeper-lying SS definitely does).
   242. OCF Posted: July 13, 2007 at 01:30 AM (#2439418)
...a possible collective coming to the senses in the wake of Yount/Ripken.

And maybe even an overcorrection? Do you believe in Hubie Brooks, SS? Or how about Howard Johnson? HoJo did have 273 games at SS, although never any one year when it was his primary position.
   243. Dr. Chaleeko Posted: July 13, 2007 at 01:36 AM (#2439451)
Maybe artificial turf. The all-around athletic demands on a shortstop on turf are uniquely challenging.

Craig, that's what I keep thinking, but maybe it was illusory all along, and the emergence of Ripken et al woke up the baseball world to this illusion? Again, the change in SS offense in the 1980s, back toward historical norms, well away from the depths of the 1970s, despite just as many turf parks makes me wonder if turf is really the answer.
   244. Paul Wendt Posted: July 13, 2007 at 01:58 AM (#2439559)
Dan, I understand why you're saying this, but what about shortstop in particular might have changed? Looking at jimd's chart, . . .

The patterns in OPS+ by fielding position by traditional decade, posted by jimd, do not closely match the changes in ease and difficulty of catching inferred from batting data and described by DanR. Ideally we would understand those differences before looking for answers in the detail El Chaleeko hopes for.

Anyway I think the table shows great variation from decade to decade. Against that background the recent down-up movement for shortstops as batters is notable but is not an evident anomaly. The magnitude is much smaller than the up-down for 1Bmen as batters to and fro the 1880s and comparable to the bump for 1Bmen in the 1930s and the dip for 3Bmen in the 1920s. And it is about 50% greater than the collective batting dips for catchers in the 1900s, shortstops in the 1920s, CFs in the 1930s, RFs in the 1950s.
Those bumps and dips do not all have explanations and I guess at least half of the dip for shortstops specifically in the 1970s may be inexplicable luck.
   245. jimd Posted: July 13, 2007 at 02:34 AM (#2439643)
My chart measures averages, and so is affected by star clusters and voids.

Dan's work is purely about the replacement level.
Whether Wagner is average or the greatest player ever has no effect.

They will not necessarily move in parallel, though there should be some correlation.
   246. Dr. Chaleeko Posted: July 13, 2007 at 11:53 AM (#2439771)
DanR,

Any chance you could put together a chart like jimd's for replacement level? I'd be wicked interested to compare them.
   247. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 13, 2007 at 01:17 PM (#2439806)
I'm ashamed to admit that I actually can't understand jimd's chart for the life of me. But there is a chart of the evolution of replacement level at each position from 1893-2005 in the Rosenheck WARP.zip file in the Yahoo group.
   248. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 13, 2007 at 01:47 PM (#2439824)
And you'll note on my chart that ALL the infield positions, even 1B, show a marked decrease in replacement level centered around 1979. They are all calculated separately using the worst-3/8-of-regulars average at the position. The fact that all four positions, calculated completely independently, all show the same pattern seems to me very strong evidence that something "real" was going on.
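
A minimal sketch of that selection rule (the worst-3/8-of-regulars average is from the post; the rounding convention is an assumption):

def positional_replacement_level(regular_rates):
    # Replacement level at one position: the average performance rate of the
    # worst 3/8 of that league-season's regulars at the position.
    n = max(1, round(len(regular_rates) * 3 / 8))
    worst = sorted(regular_rates)[:n]
    return sum(worst) / len(worst)

# e.g. with 16 regular shortstops, replacement level is the average of the
# worst 6; the same rule is applied independently at 1B, 2B, 3B, etc.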
   249. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 13, 2007 at 02:07 PM (#2439837)
To clarify, the differences between my chart and jimd's should be directly related to the standard deviation of performance *within* a given position. My standard deviation correction is on a *leaguewide* basis--it reflects the overall league ease of domination, rather than the comparative ability of SS to dominate their position vs. 1B. If I corrected for positional stdev rather than for overall league stdev, then the "best of a bad lot" guys like Pie Traynor or Mickey Vernon would come out looking like HoM'ers, whereas in my system they're not close, since they didn't distinguish themselves from their positional peers to the same degree that their contemporaries at other positions did. David Concepción is a clear HoMer in my system because he was *by far* the best shortstop in the game in an era where distancing yourself from the competition at *any* position was extremely difficult, whereas Traynor does poorly because he only exceeded his positional replacement level by a small amount, in an era where distancing yourself from the competition was rather easy.

The biggest gaps between my replacement level chart and jimd's average chart should thus be found where the stdev of intra-position performance is high relative to the overall league stdev. What leaps to mind here is the early 1980's AL, an extremely difficult-to-dominate league where freely available shortstops were absolutely putrid and yet Ripken, Yount, and Trammell rang up monster seasons year after year. My system shows them all as upper-echelon HoM'ers, since they were exceeding freely available shortstops by like 80-90 points of OPS with great fielding to boot, while the best players at other positions were exceeding their replacements by only say 60 points. A RCAP system will not be nearly as friendly to them, since it penalizes each of that Holy Trinity for playing in the same league as the other two, and also doesn't account for the fact that no one was generating very high RCAP totals in those days due to the low league stdev.

Conversely, if the stdev of intra-position performance is quite low and the overall league stdev quite high, my system will be unimpressed, while RCAP should be much more favorable. Again, Pie Traynor seems like a good example of this. Are his RCAP excellent?
   250. DL from MN Posted: July 13, 2007 at 05:17 PM (#2440036)
I'm playing around with this today but I want to keep pitchers in the spreadsheet. What is the conversion from runs to wins so I can convert PRAA into PWAA?
   251. Jim Sp Posted: July 13, 2007 at 06:31 PM (#2440103)
Dan,
Are you planning to create warp for pitchers, or is that out of scope at this point?
   252. jimd Posted: July 13, 2007 at 06:35 PM (#2440107)
Long ago, I calculated from the Lahman database the average OPS at each position during each decade, expressed as a % of league average OPS with pitchers removed. The graph is a pictorial representation of that data; the scale, IIRC, is two dots for each percentage point, the ! is the 0 point.

There is no stdev adjustment. If one were applied, it would expand or compress each row depending on whether the decade was difficult or easy to dominate. (1980's expand, 1930's compress, etc.) If you have stdev coefficients for each decade measured with 1.0 as the average stdev taken over all time, I could attempt to rescale the above graph.
   253. KJOK Posted: July 13, 2007 at 07:04 PM (#2440133)
I'm playing around with this today but I want to keep pitchers in the spreadsheet. What is the conversion from runs to wins so I can convert PRAA into PWAA?


Runs to Wins varies based on the offensive context. The formula that approximates the relationship is:

RPW = 2 x RPG ^ 0.72. So, if league RPG = 9, Runs Per Win = 9.7

Hopefully this chart will post correctly:

RPG    RPW
1    2.0
2    3.3
3    4.4
4    5.4
5    6.4
6    7.3
7    8.1
8    8.9
9    9.7
10    10.5
11    11.2
12    12.0
13    12.7
14    13.4
15    14.1 
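
The chart follows directly from the formula; a two-line check:

# Reproduce the RPG -> RPW chart from RPW = 2 * RPG ** 0.72.
for rpg in range(1, 16):
    print(f"{rpg:2d}  {2 * rpg ** 0.72:4.1f}")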
   254. Paul Wendt Posted: July 13, 2007 at 07:04 PM (#2440134)
A few calendar years ago, jimd presented a numerical version of the data, (relative) decade-average OPS by fielding position. One column for each fieldpos.
We may have seen the graphical version with and without pitchers.
   255. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 13, 2007 at 07:17 PM (#2440142)
DL from MN--Are you using Baseball Prospectus' PRAA? Those are calculated for a "standard league" where run scoring is 4.5 per team per game and the Pythagorean exponent is 2, which makes the runs/wins conversion just about 9.0. Note that BP PRAA include some rather opaque methods to divvy up credit between pitchers and fielders.
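
The 9.0 figure checks out numerically against that standard league (4.5 runs per team per game, Pythagorean exponent 2); the finite-difference check below is illustrative:

def pyth_wpct(r, ra, exp=2.0):
    # Pythagorean winning percentage with a fixed exponent.
    return r ** exp / (r ** exp + ra ** exp)

games, rpg = 162, 4.5
base_wins = pyth_wpct(rpg, rpg) * games                # 81 wins
plus_wins = pyth_wpct(rpg + 1 / games, rpg) * games    # add 1 run over the season
print(round(1 / (plus_wins - base_wins), 1))           # 9.0 runs per win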

Jim Sp--I most definitely am working on WARP for pitchers at the moment. It's a big undertaking, but I'm making progress. I will post results as soon as I get them, of course.

jimd--the stdev chart is in the Rosenheck WARP.zip file in the Yahoo group. 1 is set as the average of the 2005 NL and AL, but you could easily recenter it to whatever you want.
   256. Jim Sp Posted: July 16, 2007 at 11:36 PM (#2442973)
Can Manny Ramirez's fwaa1 really be -3.3 in 2005?
   257. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 17, 2007 at 02:26 AM (#2443283)
Yes, courtesy of Chris Dial's Zone Rating data, which don't adjust for the Green Monster, along with the fact that UZR stopped being publicly available after 2003, so I've just used straight Dial for 2004-05 (I have PMR and Fielding Bible data but I'd have to input them all by hand!). I actually wrote a whole story on the question of How Bad Is Manny's Defense for the NY Times. He was probably about -1.8 FWAA1 that year. Some Colorado players should also get some park adjustments to their FWAA1. But park corrections are usually small--the Manny case is a gigantic exception; I think a *lot* of balls must have hit off the Green Monster that year--and for 1987-2003 Dial never represents more than 40% of the weighted average.
   258. Howie Menckel Posted: July 17, 2007 at 03:13 AM (#2443371)
First off,
Great efforts here by too many to mention.

As to the query:
"Is it indeed possible that in an industry with so much good-old-boying and inbreeding that it was just faddish to have a weak-hitting shorstop?"

My answer is yes.
For those under 35 or so, you cannot imagine the "didn't we kick the crap out of these nerds in high school?" factor of the 1970s NL SSs, for an example.

I actually like the "the market tends to be sensible" in the long run, but that doesn't mean it never fails.
Those artificial-turf parks had old-time GMs all revved up, I suspect.
   259. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 17, 2007 at 05:22 AM (#2443550)
Howie Menckel--again, it wasn't just SS circa 1980 that had a low replacement level, it was all four infield positions (and OF replacement levels, by definition, moved inversely to the IF ones and reached all-time highs during this period). Take a look at the chart. It makes sense, intuitively--strikeout rates were about 1/5 lower then than they are now and HR rates about 1/3 lower, so defense was a markedly bigger part of the game then than it is today (which explains a lot of why SP back then threw more innings with a tighter stdev of ERA+). I don't have league GB/FB data but I certainly suspect there were far more slap hitters then than there are today. It's certainly possible that there was some groupthink and/or overkill involved, but there are *plenty* of reasons to believe that the defensive demands on infielders 30 years ago were substantially greater than they are today, and that that would be reflected in teams' unwillingness to play mediocre fielders at key defensive positions.
   260. Dr. Chaleeko Posted: July 17, 2007 at 03:06 PM (#2443724)
Howie Menckel--again, it wasn't just SS circa 1980 that had a low replacement level, it was all four infield positions (and OF replacement levels, by definition, moved inversely to the IF ones and reached all-time highs during this period).

1) It also seemed like there might have been an unusual number of good catchers. Did their rep. level go up a bit too?

2) Yet the number of HOM-level CFs in this period is not very high at all, and in fact, there's something of a CF drought since the 1960s. Could CFs in the aggregate have been better players than the aggregation of CFs out there during the previous generation or two, which was jammed with top-level stars and filled out quite fully with second-tier players (the Brutons, Bells, and Virdons to the WMD/Whitey/Doby bunch)?

Take a look at the chart. It makes sense, intuitively--strikeout rates were about 1/5 lower then than they are now and HR rates about 1/3 lower, so defense was a markedly bigger part of the game then than it is today (which explains a lot of why SP back then threw more innings with a tighter stdev of ERA+). I don't have league GB/FB data but I certainly suspect there were far more slap hitters then than there are today.

Using BP's convenient statistics sort, I found gb/fb rates for the NL in five-year blocks: 1962-1966, 1972-1976; 1982-1986, 1992-1996, 2002-2006. Here are the gb/fb averages for each five-year block:

1960s: 1.58
1970s: 1.50
1980s: 1.35
1990s: 1.80
2000s: 1.74

So the average number of ground balls versus flyballs has increased over time. I think, however, that some FB information is leaching out into the LD and popup categories. Those categories aren't consistently populated in BP's data. So let me approach this another way. Here's the ground balls per inning for the NL in this same period.

1960s: 1.34
1970s: 1.28
1980s: 1.25
1990s: 1.47
2000s: 1.32

So according to BP's PBP stats, the NL's GB/inning rates went down in the 1970s and 1980s, then shot up in the 1990s and settled at a level slightly higher than the 1960s-1970s. It may be that more slap hitters were in lineups in the period in question, but overall this data suggests that the total number of grounders facing an infield was lower, not higher, than in surrounding generations.

It's certainly possible that there was some groupthink and/or overkill involved, but there are *plenty* of reasons to believe that the defensive demands on infielders 30 years ago were substantially greater than they are today, and that that would be reflected in teams' unwillingness to play mediocre fielders at key defensive positions.

I'm still looking for specific, substantial reasons to believe this. It is true that K rates were lower, increasing the burden on fielders. HR rates were indeed lower, but there's another side to that coin: higher homer rates are also strong incentives to play outstanding fielders, to prevent baserunners and turn three-run homers into solo shots. Meanwhile, it also seems that GB rates have gone up, not down, over the same period, leading one to wonder why the replacement level at all four infield positions would go down if those infields faced fewer grounders than their latter-day brethren. That is, why would GMs sacrifice offense for defense at four infield positions if grounder rates were lower, suggesting less, not more, need to cut down would-be hits?

[Tangent: are the recent increases in GB rate due to the emergence of the splitter? Or, more recently, the increased interest in the two-seamer/power-sinker among righties?]

Now, my little unscientific survey of someone else's sometimes-iffy data pool might well reflect its biases (and create or confirm mine), but until I see an argument that's a little less cloudy, I'm of the mind that we are dealing with an unusual moment in time when perception and reality strongly diverged. (With due acknowledgement of the wisdom of crowds and an understanding that I'm going against that wisdom.) I'm very much open to be swayed, but I'm just not yet convinced by the data.
   261. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 17, 2007 at 03:49 PM (#2443778)
Dr. Chaleeko--

1. Catcher replacement level shows a modest dip at the same time the infield positions do in the 1970's, but picks up a bit in the early 80s before tailing off considerably post-1985. Catcher rep level throughout the period was certainly quite high compared to where it was pre-1930 or post-1990.
2. I think that a lot of big shortstops, like Ripken and A-Rod, would have been CF had they been born earlier.
3. That's extremely surprising. Just eyeballing baseball-reference pages, the NL league average range factor for shortstops was about 4.40 for 1976-82, and then falls abruptly to around 4.00 from 1984 to the present--exactly what you would expect from my replacement level data. What would account for the substantially higher range factors (representing 65 extra plays made per SS per season--see the quick check after this list) if not a higher GB rate? Just the lower K rate? Fewer errors? More popups?
4. I totally disagree with your "other side of the coin." The more HR that are hit, the lower the proportion of offense that fielders can influence, and the greater the incentive to get guys who can hit HRs rather than catch balls in play.
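
A quick check of the range-factor arithmetic in item 3 (the 162-game season is the only added assumption):

# 0.40 extra plays per game over a 162-game season:
print(round((4.40 - 4.00) * 162))   # about 65 plays per SS per season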
   262. DL from MN Posted: July 17, 2007 at 04:09 PM (#2443799)
I think you may be confusing cause and effect. In the 1970s everyone had spacious outfields and there weren't many HRs hit. Everyone had turf infields that sped up the ball and made groundballs less likely to be fielded. This probably led to pitchers throwing fastballs and curves to induce pop flies to the outfield. In the 1990s the fences moved in, grass came back and hitters juiced up. The number of HR exploded and the safe way to pitch became throwing sinkers and sliders that the infielders could gobble up.
   263. Dr. Chaleeko Posted: July 17, 2007 at 05:50 PM (#2443882)
3. Again, it's not my data, I pulled it off BP's pages. It's possible they have an issue, and I'm just reporting on erroneous data. But that's what it says.

4. Has SS defense declined absolutely since the 1970s? If decreasing replacement level in the 1970s suggests trading offense in on better defense, which is the model we've all been working with, then Smith and Concepcion's defense compared to their modern peers must have been markedly better. Because by trading for defense, the league of the 1970s should have been filled with a group of better defensive shortstops than today's leagues are.

By trading back some defense for a lot of offense in the 1990s-2000s, we would conclude that the contemporary SS is, on average, less effective defensively in an absolute sense than the 1970s guys. But how much less effective? Turning back to the 1970s guys, FRAA and similar measures look at how good the best 1970s SS were on defense compared to their league's average at the position. How much better can you actually be than a pack of shortstops all selected for playing good defense? There has got to be some ceiling. If Ozzie and Davey are like 15 runs above average per year (and upwards of 20-25+ in best years) then they must be more than just excellent since they are being compared to this bloc of shortstops selected for defensive prowess. Would we expect Davey or Ozzie to be +25 a year instead of +15 a year in 2007? But what if they'd only be +17 better than today's average SS instead of +15? Then the difference in absolute ability between 1970s and 2000s shortstops is very small, two runs. Furthermore, the difference of a couple runs above average could hardly be construed as an appropriate reason for multiple generations of GMs to almost uniformly select for defensive excellence at the cost of putrid offensive performance.

So let's take a quick look at the average SS offense. Looking at SS offense can help us establish parameters for how big a tradeoff the leagues might have made for defense or for offense.

I picked two three-year samples from the SBE, 1973-1975 and 2003-2005.

In 1973-1975, the NL's SS created 3.16 R/G in a 4.54 RC/G league (pitcher removed). SS created runs at 70% of the league average.

In 2003-2005, the NL's SS created 4.34 R/G in a 5.21 RC/G league (pitcher removed). SS created runs at 83% of the league average.

That's a big jump: 13%. In 2003-2005 terms, the 1970s SS were creating 3.65 runs per game. In a full year with 400 outs, that SS would be 23 runs below average in today's terms. Wow. In the same 400 outs, a 2000s SS created about 13 fewer runs than the league average. Of course, a 2000s SS, being a better offensive player than the 1970s out-machine, would probably make fewer outs, but let's ignore that for the moment. There's our threshold for the difference in defense. If today's SS is about 10 runs better against average at the plate than the 1970s guys, then the 1970s guys need to be about 10 runs better in the field to make the exchange of defense for offense reasonable. Or to put it another way: are 1970s SS a full win better than today's SS on defense?
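Spelled out as a quick sketch (Python; assumes 27 outs per team-game, so 400 outs is about 14.8 games' worth of offense; the rates are the SBE figures above):

# SS offense gap, 1973-75 vs. 2003-05, priced in 2003-05 run terms.
# Assumes 27 outs per game; RC/G and percentages are from this post.
lg_rcg = 5.21                 # NL RC/G (pitcher removed), 2003-05
pct_1970s = 0.70              # SS share of league offense, 1973-75
pct_2000s = 0.83              # SS share of league offense, 2003-05
off_games = 400 / 27          # a 400-out season, ~14.8 "offensive games"

gap_1970s = (1 - pct_1970s) * lg_rcg * off_games   # ~23 runs below average
gap_2000s = (1 - pct_2000s) * lg_rcg * off_games   # ~13 runs below average
print(round(gap_1970s), round(gap_2000s))          # 23 13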

I kind of doubt they are. I'm intuiting this; I have no numbers. But men get moved off SS in the minors pretty quickly, and even in the majors they can move off SS very quickly when their skills go south. Like Nomar. Or like Jeter would have (and A-Rod did) in a less complicated decision process. Or like B.J. Upton has at a young age. The big leagues still don't have much tolerance for poor interior defense, and with good reason: it adds up to runs, especially with so many extra-base hits out there. In addition, the players at SS today might well be MORE athletic than the pipsqueaks at SS back then (as Dan said, they might have been CFs back then, another very athletic position). And being taller may even help (or at least not hurt) their range somewhat in terms of wingspan, height, and stride length. I can't build a good case for today's SS being absolutely worse than the 1970s SS except in that he hits better (the absence/presence-of-offense theory). Nor can I build a good case that he's as good or better a defender, except in that today's athletes are in many ways superior to their predecessors thanks to access to the latest training techniques and may also have better positioning data (see below*).

So I'm left to assume that the difference in defensive abilities is there, but it's small---not obviously a win's worth (above average, not replacement). Could be that it is that big (or bigger), but I can't come to a point where I think it's obviously true. I'm left to think that the GMs of the time had certain beliefs that led them to select for defense and punt on offense or move big SS elsewhere (like Schmidt to third). Again, I could be wrong, and I'm willing to be proven wrong.


*Re: advanced scouting and positioning. Is it possible that the current ability to collect and analyze advance scouting data has led to more accurate positioning, which means that fielders are more often in positions where they needn't range outside their normal zone to cut down hits? In Concepcion's (and to some degree Smith's) day, perhaps the position relied more on range than positioning, making the absolute differences in ability more meaningful than an above-average type stat would imply.
   264. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 17, 2007 at 08:15 PM (#2444045)
I have NL SS creating runs at 77% of league average (pitcher included) from 1973-75, equivalent to 2.1 wins below average with average playing time, and 89% of league average from 2003-05, equivalent to 1.2 wins below average with average playing time, so roughly the same one-win gap. Adjusting for standard deviations doesn't change it too much: 2.0 wins below average for 1973-75, 1.1 wins below average from 2003-05.

The difference is more pronounced in the AL, though--76% of league average offense from 1973-75, equivalent to 2.3 wins below average with average playing time, and a massive 98% of league average from 2003-05, equivalent to 0.3 wins below average with average playing time--in a DH league, no less. Standard deviations don't cause any changes. Pretty remarkable. So it's a gap of 0.9 wins in the NL and 2.0 wins in the AL, for an average gap of about 1.45 wins overall.
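Not my actual code, but a sketch of the shape of that conversion; the playing-time and runs-per-win constants here are placeholder assumptions, not the values in my system, so the output won't match the figures above exactly:

# Offense-to-wins conversion, in rough outline. The off_games and
# runs_per_win defaults are illustrative assumptions only.
def wins_below_average(rel_offense, lg_runs_per_game,
                       off_games=15.0, runs_per_win=9.5):
    # rel_offense: positional RC as a fraction of league average
    run_gap = (1.0 - rel_offense) * lg_runs_per_game * off_games
    return run_gap / runs_per_win

# 1973-75 NL SS at 77% of a ~4.15 R/G league (pitcher included):
print(wins_below_average(0.77, 4.15))   # ~1.5 with these constants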

The only thing I have to add is that I would ask the question the other way around--how many runs *below* average would a league-average SS today have been in the 1970's, assuming (incorrectly) that the overall quality of play was equal? Putting Smith or Concepción in today's game doesn't do them justice, since they would have 65 fewer balls hit to them. By contrast, putting one of today's SS in the slap-hitting turf game would really put him to the test.

From a value rather than ability perspective, of course, this is all irrelevant. The basic point that Concepción was worth a HoM-quantity buttload of pennants to the Reds--in a way that Mickey Vernon or Pie Traynor were *not*, since they did not exceed replacement level at their positions by nearly the same number of league standard deviations--seems indisputable to me. Which is why I'll keep voting for him.
   265. TomH Posted: July 17, 2007 at 08:30 PM (#2444059)
Thought experiment:

Let's say I researched team wins versus shortstop productivity, comparing 1972-76 (an atypically lousy shortstop period) with 2002-2006 (a decent shortstop period).

If I found that winning teams in 1972-76 were more likely to have very good shortstops, might it mean

a. see, replacement level was really low then, I told you!
b. good teams were smart enough to acquire (mostly through trade in the earlier era) better-than-replacement shortstops, negating the position-specific low replacement level
c. good teams were able to push borderline-defensive shortstops into the lousy-hitting position, again negating the position-specific low replacement level via the position fungibility concept
d. combo of these, or something else?

I won't do the research if the data couldn't lead to any conclusion either way.
   266. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 17, 2007 at 08:51 PM (#2444076)
Well, I can say off the top of my head that the best shortstop in the AL in those years was Dagoberto Campaneris, and the best in the NL was David Concepción, and if I'm not mistaken no team won the World Series between 1972 and 1976 without one of those two players. :) If I gave Campaneris credit for estimated non-SB baserunning for the first part of his career, he'd be in the middle of my ballot. But I'll just focus on Concepción, Rizzuto, Bancroft, and Pesky for now. :)

I think 2002-06 counts as more than a "decent" shortstop period, with A-Rod, Nomar, Jeter, Tejada, Reyes, Hanley, that Rentería year...I would guess that SS offense is at an all-time high now on the whole (although it was quite strong in the 1950's NL as well). That said, SS rep level is no higher now than it was for most of MLB history; it's just that teams are now getting more comfortable putting their absolute best athletes, who would have been CF in previous eras, at SS. Didn't Mickey Mantle come up as a shortstop?
   267. DL from MN Posted: July 17, 2007 at 09:20 PM (#2444102)
The stdev approach seems to really like Gene Tenace. Any ideas why?
   268. Dr. Chaleeko Posted: July 17, 2007 at 10:15 PM (#2444151)
So, Dan and I are coming to the question: are 65 more 1970s chances in the field worth 1-2 wins of modern offense? (Depending on the league in question, and also assuming that the 65 chances Dan mentioned were all going to SS; he didn't elaborate on their distribution.) I've got a quick idea about it, but I don't know if it's going to work.

Let's think about it this way...for a SS, virtually all hits through his area are singles. XR suggests that a single is worth .50 runs. If a SS lets through all 65 as singles, that's 32.5 runs. But that's silly, of course. Typical teams in the era allowed about 28.5% of balls in play to become hits (per BP's team DER report).

So, making a huge leap, let's say the typical SS allowed the same share of the BIPs in his direction to be turned into hits. In 65 chances, that's 18.5 singles allowed and 9.26 runs' worth of singles. To equal the 10-runs-above-league-average offensive advantage of the 2000s NL SS, the 1970s shortstops would have to be better than the league's other defenders by 10 runs. In other words, they would have to allow zero singles through their area in those 65 chances.
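Here's that leap spelled out as a sketch (the .50 single value is XR's; the .285 hit rate is one minus the era's team DER, per BP):

# Price the 65 chances as league-average balls in play, hits as singles.
chances = 65
hit_rate = 0.285            # 1 - team DER, mid-1970s, per BP
single_run_value = 0.50     # XR value of a single

singles_allowed = chances * hit_rate                # ~18.5 singles
runs_allowed = singles_allowed * single_run_value   # ~9.26 runs
print(singles_allowed, runs_allowed)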

I don't think I've quite figured this right, to be honest about it, because of the extrapolation of DER onto one position. But I'm not sure it's far off since Ron Shandler recently reported that GBs are turned into outs about 72% of the time (in the contemporary game), and that's essentially the same rate as the league DER...and virtually every chance a SS has to take away a hit is on a GB.

Anyway, I can't quite get my head around a better way to do it right now...anyone got any ideas?
   269. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 18, 2007 at 12:03 AM (#2444237)
Well, the first thing to note is that Tenace is one of those multi-position players that can give my system trouble--one highly demanding position, one easy one, with similar games played totals at each position. I've gone in and fixed him by hand, so here's a revised chart for Tenace:

Year  WARP1  WARP2  WARP2/Yr  PennAdd       Salary
1970    1.8    1.6       8.8     .020   $3,738,414
1971    2.0    1.9       6.0     .022   $3,141,710
1972    1.1    1.1       2.7     .012   $1,030,969
1973    4.2    4.2       4.6     .055   $5,772,341
1974    4.3    4.4       4.9     .058   $6,376,801
1975    6.4    6.3       6.9     .088  $11,765,104
1976    4.2    4.2       5.6     .055   $6,812,271
1977    5.2    4.9       5.8     .066   $8,092,020
1978    4.5    4.4       5.8     .058   $7,255,261
1979    5.5    5.3       6.2     .072   $9,158,565
1980    3.1    3.0       4.9     .038   $4,379,571
1981    2.3    2.2       5.8     .027   $3,634,349
1982    2.0    1.9       7.8     .023   $3,930,966
1983   -0.2   -0.2      -2.0    -.003           $0
TOT    46.4   45.4       5.5     .591  $75,088,341


Not a huge difference--he loses 2 career WARP, and a little more proportionally off his peak. The salary estimator is friendlier to him than the raw career WARP total due to its ignorance of in-season durability and affinity for peak rate, but even so, $75M isn't much to write home about. The HoM in/out line is somewhere in the vicinity of $90M, and while Tenace should be eligible for some catcher bonus, he shouldn't get a full one because he wasn't a full-time catcher. I'm not sure why you say the system is so fond of him--even if you boost him 20% for being a part-time catcher, he isn't in the top 15 eligible position players, not counting pre-1893 guys and Negro Leaguers.
   270. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 18, 2007 at 12:13 AM (#2444251)
Eric, the 65 number refers to the .40 difference in SS range factor between the 1970s and subsequent eras. That's 65 *successful* extra plays made by SS per season, implying that the number of extra *chances* they got (and opportunities for great SS to excel by using their range) is greater still.
Also, the value of a marginal SS play is approximately .78 runs--.5 for the single, .1 for the out, and .18 for taking away a future plate appearance from the other team. How are you calculating the 0 singles in 65 chances? That doesn't make sense to me.
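To put numbers on it, a quick sketch (note that the ~51 runs is the era-wide gap in plays made, priced at the marginal rate--not any one fielder's skill edge):

# Run value of a marginal SS play, per the breakdown above.
single_prevented = 0.50    # the hit taken away
out_recorded = 0.10        # the out itself
pa_erased = 0.18           # future opposing plate appearance removed
marginal_play = single_prevented + out_recorded + pa_erased  # 0.78 runs

print(65 * marginal_play)  # ~50.7 runs across the 65 extra plays made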
   271. Dr. Chaleeko Posted: July 18, 2007 at 12:30 AM (#2444273)
I thought you meant that the SS would get 65 chances, not 65 extra plays made.
   272. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 18, 2007 at 03:53 AM (#2444670)
Nope, range factor is just PO+A per game, no? It was 4.40 in the 70s and is 4.00 today, on average.
   273. Dr. Chaleeko Posted: July 18, 2007 at 11:58 AM (#2444845)
I didn't realize that was what you were talking about, Dan. Sorry about that.
   274. 'zop sympathizes with the wrong ####### people Posted: July 18, 2007 at 07:28 PM (#2445208)

I agree in principle with zop when he says the wisdom of crowds is typically smarter than individuals, but in this case, there simply are no easy-to-discern ideas about why this downturn in production should have occurred...the pool of decision makers is very small...the industry is very dogmatic...and the problem "self-corrected" as offensive levels turned upward, suggesting a possible collective coming to the senses in the wake of Yount/Ripken.


Sorry, not to blast back into the past, but I've been busy the last few days and I'd like to respond to this again..

I agree with the idea that the baseball market is vulnerable to inefficiencies; it's small and closed, and front-office types tend to jump from team to team, which I would guess standardizes the approach each team takes within reasonable limits. The market isn't perfect; AOL was never more valuable than Time Warner.

But in the case of the SS, I think there's compelling evidence that the market was acting rationally. In the modern game, the advantage provided by a "big" shortstop over a Belanger-type is huge...measured not just in runs, but in wins. I think the "bigger" the mistake the market is making, the less likely it is; it's one thing for teams to leave a few runs on the table each season by batting a bat-control guy in the 2-hole, but really, nobody tries a big athletic guy at SS for 30 years? Hell, that's just not true: Rico Petrocelli was a pretty big guy, if ESPN Classic can be trusted, and he hit the snot out of the ball compared to other SS, exactly as you'd expect. I've met Gene Michael; he's a big guy, and he was a SS, albeit a terrible hitter. I would argue exigency forces enough trial and error onto every MLB team in every season (Johnny Damon as a 1B! Wily Mo Pena in CF!) that if a huge innate advantage to a big SS existed in that era, SOMEONE would have stumbled upon it.

   275. 'zop sympathizes with the wrong ####### people Posted: July 18, 2007 at 07:31 PM (#2445213)
And there are plenty of reasons you can identify why the SS environment may have been different in that era. This thread elucidates many of them: turf, BIP, GB/FB, etc etc... I would add modern training techniques. I'm under the impression that the biggest advantage afforded by modern training techniques is improvement and preservation of flexibility, even in stronger athletes. It may be that, back in the 70's, the population of big guys who retained "SS athleticism" into their baseball primes was much smaller than it is today.
   276. DL from MN Posted: July 18, 2007 at 07:50 PM (#2445225)
In comparison to generic WARP, Tenace was a lot higher on the list. I think I got my answer.
   277. jimd Posted: July 18, 2007 at 10:47 PM (#2445395)
The 80's is the turf decade (actually 1982-1994).
The NL played 50% of its games on turf and the AL 29%.
Though for the NL it's really a quarter-century from 1971-1995 at 50% or more.

*********

Turf in the AL (compiled from ballparks.com)

1969 Chi(IF only)
...
1973 KC Chi(IF only)
...
1976 KC
1977 KC Sea Tor
...
1982 KC Min Sea Tor
...
1995 Min Sea Tor
...
1998 Min Sea TB Tor
1999 Min TB Tor (Sea until midseason)
...

From 1982-94 10 AL grass teams played 25-27 games on turf (15%).
From 1982-94 4 AL turf teams played 99-101 games on turf (62%).

*********

Turf in the NL (compiled from ballparks.com)

1966 Hou
...
1970 Cin Hou Pit StL
1971 Cin Hou Phi Pit StL SF
...
1977 Cin Hou Mon Phi Pit StL SF
...
1979 Cin Hou Mon Phi Pit StL
...
1996 Cin Hou Mon Phi Pit
...
2000 Cin Mon Phi Pit
2001 Mon Phi
...
2004 Mon
2005 none

In 1977-78 4 NLE turf teams played 126 games on turf (78%).
In 1977-78 7/12 of NL games were played on turf (58%).

From 1979-95 4 NLW grass teams played 42 games on turf (26%).
From 1979-95 2 NLE grass teams played 48 games on turf (30%).
From 1979-95 2 NLW turf teams played 114 games on turf (70%).
From 1979-95 4 NLE turf teams played 120 games on turf (74%).
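For what it's worth, percentages like these can be sanity-checked with a crude balanced-schedule model--half the games at home, road games split evenly among the other teams--though real schedules, especially divisional ones, weren't that even:

# Crude turf-share estimate under a balanced-schedule assumption.
def turf_games(n_teams, n_turf_teams, games=162):
    road_per_opp = (games / 2) / (n_teams - 1)
    grass_team = n_turf_teams * road_per_opp                    # grass team's turf games
    turf_team = games / 2 + (n_turf_teams - 1) * road_per_opp   # turf team's turf games
    return grass_team, turf_team

# 1982-94 AL: 14 teams, 4 on turf
grass, turf = turf_games(14, 4)
print(round(grass), round(turf))   # ~25 and ~100, close to the 25-27 and 99-101 above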
   278. Dr. Chaleeko Posted: July 19, 2007 at 12:28 PM (#2446182)
Has anyone ever studied the question of how turf affects fielding stats? We all know the ball moves quicker on turf, but there's also reason to believe that turf provides some potential advantages to infielders as well:
-ball arrives to the infielder quicker, so DPs might be more turnable
-ball arrives to the infielder quicker, allowing infielders to play back, since bunts and infield hits are harder to come by
-the possibility of Concepcionesque turf throws widens infielders' repertoires, adding flexibility
-truer hops reduce the incidence of "bad" bounces
-does turf make players "faster" in any way? does it somehow give them a better jump or better traction?

Do turf teams play worse defense on grass and vice versa? Or is there just not much difference?
   279. 'zop sympathizes with the wrong ####### people Posted: July 19, 2007 at 02:31 PM (#2446278)

Do turf teams play worse defense on grass and vice versa? Or is there just not much difference?


Definitely a difference; see Knoblauch, Chuck and Matsui, Kazuo
   280. Dr. Chaleeko Posted: July 19, 2007 at 02:37 PM (#2446287)
Two data points probably isn't enough, and Knobby's a very weird point at that---those weren't Concepcionesque turf-bounce throws.... ; )
   281. DavidFoss Posted: July 19, 2007 at 04:39 PM (#2446433)
How do Robbie Alomar's Toronto fielding numbers compare with the rest of his career? Are there other high profile fielders who switched parks in mid-prime?

They had turf in Candlestick in the 70s? I have a hard time picturing Willie Mays running around a turf CF, but I guess it happened for a couple of years.
   282. Juan V Posted: July 19, 2007 at 04:51 PM (#2446449)
One thing I wanted to ask: What is the ratio of the standard deviation of your FRAA and that of BPro's FRAA? I've been trying to use this to "fix" BPro's fielding numbers.
   283. jimd Posted: July 19, 2007 at 07:28 PM (#2446638)
I have a hard time picturing Willie Mays running around a turf CF, but I guess it happened for a couple of years.

Basically just 1971 at age 40, and even then he had 48 GP at first vs 84 in CF.
Then traded to the Mets 5/11/72.
   284. Dr. Chaleeko Posted: July 19, 2007 at 10:46 PM (#2446901)
They had turf in Candlestick in the 70s? I have a hard time picturing Willie Mays running around a turf CF, but I guess it happened for a couple of years.

Was it the Stick where they had the weird half-and-half, where the OF was turf and the INF grass? (Or vice versa?)
   285. jimd Posted: July 20, 2007 at 12:33 AM (#2447031)
Comiskey was one place they installed turf in the infield, while leaving the OF grass. Could have done the same at the Stick in the 70's and I never would have noticed, not paying much attention to the NL back then. ballparks.com notes the partial turf at Comiskey but not at any of the others.
   286. OCF Posted: July 20, 2007 at 01:17 AM (#2447097)
Are there other high profile fielders who switched parks in mid-prime?

We're discussing one this year: Ozzie. From the grass park in San Diego to turf in St. Louis. His range factors were higher in San Diego, but there are a lot of things that go into that - in truth, he was fabulous in both places. I do remember Ozzie specifically practicing a few things he could do only on turf, but he could play anywhere. Also, those 80's Cardinals, with all of the emphasis on speed? When they were good, they had no trouble winning on the road, in grass parks.
   287. Cblau Posted: July 20, 2007 at 02:35 AM (#2447283)
I think Candlestick had all-dirt basepaths, which was what made it unusual among artificial turf stadia. Well, this site (Candlestick Park) claims that was the case for just one year. That was apparently 1970. Although this site (Artificial Turf) says it was 1971.
   288. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 20, 2007 at 02:42 AM (#2447311)
Juan V.--Ask a simple question, get a simple answer. I regress both BP FRAA and Fielding Win Shares (converted to FRAA) to the standard deviation of Chris Dial's Zone Rating data at each position. I think Chris's stdev is too small for catchers, as a cautionary note, which might explain some of Mike Piazza's monster WARP scores.

Catcher: .457
First Base: .634
Second Base: .791
Third Base: .898
Shortstop: .781
Center field: .873
Corner outfield: .894

Also, I don't know how you're using BP FRAA, but remember that one BP FRAA = 1/9 of a win, regardless of run environment (ignoring these standard deviation corrections). So if you want to combine BP FRAA with real offensive runs for a given year, you have to correct for the run environment. Lave Cross's 21 FRAA in 1894 represent 32 "real" runs saved, while Joe Tinker's 30 in 1908 are only 26 "real" runs saved. But Cross's defense was worth 21/9 wins in 1894 above average, while Tinker's was worth 30/9 wins in 1908 above average. All of this is accounted for in my WARP system.
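In code, the conversion looks like this (the runs-per-win figures below are ballpark assumptions for those seasons, for illustration only):

# Convert BP FRAA (always 1/9 win per run) into "real" runs saved
# for a given season's run environment.
def bp_fraa_to_real_runs(fraa, runs_per_win):
    wins = fraa / 9.0             # BP FRAA: 9 FRAA = 1 win, always
    return wins * runs_per_win    # re-inflate to that season's run scale

print(bp_fraa_to_real_runs(21, 13.7))   # Cross 1894 -> ~32 runs (assumed RPW)
print(bp_fraa_to_real_runs(30, 7.8))    # Tinker 1908 -> ~26 runs (assumed RPW)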
   289. Juan V Posted: July 20, 2007 at 03:04 AM (#2447384)
Good to know. Thanks.
   290. TomH Posted: July 20, 2007 at 01:27 PM (#2447999)
Dan R, I'm confused about the numerator v denominator in your table. Does the first line mean that the spread (variation, dispersion, std dev) of your system's fielding differences among catchers is 46% of Dial's ZR? Or the other way around? Or some other interpretation? And for what years do the figures apply?
   291. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 20, 2007 at 02:33 PM (#2448040)
Sorry to be so opaque. My system has the same stdev as Dial's system for every position. Dial's stdev (and, therefore, mine) is 45.7% as big as BP FRAA's season-adjusted stdev at catcher, 63.4% of BP FRAA's season-adjusted stdev at 1B, 79.1% of BP FRAA's season-adjusted stdev at 2B, etc...
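In other words, the rescaling is just multiplication by the positional ratio from #288 (a sketch, treating both systems as mean-zero):

# Regress season-adjusted BP FRAA to Dial's ZR spread by position.
RATIOS = {"C": 0.457, "1B": 0.634, "2B": 0.791,
          "3B": 0.898, "SS": 0.781, "CF": 0.873, "COF": 0.894}

def regressed_fraa(bp_fraa_season_adj, pos):
    return bp_fraa_season_adj * RATIOS[pos]

print(regressed_fraa(20, "SS"))   # 20 BP FRAA at SS -> 15.6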
   292. TomH Posted: July 20, 2007 at 03:01 PM (#2448067)
comprendo
   293. Paul Wendt Posted: July 22, 2007 at 09:52 PM (#2450798)
Turf in the NL (compiled from ballparks.com)

1966 Hou
... [1969]
1970 Cin Hou Pit StL
1971 Cin Hou Phi Pit StL SF


This 1969-1971 change from 1/12 (1/10 in 1968) to 6/12 turf stands out in the history of both leagues as the only sudden and large change.

AL 1976-1977 is only 1/12 to 3/14, about 8% to 21%.
NL 2000-2001 is only 4/16 to 2/16, about 25% to 12%.

So it seems to me that NL 1969-71 is a special, compelling period for the study of fielding on turf vs grass. Because adaptation takes time, and takes place almost entirely after rather than before the change, it may be valuable to focus on a longer and asymmetric timespan such as 1968-1975.
   294. OCF Posted: July 23, 2007 at 01:32 AM (#2451147)
Dal Maxvill - superb defensive shortstop, weak hitter - lost his job just about the time turf came in. Of course, how much are you going to bet on the future prospects of a 32-33 year old who never could hit? But over in the more grass-heavy environment, that 1968-1975 timespan is the prime of Mark Belanger's career, and he continued past that. The legendary Mario Mendoza was never a full-time starter; his playing time peaked in 1979-80, and he always called turf parks home.

OK, the anecdotes aren't proving anything.
   295. DavidFoss Posted: July 23, 2007 at 03:11 PM (#2451461)
So it seems to me that NL 1969-71 is a special, compeling period for the study of fielding on turf vs grass.

Plus, my understanding is that the turf of that time was likely not the best of turfs. That is, the differences between turf and grass were even more stark in 1971.
   296. jimd Posted: July 23, 2007 at 07:59 PM (#2451752)
This 1969-1971 change from 1/12 (1/10 in 1968) to 6/12 turf stands out in the history of both leagues as the only sudden and large change.

In every case, the decision to change surface, either way, had an enormous impact on that organization, affecting half of the team's games. It had a small effect on the other teams in the league, 4-6% depending on the particular schedule.

The 1969-71 NL is unusual in that 5 teams made that switch over two seasons. Those 5 teams therefore had the expected "big" impact, but the other six bystanders went from just 9 games in Houston to 45 road games (28%) on a variety of turf surfaces, a larger effect than any of the other transitions.
   297. jimd Posted: July 23, 2007 at 08:03 PM (#2451755)
If you read Sporting News or Sports Illustrated (or anything else, I suppose) during the 1970's, people were constantly writing about turf and how it was changing/had changed the game, for better and for worse.
   298. Jim Sp Posted: July 24, 2007 at 12:05 AM (#2452061)
Dan R,
Looking at the BP WARP numbers, they have Bartell ahead of Bancroft on both peak (10.6, 10.3 vs. 9.7, 9.2) and career (103.7 vs. 88.2) WARP3. Can you summarize the factors that lead you to the opposite conclusion?
   299. David Concepcion de la Desviacion Estandar (Dan R) Posted: July 24, 2007 at 12:38 AM (#2452154)
Seems pretty straightforward to me--BP barely docks Bartell at all for quality of play (105.4 WARP1, 103.7 WARP3), while they absolutely clobber Bancroft (111.5 WARP1, 88.2 WARP3) for it. My system sees the 1930's NL as no more difficult to dominate than the 1915-25 NL, and makes no adjustment for league quality. (I find it nearly impossible to believe that the overall quality of play in the NL increased fully 25% in just 15 years, but YMMV.)

Both my system and BP WARP1 see Bancroft as about 6 wins above Bartell on career value, before giving Bartell war credit. As for peak, BP gives Bartell some exorbitant single-season FRAA scores that it does not credit Bancroft with, while my system weights Fielding Win Shares and BP FRAA equally and also reduces the standard deviation of SS FRAA by about 20%. With fielding given lesser weight (and with a relative assessment of the pair's fielding that is more favorable to Bancroft thanks to the inclusion of FWS), Bancroft's greater number of quality offensive seasons give him a higher peak than Bartell in my system.
   300. baudib Posted: July 24, 2007 at 01:00 AM (#2452218)
I think the "bigger" the mistake the market is making, the less likely it is; its one thing for teams to leave a few runs on the table each season by batting a bat-control guy in the 2-hole, but really, nobody tries a big athletic guy at SS for 30 years?


Mike Schmidt was a great athlete and a fantastic defensive player who very likely would have been a shortstop if he had been born 20 years later. One thing that is notable is that third base was probably stronger in the 1970s-80s than it's ever been. There's absolutely no comparison between the defensive skills of today's third basemen and those of the guys playing there in the 1970s-80s -- guys like Brooks Robinson, Schmidt, Nettles, Bell, DeCinces, Rodriguez, Wallach, Brett, etc. Some of those guys could have played SS in today's game.