Baseball Primer Newsblog — The Best News Links from the Baseball Newsstand

Saturday, December 11, 2010

King Kaufman: In defense of replacement-level players?

As The Kratzenjammer Kids (the only known Erik Kratz fan club…for now) begin to cheer!
Repoz
Posted: December 11, 2010 at 12:50 PM | 38 comment(s)
Tags: history, projections, sabermetrics, special topics
Reader Comments and Retorts
1. Walt Davis Posted: December 11, 2010 at 02:56 PM (#3708590)
Not necessarily true. This is the joy of random variation. We expect roughly half of "true" replacement-level players to put up a negative WAR.
Now, it might be surprising that 40% of PA are by replacement-level players ... and I'm pretty sure that can't be the case. I hope King's got his numbers right, because I recall that when I looked at "bench" players, about 1,200-1,500 PAs per team was the norm -- but that's only 22-25% of all non-pitcher PAs, and not all bench players are replacement level.
OK, my quickie estimate of the AL put it at about 1200 PA per team below replacement -- which is about 20% or so. I get a total of -37 WAR so that's about 2.5 per team.
Almost none of those guys were worse than -1 WAR. I get about 1500 PA per team between 0 and 1 WAR.
So it may be the case that about 40% of ML PAs are "replacement level" which seems a bit silly on our part. (Our concept of freely available has been silly for some time. :-)
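A quick simulation sketch of Walt's point in #1 (Python; the PA total, run spread, and runs-per-win conversion are illustrative assumptions, not anybody's published method): if a player's true talent is exactly replacement level, random variation alone puts roughly half of his observed seasons below 0 WAR.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: a part-timer with 400 PA whose true talent is
# exactly replacement level (expected runs above replacement = 0), with an
# assumed spread of ~20 offensive runs per 600 PA and ~10 runs per win.
n_players = 10_000
pa = 400
runs_sd = 20.0 * np.sqrt(pa / 600)

runs_above_repl = rng.normal(loc=0.0, scale=runs_sd, size=n_players)
war = runs_above_repl / 10.0

print(f"true replacement-level players with negative WAR: {(war < 0).mean():.1%}")
# ~50%, by symmetry -- which is the point
```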
Now, it might be surprising that 40% of PA are by replacement-level players ... and I'm pretty sure that can't be the case.
But you don't need to have 40% of PA going to replacement level players for this to be the case. Shouldn't random variation allow players with a true talent of "better" than replacement level to be worse than replacement level? Should we expect this variation to follow a standard distribution?
The Rangers this year started the season with Chris Davis at 1B, and Davis performed below replacement level. So they called up Justin Smoak, who was right at replacement level. And there was a good reason to keep running him out there at 0.0 WAR, as Seattle did, because he's both cheap and needed development in the majors. In fact Smoak, though he played 100 games in the majors last year at a dead-even 0.0 WAR, is emphatically not free talent; he's a high draft pick and he was most of the toll for Cliff Lee in a trade.
When they traded Smoak, they put Davis back at 1B. Davis was terrible so they called up Mitch Moreland and traded for Jorge Cantu. Cantu was another 0.0 guy; Moreland was actually well above replacement, 0.4 in just a quarter-season of play. But who knew? Moreland was initially not the prospect Davis was (a much lower draft pick, not as good a hitter in the minors as Davis). But he ended up above replacement (in limited play) and Davis ended up below. By the time you know who did what, it's too late to correct your mistakes. So the actual records of real replacement guys fluctuate a lot around nominal "replacement."
Just an illustration of what Walt says above ...
Well, it doesn't want to be arbitrary, right? Isn't the theory that "replacement level" should be placed somewhere near what you might call the "mean replacement level" of several seasons, i.e., the true-talent level at which AAA players could be expected to perform? It will always, of necessity, be an estimate, but I see no reason for it to be arbitrary, and I don't think the people who build stats like WAR and VORP intend for it to be arbitrary.
I understand why an unexpectedly large (or small) number of players might fall under that line in any given year -- it's in the nature of players to have good seasons and bad seasons, above and beyond their natural talent level -- but at some point one should start considering whether "replacement level" is being marked in the right spot. If fully a fifth of the league is playing below the point at which people have put replacement-level, that suggests to me that perhaps it's been pegged too high. I dunno, maybe not. But there's definitely a point at which the calculation must be deemed incorrect.
No. It wants to be loved.
A majority of the league (counting players, not playing time) is worse than average, since good players use up more innings/PA than bad ones. Take that, Wobegon.
There are a bunch of issues to keep in mind here, some of which have been touched on already (key among them that performance does not equal ability/talent level):
* Not all opportunities are created equal. (If I had a replacement-level player on my roster primarily in a pinch-hitting role, he'd likely perform below what we consider replacement-level production; often getting the platoon advantage wouldn't make up for the difficulty of that job.)
* Perceived talent level does not equal actual talent level (so, I might think someone is a 1 WAR player, but they've lost something and end up performing below replacement level before I jettison them).
* Oliver Perez. I mean, sunk costs and how teams deal with them.
* The replacement level isn't static within the season - for instance, it's easier to pick up a guy from outside your org for the minimum now than in July (when people are under contract).
* Measurement error (mainly on the defensive side) - should lead us to think more players are below replacement than actually are (assuming efficient markets and absent real concerns like the Smoak example above).
* You could also make an argument that sub-replacement (overall) players who are tactically useful matter here - though I imagine that this applies less to baseball than elsewhere (given small position player bench ... LOOGYs not doing a lot of mop-up work, etc...)
* Other stuff not coming readily to mind. :)
Personally, I'm happy with where many people are drawing the replacement-level line (CHONE, for example, sets average at 2 WAR per 150 games, i.e. replacement level at -2 WAA; that seems reasonable).
Well played, sir.
I'll assume you mean normal distribution. The answer is yes, no, yes and yes. Or possibly maybe.
"Natural" measures are often normally distributed or close to it. So, in the population, raw baseball talent might well be normally distributed (so yes). But, of course, only the top 1% get to the majors so, if the underlying distribution is normal, you expect the distribution of ML talent to be heavily concentrated at the low end with a handful out in the tail (so no).
However, it's also true that more talented (or better performing) players get more playing time. Tango looked at a sort of weighted distribution a few years back and that looks pretty normal -- i.e. the WAR (or RAR) distribution (which incorporates playing time) will look a lot more normal than the OPS distribution.
Then, within the context of a single player, the central limit theorem says that his mean/proportion from a sample of size n will be approximately normal for any n of about 30 or more, pretty much no matter what the underlying distribution. So the notion that, given equal playing time, we'd expect a set of replacement players to fall within +/- 2 SDs should be sound. Of course they aren't given equal playing time (the ones who are performing poorly hit the bench) and, as you suggest, I don't actually know what the SD is.
So, fair enough, if the 95% CI is +/- 2 WAR (per 150 games say) and if almost no starting player does worse than -1 WAR, that would suggest these guys are no worse than "true" 1 WAR players. Under a normal with that sort of SD, you'd expect about 17% of them to finish below replacement. (Note, if teams "knew" the "true" talent was +1, these guys wouldn't lose playing time.) As you move closer to "true" replacement level, you also see reductions in playing time -- increasing the variance on the rate stats, decreasing the variance on WAR -- and you probably get performances falling within -1 to +1 WAR, probably slightly skewed positive. So maybe 40-50% of these guys would be below.
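A back-of-the-envelope check of the ~17% figure above, assuming observed WAR is roughly normal around true talent with an SD of about 1 WAR per 150 games (i.e., the +/- 2 WAR 95% interval Walt posits):

```python
from scipy.stats import norm

true_talent_war = 1.0   # a "true" 1 WAR player (per 150 games)
sd_war = 1.0            # assumed SD, i.e. a 95% interval of roughly +/- 2 WAR

p_below_replacement = norm.cdf(0.0, loc=true_talent_war, scale=sd_war)
print(f"P(observed WAR < 0) = {p_below_replacement:.1%}")   # about 16%
```

That comes out to about 16%, in line with the "about 17%" quoted above; tighten or widen the assumed SD and the share moves accordingly.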
The debate over where to place the replacement level goes back to at least 1982, when Bill James' Abstracts first reached mainstream publication. No one agrees; estimates range from as low as roughly a .300 winning percentage to upwards of .480 for individual years. Right now, at least one WAR system (CHONE) uses different replacement rates for each season. I don't know how they calculate it, but it involves trying to adjust for defense, which is hard and gets harder as you go back in time.
Now, for those of you who don't know, Walt Davis is VERY good at math. So, I'm going to ask him here: Walt, where would you place the replacement rate, and why? ("I would not place the RR at any one point, but vary by season or whatever" is a perfectly fine response.) I've always just eyeballed it as one standard deviation below the average, but that may be meaningless. Or there may be better methods to measure it. So, I'm going to the guy with the math. He's welcome to opt out, as he may not have spent the time to do the work, but if he has calculated a RR, I want to know what Walt Davis' version is.
Also, while I'm talking to Walt, there's another problem I'm having in analysis. I know that standard deviation does not behave exactly the same for stats that can stretch toward infinity in theory (like batting, where, theoretically, you and your 8 other Babe Ruths can just keep hitting homers for as long as they can avoid three outs) as it does for stats with a hard wall the numbers cannot cross (like pitching, where total runs allowed can't go below zero). I remember, back about 18 years ago, concluding that one consequence of this is that periods with low league ERAs are what I call "compressed" leagues: the SD is so small that one point of ERA is worth more SDs in a compressed league than in a normal or high-scoring league. What I have forgotten is how the SD actually behaves when confronted with a wall, and how large the effect is. I tried looking it up on the internet, but couldn't find a reference that deals with this question. I've just been eyeballing pitcher ERAs and giving a little boost to those in low-scoring eras, but I don't know if that's a good idea or whether there is a mathematical formula you can use to compensate for the wall. So I'm asking Walt, because he will likely know, and may be able to point me to a reference I can use to do my own calculations.
BTW, the reason for eyeballing, rather than computing my own replacement level, is that there is no consensus. So working up my own method would just add to the confusion. - Brock Hanke
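One crude way to see the "compressed league" effect Brock describes (a toy Poisson model of runs allowed, not his method or anyone's published adjustment): if runs allowed behave roughly like a count whose variance tracks its mean, the SD of ERA shrinks in low-scoring environments, so a fixed edge in ERA is worth more SDs there.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_era_sd(league_era, n_pitchers=500, innings=180):
    """SD of ERA when every pitcher has the same true talent and runs
    allowed are Poisson -- deliberately crude, random variation only."""
    expected_runs = league_era * innings / 9.0
    runs = rng.poisson(expected_runs, size=n_pitchers)
    return (runs * 9.0 / innings).std()

for league_era in (3.00, 4.50):
    sd = simulated_era_sd(league_era)
    print(f"league ERA {league_era:.2f}: ERA SD ~ {sd:.2f}, "
          f"a 1.00 edge ~ {1.0 / sd:.1f} SDs")
```

Under these assumptions a 1.00 ERA edge is worth roughly 20-25% more SDs at a 3.00 league ERA than at 4.50, which is the direction Brock's "little boost" points; how large the real effect is also depends on the spread of true talent, which this toy model leaves out.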
TVErik, as Der K2 points out, much more than half the league is worse than average. The two previous blog posts were about this.
In 2010, 95 MLB players qualified for the batting title with a league-average or better OPS. That's 16 percent of all position players, but they accounted for 35.4 percent of the non-pitcher plate appearances, 37.2 percent of the hits and 48.8 percent of the home runs. Sixty pitchers qualified for their league's ERA title with a league-average or better ERA. That's 9.4 percent of all pitchers, and they accounted for 28.3 percent of all innings pitched.
In defense of league average
For the want of league average, greatness was missed (about Phillies starting pitching)
Sorry Brock, just saw this.
My answer is I don't know.
First, the zero point is arbitrary anyway so in one sense it doesn't matter where you put it. The shape of the distribution of talent and production will be the same regardless of where you put zero. "Replacement level" is just a convenient "substantive" concept. So I tend to think of the search for "true replacement level" as a bit of a folly. What I wish I knew more about was the standard deviation/variance and what the distribution looks like.
There are two arbitrary points which have the benefit of existing and not being up for debate however. First is league average; second is "true zero" (which makes sense for things like runs created). League average has the advantage that it should be just as easy to estimate marginal cost of a win from average as it is from replacement. The drawback of league average as a baseline is that it makes it harder to account for playing time. But, FWIW, I prefer WAA for HoF debates ... well, I would if b-r listed it. :-)
Anyway, I don't doubt that the folks who look at this are doing a pretty good job. Find league average and SD, convert each year to a "standard normal" (which may not actually be normal), look at the distribution, pick an arbitrary point (1 SD below? 2 SD below?) where only a relatively small percentage of performance ever falls, and go from there. The issues around replacement level mainly relate to putting a price tag on wins and players, not to measuring performance.
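A bare-bones sketch of that recipe (Python/pandas; the DataFrame columns and the -2 SD cutoff are placeholders, the cutoff being exactly the arbitrary choice described above):

```python
import pandas as pd

def flag_sub_replacement(seasons: pd.DataFrame, cutoff_sd: float = 2.0) -> pd.DataFrame:
    """seasons is assumed to have one row per player-season with columns
    'season' and 'war_per_150' (any per-playing-time rate would do)."""
    out = seasons.copy()
    by_year = out.groupby("season")["war_per_150"]
    out["z"] = (out["war_per_150"] - by_year.transform("mean")) / by_year.transform("std")
    out["below_line"] = out["z"] < -cutoff_sd   # the "replacement" line, placed by fiat
    return out
```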
On your second question -- I don't think the "wall" effect on SD is likely to matter a lot in baseball, especially with regard to individual player performance (it would matter in something like odds of winning a playoff series). The normal provides a good approximation to count/binomial distributions for decent sample sizes (somewhere around 30-50) and the exact methods are only going to offer a small improvement unless we are talking about genuinely rare events (e.g. no-hitters).
The important point I think (and this is Dan R's point) is that, for count variables, the mean and the variance are always positively related.* So higher/lower-scoring eras will have higher/lower SDs and measures relative to average (OPS+, ERA+) are going to be more extreme in the higher-scoring eras. It's one of the reasons Edgar would not (quite) be on my HoF ballot.
*Chone's WAR seems to make some adjustment for this as the RAR to WAR conversion rate seems to shift by era (and park).
I don't think this is correct. I have seen -- not sure if it's Tango or not, but probably -- the claim that the weighted distribution of ability by rate stat (OPS, wOBA, whatever) is approximately normal, and it appears plausible, but what that means is that if you *weight* each observation by PA (e.g. Evan Longoria gets a weight of 661, Rocco Baldelli gets a weight of 25) the resulting distribution is close to normal. Which is different from what you are saying: going from a rate stat to a counting stat like WAR just rescales each observation by a function involving PA, but leaves the weights the same, at 1 for each person.
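A toy illustration of the distinction (all numbers invented; the only point is that weighting by PA and rescaling by PA are different operations and need not give distributions of the same shape):

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented league: 500 players; better hitters tend to get more PA.
talent = rng.normal(size=500)
rate = 0.320 + 0.025 * talent                        # a made-up wOBA-like rate
pa = np.clip(350 + 150 * talent, 25, 700).astype(int)

# (1) PA-weighted distribution of the rate stat: each PA counts once.
weighted_rate = np.repeat(rate, pa)

# (2) Unweighted distribution of a counting stat: each player counts once,
#     with his rate rescaled by PA (a crude RAR/WAR analogue).
counting = (rate - 0.320) * pa

for name, x in (("PA-weighted rate", weighted_rate), ("counting stat", counting)):
    z = (x - x.mean()) / x.std()
    print(f"{name:17s} skewness: {np.mean(z**3):+.2f}")
```

In this toy setup the PA-weighted rate stays close to symmetric while the counting stat comes out noticeably right-skewed, so one form looking normal says little about the other.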
WAR is Wins Above Replacement (Level). 0 WAR is replacement level. It's 2 wins below average. Negative WAR is below replacement level.
Average is 2 WAR.
EDIT: Was gonna add a caveat, but AROM already did.
For instance, in the quickest and sloppiest study imaginable, I just looked at the nine long-career (3000+ PA) shortstops who retired between 2006 and 2009. In their last year as a regular shortstop, the nine posted a mean OPS+ of 91. In their first year somewhere else or on the bench, after having been a regular, their mean OPS+ was 69.
Obviously this study has ghastly flaws (I'm adding a bunch of rate stats and dividing them by N, for a start; for another, shortstops get replaced because of defensive issues or injury, and Nomar Garciaparra is in the set). But basically, it does seem like there's a correlation between dropping below a certain offensive threshold and losing your starting shortstop job (every one of the nine hit better in his last year as a regular than in the year when he was finally moved off SS).
This seems to me a common-sense definition of "replacement," in some of its senses, from the perspective MLB teams actually face (see Voros's comment in #14 above). Who's out there to replace you is a dicey proposition, but teams seem to have a definite idea of when they need to start looking for one.
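For anyone who wants to redo the quickie study less sloppily, the mechanics are just a couple of means (Python/pandas sketch; the file name and columns are hypothetical, and a real version would PA-weight the OPS+ averages rather than divide by N as admitted above):

```python
import pandas as pd

# Hypothetical input: one row per retired 3000+ PA shortstop with columns
# "player", "ops_plus_last_regular_year", "ops_plus_following_year".
ss = pd.read_csv("retired_shortstops_3000pa.csv")   # assumed file

print("mean OPS+ in last year as regular SS:",
      round(ss["ops_plus_last_regular_year"].mean()))
print("mean OPS+ in first year after       :",
      round(ss["ops_plus_following_year"].mean()))
declined = (ss["ops_plus_following_year"] < ss["ops_plus_last_regular_year"]).sum()
print(f"players who declined: {int(declined)} of {len(ss)}")
```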
I like that approach in #25. I've always looked at waiver-wire guys or minor league free agents to define replacement level. Another approach is to look at the worst X regulars in the league or something. It would be interesting to do a more comprehensive study of the point at which teams actually decide a guy needs to be replaced.
#29 AROM, I'm sure that you've read Keith Woolner's article (2002 Prospectus) about where replacement level "should" be. Pity it doesn't seem to be online. Stathead.com seems to be gone as well so his earlier articles on replacement level are gone. Hopefully the wayback machine has something.
I would suggest player development is another reason. Teams often put up with sub-replacement play from prospects who they think will ultimately develop into valuable players.
Another thing is uncertainty around a player's true talent level. In other words, even if all you're looking for is a replacement-level fill-in for your injured 3B, you may have to try two or three guys with limited experience there before you find one who actually meets that description. In that sense I think replacement level may be a bit high--i.e. there's some value to a track record which can give you more certainty around a player's future performance.
I think it depends on what you mean by this. Replacement-level players surely cannot be more valuable than they are credited for; by definition these are freely available players who have zero value. However, replacement-level play may have some value, since roughly half of replacement-level players will play worse than replacement level over a season, possibly the difference between making or not making the playoffs. Certainly there may sometimes be value in reducing variance: if two players both have mean projections at replacement level, but your confidence in one projection is higher than the other, then you may wish to use the player in whose performance you are more confident (alternatively, you may wish to use the higher-variance player if you're a team that is going to need some luck to make the playoffs). However, I don't think this variance/performance tradeoff explains much in the way of PAs going to replacement-level or below players.
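A small sketch of that variance point (Python; the SDs are invented and the normal assumption is just for illustration): two fill-ins with identical replacement-level projections but different uncertainty have very different tails.

```python
from scipy.stats import norm

# Both hypothetical fill-ins project to exactly 0 WAR; only the spread differs.
for label, sd in (("low-variance ", 0.5), ("high-variance", 1.5)):
    p_bust = norm.cdf(-1.0, loc=0.0, scale=sd)   # chance of a -1 WAR season
    p_boom = norm.sf(+1.0, loc=0.0, scale=sd)    # chance of a +1 WAR season
    print(f"{label}: P(WAR < -1) = {p_bust:.1%}, P(WAR > +1) = {p_boom:.1%}")
```

A contender protecting a lead would rather have the ~2% downside; a long shot that needs things to break right might prefer the ~25% upside, which is the tradeoff described above.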
As others have mentioned, this could be misleading: some of those guys who had negative WAR one year are not true sub-replacement players. Of course the converse is also true; some guys with positive WAR one year actually are not that good either...
So I looked at 2008-2010 combined: players who were below 0.0 WAR over those three years accumulated 97,996 PAs, or 17.5% of all PAs.
Not much of a difference, but is that too high?
Anyway, for 2008-2010, here are the guys given 1000+ PAs despite being sub-replacement level:
Player PA WAR/pos
Jeff Francoeur 1787 -2.1
Yuniesky Betancourt 1686 -0.3
Jose Guillen 1521 -1.2
Brad Hawpe 1503 -0.2
Mark Teahen 1456 -1.5
Garret Anderson 1290 -0.2
Jeremy Hermida 1289 -0.1
Lastings Milledge 1264 -2.5
Garrett Atkins 1215 -0.2
Ken Griffey 1137 -1.3
Ronny Cedeno 1114 -0.1
Andy LaRoche 1113 -0.5
Bobby Crosby 1066 0
Brendan Harris 1063 -0.2
Jonny Gomes 1062 -0.4
Ryan Spilborghs 1056 -1.2
Mike Jacobs 1025 -2
Garrett Jones 1012 -0.1
Mark Kotsay 1001 -2.2
Geoff Blum 1001 -0.7
Ryan Garko 1001 -0.1
Many of these guys are not terrible hitters (bad, but not terrible), but if you are a 1B/DH/LF, even a 100 OPS+ won't save you if you are brutal with the glove and on the bases. Some guys are on here because WAR says their D is utterly execrable: Hawpe has an amazing -60 fielding runs for 2008-2010, and Milledge and Teahen are both around -40 between fielding and baserunning... Many of these guys can do something, or used to do something, or were "supposed" to be better...
Some of these guys, Betancourt, Frenchy, GA... oh come on, the fact that they got so many PAs is truly disgraceful
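For reference, the whole exercise above is a few lines against a player-season table (Python/pandas; the file name and column names are assumptions, since the original figures presumably came from a WAR database export):

```python
import pandas as pd

# Assumed layout: one row per player per season (2008-2010) with
# columns "player", "pa", "war".
seasons = pd.read_csv("position_player_war_2008_2010.csv")   # hypothetical file

totals = seasons.groupby("player", as_index=False)[["pa", "war"]].sum()
sub_repl = totals[totals["war"] < 0]

share = sub_repl["pa"].sum() / totals["pa"].sum()
print(f"share of PA taken by sub-replacement players, 2008-2010: {share:.1%}")

# The 1000+ PA list is just a filter and sort on the same frame.
print(sub_repl[sub_repl["pa"] >= 1000].sort_values("pa", ascending=False))
```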
****
Is it possible that some of the better teams stockpile above-replacement players on their benches, and that some of the worse teams are fielding several guys who are at or below the free-talent level?
I'll answer with a qualified no. To wit (and how you feel about MLEs / which set you use makes a big difference here), the following is a listing of position players, by team, projected to be at least average or at least replacement level, as of the last available CHONE file (8/10 projections; I took out the 4 NPB guys and left in the free agents).
Team >0_WAR >2_WAR Total
ARI 17 5 46
ATL 21 8 51
BAL 20 4 49
BOS 27 11 53
CHA 21 4 44
CHN 22 6 46
CIN 26 9 51
CLE 29 7 50
COL 21 7 46
DET 21 6 45
FLO 23 6 48
HOU 13 2 51
KCA 22 4 51
LAA 21 7 56
LAN 24 7 58
MIL 24 7 48
MIN 19 7 44
NYA 30 10 55
NYN 25 8 63
OAK 24 8 46
PHI 22 7 56
PIT 28 5 57
SDN 24 6 57
SEA 23 5 51
SFN 22 4 50
SLN 21 8 47
TBA 25 13 54
TEX 26 6 61
TOR 23 6 44
WAS 22 5 54
free 16 2 29
Tot 702 200 1561
(note: All three teams with double-digit average position players are in the AL East. Oof.)
So, every team has enough replacement level players to fill out a roster and then some - but some guys might be hurt, or the positional aspects don't work, or you want to keep him in the minors a bit longer / keep the other guy in the bigs a bit longer ... and so on. Also, once the season starts, holes get harder to fill (as teams want to keep surplus talents around for insurance purposes).
Is this correct? A team of all-average players will win 82 games, no? If all 25 players are 2 WAR, that would make a replacement-level team a 32-win team, which seems lower than replacement level is usually placed. And surely not all of the bench players and short relievers are going to be 2 WAR. Or perhaps you meant that the average player overall is 2 WAR, as opposed to the average everyday player. But it still doesn't quite seem to work out.
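One way to see where the arithmetic bends (Python; the playing-time figures are rough illustrative assumptions, not anyone's official replacement baseline): "average = 2 WAR" is a full-season rate, and a roster doesn't contain 25 full-season jobs.

```python
# The question's arithmetic, spelled out:
avg_team_wins = 81.5                      # a .500 team
war_per_average_full_timer = 2.0          # per ~150 games / full season

print(avg_team_wins - 25 * war_per_average_full_timer)   # 31.5 wins

# But a team's playing time adds up to far fewer than 25 full-season slots.
# Rough, illustrative figures (assumptions, not measurements):
team_pa, pa_per_full_season = 6200, 700          # position players
team_ip, ip_per_full_season = 1450, 180          # pitchers, very roughly

full_season_equivalents = team_pa / pa_per_full_season + team_ip / ip_per_full_season
print(round(full_season_equivalents, 1))                                      # ~17
print(avg_team_wins - full_season_equivalents * war_per_average_full_timer)   # ~47-48 wins
```

Read that way, a replacement-level team lands somewhere in the 40s in wins rather than the low 30s, which is much closer to where such teams are usually pegged; the exact number obviously swings with the playing-time assumptions.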