#### Slicing the Stats

Batting Average and Home Runs visit Dan’s delicatessen.

When looking at a player’s performance, my favorite thing to do is break it down into small pieces—to see how the individual components of their play combine to create their total value. Sometimes, useful bits of information come from that kind of analysis; other times, you find something interesting or unexpected. Two statistics that are useful for this purpose are batting average on balls in play and home runs on contact. I used both statistics in articles about players during the 2002 season, so this will serve as a chance to see how those players finished the season, and how players in 2002 stacked up against each other and against players of the past.

**Batting Average on Balls In Play (BABIP)**

Batting average on balls in play isolates the component of batting average (and on-base percentage) that comes on balls put into play by the batter—balls that fielders can make plays on. While hitters do exert a great deal of influence on the outcome of balls in play, a lot of other factors affect the outcome as well, including the skill of the fielders and simple chance. BABIP is calculated with the formula (H-HR)/(AB+SF+SH-SO-HR).
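The formula translates directly into code. Here is a minimal Python sketch; the stat line fed in at the bottom is invented for illustration, not any real player's numbers:

```python
def babip(h, hr, ab, sf, sh, so):
    """Batting average on balls in play: (H - HR) / (AB + SF + SH - SO - HR).

    The denominator counts balls the batter put into play: at bats plus
    sacrifice flies and bunts, minus strikeouts and home runs (neither of
    which gives the fielders a chance to make a play).
    """
    balls_in_play = ab + sf + sh - so - hr
    return (h - hr) / balls_in_play

# Made-up example line: 150 H, 20 HR, 550 AB, 5 SF, 2 SH, 100 SO
print(round(babip(150, 20, 550, 5, 2, 100), 3))  # → 0.297
```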

Around the middle of the season, I noted that Bret Boone was struggling despite performing at his career level or better in many important categories such as walks, strikeouts, and power. Boone’s problem was an abysmal batting average on balls in play, which was actually the best-case explanation for his struggles: BABIP is subject to a higher degree of variance than other offensive statistics, so it was more likely that Boone was experiencing a flukish low than a meaningful offensive decline.

At that time, Boone’s BABIP was just .243, lower than it had ever been over a full season, and well below his career .288 average. As such, it seemed reasonable to expect Boone to do noticeably better during the second half, and end up with a BABIP between his season-to-date .243 and his career .288. Instead, Boone closed out the season by posting a .369 BABIP over 261 at bats, raising his final BABIP to .296—*above* his career rate. During that time, Boone essentially hit as well as he did in 2001, which turned out to be just about the only bright spot for Seattle in the second half of the season.

While Boone’s BABIP turned out better than expected, it was far from the best of 2002. That honor goes to Milwaukee’s Jose Hernandez—yes, the same Jose Hernandez who made news by nearly breaking the single-season strikeout record. Hernandez also reached historical significance in another statistic, only nobody noticed this one.

When Hernandez put the ball in play, he got a lot of hits—so many that he made more outs by striking out (188) than he did on balls in play (187, including reaching on errors). Hernandez’s BABIP was .404, which comfortably led the league:

**2002 BABIP Leaders (Min 300 PA)**
1. Jose Hernandez, MIL .404
2. Jim Edmonds, STL .375
3. Manny Ramirez, BOS .373
4. Bernie Williams, NYY .372
5. Austin Kearns, CIN .370
6. Quinton McCracken, ARI .357
7. Adam Kennedy, ANA .356
8. Bobby Abreu, PHI .354
9. Larry Walker, COL .353
10. Dan Wilson, SEA .350

More interesting is where Hernandez’s season ranks all-time:

**1913-2002 Single Season BABIP Leaders (Min 500 PA, SF not included)**
1. Babe Ruth, 1923 .419
2. Rod Carew, 1977 .411
3. Rogers Hornsby, 1924 .411
4. George Sisler, 1922 .411
5. Manny Ramirez, 2000 .408
6. Jose Hernandez, 2002 .406
7. Andres Galarraga, 1993 .405
8. Roberto Clemente, 1967 .405
9. Ty Cobb, 1913 .403
10. Willie McGee, 1990 .400

Jose Hernandez reached a historical level of success at turning his balls in play into hits in 2002, but unfortunately, all the attention was focused on his strikeout record.

**Home Runs on Contact (HR/Contact)**

Home runs on contact provides a very pure measurement of a player’s home run power by removing non-home-run opportunities such as strikeouts. The question HR/Contact answers is this: when a hitter makes contact, how often does he hit the ball hard enough for a home run? HR/Contact is calculated with the formula HR/(AB+SF-SO). Here are the 2002 leaders:

**2002 HR/Contact Leaders (Min 300 PA)**
1. Jim Thome, CLE .150
2. Barry Bonds, SF .128
3. Sammy Sosa, CHI .118
4. Alex Rodriguez, TEX .113
5. Russell Branyan, CIN .104
6. Manny Ramirez, BOS .094
7. Rafael Palmeiro, TEX .094
8. Jeremy Giambi, PHI .091
9. Lance Berkman, HOU .091
10. Jason Giambi, NYY .091
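Like BABIP, the HR/Contact formula is a one-liner; a Python sketch with an invented stat line (not any listed player's actual numbers):

```python
def hr_per_contact(hr, ab, sf, so):
    """Home runs on contact: HR / (AB + SF - SO).

    The denominator is the number of times the batter actually put the
    bat on the ball (at bats plus sac flies, minus strikeouts)."""
    return hr / (ab + sf - so)

# Hypothetical slugger's line: 52 HR, 480 AB, 5 SF, 139 SO
print(round(hr_per_contact(52, 480, 5, 139), 3))  # → 0.15
```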

And the historical leaders:

**1913-2002 Single Season HR/Contact Leaders (Min 500 PA, SF not included)**
1. Mark McGwire, 1998 .198
2. Barry Bonds, 2001 .191
3. Mark McGwire, 1999 .171
4. Mark McGwire, 1996 .167
5. Jim Thome, 2002 .152
6. Sammy Sosa, 2001 .151
7. Jim Thome, 2001 .144
8. Babe Ruth, 1920 .143
9. Sammy Sosa, 1998 .140
10. Sammy Sosa, 1999 .139

Jim Thome’s best HR/Contact season to date was also one of the best ever. His 2.23 total bases per hit (TB/H) also led the major leagues and was the 12th best mark ever in that category. Thome is clearly one of the most powerful hitters in the major leagues, if not the most powerful.

Jeremy Giambi’s eighth-place ranking is a show of power that some worried he would lack, and another indication of his potential. And Alex Rodriguez reached a career high in this metric and in TB/H as his power hitting continued to improve.

Rather than measuring overall value, these statistics help to demonstrate some of the specific areas where that value comes from. In doing so, they also show the characteristics that distinguish major league hitters—characteristics that might not show up as clearly in more inclusive metrics.

Dan 'The Boy' Werr
Posted: November 20, 2002 at 05:00 AM |


## Reader Comments and Retorts


1. Ephus Posted: November 20, 2002 at 01:05 AM (#607326)

First, a couple of picayune things. I wouldn't include SH in the denominator of a player's BABIP. In fact, I never include IBB's and SH's when I am studying characteristics of players, unless I am specifically interested in those events. SH's will only screw things up, as their number per PA is mainly a function of a player's overall hitting ability (less ability, more bunts), his bunting ability (better bunter, called on to bunt more often), his spot in the lineup (e.g., a #2 hitter is more likely to bunt), and his manager's predilection for bunting. Sure, there will be some hits and some non-SH outs that were actually bunt attempts, but they are a small percentage of most players' PA's and they will tend to cancel each other out (although not in the same proportion as the rest of a player's hits and outs).

The proper thing to do when evaluating players, or doing the type of study that you did, is actually to ignore all SH attempts. Obviously you can't do that unless you are using PBP data. In any case, you should not use SH in your denominator. No big deal though.

The other picayune thing (also no big deal) is that you correctly use SF's in the HR per BIP for one group and then you do not use SF's in the other group (the historical group). Obviously, you cannot fairly compare the numbers from both groups as you would be comparing apples with oranges (more like Delicious apples and McIntosh apples, since with or without SF's, the numbers will be close). Again, no big deal, but why did you eliminate SF's in the historical group (were they not counted in the early 1900's)? If you don't have SF's for a particular player, you should pro-rate their BIP outs to "add in" fictitious SF's, so that the denominators for both groups are equivalent. (BTW, so everyone knows, when you do most research studies, you should almost always count SF's as a regular fly ball out).

Anyway, I spent too much ink on these two picayune things. Someone mentioned in a post that they thought that batters tended to have around the same BABIP. That is not true at all! While it is true for pitchers (DIPS), batters have a wide variety of true BABIP for a variety of reasons. It is kind of like pitcher and batter HR rates (per BIP or per PA). There is much less variability in a pitcher's HR rate than in a batter's. We know this intuitively, of course. The same is true, however, but to a lesser extent, for BABIP - but primarily for the same reasons.

Batters achieve their particular BABIP by virtue of three things: how hard they hit the ball, how often they hit the ball squarely on the bat, and the trajectory of the ball. Although all three overlap and are interrelated, the first is a function of how hard they swing and their body mass (p = mv), the second is a function of their "batting eye", and the third is a function of the average "levelness" of their swing. In any case, some batters hit the ball harder (on the average) than others because of their bat speed and their body mass, some hit the ball harder because they make better contact, and some hit the ball harder because they tend not to uppercut or whatever the opposite of uppercut is (although certainly an uppercut combined with good bat velocity and mass yields more HR's). So you can see how batters with different strength, mass, batting eye, batting approach, etc. can yield lots of different true BABIP.

Batters who strike out a lot probably (I never checked) tend to have higher BABIP, since a major reason for striking out a lot is swinging hard, especially with 2 strikes. Of course, if a particular batter strikes out a lot principally because he has a bad eye (swings and misses a lot and/or swings at a lot of bad pitches) and NOT because he necessarily swings hard, he may not have a higher than average BABIP even with lots of K's. I assume that one of the principal reasons why Jose Hernandez has a high K rate and a high BABIP is because he swings hard "all" the time, especially with 2 strikes (at least until the last 2 weeks of the season). And yes, his BABIP will (should, on the average) regress precipitously next year (so will his K rate for that matter), so that overall he is not expected to perform nearly as well next year as this year. Remember, any stat that is above or below average for any period of time will (should, on the average) regress towards the mean in the future. If we are talking about only one season, depending upon the stat (as the author states, correctly, each stat has a different "rate" [coefficient] of regression), that regression will be quite a bit. Of course, if we want to project Hernandez' BABIP next year (and we can), we want to use his career or his last 3 or 4 years' sample BABIP (and NOT only this year's) and then regress (not TOO much) that number towards the average of a player who swings hard (I am assuming that he does) and K's a lot.

The above discussion brings up another important point that the author referenced. Since each stat (K rate, BB rate, BABIP, HR rate, etc.) regresses differently (some more, some less), if we want to project a player's stats for next year, or we want to see how "over or under a player's head" he is playing, like Hernandez or Boone, say, half-way through this year, it is not a bad idea to look at the various stats individually and separately, as the author suggests. For example, if a player is playing horribly and his BB rate is way below normal (for him) and his K rate is way above normal, and the rest of his stats are around normal, then we can conclude that he is not so likely to return to form (again, all players will return to SOME degree), since BB's and K's don't get regressed as much as BABIP, for example. OTOH, if a player is playing poorly, like Boone, simply because his BABIP is down (his BB, K, and HR rates are around normal), then we can expect him to have a greater likelihood of returning to normal. That was good and useful insight by the author.

BTW, in reality, in BOTH examples above, an overall projection would be somewhere between his current season's stats and his career stats; however, in the first example (anomalous K and BB rates), that projection would be closer to the current season's stats than in the second example (anomalous BABIP). This illustrates the "danger" of using a projection system that uses OPS or lwts or RC (or whatever), rather than a projection system that projects each component separately and then combines them if anyone wants to know the projection in terms of OPS, lwts, etc...

*I wouldn't include SH in the denominator of a player's BABIP. In fact, I never include IBB's and SH's, when I am studying characteristics of players, unless I am specifically interested in those events.*

You're right, and I'm not sure why I ever did include them (obviously it was a conscious decision at some point). Old habits die hard.

*Again, no big deal, but why did you eliminate SF's in the historical group (were they not counted in the early 1900's)?*

Should have mentioned this... I have no SF data until 1954. Adding some in probably would have been wiser, but I took the easy way out and dropped them altogether (even from the players where I had the data... you'll notice that players who appear in both the 2002 and the historical charts have slightly different numbers; this is why).

*Since each stat (K rate, BB rate, BABIP, HR rate, etc.) regresses differently (some more, some less), if we want to project a player's stats for next year, or we want to see how "over or under a player's head" he is playing, like Hernandez or Boone, say, half-way through this year, it is not a bad idea to look at the various stats individually and separately, as the author suggests.*

I agree with this point on regression, MGL. I also think it might apply to park effects. I'm not familiar enough with LWTS, for example, to know if the inputs are park-adjusted, or the output.

I appreciate the feedback and the compliments.

Second, Jose Hernandez's recent BABIP rates (SH not included this time):

He's been a good BABIP hitter, but obviously 2002 was far out of line.

Third, the correlation between K-rate and BABIP for 2002 hitters with at least 300 PA was .221. However, what makes that really interesting with regard to hitting the ball hard: the correlation between BABIP and TB/H was -0.055, and between BABIP and HR/Contact was .100. As a side note, TB/H is correlated with K-rate at .511, and HR/Contact with K-rate at .527.

There's probably at least some selection bias occurring: a player with a high K-rate, for example, would need a high BABIP to have a batting average good enough to play.

I appreciate all the comments.

I took them and ranked them by GB/FB ratio.

The MLB average in 2002 was 1.133 GB : 1 FB and the standard deviation .440.

I then broke the players into four cohorts: 1) GB/FB ratio more than .5 STDEV above average, 2) 0 to +.499 STDEV, 3) 0 to -.499 STDEV, and 4) below -.5 STDEV.

Cohort | Players | Avg BABIP | Avg GB/FB
1 (above +.5) | 41 | .312 | 1.692
2 (0 to +.5) | 35 | .304 | 1.218
3 (0 to -.5) | 36 | .295 | 1.019
4 (below -.5) | 39 | .292 | .742
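The cohort construction described above can be sketched in Python. The player entries here are invented for illustration; only the league average (1.133) and standard deviation (.440) come from the comment:

```python
# Group hitters into cohorts by how many standard deviations their GB/FB
# ratio sits from the league average, then average BABIP within each cohort.
# Player data is invented for illustration.
players = [
    {"gb_fb": 1.70, "babip": 0.315},
    {"gb_fb": 1.25, "babip": 0.305},
    {"gb_fb": 1.00, "babip": 0.296},
    {"gb_fb": 0.70, "babip": 0.290},
]
league_avg, stdev = 1.133, 0.440  # 2002 MLB figures from the comment

def cohort(z):
    """Map a GB/FB z-score to the four cohorts defined above."""
    if z >= 0.5:
        return 1          # more than .5 STDEV above average
    if z >= 0.0:
        return 2          # 0 to +.499 STDEV
    if z > -0.5:
        return 3          # 0 to -.499 STDEV
    return 4              # below -.5 STDEV

buckets = {}
for p in players:
    z = (p["gb_fb"] - league_avg) / stdev
    buckets.setdefault(cohort(z), []).append(p["babip"])

for c in sorted(buckets):
    vals = buckets[c]
    print(c, round(sum(vals) / len(vals), 3))
```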

I think it's safe to say a GB tendency is better to have if HRs are out of the equation.

And the benefit in separating into hitting components would be to get a "read" on a player's hitting approach, and to determine better regression values. I think this point also jibes with Ben's first paragraph.

Thus, there is a real tendency for groundball teams to allow more hits. This balances out, however:

* Those hits are singles. Groundball and flyball teams give up doubles and triples at the same rate.

* These teams do give up fewer home runs.

* These teams turn more double plays.

I think this means: if you can belt the ball out of the park, use a slight uppercut and don't worry about the infield singles you'll lose. If you cannot belt the ball out of the park, don't even try.

I think back to Dick Schofield. He spent the second half of 1990 and the first half of 1991 not trying to hit his piddly 8 home runs a year. He hit .290-something for this time period, but stopped for some silly reason. The Angels would have won more games had Schofield learned he was helping them more as a slap hitter. Baseball needs more of the Richie Ashburn, Kenny Lofton-type hitters.

*I'm still failing to see the motivation for this particular splitting of offensive stats. DIPS provides a good motivation for considering BABIP separately for pitchers, but not for batters. MGL mentions that BABIP, homer rate, etc. regress differently, and I'm sure they do, but that would be true of any separation of performance measures (e.g. OBP and isolated power).*

I think you and Tango might be missing my point slightly. I'm not talking at all about a hitter changing their approach (and I wasn't in the Boone or the Giambi articles).

Boone is a great example. Many people had written Boone off at midseason as he was performing well below career levels. OPS, OBP, and SLG all were low. For a player Boone's age, it made perfect sense to decide that he was in real and serious decline.

However, when I broke Boone's stats down, I found that his BABIP was very low, while everything else (K rate, power, and BB rate) was at his career levels or higher.

We know that those values are more reflective of ability and more consistent than batting average. I posted these correlations on an earlier thread, but here they are again, from 1990-2001, players who in year n were on the same team as in year n+1 and who had 500 PA each year (there were 696 two-year sets)... this is the correlation in each of these stats between year n and year n+1:

http://www.baseballprimer.com/clutch/archives/00004796.shtml#52

The above link has a little more detail on that.

Boone's BABIP, with no help of his own, was so obviously out of whack that it was extremely likely to rise. Remember that his other stats were all in-line or better than his career, and the one area where he was struggling was the most variable, outside-influenced area of a hitter's production. A turnaround for Boone was a much safer bet than it would have been if his decline came any other way (as MGL notes), and Boone sure enough came around (in a bigger way than could have been expected).

Same story for Giambi:

*I especially agree with Ben's last paragraph. To compare the brother Giambis, they have each determined that the optimal production for them is their current hitting approach. On the other hand, the value in separating these things is that perhaps they are not using an optimal hitting approach for their abilities. If the bad Giambi were to cut down on his Ks by changing his hitting approach, will this lower or increase his overall production? Who knows.*

First of all, there's no bad Giambi. :-)

Second, I think the confusion on this point came from this sentence in my Giambi article: "If Jeremy can reduce his strikeouts, he may well find himself among the league's most dangerous hitters." I was not advocating a change in approach but hoping for an improvement in skill.

If a player spends all of his at bats simply flailing at the ball as hard as possible, I would fully expect that as time passed, he would become more and more adept at connecting (perhaps I should study that at some point, but that's an intuitive conclusion). It's my belief that a hitter's batting eye, coordination, etc., can improve without a change in approach.

*But then, he might also get better with the same K rate and getting more power. I just don't see any intrinsic value in this particular separation.*

It's informative. More inclusive numbers can be much more deceptive. For example, you might see the Giambis' slugging percentages and conclude that one was more powerful than the other. But that would be wrong; it's the K-rates causing the difference. You might see their OBPs and assume one walked more than the other (well, one does, but it's the other one), but again, it's the K-rate. You might see their batting averages and conclude one got more hits in play, but once again, it's the K-rate creating the distinction (their BABIP was equal at the time; Jason's has pulled away some now, but nowhere near enough to create the massive BA difference).

I point that out not because I hope Jeremy will read it and do something about it, but because I find it incredibly surprising and interesting, and because it changed how I view each of those hitters.

1) They do obviously beat out more infield ground balls.

2) (this is the little discussed reason) They get more ground balls hits THROUGH the infield because the infielders must play closer to home plate especially at the corners (mostly third, of course).

If they don't get more hits (i.e., have a higher BABIP), it is because all other things are NOT equal - i.e., they hit fewer line drives or less "hard" ground and fly balls, etc.

Ben V., correct me if I'm wrong - you are one of the best stat guys - but how much we regress a stat (i.e., the r from time period to time period), for the same given sample size, depends upon the sampling error. As the article on "regression to the mean" in Clutch Hits pointed out, the sampling error comes from 2 things - measurement error and a change in an individual's true mean. For this discussion, we are only concerned with the former - measurement error.

A player's overall offensive talent is generally measured by some combination (OPS, lwts) of his various offensive events. However, each event (singles, doubles, home runs, K's) is sampled differently, with different measurement error associated with each event. Apparently some of those measurements (singles, for example) have a much higher measurement error than others (K's, for example). As far as I can tell, that is why we have different regression coefficients for each event, even for the same sample size. For example, if speed down to first base were part of an offensive metric (there's no reason why it couldn't be), the measurement error would be very small, as compared to other events (singles, doubles, triples, etc.). That is why when projecting, we get more accurate results when regressing each event independently (in fact, we must do that), and that is also why, if we want to do a rough estimate of whether a player is playing "over his head" or not (i.e., how much and how likely it is that he will regress in the future), it behooves us to look at those events which have the largest measurement error (the largest regression coefficients), rather than those events which have the smallest coefficients, or rather than some combined measure of offense, like OPS...

*However: (1) a coefficient of .5 would seem to indicate a pretty significant character trait, so I'm not sure the DIPSish Boone analysis is justified...*

Sure, there's a lot of skill there, but also implicit in that correlation is that a lot of other stuff affects it, too. So... if we know this about Boone:

And we know that Boone has established himself as being able to post a significantly higher BABIP over his career, is it more likely that his skill has suddenly and precipitously declined, or that for 389 PA, he's had some bad luck, and that the other things that affect BABIP besides his own skill have just been going poorly?

Either, of course, is possible, but by knowing that it's BABIP, and knowing that BABIP is much more variable than other offensive categories, we know that it's more likely for Boone to rebound than it would be if it were power or strikeouts, for example.

So here's Boone's 2002 breakdown. In this chart, the career numbers are through 2001, BB rate is BB/(BB+AB), and the halves are defined as before and after the article; the %Change is between the first and second half:

What's amazing is that while Boone was being written off, his TB/H, HR/Con, K/AB, and BB rate were equal to or better than his career rates, but you would never have known it from AVG, OBP, SLG, or OPS. K/AB could have created the same appearance of decline, but the chances of rebound would have been much worse.

I'll happily defend the utility of this breakdown in Boone's or similar cases. In Giambi's case, it was less useful, but still interesting in my opinion. At the very least, we know that there's an error in saying, for example, "Jeremy is a good hitter, but he lacks his brother's power," based on the SLG differential.

But I also think there could be benefit: intuitively, without studying it, would you expect a hitter to be more likely to improve his strikeout rate or his power if he doesn't change his approach? If the answer isn't that they're equal, there's use in knowing Giambi's breakdown.

Apologies for the long-winded reply.

*In reality, his BABIP was down and his homer, walk, and K rates were steady (trusting you on this).*

Don't take my word for it... look at the "Career" (92-01) and "1stHalf" columns in my last post. TB/H slightly up, HR/Con steady, strikeouts way down, and walks notably up. BABIP abysmal.

*What if, instead, his BABIP, homer, and K rates were steady but he wasn't walking? In which case is he more likely to return to form?*

First of all, the real Boone: yes, it is possible that Boone lost it. However, you have to look at what we know and determine whether that's likely:

1. Boone's BABIP was good in 98, good in 99, good in 00, fantastic in 01, horrible in the first half of 02.

2. BABIP is highly subject to random variation when compared with other offensive stats.

3. Boone was not struggling but was in fact doing better than normal in the other important offensive categories.

So what are the odds that Boone has fallen off a cliff, and what are the odds that he's experiencing bad luck, considering that this is a category where we expect bad luck from time to time? I'd say that with only a half season of bad BABIP, variation is by far the most likely reason. As time passes, of course, variation becomes less and less likely...if we looked back at the end of the season and his BABIP was still so low, we would think it was more likely a real change. After two seasons, we'd be pretty sure (I'm guessing).

(Note: "Decline" in this post refers to diminishing skill)

Now, about walking (of course, walking can't directly affect SLG; for Boone to slump as much, you'd have to see a change in his other peripherals, too, even if as a result of the walking thing. Not so with BABIP or K's)... over a half season, the degrees to which BABIP and BB rate vary are pretty different, I'd guess based on those correlations. So let's take random variation as our starting point. The real Boone has a low BABIP, meaning there is a good chance it's caused by variation. The fake Boone has a low BB rate, which means there's not as good a chance it's caused by variation. Variation, in my opinion, would mean the best chance for a rebound.

Now approach: The real Boone has a low chance of having changed his approach to cause the decrease. The fake Boone has a better chance. I'd say a new approach was the second best case for rebound, with the hitter ideally changing back.

Now decline: The real Boone has an okay (I'd argue still low) chance of being in sudden decline based on a half season of a low number in the least steady category after several years of success (that is, at least being an average player) and immediately after great success. The fake Boone also has a low chance of this based on the idea that BB rate doesn't typically decline (do we know this?) very much, but this is inflated by the fact that variation is low in this category.

I think the missing link is how likely it is that it happens at all. We say that BB rate isn't very subject to decline, so a sudden drop likely isn't decline. We also say that BB rate isn't very subject to variation, so a sudden drop likely isn't variation. What perhaps we should actually say is that BB rate isn't very subject to decline or variation, so a sudden drop isn't likely. The odd time it does happen might be the odd case of decline or variation.

In other words, the chances of BB rate experiencing this degree of decline or variation are low, but that doesn't mean that once such a change has happened, each explanation remains unlikely. It could be that there's a 1 in 10,000 chance of a precipitous drop in walk rate, and a 99% chance that such a drop would be caused by decline (obviously an exaggeration).

With BABIP, we know the chance of variation is high, so when we see something that looks like variation, it probably is. It's not that Boone can't get worse at BABIP--it's just that you wouldn't infer it based on one half-season, and it probably wouldn't be so dramatic.

Rob, I wish I knew, and maybe someone else does--if not, maybe I'll try to work something out.

*Dan, regarding Boone v. Ben's Boone2, do these components have "typical" career paths? For example, does BABIP on average decline over time? Same question for K, HR, BB rates.*

I took a stab at this. It's my first real try at running some sort of study beyond the basics, so please make any suggestions for ways to improve, and please feel free to point out things I've done wrong.

Anyway, I calculated the BABIP for all players since 1950 and greater than 400 BIPs in the season. I also recorded the player's age in that season (year - birth year, could be improved by using age as of a certain date) and the player's years of experience (year - debut year). For any player where I had at least 2 consecutive seasons, I then calculated the difference in BABIP.

For example, for Edgardo Alfonzo I have the following data:

Year | Experience | Age | BIP | HIP | BABIP | Change
1998 | 3 | 25 | 468 | 138 | .295 | -.034
1999 | 4 | 26 | 526 | 164 | .312 | .017
2000 | 5 | 27 | 455 | 151 | .332 | .020

I know I probably should not consider all 3 of these together, since the samples really aren't independent, but I can't figure out how to separate them in Excel.

Anyway, I then take a look at the correlation between age and change in BABIP and between experience and change in BABIP. There was basically no correlation. (-.02 for the first, -.04 for the second, 2645 samples). So it doesn't appear that BABIP rate varies in any predictable way with either age or experience.

Perhaps this would be measured better by calculating a player's peak and then seeing if BABIP declines after that. I don't know if that would make a difference.

Once again, I would appreciate any criticism of what I've done here - since I'd love to improve my methods. Thanks.

*On the other hand, if we use some neutral criterion (neutral with regard to the BABIP etc. slicing) to identify the whole set of decliners, then on average I would agree with you: the ones whose decline is primarily BABIP are the ones most likely to rebound.*

Right, and that's really what I originally did with Boone: saw a decline, pinpointed the cause, and then evaluated the possibility of rebound.

Dan, I appreciate the study. I'm not a statistician, so my apologies for not being able to give you the feedback you want. Hopefully someone here can...

"Plate discipline", as measured by BB/(AB+BB), or whatever, has sample error, just like every other measurement stat we encounter in baseball. Therefore, A-Rod's or anyone else's BB rate (per PA) will regress, just as their OPS, lwts ratio, HR rate, etc., will regress. One of the things that we are discussing on this thread is the fact that different measurements have different magnitudes of sample error and therefore have different magnitudes of regression. When we look at a sample of a player's stats (any stat), if we are "forced" to regress a lot in order to estimate the true ability (that that stat measures), then we can say that that player got very lucky or unlucky in that sample. If we do not have to regress a lot, then that player, by definition, did not get that lucky or unlucky. This is all assuming that we characterize being lucky or unlucky based on the difference between a player's sample stats and our estimate of his real ability as measured by that stat.

Since BABIP is a stat that has a large measurement error (sample error) relative to, say, BB rate or K rate, we are "forced" to regress any sample BABIP more than an equivalent-sized BB or K sample. That means that an unusually high or low BABIP indicates luck more than an unusually high BB or K rate does. And of course, when we talk about being lucky or unlucky in a sample, in the same breath we are also talking about the likelihood of that player returning to his previous level (outside of that sample) AND the likelihood of that player returning TOWARDS the overall level of the population he comes from.
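The "noisier stat gets regressed more" point can be made concrete with the standard shrinkage formula, where the fraction of a sample stat you keep is var_true / (var_true + var_error). The variance numbers below are assumptions chosen purely for illustration, not fitted values.

```python
# Illustration of why a high-noise stat regresses more. Under the usual
# model, the weight given to the observed sample is
#     r = var_true / (var_true + var_error),
# and var_error shrinks as sample size grows. The talent-spread and
# noise variances below are made-up numbers for illustration only.
def shrinkage(var_true, var_error):
    """Fraction of the sample stat retained; the rest regresses to the mean."""
    return var_true / (var_true + var_error)

# Same spread of true talent, but the BABIP-like stat has more noise:
r_bb    = shrinkage(var_true=0.0004, var_error=0.0002)  # keep ~2/3 of sample
r_babip = shrinkage(var_true=0.0004, var_error=0.0008)  # keep only ~1/3
print(round(r_bb, 2), round(r_babip, 2))
```

So with these toy numbers, an extreme BB rate keeps about two-thirds of its distance from the mean, while an equally extreme BABIP keeps only a third, which is exactly the sense in which a weird BABIP "indicates luck" more strongly.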

This last point is an interesting one. Suppose a player has a BA in year 1, his first year in baseball, of .375, a BA in year 2 of .372, and a BA in year 3 of .344, but only around 100 AB's in each year. Well, it looks like he got "unlucky" in year 3, relative to his previous lifetime BA. However, because stats get regressed to the population mean, and because this guy has only 300 or so total AB's, his likely BA for year 4 is some weighted combination of all 3 years regressed to the average BA of all players, which will yield something like .310 (we have to regress a lot because we only have 300 AB's, hence potentially large sample error). As you can see, his projected BA in year 4 does not "return" him to or towards his previous lifetime BA!
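The arithmetic in that example can be sketched with a simple "phantom at-bats" shrinkage. The league mean of .265 and the constant k=350 are hand-picked assumptions so the toy numbers land near the .310 the comment mentions; a real k would come from the variance structure of BA.

```python
# Sketch of the regression-to-the-mean arithmetic above. k is the number
# of "phantom" league-average AB's mixed in; k=350 and the .265 league
# mean are hand-picked for illustration, not empirically fitted.
def regress_to_mean(sample_rate, n, pop_mean, k=350):
    """Shrink an observed rate toward the population mean."""
    return (sample_rate * n + pop_mean * k) / (n + k)

# Three ~100 AB seasons of .375, .372, .344: lifetime ~.364 over 300 AB.
lifetime_ba = (0.375 + 0.372 + 0.344) / 3
projection = regress_to_mean(lifetime_ba, n=300, pop_mean=0.265)
print(round(projection, 3))  # → 0.311, close to the .310 in the example
```

Note that the projection sits between the league mean and his lifetime mark, well below all three of his observed seasons, which is the whole point: with 300 AB's he never "returns" toward .375.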

Anyway, when we talk about regressing the various stats, how much to regress, why some stats get regressed more or less than others, etc., we have to think about WHY we regress sample stats (for purposes of estimating the "true" stats, which is the same as projecting a player's future stats). Again, I refer everyone to that article referenced in Clutch Hits on "regression to the mean". It excellently explains, among other things, WHY we regress sample measurements towards a particular mean (which is the same thing as why a re-measurement tends to regress toward a particular mean).

The principal reason for sample or measurement error in baseball, which "forces" us to regress, is the "gap" between the skill we are trying to measure and the method by which we attempt to measure that skill. The larger the gap, the greater the sample or measurement error and the larger our "regression coefficients", again, given the same sample size (remember that the size of the regression coefficients, or the "amount of regression", is a function of 2 things: the magnitude of measurement error, per unit of measurement, AND the size of our sample). If we want to measure the true speed of a player from home to first after a swing, we can time him, say, during batting practice. What are our potential measurement errors? Our watch or our clicking of the button on the watch may not be accurate; the weather conditions on the day of practice may be different from the average weather conditions in a real game; getting out of the box and running down to first in BP may not accurately reflect the conditions in a real game; etc. All in all, however, our measurement error is probably pretty darn small, compared to other traditional offensive stats. BTW, that's why you hear the adage "Speed never slumps." There is comparatively little fluctuation in the things in a game that reflect a player's speed.

What about BABIP? Well, certainly the overall talent of the pitchers a batter faces in a sample of AB's will create measurement error. Within that, so will the types of pitches the batter happens to receive in a sample of AB's (batter 1 may see lots of great pitches in 100 AB's, whereas batter 2 may see nothing but crap). And of course the most "important" or salient factor in terms of measurement error in BABIP (or BA) is exactly where the ball happens to be hit and whether it happens to be caught or not.

BB's and K's do not have the same magnitude of measurement error as BABIP or regular BA, because they are more like the speed thing. The principal thing that causes measurement error in a batter's BB or K rate is the actual sample of pitches the batter happens to see in any given sample of PA's. Since we are (presumably) trying to measure the BATTER'S skill in walking or not striking out, if a batter happens to have a sample of PA's where he gets mostly strikes or mostly balls, compared to the average frequency of balls and strikes, his sample BB and K rates will have lots of measurement error. The reason BB and K rates do not have as much measurement error as a BA-type stat is that they lack that extra "random" element of exactly where the ball happens to fall on the field...

1) Regress his 2002 stats towards the league mean?

You'd have to regress about one-third of the way. Not good, but not horrible.

2) Regress his 2000-2002 stats towards the 3-year league mean?

Pretty good. Probably regress about one-sixth of the way.

3) Regress his 2002 stats towards his own 3-year mean? (I know you don't like this one)

Ugh. No way!

4) Regress his 2002 stats towards some combination of his 3-year average and the league mean, as a "best guess" for his true mean?

No.

The best way is to take his last 3 years, individually, and "de-age" his stats. I.e., adjust each year towards a "common" age, like age 27. (E.g., his age-24 performance gets adjusted upwards a lot, his age-26 very little, his age-38 gets adjusted upwards tons.)

Now, weight each year, with slightly more weight to his 2002 season. (I like the 5/4/3 weighting principle, with 5 "weights" for 2002, and 3 for 2000.)

Then, combine these totals and regress towards the league mean about one-sixth of the way.

Then "age" him based on aging patterns for that age. (heavy downard adjustments for age 39, little for age 29) Even better would be to break down his walk, k, hr, etc, rates, and handle each one individually.

I suppose you should bring in the park as well, but unless it's Coors, don't bother. It won't change much, and the current park factors are not reliable as applied to individual players.
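The steps above can be sketched as a single function. This leaves out the de-aging and park steps and uses made-up rates; the 5/4/3 weights and the one-sixth regression come from the comments themselves.

```python
# Minimal sketch of the projection recipe above: weight the last three
# (notionally de-aged) seasons 5/4/3, then regress one-sixth of the way
# to the league mean. De-aging and park adjustments are omitted; the
# example rates and league mean are made up for illustration.
def project_rate(y2000, y2001, y2002, league_mean,
                 weights=(3, 4, 5), regress=1/6):
    w2000, w2001, w2002 = weights
    weighted = (y2000 * w2000 + y2001 * w2001 + y2002 * w2002) / sum(weights)
    # Regress one-sixth of the way toward the league mean.
    return weighted + regress * (league_mean - weighted)

# Example with made-up OBP-like rates:
print(round(project_rate(0.350, 0.360, 0.340, league_mean=0.330), 3))  # → 0.346
```

Because the three-year sample is fairly large, only a sixth of the distance to the league mean is given back, in contrast to the one-third regression a single season would need.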
