People have become much better in recent years at recognizing that where a player plays his home games can have a significant effect on his statistics. Coors Field has made this unavoidable by helping produce some ungodly numbers from players like Todd Helton, Larry Walker and Dante Bichette. It seems that when a player is evaluated, most everybody now makes some mention of where he plays and how they think that affects things.
But of course there are problems. One issue is whether parks affect different players in different ways. Another is whether to adjust all the individual component stats or just runs scored. Still another is whether to use single-year factors or factors based on three years' worth of data. This article will focus mostly on the last point and investigate it further.
If we think of a player's statistics as an insight into his playing abilities, then to get the best possible grasp of those abilities we have to separate the things in his statistics that clearly reflect some ability on his part from the things that represent external factors. So, theoretically, if a park distorts a player's statistics enough, we need to remove that distortion to get an accurate view of his abilities.
And so the idea of a park factor was created: put a player's performance in the context of where he played, and thereby get a clearer idea of how well or poorly he actually did. One way to do this is to look at a player's home/road splits (i.e., what he hit at home versus what he hit on the road). The major problem with this (though not the only one) is that the samples are too small to be significant, and they often fluctuate in bizarre ways (see Matt Williams in 1997).
So a better way, it was thought, would be to add up the whole team's hitting and pitching stats at home and on the road and adjust from there. This is how park factors are done today, though the process has become more precise over the years.
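As a rough sketch of the basic idea (real park factors add corrections for games played, innings, and league context, so treat this as illustrative only):

```python
def simple_park_factor(home_rate, road_rate):
    """Ratio of a team's combined home rate (hitters plus pitchers)
    to its road rate, scaled so that 100 means a neutral park.
    A bare-bones sketch; published factors are more refined."""
    return round(100 * home_rate / road_rate)

# A team whose hitters and pitchers combine to hit .294 at home
# (1504 for 5120) and .284 on the road (1418 for 4992):
print(simple_park_factor(1504 / 5120, 1418 / 4992))  # -> 103
```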
But the problem arose as to what data to use. One-year park factors are convenient, but as Keith Woolner has stated, "Most 1 year park factors are indistinguishable from chance, and therefore probably should not be used in sabermetric analysis." To illustrate what he means, consider the following:
Say we have a team; we'll call them the Troy Trojans. Say that during the year the Trojans' hitters and pitchers combined for 10,000 at bats and 2,750 hits, a .275 overall batting average. Now let's randomly rearrange all of the Trojans' at bats from the season. Once they're in some random order, call the first 5,000 at bats "Home" and the second 5,000 "Road." When I did this, the results I got were:
Troy Trojans     Home    Road
At Bats          5000    5000
Hits             1403    1347
Average          .281    .269
Park Factor: 104
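The shuffle described above is easy to reproduce (no seed is fixed here, so the exact split will differ from run to run and from the author's):

```python
import random

# A season's combined record: 2,750 hits in 10,000 at bats (.275).
at_bats = [1] * 2750 + [0] * 7250
random.shuffle(at_bats)

# Arbitrarily call the first half "Home" and the second half "Road".
home, road = at_bats[:5000], at_bats[5000:]
home_avg, road_avg = sum(home) / 5000, sum(road) / 5000
print(f"Home {home_avg:.3f}  Road {road_avg:.3f}  "
      f"PF {round(100 * home_avg / road_avg)}")
```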
The home park appears to have increased hits by 4 percent, but as we know, the sorting of at bats into home and road was random. We could just as well have split them by odd and even days of the month, or by prime-numbered days, or whatever, and gotten a comparable result purely by chance. So if we have:
Texas Rangers    Home    Road
At Bats          5120    4992
Hits             1504    1418
Average          .294    .284
Park Factor: 103
who is to say that's the result of the park as opposed to some meaningless random distribution that happened to occur that way?
So then, theoretically, using additional years' worth of park data should reduce this problem some (though even with three years, randomness is a concern). But then a different problem occurs. What about changes made to existing parks? What about brand new parks being built? Park factors are figured relatively: the factor compares a park to an "average" Major League park, and if that "average" park changes, doesn't that skew the factor? As Nelson Lu puts it, "by reintroducing the '99 and '98 data, you are adding more noise -- noise that's irrelevant to '00 -- than reducing noise."
As if this weren't enough of a problem, I did a short study of 200 players who switched teams between consecutive seasons, testing which is the better predictor of a player's production the next year: his production levels without adjusting for park, or his production levels after park adjustment. Whether I use three-year factors or one-year factors, the highest correlation between the two seasons always comes when the stats are not adjusted for park in any way. Any adjustment for park in either season reduces the correlation at least slightly, no matter how you look at it. (Note: one-year park factors did slightly better than three-year ones in this study.)
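The comparison in that study boils down to computing two correlations and seeing which is higher. A minimal sketch of the mechanics (the sample data below is made up purely for illustration, not from the study):

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (len(xs) * statistics.pstdev(xs) * statistics.pstdev(ys))

# Hypothetical year-1 averages, park factors, and year-2 averages.
year1 = [0.310, 0.275, 0.260, 0.255, 0.290]
factors = [1.04, 0.98, 1.00, 1.02, 0.96]
year2 = [0.295, 0.270, 0.265, 0.250, 0.285]

# Compare raw vs. park-adjusted year-1 stats as predictors of year 2.
year1_adj = [avg / f for avg, f in zip(year1, factors)]
print(pearson(year1, year2), pearson(year1_adj, year2))
```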
Now this study hardly covers all that could be studied on the subject, but the results are nevertheless contrary to what most of us would expect.
Why is this? Well, if you set a minimum deviation below which the numbers are left unadjusted, the correlations gradually increase as you raise that cutoff. It seems the problems are caused by the smaller adjustments, which get made when there really isn't any statistical justification for making them.
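One way to encode that finding (the 3-point cutoff here is a placeholder for illustration, not a value from the article):

```python
def adjust_if_significant(avg, park_factor, threshold=3):
    """Divide out the park factor only when it deviates from neutral
    (100) by more than `threshold` points; otherwise leave the raw
    average alone. The threshold value is illustrative."""
    if abs(park_factor - 100) <= threshold:
        return avg
    return avg * 100 / park_factor

print(adjust_if_significant(0.280, 102))  # within threshold: unchanged
print(adjust_if_significant(0.280, 110))  # deflate a strong hitters' park
```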
So I decided to try another method. Using the binomial distribution, I can test what the chances are that a team's home/road split could have occurred by chance. To spell out the process:
And that's it in a nutshell. I did this for a series of stats:
The stats were designed to remain independent of each other, so that one effect won't disrupt another. The triples stat is well constructed in principle, but it has inherent sample-size problems that make it difficult for its splits to ever be judged significant. The others are all sound, however, and I assume triples aren't a major concern for most people.
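Since the exact procedure isn't spelled out above, here is one plausible sketch of such a significance test, using a normal approximation to the binomial (the pooled-rate setup is my assumption, not necessarily the author's exact method):

```python
import math

def split_chance_pvalue(home_hits, home_ab, road_hits, road_ab):
    """Two-sided p-value: if home and road shared a single true hit
    rate (the pooled rate), how often would the home total deviate
    from expectation by at least this much? Normal approximation to
    the binomial, with a continuity correction."""
    p = (home_hits + road_hits) / (home_ab + road_ab)
    mean = home_ab * p
    sd = math.sqrt(home_ab * p * (1 - p))
    z = (abs(home_hits - mean) - 0.5) / sd
    return math.erfc(z / math.sqrt(2))

# The Rangers split shown earlier: a 103 park factor, yet the test
# finds a gap this size unremarkable (a large p-value).
print(round(split_chance_pvalue(1504, 5120, 1418, 4992), 2))
```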
The results for 2000 can be found here if you're interested in using them.