Baseball for the Thinking Fan

Transaction Oracle
— A Timely Look at Transactions as They Happen

Tuesday, October 05, 2010

ZiPS Percentiles 2010 - A Quick Review

With Colin Wyers taking a look at PECOTA percentiles, I decided to take a quick up-to-date look at how ZiPS percentiles fared in 2010.  This doesn't replace a player-by-player evaluation of the projection system; it's simply a test of whether upside and downside are being overestimated, underestimated, or estimated correctly as a group.

While ZiPS doesn't use percentiles in the same way, ZiPS can still be evaluated for how well it matched up with probabilities.  To do this, I used the most generic, easiest-to-use stat, OPS+, and compared how many regulars (300 PA or more) were projected to play to a certain level with how many actually did.  For pitchers, I used ERA+ and 300 BF.

The first figure, "PRED," is how many regulars should have beaten each threshold based on the probabilities given by ZiPS; "ACT" is how many actually did.

There are obviously more robust ways to do this (OMG TEH HETEROSKEDASTICITY~!), but I’m looking at a simple ballpark here.  As I’m not an impartial observer, if you would like to check the data this is derived from, please let me know.


HITTERS
            PRED    ACT
OPS+ >140     7%     8%
OPS+ >130    14%    14%
OPS+ >120    21%    25%
OPS+ >110    33%    38%
OPS+ >100    49%    55%
OPS+ >90     67%    71%
OPS+ >80     83%    87%

PITCHERS
            PRED    ACT
ERA+ >140     8%    11%
ERA+ >130    14%    19%
ERA+ >120    22%    24%
ERA+ >110    35%    35%
ERA+ >100    52%    49%
ERA+ >90     73%    69%
ERA+ >80     88%    81%
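The PRED/ACT comparison above can be sketched in a few lines: PRED is the average of each regular's ZiPS probability of clearing the threshold, and ACT is a simple count. A minimal sketch, with made-up per-player probabilities rather than actual ZiPS output:

```python
# Minimal sketch of the PRED/ACT comparison. The per-player
# probabilities below are invented for illustration, not ZiPS output.

def pred_vs_act(players, threshold):
    """players: list of (p_over_threshold, actual_stat) tuples."""
    n = len(players)
    pred = sum(p for p, _ in players) / n                   # expected share over
    act = sum(1 for _, s in players if s > threshold) / n   # observed share over
    return pred, act

# Three toy regulars with assumed chances of beating a 120 OPS+
sample = [(0.60, 131), (0.15, 104), (0.05, 122)]
pred, act = pred_vs_act(sample, 120)
print(f"PRED {pred:.0%}  ACT {act:.0%}")  # -> PRED 27%  ACT 67%
```

If ACT consistently exceeds PRED at every threshold, as in the hitter table, the error bars are too generous on the downside.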

 

Dan Szymborski Posted: October 05, 2010 at 08:06 PM | 11 comment(s)

Reader Comments and Retorts


   1. Dan Szymborski Posted: October 05, 2010 at 08:14 PM (#3655596)
If it isn't clear, the "expected" number over a certain threshold isn't the number of players that had a mean projection over X, but how many you would expect given the probabilities of that particular threshold as estimated by ZiPS. So, if ZiPS is too generous with the error bars, then it would project more players to have 140 OPS+ based on the probabilities of 140 OPS+ than in reality and vice-versa.
   2. Lassus Posted: October 05, 2010 at 08:47 PM (#3655635)
As a non-math guy, the thing that seems to jump out at me here is that for all of the hitting estimates, the average was better than surmised. This falls into line very similarly to the impression I have here of people talking about prospects, where no one anywhere will ever be an average or useful hitter in the majors.

Have you found in past years that what you show above is commonly the case for ZiPS, with the projected worse than the actual, or was that more the case for this year?

The pitcher PRED vs. ACT makes a nice little X-graphic as you move, which is also pretty interesting.
   3. Dan Szymborski Posted: October 05, 2010 at 09:08 PM (#3655674)
Another thing to note is that since, as a whole, the group got slightly less PA than I projected, there should be a little more variance if I reduced the PA that ZiPS was basing the range on. The median player with more than 300 PA got 528 PA rather than the 544 I projected, which makes a very slight difference that I didn't bother with.
   4. Athletic Supporter can feel the slow rot Posted: October 05, 2010 at 09:10 PM (#3655678)
Just a small note: Assuming you're summing PRED only for the players who were regulars this year, I think you'd expect ACT for positive things to be higher than PRED, as it is, due to selection bias (players performing better are more likely to get to 300 PA). I'm sure you (Szym) know this, just pointing it out to stave off critics (eyeballing it, the effect is probably about this magnitude, I imagine).
   5. Dan The Mediocre Posted: October 06, 2010 at 04:09 AM (#3655902)
So this shows that regulars did better compared to league average than you predicted. I'm not really sure of how this would be significant. A better way to do it would be to show how many in each range performed within 5 points of their predicted OPS+, how many were under that, and how many were over that.
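The within-5-points breakdown suggested here is straightforward to tally; a toy sketch with invented (projected, actual) OPS+ pairs rather than real projections:

```python
# Toy tally of the within-5-points breakdown suggested above, using
# invented (projected, actual) OPS+ pairs rather than real projections.
pairs = [(110, 113), (95, 88), (120, 126), (100, 101)]

within = sum(1 for p, a in pairs if abs(a - p) <= 5)
under  = sum(1 for p, a in pairs if a < p - 5)   # missed projection by >5
over   = sum(1 for p, a in pairs if a > p + 5)   # beat projection by >5
print(within, under, over)  # -> 2 1 1
```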
   6. AROM Posted: October 06, 2010 at 12:14 PM (#3655922)
Let me see if I understand. Joe Average has a mean projection of 100, with a 5% chance of beating 140. So he counts as .05 players in the 140+ bucket, repeat for all players and groups, right?
   7. bjhanke Posted: October 07, 2010 at 03:14 AM (#3657035)
Just as a quick and dirty, at least these percentages look reasonable. You'd expect, given that the assumed distribution resembles the right end of a normal curve, that there would be more people with high performances than low ones, as the lowest performances should cluster closer to the mean than the higher ones do. And sure enough, the percentage less than 80 OPS+ or ERA+, or 20 points below mean, is noticeably less than the percent above 120. In fact, it's right about the percentage above 130. There is also a concentration of percentages right between 90 and 120, which should be correct according to theory. So it passes the First Plausibility Test. Your percentages vary so little from actual that error is almost certainly due to chance.

In other words, your predictions seem to be doing a very good job of following both actual results and theoretical percents. I don't know about other systems' percentages, nor whether they are tainted by using them in the system development process, and it is, after all, a very quick and dirty, as you disclaimed. But for your system to come out clean by a quick and dirty that you do NOT use as a corrective when you're developing your methods is a very good sign. Nice work.

- Brock Hanke
   8. Russ Posted: October 07, 2010 at 01:43 PM (#3657250)
Dan, a quick and dirty way to possibly adjust for the selection bias is to weight the actuals by the inverse of the estimated probability that someone would have 300 at bats. An even dirtier (and potentially not useful, but very fast) way to do the weighting is to simply do a logistic regression predicting a binary response (over/under 300 at bats) with OPS+ as the only predictor. Then to get your "weighted" percentages, you would simply sort people's OPS+ values and normalized weights by OPS+ and do a cumsum over the weights going down.

This is super dirty, but it might be interesting to see if it helps the selection bias. If you email me a spreadsheet with three columns for each group (OPS+, PA, and predicted OPS+ for hitters; ERA+, BF, and predicted ERA+ for pitchers), I'll even do it for you. :-) Send it to my work email that you have for the FF league, I don't know what email I registered here with.
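The inverse-probability weighting Russ describes can be sketched as follows. The 300 PA probabilities here are assumed stand-ins for what a fitted logistic regression (OPS+ as the lone predictor) would produce, not real estimates:

```python
# Sketch of the inverse-probability weighting described above. The
# probabilities of reaching 300 PA are made-up stand-ins for fitted
# logistic-regression output, purely for illustration.

# (actual OPS+, assumed P(>= 300 PA)) for players who did qualify
qualified = [(131, 0.90), (104, 0.55), (122, 0.80), (88, 0.30)]

# Weight each qualifier by 1/p so he stands in for similar players who
# missed the 300 PA cut, then normalize the weights to sum to 1.
weights = [1.0 / p for _, p in qualified]
total = sum(weights)
weights = [w / total for w in weights]

def weighted_share_over(threshold):
    """Selection-adjusted share of regulars beating the threshold."""
    return sum(w for (ops, _), w in zip(qualified, weights) if ops > threshold)

unweighted = sum(1 for ops, _ in qualified if ops > 120) / len(qualified)
print(f"over 120: raw {unweighted:.0%}, weighted {weighted_share_over(120):.0%}")
# -> over 120: raw 50%, weighted 31%
```

Because low-OPS+ players are the ones least likely to reach 300 PA, the up-weighting pulls the over-threshold share down, which is the direction the selection-bias argument in comment 4 predicts.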
