Baseball Primer Newsblog— The Best News Links from the Baseball Newsstand
Wednesday, August 04, 2004
Steve Treder writes: The extreme focus on counting pitches in the modern era has not only meaningfully reduced the proportion of pitching that is performed by every team’s best pitchers—thus increasing the proportion pitched by the worst—it has done so while producing no noticeable reduction in pitching injuries.
Reader Comments and Retorts
1. Dr. Vaux Posted: August 04, 2004 at 08:00 AM (#775496) I've always thought that this was a troublesome argument, for a number of reasons:
1: Improvement in diagnoses means that more injuries are discovered, rather than misdiagnosed as decline, crappiness, "tired arm," etc.
2: Improved medicine means that fragile pitchers who would have been weeded out in previous eras can now survive beyond their first major injury to have a second, a third, and beyond.
3: Advanced training methods (and/or chemicals) are brought to bear; pitchers train more often, gain muscle mass, throw harder, and put more strain on their connective tissue (I'm not a doctor, so any MDs may feel free to dispute this one).
4: There are more pitchers on modern rosters than in the past; more bodies means more injuries.
Alright, Steve, maybe the "pressure" and "situational" stresses aren't different, but maybe more pitchers now throw stressful pitches like sliders and split-fingers, and throw them throughout their careers, than the greats did back then. Obviously, Carlton for one is an exception, and of course there were no split-fingers then, Roy Face forkballs excepted.
More muscle mass. I remember reading an old baseball book printed in the late sixties profiling some contemporary stars. In the article about Jim Maloney, IIRC, some trainers were quoted as saying that Maloney was inordinately muscular, and that it had a detrimental effect on his arm, led to more nagging injury and between starts pain. Of course now every pitcher is more muscular, whether naturally or not.
IIRC, the average fastball in the mid eighties was 88-89 mph. Isn't it more now? I can't remember my source for this.
Maybe the culture's different in this sense. Pitchers then could work their way through arm trouble, injury, and transformation. It was allowed and even encouraged. What I'm getting at is that Tanana, Sutton and Forsch, to name three innings-eaters, started as "stuff" fastball pitchers. Forsch wasn't a K master but the other two were. Yet they all lost velocity in mid-career, IIRC, which, I believe, would now result in surgery, release, or being forced into spotty middle-relief roles. Now, did they lose "stuff" because of abuse? And if so, did they maintain some sort of effectiveness ALSO because of abuse?
I'm not certain about any of this, and I'm not really awake yet, so it may not make sense, but, well, you know. Yeah.
First, one other change in the game has been in the length of time between innings. I'm not sure how this has affected pitchers. One could argue they have more time to rest, or one could argue that they have to work over a longer period of time, or one could argue their arms are more likely to "cool down" and need extra effort to throw in the next inning.
Second, there's this statement from the end of the article:
Pitchers get hurt a lot; they always have, and 15 years into the era of significantly reduced workloads, they still do.
This is, and has always been, the big area of dispute. It sure *seems* like pitchers get injured just as much now as in the bad old days, but I don't know of a study that confirms it.
The real issue is what Steve termed "cost-benefit" analysis. Here's how I would, at least, pose the question. It's indisputable that more innings from the best pitchers are more valuable. The question is by how much. Given that, how much of an increase in injury risk would be worth the increase in value?
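One way to make that cost-benefit question concrete is to frame it as an expected-value break-even. Every number below is a placeholder assumption for illustration, not anything taken from the article:

```python
# Hypothetical framing of the injury-risk question as a break-even point.
# Both run values are assumed purely for illustration.
runs_gained = 15           # extra ace innings replacing worst-reliever innings
runs_lost_if_injured = 60  # cost of losing the ace for a chunk of the season

# The added injury probability below which the extra workload is worth it:
break_even_risk = runs_gained / runs_lost_if_injured
print(break_even_risk)  # 0.25, i.e. up to a 25% added risk would break even
```

With different (and better-researched) run values the break-even point moves, but the shape of the question stays the same.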
That's a really interesting point about Tanana et al.
Given that shoulder/labrum surgeries seem to have such ridiculously low success rates, I wonder if it would be best to see if the guy can work through it. Pedro Martinez has allegedly been pitching with a partially torn labrum for three years now.
Just to add a couple of points onto what he said. First, adding up the number of injuries (or looking at the injury rate) ignores the pitcher's response to the usage regime in place. We know that relievers go all out since they know that they will be used for a shorter appearance than a starter. Similarly, it seems that a starter that knows he will only be asked to go 100 pitches will put more effort into each of those pitches, making injury more likely. Just comparing the injury rate ignores any endogenous response.
Second, anything unrelated to pitch counts that affects injury risks and varies across eras will lead to a bias in this type of comparison. For instance, how has amateur pitcher usage changed? If young kids are throwing year round now when they weren't before, this could increase their risk for injury later in their career. Without the current use of pitch counts in the majors then, we might see an explosion of injuries that we wouldn't have seen with similar workloads in past eras.
Could it be that Pitchers of Ye Olde Era threw 10-20% more pitches per season than starters of Ye Newe Era because they made 10-20% more starts? Someone can lead the league in starts these days with 35 or 36. Once upon a time, you needed 40 or more to lead the league.
Plus, how can we regard the very top starters as representative of the typical starter? That's crazy talk. All those guys who lead the league in pitches per year are freaks of nature and luck, and not representative of your typical starters.
Also, I don't trust the Pitch Count Estimator without proof.
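For reference, the basic Pitch Count Estimator as it is commonly cited (usually attributed to Tangotiger) is just a weighted count of plate appearances, strikeouts, and walks. The coefficients below are the commonly quoted values, not something derived or verified here:

```python
# The basic Pitch Count Estimator, as commonly cited: roughly 3.3 pitches per
# plate appearance, with extra weight for strikeouts and walks. Coefficients
# are the commonly quoted values, reproduced here for illustration only.
def estimated_pitches(pa: int, so: int, bb: int) -> float:
    return 3.3 * pa + 1.5 * so + 2.2 * bb

# Example: a complete game facing 36 batters, with 8 strikeouts and 2 walks.
print(round(estimated_pitches(pa=36, so=8, bb=2)))  # about 135
```

Whether those weights actually track real pitch counts across eras is exactly the kind of thing that would need proof.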
Further, I don't believe any big-league manager or GM cares so much about pitch counts in the way that the article presumes. The underlying premise that management uses a pitch count to regulate total pitches thrown in a season is just wrong. Managers and GMs are using pitch counts to monitor "efficiency" and pitch counts per inning and/or per start.
Odds are the addition of the pitch count to the box score hath wrought nothing as significant as the switch to the five-man rotation. Also there is the ever-more-frequent practice of letting your best starters skip starts to wait for inflammation to subside, or to give them rest, or to have them do "side sessions" while they iron out some mechanical problems. Look at the way the Yankees have handled Mike Mussina this year. Were top starters handled that way thirty years ago? One could argue that the five-man rotation itself has become a myth; most teams are six or seven deep in starters and are not afraid to work number six and number seven into the rotation if they (or some MRI machine) perceive a problem with one of the guys they really need to be healthy in the playoffs.
Say it's true that the top starters represent the typical top 100 starter for those seasons. We could test my argument there by answering these questions: How many starts, on average, did the top 100 starters make per year for the period, say, 1965-1975? How many starts, on average, did the top 100 starters make in the last ten years? My guess is the difference between the former and the latter accounts for the difference between pitch counts at the end of the season.
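The comparison proposed here is easy to sketch in code. The start totals below are invented (and n is 3 instead of 100 for readability); real numbers would have to come from something like the Lahman database:

```python
# Sketch of the proposed test: average the top-n start totals from each era.
# Start totals here are invented, and n=3 stands in for the top 100.
def avg_top_starts(starts, n):
    top = sorted(starts, reverse=True)[:n]
    return sum(top) / len(top)

old_era = [41, 40, 39, 34, 33]  # hypothetical 1965-1975 start leaders
new_era = [36, 35, 35, 33, 32]  # hypothetical recent start leaders

print(avg_top_starts(old_era, 3))  # 40.0
print(avg_top_starts(new_era, 3))
```

If the gap in average starts roughly matches the gap in end-of-season pitch totals, that would support the starts-not-stamina explanation.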
Then again, I'm a moron, and this one is over my head, so feel free to ignore me.
I mean, it seems to me that comparing his workload to the top workload per season isn't a very meaningful comparison, since that's not going to tell you how much different Hentgen's workload was in comparison to the average pitcher.
(Really all that the lists told me were that really great pitchers tend to be really durable)
The question shouldn't be "How do high pitch counts affect the great players of each year/era", but rather, "How do high pitch counts affect the average..."?
is relatively constant on a team per game basis. Obviously IP should be constant, so this is showing that there's not a significant difference in the combination of hits, walks, and strikeouts? From the article: With a fairly high rate of walks in the game today, and with both home runs and strikeouts at unprecedented levels.... I first passed this sentence without giving it a second thought, but how can that be true? Are singles somehow significantly down?
Ditto. Using statistical outliers to prove a point proves nothing other than that they are statistical outliers. The conclusions of this study aren't necessarily wrong, but the method is anecdotal.
I think what Steve's really saying is that veterans who have proven themselves to be durable already can be pushed harder than they currently are. I don't think this article proves it, but that's the thesis. This of course also raises the question of at which point in a pitcher's career has he proven durability.
Certainly they are in relative terms. Over the last century, HR, BB, and SO numbers have increased dramatically, while S, D, and T have declined.
The problem with pushing some guys harder is that a team has to keep all the guys on some kind of schedule. Sending one guy every fourth day will make it even harder to send the rest of the guys every fifth.
Another thing: a higher portion of today's MLB starters are foreign and pitch in winter leagues. How many Odalis Perezes were there in 1970? Who works harder, a guy who goes every fourth day for six months or a guy who goes every fifth day for eight months? People always talk about how the pitch counts don't include bullpen work, etc., but they also don't include winter-league work, and many starters today pitch for another team in the offseason.
1. I don't think it's impossible to run your Ace out there every four starts while keeping the other starters on a 5-day rotation; teams did that frequently in the 1940s-1960s and everyone accommodated the juggling. Keep in mind that the set 4-man rotation -- see the 1971 Orioles -- was an historical anomaly. This means that for most of baseball history, teams juggled the use of their 4th and 5th starters, and often their 3rd starters.
2. I don't think you have to give your Durable Ace© more starts to increase his effectiveness. I think that occasionally letting him go an extra inning or two -- when he's throwing well, of course -- will save you a few innings of bullpen usage, especially in the 7th and 8th, which can mean fewer innings from your worst relievers. Obviously, this doesn't give you as much optimal usage in theory as starting your Durable Ace© in 4 or 5 extra games per season. But it could be a first step.
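The bullpen savings from that first step are easy to put rough numbers on. Everything below is an assumption for illustration, not a claim about any real team:

```python
# Back-of-the-envelope estimate of bullpen innings saved by occasionally
# extending a durable ace. All inputs are assumptions for illustration.
starts = 34              # a full season of starts
share_extended = 0.5     # extend him in half his starts, when he's sharp
extra_innings_each = 1.5 # an inning or two extra per extended start

bullpen_innings_saved = starts * share_extended * extra_innings_each
print(bullpen_innings_saved)  # 25.5 innings taken away from the worst relievers
```

Twenty-five-odd innings shifted from the back of the bullpen to the ace is a modest but real gain, which is the point of calling it a first step.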
Pedantry alert!
You can't copyright a title.
Funny, I'm the guy who said that and I'm a Backlasher against Pitch Counts.
Great responses. My email box is filling up, too. Obviously there is a lot to say here on everyone's part.
First off: I will write a followup article for next week, expanding on the data presentation and (hopefully) addressing many of the well-taken points that are being made.
To quickly summarize a few things:
I think what Steve's really saying is that veterans who have proven themselves to be durable already can be pushed harder than they currently are.
Bingo. That's it exactly, and I'd go a little further and say that the intention of developing (many, not all) starters should be to build their durability up to the point of being pushed harder than they currently are. That was done throughout the history of baseball until about 15 years ago, and I am not at all convinced it can't or shouldn't be still.
What Loren says in #17 is right on the money, although I would advocate a few more starts for top starters than the 33-35 they get today. The lockstep 5-man rotation, which goes hand in hand with the overemphasis on limiting pitches, has delivered a very poor cost-benefit.
Are singles somehow significantly down?
Yes. Given that home runs are up and batting average is very close to its historical norm, it is simply a fact that there aren't as many singles hit nowadays as in many periods in the past.
Not enough time to really engage on this today. But thanks for all the great feedback, and rest assured that at least one article in followup is forthcoming!
He's been over 100 pitches in every start since the beginning of June (12 starts) and over 110 in 7 of them. He went 6 innings once in that stretch, and at least 7 innings in the other 11. There have been a couple of times where I thought Gardenhire could have gotten him out an inning earlier to save on workload, but generally it's been great usage, I think. He's been allowing so few baserunners that he simply hasn't had the "high stress" innings that some have worried about.
Obviously, a lot of it is that he has been so dominant in this stretch, but it seems pretty clear watching the games that there is some thought going into trying to build his durability, and also save the bullpen for the inevitable Kyle Lohse meltdowns.
The Twins have also been skipping the "5th starter" as often as they can over the past couple of months, though they are using him this week specifically because Brad Radke requested the extra day rest. 4 guys have 22 or 23 starts each, and everyone else has 15. I think both Radke and Santana will get to 35 starts this year, which would certainly be an improved use of resources.
On the other hand, many of the pitchers of yesteryear went on barnstorming tours in the offseason to earn a little extra cash.
There are a couple of points to be made about Steve's research. Both Drysdale (torn rotator cuff) and Whitey Ford (circulatory problems in his pitching arm) saw their careers curtailed with arm problems. Robin Roberts, after four straight years of heavy usage, suffered an immediate dropoff in 1956 and never really got back to where he was. Jim Bunning's last big year was (surprise) 1967; he was certainly quite ineffective when coming to Pittsburgh in 1968, and never regained his prior form, either. I'll admit that in none of these cases, other than Drysdale's, do we know for sure that there was an injury, or that in Ford's case the injury was related to his pitching - but I think it's a fair speculation. There's also the case of Herb Score, who people believe lost effectiveness because of the eye injury but who himself believed it was an arm injury suffered in 1959 that led to the demise of his career.
Really, we don't always know whether a pitcher's career was ended or shortened because of injury, unless it's widely reported like in the case of Koufax. It's certainly possible that pitchers like Robin Roberts and Jim Bunning who experienced *sudden* losses of effectiveness immediately after their seasons of heavy workloads were injured, but tried to work through it. I'm inclined to think that the only honest answer that we can give as to whether pitcher injuries have been affected by workload changes is "we don't know".
-- MWE
It's a judgment call, obviously, as to what constitutes an unusually early loss of effectiveness for a pitcher. But in my judgment, neither Drysdale's arm trouble at age 32 nor Ford's at age 37 fall into that category.
Drysdale chose to retire rather than undergo therapy and attempt to continue his pitching career, opting to begin a broadcasting career instead. Very likely today he would undergo surgery (the advances in sports medicine in the past 30 years have been stupendous) and then continue to pitch. But in any case Drysdale's career (he threw an estimated 52,586 pitches, and never was injured until 1969) is anything but a cautionary tale about the cause-and-effect of a heavy workload and arm trouble.
Robin Roberts, after four straight years of heavy usage, suffered an immediate dropoff in 1956 and never really got back to where he was.
Roberts' 1956 slump occurred after 6 straight years of extremely heavy work, not 4. And while it's obviously the case that he lost velocity and went into a mid-career decline, he only had one truly bad year (1961, at the age of 34), and following that rebounded and reeled off four more good years, remaining a very effective major league pitcher through the age of 38.
Jim Bunning's last big year was (surprise) 1967; he was certainly quite ineffective when coming to Pittsburgh in 1968, and never regained his prior form, either.
Bunning was 36 when coming to Pittsburgh in '68; it's hardly notable that a pitcher at that age goes into decline. What's far more remarkable about Bunning is the terrific peak he had from ages 32 through 35, despite having already thrown 3500-4100 pitches every year beginning at age 25.
There's also the case of Herb Score, who people believe lost effectiveness because of the eye injury but who himself believed it was an arm injury suffered in 1959 that led to the demise of his career.
Score absolutely hurt his arm. Whether the arm injury was related to the long layoff due to the eye injury will always be unknowable.
I'm inclined to think that the only honest answer that we can give as to whether pitcher injuries have been affected by workload changes is "we don't know".
Well, of course we don't. All we can ever do is apply deductive reasoning based on the best evidence available.
But here's what we do know, for certain: despite not having the benefit of modern sports medicine, the top pitchers in baseball before the late 1980s routinely threw significantly more pitches per season than modern aces do, and routinely threw significantly more pitches over the course of their careers.
That much is beyond question.
My followup article will provide more data on the pitches and innings of pitchers well beyond just the #1 in each season, to hopefully provide a more comprehensive look at the subject.
"If the "old school" wants to look back at the days they consider better than the game we see on the field today, they'll need to accept the results those days gave them. We remember the ones who survived -- Tom Seaver, Whitey Ford, Jim Palmer, Robin Roberts, Lefty Gomez, Bert Blyleven, Ted Lyons. And there are a few flameouts that never matched the success of their early 20's -- Dwight Gooden, Vida Blue, Denny McLain.
But there are other guys who didn't survive, lost to the game and conveniently forgotten. Pete Donahue, Ralph Branca, Gary Nolan, Dan Petry, John Rigney, Dave Rozema, Dean Chance, Russ Bauers, Bill Monbouquette, Mel Harder, Steve Hargan, Mike McCormick, Van Mungo among them. For the most part, we remember the survivors, and think they were representative of all players of the past.
They weren't. We remember them because they were exceptional."
It's one of the few smart things he's ever said. (The whole article is here: http://mlb.mlb.com/NASApp/mlb/mlb/news/mlb_perspectives.jsp?ymd=20040604&content_id=761173&vkey=perspectives&fext=.jsp) I wonder if he had help.
Steve Treder's had some great posts here, but his article is worse than Carroll's ramblings. Treder offers up anecdote disguised as data, conclusion disguised as fact, and junk dressed up like science. I'll give credit for a well-written piece, but even Carroll knows better than to use Don Malcolm and a guy named "trevise" for data. He hides behind Tom House, Jim Andrews, and Mike Marshall, but I'm much more willing to listen to those guys than I am to a crank and a pseudonym.
I can only hope part 2 lives up to Treder's previous output. This one is worthy only of Hardball Times.
I'm not sure how this is indisputably true. If you want to say that Chris Reitsma is inferior to Russ Ortiz and John Smoltz, then there is some truth to this statement. However, I don't see how you can say that a rested Chris Reitsma is inferior to a tired Russ Ortiz or John Smoltz. While "indisputable" sounds nice, I don't think it's the appropriate word.
I would work on instituting a conditioning and pitcher-use program throughout my organization that would strive to develop starting pitchers capable of throwing at least 10% more pitches per season than the modern norm. I'm confident that in the long run such a program would provide a significant competitive advantage, without producing greater injury rates than are occurring now.
I'm glad you're confident, but this is not prudent, any more so than strictly living by a pitch count. Setting minimum duration standards (be they innings, batters, or pitches) is no more prudent than setting maximum standards. In fact, the minimum standards are more dangerous, because you don't just run experiments on million-dollar investments, and you don't just run experiments to test a statistical theory when the cost is damaging a human being's health.
Maybe Ferris O'Dowd did this in the 1950s, but hopefully we've grown to be more humane, and we certainly have a higher monetary cost.
At the developmental level there is no standard; you aren't developing starters, ace starters, closers, or ACE RELIEVERS. You are developing pitchers, where the skill set of the individual is used to maximize the performance of the franchise.
What today's model has done is allow you to retrieve value from pitchers who in years past would have been thrown away or injured. You take a person who, through ability, physiology, or injury, can only contribute x; you find x; and then you use that x to your advantage. You don't just retain the few that can fill the roles that you long for.
To this extent, it's useless to try to develop any type of single monitoring metric. Some pitchers will have durational problems because of their musculoskeletal structure. Others will have impact-stress problems due to the types of stresses they encounter in certain situations (e.g., Retardo's example of the forkballers); others will have intermediate levels of stress because of the kinesiology of their delivery (e.g., Pete Smith). The first you can monitor with pitch count numbers. The second is more situation-dependent, i.e., don't put them in a situation where they must utilize their high-stress pitches often. The third is intermediate durations, i.e., don't subject them to long innings. For all of them, you should be monitoring for aberrant stresses caused by fatigue or other injuries.
I doubt you'll ever have an uber-stat or collection of alphabet-soup stats that you can plug into a spreadsheet and reach definitive conclusions. Because of the variation in style, delivery, and the patterns of recurrence that are taught at a team level, the best way to monitor stress is for someone familiar with the game, familiar with mechanics, and with specific knowledge of the resource to make the decision. If you have to create the ultimo-model, then you're going to find it with scanning equipment and stress-simulation equipment, not with numbers you can just add and subtract.
At the end of the day, all of that historical adding of numbers is nice, but I'm not sure it's probative. Because of the nature of the stresses, there will always be a minimum level of injury. More important, we should find that we tend toward an equilibrium of injury. That is, we are finding new resources that previously would be discarded. The new resources take the place of resources that may have strong durability, but decreased effectiveness. This should increase the chance of injury. The ability to come back from injury with some weakened musculoskeletal parts will also increase this number.
Durability is important in how it affects roster management. There is a tolerable amount of injury a roster can withstand without decreasing the team effectiveness. A team will likely try to reach this equilibrium point by selecting different resources.
If you truly chart injuries, I'd be willing to bet you can interpolate a line through the ages that is going to have a fairly flat slope near 0.
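That "flat slope" bet can be sketched as an ordinary least-squares fit of injuries per season. The injury counts below are invented purely to show the mechanics; the actual claim is about what real charted data would look like:

```python
# Sketch of the "flat slope" claim: fit a least-squares line to injuries per
# season. The injury counts here are hypothetical, for illustration only.
def ols_slope(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

seasons = list(range(1960, 2005, 5))  # 1960, 1965, ..., 2000
injuries = [102, 99, 105, 101, 98, 103, 100, 104, 101]  # hypothetical counts

print(f"{ols_slope(seasons, injuries):.3f} injuries per year")  # 0.010
```

A slope that close to zero is what the equilibrium argument above predicts: injury levels that bounce around a floor rather than trend with usage rules.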
FWIW, pitch counts aren't that new of a creation. A while ago, I read some article that indicated Dierker was on a pitch count when he came up to the bigs.
I think so. I suspect that - on balance - teams are getting at least as much total value out of all of their pitchers as they did back in the '40s and '50s. They may be getting less value out of their front-end pitchers, but countering that loss by getting more value out of the bottom of the staff - with the (possible) added advantage of having those front-end pitchers have longer careers in terms of the number of years in which they can contribute at or near the top of their games.
James noted back when he was still writing Abstracts that the trend over time has been for the workload of top pitchers to decrease. I don't think that teams would have moved in that direction if it weren't in their own best interests to do so; I don't believe that over the long run teams will deliberately choose a sub-optimal strategy for resource utilization.
I'm wondering, also, if there's been a change in the available talent mix. Earl Weaver always felt that it was easier to find hitters who could contribute than it was to find pitchers, and he always built his teams around a relatively few strong front-line pitchers and a lot of interchangeable parts among the position players. It's possible that when Earl was managing there were, in fact, more hitters that could contribute - and that in today's talent pool, it's easier to find pitchers who can contribute in some role, and harder to find hitters, and for that reason, the paradigm has shifted toward carrying and using more pitchers.
-- MWE
I agree that we'll never know and probably the best recourse is Backlasher's, because causes of injury are so varied, even individual. In short IMO my typical position of Anti-Unified Field Theories applies here as well.
"RETARDO,
That's a really interesting point about Tanana et al."
Matt Morris is going through this right now. There's something wrong with him physically, I have no doubt. Maybe he was abused too much early in his career. But whatever repair surgery has given him, it's obvious to me that it's worn off and now he's going through a Sutton/Tanana stage where he's learning to pitch and not rely on "stuff" that he no longer has.
Incidentally, this relates to my beef with Received Opinion that RH starters automatically suck when their K rate goes down. I believe that most merely don't get the opportunity (or enough of one) to transform from power K pitchers into the late-career junkball usefulness disparaged by so many.
An interesting observation that I had not considered. Obviously I do not know conclusively, but you can find evidence for this proposition. It depends on how much we choose to split hairs on harder or easier. I would opine that it is easier to find the pitchers, and it's only harder to find the hitters when considered relative to the ease of finding the pitchers.
For instance, if you establish a pitching staff of 10 men or fewer, virtually everyone you select is going to have to project to at least 90 innings. Obviously the players you plan to give starts will have to project higher. Even though you won't use all pitchers for 90 innings, the roster depth is thinned to the point that you have to have your lesser pitchers project to that amount to stand in for possible injury.
I think today, you can project most of your bullpen to 60 innings, and maybe 1 or 2 guys to 80 innings. This opens up a class of contributors that may have been too fragile in the old model. Thus your supply pool is increased.
Interesting thoughts even if my take turns out to be incorrect.
I think because there are more RH pitchers this could be true. If your guy loses 5-7 MPH, you probably have a replacement. The crafty junkballing lefty doesn't have as many people to push him out.
Interesting thought about the K rate. I always presumed there was wisdom in this James thought, but that it was secondary wisdom. Namely, that if someone has lost enough to where they don't get their K's they are also leaving meat pitches in the zone. I guess this depends on whether you think the pitcher has an effect on hits for balls in play though.
Jeebus backlasher, you really are a piece of work. Nowhere did Treder claim anything of the sort; he merely presented data, and inferred (correctly) that modern starters are being used (for whatever reason) suboptimally.
Do you have a problem with that thesis? Or are you merely picking a fight with a poster with whom you have a well-known animus? If it's the former, attack it on its merits, not with some ill-perceived backlasher boilerplate harangue.
Jeebus backlasher, you really are a piece of work. Nowhere did Treder claim anything of the sort; he merely presented data, and inferred (correctly) that modern starters are being used (for whatever reason) suboptimally.
Do you have a problem with that thesis? Or are you merely picking a fight with a poster with whom you have a well-known animus? If it's the former, attack it on its merits, not with some ill-perceived backlasher boilerplate harangue.
I do, as well as with the data that is used to arrive at the thesis.
Or are you merely picking a fight with a poster whth whom you have a well known animus?
No Steve doesn't talk to me at all.
If it's the former, attack it on its merits, not with some ill-perceived backlasher boilerplate harangue.
It was discussed on its merits.
In fairness to Treder, Bill James and Rob Neyer don't suffer from credibility problems. A perusal of Dr. Mike Marshall's website makes it pretty clear that he thinks Carroll is clueless. Of course Marshall thinks everyone else is clueless and he may be right, for all I know.
I doubt you'll ever have an uber-stat or collection of alphabet-soup stats that you can plug into a spreadsheet and reach definitive conclusions.
This may be a semantic point but... if you mean we won't be able to parse the Lahman database and come up with Jason Schmidt's maximum pitch count, then I agree. But a bio-mechanical model with a thousand variables is still a model.
I don't think that teams would have moved in that direction if it weren't in their own best interests to do so; I don't believe that over the long run teams will deliberately choose a sub-optimal strategy for resource utilization.
The way Barry Bonds has been pitched to, or really not pitched to, suggests many of the men running baseball teams are risk-averse to the point of hurting their teams. The way McClatchy runs the Pirates is another example. Not taking a calculated risk with the star pitcher's health is a case of the agency problem, where sometimes the manager's desire to keep his job (or not be chased out of town by a torch-bearing mob) overrides the good of the team.
That said, this talk of cost-benefit analysis leaves me cold. These aren't just VORP machines, churning out $2.5 million for every ten runs saved. I wouldn't want to be the guy who gets to tell Gary Nolan or Jim Bouton, "Sorry we ruined your arm, kid, but it was a calculated risk." I find a sub-optimal risk-averse strategy quite understandable.
They just transform well to the spreadsheet, and their relationships are much more complex than arithmetic operations.
I find a sub-optimal risk-averse strategy quite understandable.
I agree. Steve once proposed that the answer to the steroid problem was to let this generation be the guinea pigs for adverse effects. That's almost Nieporentian.
That's a little too Panglossian for me, Mike. Teams have access to different information than we do, but they don't seem to have a very good handle on the pitcher abuse issue either. Teams can't "deliberately" make optimal or suboptimal decisions unless they know which is which. Now, are you arguing that in the "marketplace of ideas," good strategies will drive out bad, even if teams don't consciously know which is which? I can buy that, but I think the process is going to be much slower in baseball than in the real world, because baseball is a closed system with no competition.
Besides, it assumes that your/our definition of "optimal" comports with theirs. Are teams trying to maximize the career value of pitchers? I don't think so. For instance, it seems to me that all pitcher usage trends in the last decade or two have been not towards "optimal" usage, but towards regular usage. That is, towards pitchers filling more and more specific, defined roles with as little variation in usage as possible. For instance, as others have noted, in the Old Days starters having bad games would be yanked much earlier than they are today, but when they had good games they'd remain in longer. And of course for relievers, we've seen the development of closer, setup man, loogy, etc.
Now, that may be "optimal" if you're the manager trying to narrow down your possible decisions, but that's a different subject. It's _possible_ that this regularization of roles _also_ happens to maximize pitcher career value -- but that would be awfully coincidental, don't you think? And maximizing team performance within a season is yet a third variable; I don't see any reason why the optimal use of pitchers for that purpose would be the same as for each of the other two purposes I identify.
There's no alternative. A "sub-optimal risk averse strategy" is still a calculated risk. It's just a slightly different calculation, and slightly less risk. You may put your thumb on the scale when doing your cost-benefit analysis, but there's no way around a cost-benefit analysis.
Anyway, what about "ruined his arm"? The whole point of his arm is to pitch; if you don't use it, what are you saving it for? It's not like the arm is going to fall off if overused; it just won't be suited for pitching anymore.
And even if your comments were valid wrt the 1960s, this isn't the 1960s. Pitchers -- players -- have choices. They have agents, they have free agency. If they don't like how a team treats them, they can go to a team that treats them better. They have access -- paid for by the team -- to top medical care.
The trends don't always move in the same direction, especially at the level of individual teams. One of the things I've discovered in participating in the Hall of Merit is the different approaches teams took. The Cubs from about 1906 to about 1912, and the Pirates of about the same time or a little later used their top pitchers less than the other teams of their time, and used more pitchers - at least, more pitchers at the 150-200 IP level - than the other teams. The thing about it is that these were two fabulously successful teams. The Cubs did have Three-Finger Brown, and the Pirates had a succession of near-great pitchers: Willis, Leever, Phillippe, Adams, Cooper. But each of these teams - especially the Cubs - also had numerous other pitchers with brief careers, or who had little success with other teams but who for a short time were as good as anyone else on the staff. Go look at the team pages on bbref for those teams and see for yourself how big the gap between the #1 pitcher and the #5 pitcher was.
The lesson seemed to be: if you spread the load evenly among 5 or more pitchers (and if you back them up with a great defense), then you get so much more value out of your second-line pitchers that it compensates for using your ace or aces less. The lesson wasn't widely heeded and certainly wasn't generally adopted.
I don't know, maybe some selfish goal like picking up your child without having pain, or being able to type on baseball primer without pain, or being able to wake up in the morning without pain.
If they don't like how a team treats them, they can go to a team that treats them better
Not universally true. But they do have choices, they could quit baseball, or go play for the St. Paul Saints.
I think this is almost certainly the #1 goal for the major-league manager.
For instance, it seems to me that all pitcher usage trends in the last decade or two have been not towards "optimal" usage, but towards regular usage.
...which may also be optimal usage of the staff, in terms of maximizing team performance over the season. If it weren't close to being optimal in the current game, I think you would see teams trying other things to move away from the basic usage model. But most recent teams that have tried to do something else have abandoned the experiment fairly quickly, almost always because they do *not* see the results for which they were hoping.
-- MWE
This is an interesting possibility, Mike. However, I'd find it more compelling if there were some evidence in support of it.
It's obviously true that teams have striven to achieve more value out of the bottom of the staff; the development of ever-more specialized niche roles in the bullpen is nothing if not the attempt to derive more value. But it isn't at all clear that the attempt has had much success on balance, particularly after factoring in the issue of the increase in pitching staff size, a choice which cannot possibly have anything but a deleterious effect on the team's flexibility in dealing with the other eight positions on the field.
It's possible that when Earl was managing there were, in fact, more hitters that could contribute - and that in today's talent pool, it's easier to find pitchers who can contribute in some role, and harder to find hitters, and for that reason, the paradigm has shifted toward carrying and using more pitchers.
Again, an intriguing speculation, but another one for which I've never seen any evidence. Weaver was managing only 25 years ago; I don't know why or how it would be that the talent pool would make a meaningful shift in that short timeframe to produce fewer good position players and more good pitchers.
If it weren't close to being optimal in the current game, I think you would see teams trying other things to move away from the basic usage model. But most recent teams that have tried to do something else have abandoned the experiment fairly quickly, almost always because they do *not* see the results for which they were hoping.
Well, maybe, sure. But I guess, Mike, you seem to have more faith in the capacity of MLB managers and front offices to develop and execute optimal player usage models than I do. I guess I'm a bit more cynical, and I think that MLB managers and front offices are a lot more like fallible human beings in most every other endeavor, in which issues like fashion, stubbornness, prejudice, careerism, risk aversion, blind faith, and plain old ineptitude manifest themselves in far greater magnitude than we would often care to admit.
In the long run, does the broad general overall model of how baseball teams are operated tend to evolve in the direction of greater efficacy? Sure, I think it does. But that doesn't mean that every major choice that typifies every era is the smartest possible one. Baseball history is filled with fits and starts and reverses in tactical direction; witness the waxing and waning of the popularity of the stolen base, for just one example.
To say that the model of pitching staff usage that has generally prevailed for the past 15 years or so hasn't been optimal is not to say that every model that preceded it was, or that this one is egregiously wrong-headed. But I do think that the preponderance of the information available to us as armchair analysts pretty clearly suggests that the flaws in this model are rather severe, and therefore it too shall pass in time. Forced to predict, I would say that within 10 years or so, pitching staff usage (particularly that of the bullpen) will have mutated in some significant manner yet again, and we will look back on the 1990-2005 period and say, as we can and do say about so many things regarding so many other eras, "What were they thinking?"
OCF,
I wish threads were easier to find. There was a recent thread about bullpen utilization where points close to your conclusion were discussed for more recent teams. I think you would find that information interesting. Emeigh made some good points on that thread that Treder has yet to respond to.
That doesn't do any good to a young pitcher who's looking at 6 years before he can be a free agent.
One explanation might be the growth of non-wood bats at all levels of the game outside of MLB. Worth came out with the first aluminum bat in 1970, but it wasn't until Easton came out with a better grade bat in the late '70s that aluminum bats took off.
I don't think there's been any systematic research done on this (it would be an interesting topic), but there are people who think that aluminum bats retarded the development of hitting skills needed to succeed when the bats are made of wood, because hitters can still drive the ball even without making consistently good contact and are therefore less likely to identify and correct flaws in their swings. Some of that may be (probably is) old-fogeyism, but I don't think we should dismiss that possibility out of hand. As a corollary to that, pitchers, in order to be successful in an aluminum-bat environment, have to work harder at developing their pitches, and are therefore *more* likely to have developed the skills they need to succeed in the majors.
Looking at the minor leagues right now, it certainly seems to me that there are fewer guys who are good bets to develop into impact hitters than I can remember. There are maybe five guys total who look like can't-miss hitting prospects. Meanwhile, it seems like every team has two or three pitching prospects who look like reasonable bets to develop into solid rotation starters, and more than a handful who look like they could be something special (keeping in mind that projecting pitching prospects is difficult at best).
I readily admit this is speculative, but I think there's enough evidence so that it's a reasonable speculation.
But I do think that the preponderance of the information available to us as armchair analysts pretty clearly suggests that the flaws in this model are rather severe, and therefore it too shall pass in time.
Where I'm having trouble is that, while I see the flaws in this particular model, I don't see anything that suggests that the flaws in this particular model are any more severe than the flaws in the model used in the '50s and '60s, or the model used in the '20s and '30s. If anything, my reaction is that this model is "less" flawed, because (as OCF suggested in 42) there seems to be real value to be gained by getting more out of the back end of the staff while giving up something from the front end.
-- MWE
I ran a li'l study using the Lahman db; here's what I did:
I used as my sample all pitchers born between 1934 and 1967 who started at least 30 games in a single season before their age-26 seasons.
That gave me about 290 pitchers, from Jim Abbott, who started 33 games at age 22, to Pat Zachry, who started 31 games at age 25.
Then I grouped the pitchers together by year of birth, averaged their career lengths, and tried to see if there was any relationship - i.e., whether, as time has gone by, pitchers have had longer or shorter careers.
Here's what I found: For the pitchers in my sample, their careers have, indeed, become shorter. Fitting a line to the data gave this equation:
Career Length (in GS) = 283 - 1.8 * (year born - 1934)
The data vary so widely that this is hardly conclusive; the r^2 for this equation is 0.09.
But, at the least, I think it shows that there has been no dramatic or noticeable positive change in career length, at least for the pitchers I chose for this sample.
Anybody have any comments, questions, suggestions, or improvements? All are welcomed!
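For anyone who wants to replicate the study, here's a minimal sketch of the fitting step described above: an ordinary least-squares line through (birth year, average career GS) points, plus the r^2 calculation. The data below are synthetic, generated to follow the reported trend exactly for illustration; the real study pulled the per-pitcher samples from the Lahman database.

```python
# Sketch of the study's fitting step. Data are synthetic, not from Lahman.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b, r2)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                      # slope: change in career GS per birth year
    a = my - b * mx                    # intercept at x = 0 (i.e., born 1934)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot if ss_tot else 0.0
    return a, b, r2

# Hypothetical per-birth-year averages following the reported equation exactly
# (in the real data the scatter is large, hence the reported r^2 of 0.09):
years = list(range(1934, 1968))
avg_gs = [283 - 1.8 * (y - 1934) for y in years]

a, b, r2 = fit_line([y - 1934 for y in years], avg_gs)
print(round(a), round(b, 2), round(r2, 2))  # 283 -1.8 1.0
```

With the noisy real-world averages the same routine would return the reported slope of about -1.8 starts per birth year but a much lower r^2.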
I readily admit this is speculative, but I think there's enough evidence so that it's a reasonable speculation.
It's certainly that.
Let's assume it's true, that good-hitting position players are a more scarce resource than in the past, and good pitchers are more abundant. If that's the case, it would seem that the competitive advantage to be gained in maximizing flexible, platoon-efficient offensive alignments -- which is significantly enabled with a 15-position-player roster, rather than a 14, 13, or 12 -- would be greater than ever before.
If that's true, then the time would seem to be ripe these days for some team to come along and exploit the window of opportunity by doing something differently than their competitors before the competitors change as well. This is the manner in which every tactical innovation in history has taken place, including the 5-man rotation, the closer, the fireman, platooning, and everything else. Orthodoxy provides the best setting for the innovator to seize an advantage, however small or temporary.
Interesting stuff.
Which definitely conforms to my own admittedly unscientific observation of pitching careers over the past 40 years. Bravo, bob mong, for attempting to address the question objectively and quantitatively.
What's striking in its absence from Jazayerli & Woolner's rebuttal to James in The Guide to Pitchers is data of any sort dealing with change in injury rates over the past 10, 20, or 30 years. One could easily disprove the position of those of us who question the wisdom of the empirical reduction in ace pitcher workload limits by demonstrating that there has, in fact, been a noticeable reduction in pitcher injury rates overall, or ace pitcher injury rates in particular. But no one, not J&W, not Will Carroll, no one that I'm aware of has discovered and presented such data. It's my very strong hypothesis that it doesn't exist: that once one factors out all the noise created by the great advances in sports medicine and the far greater eagerness of teams to make use of the DL, the general occurrence of injuries to pitchers hasn't meaningfully changed at all.
Of course I don't know this for certain, but it is my assessment of the situation. J&W, who are extremely sharp and capable analysts, and who I am certain have studied this issue in far greater depth than me, only offer this in their "Response in Defense of PAP":
"In the end, we feel that the notion that lowering pitch counts can reduce injury risk will be vindicated [my emphasis] in the clearest way possible: on the playing field."
They go on to note that pitch count limits in MLB have gone down further still within the past 3 years than they were even then, and this "revolution in the management of starting pitchers is underway, and the early signs suggest that the revolution may well lead to fewer injuries."
In other words, despite the fact that pitch loads have been reduced for starting pitchers for the past 15-20 years, and no one has yet noticed any meaningful change in the rate of injury occurrence, the further reduction in pitch loads that has occurred within the past 3 years "may well lead to further injuries."
Well, of course it might. But there is no historical data brought to bear by J&W that supports that prediction.
Oops! Make that "fewer injuries."
Thanks, Dr. Freud. :-)
The fact that career "length" hasn't changed doesn't mean that the "value" of those careers hasn't changed. Pitcher A may have pitched 240 games by starting 40 a season over six seasons, while pitcher B got there by starting 30 a season over eight seasons. There are two seasons where pitcher B was still pitching while pitcher A's team needed to find a replacement - was that replacement worth more, or less, than the value gained by the extra 10 starts/season that pitcher A made during the first six seasons?
Teams aren't generally trying to leverage the total number of games pitched; they're trying to leverage the value of those efforts. The fact that pitchers today are pitching no longer, on balance, than pitchers of 20-30 years ago is an interesting fact, but not especially relevant to me. What is relevant is whether the changes in pitcher workloads have allowed teams to extract more value from their pitchers - not just the front-line pitchers, but *all* of their pitchers. Workload reductions may not reduce the rate of injury as a whole - although as I indicated earlier we probably can't be sure of that, because we don't always know whether careers were ended or severely curtailed due to injuries that went unrecorded - but they may allow teams to spread the pitcher's value out so that they get better utilization of it over the long haul. And that's really what I think they're hoping to accomplish.
-- MWE
You're measuring career length in games started, and pitchers today start fewer games per season than in the past, so careers are not necessarily getting shorter. Could be there's no real change in career length at all.
Well, geez, Mike. Of course that's what they're hoping to accomplish. There's no reason to doubt that.
The question is whether they have or not. And if they have, the next question becomes whether in the bargain, what they've lost by working their best pitchers in fewer innings per season (and thus, by definition, using less accomplished pitchers in more innings per season) outweighs the gain.
I'm skeptical on both questions.
Assuming that total career-length hasn't changed in the last 30 years, as I tentatively showed in post 49, you could reach the conclusion that all the advanced technology, reduced pitch-counts, etc. haven't done a darn thing.
But an alternative conclusion could be that the practice of teams shutting down pitchers so quickly at the first hint of arm trouble has two, offsetting, effects: (1) It prolongs their career in terms of years, and (2) it shortens their career in terms of GS
Here's a made-up example:
Pitcher, debuting circa 1960, year-by-year GS:
30, 35, 35, 35 (arm pain begins, pitches through it), 35 (more pain, pitches through), 30 (more pain, misses a couple starts), 25 (finally hits DL), 35 (pitches through the pain), 10 (injured arm, career over).
270 total starts over 9 years.
Pitcher, debuting circa 1985, year-by-year GS:
26, 32, 32, 20 (arm pain begins, goes onto DL), 25 (more time on DL), 5 (shut down for the season with surgery), 15, 25, 25, 20, 15, 10 (final injury, career over).
250 total starts over 12 years.
So in my hypothetical, completely made up example, the pitcher who debuts in 1985 gets 3 more years by aggressive use of the DL and taking most of a year off for advanced surgery, but he loses so many starts in the process that he makes fewer starts than the hypothetical 1960-debut pitcher who pitches through the pain and blows his arm out.
I have no idea whether this is the case or not; I'm just throwing it out there.
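The arithmetic of the two hypothetical careers can be checked in a few lines; the season-by-season start counts below are exactly the made-up numbers from the example.

```python
# The two hypothetical careers above, as starts per season (made-up numbers).
pitcher_1960 = [30, 35, 35, 35, 35, 30, 25, 35, 10]
pitcher_1985 = [26, 32, 32, 20, 25, 5, 15, 25, 25, 20, 15, 10]

for label, gs in [("1960 debut", pitcher_1960), ("1985 debut", pitcher_1985)]:
    print(f"{label}: {sum(gs)} starts over {len(gs)} seasons")
# 1960 debut: 270 starts over 9 seasons
# 1985 debut: 250 starts over 12 seasons
```

So the 1985-debut pitcher gains three seasons of career length but loses twenty starts, which is the offsetting effect the example is meant to illustrate.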
And the answer to that question depends on whether they have squeezed extra years out of all available pitchers. I started doing a Lahman query last night, but it was 3AM; I'll try to replicate it soon. What I would measure is career length in years, to see, for any pitcher who's not just a cup-of-coffee kid (e.g. at least three years in the majors), whether we are increasing the career length of pitchers in multiple different classes.
what they've lost by working their best pitchers in fewer innings per season (and thus, by definition, using less accomplished pitchers in more innings per season) outweighs the gain.
You keep missing the picture on this one. Showing the workload of Mike Marshall does not imply that other pitchers can have this workload. Showing the decrease in workload for today's pitchers doesn't imply that today's pitchers, given the extra workload, will perform with the same rate output as their rate output in their actual workload.
You are making a value judgment that Pedro Martinez is always better than Mike Timlin. There are game circumstances where this is not true.
I'll give you credit, you are right about what to measure; you're just going about the measurement in the wrong way.
That's as may be; I'm not convinced either way myself. But I don't think that by just looking at pitcher injury rates and career lengths, you can legitimately draw a conclusion that because those haven't changed much, current usage patterns are severely flawed and teams should go back to working their best pitchers more often. That choice is based on the assumption that if you add more innings to a pitcher's workload per season he not only won't be any more likely to get hurt but he *also* will be able to sustain a level of performance that is higher than the likely replacement pitchers over those extra innings, most of which he will be pitching when he is tiring.
-- MWE
..which is exactly what I was trying to say, only better.
-- MWE
I think that more starts are more valuable than fewer starts. I think that is the common-sense, prima facie assumption we should start with. If you want to argue that fewer overall starts spread over more years is more valuable than more overall starts over fewer years, then the burden of proof is on you.
And these days, with free agency prompting more player movement (at least more that isn't controlled directly by the teams), it seems like the value of a player having a 14 year, 25-starts-per-year career vs. a 10 year, 35 starts-per-year career is negligible - except, of course, for the pitcher in question :)
Well, obviously this is the gist of the issue that we see differently. I think that such a conclusion can be drawn, and I draw it.
Keep in mind that I base my opinion not only on the expectation that the difference in performance between the incremental innings thrown by a team's best pitchers (both starters and relievers) is meaningfully better than that of their replacements, but also on the related issue that concentrating more innings in fewer pitchers allows a team the choice of carrying a smaller pitching staff. The turning over of a roster spot or two for the use of the rest of the team is a non-trivial factor that is all too often overlooked in the consideration of this question.
I don't think anybody who watched game seven last year could argue with that.
But I don't think that's necessarily the judgment. Assuming Timlin is our example of a quality reliever, then he'll be getting his innings no matter what. But by using him to relieve Pedro, he's less available on other days, such as when Derek Lowe is starting, and then you need to use Curt Leskanic to relieve him. (Players picked just for the sake of example based on 2004 performances.)
You can't only shift starters' innings to good relievers; you're also shifting them to bad relievers. If you could manage to do only the former, then of course it would make sense to do so. But unless good relievers are really easy to find, you're not going to be able to do that.
Yes, and, of course, the obviously interrelated usage of the 5-man starting rotation shifts innings from your best starters to your worst.
When you consider leverage, is it? Where are those incremental innings that are shifted to the lesser pitchers coming from?
In the case of Bobby Cox's staffs in Atlanta, for example, the incremental innings that his lesser pitchers soak up are almost entirely low-leverage innings. You don't see starters going into the eighth inning with five and six-run leads on a Bobby Cox team, nor do you see his better relievers pitching when the team is down two or three runs in the late innings. On a 10-man pitching staff, a lot of those low-leverage innings would be soaked up by starters or top-end relievers.
While Cox's specific usage isn't duplicated in very many places, a lot of teams use their lesser relievers mostly in relatively low-leverage situations to avoid using up a better pitcher. And a value calculation has to consider where those innings would go if the back-end pitchers weren't available - and what the likely impact would be in higher-leverage situations.
The turning over of a roster spot or two for the use of the rest of the team is a non-trivial factor that is all too often overlooked in the consideration of this question.
If this were such a big factor, then why have teams moved away from it in favor of keeping extra pitchers? Why has no one experimented with using those roster spaces for position players in recent years? Why does everyone carry those extra pitchers?
Before I made a blanket criticism that teams should be doing something else, I'd at least want to take a look at the evidence in favor of doing things the way that they are being done today, beyond the macro-level effects addressed in this article. I don't think a macro-level analysis comes close to capturing the real value of the change in pitcher usage patterns.
-- MWE
1: Improvement in diagnoses means that more injuries are discovered, rather than misdiagnosed as decline, crappyness, "tired arm," etc.
Maybe, but if we attribute all the "tired arms" to injuries, there's still a ton of guys getting hurt now, as there were then... but the old guys were throwing 230-250 innings a year, not 180-200.
Plus, how can we regard the very top starters as representative of the typical starter? That's crazy talk. All those guys who lead the league in pitches per year are freaks of nature and luck and not representative of your typical starters.
He was comparing Hentgen to other top starters because once upon a time Hentgen was a top starter. He's got a Cy Young award on the mantle, which is why Steve made that comparison, or so I think.
If the "old school" wants to look back at the days they consider better than the game we see on the field today, they'll need to accept the results those days gave them. We remember the ones who survived -- Tom Seaver, Whitey Ford, Jim Palmer, Robin Roberts, Lefty Gomez, Bert Blyleven, Ted Lyons. And there are a few flameouts that never matched the success of their early 20's -- Dwight Gooden, Vida Blue, Denny McLain.
But there are other guys who didn't survive, lost to the game and conveniently forgotten. Pete Donahue, Ralph Branca, Gary Nolan, Dan Petry, John Rigney, Dave Rozema, Dean Chance, Russ Bauers, Bill Monbouquette, Mel Harder, Steve Hargan, Mike McCormick, Van Mungo among them. For the most part, we remember the survivors, and think they were representative of all players of the past.
They weren't. We remember them because they were exceptional.
It's one of the few smart things he's ever said.
Carroll is guilty here of creating a strawman. Those sabermetricians who are advocating looking at more than just pitch counts are NOT advocating riding young pitchers very hard. All of the examples Carroll cites that I am familiar with are young pitchers who got ridden hard. That is the OPPOSITE of what Bill James, Steve Treder or myself (yeah, not in the same league as the first two) think should be done. We all think, IMHO, that youngsters should be treated with kid gloves until their arms are mature, i.e. past the age of 25. It's after then that we encourage them to develop, strengthen and stretch out their arms.
It's all about a few things, looking for the right guys (big guys, basically) who throw tons of fastballs.
another thing. a higher portion of today's MLB starters are foreign and pitch in winter leagues. how many odalis perezes were there in 1970? who works harder, a guy who goes every fourth day for six months or a guy who goes every fifth day for eight months? people always talk about how the pitch counts don't include bullpen work etc. but they also don't include winter league work and many starters today pitch for another team in the offseason.
This is an interesting question. In my opinion winter ball helps so long as their winter manager isn't throwing them out there every 3rd day for 150 pitches. It will keep the arm strong. Odalis Perez has had some problems, but Livan Hernandez hasn't, and Luis Tiant had his best years after winter ball in Puerto Rico.
Second, anything unrelated to pitch counts that affects injury risks and varies across eras will lead to a bias in this type of comparison. For instance, how has amateur pitcher usage changed? If young kids are throwing year round now when they weren't before, this could increase their risk for injury later in their career. Without the current use of pitch counts in the majors then, we might see an explosion of injuries that we wouldn't have seen with similar workloads in past eras.
Depends what young kids are doing. If they're being abused, yes, it will not be a good thing, but throwing year round isn't bad, it just depends on what and when and why and how.
Like Tommy John said, throwing a lot is good for you, it's throwing when you're tired that causes nearly all injuries, IMHO.
No. Smart usage of a 10-man staff gives those garbage-time innings to the starter-reliever swingmen, the role that has almost completely been phased out in the drive to ever-more-narrow specialization.
If this were such a big factor, then why have teams moved away from it in favor of keeping extra pitchers? Why has no one experimented with using those roster spaces for position players in recent years? Why does everyone carry those extra pitchers?
For a variety of reasons that we've discussed numerous times, Mike. Essentially the 5-man rotation and the closer/setup/LOOGY bullpen model require 11 or more pitchers to operate, and there is a cycle of operation/expectation/risk aversion that has permeated pitching staff management over the past decade or so. As we've discussed many times, managers love this model, not only because it makes their decision-making process much easier, but also because sticking to it serves to insulate them from criticism -- as I've put it many times, if I the manager use my staff in a non-orthodox manner and we lose, I get blamed, but if I use my staff in the orthodox manner and we lose, the staff (and/or the GM) gets the blame.
Before I made a blanket criticism that teams should be doing something else, I'd at least want to take a look at the evidence in favor of doing things the way that they are being done today
Good heavens, Mike. The evidence in favor of doing things the way they are being done is abundantly well-known by everyone, re-hashed thousands of times over the past 10 years -- that's why things keep getting done this way. You seem to assume that to critique the current model is to be blind to its positive attributes, or something. I already said it in post #45, but I'll say it again:
"To say that the model of pitching staff usage that has generally prevailed for the past 15 years or so hasn't been optimal is not to say that every model that preceded it was, or that this one is egregiously wrong-headed."
Then what you said in post #48:
"while I see the flaws in this particular model, I don't see anything that suggests that the flaws in this particular model are any more severe than the flaws in the model used in the '50s and '60s, or the model used in the '20s and '30s."
is something I wouldn't disagree with either.
If I'm not being clear on this, I sincerely apologize. I'll try to make it as plain as I can: I'm not advocating a return to any previous era's model, nor am I saying the current model has no merit. I'm agreeing with you that every model has its flaws as well as its strengths, and I'm identifying what I think are the particular flaws of this one, and I'm suggesting that the model that eventually replaces this one -- and if history has taught us anything, it's that every model is eventually replaced -- will likely develop as a means of addressing these particular flaws.
And if someone does propose a specific system, it would be interesting to take a team's gamelogs and run through them with your hypothetical new system. Maybe something like this...
Mariners 2003
Apr-01, Loss 5-0 to Athletics
Actual Usage: Garcia 4.7, Hasegawa 1.3, Mateo 1, Carrara 1
NewStyle: Garcia 4.7, Hasegawa 2.3, Mateo 1
Apr-02, Loss 8-3 to Athletics
Actual Usage: Moyer 4.3, Mateo 2.7, Rhodes 0.7, Nelson 0.3
NewStyle: Moyer 4.3, Soriano 2.7, Rhodes 1
Apr-03, Win 7-6 over Athletics (in 11 innings)
Actual Usage: Pineiro 6, Hasegawa 0 (1 BB), Rhodes 2, Sasaki 1, Nelson 2
NewStyle: Pineiro 7, Hasegawa 1, Sasaki 2, Rhodes 1
Or something to that effect. Actually, just looking at those first three box scores, if you wanted to persuade someone that your new method would work, that wouldn't be a bad place to give your method a hypothetical test-run: in the first two games the starter can't make it out of the fifth, and the final game goes to extra innings. A lot of bullpen innings to soak up.
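If someone wanted to mechanize that test-run, a minimal sketch might look like the following. The game-log representation (lists of pitcher/innings pairs) and the reallocation rule (drop the last mop-up outing and hand those innings to the top setup man) are purely illustrative assumptions, not anyone's actual proposed system:

```python
# Toy replay of a hypothetical "NewStyle" usage model against game logs.
# A game is a list of (pitcher, innings_pitched) tuples in order of appearance.

def reallocate(game):
    """Drop the last (mop-up) pitcher and hand his innings to the top
    setup man; total innings are unchanged."""
    if len(game) <= 2:
        return list(game)
    freed = game[-1][1]                 # innings from the dropped last pitcher
    new = [list(p) for p in game[:-1]]
    new[1][1] += freed                  # give them to the first reliever used
    return [tuple(p) for p in new]

# The Apr-01 2003 Mariners game from above:
apr01 = [("Garcia", 4.7), ("Hasegawa", 1.3), ("Mateo", 1.0), ("Carrara", 1.0)]
print([(n, round(ip, 1)) for n, ip in reallocate(apr01)])
# → [('Garcia', 4.7), ('Hasegawa', 2.3), ('Mateo', 1.0)]
```

Run over a full season of logs and totaled by pitcher, something like this would show how many innings the rule shifts from the back of the staff to the front, which could then be weighted by each pitcher's rate stats.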
Part of what drove the swingmen out has been that the regularization of the schedule and changes in field maintenance have taken away their expected uses. Far fewer rainouts, far fewer suspended games, shorter travel times, no doubleheaders at all - that's what has made possible the notion of a rigid rotation. But there are still plenty of unexpected uses for swingmen, many of them created by injuries, but some by starters simply having bad days.
Your starter's throwing BP and it's already starting to look ugly in the 1st or 2nd inning - who you gonna call? In the current model, the only pitcher available is the 11th or 12th man on the staff, the garbageman who isn't good enough for any of the defined roles. Where's Ernie Shore when you need him?
In less-specialized times, swingmen were pitchers who would be starters except that the jobs ahead of them were taken. My first team as a fan was the 1967 Cardinals. They may well have won the pennant no matter how they managed their pitching resources, since they won by plenty - but it sure helped that swingmen Nelson Briles and Dick Hughes were available to step in and start.
Obviously we can't know. The teams that leap immediately to mind are last year's Cardinals (Tomko, Fassero, Simontacchi, Stephenson) or the 2000 Blue Jays (Halladay was awful that year, Carpenter, Frascatore). But I don't think either team really had frontline pitching to make up the difference.
I think that Treder's original point of comparing Hentgen to other top starters is on the money, rather than comparing "average" starters' pitches thrown, because this is a point that's really relevant to top starters. In talking about whether a starter could pitch 260 innings in a season, there are only a handful (I won't hazard a guess at a specific number) of pitchers in each league who should realistically be pitching that much.
This is a complex issue, but to me the question that jumps out is risk. Because there's no certainty that making, say, two extra starts per season will hurt the arm of an ace starter who has already proved he can pitch 220-230 innings. Just as there's no certainty that transferring those extra innings from a lesser starter or reliever will guarantee one or two extra wins.
To me, the question is: How much risk is there in pushing your ace to pitch 250 innings instead of 240? Is there more risk in getting an inning from a tired Pedro than in getting an inning from a rested Leskanic? Now, I can understand the appeal of adopting the risk-averse approach. I'm a little risk averse myself. But I'm also struck by the fact that from 1999 through 2002, the AL didn't have a single starter throw 240 innings in a season. I find it hard to believe that if one treats pitchers with the same intelligent care that Cox and Mazzone treated Maddux and Glavine, you couldn't have a number of 240+ IP guys each season, and maybe a few who go 250+ IP. It doesn't have to be Hall of Fame-quality pitchers like Maddux and Glavine: heck, Freddy Garcia is currently on pace to throw 240+ innings. And if this means that over the course of a season your 5th starter makes two fewer starts, or your worst reliever pitches 10 fewer innings, the potential reward seems to be worth the potential risk.
2003 White Sox. Missed the playoffs by 4 games; got 135 starts from Loaiza, Colon, Buehrle, and Garland for a combined 3.83 ERA; got the remaining 27 starts from a collection of stiffs for a combined 6.75 ERA.
2003 Phillies. Missed the playoffs by 5 games; got 132 starts from Padilla, Wolf, Millwood, and Myers for a combined 4.06 ERA; got the remaining 30 starts from Duckworth and others for a combined 4.96 ERA.
Almost every team has hideously awful 5th starters, so if you take almost any team that missed the playoffs by a little bit, you can partially blame the 5th starters.
Even the Dodgers last year (who missed the playoffs by six games), who had a fantastic pitching staff (leading the league in ERA, runs allowed, hits allowed, and shutouts) gave 12 starts to Andy Ashby, who responded with a 5.67 ERA.
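A quick back-of-the-envelope calculation shows why those 5th-starter innings matter. The innings-per-start figure and the choice of replacement ERA below are assumptions for illustration only:

```python
# Earned runs saved by upgrading a block of bad starts to a better ERA.
def runs_saved(starts, ip_per_start, bad_era, better_era):
    innings = starts * ip_per_start
    return innings * (bad_era - better_era) / 9.0

# 2003 White Sox: 27 starts of 6.75 ERA ball. If those innings had instead
# come at the front four's combined 3.83 (assuming ~5.5 innings per start):
saved = runs_saved(27, 5.5, 6.75, 3.83)
print(round(saved, 1))  # ≈ 48 earned runs, roughly 4-5 wins at ~10 runs/win
```

Nobody would actually expect the replacement innings to come at the aces' rate, but even at league-average the gap is in the range that decides a race a team lost by 4 games.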
- Junk the fixed 5-man rotation, start your best horse or 2 or 3 on 3 days' rest and fill in the rest of the starts with swingmen.
- More flexibility in the # of pitches that starters are allowed to throw, rather than the between-90-and-110-no-matter-what that pervades today. If he's really struggling, pull him early, but if he's cruising and all the other variables are positive, let him go to 130 or 140.
- Pitchers, especially starters, WORK FAST. Time elapsed is a major element in fatigue too.
- Use the best reliever in more of a fireman role, not a strict closer role.
- Freeing up the closer from 1-inning outings only frees up the top couple of setup men to work fewer games, more innings/game.
- Never, ever waste a roster spot on a LOOGY. A pitcher with significantly fewer IP than G is simply not pulling his weight.
I would hope that it would combine the best elements of previous models and the current one, without going to the extremes of any.
I wouldn't go that far, but I would definitely have my big guys go EVERY fifth day.
i.e.:
Thursday: Starter 1
Friday: Starter 2
Saturday: Starter 3
Sunday: Starter 4
Monday: off day
Tuesday: Starter 1
And so on; take advantage of the off days so that your big guys get more starts than your back end by a significant margin. Relegate that back-end starter to swingman work in between - he'll certainly be useful for short 2-3 inning stints, which is pretty much how fifth starters are used anyway.
However, I'm not averse to a four man rotation when need be, if it's late in the year and you're in a pennant race.
Working regularly makes your breaking balls sharper too.
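The calendar effect described above can be sketched with a toy greedy scheduler. The 162-games-in-182-days schedule, the one-off-day-every-ninth-day rhythm, and the "four full days of rest" rule are all assumptions for illustration:

```python
from collections import Counter

def assign_starts(game_days, starters, min_rest=4):
    """Each game goes to the longest-rested of the big four who has had at
    least min_rest full days off since his last start; if nobody is rested,
    the swingman starts."""
    last_start = {s: -(min_rest + 1) for s in starters}
    log = []
    for day in game_days:
        rested = [s for s in starters if day - last_start[s] > min_rest]
        if rested:
            s = min(rested, key=lambda x: last_start[x])  # longest wait first
            last_start[s] = day
            log.append(s)
        else:
            log.append("Swing")
    return log

# 162 games in 182 days, off day every ninth day (an assumed schedule):
game_days = [d for d in range(182) if d % 9 != 8]
counts = Counter(assign_starts(game_days, ["S1", "S2", "S3", "S4"]))
print(counts)  # the big four land roughly 35 starts apiece, the swingman ~20
```

With a strict 5-man rotation every slot gets about 32 starts; here the off days flow to the front four, which is exactly the margin the post is after.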
More starts by a pitcher who produces the same rate output in each start and does not lose time to injury are more valuable, unless the lack of starts by your other pitchers causes them to decrease their rate output due to the myriad problems caused by underwork or uncertainty of work. There are a lot of things you have to factor in before you can state this as a foregone conclusion.
If you want to argue that fewer overall starts spread over more years is more valuable than more overall starts over fewer years, then the burden of proof is on you.
No, there is no burden on anybody. The actual answer depends on circumstance. What matters is the actual performance of the replacements for those innings, whether they're spread over years or come at the end of the term. We did a whole thread where Treder's ACE RELIEVER hypothesis was discussed and, IMHO, exposed. Since Treder never answered, I'm not sure I've seen any opposition to the performance information we've shown for teams based on roster construction, or to the items we posit about the availability of replacements. Can anyone link the thread? There's no sense in repeating 50 posts and hours of research.
but also on the related issue that concentrating more innings in fewer pitchers allows a team the choice of carrying a smaller pitching staff.
Then why didn't you support that position when it was discussed and exposed on the other thread? Is it easier to ignore the counter-arguments, restate your thesis, and make everyone repost all the information that shows the increase in performance by pitchers when the workload is shared? Or is it just easier to claim moral superiority by ignoring counter-arguments?
... You can't only shift starters' innings to good relievers; you're also shifting them to bad relievers.
Nor can any plan hoard the innings for just a small number of pitchers. The key is the plan you use to maximize the LI for the relievers you wish to use. We discussed this in the other thread.
there's still a ton of guys getting hurt now,
And some of them are pitchers who previously would not have been on major league rosters when durability was the paramount selection criterion. Because of these personnel management issues, injuries will likely tend toward an equilibrium. There may be unintended spikes, as when the steroid generation gets older, but injury time will find an equilibrium.
Bill James, Steve Treder or myself (yeah, not in the same league as the first two) think should be done
Steve specifically advocates a program to increase work load in the minor leagues.
Like Tommy John said, throwing a lot is good for you, it's throwing when you're tired that causes nearly all injuries, IMHO.
I guess you ignored most of the comments upthread. None of these things is always bad and none of these things is always good. It's dependent on physiology, mechanics, length of high-intensity workload, and start/stop duration for activity.
the role that has almost completely been phased out in the drive to ever-more-narrow specialization.
As previously mentioned, please go read the other thread where your model was discussed. Until and unless you address those counter-points, repeating the same point over and over is not going to make your cause any more solid.
Almost every team has hideously awful 5th starters
Really, let me introduce you to the Atlanta Braves. I am not sure what you consider a "starter" and what you consider to be hideously awful, but
(1) If you take any pitcher that started 10 or more games, and look for ERA+ less than 90, then you get the following list of awful pitchers:
Shane Reynolds
Jason Schmidt
Jason Marquis
(2) Let's be liberal and say ERA+ less than 100; that adds:
Pete Smith
Steve Avery
(3) Don't like 10 starts? Let's make it 5. That gives you:
Terrell Wade
I'd offer three propositions from that data:
(1) Leo has a problem with pitchers named "Jason"
(2) Not every team has a crummy fifth starter
(3) Except for maybe Shane Reynolds, these are guys that I want to start. They are all prospects who deserve a chance to get some starts. That's a calculated cost for player development.
The other thread shows the patterns of the Braves' staff and decreased performance under concentrated-innings strategies. This is ripe for injuries to the 2 or 3 big starters. Swingmen were awful in performance compared to their rates under the other models.
More flexibility in the # of pitches that starters are allowed to throw, rather than the between-90-and-110-no-matter-what that pervades today. If he's really struggling, pull him early, but if he's cruising and all the other variables are positive, let him go to 130 or 140.
Reasonable; the idea of using one counting metric for all pitchers is absurd. It's appropriate for some and not for others.
- Pitchers, especially starters, WORK FAST. Time elapsed is a major element in fatigue too.
Working fast can have collateral benefits. I don't think people are taught to work slowly.
Use the best reliever in more of a fireman role, not a strict closer role.
Discussed ad nauseam. It assumes an endurance rate for all pitchers that is not true. It also generates availability problems and tends to reduce the LI of your best pitcher. Pitchers with the same baseline, per the Braves model in the other thread, tend to perform worse in concentrated-innings scenarios.
- Freeing up the closer from 1-inning outings only frees up the top couple of setup men to work fewer games, more innings/game.
I don't think I understand the point being made. Unless it's back to "everybody pitch more innings," which is just another variation whose downside has been discussed.
- Never, ever waste a roster spot on a LOOGY. A pitcher with significantly fewer IP than G is simply not pulling his weight.
Discussed in the other thread. If we get rid of the pejorative terms, this is a disguised appeal to reduce pitcher roster spots. In the other thread, we showed Steve that in the "good old days" these roster spots were occupied by hitters well below league average. There is a much more thorough discussion of roster management in the other thread. Steve has still not addressed any of these points.
I doubt his program entails a manager slapping a kid on the back and saying go out and throw 150 pitches at 19, which is what happened to many of these pitchers.
I guess you ignored most of the comments upthread. None of these things is always bad and none of these things is always good. It's dependent on physiology, mechanics, length of high-intensity workload, and start/stop duration for activity.
Which is why I suffixed it with IMHO, IN MY HUMBLE OPINION. And if throwing when you're tired doesn't increase risk for injury, then one must ask why we are arguing about pitch counts at all?
Really, let me introduce you to the Atlanta Braves. I am not sure what you consider a "starter" and what you consider to be hideously awful, but
So you pick a team which has recently had some of the best staffs ever and point to them?
(2) Not every team has a crummy fifth starter
Well yay, the Braves don't have crummy starters. Big find there.
Go check out the 1990-2000 Yankees, who had teams among the best ever, and teams that sucked, a pretty diverse representation, I think. Just about every team except perhaps the 98 Yankees (who may be the best team ever), had at least one starter who flat out sucked.
Absolutely 100% you can, because:
(1) Other than three pitchers they have had incredible turnover in their pitching staff.
(2) Because those three starters are all arguably HOFers, the Braves should have shown even more decreased performance when innings were taken away from them.
(3) They have had variation in how they have had to utilize pitchers because of exigent circumstances, and that variation forms an internal basis of comparison.
(4) Part of the reason they have had a good staff is their model (or do you think it's luck, or that the effect Cox and Mazzone have is just something mystical?)
And if throwing when you're tired doesn't increase risk for injury, then one must ask why we are arguing about pitch counts at all?
Of course throwing when you are tired increases the chance of injury; so does throwing in long spurts, and a lot of other things. I doubt any pitcher is not going to be "tired" when the sixth inning rolls around (except maybe Maddux on one of his low-pitch-count groove days). The risk profile varies from pitcher to pitcher.
Which is why I suffixed it with IMHO, IN MY HUMBLE OPINION
Well, your humble opinion either ignored or didn't take into account other factors.
Well yay, the Braves don't have crummy starters. Big find there.
And almost every year it was a different fifth starter. Isn't the appropriate question why they don't have crummy fifth starters, rather than thinking you should exclude them because they don't have crummy fifth starters?
Just about every team except perhaps the 98 Yankees (who may be the best team ever), had at least one starter who flat out sucked.
So what do the Braves do different than the Yankees? Is it all luck?
A lot of talk has been about the back end of the rotation, but what about the #2 starter? Would you rather have Mark Prior and Carlos Zambrano each throwing 210 innings, or Mark Prior throwing 260 and Shawn Estes throwing 160 because Zambrano blew his arm out?
I'm reminded of a Calvin and Hobbes cartoon where he asks his dad how they know the weight limit on bridges. His dad replies that they drive heavier and heavier trucks over the bridge until it breaks. Then they rebuild the bridge and post the weight limit.
How do you know Kerry Wood's pitch limit? You pitch him until he blows out his elbow, and that was past his limit.
Heck, maybe that cartoon is a better analogy than I thought because they rebuilt Wood and he's pitching about like he always did.
Anyway, I'm not saying that I know anything about what causes the injury; I'm just agreeing with the Carroll quote above that analyzing the guys who made it may not be a very good way to look at it. You also need to look at the guys who didn't make it in the old days. Not that I have any idea how to do that.
I think the way to start doing it is by seeing the high percentage of pitchers from all periods of baseball history who threw 200+ innings at young ages and got injured or stopped being effective shortly thereafter, and then not making pitchers throw that many innings at those ages. Even though some survive, it's better to be safe than sorry. The second paragraph of the preceding post is absolutely right.
But I don't think anybody here is debating the wisdom of letting a 21-year-old throw 200 innings. The question being debated, to my mind, is what is gained and lost by not having the best pitcher on your team (say, Pat Hentgen at age 27) throw as many innings as he would have 30 years ago. And Mike and Backlasher are right to point out that it isn't something that can be decreed on a macro level.
Steve,
Thanks for the mention, but I reckon I shouldn't allow curiosity to get the better of me in deriving answers to questions myself or others may be interested in. Now I know that all that work I did on the 1969 - 2003 200+ Innings Pitched project and charts was just wasted effort. Although I made an offer of the data to anyone for the asking, I received no requests for it. So with no responses to my offer, and the evidence of "this" thread showing overwhelmingly that there's absolutely no interest in the subject, maybe I'll stop banging my head against the wall and quit working on the larger 1901 - 2003 200+ IP project.
Thanks again,
--------
trevise
If anybody has the soundtrack to this little known Sergio Leone film...I could really use it.
Thanks!
More and more, I imagine you being a long haired hippy freak wearing a Groucho Marx face mask... ;-) ...
-----------
trevise :-) ...
And yes, I realise that one day you'll make that comment come back and haunt me. T
Almost every team has hideously awful 5th starters [well, this was you quoting me]
Really, let me introduce you to the Atlanta Braves.
[...]
I'd offer three propositions from that data:
(1) Leo has a problem with pitchers named "Jason"
(2) Not every team has a crummy fifth starter
(3) Except for maybe Shane Reynolds, these are guys that I want to start. They are all prospects who deserve a chance to get some starts. That's a calculated cost for player development.
A few comments:
First, giving an anecdote about the Braves doesn't rebut my statement.
Second, you attempt to directly rebut me when you say, "Not every team has a crummy fifth starter," but that isn't a rebuttal; I agree with that statement. I said <u>almost</u> every team has hideously awful 5th starters.
So, here's some data for ya:
Using ESPN.com so that I can isolate starter innings, here are the 2003 teams that gave 15+ starts to someone with a 5.00+ ERA:
Anaheim: Kevin Appier, 19 GS, 5.63 ERA and Aaron Sele, 25 GS, 5.77 ERA.
Arizona: Elmer Dessens, 30 GS, 5.13 ERA.
Atlanta: Shane Reynolds, 29 GS, 5.50 ERA
Baltimore: Omar Daal 17 GS, 6.29 ERA; Rick Helling, 24 GS, 5.71 ERA; Rodrigo Lopez, 26 GS, 5.82 ERA
Boston: John Burkett, 30 GS, 5.26 ERA
Chicago Cubs: Shawn Estes, 28 GS, 5.70 ERA
Chicago White Sox: Dan Wright, 15 GS, 6.85 ERA
Cincinnati: D Graves, 26 GS, 5.33 ERA; J Haynes, 18 GS, 6.30 ERA; R Dempster, 20 GS, 6.71 ERA
Cleveland: Billy Traber, 18 GS 5.68 ERA; R Rodriguez, 15 GS 5.73 ERA
Colorado: Cook, 16 GS, 6.00 ERA (I'll use a 6.00 ERA as the cutoff for Colorado)
Detroit: Adam Bernero 17 GS, 6.05 ERA (that's just the worst; I'm ignoring Bonderman, Knotts, Maroth)
Florida: Nobody! There's one so far.
Houston: J Robertson, 31 GS, 5.04 ERA
Kansas City: Kyle Snyder, 15 GS, 5.17 ERA
Los Angeles: Nobody! That makes two.
Milwaukee: M Kinney, 31 GS, 5.10 ERA; W Franklin, 34 GS, 5.58 ERA; G Rusch, 19 GS, 7.36 ERA
Minnesota: R Reed, 21 GS, 5.13 ERA; J Mays, 21 GS, 6.77 ERA
Montreal: Nobody! We're up to three.
New York Mets: Nobody! Four.
New York Yankees: J Weaver, 24 GS, 5.73 ERA
Oakland: Nobody! Five.
Philadelphia: B Duckworth, 18 GS, 5.04 ERA
Pittsburgh: J Fogg, 26 GS, 5.26 ERA; S Torres, 16 GS, 5.51 ERA
San Diego: O Perez, 19 GS, 5.38 ERA; K Jarvis, 16 GS, 5.87 ERA
San Francisco: J Foppert, 21 GS, 5.00 ERA
Seattle: Nobody! Six.
St. Louis: B Tomko, 32 GS, 5.02 ERA; Simontacchi, 16 GS, 6.33 ERA
Tampa Bay: R Bell, 18 GS, 5.40 ERA; J Kennedy, 22 GS, 6.60 ERA
Texas: Benoit, 17 GS, 5.55 ERA; I Valdez, 22 GS, 6.10 ERA; C Lewis, 26 GS, 7.30 ERA
Toronto: M Hendrickson, 30 GS, 5.51 ERA; C Lidle, 31 GS, 5.75 ERA
So, of 30 teams, six didn't have a 5th starter that could be considered hideously awful in 2003. On the other hand, 80% of MLB teams did (24 of 30), including 10 of the 13 teams that finished within 5 games of making the playoffs.
I think that more starts are valuable than fewer starts. [This is BL quoting me]
More starts by a pitcher who produces the same rate output in each start and does not lose time to injury are more valuable, unless the lack of starts by your other pitchers causes them to decrease their rate output due to the myriad problems caused by underwork or uncertainty of work. There are a lot of things you have to factor in before you can state this as a foregone conclusion.
Given what I've shown in post #86, it is hard to imagine that the "other" pitchers could possibly pitch any worse :)
I think the point of Treder's theory is that the crappy pitchers who might be underworked in his new method would be dropped from the roster.
Buck up, dude. Just because I didn't directly resource your data in this article doesn't mean I never will. It's a terrific source of info, and I will unquestionably make use of it on an ongoing basis.
So you do have one satisfied customer!