March 7, 2013 at 1:45 pm by Franklin Rabon under Atlanta Braves
Those who wander over to this article may well think that some sort of anti-sabermetric screed is about to be leveled upon the reading public, that I’m about to attack WAR and how it specifically applies to relievers. This belief may well elicit a feeling of giddiness or a groan of baseball-blog-world weariness, depending on the nature of the audience. But that’s not what this is about. This article is about how the traditional thinking of the true old-timers was somehow replaced by the idiocy of the saves ‘rule’ and, further, how relievers in general came to be wildly overrated. Sprinkled throughout will be how this applies to the Atlanta Braves.
First, let’s get a little background on how reliever usage patterns came about, loosely adapted from Baseball Prospectus’ Extra Innings.
In the truly olden days, the days of Old Hoss Radbourn, Cy Young and the like, there weren’t really relief pitchers. In fact, most relief innings were emergency measures, and were usually performed by position players. These would be cases like, for instance, when the day’s starter got drunk between the 6th and 7th, as his team was having a particularly good offensive half in between his trips to the mound, and he couldn’t stumble back out there (or maybe I’m just remembering my days of softball in law school). This continued into the early 1920s or thereabouts, when we started to see the emergence of ‘swingmen’, starters who would also occasionally throw relief innings. These pitchers would typically throw a couple of relief innings once a week and probably also start a game that week. It was something akin to the current practice of occasionally using a starter in relief on his ‘throw day’. But during this time, roughly 90% of all innings were thrown by starting pitchers, i.e., it was more common for a complete game to be thrown than not.
Around the 1930s we began to see the appearance of the dedicated reliever. But even so, it wasn’t until the 1970s that more than half of all relief innings were thrown by dedicated relievers. Today, the figure stands around 80% (if that seems low, it’s essentially because spot starters aren’t considered ‘dedicated relievers’). From the 1960s to the mid 1980s, we saw what were called ‘firemen’, i.e., relief aces who came in when the game was most in peril and often pitched multiple innings of tie games or games that were close in either direction — i.e., what we modern saberists would call high leverage situations. These were the days of Goose Gossage. Managers were using their relievers in what they truly felt were the most important situations, not whatever situations an arbitrary rule for stat keeping dictated.
If three men can be held responsible for such an atrocity as how modern relievers are used, they would be Jerome Holtzman, Tony LaRussa and Dennis Eckersley. And yet, three men could hardly be less culpable. To start with Holtzman, the inventor of the save, let’s understand where he was coming from. Mr. Holtzman was a sportswriter, and the genesis for his inventing the save came from vulture artist extraordinaire Elroy Face, who in 1959 went an astounding 18-1 — as a reliever. Holtzman realized that a large number of those wins came when Face surrendered the lead upon entering the game, but then had the offense retake the lead for him later in the game. Holtzman understood that pitcher wins were a stupid way to evaluate relievers, that Face had had a historically lucky season in 1959, and that he was actually better the previous season, when he went just 5-2. Thus he dreamt up the save rule as we now know it. But Holtzman wasn’t setting out to change the game. Holtzman was a writer, and seemed to feel simply that the save statistic would help him better write about the game.
It’s incredibly important to realize that in many ways this statistic was invented in response to one particular issue that one particular writer had with the evaluation of one particular player — i.e., all the times that Face came into a game with the lead, gave it up, and then got the win. This fundamentally shaped how the save rule was formulated, in contrast to other ways it might have been formulated. Since Holtzman was primarily worried about not giving credit for losing a lead, the saves rule ignored situations where the game was tied or close but the pitcher’s team was behind. Face hadn’t achieved his farcical 18-1 record through come-from-behind wins, so the stat ignored those. The wrong that Holtzman believed needed redressing was strictly blown leads that resulted in a win. So, while Holtzman was correct that pitcher wins are a particularly terrible way to measure relievers, he ended up creating a stat in their place that made about as little sense. Thinking about it logically, does anybody anywhere actually think that protecting a three run lead is nearly as important as keeping a game tied in the bottom of the ninth, giving your team a chance to win in extras?
In Holtzman’s defense, again, he was just coming up with a stat to use to write about the game of baseball. This was during a time when pocket calculators weren’t a thing, and quickly figuring advanced metrics was not only yet to be envisioned, but probably wasn’t even practically feasible anyway. Holtzman likely never remotely figured that managers would change their strategies based on his new stat. It wasn’t even until nine years later that saves became an ‘official’ stat, the first such statistic to gain that designation since the RBI in 1920.
Even after the designation of the save as an official statistic in 1969, it went largely ignored until the late ’80s, when Dave Duncan, Dennis Eckersley and Tony LaRussa entered the picture with a plan to revitalize the once great, but now aging, starter’s career. Essentially, Eckersley’s arm was aging, and the primary loss was in stamina: he couldn’t retain his effectiveness for more than one inning. For reasons somewhat particular to him, it was decided that he would pitch only in one-inning stints at the end of the game. And it worked spectacularly well. A career that had begun to fall by the wayside was totally revitalized, and Eck became a dominant force at the end of ball games. And because that’s how the game works, the approach was copied everywhere — even in situations where it probably wasn’t warranted. It’s important to remember that, just like the original formulation of the saves rule, the first truly modern closer was a specific adaptation to a very specific set of circumstances that was then over-applied and over-generalized.
By this time we began to see the save as a statistic all over broadcasts, both television and radio. The save went from outsider stat, to officially recognized but largely ignored, to paid attention to, to finally dogma. It’s hard to nail down the specific point where the save transitioned from something managers considered into dogma — dogma such that it would lead Fredi Gonzalez to say, “When you’re on the road, you’ve got to push guys back a little bit, because you can’t use your closer on the road in the ninth inning of a tie ballgame.” But we know it happened, thanks to the way that managers manage and that gem of a Fredi quote. Not to single Fredi out, because evidently most managers think this way, but he’s one of the few who actually said it.
One criticism often leveled at ‘new-agey’ stats is how complicated they are. So let’s contrast the ‘king of new age stats’, on base percentage, with the save rule.
On base percentage:
1) Take the times a player reached base via a hit, hit by pitch or walk;
2) Divide by the number of times the player came to the plate.
The save rule (a pitcher is credited with a save when all of the following hold):
1) He is the finishing pitcher in a game won by his team;
2) He is not the winning pitcher;
3) He is credited with at least ⅓ of an inning pitched; and
4) He satisfies one of the following conditions:
i) He enters the game with a lead of no more than three runs and pitches for at least one inning;
ii) He enters the game, regardless of the count, with the potential tying run either on base, at bat or on deck;
iii) He pitches for at least three innings.
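The contrast in complexity is easier to see if you sketch both definitions in code. This is my own illustration, not anything from an official rulebook implementation; the function and parameter names are hypothetical, and the save check is a simplification of the rule as paraphrased above.

```python
def obp(hits, walks, hbp, plate_appearances):
    """On base percentage: one addition, one division."""
    return (hits + walks + hbp) / plate_appearances

def is_save(finished_game, team_won, is_winning_pitcher, outs_recorded,
            entered_with_lead, lead_on_entry, innings_pitched,
            tying_run_on_base_at_bat_or_on_deck):
    """The save rule: four base conditions, one of which branches three ways."""
    # Base conditions: finished a win, isn't the winner, got at least one out.
    if not (finished_game and team_won and not is_winning_pitcher
            and outs_recorded >= 1):
        return False
    # Then any one of three qualifying scenarios.
    return (
        (entered_with_lead and lead_on_entry <= 3 and innings_pitched >= 1)
        or tying_run_on_base_at_bat_or_on_deck
        or innings_pitched >= 3
    )
```

Two lines of arithmetic versus a nest of boolean branches — and the ‘simple, traditional’ stat is the latter.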
Essentially, we see that when traditional-media baseball writers deride criticism of the saves rule as “newfangled, new age-y, spreadsheet mumbo jumbo,” they’re ignoring the reality that the save is itself a new age statistic whose importance only came about because of two different overly reactionary movements: first, the reaction to one particular pitcher who had a lot of vulture wins, leading to a bit of a weird formulation of the stat to begin with, and then one particular pitcher, Eckersley, with very particular abilities in the twilight of his career. If a relatively unknown reliever in 1959 hadn’t had a statistically improbable season with regard to pitcher wins, or if Dennis Eckersley had blown his elbow out in 1987, we might well today see a lot of ‘firemen’ pitching multiple innings in high leverage situations. Instead, we see them brought in to protect three run leads, or — because of the relative paucity of save situations — brought into meaningless games with near-insurmountable leads in either direction to ‘get in work’.
Further impacting the situation is a well known cognitive bias, the endowment effect. The endowment effect is based on the observed fact that when someone views something as theirs, the perceived loss from giving it up is much greater than the perceived gain would be from acquiring it. In baseball this manifests itself in the fact that managers seem to hate losing games where they held a lead much more than they ‘like’ come-from-behind wins. This permeates the save rule because managers will disproportionately deploy resources (good reliever innings) in relatively safe margins of victory (three run leads) while often letting mediocre relievers *cough* Durbin *cough* pitch in tied games or games where the team is behind by only a run or two. It’s unclear whether this is purely the manager’s own endowment-effect bias, or fear of a media backlash rooted in the same bias, or, maybe most likely, some of both. But the effect is real, and relatively well established amongst major league managers.
However, all of that merely states why relievers aren’t used particularly effectively; it doesn’t begin to actually make the case that relievers are overvalued. It may well be the case that while more value could be squeezed out of them if used properly, they’re already highly valuable to begin with. After all, relievers are used more today than ever.
Like most sabermetric concepts, the basic and most important issue with reliever evaluation is a mind-numbingly simple one: they don’t throw very many innings, comparatively. This fact manifests itself in two very important ways: 1) small sample size anomalies, and 2) the fact that it’s hard to accrue a whole lot of value when you don’t throw very many innings.
The first point is one that people often simply don’t want to hear. People hate being told that some observation isn’t significant because of a small sample. Beat writers love nothing more than to cite small-sample batter-pitcher splits, such as Jon Morosi recently championing Ryan Vogelsong for Team USA because Joey Votto of Team Canada is just 1-for-7 against him. Try to point out the objective fact that that’s not even two games’ worth of at bats, and you get called a spreadsheet-obsessed nerd. Fans further don’t want to hear that most of the best reliever seasons are at least partially a product of small sample anomalies. Sure, there are some truly great relievers who do it year in and year out — Mariano Rivera comes to mind — but for most relievers a great season is often just a series of good breaks in a relatively small number of innings pitched as much as anything else. We readily accept that a starter can simply have a bad month, but in a month a starter often accumulates as many innings as a reliever does in a half season or longer. So, many times, if you pay for a reliever coming off a great season, you’re most likely paying at least in part for a statistical anomaly, and there’s no real way of knowing when you are or aren’t.
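You can see the small-sample effect with a quick simulation. This is purely my own illustrative sketch — the numbers are made up for the demonstration, and the model (each inning as an independent coin flip for whether a run scores) is deliberately crude. The point is just that two pitchers with identical true talent will produce ‘great-looking’ seasons far more often over a reliever’s 65 innings than over a starter’s 200.

```python
import random

random.seed(42)

def simulated_eras(true_ra9, innings, trials=10000):
    """Simulate many seasons for a pitcher whose true talent is true_ra9
    runs per 9 innings, modeling each inning as an independent chance
    of allowing one run (a crude but serviceable approximation)."""
    p = true_ra9 / 9  # per-inning probability of allowing a run
    eras = []
    for _ in range(trials):
        runs = sum(1 for _ in range(innings) if random.random() < p)
        eras.append(9 * runs / innings)
    return eras

# Same true talent (4.00 RA/9); how often does pure luck produce a
# season that looks 'great' (under 3.00)?
for ip in (65, 200):
    eras = simulated_eras(4.00, ip)
    lucky = sum(e < 3.00 for e in eras) / len(eras)
    print(f"{ip} IP: {lucky:.2%} of identical-talent seasons look 'great'")
```

In runs like this, the 65-inning workload produces a sub-3.00 season dozens of times more often than the 200-inning one — nobody got better or worse, the sample just got smaller.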
The second point is one that I think everybody understands once they start to think about it. Simply put, by virtually any measure of how much is paid for a player’s contribution per win, relievers are by far the highest paid position on a per-win basis. The most common such measure is $/WAR, and not only do relievers ‘win’ by this measure as the most overpaid position, but they nearly double up the next closest competitor, 1B. Further, it’s even understandable why 1B is so high, as the position is often where aging stars go after they’ve moved off tougher defensive positions. Since many relievers are pre-arbitration young guys, the few who are paid free agent money stand out all the more. It’s simply difficult for relievers to pitch enough to warrant their pay. This is further exacerbated by the fact that there aren’t even that many close games; many games are out of hand in one direction or another before a reliever ever takes the mound.
None of this would really matter if budgetary constraints weren’t an issue. While relievers might be overvalued, they still have value. One can argue that having the greatest reliever in baseball history made sense for the Yankees, as they had virtually zero budgetary constraints and were a virtual lock to be in the playoffs nearly every year. If they overpaid for performance a bit, or a bunch, so what? For most of the rest of the league, who do operate under relatively strict financial constraints, overpaying for relievers can be damning. Imagine a budget of $10 million. According to what that money can usually buy on the open market, you could get either a reliever worth something like 1 win, or a second baseman worth nearly 2.25 wins. Make that mistake a few times and you’ve contracted yourself into an awesome bullpen, but straight out of the playoffs.
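The ‘make that mistake a few times’ arithmetic is worth spelling out. This sketch just uses the illustrative wins-per-$10M figures from the paragraph above — they’re a rough characterization of the market, not quoted prices.

```python
# Illustrative market rates from the paragraph above (assumed, not quoted):
# $10M buys ~1 win from a free agent reliever, ~2.25 from a second baseman.
RELIEVER_WINS_PER_10M = 1.0
SECOND_BASEMAN_WINS_PER_10M = 2.25

def wins_bought(budget_millions, wins_per_10m):
    """Wins purchased at a given market rate."""
    return budget_millions / 10 * wins_per_10m

# "Make that mistake a few times": three $10M contracts each way.
bullpen_route = wins_bought(30, RELIEVER_WINS_PER_10M)
position_route = wins_bought(30, SECOND_BASEMAN_WINS_PER_10M)
print(f"Three $10M relievers:       {bullpen_route:.2f} wins")
print(f"Three $10M second basemen:  {position_route:.2f} wins")
print(f"Wins left on the table:     {position_route - bullpen_route:.2f}")
```

Under those assumptions, the same $30 million buys 3 wins one way and 6.75 the other — nearly a four-win gap, which in a tight division is the difference between playing in October and watching it.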
It’s obvious to anybody who is paying any attention that the Braves are one of those ‘financially constrained’ franchises. Not totally devoid of money, but due to an awful local TV deal, unable to outspend, or even spend with, the ‘big boys’. While it may well make sense for the Yankees to pay a historically great closer $15 million, as they did with Mariano Rivera from 2008-2012, it almost certainly doesn’t for the Braves. Further, any opportunity the Braves may have to turn bullpen arms into comparatively more valuable commodities should be acted upon. Rich teams like the Tigers and Yankees can afford luxury closers, and the Braves should leverage this whenever possible.
This fact has led to two of the more controversial ideas this spring: first, the idea of trading Craig Kimbrel at some point before he becomes a free agent, versus signing him to a lucrative long-term extension; and most recently, the possibility of trading Jonny Venters to the Tigers for Rick Porcello.
First, trading Kimbrel has been dealt with amply by Mark here. But a further point, even outside of a trade, is the real point: in no way, shape or form does it make sense for the Braves to sign Craig Kimbrel to a long-term extension, unless he falls on his face in one of the next few years and his price comes way down. At this point Kimbrel is putting up historic numbers, leading some commentators, such as KC Covington, to call for locking him up long term at whatever it might cost — some saying that if it takes $15 million per year, so be it. While I don’t mean to denigrate opinions, because opinions are essentially what we’re all trading in, that’s just freaking nuts. Such an extension would likely preclude extending a player like Heyward, Freeman, Justin Upton, Andrelton Simmons, etc., to say nothing of a hometown favorite like McCann (but we got Gattis, so let McCann walk if needed, I suppose).
Next, the idea of trading Jonny Venters for Rick Porcello is indeed ludicrous, but not for the reason many Braves fans seem to think: it’s ludicrous because Detroit would never trade a young league-average starter for an inconsistent setup man. Even as much as teams like the Tigers may (perhaps correctly, given their finances and roster) overvalue relievers, even they’re not that dumb. There’s just not a world where any league-average starter should be traded for a middle reliever, or for anything short of a historically dominant closer. And if such a trade were indeed possible, it would border on general-managerial malpractice for Wren not to make it immediately. Last year, Rick Porcello was closer to Craig Kimbrel in value than Jonny Venters was to Porcello. Heck, even in Jonny Venters’ best year (2010), he was behind Porcello’s value in his worst full season (also 2010). Porcello was worth nearly 3 wins last season, while even at Venters’ best, he likely won’t ever crack 2 wins. Again, this trade idea/rumor was ludicrous, but only because the Tigers would be idiots to do it, not the Braves.
The next several years will indeed be interesting for the Braves when it comes to relievers. Thus far, Wren has largely shown that he prefers to build the core of his bullpen with young power arms in their pre-free-agency seasons. Now, as those arms mature into dominant forces, but consequently get more expensive and near free agency, it will be worth watching whether Wren sticks to this philosophy or cracks and pays Craig Kimbrel a king’s ransom. One can only hope that Wren sticks to the principles that got him here, more than the horses that got him here.