On Monday night, some of baseball’s best sluggers will compete in the Home Run Derby. At least one of them is likely to have a disappointing second half of the season, and someone is then likely to blame the derby.
Case in point: At the MLB All-Star break in 2013, Chris Davis of the Baltimore Orioles had a stat line fit for a king. He had slugged 37 home runs, driven in 93 runs and posted an enviable 1.109 OPS. Naturally, Davis was selected for the Home Run Derby, placing fourth. After the derby, Davis’s season took a turn for the worse: He hit just 16 home runs in the second half and battled a hand injury, a blister he popped during the derby.
The decline in Davis’s power numbers fueled the belief that participating in the Home Run Derby alters a player’s swing in the second half of the season. We heard a similar story when Josh Hamilton’s home run production fell after the 2008 derby. And Bobby Abreu famously blamed the derby when he fell back to Earth in the second half of the 2005 season.
But here’s the more likely culprit in these post-derby declines: regression to the mean. Data from the 2009 to 2013 seasons shows that although derby participants’ second-half stats do, on average, fall off, participants actually outperform the rest of the first half’s top home-run hitters in the second half.
Consider the difference between the first and second halves of Davis’s 2013 season. In the first half, Davis was hitting a home run every 9.7 at-bats, tops in the MLB among hitters who qualified for the batting title in both halves of the season. In the second half, Davis hit a home run every 12.9 at-bats, still No. 1 among qualified hitters. This rise of 3.2 AB/HR seems large, but the top home-run hitters from the first half experienced an overall rise of 4.8 AB/HR from the first half to the second half. Additionally, Davis’s walk percentage rose from 9.9 percent to 11.7 percent in the second half, which is an indication he saw fewer good pitches to hit and thus had fewer opportunities to hit home runs.
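As a back-of-the-envelope check, here is a minimal Python sketch, using only the AB/HR rates cited above, of how Davis’s first-to-second-half change stacks up against the average change among the first half’s top home-run hitters. The function and variable names are purely illustrative.

```python
# A minimal sketch using the AB/HR figures cited above; names are illustrative.

def ab_per_hr_change(first_half: float, second_half: float) -> float:
    """Change in at-bats per home run (a rise means homers came less often)."""
    return second_half - first_half

davis_change = ab_per_hr_change(9.7, 12.9)  # Davis, 2013: 9.7 -> 12.9 AB/HR
group_change = 4.8                          # average rise among the first half's top HR hitters

print(f"Davis's AB/HR rose by {davis_change:.1f}")                 # 3.2
print(f"The comparison group's AB/HR rose by {group_change:.1f} on average")
if davis_change < group_change:
    print("Davis's power declined less than the group average")
```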
Another home-run-related stat supports the regression theory. Per the advanced baseball data site FanGraphs.com, Davis had a home-run-per-fly-ball rate (HR/FB) of 35.6 percent in the first half of 2013; that number was 21.3 percent in the second half. Davis has a career HR/FB rate of 22.3 percent, and over his career his HR/FB rate has dropped by an average of 6.3 percentage points from the first half to the second half. The rate has dropped in three of the four seasons in which he has played at least 80 games.
Davis isn’t the only player who has appeared to suffer after the Home Run Derby when the drop-off in his statistics can be explained another way.
For this study, we compared the first-half and second-half performances of players who qualified for the batting title in both halves and finished tied for 50th or better in total home runs in the first half. Home Run Derby participants were compared to these players in at-bats per home run, walk percentage, strikeout percentage and one other key statistic: hard-hit average.
Hard-hit average is calculated by dividing the number of hard-hit balls a player records by his at-bats. Hard-hit balls are a subjective measure of contact quality gathered by Inside Edge scouting services, one of the baseball industry’s primary data providers.
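To make the methodology concrete, here is a hedged sketch in Python (with pandas) of the filtering and comparison described above: keep players who qualified in both halves, keep those tied for 50th or better in first-half home runs, compute AB/HR and hard-hit average, and compare derby participants with the rest of the group in the second half. The rows and column names below are placeholders, not real stat lines; the actual figures in this piece come from FanGraphs and Inside Edge data.

```python
import pandas as pd

# Placeholder stat lines for illustration only; real data would come from
# FanGraphs / Inside Edge splits for 2009-13.
df = pd.DataFrame([
    # player, half, ab, hr, hard_hits, bb_pct, k_pct, qualified, derby
    ("Player A", 1, 330, 30, 80, 10.0, 25.0, True, True),
    ("Player A", 2, 300, 20, 75, 12.0, 24.0, True, True),
    ("Player B", 1, 320, 22, 70,  8.0, 20.0, True, False),
    ("Player B", 2, 310, 15, 65,  8.5, 21.0, True, False),
], columns=["player", "half", "ab", "hr", "hard_hits",
            "bb_pct", "k_pct", "qualified", "derby"])

# Keep only players who qualified for the batting title in BOTH halves.
df = df[df.groupby("player")["qualified"].transform("all")]

# Keep players tied for 50th or better in first-half home runs.
first_half = df[df["half"] == 1]
top = first_half.loc[first_half["hr"].rank(method="min", ascending=False) <= 50, "player"]
df = df[df["player"].isin(top)]

# Derived rates: at-bats per home run and hard-hit average (hard-hit balls per at-bat).
df = df.assign(ab_per_hr=df["ab"] / df["hr"],
               hard_hit_avg=df["hard_hits"] / df["ab"])

# Compare second-half averages: derby participants vs. the rest of the group.
second_half = df[df["half"] == 2]
print(second_half.groupby("derby")[["ab_per_hr", "hard_hit_avg", "bb_pct", "k_pct"]].mean())
```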
At-bats per home run is used rather than slugging percentage, isolated power or raw home run totals because we don’t want to penalize hitters for the other things they do at the plate. For example, if home runs per plate appearance were used, we would be penalizing a hitter for taking walks. If we used slugging percentage, we might shortchange hitters who don’t collect many hits other than home runs. The goal is to measure home-run production, which can be done by looking at how often a player hits home runs in the opportunities he is given to hit them (at-bats).
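To see why the choice of denominator matters, here is a small worked example with two made-up stat lines: both hitters homer once every 10 at-bats, but one walks far more often. Home runs per plate appearance makes the patient hitter look like the weaker power hitter, while AB/HR rates the two identically.

```python
# Two hypothetical hitters who homer at the same per-at-bat rate, but one
# walks far more often. HR per plate appearance penalizes the walker, while
# at-bats per home run treats them identically. Numbers are made up.

def ab_per_hr(ab: int, hr: int) -> float:
    return ab / hr

def hr_per_pa(ab: int, hr: int, walks: int) -> float:
    plate_appearances = ab + walks  # ignoring HBP, sacrifices, etc.
    return hr / plate_appearances

patient = {"ab": 300, "hr": 30, "walks": 80}     # takes a lot of walks
aggressive = {"ab": 300, "hr": 30, "walks": 20}  # rarely walks

for name, s in [("patient", patient), ("aggressive", aggressive)]:
    print(name,
          f"AB/HR = {ab_per_hr(s['ab'], s['hr']):.1f}",
          f"HR/PA = {hr_per_pa(s['ab'], s['hr'], s['walks']):.3f}")
# Both show AB/HR = 10.0, but the patient hitter's HR/PA looks worse
# (30/380 ≈ 0.079 vs. 30/320 ≈ 0.094) purely because he walks more.
```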
The numbers from this comparison help show that the Home Run Derby has little to no impact on a player’s power numbers. Home Run Derby participants hit the ball hard more frequently in the second half of the season while maintaining similar AB/HR ratios. Over the past five years, derby participants who qualified in both halves of the season had a hard-hit average of .253 in the second half, 11 points greater than that of the other power hitters in the majors. Additionally, they strike out less often and walk more frequently than non-participants in the second half.
[Table: Second-Half Performance Among Top Power Hitters]
Another aspect to consider is the type of player who is picked for the derby. Take 2009 participant Brandon Inge: He hit 21 home runs in the first half of the 2009 season, one every 14.2 at-bats. Inge’s first-half numbers warranted consideration for the derby. However, his career AB/HR is 33.0, which suggests he was a prime candidate to regress to the mean in the second half. After the derby, Inge managed a home run every 43.8 at-bats, second worst among the top 50 power hitters from the first half who qualified for the batting title in both halves.
We can also consider that the average Home Run Derby participant over the past five years has had a first-half AB/HR of 15.6, the same as Miguel Cabrera’s first-half mark over that span. We should not expect Home Run Derby participants to produce home runs at the same rate as Cabrera over an extended period.
None of this means that the Home Run Derby has zero impact on a player’s second-half results. It does appear likely, however, that the decline in results, among participants and the majors’ other top-50 home-run hitters alike, is due to expected regression. The players selected for the derby are typically among the best home-run producers of the first half, though not necessarily among the best power hitters in baseball. Uncharacteristic performances help players get selected for the Home Run Derby, and the decline in their numbers in the second half is more likely due to natural regression than to their participation in the event.