In Week 12 last season, Carolina Panthers starting cornerback Donte Jackson went down with a quad injury on the first play of the game, forcing the Panthers to shuffle their cornerback depth chart. In a joint relief effort, Captain Munnerlyn and Corn Elder combined to allow seven catches for 161 yards and a touchdown. The recently released All or Nothing documentary detailed the storyline and seemed to emphasize that Russell Wilson was going out of his way to pick on the new corners.
This isn’t a new tactic. Any game that includes an injury to a starting corner also includes extensive commentary from the broadcasters about the impact it might have. It is intuitive to think the new corners would struggle: they likely aren’t as warmed up, haven’t had the same chance to settle into the game, and, most importantly, aren’t as talented as the starter to begin with. It seems like a lock that quarterbacks would target these players early and often. So how big is the impact, and do coaches and quarterbacks actually take advantage?
Measuring the Impact
To test this, we looked at corners who were targeted on their first drive after entering the game as a non-starter. Here, “non-starter” is loosely defined as any defender who was not on the field for the defense’s first 15 snaps, or approximately one quarter of a game. The 15-snap cutoff is meant to exclude rotational players who play regularly but may not have been on the field to start the game, such as a nickel corner.
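The filter described above can be sketched in a few lines. This is a minimal illustration assuming a hypothetical snap-level table of (player, defensive snap number) pairs for one game; the data shape is invented for the example and is not Sports Info Solutions’ actual schema.

```python
# Sketch of the non-starter filter: a corner counts as a "non-starter" if he
# was absent from the defense's first 15 snaps of the game.
# `snaps` is a hypothetical list of (player, defensive_snap_number) pairs.

CUTOFF = 15  # the 15-snap threshold described above

def non_starters(snaps, cutoff=CUTOFF):
    """Return the set of players who did not appear in the first `cutoff` snaps."""
    early = {player for player, snap in snaps if snap <= cutoff}
    everyone = {player for player, _ in snaps}
    return everyone - early

# Toy example: Jackson leaves after snap 1; Munnerlyn and Elder come on later.
snaps = [("Jackson", 1), ("Starter2", 2), ("Munnerlyn", 20), ("Elder", 25)]
print(sorted(non_starters(snaps)))  # ['Elder', 'Munnerlyn']
```

Note that Jackson himself is not flagged: he was on the field for an early snap, so the cutoff correctly distinguishes the injured starter from the players replacing him.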
Offensive Statistics When Targeting New Corners (NFL, 2016-2018)

| | EPA/A | Y/A | — | Positive% | Boom% |
|---|---|---|---|---|---|
| Non-Starter on First Drive | 0.16 | 7.6 | 12.2 | 49.6% | 31.7% |
| Everyone Else on All Snaps | 0.12 | 7.4 | 11.9 | 48.0% | 25.1% |
The numbers are up across the board, but the most intriguing bit of evidence is the increase in Boom%. (Boom% is the percent of plays that are worth more than one EPA, or essentially how often they give up big plays.) Overall, corners are about 26% more likely to give up a big play when coming on in relief duty. Even on the lowest end of the confidence interval, the Boom% is still meaningfully higher than the league average (~3 percentage points). This indicates that, at a minimum, players entering in a relief role are more susceptible to big plays.
Increases in Expected Points Added per Attempt (EPA/A), Yards per Attempt (Y/A), and Positive% (the percentage of plays with a positive EPA) also suggest a broader effect beyond just big plays. It is important to keep in mind that there is a large discrepancy in sample size between the two groups, but given that every metric points in the same direction, it’s not much of a leap to conclude the effect is real.
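All three rate metrics in the table can be reproduced from per-play EPA values alone. A minimal sketch, using made-up per-target EPA numbers rather than real charting data:

```python
def summarize(epas):
    """Compute the table's rate metrics from a list of per-target EPA values."""
    n = len(epas)
    return {
        "EPA/A": sum(epas) / n,                     # average EPA per attempt
        "Positive%": sum(e > 0 for e in epas) / n,  # share of plays with positive EPA
        "Boom%": sum(e > 1 for e in epas) / n,      # share of plays worth > 1 EPA
    }

# Illustrative per-target EPA values, not real data.
stats = summarize([2.0, -0.5, 0.5, 1.5])
print(stats)  # {'EPA/A': 0.875, 'Positive%': 0.75, 'Boom%': 0.5}
```

The Boom% line makes the earlier definition concrete: any play worth more than one point of EPA counts as a big play, so the metric is simply the fraction of targets clearing that bar.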
This research can also be extended using Sports Info Solutions’ college charting data to examine to what extent the same effect exists at the college level. Using the same methodology as above, we looked at all FBS teams in 2018.
Offensive Statistics When Targeting New Corners (NCAA, 2018)

| | EPA/A | Y/A | — | Positive% | Boom% |
|---|---|---|---|---|---|
| Non-Starter on First Drive | 0.19 | 8.2 | 13.8 | 45.4% | 29.8% |
| Everyone Else on All Snaps | 0.07 | 7.5 | 13.7 | 43.8% | 26.2% |
The effect seems even more consistent in college. EPA/A and Y/A both show a much wider gap than in the NFL, while Positive% and Boom% show effects of similar size. This isn’t particularly surprising given how much wider the skill gap is between starters and backups in college, especially in smaller conferences, and it serves as another piece of evidence that there is something to the theory.
Are teams taking advantage?
In short, not really.
Looking only at snaps from their first drive after entering the game, NFL corners had a target rate of 15.8%. This is only slightly higher than the overall target rate for corners of 13.9%. Given the narrative around these situations from broadcasters and fans, this is a surprisingly small increase, and it is not statistically significant.
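A standard way to check a gap like 15.8% vs. 13.9% is a two-proportion z-test. Here is a sketch; the article does not report the underlying snap counts, so the sample sizes below are purely illustrative assumptions, not the real data.

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """z-statistic for the difference between two proportions x1/n1 and x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)           # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 79 targets on 500 relief-duty snaps (15.8%) vs.
# 556 targets on 4,000 regular snaps (13.9%). Illustrative only.
z = two_prop_z(79, 500, 556, 4000)
print(round(z, 2))  # well below the ~1.96 threshold for significance at 0.05
```

At sample sizes in this ballpark, a gap of about two percentage points in target rate falls short of conventional significance, which matches the conclusion above.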
The difference in college is even smaller: the target rate for players entering the game increased by just 0.2 percentage points, from 13.1% to 13.3%. This isn’t quite as surprising as the lack of an increase at the pro level, since college quarterbacks are intuitively less likely to make adjustments at the line of scrimmage or alter their progressions based on defensive personnel, but it is notable nonetheless.
Given all of this, the logical assumption is that defenses are scheming to protect new corners and prevent quarterbacks from picking on them, but this doesn’t seem to check out either. There was no meaningful change in the distribution of coverages called when a new player entered the game at either the pro or college level.
At the NFL level, teams ran man coverage about two percentage points less often when a new corner entered the game, with no change in the amount of zone coverage they utilized (the difference was absorbed by screens, combo coverages, and prevent defense). There was also no change in the distribution between Cover 0, Cover 1, and Cover 2, which might have indicated an increased use of safety help. In the NCAA there was likewise only about a two-point decrease in man coverage and a matching small bump in zone coverage, very similar to what we saw in the pros.
Man-zone splits are admittedly a high-level way of looking at this: not all Cover 4 is created equal, and pattern-matching rules can vary greatly from team to team, even within the same coverage family. Even so, one would expect to see some amount of change in this subset of plays if teams were truly scheming to shelter new corners.
This appears to be yet another entry on the long list of NFL coaching blind spots. Much like avoiding runs on 2nd-and-long or using the QB sneak in short-yardage situations, this is an edge that is not being fully exploited.
It is hard to say whether this stems from general overconfidence in one’s own players or from a lack of awareness of the effect, but it is surprising that, in the aggregate, teams seem to ignore these factors in their decision making.