Rams GM Les Snead likes Sam Bradford, hates Analytics (which hate Bradford)

By Warren Sharp

Recently, Sam Bradford has become a popular topic. Drafted #1 overall 5 years ago, rumors have flown about trades to the Bills or Browns. The Rams need to decide (possibly soon) if they will re-sign Bradford, let him play out his contract, or trade him. Rams GM Les Snead hates analytics, so he probably has never seen any of the below information.

For the purpose of this study, I’m going to use a term that is the opposite of the popular stat “YAC”, or yards AFTER catch, which describes what a receiver does with the ball after the quarterback completes the pass to him. The opposite is “YBC”, or yards BEFORE the catch. I’ve seen it sometimes referred to as “Air Yards”; the two terms are interchangeable. It’s passing yardage attributed solely to the quarterback, because it’s measured from the time the ball leaves his hand to when the receiver catches it.
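To make the definition concrete, here is a minimal sketch of the YBC calculation. The per-completion averaging basis, the function names and the sample numbers are my own assumptions for illustration, not figures or code from the article.

```python
# Hypothetical sketch: YBC ("air yards") backed out from season totals.
# Per-completion averaging and the sample numbers are assumptions.

def ybc_per_completion(passing_yards: float, yac: float, completions: int) -> float:
    """Average yards before catch: total passing yards minus YAC,
    spread over completions."""
    if completions == 0:
        return 0.0
    return (passing_yards - yac) / completions

def yac_share(passing_yards: float, yac: float) -> float:
    """Fraction of total passing yardage gained after the catch."""
    return yac / passing_yards

# Made-up season line: 3,000 passing yards, 1,600 of them after the catch,
# on 300 completions.
print(round(ybc_per_completion(3000, 1600, 300), 2))  # 4.67
print(round(yac_share(3000, 1600), 3))                # 0.533
```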

It’s very hard to post a terribly low average YBC over a season, primarily because the players who are not throwing downfield much are typically backup QBs or young, inexperienced QBs. The coach creates a very safe gameplan for them, which involves a lot of short drops and passes close to (or behind) the line of scrimmage. To illustrate how rare it is: since 2010, a quarterback has averaged less than 3.0 yards before catch over a season (minimum 250 attempts) only 7 times.

Sam Bradford was the only quarterback to have two such seasons (2013 and 2010). That’s correct. Bradford joins Jimmy Clausen (in his rookie year), Blaine Gabbert (in his rookie year), Christian Ponder (in his second year) and perennial backups Chad Henne (in 2013) and Shaun Hill (in 2010) as the only quarterbacks to throw for FEWER than 3.0 yds BEFORE catch over an entire season (min 250 att) since 2010. But Bradford did it twice, something no other QB did.

Why would a #1 overall draft pick, playing in his 4th season in 2013, average fewer than 3.0 YBC? More importantly, why would the offensive design encourage him to attempt so many low risk, low reward passes? The first question is certainly valid. The second one, however, might not be. Perhaps it’s not just “the offensive design”, it’s “the offensive design for Sam Bradford.”

That is because in 2013, while Bradford threw 262 passes and averaged just 2.97 YBC (a whopping 54.1% of his total passing yardage was YAC), his replacement after injury, Kellen Clemens, threw 272 passes and averaged a strong 3.89 YBC, with only 43.7% of his total passing yardage coming as YAC. That certainly looks a lot different than Bradford, yet the two were playing in the same offense, in the same season.

In 2014, both Austin Davis and Shaun Hill had over 225 attempts, and both averaged over 3.7 YBC, and both saw less than 49% of their total passing yardage coming as YAC.

Over the last 2 years, the three Rams starting QBs other than Bradford averaged a combined 45.6% of their total passing yardage as YAC, while Bradford was at 54.1%.

It’s puzzling to say the least, considering Bradford is the higher-pedigree quarterback.

Suppose we play a game and assume (incorrectly) that Bradford was instructed not to throw deeper routes, while the other Rams QBs were given the opposite instruction. What happened when Bradford DID throw deeper passes?  I took multiple measures of Bradford’s success when passing 15+ yards in the air.  The results are depicted in the infographic below.  In a nutshell, Bradford is consistently at the bottom of the NFL however you measure it, whether by completion percentage, passer rating or TD:INT ratio, and among qualifying quarterbacks he is clustered with the backups rather than the NFL starters (save for Jay Cutler).

But before I introduce the infographics, I might as well lather it on heavy for those really trying to understand what the analytics say about Bradford:

  • Sam Bradford’s QBR (with 50=Avg):
    • 2010: 38 (29th)
    • 2011: 26 (33rd)
    • 2012: 50 (20th)
    • 2013: 48 (24th)
  • Sam Bradford’s accuracy percentage when under pressure was 53%, which was 38th out of 41 qualifying quarterbacks in 2013.  The only QBs worse were:
    • Brandon Weeden (Cleveland)
    • Matt McGloin (rookie backup from Oakland)
    • Thaddeus Lewis (backup from Buffalo)
    • Note that this is even more concerning for Bradford than for other QBs, as so many of Bradford’s passes were close to the line of scrimmage.  So to have the 38th rated accuracy percentage despite so many close to the line of scrimmage targets is an issue.
  • The only teams Sam Bradford beat in 2013 were the 4-12 Jaguars, the 2-14 Texans and the 10-6 Cardinals (a come-from-behind win in which the Rams entered the 4th quarter facing an 11-point deficit).

Jeff Fisher leaned on the infamous “QB Winz” stat, suggesting:  “When we’ve had him, he’s 5-2-1 in our division.”  Of course, Fisher conveniently took only Bradford’s last 2 seasons, ignoring the fact that Bradford went 3-7 vs the division in 2010 and 2011, including losing all 4 division games in 2011.  He also ignored the fact that among Bradford’s division “QB Winz”, Bradford has never won a road game in Seattle or San Francisco.  He is 2-5-1 on the road in the division.  He’s only defeated Arizona in Arizona, back when they were terrible (his last game in Arizona was 2012).

Of course, when we’re looking at “QB Winz” stats, it goes without saying that we should totally overlook the fact that in Bradford’s career “QB Winz” vs division opponents, his defense has allowed a minuscule 12 ppg, including a mere 10 ppg in 7 of his 8 career “QB Winz” vs division opponents.  The most important factor when measuring “QB Winz” is to ignore HOW your team actually won the game (such as allowing only 10 ppg behind an awesome defensive effort) and look only at the fact that your team won, and therefore your “QB Winz” total climbed as well.

With that said, let’s examine some of the analytics below, which Les Snead can continue to ignore.

  1. First, you can clearly see where Bradford ranks among all qualifying QBs in terms of the YBC statistic I referred to above.  Every year, Bradford ranked well below the NFL quarterback curve.  Pay attention not just to where Bradford sits, but to who sits with him.  And notice the QB names on the “good” side of the curve.
  2. Next, I ranked YBC vs YAC, and once again Bradford’s seasons are clustered in the lower left quadrant.
  3. As you continue to scroll, we get into the rankings on the deeper passes, those which traveled 15+ yds in the air, to see how Bradford stacks up.  It’s pretty easy to understand what each of these graphics depicts.

The bottom line is, when it comes to Sam Bradford, teams should be very cautious, including the Rams.  The quarterback is the single position you cannot screw up and still hope to see success, particularly if you are looking at signing a QB to a long-term contract.  If the Rams (or another team) can get Bradford at a discount, with a pay cut to his salary, that’s one thing.  But if Bradford wants market price for a #1 overall draft pick, he needs to be aware that the analytics are not his friend.  Fortunately for him, that does not seem to bother his GM, Les Snead.

Analyzing ESPN’s NFL Great Analytics Rankings

By Warren Sharp

Recently, ESPN published an intriguing piece ranking all 122 pro teams in the NFL, NBA, NHL and MLB based on how well they implement analytics. To no surprise, the NFL had ZERO teams in the overall top 10. Of the top 10, five were from MLB, four were from the NBA and one was from the NHL. However, the NFL had no problem fitting into the bottom 10, placing 4 teams (NY Jets, SD Chargers, Tennessee Titans and Washington Redskins) as analytics “nonbelievers”. To the surprise of almost no one, the Washington Redskins were ranked as the worst NFL team at implementing analytics. Which is oddly hilarious considering that owner Dan Snyder has bottomless pockets when it comes to signing washed-up free agents, but does not want to spend anything out of his money bin to study how analytics could help his franchise, which has won 6 or fewer games in 4 of the last 5 years. ESPN’s Kevin Seifert ranked the NFL teams and did a tremendous job compiling and segregating these results.

I decided to look at the teams ESPN ranked as analytics “believers” to see how they performed compared to the rest of the NFL, which was divided into categories of “one foot in”, “skeptics” or “nonbelievers”. As it turns out, sure enough, the teams who were the strongest believers in analytics made more playoff appearances and won more playoff games (on average) over the last 3 seasons. And that is despite the fact that both the Jaguars and Browns are in this group, while arguably the strongest team of the last 3 years, the Seahawks, is not.

It is interesting to note that several teams categorized as “one foot in” saw multiple 7 to 8 win seasons but missed the playoffs. The Chicago Bears won 8 games in 2011 and 2013, but missed the playoffs both years. The Miami Dolphins won 7 to 8 games in each of the last 3 years, but missed the playoffs each season. And the Buffalo Bills are coming off a 9-win season but did not make the playoffs. Perhaps if these teams moved their other foot into the analytics pool, they could be swimming with the playoff teams, as some were just 1 more win from qualifying.

This also speaks to the hardheadedness of some franchises that refuse to adapt. You could possibly understand the Tennessee Titans’ and Washington Redskins’ unwillingness to incorporate analytics if they were consistently in the playoffs with double digit wins each year. Then, perhaps, they could develop a “don’t fix it if it ain’t broke” mentality. But these teams are consistent cellar dwellers and laughingstocks of the NFL. Each has seen just 1 season at or above .500 in the last 5 seasons, and in the last 2 seasons combined, the Titans and Redskins have a grand total of only 16 wins, an average of 4 wins/season/team.

There are some classically hilarious quotes in the article as well:

  • Joe Gibbs once said (regarding his dislike of analytics) “We’re still about people here.” As if incorporating analytics automatically and literally morphs a front office into a group of cyborgs (at best) or fully self-aware robots (at worst).
  • Ken Whisenhunt stated: “When you see a guy like [Frank] Wycheck make a one-handed catch in the back of the end zone with the guy draped all over him, how do you put an analytic on that?” It’s almost as if Whisenhunt knows that you simply cannot compile analytical data on your opponent to understand that given certain receiver groupings and defensive goal line personnel, the opponent historically struggles to cover the tight end, and then use those analytics to help you call a pass play to your tight end (Frank Wycheck) for a touchdown.
  • Mike McCoy went on record saying: “No one on a piece of paper can tell me this is the right thing or the wrong thing to do.” And then he promptly looked down at his play sheet (written on a piece of paper) to decide which play to call into Philip Rivers.

The bottom line here is these teams (and coaches) can continue to live in the past and ignore that more and more teams are incorporating analytics into football decisions, and they can continue to lose games to the more progressive franchises who heavily incorporate analytics into decision making. No one in the analytics community would suggest you never allow the humans involved to think for themselves. Analytics are a tool to be incorporated into the repertoire, which (more than likely) will make a decision maker look smarter in the long run. But refusing to incorporate them in some manner, even if it falls short of fully embracing them, is folly.

As Kevin Seifert indicated, no NFL team has yet gone “all in” on analytics. Hopefully in the near future, one (or more) teams will embrace analytics deeply enough to be considered for that category.

Catching Jerry Rice “Stickum” Handed? Investigating His Own Admission

When Jerry Rice admitted to using “stickum” on his hands to help catch passes, my thoughts immediately shifted to the data: could his player stats reveal any type of anomaly?

The goal is to find data which shows (or does not show) that:

  1. There was a time when it became clear that Jerry Rice’s ability to hold onto the football dramatically improved, and
  2. Such an improvement was unnaturally well above any of his peers.

The most obvious statistic to dive into is catch rate:  how often was the ball passed to a receiver (aka a target), and how often did that receiver catch it?  But you can’t look at catch rate in a vacuum, because all passes are not equal.  The further downfield the pass travels, the less likely a catch will result, so you have to factor the air yardage of the passes the receiver is catching into the equation.

The Methodology chosen was simple:  calculate a receiver’s actual catch rate (catches/targets) and then compare it to his predicted catch rate, which is a function of the average “air yards” on the passes he is catching.  Clearly, it’s easy for a RB to catch a screen pass, because of his proximity to the QB, whereas a WR trying to catch an 18-yard pass is not going to produce completions as often.  To determine “air yards”, I took the total receiving yardage and “backed out” the YAC (yards after catch).
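The per-player inputs described above can be sketched in a few lines. The function and field names are my assumptions for illustration, not Sporting Charts’ actual column names, and the sample season is made up.

```python
# Sketch of the per-player inputs for the catch rate study.
# Names and numbers are assumptions for illustration.

def receiver_inputs(rec_yards: float, yac: float, receptions: int, targets: int):
    """Return (air yards per reception, catch rate) for one player-season."""
    air_yards = rec_yards - yac            # "back out" the YAC
    yia_per_rec = air_yards / receptions   # yds in the air per reception
    catch_rate = receptions / targets
    return yia_per_rec, catch_rate

# Made-up season: 1,200 receiving yds, 400 YAC, 80 catches on 110 targets.
yia, rate = receiver_inputs(1200, 400, 80, 110)
print(round(yia, 1), round(rate, 3))  # 10.0 0.727
```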

The first setback in this analysis was that I could only find targets starting in 1991, and YAC starting in 1992.  Thus, the first calculated year I could use was 1992.  All of the data I used for this analysis came from Sporting Charts; it was the only site where I could find NFL targets back to 1991.

On the positive side, this captures Rice’s prime years with Steve Young.  On the negative side, we have no data from Rice’s formative years with Joe Montana.  Statistically, Rice’s apex years were the 1993-1995 seasons, which saw him win his 3rd Super Bowl ring.  But it obviously would have been nice to see his catch rates with Joe Montana.  (If anyone has access to NFL-wide target and YAC data dating back to 1986, feel free to send me a link and I can update this analysis.)

The Regression yielded strong results.  I grabbed all player-seasons from 1992-2000 and found a much better relationship between air yards/reception and catch rate when stipulating seasons of 40+ catches.  So between 1992 and 2000, there were almost 750 data points of individual players, their air yds/reception, and their catch rate.  The relationship showed an R^2 of 61% and a P-value of 1.4E-152.  In layman’s terms, this means that 61% of the variation in a receiver’s catch rate can be explained solely by the distance of the passes he catches.  That is pretty strong when you consider we’re talking about 22 players interacting on the field and many other variables playing into the equation, not the least of which is the talent differential between receivers.  Yet 61% of the variation in catch rates is explained by the air yards of the passes.
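The fitting procedure itself is ordinary least squares. The sketch below runs the same kind of fit on synthetic data (randomly generated to loosely resemble the shape described above); only the method mirrors the study, and none of the numbers it prints are the article’s actual fitted values.

```python
import random

# Synthetic illustration of the regression described above: catch rate as a
# linear function of air yds/reception, fit over ~750 player-seasons.
# The data is randomly generated; only the OLS method mirrors the article.

random.seed(1)
n = 750
xs = [random.uniform(1.0, 15.0) for _ in range(n)]             # air yds/rec
ys = [0.85 - 0.025 * x + random.gauss(0.0, 0.05) for x in xs]  # catch rate

# Ordinary least squares for y = a + b*x
mx = sum(xs) / n
my = sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# R^2: share of catch rate variance explained by pass depth alone
ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r_squared = 1.0 - ss_res / ss_tot
print(f"slope={b:.4f}, intercept={a:.3f}, R^2={r_squared:.2f}")
```

The negative slope encodes the intuition above: the deeper the average pass, the lower the expected catch rate.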

The result of such a regression is that it provides a formula from which we can calculate the “predicted catch rate”.  It’s the most likely rate at which the receiver should catch the ball based on the historical data during this period.  And from there, you can look at the rate at which the receiver actually caught the ball, to see if he exceeded expectations or underachieved.  (I’ll note this is far from a perfect analysis.  For example, a pass thrown 2 yds downfield but across the middle of the field travels a much shorter distance than a pass thrown 2 yds downfield but toward the sideline.  Based on receiving yds and YAC, these are all measured vertically down the field.)
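The predicted-vs-actual step reduces to two small functions. The coefficients below are hypothetical placeholders (NOT the article’s fitted values), chosen only to make the arithmetic concrete.

```python
# Sketch of the predicted-vs-actual step. The coefficients are
# hypothetical placeholders, NOT the article's fitted values.

SLOPE = -0.024      # catch rate lost per additional air yard (hypothetical)
INTERCEPT = 0.81    # baseline catch rate at 0 air yards (hypothetical)

def predicted_catch_rate(yia_per_rec: float) -> float:
    return INTERCEPT + SLOPE * yia_per_rec

def catch_rate_variance(actual_rate: float, yia_per_rec: float) -> float:
    """Positive = exceeded expectations, negative = underachieved."""
    return actual_rate - predicted_catch_rate(yia_per_rec)

# A receiver catching 74.2% of targets at 8.8 air yds/reception:
print(round(catch_rate_variance(0.742, 8.8), 4))  # 0.1432
```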

The methodology makes sense and the regression was very strong, so we should be able to generate some solid results.  And we did:

The Results are below in a few different interactive visuals.  Let’s start with the first one, which depicts each season’s YIA (yds in the air)/reception vs the player’s catch rate over the time period analyzed.  You can see the strong clustering near the trendline.  You can also see two distinct groupings of passes thrown:  the first quite near the line of scrimmage, to running backs, and the second right at (or just before) the 10 yard marker, signifying passes thrown almost exactly 10 yds in the air, which you would expect to see on 3rd and 10.

I’ve highlighted Jerry Rice in red, and you can hover over each data point to see relevant player information for that season.

What this chart shows us is that a number of Rice’s seasons were certainly above what was expected from a catch rate perspective, particularly 1994.  But it can also be noted that in 1993, one of Rice’s teammates, John Taylor, actually had a better catch rate than Rice and was working a bit further down field on those catches.


In this second chart, I’ve modified the y-axis to show the variance for the receivers, instead of the pure catch rate.  Now, we’re looking at how each receiver did vs his expectations, based on the yards in the air of his average receptions.  The dominance of Rice’s 1994 campaign is even more evident now.  No other receiver with at least 75 receptions in a season had a better catch rate vs expectations between 1992-2000 than Jerry Rice in 1994.

You can also see a number of other dominant years, 1995 and 1996.  But if you look closely, you will see that in terms of the top 5 seasons between 1992-2000, Michael Irvin actually posted two years in that top 5:  1994 and 1995.  He added another top 7 performance in 1992.  So Irvin, not Rice, produced 3 of the best seasons in the top 7.  Rice’s 1995 season placed 8th and his 1996 season placed 10th overall among his peers during this time span.

Looking further down, you can see that Rice actually underperformed expectations in 1998, as he returned from tearing his ACL and MCL in the first game of the 1997 season.  Later that 1997 season, he returned from injury early, but cracked the patella in his left knee, so 1998 was his first season back from those two injuries, making that drop in production a bit more understandable.  Then Steve Young was replaced by Jeff Garcia in Rice’s final two seasons with the 49ers (1999 & 2000).

The bottom line in this visual is that while Rice’s 1994 was dominant, other receivers had seasons as good and better than Rice’s other years in San Francisco between 1992 and 2000.  Particularly Michael Irvin, who posted 3 of the best 7 seasons in this time span.


Next, I’ll take the exact same chart, but instead of limiting it to 75+ receptions, I’ll scale back to 50+ receptions. Here we notice something very interesting. The single best year for a receiver with 50+ catches between 1992-2000 was Jerry Rice’s teammate, John Taylor, in 1993. Taylor caught deeper passes (10.1 yds in the air/recep) than Rice did in his remarkable 1994 season (8.8 YIA), and Taylor caught 75.7% of his targets, a whopping 18.8% above the predicted 56.8%. Rice, as you know, caught 74.2% in 1994, which was 14.7% above the predicted 59.4%. This begins to demonstrate the notion that there could be a significant portion of the receiver’s catch rate variance which is attributed to the quarterback and system he is playing under, something the next two graphics will address in more detail.


We saw John Taylor flash on that last graphic as a player with 50+ receptions for the 49ers whose catch rate variance exceeded even that of Jerry Rice in his 1994 season. Let’s now look at all 49ers receivers during Steve Young’s tenure (thru 1998) who had at least 35 receptions, and see how their average variance looks. This graphic clearly demonstrates that many receivers had stellar production when playing with Steve Young in San Francisco; it was not just Jerry Rice. We cannot minimize, however, how tremendous Rice’s numbers were, given the sheer volume of receptions he had. It’s far more difficult to maintain an insane catch rate over 100+ receptions than over 35+ receptions. But the point of this graphic is to demonstrate that many receivers outperformed their NFL peers while playing with Young on the 49ers.


Before moving on to other WRs and how their careers rose and fell over time in terms of catch rate variance, I wanted to tie a bow on the 49ers during Steve Young’s tenure, and this next graphic does that perfectly. As you can see, looking at the team average for the 49ers as compared to the rest of the NFL, for receivers with 35+ receptions in a season, the 49ers’ dominance is on full display. Something clearly is to be said not only for the quarterback throwing VERY catchable footballs, but also for the design of the offense, which gave receivers more space to work with on the easiest targets. This was primarily during the George Seifert 49ers, though I’m sure the Bill Walsh/Joe Montana offense would look quite similar (if we had the data to go back that far). Steve Mariucci came in at the tail end of this analysis (1997).


Looking at other receivers, this next graphic perfectly depicts how receivers rise and fall in their catch rate above expectation over the course of their careers.  Certainly, no single factor can explain these swings, and it would be nearly impossible to determine when (or if) a player began using stickum based on his catch rate.

This graphic includes receivers who have 4+ seasons between 1992-2000 with 50+ catches, so we can accurately gauge how these players truly rise and fall.  It does not appear that Jerry Rice’s ascendance into his tremendous 1994 season was abnormal when compared to the similar fluctuations seen among his peers.  {To navigate, you can select an individual player’s name from the legend below.}


On this next graphic I pulled out Cris Carter, isolating Carter and Jerry Rice vs other receivers.  Carter called out Jerry Rice over the stickum comment, and also went on record saying that he (Carter) never used any himself.  Carter’s 1999 season was his best in terms of adjusted catch rate; he outperformed expectations by 8.2%.  He was also named First-Team All-Pro in 1999, for the second time in his career.

Carter’s best year was almost equivalent to Rice’s 1993 season, which was Rice’s 4th best year during this period.  Rice saw a definite rise from ’92 to ’93, and peaked in 1994, with ’95 and ’96 not too far behind.  Carter saw a similar occurrence.  His 1997 season was his worst in terms of actual catch rate vs predicted, but his 1998 season was dramatically better, and he peaked in 1999, with 2000 being a year in which he fell back.

In fact, Carter’s improvement in catch rate variance from 1997 (-3.5%) to 1998 (5.6%) was 9.1% in one season. Rice’s best 1-year improvement in catch rate variance came between 1993 (8.4%) and his remarkable 1994 season (14.7%), an improvement of 6.3%, well below Cris Carter’s best (9.1%). Carter’s career was much less stable at the QB position than Rice’s. Carter worked with a ton of QBs. While with the Vikings alone, his leading QBs on a per-yr basis included: Rich Gannon, Jim McMahon, Warren Moon, Brad Johnson, Randall Cunningham, Jeff George and Daunte Culpepper. Carter’s spike in catch rate variance came when Brad Johnson (1997) gave way to Randall Cunningham (1998). This shows that there is so much more to a receiver’s productivity or ability to catch passes than just his hands alone (or stickum on those hands). The QB and offensive scheme play major roles, and Jerry Rice was extremely fortunate to be paired with Joe Montana and Steve Young for most of his NFL career.


Next, I ran a similar analysis looking at the last 5 years of the “pass heavy” era we entered with the hit rule changes of the 2010 season.  I analyzed each player’s season in the same manner as I did for the earlier period.  What we notice first is a decline in the average YIA/reception as compared to the 1992-2000 time frame.  Looking at all receivers with 75+ receptions, the avg YIA/rec was 9.5 in the earlier time frame; from 2010-14, it decreased to 8.5.  Some of this can be explained by the fact that the middle of the field is more open now, and receivers do not fear hits across the middle, so QBs more routinely throw completions into the short middle of the field.

That aside, I’ve overlaid Rice’s seasons onto this group of modern day WRs.  Despite the improved tackiness of modern receiving gloves, Jerry Rice’s numbers still stand well above most of his modern-day peers.  We should note that because the 2010-14 period was very different from the prior period, I ran a separate regression unique to this later period, and applied Rice’s numbers to the new formula to obtain a new predicted catch rate.  That is why, for example, his 1994 predicted rate is 60.4% here rather than the 59.4% from the 1992-2000 regression.  In other words, receivers ARE catching balls at a better rate now than they were during Rice’s era, so his overachievement gets reduced slightly, from 14.7% down to 13.7%.  Clearly, however, that makes minimal difference:

The only receiver with a superior adjusted catch rate since 2010 than Rice had in his 1994 season was Marques Colston in 2011.
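The era adjustment described above amounts to scoring the same season under two different regressions. In this sketch, both coefficient pairs are hypothetical placeholders (not the actual fitted values from either era), chosen only to show the direction of the adjustment; the 8.8 air yds/reception figure for Rice’s 1994 is from the text above.

```python
# Sketch of the era adjustment: score one season under two regressions.
# Both coefficient pairs are hypothetical placeholders.

def predicted_catch_rate(yia_per_rec: float, slope: float, intercept: float) -> float:
    return intercept + slope * yia_per_rec

era_1992_2000 = (-0.024, 0.81)   # hypothetical earlier-era fit
era_2010_2014 = (-0.026, 0.84)   # hypothetical modern-era fit

yia_1994 = 8.8  # Rice's 1994 air yds/reception, from the article
old_pred = predicted_catch_rate(yia_1994, *era_1992_2000)
new_pred = predicted_catch_rate(yia_1994, *era_2010_2014)

# The modern fit predicts a higher baseline catch rate, which shrinks the
# measured over-achievement (mirroring the 14.7% -> 13.7% adjustment above).
print(round(old_pred, 4), round(new_pred, 4))  # 0.5988 0.6112
```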


The Conclusions show that we were unable to find data establishing either of the two points we set out to test.  We wanted to find data that shows (or does not show) that:

  1. There was a time when it became clear that Jerry Rice’s ability to hold onto the football dramatically improved, and
  2. Such an improvement was unnaturally well above any of his peers.

It’s very clear that Jerry Rice had a tremendous career.  In particular, his 1994 season was outstanding, and his 1994-96 seasons were three of the best we’ve seen from one receiver.  But his rise to and descent from those 3 seasons is not unlike what we’ve seen from many good receivers.  There was not a clear, sudden change in Rice that would indicate anything abnormal, such as a huge advantage he suddenly gained over his peers. Additionally, it’s clear that the 49ers offense, powered by Steve Young at the time, was strongly tied to high catch rates. It was not Jerry Rice alone who pulled up the average; many players with 35+ receptions posted catch rate variances very similar to Rice’s.

But one thing is clear: Rice’s ability to catch the ball from Young in that 49ers offense has withstood the test of time, and even modern-day NFL receivers, many playing with great quarterbacks, do not have the positive catch rate variance that Rice displayed. Michael Irvin is another player who needs to be mentioned in these concluding paragraphs, as his catch rate variance was likewise spectacular and one of the best we’ve ever seen, particularly over a span of multiple seasons. Overall, during the 1992-2000 span, it is Michael Irvin whose catch rate variance was the best in the NFL, slightly superior to that of Jerry Rice. Catch rate is not the deciding factor in which player is the best receiver, but it should be a statistic that gets included in the discussion.

The one issue that remains is that, unfortunately, we currently can’t look prior to 1992.  So it is possible that something more obvious and abnormal would appear if we could look at that data.  But given this analysis, while Rice may have gained an edge using stickum, it’s very difficult to use the data to show when the practice started.  Additionally, it’s very hard to separate whatever boost it provided from his own prowess as a receiver. It surely was “some” type of factor, otherwise he would not have used it (unless it was purely a mental edge). But the data does not give us any clues, and we’re sticking to what the data tells us.

At the end of the day, this study and the associated mini-analyses were very interesting and resulted in some fascinating infographics, but ultimately did not prove a “when” for the start of the stickum use, or a “how much” for the edge Jerry Rice received from using stickum to catch passes. Was Jerry Rice an outstanding receiver? The data analyzed between 1992-2000 clearly confirms that he was. Did the stickum help his numbers? That is impossible to precisely determine; additionally, we don’t know if the use he admitted to occurred during this period or earlier in his career (which we lack the data to analyze). But obviously, stickum is going to improve your catch rate. Most likely, the majority of passes caught by Rice were passes he would have caught without stickum, and a much smaller percentage were passes he would have dropped but for the stickum. But the edge it gave him simply does not show up visually in the data, and it would be pure speculation to estimate that edge absent more obvious findings from my analysis. My analysis is not the only way to look at the numbers; there are a myriad of other possibilities to examine from a data analysis perspective. But based on the data I chose and the methodology I used, I cannot conclude any obvious statistical edge that Jerry Rice may have received by using stickum.

Note: I will add that an analysis such as this is subjective by its very nature, since it is based on data such as “targets” and backs out “yards after the catch”. Whereas stats like receptions, touchdowns, fumbles and field goals are very precise, there is a lot of gray area surrounding a stat like “targets”. Often one can look back at film and conclude that a “target” to a certain receiver was totally uncatchable, and thus should not be held against that receiver. One could perform this same analysis using “drops” instead of “targets”. I did look at drop rate instead of catch rate, but found no relationship between drop rate and air yardage on passes, so I preferred the correlation between catch rate and air yardage. As it turns out (as the below graphics show), from 1992-2000 Jerry Rice had an average rate of drops. Some seasons were noticeably better, such as 1995 and 1996 (in the 2-3% range), while others were worse, such as 1992, 1993 and 1999 (in the 7-9% range). Drops, like targets, are sure to be subjective.

The first graphic looks at player average across the entire period, the second graphic separates by individual season so you can select players. Cris Carter was noticeably a beast when it came to not dropping the football, and he claims to never have used stickum.