About Warren Sharp

Warren Sharp of SharpFootballAnalysis.com is an industry pioneer at the forefront of incorporating advanced analytics and metrics into football analysis. A licensed Professional Engineer by trade, Warren applies the same critical thinking and problem-solving techniques to his passion: football. After spending years constructing, testing and perfecting computer models written to understand the critical elements of winning NFL football games, Warren's quantitative analytics are used in private consulting work, elements of which are publicly shared on SharpFootballAnalysis.com. To contact Warren, please email [email protected] or send a direct message on Twitter to @SharpFootball.

The Unbelievable Story of the 2017 Colts

By Warren Sharp

I have a pretty unbelievable story to share regarding the 2017 Colts.  This team has largely been forgotten, because they didn’t have Andrew Luck & the coaching staff was fired after the season. This is a story about the consequences of horrendous decision making.

This is what happens when a team doesn’t pay attention to detail.  This is what happens when they don’t even know the details because they haven’t studied them.  They haven’t studied them because they haven’t bothered to crunch the data and realize there are details to be found.  That’s analytics.  The use of math or statistics to gain valuable knowledge from data.  Knowledge which can be used to recommend action or guide decision making.  Literally, the Colts didn’t know what they didn’t know.  But if they bothered to break it down, here is what they would have learned.  And perhaps they would have learned it in time to do something about it.  To win more games.  To save jobs.  To change their season:

The 2017 Colts finished as the 3rd worst team in the league, per the standings, and they won only 4 games all season.  That much you know.  You probably don’t know any of the rest.  Hell, the Colts themselves probably don’t know most of it…

(NOTE: I shared this story on Twitter.  Presenting the data in a step-wise manner, like tweets in a thread, seemed easier to digest than compacting it into paragraph form, so I maintained that format with the bullet points below.)

  • The 2017 Colts may have finished 4-12, but in their 16-game season, the Colts trailed at halftime in only 6 games.  They held halftime leads in 9 games.  Yet they went 2-7 in games they led at halftime.
  • The 2017 Colts are the ONLY team in the last 27 years to lose at least 7 games which they led at halftime.
  • Their leads didn’t mysteriously evaporate in the 3rd quarter, however.  The Colts led through three quarters in 9 games.
  • The 2017 Colts are the ONLY team in the last 20 years to hold a lead entering the 4th quarter in at least 9 games, but win no more than 4 games.
  • Last year, 25 of 32 teams lost no more than one game when leading entering the 4th quarter, and roughly 35% of them won every game. Six teams lost two games when entering the 4th quarter with a lead. The only team to lose more than twice was the Colts, who lost FIVE times.
  • Last year, a total of five teams held a lead entering the 4th quarter in 9 games.  Every team posted a winning record (aside from the Colts):
    • Jaguars (10 wins)
    • Chiefs (10 wins)
    • Falcons (10 wins)
    • Chargers (9 wins)
    • Colts (4 wins)
  • Last year, the only other teams to post 4 or fewer wins on the year led entering the 4th quarter in an avg of only 2.7 games (aside from the Colts):
    • Browns (0 wins, led after 3Q in 1 gm)
    • Giants (3 wins, led after 3Q in 4 gms)
    • Texans (4 wins, led after 3Q in 3 gms)
    • Colts (4 wins, led after 3Q in 9 gms)
  • In their first 11 games of the season, the Colts led entering the 4th quarter in 8 games. The only two teams that led entering the 4th quarter more than 8 games (through 11) were the Super Bowl Champion Eagles and the Super Bowl runner-up Patriots.
  • Although they led entering the 4th quarter in 8 of their first 11 games, the Colts did not start the year 8-3.  Instead they won only 3 of these games.  They lost 5 games by blowing leads in the 4th quarter, to drop to 3-8 on the season.
  • The 2017 Colts led by one score entering the 4th quarter in 6 games, but won just one of the six.  They were down by one score entering the 4th quarter in 2 games and lost both.  They were 1-7 in games within one score entering the 4th quarter, despite leading in 6 of those games entering the 4th.

Clearly the 2017 Colts were not a great team. But truly terrible teams are unable to consistently build leads into halftime and into the 4th quarter the way the Colts routinely did. "Something" happened in the 4th quarter to cause such disastrous results.  Let's dive deeply into the analytics of their play calling and decision making to understand WHAT they did, WHY they did it, and HOW it affected the team.  Note that I had planned to save this for my 2018 Football Preview book, which I'm writing and will publish in late June.  But this was so incredibly breathtaking, I thought it needed to stand on its own:

ON FIRST DOWN

  • On 1st down in the 4th quarter, if a team is in a one-score game, they run the ball 53% of the time.  The Colts ran the ball 64% of the time, 3rd most in the NFL. This despite the fact that on 1st down runs they recorded just a 35% success rate (2.5 YPC), while they were successful on 53% of their passes with 7.8 YPA.
  • In these 4th quarter runs, an older Frank Gore posted just a 30% success rate.  A younger, fresher Marlon Mack recorded a 57% success rate, but received a third of the carries that Gore did.  As the data shows, Gore, potentially due to overuse and wear & tear, was clearly less effective than Mack, but was still used 3 times as often.
  • On 1st down in the 4th quarter, when winning by 1 score, the Colts ran the ball 79% of the time and recorded a 42% success rate on these runs (2.6 YPC).  However, on their passes, they recorded a 100% success rate with 22.0 YPA.
  • On 1st down in the 4th quarter, when winning by 1 score, the Colts used 11 personnel (3 WRs) 30% of the time.  The other 70%, they were in 12 or 13 personnel (1-2 WRs).
  • If they had fewer than 3 WRs on the field, they went 100% run, posting 2.4 YPC and a 38% success rate.
  • Here begins a theme of predictability. A huge key to winning in the NFL is being unpredictable. If the opponent knows your tendencies, you are waging an uphill battle. Especially if your “tendencies” are actually 100% “tells”.
  • How about 1st downs when losing?  On 1st down in the 4th quarter, if a team is losing they pass the ball 73% of the time on avg (27% run).  But the Colts were 43% run, 2nd most in the NFL. The only team with a greater run rate was the Rams, but they were successful on 57% of these runs.  The Colts were successful on just 32%.

ON SECOND DOWN

  • On 2nd down in the 4th quarter when winning, if the Colts did not use 11 personnel (3 WRs) they went 100% run.  These runs averaged just 1.7 YPC.

ON BOTH EARLY DOWNS

  • Combining 1st and 2nd down in the 4th quarter, in a one-score game, the Colts ran the ball on 34 of 40 (85%) plays from non-11 personnel (fewer than 3 WRs).  They averaged just 2.0 YPC with a 29% success rate.
  • When not using 3 WRs, they used 12 personnel (1 RB, 2 TEs, 2 WRs) on 95% of their plays, the highest rate in the league.  Their 85% run rate when in non-11 personnel was 3rd highest in the league.
  • In the 4Q when playing with a lead, the Colts were the only team in the NFL to NEVER pass unless they had 3 WRs on the field on early downs. If they had a lead & anything less than 3 WRs on the field, they ran 100% of the time. They avg’d 2.1 YPC. There was ZERO threat to pass.
  • Bottom line:  the 2017 Colts were the most predictable early down offense in the NFL in the 4th quarter of one-score games.  But it gets worse…

ON THIRD DOWN

  • On 3rd down with a 4th quarter lead, once again the Colts were 100% run unless they were in 11 personnel (3 WRs).  These runs were so predictable, the Colts posted a 0% success rate and they gained an average of 0 YPC on these runs.

COMBINING EVERY DOWN

  • Combining every down in the 4th quarter, if the Colts were leading, they went 100% run unless they lined up in 11 personnel with 3 WRs.  They were the only team in the NFL to go 100% run when fewer than 3 WRs were on the field.  With these predictable runs, they gained just 1.9 YPC and recorded a 38% success rate.

OTHER PROBLEMS

  • 4th quarter predictability led to inefficient rushing, which severely hampered the Colts' ability to win those 9 games they led entering the 4th quarter.  But rushing wasn't the sole cause of their horrible 4th quarter results.
  • While nursing a one-score 4th quarter lead, the Colts were the only team with zero passing TDs and 2 interceptions.  No other team threw 2 interceptions in that situation.
  • In the 4th quarter when leading, the Colts' passing efficiency ranked 28th in the league, with only 29% of pass plays grading as successful (league avg was 43%).  Outside of the Colts' 40-yard line, their passing success rate dropped to 21% (avg was 44%).
  • Why were the Colts so bad when passing with a 4th quarter lead?  First, we need to understand that the Colts were primarily an 11 personnel team when passing, meaning 3 WRs.  When passing, they used 3 WRs approximately 79% of the time and 2 or fewer WRs 21% of the time.
  • The Colts were substantially more efficient when passing from 2 or fewer WR sets.
    • When using 3 WRs on the season, they were successful on 43% of passes, delivered an 82 rating and averaged 6.8 YPA.
    • When using 2 or fewer WRs, they delivered a 52% success rate, with a 91 rating and 7.8 YPA.
  • The Colts were 1 of only 6 teams to post a sub-45% success rate with 3+ WRs and an over-50% success rate with 2 or fewer WRs.
  • The Colts were extremely successful when passing with 2 or fewer WRs with a lead in the first 3 quarters, recording an incredibly strong 58% success rate on those passes.  That rate was 8% better than the NFL average.
  • But for whatever reason, when leading in the 4th quarter the Colts NEVER attempted a pass using 2 or fewer WRs.  They only used 3+ WRs.  And on these 3+ WR attempts, they recorded a 33% success rate, 6.9 YPA and a 46.2 rating.

The Colts were completely predictable in a number of ways in the 4th quarter. They worked against themselves. They refused to pass out of their most successful passing personnel groupings. Their predictability in rushing led to inefficiency, which caused hard-earned leads built through three quarters to slip away in the fourth.

When their leads slipped away, why didn't the Colts come back any of the times they trailed by a close margin in the 4th quarter?

  • When down one score in the 4th quarter, the NFL average is 64% pass on early downs.  When they do run the ball, NFL average is 4.5 YPC and a 49% success rate.
  • When down one score in the 4Q, the Colts ran the ball 10% more than average on early downs.  This would only make sense if they were phenomenal when running.  But they averaged just 1.9 YPC and posted a 19% success rate.  Both were the WORST of any team in the NFL.
  • Meanwhile, on early down passes when down one score in the 4Q, the Colts averaged a 53% success rate (well above NFL average of 47%) and they posted a 104 passer rating (well above the NFL average of 80).   Choosing to run 10% more than average & sacrificing such value was unwise.
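
All of the splits cited above come from the same kind of situational filtering of play-by-play data.  For readers curious what that self-scouting looks like in practice, here is a minimal sketch; the file name and column names (plays_2017.csv, off_team, qtr, score_diff, down, play_type, yards, success) are hypothetical placeholders for illustration, not the actual Sharp Football Stats data:

  import pandas as pd

  # Hypothetical play-by-play table, one row per offensive snap.
  pbp = pd.read_csv("plays_2017.csv")

  # Colts early downs in the 4th quarter of a one-score game.
  situation = pbp[
      (pbp["off_team"] == "IND")
      & (pbp["qtr"] == 4)
      & (pbp["score_diff"].abs() <= 8)
      & (pbp["down"].isin([1, 2]))
  ]

  runs = situation[situation["play_type"] == "run"]
  run_rate = len(runs) / len(situation)
  ypc = runs["yards"].mean()
  success_rate = runs["success"].mean()  # 'success' assumed to be a 0/1 flag per play

  print(f"run rate {run_rate:.0%}, {ypc:.1f} YPC, {success_rate:.0%} success rate")

Swap in any quarter, score margin, down or personnel grouping and the same handful of lines reproduces every split discussed in this article.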

Here is a look at the overall play calling from the Colts in 2017, courtesy of Sharp Football Stats:

[Chart: Colts Playcalling]

 

Over the course of the entire game, it's evident the Colts were far too predictable, featuring substantial amounts (see the middle column) of:

  • Frank Gore on 1st down
  • Frank Gore on 2nd down
  • Hope TY Hilton bails them out on 3rd down

In this visual data, the amount of red in the middle column (representing unsuccessful plays) jumps out at you.  Meanwhile, the green for those same down & distance situations in the far right column represents the Colts' most successful plays.  Clearly they had plenty of other options beyond riding Frank Gore so frequently on early downs, options that weren't explored nearly enough.

The sad part is, the Colts probably didn’t even realize or measure the impact of their 4th quarter play calling. It was far worse than they could have imagined. My guess is they had no idea that they NEVER passed while maintaining a 4th qtr lead without 3 WRs on the field.

They likewise probably had no idea that the only teams to enter the 4Q with a lead more often than they did (through Week 12) faced each other in this year's Super Bowl: the Eagles & Patriots.

It’s unfortunate to sit back now and realize that many of these 4th quarter leads which became losses were avoidable with stronger attention to detail and a better focus on analytics.

Oftentimes fans and media are quick to place blame on players for making a mistake on the field, without realizing the play call wasn't optimal to begin with and that the majority of the blame should shift elsewhere. Understanding who is responsible for the error is essential to correcting it and ensuring it doesn't become repetitive.  Because repetition leads to habit-forming behavior.  And unfortunately, that is exactly what happened to the Colts' play calling.

When placed into certain situations (leading in the 4th quarter), the Colts changed the strategy and style of play that had earned them the lead through the first three quarters.  They played tighter.  They played predictably.  They played not to lose.  They refused to use their optimal play calls.  They forced their quarterback (inexperienced as he was) into predictable passing situations and allowed the defense to attack, knowing what to expect.

Reviewing this in hindsight is certainly infuriating.  I will be doing similar dives into the 31 other teams for my 2018 Football Preview book, out in late June.  I'm guessing none will be quite as eye-opening as what the 2017 Colts did to themselves in the fourth quarter, and the monumental impact it had on their final record, but we shall see.

For the 2017 Colts, this is what happens when a team does not pay attention to detail.  Details they would only know if they incorporated more analytics.  Analytics isn't a dirty buzzword.  Teams have been winning Super Bowls for decades using analytics, such as Bill Walsh's 49ers.  Analytics is simply the use of math or statistics to gain valuable knowledge from data.  If you add up the Colts' rushing yards from 1-2 WR sets when leading in the 4th quarter, divide by rushing attempts, and realize that these runs are totally inefficient, you're essentially using analytics.  That sounds far less scary and far more basic, and teams should accept this level of detail with open arms and incorporate it into their arsenal as they try to improve and put the best product on the field, the one that gives them the best chance to succeed.
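
Stripped to its core, that calculation really is just a few lines of arithmetic.  A bare-bones sketch, with the individual run values invented purely for illustration (they are not the Colts' actual plays):

  # Yards gained on each run from 1-2 WR sets while leading in the 4th quarter.
  # These particular values are made up for the example.
  rush_yards = [3, -1, 2, 4, 0, 1, 5, 2]

  ypc = sum(rush_yards) / len(rush_yards)
  print(round(ypc, 1))  # 2.0 yards per carry: totally inefficient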

Once more teams start self-scouting in this manner, they will avoid more pitfalls like the ones the 2017 Colts faced.  The impact it had on their season and final record was profound.  This level of analysis can also find holes in opponents on a weekly basis, such as why I believed the Patriots would have success throwing on the #1 pass defense of the Jaguars in the AFC Championship, or why I believed the Eagles would have immense success running on the Patriots in the Super Bowl.

With the Eagles winning the Super Bowl this year after seeing a ton of success thanks to incorporating far more analytics into their operation than most, I believe we're turning the corner.  Teams that do not use more analytics to self-scout and identify strengths and weaknesses they weren't otherwise aware of, or to scout their opposition, will soon fall behind.  Way behind.  "Analytics" are not a robot overlord sent out of the Matrix to tell a coach what to do and expect subservience. Intelligently incorporating analytics will not paralyze a team nor make it too smart for its own good.

Working with experts who can share analytical insights tactfully, while allowing the coaching staff to take what they want from the information as part of their own final decision making process, will no longer be the path less traveled in the NFL.  It is now, will be in the future, and truthfully, has been for some decades, the easiest way to win in the NFL. 

Analytics, Free Agency, New NFL Models and much more – 2/15/18

The 2017-2018 season has ended, but the 2018 offseason is underway.  Warren Sharp (@SharpFootball) of SharpFootballAnalysis.com and Evan Silva (@EvanSilva) of Rotoworld.com fill a jam-packed episode with takeaways and insights, including discussion of the usage of analytics by both Super Bowl teams, especially the Philadelphia Eagles.  They also discuss the future of analytics.  Then they take a deep dive into the top and deepest position groups in free agency and discuss individual players in those classes.  Finally, they discuss DFS and sports betting now that the NFL season is over.  Next up, Warren brings on a special guest, Vegas icon and pro sports bettor Bill "Krackman" Krackomberger (@BillKrackman), to discuss that same subject and what NFL bettors should or should not do now that the season is over.

  • 0:00:00 – Intro
  • 0:01:41 – Super Bowl takeaways
  • 0:11:16 – The future of analytics in football
  • 0:29:07 – The top free agents of 2018
  • 0:29:32 – Top DBs of the 2018 free agent class
  • 0:36:13 – Top QBs of the 2018 free agent class
  • 0:43:33 – Top WRs of the 2018 free agent class
  • 0:49:13 – Discussion of new NFL model
  • 1:00:04 – Bankroll management now that the NFL season is over
  • 1:08:18 – Bill Krackomberger

You don’t want to miss it!  Plus, get a huge discount when you join Bill Krackomberger’s website from this private page: CLICK HERE.

Be sure to check out sharpfootballstats.com (on your desktop) for the advanced, visualized Strength of Schedule, the Sharp Box Score and other data tools.

Subscribe on iTunes and listen below:

Ignore Virtually All Offseason NFL Strength of Schedule Information

By Warren Sharp

Measuring future strength of schedule by incorporating prior-year win rate is lazy, inaccurate and inefficient.  But like most things in the NFL, because it has been an accepted method for years, there is a strong reluctance to shift away from it.  You don't need any math at all to understand that this method must be flawed.  In the case of 2018 strength of schedule, the traditional method of calculation looks at teams' 2017 W/L records to predict 2018 strength of schedule.  That seems nonsensical, particularly when you consider the small sample and high variance of a 16-game season.  By relying on W/L rate, we reject efficiency metrics and per-play performance measures that would dramatically increase the sample size.  We ignore play-level data that could help smooth out performance and instead pull just one, single number from each game: did the team win?  A win or a loss.

I can tell you it’s pure nonsense, but for the dinosaur generation hung up on accepted behavior, I’ll use math to prove that it’s nonsense.

Teams change considerably from year to year, particularly from a W/L perspective.  With such a small sample size of 16 games, a significant part of a team’s overall record is driven by pure chance and luck.  Fumble recovery and tipped passes are two major factors in turnover margin, and both are difficult to predict.  A close game can hinge on whether one ball gets fumbled or one pass is tipped instead of caught by a WR.  Whether the offense recovers the fumble or the tipped pass turns into an interception can alone swing that close game.  Teams that win the turnover battle win a staggering 79% of games.

Non-offensive touchdowns are major factors as well, and that means more than just special teams touchdowns.  A game can swing on what a DB does after he intercepts a pass at his own 30-yard line: is he tackled immediately, or does he evade the tackle and return it 70 yards for a touchdown?  Teams that win the return-touchdown margin win 75% of the time.

These factors play major roles in winning games and thus, a team’s end of season record.  But they are not descriptive of a team’s strength.

I measure strength of schedule in a variety of ways at Sharp Football Stats.  You can visit the site and see strength of schedule for teams across 30+ metrics.  Such as which offense faced the weakest defenses against RB-passes, or which defenses faced the worst offenses at explosive passing.

In such a small sample size sport, context is king in the NFL.  And a W/L record from the prior season is devoid of all context.  It literally has zero context.  Which is why, for instance, when sports books look to hang regular season win totals for the next season, the very first thing they look to do is attempt to add context to a team’s prior year record.  “This team won 5 games last year, but they were 1-6 in games decided by one score and they lost the turnover battle by 2+ turnovers in 4 of their other games.  Their starting QB was injured in week 4 and he missed the remainder of the season.”  And on and on.  Building context to understand why the team went 5-11, and understanding how that same team might look totally different the next season, is vital for projecting future performance from a W/L perspective.

When mainstream websites post stories this time of year about strength of schedule, they are taking their reader down a dead end path with two stops:

1)  The first stop is the schedule itself.  They list all of the teams and the combined W/L percentage of their future opponents using those opponents' prior-year records.  And then they rank the schedules from hardest to easiest.  Their goal is to showcase which teams have the "toughest" schedule the next year.  That's their first stop.

2) Their second stop is forecasting success or failure the next season because of the strength of schedule.  They will look to the extremes of the strength of schedule, and imply that it will be really hard for the teams with the toughest schedule to do well the next year.  And the teams with the easy schedules are set up for a lot of success.

So let’s take a swipe at each of those two stops.  First, let’s examine whether or not the traditionally used offseason strength of schedule comes anywhere close to helping predict actual strength of schedule for that next season. We’ll do this by comparing prior year W/L rate of a team’s opponents (the “traditionally used strength of schedule calculation”) to the actual W/L rate of the team’s opponents in that upcoming season, to see if those two metrics are closely related.  If they are, it’s legitimately a good method to forecast strength of schedule.  If they are unrelated, it’s worthless.  [We’ll stay basic and ignore that even same-season W/L record is an inaccurate way to determine strength of schedule, and efficiency metrics are far superior.]

Then, let’s examine whether the traditionally used strength of schedule can help explain team success the following year.  Do teams that have tough schedules (based on the traditional method of calculation) actually fare worse, and if so, how much worse than the teams with easier schedules?

 

Does the Traditionally Used Strength of Schedule Calculation Help to Predict Actual Strength of Schedule?

This should be the first question any writer asks when tasked by their boss to write an article on strength of schedule.  [Let's assume their boss doesn't know any better, and just wants *CLICKS* from *CONTENT*.]  The entire point of "forecasting" strength of schedule before the season is the hope and belief that this calculation will be close to reality.  Using 2018 as an example, the writer must believe that teams' 2017 W/L rates will be reasonably similar to their actual 2018 W/L rates for such a calculation to have merit.  If the 2017 W/L rate of opponents is nothing like the actual 2018 W/L rate of opponents, what is the point?  [Apart from *CLICKS* from *CONTENT* obviously.]

The exercise is pretty simple by use of linear regression.  How much of the actual W/L rate of a team’s future opponents is explained by the W/L rate from the prior year for those same opponents?  Using data since 2010, the answer is 5.7%.  If we define strength of schedule as opponent’s combined W/L rate (the traditional method), only 5.7% of a team’s actual strength of schedule is explained by the W/L rate from the prior season.  The other 94.3%, the vast majority, is not explained by that prior season W/L rate at all.  The p-value is acceptable (0.0001) but the R-squared is only 0.057.  Here is the plot of this data since 2010:

[Chart: Preseason vs. Year-End SOS, since 2010]

If we shrink it down to just the last three years, we find the p-value has moved slightly outside the acceptable range (0.055) and the R-squared is even worse, down to 0.039.  Meaning just 3.9% of a team's actual strength of schedule is explained by the W/L rate from the prior season.  Visually, it is easy to see the lack of a meaningful relationship in how far the logos are spread out vertically:

[Chart: Preseason vs. Year-End SOS, since 2015]

Statistically, the null hypothesis is that there is no relationship between the traditional offseason measure of strength of schedule and actual strength of schedule in that season.  In the recent sample we cannot even reject that null hypothesis, and in the larger sample the relationship, while detectable, explains barely 6% of the variation.  Either way, traditional offseason strength of schedule simply isn't predictive in any meaningful way.

 

Can the Traditionally Used Strength of Schedule Calculation Help to Predict Successful or Unsuccessful Seasons?

This is the second stop.  Most articles take the leap that because a certain team has a really tough strength of schedule, that team may struggle to win games.  And the opposite is true as well;  easier schedules should result in more successful seasons.

Let’s pretend that we didn’t run the first test above.  Let’s pretend that we still believe  the traditionally used strength of schedule is acceptable and beneficial to showing real strength of schedule for the upcoming season. Testing this hypothesis is done in a similar manner via linear regression.  And we will test to see how much of a team’s actual wins are explained by the traditionally used strength of schedule calculation.

The results are terrible to say the least.  The R-squared value is 0.00028, which means that 0.028% of a team’s wins are explained by the traditional strength of schedule calculation.  In addition, the p-value is totally unacceptable (0.79).  Let’s see how this looks graphically:

[Chart: Preseason SOS vs. Year-End wins, since 2010]

Examining the trend line, it is apparent that it actually trends upward ever so slightly, meaning that teams with a tougher schedule actually won slightly more games.  In other words, the relationship runs in the opposite direction of what the traditional logic would predict.  How could this be?  It hinges on the fact that the best teams each year are given "slightly" more difficult schedules the next year.  At least, that is how it is designed to work.  The truth is, where a team finishes in the prior season changes only two games on its schedule for the next season.  Here is how the 16 games are determined, using the Patriots, the team that finished in first place in the AFC East, as an example:

  • 6 games against a team’s own division (AFC East)
  • 4 games against an entire division within your own conference (AFC North, West or South – let’s assume the AFC North for this example)
  • 4 games against an entire division outside your own conference (any division from the NFC)
  • 2 games against similarly placed teams in divisions within your conference (since NE finished in first, they play the first-place teams in the AFC West and AFC South, as they already play the first-place team in the AFC North by virtue of the second bullet)

So in reality, the "best teams" from the prior year only have to play 2 opponents based on those opponents' prior-year division finish.  And it's therefore very possible that the "best teams" from the prior year will win many games the following season even if they are playing what calculates (using the traditional method) to be a tough schedule.
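
To put a number on that, here is a quick sketch of how little of a 16-game schedule actually depends on prior-year finish; the breakdown simply restates the bullets above:

  # How a 16-game schedule breaks down under the rotation described above.
  schedule = {
      "own division (home and away)": 6,
      "rotating division, same conference": 4,
      "rotating division, other conference": 4,
      "same-place finishers, same conference": 2,  # the only games tied to prior-year finish
  }

  total = sum(schedule.values())  # 16
  place_based = schedule["same-place finishers, same conference"]
  print(f"{place_based} of {total} games ({place_based / total:.1%}) depend on prior-year finish")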

It’s not until we shrink the sample down to the last two years that we see the trend line return to a relationship which suggests that the tougher strength of schedule results in fewer wins.  However, the results are still completely unacceptable.  The R-squared is 0.0019 and the p-value is 0.73, as illustrated below.  Both teams who made it to the Super Bowl this year played much more difficult schedules than average, and the Super Bowl winning Patriots from 2016 did as well.  On the opposite end of the spectrum, the 2017 Bengals played the 4th easiest schedule of the past two years and didn’t even hit .500.  

[Chart: Preseason SOS vs. Year-End wins, since 2016]

The null hypothesis is that there is no relationship between the traditional offseason measure of strength of schedule and how successful a team's season turns out to be.  We cannot reject that null hypothesis: traditional strength of schedule doesn't predict anything related to future success.

 

Please Stop Using Offseason Strength of Schedule Information

It is totally unhelpful.  It does NOT help to predict actual strength of schedule for the upcoming season.  And it especially does NOT help to predict that a tough schedule (by its own formula) will result in fewer wins or an easier schedule will result in more wins.  I completely understand the desire to discuss the NFL.  But discussing strength of schedule in this manner is just foolish.

Unfortunately, I can’t tell you how many times you will see strength of schedule for the 2018 season based on opponent’s 2017 W/L results.  Between now and July, you will see hundreds of articles published on this subject.  You will hear it discussed on countless radio shows.  You will see graphics packages built and thrown up repeatedly on mainstream NFL programming.  It will be unavoidable.  And I shudder to think of the tweetstorm that we’ll be inundated with related to the traditional calculation of strength of schedule.

Don’t take any of it to heart.  Feel free to refer them to this article.  I’ve written about this before and will inevitably do the same in the future.  You don’t need any math at all to understand this method must be flawed.  But sure enough, the math supports that not only is it a tremendous waste of time to study traditional offseason measure of strength of schedule, it is even less meaningful to the prediction of 2018 wins and losses than anyone would think.

If we want to discuss strength of schedule, there are FAR more accurate inputs for the calculation than W/L rate, even actual in-season win rate.  Readers want to know whether their team plays tough opponents next year.  Discussing those opponents in an article, with context about those teams and what they may look like in 2018, is vastly superior to the traditional article, which is centered around the prior-year win rate of current-year opponents.  I'll certainly spend time this offseason sharing a methodology I've created to best forecast strength of schedule.  And it has nothing to do with prior-year win rate.  But I'll be the first to say, as I always do, that far too much is made of offseason strength of schedule, and far too little is said about in-season strength of schedule.

We just finished a season where article after article continued to discuss the "mighty" defense of the New England Patriots, and how they "turned it around" from earlier in the season and played so much better down the stretch, allowing so few points.  This was discussed ad nauseam in the two weeks leading up to the Super Bowl.  And there was zero discussion of the fact that the Patriots defense clearly looked better thanks to playing a schedule with just one top-10 offense.  And then Nick Foles and the Eagles, the #8 offense, put up 41 points on this perceived "strong" Patriots defense.

[Chart: New England Patriots strength of schedule]

This reiterates my point that too much is made of strength of schedule in the offseason, but the sad part is, it’s not even being calculated in a useful manner whatsoever.  In typical NFL fashion, much hype is delivered to something that isn’t calculated in a manner that correlates to “real” strength of schedule, and very little is made of the most useful and best information (in-season strength of schedule).  This should surprise no one.