Week 14 Hangover and Updated Efficiency

Week 14 Hangover

As promised, I wanted to review what I got right and what I got wrong in my previous post, which can be found here.

Cowboys’ offense versus Giants’ defense

Although I got the prediction wrong, I think my reasoning was correct. What my efficiency numbers do not do well is capture the likelihood of an explosive play: gaining 5 yards on 3rd and 5 counts the same as gaining 25 yards on 3rd and 5. I am currently trying to find a way to change this, but it seems the Giants’ offense is built around OBJ making hugely explosive plays a couple of times a game. Take out the 61-yard TD pass to Odell and Eli was 16/27 for 132 yards. This matchup didn’t disappoint, and it seems the formula to beat the Cowboys is a crazy good defense with a serviceable offense.

Baltimore at New England

I picked this one right and got the logic correct again, but I have to admit I expected the game to be closer and the score to be lower. The game plan from New England was particularly interesting: to manufacture a run game, they used a few receiver sweeps and ran outside the tackles a good bit. The final stat line is a little misleading because the Patriots got up big relatively early and thus ran the ball more, but they did what I thought they might do by sticking to the passing game against this defense.

Washington’s offense versus Philadelphia’s defense

This game was weird. I got the pick wrong here, but in the first half the Eagles should have been up big. Their defense did a great job against this offense for most of the game, but their offense kept shooting itself in the foot. This game also illustrated that predicting yardage gained, and not simply the binary success/failure of plays, is really important.

Atlanta’s defense versus LA’s offense

I got this very wrong and couldn’t be happier about it. I was elated to see the Falcons’ defense destroy this Rams’ offense. Outside of the first drive and the last quarter, the Rams were hardly able to move the ball. I am still cautious about the Falcons’ defense. I think Vic Beasley Jr. has been insanely good, but I worry about the secondary. Deion Jones has looked very rangy over the middle of the field as of late, but I still worry about the outside talent and all of the “deep 3” defenders in their deep-3 zone concept. With Trufant out, teams can really attack this team deep.

Why are the Raiders and Chiefs good?

Well, I think the short answer is that they are inefficient but thrive in the extreme parts of the game: big plays for the Raiders and turnovers for the Chiefs. I do want to say that I have little faith in the Raiders’ offensive coordinator. From the efficiency statistics in my previous post, it was clear that the Chiefs were very bad at defending the run. Setting my statistics aside and just watching the game, the Raiders went 40% run versus 60% pass while averaging 4.7 yards per rush versus 2.9 yards per pass attempt. So what were the Raiders doing? Their QB has a jacked pinkie on his throwing hand and it was 12 degrees Fahrenheit. This is one of those scenarios where the coach believes his team is so good that he focuses on its strength instead of the other team’s weakness. Focusing on your opponent’s weakness is the way of good football coaches and the Sith.

Season Pick Stats

Logic: 4/5

Outcome: 2/4 (didn’t pick the RAI/KAN game)


Updated Efficiency

These numbers reflect games through Week 14.

[Chart: overall efficiency through Week 14]

[Chart: offensive pass/run efficiency through Week 14]

[Chart: defensive pass/run efficiency through Week 14]


NFL Offensive/Defensive Efficiency and Week 14 Things to Watch

Hey everyone! Over the next year, I want to understand the current state of football analysis and hopefully use that knowledge to start innovating in the space. So while this is crazy late in the season, I want to start writing a weekly post about some of the things I have been looking at. The first thing I have looked into (and the main insight of this post) is Football Outsiders’ DVOA. What I wanted to do was create a proxy for these numbers using data that I collected, cleaned, and summarized myself. I still believe the DVOA ratings have more bells and whistles and thus are more accurate, but hopefully over the next few weeks I can add some novel insight to these numbers.

Another thing that I want to do is create a “hangover” post where I basically document what I got wrong and what I got right in my predictions. One of the things that I hate about a lot of sports writing is the lack of accountability to numbers and/or predictions that are produced. I’m pretty sure the fundamental idea of learning is making mistakes, understanding them, and learning from them. So having said that, if you watch the games and think “man, that dude is an idiot and got this really wrong” then feel free to comment on this post or let me know in a tweet @pspitler3. I welcome the criticism.

The numbers I present are standardized efficiency numbers over expectation. To determine efficiency, I defined a play “success” as a first down earning 40% of the yards to gain, a second down earning 50% of the yards to gain, and a third or fourth down gaining all of the yards to gain or more. A team’s efficiency is then its successes over its total plays. To account for the defenses or offenses a team faced, I looked at every opponent on that team’s schedule and computed the efficiency that opponent allowed (or produced) against the rest of the league, which gives an expected efficiency for that schedule. The difference from expected then shows how far above or below expectation a team performed given who they played. The results can be seen here:

[Chart: overall efficiency through Week 13]
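Quick aside before interpreting the chart: in code, the success rule is dead simple. A minimal sketch of my definition (the schedule-based expectation adjustment is left out):

```python
def play_success(down, yards_to_go, yards_gained):
    """Binary play success per the thresholds above."""
    if down == 1:
        return yards_gained >= 0.4 * yards_to_go
    if down == 2:
        return yards_gained >= 0.5 * yards_to_go
    return yards_gained >= yards_to_go  # 3rd/4th down: convert or bust

# A team's raw efficiency is then successes over total plays:
# efficiency = sum(play_success(*p) for p in plays) / len(plays)
```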

These results are pretty interesting in that they fit the narrative of the season quite well: teams have either elite offenses or elite defenses, but not both. The teams that do have both, in the cloud of SEA/GNB/CIN/TEN/NWE, seem to be the teams favored to win their divisions, with the exceptions of CIN (and maybe TEN). These teams seem to accurately represent the most “well rounded” teams in football. With a few exceptions (RAI/PIT/DET), it looks like the offensive efficiency numbers work well. DAL/NOR/ATL/WAS are some of the best offenses in the league this year. A future step for this might be adding in expected yards per success and how that correlates to efficiency; the idea being that the Raiders seem to pick up huge chunks of yards when they are successful. With the exception of JAX, the defensive efficiency numbers also seem to match the best defenses. One reason for JAX might be game situation: if a team is up by two scores, a “success” is defined more by shortening the game (burning clock) than by gaining yards.

To break this down a little further I wanted to see what the most efficient offenses and defenses were from a pass/run perspective.

[Chart: offensive pass/run efficiency through Week 13]

[Chart: defensive pass/run efficiency through Week 13]

This is interesting because it shows Dallas as by far the most efficient offense. They excel in both the run and pass game. NO/WAS/ATL round out the most efficient offensive units. On the defensive side of the ball it seems like teams are built to stop the run or the pass, but not both. Denver has an extremely efficient pass defense and Baltimore has an extremely efficient run defense. The Giants have the overall most efficient defense. They are also the only team to beat the Cowboys and make their offense look anything short of unstoppable. That leads me into…

Things to Watch:

  1. Cowboys’ offense versus Giants’ defense – In what is the most interesting matchup of the weekend, we get to see the most efficient defense play the most efficient offense. If I have to pick, I’ll take the Cowboys, mainly because the team they were in the first game is probably not reflective of the unit they are today. Their quarterback and running back are both rookies that have matured into their roles, and I also believe offensive line play improves as the season progresses due to the limited contact allowed in the offseason. However, if you’re expecting a blowout, I don’t think that is what we will see.
  2. Baltimore at New England – How one-dimensional can the Ravens’ rush defense make the Patriots? I am curious to see the Ravens’ rush defense and how it affects the pass/run distribution of the Patriots. The Patriots are notorious for picking on a team’s weakness, so I would expect Brady to have a lot of pass attempts in this game. On another note, was Baltimore’s offense last week a blip or the new norm? This one is going to be closer than a lot of people think, with a relatively low score. Belichick is a Sith Lord however… So expect the schemiest of the schemes in their path to dominance over the universe.
  3. Washington’s offense versus Philadelphia’s defense – The closest proxy to Washington’s offense is Atlanta’s offense from an efficiency standpoint. Philly’s defense made the Falcons’ offense look mortal in their matchup. Can they do the same to Washington’s? This matchup comes down to Philly’s rush game against Washington’s rush D. I’m going out on a limb and taking the Eagles at home.
  4. Atlanta’s defense versus LA’s offense – In the matchup that proves to be a case of the extremely movable object matching up against the supremely stoppable force, what happens? As much as it pains me to admit, Atlanta has one of the worst defensive units in the league from an efficiency standpoint. The upside is they get to face the most terrible offense in the league. Expect the fireworks to be returned to the store in this whamp whamp of a matchup. I expect the Falcons to win this game, but watching Goff look like a real quarterback is going to be annoying.
  5. Why are the Raiders and Chiefs good? – From a pure efficiency standpoint, both of these teams seem to be average or below average. I am extremely interested in watching this game to understand what I am missing and what I can add beyond efficiency alone to better represent these teams.


Draft Picks – An Undervalued Asset

The NFL draft is tomorrow! In a recent post I talked about how the Browns trading back was probably the best strategy and the one that a new, analytically-driven front office would probably pursue. I am going to do a few things with the formula developed in that blog post, then expand on the concept and try to get an understanding of how many “valuable” players each team can expect and their relative value. Both my analysis and the referenced blog use approximate value (AV) from pro-football-reference.com. It’s important to note that I used the formula from the blog’s chart, so my numbers will be slightly different due to rounding.

Below are the draft picks followed by their value over an average draft pick; a value of 500 means a pick is 5 times more valuable than the average draft pick. The future picks are unknown in that we don’t know how these teams will finish in 2017 and 2018, and thus which draft slots they are giving up. What is shown is the range of those values: I sampled draft slots from a uniform distribution, and the arrays show percentiles from 0 to 100 in increments of 10, i.e. the [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100] percentiles, where 0 and 100 represent the last pick and the first pick in the round respectively. So if the Eagles finish last, the Browns get the 1st overall pick and thus the highest value for that pick.
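For the curious, here is roughly how those future-pick ranges can be generated. A sketch: the pick_value curve below is a hypothetical stand-in for the Harvard chart’s actual formula, so swap in the real one.

```python
import numpy as np

rng = np.random.default_rng(42)

def pick_value(overall):
    # HYPOTHETICAL stand-in for the AV-based value curve from the
    # Harvard sports blog's chart; swap in the real formula.
    return 497 * np.exp(-0.0296 * (overall - 1))

# A future 1st-rounder: the slot is unknown, so sample it uniformly
slots = rng.integers(1, 33, size=100_000)   # 1st-round slots 1..32
values = pick_value(slots)
print(np.percentile(values, np.arange(0, 101, 10)).round())
```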

  • Rams get (from Titans):
    • 2016 1st round (1 overall) – 497
    • 2016 4th round (113 overall) – 92
    • 2016 6th round (177 overall) – 54
  • Titans get (from Rams):
    • 2016 1st round (15 overall) – 265
    • 2016 2nd round (43 overall) – 175
    • 2016 2nd round (45 overall) – 171
    • 2016 3rd round (76 overall) – 126
    • 2017 1st round – [ 200, 209, 218, 229, 242, 257, 274, 297, 327, 375, 497]
    • 2017 3rd round – [ 106, 109, 112, 115, 118, 121, 125, 128, 132, 136, 140]
  • Eagles get (from Browns):
    • 2016 1st round (2 overall) – 437
    • 2017 4th round – [ 82, 84, 86, 88, 90, 93, 95, 98, 100, 103, 105]
  • Browns get (from Eagles):
    • 2016 1st round (8 overall) – 319
    • 2016 3rd round (77 overall) – 125
    • 2016 4th round (100 overall) – 103
    • 2017 1st round – [ 200, 209, 218, 229, 242, 257, 274, 297, 327, 376, 497]
    • 2018 2nd round – [ 141, 145, 150, 155, 159, 165, 170, 176, 183, 190, 198]

So which team made out the best? There are a few ways to look at this.

Trade Value Chart

From the data we gained from the Harvard sports blog, we can look at total team value and value per pick. Below you can see that, in general, the Titans and Browns are going to add a lot of team value relative to their trade partners. On a per-pick basis, things seem to favor the team that traded up. This is a flawed metric because it biases towards higher picks, but it is interesting because the team may be adding value if it subscribes to the idea that top-tier talent is only found in the first round. Some additional explanations could be that the team feels the draft is top-heavy in talent, the team doesn’t have faith in its scouting department’s ability to assess talent in later rounds, these quarterbacks are slam-dunk prospects, etc.

[Table: total trade value and value per pick by team]

One Player Away

There is another way to look at this. What if the Eagles and the Rams believe they are one player away from contending? If this is the case, then you may be able to justify the move in that it gives the team the ability to win now. Under this assumption, you would expect the approximate value of the team to be relatively high for the past year. For the following analysis I wrote a web crawler to gather the roster information for every team from 2005 to 2015 from pro-football-reference.com. I then took the average approximate value of players that played 10 or more games for every team-year. The histogram of average team AV can be found below:

[Chart: histogram of average team AV, 2005-2015]
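The aggregation behind that histogram is simple. A sketch, assuming the crawler dumps one row per player-team-year into a hypothetical rosters.csv with team, year, games, and av columns:

```python
import pandas as pd

# One row per player-team-year scraped from pro-football-reference
rosters = pd.read_csv("rosters.csv")        # columns: team, year, games, av
regulars = rosters[rosters["games"] >= 10]  # keep players with 10+ games
team_av = regulars.groupby(["team", "year"])["av"].mean()
print(team_av.sort_values(ascending=False).head())
```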

In general this is just an OK way to measure good versus bad. I won’t ride or die on this metric’s ability to predict record, but New England’s 16-0 team is at the top and Detroit’s 0-16 team is at the bottom, so it also isn’t terrible. Special note: San Francisco went from 3rd overall in 2013 to last in 2015. The talent dump on that team over the last few years has been insane. So where do the teams in this trade fit in?

  • Rams: 4.05 (23rd in 2015)
  • Titans: 3.65 (31st in 2015)
  • Eagles: 3.96 (26th in 2015)
  • Browns: 3.69 (29th in 2015)

In general these do not look like teams that are one player away from contention, as they all fall relatively low on the distribution of team AV. So what would happen if these teams each got an extremely exceptional first-year starting quarterback (not likely, but what if)? This chart shows average AV by position from 2005 to 2015.

[Chart: average AV by position, 2005-2015]

So this means that an extremely high-value QB would be worth around 15 AV points in 2016. This is a good bit higher than the AV of other positions. So if the Eagles and Rams got great quarterbacks in this trade, what would their average team AV turn into?

  • Rams: 4.40 (~16th place in 2015)
  • Eagles: 4.30 (~20th place in 2015)

So basically these teams might get close to average in total team AV next year with extremely high-quality quarterback play. From any quarterback, not just a rookie. This suggests the teams may have issues that cannot be fixed solely with elite quarterback play.

Is the Trade Worth the Quarterback?

Let’s play a scenario game to emphasize the importance of the extra draft picks versus the top overall pick. I created a Bayesian hierarchical linear regression model to predict the expected AV points added per game played, by draft slot and player position. The cliff notes on the model: the slope and intercept of the line for expected AV versus draft slot are unique to each position, but are assumed to come from a shared underlying distribution. The reason I note that it is Bayesian is that I can sample the slope and intercept from their posterior distributions, which means I can run simulations from them. To summarize this, we are going to look at three charts. The first chart is the histogram of the expected AV of every player per game since 2005; I took the sum of a player’s AV, divided by games played, and multiplied out to get an expected value for a full season (16 games). The second chart is some summary statistics. The last chart is the expected number of players each team could add through the traded picks at different cutoff criteria.

[Chart: histogram of expected full-season AV per player since 2005]

[Table: summary statistics of expected AV]

[Table: expected number of players added through the traded picks at different AV cutoffs]

This is where it gets pretty interesting. It looks like the Eagles and Rams actually maximized their likelihood of drafting an extreme outlier. This makes sense intuitively, since most of the talent (and most of the quarterbacks) is found at the top of the draft, both of which push AV scores up. The interesting thing about the chart is really the > 6 AV range. These are highly exceptional players (above the 80th percentile). Across a million simulations, the Browns net out 1.52 more expected high-level contributors and the Titans net out 1.79 more. To put this in perspective, some of the players that netted out between 6 and 8 AV are: B.J. Raji, Jordan Matthews, Andre Ellington, Dominique Rodgers-Cromartie, Greg Olsen, Akeem Ayers, and Allen Hurns.
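For anyone who wants to poke at this, here is a minimal sketch of the kind of hierarchical model described above, assuming PyMC3 and a hypothetical draft_av.csv with position, pick, and av16 (AV per game times 16) columns. This is my simplified reconstruction of the idea, not the exact code:

```python
import pandas as pd
import pymc3 as pm

df = pd.read_csv("draft_av.csv")  # assumed columns: position, pick, av16
pos_idx, positions = pd.factorize(df["position"])

with pm.Model() as model:
    # Hyperpriors: every position's line comes from a shared distribution
    mu_a = pm.Normal("mu_a", 0.0, 10.0)
    sigma_a = pm.HalfNormal("sigma_a", 5.0)
    mu_b = pm.Normal("mu_b", 0.0, 1.0)
    sigma_b = pm.HalfNormal("sigma_b", 1.0)
    # Per-position intercepts and slopes (partial pooling)
    a = pm.Normal("a", mu_a, sigma_a, shape=len(positions))
    b = pm.Normal("b", mu_b, sigma_b, shape=len(positions))
    eps = pm.HalfNormal("eps", 5.0)
    mu = a[pos_idx] + b[pos_idx] * df["pick"].values
    pm.Normal("obs", mu=mu, sigma=eps, observed=df["av16"].values)
    trace = pm.sample(2000, tune=1000)  # posterior draws to simulate from
```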

In Conclusion

I think in general the Browns and Titans got the better ends of these deals. I think there is clear evidence here that if you are one player away (preferably a quarterback) from contention the trade can sometimes make sense. For these quarterbacks I don’t really see it, but I could be wrong. This post does a great job comparing these quarterbacks to other quarterbacks that have come out of the draft. We also saw that all four of these teams seem to be more than a single player away from being a contender. When this is the situation, we have seen that having more picks is much more valuable for the franchise.

This post doesn’t even go into the benefits of having rookies based on the rookie pay-scale. So imagine more rookie contributors to your team and what that means for cost savings to sign top free agents and lock up top home-grown talent. I might have to do that analysis in a future post.

Draft Coverage is Silly

I have seen a lot of media coverage recently about the Cleveland Browns and their draft strategy, paired with their acquisition of Robert Griffin III. Most of it is a lot of coverage without a lot of actual activity in the NFL world, but I have read plenty of articles that are fairly critical of their actions and what they have been saying. What I am going to do is try to understand why they would do the things they have done. They may be making a mistake and I might be wrong about their strategy, but I want to try to defend their actions. It should also be noted that I am in no way a Browns fan. Let’s dive in.

They Got RG3:

This one is slightly difficult to defend, mainly due to the price, but given the landscape of the NFL, RG3 didn’t have many places where he could use his skill set. In the right system, RG3 has shown very high upside. I think a lot of his success depends on a couple of things:

  1. How willing they are to run a read-option type offense. I think the best way to get the most out of RG3 is to go back to his “comfort zone.” Hue Jackson has shown the ability to adapt to his situation and ran a lot of these concepts with the Bengals so I don’t foresee this being too much of an issue.
  2. How much defenses have caught up to the read-option. I don’t buy too much into this. Although there are teams that haven’t thrived recently with these offensive concepts (49ers), there are teams who have thrived with it (Panthers).

They said they were going to draft another quarterback:

This is really what I wanted to talk about. The Browns are getting a lot of grief for saying they will take the best prospect even if that’s a quarterback. This is where I think the news is over-reported and reporters want to report on what is said rather than what it might mean. So now we get into some game theory. Basically, game theory is a school of mathematical models that tries to understand the payoffs from interactions between rational decision-makers. So this is where I want to be critical of people getting upset with the Browns for saying they will take the “best prospect” even if that means taking a quarterback. This goes beyond the Browns when it comes to the draft, some of the media over-reporting, and general over-reaction; but they are the pet project for this post.

So let’s assume the Browns tell the media or fans exactly what they are going to do, so that we all know and everyone can decide before the draft whether it’s the right decision. You know who else hears what the Browns’ plans are? The other 31 teams in the NFL. So let’s play the game of fans and the media getting what they want. The Browns have two choices in the draft: draft or trade. Why would they commit to taking a player if there are a bevy of other teams potentially looking at the same player?

If we play the scenario game of them wanting to trade back, this is where things make more sense. If they say they aren’t going to take one of the quarterbacks in the draft, then why would any team trade with them? This website does a great job of breaking down the relative value of draft positions. A potential trade partner can wait to deal with the San Diego Chargers at the three spot and potentially gain back a 7th rounder versus dealing with the Browns. If that team thinks the Chargers aren’t in the market yet (although Rivers is getting old), they can try to deal with the Cowboys at the four spot, which is basically the equivalent of gaining a late 5th-round pick by not dealing with the Browns. So if the Browns say they aren’t going to take a quarterback after picking up RG3, a potential trade partner just gained a draft pick by deciding to deal with someone else. Since the NFL is a competitive league, why would the Browns hand a team bargaining power? It’s also important to note that they are being non-committal about the actual pick. They are just saying the “best player,” which still keeps bargaining power if a team wants to trade up for another asset. Quarterbacks are also notoriously the most sought-after asset in the NFL, so saying you will potentially take one makes teams desperate to trade up.

Being transparent about your strategy

This one is a little less definitive and measurable, but it basically states that any statement followed by an action matching that statement gives other players an idea of how you will react in future scenarios. For some people learning poker for the first time, the idea of bluffing or varying betting strategies so their opponent can’t guess their hand is confusing. This line of thought is no different for this scenario. What’s important to note is that you can vary your betting strategy with positive outcomes, but you can never show your hand and expect a positive outcome if there is still betting to be done.

Why does some of the media care so much?

I honestly do not understand why the media spends so much time covering questions like “who are you going to draft?” and “how are you going to beat team X’s defense?” If I were one of the teams, I would just constantly troll the media. As fans and outsiders looking in, I hope teams are at least this smart about their actions. If I were them, I would constantly try to confuse my opposition about my intentions, or try to understand the payout structure of my various options. Here’s to hoping the new brain trust at the Browns knows what they are doing. Also, this article is more about understanding the actions of teams leading up to the draft. It would actually be interesting if there were a log of teams that tried to trade up and their trade offers. If this could be married up with the press releases of the teams, it would make for an interesting dataset.

I see you

Hey everyone, it’s been a while since I posted something, so I figured I would go ahead and share some of the stuff I am working on. For those of you interested in it from a technical standpoint, I am going to go light on the theory and code. I’m just going to show you the results and some of the interesting bits.

What have I been working on? Teaching computers to see like people do. It’s super duper fun. What I want to focus on in this post is recognition in images AFTER I have trained a model. I am not going to cover how I created the model here; I might make a post about it later, so stay tuned. These are some of the buzzwords of things that I used:

  • Python – programming language
  • OpenCV – computer vision library for Python/C++
  • dlib – computer vision library for Python/C++
  • TensorFlow – deep learning framework by Google (released to the public a couple months ago)
  • Convolutional neural networks, or convnets
  • AWS

For data I used the CIFAR-10 dataset. It’s a fairly popular dataset for image recognition tasks and has well-established benchmarks. It is a collection of images classified into 10 categories: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. What I created got a test accuracy of 89.7%, which would have been good enough for 24th place a couple of years ago in a data science competition hosted on Kaggle. Not too shabby.

So I created this cool model, but what I want to talk about is how to use that model to find stuff in new images. This post will be about using semantic segmentation instead of the naive approach to recognize objects in images. There will be pretty pictures to illustrate what I am talking about. So first, let’s look at our input image:

[Image: a dog and a cat lying on a bed]

Awww. So cute.

What is the naive approach?

The naive approach in this case is trying to find “interesting” data points in images to then feed to the model. There are a few ways to do this. There is a built-in function in the dlib library, but before I knew about it I actually built my own using OpenCV. I named it boundieboxies because I don’t take myself, my work, or grammar srsly. Overall, I just try to give as few fucks as possible. What it does is the following (rough code sketch after the list):

  1. Finds interesting points that exist in the image. You can read up on the documentation of opencv, but basically it looks for color gradients and uses some edge detection algorithms.
  2. Uses K-means clustering of the x, y coordinates of the points found.
  3. Creates a box around the cluster with some padding.

Boom. Easy as that.
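Here is a rough sketch of boundieboxies, with OpenCV’s corner detector standing in for the interesting-points step (simplified from the real thing):

```python
import cv2
import numpy as np

def boundie_boxies(image, n_clusters=12, pad=20):
    """Candidate object boxes: interest points -> K-means -> padded boxes."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # 1. Find interesting points (corners found via intensity gradients)
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=5)
    pts = pts.reshape(-1, 2).astype(np.float32)
    # 2. K-means cluster the (x, y) coordinates of those points
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.1)
    _, labels, _ = cv2.kmeans(pts, n_clusters, None, criteria, 10,
                              cv2.KMEANS_RANDOM_CENTERS)
    # 3. Box each cluster with some padding
    boxes = []
    for k in range(n_clusters):
        cluster = pts[labels.ravel() == k]
        x0, y0 = cluster.min(axis=0) - pad
        x1, y1 = cluster.max(axis=0) + pad
        boxes.append((max(int(x0), 0), max(int(y0), 0), int(x1), int(y1)))
    return boxes
```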

In case some of you are not familiar with K-means clustering: it’s a clustering algorithm used to group data points together. The gif below pretty much explains it better than any statistician could. There are some complexities to K-means, but I’m not going to belabor this post with them.

[GIF: K-means clustering converging over iterations]

Boundieboxies does a good job with foreground object detection in images and has comparable speeds to the dlib library up to around 500 objects per image. Which is a shit ton of objects for an image. Over 500 objects, dlib smokes boundieboxies in performance.

That’s an unrealistic scenario, okay?! Okay? Okay….. okay….

[GIF: okay]

Okay… mine isn’t as good at scale.

So running the program and getting my “potential” objects took 0.170 seconds and found 12 potential objects. This is what the output looks like for boundieboxies:

[Image: boundieboxies candidate boxes drawn on the dog-and-cat photo]

Dope. It looks like it did a good job of finding the dog and the cat with a few of these boxes. To see how boundieboxies compares to dlib’s find_candidate_object_locations function, the dlib output is below; it ran in 0.143 seconds and found 10 potential objects.

[Image: dlib candidate boxes drawn on the dog-and-cat photo]

Both algorithms work well in about the same amount of time. Boundieboxies is better at foreground object detection, while the dlib function does a better job with objects that blend into the background. Both have their uses and comparable runtimes (when there aren’t very many objects <cries gently>).

So now I know what to feed my convnet model to determine what each box likely is and hopefully get the computer to tell me that there is a doggy and a kitty in this picture. Running the different boxes through the model took 0.412 seconds and the output was:

  • Box 1: Dog, score: 792
  • Box 2: Ship, score: 92
  • Box 3: Cat, score: 228
  • Box 4: Ship, score: 53
  • Box 5: Dog, score: 1013
  • Box 6: Cat, score: 346
  • Box 7: Dog, score: 220
  • Box 8: Dog, score: 201
  • Box 9: Airplane, score: 151
  • Box 10: Cat, score: 304
  • Box 11: Ship, score: 35
  • Box 12: Dog, score: 505

So we basically got what we wanted. The boxes were mostly recognized as dogs and cats. The ship comes in because the bed they’re lying on kinda looks like a boat. Even with some random objects mixed in, the top scores were for dog and cat by a pretty significant margin.
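For reference, the plumbing for this step looks roughly like the sketch below, where predict_fn is a hypothetical stand-in for the trained convnet’s scoring function:

```python
import cv2
import numpy as np

CLASSES = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]

def classify_boxes(image, boxes, predict_fn):
    """Crop each candidate box, resize to CIFAR-10's 32x32 input,
    and score the whole batch with the trained model."""
    crops = [cv2.resize(image[y0:y1, x0:x1], (32, 32))
             for (x0, y0, x1, y1) in boxes]
    scores = predict_fn(np.stack(crops).astype(np.float32) / 255.0)
    for i, s in enumerate(scores, start=1):
        print(f"Box {i}: {CLASSES[int(s.argmax())]}, score: {s.max():.0f}")
```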

So in summary the naive approach used boxes to feed into the trained convnet model to produce output. Overall it took 0.170 + 0.412 = 0.582 seconds. Nice!

Semantic Segmentation

So now let’s move on to semantic segmentation. This is something that is new (to me) and took a good bit of figuring out to get working on TensorFlow, as both TensorFlow and this method are relatively new. There are actually a lot of ways to do it, and for the most part it is used in academia, so I had to decode a white paper to get it working. The white paper is here:

http://www.cs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf

Nerd Summary:

Semantic segmentation is created by converting what would be fully connected layers into convolutional layers, adding some skip layers to the model architecture, and upsampling the predictions to a weighted output the same shape as the input image.
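For the extra nerdy, here is a toy numpy sketch of the “convolutionalization” trick (my own illustration, not the paper’s code). A fully connected layer’s weights can be reshaped into a convolution kernel that produces identical outputs, and on larger inputs that kernel slides to produce a spatial grid of class scores:

```python
import numpy as np

rng = np.random.default_rng(0)
feat = rng.standard_normal((7, 7, 512))        # a conv feature map
W_fc = rng.standard_normal((7 * 7 * 512, 10))  # FC weights -> 10 class scores

fc_out = feat.reshape(-1) @ W_fc               # the ordinary FC layer

# Reshape the same weights as a 7x7x512x10 kernel: on a 7x7 input, the
# "convolution" is a single dot product and matches the FC layer exactly.
W_conv = W_fc.reshape(7, 7, 512, 10)
conv_out = np.einsum("hwc,hwck->k", feat, W_conv)
assert np.allclose(fc_out, conv_out)
# On a bigger input the kernel slides, yielding a coarse grid of class
# scores that the FCN paper upsamples (with skips) to per-pixel scores.
```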

Regular Summary:

There is a cool trick that makes the model look at pieces of the picture as the picture moves through the trained model. What it gives us is classifications as well as pixel-by-pixel probabilities of which class each pixel belongs to.

Picture Summary:

It does this.

[Image: per-class segmentation heatmaps for the dog-and-cat photo]

Pretty cool, right? It’s able to mix what each pixel is with the probability of what it is. Or as I like to say, “it goes all Predator.” In addition to being more precise with more robust output, it actually runs faster too! The output above took 0.371 seconds. Once again you get the fact that the bed looks like a boat, and it also thinks the dog’s ear is a bird. It clearly sees the dog and cat in the image (red means a high likelihood that the pixel belongs to the given category).

That is a savings of 0.582 – 0.371 = 0.211 seconds per image. For this example that might seem very small, but say you are running this on a million images or the images have more objects than just a couple. That adds up. Also, the output holds a lot more information.

Importance of Writing in Data Science

Recently I have been thinking about data science and how much writing I have been doing: blogging for work, blogging about football, and blogging about random things. What has been interesting to observe is how writing and data science complement each other. I figured I would write a quick post for anyone in an analytic profession describing how publishing or writing down your work complements your analytic capabilities. I count code comments as part of “writing” because in a lot of cases my write-up starts as an outline built from my code comments. So here are a few ways that I think it helps tremendously:

It helps you finish projects

In a lot of cases it is easy to find quick nuggets of information in data. It’s a lot harder to talk about what those nuggets mean and why they are significant. Also, does that nugget lead to another interesting nugget?

A lot of times a completed project exists in the form of code that you wrote that not a lot of people can understand besides you. Comments in your code are important to understand the code. Write-ups are important to understand the findings you got from the model you used.

It gives you a baseline to beat

There have been a lot of times where I never feel like a project is finished because I can constantly do x, y, or z better. While writing things up doesn’t directly remedy this, I feel like if you have regular weekly write-up deliverables for yourself, you are more likely to write down what you have. In statistics, what you have is rarely a finished product: there are so many models to try, so many transforms to do, and so many omitted variables to add into the fold. As George Box said, “All models are wrong, but some are useful.” It’s almost as important to get a baseline out quickly as it is to get the best model out eventually. It gives you something to beat. It also gives you something to show. It’s important to emphasize in the write-up that this is just a baseline and more analysis should be done before it is taken seriously; be aware that project managers and executives will be ready to roll with whatever you say. It’s on you to make the model better.

It gives you a clear outline on where to go

In a lot of cases, the process of writing up your findings will make you more critical in assessing model violations, as well as giving you a place to go next. There is almost always a conclusion/next-steps portion to a write-up. When you write something up, you spend time explaining to yourself and others why you used what you used. I think the old adage that you don’t know a subject until you can teach it holds here: you don’t really know the data until you have to write about it. I have never finished a write-up thinking “that’s it, I’m done.” The process usually helps me think of questions I didn’t have while writing the code, violations of model assumptions I may have made, or other pieces of data I might want to add to the analysis.

It helps you communicate to anyone that relies on you for insights

This one is really important, and it is one of the biggest reasons I created this blog. Complex models are cool, but being able to explain complex models in a straightforward manner is extremely valuable. Scratch that. Being able to explain ANY statistical concept in a straightforward manner is extremely valuable. Not everyone you talk to will have a master’s or PhD in statistics. Being able to clearly communicate your findings is a skill, not a given. It has been my experience that it is a skill many data scientists put little stock in, to the detriment of their own careers. The data scientist who can explain advanced concepts to the everyman is the real hero, not the one who gets frustrated that nobody understands him or her. I have also seen scenarios where the correct numbers and verbiage are not used simply because the person that matters in the company didn’t understand what they were just told. Writing stuff up helps you understand how your audience wants to hear what you are trying to say. That sentence might sound weird, but it’s important. Trust me. We are outliers in our train of thought and approaches. The outlier should never simply be ignored, but investigated (corny stats joke, feel free to ignore).

It helps you understand what value it adds

I have seen countless forays into analytics that are interesting, but what really matters is whether they are valuable. You can run the most bomb model in all of models, and if it gives a 2% accuracy boost over a linear model, why should anyone care? Outside of the learning I might get from running that model in the wild, chances are nobody will ever care. There are certain exceptions to this, but unless you are working on the most cutting-edge methods, I doubt most of us see much value in a 2% accuracy boost. I think a write-up helps you answer the question, “Why is what I did important?”

Anyway, that’s my two cents on the matter. As always feel free to say “You don’t know anything about anything” and move on or bash me in the comments. I welcome your criticism. Maybe you think I missed or overlooked something. I welcome your insights.

Defense? Falcons?

This year I am going to try writing about the NFL with a sort of focus on the Falcons, because I am a homer and OK with it. I have been anticipating this season way more than past seasons and have been eating up a lot of media about the NFL, and a lot more about the Falcons. Since I do statistics for a living, I figure I will put that sort of spin on this. Bill Barnwell is probably my favorite writer when it comes to the NFL. I am a big fan of how he uses and interprets numbers in his articles in a relatively easy, straightforward, accessible way. He also gots the jokes.

Alright, let’s do it then! Let’s get into it with some classic over-reaction to Week 1 action. We will start with the team I love. The Atlanta Falcons showed what was a dominant defense in the first half and a barely competent defense in the second half. Average that together and it’s still better than last year! One of the statistics that I thought explained Atlanta’s defense best last year was that we allowed a 46.8% conversion rate to opponents on third down, good for dead last in the league. While not the only statistic to look at for defense, it has a weak positive correlation of 0.53 with the defensive DVOA rankings (courtesy of Football Outsiders). Nothing to write home about, but enough to at least weakly illustrate my point without taking me down a rabbit hole. It is also of note that the Eagles were 9th in the league on offense in this category last year with a rate of 43.5%. The numbers definitely suggested a beatdown was going to occur in this category, but what is that? The Eagles converted 3 of 12 for a 25.0% rate. If we look at the dominant first half, that number is actually 1 of 6 for a rate of 16.7%.

So this basically means Dan Quinn completely overhauled the defense, the new additions proved to be shrewd, intelligent acquisitions, and we be #quinning. As much as I wish this were the case, let’s slow down. Point estimates are very rarely informative. What we really care about is whether there is enough data to suggest this is truly coming from a different distribution (which can be read as: this defense is not last year’s defense). For those of you who know me, you know I subscribe to Bayesian statistics. So while many of you know about confidence intervals, I will be using the Bayesian interpretation, examining whether something comes from another distribution. The following description and image show the interpretation.

  • Θ1 – Falcons’ third-down conversion rate allowed last year
  • Θ2 – Eagles’ third-down conversion rate last year
  • Θ3 – Falcons vs. Eagles third-down rate last night
  • Θ4 – Falcons vs. Eagles third-down rate in the first half last night

[Chart: posterior distributions of Θ1 through Θ4 and their pairwise differences]

So we can see the point estimates going diagonally from the top left to the bottom right, and the differences between the distributions on the off-diagonals, with the top right being a mirror of the bottom left. We also clearly see that Θ1 and Θ2 show much more certainty in their estimates because we had a lot more data from last year than from one game this year. So what does this all mean?! Well… it basically means we don’t know enough about our defense to draw real conclusions yet. If you look at the differences between Θ1 and both Θ3 and Θ4 (the 2014 Falcons versus the first game and the first half of the first game), zero is still very much a credible value. While the data does seem to pull towards this being an improved unit, I wouldn’t put too much stock in it yet. To further illustrate this, the numbers seem to think we didn’t look much different in the first half versus the whole game (Θ3 versus Θ4). This is in large part because the credible intervals for the true values are so wide. For anyone watching the game, the Eagles were giving us the business in the second half.
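If you want to reproduce the gist of this at home, here is a minimal sketch using flat Beta(1, 1) priors and only the game numbers quoted above (conjugacy means no MCMC is needed for the simple version):

```python
import numpy as np

rng = np.random.default_rng(3)

def posterior(made, att, draws=100_000):
    # Beta(1, 1) prior + binomial likelihood -> Beta posterior (conjugacy)
    return rng.beta(1 + made, 1 + att - made, size=draws)

theta3 = posterior(3, 12)  # Eagles vs. Falcons, full game
theta4 = posterior(1, 6)   # first half only
lo, hi = np.percentile(theta3 - theta4, [2.5, 97.5])
print(f"95% credible interval for Θ3 - Θ4: [{lo:.2f}, {hi:.2f}]")
# Zero sits comfortably inside the interval, so we can't claim the
# first half was truly different from the full game.
```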

Well, that’s my statistically backed way of saying we actually don’t know too much about the Falcons, or any team, just yet. Might be fun to look at all the crazy projections to see how many come true. I’m looking at you, Skip.