The Importance of Blocks for NBA Defenses, Over Time

Introduction

I have always been ambivalent about blocked shots. On one hand, they are really cool. Seeing someone send a shot into the fifth row, or watching Anthony Davis recover and get a hand on a shot just in time, is fun. Perhaps the most important play of LeBron's entire career was a block. At the same time, going for blocks means more shooting fouls, and blocked shots don't often end up as defensive rebounds.

My interest in blocks continues in this post: I will be looking at the relationship between blocks and defensive rating, both measured at the player level. Both are per-100-possession statistics: How many blocks did a player record per 100 possessions? How many points did they allow per 100 possessions (see the Basketball-Reference glossary)?

My primary interest here is looking at the relationship over time. Are blocks more central to the quality of a defense now than they used to be? The quality of defense will be measured by defensive rating—the number of points a player is estimated to have allowed per 100 possessions. For this reason, a better defensive rating is a smaller number.

There is some reason to think blocks might be more important now than they used to be. For the 2001-2002 season, the NBA changed a number of rules, many relating to defense. Two rules in particular changed how teams could play defense (quoting from the link above): “Illegal defense guidelines will be eliminated in their entirety,” and “A new defensive three-second rule will prohibit a defensive player from remaining in the lane for more than three consecutive seconds without closely guarding an offensive player.” This meant teams could start playing zone defense, as long as defenders didn't park in the paint for longer than three seconds. It also meant blocking shots became a skill that took self-control: You had to be able to slide into place at the right moment to block a shot; you couldn't just stand in the paint and wait to swat.

Other rules made it more likely that ball-handlers would be able to drive past perimeter defenders and reach the paint: “No contact with either hands or forearms by defenders except in the frontcourt below the free throw line extended in which case the defender may use his forearm only.” So, have blocks grown more important for a defense over time?

Analysis Details

I looked at every NBA player in every season from 1989 to 2016; players were only included if they played more than 499 minutes in a season. If you are interested in getting this data, you can see my code for scraping Basketball-Reference, along with the rest of the code for this blog post, over at GitHub.

I fit a Bayesian multilevel model to look at this, using the brms package, which interfaces with Stan from R. I predicted a player's defensive rating from how many blocks they had per 100 possessions, what season it was, and the interaction between the two (which tests the idea that the relationship has changed over time). I also allowed the intercept and the relationship between blocks and defensive rating to vary by player. I centered the year variable at the first year in the data, 1989 (i.e., I subtracted 1989 from the year variable), so the intercept refers to 1989, and I specified some pretty standard, weakly-informative priors as well:

library(brms)  # interfaces with Stan from R

# defensive rating predicted from blocks per 100 possessions, season,
# and their interaction; intercept and blocks slope vary by player
brm(drtg ~ blkp100*year + (1+blkp100|player),
    prior=c(set_prior("normal(0,5)", class="b"),
            set_prior("student_t(3,0,10)", class="sd"),
            set_prior("lkj(2)", class="cor")),
    data=dat, warmup=1000, iter=2000, chains=4)

Results

The probability that the relationship between blocks and defensive rating depends on year, given the data (and the priors), is greater than 0.9999. So what does this look like? Let's look at the relationship at 1989, 1998, 2007, and 2016:

Year Slope
1989 -1.04
1998 -1.48
2007 -1.92
2016 -2.36
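These season-specific slopes can be recovered from the model's fixed effects. Here is a minimal sketch, assuming the fit above was saved to an object called fit and that year was centered at 1989; the coefficient names follow brms's default naming for the blkp100*year interaction.

# posterior-mean slope of blocks per 100 possessions in a given season
# (assumes the brm() fit above is stored in `fit` and year is centered at 1989)
fe <- fixef(fit)   # matrix of fixed-effect summaries; first column is the estimate

slope_in_season <- function(season) {
  fe["blkp100", "Estimate"] + fe["blkp100:year", "Estimate"] * (season - 1989)
}

round(sapply(c(1989, 1998, 2007, 2016), slope_in_season), 2)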

Our model says that, in 1989, for each block someone had in 100 possessions, they allowed about 1.04 fewer points per 100 possessions. In 2016? This estimate more than doubled: For every block someone had in 100 possessions, they allowed about 2.36 fewer points per 100 possessions! Here is what these slopes look like graphically:

[Figure: model-implied slopes of blocks per 100 possessions predicting defensive rating in 1989, 1998, 2007, and 2016]

However, the model assumes a linear change in the relationship between blocks and defensive rating over the years. What if there was an “elbow” (i.e., a drop-off) in this curve at 2002, when the rules changed? That is, maybe there was a relatively stable relationship between blocks and defensive rating until 2002, when it quickly became stronger. To investigate this, I correlated blocks per 100 possessions and defensive rating in each year separately and then graphed those correlations by year. (Note: Some might argue a piecewise latent growth curve model is the appropriate test here; for brevity and simplicity's sake, I took a more descriptive approach.)
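Here is a rough sketch of that descriptive approach, assuming the data frame dat contains the raw season in year alongside blkp100 and drtg; the dplyr and ggplot2 calls are illustrative, not necessarily how the original figure was produced.

library(dplyr)
library(ggplot2)

# correlation between blocks per 100 possessions and defensive rating,
# computed separately within each season
cors_by_year <- dat %>%
  group_by(year) %>%
  summarise(r = cor(blkp100, drtg, use = "complete.obs"))

ggplot(cors_by_year, aes(x = year, y = r)) +
  geom_point() +
  geom_line() +
  geom_vline(xintercept = 2002, linetype = "dashed")  # season the defensive rules changed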

[Figure: correlation between blocks per 100 possessions and defensive rating, plotted by season]

It is hard to tell exactly where the elbow occurs, but blocks do look less important before the rule change (the correlation was drifting toward zero), whereas after the rule change the correlation was getting stronger (i.e., more and more negative).

Conclusion

It looks like blocks are not only cool, but have also become especially relevant to playing good individual defense in recent years. One plausible consequence of the 2001-2002 rule changes is that blocks matter more for individual defensive performance now than they did in the past.

Analyzing Rudy Gay Trades Using the CausalImpact Package

Introduction

I have been meaning to learn more about time-series and Bayesian methods; I'm pumped for a Bayesian class I'll be taking this coming semester. RStudio blogged back in April about the CausalImpact package, a Bayesian time-series package from folks at Google, and I've been meaning to play around with it ever since. There's a great talk posted on YouTube that gives a very intuitive explanation of thinking about causal impact in terms of counterfactuals, as well as of the CausalImpact package itself. I decided to use it to put some common wisdom to the test: Do NBA teams get better after getting rid of Rudy Gay? I remember a lot of chatter on podcasts and on NBA Twitter after he was traded from both the Grizzlies and the Raptors.

Method

I went back to the well and scraped Basketball-Reference using the rvest package. Looking at the teams that traded Gay mid-season, I fetched all the data from the “Schedule & Results” pages and calculated a point differential for every game: Positive numbers meant the team with Rudy Gay won by that many points, while negative numbers meant they lost by that many points. I ran the CausalImpact model with no covariates: I just looked at point differential over time, separately for the Grizzlies' 2012-2013 season and the Raptors' 2013-2014 season (both teams traded Rudy mid-season). The pre-treatment period is the games before the team traded Gay; the post-treatment period is the games after.

Code for scraping, analyses, and plotting can be accessed over at GitHub.
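For a sense of what the model call looks like, here is a minimal sketch for the Grizzlies; the object names and the index of the last pre-trade game are illustrative, not the exact values from my script.

library(CausalImpact)

# per-game point differential in schedule order (illustrative object names)
diffs <- grizzlies$point_diff
last_pre_game <- 41                     # hypothetical index of the last game before the trade

pre.period  <- c(1, last_pre_game)
post.period <- c(last_pre_game + 1, length(diffs))

impact <- CausalImpact(diffs, pre.period, post.period)
summary(impact)             # average and cumulative effect table
summary(impact, "report")   # the plain-English write-up mentioned below
plot(impact)                # original, pointwise, and cumulative panels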

Results

The package is pretty nice. The output is easy to read and interpret, and they even include little write-ups for you if you specify summary(model, "report"), where model is the name of the model you fit with the CausalImpact function. Let's take a look at the Grizzlies first.

Actual Predicted Difference 95% LB 95% UB
Average 4.4 3.6 0.82 -5 6.6
Cumulative 167.0 135.8 31.22 -190 252.5

The table shows the average and cumulative point differentials. On average, the Grizzlies outscored their opponents by 4.4 points per game after Rudy Gay was traded. Based on what the model learned from when Gay was on the team, we would have predicted this to be 3.6. Their total point differential was 167 after Rudy Gay was traded, when we would have expected about 136. The table also shows the differences: 0.82 points per game and 31.22 points cumulatively. The 95% interval around each difference spans zero by a wide margin, though, suggesting the difference is not reliably different from zero. The posterior probability of a causal effect here (i.e., the probability that this increase was due to Gay leaving the team) is 61%, which is not a very compelling number. The report generated by the package is rather frequentist: It uses classical null hypothesis significance testing language, saying the effect “would generally not be considered statistically significant” with a p-value of 0.387. Interesting.

What I really dig about this package are the plots it gives you. The package is built around modeling a counterfactual: What would the team have done had Rudy Gay not been traded? It then compares this predicted counterfactual to what actually happened. Let's look at the plots:

[Figure: CausalImpact plots for the 2012-2013 Grizzlies: observed vs. predicted point differential, pointwise differences, and the cumulative difference]

The top panel shows a horizontal dotted line, which is what is predicted given what we know about the team before Gay was traded. I haven't specified any seasonal trends or other predictors, so this line is flat. The black line is what actually happened. The vertical dotted line marks where Rudy Gay was traded. The middle panel shows the difference between predicted and observed; we can see that there is no reliable difference between the two after the Gay trade. Lastly, the bottom panel shows the cumulative difference (that is, adding up all of the differences between observed and predicted over time). Again, this hovers around zero, showing that there was really no difference between the point differential the Grizzlies actually posted and what we predicted would have happened had Gay not been traded (i.e., the counterfactual). What about the Raptors?

The Raptors unloaded Gay to the Kings the very next season. Let's take a look at the same table and plot for the Raptors and trading Rudy:

Actual Predicted Difference 95% LB 95% UB
Average 4.4 -0.37 4.8 -0.88 10
Cumulative 279.0 -23.22 302.2 -55.59 657

[Figure: CausalImpact plots for the 2013-2014 Raptors: observed vs. predicted point differential, pointwise differences, and the cumulative difference]

The posterior probability of a causal effect here was 95.33%, much higher than in the Grizzlies example. The effect was also more than five times bigger than it was for Memphis: There was a difference of 4.8 points per game (or 302 points cumulatively) between what we observed and what we would have expected had the Raptors never traded Gay. Given that this effect corresponds to one (at the time, above-average) player leaving a team, that is pretty interesting. I'm sure any team would be happy to get almost 5 whole points better per game after getting rid of a big salary.

Conclusion

It looks like trading Rudy Gay likely had no effect on the Grizzlies, but it does seem that getting rid of him had a positive effect on the Raptors. The CausalImpact package is very user-friendly, and there are many good materials out there for understanding and interpreting the model and what's going on under the hood. Most of the examples I have seen use simulated data or data that are easily interpretable, so it was good practice to see how the model behaves on a real, noisy dataset.

Quantifying "Low-Brow" and "High-Brow" Films

I went and saw Certain Women a few months ago. I was pretty excited to see it; a blurb in the trailer calls it “Triumphant… an indelible portrait of independent women,” which sounded promising to me. The film made a worthwhile point in that it exposed the mundane, everyday ways in which women have to confront sexism. It isn't always a huge dramatic thing that is obvious to everyone; instead, most of the time sexism is commonplace and woven into the routine of our society.

The only problem is that I found the movie, well, pretty boring. Showing how quotidian sexism is makes for a slow-paced, quotidian plot. A few days ago, I happened upon the Rotten Tomatoes entry for the movie. It scored very well with critics (92% liked it), but rather poorly with audiences (52%). That got me thinking about the divide between critics and audiences: The biggest differences between audience and critic scores could be an interesting way to quantify what counts as “high-brow” and what counts as “low-brow” film. So I collected critic and audience scores for movies released in 2016, plotted them against one another, and looked at where they differed most.

Method

The movies I chose to examine were all listed on the 2016 in film Wikipedia page. The problem was that I needed links to Rotten Tomatoes pages, not just names of movies. So I scraped this table, took the names of the films, and turned them into Google search URLs by taking "https://google.com/search?q=rottentomatoes+2016+" and using paste0 to add the name of the film to the end of the string. Then, I wrote a little function (using rvest and magrittr) that takes such a Google search URL and fetches the URL of the first search result:

# function for getting first hit from a Google results page
library(rvest)
library(magrittr)

getGoogleFirst <- function(url) {
  url %>% 
    read_html() %>%                          # fetch the search results page
    html_node(".g:nth-child(1) .r a") %>%    # link of the first result
    html_attr("href") %>%                    # typically of the form "/url?q=<target>&..."
    strsplit(split="=") %>% 
    getElement(1) %>% 
    strsplit(split="&") %>%                  # strip the trailing Google parameters
    getElement(2) %>% 
    getElement(1)                            # leaves just the Rotten Tomatoes URL
}
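To give a sense of the loop described in the next paragraph, here is a sketch; film_names is assumed to hold the film titles scraped from the Wikipedia table.

# build the Google search URLs and loop getGoogleFirst() over them
search_urls <- paste0("https://google.com/search?q=rottentomatoes+2016+",
                      gsub(" ", "+", film_names))  # replace spaces so the URLs stay valid

rt_links <- character(length(search_urls))
for (i in seq_along(search_urls)) {
  rt_links[i] <- getGoogleFirst(search_urls[i])
  Sys.sleep(1)  # pause between requests
}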

After running this through a loop, I got a long vector of Rotten Tomatoes links. Then, I fed them into two functions that get the critic and audience scores:

# get Rotten Tomatoes critic score
rtCritic <- function(url) {
  url %>% 
    read_html() %>% 
    html_node("#tomato_meter_link .superPageFontColor") %>% 
    html_text() %>%             # e.g., "92%"
    strsplit(split="%") %>%     # drop the trailing percent sign
    unlist() %>%                # flatten the list strsplit returns
    as.numeric()
}

# get Rotten Tomatoes audience score
rtAudience <- function(url) {
  url %>% 
    read_html() %>% 
    html_node(".meter-value .superPageFontColor") %>% 
    html_text() %>% 
    strsplit(split="%") %>% 
    unlist() %>% 
    as.numeric()
}

The film names and scores were all put into a data frame.
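A sketch of that step, reusing the illustrative film_names and rt_links objects from above:

# assemble film names and both scores into one data frame
movies <- data.frame(
  film     = film_names,
  critic   = sapply(rt_links, rtCritic),
  audience = sapply(rt_links, rtAudience),
  stringsAsFactors = FALSE
)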

Results

Overall, I collected data on 224 films. The average critic score was 56.74, while the average audience score was 58.67. Audiences tended to be slightly more positive, but the difference was small, 1.93 points, and not statistically significant, t(223) = 1.34, p = .181. Audiences and critics tended to agree; scores from the two groups correlated strongly, r = .68.

But where do audiences and critics disagree most? I calculated a difference score by taking critic minus audience scores, such that positive scores mean critics liked the film more than audiences did. The biggest difference scores in each direction are found in the tables below.
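As a sketch, using the illustrative movies data frame from above, the difference score and the statistics reported earlier look something like this:

# positive difference = critics liked the film more than audiences did
movies$difference <- movies$critic - movies$audience

t.test(movies$critic, movies$audience, paired = TRUE)  # paired comparison of the means
cor(movies$critic, movies$audience)                    # agreement between the two groups

# largest gaps in each direction
head(movies[order(-movies$difference), ], 5)   # "high-brow": critics >> audiences
head(movies[order(movies$difference), ], 5)    # "low-brow": audiences >> critics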

“High-Brow” Films

Film Critic Audience Difference
The Monkey King 2 100 49 51
Hail, Caesar! 86 44 42
Little Sister 96 54 42
The Monster 78 39 39
The Witch 91 56 35
Into the Forest 77 42 35

“Low-Brow” Films

Film Critic Audience Difference
Hillary's America: The Secret History of the Democratic Party 4 81 -77
The River Thief 0 69 -69
I'm Not Ashamed 22 84 -62
Meet the Blacks 13 74 -61
God's Not Dead 2 9 63 -54

Interactive Plot

Below is a scatterplot of the two scores with a regression line plotted. The dots in blue are those films in the tables above. You can hover over any dot to see the film it represents as well as the audience and critic scores:

[Interactive scatterplot: audience score vs. critic score for each film, with the films from the tables above highlighted in blue]

I won't do too much interpreting of the results; you can see for yourself where the movies fall by hovering over the dots. But I would be remiss if I didn't point out that the largest difference score belongs to an anti-Hillary Clinton movie: 4% of critics liked it, but somehow 81% of the audience did. Given all of the evidence that pro-Trump bots were all over the Internet in the run-up to the 2016 U.S. presidential election, I would not be surprised if many of those audience votes came from bots.

Apparently I'm a low-brow plebeian; I did not see any of the most “high-brow” movies, according to this metric. Both critics and audiences seemed to love Hidden Figures (saw it, and it was awesome) and Zootopia (still haven't seen it).

Let me know what you think of this “low-brow/high-brow” metric or better ways one could quantify the construct.