# The seventy two percent solution (to police violence)

> And now it is your turn,
> We are tired of praying, and marching, and thinking, and learning
>
> — Gil Scott-Heron

So. It turns out that Gil Scott-Heron was right and he was wrong. We once again, during a time of serious social inequality and political upheaval, sent whiteys to the moon (ish). On the other hand, the revolution is definitely being televised. Live.

And as the protests and discussions continue, there are a lot of reform ideas being mooted. Some ideas are transformative, like defunding and demilitarizing the police (this does not mean “get rid of police”! see here and here for what that might mean) and redistributing parts of that budget to housing, community projects, health, employment, education and other vital social programs. (This is predicated on the idea that the police are not the right solution to every problem they’ve been hurled at.)

At the other end of the spectrum we have 8 Simple Rules for Dating My Teenage Daughter to reduce police killings by 72%.

• Require de-escalation
• Have a use-of-force continuum
• Ban chokeholds and strangleholds
• Warn before shooting
• Don’t shoot at moving vehicles
• Exhaust all other means before shooting
• Other officers have a duty to intervene to stop another officer from using excessive force
• Require comprehensive reporting of uses and threats of force

And really, all of those are perfectly reasonable things. The question is do they do anything? (NB: Eric Garner died from a chokehold in 2014. They were banned in NY in 1993.)

Well, the reason I’m talking about this at all is that these rules were being heavily promoted as “data driven”. (I am not even going to pretend that I’m well versed in police violence or structural racism in the US. I have a very well-defined lane to stay in.)

In fact, the interventions were sold with the tag-line “Data proves that together these eight policies can decrease police violence by 72 percent”. (See, e.g., here.) Very recently, the project has walked this back somewhat, to “Research shows more restrictive use of force policies can reduce killings by police and save lives”.

If you look into their docs, you get this:

> Moreover, our research found that having all eight of these use of force restrictions in place was associated with 72% fewer police-involved killings compared to departments with none of these policies in place and a 54% reduction for the average police department.

This research was part of the Police Use of Force Project, which looked at police killings linked to 100-ish large police forces and compared which of these 8 policies had been implemented. (Why these 8? No idea.) The data is not open, but on Twitter, Emma Glennon, who is an epi researcher and PhD student at Cambridge, has compiled some similar data and released it on GitHub. She wrote a really great Twitter thread that dives into the data and the analysis, and that I’m copping a lot from here. (Eve Ewing also has a really great discussion about this.)

So where did that 72% come from? Well, if you read the report (which doesn’t appear to be peer reviewed, but who even knows) you see the following:

• 100 departments were approached for their use of force policies, leading to 91 departments that could be included.
• Deaths were taken from The Guardian’s The Counted database, which covers 2015-2016. (A more up-to-date database is available from the Fatal Encounters project.)
• The policies were included as the total number of policies enacted
• The model was fit using a negative binomial regression that also included (regression coefficients, with standard errors in parentheses):
  • Percent minority, 1.7 (0.7)
  • Arrests, 0.9 (0.2)
  • Income, 0.8 (0.5)
  • Inequality (Gini coefficient from the census), 0.1 (1.7)
  • Assault on officers (~0 effect) and number of officers (~0 effect)
• The regression coefficient for Number of Policies was -0.16 (0.07).
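To make the model structure concrete, here is a minimal simulation sketch of that setup. The only number taken from the report is the -0.16 coefficient; the baseline rate, dispersion, and simulated policy counts are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reported coefficient on the number of policies; everything else below is made up
beta_policies = -0.16

# 91 hypothetical departments, each with 0-8 policies enacted
n_policies = rng.integers(0, 9, size=91)

# log E[killings] = intercept + beta * n_policies (other covariates omitted here)
mu = np.exp(np.log(5.0) + beta_policies * n_policies)

# Negative binomial counts via the Gamma-Poisson mixture
dispersion = 2.0
killings = rng.poisson(rng.gamma(dispersion, mu / dispersion))

# The model-implied rate ratio for 8 policies vs 0:
print(np.exp(beta_policies * 8))  # ~0.28, i.e. 72% fewer killings
```

The point of the sketch is just that the policy count enters the model log-linearly, which is where the exponentiated coefficient below comes from.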

Now, there is no way that this analysis supports causal conclusions. It’s not built for it and it’s not analyzed that way. And that’s ok. Exploratory data analysis is great. We all love it.

(Let’s not be glib about this though. That policy data was hard-won and has value beyond this particular analysis. As always, the hardest part of doing a data-driven analysis is getting the damn data.)

But the problem comes when making the 72% claim. That is a claim about the effect of a hypothetical intervention. Which is to say that it’s a causal claim.

And it’s not a good one.

The argument is that if $k$ policies are implemented, then the change in police killings compared to a similar city with no policies enacted is

$\left[\exp(-0.16k) - \exp(-0.16 \times 0)\right] \times 100\%$.

With all $k = 8$ policies enacted, this gives the 72% reduction. (Yes Andrew, that extra precision annoys me too.)

An average city has 3 of these 8 policies enacted, so the change would be

$\left[\exp(-0.16k) - \exp(-0.16 \times 3)\right] \times 100\%$,

which, for $k = 8$, is a 34% change.
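The arithmetic behind these percentages is easy to check (a quick sketch; the only input from the report is the -0.16 coefficient):

```python
import math

beta = -0.16  # reported coefficient on the number of policies

def pct_change(k_new, k_old):
    # Percent change in expected killings going from k_old to k_new policies,
    # holding everything else in the model fixed
    return (math.exp(beta * k_new) - math.exp(beta * k_old)) * 100

print(round(pct_change(8, 0)))  # -72: the headline 72% reduction
print(round(pct_change(8, 3)))  # -34: all 8 policies vs the average department's 3
```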

Note: The 54% quoted above is wrong. You get the 54% quoted above if a department that previously had zero policies enacted (in the data Irving, Kansas City, Reno, and Stockton) enacts 3 policies.

So. Are these numbers in any way real? Well, cracking open my copy of *Regression and Other Stories*, I know that this will be ok if the model holds (there is no model checking in the report) and under some unlikely assumptions that basically mean that observations with the same covariates are randomized into each of the 9 treatment levels (from 0 to 8 policies).

The numbers also assume that if you enact 4 policies, it doesn’t matter which 4 policies you enact. In the data there are 49 different policy combinations. And it is not unreasonable to expect that some combinations will be more effective than others.
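To put numbers on how much the count-only model collapses: with 8 binary policies there are $2^8 = 256$ possible combinations, and, for example, every four-policy combination is treated as interchangeable:

```python
from math import comb

# A count-only model treats every subset of the same size as identical
print(2 ** 8)      # 256 possible policy combinations (only 49 appear in the data)
print(comb(8, 4))  # 70 distinct ways to enact exactly 4 of the 8 policies
```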

All of this is to say that it is difficult to use this data and this modelling to predict the effect of enacting these policies. It’s definitely wrong to say they will lead to a 72% or 34% (or 54%!) reduction.

The correct interpretation is that (modulo fit issues), of the departments studied, those that enacted more of these policies recorded fewer killings (even after accounting for income, inequality, racial makeup of the population, and the number of arrests).

Does this mean the policy proposal should be scrapped? Well. That’s a hard question. It’s similar to the question of whether COPSS should rename the Fisher Lecture after someone who wasn’t a notorious racist. (There is a petition.) Sure. Do it. But beware of opportunity cost, because there is no evidence that this intervention will even touch the sides of the actual problem. So if enacting these policies is the end of the story, we likely won’t get very far.

And, as always, we should probably be aware that simple interventions rarely solve complex problems. And we should always be fairly skeptical when a relatively small intervention is advertised as leading to a large change. Even (or especially) when that intervention is advertised as “data driven”.