Image by Mike Sweeney from Pixabay

The problems with OIS data that only capture fatalities

Over the last few years, several websites that compile fatal officer-involved shooting (OIS) data have become available, including Killed By Police and The Washington Post's Fatal Force database.

These databases are useful for answering some of the most basic questions about OIS. For example, a few years ago my colleagues and I analyzed the Killed By Police data and found that fatal OIS have neither increased nor decreased significantly since the shooting of Michael Brown in Ferguson, MO (despite claims to the contrary). More recently, we used The Washington Post’s data to discuss the strengths and weaknesses of various “benchmarks” that can be used to make sense of racial disparities in fatal OIS.

What I worry gets lost on many people is that these data only capture fatal OIS. But if an officer purposely discharges his/her firearm at a person, the officer has used deadly force regardless of whether the person is killed. And in fact, OIS frequently do not result in death.

In the 1980s, James Fyfe summarized seven studies that had analyzed OIS and found that fatality rates ranged from a low of 21% in Philadelphia (1975-78) to a high of 41.7% in Los Angeles (1974-78). A few years ago, David Klinger and colleagues analyzed 10 years' worth of OIS by the St. Louis Metropolitan Police Department (2003-12) and found that 230 OIS resulted in 37 citizen fatalities (a fatality rate of about 16%).

Recently, VICE News compiled data on both fatal and nonfatal OIS in 47 of the largest 50 jurisdictions in the United States from 2010 to 2016. They documented 4,400 OIS, at least 1,382 of which were fatal (31.4%). Even if we assumed all of the 288 OIS with "unknown" outcomes were fatal, the overall fatality rate would increase only to about 38%. But the overall fatality rate, whatever it is, masks a ton of variation within this sample.
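The bounding arithmetic here is simple enough to sketch in a few lines. This is just an illustration using the VICE News totals quoted above; the variable names are mine, not from the original data files:

```python
# Totals quoted in the text from the VICE News compilation (2010-16).
total_ois = 4400
fatal = 1382
unknown = 288  # OIS with an unrecorded outcome

# Lower bound: treat every unknown outcome as nonfatal.
lower = fatal / total_ois
# Upper bound: treat every unknown outcome as fatal.
upper = (fatal + unknown) / total_ois

print(f"Overall fatality rate: {lower:.1%} to {upper:.1%}")
# Overall fatality rate: 31.4% to 38.0%
```

Even under the most extreme assumption about the unknowns, the fatality rate stays well below half, which is the point: most OIS in these data did not kill anyone.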


Focusing just on the jurisdictions with 100+ OIS, you can see that fatality rates during this time ranged from a low of 16.8% in St. Louis to 51.9% in Phoenix. Now consider some of the conclusions you might draw if you only had the "fatal" columns. You would probably conclude that police in Las Vegas shoot citizens over twice as often as police in St. Louis (47 vs. 20). But the reality is that their OIS totals were virtually identical (115 vs. 119). Similarly, while Boston and Atlanta each had 10 fatal OIS, Boston had just 4 additional nonfatal OIS, while Atlanta had an additional 32. And keep in mind these are just raw counts that don't account for differences in population or violent crime rates.
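The same point can be made by computing fatality rates directly from the counts quoted above (Phoenix is omitted because its raw counts aren't given in the text, only its 51.9% rate):

```python
# (fatal OIS, total OIS) per jurisdiction, as quoted in the text
# from the VICE News data, 2010-16.
cities = {
    "St. Louis": (20, 119),
    "Las Vegas": (47, 115),
    "Boston": (10, 14),
    "Atlanta": (10, 42),
}

for city, (fatal, total) in cities.items():
    print(f"{city}: {fatal}/{total} fatal = {fatal / total:.1%}")
```

Run it and the distortion is obvious: Las Vegas and St. Louis have nearly identical OIS totals but fatality rates of roughly 41% and 17%, so a fatal-only database makes their levels of deadly force look wildly different when they are not.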

Many factors likely influence whether an OIS results in a fatality, including:

  • type and caliber of gun used by officer(s)
  • number of rounds fired
  • proximity to the nearest Level 1 Trauma Center
  • departmental policy on rendering aid
  • whether (and how many) bullets strike vital organs

This last factor can come down to centimeters, which is a big part of the reason why OIS data that only capture fatalities can be misleading.

To be clear, even complete data on fatal and nonfatal OIS (which Texas and a few other states are now collecting) would leave important questions unanswered. Ultimately, to make sense of patterns/trends in OIS, we need to be able to compare incidents that resulted in OIS to a comparable universe of incidents that did not result in OIS. As Andrew Wheeler and his colleagues have demonstrated, we really need data on all instances in which officers draw and point their firearms, regardless of whether they ultimately shoot.

Our understanding of OIS has come a long way since Fyfe’s pioneering work in the 70s and 80s. But we’ve still got a long way to go.

Justin Nix
Assistant Professor of Criminology and Criminal Justice

My research interests include police legitimacy, procedural justice, and officer-involved shootings.