Fatal Police Shootings and Race: A Review of the Evidence and Suggestions for Future Research
When the 2014 police shooting of Michael Brown in Ferguson, Missouri, set off riots, we knew very little about the true number of people killed by American law enforcement. But since that time, private actors have stepped up efforts to count such killings comprehensively—and to collect some basic details about each incident. These data, along with agency-specific information furnished by police departments, have facilitated a massive amount of research into an important question: whether there is racial bias in police officers’ use of lethal force.
This report summarizes several major lines of work on this question. The simplest studies merely compare racial groups’ rates of police-killing deaths with their rates of crime. Other studies meticulously account for the situational factors of each case, ask whether black officers are less likely than white officers to kill black suspects, analyze police-shooting rates geographically, or even put officers in simulation exercises to see how they respond to suspects of different races.
On balance, these data and studies rebut the most extreme accusations of racial bias, in which police officers are thought to be killing nonthreatening black men with astounding frequency. But research continues as to whether there is some detectable level of bias in the nationwide data, and especially whether there are problems that manifest themselves differently from place to place.
After reviewing the state of the debate, this paper makes several suggestions for the path forward. For example, the government should rectify the lack of official data that led to the creation of so many private efforts to monitor police killings, and it should continue to increase the use of body cameras. Researchers, meanwhile, should move on from simple methods that merely compare crime rates and police shooting rates, focusing on more promising designs that better tease out the role of race. They should also invest more effort in studying place-to-place variations in police shootings and racial bias therein.
Since the Ferguson unrest, a narrative has solidified around the idea that police use lethal force disproportionately and without justification against African-Americans. Some data show the strength of this perception, particularly among blacks and on the political left.
In a survey conducted by Manhattan Institute colleague Eric Kaufmann, for example, eight in 10 African-Americans and about half of white Biden voters said that they thought that young black men were more likely to be shot to death by police than to die in a car accident—one of the largest mortality risks to the young and healthy. Another survey, by Skeptic magazine, showed that more than a third of liberal and very liberal respondents thought that the number of unarmed blacks killed by police each year was “about 1,000” or more. About a fifth of those calling themselves “very conservative” thought the same thing.  Yet another survey, from a trio of academics, found that about four in 10 African-Americans reported being “very afraid” of being killed by the police, which was roughly twice the share of black respondents who reported being “very afraid” of being murdered by criminals, as well as about four times the share of whites who reported being “very afraid” of being killed by the police. 
The assumption of widespread, highly consequential police racism has also inspired hasty policy changes. For example, “implicit bias” training has become common for police officers, despite the fact that, as two policing researchers put it in 2018, “no empirical evidence exists on the impact of implicit bias training on officer decision-making in the field, whether officers who are trained in implicit bias are perceived to be fairer by citizens, which training modality (e.g., classroom vs. simulation-based) is most effective in producing persistent changes in police behavior, or how long training effects last.”  A subsequent randomized trial showed that one type of implicit-bias training changed NYPD officers’ opinions about the concept of implicit bias itself—but had no measurable effect on racial disparities in the officers’ subsequent enforcement actions. A director of the project cautioned that “we don’t know whether or to what extent enforcement disparities stem from officers’ implicit bias” to begin with. 
When it comes to bias in lethal force, however, there is good news: research is advancing at an impressive clip.
When Ferguson burst into flames, we knew very little about the true number of people killed by police, unarmed or otherwise. But around that time, numerous efforts were launched to tally these deaths and collect some basic information about them—including a project by the Washington Post to summarize every fatal shooting by police in the line of duty since 2015. Researchers also have made progress on the question of whether there is bias in these killings, using methods that range from simple comparisons of police-shooting rates and crime rates, to complicated statistical models designed to separate the role of race from everything else that can lead to a police shooting.
The purpose of this report is to review the new data and studies, summarize where things stand, and offer suggestions for future work. I urge readers, whatever their prior beliefs, to consider these data and studies with an open mind because results in either direction could be plausible—and different strands of evidence indeed point in different directions.
America has a brutal history of racial violence and discrimination, and a small but non-negligible minority of whites still have racist views that they are willing to share with pollsters (such as that they would oppose a close relative marrying a black person or feel generally “cool” toward blacks). And sometimes, police do commit murder. More than 100 law-enforcement officers were criminally charged with murder or manslaughter for on-duty shootings between 2005 and mid-2019, and 35 of them were convicted of a crime even though the overwhelming majority of shooting incidents in that period undoubtedly lacked the body-camera footage that can most compellingly show what happened in each case. Other police killings have led to civil settlements with the jurisdictions involved but no criminal convictions for the officers. Given these facts, the possibility that some number of police killings are driven by racial bias cannot be dismissed out of hand.
At the same time, precisely because of the widespread belief that police are biased, killings involving black suspects can be subject to an extra layer of very intense scrutiny—scrutiny that most police officers want to avoid. Thus some researchers have posited a “counter-bias” effect in which cops might be particularly hesitant to shoot black suspects. Of course, it is also possible—even likely—that race plays different roles in different cases and that certain effects may be more or less pronounced in different cities or regions of the country.
So what do the basic numbers and five years of research reveal? These are the major findings detailed in the following pages:
• On-duty police fatally shoot about 1,000 people every year. This number and its racial breakdown have remained remarkably steady since 2015. The overall Post tally has ranged from a low of 958 in 2016 to a “record” of 1,055 in 2021 (reported as this paper went to press), with any pattern difficult to distinguish from random chance.
• Approximately a quarter of those killed are black. This is roughly double the black share of the overall population, but it is in line with—and sometimes below—many other “benchmarks” that one might use for comparison, such as the racial breakdowns of arrests, murders, and violent-crime offenders as reported by victims in surveys.
• Blacks are an even higher percentage of unarmed civilians shot and killed by police (34%), which is a potential sign of bias. However, not all shootings of unarmed civilians are unjustified, and it is difficult to objectively classify these cases in a more granular fashion. And contrary to the popular perceptions outlined above, confirmed fatal police shootings of unarmed African-Americans number about 22 per year.
• More rigorous research into the question of whether police killings reflect racial bias is in its infancy, and it has been subject to intense debates over the appropriate methods. But existing studies are divided on the bias question. Many papers fail to find bias in lethal force, though one of the most careful studies in the literature—of an unnamed city with a high murder rate—does find that white cops discharge their guns several times as often as black cops when sent to 911 calls in heavily black neighborhoods.
Clearly, the most extreme narratives, in which police kill nonthreatening, unarmed black men with high frequency, are false. But research continues as to whether there is some detectable, smaller level of bias in the nationwide data and whether problems manifest themselves differently in different places.
This paper makes several suggestions for the path forward. One is that the simple benchmarking approaches that early studies employed are past their use-by date—this approach has taught us all that it has to teach. Instead, the focus should be on more rigorous designs that, for example, account for the particularities of each shooting or leverage the races of the officers involved. Also, because the raw data suggest that there are enormous differences from place to place, fleshing out variations in police shootings might provide more information about the question of bias.
What This Report Does Not Do
This paper concerns the question of bias against African-Americans. This is not to dismiss concerns about bias against other groups, but blacks have a unique history in this country and are at the center of public concern about racism in policing. Further, other groups pose difficulties for research. Hispanic is an ethnicity rather than a race, meaning that someone might be Hispanic and white or black; ethnicity data are spotty in some key criminal-justice databases; allegations of lethal-force bias against Hispanics tend to be far less frequent and more muted, thanks partly to a far smaller raw disparity;  Asians are killed at lower overall rates than whites; and Native Americans, though killed by police at elevated rates, are a very small share of the U.S. population. 
This paper is also restricted to potential bias in lethal force. There is no intent to downplay other kinds of bias—or to deny that racial bias in, say, stops or arrests could lead to unnecessary interactions that later escalate into lethal force. It simply reflects the reality that public concern has largely focused on lethal force.
The data used in this report are overwhelmingly based on research conducted since about 2015, for two reasons. First, this was the time when private actors began collecting usable nationwide numbers on police killings. Second, while there are studies on race and police killings that date back several decades,  American whites’ racial attitudes have undergone a sea change since then, and it’s unclear how applicable older studies would be to the present moment. In the General Social Survey, for example, the share of non-Hispanic white adults who say that they’d oppose a close relative marrying a black person declined from 38% to 14% between 2000 and 2018;  14% is still significant but a long way from where we were just two decades ago, to say nothing of half a century ago. The American National Election Studies, with data dating back to the 1960s, also show an enormous drop in whites’ choosing to rate whites more than three points higher than blacks on a 100-point “feeling thermometer” scale—though about one-third still do. 
The focus of this paper is on the existence and extent of bias—and how further research and data collection could improve our estimates of it—and not on potential reforms to police lethal force policies in general.
The Data Problem
In the wake of Ferguson, many were shocked to find that there is no comprehensive government data set of police killings. The Centers for Disease Control and Prevention (CDC) tracked every death certificate in the country through its National Vital Statistics System, using a classification scheme in which police shootings were supposed to be coded as “legal interventions”—but many of these deaths were coded as generic homicides instead. A more intensive data-collection effort from CDC, the National Violent Death Reporting System, was limited to certain states. The U.S. Dept. of Justice collected records from police departments via the Supplementary Homicide Reports (which is part of the FBI’s Uniform Crime Reports) and the Arrest-Related Deaths program run by the Bureau of Justice Statistics (BJS)—but these were voluntary systems and missed most police killings. 
The upshot was that, while some specific departments did keep usable data on their own officers, it was hard to draw any conclusions about nationwide police shootings. The overall counts were too low. Patterns or trends could reflect reporting practices just as easily as they could reflect meaningful information. Some databases captured more killings than others, but even this wasn’t consistent over time: Supplementary Homicide Reports captured more than the National Vital Statistics System in the 1980s and 1990s, although the latter system captured more, starting in 2010.  Some places seemed to do a better job of reporting their deaths to these systems than others did. 
Numerous projects sprouted up after Ferguson to fill the gap, primarily by aggregating media reports of police killings. The Washington Post began an effort in 2015; other undertakings include Fatal Encounters, Killed by Police, Mapping Police Violence, and a two-year effort from The Guardian. The Major Cities Chiefs Association has also surveyed its members on officer-involved shootings.  Each project has a somewhat different focus, different methods, and different inclusion criteria.
Fatal Encounters, for example, collects its cases from a mix of paid researchers, public-records requests, and “crowdsourcing” (meaning that anyone can provide tips). It also casts a very wide net. As the head of the project has written, “We try to document all deaths that happen when police are present or that are caused by police: on-duty, off-duty, criminal, line-of-duty, local, federal, intentional, accidental—all of them.” This includes suicides and stabbings in the presence of police, car crashes and drownings resulting from suspects’ efforts to flee police, and murders committed by off-duty cops, as well as incidents stretching back to 2000, more than a decade before the creation of the database itself. The database does, however, contain the information needed to filter out cases that fall outside a narrower definition.
The Washington Post database, by contrast, is compiled by journalists (though there is a public tip system as well), includes only incidents that occurred since its debut in 2015, and counts only fatal shootings by police in the line of duty. This rule is straightforward to apply, does not require sorting out complicated medical evidence to evaluate officers’ culpability in non-shooting deaths,  focuses on cases that are highly likely to be reported in the media when they happen, and captures the prototypical (and, by far, the most common) use of lethal force by police.  For these reasons, this report uses this database as a source of basic tallies.
Yet by focusing on fatal shootings, the Post excludes some of the most protested police-involved deaths, including those of George Floyd, Eric Garner, and Freddie Gray, none of whom was shot. In many such cases, the police did not intentionally use lethal force, but the force or restraints they did use, combined with the suspect’s preexisting health problems or substance use, caused death. Of course, in creating inclusion criteria for a database of fatalities attributable to police—and not merely deaths that occur when police are “present”—it would be very difficult to draw a precise line as to what role police actions must play when causes of death are complicated in these ways.
The Post database also misses the case of Jacob Blake, who was shot multiple times but survived—and the best data sets’ focus on fatalities creates an ongoing gap in our knowledge. A great many police shootings are nonfatal, and whether the person shot survives often depends on factors outside the control of the police, such as the availability and quality of trauma care. These factors are not necessarily evenly distributed with regard to race, and there is evidence, from cities and states with available data, that shootings of blacks are more likely to be nonfatal.  This could mean that racial disparities measured in fatalities underestimate the disparity in the overall use of lethal-level force.
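The direction of that effect can be illustrated with a short calculation. All of the numbers below are invented for illustration; the point is only that if shootings of black suspects are nonfatal at a higher rate, a disparity measured on fatalities alone will understate the disparity in total shootings:

```python
# Hypothetical illustration (all numbers invented): how differential
# survival rates make fatality counts understate the disparity in
# total police shootings.

def total_shootings(fatal_count, survival_rate):
    """Infer total shootings from fatal shootings and the share who survive."""
    return fatal_count / (1 - survival_rate)

fatal_black, fatal_white = 250, 450   # hypothetical fatal-shooting counts
surv_black, surv_white = 0.45, 0.35   # hypothetical survival rates

total_black = total_shootings(fatal_black, surv_black)   # about 454.5
total_white = total_shootings(fatal_white, surv_white)   # about 692.3

# Black:white disparity ratio on fatalities vs. on all shootings
fatal_ratio = fatal_black / fatal_white   # about 0.56
total_ratio = total_black / total_white   # about 0.66, i.e., larger
```

Under these made-up inputs, the black:white ratio computed from fatalities (about 0.56) is smaller than the ratio computed from all shootings (about 0.66), which is the underestimation the text describes.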
To this day, meanwhile, the federal government is still getting its act together. The National Use-of-Force Data Collection from the FBI began gathering numbers from police departments in 2019, but participation is voluntary and spotty—and thus far, the publicly available numbers merely tally participation in the program, rather than informing the public about actual use-of-force incidents.  The Government Accountability Office recently warned that “the FBI faces risks that it may not meet the participation thresholds established . . . for publishing data from the National Use-of-Force Data Collection, and therefore may never publish use of force incident data from the collection.”  Also in 2019, BJS reported some results from a pilot study of improvements to its Arrest-Related Deaths program, which involved surveying police departments and analyzing news reports,  but these improvements still have not produced nationwide annual data. CDC’s “legal intervention” tally has improved; but as of 2020, it still fell about a fifth short of the Washington Post’s fatal-police-shooting count.  CDC’s National Violent Death Reporting System has started collecting data from all 50 states, but not all states have data publicly available yet. 
When a government employee, acting in his official capacity, shoots or kills an American civilian—justified or not—the public should know what happened. The government’s failure to comprehensively report this information needs to be rectified, and private efforts to keep track of this information deserve the public’s gratitude. Thanks to these projects, for example, we know that police shoot and kill about 1,000 people throughout the country each year and that changes in the number and racial composition of the shootings have been minor in recent years,  though there are signs of different trends in urban, suburban, and rural areas.  We also know some basic details about who was shot and in what situations. In other words, there are now decent data that can be analyzed.
Crime-Rate Gaps and the Simplest “Benchmarking” Approaches
If police were shooting suspects without any trace of racial bias, what would be the racial breakdown of those shot? This is the elusive concept behind benchmarking studies, which are a simple and intuitive way to start a dive into the data—even if they can never resolve the issue.
It is undisputed that black Americans are “overrepresented” among those shot to death by police, accounting for a little over 25% of those killed, despite being only about 13% of the general population.  But racial bias by police is only one potential explanation. That is, the general population might not be the appropriate benchmark against which to compare those shot by police.
Police are deployed to neighborhoods partly on the basis of crime rates; they are also expected to respond to civilian reports of crime, get involved when they see illegal behavior, and serve arrest warrants. Meanwhile, police are generally allowed—and, indeed, trained—to shoot suspects when they reasonably believe that it’s necessary to end a threat to life or limb —for example, when a suspect attacks the officer or someone else with a lethal weapon. Thus, racial gaps in police contact and rates of serious violent crime are commonly posited as alternative explanations for elevated rates of police-shooting deaths for African-Americans.
There are factors on the other side of the scale, too, however—phenomena that can lead to police shootings and disproportionately affect other groups, including whites, who account for a majority of the U.S. population and about half of those shot and killed by police. For instance, there is substantial reason to believe that “suicide by cop,” in which suspects deliberately behave in ways police will interpret as a lethal threat, is more common among whites. One study estimated that about 18% of all police killings bore signs of suicide by cop and that killings of whites fell into this category disproportionately.  About 30% of whites shot by police from 2015 to 2020 had signs of mental illness, as tallied by the Washington Post, compared with 16% for blacks, and the overall white suicide rate was more than 2.5 times the rate for blacks during that period.  While more difficult to measure, white-predominated areas may also have more lenient attitudes toward the use of force by police.  Numerous observers have noticed, for example, that police-shooting rates in the West and Southwest are surprisingly high.  Any honest reckoning of racial bias has to be mindful of such variables as well. One can’t just take into account the factors that might explain the disparity we observe, while ignoring everything that pushes in the opposite direction.
The idea of benchmarking, ultimately, is to compare the racial breakdown of those shot by police with the racial breakdown of some other population that represents who would be shot by police if they did their jobs in the way they are trained. One clear limitation is that this exercise is necessarily crude, amounting to comparing two numbers side by side while ignoring the particulars of all the shootings. Even if the real-world disparities matched the “correct” disparities that we’d expect to see in a world where police shot only those who posed an immediate threat, that wouldn’t disprove the existence of unjustified shootings and bias in those cases.  Another limitation is that the ideal benchmark does not exist: there is no data set of every police–civilian interaction in the whole country, objectively graded as to the degree of threat that the civilian posed to the officer or others, much less a data set that also includes situations where cops could have stopped someone but did not.
In practice, what benchmarking studies do is compare those shot by police with various other populations with relevant characteristics—for example, those who commit assorted crimes as reported in surveys of victims; individuals who are arrested or otherwise interact with police; murderers; homicide victims (who are counted more thoroughly and tend to have the same race as their killers); and cop killers (who represent proven lethal threats to police, though police shoot and kill roughly 20 people for every officer killed).
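The arithmetic of a benchmarking comparison is simple enough to sketch. In the fragment below, the shooting share (about 25%) and population share (about 13%) are the approximate figures cited in this report; the other benchmark values are placeholders, not real data:

```python
# Benchmarking sketch: compare the black share of fatal police shootings
# with the black share of candidate comparison populations. The shooting
# and population shares are approximate figures from the report; the
# remaining benchmark values are placeholders for illustration only.

share_shot = 0.25   # black share of fatal police shootings (approximate)

benchmarks = {
    "general population": 0.13,        # approximate, from the report
    "benchmark A (placeholder)": 0.30, # hypothetical comparison group
    "benchmark B (placeholder)": 0.50, # hypothetical comparison group
}

for name, share in benchmarks.items():
    # Ratio > 1 means blacks are overrepresented among those shot
    # relative to this benchmark; < 1 means underrepresented.
    ratio = share_shot / share
    print(f"{name}: disparity ratio = {ratio:.2f}")
```

The same two-number comparison underlies every benchmarking study; as the text notes, the entire question is which denominator, if any, is the right one.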
These measures vary widely, and there are limitations to the data involved. For instance, not all police agencies report their arrests to the federal government, and race tallies in these sources often rely on the perceptions of crime victims or police officers.  Races are “unknown” for many observations in some data sets, which can warp the numbers if these omissions are nonrandom with respect to race. (For example, it’s possible that the race of someone killed by police is more likely to be reported in the media if the deceased is black.  And murders of blacks are less likely to be “cleared” by police,  which, since homicide is overwhelmingly intra-racial, could lead to an undercount of black murder offenders.)
Further, each of these measures represents only some aspects of the population that we are trying to conceptualize. Even in individual cases, it is not always clear what the relevant comparison group might be. Imagine, for example, that an officer stops someone for jaywalking, and then the suspect attacks the cop with a knife and yells, “Shoot me!” The scenario invokes jaywalking stops, knife attacks on police officers, and suicide-by-cop. And while it’s a contrived hypothetical, the actual situations surrounding police shootings vary immensely.
In its pilot study of improvements to the Arrest-Related Deaths program, BJS found that about half of arrest-related homicides by law enforcement had begun with a request for police assistance regarding some type of suspicious or criminal activity. Four other categories combined to explain most of the rest: routine patrols, traffic stops, warrant service, and assorted health and welfare-check calls. Over 10% of cases went into the catch-all bucket “some other reason.” In the data collected by the Major Cities Chiefs Association, about 45% of officer-involved shootings began with self-initiated police activity, with police being summoned to the remaining situations by the public; shooting incidents “were most commonly precipitated by calls for service or officer-initiated activity relating to armed person (18%), robbery (10%), and traffic stops (8.5%).”
Finally, many possible benchmarks, such as arrests, are affected by cops’ use of discretion, potentially including racial bias—which greatly complicates any attempt to use these benchmarks as tests for bias in themselves. If innocent blacks are frequently arrested out of bias, for example, this will artificially inflate the arrest benchmark by including people who have little risk of attacking cops and thus are not part of the population we’re trying to measure. The degree of bias in arrests and other police activities is a matter of much dispute, though I will highlight the fact that, at least for violent crimes, arrest rates loosely track crime rates as estimated in surveys of victims. 
Where the Benchmarks Fall
With these serious caveats in mind, it remains worthwhile to review the basic data on various benchmarks. After all, the core idea behind benchmarking—that crime rates and other contextual variables are relevant and that we should not assume that bias is the only possible reason for any disparity—remains entirely sound.
Federal data overlapping with the Washington Post’s police-shooting database are easily available on arrests, both in general and for specific crime categories, such as assault; on those who kill police officers or injure them with knives or guns; on police contacts as measured through a survey of the general public; and on homicide and suicide deaths. In addition, I conducted a simple analysis of the U.S. Census Bureau’s National Crime Victimization Survey for the years 2015 to 2020, dividing violent-crime incidents into “less” and “more” serious categories (drawing from a schema employed in a previous benchmarking study). This victimization survey asks the general public about the crimes that they have experienced, and it collects data on the race of the offenders.
Figure 1 is a simple illustration of where these numbers lie, merely comparing the black share of the various populations as measured in these data sets. (“Race unknown” numbers are removed from the denominators, and the chart includes data on the unarmed that we will discuss in the next two sections.)
The black share of total fatal police shootings is higher than the black share of a few potential comparison groups—most severely so in the case of suicide deaths, though this category is relevant to only a minority of all police shootings, but also in the case of police contacts as measured in a survey of the public. (Blacks are actually underrepresented among those with any type of contact with police, and while they are overrepresented among those reporting the use or threat of force by police, this overrepresentation is smaller than that for fatal police shootings.) Yet the black share of police shootings is in line with, or even below, most measures of violent crime and criminal-justice-system involvement, including the most extreme measures: homicides and cop-killings. Indeed, a notable pattern is that the more severe measures tend to have higher black shares. The more selective we imagine police are in whom they shoot, the higher the black share we should expect among those shot, going by crime-rate benchmarks.
The results above, while relying on simpler calculations and including more recent data, are consistent with numerous studies using benchmarking approaches. Perhaps most prominently, Joseph Cesario, David J. Johnson, and William Terrill benchmarked two years’ worth of police shootings to 16 different crime rates. They found “no systematic evidence of anti-Black disparities in fatal shootings”; virtually none of the possible comparisons suggested antiblack disparities, and some suggested antiwhite ones. 
A 2017 study found: “Ratios of [hospital-]admitted and fatal injury due to legal police intervention per 10,000 stops/arrests did not differ significantly between racial/ethnic groups.” Another study, using data from 2015–17, found: “Using population, police-citizen interactions, or total arrests as a benchmark, we observe that black citizens appear more likely than white citizens to be fatally shot by police officers”; but using “violent crime arrests or weapons offense arrests, we observe that black citizens appear less likely to be fatally shot by police officers.” Additionally, a 2015 New York Times article by the discrimination researcher Sendhil Mullainathan noted that blacks’ share of arrestees and of those killed by police were quite similar, as did a 2021 piece (relying on arrests for violent crimes specifically) by the criminologist Barry Latzer published in the Manhattan Institute’s City Journal.
A paper by John A. Shjarback and Justin Nix, however, reached more nuanced conclusions, drawing on state-specific data that included nonfatal shootings.  It used three different data sets of police shootings (“The Washington Post’s counts of fatal officer-involved shootings, fatal and injurious officer-involved shootings in Texas, and all firearm discharges by officers in California”) and compared them with demographic data on people who assaulted cops in various ways in the relevant jurisdictions. The authors concluded: “African-Americans were not more likely than whites to be fatally shot nationally or shot and injured/killed by police in Texas based on the benchmarks used. However, African-Americans were more likely than whites to be shot at by California police.”
Nationwide, the black share of those killed by police is not implausibly high, once one takes into account these other statistics. That is important, as it contradicts the common belief that police racism is shockingly rampant and lethal—in which case, one would expect the black police-killing rate to outstrip any plausible benchmark. But benchmarking leaves many questions unanswered—as the more nuanced study with state-specific data implies—and it cannot be said to disprove the existence of any bias whatsoever.
Benchmarking the Unarmed
In the universe of police shootings, African-Americans may not be overrepresented to a degree that can be explained only by racial bias. But what about the much smaller number of shootings that are unjustified?
There is no way to tally such cases without resorting to subjective judgments because the evidence is not always clear—and even experts sometimes disagree as to whether a given use of force was consistent with the law and/or officers’ training. So researchers and journalists instead use what they view as proxies—or, at least, red flags—for unjustified killings, such as whether the victim was unarmed. Yet there are a few notable facts about fatal shootings of unarmed civilians that need to be kept in mind.
First, going by the Washington Post database, and excluding the 331 cases where the database contains no information on the suspect’s weapon or codes it as “undetermined,” there were 403 confirmed fatal police shootings of unarmed civilians in the six years from 2015 through 2020, an average of about 67 per year. This is not a trivial number, and it does not include non-shooting deaths. But for context, recall that about 1,000 police shootings occur every year, alongside tens of millions of police-civilian interactions, 10 million arrests (more than a million of them for assaults), and about 16,000 murders annually.
Second, of unarmed civilians fatally shot by police, the black share—34% of cases where the race of the deceased is known, with a total of 133 cases, or about 22 unarmed blacks killed per year—is higher than the black share of armed civilians fatally shot by police, 26%.  This is a potential sign of bias, especially if we assume that these shootings are predominantly unjustified killings of innocent people—in which case, we would expect them to reflect overall population demographics more, not less, than shootings of armed suspects do.
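The per-year averages and shares above follow from simple arithmetic on the Post’s counts. A brief sketch makes the calculation explicit; the figures are those quoted in the text, not a fresh pull of the database:

```python
# Figures quoted above from the Washington Post database, 2015-2020.
# These numbers are taken from the text, not from a fresh query.
years = 6
unarmed_total = 403        # confirmed fatal shootings of unarmed civilians
unarmed_black = 133        # unarmed decedents identified as black

per_year_total = unarmed_total / years   # about 67 per year
per_year_black = unarmed_black / years   # about 22 per year

# The 34% black share is computed over cases where the decedent's race is
# known, so the implied known-race denominator is:
known_race_cases = unarmed_black / 0.34  # about 391 cases

print(round(per_year_total), round(per_year_black), round(known_race_cases))
```

Note that the choice of denominator (all cases versus known-race cases) shifts the share by a few points, which is one reason different write-ups of the same database report slightly different percentages.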
But third, the context surrounding these shootings should make us question that assumption. As the Post database and other tallies of unarmed killings rose in prominence, there were several efforts to evaluate these incidents.
In their book In Context: Understanding Police Killings of Unarmed Civilians,  Nick Selby, Ben Singleton, and Ed Flosi evaluated every known case of an unarmed person being killed by police in 2015—a total of 153.  Their evaluations of cops’ behavior were often critical, and sometimes they faulted police departments for failing to release all the information that a sound judgment would require. But they also warned that “unarmed” does not imply “not dangerous,” noting that “the majority of those ultimately killed by police were themselves engaging in behavior that was criminal (which brought the police to the scene), and posing direct threats to law enforcement or other civilians (which most often precipitated the use of force).” They further pointed out that the racial composition of those killed did not vary based on whether police had been called to the scene, or whether the incident started with an interaction that may have involved discretion.
In her own review of 38 unarmed killings of African-Americans in 2015, Heather Mac Donald (a colleague at the Manhattan Institute)  pointed out that some suspects had tried to grab officers’ guns or been killed in accidental discharges triggered by their own attacks on officers.  In two cases, unarmed people were struck by shots intended for armed targets, one of those victims being an uninvolved bystander, the other being a passenger in a car whose driver had shot at the police.
Also worth pointing out is that, just as not all shootings of unarmed civilians are necessarily unjustified, not all shootings of armed civilians are necessarily justified, either. Every state in the country allows at least some civilians to legally carry guns;  and even individuals illegally carrying guns or holding other weapons do not necessarily pose an imminent threat at any given moment.
When an unarmed person is killed by police, it might indicate a cold-blooded murder by the officer. Or it might indicate that a violent suspect posed a serious threat despite lacking a weapon. (Nationwide, more than 600 murders per year are accomplished without a weapon, roughly 5% of all murders.  And while suspects armed with guns account for the overwhelming majority of police officers slain —likely in large part because police are armed with guns themselves and it is famously unwise to bring a lesser weapon to a gunfight—about two officers are killed with their own weapons every year, and another officer is killed by an unarmed assailant approximately every two years.)  Or it could indicate any number of other situations. These situations will all reflect different underlying racial disparities in society; in the aggregate, those disparities need not be the same as the disparities underlying killings of the armed.
Importantly, blacks make up a higher share of unarmed civilians fatally shot by police than of police shootings in general. This fact raises the possibility of bias in some number of cases, a possibility that skeptics of the police-racism narrative must grapple with. But like shootings in general, shootings of the unarmed have no clear benchmark to measure against, so deeper conclusions will have to come from more elaborate analyses of the data.
What About Unarmed and Not Aggressing?
In addition to a variable indicating whether the deceased was armed, the Washington Post codes the “threat level” of each case. The classification is rudimentary. There is an “attack” category representing what the Post deems “the most direct and immediate threat to life,” with all other situations coded as “other” or “undetermined.”  The Post cautions that the “other” category includes “many incidents where officers or others faced significant threats,” especially situations where a suspect brandished a knife and refused to drop it.
Some analysts have thus focused on situations where the suspect was unarmed and not attacking—sometimes relying on the coding contained in the Post database or similar projects, and sometimes coding the incidents themselves. Through 2020 in the Post data, 208 cases meet this narrower definition; 32% of the deceased in these cases were black, versus 26% of all other cases (excluding from the denominators cases with unknown race, weapon, or threat level).
On average, the Post records about 11 fatal shootings of blacks in this category per year. Although this approach does a better job of zeroing in on cases that might be unjustified or at least fall into a gray area, it still covers a broad enough range of situations to justify caution.
For instance, this narrower category, as employed by the Post (and by its literal terms), includes cases where suspects disregarded police orders in menacing ways—for instance, by reaching into a vehicle when instructed at gunpoint not to do so.  This is the canonical “split-second decision” that turns tragic: based on the suspect’s behavior, the officer might reasonably conclude that the suspect is about to pull a gun—and if the officer waits to find out for sure, he’ll no longer have time to shoot before the suspect does. (The law does not require police to know the future or have X-ray vision; it requires police to react reasonably to what they see happening.) This category also includes situations where noncompliant suspects held nonthreatening objects that officers misperceived—perhaps reasonably or perhaps not, given the situation—as guns.  We should want to know whether race plays a role in misperceptions, especially unreasonable ones, but the aggregate data alone cannot prove that.
Discretion is also involved in these classifications and has often served as the basis for criticism of the Post’s data.  Curiously, the Post counts the case of Ashli Babbitt in early 2021 in the “attack” category. Babbitt, a white woman acting as part of a mob storming the Capitol on January 6, was shot climbing through a window toward the Speaker’s Lobby, but she was not directly attacking anyone at the moment the officer fired. (One trio of prominent use-of-force experts has expressed “serious reservations about the propriety of the shooting” as evaluated through the “typical legal framework,” though they also doubt that the normal rules are appropriate in such an outlier of a case.)  And in the aforementioned case where the deceased, a black woman, was an unarmed passenger in a car whose driver had fired on the police, the threat level is coded as “undetermined.” 
At any rate, the debate over benchmarking is even more fraught for categories like “unarmed and not attacking” than it is in other circumstances. For example, while they declined to reach firm conclusions on the topic, Cesario and his coauthors found that blacks may not be overrepresented among the unarmed and non-aggressing, as well as among cases where officers misperceived non-weapon objects, when compared with many crime benchmarks. A response to the study, however, benchmarked these shootings to the population of “unarmed noncriminals”—in which case, there was an apparently substantial antiblack bias because blacks are overrepresented among these shootings relative to their share of the overall population. 
Neither approach comes anywhere close to being satisfactory. In no possible world would those shot by police, even if unarmed and not attacking, be representative of the overall noncriminal population, because even these cases hinge on where police are deployed, where they are called for service, and with whom they interact in tense circumstances, including stops and arrests. Nor would we expect these individuals to be representative of any particular criminal population.
“Control for What Happened” Studies
Consider the following alternative to a typical benchmarking study. First, compile a data set that includes rich detail about police-shooting incidents, as well as a “risk set” of incidents where police encountered civilians in confrontational situations but did not shoot. This comparison group might include cases where suspects were Tased instead of shot, or cases where cops drew their guns without firing. Second, instead of just comparing the racial breakdowns of the two groups, as one would do in the simplest benchmarking analysis, put together a statistical model that takes into account all other details, such as the time of day, whether the suspect whom the cop encountered was armed or attacking, whether the incident started with a call or a traffic stop, and the violent-crime rate of the surrounding neighborhood. This allows you to compare black suspects and white suspects who encountered cops in roughly similar situations, and determine whether those from one racial group or the other were more likely to be shot.
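To make the design concrete, here is a minimal sketch of the core idea, using wholly invented incident records: compare shooting rates by race only within strata of encounters that look situationally alike. The actual studies described below estimate regression models with many controls (Fryer’s Houston data set coded 290 variables), but the stratified comparison conveys the logic:

```python
from collections import defaultdict

# Invented illustrative records: each confrontational encounter, whether the
# suspect was armed and attacking, the suspect's race, and whether police shot.
incidents = [
    {"race": "black", "armed": True,  "attacking": True,  "shot": True},
    {"race": "white", "armed": True,  "attacking": True,  "shot": True},
    {"race": "black", "armed": True,  "attacking": False, "shot": False},
    {"race": "white", "armed": True,  "attacking": False, "shot": True},
    {"race": "black", "armed": False, "attacking": False, "shot": False},
    {"race": "white", "armed": False, "attacking": False, "shot": False},
]

# Tally shootings and encounters by (situation, race) stratum.
tallies = defaultdict(lambda: [0, 0])  # (armed, attacking, race) -> [shot, total]
for inc in incidents:
    cell = tallies[(inc["armed"], inc["attacking"], inc["race"])]
    cell[0] += inc["shot"]
    cell[1] += 1

# Within each situational stratum, compare shooting rates across races.
for (armed, attacking, race), (shot, total) in sorted(tallies.items()):
    print(f"armed={armed} attacking={attacking} {race}: {shot}/{total} shot")
```

The sketch also illustrates the limitation discussed next: any situational variable left out of the stratification (here, everything beyond “armed” and “attacking”) silently pools unlike encounters together.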
Such an analysis includes far more detail than raw benchmarking does, but it still comes with serious limitations. Most importantly, no set of control variables can perfectly account for everything that is relevant in every situation, so any remaining racial difference in outcomes cannot be confidently attributed to race itself rather than to unmeasured factors. Relatedly, critics of these kinds of studies often point out that if racial bias helps determine who is included in the risk set—who, e.g., gets Tased or has a cop’s gun pointed at them—then the risk set might include many innocent or low-risk black civilians, and the controls may not be sufficient to capture how little of a threat they pose.  This effectively gives the police “credit” for not shooting blacks who shouldn’t have been in the data to begin with; those conducting this research have taken steps to address this potential problem.
The most famous study in this vein is Roland Fryer’s “An Empirical Analysis of Racial Differences in Police Use of Force,” which received prominent coverage in the New York Times in 2016 and was ultimately published in the Journal of Political Economy.  Fryer found evidence of bias in the use of nonlethal force (which is outside the scope of this report) but not in fatal shootings.
The study’s most thorough lethal-force analysis relied on data from Houston in 2000–2015 and thus may not apply to other areas of the country. Relying on narratives provided by police— another key limitation—Fryer constructed a data set including situations where officers had fired their guns, as well as a risk set comprising arrests for “attempted capital murder of a public safety officer, aggravated assault on a public safety officer, resisting arrest, evading arrest, and interfering in an arrest”: situations that generally involve physical confrontations between officers and suspects and conceivably could have escalated to lethal force. For each record, Fryer and his team painstakingly coded 290 variables concerning what happened, and merged in other sources of data regarding the officers and offenders, too. They also created a separate risk set of situations where officers had discharged their Tasers, though these data allowed the use of fewer control variables.
Fryer ran a wide variety of models on these data. Many of the results were statistically insignificant and imprecisely estimated, but there was little sign that blacks were more likely than others to be shot. Some results suggested the opposite. Importantly, the results were not sensitive to the types of calls from which the incidents stemmed, which helps address a theory that has captivated Fryer’s critics: that the practice of “officers seeking confrontation in random street interactions” had distorted his comparison group. 
Additionally, Fryer did a different analysis of data from Houston and 15 other jurisdictions. The focus here was on the timing of the shooting: Did the officer fire before or after the suspect attacked, at least according to police reports, with controls for some key variables regarding the encounter? Once again, there was no sign of an antiblack disparity, and some results suggested the opposite. And once again, the results were consistent across different call types.
Several other studies are worth noting here as well.
In a study using data from New Jersey, Carl Lieberman took steps to limit the role of officer discretion.  In one analysis, he restricted the data to cases where race was unlikely to play a role in the officer’s initial decision to get involved (e.g., crimes in progress and traffic stops at night) and also to cases where the officers had, in fact, used some force. For gun discharges, the results of Lieberman’s stronger models were statistically insignificant. However, the raw (“no controls”) numbers in his analysis, unlike those in Fryer’s, suggest higher lethal-force rates for blacks and Hispanics, and some of Lieberman’s more complicated models produce results that, while statistically insignificant, point in the same direction. (Like Fryer, Lieberman found much stronger signs of bias in nonlethal force.) The statistical imprecision in both Fryer’s and Lieberman’s studies, of course, highlights the value of big sample sizes: with more data, researchers could obtain more precise estimates.
Two studies have compared police shootings in a department with situations where officers drew and pointed their guns but did not shoot. In a study of Dallas, Andrew P. Wheeler and three coauthors found that “situational factors of whether the suspect was armed and whether an officer was injured were the best predictors of the decision to shoot” and that “African Americans are less likely than Whites to be shot.”  Relying on data from an unnamed department in the Southwest, a study by John L. Worrall and four coauthors similarly found that black suspects were substantially less likely to be shot, both in the raw data and in a model that controlled for assorted characteristics of the incident and officers involved.  Of course, these studies are limited by the fact that officers may not always properly document decisions to draw their guns without shooting, as well as the possibility of racial bias in officers’ decision to draw and point their guns, which could distort the comparison group.
However, two follow-up studies took a step back, looking for racial bias in the decision to draw a firearm or a Taser, in order to gauge the extent of the latter problem. One study by John L. Worrall, Stephen A. Bishopp, and William Terrill, relying on data from Dallas, was limited to “arrest and active aggression cases” and found that “black suspects were no more or less likely to have weapons drawn against them than other suspects.”  The other study, by Jordan R. Riddell and John L. Worrall—using “response to resistance” data from New Orleans—found “no consistent evidence of racial bias in firearm draws,” though the results did vary a bit across different models. 
Several studies were limited to killings—i.e., they didn’t include similar nonlethal incidents for comparison—but analyzed whether race correlated with other details of the situation. Shea Streeter found that “decedent characteristics, criminal activity, threat levels, police actions, and the setting of the lethal interaction are not predictive of race, indicating that the police—given contact—are killing blacks and whites under largely similar circumstances.”  By contrast, Justin Nix and three coauthors found: “Black civilians were more than twice as likely as White civilians to have been unarmed” when killed by police, even after controlling for threat level using data from the Post, which they interpreted as a sign of “implicit” bias, an indicator that cops were misperceiving threats when dealing with minority suspects.  Marilyn D. Thomas, Nicholas P. Jewell, and Amani M. Allen found that, among those shot by police, black men are more likely than white men to be unarmed “among those older than 54 years, mentally impaired, and residing in the South.” 
Leveraging the Race of the Officer
Intuitively, it seems reasonable to assume that black police officers have less antiblack bias than white officers do. So in theory, if white officers are shooting black suspects out of antiblack bias, black officers should shoot black suspects less often.
However, this assumption is difficult to leverage in a statistical analysis. For one thing, the African-American population is not evenly distributed throughout the country; in places with higher black populations, both police and civilians are more likely to be black,  causing officer and civilian race to be correlated in national data sets unless researchers account for local demographics. Also, black officers are more likely to be assigned to black neighborhoods even within some cities, often by choice. For example, a recent study by Bocar Ba and four coauthors, using data from Chicago—where district assignments are based on officers’ preferences and seniority (a common system)—found: “Black officers have the greatest preference for working in majority-black districts and the lowest preference for working in majority-white districts.” 
Bearing in mind these limitations, it helps to divide research on officer race into three categories: studies focusing on the overall racial composition of police departments; studies of the correlation between officer and civilian race among those shot by police; and an especially promising study that leveraged the quasi-random assignment of cops of different races to calls for service.
The literature is mixed on the question of whether hiring more black officers reduces the use of lethal force against blacks.
In a study of “group threat,” Joscha Legewie and Jeffrey Fagan found that more blacks are killed by police in places with higher rates of black-on-white homicide but that this relationship weakens as black representation on the police force grows. However, their direct estimates of how black police representation affects killings of blacks were statistically insignificant, and thus any finding regarding “an overall effect of Black officer representation on the number and rate of officer-involved killings of blacks is inconclusive.” 
Another study, by Sean Nicholson-Crotty, Jill Nicholson-Crotty, and Sergio Fernandez, found that more black civilians might be killed by departments with more black officers, at least until the black share in the department reaches about 35% or 40%, after which the relationship may change direction.  The U.S. population is 13% black, so 40% black police departments are a plausible goal for relatively few cities. The Nicholson-Crotty paper further challenges the notion that we should expect black officers to be less biased toward black suspects to begin with, unless there is a “critical mass” of black officers in the department: “Some of the literature on policing actually suggests that black police officers may be more likely to discriminate against black citizens because of increased pressure to adopt an organizational role that prescribes such behavior.”
Still another study, by Malcolm Holmes, Matthew Painter, and Brad Smith, built models predicting police homicides at the city level, both overall and of blacks specifically.  In these models, greater black representation on the police force was not a significant predictor.
Most recently, however, Shytierra Gaston, Matthew J. Tetti, and Mattheson Sanchez found that—after controlling for local crime rates and economic conditions, among other variables—departments with higher black representation killed relatively fewer black civilians.  Specifically, they divided the percentage of each police force that is black by the percentage of the population that is black (so that the result is one, if the two shares are equal) and found that “for a 1-unit increase in the Black racial congruence ratio, the rate at which police kill Black civilians decreases by 28%.”
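The “racial congruence” measure is straightforward to compute. A small sketch, with function names of my own choosing and the 28% figure taken from the estimate quoted above, shows how the ratio and its reported association would work out under a simple multiplicative rate model:

```python
def congruence_ratio(pct_black_officers: float, pct_black_population: float) -> float:
    """Gaston et al.'s measure: equals 1.0 when the force's black share
    matches the local black population share."""
    return pct_black_officers / pct_black_population

def implied_multiplier(delta_ratio: float, drop_per_unit: float = 0.28) -> float:
    """Rate multiplier implied by the quoted estimate (a 28% drop in the
    black police-killing rate per 1-unit increase in the congruence ratio),
    assuming the effect compounds multiplicatively."""
    return (1 - drop_per_unit) ** delta_ratio

# A department matching its city's demographics scores exactly 1.0:
print(congruence_ratio(25.0, 25.0))
# A 1-unit increase in the ratio implies multiplying the rate by 0.72:
print(implied_multiplier(1.0))
```

Whether the effect truly compounds this way across larger changes in the ratio is an assumption of the sketch, not a claim of the underlying study.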
Individual Officer and Suspect Demographics
One of the most widely discussed police-shooting studies, by David Johnson and four coauthors, has been retracted.  Oddly, the authors’ retraction notice maintained that their “data and statistical approach were appropriate for investigating whether officer characteristics are related to the race of civilians fatally shot by police.”  While they had described some of their methods and results in confusing and imprecise terms, and had previously corrected bits of that language,  there was no sign of any miscalculation in the data analysis. As justification for fully retracting the study, the authors said only that “our work has continued to be cited as providing support for the idea that there are no racial biases in fatal shootings, or policing in general.”
Since the numbers in themselves have not been called into question, and since no scientific reason has been given for the complete retraction, it remains worthwhile to review the study’s findings and limitations.
The study revolves around a variable that most fatal-police-shooting data sets lack: the race of the officer, which the authors were able to secure for most, but not all, of their cases and which they statistically “imputed” for the remaining ones. Thus the study could shed light on whether white officers were disproportionately involved in shootings of blacks. In a model that controlled for the demographics of homicide victims in each county where a shooting occurred, the paper found no relationship between the races of deceased suspects and the races of the officers involved. Of course, the major limitation is the one noted above (and forcefully pointed out in a reply to the paper):  the study cannot fully account for how often officers and civilians of different races encounter each other. Again, police are often assigned to same-race neighborhoods even within counties, so county-level demographic controls are inadequate for this purpose.
Two less widely reported studies—one by John R. Lott Jr. and Carlisle E. Moody  and the other by Charles Menifield, Geiguen Shin, and Logan Strother —employed similar methods, found the same thing, and were subject to the same caveat. However, both these studies were far less successful in getting officers’ races, missing the information in about two-thirds of cases,  which could be problematic if officer races are particularly likely to come out in the media in cases involving black civilians and/or white cops  (in which case, such shootings would be overrepresented in the data).
Beyond checking for an overall link between suspect and officer race, Lott and Moody further analyzed whether a black suspect is more likely to be unarmed when the officer who killed him is white than when the officer is black, finding no statistically significant difference in a regression with controls. This result is hardly definitive, given the incomplete and relatively small sample (especially of certain key combinations, such as unarmed black suspects shot by black officers)—not to mention that black officers and white officers might differ in which demographics of black suspects they encounter, just as they differ in how often they encounter black suspects at all. Still, their approach is promising. Similarly, the Menifield study reported that “white officers are not disproportionately killing lower-threat (non-gun-wielding) minority suspects.”
A 2020 study by Katelyn K. Jetelina and three coauthors was limited to officer-involved shootings in Dallas (2005–15), but the results did not contradict the findings above. The Jetelina study found that “officer race/ethnicity was not associated with the race/ethnicity of the civilian during [officer-involved shooting] incidents.” 
The aforementioned studies look for correlations between the races of officers and the civilians they kill. By contrast, another study of Dallas, by Scott W. Phillips and Dae-Young Kim, looked at whether the interaction of officer and suspect race correlated with the decision to shoot in the first place, including data on situations where police had drawn their weapons without firing.  The results for these interactions were statistically insignificant (as was the effect for citizen race by itself); the biggest correlates of the decision to shoot were obvious factors such as whether the suspect had a gun and whether the officer was injured.
Another notable analysis (by George Fachner and Steven Carter) that included officer race was an assessment of about 350 officer-involved shootings in Philadelphia.  Their data allowed them to compare rates at which members of various racial groups shot by police were unarmed or apparent victims of threat-perception failure, as well as to see whether shootings of black suspects by black officers were less likely to involve threat-perception failure. The differences were statistically insignificant. The study was published in 2015, and it would be worth repeating with an expanded data set that included numbers from more recent years.
The Lieberman and Fryer studies discussed above checked to see whether the races of the officers on the scene made a difference for lethal force. Lieberman looked at whether cops were less likely to discharge their weapons when they were of the same race as the suspect whom they were using force against, and found statistically insignificant results;  Fryer found that his results didn’t change in subsamples based on officers’ races, and he reported a detailed analysis revealing how the officers’ races interacted with whether the person shot was armed:
A Natural Experiment
Mark Hoekstra and CarlyWill Sloan presented an especially promising research design in their study, “Does Race Matter for Police Use of Force? Evidence from 911 Calls.”  The study’s lethal-force results for blacks and whites pertain to an anonymous city with one of the 20 highest big-city homicide rates in the nation. (The study also has overall use-of-force results for another city that is heavily Hispanic.)
The important insight of the paper is that, in the big city, there is no discretion in who responds to a 911 call. Dispatchers follow a strict protocol: the officer working the geographic beat in question gets the call, unless that cop is busy; in that case, the next officer geographically closest gets the call. Thus, there is an element of randomness in the race of the officer assigned to any given call, as it is not the result of, say, dispatchers preferring to send black cops into black neighborhoods.
Hoekstra and Sloan’s strongest statistical models make comparisons across 911 calls within beats and shifts, or even across 911 calls worked by specific officers, so the results are not driven by certain areas and times being worked more heavily by officers of one race or another. Fortuitously, though, officers in this city do not strongly tend to work same-race neighborhoods to begin with.
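The within-beat-and-shift logic can be sketched on invented call records. The real study estimates this with officer and beat-by-shift fixed effects in a regression; here each cell is compared directly, so differences across areas and times cancel out:

```python
from collections import defaultdict

# Invented records: (beat, shift, officer_race, gun_discharged)
calls = [
    ("b1", "day",   "white", 0), ("b1", "day",   "black", 0),
    ("b1", "night", "white", 1), ("b1", "night", "black", 0),
    ("b2", "day",   "white", 0), ("b2", "day",   "black", 0),
    ("b2", "night", "white", 1), ("b2", "night", "black", 1),
]

# Discharge rate by officer race within each (beat, shift) cell.
cells = defaultdict(lambda: defaultdict(lambda: [0, 0]))
for beat, shift, race, fired in calls:
    cells[(beat, shift)][race][0] += fired
    cells[(beat, shift)][race][1] += 1

# Average the white-minus-black rate gap across cells containing both races,
# so the comparison never crosses beat or shift lines.
gaps = []
for cell in cells.values():
    if "white" in cell and "black" in cell:
        white_rate = cell["white"][0] / cell["white"][1]
        black_rate = cell["black"][0] / cell["black"][1]
        gaps.append(white_rate - black_rate)

print(sum(gaps) / len(gaps))
```

The quasi-random dispatch protocol is what justifies treating officer race as effectively random within a cell; without it, the same arithmetic would be contaminated by officers sorting into beats.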
The key finding of the study is that while white and black officers discharge their guns at similar rates when sent to white and mixed-race neighborhoods, white officers are five times as likely to discharge their guns in neighborhoods that are at least 80% black. Given the strengths of the study’s design, this is easily among the strongest bits of evidence that, at least in some places, race makes a big difference in the use of lethal force.
There are numerous limitations to the analysis, starting with the fact that it is confined to an unidentified single city. Further, the study does not include the type of detailed information about what happened that, say, Fryer’s analysis of Houston did, leaving the mechanisms by which white officers end up using more force up for debate. It’s possible that white cops are shooting people unjustifiably in black neighborhoods, but it’s also possible that black cops are better at defusing tense situations in black neighborhoods, or that civilians in these neighborhoods react differently to officers of different races.
The Hoekstra and Sloan study relies on 94 shooting incidents, and a thorough description of each would be helpful. If four-fifths of the shootings by white officers in black neighborhoods would not have happened if a black officer had been sent instead, there is much to be learned from the details of those cases. But this level of detail would cost the city its anonymity.
The results of the Hoekstra-Sloan paper are also somewhat in tension with a paper by Greg Ridgeway, based on data from New York City.  Ridgeway’s study found that black officers are about three times as likely to fire their guns as white officers at the same scene, though it did not focus on the race of suspects or neighborhoods specifically, and, of course, not every officer on the same scene necessarily has the same opportunity to shoot.
Whatever uncertainties it leaves, the Hoekstra-Sloan study points a way forward for future research.  In other cities with similar dispatch rules, researchers should try to get access to the data. And in places with less strict criteria for sending officers to calls, researchers should pursue other rigorous ways of leveraging the fact that police officers of different races often respond to similar types of calls when working the same beats and shifts.
One also hopes that the city that Hoekstra and Sloan studied, wherever it is, has taken the findings to heart and is pursuing reforms.
Geographic Analyses

Police are organized into departments that cover specific geographic jurisdictions, individual neighborhoods differ in their demographics and crime rates, and American regions differ radically in their politics and attitudes, including their attitudes toward the police force. Also, geography and race can interact. One study, by David Hemenway and four coauthors, found that whites were more likely to be shot and killed by police if they lived in rural areas, whereas blacks had higher rates of police-shooting deaths in urban areas. 
Therefore, it can be informative to examine shooting rates at the geographic level rather than the individual one, and some studies have taken this approach in studying racial disparities. This can tell us, for example, whether heavily black areas have more police shootings even after controlling for crime rates, or whether police shootings of blacks correlate with the racial attitudes of whites in the same places. However, aggregating data to the neighborhood, city, county, or state level has its own pitfalls, including that the nuances of each individual shooting are lost and that it is difficult to separate racial bias from other factors that differ across areas.
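Aggregation of this kind reduces incident-level records to area-level, race-specific rates. A toy sketch, with all numbers invented, shows the basic computation that such studies start from:

```python
from collections import Counter

# Invented incident records (county, decedent_race) and county populations.
incidents = [("A", "black"), ("A", "white"), ("B", "white")]
population = {
    ("A", "black"): 50_000,  ("A", "white"): 150_000,
    ("B", "black"): 20_000,  ("B", "white"): 80_000,
}

counts = Counter(incidents)
# Race-specific police-shooting rate per 100,000 residents in each county.
rates = {key: counts.get(key, 0) / pop * 100_000 for key, pop in population.items()}
for key in sorted(rates):
    print(key, round(rates[key], 2))
```

The choice of denominator here (residential population, rather than, say, race-specific arrests) is exactly the benchmarking dispute discussed earlier, now transposed to the area level; the studies below differ largely in which covariates they then regress these rates on.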
Some of the geographic studies suggest racial bias. Cody T. Ross, in a much-cited 2015 study that relied on county-level data, found that blacks are more likely than whites to be killed while unarmed and that this disparity is not related to rates of arrest for assault and weapons violations (including race-specific rates).  Jeffrey A. Fagan and Alexis D. Campbell also analyzed shootings at the county level, controlling for factors such as officer killings and violent-crime rates (though not race-specific rates), and found that blacks have higher shooting rates than whites—not just overall but also in cases where the suspects were neither armed nor suffering from mental-health issues.  Justin Feldman found that the white/black gap in police killings (including non-shooting deaths) declines only slightly when accounting for the poverty rates of the census tracts where killings occur.  A study by Eric Hehman, Jessica K. Flake, and Jimmy Calanchini found that racial disproportions in police shootings are strongest in places where whites score poorly on several—controversial—measures  of “implicit bias” and stereotyping.  Another paper compiled what it calls a “racism index”—actually containing measures of segregation and racial disparities, and ranking Wisconsin, Minnesota, and New Jersey as the most racist states in the nation—and found: “After controlling for numerous state-level factors and for the underlying rate of fatal shootings of black victims in each state, the state racism index was a significant predictor of the Black–White disparity in police shooting rates of victims not known to be armed.” 
But such findings are hardly ubiquitous. A study of St. Louis neighborhoods by David Klinger and three coauthors found a “curvilinear” relationship between violent crimes and police shootings (including shootings where the target survived): police shootings increased with crime, but only up to a point, after which the relationship reversed. The demographic composition of the neighborhood was not correlated with police-shooting rates in a model that also included crime. A similar analysis of Los Angeles by Debbie Ma, Steven Graves, and Jonathan Alvarado found that neighborhoods’ age profiles predicted their police-shooting rates, while results for their racial makeup and other sociological variables were far weaker: “[W]e identified models in which race/ethnicity, measures of income, or educational attainment were statistically significant, but they were not as robust as models including mean age.” A cross-city analysis by Malcolm Holmes, Matthew Painter, and Brad Smith found that cities’ black population shares negatively correlated with police-shooting rates (with numerous factors such as crime rates controlled), though black–white segregation within cities increased them—and when the segregation variable was removed from the model, the effect of black share became statistically insignificant. Another study by David Hemenway and three coauthors focused on the role of gun ownership and used state-level data. It found that the nonwhite share of the population did not significantly predict police shootings in models that also included poverty and violent-crime rates, urbanization, and the percentage of suicides committed with a gun (a proxy for gun ownership).
In general, geographic analyses are of limited usefulness for identifying bias;  in any event, they have not produced consistent results. But on a simpler level, it is certainly noteworthy, and deserving of further study, that places differ immensely in their rate of fatal police killings—even after accounting for crime rates—and in the racial disparities therein.
Figure 2 is a simple state-level scatter plot of annual per-capita rates of homicide and fatal police shootings, aggregating the years 2015–20 to boost the sample size (especially for the relatively rare shootings). There is a noticeable connection between the two variables; but clearly, states can have radically different police-killing rates while sharing very similar homicide rates. The West, the Southwest, and Appalachia stand out. As for racial disparities in fatal police shootings, one study by Michael Siegel and three coauthors found the ratio of black to white police-killing rates to be about 1.5 in Atlanta, about 10 in New York and Houston, and nearly 40 in Chicago.
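The pooling behind a chart like Figure 2 can be sketched in a few lines of Python. This is only an illustration of the arithmetic, not the report’s actual data pipeline; the state counts and population below are invented.

```python
# Sketch of the aggregation behind a state-level scatter plot like Figure 2:
# pool six years of counts (2015-20) per state, then convert the pooled total
# into an annual rate per 100,000 residents. All numbers here are made up.

def annual_rate_per_100k(counts_by_year, population, years=6):
    """Pooled annual rate per 100k: (total count / years) / population * 100,000."""
    total = sum(counts_by_year)
    return total / years / population * 100_000

# Invented inputs for one hypothetical state: yearly homicide and
# fatal-police-shooting counts for 2015-2020, plus population.
homicides = [310, 295, 320, 305, 330, 360]
shootings = [14, 11, 16, 12, 13, 15]
population = 4_500_000

hom_rate = annual_rate_per_100k(homicides, population)    # one scatter-plot axis
shoot_rate = annual_rate_per_100k(shootings, population)  # the other axis
```

Pooling several years is what makes the shooting rates usable: fatal police shootings are rare enough that a single year of state-level counts would be dominated by noise.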
In addition to studies using real-world data on police violence, there is a line of research involving psychology experiments, in which police officers or other subjects are presented with shoot/don’t-shoot decisions, and the researchers analyze whether the race of the suspects affects the decision to “fire” (in terms of, for instance, reaction time and mistakes). This research has produced highly mixed results and is of debatable use.
Early research in this vein relied on still photos and sometimes was not conducted on real police officers. Increasingly, however, researchers have turned to realistic training simulators.  Some have also added contextual details that are available to real-world cops, such as information about the situation provided in advance by dispatchers.  These studies often suggest that race can affect decision-making; but directionally, they have found everything from antiblack bias  to a pro-black “counter bias,”  possibly because police are sensitive to the protests and scrutiny that killings of blacks are more likely to spur.
To put it bluntly, we can never be confident that psych-lab exercises, no matter how realistic, tell us much about the real-world scenarios that they simulate. The situations differ, especially in the level of fear and the consequences of mistakes, and the subjects of this kind of research are known to take context into account: they might try to give researchers the results they want, or behave in ways that make themselves look better. I would argue that these studies should not do much to inform our views of racial bias in police shootings; I discuss them here only because this has been an active area of research.
Where to Next?
Given the foregoing, several suggestions come to mind for future research and data collection:
1. No more simplistic benchmarking studies. Bias is not the only reason that the racial breakdown of those shot by police might differ from the racial breakdown of the general population, and crime and arrest rates can help account for that difference. However, the simple act of comparing police-shooting odds or proportions with benchmarks such as crime rates has reached the end of its usefulness. There is not much left to be learned from such basic analyses; questions about racial disparity need the more nuanced designs that many of the studies discussed above already employ.
2. Use data from individual departments to flesh out variation in shootings and fatalities. The U.S. is home to about 18,000 police agencies. Some of these are highly professional and well run; others are not, and areas of the country differ as to their level of racial tension and animosity as well. Police departments almost certainly differ as to the existence and extent of racial bias in the use of lethal force. (Again, they certainly vary in terms of how much they use lethal force in general—even after accounting for crime rates—and in terms of the overall black/white disparity in lethal force.) Since any reforms will need to be implemented at the level of individual departments, too, there is more to be gained in studying where the problems lie specifically, rather than hunting for traces of bias in nationwide numbers.
3. Fill in the many remaining data gaps. Thanks to the Washington Post and others, basic questions about how many fatal police shootings occur each year, and where they happened, can be answered. However, these data sets lack many key details—including the race of the officers involved—and even less is known about situations where an officer uses lethal force but the suspect does not die. Governments should collect this information in a systematic fashion and make it available to the public.
When it comes to tallying fatal uses of force, some of the existing projects are on the right track, even if they are moving slowly. The National Violent Death Reporting System combines numerous sources, from medical examiner reports to law-enforcement documents, to ensure that violent deaths are classified correctly. Improvements to the Arrest-Related Deaths program bring media reports into the mix so that shootings are counted even if departments do not report them to the federal government.
When it comes to nonfatal incidents, good reporting by police departments specifically becomes more important. If a shot misses, or if a wounded suspect does not go to a hospital, uses of lethal force might not be documented by public-health officials or discovered by the news media. These sources may also not be in a position to know details such as the race of the officer involved. Given the level of interest in racial bias, departments should record this information and make it available to the public in a timely fashion. States can require it, and the federal government should consider funding the creation of a national lethal-force database that is far more comprehensive than the federal projects currently nearing completion. These data are easily worth the extra paperwork required.
Many researchers would also like better data on non-interactions between police and civilians.  Think, for example, of data from surveillance or traffic cameras that could reveal the demographics of civilians who were near police without being stopped. Discretionary interactions can escalate into lethal force (which, at that later stage, may be legally justified or not), so bias at these earlier stages of the process could help drive disparities in lethal force, though the costs and benefits of collecting this information will vary from situation to situation.
4. Expand the use of body-worn cameras. Cameras provide key information about what happened in any given shooting incident. The footage can be used to prosecute bad cops and exonerate good ones. In a 2018 report, the Police Executive Research Forum noted that, per the organization’s survey work, “more than one-third of American law enforcement agencies have already deployed BWCs [body-worn cameras] to some or all of their officers, and another 50% currently have plans to do so”; it further noted that more than 40% of agencies with cameras deployed them to all officers rather than just some. A BJS report drawing on data collected in 2016 found that about 50% of law-enforcement agencies had body cameras, though many had not fully deployed them.
The need for more body cameras is obvious, and there should be policies to ensure that they are worn and recording as incidents unfold. Technological improvements can make the videos smoother and the cameras harder to dislodge. In addition to the practical effects of body cameras, which appear to be modest but positive, the footage is crucial to our understanding of how and why police killings occur. As it becomes more available, researchers should seek to include this footage in their studies of bias.
5. Study cases where lethal force was not intended. As noted, many of the most controversial and unrest-provoking policing incidents have involved situations where officers did not discharge their guns, but instead, suspects died following other uses of force that often interacted with preexisting health issues. These cases are difficult to include in a database of police killings because it can be hard to sort out the precise role of officers’ actions in the death. They deserve separate attention of their own.
We know far more about the use of lethal force by police than we did half a decade ago. Those expecting these data to prove the existence of extreme, flagrant racial bias are bound to be disappointed because that is not what the numbers show. The popular narrative of homicidally racist cops everywhere is false.
At the same time, much is left to learn about fatal police shootings, including whether there is a more subtle role for racial bias nationally and why racial disproportions in lethal force vary so much from place to place. More than likely, at least some places in the U.S. have very real problems in this regard, and we should want to know where they are.
Perhaps we will, once another half-decade or so has passed.
About the Author
Robert VerBruggen is a fellow at the Manhattan Institute, where he provides policy research, writes for City Journal, and contributes to special projects and initiatives in the President’s office. Having held roles as Deputy Managing Editor of National Review, Managing Editor of The American Conservative, Editor at RealClearPolicy, and Assistant Book Editor at the Washington Times, VerBruggen writes on a wide array of issues, including economic policy, public finance, health care, education, family policy, cancel culture, and public safety. VerBruggen was a Phillips Foundation Journalism Fellow in 2009 and a 2005 winner of the Chicago Headline Club Peter Lisagor Award. He holds a B.A. in journalism and political science from Northwestern University.