From the President: Predictive Policing: The Modernization of Historical Human Injustice

Predictive policing refers to the police tactic of using computer data to anticipate where and when crime will occur. Though ostensibly intended to help departments deploy officers more effectively, predictive policing magnifies pre-existing bias in policies and tactics that are unnecessarily harming communities of color.


A cell-site simulator — or “stingray,” as it is commonly called — allows its operator to electronically mimic a cellphone tower. By broadcasting a stronger signal than an actual tower, a stingray effectively forces nearby cellphones to connect to the device rather than to a tower. Developed for clients like the NSA, Special Forces, and the CIA, a stingray allows the operator to pinpoint a citizen’s location, the numbers he or she is calling and texting, and data for incoming calls and messages. Some devices can also “jam” a signal and shut down cellular communication in a geographic area, so the potential applications of a stingray on the battlefield or in an anti-terrorism capacity are profound.
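To see why a stronger signal is all it takes, consider the following sketch. It is a deliberately simplified model — a handset’s actual cell-selection logic is far more complicated, and every name and number below is invented for illustration — but it captures the basic mechanism: the phone attaches to whichever visible site advertises the strongest signal, and a simulator simply makes sure that site is the stingray.

# Illustrative sketch only: a toy model of how a handset camps on whichever
# "tower" advertises the strongest signal. Not a model of any real baseband;
# all names and signal values are invented.
from dataclasses import dataclass

@dataclass
class CellSite:
    name: str
    signal_dbm: float   # received signal strength; higher (less negative) is stronger
    legitimate: bool    # True for a carrier tower, False for a cell-site simulator

def pick_site(sites: list[CellSite]) -> CellSite:
    """A handset generally attaches to the strongest visible site."""
    return max(sites, key=lambda s: s.signal_dbm)

nearby = [
    CellSite("carrier_tower_a", signal_dbm=-95.0, legitimate=True),
    CellSite("carrier_tower_b", signal_dbm=-88.0, legitimate=True),
    CellSite("stingray", signal_dbm=-60.0, legitimate=False),  # broadcasts a stronger signal
]

chosen = pick_site(nearby)
print(f"Phone attaches to: {chosen.name} (legitimate={chosen.legitimate})")
# Output: Phone attaches to: stingray (legitimate=False)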

It is of concern, therefore, to learn that this military-grade technology is being deployed on American streets. The extent of its use is unclear thanks to a lack of both transparency from law enforcement and guidance from lawmakers, but some reports suggest that local law enforcement has been using iterations of this technology for decades.1 And the use hasn’t been sparing, either. Between 2008 and 2015, for example, the New York City Police Department used stingray devices over 1,000 times.2 Between January 2014 and May 2015, the San Bernardino Sheriff’s Department used its stingray more than 300 times.3 Perhaps most alarming, however, is that rather than having each use of a stingray vetted by a judge who issues a warrant, most departments rely on the less stringent pen register order, if they rely on any safeguarding policy at all.4

Stingrays are just one piece in the rapidly changing landscape of policing in America. While stingrays make it possible for law enforcement to pinpoint communication from a suspect, civil liberties groups have highlighted how the communications of countless bystanders can be caught up in an electronic dragnet. Coupled with new developments in algorithms and advanced search capabilities, data can be turned into a powerful tool for police departments to supposedly predict criminal behavior and tailor their response. But the implications for citizens — particularly historically overpoliced groups like communities of color — are far-reaching and severe.

Predictive policing refers to the police tactic that uses computer data to anticipate where and when future crime will occur, and who will commit it.5 Though ostensibly intended to help departments deploy officers more effectively and prevent incidents before they happen, predictive policing magnifies pre-existing bias in policies and tactics that are unnecessarily harming communities of color.

One of the most contested technologies, for instance, is the Beware software. Functioning as a real-time form of predictive policing, Beware is a threat-scoring technology used in response to 911 calls. Beware uses the address given during a 911 call to find the names associated with the residence and compiles publicly available information, ranging from criminal convictions to mental disorders to even social media posts. From this information, a color-coded threat level is generated: green (low risk), yellow (medium risk), and red (high risk).6 While police officers have lauded the technology for providing useful intelligence before entering a situation, community leaders have labeled Beware inaccurate and biased. During a hearing in Fresno, California, Councilman Clinton J. Oliver asked that his own name be run through the software. Oliver himself came back green, but his home came back yellow. An officer explained that a person who previously lived in the house must have caused the discrepancy, but Oliver pinpointed the inherent danger in mistakes like this: “Even though it’s not me that’s the yellow guy, your officers are going to treat whoever comes out of that house in his boxer shorts as the yellow guy. … [Beware] has failed right here with a council member as the example.”7
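The Fresno example illustrates a structural problem with address-keyed scoring that is worth making concrete. Intrado does not disclose how Beware actually works, so the sketch below is purely hypothetical — the records, flag names, and thresholds are all invented — but it shows how any system that pools records by address will attach a former resident’s history to whoever answers the door today.

# Purely hypothetical sketch; nothing here reflects the real (secret) Beware
# scoring. The point is only that a score keyed to an *address* inherits every
# record ever linked to that address, not just the current occupant's.
RECORDS_BY_ADDRESS = {
    "123 Elm St": [
        {"name": "former tenant", "flags": ["old drug conviction"]},
        {"name": "current occupant", "flags": []},
    ],
}

def threat_color(flag_count: int) -> str:
    if flag_count == 0:
        return "green"
    return "yellow" if flag_count <= 2 else "red"

def score_call(address: str) -> str:
    # Everyone ever linked to the address contributes to one pooled score.
    flags = [f for person in RECORDS_BY_ADDRESS.get(address, []) for f in person["flags"]]
    return threat_color(len(flags))

print(score_call("123 Elm St"))  # "yellow" -- driven entirely by the former tenant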

Inaccuracies like these leave the public’s right to fair treatment entirely to chance — the chance that the person who lived in your house before you did not have a drug conviction, the chance that not many people share your name, or the chance that you did not write a slightly aggressive Facebook post eight years ago or use the hashtag “Black Lives Matter.” These inaccuracies become even more dangerous given that Intrado, the private company that built Beware, refuses to disclose how the software works, characterizing it as a “trade secret.”8 Consequently, individuals cannot even correct their information if it is inaccurate because there is no way to know where the data is pulled from. By way of reference, the FBI database is notoriously inaccurate, with half the states failing to supply complete dispositions.9 As the National Employment Law Project (NELP) noted in 2015, “People of color are especially disadvantaged by faulty FBI records because they are consistently arrested at higher rates than whites, and large numbers of their arrests never lead to a conviction.” NELP went on to note that a study of background checks on port workers after the September 11 attacks found that African Americans were nearly three times as likely to appeal an inaccurate FBI record as non-blacks.10

Still, the software is problematic on a more fundamental level. Like all algorithms, Beware relies on human input, including human bias. When a tool like Beware is used specifically to anticipate someone’s likelihood of committing a crime, it wraps human bias regarding race, class, and gender in the outwardly unbiased cloak of technology. In this way, the algorithm is simply a digital replication of the threat weightings its implicitly biased creators deem most appropriate. The ultimate result of criminalizing certain groups of people in this way is that “existing societal prejudices and biases will be institutionalized within algorithmic processes, which just hide, harden, and amplify the effects of those biases.”11

This damaging and biased software is matched by another well-known technology called PredPol. PredPol collects data on the location, time, and nature of prior crimes and runs it through an algorithm that marks areas as high risk for future crime, or “hot spots.”12 Similar to Beware, PredPol’s use of data and technology creates the appearance of an objective law enforcement tool. PredPol is susceptible to the same biases, however. It uses data that is fundamentally distorted: data corrupted by decades of overpolicing, arresting, and incarcerating people in African American and Hispanic communities. This skewed intelligence creates a self-fulfilling prophecy in which police are disproportionately deployed to “hot spots” in black and Hispanic neighborhoods, resulting in disproportionately high arrest rates that then reinforce the need for police presence in the area. This feedback loop enables police to advance discriminatory practices behind the presumed objectivity of technology. With the capacity to justify targeting African American and Hispanic neighborhoods, PredPol easily lends itself to the incarceration of even more people of color and the further destruction of overpoliced communities.13
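The feedback loop is easy to demonstrate with a toy simulation. The sketch below is not PredPol’s actual model — the area names, arrest counts, and patrol rules are all invented for illustration — but it shows how a purely data-driven “hot spot” rule can widen the recorded-crime gap between two areas with identical underlying behavior, simply because one area starts with more arrests on the books.

# A toy simulation, not PredPol's actual algorithm: every number and name is
# invented. Both areas log the same baseline arrests each round (identical
# underlying behavior), but "area_a" starts with more recorded arrests because
# it was historically overpoliced. Patrols go to whichever area the data calls
# the "hot spot," and the extra patrols surface extra arrests there.
def simulate(records: dict[str, float], rounds: int = 5,
             baseline_arrests: float = 5.0, extra_patrol_arrests: float = 10.0) -> dict[str, float]:
    records = dict(records)
    for _ in range(rounds):
        hot_spot = max(records, key=records.get)       # flag the area with the most recorded incidents
        for area in records:
            records[area] += baseline_arrests          # identical underlying behavior everywhere
        records[hot_spot] += extra_patrol_arrests      # extra patrols produce extra recorded arrests
    return records

print(simulate({"area_a": 60.0, "area_b": 40.0}))
# After five rounds the initial 20-arrest gap has grown to 70, and area_a keeps
# getting flagged: the data the deployments produced now justifies the deployments.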

Although PredPol and the Beware software are rapidly gaining popularity, one data collection tool has taken the spotlight in police departments across the country: social media.14 Social media is often used in investigations because sites like Facebook, Twitter, and Instagram encourage users to create profiles featuring their names, pictures, friends, and tagged locations, and to frequently update personal newsfeeds that reveal their comments and “likes” on other people’s pages. All of this information purportedly helps police officers and investigators better understand the people involved in their cases.


But using social media to solve cases is only half the story. Law enforcement is employing services that monitor and track citizens and social movements, irrespective of any criminal acts. Through public records requests last year, the ACLU revealed that several social media platforms — including Twitter, Instagram, and Facebook — were providing user data to Geofeedia, a technology company that tracks certain groups, including Black Lives Matter, for law enforcement.15 When the story broke, Twitter immediately ended its relationship with Geofeedia.16 The ACLU noted, however, that at least 13 police agencies in California alone had partnered with Geofeedia.17 Studies have long linked government surveillance with a chilling effect on behavior.18 The fact that such pervasive technology is readily adopted by law enforcement and used to monitor citizens holding particular social and political beliefs has far-reaching implications.

Recently, the Committee on Public Safety of the New York City Council held a hearing on proposed legislation to create comprehensive reporting and oversight of surveillance technologies used by the NYPD. Representatives of the police department cited terrorist threats in and around New York City as justification for avoiding greater transparency with respect to the devices the department is using. As members of the defense bar, we must be the voice of reason that halts the lurch toward a complete surveillance state. Predictive policing will further weaken the public’s trust in the police and will result in more aggressive policing in neighborhoods where people of color live. I created the NACDL Task Force on Predictive Policing to study this phenomenon and the potential for the software to perpetuate and enhance race, gender, and income bias.19 We must recognize that seemingly innocuous or objective technologies are neither innocuous nor objective; they are subject to the same biases and disparities that exist throughout the rest of our justice system. As a result, we must be well versed in these systems and prepared to contradict and counteract the air of impartiality that an algorithm or a device lends to policy. Predictive policing may very well be the future, but for vulnerable communities, we can’t let it repeat the past.

Notes 

  1. Ryan Gallagher, FBI Files Unlock History Behind Clandestine Cellphone Tracking Tool, Slate (Feb. 15, 2013), http://www.slate.com/blogs/future_tense/2013/02/15/stingray_imsi_catcher_fbi_files_unlock_history_behind_cellphone_tracking.html.
  2. New York Civil Liberties Union, NYPD Has Used Stingrays More Than 1,000 Times Since 2008, ACLU (Feb. 11, 2016), https://www.nyclu.org/en/press-releases/nypd-has-used-stingrays-more-1000-times-2008.
  3. Cyrus Farivar, County Sheriff Has Used Stingray Over 300 Times With No Warrant, Ars Technica (May 24, 2015), https://arstechnica.com/tech-policy/2015/05/county-sheriff-has-used-stingray-over-300-times-with-no-warrant.
  4. NYCLU, supra note 2.
  5. David Robinson & Logan Koepke, Stuck in a Pattern: Early Evidence on ‘Predictive Policing’ and Civil Rights, Upturn (August 2016), https://www.teamupturn.com/reports/2016/stuck-in-a-pattern.
  6. Privacy SOS, Pre-crime Policing Software Determines Your Threat Level Based Off of Your Social Media Use, Criminal Record, Privacy SOS (Jan. 11, 2016), https://privacysos.org/blog/pre-crimefresno.
  7. Justin Jouvenal, The New Way Police Are Surveilling You: Calculating Your Threat ‘Score,’ Wash. Post (Jan. 10, 2016), https://www.washingtonpost.com/local/public-safety/the-new-way-police-are-surveilling-you-calculating-your-threat-score/2016/01/10/e42bccac-8e15-11e5-baf4-bdf37355da0c_story.html?utm_term=.1d82028fae52.
  8. Privacy SOS, supra note 6.
  9. National Employment Law Project, Faulty FBI Background Checks for Employment: Correcting FBI Records Is Key to Criminal Justice Reform (Dec. 8, 2015), http://www.nelp.org/publication/faulty-fbi-background-checks-for-employment/#_edn5.
  10. Id.
  11. Jay Stanley, Eight Problems With Police ‘Threat Scores,’ ACLU (Jan. 13, 2016), https://www.aclu.org/blog/free-future/eight-problems-police-threat-scores.
  12. PredPol, How Predictive Policing Works, Predpol.com (2015), http://www.predpol.com/how-predictive-policing-works.
  13. David Robinson & Logan Koepke, supra note 5.
  14. Alexandra Mateescu et al., Social Media Surveillance and Law Enforcement, Data & Civil Rights (Oct. 27, 2015), http://www.datacivilrights.org/pubs/2015-1027/Social_Media_Surveillance_and_Law_Enforcement.pdf.
  15. Sam Levin, ACLU Finds Social Media Sites Gave Data to Company Tracking Black Protesters, The Guardian (Oct. 11, 2016), https://www.theguardian.com/technology/2016/oct/11/aclu-geofeedia-facebook-twitter-instagram-black-lives-matter.
  16. Id.
  17. Id.
  18. See Jon Penney, Chilling Effects: Online Surveillance and Wikipedia Use, 31(1) Berkeley Technology L.J. (Apr. 27, 2016), and Elizabeth Stoycheff, Under Surveillance: Examining Facebook’s Spiral of Silence Effects in the Wake of NSA Internet Monitoring, 3(2) Journalism & Mass Communication Quarterly (March 8, 2016).
  19. The following individuals serve on the task force: Cynthia Roseberry and Melinda Sarafa (co-chairs), Juval Scott, Robert Toale, Hanni Fakhoury, and William Wolf. Michael Pinard, a professor of law at the University of Maryland, is the task force reporter. Jumana Musa, NACDL senior privacy and national security counsel, is the staff person assigned to work with the group.

Rick Jones
Neighborhood Defender Service of Harlem
New York, NY
212-876-5500
www.ndsny.org
rjones@ndsny.org 
