Monday, July 18, 2016

Big Data and Predictive Policing: Can algorithms become racist?

Interesting article on Forbes by Thomas Davenport on big data. The article discusses how various government agencies, including the Canadian Public Safety Operations Organization (CanOps), have used big data tools for "situational awareness". These systems draw on myriad sources of data to give users (e.g. law enforcement) the information they need to deal with a particular situation.

Here are a few points that I thought were worth noting:

Government is making strides in big data: We often think of Amazon, Google and other tech giants as the key users of this data. However, as Davenport points out, governments are also using this technology to assist with decision making. Whether that is something to be celebrated remains to be seen (see predictive policing below).

Privacy versus Value trade-off: He talks about how CanOps' use of MASAS, the Multi-Agency Situational Awareness System, is limited by the filtering of sensitive information: "breadth of MASAS is noble, but it seems to limit its value. For example, as the CanOps website notes, because agencies are reticent to share sensitive information with other agencies, all the information shared was non-sensitive (i.e. not terribly useful)." This continues a theme we noted a couple of years back when discussing a similar trade-off that companies face when dealing with big data. As I noted in this post:

"privacy policies require the user to consent to a specific uses of data at the time they sign up for the service. This means future big data analytics are essentially limited by what uses the user agreed upon sign-up. However, corporations in their drive to maximize profits will ultimately make privacy policies so loose (i.e. to cover secondary uses) that the user essentially has to give up all their privacy in order to use the service."

Consequently, we still need a solution that allows privacy to be respected while organizations use the data they have collected to make better decisions.

Predictive Policing is an emerging reality: The sci-fi movie Minority Report paints a future in which law enforcement arrests people before they commit crimes.


That future seems to be well on its way. Davenport mentions that "predictive policing" was introduced to the NYPD in 2014. He also notes how much data the police are collecting:

"It collects and analyzes data from sensors—including 9,000 closed circuit TV cameras, 500 license plate readers with over 2 billion plate reads, 600 fixed and mobile radiation and chemical sensors, and a network of ShotSpotter audio gunshot detectors covering 24 square miles—as well as 54 million 911 calls from citizens. The system also can draw from NYPD crime records, including 100 million summonses."

The idea of predictive policing was also raised in the book Big Data: A Revolution That Will Transform How We Live, Work, and Think, which I explored in a multi-post blog series (click here for the first installment).

Andrew Guthrie Ferguson, a law professor at the UDC David A. Clarke School of Law, wrote an article arguing that the legality of predictive policing has not really been sorted out. He notes:

"The open question is whether this big-data information combined with predictive technologies will create “predictive reasonable suspicion“ undermining Fourth Amendment protections in ways quite similar to the stop-and-frisk practices challenged in federal court.

In two law review articles I have detailed the distorting effects of predictive policing and big data on the Fourth Amendment and have come to the conclusion that insufficient attention has been given at the front end to these constitutional questions. New York has the chance now to address these issues before the adoption of the technology and should be encouraged by the same civil libertarians and ordinary citizens who challenged the stop and frisk policies."

His commentary highlights another limitation: big data predictions are biased by how the data is collected. The stop-and-frisk policies he refers to disproportionately targeted minorities, and policing is more heavily focused on poor, Black and Hispanic neighbourhoods. Michelle Alexander documents how this happens in her book, The New Jim Crow:

"Alexander explains how the criminal justice system functions as a new system of racial control by targeting black men through the “War on Drugs.” The Anti-Drug Abuse Act of 1986, for example, included far more severe punishment for distribution of crack (associated with blacks) than powder cocaine (associated with whites). Civil penalties, such as not being able to live in public housing and not being able to get student loans, have been added to the already harsh prison sentences."

Consequently, if the data collected by law enforcement is used to predict crime, the targeting of minorities will continue, since the predictions are built on biased data.
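To make the mechanism concrete, here is a toy simulation (a sketch with invented numbers, not real crime data or any actual department's algorithm) of how allocating patrols based on historical records can entrench an initial bias, even when two neighbourhoods have identical underlying crime rates:

    # Toy illustration of a predictive-policing feedback loop.
    # All figures are invented; this is not real crime data or any
    # deployed system's actual algorithm.

    true_rate = {"A": 1.0, "B": 1.0}   # identical underlying crime rates
    records = {"A": 60.0, "B": 40.0}   # history skewed by past over-policing
    PATROL_UNITS = 10                  # units dispatched each year

    for year in range(1, 6):
        # Send every unit to whichever neighbourhood the records flag
        # as the "hotspot" -- i.e. predict future crime from past records.
        hotspot = max(records, key=records.get)
        # Crime is only recorded where officers are present, so the
        # flagged neighbourhood keeps accumulating records while the
        # other neighbourhood's crime goes largely unobserved.
        records[hotspot] += true_rate[hotspot] * PATROL_UNITS
        share = records["A"] / sum(records.values())
        print(f"year {year}: neighbourhood A's share of records = {share:.0%}")

Because the system predicts from its own skewed output, neighbourhood A's share of recorded crime climbs every year even though both neighbourhoods are equally safe; the prediction validates, rather than corrects, the original bias in collection.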

Technology is often seen as a silver bullet for problems. However, we need to keep in mind that it is vulnerable to the flaws of the humans who build it and the data that feeds it. Given Microsoft's recent faux pas, in which its Tay chatbot was manipulated into spouting Nazi rhetoric, this is something that should be actively considered in the systems we build to police and govern.

