Artificial Intelligence Is Now Used to Predict Crime. But Is It Biased?

The software is supposed to make policing more fair and accountable. But critics say it still has a way to go.

Predictive policing is built around algorithms that identify potential crime hotspots. (Credit: PredPol)

What is fair?

It seems a simple question, but it’s one without simple answers. That’s particularly true in the arcane world of artificial intelligence (AI), where the notion of smart, emotionless machines making decisions wonderfully free of bias is fading fast.

Perhaps the most public blow to that perception came with a 2016 ProPublica investigation, which concluded that the data driving an AI system used by judges to predict whether a convicted criminal is likely to reoffend appeared to be biased against minorities. Northpointe, the company that created the algorithm, known as COMPAS, disputed ProPublica’s interpretation of the results, but the clash has sparked both debate and analysis about how much even the smartest machines should be trusted.

“It’s a really hot topic—how can you make algorithms fair and trustworthy,” says Daniel Neill. “It’s an important issue.”

Neill now finds himself in the middle of that discussion. A computer scientist at Carnegie Mellon University, he developed a crime-predicting software tool called CrimeScan several years ago with another researcher, Will Gorr. Their original concept was that violent crime is in some ways like a communicable disease: it tends to break out in geographic clusters. They also came to believe that lesser crimes can be a harbinger of more violent ones, so they built an algorithm using a wide range of “leading indicator” data, including reports of crimes such as simple assaults, vandalism and disorderly conduct, and 911 calls about such things as shots fired or a person seen with a weapon. The program also incorporates seasonal and day-of-week trends, plus short-term and long-term rates of serious violent crimes.

The idea is to track sparks before a fire breaks out. “We look at more minor crimes,” Neill says. “Simple assaults could harden to aggravated assaults. Or you might have an escalating pattern of violence between two gangs.”
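Neither researcher has published CrimeScan’s code, but the general shape of the approach Neill describes, scoring small geographic areas on recent leading-indicator counts plus seasonal and longer-term violence trends, can be sketched roughly in a few lines of Python. The field names, weights and simple scoring rule below are illustrative assumptions, not the tool’s actual model.

    # Illustrative sketch only: scores map cells on "leading indicator" counts,
    # loosely following the kinds of inputs Neill describes. The features,
    # weights and linear scoring rule are assumptions for illustration.
    from dataclasses import dataclass

    @dataclass
    class CellWeek:
        cell_id: str
        minor_crime_reports: int       # e.g., simple assault, vandalism, disorderly conduct
        weapon_related_911_calls: int  # e.g., shots fired, person seen with a weapon
        violent_crimes_recent: int     # serious violent crimes in recent weeks
        violent_crime_rate_longterm: float  # longer-term rate for the same cell
        seasonal_factor: float         # day-of-week / time-of-year adjustment

    def hotspot_score(c: CellWeek) -> float:
        """Higher score = cell flagged as a potential near-term violence hotspot."""
        base = (
            1.0 * c.minor_crime_reports
            + 2.0 * c.weapon_related_911_calls
            + 3.0 * c.violent_crimes_recent
            + 1.5 * c.violent_crime_rate_longterm
        )
        return base * c.seasonal_factor

    cells = [
        CellWeek("A1", 12, 3, 1, 0.8, 1.2),
        CellWeek("B4", 2, 0, 0, 0.1, 1.0),
    ]
    # Rank cells and surface the highest-scoring ones for extra attention.
    for c in sorted(cells, key=hotspot_score, reverse=True):
        print(c.cell_id, round(hotspot_score(c), 1))

The point of the sketch is the choice of inputs: reported incidents and 911 calls rather than arrests, which matters for the bias questions discussed below.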

Predicting when and where

CrimeScan is not the first software designed for what’s known as predictive policing. A program called PredPol was created eight years ago by UCLA scientists working with the Los Angeles Police Department, with the goal of seeing how scientific analysis of crime data could help spot patterns of criminal behavior. Now used by more than 60 police departments around the country, PredPol identifies areas in a neighborhood where serious crimes are more likely to occur during a particular period.  

The company claims its research has found the software to be twice as accurate as human analysts when it comes to predicting where crimes will happen. No independent study, however, has confirmed those results.  

Both PredPol and CrimeScan limit their projections to where crimes could occur, and avoid taking the next step of predicting who might commit them—a controversial approach that the city of Chicago has built around a “Strategic Subject List” of people most likely to be involved in future shootings, either as a shooter or victim.

The American Civil Liberties Union (ACLU), the Brennan Center for Justice and various civil rights organizations have all raised questions about the risk of bias being baked into the software. Historical data from police practices, critics contend, can create a feedback loop through which algorithms make decisions that both reflect and reinforce attitudes about which neighborhoods are “bad” and which are “good.” That’s why AI based primarily on arrest data carries a higher risk of bias: it reflects police decisions rather than actual reported crimes. CrimeScan, for instance, stays away from trying to forecast crimes that, as Neill puts it, “you’re only going to find if you look for them.”

“I can’t say we’re free of bias,” says Neill, “but it’s certainly more reduced than if we were trying to predict drug possession.”

Then there’s the other side of the feedback loop. If a predictive tool raises expectations of crimes in a certain neighborhood, will police who patrol there be more aggressive in making arrests?
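That concern is easier to see with a toy simulation. The sketch below is not based on PredPol or CrimeScan; it assumes a hypothetical tool trained on arrest counts that steers extra patrols toward whichever neighborhood scores highest, and it shows how a small historical imbalance can grow even when underlying offending is identical in both places.

    # Toy simulation of the feedback-loop concern (illustrative assumptions only):
    # two neighborhoods with identical underlying offending, but arrests, not
    # offenses, feed the prediction, and patrol intensity follows the prediction.
    import random

    random.seed(0)
    TRUE_OFFENSE_RATE = 10          # same underlying offending in both neighborhoods
    arrest_history = {"north": 12, "south": 8}  # a small historical imbalance

    for week in range(20):
        # "Predict" by past arrests; patrol the higher-scoring area more heavily.
        predicted_hot = max(arrest_history, key=arrest_history.get)
        patrol = {"north": 0.3, "south": 0.3}
        patrol[predicted_hot] = 0.6  # extra patrol raises the chance an offense becomes an arrest

        for area in arrest_history:
            offenses = TRUE_OFFENSE_RATE
            arrests = sum(random.random() < patrol[area] for _ in range(offenses))
            arrest_history[area] += arrests

    print(arrest_history)  # the initially "hotter" area pulls further ahead

In this toy version, the data never record that both neighborhoods offend at the same rate; they only record where officers were sent and what they found there.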

“There’s a real danger, with any kind of data-driven policing, to forget that there are human beings on both sides of the equation,” notes Andrew Ferguson, a professor of law at the University of the District of Columbia and author of the book, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. “Officers need to be able to translate these ideas that suggest different neighborhoods have different threat scores. And, focusing on the numbers instead of the human being in front of you changes your relationship to them.”

Inside the black box

The reality is that artificial intelligence now plays a role, albeit often in the background, in many decisions affecting daily life, from helping companies decide whom to hire to setting credit scores to evaluating teachers. Not surprisingly, that has intensified public scrutiny of how machine learning algorithms are created, what unintended consequences they cause, and why they generally aren’t subjected to much review.

For starters, much of the software is proprietary, so there’s little transparency into how the algorithms function. And, as machine learning becomes more sophisticated, it will become increasingly difficult for even the engineers who created an AI system to explain the choices it made. That opaque decision-making, with little accountability, is why such systems have become known as “black box” algorithms.

“The public never gets a chance to audit or debate the use of such systems,” says Meredith Whittaker, a co-founder of the AI Now Institute, a research organization at New York University that focuses on AI’s impact in society. “And, the data and logics that govern the predictions made are often unknown even to those who use them, let alone to the people whose lives are impacted.”

In a report issued last fall, AI Now went so far as to recommend that no public agencies responsible for such matters as criminal justice, health care, welfare and education should use black box AI systems. According to AI Now, legal and ethical issues are seldom given much consideration when the software is created.

“Just as you wouldn’t trust a judge to build a deep neural network, we should stop assuming that an engineering degree is sufficient to make complex decisions in domains like criminal justice,” says Whittaker.

Another organization, the Center for Democracy & Technology, has created a “digital decisions” tool to help engineers and computer scientists build algorithms that produce fair and unbiased results. The tool asks a series of questions meant to get them to weigh their assumptions and identify unforeseen ripple effects.

“We wanted to give people a concrete starting point for thinking through issues like how representative their data is, which groups of people might be left out, and whether their model’s outputs are going to have unintended negative consequences,” says Natasha Duarte, who oversees the project.

Who’s accountable?

While there has been a push to make developers more cognizant of the possible repercussions of their algorithms, others point out that public agencies and companies reliant on AI also need to be accountable.

“There is this emphasis on designers understanding a system. But it’s also about the people administering and implementing the system,” says Jason Schultz, a professor of law at New York University who works with the AI Now Institute on legal and policy issues. “That’s where the rubber meets the road in accountability. A government agency using AI has the most responsibility and they need to understand it, too. If you can’t understand the technology, you shouldn’t be able to use it.”

To that end, AI Now is promoting the use of “algorithmic impact assessments,” which would require public agencies to disclose the systems they’re using, and allow outside researchers to analyze them for potential problems. When it comes to police departments, some legal experts think it’s also important for them to clearly spell out how they’re using technology and be willing to share that with the local community.

“If these systems are designed from the standpoint of accountability, fairness and due process, the person implementing the system has to understand they have a responsibility,” Schultz says. “And when we design how we’re going to implement these, one of the first questions is ‘Where does this go in the police manual?’ If you’re not going to have this somewhere in the police manual, let’s take a step back, people.”

Andrew Ferguson sees a need for what he refers to as a “surveillance summit.”

“At least once a year, there should be an accountability moment for police technology in every local jurisdiction,” he says. “The police chief, the mayor or maybe the head of the city council would have to explain to the community what they’re using taxpayer dollars for in terms of surveillance and technology, why they think it’s a good use of the money, what they’re doing to audit it and protect the data, what are the privacy implications. And the community would be there to ask questions.”

Daniel Neill, the CrimeScan creator, says he wouldn’t object to the idea of regular audits of AI results, although he has reservations about that being done before an algorithm is adequately field-tested. He is currently working with the Pittsburgh Bureau of Police on a CrimeScan trial, and at least initially there was a challenge with “getting the right patrol intensity for the predicted hot spots.”

It’s been a learning process, he says, to adapt CrimeScan so that police officers at the street level believe it’s helpful. “We need to show that not only can we predict crime, but also that we can actually prevent it,” Neill notes. “If you just throw the tool over the wall and hope for the best, it never works that well.”

He also acknowledges the risk of deferring too much to an algorithm.   

“A tool can help police officers make good decisions,” he says. “I don’t believe machines should be making decisions. They should be used for decision support.”

Neill adds, “I do understand that, in practice, that’s not something that happens all the time.”
