Can Artificial Intelligence Help Stop School Shootings?

Some researchers believe it could help predict student violence. Others worry about unintended consequences

People attend a vigil for the victims of the shooting at Marjory Stoneman Douglas High School, in Pine Trails Park in Parkland, Florida, on February 15, 2018. Xinhua/Alamy

For all their stunning frequency, school shootings remain a confounding horror.

Not only is there little consensus on how to stop them—with suggestions ranging from restricting gun access to arming teachers—but there’s even less certainty about why a student would open fire on his classmates.

Now, some scientists are starting to explore if artificial intelligence (AI) could help find answers. The idea is that algorithms might be able to better analyze data related to school shootings, and perhaps even identify patterns in student language or behavior that could foreshadow school violence. The research is still in its early stages, and the prospect of using machines to predict who might become a school shooter raises privacy issues and other ethical questions associated with any kind of profiling, particularly since the process would involve children. The goal, though, is to see if the analytical power of intelligent machines can provide more clarity to tragedies too often consumed in a swirl of high emotions and political rhetoric.

Turning to technology

Using artificial intelligence as a way to bring scientific analysis to something as unfathomable as school shootings very much appealed to Shreya Nallapati. She just graduated from high school in Colorado, but back in February, after the shooting deaths of 17 people in Parkland, Florida, she was inspired by student leader Emma Gonzalez to take action.

“I felt we shouldn’t just be posting our thoughts and condolences,” Nallapati says. “I thought that as a rising generation of millennials, we should try to use what we know best—technology.”

So Nallapati, who’s been studying artificial intelligence in high school, reached out to other young women she knows through a program called Aspirations in Computing that's run by the National Center for Women & Information Technology. Aspirations in Computing encourages young women to enter computing and technological fields.

Nallapati asked others in the group to join her in a new project, #NeverAgainTech. She hopes that the collaborative effort will result in an AI-driven compilation and analysis of a wide range of data related to school shootings—from demographic and socio-economic information about past shooters, to any history of drug use or neurological disorders, to the availability of guns in the states where attacks have occurred. The goal is to develop a more comprehensive breakdown of the many components of school shootings than anything that currently exists, and make the resulting software available to the public, particularly schools and law enforcement agencies, next year.
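The article doesn't describe what that software would look like under the hood. Purely as an illustration, a compilation of this kind might begin as a structured table of the categories Nallapati mentions. Everything in the sketch below, including the column names, the pandas-based approach, and the placeholder rows, is an assumption rather than anything published by #NeverAgainTech.

```python
# A minimal sketch, assuming a pandas-based tabulation, of the kind of
# compilation #NeverAgainTech describes. Column names and rows below are
# hypothetical; the project has not published its schema or tooling.
import pandas as pd

# Each row stands in for one past incident, combining the categories the
# article mentions: demographics, socio-economic background, drug use or
# neurological history, and gun availability in the state of the attack.
incidents = pd.DataFrame([
    # Placeholder values for illustration only, not real incident records.
    {"state": "FL", "shooter_age": 19, "low_income_area": True,
     "prior_drug_use": False, "neurological_disorder": False,
     "gun_access_score": 4},
    {"state": "CO", "shooter_age": 17, "low_income_area": False,
     "prior_drug_use": True, "neurological_disorder": True,
     "gun_access_score": 3},
])

# A simple grouped summary is one way such a dataset could begin to surface
# patterns before any heavier machine-learning analysis is applied.
print(incidents.groupby("gun_access_score")["shooter_age"].mean())
```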

Assessing risk

A team of researchers at Cincinnati Children's Hospital Medical Center is taking a different approach to using AI to address school violence. It recently published a study suggesting that machine learning could help therapists and counselors discern the level of risk a student may present.

Specifically, the scientists found that AI was as accurate as a team of child and adolescent psychiatrists when it came to assessing the risk of violent behavior, based on interviews with 119 kids between the ages of 12 and 18. While the study focused broadly on physical aggression, lead researcher Drew Barzman says it also was applicable to assessing school shooting risk.

“There are usually warning signs before there is school violence,” he says. In particular, the language a student uses during an interview can help distinguish a high-risk teenager from a low-risk one, according to previous research Barzman directed. That study concluded that the former was more likely to express negative feelings about himself and about the acts of others. He also was more likely to talk about violent acts involving himself and violent video games or movies.

The team took another step by having an AI algorithm use the results of the earlier study to analyze transcripts of students interviewed for the new research. Based on language patterns, it indicated whether a person was at high or low risk of committing violence. More than 91 percent of the time, the algorithm, using only the transcripts, aligned with the more extensive assessments of a team of child and adolescent psychiatrists, who also had access to information from parents and schools.
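The article does not spell out which features or model the Cincinnati team used, so the sketch below is only a generic illustration of the technique it describes: turning interview language into features and classifying risk. The TF-IDF-plus-logistic-regression pipeline, the placeholder transcripts, and the labels are all assumptions, not the researchers' actual method.

```python
# A minimal sketch of a transcript-based risk classifier, assuming a common
# bag-of-words approach (TF-IDF features + logistic regression). Treat this
# only as an illustration of the general technique, not the study's model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder interview excerpts and labels; real training data would come
# from clinician-labeled interviews like those gathered in the study.
transcripts = [
    "I feel like everyone is against me and I think a lot about getting even",
    "Things have been hard lately but I talk to my counselor and my friends",
]
labels = ["high_risk", "low_risk"]

# Pipeline: convert interview text into TF-IDF features, then fit a classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(transcripts, labels)

# For a new interview, the model outputs a predicted risk category that a
# clinician could weigh alongside their own assessment.
print(model.predict(["I keep thinking about violent games and fighting"]))
```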

The students in the study were largely recruited from psychiatry outpatient clinics, inpatient units and emergency departments. Some had recently exhibited major behavioral changes, but for others, the changes were more minor. Barzman says they attended a diverse range of schools, although none were home-schooled.

According to Barzman, the study focused on predicting physical aggression at school, but it's still not known whether machine learning could actually prevent violence. The focus at this point is to provide therapists and counselors with a tool that could sharpen their assessments of students based on interviews. The intent, Barzman notes, is not to have machines make decisions about students.

"It would basically be meant to help the clinician in his or her decision making," says Barzman. "We would be providing them with a structure of what we've found to be important questions. It can be difficult to interview a student, pick out the right information and remember everything. The idea is to give them a tool that can help them through the process and increase the accuracy of their assessments."

Matty Squarzoni is another believer in the potential of artificial intelligence in addressing school violence. He’s CEO of a California startup called Sitch AI, which plans to market technology that he says could help schools deal with such threats. The initial focus will be on developing a system of sensors that will enable police officers to detect the precise location of gunshots, and also track a shooter’s movements through a school. But Squarzoni says the company also is looking at ways to use predictive analysis to spot potential problems before they turn violent.

He believes that artificial intelligence could analyze a student's data and flag notable changes in his or her performance or behavior. Squarzoni acknowledges potential concerns about privacy, but says the company would not know students’ identities.

“We’re not talking about creating profiles,” he says. “We’d be looking at each person as a unique entity. But humans are creatures of habit. When they start to have irregularities, that’s when you start looking at them. You spot flags, and maybe the flags start getting closer and closer. They could be mental health issues, or maybe their grades are dropping.

“We are not looking at being able to say, ‘This person’s going to be a shooter.’ We want to be able to say, ‘This person needs help.’"
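Squarzoni's description amounts to anonymized change detection: each student is tracked only as an opaque record, and a flag is raised when a metric drifts sharply from that student's own baseline. As a rough sketch of that idea, the z-score test, the threshold, and the grade values below are all assumptions; the article does not describe Sitch AI's actual method.

```python
# A minimal sketch of flagging "irregularities" in an anonymized student
# record, assuming a simple z-score test against the student's own baseline.
# Threshold and values are placeholders, not Sitch AI's real approach.
from statistics import mean, stdev

def flag_irregularity(history, latest, threshold=2.0):
    """Return True if the latest value deviates sharply from the baseline."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    baseline_sd = stdev(history)
    if baseline_sd == 0:
        return latest != history[0]
    z = abs(latest - mean(history)) / baseline_sd
    return z > threshold

# Example: an anonymized student whose grade average drops well below baseline.
grade_history = [88, 90, 87, 89]            # placeholder values, not real records
print(flag_irregularity(grade_history, 62))  # True: a flag worth a closer look
```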

Not so fast?

But others have serious concerns about the rush to use software algorithms to address complex societal issues.

“We are now seeing a trend of AI being applied to very sensitive domains at alarming speeds, and people making these algorithms don’t necessarily understand all the social, and even political, aspects of the data they’re using,” says Rashida Richardson, director of policy research at the AI Now Institute, a program at New York University that studies the social implications of artificial intelligence.

One area where the use of AI has come under fire is what's known as predictive policing. These are software products that analyze crime statistics and then predict where crimes are more likely to be committed. But critics point out that data such as arrest records can reflect human bias, which ultimately can get baked into the algorithm.

That is always a risk with predictive analysis, which is why the source of the data is a key factor in determining how objective it actually may be. With the AI tool being developed by the Cincinnati Children's Hospital researchers, however, the analysis is based on what individual students say during an interview, rather than on a broad compilation of statistics.

Still, Richardson believes it’s important that teams that create this kind of software are “interdisciplinary,” so that educators, for instance, are involved in programs that assess student behavior.

“Researchers may not understand a lot of the nuances of what people in the education and legal policy world call school climate. That includes safety and behavioral issues,” she says. “The kind of school you’re in will often dictate how behavior is dealt with and how discipline is handled.

“For example, charter schools have been found to have much more stringent disciplinary policies,” Richardson adds. “Children in that environment are going to be treated much differently than in a high-end private school and even in different public-school settings.

“Trying to understand very complicated issues that have a myriad of input and applying a tech solution that reflects a sliver of it is a problem because it can either reiterate the same problems we see in society or create a solution for a problem that’s not there.”

Richardson says another concern is that even if an AI program is developed with the best of intentions, it can end up being used in ways not anticipated by its creators.

“Once you come up with these tools,” she says, “it’s not like you continue to have control over how they’re implemented or how they’ll continue to affect society at large.”
