There have been more than 80 mass shootings in the United States in the first two months of 2023, which averages out to more than one mass shooting every single day. At that pace, the U.S. is on track to surpass the record-setting mass shooting totals of 2021 and 2022.

The most recent mass shooting to sweep the nation’s headlines occurred at Michigan State University. On February 13th, 2023, a 43-year-old man allegedly killed three students and left five others critically injured on MSU’s campus before turning the gun on himself.

After each mass shooting, gun control draws renewed attention in politics and the media, and the recent attack at MSU is no different. Following the incident, President Biden once again called for a nationwide ban on assault rifles, while Republicans pushed for improved mental health services.

With congressional gridlock and a steady rise in mass shootings, one new solution has drawn attention for its potential to stop a shooting before it happens: artificial intelligence.

According to the artificial intelligence security industry, enhanced security cameras can use A.I. to identify suspicious activity. From spotting a drawn gun to flagging a suspect loitering outside a school, A.I.-enabled cameras can help detect potential threats and alert security officers so they can act preventively.

While this A.I. video enhancement may be new, the use of security cameras as a public safety and security measure is not. Schools have used cameras as deterrents for years, and their popularity has grown dramatically over the past decade. In fact, security camera usage in public schools increased from 61% to 91% between the 2009–2010 and 2019–2020 school years.

However, A.I. security industry representatives say that while existing security cameras can be an essential tool for explaining what happened after the fact, or for convicting a suspect, they often provide little help during an attack or in preventing one. Essentially, they believe A.I. technology corrects for fallible security officers, who often struggle to monitor multiple video feeds at once, while also reducing critical response time.

Kris Greiner, a vice president at Scylla, an Austin, Texas-based A.I. company, said, “At a time when every second counts, it’s quite possible that it would have a heavy impact.” Greiner was referring to the company’s fully automated system, which can identify a weapon or a suspicious person and immediately notify the officials present. The system can also be set to immediately deny access and lock doors, which is especially useful since mass shooters often draw their guns before entering a facility.
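
To make that workflow concrete, here is a minimal, hypothetical sketch of such a detect-and-respond loop in Python. It is not Scylla’s actual software; the detector, alert, and door-lock functions (detect_threats, notify_officials, lock_doors) are illustrative stubs standing in for real model inference and building integrations.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    label: str          # e.g. "weapon" or "suspicious_person"
    confidence: float   # model score between 0 and 1

def detect_threats(frame, camera_id):
    """Stand-in for A.I. model inference; a real system would analyze the frame here."""
    return [Detection(camera_id, "weapon", 0.97)]  # fabricated example output

def notify_officials(detection):
    print(f"ALERT: {detection.label} on camera {detection.camera_id} "
          f"(confidence {detection.confidence:.0%})")

def lock_doors(near):
    print(f"Access control: locking doors near camera {near}")

def handle_frame(frame, camera_id, threshold=0.9):
    """Alert, and optionally lock doors, as soon as a detection crosses the threshold."""
    for det in detect_threats(frame, camera_id):
        if det.confidence >= threshold:
            notify_officials(det)
            if det.label == "weapon":
                lock_doors(near=det.camera_id)

handle_frame(frame=None, camera_id="lobby-entrance-1")
```

The point of the design is that no human has to be watching the feed: the moment the model’s confidence crosses a threshold, the alert and the access-control response fire automatically.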

Since Scylla’s A.I. system works with existing security cameras, it can be installed easily and more affordably. Additionally, according to Greiner, there is no limit to what the A.I. can watch.

“Imagine a human sitting in a command center watching a video wall; the human can only watch four to five cameras for four to five minutes before he starts missing things,” Greiner said. Scylla’s approach is to take those existing cameras and “just give it a brain.”

ZeroEyes, another A.I. security company, also provides video monitoring services, but its A.I.-enhanced security focuses solely on gun detection. Like Scylla, ZeroEyes uses A.I. to monitor live video feeds and sends an alert when a gun is detected. The alert goes first to an internal control center for verification before officials are notified. Alaimo, a ZeroEyes co-founder, says the process takes as little as three seconds from verification to communication.
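
As a rough illustration of that detect, verify, and notify flow (not ZeroEyes’ actual implementation), a human-in-the-loop pipeline might look something like the sketch below; on_gun_detected, human_verifies, and dispatch_alert are hypothetical placeholders.

```python
import queue
import time

review_queue = queue.Queue()

def on_gun_detected(camera_id, snapshot):
    """The A.I. detector pushes a candidate hit to the internal review queue."""
    review_queue.put({"camera_id": camera_id,
                      "snapshot": snapshot,
                      "detected_at": time.time()})

def human_verifies(candidate):
    """Stand-in: in practice a trained employee confirms or rejects the image."""
    return True

def dispatch_alert(candidate):
    latency = time.time() - candidate["detected_at"]
    print(f"Verified gun on camera {candidate['camera_id']}; "
          f"notifying officials ({latency:.1f}s after detection)")

def review_loop():
    while not review_queue.empty():
        candidate = review_queue.get()
        if human_verifies(candidate):   # only verified detections go out
            dispatch_alert(candidate)   # false positives stop at this step

on_gun_detected("gym-entrance-2", snapshot=None)
review_loop()
```

The trade-off is latency versus accuracy: routing every detection through a human reviewer adds seconds, but that review step is also what lets the company claim false positives are screened out before officials are contacted.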

Critics, however, question the effectiveness of these A.I.-enhanced systems, and say the companies have not provided independently verified data. 

Greiner of Scylla says his company’s A.I. is 99.9% accurate at identifying weapons, but he did not provide information on how accurate it is at identifying suspicious activity. He also said the company has not yet undergone independent verification, though it does allow customers to test the system before purchasing.

Because ZeroEyes uses employees to verify each threat, Alaimo said the company eliminates the potential for false positives. He did not, however, provide information on how many false positives the A.I. identifies, or whether the verifying employees themselves have made errors.

In addition to questions about effectiveness, critics say the products raise concerns over privacy infringement and even discrimination. In states with open-carry laws, the A.I. could infringe on people’s right to carry a gun by flagging non-lethal threats. Critics also worry that the same racial bias found in facial recognition systems could be replicated in A.I.-enhanced security. According to both Greiner and Alaimo, their systems do not identify individuals based on race, gender, or ethnicity. And Alaimo said, “If one life is saved, that’s a victory.”