Police are starting to use AI chatbots to write crime reports

A body camera recorded every word and bark as Police Sergeant Matt Gilmore and his sniffer dog Gunner searched for a group of suspects for nearly an hour.

Normally, the Oklahoma City police sergeant would grab his laptop and spend another 30 to 45 minutes writing up a report about the search. But this time, he had artificial intelligence write the first draft.

Drawing on all the sounds and radio chatter picked up by the microphone attached to Gilmore's body camera, the AI tool produced a report in eight seconds.

“It was a better report than I could have ever written, and it was 100 percent accurate. It was more fluid,” Gilmore said. It even documented a fact he didn't remember hearing – another officer's mention of the color of the car the suspects fled from.

Oklahoma City's police department is one of a few experimenting with AI chatbots to produce first drafts of incident reports. Officers who have tried it out are enthusiastic about the time-saving technology, while some prosecutors, police watchdogs and legal scholars are concerned that it could alter a fundamental document of the criminal justice system that plays a role in who is prosecuted or incarcerated.

Based on the same technology as ChatGPT and distributed by Axon, the leading US provider of body cameras and developer of the Taser, the program could be another “game changer” for policing, according to Gilmore.

“They become police officers because they want to do police work, and spending half the day doing data entry is just a boring part of the job that they hate,” said Rick Smith, founder and CEO of Axon, describing the new AI tool, called Draft One, as having drawn “the most positive response” of any product the company has launched to date.

“There are certainly concerns,” Smith added. Prosecutors pursuing criminal cases in particular want to be sure that police officers – and not just an AI chatbot – are responsible for preparing their reports, as they may have to testify in court about their observations.

“You never want to call an officer to the stand who says, 'The AI wrote that, not me,'” Smith said.

AI technology is nothing new to police departments. They have introduced algorithmic tools to read license plates, recognize suspects' faces, detect gunshot sounds, and predict where crimes might occur. Many of these applications come with privacy and civil rights concerns, and lawmakers have attempted to put safeguards in place. But the introduction of AI-generated police reports is so new that there are few, if any, guardrails for their use.

Concerns that society's racial biases and prejudices could be woven into AI technology are just part of what Oklahoma City community activist Aurelius Francisco finds “deeply disturbing” about the new tool, he told The Associated Press.

“The fact that the technology is being used by the same company that supplies the department with Tasers is alarming enough,” said Francisco, co-founder of the Foundation for Liberating Minds in Oklahoma City.

He said automating these reports would “ease the ability of police to harass, surveil and inflict violence on community members. While this makes the police's job easier, it makes the lives of Black and brown people harder.”

Before the tool was rolled out in Oklahoma City, police showed it to local prosecutors, who advised caution before using it in high-stakes criminal cases. For now, it is used only for reports of minor incidents that do not lead to an arrest.

“So no arrests, no felonies, no violent crimes,” said Capt. Jason Bussert, who oversees information technology for the 1,170-officer Oklahoma City police department.

The situation is different in Lafayette, Indiana, where Police Chief Scott Galloway told the AP that all of his officers can use Draft One on any type of case, and that the tool has been “incredibly popular” since the pilot began earlier this year.

The same goes for Fort Collins, Colorado, where police Sergeant Robert Younger said officers can use the system for any type of report – though they have found it does not work well when patrolling the downtown bar district because of the “overwhelming noise.”

In addition to using artificial intelligence to analyze and summarize the audio recording, Axon also experimented with computer vision to summarize what can be “seen” in the video footage. However, the company quickly realized that the technology was not yet mature.

“Given the sensitivities in policing, around race and other identities of the people involved, this is an area where I think we still have some work to do before we roll it out,” said Smith, CEO of Axon, describing some of the responses tested as not “overtly racist” but insensitive in other ways.

These experiments led Axon to focus squarely on audio for the product it unveiled in April during its annual conference for police officials.

The technology is based on the same generative AI model as ChatGPT, developed by San Francisco-based OpenAI, a close business partner of Microsoft, Axon's cloud computing provider.

“We use the same underlying technology as ChatGPT, but have access to more knobs and dials than an actual ChatGPT user,” said Noah Spitzer-Williams, who manages Axon's AI products. Turning down the “creativity dial” helps the model stick to the facts so it “doesn't exaggerate or hallucinate in the same way that you would if you were just using ChatGPT,” he said.
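That “creativity dial” corresponds to what most generative-AI interfaces call a temperature setting: the lower it is, the more deterministic the output and the closer it stays to the source material. The sketch below is purely illustrative – it assumes the publicly documented OpenAI Python client, and the model name, prompt and transcript are invented for the example. It is not Axon's code.

```python
# Illustrative sketch only - not Axon's code. Assumes the OpenAI Python
# client (pip install openai); the model name and prompt are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = "Dispatch: ... Officer: ... (transcribed body-camera audio)"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice for this example
    temperature=0.0,      # the "creativity dial" turned all the way down:
                          # low temperature makes sampling near-deterministic,
                          # which helps keep the draft close to the transcript
    messages=[
        {"role": "system",
         "content": "Draft a factual incident report using only details "
                    "present in the transcript. Do not infer or invent "
                    "anything."},
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)  # the first-draft report
```

Lowering the temperature makes the output more repeatable and less embellished, though, as the article notes later, it is not by itself a guarantee against hallucination.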

Axon is not disclosing how many police departments are already using the technology. The company is not alone, with startups like Policereports.ai and Truleo offering similar products. But given Axon's close relationships with police departments that buy its Tasers and body cameras, experts and law enforcement officials expect AI-generated reports to become more common in the coming months and years.

But before that happens, legal scholar Andrew Ferguson wants to see more public discussion about the benefits and potential harms. For one thing, the large language models behind AI chatbots tend to invent false information. This problem is known as hallucination and can lead to a police report being filled with convincing and barely detectable untruths.

“I fear that the automation and ease of use of the technology could cause police officers to be less careful in their writing,” said Ferguson, a law professor at American University who is working on what is expected to be the first law journal article on the emerging technology.

Ferguson said a police report is important to determine whether an officer's suspicions “justify the deprivation of a person's liberty.” Sometimes it's the only statement a judge sees, especially in minor offenses.

Human-generated police reports have their own flaws, Ferguson said, but which is more reliable remains an open question.

For some police officers who have tried it, the technology is already changing how they respond to a reported crime. They narrate what is happening aloud so the camera captures what they want to put on record.

Bussert expects that as this technology becomes more widespread, officers will become “more and more verbal” in describing what lies before them.

After Bussert loaded the video of a traffic stop into the system and pressed a button, the program generated a report in a conversational, narrative style that included the date and time – just as if an officer had typed it up from notes – all based on the body camera audio.

“It literally took seconds,” Gilmore said, “and it was so complete that I thought, 'I don't need to change a thing.'”
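For readers who want a concrete picture of that button press, here is a minimal two-step sketch – transcribe the body-camera audio, then draft from the transcript – again assuming the OpenAI Python client. The model names and prompt are this example's choices; Draft One's actual internals are not public.

```python
# Conceptual end-to-end sketch, not Draft One's real internals.
# Assumes the OpenAI Python client; model names and prompt are hypothetical.
from datetime import datetime
from openai import OpenAI

client = OpenAI()

def generate_report(audio_path: str) -> str:
    # Step 1: speech-to-text over the body-camera audio track.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=f
        ).text

    # Step 2: a low-temperature drafting pass over the transcript.
    draft = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.0,
        messages=[
            {"role": "system",
             "content": "Write a first-draft incident report strictly from "
                        "the transcript below. Invent nothing."},
            {"role": "user", "content": transcript},
        ],
    ).choices[0].message.content

    # Step 3: stamp the draft with the date and time, as the article notes.
    return f"Draft generated {datetime.now():%Y-%m-%d %H:%M}\n\n{draft}"
```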

At the end of the report, the officer must check a box indicating that the report was created using AI.
