© 2025 KVPR | Valley Public Radio - White Ash Broadcasting, Inc. :: 89.3 Fresno / 89.1 Bakersfield

New state law requires additional safeguards when police use generative AI

The tool being used by the Fresno Police Department, called Draft One, uses the audio from body camera footage to write reports.
Gary Kazanjian

FRESNO, Calif. – Police departments are preparing for a new state law signed earlier this month by Governor Gavin Newsom. The legislation aims to increase transparency around the use of generative artificial intelligence (AI) by police. California is one of the first states to tackle the issue.

Senate Bill 524, signed into law on Oct. 10, requires police officers to disclose when they use AI tools to write police reports. Those tools include Draft One, an AI “assistant” that transcribes and summarizes audio from police bodycam footage to draft a report based on the incident. Officers can then revise and edit the AI’s draft as necessary.

Specifically, the law now requires a written disclosure to appear at the bottom of each page of a police report for which Draft One or other similar tools were used. The legislation also requires an “audit trail” that would preserve the original draft as well as identify the source bodycam footage or audio.

Police departments in Fresno and East Palo Alto were among the first in the state to adopt the technology, as first reported by KVPR and KQED last year. KQED’s reporting was cited in the legislative analysis of the bill that ultimately became the law.

Axon, the company that developed Draft One, told KVPR and KQED last year that its developers built safeguards into their software. For example, officers must fill in prompts within the generated report then sign off on the report’s accuracy before it can be submitted. The tool also includes a disclaimer that Draft One was used, though police agencies have thus far been able to customize where in a report it’s placed.

Police departments have said the technology saves officers significant time, and even that some AI-generated reports are better than the ones written entirely by officers. Nevertheless, the bill arose out of concerns that bias or errors generated by AI software could make their way into final incident reports, which play a key role in charging, detaining and sentencing suspects.

Kate Chatfield, executive director of the California Public Defenders Association, which sponsored the bill, said she’s grateful for the new law.

“Due process requires transparency,” Chatfield wrote in a public statement. “Everyone in the legal system – judges, juries, attorneys, and the accused – deserve to know who wrote the police report.”

“With SB 524, California is sending a clear message: Innovation in policing must be tethered to accountability,” said State Senator Jesse Arreguín, who authored the bill, in the statement. “No more opaque reports, no more guessing whether AI shaped the narrative.”

Kevin Little, a defense attorney in Fresno, said the law is a step in the right direction, but not a true remedy.

“My own experience with AI in an unrelated context leads me to conclude that AI platforms have a significant amount of user bias and tend to support the agendas of the user,” he said.

Larry Bowlan, a spokesperson for the Fresno Police Department, said his agency has previously implemented a handful of the safeguards now required by the law, and doesn’t expect the new rules to be especially burdensome.

“Our AI-powered narrative assistant…already generates a disclosure and requires our users to sign acknowledgements. Draft One also already produces the requisite audit trail,” he wrote in an email. “We are actively working with our vendor on the best solution for preserving and storing the first draft provided by the assistant, as well as a minor tweak to ensure the disclosure is present on each printed page, rather than just the first page as it is now.”

A spokesperson for the East Palo Alto Police Department said his agency has no official response to the law at this time.

Opponents of the bill include the California Police Chiefs Association and the Police Officers Research Association of California (PORAC), a police union advocacy and lobbying group.

The California Police Chiefs Association did not respond to a request for comment. In a statement, PORAC President Brian R. Marvel said the signed version of the law is an improvement over initial drafts.

“In its original form, SB 524 would have put significant administrative burden on already short-staffed police forces and created broad liability by requiring agencies to retain every AI-generated draft, interim, and final version of a report, each labeled with AI disclosure language,” he wrote. “PORAC advocated to amend this bill…We were pleased to see several of these amendments taken, with the final version of the bill significantly narrowed.”

Axon representative Victoria Keough said the company is committed to complying with all state and federal laws, including SB 524.

“When developing AI for public safety, transparency and accountability are essential,” Keough wrote in a statement. “Responsible innovation remains at the core of how Axon designs and delivers new technology.”

The new requirements go into effect on January 1, 2026.

Additional reporting by Sukey Lewis (KQED, San Francisco).

This story was produced with support from the California Newsroom, a collaboration of public media outlets throughout the state, with NPR as its national partner. 

Kerry Klein