AI-Generated Police Reports: Efficiency or Ethical Concern?

Police departments are increasingly embracing cutting-edge technologies, from drones to facial recognition. The latest addition to their toolkit? AI-driven software designed to auto-generate police reports. Dubbed Draft One, this generative AI tool aims to reduce the time officers spend on paperwork, freeing them for community engagement and other duties.

What Is Draft One?

Draft One, launched by Axon in April, uses Microsoft’s Azure OpenAI platform to transcribe body-camera audio and generate preliminary police reports. Axon says the tool saves officers 30–45 minutes per report, and some departments report saving up to an hour a day.

Axon, best known for its Tasers and body cameras, frames the tool as a step toward its stated goal of reducing gun-related incidents between police and the public. By easing the administrative burden, the company argues, officers can devote more attention to their physical and mental well-being, which in turn supports better decision-making and de-escalation.

How It Works

Draft One converts body-camera audio into a text transcript and uses a large language model to generate a draft narrative. To minimize inaccuracies, Axon has turned down the AI’s “creativity dial” (roughly analogous to a model’s temperature setting) so that output sticks strictly to facts in the audio transcript, avoiding speculation or embellishment.
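To illustrate what a “creativity dial” means in LLM terms, the sketch below builds a chat-completion-style request with the temperature parameter set to zero and a fact-only system prompt. The function name, model name, and prompt wording are assumptions for illustration, not Axon’s actual configuration.

```python
def build_draft_request(transcript: str) -> dict:
    """Build a hypothetical chat-completion request for drafting a report.

    Temperature 0 makes sampling maximally deterministic -- the
    "creativity dial" turned all the way down. This mirrors the idea
    described in the article, not Axon's real setup.
    """
    return {
        "model": "gpt-4o",   # placeholder model name, not Axon's
        "temperature": 0.0,  # no creative sampling: stick to likely tokens
        "messages": [
            {
                "role": "system",
                "content": (
                    "Draft a police report narrative using ONLY facts "
                    "stated in the transcript. Do not speculate or "
                    "embellish. Note any gaps for the officer to fill in."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    }
```

The request dictionary could then be sent with any OpenAI-compatible client; the essential points are the zero temperature and the instruction restricting the model to the transcript.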

Once the draft is complete, officers review and add any missing details. The finalized report undergoes another round of human review, ensuring accuracy before submission. Additionally, reports created with AI involvement are flagged for transparency.
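The review-and-flag workflow above can be modeled as a minimal sketch. The class, method, and field names here are hypothetical, chosen only to show the two human checkpoints and the transparency flag; they do not reflect Axon’s software.

```python
from dataclasses import dataclass, field

@dataclass
class Report:
    """Hypothetical report record modeling the workflow in the article."""
    draft: str                         # AI-generated draft narrative
    ai_generated: bool = True          # transparency flag for AI involvement
    officer_reviewed: bool = False     # first checkpoint: drafting officer
    supervisor_reviewed: bool = False  # second checkpoint: human reviewer
    additions: list = field(default_factory=list)

    def officer_review(self, missing_details: list) -> None:
        # The drafting officer reviews the draft and adds missing details.
        self.additions.extend(missing_details)
        self.officer_reviewed = True

    def supervisor_review(self) -> None:
        # The second human review can only follow the officer's review.
        if not self.officer_reviewed:
            raise ValueError("Officer review must happen first")
        self.supervisor_reviewed = True

    def can_submit(self) -> bool:
        # Submission requires both human checkpoints to have passed.
        return self.officer_reviewed and self.supervisor_reviewed
```

The design point the sketch captures is that the AI output is never submitted directly: both human reviews must complete, and the `ai_generated` flag travels with the report.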

Applications in Police Work

The software’s use varies across departments. In Oklahoma City, Draft One is limited to minor incidents that don’t involve arrests. In contrast, officers in Lafayette, Indiana, have the flexibility to use it for any type of case.

While many officers praise the tool’s time savings, others have raised concerns about its reliability and ethical implications.

Ethical and Accuracy Concerns

Draft One’s underlying technology is similar to ChatGPT, which has faced criticism for generating misleading or false information. According to Noah Spitzer-Williams, Axon’s AI products manager, the system has been tailored to prioritize factual accuracy.

However, experts like Lindsay Weinberg, a Purdue University professor specializing in digital ethics, warn against relying on AI for tasks with significant legal and societal consequences. Weinberg notes that generative AI is designed to produce plausible-sounding sentences, not necessarily truthful or unbiased information.

“Almost every algorithmic tool you can think of has been shown time and again to reproduce and amplify existing forms of racial injustice,” Weinberg points out. These biases, coupled with the risk of errors in life-altering situations, raise serious concerns about the widespread adoption of AI tools in law enforcement.

Balancing Efficiency and Accountability

While Draft One has the potential to streamline police work, its use highlights the delicate balance between leveraging AI for efficiency and ensuring ethical, unbiased law enforcement practices. Policymakers, developers, and law enforcement must collaborate to establish safeguards that prevent misuse and protect civil rights.

More on Axon’s advancements in public safety technology is available on the company’s website.