On September 25, Peter Lambrinakos, Director of the Public Safety Program at the uOttawa Professional Development Institute and co-founder of the AI Center of Excellence for Justice, Public Safety, and Security, presented "AI Policy Development." Watch the recording here.
In September, Ohio became the first state to deploy AI-powered technology that actively interrogates citizens reporting suspicious activity through the state's new Safeguard Ohio app.
While proponents argue that AI prompts will enhance the anonymous tip process, the launch raises an important question for law enforcement agencies: What happens when this AI system faces its first legal challenge?
Safeguard Ohio is "trained to keep asking questions until the person reporting says they have no more information," according to Mark Porter, executive director of Ohio's Department of Homeland Security.
Ohio's pioneering approach to AI-enhanced public safety reporting is a case study in why AI governance policies aren't optional anymore: they're a legal shield.
While Ohio may be enforcing such policies, legal frameworks elsewhere in the country aren't keeping up with the rapid deployment of AI technologies. Courts are demanding disclosure of AI error rates and training data. Privacy commissioners are ruling against agencies that implement tools without proper policies. Civil liability cases are mounting.
The gap between AI adoption and legal preparedness creates serious risks for agencies and communities. Public sector professionals who understand these challenges can build frameworks that harness AI's benefits while avoiding costly mistakes.
Policy as Your Legal Foundation
Peter Lambrinakos leads AI governance work at the University of Ottawa and previously served as chief of police in Canada. He warns that agencies deploying high-risk AI tools without clear policies operate without legal protection.
"You will be challenged in court," Lambrinakos said in a recent InnovateUS workshop. "An attorney will put officers on the stand and ask them to explain how the AI tool that justified a Fourth Amendment stop actually works."
The Clearview AI case demonstrates the stakes. Canada's federal privacy commissioner ruled that law enforcement's use of the facial recognition tool was unlawful because it was deployed without proper policy or risk assessment. Agencies had to implement comprehensive policy changes while managing public scrutiny and potential civil liability.
Building Legal Defensibility Into AI Governance
Legal frameworks determine whether AI tools enhance or undermine organizational missions. Agencies therefore need to embed legal defensibility into every stage of AI governance.
Start with risk classification. Policies must require teams to use frameworks like the NIST AI Risk Management Framework to categorize tools by potential impact. For high-risk applications, policies should explicitly address Criminal Justice Information Services (CJIS) security requirements.
"Many AI tools are cloud-based," Lambrinakos said. "Your policy must forbid any solution that stores or processes criminal justice information in a vendor's cloud environment, unless that vendor provides documented evidence that their controls fully meet CJIS security policy."
Policies must also mandate "meaningful human oversight" to combat automation bias — the tendency to over-trust computer outputs. This means establishing hard stops where trained personnel review and validate AI suggestions before taking operational action. Officers must articulate their reasoning beyond "the computer said so."
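For agencies that want to turn these requirements into something auditable, the short Python sketch below shows one possible way to record a tool's assessed risk tier, its CJIS exposure, and its human-review hard stops, and to list the reasons a deployment would be blocked. It is a minimal illustration only: the record format, the risk tiers, and names such as AIToolRecord and deployment_blockers are assumptions for the example, not part of the NIST framework or any workshop material.

```python
from dataclasses import dataclass, field

# Illustrative risk tiers an agency might assign after a NIST AI RMF-style
# assessment; the labels themselves are hypothetical.
RISK_TIERS = ("low", "moderate", "high")

@dataclass
class AIToolRecord:
    name: str
    vendor: str
    risk_tier: str                     # outcome of the risk assessment
    stores_cji_in_vendor_cloud: bool   # does a vendor cloud hold criminal justice information?
    cjis_compliance_documented: bool   # has the vendor provided documented CJIS evidence?
    human_review_checkpoints: list = field(default_factory=list)

def deployment_blockers(tool: AIToolRecord) -> list:
    """Return the reasons this policy sketch would block deployment."""
    blockers = []
    if tool.risk_tier not in RISK_TIERS:
        blockers.append(f"unknown risk tier: {tool.risk_tier}")
    if tool.stores_cji_in_vendor_cloud and not tool.cjis_compliance_documented:
        blockers.append("vendor cloud handles CJI without documented CJIS compliance")
    if tool.risk_tier == "high" and not tool.human_review_checkpoints:
        blockers.append("high-risk tool has no human-review hard stops defined")
    return blockers

if __name__ == "__main__":
    tool = AIToolRecord(
        name="Facial Recognition Pilot",   # hypothetical example tool
        vendor="ExampleVendor",
        risk_tier="high",
        stores_cji_in_vendor_cloud=True,
        cjis_compliance_documented=False,
        human_review_checkpoints=[],
    )
    for reason in deployment_blockers(tool):
        print("BLOCKED:", reason)
```

Keeping these checks in a structured record makes it easier to show, in an audit or in court, exactly which safeguards were verified before a tool went into service.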
Transparency as Legal Strategy
Legal defensibility extends beyond courtroom preparation to building community trust that prevents legal challenges. Modern best practices require stakeholder engagement before high-risk tools are procured.
Lambrinakos said agencies should show communities "the blueprint before you build" to replace "suspicion with transparency."
Create plain-language summaries that answer four key questions: What problem are you trying to solve? What technology are you considering? What safeguards are you building into your policy? What are community concerns?
This transparency demonstrates due diligence while documenting community input that can support your legal position if challenges arise.
Data Governance Fundamentals
The most complex legal issues center on data — who owns it, how it's used, and what happens when contracts end. When agencies purchase this technology, they aren't just buying the algorithms behind it; they are also buying the decisions those algorithms make based on their training data.
Procurement policies must establish three core pillars: data ownership, data integrity and data security.
First, establish in writing that your agency retains full ownership of its data, including AI-generated outputs. Be wary of vendors whose business models involve absorbing agency data to improve their proprietary models for sale to others.
Second, demand transparency into training data. Require vendors to disclose data sources, demographic representation, and bias testing measures.
"If a vendor won't provide transparency, you can't buy what you can't defend in court," Lambrinakos said.
Third, address comprehensive data security and access controls, including cybersecurity standards, vendor access limitations and explicit prohibitions on third-party data sharing.
Your Next Steps
AI governance is no longer optional. Courts are increasingly sophisticated about AI capabilities and limitations. Community stakeholders expect transparency.
The Government of Canada established the FASTER framework:
- Fair (tested for bias)
- Accountable (human responsible for decisions)
- Safe (security controls in place)
- Transparent (explainable policies and processes)
- Explainable (understandable outputs)
- Responsible (meeting legal, ethical, and community standards)
As you work to integrate AI technology in your agency, it is essential to establish both mandatory annual policy reviews and event-based triggers for immediate updates, including new tool procurements, incidents that attract media attention, new laws or court rulings, and audit findings revealing performance drift. If a tool fails an audit or becomes legally indefensible, policies should include clear decommissioning procedures.
Start by inventorying AI tools currently in use across your agency. Many departments may be using AI-powered solutions without realizing it. Then develop both a general enterprise AI policy and specific use-case policies for each high-risk application.
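As one illustration of how that inventory can stay actionable, the sketch below assumes a simple CSV with hypothetical columns (tool, unit, risk_tier, use_case_policy) and flags high-risk tools that still lack a use-case policy. The file name and schema are assumptions for the example, not a standard format.

```python
import csv
from collections import defaultdict

def policy_gaps(path: str) -> dict:
    """Group high-risk AI tools that lack a use-case policy by the unit that runs them."""
    gaps = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            high_risk = row["risk_tier"].strip().lower() == "high"
            has_policy = row["use_case_policy"].strip().lower() == "yes"
            if high_risk and not has_policy:
                gaps[row["unit"]].append(row["tool"])
    return dict(gaps)

if __name__ == "__main__":
    # "ai_tool_inventory.csv" is a hypothetical file maintained by the agency.
    for unit, tools in policy_gaps("ai_tool_inventory.csv").items():
        print(f"{unit}: high-risk tools without a use-case policy -> {', '.join(tools)}")
```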
With proper policy frameworks, you can enable innovation while protecting your agency and community from unnecessary risk.
The AI and Law Enforcement Workshop Series is cohosted by InnovateUS in partnership with the State of New Jersey, the AI Center of Excellence for Justice, Public Safety, and Security, the University of Ottawa Professional Development Institute, and the Rutgers Miller Center on Policing and Community Resilience.
The next session, "AI in Police Operations," takes place October 1 and will be led by Anita McGahan, Patrick Poulin, and Shane Evangelist. Register here.