On October 6, Michael Navin, Principal Court Management Consultant with the National Center for State Courts (NCSC), presented "Stakeholder Development in AI Implementation." Watch the recording here.
Artificial intelligence (AI) can be an essential asset for streamlining operations in the public sector. However, integrating the right tools into day-to-day workflows is not as easy as it seems. The truth is, most AI initiatives in government fail not because of technology problems, but because organizations skip the unglamorous work of stakeholder engagement.
In particular, courts are consistently under the microscope and must be exceptionally cautious when using AI.
“Courts are a high-risk entity,” said Michael Navin, Principal Court Management Consultant with the National Center for State Courts (NCSC), in a recent InnovateUS workshop. “They really can’t afford AI failures, because if something doesn’t work, it makes the news.”
The same caution applies across law enforcement, corrections, and public safety agencies. When trust erodes, operational efficiency gains become irrelevant. That’s why stakeholder engagement is not a waste of time; it’s your project’s foundation.
Start With Problems, Not Solutions
The most common mistake agencies make when exploring AI is asking, “Where can we use AI?” instead of “What problems need solving?”
Navin urged leaders to begin with listening sessions that include frontline staff, administrators, and members of the public the agency serves. These sessions should surface pain points, bottlenecks, and sources of frustration.
Mapping where errors occur and which repetitive tasks consume the most resources reveals whether AI is truly needed, or whether simpler process improvements could solve the problem more effectively. Just as important, this early engagement builds buy-in from the people who will later need to use and trust the new system.
Build Governance Structures Strategically
Once problems are identified, resist the urge to research products immediately. Instead, form a core decision-making group that reflects both legal and operational perspectives.
For courts, Navin recommends beginning with four key roles:
- A judge to champion the initiative
- The court administrator who understands processes
- The chief information officer who knows the technology landscape
- General counsel to flag legal and regulatory issues early
For law enforcement, the equivalent group might include agency leadership, operations staff, IT personnel, and legal advisors.
From this foundation, agencies can expand to a governance committee that includes executives, finance, HR, procurement, communications, and frontline supervisors.
“The governance is so important because that’s setting the rules for how AI will be governed,” Navin said.
Learn From Real Implementation
Palm Beach County, Florida, offers a concrete example of stakeholder engagement driving measurable results.
Their courts faced a common problem: clerks were overwhelmed by routine e-filing reviews while complex cases created backlogs. Instead of buying software immediately, they involved stakeholders throughout the process. Clerks evaluated usability. IT teams assessed integration with existing systems. Leadership ensured policy alignment.
By tracking accuracy, backlog levels, and acceptance times, the court now automatically routes 150 document types with no human intervention—equivalent to the work of 45 full-time employees.
Crucially, no one lost their job. Instead, staff were reassigned to higher-value work.
“In public sector work, there’s always more to do,” Navin said. “When time spent on repetitive tasks decreases, people can redirect toward work that serves communities better.”
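For readers curious what this kind of automation can look like in practice, here is a minimal, hypothetical sketch of confidence-threshold routing with a human fallback and basic metric tracking. The classifier stub, document types, and threshold are illustrative assumptions, not a description of Palm Beach County’s actual system.

```python
# Hypothetical sketch: route e-filings automatically when a classifier is
# confident, otherwise queue them for clerk review, and track basic metrics.
# The classify() stub, document types, and threshold are illustrative only.
from dataclasses import dataclass


@dataclass
class RoutingMetrics:
    auto_routed: int = 0
    sent_to_clerk: int = 0
    corrections: int = 0  # clerk overrides of automatic routing

    @property
    def accuracy(self) -> float:
        routed = self.auto_routed or 1
        return 1 - self.corrections / routed


AUTO_ROUTE_THRESHOLD = 0.95  # only bypass human review on high confidence


def classify(document_text: str) -> tuple[str, float]:
    """Placeholder for whatever document classifier an agency adopts."""
    return "motion", 0.97  # (predicted type, confidence)


def route_filing(document_text: str, metrics: RoutingMetrics) -> str:
    doc_type, confidence = classify(document_text)
    if confidence >= AUTO_ROUTE_THRESHOLD:
        metrics.auto_routed += 1
        return f"queue:{doc_type}"      # straight to the right workflow
    metrics.sent_to_clerk += 1
    return "queue:clerk_review"         # human stays in the loop
```

The design choice worth noting is the fallback path: anything below the confidence threshold goes to a clerk, and clerk corrections feed the accuracy metric, which is how an agency can monitor the system against the kinds of measures Navin described.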
Address Data Reality First
Before evaluating any AI tool, agencies must assess their data readiness.
“AI is only as good as the data,” Navin said. “If there’s bad data, there’s going to be bad AI.”
Agencies should review what personal information they handle, how it’s protected, and whether records are complete. Engaging IT staff early helps determine whether existing infrastructure can support the tools under consideration.
This work isn’t glamorous, but skipping it leads to failed audits, legal liability, and implementations that simply don’t work as promised.
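As an illustration of what a first-pass data readiness check might look like, the sketch below flags incomplete fields and columns that appear to contain personal information. The column names, PII keywords, and completeness threshold are assumptions for demonstration, not a prescribed standard.

```python
# Hypothetical sketch of a data-readiness check: flag incomplete fields and
# columns that look like personal information before any AI tool is evaluated.
# Column names, PII keywords, and the 5% threshold are illustrative assumptions.
import pandas as pd

PII_HINTS = ("name", "dob", "ssn", "address", "phone", "email")
MAX_MISSING_FRACTION = 0.05


def readiness_report(records: pd.DataFrame) -> dict:
    missing = records.isna().mean()  # fraction of missing values per column
    return {
        "incomplete_columns": missing[missing > MAX_MISSING_FRACTION].to_dict(),
        "likely_pii_columns": [
            col for col in records.columns
            if any(hint in col.lower() for hint in PII_HINTS)
        ],
        "row_count": len(records),
    }


if __name__ == "__main__":
    sample = pd.DataFrame({
        "case_id": [1, 2, 3],
        "party_name": ["A", "B", None],
        "disposition": ["closed", None, None],
    })
    print(readiness_report(sample))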
Permission to Go Slowly
Perhaps Navin’s most reassuring message was about pace.
“AI is moving quicker than how the human brain can comprehend right now,” he said. “Don’t ever feel like we’re behind. It’s okay to be unsure. Take the time needed—especially in the public sector—so everyone’s comfortable with where things are going.”
This patience isn’t just prudent; it’s legally significant. When AI-assisted decisions face court challenges, agencies must demonstrate due diligence through documented stakeholder engagement, clear governance structures, bias testing, and meaningful human oversight at every stage.
“Don’t try to implement AI just to implement AI,” Navin said. “Make sure there’s a problem being solved or a process being enhanced.”
The next session in the AI and Law Enforcement Workshop Series takes place October 29, when Mihir Kshirsagar, Tech Policy Clinic Lead at Princeton University, will present “Predictive Policing and Algorithmic Bias.” Register here.
The workshop series is cohosted by InnovateUS in partnership with the State of New Jersey, the AI Center of Excellence for Justice, Public Safety, and Security, and the Rutgers Miller Center on Policing and Community Resilience.