On September 10, Luis Tomlinson, Unit Head of the Communication Infrastructure Unit, New Jersey State Police, and Ergin Orman, Detective Sergeant First Class, Internet Crimes Against Children Unit, New Jersey State Police, presented "AI in Action: Use Cases and Capabilities in Law Enforcement." Watch the recording here.
Every month, your department encounters more AI-generated evidence in criminal cases.
Every month, you are also discovering new ways these same tools could improve your own investigations.
Law enforcement has to become proficient with technology that criminals are simultaneously using against you, Orman said.
How AI Creates Images: Pattern Recognition Like Investigative Work
AI image generation tools like Stable Diffusion do not simply copy and paste from a database of pictures. Instead, they use a process called pattern recognition that's surprisingly similar to how you analyze crime scenes as an experienced investigator.
"The process isn't memorizing the pictures," Orman said. "It looks at the pictures, it's pattern recognition, it's looking at shapes, colors, different values in the picture that make it unique."
Just as you learn to recognize patterns in criminal behavior through training and experience, AI systems learn to identify visual patterns through exposure to millions of examples. This distinction matters enormously for your work because it affects how reliable AI-generated content can be and how you should evaluate it as evidence.
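As a loose illustration of this distinction (a toy sketch with invented numbers, not how Stable Diffusion actually works internally), a pattern-based system keeps a handful of summary statistics about its training images rather than the pictures themselves, and the original pixels cannot be recovered from what it stores:

```python
import numpy as np

rng = np.random.default_rng(0)
# Three synthetic 8x8 grayscale "images" standing in for training data
images = [rng.random((8, 8)) for _ in range(3)]

def extract_patterns(img):
    # Keep only summary statistics (brightness, contrast, edge strength),
    # not the pixels themselves -- a stand-in for "learning patterns."
    return {
        "mean_brightness": float(img.mean()),
        "contrast": float(img.std()),
        "edge_strength": float(np.abs(np.diff(img, axis=1)).mean()),
    }

patterns = [extract_patterns(img) for img in images]

# The stored representation is tiny compared to the originals:
stored_values = sum(len(p) for p in patterns)       # 9 numbers
original_values = sum(img.size for img in images)   # 192 pixel values
print(stored_values, original_values)
```

Here the three images hold 192 pixel values while the stored "patterns" hold only nine numbers; real generative models scale this idea up to millions of images distilled into a fixed set of learned parameters.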
Practical Applications for Your Investigations
The presenters highlighted several applications for AI image generation in law enforcement operations:
Persona Development for Online Operations: You can generate consistent, realistic persona images for undercover online investigations. Instead of using stock photos that suspects might reverse-search, you can create entirely synthetic but believable personas that maintain consistency across multiple interactions.
Scene Recreation and Enhancement: Using "inpainting" techniques, you can modify existing crime scene photos or surveillance images to test different scenarios or enhance unclear details. This capability allows you to test hypotheses without contaminating original evidence.
Training Scenario Creation: You can create realistic training scenarios without compromising real victim privacy or safety. This proves particularly valuable for sensitive investigations involving children or other vulnerable populations.
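The inpainting workflow mentioned above follows a mask-and-fill structure: you mark the region to modify, and the tool regenerates only that region while leaving the rest of the image untouched. The following toy sketch (entirely invented for illustration) fills the masked region with the mean of the surrounding pixels; real inpainting tools replace that fill step with a generative model, but the overall structure is the same:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((5, 5))            # stand-in for a source photo

# Mark the 2x2 region to regenerate
mask = np.zeros((5, 5), dtype=bool)
mask[1:3, 1:3] = True

# Naive "fill": use the mean of the unmasked pixels.
# A real tool would synthesize plausible content here instead.
filled = img.copy()
filled[mask] = img[~mask].mean()
```

The key property, preserved even in this naive version, is that pixels outside the mask are untouched, which is why the technique can be used to test scenarios without altering the original image file.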
Manage the Critical Risks
While AI image generation offers powerful capabilities, it also presents significant risks. The most immediate concern is not the technology itself but security and privacy.
"When it comes to using anything online, all these online products, it's data collection," Orman warned workshop participants. "If it's free, you are the product."
Weak data security threatens operations. Free AI platforms often store user data, images, and prompts, potentially exposing your sensitive investigative information or compromising ongoing operations. You need platforms that guarantee data isn't saved or shared, a guarantee that typically requires paid, enterprise-grade solutions.
Policy gaps create legal vulnerabilities. Courts, prosecutors, and defense attorneys are increasingly sophisticated about AI capabilities and limitations. If you deploy these tools without clear policies and legal backing, you risk having evidence challenged or cases dismissed.
Transparency gaps undermine trust. Community members deserve to understand how decisions affecting them are made. You must ensure your AI systems can provide clear explanations for their recommendations and document your decision-making process so you can explain both AI inputs and human reasoning to stakeholders.
The Policy Foundation: Rules Before Tools
The most critical insight from early AI adoption in law enforcement is that you must establish policy before implementation.
"We tell everybody, as long as it's within your policies and procedures, follow your standard operating procedures. When it comes to your legal, your prosecutors, have them on board if you decide to use it," Orman said.
Build Responsible Implementation Practices
You can ensure successful AI integration by following the same systematic approach you would use for any major operational change.
Establish clear policies before deployment. Define when AI assistance is appropriate, what level of human oversight is required, and how you'll handle system failures.
Invest in ongoing education. AI capabilities evolve rapidly, and your officers need to understand both current limitations and emerging possibilities.
Maintain community engagement. Proactively communicate about AI use in your operations. Explain what these tools do, what safeguards you've implemented, and how community members can provide feedback.
Your Next Steps
If you're considering AI image generation adoption, start with these concrete actions:
- Conduct a security assessment. Evaluate platforms for data protection guarantees before considering any implementation. Prioritize enterprise solutions that don't store or share your data.
- Develop clear use policies. Establish specific protocols for when and how officers can use AI image generation tools. Include requirements for documentation and human oversight.
- Train your legal team. Ensure prosecutors and legal counsel understand AI capabilities and limitations before you deploy these tools in investigations.
- Create transparency protocols. Develop procedures for explaining AI use to community members, courts, and other stakeholders when these tools assist in decisions affecting individuals.
The goal isn't to avoid AI image generation—these tools offer substantial benefits for public safety operations. Instead, you can ensure that adoption enhances rather than undermines your core responsibility of protecting and serving your community effectively and equitably.
The full series on AI and Law Enforcement is here.
The AI and Law Enforcement Workshop Series is cohosted by InnovateUS in partnership with the State of New Jersey, the AI Center of Excellence for Justice, Public Safety, and Security, and the Rutgers Miller Center on Policing and Community Resilience.
The next session, "AI Policy Development," takes place September 25, and will be led by Peter Lambrinakos from the University of Ottawa's Professional Development Institute. Register here.