Public service professionals learn how to write generative AI use policy

By Jess Silverman
June 21, 2023

In his second workshop with InnovateUS, Boston’s Chief Information Officer, Santiago (Santi) Garces, explained to public service professionals how he developed the Boston guidelines on using generative artificial intelligence (AI) in public service.

Generative AI is a set of new technologies that leverage large volumes of data and machine learning (ML) techniques to produce content based on user inputs known as prompts. As generative AI becomes more prevalent in society, governments need to learn how to manage its use within their own operations. Studies have found that tools like ChatGPT can be highly useful to the people who work with them, and with these tools so widely available, it is important to have guidelines in place to prevent bias and misuse.

“When we have technology that is so pervasive throughout society, it’s kind of hard to ignore it from a government perspective because it’s just out there,” Garces said. “ … The benefits are too valuable and the risks are too pervasive to ignore [generative AI].”

Responsible experimentation

In crafting the City of Boston’s AI policy for employees, Garces explained that his team took a responsible experimentation approach. A policy that was too rigid could quickly become ineffective as the technology changes, so he wanted his team’s outlook to be cautious but flexible.

“We said, let’s create an environment where we can work as a society, as a community … to create an environment of collective learning, where we are controlling the risks,” he said.  “We won’t be able to understand the risk unless we’re experimenting.”

Garces emphasized several considerations to take into account when creating a policy for your jurisdiction: unintended consequences, constituents’ trust, time and money constraints, and the environmental costs of generative AI. To create the most effective policies, public service professionals should value people, transparency, innovation and risk management, and public purpose.

As a reminder to participants, Garces stressed that at the end of the day, generative AI is a tool, similar to a spreadsheet or spellcheck, that supports us in our work. Tools serve people, and as long as we hold the people behind those tools accountable, the potential for harm is limited. Therefore, AI-use policy should put trust in people, especially public service professionals.

“There’s a lot of wisdom in our public servants, and it is when we help support them that we get the best outcomes,” he said. 

Crafting effective guidelines

AI-use guidelines should be as simple as possible to encourage accessibility and empower our communities. Garces noted the many uses of AI tools and that not everyone will be using AI for malicious purposes. For example, community groups may use AI to draft research points for their organization and anticipate opposing views they may encounter in their field.

The three guidelines Garces stressed the most were:

  1. Fact-check and review all content generated by AI, especially if it will be used in public communication or decision-making. 
  2. Disclose that you have used AI to generate the content.
  3. Do not share sensitive or private information in the prompts.

He also reminded participants that they are responsible for their work, even if it is AI-generated. It is important not to share private information in your prompts, as prompts are sent to third-party servers and may be accessible to the creators of these tools.

“[AI] is a tool,” Garces said. “Public servants are responsible for the good and the bad that comes with the usage of the tool. If Bard or ChatGPT puts out something that you use, it’s you using it. You are going to be held responsible.”

AI applications

AI applications carry different levels of risk. Garces mentioned that his department uses generative AI for writing job descriptions. Using Bard, he prompted the tool to write a job description for a senior manager of software engineering role, then used follow-up prompts to refine the output and tailor it toward what he was looking for.

 

[Screenshot: Bard-generated draft job description for a senior manager of software engineering]
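For readers who want to try the same prompt-then-refine workflow programmatically, here is a minimal sketch. The workshop demo used Bard’s web interface, so this is only an illustration: it assumes an OpenAI-style chat API as a stand-in, and the model name and prompt wording are placeholders, not the exact prompts Garces used.

```python
# Illustrative sketch only: the workshop demo used Bard's web interface, not an API.
# Assumes the OpenAI Python SDK (v1.x) as a stand-in; model and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Keep the whole conversation so follow-up prompts refine the earlier draft.
messages = [
    {"role": "user",
     "content": "Draft a job description for a Senior Manager of Software Engineering "
                "at a city government IT department."},
]

first_draft = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant", "content": first_draft.choices[0].message.content})

# Follow-up prompt that tailors the initial output, mirroring the workshop demo.
messages.append({"role": "user",
                 "content": "Shorten the draft, emphasize experience with resident-facing "
                            "digital services, and list required qualifications as bullets."})

revised_draft = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(revised_draft.choices[0].message.content)

# Per the guidelines above: fact-check the output, disclose that AI was used,
# and never include sensitive or private information in prompts.
```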

 

Some other AI use examples include:

Low risk
  1. Writing a draft of a job description
  2. Summarizing notes
  3. Support in translation
Medium risk
  1. Job skills analysis
  2. Making interactive quizzes and other training materials
  3. Support in developing code or analysis
Higher risk
  1. Community meeting simulation
  2. Drafting policy proposals or talking points

Garces shared some final tips for creating generative AI guidelines:

  1. Get feedback from others (academics, community partners, etc.)
  2. Iterate incrementally
  3. Create mechanisms for participation
  4. Host training sessions
  5. Continue to talk with others, be creative and learn

Following the presentation, participants were able to engage with Garces in a 30-minute Q&A session. 

Feedback 

In a survey issued after the workshop, 100% of participants said that they would recommend the training to a friend or colleague and 88% felt they could use what they learned in their work. Here is what some of them had to say!

“Santi shared thinking and approach to policy that was thoughtful, experiential, and based on consulting people within different fields. The format was interactive, encouraging thinking about different viewpoints and exploring generative AI tools. Folks participating were polite, curious, kind.” -Advanced career (20+ years) Public Service Professional 

“So many of our state agencies are in the process of drafting responsible use policies, and it is helpful to have a governmental body that has already created a policy to point to!”  -Information Technology Public Service Professional from Texas, advanced career (20+ years)

“I am still new to AI and this was very easy to digest for me.” -Administration Public Service Professional from New Jersey, early career (less than 10 years).

You can watch the recorded workshop here! Make sure to sign up for our mailing list here to stay up to date on all things InnovateUS!

Want to be a part of our community of innovators?

We’d love to keep in touch!


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.