Following the successful What the Heck is ChatGPT? How Generative AI Will Impact Government workshop, which drew over 250 participants, Alexis Bonnell returned to lead a follow-up Q&A session with 68 public service professionals about AI in government.
Bonnell was one of the founding members of the Internet’s original Trade Association, which helped companies understand the impact that the Internet, innovation, and technology would have on business, customers, and society. Formerly the “Emerging Technology Evangelist for Public Sector and Strategic Business Executive” at Google, Bonnell dedicated her time to helping public servants catalyze their missions with technology and solve the world’s toughest challenges, including digital transformation in healthcare, education, COVID response, natural disaster response, defense, benefits, and service institutions. While at USAID, Bonnell led transformation and knowledge management in the Management Bureau, was the first Telework Executive, and served as the Chief of Engagement for Education. She co-founded USAID’s U.S. Global Development Lab and served as the Chief of Applied Innovation and Acceleration.
Bonnell, with the help of Google AI experts Eden Canlilar and Rajat Gupta, first addressed the audience’s questions from the previous workshop before moving on to questions asked through Slido and topics attendees were curious about coming into the session.
The session kicked off with a discussion about privacy concerns with AI, especially as it is being used in government. Bonnell stressed that if public service professionals are using AI like ChatGPT or Bard in their work, they must keep in mind that these services are public.
“You don’t want to put in anything that is proprietary, anything that you wouldn’t normally want out there in those models,” she said.
Bonnell advised attendees to use a personal email address, rather than their government account, to set up a ChatGPT or Bard account. To practice caution in queries, she told participants to be mindful about what information they put in and to talk with their agencies about privacy more generally.
Another participant asked how we should think about bias in AI and how to mitigate it in these systems’ outputs. Bonnell explained that bias exists in all people, and because language models are built on human-generated material, AI shapes its outputs around established assumptions. Public service professionals must be aware that there will be bias regardless of the query, and need to actively look for where it shows up.
“The way I look at generative AI is actually assuming there’s already bias there. I actually go looking for it,” she said. “A lot of us assume that the bias is starting with the AI and AI has the potential to propagate it, but it’s often because [bias] is already there. Assume it’s there, look for it, and find ways to counter it.”
Bonnell encouraged participants to look at AI as an opportunity for constituents to engage with government in ways they have not been able to previously, which in turn counters existing biases. With AI chatbots, constituents who cannot take time off from work can access government services outside of business hours, when human staff would not usually be available. This expanded accessibility helps limit financial, diversity, and poverty-related bias.
Bonnell also discussed how to incorporate bias checks into queries to ensure more inclusive outputs. For example, an individual could ask the language model to recommend a career path while taking a person’s disability into account.
Many public service professionals asked Bonnell to address their fears about the future of the workforce and employment as the use of AI expands. She argued that we should not be pessimistic about AI, but rather view the growing technology as an opportunity to move to higher cognitive ground in our career paths.
“In this exponential age where things are constantly changing, curiosity is my most critical survival trait. Curiosity is a trait that allows us to be open to adapting,” she said. “If we know the future of work is going to change, the question is what do we do to be ready for it? The way I’ve chosen to use generative AI is to use it as a tool to give me the ability to experiment and be curious in new and powerful ways.”
Bonnell suggested using these technologies more often to ease anxieties about the unknown, and encouraged attendees to take this time to get to know the tools and build a relationship with them.
Another common question among participants was how to use AI in their day-to-day work to complete tasks more efficiently. Bonnell listed several ways she has used AI in her work, from creating a packing list for a work trip to drafting a speech for an event or a press release. However, she explained that most of the time she uses AI simply to learn.
“I use it to learn about AI. I use it to construct and get over writer’s block in a lot of ways,” Bonnell said. “When someone asks me for a policy recommendation I may ask AI for what is already out there and then apply my own experience to my research.”
Participants listed several ways they have used AI in their work, including generating PowerPoint presentations, programming in coding languages such as R, and setting meeting agendas, among others.
Participants were also curious about whether they should risk using AI in fields where they are not experts. One individual asked Bonnell how to check AI’s work when one isn’t an expert and cannot automatically spot mistakes.
“If I know something, the level of sophistication in which I curate the query is going to be higher than if I don’t know it … For me I will cross-check by comparing one language model to another. I always validate sources and facts before I would put out anything publicly,” she said.
Another strategy she suggested is asking for the sources of the facts AI generates. Large language models often draw on expert material, but it is still important to exercise discretion when using their output in your work. This can include asking the AI follow-up questions after receiving the work it has generated, which Bonnell refers to as “pressure testing.”
Rajat Gupta, a Customer Engineer at Google, added to Bonnell’s comments by explaining the limits of AI and its capacity to process certain materials. Usually, AI can only process about 10 to 12 pages of written documentation at a time, which equates to about 6,000 words. To work around this limit, an individual can do recursive summarization or use services and products being developed to handle it automatically. Google, for example, is working on a product that will let an individual upload a 300-page PDF for an AI language model to process and summarize.
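As a rough illustration of the recursive approach Gupta described, the sketch below splits a long document into chunks that fit within a model’s limit, summarizes each chunk, and then summarizes the combined summaries. It is a minimal sketch, not the product he mentioned: the caller-supplied `summarize` function and the roughly 3,000-word chunk size are assumptions for illustration.

```python
# A minimal sketch of recursive summarization (assumptions: `summarize` is any
# caller-supplied function that sends text to a language model and returns a
# summary string; the ~3,000-word chunk size is illustrative).

def chunk_text(text: str, max_words: int = 3000) -> list[str]:
    """Split a long document into chunks of at most `max_words` words."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def recursive_summarize(text: str, summarize, max_words: int = 3000) -> str:
    """Summarize text longer than a model's limit by summarizing each chunk,
    then summarizing the combined chunk summaries until the result fits."""
    if len(text.split()) <= max_words:
        return summarize(text)
    chunk_summaries = [summarize(chunk) for chunk in chunk_text(text, max_words)]
    return recursive_summarize(" ".join(chunk_summaries), summarize, max_words)
```

The same pattern applies whether the chunks come from a long PDF, a transcript, or any other document too large to fit in a single query.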
Other discussion topics included the future of project management with AI, citing references and sources through AI, using AI for higher education, and AI resources. Bonnell and her team compiled a document of tools for public service professionals to use in their work, which can be found here. These resources include InnovateUS’ hands-on tutorial, which introduces how to use generative AI to work more efficiently while improving governance, community engagement, and responsiveness to residents. Additional links include tools for identifying AI-generated content and a comprehensive list of public AI tools beyond popular chat and answer models.
While there are many existing trainings and documents about AI, Bonnell encouraged participants to keep practicing.
“Long story short, the best training you can do is just start playing with it! Because it will change your relationship and the way you think about not only doing what you’re capable of doing, what your organization is capable of doing, but how these things are relevant to your mission or to your life,” she said.
You can watch the recorded workshop here! Make sure to sign up for additional InnovateUS workshops this summer here as well!
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.