Learning the importance of applying human-centered design to government AI projects

By Jess Silverman
March 25, 2024

In a workshop held on February 29, Elham Ali, a researcher at the Beeck Center for Social Impact and Innovation, discussed how to apply human-centered design to government artificial intelligence (AI) projects. With over 200 participants in the live session, attendees learned critical approaches for integrating the principles of human-centered design and equity into AI design, use, and evaluation.

Ali has over 10 years of experience in UX research, data analysis, program evaluation, and human-centered design in public health and civic technology, and has worked with local and state governments, public health agencies, startups, and civic technology groups to uncover behavioral insights about their users through evidence-based research. Some of the clients she’s worked with include the Pima County Health Department, the New York State Department of Health AIDS Institute, the City of Los Angeles, US Digital Response, and Food + Planet, among others.

The presentation began with a brief history of artificial intelligence, noting that rule-based AI programs date back to the 1980s. From there, Ali listed several use cases showing how both general and generative AI are incorporated into public sector operations. For example, the City of Amarillo, Texas, uses a “digital human” to help residents and newcomers navigate City Hall in several languages. These technologies have also helped with verifying mail-in ballot signatures.

Ali noted the importance of making the distinction between general AI and generative AI. 

“[With general AI] we can follow the path and the logic of what’s happening, which is very different from generative AI,” she said. “[Generative AI] has a lot of possibilities, it’s not deterministic, and it also has different innovation potential.”

She stressed that machines alone cannot become moral agents, which is why both general and generative AI need human-centered design and intervention to ensure accountability. 

With this understanding in place, Ali then provided examples of human-centered design in public service, such as using journey mapping to personalize the experiences of veterans and care-seekers with the Department of Veterans Affairs. However, human-centered design can also present challenges.

“The problem with human-centered design on its own is that it has a problem with scaling, especially with large volumes of data when it comes to the human experience in context,” she said.

Human-centered AI (HCAI), however, is about ensuring that what is built begins and ends with people in mind. 

“It also means we are thinking about people early in the research, decision making, and behaviors, and using that understanding to drive all technical decisions of a system,” Ali said.

Ali then presented an example of a challenge faced by the Washington State Board of Health and U.S. Digital Response. The state considered adding the COVID-19 vaccine to the list of requirements for school entry. The board received over 30,000 emails and 50,000 community comments. With only two people on the team to cover the requests, analyzing all the responses was impossible. This led to a need for an approach that would address both the scale and volume of the data and the human context of the language.


Ali and her colleagues first empathized and hypothesized about how AI could or could not solve user needs. Using the HCAI framework, the team then worked to define their data by performing sentiment and text/content analysis, defining a list of names and keywords, and doing an early scan of comments and survey responses.
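The article does not publish the team’s actual tooling or lexicon, but this define step, an early keyword-and-context scan over public comments, can be sketched in a few lines of Python. The keyword lists and sample comments below are illustrative assumptions, not the team’s real data:

```python
import re
from collections import Counter

# Hypothetical keyword lists. The actual lexicon the team built
# (names, beliefs, values terms) was not published, so these are
# illustrative stand-ins only.
BELIEF_TERMS = {"protect", "protection", "safety"}
VALUE_TERMS = {"freedom", "choice", "agency", "mandate"}

def scan_comments(comments):
    """Early scan: count how often each tracked term appears and keep
    a short context window around each hit, so a human reviewer can
    judge the sentiment of the keyword in context."""
    counts = Counter()
    contexts = []
    tracked = BELIEF_TERMS | VALUE_TERMS
    for comment in comments:
        words = re.findall(r"[a-z']+", comment.lower())
        for i, word in enumerate(words):
            if word in tracked:
                counts[word] += 1
                # Keep +/- 3 words of surrounding context for review.
                contexts.append(" ".join(words[max(0, i - 3):i + 4]))
    return counts, contexts

# Illustrative sample comments, not taken from the real dataset.
comments = [
    "I want the freedom to make this choice for my family.",
    "Vaccines protect our kids and the community.",
    "A mandate takes away parental agency.",
]
counts, contexts = scan_comments(comments)
```

At the real project’s scale (30,000 emails and 50,000 comments), a scan like this narrows the corpus to keyword hits with context, so the two-person team reviews snippets rather than every message.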

Next was the ideate-and-build phase of the process. Ali and her colleagues wanted to figure out which words they were hearing and the sentiment of those keywords in context. Ali stressed the importance of looking for a wide range of keywords, especially those centered around beliefs, values, attitudes, and behaviors. She also wanted participants to understand that AI tools are only as useful as we make them and that HCAI work is grounded in collective labor, where intelligence is distributed.

“We should recognize and speak about AI tools as products of human collaboration and effort rather than as independent entities,” she said. 

When an individual prototypes and deploys their model, it is important to know what they want to optimize it for and why. When making this decision, it is also essential to understand the tradeoffs for choosing one method over another and what impact this will have on the user.

By following this process, Ali and her team arrived at four key insights:

  1. Beliefs: Economic, cognitive, health, and relational barriers exist. Are children symbols of protection? What is the role of precedence?

  2. Values: Perception of limited reciprocity between the Board and residents. Values ranging from fear and risk aversion, to protection of agency, to concerns about infringement of freedom and choice.

  3. Attitudes: Varying degrees of manipulation and spreading of misinformation/disinformation, but also a spectrum of interrogation and critical thinking. Conflicting sources and information overload.

  4. Behaviors: The continuum of vaccine acceptance is reflected in comments, especially in future decision-making.

After testing their final solution, Ali and her colleagues learned that language and learning have a uniquely human element that goes beyond syntax: it also involves the meanings that come from our experiences.

“What happened at the end is this helped the board reconfigure how they communicate with their team and how they design future communication campaigns,” she said.

After summarizing her key findings from this experience, Ali concluded the workshop session with a moderated panel of experts from across the industry who had collaborated with her to integrate human-centered design with generative AI in a community mobility data project. The panelists were Konner Petz, Senior Mobility Strategist at the Office of Mobility Innovation for the City of Detroit; Arena Johnson, Climate Equity Project Coordinator at Eastside Community Network; and Anuradha Bajpai, a pro-bono technologist at Google.org.

The three experts shared their experience collaborating with Ali in the City of Detroit. The City’s mobility data was not centralized, making it difficult for residents and advocates to use it to create new programs and initiatives.

The city administration also faced difficulties making informed decisions on community mobility issues because the data was disorganized. By working through the HCAI framework, the panelists learned the problem was much more multifaceted than they had originally thought. Petz and Johnson shared that they also organized a community demonstration day to present the GenAI prototype to city staff and community advocates and gather their feedback.

One major takeaway from the process, Petz said, was learning how to manage stakeholder expectations of AI’s capabilities. 

To conclude the panel discussion, Ali asked her colleagues how the community reacted to their prototype HCAI framework.

“We realized we had to go back to the drawing board,” Johnson said. “We realized that a lot of the things we needed to work on were like adding different reports to the dashboard and to the tool for it to be a little more fleshed out for folks to understand.”

Overall, the workshop served to inform participants that collective labor is the basis of HCAI work, and that community co-design is essential when working on government AI projects.

To watch the workshop recording, click here, and to sign up for a future InnovateUS workshop, click here.

Want to be a part of our community of innovators?

We’d love to keep in touch!


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.