Building an ethics ecosystem for AI and big data

By Jess Silverman
September 16, 2024

In a recent InnovateUS workshop, Dr. John Basl of Northeastern University made a compelling case for developing a comprehensive ethics ecosystem for artificial intelligence (AI) and big data. Drawing parallels with established practices in healthcare, Basl highlighted the current shortcomings in AI ethics and proposed a path forward.

Basl began by outlining four common "pain points" or recurring mistakes in the field of AI ethics:

  1. Misapplication of existing ethical tools - Attempts to apply ethical frameworks from other fields, such as informed consent in healthcare, often fall short in the context of AI and big data.

  2. Technical insensitivity - Ethicists and policymakers sometimes lack the technical understanding necessary to provide effective guidance for AI systems.

  3. (Normative) ethical insensitivity - Technologists may overlook the nuances of ethical principles when developing AI solutions.

  4. Lack of stakeholder engagement - Failure to involve key stakeholders in the development and deployment of AI systems can lead to unintended consequences and public distrust.

Given these recurring mistakes, Basl explained that simply urging individuals to avoid them is unlikely to succeed.

The need for an AI ethics ecosystem

Instead, Basl argued for the creation of a comprehensive "ethics ecosystem" for AI and big data. He defined this as "a coordinated system of components that distributes the task and responsibility of doing ethics."

This ecosystem would consist of several interconnected components:

  1. Ethics Infrastructure: Day-to-day tools, checklists, and protocols for ethical decision-making;

  2. Policy and Regulation: Formal and informal policies and regulations to incentivize the ethics infrastructure and penalize violations;

  3. Outreach, Training, and Education: Programs to build ethical awareness and competence among practitioners so they can work within the ecosystem;

  4. Interdisciplinary Practice, Scholars, and Practitioners: Collaboration between experts from relevant fields to inform decision-making, along with interdisciplinary experts who can facilitate communication across the ecosystem;

  5. Research Programs: Ongoing studies to advance our understanding of AI ethics and translate findings into the ecosystem;

  6. Shared Language and Concepts: A common vocabulary to facilitate communication between stakeholders within the ecosystem.

Basl drew comparisons between this model and existing ethics ecosystems in healthcare, particularly in areas like human subjects research and clinical care. He noted that healthcare has developed robust systems for ethical oversight, including Institutional Review Boards (IRBs) for human subjects research, ethics training for practitioners, a shared language and set of concepts (e.g., "informed consent"), and interdisciplinary collaboration between clinicians, researchers, and ethicists.

The AI field currently lacks many of these components, relying instead on high-level guidelines and voluntary commitments. 

"In the AI case we do not have anything that structured,” Basl said. Nor do we have individual components that are very robust. For example, there is no fully formed ethics infrastructure for AI and Big Data. “The closest thing we have in the US is the NIST AI risk management from the National Institute for Standards and Technology."

The path forward

While building a comprehensive ethics ecosystem for AI will take time, Basl offered some practical steps for organizations and practitioners:

  1. Identify existing tools and guidance that can be adapted to your specific context

  2. Advocate for and invest in human infrastructure – people with the expertise to navigate ethical challenges in AI

  3. Develop local, context-specific ethical frameworks while working towards broader ecosystem development

  4. Engage in ongoing education and training to build ethical competence within teams

“We need to act locally and think globally,” Basl said. “We need to advocate for an ecosystem. We need to make sure we have an eye on what's going on in the different components. At the same time, we need to bring in the human capacity to solve these problems at a local level."

You can watch the recording of Basl’s workshop here. To sign up for future workshops, visit our page here.



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.