In recent years, generative AI has become a major player in both healthcare and public health. Clinicians use it to draft clinical notes and patient messages, and public health departments use it to tailor health communications to specific communities. But there’s a catch: these tools are often opaque and built without input from the very communities they aim to serve, a significant problem for groups already facing systemic inequities.
Healthcare is deeply personal, and decisions about AI should include the voices of those affected. In the U.S., healthcare inequities, especially for marginalized groups, are rooted in long-standing systemic problems, and predictive AI has already caused harm. One widely used algorithm underestimated Black patients’ need for follow-up care, resulting in fewer referrals and services than white patients received. The algorithm used health spending as a proxy for health need, so the barriers Black patients face in accessing care, such as lack of insurance and provider bias, caused lower spending to be misread as lower need.
Similarly, people with disabilities were put at risk during the COVID-19 pandemic by AI tools that deprioritized them for treatment. Because these tools were often trained on biased data, they reinforced existing health disparities, and their lack of transparency makes them difficult to hold accountable.
If community members had been involved in developing these algorithms, they could have challenged faulty assumptions. They might have insisted on more equitable metrics or regular assessments to catch unexpected disparities. A community-focused approach could have prioritized fair access over merely cutting costs.
Generative AI is beginning to influence public health as well. The CDC, for instance, has used it to track school closures as a signal of potential outbreaks and to forecast overdose trends. But as public health systems adopt off-the-shelf generative AI tools, community input risks being left out. This moment is an opportunity to build in community governance before these models are fully scaled.
To make AI in health equitable, it’s essential to involve the communities impacted by these systems. Zainab Garba-Sani’s ACCESS AI framework highlights this need for community engagement, underlining the importance of addressing barriers to AI use in healthcare.
At Health Justice, we launched the Grounded Innovation Lab to ensure community accountability and equity in AI. Here, community members can help define training data and evaluate AI systems. Their lived experiences can guide improvements, ensuring the technology is more relevant and effective.
Moreover, we must recognize the environmental costs of AI, especially in marginalized communities already facing health inequities. The placement of energy-intensive data centers often adds another burden to these communities.
As seen in discussions around AI regulation, there is momentum for community-led initiatives in health and technology. We must push for investment in these community-driven approaches to avoid deepening existing inequities. There’s a clear need for health equity in AI design and governance, ensuring that those most affected have a say in the tools they use.
The fight for equitable AI can begin with health. By centering community voices, we can pave the way for a more inclusive, transparent AI landscape that serves everyone better.
Oni Blackstock, M.D., is a health equity advocate and leader at Health Justice. Akinfe Fatou, M.S.W., is a disability justice strategist at Cre8tive Cadence Consulting.