Health Disparity News

Artificial Intelligence in Medicine Risks Reinforcing Health Disparities

A recent National Library of Medicine (NLM) lecture explored the limitations and ethical concerns surrounding artificial intelligence in healthcare, particularly in cancer diagnosis. AI ethics expert Meredith Broussard emphasized that while AI can support medical professionals, it should not be relied upon as a standalone diagnostic tool.
 
Broussard shared her personal experience with breast cancer to illustrate the current capabilities of AI in medical imaging. Rather than delivering a detailed analysis, the AI algorithm she encountered simply highlighted an area of concern on her mammogram; a physician's follow-up was still required for an actual diagnosis.
 
The lecture stressed that AI is best suited to low-stakes, routine tasks rather than high-stakes medical decisions. Broussard warned against viewing algorithms as neutral or all-knowing, noting that they reflect the data they are trained on, which can perpetuate existing social inequities in healthcare.
 
To address potential biases, Broussard advocated for collaboration among humanities scholars, social scientists, technologists, biomedical researchers, and clinicians. She highlighted the importance of “algorithmic auditing” to evaluate risks associated with AI use in specific contexts.
 
While the lecture did not explicitly focus on racial disparities, Broussard’s emphasis on how AI can reflect and reinforce societal biases suggests that underrepresented groups could be particularly vulnerable to algorithmic bias in healthcare settings.
 
The NLM’s ongoing lecture series aims to raise awareness about the societal and ethical implications of advanced technologies in biomedical research, contributing to efforts to harness AI’s potential while mitigating associated risks.
 