Popular AI chatbots like ChatGPT and Google’s Bard are spreading racist and debunked medical ideas, which could exacerbate health disparities for Black patients, according to a new study led by researchers at Stanford School of Medicine.
As hospitals and health systems increasingly turn to artificial intelligence to summarize doctors’ notes and analyze medical records, the study’s findings raise concerns about the potential negative impact on patient care, particularly in communities of color.
The research team, co-led by postdoctoral researcher Tofunmi Omiye, found that when asked health-related questions, the AI models responded with debunked, race-based misconceptions, including false claims about differences in kidney function and lung capacity between Black and white patients. These chatbots, powered by large language models trained on vast amounts of internet text, could reinforce and amplify existing biases in the healthcare system.
“We need to be very cautious about the use of these tools in clinical settings,” said Roxana Daneshjou, an assistant professor at Stanford School of Medicine and faculty advisor for the study. “If left unchecked, they could contribute to worsening health disparities for already underserved populations.”
The study underscores the importance of addressing bias and ensuring equitable representation in the development and deployment of AI technologies in healthcare. As the use of AI chatbots grows, the researchers emphasize the need for rigorous testing, transparency, and collaboration among healthcare providers, technology companies, and diverse patient communities to mitigate potential harms and promote health equity.
See “Health providers say AI chatbots could improve care. But research says some are perpetuating racism,” Associated Press (October 20, 2023).