Artificial intelligence (AI)-generated large language model (LLM) tools such as ChatGPT, BERT, and Bard have gained much public attention for their use in health-related purposes. The World Health Organization (WHO) shared its enthusiasm for the “appropriate” use of these technologies. However, it called for caution to protect and promote human well-being, safety, and autonomy, and to preserve public health.
These LLM platforms have been expanding rapidly as users take advantage of features that imitate the understanding, processing, and production of human communication. Their growing experimental use for health-related purposes is generating excitement about their potential to support users' health needs, the WHO reported in a release in May.
If used appropriately, LLMs can support healthcare professionals, patients, researchers, and scientists. But there are risks, and the WHO stressed that these risks must be examined carefully when LLMs are used to improve access to health information or to enhance diagnostic capacity, in order to protect users' health and reduce inequity. There is concern that the caution normally exercised for any new technology is not being exercised consistently with LLMs. This includes widespread adherence to key values of transparency, inclusion, public engagement, expert supervision, and rigorous evaluation, according to the release.
Abrupt adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI, and delay any potential long-term benefits or uses of these tools globally.
The concerns behind the WHO's call for these technologies to be used in safe, effective, and ethical ways include:
The WHO encouraged these concerns to be addressed, and clear evidence of benefit to be demonstrated, before the tools see widespread use in routine healthcare and medicine — whether by individuals, care providers, or health system administrators and policy-makers.
Though further evidence is needed to weigh these concerns, results from a study published in JAMA Internal Medicine in April showed that healthcare professionals preferred ChatGPT's responses to patients over physicians' responses.
In the cross-sectional study of 195 randomly drawn patient questions from a social media forum, a team of licensed healthcare professionals compared physicians' and chatbots' responses to patients' questions asked publicly. The chatbot responses were not only preferred but were also rated significantly higher for both quality and empathy.
The study's researchers suggested that these results indicate AI assistants may be able to aid in drafting responses to patient questions.
[This article was originally published by our sister brand, Managed Healthcare Executive.]