A study by pharmacists found that nearly three-fourths of the answers OpenAI’s ChatGPT provided to a series of drug-related questions were incomplete or inaccurate.
Researchers at the American Society of Health-System Pharmacists conducted the study challenging ChatGPT, which uses generative artificial intelligence to produce responses based on internet data. The study drew on real questions posed to Long Island University’s College of Pharmacy drug information service over a 16-month period in 2022 and 2023. The findings were presented at the ASHP’s Midyear Clinical Meeting.
Pharmacists initially researched and answered 45 questions, and these responses served as the standard against which ChatGPT’s answers were evaluated. Six questions were excluded due to a lack of literature for data-driven responses, leaving 39 questions for ChatGPT to address.
The study revealed that ChatGPT provided satisfactory answers to only 10 of the 39 questions. Among the remaining 29, there were 11 instances where ChatGPT’s response did not directly address the question, 10 cases of inaccurate information, and 12 incomplete answers; because these counts exceed 29, some responses fell into more than one category. The researchers also asked ChatGPT to include references in its responses, and while it complied in eight instances, every reference it provided was non-existent, according to the study.
Sara Grossman, PharmD, the lead author of the study and an associate professor of pharmacy practice at Long Island University, advised caution regarding the use of ChatGPT as an authoritative source for medication-related information. The study’s findings highlight the importance of relying on established and verified sources, particularly in the healthcare field, where accuracy and reliability are paramount for patient safety and well-being.
