Show simple item record

Author: Yigit, Yavuz
Author: Ozbek, Asım Enes
Author: Dogru, Betul
Author: Gunay, Serkan
Author: Al Kahlout, Baha Hamdi
Available date: 2026-01-13T10:48:59Z
Publication Date: 2025
Publication Name: International Journal of Emergency Medicine
Resource: Scopus
Identifier: http://dx.doi.org/10.1186/s12245-025-00895-3
Citation: Yigit, Y., Ozbek, A.E., Dogru, B. et al. Evaluating large language models for renal colic imaging recommendations: a comparative analysis of Gemini, copilot, and ChatGPT-4.0. Int J Emerg Med 18, 123 (2025). https://doi.org/10.1186/s12245-025-00895-3
ISSN: 1865-1372
URI: http://hdl.handle.net/10576/69294
Abstract: Background: The field of natural language processing (NLP) has evolved significantly since its inception in the 1950s, with large language models (LLMs) now playing a crucial role in addressing medical challenges. Objectives: This study evaluates the alignment of three prominent LLMs (Gemini, Copilot, and ChatGPT-4.0) with expert consensus on imaging recommendations for acute flank pain. Methods: A total of 29 clinical vignettes representing different combinations of age, sex, pregnancy status, likelihood of stone disease, and alternative diagnoses were posed to the three LLMs (Gemini, Copilot, and ChatGPT-4.0) between March and April 2024. Responses were compared to the consensus recommendations of a multispecialty panel. The primary outcome was the rate of LLM responses matching the majority consensus. Secondary outcomes included alignment with consensus-rated perfect (9/9) or excellent (8/9) responses and agreement with any of the nine panel members. Results: Gemini aligned with the majority consensus in 65.5% of cases, compared to 41.4% for both Copilot and ChatGPT-4.0. In scenarios rated as perfect or excellent by the consensus, Gemini showed 69.5% agreement, significantly higher than Copilot and ChatGPT-4.0, both at 43.4% (p = 0.045 and p < 0.001, respectively). Overall, Gemini demonstrated an agreement rate of 82.7% with any of the nine reviewers, indicating superior capability in addressing complex medical inquiries. Conclusion: Gemini consistently outperformed Copilot and ChatGPT-4.0 in aligning with expert consensus, suggesting its potential as a reliable tool in clinical decision-making. Further research is needed to enhance the reliability and accuracy of LLMs and to address the ethical and legal challenges associated with their integration into healthcare systems.
Language: en
Publisher: BioMed Central Ltd
Subject: ChatGPT-4.0
Subject: Copilot
Subject: Gemini
Subject: Imaging recommendations
Subject: Large language models (LLMs)
Subject: Natural language processing (NLP)
Subject: Renal colic
Title: Evaluating large language models for renal colic imaging recommendations: a comparative analysis of Gemini, copilot, and ChatGPT-4.0
Type: Article
Issue Number: 1
Volume Number: 18
dc.accessType: Open Access

