Simple item record

Author: Al-Thani, Shaikha Nasser
Author: Anjum, Shahzad
Author: Bhutta, Zain Ali
Author: Bashir, Sarah
Author: Majeed, Muhammad Azhar
Author: Khan, Anfal Sher
Author: Bashir, Khalid
Date available: 2025-10-26T11:34:54Z
Publication date: 2025-08-07
Publication name: International Journal of Emergency Medicine
Identifier: http://dx.doi.org/10.1186/s12245-025-00949-6
Citation: Al-Thani, S. N., Anjum, S., Bhutta, Z. A., Bashir, S., Majeed, M. A., Khan, A. S., & Bashir, K. (2025). Comparative performance of ChatGPT, Gemini, and final-year emergency medicine clerkship students in answering multiple-choice questions: implications for the use of AI in medical education. International Journal of Emergency Medicine, 18(1), 146.
ISSN: 1865-1372
URI: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=105012752173&origin=inward
URI: http://hdl.handle.net/10576/68187
Abstract:
Background: The integration of artificial intelligence (AI) into medical education has gained significant attention, particularly with the emergence of advanced language models such as ChatGPT and Gemini. While these tools show promise for answering multiple-choice questions (MCQs), their efficacy in specialized domains such as the Emergency Medicine (EM) clerkship remains underexplored. This study aimed to evaluate and compare the accuracy of ChatGPT, Gemini, and final-year EM students in answering text-only and image-based MCQs, to assess AI's potential as a supplementary tool in medical education.
Methods: In this proof-of-concept study, a comparative analysis was conducted using 160 MCQs from an EM clerkship curriculum, comprising 62 image-based and 98 text-only questions. The performance of the free versions of ChatGPT (4.0) and Gemini (1.5), as well as that of 125 final-year EM students, was assessed. Responses were categorized as "correct", "incorrect", or "unanswered". Statistical analysis was performed using IBM SPSS Statistics (Version 26.0) to compare accuracy across groups and question types.
Results: Significant performance differences were observed across the three groups (χ² = 42.7, p < 0.001). Final-year EM students demonstrated the highest overall accuracy at 79.4%, outperforming both ChatGPT (72.5%) and Gemini (54.4%). Students excelled in text-only MCQs, with an accuracy of 89.8%, and performed robustly on image-based questions (62.9%). ChatGPT showed strong performance on text-only items (83.7%) but reduced accuracy on image-based questions (54.8%). Gemini performed moderately on text-only questions (73.5%) but struggled significantly with image-based content, achieving only 24.2% accuracy. Pairwise comparisons confirmed that students outperformed both AI models across all formats (p < 0.01), with the widest gap observed on image-based questions between students and Gemini (+38.7 percentage points). All AI "unable to answer" responses were treated as incorrect for analysis.
Conclusion: This proof-of-concept study demonstrates that while AI shows promise as a supplementary educational tool, it cannot yet replace traditional training methods, particularly in domains requiring visual interpretation and clinical reasoning. ChatGPT's strong performance on text-based questions highlights its utility, but its limitations in image-based tasks emphasize the need for improvement. Gemini's lower accuracy further highlights the challenges current AI models face in processing visually complex medical content. Future research should focus on enhancing AI's multimodal capabilities to improve its applicability in medical education and assessment.
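As an illustration of the group-level comparison the abstract describes, the short Python sketch below runs a chi-squared test of independence on a correct/incorrect contingency table. This is a hedged reconstruction, not the authors' analysis: the counts are back-calculated from the reported overall accuracies over 160 MCQs, and the paper's own SPSS tabulation (which yields χ² = 42.7, likely from per-student response data) evidently uses a different table, so the sketch will not reproduce that exact statistic.

    # Illustrative sketch only; the counts are back-calculated from the
    # abstract's reported overall accuracies (79.4%, 72.5%, 54.4% of 160 MCQs)
    # and are assumptions, not data taken from the paper.
    from scipy.stats import chi2_contingency

    # Rows: students, ChatGPT, Gemini. Columns: correct, incorrect (of 160 MCQs).
    observed = [
        [127, 33],  # students, ~79.4% correct
        [116, 44],  # ChatGPT, 72.5% correct
        [87, 73],   # Gemini, ~54.4% correct
    ]

    # Test whether accuracy is independent of responder group.
    chi2, p, dof, _expected = chi2_contingency(observed)
    print(f"chi-squared = {chi2:.1f}, dof = {dof}, p = {p:.3g}")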
Language: en
Publisher: Springer Nature
Subject: artificial intelligence (AI)
Subject: large language model (LLM)
Title: Comparative performance of ChatGPT, Gemini, and final-year emergency medicine clerkship students in answering multiple-choice questions: implications for the use of AI in medical education
Type: Article
Issue number: 1
Volume number: 18
ESSN: 1865-1380
dc.accessType: Open Access

