
    Comparative performance of ChatGPT, Gemini, and final-year emergency medicine clerkship students in answering multiple-choice questions: implications for the use of AI in medical education

    View/Open
    s12245-025-00949-6.pdf (1.379Mb)
    Date
    2025-08-07
    Author
    Al-Thani, Shaikha Nasser
    Anjum, Shahzad
    Bhutta, Zain Ali
    Bashir, Sarah
    Majeed, Muhammad Azhar
    Khan, Anfal Sher
    Bashir, Khalid
    Abstract
    Background: The integration of artificial intelligence (AI) into medical education has gained significant attention, particularly with the emergence of advanced language models such as ChatGPT and Gemini. While these tools show promise for answering multiple-choice questions (MCQs), their efficacy in specialized domains such as the Emergency Medicine (EM) clerkship remains underexplored. This study aimed to evaluate and compare the accuracy of ChatGPT, Gemini, and final-year EM students in answering text-only and image-based MCQs, to assess AI's potential as a supplementary tool in medical education.

    Methods: In this proof-of-concept study, a comparative analysis was conducted using 160 MCQs from an EM clerkship curriculum, comprising 62 image-based and 98 text-only questions. The performance of the free versions of ChatGPT (4.0) and Gemini (1.5), as well as that of 125 final-year EM students, was assessed. Responses were categorized as "correct", "incorrect", or "unanswered". Statistical analysis was performed using IBM SPSS Statistics (Version 26.0) to compare accuracy across groups and question types.

    Results: Significant performance differences were observed across the three groups (χ² = 42.7, p < 0.001). Final-year EM students demonstrated the highest overall accuracy at 79.4%, outperforming both ChatGPT (72.5%) and Gemini (54.4%). Students excelled in text-only MCQs, with an accuracy of 89.8%, and performed robustly on image-based questions (62.9%). ChatGPT showed strong performance on text-only items (83.7%) but reduced accuracy on image-based questions (54.8%). Gemini performed moderately on text-only questions (73.5%) but struggled with image-based content, achieving only 24.2% accuracy. Pairwise comparisons confirmed that students outperformed both AI models across all formats (p < 0.01), with the widest gap observed on image-based questions between students and Gemini (+38.7 percentage points). All AI "unable to answer" responses were treated as incorrect for analysis.

    Conclusion: This proof-of-concept study demonstrates that while AI shows promise as a supplementary educational tool, it cannot yet replace traditional training methods, particularly in domains requiring visual interpretation and clinical reasoning. ChatGPT's strong performance on text-based questions highlights its utility, but its limitations in image-based tasks emphasize the need for improvement. Gemini's lower accuracy further highlights the challenges current AI models face in processing visually complex medical content. Future research should focus on enhancing AI's multimodal capabilities to improve its applicability in medical education and assessment.
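    The group comparison reported in the Results is a standard chi-square test of independence on correct/incorrect counts. As a rough illustration only (not the authors' analysis, which was run in SPSS), the sketch below reconstructs approximate counts from the reported overall accuracies and the 160-item test and runs the comparison with SciPy; the paper's exact contingency structure (e.g., how unanswered items were tabulated) is not given here, so the statistic need not reproduce χ² = 42.7.

```python
# Minimal sketch of a chi-square comparison of answering accuracy across three
# groups. Counts are reconstructed from the reported accuracies (assumption),
# not taken from the study's underlying data.
from scipy.stats import chi2_contingency

# Rows: EM students, ChatGPT, Gemini; columns: correct, incorrect (160 MCQs each)
observed = [
    [127, 33],   # students: ~79.4% of 160
    [116, 44],   # ChatGPT:  ~72.5% of 160
    [87, 73],    # Gemini:   ~54.4% of 160
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4g}")
```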
    URI
    https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=105012752173&origin=inward
    DOI/handle
    http://dx.doi.org/10.1186/s12245-025-00949-6
    http://hdl.handle.net/10576/68187
    Collections
    • Medicine Research [1932 items]

    Qatar University Digital Hub is a digital collection operated and maintained by the Qatar University Library and supported by the ITS department