ChatGPT - A double-edged sword for healthcare education? Implications for assessments of dental students
Author | Ali, Kamran |
Author | Barhom, Noha |
Author | Tamimi, Faleh |
Author | Duggal, Monty |
Available date | 2023-08-29T06:20:18Z |
Publication Date | 2023-01-01 |
Publication Name | European Journal of Dental Education |
Identifier | http://dx.doi.org/10.1111/eje.12937 |
Citation | Ali, K, Barhom, N, Tamimi, F, Duggal, M. ChatGPT—A double-edged sword for healthcare education? Implications for assessments of dental students. Eur J Dent Educ. 2023; 00: 1-6. doi:10.1111/eje.12937 |
ISSN | 1396-5883 |
Abstract | Introduction: Open-source generative artificial intelligence (AI) applications are rapidly transforming access to information, allowing students to prepare assignments and generate reasonably accurate responses to a wide range of examination questions routinely used in student assessments across the board, including those of undergraduate dental students. This study aims to evaluate the performance of Chat Generative Pre-trained Transformer (ChatGPT), a generative AI-based application, on a wide range of assessments used in contemporary healthcare education and discusses the implications for undergraduate dental education. Materials and Methods: This was an exploratory study investigating the accuracy of ChatGPT in attempting a range of recognised assessments in healthcare education curricula. The research team developed a total of 50 independent items encompassing 50 different learning outcomes, with 10 items (n = 10) based on each of five commonly used question formats: multiple-choice questions (MCQs); short-answer questions (SAQs); short essay questions (SEQs); single true/false questions; and fill-in-the-blanks items. ChatGPT was used to attempt each of these 50 questions. In addition, ChatGPT was used to generate reflective reports based on multisource feedback, research methodology and critical appraisal of the literature. Results: ChatGPT provided accurate responses to the majority of knowledge-based assessments based on MCQs, SAQs, SEQs, true/false and fill-in-the-blanks items. However, it was only able to answer text-based questions and did not allow processing of questions based on images. Responses generated for written assignments were also satisfactory, apart from those for critical appraisal of the literature. Word count was the key limitation observed in outputs generated by the free version of ChatGPT.
Conclusion: Notwithstanding their current limitations, generative AI-based applications have the potential to revolutionise virtual learning. Rather than treating it as a threat, healthcare educators need to adapt teaching and assessments in medical and dental education to the benefit of learners while mitigating against dishonest use of AI-based technology. |
Language | en |
Publisher | Wiley |
Subject | artificial intelligence; ChatGPT; dental education; education technology; machine learning |
Type | Article |
ESSN | 1600-0579 |
Collection(s) | Dental Medicine Research |