Simple item record

Author: Ali, Kamran
Author: Barhom, Noha
Author: Tamimi, Faleh
Author: Duggal, Monty
Date Available: 2023-08-29T06:20:18Z
Publication Date: 2023-01-01
Publication Name: European Journal of Dental Education
Identifier: http://dx.doi.org/10.1111/eje.12937
Citation: Ali, K, Barhom, N, Tamimi, F, Duggal, M. ChatGPT—A double-edged sword for healthcare education? Implications for assessments of dental students. Eur J Dent Educ. 2023; 00: 1-6. doi:10.1111/eje.12937
ISSN: 1396-5883
URI: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85167367205&origin=inward
URI: http://hdl.handle.net/10576/46955
Abstract: Introduction: Open-source generative artificial intelligence (AI) applications are rapidly transforming access to information; they allow students to prepare assignments and can generate fairly accurate responses to a wide range of exam questions routinely used in student assessments across disciplines, including undergraduate dental education. This study aims to evaluate the performance of Chat Generative Pre-trained Transformer (ChatGPT), a generative AI-based application, on a wide range of assessments used in contemporary healthcare education and discusses the implications for undergraduate dental education. Materials and Methods: This was an exploratory study investigating the accuracy of ChatGPT in attempting a range of recognised assessments in healthcare education curricula. A total of 50 independent items encompassing 50 different learning outcomes (n = 10 per item) were developed by the research team. These included 10 separate items in each of five commonly used question formats: multiple-choice questions (MCQs); short-answer questions (SAQs); short essay questions (SEQs); single true/false questions; and fill-in-the-blanks items. ChatGPT was used to attempt each of these 50 questions. In addition, ChatGPT was used to generate reflective reports based on multisource feedback; research methodology; and critical appraisal of the literature. Results: The ChatGPT application provided accurate responses to the majority of knowledge-based assessments based on MCQs, SAQs, SEQs, true/false and fill-in-the-blanks items. However, it was only able to answer text-based questions and could not process questions based on images. Responses generated for written assignments were also satisfactory, apart from those for critical appraisal of the literature. Word count was the key limitation observed in outputs generated by the free version of ChatGPT. Conclusion: Notwithstanding their current limitations, generative AI-based applications have the potential to revolutionise virtual learning. Instead of treating it as a threat, healthcare educators need to adapt teaching and assessments in medical and dental education to benefit learners while mitigating dishonest use of AI-based technology.
Language: en
Publisher: Wiley
Subject: artificial intelligence
ChatGPT
dental education
education technology
machine learning
Title: ChatGPT-A double-edged sword for healthcare education? Implications for assessments of dental students
Type: Article
ESSN: 1600-0579

