Simple item record

Author: Plevris, Vagelis
Author: Papazafeiropoulos, George
Author: Jiménez Rios, Alejandro
Date available: 2024-10-02T05:59:50Z
Publication date: 2023
Publication name: AI (Switzerland)
Source: Scopus
ISSN: 2673-2688
URI: http://dx.doi.org/10.3390/ai4040048
URI: http://hdl.handle.net/10576/59670
Abstract: In an age where artificial intelligence is reshaping the landscape of education and problem solving, our study unveils the secrets behind three digital wizards, ChatGPT-3.5, ChatGPT-4, and Google Bard, as they engage in a thrilling showdown of mathematical and logical prowess. We assess the ability of the chatbots to understand the given problem, employ appropriate algorithms or methods to solve it, and generate coherent responses with correct answers. We conducted our study using a set of 30 questions. These questions were carefully crafted to be clear, unambiguous, and fully described using plain text only. Each question has a unique and well-defined correct answer. The questions were divided into two sets of 15: Set A consists of "Original" problems that cannot be found online, while Set B includes "Published" problems that are readily available online, often with their solutions. Each question was presented to each chatbot three times in May 2023. We recorded and analyzed their responses, highlighting their strengths and weaknesses. Our findings indicate that chatbots can provide accurate solutions for straightforward arithmetic, algebraic expressions, and basic logic puzzles, although they may not be consistently accurate in every attempt. However, for more complex mathematical problems or advanced logic tasks, the chatbots' answers, although they appear convincing, may not be reliable. Furthermore, consistency is a concern, as chatbots often provide conflicting answers when presented with the same question multiple times. To evaluate and compare the performance of the three chatbots, we conducted a quantitative analysis by scoring their final answers based on correctness. Our results show that ChatGPT-4 performs better than ChatGPT-3.5 in both sets of questions. Bard ranks third in the original questions of Set A, trailing behind the other two chatbots. However, Bard achieves the best performance, taking first place in the published questions of Set B. This is likely due to Bard's direct access to the internet, unlike the ChatGPT chatbots, which, by design, do not have external communication capabilities.
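The evaluation setup described in the abstract (three chatbots, two sets of 15 questions, each question presented three times, with final answers scored for correctness) can be sketched as follows. This is a minimal illustrative sketch: the sample records and the binary per-attempt scoring are assumptions for demonstration, not data or scores from the article itself.

```python
# Sketch of the evaluation setup from the abstract: three chatbots,
# two question sets (A = original, B = published), three attempts per
# question, each final answer scored for correctness.
# The records below are hypothetical, not the study's real data.

from collections import defaultdict

# Each record: (chatbot, question_set, question_id, attempt, correct?)
records = [
    ("ChatGPT-4",   "A", 1, 1, 1),
    ("ChatGPT-4",   "A", 1, 2, 1),
    ("ChatGPT-4",   "A", 1, 3, 0),
    ("ChatGPT-3.5", "A", 1, 1, 0),
    ("ChatGPT-3.5", "A", 1, 2, 1),
    ("ChatGPT-3.5", "A", 1, 3, 0),
    ("Bard",        "B", 1, 1, 1),
    ("Bard",        "B", 1, 2, 1),
    ("Bard",        "B", 1, 3, 1),
]

def score_by_set(records):
    """Average correctness per (chatbot, set) over all recorded attempts."""
    totals = defaultdict(lambda: [0, 0])  # (chatbot, set) -> [correct, attempts]
    for bot, qset, _qid, _attempt, correct in records:
        totals[(bot, qset)][0] += correct
        totals[(bot, qset)][1] += 1
    return {key: correct / n for key, (correct, n) in totals.items()}

scores = score_by_set(records)
```

Averaging over repeated attempts, as above, also exposes the consistency issue the abstract notes: a chatbot that answers the same question differently across attempts lands between 0 and 1 rather than at either extreme.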
Sponsor: The APC was funded by Oslo Metropolitan University (OsloMet).
Language: en
Publisher: MDPI
Subjects: AI; chatbot; ChatGPT; Google Bard; GPT-3.5; GPT-4; logic; mathematics
Title: Chatbots Put to the Test in Math and Logic Problems: A Comparison and Assessment of ChatGPT-3.5, ChatGPT-4, and Google Bard
Type: Article
Pages: 949-969
Issue: 4
Volume: 4
Access type: Open Access


Files in this item

