Show simple item record

Author: Plevris, Vagelis
Author: Papazafeiropoulos, George
Author: Jiménez Rios, Alejandro
Available Date: 2024-10-02T05:59:50Z
Publication Date: 2023
Publication Name: AI (Switzerland)
Resource: Scopus
ISSN: 2673-2688
URI: http://dx.doi.org/10.3390/ai4040048
URI: http://hdl.handle.net/10576/59670
Abstract: In an age where artificial intelligence is reshaping the landscape of education and problem solving, our study unveils the secrets behind three digital wizards, ChatGPT-3.5, ChatGPT-4, and Google Bard, as they engage in a thrilling showdown of mathematical and logical prowess. We assess the ability of the chatbots to understand the given problem, employ appropriate algorithms or methods to solve it, and generate coherent responses with correct answers. We conducted our study using a set of 30 questions. These questions were carefully crafted to be clear, unambiguous, and fully described using plain text only. Each question has a unique and well-defined correct answer. The questions were divided into two sets of 15: Set A consists of "Original" problems that cannot be found online, while Set B includes "Published" problems that are readily available online, often with their solutions. Each question was presented to each chatbot three times in May 2023. We recorded and analyzed their responses, highlighting their strengths and weaknesses. Our findings indicate that chatbots can provide accurate solutions for straightforward arithmetic, algebraic expressions, and basic logic puzzles, although they may not be consistently accurate in every attempt. However, for more complex mathematical problems or advanced logic tasks, the chatbots' answers, although they appear convincing, may not be reliable. Furthermore, consistency is a concern as chatbots often provide conflicting answers when presented with the same question multiple times. To evaluate and compare the performance of the three chatbots, we conducted a quantitative analysis by scoring their final answers based on correctness. Our results show that ChatGPT-4 performs better than ChatGPT-3.5 in both sets of questions. Bard ranks third in the original questions of Set A, trailing behind the other two chatbots. However, Bard achieves the best performance, taking first place in the published questions of Set B. This is likely due to Bard's direct access to the internet, unlike the ChatGPT chatbots, which, due to their designs, do not have external communication capabilities.
Sponsor: The APC was funded by Oslo Metropolitan University (OsloMet).
Language: en
Publisher: MDPI
Subject: AI; chatbot; ChatGPT; Google Bard; GPT-3.5; GPT-4; logic; mathematics
Title: Chatbots Put to the Test in Math and Logic Problems: A Comparison and Assessment of ChatGPT-3.5, ChatGPT-4, and Google Bard
Type: Article
Pagination: 949-969
Issue Number: 4
Volume Number: 4
Access Type: Open Access

