
Author: Bhattacharya, Manojit
Author: Pal, Soumen
Author: Chatterjee, Srijan
Author: Alshammari, Abdulrahman
Author: Albekairi, Thamer H.
Author: Jagga, Supriya
Author: Ige Ohimain, Elijah
Author: Zayed, Hatem
Author: Byrareddy, Siddappa N.
Author: Lee, Sang-Soo
Author: Wen, Zhi-Hong
Author: Agoramoorthy, Govindasamy
Author: Bhattacharya, Prosun
Author: Chakraborty, Chiranjib
Available date: 2024-06-12T10:59:04Z
Publication Date: 2024-03-02
Publication Name: Current Research in Biotechnology
Identifier: http://dx.doi.org/10.1016/j.crbiot.2024.100194
Citation: Bhattacharya, M., Pal, S., Chatterjee, S., Alshammari, A., Albekairi, T. H., Jagga, S., ... & Chakraborty, C. (2024). ChatGPT’s scorecard after the performance in a series of tests conducted at the multi-country level: A pattern of responses of generative artificial intelligence or large language models. Current Research in Biotechnology, 100194.
URI: https://www.sciencedirect.com/science/article/pii/S2590262824000200
URI: http://hdl.handle.net/10576/56120
Abstract: Recently, researchers have raised concerns about ChatGPT-derived answers. Here, we conducted a series of tests using ChatGPT, administered by individual researchers at the multi-country level, to understand the pattern of its answer accuracy, reproducibility, answer length, plagiarism, and depth, using two questionnaires (the first set with 15 MCQs and the second with 15 KBQs). Among the 15 MCQ-generated answers, 13 ± 70 were correct (Median: 82.5; Coefficient of variance: 4.85), 3 ± 0.77 were incorrect (Median: 3; Coefficient of variance: 25.81), answers 1 to 10 were reproducible, and answers 11 to 15 were not. Among the 15 KBQs, the length of each answer (in words) is about 294.5 ± 97.60 (mean range varies from 138.7 to 438.09), and the mean similarity index (in words) is about 29.53 ± 11.40 (Coefficient of variance: 38.62) for each question. Statistical models were also developed using the analyzed answer parameters. The study shows a pattern of correct and incorrect ChatGPT-derived answers and urges the development of an error-free, next-generation LLM to avoid misguiding users.
Sponsor: This work was funded by the Researchers Supporting Project number (RSP2024R491), King Saud University, Riyadh, Saudi Arabia.
Language: en
Publisher: Elsevier
Subject: ChatGPT
Accuracy
Reproducibility
Plagiarism
Answer length
Title: ChatGPT’s scorecard after the performance in a series of tests conducted at the multi-country level: A pattern of responses of generative artificial intelligence or large language models
Type: Article
Volume Number: 7
Open Access user License: http://creativecommons.org/licenses/by-nc-nd/4.0/
ESSN: 2590-2628
dc.accessType: Open Access

