Towards the automatic risk of bias assessment on randomized controlled trials: A comparison of RobotReviewer and humans
Date
2024-01-01
Author
Tian, Yuan
Yang, Xi
Doi, Suhail A.
Furuya-Kanamori, Luis
Lin, Lifeng
Kwong, Joey S.W.
Xu, Chang
Abstract
RobotReviewer is a tool for automatically assessing the risk of bias in randomized controlled trials, but there is limited evidence of its reliability. We evaluated the agreement between RobotReviewer and humans regarding risk of bias assessment based on 1955 randomized controlled trials. The risk of bias in these trials was assessed via two different approaches: (1) manually by human reviewers, and (2) automatically by RobotReviewer. The manual assessment was performed independently by two groups, with two additional rounds of verification. The agreement between RobotReviewer and humans was measured via the concordance rate and Cohen's kappa statistics, based on a binary classification of the risk of bias (low vs. high/unclear) as restricted by RobotReviewer. The concordance rates varied by domain, ranging from 63.07% to 83.32%. Cohen's kappa statistics showed poor agreement between humans and RobotReviewer for allocation concealment (κ = 0.25, 95% CI: 0.21–0.30) and blinding of outcome assessors (κ = 0.27, 95% CI: 0.23–0.31), while agreement was moderate for random sequence generation (κ = 0.46, 95% CI: 0.41–0.50) and blinding of participants and personnel (κ = 0.59, 95% CI: 0.55–0.64). The findings demonstrate domain-specific differences in the level of agreement between RobotReviewer and humans. We suggest that RobotReviewer might be a useful auxiliary tool, but the specific manner of its integration as a complementary tool requires further discussion.
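The agreement metrics named in the abstract (concordance rate and Cohen's kappa for a binary low vs. high/unclear classification) can be computed for any pair of raters. Below is a minimal illustrative sketch, not the authors' analysis code; the labels are assumed to be coded 0 = low risk and 1 = high/unclear risk, and the function name `binary_agreement` and the simple large-sample standard-error formula used for the kappa 95% CI are assumptions for illustration.

```python
import numpy as np

def binary_agreement(human, robot):
    """Concordance rate and Cohen's kappa (with an approximate 95% CI)
    for two raters giving binary labels (0 = low, 1 = high/unclear)."""
    human = np.asarray(human)
    robot = np.asarray(robot)
    n = len(human)

    # Observed agreement (concordance rate)
    po = np.mean(human == robot)

    # Expected agreement by chance, from each rater's marginal proportions
    p_h1, p_r1 = human.mean(), robot.mean()
    pe = p_h1 * p_r1 + (1 - p_h1) * (1 - p_r1)

    # Cohen's kappa
    kappa = (po - pe) / (1 - pe)

    # Approximate large-sample standard error and 95% CI (assumption:
    # the simple formula SE = sqrt(po(1-po) / (n(1-pe)^2)))
    se = np.sqrt(po * (1 - po) / (n * (1 - pe) ** 2))
    ci = (kappa - 1.96 * se, kappa + 1.96 * se)
    return po, kappa, ci

# Example with made-up labels for two raters
human = [0, 1, 1, 0, 1, 0, 0, 1]
robot = [0, 1, 0, 0, 1, 1, 0, 1]
po, kappa, ci = binary_agreement(human, robot)
print(f"concordance = {po:.2%}, kappa = {kappa:.2f}, "
      f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```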
Collections
- Medicine Research [1508 items]