Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer
Date
2017
Author
Ehteshami Bejnordi, Babak
Veta, Mitko
van Diest, Paul Johannes
van Ginneken, Bram
Karssemeijer, Nico
Litjens, Geert
van der Laak, Jeroen A W M
the CAMELYON16 Consortium
Hermsen, Meyke
Manson, Quirine F
Balkenhol, Maschenka
Geessink, Oscar
Stathonikos, Nikolaos
van Dijk, Marcory C R F
Bult, Peter
Beca, Francisco
Beck, Andrew H
Wang, Dayong
Khosla, Aditya
Gargeya, Rishab
Irshad, Humayun
Zhong, Aoxiao
Dou, Qi
Li, Quanzheng
Chen, Hao
Lin, Huang-Jing
Heng, Pheng-Ann
Haß, Christian
Bruni, Elia
Wong, Quincy
Halici, Ugur
Öner, Mustafa Ümit
Cetin-Atalay, Rengul
Berseth, Matt
Khvatkov, Vitali
Vylegzhanin, Alexei
Kraus, Oren
Shaban, Muhammad
Rajpoot, Nasir
Awan, Ruqayya
Sirinukunwattana, Korsuk
Qaiser, Talha
Tsang, Yee-Wah
Tellez, David
Annuscheit, Jonas
Hufnagl, Peter
Valkonen, Mira
Kartasalo, Kimmo
Latonen, Leena
Ruusuvuori, Pekka
Liimatainen, Kaisa
Albarqouni, Shadi
Mungal, Bharti
George, Ami
Demirci, Stefanie
Navab, Nassir
Watanabe, Seiryo
Seno, Shigeto
Takenaka, Yoichi
Matsuda, Hideo
Ahmady Phoulady, Hady
Kovalev, Vassili
Kalinovsky, Alexander
Liauchuk, Vitali
Bueno, Gloria
Fernandez-Carrobles, M Milagro
Serrano, Ismael
Deniz, Oscar
Racoceanu, Daniel
Venâncio, Rui
Abstract
IMPORTANCE: Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency.
OBJECTIVE: Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin-stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists' diagnoses in a diagnostic setting.
DESIGN, SETTING, AND PARTICIPANTS: Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining was provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain the likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC).
EXPOSURES: Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation.
MAIN OUTCOMES AND MEASURES: The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor.
RESULTS: The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false-positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC that was comparable with the pathologist interpreting the slides in the absence of time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC).
CONCLUSIONS AND RELEVANCE: In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.
© 2017 American Medical Association. All rights reserved.
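As a reading aid for the metric reported above, the minimal sketch below shows how a slide-level ROC AUC can be computed from per-slide scores and ground-truth labels, and how the pathologists' 5-point confidence ratings can be mapped to ordinal scores for the same analysis. All labels, scores, and ratings in the example are hypothetical placeholders; this is not the CAMELYON16 evaluation code.

```python
# Minimal sketch: slide-level ROC AUC from hypothetical data (not the CAMELYON16 evaluation code).
from sklearn.metrics import roc_auc_score

# Ground truth per slide: 1 = contains nodal metastasis, 0 = metastasis-free (hypothetical).
y_true = [1, 0, 1, 0, 0, 1, 0, 1]

# Algorithm output: per-slide probability of metastasis (hypothetical values).
algorithm_scores = [0.97, 0.03, 0.88, 0.12, 0.45, 0.91, 0.07, 0.66]

# The study's 5-point pathologist confidence scale, mapped to an ordinal score
# so it can be analyzed with the same ROC machinery.
rating_to_score = {
    "definitely normal": 1,
    "probably normal": 2,
    "equivocal": 3,
    "probably tumor": 4,
    "definitely tumor": 5,
}
pathologist_ratings = [
    "definitely tumor", "probably normal", "probably tumor", "definitely normal",
    "equivocal", "definitely tumor", "probably normal", "probably tumor",
]
pathologist_scores = [rating_to_score[r] for r in pathologist_ratings]

print("Algorithm AUC:   %.3f" % roc_auc_score(y_true, algorithm_scores))
print("Pathologist AUC: %.3f" % roc_auc_score(y_true, pathologist_scores))
```

In the study itself, this slide-level ROC analysis was complemented by a lesion-level evaluation (true-positive fraction at a given rate of false positives per normal whole-slide image), which the sketch does not cover.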
Collections
- Computer Science & Engineering