TY - JOUR
T1 - The importance of multi-modal imaging and clinical information for humans and AI-based algorithms to classify breast masses (INSPiRED 003): an international, multicenter analysis
AU - Pfob, André
AU - Sidey-Gibbons, Chris
AU - Barr, Richard G.
AU - Duda, Volker
AU - Alwafai, Zaher
AU - Balleyguier, Corinne
AU - Clevert, Dirk-André
AU - Fastner, Sarah
AU - Gomez, Christina
AU - Goncalo, Manuela
AU - Gruber, Ines
AU - Hahn, Markus
AU - Hennigs, André
AU - Kapetas, Panagiotis
AU - Lu, Sheng-Chieh
AU - Nees, Juliane
AU - Ohlinger, Ralf
AU - Riedel, Fabian
AU - Rutten, Matthieu
AU - Schaefgen, Benedikt
AU - Schuessler, Maximilian
AU - Stieber, Anne
AU - Togawa, Riku
AU - Tozaki, Mitsuhiro
AU - Wojcinski, Sebastian
AU - Xu, Cai
AU - Rauch, Geraldine
AU - Heil, Joerg
AU - Golatta, Michael
N1 - Publisher Copyright:
© 2022, The Author(s).
PY - 2022/6/1
Y1 - 2022/6/1
AB - Objectives: AI-based algorithms for medical image analysis have shown performance comparable to that of human image readers. However, in practice, diagnoses are made using multiple imaging modalities alongside other data sources. We determined the importance of this multi-modal information and compared the diagnostic performance of routine breast cancer diagnosis to breast ultrasound interpretations by humans or AI-based algorithms. Methods: Patients were recruited as part of a multicenter trial (NCT02638935). The trial enrolled 1288 women undergoing routine breast cancer diagnosis (multi-modal imaging, demographic, and clinical information). Three physicians specialized in ultrasound diagnosis performed a second read of all ultrasound images. We used data from 11 of 12 study sites to develop two machine learning (ML) algorithms that classified breast masses using unimodal information (ultrasound features generated by the ultrasound experts); these algorithms were validated on the remaining study site. The same ML algorithms were subsequently developed and validated using multi-modal information (clinical and demographic information plus ultrasound features). We assessed performance using the area under the curve (AUC). Results: Of 1288 breast masses, 368 (28.6%) were histopathologically malignant. In the external validation set (n = 373), the performance of the two unimodal ultrasound ML algorithms (AUC 0.83 and 0.82) was commensurate with that of the human ultrasound experts (AUC 0.82 to 0.84; p for all comparisons > 0.05). The multi-modal ultrasound ML algorithms performed significantly better (AUC 0.90 and 0.89) but remained statistically inferior to routine breast cancer diagnosis (AUC 0.95, p for all comparisons ≤ 0.05). Conclusions: The performance of humans and AI-based algorithms improves with multi-modal information. Key Points: • The performance of humans and AI-based algorithms improves with multi-modal information. • Multi-modal AI-based algorithms do not necessarily outperform expert humans. • Unimodal AI-based algorithms do not represent optimal performance for classifying breast masses.
KW - Artificial intelligence
KW - Breast cancer
KW - Machine learning
KW - Ultrasonography
UR - http://www.scopus.com/inward/record.url?scp=85124731021&partnerID=8YFLogxK
U2 - 10.1007/s00330-021-08519-z
DO - 10.1007/s00330-021-08519-z
M3 - Article
C2 - 35175381
AN - SCOPUS:85124731021
SN - 0938-7994
VL - 32
SP - 4101
EP - 4115
JO - European Radiology
JF - European Radiology
IS - 6
ER -