A novel virtual patient approach for cross-patient multimodal fusion in enhanced breast cancer detection
| Author | Abdullakutty, Faseela |
| Author | Al-Maadeed, Somaya |
| Author | Al Saady, Rafif |
| Author | Bouridane, Ahmed |
| Author | Hamoudi, Rifat |
| Author | Akbari, Younes |
| Available date | 2026-01-28T09:43:46Z |
| Publication Date | 2026-12-08 |
| Publication Name | Computerized Medical Imaging and Graphics |
| Identifier | http://dx.doi.org/10.1016/j.compmedimag.2025.102687 |
| Citation | Akbari, Y., Abdullakutty, F., Al-Maadeed, S., Al Saady, R., Bouridane, A., & Hamoudi, R. (2026). A novel virtual patient approach for cross-patient multimodal fusion in enhanced breast cancer detection. Computerized Medical Imaging and Graphics, 127, 102687. |
| ISSN | 0895-6111 |
| Abstract | Multimodal medical imaging that combines conventional modalities such as mammography, ultrasound, and histopathology has shown significant promise for improving breast cancer detection accuracy. However, clinical implementation faces substantial challenges due to incomplete patient-matched multimodal datasets and resource constraints. Traditional approaches require complete imaging workups from individual patients, limiting their practical applicability. This study investigates whether cross-patient multimodal fusion, which combines imaging modalities from different patients, can provide additional diagnostic information beyond single-modality approaches. We hypothesize that leveraging complementary information from heterogeneous patient populations enhances cancer detection performance, even when modalities originate from separate individuals. We developed a novel virtual patient framework that systematically combines imaging modalities across different patients based on quality-driven selection strategies. Two training paradigms were evaluated: a Fixed scenario with 1:1:1 cross-patient combinations (∼250 virtual patients) and a Combinatorial scenario with systematic companion selection (∼20,000 virtual patients). Multiple fusion architectures (concatenation, attention, and averaging) were assessed, and we designed a novel co-attention mechanism that enables sophisticated cross-modal interaction through learned attention weights. These fusion networks were evaluated using histopathology (BCSS), mammography, and ultrasound (BUSI) datasets. External validation using the ICIAR2018 BACH Challenge dataset as an alternative histopathology source demonstrated the generalizability of our approach, achieving promising accuracy despite differences in staining protocols and acquisition procedures across institutions. All models were evaluated on consistent fixed test sets to ensure fair comparison. The resulting dataset is well-suited for multiple breast cancer analysis tasks, including detection, segmentation, and Explainable Artificial Intelligence (XAI) applications. Cross-patient multimodal fusion demonstrated significant improvements over single-modality approaches: the best single modality (mammography) achieved 75.36% accuracy, while the optimal fusion combination (histopathology-mammography) reached 97.10%, a 21.74 percentage point improvement. Comprehensive quantitative validation through silhouette analysis (score: 0.894) confirms that the observed performance improvements reflect genuine feature-space structure rather than visualization artifacts. Cross-patient multimodal fusion thus shows significant potential for enhancing breast cancer detection, particularly in real-world scenarios where complete patient-matched multimodal data is unavailable. This approach represents a paradigm shift toward leveraging heterogeneous information sources for improved diagnostic performance. |
| Language | en |
| Publisher | Elsevier |
| Subject | Breast cancer detection; Multimodal fusion; Cross-patient learning; Virtual patients; Medical imaging |
| Type | Article |
| Volume Number | 127 |
| Open Access License | http://creativecommons.org/licenses/by/4.0/ |
| ESSN | 1879-0771 |
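The abstract above names two concrete mechanisms: quality-driven pairing of modalities from different patients into "virtual patients", and a co-attention fusion of the paired features. As a reading aid, the following is a minimal PyTorch sketch of both ideas. Everything in it is an illustrative assumption: the class names, feature dimensions, the random stand-in for the quality-driven selection, and the bidirectional multi-head attention design are not taken from the paper, whose architecture is only described at a high level in this record.

```python
# Illustrative sketch only; names, shapes, and the attention design are
# assumptions, not the paper's published implementation.
import random
import torch
import torch.nn as nn


def fixed_virtual_pairs(labels_a, labels_b):
    """Fixed 1:1:1 scenario (assumed reading): pair same-class samples of two
    modalities drawn from different patients into one 'virtual patient' each.

    Returns a list of (index_in_modality_a, index_in_modality_b, class) tuples.
    """
    pairs = []
    for c in set(labels_a):
        idx_a = [i for i, y in enumerate(labels_a) if y == c]
        idx_b = [j for j, y in enumerate(labels_b) if y == c]
        random.shuffle(idx_b)  # stand-in for the paper's quality-driven selection
        pairs.extend((i, j, c) for i, j in zip(idx_a, idx_b))
    return pairs


class CoAttentionFusion(nn.Module):
    """Fuse two modality feature sequences via learned cross-modal attention."""

    def __init__(self, dim: int = 512, num_heads: int = 8, num_classes: int = 2):
        super().__init__()
        # Each modality attends to the other; both attended views are pooled
        # and concatenated before classification (assumed design).
        self.a_attends_b = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.b_attends_a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
        # feats_a, feats_b: (batch, tokens, dim) embeddings from two backbone
        # encoders, e.g. histopathology and mammography feature extractors.
        a2b, _ = self.a_attends_b(feats_a, feats_b, feats_b)  # A queries B
        b2a, _ = self.b_attends_a(feats_b, feats_a, feats_a)  # B queries A
        fused = torch.cat([a2b.mean(dim=1), b2a.mean(dim=1)], dim=-1)
        return self.classifier(fused)


if __name__ == "__main__":
    # Smoke test on random features standing in for two cross-patient modalities.
    model = CoAttentionFusion()
    a = torch.randn(4, 16, 512)  # e.g. histopathology patch embeddings
    b = torch.randn(4, 16, 512)  # e.g. mammography region embeddings
    print(model(a, b).shape)  # torch.Size([4, 2])
```

The simpler baselines named in the abstract (concatenation and averaging) would drop the two attention layers and combine the pooled features directly, which is why a learned co-attention variant can model richer cross-modal interactions.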
This item appears in the following Collection(s)
- Computer Science & Engineering [2520 items]
- Medicine Research [2051 items]