
An MRI–pathology foundation model for noninvasive diagnosis and grading of prostate cancer

Abstract

Prostate cancer is a leading health concern for men, yet current clinical assessments of tumor aggressiveness rely on invasive procedures that often lead to inconsistencies. There remains a critical need for accurate, noninvasive diagnosis and grading methods. Here we developed a foundation model trained on multiparametric magnetic resonance imaging (MRI) and paired pathology data for noninvasive diagnosis and grading of prostate cancer. Our model, MRI-based Predicted Transformer for Prostate Cancer (MRI-PTPCa), was trained under contrastive learning on nearly 1.3 million image–pathology pairs from over 5,500 patients in discovery, modeling, external and prospective cohorts. During real-world testing, prediction of MRI-PTPCa demonstrated consistency with pathology and superior performance (area under the curve above 0.978; grading accuracy 89.1%) compared with clinical measures and other prediction models. This work introduces a scalable, noninvasive approach to prostate cancer diagnosis and grading, offering a robust tool to support clinical decision-making while reducing reliance on biopsies.
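The abstract describes contrastive training on paired MRI–pathology data. As a minimal illustrative sketch only (assuming PyTorch; the loss form, embedding dimension and toy grade-group labels are hypothetical and not the published MRI-PTPCa implementation), a supervised contrastive objective of this kind pulls together MRI embeddings that share a pathology label and pushes apart those that do not:

```python
# Illustrative sketch only -- a generic supervised contrastive loss over MRI
# embeddings grouped by a pathology label; not the published MRI-PTPCa code.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Pull together embeddings sharing a pathology label (e.g. grade group),
    push apart embeddings with different labels."""
    z = F.normalize(embeddings, dim=1)                    # (N, D) unit vectors
    sim = z @ z.t() / temperature                         # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))       # drop self-similarity
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = positives.sum(dim=1).clamp(min=1)
    per_anchor = -(log_prob.masked_fill(~positives, 0.0).sum(dim=1) / pos_counts)
    return per_anchor[positives.any(dim=1)].mean()        # skip anchors without positives

# Toy usage: 8 hypothetical slice embeddings with made-up grade-group labels.
emb = torch.randn(8, 128, requires_grad=True)
ggg = torch.tensor([0, 1, 1, 2, 0, 2, 3, 3])
print(supervised_contrastive_loss(emb, ggg).item())
```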


Fig. 1: Schema outlining the study.
Fig. 2: Study design.
Fig. 3: Performance of the MRI-PTPCa for diagnosing and grading in a multicenter study.
Fig. 4: Clinical improvements of MRI-PTPCa compared with conventional clinical measures in diagnosing PCa, CSPCa and grading.
Fig. 5: Multicenter diagnostic performance of MRI-PTPCa versus clinical approaches.
Fig. 6: Interpretability of MRI-PTPCa for enhanced performance.
Fig. 7: Prospective evaluation of MRI-PTPCa for diagnosing PCa, CSPCa and grading.
Fig. 8: Analysis of MRI-PTPCa in different clinical scenarios.


Data availability

The data of the PICAI set were derived from the dataset of the PI-CAI challenge at https://pi-cai.grand-challenge.org/. Raw mp-MRI data from PUTH, JSPH, BJFH and AHQU cannot currently be deposited in public repositories because their ethical and legal implications are still under discussion at the institutional level. For study transparency and reproducibility, research data (deidentified participant data and original MRI images) will be made available upon publication on request to the corresponding authors. Interested researchers should send a data access request to renshancheng@gmail.com; the corresponding authors will review requests together with the other authors. Data sharing is available only for academic research (for example, as a reference for model parameters and study design) and not for other purposes (for example, commercial use). A data use agreement and, as appropriate, institutional review board approval will be required before release of participant data. The data that support the findings of this study are available from the corresponding authors upon request. Source data are provided with this paper.

Code availability

All code necessary for the analyses is available without access restrictions via GitHub at https://github.com/StandWisdom/MRI-based-Predicted-Transformer-for-Prostate-cancer. We have proactively open-sourced the pretrained model trained on the referenced dataset, making it available to the research community for further exploration and application.


Acknowledgements

This work was supported by grants from the National Natural Science Foundation of China (grant nos. 82125025 to S.R., 82330091 to S.R. and 82302316 to L.S.), the Shanghai Shenkang Hospital Development Center (grant nos. SHDC12022117 to S.R. and SHDC2022CRT005 to S.R.), the Shanghai Municipal Education Commission (grant no. 2023ZKZD46 to S.R.) and the Jiangsu Natural Science Foundation (grant no. BK20191077 to C.L.). The funders had no role in the study design, data collection and analysis, decision to publish or preparation of the paper.

Author information

Authors and Affiliations

Authors

Contributions

Conceptualization: L.S. and Y.Y.; data curation: Y.Y., C.L., H.Z., P.Z. and P.N.; formal analysis: L.S., Y.Y. and C.L.; funding acquisition: S.R., S.Z. and J.L.; methodology: L.S.; project administration: S.R., S.Z. and J.L.; software: L.S., H.Z. and X.J.; validation: L.S., H.Z., P.N. and X.H.; data collection: Y.Y., C.L., L.W., P.N., M.B. and J.L.; data quality control: H.Z., C.L., Y.Y., X.J. and H.Z.; writing—original draft: L.S., L.W. and C.L.; writing—review and editing: L.W., Y.Y., J.L., X.J. and S.R.

Corresponding authors

Correspondence to Pei Nie, Liang Wang, Jie Li, Shudong Zhang or Shancheng Ren.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Cancer thanks Feixiong Cheng and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Pipeline for development and validation of MRI-PTPCa across multicenter retrospective and prospective collections.

The numbers in the pie chart and Venn diagram represent the number of patients.

Extended Data Fig. 2 Multi-center decision curve analysis of MRI-PTPCa.

a) PCa diagnosis. b) CSPCa diagnosis.

Source data

Extended Data Fig. 3 Analysis of performance differences between MRI-PTPCa and clinical tools in prostate cancer diagnosis.

a) Distribution of PSA. For visualization purposes, PSA values > 40 ng/mL were capped at 40 ng/mL. The x-axis represents PSA values, and the y-axis represents the corresponding frequency. Q1 represents the cut-off point for the first 25% of the data points; Q3 represents the cut-off point for the first 75% of the data points. PSA here denotes total prostate-specific antigen. b) ROC analysis of the improvement of MRI-PTPCa over PSA in PCa detection. c) ROC difference between PI-RADS and MRI-PTPCa in PCa detection within the PSA gray zone. A two-sided DeLong test was used to compare the AUCs of the ROC curves.

Source data

Extended Data Fig. 4 IDI-NRI analysis of MRI-PTPCa compared to clinical approaches for PCa diagnosis.

a) Comparison between PSA and MRI-PTPCa. b) Comparison for the PSA gray zone (4-10 ng/mL) between PSA and MRI-PTPCa. c) Comparison for the PSA gray zone between PI-RADS and MRI-PTPCa. Events indicated PCa diagnosis, and nonevents indicated non-PCa. Confidence intervals were computed using 1,000 bootstrap resamples with replacement. Error bars represent the error between the NRI value of the prediction model and the reference. Error bands represent the IDI value. The n indicated on the diagram represents the number of patients.

Source data
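The legends above note that confidence intervals for the IDI-NRI analyses were computed with 1,000 bootstrap resamples with replacement. As a minimal sketch of that general resampling idea (assuming NumPy and scikit-learn; the function, variable names and simulated data are hypothetical, and an AUC difference is shown rather than the study's exact IDI/NRI pipeline):

```python
# Illustrative sketch only -- percentile bootstrap CI for an AUC difference,
# resampling patients with replacement (1,000 iterations), with simulated data.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_diff(y, score_new, score_ref, n_boot=1000, seed=0):
    """95% percentile CI for AUC(score_new) - AUC(score_ref)."""
    rng = np.random.default_rng(seed)
    n, diffs = len(y), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # sample with replacement
        if np.unique(y[idx]).size < 2:            # an AUC needs both classes
            continue
        diffs.append(roc_auc_score(y[idx], score_new[idx]) -
                     roc_auc_score(y[idx], score_ref[idx]))
    return tuple(np.percentile(diffs, [2.5, 97.5]))

# Toy usage with simulated outcomes and two hypothetical risk scores.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)
model_score = y + rng.normal(0, 0.5, size=200)     # stronger separation
psa_like = 0.3 * y + rng.normal(0, 0.8, size=200)  # weaker separation
print(bootstrap_auc_diff(y, model_score, psa_like))
```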

Extended Data Fig. 5 IDI curve analysis of different approaches for biopsy-proven CSPCa diagnosis.

a) Comparison between PI-RADS and MRI-PTPCa. b) Comparison between PI-RADS and MRI-PTPCa in the PI-RADS score = 3 subgroup. c) Comparison between PI-RADS and MRI-PTPCa in the PI-RADS score ≥ 3 subgroup. d) Comparison between PI-RADS and MRI-PTPCa in the PI-RADS score < 3 subgroup. e) Comparison between PI-RADS and MRI-PTPCa in the PCa subgroup. Confidence intervals were computed using 1,000 bootstrap resamples with replacement. Error bars represent the error between the NRI value of the prediction model and the reference. Error bands represent the IDI value.

Source data

Extended Data Fig. 6 IDI curve analysis of different approaches for comparing performance in RP-proven CSPCa diagnosis.

a) Comparison between PI-RADS and MRI-PTPCa. b) Comparison between biopsy and MRI-PTPCa. Confidence intervals were computed using 1,000 bootstrap resamples with replacement. Error bars represent the error between the NRI value of the prediction model and the reference. Error bands represent the IDI value. The n indicated on the diagram represents the number of patients.

Source data

Extended Data Fig. 7 Multi-center confusion matrix of MRI-PTPCa for GGG prediction.

True GGG, GGG from pathology evaluation of tumor aggressiveness after RP. Predicted GGG, GGG predicted by MRI-PTPCa. a) F1-score matrix. b) Number matrix.

Source data
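Extended Data Fig. 7 reports an F1-score matrix and a count ("number") matrix for GGG prediction. A minimal sketch of how such summaries can be computed (assuming scikit-learn; the GGG arrays are hypothetical placeholders, and per-class F1 is shown as one reasonable reading of the F1-score matrix):

```python
# Illustrative sketch only -- count confusion matrix and per-class F1 for GGG
# predictions; the arrays are hypothetical placeholders, not study data.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

true_ggg = np.array([1, 2, 2, 3, 4, 5, 1, 3, 2, 5])
pred_ggg = np.array([1, 2, 3, 3, 4, 4, 1, 3, 2, 5])
grade_groups = [1, 2, 3, 4, 5]

number_matrix = confusion_matrix(true_ggg, pred_ggg, labels=grade_groups)
per_class_f1 = f1_score(true_ggg, pred_ggg, labels=grade_groups, average=None)

print(number_matrix)
print(dict(zip(grade_groups, np.round(per_class_f1, 2))))
```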

Extended Data Fig. 8 Downgrading and upgrading of GGG after whole-mount histopathological analysis, according to biopsy and MRI-PTPCa.

a) Distribution of upgrading and downgrading in the pathological evaluation of biopsy tissue. b) Distribution of upgrading and downgrading in the predictions of MRI-PTPCa. The n (%) indicated on the diagram represents the number of patients and the corresponding percentage.

Source data

Extended Data Fig. 9 Development of MRI-PTPCa.

a) Training phase of MRI-PTPCa. b) Workflow of MRI-PTPCa using MRI for prostate cancer pathology prediction. c) Contrastive learning for 2-D images of MRI. d) Contrastive learning for 3-D images of mp-MRI. e) Interpretability of improvements from a modeling view. The significant improvement of the model stems from contrastive representation learning in image feature encoding and from the transformer in attention fusion. The mp-MRI foundation model of PCa provides pretrained network parameters and high-quality MRI features. The transformer-based model builds attention and fusion among images within single sequences and across multiple sequences. MRI-PTPCa enables mp-MRI to surpass the limits of human vision, information association and memory in the characterization of prostate cancer. We also experimentally demonstrated the importance of ground truth for modeling under a supervised learning strategy: it was meaningful and efficient to explore the correlation between mp-MRI and pathology symmetrically, given the inconsistency between needle-biopsy evaluation and whole-mount histopathological analysis of RP specimens. Weakly supervised learning led to performance degradation as a modeling strategy, and multi-classification networks detected differences between patients more readily than binary classification networks. In addition, the richness of samples participating in training, including the number of effective training samples and the types of data augmentation, was also important for enhancing performance. mp-MRI, multiparametric MRI; PCa, prostate cancer; CSPCa, clinically significant prostate cancer; NB, needle biopsy; RP, radical prostatectomy; MLP, multilayer perceptron; MLE, maximum likelihood estimation.
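The legend above attributes part of the improvement to transformer-based attention and fusion across single and multiple mp-MRI sequences. A minimal sketch of attention-based fusion of per-sequence embeddings (assuming PyTorch; the token layout, dimensions, three-sequence setup and six-class head are illustrative assumptions, not the published MRI-PTPCa architecture):

```python
# Illustrative sketch only -- transformer-based fusion of per-sequence mp-MRI
# embeddings with a classification head; not the published MRI-PTPCa architecture.
import torch
import torch.nn as nn

class SequenceFusion(nn.Module):
    """Fuse per-sequence embeddings (e.g. T2W, DWI, ADC) with self-attention,
    then predict a grade group from a learnable [CLS] token."""
    def __init__(self, dim=256, n_classes=6, n_heads=8, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))    # learnable [CLS] token
        self.head = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, n_classes))

    def forward(self, seq_embeddings):                     # (B, n_sequences, dim)
        b = seq_embeddings.size(0)
        tokens = torch.cat([self.cls.expand(b, -1, -1), seq_embeddings], dim=1)
        fused = self.encoder(tokens)                       # attention across sequences
        return self.head(fused[:, 0])                      # logits from [CLS]

# Toy usage: batch of 4 cases, 3 hypothetical sequence-level feature vectors each.
model = SequenceFusion()
features = torch.randn(4, 3, 256)
print(model(features).shape)   # torch.Size([4, 6])
```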

Extended Data Fig. 10 Precision-recall curve analysis of MRI-PTPCa for diagnosis.

a) PCa diagnosis. b) CSPCa diagnosis. c) CSPCa diagnosis in the PCa subgroup.

Source data

Supplementary information

Source data

Source Data Fig. 3

Statistical source data.

Source Data Fig. 4

Statistical source data.

Source Data Fig. 5

Statistical source data.

Source Data Fig. 6

Statistical source data.

Source Data Fig. 7

Statistical source data.

Source Data Fig. 8

Statistical source data.

Source Data Extended Data Fig. 2/Table 2

Statistical source data.

Source Data Extended Data Fig. 3/Table 3

Statistical source data.

Source Data Extended Data Fig. 4/Table 4

Statistical source data.

Source Data Extended Data Fig. 5/Table 5

Statistical source data.

Source Data Extended Data Fig. 6/Table 6

Statistical source data.

Source Data Extended Data Fig. 7/Table 7

Statistical source data.

Source Data Extended Data Fig. 8/Table 8

Statistical source data.

Source Data Extended Data Fig. 10/Table 10

Statistical source data.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Shao, L., Liang, C., Yan, Y. et al. An MRI–pathology foundation model for noninvasive diagnosis and grading of prostate cancer. Nat Cancer 6, 1621–1637 (2025). https://doi.org/10.1038/s43018-025-01041-x

