Joeky Senders

The interpretability, clinical utility, computational efficiency, and associated limitations vary widely across different models due to their mathematical underpinnings. CPHR has emerged as the cornerstone of survival analysis but is limited by the proportional hazards assumption, which holds that the relationship between covariate and outcome is constant over time. In the real world, this association is often dynamic, and the proportionality assumption is effectively violated. The AFT model, by contrast, allows the risk contribution of a covariate to increase or decrease over time, which is particularly useful for individualizing survival predictions. The AFT model has been shown to be a valuable alternative to CPHR in simulation studies,27 as well as in survival studies on glioblastoma patients.16

Molecular markers (e.g., IDH1 mutation, 1p19q codeletion, and MGMT methylation status) and functional status (e.g., KPS, MMSE) have been demonstrated to affect survival in glioblastoma patients and are commonly used to stratify patient cohorts in clinical decision-making. However, they have not yet been included in large-scale, multicenter registries. Inclusion of these variables would improve individual patient survival modeling. Furthermore, granular information on the healthcare setting (e.g., academic versus non-academic) and the clinical care provided (e.g., volumetric measurements of tumor size and extent of resection, as well as the timing, type, dose, and sequence of adjuvant treatment) would be valuable to further improve model performance. If the addition of any of these variables improves model performance only slightly, however, it may be preferable to exclude some predictors for ease of use at the point of care.
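The contrast between the two model classes can be illustrated numerically. The sketch below uses illustrative parameter values (not fitted to any data): under a proportional-hazards model with a Weibull baseline, a covariate multiplies the hazard, so the hazard ratio between exposed and unexposed subjects is constant over time; under a log-logistic AFT model, the covariate instead rescales the time axis, and the hazard ratio drifts as follow-up accrues.

```python
import math

def cph_survival(t, x, beta=0.7, lam=5.0, k=1.5):
    """Proportional hazards with a Weibull baseline: S(t|x) = S0(t)^exp(beta*x)."""
    s0 = math.exp(-((t / lam) ** k))
    return s0 ** math.exp(beta * x)

def aft_survival(t, x, theta=0.7, alpha=5.0, shape=1.5):
    """Log-logistic AFT: the covariate x rescales time by the factor exp(theta*x)."""
    scale = alpha * math.exp(theta * x)
    return 1.0 / (1.0 + (t / scale) ** shape)

def hazard_ratio(surv, t, dt=1e-4):
    """Numerical hazard ratio h(t|x=1) / h(t|x=0), using h(t) = -d log S(t) / dt."""
    def h(x):
        return (math.log(surv(t, x)) - math.log(surv(t + dt, x))) / dt
    return h(1) / h(0)

# Under proportional hazards the ratio stays constant (exp(0.7) ~ 2.01 here);
# under the log-logistic AFT model it changes as follow-up accrues.
for t in (1.0, 5.0, 10.0):
    print(f"t = {t:4.1f}  PH hazard ratio = {hazard_ratio(cph_survival, t):.3f}  "
          f"AFT hazard ratio = {hazard_ratio(aft_survival, t):.3f}")
```

The log-logistic distribution is chosen deliberately: the Weibull family satisfies both the proportional hazards and AFT assumptions simultaneously, so a non-Weibull AFT model is needed to exhibit a time-varying hazard ratio.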
Another method to overcome the lack of large-scale, granular data sets could be to explore transfer learning, a common machine learning approach in which a pre-trained model is updated on novel data sources or even different outcomes.28 In the context of glioblastoma survival prediction, this could mean developing a base model on population-based data and further training it on institutional data, so that it fits institutional patterns and incorporates relevant institutional parameters not available in population-based registries.

Although many machine learning algorithms show great predictive performance, their utility is often limited to continuous and binary models, which merely provide point estimates of overall survival and of one-year survival probability at a given point in time, respectively. Transferring the predictive power of these algorithms to time-to-event models allows for the computation of subject-level survival curves, thereby enabling more granular insight into expected survival. Furthermore, time-to-event models can be trained on patients with either complete or incomplete follow-up, which mitigates the systematic bias associated with excluding the latter group. Although many machine learning models demonstrate high performance in the
