
We added the variable multidimensional vulnerability to the dataset of 4172 individuals. All women who were previously assigned to the multidimensional vulnerability-class were classified as ‘yes’ (n = 249) and women in all other classes as ‘no’ (n = 3923).

Statistical analyses

To assess whether it is feasible to predict multidimensional vulnerability during pregnancy using solely routinely collected data at population level (objective 1), we employed Random Forest (RF). RF is a machine learning method for regression and classification that operates through the construction of multiple decision trees (28). The method makes no assumptions about the data distribution and works well with the number of individuals in our dataset relative to the number of variables. Sensitivity analyses were conducted using XGBoost and Lasso for validation (see Appendix 1).

We sought the optimal model using the Area Under the Curve (AUC) and the F1-measure (29). The AUC, ranging from 0.5 (random) to 1.0 (perfect model), reflects the ability of the model to distinguish between those with and without multidimensional vulnerability. Because our dataset is imbalanced, with relatively few cases of multidimensional vulnerability, we calculated F1-measures to focus on correct predictions of vulnerability (29). The F1-measure balances precision, also known as positive predictive value (i.e. the proportion of correct predictions out of all individuals predicted as vulnerable), and recall/sensitivity (i.e. the proportion of individuals with vulnerability correctly predicted as vulnerable by the model). We treated both elements as equally important. A perfect score means the model identifies all positive cases while identifying only positive cases (instead of incorrectly assigning those without vulnerability to the vulnerability-class). We additionally report specificity (i.e. the proportion of correct negative predictions out of all individuals without vulnerability) and the confusion matrices showing true/false positives and true/false negatives.

In model development, we used the default hyperparameter settings of the R-package ‘ranger’ (30), as these typically perform well. We used nested cross-validation to choose the probability threshold for classifying multidimensional vulnerability into ‘yes’ and ‘no’ and to assess model performance (31). This involved splitting the dataset into an outer loop (six train-test combinations) and an inner loop (five train-validate combinations), detailed in Appendix 1. The final RF-model can be used to predict outcomes on new datasets. Being the best performing model, it was also used to report on the prevalence and spatial variation of multidimensional vulnerability during pregnancy from 2017 to 2021. We computed percentages at both national and municipality level in the five imputed datasets, and we conducted an additional complete-case analysis at national level for comparison. Municipality-level results were visualized on a map of the Netherlands.

Next, to identify whether self-reported data on health, wellbeing and lifestyle could improve predictions based solely on routinely collected data (objective 2), we gradually added self-reported data from the PHM-2016 to the RF-model. Using the previous six train-test combinations, we calculated average F1-measures for different variable sets: 1) solely routinely collected data (baseline, 31 variables); 2) baseline combined with one varying PHM-2016 variable (comprising 32 variables); 3) baseline combined with two varying
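As an illustration of the evaluation metrics described above, the following R sketch fits a probability forest with the default ‘ranger’ settings and computes the AUC, precision, recall, specificity and F1-measure on a held-out test fold. It is a minimal sketch only: the data frames train and test, the factor outcome vulnerable (coded ‘no’/‘yes’) and the 0.5 threshold are assumed placeholders, not the exact code or settings used in the study.

library(ranger)
library(pROC)

# Probability forest with default hyperparameters (placeholder training data)
rf <- ranger(vulnerable ~ ., data = train, probability = TRUE)

# Predicted probability of multidimensional vulnerability on the held-out fold
p <- predict(rf, data = test)$predictions[, "yes"]

# AUC: ability of the model to separate 'yes' from 'no'
auc_value <- auc(roc(test$vulnerable, p, levels = c("no", "yes")))

# Classify with an illustrative 0.5 threshold (the study tunes this in nested CV)
pred <- ifelse(p >= 0.5, "yes", "no")
tp <- sum(pred == "yes" & test$vulnerable == "yes")
fp <- sum(pred == "yes" & test$vulnerable == "no")
fn <- sum(pred == "no"  & test$vulnerable == "yes")
tn <- sum(pred == "no"  & test$vulnerable == "no")

precision   <- tp / (tp + fp)                          # positive predictive value
recall      <- tp / (tp + fn)                          # sensitivity
specificity <- tn / (tn + fp)
f1          <- 2 * precision * recall / (precision + recall)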
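The inner loop of the nested cross-validation for threshold selection could take the following shape, again as a hedged sketch rather than the authors' exact implementation: within one outer training fold, candidate thresholds are scored by their mean F1 across five inner train-validate splits, and the best threshold is then carried to the outer test fold. The fold-assignment vector inner_fold and the threshold grid are illustrative assumptions; ranger is assumed to be loaded as in the previous sketch.

# F1 for a given probability threshold
f1_at <- function(prob, truth, thr) {
  pred <- ifelse(prob >= thr, "yes", "no")
  tp <- sum(pred == "yes" & truth == "yes")
  fp <- sum(pred == "yes" & truth == "no")
  fn <- sum(pred == "no"  & truth == "yes")
  2 * tp / (2 * tp + fp + fn)
}

thresholds <- seq(0.05, 0.95, by = 0.05)       # illustrative grid of candidate thresholds
inner_f1   <- matrix(NA, nrow = 5, ncol = length(thresholds))

for (k in 1:5) {                               # five inner train-validate splits
  fit   <- ranger(vulnerable ~ ., data = train[inner_fold != k, ], probability = TRUE)
  prob  <- predict(fit, data = train[inner_fold == k, ])$predictions[, "yes"]
  truth <- train$vulnerable[inner_fold == k]
  inner_f1[k, ] <- sapply(thresholds, function(t) f1_at(prob, truth, t))
}

# Threshold with the highest mean F1 across the inner folds,
# subsequently evaluated once on the outer test fold
best_thr <- thresholds[which.max(colMeans(inner_f1))]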
