
as an ‘additional’ part of the evaluation of interventions, but should become part of the evaluation process itself. Collecting relevant cost data on the health care expenditures of patients, for example in the ROM system, is a first step towards an economic evaluation. A second step is creating more awareness of the relevance of these types of analyses among clinicians and youth care institutions. For them, evidence-based practice should also mean knowing about the effectiveness of an intervention, its costs, and its cost-effectiveness, and being able to explain these results when needed. Moreover, policymakers who decide on the expenditure of youth care budgets (i.e., the municipalities in the Netherlands) should also be informed about these types of analyses and the interpretation of their results, since they have to distribute youth care budgets wisely. Additionally, municipalities can play an important role in supporting institutions, scientists, and clinicians in translating the data and findings into practically relevant outcomes. When policymakers, youth care institutions, and clinicians understand each other’s language, results can more easily be applied and translated into reimbursement decisions, in order to avoid wasting money on costly but less effective youth care interventions.

Recommendations for future research and policy

The importance of evaluating youth care interventions on effectiveness and cost-effectiveness should not only be emphasized within clinical practice, but also within research and policymaking. The following recommendations are important for future research and policy in the field of youth care.

First, evaluating interventions using ‘real world data’ is becoming more widely accepted (Berger, Dreyer, Anderson, Towse, Sedrakyan, & Normand, 2012). Findings and conclusions from such datasets should be interpreted in light of how the data were selected, and differences from the broader population should be made explicit. Furthermore, the statistical method chosen to analyse the data will likely depend on the data that are available and should take into account uncertainties within the dataset, such as missing values or data being available for only a subgroup of adolescents. Using clinical practice data, or ‘real world data’, is not a substitute for conducting RCTs, since randomized designs can address the comparative efficacy of two interventions without allocation bias (Borah et al., 2014). It is, however, a valid alternative to use clinical practice data and apply correction methods such as the PS. When using methods like the PS, one should consider existing guidelines for observational research (e.g., Berger et al., 2012), so that using non-randomized clinical practice data does not become an excuse for not gathering crucial data on patient characteristics and assignment to treatment (Borah et al., 2014). Moreover, it could be necessary to develop standards for reporting the results of PS methods, to be transparent about the assessed balance and the treatment outcome (Borah et al., 2014), and to register these observational studies (Berger et al., 2012; Williams, Tse, Harlan, & Zarin, 2010); a brief illustrative sketch of such a balance check is given below. In addition, the context of the youth care field at the time the data were gathered is also relevant, since, for example, reimbursement decisions or referral policies can change yearly within each municipality. If intervention A, for example, is reimbursed in year ‘X’ but not in year ‘Y’, the number and ‘type’ of adolescents assigned to intervention A can differ between these years,
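To make the balance assessment mentioned above concrete, the sketch below shows one common way of estimating a propensity score and reporting covariate balance before and after adjustment. It is a minimal, hypothetical illustration in Python: the variable names (age, symptom severity, prior care), the simulated data, and the use of inverse-probability weighting are assumptions made for this example only; they do not come from the studies in this thesis, and the thesis analyses may have used different software or a different PS method (e.g., matching or stratification).

# Illustrative (hypothetical) example: estimating propensity scores for a
# non-randomized comparison of two youth care interventions and checking
# covariate balance with standardized mean differences (SMDs).
# Variable names and data are simulated for illustration only.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Simulated observational data: treatment assignment depends on the covariates,
# which is exactly the situation the propensity score is meant to correct.
age = rng.normal(14, 2, n)
symptom_severity = rng.normal(50, 10, n)
prior_care = rng.binomial(1, 0.3, n)
logit = -8 + 0.3 * age + 0.05 * symptom_severity + 0.8 * prior_care
treatment = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({
    "age": age,
    "symptom_severity": symptom_severity,
    "prior_care": prior_care,
    "treatment": treatment,
})
covariates = ["age", "symptom_severity", "prior_care"]

# Step 1: estimate the propensity score (probability of receiving intervention A).
ps_model = LogisticRegression(max_iter=1000)
ps_model.fit(df[covariates], df["treatment"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# Step 2: inverse-probability-of-treatment weights (one of several PS methods).
df["iptw"] = np.where(df["treatment"] == 1, 1 / df["ps"], 1 / (1 - df["ps"]))

def smd(x, t, w=None):
    """Standardized mean difference between treated and control, optionally weighted."""
    if w is None:
        w = np.ones(len(x))
    x1, w1 = x[t == 1], w[t == 1]
    x0, w0 = x[t == 0], w[t == 0]
    m1, m0 = np.average(x1, weights=w1), np.average(x0, weights=w0)
    v1 = np.average((x1 - m1) ** 2, weights=w1)
    v0 = np.average((x0 - m0) ** 2, weights=w0)
    return (m1 - m0) / np.sqrt((v1 + v0) / 2)

# Step 3: report balance before and after weighting; an SMD below roughly 0.1
# is often taken as indicating adequate balance on that covariate.
for cov in covariates:
    before = smd(df[cov].to_numpy(), df["treatment"].to_numpy())
    after = smd(df[cov].to_numpy(), df["treatment"].to_numpy(), df["iptw"].to_numpy())
    print(f"{cov:18s} SMD before: {before:5.2f}   after weighting: {after:5.2f}")

Reporting such before-and-after balance statistics alongside the treatment outcome is one concrete way to provide the transparency about PS analyses that the guidelines cited above call for.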
