
Outcome Reporting and Data Synthesis

Between-group comparisons at post-intervention (within one month of the end of the intervention) and at follow-up (at least six months post-intervention) were calculated per study for each outcome of interest, using RevMan 5.3 software (Cochrane, 2011). If more than one follow-up measurement was available, we included the last time point in our analysis. If a study included more than one self-management group, we used only the intervention group that best fitted our definition of self-management interventions. If a study included more than one control group, we included only the most active control group in our comparisons. Results were presented for each outcome separately. If the GRADE analyses revealed serious concerns for both indirectness and inconsistency, we concluded that the data were too heterogeneous to perform a meta-analysis and presented the results narratively.

Each outcome was expected to be measured with varying questionnaires. Therefore, standardized mean differences (SMD) with 95% confidence intervals were used. A priori, we decided to select random-effects models because we assumed differences in the true outcomes across studies, based on between-study variation in duration, intensity and patient characteristics. If the pooled SMD was significant, we re-expressed this effect on one of the outcome measures to examine its clinical importance. This was done by multiplying the SMD by the standard deviation of the control group of one of the included studies that adopted this measure. Subsequently, we compared this effect with available estimates of the minimal important change to assess the clinical importance (the pooling and this re-expression are illustrated at the end of this section). When it was not possible to obtain measures of central tendency or dispersion, the results were presented narratively and compared to the results of the meta-analysis. BCTs were visualized in a table. Relative differences between studies and between domains of the taxonomy were calculated and presented narratively.

Assessment of the Quality of Evidence

For each comparison in the meta-analysis, we used the GRADEpro Guideline Development Tool (Evidence Prime Inc., 2015) to determine the quality of evidence. As only randomized controlled trials were included, the initial quality of evidence started as 'high' and was downgraded as a result of limitations with respect to risk of bias, inconsistency, indirectness, imprecision or publication bias. For each comparison, we downgraded the level of evidence when (1) more than 25% of the sample came from studies with a high risk of bias; (2) the I² was more than 60% combined with a limited overlap of confidence intervals (inconsistency); (3) substantial differences were present in study population, intervention protocol, control group or
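To make the pooling step concrete, the sketch below shows how standardized mean differences can be combined under a random-effects model. It is not part of the review's RevMan workflow; it is a minimal Python illustration assuming the standard DerSimonian-Laird estimator, and all study means, standard deviations and sample sizes in it are hypothetical.

    from math import sqrt

    def hedges_g(m1, sd1, n1, m2, sd2, n2):
        # Standardized mean difference (Hedges' adjusted g) with its
        # large-sample variance; all inputs in this sketch are hypothetical.
        s_pooled = sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
        d = (m1 - m2) / s_pooled
        j = 1 - 3 / (4 * (n1 + n2) - 9)              # small-sample correction
        g = j * d
        var = (n1 + n2) / (n1 * n2) + g ** 2 / (2 * (n1 + n2))
        return g, var

    def pool_random_effects(effects):
        # DerSimonian-Laird random-effects pooling of (estimate, variance) pairs.
        k = len(effects)
        w = [1 / v for _, v in effects]
        mean_fixed = sum(wi * y for wi, (y, _) in zip(w, effects)) / sum(w)
        q = sum(wi * (y - mean_fixed) ** 2 for wi, (y, _) in zip(w, effects))
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (k - 1)) / c)           # between-study variance
        w_re = [1 / (v + tau2) for _, v in effects]
        pooled = sum(wi * y for wi, (y, _) in zip(w_re, effects)) / sum(w_re)
        se = sqrt(1 / sum(w_re))
        i2 = 100 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

    # Hypothetical (mean, SD, n) per intervention and control arm of three studies.
    studies = [((4.1, 2.0, 40), (5.0, 2.1, 42)),
               ((3.8, 1.8, 55), (4.6, 1.9, 53)),
               ((4.4, 2.3, 30), (4.9, 2.2, 31))]
    effects = [hedges_g(*i, *c) for i, c in studies]
    pooled_smd, ci_95, i_squared = pool_random_effects(effects)
    print(f"Pooled SMD {pooled_smd:.2f}, 95% CI {ci_95[0]:.2f} to {ci_95[1]:.2f}, I2 = {i_squared:.0f}%")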
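The re-expression step amounts to a single multiplication; the following worked example uses entirely hypothetical numbers. If the pooled effect were an SMD of -0.40 and the control-group standard deviation of the chosen measure (say, a 0-100 disability questionnaire) were 16 points, then

    \Delta_{\text{re-expressed}} = \mathrm{SMD}_{\text{pooled}} \times SD_{\text{control}} = -0.40 \times 16 = -6.4 \ \text{points},

and this value would be compared against the published minimal important change for that questionnaire; if that threshold were, hypothetically, 10 points, the pooled effect would fall short of clinical importance.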
