Marianne Welmers

analyses should be conducted. Because of the small number of studies and effect sizes included in our meta-analyses on the split alliance–outcome and alliance improvement–outcome associations, the more traditional approach of log-likelihood-ratio tests might not yield significant results even when there is, in reality, substantial variance. Applying the 75% rule of Hunter and Schmidt is an appropriate solution to this power problem (Assink & Wibbelink, 2016). For the sake of completeness, we also report the results of two separate one-tailed log-likelihood-ratio tests in which the deviance of the full model was compared with the deviance of a model excluding one of the variance parameters. The sampling variance of the observed effect sizes (Level 1) was estimated using the formula of Cheung (2014), as is appropriate for multilevel analysis (Assink & Wibbelink, 2016). The log-likelihood-ratio tests were one-tailed, whereas all other tests were two-tailed.

When models were extended with categorical moderators consisting of three or more categories, the omnibus test of the null hypothesis that all group mean effect sizes are equal followed an F-distribution. We estimated all model parameters using the restricted maximum likelihood (REML) estimation method, and before conducting the moderator analyses, we centered each continuous variable around its mean. To enable analysis of categorical variables with three or more categories, we created (dichotomous) dummy variables (Tabachnick & Fidell, 2012). These dummies contain all the information included in the original categorical variable. Given that our moderators were tested in multilevel regression analyses, the intercept represents the reference category, while the dummies (the number of categories minus one) reveal if, and to what extent, the other categories deviate from the reference category.
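The steps above can be sketched in R with the metafor package. The data frame `dat` and its column names (`yi`, `vi`, `study`, `es_id`, `year`, `design`) are hypothetical placeholders, not the variables used in this study; the `sigma2` argument of `rma.mv` is used to fix one variance component to zero for each log-likelihood-ratio test.

```r
library(metafor)

# Hypothetical data frame 'dat' with one row per effect size:
#   yi     observed effect size
#   vi     its sampling variance (Level 1)
#   study  study identifier (Level 3)
#   es_id  effect-size identifier within study (Level 2)

# Three-level random-effects model, estimated with REML
full <- rma.mv(yi, vi, random = ~ 1 | study/es_id,
               data = dat, method = "REML")

# Model with the between-study (Level 3) variance fixed to zero
no_l3 <- rma.mv(yi, vi, random = ~ 1 | study/es_id,
                data = dat, method = "REML", sigma2 = c(0, NA))

# Model with the within-study (Level 2) variance fixed to zero
no_l2 <- rma.mv(yi, vi, random = ~ 1 | study/es_id,
                data = dat, method = "REML", sigma2 = c(NA, 0))

# Deviance comparisons; anova() reports a two-tailed p-value,
# which is halved for the one-tailed test of a variance component
anova(full, no_l3)
anova(full, no_l2)

# Moderator analyses: center a continuous moderator, and let
# factor() create the dummies for a categorical moderator so that
# the omnibus test of equal group means follows an F-distribution
dat$year_c <- dat$year - mean(dat$year)
mod_cont <- rma.mv(yi, vi, mods = ~ year_c,
                   random = ~ 1 | study/es_id, data = dat, test = "t")
mod_cat  <- rma.mv(yi, vi, mods = ~ factor(design),
                   random = ~ 1 | study/es_id, data = dat, test = "t")
```

With `test = "t"`, metafor reports t-tests for individual coefficients and an omnibus F-test for the dummy set, matching the testing approach described above.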
Analysis of Publication Bias

A problem in the overall estimation of effect sizes in a meta-analysis is that studies with non-significant or negative results are less likely to be accepted for publication by journals. Rosenthal (1995) referred to this problem as the 'file drawer problem'. Although obtaining and including unpublished studies as far as possible should mitigate this problem, we also examined file drawer bias by applying two conventional methods. First, we performed an Egger regression test (Egger, Smith, Schneider, & Minder, 1997), which assesses the degree of funnel plot asymmetry as measured by the intercept from a regression of standard normal deviates (each effect size divided by its standard error) against the estimate's precision (the inverse of the standard error). A significant Egger regression test is an indicator of funnel plot asymmetry. We performed the funnel plot asymmetry test using the "regtest" function of the metafor package in R (Viechtbauer, 2015). To account for the dependency of effect sizes, we added the standard error of the effect size as a moderator to the Egger regression model.
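A minimal sketch of this procedure in metafor, assuming the same hypothetical data frame `dat` with effect sizes `yi`, sampling variances `vi`, and identifiers `study` and `es_id` (these names are illustrative, not taken from the study):

```r
library(metafor)

# Two-level case: regtest() applied to a standard random-effects fit,
# using the standard error as the predictor (Egger's regression test)
res <- rma(yi, vi, data = dat)
regtest(res, model = "rma", predictor = "sei")

# Three-level case: regtest() does not apply directly, so the standard
# error (sqrt of the sampling variance) is added as a moderator by hand,
# which accounts for the dependency of effect sizes within studies
egger_mv <- rma.mv(yi, vi, mods = ~ sqrt(vi),
                   random = ~ 1 | study/es_id, data = dat)
summary(egger_mv)
```

A significant slope for `sqrt(vi)` in the multilevel model plays the same role as a significant Egger regression test: it indicates funnel plot asymmetry.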
