Data were summarized with descriptive statistics. Frequencies and percentages are presented for categorical variables, and means with standard deviations (SD) or medians with ranges for continuous variables. We expressed the inter- and intraobserver agreement among professional groups as a proportion of agreement with a 95% confidence interval (CI), because agreement answers our research questions better than Cohen's kappa. Cohen's kappa is a widely used measure of reliability that indicates how well subjects or objects can be distinguished from each other, whereas agreement measures assess to what extent classifications or scores are identical.19 To calculate the degree of agreement between and within the professional groups, we used the agreement formula and calculations of the Agree R package (https://github.com/iriseekhout/Agree), including a 95% CI.

We analyzed the interobserver agreement for the aCTG classifications (reassuring, non-reassuring) and the various aCTG components (baseline heart frequency, variability, and the presence of accelerations, decelerations, and contractions) for each possible pairing of two participants. With at least five assessors in each professional group, there was a minimum of 10 different assessor pairs per group, calculated as m(m − 1)/2, where m is the number of assessors. The proportion of agreement for each set of 10 aCTGs was therefore based on at least 100 pairwise comparisons (see the first sketch below).

For the interobserver agreement between the professional groups, the classifications of 10 aCTGs from each primary care midwife's first assessment were compared with those of each hospital-based midwife, resident, and obstetrician. Similarly, hospital-based midwives were compared with residents and obstetricians, and residents with obstetricians. A comparison of two professional groups always concerned five versus five assessors, yielding 25 comparisons per aCTG and 250 pairwise comparisons per set of 10 aCTGs. The proportions of agreement of the first and second sets of 10 aCTGs were statistically pooled.

For the interobserver agreement within the professional groups, the proportion of agreement was calculated among the five assessors of a group, so the formula m(m − 1)/2 applies. The results were statistically pooled over the first and second sets of 10 aCTGs.20 For the intraobserver agreement, the first and second assessments of each individual assessor for each aCTG were compared, and the results were statistically pooled over the members of each professional group.20

Whether the four professional groups differed in their proportions of intra- and interobserver agreement was tested with the independent-samples t-test for differences in proportions (see the final sketch below). A P-value below 0.05 was considered statistically significant. Appendix A1 justifies the statistical methods used.
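The following is a minimal, standalone sketch of the pairwise proportion-of-agreement calculation described above. The actual analyses used the Agree package; this sketch instead computes a simple normal-approximation (Wald) interval and ignores the correlation between overlapping assessor pairs, so it is illustrative only. The matrix `ratings` and the function name `pairwise_agreement` are hypothetical.

```r
# Sketch: proportion of agreement over all assessor pairs with a Wald 95% CI.
# 'ratings' is a hypothetical matrix: rows = aCTGs, columns = assessors,
# entries are classifications ("reassuring" / "non-reassuring").

pairwise_agreement <- function(ratings, conf.level = 0.95) {
  m <- ncol(ratings)                       # number of assessors
  pairs <- combn(m, 2)                     # all m(m - 1)/2 assessor pairs
  # For each aCTG and each pair: TRUE if the two classifications are identical
  agree <- apply(pairs, 2, function(p) ratings[, p[1]] == ratings[, p[2]])
  n <- length(agree)                       # total pairwise comparisons
  p_agree <- mean(agree)                   # proportion of agreement
  z <- qnorm(1 - (1 - conf.level) / 2)
  se <- sqrt(p_agree * (1 - p_agree) / n)  # simple normal-approximation SE
  c(agreement = p_agree,
    lower = max(0, p_agree - z * se),
    upper = min(1, p_agree + z * se),
    comparisons = n)
}

# Example: 5 assessors and 10 aCTGs give 10 pairs and 100 pairwise comparisons
set.seed(1)
ratings <- matrix(sample(c("reassuring", "non-reassuring"), 50, replace = TRUE),
                  nrow = 10, ncol = 5)
pairwise_agreement(ratings)
```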
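Statistical pooling over the first and second sets of 10 aCTGs can be understood, under the assumption that pooling here means combining counts across sets, as the overall proportion across all pairwise comparisons, which equals a weighted average of the two set-level proportions. The counts below are hypothetical.

```r
# Sketch: pooling agreement over two sets of 10 aCTGs as the overall
# proportion across all pairwise comparisons (assumed interpretation).
pool_agreement <- function(agreed, total) sum(agreed) / sum(total)
pool_agreement(agreed = c(92, 88), total = c(100, 100))  # hypothetical counts
```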
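Finally, a sketch of the between-group comparison. In R, `prop.test()` performs the standard two-sample test for equality of proportions, which with sample sizes of this order is essentially equivalent to the independent-samples t-test on proportions described above; the counts are hypothetical.

```r
# Sketch: do two professional groups differ in their proportion of agreement?
agree_group1 <- 180; n_group1 <- 200   # hypothetical: 180/200 comparisons agreed
agree_group2 <- 160; n_group2 <- 200   # hypothetical: 160/200 comparisons agreed

test <- prop.test(c(agree_group1, agree_group2), c(n_group1, n_group2))
test$p.value < 0.05                    # significance at the 5% level
```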