
with peri-anastomotic fluid and/or the presence of a peri-anastomotic abscess. Any signs of disrupted anastomotic integrity will also lead to this score. This category implies a very high level of suspicion for AL, and a subsequent intervention is suggested. The final CAL-RADS 5 category indicates proven AL after endoscopic examination or a surgical intervention.

CT scoring procedure
CT images will be extracted anonymously from the picture archiving and communication system (PACS). The CAL-RADS study will involve observers (radiologists) with varying levels of experience in interpreting abdominal CT scans for suspected AL after colorectal surgery. A total of six observers will score the CT scans with the proposed CAL-RADS score. These observers will be blinded to all extracted patient data regarding AL outcomes. First, five test cases will be assessed by each radiologist and then discussed in a plenary session. These test cases will calibrate the radiologists’ scoring and identify any discrepancies or biases in their interpretations before the official assessment. The plenary discussion will ensure that all observers are well versed in the criteria and protocol, leading to more consistent and accurate readings. All 150 included CT scans will be assessed using a standardized Excel sheet in which the criteria are scored and comments can be added if necessary. Afterwards, the final CAL-RADS scores will be entered into the Castor database for every patient.

Statistical analysis
Statistical analysis will be performed using SPSS (IBM SPSS Statistics for Mac, version 27; IBM Corp., Armonk, NY, USA) and GraphPad Prism (version 8.0.0; GraphPad Software, San Diego, CA, USA). Data will be presented as mean ± standard deviation or as median with interquartile range, depending on the normality of the data. A 4 × 4 confusion matrix will be constructed separately for each observer, in which the CAL-RADS score of that observer is compared with the median CAL-RADS score of the remaining observers. Subsequently, a similar matrix will be computed by aggregating all individual 4 × 4 tables. To assess interobserver agreement, Fleiss’ kappa (κ) will be calculated among observers. The κ values will be derived by comparing the CAL-RADS scores of each observer to the median score of the remaining observers. Interobserver agreement will be categorized as slight (κ = 0.01–0.20), fair (κ = 0.21–0.40), moderate (κ = 0.41–0.60), substantial (κ = 0.61–0.80) or almost perfect (κ = 0.81–1.00) [5].
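To make the matrix construction and agreement statistics concrete, a minimal sketch in Python is shown below. This is an illustration only: the study itself will use SPSS and GraphPad Prism, the scores here are randomly generated placeholders, and the grouping of CAL-RADS scores into four categories (to match the 4 × 4 matrix), the per-observer Cohen-style κ against the median of the remaining observers, and the use of statsmodels for Fleiss’ κ are assumptions, not part of the protocol.

```python
# Minimal sketch of the described analysis (illustration only; the
# placeholder data, category grouping, and library choices are assumptions).
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa, aggregate_raters

rng = np.random.default_rng(0)
# Hypothetical data: rows = 150 CT scans, columns = 6 observers;
# values are CAL-RADS scores grouped into four categories (assumed 2-5).
scores = rng.integers(2, 6, size=(150, 6))

categories = [2, 3, 4, 5]
k = len(categories)
n_cases, n_obs = scores.shape

def cohen_kappa(cm):
    """Cohen's kappa computed directly from a confusion matrix."""
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

# Per-observer 4 x 4 confusion matrix: the observer's own score versus
# the median score of the remaining five observers.
per_observer = []
for obs in range(n_obs):
    others = np.delete(scores, obs, axis=1)
    ref = np.median(others, axis=1).round().astype(int)
    cm = np.zeros((k, k), dtype=int)
    for own, med in zip(scores[:, obs], ref):
        cm[categories.index(own), categories.index(med)] += 1
    per_observer.append(cm)
    print(f"observer {obs}: kappa vs. median of others = {cohen_kappa(cm):.3f}")

# Aggregated matrix: element-wise sum of the six individual tables.
aggregated = np.sum(per_observer, axis=0)

# Fleiss' kappa across all six observers.
counts, _ = aggregate_raters(scores)
print("Fleiss' kappa:", round(fleiss_kappa(counts), 3))
```

Leaving each observer out of their own reference (the median of the remaining five) avoids comparing a rating against itself, which would otherwise inflate the apparent agreement.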
