
Prior to Delphi study

The study started with compiling a list of indicators originating from existing monitoring tools or documents from local coalitions, scientific and grey literature, and the indicator set used in the national Solid Start monitor (16, 22-26). The list of possible indicators was long (>350 in a first attempt) because the scope of the first thousand days is broad. As rating this many indicators was expected to burden the participants, we decided to first select topics rather than indicators directly. One researcher (JM, health scientist) categorized and named the topics in line with existing monitoring tools and documents, and another researcher (IB, former midwife and advisor to integrated maternity care organizations) cross-checked this. Topics were categorized and named based on shared characteristics and common themes among indicators (e.g. indicators relating to low household income, debts, receiving social benefits and financial stress were grouped under the topic 'poverty'). Differences were discussed by three researchers (JM, IB and JS (expertise in health economics)) until consensus was reached. We excluded topics that 1) did not have at least one operationalized indicator, or 2) exceeded the time period of the Solid Start program (i.e. beyond the first thousand days of life). Topics were classified into the three phases of Solid Start (preconception, pregnancy and after birth) so that each phase would eventually have a sufficient number of indicators. Some topics were relevant in more than one phase.

Expert panel

The expert panel consisted of a heterogeneous group of experts involved in Solid Start activities and experienced with monitoring, geographically distributed across the Netherlands (i.e. both rural and urban areas in the northern, eastern, western and southern parts of the country). We aimed for a balanced representation of experts from practice, policy and research (purposive sampling), including managers of local coalitions, policy makers, policy advisors, epidemiologists, researchers, educators, primary and secondary healthcare providers (e.g. midwives, nurses, gynaecologists, paediatricians) and social workers. We invited members of the monitoring support program (Appendix 1) and their networks ('snowball method'), and we recruited participants through social media, Solid Start newsletters and webpages, and personal invitations. Those interested received more information about the aim, design and voluntary nature of the study. All participants' views received equal weight during the study.

Delphi round 1: questionnaire

In an online questionnaire, the Delphi panel was instructed to rate 121 topics on their relevance for monitoring Solid Start at the local level on a nine-point Likert scale (1 = not relevant at all, 9 = highly relevant). To aid comprehensibility, we provided an example of a possible indicator for each topic. In addition, experts were invited to comment on the topics or to suggest additional topics for each of the three phases in the open text fields of the questionnaire. All ratings were analysed by calculating the median score and level of agreement between experts, following the RAND/UCLA Appropriateness Method user's manual (27). Based on the median scores, topics were classified as either inappropriate (median range 1–3), uncertain (median range 4–6) or appropriate (median range 7–9) (Appendix 1).
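As an illustration of the classification rule above, the following is a minimal sketch in Python; it is not part of the study's analysis, and the function name, the example ratings and the handling of non-integer medians between bands are assumptions.

```python
import statistics

def classify_topic(ratings):
    """Classify one topic from its nine-point Likert ratings using the
    median cut-offs described above (1-3 inappropriate, 4-6 uncertain,
    7-9 appropriate)."""
    median = statistics.median(ratings)
    if median <= 3:
        return "inappropriate"
    elif median <= 6:
        return "uncertain"
    # Non-integer medians between bands (e.g. 6.5) are not specified in the
    # text; here they fall into the higher band, which is an assumption.
    return "appropriate"

# Hypothetical ratings from eight panel members for a single topic
example = [7, 8, 6, 9, 7, 8, 5, 7]
print(classify_topic(example))  # -> appropriate (median = 7.0)
```

In the study itself, this median-based classification was combined with the level of agreement between experts, following the RAND/UCLA Appropriateness Method user's manual (27).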
