Experts participating in Delphi

In the first Delphi round, 31 experts participated (a 54% response rate of the 57 participants of the survey), of whom 21 (36%) completed all the questions in this round. In the second Delphi round, 20 (35%) experts participated, of whom 17 (30%) completed all questions. Table 3 describes the backgrounds of the experts.

Table 3: Background of Delphi experts*

BACKGROUND* OF EXPERT (more than one could apply) | NUMBER OF EXPERTS
Tool developer and/or implementer | 13
Employed by research/knowledge institute (not being a university) | 11
Healthcare professional a,b | 9
Patient (representative) | 6
Professor/lecturer at (applied sciences) university | 6
Policy advisor c | 5
Employed by industry / commercial party | 3
Other d | 4

Legend table 3
* The experts could indicate more than one background, so the total of the backgrounds exceeds the number of experts
a Healthcare professionals worked in long-term care, curative care, mental healthcare and public health
b Healthcare professionals were medical specialists, general practitioners, nurses or paramedics
c Policy advisors were from the Department of Health, the Health Inspectorate or the National Health Care Institute
d Others were: linguists, IT expert or hospital employee

Delphi process

Each question was scored by at least nine participants, making the results valid (153). Six additional tools were suggested, but each by only one participant. There was consensus on the importance of nine tools and on the definition of nine tools; six tools overlapped between these two sets. For one tool, the 'quality assessment framework', the scores on its importance diverged. The remaining tools were scored ambiguously: the divergent Likert scores indicated consensus neither on agreement nor on disagreement about their importance. Notably, the newly emerging tool 'Option Grid™' was the tool most often scored as 'no opinion': eight times for importance and six times for definition.

Most comments of the participants could be categorised into the following issues: can we limit the number of tools; can we merge similar tools; and can we visualise how the tools relate to each other? Looking at the scores and comments, the project group decided to eliminate four tool types, even though there was no consensus on the unimportance of these tools: 'indicator', 'information standard', 'quality assessment framework' and 'information folder' (indicated with f in Appendix B). Taking the definition of a knowledge tool into account, the project group did not consider the first three to be knowledge tools. 'Information folder' was judged too broad by the experts, as it could cover information of any kind. Moreover, it was felt that there was