employ the PRECIS-2 tool, which allows for purposeful decision-making regarding trial design (50). Given that pragmatic trials, like most trials, can be financially demanding, researchers should consider performing a pilot study. Such a pilot study serves to refine study procedures, optimize recruitment strategies, minimize drop-out rates and inform decisions on the allocation of time and budgetary resources (51).

Historically, meta-analyses have paid little attention to the role of context in the observed effect, typically including RCTs conducted in tightly controlled settings. However, pragmatic RCTs often vary strongly in context (e.g., national healthcare system), and these contextual differences may impact the observed effects. Our subgroup analysis based on study country (as described in chapter 6) indicated that the context in which the intervention is implemented may be crucial for overall inference. Given the growing use of pragmatic trials to address the aforementioned challenges in eHealth evaluation, we propose that leading institutions such as the Cochrane Library include an explanatory-pragmatic assessment in their quality assessment. This assessment would not only evaluate the quality of evidence using tools like GRADE, but also consider the external validity and generalizability of study results. This information can aid policymakers and healthcare leaders in assessing the applicability of meta-analysis results to their specific context, considering factors such as study population, available resources and local needs, and in adapting the intervention accordingly in a context-sensitive manner.

Finally, collecting qualitative data is crucial for evaluating eHealth interventions, as it provides valuable insights into the human experience of using these interventions in real-world settings. Qualitative data complements the quantitative data by providing contextual information and a deeper understanding of the reasons behind observed outcomes. While clinical outcomes such as asthma control and medication adherence (chapter 5) offer an objective measure of the impact of the eHealth intervention, they do not shed light on the underlying reasons for those outcomes. In addition, qualitative data can provide insight into potential acceptance or implementation issues (52, 53).

To conclude, eHealth evaluation should be considered a continuous process that includes formative and summative evaluation moments throughout the development and implementation phases (10). In doing so, we recommend that researchers consider alternative designs as better options for evaluating eHealth interventions and include process outcomes and qualitative data to gain a comprehensive understanding of the impact of eHealth interventions on health outcomes.
