
and consistency of results. Second, given that the study focuses on health budget and health apps, study participants were probably more highly sensitised to eHealth than the general population. However, given that eHLQ scores were comparable with previous eHLQ studies, it is unlikely that this has biased our results. The sample was also not fully representative in terms of educational level and nationality, with an overrepresentation of participants with higher education and Dutch nationality. Nonetheless, we encourage researchers to be aware of the multicultural Dutch population when evaluating eHealth literacy in a Dutch setting and to consider the most appropriate language based on the study population. Importantly, our population included both people with and without a current diagnosis (72). This increases the generalizability and applicability of our findings, as the eHLQ has been developed for use in a wide range of settings.

Another limitation may be the use of cognitive interviewing to assess response process and test content. Cognitive interviewing has been criticized and considered inappropriate for people who are less articulate and find it difficult to verbalize their thought process. Consequently, this could result in overestimation or underestimation of response difficulties (i.e., difficulties in articulating thoughts being interpreted by the investigator as response process issues, or the other way around, where people are unable to accurately articulate the problems they encounter). We tried to minimize this limitation by combining think-aloud, as a primarily respondent-driven approach, with scripted probing, as a more interviewer-driven approach. Finally, a limitation of our approach is that we have not evaluated the validity of the eight items we improved. Considering the minor changes made in wording, we expect that the internal structure validity will remain equivalent to that of the original. Nonetheless, in line with The Standards and the principle that tests or instruments are not themselves valid or invalid, but rather are valid for a particular use, we encourage researchers to use this initial validity evidence, build on it, and always consider the validity of the eHLQ in the context of the particular use and intended purpose.

Implications for practice and research

Our findings have implications for the future use of the eHLQ by policy makers, eHealth developers and researchers in understanding people’s eHealth literacy. Researchers should collect relevant contextual data (e.g., experience with technology, current diagnosis) to aid the interpretation of eHealth literacy scores and to understand score differences between groups. For example, we noticed differences in the interpretation of ‘health technology’ and ‘health technology services’ depending on prior experience with eHealth. Likewise, contextual information on current diagnosis and/or extent of healthcare usage can aid score interpretation. In addition to contextual information about the individual, researchers should interpret eHLQ scores in light of the local or national digital healthcare context (macro context) (74). Understanding the digital landscape from a macro perspective, in terms of the delivery, access, integration, and (inter)connectivity of systems and services, is particularly important in the interpretation of scores for domain 6 ‘Access to digital services that work’ and domain 7 ‘Digital services that suit individual needs’.
