Charlotte Poot

a global instrument to measure eHealth literacy. The translated and culturally adapted eHLQ items were found to be highly coherent with the original intended item meanings and demonstrated good internal structure, comparable with that of the original eHLQ (35). All 35 items loaded strongly or moderately on their respective factors. After one modification (i.e., allowing residual correlations with item 26 'I use measurements about my body to help me understand my health'), the model showed good fit to the data. Item 26 showed similar residual correlation issues in a validity study in an Australian population (41) and in a study in Taiwan using a Mandarin version (72). In fact, both studies found a lower factor loading (0.36 and 0.56, respectively) than our study (0.61). Hence, it is unlikely that the observed validity issue with item 26 results from translation or cultural adaptation; rather, it appears to be a characteristic of the item that is notable across settings and languages. We also tested invariance of item loadings and thresholds across age, gender, education and current diagnosis groups, and found that only a small subset of loadings differed between groups, indicating that the eHLQ measures largely the same construct in the same manner in different groups.

Multi-group comparison showed that, overall, younger people scored higher across domains. This is in line with other literature demonstrating that older age is associated with lower eHealth literacy (73). We also observed that people with lower education scored lower overall than those with higher education. This is in line with previous eHLQ studies and with the notion that, generally, people with lower education use eHealth less often (72, 73). Interestingly, and in contrast with previous studies, in our study people with lower education scored higher on the domain 'feel safe and control'.
At the same time, items that loaded on this domain showed metric noninvariance based on education. Hence, the higher score could potentially result from a difference in interpretation between people with low and high education, rather than reflect true differences in domain 4 'feel safe and control'. Future research should explore these observed differences further.

Our approach of collecting and combining three sources of validity evidence to inform the final translation and cultural adaptation of a questionnaire is a novel, highly disciplined and transparent approach to validity testing, informed by contemporary validity testing theory (45). By combining insights from the cognitive interviews with results from CFA and invariance testing, we were able to leverage both the depth of qualitative data and the quantitative power of large-sample analysis and psychometric evaluation methods. While cognitive interviews were successful in identifying items with potential problems (in wording, phrasing, or resonance with world views), CFA helped us understand if and how interpretation issues may affect the internal structure. Conversely, the qualitative data helped to interpret CFA results, such as lower standardized factor loadings in some subgroups, which may indicate interpretation difficulties. Low factor loadings by themselves, however, do not indicate where the problem lies. Therefore, in-depth exploration of response processes and of how items are interpreted, using cognitive interviewing, was important. Hence, with our approach, we were able to better understand the
