prototype (Maguire, 2001). The first evaluation was performed to determine professionals' goals and needs at the content, system, and service levels. The second evaluation was conducted to determine how participants valued the recommendations (content) and to obtain an indication of user acceptance of the prototype. We piloted both evaluation rounds with 2 researchers to refine the protocol. The first evaluation was conducted online (in accordance with COVID-19 regulations), while the second was conducted either online (Microsoft Teams; Microsoft Inc) or face-to-face, depending on the participant's preference. The sessions lasted 45 to 60 minutes and were recorded using a voice recorder or through Microsoft Teams. The number of interview sessions in each evaluation round was guided by the input received from the participants. After consultation with the research team, it was concluded that both evaluation rounds had yielded sufficient data to proceed with the development of the website.

In the first evaluation round, we started by asking participants about their background, including their role, age, experience with eHealth, and experience with the target group. Subsequently, we discussed the 3 low-fidelity prototypes. We first introduced the participant to a predetermined scenario. The scenarios were written according to different roles: eHealth developer, researcher, and health care provider. An example scenario for researchers was: "Imagine you are involved in a study on eHealth and people with a low SEP. The problem is that there is too much information available. You are looking for a central place to find all the information.
A colleague tells you about an online guide for the development of eHealth interventions for people with a low SEP. You decide to visit it. Your goal is to quickly get a good overview of the information and to quickly access the information sources through the website." We also asked participants to try each prototype and give a brief verbal evaluation. In the last part, we asked questions about the prototypes and the content, for example: "Which prototype do you like the most? Which specific themes or topics do you want to see in the guide?"

In the second evaluation round, we again started by collecting relevant background information from the new participants. Thereafter, we asked all participants to perform 5 tasks while verbalizing their thoughts: (1) explore the pages, (2) find a barrier on a specific topic, (3) find an associated facilitator, (4) find the associated practical tips, and (5) find the associated user perspective. Finally, at the end of the interview, we administered a short questionnaire to evaluate the prototype and assess the likelihood that study participants would accept the final guide. We
RkJQdWJsaXNoZXIy MTk4NDMw