
5-2-2 Empirical explorations

While a generally agreed framework for automation transparency is lacking, the basic theoretical foundation is widely shared across domains. Transparency can be considered an attribute of a system that aspires to communicate the system's cognitive processes in order to better account for its behavior. In general, empirical research supports the hypothesis that increasing system transparency increases operator understanding of the system. For example, Dzindolet et al. 174 found that trust in and reliance on a decision support system for target detection increased when operators were provided with a rationale for why the automation might err. Recommender system research has shown that transparency in the form of explanations can help users make more accurate decisions and increase their satisfaction with using the system, 201 improve their trust in and acceptance of recommendations, 201, 209 and benefit their understanding of system recommendations 111 and system behavior. 196

In a recent study, Sadler 195 investigated how trust and reliance were affected by the transparency of a ground-based decision support system aiding “dispatchers” (roles filled by airline pilots) in choosing a suitable diversion airport. Results showed that trust increased, while the need to consider options decreased, when system transparency was raised by providing either probability estimates for successful diversions (intermediate transparency) or probability estimates together with supporting statements describing the information considered and how it had been interpreted (logic transparency); a sketch of these two conditions is given at the end of this section.

Benefits of automation transparency for trust and workload have also been found in experiments where the SAT model 200 was used to guide interface visualization design for supervising and interacting with an autonomous unmanned ground vehicle, the autonomous squad member (ASM), in a military operational context. In two separate studies using inexperienced operators, trust in the ASM increased with increasing SAT-level transparency. In addition to information reflecting the ASM’s current status (level 1 SAT), Boyce et al. 215 increased transparency by adding environmental constraints affecting the ASM’s activities (level 2 SAT, pertaining to the agent’s reasoning), either alone or together with a visualized projection of agent status and uncertainty (level 3 SAT). In the other study, Selkowitz et al. 216 provided level 2 SAT by means of a symbol expressing the motivation underlying the ASM’s reasoning, and level 3 SAT by means of symbols indicating the consequences of decisions for the ASM’s resource usage. Furthermore, the two studies showed that increasing transparency by means of the SAT framework allows additional information to be presented without increasing operator workload (see the data-model sketch at the end of this section).

Helldin 191 showed in a series of experiments that transparency, in the form of visualized meta-information on system reliability, uncertainty, and underlying reasoning, benefited appropriate trust calibration and task performance among various skilled operators. For instance, trust was more appropriately calibrated.
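To make Sadler’s transparency manipulation concrete, the sketch below renders one diversion recommendation at each level. This is a minimal, hypothetical illustration: the class, field, and function names, the message formats, and the example values are all invented here and are not taken from the study itself.

```python
from dataclasses import dataclass


@dataclass
class DiversionOption:
    """One candidate diversion airport as the aid might represent it (hypothetical)."""
    airport: str
    p_success: float   # estimated probability of a successful diversion
    rationale: str     # information considered and how it was interpreted


def render(option: DiversionOption, transparency: str) -> str:
    """Render one recommendation at a given transparency level."""
    if transparency == "baseline":
        # Recommendation only, no insight into the system's reasoning.
        return f"Divert to {option.airport}."
    if transparency == "intermediate":
        # Intermediate transparency: add a probability estimate for success.
        return f"Divert to {option.airport} (P(success) = {option.p_success:.0%})."
    if transparency == "logic":
        # Logic transparency: probability plus supporting statements.
        return (f"Divert to {option.airport} (P(success) = {option.p_success:.0%}). "
                f"Basis: {option.rationale}.")
    raise ValueError(f"unknown transparency level: {transparency}")


option = DiversionOption("KDEN", 0.87, "weather above minima, fuel margin adequate")
print(render(option, "intermediate"))
print(render(option, "logic"))
```

The point of the sketch is only the incremental structure: each level adds information about the system’s reasoning on top of the bare recommendation.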
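The incremental structure of the SAT levels used in the ASM studies can likewise be sketched as a simple data model. The sketch below is a hypothetical illustration assuming Python dataclasses; the SAT model prescribes information content, not any particular implementation, and all class and field names here are invented.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class Level1Status:
    """Level 1 SAT: the agent's current status and actions."""
    position: Tuple[float, float]
    current_action: str


@dataclass
class Level2Reasoning:
    """Level 2 SAT: reasoning behind the behavior, e.g. environmental
    constraints (as in Boyce et al.) or a motivating goal (as in Selkowitz et al.)."""
    constraints: List[str] = field(default_factory=list)
    motivation: str = ""


@dataclass
class Level3Projection:
    """Level 3 SAT: projected outcomes and the uncertainty around them."""
    predicted_action: str
    predicted_resource_use: float   # e.g. projected fraction of battery consumed
    uncertainty: float              # e.g. spread of the projection


@dataclass
class TransparencyReport:
    """Payload an ASM-like agent could push to its operator display.
    Higher levels are optional, mirroring how the studies added SAT
    levels incrementally on top of level 1."""
    level1: Level1Status
    level2: Optional[Level2Reasoning] = None
    level3: Optional[Level3Projection] = None


# A level 1+2+3 report, as in the highest-transparency study conditions.
report = TransparencyReport(
    level1=Level1Status(position=(51.2, 4.4), current_action="follow squad"),
    level2=Level2Reasoning(constraints=["steep terrain ahead"],
                           motivation="maintain formation"),
    level3=Level3Projection(predicted_action="reroute east",
                            predicted_resource_use=0.4, uncertainty=0.1),
)
print(report)
```

Making the higher levels optional fields reflects the experimental manipulation: level 1 information is always shown, and levels 2 and 3 are layered on top without replacing it.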
