Carl Westin

toward disuse that increases over time, as controllers do not give the system the opportunity to demonstrate proper performance. 156 On the other hand, it could also be that controllers are simply reluctant to take advice from any outside "agent," be it a machine or a colleague. To explore this notion, the term source bias was defined to refer to the potential differences in operators' trust in a source, and reliance on its advice, based on the presumed source of that advice. This paper describes a human-in-the-loop ATC simulation investigating the effect of conflict resolution advisory conformance and advisory source on controllers' trust in the respective source (human or automated), acceptance of advice, and overall task performance.

4-2 Trust in and credibility of decision aids

Formally, trust implies that there is a trustor (i.e., the one who exerts trust), a trustee (i.e., the one who is trusted), and the circumstances in which they interact. Automation trust is most commonly defined as "the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability" (p. 51). 15 This definition highlights four key aspects: 1) trust is a cognitive construct of the individual human; 2) trust is considered in relation to task objectives; 3) trust is considered in relation to the environment in which the human and agent (e.g., automation) interact; and 4) the outcome is uncertain and associated with risk. Automation trust research has been paralleled by credibility research in the communication and information systems fields. 146 Tseng and Fogg 157 argued that trust generally refers to the reliability and dependability of a technology, while credibility more specifically refers to the believability of its output.
According to their distinction, terms such as "trust in the information" and "trust in the advice" refer to perceived credibility rather than trust, while "trust the system" refers to trust. Researchers generally distinguish between trust and credibility attitudes before and during use. 47, 110, 157, 158 Attitudes before use tend to be more generic and stable, influenced by, for example, previous automation experiences in general (propensity to trust) in combination with the affinity for a specific system (dispositional trust). 110 Attitudes can also be driven by assumptions and stereotypes of a specific system (presumed credibility), or influenced by third-party references, framing, and labeling (reputed credibility). 157 Attitudes during use tend to be more short-term, dynamic, and reactionary, 15, 47 primarily influenced by firsthand experience (experienced credibility). 157 Additional influences include the perceived appearance of the technology (i.e., "judging the book by its cover"), such as how attractive the interface is, 159 and the emotional response it provokes (surface credibility). 157
