
effects of source and pedigree on trust before and during use of a signal detection aid (an x-ray luggage screening task). They found that, overall, trust in the automated source was higher across pedigree levels (novice or expert). Before use, however, the expert human source was trusted more than the expert automated source. Notably, participants' (students') trust in and perceived reliability of the two sources varied differently across pedigree levels. The authors concluded that perceived reliability is more sensitive to the aid's performance than trust is. 167

Human sources seem to be judged less by their performance and more by their immutable personality-related credibility attributes, including education, experience, effort, and honesty. 170, 171 People tend to expect less of human sources and more easily forgive the errors they make. 49, 165 Additionally, differences in trust and acceptance may be affected by social mechanisms that influence interpersonal interaction but not human-automation interaction, such as shared outcome responsibility, the other person's perception of oneself (e.g., trustworthiness, performance), unwillingness to share credit, fear that the other will fail, or losing face (being perceived as not working or not contributing). 160

In contrast, automated sources appear to be perceived as less fallible, more objective, rational, 172, 173 and stable. 174 In addition, automated sources are appraised by fewer credibility attributes (mainly knowledge) 171 and are more strictly judged by their performance. 50, 160, 171 Errors made by an automated source may be forgiven less readily because people cannot understand how the automated aid reasoned and arrived at its decision. Errors are therefore more likely to be attributed to hidden, internal, and permanent factors of the automated source rather than to situational factors. 50

Several recommendations have been proposed for appropriately calibrating trust in automated sources, including increasing transparency about why the system might err, 171, 174 managing expectations by framing the system's reliability 165, 175 or providing character descriptions, 48, 110, 171, 173 and anthropomorphism. 48, 176 Measures of how much a source is liked have been found to predict both reliance on and trust in that source. 49, 177 Taken together, it is reasonable to assume that the high expectations of automated sources (e.g., the perfect automation schema) can partly be attributed to their portrayal and marketing as superior, infallible, and credible sources. 157

Over time, however, research indicates that people calibrate an appropriate reliance behavior that reflects the actual performance and reliability of the source. 48, 49, 167 Acceptance seems to be increasingly determined by the source's performance, while attitudes such as trust, confidence, and liking become less important. 49, 146, 171 In support of such performance calibration, Wickens et al. found that controllers' use of a conflict detection decision aid was largely unaffected despite high false alarm rates (45%). Safety was not compromised, as controllers
