Carl Westin

4-5 Discussion

Results could not establish the prevalence of any source bias affecting controllers' trust in the respective advisory source, or the acceptance of and agreement with conformal and nonconformal advisories. Results suggest that the experienced controllers applied an "accept all" strategy. Because of the small sample size and the observed ceiling effect, it was not possible to draw any meaningful conclusions on the effects of source and conformance rate. Surprisingly, questionnaire responses and comments made by participants indicate that advisories were accepted even when participants did not agree with them. Unfortunately, the generally high agreement ratings across conditions did not support this finding, despite the agreement rating data displaying a larger spread than the acceptance data.

Similar to the simulator results, the SBQ questionnaire results did not indicate any notable differences in participants' trust perceptions of the two advisory sources. Overall, trust was high in both sources, supporting the high acceptance rate. Similar patterns of universal acceptance have, however, been observed in other studies. For example, when investigating participants' (undergraduate students) acceptance behavior with automated diagnostic aids, [73] observed two contrasting automation utilization strategies. One group agreed with the aid in the majority of all trials, even when its diagnosis was wrong (which it was in 20% of trials). The author suggested that participants did so in order to assess aid reliability accurately, without confounding it with their own decision-making reliability.

In contrast, the SBVAS questionnaire results indicated a clear preference for the human source. Although effects were small, participants' responses indicated that the human source provided safer resolution advisories, and solutions more similar to how participants would have solved the conflict.
In contrast, automation was perceived as riskier and more difficult to work with. These differences were noted even though the solutions were identical between the source conditions.

4-5-1 Trust measurements and time

The trust preference for the human source may appear to contradict previous research arguing for a widespread general preference for an automated source (i.e., automation bias and the perfect automation schema). One important difference is that the present study investigated trust after using the automation, whereas much previous research has considered perceptions held before use. The trust measures reported in this paper reflect the experience of using the automation rather than the dispositional attitudes of trust that exist before use. As suggested by previous research, the factors influencing trust perceptions vary depending on whether trust measures are collected before or after use.
