Carl Westin

7-1 Retrospective 153

ment of LOA frameworks because of their simplistic focus on the technology, task allocation, and discrete authority levels. 258–260

Complex and demanding scenarios were essential for the simulations because of the expected, and more pronounced, benefits of automation during high workload and stress. 45, 51, 127, 129 For simplicity, traffic count was used to vary complexity at two levels, high and low. Results showed that controllers were more accepting of advisories, indicated higher agreement with them, and responded to them faster under complex conditions. This could be interpreted as indicating that controllers, under the time pressure of complex conditions, did not fully evaluate advisories and instead prematurely accepted them. Rather, results indicate that controllers adequately evaluated advisories and were satisfied with their decision to accept or reject them. During low complexity (and low perceived difficulty), controllers tried alternative solutions to conflicts. Note that scenario difficulty was not varied in the Source bias and Transparency studies because of the smaller sample sizes and the data collected.

Source bias. Previous research has shown that operators' willingness to accept or trust advice is influenced by the perceived credibility and expertise of the adviser. 47, 50, 68, 157, 160, 161, 163, 164 Such effects have been obtained in studies simply by framing the same source differently. 48, 110, 165, 171, 173, 175 It was hypothesized that this could explain why controllers in the First empirical study did not fully accept conformal advisories: that is, because they were biased against advice from what they perceived to be an automated source. To investigate this, resolution advisories (varied by conformance) were presented as derived from either an automated or a human source.
Although questionnaire data showed a slight preference for the human adviser (portrayed as an air traffic controller), no effects of source bias were found in relation to the acceptance of advisories in the simulation. However, the manipulation was subtle, with the different source information provided in instructions prior to the simulation and in a text accompanying advisories during the simulation. Controllers may not have reflected on the different sources during simulation runs.

It should be noted that the Source bias study has a philosophical question at its heart, the answer to which can help us better understand how people interact differently with automated systems and with humans. In many future contexts, humans will be assigned to work with an automated agent, so there will be little doubt as to who the source is. However, if, as indicated by some research, 48, 114, 118, 176 automation is treated more as if it were human the more human-like (i.e., anthropomorphic) it is, then, depending on interaction and system goals, automation could be designed for the purpose of being perceived as a certain type of source (e.g., human or automation).