
Source bias is a serious issue that historically has contributed to several aviation accidents (e.g., Überlingen [150] and Asiana Airlines Flight 214 [151]). An operator's trust and acceptance, it seems, must be calibrated to the reliability and performance of the automated aid. [70] Miscalibrations of trust carry specific and predictable human performance problems. [13, 15, 152] With insufficient trust, a system is likely to be disused, with the consequence that its benefits are not realized. Excessive trust, on the other hand, raises the potential problem of misuse, with the operator blindly accepting the operation of the automated aid. [153]

Much of automation trust research builds on theories of interpersonal (human-to-human) trust. [15, 70, 72] Given that automation is becoming more capable, and increasingly plays the role of strategic partner and adviser, the analogy seems appropriate. The human template is increasingly being used in design, with automated aids increasingly resembling human beings, not only in appearance but also in cognition and behavior. This deliberate direction in design can partly be attributed to the realization that human-like technology yields human-like responses: humans seem to prefer to interact with human-like technology. Such increased affection for human-like technology has also been explored in fiction, such as Spike Jonze's movie Her.

To achieve acceptable and effective teamwork between human and machine, it is necessary to develop automation that better acknowledges and responds to individual differences. [154, 155] There is, however, an underlying philosophical question regarding the degree to which humans and automated aids are perceived similarly, and whether it is sound to design decision aids based on a human template. Several past projects have explored the potential benefits of strategic aiding automation in ATC, but until now all have been limited in one important regard: they could not ensure that the automation's strategy matched that of the human. The first empirical study we conducted as part of the ATC MUFASA project investigated the influence of strategically conformal conflict resolution advisories on advisory acceptance and task performance. Situation-specific solutions that conformed to the individual controller's preferred way of solving that conflict (i.e., conformal advisories) were accepted more often, rated higher, and responded to faster than were non-conformal advisories. [101] Still, controllers rejected some of the conformal solutions (around 25%), all of which they (mistakenly) believed were derived from the automated CD&R aid when in reality they were replays of the individual's own previously recorded solution for the same conflict.

The rejection of conformal advisories in our previous study led us to question whether such rejections were driven by a potential bias against automation. One candidate explanation is that controllers have a disposition against the use of automation. If such a bias persists against automation, there might be a tendency
