
ABSTRACT

Our understanding of automation trust is often compared and contrasted with our knowledge of interpersonal trust. Research in this area indicates that trust in and acceptance of problem-solving advice vary depending on the presumed source of the advice. The objective of this paper is to examine the effects of source bias and advisory conformance, including their interaction, on controllers' trust and acceptance of conflict resolution advisories. Five experienced controllers participated in a real-time simulation. Source bias was investigated by presenting advisories as generated either by the automated system or by another controller. Advisory conformance was investigated by providing advisories that either matched a controller's own solution (conformal) or another participating controller's contrasting solution (nonconformal). Questionnaire responses showed a clear preference for the human source over the automated one. These perceptual differences were, however, not reflected in the simulation results. In part, this can be explained by controllers sometimes accepting advisories even though they disagreed with them. Findings suggest that human and automated advisers are perceived differently, which supports previous research, but that these source-related differences have only a small effect on advisory acceptance during task execution.

4-1 Introduction

A long-held philosophical question has been whether automation can ever be trusted to the same extent that we can trust other humans. In fiction, automation is often characterized as "too dumb" to trust, or "too smart" to the point that its artificial intelligence exceeds our own and "they" eventually decide to overthrow "us," the humans. Despite the mind-boggling visions we have grown accustomed to seeing in literature and on screen (such as Stanley Kubrick's classic 2001: A Space Odyssey, Steven Spielberg's A.I., or Alex Garland's Ex Machina), there is as of now no satisfying answer to these questions regarding the intelligence explosion, otherwise known as the technological singularity.142, 143 While this evolution, or revolution, may be far in the future, the reality of our situation is that many advanced systems, in particular decision support automation, are not appropriately used. Although fully automated systems are expected, current systems, including those being developed for the foreseeable future, remain dependent on the human. This acknowledges that automated systems are imperfect and that it is critical that the operator is able to depend on the system when it is correct and reject it when it is wrong or fails.

Trust in automated systems has been studied extensively in fields such as medicine,144–146 ATC,26, 147 combat identification,148 uninhabited aerial operations,149 and others. The ATC community has for some time recognized that insufficient acceptance (for instance, of new CD&R advisory systems) can jeopardize the introduction of new automation.8, 12 Improper reliance on automated aids is
