Carl Westin

Abstract

In a series of real-time trials, we simulated sophisticated ATC conflict resolution automation using unrecognizable replays of air traffic controllers' own performance. Using a novel experimental design and a prototype ATC interface, we explored with operational controllers the interactive effects of traffic complexity, level of automation, and "strategic conformance" (defined as the match between human and machine solution strategy) on a number of dependent measures. Personalized conformal advisories (exact replays of a given controller's previous solution) were accepted more often, rated higher, and responded to faster than were nonconformal advisories (replays of a colleague's different solution). Controllers not only discriminated between resolution advisories but, more importantly, preferred those that matched their own solution for the same conflict. One result stood out in particular: roughly 25% of conformal advisories were rejected by controllers. Taken together, this study has provided empirical insights into the critical role that strategic conformance can play, at least in a transitional phase, as new and sophisticated decision support automation is introduced.

3-1 Introduction

Roughly 60 years ago, English mathematician Alan Turing famously posed the ultimate test for artificial intelligence: that its performance be indistinguishable from that of a human. If one could converse with an unseen agent and mistake computer responses for human ones, then that computer could truly be said to "think." This notion has driven research into artificial intelligence for over half a century. We have reached a point in the evolution of automation where we routinely turn over to computers many of the "thinking" tasks previously performed only by humans. Our planes, trains, and even automobiles rely on more (and more capable) automation than ever before.
Despite various achievements, however, a gap remains between the theory and practice of automation design. For instance, to date, not a single computer has passed the Turing test, and we have not realized the highest levels of autonomous systems in any meaningful way. As Sheridan noted, we still have no idea how to program computers to "take care of children, write symphonies, or manage corporations..." [127, p. 129].

The Multidimensional Framework for Advanced SESAR Automation (MUFASA) project started, in a sense, from the opposite view: what if we could build perfect automation that behaved and solved complicated problems in exactly the same way as the human? Would the human accept its advice? Or might humans reject such solutions simply because they were proposed by automation? The question, in other words, is whether operators show a fixed, inherent bias against automation, irrespective of its performance.
