
during interaction with others. We infer the concealed reasoning of others based on what we can sense and observe. In this sense, human-automation interaction is no different than most human-human interaction. Future research should explicitly measure participants' understanding of advisories and investigate whether conformal advisories are better understood than nonconformal advisories, even though both objectively can be considered equally opaque.

7-3 Research challenges and limitations

This thesis has not been without its limitations and challenges. Several were foreseen at the onset of the research explorations and have been addressed in the introduction of this thesis, while others were encountered and realized as the research progressed. In this section, those challenges and limitations considered most relevant are discussed. In addition, recommendations for future research are provided.

7-3-1 The great deception: defining conformal advisories

The ambition was never to develop an actual conformal system, but to empirically investigate the benefits and drawbacks of such capable automation. To do so, it was necessary to develop a method for determining and measuring how a person prefers to solve a problem, and to empirically test that person's reaction when an automated system suggests the same solution to an identical problem.

Inspired by a previous study by Fuld et al.,39 an experimental design was developed that set out to expose controllers to the same problem repeatedly, first observing and recording their solutions and then providing replays of their own solutions as automated advisories. In simulations, each scenario was encountered four times. In the First empirical study (Chapter 3), conformal and nonconformal advisories were matched directly to each scenario. While this use of exact replays ensured high conformance, it posed a challenge for ascertaining the reliability of solutions across repetitions. Say that we want to develop a conformal system: how can a conformal solution be predicted by the system if the operator solves the same problem inconsistently over time? Therefore, in the two latter studies (Chapters 4 and 5), conformal advisories were defined by each controller's consistent problem-solving style.

In all studies, the solution parameter hierarchy was used to define conformal advisories. Following the Consistency study (Chapter 6), however, two more classifications were identified (discussed above: the control problem and solution geometry classifications). Since all conformal advisories were based on the solution parameter hierarchy only, some controllers' conformal advisories were likely erroneous and better represented by one of the other classifications. The definition of consistency is critical for the creation of conformal automation. Future research is
