
More factors may influence the acceptance of an automated system than those considered in this thesis, including human-related factors, automation-related factors, and task- and environment-related factors [47]. For example, within the human factors field the human-specific factor of trust is often treated as analogous to acceptance, as a primary proxy for automation reliance. While the relevance of this research is acknowledged (see Chapter 4 for a more detailed review), this thesis considers acceptance a more suitable and explicit measure of automation usage. Furthermore, the focus here is on under-reliance on automation. While several automation issues are related to over-reliance on automation (e.g., complacency, automation bias, the perfect automation schema), these fall outside the scope of this thesis.

Controlling trust. In simulations, the advisory system was presented as trustworthy and its advice was always safe (i.e., it always solved the conflict). This framing was used in an attempt to control trust and prevent controllers' differing levels of trust from affecting their acceptance of, and agreement with, resolution advisories.

Data quality. The underlying data are not subject to issues such as uncertainties. Hence, the advisories given by the automation are always 100% correct and safe. The main reason for this assumption is to rule out artifacts in decision-making caused by trust issues.

Control task. The tactical CD&R task takes place in the horizontal plane only, making it a 2D control task executed by means of speed and/or heading clearances. This significantly reduces the number of control strategies available to resolve conflicts, allowing for better comparisons between controllers and scenarios. Note that without vertical resolutions the control task is not necessarily easier: a single horizontal plane is more limiting and requires careful monitoring and prediction of traffic movements (a minimal sketch of such horizontal conflict detection is given after this section).

Advisory timing. The timing of an advisory may be critical to its value. Ideally, a decision aid would provide support "just in time," when the operator needs it. Considering that trust is the result of a comparison between one's own ability and the automation's ability, researchers have argued that trust in an automated aid should be measured after the decision-maker has made a decision [48-50]. If provided before (i.e., too early), the decision-maker may be unable to adequately evaluate the advice, and there is a risk that the automated advisory is "blindly" accepted. In addition, such advice may be inappropriate and interruptive. While true in theory, for all practical purposes an advisory provided after a decision has been made (i.e., too late) would be redundant, as the problem has already been solved. Furthermore, the benefits of introducing the decision aid would be lost.
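To make the 2D control task concrete, the following is a minimal sketch of horizontal conflict detection based on the closest point of approach (CPA) between two aircraft. It is illustrative only: the function name, the 5 NM separation minimum, and the 120 s look-ahead horizon are assumptions made for the example, not parameters taken from the simulations in this thesis.

```python
import numpy as np

# Illustrative thresholds (assumptions, not values from the thesis):
SEPARATION_NM = 5.0   # horizontal separation minimum
LOOKAHEAD_S = 120.0   # only flag losses of separation predicted within this horizon

def horizontal_conflict(p1, v1, p2, v2):
    """Predict whether two aircraft lose horizontal separation.

    p1, p2 : 2D positions in NM (numpy arrays)
    v1, v2 : 2D velocities in NM/s (numpy arrays)
    Returns (in_conflict, time_to_cpa_s, distance_at_cpa_nm).
    """
    dp = p2 - p1                    # relative position
    dv = v2 - v1                    # relative velocity
    dv2 = np.dot(dv, dv)
    if dv2 < 1e-12:                 # no relative motion: separation is constant
        t_cpa = 0.0
    else:
        t_cpa = max(0.0, -np.dot(dp, dv) / dv2)   # time of closest approach
    d_cpa = np.linalg.norm(dp + dv * t_cpa)       # miss distance at CPA
    in_conflict = d_cpa < SEPARATION_NM and t_cpa <= LOOKAHEAD_S
    return in_conflict, t_cpa, d_cpa

if __name__ == "__main__":
    # Two aircraft converging head-on, 25 NM apart, each at ~480 kts (0.133 NM/s)
    p1, v1 = np.array([0.0, 0.0]), np.array([0.133, 0.0])
    p2, v2 = np.array([25.0, 0.0]), np.array([-0.133, 0.0])
    print(horizontal_conflict(p1, v1, p2, v2))  # conflict in ~94 s
```

A real CD&R probe would also have to project trajectory intent and account for surveillance and prediction uncertainty; as noted under "Data quality" above, such uncertainties are deliberately ruled out of scope in this thesis.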
