Level 2 SAT addresses the agent's behavior and underlying reasoning process, as framed by the beliefs, desires, intentions (BDI) agent architectural framework [206]. Finally, level 3 SAT addresses the expected outcomes and future states of the agent's behavior. All three levels are considered distinct, meaning that designing for transparency does not require all three levels to be achieved. Rather, they address different aspects of transparency, whose relevance will vary with task objectives and contexts (one possible reading of the three levels is illustrated in the first sketch at the end of this section). Furthermore, Chen et al. argue that transparency improves operator performance by facilitating appropriate trust calibration, which in turn drives appropriate automation usage decisions (AUD) [207].

Transparency in relation to automation output, such as recommendations and advisories, has been studied in recommender systems research. It has been extensively applied in the context of e-commerce, semantic web services, and entertainment [194, 201], although examples can also be found in health care diagnostics applications [208] and personalized tour guidance in museums and cultural institutions [111]. Noteworthy is that this research field has merged the notions of transparency and strategic conformance. Transparency is typically increased by providing a text-based explanation personalized to the user's preferences, needs, and knowledge. In general, three explanation categories are used: why explanations justify the recommendation; how explanations convey the underlying reasoning process used to generate the recommendation; and tradeoff explanations acknowledge competing alternatives and consider the constraints for avoiding them [209] (see the second sketch at the end of this section). However, deciding on what to explain (i.e., why, how, or tradeoff) requires consideration of the task at hand [210], the benefits sought [211], and the technique(s) used for generating recommendations [212].

Although recommender systems often involve complex and extensive problem-solving algorithms, they have typically been associated with low-risk decision making [213], presented in a static interface environment [214], and provided in a text-based form, either as a single recommendation or a list of alternatives [212]. Thus far, however, recommender systems have received little attention in ATC and similar time-critical, high-risk control room environments that incorporate advanced decision support systems and human-agent teams (see Sadler et al. [195] for an exception). For example, ATC conflict resolution decision aids have typically been designed to formulate their advice in a text-based format, often providing a list of alternative solutions that constitutes a dichotomous 'accept' or 'reject' choice [31, 35, 90]. A justification for why these solutions are suggested is not presented, making it difficult for the controller to evaluate them properly. Controllers may therefore doubt the quality of the solutions [34], leading to low acceptance.
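As a minimal, hypothetical sketch of how the three SAT levels might be carried by an advisory data structure: all class and field names here are invented for illustration, and the BDI-style fields are one assumed mapping rather than a prescription from the SAT literature. The optional fields reflect the point above that the levels are distinct and need not all be populated.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SATLevel1:
    """Level 1: the agent's current action and goal."""
    current_action: str
    goal: str

@dataclass
class SATLevel2:
    """Level 2: the agent's underlying reasoning, framed in
    beliefs-desires-intentions (BDI) terms."""
    beliefs: List[str]      # what the agent holds true about the situation
    desires: List[str]      # the outcomes it is trying to achieve
    intentions: List[str]   # the plans it has committed to

@dataclass
class SATLevel3:
    """Level 3: projected outcomes and future states."""
    predicted_outcome: str
    uncertainty: float      # e.g., 0.0 (certain) to 1.0 (no confidence)

@dataclass
class TransparentAdvisory:
    """An advisory that can expose any subset of the three SAT levels.
    Because the levels are distinct, a design may populate only those
    relevant to the task and context."""
    recommendation: str
    level1: Optional[SATLevel1] = None
    level2: Optional[SATLevel2] = None
    level3: Optional[SATLevel3] = None

# Example: an advisory exposing only level 2 (reasoning) transparency.
advisory = TransparentAdvisory(
    recommendation="Turn right 15 degrees",
    level2=SATLevel2(
        beliefs=["conflict predicted in 4 minutes"],
        desires=["restore 5 NM separation"],
        intentions=["vector the faster aircraft behind the slower one"],
    ),
)
```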
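The second sketch illustrates the three explanation categories (why, how, tradeoff) attached to a conflict resolution advisory of the kind discussed above. It is purely illustrative: the function names, the advisory fields, and the resolution content are all assumptions, not drawn from any cited system.

```python
# Hypothetical why/how/tradeoff explanations for a conflict
# resolution advisory; all names and values are invented.

def why_explanation(advisory: dict) -> str:
    """Why: justify the recommendation."""
    return (f"Suggested because it resolves the conflict with a "
            f"{advisory['separation_nm']} NM separation margin.")

def how_explanation(advisory: dict) -> str:
    """How: expose the reasoning process used to generate it."""
    return (f"Generated by evaluating heading changes of +/-30 degrees "
            f"and selecting the one with the smallest path deviation "
            f"for {advisory['aircraft']}.")

def tradeoff_explanation(rejected: list) -> str:
    """Tradeoff: acknowledge alternatives and the constraints
    for avoiding them."""
    alts = "; ".join(f"{r['solution']} (avoided: {r['constraint']})"
                     for r in rejected)
    return f"Alternatives considered: {alts}."

advisory = {"aircraft": "KLM1276",
            "solution": "turn right 15 degrees",
            "separation_nm": 6.2}
rejected = [{"solution": "descend to FL320",
             "constraint": "crossing traffic below"},
            {"solution": "reduce speed",
             "constraint": "insufficient effect in time"}]

print(why_explanation(advisory))
print(how_explanation(advisory))
print(tradeoff_explanation(rejected))
```

A design along these lines would directly address the acceptance problem noted above, since the accept-or-reject choice would be accompanied by a justification the controller can evaluate.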
