
5-2 Automation transparency research

To date, researchers have used a variety of descriptions to address the need for facilitating system understanding through interface design. Examples include observability and transparency in general human factors design guidelines, 187, 199 transparency in applied human factors research 191, 195, 200 and recommender systems research, 201 visibility in usability engineering research 202 and information automation research, 203 and comprehensibility in adaptive interface and artificial intelligence research. 194 From here on, the term transparency is used in lieu of all these terms.

5-2-1 Theoretical antecedents of transparency

Helldin 191 proposed several generic guidelines for automation transparency design based on a review and synthesis of human- and user-centered automation design approaches in the human factors literature. Among the more prominent guidelines are the importance of providing system feedback; notifying the operator of information shortages, inconsistencies, or uncertainties; providing the rationale underlying automation behavior (i.e., algorithm rules); and communicating possible reasoning conflicts. Notably, many of the guidelines emphasize a positive relationship between increasing transparency and appropriate trust calibration.

A more structured theory for automation transparency can be derived from work by Brown 204 on the transparency of intelligent tutoring and learning systems. According to Brown, the design for transparency requires three criteria to be met. First, the system should facilitate understanding of the domain and environment in focus of attention (domain transparency). Brown's second criterion, which is of particular importance for decision aids, is that the system should facilitate understanding of its reasoning and diagnostic processes (internal transparency). This does not, however, imply that a detailed algorithm description or a complete input-output relationship should be provided, as this would be neither practical nor feasible. Finally, the system should facilitate an understanding of the overall process through which the user and system are connected to the real world (embedding transparency). Combined, these criteria should provide a simplified view of the content of the "black box," making it possible for the user to understand how the system works, why it is doing what it is doing, and to anticipate what it will do next. 192

Focusing specifically on the behavior of autonomous agents, Chen et al. 200 proposed a transparency model for the design of interfaces supporting autonomous agent mission supervision. The model, named situation awareness-based agent transparency (SAT), specifies three SAT levels based on Endsley's three levels of situation awareness. 205 Level 1 SAT addresses what the agent is doing, as described by the three Ps for facilitating trust (process, purpose, performance). 15 Level 2
