
7.3.3.3 Automating Historic Biases

While ethics can be considered an important factor in any data analytics project, it is particularly important in people analytics projects. HRM decisions have profound implications within an imbalanced relationship, and the data in the HRM field often suffer from inherent biases. This becomes particularly clear when exploring applications of predictive analytics in the HRM domain. For example, imagine that we want to implement a decision-support system to improve the efficiency of our organization's selection process. A primary goal of such a system could be to minimize the human time (both of our organizational agents and of the potential candidates) wasted on obvious mismatches between candidates and job positions. Under the hood, a decision-support system in a selection setting could estimate a likelihood (i.e., a prediction) for each candidate that he or she makes it through the selection process successfully. Recruiters would then only have to interview the candidates who are most likely to be successful, saving valuable time for themselves and for the less probable candidates. In this way, an artificially intelligent system that reviews candidate information and recommends top candidates could considerably decrease the human workload and thereby the total cost of the selection process.

For legal compliance as well as ethical reasons, we would not want such a decision-support system to be biased towards any majority or minority group. Should we therefore exclude demographic and socio-economic factors from our predictive model? What about the academic achievements of candidates, the university they attended, or their performance on our selection tests? Some of these are scientifically validated predictors of future job performance (e.g., Hunter & Schmidt, 1998). However, they also relate to demographic and socio-economic factors and would therefore introduce bias (e.g., Hough, Oswald, & Ployhart, 2001; Pyburn, Ployhart, & Kravitz, 2008; Roth & Bobko, 2000). Do we include or exclude these selection data in our model?

Perhaps the simplest solution would be to include all information, to normalize our system's predictions within groups afterwards (e.g., by gender), and to invite the top candidates per group for follow-up interviews. However, which groups do we consider? Do we normalize only for gender and nationality, or also for age and social class? What about combinations of these characteristics? Moreover, if we normalize across all groups and invite the best candidate within each, we might end up conducting more interviews than in the original scenario. Should we then account for the proportional representation of each of these groups in the whole labor population? As these questions illustrate, both the decision-support system and the subject get complicated quickly.
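A minimal sketch of the within-group shortlisting idea is given below, written in Python with pandas. It assumes a table of candidates with a model-generated success score; the column names, group definition, and cut-off are hypothetical illustrations rather than part of the original example, and the sketch deliberately sidesteps the open questions raised above about which groups and group combinations to consider.

import pandas as pd

def shortlist_per_group(candidates, score_col="predicted_success",
                        group_cols=("gender",), top_n=3):
    """Rank candidates within their own group and keep the top-n per group."""
    ranked = candidates.copy()
    # Rank 1 = highest predicted success within the candidate's own group.
    ranked["within_group_rank"] = (
        ranked.groupby(list(group_cols))[score_col]
              .rank(method="first", ascending=False)
    )
    # Note: the shortlist grows with every extra group (or group combination)
    # we normalize for, which is exactly the complication raised above.
    return ranked[ranked["within_group_rank"] <= top_n]

# Hypothetical usage with made-up candidates and scores.
applicants = pd.DataFrame({
    "candidate_id": [1, 2, 3, 4, 5, 6],
    "gender": ["F", "F", "F", "M", "M", "M"],
    "predicted_success": [0.81, 0.64, 0.58, 0.92, 0.77, 0.40],
})
print(shortlist_per_group(applicants, top_n=2))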
Even more problematic is that any predictive decision-support system in HRM is likely biased from the moment of its conception. HRM data are frequently riddled with human biases, because bias was present in the historic processes that generated the data. For instance, the recruiters in our example may have historically favored candidates with a certain profile (e.g., red hair). After training our decision-support system (i.e., predictive model) on these historic data, it will recognize and copy the pattern that candidates with red hair (or with correlated features, such as a Northwest European nationality) are more likely to be successful. The system thus learns to recommend exactly those individuals as the top candidates.
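The sketch below simulates this last point under stated assumptions: the synthetic data, feature names, correlation strengths, and the choice of scikit-learn's logistic regression are all hypothetical illustrations, not the system described above. It shows that even when the directly biased feature (hair color) is excluded from the model, a correlated proxy (nationality) picks up the historic preference.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical historic recruitment data: hair color correlates with
# nationality, and the historic recruiters favored red-haired candidates.
northwest_european = rng.random(n) < 0.5
red_hair = np.where(northwest_european, rng.random(n) < 0.4, rng.random(n) < 0.05)
competence = rng.normal(size=n)

# Historic hiring outcome: partly competence, partly the recruiters' bias.
hired = (0.5 * competence + 2.0 * red_hair + rng.normal(scale=0.5, size=n)) > 0.5

# Train the decision-support model WITHOUT the hair-color feature.
X = np.column_stack([competence, northwest_european.astype(float)])
model = LogisticRegression().fit(X, hired)

print("coefficient for competence:        ", round(model.coef_[0][0], 2))
print("coefficient for NW-European origin:", round(model.coef_[0][1], 2))

Because the simulated hiring decisions reward red hair and red hair is more common among the Northwest European candidates, the nationality coefficient comes out clearly positive in this constructed example: the model reproduces the historic bias even though the offending feature was never shown to it.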
