
quantify the predictability of phenomena; et cetera (Shmueli, 2010; Shmueli & Koppius, 2011; Yarkoni & Westfall, 2017). In the long run, HRM as a scientific field would benefit from a better balance between studies that aim to explore, explain, and predict HRM phenomena and studies that prescribe HRM solutions.

A related implication is that the social sciences should be more open to novel and/or alternative methodology. Familiarity with machine learning concepts such as cross-validation and regularization 18 could help HRM researchers improve the real-world value of their HRM models substantially (Yarkoni & Westfall, 2017, p. 1107). Even black-box algorithms, which may not seem attractive from the perspective of basic science, could be valuable in explanatory HRM research. Their predictive accuracy may be useful in the early research stages, for instance, to impute missing values or to estimate propensity scores in quasi-experimental designs. The value of unsupervised techniques for the future of the HRM domain should also not be neglected. Data reduction techniques become increasingly important in light of the enormous volumes of data that are being collected through new sources (e.g., wearables, sensors, gamification; Chapter 3). Clustering techniques could help to identify different types of employees based on longitudinal or high-dimensional information on employees' behaviors, cognitions, or emotions. In all these cases, a general familiarity of HRM scholars with the requirements, possibilities, inner workings, and pitfalls of non-conventional methods would facilitate collaboration with other disciplines and the identification of novel research opportunities.

7.4.1.3 Publication Issues

Third, the changes mentioned above (science-practice collaborations, alternative HRM research) are only viable if the scientific publication process correctly incentivizes researchers. Currently, manuscripts that follow a deductive cycle in which theory-driven hypotheses are tested in a confirmatory way have much more publication potential, and this discourages scholars from studying anything else (e.g., Hambrick, 2007; Leung, 2011; Pratt, 2008; Woo et al., 2017). If left unchanged, this process will greatly hinder the growth of people analytics as a discipline, both in the scientific world and in practice. People analytics is often problem-focused rather than theory-driven, and decisions made in the statistical modelling process (e.g., construct operationalization, data handling, the chosen technique or algorithm) may introduce additional barriers to acceptance for publication in contemporary management and psychology journals (e.g., Shmueli, 2010; Woo et al., 2017). I foresee two scenarios in this regard. In one scenario, the scientific HRM community revises its publication process to ensure that machine learning and people analytics projects are not discouraged. Publishers could devote special issues or journals to science-practice collaborations, knowledge transfer, technological innovation, or business impact. Similarly, research institutions could support career patterns that facilitate people analytics research, rewarding science-practice partnerships,

18 A process whereby the objective function of a statistical model is changed in order to prevent overfitting.
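To make the footnoted notion of regularization and the related idea of cross-validation more concrete, the following minimal sketch (not taken from the dissertation; it uses simulated, hypothetical data and the scikit-learn library) shows how adding a penalty to the objective function of a regression model, and evaluating it out-of-sample via k-fold cross-validation, can curb overfitting when many predictors are available relative to the number of employees:

```python
# Illustrative sketch only: simulated "HRM-like" data with many predictors.
# Regularization = adding a penalty to the least-squares objective (footnote 18);
# cross-validation = estimating accuracy on held-out folds rather than the fit sample.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_employees, n_predictors = 200, 50              # few cases, many predictors
X = rng.normal(size=(n_employees, n_predictors))
true_beta = np.zeros(n_predictors)
true_beta[:5] = [0.8, -0.6, 0.5, 0.4, -0.3]      # only a handful truly matter
y = X @ true_beta + rng.normal(size=n_employees) # e.g., a hypothetical outcome score

# Ordinary least squares: fits the sample closely but tends to generalize worse.
ols_r2 = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()

# Ridge regression: penalizes large coefficients (alpha * ||beta||^2), which
# typically improves out-of-sample accuracy in settings like this one.
ridge_r2 = cross_val_score(Ridge(alpha=10.0), X, y, cv=5, scoring="r2").mean()

print(f"5-fold cross-validated R^2 -- OLS: {ols_r2:.2f}, Ridge: {ridge_r2:.2f}")
```

The choice of ridge regression and the penalty value alpha are illustrative assumptions; any regularized estimator evaluated out-of-sample would convey the same point.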

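Similarly, the clustering idea mentioned above can be sketched in a few lines. The example below is hypothetical and uses simulated behavioral features; in practice such features might come from surveys, sensors, or HR information systems, and the number of employee "types" would have to be chosen and validated substantively:

```python
# Illustrative sketch only: grouping employees into "types" with k-means clustering
# on simulated, high-dimensional behavioral features.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Simulate three latent employee types, each with its own behavioral profile.
centers = rng.normal(scale=2.0, size=(3, 12))                 # 12 behavioral features
features = np.vstack([c + rng.normal(size=(100, 12)) for c in centers])

scaled = StandardScaler().fit_transform(features)             # put features on one scale
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)

# Each employee is assigned to one cluster ("type"); the cluster centroids
# describe the typical behavioral profile of each type.
print(np.bincount(kmeans.labels_))
```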