1.2 People Analytics

1.2.1 Need for People Analytics

The ability to measure and quantify the strategic impact of its activities in its own, local organizational context would be very valuable for the HRM function. While scientific insights form a basis of evidence for many common HRM practices and thus provide some assurance of their effectiveness in general (e.g., performance appraisal, compensation; Briner, 2000; Cascio & Aguinis, 2010), this external evidence is by no means a guarantee that these HRM practices will have the intended impact in local practice. To turn scientific research into practice, HRM practitioners first have to translate the scientific findings into a policy or practice that would presumably produce the same effect. Second, this policy or practice needs to be implemented, perceived, and responded to in ways in which the original effect is not lost (Nishii & Wright, 2007; Piening et al., 2014). Third, HRM research has shown that the context in which HRM is implemented is crucial to its effectiveness (e.g., Johns, 2006; Paauwe & Farndale, 2017), and what works for students in an academic lab may not necessarily work in an organizational context. Similarly, the effects of practices may differ between or within organizations (see Chapter 6; Johns, 2006; Huselid & Becker, 2011). All this implies that the effects of HRM, once implemented in practice, may vary considerably from what was found in the original scientific setting. Therefore, instead of relying blindly on scientific evidence, it would be valuable to double-check whether HRM activities actually achieve the intended effects in practice and to adjust where needed.

1.2.2 People Analytics Terminology

This process of internally examining the impact of HRM activities goes by many different labels. Contemporary popular labels include people analytics (e.g., Green, 2017; Kane, 2015), HR analytics (e.g., Lawler, Levenson, & Boudreau, 2004; Levenson, 2005; Rasmussen & Ulrich, 2015; Paauwe & Farndale, 2017), workforce analytics (e.g., Carlson & Kavanagh, 2018; Hota & Ghosh, 2013; Simón & Ferreiro, 2017), talent analytics (e.g., Bersin, 2012; Davenport, Harris, & Shapiro, 2010), and human capital analytics (e.g., Andersen, 2017; Minbaeva, 2017a, 2017b; Levenson & Fink, 2017; Schiemann, Seibert, & Blankenship, 2017). Other variations involving metrics or reporting are also common (Falletta, 2014), but there is consensus that these differ from the analytics labels (Cascio & Boudreau, 2010; Lawler, Levenson, & Boudreau, 2004): whereas HR metrics refer to descriptive statistics on a single construct, analytics involves exploring and quantifying relationships between multiple constructs. Yet, even within analytics, a large variety of labels is used interchangeably. For instance, the label people analytics is favored in most countries globally, except for mainland Europe and India, where HR analytics is used most (Google Trends, 2018). While human capital analytics seems to refer to the exact same concept, it is used almost exclusively in scientific discourse. Some argue that the lack of clear terminology stems from the emerging nature of the field (Marler & Boudreau, 2017). Others argue that differences beyond semantics exist, for instance, in terms of the accountabilities the labels suggest and the connotations they invoke (Van den Heuvel & Bondarouk, 2017). In