2.4.2 Autocorrelation structures

Before estimating idiographic personality networks, we explored the full autocorrelation structure of each participant's multivariate time series with the R function acf. Whereas networks can only estimate dynamic patterns across two timescales (lag-0 and lag-1), autocorrelation functions reveal associations across all possible timescales: for example, whether happiness is correlated with restlessness seven days ago (lag-7) or a month ago (lag-30). Exploring this is important because, when theoretical support for a single timescale of personality-related dynamics is lacking (see Introduction), an informed decision about which timescale to (not) model should be data-driven. If the dynamics manifest predominantly at lag-0 and lag-1 but not beyond, then conventional networks on these two timescales are justified. To assess this within our sample, we counted statistically significant autocorrelations (evaluated at p < 0.01) per lag and divided by the maximum number of evaluated autocorrelations. This yields the percentage of statistically significant autocorrelations at each lag across participants, relative to all meaningful bivariate comparisons. At lag-1 and beyond, all variables may meaningfully covary with their own and others' previous values. At lag-0, each variable may meaningfully covary with the other variables but not with itself (its lag-0 autocorrelation is 1 by definition), which is why these same-variable, same-timepoint autocorrelations were excluded when calculating this relative percentage for lag-0.

2.4.3 Estimating idiographic networks

We then estimated idiographic network models. Because network analyses cannot handle missing data, missing data points were first imputed using a structural model fitted by maximum likelihood and Kalman smoothing (Moritz & Bartz-Beielstein, 2017).
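A minimal R sketch of these two preprocessing steps, assuming a numeric matrix ts_data holding one participant's ratings (the object name, the lag range, and the white-noise approximation for the significance bound are illustrative assumptions; acf and the imputeTS package are the tools cited above):

```r
# Sketch only: ts_data, max_lag, and alpha are illustrative assumptions.
library(imputeTS)  # Kalman-smoothing imputation (Moritz & Bartz-Beielstein, 2017)

screen_autocorrelations <- function(ts_data, max_lag = 30, alpha = 0.01) {
  # acf() on a multivariate series returns auto- and cross-correlations
  # for every variable pair at every lag up to max_lag
  res  <- acf(ts_data, lag.max = max_lag, plot = FALSE)
  crit <- qnorm(1 - alpha / 2) / sqrt(res$n.used)  # approximate bound under white noise

  sapply(0:max_lag, function(l) {
    r <- res$acf[l + 1, , ]          # p x p correlation matrix at lag l
    if (l == 0) diag(r) <- NA        # lag-0 self-correlations are 1 by definition
    mean(abs(r) > crit, na.rm = TRUE)  # proportion significant at this lag
  })
}

# Impute missing points per variable with a structural model fitted by
# maximum likelihood and Kalman smoothing, before network estimation
imputed <- apply(ts_data, 2, na_kalman, model = "StructTS", smooth = TRUE)
```

The returned vector gives, per lag, the proportion of significant auto- and cross-correlations, which is the quantity compared across lags in the text.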
We iterated through each of the 26 participants, employing a Gaussian graphical vector autoregression (GVAR) model with functionality from the R package graphicalVAR (Epskamp et al., 2018). GVAR essentially entails estimating a sparse contemporaneous and a sparse lag-1 partial correlation matrix. Each such matrix has 6 rows and 6 columns, so that all 6 variables are compared with one another. All parameters (i.e., edges) in the contemporaneous model (lag-0) are estimated after conditioning on all other associations between variables at lag-0 and lag-1. To control model complexity, they are constrained with the graphical least absolute shrinkage and selection operator (LASSO; Friedman et al., 2008; cf. Morosan et al., 2020), a regularization technique appropriate for ordinal data that shrinks weak and likely spurious associations to exactly zero. GVAR performs both parameter estimation and model selection. That is, all possible models are iteratively estimated and the best-fitting one (i.e., the one