Table of Contents

  1. The Pipe-Based Workflow
  2. Matrix Specification
  3. Estimators
  4. Inspecting Results
  5. Model Modification
  6. Multi-Group Analysis
  7. Penalized Estimation
  8. Missing Data
  9. Bootstrap
  10. Exporting Results

The Pipe-Based Workflow

All models in psychonetrics follow a consistent pipeline: specify → runmodel → modify → compare. This makes it easy to iterate on models, test hypotheses, and explore alternative specifications.

Here's the general pattern using a simple Gaussian Graphical Model (GGM) with the bfi dataset from the psych package:

library("psychonetrics")
library("dplyr")
library("psych")

# Load data: Big Five Inventory
data(bfi)

# Select 5 items from Agreeableness scale
vars <- c("A1", "A2", "A3", "A4", "A5")

# Step 1: Specify a saturated GGM (all edges estimated)
model <- ggm(bfi, vars = vars, omega = "full")

# Step 2: Run the model (estimate parameters)
model <- model %>% runmodel

# Step 3: Inspect results
model %>% fit()           # Fit indices
model %>% parameters()    # Parameter estimates

# Step 4: Modify the model (e.g., prune non-significant edges)
model_pruned <- model %>% prune(alpha = 0.01)

# Step 5: Compare models
compare(saturated = model, pruned = model_pruned)

This workflow applies to all model families in psychonetrics: varcov models (GGMs, precision matrices), latent variable models (CFAs, SEMs, residual networks), and Ising models.

Matrix Specification

Most psychonetrics models are built from matrices. When specifying a model, you can control which parameters are free, fixed, or constrained to be equal using intuitive matrix specifications.

Basic Specification Options

Value            Meaning
"full"           All elements are free parameters (except the diagonal of symmetric matrices)
"diag"           Only diagonal elements are free; off-diagonal elements are fixed to zero
"zero"           All elements are fixed to zero
Numeric matrix   Fine-grained control: 0 = fixed to zero, 1 = free, values > 1 = equality constraints

Custom Matrix Specification

You can specify a numeric matrix to control exactly which parameters are free or constrained:

Example: 3-Factor CFA Lambda Matrix

# Define a lambda matrix for 9 items loading on 3 factors
# Items 1-3 load on Factor 1, Items 4-6 on Factor 2, Items 7-9 on Factor 3
lambda <- matrix(0, nrow = 9, ncol = 3)
lambda[1:3, 1] <- 1   # Items 1-3 load on Factor 1
lambda[4:6, 2] <- 1   # Items 4-6 load on Factor 2
lambda[7:9, 3] <- 1   # Items 7-9 load on Factor 3

# Use in a latent network model:
model <- lnm(data, lambda = lambda, vars = paste0("item", 1:9))
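Values greater than 1 in a specification matrix encode equality constraints: parameters sharing the same integer label are estimated as a single parameter. A short sketch extending the lambda matrix above, constraining the loadings of items 2 and 3 to be equal:

```r
# Same 9-item, 3-factor pattern as above:
lambda <- matrix(0, nrow = 9, ncol = 3)
lambda[1:3, 1] <- 1
lambda[4:6, 2] <- 1
lambda[7:9, 3] <- 1

# Give the loadings of items 2 and 3 the same label (2),
# so they are constrained to be equal:
lambda[2, 1] <- 2
lambda[3, 1] <- 2

lambda
```

The matrix still frees nine loadings, but items 2 and 3 now share one estimated value.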

Multi-Group Specifications

For multi-group models, use a list or array to specify different matrices for each group:

# Different omega matrices for two groups:
omega_group1 <- matrix(1, 5, 5)   # All edges free in group 1
omega_group2 <- matrix(0, 5, 5)   # No edges (empty network) in group 2

model <- ggm(data, vars = vars, groups = "gender",
             omega = list(omega_group1, omega_group2))

Estimators

psychonetrics supports multiple estimation methods for different data types and research goals. The estimator is set during model specification or changed later using setEstimator().

Maximum Likelihood (ML)

Default for complete continuous data. Assumes multivariate normality. Provides chi-square test of exact fit and standard fit indices (CFI, TLI, RMSEA).

model <- ggm(data, vars = vars, estimator = "ML")

Full Information ML (FIML)

For data with missing values. Estimates parameters using all available information without listwise deletion. Automatically selected when missing = "auto" and NAs are detected.

model <- ggm(data, vars = vars, estimator = "FIML")

Penalized ML (PML / PFIML) [experimental]

Adds regularization (LASSO, Ridge, or Elastic Net) to shrink small parameters toward zero. Useful for high-dimensional data or exploratory modeling. PFIML is the penalized version of FIML for missing data. Note: this estimator is experimental and has not yet been fully validated.

model <- ggm(data, vars = vars, estimator = "PML")

Weighted Least Squares (WLS, DWLS, ULS) [experimental]

For ordinal or non-normal data. Estimates parameters from polychoric/tetrachoric correlations. Note: ordinal data support is experimental and has not yet been fully validated.

model <- ggm(data, vars = vars, estimator = "DWLS", ordered = TRUE)

Mean-Variance Adjusted WLSMV [experimental]

For ordinal data. Provides robust chi-square test with mean and variance adjustments. Note: this estimator is experimental and has not yet been fully validated.

model <- ggm(data, vars = vars, estimator = "WLSMV", ordered = TRUE)

Changing Estimators

You can change the estimator after model specification:

model <- model %>% setEstimator("FIML") %>% runmodel

Inspecting Results

After running a model, psychonetrics provides several functions to extract and inspect results.

Parameter Estimates

Use parameters() to view all estimated parameters with standard errors, z-values, and p-values:

model %>% parameters()

Fit Indices

Use fit() to extract model fit statistics:

model %>% fit()

# Returns:
# - chisq: Chi-square value
# - df: Degrees of freedom
# - pvalue: Chi-square test p-value
# - CFI: Comparative Fit Index
# - TLI: Tucker-Lewis Index
# - RMSEA: Root Mean Square Error of Approximation
# - BIC: Bayesian Information Criterion
# - AIC: Akaike Information Criterion
# - ... and more
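As a point of reference, RMSEA can be computed by hand from the chi-square value, its degrees of freedom, and the sample size. A base-R sketch using one common formulation (the fit values below are hypothetical):

```r
# One common RMSEA formulation:
# RMSEA = sqrt(max(chisq - df, 0) / (df * (n - 1)))
rmsea <- function(chisq, df, n) {
  sqrt(max(chisq - df, 0) / (df * (n - 1)))
}

# Hypothetical fit values:
rmsea(chisq = 18.3, df = 5, n = 500)

# When chisq <= df, RMSEA is exactly zero:
rmsea(chisq = 3, df = 5, n = 500)
```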

Extract Model Matrices

Use getmatrix() to extract specific parameter matrices:

# Extract the omega matrix (partial correlation network):
omega <- getmatrix(model, "omega")

# Extract the sigma matrix (implied covariance):
sigma <- getmatrix(model, "sigma")

# For multi-group models, specify the group:
omega_group1 <- getmatrix(model, "omega", group = 1)

Use threshold = TRUE to set non-significant parameters to zero, retaining only significant edges at a given alpha level (default alpha = 0.01). The adjust argument controls p-value correction for multiple testing:

# Extract omega with non-significant edges removed:
omega_thresh <- getmatrix(model, "omega", threshold = TRUE)

# Use a different alpha level:
omega_thresh_05 <- getmatrix(model, "omega", threshold = TRUE, alpha = 0.05)

# Apply Bonferroni correction:
omega_thresh_bon <- getmatrix(model, "omega", threshold = TRUE, adjust = "bonferroni")
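To see what a Bonferroni correction does to a set of edge p-values, base R's p.adjust() can be used as an illustration (assuming the same method names apply to the adjust argument):

```r
# Raw p-values for three hypothetical edges:
p_raw <- c(0.004, 0.020, 0.300)

# Bonferroni multiplies each p-value by the number of tests (capped at 1):
p_adjust <- p.adjust(p_raw, method = "bonferroni")
p_adjust  # 0.012 0.060 0.900

# At alpha = 0.05, only the first edge survives correction:
which(p_adjust < 0.05)
```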

Confidence Interval Plots

Use CIplot() to visualize parameter estimates with confidence intervals. This works on any fitted model and is one of the most useful tools for inspecting estimated edge weights:

# Plot confidence intervals for all partial correlations:
CIplot(model, "omega")

# Edges are colored by significance level:
#   - Dark colors: significant at stricter alpha levels (0.001, 0.0001)
#   - Lighter colors: significant at more lenient levels (0.05, 0.01)
#   - Grey: not significant

# Customize the confidence interval level (default is 95%):
CIplot(model, "omega", alpha_ci = 0.01)  # 99% CIs

CIplot() can also be used on bootstrapped results (see Bootstrap), where it additionally shows the proportion of bootstrap samples in which each edge was zero.

Modification Indices

Use MIs() to identify parameters that, if freed, would most improve model fit:

model %>% MIs()

# Returns expected chi-square decrease (mi) and estimated parameter values (epc)
# Sorted by mi (largest improvements first)

Complete Summary

# Print the model:
print(model)

Model Modification

psychonetrics provides flexible tools for iteratively modifying models. You can manually fix or free parameters, or use automated procedures like pruning and stepwise search.

Manual Parameter Control

Fix a Parameter

# Fix a specific parameter to a value (e.g., zero):
model <- model %>% fixpar("omega", row = 2, col = 3, value = 0) %>% runmodel

# Fix to a non-zero value:
model <- model %>% fixpar("lambda", row = 1, col = 1, value = 1)

Free a Parameter

# Free a previously fixed parameter:
model <- model %>% freepar("omega", row = 2, col = 3) %>% runmodel

Automated Model Search

Prune: Remove Non-Significant Parameters

Use prune() to iteratively remove non-significant parameters based on a significance threshold:

# Remove parameters with p > 0.01:
model_pruned <- model %>% prune(alpha = 0.01)

# Prune only specific matrices:
model_pruned <- model %>% prune(alpha = 0.01, matrices = "omega")

Stepup: Add Parameters via Modification Indices

Use stepup() to iteratively add parameters that improve model fit according to modification indices:

# Add parameters that minimize BIC:
model_stepup <- model %>% stepup(criterion = "bic")

# Use AIC instead:
model_stepup <- model %>% stepup(criterion = "aic")

# Limit to specific matrices:
model_stepup <- model %>% stepup(criterion = "bic", matrices = "omega")

Modelsearch

Use modelsearch() for a stepwise model search that both adds and removes parameters until optimal fit is obtained:

# Stepwise search adding and removing parameters:
model_final <- model %>% modelsearch(criterion = "bic", prunealpha = 0.01, addalpha = 0.01)

Multi-Group Analysis

psychonetrics makes it easy to estimate models across multiple groups and test for measurement invariance or network differences.

Specifying Multi-Group Models

Use the groups argument to specify a grouping variable:

# Estimate separate networks for males and females:
model <- ggm(data, vars = vars, groups = "gender")

Constraining Parameters Across Groups

Group Equality Constraints

Use groupequal() to constrain a matrix to be equal across groups:

# Constrain omega (network structure) to be equal:
model_equal <- model %>% groupequal("omega") %>% runmodel

Free Group Constraints

Use groupfree() to allow a matrix to differ across groups:

# Allow omega to differ between groups:
model_free <- model %>% groupfree("omega") %>% runmodel

Measurement Invariance Testing

For latent variable models, you can test for configural, metric, scalar, and strict invariance:

# Configural invariance: same factor structure, all parameters free
model_configural <- lnm(data, lambda = lambda, groups = "gender") %>% runmodel

# Metric invariance: constrain factor loadings equal
model_metric <- model_configural %>% groupequal("lambda") %>% runmodel

# Scalar invariance: constrain loadings + intercepts equal
model_scalar <- model_metric %>% groupequal("nu") %>% runmodel

# Strict invariance: constrain loadings + intercepts + residuals equal
model_strict <- model_scalar %>% groupequal("sigma_epsilon") %>% runmodel

# Compare all models:
compare(configural = model_configural,
        metric = model_metric,
        scalar = model_scalar,
        strict = model_strict)

Comparing Multi-Group Models

Use compare() to test whether constraints significantly worsen fit:

compare(free = model_free, equal = model_equal)

# Returns chi-square difference test and information criteria (AIC, BIC)
# If p < 0.05, equality constraint significantly worsens fit
# AIC and BIC can also be used to compare models (lower is better)
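The chi-square difference test underlying compare() can be reproduced by hand: subtract the chi-square values and degrees of freedom of the two nested models and refer the difference to a chi-square distribution. A base-R sketch with hypothetical fit values:

```r
# Hypothetical fit values for two nested models:
chisq_free  <- 120.5; df_free  <- 30   # free model
chisq_equal <- 135.2; df_equal <- 40   # constrained (equal) model

# Chi-square difference test:
delta_chisq <- chisq_equal - chisq_free   # 14.7
delta_df    <- df_equal - df_free         # 10
p_value <- pchisq(delta_chisq, df = delta_df, lower.tail = FALSE)
p_value
```

Here the difference is not significant at alpha = 0.05, so the equality constraint would be retained.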

Example: Network Comparison

# Test if network structure differs between groups:
model_free <- ggm(data, vars = vars, groups = "gender") %>% runmodel
model_equal <- model_free %>% groupequal("omega") %>% runmodel

# Chi-square difference test:
compare(free = model_free, equal = model_equal)

# Extract group-specific networks:
omega_group1 <- getmatrix(model_free, "omega", group = 1)
omega_group2 <- getmatrix(model_free, "omega", group = 2)

Penalized Estimation [experimental]

Note: penalized estimation in psychonetrics has not yet been fully validated. Use with caution and verify results.

Penalized estimation adds regularization to shrink small parameters toward zero. This is useful for high-dimensional data, exploratory modeling, or preventing overfitting.

Setting the Estimator

Use estimator = "PML" (Penalized ML) or estimator = "PFIML" (Penalized FIML for missing data):

# Automatic lambda selection (default):
model <- ggm(data, vars = vars, estimator = "PML") %>% runmodel

# Or set a fixed penalty strength:
model <- ggm(data, vars = vars, estimator = "PML", penalty_lambda = 0.2) %>% runmodel

Automatic Lambda Selection via EBIC

By default, psychonetrics searches over a grid of penalty values (lambda) and selects the one that minimizes the Extended Bayesian Information Criterion (EBIC):

# Automatic lambda selection (default):
model <- ggm(data, vars = vars, estimator = "PML") %>% runmodel

# The selected lambda is stored in the model object
# View selected lambda:
model@parameters$penalty_lambda

Manual Lambda Selection

You can manually specify the penalty strength using penalty_lambda:

# Set a specific penalty value:
model <- ggm(data, vars = vars, estimator = "PML",
             penalty_lambda = 0.5) %>% runmodel

# Higher lambda = more shrinkage (sparser model)
# Lower lambda = less shrinkage (denser model)
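Conceptually, the LASSO penalty acts like a soft-thresholding operator: estimates smaller in magnitude than the penalty are set exactly to zero, and larger ones are shrunk toward zero. A base-R illustration of the idea (not the actual psychonetrics implementation):

```r
# Soft-thresholding operator, the conceptual core of the LASSO:
soft_threshold <- function(x, lambda) {
  sign(x) * pmax(abs(x) - lambda, 0)
}

# Hypothetical edge estimates:
edges <- c(-0.40, -0.05, 0.02, 0.15, 0.60)

soft_threshold(edges, lambda = 0.1)  # small edges become exactly zero
soft_threshold(edges, lambda = 0.3)  # higher lambda: sparser result
```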

Elastic Net Mixing: LASSO vs. Ridge

Control the type of regularization using penalty_alpha:

# LASSO (default):
model <- ggm(data, vars = vars, estimator = "PML", penalty_alpha = 1)

# Ridge regression:
model <- ggm(data, vars = vars, estimator = "PML", penalty_alpha = 0)

# Elastic Net (50% LASSO, 50% Ridge):
model <- ggm(data, vars = vars, estimator = "PML", penalty_alpha = 0.5)

Post-Selection Inference: Refit

After selecting a sparse model via penalized estimation, use refit() to re-estimate the selected parameters without the penalty. This provides unbiased parameter estimates:

# Step 1: Estimate with penalty (parameter selection):
model_penalized <- ggm(data, vars = vars, estimator = "PML") %>% runmodel

# Step 2: Refit without penalty (post-selection inference):
model_refit <- model_penalized %>% refit

# Now standard errors and p-values are valid for hypothesis testing
model_refit %>% parameters()

Complete Example

library("psychonetrics")
library("dplyr")

# Load data
data(bfi)
vars <- c("A1", "A2", "A3", "A4", "A5")

# Penalized estimation with automatic lambda selection:
model_pml <- ggm(bfi, vars = vars, estimator = "PML") %>% runmodel

# Inspect selected parameters (many will be zero):
model_pml %>% parameters

# Refit for valid inference:
model_refit <- model_pml %>% refit

# Inspect refitted estimates with standard errors and p-values:
model_refit %>% parameters

Missing Data

psychonetrics provides flexible options for handling missing data, including Full Information Maximum Likelihood (FIML), which uses all available information without listwise deletion.

Automatic Handling

By default, psychonetrics automatically switches from ML to FIML when missing values are detected:

# Default: missing = "auto"
# If NAs are present, ML automatically becomes FIML
model <- ggm(data, vars = vars) %>% runmodel

# Explicitly set missing data handling:
model <- ggm(data, vars = vars, missing = "auto")

Missing Data Options

Option       Behavior
"auto"       Automatically switch from ML to FIML when NAs are detected (default)
"listwise"   Complete-case analysis (remove all rows with any missing values)
"pairwise"   Pairwise deletion (use all available pairs of observations for covariances)

Full Information Maximum Likelihood (FIML)

FIML is the recommended approach for missing data. It estimates parameters using all available information for each case, providing unbiased estimates under Missing At Random (MAR) assumptions.

# Explicit FIML estimation:
model <- ggm(data, vars = vars, estimator = "FIML") %>% runmodel

# Equivalent: ML switches to FIML automatically when missing = "auto" and NAs are present
model <- ggm(data, vars = vars, estimator = "ML", missing = "auto") %>% runmodel

Listwise Deletion

Use complete cases only (removes all rows with any missing values):

model <- ggm(data, vars = vars, missing = "listwise") %>% runmodel

Pairwise Deletion

Use all available pairs of variables when computing covariances:

model <- ggm(data, vars = vars, missing = "pairwise") %>% runmodel

Example with Missing Data

library("psychonetrics")
library("dplyr")

# Simulate data with missing values:
data <- psych::bfi
vars <- c("A1", "A2", "A3", "A4", "A5")

# Introduce 10% missing completely at random:
set.seed(123)
for (v in vars) {
  data[sample(1:nrow(data), size = 0.1 * nrow(data)), v] <- NA
}

# FIML automatically used (missing = "auto"):
model_fiml <- ggm(data, vars = vars) %>% runmodel

# Compare to listwise deletion:
model_listwise <- ggm(data, vars = vars, missing = "listwise") %>% runmodel

# FIML uses more information (check sample sizes):
model_fiml@sample@nobs       # Full sample
model_listwise@sample@nobs   # Reduced sample

Bootstrap

Bootstrap resampling provides robust standard errors and confidence intervals, especially useful when distributional assumptions are violated or sample sizes are small.

Enabling Bootstrap

Set bootstrap = TRUE or specify the bootstrap type:

# Non-parametric bootstrap (resample cases):
model <- ggm(data, vars = vars, bootstrap = TRUE) %>% runmodel

# Explicit non-parametric bootstrap:
model <- ggm(data, vars = vars, bootstrap = "nonparametric") %>% runmodel

# Case bootstrap (same as nonparametric):
model <- ggm(data, vars = vars, bootstrap = "case") %>% runmodel

Bootstrap Options

Each model call with bootstrap = TRUE creates one resampled dataset. To obtain multiple bootstrap replicates, run the model repeatedly in a loop (or in parallel) and aggregate the results with aggregate_bootstraps().

Complete Example

The loop_psychonetrics() function handles parallel cluster setup, variable export, and cleanup automatically:

library("psychonetrics")
library("dplyr")

# Load data:
data(bfi)
vars <- c("A1", "A2", "A3", "A4", "A5")

# Fit the original model:
model <- ggm(bfi, vars = vars) %>% runmodel

# Run 100 bootstrap replicates in parallel (use 1000+ in real analysis):
bootstraps <- loop_psychonetrics({
  ggm(bfi, vars = vars,
      bootstrap = "nonparametric") %>%
    runmodel
}, reps = 100, nCores = 8)

# Aggregate bootstrap results:
boot_agg <- aggregate_bootstraps(
  sample = model,
  bootstraps = bootstraps
)

# View parameters with bootstrap SEs and CIs:
boot_agg %>% parameters

# Plot confidence intervals:
CIplot(boot_agg)

How loop_psychonetrics() works

loop_psychonetrics() replaces the boilerplate of makeCluster() / clusterExport() / parLapply() / stopCluster(). Variables referenced in the expression (like bfi and vars above) are automatically detected and exported to parallel workers. Set nCores = 1 for sequential execution. The function can also be used for simulation studies or any repeated psychonetrics computation.
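For reference, the boilerplate that loop_psychonetrics() wraps looks roughly like the following, a generic base-R parallel sketch with a toy computation standing in for a model fit:

```r
library(parallel)

# A variable the workers need (stands in for data and vars):
x <- 10

# Manual cluster setup, export, computation, and teardown:
cl <- makeCluster(2)
clusterExport(cl, varlist = "x", envir = environment())
results <- parLapply(cl, 1:4, function(i) i * x)
stopCluster(cl)

unlist(results)  # 10 20 30 40
```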

Bootstrapping with Model Search

The expression inside loop_psychonetrics() can include any analysis pipeline. For example, to bootstrap a pruned network:

# Fit and prune the original model:
model_pruned <- ggm(bfi, vars = vars) %>%
  runmodel %>%
  prune(alpha = 0.05)

# Bootstrap the full pipeline:
bootstraps <- loop_psychonetrics({
  ggm(bfi, vars = vars,
      bootstrap = "nonparametric") %>%
    runmodel %>%
    prune(alpha = 0.05)
}, reps = 1000, nCores = 8)

boot_agg <- aggregate_bootstraps(
  sample = model_pruned,
  bootstraps = bootstraps
)

# Plot with zero-proportion indicators:
CIplot(boot_agg, split0 = TRUE)

Exporting Results

psychonetrics provides a convenient function to export comprehensive model results to a text file.

Write to File

Use write_psychonetrics() to export all model information:

# Export complete model output to a text file:
write_psychonetrics(model, "model_output.txt")

What Gets Exported

The exported file includes:

  - A complete model summary
  - Fit statistics
  - All parameter estimates
  - Estimated model matrices

Example

library("psychonetrics")

# Estimate a model:
data(bfi)
vars <- c("A1", "A2", "A3", "A4", "A5")
model <- ggm(bfi, vars = vars, omega = "full") %>%
  runmodel %>%
  prune(alpha = 0.01) %>%
  runmodel

# Export results:
write_psychonetrics(model, "ggm_results.txt")

# The file "ggm_results.txt" now contains:
# - Complete model summary
# - Fit statistics
# - All parameter estimates
# - Estimated model matrices

Alternative: Manual Export

You can also manually extract and save specific components:

# Save parameter table:
params <- model %>% parameters()
write.csv(params, "parameters.csv", row.names = FALSE)

# Save fit indices:
fit_stats <- model %>% fit()
write.csv(fit_stats, "fit_indices.csv", row.names = FALSE)

# Save omega matrix:
omega <- getmatrix(model, "omega")
write.csv(omega, "omega_matrix.csv", row.names = FALSE)