Overview

The Ising model is a network model specifically designed for binary or dichotomous data. While the varcov and lvm families assume multivariate normal (Gaussian) distributions, the Ising model provides a principled probabilistic framework for modeling binary variables.

Originating from statistical physics, where it was developed to model ferromagnetic materials, the Ising model has found widespread application in psychology and social sciences for analyzing binary questionnaire data, such as yes/no responses, present/absent symptoms, or agree/disagree items.

The Ising model represents the joint probability distribution of binary variables through pairwise interactions, making it a natural choice for network analysis of binary data. Each edge in the network represents the association between two binary variables after controlling for all other variables in the system.

Mathematical Model

The Ising model defines the probability of observing a particular configuration of binary variables $\boldsymbol{y} = (y_1, y_2, \ldots, y_m)$ through an energy-based formulation:

$$P(\boldsymbol{Y} = \boldsymbol{y}) = \frac{1}{Z} \exp\left(-\beta\, H(\boldsymbol{y})\right)$$

Where $H(\boldsymbol{y})$ is the Hamiltonian or energy function:

$$H(\boldsymbol{y}) = -\sum_{i=1}^{m} \tau_i y_i - \sum_{i < j} \omega_{ij} y_i y_j$$

The components of this model are:

  • $\tau_i$: the threshold (or external field) of node $i$, governing its baseline tendency toward one state
  • $\omega_{ij}$: the pairwise interaction between nodes $i$ and $j$, i.e., the network edges
  • $\beta$: the inverse temperature, a scalar that scales the strength of all parameters at once
  • $Z$: the partition function, the normalizing constant obtained by summing $\exp(-\beta\, H(\boldsymbol{y}))$ over all possible configurations

The exponential form ensures that configurations with lower energy (more negative $H$) have higher probability. Positive interactions ($\omega_{ij} > 0$) lower the energy when both connected variables are active (or, under {-1, 1} coding, aligned), making co-activation more likely.
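
To make the formula concrete, here is a minimal base-R sketch (illustrative parameter values only, not psychonetrics output) that evaluates $P(\boldsymbol{Y} = \boldsymbol{y})$ for every configuration of a two-node network with {0, 1} states:

# Two-node Ising model with made-up parameters
tau   <- c(-1, -1)   # thresholds
omega <- 2           # interaction between node 1 and node 2
beta  <- 1           # inverse temperature

states <- expand.grid(y1 = c(0, 1), y2 = c(0, 1))
H <- -(tau[1] * states$y1 + tau[2] * states$y2 + omega * states$y1 * states$y2)
p <- exp(-beta * H)
p <- p / sum(p)      # dividing by the partition function Z
cbind(states, probability = round(p, 3))
# The (1, 1) configuration is as probable as (0, 0) and more probable than
# either mixed state: the positive interaction favours co-activation.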

Model Matrices

The Ising model in psychonetrics uses three main parameter matrices:

Matrix   Description                                         Default
omega    Network interaction matrix (pairwise parameters)    "full"
tau      Threshold/intercept parameters (node activation)    free
beta     Inverse temperature (scalar, overall strength)      1 or free
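
As a hedged sketch of how these matrices are supplied in practice (mydata and myvars are placeholder names for a hypothetical 10-item dataset, and the 0/1 pattern follows the convention used across psychonetrics model families), omega can also be given as a pattern matrix to fix selected edges to zero in advance:

# Hypothetical example: free only two edges, fix all others to zero
pattern <- matrix(0, 10, 10)
pattern[1, 2] <- pattern[2, 1] <- 1   # edge between items 1 and 2
pattern[2, 3] <- pattern[3, 2] <- 1   # edge between items 2 and 3
mod_sparse <- Ising(mydata, vars = myvars, omega = pattern)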

Interpretation of Parameters

Positive interactions ($\omega_{ij} > 0$) mean that when one node is active, the other node is more likely to be active as well, conditional on all remaining nodes; negative interactions mean the reverse. The thresholds $\tau_i$ capture each node's baseline tendency to be active, and $\beta$ scales all parameters at once; because it is not identified separately from freely estimated $\tau$ and $\omega$, it is usually fixed to 1 within a single group.

Important Notes

Input Encoding Matters

The Ising model results depend critically on whether your binary data is coded as {0, 1} or {-1, 1}. The Ising() function accepts both encodings, but they lead to different model interpretations:

  • {0, 1} coding: Standard for questionnaire data (no/yes, absent/present)
  • {-1, 1} coding: Common in physics and attitude modeling

You can specify the response coding explicitly via the responses argument to ensure correct interpretation.
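
For example (mydata and myvars are placeholder names), the coding can be made explicit as follows:

# Make the response coding explicit rather than relying on auto-detection
mod_01 <- Ising(mydata, vars = myvars, responses = c(0L, 1L))    # {0, 1} coding
mod_pm <- Ising(mydata, vars = myvars, responses = c(-1L, 1L))   # {-1, 1} coding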

Computational Limits

The partition function $Z$ requires summing over all $2^m$ possible states. This becomes computationally prohibitive for large networks:

  • Default maximum: maxNodes = 20
  • 20 variables: $2^{20} \approx 1$ million states
  • 30 variables: $2^{30} \approx 1$ billion states

Computation time grows exponentially with the number of nodes, so exact estimation quickly becomes impractical beyond roughly 20 nodes. Consider using approximation methods or sampling-based approaches for very large networks.
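
A quick base-R check of how the state space grows:

# Size of the state space the partition function must sum over
m <- c(10, 20, 25, 30)
2^m   # 1024, 1048576, 33554432, 1073741824 -- growth is exponential in m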

Estimator Support

Only maximum likelihood (ML) and penalized maximum likelihood (PML) estimation are supported for Ising models. The other estimators available in psychonetrics (FIML, WLS, DWLS, ULS) are not supported for this model family.
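
As a hedged sketch (assuming the standard psychonetrics estimator argument; mydata and myvars are placeholder names), the estimator can be requested explicitly:

# Explicitly request maximum likelihood estimation
mod_ml <- runmodel(Ising(mydata, vars = myvars, estimator = "ML"))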

Example: Single-Group Ising Model

This example demonstrates fitting an Ising model to the Jonas dataset, which contains 10 binary attitude items about a researcher named Jonas.

Step 1: Load and Inspect Data

library("psychonetrics")
library("dplyr")

# Load the Jonas dataset (built into psychonetrics)
data(Jonas)

# The dataset contains 10 binary attitude items and a grouping variable
vars <- names(Jonas)[1:10]
head(Jonas)

# Check data structure
str(Jonas)
# 215 participants rated 10 yes/no attitude items

Step 2: Fit a Saturated Ising Model

Start with a fully saturated model where all pairwise interactions are estimated:

# Fit saturated Ising model (all edges free)
mod <- Ising(Jonas, vars = vars, omega = "full")
mod <- mod %>% runmodel

# Inspect model
mod

# View all parameters
mod %>% parameters

# Check fit (saturated model should fit perfectly)
mod %>% fit

Step 3: Prune Non-Significant Edges

Remove edges that are not statistically significant to obtain a sparse network:

# Prune edges with p > 0.05
mod_pruned <- mod %>% prune(alpha = 0.05)

# Check how many edges remain
mod_pruned %>% parameters %>%
  filter(matrix == "omega", !fixed) %>%
  nrow()

Step 4: Stepup Search

Add back edges that significantly improve model fit using modification indices:

# Stepup search: add edges that improve fit
mod_final <- mod_pruned %>% stepup(alpha = 0.05)

# Compare models
compare(
  saturated = mod,
  pruned = mod_pruned,
  final = mod_final
)

Step 5: Extract and Visualize Network

# Extract the omega (interaction) matrix
omega <- getmatrix(mod_final, "omega")

# Visualize with qgraph
library("qgraph")
qgraph(omega,
       labels = vars,
       theme = "colorblind",
       cut = 0,
       layout = "spring",
       title = "Jonas Attitude Network")

# Positive edges (green): items tend to co-occur
# Negative edges (red): items tend to be mutually exclusive

About the Jonas Dataset: This dataset was collected to study attitudes toward a researcher. Participants were divided into two groups: those who personally knew Jonas and those who did not. The 10 binary items measure various positive and negative attitudes.

Example: Multi-Group Ising Analysis

Multi-group Ising models allow us to test whether network structures differ between populations. This example compares attitude networks between people who know Jonas personally versus those who do not.

Step 1: Prepare Data

# Order data so "Doesn't know" group comes first
# (group 1 = doesn't know, group 2 = knows Jonas)
Jonas <- Jonas[order(Jonas$group), ]

# Check group sizes
table(Jonas$group)

Step 2: Test Progressive Constraints

We fit a series of increasingly constrained models to test different levels of invariance:

Model 1: All Parameters Free

# Configural model: all parameters free across groups
mod1 <- Ising(Jonas, vars = vars, groups = "group") %>%
  runmodel

# Sparse version: prune and stepup
mod1b <- mod1 %>%
  prune(alpha = 0.05) %>%
  stepup(alpha = 0.05)

Model 2: Equal Networks

# Constrain omega (network) to be equal across groups
mod2 <- mod1 %>%
  groupequal("omega") %>%
  runmodel

# Sparse version with equality constraints
mod2b <- mod2 %>%
  prune(alpha = 0.05) %>%
  stepup(mi = "mi_equal", alpha = 0.05)

Model 3: Equal Networks + Equal Thresholds

# Constrain both omega and tau to be equal
mod3 <- mod2 %>%
  groupequal("tau") %>%
  runmodel

mod3b <- mod3 %>%
  prune(alpha = 0.05) %>%
  stepup(mi = "mi_equal", alpha = 0.05)

Model 4: Fully Equal (Including Beta)

# All parameters equal across groups
mod4 <- mod3 %>%
  groupequal("beta") %>%
  runmodel

mod4b <- mod4 %>%
  prune(alpha = 0.05) %>%
  stepup(mi = "mi_equal", alpha = 0.05)

Step 3: Compare All Models

# Compare all models by information criteria
compare(
  "free (dense)"            = mod1,
  "free (sparse)"           = mod1b,
  "equal omega (dense)"     = mod2,
  "equal omega (sparse)"    = mod2b,
  "equal omega+tau (dense)" = mod3,
  "equal omega+tau (sparse)"= mod3b,
  "fully equal (dense)"     = mod4,
  "fully equal (sparse)"    = mod4b
) %>% arrange(BIC)

# Model with lowest BIC is preferred

Step 4: Interpret Results

# If equal network model fits well:
# → Network structure is the same across groups

# If only free model fits:
# → Network structure differs between groups

# Extract group-specific networks if they differ
omega_group1 <- getmatrix(mod1b, "omega", group = 1)
omega_group2 <- getmatrix(mod1b, "omega", group = 2)

# Visualize both networks
par(mfrow = c(1, 2))
qgraph(omega_group1, layout = "spring", cut = 0, title = "Doesn't Know Jonas")
qgraph(omega_group2, layout = "spring", cut = 0, title = "Knows Jonas")

Important: When using stepup search with equality constraints, always specify mi = "mi_equal" to ensure modification indices respect the constraints.

Interpretation Guide

Understanding Ising model parameters requires careful attention to their probabilistic interpretation:

Network Edges (Omega)

A positive edge ($\omega_{ij} > 0$) means that when one of the two nodes is active, the other is more likely to be active as well, conditional on all other nodes in the network; a negative edge means the opposite. The larger the absolute value, the stronger this conditional association.

Thresholds (Tau)

The interpretation of thresholds depends on the data coding. For {−1, 1} coded data, thresholds represent the baseline tendency of a node toward one state over the other when all other variables are neutral. For {0, 1} coded data, the interpretation is more complex. See Epskamp, Haslbeck, Isvoranu & Van Borkulo (2022) for a detailed discussion.
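
For {0, 1} coded data, one way to see the role of a threshold is through the conditional distribution of a single node, which is logistic: $P(y_i = 1 \mid \text{rest}) = \text{logistic}\left(\beta\,(\tau_i + \sum_j \omega_{ij} y_j)\right)$. A toy calculation with made-up numbers (not psychonetrics output):

# Conditional probability that node i is active, given its neighbours ({0, 1} coding)
tau_i      <- -1.5                 # threshold of node i
omega_i    <- c(0.8, 1.2, 0)       # edges of node i with three other nodes
neighbours <- c(1, 0, 1)           # current states of those nodes
plogis(1 * (tau_i + sum(omega_i * neighbours)))   # beta fixed to 1; about 0.33
# Without any active neighbours the probability would be plogis(-1.5), about 0.18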

Temperature (Beta)

The inverse temperature $\beta$ scales all thresholds and interactions simultaneously. Large values make the distribution more deterministic (low-energy configurations dominate), whereas $\beta \rightarrow 0$ makes all configurations equally likely. Because $\beta$ is not identified separately from freely estimated $\tau$ and $\omega$, it is typically fixed to 1 and only freed under equality constraints, as in the multi-group example above.

Conditional Independence

A key feature of the Ising model is that it encodes the conditional (in)dependence structure of the binary variables: $\omega_{ij} = 0$ means that $y_i$ and $y_j$ are independent given all remaining variables, whereas a nonzero $\omega_{ij}$ indicates an association that the other variables cannot explain away.
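
Reusing mod_final from the single-group example above, one way to list the pairs that the pruned model treats as conditionally independent:

# Zero entries in the estimated omega correspond to conditional independencies
omega <- getmatrix(mod_final, "omega")
which(omega == 0 & upper.tri(omega), arr.ind = TRUE)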

Summary

The Ising model extends network analysis to binary and dichotomous data. Key takeaways:

  • The model is parameterized by a network matrix (omega), thresholds (tau), and an inverse temperature (beta).
  • Results depend on whether the data are coded {0, 1} or {-1, 1}; specify the coding explicitly via responses when in doubt.
  • Exact estimation requires summing over all $2^m$ configurations, which limits the model to roughly 20 nodes, and only ML and PML estimation are supported.
  • The usual psychonetrics workflow applies: fit a saturated model, prune, stepup, and compare (multi-group) models with compare().

Limitations

  • Only binary variables can be modeled; ordinal or continuous data require other model families.
  • The partition function makes exact estimation infeasible for large networks (default maxNodes = 20).
  • Only ML and PML estimation are supported.

When to Use the Ising Model

The Ising model is appropriate when:

  • your variables are genuinely binary or meaningfully dichotomized (yes/no, present/absent, agree/disagree);
  • you are interested in pairwise conditional associations between items rather than, for example, a latent-variable summary;
  • the number of variables is small enough (roughly 20 or fewer) for exact estimation.

Next Steps

Now that you understand the Ising model, explore:

Further Reading

For theoretical background on the Ising model in psychology: