Chapter 6
Introduction to Mixed Modeling Procedures
Contents
Overview: Mixed Modeling Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Types of Mixed Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Linear, Generalized Linear, and Nonlinear Mixed Models . . . . . . . . . . . . . . . 113
Linear Mixed Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Generalized Linear Mixed Model . . . . . . . . . . . . . . . . . . . . . . . 114
Nonlinear Mixed Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Models for Clustered and Hierarchical Data . . . . . . . . . . . . . . . . . . . . . . . 116
Models with Subjects and Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Linear Mixed Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Comparing the MIXED and GLM Procedures . . . . . . . . . . . . . . . . . . . . . 119
Comparing the MIXED and HPMIXED Procedures . . . . . . . . . . . . . . . . . . 119
Generalized Linear Mixed Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Comparing the GENMOD and GLIMMIX Procedures . . . . . . . . . . . . . . . . . 121
Nonlinear Mixed Models: The NLMIXED Procedure . . . . . . . . . . . . . . . . . . . . . 121
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Overview: Mixed Modeling Procedures

A mixed model contains both fixed effects and random effects and, in its classical linear form, can be written as

$$Y = X\beta + Z\gamma + \epsilon$$
In a broader sense, mixed modeling and mixed model software are applied to special cases and generalizations
of this model. For example, a purely random effects model, $Y = Z\gamma + \epsilon$, or a correlated-error model,
$Y = X\beta + \epsilon$, is subsumed by mixed modeling methodology.
Over the last few decades virtually every form of classical statistical model has been enhanced to accommodate
random effects. The linear model has been extended to the linear mixed model, generalized linear models
have been extended to generalized linear mixed models, and so on. In parallel with this trend, SAS/STAT
software offers a number of classical and contemporary mixed modeling tools. The aim of this chapter is
to provide a brief introduction and comparison of the procedures for mixed model analysis (in the broad
sense) in SAS/STAT software. The theory and application of mixed models are discussed at length in
many monographs, including Milliken and Johnson (1992); Diggle, Liang, and Zeger (1994); Davidian and
Giltinan (1995); Verbeke and Molenberghs (1997, 2000); Vonesh and Chinchilli (1997); Demidenko (2004);
Molenberghs and Verbeke (2005); and Littell et al. (2006).
The following procedures in SAS/STAT software can perform mixed and random effects analysis to various
degrees:
GLM is primarily a tool for fitting linear models by least squares. The GLM procedure has
some capabilities for including random effects in a statistical model and for performing
statistical tests in mixed models. Repeated measures analysis is also possible with the
GLM procedure, assuming unstructured covariance modeling. Estimation methods for
covariance parameters in PROC GLM are based on the method of moments, and a portion
of its output applies only to the fixed-effects model.
GLIMMIX fits generalized linear mixed models by likelihood-based techniques. As in the MIXED
procedure, covariance structures are modeled parametrically. The GLIMMIX proce-
dure also has built-in capabilities for mixed model smoothing and joint modeling of
heterocatanomic multivariate data.
HPMIXED fits linear mixed models by sparse-matrix techniques. The HPMIXED procedure is
designed to handle large mixed model problems, such as the solution of mixed model
equations with thousands of fixed-effects parameters and random-effects solutions.
LATTICE computes the analysis of variance and analysis of simple covariance for data from an
experiment with a lattice design. PROC LATTICE analyzes balanced square lattices,
partially balanced square lattices, and some rectangular lattices. Analyses performed
with the LATTICE procedure can also be performed as mixed models for complete or
incomplete block designs with the MIXED procedure.
MIXED performs mixed model analysis and repeated measures analysis by way of structured
covariance models. The MIXED procedure estimates parameters by likelihood or moment-
based techniques. You can compute mixed model diagnostics and influence analysis for
observations and groups of observations. The default fitting method maximizes the
restricted likelihood of the data under the assumption that the data are normally distributed
and any missing data are missing at random. This general framework accommodates many
common correlated-data methods, including variance component models and repeated
measures analyses.
NESTED performs analysis of variance and analysis of covariance for purely nested random-effects
models. Because of its customized algorithms, PROC NESTED can be useful for large
data sets with nested random effects.
NLMIXED fits mixed models in which the fixed or random effects enter nonlinearly. The NLMIXED
procedure requires that you specify components of your mixed model via programming
statements. Some built-in distributions enable you to easily specify the conditional
distribution of the data, given the random effects.
VARCOMP estimates variance components for random or mixed models.
The focus in the remainder of this chapter is on procedures designed for random effects and mixed model
analysis: the GLIMMIX, HPMIXED, MIXED, NESTED, NLMIXED, and VARCOMP procedures. The
important distinction between fixed and random effects in statistical models is addressed in the section “Fixed,
Random, and Mixed Models” on page 27, in Chapter 3, “Introduction to Statistical Modeling with SAS/STAT
Software.”
A linear mixed model has the form

$$Y = X\beta + Z\gamma + \epsilon$$
$$\gamma \sim N(0, G) \qquad \epsilon \sim N(0, R) \qquad \mathrm{Cov}[\gamma, \epsilon] = 0$$
The matrices G and R are covariance matrices for the random effects and the random errors, respectively.
A G-side random effect in a linear mixed model is an element of $\gamma$, and its variance is expressed through
an element in G. An R-side random variable is an element of $\epsilon$, and its variance is an element of R. The
GLIMMIX, HPMIXED, and MIXED procedures express the G and R matrices in parametric form—that
is, you structure the covariance matrix, and its elements are expressed as functions of some parameters,
known as the covariance parameters of the mixed models. The NLMIXED procedure also parameterizes
the covariance structure, but you accomplish this with programming statements rather than with predefined
syntax.
Since the right side of the model equation contains multiple random variables, the stochastic properties of Y
can be examined by conditioning on the random effects, or through the marginal distribution. Because of
the linearity of the G-side random effects and the normality of the random variables, the conditional and the
marginal distribution of the data are also normal with the following mean and variance matrices:
$$Y \,|\, \gamma \sim N(X\beta + Z\gamma,\, R)$$
$$Y \sim N(X\beta,\, V)$$
$$V = ZGZ' + R$$
Parameter estimation in linear mixed models is based on likelihood or method-of-moment techniques. The
default estimation method in PROC MIXED, and the only method available in PROC HPMIXED, is restricted
(residual) maximum likelihood, a form of likelihood estimation that accounts for the parameters in the fixed-
effects structure of the model to reduce the bias in the covariance parameter estimates. Moment-based
estimation of the covariance parameters is available in the MIXED procedure through the METHOD= option
in the PROC MIXED statement. The moment-based estimators are associated with sums of squares, expected
mean squares (EMS), and the solution of EMS equations.
Parameter estimation by likelihood-based techniques in linear mixed models maximizes the marginal (re-
stricted) log likelihood of the data—that is, the log likelihood is formed from $Y \sim N(X\beta, V)$. This is a model
for Y with mean $X\beta$ and covariance matrix V, a correlated-error model. Such marginal models arise, for
example, in the analysis of time series data, repeated measures, or spatial data, and are naturally subsumed
into the linear mixed model family. Furthermore, some mixed models have an equivalent formulation
as a correlated-error model, when both give rise to the same marginal mean and covariance matrix. For
example, a mixed model with a single variance component is identical to a correlated-error model with
compound-symmetric covariance structure, provided that the common correlation is positive.
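To make this equivalence concrete, here is a small worked sketch; the symbol $\sigma^2_\gamma$ for the single variance component is introduced here only for illustration. For a random-intercept model on a cluster of size $n_i$, the marginal covariance matrix is

$$V_i = Z_i G Z_i' + R_i = \sigma^2_\gamma J_{n_i} + \sigma^2 I_{n_i}$$

which is a compound-symmetric structure with common covariance $\sigma^2_\gamma$ and within-cluster correlation $\sigma^2_\gamma / (\sigma^2_\gamma + \sigma^2) \geq 0$.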
As an example of a generalized linear mixed model, suppose that s pairs of twins are randomly selected in a
matched-pair design. One of the twins in each pair receives a treatment, and the outcome variable is a binary
measure. This is a study with s clusters (subjects), and each cluster is of size 2. If $Y_{ij}$ denotes the binary
response of twin $j = 1, 2$ in cluster i, then a linear predictor for this experiment could be
$$\eta_{ij} = \beta_0 + \tau x_{ij} + \gamma_i$$
where $x_{ij}$ denotes a regressor variable that takes on the value 1 for the treated observation in each pair and
0 otherwise. The $\gamma_i$ are pair-specific random effects that model heterogeneity across sets of twins and that
induce a correlation between the members of each pair. By virtue of random sampling the sets of twins, it is
reasonable to assume that the $\gamma_i$ are independent and have equal variance. This leads to a diagonal G matrix:
$$\mathrm{Var}[\gamma] = \mathrm{Var}\begin{bmatrix} \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \vdots \\ \gamma_s \end{bmatrix} = \begin{bmatrix} \sigma^2 & 0 & 0 & \cdots & 0 \\ 0 & \sigma^2 & 0 & \cdots & 0 \\ 0 & 0 & \sigma^2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \sigma^2 \end{bmatrix}$$
A common link function for binary data is the logit link, which leads in the second step of model formulation
to
$$\mathrm{E}[Y_{ij} \,|\, \gamma_i] = \pi_{ij|i} = \frac{1}{1 + \exp\{-\eta_{ij}\}}$$
$$\mathrm{logit}(\pi_{ij|i}) = \log\left(\frac{\pi_{ij|i}}{1 - \pi_{ij|i}}\right) = \eta_{ij}$$
The final step, choosing a distribution from the exponential family, is automatic in this example; only the
binary distribution comes into play to model the distribution of $Y_{ij} \,|\, \gamma_i$.
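The three steps of this formulation translate directly into a GLIMMIX specification. The following is a minimal sketch only; the data set Twins and the variables pair, trt, and y are hypothetical and not part of the original example:

   /* Random-intercept logistic model for the matched-pair twin example */
   proc glimmix data=twins;
      class pair;
      model y = trt / dist=binary link=logit solution;  /* link and conditional distribution */
      random intercept / subject=pair;                   /* pair-specific G-side random effect */
   run;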
As for the linear mixed model, there is a marginal model in the case of a generalized linear mixed model that
results from integrating the joint distribution over the random effects. This marginal distribution is elusive for
many GLMMs, and parameter estimation proceeds by either approximating the model or by approximating
the marginal integral. Details of these approaches are described in the section “Generalized Linear Mixed
Models Theory” on page 3186, in Chapter 44, “The GLIMMIX Procedure.”
A marginal model, one that models correlation through the R matrix and does not involve G-side random
effects, can also be formulated in the GLMM family; such models are the extension of the correlated-error
models in the linear mixed model family. Because nonnormal distributions in the exponential family exhibit
a functional mean-variance relationship, fully parametric estimation is not possible in such models. Instead,
estimating equations are formed based on first-moment (mean) and second-moment (covariance) assumptions
for the marginal data. The approaches for modeling correlated nonnormal data via generalized estimating
equations (GEE) fall into this category (see, for example, Liang and Zeger 1986; Zeger and Liang 1986).
An example of a nonlinear mixed model is the following logistic growth curve model for the jth observation
of the ith subject (cluster):
$$f(\beta, \gamma_i, x_{ij}) = \frac{\beta_1 + \gamma_{i1}}{1 + \exp\left[-(x_{ij} - \beta_2)/(\beta_3 + \gamma_{i2})\right]}$$
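In the NLMIXED procedure, such a model is coded directly with programming statements. The following is a hedged sketch; the data set Growth, the variables subject, x, and y, and the starting values are assumptions chosen only for illustration:

   /* Logistic growth curve with two subject-specific random effects */
   proc nlmixed data=growth;
      parms b1=190 b2=700 b3=350        /* starting values for fixed effects          */
            vu1=10 vu2=10 s2e=60;       /* starting values for covariance parameters  */
      pred = (b1 + u1) / (1 + exp(-(x - b2)/(b3 + u2)));
      model y ~ normal(pred, s2e);
      random u1 u2 ~ normal([0,0], [vu1, 0, vu2]) subject=subject;
   run;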
The inclusion of R-side covariance structures in GLMM and NLMM models is not as straightforward as in
linear mixed models for the following reasons:
• The normality of the conditional distribution in the LMM enables straightforward modeling of the
covariance structure because the mean structure and covariance structure are not functionally related.
• The linearity of the random effects in the LMM leads to a marginal distribution that incorporates the R
matrix in a natural and meaningful way.
Incorporating R-side covariance structures when random effects enter nonlinearly or when the data are not
normally distributed requires estimation approaches that rely on linearizations of the mixed model. Among
such estimation methods are the pseudo-likelihood methods that are available with the GLIMMIX procedure.
Generalized estimating equations also solve this marginal estimation problem for nonnormal data; these are
available with the GENMOD procedure.
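For example, a GEE analysis of correlated binary data might be specified as follows. This is a sketch only; the data set Clusters, the variables id, x, and y, and the choice of an exchangeable working correlation are assumptions:

   /* Marginal (GEE) model for correlated binary data */
   proc genmod data=clusters;
      class id;
      model y = x / dist=binomial link=logit;
      repeated subject=id / type=exch;   /* exchangeable working correlation */
   run;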
The following characteristics are typical of clustered and hierarchical data and motivate a mixed model analysis:

• The selection of groups is often performed randomly, so that the associated effects are random effects.
• The data from different clusters are independent by virtue of the random selection or by assumption.
• The observations from the same cluster are often correlated, such as the repeated observations in a
repeated measures or longitudinal study.
• It is often believed that there is heterogeneity in model parameters across subjects; for example, slopes
and intercepts might differ across individuals in a longitudinal growth study. This heterogeneity, if due
to stochastic sources, can be modeled with random effects.
A linear mixed model with clustered, hierarchical structure can be written as a special case of the general
linear mixed model by introducing appropriate subscripts. For example, a mixed model with one type of
clustering and s clusters can be written as
$$Y_i = X_i\beta + Z_i\gamma_i + \epsilon_i, \qquad i = 1, \ldots, s$$
In SAS/STAT software, the clusters are referred to as subjects, and the effects that define clusters in your
data can be specified with the SUBJECT= option in the GLIMMIX, HPMIXED, MIXED, and NLMIXED
procedures. The vector Yi collects the ni observations for the ith subject. In certain disciplines, the
organization of a hierarchical model is viewed in a bottom-up form: the measured observations represent the
first level, these observations are collected into units at the second level, and so forth. In a school data
example, the bottom-up approach considers a student's score as the level-1 observation, the classroom as
the level-2 unit, and the school district as the level-3 unit (if these were also selected from a population of
districts).
The following points are noteworthy about mixed models with SUBJECT= specification:
• A SUBJECT= option is available in the RANDOM statements of the GLIMMIX, HPMIXED, MIXED,
and NLMIXED procedures and in the REPEATED statement of the MIXED and HPMIXED proce-
dures.
• A SUBJECT= specification is required in the NLMIXED and HPMIXED procedures. It is not required
with any other mixed modeling procedure in SAS/STAT software.
• Specifying models with subjects is usually more computationally efficient in the MIXED and GLIM-
MIX procedures, especially if the SUBJECT= effects are identical or contained within each other. The
computational efficiency of the HPMIXED procedure is not dependent on SUBJECT= effects in the
manner in which the MIXED and GLIMMIX procedures are affected.
• There is no limit to the number of SUBJECT= effects with the MIXED, HPMIXED, and GLIMMIX
procedures—that is, you can achieve an arbitrary depth of the nesting.
For example, the following statements (in any of these procedures) specify a model with a random intercept and a random slope for x for each value of the ID variable:

class id;
model y = x;
random intercept x / subject=id;
The interpretation of the RANDOM statement is that for each ID an independent draw is made from a
bivariate normal distribution with zero mean and a diagonal covariance matrix. In the following statements
(in any of these procedures) these independent draws come from different bivariate normal distributions
depending on the value of the grp variable.
class id grp;
model y = x;
random intercept x / subject=id group=grp;
Adding GROUP= effects in your model increases the flexibility to model heterogeneity in the covariance
parameters, but it can add numerical complexity to the estimation process.
To fit the general linear mixed model

$$Y = X\beta + Z\gamma + \epsilon, \qquad \gamma \sim N(0, G), \quad \epsilon \sim N(0, R), \quad \mathrm{Cov}[\gamma, \epsilon] = 0$$
with the MIXED procedure, you specify the fixed-effects design matrix X in the MODEL statement, the
random-effects design matrix Z in the RANDOM statement, the covariance matrix of the random effects
G with options (SUBJECT=, GROUP=, TYPE=) in the RANDOM statement, and the R matrix in the
REPEATED statement.
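For illustration, the following sketch shows where each model component is specified; the data set Rm, the variables person, trt, time, and y, and the chosen covariance structures are assumptions rather than part of the original text:

   proc mixed data=rm;
      class person trt time;
      model y = trt time trt*time;                 /* fixed effects define X       */
      random intercept / subject=person type=vc;   /* Z and the G-side covariance  */
      repeated time / subject=person type=ar(1);   /* R-side covariance structure  */
   run;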
By default, covariance parameters are estimated by restricted (residual) maximum likelihood. In supported
models, the METHOD=TYPE1, METHOD=TYPE2, and METHOD=TYPE3 options lead to method-of-
moment-based estimators and analysis of variance. The MIXED procedure provides an extensive list of
diagnostics for mixed models, from various residual graphics to observationwise and groupwise influence
diagnostics.
The NESTED procedure performs an analysis of variance in nested random effects models. The VARCOMP
procedure can be used to estimate variance components associated with random effects in random and mixed
models. The LATTICE procedure computes analysis of variance for balanced and partially balanced square
lattices. You can fit the random and mixed models supported by these procedures with the MIXED procedure
as well. Some specific analyses, such as the analysis of Gauge R & R studies in the VARCOMP procedure
(Burdick, Borror, and Montgomery 2005), are unique to the specialized procedures.
The GLIMMIX procedure can fit most of the models that you can fit with the MIXED procedure, but it does
not offer method-of-moment-based estimation and analysis of variance in the narrow sense. Also, PROC
GLIMMIX does not support the same array of covariance structures as the MIXED procedure and does
not support a sampling-based Bayesian analysis. An in-depth comparison of the GLIMMIX and MIXED
procedures can be found in the section “Comparing the GLIMMIX and MIXED Procedures” on page 3236,
in Chapter 44, “The GLIMMIX Procedure.”
The following points highlight important differences between the GLM and MIXED procedures:

• The default method for estimating covariance parameters in the MIXED procedure is restricted
maximum likelihood. The GLM procedure estimates covariance parameters by the method of moments,
by solving expressions for expected mean squares.
• In the GLM procedure, fixed and random effects are listed together in the MODEL statement, and
random effects must be repeated in the RANDOM statement. In the MIXED procedure, only fixed
effects are listed in the MODEL statement; random effects appear only in the RANDOM statement.
(A brief sketch contrasting the two specifications follows this list.)
• You can request tests for model effects by adding the TEST option in the RANDOM statement of the
GLM procedure. PROC GLM then constructs exact tests for random effects if possible and constructs
approximate tests if exact tests are not possible. For details on how the GLM procedure constructs
tests for random effects, see the section “Computation of Expected Mean Squares for Random Effects”
on page 3502, in Chapter 45, “The GLM Procedure.” Tests for fixed effects are constructed by the
MIXED procedure as Wald-type F tests, and the degrees of freedom for these tests can be determined
by a variety of methods.
• Some of the output of the GLM procedure applies only to the fixed effects part of the model, whether a
RANDOM statement is specified or not.
• Variance components are independent in the GLM procedure and covariance matrices are generally
unstructured. The default covariance structure for variance components in the MIXED procedure is
also a variance component structure, but the procedure offers a large number of parametric structures
to model covariation among random effects and observations.
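The following sketch contrasts the two specifications for a simple randomized block analysis; the data set Split and the variables a, block, and y are hypothetical:

   /* PROC GLM: fixed and random effects together in MODEL, random effects repeated in RANDOM */
   proc glm data=split;
      class a block;
      model y = a block a*block;
      random block a*block / test;
   run;

   /* PROC MIXED: only fixed effects in MODEL, random effects in RANDOM */
   proc mixed data=split;
      class a block;
      model y = a;
      random block a*block;
   run;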
To some extent, the generality of the MIXED procedure precludes it from serving as a high-performance
computing tool for all the model-data scenarios that the procedure can potentially estimate parameters for. For
example, although efficient sparse algorithms are available to estimate variance components in large mixed
models, the computational configuration changes profoundly when, for example, standard error adjustments
and degrees of freedom by the Kenward-Roger method are requested.
The GLIMMIX procedure fits generalized linear mixed models, which are specified through a linear predictor,
a link function, and a distribution in the exponential family. The fixed-effects design matrix X is specified in the MODEL
statement of the GLIMMIX procedure, and the random-effects design matrix Z is specified in the RANDOM
statement, along with the covariance matrix of the random effects and the covariance matrix of R-side random
variables. The link function and (conditional) distribution are determined by defaults or through options in
the MODEL statement.
The GLIMMIX procedure can fit heterocatanomic multivariate data—that is, data that stem from different
distributions. For example, one measurement taken on a patient might be a continuous, normally distributed
outcome, whereas another measurement might be a binary indicator of medical history. The GLIMMIX
procedure also provides capabilities for mixed model smoothing and mixed model splines.
The GLIMMIX procedure offers an extensive array of postprocessing features to produce output statistics and
to perform linear inference. The ESTIMATE and LSMESTIMATE statements support multiplicity-adjusted
p-values for the protection of the familywise Type-I error rate. The LSMEANS statement supports the slicing
of interactions, simple effect differences, and ODS statistical graphs for group comparisons.
The default estimation technique in the GLIMMIX procedure depends on the class of models fit. For linear
mixed models, the default technique is restricted maximum likelihood, as in the MIXED procedure. For
generalized linear mixed models, the estimation is based on linearization methods (pseudo-likelihood) or on
integral approximation by adaptive quadrature or Laplace methods.
The NLMIXED procedure facilitates the fitting of generalized linear mixed models through several built-in
distributions from the exponential family (binary, binomial, gamma, negative binomial, and Poisson). You
have to code the linear predictor and link function with SAS programming statements and typically assign
starting values to all parameters, including the covariance parameters. Although the NLMIXED procedure
does not strictly require starting values (it assigns a default value of 1.0 to every parameter that is not
explicitly given one), it is highly recommended that you specify good starting
values. The default estimation technique of the NLMIXED procedure, an adaptive Gauss-Hermite quadrature,
is also available in the GLIMMIX procedure through the METHOD=QUAD option in the PROC GLIMMIX
statement. The Laplace approximation that is available in the NLMIXED procedure by setting QPOINTS=1
is available in the GLIMMIX procedure through the METHOD=LAPLACE option.
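As a simple sketch of this correspondence, the following statements fit the same random-intercept logistic model by adaptive quadrature in both procedures; the data set Clustered, the variables id, x, and y, and the starting values are assumptions for illustration only:

   proc nlmixed data=clustered qpoints=5;
      parms b0=0 b1=0 s2u=1;               /* starting values, including the variance */
      eta = b0 + b1*x + u;                 /* linear predictor coded directly         */
      p   = 1/(1 + exp(-eta));             /* inverse logit link                      */
      model y ~ binary(p);
      random u ~ normal(0, s2u) subject=id;
   run;

   proc glimmix data=clustered method=quad(qpoints=5);
      class id;
      model y = x / dist=binary link=logit solution;
      random intercept / subject=id;
   run;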
References
Burdick, R. K., Borror, C. M., and Montgomery, D. C. (2005), Design and Analysis of Gauge R&R Studies:
Making Decisions with Confidence Intervals in Random and Mixed ANOVA Models, Philadelphia, PA and
Alexandria, VA: SIAM and ASA.
Davidian, M. and Giltinan, D. M. (1995), Nonlinear Models for Repeated Measurement Data, New York:
Chapman & Hall.
Demidenko, E. (2004), Mixed Models: Theory and Applications, New York: John Wiley & Sons.
Diggle, P. J., Liang, K.-Y., and Zeger, S. L. (1994), Analysis of Longitudinal Data, Oxford: Clarendon Press.
Laird, N. M. and Ware, J. H. (1982), “Random-Effects Models for Longitudinal Data,” Biometrics, 38,
963–974.
Liang, K.-Y. and Zeger, S. L. (1986), “Longitudinal Data Analysis Using Generalized Linear Models,”
Biometrika, 73, 13–22.
Littell, R. C., Milliken, G. A., Stroup, W. W., Wolfinger, R. D., and Schabenberger, O. (2006), SAS for Mixed
Models, 2nd Edition, Cary, NC: SAS Institute Inc.
Milliken, G. A. and Johnson, D. E. (1992), Designed Experiments, volume 1 of Analysis of Messy Data, New
York: Chapman & Hall.
Molenberghs, G. and Verbeke, G. (2005), Models for Discrete Longitudinal Data, New York: Springer.
Verbeke, G. and Molenberghs, G., eds. (1997), Linear Mixed Models in Practice: A SAS-Oriented Approach,
New York: Springer.
Verbeke, G. and Molenberghs, G. (2000), Linear Mixed Models for Longitudinal Data, New York: Springer.
Vonesh, E. F. and Chinchilli, V. M. (1997), Linear and Nonlinear Models for the Analysis of Repeated
Measurements, New York: Marcel Dekker.
Zeger, S. L. and Liang, K.-Y. (1986), “Longitudinal Data Analysis for Discrete and Continuous Outcomes,”
Biometrics, 42, 121–130.