Term
| When do we use HLM? |
|
Definition
| We use HLM with hierarchical (nested) data; in other words, when data is organized at more than one level. |
|
|
Term
| In HLM, what do the intercepts and slopes correspond with? |
|
Definition
| In HLM, intercepts correspond with means and slopes correspond with IV-DV relationships. |
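For reference, the standard two-level equations behind this card (conventional HLM notation, added for clarity):

    Level 1:  Y_{ij} = \beta_{0j} + \beta_{1j} X_{ij} + r_{ij}
    Level 2:  \beta_{0j} = \gamma_{00} + u_{0j}
              \beta_{1j} = \gamma_{10} + u_{1j}

The intercept \beta_{0j} is group j's mean on the DV, and the slope \beta_{1j} is group j's IV-DV relationship.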
|
|
Term
| Give a brief example of a classic study that uses HLM. |
|
Definition
| A classic HLM study would use student characteristics as the Level 1 variables, their respective Class as a Level 2 variable, and academic performance as the dependent variable. |
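A minimal sketch of that design in Python's statsmodels (the file and column names here are hypothetical, not from the card):

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("students.csv")        # hypothetical data file
    model = smf.mixedlm("score ~ ses",      # Level 1: student SES predicting performance
                        data=df,
                        groups=df["class"], # Level 2: class membership
                        re_formula="~ses")  # random intercept and slope per class
    result = model.fit()
    print(result.summary())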
|
|
Term
| Why do Tabachnick and Fidell say we should use HLM? |
|
Definition
| They say we should use HLM because "analyzing data organized into hierarchies as if they are all on the same level leads to both interpretational and statistical errors." |
|
|
Term
| What is a covariance matrix? |
|
Definition
| A covariance matrix is symmetric. It shows the covariances, which are unstandardized versions of correlations (measured in the original units of the variables). A covariance of 0 means no linear relationship. |
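A quick numeric illustration with made-up values:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.0, 4.0, 7.0, 8.0])
    cov = np.cov(x, y)                 # 2x2 covariance matrix, in original units
    print(cov)
    print(np.allclose(cov, cov.T))     # True: the matrix is symmetric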
|
|
Term
| What is a covariance structure? Name two. |
|
Definition
| A covariance structure is a pattern in a covariance matrix. Variance Components and Unstructured are two examples. |
|
|
Term
| Describe the Variance Components covariance structure. |
|
Definition
| VC is a diagonal matrix, meaning that each variance can be different but all of the covariances of the random effects are equal to zero. (A scaled identity matrix, by contrast, forces all the variances to be equal.) |
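What a VC structure looks like for three random effects (illustrative numbers):

    import numpy as np

    # Diagonal: variances may differ, but every covariance is zero.
    vc = np.diag([4.0, 0.25, 9.0])
    # By contrast, a scaled identity (2.0 * np.eye(3)) would force EQUAL variances.
    print(vc)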
|
|
Term
| Give an example of when we would use the VC structure in HLM. |
|
Definition
| An example would be if we had four variables that were all independent of each other and that were measured on four different scales (pounds, miles, seconds, and volts). |
|
|
Term
| Describe the Unstructured covariance structure. |
|
Definition
| UN is a full variance-covariance matrix, which means there is no pattern imposed on the matrix at all. Each variance and covariance has no relation to the others. |
|
|
Term
| What are the two errors we can get when running HLM? |
|
Definition
-iteration error
-Hessian matrix not positive definite error |
|
|
Term
| Discuss why we might get the iterative error in HLM. |
|
Definition
HLM is an iterative process that uses maximum likelihood to estimate the means and variances of the intercepts and slopes in a regression model.
It might take more than the default 100 iterations for the estimates to converge. |
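A toy illustration of the idea (a generic fixed-point iteration, not HLM's actual algorithm): the estimator stops when successive estimates stop changing, or errors out when the iteration cap is hit.

    def iterate(update, start, max_iter=100, tol=1e-8):
        est = start
        for i in range(max_iter):
            new = update(est)
            if abs(new - est) < tol:   # converged
                return new, i + 1
            est = new
        raise RuntimeError("no convergence in %d iterations" % max_iter)

    # This slowly converging example fails at the default cap of 100
    # but succeeds when the cap is raised, which is the usual fix.
    est, n = iterate(lambda m: 0.99 * m + 0.5, start=0.0, max_iter=2000)
    print(est, n)   # approx. 50.0, after roughly 1,765 iterations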
|
|
Term
| Discuss why we might get the Hessian Matrix not positive definite error in HLM. |
|
Definition
This is the program telling us that it is not sure that the regression equation that it developed really fits the data that we gave it.
Sometimes we can fix this by changing the covariance matrix to make some variables more or less related to one another. |
|
|
Term
| Discuss categorical and continuous variables in SPSS Mixed Models. |
|
Definition
-Categorical variables are called factors and have the word "by" in the syntax.
-Continuous variables are called covariates and have the word "with" in the syntax. |
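The rough analog in Python model formulas (patsy/statsmodels), where C() plays the role of "by" and plain numeric variables play the role of "with"; names are hypothetical:

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("students.csv")   # hypothetical data file
    # C(sex) is treated as a factor; age enters as a covariate.
    model = smf.mixedlm("score ~ C(sex) + age", data=df, groups=df["class"])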
|
|
Term
| In SPSS Mixed Models, what is the test of significance for the random effects? |
|
Definition
| the Wald Z statistic, reported for the covariance parameters |
|
Term
| Discuss the Wald Statistic in SPSS Mixed Models. |
|
Definition
| The Wald statistics are reported as two-tailed when they should be one-tailed, because a variance cannot be negative. That means we should divide the reported p-values by 2. |
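The arithmetic in code form (the p-value is made up):

    p_two_tailed = 0.08
    p_one_tailed = p_two_tailed / 2
    print(p_one_tailed)   # 0.04, significant at .05 one-tailed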
|
|
Term
| What should you consider doing to your variables in SPSS Mixed Models? |
|
Definition
| You should consider centering them by hand because SPSS does not do this in the same way that HLM does. |
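A hand-centering sketch in pandas (column names hypothetical); grand-mean and group-mean centering are the two common choices:

    import pandas as pd

    df = pd.read_csv("students.csv")                 # hypothetical data file
    df["ses_grand"] = df["ses"] - df["ses"].mean()   # grand-mean centering
    df["ses_group"] = df["ses"] - df.groupby("class")["ses"].transform("mean")  # group-mean centering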
|
|
Term
| What is the closest covariance structure in SPSS Mixed Models to HLM6? |
|
Definition
-Variance Components
-There will still be differences due to differences in the algorithms used by the two programs. |
|
|
Term
| Discuss the parameters created by SPSS Mixed Models and HLM6. |
|
Definition
These two programs will often estimate different numbers of parameters.
This is because of differences in the algorithms they use in addition to assumptions about the covariance structures between the two programs. |
|
|
Term
| What is the purpose of SPSS Mixed Models? |
|
Definition
| The purpose is the same as general HLM: to study the effects of several independent variables, organized at different levels, on one dependent variable. The ability to perform HLM in SPSS is a convenience for researchers without HLM6. |
|
|
Term
| What is the underlying rationale for SPSS Mixed Models? |
|
Definition
| The underlying rationale is that the Level 2 groupings reflect meaningful inter-subject variation that should be accounted for. The ability to do this in SPSS is a benefit for those without HLM6. |
|
|
Term
| What is the purpose of Principal Components/Factor Analysis? |
|
Definition
| To determine the number of dimensions necessary to explain the bulk of the relationships among the variables. Then we interpret this by investigating the correlations of the variables with the dimensions. |
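A minimal sketch with scikit-learn on toy data: explained_variance_ratio_ answers "how many dimensions", and scaled loadings give the variable-dimension correlations used for interpretation.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    X = np.random.rand(100, 6)               # 100 cases, 6 variables (toy data)
    Xz = StandardScaler().fit_transform(X)   # analyze standardized variables
    pca = PCA().fit(Xz)
    print(pca.explained_variance_ratio_)     # proportion of variance per dimension
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)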
|
|
Term
| What is the rationale of Principal Components/Factor Analysis? |
|
Definition
| The rationale is that measures are influenced by many underlying constructs and that the dimensions in factor analysis uncover these constructs. This is done by decomposing the correlations among the measures. |
|
|
Term
| Name the 5 assumptions of Principal Components/Factor Analysis. |
|
Definition
1) interval-level measurement
2) random sampling
3) linearity
4) normal distributions
5) bivariate normal distribution |
|
|
Term
| When is factor analysis most used in psychology? |
|
Definition
| Intelligence research. It is also common in personality, attitudes, psychometrics, etc. |
|
|
Term
| What are the theoretical limitations of Principal Components/Factor Analysis? |
|
Definition
| Because of its exploratory nature, many decisions about number of factors and rotational scheme are based on pragmatic rather than theoretical criteria. |
|
|
Term
| What are the practical limitations of Principal Components/Factor Analysis? |
|
Definition
| PC/FA are sensitive to the sizes of correlations, so honest correlations must be used. Outliers, missing data, and degraded correlations between poorly distributed variables are a big problem for PC/FA. |
|
|
Term
| Describe how Principal Components and Confirmatory Factor Analysis are different. |
|
Definition
In EFA (which includes PC), variables are grouped based on their correlations, without prior assumptions. You start with a matrix of associations and try to identify the factors that underlie them.
In CFA, the analysis tests specific hypotheses. Variables are specifically chosen to reveal the underlying processes. You start with a theory about which factors are responsible and then test that theory. |
|
|
Term
| Describe a type of rotation in Principal Components. |
|
Definition
| Varimax: an orthogonal rotation |
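Recent versions of scikit-learn support varimax directly; a sketch on toy data:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    X = np.random.rand(100, 6)   # toy data
    fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
    print(fa.components_)        # rotated loadings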
|
|
Term
| What is the measure of sampling adequacy in Principal Components? |
|
Definition
| the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy |
|
|
Term
| In principal components, where are the results of the rotation shown? |
|
Definition
| in the rotated component matrix |
|
|
Term
| In CFA, for the most parsimonious model... |
|
Definition
| ...one variable is defined by one factor, but it is possible for a variable to load on more than one factor. |
|
|
Term
| What is the computer program used to perform CFA? |
|
Definition
| AMOS |
|
Term
| What does the Regression Weights table in CFA tell us? |
|
Definition
| which regressions were significant |
|
|
Term
| What does the Standardized Regression Weights table in CFA tell us? |
|
Definition
the correlations between the observed variable and the corresponding common factor
the higher these are, the more reliable an indicator the variable is of the factor |
|
|
Term
| What does the Covariances table tell us in CFA? |
|
Definition
| indicates whether the factors themselves were significantly correlated (report using P) |
|
|
Term
| What does the Correlations table in CFA tell us? |
|
Definition
| provides the r (correlation coefficient) for the significant correlations seen in the Covariances table |
|
|
Term
| In CFA, what does the RMSEA tell us? |
|
Definition
| it is a measure of goodness of fit and should be below .06 |
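A standard textbook formula for RMSEA from the model chi-square (not necessarily the exact computation AMOS uses):

    import math

    def rmsea(chi2, df, n):
        # RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))
        return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

    print(rmsea(chi2=12.3, df=8, n=200))   # made-up values; approx. .052, below .06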
|
|
Term
| In CFA, what does the CMIN tell us? |
|
Definition
| it is the model chi-square, a measure of goodness of fit; the relative chi-square (CMIN/DF) should be less than 2 |
|
|
Term
| In CFA, what does the GFI tell us? |
|
Definition
| it is a measure of goodness of fit and shows how much variance is accounted for by the model |
|
|
Term
| What is the purpose of logistic regression? |
|
Definition
| to create an equation that maximizes the probability of predicting group membership for individuals |
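A minimal direct logistic regression sketch in statsmodels (file and variable names hypothetical):

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("depression.csv")   # hypothetical data; depressed coded 0/1
    # Direct approach: all predictors entered at once, fit by maximum likelihood.
    result = smf.logit("depressed ~ stress + sleep + exercise", data=df).fit()
    print(result.summary())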
|
|
Term
| What is the distributional assumption for logistic regression? |
|
Definition
| None: unlike MRC, logistic regression makes no assumptions about the distributions of the predictor variables (multivariate normality is not required). |
|
|
Term
| Name and describe the 3 approaches to logistic regression. |
|
Definition
-direct: all predictors are entered at once
-sequential: predictors are added in stages as determined by the researcher
-statistical: predictors are added one at a time to yield the best prediction at each stage |
|
|
Term
| What is the underlying rationale of logistic regression? |
|
Definition
| if a relationship between the outcome and predictors is found, the equation can then be used to predict outcomes for new cases |
|
|
Term
| What are the applications of logistic regression? |
|
Definition
-use maximum likelihood to estimate a model that best predicts group membership
-use goodness-of-fit tests to choose the model that does the best job of predicting with the fewest predictors (predictors + constant vs. constant only) |
|
|
Term
| What are the limitations of logistic regression? |
|
Definition
-the outcome variable must be a discrete value (depressed or not depressed, not in between)
-sample size
-multicollinearity (high correlation among predictors)
-outliers
-independence of errors (no repeated measures) |
|
|
Term
| What is the theoretical limitation of logistic regression? |
|
Definition
| causal inferences: correctly predicting group membership based on predictors does not mean those predictors cause that outcome |
|
|
Term
| Describe how logistic regression and MRC are similar. |
|
Definition
-both make predictions on individual cases (classifying one person as depressed or not)
-both are sensitive to high correlation among predictor variables |
|
|
Term
| Describe how logistic regression and MRC are different. |
|
Definition
-in logistic regression, coefficients reflect the ratio of two probabilities, while in MRC they reflect the weight for the predictor variable
-logistic regression and MRC have different distributional requirements for predictors (MRC has more) |
|
|
Term
| Give an example of a logistic regression experiment. |
|
Definition
-predicting depression based on 6 predictors
-half of the subjects depressed and half not (between-subjects design)
-find the model with the fewest predictors that accounts for the most variance
-result: 3 predictors add significantly to the model |
|
|
Term
| What are the important things to report for logistic regression? |
|
Definition
-significance of the full model vs. the constant-only model
-Hosmer and Lemeshow test for goodness of fit (non-significant = good fit)
-a significant Wald statistic means the predictor adds significantly to the model
-the odds ratio indicates that the odds of being depressed are so many times greater with a one-unit increase in that variable |
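Pulling those reportable numbers out of a fitted statsmodels logit (same hypothetical data as the earlier sketch; Hosmer-Lemeshow is not built in and is usually computed separately):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("depression.csv")   # hypothetical data; depressed coded 0/1
    result = smf.logit("depressed ~ stress + sleep + exercise", data=df).fit()

    print(result.llr, result.llr_pvalue) # full model vs. constant-only model (LR chi-square)
    print(np.exp(result.params))         # odds ratios: odds multiply by this per one-unit increase
    # Per-predictor Wald tests (z, P>|z|) appear in result.summary().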
|
|