Term: What is theory of mind?
Definition: Theory of mind is conceptualized as a child's ability to recognize a causal relationship between mental states and actions, and to recognize that beliefs can be false.

Term: What is internal consistency?
Definition: The homogeneity of the items used to measure a construct.

Term: What is unidimensionality?
Definition: The items underlie a single factor (construct).

Term: What do reliability and validity depend on?
Definition:

Term: What is multidimensionality?
Definition: The items underlie more than one factor.

Term: In terms of dimensionality, if the partial correlations between pairs of items are equal to zero, that means...
Definition: The item set is likely unidimensional.

Term: A set of items is unidimensional if...
Definition: Correlations among the items can be accounted for by a single factor.

Term: An item is unidimensional if...
Definition: It is a measure of only a single construct.

Term: Name three types of correlation coefficients used to examine relationships among items.
Definition:
1) Pearson
2) Polychoric
3) Tetrachoric

Term: Pearson correlation is used when...
Definition: The data are continuous, or when variables are measured on at least interval scales.

Term: What does the Pearson correlation measure?
Definition: The degree of linear relationship between X and Y; it is affected by outliers.

Term: Use a Spearman (rank) correlation when...
Definition: The data are skewed, outliers bias the estimate, or the data are sparse. It ranks the values, which helps smooth a curve toward a straight line.

Term: Use tetrachoric correlation when...
Definition: You have binary outcomes and you assume that both underlying traits are normally distributed.

Term: What is the difference between a correlation and a covariance?
Definition: Correlation is a scaled (standardized) version of covariance. Covariance is the sum of the cross-products of the deviations of X and Y from their means.

Term: Reliability is (two definitions)...
Definition:
1) The correlation between parallel measures
2) The ratio of true-score variance to total (observed) score variance

Term: What is Cohen's kappa?
Definition: An intraclass-type correlation coefficient that corrects for chance and measures inter-rater agreement for categorical items.

Term: How is kappa calculated?
Definition: Po = observed proportion of agreement; Pe = expected (chance) proportion of agreement.
kappa = (Po - Pe) / (1 - Pe)

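A minimal worked sketch of the kappa formula above, in Python; the 2x2 table of rating counts is hypothetical and chosen only for illustration.

import numpy as np

# Hypothetical agreement table: rows = rater 1 (yes/no), columns = rater 2 (yes/no)
table = np.array([[40.0, 10.0],
                  [ 5.0, 45.0]])
n = table.sum()

p_o = np.trace(table) / n            # observed proportion of agreement
row = table.sum(axis=1) / n          # rater 1 marginal proportions
col = table.sum(axis=0) / n          # rater 2 marginal proportions
p_e = (row * col).sum()              # agreement expected by chance

kappa = (p_o - p_e) / (1 - p_e)
print(p_o, p_e, kappa)               # ~0.85, 0.5, 0.7
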
        
Term: What is an intraclass correlation coefficient (ICC)?
Definition: It assesses inter-rater reliability as the variance in true scores divided by the variance in observed scores.

Term: What are the three research designs for ICCs?
Definition:
1) Unique
2) Random
3) Fixed

Term: ICCs can be calculated for which two types of ratings?
Definition: Average ratings or individual ratings.

Term: Explain the difference in terms of raters (unique, random, fixed).
Definition:
Unique: no overlap of raters.
Random: a pool of raters, with raters randomly selected, and then total overlap of raters.
Fixed: total overlap of raters, but the raters are assigned, not randomly selected.

Term: Unique rater design would be...
Definition: Each of the i subjects is rated by a unique set of m raters, such that the total number of raters is m*i.

Term: Provide an example of the unique rater design.
Definition: Each child in a survey is rated by two parents.

Term: Random rater design is...
Definition: m raters are drawn from a larger pool of raters. Each of the subjects is rated by each of the m raters.

Term: Provide an example of a random rater design.
Definition: Any psychiatrist's rating of subjects; questionnaire items drawn randomly from a large pool.

Term: What is a fixed rater design?
Definition: Each subject is rated by each of the same m raters, such that the total number of raters, R, is m. These raters are the only raters of interest.

Term: Can you provide an example of a fixed rater design?
Definition: A team of psychiatrists rating subjects in a clinical trial; a fixed set of questionnaire items.

Term: What is typically more reliable? Individual or averaged ratings?
Definition: Averaged ratings.

Term: Name three tests that give you internal consistency.
Definition:
1) Cronbach's alpha
2) Kuder-Richardson
3) Split-half

Term: Split-half estimate steps
Definition:
1) Arbitrarily divide the scale into two halves and create a total score for each half
2) Correlate the two total scores
3) Adjust the correlation upwards with the Spearman-Brown prophecy formula

Term: What is the Spearman-Brown prophecy formula?
Definition: It is used to account for a scale having more or fewer items than it currently does. It is a required adjustment with the split-half estimate.

Term: How is the Spearman-Brown prophecy formula calculated?
Definition: Let r = the observed (old) correlation and p = theoretical number of items / observed number of items.
Rsb = p*r / (1 + (p - 1)*r)

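A minimal sketch applying the formula above; the split-half correlation of 0.70 is hypothetical, and p = 2 reflects projecting from a half-length scale up to the full-length scale.

def spearman_brown(r, p):
    # r = observed correlation, p = new length / observed length
    return p * r / (1 + (p - 1) * r)

r_half = 0.70                          # hypothetical correlation between the two half-scales
print(spearman_brown(r_half, p=2))     # ~0.824: estimated reliability of the full-length scale
print(spearman_brown(0.824, p=0.5))    # ~0.70: projecting back down to half length
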
        
Term: How is Cronbach's alpha a function of scale length?
Definition: As scale length increases, alpha increases (holding the average inter-item correlation constant), because more items contribute shared variance to the total score.

Term: Cronbach's alpha is equivalent to which ICC?
Definition: ICC(3,k).

Term: Is Cronbach's alpha a measure of unidimensionality?
Definition: No, it is a measure of internal consistency after unidimensionality has been established.

Term: What measure of reliability should be used for test-retest continuous variables?
Definition: Pearson correlation or an ICC.

Term: What measure of reliability should be used for test-retest categorical variables?
Definition: Kappa.

Term: What measure of reliability should be used for inter-rater continuous variables?
Definition: ICC.

Term: What measure of reliability should be used for inter-rater categorical variables?
Definition: Kappa.

Term: What measure of reliability should be used for internal consistency with continuous variables?
Definition: Cronbach's alpha, split-half, or ICC.

Term: What measure of reliability should be used for internal consistency with categorical variables?
Definition: Kuder-Richardson (KR-20).

        
Term: Rank in order from weakest to strongest validity measures: construct, content, criterion, face.
Definition: (weakest to strongest)
1) Face
2) Content
3) Criterion
4) Construct

Term: What is face validity?
Definition: The extent to which an item appears to be valid.

Term: What is content validity?
Definition: The extent to which there is representativeness across the domain of meaning; also, the extent to which one can generalize from a particular collection of items to all possible items that would be representative of a specified domain of items.

Term: What is criterion validity?
Definition: The extent of correspondence of a measure with a criterion variable (gold standard).

Term: What are the two types of criterion validity?
Definition: Predictive and concurrent.

Term: What is sensitivity?
Definition: How well the test picks up cases from the true number of cases.

Term: What is specificity?
Definition: How well the test picks up controls (non-cases) from the true number of controls (non-cases).

Term: Increasing sensitivity increases what?
Definition: Type I errors (false positives).

Term: Increasing specificity increases...
Definition: Type II errors (false negatives).

Term: Kappa changes how with prevalence?
Definition: It goes down when a disease is rare.

Term: Positive predictive value is...
Definition: The proportion of people who test positive who really have the disease.

Term: Negative predictive value is...
Definition: The proportion of people who test negative who are really disease-free.

Term: How do PPV and NPV relate to prevalence?
Definition: The higher the disease prevalence (the more common the disease), the higher the PPV and the lower the NPV.

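A minimal sketch of this relationship for a hypothetical test with sensitivity and specificity both fixed at 0.90; the prevalences are chosen only for illustration.

def ppv_npv(sens, spec, prev):
    tp = sens * prev                 # true positives (as population proportions)
    fp = (1 - spec) * (1 - prev)     # false positives
    fn = (1 - sens) * prev           # false negatives
    tn = spec * (1 - prev)           # true negatives
    return tp / (tp + fp), tn / (tn + fn)

for prev in (0.01, 0.10, 0.50):
    ppv, npv = ppv_npv(sens=0.90, spec=0.90, prev=prev)
    print(prev, round(ppv, 3), round(npv, 3))
# PPV climbs from ~0.08 to 0.90 as prevalence rises, while NPV drifts down from ~0.999 to 0.90.
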
        
Term: What is an ROC curve?
Definition: Each point represents a 2x2 table and a probability cutoff used to distinguish people with and without disease (it is a measure of discrimination).

Term: What is the C statistic?
Definition: The area under the ROC curve.

Term: What does the C statistic mean?
Definition: The probability that a randomly chosen sick person (case) will score higher on the test ("more sick") than a randomly chosen well person (non-case).

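A minimal sketch of that pairwise interpretation, computing the proportion of concordant case/non-case pairs directly; the predicted risk scores are hypothetical.

# Hypothetical predicted risks for four cases and four non-cases
cases    = [0.9, 0.8, 0.7, 0.4]
controls = [0.6, 0.5, 0.3, 0.2]

pairs = [(c, d) for c in cases for d in controls]
wins = sum(1.0 if c > d else 0.5 if c == d else 0.0 for c, d in pairs)
print(wins / len(pairs))   # 0.875: proportion of concordant pairs = area under the ROC curve
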
        
Term: What are on the axes of the ROC curve? What does the ROC provide?
Definition: X = false-positive rate (1 - specificity); Y = true-positive rate (sensitivity). It is a measure of diagnostic utility.

Term: Where is the best cut-off point on the ROC curve?
Definition: The point closest to the top-left corner, which maximizes the combination of sensitivity and specificity.

Term: What is construct validity?
Definition: The degree to which a measure satisfies theoretical predictions about the construct, across a range of theories and with a range of modalities of measurement.

Term: Internal construct validity is...
Definition: The degree to which items in a measure are associated with each other in the theoretically predicted direction.

Term: Convergent vs. discriminant internal validity?
Definition: Convergent = similarity even with a different measurement modality. Discriminant = similar to measures of similar constructs, but distinct from measures of other constructs.

Term: Convergent internal validity is...
Definition: The degree to which a scale is associated with measures of similar constructs even when they are measured with a different modality.

Term: Discriminant construct validity is...
Definition: The degree to which a scale is associated with measures of similar constructs and not associated with measures of distinct constructs.

Term: What is the multitrait-multimethod (MTMM) matrix?
Definition: A way of determining internal construct validity by comparing the same traits measured with different methods.

Term: Describe the four measurements within the MTMM matrix.
Definition:
Monotrait-monomethod = the reliability diagonal.
Monotrait-heteromethod = the validity diagonal (convergent validity); it should not be as high as the reliability diagonal.
Heterotrait-monomethod > heterotrait-heteromethod (both should be lower than the diagonals).

Term: External construct validity is...
Definition: The degree to which the scale is associated with other constructs in the theoretically predicted direction (a generalization to other theories, not to populations -- the nomological network).

Term: The relationship between reliability and validity?
Definition: Reliability sets a maximum (ceiling) on validity: a measure cannot be valid unless it is reliable, but it can be reliable without being valid.

Term: What is the point of correcting for attenuation?
Definition: It is possible that the measure you are using is not 100% reliable; therefore your correlation will be weaker than it should be. The formula corrects for that.

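The card refers to "the formula" without stating it; below is a minimal sketch of the standard disattenuation formula, using a hypothetical observed correlation and hypothetical reliabilities for the two measures.

import math

r_observed = 0.42            # hypothetical observed correlation between two measures
rel_x, rel_y = 0.80, 0.70    # hypothetical reliabilities of X and Y

# corrected correlation = observed correlation / sqrt(reliability of X * reliability of Y)
r_corrected = r_observed / math.sqrt(rel_x * rel_y)
print(round(r_corrected, 3))   # ~0.561: the correlation expected with perfectly reliable measures
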
        
Term: What are the main uses of factor analysis?
Definition:
1) Data reduction
2) Determining the underlying variables among a set of observed variables

Term: What are the four assumptions of factor analysis?
Definition:
1) Measurement error has constant variance and is, on average, equal to 0
2) There is no association between the factor and measurement error
3) There is no association between errors
4) Local (conditional) independence: given the factor, the observed variables are independent of one another

Term: What is communality within a factor analysis model?
Definition: For standardized variables, this is the proportion of variability in X that can be explained by F (similar to the R-squared in regression analysis).

Term: What is the inverse of communality?
Definition: Uniqueness (1 - communality).

Term: Is uniqueness good or bad?
Definition: It is bad if an item is not related to the other items in the factor analysis.

Term: What do the loadings in a factor matrix tell you?
Definition: They represent the degree to which each of the variables is associated with each of the factors (they range from -1 to 1).

Term: What does a high factor loading tell you?
Definition: High loadings provide the meaning and interpretation of the factors.

Term: What are the steps to performing an EFA?
Definition:
1) Collect data and choose relevant variables
2) Extract initial factors (via PCA)
3) Choose the number of factors to retain
4) Choose an estimation method
5) Rotate and interpret
6) Construct scales for future use

Term: What questions need to be asked in a principal components analysis?
Definition:
1) How many factors to retain?
2) What type of factor analysis?
Note: PCA makes the number of factors equal to the number of variables.

Term: What are ways to determine how many factors to retain?
Definition:
1) Eigenvalues (> 1)
2) Scree plot (look for the elbow)
3) Parallel analysis
4) Theory about what makes sense

Term: What are eigenvalues in a factor analysis?
Definition: They represent the number of variables' worth of variance accounted for by a factor, and they help to explain the variance in the data. They do tend to overestimate the number of factors.

Term: What is a scree plot?
Definition: It shows the relative contribution (eigenvalue) of each factor. You want to retain the factors above the elbow.

Term: What is a parallel analysis in PCA?
Definition: It generates the eigenvalues that would be expected from random data. It is the most accurate method and is becoming the standard.

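A minimal sketch of parallel analysis using only numpy, on data simulated here purely for illustration (one common factor plus noise): keep the factors whose observed eigenvalues exceed those obtained from random data of the same shape.

import numpy as np

rng = np.random.default_rng(0)
n, k = 300, 8
latent = rng.normal(size=(n, 1))
data = latent @ np.ones((1, k)) + rng.normal(size=(n, k))   # one common factor + noise

obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

reps = 100
rand_eig = np.zeros((reps, k))
for i in range(reps):
    noise = rng.normal(size=(n, k))
    rand_eig[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]

threshold = rand_eig.mean(axis=0)            # or use the 95th percentile of the random eigenvalues
print(int((obs_eig > threshold).sum()))      # suggested number of factors to retain (1 for this simulated data)
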
        
Term: Describe the two types of rotation in a factor analysis.
Definition:
Oblique (e.g., promax) = factors are allowed to correlate.
Orthogonal (e.g., varimax) = factors are not correlated.

Term: What is the goal of factor rotation?
Definition: To distribute variance more evenly among factors and to make sharper distinctions in the data, i.e., to make the pattern matrix coefficients either very high or very low.

Term: Does rotation improve fit?
Definition: No. Rotation redistributes variance among the factors but does not change the overall fit (the communalities stay the same).

Term: An oblique rotation will produce what that an orthogonal rotation does not?
Definition: A pattern matrix (loadings) and a structure matrix (correlations).
Note: the pattern matrix equals the structure matrix for an orthogonal rotation.

Term: What are the key differences between CFA and EFA?
Definition: EFA is used to summarize data and describe the correlation structure between variables. CFA, on the other hand, is used to test consistency with a preconceived theory, is a kind of SEM, and is more useful when looking for associations between factors or between factors and other observed variables.

Term: How is latent class probability defined?
Definition: η1 = P(S = 1) and η2 = P(S = 2), the probabilities of membership in class 1 and class 2.

Term: What are the conditional probabilities in a latent class analysis?
Definition: π11 = the probability, within class 1, of endorsing symptom Y1.

Term: How do you calculate the pieces of data in an LCA?
Definition: 2^M - 1, where M is the number of observed items (Ys).

Term: What is the relationship between the number of parameters and the number of classes?
Definition: The number of parameters increases as the number of classes increases.

Term: What does identifiability mean?
Definition: Whether you can uniquely identify all the parameters (they all have unique solutions). It is an attribute of the model: do the parameters have unique interpretations?

Term: How do you determine whether a model is identifiable using the latent class probabilities, conditional probabilities, and pieces of data in an LCA?
Definition: Latent class probabilities (J - 1) + conditional probabilities (J*M) must be no greater than the pieces of data (2^M - 1).

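A tiny sketch of this parameter count for a hypothetical model with J = 2 classes and M = 4 binary indicators.

J, M = 2, 4                          # hypothetical: 2 classes, 4 binary indicators
parameters = (J - 1) + J * M         # latent class + conditional probabilities = 9
pieces_of_data = 2 ** M - 1          # independent response-pattern proportions = 15
print(parameters <= pieces_of_data)  # True: the necessary condition for identifiability is met
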
        
Term: Does identifiability improve or worsen as you remove or fix parameters?
Definition: It improves.

Term: What is estimability?
Definition: It is an attribute of the data: the question of whether the sample is large enough to estimate the parameters. It also speaks to the distribution of the data.

Term: What are the two assumptions of latent class modeling?
Definition:
1) Independent individuals
2) Conditional independence

Term: What does entropy measure?
Definition: The level of misclassification error in a latent class analysis; a high value is good.

Term: What is a Type I error?
Definition: Rejecting H0 when H0 is true.

Term: What is a Type II error?
Definition: A failure to reject H0 when Ha is true.

Term: What is a p-value?
Definition: The probability of obtaining a test statistic as extreme as, or more extreme than, the test statistic calculated from the current sample, if H0 is true.

Term: What is social desirability bias?
Definition: The tendency of respondents to reply in a manner that will be viewed favorably by others.

Term: What is acquiescence bias?
Definition: A tendency to agree with all the questions or to indicate a positive connotation.

Term: What is experimenter bias?
Definition: A bias towards a result expected by the human experimenter.

Term: Which ICC is typical for inter-rater reliability and which is typical for internal consistency?
Definition: Inter-rater: ICC(2,1) or ICC(2,k). Internal consistency: ICC(3,k) (Cronbach's alpha).

Term: What is statistical power?
Definition: The probability of rejecting H0 when H1 is true. Power is a function of N, Δ (effect size), and alpha; as these three increase, so does power.

Term: What is measurement?
Definition: "The process of assigning a number to an attribute (or phenomenon) according to a rule or set of rules," or the result obtained from such a process.

Term: What is reliability (repeatability)?
Definition: The extent to which measurements remain consistent over repeated tests of the same subject under identical conditions.

Term: What is the relationship between bias and validity?
Definition: The smaller the bias, the more valid the measure.

Term: What is a latent variable?
Definition: A variable that is unobserved, or not measured directly.

Term: What is the difference between variance and covariance?
Definition:
Variance measures the variability in one variable, X: the sum of (xi - xbar)^2.
Covariance measures how two variables, X and Y, covary: the sum of (xi - xbar)*(yi - ybar).

Term: What is the formula for a correlation?
Definition: r = covariance of X and Y / square root of (variance of X * variance of Y).

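A minimal sketch tying the variance, covariance, and correlation cards together on a small made-up data set, checked against numpy's built-in corrcoef.

import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])   # made-up data
y = np.array([1.0, 3.0, 2.0, 5.0])

n = len(x)
var_x = ((x - x.mean()) ** 2).sum() / (n - 1)               # sum of squared deviations
var_y = ((y - y.mean()) ** 2).sum() / (n - 1)
cov_xy = ((x - x.mean()) * (y - y.mean())).sum() / (n - 1)  # sum of cross-products of deviations

r = cov_xy / np.sqrt(var_x * var_y)                         # correlation = scaled covariance
print(round(r, 3))
print(round(np.corrcoef(x, y)[0, 1], 3))                    # same value from numpy directly
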
        
Term: What does internal consistency measure?
Definition: The degree of homogeneity of items within a scale.

Term: Poor internal consistency could be the consequence of what two things?
Definition:
1) More than one dimension
2) Bad items

Term: How do you calculate Cronbach's alpha?
Definition: alpha = (k / (k - 1)) * (1 - (sum of the item variances / variance of the total score)), where k is the number of items.

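A minimal sketch of that formula applied to an item matrix (rows = respondents, columns = items); the data are simulated here purely for illustration.

import numpy as np

rng = np.random.default_rng(1)
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(size=(200, 5))   # 5 noisy items measuring one construct

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)            # variance of each item
total_var = items.sum(axis=1).var(ddof=1)        # variance of the total score

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 3))                           # roughly 0.8 for this simulated scale
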
        
Term: What are examples of translational validity?
Definition:
Content (theory driven)
Face (appearance driven)

Term: Sensitivity and specificity are affected by the prevalence of what?
Definition: The prevalence of positive test results.

Term: Nomological networks were proposed by whom?
Definition: Cronbach and Meehl (1955).

Term: The multitrait-multimethod matrix was proposed by?
Definition: Campbell and Fiske (1959).

Term: What are the six steps to creating a scale?
Definition:
1. See if a suitable scale exists already
2. Define the construct carefully
3. Choose a modality
4. Generate items (choose a response format, choose wording carefully, avoid common biases)
5. Conduct pilot tests
6. Evaluate the results

Term: Give an example of some modalities for scales.
Definition:
1) Clinical rating
2) Examination
3) Self-report (structured interview)
4) Telephone interview
5) Computer-assisted interview
6) Paper and pencil
7) Informant interview

Term: What are the assumptions of IRT?
Definition:
1) Unidimensionality
2) Local independence
3) Invariance

Term: What is the item characteristic curve?
Definition: It plots the probability of responding correctly to an item as a function of the latent trait (denoted by θ).

Term: Three ways the item characteristic curve can be modified (and in what direction)?
Definition:
1) Difficulty (left-right)
2) Discrimination (steepness)
3) Adjustment for guessing (up-down)

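A minimal sketch of those three parameters in a three-parameter logistic (3PL) item characteristic curve; the parameter values are hypothetical and chosen only to show the direction of each effect.

import math

def icc(theta, a=1.0, b=0.0, c=0.0):
    # P(correct | theta) under a 3PL model: a = discrimination, b = difficulty, c = guessing
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

for theta in (-2, 0, 2):
    print(theta,
          round(icc(theta), 3),          # baseline item
          round(icc(theta, b=1.0), 3),   # harder item: curve shifts to the right
          round(icc(theta, a=2.5), 3),   # more discriminating item: curve is steeper
          round(icc(theta, c=0.2), 3))   # guessing: lower asymptote lifted to 0.2
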
Term: What does the information function in IRT tell you?
Definition: Information is a function of a, the discrimination parameter. It shows how much precision the item responses provide about the latent trait at each level of θ.

Term: What are three benefits of a large sample size?
Definition:
1) Minimize the probability of errors
2) Maximize the accuracy of population estimates
3) Increase the generalizability of the results

Term: Traits vs. states in terms of stability?
Definition: Traits are much more stable over time, as they describe underlying personality characteristics; states are reflections of the present moment and are more likely to fluctuate over time. Think of someone who has a melancholic personality (trait) vs. someone who is currently depressed (state).

Term: What are the potential consequences of using a Pearson correlation when you had an ordinal variable, in regards to FA? What should you have used?
Definition: Potential consequences are incorrect estimates of the factor loadings and attenuation of the correlations. You should have used a polychoric correlation.

Term: How do you calculate class membership given a symptom profile type?
Definition: You need to use Bayes' theorem here.
1) Calculate the conditional probability of the symptom profile for each class.
2) Multiply these symptom-profile probabilities by the class probabilities, and sum them to calculate alpha.
3) Divide the joint probability of the symptom profile and the class of interest by the alpha you just calculated.

This tells you, given the profile type, the probability that the person belongs to class X.

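A minimal sketch of those Bayes'-theorem steps for a hypothetical 2-class model with three binary symptoms; all of the probabilities are made up, and conditional independence of the symptoms within class is assumed.

classes = {1: 0.3, 2: 0.7}            # hypothetical latent class probabilities
cond = {1: [0.9, 0.8, 0.7],           # P(symptom m endorsed | class 1)
        2: [0.2, 0.1, 0.3]}           # P(symptom m endorsed | class 2)
profile = [1, 1, 0]                   # observed symptom profile

def profile_prob(p_endorse, y):
    # step 1: conditional probability of the profile, assuming conditional independence
    prob = 1.0
    for pm, ym in zip(p_endorse, y):
        prob *= pm if ym == 1 else (1 - pm)
    return prob

# step 2: joint probabilities of the profile with each class; their sum is alpha
joint = {c: classes[c] * profile_prob(cond[c], profile) for c in classes}
alpha = sum(joint.values())

# step 3: posterior probability of class membership given the profile
posterior = {c: round(joint[c] / alpha, 3) for c in classes}
print(posterior)                      # {1: 0.869, 2: 0.131}
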
        
Term: What does Levene's test do?
Definition: It tests for equal variances across groups, with the null hypothesis being that they are all equal; it is used in ANOVAs.

Term: What is the null hypothesis in an ANOVA, and what does the F-statistic tell you?
Definition: The null hypothesis is that the mean of Y is the same in all groups; a significant F-statistic tells you that the mean value of Y differs in at least one of the groups compared to the others.

Term: What does the Central Limit Theorem say?
Definition: The distribution of sample means (from all possible samples of a given sample size) is approximately normal.

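A minimal simulation sketch: sample means drawn from a clearly non-normal (exponential) population still pile up in a roughly normal distribution around the population mean; the population and sample size are made up.

import numpy as np

rng = np.random.default_rng(2)
# 10,000 samples of size 50 from an Exponential(1) population (mean 1, clearly skewed)
sample_means = rng.exponential(scale=1.0, size=(10_000, 50)).mean(axis=1)

print(round(sample_means.mean(), 3))       # close to the population mean of 1.0
print(round(sample_means.std(ddof=1), 3))  # close to 1/sqrt(50) ~ 0.141, as the CLT predicts
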
        
Term: A hazard at time t is approximately...
Definition: The expected number of events per unit time at time t, divided by the number at risk of an event at time t.

Term: What is the coefficient of determination (r2)?
Definition: The proportion of the variability in Y explained by X.

Term: What is the order of the mean, mode, and median in a positive skew (right skew)?
Definition: The bulk of the values is at the lower end (the tail points toward higher values): Mode < Median < Mean.

Term: What is the order of the mean, mode, and median in a negative skew (left skew)?
Definition: The bulk of the values is at the higher end (the tail points toward lower values): Mean < Median < Mode.

Term: What are the assumptions in linear regression models?
Definition:
1. (L)inear relationship
2. (I)ndependence of observations
3. (N)ormally distributed errors
4. (E)qual variance of the error terms

Remember to stay in LINE.

Term: What is the addition rule?
Definition: P(A or B) = P(A) + P(B) - P(A and B).

Term: What is the item-rest correlation?
Definition: The correlation of the item with the remaining items in the subscale (excluding that item).

Term: What is the item-test correlation?
Definition: The correlation of the item with the entire scale, including that item.

Term: How do you improve identifiability?
Definition: Remove parameters, for example by decreasing the number of classes and indicators.

Term: How do you improve estimability?
Definition: Increase the sample size, decrease the number of classes, and increase the number of random starts.
