Shared Flashcard Set

Details

Experimental Design – Exam 3 Study
Total Cards: 74
Subject: Psychology
Level: Graduate
Created: 05/11/2009

Cards

Term
MRC
Definition
A general, flexible, universal technique for assessing relationships of any form using independent variables of any type. Backbone of the general linear model. MRC can do anything that ANOVA (or ANCOVA) can do, but not vice versa. Problems of unequal sample sizes are reduced with this technique.
Term
MRC ≠ correlation
Definition
MRC can be used to analyze data collected in experimental studies as well as in correlational studies. The question of causation is a methodological one, not a statistical one. The analytic strategy is guided by the types of variables/data in studies.
Term
Advantages of MRC
Definition
Besides that whole universality thing… the production of “built-in” measures of effect size and the capacity to handle complexity.
Term
MRC effect size
Definition
r (and R) – magnitude (and, for r, direction) of the relationship between IV(s) and DV. r² (and R²) – proportion of the DV’s variance accounted for by the IV(s); the difference between r and R is one IV versus many IVs. Beta – standardized regression coefficient, the magnitude of the unique relationship between one IV and the DV (unit-free, easily understood, and easily compared).
Term
MRC handles complexity
Definition
(1) multiplicity of influences, (2) correlation among research factors and partialling, (3) form of information, (4) shape of the relationship, and (5) general and conditional relationships.
Term
MRC Multiplicity of influences
Definition
Simple experiment – 1 IV and 1 DV, y = f(x). Two approaches to more complex experiments – (1) Exploring single factors in multiple studies: y = f(x1), then y = f(x2), then y = f(x3). (2) Exploring multiple factors in single studies, MRC: y = f(x1, x2, x3), which allows you to assess the influences of IVs individually and as sets.
Term
MRC Correlation among research factors and partialling
Definition
(1) Simple case: orthogonal IVs (unrelated x1, x2 and x3). But if they are related, there may be overlap in the prediction of y. MRC allows the unique predictive value of each IV to be assessed. Can assess direct (IV affects DV), mediator (connector variable – the effect goes through another variable), or suppressor effects.
Term
MRC Form of Information
Definition
MRC can be employed with variables of any form of measurement. (1) Quantitative – continuous: ratio, interval and ordinal. (2) Qualitative – categorical, nominal; dummy coding is used to carry the variable’s info.
Term
MRC Shape of the relationship
Definition
MRC can examine relationships of any type (linear and curvilinear). MRC is linear in the sense that all relationships will be expressed in a line equation: Y = A + B1X1 + B2X2 + B3X3 + B4X4 + …. Each B = the relationship between an IV and the DV. Each X = an IV (a term representing a predictor). An “IV” = a variable or a function of the variable(s) (e.g. product terms to carry interaction effects, or polynomials to carry curvilinear relationships).
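To make the “linear in the coefficients” point concrete, here is a minimal numpy sketch (synthetic data; all variable names hypothetical) in which a curvilinear relationship is carried by a polynomial term inside an ordinary linear equation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 + 1.5 * x - 0.8 * x**2 + rng.normal(scale=0.5, size=200)  # curvilinear truth

# The model is still "linear" -- linear in the coefficients -- even though
# one predictor column is a function (x^2) of another.
X = np.column_stack([np.ones_like(x), x, x**2])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coefs)  # approximately [2.0, 1.5, -0.8]
```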
Term
MRC General and Conditional Relationships
Definition
MRC can detect both types of effects. (1) General/Unconditional relationships – the effects of X1 on Y are the same at all levels of X2, X3, and X4 (main effects). (2) Conditional relationships – the effects of X1 on Y depend on the levels of X2, X3, and/or X4 (interactions).
Term
Role in Causal Analysis
Definition
For definitive causality, the conditions of causality must be met; this is not always successfully achieved. Causal models (e.g. path analysis, structural equation modeling (SEM)) state a theory in terms of the relevant variables to construct a causal diagram, then assess whether the observed data fit the theorized model. MRC is instrumental in these analyses.
Term
MRC Techniques (General)
Definition
(1) Bivariate correlation – one IV and one continuous DV. (2) Simple Regression – one IV and one continuous DV. (3) Multiple regression – more than one IV and one continuous DV. (simultaneous, stepwise, and hierarchical types)
Term
Bivariate Correlation
Definition
(*) relationships between two variables. (*) a bivariate relationship exists when one variable carries information about another variable with it. (*) can be investigated graphically (scatterplot) (*) can be investigated statistically (correlation coefficient).
Term
Scatterplot
Definition
(*) data display used to show the relationship between two variables. (*) x-axis is used to display the values of the IV (predictor) (*) y-axis is used to display the values of the DV (criterion). (*) often a theoretical distinction between predictor and criterion because the IV is often not manipulated.
Term
Examples of Scatterplots
Definition
(*) estimating the strength of relationship – the spread of the points around the line of best fit gives you an indication of the strength. (*) the line of best fit rotates around the centroid because that is the best predictor of our data. (*) Types of relationships: positive; negative; linear – the association between variables is constant throughout the scales, a unit increase in one variable is accompanied by a constant increase in the other variable, and the IV gives you the same information for people who scored low, medium, or high; curvilinear – the association between variables depends on where you are on the scales – usually unstable and unreliable.
Term
Calculating Strength of the Relationship
Definition
(*) we want to calculate the Pearson product-moment r (*) convert to standard scores to make units comparable before calculating r (so that comparison is possible between variables and differences are measured in numbers of SDs).
Term
Why do we convert to Z?
Definition
After conversion: (1) in a data set, Σz = 0 (2) SD² = 1 and SD = 1 (3) the shape of the distribution of the data and the relationships among the variables remain unchanged; that is, the proportionality of the differences between scores remains the same. (*) r changes as a function of the differences between pairs of z-scores. (*) perfect positive relationship – all (Zx, Zy) pairs are exactly the same values; the degree of relationship is a function of the departure from this perfect state. (*) as the Zx, Zy differences get less systematic, r gets closer to a value of 0.
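A quick numpy check of the three properties listed above (synthetic data; a sketch, not course code):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=50, scale=10, size=1000)

z = (x - x.mean()) / x.std()     # population SD, matching the card's SD^2 = 1
print(round(z.sum(), 10))        # (1) sum of z-scores is 0
print(round(z.std(), 10))        # (2) SD (and SD^2) is 1
# (3) shape unchanged: z is a linear transformation of x, so the
# proportionality of differences between scores is preserved.
```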
Term
Calculating r
Definition
(*) Find the average squared deviation between Zx and Zy across all the data pairs. If the relationship is perfectly positive, this value equals zero. (*) Next, employ a linear transformation to increase the interpretability of the statistic.
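Written out, the recipe above is the standard identity (a sketch of the algebra in the usual textbook notation, not the course’s exact derivation):

```latex
\frac{1}{N}\sum_j (z_{x_j}-z_{y_j})^2
  = \underbrace{\tfrac{1}{N}\sum z_x^2}_{=1}
    - \tfrac{2}{N}\sum z_x z_y
    + \underbrace{\tfrac{1}{N}\sum z_y^2}_{=1}
  = 2 - 2r,
\qquad\text{so}\qquad
r = 1 - \frac{\sum (z_x - z_y)^2}{2N} = \frac{\sum z_x z_y}{N}.
```

A perfect positive relationship gives zero squared deviations and hence r = 1; maximal disagreement (Zy = −Zx) drives r to −1.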
Term
Pearson product moment r
Definition
(*) standard measure of the linear relationship between two variables. (*) features: (a) a pure number, it is independent of the units of measurement (b) its absolute value (0-1) shows degree of relationship. (c) Sign indicates direction of relationship.
Term
Theoretical calculation of r
Definition
(*) r is a ratio of the covariation of the variables to the total variation of each of the variables. (*) effect/error, per the GLM: covary/total = covary/(covary + error). (*) if |r| = 1, error = 0, so the ratio is covary/covary = 1 (*) if r = 0, covary = 0, so the ratio is 0/error = 0 (*) as covariation consumes more of the total variation (i.e. a stronger relationship), the numerator increases and r’s value increases. (*) the numerator can never be larger than the denominator – which is why r maxes out at one.
Term
Special cases for r calculation
Definition
(*) slight adjustments in calculations for all of these techniques. (*) each of these will yield identical r values to those produced by the general equation. (*) to simplify hand calculations – (1) point-biserial correlation – one variable is dichotomous. (2) Phi coefficient – both variables are dichotomous (interjudge or interrater reliability). (3) Rank correlation (Spearman’s) – both variables are ranked.
Term
Beyond Correlation – Simple Regression
Definition
In bivariate correlation, the two variables are treated as if they were equivalent. (*) in research one variable is conceptualized as the IV and the other variable as the DV. (*) that is, one variable is targeted as more reasonably being the causal agent of the other variable. (*) Therefore, one variable is chosen to predict the other.
Term
Predicting Y from X
Definition
(*) Assess change in Y per unit change in X (*) if rxy = 1 – perfect positive relationship – then for any participant j: Ẑyj = Zxj. (*) converting from z-scores to raw scores: (Ŷj − My)/SDy = (Xj − Mx)/SDx. (*) solve for Ŷj: Ŷj = (SDy/SDx)(Xj − Mx) + My (*) given a value for Xj, the value of Y can be easily predicted.
Term
How to estimate Y from X
Definition
(*) least squares criterion – balances deviations to zero (i.e. over- and under-predictions cancel out) – minimizes the sum of the squared deviations Σ(Y − Ŷ)² of actual from estimate, minimizing inaccuracies. No prediction is perfect (on the line of best fit), so we minimize the total squared distance of points from the regression line. Find the centroid (X̄, Ȳ), then rotate the line around that point until Σ(Y − Ŷ)² is smallest.
Term
How to estimate Y from X (nonperfect relationships)
Definition
(*) Rather than using Ẑy = Zx (*) use Ẑy = rZx (multiplied by the relationship) (*) convert to raw data, then solve for Ŷ. Same as the other equation, except multiplied by the relationship as well.
Term
Creation of the Regression Line
Definition
(*) Form of line: Ŷ = slope·X + intercept, i.e. Ŷ = ByxX + Ayx (*) Byx = (unstandardized) regression coefficient – rate of change in Y per unit change in X. (*) Ayx = regression constant or intercept – makes adjustments for the difference between My and Mx.
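For reference, these two quantities have the usual closed forms (standard textbook results consistent with the z-score derivation two cards up, not stated on the card itself):

```latex
B_{YX} = r_{XY}\,\frac{SD_Y}{SD_X}, \qquad A_{YX} = M_Y - B_{YX}\,M_X .
```

Substituting both into Ŷ = ByxX + Ayx recovers the raw-score prediction equation from the “Predicting Y from X” card.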
Term
Regression to the Mean
Definition
(*) for nonperfect relationships, extreme X values will be paired with less extreme Y values (both X & Y converted to Z). (*) This is a mathematical necessity. (*) If extreme X went with extreme Y, the relationship would be close to perfect. (*) The weaker the relationship, the more regression to the mean. (*) This is not a disadvantage of regression, an “artifact”, or a “regression fallacy”.
Term
Partitioning the Variance of Y
Definition
(*) the variance of Y (SD²y) can be split into two parts: (*) variance associated with X – equal to the variance of Ŷ (SD²ŷ) (*) variance not associated with X – equal to the variance of the deviations between Y and Ŷ (SD²(y−ŷ)).
Term
Proportion of Variance
Definition
(*) r² is the area of overlap, the proportion of variance of Y associated with X – must be positive; its maximum value is 1. (*) (1 − r²) is the area of non-overlap (of Y), the proportion of Y not associated with X.
Term
MRC Significance Testing
Definition
(*) Assumptions – none technically need to be satisfied for calculations performed solely to describe the data collected. The following are useful when some inferential conclusion is to be drawn. Derived from the fixed linear regression model: (1) X is a “fixed” IV (manipulated). (2) Normality of the deviations of Y at each X value. (3) Homogeneity of variance of Y at each X value – if violations occur, tests are generally robust. (*) Transformations may be appropriate if there is heteroscedasticity (i.e. the residuals have severe heterogeneity of variance at different X values). H0: no difference/relationship/effect. Many possible alternative hypotheses. Don’t need to know for exam.
Term
Factors Affecting the Size of r
Definition
(1) Distributions of X and Y – perfect relationships require distribution similarity. (2) Reliability of the variables – measurement error may reduce correlations. (3) Restrictions in the ranges of the variables – may affect correlations in unpredictable ways. (4) Part-whole correlations – when one variable contributes to or is part of another variable – may result in spurious correlations. (5) Ratio or index variables – when one variable is divided by another – may result in spurious correlations. (6) Curvilinear relationships – the (linear) correlation will underestimate the relationship.
Term
Cronbach’s Alpha
Definition
(*) measure of internal consistency (*) if you split a measure’s items in half and correlate the halves, they should be correlated with each other. Split-half reliability = the average of all possible split-halves. (*) if the halves are not highly correlated, then you are not measuring what you think you are measuring – there’s crap messing up your measurement. (*) Fix: disattenuation – a mathematical correction for unreliability that looks at the relationship of two measures as if there were no error (as if Cronbach’s alpha = 1). When predicting the null you might want to consider using this procedure, because you are expecting nothing to happen even in a perfect world. (*) Another fix could be SEM (Structural Equation Modeling) – a cleaned-up theoretical version of your construct – which can control for acquiescent response set and single-method bias as well.
Term
Multiple Regression
Definition
(*) Regression with two or more predictors (*) provides the foundation for the empirical assessment of causal models (*) will create regression lines using the predictive weights of the respective predictors. (*) However, we need to account for the redundancy of prediction in any case in which a non-zero correlation exists between the predictors.
Term
Regression Line with 2 IVs
Definition
(*) creation of a regression line of the form: Ŷ = By1|2X1 + By2|1X2 + Ay|12 (*) By1|2 = relationship of IV1 controlling for IV2 – the partial regression coefficient for Y on X1 when X2 is also in the model (*) By2|1 = relationship of IV2 controlling for IV1 – the partial regression coefficient for Y on X2 when X1 is also in the model – redundancy between the variables is removed. (*) Ay|12 = Y intercept – minimizes deviations between Y and Ŷ, per the least squares criterion, based on the values of X1 and X2.
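A minimal numpy illustration of partial regression coefficients with correlated predictors (synthetic data; variable names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(scale=0.8, size=n)   # correlated predictors (redundancy)
y = 1.0 + 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)

# Zero-order (simple) slope of y on x1 alone:
b_zero_order = np.polyfit(x1, y, 1)[0]

# Partial regression coefficients with both IVs in the model:
X = np.column_stack([np.ones(n), x1, x2])
a, b1_given_2, b2_given_1 = np.linalg.lstsq(X, y, rcond=None)[0]

print(b_zero_order, b1_given_2)  # zero-order slope > partial slope here,
                                 # because x2's redundant prediction is removed
```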
Term
Regression W/ 2 IVs
Definition
(*) if redundancy of prediction by X1 and X2 exists, then their partial regression coefficients < their zero-order B values. (*) Interpretation of partial regression coefficients – (By1|2) = for any given value of X2, Y changes by this amount per one-unit change in X1 – (By2|1) = for any given value of X1, Y changes by this amount per one-unit change in X2. (*) The other IV is “held constant, controlled for, partialled out, residualized”.
Term
R and R^2
Definition
r² (between two IVs) = the variance each IV shares with the other (i.e. redundancy).
R = the measure of association between the DV and a set of IVs (ranges from 0 to 1); it is the correlation between the actual Y values and the values predicted for Y by the regression equation containing the indicated predictors.
R² = the proportion of DV variance shared with the set of IVs (ranges from 0 to 1).
(*) Given that SD²y = SD²ŷ.12 + SD²(y−ŷ.12) – the total variance of Y equals the variance predicted by X1 and X2 plus the variance not predicted by X1 and X2. (*) Then: R² = SD²ŷ.12/SD²y – the proportion of variance of Y predicted by X1 & X2 (*) and: 1 − R² = SD²(y−ŷ.12)/SD²y – the proportion of variance of Y not predicted by X1 and X2. Note: R can never be less than the largest r between Y and an IV – it will almost always be larger; the whole cannot be less than the parts.
Term
Beta
Definition
(*) betas are more useful in evaluating the strength of the relationship. Betas don’t account for variance; they tell us the unique relationship between an IV and the DV, with redundancy removed.
Term
Semipartial Correlation Coefficients
Definition
(*) defining the contribution of each IV to R (*) sr (*) sr² – indicates the increase in the overall predicted variance of Y (i.e. in R²) when the IV is added to the regression model (*) simple regression – the amount of overlap between IV and DV. (*) multiple regression (Venn diagram): e = unexplained variance, e = 1 − R²; a + b + c = R² (explained variance) = R²y|12; a = variance uniquely predicted by X1 = sr²1; b = variance uniquely predicted by X2 = sr²2; c = variance redundantly accounted for by X1 and X2; (a + c) = total variance predicted by X1 (unique + redundant); (b + c) = total variance predicted by X2 (unique + redundant); a = a/(a + b + c + e) = a/1, same with b. (*) sr1 is the correlation between Y and X1 from which the effects of X2 have been removed – semipartial because the effects of X2 have been removed from X1 but not from Y.
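A small numpy sketch of sr² as the increment in R² when an IV enters the model (synthetic data; the r_squared helper is hypothetical):

```python
import numpy as np

def r_squared(X, y):
    """R^2 for an OLS fit of y on X (X already includes an intercept column)."""
    yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(3)
n = 400
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
y = 0.4 * x1 + 0.4 * x2 + rng.normal(size=n)

ones = np.ones(n)
R2_full = r_squared(np.column_stack([ones, x1, x2]), y)
R2_without_x1 = r_squared(np.column_stack([ones, x2]), y)

sr2_x1 = R2_full - R2_without_x1   # unique variance ("area a") added by x1
print(sr2_x1)
```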
Term
Partial Correlation Coefficients
Definition
(*) estimates the proportion of variance of Y that is estimated by an IV and is not estimated by the other IVs. pr²1 = a/(a + e) – out of the variance left over, i.e. the variance not predicted by the other IVs. pr² is the proportion of SD²y that is associated only with this IV (e.g. X1) and not with the other IVs (e.g. X2) – indicates the correlation between Y and IV1 when both the DV and the IV have been residualized on the other IV(s).
Term
Relationship between partials and semi-partials
Definition
(*) the numbers from the two won’t match, but the conclusions will. (*) note the additional factors in the denominators of these equations compared to the calculation of semipartial correlations: the denominator for the partial calculation does not represent the total variance of Y – it contains only the variance of Y left over after the removal of the other IVs. Thus the relationship between sr & pr: sr < pr in virtually every instance – the semipartial is the smaller number because less is taken away from its denominator, making the denominator a bigger number and thus the value smaller. sr = pr only when the r between Y and the other IVs = 0. (*) R² may be significant with no individual IVs significant – will occur with high levels of redundancy (*) IVs may be significant but not R² – will occur when IVs with no contribution dilute the effects (*) in general, treat these as nonsignificant (like a protected “t”).
Term
Multiple Regression Strategies
Definition
(1) Simultaneous regression – entry of all IVs at the same step, all IVs have all other IVs partialled out (2) hierarchical regression (theoretically controlled by experimenter) – entry of IVs into the predictive model in a series of preordered steps. (3) Stepwise regression (empirically controlled by the statistical program) – entry (or elimination) of IVs into predictive model in order of greatest (or smallest) contribution.
Term
Hierarchical Regression
Definition
(*) entry of IVs into predictive models in a prespecified sequence to determine changes in the value of the model (i.e. in R²). The sequence is determined by theory and logic. (1) Causal priority and removal of confounds – enter IVs only after IVs that may account for spurious relationships have been entered – no IV entered later in the sequence may be a cause of an earlier entry (i.e. t1 before t2) – enter potential confounds in the first step (covariates). (2) Research relevance – enter IVs in decreasing order of relevance; this increases power for the earlier tests and does not dilute effects with less important IVs. (3) Order is everything – all statistics are completely determined by the order of entry.
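A minimal sketch of a two-step hierarchical entry with an R² change (synthetic data; variable names hypothetical):

```python
import numpy as np

def r_squared(X, y):
    yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(4)
n = 300
covariate = rng.normal(size=n)                    # step 1: potential confound
focal = 0.4 * covariate + rng.normal(size=n)      # step 2: IV of interest
y = 0.5 * covariate + 0.3 * focal + rng.normal(size=n)

ones = np.ones(n)
R2_step1 = r_squared(np.column_stack([ones, covariate]), y)
R2_step2 = r_squared(np.column_stack([ones, covariate, focal]), y)

print(R2_step1, R2_step2 - R2_step1)  # increment attributable to the focal IV
```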
Term
Stepwise regression
Definition
(*) differs from hierarchical in terms of underlying philosophy – does not test theory and is not guided by logic – provides an optimal empirical predictive model. (1) Forward – adds IVs in decreasing order of sr² value until additions cease to significantly increase R² (2) Backward – removes IVs in ascending order of sr² value until deletions start to significantly decrease R²
Term
Problems of Stepwise Regression and Cases Where it is Appropriate
Definition
(1) does not require researchers to formalize their thinking about (potential) causal relationships (2) capitalizes on chance relationships in a sample (3) these problems are exaggerated with large k (i.e., # of IVs) (4) solutions are highly unstable (a problem with regression in general, because the regression equation doesn’t stay the same, especially when IVs are correlated with each other). (*) Appropriate when: (1) the model is predictive rather than explanatory (2) N is large and k is reasonable (40:1 ratio?) (3) cross-validation among samples is done and only conclusions common among multiple samples are drawn.
Term
Structural vs. Functional sets
Definition
(*) classification of IVs into sets: structural and functional sets. (*) structural sets – when research factors cannot be fully represented as single IVs. (*) functional sets – when variables can be theoretically ordered and/or combined when assessing effects.
Term
Necessity for structural sets
Definition
(1) when using nominal/qualitative scales – comprised of mutually exclusive and exhaustive categories – if research factor G has g categories, then it has g − 1 aspects; these are needed to identify individuals who fall into each category for the purpose of regression analysis – e.g. a variable to identify participants’ race. (2) Possibly with quantitative variables – research factors may be related to DVs in linear and/or nonlinear ways – structural sets aid in the detection and assessment of nonlinear effects – inclusion of linear and nonlinear IVs for the same research factor (e.g. age, age², age³, log(age)). (3) for the inclusion and identification of individuals with missing data – can include an IV (missing/not missing) for any research factor to examine relationships with the willingness to complete research measures – especially relevant when (a) data are difficult to collect or (b) the sample size is small by necessity.
Term
Functional Sets
Definition
(*) determined by the theory behind and the logic of the research (*) research factors (and sets of research factors) may be hypothesized to precede other research factors. (*) makes sense to partial out these effects first – analogous to ANCOVA (*) all antecedent factors should be entered into the model before the primary factors of interest (usual suspects include demographics, pretest scores).
Term
Choosing the size of Functional and Structural sets
Definition
(*) it is of paramount importance to minimize the number of sets arranged for entry into the hierarchical regression analysis. (*) higher numbers of sets reduce the power of the primary analysis & increase the chances of discovering spurious relationships between single IVs and the DV. (*) reduction of sets may be achieved by factor and/or cluster analysis.
Term
Advantage of Alternative Hypothesis – Hierarchical Analysis of sets
Definition
identification of the incremental improvement in the proportion of predicted Y variance with the addition of each new set (*) For the case of factors comprised of one IV each, for k IVs: R²y|12…k = r²y1 + r²y2|1 + r²y3|12 + … + r²yk|12…(k−1) (*) For factors comprised of sets of IVs – for sets T, U, V & W: R²y|TUVW = R²yT + R²yU|T + R²yV|TU + R²yW|TUV
Term
Examining Hierarchical Analysis of sets
Definition
After the model summary table comes the ANOVA output – it tests the overall model, not the increment; the increment is tested by the change statistics. (*) the ANOVA tests whether R² accounts for a significant amount of variance. (*) coefficients table – evaluates predictors at their point of entry.
Term
Note on Using Sets
Definition
(*) The partitioning of variance for different sets is a direct extension of the partitioning of variance for different IVs. (*) Note the use of multiple R for sets versus the use of r for individual IVs. (*) Significance testing is a direct extension from our previous discussions as well.
Term
Testing and Probing Interactions in MRC
Definition
(*) the interaction term for IVs of any type (nominal, quantitative, combo) is carried by the product of the variables (you can create interaction terms to enter into regular models as predictors). (*) An interaction exists when IVs have a joint effect that accounts for variance of Y beyond that which is accounted for by their singular effects (i.e. main effects) – that is, the relationship between X1 and Y depends on the value of X2. (*) Assessed when the predictions of Y by X1 and by X2 are partialled from the prediction by X1X2 – you have to remove the effects of X1 and X2 before you can evaluate the product – the product must add to the equation above and beyond the MEs (*) probed via simple slopes.
Term
Testing Main Effects & Interactions
Definition
(*) Testing the interaction of X1 & X2 in the prediction of Y. (*) the product (X1X2) carries the interaction (X1 × X2). (*) The term (X1X2 | X1, X2) is the interaction – the MEs of X1 and X2 are partialled out of it – however, do not partial the interaction term from the MEs (*) Testing MEs and interactions is accomplished with hierarchical regression – (step 1) enter the MEs X1 and X2 – (step 2) enter the interaction term; it will be evaluated as X1X2 | X1, X2
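A sketch of the two-step test, assuming statsmodels is available (synthetic data; compare_f_test tests the step-2 increment against the step-1 model):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 400
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 0.5 * x1 + 0.3 * x2 + 0.4 * x1 * x2 + rng.normal(size=n)

# Step 1: main effects only.
step1 = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
# Step 2: add the product term carrying the interaction.
step2 = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2, x1 * x2]))).fit()

print(step2.rsquared - step1.rsquared)  # R^2 change for the interaction
print(step2.compare_f_test(step1))      # (F, p, df diff) for that increment
```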
Term
Three-Way Interaction Terms
Definition
(*) the interaction effect among research factors X1, X2 & X3 on Y is carried by the product X1X2X3 (*) interaction X1 × X2 × X3 = X1X2X3 | X1, X2, X3, X1X2, X1X3, X2X3 (*) partial out both the main effects and the 2-way interactions (*) hierarchical regression – (step 1) enter the main effects X1, X2, X3 – (step 2) enter the 2-way interactions X1X2, X1X3, X2X3 – (step 3) enter the 3-way interaction X1X2X3 (*) this procedure is simply extended to situations in which more complex interactions are to be assessed.
Term
Interactions among sets of IVs
Definition
(*) essentially the same procedure is followed. (*) the interaction between sets of research factors is carried by the products of each combination of IVs from the sets – (set A) x1, x2, x3 – (set B) x4, x5 – (interaction A × B is carried by) x1x4, x1x5, x2x4, x2x5, x3x4, x3x5 (*) after partialling out the singular effects of A and B, together these represent the interaction of A and B. (*) each term represents an aspect of the interaction.
Term
Scope of this Approach (Hierarchical Multiple Linear Regression Analysis – Simple Slopes)
Definition
(*) interactions may be assessed between (a) any # of predictors (b) predictors of any type (cont × cont, cont × cat, cat × cat) (c) any # of sets (d) sets containing any type of predictors (e) the same general procedure is used in every case (f) probing effects may be complicated, but it is possible.
Term
Simple Slopes between Continuous Variables
Definition
(*) Enter MEs together in step 1 – unless one has causal priority (*) enter the interaction product in step 2 (*) assess the interaction for significance – interpret the value and significance of the R² change (*) regression equation: Ŷ = B1X1 + B2X2 + B3X1X2 + A
Term
Probing Interactions (MRC)
Definition
(*) solve the regression equation as a function of one IV (*) solved as a function of X1: Ŷ = (B1 + B3X2)X1 + (B2X2 + A) – hold X2 constant and solve for X1 (*) this represents a family of regression lines, each with a slope = (B1 + B3X2) and an intercept = (B2X2 + A) (*) each is a simple regression line with a simple slope (*) the regression of Y on X1 depends on the value of X2 (*) insert high and low X2 values to evaluate the lines and slopes – you can insert values for X2 to assess differences in the slopes, i.e. to see how the effect of X1 on Y changes.
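A numpy sketch of probing via simple slopes at low, mean, and high X2 (synthetic data; ±1 SD chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 0.5 * x1 + 0.3 * x2 + 0.4 * x1 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
a, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]

# Simple slope of y on x1 at a chosen value of x2: (b1 + b3 * x2_value)
for x2_value in (-x2.std(), 0.0, x2.std()):       # low, mean, high x2
    print(x2_value, b1 + b3 * x2_value)
```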
Term
Centering your variables
Definition
(*) center the independent variables (*) create a product term to carry the interaction (*) to center, subtract the mean of the factor from every value – like standardizing, except the standard deviation is unchanged.
Term
Interpreting Simple Slopes
Definition
(*) simple slopes – the relationship of a predictor with the dependent measure at one level of another predictor. (*) for two predictors it is customary to calculate the relationship of one X with Y at low, mean, and high levels of the other X. (*) interpretation of the simple slope values? – it is important to understand that these slopes are predictive and may not refer to any actual relationships.
Term
MRC Interaction
Definition
(*) center and standardize your variables (*) enter your main effects – can be 2 steps if there is theoretical support (*) calculate your product term (*) how do you know the interaction is significant? – a significant change in R² when the term is added to the model (*) probe interactions via families of regression lines, controlling for IVs at high, medium, and low values (move the zeros around) (*) manipulate the regression (recenter) so the value of interest becomes the zero point.
Term
Issues to Consider
Definition
(*) centering the data reduces multicollinearity between main-effect and interaction terms (*) “Less is more” (Cohen & Cohen) – being able to test all interactions does not justify doing so – but you do need to partial lower-order terms from higher-order terms (*) also recall the “least is last” principle – later steps have fewer df for the error term (*) median splits – not necessary, not good: loss of power and the potential for “warped effects”.
Term
Post-hoc Probing
Definition
(*) for a specific value of X2, is the simple slope of the regression of Y on X1 different from zero? (*) identify this value as the conditional value of X2 and let it equal CVx2. (1) calculate X2cv = X2 − CVx2 (2) compute a cross product equal to (X1)(X2cv) (3) regress Y on X1, X2cv, & (X1)(X2cv) (*) this produces a set of results in which B1 equals the simple slope representing the relationship of X1 and Y at the conditional value of X2 – the accompanying t-test answers the question above. (*) do the simple slope values of two simple regression lines differ significantly? – one line: no interaction – more than one line: interaction.
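A sketch of the recentering trick above, assuming statsmodels is available (synthetic data; CV set to +1 SD purely for illustration):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 0.5 * x1 + 0.4 * x1 * x2 + rng.normal(size=n)

CV = x2.std()                 # probe the simple slope at "high" x2 (+1 SD)
x2cv = x2 - CV                # step 1: recenter x2 at the conditional value
X = sm.add_constant(np.column_stack([x1, x2cv, x1 * x2cv]))  # steps 2-3
fit = sm.OLS(y, X).fit()

b1, t1, p1 = fit.params[1], fit.tvalues[1], fit.pvalues[1]
print(b1, t1, p1)  # simple slope of y on x1 at x2 = CV, and its t-test
```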
Term
Advantages of Centering
Definition
(1) Reduces multicollinearity – reduces the correlations of the terms X1 and X2 with the product term X1X2. (2) Increases interpretability of regression coefficients – while leaving the results for the interaction unaffected (*) basically, the zero points acquire meaning – they represent the means of the IVs – which aids in probing interactions and determining simple slope values (*) note: it is not necessary to center Y – it has no effect on the coefficients and only changes the scale of the data.
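A quick numpy demonstration of advantage (1) – the IV/product correlation shrinks after centering (synthetic data with nonzero means; names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(8)
x1 = rng.normal(loc=5.0, size=500)   # nonzero means make the raw product
x2 = rng.normal(loc=5.0, size=500)   # highly correlated with x1 and x2

raw_r = np.corrcoef(x1, x1 * x2)[0, 1]

x1c, x2c = x1 - x1.mean(), x2 - x2.mean()
centered_r = np.corrcoef(x1c, x1c * x2c)[0, 1]

print(raw_r, centered_r)   # centering shrinks the IV/product correlation
```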
Term
Interpreting Coefficients
Definition
(*) B3 (for the product X1X2) – gives the amount of change in the slope of the regression of Y on X1 that results from a one-unit change in X2 – this is why the test of B3 compares different simple slopes (*) B1 and B2 (for X1 and X2 respectively) – show conditional effects when centered: B1 = regression of Y on X1 at X2 = 0 (the simple slope at Mx2); B2 = regression of Y on X2 at X1 = 0 (the simple slope at Mx1).
Term
Interactions between categorical and continuous variables
Definition
(*) each categorical predictor needs to be coded (*) the purpose is to make each specific level distinct from the other levels (*) dummy coding is frequently used (others: effects coding, contrast coding, nonsense coding) (*) there are sophisticated trade-offs among these – e.g. interpretability of the intercept.
Term
Basics of Dummy Coding
Definition
(*) a categorical predictor with g levels is represented by a set of g − 1 different IVs (*) each represents a different aspect of distinction (*) the coding needs to be mutually exclusive and exhaustive (*) each dummy variable accounts for a meaningful aspect of the research factor
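A minimal pandas sketch of g − 1 dummy coding (hypothetical data; drop_first makes the dropped level the comparison group):

```python
import pandas as pd

df = pd.DataFrame({"race": ["a", "b", "c", "a", "c"]})  # g = 3 levels

# g - 1 = 2 dummy variables; the dropped level ("a") is the comparison
# group, whose rows are 0 on every dummy.
dummies = pd.get_dummies(df["race"], prefix="race", drop_first=True, dtype=int)
print(dummies)
```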
Term
Interactions between Categorical and Continuous Variables
Definition
(*) Terms in the regression equation need to represent the singular and interactive effects (e.g. a categorical IV with 3 levels, represented by 2 dummy variables D1 & D2, and a continuous IV represented by Con) (*) Ŷ = A + B1D1 + B2D2 + B3Con + B4D1Con + B5D2Con (*) solve as a function of the continuous variable: Ŷ = (B3 + B4D1 + B5D2)Con + B1D1 + B2D2 + A (*) [group 1] Ŷ = (B3 + B4)Con + (B1 + A); [group 2] Ŷ = (B3 + B5)Con + (B2 + A); [group 3] Ŷ = B3Con + A
Term
Testing of Simple Slopes (cat x cont)
Definition
(*) can test the regression of Y on the continuous IV for each level of the categorical IV (*) know who your zeros are! – i.e. know which group is the comparison group in the dummy coding (*) the test of the B3 coefficient in the overall analysis is the test of the simple slope for the regression of Y on the continuous IV for the comparison group (zero level for the categorical IV) (*) re-running after changing the comparison group will test the simple slopes for each category.
Term
Differences between Regression Lines at a Specific Point
Definition
(*) do the predicted values for any pair of groups differ at a specified value of the continuous variable? (1) Subtract the specified CV from the continuous variable. (2) Run the regression after computing the appropriate cross products. (3) BdummyV = the distance between the lines – its t-test gives the significance test between the group coded 1 and the comparison (i.e. gth) group. (4) Recode and repeat as desired (Bonferroni?).
Term
Step Down Procedure
Definition
(1) Start with an assessment of the full regression equation with all higher-order terms (2) omit the highest-order terms if nonsignificant (3) rerun the regression and assess the significance of only the remaining highest-order terms.
Term
Approaching Interactions (ANOVA vs. MRC)
Definition
(*) in ANOVA, interactions are assessed between categorical IVs (predictors) (*) product terms to carry interaction effects are created automatically (*) In MRC, interactions are assessed between any types of IVs (predictors) (*) product terms to carry interaction effects are created and entered by researchers (*) thus, MRC allows more control and flexibility for the researchers
Term
Differences in Results (ANOVA vs. MRC)
Definition
(*) The results and conclusions for the assessment of interactions between categorical predictors using ANOVA and MRC will be virtually identical (*) ANOVA is merely a simplification of the GLM for the special case of using only categorical predictors.
Term
Bottom Line (ANOVA vs. MRC)
Definition
(*) The GLM is a universal technique for assessing effects (*) predictors may take any form, and specific procedures are derived from simplifications in the mathematics involved (*) all analyses create regression equations in which effects are added in to predict DVs (*) select techniques based on the nature of the data, the research question, and traditions within your discipline.