Term
| Validity |
|
Definition
| the degree to which a test or instrument actually measures what it claims to measure |
|
|
Term
| Types of validity evidence |
|
Definition
| Logical, Criterion, and Construct evidence |
|
|
Term
| Logical/Content/Face Validity |
|
Definition
| related to how well a test or instrument measures the content objectives |
|
|
Term
| Criterion evidence is based on |
|
Definition
| a correlation coefficient between scores on a test and scores on a criterion measure or standard |
|
|
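For reference, the correlation coefficient mentioned in the card above is usually Pearson's r between the test scores X and the criterion scores Y; the standard formula (not taken from this deck) is

$$
r_{XY} = \frac{\sum_{i=1}^{n}\left(X_i - \bar{X}\right)\left(Y_i - \bar{Y}\right)}{\sqrt{\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^2}\;\sqrt{\sum_{i=1}^{n}\left(Y_i - \bar{Y}\right)^2}}
$$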
Term
| What is the traditional type of validity evidence |
|
Definition
|
|
Term
| Desired qualities of criterion evidence |
|
Definition
| Relevance, Freedom from Bias, Reliability, Availability |
|
|
Term
| The validity coefficient indicates |
|
Definition
| how well a test measures what it claims to measure |
|
|
Term
| A test used as a substitute for another validated test should have a validity coefficient of |
|
Definition
|
|
Term
| Concurrent validity |
|
Definition
| indicates how well an individual currently performs |
|
|
Term
| Concurrent validity is important if you wish to |
|
Definition
| develop a new test that requires less equipment or time |
|
|
Term
| Predictive validity is used to |
|
Definition
| estimate future performance |
|
|
Term
| Criterion variable |
|
Definition
| variable that has been defined as indicating successful performance of a trait |
|
|
Term
| One method of determining successful performance is |
|
Definition
| through a panel of experts, by correlating the experts' ratings with performance on the test |
|
|
Term
| Ways to develop criterion evidence |
|
Definition
| Actual participation, Perform the criterion, Expert judges, Tournament participation, Known valid test |
|
|
Term
| Steps involved in determining concurrent validity |
|
Definition
1. Administer the new test to a defined group of individuals
2. Administer a previously established, valid criterion test to the same group, at the same time or shortly thereafter
3. Correlate the two sets of scores (see the sketch after this card)
4. Evaluate the results |
|
|
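A minimal computational sketch of the four steps above, using hypothetical scores and SciPy's pearsonr (the data and function choice are illustrative, not part of the original card):

```python
# Sketch of the concurrent-validity steps (hypothetical data).
from scipy.stats import pearsonr

# Steps 1-2: scores on the new test and on an established criterion test,
# collected from the same group at (roughly) the same time.
new_test  = [12, 15, 11, 18, 14, 16, 10, 17]   # hypothetical new-test scores
criterion = [30, 36, 28, 41, 33, 38, 26, 40]   # hypothetical criterion-test scores

# Step 3: correlate the two sets of scores.
r, p = pearsonr(new_test, criterion)

# Step 4: evaluate the result (the concurrent validity coefficient).
print(f"concurrent validity coefficient r = {r:.2f} (p = {p:.3f})")
```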
Term
| Steps involved in determining predictive validity |
|
Definition
1. Administer the predictor variable to a group
2. Wait until the behavior to be predicted, the criterion variable, occurs
3. Obtain measures of the criterion for the same group
4. Correlate the two sets of scores (see the sketch after this card)
5. Evaluate the results |
|
|
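A matching sketch for predictive validity, again with hypothetical numbers; the correlation gives the validity coefficient, and a simple least-squares line shows how the predictor would estimate future performance (the NumPy calls and data are illustrative, not from the deck):

```python
# Sketch of the predictive-validity steps (hypothetical data).
import numpy as np

# Step 1: predictor scores collected now.
predictor = np.array([55, 62, 48, 70, 66, 59, 51, 73], dtype=float)  # hypothetical
# Steps 2-3: criterion scores obtained later, once the predicted behavior occurs.
criterion = np.array([60, 68, 50, 77, 71, 63, 55, 80], dtype=float)  # hypothetical

# Step 4: correlate the two sets of scores (the predictive validity coefficient).
r = np.corrcoef(predictor, criterion)[0, 1]

# Step 5: evaluate the results; a least-squares line shows how the predictor
# would be used to estimate future (criterion) performance.
slope, intercept = np.polyfit(predictor, criterion, 1)
print(f"predictive validity coefficient r = {r:.2f}")
print(f"estimated criterion score for a predictor score of 65: {slope * 65 + intercept:.1f}")
```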
Term
| Construct |
|
Definition
| something known to exist although it may not be precisely defined or measured; "you know it when you see it" |
|
|
Term
| Examples of constructs |
|
Definition
| anxiety, intelligence, and motivation |
|
|
Term
| How do administrative procedures affect validity |
|
Definition
| validity is reduced if unclear directions are given or if individuals do not all perform the test the same way |
|
|
Term
| Reliability |
|
Definition
| the degree to which a test or instrument produces consistent scores each time it is administered |
|
|
Term
| A reliability test should |
|
Definition
| obtain approximately the same results each time it is administered |
|
|
Term
| Types of relative reliability |
|
Definition
| Internal consistency reliability, Stability reliability |
|
|
Term
| Internal consistency reliability |
|
Definition
| Consistency of scores within a day |
|
|
Term
| Stability reliability |
|
Definition
| Consistency of scores across days |
|
|
Term
| To estimate reliability |
|
Definition
| each person must have at least two repeated scores |
|
|
Term
| Repeated scores for reliability could be |
|
Definition
| from multiple administrations on different days, from multiple items on a test, from two different forms of a test, or from multiple trials of a test within a day |
|
|
Term
| Methods of estimating reliability |
|
Definition
| Test-Retest Method, Parallel Forms Method, Split-Half Method (see the sketch after this card) |
|
|
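As one concrete example of the methods listed above, here is a minimal sketch of the Split-Half Method with a Spearman-Brown correction, using made-up item scores (the data and the odd/even split are illustrative assumptions, not content from the deck):

```python
# Sketch of split-half reliability with the Spearman-Brown correction (hypothetical data).
import numpy as np

# Rows = people, columns = items from a single administration of the test.
items = np.array([
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 0, 1, 1],
], dtype=float)

# Split the test into two halves (odd- vs. even-numbered items) and total each half.
odd_half  = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)

# Correlate the half scores, then apply the Spearman-Brown formula
# to estimate the reliability of the full-length test.
r_half = np.corrcoef(odd_half, even_half)[0, 1]
r_full = (2 * r_half) / (1 + r_half)
print(f"half-test r = {r_half:.2f}, Spearman-Brown full-test reliability = {r_full:.2f}")
```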
Term
| Factors affecting reliability |
|
Definition
| Method of scoring, The heterogeneity of the group, The length of the test, Administrative procedures |
|
|
Term
| Objectivity |
|
Definition
| the degree to which multiple scorers/testers agree on the values of collected measures or scores |
|
|
Term
| A test has high objectivity when |
|
Definition
| two or more persons can administer the same test to the same group and obtain approximately the same results |
|
|