Term: Evidence-based Practice (EBP)
Definition: An approach in which current, high-quality research is integrated with practitioner expertise and client preferences and values in the process of making clinical decisions.
        
Term: Levels of Research Evidence
Definition:
- Poor quality: opinions, committee reports
- Medium quality: case studies, well-designed quasi-experimental studies
- High quality: well-designed controlled studies without randomization; randomized controlled studies; systematic reviews or meta-analyses
        
Term: Scientific Method
Definition: Systematic study of a given phenomenon:
- Research question / statement of the problem
- Hypothesis development
- Observation / data collection
- Interpretation of data to accept or reject the hypothesis
Involves controlling as many variables as possible; is sometimes hard to apply to the study of human behavior.
        
Term: Types of Research
Definition:
- Descriptive research: describes populations
- Exploratory research: finds relationships
- Experimental research: cause and effect
        
        
Term: Types of Literature Reviews
Definition:
- Narrative: most common; a nonsystematic review.
- Systematic: a method for locating, appraising, and synthesizing evidence to help readers determine the best treatment practices.
- Meta-analysis: a systematic review in which statistical weight is assigned to the included studies to provide information about treatment results.
        
Term: Variables
Definition: Measurable quantities that vary or change under different circumstances rather than remaining constant; varying characteristics of a phenomenon.
        
Term: Independent & Dependent Variables
Definition:
- Independent variable (IV): the condition that causes changes in behavior; the presumed cause of the dependent variable.
- Dependent variable (DV): what is measured to see whether a change has been caused (by the IV).
Especially important in experimental research; used in cause-and-effect relationships.
        
Term: Types of Independent Variables
Definition:
- Active variables: can be manipulated to determine the effect of one independent variable on another variable. In an experimental study, active variables and IVs are the same thing. Not all IVs are active variables that can be manipulated (e.g., age, gender, and ethnicity cannot be manipulated).
- Attribute variables: subject characteristics or attributes (e.g., age, gender, ethnicity). In a descriptive study, attribute variables and IVs are the same thing.
        
Term: Identify the IV & the DV; state whether the IV is an active or attribute variable: Do boys or girls have higher rates of autism?
Definition:
- Boy or girl: independent attribute variable
- Autism: dependent variable
        
Term: Identify the IV(s) and DV(s); state whether the IV(s) is/are active or attribute variable(s): Are people who stutter more fluent when they decrease their speaking rate by 20%, 10%, or 0%?
Definition:
- People who stutter: the population studied
- Speaking-rate decrease (20%, 10%, 0%): one active independent variable with three levels
- Fluency: dependent variable
        
Term: Identify the IV(s) & DV(s); state whether the IV(s) is/are active or attribute variable(s): Are underweight elementary school children who were born prematurely more likely to have lower standardized language scores than underweight elementary school children who were not born prematurely?
Definition:
- Population: underweight elementary school children
- Birth status (premature vs. not premature): one independent attribute variable (cannot be changed) with two levels
- Language scores: dependent variable
        
Term: Categorical vs. Continuous Variables
Definition:
- Categorical: cannot be measured; named or categorized; uses the nominal scale of measurement (simple grouping).
- Continuous: may be measured on a continuum; rank-ordered or numerical; uses the ratio, interval, or ordinal scale of measurement.
        
Term: Categorical Variables - Nominal
Definition: Nominal measures are lowest on the measurement hierarchy: least quantifiable, least accurate, and the results have the least meaning.
- Subjects are sorted into categories: male/female; people who do/do not stutter; Caucasian, African American, Hispanic.
- Sometimes data or participants are coded (e.g., 1 = normal, 2 = abnormal). The codes are only a means of sorting the data into categories and cannot be analyzed numerically.
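The point about coded nominal data can be sketched in a few lines of Python (the participant data here are invented for illustration): counting category members is meaningful, while averaging the codes is not.

```python
from collections import Counter

# Hypothetical nominal coding, as in the card above: 1 = normal, 2 = abnormal.
codes = {1: "normal", 2: "abnormal"}
participants = [1, 2, 2, 1, 1]  # invented data

# Meaningful: tallying how many participants fall in each category.
counts = Counter(codes[p] for p in participants)
print(counts["normal"], counts["abnormal"])  # 3 2

# NOT meaningful: the arithmetic mean of the codes names no category.
mean_code = sum(participants) / len(participants)  # 1.4 -- a label, not a quantity
```

The mean of 1.4 illustrates why nominal codes cannot be analyzed numerically: no category corresponds to "1.4".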
        
Term: Types of Continuous Variables - Ordinal Measures
Definition:
- Ranked data; very much like the nominal scale: heaviest-lightest; tallest-shortest; mild, moderate, severe (stuttering, hearing loss).
- The intervals between values are not equal.
- Data are often coded into numbers (e.g., severe = 1, moderate = 2, mild = 3) but cannot be analyzed mathematically.
- Some rankings are based on continuous data (numbers underpin the rankings).
        
Term: Categorical Variables - Examples
1. Do SLPs who hold the ASHA CCCs earn more money on average than SLPs without CCCs?
2. Is there a relationship between type of career and being diagnosed with a specific type of hearing loss (e.g., conductive, sensorineural, mixed)?
Definition:
1. Population: SLPs. With or without CCCs: independent attribute variable (cannot be manipulated by the researcher). Descriptive research.
2. Population: people with hearing loss. Correlational study: type of career might predict the type of hearing loss. Type of career: nominal, categorical variable.
        
Term: Examples of Ordinal Variables
Definition:
- Do women with mild, moderate, or severe hearing loss earn as much money on average as women who have no hearing loss?
- Is the smallest available dose of a drug (5-10 mg) less ototoxic than the largest available doses (25-50 mg)?
        
Term: Types of Continuous Variables - Interval Measures
Definition:
- Equal intervals between values (2, 4, 6, 8); the difference between intervals is the same (e.g., the difference between 1 and 5 is the same as the difference between 12 and 16).
- No meaningful zero.
- Examples: some language test scores; ratings made on a Likert or Likert-type scale (survey results, e.g., rating teachers at the end of a term); temperature.
        
Term: Examples of Continuous Variables
Definition:
- What are students' attitudes toward people who stutter as measured by a 7-point Likert scale in which 1 = strongly agree and 7 = strongly disagree?
- Are students' scores on a language test correlated with the frequency of aggressive behaviors noted in the classroom? (The score on the test is the interval measure.)
        
Term: Types of Continuous Variables - Ratio Measures
Definition:
- Highest level of measurement; equal intervals; there is a true zero (e.g., the attribute being measured may not be present at all).
- Examples: age, weight, height; measurements of frequency, time, or distance.
        
Term: Examples of Types of Continuous Variables
Definition:
- Is the perceived age of a person's voice directly related to their chronological age? (ratio variables)
- Are students' scores on a language test correlated with the frequency of aggressive behaviors noted in the classroom?
        
Term: Experimental Research
Definition: Appropriate when investigating cause-and-effect relationships. Characteristics:
- The researcher has a purpose and has determined when to observe a certain phenomenon.
- The researcher can control events and thus observe specific behavior changes.
- The study can be replicated.
- The researcher can manipulate or control conditions (independent variables) to measure the effect of the condition on behaviors (dependent variables).
        
Term: Bivalent Experiments (one IV, two values) [see graph]
Definition: Study of the effects of two values of one independent variable on the dependent variable(s). Examples: gender as a categorical variable with two levels of one independent variable; the effects of quiet vs. noise (independent variable) on people who stutter.
        
Term: Multivalent Experiments (one IV, three or more values) [see graph]
Definition: The independent variable is varied across three or more levels to observe its effect on the dependent variable.
        
Term: Descriptive Research
Definition:
- Examines group differences; does not lead to cause-and-effect conclusions.
- Describes a phenomenon.
- Not inferior to experimental research; it is a question of which design is most appropriate (your question determines whether the study needs to be descriptive or experimental).
        
Term: Variables in Descriptive Research
Definition:
- Classification variable: analogous to the independent variable; a way of describing participants or variables.
- Criterion variable: a performance variable; analogous to the dependent variable.
        
Term: Example of Bivalent Descriptive Research (2 levels of 1 independent variable) [see graph]
Definition: Comparison of results on tests (criterion variable).
        
Term: Example of Multivalent Descriptive Research (one IV, 3 values) [see graph]
Definition: A study of normal-hearing, hearing-impaired, and deaf subjects (independent/classification variable with three values); number of words produced (dependent/criterion variable).
        
Term: Other Variables in Descriptive Research
Definition:
- Predictor variable: the change agent (analogous to the independent variable); predicts the effect on the predicted variable.
- Predicted variable: analogous to the dependent variable; affected by the predictor.
        
Term: Example of Predictor/Predicted Variables [see graph]
Definition: A scatterplot depicting the relationship between perceived age and chronological age of 175 talkers (perceived age judged by listeners). Predictor/independent variable: chronological age. Predicted/dependent variable: vocally perceived age.
        
Term: Types of Validity
Definition: Internal, external, content, criterion, construct.
        
Term: Internal Validity
Definition: The degree to which observed changes (in the dependent variable) were a result of the variables that the researcher manipulated or controlled (the independent variables). Associated with experimental, cause-and-effect research (not used for surveys).
        
Term: Extraneous Variables Which May Affect Internal Validity (variables we are not interested in, but which may affect the study)
Definition:
- History: the specific events occurring between the first and second measurement in addition to the experimental variable (e.g., with English language learners, we may have accounted for the parents but not the siblings); history is ongoing.
- Maturation: processes within the subjects operating as a function of the passage of time per se (not specific to particular events), including growing older, growing hungrier (during tests), or growing more tired. How do you know you caused the effect and maturation did not?
- Testing: the effects of taking a test on the scores of a second (i.e., subsequent) testing. Pretests "warm you up" and can raise scores on the actual test (the "testing effect"); a particular concern in longitudinal studies.
        
Term: Extraneous Variables Which May Affect Internal Validity (continued)
Definition:
- Statistical regression: occurs when groups have been selected on the basis of their extreme scores (randomization provides more control).
- Differential selection of subjects: biases may result if the participants in the control or comparison groups differ from one another (e.g., Broca's vs. Wernicke's aphasia).
        
Term: Extraneous Variables Which May Affect Internal Validity (continued)
Definition:
- Experimental mortality: differential loss of subjects from the comparison or control groups (participants are lost for one reason or another).
- Interaction of factors: possible interaction effects among two or more of the previously described jeopardizing factors; a combination of these "threats" to the study.
- Instrumentation: changes in the calibration of a measuring instrument, or changes in the observers or scorers used, may produce changes in the obtained measurements (e.g., an audiometer off by 10 dB prevents a consistent scoring protocol).
        
Term: Ensuring "Objectivity" of Measurements Made by Human Observers or Judges
Definition:
- Identify specific criteria for judging/evaluating the performance/behavior of participants.
- Train observers/judges to ensure that they apply the criteria correctly and consistently.
- Evaluate intraobserver reliability (consistency "within" each observer or judge).
- Evaluate interobserver reliability (consistency "between" observers or judges).
        
Term: External Validity
Definition:
- Generalization of findings across populations, settings, and times (applying results to other populations in other studies).
- Addresses the populations, settings, treatment variables, and measurement variables to which the effect can be generalized.
- Associated with the replication of studies (we want results to replicate in other circumstances).
        
Term: Factors That Jeopardize External Validity
Definition:
- Subject selection biases: the degree to which the subjects chosen for the study are representative of the population to which the researcher wishes to generalize (include both sexes as much as possible).
- Reactive or interactive effects of pretesting: a pretest might increase or decrease the subjects' sensitivity or responsiveness to the experimental variable, making the results obtained for a pretested population unrepresentative of the effects of the experimental variable for the un-pretested universe from which the subjects were selected (the research environment may not mirror the outside world).
        
Term: Factors That Jeopardize External Validity (continued)
Definition:
- Reactive effects of experimental arrangements: preclude generalization about the effect of the experimental variable on persons exposed to it in non-experimental settings (your environment may affect how well your study generalizes).
- Multiple-treatment interference: likely whenever multiple treatments are applied to the same subjects, because the effects of prior treatments are usually not erasable (a threat to internal validity as well as to generalizing to other populations).
        
Term: Reliability
Definition:
- Refers to the consistency of a rater or of a measurement.
- Intra- or interobserver (or judge) reliability.
- Test-retest reliability: participants' scores from the first administration of a test are compared with their scores on a second administration of the same test.
- Reliability does not confirm validity.
        
Term: Quantitative Research (descriptive research is quantitative)
Definition:
- Based on numerical measurements of observed phenomena.
- Places much emphasis on control.
- Designs vary on a continuum from most valid to least valid:
  - True experimental designs (most credible)
  - Single-subject designs
  - Quasi-experimental designs
  - Pre-experimental designs
  - Nonexperimental designs (weakest design), e.g., descriptive research ("get the information out there")
        
Term: Characteristics of Good Quantitative, Experimental Design
Definition:
- A good study attempts to ensure that only the IV(s) caused the change in the DV(s).
- Extraneous variables (those whose impact on results was not considered) must be accounted for or controlled: intrinsic variables and extrinsic variables.
        
Term: Intrinsic Variables (inherent to the participant)
Definition: Associated with the research participant or subject; may be controlled in a number of ways:
- Randomization: randomly assign participants or groups.
- Homogeneity: choose only subjects who have the same characteristics.
- Blocking: account for extraneous variables by making them independent variables (e.g., aphasia subjects who attend a support group are built into the study to compare effects).
- Analysis of covariance: the effects of one variable are statistically analyzed to compute its effects on another variable (one variable builds on another; picture a Venn diagram with overlap; same dependent variable).
        
Term: Extrinsic Variables
Definition: Associated with the research environment and research protocols (the researcher, even through personality, may have an effect). Should be controlled through systematic application of the research protocol and consistency of researchers/observers. Standardize protocols to avoid such effects; e.g., an experimenter nodding their head can affect the outcome.
        
Term: Other Characteristics of a Good Design
Definition:
- The design is appropriate for the question being asked.
- It should not result in biased data.
- Precision of measurements and procedures.
- The power of the design, i.e., its ability to detect relationships among variables; often related to statistical analysis.
        
Term: Null Hypothesis (used in experimental research)
Definition: Scenario: a clinician hypothesizes that early intervention works better than traditional intervention when treating children with autism (and designs a study to determine whether early intervention is more effective, based on a standardized language test). The clinician's goal should not be to prove her hypothesis correct; instead, she experiments to determine whether to reject or fail to reject the null hypothesis.
- Null hypothesis (H0): no difference between early and traditional intervention.
- Alternative hypothesis (H1): early intervention is more effective than traditional intervention.
        
Term: Null Hypothesis (2): Significance Level
Definition: Many statistical tests can be run, but the end result is a p value (probability value). The significance level (alpha) is predetermined before the data are analyzed and is often .05.
- If p is greater than or equal to .05, we fail to reject the null hypothesis (i.e., early intervention is not shown to be more effective than traditional intervention).
- If p is less than .05, we reject the null hypothesis and accept the alternative hypothesis (i.e., early intervention is more effective than traditional intervention).
Researchers designing an experimental or quasi-experimental study should frame their results this way (an experimental design needs a null hypothesis).
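The decision rule above can be sketched as a few lines of Python. This is only an illustration of the logic; the p values passed in are made up, and in practice p would come from an actual statistical test.

```python
ALPHA = 0.05  # significance level, set BEFORE the data are analyzed

def decide(p_value, alpha=ALPHA):
    """Apply the decision rule from the card: reject only when p < alpha."""
    if p_value < alpha:
        return "reject the null hypothesis"
    return "fail to reject the null hypothesis"

print(decide(0.03))  # p below .05 -> reject
print(decide(0.20))  # p at or above .05 -> fail to reject
```

Note that the boundary case p = .05 falls in the "fail to reject" branch, matching the "equal to or greater than .05" wording above.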
        
Term: Factorial (Group) Designs
Definition: Sometimes researchers need to measure different levels of multiple variables. Factorial designs have two or more independent variables that are manipulated simultaneously. Researchers may analyze the main effects of the independent variables separately and the interaction effects of the variables.
        
Term: Factorial (Group) Designs - Example
Definition: Does language output (what is being measured, the DV) of children who use AAC devices vary according to the child's gender (IV, 2 levels) and whether the child uses the device at home or at school (IV, 2 levels)? This is a 2 x 2 design: 2 IVs, each with 2 levels, giving 4 conditions.
        
Term: Factorial (Group) Designs - Example
Definition: Do cochlear implants versus hearing aids (hearing device, IV) result in better speech intelligibility scores (DV) for men or women (gender, IV) with mild, moderate, or severe (3 levels, IV) hearing loss? This is a 2 x 2 x 3 design: 3 IVs, two with 2 levels and one with 3 levels. (Seen in the methods section: factorial IVs.)
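The 2 x 2 x 3 structure above can be made concrete by enumerating the design's cells; the level labels below are taken from the example, and the code is only an illustrative sketch.

```python
from itertools import product

# The three IVs from the example above and their levels.
device = ["cochlear implant", "hearing aid"]    # IV 1: 2 levels
gender = ["male", "female"]                      # IV 2: 2 levels
severity = ["mild", "moderate", "severe"]        # IV 3: 3 levels

# Each cell of the factorial design is one combination of levels.
cells = list(product(device, gender, severity))
print(len(cells))  # 2 * 2 * 3 = 12 conditions
```

Multiplying the number of levels per IV gives the number of conditions, which is exactly what the "2 x 2 x 3" shorthand encodes.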
        
Term: Multivariate (Group) Designs (2 or more DVs)
Definition: Just as there can be multiple IVs (bivalent, multivalent), there can be multiple DVs. Multivariate designs involve two or more DVs and provide information about the effect of the IV on each dependent variable. Example: Do cochlear implants versus hearing aids result in (1) better speech intelligibility scores and (2) better language proficiency scores for men and women with mild, moderate, or severe hearing loss? (Multiple dependent variables.)
        
Term: Within-Subjects (Group) Designs
Definition:
- Unlike between-groups designs, participants are not divided into separate groups: each participant experiences all of the treatments or conditions.
- Also called "repeated measures" designs.
- Can include factorial and multifactorial designs as well as multiple control groups, or simpler designs (e.g., single factors).
        
Term: Within-Subjects Group Designs - Example
Definition: An SLP wants to know which form of sensory integration (bouncing on a large ball or swinging in a blanket) results in more immediate verbal output for children with autism. She identifies 30 children with autism and trains the parents in the research protocol. On day 1, each child bounces on the ball and is tape-recorded for 30 minutes after the activity. On day 2, each child swings in the blanket and is recorded for 30 minutes afterward. The researcher analyzes the recordings and compares the ball and blanket data. All 30 children complete both treatment conditions: a group of 30 has more power than a group of 15, and no peer matching is needed because each child acts as their own control.
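The "each child acts as their own control" logic above can be sketched numerically. The word counts here are invented for illustration; the point is that comparing each child with themself removes between-child variability.

```python
# Invented verbal-output counts for four children, each measured in BOTH
# conditions (repeated measures), as in the example above.
ball_words = [12, 7, 15, 9]      # after the ball condition
blanket_words = [10, 9, 14, 6]   # same children, blanket condition

# A per-child difference score compares each child with themself,
# so differences between children (chatty vs. quiet) cancel out.
diffs = [b - s for b, s in zip(ball_words, blanket_words)]
mean_diff = sum(diffs) / len(diffs)
print(diffs, mean_diff)  # [2, -2, 1, 3] 1.0
```

A between-groups design would instead compare group averages of different children, which mixes the treatment effect with individual differences.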
        
Term: Within-Subjects Designs - Advantages & Disadvantages
Definition:
Advantages:
- Requires fewer subjects.
- Subjects do not have to be matched.
- Statistics can be easier to interpret.
Disadvantages:
- Can be less powerful in ensuring that the change in the DV is due to the IV and not extraneous variables.
- Carryover effects of one treatment on another; treatment order is a great concern. Counterbalance: reverse the order of treatments with half the group.
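The counterbalancing idea above can be sketched in a few lines. The participant IDs are hypothetical; the scheme simply gives half the group the treatments in order A-B and the other half in order B-A so that order effects cancel out across the group.

```python
# Hypothetical participants; alternate the treatment order so each
# order (A then B, B then A) is used by half the group.
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]
orders = {p: (["A", "B"] if i % 2 == 0 else ["B", "A"])
          for i, p in enumerate(participants)}
print(orders)
```

In practice the assignment to orders would itself be randomized; alternation is used here only to make the 50/50 split easy to see.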
        
Term: Single-Subject Designs (well respected)
Definition:
- Focus on the behaviors of one or a few participants, but NOT the same as a case study.
- Utilize repeated measures.
- Involve design phases; baseline designs are the most common type, e.g., ABA studies (each letter is a phase).
        
Term: Single-Subject Designs (continued)
Definition:
- Baselines: repeated measurement of the DV prior to the introduction of treatment; a baseline is solid if it is stable and consistent.
- Probes: repeated measurement of the DV obtained during treatment.
- A = no treatment; B = treatment.
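An A-B-A phase structure can be sketched with invented data: repeated measures of the DV in each phase, with a treatment effect showing up as B-phase scores well above both A phases.

```python
# Invented repeated measures of the DV across the three phases of an
# A-B-A (withdrawal) design.
phases = {
    "A1": [10, 12, 11, 10],   # baseline: no treatment
    "B":  [25, 30, 32, 31],   # treatment introduced
    "A2": [14, 12, 13, 11],   # treatment withdrawn
}
means = {name: sum(vals) / len(vals) for name, vals in phases.items()}
for name, m in means.items():
    print(name, m)
```

Here the B mean sits far above both A means, and the A2 scores return near baseline after withdrawal, which is the pattern that supports a treatment effect in visual analysis.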
        
Term: Withdrawal (second A) Phase
Definition: During withdrawal, we want the behavior to return toward baseline; a definite trend back down shows that the treatment works. Ethically, the behavior could be left at 30% above baseline, which is better for the subject.
        
Term: Visual Analysis of Single-Subject Designs
Definition: Although both visual and statistical analyses can be used, single-subject designs typically use visual analysis of GRAPHS or DIAGRAMS: you can look at the graph and see the progress. Treatment effects are typically large enough to support conclusions without statistical analyses. Single-subject designs focus on relatively large treatment effects for individual subjects rather than small differences between groups of subjects. Consider the ethical issues of withdrawal: ideally, run a second B phase and end on a high note.
        
Term: Qualitative Research - An Overview
Definition:
- Not numerically driven; not objective. Addresses the subjective experiences of subjects rather than objective measures.
- Often conducted via surveys, interviews, and focus groups.
- Thematic analysis rather than statistical analysis; data may be coded with numbers, but they are not analyzed statistically.
- Not less credible than other research! It should not be evaluated with the same criteria as quantitative research; credibility is a qualitative value.
        