Term
| Keys to finding good articles |
|
Definition
-locate one or more review articles -find several primary-source articles -identify key references from these that you can use |
|
|
Term
| journals we have access to |
|
Definition
| ASHA, Journal of the Acoustical Society of America, public library journals, university library/e-journals, Google Scholar |
|
|
Term
| evidence-based practice (EBP) |
|
Definition
conscientious and judicious use of current best evidence to guide client care for improving outcomes = approach to clinical problem solving |
|
|
Term
|
Definition
| demonstrate knowledge of processes used in research, integration of research into EBP, demonstrate comprehension of principles of basic and applied research and design, how to access sources, relating research to clinical practice |
|
|
Term
|
Definition
| practicing based on evidence and accepting clinical guidelines |
|
|
Term
|
Definition
| reduce wide variations in individual clinicians' practices, eliminate the worst practices, enhance the best practices, and synthesize the best evidence |
|
|
Term
|
Definition
| demonstrate CURRENT KNOWLEDGE OF PRINCIPLES AND METHODS of prevention, assessment, and intervention |
|
|
Term
|
Definition
| knowledge of standards of ETHICAL CONDUCT |
|
|
Term
|
Definition
| KNOWLEDGE OF PROCESSES USED IN RESEARCH AND INTEGRATION OF RESEARCH PRINCIPLES INTO EBP |
|
|
Term
|
Definition
| knowledge of CONTEMPORARY PROFESSIONAL ISSUES |
|
|
Term
|
Definition
| skills in ORAL AND WRITTEN OR OTHER FORMS OF COMMUNICATION |
|
|
Term
|
Definition
| The expert clinician should consistently seek new info to improve therapeutic effectiveness |
|
|
Term
| steps in the EBP process |
|
Definition
ASKING a well-built question, SELECTING evidence sources, IMPLEMENTING a search strategy, APPRAISING and SYNTHESIZING the evidence, APPLYING the evidence, EVALUATING the application, DISSEMINATING the findings |
|
|
Term
| PICO |
|
Definition
P: population, patient, problem; I: intervention or exposure; C: comparison, control; O: outcome (worked example below) |
|
|
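A hypothetical worked example of a PICO question (the population, treatments, and outcome named here are placeholders, not drawn from the course material): "For preschool children who stutter (P), does direct clinician-delivered treatment (I), compared with parent-administered indirect treatment (C), produce a greater reduction in stuttering-like disfluencies (O)?"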
Term
|
Definition
| primary-source articles in PEER REVIEWED journals related to the field |
|
|
Term
| Four views of science: Science as PUBLICLY VERIFIABLE KNOWLEDGE which... (2) |
|
Definition
| permits replication and peer review |
|
|
Term
| Four views of science: science as knowledge of ________ |
|
Definition
|
|
Term
| Four views of science: Science as the treatment of empirically soluble problems |
|
Definition
| Science is empirical --> data driven |
|
|
Term
| Four view of science: science as an attempt to find the strongest unambiguous ___________supporting a view |
|
Definition
|
|
Term
| falsifiability |
|
Definition
| the method of evaluating evidence must include the possibility that the data will falsify the theory |
|
|
Term
| testability |
|
Definition
| must be a testable theory to be a solvable problem |
|
|
Term
| testability and falsifiability |
|
Definition
| genuine knowledge is not impossible; rather, knowledge is provisional |
|
|
Term
| Different ways of knowing: a priori method |
|
Definition
| believing something based on a premise that is a priori (i.e., without evidence) believed to be true, then using reasoning to go forward |
|
|
Term
| Different ways of knowing: rationalism |
|
Definition
| believing something based on rational logic (deductive/inductive) |
|
|
Term
| Different ways of knowing: empiricism |
|
Definition
| believing something based on observable events |
|
|
Term
| Different ways of knowing: authority |
|
Definition
| believing something because someone perceived to be an authority said it was so |
|
|
Term
| Different ways of knowing: tenacity |
|
Definition
| believing something because it's always been that way |
|
|
Term
| Different ways of knowing: superstition |
|
Definition
| believing something based on things you can't directly observe |
|
|
Term
| Different ways of knowing: intuition |
|
Definition
| believing something based on FEELINGS |
|
|
Term
| Logical reasoning: Inductive |
|
Definition
gathering observations and hypotheses into a unifying whole --> going from specific examples to generalizations e.g. I see 2 white dogs --> all dogs are white |
|
|
Term
| Logical reasoning: Deductive |
|
Definition
application of generalizations to a specific circumstance e.g. all apples are fruits --> a Granny Smith is an apple, therefore it is a fruit |
|
|
Term
| Sources for research (3 types) |
|
Definition
| primary, secondary, tertiary |
|
|
Term
| Peer-reviewed primary source research general to SLP |
|
Definition
American Journal of Speech-Language Pathology; Journal of Speech, Language, and Hearing Research; Language, Speech, and Hearing Services in Schools; Journal of Communication Disorders; Journal of Medical Speech-Language Pathology |
|
|
Term
| Special topics- speech-language-hearing disorders and sciences |
|
Definition
Journal of Fluency Disorders, American Journal of Audiology, Ear and Hearing, Journal of Child Language, Volta Review, Cleft Palate-Craniofacial Journal, Studies in Second Language Acquisition |
|
|
Term
| phonetics-clinical and experimental |
|
Definition
Clinical Linguistics and Phonetics, Journal of the Acoustical Society of America, Language and Speech, Journal of Phonetics |
|
|
Term
| cognitive and linguistic processes journals |
|
Definition
Journal of Memory and Language, Language and Cognitive Processes, etc. |
|
|
Term
| Neurobiological mechanisms in language |
|
Definition
Brain and Language, Brain and Cognition, Journal of Neurolinguistics |
|
|
Term
| Getting started... finding a research article |
|
Definition
-PubMed database search, Quick Start: Google Scholar search -GOAL: find citations (references) that allow you to find research articles -look up ASHA's Compendium of EBP guidelines and systematic reviews |
|
|
Term
| What types of articles? IDEAL: |
|
Definition
locate one or more review articles -alternative: find several primary-source articles; focus reading on introduction sections, discussion sections -identify key references (citations) from these materials |
|
|
Term
| Primary-source article: Structure |
|
Definition
abstract, introduction/background, method, results, discussion |
|
|
Term
| abstract |
|
Definition
concise summary of the paper that should include the following: -statement of the general problem -summary of the method -summary of the results -conclusions and implications of the research (length limit specified by the journal) |
|
|
Term
| introduction |
|
Definition
-provides the background and orientation that introduces a reader to the study -overview, review of relevant literature, discuss a specific problem, hypothesis |
|
|
Term
| method section |
|
Definition
detailed description of how the study was carried out -enough info to be able to replicate the experiment |
|
|
Term
| results section |
|
Definition
summarizes the data and statistical analysis -data is NOT interpreted in this section |
|
|
Term
| discussion section |
|
Definition
briefly restates the hypothesis and provides a summary of the main findings -offers interpretation, evaluation, and discussion of the implications of the findings, relating back to previous literature |
|
|
Term
| references |
|
Definition
| provide complete information about each item cited in the paper |
|
|
Term
| APA format |
|
Definition
standard and common format for citing resources and references -American Psychological Association |
|
|
Term
| Journal Articles: APA format |
|
Definition
Author(s), in the given order, year, title of article, journal title, volume, page numbers (issue number and DOI optional); see the example below
|
|
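A hypothetical reference following this pattern (every detail below is invented for illustration): Smith, A. B., & Jones, C. D. (2020). Example article title. Journal of Example Studies, 12, 45-67.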
Term
| Book: APA format |
|
Definition
| Author in given order, year, title of book, publisher info |
|
|
Term
| Book Section/Chapter: APA format |
|
Definition
| section author, year, title of section, book editor, title of book, section pages, publisher info |
|
|
Term
| Citing references in text |
|
Definition
the results... (Author, year); Author and Author (year) showed that...; Author et al. (year) showed that... |
|
|
Term
| The process of research (phases) |
|
Definition
idea-generating, problem-definition, procedures-design, observation, data-analysis, interpretation, communication (IPDODIC) |
|
|
Term
| affect |
|
Definition
noun: physical expression or emotional state; verb: to influence, e.g. the treatment will affect the outcome |
|
|
Term
| effect |
|
Definition
noun: something caused or produced, a result; verb: to bring about or accomplish, e.g. the policy is likely to effect many changes |
|
|
Term
variable: - independent -dependent |
|
Definition
something that varies -variable related to conditions in an experiment or study suspected to cause a change -variable related to the behavior that may be changed |
|
|
Term
|
Definition
| when a study or experiment is conceived of in terms of IV and DV, there is an assumption of causality |
|
|
Term
Active Variables: *in general, research designs that use active variables are stronger |
|
Definition
a variable that can be MANIPULATED (changed) by the experimenter e.g. therapy type, sound pressure level, etc |
|
|
Term
| attribute variables |
|
Definition
a variable which can't be manipulated (NONMANIPULATED) or changed by the experimenter e.g. age, gender, intelligence, type of speech disorder, degree of hl |
|
|
Term
| extraneous variable |
|
Definition
| a variable which potentially confuses the picture of a cause-effect relationship if left uncontrolled |
|
|
Term
| confounding variable |
|
Definition
| a variable which has confused the picture of cause-effect relationship because it was left uncontrolled and presents an alternative explanation for findings |
|
|
Term
| Manipulated variables present a ______ picture of _____ |
|
Definition
| clearer, cause-and-effect |
|
|
Term
| Why is it often necessary to use non manipulated variables? |
|
Definition
because to do otherwise would be unethical, impractical, or impossible *importance of converging evidence |
|
|
Term
| Strength of study conclusions largely follows _____ |
|
Definition
|
|
Term
| levels (of the IV) |
|
Definition
different values that the IV can take; related term: "conditions" |
|
|
Term
|
Definition
the study has one IV with 2 levels e.g. IV of autism diagnosis (children with ASD, control group) |
|
|
Term
|
Definition
the study has one IV with 3 or more levels e.g. IV of therapy type = experimental Tx A, Tx B, Tx C |
|
|
Term
| factorial design |
|
Definition
the study has 2 or more IVs e.g. effect of intervention type and diagnosis on academic achievement (see the sketch below) |
|
|
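A minimal sketch of how crossing IVs produces the cells of a design with 2 or more factors (assuming Python; the factor names and levels are hypothetical):

from itertools import product

intervention = ["Tx A", "Tx B"]                  # hypothetical manipulated IV with 2 levels
diagnosis = ["ASD", "typically developing"]      # hypothetical attribute IV with 2 levels

# crossing the two IVs yields a 2 x 2 design with 4 cells
for cell in product(intervention, diagnosis):
    print(cell)
# ('Tx A', 'ASD'), ('Tx A', 'typically developing'),
# ('Tx B', 'ASD'), ('Tx B', 'typically developing')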
Term
| Operationism vs. Essentialism |
|
Definition
essentialism: the idea that the only good theories are those that give ultimate explanations of phenomena -operationism: the idea that concepts in scientific theories must in some way be grounded in or linked to observable events **scientists are operationists |
|
|
Term
| operational definition |
|
Definition
| a definition which turns an abstract concept into an empirical observation |
|
|
Term
| construct |
|
Definition
an abstract idea constructed by the researcher to explain observed events e.g. intelligence, gravity, person who stutters, stuttering-like disfluencies |
|
|
Term
| T/F: an operational definition can be defined in terms of another operational definition |
|
Definition
| True, e.g. type A personality |
|
|
Term
| reliability |
|
Definition
| measure of consistency of data collected using the same methodology on more than one occasion; across different, but related test items; or by different individuals -- REPEATABILITY |
|
|
Term
| validity |
|
Definition
| property of data, concepts or research findings whereby they are useful for measuring or understanding phenomena |
|
|
Term
| To be of use to science, a measure must have both high ____ and _____ |
|
Definition
| reliability and validity |
|
|
Term
| four lessons in history and research ethics |
|
Definition
| Nazi medical experiments/Nuremberg trials, Tuskegee study, Milgram study, Monster Study on stuttering |
|
|
Term
| Nuremberg trials led to widespread discussion of protections for___ |
|
Definition
| human subjects --> contributed to development of safeguards for human participation |
|
|
Term
|
Definition
aka Tuskegee Syphilis study --> led to IRBs --> confidentiality and informed consent |
|
|
Term
| informed consent |
|
Definition
must be in writing, general goals of research must be stated, risks of harm/discomfort, benefits, duration --> study is voluntary and can be terminated at any point |
|
|
Term
| deception |
|
Definition
deliberately misleading participants -additional safeguards employed e.g. increased IRB scrutiny, debriefing, no anticipated longterm risks |
|
|
Term
| Special groups that are protected |
|
Definition
| children, fetuses, mentally disabled, prisoners |
|
|
Term
|
Definition
| many government agencies funding research require that researchers actively recruit participants to reflect diversity of the population unless scientifically justified not to |
|
|
Term
| IRB (Institutional Review Board) |
|
Definition
agencies set up within institutions in charge of reviewing research proposals to determine their compliance with existing regulations -include scientists and nonscientists -evaluate the risk-benefit ratio |
|
|
Term
| Research that does not require an IRB |
|
Definition
-research conducted in established or commonly accepted educational settings -research involving educational tests, survey processes, interview processes, or observation of public behavior -study of or collection of existing, publicly available data, documents, or records |
|
|
Term
| Milgram study |
|
Definition
used deception -considered to have followed existing ethical protocols, but raised questions about ethics of experiments because of extreme emotional stress suffered by participants |
|
|
Term
| Monster study on stuttering |
|
Definition
positive group: received praise negative group: was belittled and told they stuttered -no debriefing, no informed consent, used deception, psychological harm to children and their communication |
|
|
Term
| animal research |
|
Definition
generally more invasive, not capable of informed consent, must be approved by institutional animal care and use committee -efforts can be made to reduce the amount needed for a study |
|
|
Term
|
Definition
| looking across studies at the evidence |
|
|
Term
| within-subjects design |
|
Definition
the same individuals receive all levels of the IV -drawbacks: fatigue, carry-over effects |
|
|
Term
| between-subjects design |
|
Definition
different individuals are in different levels of the IV -drawbacks: severity level, multiple extraneous variables -possible control: counterbalancing/averaging out |
|
|
Term
| random assignment |
|
Definition
involves assignment of different individuals to different levels of the IV via a random (or quasi-random) method -controls for known and unknown possible confounds -only applies to between-subjects IVs (see the sketch below) |
|
|
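A minimal sketch of random assignment for a between-subjects IV (assuming Python's standard library; the participant IDs and group labels are hypothetical):

import random

participants = ["P%02d" % i for i in range(1, 21)]   # 20 hypothetical participant IDs
groups = ["Tx A", "Tx B"]

random.shuffle(participants)                          # randomize the order of participants
assignment = {pid: groups[i % len(groups)] for i, pid in enumerate(participants)}
print(assignment)
# every participant had an equal chance of landing in either group, which
# (on average) balances known and unknown extraneous variables across groups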
Term
| if an IV cannot be manipulated, it is necessarily a (within/between) subjects design |
|
Definition
| between-subjects |
|
|
Term
| Experiments (2 requirements) |
|
Definition
| manipulated IV and random assignment |
|
|
Term
| comparative studies (quasi-experiments) |
|
Definition
comparative studies; two or more groups of participants are compared -either no manipulation of at least one of the IVs (i.e. attribute variable) or no random assignment to groups |
|
|
Term
| Random assignment has the effect of creating initially ________ groups |
|
Definition
equivalent *should account for known and unknown extraneous variables |
|
|
Term
| __________ studies provide clearer evidence of causation (experiment/comparative) |
|
Definition
| experiments |
|
|
Term
| Stronger evidence of causation |
|
Definition
| experiments, comparative studies, meta-analysis, small n and single-subject experiments |
|
|
Term
| developmental research |
|
Definition
Type of comparative study; the IV is age |
|
|
Term
| cross-sectional |
|
Definition
type of developmental research; the IV of age is a between-subjects variable -cohort effects |
|
|
Term
| longitudinal |
|
Definition
type of developmental research; the IV of age is a within-subjects variable |
|
|
Term
| meta-analysis |
|
Definition
combines the results of several experiments or quasi-experiments addressing related research questions -building on the statistical power of each -part of systematic reviews (see the sketch below) |
|
|
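A minimal sketch of the inverse-variance weighting idea used in many meta-analyses (assuming Python; the effect sizes and variances are invented): larger, more precise studies contribute more to the pooled estimate.

# hypothetical effect sizes (e.g., Cohen's d) and variances from three studies
effects = [0.40, 0.65, 0.30]
variances = [0.04, 0.10, 0.02]

weights = [1 / v for v in variances]            # inverse-variance weights: precise studies count more
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
print(round(pooled, 3))                         # pooled estimate, about 0.371 for these numbers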
Term
| small n or single subject |
|
Definition
sometimes the population of interest doesn't have many individuals -NOT case studies |
|
|
Term
| strength of study design is largely determined by how much you can learn about _________ |
|
Definition
| cause and effect |
|
|
Term
| Types of studies where cause and effect is less clear |
|
Definition
| correlational, survey, retrospective, qualitative research |
|
|
Term
| types of qualitative research |
|
Definition
| observational research, interview, narrative, case studies |
|
|
Term
| Strength of evidence: strong |
|
Definition
| experiments, meta-analysis, comparative, small n/single subject |
|
|
Term
| Strength of evidence:medium |
|
Definition
| correlational, survey, retrospective |
|
|
Term
| Strength of evidence: low |
|
Definition
| observational research, case studies, unstructured interview research, narrative research, testimonials |
|
|
Term
| correlational research |
|
Definition
quantitative, medium strength -examines how changes in one variable correspond to changes in another -not described in terms of IV and DV -x-axis: predictor variable; y-axis: predicted variable (see the sketch below) |
|
|
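A minimal sketch of examining how two variables covary (assuming Python with numpy installed; the data are invented):

import numpy as np

# hypothetical paired observations
hours_of_therapy = np.array([2, 4, 6, 8, 10, 12])      # x-axis: predictor variable
intelligibility = np.array([55, 60, 64, 70, 71, 78])   # y-axis: predicted variable

r = np.corrcoef(hours_of_therapy, intelligibility)[0, 1]   # Pearson correlation coefficient
print(round(r, 2))   # close to +1, a strong positive correlation
# a strong correlation alone does not show which variable (if either) caused the other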
Term
| survey research |
|
Definition
detailed inspection of the prevalence of conditions, practices, or attitudes by asking respondents directly -selection bias and obtaining a representative sample are challenges -frequently used in communicative disorders research |
|
|
Term
| retrospective research |
|
Definition
a method in which old records are examined for possible relationships between variables -related to correlational -reliability and validity of the data may be questionable |
|
|
Term
| observational research |
|
Definition
behaviors are studied in their natural setting with reliance on description, not quantification -concerns: many factors that may confound results, lack of controls, not replicable -appropriate when causality in a given area is very unclear, or when a humanistic approach is preferred |
|
|
Term
|
Definition
an intensive, in-depth observation, often in a naturalistic setting -appropriate when investigation of a phenomenon is in its earliest stages -when conducted with extreme care to avoid confounds |
|
|
Term
| interview research |
|
Definition
uses an interview format: structured/semistructured/unstructured -difficulties with replication, generalizing, etc. -humanistic perspective |
|
|
Term
| narrative research |
|
Definition
stories/interview/journals/autobiographies -tied to personal identity -humanistic perspective |
|
|
Term
| case study |
|
Definition
intensive in-depth study of a single individual (or a few) -qualitative, descriptive -low strength: uncontrolled and qualitative |
|
|
Term
| Requirements for cause-effect |
|
Definition
1. covariance 2. temporal precedence 3. internal validity |
|
|
Term
| covariance |
|
Definition
| a causal variable must covary systematically with the variable it's assumed to cause |
|
|
Term
| temporal precedence |
|
Definition
a variable assumed to have a causal effect must precede the effect it is supposed to cause (cause comes before effect) |
|
|
Term
| internal validity |
|
Definition
| the variable assumed to be causal must be the most plausible cause, with other competing variables ruled out as the cause |
|
|
Term
| directionality problem |
|
Definition
If two variables, A and B, are correlated, it is not possible to tell whether A caused B or vice versa |
|
|
Term
| third variable problem |
|
Definition
| if two variables A and B are correlated, a third variable C may have caused the changes in both |
|
|
Term
| Goldberger's pellagra experiment |
|
Definition
|
|
Term
| When selection bias exists it makes spurious correlation highly _______ (unlikely, likely) |
|
Definition
| likely -e.g. correlation between SAT scores and teachers' salaries |
|
|
Term
| four types of validity |
|
Definition
| internal, external, statistical conclusion validity, construct validity |
|
|
Term
| internal validity |
|
Definition
refers to the extent that causal inferences are justified based on observed changes in a DV in response to systematic variation in an IV -the extent that extraneous variables have been removed from the study -levels of evidence hierarchy is based largely on the concept of internal validity |
|
|
Term
| Threats to internal validity |
|
Definition
| study design, maturation, testing effects, sleeper effects, history, selection bias, attrition |
|
|
Term
| maturation |
|
Definition
the fact that an individual matures/ changes over time -no control group to compare for effects of normal maturation |
|
|
Term
| testing effects |
|
Definition
effects of previous testing on performance -taking the same test twice generally improves your score -safeguards: use different questions or a different assessment on repeated testing, wait a long time before testing again |
|
|
Term
| sleeper effects |
|
Definition
| a change in the DV due to a tx effect that is not immediately observable but emerges over a span of time |
|
|
Term
| history |
|
Definition
| any event between the beginning of tx and measurement of the DV that could produce the observed outcome |
|
|
Term
| selection bias |
|
Definition
lack of representativeness among participants selected; there may be imbalance in how groups are constructed -safeguards: random sampling, random assignment |
|
|
Term
| attrition |
|
Definition
some individuals "drop out" -safeguards: provide incentives and enroll new participants |
|
|
Term
| What are two reasons that correlation can't tell you about causation? |
|
Definition
| directionality problem and third variable problem |
|
|
Term
| internal validity |
|
Definition
| the variable assumed to be causal must be the most plausible cause, with other competing variables ruled out |
|
|
Term
| well conducted single subject experiments may satisfy covariance and temporal precedence but they can't satisfy the internal validity rule because _____ |
|
Definition
|
|
Term
| external validity |
|
Definition
| whether a causal relationship holds over people, settings, treatment and measurement variables, and time |
|
|
Term
| 4 types of external validity: |
|
Definition
| population validity, treatment variation validity, temporal validity, ecological validity |
|
|
Term
| population |
|
Definition
| describes an all inclusive data set about which researchers want to draw a conclusion |
|
|
Term
| sample |
|
Definition
| a subset of the population |
|
|
Term
| sampling error |
|
Definition
| inevitable to a degree; the difference between the measures collected for a sample and the population it's believed to represent |
|
|
Term
| population validity |
|
Definition
| ability to generalize from the sample in a study to the larger population of individuals |
|
|
Term
| target population |
|
Definition
| larger population to whom the experimental results are generalized |
|
|
Term
| experimentally accessible population |
|
Definition
| one that is available to the researcher |
|
|
Term
| sample --> experimentally accessible pop. --> target pop. |
|
Definition
| generalization from the sample to the experimentally accessible population can be achieved through random selection; however, the comparison between the experimentally accessible population and the target population can't be made with confidence, because the experimentally accessible group is almost never randomly selected |
|
|
Term
| random selection |
|
Definition
| random selection involves drawing observations from a population in a way that each individual has an equal chance of being selected |
|
|
Term
| simple random sampling |
|
Definition
sampling method in which each individual has an equal probability of being selected -"true random selection" |
|
|
Term
| stratified sampling |
|
Definition
| involves dividing a population into subgroups called strata to assure that certain segments of the population are adequately represented in the sample (see the sketch below) |
|
|
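A minimal sketch contrasting simple random sampling with stratified sampling (assuming Python's standard library; the population and strata are hypothetical):

import random

# hypothetical population: 80 children with mild hearing loss, 20 with severe hearing loss
population = [("child%03d" % i, "mild" if i < 80 else "severe") for i in range(100)]

# simple random sampling: every individual has an equal chance of being selected
simple_sample = random.sample(population, 20)

# stratified sampling: sample within each stratum so both severities are represented
mild = [p for p in population if p[1] == "mild"]
severe = [p for p in population if p[1] == "severe"]
stratified_sample = random.sample(mild, 16) + random.sample(severe, 4)   # 16/80 and 4/20 keep the sample proportional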
Term
| temporal validity |
|
Definition
| refers to the extent to which the results of an experiment can be generalized across time |
|
|
Term
| treatment variation validity |
|
Definition
refers to the generalizability of results across variation in treatment e.g. administration of a therapy |
|
|
Term
| ecological validity |
|
Definition
refers to the degree to which the behaviors observed and recorded in a study reflect the behaviors that actually occur in natural settings -trade-off between control and ecological validity |
|
|
Term
| Hawthorne effect (participant reactivity) |
|
Definition
subject's behavior changes simply due to the knowledge of participation in a study -safeguards: ensuring comparability of Tx groups and knowledge about the nature of the experimental tx |
|
|
Term
| selection bias |
|
Definition
any variable that confounds the ability of a chosen sample to represent the population parameter from which it was drawn -"self-selection" is a common type of selection bias |
|
|
Term
| selection bias minimization |
|
Definition
1. random assignment and selection 2. matching of participant groups 3. well defined inclusion criteria 4. developed exclusion criteria |
|
|
Term
|
Definition
how well the study's results support the theory or constructs behind the research -whether the theory supported by the findings provides the best available explanation of the results |
|
|
Term
| maximizing construct validity |
|
Definition
1. use clearly stated definitions 2. build hypotheses on solid, well validated constructs that have support from many studies 3. theoretical bases must be clear and well-supported, with rival theories carefully ruled out 4. a good match between constructs and operations used to define them |
|
|
Term
| T/F for maximal construct validity, there must be theoretically clear and organized bases, including rival theories being ruled out |
|
Definition
| True |
|
|
Term
| Statistical conclusion validity |
|
Definition
| defined as the degree of confidence that the inferences about study outcomes based on statistics are correct |
|
|
Term
| factors that affect all types of validity: measurement must be reliable and valid |
|
Definition
ex. shoe size and IQ -measurement must be calibrated |
|
|
Term
| factors that affect all types of validity: ceiling effect |
|
Definition
| refers to an effect whereby data cannot take on a value higher than some "ceiling" |
|
|
Term
| factors that affect all types of validity: floor effect |
|
Definition
refers to an effect whereby data cannot take on a value lower than some "floor" e.g. a test is way too hard (see the sketch below) |
|
|
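A minimal sketch of how a ceiling (or, mirrored, a floor) compresses scores and hides real differences (assuming Python with numpy; the scores are invented):

import numpy as np

true_ability = np.array([82, 90, 97, 105, 120])   # hypothetical underlying scores
max_score = 100                                    # the test cannot award more than this

observed = np.minimum(true_ability, max_score)     # ceiling effect: high scores pile up at 100
print(observed)                                    # [ 82  90  97 100 100] -- the top two look identical
# a floor effect is the mirror image: clipping at a minimum score on a test that is
# way too hard makes very low performers indistinguishable from one another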
Term
| factors that affect all types of validity: researcher bias |
|
Definition
| may be lessened when the researcher is "blind" to conditions |
|
|