Term
Scientific Method
(4 steps)


Definition
1. Observation and description of a phenomenon.
2. Creation of a hypothesis to explain the phenomenon.
3. Application of the hypothesis to new observations (or to predict other phenomena).
4. Adjustment of the hypothesis to fit new information, and repetition of experiment and observation.
Note: Most resources list 7 steps: 1. Ask and define the question; 2. Gather information and resources through observation; 3. Form a hypothesis; 4. Perform one or more experiments and collect and sort data; 5. Analyze the data; 6. Interpret the data and make conclusions that point to a hypothesis; 7. Formulate a "final" or "finished" hypothesis 


Term
Scientific law
Definition
A scientific law is merely a formal way of stating what has been observed; it offers no explanation as to why anything has occurred. 


Term
Hypothesis
Definition
Proposed explanation for a phenomenon.
Must be testable
Must be disprovable
Must predict an outcome
Must be based on observation 


Term
Theory
Definition
A hypothesis (or group of hypotheses), which has been supported by data from experiments and observation. One definition of a theory is to say it's an accepted hypothesis. 


Term

Definition
A hypothesis that is known to be at least partially true 


Term
Experiment
Definition
Controlled set of circumstances designed to reproducibly generate an outcome given a set of initial conditions.
Always begins with the question: How will changing A affect B? 


Term
Independent variable
Definition
The aspect of nature being manipulated. Often the objective in an experiment is to determine the relationship between an independent variable (which is being purposely manipulated) and a dependent variable (whose outcome is being measured). 


Term
Dependent variable
Definition
The factor whose outcome is being measured. Often the objective in an experiment is to determine the relationship between an independent variable (which is being purposely manipulated) and a dependent variable (whose outcome is being measured). 


Term
Internally valid experiment
(nine threats to validity) 

Definition
One that demonstrates a cause-effect relationship between the independent and dependent variables. To assess internal validity, ask: are there any other factors within the experiment that might cause the result? Nine common threats:
1. Sample selection: samples are not properly randomized.
2. Influence of history: failure to compensate for an event occurring before the measurement of the dependent variable.
3. Maturation: time passes between measurements, allowing subjects to change before the dependent variable is measured.
4. Repeated measurement: reusing samples in testing compromises the outcome.
5. Inconsistency in instrumentation: machine or operator error.
6. Regression to the mean: outliers will regress toward the mean, so don't choose outliers exclusively for a second measurement.
7. Experimental mortality: subjects dropping out of the experiment.
8. Selection-maturation interaction: initial sample biases are exacerbated over time.
9. Experimenter bias: lack of objectivity.



Term
Negative and positive controls
Definition
Negative control: Independent variable isn't manipulated. Positive control: Independent variable is manipulated to guarantee a variation in the dependent variable.
For example, an antibacterial agent is being tested to determine the minimum concentration needed to be effective. To a series of Petri dishes containing identical bacterial colonies, a range of concentrations of the new agent is applied. On one dish, no agent is applied; this is the negative control for the experiment. On another, a concentration of the new agent that is sure to kill the bacteria is applied—this is the positive control for the experiment. 


Term
Types of data
(four scales)
Definition
Qualitative (descriptive) vs. quantitative (numerical):
1. Nominal (qualitative): names only; no scale, rank, or order.
2. Ordinal (qualitative or quantitative): ranked, but with no set scale.
3. Interval (quantitative): rank, order, and scale, but the scale has no fixed reference point.
4. Ratio (quantitative): ranked, and occurs on a scale that has a fixed reference point.



Term
Standard error
Definition
The same as the standard deviation of the sampling distribution of a statistic 


Term
standard error of the mean 

Definition
The spread of values of the distribution of means in a sample of means. The greater the number of samples, the narrower the spread of the distribution of means will be (in other words, the greater the number of samples, the lower the standard error). 
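The relationship above can be sketched numerically; this is a minimal Python illustration (the sample values are invented):

```python
import math
import statistics

def standard_error(sample):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(sample) / math.sqrt(len(sample))

# Hypothetical measurements clustered around 5.0.
sample = [4.8, 5.1, 5.0, 4.9, 5.2]
print(round(standard_error(sample), 4))  # sd 0.1581 / sqrt(5) = 0.0707
```

Doubling or quadrupling the number of samples shrinks the denominator's square root more slowly than N itself, but the standard error still falls as N grows, matching the card's claim.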


Term
Univariate data
Definition
Representation of only one variable. Best represented by bar graphs or pie charts. A drawback is that they don't reveal much about the accuracy or precision of the data. 



Term
How meaningful is the outcome of an experiment? (two factors) 

Definition
Confidence interval: construct an interval of confidence, and determine behavior relevant to the entire population.
Significance testing: show that an effect is real and not due to chance. 


Term
z-score
Definition
A z-score represents the value of a statistic on a normalized curve. It is found using the standard error of the mean, which is calculated by dividing the standard deviation by the square root of the number of samples (N). A statistic's distance from the mean, divided by the standard error of the mean, gives the z-score, whose probability can be looked up in a z-score table.
The distribution of z-scores is called the standard normal distribution, which always has a mean of 0 and a standard deviation of 1. A z-score reflects the position of a value with respect to the normalized mean in terms of standard deviations (for example, a z-score of 2 means that the value falls 2 standard deviations above the mean). 
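The conversion described above can be sketched in a few lines of Python (the sample mean, population mean, and standard deviation below are hypothetical numbers chosen to give a round answer):

```python
import math

def z_score(sample_mean, population_mean, sd, n):
    """z = (sample mean - population mean) / standard error of the mean."""
    sem = sd / math.sqrt(n)
    return (sample_mean - population_mean) / sem

# Hypothetical numbers: sample mean 52 vs. population mean 50,
# sd 5, N = 25 -> SEM = 5 / 5 = 1, so z = 2.
print(z_score(52, 50, 5, 25))
```

A z of 2 means the sample mean sits 2 standard errors above the population mean, which a z-score table converts to a probability.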


Term
t-score
Definition
In situations where the population standard deviation is not known, the t-score is used instead. The t-score is a more conservative estimate than the z-score. The t-table relies on degrees of freedom, which come from the number of samples: the number of degrees of freedom used in finding a value in a t-table is N − 1. 
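A minimal sketch of the t-score computation, with a few two-tailed 95% critical values copied from a standard t-table (the sample data are invented):

```python
import math
import statistics

# A few two-tailed 95% critical values from a standard t-table,
# keyed by degrees of freedom (df = N - 1).
T_CRIT_95 = {4: 2.776, 9: 2.262, 29: 2.045}

def t_score(sample, population_mean):
    """t = (sample mean - population mean) / (sample sd / sqrt(N))."""
    n = len(sample)
    sem = statistics.stdev(sample) / math.sqrt(n)
    return (statistics.mean(sample) - population_mean) / sem, n - 1

sample = [5.1, 5.3, 4.9, 5.2, 5.0]
t, df = t_score(sample, 5.0)
print(df, round(t, 3))  # compare |t| against T_CRIT_95[df]
```

Here |t| ≈ 1.414 with df = 4, which falls below the 2.776 critical value, so the sample mean is not significantly different from 5.0 at the 95% level.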


Term
Confidence interval
Definition
A confidence interval is a range of values within which there is a certain probability of finding the value of the population parameter (which is being estimated by the sample statistic). The most common confidence intervals use 95% and 99% probabilities. A 95% confidence interval means that if the experiment is performed over and over, the value of the descriptive statistic would fall within the range of the confidence interval 95% of the time. Once a confidence interval has been established, an experimenter can use it to show that further results are meaningful. When an experimental result falls within the confidence interval, it is probably a meaningful description of the population.
A confidence interval is calculated using the sample statistic (for example, the sample mean), the z-score, and the standard error of the mean. The confidence interval's lower bound is the sample statistic minus the product of the z-score and the standard error; the upper bound is the sample statistic plus that product. 
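The bound formulas above can be written directly in Python; 1.96 is the z-score for 95% confidence, and the sample mean, standard deviation, and N below are hypothetical:

```python
import math

def confidence_interval(sample_mean, sd, n, z=1.96):
    """CI bounds: sample mean +/- z * standard error (z = 1.96 for 95%)."""
    sem = sd / math.sqrt(n)
    return sample_mean - z * sem, sample_mean + z * sem

# Hypothetical sample: mean 50, sd 10, N = 100 -> SEM = 1.
low, high = confidence_interval(50, 10, 100)
print(round(low, 2), round(high, 2))  # 48.04 51.96
```

A later result falling inside (48.04, 51.96) would then be taken as a meaningful description of the population, per the card above.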


Term
Steps of significance testing (7 steps) 

Definition
1. State the null hypothesis (and the alternative hypothesis) using a parameter.
2. Decide upon a value of the significance level.
3. Calculate the statistic analogous to the parameter used in the null hypothesis.
4. Calculate the probability value (p-value).
5. Compare the p-value to the significance level; if the p-value is less than the significance level, the finding is statistically significant; otherwise it is not.
6. If the outcome is statistically significant, then the null hypothesis is rejected in favor of the alternate hypothesis.
7. Finally, the result and the statistical conclusion are reported in a clear and understandable manner (there are several common formats). 
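The steps above can be sketched as a two-tailed z-test in Python; the erfc identity gives the two-tailed p-value from the standard normal distribution, and the numbers in the example are hypothetical:

```python
import math

def z_test(sample_mean, population_mean, sd, n, alpha=0.05):
    """Two-tailed z-test: null hypothesis says the difference is chance."""
    sem = sd / math.sqrt(n)                   # step 3: compute the statistic
    z = (sample_mean - population_mean) / sem
    p = math.erfc(abs(z) / math.sqrt(2))      # step 4: two-tailed p-value
    return p, p < alpha                       # step 5: compare to alpha

# Hypothetical numbers: sample mean 52 vs. claimed mean 50, sd 5, N = 25.
p, significant = z_test(52, 50, 5, 25)
print(round(p, 4), significant)  # steps 6-7: reject the null if significant
```

With z = 2 the p-value is about 0.0455, just under the 0.05 significance level, so the null hypothesis is rejected in this example.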


Term
Null hypothesis
Definition
The baseline test statement for any experiment: it asserts that the experimental result is due entirely to chance. The null hypothesis is usually the opposite of what the experimenter actually believes. An example of a null hypothesis is, "The dependence of factor y on factor x is due entirely to chance." Testing the experimental result against the null hypothesis is called significance testing, whose point is to determine whether or not the null hypothesis can be rejected. If the null hypothesis is rejected, then a conclusion can be made. This is the only outcome that involves a conclusion; any other result in significance testing is inconclusive.
Type I error (α): rejecting the null hypothesis when it is true.
Type II error (β): failing to reject the null hypothesis when it is false.
A Type II error is like a missed opportunity, because no conclusion can be drawn when the null hypothesis is not rejected. Remember: not rejecting does not mean accepting the null hypothesis.
An analogy can be made to a jury verdict: failing to reject the null hypothesis is like a finding of not guilty; rejecting it is like finding the defendant guilty. You can reject the null hypothesis only when there is a preponderance of evidence to do so. A decision not to reject the null hypothesis does not mean accepting it, any more than finding someone not guilty means finding them innocent.


