Term
| The Goal of Marketing Research and Analysis |
|
Definition
| to provide objective, competitively advantageous decision-aiding insights |
|
|
Term
| Three Part Focus of Marketing Decision Making |
|
Definition
1. New customer acquisition 2. Existing customer retention (loyalty) 3. Customer value maximization |
|
|
Term
|
Definition
Learning and/ or answering stated business questions using data collected expressly for that purpose (commonly referred to as 'marketing research') |
|
|
Term
|
Definition
| Learning and/or answering stated business questions using already existing data (commonly referred to as 'database analytics') |
|
|
Term
| Survey Research |
|
Definition
-Data collected expressly for the purposes of doing research -Frequency: periodic -Typically, data capture instrument design is necessary -Qualitative and quantitative -Exploratory and confirmatory -Focuses on behavioral antecedents: opinions, attitudes, preferences, etc -Usage: exploration of consumer motivation, preferences, etc; testing new products, designs, etc -Key strength: flexibility of design -Key limitations: tenuous attitude-behavior link |
|
|
Term
|
Definition
| exploratory and confirmatory |
|
|
Term
| Survey research focuses on... |
|
Definition
| behavioral antecedents: opinions, attitudes, preferences, etc |
|
|
Term
| Usage of Survey Research |
|
Definition
| exploration of consumer motivation, preferences, etc; testing new products, designs, etc |
|
|
Term
| Key Strength of Survey Research |
|
Definition
| flexibility of design |
|
|
Term
| Key Limitation of Survey Research |
|
Definition
| tenuous attitude-behavior link |
|
|
Term
| Who are the research users/ providers? |
|
Definition
| Companies, students, advertising executives, etc |
|
|
Term
| Marketing Research |
|
Definition
| planning, collection, and analysis of data relevant to marketing decision making and the communication of results to management |
|
|
Term
|
Definition
| a "special purpose vehicle" - it encompasses a subset of marketing analyses, namely those utilizing primary data |
|
|
Term
| The Primary Research Process |
|
Definition
| Business problem > Research design > Data collection > Data analyses > Knowledge creation |
|
|
Term
| Business Problem |
|
Definition
| What is the business question that needs to be addressed? |
|
|
Term
| Research Design |
|
Definition
-Qualitative vs quantitative -Descriptive vs causal -Approach: survey vs observation vs experimentation -Sampling frame |
|
|
Term
| Data Collection |
|
Definition
| Where and how will data be captured? |
|
|
Term
| Data Analyses |
|
Definition
-Data validation (missing values, non-response bias, etc) -Data re-coding & summarization -Descriptive analysis: univariate vs bivariate -Multivariate analysis |
|
|
Term
| Knowledge Creation |
|
Definition
-Answering the business question -Result communication |
|
|
Term
| Qualitative Research |
|
Definition
| research whose findings are not subject to quantification or quantitative analysis |
|
|
Term
| Qualitative research techniques: Focus groups |
|
Definition
| Group of 8-12 participants, led by a moderator in an in-depth discussion of topic/ concept |
|
|
Term
| Qualitative research techniques: Individual in-depth interviews |
|
Definition
| one-on one interviews that probe and elicit details of hidden motivations, often via indirect techniques |
|
|
Term
| Qualitative research techniques: Delphi method |
|
Definition
experts in a particular area are asked to provide their evaluations, after which they are given a chance to re-evaluate via an iterative process; also known as the "jury of expert opinion" -> panel members evaluate on their own, then get a chance to hear what colleagues think, then have a chance to change their first evaluation |
|
|
Term
| Qualitative research techniques: Projective tests |
|
Definition
| Attempts to tap into deep feelings by guiding participants to project those feelings onto unstructured situations |
|
|
Term
| Quantitative Research |
|
Definition
| research whose findings are subject to quantification or quantitative analysis; matching informational needs with research types |
|
|
Term
| Quantitative research techniques: Survey |
|
Definition
types: self-administered questionnaires, mail surveys, panel studies, phone interviews, mall intercept, door-to-door interviews -Design/ administration flaws: random error vs systematic error or bias (sample design vs. measurement error) |
|
|
Term
| Quantitative research techniques: Experimentation |
|
Definition
ascertaining causation through purposeful manipulation of relevant factors -Natural/ field/ in-market vs. laboratory experiments: internal vs external validity -Treated vs control; extraneous variation; randomization vs blocking |
|
|
Term
| Quantitative research techniques: Observation |
|
Definition
Human vs non-human; open vs disguised -Human observation: people watching people or phenomena (ethnographic research, mystery shoppers, two-way mirrors) -Machine observation: machines "watching" people or phenomena -Consumer panels: scanner, diary-based, and other -Physiological vs opinion vs behavior measurement |
|
|
Term
| Measurement |
|
Definition
| the process of assigning numbers or labels to persons, objects or events in accordance with specific rules for representing quantities or qualities of attributes |
|
|
Term
| The Measurement Process |
|
Definition
| Concept Delineation > Operational Specification > Scale Development > Reliability & Validity Assessment > Deployment |
|
|
Term
| Concept Delineation |
|
Definition
-What is the business question that needs to be addressed? -What concept(s) does that entail? |
|
|
Term
| Operational Specification |
|
Definition
-Clearly spell out specifics of the concept(s) to be measured -Detail the process for what values/ labels are to be assigned to the concept(s) and how it is to be done -What type of scale is to be used: nominal, ordinal, interval, or ratio? |
|
|
Term
| Scale Development |
|
Definition
-Select a scale type: non-comparative (graphic vs itemized rating) vs comparative (paired comparisons, constant sum), Likert, semantic differential -Brainstorm scale items or categories -"Purify" the scale and create the final version |
|
|
Term
| Reliability & Validity Assessment |
|
Definition
| Assess the reliability and validity of scale |
|
|
Term
| Deployment |
|
Definition
-Create the usable form -Identify the type of data capture mechanism (self-administered, mall intercept, etc) |
|
|
Term
| Categorical (discrete/ non-comparative) scales |
|
Definition
nominal and ordinal (graphic, itemized rating) *Can never compute a mean -> can't make less information into more |
|
|
Term
| Continuous (comparative) scales |
|
Definition
| ratio and interval (Likert, semantic differential) |
|
|
Term
|
Definition
| an underlying attitude toward a brand |
|
|
Term
|
Definition
| questions meant to be indicative of the same underlying construct |
|
|
Term
| Nominal scale |
|
Definition
| labels of states that can be expressed numerically, but have no "numeric" meaning; used for counting (ie: gender, marital status); analysis: cross-tabulation |
|
|
Term
| Ordinal scale |
|
Definition
| rank-ordering based categorization; intervals between adjacent values are indeterminate; used for greater than or less than comparisons (ie: Movie Ratings of PG or R); analysis: frequencies |
|
|
Term
| Interval scale |
|
Definition
| fixed distance (with respect to the attribute being measured) rank-ordered categories; used for addition and subtraction (ie: degrees F, attitude); analysis: mean |
|
|
Term
| Ratio scale |
|
Definition
| an interval scale in which distances are stated with respect to a rational zero; used for addition, subtraction, multiplication and division (ie: degrees K, distance); analysis: coefficient of variation |
|
|
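The four scale levels and their permitted analyses (cross-tabulation for nominal, frequencies for ordinal, mean for interval, coefficient of variation for ratio) can be sketched with Python's standard library; all data values below are made up for illustration:

```python
from collections import Counter
from statistics import mean, stdev

# Nominal: labels with no numeric meaning -> cross-tabulate two variables
gender = ["F", "M", "F", "F", "M"]
status = ["single", "married", "married", "single", "single"]
crosstab = Counter(zip(gender, status))

# Ordinal: rank-ordered categories -> frequency counts only
ratings = ["PG", "R", "PG", "PG-13", "PG"]
freq = Counter(ratings)

# Interval: fixed distances make the mean meaningful (e.g., degrees F)
attitudes = [72, 68, 75, 70]
avg = mean(attitudes)

# Ratio: a rational zero makes the coefficient of variation meaningful
distances = [1.0, 2.5, 4.0, 2.5]
cv = stdev(distances) / mean(distances)
```

Note the asymmetry the cards describe: each level supports the analyses of the levels below it, but never the reverse ("can't make less information into more").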
Term
| Reliability |
|
Definition
| reliability = consistency; the degree to which an instrument measures the same way each time it is used under the same conditions with the same subjects |
|
|
Term
| test-retest reliability (type of reliability) |
|
Definition
| ability of the instrument to produce consistent results when re-used under similar conditions |
|
|
Term
| internal consistency (type of reliability) |
|
Definition
| the strength of association between/ among subsets of a measurement scale (Cronbach alpha) |
|
|
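The Cronbach alpha mentioned above can be computed directly from the item variances and the variance of respondents' total scores; a minimal sketch, where the function name cronbach_alpha and the sample data are illustrative assumptions:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per scale item (columns of a respondents-by-items table)."""
    k = len(items)
    item_var = sum(pvariance(col) for col in items)          # sum of item variances
    totals = [sum(scores) for scores in zip(*items)]         # each respondent's total score
    total_var = pvariance(totals)                            # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Three scale items answered by four respondents (hypothetical data)
data = [[4, 5, 3, 5], [4, 4, 3, 5], [3, 5, 3, 4]]
alpha = cronbach_alpha(data)
```

Values near 1 indicate that the items move together, i.e. strong internal consistency.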
Term
| Validity |
|
Definition
| the strength of conclusions, inferences or propositions |
|
|
Term
| face validity (type of validity) |
|
Definition
| on their face value, are the indicators as a whole a good reflection of the latent construct? |
|
|
Term
| discriminant validity (type of validity) |
|
Definition
| Are these measures of the latent construct operationally different from other sets of measures? |
|
|
Term
| Data Capture Instrument: Questionnaire |
|
Definition
| a stand-alone set of questions designed to generate the data necessary to accomplish the objective(s) of the research |
|
|
Term
| Questionnaire Design Process |
|
Definition
| Survey Objectives > Data Capture Method > Scales & Added Questions > Layout & Pre-Test > Revise & Deploy |
|
|
Term
| Survey Objectives |
|
Definition
What is the business question that needs to be addressed? What data needs to be captured? |
|
|
Term
| Data Capture Method |
|
Definition
| How is the questionnaire to be administered: mail, online, phone, intercept, etc? |
|
|
Term
| Scales & Added Questions |
|
Definition
-Earlier-created measurement scales -Additional questions: demographics or ad hoc questions (open- vs. closed-ended, scaled-response questions) |
|
|
Term
| Layout & Pre-Test |
|
Definition
-Ordering of questions and physical layout -Pre-test for ease of use and understanding (clarity, absence of bias, understandability) -Evaluate the length and complexity -Validity & reliability assessment |
|
|
Term
| Revise & Deploy |
|
Definition
| Make changes deemed necessary |
|
|
Term
|
Definition
| variables worthwhile to combine into factors; suppress all coefficients smaller than some cutoff and continue to iterate so that each variable has coefficients on only one of the factors |
|
|
Term
|
Definition
| shows the amount of information (ignorance reduction) contributed by individual components/ factors |
|
|
Term
|
Definition
| indicator of two different factors, need to take the variable out and re-run the analysis |
|
|
Term
| Cronbach's alpha |
|
Definition
| the overall measure of scale's reliability |
|
|
Term
|
Definition
| basic univariate statistics for individual scale items |
|
|
Term
|
Definition
| the contribution of individual items to the overall scale's cohesiveness |
|
|
Term
|
Definition
| Tests the equality of means hypothesis (ie: are all item-specific means statistically equal?) |
|
|
Term
| Data Capture: Response sampling |
|
Definition
| selection of a subset of individual observations within a larger population with the goal of deriving some knowledge about the population of concern |
|
|
Term
| Sampling error |
|
Definition
| difference between the sample and the population that exists only because of the observations that happened to be selected for the sample |
|
|
Term
|
Definition
| Identify the Population of Interest > Define Sampling Frame > Determine Sample Size > Implement the Sampling Plan > Collect Data |
|
|
Term
|
Definition
| things that adversely affect the analysis and should be avoided, such as an undesirable skew in the sample |
|
|
Term
|
Definition
1. Desired precision (+/- 4%?) 2. Confidence level (ie: alpha = .05) 3. Degree of variability -> .5 = max variability; conservative estimate |
|
|
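The three determinants above combine in the standard sample-size formula for a proportion, n = z^2 * p * (1 - p) / e^2; a minimal sketch, where the function name sample_size is an assumption, not from the source:

```python
import math

def sample_size(precision=0.04, z=1.96, p=0.5):
    """Required sample size for estimating a proportion.

    precision: desired precision, i.e. margin of error (+/- 4% -> 0.04)
    z: z-value for the confidence level (1.96 for alpha = .05)
    p: assumed degree of variability; p = .5 is the conservative maximum
    """
    return math.ceil(z ** 2 * p * (1 - p) / precision ** 2)

n = sample_size()  # +/- 4% at 95% confidence, maximum variability
```

Tightening the precision or raising the confidence level inflates n quickly, which is the trade-off the interdependencies card below describes.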
Term
| Interdependencies of the Sampling Process |
|
Definition
Desired precision: want to narrow it? -> increase sample size; want to widen it? -> decrease sample size To decrease variability: increase sample size |
|
|
Term
|
Definition
1. Simple Random Sampling 2. Stratified Random Sampling 3. Cluster Sampling |
|
|
Term
|
Definition
| a subset of individuals (a sample) chosen from a larger set (a population). Each individual is chosen randomly and entirely by chance, such that each individual has the same probability of being chosen at any stage during the sampling process, and each subset of k individuals has the same probability of being chosen for the sample as any other subset of k individuals |
|
|
Term
| Stratified Random Sampling |
|
Definition
| A method of sampling that involves the division of a population into smaller groups known as strata. In stratified random sampling, the strata are formed based on members' shared attributes or characteristics. A random sample from each stratum is taken in a number proportional to the stratum's size when compared to the population. These subsets of the strata are then pooled to form a random sample |
|
|
Term
|
Definition
| a sampling technique used when "natural" groupings are evident in a statistical population. It is often used in marketing research. In this technique, the total population is divided into these groups (or clusters) and a sample of the groups is selected. Then the required information is collected from the elements within each selected group. This may be done for every element in these groups or a subsample of elements may be selected within each of these groups. |
|
|
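The three random sampling methods can be sketched with Python's random module; the population and its region field below are hypothetical stand-ins for strata and clusters:

```python
import random

random.seed(7)
population = [{"id": i, "region": ["East", "West"][i % 2]} for i in range(100)]

# 1. Simple random sampling: every subset of size n is equally likely
srs = random.sample(population, 10)

# 2. Stratified random sampling: draw from each stratum in proportion to its size
strata = {}
for person in population:
    strata.setdefault(person["region"], []).append(person)
stratified = [p for group in strata.values()
              for p in random.sample(group, len(group) * 10 // 100)]

# 3. Cluster sampling: randomly select whole "natural" groups, then survey their members
clusters = list(strata.values())      # regions stand in for natural clusters here
chosen = random.sample(clusters, 1)
cluster_sample = [p for group in chosen for p in group]
```

Stratification guarantees representation of every group; cluster sampling trades some precision for cheaper data collection within a few groups.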
Term
| Data Coding & Preliminary Analysis Process |
|
Definition
| Validation > Coding & Re-Coding > Data Entry/ Analytic File Creation > Aggregate Assessment & Cleansing > Descriptive Analysis |
|
|
Term
|
Definition
-Completeness and internal validity -Generalizability considerations: self-selection, response and non-response bias |
|
|
Term
|
Definition
-Coding: converting open-ended questions into numeric fields -Re-coding: converting non-numeric fields into numeric ones |
|
|
Term
| Data Entry/ Analytic File Creation |
|
Definition
-Paper/ mail survey entry -Conversion of data capture into data analysis format |
|
|
Term
| Aggregate Assessment & Cleansing |
|
Definition
-Univariate distributions of variables of interest -Sample size adequacy -Missing value imputation |
|
|
Term
|
Definition
-Sample size and key respondent characteristics -One-way frequency table for individual responses vs. two-way cross-tabulations -Numeric vs graphical displays -Key statistics: measures of central tendency (mean, median, mode) and dispersion (standard deviation), outlier identification |
|
|
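The central-tendency and dispersion measures listed above map directly onto Python's statistics module; a sketch on hypothetical 5-point Likert responses (the 2-standard-deviation outlier rule is one common convention, not from the source):

```python
from statistics import mean, median, mode, stdev

responses = [3, 4, 4, 5, 2, 4, 3, 5, 4, 1]   # hypothetical survey answers

center = {"mean": mean(responses),            # central tendency
          "median": median(responses),
          "mode": mode(responses)}
spread = stdev(responses)                     # dispersion: sample standard deviation

# Flag responses more than 2 standard deviations from the mean as outliers
outliers = [x for x in responses if abs(x - center["mean"]) > 2 * spread]
```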
Term
| Statistically Significant at 95% |
|
Definition
| p-value < .05 (alpha = .05); the two-tailed critical z-value is 1.96 |
|
|
Term
| Statistically significant at 99% |
|
Definition
| p-value < .01 (alpha = .01); the two-tailed critical z-value is 2.58 |
|
|
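Significance at the 95% and 99% levels corresponds to two-tailed critical z-values of 1.96 and 2.58; a minimal sketch, where the function significant is a hypothetical helper:

```python
def significant(z, level=0.95):
    """Two-tailed significance check against the common critical z-values."""
    critical = {0.95: 1.96, 0.99: 2.58}   # standard two-tailed cutoffs
    return abs(z) >= critical[level]

# A z-statistic of 2.1 clears the 95% bar but not the 99% bar
```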