BRM

The document outlines various research methods and statistical techniques. It discusses qualitative research tools like interviews and observations. It also discusses secondary data sources and quantitative methods like different types of statistical analyses to test relationships between variables. These include techniques like regression analysis, ANOVA, chi-square analysis, and more. SPSS can be used to conduct various descriptive and inferential statistical analyses on different variable types.

Stages in the Research Process

1. Defining the research objectives
2. Planning a research design
3. Planning a sample
4. Collecting the data
5. Analyzing the data
6. Formulating the conclusions and preparing the report

Qualitative Research Tools


● Focus Group Interviews
● Depth Interviews
● Conversations
● Semi-Structured Interviews
● Word Association/ Sentence Completion
● Observation
● Collages
● Thematic Apperception/ Cartoon Tests

SECONDARY DATA
● Sources of Internal and Proprietary Data
● Libraries
● The Internet
● Vendors
● Producers
● Books and Periodicals
● Government Sources
● Media Sources
● Trade Association Sources
● Commercial Sources

QUANTITATIVE METHODS
Analysis of Interdependence
● Factor Analysis: A prototypical multivariate, interdependence technique that statistically
identifies a reduced number of factors from a larger number of measured variables
● Cluster Analysis: A multivariate approach for grouping observations based on similarity
among measured variables
● Multidimensional Scaling: A statistical technique that measures objects in
multidimensional space on the basis of respondents’ judgments of the similarity of
objects
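As an illustration of the last of these (not part of the original notes), here is a minimal multidimensional scaling sketch in Python, using a hypothetical brand dissimilarity matrix and scikit-learn in place of SPSS:

import numpy as np
from sklearn.manifold import MDS

# Hypothetical dissimilarity matrix for four brands (0 = identical, larger = more dissimilar)
dissimilarity = np.array([
    [0.0, 2.0, 4.0, 5.0],
    [2.0, 0.0, 3.0, 4.5],
    [4.0, 3.0, 0.0, 1.5],
    [5.0, 4.5, 1.5, 0.0],
])

# Place the four brands in two-dimensional perceptual space
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coordinates = mds.fit_transform(dissimilarity)
print(coordinates)  # one (x, y) point per brand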

TECHNIQUES
1. Cross-Tabulation: Differences among groups based on nominal or ordinal categories
(Categorical data)
2. Contingency table: A data matrix that displays the frequency of some combination of
possible responses to multiple variables; cross-tabulation results
3. Quadrant analysis: An extension of cross-tabulation in which responses to two rating-scale questions are plotted in four quadrants of a two-dimensional table
4. Univariate statistical analysis tests hypotheses involving only one variable.
5. Bivariate statistical analysis tests hypotheses involving two variables.
6. Multivariate statistical analysis tests hypotheses and models involving multiple (three
or more) variables or sets of variables.
7. SIGNIFICANCE LEVEL: A critical probability indicating how likely it is that an inference supporting a difference between an observed value and some statistical expectation is true; the acceptable level of Type I error.
8. P-value: computed significance level

9. Type I error occurs when the researcher concludes that a relationship or difference exists in the population when in reality it does not exist.
10. Type II error occurs when a researcher concludes that no relationship or difference exists when in fact one does exist.
11. Parametric statistics: Involve numbers with known, continuous distributions; when the data are interval or ratio scaled and the sample size is large, parametric statistical procedures are appropriate.
12. Nonparametric statistics: Appropriate when the variables being analyzed do not conform to any known or continuous distribution.
13. T-test: A univariate t-test is appropriate when the variable being analyzed is interval or ratio. When sample size (n) is larger than 30, the t-distribution and Z-distribution are almost identical. (Scale data)
14. Chi-Square Test: One of the basic tests for statistical significance that is particularly appropriate for testing hypotheses about frequencies arranged in a frequency or contingency table. (Categorical data)
15. The t-Test for Comparing Two Means (independent-samples t-test): A test of the hypothesis that the mean scores on some interval- or ratio-scaled variable differ between two groups formed by a less-than-interval (classificatory) variable. (Scale DV vs. Categorical IV) It compares two groups on any continuous variable; if both variables are categorical, use chi-square instead.
16. Paired-samples t-test: An appropriate test for comparing the scores of two interval variables drawn from related populations (Scale vs. Scale), e.g. the same respondents measured at two different points in time (see the Python sketch after this list).
17. For more than two groups, use ANOVA.
18. Z-test for differences of proportions: A technique used to test the hypothesis that proportions are significantly different for two independent samples or groups.
19. Analysis of Variance (ANOVA): Analysis involving the investigation of the effects of one treatment variable on an interval-scaled dependent variable; a hypothesis-testing technique to determine whether statistically significant differences in means occur between two or more groups. (Scale)
20. F-test: A procedure used to determine whether there is more variability in the scores of one sample than in the scores of another sample.
21. A correlation coefficient is a statistical measure of covariation, or association, between two variables. Covariance is the extent to which a change in one variable corresponds systematically to a change in another.
22. Coefficient of determination (R2): A measure obtained by squaring the correlation coefficient; the proportion of the total variance of a variable that is accounted for by another variable.
23. Regression analysis is another technique for measuring the linear association between a dependent and an independent variable.
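As a rough illustration of items 16 and 18 above (not part of the original notes), the same tests can be run outside SPSS; the variables and numbers below are hypothetical.

import numpy as np
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

# Paired-samples t-test: satisfaction scores for the same customers at two time points (hypothetical)
before = np.array([3.2, 4.1, 3.8, 4.5, 3.9, 4.2])
after  = np.array([3.9, 4.3, 4.0, 4.8, 4.1, 4.6])
t_stat, p_value = stats.ttest_rel(before, after)
print("paired t-test:", t_stat, p_value)

# Z-test for two proportions: purchasers out of respondents in two independent groups (hypothetical)
successes = np.array([45, 30])    # purchasers in group A and group B
samples   = np.array([100, 100])  # respondents in each group
z_stat, p_value = proportions_ztest(successes, samples)
print("z-test for proportions:", z_stat, p_value)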

LEVENE'S TEST
1. p <= 0.05: the equal-variances assumption is violated; read the second row of the SPSS output ("Equal variances not assumed")
2. p > 0.05: the assumption is not violated; read the first row ("Equal variances assumed")

RELIABILITY
1. Cronbach's alpha >= 0.7

STRENGTH OF CORRELATION
1. r = 0.10 to 0.29 (or -0.10 to -0.29): small; don't proceed to regression
2. r = 0.30 to 0.49 (or -0.30 to -0.49): medium
3. r = 0.50 to 1.0 (or -0.50 to -1.0): high
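To see where a correlation falls on this scale, a minimal Python sketch (not from the original notes) with two hypothetical scale variables:

from scipy import stats

# Hypothetical scale variables, e.g. satisfaction and repurchase intention
satisfaction = [3.1, 4.2, 3.8, 4.5, 2.9, 4.0, 3.5]
repurchase   = [2.8, 4.0, 3.5, 4.6, 3.0, 3.9, 3.2]

r, p_value = stats.pearsonr(satisfaction, repurchase)
print(f"r = {r:.2f}, p = {p_value:.3f}")

# Apply the rule-of-thumb thresholds above
strength = "small" if abs(r) < 0.30 else "medium" if abs(r) < 0.50 else "high"
print("strength of correlation:", strength)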

ADJUSTED R SQUARE
1. Report the adjusted R-square when the sample size is less than 30 (it corrects R-square for the number of predictors in the model)

Chi-Square: Categorical vs. Categorical


T-Test: DV is scale and IV is categorical
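A brief sketch of these two rules in Python (an illustration, not part of the original notes), assuming a hypothetical survey DataFrame with gender, brand preference, and satisfaction columns:

import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "gender": ["M", "F", "F", "M", "F", "M", "F", "M"],
    "brand_preference": ["A", "B", "A", "A", "B", "B", "A", "B"],
    "satisfaction": [3.5, 4.2, 4.0, 3.1, 4.4, 3.0, 4.1, 3.3],
})

# Categorical vs. categorical: chi-square on a cross-tabulation
observed = pd.crosstab(df["gender"], df["brand_preference"])
chi2, p, dof, expected = stats.chi2_contingency(observed)
print("chi-square:", chi2, "p =", p)

# Scale DV vs. categorical IV (two groups): independent-samples t-test
male = df.loc[df["gender"] == "M", "satisfaction"]
female = df.loc[df["gender"] == "F", "satisfaction"]
t_stat, p = stats.ttest_ind(male, female, equal_var=False)  # equal_var=False (Welch) if Levene's test is violated
print("t-test:", t_stat, "p =", p)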

Multiple regression is a statistical method used to examine the relationship between a dependent variable and two or more independent variables. Interpreting multiple regression results in marketing and business research methodology involves understanding the relationships between variables and the significance of the coefficients. Here are some steps to guide you through interpreting multiple regression results in this context:

Look at the coefficients: The coefficients represent the strength and direction of the relationship
between the dependent variable and the independent variables. Positive coefficients indicate a
positive relationship between the variables, while negative coefficients indicate a negative
relationship.

Check the p-value: The p-value tells you whether the coefficient is statistically significant. A p-value less than 0.05 indicates that the coefficient is statistically significant, meaning that it is unlikely to have occurred by chance.

Examine the adjusted R-squared: The adjusted R-squared value measures how much of the
variation in the dependent variable can be explained by the independent variables. It adjusts for
the number of independent variables in the model. A higher adjusted R-squared value indicates
a better fit of the model.

Look for multicollinearity: Multicollinearity occurs when two or more independent variables are
highly correlated with each other. It can affect the regression results, so it is important to check
for it and consider its impact on the model.

Consider the practical significance: Even if a coefficient is statistically significant, it may not be
practically significant. Consider the size of the coefficient and whether it is meaningful in the
context of the study.

Consider the limitations: Finally, it is important to consider the limitations of the study and the
regression model. Are there other variables that could be influencing the relationship between
the independent and dependent variables? Are there any measurement or sampling errors that
could be affecting the results?
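The steps above apply to any regression output; as a hedged illustration (not part of the original notes), here is how a multiple regression might be run and read in Python with statsmodels, using hypothetical sales, advertising, and price data:

import pandas as pd
import statsmodels.api as sm

# Hypothetical data: sales as DV, advertising spend and price as IVs
df = pd.DataFrame({
    "sales":       [120, 150, 170, 140, 200, 210, 160, 190],
    "advertising": [10, 15, 20, 12, 25, 27, 18, 22],
    "price":       [9.5, 9.0, 8.5, 9.2, 8.0, 7.8, 8.8, 8.2],
})

X = sm.add_constant(df[["advertising", "price"]])  # add intercept
model = sm.OLS(df["sales"], X).fit()

print(model.params)        # coefficients: sign and magnitude of each relationship
print(model.pvalues)       # p-values: coefficients with p < 0.05 are statistically significant
print(model.rsquared_adj)  # adjusted R-squared: variance explained, adjusted for the number of IVs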

SPSS Techniques
Descriptive statistics: Descriptive statistics are used to summarize and describe the
characteristics of a dataset, such as mean, standard deviation, and range. They are often used
to provide a baseline understanding of the data before more advanced analyses are conducted.

Correlation analysis: Correlation analysis is used to examine the relationship between two
variables. It can help researchers determine whether there is a relationship between variables
and the strength of that relationship.
Factor analysis: Factor analysis is used to identify underlying factors or dimensions within a
dataset. It can be used to reduce the number of variables in a dataset and to better understand
the relationships between variables.

Regression analysis: Regression analysis is used to examine the relationship between a dependent variable and one or more independent variables. It can help researchers determine the strength and direction of the relationship between variables.

Cluster analysis: Cluster analysis is used to group similar cases together based on the
similarities or differences between them. It can help researchers identify patterns and segment
the market.
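A minimal sketch of cluster analysis for segmentation (not part of the original notes), with hypothetical standardized ratings and scikit-learn's k-means standing in for the SPSS procedure:

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical respondent ratings: price sensitivity and quality consciousness
ratings = np.array([
    [1.0, 5.0], [1.2, 4.8], [0.9, 5.1],   # likely one segment
    [4.8, 1.1], [5.0, 0.9], [4.7, 1.3],   # likely another segment
])

scaled = StandardScaler().fit_transform(ratings)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scaled)
print(kmeans.labels_)  # segment membership for each respondent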

ANOVA (analysis of variance): ANOVA is used to compare means across two or more groups. It
can be used to determine whether there are differences between groups and which groups
differ.

Chi-square analysis: Chi-square analysis is used to examine the relationship between two
categorical variables. It can help researchers determine whether there is a significant
association between the variables.

Categorical variables:

Chi-square analysis: This technique is used to examine the relationship between two categorical
variables. For example, it can be used to determine whether there is a significant relationship
between gender and brand preference.

Contingency tables: This technique is used to display the frequency distribution of two or more
categorical variables. For example, it can be used to show the frequency of responses to a
survey question by age and gender.
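As an illustration only (not from the original notes), this kind of table can be built with pandas from hypothetical survey responses:

import pandas as pd

responses = pd.DataFrame({
    "age_group": ["18-25", "26-40", "18-25", "41+", "26-40", "41+", "18-25"],
    "gender":    ["F", "M", "M", "F", "F", "M", "F"],
})

# Frequency of each age group / gender combination, with row and column totals
print(pd.crosstab(responses["age_group"], responses["gender"], margins=True))

# Row percentages make group comparisons easier to read
print(pd.crosstab(responses["age_group"], responses["gender"], normalize="index"))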

Logistic regression: This technique is used to examine the relationship between a categorical
dependent variable and one or more categorical or scaled independent variables. For example,
it can be used to predict the likelihood of purchasing a product based on age, gender, and
income.
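A hedged sketch of logistic regression (not part of the original notes), using statsmodels rather than SPSS and hypothetical purchase data:

import pandas as pd
import statsmodels.api as sm

# Hypothetical data: did the respondent purchase (1/0), with age and income as predictors
df = pd.DataFrame({
    "purchased": [1, 0, 1, 0, 1, 0, 0, 1],
    "age":       [25, 45, 31, 29, 52, 48, 35, 60],
    "income":    [40, 55, 48, 42, 60, 58, 50, 65],  # in thousands
})

X = sm.add_constant(df[["age", "income"]])
logit = sm.Logit(df["purchased"], X).fit(disp=0)
print(logit.summary())  # coefficients are log-odds; p-values show which predictors matter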

Scaled variables:

Correlation analysis: This technique is used to examine the relationship between two scaled
variables. For example, it can be used to determine whether there is a significant correlation
between customer satisfaction and sales.

Regression analysis: This technique is used to examine the relationship between a scaled
dependent variable and one or more categorical or scaled independent variables. For example,
it can be used to predict sales based on advertising expenditure, product price, and brand
image.

Analysis of variance (ANOVA): This technique is used to compare the means of two or more
groups for a scaled variable. For example, it can be used to determine whether there is a
significant difference in the average customer satisfaction score across different regions.
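A minimal one-way ANOVA sketch in Python (not part of the original notes), with hypothetical satisfaction scores from three regions:

from scipy import stats

# Hypothetical customer satisfaction scores by region
north = [4.1, 3.8, 4.3, 4.0, 3.9]
south = [3.2, 3.5, 3.1, 3.6, 3.4]
west  = [4.5, 4.7, 4.2, 4.6, 4.4]

f_stat, p_value = stats.f_oneway(north, south, west)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# p < 0.05 suggests at least one regional mean differs; follow up with post-hoc tests to see which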

Interpret SPSS
Review the descriptive statistics: Descriptive statistics provide an overview of the data, including
measures of central tendency (mean, median, mode) and dispersion (range, standard
deviation). Reviewing descriptive statistics can help you understand the distribution of the data
and identify any outliers or anomalies.

Conduct hypothesis tests: Hypothesis tests are used to determine whether there is a significant
difference or relationship between variables. The choice of hypothesis test depends on the
research question and the type of variables being analyzed. For example, chi-square analysis
can be used to test the relationship between two categorical variables, while t-tests or ANOVA
can be used to test the difference in means between two or more groups.

Interpret regression coefficients: In regression analysis, the coefficients represent the relationship between the independent variables and the dependent variable. Interpretation of the coefficients can help identify which independent variables are most strongly related to the dependent variable.

Examine the significance level: The significance level (p-value) indicates the probability of
obtaining the observed results by chance. Typically, a significance level of 0.05 or lower is
considered statistically significant, meaning that there is a low probability of obtaining the
observed results by chance.

Consider the practical significance: While statistical significance is important, it is also important
to consider the practical significance of the results. This involves considering whether the effect
size is large enough to be meaningful in a practical sense.

Draw conclusions and make recommendations: Based on the results of the analysis, draw
conclusions and make recommendations for further research or action.

Interpret Descriptive Statistics
Measures of central tendency: These measures describe the central or typical value of a
dataset. The most common measures of central tendency are the mean, median, and mode.
Mean: The mean is the sum of all values in a dataset divided by the total number of values. It is
useful for describing data with a symmetric distribution. To analyze the mean, compare it to
other measures of central tendency and consider any outliers that may be affecting the value.
Median: The median is the middle value of a dataset. It is useful for describing data with skewed
distributions or outliers. To analyze the median, consider the range of values in the dataset and
the number of outliers.

Mode: The mode is the most common value in a dataset. It is useful for describing data with
discrete or categorical values. To analyze the mode, consider the number of times the value
appears in the dataset and any other measures of central tendency.

Measures of dispersion: These measures describe the spread or variability of a dataset. The
most common measures of dispersion are the range, variance, and standard deviation.
Range: The range is the difference between the highest and lowest values in a dataset. It is
useful for describing the spread of data. To analyze the range, consider any outliers that may be
affecting the values.

Variance: The variance is a measure of how spread out the values in a dataset are from the
mean. It is useful for describing data with a symmetric distribution. To analyze the variance,
compare it to other measures of dispersion and consider any outliers that may be affecting the
values.

Standard deviation: The standard deviation is the square root of the variance. It is useful for
describing data with a symmetric distribution. To analyze the standard deviation, compare it to
other measures of dispersion and consider any outliers that may be affecting the values.

Measures of shape: These measures describe the shape of the distribution of a dataset. The
most common measures of shape are skewness and kurtosis.
Skewness: Skewness measures the symmetry or lack of symmetry of a dataset. A positive
skewness indicates that the dataset is skewed to the right, while a negative skewness indicates
that the dataset is skewed to the left. To analyze skewness, consider the direction and
magnitude of the skewness value.

Kurtosis: Kurtosis measures the peakedness or flatness of a dataset. A positive kurtosis indicates a dataset that is more peaked than a normal distribution, while a negative kurtosis indicates a dataset that is flatter than a normal distribution. To analyze kurtosis, consider the direction and magnitude of the kurtosis value.
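All of these descriptive measures can be pulled together in a few lines of Python (an illustration, not part of the original notes), using a hypothetical scale variable:

import pandas as pd
from scipy import stats

scores = pd.Series([3.2, 3.8, 4.1, 2.9, 4.5, 3.7, 4.0, 3.3, 4.1, 3.6])

print("mean:", scores.mean())
print("median:", scores.median())
print("mode:", scores.mode().tolist())
print("range:", scores.max() - scores.min())
print("variance:", scores.var())            # sample variance
print("std deviation:", scores.std())       # square root of the variance
print("skewness:", stats.skew(scores))      # > 0: skewed right, < 0: skewed left
print("kurtosis:", stats.kurtosis(scores))  # > 0: more peaked than normal, < 0: flatter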

Interpret Regression
Check the regression equation: The regression equation shows the relationship between the
dependent variable and the independent variables. The coefficients represent the strength and
direction of the relationship. Check whether the equation is statistically significant and whether
the coefficients have the expected signs.

Examine the R-squared value: The R-squared value represents the proportion of variation in the
dependent variable that is explained by the independent variables. A higher R-squared value
indicates a better fit between the model and the data. Check whether the R-squared value is
high enough to justify the model's use.

Analyze the coefficients: The coefficients represent the impact of the independent variables on
the dependent variable. Check the magnitude of the coefficients to determine which
independent variables have the most significant impact on the dependent variable.

Check for statistical significance: Statistical significance indicates whether the coefficients are
different from zero. Check the p-values of the coefficients to determine whether they are
statistically significant. A p-value less than 0.05 is typically considered significant.

Check for multicollinearity: Multicollinearity occurs when the independent variables are highly
correlated with each other, which can make it difficult to interpret the impact of each variable.
Check the variance inflation factor (VIF) to determine whether multicollinearity is present. If the
VIF is greater than 10, there may be a problem with multicollinearity.

Evaluate the residuals: The residuals represent the difference between the predicted and actual
values of the dependent variable. Check whether the residuals are normally distributed and
whether there is any pattern in the residual plot. A normal distribution and a random pattern in
the residual plot indicate that the regression model is appropriate.

Check for outliers: Outliers are observations that are significantly different from the rest of the
data. Check whether there are any outliers that may be influencing the regression results.
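A hedged sketch (not part of the original notes) of the multicollinearity and residual checks described above, reusing the same hypothetical sales data as the earlier regression sketch:

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical predictors and outcome
df = pd.DataFrame({
    "sales":       [120, 150, 170, 140, 200, 210, 160, 190],
    "advertising": [10, 15, 20, 12, 25, 27, 18, 22],
    "price":       [9.5, 9.0, 8.5, 9.2, 8.0, 7.8, 8.8, 8.2],
})
X = sm.add_constant(df[["advertising", "price"]])
model = sm.OLS(df["sales"], X).fit()

# Multicollinearity: VIF above about 10 signals a problem
# (in these made-up numbers advertising and price move together, so expect large VIFs)
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, variance_inflation_factor(X.values, i))

# Residuals: look for rough normality and no obvious pattern against fitted values
residuals = model.resid
print(residuals.describe())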

Factor Analysis
Examine the eigenvalues: The eigenvalues represent the amount of variance explained by each
factor. Look for factors with eigenvalues greater than 1, as these indicate significant factors.

Check the factor loadings: The factor loadings indicate the strength of the relationship between
each variable and each factor. Look for high loadings (close to 1 or -1), as these indicate a
strong relationship between the variable and the factor.

Interpret the factors: Based on the variables that load highly on each factor, interpret what the
factor represents. This can help you understand the underlying dimensions that drive consumer
behavior or market segmentation.

Assess the reliability: Check the Cronbach's alpha coefficient for each factor to ensure that the
variables that load highly on the factor are reliable and measure the same construct.

Validate the results: Use additional analyses such as confirmatory factor analysis or structural
equation modeling to validate the results of the factor analysis.
Draw conclusions: Based on the factor analysis results, draw conclusions about the underlying
dimensions that drive consumer behavior or market segmentation. Use these insights to inform
marketing and business decisions.
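A minimal sketch in Python (not part of the original notes) of the eigenvalue and loading checks above, using scikit-learn's FactorAnalysis on hypothetical (randomly generated) survey items; the factors here will be weak because the data are noise, so the point is where to look, not the numbers themselves.

import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical standardized responses to six survey items
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.normal(size=(100, 6)),
                     columns=[f"item{i}" for i in range(1, 7)])

# Eigenvalues of the correlation matrix: retain factors with eigenvalues > 1
eigenvalues = np.linalg.eigvalsh(items.corr().values)[::-1]
print("eigenvalues:", np.round(eigenvalues, 2))

# Factor loadings: look for values close to +1 or -1
fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
loadings = pd.DataFrame(fa.components_.T, index=items.columns,
                        columns=["factor1", "factor2"])
print(loadings.round(2))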

Descriptive Research
Cross-sectional research: Cross-sectional research involves collecting data from a sample of
participants at a single point in time. This research design is commonly used to study consumer
behavior and attitudes.

Longitudinal research: Longitudinal research involves collecting data from the same sample of
participants over an extended period of time. This research design is commonly used to study
changes in consumer behavior and attitudes over time.

Projective Techniques
Association Technique: This technique involves presenting respondents with a stimulus and
asking them to say the first thing that comes to their mind. This method is particularly useful for
understanding the associations consumers make between products and brands.

Completion Technique: In this technique, respondents are given an incomplete sentence and
asked to complete it with their own words. For example, a researcher might say, "When I see
this product, I feel...". This technique is useful for understanding consumers' emotions and
attitudes towards products.

Picture Interpretation Technique: This technique involves showing respondents a picture and
asking them to describe what they see. The picture can be related to the product or brand being
studied, or it can be a more general image. This technique is useful for understanding the
subconscious associations consumers make with products.

Role-playing Technique: In this technique, respondents are asked to act out a scenario related
to the product or brand being studied. For example, a researcher might ask a respondent to
pretend to be a customer in a store and describe their experience. This technique is useful for
understanding consumers' behavior in different situations.

Storytelling Technique: This technique involves asking respondents to tell a story related to the
product or brand being studied. The story can be fictional or based on their own experiences.
This technique is useful for understanding the underlying motivations and beliefs that drive
consumer behavior.

Secondary Research
Literature review: A literature review involves reviewing existing research studies, books, and
articles related to a particular research topic. This research design is commonly used to gather
information on industry trends, consumer behavior, and market competition.
Database research: Database research involves searching for and analyzing data from various
databases such as government statistics, market research reports, and industry journals. This
research design is commonly used to gather data on industry trends, consumer behavior, and
market competition.

Content analysis: Content analysis involves analyzing the content of marketing materials such
as advertisements, social media posts, and product reviews. This research design is commonly
used to understand consumer perceptions and attitudes toward a particular product or brand.

Case study research: Case study research involves analyzing a particular case or situation in
depth. This research design is commonly used to gather information on specific marketing
problems or business challenges.

Meta-analysis: Meta-analysis involves combining the results of multiple studies to obtain a more
comprehensive understanding of a particular research problem. This research design is
commonly used to synthesize the results of multiple studies on a particular marketing topic.

Here are the steps to perform Levene's test of homogeneity:

Identify the groups: The first step is to identify the groups you want to compare. These groups
can be based on different categories such as demographic factors, product types, or geographic
regions.

Determine the dependent variable: The dependent variable is the variable you want to compare
across the groups. This can be a metric such as sales, revenue, or customer satisfaction.

Calculate the group variances: Calculate the variances of each group using the dependent
variable. This can be done using a statistical software package such as SPSS, SAS, or R.

Perform Levene's test: Use a statistical software package to perform Levene's test of homogeneity. The test compares the variances of each group and produces a p-value. If the p-value is less than the significance level (typically 0.05), the variances are considered significantly different, and the assumption of homogeneity of variances is violated.

Interpret the results: If the p-value is greater than the significance level, the variances are
considered homogenous, and you can proceed with the statistical analysis. If the p-value is less
than the significance level, the variances are considered non-homogeneous, and you need to
use alternative statistical tests that do not assume equal variances.
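Outside SPSS, the same test takes only a couple of lines (an illustration, not from the original notes), with hypothetical satisfaction scores for two groups:

from scipy import stats

# Hypothetical satisfaction scores for two customer groups
group_a = [4.1, 3.8, 4.3, 4.0, 3.9, 4.2]
group_b = [3.2, 4.9, 2.8, 4.6, 3.1, 4.8]

stat, p_value = stats.levene(group_a, group_b)
print(f"Levene statistic = {stat:.2f}, p = {p_value:.3f}")
# p <= 0.05: variances differ, so use a test that does not assume equal variances (e.g. Welch's t-test)
# p >  0.05: homogeneity of variances can be assumed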

Overall, Levene's test of homogeneity is an essential tool in marketing and business research
methodology for ensuring the validity of statistical tests that assume equal variances. By
performing this test, researchers can ensure that their results are reliable and not affected by
differences in variances across groups.
Here are some key steps to interpret reliability in marketing and business research
methodology:

Assess the reliability coefficient: The most common measure of reliability is Cronbach's alpha,
which ranges from 0 to 1. A reliability coefficient of 0 indicates no reliability, and a coefficient of
1 indicates perfect reliability. In general, a reliability coefficient above 0.7 is considered
acceptable, while a coefficient above 0.9 is considered very good.
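Cronbach's alpha is not built into scipy, but it can be computed directly from its standard formula; a small sketch (not part of the original notes) with hypothetical item scores:

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses to a three-item satisfaction scale
scale = pd.DataFrame({
    "item1": [4, 5, 3, 4, 5, 2, 4, 3],
    "item2": [4, 5, 3, 5, 4, 2, 4, 3],
    "item3": [3, 5, 4, 4, 5, 1, 4, 2],
})

alpha = cronbach_alpha(scale)
print(f"Cronbach's alpha = {alpha:.2f}")  # above 0.7 is generally acceptable, above 0.9 very good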
