Module-3

1. Z-Test

Assumptions:

1. Population standard deviation (σ) is known.

2. Sample size is large (n ≥ 30) or the population is normally distributed.

3. Observations are independent.

4. Data is on interval or ratio scale.

Example:

A manufacturing firm wants to check whether the average length of rods produced has changed from
the known historical mean of 20 cm (with σ = 2 cm). A sample of 50 rods is tested. A one-sample
Z-test determines whether the current mean differs significantly from 20 cm.
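
A minimal Python sketch of this one-sample Z-test; only μ0 = 20 cm, σ = 2 cm, and n = 50 come from the example above, while the rod lengths themselves are hypothetical values generated for illustration.

```python
import numpy as np
from scipy import stats

# Known population parameters from the example
mu_0 = 20.0    # historical mean length (cm)
sigma = 2.0    # known population standard deviation (cm)

# Hypothetical sample of 50 rod lengths (cm)
rng = np.random.default_rng(seed=1)
sample = rng.normal(loc=20.3, scale=2.0, size=50)

# One-sample Z statistic: (sample mean - mu_0) / (sigma / sqrt(n))
n = len(sample)
z = (sample.mean() - mu_0) / (sigma / np.sqrt(n))

# Two-tailed p-value from the standard normal distribution
p_value = 2 * stats.norm.sf(abs(z))
print(f"z = {z:.3f}, p = {p_value:.4f}")
```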

2. T-Test

Assumptions (for all types of t-tests):

1. Population standard deviation is unknown.

2. Data is approximately normally distributed (more important for small samples).

3. Observations are independent.

4. For two-sample t-tests:

o Equal variances (unless using Welch’s t-test).

o Independent groups.

5. For paired t-tests:

o Observations are dependent (paired).

Examples:

• One-Sample t-test: A café wants to test if the average time customers spend (claimed to be
30 minutes) has changed, but the population standard deviation is unknown.

• Two-Sample t-test: Comparing average test scores of two different training programs.

• Paired t-test: Evaluating weight before and after a fitness program for the same individuals.
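
A brief sketch of the three t-test variants using scipy.stats; all of the data below are made-up illustrative values, not figures taken from the examples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)

# One-sample t-test: customer time spent (minutes) vs. the claimed mean of 30
times = rng.normal(32, 6, size=25)
t1, p1 = stats.ttest_1samp(times, popmean=30)

# Two-sample (independent) t-test: scores from two training programs
program_a = rng.normal(75, 8, size=30)
program_b = rng.normal(70, 8, size=30)
t2, p2 = stats.ttest_ind(program_a, program_b)  # assumes equal variances
# Welch's t-test (unequal variances): stats.ttest_ind(program_a, program_b, equal_var=False)

# Paired t-test: weight before and after a fitness program (same individuals)
before = rng.normal(80, 10, size=20)
after = before - rng.normal(2, 1, size=20)
t3, p3 = stats.ttest_rel(before, after)

print(f"one-sample: p={p1:.4f}  two-sample: p={p2:.4f}  paired: p={p3:.4f}")
```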

3. F-Test

Assumptions:
1. Data comes from normally distributed populations.

2. Samples are independent.

3. The F-statistic is the ratio of two variances (hence always positive).

4. Used primarily to test for equality of variances.

Example:

An operations manager compares variability in production times between two machines. An F-test
determines if the variability (not the mean) is significantly different.
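
SciPy has no single built-in two-sample variance F-test, so the sketch below computes the statistic directly from the two sample variances and uses the F distribution for the p-value; the production times are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)

# Hypothetical production times (minutes) for two machines
machine_1 = rng.normal(12, 1.5, size=25)
machine_2 = rng.normal(12, 2.5, size=25)

# F statistic = ratio of the two sample variances (larger over smaller, by convention)
var_1, var_2 = machine_1.var(ddof=1), machine_2.var(ddof=1)
F = max(var_1, var_2) / min(var_1, var_2)
df1 = df2 = len(machine_1) - 1  # degrees of freedom (equal sample sizes here)

# Two-tailed p-value from the F distribution
p_value = min(2 * stats.f.sf(F, df1, df2), 1.0)
print(f"F = {F:.3f}, p = {p_value:.4f}")
```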

4. Chi-Square Test

Assumptions:

1. Data must be in frequency (count) form.

2. Categories are mutually exclusive and exhaustive.

3. Expected frequency in each cell should be ≥ 5.

4. Observations are independent.

Examples:

• Test of Independence: A retailer tests whether product preference is associated with gender.

• Goodness-of-Fit Test: A fast-food chain wants to test if the number of customers visiting
each day of the week follows a uniform distribution.
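
Both chi-square examples can be run with scipy.stats, as sketched below; the counts are hypothetical illustrative frequencies.

```python
import numpy as np
from scipy import stats

# Test of independence: observed counts of product preference by gender
# (rows: male, female; columns: products A, B, C) - hypothetical counts
observed = np.array([[40, 30, 30],
                     [35, 45, 20]])
chi2, p_ind, dof, expected = stats.chi2_contingency(observed)

# Goodness-of-fit: daily customer counts vs. a uniform expectation (Mon..Sun)
daily_counts = [95, 110, 102, 98, 120, 130, 115]   # hypothetical counts
chi2_gof, p_gof = stats.chisquare(daily_counts)     # uniform expected frequencies by default

print(f"independence: p = {p_ind:.4f}  goodness-of-fit: p = {p_gof:.4f}")
```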

5. ANOVA (Analysis of Variance – Extension of F-test)

Assumptions:

1. Observations are independent.

2. Normal distribution of the dependent variable in each group.

3. Homogeneity of variances (equal variances across groups).

Example:

A business evaluates if there’s a significant difference in average sales across three regions. One-way
ANOVA helps test whether the mean sales differ across these regions.
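
A minimal one-way ANOVA sketch with scipy.stats.f_oneway; the regional sales figures are made up for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=4)

# Hypothetical monthly sales for three regions
region_north = rng.normal(50, 5, size=12)
region_south = rng.normal(55, 5, size=12)
region_east  = rng.normal(52, 5, size=12)

# One-way ANOVA: are the mean sales equal across the three regions?
F, p_value = stats.f_oneway(region_north, region_south, region_east)
print(f"F = {F:.3f}, p = {p_value:.4f}")
```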

6. Sign Test

Assumptions:
1. Data is ordinal or can be meaningfully ranked.

2. Observations are independent.

3. Test is based on signs of differences, not their magnitude.

4. Assumes under the null hypothesis that positive and negative signs occur with equal
probability (0.5).

5. Zero differences are excluded from the test.

Example:

A company wants to evaluate the effectiveness of a new training program by comparing employee
performance before and after the program. The sign test is applied by counting how many
employees improved (positive sign) versus worsened (negative sign) in performance, ignoring
unchanged results.
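
A sketch of the sign test built from its definition: count the positive differences after dropping zeros, then use an exact binomial test with p = 0.5 under the null hypothesis. The performance scores are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=5)

# Hypothetical performance scores before and after the training program
before = rng.normal(60, 10, size=30)
after = before + rng.normal(3, 5, size=30)

# Keep only nonzero differences, then count how many employees improved
diffs = after - before
diffs = diffs[diffs != 0]
n_pos = int(np.sum(diffs > 0))

# Under H0, positive and negative signs are equally likely (p = 0.5)
result = stats.binomtest(n_pos, n=len(diffs), p=0.5, alternative="two-sided")
print(f"improved: {n_pos}/{len(diffs)}, p = {result.pvalue:.4f}")
```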

7. Kruskal-Wallis Test

Assumptions:

1. Independent samples from each group.

2. Measurement scale is at least ordinal.

3. Observations are randomly selected.

4. Does not assume normality or equal variances (non-parametric).

5. Used when comparing 3 or more groups.

Example:

A business compares customer satisfaction ratings (ranked on a scale) across three service centers.
Data is ordinal and not normally distributed, so the Kruskal-Wallis test helps determine if any center
has significantly different satisfaction levels.
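
A short sketch using scipy.stats.kruskal; the ordinal satisfaction ratings (1–5) for the three service centers are hypothetical.

```python
from scipy import stats

# Hypothetical ordinal satisfaction ratings (1-5) at three service centers
center_a = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]
center_b = [3, 2, 3, 4, 2, 3, 3, 2, 4, 3]
center_c = [5, 4, 5, 5, 4, 3, 5, 4, 5, 4]

# Kruskal-Wallis H-test: do the rating distributions differ across centers?
H, p_value = stats.kruskal(center_a, center_b, center_c)
print(f"H = {H:.3f}, p = {p_value:.4f}")
```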

8. Run Test

Assumptions:

1. Data must be in sequence (e.g., time series).

2. Each observation falls into one of two categories (e.g., above/below median).

3. Observations are independent.

4. Tests the randomness of a sequence.

Example:
A stock analyst wants to check if the daily returns of a company show a random pattern or trends.
The run test is applied to determine whether upward/downward price movements occur randomly
or follow a pattern.
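
SciPy does not ship a runs test, so the sketch below implements the standard Wald–Wolfowitz normal approximation by hand: classify each day as above or below the median return, count the runs, and compare with the expected number of runs under randomness. The daily returns are simulated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=6)

# Hypothetical daily returns; classify each day as up (True) or down (False)
returns = rng.normal(0, 0.01, size=100)
ups = returns > np.median(returns)

# Count runs: a run is a maximal streak of identical categories
runs = 1 + int(np.sum(ups[1:] != ups[:-1]))
n1, n2 = int(ups.sum()), int((~ups).sum())

# Normal approximation to the runs distribution under the hypothesis of randomness
expected = 1 + 2 * n1 * n2 / (n1 + n2)
variance = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
z = (runs - expected) / np.sqrt(variance)
p_value = 2 * stats.norm.sf(abs(z))
print(f"runs = {runs}, z = {z:.3f}, p = {p_value:.4f}")
```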

General Notes on Non-Parametric Tests:

• Non-parametric tests make fewer assumptions than parametric ones.

• They are especially suitable when:

o Data is ordinal (e.g., Likert scale).

o Data violates normality assumptions.

o There are outliers or small sample sizes.

• They typically assess medians or ranks, not means.

PARAMETRIC TESTS

Z-Test
Purpose/Use: To test whether a sample mean differs significantly from the population mean, assuming known population variance and a large sample size.
Business/Research Applications: Quality control in manufacturing; comparing average spending to national standards; benchmarking performance metrics across locations.

T-Test
Purpose/Use: To test differences in sample means when the population variance is unknown and/or the sample size is small.
Business/Research Applications: Evaluating training program impact; comparing customer satisfaction between two services; financial analysis of investment returns.

F-Test
Purpose/Use: To compare two variances and determine if they are significantly different.
Business/Research Applications: Testing variability of two machines/processes; precondition check for ANOVA; evaluating consistency across different stores or shifts.

ANOVA (Analysis of Variance)
Purpose/Use: To test whether three or more group means are significantly different.
Business/Research Applications: Comparing regional sales performance; testing marketing strategies across demographics; analyzing production outputs across shifts.

NON-PARAMETRIC TESTS

Chi-Square Test
Purpose/Use: To test relationships between categorical variables (independence, goodness-of-fit, homogeneity).
Business/Research Applications: Association between gender and product choice; comparing satisfaction across store locations; evaluating expected vs. observed frequencies in sales.

Sign Test
Purpose/Use: To test the median difference between paired observations, based on direction only (ignores magnitude).
Business/Research Applications: Before-and-after comparison of employee performance; testing effectiveness of new software tools; analyzing service improvements.

Run Test
Purpose/Use: To test for randomness in a sequence of binary outcomes.
Business/Research Applications: Detecting patterns in stock price changes; analyzing customer behavior sequences (buy/don't buy); evaluating randomness in defect occurrences.

Kruskal-Wallis Test
Purpose/Use: Non-parametric alternative to ANOVA, used when comparing 3+ independent groups with ordinal or non-normal data.
Business/Research Applications: Comparing ranked customer satisfaction across departments; assessing performance of teams on ordinal scales; analyzing service feedback across regions.

Summary of Selection Criteria:

Z-Test: Known σ, large n (≥ 30), normal or near-normal distribution

T-Test: Unknown σ, small n, approximately normal data

F-Test: Comparing variability (variances) between two samples

ANOVA: Comparing 3+ means under normality and equal variances

Chi-Square: For frequency data and categorical variable associations

Sign Test: Small sample, ordinal data, or when data violates normality

Run Test: When analyzing sequence patterns (time-based, binary)

Kruskal-Wallis: Ordinal/ranked data, comparing 3+ groups without normality
