Completely Randomized Design (CRD):
A CRD is a design in which the selected treatments are allocated to the experimental units
completely at random. It is the basic design used when an investigator wants to compare the
effects of several treatments (varieties, fertilizers, teaching methods, and so on) on
homogeneous units. In fact, it is the simplest design, involving the principles of replication
and randomization but no local control.
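As an illustration of the layout, below is a minimal Python sketch that allocates treatments to units completely at random; the treatment names and replication numbers are hypothetical.

# A minimal sketch of a CRD layout: treatments are assigned to the
# experimental units completely at random (hypothetical labels).
import random

treatments = {"A": 3, "B": 3, "C": 4}           # treatment -> number of replications
labels = [t for t, r in treatments.items() for _ in range(r)]

random.seed(1)
random.shuffle(labels)                           # complete randomization, no local control

for unit, treat in enumerate(labels, start=1):
    print(f"unit {unit:2d} -> treatment {treat}")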
Advantages of a CRD
• Its layout is very easy.
• There is complete flexibility in this design, i.e., any number of treatments, and any number
of replications per treatment, can be tried.
• The whole of the experimental material can be utilized.
• This design yields the maximum degrees of freedom for experimental error.
• The analysis of the data is the simplest compared with any other design.
• Even if some values are missing, the analysis can still be carried out.
Disadvantages of a CRD
• It is difficult to find experimental units that are homogeneous in all respects, and hence a
CRD is seldom suitable for field experiments compared with other experimental designs.
• It is less accurate than designs that employ local control.
Question: Write down the fixed-effect linear model for a CRD, stating the necessary assumptions.
Estimate the parameters of the model and analyse the data. Also find the expected values of the
different mean sums of squares of the model and show that
$E[\text{Treatment MSS}] \geq E[\text{Error MSS}]$.
Answer: The fixed-effect linear model for a CRD with one observation per unit is
$Y_{ij} = \mu + \alpha_i + \varepsilon_{ij}$; $i = 1, 2, \ldots, k$ and $j = 1, 2, \ldots, r_i$,
where
$Y_{ij}$ is the observation corresponding to the $j$th unit under the $i$th treatment,
$\mu$ is the general mean effect,
$\alpha_i$ is the $i$th treatment effect, and
$\varepsilon_{ij}$ is a random error component.
Assumptions:
1) The samples come from normal populations.
2) $\varepsilon_{ij} \sim NID(0, \sigma^2)$.
3) The different effects are additive in nature.
4) All the observations $Y_{ij}$ are independent.
5) Restriction: $\sum_{i=1}^{k} r_i \alpha_i = 0$.
Estimation of the Model Parameters:
Let $\hat{\mu}$ and $\hat{\alpha}_i$ be the OLS estimators of $\mu$ and $\alpha_i$ respectively.
$\therefore e_{ij} = Y_{ij} - \hat{\mu} - \hat{\alpha}_i$
$\Rightarrow SSE = \sum_{i=1}^{k}\sum_{j=1}^{r_i}(Y_{ij} - \hat{\mu} - \hat{\alpha}_i)^2$
Now we minimize SSE with respect to $\hat{\mu}$ and $\hat{\alpha}_i$ respectively.
$\dfrac{\partial SSE}{\partial \hat{\mu}} = 0 \Rightarrow \sum_{i=1}^{k}\sum_{j=1}^{r_i}(Y_{ij} - \hat{\mu} - \hat{\alpha}_i) = 0$
$\Rightarrow \sum_{i=1}^{k}\sum_{j=1}^{r_i} Y_{ij} - \hat{\mu}\sum_{i=1}^{k} r_i - \sum_{i=1}^{k} r_i\hat{\alpha}_i = 0$
$\Rightarrow \hat{\mu} = \dfrac{\sum_{i=1}^{k}\sum_{j=1}^{r_i} Y_{ij}}{\sum_{i=1}^{k} r_i} = \bar{Y}_{\bullet\bullet}$, since $\sum_{i=1}^{k} r_i\hat{\alpha}_i = 0$.
$\dfrac{\partial SSE}{\partial \hat{\alpha}_i} = 0 \Rightarrow \sum_{j=1}^{r_i}(Y_{ij} - \hat{\mu} - \hat{\alpha}_i) = 0$
$\Rightarrow \sum_{j=1}^{r_i} Y_{ij} - r_i\hat{\mu} - r_i\hat{\alpha}_i = 0$
$\Rightarrow \hat{\alpha}_i = \dfrac{\sum_{j=1}^{r_i} Y_{ij}}{r_i} - \hat{\mu} = \bar{Y}_{i\bullet} - \bar{Y}_{\bullet\bullet}$
$\therefore e_{ij} = Y_{ij} - \hat{\mu} - \hat{\alpha}_i = Y_{ij} - \bar{Y}_{\bullet\bullet} - (\bar{Y}_{i\bullet} - \bar{Y}_{\bullet\bullet}) = Y_{ij} - \bar{Y}_{i\bullet}$
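The closed-form estimates above are easy to verify numerically. Below is a small Python sketch using hypothetical data for three treatments with unequal replications; the group values are made up purely for illustration.

# A small sketch (hypothetical data) of the OLS estimates derived above:
# mu_hat = grand mean, alpha_hat_i = group mean - grand mean,
# residuals e_ij = Y_ij - group mean.
import numpy as np

groups = [np.array([12.0, 14.0, 13.0]),          # treatment 1
          np.array([15.0, 16.0, 17.0, 16.0]),    # treatment 2
          np.array([11.0, 10.0, 12.0])]          # treatment 3

all_obs = np.concatenate(groups)
mu_hat = all_obs.mean()                          # Y-bar..
alpha_hat = [g.mean() - mu_hat for g in groups]  # Y-bar_i. - Y-bar..
residuals = [g - g.mean() for g in groups]       # e_ij = Y_ij - Y-bar_i.

print("mu_hat    =", mu_hat)
print("alpha_hat =", np.round(alpha_hat, 3))
# The restriction sum_i r_i * alpha_hat_i holds by construction
# (the printed value is ~0 up to floating-point rounding):
print("sum r_i*alpha_hat_i =", sum(len(g) * a for g, a in zip(groups, alpha_hat)))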
Partition of the Total Sum of Squares (SST):
$SST = \sum_{i=1}^{k}\sum_{j=1}^{r_i}(Y_{ij} - \bar{Y}_{\bullet\bullet})^2$
$= \sum_{i=1}^{k}\sum_{j=1}^{r_i}\{(\bar{Y}_{i\bullet} - \bar{Y}_{\bullet\bullet}) + (Y_{ij} - \bar{Y}_{i\bullet})\}^2$
$= \sum_{i=1}^{k}\sum_{j=1}^{r_i}(\bar{Y}_{i\bullet} - \bar{Y}_{\bullet\bullet})^2 + \sum_{i=1}^{k}\sum_{j=1}^{r_i}(Y_{ij} - \bar{Y}_{i\bullet})^2 + 2\sum_{i=1}^{k}\sum_{j=1}^{r_i}(\bar{Y}_{i\bullet} - \bar{Y}_{\bullet\bullet})(Y_{ij} - \bar{Y}_{i\bullet})$
The cross-product term vanishes because $\sum_{j=1}^{r_i}(Y_{ij} - \bar{Y}_{i\bullet}) = 0$ for every $i$. Therefore
$\therefore SST = \sum_{i=1}^{k} r_i(\bar{Y}_{i\bullet} - \bar{Y}_{\bullet\bullet})^2 + \sum_{i=1}^{k}\sum_{j=1}^{r_i}(Y_{ij} - \bar{Y}_{i\bullet})^2 = SS(\text{treat}) + SSE$
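A quick numeric check of this identity, again with hypothetical data, confirms that the cross-product term contributes nothing:

# Numeric check (hypothetical data) that SST = SS(treat) + SSE,
# i.e. that the cross-product term really vanishes.
import numpy as np

groups = [np.array([12.0, 14.0, 13.0]),
          np.array([15.0, 16.0, 17.0, 16.0]),
          np.array([11.0, 10.0, 12.0])]

all_obs = np.concatenate(groups)
grand = all_obs.mean()

sst = ((all_obs - grand) ** 2).sum()
ss_treat = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
sse = sum(((g - g.mean()) ** 2).sum() for g in groups)

print(f"SST = {sst:.4f}, SS(treat) + SSE = {ss_treat + sse:.4f}")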
Test of hypothesis:
$H_0: \alpha_1 = \alpha_2 = \cdots = \alpha_k = 0$,
or $H_0: \alpha_i = 0$ for all $i = 1, 2, \ldots, k$
vs $H_1: \alpha_i \neq 0$ for at least one $i$.
The test statistic is
$F = \dfrac{MSS(\text{treatment})}{MSSE} \sim F_{(k-1),(n-k)}$; $n = \sum_{i=1}^{k} r_i$
ANOVA Table

Sources of Variation (SV) | D.F. | SS        | MSS                          | Cal F               | Tab F
--------------------------|------|-----------|------------------------------|---------------------|--------------------
Treatment                 | k-1  | SS(treat) | MSS(treat) = SS(treat)/(k-1) | F = MSS(treat)/MSSE | F_{α; (k-1),(n-k)}
Error                     | n-k  | SSE       | MSSE = SSE/(n-k)             |                     |
Total                     | n-1  | SST       |                              |                     |
Comment: If Cal F ≥ Tab F at the α% level of significance with (k−1) and (n−k) df, we reject the
null hypothesis; otherwise we do not reject it.
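To make the table concrete, here is a minimal Python sketch (hypothetical data, k = 3 treatments) that computes the ANOVA quantities by hand and cross-checks the F statistic against scipy.stats.f_oneway:

# A sketch of the CRD ANOVA for hypothetical data; the hand computation
# is cross-checked against scipy.stats.f_oneway.
import numpy as np
from scipy import stats

groups = [np.array([12.0, 14.0, 13.0]),
          np.array([15.0, 16.0, 17.0, 16.0]),
          np.array([11.0, 10.0, 12.0])]
k = len(groups)
n = sum(len(g) for g in groups)

grand = np.concatenate(groups).mean()
ss_treat = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
sse = sum(((g - g.mean()) ** 2).sum() for g in groups)

mss_treat = ss_treat / (k - 1)                   # MSS(treat)
msse = sse / (n - k)                             # MSSE
f_cal = mss_treat / msse
f_tab = stats.f.ppf(0.95, k - 1, n - k)          # tabulated F at the 5% level

print(f"Cal F = {f_cal:.3f}, Tab F = {f_tab:.3f}")
print("scipy f_oneway:", stats.f_oneway(*groups))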
Expected values of the different SS:
We have $Y_{ij} = \mu + \alpha_i + \varepsilon_{ij}$, so that
$\bar{Y}_{i\bullet} = \mu + \alpha_i + \bar{\varepsilon}_{i\bullet}$ and $\bar{Y}_{\bullet\bullet} = \mu + \bar{\varepsilon}_{\bullet\bullet}$ (using the restriction $\sum_i r_i\alpha_i = 0$).
Now
Treatment SS $= \sum_{i=1}^{k} r_i(\bar{Y}_{i\bullet} - \bar{Y}_{\bullet\bullet})^2$
$= \sum_{i=1}^{k} r_i(\mu + \alpha_i + \bar{\varepsilon}_{i\bullet} - \mu - \bar{\varepsilon}_{\bullet\bullet})^2$
$= \sum_{i=1}^{k} r_i(\alpha_i + \bar{\varepsilon}_{i\bullet} - \bar{\varepsilon}_{\bullet\bullet})^2$
$= \sum_i r_i\alpha_i^2 + 2\sum_i r_i\alpha_i(\bar{\varepsilon}_{i\bullet} - \bar{\varepsilon}_{\bullet\bullet}) + \sum_i r_i(\bar{\varepsilon}_{i\bullet} - \bar{\varepsilon}_{\bullet\bullet})^2$
Taking expectations, the middle term vanishes because $E(\bar{\varepsilon}_{i\bullet}) = E(\bar{\varepsilon}_{\bullet\bullet}) = 0$, so
$E[\text{Treatment SS}] = \sum_i r_i\alpha_i^2 + E\left[\sum_i r_i(\bar{\varepsilon}_{i\bullet} - \bar{\varepsilon}_{\bullet\bullet})^2\right] = \sum_i r_i\alpha_i^2 + (k-1)\sigma^2$
(the second term is evaluated between the ########### markers below).
$\therefore E[\text{Treatment MSS}] = \dfrac{E[\text{Treatment SS}]}{k-1} = \sigma^2 + \dfrac{\sum_i r_i\alpha_i^2}{k-1}$
###########
$E(\bar{\varepsilon}_{i\bullet} - \bar{\varepsilon}_{\bullet\bullet})^2 = E(\bar{\varepsilon}_{i\bullet}^2) + E(\bar{\varepsilon}_{\bullet\bullet}^2) - 2E(\bar{\varepsilon}_{i\bullet}\,\bar{\varepsilon}_{\bullet\bullet})$
With $\bar{\varepsilon}_{i\bullet} = \frac{1}{r_i}\sum_{j=1}^{r_i}\varepsilon_{ij}$ and $\bar{\varepsilon}_{\bullet\bullet} = \frac{1}{n}\sum_i\sum_j\varepsilon_{ij}$, we have $E(\bar{\varepsilon}_{i\bullet}^2) = \dfrac{\sigma^2}{r_i}$, $E(\bar{\varepsilon}_{\bullet\bullet}^2) = \dfrac{\sigma^2}{n}$ and $E(\bar{\varepsilon}_{i\bullet}\,\bar{\varepsilon}_{\bullet\bullet}) = \dfrac{\sigma^2}{n}$, so
$E(\bar{\varepsilon}_{i\bullet} - \bar{\varepsilon}_{\bullet\bullet})^2 = \dfrac{\sigma^2}{r_i} + \dfrac{\sigma^2}{n} - 2\dfrac{\sigma^2}{n} = \dfrac{\sigma^2}{r_i} - \dfrac{\sigma^2}{n}$
Hence
$\sum_i r_i E(\bar{\varepsilon}_{i\bullet} - \bar{\varepsilon}_{\bullet\bullet})^2 = \sum_i r_i\left(\dfrac{\sigma^2}{r_i} - \dfrac{\sigma^2}{n}\right) = k\sigma^2 - \sigma^2 = (k-1)\sigma^2$
################
$E[\text{Error MSS}] = E\left[\dfrac{SSE}{n-k}\right] = \dfrac{\sigma^2}{n-k}\,E\left[\dfrac{SSE}{\sigma^2}\right] = \dfrac{\sigma^2}{n-k}(n-k) = \sigma^2$, since $\dfrac{SSE}{\sigma^2} \sim \chi^2_{n-k}$ (shown between the ################### markers below).
$\therefore E[\text{Error MSS}] = \sigma^2$
Finally, since $\sum_i r_i\alpha_i^2 \geq 0$,
$E[\text{Treatment MSS}] = \sigma^2 + \dfrac{\sum_i r_i\alpha_i^2}{k-1} \geq \sigma^2 = E[\text{Error MSS}]$,
with equality if and only if all $\alpha_i = 0$.
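A Monte Carlo sketch (with hypothetical values of $r_i$, $\alpha_i$ and $\sigma$, chosen so that $\sum_i r_i\alpha_i = 0$) can be used to check both expectations and the inequality:

# Monte Carlo check (hypothetical parameter values) of the two expectations:
# E[Treatment MSS] = sigma^2 + sum(r_i*alpha_i^2)/(k-1) and E[Error MSS] = sigma^2.
import numpy as np

rng = np.random.default_rng(0)
r = np.array([3, 4, 3])                          # replications r_i
alpha = np.array([-1.0, 1.2, -0.6])              # satisfies sum(r_i*alpha_i) = 0
mu, sigma = 10.0, 2.0
k, n = len(r), r.sum()

mss_treat, msse = [], []
for _ in range(20000):
    groups = [mu + a + rng.normal(0.0, sigma, ri) for a, ri in zip(alpha, r)]
    grand = np.concatenate(groups).mean()
    ss_t = sum(ri * (g.mean() - grand) ** 2 for g, ri in zip(groups, r))
    sse = sum(((g - g.mean()) ** 2).sum() for g in groups)
    mss_treat.append(ss_t / (k - 1))
    msse.append(sse / (n - k))

theory = sigma**2 + (r * alpha**2).sum() / (k - 1)
print(f"E[Treat MSS]: simulated {np.mean(mss_treat):.3f}, theory {theory:.3f}")
print(f"E[Error MSS]: simulated {np.mean(msse):.3f}, theory {sigma**2:.3f}")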
###################
$\dfrac{SSE}{\sigma^2} = \dfrac{\sum_{i=1}^{k}\sum_{j=1}^{r_i}(\varepsilon_{ij} - \bar{\varepsilon}_{i\bullet})^2}{\sigma^2}$  (note that $Y_{ij} - \bar{Y}_{i\bullet} = \varepsilon_{ij} - \bar{\varepsilon}_{i\bullet}$)
$= \dfrac{\sum_{i=1}^{k}(r_i - 1)S_i^2}{\sigma^2}$; where $S_i^2 = \dfrac{\sum_{j=1}^{r_i}(\varepsilon_{ij} - \bar{\varepsilon}_{i\bullet})^2}{r_i - 1}$
We know that $\dfrac{(r_i - 1)S_i^2}{\sigma^2} \sim \chi^2_{r_i - 1}$; since the $k$ groups are independent,
$\sum_{i=1}^{k}\dfrac{(r_i - 1)S_i^2}{\sigma^2} \sim \chi^2_{\sum_i(r_i - 1)} = \chi^2_{n-k}$,
and $E(\chi^2_n) = n$.
####################
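The chi-square claim can likewise be checked by simulation; the following sketch (hypothetical $r_i$ and $\sigma$) compares the simulated mean and variance of $SSE/\sigma^2$ with the theoretical $n-k$ and $2(n-k)$:

# Monte Carlo sketch: SSE/sigma^2 should behave like a chi-square
# variate with n-k degrees of freedom (mean n-k, variance 2(n-k)).
import numpy as np

rng = np.random.default_rng(0)
r, sigma = np.array([3, 4, 3]), 2.0
k, n = len(r), r.sum()

draws = []
for _ in range(20000):
    groups = [rng.normal(0.0, sigma, ri) for ri in r]   # errors only; the means cancel
    sse = sum(((g - g.mean()) ** 2).sum() for g in groups)
    draws.append(sse / sigma**2)

print(f"mean     : simulated {np.mean(draws):.3f}, theory {n - k}")
print(f"variance : simulated {np.var(draws):.3f}, theory {2 * (n - k)}")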
Unbiasedness and variance of the estimated parameters:
We have $Y_{ij} = \mu + \alpha_i + \varepsilon_{ij}$,
$\bar{Y}_{i\bullet} = \mu + \alpha_i + \bar{\varepsilon}_{i\bullet}$,
$\bar{Y}_{\bullet\bullet} = \mu + \bar{\varepsilon}_{\bullet\bullet}$.
$E(\hat{\mu}) = E(\bar{Y}_{\bullet\bullet}) = E(\mu + \bar{\varepsilon}_{\bullet\bullet}) = E(\mu) + E(\bar{\varepsilon}_{\bullet\bullet}) = \mu$, since $\varepsilon_{ij} \sim NID(0, \sigma^2)$.
$E(\hat{\alpha}_i) = E(\bar{Y}_{i\bullet} - \bar{Y}_{\bullet\bullet}) = E(\alpha_i + \bar{\varepsilon}_{i\bullet} - \bar{\varepsilon}_{\bullet\bullet}) = \alpha_i$.
Thus $\hat{\mu}$ and $\hat{\alpha}_i$ are unbiased estimators of $\mu$ and $\alpha_i$ respectively.
$V(\hat{\mu}) = E(\hat{\mu} - \mu)^2 = E(\bar{Y}_{\bullet\bullet} - \mu)^2 = E(\bar{\varepsilon}_{\bullet\bullet}^2) = \dfrac{\sigma^2}{\sum_{i=1}^{k} r_i} = \dfrac{\sigma^2}{n}$, since $\varepsilon_{ij} \sim NID(0, \sigma^2)$.
$V(\hat{\alpha}_i) = E(\hat{\alpha}_i - \alpha_i)^2 = E(\bar{\varepsilon}_{i\bullet} - \bar{\varepsilon}_{\bullet\bullet})^2 = E(\bar{\varepsilon}_{i\bullet}^2) + E(\bar{\varepsilon}_{\bullet\bullet}^2) - 2E(\bar{\varepsilon}_{i\bullet}\,\bar{\varepsilon}_{\bullet\bullet})$
$= \dfrac{\sigma^2}{r_i} + \dfrac{\sigma^2}{n} - \dfrac{2\sigma^2}{n} = \dfrac{\sigma^2}{r_i} - \dfrac{\sigma^2}{n}$.
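As a final check, the following Monte Carlo sketch (same hypothetical parameters as before) verifies the two variance formulas:

# Monte Carlo check (hypothetical parameters) of V(mu_hat) = sigma^2/n
# and V(alpha_hat_i) = sigma^2/r_i - sigma^2/n, here for i = 1.
import numpy as np

rng = np.random.default_rng(0)
r = np.array([3, 4, 3])
alpha = np.array([-1.0, 1.2, -0.6])              # sum(r_i*alpha_i) = 0
mu, sigma = 10.0, 2.0
n = r.sum()

mu_hats, a1_hats = [], []
for _ in range(20000):
    groups = [mu + a + rng.normal(0.0, sigma, ri) for a, ri in zip(alpha, r)]
    grand = np.concatenate(groups).mean()
    mu_hats.append(grand)                        # mu_hat = Y-bar..
    a1_hats.append(groups[0].mean() - grand)     # alpha_hat_1 = Y-bar_1. - Y-bar..

print(f"V(mu_hat)     : simulated {np.var(mu_hats):.4f}, "
      f"theory {sigma**2 / n:.4f}")
print(f"V(alpha_hat_1): simulated {np.var(a1_hats):.4f}, "
      f"theory {sigma**2 / r[0] - sigma**2 / n:.4f}")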