MINISTRY OF EDUCATION
What is the Grade Six Achievement Test (GSAT)?
The Grade Six Achievement Test (GSAT) forms part of the National Assessment
Programme (NAP). The National Assessment Programme comprises the Grade One
Individual Learning Profile, the Grade Three Diagnostic Test, the Grade Four Literacy
and Numeracy Tests and the Grade Six Achievement Test. The NAP aims to
determine how well students are learning at key stages throughout the primary level and
their readiness to access secondary level education.
The GSAT is a curriculum-based examination. This means that the items on the
examination match the objectives in the curriculum. The GSAT is based on material
covered in the Grade 4 to Grade 6 curriculum. The test is used to place students at the
secondary level by ranking their overall performance across all subject areas tested.
Table 1 below gives a breakdown of the structure of the GSAT.
Table 1: Structure of the GSAT

Subject               Type of Item                          No. of Items   Value and Scale          Duration
Language Arts         Multiple choice (computer marked)     80 questions   1 raw score each; 0-80   75 mins
Mathematics           Multiple choice (computer marked)     80 questions   1 raw score each; 0-80   75 mins
Social Studies        Multiple choice (computer marked)     80 questions   1 raw score each; 0-80   75 mins
Science               Multiple choice (computer marked)     60 questions   1 raw score each; 0-60   60 mins
Communication Tasks   One short answer and one extended     2 questions    Scale 0-12               60 mins
                      writing (specialized marking team)
Assessment Process
1.0 Registration
Registration for the examination begins in the month of October and closes in the month
of November. There is a prescribed registration form to be completed by each student and
signed by the parent/guardian. Students and parents select five schools in which they
wish to be placed, in order of preference. As part of the registration process, parents are
required to present the original or a certified copy of the child's birth certificate, or the
child's passport, to the school. The registration form and the birth certificate are submitted
to the Student Assessment Unit through the regional offices. Education Officers from the
Student Assessment Unit visit the regional offices to verify the accuracy of the
information presented on the registration form. If forms are found to be improperly
completed, corrections are requested before final submission. Once the
verification process is completed, birth certificates are returned to the principals for
return to parents. Students’ data is then processed and individual timetables are
generated.
To sit the GSAT, students should be no younger than 11 years and no older than 13
years at the time of sitting.
2.0 The Test
The GSAT is curriculum-based and each test component is designed to cover the critical
areas of the curriculum, both in terms of the scope and sequence of the content areas.
Examinations are set to balance and account for grade level, time allotted, and the
volume and difficulty of material. As a loose general principle, a curriculum with a wider
content scope requires more items on a test to adequately cover that scope than a
curriculum with a narrower scope. Additionally, more difficult content means that fewer
questions are placed on a test. As a result of these and other psychometric
considerations, the number of items, and therefore the relative value of each item,
differs between tests.
For example, Mathematics, Social Studies and Language Arts are marked on a scale of
zero to eighty (0 - 80), with each item valued at one raw score. Science is marked on a
scale of zero to sixty (0 - 60), with each item on the Science test valued at one raw score.
The Communication Tasks component is marked on a subjective scale of zero to twelve
(0 - 12), with two items, each valued at a maximum of six (6) raw scores. The Science
curriculum is narrow in scope and more challenging in terms of content, and is therefore
marked on a different scale from the other objective tests.
The Communication Tasks component tests the written aspect of the Language
Curriculum: the integration of the mechanics, content and style of writing. It is a
subjective test and is marked on a different scale.
3.0 Administration of the Test
The test is done over a two-day period under strict examination conditions. Each centre
has an examiner and invigilators external to the school. Scripts are packaged and returned
to the Ministry for marking and processing at the end of the test.
4.0 Marking Process
Data capture and scoring of all multiple choice examinations is done by computers. The
Communication Tasks are marked by specially selected and trained teachers. As a
safeguard, no teacher is allowed to mark scripts from any schools in the parish in which
they live or work. Teachers selected to mark the Communication Tasks are required to
sign a contract to ensure that confidentiality is maintained. Marking is done in a sterile
environment at a central location. Markers work in groups, referred to as tables, each
headed by a table leader who is required to ensure that there is consistency in the grading
of scripts. Scripts are second marked as part of the quality assurance process. Where there
are concerns during the marking process, these are resolved by the table leader and the
chief examiner.
Communications Tasks scores are recorded on a machine-readable form from which the
data is captured, using a computer system, and then electronically merged with the data
from the multiple choice examinations.
5.0 Scoring
The raw scores for the four multiple choice examinations are tallied by computer and
merged with the Communication Tasks scores. These raw scores are then used to
compute standard scores for each subject using a computer algorithm. Standard scores
for each subject are summed to provide a composite score which is used to rank and place
students.
Sixty percent of the Communication Tasks score is used in computing the composite
score. This is referred to as weighting. The weighted score adjusts for the limited scale
of zero to twelve (0 – 12) resulting from the narrow area of Language Curriculum
covered by the Communication Tasks.
5.1 Standardizing Test Scores
For the GSAT, as previously stated, the subject scores are on different scales. This
presents a challenge in combining scores to determine overall achievement, though this
might not be immediately obvious. It is not simply a matter of adding or averaging all
the percent scores together, or even adding all the raw scores of each subject. The raw
scores on each test have different values relative to each other. Adding the score in Math
with the score in Science and the score in Language Arts would amount to adding inches,
metres and miles. This method is misleading as demonstrated in the example below.
Consider competitors in an endurance test consisting of three components: a 100-metre
run, a 10-nautical-mile swim and a 400-foot climb. The winner is decided on the
basis of total distance covered.
From the average or total percentage for the endurance test (Table 2), it would appear that
the Leader of the Opposition did best. However, that would be a statistically unsound
basis on which to award a prize for performance, since the average and total percentages
are influenced by the relative value of each point on the different scales. In this instance
one nautical mile equals 6,076.11 feet, while one metre equals 3.28 feet.
Table 2: Endurance Test Raw Scores

                      Run 100 metres    Swim 10 nautical miles   Climb 400 feet    Average    Total
                      Metres    %       Miles      %             Feet      %       Percent    Percent
Speaker               100       100%    7          70%           250       63%     77.66%     233%
Leader of the Opp.    100       100%    8          80%           250       63%     81.00%     243%
Prime Minister        50        50%     9          90%           350       88%     76.00%     228%
A more accurate way would be to convert all distances to a common unit or scale, in
other words, standardise. For the endurance test let us use feet as the common or standard
scale. Table 3 below provides the results of this standardisation. Standardisation reveals
that the Prime Minister covered a greater distance overall and should be ranked in 1st
place.
Table 3: Endurance Test Standardised (all distances in feet, using the conversions above)

                      Run 100 metres    Swim 10 nautical miles    Climb 400 feet    Total distance
                      Feet       %      Feet          %           Feet      %       (feet)
Speaker               328.084    100%   42,532.77     70%         250       63%     43,110.85
Leader of the Opp.    328.084    100%   48,608.88     80%         250       63%     49,186.96
Prime Minister        164.042    50%    54,684.99     90%         350       88%     55,199.03
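As a minimal sketch (in Python, for illustration only; the competitor data and conversion
factors are those quoted above), the standardisation in Table 3 can be reproduced as follows:

```python
# Convert every component of the endurance test to a common unit (feet)
# and rank competitors by total distance covered, as in Table 3.
FEET_PER_METRE = 3.28084
FEET_PER_NAUTICAL_MILE = 6076.11

# (run in metres, swim in nautical miles, climb in feet)
results = {
    "Speaker": (100, 7, 250),
    "Leader of the Opp.": (100, 8, 250),
    "Prime Minister": (50, 9, 350),
}

totals = {
    name: run_m * FEET_PER_METRE + swim_nm * FEET_PER_NAUTICAL_MILE + climb_ft
    for name, (run_m, swim_nm, climb_ft) in results.items()
}

# Rank by total distance covered, greatest first.
for rank, (name, total) in enumerate(
        sorted(totals.items(), key=lambda kv: -kv[1]), start=1):
    print(f"{rank}. {name}: {total:,.2f} feet")
# The Prime Minister covers the greatest total distance and ranks first.
```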
Calculating Standard Scores
To parallel the endurance test with an academic example, let us examine four students
who sat Test A, Test B and Test C, having 25, 50 and 100 items respectively. The tables
below present the scores attained by these students. The first gives the raw scores and
their equivalent percent scores, the type of test result most often reported.
Table 4: Student raw and percent test scores

                      Test A             Test B             Test C
                      Raw       %        Raw       %        Raw       %
Student 1             12        48       40        80       70        70
Student 2             7         28       30        60       40        40
Student 3             20        80       45        90       91        91
Student 4             20        80       46        92       89        89
Mean                  14.75     59       40.25     80.5     72.5      72.5
Standard Deviation    5.53963            6.339361           20.4756
In addition to the raw scores, two other critical pieces of information are required: the
subject mean and the standard deviation. The subject (or population) mean is the value
computed by dividing the sum of the raw scores by the number of test takers. The
standard deviation indicates the extent of dispersion of a group around the mean. Note
that the population means (e.g. 14.75 for Test A) and the standard deviations (e.g.
5.53963 for Test A) are included in Table 4.
Statisticians also determine what the new scale should be before calculating the standard
scores, choosing a standard scale mean and standard deviation for ease of calculation.
For this example the standard scale mean will be 100 and the standard deviation 15.
Let us now compute the standard scores for these students.
Standard scores are computed in two steps:
• Converting raw scores to z-scores:
  z = (Raw score - Population mean) / Standard deviation
• Converting z-scores to standard scores:
  Standard score = (z × Standard scale standard deviation) + Standard scale mean
Applying the above, we will now compute the standard score of Student 1 on Test A.
First, calculate the z-score by subtracting the population mean from the raw score and
then dividing the result by the standard deviation, that is:
(12-14.75) / 5.53963 = -0.496423
Next, the Z-score is converted to the standard score by multiplying the Z-score by the
agreed standard scale standard deviation of 15, then adding the mean of 100, that is:
(-0.496423 x 15) + 100 = 92.55365
A similar computation is applied to each student’s score for each paper, the results of
which are in the table below.
Table 5: Student z-scores and standard test scores

             Test A                   Test B                   Test C                   Composite
             z            Standard    z            Standard    z            Standard    Standard
Student 1    -0.496423    92.55365    -0.03944     99.4085     -0.12210     98.1686     290.1307
Student 2    -1.39901     79.01484    -1.61688     75.7468     -1.58726     76.1912     230.9528
Student 3    0.9477167    114.21575   0.749287     111.2393    0.903515     113.5527    339.0078
Student 4    0.9477167    114.21575   0.907031     113.6055    0.805837     112.0876    339.9088
Note that in the case of Students 3 and 4, who would have had a tied average percent
score of 87, their composite standard scores are 339.0078 and 339.9088 respectively,
allowing the tie to be broken.
Standard scores are calculated for all the papers and summed to obtain the composite
standard score. The composite standard score accurately represents the overall
performance of Students 1 to 4 on Tests A, B and C.
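For readers who prefer code, here is a minimal sketch in Python of the two-step
computation applied to the raw scores in Table 4; the helper names are illustrative, not
part of any official system:

```python
# Two-step standard-score computation from the text, applied to the
# worked example in Tables 4 and 5.
def mean(xs):
    return sum(xs) / len(xs)

def population_sd(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def standard_score(raw, pop_mean, pop_sd, scale_mean=100, scale_sd=15):
    z = (raw - pop_mean) / pop_sd      # step 1: raw score -> z-score
    return scale_mean + z * scale_sd   # step 2: z-score -> standard score

# Raw scores from Table 4 (Tests A, B and C per student).
raw_scores = {
    "Student 1": (12, 40, 70),
    "Student 2": (7, 30, 40),
    "Student 3": (20, 45, 91),
    "Student 4": (20, 46, 89),
}

# Per-test population mean and standard deviation.
tests = list(zip(*raw_scores.values()))
stats = [(mean(t), population_sd(t)) for t in tests]

for student, raws in raw_scores.items():
    standards = [standard_score(r, m, sd) for r, (m, sd) in zip(raws, stats)]
    print(student, [round(s, 4) for s in standards], round(sum(standards), 4))
# Student 1 -> [92.5537, 99.4085, 98.1686], composite 290.1307 (cf. Table 5)
```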
Why Standardise GSAT Scores?
Returning to the GSAT, standard scores are needed as:
1. the GSAT placement and scholarship processes require that students be ranked
on overall performance;
2. the papers differ, in that all subjects do not have the same number of items, i.e. are
on different scales;
3. it is statistically unsound to add scores which are on different scales.
For the GSAT standard scale, the mean is 100 and the standard deviation is 15. So, for
example, the 2008 raw score mean of 42.92 on the Social Studies examination
becomes 100 on the standard scale, and each raw score distance of 17.96 from the mean
(one standard deviation) is represented by 15 on the standard scale. Let us calculate the
standard scores for raw scores of 60 and 35 on this examination.
Raw Score 60:
  Formula: 100 + ((Raw score - mean) / Standard deviation) × 15
  Calculation: 100 + ((60 - 42.918) / 17.959) × 15
             = 100 + (17.082 / 17.959) × 15
             = 100 + (0.9512 × 15)
             = 100 + 14.2675
             = 114.2675

Raw Score 35:
  Formula: 100 + ((Raw score - mean) / Standard deviation) × 15
  Calculation: 100 + ((35 - 42.918) / 17.959) × 15
             = 100 + (-7.918 / 17.959) × 15
             = 100 + (-0.4409 × 15)
             = 100 - 6.6134
             = 93.3866
So a student with a raw score of 60 on the Social Studies paper gets a standard score of
114.2675 and another student with a raw score of 35 gets a standard score of 93.3866 on
that paper.
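These figures can be verified with the hypothetical standard_score helper from the
earlier sketch:

```python
# Checking the 2008 Social Studies worked example (mean 42.918, sd 17.959).
print(standard_score(60, 42.918, 17.959))  # -> 114.2675 (approximately)
print(standard_score(35, 42.918, 17.959))  # -> 93.3866 (approximately)
```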
A standard score indicates how far a particular score is from a test's average. The unit that
tells the distance from the average is the standard deviation (sd) for that test. For the
GSAT the mean is 100 and the standard deviation is 15. Standard scores between -1
standard deviation (85) and +1 standard deviation (115) fall in the normal range for the
ability being tested. Above +1 standard deviation (115 and above), a student is in the top
15% of performances. Below -1 standard deviation (below 85), she/he is in the lowest
15% of performances.
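A small hypothetical helper (again in Python, for illustration only) makes this
interpretation concrete:

```python
# Classify a GSAT standard score into the bands described above
# (mean 100, standard deviation 15).
def performance_band(standard_score, mean=100, sd=15):
    if standard_score > mean + sd:
        return "top ~15% of performances"
    if standard_score < mean - sd:
        return "lowest ~15% of performances"
    return "normal range"

print(performance_band(114.2675))  # -> normal range
print(performance_band(120))       # -> top ~15% of performances
```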
6.0 The GSAT Placement Mechanism
The Ministry recognizes that placement of GSAT students is one of the main areas of
concern and anxiety among parents. It is with this in mind that this paper seeks to shed
some light on the process of placement. As was mentioned in section 1.0 (Registration),
students select five secondary schools and rank them in order of preference.
Inputs in Placement
1. Students’ composite standard score
2. Students’ choices
3. Available places for each secondary school, broken down by gender
4. Students’ gender
5. Ministry’s Proximity List: This list is compiled by officers in the different regions
who have intimate knowledge of the location and proximity of sending schools in
relation to receiving schools.
Placement Process
The GSAT placement is about 95% automatic; that is, approximately 95% of
participating students are placed by computer. The process begins with the computerized
ranking of students: the student attaining the highest composite standard score is
ranked one (1), and each subsequent student, in descending order of performance, is
assigned the previous rank incremented by one (1), until all students have been ranked.
Students are then placed in rank order, starting with the highest-ranked student. The
student's school of first choice is checked for the availability of a space for a student of
that gender. If a place is available, the student is placed in that secondary school. If there
is no place available, the process is repeated with the student's next preferred school. If
all five preferences are exhausted without the student being placed, the process continues
in the same manner with each school on the MoE Proximity List, as sketched below.
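As a rough illustration only (the data structures and names below are hypothetical, not
the Ministry's actual system), the automated portion of the placement could be sketched as:

```python
# A rough sketch of the automated placement logic described above.
# All names and structures are hypothetical illustrations.
def place_students(students, capacity, proximity_list):
    """students: list of dicts with 'id', 'composite', 'gender' and
    'choices' (five preferred school names, in order).
    capacity: {(school, gender): places remaining}.
    proximity_list: {student_id: ordered list of nearby schools}."""
    placements = {}
    # Rank students by composite standard score, highest first (rank 1).
    ranked = sorted(students, key=lambda s: -s["composite"])
    for student in ranked:
        # Try the five preferences, then the MoE Proximity List.
        candidates = student["choices"] + proximity_list.get(student["id"], [])
        for school in candidates:
            key = (school, student["gender"])
            if capacity.get(key, 0) > 0:
                capacity[key] -= 1
                placements[student["id"]] = school
                break
        # Students still unplaced at this point fall to manual placement
        # by regional officers (about 5% of candidates, per the text).
    return placements
```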
If the computer is unable to automatically place the child in a preferred school or
one on the MoE Proximity List, the child is manually placed by MoE regional officers,
who possess an intimate knowledge of the schools under their jurisdiction. Typically,
approximately 5% of students are manually placed. This process is 'blind': officers
effecting the placements are given no indication of the student's identity. All that is
provided to officers is a randomly generated student number and the school at which
the student sat the GSAT. Students are placed in schools which have available spaces
and which can cater to their needs.
One challenge faced by the process is the public perception that students are not ‘placed
in a preferred school’. Upon examination of the system it becomes clear that the
Ministry’s ability to place students in their preferred schools is dependent on the number
of places available in each school, as well as the number of students selecting that school.
On completion of placement each year the process and its results are audited by the
National Examination Committee chaired by Archdeacon Stone and comprising
representatives from the Ministry of Education, Jamaica Teachers’ Association, Jamaica
Council of Churches and the company contracted to process the examinations. The
committee is provided with information on the processes used, challenges encountered
and the results.
Release of GSAT data
Each year, the GSAT results are made available to schools and parents/guardians in the
form of rounded percentage scores for each subject, together with the schools at which
students have been placed. The Ministry's permanent records include all scores (raw,
percent and standard), as well as placement data.
Since 2006, an electronic platform has been adopted for the release of result data, and this
has enabled the Ministry to more easily and cost-effectively make more data available to
stakeholders. Result data is typically released in June of each year.
7.0 Scholarships
7.1 Eligibility Criteria
Scholarships are generally awarded based on criteria set by donors or sponsors. Among
the criteria are:
Performance level
Gender
Location
1. County
2. Parish
3. Inner-City
Economic need
Subject area
Membership in:
1. Financial Institutions
2. Organizations/Groups (e.g. Jamaica Civil Service Association, Blue Cross,
JCF)
Scholarships are also awarded to students of particular schools through endowments or
by philanthropic past students; for example, the Mable Downer Memorial Scholarship is
awarded to the top performer from the Watt Town All Age School.
7.2 Selection Process
No beneficiary of a Government Scholarship or a Bank of Nova Scotia Scholarship
under the GSAT scheme may hold more than one scholarship award at any given
time.
The computerized system used for placement of students would have already ranked all
the students sitting the examination, based on their composite standard scores, from the
first to the final candidate. Students who are to be considered for scholarship awards are therefore
selected using the stipulated criteria and information from the database used for Ranking
and Placement.
8.0 Appeals and Review Protocol
Existing Protocol
1. Informal queries received by telephone or made orally by drop-in visitors:
The staff member receiving the query responds, or directs the query to the
relevant Head of Section. These queries usually relate to registration, examination
and results dates, vacancies for raters, eligibility for the examination, and the
curriculum.
2. Queries from the Media: These are usually referred to the SAU by the
Communications Unit. The Manager of the SAU provides written responses or
grants interviews.
3. Requests for GSAT data: The request must be presented in writing. Persons
making the request complete a form specifying details of the request. The
Manager of the Unit reviews the request and grants approval for the data to be
generated. Persons making requests which are denied are so advised by telephone.
Some persons requesting data are uncertain and sometimes unable to clearly
articulate what they require. These persons are invited to consult with an Officer
of the Unit to clarify the request.
4. Requests made under the Access to Information Act: These requests are
usually forwarded to the SAU by the Director of Information, MOE. The SAU
reviews the request, gathers and reviews the pertinent documents, and schedules
an appointment, through the Director of Information, with the person(s)
requesting access.
Review of the Existing Appeals Protocol
We have decided to establish a Review Committee. This Committee consists of:
A representative from the University of the West Indies
A representative from the company contracted to process the examination results
A representative from the Ministry of Education
Public Defender?
President of NPTA
Aggrieved parents/guardians may submit a written request through the Student
Assessment Unit to the Committee.
NB: All requests/queries must be submitted to the Committee within ten (10)
working days after the release of the scholarship/placement information.
9.0 Review of the Test
For the past 10 years, the GSAT component of the National Assessment Programme has
been used as a placement mechanism. The test was piloted in 1996 in preparation for the
replacement of the pass/fail Common Entrance Examination. The Ministry recognizes the
need for continuous review of its processes in order to ensure continued improvement in
standards and quality. As a result, a technical review of the test will be carried out, with
particular focus on the marking scales for the Communication Tasks and Science
components.