Document 3

The document discusses the importance of a good research instrument in conducting studies, emphasizing the need for validity, reliability, usability, and economy. It outlines various types of validity, including construct, content, face, and criterion validity, as well as different reliability measures such as internal consistency, test-retest, inter-rater, and parallel forms reliability. Ultimately, the quality of the research instrument significantly impacts the accuracy and consistency of research results.


The research instrument is very important in conducting a research study, for its results serve as the basis for answering the research problem and for accepting or rejecting the hypothesis in the later part of the research. It is therefore essential to consider the different qualities that establish a valid and reliable instrument.

What makes a Good Research Instrument?

Valid and Reliable: The instrument should measure what it intends to measure, and it should do so with accuracy and consistency.

Usable: The degree to which the test can be used without much expenditure of time, money, and effort. Usability also means practicability. The factors that determine usability are administrability, scorability, and economy.

Scorable: A good instrument is easy to score: the scoring directions are clear, the scoring key is simple, and an answer key is available.

Economical: One way to economize on cost is to use separate answer sheets and reusable test booklets. However, test validity and reliability should not be sacrificed for economy.

Types of Validity of Instruments

Construct validity: This type of validity determines whether an instrument or measurement tool really represents the thing the researcher wants to measure; it ensures that the measurement matches the intended construct. A construct is the characteristic or concept that the researcher intends to measure.
Content validity: Content validity evaluates whether an instrument covers all aspects of the construct, which is essential for producing valid results. The researcher should always ensure that the instrument covers all relevant parts of the subject it aims to measure.

Face validity: This considers how suitable the content of an instrument appears to be on the surface. It is a subjective judgment and is considered the weakest form of validity.

Criterion validity: This type of validity evaluates how closely the results of your test correspond to the results of other, established tests. The criterion refers to an external measurement of the same thing.

Reliability of Instrument

Internal consistency reliability: This type of reliability gauges how consistently the items within an instrument measure the same construct. It is very important for the researcher to ensure that the instrument includes a sufficient number of items to capture the concept adequately.
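To make this concrete, internal consistency is commonly summarized with Cronbach's alpha, which compares the variance of the individual items with the variance of the total score. The sketch below is illustrative only; the item scores are hypothetical data, not drawn from this document.

```python
# Illustrative sketch: Cronbach's alpha as one internal-consistency
# coefficient. The scores below are hypothetical example data.
from statistics import variance

def cronbach_alpha(item_scores):
    """item_scores: one inner list per item, aligned by respondent."""
    k = len(item_scores)                                   # number of items
    item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]     # total score per respondent
    total_var = variance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three items answered by five respondents (hypothetical data)
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
]
print(round(cronbach_alpha(items), 2))  # → 0.87
```

Values closer to 1 indicate that the items vary together, i.e., that they appear to measure the same construct.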

Test-retest reliability measures the correlation between scores from one administration of an instrument and scores from a later administration to the same respondents. It measures test consistency: the reliability of a test measured over time.
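In practice, test-retest reliability is typically computed as a Pearson correlation between the two sets of scores. The following sketch uses hypothetical scores for five respondents tested twice.

```python
# Sketch of test-retest reliability: Pearson correlation between two
# administrations of the same instrument (hypothetical data).
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

first  = [10, 14, 18, 12, 16]   # scores at time 1
second = [11, 13, 19, 12, 15]   # scores at time 2, same respondents
print(round(pearson(first, second), 2))  # → 0.95
```

A correlation near 1 suggests the instrument produces stable scores over time.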

Inter-rater reliability checks the degree of agreement among raters. This refers to the extent
to which two or more raters give consistent estimates of the same phenomenon.
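One common way to quantify agreement between two raters is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The ratings below are hypothetical.

```python
# Sketch of inter-rater reliability via Cohen's kappa for two raters
# (hypothetical ratings). kappa = (p_o - p_e) / (1 - p_e), where p_o is
# observed agreement and p_e is chance agreement.
from collections import Counter

def cohen_kappa(r1, r2):
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    cats = set(r1) | set(r2)
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in cats)     # chance agreement
    return (p_o - p_e) / (1 - p_e)

rater_a = ["yes", "yes", "no", "yes", "no", "no"]
rater_b = ["yes", "no", "no", "yes", "no", "yes"]
print(round(cohen_kappa(rater_a, rater_b), 2))  # → 0.33
```

Kappa of 1 means perfect agreement; 0 means agreement no better than chance.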

Parallel forms reliability: Used to assess the consistency of the results of two tests constructed in the same way from the same content domain.
