Validity and Reliability
Presented by Palak Brahmbhatt
OBJECTIVES
1. The concept of validity
2. The types of validity
3. The concept of reliability
4. Factors affecting the reliability of a research instrument
5. Methods of determining the reliability of an instrument
6. Reliability vs. validity
VALIDITY
INTRODUCTION
In data collection, the questions asked of our respondents are the basis of our findings and conclusions: the questions are the input and the conclusions are the output. Between the two lie:
1. The selection of a sample
2. The collection of information
3. The processing of data
4. The application of statistical procedures
5. The writing of the report
All of these, starting with the questions, affect the accuracy and quality of the findings.
Validity of -
1. The study design
2. The sampling strategy
3. The conclusions
4. The statistical procedures
5. The measurement procedures
2 PERSPECTIVES ON VALIDITY
1. Is the research investigation providing answers to the research questions for which it was undertaken?
2. If so, is it providing these answers using appropriate methods and procedures?
THE CONCEPT OF VALIDITY
To examine the concept of validity, consider a study designed to ascertain the health needs of a community by means of an interview schedule. The aim is to find out about health needs, but the questions actually find out what attitudes respondents have towards the health services. The instrument is therefore not measuring what it was designed to measure.
Definitions:
1. Smith: Validity is the degree to which the researcher has measured what he has set out to measure.
2. Kerlinger: The commonest definition of validity is epitomised by the question: Are we measuring what we think we are measuring?
3. Babbie: Validity refers to the extent to which an empirical measure adequately reflects the real meaning of the concept under consideration.
These definitions raise two questions:
1. Who decides whether an instrument is measuring what it is supposed to measure?
2. How can it be established that an instrument is measuring what it is supposed to measure?

There are two approaches to establishing the validity of a research instrument:
1. Logic: justification of each question in relation to the objectives of the study.
2. Statistical evidence: hard evidence obtained by calculating the coefficient of correlation between the questions and the outcome variables.
Establishing a logical link between the questions and the objectives is sometimes simple and sometimes difficult. For questions about age, height, weight or income the link is obvious. But establishing whether a set of questions is measuring, say, the effectiveness of a programme, the attitude of a group of people towards an issue, or the extent of satisfaction of a group of consumers with the service provided by an organisation, is more difficult. The concept of validity applies only to a particular instrument, and it is an ideal state that we as researchers aim to achieve.
TYPES OF VALIDITY
Three types:
1. Face and content validity
2. Concurrent and predictive validity
3. Construct validity
FACE AND CONTENT
Each question or item on the scale must have a logical link with an objective; establishment of this link is called face validity. Its advantage is that it is easy to apply. It is equally important that the items and questions cover the full range of the issue or attitude being measured; assessment of the items of an instrument in this respect is called content validity. Content validity is also judged on the basis of the extent to which statements or questions represent the issue they are supposed to measure, as judged by us as researchers.
Although it is easy to present logical arguments to establish validity in this way, there are certain problems:
1. The judgement is based upon subjective logic, so no definite conclusions can be drawn.
2. The extent to which questions reflect the objectives of a study may be judged differently by different people.
CONCURRENT AND PREDICTIVE
In situations where a scale is developed as an indicator of some observable criterion, the scale's validity can be investigated by seeing how good an indicator it is. For example, consider an instrument developed to determine the suitability of applicants for a profession. Its assessment can be compared:
1. with another assessment, for example one made by a psychologist;
2. with a later observation of how well these applicants have actually done in the job.
If the assessments are similar, the instrument has higher validity.
These types of comparison establish two types of validity:
1. Predictive validity: the degree to which an instrument can forecast an outcome.
2. Concurrent validity: how well an instrument compares with a second assessment done concurrently.

It is usually possible to express predictive validity in terms of the correlation coefficient between the predicted status and the criterion. Such a coefficient is called a validity coefficient.
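As a rough illustration of this idea, the sketch below computes a validity coefficient as the Pearson correlation between instrument scores and a later criterion. The data and the function name validity_coefficient are hypothetical, not taken from the original slides.

```python
from statistics import mean, stdev

def validity_coefficient(predicted, criterion):
    """Pearson product-moment correlation between predicted scores and the criterion."""
    mp, mc = mean(predicted), mean(criterion)
    cov = sum((p - mp) * (c - mc) for p, c in zip(predicted, criterion)) / (len(predicted) - 1)
    return cov / (stdev(predicted) * stdev(criterion))

# Hypothetical data: selection-test scores vs. later job-performance ratings.
test_scores = [62, 75, 80, 55, 90, 68, 72]
job_ratings = [3.1, 3.8, 4.0, 2.9, 4.5, 3.4, 3.6]
print(round(validity_coefficient(test_scores, job_ratings), 2))
```

The closer the coefficient is to 1, the better the instrument forecasts the criterion.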
CONSTRUCT
Construct validity is more sophisticated and is based upon statistical procedures. It is determined by ascertaining the contribution of each construct to the total variance observed in a phenomenon. For example, suppose we want to find the degree of job satisfaction among the employees of an organisation and consider three important constructs:
1. Status
2. The nature of the job
3. Remuneration
Questions are then constructed around each of these constructs.
After the pretest and data analysis, we use statistical procedures to establish the contribution of each construct. The contribution of these factors to the total variance is an indication of the degree of validity of the instrument: the greater the variance they explain, the higher the validity. The disadvantage is that you need to know the required statistical procedures.
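One way to read "contribution to the total variance" is the change in variance explained (R-squared) when a construct is dropped from a linear model of the satisfaction scores. The sketch below is only an illustration under that assumption; the data and variable names (status, nature of job, remuneration) are simulated, not taken from the slides.

```python
import numpy as np

def r_squared(X, y):
    """Share of the variance in y explained by an ordinary least-squares fit on X (plus an intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 200
constructs = {
    "status": rng.normal(size=n),
    "nature of job": rng.normal(size=n),
    "remuneration": rng.normal(size=n),
}
# Simulated satisfaction scores driven by the three constructs plus noise.
satisfaction = (0.5 * constructs["status"] + 0.3 * constructs["nature of job"]
                + 0.4 * constructs["remuneration"] + rng.normal(scale=0.8, size=n))

full = r_squared(np.column_stack(list(constructs.values())), satisfaction)
for name in constructs:
    reduced = np.column_stack([v for k, v in constructs.items() if k != name])
    print(f"{name}: contribution to explained variance = {full - r_squared(reduced, satisfaction):.2f}")
print(f"total variance explained = {full:.2f}")
```

In practice a researcher might use factor analysis for this step; the regression view above is simply the shortest way to show the variance-contribution idea.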
Reliability
The concept: just as in everyday life we call a person reliable if he or she is consistent, stable, predictable and accurate, a research tool is reliable if it is consistent, stable, predictable and accurate. The greater the consistency and stability, the greater the reliability.

The concept of reliability can be looked at from two sides:
1. How reliable is an instrument? (how consistent are its measurements)
2. How unreliable is it? (what is the degree of inconsistency, i.e. error)
Reliability is the degree of accuracy in the measurements made by a research instrument: the lower the error, the higher the reliability. For example, a questionnaire designed to ascertain the prevalence of domestic violence is reliable to the extent that repeated administrations under the same conditions yield the same results.
Factors Affecting Reliability
1. The wording of the questions
2. The physical setting
3. The respondent's mood
4. The nature of the interaction
5. The regression effect of an instrument
Methods of Determining Reliability
There are two groups of methods:
1. External consistency procedures
2. Internal consistency procedures
1. External consistency procedures compare findings from two independent processes of data collection as a means of verifying the reliability of the measure. There are two methods.

Test/retest: a commonly used approach in which an instrument is administered once, and then again, under the same or similar conditions. The ratio of the test to the retest scores (or the difference between them) indicates the reliability of the instrument: the closer the ratio is to 1, or the difference to 0, the higher the reliability (see the sketch below).

(Test score) / (Retest score) = 1, or (Test score) - (Retest score) = 0
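A minimal sketch of these checks on hypothetical scores (none of the numbers come from the slides): it compares mean test and retest scores via the ratio and difference above, and also reports the correlation between the two administrations, which is the usual test/retest reliability coefficient.

```python
from statistics import mean, stdev

def pearson(x, y):
    """Product-moment correlation between two score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

test = [14, 18, 22, 9, 16, 20]     # first administration
retest = [15, 17, 21, 10, 16, 19]  # second administration, same respondents

ratio = mean(test) / mean(retest)        # ideally close to 1
difference = mean(test) - mean(retest)   # ideally close to 0
reliability = pearson(test, retest)      # test/retest reliability coefficient
print(round(ratio, 2), round(difference, 2), round(reliability, 2))
```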
Advantage: the instrument is compared with itself. Disadvantage: a respondent may recall the responses that he or she gave in the first round. This can be overcome by increasing the time span between the two administrations, but a longer gap can affect reliability for other reasons:
1. Maturation of the respondents
2. The impossibility of achieving similar conditions the second time
2. Parallel forms of the same test: two instruments intended to measure the same phenomenon are administered to similar populations and the two sets of results are compared. If they are similar, it is assumed that the instruments are reliable. Advantages: there is no problem of recall and no time lapse is required. Disadvantage: two instruments need to be constructed.
2. Internal consistency procedures are based on the idea that items measuring the same phenomenon should produce similar results. The following method is used.

The split-half technique: one half of the items is correlated with the other half. It is appropriate for instruments designed to measure attitudes towards an issue or phenomenon. The questions or statements are divided into two halves in such a way that any two questions or statements intended to measure the same aspect fall into different halves.
The scores obtained from the two halves are then correlated; reliability is calculated using the product moment correlation between the scores obtained from the two halves. Because this correlation is based on only half of the instrument, it needs to be corrected to give the reliability of the instrument as a whole. The corrected value, known as the stepped-up reliability, is calculated with the Spearman-Brown formula.
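The sketch below illustrates the procedure on hypothetical item scores: the items are split into two halves (a simple odd/even split is assumed here), the half scores are correlated, and the Spearman-Brown formula r_full = 2r / (1 + r) steps the half-test correlation up to the full-test reliability.

```python
from statistics import mean, stdev

def pearson(x, y):
    """Product-moment correlation between two score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical attitude-scale data: rows = respondents, columns = items.
items = [
    [4, 3, 4, 5, 3, 4],
    [2, 2, 3, 2, 1, 2],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 2],
    [4, 5, 4, 4, 5, 4],
]
half_a = [sum(row[0::2]) for row in items]  # odd-numbered items
half_b = [sum(row[1::2]) for row in items]  # even-numbered items

r_half = pearson(half_a, half_b)
stepped_up = 2 * r_half / (1 + r_half)      # Spearman-Brown correction
print(round(r_half, 2), round(stepped_up, 2))
```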
Reliability Vs. Validity
Validity is concerned with whether an instrument measures what it is designed to measure; reliability is concerned with the consistency, stability and accuracy of the measurements it produces.
SUMMARY
1. The concept of validity refers to quality and can be applied to any aspect of the research process.
2. Two approaches to establishing validity:
   1. A logical link between the objectives of a study and the questions used
   2. Statistical analysis to demonstrate this link
3. Three types of validity:
   1. Face and content
   2. Concurrent and predictive
   3. Construct
4. Reliability: the ability of an instrument to produce consistent measurements each time.
5. Reliability can be looked at from two sides: reliability and unreliability.
6. Several factors affect the reliability of an instrument.
7. External consistency procedures: test/retest and parallel forms.
8. Internal consistency procedure: the split-half technique.