Data Collection Methods
Once data is obtained, it is analyzed to become the basis for informed
decision making.
Concept versus Construct
• Concept
1. A term (nominal definition) that represents an idea that you
wish to study;
2. Represents collections of seemingly related observations
and/or experiences
• Concepts as Constructs
• We refer to concepts as constructs to recognize their
social construction.
More on constructs
• Three classes of things that social scientists measure:
• Directly observable: the number of people in a room
• Indirectly observable: income
• Constructs: creations based on observations; they cannot
themselves be directly or indirectly observed
For each dimension, you must decide on indicators: signs of the presence or
absence of that dimension. (Dimensions are usually concepts themselves.)
• A variable is any characteristic, number, or quantity that can be measured or
counted. Age, income and expenses, country of birth, capital expenditure, class
grades, eye colour, and vehicle type are examples of variables. It is called a variable
because its value may vary between data units.
• Numeric variables have values that describe a measurable quantity as a
number, like 'how many' or 'how much'. Therefore numeric variables are
quantitative variables.
• Categorical variables have values that describe a 'quality' or 'characteristic' of
a data unit, like 'what type' or 'which category'. Therefore, categorical variables
are qualitative variables and tend to be represented by a non-numeric value.
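The numeric/categorical distinction can be sketched in code. This is a minimal illustration in plain Python with a small hypothetical record; the `variable_kind` helper and the example values are mine, not part of any standard library:

```python
# A sketch of classifying variables as numeric (quantitative) or
# categorical (qualitative), using hypothetical survey data.

records = {
    "age": 34,                # numeric: answers "how much / how many"
    "income": 52000,          # numeric
    "eye_colour": "brown",    # categorical: answers "what type"
    "vehicle_type": "sedan",  # categorical
}

def variable_kind(value):
    """Classify a value as 'numeric' or 'categorical' by its Python type."""
    return "numeric" if isinstance(value, (int, float)) else "categorical"

kinds = {name: variable_kind(v) for name, v in records.items()}
print(kinds)
```

In real data the distinction is conceptual, not just a matter of storage type: a postcode is stored as a number but is categorical, since arithmetic on it is meaningless.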
Operationalizing Choices
• The process of creating definitions for a concept
so that it can be observed and measured
• The development of specific research procedures
that will result in empirical observations
• Examples
• SES is defined as a combination of income and education
and I will measure each by…
• The development of questions (or characteristics of data in
qualitative work) that will indicate a concept
Independent and Dependent Variables
• Independent variable: the ‘factor’
• Dependent variable: the ‘measure’
• Degree of Precision
• The selection depends on your research interest, but if
you’re not sure, it is better to include more detail
rather than less
• Level of Measurement
Data Collection Methods
1. Test
2. Questionnaire
3. Interviews
4. Focus group
5. Observation
6. Secondary or existing data
Measurement and Scaling: How we measure Concepts
Nominal
The statistics which can be used with nominal scales are in the non-parametric
group. The most likely ones are the mode and the chi-square test.
Ordinal
An ordinal scale is next up the list in terms of power of measurement.
Interval
• When you are asked to rate your satisfaction with a piece of software on a 7-
point scale, from Dissatisfied to Satisfied, you are using an interval scale.
• Has an arbitrary zero (0 degrees doesn’t mean no heat; the difference between
0 and 20 degrees is the same as between 40 and 60 degrees).
• We contrast this with an ordinal scale, where we can only talk about differences
in order, not differences in the degree of order.
Ratio
• The factor which clearly defines a ratio scale is that it has a true
zero point, so changes in proportion are meaningful.
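The four levels of measurement determine which summary statistics are meaningful. A minimal sketch using Python's standard `statistics` module and invented example data (the specific values are assumptions for illustration):

```python
# Which summary statistic each level of measurement supports:
# nominal -> mode, ordinal -> median, interval/ratio -> mean.
from statistics import mode, median, mean

eye_colours = ["brown", "blue", "brown", "green"]  # nominal
satisfaction = [1, 3, 3, 5, 7]                     # ordinal (7-point rating)
temps_c = [0, 20, 40, 60]                          # interval (arbitrary zero)
incomes = [0, 25000, 50000]                        # ratio (true zero)

print(mode(eye_colours))     # mode is the only one meaningful for nominal data
print(median(satisfaction))  # median needs order, so ordinal or above
print(mean(temps_c))         # mean needs equal intervals

# Ratios are meaningful only on a ratio scale: 50000 is twice 25000,
# but 40 degrees C is NOT "twice as hot" as 20 degrees C.
print(incomes[2] / incomes[1])
```

Each level inherits the statistics of the levels below it, which is why the list is described as increasing in "power of measurement".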
How to Compute Reliability?
• Test-retest: A measure of the consistency of scores over time
• Equivalent forms: The consistency of a group of individuals’ scores on
alternative forms of a test measuring the same thing
• Internal consistency: The consistency with which the items on a test measure
a single construct.
• Split-half: obtained from two equivalent halves of the same test (Spearman-Brown correction)
• Or
• Cronbach’s alpha
• Concurrent evidence: a type of evidence gathered to defend the use of a test for
predicting other outcomes (e.g., SAT results and school GPA), or from administering two tests at the same time.
• Predictive evidence: the extent to which a score on a scale at one point in time predicts scores on
some criterion measure later (e.g., predicted dropout versus actual dropout).
Construct validity: Construct validity refers to the degree to which a test measures what it claims, or
purports, to be measuring (e.g., does a measure of autonomy actually capture choice, or a measure of happiness actually capture finances?).
• Convergent evidence: convergent validity can be established if two similar constructs correspond with one
another (e.g., teacher feedback and test scores).
• Discriminant evidence: tests whether concepts or measurements that are supposed to be unrelated are, in fact,
unrelated.
Factor analysis: It tells us the correlations among test items and whether the test is uni- or multidimensional.