2. CHAPTER OUTLINE
Concepts and Constructs
Variables
Qualitative and Quantitative Research
The Nature of Measurement
Levels of Measurement
Measurement Scales
Reliability and Validity
3. CONCEPT
A concept is a term that expresses an abstract idea, formed by generalizing
from particulars and summarizing related observations. Concepts are based on our
experiences; they can refer to real phenomena and represent a generalized idea
of something meaningful.
We can measure concepts through direct and indirect observations.
Concepts are important for at least two reasons. First, they simplify the research
process by combining particular characteristics, objects, or people into general
categories.
Second, concepts simplify communication among those who have a shared
understanding of them. Researchers use concepts to organize their observations into
meaningful summaries and to transmit this information to others.
4. CONSTRUCT
The word ‘construct’ refers to a focused abstract idea, an underlying theme, or a
subject matter that one wishes to measure.
Constructs exist at a higher level of abstraction than concepts: a
construct is a combination of concepts.
A construct is usually designed for a specific research purpose, so
that its exact meaning relates only to the context in which it is found.
5. Constructs cannot be directly observed or measured. Typical constructs in
marketing research include Brand Loyalty, Purchase Intent, and Customer
Satisfaction. Constructs are the basis of working hypotheses.
Advertising involvement is a construct that is difficult to observe directly; it
includes the concepts of attention, interest, and arousal.
Political Behavior is also a construct: it combines different concepts
such as political affiliation, political knowledge, and political activities.
6. VARIABLES
Variables are the empirical counterparts of concepts and constructs.
Variables are important because they link the empirical world with the
theoretical; they are the phenomena and events that are measured or
manipulated in research.
Researchers try to test a number of associated variables to develop an
underlying meaning or relationship among them. After suitable analysis,
the most important variables are kept and the others are discarded.
These important variables are labeled marker variables because they
tend to define or highlight the construct under study.
7. KINDS OF VARIABLES
Independent and Dependent Variables
Discrete and Continuous Variables
Extraneous and Confounding Variables
8. INDEPENDENT AND DEPENDENT VARIABLES:
Independent
Cause: its value is independent of other variables.
Systematically varied by the researcher, who wants to manipulate or change it.
Has a direct effect on the dependent variable.
Also called the predictor.
Dependent
Effect: its value depends on changes in the independent variable.
Dependent variables are observed; their values are presumed to depend
on the effect of the independent variables.
Also called the criterion.
9. CONT.
Examples:
• A human resources professional wonders if how much money a
person earns can impact the extent to which an individual
experiences job satisfaction.
independent variable - compensation (salary or wages)
dependent variable - job satisfaction
• A researcher wants to study effects of political leanings of
media houses on their viewership.
Political leaning on media house- independent
Viewership- dependent
10. DISCRETE VARIABLE / CONTINUOUS VARIABLE =
QUANTITATIVE
Discrete
Takes values from a finite set of numbers.
Cannot be subdivided into fractional parts.
Not measurable at arbitrary points.
Examples: number of family members, gender category,
political affiliation
Continuous
Takes on any value, including fractions.
Can be divided into subsections.
Measurable at any point.
Examples: height, temperature,
distance, interest rates
11. EXTRANEOUS AND CONFOUNDING VARIABLES:
Extraneous
Extraneous are all variables in the study other
than the independent and dependent
variables.
Confounding
An extraneous variable becomes confounding
variable when it affects the results of study.
These should be controlled for validity of research, especially in experimental research.
For example, if a researcher is investigating the role of media talk shows in the political
knowledge of the audience, then the audience's interpersonal interactions, biradari (kinship)
ties, and education level are all extraneous variables. Any one of them may become a
confounding variable.
12. QUALITATIVE RESEARCH
Qualitative research involves several methods of data collection, such
as focus groups, field observation, in-depth interviews, and case
studies.
In all of these methods, the questioning approach is varied.
In other words, although the researcher enters the project with a
specific set of questions, follow-up questions are developed as needed.
The variables in qualitative research may or may not be measured or
quantified.
13. QUANTITATIVE RESEARCH
Quantitative research also involves several methods of data collection,
such as telephone surveys, mail surveys, and Internet surveys.
In these methods, the questioning is static or standardized—all
respondents are asked the same questions and there is no opportunity
for follow-up questions.
Quantitative research aims to describe trends or to explain relationships
among variables, collecting information from a large number of individuals.
14. DIFFERENCE BETWEEN QUALITATIVE AND QUANTITATIVE RESEARCH
The only difference between qualitative and quantitative research is the
style of questioning. Qualitative research uses flexible questioning;
quantitative uses standardized questions. Assuming that the sample
sizes are large enough and that the samples are properly selected, the
results from both methods can be generalized to the population from
which the sample was drawn.
15. NATURE OF MEASUREMENT
The importance of mathematics to mass media
research is difficult to overemphasize. As
measurement expert J. P. Guilford pointed out,
mathematics is a universal language that any
science or technology may use with great power
and convenience, and its vocabulary of terms is
unlimited.
The idea behind measurement is simple: a
researcher assigns numerals to objects, events,
or properties according to certain rules.
16. LEVELS OF MEASUREMENT
Scientists have distinguished
four different ways to measure things, or
four different levels of measurement,
depending on the rules that are used to
assign numbers to objects or events.
17. NOMINAL LEVEL
The nominal level is the weakest form of measurement. In nominal
measurement, numerals or other symbols are used to classify people,
objects, or characteristics.
Example: gender coded as 1 for male, 2 for female
18. ORDINAL LEVEL
Objects measured at the ordinal level are usually ranked along some
dimension, such as from smallest to largest.
Example: measuring socioeconomic status by categorizing families according to
class: lower, lower middle, middle, upper middle, or upper.
19. INTERVAL LEVEL
When a scale has all the
properties of an ordinal
scale and the intervals
between adjacent points
on the scale are of equal
value, the scale is at the
interval level.
There is no true zero
point.
Example: temperature in degrees Celsius or Fahrenheit
20. RATIO LEVEL
Scales at the ratio level of measurement have all the properties of interval
scales plus one more:
the existence of a true zero point.
Examples: time spent watching television or number of words per story are
ratio measurements.
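The four levels above determine which arithmetic operations are meaningful. A minimal Python sketch (made-up data; the variable names are my own, not from any instrument):

```python
# Which operations make sense depends on the level of measurement.

# Nominal: numbers are mere labels; only counting categories is meaningful.
gender_codes = [1, 2, 2, 1, 2]           # 1 = male, 2 = female
counts = {code: gender_codes.count(code) for code in set(gender_codes)}
print(counts)                            # frequencies are fine; a "mean gender" is not

# Ordinal: ranking is meaningful, but distances between ranks are not.
ses = ["lower", "middle", "upper"]       # order matters, spacing does not
print(ses.index("middle") > ses.index("lower"))  # True: "middle" outranks "lower"

# Interval: equal intervals but no true zero, so ratios mislead.
celsius_a, celsius_b = 10, 20
print(celsius_b / celsius_a)             # 2.0, yet 20 degrees C is not "twice as hot"

# Ratio: a true zero point makes ratios meaningful.
tv_minutes_a, tv_minutes_b = 30, 60
print(tv_minutes_b / tv_minutes_a)       # 2.0: genuinely twice the viewing time
```

The same number (2.0) appears in the last two cases, but only the ratio-level comparison carries meaning, which is why the level of measurement must be chosen before any statistics are computed.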
21. MEASUREMENT SCALES
A scale represents a composite measure of a variable; it is based on more
than one item. Scales are generally used with complex variables that do not
easily lend themselves to single-item or single-indicator measurements. Some
items, such as age, newspaper circulation, or number of radios in the house,
can be adequately measured without scaling techniques.
22. RATING SCALES
Rating scales are common in mass
media research. Researchers frequently
ask respondents to rate a list of items
such as a list of programming elements
that can be included in a radio station’s
weekday morning show, or to rate how
much respondents like radio or TV on-air
personalities.
Example: 0–9 scale form least to most
23. LIKERT SCALE
Likert scale, also called the summated
rating approach, was developed by
psychologist Rensis Likert (LICK-ert)
in 1932.
A number of statements are developed
with respect to a topic, and respondents
can strongly agree, agree, be neutral,
disagree, or strongly disagree with the
statements
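The "summated rating" idea can be sketched in a few lines of Python. The item responses, the reverse-scoring helper, and the scoring map below are illustrative assumptions, not part of Likert's original instrument:

```python
# Summated (Likert) rating: map response categories to numbers and sum them.
SCORES = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

def summated_score(responses, reverse_items=()):
    """Sum item scores; reverse-score negatively worded items (hypothetical helper)."""
    total = 0
    for i, answer in enumerate(responses):
        score = SCORES[answer]
        if i in reverse_items:            # e.g. a "I dislike this program" item
            score = 6 - score             # flips 1 <-> 5, 2 <-> 4 on a 5-point scale
        total += score
    return total

answers = ["agree", "strongly agree", "disagree"]
print(summated_score(answers, reverse_items={2}))  # 4 + 5 + (6 - 2) = 13
```

Reverse scoring matters because a respondent who disagrees with a negatively worded statement is expressing a favorable attitude, and the summed total should reflect that.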
24. THURSTONE SCALES
Thurstone scales are also called equal-appearing
interval scales because of the technique used to
develop them; they are typically used to measure
attitudes toward a given concept or construct.
To develop a Thurstone scale, a researcher first
collects a large number of statements
25. GUTTMAN SCALING
Guttman scaling, also called scalogram analysis, is
based on the idea that items can be arranged along a
continuum in such a way that a person who agrees with
an item or finds an item acceptable will also agree with
or find acceptable all other items expressing a less
extreme position
. For example, here is a hypothetical four-item Guttman
scale:
1. Indecent programming on TV is harmful to society.
2. Children should not be allowed to watch indecent TV
shows.
3. Television station managers should not allow
indecent programs on their stations.
4. The government should ban indecent programming
from TV.
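The cumulative property described above can be checked mechanically. In the sketch below (my own helper name, not a standard routine), items are ordered from least to most extreme, and a valid response pattern must consist of agreements followed only by disagreements:

```python
# Guttman scaling: agreeing with a more extreme item should imply agreeing
# with every less extreme item. With items ordered least -> most extreme,
# valid patterns look like 1110, 1100, 1000, ... (all 1s before any 0).

def fits_guttman(responses):
    """True if the 0/1 response pattern is cumulative (hypothetical checker)."""
    seen_disagree = False
    for agrees in responses:
        if agrees and seen_disagree:       # agreed with a harder item after
            return False                   # rejecting an easier one: scale error
        if not agrees:
            seen_disagree = True
    return True

print(fits_guttman([1, 1, 1, 0]))  # True: agrees up to item 3, rejects the ban
print(fits_guttman([1, 0, 1, 0]))  # False: inconsistent with the continuum
```

In practice, researchers tally such "scale errors" across all respondents to judge whether the items really form a single cumulative dimension.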
26. SEMANTIC DIFFERENTIAL SCALE
The semantic differential scale measures the
connotative meaning of things.
Example: How do you perceive the social media
for political awareness?
Good ______ ______ ______ ______ ______ ______ ______Bad
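Responses on such bipolar scales are usually coded numerically. The sketch below assumes a 7-point layout and a −3 to +3 coding convention; the function name and data are illustrative only:

```python
# Semantic differential: each item is a 7-point bipolar scale (e.g. Good ... Bad).
# One common convention codes the checked position as +3 (positive pole)
# through -3 (negative pole), then averages across adjective pairs.

def sd_scores(marks):
    """marks[i] = checked position 1..7 counted from the positive pole (hypothetical coding)."""
    return [4 - m for m in marks]          # position 1 -> +3, 4 -> 0, 7 -> -3

marks = [2, 1, 5]                          # one respondent's checks on three pairs
scores = sd_scores(marks)
print(scores)                              # [2, 3, -1]
print(sum(scores) / len(scores))           # mean connotative score for the concept
```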
27. VALIDITY
Validity is the extent to which a test
measures what it is supposed to
measure.
The question of validity is raised in the
context of three points:
the form of the test, the purpose of the test,
and the population for whom it is
intended.
28. TYPES OF VALIDITY
1. External Validity: External validity occurs when the causal relationship
discovered can be generalized to other people, time and contexts. Correct
sampling will allow generalization and hence give external validity.
2. Internal validity: Internal validity occurs when it can be concluded that there
is a causal relationship between the variables being studied. It is related to the
design of the experiment.
29. 3. Content Validity: When we want to find out if the entire content of the
behavior/construct/area is represented in the test we compare the test task
with the content of the behavior. This is a logical method, not an empirical
one.
For example, if we want to test knowledge of American geography, it is not fair to
have most questions limited to the geography of New England.
30. RELIABILITY
Reliability is the degree to which a test consistently measures whatever it
measures.
A measurement procedure is reliable when it yields consistent scores while the
phenomenon being measured is not changing.
It is the degree to which scores are free of measurement error,
i.e., the consistency of the measurement.
31. TYPES OF RELIABILITY
1. Stability Reliability: Test-retest: Test-retest reliability is the degree to which
scores are consistent over time. It indicates score variation that occurs from
testing session to testing session as a result of errors of measurement.
Same test, different times; this works only if the phenomenon is unchanging.
Example: administering the same questionnaire at two different times.
2. Equivalence Reliability: Inter-item reliability (internal consistency): the
association of answers to a set of questions designed to measure the same
concept.
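Both kinds of reliability can be estimated numerically. The sketch below uses the Pearson correlation for test-retest reliability and Cronbach's alpha for inter-item reliability; the helper names and all scores are made up for illustration:

```python
import statistics

# Test-retest reliability: Pearson correlation between two administrations
# of the same questionnaire (hypothetical scores for five respondents).
def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [12, 15, 9, 20, 14]
time2 = [13, 14, 10, 19, 15]
print(round(pearson(time1, time2), 2))     # close to 1.0 -> stable scores

# Inter-item reliability via Cronbach's alpha:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
def cronbach_alpha(items):                 # items: one list of scores per item
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]
    item_var = sum(statistics.variance(v) for v in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

items = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]  # 3 items x 4 respondents
print(round(cronbach_alpha(items), 2))     # alpha near 1 -> items measure one concept
```

A test-retest correlation near 1.0 indicates stability; an alpha near 1.0 indicates that the items hang together as measures of a single concept.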
32. RELATIONSHIP BETWEEN VALIDITY & RELIABILITY
The relationship between reliability and validity is that they are both essential for any study or
research project. Without them, the results will not be fit for purpose.
In terms of their relationship, it is important to note that reliability and validity do not imply
each other. A study can be reliable but not valid; however, a measure cannot be valid unless it
is also reliable, so reliability is necessary but not sufficient for validity.
For any researcher, then, the goal is to have a study that is both high in validity and high in
reliability because this will provide a set of high-quality results from which to draw conclusions
and make an analysis.