Influence of media source on political interest;
The road to a research design
1st semester 2009/2010
Research Designs, Explanatory 'Narratives' and their Presuppositions II:
Causality in Quantitative Research Designs
Lecturer: Dr. D. Grunow
22 December 2009
Communication studies (Research MSc)
Faculty of Social and Behavioural Sciences
University of Amsterdam
Table of contents
Issues of causality
In this paper the road to a research design is traced from the initial idea, through the
different hurdles to establishing causality, to the alternative routes that could also have been
taken. The research for which this is elaborated aims to study whether there is a causal relation
between the television broadcasting channels people watch and the political interest they
have. This research is an imaginary one and can therefore be reflected upon as if it were
conducted in an ideal situation, with no constraints of money, time or willingness to participate.
In The Netherlands television channels can be divided into two groups: three public
broadcasting channels and multiple commercial broadcasting channels. The first are for a
large part financed by the Dutch state, thus by tax revenues. Commercial channels, on the
other hand, have to earn all their money themselves and do not depend on the state. The
difference between the two groups does not only lie in how they are financed; they also seem
to differ in the content that is broadcast (De Vreese & Boomgaarden, 2006). An explanation
for this is that public broadcasting channels are filled with programs of different broadcasting
corporations that often have some kind of social ideals, which are interwoven in the programs
they make. Besides, public broadcasters are obliged to fill at least some part of their
programming with information, culture and education (Brants, 2000). Commercial channels,
by contrast, are organisations with a single goal, earning money, and entertainment seems the
fastest way to achieve this; ideals thus do not appear to be a primary concern here. In spite of
the competition to reach the largest audience, it seems that public broadcasters can still
distinguish themselves from commercial channels with different programming (Brants, 1998,
in: Brants, 2000).
Because of the ideals that are inconspicuously integrated in the programs on public
television, it can be expected that people who watch these programs often might become
more engaged with ideals in society and for that reason become more interested in politics.
People who mostly watch commercial broadcasting channels, on the other hand, might not be
inspired by ideals in this way and therefore have a lower interest in politics. Whether this
hypothesised causal relation really exists needs to be investigated. The research question I
want to answer in this paper is therefore: does someone's predominant use of public
broadcasting channels raise the probability of having more political interest, while watching
commercial broadcasting does not?
Answering this question is relevant in two respects. First, it will lead to more
scientific knowledge of the influence media have on society; many thoughts and hypotheses
about this are in circulation, but few have actually been tested. Second, it has a social
relevance. Much public money, raised through taxes, is spent on public broadcasting, and
how this can be justified is sometimes questioned. Showing that programs on public channels
raise people's political interest could be a strong justification, because more politically
interested people will lead to a stronger form of democracy. As is known, media play an
important role in this, since they can fulfil the functions of information dissemination,
expression and criticism (Bardoel & Van Cuilenborg).
The remainder of this paper is structured as follows. First, the necessary theoretical
background (Goldthorpe, 2001) describes how media can influence the perceptions, interests
or attitudes people have; then it is discussed what issues of causality should be taken into
account when this topic is studied; finally, a research design is presented that seems to offer
the best possibilities to answer the research question, together with alternatives to it. To give
an overview of the different issues of causality that a researcher will be confronted with when
this study is conducted, several scientific sources were used. Most of them are articles that
were presented in the course ‘Causality in Quantitative Research Designs’.
Because of limits of time and space, media cannot provide all the details of a story.
Choices therefore have to be made, and as a consequence different aspects of a story are
highlighted while others are not even shown. Which aspects and attributes of a story are
presented is called the framing of a message (De Boer & ’t Hart, 2007). There are diverse
interpretations of this concept, but the most general is that framing means showing,
consciously or unconsciously, certain aspects of a story, by which these become more salient
(De Boer & ‘t Hart, 2007; De Vreese, 2005; Entman, 1993; Nelson, Oxley & Clawson, 1997).
When a media message is received by a person, whether by reading, watching or
listening, this person logically also receives the frame of the message. The agenda-setting
theory of McCombs and Shaw (1972) explained that media messages make clear to people
which news items are important at that moment. It seems, however, that media messages can
also make clear which specific aspects of items are important; this is called second-level
agenda setting (McCombs, Shaw & Weaver, 1997). These framing effects refer to long-term
effects and not to short-term effects like priming (Price & Tewksbury, 1997).
It is assumed that the way topics are presented can influence how receivers of these
messages think about them. However, this can only happen when people receive messages
that are usually presented in the same way by consonant media coverage; otherwise thoughts
cannot solidify, due to opposing messages (De Vreese & Boomgaarden, 2006). When frames
in media messages influence people's thoughts, this is called frame setting (De Vreese, 2005;
Scheufele, 1999).
Some people, however, might be affected more easily than others. Demographic
variables such as age and intelligence naturally play a role, but equally important is how
dependent people are on media to know or learn about particular topics. The media system
dependency theory of Ball-Rokeach and DeFleur (1976) supposes that the more dependent
people are on media to find out about a topic, the more inclined they will be to follow the
media, compared to people who can also obtain information via social contacts or in real-life
situations. When people adopt particular ideas obtained via media exposure, this is called
cultivation (Gerbner, 1967). As a consequence, framing effects can influence people's
knowledge, attitudes, feelings, opinions or even behaviour (De Vreese, 2003).
As was already written in the introduction, public broadcasters' programming in The
Netherlands depends on the choices of different broadcasting corporations: broadcasting time
is allotted to them and they make or buy programs to fill it. Because these corporations have
an idealistic background and are obliged to broadcast more than entertainment alone, it is
plausible that their programs carry some kind of political frame. De Vreese and Boomgaarden
(2006) already demonstrated that public broadcasting news differs substantially from
commercial news. For this research it is expected that the same difference, or an even larger
one, appears when all programs of public broadcasters are compared to all programs of
commercial broadcasters. The expectation is that programs on public broadcasting channels
will far more often carry a political frame than programs on commercial broadcasting
channels, where such a frame will usually be lacking. A content analysis should verify this
assumption; given existing knowledge, however, it can for the time being be assumed to be
true.
De Vreese and Boomgaarden (2006) showed that the difference between the news
coverage of public and commercial broadcasters, and what people generally watch, can be
associated with people's knowledge of politics and their propensity to vote in elections. The
scope of this research is whether a comparable relation can be found between the general
programs people watch and the interest they have in politics. Furthermore, the ambition is not
only to show a relation between the channels people watch and their political interest; the
goal is to demonstrate a causal effect.
Issues of causality
When this study is carried out in reality, the researcher should be very conscious of the
many obstacles he might be confronted with in drawing a causal conclusion. First, it is useful
to make clear what causality is understood to mean in this paper. Although there are multiple
interpretations of the concept ‘causality’, one thing unites them all: a cause raises the
probability that an event occurs (Gerring, 2005). When a cause is observed, we can therefore
never be sure that the particular event will also take place; it only becomes more likely that it
will.
Causality is often understood too simplistically, in the sense that X causes Y; in social
situations, however, there are always multiple factors that influence whether an event can and
will take place. Another way in which causality might be used naïvely is explained by Slack
(1984), who considers three concepts of causality. First, mechanistic conceptions of causality
are rejected. These conceptions, with roots in mathematics, assume that causes and effects are
discrete, isolated events or conditions, with one-way effects, that are unrelated to each other
apart from the effect itself. In communication science, and in the social sciences in general,
however, causes and effects are usually inextricable. Expressive causality takes this into
account: causes and effects are both linked to the society in which they emerge and can no
longer be viewed separately. Moreover, it allows for the awareness that besides the
hypothesised effect, an opposite effect of the hypothesised effect on the hypothesised cause is
possible too. Finally, Slack (1984) describes the even more refined concept of structural
causality. Cause and effect are still seen as related to each other, but the environment in
which they take place is additionally assessed as something that enables an effect or renders it
impossible. The environment is thus very powerful in this conception.
Every study that aims to research a causal relation has one or more independent and
dependent variables. In this research the independent variable is the kind of programs people
usually watch: are these to a large extent programs on public broadcasting channels,
programs on commercial broadcasting channels, or rather a mix of programs on both types of
channels? The kind of programs people watch will be measured on a discrete five-point
scale, ranging from almost only programs on public broadcasting channels, via for the greater
part public broadcasting programs, a neutral mix of programs, and for the greater part
commercial broadcasting programs, to almost only commercial broadcasting channels. The
data to assign people to these categories will be gathered as objectively as possible and
therefore collected via Stichting Kijkonderzoek; more about this in the research design
section.
The dependent variable in this research is the interest people have in politics. By
interest is meant how much people care about what happens in politics, especially at the
national level. To measure this, again a five-point scale will be used, running from being very
interested in politics down to having no interest in politics at all. The data to assign people to
these categories cannot be collected as objectively as the measurement of viewing behaviour.
Surveys will be used to ask people different questions about how interested they are in
politics, how much they know about it, and to what extent they think it is important. Together
these will form the latent variable ‘interest in politics’.
To make reliable causal statements it is not enough to measure only the dependent and
independent variables. In almost every causal relation confounding variables are involved.
According to Goldthorpe (2001) these should be traced, so that wrong conclusions based on
spurious effects can be avoided. Confounding variables are characteristics or states that
correlate both with the treatment, the cause, and with the outcome, the effect (Frank, 2000).
Researchers therefore have to be more cautious in asserting causal relationships when they
expect there might be confounding variables.
A way to control for those confounding variables is to assign people at random to
control and treatment groups. In the social sciences, however, this is often not possible for
practical or ethical reasons (e.g. Elwert & Christakis, 2008). Another, statistical way to
control for potentially confounding variables has been developed (Frank, 2000): variables
that are expected to confound the hypothesised relation between cause and effect can be
included in the statistical analysis as covariates. Effects that seem significant in an analysis
without such a covariate can become insignificant when it is included, which makes clear that
the effect should not be interpreted as real. Researchers should therefore not close their eyes
to mechanisms other than the ones stated in their hypotheses. Of course, in many instances it
is not possible to measure all variables expected to have a confounding influence. Having the
most important variables in the analysis, however, makes it less likely that a causal
relationship that was found was actually produced by confounding variables. This makes the
research more robust.
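The logic of this statistical control can be sketched with simulated data. Everything below is hypothetical, a toy illustration rather than the study's actual analysis: a confounder z (say, education) drives both viewing behaviour x and political interest y, so a naive regression shows an "effect" of x that largely disappears once z is included as a covariate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical confounder, e.g. education level (invented for illustration)
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)   # viewing behaviour, driven by z
y = 0.8 * z + rng.normal(size=n)   # political interest, driven by z only

def ols(y, *regressors):
    """Least-squares coefficients; the intercept comes first."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols(y, x)[1]        # coefficient of x without the covariate
adjusted = ols(y, x, z)[1]  # coefficient of x with z included

print(round(naive, 2), round(adjusted, 2))
```

With z omitted the slope of x is clearly positive even though x has no causal effect on y here; once z enters the model the slope is close to zero, which is exactly the spurious pattern the text warns against interpreting causally.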
For the causal relation between the television programs people watch and how much
they are interested in politics, there are of course also possible confounding variables. First,
as the theory already implied, it should be controlled for how much people have to depend on
media to learn about situations. It is likely, for example, that someone who watches only
commercial television can still have much political interest when his or her partner is
involved in politics and talks with this person about it a lot. How much people depend on
media to learn about politics should thus be controlled for. Besides, as in every piece of
research, the demographic variables will be controlled for: age, gender, level of education,
household composition, etcetera. Had an experiment been conducted, all this would not have
been necessary, because people could have been assigned at random to different groups. In
the research design preferred here, however, this is not possible.
Cause-and-effect questions are often difficult to answer in the social sciences due to
constraints in collecting data (Morgan & Winship, 2007). In this research, for example, it was
not possible to assign people at random to one condition or another: people select themselves
into the condition of which programs they watch most often. As a researcher I cannot
determine which channel a person watches the most, unless an experimental design were
chosen. That choice is not made, however, because what is of interest is the long-term effect,
not just what happens after people have viewed one program for some time. Nevertheless,
the difference between people who watch more public channels and people who watch more
commercial channels remains important. To address such issues nonetheless, a counterfactual
model of causality was developed (Morgan & Winship, 2007).
The core of this model is that people can be situated in different states, here the
channels people watch, and that these states probably influence an outcome of interest. The
different states are called alternative treatments; through this term the study becomes to some
extent comparable with experiments. The difference, however, is that people are not
randomly assigned to the different treatments; therefore the term quasi-experiment is used
and the research is an observational study. Not assigning respondents at random to treatments
can potentially lead to bias (Campbell & Stanley, 1963), for two reasons (Morgan &
Winship, 2007): first, individuals in one group might already start with a different average
outcome score, that is, another level of political interest; second, persons in one group may be
more susceptible to the treatments than those in the other group.
In theory people could be in all kinds of states, but in practice this is often impossible.
People who mostly watch programs on commercial channels cannot, within a piece of
research, suddenly be observed as if they mostly watched programs on public television.
Sketched in a model it looks like this:
Sample                                  Y2                Y1                Y0
People watching more commercial         Observable as Y2  Not observable,   Not observable,
broadcasters (D=2)                                        counterfactual    counterfactual
People watching a neutral mix of        Not observable,   Observable as Y1  Not observable,
programs (D=1)                          counterfactual                      counterfactual
People watching more public             Not observable,   Not observable,   Observable as Y0
broadcasters (D=0)                      counterfactual    counterfactual
With all measures taken at one point in time, researchers can only observe the diagonal of this
model. Such a cross-sectional design therefore has the problem of merely studying different
groups that cannot be observed in another condition. Differences between them can be
related to the hypothesised cause, the treatment, but this could very well be a wrong
conclusion, because background variables might explain both why people are in a certain
treatment group and why they have a certain outcome. For instance, it might be that people
who are older or more highly educated watch more programs of public broadcasters and have
more interest in politics than younger, less educated people who watch more programs of
commercial broadcasters. If this is true, the higher level of political interest need not have
been caused by the programs that are watched, but perhaps by people's age or level of
education. Differences within individuals thus cannot be observed, and because individuals
assigned themselves to particular states and these treatments are nonmanipulable, it becomes
almost impossible to conclude what caused the difference in outcome.
The counterfactual model can help in thinking about this problem and in making
causal arguments of which the researcher can be more confident, by suggesting what-if
questions (Morgan & Winship, 2007). For example: what would be the political interest of
someone who most often watches programs on commercial broadcasting channels if that
person instead watched public broadcasting channels more often? This question shows that
cross-sectional data are limited, because different states of one respondent cannot be
measured. It suggests that differences within one individual should be observed, because then
the treatment to which someone is exposed can change. Multiple measurement moments
would be needed for that, leading to the conclusion that longitudinal data should be used;
more about this later.
The what-if questions carry some other assumptions too. Besides the change in
treatment, what somebody watches, nothing else should change; otherwise that might be the
cause of a change in political interest. To observe whether something happened that disturbed
this ceteris paribus assumption, continuous measurement seems necessary, also of events not
directly related to the variables that were measured. Another assumption researchers
maintain is that the outcomes of individuals are not affected by changes in the treatments of
others. This is comparable with Gerring's (2005) independence criterion for proving a causal
argument. Cases, in this study respondents, should be independent of each other; otherwise
every case measures roughly the same thing and no new information is obtained. This
assumption does not seem to be a large problem here: media use and political interest are
relatively independent among respondents as long as they do not belong to the same
household. Disturbances of the various variables thus do not seem to correlate with those of
other respondents.
Because of the many problems encountered when this counterfactual model was
discussed, it was not chosen for this research. It did, however, make it possible to think of the
pitfalls that could occur when research on this topic is conducted. It was therefore very
useful, and it created the insight that differences within people should be observed, not only
differences between groups. As a consequence it was necessary to think of a longitudinal
design.
Longitudinal data allow a researcher to see how variables change over time within a unit of
analysis, respondents in this study. This is practical because it makes it possible to leave out
counterfactual arguments while still making causal arguments. The advantage is that
counterfactual arguments require assuming things that are unobservable, whereas in time-
series data people can actually change treatments; this can be observed, so the arguments do
not remain hypothetical and are therefore more robust. When cross-sectional measures are
repeated over time, the result is pooled time-series data: measures of multiple units of
analysis at multiple points in time. Stimson (1985) mentions that this combination is not used
often, although it should be noted that he wrote this almost twenty-five years ago. The
combination makes it possible to do both between- and within-analysis, resulting in “an
extraordinarily robust research design” (p. 916). The robustness is the consequence of having
two ways to analyse whether there is a causal relationship: cross-sectional analysis presumes
that an effect of covariance took place before the measurement, while time-series analysis
hopes to capture the moment of a causal process in the data set.
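The two modes of analysis that pooled time-series data allow can be shown in a minimal sketch using fictitious interest scores for two respondents over four waves: between-analysis compares persons' averages, within-analysis compares each measurement with that person's own average.

```python
# Fictitious interest scores for two respondents, measured over four waves
interest = {
    "A": [2, 2, 3, 3],
    "B": [4, 4, 4, 5],
}

# Between-analysis: compare persons via their own averages
between = {p: sum(xs) / len(xs) for p, xs in interest.items()}

# Within-analysis: compare each wave with the person's own average
within = {p: [x - between[p] for x in xs] for p, xs in interest.items()}

print(between)   # differences BETWEEN persons
print(within)    # change WITHIN each person over time
```

The between figures show that respondent B is on average more interested than A, while the within figures show that both respondents became more interested over time, the two complementary readings the pooled design provides.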
However, when a relation between two or more variables is demonstrated, a causal
process has not yet been demonstrated as well. When two variables covary, it is not directly
clear which variable caused the change in the other, or in other words: what was the direction
of the causal effect? Does the broadcast channel of the programs someone watches influence
this person's political interest, or is it the other way round: does political interest influence
which channels are chosen to watch television programs? To study this, it is necessary to
look at the temporal order in which events take place (Thurman & Fisher, 1988). This often
proves quite problematic with time-series data gathered with a panel study: because people
are measured at only a few moments in time, it is likely that both cause and effect have
already taken place, and the measurement could consequently again show no more than a
correlation. To make good causal arguments, it is necessary to show that the cause precedes
the effect in time (Pötter & Blossfeld, 2001).
Event History Analysis can be used for that. Blossfeld, Golsch and Rohwer (2007)
state that cause and effect do not happen at the same moment (t’ − t ≠ 0). Therefore, when all
variables are measured over time, a variation can first be observed in one variable and
thereafter in the other variable, the effect. When a structural time order is found in the
process in which the variables change, and this is in accordance with the theory, it is more
probable that the hypothesised effect takes place and not the other way round. Event History
Analysis gathers data continuously and therefore makes it possible to observe which variable
changed first and is thus probably the cause. It can, however, be debated whether this is a
completely sound argument: individuals may end up in a particular condition (Y) caused by
an event (X), while this event could itself be influenced by the individual's expectations of
the outcome (Pötter & Blossfeld, 2001). In this research that seems unlikely; people will not
consciously use certain media in order to become more interested in politics.
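The temporal-order check can be sketched as follows; the respondents and day numbers are invented. With (near-)continuous event records, one can verify per respondent whether the change in viewing behaviour precedes the change in political interest, the pattern the hypothesis predicts.

```python
# Fictitious day numbers (within the observation window) of each
# respondent's first recorded change in the two variables
change_in_viewing = {"A": 40, "B": 120, "C": 75}
change_in_interest = {"A": 55, "B": 110, "C": 90}

# Respondents for whom the hypothesised cause (viewing) changed first
cause_first = [r for r in change_in_viewing
               if change_in_viewing[r] < change_in_interest[r]]

print(sorted(cause_first))
```

Here respondents A and C show the hypothesised order, while for B the interest change came first, which would argue for the reverse direction, or for an unmeasured third cause, in that case.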
To make causal arguments it appeared necessary that the ceteris paribus assumption
holds, or at least that it is known that something changed. Event History Analysis can be
helpful here: respondents can be asked what happened in the period between two moments of
measurement, thereby covering a continuous time axis. Did certain programs start on a
channel that was never watched before, but now received attention because the new program
was attractive? Or did a favourite anchorman transfer from a public broadcaster to a
commercial channel? Events like these can be alternative causes for a change in viewing
behaviour. On the other hand, special political events, such as another strange statement by
Wilders, the sudden appearance of a charismatic politician like Fortuyn, or a political quiz in
the pub on the corner, may be events that cause an increase in political interest. Such events
would go unobserved with normal panel data, but with Event History Analysis they may
appear and turn out to be really important.
In addition, Event History Analysis offers the possibility to ask people about their
past: what did they watch while growing up, what kind of family did they come from, were
respondents' parents politically engaged? A benefit of Event History Analysis is that it makes
it possible to observe censored events, by stretching the observation window back to people's
childhood years (Vermunt & Moors, 2005). In other methods these events fall outside the
time borders of the research and thus cannot be observed. Retrospective event histories can
make such events observable and enrich the study in this way. It becomes possible, for
example, to know how people selected themselves into certain categories of media use or
political interest. This background knowledge can give insights that are necessary to make
causal arguments more robust.
After considering all the issues that accompany making causal arguments, an ideal research
design was devised. It will be presented next, together with the questions about both internal
and external validity; afterwards an alternative design will be presented.
To answer the research question about the effect of media exposure on political interest, a
longitudinal quasi-experiment was devised. Not all media will be considered, because that
would create too much work and too many difficulties in the analysis. To measure media
exposure, television was chosen, because people can change their viewing behaviour easily
and seem less loyal to their television channels than, for example, to the newspapers they
read. It is therefore more probable that events can be observed.
In a research project lasting five years, a panel survey will be combined with Event
History Analysis. Every four weeks people will be questioned. This period was chosen
because it is not so short that it will irritate people, and not so long that they forget events.
The survey is designed to establish each respondent's interest in politics; various questions
together measure this latent variable. The values of this latent variable will be rounded to the
nearest whole number, resulting in a five-point scale. This is necessary because Event
History Analysis can only work with discrete variables (Blossfeld et al., 2007). A shift in the
political-interest score (e.g. a score of 2 increases to 3) is registered as an event. What might
cause distortion is that, due to rounding, some respondents are closer to an event than others
and can therefore change more easily; however, no large problems are expected from this.
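The rounding of the latent score and the resulting event definition can be sketched in a few lines; the function names and the example series are invented for illustration. Note how a respondent at 2.6 is, purely through rounding, closer to an "event" than one at 2.1, the distortion mentioned above.

```python
# Hypothetical sketch: round a continuous latent interest score (1.0-5.0)
# to the discrete five-point scale and register a shift between waves as
# an event.
def to_scale(score: float) -> int:
    return min(5, max(1, round(score)))

def shifts(latent_scores):
    """Return (wave, old, new) for every wave at which the rounded score changes."""
    rounded = [to_scale(s) for s in latent_scores]
    return [(t, a, b)
            for t, (a, b) in enumerate(zip(rounded, rounded[1:]), start=1)
            if a != b]

waves = [2.3, 2.4, 2.6, 3.1, 3.0]   # fictitious four-weekly measurements
print(shifts(waves))                 # one event: the score moves from 2 to 3
```

The small drift from 2.4 to 2.6 already triggers an event, while a drift from 2.1 to 2.3 would not, which is exactly the rounding asymmetry the text flags as a possible source of distortion.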
The survey will also include questions for the Event History Analysis. Every four
weeks people will be asked what special events happened in the past month, including the
date or an estimate of it, so that each event can be placed in time. In separate questions they
will again be asked about important events, with the emphasis on events regarding media,
especially television, and politics. When important things happen not on an individual but on
a national level, such as elections or remarkable statements by a politician, people can be
asked how these affected them. Important events on a non-individual level should also be
kept by the researcher in a separate data file, so that they can be included in the analysis. In
the very first questionnaire people will be asked about their demographic variables, the
control variables already named, and their past, from the moment of measurement back to
their childhood years, with questions about the family in which they were raised, the political
involvement of the people around them, the kind of programs watched when they were
young, etcetera. This information could make clear why people are part of a certain
condition. By combining the panel data with the Event History questions, it is believed that a
good measurement of the dependent variable is obtained.
Questions will be asked in an online questionnaire, because people feel more
anonymous in such a setting than when they are questioned face-to-face or by telephone.
Socially desirable answers about media use or political interest can thus be avoided as much
as possible, although they can never be ruled out entirely.
To measure the independent variable, the television programs people watch, an
agreement has to be made with Stichting Kijkonderzoek and its participants have to be
convinced to take part. This institution measures the reach of television programs in The
Netherlands with 2,900 participants in 1,245 households (Intomart GfK, 2008). Three
hundred of these respondents, coming from different households, will be randomly selected
and invited to participate in this research. The original purpose of the reach research
conducted by Stichting Kijkonderzoek is to let advertisers tune their expenditure to the reach
of a program: for a commercial break with many viewers an advertiser is logically willing to
pay more than when few people are watching. Because advertising on television is very
expensive, the sample of this reach research is highly representative of the Dutch population
above the age of thirteen; advertisers, after all, want reliable estimates.
It would be ideal if the respondents of this reach research could also be included in the
study of the effects of television program exposure on political interest. First, continuous,
objective and reliable measurements of the programs people watch are obtained; second, the
external validity of the results is high, because a representative sample of the Dutch
population is studied under natural circumstances. To make the scale of this measurement
discrete, people can be categorized into five groups: those watching public broadcasting
programs 80 to 100% of their viewing time, those watching them 60 to 80% of the time, those
with mixed media use who watch them 40 to 60% of the time, those watching public channels
only 20 to 40% of the time, and those who almost never watch them, 0 to 20% of the time.
Since these are relative rather than absolute measures, watching less public broadcasting
logically implies watching more commercial television. A person's average media use will be
based on the measurements of the last 168 hours (seven days), which compensates for
day-to-day differences in the television guide and for fluctuations in non-structural media
use. Wide categories are used so that this variable does not register too many events: if the
measure were very sensitive to small changes in media use, the variable might wrongly
appear to be a bad predictor of changes in political interest, so only larger shifts should count
as events.
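The proposed categorization can be illustrated with a minimal sketch. The data structures here are hypothetical (the actual Stichting Kijkonderzoek records look different); the sketch only shows how a week of viewing records would be reduced to a public-broadcasting share and assigned to one of the five 20-point bands, with boundary values assigned to the higher band by assumption.

```python
def public_share(records):
    """records: list of (channel_type, minutes) pairs from the last 168 hours;
    channel_type is 'public' or 'commercial'. Returns the public share, or
    None when nothing was watched in the window."""
    total = sum(minutes for _, minutes in records)
    if total == 0:
        return None
    return sum(minutes for kind, minutes in records if kind == "public") / total

def category(share):
    """Map a share in [0, 1] to one of the five bands used in the design."""
    if share is None:
        return None
    bands = ["0-20%", "20-40%", "40-60%", "60-80%", "80-100%"]
    return bands[min(int(share * 5), 4)]

# Example: 300 minutes public, 700 minutes commercial -> 30% public viewing.
week = [("public", 300), ("commercial", 700)]
print(category(public_share(week)))  # → 20-40%
```

Because the bands are wide, a respondent only generates an "event" in this variable when their viewing share crosses a 20-point boundary, which matches the design's intention of ignoring small fluctuations.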
Once the channels people watch, their political interest, the control variables and
non-individual events have all been measured, they can be analysed together. Comparing
groups could reveal whether there is some relation between the programs particular people
watch and the political interest they have, but this could not support conclusions that go
beyond a correlation, because it would not show what caused an effect. When respondents are
analysed individually, however, changes in the different variables can be observed, as well as
the time order in which these changes occurred. This hopefully forms a sound methodical
basis for making causal arguments after the research is conducted.
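The within-person analysis described above can be sketched as follows. This is an assumed, simplified monthly panel (the field names and the integer interest scale are illustrative, not the actual instrument); the sketch only shows how, per respondent, changes in viewing category and in political interest can be listed in time order, which is the raw material for the causal arguments.

```python
def change_events(panel):
    """panel: one respondent's observations, sorted by month, each a dict with
    'month', 'viewing' (category) and 'interest' (scale score).
    Returns (month, variable) tuples for every observed change, in time order."""
    events = []
    for prev, cur in zip(panel, panel[1:]):
        if cur["viewing"] != prev["viewing"]:
            events.append((cur["month"], "viewing"))
        if cur["interest"] != prev["interest"]:
            events.append((cur["month"], "interest"))
    return events

# Hypothetical respondent: viewing drops a band in month 2,
# political interest rises a point in month 3.
respondent = [
    {"month": 1, "viewing": "80-100%", "interest": 3},
    {"month": 2, "viewing": "60-80%", "interest": 3},
    {"month": 3, "viewing": "60-80%", "interest": 4},
]
print(change_events(respondent))  # → [(2, 'viewing'), (3, 'interest')]
```

Reading off the order of events per individual, rather than comparing group averages, is what allows the design to say more than that the two variables correlate.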
This design of course also has some difficulties. First, it is far from certain that Stichting
Kijkonderzoek and its respondents are willing to participate in the research. If they refuse, it
would be very expensive to carry out similar measurements independently, and difficult to
find an equally representative sample. Assuming an ideal situation, however, this would not
be a problem. Furthermore, it is unclear to what extent people can give reliable answers to the
retrospective questions needed for Event History Analysis. Will they remember why they
changed their viewing behaviour, or will they be unaware of it? In an online questionnaire
people may easily overlook events that are important for the research but seem unimportant
to them.
Several factors should be taken into account when judging the internal validity of a
research design (Campbell & Stanley, 1963). History is an important factor, but it should not
threaten the internal validity of this research, because respondents are questioned as
thoroughly as possible about important events that happened between the various
measurement moments, and important events at the non-individual level are recorded by the
researcher. Maturation of the respondents might be more problematic: as people get older,
both their viewing behaviour and their political interest may change. This process can differ
from person to person and is therefore difficult to control for statistically. However, because
the data gathering does not take longer than five years, this is not expected to have a large
influence.
The factor testing might cause bigger problems. People are questioned monthly to
measure the latent variable ‘political interest’. Because respondents are thereby forced to
think about politics, they might also develop more interest in it, and will then likely
(unconsciously) pay more attention to media messages about political topics. The
questionnaires would then influence both the dependent and the independent variable, and it
is impossible to know how this works out. The researcher should be aware of this while doing
the study, although it seems difficult to avoid. Besides this, the effects of testing might also
have consequences for the external validity: it is questionable to what extent respondents who
are tested monthly can still be generalized to people who are never tested.
It is also necessary to be aware that for the collection of data via Stichting
Kijkonderzoek the researcher depends on another organisation, and that if it changes its
measuring instrument, effects of instrumentation may occur. Another factor relevant to
internal validity is mortality. Some people in the Stichting Kijkonderzoek sample might not
want to participate actively in the research, because they are used to passive participation. A
specific group, for example those who dislike politics, might refuse to participate because
they do not like the topic, or might stop participating after a while. This could cause some
misrepresentation; as yet, however, there is no reason to believe this will happen.
To avoid problems of age cohort effects, the group of respondents will be divided into
three: people younger than 25 years at the start of the research, people between 25 and 50
years, and those above 50. This is done because people within these groups can be compared
with each other better than within the general group, due to differences in interest, but also
because people who grew up with commercial television might use media differently than
those who did not. With these groups, possible effects might turn out stronger in one age
cohort while being weaker or even absent in another.
The research design just discussed is of course not the only way to study the possible effects
of the channels being watched on people's political interest, although in my opinion it seems
the best way. In this part an alternative design will be sketched roughly. An alternative to the
research design could be a fully controlled experiment: in a laboratory, people can be exposed
to selected fragments of television programs, after which the effect can be tested. Because the
setting is fully controlled, people can be randomly assigned to conditions, receiving either
fragments of programs broadcast by public television stations or by commercial broadcasters.
A test could then measure how people differ in their political interest afterwards.
This design, however, does not measure what I am interested in: long-lasting effects. It
only measures how people react to messages in the short term; in that case not framing
effects but priming effects are studied. Besides, it would be very unnatural to let people
watch television in a laboratory environment, and participants might be shown programs they
would never watch under normal circumstances. The internal validity of such research can
thus be rather good, but the results can hardly be generalized to real-life circumstances, so it
lacks external validity.
Conducting research that aims to study or prove a causal mechanism in the social sciences is
very difficult. Many issues have to be taken into account, and it remains hard to reach robust
conclusions that go beyond correlation and really demonstrate causality. Choices are always
accompanied by desired as well as unwanted consequences, so compromises often have to be
made, especially between good internal validity and good external validity; both can seldom
be achieved within the same research design.
However, with regard to the research question whether someone's predominant use of
public broadcasting channels raises the probability of greater political interest, while
watching commercial broadcasting does not, I think the drafted research design is at least a
good way to attempt an answer. The continuous, longitudinal measures should make it
possible to observe and analyse whether there is a causal mechanism and in which direction
the influence runs.
Ball-Rokeach, S. J., & DeFleur, M. L. (1976). A dependency model of mass media effects.
Communication Research, 3(1), 3-21.
Bardoel, J., & Van Cuilenborg, J. (2003). Communicatiebeleid en communicatiemarkt: Over
beleid, economie en management voor de communicatiesector. Amsterdam: Otto
Cramwinckel Uitgever.
Blossfeld, H., Golsch, K., & Rohwer, G. (2007). Event History Analysis with Stata. Mahwah
(NJ): Lawrence Erlbaum Associates.
Brants, K. (2000). Tussen beeld en inhoud: Politiek en media in de verkiezingen van 1998.
Amsterdam: Uitgeverij Het Spinhuis.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for
research. Chicago: Rand McNally.
De Boer, C., & ’t Hart, H. (2007). Publieke opinie. Amsterdam: Boom onderwijs.
De Vreese, C. H. (2003). Framing Europe: Television news and European integration.
Amsterdam: Aksant Academic Publishers.
De Vreese, C. H. (2005). News framing: Theory and typology. Information Design Journal,
13(1), 51-62.
De Vreese, C. H., & Boomgaarden, H. (2006). News, political knowledge and participation:
The differential effects of news media exposure on political knowledge and
participation. Acta Politica, 41(4), 317–341.
Elwert, F., & Christakis, N. A. (2008). Wives and ex-wives: A new test for homogamy bias in
the widowhood effect. Demography, 45(4), 1-23.
Entman, R. M. (1993). Framing: Toward clarification of a fractured paradigm. Journal of
Communication, 43(4), 51-58.
Frank, K (2000). Impact of a confounding variable on the inference of a regression
coefficient. Sociological Methods and Research, 29(2), 147-194.
Gerbner, G. (1967). Mass communication and human communication theory. In F. E. X.
Dance (Ed.), Human communication theory (pp. 40-57). New York: Holt, Rinehart
and Winston.
Gerring, J. (2005). Causation: A unified framework for the social sciences. Journal of
Theoretical Politics, 17(2), 163-198.
Goldthorpe, J. H. (2001). Causation, statistics and sociology. European Sociological Review,
17(1), 1-20.
Intomart GfK (2008). Het kijkonderzoek: Methodologische beschrijving. Amstelveen:
Stichting KijkOnderzoek.
McCombs, M. E., & Shaw, D. L. (1972). The agenda-setting function of mass media. Public
Opinion Quarterly, 36(2), 176-187.
McCombs, M. E., Shaw, D. L., & Weaver, D. H. (1997). Communication and democracy:
Exploring the intellectual frontiers in agenda-setting theory. Mahwah, NJ: Lawrence
Erlbaum Associates.
Morgan, S. L., & Winship, C. (2007). Counterfactuals and causal inference: Methods and
principles for social research. Cambridge: Cambridge University Press.
Nelson, T. E., Oxley, Z. M., & Clawson, R. A. (1997). Toward a psychology of framing
effects. Political Behavior, 19(3), 221-246.
Pötter, U., & Blossfeld, H. (2001). Causal inference from series of events. European
Sociological Review, 17(1), 21-32.
Price, V., & Tewksbury, D. (1997). News values and public opinion: A theoretical account of
media priming and framing. In G. Barnett & F. J. Boster (Eds.), Progress in the
communication sciences (pp. 173-212). Greenwich, CT: Ablex Publishing.
Scheufele, D. A. (1999). Framing as a theory of media effects. Journal of Communication,
49(1), 103-122.
Slack, F. D. (1984). Communication technologies & society: Conceptions of causality & the
politics of technological intervention. Norwood (NJ): Ablex Publishing Corporation.
Stimson, J. A. (1985). Regression in space and time: A statistical essay. American Journal of
Political Science, 29(4), 914-947.
Thurman, W. N., & Fisher, M. E. (1988). Chickens, eggs and causality or which came first?
American Journal of Agricultural Economics, 70(2), 237-238.
Vermunt, J. K., & Moors, G. (2005). Event history analysis. In B. Everitt & D. Howell (Eds.),
Encyclopedia of Statistics in Behavioral Science, (pp. 568–575). Chichester: Wiley.