This is the book to use for this assignment. I am sure you know websites where you can access e-books.
Book:
Making Sense of the Social World: Methods of Investigation Fifth Edition
ISBN: 978-1-4833-8061-2
Class:
Applied Research Methods for Policy & Management – PAD4723
I am going to try to help you through the questions and how to approach this assignment. Basically, you answer these questions using material from the book.
Questions:
1. Identification of the research question(s), objective(s), and hypothesis, if available.
2. Brief discussion of the linkage between the research question(s) and the broader literature reviewed.
3. Identification of the dependent and major independent variables and their measurement.
4. Identification of data source(s), unit of analysis, and type of data (time series, cross-sectional, etc.).
5. Identification and brief discussion of the main research methods used.
6. Brief discussion of the main research results and their generalizability.
7. Brief discussion of the overall quality and organization of the article.
For question #1:
To answer question 1, I would read the article first and then identify the research question(s), objective(s), and hypothesis (if one is stated).
For question #2:
To answer question 2, identify the research question(s) and discuss how they connect to the broader literature the article reviews.
For question #3:
To answer question 3, use this link https://www.simplypsychology.org/variables.html to learn about dependent and independent variables, then identify the dependent and major independent variables in the article and note how each is measured.
For question #4:
To answer question 4, I would identify the data source, i.e., where the data come from (here, the social media platform Facebook). The unit of analysis is the entity the data describe — in this article, clusters of Facebook users. The type of data refers to its structure over time: time-series data follow the same units across periods, while cross-sectional data capture many units at one point in time; a one-shot experiment like this one produces cross-sectional data.
For question #5:
To answer question 5, I would find out which research methods were used. Examples of research methods studied in class include quantitative and qualitative methods of analysis; this article uses a quantitative method, a randomized field experiment.
For question #6 and #7:
These two questions are pretty much self-explanatory.
Article
Using Large-Scale Social Media Experiments
in Public Administration: Assessing Charitable
Consequences of Government Funding of
Nonprofits
Sebastian Jilke*, Jiahuan Lu*, Chengxin Xu*, Shugo Shinohara†
*Rutgers University; †International University of Japan
Abstract
In this article, we introduce and showcase how social media can be used to implement experi-
ments in public administration research. To do so, we pre-registered a placebo-controlled field
experiment and implemented it on the social media platform Facebook. The purpose of the ex-
periment was to examine whether government funding to nonprofit organizations has an effect
on charitable donations. Theories on the interaction between government funding and charitable
donations stipulate that government funding of nonprofit
organizations either decreases (crowding-out) or increases (crowding-in) private donations. To test
these competing theoretical predic-
tions, we used Facebook’s advertisement facilities and
implemented an online field experiment
among 296,121 Facebook users nested in 600 clusters. Through
the process of cluster-randomiza-
tion, groups of Facebook users were randomly assigned to
different nonprofit donation solicitation
ads, experimentally manipulating information cues of nonprofit
funding. Contrary to theoretical
predictions, we find that government funding does not seem to
matter; providing information
about government support to nonprofit organizations neither
increases nor decreases people’s
propensity to donate. We discuss the implications of our
empirical application, as well as the mer-
its of using social media to conduct experiments in public
administration more generally. Finally,
we outline a research agenda of how social media can be used to
implement public administration
experiments.
All rights reserved. For permissions, please e-mail: [email protected]
Journal of Public Administration Research and Theory, 2019, Vol. 29, No. 4, 627–639
doi:10.1093/jopart/muy021
Advance Access publication May 12, 2018
Downloaded from https://academic.oup.com/jpart/article-abstract/29/4/627/4995543 by Florida International University Library Serials user on 18 April 2020
neighboring fields like political science, economics, or
marketing (for an overview see Aral 2016). Examples
include a wide range of topics, including political
behavior (e.g., Bond et al. 2012), advertising (e.g.,
Bakshy et al. 2012a), product pricing (e.g., Ajorlou,
Jabdabaie, and Kakhbod 2016), information propagation (e.g., Bakshy et al. 2012b), or emotional contagion (e.g., Kramer, Guillory and Hancock 2014). This trend
is encouraging but it misses an important component
of people’s online behavior, namely citizen–state inter-
actions. Indeed, the advent of e-government and an
increased presence of government agencies on social
media platforms has led to the rise of online citizen–
state interactions (Thomas and Streib 2003; Wukich
and Mergel 2015). These interactions range from gath-
ering information, for example, about how to fill out
online applications, to crisis communication, and even
complaints about poor services. In this study, we il-
lustrate how to implement large-scale social media
experiments on Facebook by examining interactions
between nonprofit organizations seeking donations
and their potential donors. In particular, we assess
whether changes in government funding affect the lev-
els of charitable income of nonprofit organizations.
Theoretically, two competing mechanisms have been
distinguished in explaining why levels of government
funding may have an effect on private donations to
nonprofit organizations. The crowding-out perspective
argues that government funding would decrease peo-
ple’s willingness to donate because donors as taxpay-
ers perceive government funding as a substitution to
their donations (Andreoni 1989; Warr 1982; Kim and
Van Ryzin 2014). If they contributed already via taxes,
why should they give in addition to them? Therefore,
the crowding-out model predicts a decrease in private
donations as a result of government funding. In con-
trast, the signaling model of crowding-in suggests that
government funding is used by potential donors as an
imperfect signal of an organization’s effectiveness (e.g.,
Borgonovi and O’Hare 2004; Rose-Ackerman 1981).
In the absence of complete information about how a nonprofit organization will perform and operate with
the funds at its disposal, government funding serves
as an organization’s “quality stamp,” signaling the or-
ganization is not only trustworthy but also effective
because it managed to receive competitive government
grants. The crowding-in perspective, therefore, predicts
an increase in private donations as a result of govern-
ment funding.
In this study, we test these somewhat competing
claims in the context of a large-scale social media
experiment. Conducted in a naturalistic setting, so-
cial media experiments, similar to conventional field
experiments, combine high levels of internal validity
with external validity. This allows us to test whether
government funding crowds-in, or crowds-out, pri-
vate donations. We implemented a field experiment
on the social media platform Facebook by assigning
clusters of approximately 300,000 Facebook users to
donation solicitations of groups of real food banks.
Using a pre-registered, placebo-controlled between-
subjects design, groups of users were randomly allo-
cated to three experimental conditions: (1) the control
group (i.e., no funding information), (2) the placebo
group (i.e., donation funded), and (3) the treatment
group (i.e., government funded). As outcome meas-
ures, we monitored people’s revealed donation inten-
tions by their click-through-rates (i.e., the frequency
people clicked on the links in the ad solicitations),
but also other behavioral measures such as website
visits. Consistently, we find no direct evidence for ei-
ther model, suggesting that public and private fund-
ing streams of nonprofit organizations do not seem to
interact in the real world.
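The design just described — 600 clusters of Facebook users randomly allocated across a control, a placebo, and a government-funding treatment ad, with cluster-level click-through-rates as the outcome — can be sketched in a few lines of Python. This is an illustrative mock-up, not the authors' code: the cluster labels and click counts are invented, and the balanced 200/200/200 split across arms is an assumption made for the sketch.

```python
import random
from collections import Counter

random.seed(42)  # reproducible assignment for the sketch

# The three experimental conditions from the study design.
conditions = ["control", "placebo_donation_funded", "treatment_government_funded"]

# 600 clusters of Facebook users (cluster labels are invented here).
clusters = [f"cluster_{i:03d}" for i in range(600)]

# Balanced cluster-randomization: 200 clusters per arm (an assumption
# for this sketch), shuffled so that assignment is random.
arms = conditions * (len(clusters) // len(conditions))
random.shuffle(arms)
assignment = dict(zip(clusters, arms))

def click_through_rate(clicks: int, impressions: int) -> float:
    """Cluster-level CTR: the share of ad impressions that led to a click."""
    return clicks / impressions if impressions else 0.0

print(Counter(assignment.values()))  # each arm ends up with 200 clusters
print(click_through_rate(12, 4800))  # 0.0025 for one made-up cluster
```

Randomizing clusters rather than individuals matters here because Facebook's advertisement facilities target groups of users defined by demographic parameters, not named individuals.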
In addition to these findings, we provide an overview of social media experiments and how they can
be implemented in public administration research,
including an agenda for studying online citizen–state
interactions using large-scale social media experiments. The remainder of the study is as follows: in
the next section, we review empirical applications
of social media experiments in neighboring fields
to provide an overview of the applicability of social
media to conduct experiments in public administra-
tion. We then discuss our empirical application by
first reviewing the literature on the crowding-out
and crowding-in hypotheses. On this basis, we intro-
duce our experimental research design and report the
results of the experiment subsequently. In the final
section, we draw implications for public administration research and practice from both our empirical
application and the review of innovative social media
experiments.
Conducting Social Media Experiments
Before turning to the empirical application in this art-
icle, we provide an overview of the potential of social
media to conduct experiments. Recent years have seen
an increase in online field experiments implemented
on social media platforms (Aral 2016). Indeed, com-
panies like Amazon, Google, and Facebook constantly
perform small experiments on their clients, for ex-
ample through randomly altered website content
where two different versions of the same website, on-
line ad, or any other online parameter are randomly
assigned to service users—a procedure marketers
commonly refer to as A/B testing. In the past, social
scientists have collaborated with major social media
platforms to implement experiments. For example,
Bond et al. (2012) implemented a 61-million-person
political mobilization experiment on Facebook; simi-
larly, Kramer, Guillory, and Hancock (2014) have
implemented an online experiment to study emotional
contagion among 690,000 Facebook users. Most stud-
ies on Facebook include advertisement related topics,
however. For example, Bakshy and colleagues (2012a)
study the effectiveness of social cues (i.e., peers’ asso-
ciations with a brand) on consumer responses to ads
for more than 5 million Facebook users. In another
study, Bakshy et al. (2012b) look at the randomized
exposure of links shared by peers of more than 250
million Facebook users and how it affects information
sharing behavior on Facebook. In all of these cases,
researchers had to work closely with Facebook to implement the process of randomization at the individual level of Facebook users. But this would also mean that
experimenting on Facebook would be limited to those
with industry contacts.
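The A/B-testing procedure described earlier — randomly serving two versions of the same ad and comparing response rates — can be sketched as follows. This is a hypothetical illustration, not any platform's actual API; the user ids and click counts are invented.

```python
import zlib

def assign_variant(user_id: str) -> str:
    """Deterministic 50/50 split: hash the user id and route to A or B.
    A stable hash (crc32) keeps each user in the same arm across visits."""
    return "A" if zlib.crc32(user_id.encode()) % 2 == 0 else "B"

def click_rate(clicks: int, impressions: int) -> float:
    """Share of impressions of a given version that led to a click."""
    return clicks / impressions if impressions else 0.0

# Compare the two versions on made-up outcome data:
# version A drew 60 clicks and version B 45, out of 10,000 views each.
lift = click_rate(60, 10_000) - click_rate(45, 10_000)
print(assign_variant("user_001"), round(lift, 4))
```

The comparison of the two rates is the whole analysis in a simple A/B test; the experiments discussed in this article differ mainly in randomizing clusters of users and adding a placebo arm.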
In the following, we report from recent experiments
that have been conducted without industry collabor-
ation. We aim to showcase how social media platforms
like Facebook or Twitter can be used by scholars or
government agencies to implement experiments in a
relatively straightforward manner. Ryan (2012) was
one of the first social scientists to use Facebook’s ad-
vertisement facilities to conduct research without hav-
ing to collaborate with Facebook directly. Similar to
the empirical application we report in this article, he
randomly assigned clusters of individuals to different
advertisements instead of randomizing on the user
level (see Teresi and Michelson 2015 for an alternative approach; see note 2). To do so, he used Facebook's advertisement facilities, which allow targeting ads to Facebook
users on a number of demographic characteristics,
such as age and gender, but also zip code (see also Ryan
and Brockman 2012). Based on these user parameters,
researchers can predetermine clusters of users and ran-
domly allocate them to varying ad content. This is what
Ryan did in his study. In particular, he looked at how
advertisements that evoke emotions such as anger or
anxiety affect information seeking behavior. He then
used cluster-level click-through-rates as a dependent
variable. Across three studies and more than 1.8 mil-
lion impressions grouped into 360 clusters in total, he
found consistent evidence that political advertisements
that ought to evoke anger increase users’ proclivity
to click through to a website. In other words, anger
makes us click. A similar methodological approach
was also used by Ryan and Brader (2017), who studied
partisan selective exposure to election messages during
the 2012 US presidential elections, using a total of 846
clusters of Facebook users.
Similar applications also exist in fields like market-
ing or economics. Aral and Walker (2014) for instance
report from an experiment conducted with 1.3 million
users of a Facebook application to test how social in-
fluence in online networks affects consumer demand.
They experimentally manipulated users’ network em-
beddedness and the strength of their social ties, finding
that both increase influence in social networks. Wang
and Chang (2013) studied a similar topic looking at
whether social ties and product-related risks influ-
ence purchase intentions of Facebook users that were
recruited via an online application. Although they
found that online tie strength leads to higher purchase
intentions, product-related risks had no direct effect on
purchase intentions.
In another Facebook study by Broockman and
Green (2013), users were exposed to different types of
political candidate advertisements over the course of 1
week. Like Ryan (2012), they randomized clusters of
individuals, instead of individuals themselves. However,
since they had access to public voter registries, they
targeted 32,000 voters, which they assigned to 1,220
clusters across 18 age ranges, 34 towns, and 2 genders.
These clusters of Facebook users were assigned to one
of four experimental conditions: a control group with
no advertisement, and three different types of adver-
tisements that ought to increase Facebook users’ name
recognition of the candidate. The innovation that
Broockman and Green’s study introduced was that they
used contact information from public voter records to
gather public opinion data from these individual voters
through telephone interviews later on. Since the cre-
ation of clusters was done on the basis of assigning
32,000 registered voters to 1,220 clusters, they had
detailed contact information of registered voters that
belong to each respective cluster. In other words, they
were able to link cluster assignment on Facebook with
attitudinal outcomes from survey data, such as can-
didate’s name recognition, positive impression of the
candidate, whether people voted for the candidate, and
whether they recall having seen the ad.
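Broockman and Green's cluster construction can be sketched as the Cartesian product of the targeting strata. The labels below are invented, and 18 × 34 × 2 yields 1,224 possible cells — slightly more than the 1,220 clusters they report, so a few cells were presumably unusable; the sketch only shows the mechanics.

```python
import itertools
import random

# Targeting strata (labels invented for illustration).
age_ranges = [f"age_band_{i}" for i in range(18)]  # 18 age ranges
towns = [f"town_{i}" for i in range(34)]           # 34 towns
genders = ["female", "male"]                       # 2 genders

# Each combination of strata defines one targetable cluster (cell).
cells = list(itertools.product(age_ranges, towns, genders))
print(len(cells))  # 1224

# Assign every cell to one of four conditions: a no-ad control
# plus three advertisement variants (names are placeholders).
random.seed(7)
arms = ["control_no_ad", "ad_variant_1", "ad_variant_2", "ad_variant_3"]
assignment = {cell: random.choice(arms) for cell in cells}
```

Because each cell maps back to known strata of registered voters, cluster assignment can later be linked to individual-level survey outcomes, which is the innovation the study highlights.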
Social media experiments also exist outside of Facebook. Gong et al. (2017) conducted a large-scale experiment on the social microblogging service Sina
Weibo (i.e., the “Chinese Twitter”). They examined
the return-on-investment of company tweets on view-
ers of TV shows. To do so, they randomly assigned
[Note 2: Teresi and Michelson (2015) randomized individual Facebook users with whom they connected via a Facebook profile (i.e., becoming “friends”) into experimental conditions. While one group of “friends” received mainly apolitical status updates from the host account, the treatment group received political messages about the upcoming 2010 elections. After the election, authors searched for each “friend” in the state list of registered voters using information provided via Facebook’s profile (i.e., names, age, gender, etc.) to examine whether these online get-out-the-vote messages distributed through social media encouraged subjects to vote.]
98 different TV shows into three experimental condi-
tions: (1) the control group, where there is no tweet
sent out about the particular TV shows; (2) the tweet
condition, where each show is tweeted by the com-
pany; and (3) a tweet and retweet condition, where
each show is tweeted by a company and retweeted
by a so-called “influencer.” TV show viewing percent-
ages were used as an outcome measure, finding that
both tweeting and tweeting coupled with retweeting
boost TV show views relative to the shows in the con-
trol group. In other words, social media efforts of TV
companies result in a significant increase in viewers.
They also found that retweets of influencers are more
effective in generating new viewers than tweets by the
companies. In another Twitter experiment, Coppock,
Guess, and Ternovski (2016) looked at online mobil-
ization behavior. In particular, authors were interested
in whether Twitter users could be encouraged to sign
a petition. To do so, they randomly divided 8,500 fol-
lowers of a US nonprofit advocacy group into three ex-
perimental conditions. In the first stage, the nonprofit
organization published a tweet in which its followers
were encouraged to sign a petition. All three groups
were exposed to the public tweet. In the second stage,
a treatment condition received a direct message with
a similar request, referring to them as “followers,” an-
other treatment condition got the same direct message,
but they were referred to as “organizers,” whereas the
control group received no direct message. On this basis,
authors checked whether subjects either retweeted or
signed the petition.
Other notable examples of using social media plat-
forms like Twitter to implement experiments involve
studying social media censorship in China (King, Pan,
and Roberts 2013), the effectiveness of repeated pol-
itical messages on twitter followers of politicians
(Kobayashi and Ichifuji 2015), the effectiveness of
news media (King, Schneer, and White 2017), or racist
online harassment (Munger 2017).
The aforementioned examples provide rich inspir-
ation for conducting social media experiments in public
administration research. Social media experiments have
the distinct advantage that they combine the internal
validity of experiments with an increased realism and
external validity. In this sense, they are a subtype of
conventional field experiments, which are conducted in
an online environment where people interact via social
media. In addition, social media experiments can easily
be conducted on large-scale samples using a variety
of unobtrusive outcome measures to assess respond-
ents’ revealed behaviors. They are therefore a viable
option to complement survey-based experiments that
often employ stated preferences (i.e., attitudes, evalu-
ative judgments, or behavioral intentions), which make
up the majority of experiments implemented in
public administration to date (Li and Van Ryzin 2017).
People increasingly interact with government and
government agencies using social media platforms
like Twitter and Facebook (Mergel 2012). Scholars
and government agencies alike can implement social
media experiments to test the effectiveness of using
these relatively new channels of communication and
information provision. Examples may include assess-
ing whether providing information on social media
about the performance of government agencies affects
citizen trust in those agencies, or may lead citizens to
desirable behaviors, including coproduction. Indeed,
implementing such innovative experimental designs in
the context of online citizen–government interactions
may be a viable avenue for future experimentation in
public administration research. In the following, we
introduce an empirical application of an online social media experiment that examines how government funding and charitable donation intentions
interact.
Empirical Application: How Government
Funding and Private Donations Interact
An impressive body of literature has emerged from various disciplines that focuses on the issue of whether government funding would displace (crowding-out effect) or leverage (crowding-in effect) private contributions to nonprofit organizations. The impact of
government funding on private donations to non-
profits largely rests on how potential donors and
nonprofits themselves strategically respond to gov-
ernment funding of nonprofit activities (Tinkelman
2010; Lu 2016). We focus on the strategic responses
of private donors in the present analysis: how
would donors change their levels of charitable giv-
ing when a nonprofit organization is supported by
government funding. The literature distinguishes
two contrasting models of how government-funded
nonprofits are perceived, which ultimately affects
charitable donations via processes of crowding-in
or crowding-out.
Early crowding-out theory assumes that private
donors are altruistic and care about the optimal level
of public goods provision. Donors as taxpayers would
consider government funding as their contributions
through taxation and thus perceive it as a perfect sub-
stitute for voluntary donations. In this way, increases
in government support would lower the need for add-
itional private contributions. Therefore, when a non-
profit receives more support from the government,
private donors would consciously reduce their giving
to this organization. As a result, there is a dollar-for-
dollar replacement between private giving and govern-
ment funding (Roberts 1984; Warr 1982). This pure
altruism assumption was later challenged by Andreoni’s
(1989) model of impure altruism, which predicts that
donors are also motivated by a “warm-glow”—the
utility from the act of giving to help others. In this im-
pure altruism line of reasoning, government funding
and private giving would not completely substitute
each other. As a result, there may exist a crowding-
out effect between these two funding sources, but the
magnitude of the effect is less than the dollar-for-dollar
model that pure altruism would predict.
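The competing predictions reviewed above can be summarized in comparative-statics form. The notation below is ours, not the article's: let D denote private donations to a nonprofit and G the government funding it receives.

```latex
% Pure altruism: perfect substitution, dollar-for-dollar crowding-out
\frac{\partial D}{\partial G} = -1
% Impure altruism ("warm glow"): partial crowding-out
-1 < \frac{\partial D}{\partial G} < 0
% Crowding-in (signaling / unmet need): funding raises donations
\frac{\partial D}{\partial G} > 0
```

The null result the experiment reports corresponds to the remaining case, a derivative of approximately zero, matching neither model's prediction.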
On the other hand, private donors might consider
government funding favorably and become more will-
ing to contribute to government-funded nonprofits be-
cause they perceive them as more competent and/or
needy. Crowding-in theory proposes that government
funding may stimulate charitable contributions in two
ways. First, when donors do not have complete know-
ledge concerning beneficiary nonprofits and their pro-
grams, government funding serves as a direct signal of
the nonprofit’s quality and reliability (Rose-Ackerman
1981). Indeed, to be funded by government agencies,
nonprofit organizations have to go through a competitive merit-based selection process and meet financial and programmatic requirements (Lu 2015; Suárez
2011). Therefore, the receipt of government funding
can be perceived by uninformed donors as an indicator
of trustworthiness and competence. Second, govern-
ment funding also is considered as a signal of unmet
social needs, calling for more donor attention and fur-
ther facilitating the leveraging effect of government
funding (Brooks 1999; Okten and Weisbrod 2000).
There exists a rich body of empirical studies in support of the crowding-out hypothesis (e.g., Andreoni
1993; Andreoni and Payne 2011; Brooks 2000; De
Wit, Bekkers, and Broese van Groenou 2017; Dokko
2009; Hughes, Luksetich, and Rooney 2014; Kingma
1989; Steinberg 1987) and the crowding-in model (e.g.,
Borgonovi and O’Hare 2004; De Wit and Bekkers
2016; Heutel 2014; Khanna and Sandler 2000; Lu
2016; Okten and Weisbrod 2000; Smith 2007). Most
recently, De Wit and Bekkers (2016) and Lu (2016)
respectively employed meta-analytical techniques to
aggregate existing empirical evidence on crowding-in
and crowding-out. Both studies find a significant
positive association between government funding and
private donations, even though the magnitude of the
relationship is trivial.
The above-mentioned body of literature on
crowding-out and crowding-in greatly advances our
understanding of the complex interaction between
government funding and private donations. However,
it generally suffers from two limitations. First, the ma-
jority of existing empirical literature testing the crowd-
ing-in or crowding-out effect employs observational
data. Although observational studies enable scholars
to explore the association between the two revenue
sources, drawing causal inferences remains challenging
(Blom-Hansen, Morton, and Serritzlew 2015; James,
Jilke, and Van Ryzin 2017a). Second, both crowding-in
and crowding-out lines of reasoning assume that po-
tential donors possess perfect information about the
nonprofits they might want to donate to, especially
whether these organizations are funded by govern-
ment and to what extent. However, this assumption
might not be true in the real world (De Wit et al. 2017;
Horne, Johnson, and Van Slyke 2005; Krasteva and
Yildirim 2013). For example, Horne, Johnson, and Van
Slyke (2005) used public opinion data to demonstrate
that individual donors do not necessarily have com-
plete information on the financial structures of their
beneficiary organizations and subsequently link dona-
tion decisions to the levels of government funding.
In recent years, scholars began to employ ex-
perimental designs to address these two limitations
(i.e., endogeneity and imperfect information) in the
crowding-out and crowding-in literature (e.g., Eckel,
Grossman, and Johnston 2005; Kim and Van Ryzin
2014; Ottoni-Wilhelm, Vesterlund, and Xie 2017;
Wasif and Prakash 2017). Existing experimental stud-
ies testing crowding-out/-in effects usually include
manipulations of the existence and the level of direct
government support to beneficiaries, and then measure
variations in subjects’ donations. Methodologically,
experimental designs are advantageous over observa-
tional settings in terms of their internal validity when
testing crowding-out/-in effects because experimental
studies create a controlled environment of information
exchange to rule out confounding factors. As a result,
scholars are provided with more direct evidence of the
causal linkage between government support and charitable giving (Blom-Hansen, Morton, and Serritzlew
2015; James, Jilke, and Van Ryzin 2017a). Table 1
reviews the experimental studies of crowding-out/-in
effects to date, including type, setting, and results.
As can be seen in Table 1, most experimental stud-
ies employ laboratory experimental designs, primarily
using two specific experimental paradigms. One is the
public goods game (e.g., Andreoni 1993; Eckel et al.
2005; Isaac and Norton 2013) and the other type of
experimental setting employed in the literature is the
dictator game (e.g., Blanco, Lopez, and Coleman 2012;
Konow 2010; Korenok, Millner, and Razzolini 2012).
Despite different experimental paradigms, most of the
laboratory experiments report a partial crowding-out
effect between government funding and charitable
contributions (see also De Wit and Bekkers 2016).
In addition to laboratory experiments, there are a
few experimental studies that employ survey experi-
ments to test the crowding-in and crowding-out prop-
ositions. For example, Kim and Van Ryzin (2014)
conducted an online survey experiment with 562
participants and found that an arts nonprofit with
government funding would receive about 25% less pri-
vate donations than an identical hypothetical organ-
ization without government funding. In contrast, Wasif
and Prakash’s (2017) survey experiment with 530
respondents in Pakistan reported that federal funding
would not change respondents’ willingness to donate
to a hypothetical faith-based educational nonprofit.
When meta-analyzing results from experimental
studies only, De Wit and Bekkers (2016) find substan-
tially different results compared to observational stud-
ies, with experimental studies showing a considerable
crowding-out effect and nonexperimental studies a
very small crowding-in effect. There are two potential
explanations for these differences. A first possibility
would be that observational studies on crowding-
out/-in are plagued by endogeneity, and hence the
discrepancies in results may be a product of the com-
paratively poor internal validity of observational re-
search designs. A second possibility would be that
findings predominantly produced within stylized settings such as economic games or hypothetical scenarios may hardly extrapolate beyond the laboratory. Or in other words, people may behave differently in lab and survey experiments than in the real world. Indeed, a
recent systematic comparison between laboratory and
field experiments concluded that the ability of stylized
experiments to extrapolate social preferences from the
lab to the field is limited, at best …