This is the book to use for this assignment. You probably know websites where you can get access to e-books.
Book:
Making Sense of the Social World: Methods of Investigation
Fifth Edition
ISBN: 978-1-4833-8061-2
Class:
Applied Research Methods for Policy & Management –
PAD4723
I am going to try to walk you through the questions and how to approach this assignment. It basically amounts to answering the questions below using material from the book.
Questions:
1. Identification of the research question(s), objective(s), and
hypothesis, if available.
2. Brief discussion of the linkage between the research
question(s) and the broader literature reviewed.
3. Identification of the dependent and major independent
variables and their measurement.
4. Identification of data source(s), unit of analysis, and type of data (time series, cross-sectional, etc.).
5. Identification and brief discussion of the main research
methods used.
6. Brief discussion of the main research results and their
generalizability.
7. Brief discussion of the overall quality and organization of the
article.
For question #1:
To answer question 1, I would read the article first and then identify what the research question(s), objective(s), and hypothesis are.
For question #2:
To answer question 2, it is pretty much self-explanatory: identify the research question(s) and explain how they link to the broader literature reviewed in the article.
For question #3:
To answer question 3, use this link https://www.simplypsychology.org/variables.html to learn about dependent and independent variables, and then find the dependent and independent variables in the article.
For question #4:
To answer question 4, I would first identify the data source, that is, what the researchers used to do this research (here, ads on Facebook). The unit of analysis is the entity the study actually examines; in this article it is arguably the cluster of Facebook users, since randomization and the outcome measures are at the cluster level. The type of data refers to its structure in time: time series data follow the same cases over multiple time points, while cross-sectional data observe many cases at one point in time, as in this experiment.
For question #5:
To answer question 5, I would find out which research methods were used. Examples of research methods covered in class include quantitative and qualitative methods of analysis.
For question #6 and #7:
These two questions are pretty much self-explanatory.
Article
Using Large-Scale Social Media Experiments
in Public Administration: Assessing Charitable
Consequences of Government Funding of
Nonprofits
Sebastian Jilke*, Jiahuan Lu*, Chengxin Xu*,
Shugo Shinohara†
*Rutgers University; †International University of Japan
Abstract
In this article, we introduce and showcase how social media can be used to implement experiments in public administration research. To do so, we pre-registered a placebo-controlled field experiment and implemented it on the social media platform Facebook. The purpose of the experiment was to examine whether government funding to nonprofit organizations has an effect on charitable donations. Theories on the interaction between government funding and charitable donations stipulate that government funding of nonprofit organizations either decreases (crowding-out) or increases (crowding-in) private donations. To test these competing theoretical predictions, we used Facebook's advertisement facilities and implemented an online field experiment among 296,121 Facebook users nested in 600 clusters. Through the process of cluster-randomization, groups of Facebook users were randomly assigned to different nonprofit donation solicitation ads, experimentally manipulating information cues of nonprofit funding. Contrary to theoretical predictions, we find that government funding does not seem to matter; providing information about government support to nonprofit organizations neither increases nor decreases people's propensity to donate. We discuss the implications of our empirical application, as well as the merits of using social media to conduct experiments in public administration more generally. Finally, we outline a research agenda of how social media can be used to implement public administration experiments.
Introduction
Using social media has become an increasingly common activity as platforms like Facebook have become ubiquitous. For instance, to date there are 218 million Facebook users in the United States (Statista 2017), with one-third of the US adult population logging into Facebook at least once a day (Public Religion Research Institute 2012). Social media platforms like Facebook or Twitter provide brand-new opportunities to implement online experiments studying citizen–state interactions. However, within public administration this potential has been largely untapped.1 Therefore, the purpose of this article is to introduce and showcase how social media, particularly Facebook, can be used to implement experiments in public administration research.
As a consequence of digitization, large-scale social media experiments have been used increasingly in
1 Despite a number of important studies on social media use in government agencies and online citizen–government interactions (e.g., Mergel 2012; Grimmelikhuijsen and Meijer 2015; Porumbescu 2015; Im et al. 2014), to date no experimental applications on social media platforms exist in public administration scholarship (for a notable exception see a recent conference contribution by Olsen 2017).

Address correspondence to the author at [email protected]
© The Author(s) 2018. Published by Oxford University Press on behalf of the Public Management Research Association. All rights reserved. For permissions, please e-mail: [email protected]
Journal of Public Administration Research and Theory, 2019, 627–639
doi:10.1093/jopart/muy021
Advance Access publication May 12, 2018
Downloaded from https://academic.oup.com/jpart/article-abstract/29/4/627/4995543 by Florida International University Library Serials user on 18 April 2020
neighboring fields like political science, economics, or marketing (for an overview see Aral 2016). Examples include a wide range of topics, including political behavior (e.g., Bond et al. 2012), advertising (e.g., Bakshy et al. 2012a), product pricing (e.g., Ajorlou, Jabdabaie, and Kakhbod 2016), information propagation (e.g., Bakshy et al. 2012b), or emotional contagion (e.g., Kramer, Guillory, and Hancock 2014). This trend is encouraging, but it misses an important component of people's online behavior, namely citizen–state interactions. Indeed, the advent of e-government and an increased presence of government agencies on social media platforms have led to the rise of online citizen–state interactions (Thomas and Streib 2003; Wukich and Mergel 2015). These interactions range from gathering information, for example, about how to fill out online applications, to crisis communication, and even complaints about poor services. In this study, we illustrate how to implement large-scale social media experiments on Facebook by examining interactions between nonprofit organizations seeking donations and their potential donors. In particular, we assess whether changes in government funding affect the levels of charitable income of nonprofit organizations.
Theoretically, two competing mechanisms have been distinguished in explaining why levels of government funding may have an effect on private donations to nonprofit organizations. The crowding-out perspective argues that government funding would decrease people's willingness to donate because donors, as taxpayers, perceive government funding as a substitution for their donations (Andreoni 1989; Warr 1982; Kim and Van Ryzin 2014). If they contributed already via taxes, why should they give in addition to them? Therefore, the crowding-out model predicts a decrease in private donations as a result of government funding. In contrast, the signaling model of crowding-in suggests that government funding is used by potential donors as an imperfect signal of an organization's effectiveness (e.g., Borgonovi and O'Hare 2004; Rose-Ackerman 1981). In the absence of complete information about how a nonprofit organization will perform and operate with the funds at its disposal, government funding serves as an organization's "quality stamp," signaling that the organization is not only trustworthy but also effective because it managed to receive competitive government grants. The crowding-in perspective, therefore, predicts an increase in private donations as a result of government funding.
In this study, we test these somewhat competing claims in the context of a large-scale social media experiment. Conducted in a naturalistic setting, social media experiments, similar to conventional field experiments, combine high levels of internal validity with external validity. This allows us to test whether government funding crowds in, or crowds out, private donations. We implemented a field experiment on the social media platform Facebook by assigning clusters of approximately 300,000 Facebook users to donation solicitations of groups of real food banks. Using a pre-registered, placebo-controlled between-subjects design, groups of users were randomly allocated to three experimental conditions: (1) the control group (i.e., no funding information), (2) the placebo group (i.e., donation funded), and (3) the treatment group (i.e., government funded). As outcome measures, we monitored people's revealed donation intentions via their click-through rates (i.e., the frequency with which people clicked on the links in the ad solicitations), but also other behavioral measures such as website visits. Consistently, we find no direct evidence for either model, suggesting that public and private funding streams of nonprofit organizations do not seem to interact in the real world.
In addition to these findings, we provide an overview of social media experiments and how they can be implemented in public administration research, including an agenda for studying online citizen–state interactions using large-scale social media experiments. The remainder of the study is as follows: in the next section, we review empirical applications of social media experiments in neighboring fields to provide an overview of the applicability of social media to conduct experiments in public administration. We then discuss our empirical application by first reviewing the literature on the crowding-out and crowding-in hypotheses. On this basis, we introduce our experimental research design and subsequently report the results of the experiment. In the final section, we draw implications for public administration research and practice from both our empirical application and the review of innovative social media experiments.
Conducting Social Media Experiments
Before turning to the empirical application in this article, we provide an overview of the potential of social media to conduct experiments. Recent years have seen an increase in online field experiments implemented on social media platforms (Aral 2016). Indeed, companies like Amazon, Google, and Facebook constantly perform small experiments on their clients, for example through randomly altered website content, where two different versions of the same website, online ad, or any other online parameter are randomly assigned to service users—a procedure marketers commonly refer to as A/B testing. In the past, social scientists have collaborated with major social media platforms to implement experiments. For example,
Bond et al. (2012) implemented a 61-million-person political mobilization experiment on Facebook; similarly, Kramer, Guillory, and Hancock (2014) implemented an online experiment to study emotional contagion among 690,000 Facebook users. Most studies on Facebook involve advertisement-related topics, however. For example, Bakshy and colleagues (2012a) study the effectiveness of social cues (i.e., peers' associations with a brand) on consumer responses to ads for more than 5 million Facebook users. In another study, Bakshy et al. (2012b) look at the randomized exposure of links shared by peers of more than 250 million Facebook users and how it affects information-sharing behavior on Facebook. In all of these cases, researchers had to work closely with Facebook to implement the process of randomization at the individual level of Facebook users. This also means that experimenting on Facebook in this way is limited to those with industry contacts.
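The A/B-testing procedure described above, randomly serving one of two versions of some content and comparing response rates, can be sketched in a few lines. All names and numbers below are hypothetical, not taken from any of the studies cited.

```python
import random
from collections import Counter

def click_through_rate(clicks, impressions):
    """CTR = clicks / impressions (0 if there were no impressions)."""
    return clicks / impressions if impressions else 0.0

# Hypothetical campaign: 10,000 users, each randomly shown version A or B.
rng = random.Random(42)  # fixed seed so the assignment is reproducible
assignment = {user: rng.choice(["A", "B"]) for user in range(10_000)}

arms = Counter(assignment.values())  # impressions per version

# Made-up click totals reported back by the platform:
ctr_a = click_through_rate(clicks=60, impressions=arms["A"])
ctr_b = click_through_rate(clicks=90, impressions=arms["B"])
lift = ctr_b - ctr_a  # the difference an A/B test estimates
```

Real platforms handle the assignment and reporting themselves; the point is simply that the estimand is a difference in click-through rates between randomly formed groups.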
In the following, we report on recent experiments that have been conducted without industry collaboration. We aim to showcase how social media platforms like Facebook or Twitter can be used by scholars or government agencies to implement experiments in a relatively straightforward manner. Ryan (2012) was one of the first social scientists to use Facebook's advertisement facilities to conduct research without having to collaborate with Facebook directly. Similar to the empirical application we report in this article, he randomly assigned clusters of individuals to different advertisements instead of randomizing at the user level (see Teresi and Michelson 2015 for an alternative approach2). To do so, he used Facebook's advertisement facilities, which allow targeting ads to Facebook users on a number of demographic characteristics, such as age and gender, but also zip code (see also Ryan and Brockman 2012). Based on these user parameters, researchers can predetermine clusters of users and randomly allocate them to varying ad content. This is what Ryan did in his study. In particular, he looked at how advertisements that evoke emotions such as anger or anxiety affect information-seeking behavior. He then used cluster-level click-through rates as a dependent variable. Across three studies and more than 1.8 million impressions grouped into 360 clusters in total, he found consistent evidence that political advertisements designed to evoke anger increase users' proclivity to click through to a website. In other words, anger makes us click. A similar methodological approach was also used by Ryan and Brader (2017), who studied partisan selective exposure to election messages during the 2012 US presidential election, using a total of 846 clusters of Facebook users.
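Cluster randomization of this kind can be sketched as follows: predefine cells from the platform's targeting parameters, randomize ad content across cells, and analyze cluster-level click-through rates. The demographic categories below are invented for illustration (they happen to yield 360 cells, the same total count as in Ryan's three studies, but the cells themselves are not his).

```python
import itertools
import random

# Invented targeting parameters; Facebook's real options are richer.
ages = ["18-24", "25-34", "35-44", "45-54", "55+"]
genders = ["female", "male"]
zip_codes = [f"112{i:02d}" for i in range(36)]  # 36 hypothetical zip codes

# Each combination of targeting parameters defines one cluster (cell).
clusters = list(itertools.product(ages, genders, zip_codes))  # 5 * 2 * 36 = 360

# Randomize ad content at the cluster level, not the user level.
rng = random.Random(7)
condition = {cell: rng.choice(["anger_ad", "neutral_ad"]) for cell in clusters}

def mean_ctr(ctr_by_cluster, arm):
    """Average the cluster-level click-through rates within one arm."""
    vals = [ctr for cell, ctr in ctr_by_cluster.items() if condition[cell] == arm]
    return sum(vals) / len(vals)
```

Because treatment varies only across cells, the analysis has to respect that the cluster, not the individual user, is the unit of randomization.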
Similar applications also exist in fields like marketing and economics. Aral and Walker (2014), for instance, report on an experiment conducted with 1.3 million users of a Facebook application to test how social influence in online networks affects consumer demand. They experimentally manipulated users' network embeddedness and the strength of their social ties, finding that both increase influence in social networks. Wang and Chang (2013) studied a similar topic, looking at whether social ties and product-related risks influence purchase intentions of Facebook users who were recruited via an online application. Although they found that online tie strength leads to higher purchase intentions, product-related risks had no direct effect on purchase intentions.
In another Facebook study, by Broockman and Green (2013), users were exposed to different types of political candidate advertisements over the course of 1 week. Like Ryan (2012), they randomized clusters of individuals instead of individuals themselves. However, since they had access to public voter registries, they targeted 32,000 voters, whom they assigned to 1,220 clusters across 18 age ranges, 34 towns, and 2 genders. These clusters of Facebook users were assigned to one of four experimental conditions: a control group with no advertisement, and three different types of advertisements intended to increase Facebook users' name recognition of the candidate. The innovation that Broockman and Green's study introduced was that they used contact information from public voter records to gather public opinion data from these individual voters through telephone interviews later on. Since the creation of clusters was done on the basis of assigning 32,000 registered voters to 1,220 clusters, they had detailed contact information for the registered voters belonging to each respective cluster. In other words, they were able to link cluster assignment on Facebook with attitudinal outcomes from survey data, such as the candidate's name recognition, positive impression of the candidate, whether people voted for the candidate, and whether they recall having seen the ad.
Social media experiments also exist outside of Facebook. Gong et al. (2017) conducted a large-scale experiment on the social microblogging service Sina Weibo (i.e., the "Chinese Twitter"). They examined the return on investment of company tweets on viewers of TV shows. To do so, they randomly assigned
2 Teresi and Michelson (2015) randomized individual Facebook users with whom they connected via a Facebook profile (i.e., becoming "friends") into experimental conditions. While one group of "friends" received mainly apolitical status updates from the host account, the treatment group received political messages about the upcoming 2010 elections. After the election, the authors searched for each "friend" in the state list of registered voters using information provided via Facebook's profile (i.e., names, age, gender, etc.) to examine whether these online get-out-the-vote messages distributed through social media encouraged subjects to vote.
98 different TV shows into three experimental conditions: (1) the control group, where no tweet is sent out about the particular TV show; (2) the tweet condition, where each show is tweeted about by the company; and (3) a tweet-and-retweet condition, where each show is tweeted about by the company and retweeted by a so-called "influencer." TV show viewing percentages were used as an outcome measure, and the authors found that both tweeting alone and tweeting coupled with retweeting boost TV show views relative to the shows in the control group. In other words, the social media efforts of TV companies result in a significant increase in viewers. They also found that retweets by influencers are more effective in generating new viewers than tweets by the companies. In another Twitter experiment, Coppock, Guess, and Ternovski (2016) looked at online mobilization behavior. In particular, the authors were interested in whether Twitter users could be encouraged to sign a petition. To do so, they randomly divided 8,500 followers of a US nonprofit advocacy group into three experimental conditions. In the first stage, the nonprofit organization published a tweet in which its followers were encouraged to sign a petition. All three groups were exposed to the public tweet. In the second stage, one treatment condition received a direct message with a similar request, referring to recipients as "followers"; another treatment condition got the same direct message, but recipients were referred to as "organizers"; the control group received no direct message. On this basis, the authors checked whether subjects either retweeted or signed the petition.
Other notable examples of using social media platforms like Twitter to implement experiments involve studying social media censorship in China (King, Pan, and Roberts 2013), the effectiveness of repeated political messages on Twitter followers of politicians (Kobayashi and Ichifuji 2015), the effectiveness of news media (King, Schneer, and White 2017), and racist online harassment (Munger 2017).
The aforementioned examples provide rich inspiration for conducting social media experiments in public administration research. Social media experiments have the distinct advantage that they combine the internal validity of experiments with increased realism and external validity. In this sense, they are a subtype of conventional field experiments, conducted in an online environment where people interact via social media. In addition, social media experiments can easily be conducted on large-scale samples using a variety of unobtrusive outcome measures to assess respondents' revealed behaviors. They are therefore a viable option to complement survey-based experiments, which often employ stated preferences (i.e., attitudes, evaluative judgments, or behavioral intentions) and which make up the majority of experiments implemented in public administration to date (Li and Van Ryzin 2017).
People increasingly interact with government and government agencies using social media platforms like Twitter and Facebook (Mergel 2012). Scholars and government agencies alike can implement social media experiments to test the effectiveness of using these relatively new channels of communication and information provision. Examples may include assessing whether providing information on social media about the performance of government agencies affects citizen trust in those agencies, or leads citizens to desirable behaviors, including coproduction. Indeed, implementing such innovative experimental designs in the context of online citizen–government interactions may be a viable avenue for future experimentation in public administration research. In the following, we introduce an empirical application of an online social media experiment that examines to what extent government funding and charitable donation intentions interact.
Empirical Application: How Government Funding and Private Donations Interact
An impressive body of literature has emerged from various disciplines focusing on the issue of whether government funding displaces (crowding-out effect) or leverages (crowding-in effect) private contributions to nonprofit organizations. The impact of government funding on private donations to nonprofits largely rests on how potential donors and nonprofits themselves strategically respond to government funding of nonprofit activities (Tinkelman 2010; Lu 2016). We focus on the strategic responses of private donors in the present analysis: how would donors change their levels of charitable giving when a nonprofit organization is supported by government funding? The literature distinguishes two contrasting models of how government-funded nonprofits are perceived, which ultimately affects charitable donations via processes of crowding-in or crowding-out.
Early crowding-out theory assumes that private donors are altruistic and care about the optimal level of public goods provision. Donors as taxpayers would consider government funding as their contributions through taxation and thus perceive it as a perfect substitute for voluntary donations. In this way, increases in government support would lower the need for additional private contributions. Therefore, when a nonprofit receives more support from the government, private donors would consciously reduce their giving to this organization. As a result, there is a dollar-for-dollar replacement between private giving and government funding (Roberts 1984; Warr 1982). This pure altruism assumption was later challenged by Andreoni's (1989) model of impure altruism, which predicts that
donors are also motivated by a "warm-glow"—the utility from the act of giving to help others. In this impure altruism line of reasoning, government funding and private giving would not completely substitute for each other. As a result, there may exist a crowding-out effect between these two funding sources, but the magnitude of the effect is less than the dollar-for-dollar replacement that pure altruism would predict.
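The contrast between the two models can be written compactly. The sketch below follows the standard public goods formulation behind the works cited above; the notation is ours, not the article's.

```latex
% Pure altruism: donor i values private consumption x_i and only the
% total supply G of the public good, where T is tax-financed government
% funding and G_{-i} is giving by everyone else.
\max_{x_i,\,g_i}\; u_i(x_i,\,G),
\qquad G = g_i + G_{-i} + T,
\qquad x_i + g_i = m_i - t_i .

% Since only the total G matters, a tax-financed dollar of T displaces
% a dollar of the donor's own gift:
\frac{\partial g_i}{\partial T} = -1
\qquad \text{(dollar-for-dollar crowding-out).}

% Impure altruism (Andreoni 1989): the gift g_i itself enters utility
% as ``warm glow,'' so the substitution is no longer perfect:
\max_{x_i,\,g_i}\; u_i(x_i,\,G,\,g_i)
\quad\Longrightarrow\quad
-1 < \frac{\partial g_i}{\partial T} < 0
\qquad \text{(partial crowding-out).}
```

Crowding-in models, by contrast, let government funding shift donors' beliefs about organizational quality or unmet need, which can make the response to T positive.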
On the other hand, private donors might consider government funding favorably and become more willing to contribute to government-funded nonprofits because they perceive them as more competent and/or needy. Crowding-in theory proposes that government funding may stimulate charitable contributions in two ways. First, when donors do not have complete knowledge concerning beneficiary nonprofits and their programs, government funding serves as a direct signal of the nonprofit's quality and reliability (Rose-Ackerman 1981). Indeed, to be funded by government agencies, nonprofit organizations have to go through a competitive merit-based selection process and meet financial and programmatic requirements (Lu 2015; Suárez 2011). Therefore, the receipt of government funding can be perceived by uninformed donors as an indicator of trustworthiness and competence. Second, government funding is also considered a signal of unmet social needs, calling for more donor attention and further facilitating the leveraging effect of government funding (Brooks 1999; Okten and Weisbrod 2000).
There exists a rich body of empirical studies in support of the crowding-out hypothesis (e.g., Andreoni 1993; Andreoni and Payne 2011; Brooks 2000; De Wit, Bekkers, and Broese van Groenou 2017; Dokko 2009; Hughes, Luksetich, and Rooney 2014; Kingma 1989; Steinberg 1987) and the crowding-in model (e.g., Borgonovi and O'Hare 2004; De Wit and Bekkers 2016; Heutel 2014; Khanna and Sandler 2000; Lu 2016; Okten and Weisbrod 2000; Smith 2007). Most recently, De Wit and Bekkers (2016) and Lu (2016) each employed meta-analytical techniques to aggregate the existing empirical evidence on crowding-in and crowding-out. Both studies conclude that there is a significant positive association between government funding and private donations, even though the magnitude of the relationship is trivial.
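Meta-analyses of this kind pool study-level effect sizes, typically by inverse-variance weighting. A minimal fixed-effect sketch follows; the study estimates in it are invented, not taken from either meta-analysis.

```python
def pooled_effect(effects, variances):
    """Fixed-effect meta-analysis: inverse-variance weighted mean effect."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    var = 1.0 / sum(weights)  # variance of the pooled estimate
    return est, var

# Invented study-level estimates of the crowding effect (+ = crowding-in).
effects = [0.05, -0.10, 0.02, 0.08]
variances = [0.01, 0.04, 0.02, 0.05]

est, var = pooled_effect(effects, variances)
# Precise studies (small variances) dominate the pooled estimate.
```

A pooled estimate that is positive but close to zero would mirror the papers' conclusion of a significant yet trivially small association.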
The above-mentioned body of literature on crowding-out and crowding-in greatly advances our understanding of the complex interaction between government funding and private donations. However, it generally suffers from two limitations. First, the majority of the existing empirical literature testing the crowding-in or crowding-out effect employs observational data. Although observational studies enable scholars to explore the association between the two revenue sources, drawing causal inferences remains challenging (Blom-Hansen, Morton, and Serritzlew 2015; James, Jilke, and Van Ryzin 2017a). Second, both the crowding-in and crowding-out lines of reasoning assume that potential donors possess perfect information about the nonprofits they might want to donate to, especially whether these organizations are funded by government and to what extent. However, this assumption might not hold in the real world (De Wit et al. 2017; Horne, Johnson, and Van Slyke 2005; Krasteva and Yildirim 2013). For example, Horne, Johnson, and Van Slyke (2005) used public opinion data to demonstrate that individual donors do not necessarily have complete information on the financial structures of their beneficiary organizations, nor do they necessarily link donation decisions to the levels of government funding.
In recent years, scholars began to employ experimental designs to address these two limitations (i.e., endogeneity and imperfect information) in the crowding-out and crowding-in literature (e.g., Eckel, Grossman, and Johnston 2005; Kim and Van Ryzin 2014; Ottoni-Wilhelm, Vesterlund, and Xie 2017; Wasif and Prakash 2017). Existing experimental studies testing crowding-out/-in effects usually include manipulations of the existence and the level of direct government support to beneficiaries, and then measure variations in subjects' donations. Methodologically, experimental designs are advantageous over observational settings in terms of their internal validity when testing crowding-out/-in effects because experimental studies create a controlled environment of information exchange to rule out confounding factors. As a result, scholars are provided with more direct evidence of the causal linkage between government support and charitable giving (Blom-Hansen, Morton, and Serritzlew 2015; James, Jilke, and Van Ryzin 2017a). Table 1 reviews the experimental studies of crowding-out/-in effects to date, including type, setting, and results.
As can be seen in Table 1, most experimental studies employ laboratory experimental designs, primarily using two specific experimental paradigms: one is the public goods game (e.g., Andreoni 1993; Eckel et al. 2005; Isaac and Norton 2013), and the other is the dictator game (e.g., Blanco, Lopez, and Coleman 2012; Konow 2010; Korenok, Millner, and Razzolini 2012). Despite the different experimental paradigms, most of the laboratory experiments report a partial crowding-out effect between government funding and charitable contributions (see also De Wit and Bekkers 2016).
In addition to laboratory experiments, there are a few experimental studies that employ survey experiments to test the crowding-in and crowding-out propositions. For example, Kim and Van Ryzin (2014) conducted an online survey experiment with 562 participants and found that an arts nonprofit with
government funding would receive about 25% less in private donations than an identical hypothetical organization without government funding. In contrast, Wasif and Prakash's (2017) survey experiment with 530 respondents in Pakistan reported that federal funding would not change respondents' willingness to donate to a hypothetical faith-based educational nonprofit.
When meta-analyzing results from experimental studies only, De Wit and Bekkers (2016) find substantially different results compared to observational studies, with experimental studies showing a considerable crowding-out effect and nonexperimental studies a very small crowding-in effect. There are two potential explanations for these differences. A first possibility is that observational studies on crowding-out/-in are plagued by endogeneity, and hence the discrepancies in results may be a product of the comparatively poor internal validity of observational research designs. A second possibility is that findings predominantly produced within stylized settings such as economic games or hypothetical scenarios may hardly extrapolate beyond the laboratory. In other words, people may behave differently in lab and survey experiments than in the real world. Indeed, a recent systematic comparison between laboratory and field experiments concluded that the ability of stylized experiments to extrapolate social preferences from the lab to the field is limited, at best (Galizzi and Navarro-Martinez, forthcoming; see also Levitt and List 2007). We respond to these competing interpretations by implementing a naturalistic field experiment using social media.
Method
In the following, we report the experimental design for this study. In doing so, we follow James, Jilke, and Van Ryzin's (2017b) recommended reporting guidelines for experiments in public administration research.
Experimental Design, Subject Recruitment, and Treatment
We designed a large-scale placebo-controlled online
field experiment on the social networking platform
Facebook by purchasing ads using Facebook’s ad-
vertisement facilities. These ads were designed to so-
licit donations to human service nonprofits in New
York City (NYC). We randomly allocated groups of
Facebook users to three different ads of local human
service organizations. Depending on experimental
conditions, these ads included information cues about
(1) government funding (treatment group), (2) donation
funding (placebo group), or (3) no such information
(control group), as depicted in Figure 1.

Table 1. Experimental Studies on Crowding-in and Crowding-out Effects

Reference | Type of experiment | Experimental setting | Subjects | Finding
Andreoni (1993) | Lab | Public goods game | Students | Crowding-out
Chan et al. (1996) | Lab | Public goods game | Students | No effect
Bolton and Katok (1998) | Lab | Dictator game | Students | Crowding-out
Chan et al. (2002) | Lab | Public goods game | Students | Crowding-out
Sutter and Weck-Hannemann (2004) | Lab | Public goods game | Students | Crowding-out
Eckel et al. (2005) | Lab | Public goods game | Students | Crowding-out
Güth et al. (2006) | Lab | Public goods game | Students | Crowding-out
Galbiati and Vertova (2008) | Lab | Public goods game | Students | Crowding-in
Hsu (2008) | Lab | Public goods game | Students | Crowding-out
Reeson and Tisdell (2008) | Lab | Public goods game | Students | Crowding-out
Konow (2010) | Lab | Dictator game | Students | Crowding-out
Blanco et al. (2012) | Lab | Dictator game | Tourists | No effect
Gronberg et al. (2012) | Lab | Public goods game | Students | Crowding-out
Korenok et al. (2012) | Lab | Dictator game | Students | Crowding-out
Isaac and Norton (2013) | Lab | Public goods game | Students | Crowding-out
Lilley and Slonim (2014) | Lab | Contribution decision making | Students | Crowding-out
Galbiati and Vertova (2014) | Lab | Public goods game | Students | Crowding-in
Kim and Van Ryzin (2014) | Survey | Hypothetical vignette | General population sample (US) | Crowding-out
Wasif and Prakash (2017) | Survey | Hypothetical vignette | General population sample (Pakistan) | No effect
Ottoni-Wilhelm et al. (2017) | Lab | Contribution decision making | Students | Crowding-out

Downloaded from https://academic.oup.com/jpart/article-abstract/29/4/627/4995543 by Florida International University Library Serials user on 18 April 2020
Journal of Public Administration Research and Theory, 2019, Vol. 29, No. 4, 633

The
experimental design was pre-registered at the AEA
RCT registry3 and received ethics approval from
Rutgers University’s Institutional Review Board.
We chose to target Facebook users who live in NYC
because this allowed us to leverage local information
about government funding of human service organi-
zations (i.e., real food banks in NYC) in the experi-
mental design. All ads included a donation appeal,
encouraging Facebook users to donate to local food
banks through a webpage they could access by click-
ing on the ad. Once clicked, and depending on the ex-
perimental condition, users were directed to one of
three web pages where Food Banks from NYC4 were
listed and hyperlinked. Facebook users in the control
condition (i.e., no funding information) were directed
to the Rutgers Observatory of Food Banks webpage.
Here, all NYC Food Banks were listed and hyper-
linked. Those subjects who were allocated to the pla-
cebo condition (i.e., donation-funded), were directed
to the Rutgers Observatory of Donation-Funded Food
Banks webpage. The presence of a placebo condition
(i.e., donation-funded) allows us to test whether any
differences between treatment and control is due to the
mere provision of funding information. On this web-
site, all donation-funded5 food banks from NYC were
listed and hyperlinked. In the treatment condition (i.e.,
government-funded), Facebook users were sent to the
Rutgers Observatory of Government-Funded Food
Banks website, where all Food Banks that were classi-
fied as government-funded were listed and hyperlinked.
Unique ads were developed for the experiment.
Figure 2 shows the ads that were randomly presented
to groups of Facebook users. Taking the ad of the con-
trol condition as a baseline, the treatment and placebo
ads look exactly like the control ad, except for four
important differences:
1) A different sponsor was used:
i. Rutgers Observatory of Food Banks for the
control group;
ii. Rutgers Observatory of Donation-Funded
Food Banks for the placebo group;
iii. Rutgers Observatory of Government-Funded
Food Banks for the treatment group.
2) The ad’s headline was changed from “Donate
to Food Banks in your area” to “Donate to gov-
ernment-funded Food Banks in your area,” or
“Donate to donation-funded Food Banks in your
area” respectively.
3) The heading under the image changed from “Food
Banks in NYC” to “Government-funded Food
Banks in NYC,” or “Donation-funded Food Banks
in NYC” respectively.
4) The two-sentence-description used “government-
funded Food Banks in NYC” and “donation-
funded Food Banks in NYC” instead of “Food
Banks in NYC.”
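Under the assumption that the three ads differ only in the elements listed above, the variant texts can be captured in a small configuration sketch (the variable and function names are ours, not the authors'):

```python
# Sketch of the three ad variants; only the funding cue varies across conditions.
AD_VARIANTS = {
    "control": {
        "sponsor": "Rutgers Observatory of Food Banks",
        "headline": "Donate to Food Banks in your area",
        "heading": "Food Banks in NYC",
    },
    "placebo": {
        "sponsor": "Rutgers Observatory of Donation-Funded Food Banks",
        "headline": "Donate to donation-funded Food Banks in your area",
        "heading": "Donation-funded Food Banks in NYC",
    },
    "treatment": {
        "sponsor": "Rutgers Observatory of Government-Funded Food Banks",
        "headline": "Donate to government-funded Food Banks in your area",
        "heading": "Government-funded Food Banks in NYC",
    },
}

def funding_cue(variant: str) -> str:
    """Return the funding cue embedded in a variant's headline ('' for control)."""
    head = AD_VARIANTS[variant]["headline"]
    middle = head.removeprefix("Donate to ").removesuffix("Food Banks in your area")
    return middle.strip()
```

This makes explicit that the treatment manipulation is a single informational cue, with everything else held constant.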
Cluster Randomization
Randomization into experimental conditions was not
performed on the individual level, but on the level of
groups of Facebook users (see also Ryan 2012 for a
similar approach).

3. AEARCTR-0002345 (see https://www.socialscienceregistry.org/trials/2345).
4. The food banks listed on the websites are all real food banks that operate in NYC, as collected through the National Center for Charitable Statistics (NCCS) database.
5. We distinguished donation-funded and government-funded food banks based on the revenue information provided in food banks' latest 990 forms in the NCCS database. For this study, we defined government-funded food banks as the ones that receive government funding in addition to private donations, and donation-funded food banks as the ones that do not receive government funding but do receive private donations.

Figure 1. Experimental Design.

This was done because Facebook's
commercial advertisement facilities do not permit dif-
ferent ads to be randomized on the individual level
(Ryan and Brockman 2012). Instead, users were classi-
fied into groups. This approach makes use of Facebook’s
ability to target users based on demographics. More
precisely, we stratified Facebook users into clusters of
users by zip code, age, and gender. Cluster-randomized
experiments are widely used across the social and med-
ical sciences where groups of individuals—for example,
people living in an area, or classmates—are jointly ran-
domized into experimental conditions (e.g., Campbell
and Walters 2014; Hayes and Moulton 2017). For our
experiment, we randomized demographic clusters of
Facebook users, rather than users nested within these
clusters. Doing so, we first collected all 177 NYC zip
codes and randomly chose 100. In a second step, we
grouped respondents of 18 to 60 years into three age
categories (18–27, 28–38, and 39–60). We composed
these three groups to have an approximately equal
“potential reach” across strata (potential reach refers
to the approximate number of Facebook users that can
be exposed to an ad). Each of the three groups has a
potential reach of about 1.5 to 1.6 million users who
live in NYC. Next, we created 600 clusters by taking
100 zip codes * 3 age categories * 2 genders. These 600
clusters were randomized into one of the three experi-
mental conditions, so that each experimental condition
ended up with 200 clusters with a potential reach of
about 1 million Facebook users6.
An important concern when analyzing data from
cluster-randomized experiments is that clusters vary
in size, so that the conventional difference-in-means
estimator would be biased (Middleton and Aronow
2015). Indeed, the 600 clusters we constructed vary
in (anticipated) size, with a potential reach ranging
from fewer than 1,000 to 110,000 Facebook users
(mean potential reach of approximately 5,000 users).
Therefore, we followed Gerber and Green’s (2012)
suggestion and blocked on cluster size (200 groups of 3
clusters each) in the randomization procedure, so that
the difference-in-means estimator can be used without
risk of bias. The blocked cluster-randomization was
performed using Stata 14.
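As an illustration of the procedure described above (stratification into 600 demographic clusters, then blocked randomization on cluster size), here is a minimal Python sketch; the zip codes and reach figures are simulated placeholders, and the authors' actual randomization was performed in Stata 14:

```python
import itertools
import random

# Illustrative sketch of the blocked cluster randomization (not the authors' code).
# Clusters = 100 zip codes x 3 age bands x 2 genders = 600.
random.seed(2017)

zips = [f"zip{i:03d}" for i in range(100)]          # placeholder zip codes
ages = ["18-27", "28-38", "39-60"]
genders = ["female", "male"]
clusters = [
    {"zip": z, "age": a, "gender": g,
     "reach": random.randint(1_000, 110_000)}       # simulated potential reach
    for z, a, g in itertools.product(zips, ages, genders)
]

# Block on cluster size: sort by potential reach, form 200 blocks of 3 clusters,
# then randomly assign the three conditions within each block.
clusters.sort(key=lambda c: c["reach"])
conditions = ["control", "placebo", "treatment"]
for block_start in range(0, len(clusters), 3):
    assignment = random.sample(conditions, k=3)     # random permutation
    for cluster, condition in zip(clusters[block_start:block_start + 3], assignment):
        cluster["condition"] = condition

# Each condition ends up with exactly 200 clusters of comparable size.
counts = {c: sum(cl["condition"] == c for cl in clusters) for c in conditions}
```

Because every size-sorted block contributes exactly one cluster to each condition, the conditions are balanced on (anticipated) cluster size by construction.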
The actual implementation of the experiment was
done using Facebook’s Adverts Manager interface. We
purchased 600 ads and targeted them to specific demo-
graphic clusters (stratified by zip code, age categories,
and gender). A total of 600 clusters (i.e., 200 per experi-
mental condition) were purchased for $10 each and our
ads were constructed to pay per impression7. All ads ran
on the same day for 24 h, from 6 a.m. EST on August
21, 2017 until 6 a.m. the following day. Although the
sample size of the clusters was pre-determined, the
number of Facebook users who were actually
exposed to the ads was not; however, the potential reach
across experimental conditions was approximately the
same (see footnote 6). This is so because Facebook
uses its own “Optimization for Ad Delivery” algorithm
(which is the same across experimental conditions) to
determine who will be exposed to an ad. In essence,
Facebook users are shown ads which—according to
Facebook’s algorithm—have the highest probability of
resulting in an impression or click.8 The 600 clusters
had a joint potential reach of 2,972,500, but only
296,121 Facebook users were actually exposed to the
ads as a result of Facebook's ad-bidding procedure.
Outcomes
Our primary outcome of interest is subjects' revealed
intention to donate.

6. 988,200 in the control group; 985,300 in the placebo group; 999,000 in the treatment group.
7. This makes a total project cost of $6,000. However, from the purchased $6,000, Facebook ran our ads for a total value of $3,299, or a mean value of $5.50 per cluster.
8. Further information can be found here: https://www.facebook.com/business/help/1619591734742116.

Figure 2. Facebook Ads (Control versus Placebo versus Treatment Conditions).

In other words, we look at whether
the ads encourage people to click on them. We interpret
link clicks as people’s intention to donate because the
ads explicitly solicit donations by encouraging people to
click. Indeed, clicking is commonly seen as a crucial ante-
cedent for purchase intentions in the marketing literature
(Zhang and Mao 2016). Against this background and
given the explicit ad solicitations for donations we used
in the ads (i.e., “click on one of the listed websites to
make a donation”), we assume that ad clicks are a mo-
tivational precursor for donation intentions. Doing so,
we use the so-called unique-outbound-link-click-rate per
cluster as our primary dependent variable. The unique-
outbound-click-rate (also referred to as click-through-
rate) denotes a cluster's unique outbound clicks (i.e.,
the number of people who performed a click that took
them off Facebook-owned property) divided by its actual
reach (i.e., the number of people who saw our ads at
least once). In our case, the outbound click led users to
one of three web pages—depending on the experimental
condition they were assigned to—we have developed for
this study. On these web pages, which were labeled as
either the Rutgers Observatory of Food Banks (control
group), the Rutgers Observatory of Donation-Funded
Food Banks (placebo group), or the Rutgers Observatory
of Government-Funded Food Banks (treatment group), subjects were
informed about the purpose of the study and provided
with a hyperlinked list of either government-funded,
donation-funded, or all food banks in NYC.
As a secondary, though not pre-registered, measure,
we analyze unique page views (i.e., the number of
times people visited their assigned Observatory web-
page at least once) as provided by Google Analytics
for the three web pages. We calculated the percentage
of unique page views relative to the number of people
who saw the ads at least once. This measure serves as
a robustness check for our primary measure of interest,
as it captures the number of users who actually ended
up on the web pages.
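Both outcome measures reduce to simple ratios; a minimal sketch (the figures in the example are hypothetical, not the study's):

```python
# Illustrative computation of the two outcome measures described above.
def click_through_rate(unique_outbound_clicks: int, reach: int) -> float:
    """Unique-outbound-link-click-rate: clicks off Facebook per 100 users reached."""
    return 100 * unique_outbound_clicks / reach

def page_view_rate(unique_page_views: int, reach: int) -> float:
    """Unique page views (as from Google Analytics) per 100 users who saw the ad."""
    return 100 * unique_page_views / reach

# Hypothetical cluster: 5,000 users reached, 24 of them clicked off Facebook.
ctr = click_through_rate(24, 5_000)
```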
Results
The ads we ran on Facebook reached a total of 296,121
NYC Facebook users, resulting in a mean click-through-
rate (CTR) of 0.48 (standard deviation of 0.49) for a
total of 600 clusters (which is our unit of analysis). This
means that of the approximately 300,000 Facebook
users who saw the ads at least once, 0.48% performed
at least one click which was intended to take them off
Facebook and to one of the respective Observatory web
pages that were designed for this study.
To determine whether we find evidence for a crowd-
ing-out/-in effect of government funding on donation
intentions, we analyzed whether there are systematic
differences in users’ click-through-rates between experi-
mental conditions. The left panel in Figure 3 depicts the
results of tests on the equality of means between control,
placebo (i.e., donation), and treatment (i.e., government)
groups. Independent two-sample t-tests were conducted
to compare (1) control and treatment conditions (mean
difference of 0.00, p = 0.99; Cohen’s d = 0.00), (2) pla-
cebo and treatment conditions (mean difference of 0.06,
p = 0.21; Cohen’s d = −0.13), and (3) control and placebo
conditions (mean difference of −0.06, p = 0.17; Cohen’s
d = −0.14) (see also Table A1 in the Appendix). In sum,
all three experimental conditions are statistically indistin-
guishable from each other in terms of their CTRs. In add-
ition, they are of trivial magnitude with very small values
in terms of effect sizes (i.e., Cohen’s d). This means that we
neither find support for the crowding-out nor the crowd-
ing-in hypothesis. In addition, there is also no significant
difference between control and placebo conditions.
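The equality-of-means tests reported above can be mimicked with a short helper (a sketch only; we substitute a normal approximation for the t distribution, which is adequate at roughly 200 clusters per group):

```python
from statistics import NormalDist, mean, stdev

def two_sample_test(x, y):
    """Difference in means, Cohen's d (pooled SD), and a two-sided p value.

    Uses a normal approximation to the t distribution, reasonable for
    samples of ~200 clusters per group as in this design.
    """
    nx, ny = len(x), len(y)
    diff = mean(x) - mean(y)
    sx, sy = stdev(x), stdev(y)
    pooled_sd = (((nx - 1) * sx**2 + (ny - 1) * sy**2) / (nx + ny - 2)) ** 0.5
    d = diff / pooled_sd                        # Cohen's d (standardized difference)
    se = (sx**2 / nx + sy**2 / ny) ** 0.5       # standard error of the difference
    p = 2 * (1 - NormalDist().cdf(abs(diff / se)))
    return diff, d, p
```

With identical cluster-level CTR distributions in two groups, the helper returns a zero difference, a zero Cohen's d, and a p value of 1, matching the pattern of null results reported here.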
When analyzing unique page views of the
Observatory web pages that were created for this study,
we used page visit data from Google Analytics, which
provided us with the total number of unique page visits
for each experimental condition. Therefore, the unit of
analysis for this measure is individual Facebook users
(n = 296,121). In total, 524 page views were recorded
representing 0.18% of our sample of Facebook users.9
In the control condition, there were a total of 162 page
visits, representing 0.16% of Facebook users that have
been assigned to the control condition. In the pla-
cebo and treatment conditions there were respectively
a total of 168 (0.17%) and 194 (0.19%) page views
(see also the right panel of Figure 3). To assess whether
these small differences are systematic, we performed χ2
tests. The percentage-point difference in page visits
between the control and treatment conditions is 0.03
(χ2(1) = 2.30, p = 0.13; odds ratio = 1.18), and the
difference between the placebo and treatment conditions
is 0.02 (χ2(1) = 1.16, p = 0.28; odds ratio = 1.12).
Between the control and placebo conditions, the
percentage-point difference is even smaller (i.e., 0.01)
(χ2(1) = 1.87, p = 0.66; odds ratio = 1.05). In sum, all
three experimental conditions are statistically
indistinguishable from each other. In addition, the
differences are small in magnitude with small effect
sizes (i.e., odds ratios), confirming the prior results
from analyzing Facebook's CTRs. We can therefore
confidently reject both the crowding-out and the
crowding-in hypotheses.
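The χ2 comparisons of page-view proportions follow the standard formula for a 2×2 contingency table; a minimal sketch with hypothetical counts (the paper's per-condition denominators are not reproduced here):

```python
# Sketch of a 2x2 chi-square test of independence plus odds ratio
# (rows: two experimental conditions; columns: visited page / did not visit).
def chi_square_2x2(a: int, b: int, c: int, d: int):
    """Pearson chi-square statistic (1 df, no continuity correction) and the
    odds ratio for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    odds_ratio = (a * d) / (b * c)
    return chi2, odds_ratio
```

For example, comparing 10 visitors out of 100 against 20 out of 100 yields a χ2 statistic just under the conventional 3.84 significance threshold.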
9. There is an inconsistency between the number of unique outbound clicks reported by Facebook and actual unique page visits as provided by Google Analytics. This may be the result of some Facebook clicks not reaching their intended destination, with users potentially aborting connecting with a website outside Facebook, possibly because of fear of virus-infected websites.
Conclusion
This study has developed and implemented a large-scale
but relatively low-cost online experiment in the context
of public administration research, where about 300,000
Facebook users were randomly assigned to different
donation solicitation ads. The study found no evidence for
either the crowding-in or the crowding-out model. In this
sense, our results differ from prior experimental studies
that stem from the behavioral laboratory or have been
produced in the context of hypothetical vignette designs.
While our study is certainly not the first to find support
for neither the crowding-out nor the crowding-in
hypothesis (see Blanco et al. 2012; Chan et al. 1996;
Wasif and Prakash 2017), we provide, to our knowledge,
the first experimental evidence produced within a
naturalistic context.
However, our results have to be taken with a grain
of salt. Our effective sample size of 600 clusters may
not have been large enough to detect a difference
between groups on our secondary outcome measure.
Future studies are advised to replicate our design using a larger
number of clusters, possibly by stratifying clusters for
single age categories (i.e., one for each year) instead of
grouping them as we did. In this sense, our findings do
not provide a strict theory test of crowding-in versus
crowding-out but provide some suggestive evidence
that government funding does not seem to matter in
the case of online donation solicitations of local food
banks. Future studies may extend this to other areas of
human service delivery, including more direct cues of
government funding, because people do not always
pay close attention to informational cues.
Although there is a long-standing scholarly debate
on crowding-out and crowding-in between government
funding and private donations, our study adds to this
literature that in an online setting, whether a nonprofit
is supported by government funding seems not to affect
donors’ behaviors. Therefore, it is possible that, from
donors’ perspective, government funding and charitable
donations are two independent revenue streams. Donors
do not necessarily consider a nonprofit’s revenue struc-
ture, especially whether it is supported by government
funding when making donation decisions. If so, the
crowding-out versus crowding-in debate over the last
several decades might be overstated. Certainly, we did not
examine in the present study how nonprofits strategic-
ally modify their fundraising efforts when they receive
government funding, which might actually contribute
to the crowding-in or crowding-out effect. For example,
Andreoni and Payne (2011) argued that the crowding-
out effect is mostly driven by nonprofits’ decreased fun-
draising efforts as a response to government funding.
Again, we cannot guarantee the generalizability of our
findings from food banks to other service areas.
The primary aim of this study was to showcase how
public administration scholars and government agen-
cies can utilize social media to conduct experimental
research in the realm of online citizen–government
interactions. Social media experiments offer public
administration researchers a distinct advantage: they
combine the internal validity of experiments with
increased realism and ecological validity.
Furthermore, as we demonstrate in this study, industry
contacts are no longer necessary for implementing
large-scale social media experiments. Social media
service platforms such as Twitter or Facebook can be
easily used to test a variety of questions important to
public administration scholars and practitioners. Here,
online experiments can be conducted on large-scale
samples using a variety of unobtrusive outcome meas-
ures to assess respondents’ revealed behaviors.
Figure 3. Experimental Results. Left panel: click-through-rate (unique outbound clicks); right panel: percentage of unique page views; Control versus Donation versus Government conditions. Note: ns = not significant at a 95% confidence level.

A research agenda for applying social media
experiments in studying online citizen–state interactions has
implications for a wide range of areas. Possible applica-
tions may include assessing the effectiveness of different
public communication strategies on social media and
how these strategies affect, for example, agency reputa-
tion. Indeed, bureaucratic reputation is a core topic area
in public administration, with studies examining how
news media shape agencies’ reputation, including trust
(e.g., Grimmelikhuijsen, de Vries, and Zijlstra 2018).
Little is known about how social media communication
strategies may alter agency reputation. Social media
experiments may assess the effect of different types of
communication strategies on reputation—examples
may include diffusing responsibility attributions for
service failure across actors, active agency branding, or
just randomly publishing information about adminis-
trative procedures. Another possible area of applica-
tion may include assessing the effectiveness of different
targeted job advertisements for government agencies,
using, for instance, Facebook’s advertisement facilities
to get underrepresented groups to apply. Recent
scholarship has demonstrated the importance of signaling
personal benefits to increase the number of women and
people of color who apply for government jobs (e.g.,
Linos 2017). Testing and refining such targeted job
advertisements and/or messages via social media platforms like
Facebook may be a great way to further this promising
line of scholarship, but also be an excellent evaluation
tool in seeking the most effective online recruitment
strategy to increase the demographic representation of
government agencies. A more obvious area of
application may include assessing whether different ways
of presenting public performance information online
affect citizens' attitudes and subsequent behaviors.
Indeed, a growing literature in behavioral public ad-
ministration examines the psychology of performance
information (e.g., James and Olsen 2017), assessing
whether different ways of presenting numbers affect
perceptions of public service performance or satisfac-
tion evaluations of government agencies. This agenda
may be extended to social media experiments where
large-scale experiments could be conducted on social
media platforms to probe for the external validity of
prior survey experiments. Social media experiments
may also be useful in simply testing the effectiveness of
governmental online crisis warning systems by assess-
ing which types of messages on social media are most
likely to be shared across users. Such an agenda would
have important practical implications for the effective-
ness of early crisis warning systems.
The emergence of Internet-based social media has made
it possible for researchers to easily interact with thousands
or even millions of citizens. Implementing such innovative
experimental designs in the context of online citizen–gov-
ernment interactions may be a viable avenue for future
experimentation in public administration research, both
for academics and practitioners alike. We hope this
contribution sparks a wider implementation of public
administration experiments on social media platforms.

Appendix

Table A1. Mean difference tests, CTR (independent two-sample t-test; n = 600)

Condition | Mean | Standard error
Control | 0.50 | 0.03
Placebo | 0.44 | 0.03
Treatment | 0.50 | 0.04

Comparison | Mean difference | p value
Control versus treatment | 0.00 | 0.99
Placebo versus treatment | 0.06 | 0.21
Control versus placebo | −0.06 | 0.17
FUNDING
This study received funding from the Rutgers SPAA
Faculty Research Funds AY16-17. This study was
supported by a National Research Foundation
of Korea Grant from the Korean Government
[NRF-2017S1A3A2065838].
References
Ajorlou, A., A. Jadbabaie, and A. Kakhbod. 2016. Dynamic
pricing in social
networks: The word-of-mouth effect. Management Science
64:971–79.
Andreoni, J. 1989. Giving with impure altruism: Applications to
charity and
Ricardian equivalence. The Journal of Political Economy
97:1447–58.
———. 1993. An experimental test of the public-goods
crowding-out hypoth-
esis. The American Economic Review 83:1317–27.
Andreoni, J., and A. A. Payne. 2011. Is crowding out due
entirely to fund-
raising? Evidence from a panel of charities. Journal of Public
Economics
95:334–43.
Aral, S. 2016. Networked experiments. In The Oxford
Handbook of the
Economics of Networks, ed. Yann Bramoullé, Andrea Galeotti,
and Brian
Rogers, New York: Oxford University Press.
Aral, S., and D. Walker. 2014. Tie strength, embeddedness, and
social influence:
A large-scale networked experiment. Management Science
60:1352–70.
Bakshy, E., D. Eckles, R. Yan, and I. Rosenn. 2012a. Social
influence in social
advertising: evidence from field experiments.
arXiv:1206.4327v1.
Bakshy, E., I. Rosenn, C. Marlow, and L. Adamic. 2012b. The role of networks
in information diffusion. arXiv:1207.4145v5.
Blanco, E., M. C. Lopez, and E. A. Coleman. 2012. Voting for
environmental
donations: Experimental evidence from Majorca, Spain.
Ecological
Economics 75:52–60.
Blom-Hansen, Jens, Rebecca Morton, and Soren Serritzlew.
2015. Experiments
in public management research. International Public
Management Journal
18:151–70.
Bolton, G. E., and E. Katok. 1998. An experimental test of the
crowding
out hypothesis: The nature of beneficent behavior. Journal of
Economic
Behavior & Organization 37(3):315–31.
Bond, R. M., C. Fariss, J. Jones, A. Kramer, C. Marlow,
J. Settle, and J. Fowler.
2012. A 61-million-person experiment in social influence and
political mo-
bilization. Nature 489:295–98.
Borgonovi, F., and M. O’Hare. 2004. The impact of the national
endowment
for the arts in the United States: Institutional and sectoral
effects on pri-
vate funding. Journal of Cultural Economics 28:21–36.
Broockman, David and Donald Green. 2014. Do online
advertisements in-
crease political candidates’ name recognition or favorability?
Evidence
from randomized field experiments. Political Behavior 36:263–
89.
Brooks, A. C. 1999. Do public subsidies leverage private
philanthropy for
the arts? Empirical evidence on symphony orchestras. Nonprofit
and
Voluntary Sector Quarterly 28:32–45.
———. 2000. Is there a dark side to government support for
nonprofits?
Public Administration Review 60:211–18.
Campbell, Michael and Stephen Walters. 2014. How to design,
analyse and
report cluster randomised trials in medicine and health related
research.
Hoboken: Wiley.
Chan, K. S., S. Mestelman, R. Moir, and R. A. Muller. 1996.
The voluntary
provision of public goods under varying income distributions.
Canadian
Journal of Economics 29(1):54–69.
Chan, K. S., R. Godby, S. Mestelman, and R. A. Muller. 2002.
Crowding-out
voluntary contributions to public goods. Journal of Economic
Behavior &
Organization 48(3):305–17.
Coppock, A., A. Guess and J. Ternovski. 2016. When treatments are tweets:
A network mobilization experiment over Twitter. Political Behavior
38:105–28.
De Wit, A., and R. Bekkers. 2016. Government support and
charitable dona-
tions: A meta-analysis of the crowding-out hypothesis. Journal
of Public
Administration Research and Theory 27:301–19.
De Wit, A., R. Bekkers, and M. Broese van Groenou. 2017.
Heterogeneity in
crowding-out: When are charitable donations responsive to
government
support? European Sociological Review 33:59–71.
Dokko, J. K. 2009. Does the NEA crowd out private charitable
contributions
to the arts? National Tax Journal 62:57–75.
Eckel, C. C., P. J. Grossman, and R. M. Johnston 2005. An
experimental test
of the crowding out hypothesis. Journal of Public Economics
89:1543–60.
Galbiati, R., and P. Vertova. 2008. Obligations and cooperative
behaviour in
public good games. Games and Economic Behavior 64(1):146–
70.
Galbiati, R., and P. Vertova. 2014. How laws affect behavior:
Obligations,
incentives and cooperative behavior. International Review of
Law and
Economics 38:48–57.
Galizzi, Matteo and Daniel Navarro-Martinez. Forthcoming. On the
external validity of social preference games: A systematic lab-field study.
Management Science.
Gerber, Alan and Donald Green. 2012. Field experiments:
design, analysis, and
interpretation. New York: Norton.
Gong, S., J. Zhang, P. Zhao, and X. Jiang. 2017. Tweeting as
a marketing
tool: A field experiment in the TV industry. Journal of
Marketing Research
54:833–50.
Grimmelikhuijsen, Stephan and Albert Meijer. 2015. Does
twitter increase
perceived police legitimacy? Public Administration Review
75:598–607.
Grimmelikhuijsen, S., F. de Vries, and W. Zijlstra. 2018.
Breaking bad news
without breaking trust: The effects of a press release and
newspaper
coverage on perceived trustworthiness. Journal of Behavioral
Public
Administration 1.
Gronberg, T. J., R. A. Luccasen III, T. L. Turocy, and J. B. Van
Huyck. 2012.
Are tax-financed contributions to a public good completely
crowded-out?
Experimental evidence. Journal of Public Economics 96(7–
8):596–603.
Güth, W., M. Sutter, and H. Verbon. 2006. Voluntary versus
compulsory soli-
darity: Theory and experiment. Journal of Institutional and
Theoretical
Economics 162(2):347–63.
Hayes, Richard and Lawrence Moulton. 2017. Cluster
randomised trials. Boca
Raton: Chapman & Hall/CRC.
Heutel, G. 2014. Crowding out and crowding in of private
donations and gov-
ernment grants. Public Finance Review 42:143–75.
Horne, C. S., J. L. Johnson, and D. M. Van Slyke. 2005. Do
charitable donors
know enough—and care enough—About government subsidies
to affect
private giving to nonprofit organizations? Nonprofit and
Voluntary Sector
Quarterly 34:136–49.
Hsu, L. C. 2008. Experimental evidence on tax compliance and
voluntary
public good provision. National Tax Journal 61(2):205–23.
Hughes, P., W. Luksetich, and P. Rooney. 2014. Crowding‐Out
and fundraising
efforts. Nonprofit Management and Leadership 24(4):445–64.
Im, T., W. Cho, G. Porumbescu, and J. Park. 2014. Internet,
trust in govern-
ment, and citizen compliance. Journal of Public Administration
Research
and Theory 24(3):741–63.
Isaac, R. M., and D. A. Norton. 2013. Endogenous institutions
and the possi-
bility of reverse crowding out. Public Choice 156:253–84.
James, Oliver, Sebastian Jilke, and Gregg Van Ryzin, eds.
2017a. Experiments
in public management research: challenges and opportunities.
Cambridge:
Cambridge University Press.
James, Oliver, Sebastian Jilke, and Gregg Van Ryzin. 2017b.
Causal inference
and the design and analysis of experiments. In Experiments in
public man-
agement research: Challenges and opportunities, ed. Oliver
James, Sebastian
Jilke, and Gregg Van Ryzin, 59–88. Cambridge: Cambridge
University Press.
James, Oliver, and Asmus Leth Olsen. 2017. Citizens and public performance measures: Making sense of performance information. In Experiments in public management research: Challenges and opportunities, ed. Oliver James, Sebastian Jilke, and Gregg Van Ryzin, 270–89. Cambridge: Cambridge University Press.
Khanna, J., and T. Sandler. 2000. Partners in giving: The crowding-in effects of UK government grants. European Economic Review 44:1543–56.
Kim, Mirae, and Gregg G. Van Ryzin. 2014. Impact of government funding on donations to arts organizations: A survey experiment. Nonprofit and Voluntary Sector Quarterly 43:910–25.
King, G., J. Pan, and M. E. Roberts. 2013. How censorship in China allows government criticism but silences collective expression. American Political Science Review 107(2):1–18.
King, G., B. Schneer, and A. White. 2017. How the news media activate public expression and influence national agendas. Science 358:776–80.
Kingma, B. R. 1989. An accurate measurement of the crowd-out effect, income effect, and price effect for charitable contributions. Journal of Political Economy 97:1197–207.
Kobayashi, T. and Y. Ichifuji. 2015. Tweets that matter: Evidence from a randomized field experiment in Japan. Political Communication 32:574–93.
Konow, J. 2010. Mixed feelings: Theories of and evidence on giving. Journal of Public Economics 94(3–4):279–97.
Korenok, O., E. L. Millner, and L. Razzolini. 2012. Are dictators averse to inequality? Journal of Economic Behavior & Organization 82:543–47.
Kramer, A., J. Guillory and J. Hancock. 2014. Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences of the United States of America 111:8788–90.
Krasteva, S., and H. Yildirim. 2013. (Un)informed charitable giving. Journal of Public Economics 106:14–26.
Ledyard, J. 1995. Public goods: A survey of experimental research. In Handbook of experimental economics, ed. J. Kagel and A. Roth, 111–94. Princeton: Princeton University Press.
Levitt, Steven and John List. 2007. What do laboratory experiments measuring social preferences reveal about the real world? Journal of Economic Perspectives 21:153–74.
Li, H. and G. Van Ryzin. 2017. A systematic review of experimental studies in public management journals. In Experiments in public management research: Challenges and opportunities, ed. Oliver James, Sebastian Jilke, and Gregg Van Ryzin. Cambridge: Cambridge University Press.
Lilley, A., and R. Slonim. 2014. The price of warm glow. Journal of Public Economics 114:58–74.
Linos, E. 2017. More than public service: A field experiment on job advertisements and diversity in the police. Journal of Public Administration Research and Theory 28(1):67–85.
Lu, J. 2015. Which nonprofit gets more government funding? Nonprofit Management and Leadership 25:297–312.
———. 2016. The philanthropic consequence of government grants to nonprofit organizations. Nonprofit Management and Leadership 26:381–400.
Mergel, I. 2012. Social media in the public sector: A guide to participation, collaboration and transparency in the networked world. Hoboken: Wiley/Jossey-Bass.
Middleton, Joel and Peter Aronow. 2015. Unbiased estimation of the average treatment effect in cluster-randomized experiments. Statistics, Politics and Policy 6:39–75.
Munger, K. 2017. Tweetment effects on the tweeted: Experimentally reducing racist harassment. Political Behavior 39(3):629–49.
Okten, C., and B. A. Weisbrod. 2000. Determinants of donations in private nonprofit markets. Journal of Public Economics 75:255–72.
Olsen, A. 2017. Adventures into the negativity bias. Paper presented at the EGPA conference in Milan, Italy.
Ottoni-Wilhelm, M., L. Vesterlund, and H. Xie. 2017. Why do people give? Testing pure and impure altruism. American Economic Review 107:3617–33.
Porumbescu, Gregory. 2015. Comparing the effects of E-Government and social media use on trust in government: Evidence from Seoul, South Korea. Public Management Review 18:1308–34.
Public Religion Research Institute. 2012. Few Americans use social media to connect with their faith communities. http://publicreligion.org/research/2012/08/july-2012-tracking-survey/ (accessed April 30, 2018).
Reeson, A. F., and J. G. Tisdell. 2008. Institutions, motivations and public goods: An experimental test of motivational crowding. Journal of Economic Behavior & Organization 68(1):273–81.
Roberts, R. D. 1984. A positive model of private charity and public transfers. The Journal of Political Economy 92:136–48.
Rose-Ackerman, S. 1981. Do government grants to charity reduce private donations? In Nonprofit firms in a three sector economy, ed. M. J. White, 95–114. Washington, DC: Urban Institute.
Ryan, T. 2012. What makes us click? Demonstrating incentives for angry discourse with digital-age field experiments. The Journal of Politics 74:1138–52.
Ryan, T. J., and T. Brader. 2017. Gaffe appeal: A field experiment on partisan selective exposure to election messages. Political Science Research and Methods 5:667–87.
Ryan, T. and D. Broockman. 2012. Facebook: A new frontier for field experiments. The Experimental Political Scientist: Newsletter of the APSA Experimental Section 3:2–8.
Smith, T. M. 2007. The impact of government funding on private contributions to nonprofit performing arts organizations. Annals of Public and Cooperative Economics 78:137–60.
Statista. 2017. Number of Facebook users by age in the U.S. as of January 2017. https://www.statista.com/statistics/398136/us-facebook-user-age-groups/ (accessed April 30, 2018).
Steinberg, R. 1987. Voluntary donations and public expenditures in a federalist system. The American Economic Review 77:24–36.
Sutter, M., and H. Weck-Hannemann. 2004. An experimental test of the public goods crowding out hypothesis when taxation is endogenous. FinanzArchiv: Public Finance Analysis 60(1):94–110.
Suárez, D. F. 2011. Collaboration and professionalization: The contours of public sector funding for nonprofit organizations. Journal of Public Administration Research and Theory 21:307–26.
Teresi, H. and M. Michelson. 2015. Wired to mobilize: The effect of social networking messages on voter turnout. The Social Science Journal 52:195–204.
Thomas, J. C., and G. Streib. 2003. The new face of government: Citizen-initiated contacts in the era of E-Government. Journal of Public Administration Research and Theory 13:83–102.
Tinkelman, D. 2010. Revenue interactions: Crowding out, crowding in, or neither? In Handbook of research on nonprofit economics and management, ed. B. A. Seaman and D. R. Young, 18–41. Northampton, MA: Edward Elgar.
Wang, J. C., and C. H. Chang. 2013. How online social ties and product-related risks influence purchase intentions: A Facebook experiment. Electronic Commerce Research and Applications 12:337–46.
Warr, P. G. 1982. Pareto optimal redistribution and private charity. Journal of Public Economics 19:131–38.
Wasif, R., and A. Prakash. 2017. Do government and foreign funding influence individual donations to religious nonprofits? A survey experiment in Pakistan. Nonprofit Policy Forum 8(3):237–73.
Wukich, C., and I. Mergel. 2015. Closing the citizen-government communication gap: Content, audience, and network analysis of government tweets. Journal of Homeland Security and Emergency Management 12:707–35.
Zhang, J., and E. Mao. 2016. From online motivations to ad clicks and to behavioral intentions: An empirical study of consumer response to social media advertising. Psychology & Marketing 33:155–64.

Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
Concept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfConcept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.Compdf
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and Actinides
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
Hybridoma Technology ( Production , Purification , and Application )
Hybridoma Technology  ( Production , Purification , and Application  ) Hybridoma Technology  ( Production , Purification , and Application  )
Hybridoma Technology ( Production , Purification , and Application )
 
PSYCHIATRIC History collection FORMAT.pptx
PSYCHIATRIC   History collection FORMAT.pptxPSYCHIATRIC   History collection FORMAT.pptx
PSYCHIATRIC History collection FORMAT.pptx
 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy Reform
 

This is the book to use for this assignment. I am sure you probabl.docx

This is the book to use for this assignment. I am sure you probably know websites where you can get access to e-books.

Book: Making Sense of the Social World: Methods of Investigation, Fifth Edition. ISBN: 978-1-4833-8061-2

Class: Applied Research Methods for Policy & Management – PAD4723

I am going to try to help you through the questions and how to approach this assignment. This is basically answering these questions using some materials from the book.

Questions:

1. Identification of the research question(s), objective(s), and hypothesis, if available.
2. Brief discussion of the linkage between the research question(s) and the broader literature reviewed.
3. Identification of the dependent and major independent variables and their measurement.
4. Identification of data source(s), unit of analysis, and type of data (time series, cross-sectional, etc.).
5. Identification and brief discussion of the main research methods used.
6. Brief discussion of the main research results and their generalizability.
7. Brief discussion of the overall quality and organization of the article.

For question #1: Read the article first, then state what the research question(s), objective(s), and hypothesis are.

For question #2: This one is pretty much self-explanatory: identify the research question(s) and trace how they link to the rest of the article, especially the literature it reviews.

For question #3: Use this link https://www.simplypsychology.org/variables.html to learn about dependent and independent variables, then find the dependent and major independent variables in the article.

For question #4: Identify the data source(s), that is, what the authors use to do the research (here, the social media platform Facebook). I don't know what the unit of analysis would be. The type of data describes how the observations are structured (for example, time series or cross-sectional).

For question #5: Find out which research methods were used. Examples of research methods studied in class include quantitative and qualitative methods of analysis.

For questions #6 and #7: These two questions are pretty much self-explanatory.

Article

Using Large-Scale Social Media Experiments in Public Administration: Assessing Charitable Consequences of Government Funding of Nonprofits

Sebastian Jilke*, Jiahuan Lu*, Chengxin Xu*, Shugo Shinohara†
*Rutgers University; †International University of Japan
Abstract

In this article, we introduce and showcase how social media can be used to implement experiments in public administration research. To do so, we pre-registered a placebo-controlled field experiment and implemented it on the social media platform Facebook. The purpose of the experiment was to examine whether government funding to nonprofit organizations has an effect on charitable donations. Theories on the interaction between government funding and charitable donations stipulate that government funding of nonprofit organizations either decreases (crowding-out) or increases (crowding-in) private donations. To test these competing theoretical predictions, we used Facebook's advertisement facilities and implemented an online field experiment among 296,121 Facebook users nested in 600 clusters. Through the process of cluster-randomization, groups of Facebook users were randomly assigned to different nonprofit donation solicitation ads, experimentally manipulating information cues of nonprofit funding. Contrary to theoretical predictions, we find that government funding does not seem to matter; providing information about government support to nonprofit organizations neither increases nor decreases people's propensity to donate. We discuss the implications of our empirical application, as well as the merits of using social media to conduct experiments in public administration more generally. Finally, we outline a research agenda of how social media can be used to implement public administration experiments.
Introduction

Using social media has become an increasingly common activity as platforms like Facebook have become ubiquitous. For instance, to date there are 218 million Facebook users in the United States (Statistia 2017), with one-third of the US adult population logging into Facebook at least once a day (Public Religion Research Institute 2012). Social media platforms like Facebook or Twitter provide brand-new opportunities to implement online experiments studying citizen–state interactions. However, within public administration this potential has been largely untapped.[1] Therefore, the purpose of this article is to introduce and showcase how social media, particularly Facebook, can be used to implement experiments in public administration research.

As a consequence of digitization, large-scale social media experiments have been used increasingly in

[1] Despite a number of important studies on social media use in government agencies and online citizen–government interactions (e.g., Mergel 2012; Grimmelikhuijsen and Meijer 2015; Porumbescu 2015; Im et al. 2014), to date no experimental applications on social media platforms exist in public administration scholarship (for a notable exception see a recent conference contribution by Olsen 2017).

Address correspondence to the author at [email protected]

© The Author(s) 2018. Published by Oxford University Press on behalf of the Public Management Research Association.
All rights reserved. For permissions, please e-mail: [email protected]

Journal of Public Administration Research and Theory, 2019, 627–639. doi:10.1093/jopart/muy021. Advance Access publication May 12, 2018.

Downloaded from https://academic.oup.com/jpart/article-abstract/29/4/627/4995543 by Florida International University Library Serials user on 18 April 2020.

neighboring fields like political science, economics, or marketing (for an overview see Aral 2016). Examples include a wide range of topics, including political behavior (e.g., Bond et al. 2012), advertising (e.g., Bakshy et al. 2012a), product pricing (e.g., Ajorlou, Jabdabaie, and Kakhbod 2016), information propagation (e.g., Bakshy et al. 2012b), or emotional contagion (e.g., Kramer, Guillory and Hancock 2014). This trend is encouraging but it misses an important component of people's online behavior, namely citizen–state interactions. Indeed, the advent of e-government and an increased presence of government agencies on social media platforms has led to the rise of online citizen–state interactions (Thomas and Streib 2003; Wukich and Mergel 2015). These interactions range from gathering information, for example, about how to fill out online applications, to crisis communication, and even complaints about poor services. In this study, we illustrate how to implement large-scale social media experiments on Facebook by examining interactions between nonprofit organizations seeking donations and their potential donors. In particular, we assess whether changes in government funding affect the levels of charitable income of nonprofit organizations.

Theoretically, two competing mechanisms have been distinguished in explaining why levels of government funding may have an effect on private donations to nonprofit organizations. The crowding-out perspective argues that government funding would decrease people's willingness to donate because donors as taxpayers perceive government funding as a substitution to their donations (Andreoni 1989; Warr 1982; Kim and Van Ryzin 2014). If they contributed already via taxes, why should they give in addition to them? Therefore, the crowding-out model predicts a decrease in private donations as a result of government funding. In contrast, the signaling model of crowding-in suggests that government funding is used by potential donors as an imperfect signal of an organization's effectiveness (e.g., Borgonovi and O'Hare 2004; Rose-Ackerman 1981). In the absence of complete information about how a
nonprofit organization will perform and operate with the funds at its disposal, government funding serves as an organization's "quality stamp," signaling the organization is not only trustworthy but also effective because it managed to receive competitive government grants. The crowding-in perspective, therefore, predicts an increase in private donations as a result of government funding.

In this study, we test these somewhat competing claims in the context of a large-scale social media experiment. Conducted in a naturalistic setting, social media experiments, similar to conventional field experiments, combine high levels of internal validity with external validity. This allows us to test whether government funding crowds-in, or crowds-out, private donations. We implemented a field experiment on the social media platform Facebook by assigning clusters of approximately 300,000 Facebook users to donation solicitations of groups of real food banks. Using a pre-registered, placebo-controlled between-subjects design, groups of users were randomly allocated to three experimental conditions: (1) the control group (i.e., no funding information), (2) the placebo group (i.e., donation funded), and (3) the treatment group (i.e., government funded). As outcome measures, we monitored people's revealed donation intentions by their click-through-rates (i.e., the frequency people clicked on the links in the ad solicitations), but also other behavioral measures such as website visits. Consistently, we find no direct evidence for either model, suggesting that public and private funding streams of nonprofit organizations do not seem to interact in the real world.
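The design just described (600 clusters randomized to three ad conditions, with cluster-level click-through rates as the outcome) can be sketched in a few lines. This is an illustrative reconstruction in Python, not the authors' code: the seed, arm labels, and click counts are hypothetical, and in practice each cluster would be defined through Facebook's ad-targeting parameters such as age, gender, and zip code.

```python
import random

ARMS = ("control", "placebo", "government_funded")

def randomize_clusters(n_clusters=600, arms=ARMS, seed=2018):
    """Randomly assign cluster IDs to experimental arms in equal shares.

    Each cluster is a group of Facebook users defined by ad-targeting
    parameters; the whole cluster sees the same solicitation ad.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible assignment
    ids = list(range(n_clusters))
    rng.shuffle(ids)
    per_arm = n_clusters // len(arms)
    return {arm: ids[i * per_arm:(i + 1) * per_arm] for i, arm in enumerate(arms)}

def mean_ctr(cluster_stats):
    """Mean cluster-level click-through rate: clicks / impressions per cluster."""
    rates = [clicks / impressions for clicks, impressions in cluster_stats]
    return sum(rates) / len(rates)

assignment = randomize_clusters()
print({arm: len(ids) for arm, ids in assignment.items()})
# -> {'control': 200, 'placebo': 200, 'government_funded': 200}

# Hypothetical (clicks, impressions) for a few clusters in one arm:
print(round(mean_ctr([(52, 5000), (61, 5200), (48, 4900)]), 4))
# -> 0.0106
```

Analysis would then compare mean click-through rates across the three arms, with inference accounting for the fact that randomization happened at the cluster level rather than the user level.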
In addition to these findings, we provide an overview of social media experiments and how they can be implemented in public administration research, including an agenda for studying online citizen–state interactions using large-scale social media experiments. The remainder of the study is as follows: in the next section, we review empirical applications of social media experiments in neighboring fields to provide an overview of the applicability of social media to conduct experiments in public administration. We then discuss our empirical application by first reviewing the literature on the crowding-out and crowding-in hypotheses. On this basis, we introduce our experimental research design and report the results of the experiment subsequently. In the final section, we draw implications for public administration research and practice from both our empirical application and the review of innovative social media experiments.

Conducting Social Media Experiments

Before turning to the empirical application in this article, we provide an overview of the potential of social media to conduct experiments. Recent years have seen an increase in online field experiments implemented on social media platforms (Aral 2016). Indeed, companies like Amazon, Google, and Facebook constantly perform small experiments on their clients, for example through randomly altered website content where two different versions of the same website, online ad, or any other online parameter are randomly assigned to service users—a procedure marketers commonly refer to as A/B testing. In the past, social scientists have collaborated with major social media platforms to implement experiments. For example,
Bond et al. (2012) implemented a 61-million-person political mobilization experiment on Facebook; similarly, Kramer, Guillory, and Hancock (2014) have implemented an online experiment to study emotional contagion among 690,000 Facebook users. Most studies on Facebook include advertisement related topics, however. For example, Bakshy and colleagues (2012a) study the effectiveness of social cues (i.e., peers' associations with a brand) on consumer responses to ads for more than 5 million Facebook users. In another study, Bakshy et al. (2012b) look at the randomized exposure of links shared by peers of more than 250 million Facebook users and how it affects information sharing behavior on Facebook. In all of these cases, researchers had to work closely with Facebook to implement the process of randomization at the individual level of Facebook users. But this would also mean that experimenting on Facebook would be limited to those with industry contacts.

In the following, we report from recent experiments that have been conducted without industry collaboration. We aim to showcase how social media platforms like Facebook or Twitter can be used by scholars or government agencies to implement experiments in a relatively straightforward manner. Ryan (2012) was one of the first social scientists to use Facebook's advertisement facilities to conduct research without having to collaborate with Facebook directly. Similar to the empirical application we report in this article, he randomly assigned clusters of individuals to different advertisements instead of randomizing on the user level (see Teresi and Michelson 2015 for an alternative approach[2]). To do so, he used Facebook's advertisement facilities, which allow targeting ads to Facebook users on a number of demographic characteristics, such as age and gender, but also zip code (see also Ryan and Brockman 2012). Based on these user parameters, researchers can predetermine clusters of users and randomly allocate them to varying ad content. This is what Ryan did in his study. In particular, he looked at how advertisements that evoke emotions such as anger or anxiety affect information seeking behavior. He then used cluster-level click-through-rates as a dependent variable. Across three studies and more than 1.8 million impressions grouped into 360 clusters in total, he found consistent evidence that political advertisements that ought to evoke anger increase users' proclivity to click through to a website. In other words, anger makes us click. A similar methodological approach
was also used by Ryan and Brader (2017), who studied partisan selective exposure to election messages during the 2012 US presidential elections, using a total of 846 clusters of Facebook users.

Similar applications also exist in fields like marketing or economics. Aral and Walker (2014), for instance, report from an experiment conducted with 1.3 million users of a Facebook application to test how social influence in online networks affects consumer demand. They experimentally manipulated users' network embeddedness and the strength of their social ties, finding that both increase influence in social networks. Wang and Chang (2013) studied a similar topic, looking at whether social ties and product-related risks influence purchase intentions of Facebook users who were recruited via an online application. Although they found that online tie strength leads to higher purchase intentions, product-related risks had no direct effect on purchase intentions. In another Facebook study by Broockman and Green (2013), users were exposed to different types of political candidate advertisements over the course of 1 week. Like Ryan (2012), they randomized clusters of individuals instead of individuals themselves. However, since they had access to public voter registries, they targeted 32,000 voters, which they assigned to 1,220 clusters across 18 age ranges, 34 towns, and 2 genders. These clusters of Facebook users were assigned to one of four experimental conditions: a control group with no advertisement, and three different types of advertisements that ought to increase Facebook users' name recognition of the candidate. The innovation that Broockman and Green's study introduced was that they used contact information from public voter records to
gather public opinion data from these individual voters through telephone interviews later on. Since the creation of clusters was done on the basis of assigning 32,000 registered voters to 1,220 clusters, they had detailed contact information of registered voters that belong to each respective cluster. In other words, they were able to link cluster assignment on Facebook with attitudinal outcomes from survey data, such as candidate's name recognition, positive impression of the candidate, whether people voted for the candidate, and whether they recall having seen the ad.

[2] Teresi and Michelson (2015) randomized individual Facebook users with whom they connected via a Facebook profile (i.e., becoming "friends") into experimental conditions. While one group of "friends" received mainly apolitical status updates from the host account, the treatment group received political messages about the upcoming 2010 elections. After the election, authors searched for each "friend" in the state list of registered voters using information provided via Facebook's profile (i.e., names, age, gender, etc.) to examine whether these online get-out-the-vote messages distributed through social media encouraged subjects to vote.

Social media experiments exist outside of Facebook also. Gong et al. (2017) conducted a large-scale experiment among the social microblogging service Sina Weibo (i.e., the "Chinese Twitter"). They examined the return-on-investment of company tweets on viewers of TV shows. To do so, they randomly assigned 98 different TV shows into three experimental conditions: (1) the control group, where there is no tweet sent out about the particular TV shows; (2) the tweet condition, where each show is tweeted by the company; and (3) a tweet and retweet condition, where each show is tweeted by a company and retweeted by a so-called "influencer." TV show viewing percentages were used as an outcome measure, finding that both tweeting and tweeting coupled with retweeting boost TV show views relative to the shows in the control group. In other words, social media efforts of TV companies result in a significant increase in viewers. They also found that retweets of influencers are more
effective in generating new viewers than tweets by the companies. In another Twitter experiment, Coppock, Guess, and Ternovski (2016) looked at online mobilization behavior. In particular, authors were interested in whether Twitter users could be encouraged to sign a petition. To do so, they randomly divided 8,500 followers of a US nonprofit advocacy group into three experimental conditions. In the first stage, the nonprofit organization published a tweet in which its followers were encouraged to sign a petition. All three groups were exposed to the public tweet. In the second stage, a treatment condition received a direct message with a similar request, referring to them as "followers," another treatment condition got the same direct message, but they were referred to as "organizers," whereas the control group received no direct message. On this basis, authors checked whether subjects either retweeted or signed the petition.

Other notable examples of using social media platforms like Twitter to implement experiments involve studying social media censorship in China (King, Pan, and Roberts 2013), the effectiveness of repeated political messages on twitter followers of politicians (Kobayashi and Ichifuji 2015), the effectiveness of news media (King, Schneer, and White 2017), or racist online harassment (Munger 2017).

The aforementioned examples provide rich inspiration for conducting social media experiments in public administration research. Social media experiments have the distinct advantage that they combine the internal validity of experiments with an increased realism and external validity. In this sense, they are a subtype of conventional field experiments, which are conducted in an online environment where people interact via social
media. In addition, social media experiments can easily be conducted on large-scale samples using a variety of unobtrusive outcome measures to assess respondents' revealed behaviors. They are therefore a viable option to complement survey-based experiments that often employ stated preferences (i.e., attitudes, evaluative judgments, or behavioral intentions), which make up the majority type of experiments implemented in public administration to date (Li and Van Ryzin 2017). People increasingly interact with government and government agencies using social media platforms like Twitter and Facebook (Mergel 2012). Scholars and government agencies alike can implement social media experiments to test the effectiveness of using these relatively new channels of communication and information provision. Examples may include assessing whether providing information on social media about the performance of government agencies affects citizen trust in those agencies, or may lead citizens to desirable behaviors, including coproduction. Indeed, implementing such innovative experimental designs in the context of online citizen–government interactions may be a viable avenue for future experimentation in public administration research. In the following, we introduce an empirical application of an online social media experiment that examines in how far government funding and charitable donation intentions interact.

Empirical Application: How Government Funding and Private Donations Interact

An impressive body of literature has emerged from various disciplines that focus on the issue of whether government funding would displace (crowding-out
effect) or leverage (crowding-in effect) private contributions to nonprofit organizations. The impact of government funding on private donations to nonprofits largely rests on how potential donors and nonprofits themselves strategically respond to government funding of nonprofit activities (Tinkelman 2010; Lu 2016). We focus on the strategic responses of private donors in the present analysis: how would donors change their levels of charitable giving when a nonprofit organization is supported by government funding? The literature distinguishes two contrasting models of how government-funded nonprofits are perceived, which ultimately affects charitable donations via processes of crowding-in or crowding-out.

Early crowding-out theory assumes that private donors are altruistic and care about the optimal level of public goods provision. Donors as taxpayers would consider government funding as their contributions through taxation and thus perceive it as a perfect substitute for voluntary donations. In this way, increases in government support would lower the need for additional private contributions. Therefore, when a nonprofit receives more support from the government, private donors would consciously reduce their giving to this organization. As a result, there is a dollar-for-dollar replacement between private giving and government funding (Roberts 1984; Warr 1982). This pure altruism assumption was later challenged by Andreoni's (1989) model of impure altruism, which predicts that
donors are also motivated by a "warm-glow"—the utility from the act of giving to help others. In this impure altruism line of reasoning, government funding and private giving would not completely substitute each other. As a result, there may exist a crowding-out effect between these two funding sources, but the magnitude of the effect is less than the dollar-for-dollar model that pure altruism would predict.

On the other hand, private donors might consider government funding favorably and become more willing to contribute to government-funded nonprofits because they perceive them as more competent and/or needy. Crowding-in theory proposes that government funding may stimulate charitable contributions in two ways. First, when donors do not have complete knowledge concerning beneficiary nonprofits and their programs, government funding serves as a direct signal of the nonprofit's quality and reliability (Rose-Ackerman 1981). Indeed, to be funded by government agencies, nonprofit organizations have to go through a competitive merit-based selection process and meet financial and programmatic requirements (Lu 2015; Suárez 2011). Therefore, the receipt of government funding can be perceived by uninformed donors as an indicator of trustworthiness and competence. Second, government funding also is considered as a signal of unmet social needs, calling for more donor attention and further facilitating the leveraging effect of government funding (Brooks 1999; Okten and Weisbrod 2000).

There exists a rich body of empirical studies in support of the crowding-out hypothesis (e.g., Andreoni 1993; Andreoni and Payne 2011; Brooks 2000; De Wit, Bekkers, and Broese van Groenou 2017; Dokko 2009; Hughes, Luksetich, and Rooney 2014; Kingma 1989; Steinberg 1987) and the crowding-in model (e.g., Borgonovi and O'Hare 2004; De Wit and Bekkers 2016; Heutel 2014; Khanna and Sandler 2000; Lu 2016; Okten and Weisbrod 2000; Smith 2007). Most recently, De Wit and Bekkers (2016) and Lu (2016) respectively employed meta-analytical techniques to aggregate existing empirical evidence on crowding-in and crowding-out. Both studies conclude a significant positive association between government funding and private donations, even though the magnitude of the relationship is trivial.

The above-mentioned body of literature on crowding-out and crowding-in greatly advances our understanding of the complex interaction between government funding and private donations. However, it generally suffers from two limitations. First, the majority of existing empirical literature testing the crowding-in or crowding-out effect employs observational data. Although observational studies enable scholars
to explore the association between the two revenue sources, drawing causal inferences remains challenging (Blom-Hansen, Morton, and Serritzlew 2015; James, Jilke, and Van Ryzin 2017a). Second, both crowding-in and crowding-out lines of reasoning assume that potential donors possess perfect information about the nonprofits they might want to donate to, especially whether these organizations are funded by government and to what extent. However, this assumption might not be true in the real world (De Wit et al. 2017; Horne, Johnson, and Van Slyke 2005; Krasteva and Yildirim 2013). For example, Horne, Johnson, and Van Slyke (2005) used public opinion data to demonstrate that individual donors do not necessarily have complete information on the financial structures of their beneficiary organizations and subsequently link donation decisions to the levels of government funding.

In recent years, scholars began to employ experimental designs to address these two limitations (i.e., endogeneity and imperfect information) in the crowding-out and crowding-in literature (e.g., Eckel, Grossman, and Johnston 2005; Kim and Van Ryzin 2014; Ottoni-Wilhelm, Vesterlund, and Xie 2017; Wasif and Prakash 2017). Existing experimental studies testing crowding-out/-in effects usually include manipulations of the existence and the level of direct government support to beneficiaries, and then measure variations in subjects' donations. Methodologically, experimental designs are advantageous over observational settings in terms of their internal validity when testing crowding-out/-in effects because experimental studies create a controlled environment of information exchange to rule out confounding factors. As a result, scholars are provided with more direct evidence of the
causal linkage between government support and charitable giving (Blom-Hansen, Morton, and Serritzlew 2015; James, Jilke, and Van Ryzin 2017a). Table 1 reviews the experimental studies of crowding-out/-in effects to date, including their type, setting, and results.

As can be seen in Table 1, most experimental studies employ laboratory designs, primarily using two experimental paradigms: the public goods game (e.g., Andreoni 1993; Eckel et al. 2005; Isaac and Norton 2013) and the dictator game (e.g., Blanco, Lopez, and Coleman 2012; Konow 2010; Korenok, Millner, and Razzolini 2012). Despite the different paradigms, most of the laboratory experiments report a partial crowding-out effect of government funding on charitable contributions (see also De Wit and Bekkers 2016).

In addition to laboratory experiments, a few studies employ survey experiments to test the crowding-in and crowding-out propositions. For example, Kim and Van Ryzin (2014) conducted an online survey experiment with 562 participants and found that an arts nonprofit with

[Downloaded from https://academic.oup.com/jpart/article-abstract/29/4/627/4995543 by Florida International University Library Serials user on 18 April 2020. Journal of Public Administration Research and Theory, 2019, Vol. 29, No. 4]

government funding would receive about 25% less in private donations than an identical hypothetical organization without government funding. In contrast, Wasif and Prakash's (2017) survey experiment with 530 respondents in Pakistan reported that federal funding would not change respondents' willingness to donate to a hypothetical faith-based educational nonprofit.

When meta-analyzing results from experimental studies only, De Wit and Bekkers (2016) find substantially different results compared with observational studies: experimental studies show a considerable crowding-out effect, whereas nonexperimental studies show a very small crowding-in effect. There are two potential explanations for these differences. A first possibility is that observational studies on crowding-out/-in are plagued by endogeneity, and hence the discrepancies in results may be a product of the comparatively poor internal validity of observational research designs. A second possibility is that findings produced within stylized settings such as economic games or hypothetical scenarios may hardly extrapolate beyond the laboratory. In other words, people may behave differently in lab and survey experiments than in the real world. Indeed, a recent systematic comparison between laboratory and
field experiments concluded that the ability of stylized experiments to extrapolate social preferences from the lab to the field is limited, at best (Galizzi and Navarro-Martinez, Forthcoming; see also Levitt and List 2007). We respond to these competing interpretations by implementing a naturalistic field experiment using social media.

Method

In the following, we report the experimental design for this study. In doing so, we follow James, Jilke, and Van Ryzin's (2017b) recommended reporting guidelines for experiments in public administration research.

Experimental Design, Subject Recruitment, and Treatment

We designed a large-scale, placebo-controlled online field experiment on the social networking platform Facebook by purchasing ads using Facebook's advertisement facilities. These ads were designed to solicit donations to human service nonprofits in New York City (NYC). We randomly allocated groups of Facebook users to three different ads of local human service organizations. Depending on the experimental condition, these ads included information cues about (1) government funding (treatment group), (2) donation funding (placebo group), or (3) no such information (control group), as depicted in Figure 1.

Table 1. Experimental Studies on Crowding-in and Crowding-out Effects

Reference | Type of experiment | Experimental setting | Subjects | Finding
Andreoni (1993) | Lab | Public goods game | Students | Crowding-out
Chan et al. (1996) | Lab | Public goods game | Students | No effect
Bolton and Katok (1998) | Lab | Dictator game | Students | Crowding-out
Chan et al. (2002) | Lab | Public goods game | Students | Crowding-out
Sutter and Weck-Hannemann (2004) | Lab | Public goods game | Students | Crowding-out
Eckel et al. (2005) | Lab | Public goods game | Students | Crowding-out
Güth et al. (2006) | Lab | Public goods game | Students | Crowding-out
Galbiati and Vertova (2008) | Lab | Public goods game | Students | Crowding-in
Hsu (2008) | Lab | Public goods game | Students | Crowding-out
Reeson and Tisdell (2008) | Lab | Public goods game | Students | Crowding-out
Konow (2010) | Lab | Dictator game | Students | Crowding-out
Blanco et al. (2012) | Lab | Dictator game | Tourists | No effect
Gronberg et al. (2012) | Lab | Public goods game | Students | Crowding-out
Korenok et al. (2012) | Lab | Dictator game | Students | Crowding-out
Isaac and Norton (2013) | Lab | Public goods game | Students | Crowding-out
Lilley and Slonim (2014) | Lab | Contribution decision making | Students | Crowding-out
Galbiati and Vertova (2014) | Lab | Public goods game | Students | Crowding-in
Kim and Van Ryzin (2014) | Survey | Hypothetical vignette | General population sample (US) | Crowding-out
Wasif and Prakash (2017) | Survey | Hypothetical vignette | General population sample (Pakistan) | No effect
Ottoni-Wilhelm et al. (2017) | Lab | Contribution decision making | Students | Crowding-out

The experimental design was pre-registered at the AEA RCT registry[3] and received ethics approval from
Rutgers University's Institutional Review Board. We chose to target Facebook users who live in NYC because this allowed us to leverage local information about government funding of human service organizations (i.e., real food banks in NYC) in the experimental design. All ads included a donation appeal encouraging Facebook users to donate to local food banks through a webpage they could access by clicking on the ad. Once an ad was clicked, and depending on the experimental condition, users were directed to one of three web pages where food banks from NYC[4] were listed and hyperlinked. Facebook users in the control condition (i.e., no funding information) were directed to the Rutgers Observatory of Food Banks webpage, where all NYC food banks were listed and hyperlinked. Subjects allocated to the placebo condition (i.e., donation-funded) were directed to the Rutgers Observatory of Donation-Funded Food Banks webpage, where all donation-funded[5] food banks from NYC were listed and hyperlinked. The presence of a placebo condition allows us to test whether any difference between treatment and control is due to the mere provision of funding information. In the treatment condition (i.e., government-funded), Facebook users were sent to the Rutgers Observatory of Government-Funded Food Banks website, where all food banks classified as government-funded were listed and hyperlinked.

Unique ads were developed for the experiment. Figure 2 shows the ads that were randomly presented to groups of Facebook users. Taking the ad of the control condition as a baseline, the treatment and placebo ads look exactly like the control ad, except for four
important differences:

1) A different sponsor was used:
   i. Rutgers Observatory of Food Banks for the control group;
   ii. Rutgers Observatory of Donation-Funded Food Banks for the placebo group;
   iii. Rutgers Observatory of Government-Funded Food Banks for the treatment group.
2) The ad's headline was changed from "Donate to Food Banks in your area" to "Donate to government-funded Food Banks in your area" or "Donate to donation-funded Food Banks in your area," respectively.
3) The heading under the image was changed from "Food Banks in NYC" to "Government-funded Food Banks in NYC" or "Donation-funded Food Banks in NYC," respectively.
4) The two-sentence description used "government-funded Food Banks in NYC" and "donation-funded Food Banks in NYC" instead of "Food Banks in NYC."

Cluster Randomization

Randomization into experimental conditions was performed not at the individual level but at the level of groups of Facebook users (see also Ryan 2012 for a similar approach). This was done because Facebook's commercial advertisement facilities do not permit different ads to be randomized at the individual level (Ryan and Brockman 2012). Instead, users were classified into groups. This approach makes use of Facebook's ability to target users based on demographics. More precisely, we stratified Facebook users into clusters by zip code, age, and gender. Cluster-randomized experiments are widely used across the social and medical sciences, where groups of individuals (for example, people living in an area, or classmates) are jointly randomized into experimental conditions (e.g., Campbell and Walters 2014; Hayes and Moulton 2017). For our experiment, we randomized demographic clusters of Facebook users, rather than users nested within these clusters. To do so, we first collected all 177 NYC zip codes and randomly chose 100. In a second step, we grouped users aged 18 to 60 into three age categories (18–27, 28–38, and 39–60). We composed these three groups to have an approximately equal "potential reach" across strata (potential reach refers to the approximate number of Facebook users who can be exposed to an ad); each of the three groups has a potential reach of about 1.5 to 1.6 million users who live in NYC. Next, we created 600 clusters by crossing 100 zip codes × 3 age categories × 2 genders. These 600 clusters were randomized into one of the three experimental conditions, so that each experimental condition ended up with 200 clusters with a potential reach of about 1 million Facebook users.[6]

An important concern when analyzing data from cluster-randomized experiments is that clusters vary in size, so that the conventional difference-in-means estimator would be biased (Middleton and Aronow 2015). Indeed, the 600 clusters vary in (anticipated) size, with a potential reach between fewer than 1,000 and 110,000 Facebook users (mean potential reach of approximately 5,000 users). Therefore, we followed Gerber and Green's (2012) suggestion and blocked on cluster size (200 blocks of 3 clusters each) in the randomization procedure, so that the difference-in-means estimator can be used without risk of bias. The blocked cluster-randomization was performed using Stata 14.

The actual implementation of the experiment was done using Facebook's Adverts Manager interface. We purchased 600 ads and targeted them to specific demographic clusters (stratified by zip code, age category, and gender). The 600 ads (i.e., 200 per experimental condition) were purchased for $10 each, and our ads were set up to pay per impression.[7] All ads ran on the same day for 24 h, from 6 a.m. EST on August 21, 2017 until 6 a.m. the following day. Although the number of clusters was pre-determined, the number of Facebook users actually exposed to the ads was not; however, the potential reach across experimental conditions was approximately the same (see footnote 6). This is so because Facebook uses its own "Optimization for Ad Delivery" algorithm (which is the same across experimental conditions) to

3. AEARCTR-0002345 (see https://www.socialscienceregistry.org/trials/2345).
4. The food banks listed on the websites are all real food banks that operate in NYC, as collected through the National Center for Charitable Statistics (NCCS) database.
5. We distinguished donation-funded and government-funded food banks based on the revenue information provided in the food banks' latest 990 forms in the NCCS database. For this study, we defined government-funded food banks as those that receive government funding in addition to private donations, and donation-funded food banks as those that receive private donations but no government funding.

Figure 1. Experimental Design.
determine who will be exposed to an ad. In essence, Facebook users are shown the ads which, according to Facebook's algorithm, have the highest probability of resulting in an impression or click.[8] The 600 clusters had a joint potential reach of 2,972,500, but 296,121 Facebook users were actually exposed to the ads as a result of Facebook's ad-bidding procedure.

6. 988,200 in the control group; 985,300 in the placebo group; 999,000 in the treatment group.
7. This makes a total project cost of $6,000. However, of the purchased $6,000, Facebook ran our ads for a total value of $3,299, or a mean value of $5.50 per cluster.
8. Further information can be found here: https://www.facebook.com/business/help/1619591734742116.

Figure 2. Facebook Ads (Control versus Placebo versus Treatment Conditions).

Outcomes

Our primary outcome of interest is subjects' revealed intention to donate; in other words, we look at whether the ads encourage people to click on them. We interpret link clicks as people's intention to donate because the ads explicitly solicit donations by encouraging people to click. Indeed, clicking is commonly seen as a crucial antecedent of purchase intentions in the marketing literature (Zhang and Mao 2016). Against this background, and given the explicit solicitations for donations we used in the ads (i.e., "click on one of the listed websites to make a donation"), we assume that ad clicks are a motivational precursor of donation intentions. Accordingly, we use the so-called unique-outbound-link-click rate per cluster as our primary dependent variable. The unique-outbound-click rate (also referred to as click-through rate) denotes a cluster's unique outbound clicks (i.e., the number of people who performed a click that took them off Facebook-owned property) divided by its actual reach (i.e., the number of people who saw our ads at least once). In our case, the outbound click led users to one of the three web pages we developed for this study, depending on the experimental condition they were assigned to. On these web pages, which were labeled as
either the Rutgers Observatory of Food Banks (control group), the Rutgers Observatory of Donation-Funded Food Banks (placebo group), or the Rutgers Observatory of Government-Funded Food Banks (treatment group), subjects were informed about the purpose of the study and provided with a hyperlinked list of either all, donation-funded, or government-funded food banks in NYC.

As a secondary, though not pre-registered, measure, we analyze unique page views (i.e., the number of times people visited their assigned Observatory webpage at least once) as provided by Google Analytics for the three web pages. We calculated the percentage of unique page views relative to the number of people who saw the ads at least once. This measure serves as a robustness check for our primary measure of interest, as it provides the number of users who actually ended up on the web pages.

Results

The ads we ran on Facebook reached a total of 296,121 NYC Facebook users, resulting in a mean click-through rate (CTR) of 0.48 (standard deviation of 0.49) across the 600 clusters (which are our unit of analysis). This means that of the approximately 300,000 Facebook users who saw the ads at least once, 0.48% performed at least one click intended to take them off Facebook and to one of the respective Observatory web pages designed for this study.

To determine whether we find evidence for a crowding-out/-in effect of government funding on donation intentions, we analyzed whether there are systematic differences in users' click-through rates between experimental conditions. The left panel in Figure 3 depicts the results of tests on the equality of means between the control, placebo (i.e., donation), and treatment (i.e., government) groups. Independent two-sample t-tests were conducted to compare (1) the control and treatment conditions (mean difference of 0.00, p = 0.99; Cohen's d = 0.00), (2) the placebo and treatment conditions (mean difference of 0.06, p = 0.21; Cohen's d = −0.13), and (3) the control and placebo conditions (mean difference of −0.06, p = 0.17; Cohen's d = −0.14) (see also Table A1 in the Appendix). In sum, all three experimental conditions are statistically indistinguishable from each other in terms of their CTRs, and the differences are of trivial magnitude in terms of effect sizes (i.e., Cohen's d). This means that we find support for neither the crowding-out nor the crowding-in hypothesis. In addition, there is no significant difference between the control and placebo conditions.

When analyzing unique page views of the Observatory web pages created for this study, we used page-visit data from Google Analytics, which provided us with the total number of unique page visits for each experimental condition. The unit of analysis for this measure is therefore individual Facebook users (n = 296,121). In total, 524 page views were recorded, representing 0.18% of our sample of Facebook users.[9] In the control condition, there were a total of 162 page visits, representing 0.16% of the Facebook users assigned to the control condition. In the placebo and treatment conditions there were, respectively, a total of 168 (0.17%) and 194 (0.19%) page views (see also the right panel of Figure 3). To assess whether these minor differences are systematic, we performed χ2 tests. The percentage-point difference in page visits between the control and treatment conditions is 0.03 (χ2(1) = 2.30, p = 0.13;
Odds Ratio = 1.18), and the difference between the placebo and treatment conditions is 0.02 (χ2(1) = 1.16, p = 0.28; Odds Ratio = 1.12). Between the control and placebo conditions, the percentage-point difference is even smaller (0.01) (χ2(1) = 1.87, p = 0.66; Odds Ratio = 1.05). In sum, all three experimental conditions are statistically indistinguishable from each other, and the differences are small in magnitude with small effect sizes (i.e., Odds Ratios), confirming the results from analyzing Facebook's CTRs. We can therefore confidently reject both the crowding-out and the crowding-in hypotheses.

9. There is an inconsistency between the number of unique outbound clicks reported by Facebook and the actual unique page visits as provided by Google Analytics. This may be the result of some Facebook clicks not reaching their intended destination, with users potentially aborting the connection to a website outside Facebook, possibly for fear of virus-infected websites.

Conclusion

This study has developed and implemented a large-scale but relatively low-cost online experiment in the context of public administration research, in which about 300,000 Facebook users were randomly assigned to different donation solicitation ads. The study found no evidence for either the crowding-in or the crowding-out model. In this sense, our results differ from prior experimental studies that stem from the behavioral laboratory or were produced in the context of hypothetical vignette designs. While our study is certainly not the first to find evidence for neither the crowding-out nor the crowding-in hypothesis (see Blanco et al. 2012; Chan et al. 1996; Wasif and Prakash 2017), we provide, to our knowledge, the first experimental evidence produced within a naturalistic context.

However, our results have to be taken with a grain of salt. Our effective sample size of 600 clusters may have been too small to detect a between-group difference in our secondary outcome measure. Future studies are advised to replicate our design using a larger number of clusters, possibly by stratifying clusters by single age categories (i.e., one for each year) instead of grouping them as we did. In this sense, our findings do not provide a strict theory test of crowding-in versus
crowding-out but provide some suggestive evidence that government funding does not seem to matter in the case of online donation solicitations by local food banks. Future studies may extend this to other areas of human service delivery, including more direct cues of government funding, because people do not always pay close attention to informational cues.

Although there is a long-standing scholarly debate on crowding-out and crowding-in between government funding and private donations, our study adds to this literature the finding that, in an online setting, whether a nonprofit is supported by government funding does not seem to affect donors' behavior. It is therefore possible that, from the donors' perspective, government funding and charitable donations are two independent revenue streams: donors do not necessarily consider a nonprofit's revenue structure, especially whether it is supported by government funding, when making donation decisions. If so, the crowding-out versus crowding-in debate of the last several decades might be overstated. Certainly, we did not examine in the present study how nonprofits strategically modify their fundraising efforts when they receive government funding, which might itself contribute to a crowding-in or crowding-out effect. For example, Andreoni and Payne (2011) argued that the crowding-out effect is mostly driven by nonprofits' decreased fundraising efforts in response to government funding. Again, we cannot guarantee the generalizability of our findings from food banks to other service areas.

The primary aim of this study was to showcase how public administration scholars and government agencies can utilize social media to conduct experimental research in the realm of online citizen–government interactions. Social media experiments have the distinct advantage of combining the internal validity of experiments with increased realism and ecological validity. Furthermore, as we demonstrate in this study, industry contacts are no longer necessary for implementing large-scale social media experiments. Social media platforms such as Twitter or Facebook can easily be used to test a variety of questions important to public administration scholars and practitioners. Online experiments can be conducted on large samples using a variety of unobtrusive outcome measures to assess respondents' revealed behaviors. A research agenda for applying social media experiments in studying online citizen–state interactions has

Figure 3. Experimental Results. Note: ns = not significant at a 95% confidence level.
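As a rough illustration of the χ2 tests summarized in Figure 3, the page-view comparison can be re-computed from the counts reported in the Results section (162 control, 168 placebo, and 194 treatment page views among 296,121 exposed users). The per-condition exposure counts are not reported, so an equal three-way split is assumed here; the resulting statistics therefore only approximate the published ones (χ2(1) = 2.30, p = .13, OR = 1.18 for control versus treatment). This sketch uses a plain Pearson chi-square without continuity correction.

```python
import math

# Reported unique page views per condition (Results section).
views = {"control": 162, "placebo": 168, "treatment": 194}
# ASSUMPTION: the 296,121 exposed users split evenly across conditions.
n_per_group = 296_121 // 3

def chi_square_2x2(v1, n1, v2, n2):
    """Pearson chi-square (df = 1, no continuity correction) for two proportions."""
    a, b, c, d = v1, n1 - v1, v2, n2 - v2
    n = n1 + n2
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # upper-tail p-value for df = 1
    return chi2, p

chi2, p = chi_square_2x2(views["control"], n_per_group, views["treatment"], n_per_group)
odds_ratio = (views["treatment"] / (n_per_group - views["treatment"])) / (
    views["control"] / (n_per_group - views["control"])
)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, OR = {odds_ratio:.2f}")
```

Under the equal-split assumption this yields chi2 ≈ 2.88, p ≈ 0.09, OR ≈ 1.20: slightly different numbers from the published ones (which rest on the true per-condition exposures), but the same substantive conclusion of no significant difference.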
implications for a wide range of areas. Possible applications include assessing the effectiveness of different public communication strategies on social media and how these strategies affect, for example, agency reputation. Indeed, bureaucratic reputation is a core topic in public administration, with studies examining how news media shape agencies' reputation, including trust (e.g., Grimmelikhuijsen, de Vries, and Zijlstra 2018). Little is known about how social media communication strategies may alter agency reputation. Social media experiments could assess the effect of different types of communication strategies on reputation; examples include diffusing responsibility attributions for service failure across actors, active agency branding, or simply publishing information about administrative procedures at random. Another possible area of application is assessing the effectiveness of different targeted job advertisements for government agencies, using, for instance, Facebook's advertisement facilities to encourage underrepresented groups to apply. Recent scholarship has exemplified the importance of signaling personal benefits to increase the number of women and people of color who apply for government jobs (e.g., Linos 2017). Testing and refining such targeted job advertisements and/or messages via social media platforms like Facebook may be a great way to further this promising line of scholarship, and also an excellent evaluation tool for identifying the most effective online recruitment strategy to increase the demographic representation of government agencies. A more obvious area of application is assessing whether the way public performance information is presented online affects citizens' attitudes and subsequent behaviors.
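Studies along these lines would reuse the blocked cluster-randomization described in the Method section: 600 zip-by-age-by-gender clusters, sorted by potential reach into 200 blocks of three, with one cluster per condition within each block (the authors performed this in Stata 14). A minimal sketch of that procedure in Python follows; the per-cluster reach values are simulated, since the actual reach data are not published.

```python
import random

random.seed(2345)  # arbitrary seed for reproducibility of this sketch

# Hypothetical cluster frame: 100 zip codes x 3 age bands x 2 genders = 600 clusters,
# each with a simulated "potential reach" (the study reports roughly <1,000 to 110,000).
zips = [f"zip{i:03d}" for i in range(100)]
ages = ["18-27", "28-38", "39-60"]
genders = ["F", "M"]
clusters = [
    {"zip": z, "age": a, "gender": g, "reach": random.randint(500, 110_000)}
    for z in zips for a in ages for g in genders
]

# Block on cluster size: sort by reach, then randomize conditions within
# consecutive blocks of 3, so each condition gets one cluster per block.
conditions = ["control", "placebo", "treatment"]
clusters.sort(key=lambda c: c["reach"])
for i in range(0, len(clusters), 3):
    block = clusters[i:i + 3]
    for cluster, cond in zip(block, random.sample(conditions, k=3)):
        cluster["condition"] = cond

# Each condition ends up with 200 clusters with a comparable size distribution.
counts = {c: sum(1 for cl in clusters if cl["condition"] == c) for c in conditions}
print(counts)  # {'control': 200, 'placebo': 200, 'treatment': 200}
```

Blocking this way keeps the three experimental arms balanced on cluster size, which is what allows the simple difference-in-means estimator to be used without bias (the Gerber and Green 2012 recommendation the authors follow).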
Indeed, a growing literature in behavioral public administration examines the psychology of performance information (e.g., James and Olsen 2017), assessing whether different ways of presenting numbers affect perceptions of public service performance or satisfaction evaluations of government agencies. This agenda could be extended to social media, where large-scale experiments probe the external validity of prior survey experiments. Social media experiments may also be useful for testing the effectiveness of governmental online crisis warning systems by assessing which types of messages on social media are most likely to be shared across users. Such an agenda would have important practical implications for the effectiveness of early crisis warning systems.

The emergence of Internet-based social media has made it possible for researchers to easily interact with thousands or even millions of citizens. Implementing such innovative experimental designs in the context of online citizen–government interactions may be a viable avenue for future experimentation in public administration research, both for academics and practitioners alike. We hope this contribution sparks a wider implementation of public administration experiments on social media platforms.

Appendix

Table A1. [Contents not recoverable from the extraction.]

FUNDING

This study received funding from the Rutgers SPAA Faculty Research Funds AY16-17. This study was
supported by a National Research Foundation of Korea Grant from the Korean Government [NRF-2017S1A3A2065838].

References

Ajorlou, A., A. Jadbabaie, and A. Kakhbod. 2016. Dynamic pricing in social networks: The word-of-mouth effect. Management Science 64:971–79.
Andreoni, J. 1989. Giving with impure altruism: Applications to charity and Ricardian equivalence. The Journal of Political Economy 97:1447–58.
———. 1993. An experimental test of the public-goods crowding-out hypothesis. The American Economic Review 83:1317–27.
Andreoni, J., and A. A. Payne. 2011. Is crowding out due entirely to fundraising? Evidence from a panel of charities. Journal of Public Economics 95:334–43.
Aral, S. 2016. Networked experiments. In The Oxford Handbook of the Economics of Networks, ed. Yann Bramoullé, Andrea Galeotti, and Brian Rogers. New York: Oxford University Press.
Aral, S., and D. Walker. 2014. Tie strength, embeddedness, and social influence:
A large-scale networked experiment. Management Science 60:1352–70.
Bakshy, E., D. Eckles, R. Yan, and I. Rosenn. 2012a. Social influence in social advertising: Evidence from field experiments. arXiv:1206.4327v1.
Bakshy, E., I. Rosenn, C. Marlow, and L. Adamic. 2012b. The role of networks in information diffusion. arXiv:1207.4145v5.
Blanco, E., M. C. Lopez, and E. A. Coleman. 2012. Voting for environmental donations: Experimental evidence from Majorca, Spain. Ecological Economics 75:52–60.
Blom-Hansen, Jens, Rebecca Morton, and Soren Serritzlew. 2015. Experiments in public management research. International Public Management Journal 18:151–70.
Bolton, G. E., and E. Katok. 1998. An experimental test of the crowding out hypothesis: The nature of beneficent behavior. Journal of Economic Behavior & Organization 37(3):315–31.
Bond, R. M., C. Fariss, J. Jones, A. Kramer, C. Marlow,
J. Settle, and J. Fowler. 2012. A 61-million-person experiment in social influence and political mobilization. Nature 489:295–98.
Borgonovi, F., and M. O'Hare. 2004. The impact of the National Endowment for the Arts in the United States: Institutional and sectoral effects on private funding. Journal of Cultural Economics 28:21–36.
Broockman, David and Donald Green. 2014. Do online advertisements increase political candidates' name recognition or favorability? Evidence from randomized field experiments. Political Behavior 36:263–89.
Brooks, A. C. 1999. Do public subsidies leverage private philanthropy for the arts? Empirical evidence on symphony orchestras. Nonprofit and Voluntary Sector Quarterly 28:32–45.
———. 2000. Is there a dark side to government support for nonprofits? Public Administration Review 60:211–18.
Campbell, Michael and Stephen Walters. 2014. How to design, analyse and report cluster randomised trials in medicine and health related
research. Hoboken: Wiley.
Chan, K. S., S. Mestelman, R. Moir, and R. A. Muller. 1996. The voluntary provision of public goods under varying income distributions. Canadian Journal of Economics 29(1):54–69.
Chan, K. S., R. Godby, S. Mestelman, and R. A. Muller. 2002. Crowding-out voluntary contributions to public goods. Journal of Economic Behavior & Organization 48(3):305–17.
Coppock, A., A. Guess and J. Ternovski. 2016. When treatments are tweets: A network mobilization experiment over Twitter. Political Behavior 38:105–2018.
De Wit, A., and R. Bekkers. 2016. Government support and charitable donations: A meta-analysis of the crowding-out hypothesis. Journal of Public Administration Research and Theory 27:301–19.
De Wit, A., R. Bekkers, and M. Broese van Groenou. 2017. Heterogeneity in
  • 57. Dokko, J. K. 2009. Does the NEA crowd out private charitable contributions to the arts? National Tax Journal 62:57–75. Eckel, C. C., P. J. Grossman, and R. M. Johnston 2005. An experimental test of the crowding out hypothesis. Journal of Public Economics 89:1543–60. Galbiati, R., and P. Vertova. 2008. Obligations and cooperative behaviour in public good games. Games and Economic Behavior 64(1):146– 70. Galbiati, R., and P. Vertova. 2014. How laws affect behavior: Obligations, incentives and cooperative behavior. International Review of Law and Economics 38:48–57. Galizzi, Mateo and Daniel Navarro-Martinez. Forthcoming. On the ex- ternal validity of social preference games: A systematic lab- field study. Management Science. Gerber, Alan and Donald Green. 2012. Field experiments: design, analysis, and interpretation. New York: Norton. Gong, S., J. Zhang, P. Zhao, and X. Jiang. 2017. Tweeting as a marketing tool: A field experiment in the TV industry. Journal of Marketing Research 54:833–50.
Grimmelikhuijsen, Stephan, and Albert Meijer. 2015. Does Twitter increase perceived police legitimacy? Public Administration Review 75:598–607.
Grimmelikhuijsen, S., F. de Vries, and W. Zijlstra. 2018. Breaking bad news without breaking trust: The effects of a press release and newspaper coverage on perceived trustworthiness. Journal of Behavioral Public Administration 1.
Gronberg, T. J., R. A. Luccasen III, T. L. Turocy, and J. B. Van Huyck. 2012. Are tax-financed contributions to a public good completely crowded-out? Experimental evidence. Journal of Public Economics 96(7–8):596–603.
Güth, W., M. Sutter, and H. Verbon. 2006. Voluntary versus compulsory solidarity: Theory and experiment. Journal of Institutional and Theoretical Economics 162(2):347–63.
Hayes, Richard, and Lawrence Moulton. 2017. Cluster randomised trials. Boca Raton: Chapman & Hall/CRC.
Heutel, G. 2014. Crowding out and crowding in of private donations and government grants. Public Finance Review 42:143–75.
Horne, C. S., J. L. Johnson, and D. M. Van Slyke. 2005. Do charitable donors know enough—and care enough—about government subsidies to affect private giving to nonprofit organizations? Nonprofit and Voluntary Sector Quarterly 34:136–49.
Hsu, L. C. 2008. Experimental evidence on tax compliance and voluntary public good provision. National Tax Journal 61(2):205–23.
Hughes, P., W. Luksetich, and P. Rooney. 2014. Crowding-out and fundraising efforts. Nonprofit Management and Leadership 24(4):445–64.
Im, T., W. Cho, G. Porumbescu, and J. Park. 2014. Internet, trust in government, and citizen compliance. Journal of Public Administration Research and Theory 24(3):741–63.
Isaac, R. M., and D. A. Norton. 2013. Endogenous institutions and the possibility of reverse crowding out. Public Choice 156:253–84.
James, Oliver, Sebastian Jilke, and Gregg Van Ryzin, eds. 2017a. Experiments in public management research: Challenges and opportunities. Cambridge: Cambridge University Press.
James, Oliver, Sebastian Jilke, and Gregg Van Ryzin. 2017b. Causal inference and the design and analysis of experiments. In Experiments in public management research: Challenges and opportunities, ed. Oliver James, Sebastian Jilke, and Gregg Van Ryzin, 59–88. Cambridge: Cambridge University Press.
Journal of Public Administration Research and Theory, 2019, Vol. 29, No. 4. Downloaded from https://academic.oup.com/jpart/article-abstract/29/4/627/4995543 by Florida International University Library Serials user on 18 April 2020.
James, Oliver, and Asmus Leth Olsen. 2017. Citizens and public performance measures: Making sense of performance information. In Experiments in public management research: Challenges and opportunities, ed. Oliver James, Sebastian Jilke, and Gregg Van Ryzin, 270–89. Cambridge: Cambridge University Press.
Khanna, J., and T. Sandler. 2000. Partners in giving: The crowding-in effects of UK government grants. European Economic Review 44:1543–56.
Kim, Mirae, and Gregg G. Van Ryzin. 2014. Impact of government funding on donations to arts organizations: A survey experiment. Nonprofit and Voluntary Sector Quarterly 43:910–25.
King, G., J. Pan, and M. E. Roberts. 2013. How censorship in China allows government criticism but silences collective expression. American Political Science Review 107(2):1–18.
King, G., B. Schneer, and A. White. 2017. How the news media activate public expression and influence national agendas. Science 358:776–80.
Kingma, B. R. 1989. An accurate measurement of the crowd-out effect, income effect, and price effect for charitable contributions. Journal of Political Economy 97:1197–207.
Kobayashi, T., and Y. Ichifuji. 2015. Tweets that matter: Evidence from a randomized field experiment in Japan. Political Communication 32:574–93.
Konow, J. 2010. Mixed feelings: Theories of and evidence on giving. Journal of Public Economics 94(3–4):279–97.
Korenok, O., E. L. Millner, and L. Razzolini. 2012. Are dictators averse to inequality? Journal of Economic Behavior & Organization 82:543–47.
Kramer, A., J. Guillory, and J. Hancock. 2014. Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences of the United States of America 111:8788–90.
Krasteva, S., and H. Yildirim. 2013. (Un)informed charitable giving. Journal of Public Economics 106:14–26.
Ledyard, J. 1995. Public goods: A survey of experimental research. In Handbook of experimental economics, ed. J. Kagel and A. Roth, 111–94. Princeton: Princeton University Press.
Levitt, Steven, and John List. 2007. What do laboratory experiments measuring social preferences reveal about the real world? Journal of Economic Perspectives 21:153–74.
Li, H., and G. Van Ryzin. 2017. A systematic review of experimental studies in public management journals. In Experiments in public management research: Challenges and opportunities, ed. O. James, S. Jilke, and G. Van Ryzin. Cambridge: Cambridge University Press.
Lilley, A., and R. Slonim. 2014. The price of warm glow. Journal of Public Economics 114:58–74.
Linos, E. 2017. More than public service: A field experiment on job advertisements and diversity in the police. Journal of Public Administration Research and Theory 28(1):67–85.
Lu, J. 2015. Which nonprofit gets more government funding? Nonprofit Management and Leadership 25:297–312.
———. 2016. The philanthropic consequence of government grants to nonprofit organizations. Nonprofit Management and Leadership 26:381–400.
Mergel, I. 2012. Social media in the public sector: A guide to participation, collaboration and transparency in the networked world. Hoboken: Wiley/Jossey-Bass.
Middleton, Joel, and Peter Aronow. 2015. Unbiased estimation of the average treatment effect in cluster-randomized experiments. Statistics, Politics and Policy 6:39–75.
Munger, K. 2017. Tweetment effects on the tweeted: Experimentally reducing racist harassment. Political Behavior 39(3):629–49.
Okten, C., and B. A. Weisbrod. 2000. Determinants of donations in private nonprofit markets. Journal of Public Economics 75:255–72.
Olsen, A. 2017. Adventures into the negativity bias. Paper presented at the EGPA conference in Milan, Italy.
Ottoni-Wilhelm, M., L. Vesterlund, and H. Xie. 2017. Why do people give? Testing pure and impure altruism. American Economic Review 107:3617–33.
Porumbescu, Gregory. 2015. Comparing the effects of e-government and social media use on trust in government: Evidence from Seoul, South Korea. Public Management Review 18:1308–34.
Public Religion Research Institute. 2012. Few Americans use social media to connect with their faith communities. http://publicreligion.org/research/2012/08/july-2012-tracking-survey/ (accessed April 30, 2018).
Reeson, A. F., and J. G. Tisdell. 2008. Institutions, motivations and public goods: An experimental test of motivational crowding. Journal of Economic Behavior & Organization 68(1):273–81.
Roberts, R. D. 1984. A positive model of private charity and public transfers. The Journal of Political Economy 92:136–48.
Rose-Ackerman, S. 1981. Do government grants to charity reduce private donations? In Nonprofit firms in a three sector economy, ed. M. J. White, 95–114. Washington, DC: Urban Institute.
Ryan, T. 2012. What makes us click? Demonstrating incentives for angry discourse with digital-age field experiments. The Journal of Politics 74:1138–52.
Ryan, T. J., and T. Brader. 2017. Gaffe appeal: A field experiment on partisan selective exposure to election messages. Political Science Research and Methods 5:667–87.
Ryan, T., and D. Broockman. 2012. Facebook: A new frontier for field experiments. The Experimental Political Scientist: Newsletter of the APSA Experimental Section 3:2–8.
Smith, T. M. 2007. The impact of government funding on private contributions to nonprofit performing arts organizations. Annals of Public and Cooperative Economics 78:137–60.
Statista. 2017. Number of Facebook users by age in the U.S. as of January 2017. https://www.statista.com/statistics/398136/us-facebook-user-age-groups/ (accessed April 30, 2018).
Steinberg, R. 1987. Voluntary donations and public expenditures in a federalist system. The American Economic Review 77:24–36.
Sutter, M., and H. Weck-Hannemann. 2004. An experimental test of the public goods crowding out hypothesis when taxation is endogenous. FinanzArchiv: Public Finance Analysis 60(1):94–110.
Suárez, D. F. 2011. Collaboration and professionalization: The contours of public sector funding for nonprofit organizations. Journal of Public Administration Research and Theory 21:307–26.
Teresi, H., and M. Michelson. 2015. Wired to mobilize: The effect of social networking messages on voter turnout. The Social Science Journal 52:195–204.
Thomas, J. C., and G. Streib. 2003. The new face of government: Citizen-initiated contacts in the era of e-government. Journal of Public Administration Research and Theory 13:83–102.
Tinkelman, D. 2010. Revenue interactions: Crowding out, crowding in, or neither? In Handbook of research on nonprofit economics and management, ed. B. A. Seaman and D. R. Young, 18–41. Northampton, MA: Edward Elgar.
Wang, J. C., and C. H. Chang. 2013. How online social ties and product-related risks influence purchase intentions: A Facebook experiment. Electronic Commerce Research and Applications 12:337–46.
Warr, P. G. 1982. Pareto optimal redistribution and private charity. Journal of Public Economics 19:131–38.
Wasif, R., and A. Prakash. 2017. Do government and foreign funding influence individual donations to religious nonprofits? A survey experiment in Pakistan. Nonprofit Policy Forum 8(3):237–73.
Wukich, C., and I. Mergel. 2015. Closing the citizen–government communication gap: Content, audience, and network analysis of government tweets. Journal of Homeland Security and Emergency Management 12:707–35.
Zhang, J., and E. Mao. 2016. From online motivations to ad clicks and to behavioral intentions: An empirical study of consumer response to social media advertising. Psychology & Marketing 33:155–64.