This work is distributed as a Discussion Paper by the
STANFORD INSTITUTE FOR ECONOMIC POLICY RESEARCH
SIEPR Discussion Paper No. 12-014
Expanding College Opportunities for
High-Achieving, Low Income Students
By
Caroline Hoxby and Sarah Turner
Stanford Institute for Economic Policy Research
Stanford University
Stanford, CA 94305
(650) 725-1874
The Stanford Institute for Economic Policy Research at Stanford University supports
research bearing on economic and public policy issues. The SIEPR Discussion Paper
Series reports on research and policy analysis conducted by researchers affiliated
with the Institute. Working papers in this series reflect the views of the authors and
not necessarily those of the Stanford Institute for Economic Policy Research or
Stanford University.
Expanding College Opportunities for
High-Achieving, Low Income Students
Caroline Hoxby Sarah Turner
Stanford University University of Virginia
NBER NBER
Abstract
Only a minority of high-achieving, low-income students apply to colleges in the same way
that other high-achieving students do: applying to several selective colleges whose
curriculum is designed for students with a level of achievement like their own. This is
despite the fact that selective colleges typically cost high-achieving, low-income
students less while offering them more generous resources than the non-selective
postsecondary institutions they mainly attend. In previous work, we demonstrate that the
vast majority of high-achieving, low-income students are unlikely to be reached by
traditional methods of informing students about their college opportunities since such
methods require the students to be concentrated geographically. In this study, we use a
randomized controlled trial to evaluate interventions that provide students with semi-
customized information on the application process and colleges' net costs. The interventions
also provide students with no-paperwork application fee waivers. The ECO Comprehensive
(ECO-C) Intervention costs about $6 per student, and we find that it causes high-achieving,
low-income students to apply and be admitted to more colleges, especially those with high
graduation rates and generous instructional resources. The students respond to their
enlarged opportunity sets by enrolling in colleges that have stronger academic records,
higher graduation rates, and more generous resources. Their freshman grades are as good
as the control students', despite the fact that the control students attend less selective
colleges and therefore compete with peers whose incoming preparation is substantially
inferior. Benefit-to-cost ratios for the ECO-C Intervention are extremely high, even under
the most conservative assumptions.
JEL Codes: I21, I23, I24
Acknowledgements. This project was instigated and initially supported by a group of college leaders,
deans of admissions, and deans of financial aid including those of Stanford University, Princeton University,
Syracuse University, The University of Michigan, University of Virginia, Duke University, Harvard University,
Massachusetts Institute of Technology, Northwestern University, Smith College, The University of California,
Vassar College, and participants in the Windsor Group meetings of higher education leaders. We especially wish to
acknowledge John Hennessy and Roberta Katz of Stanford University for their extraordinary effort in securing initial
support for the project. John Casteen III of the University of Virginia convened the Windsor Group meeting at
which this project was initially discussed. We had important conversations with numerous deans of admission and
financial aid including William Fitzsimmons (Harvard), Rick Shaw (Stanford), Ted Spencer (University of
Michigan), Janet Rapelye (Princeton), and Greg Roberts (University of Virginia). This project would simply not
have been possible without data and support from The College Board and ACT. At The College Board, we
especially thank Herb Elish, Connie Betterton, Anne Sturvenant, Ryan Williams, Michael Matthews, Sharon Lavelle,
and Bettina Donohoe. They were crucial. We also thank David Coleman for continuing support of the project into
his Presidency of The College Board. At ACT, we especially thank Robert Ziomek and Ranjit Sidhu. Crucial, early
funding for the project came from the Andrew W. Mellon Foundation, the Spencer Foundation for Research Related
(continued...)
1 Introduction
In previous work (Hoxby and Avery, forthcoming), we show that the vast majority of very high-
achieving students from low-income families do not apply to any selective college or university.2
(We call these students "income-typical.") Only a small minority of high-achieving, low-income
students apply in a manner that resembles that of their high-achieving counterparts from more
affluent families--namely, applying to several colleges that enroll students whose incoming
achievement is similar to their own. (We call these students "achievement-typical.") What puzzles
many observers is that income-typical students, having worked hard in high school to prepare
themselves extremely well for college, do not even apply to the colleges whose curriculum is most
geared toward students with their level of preparation. We hereafter call these "peer" colleges as a
reminder that these are the colleges where most of their peers would be similarlypreparedand where
the curriculum is designed with such students in mind.
In the aforementioned work, we eliminate several explanations for this puzzle. First, the
(...continued)
to Education, and the Smith Richardson Foundation. We are grateful to Harriet Zuckerman of Mellon, Michael
McPherson of Spencer, and Mark Steinmeyer of Smith Richardson for key seminal discussions. The interventions,
survey, and evaluation were supported by grant R305A100120 from the U.S. Department of Education's Institute for
Education Sciences (IES) and by a generous grant from the Bill & Melinda Gates Foundation. We have been greatly
helped by discussions with our IES program officers Karen Douglas and Hiromi Ono and by the energy and
creativity of our Bill & Melinda Gates Foundation program officer Greg Ratliff. We had important conversations
with William Bowen (Mellon), Catherine Hill (Vassar), and Henry Bienen (Northwestern). Joshua Leake-Campbell
and LaTonya Page at the National Student Clearinghouse have been a great help with data. Finally, we are grateful
to the ECO Project staff and the many research assistants who made the interventions and surveys come to life. We
would like especially to thank Stephanie Adams, Christina Harvey, Alexandra Cowell, Jacqueline Basu, and Casey
Cox. Deborah Carvalho, Marian Perez, Karen Prindle, and Shehnaz Kahn of the Stanford Institute for Economic
Policy Research ably assisted us in project management.
2 Hereafter, "high-achieving" refers to a student who scores at or above the 90th percentile on the ACT
comprehensive or the SAT I (math and verbal) and who has a high school grade point average of A- or above. This
is approximately 4 percent of U.S. high school students. Hereafter, "low income" and "high income" mean,
respectively, the bottom tercile and top quartile of the income distribution for families with a child who is a high
school senior. Note that these are slightly different income categories than we study in Hoxby and Avery (forthcoming).
When we say "selective college" in a generic way, we refer to colleges and universities that are in categories from
Very Competitive Plus to Most Competitive in Barron's Profiles of American Colleges. There were 236 such
colleges in the 2008 edition. Together, they have enrollment equal to 2.8 times the number of students who scored at
or above the 90th percentile on the ACT or SAT I. Later, we are more specific about colleges' selectivity: we define
schools that are peer schools for an individual student, based on a comparison between his college aptitude test
scores and the median aptitude test scores of enrolled students at the school.
income-typical students are not using reliable information to predict that they will fail at peer
colleges because the high-achieving, low-income students who do apply are admitted, enroll,
progress, and graduate at the same rates as high-income students with equivalent test scores and
grades.3 Second, the income-typical students are not deterred by accurate information about
colleges' net costs (the total a student will pay for tuition, fees, and living expenses). For high-
achieving, low-income students, the peer institutions that will admit them have lower net costs than
the far less selective or non-selective post-secondary institutions that most of them attend. (See
Appendix Table 1.)4 Third, low-income students can have most application fees and testing fees
waived for them if they complete sufficient paperwork, so the actual availability of fee waivers is
not itself a problem. Fourth, achievement-typical students come from households and neighborhoods
that are just as disadvantaged, on numerous socio-economic dimensions, as income-typical students
do.5
In our previous work, we observe that achievement-typical students are concentrated in a small
number of high schools and in very large metropolitan areas. Therefore, they are likely to be reached
by traditional methods of informing students about their college opportunities, including (i) expert
college guidance counseling at high schools with a critical mass of high-achievers, (ii) admissions
staff visiting high schools or areas with a critical mass of high-achievers, and (iii) colleges' encouraging
visits by local students. In contrast, income-typical students--who are the vast majority of high-
achieving, low-income students--are dispersed. They are often the sole or one of only a few high-
achieving students in their school or area. Thus, their high school counselor is unlikely to have much
expertise about selective colleges and probably must focus on other issues. Admissions staff cannot
visit their high school or area in a cost-effective manner. Income-typical students tend not to live
3 See Hoxby and Avery (forthcoming) for information on high-achieving, low-income students. For
broader evidence that the application stage is where students' behavior differs most, see Avery, Hoxby, Jackson,
Burek, Pope, and Raman (2006), Bowen, Kurzweil, and Tobin (2005), Roderick, Nagaoka, Coca, and Moeller
(2009), Avery and Turner (2009), Pallais (2009), Bowen, Chingos and McPherson (2009), Dillon and Smith (2012),
and Smith, Pender, and Howell (forthcoming).
4 This evidence is also discussed at some length in Hoxby and Avery (forthcoming), Pallais and Turner
(2006), and Hill, Winston, and Boyd (2005).
5 See Hoxby and Avery (forthcoming).
in metropolitan areas that are home to multiple selective colleges. In short, many high-achieving,
low-income students may be unreached by traditional information methods even if counselors and
admissions staff conscientiously do everything that is cost-effective for them to do.
There are two remaining explanations for the behavior of income-typical students. First,
income-typical students could be well informed about their college opportunities, net costs,
probabilities of success, and the availability of fee waivers. However, because they come from high
schools and communities where students with their achievement are rare, they could have formed
preferences or relationships that make them averse to attending postsecondary institutions that differ
from those that many of their high school classmates attend. For instance, an income-typical student
could be extremely well-informed but could prefer to attend a local, non-selective institution because
he has a family member who needs daily help or because he is romantically involved with a low-
achieving student who could not attend a selective college. Second, income-typical students could
be poorly informed about their college opportunities and/or deterred by apparently small costs such
as the paperwork associated with fee waivers. Although a great deal of information is apparently
available on the internet, it is not easy for a neophyte to distinguish reliable sources of information
on colleges' curricula, peers, and net costs from the numerous unreliable (sometimes egregiously
misleading) sources that are available. Furthermore, the information available is not only not
customized, it tends to assume that low-income students are low-achieving and gives them guidance
that corresponds to this assumption. Probably because they are atypical, high-achieving, low-
income students might find very little information oriented toward them.
These two explanations--income-typical students are informed but prefer non-selective colleges,
income-typical students are uninformed but would prefer peer colleges if they knew about them--are
not mutually exclusive. What matters, from society's point of view, is not whether the second
explanation accounts for most or all of the income-typical students' behavior. What matters is (i)
whether the second explanation accounts for any of the income-typical students' behavior and (ii)
whether there exists a cost-effective way to give such students the information and means they need
to realize their full array of college opportunities. It is crucial to understand why these two things
(and only these two things) matter.
Any income-typical student has already academically prepared himself for college through a
combination of his own effort and society's investments in him (through tax support of public
schools, income transfers, other social insurance programs, philanthropy, etc.). These investments
are not small: approximately 14,000 hours of the student's own time in school plus thousands of
hours spent on homework, $180,000 in public school expenditures on the average student from
kindergarten to grade twelve, Medicaid and related health expenditures directed to low-income
families, Earned Income Tax Credits, and so on. For many students, these personal and social
investments may not bear obvious fruit, but for the income-typical student they have: excellent test
scores and grades that indicate that he is well-prepared for college and the careers that college
graduates pursue. If the only reason that the student does not take advantage of his full range of
college opportunities is that he is unaware of these opportunities or deterred from exploring them
for essentially trivial reasons (such as the paperwork associated with fee waivers), then much of the
enormous investment already made is potentially wasted. That is, the investment potentially earns
much smaller returns than it could because of barriers that should be insignificant. Society should
care about the number of such income-typical students because the social loss is proportional to that
number. The ratio of income-typical students who fit under the two explanations is, in contrast,
unimportant except insofar as the ratio affects the cost effectiveness of any intervention. Put another
way, income-typical students who are already fully informed and prefer not to apply to peer colleges
cannot be hurt by receiving information that, according to the first explanation, they already have.
Thus, providing them with information can reduce the cost effectiveness of an intervention, but can
do nothing more.
In this study, we simultaneously test (i) whether there are income-typical students who would
change their behavior if they knew more about colleges and (ii) whether we can construct a cost-
effective way to inform and help such students realize their full array of college opportunities. We
implement this test by randomly assigning interventions that contain semi-customized information
and/or no-paperwork application fee waivers to 39,677 students, including 7,749 students who serve
as controls (they are evaluated but receive no treatment). The interventions are designed to be low-
cost (about $6 per student) and fully scalable. Crucially, the interventions are delivered directly to
the students and, therefore, do not depend on students being concentrated geographically or having
a counselor who has time to work with them. The interventions are specifically designed to test
prominent hypotheses about the information and other barriers most often thought to affect high-
achieving, low-income students. We survey the students and also use the National Student
Clearinghouse's (NSC's) administrative data to ascertain how the interventions affect students'
college applications, admissions, enrollment, and --to a limited extent given the recentness of the
interventions-- success in college.
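As a rough illustration of the assignment step described above, the sketch below randomly partitions a student list into a control group and several treatment arms. It is a minimal sketch assuming simple randomization; the function, the arm names, and any counts in the example are illustrative, not the project's actual assignment procedure.

```python
import random

def assign_groups(student_ids, n_control, treatment_arms, seed=0):
    """Randomly split students into a control group and treatment arms.
    Simple-randomization sketch; arm names and sizes are illustrative."""
    rng = random.Random(seed)
    ids = list(student_ids)
    rng.shuffle(ids)
    # the first n_control shuffled students serve as controls
    assignment = {sid: "control" for sid in ids[:n_control]}
    rest = ids[n_control:]
    arm_size = len(rest) // len(treatment_arms)
    for i, arm in enumerate(treatment_arms):
        start = i * arm_size
        # the last arm absorbs any remainder from integer division
        end = (i + 1) * arm_size if i < len(treatment_arms) - 1 else len(rest)
        for sid in rest[start:end]:
            assignment[sid] = arm
    return assignment
```

Because the seed is fixed, the assignment is reproducible, which is useful when the evaluation must later verify who was assigned to what.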
We show both intention-to-treat and treatment-on-the-treated effects of the interventions. We
set a very conservative standard for a student's having been treated: whether he or she was aware
of having seen the intervention materials at all. The intention-to-treat effects are dilute versions of
the treatment-on-the-treated effects because a good share of the interventions were not actually
received or noticed by participants--most often owing to family members' discarding the intervention
materials because they did not recognize the bona fides of the intervention organization, The
Expanding College Opportunities project (ECO). The lack of recognition of ECO is inherent in the
randomized controlled trial: ECO could not establish a high profile among the public without
contaminating the control group. Because this dilution would likely not occur if the interventions
were brought to scale by a highly reputable organization such as The College Board, ACT, or another
third party, the treatment-on-the-treated effects are the best guide to the likely effects of the
interventions at scale.
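One common way to formalize this dilution is the Bloom rescaling: when non-compliance is one-sided (some assigned students never notice the materials, and no control student is treated), the treatment-on-the-treated effect is the intention-to-treat effect divided by the contact rate. The sketch below uses hypothetical numbers, not the paper's estimates, and the function name is illustrative.

```python
def treatment_on_treated(itt_effect, contact_rate):
    """Bloom adjustment: rescale an intention-to-treat effect by the
    share of assigned students who actually noticed the materials.
    Assumes no control-group member received the intervention."""
    if not 0.0 < contact_rate <= 1.0:
        raise ValueError("contact rate must be in (0, 1]")
    return itt_effect / contact_rate

# Hypothetical numbers: if assignment raised an outcome by 0.3 on
# average but only 60% of assigned students noticed the materials,
# the effect on those actually treated is 0.3 / 0.6 = 0.5.
tot_effect = treatment_on_treated(0.3, 0.6)
```

The rescaling makes concrete why, at a 100% contact rate (a reputable organization running the program openly), the scaled-up effect should look like the treatment-on-the-treated estimate rather than the diluted intention-to-treat one.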
The remainder of this paper is organized as follows. In the next section, we explain the
interventions and how they were targeted to students. We describe the data we use to select potential
participants and to evaluate their responses. In section 3, we demonstrate that the randomization
worked--that is, the treatment and control groups look alike on observable characteristics.
We also show that there was no differential survey response or differential NSC coverage that could
plausibly bias our results. In section 4, we present the effects of the ECO-C Intervention, which
combines semi-customized information and no-paperwork fee waivers, on students' applications,
admissions, and enrollment. In section 5, we demonstrate that the ECO-C Intervention had its
greatest effects on students in our target group: low-income students who are relatively isolated
high-achievers. In section 6, we evaluate the role played by specific elements of the intervention by
examining the outcomes of students who were assigned only parts of the ECO-C Intervention. We
perform a variety of cost-benefit analyses in Section 7, and we conclude in Section 8.
Owing to the large amount of material we cover in this paper, we have written a companion paper
that more thoroughly describes how the ECO project was implemented and the research literature
related to the ECO project. We also confine some material to online appendices.
2 The Expanding College Opportunities Project
a The Origins of the Expanding College Opportunities Project
Using administrative data for the entire population of students who take any College Board or
ACT exam, Hoxby and Avery (forthcoming) show that many high-achieving, low-income students
fail to apply to any selective postsecondary institution. In failing to do so, they act in a manner that
is typical of students with the same income (they are "income-typical," despite their unusually high
achievement) rather than the manner typical of students with the same achievement ("achievement-
typical"). This evidence is reinforced by studies of mid-achieving, low-income students who, it
appears, are also unlikely to choose colleges for which they academically best prepared. Given the6
mounting array of evidence, a number of leaders of postsecondary institutions decided to support
research on interventions that might inform low-income students about their college opportunities.7
Several foundations and the Institute for Education Sciences also provided support. The two main8
college assessment organizations in the U.S., the College Board and ACT, agreed to provide data.
A consortium model of support was logical because informing students about their opportunities
is a public service. If one institution made the effort to inform students on its own, many of the
benefits would accrue to other colleges. That is, informing high-achieving, low-income students
about their full array of college opportunities may benefit society, but it does not ensure that any
particular college ends up with students whom it prefers.
Moreover, it would not make sense for one postsecondary institution to undertake the task of
informing students because there are enormous economies of scale in the data gathering and database
6 See, for instance, Bowen, Chingos and McPherson (2009), Dillon and Smith (2012), and Smith, Pender, and
Howell (forthcoming).
7 See the acknowledgments.
8 See the acknowledgments.
infrastructure that support semi-customized interventions like those of the ECO Project. By "semi-
customized," we mean that each part of the interventions has a standard framework but that this
framework is filled with information that is most likely to be relevant to the student. That is, given
a rich database infrastructure and intervention framework, we can ensure that students see
information about colleges that are local, colleges at which they will pay in-state rates, financial aid
for which they would qualify, and the like. The semi-customization of the interventions is described
below.
The project focuses on high-achieving, low-income students because these students' behavior
deviates the most from the behavior of affluent students with the same achievement. The focus also
made sense because high-achieving, low-income students generally pay less to attend a college that
has higher graduation rates, richer instructional resources, and a curriculum designed for students
who are highly prepared, as they are (Appendix Table 1).
The interventions were designed to meet several constraints. First, the aforementioned research
had demonstrated that income-typical students could not be informed about college in a cost-
effective manner via traditional recruiting methods that depend on students being geographically
concentrated. This is because income-typical students are geographically dispersed. (The opposite
is true of the small minority of high-achieving, low-income students who exhibit achievement-
typical behavior. These students are highly concentrated in a small number of schools and areas.)
Thus, we designed interventions that did not depend on geographic concentration for their efficacy.
They did not require that students show up at a central location or be visited in person, for instance.
Second, the research had demonstrated that income-typical students were extremely unlikely to have
a teacher or counselor who was expert in the process of applying to selective colleges or who had
time to research colleges with the rare high-achieving student. Thus, the interventions were designed
to go directly to students themselves, not to teachers or counselors who would have to relay them
to students. Third, the interventions were designed to be both fully scalable and semi-customized
in ways that were inexpensive and yet "smart." By "smart," we mean that the interventions were
intended to take advantage of the enormous economies of scale that arise when a central organization
efficiently uses large databases to inform the intervention that each person encounters. It is these
economies of scale that account for the interventions' low cost of about $6 per student. Fourth, the
interventions were designed to take advantage of the enormous amount of reliable information about
colleges that was already available. A naive student may find it hard to determine for himself which
online and other information sources are reliable, but the interventions could inform students about
how to use accurate data sources such as the U.S. Department of Education's College Navigator.
Fifth and perhaps most important, because informing students about their opportunities was a public
service, the interventions were written in the voice of a trusted third party. The interventions are
designed to help a student gather the information that he needs to make a knowledgeable decision
for himself. They do not direct students to any particular college or group of colleges.
b Hypotheses about Income-Typical Students' Behavior and the
Interventions Designed to Test Them
There are at least four prominent hypotheses about why income-typical students exhibit
application behaviorverydifferent from that of high-achieving, high-income students. We designed
the interventions to test each of the four hypotheses.
Hypothesis I is that income-typical students lack the advice that an expert college counselor
would give a high-achieving student. An expert counselor would advise such a student to apply to
eight or more colleges, most of which would be peer colleges whose median student scores within
5 percentiles of the student's own score. A student would also typically be advised to apply to a
couple of colleges whose median student scores 5 to 10 percentiles above him and one or more
colleges whose median student scores 5 to 10 percentiles below him. These colleges are sometimes
described as "match," "reach," and "safety" colleges, but we prefer the "peer" nomenclature because
it focuses us on preparation and curriculum as opposed to strategy. An expert counselor would also
advise a student to obtain letters of reference; take college assessments on schedule; send verified
assessment scores to colleges; write application essays; complete the Free Application for Federal
Student Aid and the CSS Profile;9 and meet all other deadlines and requirements of selective colleges'
applications. Finally, an expert college counselor would advise a student to compare colleges on
9 The CSS Profile gathers financial and tax information similar to that required by the FAFSA. It also
requires some additional information. The colleges and universities that offer the most generous financial aid tend to
require the CSS Profile in addition to the FAFSA. This is because they require more information to construct richer
aid packages.
the basis of their curricula, instructional resources, other resources (housing, extracurricular
resources), and outcomes (such as graduation rates).
The Application Guidance intervention was designed to test Hypothesis I. It provides the
aforementioned information and gives students timely and customized reminders about deadlines
and requirements. It provides students with semi-customized tables that compare colleges'
graduation rates. The student is always confronted with the graduation rates of his nearest colleges,
his state's flagship public university, other in-state selective colleges, and a small number of out-of-
state selective colleges. (Colleges in the latter two categories are selected at random from among those that
qualify.) Students' secondary materials show the graduation rates of four-year colleges nationwide.
Moreover, students' primary materials explain how to use tools like the College Navigator to
investigate colleges' curricula, instructional resources, and housing in detail.
Hypothesis II is that income-typical students misperceive their costs of attending selective
colleges. Specifically, we hypothesize that students focus unduly on colleges' "list prices" (the
tuition and fees that an affluent student who received no aid would pay) and fail to understand the
net costs for students like themselves. We also hypothesize that students are unaware that some
colleges provide financial aid for living expenses while other colleges do not. We suspect that
income-typical students do not realize that they would generally pay less to attend colleges that were
more selective and that had richer instructional and other resources (Appendix Table 1).10
The Net Cost intervention was designed to test Hypothesis II. It provides students with
information about net costs for low- to middle-income students at an array of colleges. The materials
are semi-customized in that a student will always see the list prices, instructional spending, and net
costs of his state's public flagship university, at least one other in-state public college, nearby
colleges, a selective private college in his state, one out-of-state private liberal arts college, and one
10 The reason why a student might especially mistake the cost of attendance at selective colleges is that
institutional aid is a large part of a low-income student's financial aid package. This is especially true since 2000
because selective colleges have increased their institutional aid to low-income students (Hill, Winston and Boyd,
2005; Avery, Hoxby, Jackson, Burek, Pope, and Raman, 2006; Pallais and Turner, 2006). It is not surprising, then,
that Avery and Kane (2004) find that students from economically disadvantaged backgrounds are particularly likely
to err when they estimate what various colleges would cost them. Similarly, Avery and Hoxby (2004) show that,
compared to their higher-income counterparts, low-income, high-achieving students are less likely to understand
their actual cost of attendance and more likely to be confused by superficial variables, such as list prices and whether
a grant is called a "scholarship".
out-of-state private selective university. (Institutions in the latter categories are selected at random
from among those that fit the criteria.) The net cost information is shown for hypothetical families
with incomes of $20,000, $40,000, and $60,000. Students' secondary materials contain net cost
information for a fuller array of colleges nationwide.
The Net Cost materials are not intended to give a student precise information about his net costs
but, rather, to make him recognize that list prices and net costs can differ greatly--especially at
selective institutions. The materials repeatedly state that a student will not learn exactly how much
a given college will cost him unless he applies. The Net Cost materials also explain how financial
aid works, emphasize how crucial it is to complete the FAFSA and CSS Profile on time, clarify how
a student's Expected Family Contribution is computed, decipher a prototypical financial aid offer,
and illustrate the trade-offs between loans, grants, and working while in college.
Hypothesis III is that income-typical students are deterred from applying to college by
application fees. At first glance, this hypothesis might seem unlikely because low-income students
are eligible to have most application fees waived. (They are also eligible to have most college
assessment fees waived.) Obtaining a fee waiver requires some paperwork, most often income
verification by a counselor or similar authority. Students can also qualify for College Board fee
waivers by completing the CSS Profile. None of the required paperwork is particularly onerous, and
it is a modest element of the entire process of applying to a selective college. Nevertheless,
Hypothesis III is plausible for a few reasons. A student may fail to realize that fee waivers are
available until it is too late to qualify for them. (Details about obtaining a waiver are often on the
final screens or pages of a college application.)11 Or, a student who may be willing to fill out
financial aid forms to be analyzed by a stranger may still balk at revealing his family's income to a
counselor. Or, his counselor may be too busy to do his part of the fee waiver process. Indeed,
research by Bettinger, Long, Oreopoulos and Sanbonmatsu (2009) suggests that apparently modest
FAFSA paperwork deters some students from applying to college.12
[11] For instance, a student who is applying to colleges online might not see detailed information on fee
waivers until the very last screens of the process.
[12] However, it is not obvious that the Bettinger, Long, Oreopoulos, and Sanbonmatsu (2009) evidence is
relevant to our target students. The students they evaluate tend to be marginal applicants for any college, not very
(continued...)
The Fee Waiver intervention is designed to test Hypothesis III. It provides students with no-
paperwork fee waivers that allow them to apply to 171 selective colleges.[13] When we recruited
colleges to accept ECO fee waivers, we specifically agreed to reimburse institutions for any case in
which a student utilized a fee waiver when he was, in fact, ineligible based on that institution's
waiver criteria.[14] The Fee Waiver materials instruct students on how to submit an ECO fee waiver--
some institutions preferred students to mail paper waivers while others preferred that students enter
an ECO code on their online applications.
Hypothesis IV is that it is the parents of income-typical students, rather than the students
themselves, who cause the divergence in application behavior by family income. Specifically, there
are ethnographic studies that suggest that low-income parents fail to perceive differences among
postsecondary institutions.[15] For instance, they may fail to differentiate among institutions that offer
2-year versus 4-year degrees, have low versus high graduation rates, have poor versus rich
instructional resources, offer meager versus generous financial aid, and so on. Thus, parents may
focus on low-list-price institutions that are very nearby, to the exclusion of all other alternatives.
The Parent Intervention was designed to test Hypothesis IV. It consists of materials that cover
the same information presented to students in the Application Guidance and Net Cost interventions,
but the materials are explicitly addressed to parents and the information is conveyed
differently. Specifically, the materials are written to be accessible to adults who have limited
education, limited familiarity with American higher education, and limited English skills. Parents
who live in a neighborhood where Spanish speakers prevail received materials in both English and
Spanish. In addition, the materials emphasize issues that, according to the ethnographic research,
[12] (...continued)
high-achieving students who are excellent applicants for selective colleges.
[13] 171 is the number of colleges for the ECO-C Intervention, which is mainly what we evaluate in this
paper. The number of colleges was 151 for the 2010 high school graduating cohort of students, on whom we do not
focus except for a subset of results.
[14] Institutions' criteria for waiving application fees differ somewhat. Since we wanted the fee waivers to be
simple, we did not attempt to incorporate all of the different criteria. In addition, a small percentage of the students
in the ECO evaluation would have family income above most colleges' thresholds for waiver eligibility. See the
section on targeting below.
[15] See, for instance, Tornatzky, Cutler, and Lee (2002).
especially concern parents: financial returns to college, financial aid (especially loans), and
differences in time-to-degree among colleges.
c Development of the ECO-C Intervention
We randomly assigned the four interventions to students in our Pilot cohort of 2009 high school
seniors and in the 2010 cohort of high school seniors.[16] Although the materials sent to the Pilot cohort
were formulated with the help of experienced college mentors, admissions staff, financial aid staff,
and student focus groups, we obtained substantial new information after the Pilot year, using focus
groups drawn from the Pilot cohort itself. While much of the feedback confirmed our prior
information, we learned some things worthy of note.
First, families strongly preferred to receive paper materials in the mail, as opposed to receiving
electronic materials by email or other online means. Families also preferred materials that did not
look like typical college recruiting brochures. We therefore adjusted our dissemination strategy and
sent virtually all materials by mail (with extensive online backup, links, and secondary materials).
Each intervention was sent in a tabbed, expandable file designed to help students organize their
materials for multiple college applications. That is, the design was meant to signal that ECO
intended to help them learn about their options, not recruit them for a specific college.
Second, families were often suspicious of the interventions (especially anything that was online)
because they feared falling prey to for-profit firms selling college advice. We investigated these
firms to ensure that the design of our materials was as distinct as possible from theirs. We also
ensured that all ECO materials prominently stated that the project was research conducted by the
principal investigators and funded by the relevant foundations and IES. We promptly answered
questions about the project by email and telephone, often assuring families that it was legitimate
research. While these efforts reassured some wary families, the credibility of the ECO project
continued to be an issue simply because we could not give the ECO project a strong, public presence
without contaminating the randomized experiment. In particular, we did not want the control group
to know what the interventions were, and we did not want the students to know what hypotheses we
were testing. We believe that credibility would not be an issue if the same interventions were
[16] Parent Intervention materials were sent only to the 2010 cohort.
conducted by a well-known non-profit organization with a public presence. We return to this issue
below because we are confident that it caused the take-up rate of the experimental interventions to
be substantially below that which a well-known organization would attain.
As an aside, we discovered that there was good reason for families to be suspicious of firms
offering free college advice. Numerous for-profit firms offer initial advising materials for free but
then charge substantial fees once a family is engaged. We found multiple firms that sell information
that is inaccurate and many firms that sell information that is available for free elsewhere. Some
websites that appear to offer unbiased advice are actually recruiting tools for specific, profit-making
institutions. A neophyte might have considerable difficulty distinguishing between public-minded
organizations that offer reliable information and firms that offer inaccurate or overpriced
information.
Third, we learned that particular members of each family tended to vet incoming college-related
mail, regardless of to whom it was addressed. By "vet," we mean that the person felt free to discard
or read the materials before handing them to the addressee. This person was sometimes a parent,
sometimes the student himself (in the case of the Parent Intervention), and sometimes another adult.
In many cases, the same member of the family vetted incoming college-related email. In short, we
learned that our attempts to direct interventions to particular family members were largely useless.
Fourth, we found that the Fee Waivers had a consequence that we had not foreseen. Like
"earnest money," the Fee Waivers apparently made families believe that the information was
provided with earnest intentions. This caused them to pay more attention to materials that
accompanied the Fee Waivers.
Having learned these lessons, we created the ECO Comprehensive or "ECO-C" intervention.
It combines the Application Guidance, Net Cost, and Fee Waiver interventions. It does not include
the Parent Intervention because we concluded that materials directed to parents were often read by
students and vice versa. Since the Parent Intervention simply bundled content from the Application
Guidance and Net Cost interventions in a different way, we believed that it would prove repetitive
if added to the ECO-C Intervention.
We randomly assigned the ECO-C Intervention, each of the four interventions, and control
status to 3,000 students per treatment in the cohort of 2011-12 high school seniors. Most of the
findings in this paper are based on the results of that randomized control trial.
The key features of the ECO-C Intervention and of each individual intervention are summarized
in Appendix Figure 1.
d Selecting Students for the Evaluation
We identified target students by combining student data from the College Board and ACT with data
from an array of sources that allow us to estimate whether a student comes from a low-income
family. The data combination and estimation process is described in greater detail in Hoxby and
Avery (forthcoming).
Briefly, we start with data that contain a student's College Board or ACT scores, his location at
the level of a Census block (the smallest unit of geography in the Census), his high school, his self-
reported high school grade point average (which has been demonstrated to be quite accurate), the
identity of the postsecondary institutions where he sends his scores, and a variety of information that
the student reports about his high school experience and his college plans (if any). We match each
student to 454 variables that describe the socio-demographics of his neighborhood (at the Census
Block Group level), the socio-demographics and other characteristics of his high school, the history
of college application and college attendance among former students of his high school, the scores
of former students of his high school on college assessments and statewide high school exams, and
income information on his zip code from the Internal Revenue Service. The variables focus on
issues that summarize or predict parents' and other local adults' incomes; parents' and other local
adults' educational attainment; local house values (a key measure of wealth); race; ethnicity; and the
propensity to apply to postsecondary institutions, to four-year colleges, and to selective colleges.
A list of these variables is available in Online Appendix Table 1. The variables come from the
student's self-description at the time he or she took a college assessment, from the U.S. Department
of Education's Common Core of Data (2009) and Private School Survey (2009), from the 2000
Census of Population and Housing, from Geolytics 2009 estimates for Census Block Groups, from
the Internal Revenue Service, and from statistics computed by the authors for each U.S. high school
14
using multiple years of past College Board and ACT data.[17]
We use all of these variables to estimate students' family incomes, where the verified income
variable that we use to generate our parameter estimates comes from financial aid data (which we
have for about one tenth of students). We estimate students' family income rather than use the
students' self-reported family income because the majority of College Board test takers do not answer
the question about family income and because ACT test takers understate their incomes.[18] We are
most likely to underestimate a student's income if he lives in a neighborhood and attends high school
with poorer people. Symmetrically, we are most likely to overestimate a student's income if he lives
in a neighborhood and attends high school with richer people. Put another way, we are better at
estimating a student's family's permanent income than current income. This is desirable from a
policy perspective because disadvantage is more a function of a family's permanent (lifetime) income
than its current income, which can be temporarily affected by job loss, insurance benefits, and the
like.
For the 2011-12 cohort of high school seniors, we used a random number generator to randomly
select 18,000 students. Of these, 12,000 were our target students, who:
(i) scored in the top decile of test-takers of the SAT I or ACT (1300 math plus verbal on the SAT,
28 on the ACT);[19]
(ii) had estimated family income in the bottom third of the income distribution for families with a
twelfth grader (based on the 2007-2011 American Community Survey);
(iii) did not attend a "feeder" high school.
We define a feeder high school as one in which more than 30 students in each cohort typically score
[17] Specifically, there are 48 variables based on students' self-descriptions when they took the ACT or SAT
test, 36 variables from the Common Core of Data, 165 variables from the 2000 Census at the Block or Block Group
level (whichever was the most disaggregated available), 82 variables from the Geolytics 2009 estimates, 24 variables
based on Internal Revenue Service data, 98 variables computed for high schools by the authors using historical
College Board and ACT data, and 1 variable that contains the authors' estimate of the student's family's income.
[18] See Hoxby and Avery (2013) for an explanation of the income understatement in ACT data.
[19] The 2011-12 cohort was drawn from takers of College Board tests while the 2009-10 and 2010-11
cohorts were drawn from both College Board and ACT test takers. However, the effects are very similar between the
2010-11 and 2011-12 cohorts, with one exception noted below that was due to external circumstances, not the
sample.
in the top decile on college assessment exams. The test score cut-offs ensure that all the selected
students have a high probability of admission at the 236 most selective colleges in the U.S. This
corresponds to the set of colleges in Barron's Most Competitive, Highly Competitive, and Very
Competitive Plus categories.[20]
For the 2011-12 cohort of high school seniors, we also randomly selected 6,000 students who
met the same test score criteria but who had estimated family income above the bottom tertile and/or
attended a feeder high school. Although these students are outside our target group, we selected
some of them--at a lower sampling rate--so that we could test whether the effects of the ECO-C
Intervention were different for the target students than for non-target students. Most of the results
shown in this paper are for target students. It will be clear when we use data on non-target students.
For the 2010-11 cohort and the 2009-10 Pilot cohort, we selected totals of--respectively--12,500
and 9,000 students in a similar manner.[21]
Once students were selected from each cohort, we randomly assigned an equal number to each
intervention offered that year or to the control status. Most of the results shown in this paper are for
students from the 2011-12 cohort because only they experienced the ECO-C Intervention. It will be
clear when we show findings based on the interventions applied to students in the 2010-11 cohort.
We do not show results for the 2009-10 Pilot cohort because--as mentioned above--the interventions
changed substantially after feedback from the pilot year.[22]
e Tracking Application Behavior, Admissions, and Enrollment
To evaluate students' response to the ECO-C Intervention and the individual interventions, we obtained two
[20] For the 2009-10 Pilot cohort and 2010-11 cohort, we eliminated students who did not self-report a grade
point average of A- or above. However, given the test score cut-offs, this criterion eliminated only a small share of
students--more often because they failed to self-report any variables than because they actively reported a grade
point average below A-. We therefore did not apply this criterion to the 2011-12 cohort, the cohort we mainly
evaluate in this paper.
[21] Because we had more data with each subsequent cohort, we refined the income estimation process
between each cohort. Using the 2011-12 estimation process to re-select students from the 2010-11 cohort, we can
confirm that the refinement did not substantially affect the results for those two cohorts. However, we used students'
self-reported family incomes to select some students in the 2009 Pilot cohort. This proved to be a mistake because
the self-reports turned out to be inaccurate and systematically biased downward among ACT takers.
[22] Once we allow for the difference in selection (see previous footnote), the estimated effects of the
interventions for the pilot year are largely consistent with those for the next (2010-11) cohort.
sources of data on their application behavior, admissions outcomes, and college enrollment. First,
we surveyed students during each summer after they were selected for an ECO treatment or control
group. In the summer after he is expected to graduate from high school, each student is asked to take
a fairly comprehensive survey on his college application process, where he was admitted, what his
financial aid offers were, and so on. The resulting survey data are so rich that most of the variables
must be analyzed in future papers by the authors. In each summer after the student might have
completed a year of college, each student is asked to complete a shorter survey that measures college
enrollment, course-taking, time allocation in college, work for pay, and degree attainment.
Our second sourceof outcome data is NSC dataon enrollment, persistence, and progresstoward
a degree. These data are reported by postsecondary institutions. The NSC covers 96 percent of
students enrolled in colleges and universities in the U.S. Our selected students arematched to NSC23
data using their names and birth dates. This is a largely accurate match, but it is not perfect owing
to variation in how students write their names and owingto typographical errors in students' reported
their birth dates.
Our second source of outcome data is NSC data on enrollment, persistence, and progress toward
a degree. These data are reported by postsecondary institutions. The NSC covers 96 percent of
students enrolled in colleges and universities in the U.S.[23] Our selected students are matched to NSC
data using their names and birth dates. This is a largely accurate match, but it is not perfect owing
to variation in how students write their names and owing to typographical errors in students' reported
birth dates.
We report descriptive statistics for variables in the dataset in Online Appendix Table 2.
3 Randomization, Survey Response, and the Probability that ECO
Materials Are Seen by the Intended Recipient
In a randomized controlled trial such as this, the econometrics entail only fairly simple
comparisons between the treatment and control groups so long as (i) the groups were actually selected
at random and (ii) there is not differential attrition. In our case, differential attrition could take the
form of students failing to respond to the survey in such a way that they bias the comparison between
treatment and control groups. In this section, we explore these issues.
We also describe the extent to which students who were sent intervention materials did not
actually see them. That is, we describe the extent to which students whom we "intended to treat"
were actually "treated."
[23] The source is http://www.studentclearinghouse.org/about/.
Given our use of a random number generator, the large number of students in each treatment
group and the control group (3,000 per group in the 2011-12 cohort), and the Law of Large Numbers, we expect
each group to have observable and unobservable characteristics that are the same. Nevertheless,
it makes sense to check that the groups' observable characteristics are, indeed, as similar as we
expect with randomization. To do this, we check the 454 predetermined (pre-treatment) variables
that we used in the process of selecting students. These variables describe the student, his family,
his neighborhood, his high school, and the college-going behavior of students in his high school in
previous years.[24]
We regress each of the 454 predetermined variables on indicators for a student's treatment group
and his cohort. We find that 3.2 percent of the coefficients on the treatment group indicators are
statistically significantly different from 0 at the 5 percent level and that less than 1 percent are
statistically significantly different from 0 at the 1 percent level. These results are consistent with the
randomization having worked just as intended.
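The balance check described above can be sketched as follows. The data here are simulated, not the actual ECO sample, and the covariate count is scaled down for speed; regressing a covariate on a constant and a single treatment dummy yields a coefficient t-statistic identical to the familiar two-sample t-statistic, which is what the sketch computes.

```python
import random
import statistics

random.seed(0)

N_PER_GROUP = 3000    # as in the 2011-12 cohort
N_COVARIATES = 100    # a subset standing in for the 454 predetermined variables

# Hypothetical data: assignment is random, so every predetermined
# covariate has the same (standard normal) distribution in both groups.
treat_vars = [[random.gauss(0, 1) for _ in range(N_PER_GROUP)]
              for _ in range(N_COVARIATES)]
control_vars = [[random.gauss(0, 1) for _ in range(N_PER_GROUP)]
                for _ in range(N_COVARIATES)]

def balance_tstat(x1, x0):
    """Two-sample t-statistic for the difference in group means --
    equivalent to the t-statistic on a treatment dummy in an OLS
    regression of the covariate on a constant and that dummy."""
    diff = statistics.fmean(x1) - statistics.fmean(x0)
    se = (statistics.variance(x1) / len(x1)
          + statistics.variance(x0) / len(x0)) ** 0.5
    return diff / se

tstats = [balance_tstat(t, c) for t, c in zip(treat_vars, control_vars)]
share_sig_5 = sum(abs(t) > 1.96 for t in tstats) / len(tstats)
share_sig_1 = sum(abs(t) > 2.576 for t in tstats) / len(tstats)

# Under successful randomization, roughly 5% and 1% of the covariates
# should appear "significantly" imbalanced by chance alone.
print(f"share significant at 5%: {share_sig_5:.2f}")
print(f"share significant at 1%: {share_sig_1:.2f}")
```

With many covariates, shares near the nominal sizes of the tests (as in the 3.2 percent and under-1-percent figures reported above) are exactly what successful randomization predicts.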
Overall, 66.9 percent of students answered the survey. While this is a high response rate, it is
nevertheless possible that, within the survey respondents, the randomization fails so that the
characteristics of the treatment and control groups differ. We do not find evidence of such failure,
however. Using just the students who answered the survey, we again regress each of the 454
predetermined variables on treatment group and cohort indicators. We find that 1.7 percent of the
coefficients on the treatment group indicators are statistically significantly different from 0 at the 5
percent level and that less than 1 percent are statistically significantly different from 0 at the 1
percent level. This evidence is consistent with there being no differential survey response that could
bias our results.
Another way to judge whether the survey-based outcomes will generate unbiased results is to
compare the institution in which the student reports enrolling (in the survey) with the institution in
which the student appears to enroll in the NSC data. These institutions are the same 95.2 percent
of the time. The remaining 4.8 percent of the time, the institutions are not identical but the
differences are not systematic. For instance, it does not appear that students are overstating their true
[24] Note that the 454 variables are not independent. This is useful information for interpreting the tests that
we describe.
institution in the survey. When the institutions are not identical, the survey-based institution has a
lower Barron's competitiveness ranking 2/5ths of the time, a higher Barron's ranking 2/5ths of the
time, and the same Barron's ranking 1/5th of the time. More precisely, the Barron's rankings of the
survey-reported and NSC institutions are not statistically significantly different in a paired t-test.
While we find that students who answer the survey attend institutions with slightly higher Barron's
rankings than students who do not answer the survey,[25] what matters for our results is whether the
survey-NSC ranking gap differs across treatment and control groups. We find that this gap does not
differ statistically significantly across the groups.[26]
Using a combination of individual inspection and cross-validation with colleges' student
directories, we conclude that at least half of the survey-NSC conflicts in the enrolled institutions
occur because the student has been matched to the wrong person in the NSC. (Because the match
uses only names and birth dates, incorrect matches are possible although unlikely.) The remaining
conflicts appear to be due to students really changing their institution of enrollment between the time
they are surveyed and the time they enroll in the fall. Some of these changes are minor--for instance,
changing between the Stout and Eau Claire campuses of the University of Wisconsin.
When we consider NSC-based outcomes, we adjust our estimates for the attenuation bias caused
by having a small percentage of the students matched to the wrong person in the NSC data.[27]
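The attenuation adjustment amounts to a simple rescaling: if a fraction of "treated" observations actually belong to untreated people whose outcomes look like the control group's, the estimated coefficient is shrunk toward zero by that fraction. A minimal sketch, using the 2.4 percent mismatch rate stated in the text with a purely illustrative coefficient:

```python
# De-attenuate a coefficient when a known share of the treatment group
# is actually mismatched (untreated) people.  The 0.024 rate is the one
# assumed in the text; the raw coefficient below is hypothetical.
def deattenuate(coef, mismatch_rate=0.024):
    """Rescale an attenuated estimate by 1 / (1 - mismatch_rate)."""
    return coef / (1.0 - mismatch_rate)

raw_coef = 0.976   # hypothetical attenuated estimate
print(round(deattenuate(raw_coef), 3))
```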
[25] The difference is 0.4 ranks when we convert the Barron's (plus) groups into numerical groups based on
their order. The conversion is 12=Most Competitive, 11=Highly Competitive Plus, 10=Highly Competitive, 9=Very
Competitive Plus, 8=Very Competitive, 7=Competitive Plus, 6=Competitive, 5=Less Competitive,
4=Noncompetitive but still in Barron's Profiles of American Colleges. Two-year institutions and many four-year
institutions are not listed at all in Barron's Profiles. Also, specialty institutions such as culinary schools are an
awkward fit for Barron's vertical ranking on selectivity. They have a separate category in Barron's, known as
"Special," that we have explored as an outcome without producing results of much interest. The 0.4 difference in
ranks is the same if we assign plausible rankings to institutions that do not appear at all in Barron's Profiles. For
instance, we have tried assigning non-selective, non-appearing four-year institutions the number 2 and assigning non-
selective two-year institutions the number 1.
[26] In fact, the point estimates of the gaps have the "wrong" sign if one is concerned that treated students
(relative to control students) are less likely to respond to the survey if they enroll in a low selectivity institution.
That is, the survey-NSC gap in the enrolled institution's ranking is (very slightly) larger for control group students,
but their gap is not statistically significantly different from the gap for students in any treatment group.
[27] Students being matched to the wrong person causes attenuation bias of a simple form. Some percentage
of NSC outcomes are for people whom we did not, in fact, attempt to treat. Therefore, the treatment indicator is
erroneously set to 1 in some percentage (we assume 2.4 percent) of cases when it ought to be set to 0. The remedy
(continued...)
Specifically, we assume that 2.4 percent of the students are matched to the wrong person's outcomes.
A related issue that affects all estimates but especially NSC outcomes is undeliverable mail. We
sent intervention materials to the address that each student had when he or she registered for a
college assessment exam. The gap between our sending materials and the student's registration date
was usually four to fourteen months, although the lag was shorter in some cases. Thus, some
students had moved between the day they registered for an exam and the day we sent interventions.
Also, a small share of addresses supplied by students are incomplete--most often, an apartment or
unit number is missing. When such situations arise, the ECO-C Intervention materials are
undeliverable. The materials are sometimes though not always returned to us. We received return-
to-sender mail for 10 percent of our targeted participants, and it is likely that some additional
percentage of materials was undeliverable but was not sent back to us because the people at the
address did not bother to return it. In any case, households that do not receive any materials are
clearly untreated by the interventions, and in a typical research design they would not even be
recorded as participants whom we intended to treat since they could not possibly have been treated.
This leads us to two points. First, if the interventions were to be run at scale by an organization--
such as the College Board, ACT, or a third party--that sent intervention materials at the same time
students received their test scores, most of the undeliverable mail problem would disappear because
most of it is due to students who move. That is, the undeliverable mail problem arises in the
experiment, which is not timed tightly with test-taking, but would occur much less in an at-scale
program. Second, when we compare enrollment outcomes based on the NSC to enrollment
outcomes based on our survey, we rescale the NSC outcomes for the undeliverable mail problem.
This is because survey-based outcomes are relatively unaffected by the problems: a person who
could not receive intervention materials owing to moving or a bad address also would not have
received an invitation to take the survey. To rescale the NSC-based outcomes, we use a simple Wald
(1940) Estimator that rescales the intention-to-treat by the actual probability of receiving the mail.[28]
[27] (...continued)
for such attenuation bias is a re-scaling of each coefficient estimate.
[28] Note that the Wald Estimator is the appropriate remedy, as opposed to dropping potential participants
whose intervention materials were returned to us. This is because we do not send control students intervention
(continued...)
More generally, there is a significant gap between the intention-to-treat and treatment in the
experiment, and we believe that a substantial share of this gap would disappear if the interventions
were done at scale by a well-known, reputable organization such as the College Board, ACT, or a
third party organized for the purpose. The nature of the experiment made it necessary for ECO to
maintain a very low profile for fear of contaminating the control group or informing the participants
of the hypotheses under consideration. This very low profile undoubtedly contributed to households'
discarding the materials without the intended recipient ever seeing them. If, for instance,
intervention materials were delivered in coordination with students' receipt of their college
assessment scores, it is likely that few materials would be discarded out of hand.
In order to present the effects that the interventions would likely have if conducted by, say, the
College Board, we show treatment-on-the-treated effects where treatment is defined very broadly
so that any intended participant who could merely remember having received the materials is counted
as treated. This measure of treatment is meant to show what a reputable organization could expect,
not to indicate the effect of reading the materials thoroughly--a treatment whose effects we will
probably never know.[29] To compute the percentage of participants who were treated (broadly
defined), we conducted a very brief telephone survey of a random sample of 200 students in each
of the treatment groups (not the control group) in the 2010-11 cohort.[30] We simply asked them
whether they could recall having seen any materials from the Expanding College Opportunities
project. Among students who were assigned to the Fee Waiver treatment, 40 percent could recall
the materials. Among students who were assigned to another treatment, 28 percent could recall the
materials. As described in the previous section, we believe that these low percentages reflect
families' mistaking the materials for solicitations from companies engaged in for-profit college
[28] (...continued)
materials, so we naturally do not observe which of them would have such materials returned to us.
[29] We could learn the effects of such a treatment by giving students an incentive to read the materials online
and monitoring the time they spent doing so. However, we do not think that such a version of the treatment is
scalable, so we do not test it. The goal of the project was to test fully scalable interventions.
[30] We did not attempt to survey all participants on this question for fear of changing the nature of the
intervention since a full-scale intervention would not include such telephone calls. Since the calls could potentially
generate Hawthorne effects, we have verified that the participants who received such calls do not exhibit outcomes
that differ from those who did not. The survey was conducted at the end of the students' senior year in high school.
advising and parents discarding mail addressed to their children from organizations without a well-
known profile.
The bottom line is that we use the above percentages to construct Wald Estimates of the
treatment-on-the-treated.[31] These estimates are our best estimate of the effects that an organization
could expect to get if it were well-known and reputable (as the College Board and ACT are) and if
it conducted the interventions with timing that was coordinated with students' receiving their test
scores. More generally, we invite readers to interpolate between the intent-to-treat and treatment-on-
the-treated estimates as they see fit, according to the way they envision fully-scaled-up interventions
being conducted.
In summary, all of our tests confirm that randomization worked as intended to create statistically
equivalent treatment and control groups. The randomization also worked as intended if we consider only
those students who answered our survey. We are therefore confident that our estimates reflect the
causal effect of the ECO interventions. We present both intention-to-treat results and treatment-on-
the-treated results, where a student is defined as treated if he or she merely remembers having
received ECO materials.
4 The Effects of the ECO-C Intervention on Students' College
Applications, Admissions, and Enrollment
In this section, we describe the effect of the ECO-C Intervention on students' outcomes.
Because the ECO-C Intervention was given only to students in the 2011-12 cohort, all of the results
shown in this section are based on them only. In later sections, we show results for the interventions
for which we have data from the 2010-11 cohort as well.
We show estimates of the intent-to-treat effect, β̂, from the straightforward regression

(1)     Outcome_i = α + β · ECOinterv_i + ε_i
[31] We conducted the telephone survey for a cohort in which the (combined) ECO-C Intervention was not
used, but it is clear from conversation with participants that it is the fee waivers that account for the greater
memorability and credibility of the Fee Waiver intervention materials. Therefore, we use 40 percent as the
percentage of participants who were treated for both the Fee Waiver and (combined) ECO-C Intervention.
where Outcome_i is the relevant application, admissions, or enrollment outcome32 and ECOinterv_i is an
indicator variable for the student's having been assigned to the ECO-C Intervention. The estimate
of the parameter α records the average outcome among students in the control group.
We also show estimates of the treatment-on-the-treated effect based on the Wald Estimator

(2)   β̂_TOT = β̂_ITT / Prob(Recall Intervention)

where Prob(Recall Intervention), our measure of treatment, is equal to 40 percent for the ECO-C
Intervention.
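To make the two estimators concrete, the following is a minimal sketch of our own (not the authors' code): the data are simulated, and only the 40 percent recall rate is taken from the text. With a binary assignment indicator, the OLS slope in equation (1) is just the difference in group means, and equation (2) rescales it by the probability of treatment:

```python
import random

random.seed(0)

# With a binary assignment indicator, the OLS slope in equation (1) equals
# the difference in group means; equation (2) rescales it by the probability
# of treatment (the 40 percent recall rate reported in the text).
def itt_and_tot(outcomes, assigned, recall_rate=0.40):
    treated = [y for y, d in zip(outcomes, assigned) if d == 1]
    control = [y for y, d in zip(outcomes, assigned) if d == 0]
    alpha = sum(control) / len(control)              # control-group mean
    beta_itt = sum(treated) / len(treated) - alpha   # intent-to-treat effect
    beta_tot = beta_itt / recall_rate                # treatment-on-the-treated
    return alpha, beta_itt, beta_tot

# Simulated cohort: control mean of 5 applications, true ITT effect of +1.
assigned = [i % 2 for i in range(10_000)]
outcomes = [5 + d + random.gauss(0, 2) for d in assigned]

alpha, beta_itt, beta_tot = itt_and_tot(outcomes, assigned)
# beta_tot is beta_itt / 0.40, i.e. 2.5 times the ITT estimate
```

Because treatment is binary, the difference-in-means calculation here is numerically identical to running the OLS regression in equation (1).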
Our estimating equation (1) does not control for any predetermined covariates, such as a
student's gender, race, ethnicity, and neighborhood characteristics. This is because we found that
adding them, in various plausible combinations, does not affect the coefficients. This is not
surprising given the balance in the covariates reported in the previous section. Estimates for
regressions in which covariates are included are available from the authors.
Each table in the sections that follow shows the effects in native units (for instance, the number
of applications submitted), in percentage changes relative to the control group's mean, and in effect
size (as a share of the control group's standard deviation). On the whole, we believe that the
percentage changes are easiest for readers to interpret because they do not require the reader to know
the mean of the outcome for himself. Indeed, we have found that readers usually have only hazy
ideas of the outcomes of the high-achieving, low-income students who are targeted in this study.
Thus, they often end up with misimpressions if the effects are not put into percentage changes or
effect sizes for them. However, each reader is welcome to focus on whichever translation of the
effects is most transparent to him.
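The three translations can be stated as simple formulas. The sketch below is our own illustration with made-up numbers, not values from the tables:

```python
# The three translations of an effect used in the tables: native units,
# percentage change relative to the control mean, and effect size as a
# share of the control group's standard deviation (numbers are made up).
def translate_effect(effect, control_mean, control_sd):
    pct_change = 100.0 * effect / control_mean
    effect_size = effect / control_sd
    return pct_change, effect_size

# e.g., +0.9 applications against a control mean of 4.8 and an SD of 3.0:
pct, es = translate_effect(0.9, 4.8, 3.0)
# pct is about 18.75 percent; es is about 0.30 standard deviations
```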
a Effects of the ECO-C Intervention on College Application Behavior
Table 1 shows intention-to-treat effects of the ECO-C Intervention on students' application
behavior. We find that the ECO-C Intervention causes students to submit 19 percent more
applications and to be 27 percent more likely to submit at least five applications. The ECO-C
32 NSC-based outcomes are slightly rescaled to account for mail that was undeliverable and for mis-
matching to NSC data. Survey-based outcomes need no such rescaling.
Intervention raises their probability of applying to a peer public university by 19 percent, a peer
private university by 17 percent, and a peer liberal arts college by 15 percent. Students were also
more likely to apply to institutions in the range immediately below and above peer institutions. The
pattern of effects clearly shows students targeting peer institutions the most, and other institutions
less as their median student differs more from the student himself. (Note that, because they are so
high-achieving themselves, only a small share of our students can apply to an institution whose
median students score more than 5 percentiles higher.) The ECO-C Intervention causes students'
"maximum" application to have higher median SAT scores (by 34 points), a graduation rate that is
7 percent higher, instructional spending that is 22 percent higher, and student-related spending that
is 21 percent higher.33
If an organization like the College Board or ACT were to conduct the ECO-C Intervention, we
believe that the effects would be more like the treatment-on-the-treated effects shown in Table 2.
If a student could at least recall having seen ECO materials, the ECO-C Intervention caused her to
submit 48 percent more applications and be 66 percent more likely to submit at least five
applications. She was 48 percent more likely to apply to a peer public university, 42 percent more
likely to apply to a peer private university, and 38 percent more likely to apply to a peer liberal arts
college. If she could recall seeing ECO materials, the ECO-C Intervention also caused her to apply
to a college with a 17 percent higher four-year graduation rate, 55 percent higher instructional
spending, 52 percent higher student-related spending, and an 86-point higher median SAT score.
b Effects of the ECO-C Intervention on College Admissions Outcomes
Because the students targeted by the ECO program had high college assessment scores and
grades, we expected that they would be admitted to more selective colleges if the intervention did,
in fact, cause them to apply to such colleges. This expectation was correct, as shown in Tables 3 and
4. First consider the intention-to-treat effects. Students who were assigned to the ECO-C
Intervention were admitted to 12 percent more colleges. They were 31 percent more likely to be
admitted to a peer college and the maximum college to which they were admitted had students whose
median SAT score was 21 points higher. Students were admitted to a college with a 10 percent
33 Student-related spending is spending on instruction, student services, academic support, and institutional
support. It does not include research spending or public service spending.
higher graduation rate, 14 percent higher instructional spending, and 14 percent higher student-
related spending. The effects for a student who could at least recall having seen ECO materials are
larger. He was admitted to 31 percent more colleges. He was 78 percent more likely to be admitted
to a peer college, and the maximum college to which he was admitted had students whose median
SAT score was 53 points higher. The maximum college that admitted him also had a 24 percent
higher graduation rate, 34 percent higher instructional spending, and 34 percent higher student-
related spending.
c Effects of the ECO-C Intervention on Fee Waiver Use and FAFSA Filing
Tables 3 and 4 show that the ECO-C Intervention increased the probability that a student used
any application fee waivers and increased the number of application fee waivers she used. This is
not a mechanical effect because most of the target students could have had their application fees
waived at most institutions anyway. That is, receiving low-paperwork fee waivers did make a
difference in students' use of fee waivers. This indicates that eligible students tend not to utilize all
of the fee waiver opportunities normally available to them. This is not a surprising result because
it accords with the evidence of other studies such as Pallais (2009).
The ECO-C Intervention did not affect a student's probability of filing the FAFSA. We suspect
that this is because nearly all of our target students apply to some postsecondary institution, and all
of them require that students who wish to be considered for financial aid file the FAFSA. In fact,
there is evidence that non-selective institutions, especially those that are highly dependent on federal
aid, may be especially proficient at ensuring that students file FAFSA forms (U.S. Government
Accountability Office, 2010).
d Effects of the ECO-C Intervention on College Enrollment Outcomes
It is not obvious that the ECO-C Intervention should have affected college enrollment outcomes
simply because it affected the colleges to which students applied and were admitted. After all, a
student might be willing to invest the time and effort to apply to a college in order to learn about it
and the financial aid package it would offer. The same student might, upon receiving this
information, decide that the college was--after all--not for him. If we find evidence that the ECO-C
Intervention affected enrollment outcomes, the logical interpretation is that our target students were
initially somewhat misinformed about their college opportunities. To see this, consider a student
who is not certain about where he will gain admission and what aid he will be offered but whose
information or beliefs are right on average. When such a student receives the ECO-C Intervention,
he may decide to apply to more colleges simply because the low-paperwork fee waivers reduce his
(time) cost of doing so. However, when he gets his admissions letters, his prior beliefs will be
confirmed (since he was right on average), and he will not change his enrollment decision much.
That is, unless the target students' information is changed fairly systematically by the application and
admissions experience, they should not change their enrollment decisions much.34
In Tables 5 and 6, we show that the ECO-C Intervention does, in fact, alter students' enrollment
decisions. The intention-to-treat effects are as follows. Students who were assigned to the ECO-C
Intervention enrolled in a college that was 19 percent more likely to be a peer institution. Also,
students assigned to the ECO-C Intervention enrolled in colleges with graduation rates that were 6
percent higher, instructional spending that was 8.6 percent higher, and student-related spending that
was 10.4 percent higher.
The treatment-on-the-treated effects, which are a better guide to what a well-reputed
organization could expect, are larger. For instance, a student who experienced the ECO-C
Intervention enrolled in a college that was 46 percent more likely to be a peer institution, whose
graduation rate was 15.1 percent higher, whose instructional spending was 21.5 percent higher, and
whose student-related spending was 26.1 percent higher.
Tables 5 and 6 are based on the college in which students report enrolling when they take the
ECO survey. Thus, they are directly comparable to Tables 1 through 4, which are also based on the
survey data. However, we find that we obtain very similar results if we use NSC data on the college
in which students enroll first after high school graduation or enroll the longest after high school
graduation. These results are shown in Online Appendix Table 3. This is further evidence that the
students who take the survey are not differentially selected between treatment and control status.
34 A student's preferred college could potentially change simply because the fee waivers induced him to
apply to more reach schools and one of them unexpectedly admitted him. However, if students were fully informed
and making rational calculations about application costs and benefits, this effect would be trivially small.
5 Who is Most Affected by the ECO-C Intervention?
We chose to target the students we did because we hypothesized initially that they would be
more affected by the ECO-C Intervention than more affluent students or students who attended a
high school with a critical mass of high-achieving students. In this section, we test whether this
hypothesis was accurate by examining the effects of the ECO-C Intervention for the relatively small
sample of students whom we drew from outside our target group. (See Section 2.)
Tables 7 and 8 present this evidence. The second column in each table shows the intention-to-
treat effect of the ECO-C Intervention (in native units) for our target group of students. The third
column shows the difference in the intention-to-treat effect for students who are from feeder high
schools (Table 7) or students whose families have incomes above our target range (Table 8). The
right-hand column indicates whether the difference shown in the third column is statistically
significantly distinguishable from zero.
About one-fourth of the differences shown in the third columns of the tables are statistically
significant. Moreover, the pattern of the differences is consistently negative in sign. Thus, it appears
that students who are from feeder high schools and students from more affluent families are indeed
less affected by the ECO-C Intervention. This broadly confirms our initial hypothesis.
Of course, even within our target group of high-achieving, low-income students, students still
differ on dimensions such as their family income, test scores, gender, parents' education, race, and
ethnicity. We also tested whether the effects of the ECO-C Intervention differed along these
dimensions. Specifically, we tested whether there were statistically significant differences in the
effects that depended upon whether the student had an above versus below median SAT/ACT score
(among the target students), whether the student was female versus male, whether the student had
parents whose maximum educational attainment was greater or less than a baccalaureate degree, and
whether the student was white, Asian, or from an underrepresented minority group (black, Latino,
Hispanic, Native American, Native Pacific Islander).35 While we lack the statistical power to cut
the sample in finer slices, we have sufficient statistical power to find heterogeneous effects for
three groups of fairly equal size. However, we found no statistically significant evidence of
heterogeneity in the effects along the aforementioned dimensions, nor did we even see patterns in the
point estimates that might suggest systematic heterogeneity in the effects. This suggests that the
ECO-C Intervention has fairly similar effects on students in the target group.
35 We tried cutting the students at various other points in parents' educational attainment, and we tried
various other cuts of race and ethnicity. We saw little evidence of heterogeneous effects.
6 Which Parts of the ECO-C Intervention Matter?
Students in the Pilot and 2010-11 cohorts did not experience the ECO-C Intervention. Instead,
they could experience one of the four interventions that the ECO-C Intervention largely subsumes:
Application Guidance, the Net Cost intervention, the Fee Waiver intervention, or the Parent
Intervention.
As mentioned above, the ECO-C Intervention was created as a combination of the first three of
the interventions because we observed a few phenomena in the Pilot and 2010-11 cohorts. First,
based on our telephone survey, we observed that Fee Waivers were associated with a substantial
increase in the probability that a student would recall having seen the ECO materials--that is, be
"treated" in our generous terminology. Second, a large number of informal contacts with parents and
students suggested that the Fee Waivers raised the credibility of the information in the materials.
Third, we observed that Fee Waivers tended to have disproportionate effects on application
behavior, relative to enrollment behavior. In contrast, the Application Guidance and Net Cost
interventions tended to have disproportionate effects on enrollment behavior, relative to application
behavior. As a result, we speculated that the Fee Waivers induced students to apply to more
institutions but that students did not consistently submit their induced, marginal applications to
colleges that were a good match for them. We speculated that the Application Guidance and Net
Costs interventions gave students more information about how to find institutions that suited them
and gave them a better sense of how colleges differed in terms of graduation rates, resources, and
aid. However, these two purely informational interventions were less likely to be taken up by
students.
We hoped that the ECO-C Intervention would have the take-up of the Fee Waiver intervention
combined with the greater informativeness of the Application Guidance and Net Costs intervention.
In this section, we investigate whether the ECO-C Intervention was, in fact, capable of inducing
students to recognize a larger array of postsecondary options and to recognize a better matched array
of postsecondary options (that is, an array from which they would rather choose even if the number
of institutions in the array were unchanged).36
For this investigation, we rely on evidence from the 2011-12 cohort, which was randomized into
the control group or into the ECO-C, Application Guidance, Net Cost, Fee Waiver, or Parent
intervention. Before presenting the evidence, however, we need to explain one circumstance that
changed between the Pilot and 2010-11 cohorts (for whom we observed the patterns described above)
and the 2011-12 cohort (on whom we will conduct the investigation). Net cost information was
difficult for the Pilot and 2010-11 cohorts to obtain on their own–that is, if they did not receive the
Net Cost intervention. Although a small number of institutions had readily accessible net cost
calculators posted on their websites in the years that the Pilot and 2010-11 cohort applied to college,
the vast majority of institutions did not post such calculators. In contrast, the 2011-12 cohort was
the first to experience very widespread availability of net cost calculators, owing to the Higher
Education Opportunity Act of 2008, which mandated that such calculators be posted on each
college's website and linked to the U.S. Department of Education's College Navigator by October
29, 2011. We expected that this legal change would make all of the interventions except the Net
Cost intervention more effective and would make the Net Cost intervention less effective. This is,
in fact, what occurred. The Net Cost intervention went from having effects that were routinely
substantially larger than those of Application Guidance to having effects that were routinely
smaller.37
To understand why this occurred, consider the experience of a student assigned to Application
Guidance before and after net cost calculators were widely posted. The intervention would advise
a student on how to find colleges that counselors would consider to be a match. It would inform a
student about graduation rates and application requirements for various colleges. All this might help
36 We did not fold the Parent Intervention into the ECO-C Intervention because our numerous informal
contacts with families suggested that it was not possible to target any of the interventions to a specific family
member. We concluded that adding the Parent Intervention to the ECO-C Intervention would cause families to read
similar information in two different forms, an experience that might prove frustrating.
37 The Net Cost intervention's effects on enrollment outcomes were, on average, twice those of Application
Guidance for the Pilot and 2010-11 cohorts. As shown below, this is not the case for the 2011-12 cohort.
a student choose a portfolio of colleges that were better suited to him academically. However,
before the net cost calculators were posted, the student might still have substantial difficulty learning
which of the colleges for which he was academically suited were likely to offer him sufficient aid
so that he could attend. In contrast, once the net cost calculators were posted, a student who had
found his way to an academically suitable college using the College Navigator could easily
determine what net cost he was likely to face with a link on the same page.
Now consider the experience of a student assigned to the Net Cost intervention which--recall--
did not give students information on how to find colleges that were academic matches. Rather, it
helped them find colleges that were attractive in terms of aid. Once the student had found these
colleges, he still had to figure out which of them was academically suited to him. This task was made
no easier by the posting of the net cost calculators. Indeed, one might even argue that, owing to the
media's and U.S. Department of Education's focus on the net cost calculators (which was natural
because they were new), students were distracted from learning which colleges suited them
academically. Such distraction might be a short-term phenomenon since net cost calculators will
not have so much news value in future years.
With this important caveat about the changing environment for the Net Cost and other
interventions for the 2011-12 cohort, let us examine how the four "partial" interventions compared
to the ECO-C Intervention. Table 9 presents this comparison. Each row shows an outcome that was
affected by the ECO-C Intervention. Each column shows how an intervention affected the same
outcome relative to the ECO-C Intervention. A number less (greater) than one indicates that the
intervention had a smaller (greater) effect than the ECO-C Intervention. For instance, a 0.7 for
Application Guidance would indicate that the Application Guidance intervention had 0.7 of the
effect on the same outcome that the ECO-C Intervention had. A 1.2 would indicate that the
Application Guidance intervention had 1.2 times the effect on the same outcome that the ECO-C
Intervention had. The asterisks in the table indicate that an intervention had an effect that was
statistically significantly different from the randomized control group.38
The first thing to observe about Table 9 is that the vast majority of numbers in it are less than
38 That is, the asterisks do not indicate that the intervention had an effect that was statistically significantly
different from that of the ECO-C Intervention or another intervention.
1, suggesting that the ECO-C Intervention tends to have larger effects than any one of its parts. The
second thing to observe is that the Fee Waiver intervention tends to have effects that are close in size
to those of the ECO-C Intervention when the outcome is an application behavior. Its effects are,
however, less commensurate in size when the outcome is an enrollment outcome. In contrast, the
Application Guidance intervention tends to have effects that are close in size to those of the
ECO-C Intervention when the outcome is an enrollment outcome. Its effects are less commensurate
in size when the outcome is an application behavior.
Summing up, we find support for both of our hypotheses: the ECO-C Intervention is more
effective than any of its parts and it appears to combine the strengths of the Fee Waiver intervention
and the strengths of the Application Guidance and Net Cost interventions.
7 The Effects of the ECO-C Intervention on Freshman Year
Grades and Persistence through the Middle of the Sophomore
Year
The interventions induce students to attend colleges where their peers are higher-achieving but
also more like them in terms of incoming test scores and high school grade point averages. Thus,
one can form a variety of hypotheses about how the interventions will affect students' grades and
persistence in college. On the one hand, if we hypothesize that students study more if they are
surrounded by peers who also study and if they take courses more attuned to their level of incoming
achievement, then the interventions may improve students' learning. On the other hand, colleges
tend to grade students "on a curve" and promote them according to their "curved" grades. Thus, the
interventions will systematically put downward pressure on a student's grades and (therefore)
persistence simply because he is more likely to be attending an institution where other students tend
to be high-achieving too. Note that this "on the one hand" and "on the other" is not symmetric. In
the first case, the student is learning more. In the second case, the student could still be learning
more but might earn lower grades nonetheless. Until our target students have outcomes such as
earnings, which are on a fairly absolute standard, we can only compare students' in-college outcomes,
which are measured in relative terms and are therefore biased in favor of students who attend less
selective colleges.
Students in the 2011-12 cohort are still freshmen for whom we do not yet have grades or
measures of persistence. We can, however, examine students in the 2010-11 cohort who were
affected by the interventions. We could present the results as reduced-form effects--for instance,
the effect of the intention-to-treat for a specific intervention on the student's freshman grades. In our
experience, though, readers find such effects hard to interpret because readers are required to
remember each intervention's effects on enrollment outcomes and then multiply through in order to
obtain useful and comparable magnitudes. Therefore, we present the effects on grades/persistence
of being induced, by an intervention, to attend a more selective college. These results are from an
instrumental variables estimation where our measure of selectivity is the most continuous one
available to us: the median SAT score of the students at the college where the student enrolls.39
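The logic of this estimation can be sketched as follows (our own simulated illustration, not the authors' estimation code). With a single binary instrument, the instrumental-variables estimate reduces to the Wald ratio of the reduced-form effect to the first-stage effect:

```python
import random

random.seed(1)

# Simulated sketch of the IV logic (hypothetical data, not the study's).
# Z: binary assignment to an intervention (the instrument).
# X: median SAT score of the college the student enrolls in (endogenous).
# Y: freshman GPA (outcome).
def wald_iv(z, x, y):
    x1 = [xi for xi, zi in zip(x, z) if zi == 1]
    x0 = [xi for xi, zi in zip(x, z) if zi == 0]
    y1 = [yi for yi, zi in zip(y, z) if zi == 1]
    y0 = [yi for yi, zi in zip(y, z) if zi == 0]
    first_stage = sum(x1) / len(x1) - sum(x0) / len(x0)   # effect of Z on X
    reduced_form = sum(y1) / len(y1) - sum(y0) / len(y0)  # effect of Z on Y
    return reduced_form / first_stage                     # IV (Wald) estimate

# Assignment raises median SAT by about 20 points; each point of
# selectivity raises GPA by a true 0.002 in this simulation.
z = [i % 2 for i in range(20_000)]
x = [1200 + 20 * zi + random.gauss(0, 50) for zi in z]
y = [0.002 * xi + random.gauss(0, 0.3) for xi in x]

beta_iv = wald_iv(z, x, y)   # close to the true 0.002
```

The study uses several intervention indicators as instruments rather than a single one, but the intuition is the same: the effect on grades of attending a more selective college is backed out from the effect of assignment on selectivity.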
Table 10 shows, first, that the indicators for the interventions have a statistically significant
effect on the selectivity of the college attended by students in the 2010-11 cohort. That is, the first
stage of the instrumental variables procedure has power: the F-statistic is 3.05. Second, if a student
was induced bythe interventions to enroll in a more selective college, his grades and persistence into
sophomore year were not statistically significantly affected. (Both point estimates are positive and
have p-values around 0.3.) This evidence suggests that students induced to attend more selective
colleges are earning similar grades and persisting with similar probabilities as they would if they had
attended less selective colleges. Because grading and promotion standards are higher in more
selective colleges, this evidence hints that the students induced to attend more selective colleges are
learning more. We will have more evidence on this question in future years as the 2011-12 cohort
ages.
8 Cost-Benefit Analyses for the ECO-C Intervention
In this section, we compare the costs and benefits of the ECO-C Intervention. The costs of the
intervention are simple: approximately $6 per student whom we intended to treat. We can scale this
39 Our alternative measures of selectivity produce results that are similar in spirit but harder to interpret
because they are categorical.
up to generate a cost of actually treating a student. The upper bound on this cost of treatment is $15
($6 divided by 0.4, the probability of treatment). We believe, however, that a highly reputable
organization like the College Board or ACT could achieve a cost of treatment of approximately $6
simply because mail from such an organization would likely be opened and at least cursorily
reviewed (our definition of treatment). Such an organization would presumably also have lower
mailing and in-house printing costs than our small experimental organization had.
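The cost scaling works as follows, in a back-of-the-envelope sketch; the $6 per-student cost and 40 percent treatment probability come from the text, and the rescaling helper is our own labeling:

```python
# Back-of-the-envelope cost accounting. The $6 per-student cost and the
# 40 percent treatment (recall) probability are taken from the text.
cost_per_intended = 6.00   # dollars per student we intended to treat
prob_treated = 0.40        # probability a mailed student was actually treated

# Upper bound on the cost per student actually treated:
cost_per_treated = cost_per_intended / prob_treated   # 15 dollars

# Generic rescaling of an effect into "per 10 dollars of cost" units:
def effect_per_ten_dollars(effect, cost=cost_per_intended):
    return effect / cost * 10.0
```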
Some useful statistics are the following. Per 10 dollars of cost, the ECO-C Intervention
caused target students to apply to 4 more colleges and to be 50.5 percentage points more likely to
apply to a peer college. Per 10 dollars of cost, the ECO-C Intervention caused target students to
enroll in colleges where graduation rates were 13 percentage points higher, where instructional
spending was $5906 greater, where the median student had college assessment scores that were 65
points higher, and that were 22.2 percentage points more likely to be peer colleges.
a Comparing ECO Benefit-to-Cost to that of In-Person Counseling
Interventions
The most comparable alternative intervention is a counseling intervention also directed to high-
achieving, low-income students: Avery (2010). Students in the target population were asked
whether they were willing to experience expert college counseling. If they agreed that they were,
they were randomly assigned to control status or to receive ten hours of individualized college
advising from an experienced counselor normally employed in full-time college counseling at a high
school with a critical mass of high achievers. The cost of the intervention was approximately $600
per student. This counseling intervention did not have statistically significant effects on most college
application or enrollment outcomes, but this was partly due to the experiment's low statistical power.
If we take the most optimistic point estimates from the study, however, we find that, per 10 dollars
of cost, the counseling intervention caused high-achieving, low-income students to apply to 0.006
more colleges and to be 0.18 percent (18 one-hundredths of 1 percent) more likely to enroll in highly
selective colleges (Barron's "most competitive" category). These per-dollar effects are clearly very
small compared to those of the ECO-C Intervention.40
Most other related studies have evaluated counseling interventions targeted to much less high-
achieving students. Thus, they typically examine whether a student attends a postsecondary
institution at all as the main outcome. Since our target students nearly always attend some
postsecondary institution even with no intervention, comparing our benefit-cost ratio to their benefit-
cost ratio is not possible. However, we can nevertheless learn about the cost of an in-person
counseling intervention from these studies. Carroll and Sacerdote (2012) study a counseling
intervention randomly assigned to about 500 New Hampshire high school seniors. The intervention
would probably cost between $600 and $900 per student to replicate at scale because it paid each
student $100 for participating, paid all of a student's application and College Board/ACT assessment
fees, and paid Dartmouth College students to mentor each student through the application process.41
Cunha and Miller (2009) study Texas' GO Centers, which are centers placed in high schools,
designed to encourage college attendance. Each GO Center costs about $80,000, is staffed, has
computer facilities and information to make applying online easy, provides a marketing campaign
that emphasizes peer-to-peer persuasion about the importance of college, and links students with
local colleges and universities. The number of students in each school averages 197, no more than
half of whom use the GO Center. Thus, the cost is about $812 per student.42
40 The most direct comparison we can make is based on the probability of applying to a Barron's Most
Competitive institution. The treatment-on-the-treated effect of the ECO-C intervention is 30.8 percentage points, or
51.2 percentage points per $10 of cost. Per dollar of cost, the ECO-C intervention is thus 285 times more
"productive" (on this measure) than the counseling intervention reported by Avery (2010).
41 Carroll and Sacerdote report that they paid 20 Dartmouth students to work full-time for about 2.5
months. If we figure that Dartmouth students' full-time earnings would be at least $40,000 per year (which is much
less per hour than the counselors were paid in the Avery (2010) interventions), then the cost of the mentoring was
$333 per student. Application and assessment fees would typically total between $100 and $400. There were also
costs associated with arranging the mentoring times, matching mentors with the students, and the like.
42 Among those that have been studied rigorously, the least expensive intervention with some in-person
elements is investigated by Castleman and Page (2012). This summer-after-senior-year intervention used email,
text-messaging, and Facebook along with in-person counseling and cost only about $200 per student. However, it is
not comparable to the ECO-C Intervention or the in-person interventions described above because it was directed to
students who were attending non-selective colleges (mainly community colleges). These colleges have only a small
fraction of the application requirements associated with selective four-year colleges: test scores, letters of reference,
mid-senior-year transcripts, essays, applications submitted by specific mid-senior-year deadlines. Given that the
summer-after-senior-year intervention was not attempting to advise students on any of these dimensions, it is not
(continued...)
In short, it seems reasonable to conclude that alternative, in-person counseling interventions
typically cost upwards of $600. Thus, in order to achieve the same benefit-to-cost ratio as the ECO-
C Intervention, such interventions would need to have effects that are at least 100 times as large as
those of the ECO-C Intervention.
It is noteworthy that each of the in-person counseling interventions described above is reported by its evaluators to be a much more cost-efficient way of changing students' college-going behavior than reducing the cost of college through tuition reductions, grants, and other forms of aid. That is, in-person counseling has high benefits relative to costs compared with many alternative policies that are intended to increase college-going.
Some other low-cost interventions have been shown to change college application and/or enrollment behavior. These tend to have something of the same character as the ECO interventions insofar as they are not only primarily informational but also driven by large-scale databases (as opposed to information conveyed by individual human beings). We especially note Bettinger, Long, Oreopoulos, and Sanbonmatsu's (2009) study of how H&R Block's automatically filling out FAFSA forms affected college-going; Pallais' (2009) study of how college applications rose when ACT made it free to send scores to additional colleges; and Bulman's (2012) study of how a high school's college-going rises when it allows the SAT to be administered on school premises on at least one Saturday morning.
All of the above interventions are inexpensive. However, because all of them target much lower-achieving students than the ECO-C Intervention, their main outcomes are whether a student applies to or attends college at all (or applies to or attends a four-year college). This makes it difficult to compare their benefit-to-cost ratios to those of the ECO-C Intervention.
b. The ECO-C Intervention's Long-Run Benefits Relative to its Costs
The ECO-C Intervention causes students to attend colleges that have higher graduation rates, have richer resources devoted to instruction and other student-related activities, and are more selective. Whether such colleges improve students' long-run outcomes, especially their earnings, has
42 (...continued) surprising that it cost less than the interventions studied by Avery, Carroll and Sacerdote, and Cunha and Miller.
Expanding College Opportunities for High-Achieving, Low-Income Students
  • 2. Expanding College Opportunities for High-Achieving, Low Income Students. Caroline Hoxby (Stanford University and NBER) and Sarah Turner (University of Virginia and NBER). Abstract: Only a minority of high-achieving, low-income students apply to colleges in the same way that other high-achieving students do: applying to several selective colleges whose curriculum is designed for students with a level of achievement like their own. This is despite the fact that selective colleges typically cost high-achieving, low-income students less while offering them more generous resources than the non-selective postsecondary institutions they mainly attend. In previous work, we demonstrate that the vast majority of high-achieving, low-income students are unlikely to be reached by traditional methods of informing students about their college opportunities since such methods require the students to be concentrated geographically. In this study, we use a randomized controlled trial to evaluate interventions that provide students with semi-customized information on the application process and colleges' net costs. The interventions also provide students with no-paperwork application fee waivers. The ECO Comprehensive (ECO-C) Intervention costs about $6 per student, and we find that it causes high-achieving, low-income students to apply and be admitted to more colleges, especially those with high graduation rates and generous instructional resources. The students respond to their enlarged opportunity sets by enrolling in colleges that have stronger academic records, higher graduation rates, and more generous resources. Their freshman grades are as good as the control students', despite the fact that the control students attend less selective colleges and therefore compete with peers whose incoming preparation is substantially inferior. Benefit-to-cost ratios for the ECO-C Intervention are extremely high, even under the most conservative assumptions. JEL Codes: I21, I23, I24. Acknowledgements:
This project was instigated and initially supported by a group of college leaders, deans of admissions, and deans of financial aid, including those of Stanford University, Princeton University, Syracuse University, The University of Michigan, University of Virginia, Duke University, Harvard University, Massachusetts Institute of Technology, Northwestern University, Smith College, The University of California, Vassar College, and participants in the Windsor Group meetings of higher education leaders. We especially wish to acknowledge John Hennessy and Roberta Katz of Stanford University for their extraordinary effort in securing initial support for the project. John Casteen III of the University of Virginia convened the Windsor Group meeting at which this project was initially discussed. We had important conversations with numerous deans of admission and financial aid including William Fitzsimmons (Harvard), Rick Shaw (Stanford), Ted Spencer (University of Michigan), Janet Rapelye (Princeton), and Greg Roberts (University of Virginia). This project would simply not have been possible without data and support from The College Board and ACT. At The College Board, we especially thank Herb Elish, Connie Betterton, Anne Sturvenant, Ryan Williams, Michael Matthews, Sharon Lavelle, and Bettina Donohoe. They were crucial. We also thank David Coleman for continuing support of the project into his Presidency of The College Board. At ACT, we especially thank Robert Ziomek and Ranjit Sidhu. Crucial, early funding for the project came from the Andrew W. Mellon Foundation, the Spencer Foundation for Research Related (continued...)
  • 3. 1 Introduction. In previous work (Hoxby and Avery, forthcoming), we show that the vast majority of very high-achieving students from low-income families do not apply to any selective college or university. (We call these students "income-typical.") Only a small minority of high-achieving, low-income students apply in a manner that resembles that of their high-achieving counterparts from more affluent families--namely, applying to several colleges that enroll students whose incoming achievement is similar to their own. (We call these students "achievement-typical.") What puzzles many observers is that income-typical students, having worked hard in high school to prepare themselves extremely well for college, do not even apply to the colleges whose curriculum is most geared toward students with their level of preparation. We hereafter call these "peer" colleges as a reminder that these are the colleges where most of their peers would be similarly prepared and where the curriculum is designed with such students in mind. In the aforementioned work, we eliminate several explanations for this puzzle. First, the (...continued) to Education, and the Smith Richardson Foundation. We are grateful to Harriet Zuckerman of Mellon, Michael McPherson of Spencer, and Mark Steinmeyer of Smith Richardson for key seminal discussions. The interventions, survey, and evaluation were supported by grant R305A100120 from the U.S. Department of Education's Institute for Education Sciences (IES) and by a generous grant from the Bill & Melinda Gates Foundation. We have been greatly helped by discussions with our IES program officers Karen Douglas and Hiromi Ono and by the energy and creativity of our Bill & Melinda Gates Foundation program officer Greg Ratliff. We had important conversations with William Bowen (Mellon), Catherine Hill (Vassar), and Henry Bienen (Northwestern). Joshua Leake-Campbell and LaTonya Page at the National Student Clearinghouse have been a great help with data.
Finally, we are grateful to the ECO Project staff and the many research assistants who made the interventions and surveys come to life. We would like especially to thank Stephanie Adams, Christina Harvey, Alexandra Cowell, Jacqueline Basu, and Casey Cox. Deborah Carvalho, Marian Perez, Karen Prindle, and Shehnaz Kahn of the Stanford Institute for Economic Policy Research ably assisted us in project management.
2 Hereafter, "high-achieving" refers to a student who scores at or above the 90th percentile on the ACT comprehensive or the SAT I (math and verbal) and who has a high school grade point average of A- or above. This is approximately 4 percent of U.S. high school students. Hereafter, "low income" and "high income" mean, respectively, the bottom tercile and top quartile of the income distribution for families with a child who is a high school senior. Note that these are slightly different income categories than we study in Hoxby and Avery (forthcoming). When we say "selective college" in a generic way, we refer to colleges and universities that are in categories from Very Competitive Plus to Most Competitive in Barron's Profiles of American Colleges. There were 236 such colleges in the 2008 edition. Together, they have enrollment equal to 2.8 times the number of students who scored at or above the 90th percentile on the ACT or SAT I. Later, we are more specific about colleges' selectivity: we define schools that are peer schools for an individual student, based on a comparison between his college aptitude test scores and the median aptitude test scores of enrolled students at the school.
  • 4. income-typical students are not using reliable information to predict that they will fail at peer colleges, because the high-achieving, low-income students who do apply are admitted, enroll, progress, and graduate at the same rates as high-income students with equivalent test scores and grades. Second, the income-typical students are not deterred by accurate information about colleges' net costs (the total a student will pay for tuition, fees, and living expenses). For high-achieving, low-income students, the peer institutions that will admit them have lower net costs than the far less selective or non-selective post-secondary institutions that most of them attend. (See Appendix Table 1.) Third, low-income students can have most application fees and testing fees waived for them if they complete sufficient paperwork, so the actual availability of fee waivers is not itself a problem. Fourth, achievement-typical students come from households and neighborhoods that are just as disadvantaged, on numerous socio-economic dimensions, as income-typical students do. In our previous work, we observe that achievement-typical students are concentrated in a small number of high schools and in very large metropolitan areas. Therefore, they are likely reached by traditional methods of informing students about their college opportunities, including (i) expert college guidance counseling at high schools with a critical mass of high-achievers, (ii) admissions staff visiting high schools or areas with a critical mass of high-achievers, and (iii) colleges' encouraging visits by local students. In contrast, income-typical students--who are the vast majority of high-achieving, low-income students--are dispersed. They are often the sole or one of only a few high-achieving students in their school or area. Thus, their high school counselor is unlikely to have much expertise about selective colleges and probably must focus on other issues.
Admissions staff cannot visit their high school or area in a cost-effective manner. Income-typical students tend not to live
3 See Hoxby and Avery (forthcoming) for information on high-achieving, low-income students. For broader evidence that the application stage is where students' behavior differs most, see Avery, Hoxby, Jackson, Burek, Pope, and Raman (2006); Bowen, Kurzweil, and Tobin (2005); Roderick, Nagaoka, Coca, and Moeller (2009); Avery and Turner (2009); Pallais (2009); Bowen, Chingos, and McPherson (2009); Dillon and Smith (2012); and Smith, Pender, and Howell (forthcoming).
4 This evidence is also discussed at some length in Hoxby and Avery (forthcoming), Pallais and Turner (2006), and Hill, Winston, and Boyd (2005).
5 See Hoxby and Avery (forthcoming).
  • 5. in metropolitan areas that are home to multiple, selective colleges. In short, many high-achieving, low-income students may be unreached by traditional information methods even if counselors and admissions staff conscientiously do everything that is cost-effective for them to do. There are two remaining explanations for the behavior of income-typical students. First, income-typical students could be well informed about their college opportunities, net costs, probabilities of success, and the availability of fee waivers. However, because they come from high schools and communities where students with their achievement are rare, they could have formed preferences or relationships that make them averse to attending postsecondary institutions that differ from those that many of their high school classmates attend. For instance, an income-typical student could be extremely well-informed but could prefer to attend a local, non-selective institution because he has a family member who needs daily help or because he is romantically involved with a low-achieving student who could not attend a selective college. Second, income-typical students could be poorly informed about their college opportunities and/or deterred by apparently small costs such as the paperwork associated with fee waivers. Although a great deal of information is apparently available on the internet, it is not easy for a neophyte to distinguish reliable sources of information on colleges' curricula, peers, and net costs from the numerous unreliable (sometimes egregiously misleading) sources that are available. Furthermore, the information available is not only not customized, it tends to assume that low-income students are low-achieving and gives them guidance that corresponds to this assumption. Probably because they are atypical, high-achieving, low-income students might find very little information oriented toward them.
These two explanations--income-typical students are informed but prefer non-selective colleges, and income-typical students are uninformed but would prefer peer colleges if they knew about them--are not mutually exclusive. What matters, from society's point of view, is not whether the second explanation accounts for most or all of the income-typical students' behavior. What matters is (i) whether the second explanation accounts for any of the income-typical students' behavior and (ii) whether there exists a cost-effective way to give such students the information and means they need to realize their full array of college opportunities. It is crucial to understand why these two things (and only these two things) matter. Any income-typical student has already academically prepared himself for college through a
  • 6. combination of his own effort and society's investments in him (through tax support of public schools, income transfers, other social insurance programs, philanthropy, etc.). These investments are not small: approximately 14,000 hours of the student's own time in school plus thousands of hours spent on homework, $180,000 in public school expenditures on the average student from kindergarten to grade twelve, Medicaid and related health expenditures directed to low-income families, Earned Income Tax Credits, and so on. For many students, these personal and social investments may not bear obvious fruit, but for the income-typical student they have: excellent test scores and grades that indicate that he is well-prepared for college and the careers that college graduates pursue. If the only reason that the student does not take advantage of his full range of college opportunities is that he is unaware of these opportunities or deterred from exploring them for essentially trivial reasons (such as the paperwork associated with fee waivers), then much of the enormous investment already made is potentially wasted. That is, the investment potentially earns much smaller returns than it could because of barriers that should be insignificant. Society should care about the number of such income-typical students because the social loss is proportional to that number. The ratio of income-typical students who fit under the two explanations is, in contrast, unimportant except insofar as the ratio affects the cost-effectiveness of any intervention. Put another way, income-typical students who are already fully informed and prefer not to apply to peer colleges cannot be hurt by receiving information that, according to the first explanation, they already have. Thus, providing them with information can reduce the cost-effectiveness of an intervention, but can do nothing more.
In this study, we simultaneously test (i) whether there are income-typical students who would change their behavior if they knew more about colleges and (ii) whether we can construct a cost-effective way to inform and help such students realize their full array of college opportunities. We implement this test by randomly assigning interventions that contain semi-customized information and/or no-paperwork application fee waivers to 39,677 students, including 7,749 students who serve as controls (they are evaluated but receive no treatment). The interventions are designed to be low-cost (about $6 per student) and fully scalable. Crucially, the interventions are delivered directly to the students and, therefore, do not depend on students being concentrated geographically or having a counselor who has time to work with them. The interventions are specifically designed to test
prominent hypotheses about the information and other barriers most often thought to affect high-achieving, low-income students. We survey the students and also use the National Student Clearinghouse's (NSC's) administrative data to ascertain how the interventions affect students' college applications, admissions, enrollment, and -- to a limited extent given the recentness of the interventions -- success in college. We show both intention-to-treat and treatment-on-the-treated effects of the interventions. We set a very conservative standard for a student's having been treated: whether he or she was aware of having seen the intervention materials at all. The intention-to-treat effects are dilute versions of the treatment-on-the-treated effects because a good share of the interventions were not actually received or noticed by participants -- most often owing to family members' discarding the intervention materials because they did not recognize the bona fides of the intervention organization, The Expanding College Opportunities project (ECO). The lack of recognition of ECO is inherent in the randomized controlled trial: ECO could not establish a high profile among the public without contaminating the control group. Because this dilution would likely not occur if the interventions were brought to scale by a highly reputable organization such as the College Board, ACT, or another third party, the treatment-on-the-treated effects are the best guide to the likely effects of the interventions at scale. The remainder of this paper is organized as follows. In the next section, we explain the interventions and how they were targeted to students. We describe the data we use to select potential participants and to evaluate their responses. In section 3, we demonstrate that the randomization worked -- that is, each of the treatment and control groups looks alike on observable characteristics.
We also show that there was no differential survey response or differential NSC coverage that could plausibly bias our results. In section 4, we present the effects of the ECO-C Intervention, which combines semi-customized information and low-paperwork fee waivers, on students' applications, admissions, and enrollment. In section 5, we demonstrate that the ECO-C Intervention had its greatest effects on students in our target group: low-income students who are relatively isolated high-achievers. In section 6, we evaluate the role played by specific elements of the intervention by examining the outcomes of students who were assigned only parts of the ECO-C Intervention. We perform a variety of cost-benefit analyses in section 7, and we conclude in section 8.
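The scaling between the two estimands discussed in this section follows the standard Bloom (1984) adjustment: under one-sided noncompliance (controls cannot receive the treatment), the treatment-on-the-treated effect equals the intention-to-treat effect divided by the take-up rate. The following is a generic illustration of that adjustment, not the authors' code, and all numbers in it are hypothetical.

```python
# Generic Bloom (1984) adjustment: when controls cannot receive the
# treatment, the treatment-on-the-treated (TOT) effect equals the
# intention-to-treat (ITT) effect divided by the take-up rate.
# All numbers below are hypothetical illustrations, not ECO estimates.
def tot_from_itt(itt_effect: float, takeup_rate: float) -> float:
    if not 0.0 < takeup_rate <= 1.0:
        raise ValueError("take-up rate must be in (0, 1]")
    return itt_effect / takeup_rate

# If only 40% of assigned students actually noticed the materials,
# a hypothetical ITT effect of 0.02 implies a TOT effect of 0.05.
print(round(tot_from_itt(0.02, 0.40), 3))  # 0.05
```

The formula makes the dilution argument concrete: the lower the take-up rate, the more the intention-to-treat effect understates the effect on students who actually saw the materials.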
Owing to the large amount of material we cover in this paper, we have written a companion paper that more thoroughly describes how the ECO project was implemented and the research literature related to the ECO project. We also confine some material to online appendices.

2 The Expanding College Opportunities Project

a The Origins of the Expanding College Opportunities Project

Using administrative data for the entire population of students who take any College Board or ACT exam, Hoxby and Avery (forthcoming) show that many high-achieving, low-income students fail to apply to any selective postsecondary institution. In failing to do so, they act in a manner that is typical of students with the same income (they are "income-typical," despite their unusually high achievement) rather than the manner typical of students with the same achievement ("achievement-typical"). This evidence is reinforced by studies of mid-achieving, low-income students who, it appears, are also unlikely to choose colleges for which they are academically best prepared.6 Given the mounting array of evidence, a number of leaders of postsecondary institutions decided to support research on interventions that might inform low-income students about their college opportunities.7 Several foundations and the Institute for Education Sciences also provided support.8 The two main college assessment organizations in the U.S., the College Board and ACT, agreed to provide data. A consortium model of support was logical because informing students about their opportunities is a public service. If one institution made the effort to inform students on its own, many of the benefits would accrue to other colleges. That is, informing high-achieving, low-income students about their full array of college opportunities may benefit society, but it does not ensure that any particular college ends up with students whom it prefers.
Moreover, it would not make sense for one postsecondary institution to undertake the task of informing students because there are enormous economies of scale in the data gathering and database

Footnote 6: See, for instance, Bowen, Chingos, and McPherson (2009), Dillon and Smith (2012), and Smith, Pender, and Howell (forthcoming).
Footnote 7: See the acknowledgments.
Footnote 8: See the acknowledgments.
infrastructure that support semi-customized interventions like those of the ECO Project. By "semi-customized," we mean that each part of the interventions has a standard framework but that this framework is filled with information that is most likely to be relevant to the student. That is, given a rich database infrastructure and intervention framework, we can ensure that students see information about colleges that are local, colleges at which they will pay in-state rates, financial aid for which they would qualify, and the like. The semi-customization of the interventions is described below. The project focuses on high-achieving, low-income students because these students' behavior deviates the most from the behavior of affluent students with the same achievement. The focus also made sense because high-achieving, low-income students generally pay less to attend a college that has higher graduation rates, richer instructional resources, and a curriculum designed for students who are highly prepared, as they are (Appendix Table 1). The interventions were designed to meet several constraints. First, the aforementioned research had demonstrated that income-typical students could not be informed about college in a cost-effective manner via traditional recruiting methods that depend on students being geographically concentrated. This is because income-typical students are geographically dispersed. (The opposite is true of the small minority of high-achieving, low-income students who exhibit achievement-typical behavior. These students are highly concentrated in a small number of schools and areas.) Thus, we designed interventions that did not depend on geographic concentration for their efficacy. They did not require that students show up at a central location or be visited in person, for instance.
Second, the research had demonstrated that income-typical students were extremely unlikely to have a teacher or counselor who was expert in the process of applying to selective colleges or who had time to research colleges with the rare high-achieving student. Thus, the interventions were designed to go directly to students themselves, not to teachers or counselors who would have to relay them to students. Third, the interventions were designed to be both fully scalable and semi-customized in ways that were inexpensive and yet "smart." By "smart," we mean that the interventions were intended to take advantage of the enormous economies of scale that arise when a central organization efficiently uses large databases to inform the intervention that each person encounters. It is these economies of scale that account for the interventions' low cost of about $6 per student. Fourth, the
interventions were designed to take advantage of the enormous amount of reliable information about colleges that was already available. A naive student may find it hard to determine for himself which online and other information sources are reliable, but the interventions could inform students about how to use accurate data sources such as the U.S. Department of Education's College Navigator. Fifth and perhaps most important, because informing students about their opportunities was a public service, the interventions were written in the voice of a trusted third party. The interventions are designed to help a student gather the information that he needs to make a knowledgeable decision for himself. They do not direct students to any particular college or group of colleges.

b Hypotheses about Income-Typical Students' Behavior and the Interventions Designed to Test Them

There are at least four prominent hypotheses about why income-typical students exhibit application behavior very different from that of high-achieving, high-income students. We designed the interventions to test each of the four hypotheses. Hypothesis I is that income-typical students lack the advice that an expert college counselor would give a high-achieving student. An expert counselor would advise such a student to apply to eight or more colleges, most of which would be peer colleges whose median student scores within 5 percentiles of the student's own score. A student would also typically be advised to apply to a couple of colleges whose median student scores 5 to 10 percentiles above him and one or more colleges whose median student scores 5 to 10 percentiles below him. These colleges are sometimes described as "match," "reach," and "safety" colleges, but we prefer the "peer" nomenclature because it focuses us on preparation and curriculum as opposed to strategy.
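As a stylized aside (our own illustration, not part of the ECO materials), the percentile rule just described can be written as a small function; the cutoffs mirror the text, and the function name is our own.

```python
# Illustrative classification of a college relative to a student,
# following the percentile rule in Hypothesis I: "peer" if the college's
# median student scores within 5 percentiles of the student's own score,
# "reach" if 5 to 10 percentiles above, "safety" if 5 to 10 below.
def classify(student_pct: int, college_median_pct: int) -> str:
    gap = college_median_pct - student_pct
    if abs(gap) <= 5:
        return "peer"
    if 5 < gap <= 10:
        return "reach"
    if -10 <= gap < -5:
        return "safety"
    return "outside advised range"

print(classify(90, 93))  # peer
print(classify(90, 98))  # reach
print(classify(90, 82))  # safety
```

The rule underscores the point in the text: the classification depends only on the match between preparation and curriculum, not on admissions strategy.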
An expert counselor would also advise a student to obtain letters of reference; take college assessments on schedule; send verified assessment scores to colleges; write application essays; complete the Free Application for Federal Student Aid (FAFSA) and the CSS Profile;9 and meet all other deadlines and requirements of selective colleges' applications. Finally, an expert college counselor would advise a student to compare colleges on

Footnote 9: The CSS Profile gathers financial and tax information similar to that required by the FAFSA. It also requires some additional information. The colleges and universities that offer the most generous financial aid tend to require the CSS Profile in addition to the FAFSA because they require more information to construct richer aid packages.
the basis of their curricula, instructional resources, other resources (housing, extracurricular resources), and outcomes (such as graduation rates). The Application Guidance intervention was designed to test Hypothesis I. It provides the aforementioned information and gives students timely and customized reminders about deadlines and requirements. It provides students with semi-customized tables that compare colleges' graduation rates. The student is always confronted with the graduation rates of his nearest colleges, his state's flagship public university, other in-state selective colleges, and a small number of out-of-state selective colleges. (Colleges in the latter two categories are selected at random among those that qualify.) Students' secondary materials show the graduation rates of four-year colleges nationwide. Moreover, students' primary materials explain how to use tools like the College Navigator to investigate colleges' curricula, instructional resources, and housing in detail. Hypothesis II is that income-typical students misperceive their costs of attending selective colleges. Specifically, we hypothesize that students focus unduly on colleges' "list prices" (the tuition and fees that an affluent student who received no aid would pay) and fail to understand the net costs for students like themselves. We also hypothesize that students are unaware that some colleges provide financial aid for living expenses while other colleges do not. We suspect that income-typical students do not realize that they would generally pay less to attend colleges that were more selective and that had richer instructional and other resources (Appendix Table 1).10 The Net Cost intervention was designed to test Hypothesis II. It provides students with information about net costs for low- to middle-income students at an array of colleges.
The materials are semi-customized in that a student will always see the list prices, instructional spending, and net costs of his state's public flagship university, at least one other in-state public college, nearby colleges, a selective private college in his state, one out-of-state private liberal arts college, and one

Footnote 10: The reason why a student might especially mistake the cost of attendance at selective colleges is that institutional aid is a large part of a low-income student's financial aid package. This is especially true since 2000 because selective colleges have increased their institutional aid to low-income students (Hill, Winston, and Boyd, 2005; Avery, Hoxby, Jackson, Burek, Pope, and Raman, 2006; Pallais and Turner, 2006). It is not surprising, then, that Avery and Kane (2004) find that students from economically disadvantaged backgrounds are particularly likely to err when they estimate what various colleges would cost them. Similarly, Avery and Hoxby (2004) show that, compared to their higher-income counterparts, low-income, high-achieving students are less likely to understand their actual cost of attendance and more likely to be confused by superficial variables, such as list prices and whether a grant is called a "scholarship".
out-of-state private selective university. (Institutions in the latter categories are selected at random from among those that fit the criteria.) The net cost information is shown for hypothetical families with incomes of $20,000, $40,000, and $60,000. Students' secondary materials contain net cost information for a fuller array of colleges nationwide. The Net Cost materials are not intended to give a student precise information about his net costs but, rather, to make him recognize that list prices and net costs can differ greatly -- especially at selective institutions. The materials repeatedly state that a student will not learn exactly how much a given college will cost him unless he applies. The Net Cost materials also explain how financial aid works, emphasize how crucial it is to complete the FAFSA and CSS Profile on time, clarify how a student's Expected Family Contribution is computed, decipher a prototypical financial aid offer, and illustrate the trade-offs between loans, grants, and working while in college. Hypothesis III is that income-typical students are deterred from applying to college by application fees. At first glance, this hypothesis might seem unlikely because low-income students are eligible to have most application fees waived. (They are also eligible to have most college assessment fees waived.) Obtaining a fee waiver requires some paperwork, most often income verification by a counselor or similar authority. Students can also qualify for College Board fee waivers by completing the CSS Profile. None of the required paperwork is particularly onerous, and it is a modest element of the entire process of applying to a selective college. Nevertheless, Hypothesis III is plausible for a few reasons. A student may fail to realize that fee waivers are available until it is too late to qualify for them. (Details about obtaining a waiver are often on the final screens or pages of a college application.
)11 Or, a student who may be willing to fill out financial aid forms to be analyzed by a stranger may still balk at revealing his family's income to a counselor. Or, his counselor may be too busy to do his part of the fee waiver process. Indeed, research by Bettinger, Long, Oreopoulos, and Sanbonmatsu (2009) suggests that apparently modest FAFSA paperwork deters some students from applying to college.12

Footnote 11: For instance, a student who is applying to colleges online might not see detailed information on fee waivers until the very last screens of the process.
Footnote 12: However, it is not obvious that the Bettinger, Long, Oreopoulos, and Sanbonmatsu (2009) evidence is relevant to our target students. The students they evaluate tend to be marginal applicants for any college, not very (continued...)
The Fee Waiver intervention is designed to test Hypothesis III. It provides students with no-paperwork fee waivers that allow them to apply to 171 selective colleges.13 When we recruited colleges to accept ECO fee waivers, we specifically agreed to reimburse institutions for any case in which a student utilized a fee waiver when he was, in fact, ineligible based on that institution's waiver criteria.14 The Fee Waiver materials instruct students on how to submit an ECO fee waiver -- some institutions preferred students to mail paper waivers while others preferred that students enter an ECO code on their online applications. Hypothesis IV is that it is the parents of income-typical students, rather than the students themselves, who cause the divergence in application behavior by family income. Specifically, there are ethnographic studies that suggest that low-income parents fail to perceive differences among postsecondary institutions.15 For instance, they may fail to differentiate among institutions that offer 2-year versus 4-year degrees, have low versus high graduation rates, have poor versus rich instructional resources, offer meager versus generous financial aid, and so on. Thus, parents may focus on low-list-price institutions that are very nearby, to the exclusion of all other alternatives. The Parent Intervention was designed to test Hypothesis IV. It consists of materials that cover the same information presented to students in the Application Guidance and Net Cost parts of the intervention, but the materials are explicitly addressed to parents and the information is conveyed differently. Specifically, the materials are written to be accessible to adults who have limited education, limited familiarity with American higher education, and limited English skills. Parents who live in a neighborhood where Spanish speakers prevail received materials in both English and Spanish.
In addition, the materials emphasize issues that, according to the ethnographic research,

Footnote 12 (continued): high-achieving students who are excellent applicants for selective colleges.
Footnote 13: 171 is the number of colleges for the ECO-C Intervention, which is mainly what we evaluate in this paper. The number of colleges was 151 for the 2010 high school graduating cohort of students, on whom we do not focus except for a subset of results.
Footnote 14: Institutions' criteria for waiving application fees differ somewhat. Since we wanted the fee waivers to be simple, we did not attempt to incorporate all of the different criteria. In addition, a small percentage of the students in the ECO evaluation would have family income above most colleges' thresholds for waiver eligibility. See the section on targeting below.
Footnote 15: See, for instance, Tornatzky, Cutler, and Lee (2002).
especially concern parents: financial returns to college, financial aid (especially loans), and differences in time-to-degree among colleges.

c Development of the ECO-C Intervention

We randomly assigned the four interventions to students in our Pilot cohort of 2009 high school seniors and in the 2010 cohort of high school seniors.16 Although the materials sent to the Pilot cohort were formulated with the help of experienced college mentors, admissions staff, financial aid staff, and student focus groups, we obtained substantial new information after the Pilot year, using focus groups drawn from the Pilot cohort itself. While much of the feedback confirmed our prior information, we learned some things worthy of note. First, families strongly preferred to receive paper materials in the mail, as opposed to receiving electronic materials by email or other online means. Families also preferred materials that did not look like typical college recruiting brochures. We therefore adjusted our dissemination strategy and sent virtually all materials by mail (with extensive online backup, links, and secondary materials). Each intervention was sent in a tabbed, expandable file designed to help students organize their materials for multiple college applications. That is, the design was meant to signal that ECO intended to help them learn about their options, not recruit them for a specific college. Second, families were often suspicious of the interventions (especially anything that was online) because they feared falling prey to for-profit firms selling college advice. We investigated these firms to ensure that the design of our materials was as distinct as possible from theirs. We also ensured that all ECO materials prominently stated that the project was research conducted by the principal investigators and funded by the relevant foundations and IES. We promptly answered questions about the project by email and telephone, often assuring families that it was legitimate research.
While these efforts reassured some wary families, the credibility of the ECO project continued to be an issue simply because we could not give the ECO project a strong, public presence without contaminating the randomized experiment. In particular, we did not want the control group to know what the interventions were, and we did not want the students to know what hypotheses we were testing. We believe that credibility would not be an issue if the same interventions were

Footnote 16: Parent Intervention materials were sent only to the 2010 cohort.
conducted by a well-known non-profit organization with a public presence. We return to this issue below because we are confident that it caused the take-up rate of the experimental interventions to be substantially below that which a well-known organization would attain. As an aside, we discovered that there was good reason for families to be suspicious of firms offering free college advice. Numerous for-profit firms offer initial advising materials for free but then charge substantial fees once a family is engaged. We found multiple firms that sell information that is inaccurate and many firms that sell information that is available for free elsewhere. Some websites that appear to offer unbiased advice are actually recruiting tools for specific, profit-making institutions. A neophyte might have considerable difficulty distinguishing between public-minded organizations that offer reliable information and firms that offer inaccurate or overpriced information. Third, we learned that particular members of each family tended to vet incoming college-related mail, regardless of to whom it was addressed. By "vet," we mean that the person felt free to discard or read the materials before handing them to the addressee. This person was sometimes a parent, sometimes the student himself (in the case of the Parent Intervention), and sometimes another adult. In many cases, the same member of the family vetted incoming college-related email. In short, we learned that our attempts to direct interventions to particular family members were largely useless. Fourth, we found that the Fee Waivers had a consequence that we had not foreseen. Like "earnest money," the Fee Waivers apparently made families believe that the information was provided with earnest intentions. This caused them to pay more attention to materials that accompanied the Fee Waivers. Having learned these lessons, we created the ECO Comprehensive or "ECO-C" intervention.
It combines the Application Guidance, Net Cost, and Fee Waiver interventions. It does not include the Parent Intervention because we concluded that materials directed to parents were often read by students and vice versa. Since the Parent Intervention simply bundled content from the Application Guidance and Net Cost interventions in a different way, we believed that it would prove repetitive if added to the ECO-C Intervention. We randomly assigned the ECO-C Intervention, each of the four interventions, and control status to 3,000 students per treatment in the cohort of 2011-12 high school seniors. Most of the
findings in this paper are based on the results of that randomized controlled trial. The key features of the ECO-C Intervention and each intervention are summarized in Appendix Figure 1.

d Selecting Students for the Evaluation

We identified target students by combining student data from the College Board and ACT with data from an array of sources that allow us to estimate whether a student comes from a low-income family. The data combination and estimation process is described in greater detail in Hoxby and Avery (forthcoming). Briefly, we start with data that contain a student's College Board or ACT scores, his location at the level of a Census block (the smallest unit of geography in the Census), his high school, his self-reported high school grade point average (which has been demonstrated to be quite accurate), the identity of the postsecondary institutions where he sends his scores, and a variety of information that the student reports about his high school experience and his college plans (if any). We match each student to 454 variables that describe the socio-demographics of his neighborhood (at the Census Block Group level), the socio-demographics and other characteristics of his high school, the history of college application and college attendance among former students of his high school, the scores of former students of his high school on college assessments and statewide high school exams, and income information on his zip code from the Internal Revenue Service. The variables focus on issues that summarize or predict parents' and other local adults' incomes; parents' and other local adults' educational attainment; local house values (a key measure of wealth); race; ethnicity; and the propensity to apply to postsecondary institutions, to four-year colleges, and to selective colleges. A list of these variables is available in Online Appendix Table 1.
The variables come from the student's self-description at the time he or she took a college assessment, from the U.S. Department of Education's Common Core of Data (2009) and Private School Survey (2009), from the 2000 Census of Population and Housing, from Geolytics 2009 estimates for Census Block Groups, from the Internal Revenue Service, and from statistics computed by the authors for each U.S. high school
using multiple years of past College Board and ACT data.17 We use all of these variables to estimate students' family incomes, where the verified income variable that we use to generate our parameter estimates comes from financial aid data (which we have for about one tenth of students). We estimate students' family income rather than use the students' self-reported family income because the majority of College Board test takers do not answer the question about family income and because ACT test takers understate their incomes.18 We are most likely to underestimate a student's income if he lives in a neighborhood and attends high school with poorer people. Symmetrically, we are most likely to overestimate a student's income if he lives in a neighborhood and attends high school with richer people. Put another way, we are better at estimating a student's family's permanent income than current income. This is desirable from a policy perspective because disadvantage is more a function of a family's permanent (lifetime) income than its current income, which can be temporarily affected by job loss, insurance benefits, and the like. For the 2011-12 cohort of high school seniors, we used a random number generator to randomly select 18,000 students. 12,000 of these were our target students who: (i) scored in the top decile of test-takers of the SAT I or ACT (1300 math plus verbal on the SAT, 28 on the ACT);19 (ii) had estimated family income in the bottom third of the income distribution for families with a twelfth grader (based on the 2007-2011 American Community Survey); and (iii) did not attend a "feeder" high school.
We define a feeder high school as one in which more than 30 students in each cohort typically score

Footnote 17: Specifically, there are 48 variables based on students' self-descriptions when they took the ACT or SAT test, 36 variables from the Common Core of Data, 165 variables from the 2000 Census at the Block or Block Group level (whichever was the most disaggregated available), 82 variables from the Geolytics 2009 estimates, 24 variables based on Internal Revenue Service data, 98 variables computed for high schools by the authors using historical College Board and ACT data, and 1 variable that contains the authors' estimate of the student's family's income.
Footnote 18: See Hoxby and Avery (2013) for an explanation of the income understatement in ACT data.
Footnote 19: The 2011-12 cohort was drawn from takers of College Board tests while the 2009-10 and 2010-11 cohorts were drawn from both College Board and ACT test takers. However, the effects are very similar between the 2010-11 and 2011-12 cohorts, with one exception noted below that was due to external circumstances, not the sample.
in the top decile on college assessment exams. The test score cut-offs ensure that all the selected students have a high probability of admission at the 236 most selective colleges in the U.S. This corresponds to the set of colleges in Barron's Most Competitive, Highly Competitive, and Very Competitive Plus categories.20 For the 2011-12 cohort of high school seniors, we also randomly selected 6,000 students who met the same test score criteria but who had estimated family income above the bottom tertile and/or attended a feeder high school. Although these students are outside our target group, we selected some of them -- at a lower sampling rate -- so that we could test whether the effects of the ECO-C Intervention were different for the target students than for non-target students. Most of the results shown in this paper are for target students. It will be clear when we use data on non-target students. For the 2010-11 cohort and 2009-10 Pilot cohorts, we selected totals of -- respectively -- 12,500 and 9,000 students in a similar manner.21 Once students were selected from each cohort, we randomly assigned an equal number to each intervention offered that year or to the control status. Most of the results shown in this paper are for students from the 2011-12 cohort because only they experienced the ECO-C Intervention. It will be clear when we show findings based on the interventions applied to students in the 2010-11 cohort. We do not show results for the 2009-10 Pilot cohort because -- as mentioned above -- the interventions changed substantially after feedback from the pilot year.22

e Tracking Application Behavior, Admissions, and Enrollment

To evaluate students' response to the ECO-C Intervention and the individual interventions, we obtained two

Footnote 20: For the 2009-10 Pilot cohort and 2010-11 cohort, we eliminated students who did not self-report a grade point average of A- or above.
However, given the test score cut-offs, this criterion eliminated only a small share of students -- more often because they failed to self-report any variables than because they actively reported a grade point average below A-. We therefore did not apply this criterion to the 2011-12 cohort, the cohort we mainly evaluate in this paper.
Footnote 21: Because we had more data with each subsequent cohort, we refined the income estimation process between each cohort. Using the 2011-12 estimation process to re-select students from the 2010-11 cohort, we can confirm that the refinement did not substantially affect the results for those two cohorts. However, we used students' self-reported family incomes to select some students in the 2009 Pilot cohort. This proved to be a mistake because the self-reports turned out to be inaccurate and systematically biased downward among ACT takers.
Footnote 22: Once we allow for the difference in selection (see previous footnote), the estimated effects of the interventions for the pilot year are largely consistent with those for the next (2010-11) cohort.
sources of data on their application behavior, admissions outcomes, and college enrollment. First, we surveyed students during each summer after they were selected for an ECO treatment or control group. In the summer after he is expected to graduate from high school, each student is asked to take a fairly comprehensive survey on his college application process, where he was admitted, what his financial aid offers were, and so on. The resulting survey data are so rich that most of the variables must be analyzed in future papers by the authors. In the summers after which the student might have completed a year of college, each student is asked to complete a shorter survey that measures college enrollment, course-taking, time allocation in college, work for pay, and degree attainment.

Our second source of outcome data is NSC data on enrollment, persistence, and progress toward a degree. These data are reported by postsecondary institutions. The NSC covers 96 percent of students enrolled in colleges and universities in the U.S.[23] Our selected students are matched to NSC data using their names and birth dates. This is a largely accurate match, but it is not perfect owing to variation in how students write their names and owing to typographical errors in students' reported birth dates. We report descriptive statistics for variables in the dataset in Online Appendix Table 2.

3 Randomization, Survey Response, and the Probability that ECO Materials Are Seen by the Intended Recipient

In a randomized controlled trial such as this, the econometrics entail only fairly simple comparisons between the treatment and control groups so long as (i) the groups were actually selected at random and (ii) there is no differential attrition. In our case, differential attrition could take the form of students failing to respond to the survey in such a way that they bias the comparison between treatment and control groups. In this section, we explore these issues. We also describe the extent to which students who were sent intervention materials did not actually see them. That is, we describe the extent to which students whom we "intended to treat" were actually "treated."

Footnote 23: The source is http://www.studentclearinghouse.org/about/.
Given our use of a random number generator, the large number of students in each treatment group and in the control group (3,000 in the 2011-12 cohort), and the Law of Large Numbers, we expect each group to have observable and unobservable characteristics that are the same. Nevertheless, it makes sense to check that the groups' observable characteristics are, indeed, as similar as we expect with randomization. To do this, we check the 454 predetermined (pre-treatment) variables that we used in the process of selecting students. These variables describe the student, his family, his neighborhood, his high school, and the college-going behavior of students in his high school in previous years.[24] We regress each of the 454 predetermined variables on indicators for a student's treatment group and his cohort. We find that 3.2 percent of the coefficients on the treatment group indicators are statistically significantly different from 0 at the 5 percent level and that less than 1 percent are statistically significantly different from 0 at the 1 percent level. These results are consistent with the randomization having worked just as intended.

66.9 percent of students answered the survey. While this is a high response rate, it is nevertheless possible that, within the survey respondents, the randomization fails so that the characteristics of the treatment and control groups differ. We do not find evidence of such failure, however. Using just the students who answered the survey, we again regress each of the 454 predetermined variables on treatment group and cohort indicators. We find that 1.7 percent of the coefficients on the treatment group indicators are statistically significantly different from 0 at the 5 percent level and that less than 1 percent are statistically significantly different from 0 at the 1 percent level. This evidence is consistent with there being no differential survey response that could bias our results.
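The balance check just described can be sketched as follows. This is a minimal illustration with simulated data, not the authors' code; with true randomization, roughly 5 percent of the covariate tests should reject at the 5 percent level by chance alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 3000, 454  # students per group, predetermined variables

# Simulated predetermined covariates; under true randomization the
# treatment and control groups draw from the same distribution.
treat = rng.normal(size=(n, k))
control = rng.normal(size=(n, k))

# Two-sample z-test for each covariate (the normal approximation is
# fine with 3,000 students per group).
diff = treat.mean(axis=0) - control.mean(axis=0)
se = np.sqrt(treat.var(axis=0, ddof=1) / n + control.var(axis=0, ddof=1) / n)
z = diff / se

share_sig_5 = float((np.abs(z) > 1.96).mean())   # rejections at the 5% level
share_sig_1 = float((np.abs(z) > 2.576).mean())  # rejections at the 1% level
print(share_sig_5, share_sig_1)
```

With balanced groups, both shares should be close to the nominal test sizes, as in the 3.2 percent and 1.7 percent figures reported above.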
Footnote 24: Note that the 454 variables are not independent. This is useful information for interpreting the tests that we describe.

Another way to judge whether the survey-based outcomes will generate unbiased results is to compare the institution in which the student reports enrolling (in the survey) with the institution in which the student appears to enroll in the NSC data. These institutions are the same 95.2 percent of the time. The remaining 4.8 percent of the time, the institutions are not identical, but the differences are not systematic. For instance, it does not appear that students are overstating their true
institution in the survey. When the institutions are not identical, the survey-based institution has a lower Barron's competitiveness ranking 2/5ths of the time, a higher Barron's ranking 2/5ths of the time, and the same Barron's ranking 1/5th of the time. More precisely, the Barron's rankings of the survey-reported and NSC institutions are not statistically significantly different in a paired t-test. While we find that students who answer the survey attend institutions with slightly higher Barron's rankings than students who do not answer the survey,[25] what matters for our results is whether the survey-NSC ranking gap differs across treatment and control groups. We find that this gap does not differ statistically significantly across the groups.[26]

Using a combination of individual inspection and cross-validation with colleges' student directories, we conclude that at least half of the survey-NSC conflicts in the enrolled institutions occur because the student has been matched to the wrong person in the NSC. (Because the match uses only names and birth dates, incorrect matches are possible although unlikely.) The remaining conflicts appear to be due to students really changing their institution of enrollment between the time they are surveyed and the time they enroll in the fall. Some of these changes are minor--for instance, changing between the Stout and Eau Claire campuses of the University of Wisconsin. When we consider NSC-based outcomes, we adjust our estimates for the attenuation bias caused by having a small percentage of the students matched to the wrong person in the NSC data.[27]

Footnote 25: The difference is 0.4 ranks when we convert the Barron's (plus) groups into numerical groups based on their order. The conversion is 12=Most Competitive, 11=Highly Competitive Plus, 10=Highly Competitive, 9=Very Competitive Plus, 8=Very Competitive, 7=Competitive Plus, 6=Competitive, 5=Less Competitive, 4=Noncompetitive but still in Barron's Profiles of American Colleges. Two-year institutions and many four-year institutions are not listed at all in Barron's Profiles. Also, specialty institutions such as culinary schools are an awkward fit for Barron's vertical ranking on selectivity. They have a separate category in Barron's, known as "Special," that we have explored as an outcome without producing results of much interest. The 0.4 difference in ranks is the same if we assign plausible rankings to institutions that do not appear at all in Barron's Profiles. For instance, we have tried assigning non-selective, non-appearing four-year institutions the number 2 and assigning non-selective two-year institutions the number 1.

Footnote 26: In fact, the point estimates of the gaps have the "wrong" sign if one is concerned that treated students (relative to control students) are less likely to respond to the survey if they enroll in a low-selectivity institution. That is, the survey-NSC gap in the enrolled institution's ranking is (very slightly) larger for control-group students, but their gap is not statistically significantly different from the gap for students in any treatment group.

Footnote 27: Students being matched to the wrong person causes attenuation bias of a simple form. Some percentage of NSC outcomes are for people whom we did not, in fact, attempt to treat. Therefore, the treatment indicator is erroneously set to 1 in some percentage (we assume 2.4 percent) of cases when it ought to be set to 0. The remedy (continued...)
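The numerical conversion described in the footnote above can be written as a lookup table. This is our own sketch, not the authors' code; the two lowest codes are the trial assignments for institutions that do not appear in Barron's Profiles, and the key strings are shorthand labels.

```python
# Barron's selectivity categories mapped to the numeric ranks given in
# the footnote; 2 and 1 are the trial assignments for unlisted schools.
BARRONS_RANK = {
    "Most Competitive": 12,
    "Highly Competitive Plus": 11,
    "Highly Competitive": 10,
    "Very Competitive Plus": 9,
    "Very Competitive": 8,
    "Competitive Plus": 7,
    "Competitive": 6,
    "Less Competitive": 5,
    "Noncompetitive": 4,
    "Unlisted four-year, non-selective": 2,
    "Unlisted two-year, non-selective": 1,
}

def rank_gap(survey_category: str, nsc_category: str) -> int:
    """Signed rank gap between the survey-reported and NSC institutions."""
    return BARRONS_RANK[survey_category] - BARRONS_RANK[nsc_category]
```

Averaging `rank_gap` over conflicting cases gives the 0.4-rank difference discussed in the footnote.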
Specifically, we assume that 2.4 percent of the students are matched to the wrong person's outcomes.

A related issue that affects all estimates, but especially NSC outcomes, is undeliverable mail. We sent intervention materials to the address that each student had when he or she registered for a college assessment exam. The gap between our sending materials and the student's registration date was usually four to fourteen months, although the lag was shorter in some cases. Thus, some students had moved between the day they registered for an exam and the day we sent the interventions. Also, a small share of addresses supplied by students are incomplete--most often, an apartment or unit number is missing. When such situations arise, the ECO-C Intervention materials are undeliverable. The materials are sometimes though not always returned to us. We received return-to-sender mail for 10 percent of our targeted participants, and it is likely that some additional percentage of materials was undeliverable but was not sent back to us because the people at the address did not bother to return it. In any case, households that do not receive any materials are clearly untreated by the interventions, and in a typical research design they would not even be recorded as participants whom we intended to treat since they could not possibly have been treated.

This leads us to two points. First, if the interventions were to be run at scale by an organization--such as the College Board, ACT, or a third party--that sent intervention materials at the same time students received their test scores, most of the undeliverable mail problem would disappear because most of it is due to students who move. That is, the undeliverable mail problem arises in the experiment, which is not timed tightly with test-taking, but would occur much less in an at-scale program. Second, when we compare enrollment outcomes based on the NSC to enrollment outcomes based on our survey, we rescale the NSC outcomes for the undeliverable mail problem. This is because survey-based outcomes are relatively unaffected by the problem: a person who could not receive intervention materials owing to moving or a bad address also would not have received an invitation to take the survey. To rescale the NSC-based outcomes, we use a simple Wald (1940) Estimator that rescales the intention-to-treat by the actual probability of receiving the mail.[28]

Footnote 27 (...continued): for such attenuation bias is a re-scaling of each coefficient estimate.

Footnote 28: Note that the Wald Estimator is the appropriate remedy, as opposed to dropping potential participants whose intervention materials were returned to us. This is because we do not send control students intervention (continued...)
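Both adjustments described above--for mismatched NSC records and for undeliverable mail--take the same Wald-style form: divide the raw estimate by the share of the nominal treatment group that could actually have been exposed. A sketch, using the 2.4 percent and 10 percent figures from the text (the example effect value is hypothetical):

```python
def wald_rescale(raw_estimate: float, exposed_share: float) -> float:
    """Rescale an intention-to-treat estimate when only `exposed_share`
    of the nominal treatment group could actually have been treated."""
    if not 0.0 < exposed_share <= 1.0:
        raise ValueError("exposed_share must be in (0, 1]")
    return raw_estimate / exposed_share

MISMATCH_RATE = 0.024      # share matched to the wrong person in the NSC
UNDELIVERABLE_RATE = 0.10  # share of mailings returned to sender

raw_itt = 0.05  # hypothetical raw estimate, in native units
nsc_adjusted = wald_rescale(raw_itt, 1.0 - MISMATCH_RATE)
mail_adjusted = wald_rescale(nsc_adjusted, 1.0 - UNDELIVERABLE_RATE)
```

Because the dilutions only shrink the raw coefficient toward zero, each rescaling makes the adjusted estimate (weakly) larger in magnitude.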
More generally, there is a significant gap between the intention-to-treat and treatment in the experiment, and we believe that a substantial share of this gap would disappear if the interventions were done at scale by a well-known, reputable organization such as the College Board, ACT, or a third party organized for the purpose. The nature of the experiment made it necessary for ECO to maintain a very low profile for fear of contaminating the control group or informing the participants of the hypotheses under consideration. This very low profile undoubtedly contributed to households' discarding the materials without the intended recipient ever seeing them. If, for instance, intervention materials were delivered in coordination with students' receipt of their college assessment scores, it is likely that few materials would be discarded out of hand.

In order to present the effects that the interventions would likely have if conducted by, say, the College Board, we show treatment-on-the-treated effects where treatment is defined very broadly, so that any intended participant who could merely remember having received the materials is counted as treated. This measure of treatment is meant to show what a reputable organization could expect, not to indicate the effect of reading the materials thoroughly--a treatment whose effects we will probably never know.[29] To compute the percentage of participants who were treated (broadly defined), we conducted a very brief telephone survey of a random sample of 200 students in each of the treatment groups (not the control group) in the 2010-11 cohort.[30] We simply asked them whether they could recall having seen any materials from the Expanding College Opportunities project. Among students who were assigned to the Fee Waiver treatment, 40 percent could recall the materials. Among students who were assigned to another treatment, 28 percent could recall the materials.
Footnote 28 (...continued): materials, so we naturally do not observe which of them would have such materials returned to us.

Footnote 29: We could learn the effects of such a treatment by giving students an incentive to read the materials online and monitoring the time they spent doing so. However, we do not think that such a version of the treatment is scalable, so we do not test it. The goal of the project was to test fully scalable interventions.

Footnote 30: We did not attempt to survey all participants on this question for fear of changing the nature of the intervention, since a full-scale intervention would not include such telephone calls. Since the calls could potentially generate Hawthorne effects, we have verified that the participants who received such calls do not exhibit outcomes that differ from those who did not. The survey was conducted at the end of the students' senior year in high school.

As described in the previous section, we believe that these low percentages reflect families' mistaking the materials for solicitations from companies engaged in for-profit college
advising and parents discarding mail addressed to their children from organizations without a well-known profile.

The bottom line is that we use the above percentages to construct Wald Estimates of the treatment-on-the-treated.[31] These estimates are our best estimate of the effects that an organization could expect to get if it were well-known and reputable (as the College Board and ACT are) and if it conducted the interventions with timing that was coordinated with students' receiving their test scores. More generally, we invite readers to interpolate between the intent-to-treat and treatment-on-the-treated estimates as they see fit, according to the way they envision fully-scaled-up interventions being conducted.

In summary, all of our tests confirm that randomization worked as intended to create statistically identical treatment and control groups. The randomization also worked as intended if we consider only those students who answered our survey. We are therefore confident that our estimates reflect the causal effect of the ECO interventions. We present both intention-to-treat results and treatment-on-the-treated results, where a student is defined as treated if he or she merely remembers having received ECO materials.

4 The Effects of the ECO-C Intervention on Students' College Applications, Admissions, and Enrollment

In this section, we describe the effect of the ECO-C Intervention on students' outcomes. Because the ECO-C Intervention was given only to students in the 2011-12 cohort, all of the results shown in this section are based on them only. In later sections, we show results for the interventions for which we have data from the 2010-11 cohort as well. We show estimates of the intent-to-treat effect, β̂, from the straightforward regression.
(1)    Outcome_i = α + β · ECOinterv_i + ε_i

where Outcome is the relevant application, admissions, or enrollment outcome and ECOinterv is an indicator variable for the student's having been assigned to the ECO-C Intervention.[32] The estimate of the parameter α records the average outcome among students in the control group. We also show estimates of the treatment-on-the-treated effect based on the Wald Estimator

(2)    β̂_TOT = β̂ / Prob(Recall Intervention)

where Prob(Recall Intervention), our measure of treatment, is equal to 40 percent for the ECO-C Intervention.

Footnote 31: We conducted the telephone survey for a cohort in which the (combined) ECO-C Intervention was not used, but it is clear from conversations with participants that it is the fee waivers that account for the greater memorability and credibility of the Fee Waiver intervention materials. Therefore, we use 40 percent as the percentage of participants who were treated for both the Fee Waiver and (combined) ECO-C Intervention.

Our estimating equation (1) does not control for any predetermined covariates, such as a student's gender, race, ethnicity, and neighborhood characteristics. This is because we found that adding them, in various plausible combinations, does not affect the coefficients. This is not surprising given the balance in the covariates reported in the previous section. Estimates for regressions in which covariates are included are available from the authors.

Each table in the sections that follow shows the effects in native units (for instance, the number of applications submitted), in percentage changes relative to the control group's mean, and in effect sizes (as a share of the control group's standard deviation). On the whole, we believe that the percentage changes are easiest for readers to interpret because they do not require the reader to know the mean of the outcome himself. Indeed, we have found that readers usually have only hazy ideas of the outcomes of the high-achieving, low-income students who are targeted in this study. Thus, they often end up with misimpressions if the effects are not put into percentage changes or effect sizes for them. However, each reader is welcome to focus on whichever translation of the effects is most transparent to him.

a Effects of the ECO-C Intervention on College Application Behavior

Table 1 shows intention-to-treat effects of the ECO-C Intervention on students' application behavior.
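The estimation in equations (1) and (2) can be sketched with simulated data. This is an illustration only: the sample size and effect size below are hypothetical, while 0.40 is the recall probability reported in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6000

# ECOinterv: random assignment to the ECO-C Intervention (1) or control (0).
eco = rng.integers(0, 2, size=n).astype(float)
true_beta = 0.8  # hypothetical intent-to-treat effect, in native units
outcome = 4.0 + true_beta * eco + rng.normal(size=n)

# Equation (1): Outcome_i = alpha + beta * ECOinterv_i + eps_i, by OLS.
X = np.column_stack([np.ones(n), eco])
alpha_hat, beta_hat = np.linalg.lstsq(X, outcome, rcond=None)[0]

# Equation (2): Wald rescaling to the treatment-on-the-treated.
P_RECALL = 0.40
tot_hat = beta_hat / P_RECALL
```

In the tables, such estimates are then expressed in native units, as percentage changes relative to the control group's mean, and as effect sizes relative to its standard deviation.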
Footnote 32: NSC-based outcomes are slightly rescaled to account for mail that was undeliverable and for mis-matching to NSC data. Survey-based outcomes need no such rescaling.

We find that the ECO-C Intervention causes students to submit 19 percent more applications and to be 27 percent more likely to submit at least five applications. The ECO-C
Intervention raises their probability of applying to a peer public university by 19 percent, a peer private university by 17 percent, and a peer liberal arts college by 15 percent. Students were also more likely to apply to institutions in the ranges immediately below and above peer institutions. The pattern of effects clearly shows students targeting peer institutions the most, and other institutions less as their median student differs more from the student himself. (Note that, because they are so high-achieving themselves, only a small share of our students can apply to an institution whose median students score more than 5 percentiles higher.) The ECO-C Intervention causes students' "maximum" application to have a higher median SAT score (by 34 points), a graduation rate that is 7 percent higher, instructional spending that is 22 percent higher, and student-related spending that is 21 percent higher.[33]

If an organization like the College Board or ACT were to conduct the ECO-C Intervention, we believe that the effects would be more like the treatment-on-the-treated effects shown in Table 2. If a student could at least recall having seen ECO materials, the ECO-C Intervention caused her to submit 48 percent more applications and be 66 percent more likely to submit at least five applications. She was 48 percent more likely to apply to a peer public university, 42 percent more likely to apply to a peer private university, and 38 percent more likely to apply to a peer liberal arts college. If she could recall seeing ECO materials, the ECO-C Intervention also caused her to apply to a college with a 17 percent higher four-year graduation rate, 55 percent higher instructional spending, 52 percent higher student-related spending, and an 86-point higher median SAT score.
b Effects of the ECO-C Intervention on College Admissions Outcomes

Because the students targeted by the ECO program had high college assessment scores and grades, we expected that they would be admitted to more selective colleges if the intervention did, in fact, cause them to apply to such colleges. This expectation was correct, as shown in Tables 3 and 4. First consider the intention-to-treat effects. Students who were assigned to the ECO-C Intervention were admitted to 12 percent more colleges. They were 31 percent more likely to be admitted to a peer college, and the maximum college to which they were admitted had students whose median SAT score was 21 points higher.

Footnote 33: Student-related spending is spending on instruction, student services, academic support, and institutional support. It does not include research spending or public service spending.

Students were admitted to a college with a 10 percent
higher graduation rate, 14 percent higher instructional spending, and 14 percent higher student-related spending. The effects for a student who could at least recall having seen ECO materials are larger. He was admitted to 31 percent more colleges. He was 78 percent more likely to be admitted to a peer college, and the maximum college to which he was admitted had students whose median SAT score was 53 points higher. The maximum college that admitted him also had a 24 percent higher graduation rate, 34 percent higher instructional spending, and 34 percent higher student-related spending.

c Effects of the ECO-C Intervention on Fee Waiver Use and FAFSA Filing

Tables 3 and 4 show that the ECO-C Intervention increased the probability that a student used any application fee waivers and increased the number of application fee waivers she used. This is not a mechanical effect because most of the target students could have had their application fees waived at most institutions anyway. That is, receiving low-paperwork fee waivers did make a difference in students' use of fee waivers. This indicates that eligible students tend not to utilize all of the fee waiver opportunities normally available to them. This is not a surprising result because it accords with the evidence of other studies such as Pallais (2009).

The ECO-C Intervention did not affect a student's probability of filing the FAFSA. We suspect that this is because nearly all of our target students apply to some postsecondary institution, and all such institutions require that students who wish to be considered for financial aid file the FAFSA. In fact, there is evidence that non-selective institutions, especially those that are highly dependent on federal aid, may be especially proficient at ensuring that students file FAFSA forms (U.S. Government Accountability Office, 2010).
d Effects of the ECO-C Intervention on College Enrollment Outcomes

It is not obvious that the ECO-C Intervention should have affected college enrollment outcomes simply because it affected the colleges to which students applied and were admitted. After all, a student might be willing to invest the time and effort to apply to a college in order to learn about it and the financial aid package it would offer. The same student might, upon receiving this information, decide that the college was--after all--not for him. If we find evidence that the ECO-C Intervention affected enrollment outcomes, the logical interpretation is that our target students were
initially somewhat misinformed about their college opportunities. To see this, consider a student who is not certain about where he will gain admission and what aid he will be offered but whose information or beliefs are right on average. When such a student receives the ECO-C Intervention, he may decide to apply to more colleges simply because the low-paperwork fee waivers reduce his (time) cost of doing so. However, when he gets his admissions letters, his prior beliefs will be confirmed (since he was right on average), and he will not change his enrollment decision much. That is, unless the target students' information is changed fairly systematically by the application and admissions experience, they should not change their enrollment decisions much.[34]

In Tables 5 and 6, we show that the ECO-C Intervention does, in fact, alter students' enrollment decisions. The intention-to-treat effects are as follows. Students who were assigned to the ECO-C Intervention enrolled in a college that was 19 percent more likely to be a peer institution. Also, students assigned to the ECO-C Intervention enrolled in colleges with graduation rates that were 6 percent higher, instructional spending that was 8.6 percent higher, and student-related spending that was 10.4 percent higher. The treatment-on-the-treated effects, which are a better guide to what a well-reputed organization could expect, are larger. For instance, a student who experienced the ECO-C Intervention enrolled in a college that was 46 percent more likely to be a peer institution, whose graduation rate was 15.1 percent higher, whose instructional spending was 21.5 percent higher, and whose student-related spending was 26.1 percent higher.

Tables 5 and 6 are based on the college in which students report enrolling when they take the ECO survey. Thus, they are directly comparable to Tables 1 through 4, which are also based on the survey data.
However, we find that we obtain very similar results if we use NSC data on the college in which students enroll first after high school graduation or in which they enroll the longest after high school graduation. These results are shown in Online Appendix Table 3. This is further evidence that the students who take the survey are not differentially selected across treatment and control status.

Footnote 34: A student's preferred college could potentially change simply because the fee waivers induced him to apply to more reach schools and one of them unexpectedly admitted him. However, if students were fully informed and making rational calculations about application costs and benefits, this effect would be trivially small.
5 Who is Most Affected by the ECO-C Intervention?

We chose to target the students we did because we hypothesized initially that they would be more affected by the ECO-C Intervention than more affluent students or students who attended a high school with a critical mass of high-achieving students. In this section, we test whether this hypothesis was accurate by examining the effects of the ECO-C Intervention for the relatively small sample of students whom we drew from outside our target group. (See Section 2.)

Tables 7 and 8 present this evidence. The second column in each table shows the intention-to-treat effect of the ECO-C Intervention (in native units) for our target group of students. The third column shows the difference in the intention-to-treat effect for students who are from feeder high schools (Table 7) or students whose families have incomes above our target range (Table 8). The right-hand column indicates whether the difference shown in the third column is statistically distinguishable from zero.

About one-fourth of the differences shown in the third columns of the tables are statistically significant. Moreover, the pattern of the differences is consistently negative in sign. Thus, it appears that students who are from feeder high schools and students from more affluent families are indeed less affected by the ECO-C Intervention. This broadly confirms our initial hypothesis.

Of course, even within our target group of high-achieving, low-income students, students still differ on dimensions such as their family income, test scores, gender, parents' education, race, and ethnicity. We also tested whether the effects of the ECO-C Intervention differed along these dimensions.
Footnote 35: We tried cutting the students at various other points in parents' educational attainment, and we tried various other cuts of race and ethnicity. We saw little evidence of heterogeneous effects.

Specifically, we tested whether there were statistically significant differences in the effects that depended upon whether the student had an above versus below median SAT/ACT score (among the target students), whether the student was female versus male, whether the student had parents whose maximum educational attainment was greater or less than a baccalaureate degree, and whether the student was white, Asian, or from an underrepresented minority group (black, Latino, Hispanic, Native American, Native Pacific Islander).[35] While we lack the statistical power to cut the sample into finer slices, we have sufficient statistical power to find heterogeneous effects for three
groups of fairly equal size. However, not only did we find no statistically significant evidence of heterogeneity in the effects along the aforementioned dimensions, we did not even see patterns in the point estimates that might suggest systematic heterogeneity. This suggests that the ECO-C Intervention has fairly similar effects on students across the target group.

6 Which Parts of the ECO-C Intervention Matter?

Students in the Pilot and 2010-11 cohorts did not experience the ECO-C Intervention. Instead, they could experience one of the four interventions that the ECO-C Intervention largely subsumes: Application Guidance, the Net Cost intervention, the Fee Waiver intervention, or the Parent Intervention. As mentioned above, the ECO-C Intervention was created as a combination of the first three of these interventions because we observed a few phenomena in the Pilot and 2010-11 cohorts. First, based on our telephone survey, we observed that Fee Waivers were associated with a substantial increase in the probability that a student would recall having seen the ECO materials--that is, be "treated" in our generous terminology. Second, a large number of informal contacts with parents and students suggested that the Fee Waivers raised the credibility of the information in the materials. Third, we observed that Fee Waivers tended to have disproportionate effects on application behavior, relative to enrollment behavior. In contrast, the Application Guidance and Net Cost interventions tended to have disproportionate effects on enrollment behavior, relative to application behavior. As a result, we speculated that the Fee Waivers induced students to apply to more institutions but that students did not consistently submit their induced, marginal applications to colleges that were a good match for them.
We speculated that the Application Guidance and Net Cost interventions gave students more information about how to find institutions that suited them and gave them a better sense of how colleges differed in terms of graduation rates, resources, and aid. However, these two purely informational interventions were less likely to be taken up by students. We hoped that the ECO-C Intervention would have the take-up of the Fee Waiver intervention combined with the greater informativeness of the Application Guidance and Net Cost interventions. In this section, we investigate whether the ECO-C Intervention was, in fact, capable of inducing
students to recognize a larger array of postsecondary options and to recognize a better-matched array of postsecondary options (that is, an array from which they would rather choose even if the number of institutions in the array were unchanged).[36]

For this investigation, we rely on evidence from the 2011-12 cohort, which was randomized into the control group or into the ECO-C, Application Guidance, Net Cost, Fee Waiver, or Parent intervention. Before presenting the evidence, however, we need to explain one circumstance that changed between the Pilot and 2010-11 cohorts (for whom we observed the patterns described above) and the 2011-12 cohort (on whom we will conduct the investigation). Net cost information was difficult for the Pilot and 2010-11 cohorts to obtain on their own--that is, if they did not receive the Net Cost intervention. Although a small number of institutions had readily accessible net cost calculators posted on their websites in the years when the Pilot and 2010-11 cohorts applied to college, the vast majority of institutions did not post such calculators. In contrast, the 2011-12 cohort was the first to experience very widespread availability of net cost calculators, owing to the Higher Education Opportunity Act of 2008, which mandated that such calculators be posted on each college's website and linked to the U.S. Department of Education's College Navigator by October 29, 2011.

We expected that this legal change would make all of the interventions except the Net Cost intervention more effective and would make the Net Cost intervention less effective. This is, in fact, what occurred. The Net Cost intervention went from having effects that were routinely substantially larger than those of Application Guidance to having effects that were routinely smaller.[37] To understand why this occurred, consider the experience of a student assigned to Application Guidance before and after net cost calculators were widely posted.
The intervention would advise a student on how to find colleges that counselors would consider to be a match. It would inform a student about graduation rates and application requirements for various colleges. All this might help a student choose a portfolio of colleges that were better suited to him academically. However, before the net cost calculators were posted, the student might still have substantial difficulty learning which of the colleges for which he was academically suited were likely to offer him sufficient aid so that he could attend. In contrast, once the net cost calculators were posted, a student who had found his way to an academically suitable college using the College Navigator could easily determine the net cost he was likely to face with a link on the same page. Now consider the experience of a student assigned to the Net Cost intervention, which--recall--did not give students information on how to find colleges that were academic matches. Rather, it helped them find colleges that were attractive in terms of aid. Once the student had found these colleges, he still had to figure out which of them was academically suited to him. This task was made no easier by the posting of the net cost calculators. Indeed, one might even argue that, owing to the media's and U.S. Department of Education's focus on the net cost calculators (which was natural because they were new), students were distracted from learning which colleges suited them academically. Such distraction might be a short-term phenomenon since net cost calculators will not have so much news value in future years.

With this important caveat about the changing environment for the Net Cost and other interventions for the 2011-12 cohort, let us examine how the four "partial" interventions compared to the ECO-C Intervention. Table 9 presents this comparison. Each row shows an outcome that was affected by the ECO-C Intervention. Each column shows how an intervention affected the same outcome relative to the ECO-C Intervention. A number less (greater) than one indicates that the intervention had a smaller (greater) effect than the ECO-C Intervention.

[36] We did not fold the Parent Intervention into the ECO-C Intervention because our numerous informal contacts with families suggested that it was not possible to target any of the interventions to a specific family member. We concluded that adding the Parent Intervention to the ECO-C Intervention would cause families to read similar information in two different forms, an experience that might prove frustrating.

[37] The Net Cost intervention's effects on enrollment outcomes were, on average, twice those of Application Guidance for the Pilot and 2010-11 cohorts. As shown below, this is not the case for the 2011-12 cohort.
For instance, a 0.7 for Application Guidance would indicate that the Application Guidance intervention had 0.7 of the effect on the same outcome that the ECO-C Intervention had. A 1.2 would indicate that the Application Guidance intervention had 1.2 times the effect on the same outcome that the ECO-C Intervention had. The asterisks in the table indicate that an intervention had an effect that was statistically significantly different from the randomized control group.[38]

The first thing to observe about Table 9 is that the vast majority of numbers in it are less than 1, suggesting that the ECO-C Intervention tends to have larger effects than any one of its parts. The second thing to observe is that the Fee Waiver intervention tends to have effects that are close in size to those of the ECO-C Intervention when the outcome is an application behavior. Its effects are, however, less commensurate in size when the outcome is an enrollment outcome. In contrast, the Application Guidance intervention tends to have effects that are close in size to those of the ECO-C Intervention when the outcome is an enrollment outcome. Its effects are less commensurate in size when the outcome is an application behavior. Summing up, we find support for both of our hypotheses: the ECO-C Intervention is more effective than any of its parts, and it appears to combine the strengths of the Fee Waiver intervention with the strengths of the Application Guidance and Net Cost interventions.

[38] That is, the asterisks do not indicate that the intervention had an effect that was statistically significantly different from that of the ECO-C Intervention or another intervention.

7 The Effects of the ECO-C Intervention on Freshman Year Grades and Persistence through the Middle of the Sophomore Year

The interventions induce students to attend colleges where their peers are higher-achieving but also more like them in terms of incoming test scores and high school grade point averages. Thus, one can form a variety of hypotheses about how the interventions will affect students' grades and persistence in college. On the one hand, if we hypothesize that students study more if they are surrounded by peers who also study and if they take courses more attuned to their level of incoming achievement, then the interventions may improve students' learning. On the other hand, colleges tend to grade students "on a curve" and promote them according to their "curved" grades. Thus, the interventions will systematically put downward pressure on a student's grades and (therefore) persistence simply because he is more likely to be attending an institution where other students tend to be high-achieving too.
Note that this "on the one hand" and "on the other" is not symmetric. In the first case, the student is learning more. In the second case, the student could still be learning more but might earn lower grades nonetheless. Until our target students have outcomes such as earnings, which are on a fairly absolute standard, we can only compare students' in-college outcomes, which are measured in relative terms and are therefore biased in favor of students who attend less selective colleges.

Students in the 2011-12 cohort are still freshmen for whom we do not yet have grades or measures of persistence. We can, however, examine students in the 2010-11 cohort who were affected by the interventions. We could present the results as reduced-form effects--for instance, the effect of the intention-to-treat for a specific intervention on the student's freshman grades. In our experience, though, readers find such effects hard to interpret because readers are required to remember each intervention's effects on enrollment outcomes and then multiply through in order to obtain useful and comparable magnitudes. Therefore, we present the effects on grades/persistence of being induced, by an intervention, to attend a more selective college. These results are from an instrumental variables estimation where our measure of selectivity is the most continuous one available to us: the median SAT score of the students at the college where the student enrolls.[39]

Table 10 shows, first, that the indicators for the interventions have a statistically significant effect on the selectivity of the college attended by students in the 2010-11 cohort. That is, the first stage of the instrumental variables procedure has power: the F-statistic is 3.05. Second, if a student was induced by the interventions to enroll in a more selective college, his grades and persistence into sophomore year were not statistically significantly affected. (Both point estimates are positive and have p-values around 0.3.) This evidence suggests that students induced to attend more selective colleges are earning similar grades and persisting with similar probabilities as they would if they had attended less selective colleges. Because grading and promotion standards are higher in more selective colleges, this evidence hints that the students induced to attend more selective colleges are learning more.
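For readers who want to see the mechanics, the instrumental-variables logic described above can be sketched in a few lines of code. This is a minimal two-stage least squares (2SLS) illustration on simulated data: the variable names, effect sizes, and noise levels are all hypothetical, and it is not the authors' estimation code.

```python
import numpy as np

# Hypothetical simulated data (illustration only, not the study's data):
# z: randomized assignment to an intervention (the instrument)
# x: median SAT score at the college the student attends (endogenous regressor)
# y: freshman GPA (outcome), confounded by unobserved ability
rng = np.random.default_rng(0)
n = 20000
z = rng.integers(0, 2, size=n).astype(float)
ability = rng.normal(0.0, 1.0, size=n)          # unobserved confounder
x = 1200 + 40 * z + 30 * ability + rng.normal(0, 50, size=n)
y = 2.0 + 0.001 * x + 0.3 * ability + rng.normal(0, 0.3, size=n)

def two_stage_least_squares(y, x, z):
    """First stage: regress x on z; second stage: regress y on fitted x."""
    Z = np.column_stack([np.ones_like(z), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # first-stage fitted values
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]     # coefficient on selectivity

beta_iv = two_stage_least_squares(y, x, z)
print(f"2SLS effect of college selectivity on freshman GPA: {beta_iv:.4f}")
```

Because assignment is randomized, the second-stage coefficient recovers the effect of attending a more selective college for students whose enrollment was shifted by the intervention, which is the quantity reported in Table 10 (there with intervention indicators as instruments and grades/persistence as outcomes).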
We will have more evidence on this question in future years as the 2011-12 cohort ages.

[39] Our alternative measures of selectivity produce results that are similar in spirit but harder to interpret because they are categorical.

8 Cost-Benefit Analyses for the ECO-C Intervention

In this section, we compare the costs and benefits of the ECO-C Intervention. The costs of the intervention are simple: approximately $6 per student whom we intended to treat. We can scale this
up to generate a cost of actually treating a student. The upper bound on this cost of treatment is $15 ($6 divided by 0.4, the probability of treatment). We believe, however, that a highly reputable organization like the College Board or ACT could achieve a cost of treatment of approximately $6 simply because mail from such an organization would likely be opened and at least cursorily reviewed (our definition of treatment). Such an organization would presumably also have lower mailing and in-house printing costs than our small experimental organization had. Some useful statistics are the following. Per 10 dollars of cost, the ECO-C Intervention caused target students to apply to 4 more colleges and to be 50.5 percentage points more likely to apply to a peer college. Per 10 dollars of cost, the ECO-C Intervention caused target students to enroll in colleges where graduation rates were 13 percentage points higher, where instructional spending was $5906 greater, where the median student had college assessment scores that were 65 points higher, and that were 22.2 percentage points more likely to be peer colleges.

a Comparing ECO Benefit-to-Cost to that of In-Person Counseling Interventions

The most comparable alternative intervention is a counseling intervention also directed to high-achieving, low-income students: Avery (2010). Students in the target population were asked whether they were willing to experience expert college counseling. If they agreed that they were, they were randomly selected to a control status or to receive ten hours of individualized college advising from an experienced counselor normally employed in full-time college counseling at a high school with a critical mass of high achievers. The cost of the intervention was approximately $600 per student. This counseling intervention did not have statistically significant effects on most college application or enrollment outcomes, but this was partly due to the experiment's low statistical power.
If we take the most optimistic point estimates from the study, however, we find that, per 10 dollars of cost, the counseling intervention caused high-achieving, low-income students to apply to 0.006 more colleges and to be 0.18 percent (18 one-hundredths of 1 percent) more likely to enroll in highly selective colleges (Barron's "most competitive" category). These per-dollar effects are clearly very small compared to those of the ECO-C Intervention.[40]

Most other related studies have evaluated counseling interventions targeted to much less high-achieving students. Thus, they typically examine whether a student attends a postsecondary institution at all as the main outcome. Since our target students nearly always attend some postsecondary institution even with no intervention, comparing our benefit-cost ratio to their benefit-cost ratio is not possible. However, we can nevertheless learn about the cost of an in-person counseling intervention from these studies. Carroll and Sacerdote (2012) study a counseling intervention randomly assigned to about 500 New Hampshire high school seniors. The intervention would probably cost between $600 and $900 per student to replicate at scale because it paid each student $100 for participating, paid all of a student's application and College Board/ACT assessment fees, and paid Dartmouth College students to mentor each student through the application process.[41] Cunha and Miller (2009) study Texas' GO Centers, which are centers placed in high schools and designed to encourage college attendance. Each GO Center costs about $80,000, is staffed, has computer facilities and information to make applying online easy, provides a marketing campaign that emphasizes peer-to-peer persuasion about the importance of college, and links students with local colleges and universities. The number of students in each school averages 197, no more than half of whom use the GO Center. Thus, the cost per student is about $812.[42]

[40] The most direct comparison we can make is based on the probability of applying to a Barron's Most Competitive institution. The treatment-of-the-treated effect of the ECO-C intervention is 30.8 percentage points, or 51.2 percentage points per $10 of cost. Per dollar of cost, the ECO-C intervention is thus 285 times more "productive" (on this measure) than the counseling intervention reported by Avery (2010).

[41] Carroll and Sacerdote report that they paid 20 Dartmouth students to work full-time for about 2.5 months. If we figure that Dartmouth students' full-time earnings would be at least $40,000 per year (which is much less per hour than the counselors were paid in the Avery (2010) interventions), then the cost of the mentoring was $333 per student. Application and assessment fees would typically total between $100 and $400. There were also costs associated with arranging the mentoring times, matching mentors with the students, and the like.

[42] Among those that have been studied rigorously, the least expensive intervention with some in-person elements is investigated by Castleman and Page (2012). This summer-after-senior-year intervention used email, text-messaging, and Facebook along with in-person counseling and cost only about $200 per student. However, it is not comparable to the ECO-C Intervention or the in-person interventions described above because it was directed to students who were attending non-selective colleges (mainly community colleges). These colleges have only a small fraction of the application requirements associated with selective four-year colleges: test scores, letters of reference, mid-senior-year transcripts, essays, applications submitted by specific mid-senior-year deadlines. Given that the summer-after-senior-year intervention was not attempting to advise students on any of these dimensions, it is not (continued...)
In short, it seems reasonable to conclude that alternative, in-person counseling interventions typically cost upwards of $600. Thus, in order to achieve the same benefit-to-cost ratio as the ECO-C Intervention, such interventions would need to have effects that are at least 100 times as large as those of the ECO-C Intervention. It is noteworthy that each of the in-person counseling interventions described above is reported by its evaluators to be a much more cost-efficient way of changing students' college-going behavior than reducing the cost of college through tuition reductions, grants, and other forms of aid. That is, in-person counseling has high benefits to costs relative to many alternative policies that are intended to increase college-going.

Some other low-cost interventions have been shown to change college application and/or enrollment behavior. These tend to have something of the same character as the ECO interventions insofar as they are not only primarily informational but are driven by large-scale databases (as opposed to information conveyed by individual human beings). We especially note Bettinger, Long, Oreopoulos, and Sanbonmatsu's (2009) study of how H&R Block's automatically filling out FAFSA forms affected college-going; Pallais' (2009) study of how college applications rose when ACT made it free to send scores to additional colleges; and Bulman's (2012) study of how a high school's college-going rises when it allows the SAT to be administered on school premises on at least one Saturday morning. All of the above interventions are inexpensive. However, because all of them target much lower-achieving students than the ECO-C Intervention, their main outcomes are whether a student applies to or attends college at all (or applies to or attends a four-year college). This makes it difficult to compare their benefit-to-cost ratios to those of the ECO-C Intervention.
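The cost-effectiveness comparisons in this section reduce to simple arithmetic on figures quoted above: the $6 cost per student intended to treat, the 0.4 take-up probability, the 30.8 percentage-point treatment-on-the-treated effect, and the counseling intervention's roughly $600 cost and 0.18 percentage-point effect per $10. A quick sketch, using only numbers reported in the text:

```python
# Figures reported in the text (ECO-C vs. the Avery (2010) counseling intervention).
eco_cost_per_intended = 6.00    # dollars per student intended to treat
takeup_probability = 0.4        # probability a mailed student is actually treated
eco_tot_effect_pp = 30.8        # treatment-on-the-treated effect, percentage points

# Upper bound on the cost of actually treating a student.
cost_per_treated = eco_cost_per_intended / takeup_probability
print(f"Upper-bound cost per treated student: ${cost_per_treated:.0f}")  # $15

# ECO-C effect per $10 of cost (the text reports 51.2 pp; the small
# difference reflects rounding of the $6 cost figure).
eco_effect_per_10 = eco_tot_effect_pp / eco_cost_per_intended * 10
print(f"ECO-C effect per $10 of cost: {eco_effect_per_10:.1f} pp")

# Counseling comparison: ~0.18 pp per $10 of cost.
counseling_effect_per_10 = 0.18
print(f"Productivity ratio: {eco_effect_per_10 / counseling_effect_per_10:.0f}x")

# A ~$600 intervention must be ~100x as effective as a $6 one to match
# its benefit-to-cost ratio.
print(f"Required effect multiple: {600 / eco_cost_per_intended:.0f}x")
```

The printed ratio reproduces the roughly 285-fold productivity difference and the 100-fold effect requirement discussed above.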
b The ECO-C Intervention's Long-Run Benefits Relative to its Costs

The ECO-C Intervention causes students to attend colleges that have higher graduation rates, have richer resources devoted to instruction and other student-related activities, and are more selective. Whether such colleges improve students' long-run outcomes, especially their earnings, has

[42] (...continued) surprising that it cost less than the interventions studied by Avery, Carroll and Sacerdote, and Cunha and Miller.