Berkeley Law
From the SelectedWorks of Marjorie M. Shultz
2012

Admission to Law School: New Measures
Marjorie M. Shultz, Berkeley Law
Sheldon Zedeck

Available at: http://works.bepress.com/marjorie_shultz/12/
This article was downloaded by: [University of California, Berkeley]
On: 09 July 2013, At: 14:13
Publisher: Routledge
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Educational Psychologist
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/hedp20

Admission to Law School: New Measures
Marjorie M. Shultz (School of Law, University of California, Berkeley) & Sheldon Zedeck (Department of Psychology, University of California, Berkeley)
Published online: 24 Jan 2012.

To cite this article: Marjorie M. Shultz & Sheldon Zedeck (2012) Admission to Law School: New Measures, Educational Psychologist, 47:1, 51-65, DOI: 10.1080/00461520.2011.610679
To link to this article: http://dx.doi.org/10.1080/00461520.2011.610679
EDUCATIONAL PSYCHOLOGIST, 47(1), 51–65, 2012
Copyright © Division 15, American Psychological Association
ISSN: 0046-1520 print / 1532-6985 online
DOI: 10.1080/00461520.2011.610679
Admission to Law School: New Measures
Marjorie M. Shultz
School of Law
University of California, Berkeley
Sheldon Zedeck
Department of Psychology
University of California, Berkeley
Standardized tests have become increasingly controversial in recent years in high-stakes admission decisions. Their role in operationalizing definitions of merit and qualification is especially contested, and in law schools this challenge has become particularly intense. Law schools have relied on the Law School Admission Test (LSAT) and an Index Score (which includes grade point average [GPA]) since the 1940s. The LSAT measures analytic and logical reasoning and reading. Research has focused on the validity of the LSAT as a predictor of 1st-year GPA in law school, with almost no research on predicting lawyering effectiveness. This article compares the potential of the LSAT with that of noncognitive (e.g., personality, situational judgment, and biographical information) predictors of lawyering effectiveness. Theoretical links between 26 lawyering effectiveness factors and potential predictors are discussed and evaluated, as are implications for broadening the criterion space, diversity in admissions, and the practice of law.
Standardized tests have been increasingly controversial over recent years in all levels of education. Their role in operationalizing definitions of merit and qualification in high-stakes admission decisions is especially contested. All institutions of higher education struggle to find ways to achieve both equity and excellence in choosing students, but in law schools this challenge has become particularly intense.
Law schools have relied on the Law School Admission Test (LSAT) since the 1940s, when Ivy League schools initiated its development to help them appraise candidates graduating from lesser known colleges (LaPiana, 2001). The LSAT is the property of the Law School Admission Council (LSAC)—a member organization of law schools accredited by the American Bar Association, currently numbering 200 in the United States and 16 in Canada. In addition to the test, the LSAC provides an Index Score for each applicant that combines and weights an applicant's LSAT score with undergraduate grade point average (UGPA). Individual schools decide each year how to weight LSAT score and UGPA for the coming admission cycle based on LSAC's data on the school's prior classes plus the school's own policy choices. The LSAC implements the school's choices, calculating Index Scores for each year's applicants to that school.

Correspondence should be addressed to Sheldon Zedeck, Department of Psychology, University of California, MC 1650, Berkeley, CA 94720. E-mail: zedeck@socrates.berkeley.edu
The LSAT, a cognitive measure, provides some but limited value: it predicts grades, one focal dimension of law school performance, but not other forms of achievement in either academic or professional contexts. In addition, as discussed next, the LSAT has an adverse impact against underrepresented minority applicants. As a consequence, research is needed to identify lawyering effectiveness criteria for all applicants and to find valid ways to attain greater fairness in selection. Such outcomes call for investigation of other types of admission tests, such as noncognitive predictors, as well as exploration of a broader range of performance criteria than tests, whether cognitive or noncognitive, might predict.
CURRENT LAW ADMISSION PRACTICE:
EVIDENCE AND CRITIQUE
Evidence
The validity of the LSAT and UGPA as predictors of the 1st-year law school grade point average (FYGPA) has consistent statistical support (Anthony, Harris, & Pashley, 1999; Dalessandro, Stilwell, & Reese, 2005; Evans, 1984; Linn &
Hastings, 1983; Powers, 1982; Schrader, 1977; Wightman, 1993). A relatively recent analysis (2001–2003 data) shows that the combination of LSAT and UGPA correlates approximately .47 with FYGPA in law school (Dalessandro et al., 2005), which explains approximately 22% of the variance, leaving 78% unexplained. By itself, the LSAT correlates .35 with FYGPA, whereas UGPA alone correlates approximately .28. Finally, results cross-validate, indicating that the regression equation for the combination of predictors is a useful model for predicting FYGPA for law school applicants (Dalessandro et al., 2005; Stilwell, Dalessandro, & Reese, 2003).
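The variance-explained arithmetic behind these figures is simply the squared correlation. A minimal sketch, using only the correlations reported by Dalessandro et al. (2005); everything else (function name, formatting) is our own illustration:

```python
# Variance-explained arithmetic for the validity coefficients cited above.
# The correlations come from Dalessandro et al. (2005); this computation
# is an illustrative sketch, not the LSAC's actual analysis.

def variance_explained(r: float) -> float:
    """Squared correlation: share of criterion (FYGPA) variance accounted for."""
    return r ** 2

for label, r in [("LSAT + UGPA", 0.47), ("LSAT alone", 0.35), ("UGPA alone", 0.28)]:
    r2 = variance_explained(r)
    print(f"{label}: r = {r:.2f}, explains {r2:.0%} of FYGPA variance, "
          f"leaving {1 - r2:.0%} unexplained")
```

For r = .47 this prints the 22%/78% split quoted in the text; note how modest the single-predictor shares are (about 12% for the LSAT alone, 8% for UGPA alone).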
Correlations between LSAT and FYGPA are unsurprising, given the overlap in what both purport to and actually do measure. The LSAT was designed by asking teachers of 1st-year law courses to identify which skills they reward with high grades. The LSAT is a test that measures analytic and logical reasoning, along with reading. Measurement of performance in law school, especially in the 1st year, is similarly narrow. Exams typically require students to read fact patterns, identify and analyze legal issues, assemble evidence and arguments, and sometimes assess implications—essentially the same abilities measured by the LSAT. Also, both the LSAT and 1st-year exams place a significant premium on speededness (Henderson, 2004).
Norton, Suto, and Reese (2006) examined the differential validity of the LSAT and UGPA combination for different ethnic groups (African American, Asian American, Latino, and White law students) in the 2002, 2003, and 2004 entering law school classes. Using data from 183 law schools, with FYGPA as the criterion, they showed that the LSAT is not differentially valid for the groups studied. Furthermore, the differential validity results are similar to those reported for cognitive ability tests used in employment settings (cf. Schmidt & Hunter, 1981, 1998). That is, when the regression equation for a combined group (minority and nonminority) is used to make predictions of academic success, the equation tends, if anything, to overpredict minority students' performance. These findings replicate those of earlier studies (Anthony & Liu, 2000; Stilwell & Pashley, 2003; Wightman & Muller, 1990). Some commentators, however, raise probing questions about these judgments (Kidder, 2000, 2001). Specifically, Kidder reported that overreliance on the LSAT not only yields adverse consequences for students of color (Kidder, 2000) but also ignores problems that undermine LSAT claims to fair testing—stereotype threat, criterion narrowness of LSAT predictive validity studies, overinterpretation of the academic significance of differences in LSAT scores between race subgroups, and the potential racial bias in the environment of legal education (Kidder, 2001).
The LSAT has been the dominant force in law school admission for decades, but recent changes in the environment of legal education have intensified its impact. In 1950, 2 years after the first use of the LSAT, 6,750 tests were administered; in 1955, there were 11,750. A leading judge described the situation in the 1950s: "There were two requirements for admission . . . first, you had to have a college degree; and, second, you had to be breathing. And either requirement might be waived" (Raushenbush, 1986, p. 2). By fall 2009, however, 151,400 LSAT tests were administered; 58,400 of 86,600 applicants were admitted to some American Bar Association accredited law school (LSAC Volume Summary, 2010). More applicants means greater selectivity and more competition, especially at highly selective schools. Law school attended, for the most part, controls access to law's highly stratified array of jobs. For many aspiring lawyers, getting into a desired law school can be tougher than passing the bar, and for many purposes, admission to school is now the tightest point in entry to the legal profession. Because growing numbers covet a legal education as a pipeline to professional status and salaries, public scrutiny of admission policies is intense, and the constant threat of litigation over the "fairness" of admission policies drives decision makers toward ever more "defensible" strategies.
In a context where for more than half a century the LSAT has been the only empirically valid admission test, the pull to lean on it heavily has been nearly irresistible. The attraction is further amplified because relying on simple numeric indicators reduces the faculty and administrative time needed to make decisions. But the danger of succumbing to a fallacy of misplaced precision is substantial. With more applicants, more scores cluster at any given point along the range. Choices between individual scores tend to rest on smaller actual differences. Sometimes decision makers distinguish between scores falling within a test's statistical error of measurement (Cascio, Outtz, Zedeck, & Goldstein, 1991). Recognizing the narrowness of its intent and validating research, the LSAC regularly admonishes schools not to overrely on LSAT scores. However, some evidence suggests that even when schools adopt policies to prevent that overemphasis, scores exert greater influence than those policies intend (Kidder, 2000, 2001).1
U.S. News's profitable gambit of ranking educational institutions has further fueled excessive preoccupation with LSAT scores. Under the banner of consumer protection, "rankings fever" has become an obsession (Espeland & Sauder, 2009; Henderson & Morriss, 2006). No matter where a school falls in the hierarchy, higher rankings increase prestige, draw students, loosen alumni and donor wallets, give faculty ego points, and raise leverage within the university. The quickest, most direct route a school can take to raise its competitive rank is to raise the LSAT scores of its matriculating students.2
1In a study of University of California law school admission statistics,
Kidder (2000) found that in 1998, holding undergraduate institution and
major constant, for applicants who had GPAs of 3.75 or more, a 5-point
difference in LSAT score cut the chance of admission from 89% to 44% at
Berkeley Law School; for the same year at UCLA, the chance of admission
dropped from 66% to 10%.
2Most factors in the ranking depend on reputation as evaluated by various relevant audiences, or matters related to institutional wealth. One factor is based on the LSAT. Until recently, the rankings used LSAT scores of entering students at the 25th and 75th percentile of a class; currently the ranking uses median LSAT scores for the class.
Ergo, LSAT scores have become even more decisive in ad-
missions.
Critique
In recent decades, commentators have criticized legal education for overemphasis on academic competencies and inadequate attention to professional preparation (A.B.A., 1992; Edwards, 1992; Sullivan, Colby, Wegner, Bond, & Shulman, 2007). The same issues are echoed in commentary on admission practices. Scholars and commentators have urged that "merit" for purposes of admission to law school is too narrowly defined (Haddon & Post, 2006; Kidder, 2001; Sturm & Guinier, 1996). Assessing only three competencies (logic, analysis, and reading) and validating itself only through correlation with 1st-year grades, the LSAT makes no attempt to account for professional competence, or for the range of other factors that contribute to grades or other indices of law school performance. Linda Wightman (2000), former vice-president of operations, testing, and research for LSAC, urged that new assessments beyond the LSAT should focus on diverse abilities and skills that are needed to perform in school, but no such assessment has emerged.
Law schools are professional schools; few graduates are destined for academic jobs. The cost of 3 years of law school is escalating sharply, but law graduates are not adequately prepared for actual legal work when they finish school. Judges decry the poor preparation of lawyers practicing before them (Edwards, 1992). Clients increasingly object to bills that include a sizable premium for training new lawyers who are not yet productive enough to support their salaries (Wegner, 2010). Even students who do not want to practice in large firms routinely take those jobs because they believe that only relatively wealthy employers can afford to provide the training necessary to practice. In sum, admission criteria that focus primarily on academic potential please faculties oriented to university priorities, but they do not necessarily match students' goals or optimize the schools' mandate.
Critics also object to admissions practices they see as reinforcing racial and class privilege. Scores on cognitive tests, including the LSAT, correlate strongly with socioeconomic status and educational opportunity; both overlap with race (Haddon & Post, 2006; Kidder, 2000; Sturm & Guinier, 1996; Wightman, 1997). Due to differences in race and ethnic group test performance as measured by mean differences in test scores, heavy emphasis on LSAT scores in admission decisions reduces the presence of African Americans and Chicano/Latinos in law school and the profession, as well as diminishing the prospects of those from nonelite families (Kidder, 2000, 2001, 2003; Wightman, 1997). This adverse impact occurs because of race group differences on LSAT
scores, even though the LSAT is considered "test fair" on the basis of statistical tests that find no differences in regression lines for minority and majority members (i.e., the degree to which LSAT predicts FYGPA for different racial groups is equivalent; American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999) when the outcome being predicted is also cognitive (FYGPA). In brief, the regression line analysis shows that there is overprediction for the minority group, which argues against the test being "unfair."
Rising numbers of applicants make competition more intense, with particularly negative effects on minority aspirants. Decisions to admit turn on smaller differences in applicants' test scores. Black and Mexican American applicants today have stronger academic credentials than in the past; minority applicants also have stronger academic indicators than most of those (of all races) admitted to law school decades ago (Kidder, 2003; LaPiana, 2001; Raushenbush, 1986). But credential inflation is overcoming growth in the number and quality of minority applications, resulting in a decline in the percentage of underrepresented minority students enrolled in law school today (Lewin, 2010; Society of American Law Teachers & Johnson, 2010).
The LSAT itself has been vetted for bias, but overreliance on applicant LSAT scores may create what Jencks (1998) described as "selection system bias." Jencks defined selection system bias as disproportionate weighting of factors—each highly important to the goal sought—where racial group performance differs on one factor but not the other; weighting the first factor significantly more than the second constitutes selection system bias. Law school admission provides an illuminating example. In admission, cognitive, test-taking ability is one factor in selection; another is or should be competencies vital to effective professional performance. Research shows that White students typically outperform African American, Hispanic, and other underrepresented minority groups in cognitive, school-skill type tests (Hough, Oswald, & Ployhart, 2001). By contrast, research suggests that racial groups perform similarly on noncognitive tests that have been demonstrated to be related to job performance (Guion, 1987; Hough et al., 2001; Hunter & Hunter, 1984; Schmitt, Rogers, Chan, Sheppard, & Jennings, 1997). Emphasizing academic tests (on which Whites excel) without including tests that predict lawyering performance (on which race does not significantly affect performance) could be characterized as selection system bias that harms underrepresented minority applicants (Jencks, 1998). Such bias is especially problematic where persuasive arguments for inclusion of predictors of professional performance arise from institutional role, from professional critique, and from a need for greater fairness in populating the profession of the future.
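Jencks's definition can be made concrete with a toy simulation. All numbers below (the 1-standard-deviation gap on the cognitive factor only, the selection cutoff, the candidate weights) are hypothetical illustrations of the mechanism, not data from the article:

```python
# Toy illustration of Jencks's "selection system bias" (hypothetical numbers).
# Two equally job-relevant factors: a cognitive score with a group mean
# difference, and a professional-competency score with none. Weighting the
# cognitive factor heavily depresses the minority selection rate relative to
# the majority rate more than equal weighting does.
import random

random.seed(0)

def applicant_pool(n, cognitive_shift):
    """Simulate applicants as (cognitive, competency) standard scores."""
    return [(random.gauss(cognitive_shift, 1), random.gauss(0, 1))
            for _ in range(n)]

def selection_rate(pool, w_cog, cutoff=1.0):
    """Share of the pool whose weighted composite clears the cutoff."""
    passed = [c * w_cog + p * (1 - w_cog) >= cutoff for c, p in pool]
    return sum(passed) / len(passed)

majority = applicant_pool(20_000, cognitive_shift=0.0)
minority = applicant_pool(20_000, cognitive_shift=-1.0)  # ~1 SD gap, cognitive only

for w in (0.9, 0.5):
    maj, mino = selection_rate(majority, w), selection_rate(minority, w)
    print(f"cognitive weight {w}: majority {maj:.1%}, minority {mino:.1%}, "
          f"impact ratio {mino / maj:.2f}")
```

The adverse-impact ratio (minority rate divided by majority rate) improves as the weight moves toward the factor on which the groups do not differ, which is exactly the point of the argument in the text.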
The major limits and downsides of current admission practices, as well as the logic of law schools' role as gatekeepers for the profession, urge that admissions research move
beyond attempts to predict grades in law school and that other criteria of effectiveness and predictors of those criteria be investigated. We believe that legal education needs a richer set of tools that not only can reliably predict academic performance but also can identify and assess competencies predictive of professional effectiveness. That such predictors could produce greater diversity through race neutral, merit-based assessment is a plus. Research that has examined the validity of measures other than cognitive ability provides the opportunity to compare prediction of performance both in law school and in lawyering (Shultz & Zedeck, 2011).
IDENTIFYING AND MEASURING FACTORS IN
LAWYERING EFFECTIVENESS
Before we could work on predictors of lawyering effectiveness, we needed to identify the factors that are important to effective lawyering. To identify such factors, we conducted hundreds of individual and then group interviews with Berkeley Law alumni, faculty, students, judges, and some clients, asking questions like, "If you were looking for a lawyer for an important matter for yourself, what qualities would you look for? What kind of lawyer do you want to teach or be?" Twenty-six factors emerged from an iterative interview and discussion process; discussion among participants from different types of settings, substantive fields, and levels of lawyering experience reflected consensus that the set of 26 factors was comprehensive. Admittedly, the results came from alumni of only one law school, and thus perhaps are biased. But although all participants were graduates of Boalt Law School, they worked in many different types of practice and ranged in experience from 1 to 30 years. The identification of effectiveness factors (and the generation of behavioral examples of outcomes used in performance rating scales, discussed next) represents one cohort of law school alumni, but they embody experiences and expectations in hundreds of different settings and firms.
We also asked participants to specify examples of more and less effective behaviors for each factor ("What behavior demonstrates to you that a particular lawyer has or lacks effectiveness?"); this process generated approximately 800 examples. We then asked alumni to rate a subset of the suggested behavioral examples on a 1-to-5 scale in terms of how effective they judged each stated behavior to be as an illustration of a particular effectiveness factor. Many organizations use this procedure (Behaviorally Anchored Rating Scales [BARS]; Smith & Kendall, 1963) to develop performance appraisal measures. More than 2,000 alumni responded. We calculated means and standard deviations for the examples rated and then used only those examples about which there was considerable agreement as "anchors" for various levels along the performance rating scales. The products of this research stage were (a) 26 factors identified as important to effective lawyering, (b) 715 behavioral examples of varied levels of performance on those factors, and (c) BARS for each of the 26 effectiveness factors that provide a mechanism for assessing the effectiveness of any given lawyer.

TABLE 1
Twenty-Six Effectiveness Factors With Eight Umbrella Categories

1: Intellectual & Cognitive
• Analysis and Reasoning
• Creativity/Innovation
• Problem Solving
• Practical Judgment
2: Research & Information Gathering
• Researching the Law
• Fact Finding
• Questioning and Interviewing
3: Communications
• Influencing and Advocating
• Writing
• Speaking
• Listening
4: Planning and Organizing
• Strategic Planning
• Organizing and Managing One's Own Work
• Organizing and Managing Others (Staff/Colleagues)
5: Conflict Resolution
• Negotiation Skills
• Able to See the World Through the Eyes of Others
6: Client & Business Relations – Entrepreneurship
• Networking and Business Development
• Providing Advice & Counsel & Building Relationships with Clients
7: Working with Others
• Developing Relationships within the Legal Profession
• Evaluation, Development, and Mentoring
8: Character
• Passion and Engagement
• Diligence
• Integrity/Honesty
• Stress Management
• Community Involvement and Service
• Self-Development
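The anchor-selection step just described (compute each example's mean and standard deviation across raters, then retain only high-agreement examples as anchors) can be sketched as follows. The agreement threshold (SD ≤ 0.75 on the 1-to-5 scale) and the example behaviors are our own illustrative assumptions, not figures from the study:

```python
# Sketch of BARS anchor selection: keep behavioral examples only when
# raters agree on the effectiveness level they illustrate (low SD).
# Threshold and example data are hypothetical.
from statistics import mean, stdev

def select_anchors(examples, max_sd=0.75):
    """Keep high-agreement examples; the anchor value is the mean rating."""
    anchors = []
    for text, ratings in examples:
        if stdev(ratings) <= max_sd:
            anchors.append((text, round(mean(ratings), 1)))
    return anchors

examples = [
    ("Identifies the controlling issue before drafting", [5, 5, 4, 5, 4]),
    ("Misses filing deadlines under routine workload", [1, 1, 2, 1, 1]),
    ("Quality of courtroom advocacy varies widely", [2, 4, 1, 5, 3]),  # low agreement
]
for text, level in select_anchors(examples):
    print(f"anchor at {level}: {text}")
```

The third example is dropped despite a well-defined mean because raters disagreed about what level of effectiveness it illustrates, which is precisely the "considerable agreement" filter described in the text.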
For convenience of discussion, however, we grouped the 26 effectiveness factors into eight conceptual clusters while recognizing that others might group them differently (see Table 1). A statistical factor analysis may also have identified a smaller or different subset, but for the purposes of the research, we maintained each of the 26 effectiveness factors as a distinct area of measurement. We adhered to the generated list of factors rather than a factorial solution for several reasons. First, we wanted to use the effectiveness factors as bases for evaluating performance. Maintaining the uniqueness and specificity of the effectiveness factors provided a context for the rater by which to evaluate lawyers; given that we had lawyers from different types of practice, we were convinced that maintaining specificity in context would yield the most reliable rater judgments. Had we factor analyzed the data and found a construct such as "communications," the task of rating on "communications" (which might subsume "advocacy," "negotiations," "speaking," etc.) would be difficult and different for the various practices.
Second, factor analysis is often undertaken for purposes of parsimony in theoretical explanations; we were interested in practical applications of the factors for rating purposes—the more specific and concrete the factor, the easier to rate. Third, retaining the different types of factors provides the opportunity to explore different patterns of correlations between the potential test measures and the factors. This allows for different decision rules: if the interest is in selecting those who are likely to be good at "influencing and advocating," as opposed to those who are good at "negotiation skills," separation allows for this investigation. Also, there is the opportunity for a law school to use the results to determine whether its curriculum should focus on one or more factors at the exclusion of others, particularly if the results show that a test score predicts good performance on one factor while it predicts poor performance on another factor (a result that we found in our research; Shultz & Zedeck, 2011).
TYPES OF PREDICTORS OF JOB
PERFORMANCE
Effective lawyering, like effectiveness in any career, draws upon many competencies (as identified in Table 1). It is the competency or criterion of performance that should drive what predictor is studied for use as an admissions instrument; that is, understanding of what concept defines or underlies performance should be the basis of the predictors or tests investigated. Accordingly, we used the eight umbrella categories in Table 1 to drive the search for predictors that measure constructs hypothesized to be pertinent to the competencies. We identified tests that are (a) "cognitive" and measure skills that are academic, involve reasoning, and require analysis; (b) measures of practical and judgment skills or "tacit knowledge" (practical intelligence and tacit knowledge reflect individuals' ability to find the best fit between themselves and the demands of the environment; knowledge relevant to real world problems and grounded in experience; knowledge based on emotions, experiences, insights, and intuition; Sternberg & Horvath, 1999); (c) measures of personality, style, interest, and motive that clarify how individuals approach and deal with people, situations, and data; and (d) character and background measures to identify how experience relates to performance in particular endeavors.
The LSAT represents the "cognitive" aspect of the predictor battery. As traditionally used in psychometrics, the category "cognitive" mainly encompasses academic and test-taking capability, especially verbal and numeric knowledge and reasoning. Overwhelming evidence shows that cognitive ability thus defined is a predictor of job performance (Sackett, Schmitt, Ellingson, & Kabin, 2001; Schmidt, 2002). The LSAT, UGPA, and Index Score assess (traditionally) cognitive skills. However, other types of assessments—traditionally labeled "noncognitive" dimensions like personality, interpersonal skills, and practical judgment—are also valid predictors of performance in employment; their use can improve selection of good employees. Some evidence suggests that validity can be increased in some jobs if appropriate additional predictors such as measures of social skills or personality traits are used in combination with cognitive ability measures (Guion, 1987; Hunter & Hunter, 1984; Schmidt & Hunter, 1998; Schmitt et al., 1997).
In addition, racial subgroup differences are smaller, or nonexistent, on noncognitive measures such as biodata and personality inventories. As already noted, a major concern with standardized cognitive tests such as the LSAT is the mean difference in performance between ethnic groups, particularly for African Americans. Generally, African Americans score about 1 standard deviation below Whites on measures of general cognitive ability, though this standardized mean score difference is reduced in high complexity jobs (Hough et al., 2001). Latinos also tend to score lower than Whites on these measures, whereas Asians tend to score slightly higher than Whites (Hough et al., 2001). Employment personnel research attempts to minimize disadvantage to members of racial, gender, or ethnic groups by combining valid noncognitive measures of performance with traditional cognitive ability tests in selection processes (Hunter & Hunter, 1984; Ones, Viswesvaran, & Schmidt, 1993; Schmitt et al., 1997).
RESEARCH PLAN
Use of a broader range of predictors and criteria could improve not only coherence between admission and professional effectiveness but also diversity. Our project (Shultz & Zedeck, 2011) therefore sought to determine to what degree various types of predictors could explain lawyer effectiveness factors, particularly those that are not well explained by LSAC-type cognitive measures; accordingly, the emphasis was on noncognitive predictors.
Having identified new and broader criteria (26 lawyer effectiveness factors) and developed methods to assess individual performance on those factors (26 BARS), we then selected, adapted, and/or wrote new noncognitive tests that we hypothesized would predict high performance on these factors. To test the validity of our hypotheses, we recruited law graduate volunteers for the study. We invited alumni (1973–2006) from two Bay Area law schools (N = 15,750) to participate in the research. Participant volunteers numbered 1,148 and were predominantly female (56.8%), Caucasian (68.5%) practicing attorneys, with the largest numbers in large firm (16.6%) or government (13.7%) practice. All areas of expertise were represented, with the most frequent practice focus being litigation/advocacy (29.1%). (A full description of the sample is presented in Shultz & Zedeck, 2011.) To ensure that no volunteer need spend more than 2 hr, the computer
randomly and evenly directed participants to one of 40 different combinations of subparts of the new test battery.

In the study consent form, we asked participants' permission to collect their LSAT scores, UGPA, and FYGPA. We also requested that each volunteer do a self-evaluation on a self-identified set of relevant effectiveness factors and identify two supervisors and two peers who could evaluate their recent lawyering performance. Identified evaluators were asked to assess as many effectiveness factors as their knowledge of the participant's work would allow. Raters used that subset of relevant BARS to rate (ranging from 1 to 5 in .5 increments) the participant's level of performance on each factor. We obtained more than 4,000 evaluations of participants' lawyering performance via self-, peer, and supervisor ratings. In analyzing performance rating results, we created an average peer score, an average supervisor score, and a self score for each participant. In this article, we focus on results for what we called "Other" ratings (average of peer and supervisor ratings); use of different rating combinations did not change the bottom-line conclusions of the research.3
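The "Other" rating construction described above (average the peer ratings, average the supervisor ratings, then average those two means, as supported by Barrett's 2008 analysis) can be sketched as follows; the specific ratings are illustrative only:

```python
# Sketch of the "Other" rating: the mean of the averaged peer ratings
# and the averaged supervisor ratings on one effectiveness factor.
# Rating values are hypothetical (BARS scale of 1 to 5 in .5 steps).
from statistics import mean

def other_rating(peer_ratings, supervisor_ratings):
    """Mean of (mean peer rating, mean supervisor rating) for one factor."""
    return mean([mean(peer_ratings), mean(supervisor_ratings)])

peers = [4.0, 4.5]        # two peer evaluators
supervisors = [3.5, 4.0]  # two supervisor evaluators
print(other_rating(peers, supervisors))  # mean(4.25, 3.75) = 4.0
```

Averaging within rater type first, rather than pooling all four raters, gives the peer and supervisor perspectives equal weight even when a participant has unequal numbers of each.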
Examining the resulting correlations between traditional
academic criteria, new test results, and ratings of professional
performance, we were able to determine which new tests were
predictive of effective lawyering and to compare their validity
with that of cognitive predictors like the LSAT, UGPA, and
Index Score.
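The validity comparisons throughout this section rest on bivariate Pearson correlations between predictor scores and rated factors. A minimal sketch, with invented scores rather than the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical LSAT scores vs. "Other" ratings on one factor (invented)
lsat = [151, 158, 162, 165, 170, 174]
writing = [3.0, 3.5, 3.0, 4.0, 4.5, 4.0]
print(round(pearson_r(lsat, writing), 2))
```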
Before describing the results, we note several limitations
in the data:
1. All participants had graduated from law school. The
sample population included graduates of only two law
schools. Sample sizes for ethnic groups, although representative
of the overall potential pool, were small,
thereby limiting the opportunity (low statistical power)
to identify significant race/ethnic differences.
2. Participants were self-selected volunteers who re-
sponded to a broad invitation. Volunteers may be more
likely than other graduates to be reasonably successful in
law.
3. Participants were practicing law or were working in
law-related jobs; those who quit the profession were
not included.
3Barrett (2008) undertook an analysis of the project’s ratings within
rater groups. He concluded that averaging both the two peer ratings and
two supervisor ratings was reasonable. Additional analyses indicated that
sufficient similarity existed between averaged supervisor and averaged peer
ratings to average the two averages to yield an “Other” rating viewpoint. See
Shultz and Zedeck (2008) for a complete presentation of the intercorrelations
among all performance perspectives.
COMPARATIVE VALIDITY OF PREDICTORS
OF LAWYER EFFECTIVENESS
LSAT, UGPA, and Index Score as Cognitive
Predictors of Lawyering Performance
Our results show that LSAT scores correlated with eight
effectiveness factors: Analysis and Reasoning, Creativity/Innovation,
Problem Solving, Researching the Law, Writing,
Networking, Integrity, and Community Service.4 Correlations
ranged from .07 (Problem Solving) to .15 (Writing);
the median correlation was approximately .10. Given that
the LSAT specifically aims to assess analytic and logical rea-
soning and reading, correlations with effectiveness factors
such as Analysis and Reasoning, Writing, and Research-
ing the Law were expected and supported. For Networking
and Community Service performance factors, the correla-
tions (–.12 and –.10, respectively) were negative; high scor-
ers on the LSAT did not do well on these two effectiveness
factors. (The negative correlations illustrate one reason we
decided to retain the 26 factors as separate: to identify different
patterns of correlations that have the potential to yield
contradictory conclusions and outcomes.) Networking and
Community Service both require interaction with others. It
may be that those who scored highly on the LSAT were not
viewed by the raters as devoting attention to Networking
and Community Service or lacked the necessary skills to be
effective. An alternative hypothesis is that Networking and
Community Service require skills other than cognitive ones;
subsequent analyses with the other tests in this study support
this hypothesis (presented next).
Correlations between the Index Score and the 26 factors
generally paralleled those found for the LSAT. UGPA results
showed fewer correlations than LSAT scores. UGPA correlated
best with Writing (r = .12), followed by Managing
Self and Diligence (rs = .09) and Integrity (r = .07). Overall,
the LSAT, UGPA, and INDEX were predictive of relatively
few of the effectiveness factors, mainly those that overlapped
with the LSAT's measurement targets. Although the LSAT,
UGPA, and Index Score were not developed or intended to
predict the relatively less cognitive lawyering effectiveness
factors, the important finding for this research was confirmation that, for
the most part, they did not. These results provide an impetus
to identify different types of predictors, such as noncogni-
tive measures, as potential screening devices for admission
to law school and for predicting who will be successful at
lawyering.
Personality and Related Constructs
Strong evidence suggests that certain dimensions of person-
ality are useful in predicting job performance. Personality can
be described as those traits, states, and moods that are stable
4All reported correlations in this section were significant at the .05 level.
and enduring over time and that distinguish one person from
another (Allport, 1937). A broader conceptualization can encompass
a person's strengths, weaknesses, values, and motivations
(R. Hogan, Hogan, & Warrenfeltz, 2007). Personality
is important to performance because the degree to which an
individual’s personality fits with the requirements of a job or
the values of an organization will have a significant impact on
both success and satisfaction (e.g., Chatman, 1991; Kristof,
1996). The question for this research is whether personality
factors can explain variability in lawyering performance in-
dependent of or in addition to that explained by the cognitive
components.
Much personality research embraces the Five-Factor
Model (FFM; Big 5), which categorizes personality into five
broad factors: Extraversion, Agreeableness, Conscientious-
ness, Neuroticism (Emotional Stability), and Openness to
Experience (Saucier & Goldberg, 1998; Wiggins & Trapnell,
1997). Early meta-analytic work (Barrick & Mount, 1991;
Salgado, 1997; Tett, Jackson, & Rothstein, 1991) found that
personality holds some utility for predicting job performance.
Barrick and Mount (1991) reviewed 117 studies and found
personality–performance correlations ranging from .03 to
.13 among the five factors of the FFM, with Conscientious-
ness being the strongest and most consistent predictor of
job performance across professions. More recently, Hurtz
and Donovan (2000) reexamined the relationship between
personality and job performance and found that the mean
sample-size weighted correlations ranged from .04 to .14
across dimensions, again with Conscientiousness having the
highest validity. Conscientiousness is a general predictor of
job performance. Other Big 5 traits predict performance in
specific jobs because different jobs call for different person-
ality profiles and strengths (R. Hogan, Hogan, & Roberts,
1996). This variability in profiles and strengths is obvious
for the lawyering profession as shown in the 26 effectiveness
factors—the specialty of law and the setting in which law
is practiced require different numbers and combinations of
competencies. Accordingly, we examined multiple personal-
ity facets.
Another reason for examining personality constructs is to
meet the goal of minimizing adverse impact in selection. In
general, in Big 5 assessments few ethnic differences emerge
(Oswald & Hough, 2010).
Reported correlations between Big 5 factors and job per-
formance of .13 and .14 are relatively small, but these findings
mainly reflect bivariate relationships with criteria. Multiple
regressions for the Big 5 as a set show correlations ranging
from .1 to .45 (predicting individual teamwork, which we
presumed would be reflected in effectiveness factors such as
Planning and Organizing, Client and Business Relationships
– Entrepreneurship, and Working with Others). Furthermore,
individual Big 5 personality traits have specific facet-level
characteristics that may have different relationships with job
performance, obscuring the relationship of the higher order
personality dimensions to job performance. For example, the
factor of Conscientiousness has more specific facets such
as Order, Impulsivity, Cognitive Structure, Play, Endurance,
and Achievement.
Our measure of personality, the Hogan Personality Inventory
(HPI; R. Hogan & Hogan, 2007), is composed of 206
true–false self-report items; it measures normal personality
based on the FFM and is designed specifically for use
with working adults. The HPI includes seven primary personality
scales scored on the basis of J. Hogan and Hogan's
(1991) reinterpretation of the FFM: Adjustment, Ambition,
Sociability, Interpersonal Sensitivity, Prudence, Inquisitive,
and Learning Approach. (The main difference between the
HPI and the FFM is that the HPI divides Extraversion into Ambition
and Sociability and divides Openness into Inquisitive
and Learning Approach.)
Table 2 presents brief definitions/descriptions of the seven
HPI scales; we note our hypotheses about which effectiveness
factor categories (italicized in brackets) would be predicted
by particular HPI scales. The hypothesized relations influ-
enced the choice of the HPI as one noncognitive measure to
examine. (Note that only the effectiveness category Commu-
nications is presumed not to be associated with any of the
HPI scales.)
TABLE 2
Hogan Personality Inventory

Adjustment: Reflects the degree to which a person is steady in the face of pressure, or conversely, moody and self-critical (FFM: Emotional Stability). [Character]
Ambition: Evaluates the degree to which a person seems leader-like, status-seeking, and achievement-oriented (FFM: Extraversion). [Working with Others]
Sociability: Assesses the degree to which a person needs and/or enjoys social interaction (FFM: Extraversion). [Client & Business Relations - Entrepreneurship]
Interpersonal Sensitivity: Reflects social sensitivity, tact, and perceptiveness (FFM: Agreeableness). [Conflict Resolution]
Prudence: Concerns self-control and conscientiousness (FFM: Conscientiousness). [Planning and Organizing]
Inquisitive: Reflects the degree to which a person seems imaginative, adventurous, and analytical (FFM: Openness). [Intellectual & Cognitive]
Learning Approach: Reflects the degree to which a person enjoys academic activities and values education as an end in itself (FFM: Openness). [Research & Information Gathering; Character]

Note. FFM = Five-Factor Model.
Correlations between our participants' HPI test results
and their rated performance on our 26 lawyering effectiveness
factors showed promising correlations for four of the
HPI scales—Adjustment (correlated with 22 effectiveness
factors; rs = .07–.22; the median correlation was approximately
.10), Ambition (correlated with 14 effectiveness
factors; rs = .08–.24; the median correlation was approximately
.11), Interpersonal Sensitivity (correlated with 12
effectiveness factors; rs = .08–.18; the median correlation
was approximately .13), and Prudence (correlated with 18
effectiveness factors; rs = .07–.20; the median correlation
was approximately .095).5
The strongest correlations for Adjustment were with
Stress Management (r = .22) and Advising Clients (r = .13).
The strongest correlations for Ambition were with Network-
ing (r = .24) and with Passion (r = .21). Prudence correlated
most highly with Managing Self (r = .20), Managing Others
(r = .17), Developing Relationships (r = .17), and Diligence
(r = .19). Interpersonal Sensitivity correlated most highly
with Developing Relationships (r = .18); Evaluating, Men-
toring, and Developing (r = .17); and Community Service
(r = .18).
The most highly correlated HPI scales (Adjustment, Am-
bition, Prudence, and Interpersonal Sensitivity) did not show
a pattern of highly significant correlations with three of
the lawyering effectiveness factors that are reflected in the
LSAT—Analysis and Reasoning, Researching the Law, and
Writing. Note also that the HPI scales correlated with more
effectiveness factors than did the LSAC measures, and they
correlated at somewhat higher levels. In general, they corre-
lated with factors not measured by the cognitive LSAC-type
scores. This finding lends support to our contention that cog-
nitive factors by themselves cannot explain the domain of
lawyering performance, and that different types of predic-
tors may be necessary to enhance prediction of lawyering
performance.
Across the seven HPI scale scores, the only group dif-
ference patterns to emerge were modest: Male participants
generally scored slightly more positively on three dimensions
(Adjustment, Sociability, and Inquisitive) than female participants, and
Caucasians scored somewhat higher on Learning Approach
than did Hispanics and African Americans.
Situational Judgment Tests
Another type of test that increases the coverage of the ef-
fectiveness domain and has proven useful in employment
selection is called a Situational Judgment Test (SJT). This
type of test has roots in practical/tacit knowledge (Sternberg &
Horvath, 1999). SJTs present descriptions of hy-
pothetical job-related scenarios, asking respondents to pick
how they would handle the situation from a list of possi-
ble responses. The hypothetical situations are often devel-
5These correlations were significant at the .05 level.
oped by asking professionals in the field what critical sit-
uations they encounter in their jobs (Weekley & Ployhart,
2005).
Organizations selecting among applicants often pair SJTs
with traditional cognitive ability tests because SJTs have
significant criterion-related validity and possess incremental
validity beyond cognitive ability and personality measures
(Chan & Schmitt, 2002; McDaniel, Morgeson, Finnegan,
Campion, & Braverman, 2001). For example, Chan and
Schmitt (2002) found that the SJT had a significant .30 cor-
relation with overall job performance and had an incremental
validity of .21 for overall performance. Weekley and Ployhart
(2005) found that the SJT was correlated .21 with overall job
performance and had a significant incremental validity of .18
above and beyond a cognitive ability test and an FFM personal-
ity inventory. SJTs are also drawing interest to predict student
performance (judged by mission statement and educational
objectives) in undergraduate schools (Oswald, Schmitt, Kim,
Ramsay, Gillespie, 2004). Oswald et al. (2004) showed
that the SJT has validity above and beyond cognitive ability
and personality for predicting college performance. Another
important reason for the popularity of SJTs is that they result
in fewer ethnic differences than traditional cognitive ability
tests (Clevenger, Pereira, Wiechmann, Schmitt, & Harvey-
Schmidt, 2001).
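Incremental validity figures such as those reported by Chan and Schmitt (2002) follow from the standard two-predictor multiple-correlation formula: given a cognitive test's criterion correlation r_y1, an SJT's criterion correlation r_y2, and their intercorrelation r_12, R-squared = (r_y1^2 + r_y2^2 - 2*r_y1*r_y2*r_12) / (1 - r_12^2). A sketch with illustrative correlations, not the published values:

```python
def multiple_r2(r_y1, r_y2, r_12):
    """Squared multiple correlation for two predictors, computed from
    the three pairwise correlations (standard two-predictor formula)."""
    return (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)

# Illustrative values: cognitive test r = .25, SJT r = .30, predictors r = .20
r_cog, r_sjt, r_between = 0.25, 0.30, 0.20
r2_cog_only = r_cog**2                              # variance explained by the cognitive test alone
r2_both = multiple_r2(r_cog, r_sjt, r_between)      # variance explained by both together
print(round(r2_both - r2_cog_only, 3))              # incremental variance from adding the SJT -> 0.065
```

The weaker the intercorrelation r_12, the more of the SJT's validity survives as incremental variance over the cognitive test.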
Understanding how potential lawyers would react in crit-
ical situations is important to predicting performance in the
complex, conflict-ridden, and pressured roles of lawyers. Af-
ter examining prototypes from other contexts, we wrote an
SJT that we hypothesized would predict the 26 factors of
effective lawyering. Note, however, that because we aimed to
create an admission test for law school, the items could not
rest on legal knowledge or lawyering experience but only on
a general understanding of the factors identified in our ear-
lier research. Our construction of the SJT required multiple
steps.
First, we wrote approximately 200 hypothetical situations
—each hypothesized to reflect one or more of the 26 ef-
fectiveness factors—such that several items linked to each
of the factors. Sometimes we used items from existing SJT
measures (e.g., from W. J. Camara, personal communication,
January 9, 2006; S. J. Motowidlo, personal communication,
January 9, 2006) to stimulate ideas for items that could be
customized for our lawyer effectiveness factors; however, we
wrote many items as originals for the specific purposes of
the research. For each item scenario, we developed four to
five answer options representing a range of viable responses.
Second, we pilot-tested the items with practicing lawyers
to get feedback regarding suitability, clarity, sensitivity to
bias, and balance. The final product was an SJT with 72
items.
The following is a sample SJT item that we hypothesized
would reflect competency in three effectiveness factors: In-
fluencing and Advocating, Developing Relationships, and
Integrity:
You learn that a co-worker, Angela, whom you helped train for
the job, copied some confidential and proprietary information
from the company’s files. What would you do?
a. Tell Angela what I learned and that she should destroy
the information before she gets caught.
b. Anonymously report Angela to management.
c. Report Angela to management and after disciplinary
action has been taken, tell Angela that I'm the one who
did so.
d. Threaten to report Angela unless she destroys the infor-
mation.
e. Do nothing.
We developed an empirical scoring key for the SJT in-
strument based on responses that differentiated those rated
as more effective from those rated as less effective lawyers
in our sample, as well as on hypothesized relationships with
the effectiveness factors. The key was cross-validated (see
Shultz & Zedeck, 2011, for a full discussion on the scoring
key development and how SJT scores were established for
each participant).
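Empirical keying of this kind generally weights each answer option by how strongly it separates high-rated from low-rated respondents; the project's actual procedure is detailed in Shultz and Zedeck (2011), so the following is only a generic sketch with hypothetical data:

```python
def build_key(responses, effective):
    """Weight each answer option by the difference in endorsement rates
    between effective and less-effective respondents."""
    groups = {True: [], False: []}
    for resp, eff in zip(responses, effective):
        groups[eff].append(resp)
    key = {}
    for opt in set(responses):
        p_hi = groups[True].count(opt) / len(groups[True])
        p_lo = groups[False].count(opt) / len(groups[False])
        key[opt] = round(p_hi - p_lo, 2)  # positive weight = chosen more by effective lawyers
    return key

# Responses of eight hypothetical participants to one SJT item,
# paired with whether each was rated in the more-effective group
answers = ["b", "c", "b", "e", "c", "b", "a", "c"]
rated_effective = [True, True, True, False, True, False, False, True]
print(build_key(answers, rated_effective))
```

A participant's SJT score would then be the sum of the weights of the options they chose, and the key would be cross-validated on a holdout sample, as the article notes.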
In our validation research, our volunteer participants' SJT
scores showed significant correlations (p < .05) with 23 of
the 26 effectiveness factors (all except Managing Others; Evaluation,
Development, and Mentoring; and Community Service).
The correlations ranged from .11 to .21; the median
correlation was approximately .17. Consistent with the literature
on noncognitive tests (Clevenger et al., 2001), there
were no real differences among the race or gender subgroups
for the SJT, except that Hispanics generally scored higher
than other ethnic groups. We believe the SJT results were
impressive both because (a) a large number of effectiveness
factors were predicted by the SJT, and (b) the correlations were
generally, though moderately, higher than the correlations between
the LSAT and the subset of effectiveness factors that
are most cognitively oriented.
Biographical Information Data
Past performance is often the best predictor of future perfor-
mance. Biographical Information Data (BIO) measures offer
structured and systematic methods for collecting and scoring
information about an individual’s background, experience,
and interests (Mumford, 1994). Items vary both in the nature
of the constructs measured (e.g., past attitudes; experiences)
and in the type of response scale (e.g., frequency of behavior,
amount, degree of agreement). Research has shown that BIO
scales can predict both college GPA and job performance.
Like other noncognitive predictors, BIO scores reflect fewer
ethnic differences than standardized tests such as the SAT
(Oswald et al., 2004).
Following the same process as with the SJT, we wrote
220 BIO items that we hypothesized would predict the var-
ious effectiveness factors. Sometimes we used items from
existing BIO measures (e.g., from W. J. Camara, personal
communication, January 9, 2006; S. J. Motowidlo, personal
communication, January 9, 2006) to stimulate ideas for items
that could be customized for our lawyer effectiveness factors.
Once again, we pilot-tested the items with attorneys and se-
lected those rated best for clarity, suitability, and balance
before including them in our final test battery of 80 items.
As with the SJT, we used empirical scoring keys based on
participants’ responses and hypothesized relation to our ef-
fectiveness factors. (Again, see Shultz Zedeck, 2011, for a
full discussion on the scoring key development and how BIO
scores were established for each participant.)
The following is a sample BIO item that we hypothesized
would reflect competency in both Creativity and Problem
Solving:
How many times in the past year were you able to think of
a way of doing something that most others would not have
thought of?
a. Almost never.
b. Seldom.
c. Sometimes.
d. Often.
e. Very frequently.
BIO scores showed significant correlations (p < .05; ranging
from .09 to .25; six correlations were .20 or higher; the
median correlation was approximately .175) with 24 of the 26
effectiveness factors (except Integrity and Stress Manage-
ment). Average scores on the BIO test yielded similar find-
ings for female and male participants, and for Caucasians
and Asian/Pacific Islanders. African Americans scored high-
est on the BIO, and Hispanics scored lowest, although the
differences among the four groups are not statistically sig-
nificant. In general, the results show no practical differences
based on gender and ethnicity (accounting for at most 1% of
variance), a finding consistent with the literature (Clevenger
et al., 2001) for these types of tests. Once again, the num-
ber of effectiveness factors predicted by the BIO test was
compelling.
Dispositional Optimism (OPT)
OPT refers to a generalized tendency to expect positive and
favorable outcomes in the future (Carver & Scheier, 1981).
Optimism has been recognized as a fundamental compo-
nent of individual adaptability because of its relationship
with stress resilience and coping (Hobfoll, 2002; Scheier &
Carver, 1992). Optimists are more confident and persistent
when confronting challenges; pessimists are more
doubtful and hesitant (Carver & Scheier, 2002). Some re-
search indicates that optimism predicts lower levels of stress
and depression for students making their transitions to the
1st year of college (Aspinwall & Taylor, 1992; Brissette,
Scheier, & Carver, 2002). Evidence suggests that OPT has
a unique impact on both self-reported job performance and
organizational performance appraisals (Youssef & Luthans,
2007). Optimism may be a valuable resource for lawyers
who face great time demands, high job insecurity, and
poor organizational climate (Goldhaber, 1999; Heinz, Hull, &
Harter, 1999; Makikangas & Kinnunen, 2003; Scheier &
Carver, 1985; Schiltz, 1999; Xanthopoulou, Bakker,
Demerouti, & Schaufeli, 2007).
We used the Revised Life Orientation Test (LOT–R;
Scheier, Carver, & Bridges, 1994) to assess generalized outcome
expectancies, with higher scores indicating a more optimistic
overall outlook on life (Scheier & Carver, 1985). The
LOT–R consists of six items, three of which assess optimism
and three reverse-scored items that measure pessimism, plus
four filler items answered on 5-point Likert scales.
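Scoring the LOT–R as described can be sketched as follows, assuming the usual 0–4 coding of the 5-point scale; the item positions used here are illustrative, not the published ordering:

```python
def score_lot_r(answers, optimism_items, pessimism_items):
    """Sum the three optimism items with the three reverse-scored
    pessimism items; filler items are simply ignored. Answers coded 0-4."""
    opt = sum(answers[i] for i in optimism_items)
    pes = sum(4 - answers[i] for i in pessimism_items)  # reverse-score pessimism
    return opt + pes  # range 0 (most pessimistic) to 24 (most optimistic)

# Ten answers on a 0-4 Likert scale; item positions are illustrative
answers = [4, 2, 1, 3, 2, 3, 0, 2, 1, 4]
print(score_lot_r(answers, optimism_items=[0, 3, 9], pessimism_items=[2, 6, 8]))  # -> 21
```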
The OPT predictor showed potential in that it significantly
(p < .05) correlated positively with 10 of the effectiveness
factors (rs = .08–.15; the median correlation was approxi-
mately .105). Most notable are the correlations with Speak-
ing, Networking, Passion, Stress Management, and Commu-
nity Service. Because OPT correlated in the high .4s with
the HPI Adjustment and Ambition scales, the use of OPT
and HPI might be duplicative depending on choices about
which tests to include in a battery. Notably, African Amer-
icans scored higher than Caucasians on OPT, although no
other significant subgroup differences emerged.
Other Predictors Examined in This Research
In addition to the HPI, we also administered two other Hogan
personality tests: the Hogan Development Survey (HDS),
which assesses behavioral tendencies that can “derail” a
person’s career success (Bentz, 1985; R. Hogan & Hogan,
1997), and the Motivation, Values and Preferences Inventory
(MVPI), which evaluates the fit between an individual and
the organizational culture by directly assessing a person’s
motives, values, and preferences (J. Hogan & Hogan, 1996).
Only two HDS scales showed some predictive promise: Excitable,
measuring overenthusiasm followed by disappointment
and lack of persistence (significantly [p < .05] correlated
negatively with 18 of the effectiveness factors; correlations
ranging from –.12 to –.23), and Reserved, reflecting detachment
and lack of awareness of others’ feelings (correlated
with eight effectiveness factors; correlations ranging from
–.13 to –.25). However, each also correlated with scales from
the HPI, making them arguably redundant unless one wanted
for other reasons to use the HDS in preference to the HPI.
Only the MVPI Altruism scale showed some promise, corre-
lating with seven effectiveness factors (correlations ranging
from .16 to .25), the highest being with Community Service.
The MVPI Altruism result illustrates a potential value of
increasing the types of predictors and lawyering effectiveness
measures pertinent to law school admissions: the data and
results can be used to inform curricula as well as the orientation
of a school’s training. For example, a school that wished
to focus selection on identifying applicants who would em-
phasize community service in their law careers could pursue
use of the MVPI instrument, and in particular, rely on the
Altruism score.
Self-Monitoring, a trait that differs from those included in
the Big 5, seemed potentially salient because of lawyers’ dis-
tinctive professional responsibilities for representing clients
(“role morality”). Self-monitoring involves learning what
is socially appropriate in new situations, having good self-
control of emotional expression (facial and verbal), and using
this ability to create a desired impression (Snyder, 1974; Snyder
& Gangestad, 1986). Some evidence suggests that high
self-monitors have more career mobility and success (Kilduff
& Day, 1994), as well as higher ratings of job performance
(Caldwell & O’Reilly, 1982; Caligiuri & Day, 2000). We
administered the Self-Monitoring Scale (Snyder, 1974); the
results showed only one significant correlation (with Com-
munity Service) and did not suggest continued pursuit, at
least in this form.6
Emotional intelligence targets the ability to regulate one’s
own emotions and perceive/understand others’ emotions
(Goleman, 1995). Some studies suggest that emotional intel-
ligence predicts the performance of students (Lam & Kirby,
2002) as well as job performance (Law, Wong, & Song, 2004;
Slaski & Cartwright, 2002). Emotional intelligence could be
important to lawyers who must manage interactions with
clients, juries, judges, and colleagues as well as “read and
interpret” whether the communications between lawyers and
others are being understood.
To try to assess this quality, we developed an Emotion
Recognition test patterned after work by Ekman, who assesses
individuals’ speed and accuracy in recognizing various
emotions on slides of faces (cf. Ekman, 2004). In our
exploratory test battery, we used stock color photos of neutral
and emotional facial expressions developed in the laborato-
ries of emotion researchers (e.g., Ekman, 2004; D. Keltner,
personal communication, 2006) to present faces of different
people expressing one of 10 emotions: anger, compassion,
contempt, disgust, embarrassment, fear, happiness, sadness,
shame, and surprise. In each item, participants saw (a) a neu-
tral facial expression, followed by (b) a very brief (one sixth
of a second) change in expression reflecting one particular
emotion, and (c) a return to the initial expression. Participants
had 5 s to choose which of the 10 emotions appeared during
the changed facial expression. Perhaps because of technical
problems,7 results were not predictive of the factors
we hypothesized would be correlated; only two significant
6A Self-Monitoring Scale type test rewritten specifically for law perfor-
mance might show better results.
7An ER test allowing longer time intervals, presenting fewer emotions,
and using more consistent photographs might have improved results.
correlations were obtained (with Researching the Law and
with Writing).
Intercorrelations Among New Predictors
For HPI, HDS, and MVPI, the strength of the intercorre-
lations within each instrument and between the individual
scales across instruments ranged from .00 to .55. For BIO,
the correlations between it and the other predictors ranged
from .00 to .39; for SJT, the correlations between it and other
predictors ranged from .01 to .21; for Self-Monitoring Scale
the correlations ranged from .00 to .50; for OPT, the correla-
tions ranged from .02 to .54; and for the Emotion Recognition
test, the correlations ranged from .00 to .13.
This pattern of results suggests that the different, poten-
tially new predictors were measuring abilities and charac-
teristics that are relatively independent of each other. These
findings lend support to our premise for the research—the
need for understanding the broad domain of lawyering per-
formance and identifying tests that can provide a compre-
hensive prediction model.
Correlations Among the LSAC Measures and the
New Predictors
The intercorrelations among the LSAT, UGPA, and Index
Score measures ranged from .20 (between LSAT and UGPA)
to .78 (for the relations between the components and the IN-
DEX). These same predictors had correlations that ranged
from .00 to .37 with the new predictors studied in this re-
search. However, approximately 74% of the correlations were
below .10; also, a number of the correlations were negative.
The pattern of correlations among the three traditional and
the new predictors suggests some degree of independence.
The lack of overlap in the existing and new measures suggests
that different traits and abilities were being measured, and,
again, that the tests had the potential to predict different
aspects of performance. Use of different test measures could
explain significant incremental variance above and beyond
that which is explained by any single test such as the LSAT
or INDEX.
Moderator Variables
We examined the relationship between the separate predic-
tors of HPI (seven scales), BIO, SJT, and OPT and each of
the 26 effectiveness factors of lawyer performance through
moderated regression to determine whether there was differ-
ential validity for any participant subgroup. Results indicated
few instances of significant incremental variance due to race
or gender (slope differences, similar to those found in other
research). Where significant increments existed, the amount
of variance was negligible (approximately 1% incremental
variance).
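One simple way to see the slope differences at issue is to fit the predictor–criterion regression separately per subgroup and compare slopes (the study itself used moderated hierarchical regression; the data below are invented):

```python
def slope(x, y):
    """Least-squares slope of y on x: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Invented predictor scores and performance ratings for two subgroups
group_a = ([10, 12, 14, 16], [3.0, 3.2, 3.6, 3.8])
group_b = ([10, 12, 14, 16], [3.1, 3.2, 3.4, 3.6])
diff = slope(*group_a) - slope(*group_b)
print(round(diff, 3))  # a near-zero difference suggests no differential validity
```

In the full moderated-regression approach, the same comparison is made by testing whether a predictor-by-group interaction term adds significant incremental variance.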
Incremental Variance
The research sought to determine whether a battery of tests
could be formed that would explain variance in ratings of
actual lawyer performance and whether the tests identi-
fied/developed for the project would yield incremental vari-
ance (hierarchical moderated regression) above what the
LSAT, UGPA, and Index Score explain. However, given that
the LSAT, UGPA, and INDEX scores did not demonstrate
many correlations with the lawyering effectiveness factors,
rather than conduct hierarchical regression analyses, we con-
ducted stepwise regression analysis in which the order of
entry into the analysis was determined by statistical rela-
tionships among the predictors and their correlations with
the performance evaluations. In the stepwise solution, the
noncognitive factors emerged first in the analyses.
Multiple regression results show that a combination of two,
and in some instances three, tests can produce multiple correlations
with the effectiveness factors in the range of the mid
.20s to the low .30s. For 15 of the 26 effectiveness factors,
both SJT and BIO were identified as the best predictive bat-
tery. In contrast, LSAT appeared in only three batteries; the
LSAT and the INDEX did not demonstrate much value along
with or in addition to the other potential tests in predicting
lawyering performance. Taken as a whole, the data suggest that
SJT, BIO, HPI, and OPT have promising potential to predict
lawyer performance effectiveness.
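Forward stepwise selection of the kind described enters, at each step, the predictor that most improves the multiple correlation with the criterion. A two-step sketch driven only by pairwise correlations (all values invented, not the study's):

```python
def r2_two(r_y1, r_y2, r_12):
    """Multiple R^2 for two predictors, from the pairwise correlations."""
    return (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)

# Invented criterion correlations for one effectiveness factor
r_with_criterion = {"SJT": 0.21, "BIO": 0.25, "LSAT": 0.08}
# Invented intercorrelations among the predictors
r_between = {("BIO", "SJT"): 0.15, ("BIO", "LSAT"): 0.05, ("LSAT", "SJT"): 0.10}

# Step 1: enter the predictor with the largest bivariate correlation
first = max(r_with_criterion, key=lambda p: abs(r_with_criterion[p]))
# Step 2: enter whichever remaining predictor maximizes the two-predictor R^2
second = max(
    (p for p in r_with_criterion if p != first),
    key=lambda p: r2_two(r_with_criterion[first], r_with_criterion[p],
                         r_between[tuple(sorted((first, p)))]),
)
print(first, second)  # -> BIO SJT
```

Because entry order is driven by the correlations rather than by theory, a stepwise battery like this naturally surfaces the noncognitive predictors first when, as here, they carry the larger criterion correlations.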
CONCLUSION
Our program of research (Shultz & Zedeck, 2011) has inves-
tigated what factors define effective lawyering and assessed
the validity and utility of cognitive and noncognitive tests
for predicting those competencies. Based on extensive re-
search in the field of employment selection and diversity,
the hypothesis underlying the design of the study
we have described was that well-developed predictors that
cover a broad range of different types of requisite job-related
characteristics, skills, and abilities would show significant re-
lationships with factors important to actual lawyer effective-
ness (see Table 2). If so, use of such predictors could broaden
the admissions criteria for highly selective law schools and
better justify the bases for admission decisions.
To test the hypothesis, we first engaged with focus groups
of lawyers to identify cognitive and noncognitive abilities
that are desirable for good practice of law, such as analy-
sis and reasoning, research and information-gathering skills,
communications, planning and organizing, conflict resolu-
tion, client and business relations, working with others, and
positive character dispositions. Again, using focus groups
of lawyers, we developed examples of above-average, av-
erage, and below-average behaviors that illustrated possible
lawyer performance responses. We next selected predictors
we hypothesized would likely show meaningful correlation
Downloaded by [University of California, Berkeley] at 14:13 09 July 2013
62 SHULTZ AND ZEDECK
with the effectiveness factors, administered the resultant test
battery (including multiple types of predictor tests), and
collected performance appraisals from peers, supervisors,
and self. The sample included a large number of practicing
lawyers and workers in law-related jobs who had varied ex-
perience in terms of years, settings, and practice areas and
comprised a modest but representative number of minority
practitioners. Correlational results confirmed our hypothesis about the relation between predictors and effective performance in this sample. The exploratory data we report
here make a compelling case for undertaking large-scale,
more definitive research on a national sample. These research
results, if confirmed in a national study, suggest that law
school admission decisions should expand beyond the usual
cognitive set of competencies emphasized by the LSAT and
INDEX.
New tests like the ones we developed (SJT and BIO)
could make important contributions to law school admission
decisions, using merit-based, theoretically justified selection
methods to select applicants whose competencies are con-
ducive to good lawyering as well as good academic achieve-
ment, while also strengthening the profession’s racial diver-
sity. The results suggest that the best overall predictors of
lawyer effectiveness were SJT, BIO, OPT, and several of the
HPI scales. But which tests, or combination of tests, should
be used depends on particular targets and goals of schools
and employers, not just on overall prediction.
If further research confirms the validity of the
performance-predictive tests identified in the Shultz and
Zedeck (2011) research, the new measures would open up
an array of valuable options in admission practices. In select-
ing employees, tests have typically been used in top-down
fashion, where those with the highest scores are selected
first. Other alternatives could be explored. Member schools
might, for example, use the LSAT and/or Index Score to set
an academic floor and then use the new scores and other file
materials to select among applicants who surpass that floor.
This could prioritize successful completion of law school and
the bar while admitting a diverse and generally effective set
of prospective lawyers. Or a school might use the LSAT to
identify the top 20 to 30% (in terms of academic potential)
and then combine the LSAT score with one or several of the
new test scores into a new type of INDEX, using the com-
bined information to admit applicants. This would allow a
school to select and train the top intellectual analysts while
choosing particular types of lawyering traits to get special
emphasis; diversity impact might be less than with some
other approaches. By contrast to admission practices that
now predominantly emphasize LSAT scores, a school might
simply combine the Index Score and new test scores in order
to assure that it selects a student body on the basis of rele-
vant academic and performance-predictive factors, and one
that is likely to be more diverse than is attained by current
admissions criteria. An approach like this would equalize the
importance of predicted academic performance and profes-
sional competency while excluding applicants who are low
on either variable. Or a school might establish minimum
scores for each of multiple test instruments and require that
an applicant achieve the minimum score on each element
in order to gain admission. Depending on where the mini-
mums were set, this could serve mainly to exclude people
whose strengths were concentrated in one of the two cate-
gories. These suggestions for use of additional information
from the new scores are not exhaustive, but it is hoped that
they will stimulate thought about potential new approaches
to law school admission.
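For concreteness, the alternative admission rules sketched above can be expressed as simple selection functions. This is a hypothetical sketch: the score arrays, floor, weights, and cutoffs are invented for illustration, not drawn from the Shultz and Zedeck data.

```python
import numpy as np

def floor_then_rank(lsat, new_score, lsat_floor, n_admit):
    """Rule (a): set an academic floor with the LSAT, then admit the
    top n_admit applicants above the floor by the new test score."""
    eligible = np.flatnonzero(lsat >= lsat_floor)
    ranked = eligible[np.argsort(-new_score[eligible])]
    return ranked[:n_admit]

def combined_index(lsat, new_score, w_lsat, n_admit):
    """Rule (b): standardize each score and admit by a weighted composite,
    equalizing (or tuning) academic vs. performance-predictive weight."""
    z = lambda s: (s - s.mean()) / s.std()
    composite = w_lsat * z(lsat) + (1 - w_lsat) * z(new_score)
    return np.argsort(-composite)[:n_admit]

def multiple_cutoffs(scores, minimums):
    """Rule (c): require the minimum on every instrument; applicants whose
    strength is concentrated in one category are excluded."""
    n = len(next(iter(scores.values())))
    ok = np.ones(n, dtype=bool)
    for name, cut in minimums.items():
        ok &= scores[name] >= cut
    return np.flatnonzero(ok)

# Four hypothetical applicants (indices 0-3).
lsat = np.array([170, 150, 165, 160])
sjt = np.array([50, 90, 80, 70])   # hypothetical new-test score

print(floor_then_rank(lsat, sjt, lsat_floor=160, n_admit=2))   # [2 3]
print(combined_index(lsat, sjt, w_lsat=0.5, n_admit=1))        # [2]
print(multiple_cutoffs({"lsat": lsat, "sjt": sjt},
                       {"lsat": 160, "sjt": 60}))              # [2 3]
```

Note how the rules diverge: applicant 1 has the best new-test score but fails the LSAT floor under rules (a) and (c), while applicant 0 has the best LSAT but is excluded by the SJT cutoff in rule (c).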
Beyond admission decision making, legal education could derive added benefits from the theory undergirding this research (linking predictors with effectiveness factors); the approach has utility in the general landscape of lawyer education, selection, and practice for
(a) development of law school curriculum and professional
training, (b) improved alignment of hiring and compensation
choices with employer objectives, and (c) improved appraisal
and targeting of professional development.
A law school could deliberate internally about its educational orientation and modify its curriculum accordingly. Schools
might use the effectiveness factors identified in the Shultz and
Zedeck (2011) research to identify curricular weaknesses or
to emphasize particular specialties such as negotiation or
effective advocacy. Curricular changes could also be paired
with admission selection that uses the particular test(s) that
predict effectiveness on the specialized factor. New measure-
ments would also provide applicants, career placement coun-
selors, and employers with more information about appli-
cants’/students’/graduates’ strengths and areas for improve-
ment.
Legal employers are especially concerned at present with
hiring attorneys who will be not just smart but productive in
particular ways within projectable time frames. They need
better alignment of compensation with work performance
and better ways to assess that performance. When they have
specialized needs, employers could use predictors from the
Shultz and Zedeck (2011) research that correlate with rele-
vant performance criteria as part of their selection process.
Performance appraisal on factors most important for a given
employer could contribute to promotion and retention deci-
sions.
The database for the Shultz and Zedeck (2008, 2011) re-
search and development of instruments based on it could pro-
vide detailed information for attorney performance targets,
appraisal, and feedback. The effectiveness factors themselves
and scales for appraising them in terms of job performance
could be useful training and self-development tools for prac-
ticing attorneys.
Overall, by focusing on the conceptual linkages between
predictors and job performance, the Shultz and Zedeck
(2011) research has identified the value of considering a
range of cognitive and noncognitive abilities in legal work.
This model has implications that could change the landscape
of law—who enters the profession; how they are educated;
and how legal employers select, develop, and reward their
attorneys.
ACKNOWLEDGMENTS
The research on the LSAT described in this article received
funding from the Law School Admission Council (LSAC).
The opinions and conclusions are those of the authors and do
not necessarily represent the positions or policy of the LSAC.
REFERENCES
A.B.A. Section of Legal Education and Admissions to the Bar. (1992). Legal
education and professional development: An educational continuum (Report
of the Task Force on Law Schools and the Profession: Narrowing the Gap).
Chicago, IL: American Bar Association.
Allport, G. W. (1937). Personality: A psychological interpretation. New
York, NY: Holt.
American Educational Research Association, American Psychological
Association, & National Council on Measurement in Education.
(1999). Standards for educational and psychological testing. Washington,
DC: Authors.
Anthony, L. C., Harris, V. F., & Pashley, P. J. (1999). Predictive validity of
the LSAT: A national summary of the 1995–96 correlation studies (Law
School Admission Council LSAT Tech. Research Rep. No. TR-97-01).
Newtown, PA: Law School Admission Services.
Anthony, L. C., & Liu, M. (2000). Analysis of differential prediction of law
school performance by racial/ethnic subgroups based on the 1996–1998
entering law school classes (Law School Admission Council Tech. Re-
search Rep. No. TR-00-02). Newtown, PA: Law School Admission Ser-
vices.
Aspinwall, L. G., & Taylor, S. E. (1992). Individual differences, coping,
and psychological adjustment: A longitudinal study of college adjust-
ment and performance. Journal of Personality and Social Psychology, 63,
989–1003.
Barrett, P. (2008). Some analyses of personality, academic, and performance
rating data from alumni and students with the faculty of law. Tulsa, OK:
Hogan Assessment Systems.
Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions
and job performance: A meta-analysis. Personnel Psychology, 44, 1–26.
Bentz, V. J. (1985, August). A view from the top: A thirty year perspective
of research devoted to discovery, description, and prediction of executive
behavior. Paper presented at the 93rd Annual Convention of the American
Psychological Association, Los Angeles, CA.
Brissette, I., Scheier, M. F., & Carver, C. S. (2002). The role of optimism
in social network development, coping, and psychological adjustment
during a life transition. Journal of Personality and Social Psychology, 82,
102–111.
Caldwell, D. F., & O’Reilly, C. A. (1982). Boundary spanning and indi-
vidual performance: The impact of self-monitoring. Journal of Applied
Psychology, 67, 124–127.
Caligiuri, P. M., & Day, D. V. (2000). Effects of self-monitoring on technical,
contextual, and assignment-specific performance. Group & Organization
Management, 25, 154–174.
Carver, C. S., & Scheier, M. F. (1981). Attention and self-regulation: A
control-theory approach to human behavior. New York, NY: Springer-
Verlag.
Carver, C. S., & Scheier, M. F. (2002). Optimism. In C. R. Snyder &
S. J. Lopez (Eds.), Handbook of positive psychology (pp. 231–243). New
York, NY: Oxford University Press.
Cascio, W. F., Outtz, J., Zedeck, S., & Goldstein, I. L. (1991). Statistical im-
plications of six methods of test score use in personnel selection. Human
Performance, 4, 233–264.
Chan, D., & Schmitt, N. (2002). Situational judgment and job performance.
Human Performance, 15, 233–254.
Chatman, J. (1991). Matching people and organizations: Selection and so-
cialization in public accounting firms. Administrative Science Quarterly,
36, 459–484.
Clevenger, J., Pereira, G. M., Wiechmann, D., Schmitt, N., & Harvey-
Schmidt, V. (2001). Incremental validity of situational judgment tests.
Journal of Applied Psychology, 86, 410–417.
Dalessandro, S. P., Stilwell, L. A., & Reese, L. M. (2005). LSAT performance
with regional, gender, and racial/ethnic breakdowns: 1993–1994 through
1999–2000 testing years (Law School Admission Council Research Rep.
No. TR-00-01). Newtown, PA: Law School Admission Services.
Edwards, H. T. (1992). The growing disjunction between legal ed-
ucation and the legal profession. Michigan Law Review, 91, 34–
78.
Ekman, P. (2004). Emotions revealed: Recognizing faces and feelings to
improve communication and emotional life. New York, NY: Macmillan.
Espeland, W., & Sauder, M. (2009). Rankings and diversity. Southern Cali-
fornia Review of Law and Social Justice, 18, 587–608.
Evans, F. R. (1984). Recent trends in law school validity studies (Law School
Admission Council Rep. No. 82-1; Reports of LSAC Sponsored Research:
Volume IV, 1978–1983, pp. 347–361). Newtown, PA: Law School Ad-
mission Services.
Goldhaber, M. D. (1999, July 5). Is the Promised Land Heav’n or Hell?
National Law Journal, p. A17.
Goleman, D. (1995). Emotional intelligence. New York, NY: Bantam.
Guion, R. (1987). Changing views for personnel selection research. Person-
nel Psychology, 40, 199–213.
Haddon, P. A., & Post, D. W. (2006). Misuse and abuse of the LSAT: Making
the case for alternative evaluative efforts and a redefinition of merit. St.
John’s Law Review, 80, 41–105.
Heinz, J. P., Hull, K. E., & Harter, A. (1999). Lawyers and their discontents:
Findings from a survey of the Chicago bar. Indiana Law Journal, 74,
735–758.
Henderson, W. D. (2004). The LSAT, law school exams, and meritocracy:
The surprising and undertheorized role of test-taking speed. Texas Law
Review, 82, 975–1051.
Henderson, W. D., & Morriss, A. P. (2006). The next generation of law
school rankings: Ranking methodologies: Student quality as measured by
LSAT scores: Migration patterns in the U.S. News rankings era. Indiana
Law Journal, 81, 163–203.
Hobfoll, S. E. (2002). Social and psychological resources and adaptation.
Review of General Psychology, 6, 307–324.
Hogan, J., & Hogan, R. (1991, May). Levels of analysis in big five theory:
The structure of self-description. Paper presented at the Sixth Annual
Conference of the Society for Industrial and Organizational Psychology,
St. Louis, MO.
Hogan, J., & Hogan, R. (1996). Motives, Values, Preferences Inventory man-
ual. Tulsa, OK: Hogan Assessment Systems.
Hogan, R., & Hogan, J. (1997). Hogan Development Survey manual. Tulsa,
OK: Hogan Assessment Systems.
Hogan, R., & Hogan, J. (2007). Hogan Personality Inventory manual (3rd
ed.). Tulsa, OK: Hogan Assessment Systems.
Hogan, R., Hogan, J., & Roberts, B. W. (1996). Personality measurement and
employment decisions: Questions and answers. American Psychologist,
51, 469–477.
Hogan, R., Hogan, J., & Warrenfeltz, R. (2007). The Hogan guide: Inter-
pretation and use of Hogan inventories. Tulsa, OK: Hogan Assessment
Systems.
Hough, L. M., Oswald, F. L., & Ployhart, R. E. (2001). Determinants, de-
tection, and amelioration of adverse impact in personnel selection pro-
cedures: Issues, evidence and lessons learned. International Journal of
Selection and Assessment, 9, 152–194.
Hunter, J., & Hunter, R. (1984). Validity and utility of alternative predictors
of job performance. Psychological Bulletin, 96, 72–98.
Hurtz, G. M., & Donovan, J. J. (2000). Personality and job perfor-
mance: The Big Five revisited. Journal of Applied Psychology, 85, 869–
879.
Jencks, C. (1988). Racial bias in testing. In C. Jencks & M. Phillips (Eds.),
The Black–White test score gap (pp. 55–85). Washington, DC: Brookings
Institution Press.
Kidder, W. C. (2000). The rise of the testocracy: An essay on the LSAT,
conventional wisdom, and the dismantling of diversity. Texas Journal of
Women and the Law, 9, 167–217.
Kidder, W. C. (2001). Comment: Does the LSAT mirror or magnify racial
and ethnic differences in educational attainment? A study of equally
achieving “elite” college students. California Law Review, 89, 1055–1124.
Kidder, W. C. (2003). Law and education: Affirmative action under at-
tack: The struggle for access from Sweatt to Grutter: A history of
African American, Latino and American Indian law school admissions,
1950–2000. Harvard Blackletter Journal, 19, 1–41.
Kilduff, M., & Day, D. V. (1994). Do chameleons get ahead? The effects of
self-monitoring on managerial careers. Academy of Management Journal,
37, 1047–1060.
Kristof, A. L. (1996). Person-organization fit: An integrative review of its
conceptualizations, measurement, and implications. Personnel Psychol-
ogy, 49, 1–49.
Lam, L. T., & Kirby, S. L. (2002). Is emotional intelligence an advan-
tage? An exploration of the impact of emotional and general intelligence
on individual performance. The Journal of Social Psychology, 142(1),
133–143.
LaPiana, W. P. (2001). A history of the Law School Admission Council and
the LSAT. Retrieved from http://www.lsacnet.org/publications/history-
lsac-lsat.pdf
Law, K. S., Wong, C., & Song, L. J. (2004). The construct and criterion
validity of emotional intelligence and its potential utility for management
studies. Journal of Applied Psychology, 89, 483–496.
Law School Admission Council. (2010). Volume summary: Applicants
2000–2009. Retrieved from http://members.lsac.org/Public/
MainPage.aspx?ReturnUrl=%2fPrivate%2fMainPage2.aspx
Lewin, T. (2010, January 6). Law school admissions lag among minorities.
New York Times.
Linn, R. L., & Hastings, C. N. (1983). A meta-analysis of the validity of
predictors of performance in law school (Reports of LSAC Sponsored
Research: Volume IV, 1978–1983, pp. 507–544). Newtown, PA: Law
School Admission Services.
Makikangas, A., & Kinnunen, U. (2003). Psychosocial work stressors
and well-being: Self-esteem and optimism as moderators in a one-year
longitudinal sample. Personality and Individual Differences, 35, 537–
557.
McDaniel, M. A., Morgeson, F. P., Finnegan, E. B., Campion, M. A., &
Braverman, E. P. (2001). Predicting job performance using situational
judgment tests: A clarification of the literature. Journal of Applied Psy-
chology, 86, 730–740.
Mumford, M. (1994). Biodata handbook: Theory, research, and use of bio-
graphical information in selection and performance prediction. Palo Alto,
CA: CPP Books.
Norton, L. L., Suto, D. A., & Reese, L. M. (2006). Analysis of differential
prediction of law school performance by racial/ethnic subgroups based on
2002–2004 entering law school classes (Law School Admission Council
Tech. Research Rep. No. TR-06-01). Newtown, PA: Law School Admission
Services.
Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive
meta-analysis of integrity test validities: Findings and implications for
personnel selection and theories of job performance. Journal of Applied
Psychology, 78, 679–703.
Oswald, F. L., & Hough, L. M. (2010). Personality and its assessment in or-
ganizations: Theoretical and empirical developments. In S. Zedeck (Ed.),
APA handbook of industrial and organizational psychology: Volume 2
(pp. 153–184). Washington, DC: APA.
Oswald, F. L., Schmitt, N., Kim, B. H., Ramsay, L. J., & Gillespie, M. A.
(2004). Developing a biodata measure and situational judgment inven-
tory as predictors of college student performance. Journal of Applied
Psychology, 89, 187–207.
Powers, D. E. (1982). Long-term predictive and construct validity of two
traditional predictors of law school performance. Journal of Educational
Psychology, 74, 568–576.
Raushenbush, W. (Ed.). (1986). Law school admissions 1984–2001: Se-
lecting lawyers for the twenty-first century. Newtown, PA: Law School
Admission Services.
Sackett, P. R., Schmitt, N., Ellingson, J. E., & Kabin, M. B. (2001).
High-stakes testing in employment, credentialing, and higher education:
Prospects in a post affirmative-action world. American Psychologist, 56,
302–318.
Salgado, J. F. (1997). The five factor model of personality and job perfor-
mance in the European community. Journal of Applied Psychology, 82,
30–43.
Saucier, G., & Goldberg, L. R. (1998). What is beyond the Big Five? Journal
of Personality, 66, 495–524.
Scheier, M. F., & Carver, C. S. (1985). Optimism, coping, and health: As-
sessment and implications of generalized outcome expectancies. Health
Psychology, 4, 219–247.
Scheier, M. F., & Carver, C. S. (1992). Effects of optimism on psychologi-
cal and physical well-being: Theoretical overview and empirical update.
Cognitive Therapy and Research, 16, 201–228.
Scheier, M. F., Carver, C. S., & Bridges, M. W. (1994). Distinguishing opti-
mism from neuroticism (and trait anxiety, self-mastery, and self-esteem):
A reevaluation of the Life Orientation Test. Journal of Personality and
Social Psychology, 67, 1063–1078.
Schiltz, P. J. (1999). On being a happy, healthy and ethical member of an
unhappy, unhealthy and unethical profession. Vanderbilt Law Review, 52,
871–951.
Schmidt, F. L. (2002). The role of general cognitive ability and job perfor-
mance: Why there cannot be a debate. Human Performance, 15, 187–
210.
Schmidt, F. L., & Hunter, J. E. (1981). Employment testing: Old theories
and new research findings. American Psychologist, 36, 1128–1137.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection
methods in personnel psychology: Practical and theoretical implications
of 85 years of research findings. Psychological Bulletin, 124, 262–274.
Schmitt, N., Rogers, W., Chan, D., Sheppard, L., & Jennings, D. (1997). Ad-
verse impact and predictive efficiency of various predictor combinations.
Journal of Applied Psychology, 82, 719–730.
Schrader, W. B. (1977). Summary of law school validity studies, 1948–1975
(Report No. LSAC-76-8; Reports of LSAC Sponsored Research: Volume
III, 1975–1977, pp. 519–550). Princeton, NJ: Law School Admission
Council.
Shultz, M., & Zedeck, S. (2008). Identification, development, and
validation of predictors for successful lawyering. Available from
http://ssrn.com/abstract=1442118
Shultz, M. M., & Zedeck, S. (2011). Predicting lawyer effectiveness: Broad-
ening the basis for law school admission decisions. Law and Social In-
quiry, 36, 620–661.
Slaski, M., & Cartwright, S. (2002). Health, performance and emotional
intelligence: An exploratory study of retail managers. Stress and Health,
18, 63–68.
Smith, P. C., & Kendall, L. M. (1963). Retranslation of expectations: An
approach to the construction of unambiguous anchors for ratings scales.
Journal of Applied Psychology, 47, 149–155.
Snyder, M. (1974). Self-monitoring of expressive behavior. Journal of Per-
sonality and Social Psychology, 30, 526–537.
Snyder, M., & Gangestad, S. (1986). On the nature of self-monitoring:
Matters of assessment, matters of validity. Journal of Personality and
Social Psychology, 51, 125–139.
Society of American Law Teachers, & Johnson, C. (for Lawyering
in the Digital Age Clinic at Columbia University Law School).
(2010). A disturbing trend in law school diversity. Retrieved from
http://blogs.law.columbia.edu/salt
Sternberg, R. J., & Horvath, J. A. (Eds.). (1999). Tacit knowledge in profes-
sional practice: Researcher and practitioner perspectives. Mahwah, NJ:
Erlbaum.
Stilwell, L. A., Dalessandro, S. P., & Reese, L. M. (2003). Predictive validity
of the LSAT: A national summary of the 2001–2002 correlation studies
(Law School Admission Council LSAT Technical Research Rep. No.
TR-03-01). Newtown, PA: Law School Admission Services.
Stilwell, L. A., & Pashley, P. J. (2003). Analysis of differential prediction of
law school performance by racial/ethnic subgroups based on 1999–2001
entering law school classes (Law School Admission Council LSAT Tech-
nical Research Rep. No. TR-03-03). Newtown, PA: Law School Admis-
sion Services.
Sturm, S., & Guinier, L. (1996). The future of affirmative action: Reclaim-
ing the innovative ideal. California Law Review, 84, 953–1036.
Sullivan, W. M., Colby, A., Wegner, J., Bond, L., & Shulman, L. (2007).
Educating lawyers: Preparation for the profession of law. San Francisco,
CA: Jossey-Bass.
Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as
predictors of job performance: A meta-analytic review. Personnel Psy-
chology, 44, 703–742.
Weekley, J. A., & Ployhart, R. E. (2005). Situational judgment: Antecedents
and relationships with performance. Human Performance, 18, 81–
104.
Wegner, J. W. (2010). Response: More complicated than we think: A re-
sponse to Rethinking Legal Education in Hard Times: The Recession,
Practical Legal Education, and the New Job Market. Journal of Legal
Education, 59, 623–644.
Wiggins, J. S., & Trapnell, P. D. (1997). Personality structure: The return of
the Big Five. In R. Hogan, J. Johnson, & S. Briggs (Eds.), Handbook of
personality psychology (pp. 737–766). San Diego, CA: Academic.
Wightman, L. F. (1993). Predictive validity of the LSAT: A national summary
of the 1990–1992 correlation studies (Law School Admission Council Re-
search Rep. No. 93-05). Newtown, PA: Law School Admission Services.
Wightman, L. F. (1997). The threat to diversity in legal education: An
empirical analysis of the consequences of abandoning race as a factor
in law school admission decisions. New York University Law Review, 72,
1–53.
Wightman, L. F. (2000). The role of standardized admission tests in the de-
bate about merit, academic standards, and affirmative action. Psychology,
Public Policy, and Law, 6, 90–100.
Wightman, L. F., & Muller, D. G. (1990). An analysis of differential validity
and differential prediction for Black, Mexican American, Hispanic, and
White law school students (Law School Admission Council Research
Rep. No. 90-03). Newtown, PA: Law School Admission Services.
Xanthopoulou, D., Bakker, A. B., Demerouti, E., & Schaufeli, W. B. (2007).
The role of personal resources in the job demands-resources model. In-
ternational Journal of Stress Management, 14, 121–141.
Youssef, C. M., & Luthans, F. (2007). Emerging positive organizational
behavior. Journal of Management, 33, 321–349.