Handbook of psychology vol 10 - assessment psychology


HANDBOOK of PSYCHOLOGY

VOLUME 10: ASSESSMENT PSYCHOLOGY

John R. Graham
Jack A. Naglieri
Volume Editors

Irving B. Weiner
Editor-in-Chief

John Wiley & Sons, Inc.
This book is printed on acid-free paper.

Copyright © 2003 by John Wiley & Sons, Inc., Hoboken, New Jersey. All rights reserved.

Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, e-mail: permcoordinator@wiley.com.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold with the understanding that the publisher is not engaged in rendering professional services. If legal, accounting, medical, psychological or any other expert assistance is required, the services of a competent professional person should be sought.

Designations used by companies to distinguish their products are often claimed as trademarks. In all instances where John Wiley & Sons, Inc. is aware of a claim, the product names appear in initial capital or all capital letters. Readers, however, should contact the appropriate companies for more complete information regarding trademarks and registration.

For general information on our other products and services please contact our Customer Care Department within the U.S. at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Library of Congress Cataloging-in-Publication Data:

Handbook of psychology / Irving B. Weiner, editor-in-chief.
p. cm.
Includes bibliographical references and indexes.
Contents: v. 1. History of psychology / edited by Donald K. Freedheim — v. 2. Research methods in psychology / edited by John A. Schinka, Wayne F. Velicer — v. 3. Biological psychology / edited by Michela Gallagher, Randy J. Nelson — v. 4. Experimental psychology / edited by Alice F. Healy, Robert W. Proctor — v. 5. Personality and social psychology / edited by Theodore Millon, Melvin J. Lerner — v. 6. Developmental psychology / edited by Richard M. Lerner, M. Ann Easterbrooks, Jayanthi Mistry — v. 7. Educational psychology / edited by William M. Reynolds, Gloria E. Miller — v. 8. Clinical psychology / edited by George Stricker, Thomas A. Widiger — v. 9. Health psychology / edited by Arthur M. Nezu, Christine Maguth Nezu, Pamela A. Geller — v. 10. Assessment psychology / edited by John R. Graham, Jack A. Naglieri — v. 11. Forensic psychology / edited by Alan M. Goldstein — v. 12. Industrial and organizational psychology / edited by Walter C. Borman, Daniel R. Ilgen, Richard J. Klimoski.
ISBN 0-471-17669-9 (set) — ISBN 0-471-38320-1 (cloth : alk. paper : v. 1) — ISBN 0-471-38513-1 (cloth : alk. paper : v. 2) — ISBN 0-471-38403-8 (cloth : alk. paper : v. 3) — ISBN 0-471-39262-6 (cloth : alk. paper : v. 4) — ISBN 0-471-38404-6 (cloth : alk. paper : v. 5) — ISBN 0-471-38405-4 (cloth : alk. paper : v. 6) — ISBN 0-471-38406-2 (cloth : alk. paper : v. 7) — ISBN 0-471-39263-4 (cloth : alk. paper : v. 8) — ISBN 0-471-38514-X (cloth : alk. paper : v. 9) — ISBN 0-471-38407-0 (cloth : alk. paper : v. 10) — ISBN 0-471-38321-X (cloth : alk. paper : v. 11) — ISBN 0-471-38408-9 (cloth : alk. paper : v. 12)
1. Psychology. I. Weiner, Irving B.
BF121.H1955 2003
150—dc21
2002066380

Printed in the United States of America.

10 9 8 7 6 5 4 3 2 1
Editorial Board

Volume 1: History of Psychology
Donald K. Freedheim, PhD, Case Western Reserve University, Cleveland, Ohio

Volume 2: Research Methods in Psychology
John A. Schinka, PhD, University of South Florida, Tampa, Florida
Wayne F. Velicer, PhD, University of Rhode Island, Kingston, Rhode Island

Volume 3: Biological Psychology
Michela Gallagher, PhD, Johns Hopkins University, Baltimore, Maryland
Randy J. Nelson, PhD, Ohio State University, Columbus, Ohio

Volume 4: Experimental Psychology
Alice F. Healy, PhD, University of Colorado, Boulder, Colorado
Robert W. Proctor, PhD, Purdue University, West Lafayette, Indiana

Volume 5: Personality and Social Psychology
Theodore Millon, PhD, Institute for Advanced Studies in Personology and Psychopathology, Coral Gables, Florida
Melvin J. Lerner, PhD, Florida Atlantic University, Boca Raton, Florida

Volume 6: Developmental Psychology
Richard M. Lerner, PhD; M. Ann Easterbrooks, PhD; Jayanthi Mistry, PhD, Tufts University, Medford, Massachusetts

Volume 7: Educational Psychology
William M. Reynolds, PhD, Humboldt State University, Arcata, California
Gloria E. Miller, PhD, University of Denver, Denver, Colorado

Volume 8: Clinical Psychology
George Stricker, PhD, Adelphi University, Garden City, New York
Thomas A. Widiger, PhD, University of Kentucky, Lexington, Kentucky

Volume 9: Health Psychology
Arthur M. Nezu, PhD; Christine Maguth Nezu, PhD; Pamela A. Geller, PhD, Drexel University, Philadelphia, Pennsylvania

Volume 10: Assessment Psychology
John R. Graham, PhD, Kent State University, Kent, Ohio
Jack A. Naglieri, PhD, George Mason University, Fairfax, Virginia

Volume 11: Forensic Psychology
Alan M. Goldstein, PhD, John Jay College of Criminal Justice–CUNY, New York, New York

Volume 12: Industrial and Organizational Psychology
Walter C. Borman, PhD, University of South Florida, Tampa, Florida
Daniel R. Ilgen, PhD, Michigan State University, East Lansing, Michigan
Richard J. Klimoski, PhD, George Mason University, Fairfax, Virginia
Parents are powerful forces in the development of their sons and daughters. In recognition of that positive influence, we dedicate our efforts on this book to the loving memory of Craig Parris (1910–2001) and Sam Naglieri (1926–2001).
Handbook of Psychology Preface

Psychology at the beginning of the twenty-first century has become a highly diverse field of scientific study and applied technology. Psychologists commonly regard their discipline as the science of behavior, and the American Psychological Association has formally designated 2000 to 2010 as the "Decade of Behavior." The pursuits of behavioral scientists range from the natural sciences to the social sciences and embrace a wide variety of objects of investigation. Some psychologists have more in common with biologists than with most other psychologists, and some have more in common with sociologists than with most of their psychological colleagues. Some psychologists are interested primarily in the behavior of animals, some in the behavior of people, and others in the behavior of organizations. These and other dimensions of difference among psychological scientists are matched by equal if not greater heterogeneity among psychological practitioners, who currently apply a vast array of methods in many different settings to achieve highly varied purposes.

Psychology has been rich in comprehensive encyclopedias and in handbooks devoted to specific topics in the field. However, there has not previously been any single handbook designed to cover the broad scope of psychological science and practice. The present 12-volume Handbook of Psychology was conceived to occupy this place in the literature. Leading national and international scholars and practitioners have collaborated to produce 297 authoritative and detailed chapters covering all fundamental facets of the discipline, and the Handbook has been organized to capture the breadth and diversity of psychology and to encompass interests and concerns shared by psychologists in all branches of the field.

Two unifying threads run through the science of behavior. The first is a common history rooted in conceptual and empirical approaches to understanding the nature of behavior. The specific histories of all specialty areas in psychology trace their origins to the formulations of the classical philosophers and the methodology of the early experimentalists, and appreciation for the historical evolution of psychology in all of its variations transcends individual identities as being one kind of psychologist or another. Accordingly, Volume 1 in the Handbook is devoted to the history of psychology as it emerged in many areas of scientific study and applied technology.

A second unifying thread in psychology is a commitment to the development and utilization of research methods suitable for collecting and analyzing behavioral data. With attention both to specific procedures and their application in particular settings, Volume 2 addresses research methods in psychology.

Volumes 3 through 7 of the Handbook present the substantive content of psychological knowledge in five broad areas of study: biological psychology (Volume 3), experimental psychology (Volume 4), personality and social psychology (Volume 5), developmental psychology (Volume 6), and educational psychology (Volume 7). Volumes 8 through 12 address the application of psychological knowledge in five broad areas of professional practice: clinical psychology (Volume 8), health psychology (Volume 9), assessment psychology (Volume 10), forensic psychology (Volume 11), and industrial and organizational psychology (Volume 12). Each of these volumes reviews what is currently known in these areas of study and application and identifies pertinent sources of information in the literature. Each discusses unresolved issues and unanswered questions and proposes future directions in conceptualization, research, and practice. Each of the volumes also reflects the investment of scientific psychologists in practical applications of their findings and the attention of applied psychologists to the scientific basis of their methods.

The Handbook of Psychology was prepared for the purpose of educating and informing readers about the present state of psychological knowledge and about anticipated advances in behavioral science research and practice. With this purpose in mind, the individual Handbook volumes address the needs and interests of three groups. First, for graduate students in behavioral science, the volumes provide advanced instruction in the basic concepts and methods that define the fields they cover, together with a review of current knowledge, core literature, and likely future developments. Second, in addition to serving as graduate textbooks, the volumes offer professional psychologists an opportunity to read and contemplate the views of distinguished colleagues concerning the central thrusts of research and leading edges of practice in their respective fields. Third, for psychologists seeking to become conversant with fields outside their own specialty and for persons outside of psychology seeking information about psychological matters, the Handbook volumes serve as a reference source for expanding their knowledge and directing them to additional sources in the literature.

The preparation of this Handbook was made possible by the diligence and scholarly sophistication of the 25 volume editors and co-editors who constituted the Editorial Board. As Editor-in-Chief, I want to thank each of them for the pleasure of their collaboration in this project. I compliment them for having recruited an outstanding cast of contributors to their volumes and then working closely with these authors to achieve chapters that will stand each in their own right as valuable contributions to the literature. I would like finally to express my appreciation to the editorial staff of John Wiley and Sons for the opportunity to share in the development of this project and its pursuit to fruition, most particularly to Jennifer Simon, Senior Editor, and her two assistants, Mary Porterfield and Isabel Pratt. Without Jennifer's vision of the Handbook and her keen judgment and unflagging support in producing it, the occasion to write this preface would not have arrived.

IRVING B. WEINER
Tampa, Florida
Volume Preface

The title of this volume, Assessment Psychology, was deliberately chosen to make the point that the assessment activities of psychologists constitute a legitimate and important subdiscipline within psychology. The methods and techniques developed by assessment pioneers were central in establishing a professional role for psychologists in schools, hospitals, and other settings. Although interest in psychological assessment has waxed and waned over the years and various assessment procedures and instruments have come under attack, the premise of this volume is that assessment psychology is alive and well and continues to be of paramount importance in the professional functioning of most psychologists. In addition, assessment psychology contributes greatly to the well-being of the thousands of individuals who are assessed each year.

A primary goal of this volume is to address important issues in assessment psychology. Some of these issues have been around a long time (e.g., psychometric characteristics of assessment procedures), whereas others have come on the scene more recently (e.g., computer-based psychological assessment). The volume also has chapters devoted to the unique features of assessment in different kinds of settings (adult and child mental health, schools, medical centers, business and industry, forensic and correctional, and geriatric). Other chapters address assessment in various domains of functioning (e.g., cognitive and intellectual, interests, personality and psychopathology). Still other chapters address various approaches used in the assessment process (e.g., interviews, behavioral methods, projective approaches, and self-report inventories). The final chapter summarizes the major conclusions reached by other authors in the volume and speculates about the future of assessment psychology.

We should also state clearly what this volume does not include. Although many specific tests and procedures are described (some in greater detail than others), the volume is not intended as a practical guide for the administration and interpretation of tests and other procedures. There are other excellent interpretive handbooks already available, and many of these are referenced in the various chapters of this volume. It is our hope that the detailed and insightful consideration of issues and problems will provide a strong foundation for all who are part of the discipline of assessment psychology, regardless of the specific techniques or instruments that they employ. We view this volume as having been successful if it raises the sensitivity of assessment psychologists to the important issues inherent in the use of assessment procedures in a wide variety of settings and to the strengths and weaknesses of the various approaches and instruments.

This volume is intended for several audiences. Graduate students in psychology, education, and related disciplines should find the chapters informative and thought provoking as they master the assessment process. Psychologists who engage in psychological assessment, either routinely or on a more limited basis, should find the various chapters to be enlightening. Finally, those who use the results of psychological assessment (e.g., medical and social work professionals, teachers, parents, clients) should become more informed consumers after reading the chapters in this volume.

We want to thank those who contributed to the completion of this volume. Of course, the most important contributors are those who wrote the individual chapters. Their efforts resulted in informative and thought-provoking chapters. The editor-in-chief of the series of which this volume is a part, Irv Weiner, deserves considerable credit for his organizational skills in making the project happen as planned and for his specific contributions to each of the chapters in this volume. We also want to thank Alice Early and Brian O'Reilly for their editorial contributions. The Department of Psychology at Kent State University and the Department of Psychology and the Center for Cognitive Development at George Mason University supported this project in various ways.

JOHN R. GRAHAM
JACK A. NAGLIERI
Contributors

R. Michael Bagby, PhD, Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
Paul Barrett, PhD, Department of Psychiatry, Henry Ford Health System, Detroit, Michigan
Yossef S. Ben-Porath, PhD, Department of Psychology, Kent State University, Kent, Ohio
Bruce A. Bracken, PhD, School of Education, College of William and Mary, Williamsburg, Virginia
Jeffery P. Braden, PhD, Department of Educational Psychology, University of Wisconsin, Madison, Wisconsin
James N. Butcher, PhD, Department of Psychology, University of Minnesota, Minneapolis, Minnesota
Andrew D. Carson, PhD, Riverside Publishing, Itasca, Illinois
Amanda Jill Clemence, BA, Department of Psychology, University of Tennessee, Knoxville, Tennessee
Robert J. Craig, PhD, West Side VA Medical Center and Illinois School of Professional Psychology, Chicago, Illinois
Philip A. DeFina, PhD, Department of Clinical Neuropsychology, The Fielding Institute, Santa Barbara, California
Kevin S. Douglas, LLB, PhD, Department of Mental Health Law and Policy, Louis de la Parte Florida Mental Health Institute, University of South Florida, Tampa, Florida
Barry A. Edelstein, PhD, Department of Psychology, West Virginia University, Morgantown, West Virginia
Howard N. Garb, PhD, Veterans Administration Pittsburgh Healthcare System and Department of Psychiatry, University of Pittsburgh, Pittsburgh, Pennsylvania
Kurt F. Geisinger, PhD, Office of Academic Affairs and Department of Psychology, The University of St. Thomas, Houston, Texas
Elkhonon Goldberg, PhD, Department of Neurology, New York University School of Medicine, New York, New York
John R. Graham, PhD, Department of Psychology, Kent State University, Kent, Ohio
Leonard Handler, PhD, Department of Psychology, University of Tennessee, Knoxville, Tennessee
Stephen N. Haynes, PhD, Department of Psychology, University of Hawaii Manoa, Honolulu, Hawaii
Richard J. Klimoski, PhD, Department of Psychology, George Mason University, Fairfax, Virginia
Gerald P. Koocher, PhD, Graduate School for Health Studies, Simmons College, Boston, Massachusetts
Lesley P. Koven, MA, Department of Psychology, West Virginia University, Morgantown, West Virginia
David Lachar, PhD, Department of Psychiatry and Behavioral Sciences, University of Texas Houston Health Science Center, Houston, Texas
Rodney L. Lowman, PhD, College of Organizational Studies, Alliant International University, San Diego, California
Ronald R. Martin, PhD, Department of Psychology, West Virginia University, Morgantown, West Virginia
Mark E. Maruish, PhD, United Behavioral Health, Minneapolis, Minnesota
AnneMarie McCullen, MA, Department of Psychology, University of Windsor, Windsor, Ontario, Canada
Jennifer J. McGrath, MA, Department of Psychology, Bowling Green State University, Bowling Green, Ohio
Edwin I. Megargee, PhD, Department of Psychology, Florida State University, Tallahassee, Florida
Jack A. Naglieri, PhD, Department of Psychology and Center for Cognitive Development, George Mason University, Fairfax, Virginia
William H. O'Brien, PhD, Department of Psychology, Bowling Green State University, Bowling Green, Ohio
James R. P. Ogloff, JD, PhD, School of Psychology, Psychiatry, and Psychological Medicine, Monash University and Victorian Institute of Forensic Mental Health, Victoria, Australia
Kenneth Podell, PhD, Department of Psychiatry, Henry Ford Health System, Detroit, Michigan
Michael C. Ramsay, PhD, Department of Educational Psychology, Texas A&M University, College Station, Texas
Celiane M. Rey-Casserly, PhD, Department of Psychiatry, Children's Hospital and Harvard Medical School, Boston, Massachusetts
Cecil R. Reynolds, PhD, Department of Educational Psychology, Texas A&M University, College Station, Texas
Bridget Rivera, MS, California School of Professional Psychology, Alliant International University, San Diego, California
Yana Suchy, PhD, Department of Psychology, Finch University of Health Sciences, Chicago Medical School, North Chicago, Illinois
Jerry J. Sweet, PhD, Evanston Northwestern Healthcare and Department of Psychiatry and Behavioral Sciences, Northwestern University Medical School, Chicago, Illinois
Steven M. Tovian, PhD, Evanston Northwestern Healthcare and Department of Psychiatry and Behavioral Sciences, Northwestern University Medical School, Chicago, Illinois
Andrea Turner, BSc, Department of Psychology, University of Windsor, Windsor, Ontario, Canada
Donald J. Viglione, PhD, California School of Professional Psychology, Alliant University, San Diego, California
John D. Wasserman, PhD, Department of Psychology and Center for Cognitive Development, George Mason University, Fairfax, Virginia
Irving B. Weiner, PhD, Department of Psychiatry and Behavioral Medicine, University of South Florida, Tampa, Florida
Nicole Wild, BSc, Department of Psychology, University of Windsor, Windsor, Ontario, Canada
Lori B. Zukin, PhD, Booz Allen Hamilton, McLean, Virginia
CHAPTER 1
The Assessment Process
IRVING B. WEINER

COLLECTING ASSESSMENT INFORMATION
  Formulating Goals
  Selecting Tests
  Examiner Issues
  Respondent Issues
  Data Management Issues
INTERPRETING ASSESSMENT INFORMATION
  Basis of Inferences and Impressions
  Malingering and Defensiveness
  Integrating Data Sources
UTILIZING ASSESSMENT INFORMATION
  Bias and Base Rates
  Value Judgments and Cutting Scores
  Culture and Context
REFERENCES

Assessment psychology is the field of behavioral science concerned with methods of identifying similarities and differences among people in their personal characteristics and capacities. As such, psychological assessment comprises a variety of procedures that are employed in diverse ways to achieve numerous purposes. Assessment has sometimes been equated with testing, but the assessment process goes beyond merely giving tests. Psychological assessment involves integrating information gleaned not only from test protocols, but also from interview responses, behavioral observations, collateral reports, and historical documents. The Standards for Educational and Psychological Testing (American Educational Research Association [AERA], American Psychological Association, and National Council on Measurement in Education, 1999) specify in this regard that

the use of tests provides one method of collecting information within the larger framework of a psychological assessment of an individual. . . . A psychological assessment is a comprehensive examination undertaken to answer specific questions about a client's psychological functioning during a particular time interval or to predict a client's psychological functioning in the future. (p. 119)

The diverse ways in which assessment procedures are employed include many alternative approaches to obtaining and combining information from different sources, and the numerous purposes that assessment serves arise in response to a broad range of referral questions raised in such companion fields as clinical, educational, health, forensic, and industrial/organizational psychology. Subsequent chapters in this volume elaborate the diversity of assessment procedures, the nature of the assessment questions that arise in various settings, and the types of assessment methods commonly employed to address these questions.

This introductory chapter sets the stage for what is to follow by conceptualizing assessment as a three-stage process comprising an initial phase of information input, a subsequent phase of information evaluation, and a final phase of information output. Information input involves collecting assessment data of appropriate kinds and in sufficient amounts to address referral questions in meaningful and useful ways. Information evaluation consists of interpreting assessment data in a manner that provides accurate descriptions of respondents' psychological characteristics and behavioral tendencies. Information output calls for utilizing descriptions of respondents to formulate conclusions and recommendations that help to answer referral questions. Each of these phases of the assessment process requires assessors to accomplish some distinctive tasks, and each involves choices and decisions that touch on critical issues in conducting psychological assessments.

COLLECTING ASSESSMENT INFORMATION

The process of collecting assessment information begins with a formulation of the purposes that the assessment is intended to serve. A clear sense of why an assessment is being conducted helps examiners select tests and other sources of information that will provide an adequate basis for arriving
at useful conclusions and recommendations. Additionally helpful in planning the data collection process is attention to several examiner, respondent, and data management issues that influence the nature and utility of whatever findings are obtained.

Formulating Goals

Psychological assessments are instigated by referrals that pose questions about aspects of a person's psychological functioning or likely future behavior. When clearly stated and psychologically relevant, referral questions guide psychologists in determining what kinds of assessment data to collect, what considerations to address in examining these data, and what implications of their findings to emphasize in their reports. If referral questions lack clarity or psychological relevance, some reformulation is necessary to give direction to the assessment process. For example, a referral in a clinical setting that asks vaguely for personality evaluation or differential diagnosis needs to be specified in consultation with the referring person to identify why a personality evaluation is being sought or what diagnostic possibilities are at issue. Assessment in the absence of a specific referral question can result in a sterile exercise in which neither the data collection process nor the psychologist's inferences can be focused in a meaningful way.

Even when adequately specified, referral questions are not always psychological in nature. Assessors doing forensic work are frequently asked to evaluate whether criminal defendants were insane at the time of their alleged offense. Sanity is a legal term, however, not a psychological term. There are no assessment methods designed to identify insanity, nor are there any research studies in which being insane has been used as an independent variable.
In instances of this kind, in order to help assessors plan their procedures and frame their reports, the referral must be translated into psychological terms, as in defining insanity as the inability to distinguish reality from fantasy.

As a further challenge in formulating assessment goals, specific and psychologically phrased referral questions may still lack clarity as a consequence of addressing complex and multidetermined patterns of behavior. In employment evaluations, for example, a referring person may want to know which of three individuals is likely to perform best in a position of leadership or executive responsibility. To address this type of question effectively, assessors must first be able to identify psychological characteristics that are likely to make a difference in the particular circumstances, as by proceeding, in this example, in the belief that being energetic, decisive, assertive, self-confident, and reasonably unflappable contribute to showing effective and responsible leadership. Then the data collection process can be planned to measure these characteristics, and the eventual report can be focused on using them as a basis for recommending a hiring decision.

Selecting Tests

The multiple sources of assessment information previously noted include the results of formal psychological testing with standardized instruments; responses to questions asked in structured and unstructured interviews; observations of behavior in various types of contrived situations and natural settings; reports from relatives, friends, employers, and other collateral persons concerning an individual's previous life history and current characteristics and behavioral tendencies; and documents such as medical records, school records, and written reports of earlier assessments. Individual assessments vary considerably in the availability and utility of these diverse sources of information.
Assessments may sometimes be based entirely on record reviews and collateral reports, because the person being assessed is unwilling to be seen directly by an examiner or is for some reason prevented from doing so. Some persons being assessed are quite forthcoming when interviewed but are reluctant to be tested; others find it difficult to talk about themselves but are quite responsive to testing procedures; and in still other cases, in which both interview and test data are ample, there may be a dearth of other information sources on which to draw.

There is little way to know before the fact which sources of information will prove most critical or valuable in an assessment process. What collateral informants say about a person in a particular instance may be more revealing and reliable than what the person says about him- or herself, and in some instances historical documents may prove more informative and dependable than either first-person or collateral reports. Behavioral observations and interview data may sometimes contribute more to an adequate assessment than standardized tests, or may even render testing superfluous; whereas in other instances formal psychological testing may reveal vital diagnostic information that would otherwise not have been uncovered.

The fact that psychological assessment can proceed effectively without psychological testing helps to distinguish between these two activities. The terms psychological assessment and psychological testing are sometimes used synonymously, as noted earlier, but psychological testing is only one among many sources of information that may be utilized in conducting a psychological assessment. Whereas testing refers to the administration of standardized measuring instruments, assessment involves multiple data collection procedures leading to the integration of information from diverse sources. Thus the data collection procedures employed in
testing contribute only a portion of the information that is typically utilized in the complex decision-making process that constitutes assessment. This distinction between assessment and testing has previously been elaborated by Fernandez-Ballesteros (1997), Maloney and Ward (1976, chapter 3), and Matarazzo (1990), among others.

Nonetheless, psychological testing stands out among the data collection procedures employed in psychological assessment as the one most highly specialized, diverse, and in need of careful regulation. Psychological testing brings numerous issues to the assessment process, beginning with selection of an appropriate test battery from among an extensive array of available measuring instruments (see Conoley & Impara, 1995, and Fischer & Corcoran, 1994; see also chapters 18–24 of the present volume). The chief considerations that should determine the composition of a test battery are the psychometric adequacy of the measures being considered; the relevance of these measures to the referral questions being addressed; the likelihood that these measures will contribute incremental validity to the decision-making process; and the additive, confirmatory, and complementary functions that individual measures are likely to serve when used jointly.

Psychometric Adequacy

As elaborated by Anastasi and Urbina (1997), in the Standards for Educational and Psychological Testing (AERA et al., 1999, chapters 1, 2, & 5), and in the chapter by Wasserman and Bracken in this volume, the psychometric adequacy of an assessment instrument consists of the extent to which it involves standardized test materials and administration procedures, can be coded with reasonably good interscorer agreement, demonstrates acceptable reliability, has generated relevant normative data, and shows valid corollaries that serve the purposes for which it is intended. Assessment psychologists may at times choose to use tests with uncertain psychometric properties,
perhaps for exploratory purposes or for comparison with a previous examination using these tests. Generally speaking, however, formal testing as part of a psychological assessment should be limited to standardized, reliable, and valid instruments for which there are adequate normative data.

Relevance

The tests selected for inclusion in an assessment battery should provide information relevant to answering the questions that have been raised about the person being examined. Questions that relate to personality functions (e.g., What kind of approach in psychotherapy is likely to be helpful to this person?) call for personality tests. Questions that relate to educational issues (e.g., Does this student have a learning disability?) call for measures of intellectual abilities and academic aptitude and achievement. Questions that relate to neuropsychological functions (e.g., Are there indications of memory loss?) call for measures of cognitive functioning, with special emphasis on measures of capacities for learning and recall.

These examples of relevance may seem too obvious to mention. However, they reflect an important and sometimes overlooked guiding principle that test selection should be justifiable for each measure included in an assessment battery. Insufficient attention to justifying the use of particular measures in specific instances can result in two ill-advised assessment practices: (a) conducting examinations with a fixed and unvarying battery of measures regardless of what questions are being asked in the individual case, and (b) using favorite instruments at every opportunity even when they are unlikely to serve any central or unique purpose in a particular assessment. The administration of minimally useful tests that have little relevance to the referral question is a wasteful procedure that can result in warranted criticism of assessment psychologists and the assessment process.
Likewise, the propriety of charging fees for unnecessary procedures can rightfully be challenged by persons receiving or paying for services, and the competence of assessors who give tests that make little contribution to answering the questions at issue can be challenged in such public forums as the courtroom (see Weiner, 2002).

Incremental Validity

Incremental validity in psychological assessment refers to the extent to which new information increases the accuracy of a classification or prediction above and beyond the accuracy achieved by information already available. Assessors pay adequate attention to incremental validity by collecting the amount and kinds of information they need to answer a referral question, but no more than that. In theory, then, familiarity with the incremental validity of various measures when used for certain purposes, combined with test selection based on this information, minimizes redundancy in psychological assessment and satisfies both professional and scientific requirements for justifiable test selection.

In practice, however, strict adherence to incremental validity guidelines often proves difficult and even disadvantageous to implement. As already noted, it is difficult to anticipate which sources of information will prove to be most useful. Similarly, with respect to which instruments to include in a test battery, there is little way to know whether the tests administered have yielded enough data, and which tests have contributed most to understanding the person being examined,
until after the data have been collected and analyzed. In most practice settings, it is reasonable to conduct an interview and review previous records as a basis for deciding whether formal testing would be likely to help answer a referral question—that is, whether it will show enough incremental validity to warrant its cost in time and money. Likewise, reviewing a set of test data can provide a basis for determining what kind of additional testing might be worthwhile. However, it is rarely appropriate to administer only one test at a time, to choose each subsequent test on the basis of the preceding one, and to schedule a further testing session for each additional test administration. For this reason, responsible psychological assessment usually consists of one or two testing sessions comprising a battery of tests selected to serve specific additive, confirmatory, and complementary functions.

Additive, Confirmatory, and Complementary Functions of Tests

Some referral questions require selection of multiple tests to identify relatively distinct and independent aspects of a person's psychological functioning. For example, students receiving low grades may be referred for an evaluation to help determine whether their poor academic performance is due primarily to limited intelligence or to personality characteristics that are fostering negative attitudes toward achieving in school. A proper test battery in such a case would include some measure of intelligence and some measure of personality functioning. These two measures would then be used in an additive fashion to provide separate pieces of information, both of which would contribute to answering the referral question.
As this example illustrates, the additive use of tests serves generally to broaden understanding of the person being examined.

Other assessment situations may create a need for confirmatory evidence in support of conclusions based on test findings, in which case two or more measures of the same psychological function may have a place in the test battery. Assessors conducting a neuropsychological examination to address possible onset of Alzheimer's disease, for example, ordinarily administer several memory tests. Should each of these tests identify memory impairment consistent with Alzheimer's, then from a technical standpoint, only one of them would have been necessary and the others would have shown no incremental validity. Practically speaking, however, the multiple memory measures taken together provide confirmatory evidence of memory loss. Such confirmatory use of tests strengthens understanding and helps assessors present conclusions with confidence.

The confirmatory function of a multitest battery is especially useful when tests of the same psychological function measure it in different ways. The advantages of multimethod assessment of variables have long been recognized in psychology, beginning with the work of Campbell and Fiske (1959) and continuing with contemporary reports by the American Psychological Association's (APA's) Psychological Assessment Work Group, which stress the improved validity that results when phenomena are measured from a variety of perspectives (Kubiszyn et al., 2000; Meyer et al., 2001):

The optimal methodology to enhance the construct validity of nomothetic research consists of combining data from multiple methods and multiple operational definitions. . . . Just as effective nomothetic research recognizes how validity is maximized when variables are measured by multiple methods, particularly when the methods produce meaningful discrepancies . . .
the quality of idiographic assessment can be enhanced by clinicians who integrate the data from multiple methods of assessment. (Meyer et al., p. 150)

Such confirmatory testing is exemplified in applications of the Minnesota Multiphasic Personality Inventory (MMPI, MMPI-2) and the Rorschach Inkblot Method (RIM), which are the two most widely researched and frequently used personality assessment instruments (Ackerman & Ackerman, 1997; Butcher & Rouse, 1996; Camara, Nathan, & Puente, 2000; Watkins, Campbell, Nieberding, & Hallmark, 1995). As discussed later in this chapter and in the chapters by Viglione and Rivera and by Ben-Porath in this volume, the MMPI-2 is a relatively structured self-report inventory, whereas the RIM is a relatively unstructured measure of perceptual-cognitive and associational processes (see also Exner, 2003; Graham, 2000; Greene, 2000; Weiner, 1998). Because of differences in their format, the MMPI-2 and the RIM measure normal and abnormal characteristics in different ways and at different levels of a person's ability and willingness to recognize and report them directly. Should a person display some type of disordered functioning on both the MMPI-2 and the RIM, this confirmatory finding becomes more powerful and convincing than having such information from one of these instruments but not the other, even though technically in this instance no incremental validity derives from the second instrument.

Confirmatory evidence of this kind often proves helpful in professional practice, especially in forensic work. As described by Blau (1998), Heilbrun (2001), Shapiro (1991), and others, multiple sources of information pointing in the same direction bolster courtroom testimony, whereas conclusions based on only one measure of some characteristic can result in assessors' being criticized for failing to conduct a thorough examination.

Should multiple measures of the same psychological characteristics yield different rather than confirmatory results,
these results can usually serve valuable complementary functions in the interpretive process. At times, apparent lack of agreement between two purported measures of the same characteristic has been taken to indicate that one of the measures lacks convergent validity. This negative view of divergent test findings fails to take adequate cognizance of the complexity of the information provided by multimethod assessment and can result in misleading conclusions. To continue with the example of conjoint MMPI-2 and RIM testing, suppose that a person's responses show elevation on indices of depression on one of these measures but not the other. Inasmuch as indices on both measures have demonstrated some validity in detecting features of depression, the key question to ask is not which measure is wrong in this instance, but rather why the measures have diverged.

Perhaps, as one possible explanation, the respondent has some underlying depressive concerns that he or she does not recognize or prefers not to admit to others, in which case depressive features might be less likely to emerge in response to the self-report MMPI-2 methodology than on the more indirect Rorschach task. Or perhaps the respondent is not particularly depressed but wants very much to give the impression of being in distress and needing help, in which case the MMPI-2 might be more likely to show depression than the RIM. Or perhaps the person generally feels more relaxed and inclined to be forthcoming in relatively structured than relatively unstructured situations, and then the MMPI-2 is more likely than the RIM to reveal whether the person is depressed.

As these examples show, multiple measures of the same psychological characteristic can complement each other when they diverge, with one measure sometimes picking up the presence of a characteristic (a true positive) that is missed by the other (a false negative).
Possible reasons for the false negative can contribute valuable information about the respondent's test-taking attitudes and likelihood of behaving differently in situations that differ in the amount of structure they provide. The translation of such divergence between MMPI-2 and RIM findings into clinically useful diagnostic inferences and individual treatment planning is elaborated by Finn (1996) and Ganellen (1996). Whatever measures may be involved in weighing the implications of divergent findings, this complementary use of test findings frequently serves to deepen understanding gleaned from the assessment process.

Examiner Issues

The amount and kind of data collected in psychological assessments depend in part on two issues concerning the examiners who conduct these assessments. The first issue involves the qualifications and competence of examiners to utilize the procedures they employ, and the second has to do with ways in which examiners' personal qualities can influence how different kinds of people respond to them.

Qualifications and Competence

There is general consensus that persons who conduct psychological assessments should be qualified by education and training to do so. The Ethical Principles and Code of Conduct promulgated by the APA (1992) offers the following general guideline in this regard: "Psychologists provide services, teach, and conduct research only within the boundaries of their competence, based on their education, training, supervised experience, or appropriate professional experience" (Ethical Code 1.04[a]). Particular kinds of knowledge and skill that are necessary for test users to conduct adequate assessments are specified further in the Test User Qualifications endorsed by the APA (2001).
Finally of note with respect to using tests in psychological assessments, the Standards for Educational and Psychological Testing (AERA et al., 1999) identify who is responsible for the proper use of tests: "The ultimate responsibility for appropriate test use and interpretation lies predominantly with the test user. In assuming this responsibility, the user must become knowledgeable about a test's appropriate uses and the populations for which it is suitable" (p. 112).

Despite the clarity of these statements and the considerable detail provided in the Test User Qualifications, two persistent issues in contemporary assessment practice remain unresolved. First, adequate psychological testing qualifications are typically inferred for any examiners holding a graduate degree in psychology, being licensed in their state, and presenting themselves as competent to practice psychological assessment. Until such time as the criteria proposed in the Test User Qualifications become incorporated into formal accreditation procedures, qualification as an assessor will continue to be conferred automatically on psychologists obtaining licensure. Unfortunately, being qualified by license to use psychological tests does not ensure being competent in using them. Being competent in psychological testing requires familiarity with the latest revision of whatever instruments an assessor is using, with current research and the most recent normative data concerning these instruments, and with the manifold interpretive complexities they are likely to involve. Assessment competence also requires appreciation for a variety of psychometric, interpersonal, sociocultural, and contextual issues that affect not only the collection but also the interpretation and utilization of assessment information (see Sandoval, Frisby, Geisinger, & Scheuneman, 1990).
The chapters that follow in this volume bear witness to the broad range of these issues and to the steady output of new or revised measures, research findings, and practice guidelines that make assessment psychology a dynamic and rapidly evolving field with a large and burgeoning literature. Only by keeping reasonably current with these developments can psychological assessors become and remain competent, and only by remaining competent can they fulfill their ethical responsibilities (Kitchener, 2000, chapter 9; Koocher & Keith-Spiegel, 1998; Weiner, 1989).

The second persistent issue concerns assessment by persons who are not psychologists and are therefore not bound by this profession's ethical principles or guidelines for practice. Nonpsychologist assessors who can obtain psychological tests are free to use them however they wish. When easily administered measures yield test scores that seem transparently interpretable, as in the case of an elevated Borderline scale on the Millon Clinical Multiaxial Inventory–III (MCMI-III; Choca, Shanley, & Van Denberg, 1997) or an elevated Acquiescence scale on the Holland Vocational Preference Inventory (VPI; Holland, 1985), unqualified examiners can draw superficial conclusions that take inadequate account of the complexity of these instruments, the interactions among their scales, and the limits of their applicability. It accordingly behooves assessment psychologists not only to maintain their own competence, but also to call attention in appropriate circumstances to assessment practices that fall short of reasonable standards of competence.

Personal Influence

Assessors can influence the information they collect by virtue of their personal qualities and by the manner in which they conduct a psychological examination. In the case of self-administered measures such as interest surveys or personality questionnaires, examiner influence may be minimal. Interviews and interactive testing procedures, on the other hand, create ample opportunity for an examiner's age, gender, ethnicity, or other characteristics to make respondents feel more or less comfortable and more or less inclined to be forthcoming.
Examiners accordingly need to be alert to instances in which such personal qualities may be influencing the nature and amount of the data they are collecting.

The most important personal influence that examiners cannot modify or conceal is their language facility. Psychological assessment procedures are extensively language-based, either in their content or in the instructions that introduce nonverbal tasks, and accurate communication is therefore essential for obtaining reliable assessment information. It is widely agreed that both examiners and whomever they are interviewing or testing should be communicating either in their native language or in a second language in which they are highly proficient (AERA et al., 1999, chapter 9). The use of interpreters to circumvent language barriers in the assessment process rarely provides a satisfactory solution to this problem. Unless an interpreter is fully conversant with idiomatic expressions and cultural referents in both languages, is familiar with standard procedures in psychological assessment, and is a stranger to the examinee (as opposed to a friend, relative, or member of the same closely knit subcultural community), the obtained results may be of questionable validity. Similarly, in the case of self-administered measures, instructions and test items must be written in a language that the respondent can be expected to understand fully. Translations of pencil-and-paper measures accordingly require close attention to the idiomatic vagaries of each new language and to culture-specific contents of individual test items, in order to ensure equivalence of measures in the cross-cultural applications of tests (Allen & Walsh, 2000; Dana, 2000a).

Unlike their fixed qualities, the manner in which examiners conduct the assessment process is within their control, and untoward examiner influence can be minimized by appropriate efforts to promote full and open response to the assessment procedures.
To achieve this end, an assessment typically begins with a review of its purposes, a description of the procedures that will be followed, and efforts to establish a rapport that will help the person being evaluated feel comfortable and willing to cooperate with the assessment process. Variations in examiner behavior while introducing and conducting psychological evaluations can substantially influence how respondents perceive the assessment situation—for example, whether they see it as an authoritarian investigative process intended to ferret out defects and weaknesses, or as a mutually respectful and supportive interaction intended to provide understanding and help. Even while following closely the guidelines for a structured interview and adhering faithfully to standardized procedures for administering various tests, the examiner needs to recognize that his or her manner, tone of voice, and apparent attitude are likely to affect the perceptions and comfort level of the person being assessed and, consequently, the amount and kind of information that person provides (see Anastasi & Urbina, 1997; Masling, 1966, 1998).

Respondent Issues

Examiner influence in the assessment process inevitably interacts with the attitudes and inclinations of the person being examined. Some respondents may feel more comfortable being examined by an older person than a younger one, for example, or by a male than a female examiner, whereas other respondents may prefer a younger and female examiner. Among members of a minority group, some may prefer to be examined by a person with a cultural or ethnic background similar to theirs, whereas others are less concerned with the examiner's
background than with his or her competence. Similarly, with respect to examiner style, a passive, timid, and dependent person might feel comforted by a warm, friendly, and supportive examiner approach that would make an aloof, distant, and mistrustful person feel uneasy; conversely, an interpersonally cautious and detached respondent might feel safe and secure when being examined in an impersonal and businesslike manner that would be unsettling and anxiety provoking to an interpersonally needy and dependent respondent. With such possibilities in mind, skilled examiners usually vary their behavioral style with an eye to conducting assessments in ways that will be likely to maximize each individual respondent's level of comfort and cooperation.

Two other respondent issues that influence the data collection process concern a person's right to give informed consent to being evaluated and his or her specific attitudes toward being examined. With respect to informed consent, the introductory phase of conducting an assessment must ordinarily include not only the explanation of purposes and procedures mentioned previously, which informs the respondent, but also an explicit agreement by the respondent or persons legally responsible for the respondent to undergo the evaluation. As elaborated in the Standards for Educational and Psychological Testing (AERA et al., 1999), informed consent can be waived only when an assessment has been mandated by law (as in a court-ordered evaluation) or when it is implicit, as when a person applies for a position or opportunity for which being assessed is a requirement (i.e., a job for which all applicants are being screened psychologically; see also Kitchener, 2000, and the chapters by Geisinger and by Koocher and Rey-Casserly in this volume). Having given their consent to be evaluated, moreover, respondents are entitled to revoke it at any time during the assessment process.
Hence, the prospects for obtaining adequate assessment data depend not only on whether respondents can be helped to feel comfortable and be forthcoming, but even more basically on whether they consent in the first place to being evaluated and remain willing during the course of the evaluation.

Issues involving a respondent's specific attitudes toward being examined typically arise in relation to whether the assessment is being conducted for clinical or for administrative purposes. When assessments are being conducted for clinical purposes, the examiner is responsible to the person being examined, the person being examined is seeking some type of assistance, and the examination is intended to be helpful to this person and responsive to his or her needs. As common examples in clinical assessments, people concerned about their psychological well-being may seek an evaluation to learn whether they need professional mental health care, and people uncertain about their educational or vocational plans may want to look for help in determining what their abilities and interests suit them to do. In administrative assessments, by contrast, examiners are responsible not to the person being examined, but to some third party who has requested the evaluation to assist in arriving at some judgment about the person. Examiners in an administrative assessment are ethically responsible for treating the respondent fairly and with respect, but the evaluation is being conducted for the benefit of the party requesting it, and the results may or may not meet the respondent's needs or serve his or her best interests.
Assessment for administrative purposes occurs commonly in forensic, educational, and organizational settings when evaluations are requested to help decide such matters as whether a prison inmate should be paroled, a student should be admitted to a special program, or a job applicant should be hired (see Monahan, 1980).

As for their attitudes, respondents being evaluated for clinical purposes are relatively likely to be motivated to reveal themselves honestly, whereas those being examined for administrative purposes are relatively likely to be intent on making a certain kind of impression. Respondents attempting to manage the impression they give are likely to show themselves not as they are, but as they think the person requesting the evaluation would view favorably. Typically such efforts at impression management take the form of denying one's limitations, minimizing one's shortcomings, attempting to put one's very best foot forward, and concealing whatever might be seen in a negative light. Exceptions to this general trend are not uncommon, however. Whereas most persons being evaluated for administrative purposes want to make the best possible impression, some may be motivated in just the opposite direction. For example, a plaintiff claiming brain damage in a personal injury lawsuit may see benefit in making the worst possible impression on a neuropsychological examination. Some persons being seen for clinical evaluations, despite having come of their own accord and recognizing that the assessment is being conducted for their benefit, may nevertheless be too anxious or embarrassed to reveal their difficulties fully.
Whatever kind of impression respondents may want to make, the attitudes toward being examined that they bring with them to the assessment situation can be expected to influence the amount and kind of data they produce. These attitudes also have a bearing on the interpretation of assessment data, and the further implications of impression management for malingering and defensiveness are discussed later in the chapter.

Data Management Issues

A final set of considerations in collecting assessment information concerns appropriate ways of managing the data that
are obtained. Examiners must be aware in particular of issues concerning the use of computers in data collection; the responsibility they have for safeguarding the security of their measures; and their obligation, within limits, to maintain the confidentiality of what respondents report or reveal to them.

Computerized Data Collection

Software programs are available to facilitate the data collection process for most widely used assessment methods. Programs designed for use with self-report questionnaires typically provide for online administration of test items, automated coding of item responses to produce scale scores, and quantitative manipulation of these scale scores to yield summary scores and indices. For instruments that require examiner administration and coding (e.g., a Wechsler intelligence test), software programs accept test scores entered by the examiner and translate them into the test's quantitative indices (e.g., the Wechsler IQ and Index scores). Many of these programs store the test results in files that can later be accessed or exported, and some even provide computational packages that can generate descriptive statistics for sets of test records held in storage.

These features of computerized data management bring several benefits to the process of collecting assessment information. Online administration and coding of responses help respondents avoid mechanical errors in filling out test forms manually, and they eliminate errors that examiners sometimes make in scoring these responses (see Allard & Faust, 2000). For measures that require examiner coding and data entry, the utility of the results depends on accurate coding and entry, but once the data are entered, software programs eliminate examiner error in calculating summary scores and indices from them.
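The automated coding of item responses into scale scores described above can be illustrated with a minimal sketch. The scale definition, item numbers, and responses here are hypothetical, not drawn from any published instrument; real scoring software also handles omitted items, validity checks, and norm-based score conversions.

```python
# Minimal sketch of automated scale scoring (hypothetical scale).

def score_scale(responses, items, reverse_items=(), max_score=5):
    """Sum Likert item responses into a raw scale score.

    responses: dict mapping item number -> rating (1..max_score)
    items: item numbers belonging to the scale
    reverse_items: items whose rating is reverse-keyed before summing
    """
    total = 0
    for item in items:
        rating = responses[item]
        if item in reverse_items:
            # Reverse keying: on a 1..5 scale, 1 becomes 5 and 5 becomes 1.
            rating = (max_score + 1) - rating
        total += rating
    return total

# Hypothetical 4-item scale with one reverse-keyed item.
answers = {1: 4, 2: 2, 3: 5, 4: 1}
raw = score_scale(answers, items=[1, 2, 3, 4], reverse_items=[4])
print(raw)  # 4 + 2 + 5 + (6 - 1) = 16
```

Automating exactly this kind of bookkeeping is what eliminates the hand-scoring and reverse-keying errors noted above, while leaving interpretation entirely to the examiner.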
The data storage features of many software programs facilitate assessment research, particularly for investigators seeking to combine databases from different sources, and they can also help examiners meet requirements in most states and many agencies for keeping assessment information on file for some period of time. For such reasons, the vast majority of assessment psychologists report that they use software for test scoring and feel comfortable doing so (McMinn, Ellens, & Soref, 1999).

Computerized collection of assessment information has some potential disadvantages as well, however. When assessment measures are administered online, first of all, the reliability of the data collected can be compromised by a lack of equivalence between an automated testing procedure and the noncomputerized version on which it is based. As elaborated by Butcher, Perry, and Atlis (2000), Honaker and Fowler (1990), and Snyder (2000) and discussed in the chapter by Butcher in the present volume, the extent of such equivalence is currently an unresolved issue. Available data suggest fairly good reliability for computerized administrations based on pencil-and-paper questionnaires, especially those used in personality assessment. With respect to the MMPI, for example, a meta-analysis by Finger and Ones (1999) of all available research comparing computerized with booklet forms of the instrument has shown them to be psychometrically equivalent. On the other hand, good congruence with the original measures has yet to be demonstrated for computerized versions of structured clinical interviews and for many measures of visual-spatial functioning used in neuropsychological assessment.
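The basic arithmetic behind the equivalence comparisons cited above can be sketched as follows: scores from a computerized and a booklet administration of the same measure should show a near-zero mean difference and a high correlation. The scores below are hypothetical, and real equivalence studies use counterbalanced designs and much larger samples than this illustration.

```python
# Minimal sketch of a computerized-vs-booklet equivalence check
# on hypothetical scale scores from the same eight respondents.
from statistics import mean, stdev

booklet      = [52, 61, 47, 58, 66, 49, 55, 60]
computerized = [53, 60, 48, 57, 67, 50, 54, 61]

# Mean difference between forms (near zero if the forms are equivalent).
diffs = [c - b for c, b in zip(computerized, booklet)]
mean_diff = mean(diffs)

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Cross-form correlation (high if respondents rank the same on both forms).
r = pearson_r(booklet, computerized)
print(f"mean difference = {mean_diff:.2f}, r = {r:.3f}")
```

A small mean difference alone is not sufficient; both the level and the ordering of scores must agree, which is why both statistics are examined together in equivalence research.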
Among software programs available for test administration, moreover, very few have been systematically evaluated with respect to whether they obtain exactly the same information as would emerge in a standard administration of the measure on which they are based.

A second potential disadvantage of computerized data collection derives from the ease with which it can be employed. Although frequently helpful to knowledgeable assessment professionals and thus to the persons they examine, automated procedures also simplify psychological testing for untrained and unqualified persons who lack assessment skills and would not be able to collect test data without the aid of a computer. The availability of software programs thus creates some potential for assessment methods to be misused and respondents to be poorly served. Such outcomes are not an inescapable by-product of computerized assessment procedures, however. They constitute instead an abuse of technology by uninformed and irresponsible persons.

Test Security

Test security refers to restricting the public availability of test materials and answers to test items. Such restrictions address two important considerations in psychological assessment. First, publicly circulated information about tests can undermine their validity, particularly in the case of measures comprising items with right and wrong or more or less preferable answers. Prior exposure to tests of this kind and information about correct or preferred answers can affect how persons respond to them and prevent an examiner from being able to collect a valid protocol. The validity of test findings is especially questionable when a respondent's prior exposure has included specific coaching in how to answer certain questions. As for relatively unstructured assessment procedures that have no right or wrong answers, even on these measures various kinds of responses carry particular kinds of interpretive significance.
Hence, the possibility exists on relatively unstructured measures as well that persons intent on making a certain kind of impression can
be helped to do so by pretest instruction concerning what various types of responses are taken to signify. However, the extent to which public dissemination of information about the inferred meaning of responses does in fact compromise the validity of relatively unstructured measures has not yet been examined empirically and is a subject for further research.

Second, along with helping to preserve the validity of obtained results, keeping assessment measures secure protects test publishers against infringement of their rights by pirated or plagiarized copies of their products. Ethical assessors respect copyright law by not making or distributing copies of published tests, and they take appropriate steps to prevent test forms, test manuals, and assessment software from falling into the hands of persons who are not qualified to use them properly or who feel under no obligation to keep them secure. Both the Ethical Principles and Code of Conduct (APA, 1992, Section 2.10) and the Standards for Educational and Psychological Testing (AERA et al., 1999, p. 117) address this professional responsibility in clear terms.

These considerations in safeguarding test security also have implications for the context in which psychological assessment data are collected. Assessment data have become increasingly likely in recent years to be applied in forensic settings, and litigious concerns sometimes result in requests to have a psychological examination videotaped or observed by a third party. These intrusions on traditional examination procedures pose a threat to the validity of the obtained data in two respects. First, there is no way to judge or measure the impact of the videotaping or the observer on what the respondent chooses to say and do.
Second, the normative standards that guide test interpretation are derived from data obtained in two-person examinations, and there are no comparison data available for examinations conducted in the presence of a camera or an observer. Validity aside, exposure of test items to an observer or through a videotape poses the same threat to test security as distributing test forms or manuals to persons who are under no obligation to keep them confidential. Psychological assessors may at times decide for their own protection to audiotape or videotape assessments when they anticipate legal challenges to the adequacy of their procedures or the accuracy of their reports. They may also use recordings on occasion as an alternative to writing a long and complex test protocol verbatim. For purposes of test security, however, recordings made for other people to hear or see, like third-party observers, should be avoided.

Confidentiality

A third and related aspect of appropriate data management pertains to maintaining the confidentiality of a respondent's assessment information. Like certain aspects of safeguarding test security, confidentiality is an ethical matter in assessment psychology, not a substantive one. The key considerations in maintaining the confidentiality of assessment information, as specified in the Ethical Principles and Code of Conduct (APA, 1992, Section 5) and elaborated by Kitchener (2000, chapter 6), involve (a) clarifying the nature and limits of confidentiality with clients and patients prior to undertaking an evaluation; (b) communicating information about persons being evaluated only for appropriate scientific or professional purposes and only to an extent relevant to the purposes for which the evaluation was conducted; (c) disclosing information only to persons designated by respondents or other duly authorized persons or entities, except when otherwise permitted or required by law; and (d) storing and preserving respondents' records in a secure fashion.
Like the matter of informed consent discussed previously, confidentiality is elaborated as an ethical issue in the chapter by Koocher and Rey-Casserly in this volume.

INTERPRETING ASSESSMENT INFORMATION

Following the collection of sufficient relevant data, the process of psychological assessment continues with a phase of evaluation in which these data are interpreted. The interpretation of assessment data consists of drawing inferences and forming impressions concerning what the findings reveal about a respondent's psychological characteristics. Accurate and adequately focused interpretations result in summary descriptions of psychological functioning that can then be utilized in the final phase of the assessment process as a foundation for formulating conclusions and recommendations that answer referral questions. Reaching this output phase requires consideration during the evaluation phase of the basis on which inferences are drawn and impressions formed, the possible effects on the findings of malingering or defensiveness, and effective ways of integrating data from diverse sources.

Basis of Inferences and Impressions

The interpretation of assessment data involves four sets of alternatives with respect to how assessors go about drawing inferences and forming impressions about what these data indicate. Interpretations can be based on either empirical or conceptual approaches to decision making; they can be guided either by statistically based decision rules or by clinical judgment; they can emphasize either nomothetic or idiographic characteristics of respondents; and they can include more or less reliance on computer-generated interpretive
statements. Effective assessment usually involves informed selection among these alternatives and some tailoring of the emphasis given each of them to fit the particular context of the individual assessment situation.

Empirical and Conceptual Guidelines

The interpretation of assessment information can be approached in several ways. In what may be called an intuitive approach, assessment decisions stem from impressions that have no identifiable basis in the data. Instead, interpretations are justified by statements like "It's just a feeling I have about her," or "I can't say where I get it from, but I just know he's that way." In what may be called an authoritative approach, interpretations are based on the pronouncements of well-known or respected assessment psychologists, as in saying, "These data mean what they mean because that's what Dr. Expert says they mean." The intuition of unusually empathic assessors and reliance on authority by well-read practitioners who choose their experts advisedly may on occasion yield accurate and useful impressions. Both approaches have serious shortcomings, however. Unless intuitive assessors can identify specific features of the data that help them reach their conclusions, their diagnostic sensitivity cannot be taught to other professionals or translated into scientifically verifiable procedures. Unless authoritative assessors can explain in their own words the basis on which experts have reached the conclusions being cited, they are unlikely to impress others as being professionally knowledgeable themselves or as knowing what to think in the absence of being told by someone else what to think.

Moreover, neither intuitive nor authoritative approaches to interpreting assessment information are likely to be as consistently reliable as approaches based on empirical and conceptual guidelines. Empirical guidelines to decision making derive from the replicated results of methodologically sound research.
When a specific assessment finding has repeatedly been found to correlate highly with the presence of a particular psychological characteristic, it is empirically sound to infer the presence of that characteristic in a respondent who displays that assessment finding. Conceptual guidelines to decision making consist of psychological constructs that provide a logical bridge between assessment findings and the inferences drawn from them. If subjectively felt distress contributes to a person's remaining in and benefiting from psychotherapy (for which there is considerable evidence; see Garfield, 1994; Greencavage & Norcross, 1990; Mohr, 1995), and if a test includes a valid index of subjectively felt distress (which many tests do), then it is reasonable to expect that a positive finding on this test index will increase the predicted likelihood of a favorable outcome in psychotherapy.

Both empirical and conceptual guidelines to interpretation bring distinct benefits to the assessment process. Empirical perspectives are valuable because they provide a foundation for achieving certainty in decision making. The adequacy of psychological assessment is enhanced by quantitative data concerning the normative distribution and other psychometric properties of measurements that reflect dimensions of psychological functioning. Lack of such data limits the confidence with which assessors can draw conclusions about the implications of their findings. Without being able to compare an individual's test responses with normative expectations, for example, or without a basis for estimating false positive and false negative possibilities in the measures they have used, assessors can only be speculative in attaching interpretive significance to their findings.
Similarly, the absence of externally validated cutting scores detracts considerably from the certainty with which assessors can translate test scores into qualitative distinctions, such as whether a person is mildly, moderately, or severely depressed.

Conceptual perspectives are valuable in the assessment process because they provide some explanation of why certain findings are likely to identify certain kinds of psychological characteristics or predict certain kinds of behavior. Having such explanations in hand offers assessors the pleasure of understanding not only how their measures work but also why they work as they do; they help assessors focus their attention on aspects of their data that are relevant to the referral question to which they are responding; and they facilitate the communication of results in terms that address characteristics of the person being examined and not merely those of the data obtained. As a further benefit of conceptual formulations of assessment findings, they foster hypotheses concerning previously unknown or unexplored linkages between assessment findings and dimensions of psychological functioning and thereby help to extend the frontiers of knowledge.

Empirical guidelines are thus necessary to the scientific foundations of assessment psychology, as a basis for certainty in decision making, but they are not sufficient to bring this assessment to its full potential. Conceptual guidelines do not by themselves provide a reliable basis for drawing conclusions with certainty. However, by enriching the assessment process with explanatory hypotheses, they point the way to advances in knowledge.

For the purposes that each serves, then, both empirical and conceptual guidelines have an important place in the interpretation of assessment information.
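The earlier point about estimating false positive and false negative possibilities can be made concrete with a short Bayes'-rule sketch. The chapter itself contains no such computation; the function name and all figures below are hypothetical, chosen only to illustrate how heavily a test finding's interpretive significance depends on base rates:

```python
# A minimal Bayes'-rule sketch of the base-rate point made above: how often
# a positive test finding actually signals the characteristic in question.
# The sensitivity, specificity, and base-rate figures are hypothetical.

def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(characteristic present | positive finding), by Bayes' rule."""
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# An index with 90% sensitivity and 90% specificity still mislabels many
# people when the characteristic is rare in the population assessed.
rare = positive_predictive_value(base_rate=0.05, sensitivity=0.90, specificity=0.90)
common = positive_predictive_value(base_rate=0.50, sensitivity=0.90, specificity=0.90)
print(f"PPV at a 5% base rate:  {rare:.2f}")   # about 0.32
print(f"PPV at a 50% base rate: {common:.2f}")  # 0.90
```

With the same hypothetical test accuracy, two out of three positive findings are false when the characteristic occurs in only 5% of the population, which is why normative and base-rate data are described above as essential to confident interpretation.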
At times, concerns about preserving the scientific respectability of assessment have led to assertions that only empirical guidelines constitute an acceptable basis for decision making and that unvalidated conceptual guidelines have no place in scientific psychology. McFall and Treat (1999), for example, maintain that "the
aim of clinical assessment is to gather data that allow us to reduce uncertainty concerning the probability of events" (p. 215). From their perspective, the information value of assessment data resides in scaled numerical values and conditional probabilities.

As an alternative point of view, let it be observed that the river of scientific discovery can flow through inferential leaps of deductive reasoning that suggest truths long before they are confirmed by replicated research findings. Newton grasped the reason that apples fall from trees well in advance of experiments demonstrating the laws of gravity, Einstein conceived his theory of relativity with full confidence that empirical findings would eventually prove him correct, and neither has suffered any challenges to his credentials as a scientist. Even though empirical guidelines are, on the average, more likely to produce reliable conclusions than are conceptual formulations, as already noted, logical reasoning concerning the implications of clearly formulated concepts can also generate conclusions that serve useful purposes and stand the test of time.

Accordingly, the process of arriving at conclusions in individual case assessment can involve creative as well as confirmatory aspects of scientific thinking, and the utilization of assessment to generate hypotheses and fuel speculation may in the course of scientific endeavor increase rather than decrease uncertainty in the process of identifying new alternative possibilities to pursue. This perspective is echoed by DeBruyn (1992) in the following comment: "Both scientific decision making in general, and diagnostic decision making in particular, have a repetitive side, which consists of formulas and algorithmic procedures, and a constructive side, which consists of generating hypotheses and theories to explain things or to account for unexpected findings" (p.
192).

Statistical Rules and Clinical Judgment

Empirical guidelines for decision making have customarily been operationalized by using statistical rules to arrive at conclusions concerning what assessment data signify. Statistical rules for interpreting assessment data comprise empirically derived formulas, or algorithms, that provide an objective, actuarial basis for deciding what these data indicate. When statistical rules are applied to the results of a psychological evaluation, the formula makes the decision concerning whether certain psychological characteristics are present (as in deciding whether a respondent has a particular trait or disorder) or whether certain kinds of actions are likely to ensue (as in predicting the likelihood of a respondent's behaving violently or performing well in some job). Statistical rules have the advantage of ensuring that examiners applying a formula correctly to the same set of data will always arrive at the same conclusion concerning what these data mean. As a disadvantage, however, the breadth of the conclusions that can be based on statistical rules and their relevance to referral questions are limited by the composition of the database from which they have been derived.

For example, statistical rules may prove helpful in determining whether a student has a learning disability, but say nothing about the nature of this student's disability; they may predict the likelihood of a criminal defendant's behaving violently, but offer no clues to the kinds of situations that are most likely to evoke violence in this particular criminal defendant; or they may help identify the suitability of a person for one type of position in an organization, but be mute with respect to the person's suitability for other types of positions in the same organization.
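The kind of actuarial formula described above can be sketched in a few lines of code. The chapter does not specify any particular rule; the predictor names, weights, and cutoff below are invented solely to illustrate the defining property of statistical rules, namely that the formula, not the examiner, makes the decision:

```python
# Sketch of an actuarial decision rule of the kind just described: an
# empirically derived formula that turns predictor scores into a fixed
# yes/no decision. The predictors, weights, and cutoff are hypothetical;
# a real rule would come from scale construction and validation research.

WEIGHTS = {"scale_a": 0.6, "scale_b": 0.3, "prior_incidents": 1.5}
CUTOFF = 4.0  # fixed by the rule in advance, not adjusted by the examiner

def actuarial_decision(scores):
    """Weighted-sum rule: identical data always yield the identical verdict,
    whoever applies the formula."""
    composite = sum(weight * scores[name] for name, weight in WEIGHTS.items())
    return composite >= CUTOFF

print(actuarial_decision({"scale_a": 3, "scale_b": 4, "prior_incidents": 1}))  # True  (composite 4.5)
print(actuarial_decision({"scale_a": 2, "scale_b": 2, "prior_incidents": 0}))  # False (composite 1.8)
```

The sketch also makes the stated disadvantage visible: the rule can answer only the single yes/no question it was built for, using only the predictors anticipated when it was built.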
In each of these instances, moreover, a statistical rule derived from a group of people possessing certain demographic characteristics (e.g., age, gender, socioeconomic status, cultural background) and having been evaluated in a particular setting may lack validity generalization to persons with different demographic characteristics evaluated in some other kind of setting. Garb (2000) has similarly noted in this regard that "statistical-prediction rules are of limited value because they have typically been based on limited information that has not been demonstrated to be optimal and they have almost never been shown to be powerful" (p. 31).

In other words, then, the scope of statistical rules is restricted to findings pertaining to the particular kinds of persons, psychological characteristics, and circumstances that were anticipated in building them. For many of the varied types of people seen in actual assessment practice, and for many of the complex and specifically focused referral questions raised about these people, statistical rules that by themselves provide fully adequate answers may be in short supply.

As a further limitation of statistical rules, they share with all quantified assessment scales some unavoidable artificiality that accompanies translating numerical scores into qualitative descriptive categories. On the Beck Depression Inventory (BDI; Beck, Steer, & Garbin, 1988), for example, a score of 14 to 19 is taken to indicate mild depression and a score of 20 to 28 indicates moderate depression. Hence two people who have almost identical BDI scores, one with a 19 and the other with a 20, will be described much differently by the statistical rule, one as mildly depressed and the other as moderately depressed.
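Using the two BDI bands just cited, the boundary artifact can be sketched directly. Only the ranges quoted in the text (14 to 19 mild, 20 to 28 moderate) are implemented; other ranges are deliberately left unlabeled rather than guessed, and the sketch is an illustration of categorical cutting scores generally, not the published BDI scoring procedure:

```python
# The BDI bands cited above (Beck, Steer, & Garbin, 1988): 14-19 mild,
# 20-28 moderate. The sketch implements only those two quoted bands to
# show how a one-point difference flips the qualitative label.

def bdi_category(score):
    """Map a BDI score to the qualitative band cited in the text."""
    if 14 <= score <= 19:
        return "mild"
    if 20 <= score <= 28:
        return "moderate"
    return "outside the cited ranges"

print(bdi_category(19))  # mild
print(bdi_category(20))  # moderate (nearly identical scores, different labels)
```

The discontinuity at the 19/20 boundary is exactly the artificiality the text describes: the underlying score scale is continuous, but the verbal categories are not.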
Likewise, in measuring intelligence with the Wechsler Adult Intelligence Scale–III (WAIS-III; Kaufman, 1990), a Full Scale IQ score of 109 calls for describing a person's intelligence as average, whereas a person with almost exactly the same level of intelligence and a Full Scale IQ of 110 falls in the high average range. According to the WAIS-III formulas, a person with a Full Scale IQ of 91 and a person with a Full Scale IQ of 119 would also be labeled, respectively, as
average and high average. Some assessors minimize this problem by adding some further specificity to the WAIS-III categories, as in labeling a 109 IQ as the high end of the average range and a 110 IQ as the low end of the high average range. Although additional categorical descriptions for more narrowly defined score ranges can reduce the artificiality in the use of statistical rules, there are limits to how many quantitative data points on a scale can be assigned a distinctive qualitative designation.

Conceptual guidelines for decision making have been operationalized in terms of clinical judgment, which consists of the cumulative wisdom that practitioners acquire from their experience. Clinical guidelines may come to represent the shared beliefs of large numbers of practitioners, but they emerge initially as impressions formed by individual practitioners. In contrast to the objective and quantitative features of statistical rules, clinical judgments constitute a subjective and qualitative basis for arriving at conclusions. When clinical judgment is applied to assessment data, decisions are made by the practitioner, not by a formula. Clinical judgments concerning the interpretive significance of a set of assessment data are consequently less uniform than actuarial decisions and less likely to be based on established fact.
On the other hand, the applicability of clinical judgments is infinite, and their breadth and relevance are limited not by any database, but only by the practitioner's capacity to reason logically concerning possible relationships between psychological characteristics identified by the assessment data and psychological characteristics relevant to addressing referral questions, whatever their complexity and specificity.

The relative merit of statistical rules and clinical judgment in the assessment process has been the subject of considerable debate since this distinction was first formulated by Meehl (1954) in his book Clinical Versus Statistical Prediction. Subsequent publications of note concerning this important issue include articles by Grove and Meehl (1996); Grove, Zald, Lebow, Snitz, and Nelson (2000); Holt (1958, 1986); Karon (2000); Meehl (1986); and Swets, Dawes, and Monahan (2000), and a book by Garb (1998) entitled Studying the Clinician. Much of the literature on this topic has consisted of assertions and rebuttals concerning whether statistical methods generally produce more accurate assessment results than clinical methods. In light of the strengths and weaknesses inherent in both statistical prediction and clinical judgment, as elaborated in the chapter by Garb in this volume, such debate serves little purpose and is regrettable when it leads to disparagement of either approach to interpreting assessment data.

As testimony to the utility of both approaches, it is important to note that the creation of good statistical rules for making assessment decisions typically begins with clinically informed selection of both (a) test items, structured interview questions, and other measure components to be used as predictor variables, and (b) psychological conditions, behavioral tendencies, and other criterion variables to which the predictor variables are expected to relate.
Empirical methods of scale construction and cross-validation are then employed to shape these clinically relevant assessment variables into valid actuarial measures of these clinically relevant criterion variables. Hence good statistical rules should almost always produce more accurate results than clinical judgment, because they encompass clinical wisdom plus the sharpening of this wisdom by replicated research findings. Clinical methods of assessment at their best depend on the impressions and judgment of individual practitioners, whereas statistical methods at their best constitute established fact that has been built on clinical wisdom. To rely only on clinical judgment in decision-making situations for which adequate actuarial guidelines are available is tantamount to playing cards with half a deck. Even the best judgment of the best practitioner can at times be clouded by inadvertent bias, insufficient awareness of base rates, and other sources of influence discussed in the final section of this chapter and elaborated in the chapter by Reynolds and Ramsay in this volume. When one is given a reasonable choice, then, assessment decisions are more advisedly based on established fact than on clinical judgment.

On the other hand, the previously noted diversity of people and of the circumstances that lead to their being referred for an evaluation means that assessment questions regularly arise for which there are no available statistical rules, and patterns of assessment data often resemble but do not quite match the parameters for which replicated research has demonstrated certain correlates. When statistical rules cannot fully answer the questions being asked, what are assessors to do in the absence of fully validating data? Decisions could be deferred, on the grounds that a sufficient factual basis for a decision is lacking, and recommendations could be delayed, pending greater certainty about what recommendation to make.
Alternatively, assessors in a situation of uncertainty can supplement whatever empirical guidelines they do have at their disposal with logical reasoning and cumulative clinical wisdom to arrive at conclusions and recommendations that are more responsive, and at least a little more likely to be helpful, than saying nothing at all.

As these observations indicate, statistical rules and clinical judgment can properly be regarded as complementary components of effective decision making, rather than as competing and mutually exclusive alternatives. Each brings value to assessment psychology and has a respectable place in it. Geisinger and Carlson (2002) comment in this regard that the time has come "to move beyond both purely judgmental,
speculative interpretation of test results as well as extrapolations from the general population to specific cases that do not much resemble the remainder of the population" (p. 254).

Assessment practice should accordingly be subjected to and influenced by research studies, lest it lead down blind alleys and detract from the pursuit of knowledge and the delivery of responsible professional service. Concurrently, however, lack of unequivocal documentation should not deter assessment psychologists from employing procedures and reaching conclusions that in their judgment will assist in meeting the needs of those who seek their help. Commenting on balanced use of objective and subjective contributions to assessment decision making, Swets et al. (2000) similarly note that "the appropriate role of the SPR [Statistical Prediction Rule] vis-à-vis the diagnostician will vary from one context to another" and that the most appropriate roles of each "can be determined for each diagnostic setting in accordance with the accumulated evidence about what works best" (p. 5). Putting the matter in even simpler terms, Kleinmuntz (1990) observed that "the reason why we still use our heads, flawed as they may be, instead of formulas is that for many decisions, choices and problems, there are as yet no available formulas" (p. 303).

Nomothetic and Idiographic Emphasis

Empirical guidelines and statistical rules constitute a basically nomothetic approach to interpreting assessment information, whereas conceptual guidelines and clinical judgment underlie a basically idiographic approach. Nomothetic interpretations address ways in which people resemble other kinds of people and share various psychological characteristics with many of them.
Hence, these interpretations involve comparisons between the assessment findings for the person being examined and assessment findings typically obtained from groups of people with certain known characteristics, as in concluding that "this person's responses show a pattern often seen in people who feel uncomfortable in social situations and are inclined to withdraw from them." The manner in which nomothetic interpretations are derived and expressed is thus primarily quantitative in nature and may even specify the precise frequency with which an assessment finding occurs in particular groups of people.

Idiographic interpretations, by contrast, address ways in which people differ from most other kinds of people and show psychological characteristics that are fairly unique to them and their particular circumstances. These interpretations typically comprise statements that attribute person-specific meaning to assessment information on the basis of general notions of psychological processes, as in saying that "this person gives many indications of being a passive and dependent individual who is more comfortable being a follower than a leader and will as a consequence probably have difficulty functioning effectively in an executive position." Deriving and expressing idiographic interpretations is thus a largely qualitative procedure in which examiners are guided by informed impressions rather than by quantitative empirical comparisons.

In the area of personality assessment, both nomothetic and idiographic approaches to interpretation have a long and distinguished tradition. Nomothetic perspectives derive from the work of Cattell (1946), for whom the essence of personality resided in traits or dimensions of functioning that all people share to some degree and on which they can be compared with each other.
Idiographic perspectives in personality theory were first clearly articulated by Allport (1937), who conceived the essence of personality as residing in the uniqueness and individuality of each person, independently of comparisons to other people. Over the years, assessment psychologists have at times expressed different convictions concerning which of these two traditions should be emphasized in formulating interpretations. Practitioners typically concur with Groth-Marnat (1997) that data-oriented descriptions of people rarely address the unique problems a person may be having and that the essence of psychological assessment is an attempt "to evaluate an individual in a problem situation so that the information derived from the assessment can somehow help with the problem" (p. 32). Writing from a research perspective, however, McFall and Townsend (1998) grant that practitioners must of necessity provide idiographic solutions to people's problems, but maintain that "nomothetic knowledge is a prerequisite to valid idiographic solutions" (p. 325). In their opinion, only nomothetic variables have a proper place in the clinical science of assessment.

To temper these points of view in light of what has already been said about statistical and clinical prediction, there is no reason that clinicians seeking solutions to idiographic problems cannot or should not draw on whatever nomothetic guidelines may help them frame accurate and useful interpretations. Likewise, there is no reason that idiography cannot be managed in a scientific fashion, nor is a nomothetic-idiographic distinction between clinical science and clinical practice likely to prove constructive in the long run. Stricker (1997) argues to the contrary, for example, that science incorporates an attitude and a set of values that can characterize office practitioners as well as laboratory researchers, and that "the same theoretical matrix must generate both science and practice activities" (p.
442).

Issues of definition aside, then, there seems little to be gained by debating whether people can be described better in