Foundations of NBCOT Exams

Presentation about the key psychometric principles that form the foundations of the NBCOT certification examinations




The National Board for Certification in Occupational Therapy, Inc.

Foundations of the NBCOT® CERTIFICATION EXAMINATIONS

NBCOT... Serving the Public Interest

◘ Foundations of the Certification Examinations 1 www.nbcot.org ◘
Our Mission...

Above all else, the mission of the National Board for Certification in Occupational Therapy, Inc. (NBCOT®) is to serve the public interest. We provide a world-class standard for certification of occupational therapy practitioners. NBCOT will develop, administer, and continually review a certification process based on current and valid standards that provide reliable indicators of competence for the practice of occupational therapy.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise) without prior written permission of the copyright owners. ©2009 National Board for Certification in Occupational Therapy, Inc. "NBCOT®", "OTR®", and "COTA®" are service and trademarks of the National Board for Certification in Occupational Therapy, Inc. All marks are registered in the United States of America.

National Board for Certification in Occupational Therapy, Inc.
12 South Summit Avenue, Suite 100
Gaithersburg, MD 20877-4150
http://www.nbcot.org

Printed in the United States of America.
Introduction

Historically, regulation of the health professions in the United States began with the need to protect the public from under-educated and under-trained professionals. Over time, licensure, credentialing, and certification have continued that tradition of public protection while expanding their scope to continuously improve the quality of practice in the professions. Certification has become the hallmark credential for professionals in a variety of industries, often serving as a benchmark for hiring and promotion (Microsoft, 2003). Certification is a process by which key required competencies for practice are measured and the professional is endorsed by a board of his or her peers (Barnhart, 1997). Earned certification means an individual has met a specified quality standard that reflects nationally accepted practice principles and values (McClain, Richardson & Wyatt, 2004).

The purpose of awarding the credentials OCCUPATIONAL THERAPIST REGISTERED (OTR®) and CERTIFIED OCCUPATIONAL THERAPY ASSISTANT (COTA®) is to identify for the public those persons who have demonstrated the knowledge and skills necessary to provide occupational therapy services. For more than 70 years, the OTR and COTA marks have been recognized by agencies, employers, payers, and consumers as viable symbols of well-educated and currently prepared occupational therapy practitioners.

The National Board for Certification in Occupational Therapy (NBCOT®) is a not-for-profit credentialing agency responsible for the development and implementation of policies related to the certification of occupational therapy practitioners in the United States. This independent national credentialing agency awards the OTR® or COTA® certification to eligible candidates. The primary mission of NBCOT is to "serve the public interest".
NBCOT certification uses a formal process to grant a certification credential to an individual who: 1) meets academic and practice experience requirements; 2) successfully completes a comprehensive examination assessing knowledge and skills for practice; and 3) agrees to adhere to the NBCOT Candidate/Certificant Code of Conduct. Currently, 50 states, Guam, Puerto Rico, and the District of Columbia require NBCOT initial certification for occupational therapy state regulation (e.g., licensing). NBCOT has received and maintained accreditation from the National Commission for Certifying Agencies (NCCA). Accreditation ensures that a certification agency meets a recognized standard for the programs and services it employs, and that the agency is engaged in continuous review and quality improvement.

Purpose

Using a question and answer format, this monograph presents information about the key psychometric principles that form the foundations of the NBCOT certification examinations. With a strong psychometric foundation underlying a certification program's measurement process, stakeholders can be assured that each certification examination is a "high-quality test that validly serves the intended purpose" (Downing & Haladyna, 2006, p. 37). These principles are central to the development of high-stakes certification examinations.
What guidelines does NBCOT follow when developing its certification examinations?

The procedures used to prepare the NBCOT certification examinations are consistent with the technical guidelines recommended by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education (AERA, APA, NCME; 1999). Additionally, NBCOT test development and administration procedures adhere to relevant sections of the Uniform Guidelines on Employee Selection adopted by the Equal Employment Opportunity Commission, Civil Service Commission, Department of Justice, and Department of Labor (EEOC, CSC, DOJ, 1978; DOL, 1993). The NBCOT certification policies and procedures satisfy the accreditation standards of the National Commission for Certifying Agencies (NCCA).

How does NBCOT decide on the practice content to include in its certification examinations?

Credentialing organizations like NBCOT use standardized exams to determine whether a candidate possesses the minimum acceptable knowledge and skills to competently perform job requirements. For examination scores to be meaningful, exam administrators must provide ongoing evidence that examination content is valid and relates to contemporary professional practice. Following certification industry standards, NBCOT certification examinations are constructed based on the results of practice analysis studies. The ultimate goal of a practice analysis study is to ensure a representative linkage of test content to practice: making certain that the credentialing examination contains meaningful indicators of competence, and providing evidence that supports the examination's content validity with respect to current occupational therapy practice. The periodic performance of practice analysis studies assists NBCOT in evaluating the validity of the test specifications that guide the content distribution of the credentialing examinations.
Because the practice of occupational therapy changes and evolves over time, NBCOT conducts practice analysis studies on a regular basis.

When did NBCOT complete its latest practice analysis studies?

NBCOT conducted two practice analysis studies in 2007: one study examined OTR practice and the other examined COTA practice. Results from these studies were used, in part, to construct examination test blueprints for each credential for administrations starting January 2009.

How were these practice analysis studies conducted?

A practice analysis study is a detailed description of practice: what an OTR or COTA does. Building upon previous studies, a large-scale survey asked entry-level OTR and COTA certificants to evaluate their respective job requirements on criticality and frequency rating scales. The job requirements were classified as the domains, tasks, knowledge, and skills required for current occupational therapy practice relative to the respective credential:

◘ Domains broadly define the major performance components of the profession.
◘ Tasks describe activities that are performed in each domain (i.e., things that practitioners do).
◘ Knowledge statements describe the information required to perform each task competently.
◘ Skills describe the abilities needed by the certificant to implement the task.

What were the outcomes of the practice analysis studies?

The survey results were analyzed to identify the tasks most critical to, and most frequently performed by, the OTR and COTA survey respondents. Weights were then established to determine the relative proportion of test items devoted to each domain area established for the OTR and COTA examination blueprints.
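The step from domain weights to per-domain item counts can be sketched in a few lines of Python. This is an illustration, not NBCOT's procedure: the weights match the OTR blueprint percentages reported in this monograph, but the 170-item form length (borrowed from the scoring example later in the monograph) and the largest-remainder rounding rule are assumptions.

```python
# Sketch of turning blueprint weights into per-domain item counts. The
# weights are the OTR blueprint percentages reported in this monograph; the
# 170-item form length and the largest-remainder rounding are illustrative
# assumptions, not NBCOT's procedure.

def allocate_items(weights: dict[str, float], n_items: int) -> dict[str, int]:
    """Apportion n_items across domains, largest fractional remainders first."""
    raw = {d: w * n_items for d, w in weights.items()}
    counts = {d: int(x) for d, x in raw.items()}
    leftover = n_items - sum(counts.values())
    # Give the remaining items to the domains with the largest remainders.
    for d in sorted(raw, key=lambda d: raw[d] - counts[d], reverse=True)[:leftover]:
        counts[d] += 1
    return counts

otr_weights = {"D1": 0.13, "D2": 0.28, "D3": 0.39, "D4": 0.20}
print(allocate_items(otr_weights, 170))  # {'D1': 22, 'D2': 48, 'D3': 66, 'D4': 34}
```

Largest-remainder rounding guarantees the counts sum exactly to the form length while staying as close as possible to the published percentages.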
OTR examination blueprint (percent of exam by domain):

01 (13%) Gather information regarding factors that influence occupational performance
02 (28%) Formulate conclusions regarding the client's needs and priorities to develop a client-centered intervention plan
03 (39%) Select and implement evidence-based interventions to support participation in areas of occupation (e.g., ADL, education, work, play, leisure, social participation) throughout the continuum of care
04 (20%) Uphold professional standards and responsibilities to promote quality in practice

COTA examination blueprint (percent of exam by domain):

01 (33%) Gather information and formulate conclusions regarding the client's needs and priorities to develop a client-centered intervention plan
02 (47%) Select and implement evidence-based interventions to support participation in areas of occupation (e.g., ADL, education, work, play, leisure, social participation) throughout the continuum of care
03 (20%) Uphold professional standards and responsibilities to promote quality in practice

The percentage of items in each domain area shown above remains constant on each exam form of the OTR and COTA certification examination. There are multiple task, knowledge, and skill statements for each of the domain areas for the respective credentialing exam (OTR or COTA). The results of the 2007 NBCOT practice analysis study and a full overview of the validated domain, task, and knowledge statements for the OTR and COTA examinations are posted on the NBCOT website (www.nbcot.org).

Do outcomes from the practice analysis impact the examination pass point?

After establishing examination blueprint specifications as part of the practice analysis, and prior to administering the first exam form based on these specifications, a performance standard, or pass point, for the examination must be determined. The NBCOT certification examinations are criterion referenced. This means a candidate's performance is compared to a pre-established minimum standard.
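Under a modified Angoff procedure, each panel judge estimates, item by item, the probability that a minimally acceptable candidate would answer correctly; summing a judge's ratings and averaging across judges yields a recommended raw cut score. A minimal sketch with invented ratings (not NBCOT data):

```python
# Sketch of a modified Angoff computation. Each judge estimates, for every
# item, the probability that a "minimally acceptable" candidate answers it
# correctly; each judge's ratings are summed, and the judges' sums are
# averaged to yield a recommended raw cut score. Ratings are invented.

def angoff_cut_score(ratings: list[list[float]]) -> float:
    """ratings[j][i] = judge j's probability estimate for item i."""
    judge_sums = [sum(judge) for judge in ratings]
    return sum(judge_sums) / len(judge_sums)

ratings = [
    [0.6, 0.7, 0.5, 0.9],  # judge 1
    [0.5, 0.8, 0.6, 0.9],  # judge 2
    [0.7, 0.6, 0.5, 0.9],  # judge 3
]
print(round(angoff_cut_score(ratings), 2))  # 2.73 on this 4-item example
```

The result is a criterion-referenced standard: it depends only on the judges' item-level judgments about a minimally acceptable candidate, not on how any particular cohort of candidates performs.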
The minimum passing score represents an absolute standard and does not depend on the performance of other candidates taking the same examination. In order to pass the OTR or COTA certification examination, a candidate must obtain a score equal to or higher than the minimum passing standard, or cut score. NBCOT sets the minimum passing score on the certification examination based on the outcomes of a Standard Setting Study. The methodology used for the most recent standard setting studies, the modified Angoff methodology, required panels of subject matter experts to make judgments about the examination items using the "minimally acceptable" candidate response as a benchmark (Impara & Plake, 1997). The outcomes of these studies established the recommended passing scores on which the NBCOT Board of Directors based the final cut score decisions.

Is there more than one version of the OTR or COTA examination?

In certification programs, including NBCOT's, multiple forms of a test are often developed subsequent to the anchor form and its accompanying pass point study. These different forms are developed to ensure the
security and integrity of examinations. Every new form of the certification examination employs a unique combination of items, so that from one administration to another no two forms are identical.

How does NBCOT ensure examination versions are comparable?

Although different forms of a given test are built to have similar psychometric qualities, they cannot be expected to be precisely equivalent in the level and range of difficulty of the unique set of test questions comprising them. If candidates took a form of the test that was more difficult than a previous version, they would be at a disadvantage relative to previous candidates unless some type of adjustment were made. When comparing scores on different test forms, it is necessary to make them equivalent through an appropriate equating method. Equating methods measure the difficulty of each form and adjust the passing score as needed, so that the passing score reflects the same level of candidate performance regardless of form difficulty. Although the mathematical procedures behind equating are not simple, the basic premise for using it is: equitability. Equating enables NBCOT to maintain the same passing standard across different examination forms, ensuring that scores on each examination form are comparable and fair, and that no candidate is given an unfair advantage or disadvantage because one form of the examination is easier or harder than another.

How does NBCOT develop questions for the OTR and COTA examinations?

Questions, or "items", for NBCOT examinations are developed, reviewed, and tested through a rigorous process designed to validate that the knowledge and tasks measured are compatible with the domain-level blueprint specifications and meet validation criteria related to criticality and frequency.
For purposes of item validation, criticality is defined as the degree to which a member of the public or other stakeholder would be physically, emotionally, or financially harmed if the certificant failed to perform the task competently. Frequency is defined as the amount of time a competent practitioner spends performing the task. In addition, each item is reviewed against the fairness-in-testing standards outlined in the Standards for Educational and Psychological Testing (AERA, APA, NCME, 1999).

NBCOT follows a variety of guidelines during item development to ensure that questions are appropriate for all candidates. Knowing that all examinations are embedded in a culture, NBCOT takes the cultural fairness of its examinations into account during item development. NBCOT adheres to item-writing, test-development, and review procedures that ensure the readability, neutral language, and universal accuracy of its items. Additional fairness criteria include, but are not limited to:

◘ Editing items for issues of bias and stereotyping;
◘ Coding items to the approved examination blueprint specifications;
◘ Referencing items to approved and published resources in occupational therapy;
◘ Selecting item writers who are OTR and COTA practitioners and educators from diverse geographical areas, practice experiences, and cultural backgrounds; and
◘ Field testing items prior to their use as scored items.

NBCOT begins the item writing process by providing extensive training to its appointed OTR and COTA content experts. At a minimum, item development training includes instruction on the following topics:

◘ Higher-level thinking;
◘ Stylistic and technical guidelines for item development;
◘ Bias and stereotyping;
◘ Item validity; and
◘ Classification coding.

After completing the initial training, item writers are given assignments to develop items related to a specific domain, task, and knowledge area based on item bank needs. All new items must be supported by a current occupational therapy textbook reference commonly used in OT educational programs. A numeric designation, or "classification", is assigned to each item, tying the item to the specific OTR or COTA validated domain, task, and knowledge statement. An item banking system is used to classify and store items for use on an exam form. Prior to entering the item bank, an item must undergo several group validation reviews and receive a final editorial and psychometric review.

How are NBCOT certification examinations constructed?

OTR and COTA certification examinations are constructed using a combination of scored (pre-equated) items and non-scored (field-test) items. Items for each examination are selected from the respective banks in proportions that reflect the requirements of the OTR and COTA blueprints. Items selected as scored items for each examination must have been pre-tested on a sufficiently large sample of candidates and have acceptable item-level classical statistics. An appointed committee of OTR and COTA content experts, the Certification Examination Validation Committee (CEVC), then validates each item and reviews the entire examination to ensure the exam content meets blueprint specifications. Items that do not meet validation criteria, overlap with other items, or cue the content of a subsequent item are replaced with a more suitable item from the bank. After the CEVC validates an exam form, the exam as a whole receives a final psychometric review before it is released for administration. Items that do not meet validation, content, or psychometric criteria are returned to NBCOT item writers for further revision and validation.
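As an illustration of classification coding, an item-bank record might carry its blueprint linkage as structured fields. The schema below is entirely hypothetical; the field names, code format, and lifecycle states are invented for illustration, not NBCOT's actual item-banking system.

```python
# Hypothetical sketch of an item-bank record carrying the classification
# described above: a numeric designation tying the item to its validated
# domain, task, and knowledge statements. Field names and the code format
# are invented for illustration, not NBCOT's actual schema.
from dataclasses import dataclass

@dataclass
class BankedItem:
    item_id: str
    credential: str        # "OTR" or "COTA"
    domain: int            # blueprint domain number
    task: int              # validated task statement number
    knowledge: int         # knowledge statement number
    reference: str         # supporting published OT resource
    status: str = "draft"  # draft -> validated -> field_test -> scored

    def classification(self) -> str:
        """Numeric designation linking the item to its blueprint statements."""
        return f"{self.credential}-{self.domain:02d}.{self.task:02d}.{self.knowledge:02d}"

item = BankedItem("A123", "OTR", 2, 4, 7, "Example OT textbook (hypothetical)")
print(item.classification())  # OTR-02.04.07
```

Storing the classification as separate fields, rather than a single string, makes it straightforward to select items in blueprint proportions when assembling a form.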
The revised items must undergo initial review, validation, field testing, and psychometric analysis prior to being used as scored items on an examination form.

What validation criteria does NBCOT use during the item and exam review process?

Item reviewers and validation committee participants apply the same criteria used to validate the domain, task, and knowledge statements during the practice analysis study (NBCOT, 2007). Each item is reviewed and rated on criticality and frequency scales. As mentioned previously, criticality is defined as the degree to which a member of the public or other stakeholder would be physically, emotionally, or financially harmed if the certificant failed to perform the task competently. Frequency is defined as the amount of time a competent practitioner spends performing the task.

What evidence does NBCOT use to ensure the exam is valid and reliable?

Reliability and validity are measurement terms typically used when discussing test development standards. These measurements provide NBCOT with evidence directly related to the issues of fairness and accuracy, and provide the foundation for the NBCOT examinations. As described in previous sections, the procedures NBCOT employs to complete its practice analysis studies, set passing standards, and select items for an exam form underpin the validity of the NBCOT examinations.
[Figure: Examination Development Process. Five stages:

Practice Analysis
◘ Conduct study to validate occupational therapy practice
◘ Identify domain, task, knowledge, and skill statements
◘ Establish blueprint specifications
◘ Link exam item content to domain, task, knowledge, and skill statements

Establish Cut Score
◘ Panels recommend passing standard against benchmark of minimally acceptable candidate
◘ Use modified Angoff procedure
◘ Provides "anchor" for subsequent versions of the examination
◘ Apply passing standard to equated forms

Develop New Items
◘ Write new items based on blueprint specifications
◘ Assess cognitive level of items
◘ Review, validate, and reclassify existing items
◘ Complete item content review
◘ Conduct test of fairness

Construct Exam Forms
◘ Select scored items based on blueprint
◘ Select items for field testing
◘ Apply statistical procedures to equate new form to difficulty level of anchor form
◘ CEVC validates exam content
◘ Inventory items

Review Item Performance
◘ Review item performance data
◘ Assess reliability using KR 20 and split-half estimates
◘ Analyze item difficulty and discrimination using classical statistics
◘ Use IRT methods to equate individual test items

Underlying activities: Establish Examination Blueprint; Conduct Standard Setting; Complete Statistical Analysis; Review Item Properties; Equate Exam Forms (Psychometric Analysis).]

While reliability and validity are different, they are also highly interdependent. An assessment of validity tells us whether we are measuring what we intend to measure (in this case, occupational therapy practice), whereas reliability indicates the accuracy of that measurement. NBCOT uses two statistical measures, the Kuder-Richardson Formula 20 (KR 20) and split-half reliability estimates, to assess the reliability of each OTR and COTA certification examination form. These estimates yield evidence regarding the internal consistency of the NBCOT exams.
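The two internal-consistency estimates named above can be computed directly from a matrix of scored responses. A self-contained sketch on a tiny invented data set (real NBCOT analyses would run on full candidate cohorts):

```python
# Sketch of the two internal-consistency estimates named above: KR-20 and a
# Spearman-Brown-corrected split-half coefficient. The 0/1 response matrix
# (rows = candidates, columns = items) is invented for illustration.

def kr20(responses: list[list[int]]) -> float:
    """Kuder-Richardson Formula 20 for dichotomously scored items."""
    n_people, n_items = len(responses), len(responses[0])
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n_people
    var = sum((t - mean) ** 2 for t in totals) / n_people  # total-score variance
    pq = 0.0
    for i in range(n_items):
        p = sum(row[i] for row in responses) / n_people    # item p-value
        pq += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - pq / var)

def split_half(responses: list[list[int]]) -> float:
    """Correlate odd/even half-test scores, then project to full test length."""
    odd = [sum(row[0::2]) for row in responses]
    even = [sum(row[1::2]) for row in responses]
    n = len(odd)
    mo, me = sum(odd) / n, sum(even) / n
    cov = sum((a - mo) * (b - me) for a, b in zip(odd, even))
    so = sum((a - mo) ** 2 for a in odd) ** 0.5
    se = sum((b - me) ** 2 for b in even) ** 0.5
    r = cov / (so * se)          # correlation between the two half-tests
    return 2 * r / (1 + r)       # Spearman-Brown correction

responses = [
    [1, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
print(f"KR-20: {kr20(responses):.3f}  split-half: {split_half(responses):.3f}")
```

Both statistics range over roughly the same 0-to-1 scale, with higher values indicating that the items behave as a homogeneous measure of the same underlying knowledge.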
Internal consistency refers to the degree of homogeneity, or comparability, among test items. NBCOT examinations consistently demonstrate an acceptable level of reliability, which suggests that the exams assess comparable knowledge and skills related to the competent practice of occupational therapy. As part of these procedures, NBCOT collects two different types of statistics on each scored OTR and COTA examination item: Item Response Theory (IRT) statistics and classical statistics. IRT statistics are used primarily to screen field-tested items for exam selection. Classical statistics provide item-level information based on candidate responses and are used to identify potential item deficiencies.
How does IRT methodology contribute to the exam process?

Using IRT equating, psychometricians are able to pre-equate tests: to establish the passing point and other score values before an exam form is administered. Pre-equating requires that tests be comprised of items that have been field-tested (unscored) on previous versions of the examination. Pre-equating not only facilitates scoring, but gives test developers the advantage of ensuring that the items selected for an exam have functioned as expected based on item difficulty and item discrimination statistics. Thus pre-equating contributes to the reliability and validity of the examination.

For licensure and certification tests, equating detects and corrects for changes in test difficulty based on the item characteristic curve (ICC): the relationship between item difficulty and individual ability. In other words, a candidate's probability of answering a given question correctly depends on the individual's ability and the characteristics of the item. A criterion level of performance must be translated into a passing score for each test form in a manner that is not influenced by changes in the ability range of the candidates taking each form.

Under the IRT framework, new forms of a test can be equated to an anchor form as soon as the new forms are assembled. This approach allows test developers to construct new forms with known psychometric properties. All eligible items in the item bank are calibrated to a common scale using existing performance data. For each item, statistics are generated that describe the item's difficulty and discriminating power, and how well the item fits the chosen IRT model. New forms of the test are constructed by selecting items from the calibrated bank according to the blueprint specifications. IRT statistics provide test developers with valuable information on the psychometric properties of each item in the bank.
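The item characteristic curve at the heart of IRT can be illustrated with the two-parameter logistic (2PL) model, one common choice; the monograph does not name the specific IRT model NBCOT uses, so the model and parameter values here are assumptions for illustration.

```python
# Sketch of an item characteristic curve under the two-parameter logistic
# (2PL) IRT model: P(correct) as a function of candidate ability (theta),
# item discrimination (a), and item difficulty (b). The monograph does not
# name NBCOT's IRT model, so the 2PL and the parameter values are assumptions.
import math

def icc_2pl(theta: float, a: float, b: float) -> float:
    """Probability that a candidate of ability theta answers the item correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the probability is exactly 0.5; a larger "a" steepens the
# curve, i.e. the item discriminates more sharply around its difficulty.
for theta in (-2.0, 0.0, 2.0):
    print(f"theta={theta:+.1f}  P={icc_2pl(theta, a=1.2, b=0.0):.3f}")
```

Because every banked item's (a, b) parameters are estimated on a common ability scale, forms assembled from the calibrated bank are comparable without further post-administration equating.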
Access to this information during test construction facilitates the selection of appropriate items for new test forms. Any form constructed with items drawn from a calibrated bank is already equated to the anchor form, so no additional test equating is required prior to scoring. To maintain test security and to keep abreast of developments in the profession, new items are added to the bank and obsolete or overexposed items are periodically retired. Any new or revised items are initially administered as field-test (unscored) items embedded within an operational form of the test. These items are then calibrated before being included in the operational test item pool. This enhances test reliability and validity because only items with known psychometric properties are selected when constructing new forms; poorer-performing items can be returned to the item writers for revision.

Are there any other benefits to using IRT methodology?

In addition to enhancing the reliability of an exam, building pre-equated test forms using IRT significantly reduces the waiting period between the initial administration of a new form and the release of score reports. Because item analysis, verification of answer keys, and test equating are completed before new forms are administered, score reports can be issued with minimal delay.

If the tests are pre-calibrated, why doesn't NBCOT provide test scores immediately upon completion of the exam?

Calibration of the test items using IRT statistics is only one of the quality control procedures NBCOT uses to ensure candidates receive accurate scoring information. Additional quality control measures regarding test administration processes and procedures take place after an examination is administered and before an official score report is sent, in order to ensure each candidate receives accurate information about his or her final score.
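One common way to screen field-tested items before they join the scored pool is to compute each item's difficulty (proportion correct) and a discrimination index, then compare both against acceptance thresholds. The statistics and thresholds below are illustrative assumptions, not NBCOT's actual metric.

```python
# Illustrative screening of a field-test item using classical statistics:
# difficulty as the proportion correct (p-value) and discrimination as the
# upper-group minus lower-group proportion correct. The acceptance
# thresholds are invented, not NBCOT's actual metric.

def screen_item(item_scores: list[int], total_scores: list[int],
                p_range: tuple[float, float] = (0.25, 0.90),
                min_disc: float = 0.15) -> bool:
    """Return True if the item's difficulty and discrimination look acceptable."""
    n = len(item_scores)
    p = sum(item_scores) / n                      # difficulty (p-value)
    # Split candidates into lower/upper halves by total test score.
    order = sorted(range(n), key=lambda i: total_scores[i])
    half = n // 2
    lower = sum(item_scores[i] for i in order[:half]) / half
    upper = sum(item_scores[i] for i in order[-half:]) / half
    disc = upper - lower                          # discrimination index
    return p_range[0] <= p <= p_range[1] and disc >= min_disc

item = [1, 1, 1, 0, 1, 0, 0, 0]            # 0/1 scores on the field-test item
totals = [90, 85, 80, 75, 70, 65, 60, 55]  # candidates' total test scores
print(screen_item(item, totals))           # True: p = 0.50, disc = 0.50
```

An item that nearly everyone answers correctly, or that stronger candidates answer no more often than weaker ones, would fail this screen and go back to the item writers.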
Based on the extensive quality control measures in place for examination scoring,
candidates can be assured of accurate score reporting. Therefore, NBCOT does not offer a hand-scoring option for re-scoring of the examinations.

Why are field-test (unscored) items used on an exam?

Test developers include a pre-selected number of field-test items on each exam form. Although these items are not considered when scoring candidates' exams, performance data on them is collected and analyzed. This item-level analysis is another quality control step that NBCOT uses to preserve the reliability of the examination. Statistics based on candidate responses help test developers identify items that need further revision prior to their use as scored items on the exam. Furthermore, candidate responses provide information about the difficulty of each item and how well the item discriminates between candidates who have a strong knowledge of the overall exam content and those whose knowledge is weaker. Once a sufficient number of responses are collected on an item, the item statistics are reviewed against a pre-determined metric. Items with statistics falling below this metric are returned to item writers for further revision and subsequent field-testing. Items meeting the metric enter the pool of items that can be used as scored items on subsequent tests. Including field-test items in an exam not only enhances the reliability of the exam, but provides another level of fairness to the candidates taking it.

How are NBCOT examinations scored?

Scores for the NBCOT examinations are reported on a scale from 300 to 600. A total scaled score of at least 450 is required to pass the OTR or COTA certification examination. Candidates with a total scaled score of less than 450 have not met the minimum passing standard and therefore do not pass the examination.
Candidates who are unsuccessful in passing the exam receive a score report indicating their total scaled score on the examination plus a scaled score for each domain area represented. These domain-level scores provide feedback to help candidates assess their areas of strength and weakness. It is important to note that the passing standard is based on the total test scaled score; candidates do not have to pass each specific domain area in order to achieve a successful outcome on the examination.

What is the benefit of using a scaled scoring method instead of just a raw score or percentage score?

A raw score is simply the number of correct answers obtained by the candidate (e.g., 117 out of 170 questions answered correctly). As mentioned in a previous section, NBCOT has multiple forms of the OTR and COTA examinations, and new forms may vary slightly in their level of difficulty from earlier forms. A given raw score or percentage on one form of the examination may not be comparable to the same raw score or percentage on another form. To ensure that scores on different forms of the examination have the same meaning, raw scores are converted to scaled scores that represent equivalent levels of achievement regardless of differences in test form difficulty. This is done using the statistical process called equating, described above. Equating provides a standard range of scores that allows a direct comparison of results from one examination form to another.

Scaling is the process of converting raw scores from one scale to another. In the same way that distance can be expressed in miles or kilometers, or temperature in Celsius or Fahrenheit, test scores can be reported as raw scores or as equivalent scaled scores. A single scale is used for all forms of the examination: NBCOT uses a scale of 300 to 600, with 450 set as the minimum passing standard (pass point).
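The raw-to-scale conversion can be sketched as a piecewise-linear mapping in which the form's raw cut score maps to 450 and the lowest and highest raw scores map to 300 and 600. The specific raw values below (a cut of 117 on a 170-item form) are illustrative, not an actual NBCOT cut score.

```python
# Sketch of the piecewise-linear raw-to-scale conversion: the form's raw cut
# score maps to 450, and the lowest and highest raw scores map to 300 and
# 600. The raw values below (cut of 117 on a 170-item form) are illustrative.

def scale_score(raw: int, raw_min: int, raw_cut: int, raw_max: int) -> float:
    """Convert a raw score to the 300-600 reporting scale (pass point 450)."""
    if raw <= raw_cut:
        # Stretch [raw_min, raw_cut] onto [300, 450].
        return 300 + 150 * (raw - raw_min) / (raw_cut - raw_min)
    # Stretch [raw_cut, raw_max] onto [450, 600].
    return 450 + 150 * (raw - raw_cut) / (raw_max - raw_cut)

for raw in (40, 117, 150, 170):
    print(raw, round(scale_score(raw, raw_min=40, raw_cut=117, raw_max=170), 1))
```

Because each form's own raw cut is anchored to 450, a scaled score of 450 carries the same meaning on every form, even though the underlying raw cut can differ with form difficulty.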
The raw score needed to pass each examination form is set to the minimum passing score on the scale (450). The highest and lowest raw scores for a specific exam form are then set, respectively, to the upper (600) and lower (300) limits of the scale. It is important to recognize that the pass point of 450 does not
simply represent 75%, or 450 points out of a possible 600 (450/600). Similarly, the number of questions answered correctly, or the percentage of items needed to achieve a passing score, cannot be calculated from a specific scaled score alone. The benefit of using a scaled score is to give candidates information about their performance in relation to the absolute minimum passing standard set for OTR or COTA certification.

Summary

This monograph addresses questions commonly asked about the foundational processes and procedures NBCOT employs for its certification examinations. NBCOT examinations are "high stakes" examinations. To ensure their defensibility, NBCOT uses rigorous psychometric methods and applies multiple levels of quality control during every aspect of the examination development process. Adhering to accredited certification standards, NBCOT provides a world-class standard for certification of occupational therapy practitioners. Through continuous review of standards and processes, stakeholders can be assured that the NBCOT certification examinations are valid and reliable measures of occupational therapy practice.
References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (1999). Standards for Educational and Psychological Testing. Washington, DC: AERA.

Angoff, W. H. (1971). Scales, norms, and equivalent scores. In R. L. Thorndike (Ed.), Educational Measurement (2nd ed.). Washington, DC: American Council on Education.

Barnhart, P. A. (1997). The Guide to National Professional Certification Programs (2nd ed.). Amherst, MA: HRD Press.

Downing, S. M., & Haladyna, T. M. (2006). Handbook of Test Development. Mahwah, NJ: Lawrence Erlbaum Associates.

Equal Employment Opportunity Commission, Civil Service Commission, Department of Justice, & Department of Labor (1978). Uniform Guidelines on Employee Selection Procedures. 41 CFR Part 603.

Equal Employment Opportunity Commission, Office of Personnel Management, U.S. Department of Justice, & U.S. Department of Labor (1979). Questions and Answers Clarifying and Interpreting the Uniform Guidelines on Employee Selection Procedures. 29 CFR Part 1607 (1988).

Impara, J. C., & Plake, B. S. (1997). Standard setting: An alternative approach. Journal of Educational Measurement, 34, 353-366.

McClain, N., Richardson, B., & Wyatt, J. (2004, May-June). A profile of certification for pediatric nurses. Pediatric Nursing, 207-211.

Microsoft (2003). Microsoft certifications: Benefits of certification. Retrieved from www.microsoft.com/traincert.

NBCOT (2008). Executive Summary for the Practice Analysis Study: Certified Occupational Therapy Assistant COTA®. http://www.nbcot.org

NBCOT (2008). Executive Summary for the Practice Analysis Study: Registered Occupational Therapist OTR®. http://www.nbcot.org

National Organization for Competency Assurance (2004). Standards for the Accreditation of Certification Programs. Washington, DC: NOCA.

U.S. Department of Labor (1993). JTPA: Improving assessment: A technical assistance guide. Washington, DC: Author.
National Board for Certification in Occupational Therapy, Inc.
12 South Summit Avenue, Suite 100
Gaithersburg, MD 20877-4150
P: 301.990.7979  F: 301.869.8492
www.nbcot.org