UTILIZATION OF
EVALUATION
ACCREDITATION
AND
CERTIFICATION
By Younas Ahmad
EVALUATION
 Evaluation is a systematic method for collecting,
analyzing, and using information to answer
questions about projects, policies and programs,
particularly about their effectiveness and
efficiency.
 Freeman (2004) recommends four steps of evaluation:
EVALUATION PROCESS
 Evaluation of the need for the program
 Evaluation of program design or theory (logic model)
 Evaluation of how the program is being
implemented
 Evaluation of the program's outcome or impact
EVALUATION OF THE NEED FOR THE
PROGRAM
 Needs evaluation comprises the procedures or strategies evaluators use to describe and analyze social needs.
 This step is fundamental, since evaluators must understand the need a program addresses before its effectiveness can be judged.
 Needs evaluation includes research and regular consultation with community stakeholders and with the general population that will benefit from the project, before the program can be developed and implemented.
FOUR STEPS IN CONDUCTING A NEEDS
EVALUATION
 Perform a gap analysis
 Identify priorities and importance
 Identify causes of performance problems and
opportunities
 Identify possible solutions and growth
opportunities
PERFORM A GAP ANALYSIS
 Evaluators need to compare the current situation with the desired situation. The difference, or gap, between the two will help identify the need, purpose, and aims of the program.
IDENTIFY PRIORITIES AND IMPORTANCE
 Evaluators will have identified a number of interventions that could potentially address the need, e.g. training and development, organization development, etc. These must now be examined in light of their significance to the program’s goals and constraints.
IDENTIFY CAUSES OF PERFORMANCE
PROBLEMS AND OPPORTUNITIES
 Identify specific problem areas within the need to be addressed.
 Assess the skills of the people who will be carrying out the interventions.
IDENTIFY POSSIBLE SOLUTIONS AND
GROWTH OPPORTUNITIES
 Compare the consequences of each intervention if it were implemented versus not implemented.
 Needs analysis is a very important step in evaluating programs, because the effectiveness of a program cannot be assessed unless we know what problem it was meant to solve in the first place.
EVALUATING PROGRAM THEORY
 The program theory is also called a logic model.
 The program theory provides the hypotheses to test in the impact (effect) evaluation.
 Developing a logic model can also build a common understanding among program staff and stakeholders about what the program is actually expected to do and how it should do it.
EVALUATING IMPLEMENTATION
 Process evaluation is an ongoing process in which repeated measures may be used to evaluate whether the program is being implemented successfully.
 This is particularly critical because many innovations, especially in areas like education, have many components that depend on correct implementation and will fail if earlier steps were not carried out properly.
EVALUATING THE IMPACT
(EFFECTIVENESS)
 The impact (effect) evaluation determines the fundamental impacts of the program.
 This includes trying to measure whether the program has accomplished its expected outcomes.
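One simple way to picture an impact evaluation is as a comparison of mean outcomes between program participants and a comparison group. The sketch below uses invented scores; a real evaluation would also need a credible comparison group and statistical significance testing:

```python
# Illustrative impact estimate: difference in mean outcome scores
# between participants and a comparison group (all data invented).

participants = [72, 68, 75, 80, 77]  # outcome scores after the program
comparison = [65, 70, 62, 68, 66]    # scores for a similar non-participant group

def mean(values):
    return sum(values) / len(values)

# A positive difference is (weak) evidence the program improved outcomes.
impact_estimate = mean(participants) - mean(comparison)
```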
MEASURING PROGRAM OUTCOMES
 Outcome measurement helps you understand whether or not the program is effective.
 It also clarifies your understanding of the program.
RELIABILITY
 The reliability of a measurement instrument is the
extent to which the measure produces the same
results when used repeatedly to measure the same
thing.
 The more reliable a measure is, the more reliable its findings will be.
 If a measuring instrument is unreliable, the program will appear to be less effective than it actually is.
VALIDITY
 The validity of a measurement instrument is the
degree to which it measures what it is intended to
measure.
ACCREDITATION
 Accreditation is the process by which certification of competency, authority, or credibility is granted.
 Educational accreditation is a quality-assurance process under which the administration and operations of educational institutions or programs are assessed by an external body to determine whether applicable standards are met. If standards are met, accredited status is granted by the appropriate agency.
TYPES OF ACCREDITATION
 Accreditation Visit
 An institution applying for an accreditation visit is expected to fulfill all the requirements, such as faculty, curriculum, laboratories, library, infrastructure, and other related facilities, as per the accreditation guidelines or criteria.
CONFIRMATORY VISIT
 This visit is conducted only if required by the National Computing Education Accreditation Council (NCEAC) as a result of a deferred / pended / conditional accreditation decision, based on the accreditation visit report of the program, to confirm the removal of deficiencies.
CERTIFICATION
 Certification refers to the confirmation of certain
characteristics of an object, person, or
organization.
THANK YOU
