LESSON 6 JBF 361.pptx
1. LESSON FIVE
TESTS
A test refers to a tool, technique, or method intended to measure students' knowledge or their ability to complete a particular task. In this sense, testing can be considered a form of assessment. A test should meet some basic requirements such as validity and reliability. A test can also be described as a method of collecting data for evaluation.
4. TYPES OF TESTS
Criterion-referenced test
Norm-referenced test
Aptitude test
Intelligence test
Achievement test
5. Criterion-referenced test
This type of test compares a student's academic achievement to a set of criteria or standards. It measures learners' performance against a fixed set of predetermined criteria or learning standards, checking what learners are expected to know and be able to do at a specific time. In this test, the pupil's ability is measured against a criterion, that is, a specific body of knowledge or skill. The test usually measures what students know or can do in a specific domain of learning.
6. Norm-referenced test
This type of test compares a student's performance to that of other students; the norm could be a national average score in a particular subject. Standardized tests are norm-referenced tests when grades are assigned to a pupil based on comparison with other pupils. State examinations such as the BECE and WASSCE are norm-referenced tests.
Norm referencing can also be used in a classroom when a teacher compares a pupil's performance to that of others in the class. Norm-referenced tests used by states and nations are more reliable and tend to be valid because they are based on large populations.
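The contrast between criterion-referenced and norm-referenced interpretation of the same raw score can be sketched in a few lines of code. This is a hypothetical illustration, not part of the lesson: the class scores, the 70-mark cutoff, and the pupil's score are all invented for the sketch.

```python
def criterion_referenced(score, cutoff=70):
    """Criterion-referenced: pass/fail against a fixed standard,
    regardless of how the rest of the class performed."""
    return "mastered" if score >= cutoff else "not yet mastered"

def norm_referenced(score, class_scores):
    """Norm-referenced: percentile rank, i.e. the share of the
    class scoring below this pupil."""
    below = sum(1 for s in class_scores if s < score)
    return round(100 * below / len(class_scores))

# Invented class of ten pupils and one pupil's raw score
class_scores = [45, 52, 58, 63, 68, 72, 75, 81, 88, 94]
pupil = 72

print(criterion_referenced(pupil))           # judged against the cutoff
print(norm_referenced(pupil, class_scores))  # judged against classmates
```

The same score of 72 yields "mastered" against the fixed criterion, but only a middling percentile against the class norm, which is exactly the distinction the two slides draw.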
7. Aptitude tests
Aptitude tests predict achievement. These tests may stress what is not taught in schools. For example, many applicants to foreign schools are asked to take scholastic aptitude tests such as the SAT and the Test of English as a Foreign Language (TOEFL).
8. Intelligence tests
Intelligence tests are used only for special testing or placement of students. They are used nowadays by most schools.
9. Achievement tests
Achievement tests have replaced intelligence tests. They provide information about present achievement or past learning on a cumulative basis. They deal with content that schools teach or should be teaching.
10. Functions of tests in Home Economics education
To the Learner:
Tests assign students grades or rank them.
They provide feedback to students on their strengths and weaknesses.
They reduce fear and anxiety.
Test results serve as a source of motivation.
They enable learners to acquire good learning habits.
11. Self-assessment questions
Differentiate between norm-referenced and criterion-referenced tests.
Identify two (2) functions of tests to the learner and the teacher respectively in Home Economics education.
12. To the Home Economics Teacher:
Tests help the teacher to identify whether the instructional objectives have been achieved.
They help the teacher to evaluate himself or herself and to improve the methods and techniques of teaching.
They enable the teacher to give feedback to the parents of learners.
They help the teacher to grade the students.
They help the teacher to find out the level of the pupils.
They help assign pupils to specific learning groups.
They help the teacher identify students' interests.
14. LESSON 6
TEST VALIDITY AND RELIABILITY
Validity
Validity refers to the degree to which evidence and theory support the interpretation of test scores entailed by proposed uses of tests. In other words, validity refers to the soundness or appropriateness of your interpretations and uses of students' assessment results. Validity therefore emphasizes the results of your assessment, which you interpret, and not the instrument or procedure itself.
The process of validation therefore involves accumulating evidence to provide a sound scientific basis for the proposed score interpretation. Validity involves a judgment that one makes concerning the interpretation and uses of assessment results after considering evidence from all relevant sources.
15. Types of Validity
Content-related validity
Criterion-related validity
Construct-related validity
Content validity evidence:
This relates to how adequately the content of a test, and the responses to it, sample the domain about which inferences are to be made. In other words, content validity refers to the extent to which students' responses to the items of a test may be considered a representative sample of their responses to a real or hypothetical universe. In classroom assessment, the curriculum and instruction determine the domain of achievement tasks.
16. Criterion-related validity:
This is a type of evidence that pertains to the empirical technique of studying the relationship between the test scores, or some other measure, and some independent external measure such as intelligence scores or university grade point average. It answers the question of how well the results of an assessment can be used to infer or predict an individual's standing on one or more outcomes other than the assessment procedure itself. The outcome is called the criterion.
There are two types of criterion-related evidence.
17. TYPES OF VALIDITY
Concurrent validity: refers to the extent to which individuals' current status on a criterion can be estimated from their current performance on an assessment instrument. With concurrent validity evidence, both the test scores and the criterion scores are collected at the same time.
Predictive validity: refers to the extent to which individuals' performance on a criterion can be predicted from their prior performance; with predictive validity evidence, the criterion data are gathered at a later date. For example, performance in BDT Home Economics may be used to predict pupils' selection into the senior high school Home Economics programme.
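Criterion-related (predictive) validity evidence is usually quantified as the correlation between prior scores and later criterion scores. The sketch below uses Pearson's r, a standard index for this purpose (not something the slide prescribes), on invented BDT and senior high school score lists.

```python
import statistics

# Hypothetical data: prior BDT Home Economics scores and later SHS
# programme scores for the same six pupils (values invented for this sketch)
bdt = [55, 62, 70, 74, 80, 85]
shs = [50, 60, 66, 72, 78, 88]

def pearson_r(x, y):
    """Pearson correlation coefficient: the usual index of
    criterion-related validity evidence."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

r = pearson_r(bdt, shs)
print(round(r, 2))  # a value near 1.0 is strong predictive-validity evidence
```

With concurrent validity the two score lists would simply be collected at the same time; the computation itself is identical, which is why the two are grouped as criterion-related evidence.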
18. Construct-related validity:
This type of evidence refers to how well the assessment result can be interpreted as reflecting an individual's status regarding an educational trait, attribute, or mental process, for example, reading comprehension, honesty, creativity, and health.
19. Factors that affect validity
1. Factors in the assessment instrument itself: A test or assessment task may appear to be measuring the subject matter content and the expected outcome. However, certain factors in the items can prevent them from functioning as the assessor intended. These factors tend to lower the validity of the uses and interpretation of the results. They include:
Unclear directions
Difficult reading vocabulary and sentence structure
Ambiguity of items
Inadequate time limits
Test items that are too difficult
Poor construction of test items
Test items inappropriate for the learning outcomes
A test that is too short
Improper arrangement of test items
Identifiable patterns of items
20. 2. How the items function in relation to what has been taught
Teachers, for example, establish learning outcomes to be attained by the end of their lesson. The tasks in the test should measure those content areas and their related learning outcomes.
3. Factors in the administration of the assessment instrument
The administration of an assessment or test may introduce factors that tend to lower the validity of the interpretation of the results. With regard to teacher-made tests, factors such as insufficient time, unfair assistance to individual students who ask for help, cheating, poor lighting and ventilation of the testing room, and disruptive noise during testing tend to lower the validity of the results.
21. 4. Factors in students' responses
These are factors inherent in students that tend to affect their performance during a test. Such factors include emotional disturbance, over-anxiety, and level of motivation.
5. Factors in the scoring of an assessment
Scoring may introduce factors that have a detrimental effect on the validity of the results, particularly the scoring of constructed responses (essay and performance assessments).
6. The nature of the group
Validity is always specific to a particular group and a particular purpose. So the characteristics of a group, such as age, gender, ability level, educational background, and cultural background, are important in establishing the validity of assessment results. If the assessment results are interpreted and used without due consideration of these group characteristics, the validity may be lowered.
22. Reliability:
Reliability refers to the consistency of assessment scores over time on a population of individuals or groups. In general, reliability refers to the degree to which assessment results are the same when:
They complete the same task(s) on two different occasions
They complete different but equivalent (alternative) tasks on the same or different occasions
Two or more assessors score (mark) their performance on the same task(s)
In relation to testing, reliability refers to the consistency of the scores obtained by individuals when examined with the same test on different occasions or with alternate forms. Reliability therefore implies the exactness with which some trait is measured; there should be reason to believe that the test score is stable and trustworthy over time on a population of individuals or groups.
23. Types of reliability
There are four main types of reliability. Each can be estimated by comparing different sets of results produced by the same method:
Test-retest reliability
Inter-rater reliability
Parallel forms reliability
Internal consistency
24. Test-retest reliability:
This measures the consistency of results when you repeat the same test on the same sample at a different point in time. It is used when you are measuring something you expect to stay constant in your sample. Example: a test of the grades of trainee teacher applicants should have high test-retest reliability because performance is a trait that does not change over time.
Inter-rater reliability:
Also known as inter-observer reliability, this measures the degree of agreement between different people observing the same thing. It is used when data are collected by researchers assigning ratings, scores, or categories to one or more variables. For example, a team of tutors observes self-made garments produced by student teachers to record the fit of each garment. Rating scales can be used with a set of criteria to assess various processes and fashion features of the garments. If the results of different tutors assessing the same student are compared and there is a strong correlation between all sets of results, the test has high inter-rater reliability.
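As a rough sketch of the inter-rater idea, two tutors' ratings of the same garments can be compared item by item. The ratings below are invented, and simple percent agreement is used only for illustration; in practice a chance-corrected index such as Cohen's kappa is often preferred.

```python
# Hypothetical: two tutors independently rate the fit of the same
# eight garments on a 1-5 scale (ratings invented for this sketch)
tutor_a = [4, 3, 5, 2, 4, 5, 3, 4]
tutor_b = [4, 3, 4, 2, 4, 5, 3, 5]

# Percent agreement: the share of garments on which the two tutors
# gave exactly the same rating
matches = sum(1 for a, b in zip(tutor_a, tutor_b) if a == b)
agreement = matches / len(tutor_a)
print(f"{agreement:.0%}")  # high agreement suggests high inter-rater reliability
```

Here the tutors agree on 6 of 8 garments (75%); low agreement would signal that the rating criteria need to be made more explicit before the scores can be trusted.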
25. Parallel forms reliability:
This measures the correlation between two equivalent versions of a test. It is used when you have two assessment tools or sets of questions designed to measure the same thing. For example, a set of questions is formulated to measure financial risk aversion in a group of respondents. The questions are randomly divided into two sets, and the respondents are randomly divided into two groups. Both groups take both tests: Group A takes Test A first and Group B takes Test B first. If the results of the two tests are compared and found to be almost identical, this indicates high parallel forms reliability.
Internal consistency:
This assesses the correlation between multiple items in a test that are intended to measure the same construct. It can be calculated without repeating the test or involving other researchers.
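Internal consistency is most commonly reported as Cronbach's alpha, computed from a single administration of the test. The slide does not name a specific index, so this is a sketch of one standard choice, using invented responses of five respondents to four items and the usual formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
import statistics

# Hypothetical: five respondents answer four items intended to
# measure the same construct (responses invented for this sketch)
responses = [
    [3, 4, 3, 4],   # respondent 1's answers to items 1-4
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]

def cronbach_alpha(rows):
    """Cronbach's alpha: internal-consistency estimate from one sitting."""
    k = len(rows[0])                       # number of items
    items = list(zip(*rows))               # per-item score columns
    item_vars = sum(statistics.variance(col) for col in items)
    total_var = statistics.variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_vars / total_var)

print(round(cronbach_alpha(responses), 2))  # values near 1 = high consistency
```

Because respondents who score high on one item here tend to score high on the others, the item variances are small relative to the variance of the totals and alpha comes out high; unrelated items would push it toward zero.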
26. Factors that affect the reliability of a test
1. Characteristics of the test:
A test is usually a composite of single items and takes on the characteristics of the individual items that make it up. It follows that any weakness in the individual items from which the total score is derived will be reflected in the total score as error. The errors introduced into the total scores tend to reduce the reliability of the test.
2. Test difficulty:
The difficulty of a selection-type test item is expressed as the proportion of examinees who answer the item correctly; the difficulty of a test depends on the difficulty of its component items. When a test is too difficult, students may be induced to guess the answers to the items, hence introducing errors into the scores.
27. 3. Test length:
This refers to the number of items in a test. Generally, other things being equal, the longer the test, the higher the reliability, because a test with a limited number of items is not likely to measure the abilities or behaviours under consideration accurately.
4. Time allocated to the test:
Testing time affects students' performance. If the time allocated to take a test is too short, students will not have enough time to read and think about the problems before answering them, which could lead to guessing. On the other hand, if the time is too long, the fast students will finish and may be tempted to help their colleagues, leading to irregularities. Adequate time should be given to students to take any test in order to demonstrate their understanding and comprehension.
5. Testing conditions:
When uniformity of the testing conditions is not ensured, inconsistencies are likely to be introduced into the performance of the students, which affects their scores. Maintaining uniform testing conditions is essential to reducing errors and making the results reliable.
28. 6. Group variability:
This influences reliability because the reliability coefficient is directly influenced by the spread of scores in the group assessed. Other things being equal, the larger the spread of scores, the higher the estimate of reliability will be. In general, if the group tested is heterogeneous, the reliability of the scores tends to be high.
7. Subjectivity in scoring:
If a test is subjectively scored, inconsistencies create random errors within the scores, which tend to lower the reliability of the test.
29. Self-assessment questions
Distinguish between validity and reliability in tests or examinations.
State and explain two (2) types of each of the following:
Validity
Reliability
Explain two factors each that affect the validity and reliability of tests.