1) Validity refers to the extent to which a test measures what it claims to measure. Types of validity include content validity, construct validity, predictive validity, and concurrent validity.
2) Reliability is the consistency of a test: whether it would give the same results over repeated administrations. Factors that influence reliability include the length of the test, the score distribution, the difficulty level, and objectivity.
3) Validity and reliability can be estimated by calculating correlation coefficients, using formulas such as the Pearson product-moment correlation, Kuder-Richardson (KR-20), Cronbach's alpha, and the point-biserial correlation.
PHYSICS EDUCATION PRINCIPLE & EVALUATION TECHNIQUES (LARAS & NUR ASIAH)
1. Presentation
Evaluation of Physics Education Principle & Evaluation Techniques
Arranged by:
Larasati Rizky Putri (3236159180)
Nur Asiah Rangkuty (3236159179)
2. Validity
Validity is a concept concerning the extent to which a test measures what it is supposed to measure. The validity of a test is usually divided into two kinds: logical validity and empirical validity. Logical validity amounts to a qualitative analysis of the items, which determines whether or not an item functions properly against predetermined criteria, in this case the criteria of material, construction, and language.
3. Forms of Validity
• Content validity is sometimes called curriculum validity: a measuring instrument is considered valid if it is in accordance with the content of the curriculum to be measured.
• Construct validity relates to phenomena and abstract objects whose symptoms can nevertheless be observed and measured.
• Predictive validity indicates a relationship between the scores obtained by the test takers and circumstances that will occur in the future. A test is said to have predictive validity if it is able to predict what will happen in the future.
• Concurrent validity refers to the relationship between test scores and results achieved in the current situation; it is also known as empirical validity. A test is said to have concurrent validity when its results agree with experience. This validity is commonly established with a statistical technique, namely correlation analysis.
4. Measuring Validity
One way to determine the validity of a measuring instrument is to use the product-moment correlation with raw scores, expressed by the following Pearson formula:

\[ r_{xy} = \frac{N\sum XY - (\sum X)(\sum Y)}{\sqrt{\left(N\sum X^2 - (\sum X)^2\right)\left(N\sum Y^2 - (\sum Y)^2\right)}} \]

Description:
r_xy = the index of correlation between the two correlated variables
X = the score of each item
Y = the total score of the items
N = the number of respondents in the trial
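As a minimal sketch, the raw-score Pearson formula above can be computed directly in Python. The item and total scores below are made-up illustration data, not from the slides:

```python
# Pearson product-moment correlation in raw-score form:
# r_xy = (N*ΣXY - ΣX*ΣY) / sqrt((N*ΣX² - (ΣX)²)(N*ΣY² - (ΣY)²))
import math

def pearson_r(x, y):
    """Correlation between item scores x and total scores y."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sx2 = sum(a * a for a in x)
    sy2 = sum(b * b for b in y)
    num = n * sxy - sx * sy
    den = math.sqrt((n * sx2 - sx ** 2) * (n * sy2 - sy ** 2))
    return num / den

# Example: one item's 0/1 scores vs. total scores for 5 respondents.
item = [1, 0, 1, 1, 0]
total = [8, 4, 9, 7, 5]
print(round(pearson_r(item, total), 3))  # 0.924
```

An item whose coefficient is close to 1 correlates strongly with the total score and is taken as evidence of item validity.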
5. The Validity of Test Items
The validity of a test item is an index indicating that the measuring instrument really measures what it is intended to measure. The validity of an instrument concerns how accurately the data represent the variables under study. To determine whether an instrument is valid for a study, researchers test the validity of multiple-choice questions using the point-biserial correlation coefficient, which tests item validity by correlating each item with the total score of all the items.
6. The point-biserial technique is stated in the following formula:

\[ r_{pbis} = \frac{M_p - M_t}{S_t}\sqrt{\frac{p}{q}} \]

Description:
r_pbis = the point-biserial correlation coefficient
M_p = the mean total score of the subjects who answered correctly the item whose validity is sought
M_t = the mean of the total scores
S_t = the standard deviation of the total scores
p = the proportion of students who answered the item correctly

p = (the number of students who answered correctly) / (the total number of students)

q = the proportion of students who answered the item incorrectly, q = 1 − p

To calculate the standard deviation of the total scores, the following formula is used:

\[ S_t = \sqrt{\frac{\sum X_t^2}{N} - \left(\frac{\sum X_t}{N}\right)^2} \]
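The point-biserial formula above can be sketched in Python. The 0/1 item answers and total scores below are made-up illustration data:

```python
# Point-biserial correlation of one dichotomous item with the total score:
# r_pbis = ((Mp - Mt) / St) * sqrt(p / q)
import math

def point_biserial(item, totals):
    """item: 0/1 answers per student; totals: total test score per student."""
    n = len(item)
    p = sum(item) / n                 # proportion who answered correctly
    q = 1 - p                         # proportion who answered incorrectly
    mp = sum(t for i, t in zip(item, totals) if i == 1) / sum(item)
    mt = sum(totals) / n              # mean of all total scores
    # standard deviation of totals: sqrt(ΣX²/N - (ΣX/N)²)
    st = math.sqrt(sum(t * t for t in totals) / n - mt ** 2)
    return (mp - mt) / st * math.sqrt(p / q)

item = [1, 0, 1, 1, 0]
totals = [8, 4, 9, 7, 5]
print(round(point_biserial(item, totals), 3))  # 0.924
```

For a dichotomous item this agrees with the Pearson product-moment correlation computed on the same data, which is why the two techniques are interchangeable here.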
7. Reliability
Reliability is the level or degree of consistency of an instrument. The reliability of a test concerns whether the test measures accurately and dependably according to established criteria. A test can be said to be reliable if it always gives the same results when administered to the same group at a different time or on a different occasion.
Gronlund (1985) suggests four factors that can influence reliability:
• the length of the test,
• the score distribution,
• the level of difficulty,
• objectivity.
8. The concept underlying reliability is the measurement error that can occur in a measurement process or in a specific single value, causing a change in the composition of the group (error of measurement). Based on how the Pearson product-moment correlation is calculated, there are three kinds of reliability coefficients:
1. the coefficient of stability,
2. the coefficient of equivalence,
3. the coefficient of internal consistency.
9. For the internal consistency coefficient in particular, the correlation is computed from only part of the whole test. To obtain the correlation coefficient for the test as a whole, it must be stepped up from the half-test figure with the Spearman-Brown formula:

\[ r_{nn} = \frac{n\, r_{1.2}}{1 + (n - 1)\, r_{1.2}} \]

Description:
n = the factor by which the test is lengthened; for a whole test built from two halves, n = 2 (whole test = 2 × 1/2)
r_{1.2} = the correlation between the scores of the two half-tests
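A minimal sketch of the Spearman-Brown step-up, assuming a half-test correlation of 0.6 purely as illustration:

```python
# Spearman-Brown correction: r_nn = n*r12 / (1 + (n-1)*r12).
# For split-half reliability the whole test is two halves, so n = 2.
def spearman_brown(r_half, n=2):
    """Step the half-test correlation r_half up to full-test length."""
    return n * r_half / (1 + (n - 1) * r_half)

print(spearman_brown(0.6))  # 1.2 / 1.6 = 0.75
```

Note that the stepped-up coefficient is always at least as large as the half-test correlation, since a longer test is more reliable.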
10. The Kuder-Richardson technique (named after the two psychometricians who formulated reliability equations) is better known as KR20. One form of the KR20 formula is as follows:

\[ r_{11} = \frac{n}{n - 1}\left(\frac{S^2 - \sum pq}{S^2}\right) \]

Description:
r_11 = the reliability of the instrument
n = the number of items or questions
S² = the variance of the total scores
p = the proportion of students who answered an item correctly
q = the proportion of students who answered an item incorrectly
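The KR20 formula above can be sketched in Python. The 0/1 score matrix below (5 students × 4 items) is made-up illustration data:

```python
# KR20: r11 = (n/(n-1)) * (S² - Σpq) / S², with n = number of items,
# S² = variance of total scores, p = proportion correct per item, q = 1 - p.
def kr20(scores):
    """scores: list of per-student lists of 0/1 item responses."""
    n_items = len(scores[0])
    n_students = len(scores)
    totals = [sum(row) for row in scores]
    mean_t = sum(totals) / n_students
    s2 = sum((t - mean_t) ** 2 for t in totals) / n_students  # variance of totals
    sum_pq = 0.0
    for j in range(n_items):
        p = sum(row[j] for row in scores) / n_students
        sum_pq += p * (1 - p)
    return n_items / (n_items - 1) * (s2 - sum_pq) / s2

scores = [
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(round(kr20(scores), 3))  # 0.533
```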
11. The Cronbach alpha technique, or coefficient alpha. It differs from the Kuder-Richardson technique in that it is not restricted to tests with only two response options; its application is broader, such as testing the reliability of attitude scales with three, five, or seven choices. The formula used to calculate the alpha coefficient is:

\[ \alpha = \frac{R}{R - 1}\left(1 - \frac{\sum \sigma_i^2}{\sigma_x^2}\right) \]

Description:
R = the number of items
σ_i² = the variance of each item
σ_x² = the variance of the total scores
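A minimal sketch of coefficient alpha, using a made-up 4-respondent × 3-item matrix of five-point scale scores as illustration:

```python
# Cronbach's alpha: α = (R/(R-1)) * (1 - Σσᵢ²/σₓ²), where σᵢ² is each item's
# variance and σₓ² the variance of the total scores.
def cronbach_alpha(scores):
    """scores: list of per-respondent lists of item scores (any scale)."""
    r = len(scores[0])            # number of items
    def pop_var(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / len(values)
    item_vars = [pop_var([row[j] for row in scores]) for j in range(r)]
    total_var = pop_var([sum(row) for row in scores])
    return r / (r - 1) * (1 - sum(item_vars) / total_var)

scores = [
    [4, 5, 3],
    [2, 3, 2],
    [5, 5, 4],
    [3, 4, 3],
]
print(round(cronbach_alpha(scores), 3))  # 0.953
```

When every item is scored 0/1, this computation reduces to KR20, since each item variance becomes pq.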
12. For dichotomous items, such as multiple choice, the item variance is obtained with the formula:

\[ \sigma_i^2 = p_i q_i \]

Description:
p_i = the difficulty level of the item (the proportion who answered it correctly)
q_i = 1 − p_i
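A quick numerical check of this identity, using a made-up 0/1 answer vector: for dichotomous data the population variance equals p·q.

```python
# For a 0/1 item, the variance ΣX²/N - (ΣX/N)² simplifies to p*q,
# because the mean of 0/1 data is p and X² = X.
item = [1, 0, 1, 1, 0]            # made-up 0/1 answers from five students
n = len(item)
p = sum(item) / n                 # difficulty level (proportion correct)
q = 1 - p
mean = sum(item) / n              # equals p for 0/1 data
variance = sum((x - mean) ** 2 for x in item) / n
print(round(variance, 6), round(p * q, 6))  # both 0.24
```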