International Journal
of
Learning, Teaching
And
Educational Research
p-ISSN: 1694-2493
e-ISSN: 1694-2116
IJLTER.ORG
Vol.16 No.1
PUBLISHER
London Consulting Ltd
District of Flacq
Republic of Mauritius
www.ijlter.org
Chief Editor
Dr. Antonio Silva Sprock, Universidad Central de
Venezuela, Bolivarian Republic of Venezuela
Editorial Board
Prof. Cecilia Junio Sabio
Prof. Judith Serah K. Achoka
Prof. Mojeed Kolawole Akinsola
Dr Jonathan Glazzard
Dr Marius Costel Esi
Dr Katarzyna Peoples
Dr Christopher David Thompson
Dr Arif Sikander
Dr Jelena Zascerinska
Dr Gabor Kiss
Dr Trish Julie Rooney
Dr Esteban Vázquez-Cano
Dr Barry Chametzky
Dr Giorgio Poletti
Dr Chi Man Tsui
Dr Alexander Franco
Dr Habil Beata Stachowiak
Dr Afsaneh Sharif
Dr Ronel Callaghan
Dr Haim Shaked
Dr Edith Uzoma Umeh
Dr Amel Thafer Alshehry
Dr Gail Dianna Caruth
Dr Menelaos Emmanouel Sarris
Dr Anabelie Villa Valdez
Dr Özcan Özyurt
Assistant Professor Dr Selma Kara
Associate Professor Dr Habila Elisha Zuya
International Journal of Learning, Teaching and
Educational Research
The International Journal of Learning, Teaching
and Educational Research is an open-access
journal established for the dissemination of
state-of-the-art knowledge in the field of
education, learning and teaching. IJLTER
welcomes high-quality research articles from
academics, educators, teachers, trainers and
other practitioners on all aspects of education.
Papers for publication in the International
Journal of Learning, Teaching and Educational
Research are selected through rigorous peer
review to ensure quality, originality,
appropriateness, significance and readability.
Authors are invited to contribute articles that
describe research results, projects, original
surveys and case studies reporting significant
advances in the fields of education, training,
e-learning and related areas, and to submit
papers through the ONLINE submission system.
Submissions must be original and must not have
been published previously or be under
consideration for publication elsewhere while
being evaluated by IJLTER.
VOLUME 16 NUMBER 1 January 2017
Table of Contents
Item Consistency Index: An Item-Fit Index for Cognitive Diagnostic Assessment .......................................................1
Hollis Lai, Mark J. Gierl, Ying Cui and Oksana Babenko
Factors That Determine Accounting Anxiety Among Users of English as a Second Language Within an
International MBA Program................................................................................................................................................ 22
Alexander Franco and Scott S. Roach
(Mis)Reading the Classroom: A Two-Act “Play” on the Conflicting Roles in Student Teaching .............................. 38
Christi Edge
Coping Strategies of Greek 6th Grade Students: Their Relationship with Anxiety and Trait Emotional Intelligence ......57
Alexander- Stamatios Antoniou and Nikos Drosos
Active Learning Across Three Dimensions: Integrating Classic Learning Theory with Modern Instructional
Technology ............................................................................................................................................................................ 72
Thaddeus R. Crews, Jr.
The Effects of Cram Schooling on the Ethnic Learning Achievement Gap: Evidence from Elementary School
Students in Taiwan .............................................................................................................................................................. 84
Yu-Chia Liu, Chunn-Ying Lin, Hui-Hua Chen and He Huang
Teachers’ Self-Efficacy at Maintaining Order and Discipline in Technology-Rich Classrooms with Relation to
Strain Factors....................................................................................................................................................................... 103
Eyvind Elstad and Knut-Andreas Christophersen
Using Reflective Journaling to Promote Achievement in Graduate Statistics Coursework...................................... 120
J. E. Thropp
Competence and/or Performance - Assessment and Entrepreneurial Teaching and Learning in Two Swedish
Lower Secondary Schools.................................................................................................................................................. 135
Monika Diehl and Tord Göran Olovsson
Review in Form of a Game: Practical Remarks for a Language Course ...................................................................... 161
Snejina Sonina
© 2017 The authors and IJLTER.ORG. All rights reserved.
International Journal of Learning, Teaching and Educational Research
Vol. 16, No. 1, pp. 1-21, January 2017
Item Consistency Index: An Item-Fit Index for
Cognitive Diagnostic Assessment
Hollis Lai,1 Mark J. Gierl,2 Ying Cui,2 Oksana Babenko3
1 School of Dentistry, Faculty of Medicine & Dentistry
2Centre for Research in Applied Measurement and Evaluation
3Department of Family Medicine, Faculty of Medicine & Dentistry
University of Alberta, Canada
Abstract. An item-fit index is a measure of how accurately a set of
item responses can be predicted using the test design model. In a
diagnostic assessment where items are used to evaluate student
mastery on a set of cognitive skills, this index helps determine the
alignment between the item responses and skills that each item is
designed to measure. In this study, we introduce the Item
Consistency Index (ICI), a modification of an existing person-
model fit index, for diagnostic assessments. The ICI can be used to
evaluate item-model fit on assessments designed with a Q-matrix.
Results from both a simulation and real data study are presented.
In the simulation study, the ICI identified poor-fitting items under
three manipulated conditions: sample size, test length, and
proportion of poor-fitting items. In the real-data study, the ICI
detected three poor-fitting items for an operational diagnostic
assessment in Grade 3 mathematics. Practical implications and
future research directions for the ICI are also discussed.
Keywords: Item Consistency Index; cognitive diagnostic assessment; test
development
Introduction
In educational testing, items are developed to elicit a correct response
when examinees demonstrate adequate knowledge or understanding on
the required tasks and skills within a specified domain. The methods of
specifying knowledge, the conceptualization of content domains, and the
design of how an item elicits responses are currently undergoing
significant change with the evolution of our test designs. But one outcome
that remains the same is that an item must assess the tasks and skills as
intended, and the quality of each item must be judged to be high if it is to
be included on the test. In most test designs, item discrimination power is
a statistical criterion that is synonymous with describing item quality.
Item discrimination helps describe how well an item can differentiate
examinees at different performance levels. Depending on the test design
and how the scale of examinee performance is realized, different
measures of item discrimination may be used. Additional information
about item discrimination can also be garnered from measures of item-
model fit. An item-model fit index describes the overall difference
between real responses on a given item with a corresponding set of
expected responses predicted by the test design. Item-model fit indices
can be summarized, in general, as a ratio between the expected and actual
correct responses on each item to compare the proportion of correct
responses across examinees of different abilities with an expected correct
proportion from the test design model. Different criterions that represent
the examinee overall performance such as total score, estimated ability, or
pseudo-scores have been used to group the responses of examinees' with
similar ability to produce variations of item-model fit (Bock, 1972; Yen,
1981; Rost & von Davier, 1994; Orlando & Thissen, 2003). Application of
item-model fit indices include the identification of poor performing items,
cheating, or test administration anomalies, along with addressing issues
related to dimensionality, item construction, calibration, and model
selection (Reise, 1990).
Cognitive Diagnostic Assessment and Model Fit
Demand for more assessment feedback to better guide instruction and
learning has led to the development of more complex test designs.
Cognitive diagnostic assessment (CDA) is an example of a test design that
yields enhanced assessment feedback by providing test takers with
specific information about their problem-solving mastery on a given
domain (Gierl, Leighton, & Hunka, 2007). The cornerstone of a CDA is the
use of a cognitive model to guide test development. The use of a
cognitive model allows CDA to provide enhanced feedback because
cognitive information can be extracted from the examinees’ item
responses which, in turn, provide more detailed and instructionally
relevant results to test takers. Compared to traditional tests where an
item response is linked to a single outcome scale, the cognitive inferences
made in CDA allow each item to measure multiple skills related to
student learning. Due to the complexity of interpreting and modeling
different aspects of cognitive skills, many approaches to modeling and
scoring examinee responses are available. Sinharay, Puhan, and
Haberman (2009) summarized three common features among different
methods of CDA:
(1) tests assess student mastery based on a cognitive model of skills; (2)
items probe student mastery on a pattern of skills expressed in a Q-
matrix; and (3) items probing the same pattern of skill mastery should
elicit a similar pattern of student responses.
An essential part of CDA development relies on the definition of a Q-
matrix. The Q-matrix is an item-by-attribute matrix used to describe the
skills probed by each item. For example, if a CDA is designed to
determine examinee mastery on four skills, and an item was designed to
elicit a correct response if the examinee has mastered the first and the
fourth skill, then the row corresponded to that item in the Q-matrix
would be expressed as {1,0,0,1}. The Q-matrix and the student response
patterns are used to calibrate the model parameters and provide students
with diagnostic results related to their cognitive problem-solving
strengths and weaknesses.
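The Q-matrix logic above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the study: the function name `has_required_skills` and the example patterns are assumptions, and the rule encoded is the one the text describes, namely that an item's row of the Q-matrix must be covered element-wise by the examinee's skill-mastery pattern.

```python
import numpy as np

# Hypothetical Q-matrix row for the four-skill example in the text:
# the item requires the first and fourth skills, i.e., {1,0,0,1}.
q_row = np.array([1, 0, 0, 1])

def has_required_skills(skill_pattern, q_row):
    """Return True if the examinee's mastery pattern covers every
    skill the item requires (q_row <= skill_pattern element-wise)."""
    return bool(np.all(np.asarray(skill_pattern) >= np.asarray(q_row)))

print(has_required_skills([1, 0, 0, 1], q_row))  # True: both required skills mastered
print(has_required_skills([1, 1, 1, 0], q_row))  # False: fourth skill not mastered
```

The same element-wise comparison is what later sections use to decide whether one item's skill pattern is a subset or superset of another's.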
To ensure that CDA results provide the most accurate information to
examinees about their cognitive skills, the quality of CDA items must be
scrutinized. Evaluations of the claim that items probe a specified set of
skills have varied in the scope of how item-skill relations are
represented. Model-data fit has traditionally been used to evaluate how
well items align with the construct of the skills, based upon item
responses. Few studies have investigated item-skill alignment directly.
Wang, Shu, Shang, and Xu (2015) developed a measure that allows the
evaluation of skill-to-item fit based on the DINA model, which assumes a
probabilistically scaled skill representation. To evaluate
item-model fit in CDA, items need to be evaluated beyond the
relationship of the correct responses on a particular item and single
outcome scores. Because each item is designed to provide student
mastery information on multiple skills, an item-model fit index is needed
to ensure item responses are aligned with the intended cognitive skills.
Evaluating Model-Fit for CDA
Model fit in CDA can be evaluated from two perspectives: the fit of
responses with the expected psychometric properties of the test items, or
the fit of responses with the blueprint of skills. Existing developments
tend to focus on the former approach. For example, Jang (2005) compared
total raw score distributions between observed and predicted responses
using the mean absolute difference (MAD). Jang's approach to evaluating
model fit is akin to IRT model-fit approaches, where the level of fit is
determined by total score differences between the expected and examinee
results. But with each correct response on a CDA item linked to mastery of
a vector of skills, evaluating item-model fit for CDA needs to consider
the fit of an item with the prerequisite skills rather than a single
test-level outcome.
Sinharay and Almond (2007) also developed an approach for evaluating
item fit for CDA by assuming that examinees categorized with the same
skill pattern should also have the same diagnostic outcome. With their
approach, the proportion correct response for examinees with the same
skill pattern is compared with the expected proportion predicted by the
cognitive model. Differences between the expected and observed correct
proportions are then summed across all skill patterns and weighted
proportionally by sample size. That is, model-fit for item j was defined
as:
X_j^2 = Σ_k N_k(O_kj − E_kj)^2 / [E_kj(N_k − E_kj)] ,

where N_k is the number of examinees with skill pattern k, O_kj is the
number of examinees with skill pattern k who responded correctly to item
j, and E_kj is the product of the expected proportion of correct response
for pattern k and N_k. Although this approach can be applied to
account for fit among multiple sets of skills, results rely on an expected
correct response rate of a given item for each skill pattern. As the
expected correct response for a given set of skill pattern is not readily
available, application of this method for determining model fit may be
problematic. Moreover, a poor sample representation of a skill pattern or
psychometrically indistinguishable skill patterns will also misestimate
item-model fit. One way to avoid the influence of misclassification on an
item-model fit measure for CDA is to comparatively evaluate items that
measure the same skills. That is, items measuring the same skills are
expected to elicit similar response patterns with one other.
Hierarchy Consistency Index (HCI)
One statistic developed specifically for CDA to evaluate person-model fit
is the Hierarchy Consistency Index (HCI; Cui & Leighton, 2009; Cui & Li,
2014; Cui & Mousavi, 2015). The HCI is a statistic for evaluating the fit of
the observed responses from an examinee with the expected responses
from a CDA model based on a comparison between the observed and
expected response vectors. The main assumption of the HCI is that if an
examinee gives a correct response to an item requiring a set of skills,
then the examinee is assumed to have mastered that set of skills and
therefore should also respond correctly to other items designed to measure
those skills. For example, if an examinee gives a correct response to an
item that requires the first and third skills in a CDA that assesses four
skills (i.e., an item with a skill pattern of [1,0,1,0] in the Q-matrix),
then the examinee is also expected to respond correctly to items that
probe the same set of skills [1,0,1,0], or a subordinate or prerequisite
subset of those skills (e.g., [1,0,0,0], [0,0,1,0]), since those are
skills the examinee should already have acquired. In this
manner, the number of misfitting responses across all items with their
corresponding subsets of skills is calculated for each examinee to
determine an index of person-fit.
Given I examinees were administered with J items, the HCI for examinee i
is calculated as:
HCI_i = 1 − (2 Σ_{j=1..J} Σ_{g∈S_j} X_ij(1 − X_ig)) / N ,   (1)

where X_ij is the examinee's scored response to item j, S_j is an index
set that includes items requiring a subset of the attributes measured by
item j, and X_ig is the examinee's scored response to item g. For example,
if item j is answered correctly, then all items that measure the
attributes, or a subset of the attributes, probed by item j are
represented by the index set S_j, where g is an item index within S_j. N
is the number of comparisons made across all S_j.
all sj. The HCI has a maximum of 1 and minimum of -1, where a high
positive HCI value represents good person-fit with the expected response
model.
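Under the definitions above, the HCI can be sketched directly from the Q-matrix and a response vector. This is a minimal illustrative sketch, not the authors' code: the function name `hci` and the 4-item, 2-skill Q-matrix are assumptions introduced here for demonstration, while the comparison logic follows Equation (1) as described in the text (comparisons are made only for correctly answered items, against items requiring a subset of their attributes).

```python
import numpy as np

def hci(responses, Q):
    """Hierarchy Consistency Index for one examinee, per Eq. (1).

    responses : 0/1 vector of scored item responses for the examinee.
    Q         : item-by-attribute Q-matrix (rows are items).
    """
    J = len(responses)
    misfits, n_comparisons = 0, 0
    for j in range(J):
        if responses[j] != 1:  # HCI compares only correctly answered items
            continue
        # S_j: other items whose required skills are a subset of item j's
        S_j = [g for g in range(J) if g != j and np.all(Q[g] <= Q[j])]
        n_comparisons += len(S_j)
        misfits += sum(1 - responses[g] for g in S_j)
    return 1 - 2 * misfits / n_comparisons if n_comparisons else 1.0

# Illustrative example (not from the paper): 4 items over 2 skills.
Q = np.array([[1, 0], [0, 1], [1, 1], [1, 0]])
print(hci([1, 0, 1, 0], Q))  # 3 misfits in 4 comparisons -> 1 - 6/4 = -0.5
```

Here the examinee answers items 1 and 3 correctly; the incorrect responses to items 2 and 4, which S_3 and S_1 expect to be correct, drive the HCI negative.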
The HCI is a useful index for analyzing person-fit across different types of
CDAs, as it requires only the use of the Q-matrix and examinee responses.
In this study, we modify the HCI to create an index for analyzing item-
model fit. Thus, the purpose of this study is twofold. First, we introduce
and define an item-model fit index called the item consistency index (ICI).
The ICI is used to evaluate the fit of an item related to the underlying
cognitive model used to make diagnostic inferences with that item.
Second, we present results from two studies to demonstrate both the
simulated and practical performance of the ICI across of host of testing
conditions typically found in diagnostic assessments.
Item Consistency Index (ICI)
As elaborated earlier, the HCI measures the proportion of misfitting
observed examinee responses relative to the expected examinee responses
on a diagnostic assessment. This principle can also be extended to
evaluate item-fit. With the HCI, the misfitting responses related to each
item are summed across all items for each examinee. As described in (1),
misfit for examinee i (m_i) can be written as:

m_i = Σ_{j=1..J} Σ_{g∈S_j} X_ij(1 − X_ig).   (2)
Alternatively, to evaluate the misfit for item j, the number of misfitting
responses from the subset of item j can be summed across all examinees.
This modification can be written as:

m_j = Σ_i Σ_{g∈S_j} X_ij(1 − X_ig),   (3)

where X_ij is student i's score (1 or 0) on item j, and X_ig is student
i's score (1 or 0) on item g. Item g belongs to S_j, a subset of items that require the
subset of skills measured by item j. In this manner, for a correct
response to item j by examinee i (X_ij = 1), one can consider any
incorrect response in S_j to be a misfit for examinee i. The number of
misfits is then summed across all examinees.
It should be noted that the HCI considers only students' correct responses
when analyzing misfit for a given item (X_ij = 1). That is, misfit is
calculated against the required skills only when students have provided
the correct response. While this is adequate for analyzing person-fit,
analyzing item-fit against a cognitive model also requires comparisons to
be made when students respond to an item incorrectly (X_ij = 0). As such,
an evaluation of item-fit needs to account for this alternative comparison.
For example, suppose an incorrect response was given on our exemplar
item that required the skill pattern of [1,0,1,0]. From this item response,
we could infer that the examinee does not possess all the necessary skills
required to solve this item and, therefore, should respond incorrectly to
all items that require the same skill pattern of [1,0,1,0]. Furthermore, the
examinee should also respond incorrectly to items that require more skills
than the current item (i.e., [1,1,1,0], [1,0,1,1], [1,1,1,1]). These
items, which require the same skill pattern or a more complex one, can be
conceptualized as an alternative subset of item j (S_j*), and a correct
response to any of the items belonging to S_j* can be conceptualized as a
misfit. This outcome can be expressed as:
m_j* = Σ_i Σ_{h∈S_j*} X_ih(1 − X_ij).   (4)
The set of alternative comparisons combined with comparisons from
correct responses form the numerator of the ICI. To maintain the same
scale of comparison with HCI, the numerator is then divided by the total
number of comparisons, which effectively transforms the outcome to a
proportion of misfit responses for item j. The proportion is then rescaled
to a maximum of 1 and a minimum of -1. The ICI for item 𝑗 is then given
as:
ICI_j = 1 − (2 [Σ_i Σ_{g∈S_j} X_ij(1 − X_ig) + Σ_i Σ_{h∈S_j*} X_ih(1 − X_ij)]) / N_cj ,   (5)
where X_ij is student i's score (1 or 0) on item j; S_j is an index set
that includes items requiring a subset of the attributes measured by item
j; X_ig is student i's score (1 or 0) on item g, where item g belongs to
S_j; S_j* is an index set that includes items requiring all of, but not
limited to, the attributes measured by item j; X_ih is student i's score
(1 or 0) on item h, where item h belongs to S_j*; and N_cj is the total
number of comparisons for item j across all students.
To illustrate the calculation of the ICI, consider a hypothetical
administration of a CDA with 15 items and a Q-matrix presented in (6).
1 0 0 0
0 1 0 0
1 1 0 0
0 0 1 0
1 0 1 0
0 1 1 0
1 1 1 0
0 0 0 1
1 0 0 1
0 1 0 1
1 1 0 1
0 0 1 1
1 0 1 1
0 1 1 1
1 1 1 1
. (6)
Suppose this CDA of four skills was administered to an examinee who
produced the item response vector (0,0,0,0,0,1,1,0,0,0,0,0,0,0,0). That is, the
examinee responded correctly to items 6 and 7 only. To calculate the ICI
for item 6, we first consider that the examinee has responded to the item
correctly, therefore comparisons should be made with items that require
skills that are prerequisites to, or the same as, those of the original
item. In this case, items 2 and 4 belong to S_6. Since both item responses
were incorrect, two comparisons were made (N_c6 = 2) and two unexpected
responses were found (m_6 = 2) for this examinee. In addition, suppose we
wanted to calculate the ICI of item 2 for this examinee. The alternative
subset (S_2*) is needed, since the examinee responded to the item
incorrectly. In this instance, seven items form the alternative subset for
item 2 (S_2* = {3, 6, 7, 10, 11, 14, 15}). Since the examinee responded
correctly to items 6 and 7, there were two unexpected responses (m_2 = 2)
from a total of seven comparisons (N_c2 = 7). In this manner, the numbers
of unexpected responses and comparisons are summed across all examinees
and rescaled to form the ICI.
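The worked example above can be reproduced in code. The sketch below is an assumption-laden illustration (the function name `ici` and the subset/superset construction of S_j and S_j* via element-wise Q-matrix comparison are introduced here), using the 15-item Q-matrix from (6) and the single response vector from the text.

```python
import numpy as np

# Q-matrix from (6): the 15 non-null skill patterns over four skills,
# in the order listed in the text (row 1 = item 1, etc.).
Q = np.array([
    [1,0,0,0],[0,1,0,0],[1,1,0,0],[0,0,1,0],[1,0,1,0],
    [0,1,1,0],[1,1,1,0],[0,0,0,1],[1,0,0,1],[0,1,0,1],
    [1,1,0,1],[0,0,1,1],[1,0,1,1],[0,1,1,1],[1,1,1,1],
])

def ici(item, X, Q):
    """Item Consistency Index for one item, per Eq. (5).
    X is an examinee-by-item 0/1 response matrix; item is 0-based."""
    J = Q.shape[0]
    # S: items requiring a subset of this item's skills (used when correct).
    S = [g for g in range(J) if g != item and np.all(Q[g] <= Q[item])]
    # S*: items requiring a superset of this item's skills (used when incorrect).
    S_star = [h for h in range(J) if h != item and np.all(Q[h] >= Q[item])]
    misfits, n_comp = 0, 0
    for x in X:
        if x[item] == 1:   # correct response: expect correct answers on S
            misfits += sum(1 - x[g] for g in S)
            n_comp += len(S)
        else:              # incorrect response: expect incorrect answers on S*
            misfits += sum(x[h] for h in S_star)
            n_comp += len(S_star)
    return 1 - 2 * misfits / n_comp if n_comp else 1.0

# The examinee from the text: correct on items 6 and 7 only (1-based).
X = np.array([[0,0,0,0,0,1,1,0,0,0,0,0,0,0,0]])
print(ici(5, X, Q))  # item 6: 2 misfits in 2 comparisons -> -1.0
print(ici(1, X, Q))  # item 2: 2 misfits in 7 comparisons -> 1 - 4/7 ≈ 0.43
```

With this single examinee, the code recovers S_6 = {items 2, 4} and S_2* = {items 3, 6, 7, 10, 11, 14, 15} exactly as in the text; in practice the sums would run over all examinees.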
To demonstrate the performance of this item-model fit index across a
variety of different testing situations, a simulation study was conducted
to determine the performance of ICI for detecting poor-fitting items.
Then, a real data study was conducted to demonstrate how the ICI can be
applied in operational testing situations, using a CDA in mathematics.
Methods and Results
Study 1: Simulation Study
To evaluate how well the ICI can identify items that fit poorly relative to
their underlying cognitive model, a Monte-Carlo study was conducted by
simulating responses from a diagnostic test designed to measure seven
skills. To determine the performance of the ICI using simulated CDA
data, examinee responses were generated under the Bernoulli
distribution. In addition to generating examinee responses, different
testing conditions were manipulated to probe conditions that may occur
in a real CDA administration. Finally, to classify poor-fitting items
using the ICI, a common evaluation criterion was used to determine which
items fit poorly with the given cognitive model.
The simulation process is similar to the actual steps used in developing
CDAs (Gierl, Leighton, & Hunka, 2007), where the cognitive model, items,
and responses are developed sequentially. First, an existing
cognitive model from Cui and Leighton (2009) was used to guide the
simulation process. The cognitive model consists of seven skills, with 15
patterns of skill mastery identified as permissible. The patterns of
required skills for each item are expressed in the Q-matrix presented in
Table A1 in the Appendix. To generate examinee responses, examinees
were first assigned to an expected pattern of skill mastery from one of the
15 skill patterns. In addition to the 15 skill patterns, a null pattern
[0,0,0,0,0,0,0] was also used to represent examinees who did not master
any skills. In total, sixteen expected skill patterns are distributed equally
among the sample examinees. To simulate response for an examinee on a
given item, the examinee’s assigned skill pattern is compared with the
skills required by that item as indicated by the Q matrix. A probability of
correct response is assigned based on whether the examinee has all the
prerequisite skills of the item. Based on this assigned probability, the
examinee’s response to each item was generated using a Bernoulli
function.
To simulate the effectiveness of ICI under different testing conditions,
three factors were manipulated. First, the number of items representing
each skill pattern in the CDA was varied by three levels. If a CDA is
lengthened by including multiple items probing the same set of skills,
then the reliability of each corresponding skill measured is expected to
increase (Gierl, Cui, & Zhou, 2009). In our study, the number of items in
the CDA varied by one, two, or three items representing each possible
skill pattern. These three levels of variation on a total of 15 skill patterns
resulted in test lengths of 15, 30, and 45 items, respectively.
Second, unlike the related person-fit HCI which is independent of sample
size, the ICI is based on the proportion of misfit responses from all
examinees. Therefore, different sample sizes may affect the outcome of
the ICI. Three levels of sample sizes were manipulated: 800, 1600, and
2400. Since the 15 skill patterns and a null pattern are distributed equally
among the examinees, the numbers of examinees representing each skill
pattern are 50, 100, and 150, respectively.
Third, an important feature of an item-model fit index is the ability to
detect items that fit poorly with the expected responses determined by the
cognitive model. This ability is contaminated when the ICI is influenced
by misfitting items related to the skills of the original item. To
investigate whether the proportion of poor-fitting items has an effect on
the ICI, the proportion of poor-fitting items was manipulated at three
levels proportional to the test length: 5%, 10%, and 25%. In Cui and
Leighton (2009), a well-fitting
item was deemed to have a 10% chance for slips, where an examinee
without mastery of the necessary skills will have a 10% chance of
responding correctly while an examinee who has mastered the necessary
skills will have a 90% chance of responding correctly. While there can be
many reasons for an item to fit poorly with the underlying cognitive
model (e.g., model misspecification, item quality, option availability),
generally a poor-fitting item yields a response that is aberrant from the
cognitive model. To simulate a poor-fitting item, items responses were
generated close to random. Table 1 contains the probabilities of correct
response given the level of item fit (good or poor fit) and whether the
examinee possesses the required set of skills. Taken together, three
manipulated factors with three levels each yielded a total of 27 conditions
as shown in Table A2 of the Appendix.
Table 1. Correct response probability given the level of item fit and whether
the examinee possesses the required set of skills

Required skills   Good fit   Poor fit
Present           0.9        0.6
Not present       0.1        0.4
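The response-generation step can be sketched as a Bernoulli draw keyed on Table 1. This is an illustrative sketch of the described design, not the authors' R code: the function name `simulate_response`, the dictionary layout, and the seed are assumptions; the four probabilities are the ones stated in the text.

```python
import numpy as np

rng = np.random.default_rng(2017)  # arbitrary seed for reproducibility

# Probabilities of a correct response from Table 1:
# keyed by (examinee has all required skills, item fits well).
P_CORRECT = {
    (True,  True):  0.9, (False, True):  0.1,   # good-fitting item
    (True,  False): 0.6, (False, False): 0.4,   # poor-fitting item
}

def simulate_response(skill_pattern, q_row, good_fit, rng):
    """One Bernoulli draw for one examinee on one item, following the
    simulation design described in the text."""
    has_skills = bool(np.all(np.asarray(skill_pattern) >= np.asarray(q_row)))
    p = P_CORRECT[(has_skills, good_fit)]
    return int(rng.random() < p)

# Example: an examinee mastering skills 1 and 3, answering a well-fitting
# item that requires only skill 1 (correct with probability 0.9).
print(simulate_response([1, 0, 1, 0], [1, 0, 0, 0], good_fit=True, rng=rng))
```

Repeating this draw over all examinee-item pairs yields a full simulated response matrix for one replication.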
To evaluate the effectiveness of the ICI for detecting poor-fitting items, a
criterion is needed for the ICI to differentiate between poor- and well-
fitting items. A classification approach was used to measure the precision
of the ICI in this study. A cut-score criterion, set at an ICI value of 0.5, was
used to illustrate the classification characteristics for poor-fitting items.
For example, if an item was calculated to have an ICI value of less than
0.5, then that item was deemed to fit poorly with the expected responses
from the cognitive model. This preliminary criterion for dichotomizing
item fit was needed because no point of comparison currently exists in
determining an appropriate level of fit with an existing cognitive model.
Further, an ICI value of 0.5 for any item translates to roughly 75% of the
responses on a given item fitting with the expected skill pattern as
defined by the cognitive model. Using this initial cut-score, we could then
classify items as poor- or well-fitting.
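The dichotomization described above amounts to a simple threshold rule. A minimal sketch (the function name `classify_items` is an assumption; the 0.5 cut score is the study's stated criterion):

```python
def classify_items(ici_values, cut_score=0.5):
    """Flag items whose ICI falls below the cut score as poor-fitting,
    following the preliminary 0.5 criterion used in the study."""
    return ["poor" if v < cut_score else "well" for v in ici_values]

print(classify_items([0.30, 0.53, 0.49, 0.61]))  # ['poor', 'well', 'poor', 'well']
```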
To ensure the classification results were consistently produced, each of
the 27 testing conditions was replicated 100 times. The dependent
variables for the simulation study included the average proportion of
correctly identified poor-fitting items and misclassification of well-fitting
items across all conditions. The simulation environment, the
implementation of the ICI, and the replication of results were
programmed in R (R Core Development Team, 2011), and are available
from the first author.
Table 2 contains a summary of the mean ICIs for each condition. The
mean ICIs were calculated separately for the poor- and well-fitting items.
The overall mean for poor-fitting items was 0.30 whereas the mean ICI for
well-fitting items was 0.53. Three observations must be noted from the
results in Table 2. First, test length tended to have a positive impact on the
values of ICI. For example, CDAs with only one item measuring each
skill pattern (i.e., test length=15) had consistently lower ICIs compared to
CDAs with two or three items measuring each skill (i.e., test length=30 or
45). Second, as expected, the magnitude of the mean ICI differences
between poor and well-fitting items tended to decrease when an increase
in poor-fitting items included in the ICI. Third, the means of ICI were
relatively stable across different sample sizes for each condition.
Table 2. Summary of the mean ICIs across the three variables manipulated in
the simulation study

Sample   Proportion of        Test     Mean ICI:       Mean ICI:
Size     Poor-Fitting Items   Length   Poor-Fitting    Well-Fitting
800 5% 15 0.24 0.49
5% 30 0.22 0.57
5% 45 0.30 0.59
10% 15 0.31 0.48
10% 30 0.29 0.56
10% 45 0.38 0.58
25% 15 0.37 0.43
25% 30 0.29 0.56
25% 45 0.32 0.51
1600 5% 15 0.21 0.41
5% 30 0.22 0.56
5% 45 0.29 0.59
10% 15 0.27 0.44
10% 30 0.29 0.57
10% 45 0.38 0.58
25% 15 0.36 0.41
25% 30 0.29 0.56
25% 45 0.32 0.51
2400 5% 15 0.24 0.55
5% 30 0.23 0.58
5% 45 0.30 0.59
10% 15 0.32 0.53
10% 30 0.30 0.57
10% 45 0.38 0.58
25% 15 0.32 0.53
25% 30 0.29 0.56
25% 45 0.32 0.51
Items were also classified based on the cut-score criterion. This simulation
process was repeated 100 times, with the correct classification rate, or
power, being the likelihood of correctly identifying a poor-fitting item
using the ICI across the conditions in the simulation study. The power
values for the 27 conditions are shown in Table 3. The conditions with the
highest power were found in CDAs with the longest test-length (45),
specifically with conditions that had the largest proportion of poor-fitting
items (25%). Under those conditions, the highest power was 0.99,
meaning that for the ICI criterion of 0.50, 99% of all poor-fitting items
were correctly classified across 100 replications. The lowest power values
were found in conditions with the smallest sample size (800), where a
power of 0.67 was found for a 30-item CDA with 5% of poor-fitting items
and 1600 examinees.
Table 3. Power of the ICI for identifying poor-fitting items

Test      Sample   Proportion of Poor-Fitting Items
Length    Size     5%      10%     25%
15 800 0.68 0.76 0.92
1600 0.93 0.89 0.95
2400 0.79 0.79 0.92
30 800 0.67 0.73 0.79
1600 0.77 0.74 0.81
2400 0.73 0.72 0.79
45 800 0.76 0.80 0.99
1600 0.77 0.83 0.99
2400 0.76 0.81 0.99
Table 4 summarizes the likelihood of a well-fitting item being
misclassified by the ICI as a poor-fitting item in each condition. The
lowest misclassification rates were associated with CDAs that have the
longest test-length (45) and the smallest proportion of poor-fitting items
(5%). Under those conditions, the lowest misclassification rate was 15%.
The highest error rates were observed with the shortest test length (15),
where misclassification was 78%.
Taken together, the simulation study results highlight important trends
and outcomes that can be used to interpret how accurately the ICI
identifies poor-fitting items. The power values of ICI were erratic when
the number of items probing each skill pattern was small, but stabilized
as the number of items representing each skill pattern increased. For
example, each increase in test length resulted in a decrease in the
variation of power values among the same proportion of poor-fitting
items and between different sample sizes. This finding suggests that the
reliability of using the ICI to classify poor-fitting items is related to the
reliability of the CDA as a whole. Moreover, the proportions of
misclassification were approximately 2.5 times higher in CDAs with a
single item representing each test skill than at the other two levels. This
outcome further supports the conclusion that as skills are measured more
accurately, the ICI better distinguishes poor- from well-fitting items.
Table 4. Misclassification rate of ICI in identifying well-fitting items
Test     Sample   Proportion of Poor-Fitting Items
Length   Size       5%      10%     25%
15       2400      0.28    0.35    0.66
         1600      0.78    0.65    0.72
         800       0.50    0.50    0.66
30       2400      0.16    0.20    0.22
         1600      0.28    0.20    0.27
         800       0.27    0.22    0.24
45       2400      0.15    0.18    0.33
         1600      0.17    0.19    0.34
         800       0.15    0.19    0.33
There were no obvious trends indicating that sample size, manipulated
across three levels, yielded important differences in power or in the
misclassification of well-fitting items. This finding suggests that the
sample sizes used in this study do not yield important ICI differences
across our study conditions. It may also suggest that a representation of
approximately 50 examinees per skill pattern is sufficient for evaluating
the ICI.
When the proportion of poor-fitting items was manipulated, overall
power rose as the proportion of poor-fitting items in the CDA increased.
An increase in poor-fitting items also yielded more misclassifications of
well-fitting items. This finding suggests that poor-fitting item responses
contribute to an overall decrease in the magnitude of the ICI, with the
resulting errors surfacing when the classification criterion of 0.50 is
applied.
Study 2: Use Case Application
The purpose of the second study is to demonstrate how the ICI can be
used to identify poor-fitting items in an operational CDA. The ICI was
used to evaluate item-model fit for a CDA program designed to assess
students’ knowledge and skills in Grade 3 mathematics. From this CDA
program, 324 students responded to an 18-item CDA (see Gierl, Alves, &
Taylor-Majeau, 2010).
The CDA we used was designed to evaluate student mastery for
subtraction skills. Each item was designed to yield specific diagnostic
information within a hierarchy of cognitive skills, where the first skill
was the easiest (subtraction of two consecutive 2-digit numbers) and the
last skill was the most difficult (subtraction of two 2-digit numbers using
the digits 1 to 9 with regrouping). The CDA was developed as follows. First, a
cognitive model of task performance was created by specifying the
cognitive skills necessary to master subtraction in Grade 3. The domain of
subtraction was further specified into a set of six attributes related in a
linearly hierarchical manner by a group of subject matter experts. The
attributes produced a total of seven unique patterns of skill mastery (six
plus null). Three items were created by content experts to probe student
mastery of each attribute and to ensure adequate representation of each
skill pattern, resulting in eighteen items for this CDA. The test was
administered to students in 17 Grade 3 classrooms. A list of the attributes
and the Q-matrix for the 18-item CDA are shown in Table A3 and Table
A4 of the Appendix, respectively.
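Because the six attributes form a linear hierarchy with three items each, the Q-matrix in Table A4 can be generated mechanically. A small sketch (the function name is ours):

```python
# Generate the 18-by-6 Q-matrix implied by a linear hierarchy of six
# attributes with three items per attribute: an item probing attribute k
# also requires every easier attribute 1..k-1 (cf. Table A4).

def linear_hierarchy_q_matrix(n_attributes=6, items_per_attribute=3):
    q = []
    for attribute in range(1, n_attributes + 1):
        # Required skills are attributes 1..attribute; the rest are 0.
        row = [1] * attribute + [0] * (n_attributes - attribute)
        q.extend([row[:] for _ in range(items_per_attribute)])
    return q

q = linear_hierarchy_q_matrix()
print(len(q))   # 18 items
print(q[0])     # [1, 0, 0, 0, 0, 0] -- easiest attribute only
print(q[17])    # [1, 1, 1, 1, 1, 1] -- all six attributes
```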
Three hundred and twenty four student responses were collected, which
would yield approximately 45 students per skill pattern if the patterns
were distributed equally across the skills. Participating teachers first
taught the topics relevant to subtraction in their classrooms and then
administered the CDA at a convenient time within two weeks of
instruction. The CDA was delivered using an online
computer-based testing system. Students were presented with CDA
items that contained both an item stem prompting a typed response and
an interactive multimedia component providing additional information
to help students understand the item. From this
administration process, responses were collected, formatted and scored
dichotomously. Because participation in this CDA was voluntary,
students with more than two missing responses were removed from the
analysis to minimize the influence of unmotivated responding. For the
purposes of demonstrating the ICI, only the
scored student responses were used.
The results are summarized first at the test level and then at the item
level. Overall, the results were ideal at the test level. The median HCI,
which is used to quantify the fit of the responses to the expected model of
response on a CDA, was 0.81. With a cut-off of 0.70 as the quality criterion
for CDA designs (Gierl, Alves, & Taylor-Majeau, 2010), this result
suggests that the student responses fit the expected model of response
for subtraction. As the purpose of this CDA is to identify non-mastery
students in order to refine and enhance instruction, the majority of
students were expected to master the CDA.
At the item level, Table 5 provides a summary of the results from the
subtraction CDA. The p-values of each item and the discrimination value
(i.e., point-biserial correlation) are presented along with the ICI values.
Three findings should be noted from these results. First, the ICI was not
correlated with either the difficulty or discrimination values. This result
supports the idea that item-model fit is summarizing a different outcome
from the classically defined notion of difficulty and discrimination.
Second, because the items were created in a principled manner, with
three items representing each skill pattern, the real-data results support
those of the simulation study. Further, as p-values decrease, ICI values increase
because the items change from measuring simple to more complex skills.
Third, using the cut-score criterion of 0.50 from the simulation study, only
three items were deemed to have poor item fit (Items 1, 2, 3). The poor ICI
values for these items may suggest a problem at the attribute level (see
Table A3 in the Appendix for the description of the skills assessed). It is
important to note that, without the ICI, conventional scoring and
psychometric approaches would not have identified misfit at the
attribute level, because Items 1 through 3 perform nominally at the item
level. Although subject matter experts did not evaluate the cognitive
model in light of the student results, a follow-up study may find that
reorganizing the attributes yields better-fitting responses.
Table 5. Summary of the results from the subtraction CDA
Attribute   Item Number   P-Value   Discrimination   ICI
1           1             0.76      0.58             0.22
            2             0.78      0.87             0.39
            3             0.80      0.96             0.46
2           4             0.84      0.89             0.64
            5             0.87      1.11             0.72
            6             0.85      0.94             0.65
3           7             0.86      1.06             0.76
            8             0.80      0.68             0.65
            9             0.84      1.01             0.75
4           10            0.77      0.79             0.73
            11            0.72      0.78             0.72
            12            0.75      0.82             0.73
5           13            0.74      0.82             0.78
            14            0.77      0.92             0.79
            15            0.79      0.98             0.80
6           16            0.35      0.56             0.81
            17            0.34      0.57             0.81
            18            0.33      0.53             0.80
Discussion
The purpose of this study is to introduce a statistic for determining item-
model fit with CDA. The item consistency index (ICI), an extension of a
person-fit index for CDA called the Hierarchy Consistency Index (HCI), is
a standardized outcome that measures the ratio of misfitting responses
relative to the total number of responses across all examinees on a given
item. Similar to the HCI, the requirements for evaluating item-model fit
using the ICI are an item-by-attribute definition of skill mastery, called
the Q-matrix, and the student response vectors. The ICI has a
maximum value of 1, which suggests all students responded identically to
an expected skill pattern, and a minimum value of -1, which suggests
item responses were the exact opposite to what the expected skill patterns
suggest. We demonstrate the properties of the ICI under simulation and
then, using real data, show how the ICI can be applied to identify poor-
fitting items on a CDA. These two proof-of-concept applications
demonstrate how the ICI can be applied in the real world and call for
future studies to establish better evaluation criteria for the ICI.
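To make the index's range concrete, the sketch below implements a misfit ratio rescaled to [-1, 1] with the stated endpoints. Both the expected-response rule and the rescaling here are simplifying assumptions for illustration; they are not the authors' exact formula, which extends the HCI machinery of Cui and Leighton (2009).

```python
# Illustrative sketch of an index with the stated properties of the ICI:
# 1 means every response matches the expectation derived from the Q-matrix,
# and -1 means every response contradicts it. The expected-response rule
# and the linear rescaling are simplifying assumptions for demonstration.

def expected_response(attribute_pattern, q_row):
    """1 if the examinee has mastered every attribute the item requires."""
    return int(all(a >= q for a, q in zip(attribute_pattern, q_row)))

def item_consistency(observed, patterns, q_row):
    """Rescale the per-item misfit ratio from [0, 1] onto [1, -1]."""
    misfits = sum(
        obs != expected_response(pat, q_row)
        for obs, pat in zip(observed, patterns)
    )
    return 1 - 2 * misfits / len(observed)

# Four hypothetical examinees answering one item that requires skill 1 only.
q_row = [1, 0, 0]
patterns = [[1, 1, 0], [1, 0, 0], [0, 0, 0], [1, 1, 1]]
print(item_consistency([1, 1, 0, 1], patterns, q_row))   # 1.0, perfect fit
print(item_consistency([0, 0, 1, 0], patterns, q_row))   # -1.0, exact opposite
```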
Results from the simulation study provided some general insights on how
the ICI performs as a method for detecting item misfit in CDA across a
range of testing conditions. Using a cut-score classification method to
determine poor-fitting items, the ICI was able to identify the majority of
the poor-fitting items across the different simulated conditions. Although
the ICI describes item-model fit on a continuous scale, the use of a cut-
score to classify poor-fitting items provided a simply interpreted
outcome for evaluating how the ICI will perform in a given testing scenario. In
addition, results from the simulation study demonstrated a few
assumptions that must be met for the ICI to detect item misfit accurately.
The number of items used for each skill pattern and the total number of
poor fitting items were two features that affected ICI performance. The
implication of these findings is that although CDA demands a different
paradigm of scoring and statistical approaches, traditional issues such as
the consistency of responses for a given set of skills can still be
problematic in estimating item-model fit. From our simulation results,
we suggest using three or more items per skill pattern to
ensure adequate ICI detection. This finding is consistent with the
research in establishing an adequate reliability in measuring attributes of
skills (Gierl, Cui, & Zhou, 2009), where the authors stated that the idea of
a short yet diagnostic test will not likely yield results with sufficient
reliability.
Sinharay and Almond (2007) noted that tests with many poor-fitting items
indicate a problem with the overall model, whereas tests with few poor-
fitting items indicate problems lie in the items themselves. In our
simulation, we demonstrated that the ICI will produce similar results,
where an increase of poor-fitting items in a CDA will lower the precision
of the ICI. This finding may be linked to the fact that as more poor-fitting
items are introduced, these items affect the fit of items requiring the same
set of skills, leading to an overall decrease in the magnitude of ICIs. Table A5
in the Appendix illustrates this effect: the mean ICI for well- and
poor-fitting items under the 45-item simulation decreases as the
proportion of poor-fitting items increases. In sum, a rigorous and principled test
development process is needed for CDA to ensure all test items are
created with minimal deviation from the expected set of skills they were
designed to probe. Otherwise, poor model-fit results will lead to poor
diagnostic outcomes.
The second study provided a snapshot on the utility of the ICI when
applied to an operational CDA. Using a set of carefully designed CDA
items, the ICI detected three consecutive poor-fitting items at the
beginning of the assessment. This finding suggests that the ICI can not
only be used for evaluating item-model fit, but can also be used for
evaluating the consequences of test design at the item, attribute, or the
cognitive model level. In our example, the three items flagged as poor-
fitting measure the same attribute, revealing that the attribute may be
misspecified in the cognitive model. In addition, the independence of the
ICI from the difficulty and discrimination values suggests that item-
model fit for CDA provides a unique measure of how accurately an item can
predict performance. Hence, the definition of a good item for CDA may
not only be how well an item is able to distinguish poor-performers from
good-performers, but also how consistently an item can elicit responses
that match the expected response patterns specified in the cognitive
model (i.e., Q-matrix).
Item-model fit is challenging to measure, especially when cognitive
inferences are involved in the test design. Items have to be aligned with
the cognitive skills in the Q-matrix, skills have to be defined and
organized in a systematic manner, and examinee responses have to match
the expected skill patterns. The ICI can provide a source of evidence for
identifying poor-fitting items or poor models for Q-matrix based CDA.
Implications for Future Research
By introducing and demonstrating an item-model fit index for CDA, our
study provides two practical implications for the development of
diagnostic assessments in addition to a new measure of item-fit. The ICI
has the benefit of applicability, meaning that it can be used with a Q-
matrix based CDA for determining the relationship between items and
skills. Using the Q-matrix, item and examinee responses can be
compared to provide a measure of item model-fit. While research on
CDA has prompted a plethora of diagnostic scoring methods, one
common starting point is the use of the Q-matrix in defining the skills and
item requirements. Because item development, validation, and
administration all depend on the veracity of the Q-matrix, evidence for
validating the cognitive model is paramount. The ICI offers some initial
evidence that can be used for validating the definition of skills through
item response patterns to determine the relative fit between an item and
its set of required skills defined in the Q-matrix.
While the ICI provides a new statistical method for scrutinizing CDA
development, the second study highlighted the fact that the most crucial
part of a well-designed CDA remains with item development. The
importance of item development is, sometimes, neglected in CDA.
Although CDA scoring methods can account for different levels of skill
contributions, the link between how a skill is measured with how the skill
is presented in the form of an item remains largely a subjective
interpretation of the test developer and content specialist who create the
CDA. To reliably measure a set of skills, multiple items are needed. Yet
creating parallel items is often time consuming and expensive. Ensuring
that each item is uniformly developed with the same set of skills is one
critical activity in test development for CDA that ensures examinees
receive useful diagnostic feedback. The ICI is co-dependent with all items
requiring a related set of skills. Therefore, to ensure adequate item model-
fit, every item in the CDA must adhere to a high level of quality and
alignment relative to the expected skill the item is designed to measure.
Through introducing an item model-fit index for CDA, we have
demonstrated how such a measure can be applied to identify problematic
items that are aberrant from the expected response model. This initial
study provides directions of future research as further investigation is
needed to apply and validate the use of this index. We also suggest three
directions of future research. First, more research is needed to ensure
different structures of knowledge represented by the Q-matrix can be
evaluated with the ICI to identify misfitting items. The number of
possible skill patterns increases exponentially with the number of
evaluated skills; therefore, more research is needed to ensure the ICI
provides an appropriate measure for different organizations of
skills. Second, guidelines for interpreting ICIs are needed so we can
accurately identify and distinguish adequate and problematic items. As
the ICI provides a scaled measure of item-model fit, interpretive
guidelines for the index have not yet been established and are required
to determine an adequacy threshold for item-model fit. Third, as the reliability of CDA
measures is highly dependent on the defined skills, more research is
needed to determine which model structure is ideal in the application of
the ICI. Our analysis relies on non-compensatory attributes, meaning
skills are independently defined and acquired, and cannot be moderated
by the existence of other skills. This will likely limit the ICI to measuring
item fit for general skills, such as elementary mathematics, rather than
for complex skills. More research is needed to evaluate appropriate use cases
of the ICI.
References
Bock, R. (1972). Estimating item parameters and latent ability when responses
are scored in two or more nominal categories. Psychometrika, 37, 29-51.
Cui, Y., & Leighton, J. (2009). The hierarchy consistency index: Evaluating
person fit for cognitive diagnostic assessment. Journal of Educational
Measurement, 46(4), 429-449.
Cui, Y., & Li, J. C.-H. (2014). Evaluating person fit for cognitive diagnostic
assessment. Applied Psychological Measurement, 39, 223-238.
Cui, Y., & Mousavi, A. (2015). Explore the usefulness of person-fit analysis on
large-scale assessment. International Journal of Testing, 15, 23-49.
Gierl, M., Leighton, J., & Hunka, S. (2007). Using the attribute hierarchy method
to make diagnostic inferences about examinees’ cognitive skills. In J.
Leighton & M. Gierl (Eds.), Cognitive diagnostic assessment for education:
Theory and applications (pp. 242-274). Cambridge, UK: Cambridge
University Press.
Gierl, M., Cui, Y., & Zhou, J. (2009). Reliability and attribute-based scoring in
cognitive diagnostic assessment. Journal of Educational Measurement, 46(3),
293-313.
Gierl, M., Alves, C., & Taylor-Majeau, R. (2010). Using the attribute hierarchy
method to make diagnostic inferences about examinees' knowledge and
skills in mathematics: An operational implementation of cognitive
diagnostic assessment. International Journal of Testing, 10(4), 318-341.
Jang, E. (2005). A validity narrative: Effects of reading skills diagnosis on
teaching and learning in the context of NG TOEFL (Doctoral
dissertation). University of Illinois at Urbana-Champaign, IL, USA.
Orlando, M., & Thissen, D. (2003). Further investigation of the performance of
S-X2: An item fit index for use with dichotomous item response theory
models. Applied Psychological Measurement, 27(4), 289-298.
R Development Core Team (2011). R: A language and environment for statistical
computing. Vienna, Austria: R Foundation for Statistical Computing.
Reise, S. (1990). A Comparison of item- and person-fit methods of assessing
model-data fit in IRT. Applied Psychological Measurement, 14(2), 127-137.
Rost, J., & von Davier, M. (1994). A conditional item-fit index for Rasch models.
Applied Psychological Measurement, 18(2), 171-182.
Sinharay, S., Puhan, G., & Haberman, S. (2009, April). Reporting diagnostic
scores: Temptations, pitfalls, and some solutions. Paper presented at the
National Council on Measurement in Education, San Diego, CA, USA.
Sinharay, S., & Almond, R. (2007). Assessing fit of cognitive diagnostic models:
A case study. Educational and Psychological Measurement, 67(2), 239-257.
Wang, C., Shu, Z., Shang, Z., & Xu, G. (2015). Assessing item-level fit for the
DINA model. Applied Psychological Measurement, 1-14.
Yen, W. (1981). Using simulation results to choose a latent trait model. Applied
Psychological Measurement, 5, 245-262.
APPENDIX A
Table A1. The Q-matrix and skill patterns used for the simulation of CDA
responses
Pattern    Skill
           1   2   3   4   5   6   7
1          1   0   0   0   0   0   0
2          1   1   0   0   0   0   0
3          1   1   1   0   0   0   0
4          1   1   0   1   0   0   0
5          1   1   1   1   0   0   0
6          1   1   0   1   1   0   0
7          1   1   1   1   1   0   0
8          1   1   0   1   0   1   0
9          1   1   1   1   0   1   0
10         1   1   0   1   1   1   0
11         1   1   1   1   1   1   0
12         1   1   0   1   0   1   1
13         1   1   1   1   0   1   1
14         1   1   0   1   1   1   1
15         1   1   1   1   1   1   1
Table A2. Variables manipulated in the simulation
Condition                          Level 1   Level 2   Level 3
Test length                        15        30        45
Sample size                        800       1600      2400
Proportion of poor-fitting items   5%        10%       25%
Table A3. Description of the skills assessed in the CDA for subtraction in Grade 3
Cognitive Attribute   Skill Descriptor: Apply a mental mathematics strategy to subtract
6                     Two 2-digit numbers using the digits 1 to 9 with regrouping
5                     Two 2-digit doubles (e.g., 24, 36, 48, 12)
4                     Two 2-digit numbers where only the subtrahend is a multiple of 10
3                     Ten from a 2-digit number
2                     Two 2-digit numbers where the minuend and subtrahend are multiples of 10
1                     Two consecutive 2-digit numbers (e.g., 11, 22, 33)
Table A4. Q-matrix of the CDA for subtraction in Grade 3
Item   Skill
       1   2   3   4   5   6
1      1   0   0   0   0   0
2      1   0   0   0   0   0
3      1   0   0   0   0   0
4      1   1   0   0   0   0
5      1   1   0   0   0   0
6      1   1   0   0   0   0
7      1   1   1   0   0   0
8      1   1   1   0   0   0
9      1   1   1   0   0   0
10     1   1   1   1   0   0
11     1   1   1   1   0   0
12     1   1   1   1   0   0
13     1   1   1   1   1   0
14     1   1   1   1   1   0
15     1   1   1   1   1   0
16     1   1   1   1   1   1
17     1   1   1   1   1   1
18     1   1   1   1   1   1
Table A5. Summary of the mean ICI in extreme situations when n=2400
Item Quality         Proportion of Poor-Fitting Items
                     0%     25%    50%    100%
Well-Fitting Items   0.61   0.49   0.39   n/a
Poor-Fitting Items   n/a    0.33   0.28   0.15
International Journal of Learning, Teaching and Educational Research
Vol. 16, No. 1, pp. 22-37, January 2017
Factors That Determine Accounting Anxiety
Among Users of English as a Second Language
Within an International MBA Program
Alexander Franco and Scott S. Roach
Stamford International University, Graduate School of Business
Bangkok, Thailand
Abstract. The primary goal of this study was to determine the factors
related to accounting anxiety among MBA students who utilize English
as a second language (ESL). The analysis included components within
the learning environment and also differentiations as to demographic
variables such as gender, age, ethnicity, and any prior undergraduate
exposure to the study of accounting. A secondary goal of the study was
to determine perception of anxiety among ESL students in an MBA
program regarding quantitative courses as opposed to qualitative
courses. Finally, the study examined different strategies used by ESL
students to deal with accounting anxiety. The study found that there
were significant differences in accounting anxiety based on gender,
ethnicity, and exposure to undergraduate accounting. However, age was
not a factor. In addition, the study supported the hypothesis that there is
a negative relationship between levels of English proficiency and
accounting anxiety. It also supported the hypothesis that there is a
positive relationship between anxiety about classes involving
quantitative subject matter and accounting anxiety. Finally, the study
found no significant differences in coping strategies across levels of
accounting anxiety.
Keywords: accounting; accounting anxiety; English as a second language
(ESL); language anxiety; strategies regarding accounting anxiety
Introduction
Within the context of globalization, English has become the lingua franca of the
business world, a transnational instrument vital in both a local and a global
context (Buripakdi, 2014; Easthope, 1999). The study of language anxiety among
students using English as foreign language has been steadily growing for the
past three decades (Horwitz, 1991; Kao & Craigie, 2013; Kondo & Yang, 2004;
Mahmoodzadeh, 2012; Marwan, 2007; Ozturk & Gurbuz, 2014; Semmar, 2010;
Wang, 2010). During this period, a body of work has also been developed that
focused on anxiety suffered by students while studying accounting, although
none of the studies specifically examined a student body primarily consisting of
ESL students (Ameen, Guffey, & Jackson, 2002; Borja, 2003; Buckhaults & Fisher,
2011; Chen, Hsu, & Chen, 2013; Clark & Schwartz, 1989; Dull, Schleifer, &
McMillan, 2015; Duman, Apak, Yucenursen, & Peker, 2014; Ghaderi & Salehi,
2011; Malgwi, 2004; Uyar & Gungormus, 2011).
This study sought to investigate those factors that are related to varying anxiety
levels among students of accounting who are challenged with learning this
quantitative subject and its nomenclature while utilizing English as a second
language. The first section of this paper presents a review of related material on
accounting anxiety and proposes the hypotheses to be tested. The second part of
this paper provides a discussion of the research methodology and analysis of the
data collected. The final part presents utilitarian suggestions for minimizing
anxiety by ESL students as they learn accounting, as well as recommendations
for future research.
1. Literature Review
Academic anxiety, within a pedagogical context, can best be seen as an
emotional state that is not inherent, but situational, and one that can be
"treated" by creating an effective association between teaching and reducing
apprehension (Chu & Spires, 1991; Malgwi, 2004). Anxiety about learning accounting at a
level of higher education has been based on students’ perceptions that the
nomenclature of the subject is akin to learning a new language (Borja, 2003).
Further, the knowledge base for this subject is perceived as being extensive and
usually there is a corresponding apprehension that the period of time necessary
to properly comprehend the principles and application of accounting is
inadequate (Malgwi, 2004).
Previous studies suggest that differences in anxiety levels regarding the study of
technical material may be related to variables such as gender (Todman, 2000), age,
background experience or exposure to the subject being studied (Chu & Spires,
1991; McIlroy, Bunting, Tierney, & Gordon, 2001; Towell & Lauer, 2001) or
nationality/ethnicity (Burkett, Compton, & Burkett, 2001; Rosen & Weil, 1995).
Based on this, the following hypotheses were examined:
H1: There will be differences in accounting anxiety levels of ESL students
in an international MBA program across different demographic
groups.
H1a: There will be differences in accounting anxiety levels of ESL
students in an international MBA program across age groups.
H1b: There will be differences in accounting anxiety levels of ESL
students in an international MBA program across genders.
H1c: There will be differences in accounting anxiety levels of ESL
students in an international MBA program across different
ethnic groups.
H2: There will be differences in accounting anxiety levels of ESL students
in an international MBA program for those students who took an
undergraduate accounting course as opposed to those who did not.
Among ESL students, the level of anxiety in learning technical subjects and in
communication apprehension has been tied to the degree of their proficiency in
the use of the English language (Casado & Dereshiwsky, 2004; Horwitz,
Horwitz, & Cope, 1986; Marwan, 2007; Onwuegbuzie, Bailey, & Daley, 1999;
Pappamihiel, 2002). Therefore, H3 was proposed:
H3: There will be a negative relationship between level of English
proficiency and accounting anxiety for ESL students enrolled in an
international MBA program.
The degree of quantification in a course of study impacts the level of anxiety
experienced by students (Kao & Craigie, 2013; Kondo & Yang, 2004; Rosen &
Weil, 1995; Todman, 2000). Kondo & Yang (2004) devised a typology of
strategies (5 strategy categories from 70 basic tactics) that ESL students use to
cope with language anxiety. The strategies include peer seeking, positive
thinking, preparation, and resignation. From this, the following hypotheses were
proposed for testing:
H4: There will be a positive relationship between level of anxiety with
classes involving quantitative subject matter and accounting anxiety
for ESL students enrolled in an international MBA program.
H5: There will be differences in the accounting anxiety associated with the
coping strategy selected by ESL students enrolled in an international
MBA program.
2. Research Methodology and Findings
2.1 Sample
The population studied was the MBA student body of an international
university in Thailand, consisting of 380 ESL students: 57% female and
43% male; 64% Thai and 36% non-Thai. As per Krejcie and Morgan's
(1970) table of sample size determination, a sample of 190 was
calculated for this study. The sample consisted of 107 females (56% of the sample
population), and 83 males (44%). Within the sample, 105 (55.3%) were Thais, 16
(8.4%) were Thai of Chinese lineage (1st and 2nd generations) and 69 (36.3%) were
non-Thai.
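For reference, Krejcie and Morgan's (1970) table is generated by the formula below (chi-square of 3.841 for 1 df at the .05 level, P = .50, d = .05); for N = 380 it yields approximately 191, in line with the sample of 190 used here. The function name is ours.

```python
import math

# Krejcie & Morgan (1970) sample-size formula underlying their table:
# chi-square for 1 df at the .05 confidence level, maximum variability
# P = .50, and a .05 margin of error d.
def krejcie_morgan(population, chi_sq=3.841, p=0.50, d=0.05):
    numerator = chi_sq * population * p * (1 - p)
    denominator = d ** 2 * (population - 1) + chi_sq * p * (1 - p)
    return round(numerator / denominator)

print(krejcie_morgan(380))   # 191 for the 380-student population
print(krejcie_morgan(1000))  # 278, matching the published table
```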
2.2 Instrument
A self-administered questionnaire was used with 15 accounting-focused,
Likert-scale questions, many of which were modifications of the Horwitz
et al. (1986) Foreign Language Classroom Anxiety Scale (FLCAS), a
survey that has been used in several studies (Argaman & Abu-Rabia,
2002; Casado & Dereshiwsky, 2004; Marwan, 2007; Matsuda & Gobel,
2004; Semmar, 2010; Yashima, 2002). All
scales had a Cronbach alpha internal reliability score of over .80, indicating
consistency (Hair, Black, Babin, & Anderson, 2010; Sekaran, 2000; Tavakol &
Dennick, 2011). The questionnaire also tested coping strategies by incorporating
the Foreign Language Anxiety Coping Scale, which was designed by
Kondo and Yang (2004). This scale was assessed to have an alpha coefficient of .91
(Marwan, 2007), demonstrating high internal reliability.
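Cronbach's alpha, the internal-reliability coefficient reported above, can be computed directly from raw item scores. A minimal sketch with hypothetical 4-point Likert responses:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of
# total scores). The small response matrix below is hypothetical.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent."""
    k = len(rows[0])
    items = list(zip(*rows))  # transpose to per-item columns
    item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_vars / total_var)

# Five hypothetical respondents, four 4-point Likert items.
responses = [
    [4, 4, 3, 4],
    [3, 3, 3, 2],
    [2, 2, 1, 2],
    [4, 3, 4, 4],
    [1, 2, 2, 1],
]
print(round(cronbach_alpha(responses), 2))  # 0.93, above the .80 threshold
```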
The questionnaire consisted of a forced, 4-point Likert scale from “strongly
agree” to “strongly disagree.” A neutral option (e.g., “not sure”) was
deliberately avoided because of cultural traits within Thai society that inhibit the
motivation to express personal opinion: a strong hierarchical system with high
power-distance and kreng jai, the culturally operationalized practice of avoiding
the display of emotion or asserting one's opinion (Holmes, Tangtongtavy, &
Tomizawa, 2003; Johnson & Morgan, 2016; Suntaree, 1990). The questionnaire
was translated into Thai for Thai students (and translated back into English to
assure accuracy) in order to maximize effective feedback (Behling & Law, 2000;
Harkness, van de Vijver, & Mohler, 2002; Dörnyei & Taguchi, 2009). An English
language version was distributed to non-Thai ESL students. The questionnaire
was administered during a six-month period by the same lecturer who taught
the only accounting course (a core course) required by the university’s MBA
program. The questionnaire was administered on the first day of each
starting class during that period.
2.3 Findings
The first hypothesis proposed that there would be differences in accounting
anxiety levels across groups defined by the demographic variables of age,
gender and ethnicity. Descriptives for the first of these three demographic
factors are presented below in Table 1. As shown in the table, the mean
accounting anxiety rating declines consistently across the four age groups.
Table 1: Descriptive Analysis of Accounting Anxiety Ratings by Age Group*
Age Group   N     Min   Max   M      SD
18-22       58    1     4     3.17   .920
23-25       48    1     4     2.94   .836
26-30       46    1     4     2.91   .784
30+         38    1     4     2.74   .724
Total       190               2.96   .838
*Where 1 = Strongly Disagree and 4 = Strongly Agree with the statement: Taking
an accounting class gives me high anxiety (i.e., feeling of stress, fear).
In order to test whether this decline was statistically significant, a one-way
ANOVA was performed to analyze differences in accounting anxiety ratings
across the age groups. The results are displayed in Table 2 below. Results
indicate no significant difference across the four age groups for accounting
anxiety, F (3, 186) = 2.242, p = .085. Therefore, Hypothesis 1a is rejected.
Table 2: One-Way Analysis of Variance of Accounting Anxiety Scores by Age
Group
Source df SS MS F p
Between Groups 3 4.633 1.544 2.242 .085
Within Groups 186 128.109 .689
Total 189 132.742
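As a check on the table, the F statistic follows directly from the reported sums of squares and degrees of freedom (F = MS between / MS within). A minimal sketch of that arithmetic (the variable names are ours, not the authors'):

```python
# Recompute the one-way ANOVA F statistic for accounting anxiety by
# age group from the summary values reported in Table 2.
ss_between, df_between = 4.633, 3       # between-groups sum of squares, df
ss_within, df_within = 128.109, 186     # within-groups sum of squares, df

ms_between = ss_between / df_between    # mean square between groups
ms_within = ss_within / df_within       # mean square within groups
f_stat = ms_between / ms_within

print(round(ms_between, 3), round(ms_within, 3), round(f_stat, 3))
# prints: 1.544 0.689 2.242
```

This reproduces the reported F (3, 186) = 2.242; the p value of .085 then comes from the F distribution with those degrees of freedom.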
The second part of this hypothesis proposed differences in accounting anxiety
across gender groups. Descriptive statistics by gender are presented below in
Table 3. As shown in the Table, the mean female accounting anxiety rating is
slightly higher than the mean rating for males.
Table 3: Descriptive Analysis of Accounting Anxiety Ratings by Gender*
Gender N Min Max M SD
Male 83 1 4 2.77 .860
Female 107 1 4 3.11 .793
Total 190 2.96 .838
*Where 1 = Strongly Disagree and 4 = Strongly Agree with the statement: Taking
an accounting class gives me high anxiety (feeling of stress, fear).
In order to test whether this difference was significant, a t-test was conducted.
Results of that test are provided in Table 4, below. The results indicate a
significant difference in scores, with females reporting significantly higher
levels of accounting anxiety (M=3.11, SD= .793) as compared to males (M=2.77,
SD= .860), t (188) = -2.834, p = .005. Therefore, Hypothesis 1b is supported.
Table 4: Comparison of Anxiety Ratings by Gender*
Gender N Mean SD t df p 95% Confidence Interval
Male 83 2.77 .860
Female 107 3.11 .793
Total 190 2.96 .838 -2.834 188 .005 -.578 – -.101
*Where 1 = Strongly Disagree and 4 = Strongly Agree with the statement: Taking
an accounting class gives me high anxiety (i.e., feeling of stress, fear).
The third part of Hypothesis 1 proposed that there would be differences in
accounting anxiety ratings across different ethnic groups. Table 5 provides the
descriptive statistics associated with the three ethnic groups that were analyzed.
Table 5: Descriptive Analysis of Accounting Anxiety Ratings by Ethnic Group*
Ethnic Group N Min Max M SD
Thai of Chinese descent 18 2 4 3.13 .619
Thai 106 1 4 3.09 .810
Not Thai 69 1 4 2.74 .885
Total 190 2.96 .838
*Where 1 = Strongly Disagree and 4 = Strongly Agree with the statement: Taking
an accounting class gives me high anxiety (i.e., feeling of stress, fear).
Testing for significant differences in accounting anxiety ratings across the three
ethnic groups was conducted with a one-way ANOVA. Findings of this analysis
are presented in Table 6 below. As depicted in the table, there was a statistically
significant difference between the ethnic groups as determined by the one-way
ANOVA, F (2, 187) = 4.010, p = .020. Therefore, Hypothesis 1c is supported. A
Tukey post hoc test was then performed, revealing that the Thai group had
statistically significantly higher ratings of accounting anxiety as compared with
the Not Thai group (3.09 ± .810, p = .020).
In sum, Hypothesis 1 proposed that there would be differences across the
demographic groups of age, gender and ethnicity. Upon testing, the age portion
of Hypothesis 1 was rejected, the gender portion was supported, and differences
in accounting anxiety were found to exist between the “Thai” and “Not Thai”
groups.
Table 6: One-Way Analysis of Variance of Accounting Anxiety Scores by Ethnic
Group
Source df SS MS F p
Between Groups 2 5.459 2.730 4.010 .020
Within Groups 187 127.283 .681
Total 189 132.742
Hypothesis 2 proposed that there would be differences in accounting anxiety
levels between those ESL students who had taken an undergraduate accounting course
and those who had not. Descriptive statistics for these two groups are presented
in Table 7.
Table 7: Descriptive Analysis of Accounting Anxiety Ratings by Whether or Not
Student Had an Undergraduate Accounting Class*
Undergrad Class N Min Max M SD
Yes 96 1 4 2.79 .882
No 94 1 4 3.14 .756
Total 190 2.96 .838
*Where 1 = Strongly Disagree and 4 = Strongly Agree with the statement:
“Taking an accounting class gives me high anxiety” (i.e., feeling of stress, fear).
As shown in the table, those students who reported having had an
undergraduate class in accounting had lower mean accounting anxiety ratings.
To test whether this difference was significant, a t-test was run on the
accounting anxiety ratings of the two groups. The results of this test are
reported below in Table 8. The results indicate a significant difference in scores,
with ESL students who had taken an undergraduate accounting course reporting
significantly lower levels of accounting anxiety (M=2.79, SD= .882) as
compared to those students who had not (M=3.14, SD= .756), t (188) = -2.271,
p = .004. Therefore, Hypothesis 2 is supported.
Table 8: Comparison of Anxiety Ratings by Whether or Not Student Had Taken
an Undergraduate Accounting Class*
Undergrad Class N Mean SD t df p 95% Confidence Interval
Yes 96 2.79 .882
No 94 3.14 .756
Total 190 2.96 .838 -2.834 188 .004 -.582 – -.111
*Where 1 = Strongly Disagree and 4 = Strongly Agree with the statement:
“Taking an accounting class gives me high anxiety” (i.e., feeling of stress, fear).
The third hypothesis proposed that there is a significant negative relationship
between English proficiency and accounting anxiety for ESL students. Self-
reported English proficiency levels ranged from 1, “Bad” to 5, “Excellent” (N =
190; M = 3.54; SD = .801). Ratings of accounting anxiety ranged from 1, “Strongly
Disagree” to 4, “Strongly Agree” with the statement “Taking an accounting class
gives me high anxiety” (i.e., feeling of stress, fear) (N = 190; M = 2.96; SD = .838).
A simple regression analysis showed that the level of English proficiency
significantly affected ratings of accounting anxiety. Results of the analysis are
presented in Table 9, below. The higher the English proficiency rating, the
lower the accounting anxiety rating (t = -2.899; p = .004). Therefore, Hypothesis
3 is supported. However, R² = .043, so the predictive power of the model is
quite low.
Table 9: Summary of the Simple Regression Analysis for English Proficiency and
Accounting Anxiety
Variable B SE(B) β t p
English Proficiency -.216 .075 -.207 -2.899 .004
R² = .043
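With a single predictor, the entries in Table 9 are linked by two identities: the standardized coefficient is β = B × (SD of x) / (SD of y), and R² = β². A sketch checking this against the reported descriptives (SD = .801 for proficiency, SD = .838 for anxiety); rounding in the published values accounts for the third-decimal differences:

```python
# Internal-consistency checks for the simple regression in Table 9.
B = -0.216       # unstandardized slope
sd_x = 0.801     # SD of English proficiency ratings
sd_y = 0.838     # SD of accounting anxiety ratings

beta = B * sd_x / sd_y    # standardized coefficient
r_squared = beta ** 2     # R-squared equals beta squared with one predictor

print(round(beta, 3), round(r_squared, 3))
# prints: -0.206 0.043
```

β comes out at -.206 against the reported -.207, and β² reproduces the reported R² of .043.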
Hypothesis 4 proposed a positive relationship between anxiety toward
quantitatively based classes and accounting anxiety ratings. This was based on
self-reported anxiety with quantitatively based classes, rated from 1, “Strongly
Disagree” to 4, “Strongly Agree” with the statement, “I get anxiety from an
accounting class because of the numbers involved” (N = 190; M = 2.75; SD =
.913). A simple regression analysis was used to test this relationship. The results
of this analysis are presented below in Table 10. These results indicate that as a
person’s anxiety with quantitatively based classes increases, so do their ratings
of accounting anxiety (t = 10.386; p < .001). Therefore, Hypothesis 4 is
supported. R² = .365, so the independent variable (anxiety with quantitatively
based classes) explains 36.5% of the variance in the dependent variable,
accounting anxiety.
Table 10: Summary of the Simple Regression Analysis for Quantitative Class
Anxiety and Accounting Anxiety
Variable B SE(B) β t p
Quantitative Class Anxiety .555 .053 .604 10.386 < .001
R² = .365
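The same identities apply to Table 10, using SD = .913 for quantitative-class anxiety and SD = .838 for accounting anxiety:

```python
# Internal-consistency checks for the simple regression in Table 10.
B = 0.555        # unstandardized slope
sd_x = 0.913     # SD of quantitative-class anxiety ratings
sd_y = 0.838     # SD of accounting anxiety ratings

beta = B * sd_x / sd_y    # standardized coefficient
r_squared = beta ** 2     # R-squared for a one-predictor model

print(round(beta, 3), round(r_squared, 3))
# prints: 0.605 0.366
```

β ≈ .605 against the reported .604, and β² ≈ .366 against the reported R² of .365 — again within rounding of the published coefficients.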
The final hypothesis suggests that differences in accounting anxiety will be
associated with the coping strategy employed by ESL students. As displayed in
Table 11, the means do differ across the various strategies employed by the
students. This is particularly true for “Positive Thinking” and for “Peer Seeking”
which fall at the lowest and highest levels of accounting anxiety, respectively. In
order to determine whether these differences were significant, a one-way
ANOVA was performed to examine group differences in accounting anxiety
scores. The results of this analysis are reported in Table 12.
Table 11: Descriptive Analysis of Accounting Anxiety Ratings by Coping
Strategy*
Coping Strategy N Min Max M SD
Preparation 100 1 4 2.97 .758
Relaxation 22 2 4 2.91 .921
Positive Thinking 47 1 4 2.79 .977
Peer Seeking 21 2 4 3.38 .669
Total 190 2.96 .838
*Where 1 = Strongly Disagree and 4 = Strongly Agree with the statement:
“Taking an accounting class gives me high anxiety” (i.e., feeling of stress, fear).
Table 12: One-Way Analysis of Variance of Accounting Anxiety Scores by
Coping Strategy
Source df SS MS F p
Between Groups 3 5.189 1.730 2.522 .059
Within Groups 186 127.553 .686
Total 189 132.742
As shown in Table 12, the results indicate no significant difference across the
four coping strategy groups for accounting anxiety, F (3, 186) = 2.522, p = .059.
Therefore, Hypothesis 5 is rejected.
A summary of the findings of this study is provided below in Table 13. Two of
the demographic factors (gender and ethnicity) were associated with varied
levels of accounting anxiety, but differences by age were rejected. Having taken
an undergraduate course in accounting significantly reduced accounting
anxiety. In addition, English proficiency was shown to be negatively related to
accounting anxiety. Anxiety toward courses with quantitative content was
positively related to accounting anxiety. Coping strategies employed by
students did not vary significantly by level of accounting anxiety.
Table 13: Summary of Study Findings
Hypothesis Result
H1a Differences in Accounting Anxiety by Age Rejected
H1b Differences in Accounting Anxiety by Gender Supported
H1c Differences in Accounting Anxiety by Ethnicity Supported
H2 Differences in Accounting Anxiety by Undergraduate Accounting Supported
H3 Negative Relationship between English Proficiency and Accounting Anxiety Supported
H4 Positive Relationship between Anxiety for Quantitative Courses and Accounting Anxiety Supported
H5 Differences in Coping Strategy by Level of Accounting Anxiety Rejected
As a part of this study, the ESL students were asked to rate the various core
subjects, and work on their thesis, in terms of the difficulty of learning the
subject in English. Table 14 presents the results of these questions. As shown in
the table, the subjects based on primarily quantitative content (accounting,
M = 2.09, SD = .733; and finance, M = 2.22, SD = .751) were rated as more
difficult than the subjects that are more theoretical in nature (marketing,
M = 2.97, SD = .629; and management, M = 2.94, SD = .672). The two subject
areas that employ both quantitative analysis and theory (research methods,
M = 2.55, SD = .780; and thesis, M = 2.25, SD = .877) were rated in the middle in
terms of difficulty, with thesis being closer to the quantitative subjects.
Table 14: Difficulty of Studying Subjects in English Ratings by Percentage*
Subject Very Difficult Somewhat Difficult Somewhat Easy Very Easy Mean Standard Deviation
Accounting 16.8 63.2 14.2 5.8 2.09 .733
Finance 12.6 59.5 21.1 6.8 2.22 .751
Marketing 1.1 17.9 63.7 17.4 2.97 .629
Research Methods 6.8 42.1 40.0 11.0 2.55 .780
Management 1.6 21.1 59.5 17.9 2.94 .672
Thesis 19.5 45.8 25.3 9.5 2.25 .877
*Where 1 = Very Difficult and 4 = Very Easy
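Each mean in Table 14 is recoverable as the weighted average of the scale points 1–4 with the row percentages as weights. A sketch for two of the rows (subject names and values taken from the table):

```python
# Recover Table 14 means from the percentage distributions.
scale = (1, 2, 3, 4)  # 1 = Very Difficult ... 4 = Very Easy

rows = {
    "Accounting": (16.8, 63.2, 14.2, 5.8),
    "Marketing": (1.1, 17.9, 63.7, 17.4),
}

for subject, pcts in rows.items():
    # Weighted mean; dividing by the percentage total absorbs
    # rounding in the published percentages.
    mean = sum(s * p for s, p in zip(scale, pcts)) / sum(pcts)
    print(subject, round(mean, 2))
# prints: Accounting 2.09
#         Marketing 2.97
```

This matches the reported means of 2.09 and 2.97.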
3. Conclusion and Recommendations
Though the findings did not support a statistically significant difference in
accounting anxiety by age, they did reveal significant differences for the factors
of gender, ethnicity, and exposure to undergraduate accounting. The findings also
supported a negative relationship between levels of English proficiency and
accounting anxiety, as well as a positive relationship between levels of
accounting anxiety and the quantitative nature of business courses. Finally, the
study did not find a significant difference between levels of accounting anxiety
and the selection of coping strategies for such anxiety. These mixed results are
consistent with the disparity among the studies discussed in the literature
review. However, it is important to emphasize that this study differs from most
of the studies in the literature review in that it examines anxiety within the
context of learning the subject of accounting using English as a second language.
Within that context, Franco (2016) suggested the following eight tactical
components for lowering anxiety in general, and accounting anxiety in
particular, within an ESL environment:
1. Initial assessment of students. This can be done in two ways: On the
first day of class, the student fills out a simple one-page form that
requests information on the student’s knowledge of the subject matter
but also asks the student to evaluate himself/herself as to English
proficiency by way of a Likert scale. The form should also include
questions like, “Who do you admire most?” Each student is then asked to
introduce himself/herself to the class and verbally answer some of the
questions on the form. This allows the teacher to make initial assessments
of each student (written and oral presentation) as well as obtain a
general assessment of the level of English proficiency of the group in
order to adapt the course accordingly. Secondarily, the assessment form
allows the instructor to determine any previous knowledge of
accounting by the students, whether from undergraduate courses or
work-related experience. This permits a better initial determination of
the pace at which the accounting course should proceed.
2. Vocabulary Buildup and Word “Dissection.” Absorption of the
nomenclature of accounting is difficult enough for those tackling the
subject in their native language. In an ESL environment, it is vital that
students be introduced to key words and phrases even if this requires a
discussion of such vocabulary before beginning the lecture. The lecturer
should reinforce the meaning of key terms/phrases and provide a
context within which they have meaning. Without a focus on building up
the vocabulary for a particular lecture, there is a stronger likelihood that
some students will not be able to follow the narrative. Frustration will set
in as key terms, not properly absorbed by the student, will become
obstacles in comprehending the narrative and context of the discussion.
The lecturer should write key words and phrases on the board, along
with their definitions, and require the students to them write down. This
creates a mental imprimatur since students are more likely to remember
a word if they physically see it and work with it. Grammatical analysis of
a word can be performed by “dissecting” it and presenting its
grammatical variations. For example, a word like “accountability” –
defined as being held responsible for something – can be broken up from
its noun form to its adjective – “accountable” – and the verb phrase “to
account for.” This dissection, along with the lecturer’s use of the word
within a context and the solicited use of the word from students in a
sentence or two, allows the students to “chew” on the word or phrase
and obtain an adequate comfort level of understanding.
3. Concept Checking. Concept checking involves asking questions to
students to test the depth of their knowledge of newly accumulated
information. These questions are sometimes difficult to construct, and
some see their creation as more of an art form than a skill. Concept
checking is developed in part by anticipating, beforehand, the concept
checking questions you might use. However, it is primarily developed
through practice and experience – “thinking on your feet.” Concept
checking should be used throughout the lecture. In some situations, you
can repeat a concept checking question that was successfully used in the
same lecture in the past. However, the teacher will have to be conscious
of coming up with new and pertinent concept checking questions within
the serendipitous dynamics of the classroom discussion. This is an art
form more than anything else, and the interaction of concept checking
allows for a good balance between teacher talking time and student
talking time.
Concept checking is not open questioning. Avoid questions such as, “Do
you understand?” that can merely be answered with “yes” or “no.” If your
narrative flow causes you to create a question that can be answered in
that way, follow up with “why?” “Marry” students in the class to come
up with financial solutions to a marriage or business partnership
problem. This personalizes the class analysis and gets students to interact
with each other. The teacher should avoid adding unfamiliar vocabulary
when working through concept checking. This is part of a self-imposed
discipline that is always conscious of the ESL experience and the
appropriate implementation of knowledge within that setting.
4. Eliciting. Eliciting can be simply defined as asking for answers
(information) instead of just giving out the information. In a learner-
centered classroom this provides for constant interaction. Eliciting
should be performed by choosing students – not by depending on
volunteers (i.e., the “alpha” few who will dominate classroom discussions
if the teacher allows it). Choosing students also keeps all students
alert (“on their toes”) and avoids the awkward situation where a
question asked to the entire class is met with silence. Even if the student
chosen by the lecturer does not have an answer, he/she will usually
provide some response that the teacher can build on. Letting everyone
know that they can and will be called on helps to identify students who
are falling behind (“stragglers”).
Pace yourself in your elicitations. Avoid repetition, condescension, and
the need to turn everything into a question. Avoid asking questions
about material that has already been covered unless you are conducting a
review for an examination.
5. Pacing. Even while abiding by the institution’s guidelines, rules, and
expectations, the lecturer remains the “master of his domain” within
his/her classroom. Lectures, homework, assignments, projects, and
examinations are all the creations of the teacher. Especially in the ESL
environment, the teacher must recognize the need to alter the pace of a
lecture and even the pace of the entire course. Slow down when red flags
and bells are going off. This is particularly true regarding subject matter
that is built in layers (like accounting) where the next layer requires that
you fundamentally understand the prior layer(s) of knowledge. If the
lecturer keeps moving just to follow a schedule of his/her own design (e.g., a
stated calendar on the syllabus), the result will be poor performances on
the midterm exam. At that point the lecturer will have to go “back to
basics” or risk moving forward and witnessing poor performances again,
this time on the final exam. Almost nothing is more nonsensical for a
lecturer than shackling himself/herself to rigid or impractical time
restraints that were self-created and self-imposed.
6. Monitoring. In an ESL, learner-centered environment the interaction
should not only be verbal but also physical. The lecturer should not hide
behind a podium or desk. Instead, the lecturer should move around
to keep the students alert and away from their phones or from Facebook
on their laptops. Moving amongst the students also allows for better eliciting,
“marrying students,” and concept checking. When students are
performing an in-class exercise (e.g., accounting), the teacher should
move from one student to the next to see if the student is stuck on a word
or a concept. Sometimes they are stuck on a verb or some other word
within an explanatory or instructional text. An explanation or
clarification at that moment is crucial. Otherwise, the student gets stuck
and needlessly frustrated at the very start and gives up on solving the
problem or resorts to looking to the student next to him/her for the
answer. Sometimes a student who is stuck asks another student for an
explanation. When a teacher sees this, he/she should step in, do the
explanation, and provide further guidance.
7. Use of Paper. ESL students need to see physical words, not just hear
them. They need a physical imprint. PowerPoint slides have limited
impact unless the students have the text of the slides in front of them. If
the lecturer gives handouts of core material (material that will be tested),
the student has the pertinent text and can make notes, including the
meaning of a word in his/her native language.
For test preparation, ESL students tend to rely on paper since they are
not only looking at concepts but also the specific words that constitute
the definition or explanation of that concept.
8. Feedback. It is nonsensical to wait until the student evaluations to obtain
feedback on how well ESL students are coping with their English
comprehension in a business course. Feedback is best solicited from the
first day of the course, on an individual basis when the student feels
he/she can be more candid or less embarrassed (i.e., no disclosure in
public). Feedback can be attained before and after class, during breaks,
by email, and at office hours. The teacher can also specifically approach
students that he/she feels are having trouble. Individual feedback, in the
aggregate, can help the teacher determine the overall situation in the
class and who the “stragglers” are.
The continuation of globalization guarantees the internationalization of higher
education business studies using English as the commercial lingua franca. This
study focused specifically on accounting anxiety experienced by ESL students. A
body of literature needs to be created to specifically address accounting anxiety
within the context of ESL education.
References
Ameen, E.C., Guffey, D. M., & Jackson, C. (2002). Evidence of teaching anxiety among
educators. Journal of Education for Business, September/October, 16-22.
Argaman, O., & Abu-Rabia, S. (2002). The influence of language anxiety on English
reading and writing tasks among Hebrew speakers. Language, Culture, and
Curriculum, 15(2), 143-160.
Behling, O., & Law. K. S. (2000). Translating questionnaires and other research instruments:
Problems and solutions. Thousand Oaks, CA: SAGE Publications, Inc.
Borja, P. M. (2003). So you’ve been asked to teach principles of accounting. Business
Education Forum, 58(2), 30-32.
Buckhaults, J., & Fisher, D. (2011). Trends in accounting education: Decreasing
accounting anxiety and promoting new methods. Journal of Education for
Business, 86, 31-35.
Buripakdi, A. (2014). Hegemonic English, standard Thai, and narratives of the subaltern
in Thailand. In P. Liamputtong (Ed), Contemporary Socio-cultural and Political
Perspectives In Thailand (pp. 95-109). Dordrecht, Netherlands: Springer.
Burkett, W. H., Compton, D.M., & Burkett, G.G. (2001). An examination of computer
attitudes, anxieties, and aversions among diverse college populations: Issues
central to understanding information sciences in the new millennium. Informing
Science 4(3), 77- 85.
Casado, M. A., & Dereshiswsky, M. I. (2004). Effect of educational strategies on anxiety
in the second language. College Student Journal, 38(1), 23-35.
Chen, B. H., Hsu, M., & Chen, M. (2013). The relationship between learning attitude and
anxiety in accounting classes: The case of hospitality management university
students in Taiwan. Qual Quant 47, 2815-2827.
Chu, P. C., & Spires, E. E. (1991). Validating the computer anxiety rating scale: Effects of
cognitive style and computer courses on computer anxiety. Computers in Human
Behavior 7(1/2), 7-21.
Clark, C. E., & Schwartz, B. N. (1989). Accounting anxiety: An experiment to determine
the effects of an intervention on anxiety levels and achievement of introductory
accounting students. Journal of Accounting Education 7, 149-169.
Dörnyei, Z., & Taguchi, T. (2009). Questionnaires in second language research: Construction,
administration, and processing (2nd ed.). London: Routledge.
Dull, R. B., Schleifer, L. F., & McMillan, J. J. (2015). Achievement goal theory: The
relationship of accounting students’ goal orientations with self-efficacy, anxiety,
and achievement. Accounting Education: An International Journal 24(2), 152-174.
Duman, H., Apak, I., Yucenursen, M., & Peker, A. A. (2015). Determining the anxieties of
accounting education students: A sample of Aksaray University. Procedia – Social
and Behavioral Sciences 174, 1834-1840.
Easthope, A. (1999). Englishness and national culture. London: Routledge.
Franco, A. (2016). MBA instructor’s guide for teaching business to ESL students.
Unpublished manuscript, Bangkok, Thailand.
Ghaderi, A. R., & Salehi, M. (2011). A study of the level of self-efficacy, depression and
anxiety between accounting and management students: Iranian evidence. World
Applied Sciences Journal 12(8), 1299-1306.
Hair, J. F. Jr., Black, W. C., Babin, B.J., & Anderson, R. E. (2010). Multivariate data analysis:
a global perspective (7th ed.). Saddle River, NJ: Prentice-Hall International.
Harkness, J. A., van de Vijver, F. J. R., & Mohler, P. P. (2002). Cross-cultural survey
methods. Hoboken, NJ: Wiley-Interscience.
Holmes, H., Tangtongtavy, S., & Tomizawa, R. (2003). Working with the Thais: A guide to
managing in Thailand (2nd ed.). Bangkok: White Lotus Press.
Horwitz, E. (1991). Preliminary evidence for the reliability and validity of a foreign
language anxiety scale. In E. K. Horwitz & D. J. Young (Eds.) Language anxiety:
From theory and research to classroom implications. Englewood Cliffs, NJ: Prentice
Hall.
Horwitz, M. B., Horwitz, E. K., & Cope, J. A. (1986). Foreign language classroom anxiety.
The Modern Language Journal, 70(2), 125-132.
Johnson, R. L., & Morgan, G. B. (2016). Survey scales: A guide to development, analysis, and
reporting. New York: The Guilford Press.
Kao, P., & Craigie, P. (2013). Coping strategies of Taiwanese university students as
predictors of English language learning anxiety. Social Behavior and Personality
41(3), 411-420.
Kondo, D. S., & Yang, Y-L. (2004). Strategies for coping with language anxiety: The case
of students of English in Japan. ELT Journal 58(3), 258-265.
Krejcie, R. V., & Morgan, D. (1970). Determination of sample size for research activities.
Educational and Psychological Measurement 30, 607-610.
Mahmoodzadeh, M. (2012). Investigating foreign language speaking anxiety within the
EFL Learner’s inter-language system: The case of Iranian learners. Journal of
Language Teaching and Research 3(3), 466-476.
Malgwi, C. A. (2004). Determinants of accounting anxiety in business students. Journal of
College Teaching and Learning 1(2), 81-94.
Marwan, A. (2007). Investigating students’ foreign language anxiety. Malaysian Journal of
ELT Research, 3, 37-55.
Matsuda, S., & Gobel, P. (2004). Anxiety and predictors of performance in the foreign
language classroom. System 32, 21-36.
McIlroy, D., Bunting, B., Tierney, K., & Gordon, M. (2001). The relation of gender and
background experience to self-reported computing anxieties and cognitions.
Computers in Human Behavior 17, 21-33.
Onwuegbuzie, A., Bailey, P., & Daley, C. E. (1999). Factors associated with foreign
language anxiety. Applied Socio Linguistics 20(2), 218-239.
Ozturk, G. & Gurbuz, N. (2014). Speaking anxiety among Turkish EFL learners: The case
at a state university. Journal of Language and Linguistic Studies 10(1), 1-17.
Pappamihiel, N. E. (2002). English as a second language student and English language
anxiety issues in the mainstream classroom. Proquest Education Journal 36(3), 327-
355.
Rosen, L. D., & Weil, M. M. (1995). Computer anxiety: A cross-cultural comparison of
university students in ten countries. Computers in Human Behavior 11(1), 45-64.
Sekaran, U. (2000). Research methods for business: A skill building approach (4th ed.). NY:
John Wiley & Sons, Inc.
Semmar, Y. (2010). First year university students and language anxiety: Insights into the
English version of the foreign language classroom anxiety scale. The International
Journal of Learning, 17(1), 81-93.
Suntaree, K. (1990). Psychology of the Thai people: Values and behavioral patterns. Bangkok:
Research Institute of Development Administration.
Tavakol, M., & Dennick, R. (2011). Making sense of Cronbach’s alpha. International
Journal of Medical Education 2, 53-55.
Todman, J. (2000). Gender differences in computer anxiety among university entrants
since 1992. Computers & Education 34, 27-35.
Towell, E. R., & Lauer, J. (2001). Personality differences and computer related stress in
business students. Mid-American Journal of Business 16(1), 69-75.
Uyar, A., & Gungormus, A. H. (2011). Factors associated with student performance in
financial accounting course. European Journal of Economic and Political Studies 2,
139-154.
Wang, S. (2010). An experimental study of Chinese English major students’ listening
anxiety of classroom learning activity at the university level. Journal of Language
Teaching and Research, 1(5), 562-568.
Yashima, T. (2002). Willingness to communicate in a second language. The Japanese EFL
context. Modern Language Journal 86(1), 54-66.
© 2017 The author and IJLTER.ORG. All rights reserved.
International Journal of Learning, Teaching and Educational Research
Vol. 16, No. 1, pp. 38-56, January 2017
(Mis)Reading the Classroom: A Two-Act “Play” on
the Conflicting Roles in Student Teaching
Christi Edge, Ph.D.
Northern Michigan University
Marquette, Michigan, United States of America
Abstract. This case study examined concentric and reciprocal notions of
reading—that of high school students, a pre-service teacher, and a
teacher educator. An intern charged with teaching students to read,
interact with, and compose texts in an English/language arts classroom
constructed her role in the classroom based on her reading the “text” of
her internship experiences, relationships, and responsibilities. Using
interviews and observations, a teacher educator read and interpreted the
classroom “text” the pre-service teacher “composed” during her
internship and then constructed a two-act “play” which details the
conflict in the intern’s enacting of the dual role of student-teacher and her
subsequent reading of the classroom “text” from her stance as student-
teacher. Concepts of classroom literacy for teachers and teacher
educators are considered.
Keywords: teacher education; reading classroom text; classroom
literacy; student teaching internship; stance
Introduction
In light of growing pedagogical, professional, and public awareness that twenty-
first century literacy involves more than just printed words on a page and that
specific literacies are acquired throughout the duration of an individual’s
education (Barton, 2000; Biancarosa & Snow, 2006; Buehl, 2014; Clark & Flores,
2007; Draper, 2011; Gee, 2012; International Reading Association, 2012; Langer,
1987; Lankshear & Knobel, 2007; Maclellan, 2008; National Council Teachers of
English, 2007, 2008; National Center for Education Statistics [NCES], 2006, 2007;
Rogers, 2000), it is time to consider the professional literacy needs of the very
individuals to whom we look to educate our children and our adolescents
(International Literacy Association, 2015).
Review of the Literature
Lad Tobin (2004) implies a connection between the disciplinary focus of
studying texts and the pedagogical importance of studying classrooms as text by
asserting that "teaching is a way of reading and writing. Students learn to teach
through, first, learning to read the classroom and, second, learning to write
themselves within that classroom" (p. 129). A teacher is simultaneously a reader
and a writer of her classroom. Like readers whose meaning making is framed by
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and Actinides
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
Micromeritics - Fundamental and Derived Properties of Powders
Micromeritics - Fundamental and Derived Properties of PowdersMicromeritics - Fundamental and Derived Properties of Powders
Micromeritics - Fundamental and Derived Properties of Powders
 
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptx
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
 

Vol 16 No 1 - January 2017

  • 1. International Journal of Learning, Teaching And Educational Research p-ISSN: 1694-2493 e-ISSN: 1694-2116 IJLTER.ORG Vol.16 No.1
  • 2. PUBLISHER
London Consulting Ltd
District of Flacq
Republic of Mauritius
www.ijlter.org

Chief Editor
Dr. Antonio Silva Sprock, Universidad Central de Venezuela, Venezuela, Bolivarian Republic of

Editorial Board
Prof. Cecilia Junio Sabio
Prof. Judith Serah K. Achoka
Prof. Mojeed Kolawole Akinsola
Dr Jonathan Glazzard
Dr Marius Costel Esi
Dr Katarzyna Peoples
Dr Christopher David Thompson
Dr Arif Sikander
Dr Jelena Zascerinska
Dr Gabor Kiss
Dr Trish Julie Rooney
Dr Esteban Vázquez-Cano
Dr Barry Chametzky
Dr Giorgio Poletti
Dr Chi Man Tsui
Dr Alexander Franco
Dr Habil Beata Stachowiak
Dr Afsaneh Sharif
Dr Ronel Callaghan
Dr Haim Shaked
Dr Edith Uzoma Umeh
Dr Amel Thafer Alshehry
Dr Gail Dianna Caruth
Dr Menelaos Emmanouel Sarris
Dr Anabelie Villa Valdez
Dr Özcan Özyurt
Assistant Professor Dr Selma Kara
Associate Professor Dr Habila Elisha Zuya

International Journal of Learning, Teaching and Educational Research
The International Journal of Learning, Teaching and Educational Research is an open-access journal established for the dissemination of state-of-the-art knowledge in the field of education, learning and teaching. IJLTER welcomes research articles from academics, educators, teachers, trainers and other practitioners on all aspects of education, and publishes high-quality peer-reviewed papers. Papers for publication in the International Journal of Learning, Teaching and Educational Research are selected through precise peer review to ensure quality, originality, appropriateness, significance and readability. Authors are solicited to contribute to this journal by submitting articles that illustrate research results, projects, original surveys and case studies that describe significant advances in the fields of education, training, e-learning, etc. Authors are invited to submit papers to this journal through the ONLINE submission system.
Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated by IJLTER.
  • 3. VOLUME 16 NUMBER 1 January 2017

Table of Contents

Item Consistency Index: An Item-Fit Index for Cognitive Diagnostic Assessment .......... 1
Hollis Lai, Mark J. Gierl, Ying Cui and Oksana Babenko

Factors That Determine Accounting Anxiety Among Users of English as a Second Language Within an International MBA Program .......... 22
Alexander Franco and Scott S. Roach

(Mis)Reading the Classroom: A Two-Act "Play" on the Conflicting Roles in Student Teaching .......... 38
Christi Edge

Coping Strategies of Greek 6th Grade Students: Their Relationship with Anxiety and Trait Emotional Intelligence .......... 57
Alexander-Stamatios Antoniou and Nikos Drosos

Active Learning Across Three Dimensions: Integrating Classic Learning Theory with Modern Instructional Technology .......... 72
Thaddeus R. Crews, Jr.

The Effects of Cram Schooling on the Ethnic Learning Achievement Gap: Evidence from Elementary School Students in Taiwan .......... 84
Yu-Chia Liu, Chunn-Ying Lin, Hui-Hua Chen and He Huang

Teachers' Self-Efficacy at Maintaining Order and Discipline in Technology-Rich Classrooms with Relation to Strain Factors .......... 103
Eyvind Elstad and Knut-Andreas Christophersen

Using Reflective Journaling to Promote Achievement in Graduate Statistics Coursework .......... 120
J. E. Thropp

Competence and/or Performance - Assessment and Entrepreneurial Teaching and Learning in Two Swedish Lower Secondary Schools .......... 135
Monika Diehl and Tord Göran Olovsson

Review in Form of a Game: Practical Remarks for a Language Course .......... 161
Snejina Sonina
  • 4. 1 © 2017 The authors and IJLTER.ORG. All rights reserved.

International Journal of Learning, Teaching and Educational Research
Vol. 16, No. 1, pp. 1-21, January 2017

Item Consistency Index: An Item-Fit Index for Cognitive Diagnostic Assessment

Hollis Lai,1 Mark J. Gierl,2 Ying Cui,2 Oksana Babenko3
1 School of Dentistry, Faculty of Medicine & Dentistry
2 Centre for Research in Applied Measurement and Evaluation
3 Department of Family Medicine, Faculty of Medicine & Dentistry
University of Alberta, Canada

Abstract. An item-fit index is a measure of how accurately a set of item responses can be predicted using the test design model. In a diagnostic assessment, where items are used to evaluate student mastery of a set of cognitive skills, this index helps determine the alignment between the item responses and the skills each item is designed to measure. In this study, we introduce the Item Consistency Index (ICI), a modification of an existing person-model fit index, for diagnostic assessments. The ICI can be used to evaluate item-model fit on assessments designed with a Q-matrix. Results from both a simulation and a real-data study are presented. In the simulation study, the ICI identified poor-fitting items under three manipulated conditions: sample size, test length, and proportion of poor-fitting items. In the real-data study, the ICI detected three poor-fitting items on an operational diagnostic assessment in Grade 3 mathematics. Practical implications and future research directions for the ICI are also discussed.

Keywords: Item Consistency Index; cognitive diagnostic assessment; test development

Introduction

In educational testing, items are developed to elicit a correct response when examinees demonstrate adequate knowledge or understanding of the required tasks and skills within a specified domain.
The methods of specifying knowledge, the conceptualization of content domains, and the design of how an item elicits responses are currently undergoing significant change with the evolution of our test designs. But one outcome that remains the same is that an item must assess the tasks and skills as intended, and the quality of each item must be judged to be high if it is to be included on the test. In most test designs, item discrimination power is a statistical criterion that is synonymous with describing item quality.
  • 5. 2 Item discrimination describes how well an item can differentiate examinees at different performance levels. Depending on the test design and how the scale of examinee performance is realized, different measures of item discrimination may be used. Additional information about item discrimination can also be garnered from measures of item-model fit. An item-model fit index describes the overall difference between the real responses on a given item and a corresponding set of expected responses predicted by the test design. Item-model fit indices can be summarized, in general, as a ratio between the expected and actual correct responses on each item, comparing the proportion of correct responses across examinees of different abilities with the expected correct proportion from the test design model. Different criteria representing overall examinee performance, such as total score, estimated ability, or pseudo-scores, have been used to group the responses of examinees with similar ability, producing variations of item-model fit (Bock, 1972; Yen, 1981; Rost & von Davier, 1994; Orlando & Thissen, 2003). Applications of item-model fit indices include the identification of poorly performing items, cheating, or test administration anomalies, along with addressing issues related to dimensionality, item construction, calibration, and model selection (Reise, 1990).

Cognitive Diagnostic Assessment and Model Fit

Demand for more assessment feedback to better guide instruction and learning has led to the development of more complex test designs. Cognitive diagnostic assessment (CDA) is an example of a test design that yields enhanced assessment feedback by providing test takers with specific information about their problem-solving mastery in a given domain (Gierl, Leighton, & Hunka, 2007). The cornerstone of a CDA is the use of a cognitive model to guide test development.
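The fit-ratio idea described earlier in this section (compare observed with model-predicted proportions correct within ability groups) can be illustrated with a toy computation. The numbers and the Pearson-style summary below are hypothetical illustrations, not the formula of any specific index cited above:

```python
import numpy as np

# Toy illustration: within three ability groups (defined by total score),
# compare the observed proportion correct on one item with a
# model-predicted proportion. All numbers are hypothetical.
n_k = np.array([50, 50, 50])            # examinees per score group
observed = np.array([10, 30, 45])       # observed correct counts per group
expected_p = np.array([0.2, 0.6, 0.9])  # model-predicted P(correct) per group

expected = expected_p * n_k             # expected correct counts
# Pearson-style discrepancy summed over groups; 0 means perfect agreement.
chi_sq = np.sum((observed - expected) ** 2 / (expected * (1 - expected_p)))
print(chi_sq)  # 0.0 here, since observed counts match expectations exactly
```

A poorly fitting item would show large gaps between `observed` and `expected` in one or more groups, inflating the discrepancy.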
The use of a cognitive model allows CDA to provide enhanced feedback because cognitive information can be extracted from the examinees' item responses, which, in turn, provides more detailed and instructionally relevant results to test takers. Compared to traditional tests, where an item response is linked to a single outcome scale, the cognitive inferences made in CDA allow each item to measure multiple skills related to student learning. Due to the complexity of interpreting and modeling different aspects of cognitive skills, many approaches to modeling and scoring examinee responses are available. Sinharay, Puhan, and Haberman (2009) summarized three common features among different methods of CDA: (1) tests assess student mastery based on a cognitive model of skills; (2) items probe student mastery on a pattern of skills expressed in a Q-matrix; and (3) items probing the same pattern of skill mastery should elicit a similar pattern of student responses.
  • 6. 3 An essential part of CDA development relies on the definition of a Q-matrix. The Q-matrix is an item-by-attribute matrix used to describe the skills probed by each item. For example, if a CDA is designed to determine examinee mastery of four skills, and an item is designed to elicit a correct response when the examinee has mastered the first and the fourth skill, then the row corresponding to that item in the Q-matrix would be expressed as {1,0,0,1}. The Q-matrix and the student response patterns are used to calibrate the model parameters and provide students with diagnostic results related to their cognitive problem-solving strengths and weaknesses. To ensure that CDA results provide the most accurate information to examinees about their cognitive skills, the quality of CDA items must be scrutinized. Evaluations of the claim that items probe a specified set of skills have varied in the scope of how item-skill relations are represented. Model-data fit has traditionally been used to evaluate how items align with the construct of the skills based upon item responses. Few studies have investigated item-skill alignment directly. Wang, Shu, Shang, and Xu (2015) developed a measure that allows the evaluation of skill-to-item fit based on the DINA model, which assumes a probabilistically scaled skill representation. To evaluate item-model fit in CDA, items need to be evaluated beyond the relationship between correct responses on a particular item and a single outcome score. Because each item is designed to provide student mastery information on multiple skills, an item-model fit index is needed to ensure item responses are aligned with the intended cognitive skills.
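As a concrete illustration of the Q-matrix idea, the sketch below encodes a hypothetical four-item, four-skill Q-matrix (the first row is the {1,0,0,1} example from the text) and checks whether one item's required skills are a subset of another's. All item content here is invented for illustration:

```python
import numpy as np

# Hypothetical Q-matrix: 4 items (rows) x 4 skills (columns);
# 1 = the item requires that skill. Row 1 is the {1,0,0,1} example.
Q = np.array([
    [1, 0, 0, 1],  # item 1: skills 1 and 4
    [1, 0, 0, 0],  # item 2: skill 1 only
    [0, 0, 0, 1],  # item 3: skill 4 only
    [1, 1, 0, 1],  # item 4: skills 1, 2 and 4
])

def requires_subset(q_item, q_other):
    """True if q_other's required skills are a subset of q_item's."""
    return bool(np.all(q_other <= q_item))

print(requires_subset(Q[0], Q[1]))  # True: item 2 probes a subset of item 1
print(requires_subset(Q[1], Q[0]))  # False: item 1 needs more than item 2
```

This subset relation between rows of the Q-matrix is exactly what the consistency indices introduced later rely on.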
Evaluating Model Fit for CDA

The rationale for evaluating model fit in CDA can be considered from two approaches: evaluating the fit of responses against the expected psychometric properties of the test items, or evaluating it against the blueprint of skills. Existing developments tend to focus on the former approach. For example, Jang (2005) compared total raw score distributions between observed and predicted responses using the mean absolute difference (MAD). Jang's approach to evaluating model fit is akin to IRT model-fit approaches, where the level of fit is determined by total-score differences between the expected and observed examinee results. But with each correct response on a CDA item linked to mastery of a vector of skills, evaluating item-model fit for CDA needs to consider the fit of an item with the prerequisite skills rather than a single test-level outcome. Sinharay and Almond (2007) also developed an approach for evaluating item fit for CDA by assuming that examinees categorized with the same skill pattern should also have the same diagnostic outcome. With their
  • 7. 4 approach, the proportion of correct responses for examinees with the same skill pattern is compared with the expected proportion predicted by the cognitive model. Differences between the expected and observed correct proportions are then summed across all skill patterns and weighted proportionally by sample size. That is, model fit for item j was defined as:

$$X_j^2 = \sum_k \frac{N_k\,(O_{kj} - E_{kj})^2}{E_{kj}\,(N_k - E_{kj})},$$

where $N_k$ is the number of examinees with skill pattern $k$, $O_{kj}$ is the number of examinees with skill pattern $k$ who responded correctly to item $j$, and $E_{kj}$ is the product of the expected proportion of correct responses for pattern $k$ multiplied by $N_k$. Although this approach can account for fit among multiple sets of skills, the results rely on an expected correct-response rate for each skill pattern on a given item. Because the expected correct-response rate for a given skill pattern is not readily available, applying this method to determine model fit may be problematic. Moreover, poor sample representation of a skill pattern, or psychometrically indistinguishable skill patterns, will also misestimate item-model fit. One way to avoid the influence of misclassification on an item-model fit measure for CDA is to evaluate items that measure the same skills comparatively. That is, items measuring the same skills are expected to elicit similar response patterns with one another.

Hierarchy Consistency Index (HCI)

One statistic developed specifically for CDA to evaluate person-model fit is the Hierarchy Consistency Index (HCI; Cui & Leighton, 2009; Cui & Li, 2014; Cui & Mousavi, 2015). The HCI evaluates the fit of an examinee's observed responses with the expected responses from a CDA model, based on a comparison between the observed and expected response vectors.
The main assumption of the HCI is that if an examinee gives a correct response to an item requiring a set of skills, the examinee is assumed to have mastered that set of skills and therefore should also respond correctly to other items designed to measure those skills. For example, if an examinee gives a correct response to an item that requires the first and third skills in a CDA that assesses four skills (i.e., an item with a skill pattern of [1,0,1,0] in the Q-matrix), then the examinee is also expected to respond correctly to items that probe the same set of skills [1,0,1,0], or a subordinate or prerequisite subset of those skills (e.g., [1,0,0,0], [0,0,1,0]), since those skills should already have been acquired. In this manner, the number of misfitting responses across all items, with their corresponding subsets of skills, is calculated for each examinee to determine an index of person fit.
  • 8. 5 Given that $I$ examinees were administered $J$ items, the HCI for examinee $i$ is calculated as:

$$HCI_i = 1 - \frac{2\sum_{j=1}^{J}\sum_{g\in S_j} X_{ij}(1 - X_{ig})}{N}, \quad (1)$$

where $X_{ij}$ is the examinee's scored response to item $j$, $S_j$ is an index set that includes items requiring the subset of attributes measured by item $j$, and $X_{ig}$ is the examinee's scored response to item $g$. For example, if item $j$ is answered correctly, then all items that measure the attributes, or a subset of the attributes, probed by item $j$ are represented by the index set $S_j$, where $g$ is an item index within $S_j$. $N$ is the number of comparisons made across all $S_j$. The HCI has a maximum of 1 and a minimum of -1, where a high positive HCI value represents good person fit with the expected response model. The HCI is a useful index for analyzing person fit across different types of CDAs, as it requires only the Q-matrix and examinee responses. In this study, we modify the HCI to create an index for analyzing item-model fit. Thus, the purpose of this study is twofold. First, we introduce and define an item-model fit index called the Item Consistency Index (ICI). The ICI is used to evaluate the fit of an item relative to the underlying cognitive model used to make diagnostic inferences with that item. Second, we present results from two studies to demonstrate both the simulated and practical performance of the ICI across a host of testing conditions typically found in diagnostic assessments.

Item Consistency Index (ICI)

As elaborated earlier, the HCI measures the proportion of misfitting observed examinee responses relative to the expected examinee responses on a diagnostic assessment. This principle can also be extended to evaluate item fit. With the HCI, the misfitting responses related to each item are summed across all items for each examinee. As described in (1), the misfit for examinee $i$ ($m_i$) can be written as:

$$m_i = \sum_{j=1}^{J}\sum_{g\in S_j} X_{ij}(1 - X_{ig}). \quad (2)$$
Alternatively, to evaluate the misfit for item $j$, the number of misfitting responses from the subset of item $j$ can be summed across all examinees. This modification can be written as:

$$m_j = \sum_{i}\sum_{g\in S_j} X_{ij}(1 - X_{ig}), \quad (3)$$

where $X_{ij}$ is student $i$'s score (1 or 0) on item $j$, and $X_{ig}$ is student $i$'s score (1 or 0) on item $g$. Item $g$ belongs to $S_j$, a subset of items that require the
  • 9. 6 subset of skills measured by item $j$. In this manner, for a correct response to item $j$ by examinee $i$ ($X_{ij} = 1$), any incorrect response in $S_j$ can be considered a misfit for examinee $i$. The number of misfits is then summed across all examinees. It should be noted that the HCI considers only students' correct responses when analyzing the misfit of a given item ($X_{ij} = 1$). That is, misfit is calculated against the required skills only when students have provided a correct response. While this is adequate for analyzing person fit, analyzing item fit against a cognitive model also requires comparisons to be made when students respond to an item incorrectly ($X_{ij} = 0$). As such, an evaluation of item fit needs to account for this alternative comparison. For example, suppose an incorrect response was given on our exemplar item, which required the skill pattern [1,0,1,0]. From this item response, we could infer that the examinee does not possess all the skills required to solve this item and, therefore, should respond incorrectly to all items that require the same skill pattern [1,0,1,0]. Furthermore, the examinee should also respond incorrectly to items that require more skills than the current item (i.e., [1,1,1,0], [1,0,1,1], [1,1,1,1]). These items, which require the same or a more complex skill pattern, can be conceptualized as an alternative subset of item $j$ ($S_j^*$), and a correct response to any item belonging to $S_j^*$ can be conceptualized as a misfit. This outcome can be expressed as:

$$m_j^* = \sum_i \sum_{h\in S_j^*} X_{ih}(1 - X_{ij}). \quad (4)$$

The set of alternative comparisons, combined with the comparisons from correct responses, forms the numerator of the ICI. To maintain the same scale of comparison as the HCI, the numerator is then divided by the total number of comparisons, which effectively transforms the outcome into a proportion of misfitting responses for item $j$.
The proportion is then rescaled to a maximum of 1 and a minimum of -1. The ICI for item $j$ is then given as:

$$ICI_j = 1 - \frac{2\sum_i\left[\sum_{g\in S_j} X_{ij}(1 - X_{ig}) + \sum_{h\in S_j^*} X_{ih}(1 - X_{ij})\right]}{N_{c_j}}, \quad (5)$$

where $X_{ij}$ is student $i$'s score (1 or 0) on item $j$; $S_j$ is an index set that includes items requiring the subset of attributes measured by item $j$; $X_{ig}$ is student $i$'s score (1 or 0) on item $g$, where item $g$ belongs to $S_j$; $S_j^*$ is an index set that includes items requiring all of, but not limited to, the attributes measured by item $j$; $X_{ih}$ is student $i$'s score (1 or 0) on item $h$, where item $h$ belongs to $S_j^*$; and $N_{c_j}$ is the total number of comparisons for
  • 10. 7 item $j$ across all students. To illustrate the calculation of the ICI, consider a hypothetical administration of a CDA with 15 items and the Q-matrix presented in (6):

$$Q = \begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 1&1&0&0\\ 0&0&1&0\\ 1&0&1&0\\ 0&1&1&0\\ 1&1&1&0\\ 0&0&0&1\\ 1&0&0&1\\ 0&1&0&1\\ 1&1&0&1\\ 0&0&1&1\\ 1&0&1&1\\ 0&1&1&1\\ 1&1&1&1 \end{bmatrix}. \quad (6)$$

Suppose this CDA of four skills was administered to an examinee who produced the item response vector (0,0,0,0,0,1,1,0,0,0,0,0,0,0,0); that is, the examinee responded correctly to items 6 and 7 only. To calculate the ICI for item 6, we first note that the examinee responded to the item correctly, so comparisons should be made with items that require the same skills as, or prerequisites of, the original item. In this case, items 2 and 4 belong to $S_6$. Since both item responses were incorrect, two comparisons were made ($N_{c_6} = 2$) and two unexpected responses were found ($m_6 = 2$) for this examinee. In addition, suppose we wanted to calculate the ICI of item 2 for this examinee. The alternative subset ($S_2^*$) is needed, since the examinee responded to the item incorrectly. In this instance, seven items form the alternative subset for item 2 ($S_2^* = \{3,6,7,10,11,14,15\}$). Since the examinee responded correctly to items 6 and 7, there were two unexpected responses ($m_2 = 2$) from a total of seven comparisons ($N_{c_2} = 7$). In this manner, the numbers of unexpected responses and comparisons are summed across all examinees and rescaled to form the ICI. To demonstrate the performance of this item-model fit index across a variety of testing situations, a simulation study was first conducted to determine how well the ICI detects poor-fitting items. A real-data study was then conducted to demonstrate how the ICI can be applied in an operational testing situation with a CDA in mathematics.
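To make the worked example concrete, here is a small Python sketch of the ICI computation as defined in eq. (5). The function names and the 0-indexed bookkeeping are ours, not the paper's; the Q-matrix and response vector are taken from the example above:

```python
import numpy as np

# Q-matrix from eq. (6): 15 items (rows) by 4 skills (columns).
Q = np.array([
    [1,0,0,0],[0,1,0,0],[1,1,0,0],[0,0,1,0],[1,0,1,0],
    [0,1,1,0],[1,1,1,0],[0,0,0,1],[1,0,0,1],[0,1,0,1],
    [1,1,0,1],[0,0,1,1],[1,0,1,1],[0,1,1,1],[1,1,1,1],
])

def subsets_and_supersets(Q, j):
    """S_j: items probing the same skills as item j or a subset of them;
    S_j*: items probing item j's skills and possibly more (j excluded)."""
    S = [g for g in range(len(Q)) if g != j and np.all(Q[g] <= Q[j])]
    S_star = [h for h in range(len(Q)) if h != j and np.all(Q[j] <= Q[h])]
    return S, S_star

def ici(Q, X):
    """ICI_j = 1 - 2 * misfits_j / comparisons_j, pooled over the
    (examinees x items) 0/1 response matrix X, following eq. (5)."""
    scores = np.empty(len(Q))
    for j in range(len(Q)):
        S, S_star = subsets_and_supersets(Q, j)
        misfit = n_comp = 0
        for x in X:
            if x[j] == 1:   # correct response: check prerequisite items
                misfit += sum(1 - x[g] for g in S)
                n_comp += len(S)
            else:           # incorrect response: check superset items
                misfit += sum(x[h] for h in S_star)
                n_comp += len(S_star)
        scores[j] = 1 - 2 * misfit / n_comp if n_comp else np.nan
    return scores

# The examinee from the text: only items 6 and 7 (0-indexed 5 and 6) correct.
x = np.zeros(15, dtype=int)
x[[5, 6]] = 1
S6, _ = subsets_and_supersets(Q, 5)
_, S2_star = subsets_and_supersets(Q, 1)
print(sorted(S6))    # [1, 3] -> items 2 and 4, matching the text
print(len(S2_star))  # 7 comparisons for item 2, matching the text
```

For this single examinee, both prerequisite comparisons for item 6 are misfits ($m_6 = 2$ of $N_{c_6} = 2$), and two of the seven superset comparisons for item 2 are misfits, exactly as computed in the text.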
  • 11. 8 Methods and Results

Study 1: Simulation Study

To evaluate how well the ICI can identify items that fit poorly relative to their underlying cognitive model, a Monte Carlo study was conducted by simulating responses from a diagnostic test designed to measure seven skills. To determine the performance of the ICI using simulated CDA data, examinee responses were generated under the Bernoulli distribution. In addition to generating examinee responses, different testing conditions were manipulated to probe conditions that may occur in a real CDA administration. Finally, to classify poor-fitting items using the ICI, a common evaluation criterion was used to determine which items fit poorly with the given cognitive model. The simulation process is similar to the actual steps used in developing CDAs (Gierl, Leighton, & Hunka, 2007), where the cognitive model, items, and responses were developed in a sequential manner. First, an existing cognitive model from Cui and Leighton (2009) was used to guide the simulation process. The cognitive model consists of seven skills, with 15 patterns of skill mastery identified as permissible. The patterns of required skills for each item are expressed in the Q-matrix presented in Table A1 in the Appendix. To generate examinee responses, each examinee was first assigned an expected pattern of skill mastery from one of the 15 skill patterns. In addition to the 15 skill patterns, a null pattern [0,0,0,0,0,0,0] was also used to represent examinees who had not mastered any skills. In total, sixteen expected skill patterns were distributed equally among the sample of examinees. To simulate a response for an examinee on a given item, the examinee's assigned skill pattern is compared with the skills required by that item as indicated by the Q-matrix. A probability of correct response is assigned based on whether the examinee has all the prerequisite skills for the item.
Based on this assigned probability, the examinee's response to each item was generated using a Bernoulli function. To examine the effectiveness of the ICI under different testing conditions, three factors were manipulated. First, the number of items representing each skill pattern in the CDA was varied across three levels. If a CDA is lengthened by including multiple items probing the same set of skills, then the reliability of each corresponding skill measured is expected to increase (Gierl, Cui, & Zhou, 2009). In our study, the CDA included one, two, or three items representing each possible skill pattern. These three levels of variation on a total of 15 skill patterns resulted in test lengths of 15, 30, and 45 items, respectively.
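The response-generation step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' original R code; the function name, the small Q-matrix, and the two skill patterns are hypothetical, while the 0.9/0.1 probabilities follow the well-fitting-item condition described later:

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed for reproducibility

def simulate_responses(Q, skill_patterns, p_master=0.9, p_nonmaster=0.1):
    """Draw 0/1 responses: an examinee whose skill pattern covers all of an
    item's required skills answers correctly with probability p_master,
    otherwise p_nonmaster (one Bernoulli draw per item, as in the study)."""
    has_skills = np.array([[bool(np.all(q <= a)) for q in Q]
                           for a in skill_patterns])
    p = np.where(has_skills, p_master, p_nonmaster)
    return (rng.random(p.shape) < p).astype(int)

# Hypothetical 3-item, 4-skill Q-matrix and two examinees.
Q = np.array([[1,0,0,0], [0,1,0,0], [1,1,0,0]])
patterns = np.array([[1,1,0,0],   # masters skills 1 and 2
                     [0,0,0,0]])  # the null pattern
X = simulate_responses(Q, patterns)
print(X.shape)  # (2, 3): one row of responses per examinee
```

The first examinee's pattern covers all three items' requirements, so each of their responses is drawn with probability 0.9 of being correct; the null-pattern examinee gets 0.1 on every item.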
  • 12. 9 Second, unlike the related person-fit index, the HCI, which is independent of sample size, the ICI is based on the proportion of misfitting responses from all examinees. Therefore, different sample sizes may affect the outcome of the ICI. Three sample sizes were used: 800, 1600, and 2400. Since the 15 skill patterns and a null pattern are distributed equally among the examinees, the numbers of examinees representing each skill pattern are 50, 100, and 150, respectively. Third, an important feature of an item-model fit index is its ability to detect items that fit poorly with the expected responses determined by the cognitive model. This ability is compromised when the ICI is influenced by other misfitting items related to the skills of the original item. To investigate whether the proportion of poor-fitting items has an effect on the ICI, that proportion was manipulated at three levels relative to the test length: 5%, 10%, and 25%. In Cui and Leighton (2009), a well-fitting item was deemed to have a 10% chance of slips: an examinee without mastery of the necessary skills has a 10% chance of responding correctly, while an examinee who has mastered the necessary skills has a 90% chance of responding correctly. While there can be many reasons for an item to fit poorly with the underlying cognitive model (e.g., model misspecification, item quality, option availability), a poor-fitting item generally yields responses that are aberrant from the cognitive model. To simulate a poor-fitting item, item responses were generated to be close to random. Table 1 contains the probabilities of a correct response given the level of item fit (good or poor) and whether the examinee possesses the required set of skills. Taken together, the three manipulated factors with three levels each yielded a total of 27 conditions, as shown in Table A2 of the Appendix. Table 1.
Correct response probability given the level of item fit and whether the examinee possesses the required set of skills

Required skills   Good fit   Poor fit
Present           0.9        0.6
Not present       0.1        0.4

To evaluate the effectiveness of the ICI for detecting poor-fitting items, a criterion is needed for the ICI to differentiate between poor- and well-fitting items. A classification approach was used to measure the precision of the ICI in this study. A cut-score criterion, set at an ICI value of 0.5, was used to illustrate the classification characteristics for poor-fitting items. For example, if an item was calculated to have an ICI value of less than 0.5, then that item was deemed to fit poorly with the expected responses from the cognitive model. This preliminary criterion for dichotomizing
item fit was needed because no point of comparison currently exists for determining an appropriate level of fit with an existing cognitive model. Further, an ICI value of 0.5 for any item translates to roughly 75% of the responses on a given item fitting the expected skill pattern as defined by the cognitive model. Using this initial cut-score, we could then classify items as poor- or well-fitting. To ensure the classification results were consistently produced, each of the 27 testing conditions was replicated 100 times. The dependent variables for the simulation study included the average proportion of correctly identified poor-fitting items and the misclassification of well-fitting items across all conditions. The simulation environment, the implementation of the ICI, and the replication of results were programmed in R (R Development Core Team, 2011), and are available from the first author. Table 2 contains a summary of the mean ICIs for each condition. The mean ICIs were calculated separately for the poor- and well-fitting items. The overall mean for poor-fitting items was 0.30, whereas the mean ICI for well-fitting items was 0.53. Three observations must be noted from the results in Table 2. First, test length tended to have a positive impact on the values of the ICI. For example, CDAs with only one item measuring each skill pattern (i.e., test length = 15) had consistently lower ICIs compared to CDAs with two or three items measuring each skill (i.e., test length = 30 or 45). Second, as expected, the magnitude of the mean ICI differences between poor- and well-fitting items tended to decrease as the proportion of poor-fitting items increased. Third, the means of the ICI were relatively stable across different sample sizes for each condition. Table 2.
Summary of the mean ICIs across the three variables manipulated in the simulation study

Sample   Proportion of        Test     Mean ICI,            Mean ICI,
Size     Poor-Fitting Items   Length   Poor-Fitting Items   Well-Fitting Items
800      5%                   15       0.24                 0.49
800      5%                   30       0.22                 0.57
800      5%                   45       0.30                 0.59
800      10%                  15       0.31                 0.48
800      10%                  30       0.29                 0.56
800      10%                  45       0.38                 0.58
800      25%                  15       0.37                 0.43
800      25%                  30       0.29                 0.56
800      25%                  45       0.32                 0.51
1600     5%                   15       0.21                 0.41
1600     5%                   30       0.22                 0.56
1600     5%                   45       0.29                 0.59
1600     10%                  15       0.27                 0.44
1600     10%                  30       0.29                 0.57
1600     10%                  45       0.38                 0.58
1600     25%                  15       0.36                 0.41
1600     25%                  30       0.29                 0.56
1600     25%                  45       0.32                 0.51
2400     5%                   15       0.24                 0.55
2400     5%                   30       0.23                 0.58
2400     5%                   45       0.30                 0.59
2400     10%                  15       0.32                 0.53
2400     10%                  30       0.30                 0.57
2400     10%                  45       0.38                 0.58
2400     25%                  15       0.32                 0.53
2400     25%                  30       0.29                 0.56
2400     25%                  45       0.32                 0.51

Items were also classified based on the cut-score criterion. This simulation process was repeated 100 times, with the correct classification rate, or power, being the likelihood of correctly identifying a poor-fitting item using the ICI across the conditions in the simulation study. The power values for the 27 conditions are shown in Table 3. The conditions with the highest power were found in CDAs with the longest test length (45), specifically in conditions with the largest proportion of poor-fitting items (25%). Under those conditions, the highest power was 0.99, meaning that for the ICI criterion of 0.50, 99% of all poor-fitting items were correctly classified across 100 replications. The lowest power value, 0.67, was found for a 30-item CDA with 5% poor-fitting items and the smallest sample size (800 examinees).
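The cut-score classification and the power and misclassification rates reported above reduce to simple proportions. A minimal sketch (the function names are ours, not from the paper):

```python
def classify_poor_fitting(ici_values, cut_score=0.5):
    # Flag an item as poor-fitting when its ICI falls below the cut-score
    return [ici < cut_score for ici in ici_values]

def power_and_error(ici_values, truly_poor, cut_score=0.5):
    """Power: proportion of truly poor-fitting items flagged as poor.
    Misclassification: proportion of well-fitting items wrongly flagged."""
    flags = classify_poor_fitting(ici_values, cut_score)
    poor_flags = [f for f, p in zip(flags, truly_poor) if p]
    well_flags = [f for f, p in zip(flags, truly_poor) if not p]
    power = sum(poor_flags) / len(poor_flags)
    misclassification = sum(well_flags) / len(well_flags)
    return power, misclassification
```

Averaging these two rates over the 100 replications of a condition gives the entries of Tables 3 and 4.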
Table 3. Power of the ICI for identifying poor-fitting items

                            Proportion of Poor-Fitting Items
Test Length   Sample Size   5%     10%    25%
15            800           0.68   0.76   0.92
              1600          0.93   0.89   0.95
              2400          0.79   0.79   0.92
30            800           0.67   0.73   0.79
              1600          0.77   0.74   0.81
              2400          0.73   0.72   0.79
45            800           0.76   0.80   0.99
              1600          0.77   0.83   0.99
              2400          0.76   0.81   0.99

Table 4 summarizes the likelihood of a well-fitting item being misclassified by the ICI as a poor-fitting item in each condition. The lowest misclassification rates were associated with CDAs that have the longest test length (45) and the smallest proportion of poor-fitting items (5%). Under those conditions, the lowest misclassification rate was 15%. The highest error rates were observed with the shortest test length (15), where misclassification reached 78%. Taken together, the simulation study results highlight important trends and outcomes that can be used to interpret how accurately the ICI identifies poor-fitting items. The power values of the ICI were erratic when the number of items probing each skill pattern was small, but stabilized as the number of items representing each skill pattern increased. For example, each increase in test length resulted in a decrease in the variation of power values within the same proportion of poor-fitting items and between different sample sizes. This finding suggests that the reliability of using the ICI to classify poor-fitting items is related to the reliability of the CDA as a whole. Moreover, the proportions of misclassification were approximately 2.5 times higher in CDAs with a single item representing each skill pattern as compared to the other two levels. This outcome further supports the conclusion that as skills are measured more accurately, the ICI better distinguishes poor- from well-fitting items.
Table 4. Misclassification rate of the ICI in identifying well-fitting items

                            Proportion of Poor-Fitting Items
Test Length   Sample Size   5%     10%    25%
15            2400          0.28   0.35   0.66
              1600          0.78   0.65   0.72
              800           0.50   0.50   0.66
30            2400          0.16   0.20   0.22
              1600          0.28   0.20   0.27
              800           0.27   0.22   0.24
45            2400          0.15   0.18   0.33
              1600          0.17   0.19   0.34
              800           0.15   0.19   0.33

There were no obvious trends indicating that the sample size, manipulated across three levels, yielded important differences in power or in the misclassification of well-fitting items. This finding suggests that the sample sizes used in this study do not yield important ICI differences across our study conditions. It could also suggest that a representation of approximately 50 examinees per skill pattern is sufficient for evaluating the ICI. When the proportion of poor-fitting items was manipulated, power increased with the proportion of poor-fitting items in the CDA. An increase in poor-fitting items also yielded more misclassification of well-fitting items. This finding suggests that poor-fitting item responses contribute to an overall decrease in the magnitude of the ICI, where the resulting errors are reflected using the classification criterion of 0.50.

Study 2: Use Case Application

The purpose of the second study is to demonstrate how the ICI can be used to identify poor-fitting items in an operational CDA. The ICI was used to evaluate item-model fit for a CDA program designed to assess students' knowledge and skills in Grade 3 mathematics. From this CDA program, 324 students responded to an 18-item CDA (see Gierl, Alves, & Taylor-Majeau, 2010). The CDA we used was designed to evaluate student mastery of subtraction skills.
Each item was designed to yield specific diagnostic information in a hierarchy of cognitive skills, where the first skill was the easiest (subtraction of two consecutive 2-digit numbers) and the last skill was the most difficult (subtraction of two 2-digit numbers using the digits 1
to 9 with regrouping). The CDA was developed as follows. First, a cognitive model of task performance was created by specifying the cognitive skills necessary to master subtraction in Grade 3. The domain of subtraction was further specified into a set of six attributes related in a linearly hierarchical manner by a group of subject-matter experts. The attributes produced a total of seven unique patterns of skill mastery (six plus the null pattern). Content experts created three items probing student mastery of each attribute to ensure adequate representation of each skill pattern, resulting in eighteen items for this CDA. The test was administered to students in 17 Grade 3 classrooms. A list of the attributes and the Q-matrix for the 18-item CDA are shown in Table A3 and Table A4 of the Appendix, respectively. Three hundred and twenty-four student responses were collected, which would yield approximately 45 students per skill pattern if the patterns were distributed equally across the skills. Participating teachers first taught the topics relevant to subtraction in their classrooms and then administered the CDA at a convenient time within two weeks of instruction. The CDA was delivered using an online computer-based testing system. Students were presented with CDA items containing both an item stem prompting a typed response and an interactive multimedia component that provided additional information to help students understand the item. From this administration, responses were collected, formatted, and scored dichotomously. Because participation in this CDA was voluntary, students with more than two missing responses were removed from the analysis to minimize the influence of unmotivated responding. For the purposes of demonstrating the ICI, only the scored student responses were used.
The results are summarized first at the test level and then at the item level. Overall, the results at the test level were ideal. The median HCI, which quantifies the fit of the responses to the expected model of response on a CDA, was 0.81. With a cut-off of 0.70 as the quality criterion for CDA designs (Gierl, Alves, & Taylor-Majeau, 2010), this result suggests that the student responses fit the expected model of response for subtraction. As the purpose of this CDA is to identify non-mastery students in order to refine and enhance instruction, the majority of students were expected to master the CDA. At the item level, Table 5 provides a summary of the results from the subtraction CDA. The p-values of each item and the discrimination values (i.e., point-biserial correlations) are presented along with the ICI values. Three findings should be noted from these results. First, the ICI was not
correlated with either the difficulty or the discrimination values. This result supports the idea that item-model fit summarizes a different outcome from the classically defined notions of difficulty and discrimination. Second, with items created in a principled manner, with three items representing each skill pattern, the real-data results support the results of the simulation study. Further, as p-values decrease, ICI values increase because the items change from measuring simple to more complex skills. Third, using the cut-score criterion of 0.50 from the simulation study, only three items were deemed to have poor item fit (Items 1, 2, and 3). The poor ICI values for these items may suggest a problem at the attribute level (see Table A3 in the Appendix for a description of the skills assessed). It is important to note that, without the ICI, conventional scoring and psychometric approaches would not have identified issues of misfit at the attribute level, as Items 1 through 3 perform nominally at the item level. Although subject-matter experts did not evaluate the cognitive model in the light of the student results, a follow-up study may find that a reorganization of the attributes yields better-fitting responses.

Table 5. Summary of the results from the subtraction CDA

Attribute   Item Number   P-Value   Discrimination   ICI
1           1             0.76      0.58             0.22
            2             0.78      0.87             0.39
            3             0.80      0.96             0.46
2           4             0.84      0.89             0.64
            5             0.87      1.11             0.72
            6             0.85      0.94             0.65
3           7             0.86      1.06             0.76
            8             0.80      0.68             0.65
            9             0.84      1.01             0.75
4           10            0.77      0.79             0.73
            11            0.72      0.78             0.72
            12            0.75      0.82             0.73
5           13            0.74      0.82             0.78
            14            0.77      0.92             0.79
            15            0.79      0.98             0.80
6           16            0.35      0.56             0.81
            17            0.34      0.57             0.81
            18            0.33      0.53             0.80
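The p-values and discriminations in Table 5 are classical item statistics. Assuming a 0/1 response matrix, they can be computed as below; a conventional corrected point-biserial is shown, though the values above 1 in Table 5 suggest the paper used a differently scaled discrimination index:

```python
import numpy as np

def item_statistics(responses):
    """Classical item statistics for a 0/1 response matrix
    (rows = examinees, columns = items).

    p-value: proportion of examinees answering the item correctly.
    discrimination: point-biserial correlation between the item score
    and the total score on the remaining items (corrected total)."""
    responses = np.asarray(responses, dtype=float)
    p_values = responses.mean(axis=0)
    discrimination = []
    for j in range(responses.shape[1]):
        rest = responses.sum(axis=1) - responses[:, j]
        discrimination.append(np.corrcoef(responses[:, j], rest)[0, 1])
    return p_values, np.array(discrimination)
```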
Discussion

The purpose of this study is to introduce a statistic for determining item-model fit with CDA. The item consistency index (ICI), an extension of a person-fit index for CDA called the Hierarchy Consistency Index (HCI), is a standardized outcome that measures the ratio of misfitting responses relative to the total number of responses across all examinees on a given item. Similar to the HCI, the requirements for evaluating item-model fit using the ICI are an item-by-attribute definition of skill mastery, called the Q-matrix, in addition to the student response vectors. The ICI has a maximum value of 1, which suggests that all students responded identically to an expected skill pattern, and a minimum value of -1, which suggests that item responses were the exact opposite of what the expected skill patterns suggest. We presented two use cases: a simulation study demonstrating the properties of the ICI, and a real-data application highlighting how the ICI can be applied to identify poor-fitting items on a CDA. These two proof-of-concept applications demonstrate how the ICI can be applied in the real world and call for future studies to establish better evaluation criteria for the ICI. Results from the simulation study provided some general insights on how the ICI performs as a method for detecting item misfit in CDA across a range of testing conditions. Using a cut-score classification method to determine poor-fitting items, the ICI identified the majority of the poor-fitting items across the different simulated conditions. Although item-model fit is described on a continuum by the ICI, the use of a cut-score to classify poor-fitting items provided a simple, interpretable outcome for evaluating how the ICI will perform in a given testing scenario.
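The stated bounds can be illustrated with a simple stand-in. An index of the form 1 - 2 x (misfitting responses / total responses) is not necessarily the authors' exact ICI formula, but it reproduces the properties given in the text: it equals 1 when every response fits, -1 when none do, and 0.5 when roughly 75% of responses fit, matching the interpretation of the 0.5 cut-score stated earlier:

```python
def consistency_index(n_misfit, n_total):
    """Illustrative consistency index in [-1, 1].

    NOT necessarily the authors' exact ICI formula; shown only to
    reproduce the stated properties (1 = all responses fit the
    expected skill pattern, -1 = none do, 0.5 <=> ~75% fit)."""
    return 1.0 - 2.0 * n_misfit / n_total
```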
In addition, results from the simulation study identified a few conditions that must be met for the ICI to detect item misfit accurately. The number of items used for each skill pattern and the total number of poor-fitting items were two features that affected ICI performance. These findings demonstrate that although CDA demands a different paradigm of scoring and statistical approaches, traditional issues such as the consistency of the responses for a given set of skills can still be problematic in estimating item-model fit. From our simulation results, we suggest using three or more items per skill pattern to ensure adequate ICI detection. This finding is consistent with research on establishing adequate reliability in measuring attributes of skills (Gierl, Cui, & Zhou, 2009), where the authors stated that a short yet diagnostic test will not likely yield results with sufficient reliability.
Sinharay and Almond (2007) noted that tests with many poor-fitting items indicate a problem with the overall model, whereas tests with few poor-fitting items indicate that problems lie in the items themselves. In our simulation, we demonstrated that the ICI produces similar results, where an increase of poor-fitting items in a CDA lowers the precision of the ICI. This finding may be linked to the fact that as more poor-fitting items are introduced, these items affect the fit of items requiring the same set of skills, leading to an overall decrease in the magnitude of the ICIs. Table A5 in the Appendix illustrates this effect, where the mean ICI for well- and poor-fitting items under the 45-item simulation decreases as the proportion of poor-fitting items increases. In sum, a rigorous and principled test-development process is needed for CDA to ensure all test items are created with minimal deviation from the expected set of skills they were designed to probe. Otherwise, poor model-fit results will lead to poor diagnostic outcomes. The second study provided a snapshot of the utility of the ICI when applied to an operational CDA. Using a set of carefully designed CDA items, the ICI detected three consecutive poor-fitting items at the beginning of the assessment. This finding suggests that the ICI can be used not only for evaluating item-model fit but also for evaluating the consequences of test design at the item, attribute, or cognitive-model level. In our example, the three items flagged as poor-fitting measure the same attribute, revealing that the attribute may be misspecified in the cognitive model. In addition, the independence of the ICI from the difficulty and discrimination values suggests that item-model fit for CDA provides a unique measure of how well an item is able to accurately predict performance.
Hence, the definition of a good item for CDA may not only be how well an item is able to distinguish poor-performers from good-performers, but also how consistently an item can elicit responses that match the expected response patterns specified in the cognitive model (i.e., Q-matrix). Item-model fit is challenging to measure, especially when cognitive inferences are involved in the test design. Items have to be aligned with the cognitive skills in the Q-matrix, skills have to be defined and organized in a systematic manner, and examinee responses have to match the expected skill patterns. The ICI can provide a source of evidence for identifying poor-fitting items or poor models for Q-matrix based CDA.

Implications for Future Research

By introducing and demonstrating an item-model fit index for CDA, our study provides two practical implications for the development of diagnostic assessments in addition to a new measure of item fit. The ICI
has the benefit of applicability, meaning that it can be used with any Q-matrix based CDA for determining the relationship between items and skills. Using the Q-matrix, item and examinee responses can be compared to provide a measure of item-model fit. While research on CDA has prompted a plethora of diagnostic scoring methods, one common starting point is the use of the Q-matrix in defining the skills and item requirements. Because item development, validation, and administration all depend on the veracity of the Q-matrix, evidence for validating the cognitive model is paramount. The ICI offers some initial evidence that can be used for validating the definition of skills through item response patterns, to determine the relative fit between an item and its set of required skills defined in the Q-matrix. While the ICI provides a new statistical method for scrutinizing CDA development, the second study highlighted the fact that the most crucial part of a well-designed CDA remains item development. The importance of item development is sometimes neglected in CDA. Although CDA scoring methods can account for different levels of skill contributions, the link between how a skill is measured and how the skill is presented in the form of an item remains largely a subjective interpretation of the test developer and content specialist who create the CDA. To reliably measure a set of skills, multiple items are needed. Yet creating parallel items is often time consuming and expensive. Ensuring that each item is uniformly developed with the same set of skills is one critical activity in test development for CDA that ensures examinees receive useful diagnostic feedback. The ICI is co-dependent with all items requiring a related set of skills.
Therefore, to ensure adequate item-model fit, every item in the CDA must adhere to a high level of quality and alignment relative to the expected skills the item is designed to measure. By introducing an item-model fit index for CDA, we have demonstrated how such a measure can be applied to identify problematic items that are aberrant from the expected response model. This initial study provides directions for future research, as further investigation is needed to apply and validate the use of this index. We suggest three directions for future research. First, more research is needed to ensure that different structures of knowledge represented by the Q-matrix can be evaluated with the ICI to identify misfitting items. The number of possible skill-pattern representations increases exponentially as the number of evaluated skills increases; therefore, more research is needed to ensure the ICI provides an appropriate measure for different organizations of skills. Second, guidelines to interpret ICIs are needed so we can accurately identify and distinguish adequate and problematic items. As the ICI provides a scaled measure of item-model fit, interpretations of the
index have not yet been established and are needed to determine the adequacy threshold of item-model fit. Third, as the reliability of CDA measures is highly dependent on the defined skills, more research is needed to determine which model structure is ideal in the application of the ICI. Our analysis relies on non-compensatory attributes, meaning skills are independently defined and acquired and cannot be moderated by the existence of other skills. This will likely limit the ICI in measuring item fit for tests of complex skills, though not for general skills such as elementary mathematics. More research is needed to evaluate appropriate use cases of the ICI.

References

Bock, R. (1972). Estimating item parameters and latent ability when responses are scored in two or more nominal categories. Psychometrika, 37, 29-51.

Cui, Y., & Leighton, J. (2009). The hierarchy consistency index: Evaluating person fit for cognitive diagnostic assessment. Journal of Educational Measurement, 46(4), 429-449.

Cui, Y., & Li, J. C.-H. (2014). Evaluating person fit for cognitive diagnostic assessment. Applied Psychological Measurement, 39, 223-238.

Cui, Y., & Mousavi, A. (2015). Explore the usefulness of person-fit analysis on large-scale assessment. International Journal of Testing, 15, 23-49.

Gierl, M., Leighton, J., & Hunka, S. (2007). Using the attribute hierarchy method to make diagnostic inferences about examinees' cognitive skills. In J. Leighton & M. Gierl (Eds.), Cognitive diagnostic assessment for education: Theory and applications (pp. 242-274). Cambridge, MA: Cambridge University Press.

Gierl, M., Cui, Y., & Zhou, J. (2009). Reliability and attribute-based scoring in cognitive diagnostic assessment. Journal of Educational Measurement, 46(3), 293-313.

Gierl, M., Alves, C., & Taylor-Majeau, R. (2010).
Using the attribute hierarchy method to make diagnostic inferences about examinees' knowledge and skills in mathematics: An operational implementation of cognitive diagnostic assessment. International Journal of Testing, 10(4), 318-341.

Jang, E. (2005). A validity narrative: Effects of reading skills diagnosis on teaching and learning in the context of NG TOEFL (Doctoral dissertation). University of Illinois at Urbana-Champaign, IL, USA.

Orlando, M., & Thissen, D. (2003). Further investigation of the performance of S-X2: An item fit index for use with dichotomous item response theory models. Applied Psychological Measurement, 27(4), 289-298.

R Development Core Team (2011). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.

Reise, S. (1990). A comparison of item- and person-fit methods of assessing model-data fit in IRT. Applied Psychological Measurement, 14(2), 127-137.

Rost, J., & von Davier, M. (1994). A conditional item-fit index for Rasch models. Applied Psychological Measurement, 18(2), 171-182.
Sinharay, S., Puhan, G., & Haberman, S. (2009, April). Reporting diagnostic scores: Temptations, pitfalls, and some solutions. Paper presented at the meeting of the National Council on Measurement in Education, San Diego, CA, USA.

Sinharay, S., & Almond, R. (2007). Assessing fit of cognitive diagnostic models: A case study. Educational and Psychological Measurement, 67(2), 239-257.

Wang, C., Shu, Z., Shang, Z., & Xu, G. (2015). Assessing item-level fit for the DINA model. Applied Psychological Measurement, 1-14.

Yen, W. (1981). Using simulation results to choose a latent trait model. Applied Psychological Measurement, 5, 245-262.

APPENDIX A

Table A1. The Q-matrix and skill patterns used for the simulation of CDA responses

            Skill
Pattern   1  2  3  4  5  6  7
1         1  0  0  0  0  0  0
2         1  1  0  0  0  0  0
3         1  1  1  0  0  0  0
4         1  1  0  1  0  0  0
5         1  1  1  1  0  0  0
6         1  1  0  1  1  0  0
7         1  1  1  1  1  0  0
8         1  1  0  1  0  1  0
9         1  1  1  1  0  1  0
10        1  1  0  1  1  1  0
11        1  1  1  1  1  1  0
12        1  1  0  1  0  1  1
13        1  1  1  1  0  1  1
14        1  1  0  1  1  1  1
15        1  1  1  1  1  1  1

Table A2. Variables manipulated in the simulation

                                       Level
Variable                           1      2      3
Test length                        15     30     45
Sample size                        800    1600   2400
Proportion of poor-fitting items   5%     10%    25%
Table A3. Description of the skills assessed in the CDA for subtraction in Grade 3

Cognitive Attribute #   Skill descriptor: Apply a mental mathematics strategy to subtract
6                       Two 2-digit numbers using the digits 1 to 9 with regrouping
5                       Two 2-digit doubles (e.g., 24, 36, 48, 12)
4                       Two 2-digit numbers where only the subtrahend is a multiple of 10
3                       Ten from a 2-digit number
2                       Two 2-digit numbers where the minuend and subtrahend are multiples of 10
1                       Two consecutive 2-digit numbers (e.g., 11, 22, 33)

Table A4. Q-matrix of the CDA for subtraction in Grade 3

        Skill
Item   1  2  3  4  5  6
1      1  0  0  0  0  0
2      1  0  0  0  0  0
3      1  0  0  0  0  0
4      1  1  0  0  0  0
5      1  1  0  0  0  0
6      1  1  0  0  0  0
7      1  1  1  0  0  0
8      1  1  1  0  0  0
9      1  1  1  0  0  0
10     1  1  1  1  0  0
11     1  1  1  1  0  0
12     1  1  1  1  0  0
13     1  1  1  1  1  0
14     1  1  1  1  1  0
15     1  1  1  1  1  0
16     1  1  1  1  1  1
17     1  1  1  1  1  1
18     1  1  1  1  1  1

Table A5. Summary of the mean ICI in extreme situations when n = 2400

                     Proportion of Poor-Fitting Items
Item Quality         0%     25%    50%    100%
Well-fitting items   0.61   0.49   0.39   n/a
Poor-fitting items   n/a    0.33   0.28   0.15
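Given a Q-matrix like Table A1 or A4, whether an examinee is expected to answer an item correctly reduces to a coverage check: the examinee must possess every skill the item's row requires. A hypothetical Python sketch (function names are ours):

```python
import numpy as np

def expected_correct(q_matrix, skills):
    """Expected-response matrix from a Q-matrix.

    q_matrix: (n_items, n_skills) 0/1 array, rows as in Table A1/A4
    skills:   (n_examinees, n_skills) 0/1 array of mastered skills
    Returns a (n_examinees, n_items) boolean array, True where the
    examinee possesses every skill the item requires."""
    required = q_matrix.astype(bool)
    have = skills.astype(bool)
    # broadcast (n_examinees, 1, n_skills) against (n_items, n_skills):
    # item is expected correct iff every required skill is mastered
    return np.all(~required[None, :, :] | have[:, None, :], axis=2)
```

Comparing this expected-response matrix with the observed responses yields the misfit counts on which the ICI is based.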
International Journal of Learning, Teaching and Educational Research
Vol. 16, No. 1, pp. 22-37, January 2017

Factors That Determine Accounting Anxiety Among Users of English as a Second Language Within an International MBA Program

Alexander Franco and Scott S. Roach
Stamford International University, Graduate School of Business
Bangkok, Thailand

Abstract. The primary goal of this study was to determine the factors related to accounting anxiety among MBA students who use English as a second language (ESL). The analysis included components within the learning environment as well as differences across demographic variables such as gender, age, ethnicity, and prior undergraduate exposure to the study of accounting. A secondary goal of the study was to determine ESL students' perceptions of anxiety in an MBA program regarding quantitative courses as opposed to qualitative courses. Finally, the study examined different strategies used by ESL students to deal with accounting anxiety. The study found significant differences in accounting anxiety based on gender, ethnicity, and exposure to undergraduate accounting; age, however, was not a factor. In addition, the study supported the hypothesis that there is a negative relationship between level of English proficiency and accounting anxiety, as well as the hypothesis that there is a positive relationship between accounting anxiety and anxiety with classes involving quantitative subject matter. Finally, the study found no significant differences in coping strategies by level of accounting anxiety.
Keywords: accounting; accounting anxiety; English as a second language (ESL); language anxiety; strategies regarding accounting anxiety

Introduction

Within the context of globalization, English has become the lingua franca of the business world, a transnational instrument vital in both a local and a global context (Buripakdi, 2014; Easthope, 1999). The study of language anxiety among students using English as a foreign language has been growing steadily for the past three decades (Horwitz, 1991; Kao & Craigie, 2013; Kondo & Yang, 2004; Mahmoodzadeh, 2012; Marwan, 2007; Ozturk & Gurbuz, 2014; Semmar, 2010; Wang, 2010). During this period, a body of work has also developed that focuses on anxiety suffered by students while studying accounting, although
none of these studies specifically examined a student body consisting primarily of ESL students (Ameen, Guffey, & Jackson, 2002; Borja, 2003; Buckhaults & Fisher, 2011; Chen, Hsu, & Chen, 2013; Clark & Schwartz, 1989; Dull, Schleifer, & McMillan, 2015; Duman, Apak, Yucenursen, & Peker, 2014; Ghaderi & Salehi, 2011; Malgwi, 2004; Uyar & Gungormus, 2011). This study sought to investigate the factors related to varying anxiety levels among students of accounting who are challenged with learning this quantitative subject and its nomenclature while using English as a second language. The first section of this paper reviews related material on accounting anxiety and proposes the hypotheses to be tested. The second section discusses the research methodology and the analysis of the data collected. The final section presents practical suggestions for minimizing anxiety among ESL students as they learn accounting, as well as recommendations for future research.

1. Literature Review

Academic anxiety, within a pedagogical context, is best seen as an emotional state that is not inherent but situational, and that can be "treated" by creating an effective association between teaching and receiving apprehension (Chu & Spires, 1991; Malgwi, 2004). Anxiety about learning accounting in higher education has been based on students' perceptions that the nomenclature of the subject is akin to learning a new language (Borja, 2003). Further, the knowledge base of the subject is perceived as extensive, and there is usually a corresponding apprehension that the time available to properly comprehend the principles and application of accounting is inadequate (Malgwi, 2004).
Previous studies suggest that differences in anxiety levels regarding the study of technical material may be related to variables such as gender (Todman, 2000), age, background experience or exposure to the subject being studied (Chu & Spires, 1991; McIlroy, Bunting, Tierney, & Gordon, 2001; Towell & Lauer, 2001), or nationality/ethnicity (Burkett, Compton, & Burkett, 2001; Rosen & Weil, 1995). Based on this, the following hypotheses were examined:

H1: There will be differences in accounting anxiety levels of ESL students in an international MBA program across different demographic groups.
H1a: There will be differences in accounting anxiety levels of ESL students in an international MBA program across age groups.
H1b: There will be differences in accounting anxiety levels of ESL students in an international MBA program across genders.
H1c: There will be differences in accounting anxiety levels of ESL students in an international MBA program across different ethnic groups.
H2: There will be differences in accounting anxiety levels of ESL students in an international MBA program for those students who took an undergraduate accounting course as opposed to those who did not.

Among ESL students, the level of anxiety in learning technical subjects and in communication apprehension has been tied to the degree of their proficiency in the use of the English language (Casado & Dereshiwsky, 2004; Horwitz, Horwitz, & Cope, 1986; Marwan, 2007; Onwuegbuzie, Bailey, & Daley, 1999; Pappamihiel, 2002). Therefore, H3 was proposed:

H3: There will be a negative relationship between level of English proficiency and accounting anxiety for ESL students enrolled in an international MBA program.

The degree of quantification in a course of study affects the level of anxiety experienced by students (Kao & Craigie, 2013; Kondo & Yang, 2004; Rosen & Weil, 1995; Todman, 2000). Kondo and Yang (2004) devised a typology of strategies (five strategy categories derived from 70 basic tactics) that ESL students use to cope with language anxiety. The strategies include peer seeking, positive thinking, preparation, and resignation. From this, the following hypotheses were proposed for testing:

H4: There will be a positive relationship between level of anxiety with classes involving quantitative subject matter and accounting anxiety for ESL students enrolled in an international MBA program.

H5: There will be differences in the accounting anxiety associated with the coping strategy selected by ESL students enrolled in an international MBA program.

2. Research Methodology and Findings

2.1 Sample

The population studied consisted of the 380 ESL students in the MBA program of an international university in Thailand; 57% were female and 43% male, and 64% were Thai and 36% non-Thai. As per Krejcie and Morgan's (1970) table of sample size determination, a sample of 190 was required for this study.
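The Krejcie and Morgan (1970) table is generated from a closed-form formula, s = χ²NP(1−P) / (d²(N−1) + χ²P(1−P)). As an illustrative aside (not part of the original study), the sketch below recomputes the required sample size for the 380-student population; the values of χ², P, and d are the conventional defaults assumed here.

```python
def krejcie_morgan(N, chi2=3.841, P=0.5, d=0.05):
    """Krejcie & Morgan (1970) required sample size for a finite
    population of N: chi2 = chi-square table value for 1 df at the
    95% confidence level, P = population proportion (0.5 maximizes
    the required size), d = margin of error."""
    s = chi2 * N * P * (1 - P) / (d ** 2 * (N - 1) + chi2 * P * (1 - P))
    return int(round(s))

print(krejcie_morgan(380))  # about 191, in line with the 190 used here
```

For N = 380 the formula yields roughly 191, which agrees with the published table and is close to the sample of 190 reported in the study.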
The sample consisted of 107 females (56% of the sample) and 83 males (44%). Within the sample, 105 (55.3%) were Thai, 16 (8.4%) were Thai of Chinese lineage (1st and 2nd generations), and 69 (36.3%) were non-Thai.

2.2 Instrument

A self-administered questionnaire was used with 15 accounting-focused, Likert-scale questions, many of which were modifications of items from the Horwitz et al. (1986) Foreign Language Classroom Anxiety Scale (FLCAS), a survey that has been used in several studies (Argaman & Abu-Rabia, 2002; Casado & Dereshiwsky, 2004; Marwan, 2007; Matsuda & Gobel, 2004; Semmar, 2010; Yashima, 2002). All scales had a Cronbach alpha internal reliability score of over .80, indicating consistency (Hair, Black, Babin, & Anderson, 2010; Sekaran, 2000; Tavakol &
Dennick, 2011). The questionnaire also tested coping strategies by incorporating the Foreign Language Anxiety Coping Scale, designed by Kondo and Yang (2004). This scale was assessed to have an alpha coefficient of .91 (Marwan, 2007), demonstrating high internal reliability. The questionnaire used a forced, 4-point Likert scale from "strongly agree" to "strongly disagree." A neutral option (e.g., "not sure") was deliberately avoided because of cultural traits within Thai society that inhibit the motivation to express personal opinion: a strong hierarchical system with high power distance and kreng jai, the culturally operationalized practice of avoiding the display of emotion or the assertion of one's opinion (Holmes, Tangtongtavy, & Tomizawa, 2003; Johnson & Morgan, 2016; Suntaree, 1990). The questionnaire was translated into Thai for Thai students (and translated back into English to assure accuracy) in order to maximize effective feedback (Behling & Law, 2000; Harkness, van de Vijver, & Mohler, 2002; Dörnyei & Taguchi, 2009). An English-language version was distributed to non-Thai ESL students. The questionnaire was administered over a six-month period by the same lecturer who taught the only accounting course (a core course) required by the university's MBA program, on the first day of each class starting during that period.

2.3 Findings

The first hypothesis proposed that there would be differences in accounting anxiety levels across groups defined by the demographic variables of age, gender and ethnicity. Descriptive statistics for the first of these three demographic factors are presented below in Table 1. As shown in the table, the mean accounting anxiety rating declines consistently across the four age groups.
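The Cronbach alpha figures cited above come from the standard formula α = k/(k−1) × (1 − Σσᵢ²/σₜ²), where k is the number of items, σᵢ² the variance of each item, and σₜ² the variance of respondents' total scores (Tavakol & Dennick, 2011). As a sketch only, the raw study responses are not available, so the data below are hypothetical 4-point Likert answers:

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# hypothetical responses of five students to a 4-item, 4-point Likert scale
demo = [[4, 4, 3, 4], [2, 2, 2, 3], [3, 3, 3, 3], [1, 2, 1, 1], [4, 3, 4, 4]]
print(round(cronbach_alpha(demo), 2))  # about 0.95 for this toy data
```

Values above .80, as reported for the scales used here, are conventionally taken to indicate acceptable internal consistency.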
Table 1: Descriptive Analysis of Accounting Anxiety Ratings by Age Group*

Age Group   N    Min  Max  M     SD
18-22       58   1    4    3.17  .920
23-25       48   1    4    2.94  .836
26-30       46   1    4    2.91  .784
30+         38   1    4    2.74  .724
Total       190            2.96  .838

*Where 1 = Strongly Disagree and 4 = Strongly Agree with the statement: "Taking an accounting class gives me high anxiety" (i.e., feeling of stress, fear).

In order to test whether this decline was statistically significant, a one-way ANOVA was performed to analyze differences in accounting anxiety ratings across the age groups. The results are displayed in Table 2 below. Results indicate no significant difference across the four age groups for accounting anxiety, F(3, 186) = 2.242, p = .085. Therefore, Hypothesis 1a is rejected.
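The F ratio and p-value for Hypothesis 1a can be recovered directly from the reported sums of squares (mean square = SS/df; F = MS-between / MS-within). The sketch below uses SciPy's F distribution for the p-value; SciPy is an assumed tool here, not the software the authors used:

```python
from scipy import stats

# Mean squares and F ratio recomputed from the reported sums of squares
ss_between, df_between = 4.633, 3
ss_within, df_within = 128.109, 186

ms_between = ss_between / df_between      # about 1.544
ms_within = ss_within / df_within         # about 0.689
F = ms_between / ms_within                # about 2.242
p = stats.f.sf(F, df_between, df_within)  # about .085, not significant

print(round(F, 3), round(p, 3))
```

Since p exceeds .05, the age-group differences fall short of conventional significance, matching the reported rejection of H1a.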
Table 2: One-Way Analysis of Variance of Accounting Anxiety Scores by Age Group

Source          df   SS       MS     F      p
Between Groups  3    4.633    1.544  2.242  .085
Within Groups   186  128.109  .689
Total           189  132.742

The second part of this hypothesis proposed differences in accounting anxiety across gender groups. Descriptive statistics by gender are presented below in Table 3. As shown in the table, the mean female accounting anxiety rating is slightly higher than the mean rating for males.

Table 3: Descriptive Analysis of Accounting Anxiety Ratings by Gender*

Gender  N    Min  Max  M     SD
Male    83   1    4    2.77  .860
Female  107  1    4    3.11  .793
Total   190            2.96  .838

*Where 1 = Strongly Disagree and 4 = Strongly Agree with the statement: "Taking an accounting class gives me high anxiety" (i.e., feeling of stress, fear).

In order to test whether this difference was significant, a t-test was conducted. Results of that test are provided in Table 4, below. The results indicate a significant difference in scores, with women reporting significantly higher levels of accounting anxiety (M = 3.11, SD = .793) than males (M = 2.77, SD = .860), t(188) = -2.834, p = .005. Therefore, Hypothesis 1b is supported.

Table 4: Comparison of Anxiety Ratings by Gender*

Gender  N    Mean  SD
Male    83   2.77  .860
Female  107  3.11  .793
Total   190  2.96  .838
t = -2.834, df = 188, p = .005, 95% CI [-.578, -.101]

*Where 1 = Strongly Disagree and 4 = Strongly Agree with the statement: "Taking an accounting class gives me high anxiety" (i.e., feeling of stress, fear).
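The gender comparison can be approximately reproduced from the summary statistics in Table 3 with a pooled-variance independent-samples t-test; small discrepancies from the reported t = -2.834 arise because the table's means and SDs are rounded. A sketch using SciPy (an assumed tool, not the authors' software):

```python
from scipy.stats import ttest_ind_from_stats

# Pooled-variance t-test recomputed from Table 3's summary statistics
t, p = ttest_ind_from_stats(mean1=2.77, std1=0.860, nobs1=83,   # males
                            mean2=3.11, std2=0.793, nobs2=107,  # females
                            equal_var=True)
print(round(t, 2), round(p, 3))  # close to t = -2.83, p = .005
```

The negative sign simply reflects the order of the groups: the female mean is subtracted from the male mean, and females report the higher anxiety.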
The third part of Hypothesis 1 proposed that there would be differences in accounting anxiety ratings across different ethnic groups. Table 5 provides the descriptive statistics associated with the three ethnic groups that were analyzed.

Table 5: Descriptive Analysis of Accounting Anxiety Ratings by Ethnic Group*

Ethnic Group             N    Min  Max  M     SD
Thai of Chinese lineage  18   2    4    3.13  .619
Thai                     106  1    4    3.09  .810
Not Thai                 69   1    4    2.74  .885
Total                    190            2.96  .838

*Where 1 = Strongly Disagree and 4 = Strongly Agree with the statement: "Taking an accounting class gives me high anxiety" (i.e., feeling of stress, fear).

Testing for significant differences in accounting anxiety ratings across the three ethnic groups was conducted with a one-way ANOVA. Findings of this analysis are presented in Table 6 below. As depicted in the table, there was a statistically significant difference between the ethnic groups as determined by the one-way ANOVA, F(2, 187) = 4.010, p = .020. Therefore, Hypothesis 1c is supported. A Tukey post hoc test was then performed, revealing that the Thai group had statistically significantly higher ratings of accounting anxiety than the Other Than Thai group (3.09 ± .810, p = .020). In sum, Hypothesis 1 proposed that there would be differences across the demographic groups of age, gender and ethnicity. Upon testing, the age portion of Hypothesis 1 was rejected, the gender differences hypothesis was supported, and differences in accounting anxiety were found to exist between the "Thai" and "Other Than Thai" groups.

Table 6: One-Way Analysis of Variance of Accounting Anxiety Scores by Ethnic Group

Source          df   SS       MS     F      p
Between Groups  2    5.459    2.730  4.010  .020
Within Groups   187  127.283  .681
Total           189  132.742

Hypothesis 2 proposed that there would be differences in accounting anxiety levels for those ESL students who had taken an undergraduate accounting course
and those who had not. Descriptive statistics for these two groups are presented in Table 7.

Table 7: Descriptive Analysis of Accounting Anxiety Ratings by Whether or Not Student Had an Undergraduate Accounting Class*

Undergrad Class  N    Min  Max  M     SD
Yes              96   1    4    2.79  .882
No               94   1    4    3.14  .756
Total            190            2.96  .838

*Where 1 = Strongly Disagree and 4 = Strongly Agree with the statement: "Taking an accounting class gives me high anxiety" (i.e., feeling of stress, fear).

As shown in the table, those students who reported having had an undergraduate class in accounting had lower mean accounting anxiety ratings. To test whether this difference was significant, a t-test was run on the accounting anxiety ratings of the two groups. The results of this test are reported below in Table 8. The results indicate a significant difference in scores, with ESL students who had taken an undergraduate accounting course reporting significantly lower levels of accounting anxiety (M = 2.79, SD = .882) than those who had not (M = 3.14, SD = .756), t(188) = -2.271, p = .004. Therefore, Hypothesis 2 is supported.

Table 8: Comparison of Anxiety Ratings by Whether or Not Student Had Taken an Undergraduate Accounting Class*

Undergrad Class  N    Mean  SD
Yes              96   2.79  .882
No               94   3.14  .756
Total            190  2.96  .838
t = -2.834, df = 188, p = .004, 95% CI [-.582, -.111]

*Where 1 = Strongly Disagree and 4 = Strongly Agree with the statement: "Taking an accounting class gives me high anxiety" (i.e., feeling of stress, fear).

The third hypothesis proposed that there is a significant negative relationship between English proficiency and accounting anxiety for ESL students. Self-reported English proficiency levels ranged from 1, "Bad" to 5, "Excellent" (N =
190; M = 3.54; SD = .801). Ratings of accounting anxiety ranged from 1, "Strongly Disagree" to 4, "Strongly Agree" with the statement "Taking an accounting class gives me high anxiety" (i.e., feeling of stress, fear) (N = 190; M = 2.96; SD = .838). A simple regression analysis showed that the level of English proficiency significantly affected ratings of accounting anxiety. Results of the analysis are presented in Table 9, below. The higher the English proficiency ratings, the lower the accounting anxiety ratings (t = -2.899; p = .004). Therefore, Hypothesis 3 is supported. However, R² = .043, so the predictive power of the model is quite low.

Table 9: Summary of the Simple Regression Analysis for English Proficiency and Accounting Anxiety

Variable             B      SE(B)  β      t       p
English Proficiency  -.216  .075   -.207  -2.899  .004
R² = .043

Hypothesis 4 proposed a positive relationship between anxiety with classes involving quantitative subject matter and accounting anxiety ratings. This was based on self-reported anxiety with quantitatively based classes, which ranged from 1, "Strongly Disagree" to 4, "Strongly Agree" with the statement, "I get anxiety from an accounting class because of the numbers involved" (N = 190; M = 2.75; SD = .913). A simple regression analysis was used to test this relationship. The results of this analysis are presented below in Table 10. These results indicate that as a person's anxiety with quantitatively based classes increases, so do their accounting anxiety ratings (t = 10.386; p < .001). Therefore, Hypothesis 4 is supported. R² = .365, so the independent variable (anxiety with quantitatively based classes) explains 36.5% of the variance in the dependent variable, accounting anxiety.
Table 10: Summary of the Simple Regression Analysis for Quantitative Class Anxiety and Accounting Anxiety

Variable                    B     SE(B)  β     t       p
Quantitative Class Anxiety  .555  .053   .604  10.386  < .001
R² = .365

The final hypothesis proposed that differences in accounting anxiety would be associated with the coping strategy employed by ESL students. As displayed in Table 11, the means do differ across the various strategies employed by the students. This is particularly true for "Positive Thinking" and "Peer Seeking," which fall at the lowest and highest levels of accounting anxiety, respectively. In order to determine whether these differences were significant, a one-way
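The two regression tables can be checked for internal consistency: in a one-predictor model, t equals B divided by SE(B), and R² equals the squared standardized coefficient β. The sketch below (an editorial aside, not part of the study) verifies both tables to within rounding:

```python
# Internal-consistency checks on the simple-regression tables:
# with one predictor, t = B / SE(B) and R-squared = beta**2.

# English proficiency predicting accounting anxiety (B=-.216, SE=.075)
t9 = -0.216 / 0.075
assert abs(t9 - (-2.88)) < 0.05            # paper reports t = -2.899
assert abs((-0.207) ** 2 - 0.043) < 0.001  # beta^2 matches R^2 = .043

# Quantitative-class anxiety predicting accounting anxiety (B=.555, SE=.053)
t10 = 0.555 / 0.053
assert abs(t10 - 10.4) < 0.2               # paper reports t = 10.386
assert abs(0.604 ** 2 - 0.365) < 0.001     # beta^2 matches R^2 = .365

print("regression tables internally consistent")
```

The small differences between the recomputed and reported t values reflect the rounding of B and SE(B) to three decimal places.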
ANOVA was performed to examine group differences in accounting anxiety scores. The results of this analysis are reported in Table 12.

Table 11: Descriptive Analysis of Accounting Anxiety Ratings by Coping Strategy*

Coping Strategy    N    Min  Max  M     SD
Preparation        100  1    4    2.97  .758
Relaxation         22   2    4    2.91  .921
Positive Thinking  47   1    4    2.79  .977
Peer Seeking       21   2    4    3.38  .669
Total              190            2.96  .838

*Where 1 = Strongly Disagree and 4 = Strongly Agree with the statement: "Taking an accounting class gives me high anxiety" (i.e., feeling of stress, fear).

Table 12: One-Way Analysis of Variance of Accounting Anxiety Scores by Coping Strategy

Source          df   SS       MS     F      p
Between Groups  3    5.189    1.730  2.522  .059
Within Groups   186  127.553  .686
Total           189  132.742

As shown in Table 12, the results indicate no significant difference across the four coping strategy groups for accounting anxiety, F(3, 186) = 2.522, p = .059. Therefore, Hypothesis 5 is rejected. A summary of the findings of this study is provided below in Table 13. Two of the demographic factors (gender and ethnicity) were associated with varying levels of accounting anxiety, but the hypothesized differences by age were rejected. Having taken an undergraduate course in accounting significantly reduced accounting anxiety. In addition, English proficiency was shown to be negatively related to accounting anxiety, and anxiety toward courses with quantitative content was positively related to accounting anxiety. Coping strategies employed by students were not associated with significantly different levels of accounting anxiety.
Table 13: Summary of Study Findings

Hypothesis                                                                                  Result
H1a  Differences in Accounting Anxiety by Age                                               Rejected
H1b  Differences in Accounting Anxiety by Gender                                            Supported
H1c  Differences in Accounting Anxiety by Ethnicity                                         Supported
H2   Differences in Accounting Anxiety by Undergraduate Accounting                          Supported
H3   Negative Relationship between English Proficiency and Accounting Anxiety               Supported
H4   Positive Relationship between Anxiety for Quantitative Courses and Accounting Anxiety  Supported
H5   Differences in Coping Strategy by Level of Accounting Anxiety                          Rejected

As a part of this study, the ESL students were asked to rate the various core subjects, and work on their thesis, in terms of the difficulty of learning the subject in English. Table 14 presents the results. As shown in the table, the subjects with primarily quantitative content (accounting, M = 2.09, SD = .733; finance, M = 2.22, SD = .751) were rated as more difficult than the subjects that are more theoretical in nature (marketing, M = 2.97, SD = .629; management, M = 2.94, SD = .672). The two subject areas that employ both quantitative analysis and theory (research methods, M = 2.55, SD = .780; thesis, M = 2.25, SD = .877) were rated in the middle in terms of difficulty, with thesis being closer to the quantitative subjects.

Table 14: Difficulty of Studying Subjects in English, Ratings by Percentage*

Subject           Very Difficult  Somewhat Difficult  Somewhat Easy  Very Easy  Mean  SD
Accounting        16.8            63.2                14.2           5.8        2.09  .733
Finance           12.6            59.5                21.1           6.8        2.22  .751
Marketing         1.1             17.9                63.7           17.4       2.97  .629
Research Methods  6.8             42.1                40.0           11.0       2.55  .780
Management        1.6             21.1                59.5           17.9       2.94  .672
Thesis            19.5            45.8                25.3           9.5        2.25  .877

*Where 1 = Very Difficult and 4 = Very Easy

3. Conclusion and Recommendations

Though the findings did not support a statistically significant difference in accounting anxiety by age, they did reveal significant differences for the factors of gender, ethnicity, and exposure to undergraduate accounting. The findings also
supported a negative relationship between levels of English proficiency and accounting anxiety, as well as a positive relationship between levels of accounting anxiety and the quantitative nature of business courses. Finally, the study did not find significant differences in levels of accounting anxiety by the coping strategies selected for such anxiety. These mixed results are consistent with the disparity among the studies discussed in the literature review. However, it is important to emphasize that this study differs from most of those studies in that it examines anxiety within the context of learning the subject of accounting while using English as a second language. Within that context, Franco (2016) suggested the following eight tactical components for lowering anxiety in general, and accounting anxiety in particular, within an ESL environment:

1. Initial assessment of students. This can be done in two ways: On the first day of class, the student fills out a simple one-page form that requests information on the student's knowledge of the subject matter but also asks the student to evaluate himself/herself as to English proficiency by way of a Likert scale. The form should also include questions like, "Who do you admire most?" Each student is then asked to introduce himself/herself to the class and verbally answer some of the questions on the form. This allows the teacher to make initial assessments of each student (written and oral presentation) as well as obtain a general assessment of the level of English proficiency of the group in order to adapt the course accordingly. Secondarily, the assessment form allows the instructor to determine any previous knowledge of accounting by the students as a result of undergraduate courses and/or work-related experience. This permits a better initial determination of the pace at which the accounting course should proceed.

2.
Vocabulary Buildup and Word "Dissection." Absorption of the nomenclature of accounting is difficult enough for those tackling the subject in their native language. In an ESL environment, it is vital that students be introduced to key words and phrases, even if this requires a discussion of such vocabulary before beginning the lecture. The lecturer should reinforce the meaning of key terms/phrases and provide a context within which they have meaning. Without a focus on building up the vocabulary for a particular lecture, there is a stronger likelihood that some students will not be able to follow the narrative. Frustration will set in as key terms, not properly absorbed by the student, become obstacles to comprehending the narrative and context of the discussion. The lecturer should write key words and phrases on the board, along with their definitions, and require the students to write them down. This creates a mental imprint, since students are more likely to remember a word if they physically see it and work with it. Grammatical analysis of a word can be performed by "dissecting" it and presenting its grammatical variations. For example, a word like "accountability" – defined as being held responsible for something – can be broken up from
its noun form to its adjective – "accountable" – and the verb phrase "to account for." This dissection, along with the lecturer's use of the word within a context and the solicited use of the word by students in a sentence or two, allows the students to "chew" on the word or phrase and reach an adequate comfort level of understanding.

3. Concept Checking. Concept checking involves asking students questions to test the depth of their knowledge of newly acquired information. These questions are sometimes difficult to construct, and some see their creation as more of an art form than a skill. Skill at checking concepts is developed, in part, by anticipating beforehand the concept checking questions you might use. However, it is primarily developed through practice and experience – "thinking on your feet." Concept checking should be used throughout the lecture. In some situations, you can repeat a concept checking question that was successfully used in the same lecture in the past. However, the teacher will have to be conscious of coming up with new and pertinent concept checking questions within the serendipitous dynamics of the classroom discussion. This is an art form more than anything else, and the interaction of concept checking allows for a good balance between teacher talking time and student talking time. Concept checking is not open questioning. Avoid questions such as, "Do you understand?" that can merely be answered with "yes" or "no." If your narrative flow causes you to create a question that can be answered in that way, follow up with "why?" "Marry" students in the class to come up with financial solutions to a marriage or business partnership problem. This personalizes the class analysis and gets students to interact with each other. The teacher should avoid introducing unfamiliar vocabulary when working through concept checking.
This is part of a self-imposed discipline that is always conscious of the ESL experience and the appropriate implementation of knowledge within that setting.

4. Eliciting. Eliciting can be simply defined as asking for answers (information) instead of just giving out the information. In a learner-centered classroom this provides for constant interaction. Eliciting should be performed by choosing students – not by depending on volunteers (i.e., the "alpha" few who will dominate classroom discussions if the teacher allows it). Choosing students also keeps all students alert ("on their toes") and avoids the awkward situation where a question asked of the entire class is met with silence. Even if the student chosen by the lecturer does not have an answer, he/she will usually provide some response that the teacher can build on. Letting everyone know that they can and will be called on also helps to identify students who are falling behind ("stragglers"). Pace yourself in your elicitations. Avoid repetition, condescension, and the need to turn everything into a question. Avoid asking questions
about material that has already been covered unless you are conducting a review for an examination.

5. Pacing. Even while abiding by the institution's guidelines, rules, and expectations, the lecturer remains the "master of his domain" within his/her classroom. Lectures, homework, assignments, projects, and examinations are all the creations of the teacher. Especially in the ESL environment, the teacher must recognize the need to alter the pace of a lecture and even the pace of the entire course. Slow down when red flags are raised and bells are going off. This is particularly true for subject matter that is built in layers (like accounting), where the next layer requires a fundamental understanding of the prior layer(s) of knowledge. If the lecturer keeps moving just to follow a schedule of his own design (e.g., a stated calendar on the syllabus), the result will be poor performances on the midterm exam. At that point the lecturer will have to go "back to basics" or risk moving forward and witnessing poor performances again, this time on the final exam. Almost nothing is more nonsensical for a lecturer than shackling himself/herself to rigid or impractical time constraints that were self-created and self-imposed.

6. Monitoring. In an ESL, learner-centered environment the interaction should be not only verbal but also physical. The lecturer should not hide behind a podium or desk. Instead, the lecturer should be moving around to keep the students alert and away from their phones or Facebook on their laptops. Moving amongst the students also allows for better eliciting, "marrying" students, and concept checking. When students are performing an in-class exercise (e.g., in accounting), the teacher should move from one student to the next to see if the student is stuck on a word or a concept. Sometimes they are stuck on a verb or some other word within an explanatory or instructional text.
An explanation or clarification at that moment is crucial. Otherwise, the student gets stuck and needlessly frustrated at the very start and gives up on solving the problem, or resorts to looking to the student next to him/her for the answer. Sometimes a student who is stuck asks another student for an explanation. When a teacher sees this, he/she should step in, provide the explanation, and offer further guidance.

7. Use of Paper. ESL students need to see physical words, not just hear them. They need a physical imprint. PowerPoint slides have limited impact unless the students have the physical text of the slides in front of them. If the lecturer gives handouts of core material (material that will be tested), the student has the pertinent text and can make notes, including the meaning of a word in his/her native language. For test preparation, ESL students tend to rely on paper, since they are looking not only at concepts but also at the specific words that constitute the definition or explanation of each concept.
8. Feedback. It is nonsensical to wait until the student evaluations to obtain feedback on how well ESL students are coping with their English comprehension in a business course. Feedback is best solicited from the first day of the course, on an individual basis, when the student feels he/she can be more candid or less embarrassed (i.e., no disclosure in public). Feedback can be obtained before and after class, during breaks, by email, and at office hours. The teacher can also specifically approach students that he/she feels are having trouble. Individual feedback, in the aggregate, can help the teacher determine the overall situation in the class and who the "stragglers" are.

The continuation of globalization guarantees the internationalization of higher education business studies using English as the commercial lingua franca. This study focused specifically on accounting anxiety experienced by ESL students. A body of literature needs to be created to specifically address accounting anxiety within the context of ESL education.

References

Ameen, E. C., Guffey, D. M., & Jackson, C. (2002). Evidence of teaching anxiety among educators. Journal of Education for Business, September/October, 16-22.
Argaman, O., & Abu-Rabia, S. (2002). The influence of language anxiety on English reading and writing tasks among Hebrew speakers. Language, Culture, and Curriculum, 15(2), 143-160.
Behling, O., & Law, K. S. (2000). Translating questionnaires and other research instruments: Problems and solutions. Thousand Oaks, CA: SAGE Publications, Inc.
Borja, P. M. (2003). So you've been asked to teach principles of accounting. Business Education Forum, 58(2), 30-32.
Buckhaults, J., & Fisher, D. (2011). Trends in accounting education: Decreasing accounting anxiety and promoting new methods. Journal of Education for Business, 86, 31-35.
Buripakdi, A. (2014).
Hegemonic English, standard Thai, and narratives of the subaltern in Thailand. In P. Liamputtong (Ed.), Contemporary socio-cultural and political perspectives in Thailand (pp. 95-109). Dordrecht, Netherlands: Springer.
Burkett, W. H., Compton, D. M., & Burkett, G. G. (2001). An examination of computer attitudes, anxieties, and aversions among diverse college populations: Issues central to understanding information sciences in the new millennium. Informing Science, 4(3), 77-85.
Casado, M. A., & Dereshiwsky, M. I. (2004). Effect of educational strategies on anxiety in the second language. College Student Journal, 38(1), 23-35.
Chen, B. H., Hsu, M., & Chen, M. (2013). The relationship between learning attitude and anxiety in accounting classes: The case of hospitality management university students in Taiwan. Quality & Quantity, 47, 2815-2827.
Chu, P. C., & Spires, E. E. (1991). Validating the computer anxiety rating scale: Effects of cognitive style and computer courses on computer anxiety. Computers in Human Behavior, 7(1/2), 7-21.
Clark, C. E., & Schwartz, B. N. (1989). Accounting anxiety: An experiment to determine the effects of an intervention on anxiety levels and achievement of introductory accounting students. Journal of Accounting Education, 7, 149-169.
Dörnyei, Z., & Taguchi, T. (2009). Questionnaires in second language research: Construction, administration, and processing (2nd ed.). London: Routledge.
Dull, R. B., Schleifer, L. F., & McMillan, J. J. (2015). Achievement goal theory: The relationship of accounting students' goal orientations with self-efficacy, anxiety, and achievement. Accounting Education: An International Journal, 24(2), 152-174.
Duman, H., Apak, I., Yucenursen, M., & Peker, A. A. (2015). Determining the anxieties of accounting education students: A sample of Aksaray University. Procedia – Social and Behavioral Sciences, 174, 1834-1840.
Easthope, A. (1999). Englishness and national culture. London: Routledge.
Franco, A. (2016). MBA instructor's guide for teaching business to ESL students. Unpublished manuscript, Bangkok, Thailand.
Ghaderi, A. R., & Salehi, M. (2011). A study of the level of self-efficacy, depression and anxiety between accounting and management students: Iranian evidence. World Applied Sciences Journal, 12(8), 1299-1306.
Hair, J. F., Jr., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis: A global perspective (7th ed.). Saddle River, NJ: Prentice-Hall International.
Harkness, J. A., van de Vijver, F. J. R., & Mohler, P. P. (2002). Cross-cultural survey methods. Hoboken, NJ: Wiley-Interscience.
Holmes, H., Tangtongtavy, S., & Tomizawa, R. (2003). Working with the Thais: A guide to managing in Thailand (2nd ed.). Bangkok: White Lotus Press.
Horwitz, E. (1991). Preliminary evidence for the reliability and validity of a foreign language anxiety scale. In E. K. Horwitz & D. J. Young (Eds.), Language anxiety: From theory and research to classroom implications. Englewood Cliffs, NJ: Prentice Hall.
Horwitz, E. K., Horwitz, M. B., & Cope, J. A. (1986). Foreign language classroom anxiety. The Modern Language Journal, 70(2), 125-132.
Johnson, R. L., & Morgan, G. B. (2016). Survey scales: A guide to development, analysis, and reporting. New York: The Guilford Press.
Kao, P., & Craigie, P. (2013).
Coping strategies of Taiwanese university students as predictors of English language learning anxiety. Social Behavior and Personality, 41(3), 411-420.
Kondo, D. S., & Yang, Y.-L. (2004). Strategies for coping with language anxiety: The case of students of English in Japan. ELT Journal, 58(3), 258-265.
Krejcie, R. V., & Morgan, D. (1970). Determination of sample size for research activities. Educational and Psychological Measurement, 30, 607-610.
Mahmoodzadeh, M. (2012). Investigating foreign language speaking anxiety within the EFL learner's interlanguage system: The case of Iranian learners. Journal of Language Teaching and Research, 3(3), 466-476.
Malgwi, C. A. (2004). Determinants of accounting anxiety in business students. Journal of College Teaching and Learning, 1(2), 81-94.
Marwan, A. (2007). Investigating students' foreign language anxiety. Malaysian Journal of ELT Research, 3, 37-55.
Matsuda, S., & Gobel, P. (2004). Anxiety and predictors of performance in the foreign language classroom. System, 32, 21-36.
McIlroy, D., Bunting, B., Tierney, K., & Gordon, M. (2001). The relation of gender and background experience to self-reported computing anxieties and cognitions. Computers in Human Behavior, 17, 21-33.
Onwuegbuzie, A., Bailey, P., & Daley, C. E. (1999). Factors associated with foreign language anxiety. Applied Socio Linguistics, 20(2), 218-239.
Ozturk, G., & Gurbuz, N. (2014). Speaking anxiety among Turkish EFL learners: The case at a state university. Journal of Language and Linguistic Studies, 10(1), 1-17.
Pappamihiel, N. E. (2002). English as a second language students and English language anxiety issues in the mainstream classroom. ProQuest Education Journal, 36(3), 327-355.
  • 40. 37 © 2017 The authors and IJLTER.ORG. All rights reserved. Rosen, L. D., & Weil, M. M. (1995). Computer anxiety: A cross-cultural comparison of university students in ten countries. Computers in Human Behavior 11(1), 45-64. Sekaran, U. (2000). Research methods for business: A skill building approach (4th ed.). NY: John Wiley & Sons, Inc. Semmar, Y. (2010). First year university students and language anxiety: Insights into the English version of the foreign language classroom anxiety scale. The International Journal of Learning, 17(1), 81-93. Suntaree, K. (1990). Psychology of the Thai people: Values and behavioral patterns. Bangkok: Research institute of Development Administration. Tavakol, M., & Dennick, R. (2011). Making sense of Cronbach’s alpha. International Journal of Medical Education 2, 53-55. Todman, J. (2000). Gender differences in computer anxiety among university entrants since 1992. Computers & Education 34, 27-35. Towell, E. R., & Lauer, J. (2001). Personality differences and computer related stress in business students. Mid-American Journal of Business 16(1), 69-75. Uyar, A., & Gungormus, A. H. (2011). Factors associated with student performance in financial accounting course. European Journal of Economic and Political Studies 2, 139-154. Wang, S. (2010). An experimental study of Chinese English major students’ listening anxiety of classroom learning activity at the university level. Journal of Language Teaching and Research, 1(5), 562-568. Yashima, T. (2002). Willingness to communicate in a second language. The Japanese EFL context. Modern Language Journal 86(1), 54-66.
© 2017 The author and IJLTER.ORG. All rights reserved.

International Journal of Learning, Teaching and Educational Research
Vol. 16, No. 1, pp. 38-56, January 2017

(Mis)Reading the Classroom: A Two-Act "Play" on the Conflicting Roles in Student Teaching

Christi Edge, Ph.D.
Northern Michigan University
Marquette, Michigan, United States of America

Abstract. This case study examined concentric and reciprocal notions of reading—that of high school students, a pre-service teacher, and a teacher educator. An intern charged with teaching students to read, interact with, and compose texts in an English/language arts classroom constructed her role in the classroom based on her reading the "text" of her internship experiences, relationships, and responsibilities. Using interviews and observations, a teacher educator read and interpreted the classroom "text" the pre-service teacher "composed" during her internship and then constructed a two-act "play" which details the conflict in the intern's enacting the dual role of student-teacher and her subsequent reading of the classroom "text" from her stance as student-teacher. Concepts of classroom literacy for teachers and teacher educators are considered.
Keywords: teacher education; reading classroom text; classroom literacy; student teaching internship; stance

Introduction

In light of growing pedagogical, professional, and public awareness that twenty-first century literacy involves more than just printed words on a page and that specific literacies are acquired throughout the duration of an individual's education (Barton, 2000; Biancarosa & Snow, 2006; Buehl, 2014; Clark & Flores, 2007; Draper, 2011; Gee, 2012; International Reading Association, 2012; Langer, 1987; Lankshear & Knobel, 2007; Maclellan, 2008; National Council of Teachers of English, 2007, 2008; National Center for Education Statistics [NCES], 2006, 2007; Rogers, 2000), it is time to consider the professional literacy needs of the very individuals to whom we look to educate our children and our adolescents (International Literacy Association, 2015).

Review of the Literature

Lad Tobin (2004) implies a connection between the disciplinary focus of studying texts and the pedagogical importance of studying classrooms as text by asserting that "teaching is a way of reading and writing. Students learn to teach through, first, learning to read the classroom and, second, learning to write themselves within that classroom" (p. 129). A teacher is simultaneously a reader and a writer of her classroom. Like readers whose meaning making is framed by